It is. I generate the Windows executables for a program by cross-compiling Rust on Linux, then test with Wine. The Rust crates are cross-platform enough that I don't have to special-case platforms. This is easier than having a Windows machine.
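Roughly this setup, for anyone curious (the binary name below is a placeholder, and it assumes the mingw-w64 cross toolchain is installed):

    rustup target add x86_64-pc-windows-gnu
    cargo build --release --target x86_64-pc-windows-gnu
    wine target/x86_64-pc-windows-gnu/release/myapp.exe   # smoke-test under Wine

with the linker picked in .cargo/config.toml:

    [target.x86_64-pc-windows-gnu]
    linker = "x86_64-w64-mingw32-gcc"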
The only current headache is that there's no pure Rust bundler to make ".msi" or ".msix" installer files.
> The only current headache is that there's no pure Rust bundler to make ".msi" or ".msix" installer files.
I have set up a toolchain based on Wix, running via Mono and Wine on Linux, for the purpose of packaging some rather complex .msi files in a commercial software project. It has been running on the Linux-based CI servers of that project for 10 years straight now, the only change being that the entire chain was packaged into a Docker container a few years ago when that became fashionable.
WiX is great. I have been using it for over a decade to package an internal application. It's all driven by a simple text file, which I preferred over Microsoft's other packaging offerings.
It's not. It's an open source project (aka owned by no one) that started at Microsoft. The person who created WiX left Microsoft to create Firegiant, which is the primary contributor to its continued development and maintenance.
Open source does not mean "owned by no one". It means the owner or owners have licensed their intellectual property to you under the terms of an OSS license.
I only know about other proprietary non-MS solutions like InstallShield, which also generate .msi files but use their own proprietary way to define the installation process. They have the benefit/disadvantage (depending on what you want to do specifically) of sitting at a higher abstraction level than WiX, which is more or less a direct mapping of the internal Windows Installer data structures.
> I have set up a toolchain based on Wix, running via Mono and Wine on Linux, for the purpose of packaging some rather complex .msi files in a commercial software project.
That’s exactly how I rigged up building our MSIs at Tailscale.
NSIS (Nullsoft Scriptable Install System)[1] can be compiled[2] for Linux if that's any help. I use that to prepare the install .exe package for a Java based program for one of my clients.
I tried doing that, and then found that my test suite passes on real Windows but fails on Wine, because it uses APIs which Wine doesn't implement correctly (or at all).
It's rather impressive that the whole Rend3->WGPU->Vulkan chain for 3D graphics works under Wine, because that's all bleeding-edge stuff. It's only that full screen won't work. Wine reports "fullscreen true stub!", so it's something not implemented yet, rather than something broken.
I also cross compile and remote debug windows software on Linux (C and Rust programs). Using gdbserver --multi (extended remote mode) is quite comfortable.
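A minimal sketch of that flow, assuming a Windows build of gdbserver on the target (real hardware or Wine) and a cross gdb on the Linux host; host names and paths here are made up:

    # target side
    gdbserver --multi :2345

    # host side
    x86_64-w64-mingw32-gdb target/x86_64-pc-windows-gnu/debug/app.exe
    (gdb) target extended-remote winbox:2345
    (gdb) set remote exec-file C:/work/app.exe
    (gdb) run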
The biggest advantage is not having to use different tools for different target platforms.
Unfortunately macOS is a more difficult target than Windows. If anyone has tips for compiling and debugging software for macOS from a Linux host, please do share.
There is the Julia package https://github.com/JuliaPackaging/BinaryBuilder.jl which creates an environment that fakes being another platform, but with the correct compilers and SDKs. It's used to build all the binary dependencies for the Julia ecosystem.
For Ardour we use QEMU VMs, running GUI-free for non-interactive builds, and with a GUI for ... interaction. Easy to create, and as long as you have lots of memory and cores, nice and responsive/fast. Set a startup item in the non-interactive VMs so that a minute or two after they come up, they start the build. In our case, they upload to ardour.org when complete. The interactive ones are just .. well, they're just Macs running in their own window, do what you want with them.
Cargo-bundle was supposed to do that, but only the Mac part was completed. Nobody ever did the .msi output. I've been trying to get someone interested in doing that. That task needs someone very familiar with the Microsoft ecosystem, which I am not. Discussion in Rust tools forum here.[1]
I wonder if Inno Setup could be a good point of reference for anyone interested. It is written in Delphi, though. There is also this guide I found whilst trying to figure out if you could run Inno Setup on Linux:
I looked at that crate.
That lets you read and write .msi files, but those are just containers. It doesn't help you set up the rather complicated contents required. Someone who's into the Windows ecosystem could probably use it to do the limited things cargo-bundle does. The neat thing about cargo-bundle is that it only needs the info from the Cargo.toml file to drive the bundling process. Most Windows installer builders involve manually constructing Windows-specific XML files.
Ah, I see. Honestly, if I knew Rust better I wouldn't mind taking a stab at the project, but I am cozy with dotnet currently. I love building systems tools but very little work I apply for does just that. It is a shame companies don't invest more into R&D.
The Rust part of this isn't that hard. cargo-bundle already has a place where the .msi output generator is supposed to go. The reading of the Cargo.toml file and the writing in .msi format are already written. It's just generating the Microsoft-specific content for the .msi file that's hard. That takes Microsoft platform expertise.
Counterpoint for this rant (showing it's again not very objective, and just what your or your environment's expectations are): it's easier to cross-compile binaries for Linux (using Clang and a sysroot on Windows) than it is to natively compile on Linux (not using a sysroot, as that is the 'default' flow there) if your environment is mostly Windows already. We do this for one of our products, in fact.
The post, meanwhile, seems to be part open source policy (nasty, but not a technical issue - it also includes a fact that was new to me and hard to find via a Google or DDG search, about the VS gallery endpoints having led to legal threats), part issues induced by 'weird' modern languages not caring to support Windows (as expected?), part Git for Windows not bothering to handle symbolic links cleanly (plus the weird admin-only default that remained from Vista), and part concerns that would be fine too if people spread the proper way of doing things (curl.exe is bundled by default for downloads, and long file names are 'weird' but less broken than claimed here) instead of just ranting... but it's not really a coherent whole.
My counterpoint is simpler: I use some of these tools (like VS Code) because they're better, not because they're open source.
Also, I don't think Pylance or whatever core extensions being closed source contradicts the fact that the majority of VS Code (or VSCodium) is open source. And I don't see anything morally wrong with them keeping some of their competitive products closed source.
It may not be intuitive, but tools tend to be better because they are open source.
My personal problem with Pylance or the ssh tools is that they do not work with VS Code-based forks. That means that VS Code is not as open source as it tries to look. And this is suspicious to me.
The ssh tool is incredibly slow and resource intensive and defaults to losing unsaved work on disconnect. An open source version would open up the possibility of a user fixing some of these issues.
> tools tend to be better because they are open source
This is not true in general. It isn't even true for programming tools. Paid/non-OSS IDEs (JetBrains suite, Visual Studio) are often better in many ways than their FOSS equivalents (Eclipse, Qt Creator, etc).
Other examples: office programs (MS Office vs LibreOffice), digital AV workstations (Adobe suite, Da Vinci Resolve vs Kdenlive, ShotCut, etc), 3D editors (Blender is the one exception here, but Maya is still damn good)...
I didn't mean that all OSS tools are better than proprietary/commercial ones; that is obviously not true. I mean that making a tool OSS usually makes it better.
Only if you want to be trendchasing rather than letting backwards compatibility take care of itself...
I'm a native Win32 developer, have been one for a few decades, and know quite a few others still using MSVC6 because it's fast and enough for what they do. Takes <250MB of disk space, and the self-contained binaries it generates will work on any version of Windows starting at Win95.
Long file paths: Azure, OpenSearch, and ~90 other open source projects have to document how to enable long file paths on Windows because the default is a ~260-character (MAX_PATH) limit.
Personally, I think 260 is long enough and plenty to work with, while at the same time discouraging the ridiculous verboseness that seems to have crept into "modern" software. Then again, I stay away from .NET, VSCode, and the like. I am reminded of this post by Raymond Chen:
My issue with the 260 character limit is that it can easily be reached through normal usage, and can lead to some really strange behavior.
* File cannot be moved into a subfolder.
* Folder can be renamed, after which a file contained within it is inaccessible.
* File on a network drive may be accessible by some users, but not others. For example, one user may access the drive with the longer network name, while another has mapped the network drive as Z:\. A filename may exceed the 260 character limit for the first user, but not for the second.
* Cannot delete a folder, because it contains a file with a long name.
My experience may be biased due to using a program that recorded metadata in a filename, taking the majority of the 260 characters available in just the filename, but with the number of failure modes, I still don't think it is reasonable to have such a small limit.
Exactly. I believe NTFS has supported ~32,767-character paths since, well, forever? But last I checked, even simple apps like Win10 Notepad will fail to open them.
What is the point of the registry/gpedit setting to enable long paths system-wide -- when so much of Windows doesn't support it? File Explorer is really showing its age.
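For completeness, the long-standing escape hatch on the API side: prefixing an absolute path with \\?\ lifts the 260-character MAX_PATH limit for the wide-character functions, but it requires backslashes, an absolute path, and programs that actually opt in (the path below is made up):

    /* Sketch: open a file deeper than MAX_PATH via the \\?\ prefix. */
    #include <windows.h>

    HANDLE open_deep_file(void)
    {
        return CreateFileW(L"\\\\?\\C:\\projects\\some\\very\\deep\\tree\\notes.txt",
                           GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    }

Which is exactly why so much software, File Explorer included, still breaks: every program has to do this (or declare longPathAware in its manifest once the registry switch is on) for long paths to actually work.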
Yikes, 260-character path limits can be a real pain. Please support sensible path names, or at least tell me why you can't read or write to a file. Often you just get a "can't write to that file" error message, or worse, "forbidden", so you spend an hour trying to debug the mess that is folder permissions on Windows.
I'm not going to convince you to change your toolchain, userbinator, but for the sake of the discussion: once you have multiple projects going on, with multiple components, and those components have a small directory structure themselves, you can easily reach 260 characters. Add to that, if there's data coming from another org, a long file name can be very helpful to keep track of what it is (and don't forget 10-12 characters for a date!). And finally, the nail in the coffin: most users don't think about path names. I struggle to get people to not put periods in their filenames, which messes with some tooling; how am I going to convince the guy in finance who gave me this data that he should use short file names? Should I modify the file name and make it untraceable?
ETA: The "if you have to ask you've messed up", I don't ask, I expect and then get annoyed it broke. I had 10,000 files collected into a folder. Why can every other program tell me the list of files in an instant but windows explorer crashes (the whole desktop environment) because I opened that folder to see it. I'm not meant to do that? Then why can the kernel, the disk, the file system, and all other programs handle it with ease?
The 260 path limit is a pain for organizing media. It's not surprising most applications/games store their media in giant blobs instead of just individual files. This makes software updates a pain since it requires users to download the entire blob if you can't diff the blobs or if the diff corrupts the blob due to mismatched implementation versions.
Mostly, blob-based game assets are a system motivated by consoles, then reused on PC to keep the build process consistent.
You'll see older PC-centric game engines use loose files. With the advent of SSDs on consoles we might see a return to the bunch-of-files approach, as it keeps patches smaller.
Steam and EGS updating handles binary diffing wonderfully. While consoles, not so much. Specifics are NDA of course but I bet any game dev reading will know who I am talking about when I say: "platform x has a horrible diffing algo yet requires approvals for updates over size Y". If I could ship a loose filesystem and get reasonable load times I would just for the update package advantage.
> Mostly, blob-based game assets are a system motivated by consoles, then reused on PC to keep the build process consistent.
No, packing individual files into archives makes sense on PCs too, due to the native filesystem usually having a much bigger per-file storage overhead as well as non-negligible open() times - the second is especially true for Windows with "Anti Virus" software installed.
> You'll see older PC-centric game engines use loose files.
Some maybe, but packing game assets into archives is as old as Doom and has been the norm all this time.
> Steam and EGS updating handles binary diffing wonderfully.
Steam handles binary diffing now, but it was not that long ago that it re-downloaded the whole changed file.
Not enough. Between path limits and "this file is in use by another program" nonsense I regularly ran into while developing on Windows, I switched away and never looked back.
That cuts both ways: a Windows developer trying to port a project to Linux would be justified in concluding the filesystem locking semantics are horribly broken or non-existent.
I came from Windows and found fewer problems when working on Linux. The problem with deleting files is something everybody will run into at some point. I don't think what you describe is as common. I have never had it, or heard it described as a problem, when working in an organization which made software running on Linux and Windows.
The path separator, drive letter and text file line endings get me. But back when I switched to Linux for development it was the very slow file searches on Windows. I am fine with either macOS or Linux. Not windows.
It links dynamically to msvcrt.dll, which is updated as part of the OS. I believe it still gets updates as long as they don't break the ABI. Modern VC++ links with the Universal CRT, which is an independently updated CRT distribution.
A quick Google check seems to yield only one CVE in msvcrt.dll, in 2012; I wouldn't call that "often", especially for a DLL which is likely among the top 5 most used in the world.
Both static and dynamic linking to a newer runtime (as they are versioned) will break backwards compatibility. For newer OS features you need, you can always GetProcAddress() the needed functions at runtime instead. The real downside to staying on VC6 is the compiler's ancient optimizations, or lack thereof.
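A typical shape of that pattern, for reference (SetThreadDescription is a real kernel32 export that only appeared in Windows 10 1607; the rest is a sketch):

    /* Call a newer API only if the running OS actually has it, so the same
       binary still loads on older Windows versions. */
    #include <windows.h>

    typedef HRESULT (WINAPI *SetThreadDescription_t)(HANDLE, PCWSTR);

    static void name_current_thread(void)
    {
        HMODULE k32 = GetModuleHandleW(L"kernel32.dll");
        if (!k32) return;
        SetThreadDescription_t pSetThreadDescription =
            (SetThreadDescription_t)GetProcAddress(k32, "SetThreadDescription");
        if (pSetThreadDescription)      /* NULL on anything older than Win10 1607 */
            pSetThreadDescription(GetCurrentThread(), L"worker");
        /* otherwise just skip the optional feature */
    }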
Microsoft has never really "got" the idea of URLs. The only links to Microsoft stuff you should expect to still work properly in a year from learning them are in the form of an aka.ms alias which is publicly advertised.
Anything else, maybe it's a brief article explaining a technology you care about, next week somebody replaces that with a video blog of some people who don't really know much about it but are sure they'll become world experts over the next months and years - and you're invited on their journey, then the blog becomes a wiki, then it becomes an exciting new user-led forum, and then... it's a 404 because they were re-assigned to a different project and all knowledge was destroyed.
Raymond used to be just one of hundreds, maybe thousands of Microsoft bloggers. Then one day Microsoft decided blogging was not on brand and it just blowtorched all the blogs except a few very popular ones like The Old New Thing.
That's Microsoft being (the new, not the old) Microsoft... they seriously fucked up the MSDN documentation when it was migrated to docs.microsoft.com, so I'm not surprised that a similar thing happened to the blogs:
They knew they would, and they advertised that fact, and made blog and content owners migrate their stuff if they wanted it kept. Many chose not to. Those are the people you should be angry with.
Many of the people were no longer at Microsoft when the migration happened, but their blogs were no less useful for that.
And some weren't even alive. Michael Kaplan and his amazing blog about internationalization and Unicode comes to mind - thankfully, other people have archived it: http://archives.miloush.net/michkap/archive/
1) the URLs are simply missing a "b/" in the path to map to saved copies in the Web Archive, and 2) the comment system switched to requiring JS around 2011, so only comments from before then were archived properly (>.>), but the threads are all from 2003-2006 so theoretically there are probably no new comments anyway.
> Only if you want to be trendchasing rather than letting backwards compatibility take care of itself...
> I'm a native Win32 developer, have been one for a few decades, and know quite a few others still using MSVC6 because it's fast and enough for what they do. Takes <250MB of disk space, and the self-contained binaries it generates will work on any version of Windows starting at Win95.
I share your view about the unmatched backwards compatibility of Win32 binaries, but I wouldn't let a 24-year-old compiler like MSVC6 near any new project.
We are talking about a compiler here that doesn't even support the C++98 standard, let alone all the basic features for writing safe software (stack cookies, _s APIs, smart pointers - just to name a few).
When I needed reliable self-contained binaries and backwards compatibility, I switched to VS2019's clang-cl compiler and ported their libc++. Together with winpthreads from the mingw-w64 project, this enabled me to write software using even C++20 features, but still maintain compatibility down to Windows NT 4 from 1996.
If you're interested, it's all documented here: https://colinfinck.de/posts/targeting-25-years-of-windows-wi...
That's really cool - I was wondering how hard it would be to make libc++ play nice with XP. Currently I still use MSVC 2017 for those builds, but that won't get newer language features and I am planning to move to Clang anyway. Do you know of any effort to go even further back and support pre-NT Windows with a modern C++ compiler?
It started after they ported the UI to WPF. Compare VS2008 and VS2010 if you have the time. I remember using both on a cheap laptop when VS2010 first came out, and there was simply no comparison.
> Only if you want to be trendchasing [...]
> still using MSVC6
That's some strange logic - you expect someone wanting to compile things for Windows to somehow discover a 24-year-old compiler using some older version of C/C++, and to conclude that, despite modern norms, it still works, is still legally available, and still produces working binaries.
And if they don't somehow glean all of the above, you say they're trendchasing, rather than simply not knowing about something two decades obscure and possibly illegal, and therefore not using it.
The post mentions CRLF line endings; I don't understand why anyone should bother with them. Use \n only - it works everywhere, including Windows. Any modern IDE can handle Unix line endings. The same goes for file paths: use forward slashes (/) everywhere and don't waste your time supporting DOS-era standards.
I think the core.autocrlf option in Git is more harmful than useful (for example, it can change file hashes and break something) and should be removed. An option to warn about unwanted \r characters on commit might be useful though.
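The way I'd rather see repos handle it (a sketch; adjust per project) is to pin it in .gitattributes so every clone agrees, regardless of anyone's core.autocrlf:

    # .gitattributes
    * text=auto eol=lf
    # opt specific files back into CRLF only when a Windows tool truly needs it
    *.bat text eol=crlf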
Good post (I couldn't quite follow the open source angle tbh), but for the "why doesn't Developer Mode make Windows behave more like Linux" part: this would probably disgruntle a lot of 'native' Windows developers.
Windows was always its separate island, and at least in the past it had a lot of developer mindshare (so more like a whole continent than an island), and from the point of view of those Windows devs, Linux is the odd one out ;)
One common solution is to use scripting languages for build automation that try to hide most of the differences (e.g. Python or Node.js instead of Powershell vs Batch vs Bash), and most importantly there's simply no way around testing on at least Windows, macOS and Linux.
Mozilla’s automated builds of Firefox for Windows, macOS, Linux, and Android are all cross-compiled on Linux VMs. Cross-compiling is faster and cheaper, especially because Windows’ file I/O and process launching is so slow.
There's some documentation on this in some WSL issues on GitHub, but it's not just NTFS. It's stuff like the broader filesystem architecture's inclusion of pluggable 'filters' (kinda neat, but each layer incurs a performance cost), or the way commands on other operating systems depend on caches for certain syscalls that Windows doesn't keep (nor anything equivalent).
This one? I remember it as an earnest description of the difficulties the WSL team had with the speed of NTFS - and I think it was one of the reasons for the switch to virtualisation in WSL2.
My takeaway from that comment is that there are some important performance issues that apply generally to all filesystems on Windows. Maybe we can partially test whether that's the case by playing with WSL1 on ReFS, exFAT (if that's even supported, given its limited permissions support), or ZFS, once OpenZFS on Windows stabilizes a bit.
> Are derived community works, like say the D-language, Dart, or Zig bindings to the win32 API which are generated from those files open source - if Microsoft did not release them as such?
Certainly. The source for these bindings is the WinMD file, which is MIT licensed. While it is true that its contents are generated wholly from non-FOSS sources, it doesn't impact end users of the metadata. Microsoft owns the original IDL/headers and can choose to license any related work however they like.
Although it's a point of pride that someone did manage to get D understanding the recent Win32 API specification extremely quickly (D is ridiculously good at this kind of metaprogramming), the bindings that actually ship with D at the moment are hand-maintained.
If you want to build binaries for a distro, build in that distro. If that distro has a Docker image, it's as simple as:
docker run -v "$PWD:/src" olddistro:version /src/build.sh
$dayjob supports distros as old as CentOS 7 and as new as Ubuntu 22.04 this way.
Compiling on one distro and then expecting it to work on another distro is a foolhardy errand. Libraries have different versions, different paths (eg /usr/lib vs /usr/lib64 vs /usr/lib/x86_64-linux-gnu/ vs...), and different compile-time configuration (eg openssl engines directory) across distros.
> If you want to build binaries for a distro, build in that distro.
> Compiling on one distro and then expecting it to work on another distro is a foolhardy errand.
This is why Windows, with all its issues, is still relevant in 2022. I got tired of updating my distro and having software stop working. If you stay in the happy walled-garden land of the main repository, you're fine. When you need some specialized software, or something that is not maintained anymore, good luck. And at the end of the day, people just want to get their work done.
This is why Windows, with all its bloat, advertising, tracking, and security issues (no-click RCE in 2022, wtf), is STILL going strong.
> Compiling on one distro and then expecting it to work on another distro is a foolhardy errand.
Only if you link random libs you find on the system. The base system libs making up a Linux desktop (glibc, xlib, libasound, libpulse*, libGL, etc.) are all pretty good about maintaining backwards compatibility, so you only need to build against the oldest version you want to support and it will run on all of them. Other libraries you should distribute yourself or even statically link with their symbols hidden. This approach works for many projects.
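The rough recipe, for anyone who hasn't done it (libfoo is a stand-in for whatever third-party library you bundle; the flags are the usual GNU ones):

    # compile with hidden visibility, link on (or against) the oldest distro you support
    gcc -c -O2 -fvisibility=hidden myapp.c
    gcc -o myapp myapp.o ./deps/libfoo.a -Wl,--exclude-libs,ALL -lX11 -lasound -lGL
    # then check the highest glibc symbol version the binary actually requires
    objdump -T myapp | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n1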
It has a few old things unique to it in our support matrix. It's the only distro with openssl 1.0 so a bunch of API and related things are different. It's also the only distro where systemd socket activation doesn't work if your service has more than one socket, because its version of systemd predates the LISTEN_FDNAMES env var.
Also, it is old enough that it's going out of LTS soon.
CentOS 7 may have been released in 2014, but the software it shipped was already quite old then.
As a datapoint, CentOS Stream 9 [0], which was released in 2021, and which RHEL 9 (released in May 2022) is based on, is already ~60% out of date according to repology: https://repology.org/repository/centos_stream_9.
Also: In computer time, 8 years is "very old". That's longer than the "mainstream support" window for Windows 7 was (from 2009 to 2015), and about as long as the mainstream support window for Windows XP (from 2001 to 2009).
[0]: CentOS "Stream" has a different release model and appears to be a bit of a rolling release as I understand it? But that would cause it to be more up-to-date, not less.
I tried that with Rust, actually. My Manjaro was running a much newer version of Glibc than my server so I had to deal with that.
First I tried to get it to link to a different system library but no package manager is happy with multiple major versions of glibc.
Then I tried musl, but it turns out the moment you enable musl, several OpenSSL packages used in very common dependencies don't compile anymore. There was something about a custom OpenSSL path that I would need to specify and a cert file I'd need to package, but I gave up at that point.
The solution was to use a Docker image of an ancient version of Debian with an old version of glibc to build the file. I have no idea how you'd deal with this crap if your version of glibc is even older or if you don't have glibc, my conclusion was "I guess you can't use the usual HTTP crates then".
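For reference, the route that usually gets musl builds past this is the openssl crate's "vendored" feature, which compiles OpenSSL from source into the binary instead of hunting for a system copy - assuming your HTTP crate ultimately depends on openssl rather than rustls:

    # Cargo.toml
    [dependencies]
    openssl = { version = "0.10", features = ["vendored"] }

then build with `cargo build --release --target x86_64-unknown-linux-musl` as usual.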
Oh, and the "just statically link everything" approach is also terrible for security patches because most developers don't release security patches with the same urgency as library developers do. GnuTLS had some pretty terrible problems a while back that were quickly resolved with an update but the most recent version of some binaries online are still vulnerable because the devs chose to statically link GnuTLS and abandoned the project a while back.
Libraries are an enormous pain point for Linux development and even static linking won't always be there to save you. This is one of the reasons some game developers choose to release a "Linux version" of their game by just packaging Proton/Wine with their executable and making sure their game performs well under the compatibility layer. All the different versions of all the different distributions and all the different flavours are impossible to keep up with.
Linux devs have chosen to ship entire Linux core libraries with their applications in the form of AppImage/Flatpak/Snap/Docker to solve this problem. If static linking solved all problems, Docker wouldn't have taken off like it did.
"Statically link or bundle everything" is how most Windows apps deal with this tho. So if we're comparing Windows and Linux, and saying that Linux packaging is worse, this method can't just be dismissed on security grounds.
Windows developers tend to stuff directories full of DLLs if they need to ship dependencies, they're not statically linked per se.
Regardless, it's incredibly complicated to compare linking behaviour between Windows and Linux. Windows has tons of components built into the API (and which is maintained by Microsoft) which you'd need an external dependency for in Linux. Microsoft provides interfaces for things like XML processing, TLS connections, file system management, sound libraries, video libraries, rendering engines, scripting engines and more. If there's a vulnerability in WinHTTP, Microsoft will patch it; if you statically link curl, you'll have to fix it yourself.
Of course many open source developers will statically link binaries because that way they don't have to write platform-specific code themselves, but they only need all those dependencies because your average distro is pretty bare-bones if you strip it to its core components. If you write code for Linux, you're not even getting a GUI from your base platform; you'll have to provide your own bindings for either X11 or Wayland!
Most third party Windows software I run is some application code and maybe a few proprietary DLLs that the software authors bought. Sometimes those DLLs are even just reusable components from the vendor themselves. Only when I install cross compiled Linux software do I really see crazy stuff like a full copy of a Linux file system hidden somewhere in a Program Files folder (GTK+ really likes to do that) or a copy of a modified dotnet runtime (some Unity games).
The big exception to the rule, of course, is video games, but even those seem to include fewer and fewer external components.
Development becomes a lot easier when you can just assume that Microsoft will maintain and fix giant frameworks like the .NET Framework and the Visual C++ runtime (basically libc for Windows) for you. Microsoft even solved the problem of running multiple versions of their C runtime on the same machine through some tricky hard linking to get the right dependencies in the right place. As a result, most Windows executables I find in the wild are actually linked dynamically despite the lack of a library management system.
Hence "... or bundle everything". But what difference does it make from the security perspective? If the app ships its own copy of the DLL, it still needs to do security updates for it - the OS won't.
As far as the OS offering more - it's true, but not to the extent you describe. For example, Windows does offer UI (Win32), but most apps use a third-party toolkit on top of that. Windows does offer MSXML, but it's so dated you'd be hard-pressed to find anything written in the past decade that uses it. And so on, and so forth. I just went and did a search for Qt*.DLL in my Program Files, and it looks like there's a dozen apps that bundle it, including even Microsoft's own OneDrive (which has a full set including QML!).
Even with .NET, the most recent version bundled with Windows is .NET Framework 4.8, which is already legacy by this point - if you want .NET 5+, the standard approach is to package it with your app.
And then there's Electron, which is probably the most popular framework for new desktop apps on any platform including Windows - and it is, of course, bundled.
"Some application code and maybe a few proprietary DLLs" is how things were back in 00s, but it hasn't been true for a while now.
glibc doesn't support static linking, but you can fully statically link with another libc if that is what you want to do.
Not that it is needed for forward-compatible Linux binaries, since newer glibc versions are backwards compatible, so dynamically linking against the oldest one you want to support works fine. It would be nice if glibc/gcc supported targeting older versions directly without having to have an older copy installed, but that is a convenience issue.
Nope, it still requires at least the glibc version that the AppImage was compiled against. Neither snaps, flatpaks, nor AppImage solve the long-standing glibc versioning issue that plagues portable Linux binaries. The closest I've seen to fixing this issue is https://github.com/wheybags/glibc_version_header
One of Zig's claims to fame is how widely supported/cross-platform their build tools are, and I had high hopes. But they don't publicize this limit of their macOS support -- I found out the hard way. I really appreciate how far MacPorts bends over backwards to keep things working.
I love my ancient machine and have a few 32-bit apps I need, though I guess old hardware isn't quite the excuse it used to be.
I'm interested in re-evaluating this policy once we get a bit further along in the project. It could work nicely given that we have OS version min/max as part of the target, available to inspect with conditional compilation. We get a lot of requests to support Windows all the way back to XP too, despite Microsoft's long-expired support.
All this scope creep takes development efforts away from reaching 1.0 however. If we had more labor, then we could potentially take on more scope. And if we had more money then we could hire more labor.
So if y'all want Zig to support a wider OS version range, we need some more funding.
Go tell that to the schools I have to ship software to, which can't upgrade past 10.13 because their Macs are too old, but don't get enough budget to buy new ones.
Not that hard. Since it's not a use case that most GNU/Linux software needs to be concerned about it's easy to make mistakes and resources are scarce, but once you know what you're doing it's usually not a big deal (maybe except of some specific edge cases). There's lots of old games on Steam that still work on modern distros and new games that work on older ones (and, of course, there's a lot of them that's broken too - but these days it takes only a few clicks to simply run them in a container and call it a day).
It's very hard. Incompatible glibc ABIs make this nigh impossible; there's a reason Steam installs the VC++ redistributable for pretty much every game on Windows. On Linux, Steam distributes an entire runtime environment based on an ancient Ubuntu release exactly to circumvent this problem.
Look no further than the hoops you need jump through to distribute a Linux binary on PyPI [1]. Despite tons of engineering effort, and lots of hoop jumping from packagers, getting a non-trivial binary to run across all distros is still considered functionally impossible.
Steam distributes an entire runtime environment because it's a platform that's being targeted by 3rd party developers, who often make mistakes that the Steam Runtime tries quite hard to reconcile. When all you care about is your own app, you're free to compile things however you want and bundle whatever you want with it, at which point it's honestly not that hard. Building in a container is a breeze, tooling is top notch, images with old distros are one command away, and testing is easy; in practice I had much more trouble compiling stuff for Windows than for old GNU/Linux distros.
It's easy to compile an entirely new binary for every platform/distro, and it's easy to bundle an entire execution environment along with a single binary using docker, what's hard is compiling a single binary and have it run across all distros and execution environments.
It really isn't that hard to get a single binary that works across glibc-based distros. Just compile against the oldest version of glibc/Xlib/etc. you want to support and bundle all non-system libraries, statically linking them with symbols hidden if possible.
We are not talking out of our ass here - I do this myself for all the software I maintain and it just works. Tons of programs are released this way.
"Across all distros" - sure, that's outright impossible, and for a good reason. "Across all distros that matter" - no, it's not. How do you think Electron apps distributed in binary form work fine across various distributions?
I totally agree, it's practically impossible. Is it a philosophical thing or a technological thing? GPL is about source code and going against that will never make much progress.
It is almost always possible to do some relatively simple hacks to make old stuff work, though (LD_PRELOAD, binfmt_misc/qemu, chroot/docker).
It says a lot about Linux development that for cases like these "just install the Linux equivalent of Windows XP in a container and run the tools inside that" is an accepted solution.
It's a solution that works well and is used by loads of developers, but it's still comically silly.
Sure, an SDK where you can just specify the minimum version you want to support would be nice, but who do you expect to develop such an SDK? GNU/glibc maintainers? They would rather you ship as source. Red Hat / SUSE / Canonical? They want you to target only their distro. Valve? They decided it's easier to just provide an unchanging set of libraries, since they need to support existing games that got things wrong anyway and already have a distribution platform to distribute such a base system along with the games without bundling it into every single one.
You can also just cross-compile targeting whatever you want, including RHEL.
I wrote a tool [0] which will take any system and create a cross-compiler toolchain for it, this is what I use to compile for Linux, HP-UX, Solaris, BSD, etc on Linux.
This is portable to CPUs with the same features running older linuxes. To be portable to CPUs with fewer features you should specify the CPU family with `-Dcpu=foo_bar` which seems to be the equivalent of `-march=foo-bar`.
Are symlinks on Windows really such a big issue? I’ve never heard of projects using symlinks in their tree. If Windows developers are complaining about you doing something unusual, why is it their fault?
The higher level thing to notice is that Windows is a second class platform for much of the software world. As such, it is in a position where it needs to bend the knee for compatibility with the dominant platform if they want it to be an active player in the larger ecosystem. In the case of symlinks, the hard part is already done, to the point where it is a toggle that has already been implemented, the UX of enabling it is just bad.
Hard links and directory junctions have been possible in Windows since Windows XP, and soft links since Windows Vista (2007). In fact, hard links are an essential part of Microsoft's solution to the DLL hell problem without wasting gigabytes of space.
The problem people are running into is that the OS hasn't been designed with wild symlinks in mind and therefore can't guarantee its security principles if any user is able to symlink however they please; if I read the context [1] correctly, it seems like an elevation of privilege is suspected to be very easy to gain if a standard-level user is allowed to place arbitrary soft links on a file system.
I see little reason for Microsoft to enable the "all users can soft link" setting by default. They'd need to audit their OS and userland code to determine where soft links may introduce vulnerabilities in order to change the default and a few developers that absolutely insist on using soft links inside git repos for some obscure reason isn't going to get them to make such an effort.
Microsoft has made enabling the feature a bit easier [2], but I can find very little about the security analysis done for this change, so enabling dev mode might open your computer up to a whole class of vulnerabilities.
I personally don't see a reason to use symlinks in a dev environment, but I suppose *nix developers think otherwise, probably to make the same file appear in multiple places in a source repository for some reason? If your intent is to work together cross-platform then there are loads of restrictions you need to deal with. Linux applications tend to trip up over CRLF, every file system has its own stupid restrictions, build tools and shell scripts need to somehow become platform-agnostic, you name it.
You can blame Windows for being different, but the truth is that Windows is still the most commonly used desktop operating system in the world by a huge margin. macOS and Linux are the odd ones out, and there is no good reason why the POSIX/X11 system design is better or worse than Cocoa or Win32; it's a mere difference of convention.
Certainly better to avoid it if at all possible, and I'll be eliminating them specifically to improve Windows compatibility.
It's not that unusual, though, a quick search turns up 1.1k GitHub repositories with `git config core.symlinks true` in their documentation or CI pipelines - including quite popular projects like Ava, Apache Arrow, Solana, Chrome Devtools, adobe-fonts, IBM/houdinID, travis-ci, RabbitMQ, various Google projects & more.
I agree that Windows could do more to support the developers on other platforms. Symlinks just seem like a petty thing to complain about IMHO. Didn’t realize how common they were though.
NTFS does support them, however. Is Git just not supporting them properly?
NTFS has supported them for ages. The main issue is that they're locked behind elevated permissions or having developer mode enabled (as of Windows 10.)
My understanding is Microsoft's concern is that applications and OS components not expecting them could lead to security issues. Not sure how real that concern is, but that's the excuse I've heard.
I've looked before, and have never found a definitive answer from Microsoft. I've heard some speculation that the nebulous security issues would come from a program that checks the permissions of a symlink's target, then opens the symlink. An attacker could then modify the symlink after the permission check but before the file is opened, escalating access by pointing it at a restricted file.
It's a kind of attack called a symlink race, which is also possible on other operating systems. There are kernel parameters for hardening against symlink races on Linux, and they just restrict following symlinks in world-writable locations. I'm not sure why Windows can't use a less invasive mitigation like that, but I guess there must be a reason.
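For the record, the Linux knobs in question (both exist and default to 1 on most current distros; the values shown are what you'd typically see):

    # symlinks in sticky, world-writable dirs like /tmp are only followed
    # if the follower owns either the link or the directory
    sysctl fs.protected_symlinks      # fs.protected_symlinks = 1
    sysctl fs.protected_hardlinks     # companion setting for hard links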
Yes, Windows does, but this post is complaining about support. Which led me to ask if Git is the problem. Cause Git also supports symlinks, but is clearly having issues on Windows.
From my point of view, even when someone is on the correct 'side' of an argument, if they got there by mistake it's still important to point that out. Both fortran77 and CoastalCoder can be wrong at the same time.
Is "I think you misread that line" far ruder than I thought it was?
...surely "I doubt my mom knows" wasn't supposed to be a developer anecdote that I misunderstood massively?
How are they not related? If NTFS didn't have symlinks we wouldn't be having this discussion. Critically, the modern standard for removable drives, exFAT does not support symlinks, so you can't count on Windows' support for symlinks if the user is git cloning on a drive that's using exFAT.
Pointing to this page is meaningless. `mklink /d` only works on NTFS but errors out ("The device does not support symbolic links.") on exFAT. GP acknowledges NTFS has symlinks and remarks that exFAT doesn't.
No, they're not really. And IIS is not that bad either. My main gripe with Windows, and why I won't develop on it anymore, is simply that their system is too complex. I suppose this is the gripe of the article author as well, tbh.
Windows is horribly hard to learn. In a day, I learned how to create files and directories, change their attributes, run diff and patch, and do weird string manipulations with grep and cut (and learned to commit and push with git). That was my first day on a UNIX machine. My first work week on a Windows machine (by which time I had already built an LFS): we have these two projects with two different versions of (proprietary JS front-end framework) and (weird PHP framework). They don't use the same version of mycrypt.dll (among others, but this is the one I will remember for at least two dozen years). With the support of non-intern engineers, including a Windows DBA, it took us a week to manage to install the two different versions.
A week prior, I had been linking to a previous version of Ruby for my RoR app (this was around 2013).
But since then, I really respect Windows sysadmins; they are the best of us. I just never, ever want to work on a Windows server again. I like learning, but putting in that much effort for such little reward? Not worth it.
You don't have to enable Developer Mode to enable symlink creation.
Administrators have the right to by default, and non-administrator users can get it by enabling the documented policy:
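Once that's in place (or Developer Mode is on), Git for Windows will happily check out real symlinks instead of plain-text placeholder files; from Git Bash, roughly (the URL is a placeholder):

    git clone -c core.symlinks=true https://example.org/some/repo.git
    # or for an existing clone, then re-checkout the affected paths:
    git config core.symlinks true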
We started at "not supported", talking about making symlinks inside projects.
We backed off to "not supported on normal systems", which is a reasonable change to me.
You mentioning junctions is not even close to a solution, and I am not moving the goalposts. If junctions worked, the complaint wouldn't have been needed from the start. No, this is about actual symlinks, not a similar feature that can do 5% of what symlinks can do.
This is what is called in Cognitive Behavioral Therapy "All or Nothing Thinking." Microsoft is a for-profit company. It is pro open source when it helps them, and closed source when it helps them. I like the new Microsoft, which has open sourced a lot of stuff. It's way better than the old closed-source regime.
Oh, you and the article's author are either forgiving or seeing the best in people. Me, on the other hand, who lived through Microsoft's shady dealings in Bill's / Ballmer's era - I remember them crushing OEMs to not allow even a shred of openness in their systems. Only because developers shifted in favor of Apple / Google did Microsoft change its tune to "Microsoft loves Linux". Bleah! I have no love for Microsoft despite my work involving their products 90% of my time. I am just realistic: they just love money, nothing else. And if money means open source, guess what?! They bought GitHub and they own everything that is published there. That's why plenty left GitHub and either started their own venture or went to GitLab.
Microsoft always had great developer tech. And having access to that tech more freely (as in on other platforms, partially open source etc) is a significant plus for me.
I loved .net back in 2003 or whatever. Then I never got to use it for years because I was on unix land. It is now open source and multi platform.
They attacked open anything with all their might (and they were fucking mighty) and the enemy still flourished.
Let them come again. If the new strategy is to pump high quality software out in the open, I hope it becomes a long and bloody war.
I just built an app in Blazor, which is Microsoft's React alternative, except you code everything in C# instead of JavaScript. Once I understood it I found myself able to write and debug code very quickly.
TypeScript is ranked number 4 according to GitHub statistics in 2022[0].
VSCode is also very popular. I couldn't find any good statistics comparing different text editors, but Atom is sunsetting this year, and VSCode continues to grow - some anecdotal evidence.
Visual Studio is still a great IDE for C++ imo. CLion is the only other big IDE for C++ that I've had suggested often.
C# is ranked #9 in GitHub usage[0].
I would consider these "great developer tech", and their popularity seems to confirm that.
Edit: DirectX is also a pretty big deal, since afaik it's the only way you can make an Xbox game. I don't have enough experience to say whether DX is great developer tech or not, though. I hear how horrible Metal is more often than I hear negativity about DX, but that's just my experience.
Only for Java, for C++, .NET and GPGPU programming? Not really.
And even for Java it is debatable, since JetBrains would rather sell CLion licenses than support some of the stuff NetBeans and Eclipse have been doing for 15 years regarding mixing Java and C++ development on the same source base.
When is IntelliJ going to get an incremental Java compiler, by the way?
Yeah, the acceptance of OSS is cool, but they still ship a lot of weird software and technology... which is fine assuming you can avoid it... but sometimes you can't, and then it becomes torturous.
A big reason for that is their commitment to compatibility with past software. They kind of have to, because they have people all over the world using old Microsoft software. They were finally just able to kill IE and Japan is screaming about it. So they ship a lot of weird software and technology, in your words. Google just kills things when it gets bored with them. Microsoft can't do that.
That's the classic excuse. But even Azure, which was built from the ground up, is pretty weird compared to its major competitors.
Also, Apple showed us how to do this: build the right thing for today and use virtualization to support yesterday until it can be updated. All software is living these days anyhow.
I currently own an iPad which is useless to me because a bunch of apps, including Gmail, won't load on it since it's too old. I have better luck with my old hardware running Windows software.
Apple is worse in that regard. Is XCode open-source? Other than Darwin Kernel, is any part of macOS open-source? Did Apple even try to make Swift cross-platform? Is iCloud really comparable to OneDrive? How about Apple's locked and closed-source bootloaders?
Apple gets close to zero usage on desktops outside of the Silly Valley, has almost zero presence on servers, and it's trivial to avoid it (I haven't used their products for years (and very little before that), and personally know only a single macOS user). Windows — not so much. So when they put out yet another crappy technology, you may get tarnished one way or another.
I’m surprised Qt hasn’t been mentioned yet for c/c++. You can get qtcreator and mingw baked in for the cost of a couple GB, and it “just works” even if you aren’t using the qt framework. I much prefer qtcreator to visual studio anyways. I suppose this breaks down for the use cases that aren’t c/c++ though.
Sadly Qt ships MinGW 8.1, which is positively ancient (released in 2018). If you're starting a new project (which you likely are if you are installing an IDE, aha) there's no reason not to go for more recent compilers - msys2 has GCC 12 (https://packages.msys2.org/package/mingw-w64-x86_64-gcc) and Clang 14 (https://packages.msys2.org/package/mingw-w64-x86_64-clang), which just work better overall, have much more complete C++20 support, have fewer bugs, better compile times (especially Clang, with the various PCH options that appeared in the last few versions), better static analysis, etc.
Personally I use https://github.com/mstorsjo/llvm-mingw's releases directly which does not require MSYS but that's because I recompile all my libraries with specific options - if the MSYS libs as they are built are good for you there's no reason not to use them.
It doesn't, no. If you are only working with libraries designed for Windows or are linking against binaries, most of the dev issues listed disappear. The point of the article is that the Windows push for open source introduces friction because the way of doing things with established OSS projects is so different.
You don't need to implement POSIX in order to not bundle your header files in a 9 GB "SDK" download, which isn't available as a direct URL download and can't be installed without user interaction via the GUI.
If MS would package just the header files, they'd probably get tons of complaints about compilers complaining about missing libraries, missing dependencies and toolsets not being available.
The Windows SDK is not just a few C++ headers and a bunch of lib files to link against. It has a huge surface area. The documentation and toolsets assume all API calls from the Windows 3.1 APIs to UWP are available.
The SDK download (ISO format) is 1.1GB in size, requiring a total of 4GB of disk size to install (whether this includes the size of the installer itself is unclear). Big, but not unavoidably so, and you can pick and choose some features. It bundles debugging tools, the application identifier, a certification kit and MSI generation tools along with its headers (which seems fair to me); less than 1GB of extra kit on top of 1.8GB of headers and libraries you probably want as a Windows dev anyway. Just the headers won't leave you with a working dev environment even if you bring your own debugger.
Unlike what some developers seem to think, you don't actually need to download Visual Studio to get the SDK, you can also download it separately from the website [1]. Pick the ISO version and extract the CAB files yourself if you want to manually pick and choose your files.
The SDK ships as an installer but that just makes sense. Ubuntu ships their headers in DEB files as well, for example. You want to be able to add and remove these packages as you upgrade or downgrade your target API levels without having to manually set up a file system hierarchy.
As for non-GUI interaction: `WinSDKSetup.exe /quiet /ceip off /features DesktopCPPx64` will install only the necessary headers and libraries for x64 C(++) development in the default location without sharing data with Microsoft. Found this command line with `wine WinSDKSetup.exe /?`. You can also add, repair, and uninstall packages with the same installer.
I mostly agree on the VSCode part. They should put it up front in the README, on the homepage, and maybe in an info modal/badge whenever a user installs said extension.
But genuinely asking, how else would you 'label' VSCode?
"A free and open-source code editor with some optional components that are proprietary"?
> Apple is worse in that regard. Is XCode open-source? Other than Darwin Kernel, is any part of macOS open-source? Did Apple even try to make Swift cross-platform? Is iCloud really comparable to OneDrive? How about Apple's locked and closed-source bootloaders?
Swift itself works on most platforms. The problem with using Swift cross platform is that Apple only bothered to make their GUI toolkit available for their platform in the same way Microsoft only makes their dotnet GUI platforms available for Windows (without resorting to "official" replacements that are owned by the same company but not built into the system, like Xamarin).
As a consequence, many Swift libraries only focus on macOS, just like many dotnet libraries focus on Windows (though recent efforts have improved that situation). You can probably get a lot of them working on Linux as well and if you use Swift for command line tools or web applications. I suppose you can probably run the most important tools cross platform, but the non-native ecosystem is clearly a second class citizen.
The big difference between XCode and VSCode is that Apple doesn't claim XCode is open source; also, XCode is more comparable to Visual Studio than VSCode in terms of SDK integration and preconfigured tooling.
Huge parts of the Darwin kernel are actually publicly accessible while Microsoft only provides kernel sources under NDA in things like education projects. Unless you count the WinXP source code leak, that is.
C# is sort-of mostly open source-ish except that debugger features are closed and the community has little say in its development.
Apple did in fact put effort into making Swift cross platform, as outlined here[1]; though their intention is that Swift programs use the system runtime on macOS/iOS/iPadOS, they put the effort into making a base layer freely available for other operating systems to gain some portability.
I'd personally argue that Apple and Microsoft are similarly if not equally open in their development, but Microsoft advertises itself much more "open source" than Apple. Apple's approach of "you can look but you can't touch" is a lot more explicit and their supposed openness comes up in fewer marketing materials.
I agree that symlinks cause more trouble than is needed on any platform. Sadly, only Node really likes to use them, to optimise the package store.
Personally, I don't have major issues compiling things on Windows for other platforms (macOS, Win32, Android, Linux). The only awkward thing is designing UIs.
I don't think that's what they are saying. I think they are saying it's stupid that Microsoft requires hoops for such a basic feature, hence "Fix Windows" being option 1.
Such a basic feature? If the OS dedups a file by putting in a symlink behind the scenes, no one need know a symlink (or something else) exists. That makes for a better user experience. There are fewer concepts to absorb and fewer errors to be made, as the filesystem won't have any cycles.
Symlinks are a premature optimization in my book, or worse, bad design. On the user level, shortcuts are often the better solution, because they allow arguments to be passed in on the command-line.
This story is really about the arrogance that is as abundant as it is inappropriate in the Linux world. Never have I heard of dependency hell or broken systems on Android, yet geriatric Linux always needs the help of an administrator.
The only place where Linux still has some merit is on the server, where its classical but aging mainframe concepts still have some life left in them.
I presume most MS managers were in engineering roles earlier. It doesn't make sense to characterize the same kind of individuals as crushing merely after a role change. My hunch is they always had the instinct to crush but didn't have the power earlier.
/ is used both in writing and in Maths. \ is a character used almost exclusively in programming.
From a practical point of view, I find \ to be a much more suitable directory separator than /, for the same reason I think Microsoft's choice for blacklisting characters like ? and : from file names is silly. There are real world use cases [1] for adding the / to file names so it shouldn't be excluded!
Microsoft has used the / for command switches since its inception, based on the way DEC's TOPS-10 (1970) used linker flags; with / already taken, they chose the Next Best Thing, which is perfectly fine. When Unix came around a year after TOPS-10 it used the forward slash for directories for some reason, but there's no way one is better than the other.
For what it's worth, Windows accepts forward slashes. Since Windows 1.0, actually, all the way back in 1985. Try it for yourself in your browser[2], open notepad.exe and save a file in A:/test.txt. Your path separator may be represented differently, but / works perfectly fine.
Fun fact: in some locales (Japanese, for example) your path separator isn't even rendered as a backslash. The separator is still byte 0x5C, but that code point is displayed as the Yen symbol in Japanese locales of Windows. In Korean locales it'll show up as the Won symbol, and you'll probably find other path separator glyphs in locales that go back to the console code page days.
Which is because not only Windows, but DOS itself supported it at least as far back as DOS 2.0 when they added sub-directory support.
There was even an option (in CONFIG.SYS) to alter the 'switch character', which then also caused a lot of command line tools to accept '/' for paths. That option was eventually removed from the config file, but the underlying API retained until quite late. Maybe it was WinME's version of DOS which disabled the API.
Despite all of that, the handle based Int 21 file APIs always supported being passed '/' as a path separator, and it was often accepted by some apps.
DOS-based 'C' source code often used "#include <some\\path\\file.h>", however many compilers also simply accepted "#include <some/path/file.h>", probably just as an artifact of the obvious implementation. This wasn't well known, so lots of DOS-based 'C' source used '\\' in includes, and also at the file API level.
I used to use MXE [1] to compile fully static Windows binaries on Linux VMs hosted with Travis. It needed to crane in everything though, so it was a source of bottlenecks from time to time. I was also uncertain about the provenance of a lot of the dependencies in that toolchain. So when Travis died I took the opportunity to move Windows builds back to gnu with msys2, all over GH Actions. These are actually comparatively snappy and I’m reasonably satisfied with it.
Would falsely representing software intended to be proprietary as "open source" in its marketing copy have an adverse impact on the enforceability of its putative license?
The comments here showcase why webapps and Electron/webview shells are so common now. Maybe WebAssembly and its up-and-coming runtimes can finally fix some of this mess.
I disagree. Cross-compiling has nothing to do with why Electron is common. The browser is the runtime that currently fixes this mess. WebAssembly would need a runtime matching current browser runtimes to be a real contender, and the best choice here is... a browser. So now you have electron but it's WebAssembly, which isn't significant enough to make a dent in this mess.
You compile in the Cygwin environment where you have all the usual tools. Then shipping the program with cygwin1.dll from the Cygnal project gives it native-like behaviors.
Configure it to respect your preference ('don't change my files') and not pester you with hundreds of warnings any time you interact with Git if your preference differs:
warning: LF will be replaced by CRLF in src/au/policy/dao/EmailQueue.java
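A minimal sketch of that configuration (these are the common settings, not necessarily what the parent poster uses):

    # Tell Git to stop rewriting line endings on checkout/commit:
    git config --global core.autocrlf false

    # Or, better, commit a .gitattributes so the whole team gets the same
    # normalization regardless of their local settings:
    printf '* text=auto\n*.sh text eol=lf\n*.bat text eol=crlf\n' > .gitattributes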
If you have some developers using Windows, and other developers using anything else, without some kind of line ending normalization, then you will end up with inconsistent line endings, possibly in the same file, and potentially diffs and commits where every line in a file is changed from one line ending to another.
For me, SDK sizes are a showstopper. MinGW packages are probably 100-200 MB total. To build anything native one has to install Windows (20+ GB in a modern version) and then Visual Studio (40+ GB). Not so easy to fit it all on an SSD drive.
Xcode also has this problem now. 8 GB for Xcode 7 is manageable, but why 70 GB for Xcode 11?
I'm cross-compiling Mach engine[0] with Zig; it ends up being quite a small toolchain to cross-compile a game engine (using DirectX, Vulkan, OpenGL, and Metal on the respective platforms) from any OS. The Zig toolchain itself is only a few tens of MiB.
Zig provides almost everything that is needed to cross compile to those same targets out of the box, including libc and some system libraries. Except macOS frameworks and some updated DirectX headers/libraries (provides MinGW ones), which we bundle and ship separately:
* Windows: 7 MiB (updated D3D12 headers & libs)
* MacOS: 112 MiB (almost all frameworks provided by XCode, for x86+arm+iOS)
* Linux (x86): 22 MiB (x11/wayland headers & libs)
* Linux (arm): 15 MiB
That's full cross compilation of WebGPU GUI applications to all desktop platforms in under ~217 MiB for most platforms.
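For reference, the out-of-the-box part looks roughly like this with plain `zig cc` (a sketch, not Mach's actual build):

    # Cross-compile the same C file to several targets from one machine:
    zig cc -target x86_64-windows-gnu -o hello.exe hello.c
    zig cc -target x86_64-linux-gnu   -o hello-linux hello.c
    zig cc -target aarch64-linux-gnu  -o hello-arm64 hello.c
    # (macOS targets work the same way, but the frameworks have to be
    #  provided separately, as described above.)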
You can accomplish similar things by targeting, say, SDL2 with a C compiler. It's a nice solution when you don't need platform-specific stuff, but it's not a general substitute for having the platform SDK, alas. With that said, targeting SDL2 is what I do… but the apps I like to write are games.
Actually, it's better - my work includes everything needed to build:
* GLFW
* Dawn (Chrome's WebGPU implementation)
* The DirectX Shader Compiler (a fork of LLVM)
* Freetype and HarfBuzz
All from source, cross-compiled to every OS. Plus with Zig you can link+sign macOS binaries from Linux and Windows (AFAIK that's not possible with a regular C compiler, but maybe that's changed recently?)
I ran into this when trying to test out Rust. On Windows, they recommend using native tools that require a multi-gigabyte download via the Visual Studio installer (and this is just for tools--not Visual Studio, the IDE).
It was annoying for me because part of the reason I wanted to try Rust on Windows was specifically to avoid multi-gigabyte C/C++ toolchain downloads. I bet the actual compiler and linker aren't that big, so I kind of wonder where all those bytes are going...
Aside: I see a sibling comment mentions Zig. While I haven't really explored Zig, the tiny single-binary download was a breath of fresh air. Go also had a quick and easy download, but I wanted a language without a garbage collector.
Try cross-compiling Go apps that use GTK3 if you’d like to spend a few days in intense pain. It’s not there yet. Electron apps are probably the closest thing we have today that actually works correctly on Win / Linux / Mac.
The visual studio build tools are substantially smaller than 40GB. My IDE install is about 25GB, and visual studio has distributed the build tools separately since vs2015 - my toolchains directory is about 4GB including whatever dotnet runtimes, windows GDK.
> Not so easy to fit it all on an SSD drive.
Professionally, no excuse. As an open source or otherwise unpaid pursuit, a 250GB SSD is about $45 on Amazon right now. That more than comfortably fits your 60GB estimate, with plenty of room to spare.
I have no idea sorry! On the Jetbrains front that's interesting. My Intellij folder is closer to 4GB, but that's only for one language. I've also got Rider and Goland installed, and Jetbrains duplicates the entire install per IDE. My Jetbrains folder is 12GB.
> even THAT seems like a lot to me.
That's a little silly - what is an acceptable amount in that case? The JDK on its own is about 700MB (that's a guesstimate based on the last time I installed it, sorry).
We are two wealthy people arguing over rents, with you saying $1M/month is reasonable, I'm saying it's not, I'm only paying $100k/month, meanwhile most of the world is paying on average $100/month.
25GB is enormous. 2.5GB is enormous. Consider that the core value of this software is text editing. Consider that not long ago people were buying PCs with perhaps 10MB of hard disk - or even no hard disk at all (e.g. the Apple IIe). Windows XP and Office 97, for example, were (if memory serves) less than 1 GB total. FoxPro for DOS was something like 4 megabytes - and FoxPro was a form builder plus relational database. Consider that with a thoughtful use of resources and an eye toward minimizing attack surface, you can put a fully functional http/s app server in a 1.9MB package (redbean).
These sizes are silly, and they should give you pause. The space is cheap, yes, but the attack surface is not.
It's not "text editing". It's "developing software" with all the tools that belong to that - from understanding the language, to compiling it, optimizing it, linking it and carrying necessary documentation and support libraries.
Don't be reductive - just because your linux hides those costs directly in /usr and /var it doesn't mean those things don't exist.
It's not a text editor. It's an IDE, with built in development environments, SDKs, deployment tools. Debug symbols for native binaries are often 2-4x the size of the binaries themselves.
> Consider that not long ago people were buying PCs with perhaps 10MB of hard disk - or even no hard disk at all
Not that long ago in history, but an absolute eternity ago in computing terms. I have a direct internet connection to my home that is faster than the read write speeds of those computers.
> Consider that with a thoughtful use of resources and an eye toward minimizing attack surface
Attack surfaces have changed significantly since people were buying 10MB hard drives - you cannot write applications with the same security considerations from that time.
> you can put a fully functional http/s app server in a 1.9MB package (redbean). The space is cheap, yes, but the attack surface is not.
Firstly, redbean is the absolute extreme example of minimalism and portability. It's not "normal"; it's an incredible feat of engineering, frankly. Nginx isn't much bigger (~5 MB) and Caddy is bigger but still small (~30 MB). The big difference between these and IDEs is that web servers don't provide client interfaces. For user-facing tools they rely on web browsers to render HTML and interpret JS, so to make a fair comparison you'd have to compare redbean + Chrome to a Windows SDK install, for example.
It's interesting how people assume that their world is everyone else's...
It's easy to forget that there are billions of people for whom $45 is a massive investment, and that SSD isn't so conveniently available even if they have the money.
I know I got into programming on a mix of graphing calculators and thrown out PCs, and I also distinctly remember having to work around the download sizes of tooling because I was using really crappy internet.
I don't think it's unreasonable for the poster to wish that they could build useful binaries without 60GBs of downloading and storage...
> it's easy to forget that there are billions of people for whom $45 is a massive investment, and that SSD isn't so conveniently available even if they have the money.
So those people can use whatever hardware they have available to them and not buy the SSD. The parent specifically said it was hard to fit on an SSD, so I assumed they could buy one based on that
> I know I got into programming on a mix of graphing calculators and thrown out PCs, and I also distinctly remember having to work around the download sizes of tooling because I was using really crappy internet.
Graphing calculators and Raspberry Pis (and various other low-power devices) are still widely available for people to learn and experiment with. Internet speeds are still a problem in many places, but at some point the software has to be delivered to you, and as I mentioned previously the actual install sizes are not 60GB, and the downloads are significantly smaller (a Windows 10 ISO fits on a 4GB USB stick).
> without 60GBs of downloading and storage...
Firstly, it's not 60GB - see my previous post about how much space it actually takes up. It's closer to 30GB. Secondly, if you don't have 30GB of storage of any kind available to you on a computing device, then sure, that's a problem; but anyone running a machine bought in the last 15 years will have that space available to them.
The forest for the trees wouldn't begin to cover half these responses.
Let me say it again: 60, 30, even 10GB is a lot when there's tooling from the same company that still makes similarly functional binaries and took 342 MB.
You don't need to keep obsessing over "how dare this person with limited resources use an SSD!"; like I said, you run into similar issues with just downloading the stuff.
Again, forest for the trees.
-
Instead maybe you can sit back and just ask "why the bloat over time"?
And the reality is likely: "because no one optimized for it". Because for them lots of fast storage and internet speed are no problem
That line of reasoning maybe allows you to see things from a different perspective and break some assumptions about end users.
Isn't that more useful than browbeating some random for not having 30GB free on their SSD?
I’d say the part where the GP specifically said SSD invalidates your point __in this particular situation__, because a person worried about their budget wouldn’t splurge on an SSD. I also, after a cursory search, have found that the cheapest hard drives are around $25 USD. While $20 isn’t nothing, it also isn’t budget-breaking.
... what? You think someone stretching their money for an SSD somehow indicates that they're not worried about their budget?
Here's a hint: if they had the budget and availability they'd just get a bigger SSD and not write that comment.
-
Obviously, due to some aspect of their circumstance, be it availability, cost, download speeds, etc. the size of the toolkit is problematic.
I mean there's no intrinsic size for a development toolkit, but I don't know anyone who'd say 60GBs of data is a small development toolkit when as others have pointed out, there are older versions of the same Windows toolchains that still complete the same function and manage to take a fraction of the space...
It's strange that I'm repeatedly explaining you don't even have to be destitute for it to be a problem, but your mind can only latch onto "at the store I have access to with almost everything one could ever want in stock, the price delta is 'only' $20"
But yeah, seriously woe is you having to momentarily imagine some people are poor or have trouble getting access to tech.
I'm from Ghana so I guess it's not as onerous to imagine people don't live the exact same life I do in the US.
If stretching your budget is so important then enable NTFS compression on your entire drive... I'm storing well over a terabyte of data on my one terabyte drive right now lol
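For what it's worth, you don't have to compress the whole drive; compact.exe can do it per directory (a sketch, paths illustrative):

    REM Compress a toolchain directory in place, from an elevated prompt:
    compact /c /s:"C:\BuildTools" /i
    REM Windows 10+ also offers stronger algorithms for rarely-modified
    REM binaries, e.g.: compact /c /exe:lzx /s:"C:\BuildTools" /i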
This is a utility that fixes a lot of the cross-compiling issues for windows by giving you a portable, unfucked naming, and not-massive SDK. It's the same SDK you get when you install MSVC but it's only a few hundred megs and the names are consistent even with all of Windows' fucked up tooling.
The only caveat is you need to provide your own compiler, in this case clang is often the best option.
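If anyone wants to try that kind of setup, the LLVM-side flags look roughly like this (a sketch; the SDK directory name is illustrative and /winsysroot needs a reasonably recent clang/lld):

    # Compile and link against an unpacked, case-normalized SDK, from any host OS:
    clang-cl --target=x86_64-pc-windows-msvc /winsysroot ./winsdk /c hello.c /Fohello.obj
    lld-link /winsysroot:./winsdk hello.obj /out:hello.exe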
The other problem with Xcode is that it won’t run on anything except a Mac, and MacOS licensing prevents one from running it on non-Mac hardware or virtualizing it. That’s why services like https://www.macstadium.com/ and AWS mac instances exist, but it’s very annoying to have to work around this just because of dinosaur business models and licensing.
Visual Studio's install size is highly dependent on what you install. The minimum size is only 800mb, everything is 120gb.
Many of the high level categories, like Web Development, contain features you'll probably never use. If you're concerned about file size you can do a custom install to get a very lean install.
That being said, I would still recommend a Windows VM be allocated a 120GB drive.
Interesting - how can I install just cmake + MSVC to do command line builds on Windows? I'm also installing WSL2 + MinGW because it is so much lighter for compilation than the many, many gigabytes of Visual Studio.
Download the build tools[0] and choose the appropriate options. I'd personally consider the following a good baseline: MSVC v143, CMake tools, Windows 11 SDK, Clang tools (if you don't want/need these you can shave off 3.5 GB). `zig cc` is another interesting alternative, it's just a single ~62 MB download.
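The Build Tools bootstrapper can also be driven entirely from the command line, roughly like this (workload/component IDs as documented by Microsoft; adjust to taste):

    REM MSVC + CMake + Windows SDK, no IDE:
    vs_BuildTools.exe --passive --norestart ^
        --add Microsoft.VisualStudio.Workload.VCTools ^
        --includeRecommended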
`msbuild` is what I used back in the day. It seems to still exist, but I have no idea if it is still the preferred solution. I just use WSL 2 these days, and live with the extra IO latency.
> To build anything native one has to install Windows (20+Gb in modern version and then Visual Studio 40+Gb). Not so easy to fit it all on an SSD drive.
Maybe a decade ago, but SSDs are so much cheaper per GB nowadays. Around $0.11/GB. Pretty sure that's cheaper than the first 1TB HDD I've owned.
But you don't have to develop on a MacBook. So you intentionally chose a platform known for being really expensive, and now your main complaint is that it's too expensive?
I don't really think that the toolchain is to blame for that one.
Can you please tell that to my IT department which refused to issue me a Linux laptop for 1 month, then took two months to put the order in... I had to get an executive (President of R&D) at the company to harangue IT... I still haven't seen it, apparently they are trying to install the standard "employee spyware" package on it.
If your IT department is an overt detriment to your development effort, send a letter to your company’s upper management and leave. This is the only way to push for change in these moronic companies, and you keep your sanity.
No? There's a lot of other amazing things about the company beyond having 15% development drag? IT is not that important in the grand scheme of things.
Jesus. I have an executive level officer in the company that has my back and you're telling me to quit? Utterly horrible advice.
I do need to develop on a Macbook. The hardware is just so much better than anything else out there. This is my personal opinion of course, but I would rather stop being a software developer than to develop software using a different laptop available on the market today.
Yeah, given the sheer cpu/ram/m.2 power of what even a tiny NUC provides, with modular, replaceable monitor(s)/keyboard/mouse/USB(3/C/Thunderbolt)/HDMI etc, for less than a Mac, I am unable to understand the appeal of a laptop. And I use a i7/64GB/1TB NUC for traveling! I use a twice as powerful Ryzen rack server for my office dev box. 5m extension cables so the noise is in a closet.
I have no problems with external drives. Is this a Mac thing too?
32 years ago as a fresh out I got a government job with an SGI box with an enormous monitor. I stare in wonder at the future, right now. It sure doesn't look like happiness.
I got the cheapest M1 Mac Mini when it came out. I got two m.2 NVME drives hooked up to it (1.92TB & 1TB) and a 1TB SATA SSD and a 64GB microSD card along with the built in 250GB drive.
No issues. Never have ram issues either with the 8GB. I can compile stuff. Code. Have tabs in my browser. Do stuff in Logic. Edit video. Emulate old computers and systems for games. Dosbox. Parsec to my Windows desktop. Do pretty much anything. Got it hooked up to my 55" 4K HDR TV. Bluetooth Logitech keyboard with touchpad. It's great!
8GB ram, only Apple could pull that off. What is it, 2008?
My work stack is tight on 32GB (yeah, it is what it is), so that simply wouldn't fly for me. 8GB, especially with all the memory the OS uses on macOS - there's just no way.
I live in a small apartment with my partner who also works from home. We don’t have enough space to dedicate to two desks. We also travel a lot, so I need to be able to take my work with me.
It is a weird thing to me that a minimal impact lifestyle[0] requires a very constrained and expensive computer ecosystem to facilitate unrealized happiness.
We have a small, nearly off the grid cottage (septic, water from the lake, no roads, etc.). It has an internet connection via a Ubiquiti antenna. Not enough room for two desks there either.
That’s fine. The criteria people have for being software engineers is different. I just gave my personal opinion. I don’t think there is a right or wrong.
I believe you mean 256GB.[0] Unless you meant to say 14" or 16" MacBook Pro, in which case the base spec is still only 512GB for either of them, not 1TB.[1][2]
For the prices Apple is charging, I wish they would make 1TB the base spec... but Apple Silicon is so good that the machines sell like hotcakes anyways.
Hum, I think it’s reasonable to only consider the 14” and 16” variants the latest-generation models. But I guess I misremembered the base spec. Looking at the link, it’s because you get 1TB if you get the model with the full 8 performance cores. But that model is quite a bit more expensive than the base model.
> > Xcode also has this problem now. 8 GB for Xcode 7 is manageable, but why 70 GB for Xcode 11?
> A 1tb M.2 drive is what... £70?
To be contextually fair, Apple charges $400, and the storage isn't replaceable. Who runs their editor off of an external drive? I imagine the number is very small.
Although, on my machine, Xcode only seems to be taking up 17GB, not 70GB.
Microsoft's days as a 'primary' desktop operating system are numbered. The OP doesn't seem to understand this, but even Microsoft itself does, which is why most of its major moves the past few years haven't been improving its own desktop, but positioning itself to control "open source" the best it can.
More people within the MS ecosystem should understand this.
One can argue that desktops are no longer the primary computing environment, but I don't think there's any evidence that Windows is about to be displaced within the desktop world.
What are you proposing will become the primary desktop OS? Some Linux flavor of the month? MacOS? Neither seems at all likely in the foreseeable future.
There won't be a "primary" and people will perhaps not pay much attention to which it is, any more than they pay attention to e.g. what Javascript framework their favorite site is using.
I've come to realize while there may never be a "year of the Linux Desktop," there will likely be "years of Linux Desktops" - e.g. Ubuntu, but then also SteamOS, ChromeOS, maybe Android if you're being REALLY inclusive, etc.
I agree with everything but the "controlling open source" bit. I think what they are actually doing is trying to create new platforms they can control, because that's a great business model. Open source, via Github is just a small piece of the pie they want.
Think MS teams/Office365, Linkedin, Gaming, Github, etc. These are platforms they are interested in controlling and deriving revenue from.
I don't think what you and I are saying conflict from a business point-of-view, but I think also it would be naive to "wave away" the incentive for MS to continue EEEing.
If you toggle that response to "All Respondents" instead of "Professional Developers" Windows goes up to 45%. Plus, 3% of users are using windows subsystem for Linux. So 48%. Whereas Mac and Linux are both at 25%. Even for developers, Windows is still the clear majority.
Interesting. So 50% Unix and 48% Windows (and 3% are targeting Unix anyway). That's quite a swing over the past 20 years - there was a time when, aside from the occasional web developer with a Mac, everyone was on Windows. Leaving out Linux, the interesting part has been how much market Apple has chiseled away.
Mobile is almost all IOS and Android... and mostly but not all devs use Macs due to xcode.
The only places that Windows seems to persist are developer markets where tooling choices are limited to Windows (games, medical, embedded) or corporate standards dictate tooling.
> Mobile is almost all IOS and Android... and mostly but not all devs use Macs due to xcode.
Indeed and trying to use UNIX for mobile apps won't bring you far.
> The only places that Windows seems to persist are developer markets where tooling choices are limited to Windows (games, medical, embedded) or corporate standards dictate tooling.
The remaining 99% of the market where UNIX GUIs don't matter.
At this point Windows as a primary development platform is essentially dead. With a few exceptions (DX) nearly everyone writes code for Unix/Linux and then cross-compiles for Windows.
>I'm still blown away that they literally made the .NET team revert the dotnet watch PR at the last minute, so that they could sell it as a feature.
This is not even true. Due to limited resources they wanted to cut scope of what the .NET 6 update included and they thought that it would be okay to delay the dotnet watch feature because most people would still have hot reloading from using Visual Studio.
The PR for bringing back the hot reload code was merged 3 days after the PR for removing it.
It is absolutely true. The CLI-first `dotnet watch` was pretty much done (I was using it in preview for months beforehand), and it was cut to promote hot reload as a Visual Studio feature.
Source: I'm the person who raised the initial GitHub issue about the dotnet watch change, and I talked to multiple Developer Division employees off the record. It was a deeply unpopular move that came from the top (Julia Liuson).
So it's the public word of someone from the .NET team versus your unnamed sources. It sounds like there was a communication issue, as the reason given in the blog post for removing it wasn't to drive sales of Visual Studio. It also sounds like if their team had more resources it wouldn't have been cut, which invalidates the point of them using it to promote VS.
That post by Scott Hunter was a way to save face after Liuson backed down. It is absolutely not truthful about the original reason for removing dotnet watch, and was widely criticized as such at the time. Please trust me on this; the whole dotnet watch drama consumed my life for a good week when it happened.
>That post by Scott Hunter was a way to save face after Liuson backed down
That disappoints me. I would prefer if they would just be honest and say that for the time being they would only support it in Visual Studio.
>to endorse a very critical take at odds with the public one
Interestingly enough the article mentions that Microsoft has been underfunding Omnisharp and VSCode has poor C# support. The first link you gave talks about how Microsoft is now working with Omnisharp to improve C# support.
It is. I generate the Windows executables for a program by cross-compiling Rust on Linux. Then test with Wine. The Rust crates are cross-platform enough that I don't have to special case platforms. This is easier than having a Windows machine.
The only current headache is that there's no pure Rust bundler, to make ".msi" or ".msix" installer files.
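A minimal sketch of that kind of build-and-test loop (the binary name is a placeholder, and the mingw-w64 linker has to be installed from your distro):

    rustup target add x86_64-pc-windows-gnu
    cargo build --release --target x86_64-pc-windows-gnu
    wine target/x86_64-pc-windows-gnu/release/myapp.exe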
If you dump the legacy OS stuff, it gets easier.
> The only current headache is that there's no pure Rust bundler, to make ".msi" or ".msix" installer files.
I have set up a toolchain based on Wix, running via Mono and Wine on Linux, for the purpose of packaging some rather complex .msi files in a commercial software project. Has been running on the Linux-based CI servers of that project for 10 years straight now, with the only change having been that the entire chain has been packaged into a Docker container a few years ago when that became fashionable.
WiX is great. I have been using it for over a decade to package an internal application. It's all driven by a simple text file which I liked over Microsoft's packaging offerings.
The Windows Installer XML Toolset (WiX) is owned by Microsoft.
It's not. It's an open source project (aka owned by no one) that started at Microsoft. The person who created WiX left Microsoft to create Firegiant, which is the primary contributor to its continued development and maintenance.
Open source does not mean "owned by no one". It means the owner or owners have licenced their intellectual property to you under the terms of an oss license.
It's another example where the free tools are better than Microsoft's.
Except it's also Microsoft's.
Which offerings? Are there any?
I only know about other proprietary non-MS solutions like InstallShield, which also generate .msi files but use their own proprietary way to define the installation process, with the benefit/disadvantage (depending on what you want to do specifically) of being on a higher abstraction level than WiX, which is more or less a direct resemblance of the internal Windows Installer data structures.
> I have set up a toolchain based on Wix, running via Mono and Wine on Linux, for the purpose of packaging some rather complex .msi files in a commercial software project.
That’s exactly how I rigged up building our MSIs at Tailscale.
NSIS (Nullsoft Scriptable Install System)[1] can be compiled[2] for Linux if that's any help. I use that to prepare the install .exe package for a Java based program for one of my clients.
[1] https://nsis.sourceforge.io/Main_Page
[2] E.g.: https://aur.archlinux.org/packages/nsis / https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=nsis
NSIS is not a substitute. It's good for shipping to home users, but msi installers are essential for enterprise software distribution.
That's what you think.
In my enterprise, a big one, I shipped only NSIS installers. Much easier to use, and better user experience also.
I've worked at several enterprise software publishers, and they all used NSIS to package and distribute their software to enterprise end users.
You can use .exe installers as well.
It is only a bit more work
https://activedirectorypro.com/deploy-software-exe-using-gro...
> Then test with Wine.
I tried doing that, and then found that my test suite passes on real Windows but fails on Wine, because it uses APIs which Wine doesn't implement correctly (or at all).
Which APIs are those? A bug filed at https://bugs.winehq.org/ might get some attention.
Waiting on this one.[1]
It's rather impressive that the whole Rend3->WGPU->Vulkan chain for 3D graphics works under Wine, because that's all bleeding-edge stuff. It's only that full screen won't work. Wine reports "fullscreen true stub!", so it's something not implemented yet, rather than something broken.
[1] https://bugs.winehq.org/show_bug.cgi?id=53115
I used to do this using mingw on a raspberry pi for course work that needed Windows exes in college.
The official dev tools are really frustrating (except the debugger; I like gdb, but it kind of sucks with C++).
I also cross compile and remote debug windows software on Linux (C and Rust programs). Using gdbserver --multi (extended remote mode) is quite comfortable.
The biggest advantage is not having to use different tools for different target platforms.
Unfortunately MacOS is more difficult target than Windows. If anyone has tips for compiling and debugging software of MacOS from a Linux host, please do share.
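For the Windows side of that workflow, a rough sketch of the extended-remote setup (host name, port, and file names are placeholders):

    # Cross-compile with debug info on the Linux host:
    x86_64-w64-mingw32-gcc -g -O0 -o app.exe app.c

    # On the Windows box (or under Wine), run: gdbserver --multi :2345
    # Then drive it from the Linux host with a cross/multiarch gdb:
    gdb-multiarch app.exe \
        -ex 'target extended-remote winbox:2345' \
        -ex 'set remote exec-file app.exe' \
        -ex run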
There is the Julia package https://github.com/JuliaPackaging/BinaryBuilder.jl which creates an environment that fakes being another, but with the correct compilers and SDKs . It's used to build all the binary dependencies
For compiling, I'm pretty happy with https://github.com/tpoechtrager/osxcross.
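For anyone curious, once osxcross is set up with a packaged macOS SDK, cross-compiling looks something like this (the exact wrapper names depend on the SDK version you packaged):

    # osxcross installs target-prefixed clang wrappers, e.g.:
    o64-clang++ -std=c++17 -o app app.cpp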
For ardour we use QEMU VMs, running GUI-free for non-interactive builds, and with a GUI for ... interaction. Easy to create, and as long as you have lots of memory and cores, nice and responsive/fast. Set a startup item in the non-interactive VMs so that a minute or two after they come up, they start the build. In our case, they upload to ardour.org when complete. The interactive ones are just .. well, they're just macs running in their own window, do what you want with them.
The linked post actually does not mention any specific problem compiling binaries for Windows.
What did not work for you specifically?
I did a similar thing, but I only ever compiled windows versions using Github actions.
Good news is winget will soon support single binaries so that is one option.
Can you link to info about this? I can't find much in a quick Google search
https://github.com/microsoft/winget-cli/issues/182#issuecomm...
Doesn't cargo-bundle do that? I haven't tried to use it to generate .msi files, only for iOS/macOS, but it has the feature!
Cargo-bundle was supposed to do that, but only the Mac part was completed. Nobody ever did the .msi output. I've been trying to get someone interested in doing that. That task needs someone very familiar with the Microsoft ecosystem, which I am not. Discussion in Rust tools forum here.[1]
[1] https://internals.rust-lang.org/t/cross-platform-bundling/16...
I wonder if InnoSetup could be a good point of reference for anyone interested. It is in Delphi though, but there is also this guide I found whilst trying to figure out if you could run Inno Setup on Linux:
https://gist.github.com/amake/3e7194e5e61d0e1850bba144797fd7...
Inno Setup was the best and easiest thing for me to ever use, but that was when I was building Windows installers from a Windows OS.
Update:
Found this Cargo package rust-msi:
https://github.com/mdsteele/rust-msi
> Found this Cargo package rust-msi.
I looked at that crate. That lets you read and write .msi files, but those are just containers. It doesn't help you set up the rather complicated contents required. Someone who's into the Windows ecosystem could probably use it to do the limited things cargo-bundler does. The neat thing about cargo-bundle is that it only needs the info from the Cargo.toml file to drive the bundling process. Most Windows installer builders involve manually constructing Windows-specific XML files.
Ah I see. Honestly, if I knew Rust better I wouldn't mind taking a stab at the project, but I am cozy with dotnet currently. I love building systems tools but very little work I apply for does just that. It is a shame companies don't invest more into R&D.
The Rust part of this isn't that hard. cargo bundler already has a place where the .msi output generator is supposed to go. The reading of the cargo.toml file and the writing in .msi format is already written. It's just generating the Microsoft-specific content for the .msi file that's hard. That takes Microsoft platform expertise.
Counterpoint for this rant (showing it's again not very objective, and just what your or your environment's expectations are): it's easier to cross-compile binaries for Linux (using Clang and a sysroot on Windows) than it is to natively compile on Linux (not using a sysroot, as that is the 'default' flow there) if your environment is mostly Windows already. We do this for one of our products, in fact.
The post meanwhile seems to be part about open source policy (nasty, but not a technical issue - also including a new-to-me and hard to find on a Google or DDG search fact about the VS gallery endpoints having led to legal threats), part issues induced by 'weird' modern languages not caring to support Windows (as expected?), part Git for Windows not bothering to handle symbolic links cleanly (and the weird admin-only default thing that remained from Vista), and part... concerns where if people would spread the proper way of doing stuff (curl.exe bundled by default for downloads, or long file names being 'weird' - albeit less broken than claimed here) instead of just ranting this'd be fine too... but not really a coherent whole.
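For reference, the Clang-plus-sysroot direction mentioned above is roughly a target triple plus a sysroot copied from the target distro (a sketch; the sysroot path is illustrative):

    clang --target=x86_64-unknown-linux-gnu --sysroot=C:\sysroots\ubuntu20 ^
        -fuse-ld=lld -o app app.c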
My counterpoint is simpler: I use some of these tools (like VS Code) because they're better, not because they're open source.
Also, I don't think Pylance or whatever core extensions being closed source contradicts the fact that the majority of VS Code (or VSCodium) is open source. And I don't see anything morally wrong with them wanting to close-source some of their competitive products.
It may not be intuitive, but tools tend to be better because they are open source.
My personal problem with Pylance or the ssh tools is that they don't work with VS Code-based forks. That means that VS Code is not as open source as it tries to look, and that is suspicious to me.
The ssh tool is incredibly slow and resource intensive and defaults to losing unsaved work on disconnect. An open source version would open up the possibility of a user fixing some of these issues.
> tools tend to be better because they are open source
This is not true in general. It isn't even true for programming tools. Paid/non-OSS IDEs (JetBrains suite, Visual Studio) are often better in many ways than their FOSS equivalents (Eclipse, Qt Creator, etc).
Other examples: office programs (MS Office vs LibreOffice), digital AV workstations (Adobe suite, Da Vinci Resolve vs Kdenlive, ShotCut, etc), 3D editors (Blender is the one exception here, but Maya is still damn good)...
I didn't mean that all OSS tools are better than proprietary/commercial ones, that is obviously not true. I mean that making a tool OSS usually makes it better.
Only if you want to be trendchasing rather than letting backwards compatibility take care of itself...
I'm a native Win32 developer, have been one for a few decades, and know quite a few others still using MSVC6 because it's fast and enough for what they do. Takes <250MB of disk space, and the self-contained binaries it generates will work on any version of Windows starting at Win95.
Long file paths: Azure, OpenSearch, and ~90 other open source projects have to document how to enable long file paths on Windows because the default is the ~260 character MAX_PATH limit.
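The dance those projects document is usually some variant of this (a sketch; note the application itself also has to declare longPathAware in its manifest for the Win32 APIs to honor it):

    REM System-wide opt-in (Windows 10 1607+), from an elevated prompt:
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" ^
        /v LongPathsEnabled /t REG_DWORD /d 1 /f

    REM Git additionally has its own switch:
    git config --global core.longpaths true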
Personally, I think 260 is long enough and plenty to work with, while at the same time discouraging the ridiculous verboseness that seems to have crept into "modern" software. Then again, I stay away from .NET, VSCode, and the like. I am reminded of this post by Raymond Chen:
https://devblogs.microsoft.com/oldnewthing/20070301-00/?p=27...
My issue with the 260 character limit is that it can easily be reached through normal usage, and can lead to some really strange behavior.
* File cannot be moved into a subfolder.
* Folder can be renamed, after which a file contained within it is inaccessible.
* File on a network drive may be accessible by some users, but not others. For example, one user may access the drive with the longer network name, while another has mapped the network drive as Z:\. A filename may exceed the 260 character limit for the first user, but not for the second.
* Cannot delete a folder, because it contains a file with a long name.
My experience may be biased due to using a program that recorded metadata in a filename, taking the majority of the 260 characters available in just the filename, but with the number of failure modes, I still don't think it is reasonable to have such a small limit.
Exactly. I believe NTFS has supported ~32,767-char paths (65,535 bytes) since, forever? But last I checked, even simple apps like Win10 Notepad will fail to open them.
What is the point of the registry/gpedit setting to enable long paths system-wide -- when so much of Windows doesn't support it? File Explorer is really showing its age.
Has any of this improved in Win11?
Yikes, 260 character path names can be a real pain. Please support sensible path names or at least tell me why you can't read or write to a file. Often you just get a "can't write to that file" error message, or worse, "forbidden" so you spend an hour trying to debug the mess that is folder permissions on windows.
I'm not going to convince you to change your tool chain, userbinator, but for the sake of the discussion: Once you have multiple projects going on, with multiple components, and then those components have a small directory structure themselves, you can easily reach 260 characters. Add to that, if there's data coming from another org, a long file name can be very helpful to keep track of what it is (and don't forget 10-12 characters for a date!). And finally, the nail in the coffin: most users don't think about path names; I struggle to get people to not put periods in their filenames, which messes with some tooling, so how am I going to convince the guy in finance who gave me this data that he should use short file names? Should I modify the file name and make it untraceable?
ETA: On the "if you have to ask you've messed up" point: I don't ask, I expect, and then get annoyed when it broke. I had 10,000 files collected into a folder. Why can every other program tell me the list of files in an instant, but Windows Explorer crashes (the whole desktop environment) because I opened that folder to see it? I'm not meant to do that? Then why can the kernel, the disk, the file system, and all other programs handle it with ease?
The 260 path limit is a pain for organizing media. It's not surprising most applications/games store their media in giant blobs instead of just individual files. This makes software updates a pain since it requires users to download the entire blob if you can't diff the blobs or if the diff corrupts the blob due to mismatched implementation versions.
Blob-based game assets are mostly a system motivated by consoles, then reused on PC to keep the build process consistent.
You'll see older PC-centric game engines use loose files. With the advent of SSDs on consoles we might see a return to the bunch-of-files approach, as it keeps patches smaller.
Steam and EGS updating handles binary diffing wonderfully. While consoles, not so much. Specifics are NDA of course but I bet any game dev reading will know who I am talking about when I say: "platform x has a horrible diffing algo yet requires approvals for updates over size Y". If I could ship a loose filesystem and get reasonable load times I would just for the update package advantage.
> Mostly blob based game assets is a system motivated by consoles, then reused on pc to keep build process consistent.
No, packing individual files into archives makes sense on PCs too, due to the native filesystem usually having a much bigger per-file storage overhead as well as non-negligible open() times - the second is especially true for Windows with "Anti Virus" software installed.
> You'll see older pc centric game engines use loose files.
Some maybe, but packing game assets into archives is as old as Doom and has been the norm all this time.
> Steam and EGS updating handles binary diffing wonderfully.
Steam handles binary diffing now, but it was not that long ago that it re-downloaded the whole changed file.
Giant blobs are a much faster solution than thousands of individual files; NTFS per-file overhead is significant.
Not enough. Between path limits and "this file is in use by another program" nonsense I regularly ran into while developing on Windows, I switched away and never looked back.
That cuts both ways: a Windows developer trying to port a project to Linux would be justified in concluding the filesystem locking semantics are horribly broken and non-existent.
I came from windows and found fewer problems when working on Linux. The problem with deleting files is something everybody will run into at some point. I don’t think what you describe is as common. I have never had it or heard it described as a problem when working in an organization which made software running on Linux and Windows.
The path separator, drive letter and text file line endings get me. But back when I switched to Linux for development it was the very slow file searches on Windows. I am fine with either macOS or Linux. Not windows.
> I'm a native Win32 developer, have been one for a few decades, and know quite a few others still using MSVC6
Doesn't that tie you to a really old C runtime, and aren't security holes often found in the C runtime?
It links dynamically to msvcrt.dll which is updated as part of the os. I believe it still gets updates as long as it doesn't break the ABI. Modern vc++ links with the universal CRT which is an independently updated msvcrt distribution.
Note: https://devblogs.microsoft.com/oldnewthing/20140411-00/?p=12...
Of course he has to toe the company line. But the truth is far simpler.
Similar problem to Java applications directly accessing internal JDK packages, and then breaking once access was restricted.
msvcrt.lib includes things other than direct imports to msvcrt.dll; bugs in that code are going to be linked into your binary at compile time.
A quick Google check seems to yield only one CVE in msvcrt.dll, in 2012. I wouldn't call this "often", especially for a DLL which is likely among the top 5 most used in the world.
Both static and dynamic linking to a newer runtime (as they are versioned) will break backwards compatibility. For newer OS features you need, you can always GetProcAddress() the needed functions at runtime instead. The real downside to staying on VC6 is all the compilers' ancient optimizations, or lack thereof.
It's incredibly fitting that all the links on that post are 403 Forbidden
Microsoft has never really "got" the idea of URLs. The only links to Microsoft stuff you should expect to still work properly in a year from learning them are in the form of an aka.ms alias which is publicly advertised.
Anything else, maybe it's a brief article explaining a technology you care about, next week somebody replaces that with a video blog of some people who don't really know much about it but are sure they'll become world experts over the next months and years - and you're invited on their journey, then the blog becomes a wiki, then it becomes an exciting new user-led forum, and then... it's a 404 because they were re-assigned to a different project and all knowledge was destroyed.
Raymond used to be just one of hundreds, maybe thousands of Microsoft bloggers. Then one day Microsoft decided blogging was not on brand and it just blowtorched all the blogs except a few very popular ones like The Old New Thing.
> Microsoft has never really "got" the idea of URLs
I immediately thought of Teams links.
At least make another URL shortener like teams.ms and pass the URL through it before giving it to me.
I wonder if it's possible to create a file in Windows named after an MS Teams link, as it's so long.
Is it so difficult to generate a short UUID?
Zoom does it with numbers.
Google Meet does it better with hyphen separated 10 letters so you can even read it out or just remember for 5 seconds before typing it elsewhere.
And I've found aka.ms links, referenced in a current Microsoft help page (KB article?), that were broken...
And even there they deleted all the comments.
That's Microsoft being (the new, not the old) Microsoft... they seriously fucked up the MSDN documentation when it was migrated to docs.microsoft.com, so I'm not surprised that a similar thing happened to the blogs:
https://news.ycombinator.com/item?id=20351358
they knew they would and they advertised that fact, and made blog and content owners migrate their stuff if they wanted it kept. many chose not to. those are the people you should be angry with.
Many of the people were no longer at Microsoft when the migration happened, but their blogs were no less useful for that.
And some weren't even alive. Michael Kaplan and his amazing blog about internationalization and Unicode comes to mind - thankfully, other people have archived it: http://archives.miloush.net/michkap/archive/
Or they could've just migrated it for them, and kept everything running.
I'm told that MSDN blog posts were owned by the authors, not Microsoft.
it wasn't Microsoft's content to move.
movement to docs.microsoft.com meant giving copyright to Microsoft, and not everyone chose to do that.
that's quite a take.. expecting previous article writers to manually migrate their stuff is just as good as killing it.
the blog authors knew the content was their own responsibility going in.
"quite a take" f u
1) the URLs are simply missing a "b/" in the path to map to saved copies in the Web Archive, and 2) the comment system switched to requiring JS around 2011 so only comments from before then archived properly (>.>) but the threads are all from 2003-2006 so theoretically there are probably no new comments anyway.
nesting windows more than 50 levels deep: https://web.archive.org/web/20110623211503/http://blogs.msdn...
nesting menus more than 25 levels deep: https://web.archive.org/web/20110623211552/http://blogs.msdn...
creating a dialog box with more than 65535 controls: https://web.archive.org/web/20110623211515/http://blogs.msdn...
the maximum number of threads a process can create: https://web.archive.org/web/20110628235212/http://blogs.msdn...
the maximum length of a command line: https://web.archive.org/web/20110707074737/http://blogs.msdn...
the maximum size of an environment block: https://web.archive.org/web/20110623211524/http://blogs.msdn...
the maximum amount of data you can store in the registry: https://web.archive.org/web/20070302025026/http://support.mi... (the live link http://support.microsoft.com/kb/256986 still works perfectly, so here's the view as of ~the post date of the article)
if you have to ask, you can’t afford it: https://web.archive.org/web/20080607185547/http://listserv.l...
> Only if you want to be trendchasing
Cross-platform compiling is not a trend, it is one of the only two sane solutions to supporting customers.
The other is browser-based SaaS. But now you have two problems, to paraphrase Zawinski.
> Only if you want to be trendchasing rather than letting backwards compatibility take care of itself...
> I'm a native Win32 developer, have been one for a few decades, and know quite a few others still using MSVC6 because it's fast and enough for what they do. Takes <250MB of disk space, and the self-contained binaries it generates will work on any version of Windows starting at Win95.
I share your view about the unmatched backwards compatibility of Win32 binaries, but I wouldn't let a 24-year old compiler like MSVC6 near any new project. We are talking about a compiler here that doesn't even support the C++98 standard, let alone all the basic features for writing safe software (stack cookies, _s APIs, smart pointers - just to name a few).
When I needed reliable self-contained binaries and backwards compatibility, I switched to VS2019's clang-cl compiler and ported their libc++. Together with winpthreads from the mingw-w64 project, this enabled me to write software using even C++20 features, but still maintain compatibility down to Windows NT 4 from 1996. If you're interested, it's all documented here: https://colinfinck.de/posts/targeting-25-years-of-windows-wi...
That's really cool, was wondering how hard it would be to make libc++ play nice with XP - currently I still use MSVC 2017 for those builds but that won't get newer language features and I am planning to move to Clang anyway. Do you know of any effort to go even further back and support pre-NT Windows with a modern C++ compiler?
It's depressing how fast and responsive MSVC6 runs on today's hardware compared with the current version of Visual Studio.
It started after they ported the UI to WPF. Compare VS2008 and VS2010 if you have the time. I remember using both on a cheap laptop when VS2010 first came out, and the two were simply incomparable.
Yes, absolutely true. vs08 was their last fast ide.
Dang I don't want to even imagine the amount of bugs and how bad the codegen is compared to newer compilers.
> Only if you want to be trendchasing [...] > still using MSVC6
That's some strange logic - you expect someone wanting to compile things for windows to somehow discover a 24-year-old compiler using some older version of C/C++ and to conclude that, despite modern norms, it still works, is still legally available and still produces working binaries.
And if they don't somehow glean all of the above, you say they're trendchasing, rather than just not knowing and using something two-decades obscure and possibly illegal.
The post mentions CRLF line endings; I don't understand why anyone should bother with them. Use \n only - it works everywhere, including Windows. Any modern IDE can handle Unix line endings. The same goes for file paths: use the forward slash (/) everywhere and don't waste your time supporting DOS-era standards.
I think the core.autocrlf option in Git is more harmful than useful (for example, it can change file hashes and break something) and should be removed. An option to warn about unwanted \r characters on commit might be useful though.
> I think the core.autocrlf option in Git is more harmful than useful
Nowadays, I am willing to agree with you. It breaks more things than fixes them.
Agreed.
Then change your editors and IDEs to save files with LF line endings (preferably with something like .editorconfig) and you'll have fewer problems.

Good post (I couldn't quite follow the open source angle tbh), but for the "why doesn't Developer Mode make Windows behave more like Linux" part: this would probably disgruntle a lot of 'native' Windows developers.
Windows was always its separate island, and at least in the past it had a lot of developer mindshare (so more like a whole continent than an island), and from the point of view of those Windows devs, Linux is the odd one out ;)
One common solution is to use scripting languages for build automation that try to hide most of the differences (e.g. Python or Node.js instead of Powershell vs Batch vs Bash), and most importantly there's simply no way around testing on at least Windows, macOS and Linux.
Mozilla’s automated builds of Firefox for Windows, macOS, Linux, and Android are all cross-compiled on Linux VMs. Cross-compiling is faster and cheaper, especially because Windows’ file I/O and process launching is so slow.
> Windows’ file I/O and process launching is so slow.
It's not Windows' fault, rather it's NTFS.
There's some documentation on this in some WSL issues on GitHub, but it's not just NTFS. It's stuff like the broader filesystem architecture's inclusion of pluggable 'filters' (kinda neat, but each layer of them incurs a performance cost), or the way commands on other operating systems depend on caches for certain syscalls that Windows doesn't keep, or anything equivalent to them.
Too tired to go find them atm
This one? I remember it as an earnest description of the difficulties the WSL team had with the speed of NTFS - and I think it was one of the reasons for the switch to virtualisation in WSL2.
<https://github.com/Microsoft/WSL/issues/873#issuecomment-425...>
Yeah, that's the GitHub comment, thank you!
My takeaway from that comment is that there are some important performance issues that apply generally to all filesystems on Windows. Maybe we can partially test whether that's the case by playing with WSL1 on ReFS or ExFAT (if that's even supported, given its limited permissions support), or ZFS once OpenZFS on Windows stabilizes a bit.
NTFS is Windows' fault.
https://m.youtube.com/watch?v=qbKGw8MQ0i8
Rather it's their baroque ACL permission system, which is not able to cache inherited perms.
The ACL permission system is in NTFS. For example FAT32 doesn't have it.
Yes, but if you turn off inheritance perf goes up 10x. It's a policy, not the NTFS driver itself.
Can you provide links providing more info for those statements?
no, just try it out.
All my Cygwin build rules included turning off inherited perms, to be able to compile bigger systems in under an hour. E.g.: https://github.com/rurban/cygwin-rurban/blob/master/release/
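If anyone wants to reproduce that experiment without a Cygwin setup, a sketch with icacls (directory path illustrative):

    REM Stop inheriting ACEs on a build tree (copies the current ones in place):
    icacls C:\build /inheritance:d /t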
> Are derived community works, like say the D-language, Dart, or Zig bindings to the win32 API which are generated from those files open source - if Microsoft did not release them as such?
Certainly. The source for these bindings is the WinMD file, which is MIT licensed. While it is true that its contents are generated wholly from non-FOSS sources, it doesn't impact end users of the metadata. Microsoft owns the original IDL/headers and can choose to license any related work however they like.
Although it's a point of pride that someone did manage to get D understanding the recent Win32 API specification extremely quickly (D is ridiculously good at this kind of metaprogramming), the bindings that actually ship with D at the moment are hand-maintained.
If you read further down the issue comments, it's explained that the generated files don't contain all the info in the idl files.
Now try building a "portable" binary that runs on a version of Linux older than yours.
If you want to build binaries for a distro, build in that distro. If that distro has a Docker image, it's as simple as:
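A sketch of the pattern (image names and the build command are placeholders):

    docker run --rm -v "$PWD:/src" -w /src centos:7 \
        bash -c 'yum install -y gcc make && make'
    docker run --rm -v "$PWD:/src" -w /src ubuntu:22.04 \
        bash -c 'apt-get update && apt-get install -y gcc make && make'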
$dayjob supports distros as old as CentOS 7 and as new as Ubuntu 22.04 this way.

Compiling on one distro and then expecting it to work on another distro is a foolhardy errand. Libraries have different versions, different paths (e.g. /usr/lib vs /usr/lib64 vs /usr/lib/x86_64-linux-gnu/ vs...), and different compile-time configuration (e.g. the openssl engines directory) across distros.
> If you want to build binaries for a distro, build in that distro.
> Compiling on one distro and then expecting it to work on another distro is a foolhardy errand.
This is why Windows, with all its issues, is still relevant in 2022. I got tired of updating my distro and having software stop working. If you stay in the happy walled-garden land of the main repository you're fine. When you need some specialized software or something that is not being maintained anymore, good luck. And at the end of the day, people just want to get their work done.
This is why Windows, with all its bloat, advertising, tracking, and security issues (no-click RCE in 2022, wtf), is STILL going strong.
> Compiling on one distro and then expecting it to work on another distro is a foolhardy errand.
Only if you link random libs you find on the system. The base system libs making up a Linux desktop (glibc, xlib, libasound, libpulse*, libGL, etc.) are all pretty good about maintaining backwards compatibility, so you only need to build against the oldest version you want to support and it will run on all of them. Other libraries you should distribute yourself or even statically link with their symbols hidden. This approach works for many projects.
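A minimal sketch of what "statically link with symbols hidden" can look like with GCC (the library names are placeholders):

  # bundled dep linked statically with its symbols kept out of the dynamic symbol table;
  # only base system libs are linked dynamically
  gcc -o myapp main.c -fvisibility=hidden \
      ./third_party/libfoo.a -Wl,--exclude-libs,ALL \
      -lX11 -lGL -lasound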
CentOS 7 is from 2014. That isn’t very old.
It has a few old things unique to it in our support matrix. It's the only distro with openssl 1.0 so a bunch of API and related things are different. It's also the only distro where systemd socket activation doesn't work if your service has more than one socket, because its version of systemd predates the LISTEN_FDNAMES env var.
Also, it is old enough that it's going out of LTS soon.
CentOS 7 may have been released in 2014, but the software it shipped was already quite old then.
As a datapoint, CentOS Stream 9 [0], which was released in 2021, and which RHEL 9 (released in May 2022) is based on, is already ~60% out of date according to repology: https://repology.org/repository/centos_stream_9.
Also: In computer time, 8 years is "very old". That's longer than the "mainstream support" window for Windows 7 was (from 2009 to 2015), and about as long as the mainstream support window for Windows XP (from 2001 to 2009).
[0]: CentOS "Stream" has a different release model and appears to be a bit of a rolling release as I understand it? But that would cause it to be more up-to-date, not less.
Easy: statically link everything.
I think you meant a GUI application that runs on a version of Linux older than yours.
I tried that with Rust, actually. My Manjaro was running a much newer version of Glibc than my server so I had to deal with that.
First I tried to get it to link to a different system library but no package manager is happy with multiple major versions of glibc.
Then I tried MUSL but it turns out the moment you enable MUSL several OpenSSL packages used in very common dependencies don't compile anymore. There was something about a custom OpenSSL path that I would need to specify and a cert file I'd need to package, but I gave up at that point.
The solution was to use a Docker image of an ancient version of Debian with an old version of glibc to build the file. I have no idea how you'd deal with this crap if your version of glibc is even older or if you don't have glibc, my conclusion was "I guess you can't use the usual HTTP crates then".
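The general shape of that approach, for the curious (image and packages are illustrative, not the exact setup):

  docker run --rm -v "$PWD":/src -w /src debian:buster bash -c '
    apt-get update && apt-get install -y curl gcc pkg-config libssl-dev &&
    curl -sSf https://sh.rustup.rs | sh -s -- -y &&
    . "$HOME/.cargo/env" && cargo build --release'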
Oh, and the "just statically link everything" approach is also terrible for security patches because most developers don't release security patches with the same urgency as library developers do. GnuTLS had some pretty terrible problems a while back that were quickly resolved with an update but the most recent version of some binaries online are still vulnerable because the devs chose to statically link GnuTLS and abandoned the project a while back.
Libraries are an enormous pain point for Linux development and even static linking won't always be there to save you. This is one of the reasons some game developers choose to release a "Linux version" of their game by just packaging Proton/Wine with their executable and making sure their game performs well under the compatibility layer. All the different versions of all the different distributions and all the different flavours are impossible to keep up with.
Linux devs have chosen to ship entire Linux core libraries with their applications in the form of AppImage/Flatpak/Snap/Docker to solve this problem. If static linking solved all problems, Docker wouldn't have taken off like it did.
"Statically link or bundle everything" is how most Windows apps deal with this tho. So if we're comparing Windows and Linux, and saying that Linux packaging is worse, this method can't just be dismissed on security grounds.
Windows developers tend to stuff directories full of DLLs if they need to ship dependencies, they're not statically linked per se.
Regardless, it's incredibly complicated to compare linking behaviour between Windows and Linux. Windows has tons of components built into the API (and which is maintained by Microsoft) which you'd need an external dependency for in Linux. Microsoft provides interfaces for things like XML processing, TLS connections, file system management, sound libraries, video libraries, rendering engines, scripting engines and more. If there's a vulnerability in WinHTTP, Microsoft will patch it; if you statically link curl, you'll have to fix it yourself.
Of course many open source developers will statically link binaries because that way they don't have to write platform specific code themselves, but they only need all those dependencies because your average distro is pretty bare bones if you strip it to its core components. If you write code for Linux, you're not even getting a GUI from your base platform; you'll have to provide your own bindings for either X11 or Wayland!
Most third party Windows software I run is some application code and maybe a few proprietary DLLs that the software authors bought. Sometimes those DLLs are even just reusable components from the vendor themselves. Only when I install cross compiled Linux software do I really see crazy stuff like a full copy of a Linux file system hidden somewhere in a Program Files folder (GTK+ really likes to do that) or a copy of a modified dotnet runtime (some Unity games).
The big exception to the rule, of course, is video games, but even those seem to include fewer and fewer external components.
Development becomes a lot easier when you can just assume that Microsoft will maintain and fix giant frameworks like the .NET Framework and the Visual C++ runtime (basically libc for Windows) for you. Microsoft even solved the problem of running multiple versions of their C runtime on the same machine through some tricky hard linking to get the right dependencies in the right place. As a result, most Windows executables I find in the wild are actually linked dynamically despite the lack of a library management system.
Hence "... or bundle everything". But what difference does it make from the security perspective? If the app ships its own copy of the DLL, it still needs to do security updates for it - the OS won't.
As far as the OS offering more - it's true, but not to the extent you describe. For example, Windows does offer UI (Win32), but most apps use a third-party toolkit on top of that. Windows does offer MSXML, but it's so dated you'd be hard-pressed to find anything written in the past decade that uses it. And so on, and so forth. I just went and did a search for Qt*.DLL in my Program Files, and it looks like there's a dozen apps that bundle it, including even Microsoft's own OneDrive (which has a full set including QML!).
Even with .NET, the most recent version bundled with Windows is .NET Framework 4.8, which is already legacy by this point - if you want .NET 5+, the standard approach is to package it with your app.
And then there's Electron, which is probably the most popular framework for new desktop apps on any platform including Windows - and it is, of course, bundled.
"Some application code and maybe a few proprietary DLLs" is how things were back in 00s, but it hasn't been true for a while now.
.NET Framework 4.8 will start being legacy, when Forms and WPF finally work end to end on Core runtime.
Heck the designer still has issues to render most stuff on .NET 6.
> Windows developers tend to stuff directories full of DLLs if they need to ship dependencies, they're not statically linked per se.
I assure you, there is plenty static going on in Windows land.
Statically link everything? Including glibc?
When trying to statically link everything, glibc is not as much a problem as GPU drivers are if you want binaries that work on more than one system
> Including glibc?
Any reasonable person doing this would use Musl of course.
And then you discover your program can't resolve hostnames because "DNS responses shouldn't be that big".
But honestly, they really shouldn't.
Can you explain how musl is related to this DNS resolving problem?
https://news.ycombinator.com/item?id=28312935
Trick question: glibc doesn't support that.
Trying to statically link with glibc is a fool's errand.
glibc doesn't support static linking, but you can fully static link with another libc if that is what you wanted to do.
Not that it is needed for forward-compatible Linux binaries since newer glibc versions are backwards compatible so dynamically linking the oldest one you want to support works fine. Would be nice if glibc/gcc supported targetting older versions directly without having to have an older copy installed but that is a convenience issue.
AppImage is sufficient.
nope, it still requires at least the glibc version that the appimage was compiled against. neither snaps, flatpaks, nor appimage solve the long-standing glibc versioning issue that plagues portable Linux binaries. the closest I've seen to fixing this issue is https://github.com/wheybags/glibc_version_header
> the closest I've seen to fixing this issue is https://github.com/wheybags/glibc_version_header
Or just compile against an older glibc version. Plenty of tools to setup sysroots for that, e.g. https://wiki.gentoo.org/wiki/Crossdev
Use ‘zig cc’ and target the appropriate glibc.
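e.g. something like `zig cc -target x86_64-linux-gnu.2.28 -o hello hello.c` - the glibc version in the target triple is whatever minimum you want to support (2.28 here is just an example).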
I was sad to find out "Zig supports only the last three macOS versions". No ncdu2 for me :(
https://github.com/ziglang/zig/pull/10232#issue-1064864004
Apple only supports the last three macOS versions too, so if you're on an older one than that you are not getting security vulnerability patches.
One of Zig's claims to fame is how widely supported/cross-platform their build tools are, and I had high hopes. But they don't publicize this limit of their macOS support -- I found out the hard way. I really appreciate how far MacPorts bends over backwards to keep things working.
I love my ancient machine and have a few 32-bit apps I need, though I guess old hardware isn't quite the excuse it used to be.
https://dortania.github.io/OpenCore-Legacy-Patcher/MODELS.ht...
I'm interested in re-evaluating this policy once we get a bit further along in the project. It could work nicely given that we have OS version min/max as part of the target, available to inspect with conditional compilation. We get a lot of requests to support Windows all the way back to XP too, despite Microsoft's long-expired support.
All this scope creep takes development efforts away from reaching 1.0 however. If we had more labor, then we could potentially take on more scope. And if we had more money then we could hire more labor.
So if y'all want Zig to support a wider OS version range, we need some more funding.
Go tell that to the schools I have to ship software to which can't upgrade past 10.13 because their Macs are too old but they don't get enough budget to buy new ones
Maybe they shouldn't have bought Macs in the first place.
anything else is a hard sell for art schools sadly, especially in 2010
Not that hard. Since it's not a use case that most GNU/Linux software needs to be concerned about, it's easy to make mistakes and resources are scarce, but once you know what you're doing it's usually not a big deal (except maybe for some specific edge cases). There's lots of old games on Steam that still work on modern distros and new games that work on older ones (and, of course, there's a lot of them that are broken too - but these days it takes only a few clicks to simply run them in a container and call it a day).
It's very hard. Incompatible glibc ABIs make this nigh impossible; there's a reason Steam installs the Visual C++ redistributable for pretty much every game on Windows. On Linux, Steam distributes an entire runtime environment based on an ancient Ubuntu release exactly to circumvent this problem.
Look no further than the hoops you need jump through to distribute a Linux binary on PyPI [1]. Despite tons of engineering effort, and lots of hoop jumping from packagers, getting a non-trivial binary to run across all distros is still considered functionally impossible.
[1]: https://github.com/pypa/manylinux
Steam distributes an entire runtime environment because it's a platform that's being targeted by 3rd party developers, who often make mistakes that the Steam Runtime tries quite hard to reconcile. When all you care about is your own app, you're free to compile things however you want and bundle whatever you want with it, at which point it's honestly not that hard. Building in a container is a breeze, the tooling is top notch, images with old distros are one command away, testing is easy; in practice I had much more trouble compiling stuff for Windows than for old GNU/Linux distros.
It's easy to compile an entirely new binary for every platform/distro, and it's easy to bundle an entire execution environment along with a single binary using docker, what's hard is compiling a single binary and have it run across all distros and execution environments.
It really isn't that hard to get a single binary that works across glibc-based distros. Just compile against the oldest version of glibc/Xlib/etc. you want to support and bundle all non-system libraries, statically linking them with symbols hidden if possible.
We are not talking out of our ass here - I do this myself for all the software I maintain and it just works. Tons of programs are released this way.
"Across all distros" - sure, that's outright impossible, and for a good reason. "Across all distros that matter" - no, it's not. How do you think Electron apps distributed in binary form work fine across various distributions?
I totally agree, it's practically impossible. Is it a philosophical thing or a technological thing? GPL is about source code and going against that will never make much progress.
It is almost always possible to do some relatively simple hacks to make old stuff work, though (LD_PRELOAD, binfmt_misc/qemu, chroot/docker).
Isn't that simply a matter of targeting an older glibc? I am probably missing something though.
Yes, "simply". It's a very fun process. Around 100 times more fun than the onerous
Anyone who talks nonchalantly about glibc hasn't had their time eaten up by glibc.
Like it’s absolutely a nightmare but you can eliminate a lot of problems by building on an ancient version of RHEL.
It says a lot about Linux development that for cases like these "just install the Linux equivalent of Windows XP in a container and run the tools inside that" is an accepted solution.
It's a solution that works well and is used by loads of developers, but it's still comically silly.
There are other approaches like https://github.com/wheybags/glibc_version_header or sysroots with older glibc, e.g. https://wiki.gentoo.org/wiki/Crossdev - you don't need your whole XP, just the system libs to link against.
Sure, having a nice SDK where you can just specify the minimum version you want to support would be nice, but who do you expect to develop such an SDK? GNU/glibc maintainers? They would rather you ship source. Red Hat / SUSE / Canonical? They want you to target only their distro. Valve? They decided it's easier to just provide an unchanging set of libraries, since they need to support existing games that got things wrong anyway, and they already have a distribution platform to distribute such a base system along with the games without bundling it into every single one.
You can also just cross-compile targeting whatever you want, including RHEL.
I wrote a tool [0] which will take any system and create a cross-compiler toolchain for it, this is what I use to compile for Linux, HP-UX, Solaris, BSD, etc on Linux.
[0] https://build-cc.rkeene.org/
I actually haven’t heard of this approach. Could you explain more or point me to some further reading if you read this and have a moment?
Does having a sysroot with an older glibc count?
https://wiki.gentoo.org/wiki/Crossdev
Compile in an old Debian version docker container.
ok: `zig build -Dtarget=native-native-musl -Drelease-fast=true`
This is portable to CPUs with the same features running older linuxes. To be portable to CPUs with fewer features you should specify the CPU family with `-Dcpu=foo_bar` which seems to be the equivalent of `-march=foo-bar`.
Compile with Musl inside Alpine container.
The most stable Linux ABI is Win32 =)
Are symlinks on Windows really such a big issue? I’ve never heard of projects using symlinks in their tree. If Windows developers are complaining about you doing something unusual, why is it their fault?
The higher level thing to notice is that Windows is a second class platform for much of the software world. As such, it is in a position where it needs to bend the knee for compatibility with the dominant platform if they want it to be an active player in the larger ecosystem. In the case of symlinks, the hard part is already done, to the point where it is a toggle that has already been implemented, the UX of enabling it is just bad.
For game developers and GUI applications it is first class.
Apparently being the best in symlinks hasn't made a difference in the Year of Desktop Linux.
Hard links and directory junctions have been possible in Windows since Windows XP, and symbolic links since the release of Windows Vista. In fact, hard links are an essential part of Microsoft's solution to the DLL hell problem without wasting gigabytes of space.
The problem people are running into is that the OS hasn't been designed with wild symlinks in mind and therefore can't guarantee its security principles if any user is able to symlink however they please; if I read the context [1] correctly, it seems like an elevation of privilege is suspected to be very easy to gain if a standard-level user is allowed to place arbitrary soft links on a file system.
I see little reason for Microsoft to enable the "all users can soft link" setting by default. They'd need to audit their OS and userland code to determine where soft links may introduce vulnerabilities in order to change the default, and a few developers who absolutely insist on using soft links inside git repos for some obscure reason aren't going to get them to make such an effort.
Microsoft has made enabling the feature a bit easier [2], but I can find very little about the security analysis done for this change, so enabling dev mode might open your computer up to a whole class of vulnerabilities.
I personally don't see a reason to use symlinks in a dev environment, but I suppose *nix developers think otherwise, probably to duplicate files across a source repository for some reason? If your intent is to work together cross platform then there are loads of restrictions you need to deal with. Linux applications tend to trip up over CRLF, every file system has their own stupid restrictions, build tools and shell scripts need to somehow become platform agnostic, you name it.
You can blame Windows for being different but the truth is that Windows is still the most commonly used desktop operating system in the world by a huge margin. macOS and Linux are the odd ones out and there is no good reason why the POSIX/X11 system design is better or worse than Cocoa or Win32; it's a mere difference of convention.
[1]: https://docs.microsoft.com/en-us/windows/security/threat-pro...
[2]: https://blogs.windows.com/windowsdeveloper/2016/12/02/symlin...
Even earlier, the Technet tools for it were introduced in Windows 2000.
Certainly better to avoid it if at all possible, and I'll be eliminating them specifically to improve Windows compatibility.
It's not that unusual, though, a quick search turns up 1.1k GitHub repositories with `git config core.symlinks true` in their documentation or CI pipelines - including quite popular projects like Ava, Apache Arrow, Solana, Chrome Devtools, adobe-fonts, IBM/houdinID, travis-ci, RabbitMQ, various Google projects & more.
I agree that Windows could do more to support the developers on other platforms. Symlinks just seem like a petty thing to complain about IMHO. Didn’t realize how common they were though.
NTFS does support them, however. Is Git just not supporting them properly?
Well, it feels 'petty' and simple until:
1. You run into it, go "AH, wow, okay, I'll enable Developer Mode"
2. It still doesn't work, you're confused, you Google around a bit more and find out actually you also need to use a Git config option
3. You retry, now the symlinks work, but your compiler fails because "Destination Path Too Long", huh, that's weird
4. You google a bit more, and discover you need to set a registry option. So you do it, but you still get that error..
5. You discover there's another Git option you need to set
And then you're like, hey, what was I doing again? Where did the last hour of my time go? It's death by a thousand paper cuts.
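For reference (from memory, so double-check before relying on it), the knobs in question are roughly: `git config core.symlinks true`, the registry value `HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\LongPathsEnabled` set to 1, and `git config core.longpaths true` for that last Git option.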
Yak shaving.
NTFS has supported them for ages. The main issue is that they're locked behind elevated permissions or having developer mode enabled (as of Windows 10.)
My understanding is Microsoft's concern is that applications and OS components not expecting them could lead to security issues. Not sure how real that concern is, but that's the excuse I've heard.
>Not sure how real that concern is, but that's the excuse I've heard.
Buffer overrun, I guess? I haven't programmed Windows in years, but there's plenty of code I've seen that is pretty much
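something like this hypothetical fragment (the function name and the CreateFileA call are purely illustrative):

  /* hypothetical shape of the pattern, not from any real codebase */
  #include <windows.h>
  #include <string.h>

  void open_user_path(const char *userPath) {
      char path[MAX_PATH];
      strcpy(path, userPath);  /* no bounds check: anything longer than MAX_PATH overflows */
      HANDLE h = CreateFileA(path, GENERIC_READ, 0, NULL, OPEN_EXISTING, 0, NULL);
      if (h != INVALID_HANDLE_VALUE) CloseHandle(h);
  }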
from there you put in a large path and then you get your RCE.
That would be for removing the 260 character limit, not for enabling symlinks.
I've looked before, and have never found a definitive answer from Microsoft. I've heard some speculation that the nebulous security issue would be a program that checks the permissions of a symlink's target, then opens the symlink. An attacker could then modify the symlink after the permission check but before the file is opened, escalating access by pointing it to a restricted file.
It's a kind of attack called a symlink race, which is also possible on other operating systems. There are kernel parameters for hardening against symlink races on Linux, and they just restrict following symlinks in world-writable locations. I'm not sure why Windows can't use a less invasive mitigation like that, but I guess there must be a reason.
A lot more tends to be world-writable in Windows, for starters.
It does support them, mklink.
https://docs.microsoft.com/en-us/windows-server/administrati...
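Basic usage, for anyone who hasn't seen it: `mklink link target` creates a file symlink and `mklink /D link target` a directory symlink - from an elevated cmd, or with Developer Mode / the relevant privilege enabled.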
Yes, Windows does, but this post is complaining about support. Which led me to ask if Git is the problem. Cause Git also supports symlinks, but is clearly having issues on Windows.
I would assume the issue is that they need to disable symlink support by default on Windows because symlink support is disabled by default on Windows.
That's exactly it. Windows developers know NTFS has Symbolic Links https://docs.microsoft.com/en-us/windows/win32/fileio/creati...
What's the standard for being an "actual Windows user"?
Asking because I doubt e.g. my mom knows that NTFS has symlinks.
Is she a Windows developer?
https://docs.microsoft.com/en-us/windows-server/administrati...
The developers who complained to me about this are regular Windows developers.
One is even a professional AAA DirectX gamedev.
Apparently not professional enough to know a feature as old as Windows 2000.
Egads, if the parent post wasn't edited, then I misread it. Sorry about the confusion.
It was edited.
> What's the standard for being an "actual Windows user"?
I think you misread that line.
Their message originally read "Actual Windows developers know NTFS has Symbolic Links"
So it did say "developers" and not "users"?
I think they misread the line.
(Whether it was right or not, so you don't need to tell me anything about that.)
Would anyone like to explain their downvotes?
From my point of view, even when someone is on the correct 'side' of an argument, if they got there by mistake it's still important to point that out. Both fortran77 and CoastalCoder can be wrong at the same time.
Is "I think you misread that line" far ruder than I thought it was?
...surely "I doubt my mom knows" wasn't supposed to be a developer anecdote that I misunderstood massively?
Maybe I missed something in this discussion, but symlinks and NTFS are not really related. Yes, NTFS has them; how they're handled is a different matter.
How are they not related? If NTFS didn't have symlinks we wouldn't be having this discussion. Critically, the modern standard for removable drives, exFAT does not support symlinks, so you can't count on Windows' support for symlinks if the user is git cloning on a drive that's using exFAT.
https://docs.microsoft.com/en-us/windows-server/administrati...
Pointing to this page is meaningless. `mklink /d` only works on NTFS but errors out ("The device does not support symbolic links.") on exFAT. GP acknowledges NTFS has symlinks and remarks that exFAT doesn't.
NTFS is the only filesystem that matters for Windows developers.
No, they're not really. And IIS is not that bad either. My main gripe with Windows, and why I won't develop on it anymore, is simply that the system is too complex. I suppose this is the gripe of the article author as well, tbh.
Windows is horribly hard to learn. In a day, I learned how to create files and directories, change their attributes, run diff and patch, and do weird string manipulations with grep and cut (and learned to commit and push with git). That was my first day on a UNIX machine. First work week on a Windows machine (by then I had already built an LFS): we had these two projects with two different versions of (proprietary JS front-end framework) and (weird PHP framework). They didn't use the same version of mycrypt.dll (among others, but that's the one I will remember for at least two dozen years). With the support of non-intern engineers, including a Windows DBA, it took us a week to get the two different versions installed.
A week prior, i was linking to a previous version of Ruby for my RoR app (it was around 2013).
But since then, I really respect Windows sysadmins; they are the best of us. I will just never, ever want to work on a Windows server again. I like learning, but putting in that much effort for so little reward? Not worth it.
You don't have to enable Developer Mode to enable symlink creation. Administrators have the right to by default and non-administrator users can get this by enabling the documented policy:
https://docs.microsoft.com/en-us/windows/security/threat-pro...
Developer Mode is just a bundle of useful developer-related settings and components that get applied to the device when enabled.
There's a post here about this from the Developer Mode angle with a bit more background:
https://blogs.windows.com/windowsdeveloper/2016/12/02/symlin...
Projects don’t use symlinks because they aren’t supported on windows.
https://docs.microsoft.com/en-us/windows-server/administrati...
The command won't help when you don't have the permissions.
Or do you just want it to be phrased "not supported for users on normally configured systems"? There, then.
Well, time to learn about /J flag.
Even if we pretend that's a symlink, it only works for directories, not files.
We moved from "Windows doesn't do it", to "it does, but it isn't available to non-admins", to "it is, but non-admins can't do files", ...
Yep.
We started at "not supported", talking about making symlinks inside projects.
We backed off to "not supported on normal systems", which is a reasonable change to me.
You mentioning junctions is not even close to a solution, and I am not moving the goalposts. If junctions worked, the complaint wouldn't have been needed from the start. No, this is about actual symlinks, not a similar feature that can do 5% of what symlinks can do.
On UNIX, no one complains about using sudo all the time.
On Windows: oh bummer, you need special permissions to be able to call it.
The feature is there and supported.
People would absolutely complain if you needed sudo to check out a repo.
curl | sudo sh is very fashionable nowadays
Nope, since XP you can use mklink, and before that there were Technet utilities for the same purpose.
However using symlinks just isn't a thing in the Windows culture, people use at most Shell shortcuts.
It’s funny because I find compiling and deploying for Windows far, far, far easier than Linux. That said, the complaints are all good and fair.
This what is called in Cognitive Behavioral Therapy "All or Nothing Thinking." Microsoft is a for-profit company. It is pro open-source when it helps them and closed source also when it helps them. I like the new Microsoft which has open sourced a lot of stuff. It's way better than the old closed source regime.
Oh, you and the article's author are either forgiving or seeing the best in people. Me, on the other hand, having lived through Microsoft's shady dealings in Bill's / Ballmer's era, I remember them crushing OEMs to not allow even a shred of openness in their systems. Only because developers shifted in favor of Apple and Google did Microsoft change its tune to "Microsoft loves Linux". Bleah! I have no love for Microsoft despite my work involving their products 90% of my time. I am just realistic: they just love money, nothing else. And if money means open source, guess what?! They bought GitHub and they own everything that is published there. That's why plenty left GitHub and either started their own venture or went to GitLab.
Sorry for my rant, I had to get it off my chest.
Microsoft always had great developer tech. And having access to that tech more freely (as in on other platforms, partially open source etc) is a significant plus for me.
I loved .net back in 2003 or whatever. Then I never got to use it for years because I was on unix land. It is now open source and multi platform.
They attacked open anything with all their might (and they were fucking mighty) and the enemy still flourished.
Let them come again. If the new strategy is to pump high quality software out in the open, I hope it becomes a long and bloody war.
I just built an app in Blazor, which is Microsoft's React alternative, except you code everything in C# instead of JavaScript. Once I understood it I found myself able to write and debug code very quickly.
> Microsoft always had great developer tech.
when, exactly?
in the 90s-2000s they were always a poor imitation of Borland's
and from the 2010s to today JetBrain's tools blow them out of the water
Typescript is ranked number 4 according to Github statistics in 2022[0].
VSCode is also very popular. I couldn't find any good statistics comparing different text editors, but atom is sunsetting this year, and VSCode continues to grow as some anecdotal evidence.
Visual Studio is still a great IDE for C++ imo. CLion is the only other big IDE for C++ that I've had suggested often.
C# is ranked #9 in github usage[0].
I would consider these "great developer tech", and their popularity seems to confirm that.
Edit: DirectX is also a pretty big deal since afaik it's the only way you can make an Xbox game. I don't have enough experience to say whether DX is great developer tech or not though. I often hear how horrible Metal is more often than I hear negativity about DX, but that's just my experience.
[0] https://madnight.github.io/githut/#/pull_requests/2022/1
see, I buy popular
but great? absolutely not
and as someone who uses both: VS without ReSharper is still a decade behind JetBrains' IDEs
Only for Java, for C++, .NET and GPGPU programming? Not really.
And even Java it is debatable, since JetBrains would rather sell CLion licenses than support some of the stuff Netbeans and Eclipse have been doing for 15 years regarding mixing Java and C++ development on the same source base.
When is IntelliJ going to have an incremental Java compiler, by the way?
Gates and Ballmer are long gone and it's no longer the same company. If they only loved money, they wouldn't be amenable to their workers unionizing.
"CWA, Microsoft Announce Labor Neutrality Agreement"
https://cwa-union.org/news/releases/cwa-microsoft-announce-l...
Sorry to be this way, but I'm refreshing my memory about CBT today:
Over-generalization: Make a comprehensive, negative conclusion that is beyond the current situation.
Labeling and Mis-labeling: The extreme form of overgeneralization. Use fixed, comprehensive, and emotional language to label yourself or others.
Disqualifying the Positive: Unreasonably believe that positive experiences, behaviors or qualities do not count.
Jumping to Conclusions: Making a conclusion before having all the evidence.
Emotional Reasoning: Draw conclusions from your own feelings, because what I think is what the facts are.
Feel compelled to repost
https://www.hanselman.com/blog/microsoft-killed-my-pappy
yeah the acceptance of oss is cool but they still ship a lot of weird software and technology... which is fine assuming you can avoid it... but sometimes you can't and then it becomes torturous.
A big reason for that is their commitment to compatibility with past software. They kind of have to, because they have people all over the world using old Microsoft software. They finally were just able to kill IE6 and Japan is screaming about it. So they ship a lot of weird software and technology, in your words. Google just kills things when they get bored with them. Microsoft can't do that.
that's the classic excuse. but even azure, which was built from the ground up, is pretty weird compared to its major competitors.
also, apple showed us how to do this. build the right thing for today and use virtualization to support yesterday until it can be updated. all software is living these days anyhow.
I currently own an iPad which is useless to me because a bunch of apps won't load on it because it's old, including GMail. I have better luck with my old hardware running Windows software.
Apple is worse in that regard. Is XCode open-source? Other than Darwin Kernel, is any part of macOS open-source? Did Apple even try to make Swift cross-platform? Is iCloud really comparable to OneDrive? How about Apple's locked and closed-source bootloaders?
Apple gets close to zero usage on desktops outside of the Silly Valley, has almost zero presence on servers, and it's trivial to avoid it (I haven't used their products for years (and very little before that), and personally know only a single macOS user). Windows — not so much. So when they put out yet another crappy technology, you may get tarnished one way or another.
I’m surprised Qt hasn’t been mentioned yet for c/c++. You can get qtcreator and mingw baked in for the cost of a couple GB, and it “just works” even if you aren’t using the qt framework. I much prefer qtcreator to visual studio anyways. I suppose this breaks down for the use cases that aren’t c/c++ though.
Sadly Qt ships MinGW 8.1 which is positively ancient (released in 2018). If you're starting a new project (which you likely are if you are installing an IDE aha) there's no reason not to go for more recent compilers - msys2 has GCC12 (https://packages.msys2.org/package/mingw-w64-x86_64-gcc) and Clang 14 (https://packages.msys2.org/package/mingw-w64-x86_64-clang) which just work better overall, have much more complete C++20 support, have less bugs, better compile times (especially clang with the various PCH options that appeared in the last few versions), better static analysis, etc.
Personally I use https://github.com/mstorsjo/llvm-mingw's releases directly which does not require MSYS but that's because I recompile all my libraries with specific options - if the MSYS libs as they are built are good for you there's no reason not to use them.
Typical UNIX FOSS rant, not taking into account all the development workflows on Windows.
It doesn't, no. If you are only working with libraries designed for Windows or are linking against binaries, most of the dev issues listed disappear. The point of the article is that the Windows push for open source introduces friction because the way of doing things with established OSS projects is so different.
Not every OS has to be a UNIX clone, we don't need an OS monoculture.
You don't need to implement POSIX in order to not bundle your header files in a 9 GB "SDK" download, which isn't available as a direct URL download and can't be installed without user interaction via the GUI.
This is a Microsoft issue, not a Windows issue.
If MS packaged just the header files, they'd probably get tons of complaints about compilers failing due to missing libraries, missing dependencies, and toolsets not being available.
The Windows SDK is not just a few C++ headers and a bunch of lib files to link against. It has a huge surface area. The documentation and toolsets assumes all API calls from Windows 3.1 APIs to UWP to be available.
The SDK download (ISO format) is 1.1GB in size, requiring a total of 4GB of disk size to install (whether this includes the size of the installer itself is unclear). Big, but not unavoidably so, and you can pick and choose some features. It bundles debugging tools, the application identifier, a certification kit and MSI generation tools along with its headers (which seems fair to me); less than 1GB of extra kit on top of 1.8GB of headers and libraries you probably want as a Windows dev anyway. Just the headers won't leave you with a working dev environment even if you bring your own debugger.
Unlike what some developers seem to think, you don't actually need to download Visual Studio to get the SDK, you can also download it separately from the website [1]. Pick the ISO version and extract the CAB files yourself if you want to manually pick and choose your files.
The SDK ships as an installer but that just makes sense. Ubuntu ships their headers in DEB files as well, for example. You want to be able to add and remove these packages as you upgrade or downgrade your target API levels without having to manually set up a file system hierarchy.
As for non-GUI interaction: `WinSDKSetup.exe /quiet /ceip off /features DesktopCPPx64` will install only the necessary headers and libraries for x64 C(++) development in the default location without sharing data with Microsoft. Found this command line with `wine WinSDKSetup.exe /?`. You can also add, repair, and uninstall packages with the same installer.
[1]: https://developer.microsoft.com/en-us/windows/downloads/sdk-...
That 9 GB SDK has support for all kinds of Windows development workloads, versus plain CLI and initd daemons.
Also I advise to learn about the headless installation flags for Microsoft products.
I mostly agree on the VSCode part. They should put it up front in the README, homepage and maybe a info modal/badge whenever a user installs said extension.
But genuinely asking, how else would you 'label' VSCode?
"A free and open-source code editor with some optional components that are proprietary"?
Same as Google Chrome. No one claims Google Chrome is open source nowadays.
Apple is worse in that regard. Is XCode open-source? Other than Darwin Kernel, is any part of macOS open-source? Did Apple even try to make Swift cross-platform? Is iCloud really comparable to OneDrive? How about Apple's locked and closed-source bootloaders?
Swift itself works on most platforms. The problem with using Swift cross platform is that Apple only bothered to make their GUI toolkit available for their platform in the same way Microsoft only makes their dotnet GUI platforms available for Windows (without resorting to "official" replacements that are owned by the same company but not built into the system, like Xamarin).
As a consequence, many Swift libraries only focus on macOS, just like many dotnet libraries focus on Windows (though recent efforts have improved that situation). You can probably get a lot of them working on Linux as well and if you use Swift for command line tools or web applications. I suppose you can probably run the most important tools cross platform, but the non-native ecosystem is clearly a second class citizen.
The big difference between XCode and VSCode is that Apple doesn't claim XCode is open source; also, XCode is more comparable to Visual Studio than VSCode in terms of SDK integration and preconfigured tooling.
Huge parts of the Darwin kernel are actually publicly accessible while Microsoft only provides kernel sources under NDA in things like education projects. Unless you count the WinXP source code leak, that is.
C# is sort-of mostly open source-ish except that debugger features are closed and the community has little say in its development.
Apple did in fact put effort into making Swift cross platform, as outlined here[1]; though their intention is that Swift programs use the system runtime on macOS/iOS/iPadOS, they put the effort into making a base layer freely available for other operating systems to gain some portability.
I'd personally argue that Apple and Microsoft are similarly if not equally open in their development, but Microsoft advertises itself much more "open source" than Apple. Apple's approach of "you can look but you can't touch" is a lot more explicit and their supposed openness comes up in fewer marketing materials.
[1]: https://github.com/apple/swift-corelibs-foundation
I don't get why people like VSCode. I was a happy Atom user for years and found it to be superior to VSCode in almost every way.
I agree that symlinks cause more trouble than they're worth on any platform. Sadly, only Node really likes to use them, to optimise the package store.
Personally, I don't have major issues compiling things on Windows for other platforms (macOS, Win32, Android, Linux). The only awkward thing is designing UIs.
I don't think that's what they are saying. I think they are saying it's stupid that Microsoft requires hoops for such a basic feature, hence "Fix Windows" being option 1.
Such a basic feature? If the OS dedups a file by putting in a symlink behind the scenes, no one needs to know a symlink (or something else) exists. That makes for a better user experience. There are fewer concepts to absorb and fewer errors to be made, as the file system won't have any cycles.
Symlinks are a premature optimization in my book, or worse, bad design. On the user level, shortcuts are often the better solution, because they allow arguments to be passed in on the command-line.
This story is really about the arrogance that is as abundant as it is inappropriate in linux world. Never have I heard of dependency hell or broken systems on Android, yet geriatric linux always needs the help of an administrator.
The only place where linux still has some merit is on the server, where its classical but aging mainframe concepts still have some life left in them.
I don't agree that symlinks are a basic feature. It's a source of a lot of pain for me on my Mac
I presume most MS managers were in engineering roles earlier. It doesn't make sense to characterize the same kind of individuals as crushing merely after a role change. My hunch is they always had the instinct to crush but didn't have the power earlier.
And slash. They got it backwards.
/ is used both in writing and in Maths. \ is a character used almost exclusively in programming.
From a practical point of view, I find \ to be a much more suitable directory separator than /, for the same reason I think Microsoft's choice for blacklisting characters like ? and : from file names is silly. There are real world use cases [1] for adding the / to file names so it shouldn't be excluded!
Microsoft has used the / for command switches since its inception, based on the way the DEC TOPS-10 (1970) used linker flags; with / already taken, they chose the next best thing, which is perfectly fine. When Unix came around a year after TOPS-10 they used a forward slash for directories for some reason, but there's no way one is better than the other.
For what it's worth, Windows accepts forward slashes. Since Windows 1.0, actually, all the way back in 1985. Try it for yourself in your browser[2], open notepad.exe and save a file in A:/test.txt. Your path separator may be represented differently, but / works perfectly fine.
Fun fact: in some locales (Japanese, for example) your path separator isn't even rendered as a backslash; the path separator is still 0x5c, but that code point is displayed as the Yen symbol in Japanese locales of Windows. In Korean locales it'll show up as the Won symbol, and you'll probably find other path separator glyphs in other locales dating back to the console code page days.
[1]: https://answers.microsoft.com/en-us/windows/forum/all/forwar...
[2]: https://www.pcjs.org/software/pcx86/sys/windows/1.01/
Which is because not only Windows, but DOS itself supported it at least as far back as DOS 2.0 when they added sub-directory support.
There was even an option (in CONFIG.SYS) to alter the 'switch character', which then also caused a lot of command line tools to accept '/' for paths. That option was eventually removed from the config file, but the underlying API retained until quite late. Maybe it was WinME's version of DOS which disabled the API.
Despite all of that, the handle based Int 21 file APIs always supported being passed '/' as a path separator, and it was often accepted by some apps.
DOS based 'C' source code often used "#include <some\\path\\file.h>", however many compilers also simply accepted "#include <some/path/file.h>", probably just as an artifact of the obvious implementation. This wasn't well known, so lots of DOS based 'C' source used '\\' in includes, plus also at the file API level.
I used to use MXE [1] to compile fully static Windows binaries on Linux VMs hosted with Travis. It needed to crane in everything though, so it was a source of bottlenecks from time to time. I was also uncertain about the provenance of a lot of the dependencies in that toolchain. So when Travis died I took the opportunity to move Windows builds back to gnu with msys2, all over GH Actions. These are actually comparatively snappy and I’m reasonably satisfied with it.
[1] https://mxe.cc/
Would falsely representing software intended to be proprietary as "open source" in its marketing copy have an adverse impact on the enforceability of its putative license?
The comments here showcase why webapps and Electron/webview shells are so common now. Maybe WebAssembly and its up-and-coming runtimes can finally clean up some of this mess.
I disagree. Cross-compiling has nothing to do with why Electron is common. The browser is the runtime that currently fixes this mess. WebAssembly would need a runtime matching current browser runtimes to be a real contender, and the best choice here is... a browser. So now you have electron but it's WebAssembly, which isn't significant enough to make a dent in this mess.
I use Cygnal for making a Windows version of TXR.
https://www.kylheku.com/cygnal/
You compile in the Cygwin environment where you have all the usual tools. Then shipping the program with cygwin1.dll from the Cygnal project gives it native-like behaviors.
Seriously, I use Windows because I can play video games after coding.
Coding on a Linux machine is 10000x easier than on Windows.
>i could play video games
Have you tried gaming in Proton?
Depends on what you use. To me Visual Studio is the best and most feature-complete IDE on the market and C# one of the best languages.
Every time I program on another system I miss Visual Studio.
The only time I have trouble in Windows is when people coding on a Linux machine don’t use cross platform tools.
Life is too short to spend on fooling with Windows.
Configure autocrlf in what way? I don't want git to change my files, and I don't know what problems they're talking about.
Configure it to respect your preference ('don't change my files') and not pester you with hundreds of warnings any time you interact with Git if your preference differs:
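In concrete terms that's roughly (commands from memory, adjust to taste): `git config --global core.autocrlf false` so Git stops converting line endings, and optionally `git config --global core.safecrlf false` to silence the conversion warnings.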
If you have some developers using Windows and other developers using anything else, then without some kind of line ending normalization you will end up with inconsistent line endings, possibly in the same file, and potentially diffs and commits where every line in a file is changed from one line ending to another.
For me, SDK sizes are a showstopper. MinGW packages are probably 100-200 MB total. To build anything native one has to install Windows (20+ GB in a modern version) and then Visual Studio (40+ GB). Not so easy to fit it all on an SSD.
Xcode also has this problem now. 8 GB for Xcode 7 was manageable. But why 70 GB for Xcode 11?
I'm cross-compiling Mach engine[0] with Zig, it ends up being a quite small toolchain to cross-compile a game engine (using DirectX, Vulkan, OpenGL, and Metal on respective platforms) from any OS. The Zig toolchain is:
Zig provides almost everything that is needed to cross-compile to those same targets out of the box, including libc and some system libraries - except macOS frameworks and some updated DirectX headers/libraries (it provides MinGW ones), which we bundle and ship separately. That's full cross-compilation of WebGPU GUI applications to all desktop platforms in under ~217 MiB for most platforms.
[0] https://machengine.org
You can accomplish similar things by targeting, say, SDL2 with a C compiler. It's a nice solution when you don't need platform-specific stuff, but it's not a general substitute for having the platform SDK, alas. With that said, targeting SDL2 is what I do… but the apps I like to write are games.
Actually, it's better - my work includes everything needed to build:
* GLFW
* Dawn (Chrome's WebGPU implementation)
* The DirectX Shader Compiler (a fork of LLVM)
* Freetype and HarfBuzz
All from source, cross-compiled to every OS. Plus with Zig you can link+sign macOS binaries from Linux and Windows (AFAIK that's not possible with a regular C compiler, but maybe that's changed recently?)
Zig's toolchain is awesome; I was very sad to discover they don't support my comfy, ancient MacBook Air.
I ran into this when trying to test out Rust. On Windows, they recommend using native tools that require a multi-gigabyte download via the Visual Studio installer (and this is just for tools--not Visual Studio, the IDE).
It was annoying for me because part of the reason I wanted to try Rust on Windows was specifically to avoid multi-gigabyte C/C++ toolchain downloads. I bet the actual compiler and linker aren't that big, so I kind of wonder where all those bytes are going...
Aside: I see a sibling comment mentions Zig. While I haven't really explored Zig, the tiny single-binary download was a breath of fresh air. Go also had a quick and easy download, but I wanted a language without a garbage collector.
Try cross-compiling Go apps that use GTK3 if you’d like to spend a few days in intense pain. It’s not there yet. Electron apps are probably the closest thing we have today that actually works correctly on Win / Linux / Mac.
The visual studio build tools are substantially smaller than 40GB. My IDE install is about 25GB, and visual studio has distributed the build tools separately since vs2015 - my toolchains directory is about 4GB including whatever dotnet runtimes, windows GDK.
> Not so easy to fit it all on an SSD drive.
Professionally, no excuse. As an open source or otherwise unpaid pursuit, a 250GB SSD is about $45 on Amazon right now. That more than comfortably fits your 60GB estimate, with plenty of room to spare.
> My IDE install is about 25GB
Would love to see a breakdown of where that space is going. FWIW IntelliJ takes up 2.5GB, and honestly, even THAT seems like a lot to me.
I have no idea sorry! On the Jetbrains front that's interesting. My Intellij folder is closer to 4GB, but that's only for one language. I've also got Rider and Goland installed, and Jetbrains duplicates the entire install per IDE. My Jetbrains folder is 12GB.
> even THAT seems like a lot to me.
That's a little silly - what is an acceptable amount in that case. The JDK on its own is about 700MB (that's a guesstimate based on last time I installed it sorry).
We are two wealthy people arguing over rents, with you saying $1M/month is reasonable, I'm saying it's not, I'm only paying $100k/month, meanwhile most of the world is paying on average $100/month.
25GB is enormous. 2.5GB is enormous. Consider that the core value of this software is text editing. Consider that not long ago people were buying PCs with perhaps 10MB of hard disk - or even no hard disk at all (e.g. the Apple IIe). Windows XP and Office 97, for example, were (if memory serves) less than 1 GB total. FoxPro for DOS was something like 4 megabytes - and FoxPro was a form builder plus relational database. Consider that with a thoughtful use of resources and an eye toward minimizing attack surface, you can put a fully functional http/s app server in a 1.9MB package (redbean).
These sizes are silly, and they should give you pause. The space is cheap, yes, but the attack surface is not.
It's not "text editing". It's "developing software" with all the tools that belong to that - from understanding the language, to compiling it, optimizing it, linking it and carrying necessary documentation and support libraries.
Don't be reductive - just because your linux hides those costs directly in /usr and /var it doesn't mean those things don't exist.
It's not a text editor. It's an IDE, with built in development environments, SDKs, deployment tools. Debug symbols for native binaries are often 2-4x the size of the binaries themselves.
> Consider that not long ago people were buying PCs with perhaps 10MB of hard disk - or even no hard disk at all
Not that long ago in history, but an absolute eternity ago in computing terms. I have a direct internet connection to my home that is faster than the read write speeds of those computers.
> Consider that with a thoughtful use of resources and an eye toward minimizing attack surface
Attack surfaces have changed significantly since people were buying 10MB hard drives - you cannot write applications with the same security considerations from that time.
> you can put a fully functional http/s app server in a 1.9MB package (redbean). The space is cheap, yes, but the attack surface is not.
Firstly, redbean is the absolute extreme example of minimalism and portability. It's not "normal", it's an incredible feat of engineering frankly. Nginx isn't much bigger (~5MB) and caddy is bigger but still small (30MB). The big difference between these and IDEs is that web servers don't provide client interfaces. For user facing tools they rely on web browsers to render HTML and interpret JS, so to make a comparison it's only fair to compare redbean + Chrome to a Windows SDK install, for example.
It's interesting how people assume that their world is everyone else's...
It's easy to forget that there are billions of people for whom $45 is a massive investment, and that SSD isn't so conveniently available even if they have the money.
I know I got into programming on a mix of graphing calculators and thrown out PCs, and I also distinctly remember having to work around the download sizes of tooling because I was using really crappy internet.
I don't think it's unreasonable for the poster to wish that they could build useful binaries without 60GBs of downloading and storage...
> it's easy to forget that there are billions of people for whom $45 is a massive investment, and that SSD isn't so conveniently available even if they have the money.
So those people can use whatever hardware they have available to them and not buy the SSD. The parent specifically said it was hard to fit on an SSD, so I assumed they could buy one based on that
> I know I got into programming on a mix of graphing calculators and thrown out PCs, and I also distinctly remember having to work around the download sizes of tooling because I was using really crappy internet.
Graphing calculators and Raspberry Pis (and various other low-power devices) are still widely available for people to learn and experiment with. Internet speeds are still a problem in many places, but at some point the software has to be delivered to you, and as I mentioned previously the actual install sizes are not 60GB, and the downloads are significantly smaller (a Windows 10 ISO fits on a 4GB USB).
> without 60GBs of downloading and storage...
Firstly, it's not 60GB - see my previous post about how much space it actually takes up; it's closer to 30GB. Secondly, if you don't have 30GB of storage of any kind available on a computing device, then sure, that's a real constraint - but anyone running a machine bought in the last 15 years will have that space available to them.
The forest for the trees wouldn't begin to cover half these responses.
Let me say it again: 60, 30, even 10GBs is a lot when there's tooling from the same company that still produces similarly functional binaries and took 342 MB.
You don't need to keep obsessing over "how dare this person with limited resources use an SSD!" - like I said, you run into similar issues just downloading the stuff.
Again, forest for the trees.
-
Instead maybe you can sit back and just ask "why the bloat over time"?
And the reality is likely: "because no one optimized for it", because for them lots of fast storage and internet speed are no problem.
That line of reasoning maybe allows you to see things from a different perspective and break some assumptions about end users.
Isn't that more useful than browbeating some random for not having 30GB free on their SSD?
I’d say the part where the GP specifically said SSD invalidates your point __in this particular situation__, because a person worried about their budget wouldn’t splurge on an SSD. I also, after a cursory search, have found that the cheapest hard drives are around $25 USD. While $20 isn’t nothing, it also isn’t budget-breaking.
... what? You think someone stretching their money for an SSD somehow indicates that they're not worried about their budget?
Here's a hint: if they had the budget and availability they'd just get a bigger SSD and not write that comment.
-
Obviously, due to some aspect of their circumstance, be it availability, cost, download speeds, etc. the size of the toolkit is problematic.
I mean, there's no intrinsic size for a development toolkit, but I don't know anyone who'd call 60GBs of data small when, as others have pointed out, there are older versions of the same Windows toolchains that still do the same job and manage to take a fraction of the space...
What I’m saying is the delta between a very cheap hdd and an ssd is 20 bucks.
This whole assuming everyone is destitute is getting rather tiring.
It's strange that I keep explaining that you don't even have to be destitute for it to be a problem, yet your mind can only latch onto "at the store I have access to, with almost everything one could ever want in stock, the price delta is 'only' $20".
But yeah, seriously woe is you having to momentarily imagine some people are poor or have trouble getting access to tech.
I'm from Ghana so I guess it's not as onerous to imagine people don't live the exact same life I do in the US.
If stretching your budget is so important then enable NTFS compression on your entire drive... I'm storing well over a terabyte of data on my one terabyte drive right now lol
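For anyone tempted to try that, here's a rough sketch of what it looks like from an elevated prompt - the paths are placeholders and the exact flags are from memory, so check `compact /?` first; the denser LZX option only makes sense for files that are read far more often than they're rewritten:

    rem compress an existing tool directory in place with standard NTFS compression
    compact /c /s:"C:\BuildTools"

    rem or use the denser LZX algorithm for rarely-modified binaries (Windows 10+)
    compact /c /exe:lzx /s:"C:\BuildTools"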
There's tooling that mostly avoids this. https://github.com/Jake-Shadle/xwin
This is a utility that fixes a lot of the cross-compiling issues for Windows by giving you a portable, sanely-named, not-massive SDK. It's the same SDK you get when you install MSVC, but it's only a few hundred megs and the names stay consistent even with all of Windows' fucked-up tooling.
The only caveat is that you need to provide your own compiler; in this case clang is often the best option.
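To make that concrete, here's a hedged sketch of how xwin plus clang's lld-link typically slots into a Rust cross-compile from Linux - the output directory and the library sub-paths are from memory, so treat them as placeholders and check xwin's README for the real layout:

    # pull down and unpack just the MSVC CRT + Windows SDK headers/libs (a few hundred MB)
    xwin --accept-license splat --output /opt/xwin

    # .cargo/config.toml - point the msvc target at lld-link and the splatted libs
    [target.x86_64-pc-windows-msvc]
    linker = "lld-link"
    rustflags = [
        "-Lnative=/opt/xwin/crt/lib/x86_64",
        "-Lnative=/opt/xwin/sdk/lib/um/x86_64",
        "-Lnative=/opt/xwin/sdk/lib/ucrt/x86_64",
    ]

After that, `cargo build --target x86_64-pc-windows-msvc` from a Linux host should be able to link without a Visual Studio install at all.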
The other problem with Xcode is that it won’t run on anything except a Mac, and MacOS licensing prevents one from running it on non-Mac hardware or virtualizing it. That’s why services like https://www.macstadium.com/ and AWS mac instances exist, but it’s very annoying to have to work around this just because of dinosaur business models and licensing.
Visual Studio's install size is highly dependent on what you install. The minimum size is only about 800 MB; installing everything is around 120 GB.
Many of the high-level categories, like Web Development, contain features you'll probably never use. If you're concerned about disk space, you can do a custom install and keep it very lean.
That being said, I would still recommend allocating a Windows VM a 120 GB drive.
There are very few things that actually require Visual Studio just to build.
Interesting - how can I install just CMake + MSVC to do command-line builds on Windows? I'm also installing WSL2 + MinGW because it's so much lighter for compilation than the many, many gigabytes of Visual Studio.
Download the Build Tools[0] and choose the appropriate options. I'd personally consider the following a good baseline: MSVC v143, CMake tools, Windows 11 SDK, and the Clang tools (if you don't want/need those you can shave off 3.5 GB); a rough unattended-install sketch is below. `zig cc` is another interesting alternative; it's just a single ~62 MB download.
[0] https://visualstudio.microsoft.com/downloads/#build-tools-fo...
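Something like the following has worked for me in scripted setups; the workload/component IDs and sizes shift between releases, so treat this as a starting point rather than gospel:

    rem unattended install of just the C++ Build Tools
    rem (the VCTools workload pulls in MSVC, CMake tools, and a Windows SDK)
    vs_BuildTools.exe --quiet --wait --norestart --nocache ^
        --add Microsoft.VisualStudio.Workload.VCTools --includeRecommended

    rem then a plain command-line CMake build from a developer prompt
    cmake -S . -B build -G "Visual Studio 17 2022" -A x64
    cmake --build build --config Release

For the `zig cc` route, a C build is a one-liner along the lines of `zig cc -target x86_64-windows-gnu hello.c -o hello.exe`.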
Thanks, I'll give that a try!
`msbuild` is what I used back in the day. It seems to still exist, but I have no idea if it's still the preferred solution. I just use WSL 2 these days, and live with the extra I/O latency.
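In case it's useful, a typical invocation from a developer prompt looked roughly like this back then (the solution name is just a placeholder):

    msbuild MySolution.sln /t:Build /p:Configuration=Release /p:Platform=x64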
> To build anything native one has to install Windows (20+Gb in modern version and then Visual Studio 40+Gb). Not so easy to fit it all on an SSD drive.
Maybe a decade ago, but SSDs are so much cheaper per GB nowadays - around $0.11/GB. Pretty sure that's cheaper than the first 1TB HDD I owned.
Not when you are on Macbook. Don't suggest external drives please ;)
But you don't have to develop on Macbook. So you intentionally chose a platform known for being really expensive and now your main complaint is that it's too expensive?
I don't really think that the toolchain is to blame for that one.
> But you don't have to develop on Macbook
Can you please tell that to my IT department which refused to issue me a Linux laptop for 1 month, then took two months to put the order in... I had to get an executive (President of R&D) at the company to harangue IT... I still haven't seen it, apparently they are trying to install the standard "employee spyware" package on it.
If your IT department is an overt detriment to your development effort, send a letter to your company’s upper management and leave. This is the only way to push for change in these moronic companies, and you keep your sanity.
No? There are a lot of other amazing things about the company beyond a 15% development drag. IT is not that important in the grand scheme of things.
Jesus. I have an executive level officer in the company that has my back and you're telling me to quit? Utterly horrible advice.
I do need to develop on a Macbook. The hardware is just so much better than anything else out there. This is my personal opinion of course, but I would rather stop being a software developer than to develop software using a different laptop available on the market today.
Meanwhile I’d rather serve coffee than be a software developer on any laptop! I can’t fathom developing on anything other than a high-end desktop.
M1 is very impressive. But I hate macOS as much, if not more than, you hate Windows. My Threadripper makes me very happy.
Yeah, given the sheer cpu/ram/m.2 power of what even a tiny NUC provides, with modular, replaceable monitor(s)/keyboard/mouse/USB(3/C/Thunderbolt)/HDMI etc, for less than a Mac, I am unable to understand the appeal of a laptop. And I use a i7/64GB/1TB NUC for traveling! I use a twice as powerful Ryzen rack server for my office dev box. 5m extension cables so the noise is in a closet.
I have no problems with external drives. Is this a Mac thing too?
32 years ago as a fresh out I got a government job with an SGI box with an enormous monitor. I stare in wonder at the future, right now. It sure doesn't look like happiness.
I got the cheapest M1 Mac Mini when it came out. I got two m.2 NVME drives hooked up to it (1.92TB & 1TB) and a 1TB SATA SSD and a 64GB microSD card along with the built in 250GB drive. No issues. Never have ram issues either with the 8GB. I can compile stuff. Code. Have tabs in my browser. Do stuff in Logic. Edit video. Emulate old computers and systems for games. Dosbox. Parsec to my Windows desktop. Do pretty much anything. Got it hooked up to my 55" 4K HDR TV. Bluetooth Logitech keyboard with touchpad. It's great!
8GB ram, only Apple could pull that off. What is it, 2008?
My work stack is tight on 32gb (yeah, it is what it is), that simply wouldn't fly for me. 8gb, especially with all the memory used by the os on MacOs, there's just no way.
I live in a small apartment with my partner who also works from home. We don’t have enough space to dedicate to two desks. We also travel a lot, so I need to be able to take my work with me.
It is a weird thing to me that a minimal impact lifestyle[0] requires a very constrained and expensive computer ecosystem to facilitate unrealized happiness.
[0] ...but with frequent travel?
We have a small, nearly off the grid cottage (septic, water from the lake, no roads, etc.). It has an internet connection via a Ubiquiti antenna. Not enough room for two desks there either.
That’s fine. The criteria people have for being software engineers is different. I just gave my personal opinion. I don’t think there is a right or wrong.
I thought apple computers usually had thunderbolt ports. You can't just connect a thunderbolt/M.2 drive enclosure and mount it as root?
That’s still an external drive
True historically, but this has actually been solved on the latest macs. Base spec for the MacBook Pro is 1TB disk.
> Base spec for the MacBook Pro is 1TB disk.
I believe you mean 256GB.[0] Unless you meant to say 14" or 16" MacBook Pro, in which case the base spec is still only 512GB for either of them, not 1TB.[1][2]
For the prices Apple is charging, I wish they would make 1TB the base spec... but Apple Silicon is so good that the machines sell like hotcakes anyways.
[0]: https://www.apple.com/shop/buy-mac/macbook-pro/13-inch
[1]: https://www.apple.com/shop/buy-mac/macbook-pro/14-inch
[2]: https://www.apple.com/shop/buy-mac/macbook-pro/16-inch
Hum, I think it’s reasonable to count only the 14” and 16” variants as the latest-generation models. But I guess I misremembered the base spec. Looking at the link, it’s because you get 1TB if you get the model with the full 8 performance cores - but that model is quite a bit more expensive than the base model.
MinGW doesn't support anything other than basic UNIX on top of Win32; naturally it takes so little space.
How is that too big for an SSD? My computer is ALL SSDs. What kind of shit drives are you buying?
A 1tb M.2 drive is what... £70?
If that's out of reach im not sure how you're powering your dev machine.
> > Xcode also have this problem now. 8Gb for Xcode 7 is manageable. But why 70Gb for Xcode 11?
> A 1tb M.2 drive is what... £70?
To be contextually fair, Apple charges $400, and the storage isn't replaceable. Who runs their editor off of an external drive? I imagine the number is very small.
Although, on my machine, Xcode only seems to be taking up 17GB, not 70GB.
It’s not bleeding-edge raytracing stuff. “Get better hardware” is far from an appropriate response.
Windows' days as the 'primary' desktop operating system are numbered. The OP doesn't seem to understand this, but even Microsoft does, which is why most of its major moves in the past few years haven't been about improving its own desktop, but about positioning itself to control "open source" as best it can.
More people within the MS ecosystem should understand this.
One can argue that desktops are no longer the primary computing environment, but I don't think there's any evidence that Windows is about to be displaced within the desktop world.
What are you proposing will become the primary desktop OS? Some Linux flavor of the month? MacOS? Neither seems at all likely in the foreseeable future.
There won't be a "primary" and people will perhaps not pay much attention to which it is, any more than they pay attention to e.g. what Javascript framework their favorite site is using.
I've come to realize that while there may never be a "year of the Linux Desktop," there will likely be "years of Linux Desktops" - e.g. Ubuntu, but then also SteamOS, ChromeOS, maybe Android if you're being REALLY inclusive, etc.
I agree with everything but the "controlling open source" bit. I think what they are actually doing is trying to create new platforms they can control, because that's a great business model. Open source, via Github is just a small piece of the pie they want.
Think MS teams/Office365, Linkedin, Gaming, Github, etc. These are platforms they are interested in controlling and deriving revenue from.
I don't think what you and I are saying conflicts from a business point of view, but I also think it would be naive to wave away the incentive for MS to keep EEEing.
Fair enough.
Better check those desktop market share numbers.
I suspect if you restrict it to developers the numbers would look very, very different
on the stackoverflow survey windows is 41.2%: https://insights.stackoverflow.com/survey/2021#most-popular-...
If you toggle that response to "All Respondents" instead of "Professional Developers" Windows goes up to 45%. Plus, 3% of users are using windows subsystem for Linux. So 48%. Whereas Mac and Linux are both at 25%. Even for developers, Windows is still the clear majority.
If you look at desktop os share from tracking general web use rather than surveying people on a dev-focused site windows is 75%.
https://gs.statcounter.com/os-market-share/desktop/worldwide
Interesting. So 50% Unix and 48% Windows (and 3% are targeting Unix anyway). That's quite a swing over the past 20 years - there was a time when, aside from the occasional web developer with a Mac, everyone was on Windows. Leaving Linux aside, the interesting part is how much of the market Apple has chiseled away.
Cloud workloads mean UNIX has effectively won the server room for classical deployments; however, UNIX !== Developer.
There are plenty of other workloads and platforms to target.
Desktop, mobile devices, IoT, medical, game consoles, infotainment, unikernel, serverless....
Mobile is almost all iOS and Android... and most, but not all, devs use Macs due to Xcode.
The only places that Windows seems to persist are developer markets where tooling choices are limited to Windows (games, medical, embedded) or corporate standards dictate tooling.
> Mobile is almost all IOS and Android... and mostly but not all devs use Macs due to xcode.
Indeed and trying to use UNIX for mobile apps won't bring you far.
> The only places that Windows seems to persist are developer markets where tooling choices are limited to Windows (games, medical, embedded) or corporate standards dictate tooling.
The remaining 99% of the market where UNIX GUIs don't matter.
I use macOS to make mobile apps every day. Last I looked it's BSD Unix.
Also, iOS and Android are Unixes too.
How much UNIX do you think powers your beloved Xcode, built on AppKit and Objective-C?
Android uses the Linux kernel, nothing on the userspace is UNIX. Tomorrow it can use Fuchsia's Zircon and no one will notice.
I cannot find any reference to Objective-C, Swift, UI Kit, CoreData, Metal, OpenGL ES,... on POSIX standard specification.
"POSIX has become outdated", page 6
https://www.usenix.org/system/files/login/issues/login_fall1...
it's very interesting to see Windows at less than 50%
20 years ago that would have been 95%+
and its market share for developers is way, way, way less than its share on desktop (which was my point)
Yeah, because the software developed for 80% of the desktop market appears out of thin air, no developers required.
Windows is a legacy operating system, not something from 2022. I don't see any good reason to use it.
Microsoft's own machine learning repo "Project azua" (https://github.com/microsoft/project-azua) runs on JAX. Let that sink in.
At this point Windows as a primary development platform is essentially dead. With a few exceptions (DX) nearly everyone writes code for Unix/Linux and then cross-compiles for Windows.
Windows is the afterthought.
Let that sink in...
> nearly everyone writes code for Unix/Linux and then cross-compiles for Windows
That's a very bold statement that I suspect is far from true.
> With a few exceptions (DX)
'Let me just exclude an entire multi-billion-dollar industry to cherry-pick examples to make my point sound stronger.'
Yeah, no; Windows isn't going away.
That is true in some corners of the industry, but definitely not others.
>I'm still blown away that they literally made the .NET team revert the dotnet watch PR at the last minute, so that they could sell it as a feature.
This is not even true. Due to limited resources they wanted to cut the scope of what the .NET 6 update included, and they thought it would be okay to delay the dotnet watch feature because most people would still have hot reloading through Visual Studio.
The PR for bringing back the hot reload code was merged 3 days after the PR for removing it.
https://devblogs.microsoft.com/dotnet/net-hot-reload-support...
> This is not even true.
It is absolutely true. The CLI-first `dotnet watch` was pretty much done (I was using it in preview for months beforehand), and it was cut to promote hot reload as a Visual Studio feature.
Source: I'm the person who raised the initial GitHub issue about the dotnet watch change, and I talked to multiple Developer Division employees off the record. It was a deeply unpopular move that came from the top (Julia Liuson).
So it's the public word of someone from the .NET team versus your unnamed sources. It sounds like there was a communication issue, as the reason given in the blog post for removing it wasn't to drive sales of Visual Studio. It also sounds like it wouldn't have been cut if the team had more resources, which undercuts the claim that it was pulled to promote VS.
That post by Scott Hunter was a way to save face after Liuson backed down. It is absolutely not truthful about the original reason for removing dotnet watch, and was widely criticized as such at the time. Please trust me on this; the whole dotnet watch drama consumed my life for a good week when it happened.
If you want a source from the .NET team, Miguel de Icaza has since quit Microsoft and is able to be more open about what's going on: https://twitter.com/migueldeicaza/status/1537178691218046976
Even during the dotnet watch drama, he was willing to endorse a very critical take at odds with the public line: https://twitter.com/migueldeicaza/status/1451902388290392073
>That post by Scott Hunter was a way to save face after Liuson backed down
That disappoints me. I would prefer if they would just be honest and say that for the time being they would only support it in Visual Studio.
>to endorse a very critical take at odds with the public line
Interestingly enough, the article mentions that Microsoft has been underfunding OmniSharp and that VSCode has poor C# support. The first link you gave talks about how Microsoft is now working with OmniSharp to improve C# support.