The next step is to isolate the Windows applications: you could use different WINEPREFIXes, but I think the better way is to do it like Android: one "user" per application.
It's not just to prevent applications from reading other applications' files, but also to firewall each application individually.
For example, if you don't want the application you've mapped to user id 1001 to have any networking, use iptables with '-m owner --uid-owner 1001 -j DROP'
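Concretely, a minimal sketch (uid 1001 is just the example id from above; the owner match only works in the OUTPUT chain, so this only covers traffic the app itself originates):

    # drop all outbound traffic generated by processes running as uid 1001
    iptables -A OUTPUT -m owner --uid-owner 1001 -j DROP
    # repeat for IPv6 if the box has it
    ip6tables -A OUTPUT -m owner --uid-owner 1001 -j DROP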
I moved from Windows to Linux a few years ago. I have a few Windows apps I still love (mostly Word and Excel), and thanks to Wine I will always be able to use them.
They are also extremely fast: cold starting Word (or Excel) on my laptop takes less than a second, and they use far less RAM.
Personally, I'd rather purchase a few shrink-wrapped old versions of Office from eBay than bother with LibreOffice, AbiWord or the online version of Office.
> but I think the better way is to do it like android: one "user" per application.
This would help somewhat, assuming you don't run them all in one user's X session. On Linux, some desktop environments have a "switch user" action to start a separate desktop session running as another user on another virtual console. You can switch between them with Control+Alt+F2, etc.
> In case you're not aware, wine prefixes each use their own settings, but are not isolated from one another.
That's a great point!
I'm aware, which is why I recommend instead that Wine apps each be run under a different user id: I don't want any given app to have access to anything it doesn't absolutely need.
> This would help somewhat, assuming you don't run them all in one user's X session
When I start a given Wine app, the script starting it allows that user id to render on my Xwayland display.
It is not as secure as running each on its own X session, but wayland compositors can offer more isolation as needed.
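The launcher ends up being a few lines of shell; something like this sketch (the account name, prefix and EXE path are made up for illustration, and the xhost grant is what lets that uid draw on the display):

    #!/bin/sh
    # allow the dedicated account to connect to the Xwayland server
    xhost +si:localuser:wine-word
    # run the app as that account, inside its own prefix
    sudo -u wine-word env DISPLAY="$DISPLAY" WINEPREFIX=/home/wine-word/prefix \
        wine 'C:\Program Files\Microsoft Office\Office14\WINWORD.EXE'
    # revoke display access once it exits
    xhost -si:localuser:wine-word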
Lutris creates dedicated wine prefixes for the applications/games, so you can use it directly. A lot of apps are also installable with some patches provided by Lutris itself
> It's not just to prevent applications from reading other applications' files, but also to firewall each application individually
Why would one want to prevent applications from reading other applications' files?
We're talking about running desktop applications designed for an OS that isn't built around any concept of application isolation, and for which using a common filesystem is a primary mechanism of interoperability.
> Why would one want to prevent applications from reading other applications' files?
Because I can, and because I don't trust Windows applications to be secure.
Thanks to that, I have no problem running 15-year-old office software: even if I knew it was malicious, I also know there's nothing it can do without network access, without file access, and with resource constraints (so it can't even cause a denial of service, except to itself).
In the worst case, I guess it could try to erase its own files? (but it would then be restored from an image on the next run, and I would keep going)
> I have no problem running 15 year old office software: even if I knew it was malicious, I also know there's nothing it can do without network access, without file access
Great. Except... WTF can you do with an office application that can't read or write files?
I don't have the specific setup archived, but I believe my basis for it was a script included in winetricks at the time, which installed Office 2013 Professional from the offline 2013 ProPlus 32-bit ISO.
WineHQ reports that the installer for 2013 64-bit is "gold", but the apps required a few tweaks and Access sometimes failed.
Generally, the 2013-2016 era seems to work on Wine, judging by the few applications I checked.
Is it just me, or does Wine need a bit more polish? Dialogs and menus are rendered with some weird microscopic font. GDI text rendering seemingly doesn't use font fallbacks, so even something like Scintilla or an ebook reader doesn't quite work under Wine.
Many commonly used Windows fonts are licensed under proprietary terms, preventing their inclusion with Wine.
Winetricks[1] can be used to acquire and install a set of default fonts directly from Microsoft.
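For example (these winetricks verbs exist today; which ones an app actually needs varies):

    # install Microsoft's freely downloadable "core fonts" into the current prefix
    winetricks corefonts
    # some apps also insist on Tahoma specifically
    winetricks tahoma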
Furthermore, Windows font fallback differs substantially from that of Linux and similar systems, which generally utilize Fontconfig and FreeType with font relationships defined in configuration files. In contrast, Windows (and consequently Wine) employs a font linking mechanism[2]. Windows handles font linking natively, whereas Wine requires manual registry configuration[3].
Wine should come with fonts with the same metrics as the proprietary ones. Note that while the font file is copyrighted, the letterforms themselves are free to copy. We already had the DejaVu project recreate equivalents of existing fonts, no reason we can't have the same for the Segoe and Calibri families.
I don't think a new distro is needed. Most commonly used windows apps can be made to work through wine, but the hacks used to make one app work can break others and vice versa. Similarly, everyone needs to play around with settings individually to get things to work. What works on one person's machine might not work on another's, because there's no consistency in, effectively, configuration.
The simplest solution, to me, is to distribute containers (or some other sandbox) with Wine in them, plus the necessary shenanigans to get the Windows program (just the one) working in the container, and just distribute that. Everyone gets the same artifact, and it always works. No more dicking around with Wine settings, because it's baked in for whatever the software is.
Yes, this is tremendously space inefficient, so the next step would be a way of slimming wine down for container usage.
The only real barrier to this system is licensing and software anti-patterns. You might have to do some dark magic to install the software in the container in the first place.
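As a rough sketch of what distributing and running such a container could look like (the image name and bind mounts are hypothetical, and sharing the X11 socket is just the simplest option):

    # built once by whoever packages the app: wine + the app + its tweaks, baked in
    docker build -t office2010-wine .
    # run it like a local program, exposing only the display and a documents folder
    docker run --rm \
        -e DISPLAY="$DISPLAY" \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        -v "$HOME/Documents:/data" \
        office2010-wine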
TIL. I'm gonna check this out. It's good to see that people are already working on this, because it's one of those things that, to me, just makes a lot of sense. You'd think that with all of the layers of abstraction we have nowadays, it should be possible to run software on any underlying system in an ergonomic fashion, even if it's not necessarily efficient.
It's extremely efficient: cold starting Word from an old Office suite is much faster than starting LibreOffice. It also uses less RAM.
A few years ago I purchased a few shrink-wrapped copies of Office on eBay, one for each of the versions Wine claimed to support best, tested them with wine32 and wine64, and concluded the "sweet spot" was Office 2010 in wine32 (that may have changed, as Wine keeps evolving).
Yes, it's 15-year-old software, but it works flawlessly with Unicode xkb symbols! Since it doesn't have any network access, and each app is isolated under a different user id, I don't think it can cause any problem.
And if I can still use vim to do everything I need and take advantage of how it will not surprise me with any unwanted changes, I don't see why I couldn't use, say, an old version of Excel in the same way!
It is interesting seeing Office suites from the 90s and wondering what really needed improving.
Google Docs pioneering “auto saving” in the cloud is the only one I can think of.
A few months ago, I ran out of power (my mistake: I use full-screen apps to avoid distraction, so I didn't realize I was unplugged).
After plugging in and restarting Linux then the ancient version of Word I was using, I got a pleasant surprise: the "autosaved" version of the document I was editing, with nothing lost!
As for LLMs, Excel 2010 may not have been made for AI, but Wine copy/paste and a few scripts work surprisingly well!
Autosave has been part of Excel for ages. I had it enabled back in the early 1990s with the version that was distributed alongside Word 6 as part of Office 4.3 (I don't remember the Excel version number).
Current Excel Autosave when used with ODSP is different -- changes are individually autosaved (change cell, autosave, format table, autosave). They're completely transparent to the end user.
So you're saying you're using Word 2010 and have no problem with files created recently? I find it surprising that a modern Word .docx is compatible with 15-year-old Word.
The modern Word suite's basic format was introduced with Word 2010. So as long as the person who created the doc used only features that were already present in Word 2010, they'd be fine.
Features from later versions will either not show up, or show up as boxes.
The auto-save in Google Docs is undoubtedly better, but it was possible to set an auto-save interval in minutes on Word 6.0 for Windows 3.1.
Back then, Word's auto-save updated the file that you were working on rather than creating a separate backup file. I liked that better, though there might have been a good reason for changing approaches in later versions of Word.
I always liked this idea; but wouldn't you run into issues with file permissions?
And if not, wouldn't that mean that the program in question would have access to all your files anyhow, removing the benefit of isolation?
When I'm using Office, the files come from a shared directory accessible as Z:
I use scripts to automate everything - including allowing wine to use Xwayland (because until I start the application I want, its userid is not allowed to show content on my display)
If you want to try using Wine with different user ids, start with a directory in /tmp like /tmp/wine that is group-writable, with your Windows app's user and your own user belonging to the same group.
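Something along these lines, assuming made-up account and group names:

    # dedicated account for the app, plus a group shared with your own user
    sudo useradd --system --create-home wineapp
    sudo groupadd winedata
    sudo usermod -aG winedata wineapp
    sudo usermod -aG winedata "$USER"
    # group-writable scratch directory both sides can reach
    sudo mkdir -p /tmp/wine
    sudo chgrp winedata /tmp/wine
    sudo chmod 2775 /tmp/wine   # setgid, so new files inherit the group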
I really like the idea of bottles. I wish there was a way to bundle that up into a distro and make it invisible to the user, so I could set up my friends and family with it.
If you change a lot of things about a Linux system, then you're making a new distro.
Half of this incompatibility is because Linux is flexible, anyway. My system is different from your system, and did anyone test on both? If you want a more stable ABI then you need a more stable system.
You wouldn't have to change anything about the underlying system, which is the point. Containers work the same regardless of the underlying system, so they get around the various differences in everyone's individual machine. I use identical containers routinely on Fedora and Ubuntu systems at home without any issue, and I produce containers for RHEL and various other systems at work. Half the point of containers is eliminating the variability between dev systems and deployment.
Rather than everyone having to get the software working on their machine, you would get it working once in the container, and then just distribute that.
Containers work because your kernel is close to identical, and they ship their own copy of everything else, making them bloated and incompatible at a user-mode level (no graphics drivers!). If my kernel were very different from yours (which could be just a couple of kernel options or major versions), I'd need a virtual machine.
You could distribute Wine as a Flatpak platform. Flatpaks are already containers that run the same on all distros. Making a Win32 base that works in this same way using the same tooling would not be difficult.
> Windows application submissions that are using Wine or any submissions that aren't native to Linux desktop and is using some emulation or translation layer will only be accepted if they are submitted officially by upstream with the intention of maintaining it in official capacity.
Although I am curious why; they don't seem to have a general problem with unofficial packages, so I'm not sure why a translation layer makes any difference. It doesn't seem different from happening to use any other runtime (e.g. nothing is said about Java or .NET).
Probably because it's implicitly pirated. No one is sharing windows freeware this way, because there's no demand for it. It'll be MS Office, Photoshop, CAD, etc. -- stuff for which there's still no good OSS alternative, and for which the barrier to entry is high.
It would take a large organization with enough connections to cut through this. You'd probably need to cut a deal so you could distribute their software, and you'd need to provide a mechanism for users to be able to make purchases. Even then, there are various licensing challenges, because you would be distributing the same install, so thousands (or millions) of "installs" would effectively have the same serial or license number.
It's nontrivial, but the basic idea is straightforward and doable. The challenge is how windows software is distributed and licensed, not anything technical.
I am just thinking out loud: wouldn't it be better then to just share reproducible recipes, similar to sharing Dockerfiles? For Wine specifically, it could be as simple as using FROM wine-1.23, as long as we keep the recipes maintained and "pinned" to their old dependencies.
I think this could work as a translation layer, because containers already abstract away everything at the syscall level.
There would just need to be a GUI for that, which can create multiple sandboxes easily, per application, and remember what you configured and installed there (and add it to the Winefile).
As for the shared-serials problem: you can easily diff the .reg file that Wine adds there, and if anything pops out under \\software, you can assume it is a custom field and offer it as an environment variable for the container.
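For the diff itself, something as dumb as this would do (prefix paths are illustrative; Wine keeps the registry as plain text in the prefix):

    # compare the prefix registry before and after entering the license key
    diff -u prefix-clean/system.reg prefix-activated/system.reg
    diff -u prefix-clean/user.reg   prefix-activated/user.reg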
They do cover that in https://docs.flathub.org/docs/for-app-authors/requirements#l... , but I don't buy it in the general case, because Windows isn't synonymous with proprietary, and proprietary isn't synonymous with non-redistributable licenses. Sure, I doubt there's a legal way to ship Microsoft Office on Flathub, but there are plenty of shareware programs that I think would be fine (though IANAL, ofc), right up to FOSS that happens to target Windows. For instance, Notepad++ is GPLv3[0] and has a WINE platinum rating[1]; why shouldn't it be on Flathub?
I don’t think it’s necessarily that either. They probably just want some guarantees that the app will keep working, instead of someone submitting 100s of wine apps and having them break eventually.
They're just trying to prioritize Linux applications, without preventing developers who want to support Linux through Wine from doing so.
The key difference is that applications run under Wine will always have some subtle quirks and misbehaviors that break usability, which the dev can work around when given the chance.
>I don't think a new distro is needed. Most commonly used windows apps can be made to work through wine
I think the idea is to provide a seamless, Windows-like experience so the user works exactly how they expect to work under Windows, without having to fiddle, modify configurations, or wrestle with different settings. Just click and run.
Yes, which is what I mentioned in the rest of the post. You could distribute a container that has wine in it, with just the configuration necessary to get the software working in the container. It would be straightforward to write a thin abstraction layer that pulls the container (installation) and gives you an executable that you can double click on and launch.
An end user wouldn't need to modify configs or wrestle settings, because that's already done for you upstream. You just get an artifact that you can click and run. From the other posts, Proton and Steam Deck already do something similar, and this is also conceptually similar to the way AppImages, Flatpaks etc. work.
And if memory serves one of the important features of Proton is to control how each app is configured individually, precisely to let you do needed tweaks at a local level.
Less required in general or by the user? I sort of had it in my head that part of what Proton did was to just bundle all of those tweaks so you didn't have to think about it, but I haven't actually looked under the hood.
It's possible to use Docker as a package manager. I've worked jobs where we did exactly that, because we needed to compile certain dependencies for our application, and it streamlined the whole process.
There's zero reason you couldn't create a small abstraction layer around docker so you can install "executables" that are really just launching within a container. I mean, isn't that the whole idea behind flatpak, snaps, and appimages?
The point is to leverage modern abstraction techniques so that people don't need to learn a new system.
> I can pull down a 20 year old exe and still run it today on Windows. Try doing the same with a Linux binary that's just a year old. There's no guarantee that it will be able to run based off some update that has happened
IMHO, you're comparing two different things.
The traditional method of installing apps on Windows is to pack all dynamic dependencies with the app.
On Linux, dynamic dependencies are shared between apps, so it's not surprising that when you change an app's dependencies, it stops working.
There are a few ways to solve this, and you're free to choose:
Aside from comparing two different things, as you correctly identify, I believe that even the author's original assertion just isn't true. Maybe for some exe files, but I doubt for all or even most.
I was involved in replacing Windows systems with Linux + Wine, because (mission-critical industrial) legacy software stopped working. No amount of tweaking could get it to work on a modern Windows system. With Wine it ran without a hitch, once all the required DLL files were tracked down.
While Wine may indeed be quite stable and a good solution for running legacy Windows software, I think any dynamically linked legacy software can cause issues, both on Windows and Linux. Kernel changes may be a problem too. While Windows is often claimed to be backwards compatible, in practice your mileage may vary, as my client found out the hard/expensive way.
> I was involved in replacing Windows systems with Linux + Wine, because (mission-critical industrial) legacy software stopped working. No amount of tweaking could get it to work on modern Windows system. With Wine without a hitch, once all the required DLL files were tracked down.
I moved from Windows 11 to Linux for the same reason. I was using an old version of Office because it was faster than the included apps: the full Word started faster than WordPad (it was even on par with Notepad!), and the Outlook from an old Office used less RAM and was more responsive than the one included with Windows.
When I got a new laptop, I had problems installing each of the old versions of Office I had around, and there were rumors that old versions of Office would be blocked.
I didn't want to take the risk, so I started my migration.
> While Windows is often claimed to be backwards compatible, in practice your mileage may vary
It was perfectly backwards compatible: Windows worked fine with very old versions of everything, until some versions of Windows 11 started playing tricks (even with a Pro license).
I really loved Windows (and AutoHotKey and many other things), but now I'm happy with Linux.
> I really loved Windows (and AutoHotKey and many other things)
Oh, do you know how to configure, e.g., Win+1, Win+2, etc. to switch to the corresponding virtual desktops? And how to disable that slow animation and just switch instantly?
Maybe you have some ideas about where I should search.
I've used Linux as my OS for a long time, but now I need to use Windows at my job. So I'm trying to bring my Windows usage experience as close as possible to what is familiar and common on Linux.
> So I'm trying to bring my Windows usage experience as close as possible to what is familiar and common on Linux.
I see you were given an answer for the slow animation. For most UI tweaks, regedit is a good starting point.
You may also like PowerToys, but I suggest you take the time to create AHK scripts, for example if you want to make your workflow keyboard-centric.
> So I'm trying to bring my Windows usage experience as close as possible to what is familiar and common on Linux.
I did the opposite with the help of Hyprland on Arch, but it took me years to get close to how efficient I was on Windows, where there are many very polished tools to do absolutely anything you can think of.
There's no built-in way to set hotkeys to switch to a specific desktop. And my primary annoyance is that there's no way to set hotkeys to move a given window to a different desktop.
Well, there's always LD_PRELOAD and LD_LIBRARY_PATH on Linux. My experience has been that most of the time when older binaries fail to run, it's because they are linked against old versions of libraries, and when I obtain those library versions -- exactly the same as obtaining the DLLs for the Windows executable -- things usually work just fine.
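Roughly (the library names are only examples of the sort of thing ldd reports as missing):

    # collect the old library versions the binary was linked against
    mkdir -p ~/oldlibs
    cp libpng12.so.0 libssl.so.0.9.8 ~/oldlibs/   # whatever ldd says is missing
    # run the old binary against them without touching the system libs
    LD_LIBRARY_PATH=~/oldlibs ./old-binary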
You don't need to bundle anything from the system layer for Windows programs distributed as binaries. On Linux there is no proper separation between system libraries and optional libraries; everything could be both, and there are no API/ABI guarantees. So "just bundle your dependencies" simply doesn't work: you cannot bundle Mesa, libwayland or GTK, but you cannot fully depend on them not breaking compatibility either.
On the Windows side, nobody bundles the Windows GUI libraries, OpenGL drivers or sound libraries. On the Linux side, the system libs have to be somewhere in the container, and you have to hope they are still compatible.
You cannot link everything statically either. Starting with glibc, there are many libraries that don't work fully, or at all, when statically linked.
I am sure this is true. But I seem to have had good results building static executables and libraries for C/C++ with cmake (which presumably passes -static to clang/gcc). golang also seems to be able to create static executables for my use cases.
Unless static linking/relinking is extremely costly, it seems unnecessary to use shared libraries in a top-level docker image (for example), since you have to rebuild the image anyway if anything changes.
Of course if you have a static executable, then you might be able to simplify - or avoid - things like docker images or various kinds of complicated application packaging.
> I am sure this is true. But I seem to have had good results building static executables and libraries for C/C++ with cmake (which presumably passes -static to clang/gcc). golang also seems to be able to create static executables for my use cases.
Depends on what you link with and what those applications do; I would also check the end result. Golang on top of a Docker container is the best case as far as compatibility goes: Docker means you don't need to depend on the base distro, and Go skips libc and provides its own network stack. It even parses resolv.conf and runs its own DNS client. At this point, if you replaced the Linux kernel with FreeBSD, you would lose almost nothing in function. So it is a terrible comparison for an end-user app.
If you compile all GUI apps statically, you'll end up with a monstrous distro that takes hundreds of gigabytes of disk space. I say that as someone who uses Rust to ship binaries, and my team already had to use quite a few nasty hacks that walk on the ABI-incompatibility edge of rustc to reduce binary size. It is doable, but would you like to wait hours for an update every single time?
Skipping that hypothetical case, the reality is that for games and other end-user applications, binary compatibility is an important matter for Linux (or even any single distro) to be a viable platform where people can distribute closed-source programs confidently. Otherwise it is a ticking time-bomb. It explodes regularly too: https://steamcommunity.com/app/1129310/discussions/0/6041473...
The incentives to create a reliable binary ecosystem on Linux are not there. In fact, I think the Linux ecosystem creates the perfect environment for the opposite:
- The majority economic incentive is coming from server providers and some embedded systems. Both of those cases build everything from source, and/or rely on a limited set of virtualized hardware.
- The cultural incentive is not there, since many core system developers believe that binary-only software doesn't belong on Linux.
- The technical incentives are not there, since a Linux desktop system is composed of independent libraries from semi-independent developers, each writing software that is compatible with the libraries released in the same narrow slice of time.
Nobody makes Qt3 or GTK2 apps anymore, nor are they supported. On the Windows side, Rufus, Notepad++ etc. are all written against the most basic Win32 functions, and they get access to the latest features of Windows without requiring huge rewrites. It will be cursed, but you can still make an app that uses Win32, WPF and WinUI in the same app on Windows (three UI libraries from three decades) and you don't need to bundle any of them with the app. At most you ask the user to install the latest .NET.
Except that the "you" is different on each case. You're offering options for the distributor. The quote is talking about options for the user, who has to deal with whatever the distributor landed on. From the point of view of the user at the point of need, a distributor having choices that could have made their lives easier if they'd been picked some time in the past is completely useless.
I think it's not quite that simple though. For one, the OpenGL driver situation is complex: I hear you need userland, per-hardware libraries, which basically require dynamic linking. From that perspective, Windows binaries are the de facto most stable way of releasing games on Linux.
I'm not sure about linux syscall ABI stability either, or maybe other things that live in the kernel?
> I think the opengl driver situation is complex, where I hear you need userland per-hardware libraries which basically require dynamic linking
Yes, the OpenGL driver is loaded dynamically, but..
Are you sure there are any problems with OpenGL ABI stability?
I have never heard about breaking changes in it.
The OpenGL ABI is extremely stable, but OpenGL drivers (especially the open source ones) also use other libraries, which distros like to link dynamically. This can cause problems if you ship different versions of the same libraries with your program. This includes statically linked libraries, if you did not build them correctly and your executable still exports the library symbols. Not insurmountable problems, but still things that inexperienced Linux developers can mess up.
I was thinking the same thing. I've had loads of issues over the years when I have an archived EXE that gets angry about a missing DLL.
Likewise, as the author states, there's nothing intrinsic to Linux that makes it have binary compatibility issues. If this is a problem you face, and you're considering making a distro that runs EXEs by default through an emulation layer, you are probably much better off just using Alpine or one of the many other musl-based distros.
I need to buckle down and watch a YouTube video on this that gives examples. It obviously comes up in computer engineering all the time, but it's something I've been able to skate by without fully understanding; from time to time I see comments like this one that seem perfectly clear, but I'm sure there's still quite a lot of nuance that I could benefit from learning.
This is like the in-soviet-union joke about shouting "down with the US president" in front of the Kremlin. In this case, I too can run a 20 year old Windows binary on Linux wine.
The article's main premise isn't bad, but it's full of weird technical inaccuracies.
At certain points he talks about syscalls, libc (I'm assuming glibc), PE vs. ELF, and an 'ABI'. Those are all different things, and IIUC all are fairly stable on Linux; what isn't stable is userspace libraries such as GTK and Qt. So, what are we talking about?
There are also statements like this which (I'm not a kernel developer) sound a little too good to be true:
> A small modification to the "exec" family of system calls to dispatch on executable type would allow any Linux application to fork and exec a Windows application with no effort.
He goes on to talk about Gatekeeper (which you can disable), Recall (which is disabled by default), and signing in with a Microsoft account (which can be easily bypassed, though he linked an article saying they might remove it). He also talks about "scanning your computer for illegal files", I don't know what this is referring to, but the only thing I could find on Google was Apple's iCloud CSAM scanning thing. That's not on your computer, and it got so much backlash that it was cancelled.
There's plenty of stuff to criticize about these companies and their services without being dramatic, and the idea of Linux having more compatibility with Win32 via Wine isn't bad.
> > There are also statements like this which (I'm not a kernel developer) sound a little too good to be true:
> A small modification to the "exec" family of system calls to dispatch on executable type would allow any Linux application to fork and exec a Windows application with no effort.
That isn't "too good to be true", it's so good it is false – no kernel modification is required because the Linux kernel already supports this via binfmt_misc. You just need to configure it. And depending on how you installed Wine, you may find it has even already been configured for you.
Lindows made a splash in 2001-2002 and its purpose was to bridge the gap and offer proper support for Windows applications on Linux via a 'click and run' service.
After Microsoft sued them and they changed their name, the bubble was burst and when Ubuntu appeared its niche as a beginner distro ebbed away.
I was surprised to hear it was still alive via a Michael MJD video a month or two ago.
This problem is related to the fact that Linux distros typically dynamically link executables and don't retain older versions of those libraries, whereas Windows does.
It's one of the many reasons the Windows base install is so much heavier than a typical Linux base install.
The reason Windows retains older versions of libraries while Linux doesn't is that Windows doesn't have a package manager like Linux distros. Ok, there's now the Windows Store plus a recent-ish CLI tool that was based on one of the many unofficial package managers, but traditionally the way to install a Windows application was via manual downloads and installs. So those installers would typically come bundled with any shared libraries they'd need, often keeping those shared libraries in the application directory. Leading to lots of duplication of libraries.
You could easily do the same thing in Linux too but there’s less of a need because Linux distribution package managers are generally really good. But some 3rd party package managers do take this kind of approach, eg Nix, Snap, etc.
So it’s not that Linux is “unstable” but more that people have approached the same problem on Linux in a completely different way.
The fact that drag-and-drop installs work on macOS demonstrates that there isn’t really a UNIX-like limitation preventing Windows-style installs. It’s more that Linux distributions prefer a different method for application installation.
It's not just about dynamically linked executables. The userland of Linux simply isn't as stable time-wise as Windows, especially when the timescale is measured in decades.
As an example, the latest Atari Jaguar linker (aln) for Linux was released back in 1995. It's a proprietary, statically-linked 32-bit Linux a.out executable. To run this on a modern Linux system, you need to:
- Bump vm.mmap_min_addr from 65536 down to 4096, a privileged operation;
- Use an a.out loader, because the Linux kernel dropped support for a.out back in 2022;
- Possibly use qemu-user, if your system doesn't support 32-bit x86.
That's the best-case scenario, because some of the old Atari Jaguar SDK Linux binaries are dynamically-linked a.out executables and you're basically stuck running ancient Linux kernels in a VM. It's at a point where someone at the AtariAge forums was seriously considering using my delinking black magic to port some of these old programs to modern Linux. It's quite telling when reverse-engineering an executable with Ghidra in order to export relocatable object files to relink (with some additional steps I won't get into) is even an option on the table.
Sure, given enough determination and piles of hacks you can probably forcefully run any old random Linux program on modern systems, but odds are that Windows (or Wine or ReactOS) will manage to run a 32-bit x86 PE program from thirty years ago with minimal compatibility tweaks. Linux (both distributions and to a lesser degree the kernel) simply don't care about that use-case, to the point where I'd be pleasantly surprised if anyone manages to run Tux the Penguin: A Quest for Herring as-is on a modern system.
> It's not just about dynamically linked executables. The userland of Linux simply isn't as stable time-wise as Windows, especially when the timescale is measured in decades.
That’s exactly what dynamically linked executables are: user land
> As an example, the latest Atari Jaguar linker (aln) for Linux was released back in 1995. It's a proprietary, statically-linked 32-bit Linux a.out executable.
That's not a user land problem. That's a CPU architecture problem. Windows solves this with WOW64, which provides a compatibility layer for 32-bit pointers et al.
There are 32-bit compatibility layers for Linux too, but they're not going to help if you're running an a.out file, because it's a completely different type of executable format (i.e. not equivalent to a 32-bit statically compiled ELF).
Windows has a similar problem with COM files (the early DOS executable format). And lots of COM executables on Windows don’t work either. Windows solves this problem with emulation, which you can do on Linux too. The awkward part of Linux here is that it doesn’t ship those VMs as part of its base install, but why would it because almost no one is trying to run randomly downloaded 32bit a.out files.
To be clear, I'm not arguing that Linux's backwards compatibility story is as good as Windows'. It clearly isn't. But the answer to that isn't that Linux can't be backwards compatible, it's that Linux traditionally hasn't needed to be. However, all of the same tools Windows uses for its compatibility story are available to Linux for Linux executables too.
>> As an example, the latest Atari Jaguar linker (aln) for Linux was released back in 1995. It's a proprietary, statically-linked 32-bit Linux a.out executable.
> That's not a user land problem. That's a CPU architecture problem. Windows solves this with WOW64, which provides a compatibility layer for 32-bit pointers et al.
In this specific case, it really is a user-land problem.
I went to the trouble of converting that specific executable into a statically linked 32-bit x86 ELF executable [1], to run as-is on modern x86 and x86_64 Linux systems. Besides rebasing it at a higher virtual address and writing about 10 lines of assembly to bridge the entrypoints, it's the exact same binary code as the original artifact. Unless you've specifically disabled or removed 32-bit x86 emulation, it'll run on an x86_64 kernel with no 32-bit userland compatibility layers installed.
Just for kicks, I've also converted it into a dynamically linked executable (with some glue to bridge glibc 1.xx and glibc 2.xx) and even into an x86 PE executable that can run on Windows (using more glue and MSYS2) [2].
> Windows has a similar problem with COM files (the early DOS executable format). And lots of COM executables on Windows don’t work either. Windows solves this problem with emulation, which you can do on Linux too.
These cases aren't equivalent. COM and MZ are 16-bit executable formats for MS-DOS [3], and NE is for 16-bit Windows; all can be officially run without workarounds on 32-bit x86 Windows systems (NTVDM has admittedly spotty compatibility, but the point stands). Here we're talking about 32-bit x86 code, so COM/MZ/NE does not apply (to my knowledge there never were 16-bit Linux programs anyway).
That Windows has 32-bit compatibility out of the box and that Linux distributions don't install 32-bit compatibility layers by default is one thing, but those on Linux only really apply to programs that at best share the same vintage as the host system (and at worst only work for the same distribution). Again, try running Tux the Penguin: A Quest for Herring as-is on a modern system (be it on a 32-bit or 64-bit installation, that part doesn't matter here), I'd gladly be proven wrong if it can be done without either a substantial rewrite+recompilation or egregious amounts of thunking a 2000's-era Linux userspace onto a 2020's-era one (no, a VM doesn't count, it has to run on the host).
> In this specific case, it really is a user-land problem.
a.out isn't even supported in new Linux kernels, so how is that a user land problem? And you then repeated my point about how it's not a user land problem by describing how it works as an ELF. ;)
> These cases aren't equivalent. COM and MZ are 16-bit executables for MS-DOS [3], NE is for 16-bit Windows ; all can be officially run without workarounds on 32-bit x86 Windows systems (NTVDM has admittedly spotty compatibility, but the point stands). Here, we're talking about 32-bit x86 code, so COM/MZ/NE does not apply here (to my knowledge there never has been 16-bit Linux programs anyways).
You’re not listening to what I’m saying.
COM and a.out are equivalent because they're raw formats. Even on 32-bit NT systems, COM required emulation.
The problem is the file formats are more akin to raw machine code than they are a modern container format.
So yeah, one is 16-bit and the other 32-bit, but the problem you're describing is related to the file format being unforgiving across CPU architectures without emulation, and in many cases disregarding the user land entirely.
By your own admission, 32-bit PEs and 32-bit ELFs work perfectly fine on their respective Windows and Linux systems without any hacks.
The difference here is that Windows ships WOW64 as part of the base install, whereas mainstream Linux distributions don't ship 32-bit libraries as part of their base install. That doesn't mean you need hacks for 32-bit though. For example, on Arch it's literally just one line in pacman.conf that you uncomment.
My point was, if you wanted to ship a Linux distribution that supported random ELF binaries then you could. And package managers like Nix prove this fact.
The reason it's harder on Linux isn't that it requires hacks. It's that Linux has a completely different design for installing applications, and thus backwards compatibility with random ELFs isn't generally worth the effort.
Also, it's really not fair to argue that a.out, a format defined in the 70s and looong since deprecated across all unix-like systems, is proof that Linux isn't backwards compatible. ELF has been the primary file format on Linux for nearly 30 years now, and a.out was only relatively recently fully removed from the kernel.
Whereas COM has been problematic on Windows for the entirety of NT, including Windows 2000 and XP.
I never suggested it was a bad thing. I was just explaining the differences.
However to answer your question:
Storage hasn't always been cheap, so it used to be a bad thing. These days, as you rightly said, it's less of an issue.
But if you do want to focus on present day then it’s worth noting that these days FOSS does ship a lot of dependencies as part of their releases. Either via Docker containers, Nix packages, or static binaries (eg Go, Rust, etc). And they do this precisely because the storage cost is worth the convenience.
> While the Linux syscalls themselves are very stable and reliable, the c library on top of them is not. Practically all of userland is based on libc, and therefore by proxy Linux itself has a binary compatibility problem.
People who primarily use Linux often forget that Windows has the exact same problem. In the case of Windows libc is distributed as part of the Visual C++ runtime. Each version of Visual Studio has its own version of the VC++ runtime and the application is expected to redistribute the version of VC++ it needs.
The only thing Windows does better is ensuring that they maintain backwards compatibility in libc until they release a new version of Visual Studio.
In WinAPI land, the equivalent of "the C library" is NTDLL, its wrappers and other supporting libs (advapi32, userenv, etc., plus the Win32-specific libs, which I consider equivalent to the X11 libs). MSVCR, in my opinion, is there to provide the stdlib for C/C++ programs. In Linux land, the library that provides the C stdlib also wraps syscalls; on Windows, the C stdlib is a wrapper/interface over the Windows APIs.
My opinion is that they're both great. I really like how clean and well thought out the Windows APIs are. Compared to the Linux equivalents they're very stable and easier to use. But that doesn't mean there is anything wrong with the C stdlib implementation on either OS. For system APIs, though, Linux is a bit messy; that mess is the result of having so many people with strong opinions, and of Linux trying to adhere to the Unix principle of a modular user-space ecosystem.
For example, there is no "Linux graphics api", there is X11 and Wayland and who knows what else, and neither have anything to do with the Linux project. There are many highly opinionated ways to do simple things, and that is how Linux should be. In the same vein, installing apps on Linux is simply querying your package manager, but on Windows there is no "Microsoft package repo" where everyone dumps their apps (although they are trying to fix that in many ways), and that's how Windows should be.
Let Linux be Linux and Windows be Windows. They're both great if you appreciate them for what they are and use them accordingly.
Microsoft has always been end-user-hostile. You hack around it :)
Reverse-engineer its undesirable behavior and mitigate it. The real stuff that scares me is hardware-based (secure enclave computing, for example) and the legal measures it is taking to prevent us from hacking it.
ReactOS exists, as does Wine. Linux is a purely monolithic kernel, unlike NT, which is a hybrid that has the concept of subsystems built into it. Linux would have to grow the concept of subsystems and an NT-interop layer (probably based off of Wine), and I fail to see the advantage over Wine.
In the end, where is the demand coming from, I ask? Not from Linux devs, in my opinion. I suppose a Wine-focused distro might please folks like you, but Wine itself has lots of bugs and errors even after all these years. I doubt it is even keeping up with all the Windows 11 changes. What the author proposes is, in my opinion, not practical, at least not if you are expecting an experience better than ReactOS or Wine. If it is just a Win32/WinAPI interop layer, it might be possible, but devs would need to demand it; otherwise who will use it?
Linux users are the most "set in their ways" in my experience; try convincing any Linux dev to stop using GTK/Qt and write apps for "this new Windows-like API interface to create graphical apps".
But ultimately, there is no harm in trying, other than wasted time and resources. I too would like to see an ecosystem that learns from and imitates Windows in many ways (especially its security measures).
>There are many highly opinionated ways to do simple things, and that is how Linux should be
I still believe we would be in a better place had BSD been ready for adoption before Linux. Linux is a kernel and a wide family of operating systems assembled from that kernel and different bits and pieces, while BSD tried to be a very coherent operating system from the start.
I remember trying to get a program installed on Windows. It complained that I didn't have the right VC redistributable.
I had like ten of them installed — I think several from the same year! — cause every program usually bundles its own.
I found the exact version of vcredist installer I needed but then that one refused to install because I already had a slightly newer version. So I had to uninstall that first.
As far as I'm aware this problem still exists in Wine: I installed something in Wine yesterday and I had to use winetricks commands to get the vcredist installers from Microsoft's servers. (Also illegally download some fonts, otherwise my installer refused to start...)
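For reference, the winetricks invocations in question look roughly like this (vcrun2015 is only an example verb; pick whichever runtime the installer actually complains about):

    # unattended install of a VC++ runtime into the current prefix
    winetricks -q vcrun2015
    # and corefonts covers the usual missing-font complaints
    winetricks -q corefonts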
Next time that happens, search "vcredist aio". I can't endorse any of the scripts that are out there but there are many scripts that will pull them from Microsoft and install them all with the unattended flag.
Are libc updates really the primary problem with ABI breaks on Linux? Glibc isn't perfect, but it has had versioned symbols going back a long time now. My guess would be that the problem is actually abandoned versions of other libraries (e.g. SDL1, old versions of GTK2?) and maybe a handful of other things.
Yeah, glibc is extremely stable, and you can be sure that an app compiled against it now will work well into the future. People just completely ignore that fact, based on hearsay and on the removal of an unused symbol hashing table from the glibc binary, which broke a few anticheat systems that were attempting to parse it.
Other libraries are the problem, usually. People are generally really good about changing the .so version of a library when the ABI changes in a backwards-incompatible way. Usually distributions ship both versions until everything they ship either has upgraded or been removed. Solutions like appimage can allow you to ship these libraries in your app.
No, not at all, but that's a different problem. That issue is about linkage between two different binaries that have _TIME_BITS=32 and _TIME_BITS=64, not an issue with linking to glibc. However, that's only an issue when you are dealing with software that passes time_t in the ABI. Of course, on the whole, a distribution has to deal with all kinds of weirdly-intermingled and low-level packages, so this does happen a very non-trivial amount of times surely, but in general I expect that a lot of old binary software will be alright. You'd only run into this particular problem if you had an old binary that interfaced with another library that is provided by the system that did this. I didn't check, but I'd be quite surprised to find random time_t in most popular library APIs, e.g. I don't expect to see this in SDL or GTK.
Of course, if you did need to support this case, you don't need to throw the baby out with the bathwater necessarily. You'd just need a _TIME_BITS=32 build of whatever libraries do have time_ts in their ABI, and if that blog post is any indication Gentoo will probably have a solution for that in the future. I like the idea of jamming more backwards-compatibility work into the system dynamic linker. I think we should do more of that.
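For what it's worth, the two variants come from the same source and differ only in build flags; a sketch (glibc requires 64-bit file offsets together with 64-bit time_t):

    # 32-bit time_t: the historical default when targeting 32-bit x86
    gcc -m32 -c foo.c -o foo_time32.o
    # 64-bit time_t on the same target
    gcc -m32 -D_TIME_BITS=64 -D_FILE_OFFSET_BITS=64 -c foo.c -o foo_time64.o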
In any case, this issue is not a case where glibc broke something, it's a case where the ABI had to break. I understand that may seem like nitpicking, but on the other hand, consider what happens in 2038: All of the old binary software that relies on time_t being 32-bit will stop working properly even if you do have 32-bit time_t shims, at which point you'll need dirtier and likely less effective hacks if you want to be able to keep said software functioning.
Someone comes along and builds their software on the latest bleeding-edge Linux distro. It won't run on older (or even many current) Linux desktops. People curse Linux ABI instability because new binaries aren't supported by an older operating system. It is in fact the opposite to the Windows situation, in which older software continues to run on newer operating systems, but good luck getting the latest Windows software to run on a Windows 95 desktop. People are very quick to conflate the two situations so they can score more fake internet points.
The situation is not limited to desktops. For example, a very popular commercial source forge web service does not work on browsers released more than about 10 weeks ago. The web itself has become fantastically unstable and almost unusable for anything except AI bots consuming what other AI bots spew.
To add some validity here, I think to an extent we already see distributions aimed at converting Windows users moving in this direction. Zorin OS has Wine support for .exe's almost out of the box, and there's Steam OS / Proton, where (if I recall correctly) the official guidelines for the Steam Deck state that developers should NOT create native Linux ports for new games, but rather optimize around Proton itself.
I've read the article and the comments with interest. I just have a question: if Windows ABI is so stable that 20-year-old programs are guaranteed to run, why are there computers with Win95 or NT that nobody dares touching lest some specific software stops working? I see plenty of these in industrial environments, but also in public libraries, corporate databases, etc.
In practice most of those machines are an environment in and of themselves. It’s not that they can’t be upgraded, it’s that they likely couldn’t even be rebuilt if they had a hardware failure. The risk they’re taking is that the system is more likely to break due to being touched than it is to suffer a hardware failure. Which as most of us can attest to, is true until it’s not.
Relatedly, at a previous job we ran an absolutely ancient piece of software that was critical to our dev workflow. The machine had an issue of some sort, so someone imaged the hard drive, booted it as a VM and we resumed business as usual. Last I heard it was still running untouched, and unmaintained.
> if Windows ABI is so stable that 20-year-old programs are guaranteed to run
That's not actually true; there are no guarantees. Microsoft does a best effort to ensure the majority of applications continue to work. But there are billions of applications, they're not all going to work. Many applications don't even adhere to the Win32 API properly. Microsoft will sometimes, if the app is important enough, ensure even misbehaving applications work properly.
I know in my use case all these ancient machines are necessary for interacting with some ancient hardware, not a case where Wine is particularly useful.
Why touch it? These machines are usually not directly connected to the internet, and some are possibly virtualized. "Updating" to use Wine on Linux is a ton of work on its own, and you will run into unforeseeable issues. Nobody wants to pay for that, and nobody wants to be responsible for the problems, when the net benefit is zero.
But a real update/replacement of all these systems is too expensive, hence the status quo.
They care to a certain degree, and that degree is the size of the carefully-tuned payment that Trend Micro extract for the firewall product that lets the Windows 2000 MRI machine safely co-exist on the network with the hospital DC.
I think this attitude to the Linux ABI is maybe out of date: with a 20-year-old Linux binary, that's only 2005, so it will almost certainly be using glibc (no archaic libc5). Glibc has great backwards compatibility, and the binary will work on any glibc distribution today as long as you have all the .so's, same as needing the .dll's on Windows.
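A quick way to see what a given binary actually demands, old or new (standard binutils tooling):

    # which shared libraries it wants, and which are missing on this system
    ldd ./old-app
    # which glibc symbol versions it was linked against
    objdump -T ./old-app | grep -o 'GLIBC_[0-9.]*' | sort -uV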
We're clearly not living in the same universe here.
glibc backward compatibility is horrible.
Every. Single. Time. I try to use an old binary on a modern distro, it bombs, usually with some incomprehensible error message with GLIBC in all caps in it.
And these days, you can't even link glibc statically; when you try, it barks at you with vehemence.
As a matter of fact, as pointed out in the article, this particular shortcoming of glibc completely negates the work done by Linus to keep userland backwards compatible at all costs.
Please post actual issues encountered, including non-paraphrased errors instead of FUD.
And if you want to statically link your libc, there is nothing forcing you to use glibc. You're only stuck with glibc (and even then you don't actually need to use any functions from it yourself) if you need dynamic linking for e.g. OpenGL/Vulkan. Also, glibc wasn't designed for static linking even before they put in safeguards against it.
1. GNU libc is an exception in the world of compatibility.
2. You can't just dump a bunch of GTK libraries next to the binary and expect it to work. These libraries often expect very specific file system layouts.
In 2005 the hot new Windows technology was .NET Framework 1.1 or 2.0. You can't just dump Framework 1.1's libraries next to the binary and expect it to work either, it needs to be installed properly.
The .NET Framework 1.x runtime is no longer supported, and the .NET Framework 2.0 runtime (used by v2.0-3.5 applications) won’t be supported after 2029. They are slowly abandoning support for old apps.
(Yes, if you fiddle with the config file they might work on the .NET 4.0 runtime. But that’s not something a typical user can/will do.)
It keeps classes for backward compatibility, not assemblies. Some code still didn't migrate from them to not cause gratuitous churn. Also it's web scale, because untyped collections can hold values of different types just like javascript.
What are you talking about??? .NET never got abandoned! If anything, it's hit a very high bar now. It's one of the top frameworks for building an app across platforms.
They're referring to (I hope) the .NET Framework, which is Windows-only, with the last/latest version being 4.8. It should live a very long life, as Microsoft server infrastructure is built on it (SharePoint/Exchange).
That is solved by containerization (cgroups and namespaces), which was initially popularized by Docker, which appeared about 12 years ago. Newer things like Flatpak and Snap are just bells and whistles on top of this.
I think it's fair to say that OS/2 had better Windows compatibility (for its era) than Wine offers (in this era). The problem was that Microsoft introduced breaking changes with the introduction of Windows 95. While old Windows applications would continue to run under OS/2, IBM felt that it would take too much effort to introduce a compatibility layer for Windows 95. If I recall correctly, it involved limitations in how OS/2 handled memory.
Besides, binary compatibility has never really been a big thing in Linux, since the majority of software used is open source. It is expected to compile and link against newer libraries, so there is no real incentive for existing binaries to remain compatible. And if the software doesn't compile against newer versions of libraries, well, Windows has similar issues.
A Windows 95 compatibility layer would have been feasible if OS/2 had had more sales volume.
The latest multi-platform packaging systems like Nix or Flatpak have largely solved the binary compatibility problem by providing some guarantees about library versions. This approach makes more sense in modern contexts with cheap storage and fast bandwidth.
So... this already exists. Valve already essentially sells this as a product. Folks know that, right? The Steam Deck is a linux box running wine executables as the native app environment. The fact that the money apps are all "games" doesn't change the technology.
How they do it is by shipping a franken-ubuntu14 as the "Steam Runtime" for native Linux games. Not a terrible solution, but not exactly ideal for general-purpose software; games mostly keep to themselves. Their work on Proton is amazing though.
- Steam Runtime: A set of common libraries Linux native games target for multi-distro compatibility, I believe this still uses Ubuntu as the upstream
- Steam OS: An arch based distro pre-configured to run Steam out of the box, used by the Steam Deck, comes with extra stuff like gamescope to smooth over various issues other distros have with VRR, HDR, etc.
No[1], but you can launch a windows executable natively, link against DLLs in a compatible way, thunk between 32 and 64 bit as needed, access the Linux filesystem, network and IPC environment using native APIs, integrate with things like .NET and msvc runtimes, access native-speed DirectX emulation, etc...
Yes, you'd have to buff and polish it. But "paint some chrome on it" is hardly much of a blog post.
[1] Actually, are you sure the answer is "no" here? I wouldn't be at all shocked if some enterprising geek had source on github implementing a MSI extractor and installer
That makes no sense: shipping all dependencies (e.g. shipping a container image) gives perfect binary compatibility on Linux, which is what flatpak/snap/appimage do.
It can also be achieved with static linking, or by shipping all the needed libraries and using a shell script loader that sets LD_LIBRARY_PATH.
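The loader-script approach is tiny; a sketch (the lib directory name and binary name are arbitrary):

    #!/bin/sh
    # launcher shipped next to the binary; ./lib holds the bundled .so files
    here="$(dirname "$(readlink -f "$0")")"
    exec env LD_LIBRARY_PATH="$here/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" "$here/myapp" "$@"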
Also glibc (contrary to the author's false claims) and properly designed libraries are backwards compatible, so in principle just adding the debs/rpms from an older Debian/Fedora that ships the needed libraries to the packaging repositories and running apt/dnf should work in theory, although unfortunately might not in practice due to the general incompetence of programmers and distribution maintainers.
Win32 is obviously not appropriate for GNU/Linux applications, and you also have the same dependency problem here, with the same solution (ship a whole Wine prefix, or maybe ship a bunch of DLLs).
> shipping all dependencies (e.g. shipping a container image) gives perfect binary compatibility on Linux
That doesn't work for GUI programs which use a hardware 3D GPU. Linux doesn't have a universally available GPU API: some systems have GL, some have GLES, some have Vulkan, all 3 come in multiple versions of limited compatibility, and many of the optional features are vendor-specific.
In contrast, it's impossible to run modern Windows without working Direct3D 11.0, because the dwm.exe desktop compositor requires it. If a piece of software consumes Direct3D 11.0 and doesn't require any optional features (for example, FP64 math support in shaders is an optional feature, but sticking to the required set of features is not very limiting in practice unless you need to support very old GPUs which don't implement feature level 11.0), it will run on any modern Windows. Surprisingly, it will also run on Linux systems which support Wine: without a Vulkan-capable GPU it will be slow but should still work thanks to Lavapipe, which is the Linux equivalent of Microsoft's WARP, used on Windows computers without a hardware 3D GPU.
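To make that concrete, here is roughly what "require only feature level 11.0 and treat FP64 as optional" can look like; a minimal C++ sketch against the public D3D11 API (error handling trimmed, WARP used as the software fallback mentioned above):

    // Sketch: create a D3D11 device that requires only feature level 11.0,
    // then probe the optional FP64 shader cap instead of assuming it.
    #include <d3d11.h>
    #include <cstdio>
    #pragma comment(lib, "d3d11.lib")

    int main() {
        const D3D_FEATURE_LEVEL wanted = D3D_FEATURE_LEVEL_11_0;
        ID3D11Device* device = nullptr;
        ID3D11DeviceContext* context = nullptr;
        D3D_FEATURE_LEVEL got = {};

        // Ask only for 11.0; on machines without a capable GPU, retry with
        // D3D_DRIVER_TYPE_WARP, the software rasterizer mentioned above.
        HRESULT hr = D3D11CreateDevice(
            nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
            &wanted, 1, D3D11_SDK_VERSION, &device, &got, &context);
        if (FAILED(hr)) {
            hr = D3D11CreateDevice(
                nullptr, D3D_DRIVER_TYPE_WARP, nullptr, 0,
                &wanted, 1, D3D11_SDK_VERSION, &device, &got, &context);
        }
        if (FAILED(hr)) return 1;

        // FP64 in shaders is an optional cap: query it, don't require it.
        D3D11_FEATURE_DATA_DOUBLES doubles = {};
        device->CheckFeatureSupport(D3D11_FEATURE_DOUBLES, &doubles, sizeof(doubles));
        std::printf("FP64 shader ops: %s\n",
                    doubles.DoublePrecisionFloatShaderOps ? "yes" : "no");

        context->Release();
        device->Release();
        return 0;
    }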
Note that this also complicates the post's premise of Windows having a simple, stable ABI: win32 sure is stable, but that's not what applications are coded against anymore.
Sure, you can run a 20 year old app, but that is not the same as a current app still working in 20 years, or even 5.
> that's not what applications are coded against anymore
Not sure I follow. Sure, most modern programs are not using old-school WinAPI with GDI, but the stuff they added later is also rather stable. For example, the Chromium-based browser I’m looking at uses Direct3D 11 for graphics. It implements a few abstraction layers on top (ANGLE, Skia) but these are parts of the browser not the OS.
I view all that modern stuff like Direct3D, Direct2D, DirectWrite, Media Foundation as simply newer parts of the WinAPI. Pretty sure Microsoft will continue to support them for a long time. For example, they can't even deprecate the 23-year-old DirectX 9 because it's still widely used; e.g. the current version of Microsoft's own WPF GUI framework relies on Direct3D 9 for graphics.
I agree. On Linux (and Mac really), new APIs replace old ones and old binaries stop working.
On Windows, new layers are applied over the old. There is DirectX 9-12. New binaries may use 12 but the ones still using 9 are perfectly happy. Things like .NET work the same. You can have multiple apps installed relying on different .NET versions.
It's not necessarily the same code, though. But COM is nice for a stable ABI like that - so long as you consistently version your interfaces, the apps can just QueryInterface for the old one they need and know that it's there, even if it's just a thin wrapper around the new stuff.
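A rough sketch of that pattern, with invented IWidget/IWidget2 interfaces and made-up GUIDs purely for illustration (not any real API; __declspec(uuid) / __uuidof are MSVC/clang-cl syntax):

    // Sketch only: hypothetical versioned COM interfaces.
    #include <windows.h>
    #include <unknwn.h>

    // v1 interface published years ago; once published, it never changes.
    struct __declspec(uuid("01234567-0000-0000-0000-000000000001"))
    IWidget : public IUnknown {
        virtual HRESULT STDMETHODCALLTYPE Draw() = 0;
    };

    // v2 extends it with new methods; v1 remains available unchanged.
    struct __declspec(uuid("01234567-0000-0000-0000-000000000002"))
    IWidget2 : public IWidget {
        virtual HRESULT STDMETHODCALLTYPE DrawScaled(float factor) = 0;
    };

    // An old client, compiled long ago, only ever asks for the interface
    // it knows about; newer objects keep answering that QueryInterface.
    HRESULT UseOldInterface(IUnknown* obj) {
        IWidget* w = nullptr;
        HRESULT hr = obj->QueryInterface(__uuidof(IWidget),
                                         reinterpret_cast<void**>(&w));
        if (FAILED(hr)) return hr;   // the v1 contract is gone (shouldn't happen)
        hr = w->Draw();              // works even if obj really implements IWidget2
        w->Release();
        return hr;
    }

The point is that once IWidget ships, its IID and vtable layout never change; new functionality goes into IWidget2, and old clients never notice.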
These are however the same on Linux - mesa may change, but what the app uses is OpenGL and GLX. A more modern app might use EGL instead of GLX, or have switched to Vulkan, but that doesn't break old code.
You can also run an old mesa from the time the app was built if it supports your newer hardware, but I'd rather consider that to be part of the platform the same way you'd consider the DirectX libraries to be part of windows.
Ah apologies, you're right - I was tired and read things wrong.
But I suspect "GL issues" (i.e., GL API stability) is being mixed together with e.g. mesa issues if mesa is being bundled inside the app/in a "flatpak SDK" instead of being treated as a system library akin to what you would do with DirectX.
Mesa contains your graphics driver and window system integrations, so when the system changes so must mesa change - but the ABI exposed to clients does not change, other than new features being added.
Win32 is quite extensive for an OS API. It covers the space from low-level stuff like syscalls and page allocation and all the way up to localization, simple media access and GUI. So everything from glibc, libsystemd, libpam to libalsa and egl on Linux side. And it is all stable.
Microsoft also provides quite good stability for DirectX and other extension APIs. You can still run old .Net apps without issues as long as they didn't pull a Hyrum's Law on you and depended on apparent behavior.
Sure, win32 contains GUI bits, but modern apps do not use those GUI bits.
OpenGL and Vulkan ABIs are also stable on Linux, provided by mesa. The post is pretty focused on the simplicity of win32 though, which is what I'm refuting as being as relevant today for new apps.
> As long as they didn't pull a Hyrum's Law on you
It is guaranteed that they "pull a Hyrum's Law", the question is just what apparent behavior they relied on.
> Note that this also complicates the post's premise of Windows having a simple, stable ABI: win32 sure is stable, but that's not what applications are coded against anymore.
It's true, but this touches on another point they made: what apps code to is other dynamically linked libraries. The kind that wine (or other host environments) can provide, without needing to mess with the kernel.
That's what apps are supposed to code to. When it comes to games and especially anti-cheat that's not always the case though and so Wine does have to handle direct system calls, which needs support from the kernel (at least to not be unusably slow).
This is FUD. There isn't a single real desktop Linux distribution without OpenGL support. The basic OpenGL API hasn't changed ever; it's just been extended. It has even more backwards compatibility than Direct3D. Sure, you can deliberately build a distro with only Vulkan or GLES (a mobile API) if you want to be an ass, but the same goes for Windows. Same for X11: Xlib works everywhere, even on any Wayland-only distribution that gives a single crap about running binary-distributed software.
Now GUI toolkits are more of an issue. That's annoying for some programs, many others do their own thing anyway.
Question, from an application developer's perspective: What is the implication in regards to cross-platform Vulkan applications? I.e., my 3D applications all use Vulkan, and they compile and just work on both Windows, and Ubuntu. Does this mean that on other or older distros, they might not work?
I don’t think the support depends on distros much, I think the main variable is hardware. If you have a desktop PC bought in the last ~5 years the support should be OK, for the hardware older than that the support is not guaranteed. GeForce GT 730 (launched in 2014) doesn’t support Vulkan, Intel only supports Vulkan on Windows starting from Skylake launched in 2015.
Then there are quality issues. If you search the internets for "Windows Vulkan issue" you'll find many end users with crashing games, game developers with crashing game engines (https://github.com/godotengine/godot/issues/100807), recommendations to update drivers or disable some Vulkan layers in the registry, etc.
On Windows, Vulkan is simply not as reliable as D3D. The reasons include market share, D3D being a requirement to render the desktop, D3D runtime being a part of the OS supported by Microsoft (Vulkan relies solely on GPU vendors), and D3D being older (first version of VK spec released in 2016, D3D11 is from 2009).
Another thing, on Linux, the situation with Vulkan support is less than ideal for mobile and embedded systems. Some embedded ARM SoCs only support GLES 3.1 (which BTW is not too far from D3D 11.0 feature-wise) but not Vulkan.
Agree overall. Just want to point out that Vulkan works on Intel Haswell. I have a 2013 MacBook Air and a 2013 Mac Pro that both have Haswell. Linux kernel 6.14 actually includes a Haswell Vulkan update from Intel themselves.
> Does this mean that on other or older distros, they might not work
Yep, exactly. While the Vulkan API is well defined and mostly stable, there is no guarantee the Linux implementation will also be stable. Moreover, Khronos graphics APIs only deal with the stuff after you've allocated a buffer and done all the handshakes with the OS and GPU drivers. On Linux, none of those have API / ABI / runtime configuration stability guarantees. Basically, it works until just one of the libraries in the chain breaks compatibility.
This is BS. Vulkan buffers are allocated with Vulkan functions. Window system integration is also provided by window-system specific Vulkan extensions just like it was with WGL/GLX/EGL etc. These are all well defined and stable.
That depends on how you build your program and what other dependencies you pull in. But as far as Vulkan is concerned, your program should run on any distro that is at least as new as the one you build on (talking about ABI; runtime requirements depend on hardware but don't depend on the system you build on).
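If you want to be defensive about it, you can also probe for Vulkan at runtime rather than assuming it; a minimal sketch that relies only on the loader (it assumes libvulkan is installed, which is the common case on desktop distros):

    // Sketch: probe whether a usable Vulkan implementation exists before
    // committing to it, so the app can fall back (e.g. to GL) gracefully.
    #include <vulkan/vulkan.h>
    #include <cstdio>
    #include <vector>

    int main() {
        VkApplicationInfo app = {VK_STRUCTURE_TYPE_APPLICATION_INFO};
        app.pApplicationName = "vk-probe";
        app.apiVersion = VK_API_VERSION_1_0;   // ask for the lowest thing we need

        VkInstanceCreateInfo info = {VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
        info.pApplicationInfo = &app;

        VkInstance instance = VK_NULL_HANDLE;
        if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) {
            std::puts("Vulkan not usable here (no ICD/driver), fall back");
            return 1;
        }

        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, nullptr);
        std::vector<VkPhysicalDevice> gpus(count);
        if (count) vkEnumeratePhysicalDevices(instance, &count, gpus.data());

        for (VkPhysicalDevice gpu : gpus) {
            VkPhysicalDeviceProperties props;
            vkGetPhysicalDeviceProperties(gpu, &props);
            std::printf("device: %s (API %u.%u)\n", props.deviceName,
                        VK_VERSION_MAJOR(props.apiVersion),
                        VK_VERSION_MINOR(props.apiVersion));
        }
        vkDestroyInstance(instance, nullptr);
        return count == 0;  // nonzero exit if no device was found
    }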
> That makes no sense: shipping all dependencies (e.g. shipping a container image) gives perfect binary compatibility on Linux, which is what flatpak/snap/appimage do.
True, but sad. The way to achieve compatibility on Linux is to distribute applications in the form of what are essentially tarballs of entire Linux systems. This is the "fuck it" solution.
Of course I suppose it's not unusual for Windows stuff to be statically linked or to ship every DLL with the installer "just in case." This is also a "fuck it" solution.
> to distribute applications in the form of what are essentially tarballs of entire Linux systems.
Not so bad when Linux ran from a floppy with 2 MB of RAM. Sadly, every library just got bigger and bigger, without any practical way to generate a lighter, application-specific version.
If Linux userspace had libraries with stable ABI, you could just tar or zip binaries and they would work. You wouldn't need to bundle system layer. This is how you deploy server apps on Windows Server systems. You just unpack and they work.
It is not a packaging problem. It is a system design problem. Linux ecosystem simply isn't nice for binary distribution except the kernel, mostly.
Linux feels a bit different since the complete system is not controlled by a single vendor. You have multiple distributions with their own kernel versions, libc versions, library dependencies, etc.
Mac OS has solved this but that is obviously a single vendor. FreeBSD has decent backwards compatibility (through the -compat packages), but that is also a single vendor.
-compat packages exist on Fedora-like systems too, usually allowing older versions to run. I can't say how far back, but RHEL usually has the current version minus 1 available as -compat packages.
Packaging is “hard” but mobile and app stores do it.
They do it by having standards in the OS, partial containerization, and above all: applications are not installed “on” the OS. They are self contained. They are also jailed and interact via APIs that grant them permissions or allow them to do things by proxy. This doesn’t just help with security but also with modularity. There is no such thing as an “installer” really.
The idea of an app being installed at a bunch of locations across a system is something that really must die. It’s a legacy holdover from old PC and/or special snowflake Unix server days when there were just not many machines in the world and every one had its loving admin. Things were also less complex back then. It was easy for an admin or PC owner to stroll around the filesystem and see everything. Now even my Mac laptop has thousands of processes and a gigantic filesystem larger than a huge UNIX server in the 90s.
I can't think of a single thing that would kill the last bit of joy I take in computing more. If I woke up in such a world, I'd immediately look to reimplement Linux in an app and proceed to totally ignore the host OS.
> Also glibc (contrary to the author's false claims) and properly designed libraries are backwards compatible, so in principle just adding the debs/rpms from an older Debian/Fedora that ships the needed libraries to the packaging repositories and running apt/dnf should work in theory, although unfortunately might not in practice due to the general incompetence of programmers and distribution maintainers.
Got it. So everything is properly designed but somehow there's a lot of general incompetence preventing it from working. I'm pretty sure the principle of engineering design is to make things work in the face of incompetence by others.
And while glibc is backward compatible & that generally does work, glibc is NOT forward compatible, which is a huge problem - it means that you have to build on the oldest distro you can find so that the built binaries actually work on arbitrary machines you try to run them on. Whereas on Mac & Windows it's pretty easy to build applications on my up-to-date system targeting older variants.
> So everything is properly designed but somehow there's a lot of general incompetence preventing it from working.
But it is working, actually:
* If you update your distro with binaries from apt, yum, zypper etc. - they work.
* If you download statically-linked binaries - they work.
* If you download Snaps/Flatpak, they work.
> it means that you have to build on the oldest distro you can find so that the built binaries actually work on arbitrary machines you try to run them on.
Only if you want to distribute a dynamically-linked binary without its dependencies. And even then - you have to build with a toolchain for that distro, not with that distro itself.
> Only if you want to distribute a dynamically-linked binary
Even statically linked code tends to be dynamically linked against glibc. You’ve basically said “it works but only if you use the package manager in your OS”. In other words, it’s broken and hostile for commercial 3p binary distribution which explains the state of commercial 3p binary ecosystem on Linux (there’s more to it than just that, but being actively hostile to making it easy to distribute software to your platform is a compounding factor).
I really dislike Snaps/Flatpaks as they're distro-specific and overkill if I'm statically linking and my only dynamic dependency is glibc.
Glibc is fantastically stable and backwards compatible in all the same ways, and I think you're overstating how backwards compatible Windows is as well. Microsoft has the exact same dynamic library issues that Linux does via its Microsoft Visual C++ redistributables (as one example). Likewise, there are forwards-compatibility issues on Windows as well (if you build a program on Windows 11 you'll have a hard time running it on Windows XP/Vista for a huge number of reasons).
If you build a statically linked program with only glibc dynamically linked, and you do that on Linux from 2005, then that program should run exactly the same today on Linux. The same is true for Windows software.
I'm pretty sure it's safe to distribute Windows 11-built binaries to Windows 7 and Windows 10 if that's a valid target set in Visual Studio. The C++ runtime is its own thing because of a combination of C++ BS (no stable runtime) and the fact that C++ isn't an official part of Windows; it's a developer tool they offer. But you can statically link the C++ runtime, in which case you can build with the latest runtime on Windows 11 and distribute to an older Windows.
Linux is the only space where you literally have to do your build on an old snapshot of a distro with an old glibc so that you can distribute said software. If you're in C++ land you're in for a world of hurt, because the version of the language is now constrained to whatever was available when that 5+-year-old distro was snapshotted, unless you build a newer compiler yourself from scratch. With Rust at least this is much easier, since they build their toolchain on an old version of Linux, so their binaries are similarly easy to distribute, and the latest Rust compiler is trivially easy to obtain on old Linux distros.
Source: I’m literally doing this today for my day job
You can also build a cross-compiler to target an older glibc; you are not limited to the distro-provided toolchain. This also allows you to use newer C++ features (with exceptions) as those mostly depend on the GCC version and not the glibc version. Of course the supported range of glibc versions varies with the GCC version, just like Visual Studio doesn't support XP anymore - the difference is that if you are sufficiently motivated you can patch GCC.
As for being overkill, surely you can see the advantage of having a single uniform distribution format from the end user's perspective? Which, sure, might be overkill for your case (although app isolation isn't just about dependencies), but the important thing is that it is a working solution that you can use, and users only need to know how to install and manage them.
You have to install the Flatpak runtime to begin with, so that's one obstacle for distribution. And it also doesn't really isolate as much as you'd like to believe - e.g. dealing with audio will still be a mess because there are like 4 different major audio interfaces. And now I have to host a Flatpak repo and get the user to add my repo if it's proprietary software. It's really nowhere near as smooth and simple as on Windows/Mac/Android/iOS.
The reason to host a repo regardless is to enable easy auto-updates - and I don't think you can call this bit "smooth and simple" on Windows and Mac, what with most apps each doing their own thing for updates. Unless you use the app store, but then that's exactly the same as repos...
Windows toolchain provides the import libraries to link with, and these are basically just tables mapping function names to indices in the DLL export table. So long as you don't actually use the new functions, an app linked against a modern Windows SDK will run just fine on old Windows, unlike the situation with glibc.
Almost - with glibc your code uses functions like memcpy but you end up linking against symbols like memcpy@GLIBC_2.14 which is the version of memcpy added in glibc 2.14 and which won't be present in older versions. Which symbol version your calls use depends on the glibc version you build against - generally it's the most recent version of that particular function. For the Win32 this is rarely the case and instead you have to explicitly opt in to newer functions with fixed semantics.
Still, to reliably target older Windows versions you need to tell your toolchain what to target. The Windows SDK also lets you specify the Windows version you want to target via the WINVER / _WIN32_WINNT macros, which make it harder to accidentally use unsupported functions. Similarly, the compilers and linkers for Windows have options to specify the minimum Windows version recorded in the final binary and which libraries to link against (classic win32 DLLs or UCRT). Unfortunately there is no such mechanism to specify a target version for glibc/gcc, and you have to either build against older glibc versions or rely on third-party headers. Both solutions are workable and allow you to create binaries with a wide range of glibc version compatibility, but they are not as ideal as direct support in the toolchain would be.
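For what it's worth, GCC and Clang do let you pin individual symbols to an older glibc version with the assembler-level .symver directive; a hedged sketch (the exact version tags depend on the architecture; GLIBC_2.2.5 is the x86_64 baseline, and you'd want to verify the result with objdump -T):

    // Sketch: pin the memcpy reference to the old x86_64 baseline version
    // instead of the newest one the build machine's glibc would choose.
    // Version tags differ per architecture; check the binary with objdump -T.
    // May need -fno-builtin-memcpy so the compiler emits a real call.
    #include <cstdio>
    #include <cstring>

    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(int argc, char** argv) {
        char dst[64] = {0};
        // Runtime-sized copy so this actually becomes a call to memcpy.
        std::memcpy(dst, argv[0], std::strlen(argv[0]) % 63);
        std::printf("copied into: %s\n", dst);
        return 0;
    }

It's per-symbol and doesn't cover anything pulled in indirectly, which is why most people just build against an old glibc (or a cross-toolchain) instead.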
Yeah maybe I should just be complaining that the Rust tool chain (or rather distros) should be including old versions of prebuilt glibc to link against?
> And while glibc is backward compatible & that generally does work, glibc is NOT forward compatible, which is a huge problem - it means that you have to build on the oldest distro you can find so that the built binaries actually work on arbitrary machines you try to run them on.
Isn't this easily solved by building in a container? Something a lot of people do anyway - I do it all the time because it insulates the build from changes in the underlying build agents. If the CI team decides to upgrade the build agent OS to a new release next month, or migrate them to a different distro, building in a container (mostly) isolates my build job from that change, whereas doing it directly on the agents exposes it to them.
glibc really doesn't want to be statically linked, so if you go this route your option is to ship another libc. It does work but comes with its own problems, mostly revolving around NSS.
And NSS defines how usernames are matched to uids, how DNS works, how localization works and so on. If you're changing libc you need to ship an entire distro as well since it will not use the system libraries of a glibc distro correctly.
I'm not sure why there are so many naysayers. I've been having the same thought ever since the initial release of the steam deck and think it's a great idea.
In my vision no trace of Linux is discoverable by the user.
Valve's SteamOS (and inspired distros) already basically does this. It's centered around games, but everything else (if you're lucky) is natively supported on Linux.
You can run non games on Proton. Most things work.
This is the first time I've heard of that [Ubuntu?] distro. Would be curious to hear how it's working out, from anyone using it as their daily driver, and how it compares to Mint etc. on the Linux side of things.
My mum was using it as a daily driver for all your average user PC stuff. It was decent, easy to use and more user-friendly than Mint, IMO, until an update borked it completely after two years of running. Unfortunately, post-update breakage is something that's typical of Ubuntu and most Ubuntu-based distros, so it's not really surprising [1]. I've since switched her to an immutable distro (Aurora [2]) and it's been rock solid.
> I can pull down a 20 year old exe and still run it today on Windows.
Why, oh why, do I have to deal with exe files that are not even 5 years old and don't work on my Windows laptop after an update... I wish I lived in the author's universe...
> In Windows, you do not make system calls directly, Instead, you dynamically link to libraries that make the system calls for you.
Isn't the actual problem the glibc shared library since the Linux syscall interface is stable? (as promised by "don't break user space") - e.g. I would expect that I can take a 20 years old Linux binary which only does syscalls and run that on a modern Linux, is that assumption wrong?
ABI stability for Windows system DLLs is also only one aspect, historically Microsoft has put a ton of effort into preserving backward compatibility for popular applications even if they depend on bugs in Windows that had been fixed in later Windows versions.
I expect that Windows is full of application specific hacks under the hood to make specific old applications work.
E.g. just using WINE as the desktop Linux API won't be enough, you'll also have to extend the "don't break user space" promise from the kernel to the desktop runtime environment, even if it means "bug-by-bug-compatibility" with older versions.
Yeah the direct syscall interface isn't a problem because it's so stable. The problem is almost entirely glibc. If GCC simply had a flag --glibc-version=2.14 or whatever then 99% of the problems would be solved.
I tend to just compile on a really old distro to work around this. Tbf you need to do the same thing on Mac, it just isn't so much of an issue because Macs are easier to update.
The other side of the problem is that the whole Linux ecosystem is actively hostile to bundling dependencies and binary distribution in general, so it's not a surprise that it sucks so much.
> Isn't the actual problem glibc since the Linux syscall interface is stable?
Yes
> I would expect that I can take a 20 years old Linux binary which only does syscalls and run that on a modern Linux, is that assumption wrong?
You’re right. But those apps are simple enough that we could probably compile them quicker than they actually run.
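For the record, "only does syscalls" means something like the sketch below; it uses glibc's syscall() wrapper for readability, but a fully static (or -nostdlib) build of the same thing depends on nothing except the kernel's stable syscall ABI:

    // Sketch: a program that relies only on the kernel's syscall interface.
    // Built statically (or with hand-written assembly and -nostdlib) there is
    // nothing left that breaks when the distro's shared libraries move on.
    #include <sys/syscall.h>
    #include <unistd.h>

    int main() {
        const char msg[] = "hello from raw syscalls\n";
        // write(2) to fd 1 (stdout); the syscall numbers and semantics have
        // been stable for decades ("don't break user space").
        syscall(SYS_write, 1, msg, sizeof(msg) - 1);
        syscall(SYS_exit, 0);
    }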
> I expect that Windows is full of application specific hacks under the hood to make specific old applications work.
Yes [0]!
> just using WINE as the desktop Linux API won't be enough, you'll also have to extend the "don't break user space" promise from the kernel to the desktop runtime environment
Yes, but. Windows is the user space and kernel for the most part. So the windows back compat extends to both the desktop runtime and the kernel.
You might argue it’s a false equivalence, and you’re technically correct. But that doesn’t change the fact that my application doesn’t work on Linux but it does on windows.
I'm not trying to defend Linux btw, and I appreciate Microsoft's approach to backward compatibility (some of the Windows games I play regularly hail from the late 90s).
Just wanted to point out that ABI stability alone probably isn't the reason why Windows is so backward compatible, there's most likely a lot of 'boring' QA and maintenance work going on under the hood to make it work.
Also FWIW some of the early D3D9 games I worked on no longer run out of the box on Windows (mostly because of problems related to switching into fullscreen). I guess those games were not popular enough to justify a backward compatibility workaround in modern Windows versions ;)
Again, you’re technically correct but I don’t think it matters.
Windows gives (in practice) DE, user space, and kernel stability, and various Linux distributions don’t. If you care about changing the Linux ecosystem to provide that stability it matters, but if you want to run an old application it doesn’t.
Why so complicated? Wine is cool if you need to run an existing binary but when you're writing your own software, why not just compile the platform independent part into a binary and make the platform dependent part a little library (open-source)?
> Imagine we made a new Linux distro. This distro would provide a desktop environment that looks close enough to Windows that a Windows user could use it without training. You could install and run Windows applications exactly as you do on Windows; no extra work needed.
Maybe we should fund ReactOS for end-user applications. Win32 is well established and isn't going anywhere. So why not take advantage of Microsoft's API design effort?
People who like and need Windows apps, people who want an out-of-the-box experience when running those apps, people who don't like the loss of performance when using Wine, people who generally like Windows but want to have an alternative in case they dislike where Microsoft is heading with Windows development.
That is a lot of people, me included. But since the Windows experience is somehow still tolerable, there aren't many willing to invest time or money into ReactOS. There are no corporate sponsors, since you don't make money from desktop OSes unless you use them to sell expensive hardware like Apple did.
Someone like Valve could have sponsored it, but they thought they could reach their goals with Wine while spending much less money.
Another sponsor for ReactOS could be a state actor like China or the EU: somebody with deep pockets who wants and needs to run Windows software but doesn't want their desktop to be under US control.
Any people who prefer Windows' primary design choices over Unix ones too.
> Another sponsor for ReactOS could be a state actor like China or the EU: somebody with deep pockets who wants and needs to run Windows software but doesn't want their desktop to be under US control.
I would love to see the EU do this, actually. Maybe we should pitch this as citizens.
ReactOS is too buggy to be used as a daily-driver operating system, but it's awesome as Windows kernel reference code. You want to know what a kernel-mode function does? Either read the documentation, or look at what ReactOS does. (Yes, leaked Windows code exists too, and it's even on freakin' Microsoft-owned GitHub of all places, but you can't legally look at that stuff!)
Hi, it's me, Mr Hair Splitting: to the best of my knowledge it's not illegal to read the source, but it would be illegal to use the source in your own application because you didn't author it or have a license to it
That's actually why the Wine and ReactOS folks want to disqualify folks who have read the source for fear they would inadvertently "borrow" implementation ideas, versus being able to explain to a judge how they, themselves, came up with the implementation. The key point is that Wine and ReactOS merely disqualify someone, not imprison or fine them
I played with ReactOS a few months ago in a virtual machine, and even in that relatively controlled environment it still crashed a lot.
I’ve been hoping that ReactOS would be the thing that truly murdered Microsoft Windows, but that really hasn’t happened; it seems like that’s happening via combination of a lot of applications moving to the browser and compatibility layers like Wine and Proton.
Linux has pretty good driver support nowadays, and outside of drivers Wine will have as good or better support for applications, so I am not completely sure what that says about the future of ReactOS.
It really depends on the game, but generally speaking, 20 year old games (that would be from 2005) work on modern Windows just fine. Games developed back in Win9x era are usually more troublesome.
I recently played Sinistar Unleashed on my Linux laptop.
I was never able to get this game working on regular Windows hardware, even when I bought the game brand new and tried running it on a contemporary computer, but it runs fine with Wine and Proton.
I decidedly could not get it working on a dual boot of Windows 10 (that I installed just to play it).
Granted, even with Wine it wasn’t trivial to get working, but it wasn’t that bad. The game is actually not bad, I would have loved playing it as a kid, but I had to wait 25 years for Wine to let me play it, apparently.
I actually didn't for this, I was able to mount the ISO with linux and then run the executable directly to install it, then futz around with Wine settings on the install path to eventually get the game launching.
Dancing to a proprietary tune is risky - they can decide to change the API or go after you with lawsuits if it becomes too competitive.
You can provide backwards compatibility in Linux - you can keep old versions of libraries installed. The more commercial distros do this to a greater degree. It's roughly what windows is doing to achieve the same result.
It's just a cost to arrange and since most distros aren't making billions in licensing they choose not to pay it.
Obviously I have nothing against a wine-focused distro but I wouldn't myself waste a fraction of a second writing code against the windows API by choice.
This is a wonderful idea and could succeed if the creator could rally the right devs and users. What it really needs is Ubuntu tier branding and UX work. This has been a rarity in the Linux desktop space.
I am hopeful SteamOS will bring us something very similar.
First-class support for Windows applications might just become doable, if Wine continues to progress and Win32 doesn't accelerate. There were a handful of quality of life improvements in previous Windows releases, but the biggest Win32 changes feel like they happened quite a while ago by now, and for good reason: Win32 is stable and mature. It's still a moving target, but not by nearly as much, and even if Microsoft wanted to move it for the sake of moving it, they might find more resistance than they can completely overcome. For now, I think Wine is still not good enough to recommend people just use for everything, though. It's incredible, but incredible doesn't make Photoshop install.
However, I also think that we could "solve" a lot of the compatibility problems.
There are tons of old Linux binaries that don't work anymore. But... They could. A lot of old binaries, surely the vast majority, could absolutely run on a modern kernel. The problem is the userspace. The binaries themselves contain oodles of information that could be used to figure out what they need to run, it's just that there's nothing in place to try to make sure that stuff is available.
I really believe we could make it possible for a distro, out of the box, to make old binaries "just work", double-click and run. Want to install an old game from an .rpm or .deb you have? The system could identify what base OS that is and install it into its own chroot with its dependencies, then create desktop icons for it. Execution failures? Missing libraries? Xlib errors? Let's have a graphical error message with actionable help.
Well, it could be done, anyway. If you wanted to follow the spirit of Windows here, it would be the right thing to do, and it'd help users who found a thing that says it supports "Linux" run that thing the way they would hope and expect it to run. Will it actually happen? Not unless someone makes it happen, and convinces distros, desktops and all other stakeholders it's worth shipping, then maintains and improves it going forward. It's a bit depressing when you realize that the technical part of implementing this is basically the least challenging part, at least for a proof of concept.
Yes. It's the same reason AppImage could work, if the licensing allows all the libraries to be included in the image: the Linux syscall interface is generally stable.
AppImages have a few problems. Ever seen how many dependencies you need installed to execute an AppImage?
You also need to be in an environment where you can create FUSE filesystems. And IIRC the reference implementation requires the deprecated fuse2 library to work.
Snaps, Flatpaks, AppImages and static linking are all solutions to a real problem. But I don't think AppImages are an especially good solution.
I talked a bit with Richard Brown about supporting AppImages in Aeon, the openSUSE immutable distro. But he believed the base system would need far too many dependencies specifically to support the AppImage runtime, including deprecated fuse2 support.
True, but I believe Flatpaks offer more than just "single executable applications". In the case of Aeon it's the primary way of installing additional software.
I still can't get MS Office 365 working on Linux over Wine, while no alternatives make me comfortable. Comparing Linux and Win32 ABI on Linux is nonsense without talking about Wine compatibility.
Like a lot of things in any OS it depends how far off the beaten track you go. I think there's a lot of gaming newcomers to linux where their main exposure for wine is via steam, which generally wraps things up for them and hides the details behind their launcher. If you need to go diving into the details for compatibility or you'd like to separate things out with prefixes then it's definitely much less polished and elegant, but I'd argue you could say the same thing about windows for compatibility with some old games or sites like PCGamingWiki.com would be a lot smaller as the combinations of windows (or DOS)+drivers+hardware over the years hasn't resulted in perfect consistent compatibility.
He is not wrong. My software compiled with Borland Delphi 1.0 works beautifully with Wine under Linux, and works just as well under Windows.
I'm saying this as a Java developer. Delphi eventually proved itself to be the true "compile once, run everywhere". I can imagine others who wrote executables for Windows before the .NET era can relate to similar experiences.
> Also, around 2001 was the big architectural change for desktop from DOS to NT, so this might seem like cherry-picking the timeframe selected.
It's true that the entire Windows product family converged onto the NT codebase with the release of Windows XP, but this isn't really relevant -- Windows executables and DOS executables were always different things, and despite being built on top of a DOS-based kernel, Windows 9x still supported the same Win32 binaries that ran under NT.
There was even an extension called Win32S that allowed Win32 executables to be run under Windows 3.1. The Win32 API dates to the early '90s, and modern implementations do support executables dating all the way back to the beginning.
I really want to statically link OpenGL and Vulkan for exactly this purpose, but neither uses a wire protocol (unlike X11 or Wayland). The whole "loading library" scheme feels like hazing for any beginner graphics programmer, on top of the already complex graphics APIs.
I know at least for OpenGL, not all graphics cards/drivers would implement the entire featureset. So there was a reasonable justification for the dynamic linking and bringing in functions one by one.
I think that a wire protocol could support that with a query response for supported versions and functions. The decision of dynamic linking removes the overhead of serialization, but removes the option of static linking.
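For anyone who hasn't suffered through it, the "loading library" scheme boils down to something like this stripped-down sketch (dlopen/dlsym shown for brevity, with library name and function chosen just for illustration; real GL code usually goes through glXGetProcAddress or eglGetProcAddress for extension entry points, but the idea is the same):

    // Sketch: open the driver-provided library at runtime and look up each
    // entry point by name, so missing features show up as NULL pointers
    // rather than as link-time failures. Link with -ldl on older glibc.
    #include <dlfcn.h>
    #include <cstdio>

    typedef const unsigned char* (*glGetStringFn)(unsigned int);
    const unsigned int GL_VERSION_ENUM = 0x1F02;  // GL_VERSION

    int main() {
        // libGL.so.1 is the traditional OpenGL ABI on Linux (provided by Mesa
        // or a vendor driver); Vulkan apps do the same with libvulkan.so.1.
        void* libgl = dlopen("libGL.so.1", RTLD_NOW | RTLD_LOCAL);
        if (!libgl) {
            std::fprintf(stderr, "no libGL: %s\n", dlerror());
            return 1;
        }
        auto glGetString =
            reinterpret_cast<glGetStringFn>(dlsym(libgl, "glGetString"));
        if (!glGetString) {
            std::fprintf(stderr, "glGetString not exported\n");
            return 1;
        }
        // Note: glGetString normally needs a current GL context; without one
        // it may return NULL, which is exactly the kind of runtime capability
        // check the dynamic scheme forces on you.
        const unsigned char* ver = glGetString(GL_VERSION_ENUM);
        std::printf("GL_VERSION: %s\n",
                    ver ? reinterpret_cast<const char*>(ver) : "(no context)");
        dlclose(libgl);
        return 0;
    }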
Yeah, just my thought. Instead of all the effort and overhead and awful API of Win32 just statically link musl. Still, there are of course downsides and limitations to either approach.
> While the Linux syscalls themselves are very stable and reliable, the c library on top of them is not.
Maybe just don't use that library then? Or don't do that ridiculous thing where you fumble around at runtime desperately looking for executable pages that should just be included in your binary.
It's not "some c library on top of them", it's glibc. You can use another libc, but that means you're going to be incompatible with the distro expectations in terms of configuration, because that's handled by glibc, so you just push off the instability to a different part of your system.
This is a wonderful idea. I have some doubts, though. It might not provide a seamless experience.
Just transforming Windows syscalls into Linux syscalls is not enough. There should be some form of emulation involved.
Many apps, like games, use hardware directly, and that means some additional layers of emulation.
>Imagine we made a new Linux distro. This distro would provide a desktop environment that looks close enough to Windows that a Windows user could use it without training. You could install and run Windows applications exactly as you do on Windows; no extra work needed.
I expect a rough user experience, some loss of performance and many bugs.
But I hope I am wrong, because the idea sounds really promising.
I want the opposite: I'd like a way to run the Windows kernel, drivers and most low-level OS stuff from Windows, but with a Linux user interface: Cinnamon, apt and all the Debian stuff.
I run Mint as my main OS, but hardware compatibility is still a headache in Linux for me.
You can resurrect SFU and build a replacement GUI for Explorer. You can't get rid of Win32, but you can cover up most of it. Implementing a Personality would be the Windows-way of doing this as it is designed for just what you ask.
I’ve never bought one of the dedicated Linux laptops, but I’ve had pretty good luck with AMD stuff.
My current laptop, Thinkpad P16s AMD Gen 2, was pretty straightforward to get working with NixOS. No special drivers, and everything, including WiFi and function buttons on the keyboard, worked fine without any kind of special concessions.
This was also the case for my last non-Mac, from 2017-2020, I got Ubuntu installed on there without many headaches, and it wasn’t a specific Linux laptop, though again it was AMD.
> In Windows, you do not make system calls directly. Instead, you dynamically link to libraries that make the system calls for you. This allows Microsoft to do all sorts of shenanigans at the kernel level while providing a stable API to userspace. This little stroke of genius allows us to have both Linux and Windows on the same machine at the same time.
Precisely correct. Linux should never have allowed system calls from outside libc or a big vdso.
I wouldn't be surprised if Microsoft does something to that effect in the future. Have Win 32 as a layer on top of Linux.
They seem to not be interested in locking down the hardware, and they don't make much money from selling Windows, and it shows. There aren't many strong improvements in Windows, and it feels like Windows is a platform they use to sell other stuff they make money with; they are in a similar position with Windows as Google is with Android.
Java already solved this problem, for the most part. This whole ABI nonsense really grinds my gears. It's essentially just a result of the silly decision to compile software into dubious blobs and ship those to users. You could get rid of an awful lot of malware and massively simplify software distribution if you were to distribute a platform agnostic intermediary representation of source code that preserves enough semantic meaning to eliminate ABI issues, then leaves the last step of compilation to the operating system. Shipping binary files is just plain bad in every way.
> Shipping binary files is just plain bad in every way.
Aren't .class and .jar files "binaries"?
> Java already solved this problem, for the most part
Maybe, just maybe, there are some drawbacks that mean that in fact it's not solved. Otherwise perhaps Java would've completely obsoleted C, C++. Some of us design applications which can't tolerate the worst case GC pauses, for example. Some of us design applications which we can't afford to spend extra time heap-allocating nearly everything.
Not at all. A .jar is just a zip with a different extension plus some metadata in META-INF (including dependencies). .class files are compiled Java files, but they do contain all kinds of metadata, including variable names and debug info (if you choose to retain it). They contain all methods and fields with their original names (along with annotations), so the reflection APIs work. Decompiling a class file is trivial, to the point that the original line numbers can be restored.
>Otherwise perhaps Java would've completely obsoleted C
Java does require a managed runtime written mostly in C/C++.
>Some of us design applications which can't tolerate the worst case GC pauses
The current breed of low-latency GCs without a stop-the-world phase should suffice for a large set of applications.
>we can't afford to spend extra time heap-allocating nearly everything.
That has not been an issue for quite some time: heap allocation can be elided, and under normal conditions it's just a pointer bump. Per-thread private allocation is by far the most common case, and the garbage collection of objects not referenced from the old gen is totally trivial too (i.e. memset). Even shared (cross-thread/area) allocation is a CAS'd bump in most cases. Note: copying/generational garbage collectors copy objects that are referenced by non-young-gen ones to another area, then zero the original area.
With that being said - Java (and managed languages) are no panacea.
Java can be optimized beyond all recognition, into bytecode that can no longer be represented by the Java language. At least that used to be the case in the past. It is not different from other binaries, except the target system is a virtual CPU rather than a real one.
Java also deprecated all sorts of things over the years. Not to mention applets being completely killed off. I have Java binaries from 25 years ago that could no longer run at all with a contemporary run-time already 10-15 years ago.
Not to mention much of real-world Java is platform-specific. Not often native code perhaps, but more subtle things like hardcoded paths or forgetting to properly use the correct path-separator. Installers used to often be platform-specific as well. Maybe that has been solved, but you would still run into trouble trying to install an old Java application that has an installer only supporting contemporary Windows and Mac systems.
> Java can be =optimized= beyond all recognition, into bytecode that can no longer be represented by the Java language.
I am not sure how that works; Java is all about JIT, and bytecode almost doesn't matter. Personally I can read assembly (and years [decades] back could just read hex), so even obfuscated (not optimized) Java is quite readable. Still, the class files do retain all method declarations, all constant pool entries and all bytecode (again, trivial to decompile). There have been few changes in the class format, of course.
> Java binaries from 25 years ago that could no longer run at all with a contemporary run-time already 10-15 years ago.
You need a shorter time frame: Java 8 (10 years back) could run pretty much anything from Java 1.0 (or even 0.9). It's quite a bit more recent, Java 9 (2017), that introduced Project Jigsaw. Prior to that, Java was by far the most backward-compatible platform. It still is, for most applications. Do note that deprecation means (meant) "do not use in new projects", not "it has been removed"; again, those are more recent changes.
>Not to mention much of real-world Java is platform-specific.
Again, that's super rare nowadays. Stuff like zstd might load a library but even then the native code interfaces are the same across pretty much all platforms. If you talk about native UIs you might have some point, of course.
>to properly use the correct path-separator.
Windows with its backslash is notorious, yet there has been no reason to use backslashes for like 25 years now. All Windows paths do work with forward slashes (besides, the separator is readily available in java.io.File).
> Some of us design applications which can't tolerate the worst case GC pauses, for example
First of all, I should like to point out that such people are overwhelmingly deluded and come to this belief without ever actually having tested the hypothesis. But at any rate, the idea of a JAR file doesn't require garbage collection. We can already see this with things such as Wasm, though it doesn't quite achieve what I would want.
I think you are just suggesting to replace binary blobs with other binary blobs e.g. CLR/.NET assemblies/executables or WebAssembly files.
Or do it the JavaScript way: distribute compressed, minified (kinda compiled) source code and have the runtime JIT-compile it (e.g. the V8 engine's TurboFan compiler).
I'm trying to replace platform dependant and easily breakable binary files with platform independent and change resistant files. Yes, those files are still in binary, but this is true of all files on a computer. What's useful about these new formats is that they retain a greater degree of information about the source code.
Java classes have an "ABI". Any binary representation of executable code that is supposed to be interacted with by other code necessarily defines an application BINARY interface.
While the Java implementation is suboptimal, there is really no need for it to be that way. I think the ideal way to go about it would be to run the compiler optimisations and whatnot then generate something semantically similar to C89 as output. Then you invoke a simple compiler with few optimisations on the target machine the first time the program is run, and cache the results somewhere. On all subsequent runs, you've got something which can actually run faster than pre-compiled C code because the compiler doesn't need to preserve the ABI so can do stuff like inlining dynamic library calls.
Sadly not. Wasm is attempting something similar, but it lacks certain things that would be important for this (the ability to specify in one module a type of unknown size and then query its size in another module at link time).
> While the Linux syscalls themselves are very stable and reliable, the c library on top of them is not. Practically all of userland is based on libc, and therefore by proxy Linux itself has a binary compatibility problem.
Can't we freeze the functionality of libc? Why does it need to be updated so frequently?
And even if we make changes to its implementation, why do we need to bump the version number if the underlying API is still the same?
> "In Windows, you do not make system calls directly. Instead, you dynamically link to libraries that make the system calls for you. This allows Microsoft to do all sorts of shenanigans at the kernal level while providing a stable API to userspace."
Or, in other words, "We can solve any problem by introducing an extra level of indirection."
I really don't see Microsoft blocking unsigned exe. There's just too much old Windows/DOS software out there still in use, sometimes running critical infrastructure.
Like someone already said somewhere, it will come in steps.
Windows S Mode was already a test.
The nagging, warning and outright "blocking" (while hiding the "run anyway" button under "more info") is the first step. This already is a warning to software vendors that something will come.
The next step will be blocking unsigned exes on Home Editions (not on Pro or Enterprise), so that software vendors and most of places depending on unsigned old software can move on to signed software.
Then Home and Pro Editions of Windows won't be able to run unsigned software anymore, and if you need unsigned software to run you'll have to use an Enterprise Edition.
The last step would be no windows can run unsigned software anymore and if you need unsigned software running, you'll need to run that one on an Azure instance of Windows which can still run unsigned software or (if you can't / don't want to run your software in the cloud) you will have to contact Microsoft for a special Windows version, costing lots of money. But if your business depends on that one single unsigned exe file, you might be ready to pay for that one.
Exactly, wine is all that's needed here for windows stuff. And we have snap, flatpak, docker, and a few other things for Linux stuff.
We'll probably get a bit of irony in a few years when somebody at MS realizes that they can just use wine on top of their Linux emulation layer to run any old MS legacy software from the past three decades+ and then cleans up the rest of windows to not have any support for legacy APIs. Because having that is redundant. Emulators are great. There's no need to run ancient software natively.
> There's also no guarantee that a binary produced today on Linux will even work on the various distributions of Linux today due to the same installed library version problem.
On Linux, you are supposed to share the source code, not the binaries. FOSS source is easier to fix than a binary blob. This is why the FSF exists.
It is not just _running_ things that's the problem; authentication and authorization are massive issues. I've attempted to run various audio plugins with Wine which either do not run at all, or run on a one-time basis, which is not feasible for any long-term setup. Oh, if only you could run them under a VM...
This would be an option if the Linux userland wasn't a mish-mash of unconnected developers with their own practices and release cadence. It's why we have LTS distros where the company will put in the massive amount of work to preserve binary compatibility.
But the trade-off is that the software you have in your repos will be really old. At the end of your RHEL support cycle, the libs will be a decade out of date.
How much more of this opinion should I read when it’s established in the third paragraph that the author doesn’t realize that AppImage does not bundle a libc? Flatpaks do, and Snaps are a closed system that bundles _Ubuntu_, so really the answer is Flatpaks. And the rest of the world has also come to that conclusion.
Every year or two I check the status of ReactOS hoping that some day I will have a good alternative to Windows. After checking the project status today, it seems that day is still far off.
How about packaging Linux apps as Windows apps so they can take advantage of the stability of the Win32 ABI? Is there a way to do this automatically, possibly using AI?
> A modern Wine wrapper for macOS built with SwiftUI
I think you misunderstood GP's request of "running macOS apps on Linux" so you swapped the host and guest OS, and then transposed the guest OS under "emulation"
Everybody is commenting on possible implementations or how similar solutions already exist.
I would like to focus on an overlooked but very important fact: most of the important software in the Linux ecosystem is open source. Yeah, the ELF binary from 20 or 25 years ago might not run out of the box anymore, but you have the source code, and access to the whole history of the needed libraries' source. It will for sure not be a zero-effort adventure, and it will not work for proprietary, closed-source software, but it's doable for most old, abandoned Linux software.
> MacOS has a feature called Gatekeeper, which limits what software you can run on your Mac to only those applications that Apple approves
This is a lie. Gatekeeper in no way limits the software you can run. It presents an easier experience to launch software downloaded from a browser if the developer chose to submit it to apple for a malware scan.
As someone stuck in macOS trying to run docker, I can tell you that the impedance mismatch between a what a "file" is, and its "location", and the meaning of "listen on localhost", and how much "memory" an application has makes virtualization absolutely horrible for trying to run just one program on a different OS (or arch) than the rest of your day to day
I have had the same idea for a while, honestly. Yeah you can install wine and binfmt_misc, but it doesn't come by default. It should be the default. Nobody should be distributing binary Linux applications in this day and age, especially not for the desktop. Win32 is just so much better designed from the ground up for desktop apps, it's not even funny. As a simple example - a Win32 .exe has an icon to tell the user immediately what it is, but Linux apps need a ton of hacks and extra files (wtf is a .desktop) which can get out of sync at the drop of a hat. Also the ABI is indeed stable. You don't have to worry about the graphics and audio APIs disappearing etc.
Like, just for the audio stack we had: OSS is deprecated, so use ALSA; actually, direct ALSA device access is deprecated, use this special ALSA config with a bunch of plugins; actually, directly calling ALSA is deprecated, use aRts; actually, aRts only works on KDE, use ESD; actually, ESD is deprecated, use PulseAudio; actually, PulseAudio uses too much CPU, rewrite everything to use JACK; actually, JACK is only for audio workstations, go back to PulseAudio; actually, PulseAudio is deprecated, switch to PipeWire... I am pretty sure in 6 months I will be reading how PipeWire is deprecated and the new, definitely final form of the Linux audio stack will be emerging (written in a combination of Rust and Emacs Lisp).
In short, Linux binary compatibility is a clownshow and the OS itself isn't engineered for developing graphical desktop applications. We should stop pretending it is and compile everything user-facing for Win32 ABI, with maybe a tiny extension here and there.
> Try doing the same with a Linux binary that's just a year old. There's no guarantee that it will be able to run based off some update that has happened.
What?? I've used Linux for quite a while, and I've had a very good experience with software. I struggle to follow what they're talking about, Linux works just fine. Using Windows software is also pretty easy and like many people have already mentioned wine-binfmt is basically what this article is describing.
I have points to burn, so I'll post, because I know this will scratch some folks the wrong way- apologies in advance.
I use Windows. In fact, I like Windows. I know lots of (ok, more than 5) greybeards who feel exactly the same way. I don't want Linux to be Windows, but I also don't want Linux on my personal desktop either.
I have a Mac Mini M1 on my desk, and I use that for the things it's good for, mainly videoconferencing. It's also my secondary Adobe Creative Suite machine.
On my Win11 desktop, I have WSL2 with Ubuntu 24.04 for the things it is good for- currently that's Python, SageMath, CUDA, and ffmpeg. For my Unix fix, I use Git Bash (MSYS2) for my "common, everyday Unix-isms" on Windows.
I also use PowerShell and Windows Scripting on my box when I need to.
Why? Well, firstly, it's easy and I've got stuff to do. Secondly, cost is not really an issue- I bought my Windows Pro license back with Win7, and it was about $180. That was maybe 15 years ago. They have graciously upgraded me at every step- Win7 -> Win10 -> Win11, all at no cost. Even if I had had to buy it, my Taco Bell tab is higher in any given month than a Windows license (love that inflation).
Why else? Everything works. I get no annoying popups, and I really no longer sweat garbage living on my drive, because that ship has sailed; wanna waste 50GB? Sure, go ahead.
But the most important reason? My hardware is supported. My monitors look great; printers, scanners, mice and USB drives & keys all work. In fact, >90% of the time, everything just works. Further, I can share effortlessly with my Mac, all my Linux servers speak SMB (CIFS), Wireshark works, and my programs are all supported including most open source software. And I do run apps that are 20+ years old from time to time.
Truth be told, I have tried the dance of daily driving Linux, and it's a laundry list of explanations to others why my stuff is different or deficient in some way. The kicker is that my clients don't care about purity or coolness factors.
Linux has its place. But please don't put it on my main machine, and please don't give it to my family members. They're only being nice by living with a sub-par desktop experience. It will always take a herculean effort to stay on par with Windows or MacOS, and no one really wants to put their money where their mouth is.
Please don't misunderstand. I admire and respect authors of open source software, and my servers thank them. But being a contrarian and dogfooding what KDE and GNOME put out, fighting with Nvidia and AMD, dealing with constant driver interface changes, and not having proper commercial software support is not my idea of fun. It was 30 years ago. Today? I'd rather hang with my daughter or write some code.
These distros have had 35 years. I don't know what else to say.
I have the same experience. I've tried to use Linux as a desktop since 2000. And tried and retried. Year after year and distro after distro.
Until I realized the desktop experience on Linux will never be on par with Windows, that I need things to just work instead of constantly fiddling to make them work.
I discovered that Gimp is not Photoshop and Libre Office is not MS Office. And I discovered that running things under Wine is not always great.
I discovered I need and want to run Windows software.
I discovered that I like the hardware to work out of the box.
For me, Windows is great as a desktop. And I develop microservice based apps that run under Linux containers/Kubernetes in cloud.
Docker Desktop, WSL and Hyper-V are taking care of all of my potential Linux needs.
I also have a MacBook Pro, but I don't care much about the OS, I mainly bought it for the good battery life and use it to browse the web and watch movies in bed or on the couch or while traveling.
> But the most important reason? My hardware is supported. My monitors look great; printers, scanners, mice and USB drives & keys all work. In fact, >90% of the time, everything just works.
Across many machines over 20 years, the only things I have had issues with are one printer and one graphics card, so I would say Linux manages better than 95% "just works".
I strongly disagree. Linux (KDE) is a far superior desktop experience these days, compared to Windows 11. Have you even seen the new Win11 taskbar and the shitty Start Menu - they ruined something which they perfected in Win7. The overall UX has taken a deep dive - like with the unwanted removal of classic Control Panel applets like "Window Color and Appearance" (which doesn't have a replacement), and the continued bolting-on of unwanted crap like Copilot and forced MS Accounts - like, even the CLOCK app requires you to sign-in (why?) [1]. There are even ads in MS PAINT [2]! Tell me if this is acceptable?
> It will always take a herculean effort to stay on par with Windows or MacOS, and no one really wants to put their money where their mouth is.
I also disagree with this, in fact, Linux has surpassed Windows and macOS in many areas.
Take updates for instance: especially on distros with atomic updates, they are far more reliable and a pleasant experience compared to Windows. Atomic transactions mean updates either apply or don't - there's no partial/failed state, so no chance of an update failing and potentially borking your PC. Plus, distros which offer atomic updates also offer easy rollbacks - right from the boot menu - in case of any regressions. Updates also do not interrupt you, nor force you to reboot unexpectedly - you reboot whenever YOU want to, without any annoying nag messages.
Most importantly, updates do not hold your PC hostage like Windows does - seeing that "please wait, do not turn off your computer" has got to be the #1 most annoying thing about Windows.
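To make the atomic-update flow concrete, here's roughly what it looks like on the rpm-ostree based distros mentioned further down (Aurora and Bazzite are built on that stack) - a sketch, not a full guide:

    rpm-ostree upgrade     # stages a complete new deployment; the running system is untouched
    systemctl reboot       # switch to it whenever YOU want to
    rpm-ostree rollback    # regression? make the previous deployment the default again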
It's amazing that even with 40 years of development + trillions of dollars at their disposal, Microsoft still can't figure out how to do updates properly.
Finally, your PC will continue to receive updates/upgrades for its entire practical lifespan, unlike Windows (regular versions) which turns a perfectly capable PC into e-waste. Win11 blocking Kaby Lake and older CPUs is a perfect example of planned obsolescence and honestly, it's disgusting that people like you find this acceptable.
There are several other areas where Linux shines, like immutable distros, Flatpak apps, sched_ext schedulers, x86_64 microarchitecture optimisations, low resource usage... I could write an entire essay about this, but that will make this lengthy post even lengthier.
> But being a contrarian and dogfooding what KDE and GNOME put out
Please don't put KDE in the same sentence as GNOME. The GNOME foundation have lost the plot and have betrayed their fans, ever since they released the abomination that is GNOME 3. KDE on the other hand, still delivers what users want (ignoring the KDE 4 era). KDE v6 has been a near-flawless experience, and still has a classic, familiar desktop UX that any old time Windows user would love and feel right at home with, unlike Win11.
> fighting with Nvidia and AMD
Please don't put nVidia and AMD in the same sentence. nVidia sucks and that's completely nVidia's fault for not supplying a full opensource driver stack (their new open kernel module is an improvement, but many driver components are still proprietary and rely on their GSP).
AMD on the other hand, has been a super-pleasant experience over the past few years. Ever since Valve got involved with their Steam Deck efforts, AMD drivers, KDE, Wine and several other related areas have seen massive improvements. I seriously doubt you would have any major complaints with AMD GPUs if you've tried them on a recent distro.
> not having proper commercial software support
What sort of commercial software does your family require? Mine don't need any (and nor do I). The family members who are still working have their own work-supplied Windows/macOS laptops, so that takes care of the commercial side of things, and outside of work we don't need any commercial software - and we do everything most normal PC users do - surfing the web, basic document/graphics/video editing, printing/scanning, file/photo backups etc. Everything works just fine under Linux, so I'm not sure what we're missing out on by not using commercial software.
> These distros have had 35 years. I don't know what else to say.
Maybe don't use an ancient distro that's stuck in the past? Try a modern immutable distro like Aurora [3] or Bazzite [4] and see for yourself how much things have changed.
"Maybe don't use an ancient distro that's stuck in the past? Try a modern immutable distro like Aurora [3] or Bazzite [4] and see for yourself how much things have changed."
This has always been the riposte to Linux-for-normies sceptics - "you haven't tried these modern distros, X, Y, Z".
I've gone down that route several times and they always have issues, from drivers to config settings to just being too different compared to Windows or even MacOS.
Non-tech (and especially older) people will generally have expectations that obscure linux distros (despite their good intentions) cannot meet; they may well suit users who are more confident and curious with sorting things out themselves but this idea that somehow "this time its different" is ultimately on the distro-champions to prove; they've been wrong too many times in the past.
> I've gone down that route several times and they always have issues, from drivers to config settings to just being too different compared to Windows or even MacOS.
You really should give KDE-based distros a try, the UI isn't that much different from the traditional Windows UI paradigm. In fact I'd say KDE is more similar to the Windows 7 UI, than Windows 11 is.
Also, drivers aren't really a problem with compatible hardware. As the person recommending/installing Linux, it is your duty to ensure that they've got compatible hardware. In my experience, anything older than a couple of years, from mainstream brands, works fine. The only couple of cases where I've had to manually install a driver were for printers, but even that is almost a non-issue these days thanks to driverless/IPP printing.
> Non-tech (and especially older) people will generally have expectations that obscure linux distros (despite their good intentions) cannot meet
I'm surprised you mentioned non-tech and older people, because that's exactly who my ideal targets for Linux are, because their needs are simple, predictable and generally unchanging. It's usually the tech-savvy and younger people who've got complex software needs and specific workflows that find it hard to adjust to Linux. This was also the case for me, I had over a decade worth of custom AutoHotkey scripts + mental dependencies on various proprietary software that I had to wean myself off from, before I ultimately switched.
Older, non-techy folks are mostly fine with just a browser and a document editor. This was the case with my mum, and pretty much most of my older relatives. As long as you set up the desktop in a way it looks familiar (aka creating shortcuts on the desktop), they don't cause too much of a fuss. Initially there may be a "how do I do this" or "where did xxxx go?" depending on their needs/workflow. At least in my mum's case, there wasn't much of an issue after showing her the basics.
I'm curious what needs the older folks you know have, which can't be met with an atomic KDE-based distro like Aurora.
> Have you even seen the new Win11 taskbar and the shitty Start Menu - they ruined something which they perfected in Win7.
Yes, one of the biggest visible downgrades of a core feature in W11! It is awful and buggy, but then Windhawk mods, menu alternatives and app launchers exist, so it can be tweaked to be good again (though they didn't perfect anything in W7 or any other version; there is not a single perfect UI component)
> The overall UX has taken a deep dive - like with the unwanted removal of classic Control Panel applets like "Window Color and Appearance" (which doesn't have a replacement)
Again, bad stuff, though the classic Control Panel was also bad; the only consolation is that at steady state you don't use those often
> CLOCK app requires you to sign-in (why?) [1]. There are even ads in MS PAINT [2]! Tell me if this is acceptable?
It isn't, but then why would you ever use these bad stock apps even if they had no ads??? Much better options exist!
But all of those mostly fixable annoyances pale in comparison with the inability to have a great file manager like Directory Opus, or to find any file anywhere instantly with Everything, or to use a bunch of other apps (and then you'd have plenty of other issues tweaking the OS UI, or the sleep and hardware compatibility issues people keep complaining about)
Windows is still enshittified and everyone needs an exit plan.
For family users I recommend macOS. For windows apps I have virtualized win11 IoT with a passed-through GPU. My monitor has multiple inputs and I can't even tell it's not native.
Backwards compatibility is generally a good thing. It certainly has its downsides (like security) which can be more or less of a concern depending on backwards compatibility techniques.
He is missing the point. Flatpak/Snap are not just an alternative way to ship binaries. They are a way to isolate applications and what they can do. The landscape has moved from protecting the system or one user from another to protecting the same user's applications and their data from each other, especially for desktop environments. That is not even on the map for Windows, its security model and its applications. It is a big jump backwards.
Every application should be its own 'user' (sub-user), while the login user / manager should be the group leader of all those 'sub-users' / 'agents'.
A change in security model from the 1970s/1980s might help with security and isolation. However that same security would also generally be a pain without really smooth management in the desktop environment / shell.
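A rough sketch of what that can already look like with systemd transient services - DynamicUser allocates a throwaway uid for the process; the paths here are made up and display access is deliberately left out:

    sudo systemd-run --pty \
        -p DynamicUser=yes -p PrivateNetwork=yes -p ProtectHome=yes \
        -E WINEPREFIX=/tmp/prefix \
        wine /srv/apps/example/app.exe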
People always talk about this “I can run a 20 year .exe file” situation but when I tell you that I have never, in 30+ years, EVER had a need to run a 20+ year executable, it just makes me go… yeah, and?
Sure I believe backwards compatibility is a nice to have feature, but I have never, nor do I think I will ever, have a need to run 20-year-old software.
My experience is that a 20 year old exe file has a greater chance of running in wine than it would in windows, and a 20 year old Linux executable is going to fail because the shared libraries it depends on are unobtainable
20-year-old exe files can fail on both Windows and WINE if they touch something relatively obscure. It's easier to throw files at the problem under WINE though (you can just throw away the prefix if you break something). The single biggest mistake WINE makes is defaulting to a single shared prefix (and the second sin is similar - trying to integrate folders and menus naively).
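On the prefix point, keeping one prefix per app is a one-liner, and "throwing away the prefix" really is just deleting a directory - a sketch with made-up paths:

    export WINEPREFIX="$HOME/prefixes/oldapp"   # hypothetical per-app location
    wine ~/Downloads/oldapp-setup.exe           # first run creates and populates the prefix
    rm -rf "$HOME/prefixes/oldapp"              # broke something? delete it and start over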
20-year-old dynamic binaries on Linux can almost always work today; snapshot.debian.org has all the old libraries you'll ever need. The potential exception is if they touch something hardware-ish or that needs exclusive access, but this is still better than the situation on Windows.
20-year-old static binaries on Linux will fail surprisingly often, since there have been changes in filesystem and file layout.
> NOTE: I am not against Apple or Microsoft. They have amazing engineers! I don't think they're being malicious. Rather, I think the incentives for these companies are improperly aligned.
Improperly aligned incentives? Who gets to say what that is?
Is it "improper" to maximize profit per their own board's guidelines?
I have a feeling OP has some predefined notion of nobility they expect people to somehow operate under.
> Is it "improper" to maximize profit per their own board's guidelines?
From the standpoint of the end user the incentives are improperly aligned. If they had made hammers they would have included licence agreements for their use with specific types of nails and actively prevented users from using competitor's nails. They also would have made sure yesterday's hammer would not be sufficient to hammer in today's nail, they would have added cameras to observe what the user was doing so as to sell 'targeted' advertising - during which the hammer would not strike any nails but would sing like the singing sword in Who Framed Roger Rabbit - and they would have made sure that no matter how agile the user was with his hammer the thing would never be 100% reliable.
Of course hammers are far less complex than computers and operating systems. Maybe this is because they're made by tool manufacturers and not by tech companies, maybe it is because they're old tech. A modern hammer is what Ford would have produced if he had listened to his customers who asked him for a faster horse so maybe there is a whole world of construction efficiency waiting for the Henry Ford of Hammersmiths. Or, maybe - probably - sometimes it is better to get that faster horse, that titanium hammer or that free software operating system which works for you and nobody else.
To the extent binary distribution is "unstable" on Linux, it's because users aren't expected to just download random binaries from wherever, as is normal on Windows (and Mac, for that matter). Users are expected to either obtain binaries from their distro, or compile them from source. In either case, all of the issues about binary distribution being "unstable" are invisible to users. Which is the point. People who want the broken Windows software model can just..run Windows. The last thing any sane Linux user wants is to make Linux into Windows. I run Linux to avoid Windows.
There are a lot of reasons to avoid Windows outside of package/software management.
Linux has appimage, it's already capable of running "loose" native executables like Windows does. Flatpak, Snap and Docker all break the "distro repository or compile from source" model. The primary method of playing video games is installing Steam and running Windows software inside a container. This purist vision you have of Linux doesn't exist.
Linux users? That is, people who use a device that runs Linux? Like Android?
Or you mean desktop Linux users, though there aren't "a lot" of those. There's the business/corpo/science deployments but I don't think we're talking about that, but rather specifically home use. So we're talking mostly enthusiasts. I'd imagine many of those and perhaps even most at least lightly game and Steam is effectively the default place to purchase games on Linux. Do you run anything in an emulator? Impure! Purge with fire!
The software repository+compile from source paradigm isn't "Linux", it's not even "desktop Linux". Linux can execute software in a myriad of different ways, what makes Linux Linux is that it's infinitely flexible.
Obviously, since that's what the article and this discussion is about.
> there aren't "a lot" of those
Depends on what you consider "a lot", I guess. The article that this discussion is talking about apparently thinks there are enough to make its proposal for "converting Linux to Windows" worth an effort.
> Obviously, since that's what the article and this discussion is about.
I was just making the point that Linux isn't any one thing, it's everything. You want an OS that handles things the way you want? Well, you do, and others should be given the same privilege. It's silly to stamp your feet about certain implementations or features existing within the Linux ecosystem, the whole point of FOSS is that they can all exist.
> The article that this discussion is talking about apparently thinks there are enough to make its proposal for "converting Linux to Windows" worth an effort.
From the article:
"Imagine we made a new Linux distro. This distro would provide a desktop environment that looks close enough to Windows that a Windows user could use it without training."
It isn't proposed as a distro for people who use Linux, but for people who use Windows but may want to move to Linux. I was one of those people; I switched my gaming PC from Windows to EndeavourOS last year, though I've been using various distros for the past 20 years on other devices. I switched because Windows is becoming a shit show. I know my way around Linux, I use Blender, Krita, Gimp, Inkscape and Reaper, all native apps, but sometimes I just want to install a Windows application because the functionality I require simply makes it necessary. Dual booting is a massive hassle, VMs fuck up productivity workflows, and while I can sometimes get things working with Wine it's a hassle. I might not use the proposed OS, but the components that would allow for seamless installation of Windows software? I'd love for those to exist.
Speak for yourself. I have been using Linux for a decade and would want nothing more if standalone application setups like those in Windows became the norm of software distribution.
Centralized package management is a curse. Apps should be responsible for their own updates, not the OS.
OTOH I view application installation as a separate skill from $THING_THIS_APP_DOES.
So I would rather the app authors just focus on perfecting their apps, while said apps can then be packaged and distributed in bulk by different sets of people trained to handle those challenges.
What I very certainly do NOT want is:
* Apps automatically checking for updates on startup — since they can't check while they are off — leading to needlessly leaked data crossing the network about exactly when I'm starting up exactly which apps (since they dial home to predictable locations regardless of TLS usage)
* Apps constantly filling systray with their own bespoke updaters (and "accelerators" which just means the app is running 24/7 but minimized to tray ;P )
* App launches updater, updater window says "can't update because app is running". Close app, wait for update, now I have to go hunt down the document I had originally opened the app with. Next time an app launches an updater, I leave it on its splash screen and go to close the app.. naturally that also closes the updater since this time around the one is a sub-process of the other. (I recall earlier versions of Wireshark causing me much grief on these fronts, for example)
* More diverse attack surface area for hackers to infect my PC: instead of trying to juke a distro who has at least some experience and vested interest in defending against poisoning, just juke any single software author less specialized in distribution security and take over their distribution channel instead.
Great news, there's a distro called Slackware that eschews centralized package management (besides optionally delivering updates for preinstalled packages). It had been around for ~20 years before you started using Linux. If you'd like to rid yourself of the curse of centralized package management in favor of running "./configure && make && sudo make install" like a real man, you should give it a try.
> Apps should be responsible for their own updates, not the OS.
Distros are not quite "the OS". You don't need a distro to run Linux.
The role distros play as far as Linux applications are concerned is more like an app store in the Windows (or Mac) world. Of course Apple has locked down their smartphones that way basically since their inception, and their desktop OS has been becoming more and more like that. So has Windows.
If you mean, do we want desktop Linux to have distros, that ship sailed several decades ago. Yes, you don't need a distro to run Linux (as I said before), but most people who run Linux use one.
However, Linux distros, while they play an app store-like role, are still very different from the Windows or Mac app stores. First, they don't restrict what else you can install on the system; you don't have to jailbreak your Linux computer to install something that the distro doesn't package. Second, they don't insist that you set up an account and hand over your personal information, or nag you constantly if you don't do that.
Every OS is heading towards centralised application updates anyway. The Windows Store does that AFAIK, and I am guessing macOS's store does too. On the major mobile OSes it's the only way almost all users install anything.
The big difference on those other OSes is that packaging is done by the original author, and they don't have to worry about things like distro release cycles (and package freezes etc).
Windows Store is most similar to Flathub in that regard.
> At least in a typical linux distro the binary is built by the distributing org, with some review of where the source comes from.
This "feature" falls apart for nonfree software, which most commercial apps are. You can use Spotify and Steam's PPA but will similarly have no idea what source was included in them.
explain to me why the HELL should i be limited to only running binaries that my distro vendor has deigned to provide, or jump through endless hoops to obtain and build source code (which by the way might not even be obtainable OR buildable). if i have a binary from 5, 10, 15 years ago i should just be able to fucking run it on my fucking computer.
> At least in a typical linux distro the binary is built by the distributing org
... which often patches upstream code in ways that upstream neither approves of nor wants to support. And then, when things break, the user can't go upstream, and the distro package maintainers simply don't have enough time to deal with all the user reports.
What's up with all this "My 20 year old software still works!!!"? Who actually runs unmaintained abandonware? I would much rather OS devs not waste time maintaining legacy cruft and instead evolve with the times.
Is this sarcasm? Some of my favorite games are 20 years old. Windows is popular in a lot of manufacturing spaces because the equipment software doesn't get updated and only connects to old programs over 16 bit serial ports.
There's a whole world out there of legacy software that is happily churning along, and doesn't need to be updated.
> Thesis: We should create a distro of Linux that runs Windows binaries by default via Wine.
On Debian you're one package away:
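Presumably the package meant here is wine-binfmt, which registers Wine as the binfmt_misc handler for PE executables; a sketch of both the packaged and the manual route (package name and paths assume Debian defaults):

    # the one-package route
    sudo apt install wine wine-binfmt
    # the manual route: tell binfmt_misc that files starting with "MZ" go to Wine
    sudo sh -c 'echo ":DOSWin:M::MZ::/usr/bin/wine:" > /proc/sys/fs/binfmt_misc/register'
    # either way, Windows executables can then be launched directly
    chmod +x ./some-program.exe && ./some-program.exe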
Otherwise you're still pretty close: on other distros the manual binfmt_misc registration above is all it takes.

This is great! Someone else mentioned binfmt_misc. I didn't know about that.
The next step is to isolate the Windows applications: you could use different WINEPREFIX, but I think the better way is to do it like android: one "user" per application.
It's not just to prevent applications to read other applications files, but also to firewall each application individually
For example, if you don't want the application you've mapped to user id 1001 to have any networking, use iptables with '-m owner --uid-owner 1001 -j DROP'
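Spelled out as a full rule (the owner match only applies to locally generated packets, so it lives in the OUTPUT chain; the nftables line assumes an existing "inet filter" table with an "output" chain):

    sudo iptables -A OUTPUT -m owner --uid-owner 1001 -j DROP
    # rough nftables equivalent
    sudo nft add rule inet filter output meta skuid 1001 drop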
I moved from Windows to Linux a few years ago, I have a few Windows apps I still love a lot (mostly Word and Excel) and thanks to wine I will always be able to use them.
They are also extremely fast: cold starting Word (or Excel) on my laptop takes less than a second, and use far less RAM
Personally, I'd rather purchase a few shrink wrapped old versions of Office from ebay than bother with LibreOffice, Abiword or the online version of Office.
EDIT: I can't find the old recording I made showing how fast it was, but here's what it looks like on my hyprland desktop: you can see in btop it doesn't take much resources https://www.reddit.com/r/unixporn/comments/11w3zzj/hyprland_...
> The next step is to isolate the Windows applications: you could use different WINEPREFIX,
In case you're not aware, wine prefixes each use their own settings, but are not isolated from one another.
https://gitlab.winehq.org/wine/wine/-/wikis/FAQ#how-good-is-...
> but I think the better way is to do it like android: one "user" per application.
This would help somewhat, assuming you don't run them all in one user's X session. On Linux, some desktop environments have a "switch user" action to start a separate desktop session running as another user on another virtual console. You can switch between them with Control+Alt+F2, etc.
> In case you're not aware, wine prefixes each use their own settings, but are not isolated from one another.
That's a great point!
I'm aware, which is why recommend instead that wine apps should each be run under a different userid: I don't want any given app to have access to anything that it doesn't absolutely need
> This would help somewhat, assuming you don't run them all in one user's X session
When I start a given wine app, the script starting it allows this user id to render on my Xwayland
It is not as secure as running each on its own X session, but wayland compositors can offer more isolation as needed.
Lutris creates dedicated wine prefixes for the applications/games, so you can use it directly. A lot of apps are also installable with some patches provided by Lutris itself
> It's not just to prevent applications to read other applications files, but also to firewall each application individually
Why would one want to prevent applications from reading other applications' files?
We're talking about running desktop applications designed for an OS that isn't built around any concept of application isolation, and for which using a common filesystem is a primary mechanism of interoperability.
> Why would one want to prevent applications from reading other applications' files?
Because I can, and because I don't trust Windows application to be secure.
Thanks to that, I have no problem running 15 year old office software: even if I knew it was malicious, I also know there's nothing it can do without network access, without file access, and with resources constrains (so it can't even cause a denial of service, except to itself).
In the worst case, I guess it could try to erase its own files? (but it would then be restored from an image on the next run, and I would keep going)
> I have no problem running 15 year old office software: even if I knew it was malicious, I also know there's nothing it can do without network access, without file access
Great. Except... WTF can you do with an office application that can't read or write files?
This is really interesting.
I thought it was impossible to run newer versions of Office on Linux.
Myself I often prefer LibreOffice, but more options are more options!
Office 2013, the last non-Click-to-Run version, worked wonderfully on Wine a few years ago
When I did my tests, Office 2007 and 2010 were the most stable
I will try Office 2013 (I'd like a version that works well in wine64!)
I don't have the specific setup archived, but I believe my basis for it was a script included in winetricks at the time which installed Office 2013 professional based on offline 2013 proplus 32bit iso.
WineHQ reports that installer for 2013 64bit is "gold", but apps required few tweaks to be applied and Access sometimes failed.
Generally it seems the 2013-2016 era works on Wine, per the few applications I checked
What would be the latest usable office version recommended for this?
Yeah, that has been the default for a lot of Linux distributions for quite some time now (if you install wine).
not sure I want my parents to be able to double click Windows binaries and have them execute with their privs
My parents can't do that. They're on macOS.
Wine bottler works well for that.
Do wine and Wine Bottler work on the new Apple silicon Macs? Maybe they do for the old Intel machines, but I'm not quite sure about the new Macs.
Yeah, it does! You probably need Rosetta 2 installed first though.
Is it just me or does wine need a bit more polish? Dialogs and menus are rendered with some weird microscopic font. GDI text rendering seemingly doesn't use font fallbacks, so even something like Scintilla or an ebook reader doesn't quite work under wine.
Many commonly used Windows fonts are licensed under proprietary terms, preventing their inclusion with Wine.
Winetricks[1] can be used to acquire and install a set of default fonts directly from Microsoft.
Furthermore, Windows font fallback differs substantially from that of Linux and similar systems, which generally utilize Fontconfig and FreeType with font relationships defined in configuration files. In contrast, Windows (and consequently Wine) employs a font linking mechanism[2]. Windows handles font linking natively, whereas Wine requires manual registry configuration[3].
[1] https://github.com/Winetricks/winetricks
[2] https://learn.microsoft.com/en-us/globalization/fonts-layout...
[3] https://stackoverflow.com/questions/29028964/font-recognitio...
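For what it's worth, the usual first-aid steps look something like this - the winetricks verbs are real, but the fallback font in the registry line is only an example of what a font-link entry might map to:

    winetricks corefonts           # install the classic MS core fonts (Arial, Times, etc.)
    winetricks fontsmooth=rgb      # enable subpixel font smoothing
    # rough sketch of a manual font-link entry, per Microsoft's font-linking registry layout
    wine reg add 'HKLM\Software\Microsoft\Windows NT\CurrentVersion\FontLink\SystemLink' \
        /v Tahoma /t REG_MULTI_SZ /d 'wqy-microhei.ttc,WenQuanYi Micro Hei' /f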
Wine should come with fonts with the same metrics as the proprietary ones. Note that while the font file is copyrighted, the letterforms themselves are free to copy. We already had the DejaVu project recreate equivalents of existing fonts, no reason we can't have the same for the Segoe and Calibri families.
I'm more interested in font fallback that works elsewhere in linux. Rendering doesn't match anyway, so metrics aren't very useful.
I installed windows fonts. AIUI that's insufficient?
Doesn't wine delegate rendering to FreeType? Might as well delegate font fallback to FreeType.
> Is it just me or wine needs a bit more polish? Dialogs and menus are rendered with some weird microscopic font.
It's just you. I set up the DPI and high-res options to run old Office apps, and they have very nice fonts on both my laptop's 2K screen and my 4K screen.
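The DPI tweak is just a registry value (winecfg's Graphics tab sets the same thing; 192 here means 200% scaling) - a sketch:

    wine reg add 'HKCU\Control Panel\Desktop' /v LogPixels /t REG_DWORD /d 192 /f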
Try `xprop -root -f _XWAYLAND_GLOBAL_OUTPUT_SCALE 32c -set _XWAYLAND_GLOBAL_OUTPUT_SCALE 2`
I don't think a new distro is needed. Most commonly used windows apps can be made to work through wine, but the hacks used to make one app work can break others and vice versa. Similarly, everyone needs to play around with settings individually to get things to work. What works on one person's machine might not work on another's, because there's no consistency in, effectively, configuration.
The simplest solution, to me, is to just distribute containers (or some other sandbox) with wine in it, and the necessary shenanigans to get the windows program (just the one) working in the container, and just distribute that. Everyone gets the same artifact, and it always works. No more dicking around with wine settings, because it's baked in for whatever the software is.
Yes, this is tremendously space inefficient, so the next step would be a way of slimming wine down for container usage.
The only real barrier to this system is licensing and software anti patterns. You might have to do some dark magic to install the software in the container in the first place.
The concept of containers for Windows applications running in WINE is called "bottles."
https://support.codeweavers.com/en_US/2-getting-started/2-in...
I believe it started with Cedega, but I could be wrong. That's where I first recall encountering it.
TIL. I'm gonna check this out. It's good to see that people are already working on this, because it's one of those things that, to me, just makes a lot of sense. You'd think that with the all of the layers of abstraction we have nowadays, it should be possible to run software on any underlying system in an ergonomic fashion, even if it's not necessarily efficient.
It's extremely efficient: cold starting Word from an old Office suite is much faster than starting Libreoffice. It also uses less RAM.
A few years ago I purchased a few shrink-wrapped copies of Office on ebay, one for each of the versions Wine claimed to support best, tested them with wine32 and wine64, and concluded the "sweet spot" was Office 2010 in wine32 (it may have changed, as wine keeps evolving)
Yes, it's 15 years old software, but it works flawlessly with Unicode xkb symbols! Since it doesn't have any network access, and each app is isolated in a different user id, I don't think it can cause any problem.
And if I can still use vim to do everything I need and take advantage of how it will not surprise me with any unwanted changes, I don't see why I couldn't use, say, an old version of Excel in the same way!
It is interesting seeing Office suites from the 90s and wondering what really needed improving. Google Docs pioneering "auto saving" in the cloud is the only thing I can think of.
> wondering what really needed improved
Maybe not much?
A few months ago, I ran out of power (my mistake, I use full screen apps to avoid the distraction, so I didn't realize I was unplugged)
After plugging in and restarting Linux then the ancient version of Word I was using, I got a pleasant surprise: the "autosaved" version of the document I was editing, with nothing lost!
As for LLMs, Excel 2010 may not have been made for AI, but wine copy/paste and a few scripts work surprisingly well!
Autosave has been part of Excel for ages. I had it enabled back in the early 1990s with the version that was distributed alongside Word 6 as part of Office 4.3 (I don't remember the Excel version number).
Current Excel Autosave when used with ODSP is different -- changes are individually autosaved (change cell, autosave, format table, autosave). They're completely transparent to the end user.
Word is similar.
So you're saying you're using Word 2010 and have no problem with files created recently? I find it surprising that modern word .docx is compatible with 15 year old Word
Not too surprising though. It is zipped XML. Future versions may add optional nodes to the XML that can be ignored by previous versions.
The modern Word suite’s basic format was introduced with Word 2010. So as long as the person who created the doc uses features that were previously present in Word 2010 they’d be fine.
Features from later versions will either not show up or show up as boxes
Collaborative editing? "Track Changes" just doesn't hit the same way.
That’s a good one. Perhaps Google Docs is the pinnacle of office suites.
The auto-save in Google Docs is undoubtedly better, but it was possible to set an auto-save interval in minutes on Word 6.0 for Windows 3.1.
Back then, Word's auto-save updated the file that you were working on rather than creating a separate backup file. I liked that better, though there might have been a good reason for changing approaches in later versions of Word.
> and each app is isolated in a different user id
I always liked this idea; but wouldn't you run into issues with file permissions? And if not, wouldn't that mean that the program in question would have access to all your files anyhow, removing the benefit of isolation?
When I'm using Office, the files come from a shared directory accessible as Z:
I use scripts to automate everything - including allowing wine to use Xwayland (because until I start the application I want, its userid is not allowed to show content on my display)
If you want to try using wine with different user ids, try to start with a directory in /tmp like /tmp/wine which is group writable, with your windows app and your user belonging to the same group.
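A minimal sketch of that kind of setup - the user/group names and paths are made up, and the xhost lines are one way to let only that uid onto the display:

    # one-time setup: a dedicated user for the app and a shared group for the common directory
    sudo useradd --create-home officeapp
    sudo groupadd wineshare
    sudo usermod -aG wineshare officeapp
    sudo usermod -aG wineshare "$USER"
    mkdir -p /tmp/wine && sudo chgrp wineshare /tmp/wine && chmod 2775 /tmp/wine

    # per launch: grant that uid display access, run the app as it, then revoke
    xhost +SI:localuser:officeapp
    sudo -u officeapp env DISPLAY="$DISPLAY" WINEPREFIX=/home/officeapp/.wine wine notepad
    xhost -SI:localuser:officeapp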
Cool, hoping this will become more common and easier to do with help from distros!
https://usebottles.com/
I really like the idea of bottles. I wish there was a way to bundle that up into a distro and make it invisible to the user so I could setup my friends and family with it.
Could call it "transparent bottles".
GlassBottle, hopefully the illusion doesn’t get shattered!
If you change a lot of things about a Linux system, then you're making a new distro.
Half of this incompatibility is because Linux is flexible, anyway. My system is different from your system, and did anyone test on both? If you want a more stable ABI then you need a more stable system.
You wouldn't have to change anything about the underlying system, which is the point. Containers work the same regardless of the underlying system, so they get around the various differences in everyone's individual machine. I use identical containers routinely on Fedora and Ubuntu systems at home without any issue, and I produce containers for RHEL and various other systems at work. Half the point of containers is eliminating the variability between dev systems and deployment.
Rather than everyone having to get the software working on their machine, you would get it working once in the container, and then just distribute that.
Containers work because your kernel is close to identical, and they ship their own copy of everything else, making them bloated and incompatible at a user-mode level (no graphics drivers!). If my kernel was also very different from yours (which could just be a couple of kernel options or major versions) I'd need a virtual machine.
You could distribute Wine as a Flatpak platform. Flatpaks are already containers that run the same on all distros. Making a Win32 base that works in this same way using the same tooling would not be difficult.
There was Winepak (abandoned, sadly): https://winepak.github.io/
Flatpak could work better because it already knows how to factor out common parts, so it could bring in only one (or just few) version of Wine.
Unfortunately Flathub (the biggest Flatpak repository) doesn't allow Windows apps, despite it working on a technical level
https://docs.flathub.org/docs/for-app-authors/requirements says,
> Windows application submissions that are using Wine or any submissions that aren't native to Linux desktop and is using some emulation or translation layer will only be accepted if they are submitted officially by upstream with the intention of maintaining it in official capacity.
Although, I am curious why; they don't seem to have a general problem with unofficial packages, so I'm not sure why a translation layer makes any difference. It doesn't seem different than happening to use any other runtime (ex. nothing is said about Java or .net).
Probably because it's implicitly pirated. No one is sharing windows freeware this way, because there's no demand for it. It'll be MS Office, Photoshop, CAD, etc. -- stuff for which there's still no good OSS alternative, and for which the barrier to entry is high.
It would take a large organization with enough connections to cut through this. You'd probably need to cut a deal so you could distribute their software, and you'd need to provide a mechanism for users to be able to make purchases. Even then, there are various licensing challenges, because you would be distributing the same install, so thousands (or millions) of "installs" would effectively have the same serial or license number.
It's nontrivial, but the basic idea is straightforward and doable. The challenge is how windows software is distributed and licensed, not anything technical.
I am just thinking out loud. Wouldn't it be better, then, to just share reproducible recipes, similar to sharing Dockerfiles? For wine, specifically, it could be similar to just using FROM wine-1.23. As long as we keep the recipes maintained and "pinned" to their old dependencies.
I think this could work as a translation layer because containers already abstract away everything on syscall level.
All that's needed is a GUI for that, which can create multiple sandboxes easily, per application, and remember what you configured and installed there (and add it to the Winefile).
In regards to the sharing serials problem: You can easily diff the .reg file that wine adds there and if anything pops out in \\software, you can assume this is a custom field and offer it as an environment variable for the container?
This is similar to what Bottles [1] does, it uses a YAML file to describe the installation instructions for a Windows app here's an example: [2].
[1] https://usebottles.com [2] https://github.com/bottlesdevs/programs/blob/main/Games%2Fit...
They do cover that in https://docs.flathub.org/docs/for-app-authors/requirements#l... , but I don't buy it in the general case because Windows isn't synonymous with proprietary isn't synonymous with non-redistributable licenses. Sure, I doubt there's a legal way to ship Microsoft Office on flathub, but there are plenty of shareware programs that I think would be fine (though IANAL, ofc) right up to FOSS that happens to target Windows. For instance, Notepad++ is GPLv3[0] and WINE platinum rating[1]; why shouldn't it be on flathub?
[0] https://github.com/notepad-plus-plus/notepad-plus-plus/blob/... [1] https://appdb.winehq.org/objectManager.php?sClass=applicatio...
I don’t think it’s necessarily that either. They probably just want some guarantees that the app will keep working, instead of someone submitting 100s of wine apps and having them break eventually.
> Probably because it's implicitly pirated.
It doesn't have to be. Old software is cheap, even shrink wrapped ("new old stock")
They're just trying to prioritize linux applications, without preventing developers who want to support linux through wine from doing so.
The key difference is that applications run under wine will always have some subtle quirks and misbehaviors that break the usability, which can be worked around by the dev when given the chance to.
This seems reasonable to me, surely it should be its own repository.
>I don't think a new distro is needed. Most commonly used windows apps can be made to work through wine
I think the idea is to provide a seamless Windows-like experience so the user works exactly how they expect to work under Windows. Without having to fiddle, modify configurations, or wrestle with different settings. Just click and run.
Yes, which is what I mentioned in the rest of the post. You could distribute a container that has wine in it, with just the configuration necessary to get the software working in the container. It would be straightforward to write a thin abstraction layer that pulls the container (installation) and gives you an executable that you can double click on and launch.
An end user wouldn't need to modify configs or wrestle settings, because that's already done for you upstream. You just get an artifact that you can click and run. From the other posts, Proton and Steam Deck already do something similar, and this is also conceptually similar to the way AppImages, Flatpaks etc. work.
At this point nobody is going to learn a new system. People already know how to write and package exes which is the whole point.
> but the hacks used to make one app work can break others and vice versa
I think a lot of these problems could be avoided with a singular OS with the sole goal to support windows exes.
That exists, it’s called SteamOS.
And if memory serves one of the important features of Proton is to control how each app is configured individually, precisely to let you do needed tweaks at a local level.
Yes, and you can do this application specific tweaking but it’s largely less required now than it was in the past.
Less required in general or by the user? I sort of had it in my head that part of what Proton did was to just bundle all of those tweaks so you didn't have to think about it, but I haven't actually looked under the hood.
Correct. Parent is misinformed.
SteamOS is very clearly Linux which is not what the blog is suggesting.
as I read it, Linux was exactly what the blog was suggesting.
It's possible to use docker as a package manager. I worked jobs where we did exactly that, because we needed to compile certain dependencies for our application, and it streamlined the whole process.
There's zero reason you couldn't create a small abstraction layer around docker so you can install "executables" that are really just launching within a container. I mean, isn't that the whole idea behind flatpak, snaps, and appimages?
The point is to leverage modern abstraction techniques so that people don't need to learn a new system.
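A sketch of such a wrapper - the image name, app path and mounts are all made up, and it assumes an X11 (or Xwayland) socket on the host:

    #!/bin/sh
    # hypothetical launcher: makes "run this container" feel like double-clicking an app
    # (the host may also need "xhost +local:" so the container can talk to the X server)
    exec docker run --rm \
        -e DISPLAY="$DISPLAY" \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        -v "$HOME/Documents:/data" \
        example/wine-notepadpp \
        wine 'C:\Program Files\Notepad++\notepad++.exe' "$@"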
This could be done using something like llamafile, in the sense that it would be a small universal executable containing a package.
It could even support running as a self contained application on Windows, with all needed DLLs provided.
This is how Proton from Steam works
> I can pull down a 20 year old exe and still run it today on Windows. Try doing the same with a Linux binary that's just a year old. There's no guarantee that it will be able to run based off some update that has happened
IMHO, you're comparing two different things. The traditional method of installing apps on Windows is to pack all dynamic dependencies with the app, while on Linux dynamic dependencies are shared between apps. So there is nothing surprising in the fact that when you change the dependencies of the app, it stops working.
There are a few ways to solve this and you are free to choose:
- distribute the same as on Windows
- link statically
Aside from comparing two different things, as you correctly identify, I believe that even the author's original assertion just isn't true. Maybe for some exe files, but I doubt for all or even most.
I was involved in replacing Windows systems with Linux + Wine, because (mission-critical industrial) legacy software stopped working. No amount of tweaking could get it to work on modern Windows system. With Wine without a hitch, once all the required DLL files were tracked down.
While Wine may indeed be quite stable and a good solution for running legacy Windows software. I think that any dynamically linked legacy software can cause issues, both on Windows and Linux. Kernel changes may be a problem too. While Windows is often claimed to be backwards compatible, in practice your mileage may vary. Apparently, as my client found out the hard/expensive way.
> I was involved in replacing Windows systems with Linux + Wine, because (mission-critical industrial) legacy software stopped working. No amount of tweaking could get it to work on modern Windows system. With Wine without a hitch, once all the required DLL files were tracked down.
I moved from Windows 11 to Linux for the same reason: I was using an old version of Office because it was faster than the included apps: the full Word started faster than Wordpad (it was even on par with Notepad!) The Outlook from an old Office used less ram and was more responsive than the one included with Windows!
When I got a new laptop, I had problems with the installation of each of the old versions of Office I had around, and there were rumors that old versions of Office would be blocked.
I didn't want to take the risk, so I started my migration.
> While Windows is often claimed to be backwards compatible, in practice your mileage may vary
It was perfectly backwards compatible: Windows was working fine with very old versions of everything until some versions of Windows 11 started playing tricks (even with a Pro license)
I really loved Windows (and AutoHotKey and many other things), but now I'm happy with Linux.
> I really loved Windows (and AutoHotKey and many other things)
oh, do you know - how can I configure e.g. Win+1, Win+2, etc. to switch to the corresponding virtual desktops? And how do I disable this slow animation and just switch instantly?
Maybe you have several ideas where I should search. I've used Linux as my OS for a long time, but now I need to use Windows at my job. So I'm trying to bring my Windows usage experience as close as possible to what is so familiar and common on Linux.
> So, I'm trying to bring my Windows usage experience as close as possible to so familiar and common on Linux.
I see you were given an answer for the slow animation. For most UI tweaks, regedit is a good starting point.
You may also like PowerToys, but I suggest you take the time to create AHK scripts, for example if you want to make your workflow keyboard-centric
> So, I'm trying to bring my Windows usage experience as close as possible to so familiar and common on Linux.
I did the opposite with the help of hyprland on arch, but it took me years to get close to how efficient I was on Windows, where there are many very polished tools to do absolutely anything you can think of.
You can disable all system animations at
Settings > Accessibility > Visual effects > Animation effects
There's no built-in way to set hotkeys to switch to a specific desktop. And my primary annoyance is that there's no way to set hotkeys to move a given window to a different desktop.
Well, there's always LD_PRELOAD and LD_LIBRARY_PATH on Linux. My experience has been that most of the time when older binaries fail to run, it's because they are linked against old versions of libraries, and when I obtain those library versions -- exactly the same as obtaining the DLLs for the Windows executable -- things usually work just fine.
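A sketch of that workflow - the package file name is hypothetical, and snapshot.debian.org (mentioned elsewhere in the thread) is one place to dig such old versions out of:

    # extract the old library from its original .deb without installing it system-wide
    dpkg-deb -x libpng12-0_1.2.50-2_amd64.deb ./oldlibs
    # point the old binary at the extracted libraries
    LD_LIBRARY_PATH=./oldlibs/usr/lib/x86_64-linux-gnu:./oldlibs/lib ./old-binary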
For Windows programs distributed as binaries, you don't need to bundle anything from the system layer. On Linux there is no proper separation between system libraries and optional libraries - everything could be both, and there are no API/ABI guarantees. So "just bundle your dependencies" simply doesn't work. You cannot bundle Mesa, libwayland or GTK, but you cannot fully depend on them not breaking compatibility either.
On the Windows side nobody bundles Windows GUI libraries, OpenGL drivers or sound libraries. On the Linux side, system libs have to be somewhere in the container and you have to hope they are still compatible.
You cannot link everything statically either. Starting with Glibc, there are many libraries that don't work fully or at all when statically linked.
I am sure this is true. But I seem to have had good results building static executables and libraries for C/C++ with cmake (which presumably passes -static to clang/gcc). golang also seems to be able to create static executables for my use cases.
Unless static linking/relinking is extremely costly, it seems unnecessary to use shared libraries in a top-level docker image (for example), since you have to rebuild the image anyway if anything changes.
Of course if you have a static executable, then you might be able to simplify - or avoid - things like docker images or various kinds of complicated application packaging.
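For reference, the two routes mentioned above look roughly like this (targets and file names are just examples):

    # C/C++: fully static binary (glibc still warns about NSS; building against musl avoids that)
    gcc -static -o myapp main.c
    # Go: with cgo disabled the binary is statically linked by default
    CGO_ENABLED=0 go build -o myapp .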
> I am sure this is true. But I seem to have had good results building static executables and libraries for C/C++ with cmake (which presumably passes -static to clang/gcc). golang also seems to be able to create static executables for my use cases.
Depends on what you link with and what those applications do, I would also check the end result. Golang on top of a Docker container is the best case, as far as compatibility goes. Docker means you don't need to depend on the base distro. Go skips libc and provides its own network stack. It even parses resolv.conf and runs its own DNS client. At this point if you replace Linux kernel with FreeBSD, you lose almost nothing as function. So it is a terrible comparison for an end-user app.
If you compile all GUI apps statically, you'll end up with a monstrous distro that takes hundreds of gigabytes of disk space. I say that as someone who uses Rust to ship binaries, and my team already had to use quite a few nasty hacks that walk on the ABI-incompatibility edge of rustc to reduce binary size. It is doable, but would you like to wait hours for every single update?
Skipping that hypothetical case, the reality is that for games and other end user applications binary compatibility is an important matter for Linux (or any singular distro even) to be a viable platform where people can distribute closed-source programs confidently. Otherwise it is a ticking time-bomb. It explodes regularly too: https://steamcommunity.com/app/1129310/discussions/0/6041473...
The incentives to create a reliable binary ecosystem on Linux are not there. In fact, I think the Linux ecosystem creates the perfect environment for the opposite:
- The majority economic incentive is coming from server providers and some embedded systems. Both of those cases build everything from source, and/or rely on a limited set of virtualized hardware.
- The cultural incentive is not there since many core system developers believe that binary-only sofware doesn't belong to Linux.
- The technical incentives are not there, since a Linux desktop system is composed of independent libraries from semi-independent developers, each targeting whatever versions of the other libraries were released in the same narrow slice of time.
Nobody makes Qt3 or GTK2 apps anymore, nor are they supported. On the Windows side, Rufus, Notepad++ etc. are all written against the most basic Win32 functions, and they get access to the latest features of Windows without requiring huge rewrites. It will be cursed, but you can still make an app that uses Win32, WPF and WinUI in the same app on Windows - three UI libraries from three decades - and you don't need to bundle any of them with the app. At most you ask the user to install the latest dotnet.
> If you compile all GUI apps statically, you'll end up with a monstorous distro that takes hundreds of gigabytes of disk space
And yet the original Macintosh toolbox was 64 kilobytes. Black and white though, and no themes out of the box.
Even a 1MB GUI library (enough for a full Smalltalk-80, or perhaps a compact modern GUI) would be in the noise for most apps.
Except that the "you" is different on each case. You're offering options for the distributor. The quote is talking about options for the user, who has to deal with whatever the distributor landed on. From the point of view of the user at the point of need, a distributor having choices that could have made their lives easier if they'd been picked some time in the past is completely useless.
I think it's not quite that simple though. For one, the OpenGL driver situation is complex: I hear you need userland per-hardware libraries, which basically require dynamic linking. From that perspective Windows binaries are the de-facto most stable way of releasing games on Linux.
I'm not sure about linux syscall ABI stability either, or maybe other things that live in the kernel?
> I think the opengl driver situation is complex, where I hear you need userland per-hardware libraries which basically require dynamic linking
Yes, the OpenGL driver is loaded dynamically, but... are you sure that there are any problems with OpenGL ABI stability? I have never heard about breaking changes in it
The OpenGL ABI is extremely stable but OpenGL drivers (especially the open source ones) also use other libraries which distros like to link dynamically. This can cause problems if you ship different versions of the same libraries with your program. This includes statically linked libraries if you did not build them correctly and your executable still exports the library symbols. Not insurmountable problems, but still things that inexperienced Linux developers can mess up.
I was thinking the same thing. I've had loads of issues over the years when I have an archived EXE that gets angry about a missing DLL.
Likewise, as the author states, there's nothing intrinsic to Linux that makes it have binary compatibility issues. If this is a problem you face, and you're considering making a distro that runs EXEs by default through an emulation layer, you are probably much better off just using Alpine or one of the many other musl-based distros.
I need to buckle down and watch a YouTube video on this that gives examples. It obviously comes up in computer engineering all the time, but it's something I've been able to skate by without fully understanding; from time to time I see comments like this one that seem perfectly clear, but I'm sure there's still quite a lot of nuance that I could benefit from learning.
This is like the in-soviet-union joke about shouting "down with the US president" in front of the Kremlin. In this case, I too can run a 20 year old Windows binary on Linux wine.
The article's main premise isn't bad, but it's full of weird technical inaccuracies.
At certain points he talks about syscalls, libc (I'm assuming glibc), PE vs. ELF, and an 'ABI'. Those are all different things, and IIUC all are fairly stable on Linux; what isn't stable is userspace libraries such as GTK and Qt. So, what are we talking about?
There's also statements like this, which, I'm not a kernel developer, but they sound a little too good to be true:
> A small modification to the "exec" family of system calls to dispatch on executable type would allow any Linux application to fork and exec a Windows application with no effort.
He goes on to talk about Gatekeeper (which you can disable), Recall (which is disabled by default), and signing in with a Microsoft account (which can be easily bypassed, though he linked an article saying they might remove it). He also talks about "scanning your computer for illegal files", I don't know what this is referring to, but the only thing I could find on Google was Apple's iCloud CSAM scanning thing. That's not on your computer, and it got so much backlash that it was cancelled.
There's plenty of stuff to criticize about these companies and their services without being dramatic, and the idea of Linux having more compatibility with Win32 via Wine isn't bad.
> There's also statements like this, which, I'm not a kernel developer, but they sound a little too good to be true:
> > A small modification to the "exec" family of system calls to dispatch on executable type would allow any Linux application to fork and exec a Windows application with no effort.
That isn't "too good to be true", it's so good it is false – no kernel modification is required because the Linux kernel already supports this via binfmt_misc. You just need to configure it. And depending on how you installed Wine, you may find it has even already been configured for you.
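For reference, the configuration boils down to registering the PE magic bytes with the kernel. A rough sketch (the rule name is arbitrary, installer.exe is a stand-in, binfmt_misc must be mounted, and packages like wine-binfmt normally do all of this for you):

    # hand anything starting with the "MZ" magic (PE executables) to Wine
    echo ':WinePE:M::MZ::/usr/bin/wine:' | sudo tee /proc/sys/fs/binfmt_misc/register
    # after that, a Windows executable can be launched directly
    chmod +x installer.exe && ./installer.exe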
Sounds like someone wants Lindows/Linspire.
https://www.linspirelinux.com/
I thought the same while reading it and... I felt old.
Yes, I remember this! At some point they even claimed the term "Windows" was too generic to be protected, and they wanted to use it for the project...
A vendor that has to write a news item on their front page that their product is not dead? Maybe not the most attractive look.
Lindows made a splash in 2001-2002 and its purpose was to bridge the gap and offer proper support for Windows applications on Linux via a 'click and run' service.
After Microsoft sued them and they changed their name, the bubble was burst and when Ubuntu appeared its niche as a beginner distro ebbed away.
I was surprised to hear it was still alive via a Michael MJD video a month or two ago.
Huh? Linux Mint looks more like Lindows than Linspire.
I guess you haven't used windows 11 yet, luckily.
This problem is related to the fact that Linux distros typically dynamically link executables and don’t retain older versions of those libraries, whereas Windows does.
It’s one of the many reasons Windows base install is so much heavier than a typical Linux base install.
The reason Windows retains older versions of libraries while Linux doesn’t is because Windows doesn’t have a package manager like Linux distros do. Ok, there’s now the Windows Store plus a recent-ish CLI tool that was based on one of the many unofficial package managers, but traditionally the way to install Windows applications was via manual downloads and installs. So those installers would typically come bundled with any shared libraries they’d need and often keep those shared libraries in the application directory, leading to lots of duplication of libraries.
You could easily do the same thing in Linux too but there’s less of a need because Linux distribution package managers are generally really good. But some 3rd party package managers do take this kind of approach, eg Nix, Snap, etc.
So it’s not that Linux is “unstable” but more that people have approached the same problem on Linux in a completely different way.
The fact that drag-and-drop installs work on macOS demonstrates that there isn’t really a UNIX-like limitation preventing Windows-style installs. It’s more that Linux distributions prefer a different method for application installation.
It's not just about dynamically linked executables. The userland of Linux simply isn't as stable time-wise as Windows, especially when the timescale is measured in decades.
As an example, the latest Atari Jaguar linker (aln) for Linux was released back in 1995. It's a proprietary, statically-linked 32-bit Linux a.out executable. To run this on a modern Linux system, you need to:
- Bump vm.mmap_min_addr from 65536 down to 4096, a privileged operation;
- Use an a.out loader because the Linux kernel dropped support for a.out back in 2022;
- Possibly use qemu-user if your system doesn't support 32-bit x86.
That's the best-case scenario, because some of the old Atari Jaguar SDK Linux binaries are dynamically-linked a.out executables and you're basically stuck running ancient Linux kernels in a VM. It's at a point where someone at the AtariAge forums was seriously considering using my delinking black magic to port some of these old programs to modern Linux. It's quite telling when reverse-engineering an executable with Ghidra in order to export relocatable object files to relink (with some additional steps I won't get into) is even an option on the table.
Sure, given enough determination and piles of hacks you can probably forcefully run any old random Linux program on modern systems, but odds are that Windows (or Wine or ReactOS) will manage to run a 32-bit x86 PE program from thirty years ago with minimal compatibility tweaks. Linux (both distributions and to a lesser degree the kernel) simply don't care about that use-case, to the point where I'd be pleasantly surprised if anyone manages to run Tux the Penguin: A Quest for Herring as-is on a modern system.
> It's not just about dynamically linked executables. The userland of Linux simply isn't as stable time-wise as Windows, especially when the timescale is measured in decades.
That’s exactly what dynamically linked executables are: user land
> As an example, the latest Atari Jaguar linker (aln) for Linux was released back in 1995. It's a proprietary, statically-linked 32-bit Linux a.out executable.
That’s not a user land problem. That’s a CPU architecture problem. Windows solves this with WOW64, which provides a compatibility layer for 32bit pointers et al.
There are 32bit compatibility layers for Linux too, but they’re not going to help if you’re running an a.out file, because it’s a completely different type of executable format (ie not equivalent to a 32bit statically compiled ELF).
Windows has a similar problem with COM files (the early DOS executable format). And lots of COM executables on Windows don’t work either. Windows solves this problem with emulation, which you can do on Linux too. The awkward part of Linux here is that it doesn’t ship those VMs as part of its base install, but why would it, when almost no one is trying to run randomly downloaded 32bit a.out files.
To be clear, I’m not arguing that Linux’s backwards compatibility story is as good as Windows’. It clearly isn’t. But the answer to that isn’t because Linux can’t be backwards compatible, it’s because Linux traditionally hasn’t needed to be. However, all of the same tools Windows uses for its compatibility story are available to Linux for Linux executables too.
>> As an example, the latest Atari Jaguar linker (aln) for Linux was released back in 1995. It's a proprietary, statically-linked 32-bit Linux a.out executable.
> That’s not a user land problem. That’s a CPU architecture problem. Windows solves this with WOW64, which provides a compatibility layer for 32bit pointers et al.
In this specific case, it really is a user-land problem.
I went to the trouble of converting that specific executable into a statically linked 32-bit x86 ELF executable [1], to run as-is on modern x86 and x86_64 Linux systems. Besides rebasing it at a higher virtual address and writing about 10 lines of assembly to bridge the entrypoints, it's the same exact binary code as the original artifact. Unless you've specifically disabled or removed 32-bit x86 emulation, it'll run on an x86_64 kernel with no 32-bit userland compatibility layers installed.
Just for kicks, I've also converted it into a dynamically linked executable (with some glue to bridge glibc 1.xx and glibc 2.xx) and even into an x86 PE executable that can run on Windows (using more glue and MSYS2) [2].
> Windows has a similar problem with COM files (the early DOS executable format). And lots of COM executables on Windows don’t work either. Windows solves this problem with emulation, which you can do on Linux too.
These cases aren't equivalent. COM and MZ are 16-bit executables for MS-DOS [3], NE is for 16-bit Windows; all can be officially run without workarounds on 32-bit x86 Windows systems (NTVDM has admittedly spotty compatibility, but the point stands). Here, we're talking about 32-bit x86 code, so COM/MZ/NE does not apply here (to my knowledge there never have been 16-bit Linux programs anyways).
That Windows has 32-bit compatibility out of the box and that Linux distributions don't install 32-bit compatibility layers by default is one thing, but those on Linux only really apply to programs that at best share the same vintage as the host system (and at worst only work for the same distribution). Again, try running Tux the Penguin: A Quest for Herring as-is on a modern system (be it on a 32-bit or 64-bit installation, that part doesn't matter here), I'd gladly be proven wrong if it can be done without either a substantial rewrite+recompilation or egregious amounts of thunking a 2000's-era Linux userspace onto a 2020's-era one (no, a VM doesn't count, it has to run on the host).
[1] https://boricj.net/atari-jaguar-sdk/2023/12/18/part-3.html
[2] https://boricj.net/atari-jaguar-sdk/2024/01/02/part-5.html
[3] I know about 32-bit DOS extenders, but it's complicated enough as-is without bringing those into the mix.
> In this specific case, it really is a user-land problem.
a.out isn't even supported in new Linux kernels, so how is that a user land problem? And you then repeated my point about how it’s not a user land problem by describing how it works as an ELF. ;)
> These cases aren't equivalent. COM and MZ are 16-bit executables for MS-DOS [3], NE is for 16-bit Windows; all can be officially run without workarounds on 32-bit x86 Windows systems (NTVDM has admittedly spotty compatibility, but the point stands). Here, we're talking about 32-bit x86 code, so COM/MZ/NE does not apply here (to my knowledge there never have been 16-bit Linux programs anyways).
You’re not listening to what I’m saying.
COM and a.out are equivalent because they’re raw formats. Even on 32 bit NT systems COM required emulation.
The problem is the file formats are more akin to raw machine code than they are a modern container format.
So yeah, one is 16 and the other 32bit but the problem you’re describing is related to the file format being unforgiving for different CPU architectures without emulation; and in many cases, disregarding the user land entirely.
By your own admission, 32bit PEs and 32bit ELFs work perfectly fine on their respective Windows and Linux systems without any hacks.
The difference here is that Windows ships WOW64 as part of the base install whereas mainstream Linux distributions don’t ship 32bit libraries as part of their base install. That doesn’t mean that you need hacks for 32bit though. For example on Arch it’s literally just one line in pacman.conf that you uncomment.
My point was, if you wanted to ship a Linux distribution that supported random ELF binaries then you could. And package managers like Nix prove this fact.
The reason it’s harder on Linux isn’t because it requires hacks. It’s because Linux has a completely different design for installing applications and thus backwards compatibility with random ELFs isn’t generally worth the effort.
Also it’s really not fair to argue that a.out, a format that was defined in the 70s and looong since deprecated across all unix-like systems, is proof that Linux isn’t backwards compatible. ELF has been the primary file format for nearly 30 years on Linux now, and a.out was only relatively recently fully removed from the kernel.
Whereas COM has been problematic on Windows for the entirety of NT, including Windows 2000 and XP.
> It’s one of the many reasons Windows base install is so much heavier than a typical Linux base install.
Is that a bad thing if it means a seamless experience for users? Storage is cheap.
I never suggested it was a bad thing. I was just explaining the differences.
However to answer your question:
Storage hasn’t always been cheap. So it used to be a bad thing. These days, as you rightly said, it’s less of an issue.
But if you do want to focus on present day then it’s worth noting that these days FOSS does ship a lot of dependencies as part of their releases. Either via Docker containers, Nix packages, or static binaries (eg Go, Rust, etc). And they do this precisely because the storage cost is worth the convenience.
> While the Linux syscalls themselves are very stable and reliable, the c library on top of them is not. Practically all of userland is based on libc, and therefore by proxy Linux itself has a binary compatibility problem.
People who primarily use Linux often forget that Windows has the exact same problem. In the case of Windows libc is distributed as part of the Visual C++ runtime. Each version of Visual Studio has its own version of the VC++ runtime and the application is expected to redistribute the version of VC++ it needs.
The only thing Windows does better is ensuring that they maintain backwards compatibility in libc until they release a new version of Visual Studio.
In Winapi land, the equivalent of "the c library" is NTDLL, its wrappers and other supporting libs (advapi32, userenv, etc... and the Win32-specific libs, which I consider equivalent to the X11 libs). MSVCR in my opinion is there to provide the stdlib for C/C++ programs. In Linux land, the library that provides the C stdlib also wraps syscalls; in Windows, the C stdlib is a wrapper/interface for the Windows APIs.
My opinion is that they're both great. I really like how clean and well thought out the Windows APIs are. Compared to Linux equivalents they're very stable and easier to use. But that doesn't mean there is anything wrong with the C stdlib implementation on either OS. For system APIs, though, Linux is a bit messy; that mess is the result of having so many people with strong opinions, and of Linux trying to adhere to the Unix principle of a modular user-space ecosystem.
For example, there is no "Linux graphics api", there is X11 and Wayland and who knows what else, and neither have anything to do with the Linux project. There are many highly opinionated ways to do simple things, and that is how Linux should be. In the same vein, installing apps on Linux is simply querying your package manager, but on Windows there is no "Microsoft package repo" where everyone dumps their apps (although they are trying to fix that in many ways), and that's how Windows should be.
Let Linux be Linux and Windows be Windows. They're both great if you appreciate them for what they are and use them accordingly.
Very well explained, thank you.
> Let Linux be Linux and Windows be Windows. They're both great if you appreciate them for what they are and use them accordingly.
What if you technically prefer the Windows way, but are worried about Microsoft's behavior related to commercial strategy, lock-down, privacy...?
The author envisions a system that's technically stable as Windows, yet free as Linux.
Microsoft has always been end-user-hostile. You hack around it :)
Reverse-engineer its undesirable behavior, mitigate it. The real stuff that scares me is hardware-based (secure enclave computing, for example) and the legal measures it is taking to prevent us from hacking it.
ReactOS exists, as does Wine. Linux is a purely monolithic kernel, unlike NT, which is a hybrid that has the concept of subsystems built into it. Linux would have to grow the concept of subsystems and an NT-interop layer (probably based off of Wine), and I fail to see the advantage over Wine.
In the end, where is the demand coming from, I ask? Not from Linux devs, in my opinion. I suppose a Wine-focused distro might please folks like you, but Wine itself has lots of bugs and errors even after all these years. I doubt it is even keeping up with all the Windows 11 changes. What the author proposes is, in my opinion, not practical, at least not if you are expecting an experience better than ReactOS or Wine. If it is just a Win32/winapi interop layer, it might be possible, but devs would need to demand it, otherwise who will use it?
Linux users are the most "set in their ways" in my experience; try convincing any Linux dev to stop using GTK/Qt and write apps for "this new Windows-like API interface to create graphical apps".
But ultimately, there is no harm in trying other than wasted time and resources. I too would like to see an ecosystem that learns from and imitates Windows in many ways (especially security measures).
FreeBSD?
>There are many highly opinionated ways to do simple things, and that is how Linux should be
I still believe we would be in a better place had BSD been ready for adoption before Linux. Linux is a kernel and a wide family of operating systems assembled from the kernel and different bits and pieces, while BSD tried to be a very coherent operating system from the start.
I remember trying to get a program installed on Windows. It complained that I didn't have the right VC redistributable.
I had like ten of them installed — I think several from the same year! — cause every program usually bundles its own.
I found the exact version of vcredist installer I needed but then that one refused to install because I already had a slightly newer version. So I had to uninstall that first.
As far as I'm aware this problem still exists in Wine, I installed something in Wine yesterday and I had to use winetricks commands to get the vcredist installers from Microsoft's servers. (Also illegally download some fonts, otherwise my installer refused to start...)
Next time that happens, search "vcredist aio". I can't endorse any of the scripts that are out there but there are many scripts that will pull them from Microsoft and install them all with the unattended flag.
Are libc updates really the primary problem with ABI breaks on Linux? Glibc isn't perfect but it has had versioned symbols going back a long time now. My guess would be that the problem is actually abandoned versions of other libraries (e.g. SDL1, old versions of gtk2?) and maybe a handful of other things.
Yeah, glibc is extremely stable and you can be sure that an app compiled against it now will work well into the future. People just completely ignore that fact based on hearsay, and on the fact that the removal of an unused symbol hashing table from the glibc binary broke a few anticheat systems that were attempting to parse it.
Other libraries are the problem, usually. People are generally really good about changing the .so version of a library when the ABI changes in a backwards-incompatible way. Usually distributions ship both versions until everything they ship either has upgraded or been removed. Solutions like appimage can allow you to ship these libraries in your app.
Everything is fine until it isn't, when you run into a mismatch like 64-bit file offsets and time_t.
Good news if you're serious: You can now have a single glibc that supports programs compiled with and without -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64.
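Roughly like this, assuming a multilib toolchain, a glibc new enough to carry the 64-bit time_t symbols (2.34 or later), and a stand-in app.c:

    # legacy build: 32-bit off_t and time_t
    gcc -m32 -o app32 app.c
    # opt-in build: 64-bit file offsets and 64-bit time_t on the same 32-bit target
    gcc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 -o app64 app.c
    # both binaries load against the same glibc, which keeps both sets of symbols around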
So this is a non-issue, you say?
https://blogs.gentoo.org/mgorny/2024/09/28/the-perils-of-tra...
No, not at all, but that's a different problem. That issue is about linkage between two different binaries that have _TIME_BITS=32 and _TIME_BITS=64, not an issue with linking to glibc. However, that's only an issue when you are dealing with software that passes time_t in the ABI. Of course, on the whole, a distribution has to deal with all kinds of weirdly-intermingled and low-level packages, so this surely happens a non-trivial number of times, but in general I expect that a lot of old binary software will be alright. You'd only run into this particular problem if you had an old binary that interfaced with a system-provided library that did this. I didn't check, but I'd be quite surprised to find random time_t in most popular library APIs, e.g. I don't expect to see this in SDL or GTK.
Of course, if you did need to support this case, you don't need to throw the baby out with the bathwater necessarily. You'd just need a _TIME_BITS=32 build of whatever libraries do have time_ts in their ABI, and if that blog post is any indication Gentoo will probably have a solution for that in the future. I like the idea of jamming more backwards-compatibility work into the system dynamic linker. I think we should do more of that.
In any case, this issue is not a case where glibc broke something, it's a case where the ABI had to break. I understand that may seem like nitpicking, but on the other hand, consider what happens in 2038: All of the old binary software that relies on time_t being 32-bit will stop working properly even if you do have 32-bit time_t shims, at which point you'll need dirtier and likely less effective hacks if you want to be able to keep said software functioning.
The problem is backwards compatibility.
Someone comes along and builds their software on the latest bleeding-edge Linux distro. It won't run on older (or even many current) Linux desktops. People curse Linux ABI instability because new binaries aren't supported by an older operating system. It is in fact the opposite to the Windows situation, in which older software continues to run on newer operating systems, but good luck getting the latest Windows software to run on a Windows 95 desktop. People are very quick to conflate the two situations so they can score more fake internet points.
The situation is not limited to desktops. For example, a very popular commercial source forge web service does not work on browsers released more than about 10 weeks ago. The web itself has become fantastically unstable and almost unusable for anything except AI bots consuming what other AI bots spew.
> The only thing Windows does better is ensuring that they maintain backwards compatibility in libc until they release a new version of Visual Studio
But they are installed side-by-side, major versions at least.
Windows made its libc stable in Win10 (ucrt.dll aka "universal CRT"). Only the C++ runtime must still be redistributed.
A big difference is that you can easily install an up-to-date MSVCRT. How do I upgrade glibc on RHEL 8? As far as I can tell you basically can't.
Linux is based around OSS, so the answer would be to recompile from source.
Failing that, run inside a container.
Both terrible answers.
To add some validity here, I think to an extent we already see distributions aimed at converting Windows users moving in this direction. Zorin OS has Wine support for .exe's almost out of the box, and there's Steam OS / Proton, where (if I recall correctly) the official guidelines for the Steam Deck state that developers should NOT create native Linux ports for new games, but rather optimize around Proton itself.
I've read the article and the comments with interest. I just have a question: if Windows ABI is so stable that 20-year-old programs are guaranteed to run, why are there computers with Win95 or NT that nobody dares touching lest some specific software stops working? I see plenty of these in industrial environments, but also in public libraries, corporate databases, etc.
In practice most of those machines are an environment in and of themselves. It’s not that they can’t be upgraded, it’s that they likely couldn’t even be rebuilt if they had a hardware failure. The risk they’re taking is that the system is more likely to break due to being touched than it is to suffer a hardware failure. Which as most of us can attest to, is true until it’s not.
Relatedly, at a previous job we ran an absolutely ancient piece of software that was critical to our dev workflow. The machine had an issue of some sort, so someone imaged the hard drive, booted it as a VM and we resumed business as usual. Last I heard it was still running untouched, and unmaintained.
> if Windows ABI is so stable that 20-year-old programs are guaranteed to run
That's not actually true; there are no guarantees. Microsoft does a best effort to ensure the majority of applications continue to work. But there are billions of applications, they're not all going to work. Many applications don't even adhere to the Win32 API properly. Microsoft will sometimes, if the app is important enough, ensure even misbehaving applications work properly.
I know in my use case all these ancient machines are necessary for interacting with some ancient hardware, not a case where Wine is particularly useful.
This is usually about drivers, not applications. The Windows driver model didn’t maintain long-term compatibility.
Why touch it? These are usually not directly connected to the internet, some possibly virtualized. "Updating" to use Wine on Linux is a ton of work on its own; you will run into unforeseeable issues. Nobody wants to pay for that and nobody wants to be responsible for the problems when the net benefit is zero. But a real update/replacement of all these systems is too expensive, hence the status quo.
Because they just work. Nobody cares if their MRI machine runs Win2000, they care if the machine reveals brain cancer.
They care to a certain degree, and that degree is the size of the carefully-tuned payment that Trend Micro extract for the firewall product that lets the Windows 2000 MRI machine safely co-exist on the network with the hospital DC.
I think this attitude to the Linux ABI is maybe out of date - with a 20 year old Linux binary, that's only 2005, so it will almost certainly be using glibc (no archaic libc5). Glibc has great backwards compatibility and the binary will work on any glibc distribution today as long as you have all the .so's, same as needing the .dll's on Windows.
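If you want to check what a particular old binary actually expects, something like this does the trick (./oldapp is a stand-in):

    # list the shared objects the dynamic linker will look for at load time
    ldd ./oldapp
    # any missing .so's can be shipped alongside the binary and picked up via LD_LIBRARY_PATH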
I hit quirks with glib semi-regularly (~1/year)
For example, recently I tried to run Emacs' Appimage and it has a glib issue
https://github.com/probonopd/Emacs.AppImage/issues/22#issuec...
That's talking about 'glib' which is not the same as 'glibc'.
Glib is a library from the GTK project which offers utility functionality related to the GTK widget toolkit, while glibc is the GNU C Library.
Amusingly, these kinds of beyond-the-core libraries are the ones that have always caused problems for me, never actual core GNU C Library.
glibc, not glib. That's a different library.
that's embarrassing.. you're right. Thank you for the correction. Wish I could delete my comment
Thankfully gtk2 is now stable too.
because it's EOL? :)
> Glibc has great backwards compatibility
We're clearly not living in the same universe here.
glibc backward compatibility is horrible.
Every. Single. Time. I try to use an old binary on a modern distro, it bombs, usually with some incomprehensible error message with GLIBC in all caps in it.
And these days, you can't even link glibc statically, when you try it barks at you with vehemence.
As a matter of fact, as pointed out in the article, this particular shortcoming of glibc completely negates the work done by Linus to keep userland backward compatible at all cost.
[citation needed]
Please post actual issues encountered, including non-paraphrased errors instead of FUD.
And if you want to statically link your libc there is nothing forcing you to use glibc. You're only stuck with glibc (and even then you don't actually need to use any functions from it yourself) if you need dynamic linking for e.g. OpenGL/Vulkan. Also, glibc wasn't designed for static linking even before they put in safeguards against that.
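For example with musl, as a sketch (musl-gcc is the wrapper many distros package, hello.c is a stand-in):

    # produce a fully static binary with no runtime dependency on glibc or anything else
    musl-gcc -static -o hello hello.c
    # ldd ./hello now reports it is statically linked / not a dynamic executable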
There are a few issues:
1. GNU libc is an exception in the world of compatibility.
2. You can't just dump a bunch of GTK libraries next to the binary and expect it to work. These libraries often expect very specific file system layouts.
In 2005 the hot new Windows technology was .NET Framework 1.1 or 2.0. You can't just dump Framework 1.1's libraries next to the binary and expect it to work either, it needs to be installed properly.
Yep, but you can still install .NET 3.5 on Windows today, which will run .NET 2.0 apps just fine.
.NET 1.x tho, yeah.
The most recent .NET Framework still keeps 1.1 assemblies for compatibility. And yep, .NET sucked and eventually got semi-abandoned.
This must be from an alternate timeline. There is no way to call what is currently going on with .net anything other than 0% abandoned.
The .NET Framework 1.x runtime is no longer supported, and the .NET Framework 2.0 runtime (used by v2.0-3.5 applications) won’t be supported after 2029. They are slowly abandoning support for old apps.
(Yes, if you fiddle with the config file they might work on the .NET 4.0 runtime. But that’s not something a typical user can/will do.)
.net is on version 9 now.
It keeps classes for backward compatibility, not assemblies. Some code still hasn't migrated away from them, to not cause gratuitous churn. Also it's web scale, because untyped collections can hold values of different types just like javascript.
What are you talking about??? .NET never got abandoned! If anything it's hit a very high bar now. It's one of the top frameworks to build an app across platforms.
They're referring to (I hope), the .NET Framework which is Windows-only and the last/latest version being 4.8. It should live a very long life, as Microsoft server infrastructure is built on it (SharePoint/Exchange).
.NET Framework is indeed getting sidelined, along with WPF and the other technologies from the .NET 1.1 era.
I still run an unmodified GTK2 app from 2012 that I just grabbed a .deb from an ancient debian version, because I don't like the GTK3 version.
That is solved by containerization (cgroups and namespaces), which was initially popularized by Docker, which appeared about 12 years ago. And newer things like Flatpak and Snap are just bells and whistles over this.
I have flatpaks from several years ago that no longer work (Krita) due to some GL issues.
I heard OpenGL has a bad compatibility story, which was the reason games used DirectX.
FYI: a kernel patch to run exes isn’t needed. binfmt_misc can handle this, and wine-binfmt already exists to automatically run PE files through Wine.
Ask IBM how well that idea worked.
I think it's fair to say that OS/2 had better Windows compatibility (for its era) than Wine offers (in this era). The problem was that Microsoft introduced breaking changes with the introduction of Windows 95. While old Windows applications would continue to run under OS/2, IBM felt that it would take too much effort to introduce a compatibility layer for Windows 95. If I recall correctly, it involved limitations with how OS/2 handled memory.
Besides, binary compatibility has never really been a big thing in Linux since the majority of software used is open source. It is expected to compile and link against newer libraries, but there is no real incentive for existing binaries to remain compatible. And if the software doesn't compile against newer versions of libraries, well, Windows has similar issues.
A Windows 95 compatibility layer would have been feasible if OS/2 had more sales volume.
The latest multi-platform packaging systems like Nix or Flatpak have largely solved the binary compatibility problem by providing some guarantees of library versions. This approach makes more sense in modern contexts with cheap storage and fast bandwidth.
So... this already exists. Valve already essentially sells this as a product. Folks know that, right? The Steam Deck is a linux box running wine executables as the native app environment. The fact that the money apps are all "games" doesn't change the technology.
How they do it is by shipping a franken-ubuntu14 as the "steam runtime" for native Linux games. Not a terrible solution but not exactly ideal for general purpose software where games mostly keep to themselves. Their work on Proton is amazing though.
Wait, isn't Steam OS Arch based now?
There are three things here:
- Steam Runtime: A set of common libraries Linux native games target for multi-distro compatibility, I believe this still uses Ubuntu as the upstream
- Steam OS: An arch based distro pre-configured to run Steam out of the box, used by the Steam Deck, comes with extra stuff like gamescope to smooth over various issues other distros have with VRR, HDR, etc.
- Proton: Runs Windows games on Linux
Steam OS is clearly not what the blog is proposing, I can't just pop over the desktop mode and install Firefox via a MSI.
No[1], but you can launch a windows executable natively, link against DLLs in a compatible way, thunk between 32 and 64 bit as needed, access the Linux filesystem, network and IPC environment using native APIs, integrate with things like .NET and msvc runtimes, access native-speed DirectX emulation, etc...
Yes, you'd have to buff and polish it. But "paint some chrome on it" is hardly much of a blog post.
[1] Actually, are you sure the answer is "no" here? I wouldn't be at all shocked if some enterprising geek had source on github implementing a MSI extractor and installer
That makes no sense: shipping all dependencies (e.g. shipping a container image) gives perfect binary compatibility on Linux, which is what flatpak/snap/appimage do.
It can also be achieved with static linking, or by shipping all needed libraries and using a shell script loader that sets LD_LIBRARY_PATH.
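A minimal launcher along those lines (the bin/lib layout and the app name are made up for illustration):

    #!/bin/sh
    # resolve the directory this script was shipped in, then prefer the bundled libraries
    HERE="$(dirname "$(readlink -f "$0")")"
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/bin/myapp" "$@"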
Also glibc (contrary to the author's false claims) and properly designed libraries are backwards compatible, so in principle just adding the debs/rpms from an older Debian/Fedora that ships the needed libraries to the packaging repositories and running apt/dnf should work in theory, although it unfortunately might not in practice due to the general incompetence of programmers and distribution maintainers.
Win32 is obviously not appropriate for GNU/Linux applications, and you also have the same dependency problem here, with the same solution (ship a whole Wine prefix, or maybe ship a bunch of DLLs).
> shipping all dependencies (e.g. shipping a container image) gives perfect binary compatibility on Linux
That doesn’t work for GUI programs which use a hardware 3D GPU. Linux doesn’t have a universally available GPU API: some systems have GL, some have GLES, some have Vulkan, all 3 come in multiple versions of limited compatibility, and many of the optional features are vendor specific.
In contrast, it’s impossible to run modern Windows without working Direct3D 11.0 because the dwm.exe desktop compositor requires it. Software that consumes Direct3D 11.0 and doesn’t require any optional features (for example, FP64 math support in shaders is an optional feature, but sticking to the required set of features is not very limiting in practice unless you need to support very old GPUs which don’t implement feature level 11.0) will run on any modern Windows. Surprisingly, it will also run on Linux systems which support Wine: without a Vulkan-capable GPU it will be slow, but it should still work due to Lavapipe, which is a Linux equivalent of Microsoft’s WARP that they use on Windows computers without a hardware 3D GPU.
Note that this also underlines the problem with the post's premise of Windows having a simple stable ABI - win32 sure is stable, but that's not what applications are coded against anymore.
Sure, you can run a 20 year old app, but that is not the same as a current app still working in 20 years, or even 5.
> that's not what applications are coded against anymore
Not sure I follow. Sure, most modern programs are not using old-school WinAPI with GDI, but the stuff they added later is also rather stable. For example, the Chromium-based browser I’m looking at uses Direct3D 11 for graphics. It implements a few abstraction layers on top (ANGLE, Skia) but these are parts of the browser not the OS.
I view all that modern stuff like Direct3D, Direct2D, DirectWrite, Media Foundation as simply newer parts of the WinAPI. Pretty sure Microsoft will continue to support them for a long time. For example, they can’t even deprecate the 23-year-old DirectX 9 because it’s still widely used, e.g. the current version of Microsoft’s own WPF GUI framework relies on Direct3D 9 for graphics.
I agree. On Linux (and Mac really), new APIs replace old ones and old binaries stop working.
On Windows, new layers are applied over the old. There is DirectX 9-12. New binaries may use 12 but the ones still using 9 are perfectly happy. Things like .NET work the same. You can have multiple apps installed relying on different .NET versions.
It's not necessarily the same code, though. But COM is nice for a stable ABI like that - so long as you consistently version your interfaces, the apps can just QueryInterface for the old one they need and know that it's there, even if it's just a thin wrapper around the new stuff.
You can still use OpenGL 1.0 and Xlib-like-it's-1999 on modern Linux distributions.
These are however the same on Linux - mesa may change, but what the app uses is OpenGL and GLX. A more modern app might use EGL instead of GLX, or have switched to Vulkan, but that doesn't break old code.
You can also run an old mesa from the time the app was built if it supports your newer hardware, but I'd rather consider that to be part of the platform the same way you'd consider the DirectX libraries to be part of windows.
> These are however the same on Linux .. that doesn't break old code
An example from another comment: https://news.ycombinator.com/item?id=43519949
Apologies, but "I heard that..." is not an example.
The phrase you quoted is not from the comment I linked; you’ve quoted from a response. Here’s the comment I have linked above:
> I have flatpaks from several years ago that no longer work (Krita) due to some GL issues.
That’s an example of Linux GPU APIs being unstable in practice, and container images not helping to fix that.
Ah apologies, you're right - I was tired and read things wrong.
But I suspect "GL issues" (i.e., GL API stability) is being mixed together with e.g. mesa issues if mesa is being bundled inside the app/in a "flatpak SDK" instead of being treated as a system library akin to what you would do with DirectX.
Mesa contains your graphics driver and window system integrations, so when the system changes so must mesa change - but the ABI exposed to clients does not change, other than new features being added.
It more likely is an example of immature container images causing issues.
I'm running Loki game binaries just fine today btw.
Win32 is quite extensive for an OS API. It covers the space from low-level stuff like syscalls and page allocation and all the way up to localization, simple media access and GUI. So everything from glibc, libsystemd, libpam to libalsa and egl on Linux side. And it is all stable.
Microsoft also provides quite good stability for DirectX and other extension APIs. You can still run old .Net apps without issues as long as they didn't pull a Hyrum's Law on you and depended on apparent behavior.
Sure, win32 contains GUI bits, but modern apps do not use those GUI bits.
OpenGL and Vulkan ABIs are also stable on Linux, provided by mesa. The post is pretty focused on the simplicity of win32 though, which is what I'm refuting as being as relevant today for new apps.
> As long as they didn't pull a Hyrum's Law on you
It is guaranteed that they "pull a Hyrum's Law", the question is just what apparent behavior they relied on.
> Note that this also underlines that the post's premise of Windows having a simple stable ABI - win32 sure is stable, but that's not what applications are coded against anymore.
It's true, but this touches on another point they made: what apps code to is other dynamically linked libraries. The kind that wine (or other host environments) can provide, without needing to mess with the kernel.
That's what apps are supposed to code to. When it comes to games and especially anti-cheat that's not always the case though and so Wine does have to handle direct system calls, which needs support from the kernel (at least to not be unusably slow).
This is FUD. There isn't a single real desktop Linux distribution without OpenGL support. The basic OpenGL API hasn't ever changed; it's just been extended. It has even more backwards compatibility than Direct3D. Sure, you can deliberately build a distro with only Vulkan or GLES (a mobile API) if you want to be an ass, but the same goes for Windows. Same for X11 - Xlib works everywhere, even on any Wayland-only distribution that gives a single crap about running binary-distributed software.
Now GUI toolkits are more of an issue. That's annoying for some programs, many others do their own thing anyway.
Question, from an application developer's perspective: What is the implication in regards to cross-platform Vulkan applications? I.e., my 3D applications all use Vulkan, and they compile and just work on both Windows, and Ubuntu. Does this mean that on other or older distros, they might not work?
I don’t think the support depends on distros much, I think the main variable is hardware. If you have a desktop PC bought in the last ~5 years the support should be OK, for the hardware older than that the support is not guaranteed. GeForce GT 730 (launched in 2014) doesn’t support Vulkan, Intel only supports Vulkan on Windows starting from Skylake launched in 2015.
Then there’re quality issues. If you search internets for “Windows Vulkan issue” you’ll find many end users with crashing games, game developers with crashing game engines https://github.com/godotengine/godot/issues/100807 recommendations to update drivers or disable some Vulkan layers in registry, etc.
On Windows, Vulkan is simply not as reliable as D3D. The reasons include market share, D3D being a requirement to render the desktop, D3D runtime being a part of the OS supported by Microsoft (Vulkan relies solely on GPU vendors), and D3D being older (first version of VK spec released in 2016, D3D11 is from 2009).
Another thing, on Linux, the situation with Vulkan support is less than ideal for mobile and embedded systems. Some embedded ARM SoCs only support GLES 3.1 (which BTW is not too far from D3D 11.0 feature-wise) but not Vulkan.
Agree overall. Just want to point out that Vulkan works on Intel Haswell. I have a 2013 MacBook Air and a 2013 Mac Pro that both have Haswell. Linux kernel 6.14 actually includes a Haswell Vulkan update from Intel themselves.
> Vulkan works on Intel Haswell
Unless you are running Windows, in which case it doesn’t. Intel simply has not made a driver.
> Does this mean that on other or older distros, they might not work
Yep, exactly. While the Vulkan API is well defined and mostly stable, there is no guarantee the Linux implementation will also be stable. Moreover, Khronos graphics APIs only deal with the stuff after you have allocated a buffer and done all the handshakes with the OS and GPU drivers. On Linux, none of those have API / ABI / runtime configuration stability guarantees. Basically it works until just one of the libraries in the chain breaks compatibility.
This is BS. Vulkan buffers are allocated with Vulkan functions. Window system integration is also provided by window-system specific Vulkan extensions just like it was with WGL/GLX/EGL etc. These are all well defined and stable.
I'm not sure what they are talking about. And they might not know what they're talking about.
All Linuxes I'm familiar with run Mesa, which gives you OpenGL and Vulkan.
That depends on how you build your program and what other dependencies you pull in. But as far as Vulkan is concerned, your program should run on any distro that is at least as new as the one you build on (talking about ABI; runtime requirements depend on hardware but don't depend on the system you build on).
Without a Vulkan-capable GPU you still get 3D acceleration via WineD3D. Unless you meant without any GPU at all :s
Good point. I forgot that in addition to DXVK they also have WineD3D.
> That makes no sense: shipping all dependencies (e.g. shipping a container image) gives perfect binary compatibility on Linux, which is what flatpak/snap/appimage do.
True, but sad. The way to achieve compatibility on Linux is to distribute applications in the form of what are essentially tarballs of entire Linux systems. This is the "fuck it" solution.
Of course I suppose it's not unusual for Windows stuff to be statically linked or to ship every DLL with the installer "just in case." This is also a "fuck it" solution.
> to distribute applications in the form of what are essentially tarballs of entire Linux systems.
Not so bad when Linux ran from a floppy with 2MB of RAM. Sadly every library just got bigger and bigger without any practical way to generate a lighter application-specific version.
Also, 64-bit code and especially data are just larger, because every address is 8 bytes, and data has to be aligned on at least 4-byte boundary.
You can still have very tiny Linux with a relatively modern kernel on tiny m0 cores, and there's ELKS for 16-bit cores.
You could say the same about container images for server apps. (Packaging is hard.)
If Linux userspace had libraries with stable ABI, you could just tar or zip binaries and they would work. You wouldn't need to bundle system layer. This is how you deploy server apps on Windows Server systems. You just unpack and they work.
It is not a packaging problem. It is a system design problem. Linux ecosystem simply isn't nice for binary distribution except the kernel, mostly.
Linux feels a bit different since the complete system is not controlled by a single vendor. You have multiple distributions with their own kernel versions, libc versions, library dependencies, etc.
Mac OS has solved this but that is obviously a single vendor. FreeBSD has decent backwards compatibility (through the -compat packages), but that is also a single vendor.
-compat packages exist on Fedora-like systems too, usually allowing older versions to run. I can't say how far back, but RHEL usually has current version - 1 for -compat packages.
Yep, having a single supplier of the system layer usually ends up with better backwards compatibility for binary distribution.
That's also why I have the opinion that the world is worse off due to the fact that Linux "won" Unix wars.
Packaging is “hard” but mobile and app stores do it.
They do it by having standards in the OS, partial containerization, and above all: applications are not installed “on” the OS. They are self contained. They are also jailed and interact via APIs that grant them permissions or allow them to do things by proxy. This doesn’t just help with security but also with modularity. There is no such thing as an “installer” really.
The idea of an app being installed at a bunch of locations across a system is something that really must die. It’s a legacy holdover from old PC and/or special snowflake Unix server days when there were just not many machines in the world and every one had its loving admin. Things were also less complex back then. It was easy for an admin or PC owner to stroll around the filesystem and see everything. Now even my Mac laptop has thousands of processes and a gigantic filesystem larger than a huge UNIX server in the 90s.
I can't think of a single thing that would kill the last bit of joy I take in computing more. If I woke up in such a world, I'd immediately look to reimplement Linux in an app and proceed to totally ignore the host OS.
I agree. Though it helps that Apple (NeXT, really) got it right with their .app directory format, even outside the app store.
> Also glibc (contrary to the author's false claims) and properly designed libraries are backwards compatible, so in principle just adding the debs/rpms from an older Debian/Fedora that ships the needed libraries to the packaging repositories and running apt/dnf should work in theory, although unfortunately might not in practice due to the general incompetence of programmers and distribution maintainers.
Got it. So everything is properly designed but somehow there's a lot of general incompetence preventing it from working. I'm pretty sure the principle of engineering design is to make things work in the face of incompetence by others.
And while glibc is backward compatible & that generally does work, glibc is NOT forward compatible which is a huge problem - it means that you have to build on the oldest distro you can find so that the built binaries actually work on arbitrary machines you try to run it on. Whereas on Mac & Windows it's pretty easy to build applications on my up-to-date system targeting older variants.
> So everything is properly designed but somehow there's a lot of general incompetence preventing it from working.
But it is working, actually:
* If you update your distro with binaries from apt, yum, zypper etc. - they work.
* If you download statically-linked binaries - they work.
* If you download Snaps/Flatpak, they work.
> it means that you have to build on the oldest distro you can find so that the built binaries actually work on arbitrary machines you try to run it on.
Only if you want to distribute a dynamically-linked binary without its dependencies. And even then - you have to build with a toolchain for that distro, not with that distro itself.
> Only if you want to distribute a dynamically-linked binary
Even statically linked code tends to be dynamically linked against glibc. You’ve basically said "it works, but only if you use the package manager in your OS". In other words, it’s broken and hostile for commercial 3p binary distribution, which explains the state of the commercial 3p binary ecosystem on Linux (there’s more to it than just that, but being actively hostile to making it easy to distribute software to your platform is a compounding factor).
I really dislike Snaps/Flatpak as they’re distro specific and overkill if I’m statically linking and my only dynamic dependency is glibc.
Glibc is fantastically stable and backwards compatible in all the same ways, and I think you're overstating how backwards compatible Windows is as well. Microsoft has the exact same dynamic library issues that Linux does via its Microsoft Visual C++ redistributables (as one example). Likewise, there are forwards compatibility issues on Windows as well (if you build a program on Windows 11 you'll have a hard time running it on Windows XP/Vista for a huge number of reasons).
If you build a statically linked program with only glibc dynamically linked, and you do that on Linux from 2005, then that program should run exactly the same today on Linux. The same is true for Windows software.
I'm pretty sure it’s safe to distribute Windows 11-built binaries to Windows 7 and Windows 10 if it’s a valid target set in Visual Studio. The C++ runtime is its own thing because of a combination of C++ BS (no stable runtime) and C++ not being an official part of Windows. It’s a developer tool they offer. But you can statically link the C++ runtime, in which case you can build with the latest runtime on Windows 11 and distribute to an older Windows.
Linux is the only space where you have to literally do your build on an old snapshot of a distro with an old glibc so that you can distribute said software. If you’re in C++ land you’re in for a world of hurt because the version of the language is now constrained to whatever was available at the time that old distro from 5+ years ago was snapshotted, unless you build a newer compiler yourself from scratch. With Rust at least this is much easier, since they build their toolchain on an old version of Linux and thus their binaries are similarly easy to distribute, and the latest Rust compiler is trivially easy to obtain on old Linux distros.
Source: I’m literally doing this today for my day job
You can also build a cross-compiler to target an older glibc; you are not limited to the distro-provided toolchain. This also allows you to use newer C++ features (with exceptions) as those mostly depend on the GCC version and not the glibc version. Of course the supported range of glibc versions varies with the GCC version, just like Visual Studio doesn't support XP anymore - the difference is that if you are sufficiently motivated you can patch GCC.
Flatpaks aren't distro specific.
As for being overkill, surely you can see the advantage of having a single uniform distribution format from the end user's perspective? Which, sure, might be overkill for your case (although app isolation isn't just about dependencies), but the important thing is that it is a working solution that you can use, and users only need to know how to install and manage them.
You have to install the Flatpak runtime to begin with, so that’s one obstacle for distribution. And it also doesn’t really isolate as much as you’d like to believe - eg dealing with audio will still be a mess because there are like 4 different major audio interfaces. And now I have to host a Flatpak repo and get the user to add my repo if it’s proprietary software. It’s really nowhere near as smooth and simple as on Windows/Mac/Android/iOS.
> You have to install the Flatpak runtime to begin with, so that’s one obstacle for distribution.
Only if you're using a distro that doesn't come with it preinstalled. But that doesn't make it distro-specific?
> And now I have to host a Flatpak repo and get the user to add my repo if it’s proprietary software.
You don't have to do that, you can just give them a .flatpak file to install: https://docs.flatpak.org/en/latest/single-file-bundles.html
The reason to host a repo regardless is to enable easy auto-updates - and I don't think you can call this bit "smooth and simple" on Windows and Mac, what with most apps each doing their own thing for updates. Unless you use the app store, but then that's exactly the same as repos...
Windows toolchain (even gnu) just provides old libraries to link with. This should work the same on linux, and AFAIK zig does just that.
Windows toolchain provides the import libraries to link with, and these are basically just tables mapping function names to indices in the DLL export table. So long as you don't actually use the new functions, an app linked against a modern Windows SDK will run just fine on old Windows, unlike the situation with glibc.
The situation with glibc is the same, you only depend on functions you use.
Almost - with glibc your code uses functions like memcpy but you end up linking against symbols like memcpy@GLIBC_2.14 which is the version of memcpy added in glibc 2.14 and which won't be present in older versions. Which symbol version your calls use depends on the glibc version you build against - generally it's the most recent version of that particular function. For the Win32 this is rarely the case and instead you have to explicitly opt in to newer functions with fixed semantics.
Still, to reliably target older Windows versions you need to tell your toolchain what to target. The Windows SDK also lets you specify the Windows version you want to target via WINVER / _WIN32_WINNT macros which make it harder to accidentally use unsupported functions. Similarly, the compilers and linkers for Windows have options to specify the minimum Windows version recorded in the final binary and which libraries to link against (classic win32 dlls or ucrt). Unfortunately there is no such mechanism to specify a target version for glibc/gcc, and you have to either build against older glibc versions or rely on third-party headers. Both solutions are workable and allow you to create binaries with a wide range of glibc version compatibility, but they are not as ideal as direct support in the toolchain would be.
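You can see the versioned bindings directly on any built binary (a sketch, ./myapp being whatever you just linked):

    # each undefined libc symbol records the glibc version it was bound to at link time
    objdump -T ./myapp | grep GLIBC_
    # e.g.  0000000000000000  DF *UND*  0000000000000000  GLIBC_2.14  memcpy
    # the binary refuses to load on any glibc older than the newest version listed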
Yeah maybe I should just be complaining that the Rust tool chain (or rather distros) should be including old versions of prebuilt glibc to link against?
> And while glibc is backward compatible & that generally does work, glibc is NOT forward compatible which is a huge problem - it means that you have to build on the oldest distro you can find so that the built binaries actually work on arbitrary machines you try to run it on.
Isn’t this easily solved by building in a container? It's something a lot of people do anyway - I do it all the time because it insulates the build from changes in the underlying build agents. If the CI team decides to upgrade the build agent OS to a new release next month or migrate them to a different distro, building in a container (mostly) isolates my build job from that change, whereas doing it directly on the agents exposes it to them.
Umm... that doesn't isolate you in any meaningful way, because your surrounding OS is still the container, with a base image from a Linux of years ago?
glibc really doesn't want to be statically linked, so if you go this route your option is to ship another libc. It does work but comes with its own problems, mostly revolving around NSS.
And NSS defines how usernames are matched to uids, how DNS works, how localization works and so on. If you're changing libc you need to ship an entire distro as well since it will not use the system libraries of a glibc distro correctly.
I'm not sure why there are so many naysayers. I've been having the same thought ever since the initial release of the steam deck and think it's a great idea. In my vision no trace of Linux is discoverable by the user.
Valve's SteamOS (and inspired distros) already basically does this. It's centered around games, but everything else (if you're lucky) is natively supported on Linux.
You can run non games on Proton. Most things work.
Zorin OS supports running .exes directly (via Wine of course). https://help.zorin.com/docs/apps-games/windows-app-support/
This is the first time I've heard of that [Ubuntu?] distro. Would be curious to hear how it's working out, from anyone using it as their daily driver, and how it compares to Mint etc. on the Linux side of things.
My mum was using it as a daily driver for all your average user PC stuff. It was decent, easy to use and more user-friendly than Mint, IMO – until an update borked it completely, after two years of running. Unfortunately post-update breakage is something that's typical of Ubuntu and most Ubuntu-based distros, so it's not really surprising [1]. I've since switched her to an immutable distro (Aurora [2]) and it's been rock solid.
[1] https://ounapuu.ee/posts/2025/02/05/done-with-ubuntu/ [2] https://getaurora.dev/en
A major problem is that Microsoft has fundamentally torpedoed Wine by blocking MS Office compatibility with Wine for quite a while now.
(i.e. only fairly old versions of MS Office work on Wine)
> I can pull down a 20 year old exe and still run it today on Windows.
Why, oh why, do I have to deal with exe files that are not even 5 years old and don't work on my Windows laptop after an update... I wish I lived in the author's universe...
Compatibility mode usually solves those problems.
and you could! If only you could find the right DLLs.
> In Linux, you can make system calls directly...
> In Windows, you do not make system calls directly. Instead, you dynamically link to libraries that make the system calls for you.
Isn't the actual problem the glibc shared library since the Linux syscall interface is stable? (as promised by "don't break user space") - e.g. I would expect that I can take a 20 years old Linux binary which only does syscalls and run that on a modern Linux, is that assumption wrong?
ABI stability for Windows system DLLs is also only one aspect, historically Microsoft has put a ton of effort into preserving backward compatibility for popular applications even if they depend on bugs in Windows that had been fixed in later Windows versions.
I expect that Windows is full of application specific hacks under the hood to make specific old applications work.
E.g. just using WINE as the desktop Linux API won't be enough, you'll also have to extend the "don't break user space" promise from the kernel to the desktop runtime environment, even if it means "bug-by-bug-compatibility" with older versions.
Yeah the direct syscall interface isn't a problem because it's so stable. The problem is almost entirely glibc. If GCC simply had a flag --glibc-version=2.14 or whatever then 99% of the problems would be solved.
I tend to just compile on a really old distro to work around this. Tbf you need to do the same thing on Mac, it just isn't so much of an issue because Macs are easier to update.
The other side of the problem is that the whole Linux ecosystem is actively hostile to bundling dependencies and binary distribution in general, so it's not a surprise that it sucks so much.
> Isn't the actual problem glibc since the Linux syscall interface is stable?
Yes
> I would expect that I can take a 20 years old Linux binary which only does syscalls and run that on a modern Linux, is that assumption wrong?
You’re right. But those apps are simple enough that we could probably compile them quicker than they actually run.
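For anyone curious, here's a hedged sketch (my own, assuming x86-64 Linux; build flags are illustrative) of the kind of binary being discussed: it never touches libc and talks to the kernel directly, so only the stable syscall ABI matters.

    /* build with something like: gcc -static -nostdlib -o hello hello.c */
    static long raw_syscall3(long nr, long a1, long a2, long a3) {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(nr), "D"(a1), "S"(a2), "d"(a3)
                          : "rcx", "r11", "memory");
        return ret;
    }

    void _start(void) {
        static const char msg[] = "hello from syscalls only\n";
        raw_syscall3(1, 1, (long)msg, sizeof msg - 1);  /* write(1, ...) */
        raw_syscall3(60, 0, 0, 0);                      /* exit(0)       */
    }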
> I expect that Windows is full of application specific hacks under the hood to make specific old applications work.
Yes [0]!
> just using WINE as the desktop Linux API won't be enough, you'll also have to extend the "don't break user space" promise from the kernel to the desktop runtime environment
Yes, but. Windows is the user space and kernel for the most part. So the windows back compat extends to both the desktop runtime and the kernel.
You might argue it’s a false equivalence, and you’re technically correct. But that doesn’t change the fact that my application doesn’t work on Linux but it does on windows.
[0] https://news.ycombinator.com/item?id=35203390
I'm not trying to defend Linux btw, and I appreciate Microsoft's approach to backward compatibility (some of the Windows games I play regularly hail from the late 90s).
Just wanted to point out that ABI stability alone probably isn't the reason why Windows is so backward compatible, there's most likely a lot of 'boring' QA and maintenance work going on under the hood to make it work.
Also FWIW some of the early D3D9 games I worked on no longer run out of the box on Windows (mostly because of problems related to switching into fullscreen), I guess those games were not popular enough to justify a backward compatibility workaround in modern Windows versions ;)
Again, you’re technically correct but I don’t think it matters.
Windows gives (in practice) DE, user space, and kernel stability, and various Linux distributions don’t. If you care about changing the Linux ecosystem to provide that stability it matters, but if you want to run an old application it doesn’t.
Why so complicated? Wine is cool if you need to run an existing binary but when you're writing your own software, why not just compile the platform independent part into a binary and make the platform dependent part a little library (open-source)?
> Imagine we made a new Linux distro. This distro would provide a desktop environment that looks close enough to Windows that a Windows user could use it without training. You could install and run Windows applications exactly as you do on Windows; no extra work needed.
Why not use ReactOS?
Linux has far greater hardware support than ReactOS does.
Maybe we should fund ReactOS for end-user applications. Win32 is well established and isn't going anywhere. So why not take advantage of Microsoft's API design effort
But who is "we"? It might have been valuable to games makers, but Wine/Linux based systems seems to be adequate for them. So who is left?
>So who is left?
People who like and need Windows apps, people who want to have an out of the box experience when running those apps, people who don't like the loss of performance when using Wine, people who generally like Windows but want to have an alternative in case they dislike where Microsoft is heading with Windows development.
That is a lot of people, me included. But since the Windows experience is somehow still tolerable, there aren't many willing to invest time or money into ReactOS. There are no corporate sponsors, since you don't make money from desktop OSes unless you use them to sell expensive hardware like Apple does.
Someone like Valve could have sponsored it, but they thought they could reach their goals with Wine while spending much less money.
Another sponsor for ReactOS could be a state actor like China or the EU, somebody with deep pockets who wants and needs to run Windows software but doesn't want their desktop to be under US control.
Any people who prefer Windows' primary design choices over Unix ones too.
> Another sponsor for ReactOS could be a state actor like China or the EU, somebody with deep pockets who wants and needs to run Windows software but doesn't want their desktop to be under US control.
I would love to see the EU do this, actually. Maybe we should pitch this as citizens.
ReactOS is too buggy to be used as a daily driver for your operating system, but it's awesome as Windows kernel reference code. You want to know what a kernel-mode function does? Either read the documentation, or look at what ReactOS does. (Yes, leaked Windows code exists too, and it's even on freakin Microsoft-owned GitHub of all places, but you can't legally look at that stuff!)
> but you can't legally look at that stuff!
Hi, it's me, Mr Hair Splitting: to the best of my knowledge it's not illegal to read the source, but it would be illegal to use the source in your own application because you didn't author it or have a license to it
That's actually why the Wine and ReactOS folks want to disqualify folks who have read the source for fear they would inadvertently "borrow" implementation ideas, versus being able to explain to a judge how they, themselves, came up with the implementation. The key point is that Wine and ReactOS merely disqualify someone, not imprison or fine them
I played with ReactOS a few months ago in a virtual machine, and even in that relatively controlled environment it still crashed a lot.
I’ve been hoping that ReactOS would be the thing that truly murdered Microsoft Windows, but that really hasn’t happened; it seems like that’s happening via a combination of a lot of applications moving to the browser and compatibility layers like Wine and Proton.
Linux has pretty good driver support nowadays, and outside of drivers Wine will have as good or better support for applications, so I am not completely sure what that says about the future of ReactOS.
Is that even stable to use yet? Last time I used ReactOS, it was very unstable to use as a daily driver.
It could barely run some basic apps like the JetBrains IDEs; at best it was crashing frequently for me.
Because ReactOS doesn't work?
20 year old games don't work that well at all on modern Windows, so that's one counterexample; not sure where the point comes from.
It really depends on the game, but generally speaking, 20 year old games (that would be from 2005) work on modern Windows just fine. Games developed back in Win9x era are usually more troublesome.
I recently played Sinistar Unleashed on my Linux laptop.
I was never able to get this game working on regular Windows hardware, even when I bought the game brand new and tried running it on a contemporary computer, but it runs fine with Wine and Proton.
I decidedly could not get it working on a dual boot of Windows 10 (that I installed just to play it).
Granted, even with Wine it wasn’t trivial to get working, but it wasn’t that bad. The game is actually not bad, I would have loved playing it as a kid, but I had to wait 25 years for Wine to let me play it, apparently.
Do you use CD drive emulation with wine?
I actually didn't for this, I was able to mount the ISO with linux and then run the executable directly to install it, then futz around with Wine settings on the install path to eventually get the game launching.
Dancing to a proprietary tune is risky - they can decide to change the API or go after you with lawsuits if it becomes too competitive.
You can provide backwards compatibility in Linux - you can keep old versions of libraries installed. The more commercial distros do this to a greater degree. It's roughly what windows is doing to achieve the same result.
It's just a cost to arrange and since most distros aren't making billions in licensing they choose not to pay it.
Obviously I have nothing against a wine-focused distro but I wouldn't myself waste a fraction of a second writing code against the windows API by choice.
This is a wonderful idea and could succeed if the creator could rally the right devs and users. What it really needs is Ubuntu tier branding and UX work. This has been a rarity in the Linux desktop space.
I am hopeful SteamOS will bring us something very similar.
>What it really needs is Ubuntu tier branding and UX work.
That means somebody paying for the work. Designers and UX specialists don't care about free work like many programmers do.
Yeah Blender might be a good model here.
First-class support for Windows applications might just become doable, if Wine continues to progress and Win32 doesn't accelerate. There were a handful of quality of life improvements in previous Windows releases, but the biggest Win32 changes feel like they happened quite a while ago by now, and for good reason: Win32 is stable and mature. It's still a moving target, but not by nearly as much, and even if Microsoft wanted to move it for the sake of moving it, they might find more resistance than they can completely overcome. For now, I think Wine is still not good enough to recommend people just use for everything, though. It's incredible, but incredible doesn't make Photoshop install.
However, I also think that we could "solve" a lot of the compatibility problems.
There are tons of old Linux binaries that don't work anymore. But... They could. A lot of old binaries, surely the vast majority, could absolutely run on a modern kernel. The problem is the userspace. The binaries themselves contain oodles of information that could be used to figure out what they need to run, it's just that there's nothing in place to try to make sure that stuff is available.
I really believe we could make it possible for a distro, out of the box, to make old binaries "just work", double-click and run. Want to install an old game from an .rpm or .deb you have? The system could identify what base OS that is and install it into its own chroot with its dependencies, then create desktop icons for it. Execution failures? Missing libraries? Xlib errors? Let's have a graphical error message with actionable help.
Well, it could be done, anyway. If you wanted to follow the spirit of Windows here, it would be the right thing to do, and it'd help users who found a thing that says it supports "Linux" run that thing the way they would hope and expect it to run. Will it actually happen? Not unless someone makes it happen, and convinces distros, desktops and all other stakeholders it's worth shipping, then maintains and improves it going forward. It's a bit depressing when you realize that the technical part of implementing this is basically the least challenging part, at least for a proof of concept.
>Imagine we made a new Linux distro
Imagine we made a new shade of brown
Seriously, this is the most cliché thing you could do with Linux.
The suckless project gave us stali linux, a statically compiled linux distribution.
Doesn't static compilation solve quite a few of the problems stated here?
https://sta.li/
Yes. It’s the same reason AppImage could work — if the licensing allows for the all libraries to be included in the image, because the Linux syscall interface is generally stable.
“We do not break userspace”
AppImages have a few problems. Ever seen how many dependencies you need installed to execute an AppImage?
You also need to be in an environment where you can create FUSE filesystems. And IIRC the reference implementation requires the deprecated fuse2 library to work.
Snaps, Flatpaks, AppImages and static linking are all solutions to a real problem. But I don't think AppImages are an especially good solution.
I talked a bit with Richard Brown about supporting AppImages in Aeon, the openSUSE immutable distro. But he believed the base system would need far too many dependencies specifically to support the AppImage runtime, including deprecated fuse2 support.
Snaps and Flatpaks have many more dependencies than AppImage, they are just not deprecated YET.
True, but I believe Flatpaks offer more than just "single executable applications". In the case of Aeon it's the primary way of installing additional software.
I still can't get MS Office 365 working on Linux over Wine, and none of the alternatives make me comfortable. Comparing the Linux and Win32 ABIs on Linux is nonsense without talking about Wine compatibility.
Have you checked out OnlyOffice recently? If so, I'm curious what are the deal-breaking features you find that it lacks compared to M365.
Why am I getting the vibe that OnlyOffice is closed-source?
it's actually open https://github.com/ONLYOFFICE
> We also already have a simple way to run Windows applications, Wine.
Are you high? There is nothing simple about Wine. It's at once a kludgy mess and a technical masterpiece, what it isn't is simple.
Depends. Proton and CrossOver are dead easy, especially for the supported apps.
Easy to use? Sure. Simple? No way.
https://support.codeweavers.com/en_US/2-getting-started/2-in...
Seems easy to me
Again, sure. Now try to contribute to the source.
https://www.christopherspenn.com/2021/08/simple-is-not-easy/
What source?
Proton/Wine/CrossOver.
Fairly simple for the end user, not simple in its implementation.
Like a lot of things in any OS, it depends how far off the beaten track you go. I think there are a lot of gaming newcomers to Linux whose main exposure to Wine is via Steam, which generally wraps things up for them and hides the details behind their launcher. If you need to go diving into the details for compatibility, or you'd like to separate things out with prefixes, then it's definitely much less polished and elegant, but I'd argue you could say the same thing about Windows compatibility with some old games - otherwise sites like PCGamingWiki.com would be a lot smaller, as the combinations of Windows (or DOS) + drivers + hardware over the years haven't resulted in perfectly consistent compatibility.
Doesn’t Zorin Linux already sort of do this?
He is not wrong. My software compiled with Borland Delphi 1.0 works beautifully with Wine under Linux and works just as well under Windows.
I'm saying this as a Java developer. Delphi eventually proved itself to be the true "compile once, run everywhere". I can imagine others who wrote executables for Windows before the .NET times can relate to similar experiences.
> Try doing the same with a Linux binary that's just a year old.
AppImages are very close to fixing this. I'm not sure if it is already solved or just very close.
Adobe Reader for Linux hasn't been updated since 2013 but the flatpak still works fine.
> I can pull down a 20 year old exe and still run it today on Windows
Barely - most bigger programs did not adhere to all standards, but got custom fixes under the hood in follow-up windows versions.
Also, around 2001 was the big architectural change for desktop from DOS to NT, so this might seem like cherry-picking the timeframe selected.
> Also, around 2001 was the big architectural change for desktop from DOS to NT, so this might seem like cherry-picking the timeframe selected.
It's true that the entire Windows product family converged onto the NT codebase with the release of Windows XP, but this isn't really relevant -- Windows executables and DOS executables were always different things, and despite being built on top of a DOS-based kernel, Windows 9x still supported the same Win32 binaries that ran under NT.
There was even an extension called Win32S that allowed Win32 executables to be run under Windows 3.1. The Win32 API dates to the early '90s, and modern implementations do support executables dating all the way back to the beginning.
> Try doing the same with a Linux binary that's just a year old.
I do that all the time. Just link to a static glibc or musl.
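As a hedged sketch of what that looks like in practice (assuming musl-gcc is installed; file names and flags are illustrative): with musl the resulting binary has no runtime libc dependency at all, so the glibc symbol-version problem never comes up. The NSS caveats mentioned elsewhere in the thread still apply, and dlopen won't work from a fully static binary.

    /* build with something like: musl-gcc -static -O2 -o hello hello.c
       `ldd hello` should then report "not a dynamic executable" */
    #include <stdio.h>

    int main(void) {
        puts("the same binary should run on old and new distros alike");
        return 0;
    }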
I really want to statically link OpenGL and Vulkan for exactly this purpose, but neither uses a wire protocol (unlike X11 or Wayland). The whole "loading library" scheme feels like hazing for any beginner graphics programmer, on top of the already complex graphics APIs.
I know at least for OpenGL, not all graphics cards/drivers would implement the entire featureset. So there was a reasonable justification for the dynamic linking and bringing in functions one by one.
I think that a wire protocol could support that with a query/response for supported versions and functions. The decision to use dynamic linking removes the overhead of serialization, but also removes the option of static linking.
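For readers who haven't run into it, this is roughly the run-time loading dance in question - a hedged sketch assuming a GLX-based libGL stack (a real program would also need a GL context before calling anything it fetched):

    #include <dlfcn.h>
    #include <stdio.h>

    typedef void (*pfn_glClear)(unsigned int mask);
    typedef void (*(*pfn_getProc)(const unsigned char *))(void);

    int main(void) {
        /* libGL is resolved at run time, never linked statically */
        void *libgl = dlopen("libGL.so.1", RTLD_LAZY | RTLD_GLOBAL);
        if (!libgl) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

        pfn_getProc getProc = (pfn_getProc)dlsym(libgl, "glXGetProcAddressARB");
        if (!getProc) { fprintf(stderr, "no glXGetProcAddressARB\n"); return 1; }

        /* each entry point is fetched one by one, per the driver's featureset */
        pfn_glClear my_glClear = (pfn_glClear)getProc((const unsigned char *)"glClear");
        printf("glClear lives at %p\n", (void *)my_glClear);

        dlclose(libgl);
        return 0;
    }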
Yeah, just my thought. Instead of all the effort and overhead and awful API of Win32 just statically link musl. Still, there are of course downsides and limitations to either approach.
There are multiple distros that make it very easy - as in download a .exe or .msi and click on it: https://help.zorin.com/docs/apps-games/windows-app-support/
Some of these have a long history: https://en.wikipedia.org/wiki/Linspire
They have never been all that successful.
I suspect there is not enough overlap between people who want to use Linux and people who need to run Windows apps that badly for it to be viable.
The biggest problem is games, and even with Steam's best efforts not all Windows games will run on Linux, AFAIK.
> While the Linux syscalls themselves are very stable and reliable, the c library on top of them is not.
Maybe just don't use that library then? Or don't do that ridiculous thing where you fumble around at runtime desperately looking for executable pages that should just be included in your binary.
It's not "some c library on top of them", it's glibc. You can use another libc, but that means you're going to be incompatible with the distro expectations in terms of configuration, because that's handled by glibc, so you just push off the instability to a different part of your system.
This is a wonderful idea. I have some doubts, though. It might not provide a seamless experience.
Just transforming Windows syscalls into Linux syscalls is not enough. There should be some form of emulation involved.
Many apps, like games, use hardware directly, and that means some additional layers of emulation.
>Imagine we made a new Linux distro. This distro would provide a desktop environment that looks close enough to Windows that a Windows user could use it without training. You could install and run Windows applications exactly as you do on Windows; no extra work needed.
I expect a rough user experience, some loss of performance and many bugs.
But I hope I am wrong, because the idea sounds really promising.
I want the opposite: I'd like a way to run the Windows kernel, drivers and most low-level OS stuff from Windows, but with a Linux user interface: Cinnamon, apt and all the Debian stuff.
I run Mint as my main OS, but hardware compatibility is still a headache in Linux for me.
You can resurrect SFU and build a replacement GUI for Explorer. You can't get rid of Win32, but you can cover up most of it. Implementing a Personality would be the Windows-way of doing this as it is designed for just what you ask.
If you're not buying a laptop with Linux preinstalled and supported by the hardware vendor, you're going to have a hard time.
You might get lucky, but it sounds like you've not been lucky.
I’ve never bought one of the dedicated Linux laptops, but I’ve had pretty good luck with AMD stuff.
My current laptop, Thinkpad P16s AMD Gen 2, was pretty straightforward to get working with NixOS. No special drivers, and everything, including WiFi and function buttons on the keyboard, worked fine without any kind of special concessions.
This was also the case for my last non-Mac, from 2017-2020, I got Ubuntu installed on there without many headaches, and it wasn’t a specific Linux laptop, though again it was AMD.
What hardware isn't compatible?
Doesn't WSL do that?
WSL 1 did, WSL 2 runs in a VM.
> In Windows, you do not make system calls directly. Instead, you dynamically link to libraries that make the system calls for you. This allows Microsoft to do all sorts of shenanigans at the kernel level while providing a stable API to userspace. This little stroke of genius allows us to have both Linux and Windows on the same machine at the same time.
Precisely correct. Linux should never have allowed system calls from outside libc or a big vdso.
I wouldn't be surprised if Microsoft does something to that effect in the future. Have Win 32 as a layer on top of Linux.
They seem to not be interested in locking down the hardware, and they don't make much money from selling Windows, and it shows. There aren't many strong improvements in Windows, and it feels like Windows is a platform they use to sell other stuff they make money with - they are in a similar position with Windows as Google is with Android.
Java already solved this problem, for the most part. This whole ABI nonsense really grinds my gears. It's essentially just a result of the silly decision to compile software into dubious blobs and ship those to users. You could get rid of an awful lot of malware and massively simplify software distribution if you were to distribute a platform agnostic intermediary representation of source code that preserves enough semantic meaning to eliminate ABI issues, then leaves the last step of compilation to the operating system. Shipping binary files is just plain bad in every way.
> Shipping binary files is just plain bad in every way.
Aren't .class and .jar files "binaries"?
> Java already solved this problem, for the most part
Maybe, just maybe, there are some drawbacks that mean that in fact it's not solved. Otherwise perhaps Java would've completely obsoleted C, C++. Some of us design applications which can't tolerate the worst case GC pauses, for example. Some of us design applications which we can't afford to spend extra time heap-allocating nearly everything.
>Aren't .class and .jar files "binaries"?
Not at all. A .jar is just a zip with a different extension plus some metadata in META-INF (including dependencies). .class files are compiled Java files, but they do contain all kinds of metadata, including variable names and debug info (if you choose to retain it). They contain all methods and fields with their original names (along with annotations), so the reflection APIs work. Decompiling a class file is trivial, to the point that the original line numbers can be restored.
>Otherwise perhaps Java would've completely obsoleted C
Java does require a managed runtime written mostly in C/C++.
>Some of us design applications which can't tolerate the worst case GC pauses
The current breed of low-latency GCs without a stop-the-world phase should suffice for a large set of applications.
>we can't afford to spend extra time heap-allocating nearly everything.
That has not been an issue for quite some time: heap allocation can be elided, and under normal conditions it is just a pointer bump. Per-thread private allocation is by far the most common case - the garbage collection of objects not referenced from the old gen is totally trivial too (i.e. a memset). Even shared (cross-thread/area) allocation is a CAS'd bump in most cases. Note: copying/generational garbage collectors copy objects that are referenced by non-young-gen ones to another area, then zero the original area.
With that being said - Java (and managed languages) are no panacea.
Java can be optimized beyond all recognition, into bytecode that can no longer be represented by the Java language. At least that used to be the case in the past. It is not different from other binaries, except the target system is a virtual CPU rather than a real one.
Java also deprecated all sorts of things over the years. Not to mention applets being completely killed off. I have Java binaries from 25 years ago that could no longer run at all with a contemporary run-time already 10-15 years ago.
Not to mention much of real-world Java is platform-specific. Not often native code perhaps, but more subtle things like hardcoded paths or forgetting to properly use the correct path-separator. Installers used to often be platform-specific as well. Maybe that has been solved, but you would still run into trouble trying to install an old Java application that has an installer only supporting contemporary Windows and Mac systems.
> Java can be =optimized= beyond all recognition, into bytecode that can no longer be represented by the Java language.
I am not sure how that works; Java is all about JIT, and the bytecode almost doesn't matter. Personally I can read assembly (and years [decades] back could just read hex), so even obfuscated (not optimized) Java is quite readable. Still, the class files do retain all method declarations, all constant pool entries and all bytecode (again, trivial to decompile). There have been a few changes in the class format, of course.
> Java binaries from 25 years ago that could no longer run at all with a contemporary run-time already 10-15 years ago.
You'd need a shorter time frame: Java 8 (10 years back) could run pretty much anything from Java 1.0 (or even 0.9). It's quite a bit more recent - Java 9 (2017) - that introduced Project Jigsaw. Prior to that, Java was by far the most backward-compatible platform. It still is, for most applications. Do note that deprecation mean(t) "do not use in new projects", not "it has been removed"; again, removals are more recent changes.
>Not to mention much of real-world Java is platform-specific.
Again, that's super rare nowadays. Stuff like zstd might load a library but even then the native code interfaces are the same across pretty much all platforms. If you talk about native UIs you might have some point, of course.
>to properly use the correct path-separator.
Windows with its backslash is notorious, yet there has been no reason to use the backslash for like 25 years now. All Windows paths do work with a forward slash (besides, the separator is readily available in java.io.File).
> Some of us design applications which can't tolerate the worst case GC pauses, for example
First of all, I should like to point out that such people are overwhelmingly deluded and come to this belief without ever actually having tested the hypothesis. But at any rate, the idea of a JAR file doesn't require garbage collection. We can already see this with things such as Wasm, though it doesn't quite achieve what I would want.
You mean ABI issues like not being able to run a java 11 jar on a java 8 runtime?
No, I mean ABI issues like not being able to change the order of fields in a struct.
> dubious blobs
I think you are just suggesting to replace binary blobs with other binary blobs e.g. CLR/.NET assemblies/executables or WebAssembly files.
Or do it the JavaScript way: distribute compressed minified (kinda compiled) source code and the runtime JIT compiles it at runtime (e.g. V8 engine Turbofan compiler).
I'm trying to replace platform-dependent and easily breakable binary files with platform-independent and change-resistant files. Yes, those files are still binary, but this is true of all files on a computer. What's useful about these new formats is that they retain a greater degree of information about the source code.
Java classes have an "ABI". Any binary representation of executable code that is supposed to be interacted with by other code necessarily defines an application BINARY interface.
The point was that the "abi" is platform independent (and has very late bindings)
Why does LibreOffice take much more time to start than MS Office? Why does it feel sluggish?
At least you didn't propose to write everything in Javascript.
While the Java implementation is suboptimal, there is really no need for it to be that way. I think the ideal way to go about it would be to run the compiler optimisations and whatnot then generate something semantically similar to C89 as output. Then you invoke a simple compiler with few optimisations on the target machine the first time the program is run, and cache the results somewhere. On all subsequent runs, you've got something which can actually run faster than pre-compiled C code because the compiler doesn't need to preserve the ABI so can do stuff like inlining dynamic library calls.
Do you know any software that does that?
Sadly not. Wasm is attempting something similar, but it lacks certain things that would be important for this (e.g. the ability to specify, in one module, a type of unknown size and then query its size in another module at link time).
> While the Linux syscalls themselves are very stable and reliable, the c library on top of them is not. Practically all of userland is based on libc, and therefore by proxy Linux itself has a binary compatibility problem.
Can't we freeze the functionality of libc? Why does it need to be updated so frequently?
And even if we make changes to its implementation, why do we need to bump the version number if the underlying API is still the same?
> "In Windows, you do not make system calls directly. Instead, you dynamically link to libraries that make the system calls for you. This allows Microsoft to do all sorts of shenanigans at the kernal level while providing a stable API to userspace."
Or, in other words, "We can solve any problem by introducing an extra level of indirection."
> I can pull down a 20 year old exe and still run it today on Windows.
Sure, but for how much longer will Microsoft allow this unsigned ancient binary?
Using Linux for running Windows programs is going to be desperately needed as Microsoft enshittifies Windows going forward.
I really don't see Microsoft blocking unsigned exe. There's just too much old Windows/DOS software out there still in use, sometimes running critical infrastructure.
Like someone already said somewhere, it will come in steps.
Windows S Mode was already a test.
The nagging, warning and outright "blocking" (while hiding the "run anyway" button under "more info") is the first step. This already is a warning to software vendors that something will come.
The next step will be blocking unsigned exes on Home Editions (not on Pro or Enterprise), so that software vendors and most of places depending on unsigned old software can move on to signed software.
Then Home and Pro editions of Windows won't be able to run unsigned software anymore, and if you need unsigned software to run you'll have to use an Enterprise edition.
The last step would be that no Windows can run unsigned software anymore, and if you need unsigned software running, you'll need to run it on an Azure instance of Windows which can still run unsigned software, or (if you can't or don't want to run your software in the cloud) you will have to contact Microsoft for a special Windows version, costing lots of money. But if your business depends on that one single unsigned exe file, you might be ready to pay for that.
By the way, Wine is more stable than Windows itself in supporting older Windows ABIs.
Someone should develop an analog for Linux itself. I.e. support for older / historic ABIs that would be translated into whatever modern Linux has.
One isolated example of that is SDL 1.x being translated to SDL 2.
Wine itself already exists, you don't need to develop any new distro for running Windows programs on Linux. Just improve Wine if anything is missing.
Exactly, wine is all that's needed here for windows stuff. And we have snap, flatpak, docker, and a few other things for Linux stuff.
We'll probably get a bit of irony in a few years when somebody at MS realizes that they can just use wine on top of their Linux emulation layer to run any old MS legacy software from the past three decades+ and then cleans up the rest of windows to not have any support for legacy APIs. Because having that is redundant. Emulators are great. There's no need to run ancient software natively.
> There's also no guarantee that a binary produced today on Linux will even work on the various distributions of Linux today due to the same installed library version problem.
On Linux, you are supposed to share the source code, not the binaries. FOSS source is easier to fix than a binary blob. This is why the FSF exists.
99% of Windows users don't even know what compiling is, nevermind compiling from source themselves.
It is not just _running_ things that's the problem; authentication and authorization are massive issues. I've attempted to run various audio plugins with Wine which either do not run at all, or run on a one-time basis, which is not feasible for any long-term setup. Oh, if only you could run them under a VM...
Seems like it would be easier to identify and fix cases of ABI compat breakage in the Linux userland than to convert Linux to Windows.
This would be an option if the Linux userland wasn't a mish-mash of unconnected developers with their own practices and release cadence. It's why we have LTS distros where the company will put in the massive amount of work to preserve binary compatibility.
But the trade-off is that the software you have in your repos will be really old. At the end of your RHEL support cycle, libs will be a decade out of date.
OpenSSL
What about it
Breaks compatibility. They remove or rename functions often and openly say they are not going to maintain compatibility.
There are lots of alternative ssl libraries.
Point still stands: fixing the situation with these libraries seems like less fuss than turning Linux into windows.
How much more of this opinion should I read when it’s established in the third paragraph that the author doesn’t realize that AppImage does not bundle a libc? Flatpaks do, and Snaps are a closed system that bundles _Ubuntu_, so really the answer is Flatpaks. And the rest of the world has also come to that conclusion.
That is pretty much what I'm doing with Steam, Proton and my Game Library. 99% the time it works just great.
Sadly a few sim racing games don't run at all and VR is a bit hit and miss (though the Quest3 with either WiVRN or ALVR seems to work well)
Still I'd rather this than deal with daily driving Windows! I'm amazed at how good Gaming on Linux has gotten over the past few years.
If you mostly do singleplayer, sure.
Anything online with anti-cheat is usually broken.
>usually broken
The working number is usually around 40-50% see areweanticheatyet.com
Thank you for writing these thoughts.
I've also reached a similar conclusion while building ZeeeroOS from scratch.
There are also fat binaries (architecture-independent) that should be considered, but no one does that when building for Linux.
[1] https://github.com/zeeeroos
Every year or two I check the status of ReactOS hoping that some day I will have a good alternative to Windows. After checking the project status today, it seems that day is still far off.
How about packaging Linux apps as Windows apps so they can take advantage of the stability of the Win32 ABI? Is there a way to do this automatically, possibly using AI?
Is there a Wine like library that helps running macOS apps on Linux?
It's called Darling [0], but is not nearly as far along as Wine is.
[0] https://www.darlinghq.org/
yes, whisky
https://github.com/Whisky-App/Whisky
> A modern Wine wrapper for macOS built with SwiftUI
I think you misunderstood GP's request of "running macOS apps on Linux" so you swapped the host and guest OS, and then transposed the guest OS under "emulation"
Dang, you're right, I was a bit quick there and misread. Sorry about that.
Everybody is commenting on possible implementations or how similar solutions already exist. I would like to focus on an overlooked but very important fact: most of the important software in the Linux ecosystem is opensource. Yeah, the ELF binary from 20 or 25 years ago might not run anymore out of the box but you have the source code, access to the whole history of source code of needed libraries. It will for sure not be a 0-effort adventure and it will not work with proprietary and closed source software, but it's doable for most of Linux old abandoned software.
"The Looming Future" chapter of the article. I didn't know all those hortiboe things are planned.
I suggest https://archcraft.io
Adopt GNU/Windows, problem solved.
> MacOS has a feature called Gatekeeper, which limits what software you can run on your Mac to only those applications that Apple approves
This is a lie. Gatekeeper in no way limits the software you can run. It presents an easier experience for launching software downloaded from a browser if the developer chose to submit it to Apple for a malware scan.
Virtualization works far better than wine. Just copy the window frame buffer to the linux host.
As someone stuck on macOS trying to run Docker, I can tell you that the impedance mismatch between what a "file" is, and its "location", and the meaning of "listen on localhost", and how much "memory" an application has, makes virtualization absolutely horrible for trying to run just one program on a different OS (or arch) than the rest of your day to day.
I have had the same idea for a while, honestly. Yeah you can install wine and binfmt_misc, but it doesn't come by default. It should be the default. Nobody should be distributing binary Linux applications in this day and age, especially not for the desktop. Win32 is just so much better designed from the ground up for desktop apps, it's not even funny. As a simple example - a Win32 .exe has an icon to tell the user immediately what it is, but Linux apps need a ton of hacks and extra files (wtf is a .desktop) which can get out of sync at the drop of a hat. Also the ABI is indeed stable. You don't have to worry about the graphics and audio APIs disappearing etc.
Like, just for the audio stack we had: OSS is deprecated, so use ALSA; actually, direct ALSA device access is deprecated, use this special ALSA config with a bunch of plugins; actually, directly calling ALSA is deprecated, use aRts; actually, aRts only works on KDE, use ESD; actually, ESD is deprecated, use PulseAudio; actually, PulseAudio uses too much CPU, rewrite everything to use JACK; actually, JACK is only for audio workstations, go back to PulseAudio; actually, PulseAudio is deprecated, switch to PipeWire... I am pretty sure in 6 months I will be reading how PipeWire is deprecated and the new, definitely final form of the Linux audio stack will be emerging (written in a combination of Rust and Emacs Lisp).
In short, Linux binary compatibility is a clownshow and the OS itself isn't engineered for developing graphical desktop applications. We should stop pretending it is and compile everything user-facing for Win32 ABI, with maybe a tiny extension here and there.
Surely one could design a library in Linux which makes syscalls and dynamically link to it.
> Try doing the same with a Linux binary that's just a year old. There's no guarantee that it will be able to run based off some update that has happened.
What?? I've used Linux for quite a while, and I've had a very good experience with software. I struggle to follow what they're talking about, Linux works just fine. Using Windows software is also pretty easy and like many people have already mentioned wine-binfmt is basically what this article is describing.
I have points to burn, so I'll post, because I know this will scratch some folks the wrong way- apologies in advance.
I use Windows. In fact, I like Windows. I know lots of (ok, more than 5) greybeards who feel exactly the same way. I don't want Linux to be Windows, but I also don't want Linux on my personal desktop either.
I have a Mac Mini M1 on my desk, and I use that for the things it's good for, mainly videoconferencing. It's also my secondary Adobe Creative Suite machine.
On my Win11 desktop, I have WSL2 with Ubuntu 24.04 for the things it is good for- currently that's Python, SageMath, CUDA, and ffmpeg. For my Unix fix, I use Git Bash (MSYS2) for my "common, everyday Unix-isms" on Windows.
I also use PowerShell and Windows Scripting on my box when I need to.
Why? Well, firstly, it's easy and I've got stuff to do. Secondly, cost is not really an issue- I bought my Windows Pro license back with Win7, and it was about $180. That was maybe 15 years ago. They have graciously upgraded me at every step- Win7 -> Win10 -> Win11, all at no cost. Even if I had had to buy it, my Taco Bell tab is higher in any given month than a Windows license (love that inflation).
Why else? Everything works. I get no annoying popups, and I really no longer sweat garbage living on my drive, because that ship has sailed; wanna waste 50GB? Sure, go ahead.
But the most important reason? My hardware is supported. My monitors look great; printers, scanners, mice and USB drives & keys all work. In fact, >90% of the time, everything just works. Further, I can share effortlessly with my Mac, all my Linux servers speak SMB (CIFS), Wireshark works, and my programs are all supported including most open source software. And I do run apps that are 20+ years old from time to time.
Truth be told, I have tried the dance of daily driving Linux, and it's a laundry list of explanations to others why my stuff is different or deficient in some way. The kicker is that my clients don't care about purity or coolness factors.
Linux has its place. But please don't put it on my main machine, and please don't give it to my family members. They're only being nice by living with a sub-par desktop experience. It will always take a herculean effort to stay on par with Windows or macOS, and no one really wants to put their money where their mouth is.
Please don't misunderstand. I admire and respect authors of open source software, and my servers thank them. But being a contrarian and dogfooding what KDE and GNOME put out, fighting with Nvidia and AMD, dealing with constant driver interface changes, and not having proper commercial software support is not my idea of fun. It was 30 years ago. Today? I'd rather hang with my daughter or write some code.
These distros have had 35 years. I don't know what else to say.
I have the same experience. I've tried to use Linux as a desktop since 2000. And tried and retried. Year after year and distro after distro.
Until I realized the desktop experience on Linux will never be on par with Windows, that I need things to just work instead of constantly fiddling to make them work.
I discovered that Gimp is not Photoshop and LibreOffice is not MS Office. And I discovered that running things under Wine is not always great.
I discovered I need and want to run Windows software.
I discovered that I like the hardware to work out of the box.
For me, Windows is great as a desktop. And I develop microservice based apps that run under Linux containers/Kubernetes in cloud.
Docker Desktop, WSL and Hyper-V are taking care of all of my potential Linux needs.
I also have a MacBook Pro, but I don't care much about the OS, I mainly bought it for the good battery life and use it to browse the web and watch movies in bed or on the couch or while traveling.
> But the most important reason? My hardware is supported. My monitors look great; printers, scanners, mice and USB drives & keys all work. In fact, >90% of the time, everything just works.
the only thing I have had issues with is one printer, and one graphics card, with many machines over 20 years, so I would say Linux manages better than 95% "just works".
> sub-par desktop experience.
I strongly disagree. Linux (KDE) is a far superior desktop experience these days, compared to Windows 11. Have you even seen the new Win11 taskbar and the shitty Start Menu - they ruined something which they perfected in Win7. The overall UX has taken a deep dive - like with the unwanted removal of classic Control Panel applets like "Window Color and Appearance" (which doesn't have a replacement), and the continued bolting-on of unwanted crap like Copilot and forced MS Accounts - like, even the CLOCK app requires you to sign-in (why?) [1]. There are even ads in MS PAINT [2]! Tell me if this is acceptable?
> It will always take a herculean effort to stay on par with Windows or MacOS, and no one really wants to put their money where their mouth is.
I also disagree with this, in fact, Linux has surpassed Windows and macOS in many areas.
Take updates for instance: especially on distros with atomic updates, they are far more reliable and a pleasant experience compared to Windows. Atomic transactions means updates either apply or don't - there's no partial/failed state, so no chance of an update failing and potentially borking your PC. Plus, distros which offer atomic updates also offer easy rollbacks - right from the boot menu - in case of any regressions. Updates also do not interrupt you, nor force you to reboot unexpectedly - you reboot whenever YOU want to, without any annoying nag messages.
Most importantly, updates do not hold your PC hostage like Windows does - seeing that "please wait, do not turn off your computer" has got to be the #1 most annoying thing about Windows.
It's amazing that even with 40 years of development + trillions of dollars at their disposal, Microsoft still can't figure out how to do updates properly.
Finally, your PC will continue to receive updates/upgrades for its entire practical lifespan, unlike Windows (regular versions) which turns a perfectly capable PC into e-waste. Win11 blocking Kaby Lake and older CPUs is a perfect example of planned obsolescence and honestly, it's disgusting that people like you find this acceptable.
There are several other areas where Linux shines, like immutable distros, Flatpak apps, sched_ext schedulers, x86_64 microarchitecture optimisations, low resource usage... I could write an entire essay about this, but that will make this lengthy post even lengthier.
> But being a contrarian and dogfooding what KDE and GNOME put out
Please don't put KDE in the same sentence as GNOME. The GNOME foundation have lost the plot and have betrayed their fans, ever since they released the abomination that is GNOME 3. KDE on the other hand, still delivers what users want (ignoring the KDE 4 era). KDE v6 has been a near-flawless experience, and still has a classic, familiar desktop UX that any old time Windows user would love and feel right at home with, unlike Win11.
> fighting with Nvidia and AMD
Please don't put nVidia and AMD in the same sentence. nVidia sucks and that's completely nVidia's fault for not supplying a full opensource driver stack (their new open kernel module is an improvement, but many driver components are still proprietary and rely on their GSP).
AMD on the other hand, has been a super-pleasant experience over the past few years. Ever since Valve got involved with their Steam Deck efforts, AMD drivers, KDE, Wine and several other related areas have seen massive improvements. I seriously doubt you would have any major complaints with AMD GPUs if you've tried them on a recent distro.
> not having proper commercial software support
What sort of commercial software does your family require? Mine don't need any (and nor do I). The family members who are still working have their own work-supplied Windows/macOS laptops, so that takes care of the commercial side of things, and outside of work we don't need any commercial software - and we do everything most normal PC users do - surfing the web, basic document/graphics/video editing, printing/scanning, file/photo backups etc. Everything works just fine under Linux, so I'm not sure what we're missing out on by not using commercial software.
> These distros have had 35 years. I don't know what else to say.
Maybe don't use an ancient distro that's stuck in the past? Try a modern immutable distro like Aurora [3] or Bazzite [4] and see for yourself how much things have changed.
[1] https://old.reddit.com/r/Windows11/comments/ztv70n/since_a_f...
[2] https://bsky.app/profile/d3xt3r.bsky.social/post/3lhhltgbtos...
[3] https://getaurora.dev/en
[4] https://bazzite.gg/
"Maybe don't use an ancient distro that's stuck in the past? Try a modern immutable distro like Aurora [3] or Bazzite [4] and see for yourself how much things have changed."
This has always been the riposte to Linux-for-normies sceptics - "you haven't tried these modern distros, X, Y, Z".
I've gone down that route several times and they always have issues, from drivers to config settings to just being too different compared to Windows or even MacOS.
Non-tech (and especially older) people will generally have expectations that obscure Linux distros (despite their good intentions) cannot meet; they may well suit users who are more confident and curious about sorting things out themselves, but this idea that somehow "this time it's different" is ultimately on the distro-champions to prove; they've been wrong too many times in the past.
> I've gone down that route several times and they always have issues, from drivers to config settings to just being too different compared to Windows or even MacOS.
You really should give KDE-based distros a try, the UI isn't that much different from the traditional Windows UI paradigm. In fact I'd say KDE is more similar to the Windows 7 UI, than Windows 11 is.
Also, drivers aren't really a problem with compatible hardware. As the person recommending/installing Linux, it is your duty to ensure that they've got compatible hardware. In my experience, anything older than a couple of years, from mainstream brands, work fine. The only couple of cases where I've had to manually install a driver was for printers, but even that is now almost a non-issue these days thanks to driverless/IPP printing.
> Non-tech (and especially older) people will generally have expectations that obscure linux distros (despite their good intentions) cannot meet
I'm surprised you mentioned non-tech and older people, because that's exactly who my ideal targets for Linux are, because their needs are simple, predictable and generally unchanging. It's usually the tech-savvy and younger people who've got complex software needs and specific workflows that find it hard to adjust to Linux. This was also the case for me, I had over a decade worth of custom AutoHotkey scripts + mental dependencies on various proprietary software that I had to wean myself off from, before I ultimately switched.
Older, non-techy folks are mostly fine with just a browser and a document editor. This was the case with my mum, and pretty much most of my older relatives. As long as you set up the desktop in a way it looks familiar (aka creating shortcuts on the desktop), they don't cause too much of a fuss. Initially there may be a "how do I do this" or "where did xxxx go?" depending on their needs/workflow. At least in my mum's case, there wasn't much of an issue after showing her the basics.
I'm curious what needs the older folks you know have, which can't be met with an atomic KDE-based distro like Aurora.
I agree. I prefer KDE to any other desktop I have tried, although I also like XFCE.
> Please don't put nVidia and AMD in the same sentence.
and in general I have not had hardware issues. You can pretty much avoid them completely by buying hardware intended for Linux.
XFCE is probably the closest thing around to a traditional Windows desktop environment.
> Have you even seen the new Win11 taskbar and the shitty Start Menu - they ruined something which they perfected in Win7.
Yes, one of the biggest visible downgrades of a core feature in W11! It's awful and buggy, but then Windhawk mods, menu alternatives and app launchers exist, so it can be tweaked to be good again (though they didn't perfect anything in W7 or any other version; there is not a single perfect UI component).
> The overall UX has taken a deep dive - like with the unwanted removal of classic Control Panel applets like "Window Color and Appearance" (which doesn't have a replacement)
Again, bad stuff, though the classic Control Panel was also bad; the only consolation is that at steady state you don't use those often.
> CLOCK app requires you to sign-in (why?) [1]. There are even ads in MS PAINT [2]! Tell me if this is acceptable?
It isn't, but then why would you ever use these bad stock apps even if they had no ads??? Much better options exist!
But all of those mostly fixable annoyances pale in comparison with the inability to have a great file manager like Directory Opus, or to find any file anywhere instantly with Everything, or to have a bunch of other apps (and then you'd have plenty of other issues tweaking the OS UI, or the sleep and hardware compatibility issues people keep complaining about).
Windows is still enshittified and everyone needs an exit plan.
For family users I recommend macOS. For Windows apps I have a virtualized Win11 IoT with a passed-through GPU. My monitor has multiple inputs and I can't even tell it's not native.
TLDR: I’ve got stuff to do.
I feel exactly the same way. I recently bought my father-in-law an M4 iMac which he thought was a disproportionately nice gift for no apparent reason.
Oh there are some very compelling reasons for it. My tech support load went way down and he’s super happy. Win-win.
Both Corel and Lindows tried this 25 years ago. People did not like it.
I did like them both. They felt much more polished than other Linux distros at that time.
Wine was in a much worse shape back then.
Or just statically compile all binaries like a (useful) madman
Wine/Proton is basically Linux Subsystem for Windows.
WSL1 was a syscall-translation layer; WSL2 is Hyper-V virtualization.
How often does glibc introduce a breaking change?
A reverse OS X/Classic transition, if you will…
> I can pull down a 20 year old exe and still run it today on Windows. Try doing the same with a Linux binary that's just a year old.
How can this be considered a good thing?
Backwards compatibility is generally a good thing. It certainly has its downsides (like security) which can be more or less of a concern depending on backwards compatibility techniques.
20-year backwards compatibility in IT is impossible.
He is missing the point. Flatpak/Snap are not just an alternative way to ship binaries. They are a way to isolate applications and limit what they can do. The landscape has moved from protecting the system, or one user from another, to protecting one user's applications and their data from each other, especially on the desktop. That is not even on the map for Windows, its security model and its applications. It is a big jump backwards.
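For example (a rough sketch; the app ID is hypothetical), Flatpak lets the user tighten an app's permissions beyond whatever its manifest asked for:

    # deny one app access to the home directory and to the network
    flatpak override --user --nofilesystem=home org.example.SomeApp
    flatpak override --user --unshare=network org.example.SomeApp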
Windows does a lot of sandboxing in this space, though; what do you mean?
Every application should be its own 'user' (sub-user), while the login user / manager should be the group leader of all those 'sub-users' / 'agents'.
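A rough sketch of that model with stock tools (the user name, app path and display setup are hypothetical, and a real setup needs more care around X/Wayland access and resource limits):

    # create a dedicated account for one application
    sudo useradd --create-home app-foo
    # allow that account to render on the current X server (Wayland compositors need their own mechanism)
    xhost +SI:localuser:app-foo
    # run the application as that account, in its own home and Wine prefix
    sudo -u app-foo env HOME=/home/app-foo DISPLAY=:0 wine /home/app-foo/app.exe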
A change in security model from the 1970s/1980s might help with security and isolation. However that same security would also generally be a pain without really smooth management in the desktop environment / shell.
The Windows MSIX also does sandboxing.
People always talk about this “I can run a 20 year .exe file” situation but when I tell you that I have never, in 30+ years, EVER had a need to run a 20+ year executable, it just makes me go… yeah, and?
Sure I believe backwards compatibility is a nice to have feature, but I have never, nor do I think I will ever, have a need to run 20-year-old software.
My experience is that a 20-year-old exe file has a greater chance of running in Wine than it would on Windows, while a 20-year-old Linux executable is going to fail because the shared libraries it depends on are unobtainable.
In my experience:
20-year-old exe files can fail on both Windows and WINE if they touch something relatively obscure. It's easier to throw files at the problem under WINE though (you can just throw away the prefix if you break something). The single biggest mistake WINE makes is defaulting to a single shared prefix (and the second sin is similar - trying to integrate folders and menus naively).
20-year-old dynamic binaries on Linux can almost always work today; snapshot.debian.org has all the old libraries you'll ever need (a rough sketch follows this comment). The potential exception is if they touch something hardware-ish or that needs exclusive access, but this is still better than the situation on Windows.
20-year-old static binaries on Linux will fail surprisingly often, since there have been changes in filesystem and file layout.
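As a rough illustration of the snapshot.debian.org route mentioned above (the package name, version and timestamp are made up; the real ones come from the binary's ldd output and from browsing the snapshot archive):

    # see which libraries the old binary wants
    ldd ./old-binary
    # fetch a contemporaneous .deb from the snapshot archive and unpack it locally
    wget https://snapshot.debian.org/archive/debian/20050101T000000Z/pool/main/libf/libfoo/libfoo1_1.0-1_i386.deb
    dpkg-deb -x libfoo1_1.0-1_i386.deb ./oldlibs
    # point the loader at the unpacked libraries instead of installing them system-wide
    LD_LIBRARY_PATH=./oldlibs/usr/lib ./old-binary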
I've got a terminal open on my desktop, running a copy of ZORK from 1983, which is 42 years old.
Yes, there are modern ports and newer versions, but these kinds of retro games and utilities are used by many.
I use Nikon Scan 4.0.0. It was released in either 2008 or 2004.
You can have an Electron app with the same features, but an older program can likely do it without being an Electron or web app.
Games
The shitty thing is, this just encourages closed source software on the open platform, giving vendors another reason not to port natively.
Good luck with the hellscape you're building.
I do not care about access to source code. I use apps; I do not want to look at their source code or compile them.
So open or closed, I do not care, as long as it does what I need, how I need it.
And I feel I am not the only one thinking like this.
You sure are not alone, your mindset is that of a user, not a developer.
It's like expecting people who only eat food to understand how it's made. You're just not the target.
I am both. But I do not feel the need to look at the source code of all software I use or tinker with it.
I didn't see "all" in your original post.
https://xkcd.com/927/
Three days early?
> The Linux Environment is Unstable
How many missiles do you know that run Windows?
How many do you know running Linux? Most of them run either a specialized bare-metal OS or VxWorks.
https://search.brave.com/search?q=missiles+running+Linux
There are actually no direct results proving that an actual missile runs Linux. Simulations, yes; Linux running the missile itself, no.
> Try doing the same with a Linux binary that's just a year old.
I do it all the time. Works fine. Do you have a specific example or is this just a hastily constructed strawman?
> NOTE: I am not against Apple or Microsoft. They have amazing engineers! I don't think they're being malicious. Rather, I think the incentives for these companies are improperly aligned.
Improperly aligned incentives? Who gets to say what that is?
Is it "improper" to maximize profit per their own board's guidelines?
I have a feeling OP has some predefined notion of nobility they expect people to somehow operate under.
> Is it "improper" to maximize profit per their own board's guidelines?
From the standpoint of the end user the incentives are improperly aligned. If they had made hammers, they would have included licence agreements tying their use to specific types of nails and actively prevented users from using competitors' nails. They would also have made sure yesterday's hammer was not sufficient to hammer in today's nail, added cameras to observe what the user was doing so as to sell 'targeted' advertising (during which the hammer would not strike any nails but would sing like the singing sword in Who Framed Roger Rabbit), and made sure that no matter how agile the user was with his hammer, the thing would never be 100% reliable.
Of course hammers are far less complex than computers and operating systems. Maybe this is because they're made by tool manufacturers and not by tech companies, maybe it is because they're old tech. A modern hammer is what Ford would have produced if he had listened to the customers who asked him for a faster horse, so maybe there is a whole world of construction efficiency waiting for the Henry Ford of hammersmiths. Or maybe - probably - sometimes it is better to get that faster horse, that titanium hammer, or that free software operating system which works for you and nobody else.
Worth reading: Unauthorized Bread. https://arstechnica.com/gaming/2020/01/unauthorized-bread-a-...
Yes it's a value judgement by the author. Maximizing profit by any means necessary is pathological. Many here would agree with that.
To the extent binary distribution is "unstable" on Linux, it's because users aren't expected to just download random binaries from wherever, as is normal on Windows (and Mac, for that matter). Users are expected to either obtain binaries from their distro, or compile them from source. In either case, all of the issues about binary distribution being "unstable" are invisible to users. Which is the point. People who want the broken Windows software model can just... run Windows. The last thing any sane Linux user wants is to make Linux into Windows. I run Linux to avoid Windows.
There are a lot of reasons to avoid Windows outside of package/software management.
Linux has AppImage; it's already capable of running "loose" native executables like Windows does. Flatpak, Snap and Docker all break the "distro repository or compile from source" model. The primary method of playing video games is installing Steam and running Windows software inside a container. This purist vision you have of Linux doesn't exist.
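For instance, an AppImage is just a single file you mark executable and run; no repository or install step involved (filename hypothetical):

    chmod +x Some_App-x86_64.AppImage
    ./Some_App-x86_64.AppImage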
> This purist vision you have of Linux doesn't exist.
It does on my computer, and I suspect on a lot of Linux users' computers.
Linux users? That is, people who use a device that runs Linux? Like Android?
Or you mean desktop Linux users, though there aren't "a lot" of those. There's the business/corpo/science deployments but I don't think we're talking about that, but rather specifically home use. So we're talking mostly enthusiasts. I'd imagine many of those and perhaps even most at least lightly game and Steam is effectively the default place to purchase games on Linux. Do you run anything in an emulator? Impure! Purge with fire!
The software repository+compile from source paradigm isn't "Linux", it's not even "desktop Linux". Linux can execute software in a myriad of different ways, what makes Linux Linux is that it's infinitely flexible.
> you mean desktop Linux users
Obviously, since that's what the article and this discussion is about.
> there aren't "a lot" of those
Depends on what you consider "a lot", I guess. The article that this discussion is talking about apparently thinks there are enough to make its proposal for "converting Linux to Windows" worth an effort.
> Obviously, since that's what the article and this discussion is about.
I was just making the point that Linux isn't any one thing; it's everything. You want an OS that handles things the way you want? Well, you have one, and others should be given the same privilege. It's silly to stamp your feet about certain implementations or features existing within the Linux ecosystem; the whole point of FOSS is that they can all exist.
> The article that this discussion is talking about apparently thinks there are enough to make its proposal for "converting Linux to Windows" worth an effort.
From the article: "Imagine we made a new Linux distro. This distro would provide a desktop environment that looks close enough to Windows that a Windows user could use it without training."
It isn't proposed as a distro for people who already use Linux, but for people who use Windows and may want to move to Linux. I was one of those people; I switched my gaming PC from Windows to EndeavourOS last year, though I've been using various distros for the past 20 years on other devices. I switched because Windows is becoming a shit show. I know my way around Linux; I use Blender, Krita, GIMP, Inkscape and Reaper, all native apps, but sometimes I just want to install a Windows application because the functionality I require simply makes it necessary. Dual booting is a massive hassle, VMs fuck up productivity workflows, and while I can sometimes get things working with Wine, it's a hassle. I might not use the proposed OS, but the components that would allow for seamless installation of Windows software? I'd love for those to exist.
Speak for yourself. I have been using Linux for a decade and would like nothing more than for standalone application setups, like those on Windows, to become the norm of software distribution.
Centralized package management is a curse. Apps should be responsible for their own updates, not the OS.
OTOH I view application installation as a separate skill from $THING_THIS_APP_DOES.
So I would rather the app authors just focus on perfecting their apps, while said apps can then be packaged and distributed in bulk by different sets of people trained to handle those challenges.
What I very certainly do NOT want is:
* Apps automatically checking for updates on startup — since they can't check while they are off — leading to needlessly leaked data crossing the network about exactly when I'm starting up exactly which apps (since they dial home to predictable locations regardless of TLS usage)
* Apps constantly filling systray with their own bespoke updaters (and "accelerators" which just means the app is running 24/7 but minimized to tray ;P )
* App launches updater, updater window says "can't update because app is running". Close app, wait for update, now I have to go hunt down the document I had originally opened the app with. Next time an app launches an updater, I leave it on its splash screen and go to close the app... naturally that also closes the updater, since this time around the one is a sub-process of the other. (I recall earlier versions of Wireshark causing me much grief on these fronts, for example.)
* A more diverse attack surface for hackers to infect my PC: instead of trying to juke a distro that has at least some experience and a vested interest in defending against poisoning, just juke any single software author less specialized in distribution security and take over their distribution channel instead.
Great news: there's a distro called Slackware that eschews centralized package management (besides optionally delivering updates for preinstalled packages). It had been around for ~20 years before you started using Linux. If you'd like to rid yourself of the curse of centralized package management in favor of running "./configure && make && sudo make install" like a real man, you should give it a try.
Standalone app installers ≠ compiling from source
> Apps should be responsible for their own updates, not the OS.
Distros are not quite "the OS". You don't need a distro to run Linux.
The role distros play as far as Linux applications are concerned is more like an app store in the Windows (or Mac) world. Of course Apple has locked down their smartphones that way basically since their inception, and their desktop OS has been becoming more and more like that. So has Windows.
> Of course Apple has locked down their smartphones that way... So has Windows.
Is that where we want desktop Linux to go?
If you mean, do we want desktop Linux to have distros, that ship sailed several decades ago. Yes, you don't need a distro to run Linux (as I said before), but most people who run Linux use one.
However, Linux distros, while they play an app store-like role, are still very different from the Windows or Mac app stores. First, they don't restrict what else you can install on the system; you don't have to jailbreak your Linux computer to install something that the distro doesn't package. Second, they don't insist that you set up an account and hand over your personal information, or nag you constantly if you don't do that.
Every OS is heading towards centralised application updates. The Windows Store does that AFAIK, I am guessing macOS's store does too, and on the major mobile OSes it's the only way almost all users install anything.
The big difference on those other OSes is that packaging is done by the original author, and they don't have to worry about things like distro release cycles (and package freezes etc).
Windows Store is most similar to Flathub in that regard.
Thank you very much! I was about to post almost the same thing, so I'll reply to your post instead (and upvote):
It's pretty funny to read a criticism of Linux software distribution along the lines of the difficulty of distributing binaries.
This is one of the biggest security vulnerabilities of windows. 3rd parties distributing binary executables.
At least in a typical linux distro the binary is built by the distributing org, with some review of where the source comes from.
Downloading a Windows app from the internet, one has no idea what source is included in that binary.
I'm also not a fan of non-distro-based systems such as Flatpak. Again, I would prefer my binaries built by the distribution or, if need be, locally.
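For what it's worth, on a Debian-based distro, rebuilding the distribution's own package locally takes only a few commands (package name hypothetical; a deb-src line must be enabled in the APT sources):

    apt-get source somepackage          # downloads the distro's patched source
    sudo apt-get build-dep somepackage  # installs its build dependencies
    cd somepackage-*/
    dpkg-buildpackage -us -uc -b        # builds unsigned binary packages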
> At least in a typical linux distro the binary is built by the distributing org, with some review of where the source comes from.
This "feature" falls apart for nonfree software, which most commercial apps are. You can use Spotify and Steam's PPA but will similarly have no idea what source was included in them.
Explain to me why the HELL I should be limited to only running binaries that my distro vendor has deigned to provide, or jump through endless hoops to obtain and build source code (which, by the way, might not even be obtainable OR buildable). If I have a binary from 5, 10, 15 years ago, I should just be able to fucking run it on my fucking computer.
Then run your "fuzzy kitty plays on your desktop" binary executable, downloaded from we-swear-we-wont-hack-you.com, and live with the consequences.
In fact this is the way 3rd party windoze executables are typically distributed.
Nobody's stopping you...
p.s. The above seems like a trolling reply, but I always play the straight man...
> At least in a typical linux distro the binary is built by the distributing org
... which often patches upstream code in ways that upstream neither approves of nor wants to support. And then, when things break, the user can't go upstream, and the distro package maintainers simply don't have enough time to deal with all the user reports.
Exactly this. In fact, what's a distro if not basically a well-maintained app directory?
> People who want the broken Windows software model can just... run Windows
That's what billions of people do. :)
There's also Flatpak and friends.
> Thesis: We should create a distro of Linux that runs Windows binaries by default via Wine.
"We should"? Do you mean me? I have a ton of my own projects I'm busy with.
Why didn't you say "I should create..."? There's nothing stopping you implementing this if you think it's a good idea. Do the work yourself.
That is the weirdest thing to take exception to.
What's up with all this "My 20 year old software still works!!!". Who actually runs unmaintained abandonware? I would rather prefer OS devs not wasting time maintaining legacy cruft and evolve with the times.
Sid Meier's Railroad Tycoon 2 is still the best railroad game and runs great on modern Windows versions, it's now nearly 30 years old:
https://store.steampowered.com/app/7620/Railroad_Tycoon_II_P...
I return to that game for a few quick sessions every couple of months.
> Who actually runs unmaintained abandonware?
Personal computers are for persons. They don't view their use of their own systems through the lens of an imputed purity test.
> I would rather prefer OS devs not wasting time maintaining legacy cruft and evolve with the times.
Then don't support those who do with your dollars. I wouldn't be terribly surprised if the market shows it disagrees with you.
I play Silent Hunter III which was released in 2005. Also Half-Life sometimes. Everything older is going to be console games.
Businesses run the old stuff.
Is this sarcasm? Some of my favorite games are 20 years old. Windows is popular in a lot of manufacturing spaces because the equipment software doesn't get updated and only connects to old programs over 16 bit serial ports.
There's a whole world out there of legacy software that is happily churning along, and doesn't need to be updated.
Okay, but then you could also keep it running on an old OS. Fork out the money for a RHEL license, and just never upgrade.
But I don't want to run a separate machine with 15, 20, 25 year old hardware to keep that OS working.
I still play Total Annihilation, from 1997.
Great game.
Lots of embedded stuff will have control software that's very old.
People object to this approach, but Apple does this, and they seem to be doing fine.