Congrats on open sourcing and launching! Beyond the desire to run VMs in "1 command", I don't quite get the reasoning behind this project. Could you elucidate? Like, besides running macOS VMs, how is it different from lima, colima, and friends? The name lume is quite unfortunate.
The hard part about running VMs isn't really how to launch them (well, ahem, I'm looking at you, QEMU), but getting data in and out, and controlling them. Some feature requests, if I may ;)
# take screenshot
# this should do the right thing(TM) and take a screenshot of the logged-in user session, which may not necessarily be the console
lume screenshot <vm name> [-o <file.png> | -]
# execute command
lume exec <vm name> [--as-user <user>] <command> [args]
# copy files in and out
lume cp <vm name>:<vm path> <local path>
lume cp <local path> <vm name>:<vm path>
# run clone as new VM
# this should appropriately roll the MAC address, IPs, and reseed any RNGs, of course
lume run --clone <clone name> <vm name>
Can you clone a VM while it's running?
The ability to resume a VM in under a second would be useful for on-demand workflows, without waiting for a full VM boot sequence - similar to how you can get a Firecracker microVM into the state you want, snapshot it, then clone it as you wish and resume back into the guest.
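For reference, the Firecracker flow I mean looks roughly like this against its HTTP API on the control socket (a sketch from memory; socket and file paths are placeholders and exact field names can vary between Firecracker versions):
# pause the running microVM
curl --unix-socket /tmp/firecracker.sock -X PATCH http://localhost/vm \
  -H 'Content-Type: application/json' -d '{"state": "Paused"}'
# take a full snapshot (guest memory + device state); paths are placeholders
curl --unix-socket /tmp/firecracker.sock -X PUT http://localhost/snapshot/create \
  -H 'Content-Type: application/json' \
  -d '{"snapshot_type": "Full", "snapshot_path": "./snap.file", "mem_file_path": "./mem.file"}'
# resume this VM, or load the snapshot into a fresh firecracker process and resume there
curl --unix-socket /tmp/firecracker.sock -X PATCH http://localhost/vm \
  -H 'Content-Type: application/json' -d '{"state": "Resumed"}'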
You may need to preinstall an agent (a la Parallels/VMware Tools) to make sure this is seamless and fast.
Thanks for the feedback! I actually went into more detail about the project's reasoning and how it compares to lima/tart in another comment.
Interestingly, we left out the screenshot and exec features from this initial release so the CLI wouldn’t get too cluttered. We’re rolling out another update next week that will let you request screenshots programmatically, stream the VNC session in your browser via noVNC, run commands remotely over SSH, add clipboard support (finally!), and more.
As for cloning, you can indeed clone a running VM. However, suspending a VM isn’t supported in this release yet, even though it’s possible with Apple's Virtualization.framework. I’ll open an issue to track that work. Thanks again for the suggestions!
Honestly, the LXD commands and LXD CLI would probably have worked with any backend.
Would this allow running a near-native-speed VM with a decent-speed Docker server inside, and thus allow Orb-like speed for Docker containers?
Would you mind educating me about use cases for having one or even multiple macOS VMs on an Apple silicon machine, please?
Test environments for macOS configuration, compliance workflows, application installer build processes, matrix testing across macOS versions, etc. I work for an MDM provider and constantly have 2-3 machines with 2 VMs each running.
I have an M4 Mac Mini running the following in a VM:
- OpenWRT (previously OPNSense & once Mikrotik RouterOS) using 2x 2.5Gbps Ethernet NICs via USB-C
- OpenMediaVault (Exposing a 4-bay DAS via USB-C, 2x3TB Drives in Btrfs RAID-1)
- HassOS (Home Assistant OS)
On the host, I'm running Ollama and a reverse proxy through Docker.
The whole thing uses 7 watts of power at any given time - I've seen peaks of 12W when running LLM queries. The drive bay actually uses more power than the Mac does.
Through power saving alone, it will pay for itself in 5 years over my previous AMD Zen 2 build.
My question was rather about macOS guest(s) on a macOS host. Unlike specialised Linux distros (Home Assistant, OpenWRT...), macOS doesn't strike me as particularly minimalistic, so I wonder about the overhead and plain storage requirements of just running them idle...
I understand that for specific macOS or iOS development one would want template environments that are easy to spin up and destroy repeatedly.
Out of curiosity, why not containers for OMV and Hass? QoS? And I’m dying to know what you are using OpenWRT for. I’m looking at setting up a Mini as well, and have been using Colima/Lima to run containers on Rosetta/Mac vz locally and it seems to work well enough.
Hass has massive limitations in its container mode (add-ons, OTA updates, ...).
I’d use a Mac Mini in a heartbeat if they had an ECC option :(
What is your use case where not having ECC is a dealbreaker? Assuming it's something related to complex calculations that cannot fail and take a long time to process?
If you care about your data, you want ECC. If a bitflip happens in memory, ZFS, for example, will not catch the error and will happily write out the corrupted file.
What is the probability of this happening at home (not in a datacenter) under normal conditions of temperature, radiation, etc.?
Trying to run complex CI/CD for automated build/test toolchains for building iOS apps.
We have a Mac Studio dedicated to that purpose. Periodically something on it stops working (software updates are a common reason). Trying to run CI/CD on a bare metal consumer oriented OS is an exercise in frustration.
It’s also handy to be able to sandbox different environments from each other. Once you have multiple projects that need different versions of Xcode, or even macOS (a good example is wanting to spin things on a beta), you need VMs or multiple machines. (And yes I’m aware of tools like Xcodes, but testing on a beta of macOS requires a reboot and a lengthy install.)
This. I've been managing a CI/CD system of around 50 macOS build machines for a few years now, previously our own hardware in a data center, currently EC2 Mac instances.
All the things Apple puts in place to make macOS more secure and consumer-friendly make it really hard to manage as a server, especially if you don't want to use MDM. For example, with the current version of macOS, the macOS AMI that Amazon provides requires manually logging in over screen sharing to enable local networking, so I haven't updated to Sequoia yet. As it is, my AMI build process is fully automated but still takes almost 2 hours and involves first mounting the Amazon AMI to a Linux instance to modify parts of the image that are read-only when the image is booted.
Our current CI/CD process is to create a unique build user per build, then tear it down afterwards. EC2 has something called root volume replacement to allow you to reset a machine to its AMI, but that still takes too long (~ 10 minutes) to do between every build.
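For the curious, root volume replacement is just an EC2 API call; a rough sketch with the AWS CLI, from memory and with a placeholder instance ID:
# reset the instance's root volume to its launch state, i.e. back to the AMI (placeholder instance ID)
aws ec2 create-replace-root-volume-task --instance-id i-0123456789abcdef0
# poll the task until it reports completion
aws ec2 describe-replace-root-volume-tasks --filters Name=instance-id,Values=i-0123456789abcdef0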
(At least with EC2 Macs I no longer need to open a ticket with DC ops when there's a hardware issue.)
Using macOS VMs that can be quickly reset makes this all a lot easier, more flexible, secure, and cleaner. The only currently viable options I'm aware of are tart and anka. I'm glad to see some open source competition in this space.
Check out my project, WarpBuild [1], if you want managed Mac VMs for CI/CD purposes. It plugs in nicely to the GitHub actions ecosystem and provides machines on demand.
[1] https://warpbuild.com
I'm probably an oddball compared to most, but I use a VM as my main work environment. My main reason is because it's super easy to create a backup and test anything like OS or major software updates there first to make sure it doesn't hose anything. Or if I just want to tinker and try new things without risking breaking anything. Also, where I work they get us new hardware every 3 years, and it means I don't have to spend a long time trying to set my environment up on a new computer. I just copy over the VM and jump right in.
When IT policy prevents root access to the host but VMs are fine
Shh. I need my VMs and they also help me run software the corp malware doesn’t play nice with.
The same use case as any other VM? Isolated systems that are portable, easy to bootstrap and destroy, etc, the list goes on.
> Isolated systems that are portable, easy to bootstrap and destroy, etc
So, like Docker but without the part where it's fast or convenient!
I recently used virtualbuddy with a macOS VM to test a developer environment setup with an Ansible playbook.
Worked well for my simple use case.
I wonder if this is a good alternative for running potentially malicious tools, like random fine-tunes of LLMs or coding agent outputs, with little to no risk.
The most common ones are for QA testing and CI pipelines.
The first thing that comes to my mind is runners for automated workflows that need to happen on MacOS.
How does this compare to Lima[1] and Tart[2], which are similar?
Also, would it be possible to run BSDs with this?
[1] https://lima-vm.io
[2] https://tart.run
Yes, lume relies on Apple's Virtualization framework and can run BSD on a Mac with Apple silicon: https://wiki.freebsd.org/AppleSilicon
I'll definitely document the option in the README, thanks!
On Lima:
- Lima focuses on Linux VMs and doesn't support managing macOS VMs.
- It is more of a container-oriented way of spinning up Linux VMs. We're still debating whether to go in that direction with lume, but the good news is that it would mostly require tweaking the hooks to expose lume’s core to adopt the containerd standard. I’d love to hear your thoughts - would you find it useful to have a Docker-like interface for starting macOS workloads? Similarly to: https://github.com/qemus/qemu-docker
- Lima still has many dependencies on QEMU, which doesn't play well with Apple silicon, whereas we opted to support only M-series chips (80-90% of the Mac market today) by relying on the latest Apple Virtualization.framework bits.
On Tart:
- We share some similarities with tart's command-line interface. We extend it and make it more accessible to different frameworks and languages with our local server option (lume serve). We also have a Python interface today: https://github.com/trycua/pylume
- Going forward, we'd like to focus more on developer tooling, automation, and extending the available images in our ghcr registry. Stay tuned for more updates next week!
- Lastly, lume is licensed under MIT, so you’re free to use it for commercial purposes. Tart, on the other hand, currently uses a Fair Source license.
Hey, thanks for responding.
I think it is great to have more options in this space, as all of these options are still quite immature. In terms of approaches to take, I find one of the challenges in using lima is that the way lima works makes the VM different enough from production that I can't be confident testing covers everything.
In terms of feature set, I think some of these have been mentioned below, but these would be great:
- network bridging
- snapshotting
- usb / bluetooth passthrough (this is probably dependent on Apple's framework)
Colima is a container focused macOS VM runner.
I would be interested in running older macOS versions in a VM, but those would be x64-based and an Apple Silicon host is impractical for that.
Can you use this to launch an Intel VM on Apple silicon and vice versa? I’m interested in doing this so I can compile C++ applications for different architectures on macOS. Do you know of any other “easy” methods?
I don't believe the built in virtualization framework supports emulation, but you can do this with QEMU. An easy way to get started is with UTM:
https://mac.getutm.app
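If you'd rather drive QEMU directly instead of going through UTM's UI, a rough sketch of booting an x86_64 guest under pure emulation on an Apple silicon host (ISO and disk paths are placeholders; expect it to be slow since there's no hardware acceleration across architectures):
brew install qemu
# create a disk image (placeholder name), then boot from an installer ISO under TCG emulation
qemu-img create -f qcow2 disk.qcow2 40G
qemu-system-x86_64 -machine q35 -accel tcg -m 4096 -smp 4 \
  -drive file=disk.qcow2,if=virtio,format=qcow2 \
  -cdrom installer.iso -boot d \
  -nic user,model=virtio-net-pci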
You can do this without virtualization/emulation, pass ‘-arch x86_64’ or ‘-arch arm64’ to clang. Or both, for a universal binary. And on Apple Silicon, you can test them both thanks to Rosetta.
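Concretely, something like this (myapp.cpp and the output names are just placeholders):
# build each slice separately, or both at once as a universal binary
clang++ -arch x86_64 -o myapp_x86_64 myapp.cpp
clang++ -arch arm64 -o myapp_arm64 myapp.cpp
clang++ -arch x86_64 -arch arm64 -o myapp_universal myapp.cpp
# or stitch separately built slices together after the fact
lipo -create -output myapp_universal myapp_x86_64 myapp_arm64
# on Apple silicon, force the x86_64 slice to run under Rosetta 2 for testing
arch -x86_64 ./myapp_universal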
It should be possible. I did this in the early '90s: I had a Windows VM running on a PowerPC Mac, writing x86 assembly for a college class.
I sometimes use Docker for this, assuming you are talking about running Linux on x86-64.
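Roughly like this, assuming Docker Desktop or Colima with Rosetta/QEMU emulation enabled (image tags and filenames are just examples):
# confirm the emulated architecture; prints x86_64
docker run --rm --platform linux/amd64 debian:bookworm uname -m
# cross-compile a C++ file (placeholder name) inside an amd64 toolchain image, mounting the current directory
docker run --rm --platform linux/amd64 -v "$PWD":/src -w /src gcc:13 \
  g++ -O2 -o myapp_amd64 myapp.cpp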
It's a good project, but it just has too few built-in images.
I read GPU and USB passthrough somewhere and did not believe it
Why not? Shared USB and a paravirtualized GPU have been on macOS VMs for years now.
I’m pretty sure there’s no way to pass USB devices through to a guest VM when using Virtualization.framework (which is required for running macOS VMs)
Looks interesting! I’ve been playing with UTM to do aarch64 VMs and I even cooked up a little Sinatra server to do some AppleScript to reboot the VM and do some other things on the host. I’ll look at this as a more robust solution as to be completely honest, UTM has left a lot to be desired at least for virtualization.
So will this ever be able to run a lightweight windows vm?
Yes, it will - not sure about the lightweight part though :)
We're currently working out the details. It was left out of the initial release, but it’s feasible starting from a Windows on ARM image. I'll create an issue to track this work. Thanks for the feedback!
> What would make you replace UTM/Multipass/Docker Desktop with this?
Just checked my list of VMs in UTM; 5 Linux (Ubuntu, Debian, and Fedora), 9 Mac OS X/OS X/macOS (versions stretching back from Tiger to Sequoia), and 10 Windows (XP through 11). Lume would need to support not only Windows but also emulation in order to consider a move.
Great! How does this compare to OrbStack?
OrbStack is great, but it requires a paid license.
Would be nice if you could bind ports!
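In the meantime, the usual workaround (assuming the guest is reachable over SSH at its VM IP) is plain SSH port forwarding:
# forward host port 8080 to port 80 inside the guest (user and IP are placeholders)
ssh -N -L 8080:localhost:80 admin@192.168.64.5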
This looks interesting but unless there's a vagrant plugin to use it, I'm pretty unlikely to spend any serious time using it.
not really stoked about the name since there are a couple other projects named lume:
- https://lume.land/
The project looks cool though
The title of this post took me about 3 attempts to parse, because "OS" is more strongly bound to Operating System in this context (but presumably it's supposed to read as Open Source)
Yes, FOSS is generally used in this context. Though I have occasionally seen OSS used too.
- FOSS: Free / open source software
- OSS: Open source software.
Yes, super sorry about this. I meant to write OSS but forgot an S, and now it's too late to edit the post. But yes, this could be called FOSS since it's a free tool with an MIT license.
No harm done. Anyone, like myself, who read past the title would have understood the submission just fine.
Yes, I think OP wanted to say OSS (last S = Software).
Edit: Did a quick google search and it seems like the acronym OSS has fallen out of prominence. Probably best to explicitly state “Open Source” nowadays to avoid any confusion.
I wish it was possible to run a Debian VM on iOS.
For laptops, there are many nice options. But for tablets, the latest iPads are currently unmatched at under 600 grams for a 13" tablet. So I would love to use one of those.
You can, with ish (https://ish.app). It is a bit slow though and doesn't support the newest releases. (Well and by default it runs Alpine instead of Debian)
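Alpine in ish is still workable for light use; packages just come from apk instead of apt, e.g.:
# inside ish (Alpine userland): pull in a basic toolchain and an editor
apk update
apk add build-base python3 vim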
But you can with UTM, can't you?
I’d check out multipass from canonical. IIRC you can specify the image it uses.
UTM? https://getutm.app/
I was very curious but it looks like UTM is unable to JIT properly on newer iOS versions. They might want a desktop Debian with better performance.
Is macOS up to scratch to be used as a server these days? Last I checked it would always run into trouble / randomly reboot / become unavailable whenever a new OS update became available. Admittedly this is about 2 years ago.
If so, this would be great. Particularly to repurpose older macs.
Personally I wouldn’t recommend macOS as a server OS unless you’re doing something that is specifically macOS dependent (like automated e2e testing of macOS / iOS software).
Everything server related that is easy to do on Linux or BSD is at least an order of magnitude more painful on macOS. And like Windows, you cannot run it headless. In fact Windows actually has better support for running headless than macOS, despite macOS’s Unix heritage.
Also if you need to run macOS on anything other than spare hardware, then expect to pay a premium. This is particularly true of hosting macOS instances in the cloud.
Every time I’ve needed to run macOS as a server, which is a depressingly large number of times over the years (I’ve had to do that in every job bar one), it’s been a painful process and I’ve had to accept compromises to what would normally be best practices.
Stuff like Tart does make things a little easier. But given how locked down macOS and its hardware are, it really should be something Apple gave a lot more love to themselves. Instead they went the other way and discontinued macOS Server, albeit that was a number of years ago now. And things haven’t gotten any better since.
As a battle-hardened Unix greybeard, I’d still prefer to administer enterprise Windows Server over “enterprise macOS” servers.
That all said, for any hobby projects, anything is fair game. So if you have spare hardware then go for it. Just don’t expect it to be as “fun” as running Debian on a Raspberry Pi or similar other traditional hobby setups.
> And like Windows, you cannot run it headless.
As an FYI for anyone who needs/wants to do this, the workaround is a headless display adapter: https://www.amazon.com/dp/B0D9W2HHM1
Sorry, I should have been more specific when I said "headless" because there are a couple of different interpretations:
- Headless software (can run without a GUI): https://en.wikipedia.org/wiki/Headless_software
- Headless computer (can run without a monitor or input devices): https://en.wikipedia.org/wiki/Headless_computer
macOS has VNC built in so it can run as a "headless computer" but you cannot run it without the GUI. So you end up paying a massive memory and disk storage tax for running any CLI tools.
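For what it's worth, the built-in Screen Sharing/VNC service can at least be toggled from an SSH session via Apple's ARD kickstart tool (a sketch from memory; flags may vary across macOS versions):
# enable Apple's built-in remote management / screen sharing without touching the GUI
sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
  -activate -configure -access -on -privs -all -restart -agent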
Years ago I had to administer Jenkins on a macOS build machine. It broke often. I would have to VNC in to accept license agreements and handle other trivial tasks. I am nowhere near your level, but even what little I wanted was an unnecessary PITA to achieve and maintain.
I know the Mac hardware is pretty trick, but man, I do most of my dev on Linux and have a Framework Laptop (AMD) running Ubuntu; it's just really nice for dev stuff to work the same way it does on our production environment...
Also, the frequency, size, and install time of macOS updates: these computers are blazingly fast, from the CPUs to the SSDs, so after an update has downloaded, what could it possibly be doing that takes 30+ minutes to install? I've never had to wait that long for an apt-get upgrade.
Always wondered the same, re: updates. To be fair, I believe macOS downloads an entire OS image each time: the root volume is immutable, like Fedora Silverblue, so updates ship the whole image.
On a package-based system you're doing a diff update, essentially.
Then again, even Fedora Silverblue image updates don't take nearly as long as macOS.
At least both are better than Windows, god knows what's going on in Microsoft's update process.
Yeah, but the SSDs today can write something like 2GB/s, so even if you're re-writing the entire OS, you should be able to do that in what? 1 minute max?
Assuming they're using some fast parallel compression algo as well when unpacking it...
No, they’re using slow parallel compression.
Name clash with the not-very-well-known Lua library; you should strongly consider changing the name: https://github.com/rxi/lume
If it's not well known, it's not really a name clash.
And also this Lume, which is a 3d-HTML library.
https://lume.io
Lume[1] is also a static site generator for the Deno runtime[2]. Finding a four-letter name in the Latin alphabet that is unique and still sounds like a real name is nearly impossible. Fortunately, I registered the domain Egont[3] a long time ago, perhaps presciently anticipating the interplay between AI, agents, and ego. Five letters though.
[1] https://lume.land/
[2] https://deno.com/
[3] https://www.egont.com/