So, packaging a standard arm linux install into a "custom" image is perfectly fine, to be honest.
I think even custom is unacceptable. It’s too much of a pain being restricted to vendor-specific builds of a handful of distros. On x86 you can run anything.
This seems like overkill for most of my workloads that require an SBC.
I would choose a Jetson for anything computationally intensive, as the Orange Pi 6 Plus's NPU goes unused due to lack of software support.
For other workloads, this one seems a bit too large in terms of form factor and power consumption, and an older RK3588 should still be sufficient.
Looks like the SoC (CIX P1) has Cortex-A720/A520 cores which are Armv9.2, nice.
I've still been on the hunt for a cheap Arm board with an Armv8.3+ or Armv9.0+ SoC for OSDev stuff, but it's hard to find one in the hobbyist price range (this board included, at $700-900 USD from what I see).
The NVIDIA Jetson Orin Nanos looked good but unfortunately SWD/JTAG is disabled unless you pay for the $2k model...
Disappointing on the NPU. I have found it's a point where industry-wide improvement is necessary. People talk tokens/sec, model sizes, what formats are supported... But I rarely see an objective accuracy comparison. I repeatedly see that AI models are resilient to errors and reduced precision, which is what allows the 1-bit quantization and whatnot.
But at a certain point I guess it just breaks? What's needed is an objective "I gave these tokens, I got those tokens out" comparison, though that requires a gold-standard ground truth that's maybe hard to come by.
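For what it's worth, the comparison could be as simple as token-level agreement against a full-precision reference run. A minimal sketch (the token IDs below are made up for illustration):

```python
def token_agreement(reference_tokens, device_tokens):
    """Fraction of positions where the device under test (e.g. an NPU
    running a quantized model) emitted the same token as the reference
    (e.g. a full-precision CPU run on identical inputs and settings)."""
    if len(reference_tokens) != len(device_tokens):
        raise ValueError("compare equal-length generations")
    matches = sum(r == d for r, d in zip(reference_tokens, device_tokens))
    return matches / len(reference_tokens)

# Hypothetical token IDs from a fp32 reference vs. a quantized NPU build:
ref = [101, 2009, 2003, 1037, 3231, 102]
npu = [101, 2009, 2003, 1037, 3899, 102]
print(token_agreement(ref, npu))  # 5 of 6 positions agree -> ~0.833
```

The hard part is the "reference" half, not the metric: you need the same model, prompt, and greedy decoding on trusted hardware first.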
There are a couple outfits making M.2 AI accelerators. Recently I noticed this one: DeepX DX-M1M 25 TOPS (INT8) M.2 module from Radxa[1]: https://radxa.com/products/aicore/dx-m1m
If you're in the business of selling unbundled edge accelerators, you're strongly incentivized to modularize your NPU software stack for arbitrary hosts, which increases the likelihood that it actually works, and for more than one particular kernel.
If I had an embedded AI use case, this is something I'd look at hard.
The even more confounding factor is that there are separate builds from every vendor of these Cix P1 systems: Radxa, Orange Pi, Minisforum, now MetaComputing... it is painful to sort out, even as someone who knows where to look.
I couldn't imagine recommending any of these boards to people who aren't already SBC tinkerers.
I was also on board until he got to the NPU downsides. I don't care about use for an LLM, but I would like to see the ability to run smallish ONNX models generated from a classical ML workflow. Not only is a GPU overkill for the tasks I'm considering, but I'm also concerned that unattended GPUs out on the edge will be repurposed for something else (video games, crypto mining, or just straight up ganked).
Just try to find some benchmark top_k, temp, etc. parameters for llama.cpp. There's no consistent framing of any of these things. Temp should be effectively 0 so it's at least deterministic in its sampling.
Right. There are countless parameters and seeds to tweak. But theoretically, if all the inputs are the same, the outputs should be within epsilon of a known-good run. I wouldn't even mandate that temperature or any other parameter be a specific value, just that it's the same across runs. That way even the pseudorandom processes match, so long as nothing pulls from a hardware RNG or the like. Which seems like a reasonable thing to offer; idk, maybe an "insecure RNG" mode.
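A sketch of that "within epsilon of a known good" check, assuming you can dump per-position logits (or any floats) from both runs; the numbers here are illustrative:

```python
def within_epsilon(reference, candidate, eps=1e-3):
    """True if every value from the candidate run is within eps of the
    reference run. Both runs must use identical prompt, seed, and
    sampling parameters for this comparison to be meaningful."""
    if len(reference) != len(candidate):
        return False
    return max(abs(r - c) for r, c in zip(reference, candidate)) <= eps

ref = [0.1200, -3.4050, 7.9910]   # reference logits (made up)
cand = [0.1205, -3.4051, 7.9908]  # same run on different hardware
print(within_epsilon(ref, cand))            # True: max diff is ~5e-4
print(within_epsilon(ref, cand, eps=1e-5))  # False at a tighter bound
```

Picking eps is the whole debate: too loose and real quantization damage hides, too tight and benign reduction-order noise fails the check.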
By default CUDA isn't deterministic because of thread scheduling.
The main difference comes from the rounding order of reductions.
It does make a small difference. Unless you have an unstable floating point algorithm, but if you have an unstable floating point algorithm on a GPU at low precision you were doomed from the start.
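The reduction-order point is easy to demonstrate without a GPU: float addition isn't associative, so accumulating the same values in a different order can give different results. A self-contained illustration at float32 precision:

```python
import struct

def f32(x):
    """Round a Python double to the nearest float32."""
    return struct.unpack('f', struct.pack('f', x))[0]

def sum_f32(values):
    """Left-to-right float32 accumulation: one fixed reduction order."""
    total = 0.0
    for v in values:
        total = f32(total + v)
    return total

print(sum_f32([1e8, 1.0, -1e8]))  # 0.0 -- the 1.0 is absorbed by 1e8
print(sum_f32([1e8, -1e8, 1.0]))  # 1.0 -- same values, other order
```

On a GPU, the order depends on how threads happen to finish, so two identical launches can sum a reduction in different orders and differ in the last bits.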
But I just don't get... everything. I don't get the org, I don't get the users on HN. I'm like Skinner in the "no, it's the children who are wrong" meme.
It's a lambda. It's cheap: plug in, ssh, forget. And it's bloody wonderful.
If you buy a 1 or 2 off ebay, ok maybe a 3.
After that? Get a damn computer.
Want more bandwidth on the rj45? Get a computer.
Want faster usb? Get a computer.
Want ssd? Get a computer
Want a retro computing device? Get a computer.
Want a computer experience? Get a computer.
Etc etc etc, i don't need to labour this.
Want something that will sit there, have ssh and run python scripts for years without a reboot? Spend 20 quid on ebay.
People demanded faster horses. And the raspi org, for some damn fool reason, tried to give them one.
There are people bemoaning the fact that Raspberry Pis aren't able to run LLMs. And they will then, without irony, complain that the prices are too high. For the love of God, raspi org, stop listening to dickheads on the Internet. Stop paying youtubers to shill. Stop and focus.
Unfortunately it's only available at the moment for extremely high prices. I'd like to pick some up to create a Ceph cluster (with one 18TB HDD OSD per node in an 8-node cluster with 4+2 erasure coding).
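Back-of-envelope capacity for that layout, ignoring Ceph's full-ratio safety margin and metadata overhead (so real usable space lands lower):

```python
def ec_capacity(nodes, drive_tb, k, m):
    """Raw and usable TB for a Ceph pool using k+m erasure coding with
    one OSD per node: each object is split into k data chunks plus m
    parity chunks, so usable space is raw * k / (k + m)."""
    raw = nodes * drive_tb
    return raw, raw * k / (k + m)

raw, usable = ec_capacity(nodes=8, drive_tb=18, k=4, m=2)
print(raw, usable)  # 144 TB raw, 96.0 TB usable before overheads
```

With 4+2 the pool also survives any two node failures, at the cost of every read or write touching six nodes.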
I love that OrangePi is making good hardware, but after my experience with the OrangePi 5 Max, I won’t be buying more hardware from them again. The device is largely useless due to a lack of software support. This also happened with the MangoPi MQ-Pro. I’ll just stick with RPi. I may not get as much hardware for the money, but the software support is fantastic.
> The device is largely useless due to a lack of software support.
I think everyone considering an SBC should be warned that none of these are going to be supported by upstream in the way a cheap Intel or AMD desktop will be.
Even the Raspberry Pi 5, one of the most well supported of the SBCs, is still getting trickles of mainline support.
The trend of buying SBCs for general purpose compute is declining, thankfully, as more people come to realize that these are not the best options for general purpose computing.
> Even the Raspberry Pi 5 [...] is still getting trickles of mainline support.
I thought raspberry pi could basically run a mainline kernel these days -- are there unsupported peripherals besides Broadcom's GPU?
It takes a few years, but the Broadcom chips in Pis eventually get mainline support for most peripherals, similar to modern Rockchip SoCs.
The major difference is Raspberry Pi maintains a parallel fork of Linux and keeps it up to date with LTS and new releases, even updating their Pi OS to later kernels faster than the upstream Debian releases.
> The trend of buying SBCs for general purpose compute is declining,
Were people actually doing that?
More like people try doing anything other than use the base OS, and realize the bottom-tier x86 mini-PCs are 3-4x faster for the same price, and can encode a basic video stream without bogging down.
If the RPi came with any recent mid-tier Snapdragon SoC, it might be interesting. Or if someone made a Linux distro that supports all devices on one of the Snapdragon X Elite laptops, that would be interesting.
Instead, it's more like the equivalent of a cheap desktop with integrated GPU from 20 years ago, on a single board, with decent linux support, and GPIO. So it's either a linux learning toy, or an integrated component within another product, and not much in between.
Raspberry Pi was and is selling official desktop kits: https://www.raspberrypi.com/products/raspberry-pi-4-desktop-...
I wouldn't wish it upon an enemy, but it's a thing.
I've used them mostly for dedicated tasks, at least the RPi3 and older. I've used RPi3s as CUPS servers at a couple of sites, for a few printers. They've been running 24/7 for many years now with no issues. As I could buy those SBCs at the original low price and the installation was a total no-brainer, I would never consider using any kind of mini PC for that.
I have a couple of RPi4s with 8GB and 4GB RAM respectively; these I have been using as kind-of general computers (they're running off SSDs instead of SD cards). I've had no reason so far to replace them with anything Intel/AMD. On the other hand they can't replace my laptop - though I wish they could, as I use the laptop with an external display and external keyboard 100% of the time, so its form factor is just in the way. But there's way too little RAM on the SBCs. It's bad enough on the laptop, with its measly 16GB.
People do all manner of wacky stuff with Pis that could be more easily done with traditional machines. Kubernetes clusters and emulation machines are the more common use cases; the former can be done with VMs on a desktop and the latter is easily accomplished via a used SFF machine off of eBay. I've also heard multiple anecdotes of people building Pi clusters to run agentic development workflows in parallel.
I think in all cases it's the sheer novelty of doing something with a different ISA and form factor. Having built and racked my share of servers I see no reason to build a miniature datacenter in my home but, hey, to each their own.
They are cheap and the hardware seems good enough. The hardware is, but getting software support is very DIY.
They probably define general purpose as anything homelab based that runs on a commodity OS.
Yeah, that's the problem with ARM devices. Better to just buy an N100.
I gave up on them and switched to a second hand mini pc. These mini desktops are offloaded in bulk by governments and offices for cheap and have much better specs than the same priced SBC. And you are no longer limited to “raspberry pi” builds of distros.
Unless you strictly need the tiny form factor of an SBC you are so much better going with x86.
I thought N100-equivalent SBCs like Radxa's, etc., had been largely out of stock for quite some time now.
The N100 is way larger than an OrangePi 5 Max.
I have a Bosgame AG40 (low end Celeron N4020 - less powerful than the N100 - but runs fanless)[1].
It's 127 x 127 x 50.8 mm. I think most mini N100 PCs are around that size.
The OrangePi 5 Max board is 89x57mm (it says 1.6mm "thickness" on the spec sheet but I think that is a typo - the ethernet port is more than that)
Add a few mm for a case and it's roughly 2/3 as long and half the width of the AG40.
[1] https://manuals.plus/asin/B0DG8P4DGV
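Running the footprint numbers (board dimensions from the comments above; case thickness not included):

```python
opi_l, opi_w = 89, 57        # OrangePi 5 Max board, mm
ag40_l, ag40_w = 127, 127    # Bosgame AG40 footprint, mm
print(round(opi_l / ag40_l, 2), round(opi_w / ag40_w, 2))  # 0.7 0.45
```

which backs up the "roughly 2/3 as long and half the width" estimate.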
There are quite a few x86-64 machines in the 70mm x 70mm form factor[1], which is close?
1: https://www.ecs.com.tw/en/Product/Mini-PC/LIVA_Q2/
Lmao the hero background. They photoshopped the pc into the back pocket of that AI-generated woman. (or the entire thing is AI-generated)
I have an even cheesier competitor, which randomly has a dragon on the lid (it would be a terrible choice for all but the wimpiest casual gaming... but it makes a good Home Assistant HAOS server!)
I dunno, I hear it's easy to put in your pocket so you can take the computer everywhere.
welcome to online shopping..
Also about half as efficient, if that matters, and 1.5-2x higher idle power consumption (again, if that matters).
Sometimes easier to acquire, but usually the same price or more expensive.
I can run my N100 nuc at 4W wall socket power draw idle. If I keep turbo boost off, it also stays there under normal load up to 6W full power. Then it is also terribly slow. With turbo boost enabled power draw can go to 8-10W on full load.
Not sure how this compares to the OrangePI in terms of performance per watt but it is already pretty far into the area of marginal gains for me at the cost of having to deal with ARM, custom housing, adapters to ensure the wall socket draw to be efficient etc. Having an efficient pico psu power a pi or orange pi is also not cheap.
Which NUC do you have? A lot of the nameless brands on AliExpress draw 10 watts at idle.
Not the poster you're replying to, but I run an Acer laptop with an N305 CPU as a Plex server. Idle power draw with the lid closed is 4-5W and I keep the battery capped at 80% charge.
The N100/150/200/etc. can be clocked to use less power at idle (and capped for better thermals, especially in smaller or power-constrained devices).
A lot of the cheaper mini PCs seem to let the chip go wild, and don't implement sleep/low power states correctly, which is why the range is so wide. I've seen N100 boards idle at 6W, and others idle at 10-12W.
Well... https://radxa.com/products/x/x4/
It has major overheating issues though, the N100 was never meant to be put on such a tiny PCB.
They also sell a heatsink for a mere $21 (on AliExpress), just in case you don't know how to fit a spare PC cooler onto it.
The N100 is more expensive, does not come with onboard wifi, and requires active cooling.
Once you add a case, power supply, and some usable disk space, the N100 isn't that much more expensive.
active cooling is the killer though. I'd prefer a board that is fanless any day.
The Orange Pi 6 Plus has a fan.
Have you taken a look at armbian? If so, what was your experience?
https://www.armbian.com/boards?vendor=xunlong
I have. It’s great on the RPi. On OPi5max, it didn’t support the hardware.
Worse, if you flash it to UEFI you’ll lose compat with the one system that did support it (older versions of BredOS). For that, you grab an old release, and never update. If you’re running something simple that you know won’t benefit from any update at all, that’s great. An RK3588 is a decent piece of kit though, and it really deserves better.
> The device is largely useless due to a lack of software support.
It's pretty hacky for sure, but I wouldn't classify it as useless. E.g. I managed to get some LLMs to run on the NPU of an Orange Pi 5 a while back.
I see there is now even a NPU compatible llama.cpp fork though haven't tried it
I thought RK3588 had pretty good mainline support, what's the issue with this board?
Hardware video decoding support for h264 and av1 just landed in 7.0, so it hasn't been a great bleeding-edge experience for desktop and Plex etc. users. But IMO late support is still support.
Video, networking, etc. To get a working 3588 you'd have to go with a passionate group like MNT, and then you're paying way more.
What specifically is lacking?
I was planning to build a NAS from OPi 5 to minimise power consumption, but ended up going for a Zen 3 Ryzen CPU and having zero regrets. The savings are miniscule and would not justify the costs.
On a related note: I pulled my pinebook pro out of a drawer this week, and spent an hour or so trying to figure out why the factory os couldn’t pull updates.
I guess manjaro just abandoned arm entirely. The options are armbian (probably the pragmatic choice, but fsck systemd), or openbsd (no video acceleration because the drivers are gpl for some dumb reason).
This sort of thing is less likely to happen to rpi, but it’s also getting pretty frustrating at this point.
> I guess manjaro just abandoned arm entirely
Er?
https://manjaro.org/products/download/arm explicitly lists the pinebook pro?
None of the arm mirrors have recent updates.
Maybe the LLM was wrong and manjaro completely broke the gpg chain (again), but it spent a long time following mirror links, checking timestamps and running internet searches, and I spent over an hour on manual debugging.
You have to go in with your eyes open wth SBCs. If you have a specific task for it and you can see that it either already supports it or all the required software is there and it just needs to be gathered, then they can be great gadgets.
Often they can go their entire lifespan without some hardware feature being usable because of lack of software.
The blunt truth is that someone has to make that software, and you can't expect someone to make it for you. They may make it for you, and that's great, but really if you want a feature supported, it either has to already be supported, or you have to make the support.
It will be interesting to see if AI gets to the point that more people are capable of developing their own resources. It's a hard task and a lot of devices means the hackers are spread thin. It would be nice to see more people able to meaningfully contribute.
At some point SBCs that require a custom linux image will become unacceptable, right?
Right?
Using vendor kernels is standard in embedded development. Upstreaming takes a long time so even among well-supported boards you either have to wait many years for everything to get upstreamed or find a board where the upstreamed kernel supports enough peripherals that you're not missing anything you need.
I think it's a good thing that people are realizing that these SBCs are better used as development tools for people who understand embedded dev instead of as general purpose PCs. For years now you can find comments under every Raspberry Pi or other SBC thread informing everyone that a mini PC is a better idea for general purpose compute unless you really need something an SBC offers, like specific interfaces or low power.
I have always found it perplexing. Why is that required?
Is it the lack of drivers in upstream? Is it something to do with how ARM devices seemingly can't install Linux the same way x86 machines can (something something device tree)?
Yeah lack of peripheral drivers upstream for all the little things on the board, plus (AIUI) ARM doesn't have the same self-describing hardware discovery mechanisms that x86 computers have. Basically, standardisation. They're closer to MCUs in that way, is how I found it (though my knowledge is way out of date now, been years since I was doing embedded)
I've just been doing some reading. The driver situation in Linux is a bit dire.
On the one hand there is no stable driver ABI because that would restrict the ability for Linux to optimize.
On the other hand vendors (like Orange Pi, Samsung, Qualcomm, etc etc) end up maintaining long running and often outdated custom forks of Linux in an effort to hide their driver sources.
Seems..... broken
Somehow, this isn't a problem in the desktop space, even though new hardware regularly gets introduced there too which require new drivers.
The "somehow" is Microsoft, who defines what the hardware architecture of an x86-64 desktop/laptop/server is and builds the compatibility test suite (Windows HLK) to verify conformance. Open source operating systems rely on Microsoft's standardization.
Microsoft's standardization got AMD and Intel to write upstream Linux GPU drivers? Microsoft got Intel to maintain upstream xHCI Linux drivers? Microsoft got people to maintain upstream Linux drivers for touchpads, display controllers, keyboards, etc?
I doubt this. Microsoft played a role in standardizing UEFI/ACPI/PCI which allows for a standardized boot process and runtime discovery, letting you have one system image which can discover everything it needs during and after boot. In the non-server ARM world, we need devicetree and u-boot boot scripts in lieu of those standards. But this does not explain why we need vendor kernels.
I think they're related. You can't have a custom kernel if you can't rebuild the device tree. You can't rebuild blobs.
> You can't have a custom kernel if you can't rebuild the device tree.
What is this supposed to mean? There is no device tree to rebuild on x86 platforms, yet you can have a custom kernel on x86 platforms. You sometimes need to use kernel forks there too to work with really weird hardware without upstream drivers; there's nothing different about Linux's driver model on x86. It's just that in the x86 world, for the vast, vast majority of situations, pre-built distro kernels built from upstream kernel releases have all the necessary drivers.
x86 hardware has a standard way to boot and bring up the hardware, usually to at least a minimum level of functionality.
ARM devices aren't even really similar to one another. As a weird example, the Raspberry Pi boots from the GPU, which brings up the rest of the hardware.
It's not just about booting though. We solve this with hardware-specific devicetrees, which is less nice in a way than runtime discovery through PCI/ACPI/UEFI/etc, but it works. But we're not just talking about needing a hardware-specific devicetree; we're talking about needing hardware-specific vendor kernels. That's not due to the lack of boot standardization and runtime discovery.
Or you can just upstream what you need yourself.
There are some projects to port UEFI to boards like Orange Pi and Raspberry Pi. You can install a normal OS once you have flashed that.
https://github.com/tianocore/edk2-platforms/tree/master/Plat...
https://github.com/edk2-porting/edk2-rk3588
There also seems to be a plan to add UEFI support to u-boot[1]. Many of these kinds of boards have u-boot implementations, so they could then boot a UEFI kernel.
However, many of these ARM chips have their own sub-architecture in the Linux source tree, and I'm not sure it's possible today to build a single image with them all built in and choose the sub-architecture at runtime. Theoretically it could be done, of course, but who has the incentive to do that work?
(I seem to remember Linus complaining about this situation to the Arm maintainer, maybe 10-20 years ago)
[1] https://docs.u-boot.org/en/v2021.04/uefi/uefi.html
The Orange Pi 6 Plus supports UEFI from the get-go.
> At some point SBCs that require a custom linux image will become unacceptable, right?
The flash images contain information used by the bios to configure and bring up the device. It's more than just a filesystem. Just because it's not the standard consoomer "bios menu" you're used to doesn't mean it's wrong. It's just different.
These boards are based off of solutions not generally made available to the public. As a result, they require a small amount of technical knowledge beyond what operating a consumer PC might require.
So, packaging a standard arm linux install into a "custom" image is perfectly fine, to be honest.
If the image contains information required to bring up the device, why isn't that data shipped in firmware?
“Custom”? No.
Proprietary and closed? One can hope.
I think even custom is unacceptable. It’s too much of a pain being limited in your distro choice because you are limited to specific builds. On x86 you can run anything.
Something in me wants to buy every SBC and/or microcontroller that is advertised to me.
Even though they could all be replaced by a decent mini PC with beefy memory, running lots of VMs.
Yeah I have this problem (?) too. They are just so neat. I also really like tiny laptops and recreations of classic computers.
Check out Clockwork Pi if you haven't seen it. Beautiful little constructions.
One or two USB-C 3.2 Gen2 ports are all that's required - can then plug in a hub or dock. eg: https://us.ugreen.com/collections/usb-hub?sort_by=price-desc...
Can also plug in a power bank. https://us.ugreen.com/collections/power-bank?sort_by=price-d...
The advantage is that if the machine breaks or is upgraded, the dock and power bank can be retained. It would also distribute the price.
The dock and power bank can also be kept away from the machine to lower heat, ideally avoiding the need for a fan in the housing.
Better hardware should eventually lead to better software, which is the platform's main problem right now.
This 10-in-1 dock even has an SSD enclosure for $80 https://us.ugreen.com/products/ugreen-10-in-1-usb-c-hub-ssd (no affiliation) (no drivers required)
I'd have another dock/power/screen combo for traveling and portable use.
This seems to be overkill for most of my workloads that require an SBC. I would choose a Jetson for anything computationally intensive, as the Orange Pi 6 Plus's NPU isn't even utilized due to lack of software support. For other workloads, this one seems a bit too large in terms of form factor and power consumption, and an older RK3588 should still be sufficient.
Looks like the SoC (CIX P1) has Cortex-A720/A520 cores which are Armv9.2, nice.
I've still been on the hunt for a cheap Arm board with an Armv8.3+ or Armv9.0+ SoC for OSDev stuff, but it's hard to find them in the hobbyist price range (this board included, at $700-900 USD from what I see).
The NVIDIA Jetson Orin Nanos looked good but unfortunately SWD/JTAG is disabled unless you pay for the $2k model...
And it doesn't seem like anything newer than ARMv9.2 is available either, no matter the price point.
Disappointing on the NPU. I have found it's a point where industry-wide improvement is necessary. People talk tokens/sec, model sizes, what formats are supported... but I rarely see an objective accuracy comparison. I repeatedly see that AI models are resilient to errors and reduced precision, which is what allows 1-bit quantization and whatnot.
But at a certain point I guess it just breaks? And they need an objective "I gave these tokens, I got out those tokens". But I guess that would need an objective gold standard ground truth that's maybe hard to come by.
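The kind of objective check described above could be as simple as comparing greedy-decoded token IDs from a quantized run against a full-precision reference run. A minimal sketch (the function name and the token IDs are made up for illustration):

```python
def token_match_rate(reference, candidate):
    # Fraction of positions where two token-ID sequences agree,
    # compared up to the length of the shorter sequence.
    n = min(len(reference), len(candidate))
    if n == 0:
        return 0.0
    return sum(1 for r, c in zip(reference, candidate) if r == c) / n

# Hypothetical token IDs: full-precision run vs. quantized run.
full_precision = [101, 2023, 2003, 1037, 3231, 102]
quantized = [101, 2023, 2003, 1037, 3899, 102]
print(token_match_rate(full_precision, quantized))  # 5 of 6 positions agree
```

Of course, this only works as a gold-standard comparison when decoding is greedy, or when all sampling parameters and seeds are held identical between the two runs.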
There are a couple outfits making M.2 AI accelerators. Recently I noticed this one: DeepX DX-M1M 25 TOPS (INT8) M.2 module from Radxa[1]: https://radxa.com/products/aicore/dx-m1m
If you're in the business of selling unbundled edge accelerators, you're strongly incentivized to modularize your NPU software stack for arbitrary hosts, which increases the likelihood that it actually works, and for more than one particular kernel.
If I had an embedded AI use case, this is something I'd look at hard.
>But I rarely see an objective accuracy comparison.
There are some perplexity comparison numbers for the previous-gen Orange Pi 5 in the link below.
Bit of a mixed bag, but it doesn't seem catastrophic across the board. Some models show minimal perplexity loss at Q8...
https://github.com/invisiofficial/rk-llama.cpp/blob/rknpu2/g...
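For context, the perplexity numbers in comparisons like that one are just the exponentiated mean negative log-likelihood per token; a minimal sketch:

```python
import math

def perplexity(token_nlls):
    # Perplexity = exp(mean per-token negative log-likelihood, natural log).
    return math.exp(sum(token_nlls) / len(token_nlls))

# If the model assigns every token probability 1/2, perplexity is 2.
print(perplexity([math.log(2.0)] * 4))
```

A quantized model that assigns slightly lower probabilities to the reference tokens shows up as a slightly higher perplexity, which is why it's a convenient single-number accuracy proxy.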
The even more confounding factor is that there are specific builds provided by every vendor of these Cix P1 systems: Radxa, Orange Pi, Minisforum, now MetaComputing... it is painful to try to sort it out, even as someone who knows where to look.
I couldn't imagine recommending any of these boards to people who aren't already SBC tinkerers.
I was also on board until he got to the NPU downsides. I don't care about LLM use, but I would like the ability to run smallish ONNX models generated from a classical ML workflow. Not only is a GPU overkill for the tasks I'm considering, but I'm also concerned that unattended GPUs out on the edge will be repurposed for something else (video games, crypto mining, or just straight up ganked).
Just try to find some benchmark top_k, temp, etc. parameters for llama.cpp. There's no consistent framing of any of these things. Temp should be effectively 0 so it's at least deterministic in its random probabilities.
Right. There are countless parameters and seeds and whatnot to tweak. But theoretically, if all the inputs are the same, the outputs should be within epsilon of a known-good run. I wouldn't even mandate that temperature or any other parameter be a specific value, just that it's the same. That way you can make sure even the pseudorandom processes are the same, so long as nothing pulls from a hardware RNG or something like that. Which seems reasonable for them to do, so idk, maybe an "insecure RNG" mode.
>Temp should be effectively 0 so it's at least deterministic in its random probabilities.
Is this a thing? I read an article about how due to some implementation detail of GPUs, you don't actually get deterministic outputs even with temp 0.
But I don't understand that, and haven't experimented with it myself.
By default CUDA isn't deterministic because of thread scheduling.
The main difference comes from the rounding order of reductions.
It does make a small difference, unless you have an unstable floating-point algorithm; but if you have an unstable floating-point algorithm on a GPU at low precision, you were doomed from the start.
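The rounding-order point can be demonstrated with plain Python floats, no GPU required. A parallel reduction that combines partial sums in a different order is effectively re-parenthesizing the additions, and floating-point addition is not associative:

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one reduction order
right = a + (b + c)  # another reduction order

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

Each individual result is within an ulp or two of the exact sum; the divergence only matters when an unstable algorithm amplifies those last-bit differences.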
I'm a big fan of the Raspberry Pi. I have many; in fact, I have so many that I keep this alias in every `.bashrc`:

```
# Scan the LAN and print the IPs of hosts with Raspberry Pi MAC OUIs
alias findpi='sudo nmap -sn 192.168.1.0/24 | awk '\''/^Nmap/{ip=$NF}/B8:27:EB|DC:A6:32|E4:5F:01|28:CD:C1/{print ip}'\'''
```
But I just don't get... everything. I don't get the org, I don't get the users on HN; I'm like Skinner in the "no, the kids are wrong" meme.
It's a lambda. It's cheap: plug it in, ssh in, forget it. And it's bloody wonderful.
If you buy a 1 or 2 off eBay, OK, maybe a 3.
After that? Get a damn computer.
Want more bandwidth on the RJ45? Get a computer.
Want faster USB? Get a computer.
Want an SSD? Get a computer.
Want a retro computing device? Get a computer.
Want a computer experience? Etc., etc., I don't need to labour this.
Want something that will sit there, have ssh and run python scripts for years without a reboot? Spend 20 quid on ebay.
People demanded faster horses. And the Raspberry Pi org, for some damn fool reason, tried to oblige.
There are people bemoaning the fact that Raspberry Pis aren't able to run LLMs, who will then, without irony, complain that the prices are too high. For the love of God, Raspberry Pi org, stop listening to dickheads on the Internet. Stop paying YouTubers to shill. Stop and focus.
You won't win this game
TFA is about an Orange Pi, with a 12-core Arm chip, a bit more than a Raspberry Pi.
They are chasing the same waterfalls, though, Jeff.
As opposed to the rivers and the lakes that they’re used to?
Unfortunately it's only available at the moment for extremely high prices. I'd like to pick some up to create a Ceph cluster (with one 18 TB HDD OSD per node in an 8-node cluster with 4+2 erasure coding).
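As a back-of-the-envelope check on that layout (the numbers come from the comment above; the function is just illustrative arithmetic): with 4+2 erasure coding, each object is split into 4 data chunks plus 2 coding chunks, so usable capacity is k/(k+m) of raw.

```python
def usable_capacity_tb(nodes, tb_per_node, k, m):
    # Erasure-coded usable capacity: raw capacity scaled by k / (k + m).
    return nodes * tb_per_node * k / (k + m)

# 8 nodes, one 18 TB HDD OSD each, 4+2 erasure coding:
print(usable_capacity_tb(8, 18, 4, 2))  # 96.0 TB usable out of 144 TB raw
```

With m=2 the pool also survives the loss of any two chunks, so with one OSD per node that means any two nodes, which is a reasonable trade against the 33% capacity overhead.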