Really love the ideas behind Deno, and tried to do things the Deno way (deno.json, JSR, modern imports, Deno Deploy) for a monorepo project with Next.js, Hono and private packages. Some things like Hono worked super well, but Next.js did not. Other things, like types, would sometimes break in subtle ways. The choice of deployment destination, e.g. Vercel for Next.js, also gave me issues.
Here is an example of a small microcut I faced (which might be fixed now) https://github.com/honojs/hono/issues/1216
In contrast, Bun had less cognitive overhead and just "worked", even though it didn't feel as clean as Deno. Some things aren't perfect with Bun either, like the lack of a Bun runtime on Vercel.
You picked a stack that is still very npm-centric, especially private npm packages. The sweet spot for doing things the Deno way still seems to be choosing stacks that themselves are very Deno and/or ESM-native. I've had some great experiences with Lume, for instance, and targeting things like Deno Deploy over Vercel. (JSR scores are very helpful at finding interesting libraries with great ESM support.) Obviously "start with a fresh stack" is a huge ask and not a great way to sell Deno, given how much time/effort investment exists in stacks like Next.js. But I think in terms of "What does Deno do best?" there's a sweet spot where you 0-60 everything in Deno-native/ESM-native tools.
Also, yeah, Deno's npm compatibility keeps getting better; as mentioned in these 2.4 release notes, there are a few new improvements. As another comment in this thread points out, for a full stack like the one you were trying, using Deno package.json-first can give a better compatibility experience than deno.json-first, even if the deno.json-first approach is the nicer/cleaner one long term, or when you can go 0-60 in Deno-native/ESM-native greenfields.
It works surprisingly well when used in npm compatibility mode, much like Bun is typically used.
Running `deno install` in a directory with a package.json will create a leaner version of node_modules, and running `deno task something` will run the scripts defined in `package.json`.
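For concreteness, a rough sketch of that workflow (the package, script name and permission flags here are made up, and the exact permissions you need will depend on the package):

```ts
// main.ts - package.json-first project run under Deno's npm compat mode.
//
// Assuming package.json contains something like:
//   "dependencies": { "express": "^4" },
//   "scripts": { "start": "deno run --allow-net --allow-read --allow-env main.ts" }
//
// then:
//   deno install      # installs dependencies into a lean node_modules
//   deno task start   # runs the "start" script from package.json
//
// Bare npm specifiers resolve via package.json / node_modules:
import express from "express";

const app = express();
app.get("/", (_req, res) => res.send("hello from Deno in npm compat mode"));
app.listen(8000);
```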
The Deno way of doing things is a bit problematic; I too find it is often a timesink where things don't work, and if you then have to escape back to node/npm it becomes a bigger hassle. Using Deno with package.json is easier.
100%. I was all-in on Deno, but there were just too many sharp edges. In contrast, Bun just works.
People underestimate the node compatibility that Deno offers. I think the compat env variable will do a lot for adoption. Maybe a `denon` command or something could enable it automatically? Idk.
Honestly, I was bullish on Deno back in the day, but I don't see why I'd use it over Bun now.
Fewer segfaults, improved security/capability model
As a Bun user I don't really get segfaults anymore.
I've written C for years. The only time it is safe from crashes is when the code doesn't churn and timing between threads stays consistent. Bun has constant feature churn and new hardware it runs on all the time, providing novel timings. It is very unlikely to be crash-free any time soon.
https://github.com/oven-sh/bun/issues?q=is%3Aissue%20state%3...
Just got one today! But yes it is better.
The security model is very underestimated imo; it will become very evident when more Bun projects reach production instead of staying experimental.
I have yet to find a reason to fight IT and the architects to get anything besides Node onto CI/CD pipelines and container base images.
Nice list of solid changes. I really like Deno for scripting random glue code; I use it most places (maybe with the exception of random machine learning stuff, where python/uv fits.) Looking forward to gRPC support later this year, too, for some of my long-tail use cases. And the bundle command looks nice!
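For anyone who hasn't tried that kind of glue scripting, a minimal sketch of what I mean (the API endpoint and output path are just examples):

```ts
#!/usr/bin/env -S deno run --allow-net=api.github.com --allow-write=releases.json
// Single-file script: no package.json, no node_modules, permissions in the shebang.

const res = await fetch(
  "https://api.github.com/repos/denoland/deno/releases?per_page=5",
);
if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);

const releases = (await res.json()) as Array<{ tag_name: string; published_at: string }>;
const summary = releases.map((r) => ({ tag: r.tag_name, date: r.published_at }));

await Deno.writeTextFile("releases.json", JSON.stringify(summary, null, 2));
console.log(`wrote ${summary.length} releases to releases.json`);
```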
Crazy that Deno is still not workable on FreeBSD because of the Rust V8 bindings not being ported.
How big is the intersection of modern Javascript developers and FreeBSD users?
Not as big as Linux, but I know a few FreeBSD shops that run NodeJS apps, so it's not entirely crazy to think that there are more and that they would want to try Deno. Besides, making your OSS software compilable on *BSD/Linux/Mac/Win has historically been a good thing to do anyway.
For a low-level runtime (i.e. V8 itself) I can accept a certain lag, since there might be some low-level differences in how signals, etc. behave.
However, for more generic code, Linux-isms often signal a certain "works-on-my-machine" mentality that might even hinder cross-distro compatibility, let alone getting things to work on Windows and/or macOS development machines.
I guess a Rust binding for V8 is a tad borderline, not necessarily low-level but still an indicator that there's a lack of care for getting things to work on other machines.
Is it big enough to prioritize fixing though? The answer seems to be a no so far.
Node.js is (maybe surprisingly) used a lot in less common operating systems like FreeBSD and Illumos.
It's more than a little surprising that portability between different Unices is not given more emphasis. "Back in the day" a program being portable between Sun Solaris, HP's HP-UX, Linux, FreeBSD was considered a sign of clean code.
Back in the day, Sun Solaris and HP-UX were not end-of-life, and FreeBSD was on more equal industry footing with Linux. Now Linux is the clear winner in server UNIX by a wide margin. Also, Ryan Dahl worked at Joyent, an Illumos/Solaris shop, when he built Node; perhaps that has informed his lack of interest in supporting FreeBSD these days.
I mean... you can probably see why they don't spend any effort on that.
Looks like it is in ports?
Trying to compile it - it's 2.2.0, but better than nothing. I haven't seen any upstream patches to the Rust V8 bindings for FreeBSD, so maybe there are out-of-tree ones in the ports tree, if it does compile.
I believe the reason Deno is not more widely used in production environments is the lack of a standardized vulnerability database (other than using 100% npm compatibility, which takes many popular Deno packages out of scope). The issue is that there is no real centralized package manager (by design), which makes it challenging. Was there any development in that direction?
> I believe the reason Deno is not more widely used in production environments is the lack of a standardized vulnerability database
If this were a real blocker, then C/C++ wouldn't be used in production either, since both just lean on the language-agnostic CVE/GHSA/etc databases for any relevant vulnerabilities there... and C also heavily encourages just vendoring in entire files from the internet with no way to track down versions.
Anyway, doesn't "deno.lock" exist, and anyone who cares can opt-in to that, and use the versions in there to check vulnerability databases?
Wouldn't this also be a problem for Go, which just imports from URLs (mostly GitHub) as well?
Go imports use a Google-owned module proxy for resolution, which has a vulnerability-database facility. All Go package installs go through that proxy unless you set GOPROXY=direct when running go commands.
https://arc.net/l/quote/arrozgok
I really like that the bundle subcommand is back. No need to use workarounds.
Surprised they went with esbuild for bundling instead of the Rust-based Rolldown, which is about to hit v1.
esbuild is very stable and mature at this point, Rolldown is still rapidly evolving.
I really love where Deno is going, it really is what Node should've been.
My only concern is that they lose patience with their hype-driven competition and start doing hype-driven stuff themselves.
I thought that Deno was the hype-driven competition of nodejs ;)
I keep hearing good things about Deno. It might just convince me to try js after all!
These days it might be good to go straight to TS.
Which is what Deno's defaults guide you towards as well.
Don't
Big fan of deno, congrats on shipping.
From a security standpoint it really icks me when projects prominently ask their users to do the `curl mywebsite.com/foo.sh | sh` thing. I know risk acceptance is different for many people, but if you download a file before executing it, at least you or your antivirus can check what it actually does.
As supply chain attacks are a significant security risk for a node/deno stack application, `curl | sh` is a red flag that signals to me that the author of the website prefers convenience over security.
With a curl request directly executed, this can happen:
- the web server behind mywebsite.com/foo.sh provides malware for the first request from your IP, but when you request it again it will show a different, clean file without any code
- MITM attack gives you a different file than others receive
Node/deno applications using the npm ecosystem put a lot of blind trust into npm servers, which are hosted by microsoft, and therefore easily MITM'able by government agencies.
Looking at the official Deno docs at https://docs.deno.com/runtime/getting_started/installation/, the second option they offer after `curl | sh` is the much more secure `npm install -g deno`. There, at least some file integrity checks and basic malware scanning are done by npm when downloading and installing the package.
Even though deno has excellent programmers working on the main project, the deno.land website might not always be as secure as the main codebase.
Just my two cents, I know it's a slippery slope in terms of security risk but I cannot say that `curl | sh` is good practice.
I really never understood the threat model behind this often repeated argument.
Most of these installation scripts are just simple bootstrappers that will eventually download and execute millions of lines of code authored and hosted by the same people behind the shell script.
You simply will not be capable of personally auditing those millions of lines of code, so this problem boils down to your trust model. If you have so little trust in the authors behind the project, to the point that you'd suspect them of pulling absurdly convoluted ploys like:
> the web server behind mywebsite.com/foo.sh provides malware for the first request from your IP, but when you request it again it will show a different, clean file without any code
How can you trust them to not hide even more malicious code in the binary itself?
I believe the reason why this flawed argument has spread like a mind virus throughout the years is that it is easy to do and easy to parrot in every mildly relevant thread.
It is easy to audit a 5-line shell script. But to personally audit the millions of lines of code behind the binary that that script will blindly download and run anyway? Nah, that's real security work, and no one wants to actually do hard work here. We're just here to score some easy points and signal to our peers that we're smart and security-conscious.
> which are hosted by microsoft, and therefore easily MITM'able by government agencies.
If your threat model includes government agencies maliciously tampering with your Deno binaries, you have far more things to worry about than just curl | sh.
I think bflesch's reasoning comes from the idea that the website developers may not hold their website to the same security standards as their software, rather than from a trust issue or from thinking the authors themselves are malicious.
FWIW, I don't have a strong opinion here, besides that I like Debian's model the most. Just felt it was worth pointing out the above.
See the codecov incident, where exactly this happened: https://about.codecov.io/security-update/
The problem is getting new users onboarded. Telling people to use 'npm' doesn't help if they don't have npm installed.
How do I install npm? The npm webpage tells me to go and install nvm. And that tells me to use curl | sh.
So for a new user, using npm still requires a curl | sh, just in a different place.
If the actual installation process can be made simple, you can have users copy/paste the whole installation script rather than pulling it down with curl.
See for instance...
Setup instructions for Pkgsrc on macOS with the SmartOS people's binary caches: https://pkgsrc.smartos.org/install-on-macos/
Spack installation instructions: https://spack-tutorial.readthedocs.io/en/latest/tutorial_bas...
Guix setup used to look like this, but now they have a shell script for download. Even so, the instructions advise saving it first and walk you through what to expect while installing it.
Anyway, my point is that there are other ways to instruct people about the same kind of install process.
Security is either taken seriously, or it isn't.
If security shortcuts are taken here, trust nothing else.
> much more secure `npm install -g deno`. Here at least some file integrity checks and basic malware scanning are done by npm when downloading and installing the package.
It boils down to the question "is it more likely that the attacker can impersonate or control `npm`'s servers, or our own servers?" If our own servers are no more likely to be compromised, then curl pipe sh is not less secure than `npm install`.
This is security theater. If you're assuming an attacker can impersonate anyone on the internet, your only secure option is to cut the cable.
Using deno isn't good security practice; their sandbox is implemented like stuff from the 90s.
If you're writing server stuff, at the coarse-grained level of isolation that Deno provides you're better off using just about anything else and restricting access to network/disks/etc through systemd. Unlike Deno, it can restrict access to specific filesystem paths and network addresses (whitelist/blacklist, your choice), and you're not locked into using just Deno and not forced to write JS/TS.
See `man systemd.exec`, `systemd-analyze security`, https://wiki.archlinux.org/title/Systemd/Sandboxing
Deno can restrict access to filesystem files or directories, and to particular network domains, see docs for examples. https://docs.deno.com/runtime/fundamentals/security/#file-sy...
However in general I don't think Deno's permission system is all that amazing, and I am annoyed that people call it "capability-based" sometimes (I don't know if this came from the Deno team ever or just misinformed third parties).
I do like that "deno run https://example.com/arbitrary.js" has a minimum level of security by default, and I can e.g. restrict it to read and write my current working dir. It's just less helpful for combining components of varying trust levels into a single application.
> Unlike Deno, it can restrict access to specific filesystem paths and network addresses
deno can do this via --(allow/deny)-read and --(allow/deny)-write for the file system.
You can do the same for net, too.
https://docs.deno.com/runtime/fundamentals/security/#permiss...
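A small sketch of how those flags compose (paths and hosts here are just examples):

```ts
// fetch-and-log.ts
// deno run --allow-read=config.json --allow-write=out.log --allow-net=api.example.com fetch-and-log.ts
//
// Reads are limited to config.json, writes to out.log, and network access
// to api.example.com; anything else is denied by default.

const config = JSON.parse(await Deno.readTextFile("config.json"));
const res = await fetch(`https://api.example.com/status?env=${config.env}`);
await Deno.writeTextFile("out.log", `status: ${res.status}\n`, { append: true });

// Under the flags above, this would be rejected with a permission error:
// await Deno.readTextFile("/etc/passwd");
```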
Bubblewrap is another convenient sandboxing tool for Linux: https://wiki.archlinux.org/title/Bubblewrap
Is node "sandbox" different? Does it even have a sandbox?
Node does have a permissions system, but it's opt in. Many runtimes/interpreters either have no sandbox at all, or they're opt in, which is why Deno's sandbox is an upgrade, even if it's not as hardened as iptables or Linux namespaces.
Can you expand on this please? Also curious which 90s tech there inspired by.
It is matching strings instead of actually blocking things. That's how sandboxes were implemented when I was a kid.
E.g. --allow-net --deny-net=1.1.1.1
You cannot fetch "http://1.1.1.1" but any domain that resolves to 1.1.1.1 is a bypass...
It's crap security
If security principles are important, they should be on a deny-by-default basis with allow-lists rather than the other way around.
If the deno runtime implements fetch itself, then post-resolution checking definitely should be done. It's more of a bug, though, than a principled security lapse.
The thing is that this applies to all parts of the sandbox https://secfault-security.com/blog/deno.html
That isn't 90s security, that is just bad code. And bad code was written in the 90s and is still written today.
Ah, so by default it denies everything, but once you need to open up a category, you can't just allow exactly what you need in that category? You have to allow the entire category and then deny everything you don't want/need?
That's a bit of a silly model.
> you can't just allow exactly what you need in that category? You have to allow the entire category and then deny everything you don't want/need?
No, you can allow access to specific domains, IP addresses, filesystem paths, environment variables, etc, while denying everything else by default. You can for instance allow access to only a specific IP (e.g. `deno run --allow-net='127.0.0.1' main.ts`), while implicitly blocking every other IP.
What the commenter is complaining about is the fact that Deno doesn't check which IP address a domain name actually resolves to using DNS resolution. So if you explicitly deny '1.1.1.1', and the script you're running fetches from a domain with an A record pointing to '1.1.1.1', Deno will allow it.
In practice, I usually use allow lists rather than deny lists, because I very rarely have an exhaustive list on hand of every IP address or domain I'm expecting a rogue script to attempt to access.
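To make the bypass concrete, a sketch of what the earlier complaint describes (one.one.one.one is Cloudflare's hostname for 1.1.1.1; treat the behaviour as the claim being illustrated, not something verified against every Deno version):

```ts
// deny-bypass.ts
// deno run --allow-net --deny-net=1.1.1.1 deny-bypass.ts

// Blocked: the literal IP is on the deny list.
try {
  await fetch("http://1.1.1.1/");
} catch (err) {
  console.log("direct IP blocked:", (err as Error).message);
}

// Per the complaint above, the deny list is matched against the string
// "one.one.one.one", not the address it resolves to, so this goes through
// even though the connection ends up at 1.1.1.1.
const res = await fetch("https://one.one.one.one/");
console.log("domain resolving to the denied IP:", res.status);
```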
Yeah, that was my point, default deny vs default allow.
If you can default deny, then you're good. It's kind of a junior sysadmin mistake, otherwise, I would say.
Has any attack like this been ever seen in the wild? Not saying it's impossible – but I'm just curious if this vector was ever successfully exploited.
I'm sure there are cases where the website CMS was hacked and then malware served instead of the normal install script. The `curl | sh` approach has been around forever.
And depending on what "interesting" IP address you are coming from, NSA/Microsoft/Apple will MITM your npm install / windows update / ios update accordingly.
Same in the Linux ecosystem: if you look at the maintainers of popular distributions, some of them had .ru / .cn email addresses before switching to more official email addresses using the project domain - IMO this change of email addresses happened due to public pressure on Russia after the Ukraine invasion. With access to the main package signing keys for a Linux distribution, you can serve special packages from your Linux package mirror to interesting targets.
All of these scenarios are extremely hard to prove after the fact and the parties involved are not the type of people who do public writeups.
If the website CMS is hacked, they can just swap the installable binary for one that's hacked, too.
That’s why downloading and then executing is preferable — as the GP pointed out, you or your machine’s antivirus can have an opportunity to inspect the file prior to execution, whereas that is not an option when the bytes are streamed directly to the interpreter.
It would be great if curl could take a file integrity hash value as a command-line argument.
I'd like to practice verifying file integrity, instead of running `curl | sh`. I see that sha256sum (or 512) is the standard command people use.
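You can do roughly that by hand today; here's a sketch in Deno itself (URL and expected hash are placeholders; you'd take the real hash from a second channel such as the release notes):

```ts
// verify-download.ts
// deno run --allow-net=example.com --allow-write=install.sh verify-download.ts
const url = "https://example.com/install.sh"; // placeholder
const expected = "0123abcd..."; // placeholder, published SHA-256 from another channel

const bytes = new Uint8Array(await (await fetch(url)).arrayBuffer());
const digest = await crypto.subtle.digest("SHA-256", bytes);
const actual = Array.from(new Uint8Array(digest))
  .map((b) => b.toString(16).padStart(2, "0"))
  .join("");

if (actual !== expected) throw new Error(`hash mismatch: got ${actual}`);
await Deno.writeFile("install.sh", bytes);
console.log("hash ok; review install.sh before running it");
```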
But if the server is compromised, the malicious actor would likely be able to serve a matching hash to their file?
> ask their users to do the `curl mywebsite.com/foo.sh | sh` thing.
Because it's easier than maintaining packages across 10+ package managers. And in case of Linux it might not require sudo to install something.
How is it more or less good practice than running any untrusted binary on your system? The only real concern would be the script download breaking midway so that it becomes a "dangerous script", because e.g. a `rm -rf /some/path` turns into a `rm -rf /`. But other than that, it's just the same as downloading any executable to your laptop and running it... any attack you described on the shell download would work with any other binary, which users routinely download and run.
All the attacks you described also apply to downloading and executing a file. I don't think `curl | sh` is worse in this regard.
With a downloaded file, your antivirus will run automated checks on it, you can calculate a hash and compare the value with others who also downloaded the file, and you will notice if the file changes after you execute it.
If you download it first, you can at least eyeball what's been downloaded to check it doesn't start by installing a bitcoin miner.
How often do people do that when they install a package from npm, pypi, or other package repository? In practice never.
Deno is a JS runtime (written in Rust) on the V8 engine.
What’s horrible about V8?
Hobby-horse trolling detected.
It is V8.