Talking about foundational software but no mention of the biggest missing part of rust IMO: an ABI.
If you want to write an OS in rust and provide rich services for applications to use, you need to offer libraries they can call without needing to recompile when the OS is upgraded. Windows mostly does this with COM, Apple historically had ObjC’s dynamic dispatch (and now Swift’s ABI), Android does this with a JVM and bytecode… Rust can only really offer extern "C", and that really limits how useful dynamic libraries can be.
Doing an ABI without a VM-like layer (JVM, .NET) is really difficult though, and requires you to commit to certain implementation details without ever changing them, so I can understand why it’s not a priority. To my knowledge the only success stories are Swift (which faced the problem head-on) and COM (which has a language-independent IDL and other pain points.) ObjC has such extreme late binding that it almost feels like cheating.
If rust had a full-featured ABI it would be nearly the perfect language. (It would improve compile times as well, because we’d move to a world where dependencies could just be binaries, which would cut compile times massively.)
I don't think Rust ever wants to do ABI stability that isn't opt-in, because Rust is about zero-cost abstractions and a stable ABI is not zero-cost. See this explanation of how C++'s de facto stable ABI significantly reduces performance: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p20... [PDF]
For the use case of letting Rust programs dlopen plugins that are also written in Rust, the current solutions are stabby and abi_stable. They're not perfect but they seem to work fairly well in practice.
For more general use cases like cross-language interop, the hope is to get somewhere on crABI: https://github.com/joshtriplett/rfcs/blob/crabi-v1/text/3470... This is intended to be a superior successor to C's ABI, useful for all the same use cases (at least if the code you want to talk to is new enough to support it). Note that this isn't something the Rust maintainers can do unilaterally; there's a need to get buy-in from maintainers of other languages, Linux distros, etc.
I'm not sure that the "everything is a shared library" approach is workable in a language where heap allocation is generally explicit and opt-in. While Swift has some support for stack allocation, most types are heap-allocated, and I think that reduces the extent to which the rest of the language has to be warped to accommodate the stable ABI.
We are more or less trying to solve this exact problem with Wasm Components [1]. If core WebAssembly is a portable instruction format, WebAssembly Components are a portable executable/linkable format.
Using them is unfortunately not yet quite as easy as using repr(wasm)/extern "wasm". But with wit-bindgen [2] and the wasm32-wasip2 target [3], it's not that hard either.
Not sure if this is really needed though. I mean, it would be more convenient to be able to pass more "things" around interfaces (slices, trait objects, ...), but you can still do everything you need to with an explicit extern "C" abi.
(And I've seen proposals to extend the extern ABI to more things.)
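To make the trade-off concrete, here's a minimal sketch of what the `extern "C"` boundary looks like in Rust today: only `#[repr(C)]` types can safely cross it, so richer "things" like slices have to be flattened into pointer/length pairs by hand. The names (`ByteSlice`, `byte_sum`) are illustrative, not from any real plugin API.

```rust
// Only `#[repr(C)]` types have a layout that's stable across the boundary;
// a Rust `&[u8]` must be flattened into a pointer/length pair by hand.
#[repr(C)]
pub struct ByteSlice {
    ptr: *const u8,
    len: usize,
}

// The kind of entry point a plugin would export (and also mark #[no_mangle]).
pub extern "C" fn byte_sum(s: ByteSlice) -> u64 {
    // SAFETY: the caller promises `ptr` points to `len` valid bytes.
    let bytes = unsafe { std::slice::from_raw_parts(s.ptr, s.len) };
    bytes.iter().map(|&b| b as u64).sum()
}

fn main() {
    let data = [1u8, 2, 3];
    let s = ByteSlice { ptr: data.as_ptr(), len: data.len() };
    assert_eq!(byte_sum(s), 6);
}
```

This works, but everything interesting (ownership, lifetimes, trait objects) is erased at the boundary and has to be re-encoded manually, which is the limitation the parent comments are discussing.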
Would you want to define that boundary in a way that still maintains compatibility with C? Or would all other languages be expected to link in a Rust library to then do the "extern C" conversion from a Rust-native type to something that is FFI-stable? Swift can do this because Apple will tell you to use Swift or GTFO. Windows has COM. Rust is an orphan without a Primary Platform™ pushing for it. Even if the Rust language team came up with a technical solution, be it an IDL or a stable ABI with bend-over-backwards run-time linking magic, it would only be useful for other Rust binaries, or it would be just as limited as the lingua franca of ABIs: C.
I've seen talks on this topic at Rust conferences which seemed strongly influenced by Swift's approach, so that will probably be the direction this ends up going in.
It wouldn’t, there would have to be dynamic dispatch. Swift provides some prior art for “monomorphization in the same binary, dynamic dispatch across linker boundaries” as a general approach, and it works pretty well.
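A rough sketch of that split in today's Rust terms: generics monomorphize within a compilation unit, while `dyn Trait` gives the fixed-layout, vtable-based calls you'd want across a stable boundary. The names here are illustrative, not Swift's or any proposed Rust ABI's actual mechanism.

```rust
trait Render {
    fn render(&self) -> String;
}

struct Point { x: i32, y: i32 }

impl Render for Point {
    fn render(&self) -> String {
        format!("({}, {})", self.x, self.y)
    }
}

// Inside one binary: monomorphized per concrete type, no indirection.
fn render_static<T: Render>(v: &T) -> String {
    v.render()
}

// Across a boundary: one fixed signature, one vtable indirection,
// regardless of the concrete type on the other side.
fn render_dynamic(v: &dyn Render) -> String {
    v.render()
}

fn main() {
    let p = Point { x: 1, y: 2 };
    assert_eq!(render_static(&p), render_dynamic(&p));
}
```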
I don't really understand the point of writing an "OS in rust and provide rich services for applications to use".
You can use any serialisation method you want (C structs, Protobuf, JSON, ...).
An ABI is for passing data in the programming language and trusting it blindly, e.g. what functions use when calling each other.
Any sane OS still has to validate the data that crosses its boundaries. For example, if you make a Linux system call, it doesn't blindly use a pointer you pass it, but instead actually checks if the pointer belongs to the calling process's address space.
It seems pointless to pick Rust's internal function-calling data serialisation for the OS boundary; that wouldn't make it easier for any userspace programs running in that OS than explicitly defining a serialisation format.
Why would an OS want to limit itself to Rust's internal representation, or why would Rust want to freeze its internal representation, making improvements impossible on both sides? This seems to bring only drawbacks.
The only benefit I can see so far for a stable ABI is dynamic linking (and its use in "plugins" for programs where the code is equally trusted, e.g. writing some dlopen()able `.so` files in Rust, as plugins for Rust-only programs), where you can really "blindly call the symbol". But even there it's a bit questionable, as any mistake or mismatch in ABI can result in immediate memory corruption, which is something that Rust users rather dislike.
I really like Rust but there are some quite frustrating core paper cuts that I wish would get more attention:
1. Self-referencing structs. Especially where you want to have something like a source file and the parsed AST in the same struct. You can't easily do that at the moment. It would be nice if there was something like an offset reference that made it work. Or something else...
2. The orphan rule. I get it, but it's still annoying. We can do better than newtype wrappers (which sometimes have to be nested 2 or 3 levels deep!).
3. The fact that for reasonable compile time you need to split projects into lots of small crates. Again, I understand the reasons, but the result sucks and we can definitely do better. As I understand it this is because crates are compiled as one compilation unit, and they have to be because circular dependencies are allowed. While that is true, I expect most code doesn't actually have circular dependencies so why not make those opt-in? Or even automatically split items/files within a crate into separate compilation units based on the dependency graph?
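To illustrate paper cut 1: since a struct can't hold references into its own fields, the usual workaround today is to store byte-range spans into the owned source and resolve them on demand. This is a hedged sketch with hypothetical names (`Parsed`, `token`), not a recommendation over crates like ouroboros.

```rust
use std::ops::Range;

struct Parsed {
    source: String,
    // Each "token" is a span into `source`, not a `&str` borrow of it —
    // storing `&str` here would make the struct self-referential.
    tokens: Vec<Range<usize>>,
}

impl Parsed {
    fn new(source: String) -> Self {
        // Compute spans while `source` is still borrowed, then move it in.
        let tokens = source
            .split_whitespace()
            .map(|w| {
                let start = w.as_ptr() as usize - source.as_ptr() as usize;
                start..start + w.len()
            })
            .collect();
        Parsed { source, tokens }
    }

    fn token(&self, i: usize) -> &str {
        &self.source[self.tokens[i].clone()]
    }
}

fn main() {
    let p = Parsed::new("let x = 1".to_string());
    assert_eq!(p.token(1), "x");
}
```

It works, but every "reference" now costs an index lookup and the type system no longer ties the spans to the source they index into, which is exactly why people keep asking for first-class support.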
There's probably more; this is just what I can remember off the top of my head.
Hopefully that's constructive criticism. Rust is still my favourite programming language by far.
> As I understand it this is because crates are compiled as one compilation unit, and they have to be because circular dependencies are allowed
Rust allows circular dependencies between modules within a single compilation unit, but crates can't circularly depend on each other.
> I expect most code doesn't actually have circular dependencies
Not true, most code does have benign circular dependencies, though it's not common to realize this. For example, consider any module that contains a `test` submodule, that's a circular dependency, because the `test` submodule imports items from the parent (it has to, because it wants to test them), but also the parent can refer to the submodule, because that's how modules and submodules fundamentally work. To eliminate the circular dependency you would need to have all test functions within every compilation unit defined within the root of the crate (not in a submodule off of the root; literally defined in the root namespace itself).
Generally the way that you would implement a restriction against circular dependencies is to prevent children from importing from their parents, full stop. The more elaborate analysis that you propose would work, technically, but it would only work for tests (if items can't be reached from the crate root, that's another way of saying that the code in question is dead and unreachable; tests only get around this because the test runner is a weird alternative entry point). You'd still be unable to, for example, have a submodule that defines a trait, and then a sibling submodule that defines a type that imports that trait; you'd need to move either the trait or the type to a shared root. To illustrate, this means that anything in the stdlib that implements std::iter::Iterator would need to be defined in a submodule of std::iter (or vice versa). And then repeat that process for every other trait: std::convert::From, std::ops::Drop, std::io::Read...
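The test-module circularity described above can be shown in miniature (hypothetical names): the child module imports from its parent via `super`, while the parent is free to refer back into the child.

```rust
mod math {
    pub fn double(x: i32) -> i32 {
        x * 2
    }

    // Child imports from parent (`super`)...
    pub mod checks {
        pub fn double_is_even(x: i32) -> bool {
            super::double(x) % 2 == 0
        }
    }

    // ...while the parent can freely refer back to the child.
    pub fn verified_double(x: i32) -> i32 {
        assert!(checks::double_is_even(x));
        double(x)
    }
}

fn main() {
    assert_eq!(math::verified_double(3), 6);
}
```

Replace `checks` with a `#[cfg(test)] mod tests` using `use super::*;` and you have the ubiquitous pattern the comment describes — a benign cycle that a naive "no circular dependencies" rule would forbid.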
Would add 4: partial self borrows (the ability for methods to borrow only part of their struct).
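A minimal sketch of that paper cut and today's workaround (names are illustrative): a method borrows all of `self`, so two field accesses that are actually disjoint get rejected unless you destructure so the compiler can see the split.

```rust
struct Editor {
    buffer: String,
    log: Vec<String>,
}

impl Editor {
    fn append(&mut self, text: &str) {
        // Workaround: destructure so the compiler sees two disjoint
        // borrows. Calling a helper method like `self.record(...)` while
        // holding `&mut self.buffer` would borrow all of `self` and fail.
        let Editor { buffer, log } = self;
        buffer.push_str(text);
        log.push(format!("appended {} bytes", text.len()));
    }
}

fn main() {
    let mut e = Editor { buffer: String::new(), log: vec![] };
    e.append("hi");
    assert_eq!(e.buffer, "hi");
    assert_eq!(e.log.len(), 1);
}
```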
For 3, I think the low hanging fruit is probably better support for just using multiple crates (support for publishing them as one package for example).
Or let them be ignored within the root crate when building a binary, since the main concern with orphans is that libraries deep in your dependency tree become accidentally incompatible, which isn't a problem if orphans arise in the root of a binary crate, since by definition you control its source.
Huge agree with the orphan rule. We should be able to disable this in application crates, or do away with it when we can prove certain hygiene, like proc macros.
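For readers following along, the newtype workaround being discussed looks like this: both `Display` and `Vec<u8>` are foreign, so the orphan rule forbids `impl Display for Vec<u8>` and you must route through a local wrapper (the `Hex` name is illustrative).

```rust
use std::fmt;

// Local wrapper type: the orphan rule is satisfied because `Hex` is ours,
// even though both `Display` and `Vec<u8>` are defined elsewhere.
struct Hex(Vec<u8>);

impl fmt::Display for Hex {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for b in &self.0 {
            write!(f, "{:02x}", b)?;
        }
        Ok(())
    }
}

fn main() {
    let h = Hex(vec![0xde, 0xad]);
    assert_eq!(h.to_string(), "dead");
}
```

The annoyance is that every API taking a `Vec<u8>` now needs `.0` unwrapping, and wrappers of wrappers compound quickly — which is the "2 or 3 levels deep" complaint above.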
The emphasis on cross-language compatibility may be misplaced. You gain complexity and lose safety. If we have to do that, it would help if it were bottom-up, replacing libc, for example.
Go doesn't do cross-language calling much.
Google paid people to build the foundations, the low-level libraries right. Rust tends toward many versions of the basics, most flawed.
One of the fundamental problems in computing for about two decades now has been getting programs on the same machine to efficiently talk to each other. Mostly, people have hacked on the Microsoft .DLL concept, giving them state. Object request brokers, like CORBA, never caught on.
Nor did good RPC, like QNX. So people are still trying to link stuff together that doesn't really want to be linked.
Or you can do everything with sockets, even on the same machine. They're the wrong abstraction, raw byte streams, but they're available.
My read, which is based on context from elsewhere but could be wrong, is that the post is not really talking about creating a replacement for traditional shared libraries that works across language boundaries (like COM or Wasm components). Rather, "other languages" here is a euphemism for C++ in particular, and the goal is to enable incremental migration of brownfield C++ codebases to Rust. The goal of pervasive memory safety everywhere, especially in the kinds of contexts where C++ has traditionally been used, requires some kind of answer to "what about all the existing programs?", and the current interop between Rust and C++ is not good enough to be a satisfying answer. If it gets improved to the point where the two languages can interoperate smoothly at the source level, then I think the possibility of Rust displacing C++ for most use cases becomes something like realistic.
Sometimes I think sockets with a spec for what's on the "wire" is about as good an abstraction as you can get for arbitrary cross-language calling. If you could have your perfect abstraction for cross-language calling what would it be?
Not sure, but it should be message-oriented, rather than stream-oriented. You have to put a framing protocol on top before you can do anything else. Then you have to check that framing is in sync and have some recovery.
I'm currently struggling with the connection between Apache mod_fcgid (retro) and a Rust program (modern). Apache launches FCGI programs as subprocesses, with stdin and stdout connected to the parent via either pipes or UNIX local sockets. There's a binary framing protocol, an 8-byte header with a length. You can't transmit arbitrarily large messages; those have to be "chunked". There's a protocol for that. You can have multiple transactions in progress. The parent can make out of band queries of the child. There's a risk of deadlock if you write too much and fill the pipe when the other end is also writing. All that plumbing is specific to this application.
(Current problem: Rust std::io appears to not like stdin being a UNIX socket. Trying to fix that.)
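The 8-byte record header mentioned above takes only a few lines to parse; this sketch follows the field layout in the FastCGI spec (big-endian request id and content length), with hypothetical struct/function names.

```rust
#[derive(Debug, PartialEq)]
struct FcgiHeader {
    version: u8,
    record_type: u8,
    request_id: u16,
    content_length: u16,
    padding_length: u8,
}

// Per the FastCGI spec: version, type, requestId (BE), contentLength (BE),
// paddingLength, reserved.
fn parse_header(b: &[u8; 8]) -> FcgiHeader {
    FcgiHeader {
        version: b[0],
        record_type: b[1],
        request_id: u16::from_be_bytes([b[2], b[3]]),
        content_length: u16::from_be_bytes([b[4], b[5]]),
        padding_length: b[6],
        // b[7] is reserved.
    }
}

fn main() {
    // FCGI_BEGIN_REQUEST (type 1) for request 1 with an 8-byte body.
    let raw = [1u8, 1, 0, 1, 0, 8, 0, 0];
    let h = parse_header(&raw);
    assert_eq!(h.request_id, 1);
    assert_eq!(h.content_length, 8);
}
```

The header itself is the easy part; as the comment says, the chunking, multiplexing, and backpressure handling around it are where the real plumbing lives.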
You could definitely do JSON / msgpack and have like 5 C API functions like, read, write, wake_on_readable, and it wouldn't be the worst thing, and it wouldn't incur any IPC overhead.
> You have to avoid abstractions and conveniences and minimizing allocations so as not to trigger the garbage collector.
That's not really how modern GCs work, and not how abstractions work when you have a good JIT. The latency impact of modern GCs is now often effectively zero (there are zero objects processed in a stop-the-world pause, and the overall CPU utilisation of the GC is a function of the ratio between the size of the resident set and the size of the heap) and a JIT can see optimisation opportunities more easily and exploit them more aggressively than an AOT compiler (thanks to speculative optimisations). The real cost is in startup/warmup time and memory overhead, as well as optimising amortised performance rather than worst-case performance. Furthermore, how much those tradeoffs actually cost can be a very different matter from what they are (e.g. 3x higher RAM footprint may translate to zero additional cost, and doing manual memory management may actually cost more), as brilliantly explored in this recent ISMM talk: https://youtu.be/mLNFVNXbw7I
> C++’s innovations in zero-cost abstractions
I think that the notion of zero-cost abstractions - i.e. the illusion of "high" [1] abstraction when reading the code with the experience of "low" abstraction when evolving it - is outdated and dates from an era (the 1980s and early '90s) when C++ believed it could be both a great high-level and a great low-level language. Since then, there's generally been a growing separation rather than unification of the two domains. The portion of software that needs to be very "elaborate" (and possibly benefit from zero-cost abstractions) and still low-level has been shrinking, and the trend, I think, is not showing any sign of reversing.
[1]: By high/low abstraction I mean the number of subroutine implementations (i.e. those that perform the same computational function) that could be done with no impact at all on the caller. High abstraction means that local changes are less likely to require changing remote code and vice-versa, and so may have an impact on maintenance/evolution costs.
Any optimisation transforms the program in some way that has to preserve its meaning. Generally, to do that, AOT compilers need to prove that the transformation is always correct; that can be difficult for some deep optimisations. OTOH, JITs can assume that some transformation is correct, aggressively apply it, and if it turns out they're wrong, the code will trigger some signal that will have the runtime deoptimise (some modern AOT compilers have some speculative and deoptimisation capabilities, but not as general as JITs').
Java has an excellent GC, but a horrible runtime. .NET is probably the best GC integrated into a language with decent memory-layout ability. If all you want is the GC without a language attached, LXR is probably the most interesting. It's part of MMTk, which is a Rust library for memory allocation and GC that Java, Ruby, and Julia are all in the process of adding options for.
I think this will be very successful. The Rust community seems very hell-bent on achieving success at all costs, and this includes making every piece of software permissively licensed while a lot of foundational software is still copyleft. This will likely trigger Gresham's law soon. The fact that the community keeps pushing the security argument, which is very difficult to measure, is also very powerful, whether security is a legitimate concern or it's a "think of the children" situation.
Certain portions of the Rust community may default to MIT/Apache out of apathy, but any aversion to GPL isn't a Rust thing, it's just the usual sniping between the general FSF crowd and the general OSI crowd that has been ongoing for decades. By all means, make your Rust crate GPL, nobody's going to stop you.
The currently flagged comment and the discussion below it actually do have a salient point.
Comment: How about get an actual published language standard? How about get more implementations of it?
In the discussion, @infogulch: If you are aiming to be the foundation of an entire industry it doesn't seem unreasonable to ask for some specifications. https://news.ycombinator.com/item?id=44926375
I think it’s flagged (although I don’t have an opinion on whether it should be) because it’s not clear at all whether it’s actually a reasonable thing to expect: plenty of industries are built on programming languages that don’t have formal specifications, much less natural language ones.
To argue by example: Ruby has an ISO standard, but that standard is for a very old version of the language. Python doesn’t have an independent standard at all; it’s like Rust in that the reference implementation is itself the standard. But nobody is champing at the bit to replace their Ruby or Python stack for standards reasons.
Who said they are? The GP's language is "foundation of an entire industry"; I would argue that you and I could both find plenty of industries that directly attribute their foundations to those languages.
In what way are Ruby and Python foundations for entire industries? As far as I can see, these two are just surface-level languages which mostly use C/C++ under the hood.
Rails and Django alone account for a disproportionate number of current software service companies. That’s just what immediately comes to mind.
“It’s X under the hood” is true, but only in the same sense that “computer programs are made of coffee” is true. You can’t easily replace a Ruby stack with a Python one just because both have a reference C implementation; that’s the entire point of having high level languages with rich abstractions.
This is an ambitious and difficult project that is a work in progress. They’re continuing to work hard at it. There were 2 PRs merged to master less than 2 hours ago when I wrote this comment. On a Saturday, that’s commitment right there.
The plan described in "Our Vision for the Rust Specification", and the linked RFC3355, were abandoned early in 2024.
The team that was formed to carry out RFC3355 still exists, though it's had an almost complete turnover of membership.
They're currently engaged in repeating the process of deciding what they think a spec ought to be like, and how it might be written, from scratch, as if the discussion and decisions from 2023 had never happened.
The tracking issue for the RFC was https://github.com/rust-lang/rust/issues/113527 . You can see the last update was in 2023, at the point where it was time to start the actual "write the spec" part.
That's when it all fell apart, because most of the people involved were, and are, much more interested in having their say on the subject of what the spec should and shouldn't be like than they are in doing the work to write it.
Your comment suggests there is no progress being made on the spec. The activity on this repo suggests the opposite - https://github.com/rust-lang/reference
There's still work being done on the Reference, which is being maintained in the same way as it has been since the Rust documentation team was disbanded in 2020. But it's a very long way from being either complete or correct enough to reliably answer questions about Rust-the-language.
After the "write a new spec" plan was abandoned in 2024 the next stated plan was to turn the Reference into the spec, though they never got as far as deciding how that was going to work. That plan lasted six months before they changed direction again. The only practical effect was to add the paragraph identifiers like [destructors.scope.nesting.function-body].
They're currently arguing about whether to write something from scratch or somehow convert the FLS[1] into a spec.
I hope this is clear: the published language standard the previous poster was asking about does not currently exist.
You linked to efforts to create a specification, but they aren't done or close to done.
It's also not clear the goal is to create a standard. E.g., according to the spec-vision document, the spec isn't planned to be authoritative (the implementation is). If that doesn't change the spec can't really function as a standard. (A standard would allow independent implementations and independent validation of an implementation. A non-authoritative spec doesn't.)
Gccrs is an effort to create an alternate compatible implementation. It hasn't yet created a generally usable compiler.
I'm sure they would be among the most interested in a spec. They are currently forced to use the implementation (continuously evolving) as the spec. That makes their job considerably more difficult.
Because I’m a cynical person, I view LLMs like I’ve viewed rust for years: a neat tool that makes a small subset of people happy and a touch more productive.
The evangelical defense of either topic is very off-putting, to say the least.
@infogulch really nailed it with the comment about how the rust community wants influence/power, and ignores the responsibility that comes with said influence/power.
Python still has no standard, and is openly antagonistic towards refactoring its standard library so that the language and the library aren't shipped as a monolith.
The most difficult part for alternative Python implementations is supporting the same stdlib.
Why does a language in which you write operating systems need a spec or a second compiler?
You have a compiler, that’s MIT licensed. It will be MIT licensed in perpetuity. It will continue to compile your code forever, with absolutely no changes needed to your code.
How, precisely, does your life change if there is a second compiler or not? Or if there's a spec? And more interestingly, why does it affect only operating systems development and not web server development?
That’s possible in theory, but do you think that’s what’s needed to improve the Rust compiler in practice?
The pace of development is pretty good, to the point where one of the main criticisms of Rust is that it moves too quickly. The stability is pretty good; I can only remember one time when code was broken. Compile times are mediocre, but three ambitious projects hope to tackle that - the parallel frontend, the Cranelift backend, and changing the default linker to a better one (landing next month).
This argument would carry more weight if you could point out something specific that the Rust project isn’t addressing but might address if an alternate implementation existed. Remembering of course, that the second implementation would have to be compatible with the OG implementation.
I don't have gripes with Rust per se, but I do take issue with the contingent of commenters that seem to not understand the value of a formal specification in the construction of formal computational systems in the first place. Basically every computational system lives and dies on its degree of formal specifiability, and having more formal specification is always better unless you are cool with being beholden to the fact that some particular implementation of some program happened to work at some point in time for some input once. That insecure ground is what you're standing on if you don't think formal specifications have any value. In the world of computation, the specification is the system, modulo incidental properties of implementation like performance.
> in the construction of formal computational systems
> Basically every computational system lives and dies on its degree of formal specifiability
How much software development work would you say qualifies as "construction of formal computational systems"? I feel like I have to be thinking of something different than you because to a first approximation I think ~no software has much in the way of formal specification done on it, if any at all.
I feel like there's a bit of black-and-white thinking here as well. It's not as if you pick either "full formal specification with bonus Lean/Coq/etc." or "YOLO" - there's shades of grey in between where you can nail down different bits with more or less specificity/formality/etc. depending on feasibility/priorities/etc. Same with support for/against a formal spec - there's more nuance than "an absolute necessity" or "a waste of time".
> How much software development work would you say qualifies as "construction of formal computational systems"?
All of it. That's a literal definition of what a piece of software is: a formal system for computing things. Whether we realize it or like it or not, that's what we are doing when we build software, and that's why formalization of implementation-independent details is almost always beneficial. Sure, you don't always need to do this formalization in Lean or something, but you should at least have a document that outlines the behaviors of the system, its invariants, and so on. Simply pointing people at a codebase and saying "there it is", or making the present, incidental behavior of an implementation the source of truth, is like building a bridge and answering the question "how do we know it won't collapse under load X" with "idk, put the load on it and see".
I know it sounds a bit extreme, but I actually would stick to my original stance here. If you have an implementation but haven't worked out its design formally then you probably have a buggy implementation.
An aversion to, and underappreciation of, formalism is far more endemic in contemporary software development than a penchant for it. Arguably, if people in the industry took the title of "engineer" a little more seriously, we'd have better systems.
Civil engineers work out formal specifications for bridges even when they are working on a tiny foot bridge in a park, why? because that's basically the whole job. They don't throw their hands in the air and just say "a formal spec ain't worth it, just build the bridge intuitively"—plenty of bridges are built this way, but we don't call those builders engineers, and for good reason.
I completely agree that in reality, we often cannot achieve the ideal precisely for the reasons you described. Constraints may make it infeasible. But for the particular case of Rust, I'd argue that that isn't the case, and my point is more so that those entertaining the idea that formal specification doesn't even have value might be doing something with computers, but it isn't engineering.
People build stuff informally all the time, and that stuff very often works fine. Rich Hickey digs into this in some detail. It's not that types or formal methods or rigorous specs or automated tests (or whatever else - pick your formalism) have no value, but they always have a cost as well, and there are many, many existence proofs of systems being built without them. To me, that seems pretty compelling that formalisms in computing should be "a la carte".
I think my biggest objection to mandating formalism (in the abstract - I do find value in it in some situations) in computing is how little we know about computing compared to what we know about aviation or bridge building. Those are mature industries; computing is unsettled and changing rapidly. Even the formalisms we do have in computing are comparatively weak.
> I think my biggest objection to mandating formalism (in the abstract - I do find value in it in some situations) in computing is how little we know about computing compared to what we know about aviation or bridge building.
I don't think that we should mandate formalism. I'm just trying to say that diminishing the value of formalism is bad for the industry as a whole.
And point taken about maturity, but in that sense if we don't encourage people to actually engage in defining specification and formalizing their software we won't ever get to the same point of maturity as actual engineering in the first place. We need to encourage people to explore these aspects of the discipline so that we can actually set software on better foundations, not discourage them by going around questioning the inherent value of formalism.
Rust is a particularly good example because, as other commenters have pointed out, if we believe it's a waste of time to formalize the language we purportedly want everyone to use to build foundational software, what exactly would we formalize then? If you aren't going to formalize that because it "isn't worth it", well, arguably, nothing is worth formalizing then if the core dependency itself isn't even rigorously defined.
People also forget the other benefits of having a formal spec for a language. Yes it enables alternative implementations, but it also enables people to build tons of other things in a verifiably correct way, such as static analysis checks and code generators.
I do find it curious that the people who write comments demanding a spec and alternate implementations aren’t aware of the work-in-progress spec and alternate implementations.
They are aware. They are also aware that it's been going nowhere for quite some time. Plus, it's still viewed as non-authoritative. A non-authoritative specification is basically worthless. Actually, that might be why it's going nowhere.
I saw that. To which I say, git appears to disagree. If you look at the contributor graph for the reference (https://github.com/rust-lang/reference/graphs/contributors) you’ll find 327 contributions in 2024 H2 and 322 contributions in 2025 H1. That’s the highest and second highest rate of contribution to the repo by some distance.
People see what they want to see. They complain no matter what. Remember this thread started with “there’s no spec”, then “the spec work has been abandoned”, then “the spec work isn’t going fast enough”.
When they’re moving the goalposts this fast it’s hard to keep up. Especially because, like my other comment says, it’s not clear what they’d actually use the spec for.
The primary one would be the fact that Rust has no standard of any form and all builds are static, which means you are effectively required to use cargo to distribute software.
The TOML configuration format is weak and it's easy for things like namespaces and features and optional dependencies to get severely complex rather quickly to the point that it's easy to introduce subtle bugs or create a fragile configuration.
The requirement to use semver, which may work for a lot of projects, but is not necessarily the best when dealing with software that deals with regulatory compliance issues. The version number is sometimes a product of multiple considerations. Unless you want to publish a page that helps users understand how versions map onto regulatory calendars.
The fact that "semver tricks" even exist.
The cache is poorly designed and what you get by default lacks several important features that require third party crates to solve. Even basic things like safely clearing the cache or sharing the cache across workspaces. Last I looked there were some unstable cargo features which spoke to this but weren't finished yet.
The way resolution works. The fact that multiple different versions of the same crate can be pulled into your build without any warning during build. You have to stay on top of that yourself and given the depth of some dependency trees I find that this gets skipped entirely in a lot of released software.
Registries outside of crates.io are hard to implement and have several specific challenges that limit the utility of even using this feature.
This is just cargo. The entire language and ecosystem simply aren't ready for primetime. It works for distributing software in limited circumstances but is overall of very low quality when compared to other build systems and user expectations surrounding their distributions and how software is installed into them.
Which are all solvable problems. Unfortunately the Rust community generally seems to see itself as beyond these problems and does not dedicate any significant effort into solving them. Which is fine, but, also means saying you want to push Rust to be "foundational" is an absurd statement to me. Hence "cart always before the horse."
Finally, as demonstrated here, the Rust community simply cannot handle any criticism. Which is probably why they ignore the "foundational issues" in their language. It's become an echo chamber of mostly bad ideas. You don't have a systems language in Rust, you have a niche application language, which is simply not worth the effort to use in my opinion.
Which is an honest opinion. Flag me and downvote me if you want, although I don't see how what I did here was particularly rude or disruptive, just opinionated. I had the "wrong" opinions, I guess. So I will not waste further time contributing to this discussion, and the Rustaceans can go back to ignoring all outside influence while existing in their niche. Just don't ask yourself "why don't more people use Rust?" We keep trying to tell you. You said you want a "foundational" language. This is part of that.
> Unless you want to publish a page that helps users understand how versions map onto regulatory calendars.
Why would a changelog/release log with dates not work?
> The fact that "semver tricks" even exist.
How would you propose to solve the problem semver-trick solves, then?
> The way resolution works. The fact that multiple different versions of the same crate can be pulled into your build without any warning during build.
What would your preferred solution be if you/your dependencies depend on incompatible versions of a library?
> It works for distributing software in limited circumstances but is overall of very low quality when compared to other build systems and user expectations surrounding their distributions and how software is installed into them.
What are some better build systems you think Cargo devs might want to learn from?
> Finally, as demonstrated here, the Rust community simply cannot handle any criticism.
I suspect your comment might have done better if not for the last two sentences. At least from what I've seen Rust criticism on HN does just fine, especially if explained thoroughly. Insults fare somewhat less well.
Some of your points are somewhat valid but none of this is that big a deal. With most Rust projects cargo build just works out of the box. The uniformity of experience is very high value.
> It works for distributing software in limited circumstances
I don't know what that means.
> You don't have a systems language in Rust, you have a niche application language
Okay.
> which is simply not worth the effort to use in my opinion.
You're welcome to hold that belief. Meanwhile I'll continue to build systems software in Rust that delivers real value to real users in ways that are impracticable or impossible in any other language.
> Rust is a language for people who want to be _seen as programmers_. Not people who _actually love programming_.
Nice gatekeeping. I happen to actually love programming and I pick up languages quickly. Rust has been a fresh breath of air and I can't ever see myself going back to the horror show that is the masochism of C++.
> How about get an actual published language standard?
But also a language standard means fuck all to me. C and C++ keep generating them and the languages don't seem to be getting meaningfully better.
> How about get more implementations of it?
Why is this important? Python has one central implementation that everyone relies on. So does Java. C++ doesn’t and is plagued by every project either trying to support N platforms across M compilers or just pick clang everywhere (but still be stuck with having unnecessarily complicated platform specific code).
> How about fix the utter mess that is cargo?
What’s the utter mess? I’ve been using it for a while and haven’t observed anything obscenely problematic. Significantly easier than anything in C++ for sure (maybe Conan but I’ve not seen anyone actually using that in the wild)
> How about get an actual published language standard? How about get more implementations of it? How about fix the utter mess that is cargo?
I think these arguably also warrant a "is this putting the cart before the horse?" analysis. I think the value of all of those are pretty debatable (especially in different fields), and I don't think it's at all obvious that Rust would have done any better had it devoted more energy to those earlier in its life.
Also, I think that the article is technically compatible with working on those points anyways?
> Let's replace everything else.
I don't think that's what the article is trying to say? "Targeting" here seems to be more in the vein of "usable for", like "target" in "this language targets this use case".
Consider a programming language that self describes as young and growing, for-fun, silly idea taken too far, we'll see where it goes, no particularly lofty goals at the moment, etc; in this case you're not asking anything of anyone else and reasonable people will be happy to let you have your fun.
As they say, with more power comes more responsibility. Targeting foundational software is aiming for more power; complaining that writing and conforming to a spec is annoying is shirking the corresponding responsibility.
If you are aiming to be the foundation of an entire industry it doesn't seem unreasonable to ask for some specifications.
Theoretically correct, but worse is better - consider how many things we could have asked of C or JavaScript before they became standards. Practically, a spec is something to prioritise alongside all the other things we wish for.
I don't think I quite agree with the power/responsibility analogy. To me, "power" the way it's used in the quote implies some kind of control over something/someone else. That's not really the kind of relationship I see between a programming language and the software that uses it - if I write a program in C++ I don't really view the C++ committee as having "power" over my program. In addition, it's not something a spec really does anything to address - you can be plenty irresponsible with your (hypothetical) power with or without a spec.
I'm not sure I'd agree that a programming language has a duty to produce a spec either, whether it's for foundational software or not. Outside of legal/regulatory requirements, I think a hypothetical "perfect" low-level/systems language would be used no matter whether it had a spec or not, simply because it was the best tool for the job. In that sense the language devs' "responsibility" would simply be to make the language as suitable as possible for users' software. Of course, legal/regulatory requirements throw a wrench into that thought experiment, but hopefully the point comes across.
None of that is to say that asking for a spec is unreasonable, especially if you're required to have one to use the language. I'm just more on the skeptical side as to its overall importance.
Cargo is currently the best first party language package manager on the planet. It learned from all of the other systems that came before it.
Cargo could use a few improvements, like namespaces and hermetic, repeatable builds, but it's one of the nicest infrastructure pieces we have in any language.
No other language in the top 20 has anything like Cargo. The language needs to be designed hand in hand with its package manager, so it'd be hard to bolt a Cargo onto other established languages.
Out of curiosity, why do you think it’s the best? I can imagine where it can be better than e.g. Maven, but Rust-centric and Java-centric developer workflows are so different that those things probably don’t matter much.
There is a glaring error in this article. In the first line. Rust is more than 10 years old: 1.0 came out in about 2015, but it had already been around since 2012.
It's been 10 years since Rust 1.0, the first stable version of Rust. It's a perfectly reasonable milestone to use as the birth of Rust as we know it today.
People often cast doubts by asking why Rust needs a spec (spec is not the same as standard), and this proves there is still too little engineering in so-called "software engineering".
Software engineering is not engineering. No need to pretend that it should be. We don't rebuild a bridge 3 times before we get it right, but in SE that's a pretty good approach.
If people like rust for foundational software, cool, I wish they would write something in it and let it compete on its merits, and not try forcing it on people.
This is not how good software is written, it's dogma. Do something people want, don't force it on them. All I see is another dependency. Don't tell me you're funny, tell me a joke.
The only place I see any interest in Rust is HN or the Linux kernel, lol. While some highly vocal people are raging about Rust taking over the world, the rest of the world is moving along without it. Just an observation, but it seems like a storm in a teacup.
But just saying this will likely attract an avalanche of downvotes; it's like the only things you can't talk about online are anything against Rust or the genocide in Palestine.
And a bunch of Android, and various other things that impact your life. Ignoring the elephant in the room just for a second: C++ gradually left more and more performance on the table, and eventually somebody was going to stroll past and take it. That was Rust, and sure, the C++ people are angry: "No, that's ours, we were coming back to get that." Were you, though?
Talking about foundational software but no mention of the biggest missing part of rust IMO: an ABI.
If you want to write an OS in rust and provide rich services for applications to use, you need to offer libraries they can call without needing to recompile when the OS is upgraded. Windows mostly does this with COM, Apple historically had ObjC’s dynamic dispatch (and now Swift’s ABI), Android does this with a JVM and bytecode… Rust can only really offer extern "C", and that really limits how useful dynamic libraries can be.
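To make the limitation concrete, here's a hedged sketch (the `name_len` function is purely illustrative): with `extern "C"`, an exported function can only traffic in C-compatible types, so anything richer than raw pointers and integers has to be flattened by hand.

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

// extern "C" restricts the boundary to C-compatible types. A Rust
// String, Vec<T>, or trait object has no stable layout, so it can't
// cross this boundary directly; errors come back as sentinel integers
// rather than Result.
#[no_mangle]
pub extern "C" fn name_len(name: *const c_char) -> i32 {
    if name.is_null() {
        return -1; // no Option/Result across the boundary
    }
    let s = unsafe { CStr::from_ptr(name) };
    s.to_bytes().len() as i32
}

fn main() {
    let c = std::ffi::CString::new("world").unwrap();
    println!("{}", name_len(c.as_ptr())); // prints 5
}
```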
Doing an ABI without a VM-like layer (JVM, .NET) is really difficult though, and requires you to commit to certain implementation details without ever changing them, so I can understand why it’s not a priority. To my knowledge the only success stories are Swift (which faced the problem head-on) and COM (which has a language-independent IDL and other pain points.) ObjC has such extreme late binding that it almost feels like cheating.
If rust had a full-featured ABI it would be nearly the perfect language. (It would improve compile times as well, because we’d move to a world where dependencies could just be binaries, which would cut compile times massively.)
I don't think Rust ever wants to do ABI stability that isn't opt-in, because Rust is about zero-cost abstractions and a stable ABI is not zero-cost. See this explanation of how C++'s de facto stable ABI significantly reduces performance: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p20... [PDF]
For the use case of letting Rust programs dlopen plugins that are also written in Rust, the current solutions are stabby and abi_stable. They're not perfect but they seem to work fairly well in practice.
For more general use cases like cross-language interop, the hope is to get somewhere on crABI: https://github.com/joshtriplett/rfcs/blob/crabi-v1/text/3470... This is intended to be a superior successor to C's ABI, useful for all the same use cases (at least if the code you want to talk to is new enough to support it). Note that this isn't something the Rust maintainers can do unilaterally; there's a need to get buy-in from maintainers of other languages, Linux distros, etc.
I'm not sure that the "everything is a shared library" approach is workable in a language where heap allocation is generally explicit and opt-in. While Swift has some support for stack allocation, most types are heap-allocated, and I think that reduces the extent to which the rest of the language has to be warped to accommodate the stable ABI.
We are more or less trying to solve this exact problem with Wasm Components [1]. If core WebAssembly is a portable instruction format, WebAssembly Components are a portable executable/linkable format.
Using them is unfortunately not yet quite as easy as using repr(wasm)/extern "wasm". But with wit-bindgen [2] and the wasm32-wasip2 target [3], it's not that hard either.
[1]: https://youtu.be/tAACYA1Mwv4
[2]: https://github.com/bytecodealliance/wit-bindgen
[3]: https://doc.rust-lang.org/nightly/rustc/platform-support/was...
Not sure if this is really needed, though. I mean, it would be more convenient to be able to pass more "things" across interfaces (slices, trait objects, ...), but you can still do everything you need with an explicit extern "C" ABI.
(And I've seen proposals to extend the extern ABI to more things.)
Something slightly more than extern "C" would be nice, such as a definitive way to use Option/Result/enums across the C boundary.
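For what it's worth, a few narrow cases already work today thanks to the guaranteed niche optimization (this sketch shows an existing guarantee, not a proposal; `find_nonzero` is an illustrative name):

```rust
use std::num::NonZeroU32;

// Rust guarantees that Option<NonZeroU32> has the same layout and ABI
// as a plain u32, with None represented as 0, so this particular
// Option can already cross an extern "C" boundary. The same holds for
// Option<&T> and Option<Box<T>> as nullable pointers. Anything richer
// (Option<String>, Result<T, E>) still needs manual #[repr(C)]
// flattening.
#[no_mangle]
pub extern "C" fn find_nonzero(x: u32) -> Option<NonZeroU32> {
    NonZeroU32::new(x)
}

fn main() {
    assert!(find_nonzero(0).is_none());
    assert_eq!(find_nonzero(7).map(|n| n.get()), Some(7));
    println!("ok");
}
```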
Would you want to define that boundary in a way that still maintains compatibility with C? Or would all other languages be expected to link in a Rust library to then do the "extern C" conversion from a rust native type to something that is FFI stable? Swift can do this because Apple will tell you to use Swift or GTFO. Windows has COM. Rust is an orphan without a Primary Platform™ pushing for it, even if the rust language team came up with a technical solution, be it an IDL or a stable ABI with bend-over-backwards run-time linking magic, it would only be useful for other Rust binaries or it would be just as limited as the lingua franca of ABIs - C.
The way C3 [0] handles this is fairly interesting: a function returning an optional in C3 is equivalent to a C function that returns an error code and passes its value through an out parameter.
[0] https://c3-lang.org
I've seen talks on this topic at Rust conferences which seemed strongly influenced by Swift's approach, so that will probably be the direction this ends up going in.
The WASI Component Model also has a richer ABI. I wonder if that could be copied to native platforms somehow.
We already have? It's called C.
Not your main point, but how would monomorphization of generics work with binary dependencies?
It wouldn’t, there would have to be dynamic dispatch. Swift provides some prior art for “monomorphization in the same binary, dynamic dispatch across linker boundaries” as a general approach, and it works pretty well.
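To sketch the distinction (function names here are illustrative, not from any proposal): within one binary the compiler stamps out a specialized copy of a generic per concrete type, while across a linker boundary the Swift-style approach falls back to a trait object and virtual calls.

```rust
// Within one compilation unit generics are monomorphized: the compiler
// emits a specialized copy of `total` for each concrete T it's called
// with.
fn total<T: std::iter::Sum<T> + Copy>(xs: &[T]) -> T {
    xs.iter().copied().sum()
}

// Across a dynamic-library boundary the callee can't be specialized
// for types that don't exist yet, so the boundary would accept a trait
// object instead and pay for dynamic dispatch.
fn total_dyn(xs: &mut dyn Iterator<Item = i64>) -> i64 {
    xs.sum()
}

fn main() {
    let v = [1i64, 2, 3];
    println!("{}", total(&v)); // monomorphized call, prints 6
    println!("{}", total_dyn(&mut v.iter().copied())); // virtual calls, prints 6
}
```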
I don't really understand the point for writing an "OS in rust and provide rich services for applications to use".
You can use any serialisation method you want (C structs, Protobuf, JSON, ...).
An ABI is for passing data in the programming language and trusting it blindly, e.g. what functions use when calling each other.
Any sane OS still has to validate the data that crosses its boundaries. For example, if you make a Linux system call, it doesn't blindly use a pointer you pass it, but instead actually checks if the pointer belongs to the calling process's address space.
It seems pointless to pick Rust's internal function-calling data serialisation for the OS boundary; that wouldn't make it easier for any userspace programs running in that OS than explicitly defining a serialisation format.
Why would an OS want to limit itself to Rust's internal representation, or why would Rust want to freeze its internal representation, making improvements impossible on both sides? This seems to bring only drawbacks.
The only benefit I can see so far for a stable ABI is dynamic linking (and its use in "plugins" for programs where the code is equally trusted, e.g. writing some dlopen()able `.so` files in Rust, as plugins for Rust-only programs), where you can really "blindly call the symbol". But even there it's a bit questionable, as any mistake or mismatch in ABI can result in immediate memory corruption, which is something that Rust users rather dislike.
I really like Rust but there are some quite frustrating core paper cuts that I wish would get more attention:
1. Self-referencing structs. Especially where you want to have something like a source file and the parsed AST in the same struct. You can't easily do that at the moment. It would be nice if there was something like an offset reference that made it work. Or something else...
2. The orphan rule. I get it, but it's still annoying. We can do better than newtype wrappers (which sometimes have to be nested 2 or 3 levels deep!).
3. The fact that for reasonable compile time you need to split projects into lots of small crates. Again, I understand the reasons, but the result sucks and we can definitely do better. As I understand it this is because crates are compiled as one compilation unit, and they have to be because circular dependencies are allowed. While that is true, I expect most code doesn't actually have circular dependencies so why not make those opt-in? Or even automatically split items/files within a crate into separate compilation units based on the dependency graph?
There's probably more; this is just what I can remember off the top of my head.
Hopefully that's constructive criticism. Rust is still my favourite programming language by far.
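To make point 1 concrete, here's one common workaround sketched out (the `Parsed` type is illustrative; crates like ouroboros offer another route): store byte offsets into the owned source rather than `&str` borrows, and resolve them on demand.

```rust
// A self-referencing struct (source + borrowed AST) won't compile
// directly, so instead of storing &str slices of `source`, we store
// (start, end) byte offsets and rebuild the slice when asked.
struct Parsed {
    source: String,
    tokens: Vec<(usize, usize)>, // byte ranges into `source`
}

impl Parsed {
    fn new(source: String) -> Self {
        let tokens = source
            .split_whitespace()
            .map(|w| {
                // offset of this token within the source buffer
                let start = w.as_ptr() as usize - source.as_ptr() as usize;
                (start, start + w.len())
            })
            .collect();
        Parsed { source, tokens }
    }

    fn token(&self, i: usize) -> &str {
        let (s, e) = self.tokens[i];
        &self.source[s..e]
    }
}

fn main() {
    let p = Parsed::new("fn main()".to_string());
    println!("{}", p.token(0)); // prints "fn"
}
```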
> As I understand it this is because crates are compiled as one compilation unit, and they have to be because circular dependencies are allowed
Rust allows circular dependencies between modules within a single compilation unit, but crates can't be circular dependencies.
> I expect most code doesn't actually have circular dependencies
Not true, most code does have benign circular dependencies, though it's not common to realize this. For example, consider any module that contains a `test` submodule, that's a circular dependency, because the `test` submodule imports items from the parent (it has to, because it wants to test them), but also the parent can refer to the submodule, because that's how modules and submodules fundamentally work. To eliminate the circular dependency you would need to have all test functions within every compilation unit defined within the root of the crate (not in a submodule off of the root; literally defined in the root namespace itself).
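The benign circularity described above looks like this in practice (`double` is just a placeholder function): the parent contains the `tests` submodule, while `tests` imports items back from the parent.

```rust
// Parent module item, referenced by the child below.
pub fn double(x: i32) -> i32 {
    x * 2
}

#[cfg(test)]
mod tests {
    // The child depends on the parent's items; meanwhile the parent
    // "contains" the child. Within a single compilation unit Rust
    // allows this circularity between modules.
    use super::*;

    #[test]
    fn doubles() {
        assert_eq!(double(2), 4);
    }
}

fn main() {
    println!("{}", double(21)); // prints 42
}
```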
> Rust allows circular dependencies between modules within a single compilation unit, but crates can't be circular dependencies.
Yes, that's what I was saying.
> that's a circular dependency, because the `test` submodule imports items from the parent
This isn't a circular dependency because the parent doesn't import anything from the test module.
It would mean your compilation unit might have to be a subset of a module, but I think that's fine?
Generally the way that you would implement a restriction against circular dependencies is to prevent children from importing from their parents, full stop. The more elaborate analysis that you propose would work, technically, but it would only work for tests (if items can't be reached from the crate root, that's another way of saying that the code in question is dead and unreachable; tests only get around this because the test runner is a weird alternative entry point). You'd still be unable to, for example, have a submodule that defines a trait, and then a sibling submodule that defines a type that imports that trait; you'd need to move either the trait or the type to a shared root. To illustrate, this means that anything in the stdlib that implements std::iter::Iterator would need to be defined in a submodule of std::iter (or vice versa). And then repeat that process for every other trait: std::convert::From, std::ops::Drop, std::io::Read...
Totally agree with 1 and 2.
Would add 4: partial self borrows (the ability for methods to borrow only part of their struct).
For 3, I think the low hanging fruit is probably better support for just using multiple crates (support for publishing them as one package for example).
Weakening the orphan rule would probably wreck too many crates. It's one of those changes that sounds great on paper but is a nightmare in practice.
> The orphan rule. I get it, but it's still annoying. We can do better than newtype wrappers
By that do you mean that there are better alternatives that Rust could adopt or that we need such alternatives (but they could not exist)?
One option would be to ignore the orphan rule within workspaces.
Or let them be ignored within the root crate when building a binary, since the main concern with orphans is that libraries deep in your dependency tree become accidentally incompatible, which isn't a problem if orphans arise in the root of a binary crate, since by definition you control its source.
I know it sounds simple, but AFAIK it's not actually so easy to implement, see e.g. https://internals.rust-lang.org/t/allow-disabling-orphan-rul...
Huge agree with the orphan rule. We should be able to disable this in application crates, or do away with it when we can prove certain hygiene, like proc macros.
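For readers unfamiliar with the newtype workaround mentioned a few comments up, a minimal sketch (the `Hex` wrapper is illustrative):

```rust
use std::fmt;

// The orphan rule forbids `impl fmt::Display for Vec<u8>` here: both
// the trait and the type are defined in other crates. The standard
// escape hatch is a newtype wrapper, which you then have to thread
// through all your code.
struct Hex(Vec<u8>);

impl fmt::Display for Hex {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for b in &self.0 {
            write!(f, "{:02x}", b)?;
        }
        Ok(())
    }
}

fn main() {
    println!("{}", Hex(vec![0xde, 0xad])); // prints "dead"
}
```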
“Smooth, iterative deepening”
The emphasis on cross-language compatibility may be misplaced. You gain complexity and lose safety. If we have to do that, it would help if it were bottom-up, replacing libc, for example.
Go doesn't do cross-language calling much. Google paid people to build the foundations, the low-level libraries right. Rust tends toward many versions of the basics, most flawed.
One of the fundamental problems in computing for about two decades now has been getting programs on the same machine to efficiently talk to each other. Mostly, people have hacked on the Microsoft .DLL concept, giving them state. Object request brokers, like CORBA, never caught on. Nor did good RPC, like QNX. So people are still trying to link stuff together that doesn't really want to be linked.
Or you can do everything with sockets, even on the same machine. They're the wrong abstraction, raw byte streams, but they're available.
My read, which is based on context from elsewhere but could be wrong, is that the post is not really talking about creating a replacement for traditional shared libraries that works across language boundaries (like COM or Wasm components). Rather, "other languages" here is a euphemism for C++ in particular, and the goal is to enable incremental migration of brownfield C++ codebases to Rust. The goal of pervasive memory safety everywhere, especially in the kinds of contexts where C++ has traditionally been used, requires some kind of answer to "what about all the existing programs?", and the current interop between Rust and C++ is not good enough to be a satisfying answer. If it gets improved to the point where the two languages can interoperate smoothly at the source level, then I think the possibility of Rust displacing C++ for most use cases becomes something like realistic.
Sometimes I think sockets with a spec for what's on the "wire" is about as good an abstraction as you can get for arbitrary cross-language calling. If you could have your perfect abstraction for cross-language calling what would it be?
Not sure, but it should be message-oriented, rather than stream-oriented. You have to put a framing protocol on top before you can do anything else. Then you have to check that framing is in sync and have some recovery.
I'm currently struggling with the connection between Apache mod_fcgid (retro) and a Rust program (modern). Apache launches FCGI programs as subprocesses, with stdin and stdout connected to the parent via either pipes or UNIX local sockets. There's a binary framing protocol, an 8-byte header with a length. You can't transmit arbitrarily large messages; those have to be "chunked". There's a protocol for that. You can have multiple transactions in progress. The parent can make out of band queries of the child. There's a risk of deadlock if you write too much and fill the pipe when the other end is also writing. All that plumbing is specific to this application.
(Current problem: Rust std::io appears to not like stdin being a UNIX socket. Trying to fix that.)
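For the curious, the 8-byte header mentioned above looks roughly like this; a hedged sketch based on my reading of the FastCGI spec (`parse_header` is an illustrative name, not from any real codebase):

```rust
// FastCGI record header: version, record type, request id (big-endian
// u16), content length (big-endian u16), padding length, reserved byte.
fn parse_header(h: &[u8; 8]) -> (u8, u8, u16, u16, u8) {
    let version = h[0];
    let rec_type = h[1];
    let request_id = u16::from_be_bytes([h[2], h[3]]);
    let content_len = u16::from_be_bytes([h[4], h[5]]);
    let padding_len = h[6];
    // h[7] is reserved
    (version, rec_type, request_id, content_len, padding_len)
}

fn main() {
    // An FCGI_STDIN (type 5) record for request 1, 13 content bytes,
    // 3 padding bytes.
    let hdr = [1u8, 5, 0, 1, 0, 13, 3, 0];
    let (v, t, id, len, pad) = parse_header(&hdr);
    println!("{v} {t} {id} {len} {pad}"); // prints "1 5 1 13 3"
}
```

Content longer than `u16::MAX` has to be chunked across multiple records, which is the framing bookkeeping the comment is describing.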
You could definitely do JSON / msgpack and have like 5 C API functions like, read, write, wake_on_readable, and it wouldn't be the worst thing, and it wouldn't incur any IPC overhead.
> You have to avoid abstractions and conveniences and minimizing allocations so as not to trigger the garbage collector.
That's not really how modern GCs work, and not how abstractions work when you have a good JIT. The latency impact of modern GCs is now often effectively zero (there are zero objects processed in a stop-the-world pause, and the overall CPU utilisation of the GC is a function of the ratio between the size of the resident set and the size of the heap) and a JIT can see optimisation opportunities more easily and exploit them more aggressively than an AOT compiler (thanks to speculative optimisations). The real cost is in startup/warmup time and memory overhead, as well as optimising amortised performance rather than worst-case performance. Furthermore, how much those tradeoffs actually cost can be a very different matter from what they are (e.g. 3x higher RAM footprint may translate to zero additional cost, and doing manual memory management may actually cost more), as brilliantly explored in this recent ISMM talk: https://youtu.be/mLNFVNXbw7I
> C++’s innovations in zero-cost abstractions
I think that the notion of zero-cost abstractions - i.e. the illusion of "high" [1] abstraction when reading the code with the experience of "low" abstraction when evolving it - is outdated and dates from an era (the 1980s and early '90s) when C++ believed it could be both a great high-level and a great low-level language. Since then, there's generally been a growing separation rather than unification of the two domains. The portion of software that needs to be very "elaborate" (and possibly benefit from zero-cost abstractions) and still low-level has been shrinking, and the trend, I think, is not showing any sign of reversing.
[1]: By high/low abstraction I mean the number of subroutine implementations (i.e. those that perform the same computational function) that could be done with no impact at all on the caller. High abstraction means that local changes are less likely to require changing remote code and vice-versa, and so may have an impact on maintenance/evolution costs.
By claiming that a JIT sees more optimization opportunities, what do you mean exactly? JIT is supplementary to AOT.
Any optimisation transforms the program in some way that has to preserve its meaning. Generally, to do that, AOT compilers need to prove that the transformation is always correct; that can be difficult for some deep optimisations. OTOH, JITs can assume that some transformation is correct, aggressively apply it, and if it turns out they're wrong, the code will trigger some signal that will have the runtime deoptimise (some modern AOT compilers have some speculative and deoptimisation capabilities, but not as general as JITs').
When you say "modern GC", which particular implementation(s) of GC are you referring to?
Java has an excellent GC, but a horrible runtime. .NET is probably the best GC integrated into a language with decent memory layout ability. If all you want is the GC without a language attached, LXR is probably the most interesting. It's part of MMTk, which is a Rust library for memory allocation and GC that Java, Ruby, and Julia are all in the process of adding options for.
LXR isn't that great when the amount of heap memory is actually reasonable rather than being unnecessarily low. Watch the ISMM talk I linked above.
ZGC
I think this will be very successful. The Rust community seems very hellbent on achieving success at all costs, which includes making all software permissively licensed, while a lot of foundational software is still copyleft. This will likely trigger Gresham's law soon, and the fact that the community is pushing the security argument forward, which is very difficult to measure, is also very powerful, whether security is a legitimate concern or it's a think-of-the-children situation.
Certain portions of the Rust community may default to MIT/Apache out of apathy, but any aversion to GPL isn't a Rust thing, it's just the usual sniping between the general FSF crowd and the general OSI crowd that has been ongoing for decades. By all means, make your Rust crate GPL, nobody's going to stop you.
The currently flagged comment and the discussion below it actually do have a salient point.
Comment: How about get an actual published language standard? How about get more implementations of it?
In the discussion, @infogulch: If you are aiming to be the foundation of an entire industry it doesn't seem unreasonable to ask for some specifications. https://news.ycombinator.com/item?id=44926375
I agree with @infogulch
I think it’s flagged (although I don’t have an opinion on whether it should be) because it’s not clear at all whether it’s actually a reasonable thing to expect: plenty of industries are built on programming languages that don’t have formal specifications, much less natural language ones.
To argue by example: Ruby has an ISO standard, but that standard is for a very old version of the language. Python doesn’t have an independent standard at all; it’s like Rust in that the reference implementation is itself the standard. But nobody is champing at the bit to replace their Ruby or Python stack for standards reasons.
Nobody is champing at the bit to rewrite the OS stack in Ruby or Python.
Who said they are? The GP's language is "foundation of an entire industry"; I would argue that you and I could both find plenty of industries that directly attribute their foundations to those languages.
In what way are Ruby and Python foundations for entire industries? As I see it, these two are just surface-level languages which mostly use C/C++ under the hood.
Rails and Django alone account for a disproportionate number of current software service companies. That’s just what immediately comes to mind.
“It’s X under the hood” is true, but only in the same sense that “computer programs are made of coffee” is true. You can’t easily replace a Ruby stack with a Python one just because both have a reference C implementation; that’s the entire point of having high level languages with rich abstractions.
> How about get an actual published language standard?
I’m curious, did you Google before writing this comment? I did just now and the first result I found is https://github.com/rust-lang/reference.
This is something the Rust project is taking very seriously. You can read up on their approach in this detailed blog post from November 2023 - Our Vision for the Rust Specification (https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio...).
This is an ambitious and difficult project that is a work in progress. They’re continuing to work hard at it. There were 2 PRs merged to master less than 2 hours ago when I wrote this comment. On a Saturday, that’s commitment right there.
> How about get more implementations of it?
Here’s what another quick google search yielded. From the official Rust blog, published November 2024: gccrs: An alternative compiler for Rust (https://blog.rust-lang.org/2024/11/07/gccrs-an-alternative-c...)
Hope that helps dispel the confusion around the lack of spec or alternate implementation.
The plan described in "Our Vision for the Rust Specification", and the linked RFC3355, were abandoned early in 2024.
The team that was formed to carry out RFC3355 still exists, though it's had an almost complete turnover of membership.
They're currently engaged in repeating the process of deciding what they think a spec ought to be like, and how it might be written, from scratch, as if the discussion and decisions from 2023 had never happened.
The tracking issue for the RFC was https://github.com/rust-lang/rust/issues/113527 . You can see the last update was in 2023, at the point where it was time to start the actual "write the spec" part.
That's when it all fell apart, because most of the people involved were, and are, much more interested in having their say on the subject of what the spec should and shouldn't be like than they are in doing the work to write it.
Your comment suggests there is no progress being made on the spec. The activity on this repo suggests the opposite - https://github.com/rust-lang/reference
There's still work being done on the Reference, which is being maintained in the same way as it has been since the Rust documentation team was disbanded in 2020. But it's a very long way from being either complete or correct enough to reliably answer questions about Rust-the-language.
After the "write a new spec" plan was abandoned in 2024 the next stated plan was to turn the Reference into the spec, though they never got as far as deciding how that was going to work. That plan lasted six months before they changed direction again. The only practical effect was to add the paragraph identifiers like [destructors.scope.nesting.function-body].
They're currently arguing about whether to write something from scratch or somehow convert the FLS[1] into a spec.
[1]: https://github.com/rust-lang/fls
I hope this is clear: the published language standard the previous poster was asking about does not currently exist.
You linked to efforts to create a specification, but they aren't done or close to done.
It's also not clear the goal is to create a standard. E.g., according to the spec-vision document, the spec isn't planned to be authoritative (the implementation is). If that doesn't change the spec can't really function as a standard. (A standard would allow independent implementations and independent validation of an implementation. A non-authoritative spec doesn't.)
If it is impossible to create an alternate implementation without a spec, is that contradicted by the existence of Gccrs?
Gccrs is an effort to create an alternate compatible implementation. It hasn't yet created a generally usable compiler.
I'm sure they would be among the most interested in a spec. They are currently forced to use the implementation (continuously evolving) as the spec. That makes their job considerably more difficult.
As do I.
Because I’m a cynical person, I view LLMs like I’ve viewed rust for years: a neat tool that makes a small subset of people happy and a touch more productive.
The evangelical defense of either topic is very off-putting, to say the least.
@infogulch really nailed it with the comment about how the rust community wants influence/power, and ignores the responsibility that comes with said influence/power.
Python still has no standard and is openly antagonistic towards refactoring its standard library so that the language and the library aren't shipped as a monolith.
The most difficult part for alternative Python implementations is supporting the same stdlib.
I wouldn’t worry about it. Python will never become popular.
Python is too popular, no one uses it anymore.
No one seriously tries to write operating systems or large components of operating systems in python, it's very much a "hacking" language.
Why does a language in which you write operating systems need a spec or a second compiler?
You have a compiler that's MIT licensed. It will be MIT licensed in perpetuity. It will continue to compile your code forever, with absolutely no changes needed to your code.
How, precisely, does your life change if there is a second compiler or not? Or if there's a spec? And more interestingly, why does it affect only operating systems development and not web server development?
A second compiler means competition. Different approaches. Better solutions that the original compiler may have missed.
We see it time and again across anything that has two implementations.
That’s possible in theory, but do you think that’s what’s needed to improve the Rust compiler in practice?
The pace of development is pretty good, to the point where one of the main criticisms of Rust is that it moves too quickly. The stability is pretty good, I can only remember one time when code was broken. Compile times are mediocre, but three ambitious projects hope to tackle that - the parallel frontend, the cranelift backend, and changing the default linker to a better one (landing next month).
This argument would carry more weight if you could point out something specific that the Rust project isn’t addressing but might address if an alternate implementation existed. Remembering of course, that the second implementation would have to be compatible with the OG implementation.
> That’s possible in theory, but do you think that’s what’s needed to improve the Rust compiler in practice?
We don't know because we don't, and can't, have a competing compiler.
For all languages that have competing compilers they all improve on each other.
I don't have gripes with Rust per se, but I do take issue with the contingent of commenters that seem to not understand the value of a formal specification in the construction of formal computational systems in the first place. Basically every computational system lives and dies on its degree of formal specifiability, and having more formal specification is always better unless you are cool with being beholden to the fact that some particular implementation of some program happened to work at some point in time for some input once. That insecure ground is what you're standing on if you don't think formal specifications have any value. In the world of computation, the specification is the system, modulo incidental properties of implementation like performance.
> having more formal specification is always better
True, all other things being equal. But your logic falls down when all other things aren't equal.
Rust is a more "secure ground" than C even though C has an official specification and Rust doesn't really.
Also you shouldn't say "formal specification" in this context because I don't think you really mean https://en.wikipedia.org/wiki/Formal_specification
> in the construction of formal computational systems
> Basically every computational system lives and dies on its degree of formal specifiability
How much software development work would you say qualifies as "construction of formal computational systems"? I feel like I have to be thinking of something different than you because to a first approximation I think ~no software has much in the way of formal specification done on it, if any at all.
I feel like there's a bit of black-and-white thinking here as well. It's not as if you pick either "full formal specification with bonus Lean/Coq/etc." or "YOLO" - there's shades of grey in between where you can nail down different bits with more or less specificity/formality/etc. depending on feasibility/priorities/etc. Same with support for/against a formal spec - there's more nuance than "an absolute necessity" or "a waste of time".
> How much software development work would you say qualifies as "construction of formal computational systems"?
All of it. That's a literal definition of what a piece of software is: a formal system for computing things. Whether we realize it or like it or not, that's what we are doing when we build software, and that's why formalization of implementation-independent details is almost always beneficial. Sure, you don't always need to do this formalization in Lean or something, but you should at least have a document that outlines the behaviors of the system, its invariants, etc. Simply pointing people at a codebase and saying "there it is", or making the present, incidental behavior of an implementation the source of truth, is like building a bridge and answering the question "how do we know it won't collapse under load X" with "idk, put the load on it and see".
I know it sounds a bit extreme, but I actually would stick to my original stance here. If you have an implementation but haven't worked out its design formally then you probably have a buggy implementation.
An aversion to and underappreciation of formalism are far more endemic in contemporary software development than a penchant for it is. Arguably, if people in the industry took the title of "engineer" a little more seriously, we'd have better systems.
Civil engineers work out formal specifications for bridges even when they are working on a tiny foot bridge in a park, why? because that's basically the whole job. They don't throw their hands in the air and just say "a formal spec ain't worth it, just build the bridge intuitively"—plenty of bridges are built this way, but we don't call those builders engineers, and for good reason.
I completely agree that in reality, we often cannot achieve the ideal precisely for the reasons you described. Constraints may make it infeasible. But for the particular case of Rust, I'd argue that that isn't the case, and my point is more so that those entertaining the idea that formal specification doesn't even have value might be doing something with computers, but it isn't engineering.
People build stuff informally all the time, and that stuff very often works fine. Rich Hickey digs into this in some detail. It's not that types or formal methods or rigorous specs or automated tests (or whatever else - pick your formalism) have no value, but they always have a cost as well, and there are many, many existence proofs of systems being built without them. To me, that seems pretty compelling that formalisms in computing should be "a la carte".
I think my biggest objection to mandating formalism (in the abstract - I do find value in it in some situations) in computing is how little we know about computing compared to what we know about aviation or bridge building. Those are mature industries; computing is unsettled and changing rapidly. Even the formalisms we do have in computing are comparatively weak.
> I think my biggest objection to mandating formalism (in the abstract - I do find value in it in some situations) in computing is how little we know about computing compared to what we know about aviation or bridge building.
I don't think that we should mandate formalism. I'm just trying to say that diminishing the value of formalism is bad for the industry as a whole.
And point taken about maturity, but in that sense if we don't encourage people to actually engage in defining specification and formalizing their software we won't ever get to the same point of maturity as actual engineering in the first place. We need to encourage people to explore these aspects of the discipline so that we can actually set software on better foundations, not discourage them by going around questioning the inherent value of formalism.
Rust is a particularly good example because, as other commenters have pointed out, if we believe it's a waste of time to formalize the language we purportedly want everyone to use to build foundational software, what exactly would we formalize then? If you aren't going to formalize that because it "isn't worth it", well, arguably, nothing is worth formalizing then if the core dependency itself isn't even rigorously defined.
People also forget the other benefits of having a formal spec for a language. Yes it enables alternative implementations, but it also enables people to build tons of other things in a verifiably correct way, such as static analysis checks and code generators.
Could you elaborate what "published language standard" exactly mean and what it would help for?
I can’t speak for them, but here’s the Rust project’s rationale on writing a spec when they started on it: Our Vision for the Rust Specification (https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio...).
I do find it curious that the people who write comments demanding a spec and alternate implementations aren’t aware of the work-in-progress spec and alternate implementations.
They are aware. They are also aware that it's been going nowhere for quite some time. Plus it's still viewed as non-authoritative. A non-authoritative specification is basically worthless. Actually, that might be why it's going nowhere.
> getting nowhere
Would it surprise you to learn that they’re making steady progress, even merging in two PRs two hours ago? https://github.com/rust-lang/reference
A sibling comment argues that it's still work that is basically stalled: https://news.ycombinator.com/item?id=44927519
I saw that. To which I say, git appears to disagree. If you look at the contributor graph for the reference (https://github.com/rust-lang/reference/graphs/contributors) you’ll find 327 contributions in 2024 H2 and 322 contributions in 2025 H1. That’s the highest and second highest rate of contribution to the repo by some distance.
People see what they want to see. They complain no matter what. Remember this thread started with “there’s no spec”, then “the spec work has been abandoned”, then “the spec work isn’t going fast enough”.
When they’re moving the goalposts this fast it’s hard to keep up. Especially because, like my other comment says, it’s not clear what they’d actually use the spec for.
[flagged]
What are your specific gripes with cargo?
The primary one would be the fact that Rust does not have a standard of any form and that all builds are static, which means you are effectively required to use cargo to distribute software.
The TOML configuration format is weak, and things like namespaces, features, and optional dependencies get severely complex rather quickly, to the point that it's easy to introduce subtle bugs or create a fragile configuration.
The requirement to use semver, which may work for a lot of projects, but is not necessarily the best when dealing with software that deals with regulatory compliance issues. The version number is sometimes a product of multiple considerations. Unless you want to publish a page that helps users understand how versions map onto regulatory calendars.
The fact that "semver tricks" even exist.
The cache is poorly designed and what you get by default lacks several important features that require third party crates to solve. Even basic things like safely clearing the cache or sharing the cache across workspaces. Last I looked there were some unstable cargo features which spoke to this but weren't finished yet.
The way resolution works. The fact that multiple different versions of the same crate can be pulled into your build without any warning during build. You have to stay on top of that yourself and given the depth of some dependency trees I find that this gets skipped entirely in a lot of released software.
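To make the duplicate-version gripe concrete, here's a minimal sketch of why it bites. The two modules below are stand-ins for two semver-incompatible versions of a hypothetical crate `libfoo` (cargo itself isn't involved here); the point is that the compiler treats the two `Config` types as unrelated even though their definitions are identical:

```rust
// Stand-in for libfoo 0.9, pulled in by one dependency.
mod libfoo_v0_9 {
    pub struct Config { pub retries: u32 }
}

// Stand-in for libfoo 1.0, pulled in by another.
mod libfoo_v1_0 {
    pub struct Config { pub retries: u32 }
}

// Our code is written against the 1.0 type.
fn connect(cfg: libfoo_v1_0::Config) -> u32 {
    cfg.retries
}

fn main() {
    // A dependency built against 0.9 hands us its Config...
    let old = libfoo_v0_9::Config { retries: 3 };
    // ...which we cannot pass to 1.0-expecting code, even though the
    // definitions match field for field:
    // connect(old); // error[E0308]: mismatched types
    let new = libfoo_v1_0::Config { retries: old.retries };
    println!("{}", connect(new));
}
```

In a real build the two types would share the exact same name, making the resulting "expected `Config`, found `Config`" errors even more confusing. `cargo tree --duplicates` will at least surface when this has happened, but as the comment above notes, nothing warns you at build time.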
Registries outside of crates.io are hard to implement and have several specific challenges that limit the utility of even using this feature.
This is just cargo. The entire language and ecosystem simply aren't ready for primetime. It works for distributing software in limited circumstances but is overall of very low quality when compared to other build systems and user expectations surrounding their distributions and how software is installed into them.
Which are all solvable problems. Unfortunately the Rust community generally seems to see itself as beyond these problems and does not dedicate any significant effort into solving them. Which is fine, but, also means saying you want to push Rust to be "foundational" is an absurd statement to me. Hence "cart always before the horse."
Finally, as demonstrated here, the Rust community simply cannot handle any criticism. Which is probably why they ignore the "foundational issues" in their language. It's become an echo chamber of mostly bad ideas. You don't have a systems language in Rust, you have a niche application language, which is simply not worth the effort to use in my opinion.
Which is an honest opinion. Flag me and downvote me if you want, although I don't see how what I did here was particularly rude or disruptive, just opinionated. I had the "wrong" opinions I guess. So I will not waste further time contributing to this discussion and the rustaceans can go back to ignoring all outside influence while existing in their niche. Just don't ask yourself "why don't more people use Rust?" We keep trying to tell you. You said you want a "foundational" language. This is part of that.
anyways....
> Unless you want to publish a page that helps users understand how versions map onto regulatory calendars.
Why would a changelog/release log with dates not work?
> The fact that "semver tricks" even exist.
How would you propose to solve the problem semver-trick solves, then?
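For readers unfamiliar with it: the "semver trick" is a workaround where, after a breaking 2.0 release, a maintainer publishes one last 1.x version that is just a shim re-exporting the 2.0 items, so both "versions" in a dependency graph resolve to a single set of types. A rough sketch, with modules standing in for the two published versions of a hypothetical crate `foo`:

```rust
// Stand-in for foo 2.0, where the real definitions now live.
mod foo_v2 {
    pub struct Token(pub u64);
    pub fn issue() -> Token {
        Token(42)
    }
}

// Stand-in for the post-trick foo 1.x: no definitions of its own,
// just re-exports of the 2.0 items.
mod foo_v1 {
    pub use super::foo_v2::{issue, Token};
}

fn main() {
    // Code written against the old 1.x API...
    let t: foo_v1::Token = foo_v1::issue();
    // ...yields a value that *is* the 2.0 type, so it flows into
    // 2.0-expecting code without any conversion.
    let foo_v2::Token(n) = t;
    println!("{}", n);
}
```

Without the trick, the two versions' `Token` types would be distinct and incompatible, which is exactly the duplicate-version problem complained about elsewhere in this thread.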
> The way resolution works. The fact that multiple different versions of the same crate can be pulled into your build without any warning during build.
What would your preferred solution be if you/your dependencies depend on incompatible versions of a library?
> It works for distributing software in limited circumstances but is overall of very low quality when compared to other build systems and user expectations surrounding their distributions and how software is installed into them.
What are some better build systems you think Cargo devs might want to learn from?
> Finally, as demonstrated here, the Rust community simply cannot handle any criticism.
I suspect your comment might have done better if not for the last two sentences. At least from what I've seen Rust criticism on HN does just fine, especially if explained thoroughly. Insults fare somewhat less well.
Some of your points are somewhat valid but none of this is that big a deal. With most Rust projects cargo build just works out of the box. The uniformity of experience is very high value.
> It works for distributing software in limited circumstances
I don't know what that means.
> You don't have a systems language in Rust, you have a niche application language
Okay.
> which is simply not worth the effort to use in my opinion.
You're welcome to hold that belief. Meanwhile I'll continue to build systems software in Rust that delivers real value to real users in ways that are impracticable or impossible in any other language.
> Rust is a language for people who want to be _seen as programmers_. Not people who _actually love programming_.
Nice gatekeeping. I happen to actually love programming and I pick up languages quickly. Rust has been a fresh breath of air and I can't ever see myself going back to the horror show that is the masochism of C++.
> How about get an actual published language standard?
https://rustfoundation.org/media/ferrous-systems-donates-fer...
But also a language standard means fuck all to me. C and C++ keep generating them and the language doesn’t seem to be getting meaningfully better.
> How about get more implementations of it?
Why is this important? Python has one central implementation that everyone relies on. So does Java. C++ doesn’t and is plagued by every project either trying to support N platforms across M compilers or just pick clang everywhere (but still be stuck with having unnecessarily complicated platform specific code).
> How about fix the utter mess that is cargo?
What’s the utter mess? I’ve been using it for a while and haven’t observed anything obscenely problematic. Significantly easier than anything in C++ for sure (maybe Conan but I’ve not seen anyone actually using that in the wild)
> How about get an actual published language standard? How about get more implementations of it? How about fix the utter mess that is cargo?
I think these arguably also warrant a "is this putting the cart before the horse?" analysis. I think the value of all of those are pretty debatable (especially in different fields), and I don't think it's at all obvious that Rust would have done any better had it devoted more energy to those earlier in its life.
Also, I think that the article is technically compatible with working on those points anyways?
> Let's replace everything else.
I don't think that's what the article is trying to say? "Targeting" here seems to be more in the vein of "usable for", like "target" in "this language targets this use case".
Consider a programming language that self describes as young and growing, for-fun, silly idea taken too far, we'll see where it goes, no particularly lofty goals at the moment, etc; in this case you're not asking anything of anyone else and reasonable people will be happy to let you have your fun.
As they say, with more power comes more responsibility. Targeting foundational software is aiming for more power; complaining that writing and conforming to a spec is annoying is shirking the corresponding responsibility.
If you are aiming to be the foundation of an entire industry it doesn't seem unreasonable to ask for some specifications.
Theoretically correct, but worse is better - consider how many things we could have asked of C or JavaScript before they became standards. Practically, a spec is something to prioritise alongside all the other things we wish for.
I don't think I quite agree with the power/responsibility analogy. To me, "power" the way it's used in the quote implies some kind of control over something/someone else. That's not really the kind of relationship I see between a programming language and the software that uses it - if I write a program in C++ I don't really view the C++ committee as having "power" over my program. In addition, it's not something a spec really does anything to address - you can be plenty irresponsible with your (hypothetical) power with or without a spec.
I'm not sure I'd agree that a programming language has a duty to produce a spec either, whether it's for foundational software or not. Outside of legal/regulatory requirements, I think a hypothetical "perfect" low-level/systems language would be used no matter whether it had a spec or not simply because it was the best tool for the job. In that sense the language devs' "responsibility" would simply to make the language as suitable as possible for users' software. Of course, legal/regulatory requirements throw a wrench into that thought experiment, but hopefully the point made it across.
None of that is to say that asking for a spec is unreasonable, especially if you're required to have one to use the language. I'm just more on the skeptical side as to its overall importance.
> utter mess that is cargo?
Cargo is currently the best first party language package manager on the planet. It learned from all of the other systems that came before it.
Cargo could use a few improvements, like namespaces and hermetic, repeatable builds, but it's one of the nicest infrastructure pieces we have in any language.
No other language in the top 20 has anything like Cargo. The language needs to be designed hand in hand with its package manager, so it'd be hard to bolt a Cargo onto other established languages.
Out of curiosity, why do you think it’s the best? I can imagine where it can be better than e.g. Maven, but Rust-centric and Java-centric developer workflows are so different that those things probably don’t matter much.
There is a glaring error in this article. In the first line. Rust is more than 10 years old... 1.0 came out in about 2015, but the language had already been around since 2012.
Edit: adding https://en.m.wikipedia.org/wiki/Rust_(programming_language)
It's been 10 years since Rust 1.0, the first stable version of Rust. It's a perfectly reasonable milestone to use as the birth of Rust as we know it today.
People often cast doubts by asking why Rust needs a spec (spec is not the same as standard), and this proves there is still too little engineering in so-called "software engineering".
Software engineering is not engineering. No need to pretend that it should be. We don't rebuild a bridge 3 times before we get it right, but in SE that's a pretty good approach.
If people like rust for foundational software, cool, I wish they would write something in it and let it compete on its merits, and not try forcing it on people.
This is not how good software is written, it's dogma. Do something people want, don't force it on them. All I see is another dependency. Don't tell me you're funny, tell me a joke.
Who's forcing rust on you exactly?
There's tons of software written in Rust. What are you talking about?
the only place i see any interest in rust is HN or the linux kernel lol. while some highly vocal people are raging about rust taking over the world, the rest of the world is moving along without it. just an observation, but it seems like a storm in a teacup.
but just saying this will likely attract an avalanche of downvotes; it's like the only things you can't talk about online are anything against rust or the genocide in palestine
> the only place i see any interest in rust is HN or linux kernel
Then pay attention.
Rust has been in Windows 11 since 2023:
https://windowsreport.com/windows-11-kernel-rust/
And a bunch of Android, and various other things that impact your life. Ignoring the elephant in the room just for a second: C++ gradually left more and more performance on the table, and eventually somebody was going to stroll past and take it. That was Rust, and sure, the C++ people are angry: "No, that's ours, we were coming back to get that." Were you, though?