Their example of why Ada has better strong typing than Rust is that you can have floats for miles and floats for kilometers and not get them mixed up. News flash, Rust has newtype structs, and you can also do basically the same thing in C++.
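For what it's worth, a minimal sketch of that newtype approach in Rust (the type names, the `fuel_needed` function, and the consumption rate are all illustrative):

```
// Wrapper structs: a Miles value cannot be passed where Kilometers
// is expected, even though both are just an f64 underneath.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Miles(f64);

#[derive(Debug, Clone, Copy, PartialEq)]
struct Kilometers(f64);

impl From<Miles> for Kilometers {
    fn from(m: Miles) -> Kilometers {
        Kilometers(m.0 * 1.609_344)
    }
}

// A function that only accepts kilometers.
fn fuel_needed(distance: Kilometers) -> f64 {
    distance.0 * 0.05 // made-up consumption rate
}

fn main() {
    let trip = Miles(42.0);
    // fuel_needed(trip);                     // compile error: expected Kilometers
    println!("{}", fuel_needed(trip.into())); // conversion must be explicit
}
```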
I don't know much about Ada. Is its type system any better than Rust's?
This was posted here about a day ago: https://github.com/johnperry-math/AoC2023/blob/master/More_D...
But a noteworthy excerpt:
```
Ada programs tend to define types of the problem to be solved. The compiler then adapts the low-level type to match what is requested. Rust programs tend to rely on low-level types.
That may not be clear, so two examples may help:
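Ada programmers prefer to specify integer types in terms of the ranges of values they may take and/or the precision of floating-point types in terms of digits. I ended up doing this at least once, where on Day 23 I specified a floating-point type in terms of the number of digits it should reproduce accurately: Digits 18. The compiler automatically chose the most appropriate machine type for that.
Ada arrays don't have to start from 0, nor even do they have to be indexed by integers. An example of this appears below.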
By contrast, the Rust programs I've seen tend to specify types in terms of low-level, machine types. Thus, I tried to address the same problem using an f64. In this particular case, there were repercussions, but usually that works fine as long as you know what the machine types can do. You can index Rust types with non-integers, but it takes quite a bit more work than Ada.
```
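To illustrate that last point, here's a rough sketch of the extra work non-integer indexing takes in Rust; the `Letter` enum and `LetterCounts` type are hypothetical stand-ins for what Ada would declare directly as `array (Letter) of Natural`:

```
use std::ops::{Index, IndexMut};

#[derive(Clone, Copy)]
enum Letter { A, B, C } // imagine all 26

// A fixed-size array indexed by Letter rather than by usize.
struct LetterCounts([u32; 3]);

impl Index<Letter> for LetterCounts {
    type Output = u32;
    fn index(&self, i: Letter) -> &u32 {
        &self.0[i as usize]
    }
}

impl IndexMut<Letter> for LetterCounts {
    fn index_mut(&mut self, i: Letter) -> &mut u32 {
        &mut self.0[i as usize]
    }
}

fn main() {
    let mut counts = LetterCounts([0; 3]);
    counts[Letter::B] += 1; // indexed by an enum, not an integer
    println!("{}", counts[Letter::B]);
}
```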
> By contrast, the Rust programs I've seen tend to specify types in terms of low-level, machine types.
This seems to be an artifact of the domains that Rust is currently being used in. I don't think it's anathema to Rust to evolve to add some of these features. Char-indexed arrays are something I've used a lot (mostly via `char c - 'a'`), but native support for them would be nice.
There is no elegant solution in Rust to make something like
At least that is unsigned (though there are no unsigned hardware floats). If you said temperature in Celsius there, your range starts at -273.15 and you want errors of some sort to happen if you go below that.
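A sketch of how that constraint might be approximated in Rust today: a hand-written check at construction time rather than an Ada-style compiler-enforced range (the `Celsius` type is made up for illustration):

```
#[derive(Debug, Clone, Copy)]
struct Celsius(f64);

impl Celsius {
    const ABSOLUTE_ZERO: f64 = -273.15;

    // The range check lives in this constructor; keep the field
    // private to a module so other code cannot bypass it.
    fn new(value: f64) -> Result<Celsius, String> {
        if value < Self::ABSOLUTE_ZERO {
            Err(format!("{value} is below absolute zero"))
        } else {
            Ok(Celsius(value))
        }
    }
}

fn main() {
    assert!(Celsius::new(20.0).is_ok());
    assert!(Celsius::new(-300.0).is_err());
}
```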
Yes and no. You need to look deeper into Ada to find that it can give compile-time guarantees stronger than what you can get from structs named km and miles.
Newtypes are not as good as native low-level types. After writing a lot of code, you find that you need nightly to get decent integration, to avoid casting to the low-level type and back all the time.
I'm super interested in how you can do this in C++. Say I need an aggregate struct with a few 16- and 32-bit fields, some little-endian and some big-endian. I do not want C++ to let me mix up endianness. How do I do it?
C: struct be32_t { uint32_t _; }; struct le32_t { uint32_t _; };
C++: That, but with a billion operator overloads and conversion operators so they feel just like native integers.
In C++ you could probably even make a templated class that, using concepts, implements all possible operators for any type that supports them. Then you can just `using kilometer = unique_type<uint32_t, "kilometer">` without needing to create a custom type each time.
Though if you do that, km times km isn't km, it's an area - so your custom type would be wrong to have all operations. What unit km times km should yield isn't clear.
I guess I should have said `unique_type<uint32_t, "meter", std::ratio<1000, 1>>` then :) Then you can do the same as std::chrono::duration does.
Thankfully some folks already thought that out, one possible library,
https://mpusz.github.io/mp-units/latest/
I have seen several versions. I wrote two different ones myself - both not in use, because the real world of units turned out to be far more complex. The multiplication thing is one simple example of the issues, but not a complete list.
C++ really needs something like `using explicit kilometer = uint32_t;`
The 'explicit' would force you to use static_cast
You can actually do this in C as well. The Windows API has all sorts of handle types that were originally all one type, HANDLE; but by wrapping a HANDLE in various one-member structs, they were able to derive different handle types that couldn't be intermixed with each other in a type-safe way without some casting jiggery-pokery.
It's just much, much easier and more ergonomic in Ada.
Fun fact that many are not aware of, mostly because this is Windows 3.x knowledge and one needed the right sources to learn about it.
There was a header-only library in the Windows SDK that would wrap those HANDLEs in more specific types that remained compatible, while providing a higher-level API for using them from C.
Unfortunately there is not much left on the Internet about it, but this article provides some insight:
https://www.codeguru.com/windows/using-message-crackers-in-t...
Naturally it was saner just to use TP/C++ alongside OWL, C++ with MFC back then, or VB.
Aside from technical factors, there are social factors involved. For example, both Python and C++ have operator overloading. But in C++ it's horrible and you run screaming from it, while in Python land it's perfectly fine. What is the difference? Culture and taste.
It isn't the same operator overloading.
In C++ operator overloading can easily mess with fundamental mechanisms, often intentionally; in Python it is usually no more dangerous than defining regular functions and usually employed purposefully for types that form nice algebraic structures.
I hardly see the difference, given the capabilities of operator overloading in Python,
This type confusion would have been identical with a plain function, __add__ is only syntactic sugar:
Compare with, for example, fouling the state of output streams from operator>> in C++.
Hardly any different; trying to pretend Python is somehow better.
Operator overloading is indeed syntactic sugar for function calls, regardless of the language.
By the way, you can overload >> in Python via the __rshift__() (and __rrshift__()) methods.
Of course I can overload >> in Python, but I cannot foul up output stream state because it doesn't exist. Formally there is little difference between C++ and Python operator overloading and both languages have good syntax for it, but C++ has many rough edges in the standard library and intrinsic complications that can make operator overloading much more interesting in practice. For instance, overload resolution is rarely trivial.
It is only one pip install away, if anyone bothers to make one such set of overloads.
It's the exact same thing, except in Python the community largely has taste. In C++, `cout << "foo"` exists in the standard library.
I found that learning Ada was a good way to learn how to write good C++ code, because both languages are at the same level of abstraction but Ada is clean and opinionated.
The best example is RAII. It's a pattern in C++ that you have to follow if you don't want to make a mess. In Ada it's a language feature called Controlled Types.
Do you feel the same is true for someone who does mainly Rust nowadays? I had the same feeling with Rust & Python; if there are any patterns that help in other languages, I'd definitely like to look them up.
SPARK is used by NVIDIA for its most security-critical code.
https://www.adacore.com/nvidia
I'm not sure I'd want to limit the selection of languages that much. Depending on the project and how much language risk you can manage (as opposed to security risk), there also is D, Odin, and Zig. And probably a bunch more I'm unfamiliar with.
Most of what gives high-reliability or high-assurance code that label is the process rather than the language. In colloquial terms it rigorously disallows sloppy code, which devs will happily write in any language given the chance.
As much as C is probably the least safe systems language, and probably my last choice these days if I had to choose one, more high-assurance code has probably been written in C than any other language. Ada SPARK may have more but it would be a close contest. This is because the high-assurance label is an artifact of process rather than the language.
Another interesting distinction is that many formalisms only care about what I would call eventual correctness. That is, the code is guaranteed to produce the correct result eventually, but that result may take an unbounded amount of time to arrive. Many real systems are considered “not correct” if the result cannot be reliably delivered within some bounded period of time. This is a bit different from the classic “realtime systems” concept but captures a similar idea. This is what makes GCs such a challenge for high-reliability systems: it is difficult to model their behavior in the context of the larger system, for which you need to build something resembling a rigorous model.
That said, some high-assurance systems are written in GC languages if latency is not a meaningful correctness criterion for that system.
If I was writing a high-reliability system today, I’d probably pick C++20, Zig, and Rust in that order albeit somewhat in the abstract. Depending on what you are actually building the relative strengths of the languages will vary quite a bit. I will add that my knowledge of Zig is more limited than the other two but I do track it relatively closely.
I don’t understand how you can get the same kind of reliability with C as with SPARK - process or not, a formal proof is a formal proof. That’s much harder to get with C.
> Most of what gives high-reliability or high-assurance code that label is the process rather than the language.
This is what I've heard too. I have a friend who works in aerospace programming, and the type of C he writes is very different. No dynamic memory, no pointer math, constant CRC checks, and other things. Plus the tooling he has access to also assists with hitting realtime deadlines, JTAG debugging, and other things.
> no pointer math
How does that work out? Does he never use arrays and strings?
Like in most languages, with indexes.
Zig is not memory safe at all
Nor is any other dime-a-dozen LLVM frontend with basically the same bullshit syntax. What is your point?
swift and rust?
What about them?
they are memory safe
I know there is a belief that Rust/Ada etc. are safer than C/C++, and in some cases that is true. I know of multiple airworthy aircraft that are flying with C++ code. I also know of aircraft flying with Ada. The aircraft flying with Ada are hard to maintain. There is also a mountain of testing that goes into them that is not just unit testing. This mountain of integration, subsystem, and system-level testing is required regardless of the language.
I've never worked in aerospace, however I'm interested in safety-critical software engineering, and I've written a lot of Ada (and written about my experiences with it too). My understanding is that yes you can write safety-critical code in just about any language, but it's much easier to prove conformance to safety standards (like DO-178C, etc.) in Ada than in C++.
I regularly do hobby bare-metal programming in both Ada and C. I find that Ada helps prevent a lot of the common footguns in C. It's not a silver bullet for software safety, but it definitely helps. The point of any programming language is to make doing the right thing easy, and doing the wrong thing hard. All things considered, I think Ada does a good job of that.
It's not saying that it's impossible to make C code that is safe enough to be on an airplane. It's that there are languages with additional features which make it easier to have high confidence. If you can remove entire classes of bugs automatically, why not do so?
It matters less than you think. The entire point of e.g. DO-178C is to achieve high assurance that the code performs its intended function. Basically, you need to be able to trace your object code to system-level requirements. You need to achieve 100% MC/DC coverage from requirements-based testing. If there's something you don't cover, you either remove the code or add derived requirements (which need to be assessed by the system safety process). Language choice doesn't remove any of the objectives you need to achieve.
Also, keep in mind that the desire to have a deterministic system puts a lot of constraint on what kind of behavior you can program anyway.
Here’s an example of how safety-critical C is written and formally verified: https://www.absint.com/
Based on what I know about Rust, it’s harder to write Rust to that same level of confidence, but I haven’t kept up with their safety-critical initiative.
> It's that there are languages with additional features which make it easier to have high confidence. If you can remove entire classes of bugs automatically, why not do so?
Which languages remove which classes of bugs entirely? This vagueness is killing me
Safe Rust and Ada SPARK entirely remove classes of bugs like undefined behavior and memory safety issues. The latter will also statically eliminate things like overflow and type range errors.
These are subsets of their respective languages, but all safety critical development in C and C++ relies on even more constrained language subsets (e.g. MISRA or AV++) to achieve worse results.
> These are subsets of their respective languages, but
Pretty much every language has such a subset. Nothing new then, sigh...
C and C++ don't have such a subset. That seems pretty relevant, given they're the languages being compared and they're used for the majority of safety critical development.
The standards I mentioned use tricks to get around this. MISRA, for example, has the infamous rule 1.3 that says "just don't do bad things". Actually following that or verifying compliance are problems left completely to the user.
On the other hand, Safe Rust is the default. You have to go out of your way to wrap code in an unsafe block. That unsafe block doesn't change the rules of the language either, it just turns off some compiler checks.
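A small illustration of that default:

```
fn main() {
    let xs = [1, 2, 3];
    let p = xs.as_ptr();

    // let y = *p;         // compile error in Safe Rust: raw pointer deref
    let y = unsafe { *p }; // opt-in: same language rules, fewer compiler
                           // checks; you now guarantee that p is valid
    println!("{y}");
}
```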
You mean memory-safe Rust is the default.
Taking this default is not enough to write safety-critical software… but it’s enough to write a browser (in theory) or some Android core daemons.
Unfortunately, no. "Memory safe rust" is a more general concept than "Safe Rust". "Safe rust" is a generally understood term for the subset of rust that's everything outside unsafe blocks. Here's an example where it's used in the language docs [0]. "Memory safe rust" also includes all the unsafe code that follows the language rules, which is ideally all of it.
I can see how this would be confusing and probably should have been clarified with emphasis in the original comment. Safety in the sense of "safety critical" isn't a property any programming language can have on its own, so I wouldn't have intended that regardless.
[0] https://doc.rust-lang.org/nomicon/meet-safe-and-unsafe.html
Memory safety doesn't really help that much with functional safety.
Sure, a segfault could potentially make some device fail to perform its safety-critical operation, but that is treated the same way a logic bug would be, so it's not really a concern in and of itself.
But then again, an unchecked .unwrap() would lead to the same failure mode, so a "safe" crash is just as bad as an "unsafe" one.
Memory safety (as defined by Rust) actually goes a very long way to help with functional safety, mostly because in order to have a memory safe language, you need a number of additional language features that generally aid with correctness.
For example, lifetimes are necessary for memory safety in Rust, but you can use lifetimes much more generally to express things like "while this object exists, this other object is inaccessible", or "this thing is strictly read-only under these particular conditions". That's very useful.
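A minimal sketch of that first pattern; while the mutable borrow returned by the illustrative `edit` function is alive, the borrow checker makes the original object inaccessible:

```
struct Config {
    retries: u32,
}

// Hand out a mutable borrow of one field.
fn edit(cfg: &mut Config) -> &mut u32 {
    &mut cfg.retries
}

fn main() {
    let mut cfg = Config { retries: 3 };
    let guard = edit(&mut cfg);
    // println!("{}", cfg.retries); // compile error: cfg is mutably borrowed
    *guard = 5;
    println!("{}", cfg.retries); // fine again once the borrow has ended
}
```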
But memory-unsafe code doesn't just segfault, it can corrupt your invariants and continue running, or open a door for an attacker to RCE on the machine. Memory safety is necessary (but not sufficient) to uphold what should be the simplest invariant of any code base, that program execution matches the source code in the first place.
C and C++ don't have such a subset defined as part of their standard. Left to users means left to additional tools, which do exist. Rust only has memory safety by default; that is a small part of the problem, and it is not clear to me that having it helps with functional safety. (Although I agree that it helps elsewhere.)
I'd be happy to explain at length why the existing tools and standards are insufficient if you want. It'd be easier to have that discussion over another medium than HN comment chain though.
If you think a strong and convenient type system helps with functional safety, then Rust helps with functional safety. This is also generally the experience in the industry.
This is comparing C, C++, Ada, SPARK and Rust... I think it's obvious.
Firmware is a different story, but for controls code the proper and civilized way of working is using Simulink with something like Polyspace and Embedded Coder, and auto-generating verifiable C code from your model. I know that on HN vim + invoking CC is the only way of working, but the industry moved forward quite a while ago.
Sadly, MathWorks has a monopoly there.
Yes, by putting C and C++ into a straitjacket of formal verification and coding practices that would make most complaints about Rust's borrow checker seem like child's play.
What makes Ada harder to maintain? Do you have a source for that so I could read more?
Mostly non-technical things: continuity (or, rather, the lack thereof) and PR.
Continuity: Ada is not widely taught at universities, and, whilst AdaCore’s GNAT Academic Program (GAP) does exist, one has to consciously seek out a university that offers a course in/on Ada. Ada and programming in Ada is not common knowledge, which stems from the next point.
PR. Ada, rightfully or wrongfully, does not exactly bask in the limelight of popularity – most assuredly not to the same extent as Python, NodeJS, TypeScript, C#/.NET etc. do. The current generation of Ada developers do not care (and probably should not), and the young and future generations of potential Ada developers miss out. Ada is not talked about in diverse contexts spanning web development, frontend/backend[0] development, containers, cloud – and the list goes on. Not because Ada can't be used in any of the aforementioned contexts; it is just that, due to the lack of PR, it remains an unnoticed reality – kind of like «if a tree falls in a forest and no one is around to hear it, does it make a sound?»
[0] Yes, «frontend development» and «backend development» are the fancy terms in wide use that the new generation can easily understand.
What needs to be "maintained" in a flying aircraft? If it's in need of an update, why was it certified to fly that way in the first place?
Also, in safety-critical apps, being "difficult" can be a feature, not a bug. Should we have easier turbofans so we can pop them open and swap out blades and rings for tiny little improvements? No. Every flight-critical component should be fully understood as a prerequisite for use.
There are new features or new subsystems to integrate, which require ICD updates, and there are bugs that need fixing.
I've never heard of SPARK. What advantages does it have compared to Lean?
It's basically a subset of Ada, so you can use it anywhere you'd use Ada. I don't think Lean is at a point that it's an Ada replacement.
In a project, can you develop one module in Ada and another in SPARK and compile them together ? So, you can use safety-critical code in one module and regular Ada code in other modules ?
Yes, you can mix and match the two. This lets you do things like build a library with SPARK for some critical portion where you want SPARK's guarantees and can accept its limitations, and incorporate it into an application built with the rest of Ada.
Oh lovely - need to put Ada on the learning plan. Formal languages were a bit of a drag because you needed to maintain a separate "specification" copy of your critical code.
Going through Ada sample programs, and surprised I can grok stuff without knowing anything about the language. Wondering why it never took off in the mainstream software world. Sure it's a bit verbose, but so are Java and macOS APIs.
I think at least slow and expensive compilers back in the day, the defense and aerospace stigma, and in more recent times a common misperception that it’s closed source. And it’s never been cool.
And yes you’re right, it’s a very good language.
https://learn.adacore.com/ - I'd start here, good set of tutorials on the language including some comparative ones. It won't teach you everything you might need to know, but it's a free and good starting point.
Lean the math prover? What does that have to do with Ada/Rust?
> Lean the math prover? What does that have to do with Ada/Rust?
I'm going to be rude, but there are 4 sentences in this thread and you appear to have not read two of them.
The comment I responded to:
>> I've never heard of SPARK. What advantages does it have compared to Lean? [emphasis added]
The "It" in my response refers to SPARK.
There was no need to be rude.
Dlang is the GOAT!
Haha, Walter you're definitely not biased on this one. :)
I think that was an appropriate comment under an article titled "Should I choose Ada?" posted on adacore dot com.
Me? Biased? LOL
The truth has been spoken.
I might be in the minority, but I hate type re-definitions. I want types to just tell me how much memory a variable is using and its bit interpretation. Every variable already has a name; use that to communicate the data’s representation, and if it’s really important that representation mismatches are caught at compile time, wrap it in a struct. I don’t want to guess how much memory the compiler decided a variable needed (though that is also present to an extent in C/C++).
Understandable. Many, many years ago I sat in front of a commercial code base for the very first time, at a well-known large database company. My despair-level rose quickly when I saw that everybody who had ever submitted a patch seemed to have created their own version of very basic types such as 32-bit unsigned integers, making following the code, and checking the types along the way, very hard.
On the other hand, these kinds of types for different kinds of numbers are meant for higher-level checks. Not confusing some random number with a monetary amount, or, as in the example, miles with kilometers, sure helps. The latter might have prevented that Mars mission that failed due to such a unit error (Mars Climate Orbiter).
The problem, as so often, is that we have only one source code (level) but very different levels of abstraction. It's similar when you try to add very low-level things like caching, which has to be concerned with implementation and hardware, and mix that with business logic (the alternative: even more abstraction and code to try to separate the two, but then you have more indirection and less knowledge of what is really going on when you need to debug something).
Sometimes you are interested in the low-level concepts, such as the number of bytes, but other times you want higher-level ideas expressed in your types. Yet we only have one type layer and have to put it all in there, or choose only one, and then the other kind of view suffers.
If it's a personal project, try each and see what you like. If it's a work project, follow your company's development guidelines.
One can learn to apply formal methods at many levels in any language, using the available tools at hand. For some ideas, see:
1) On Formal Methods Thinking in Computer Science Education - https://dl.acm.org/doi/10.1145/3670419
2) (In-)Formal Methods: The Lost Art --- A Users’ Manual by Carroll Morgan - https://fme-teaching.github.io/2019/10/03/in-formal-methods-...
3) Forthcoming book by Carroll Morgan, Formal Methods, Informally: How to Write Programs That Work - https://www.cambridge.org/highereducation/books/formal-metho...
4) Understanding Formal Methods by Jean-Francois Monin - A firehose of information.
Even though it's an Ada ad, 'C/C++' vs 'Ada, SPARK' is wild.
> If you’re prepared to look at alternative programming languages to avoid the costs and risks of C/C++, SPARK offers an opportunity to go much further than Ada or Rust. SPARK, which is based on Ada, offers industrial-strength formal methods: an opportunity for you to prove mathematically that your software is safe and secure. This paradigm shift in software development methodology offers significant cost savings for high-integrity software.
Forgive me if I sound naive, but I always found it a bit weird how something like this is touted as a strength of Ada but never actually demonstrated in comparison with a heavily typed language like Rust. Regardless of the language, you cannot write some special program that guarantees an input will be valid if it comes from an external source or a human. It is simply impossible to prove this at compile time. What you can do in Rust, and other languages, is check that you're within boundaries and throw an exception if it's out of bounds.
Herein lies the issue: "You can't throw exceptions in this kind of software". True. So how do you prove it WON'T go out of bounds at compile time, if the input isn't trusted? Rustaceans get picked on a lot for spending more time on type theory than actually writing working code, but from my perspective it looks worse in Ada: obsessing over proving something in a complete vacuum that can't possibly account for all possible invariants in the real world.
> Regardless of the language, you cannot write some special program that guarantees an input will be valid if it comes from an external source or a human. It is simply impossible to prove this at compile time.
> ...but from my perspective it looks worse in Ada...
This isn't really true. SPARK obviously can't prove that the input will be valid, but it can formally prove that the validity of the user input is verified before it's used.
I wrote about this here: https://ajxs.me/blog/How_Does_Adas_Memory_Safety_Compare_Aga...
I am no expert here (what I remember is mostly from CS courses), but isn't the entire point of a formal program proof that you can reason about the combinatorics of all data and validate hypotheses about them?
It's one thing to say "objects of this type never have value X for field Y" or "this function only works on types U and V", but it's a lot more impressive to say "in this program, states X and Y are never reached simultaneously" or "in this program, state X is always followed by state Y".
This is super useful for safety systems, because you can express safety as these kinds of propositions, which are falsifiable by the proof system. E.g.: "the ejection seat is only engaged after the cockpit window is ejected" or "if the water level exceeds X, the pump is always active within 1 minute".
> isn't the entire point of a formal program proof that you can reason about the combinatorics of all data and validate hypothesis on those?
You're right, and validating input is part of that process. The compiler can trivially disprove that the array can be safely indexed by every integer the user might input. Adding a runtime check to ensure we have a valid index isn't a very impressive use of formal proofs, I admit; it's just a simple example that clearly demonstrates how SPARK can prove 'memory-safety' properties.
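For comparison, here is roughly the same shape as a Rust sketch (not the code from the post): parse the untrusted input, then check it before it ever touches the array. The difference is that nothing in Rust forces this check to exist; SPARK refuses to accept the program without it:

```rust
use std::io::{self, BufRead};

// Illustrative sketch: untrusted input is parsed and then validated
// before it is ever used as an index. In plain Rust only discipline
// (and a runtime panic) guards this; SPARK demands the check be proven.
fn main() {
    let table = [10, 20, 30, 40, 50];

    let mut line = String::new();
    io::stdin().lock().read_line(&mut line).expect("stdin read failed");

    match line.trim().parse::<usize>() {
        Ok(i) if i < table.len() => println!("table[{i}] = {}", table[i]),
        Ok(i) => eprintln!("{i} is out of bounds (0..{})", table.len()),
        Err(_) => eprintln!("not a valid index"),
    }
}
```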
But isn't the entire point of Rust's verbose type system to declare valid states at compile time? I don't exactly see how this couldn't be proved in Rust.
> But isn't the entire point of rust's verbose type system to declare valid states at compile time?
Different type systems are capable of expressing different valid states with different levels of expressivity at compile time. Rust could originally express constraints that SPARK couldn't and vice-versa, and the two continue to gain new capabilities.
I think in this specific example it's possible to write a Rust program that can be (almost) verified at compile time, but doing so would be rather awkward in comparison (e.g., custom array type that implements Index for a custom bounded integer type, etc.). The main hole I can think of is the runtime check that the index is in bounds since that's not a first-class concept in the Rust type system. Easy to get right in this specific instance, but I could imagine potentially tripping up in more complex programs.
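Something like the following sketch, with all names made up for illustration. The single runtime check lives at the construction boundary, so indexing itself can no longer fail:

```rust
use std::ops::Index;

// A rough sketch of the "awkward" Rust encoding described above; all
// names are invented for illustration. A `BoundedIdx` can only be built
// through `new`, so the one runtime check sits at the construction
// boundary and indexing is infallible by construction.
#[derive(Clone, Copy)]
struct BoundedIdx<const N: usize>(usize);

impl<const N: usize> BoundedIdx<N> {
    // The one place an untrusted integer is checked against the bound.
    fn new(i: usize) -> Option<Self> {
        (i < N).then_some(Self(i))
    }
}

struct Table<T, const N: usize>([T; N]);

impl<T, const N: usize> Index<BoundedIdx<N>> for Table<T, N> {
    type Output = T;
    fn index(&self, idx: BoundedIdx<N>) -> &T {
        // Safe by construction: `new` guaranteed idx.0 < N.
        &self.0[idx.0]
    }
}

fn main() {
    let t = Table([10, 20, 30, 40, 50]);
    if let Some(i) = BoundedIdx::<5>::new(3) {
        println!("{}", t[i]); // no way to build an out-of-range index
    }
}
```

The hole mentioned above remains: Rust evaluates the check in `new` at runtime and takes it on faith from there, whereas SPARK would demand the corresponding proof obligation be discharged at compile time.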
This is the smoking gun for me:
What I am asking is: what exactly is it formally proving from first principles? Because to me this looks no different from a simple assertion statement or runtime bounds check, which can be performed in any language with if statements and exceptions. How did you prove that no one could just put an arbitrary integer into that function? Does it fail to compile? Then you're just making a custom type which enforces valid state, which, again, can be done in any language with a rich type system and if statements.
If I might be more blunt: formal proofs in the field of programming appear to be nothing more than mathematicians trying to strong-arm their way into an industry where they aren't necessarily needed. Formal proofs make sense when you're trying to come up with new formulas or studying the cosmos, but they make no sense when you're trying to predict how bits will behave on a computer, because for the most part that is a problem already solved by the languages themselves.
> How did you prove that no one could just put in an arbitrary integer into that function? Does it fail to compile?
The answer quite literally follows immediately after the code sample you quoted:
> Attempting to prove the absence of runtime errors gives us the following warnings:
> The SPARK prover correctly notices that there's nothing stopping us from entering a value outside the array bounds. It also points out that the Get call we're using to read the integer from stdin can raise an unexpected Constraint_Error at runtime if you type in anything that can't be parsed as an integer.

This is followed by a version of the program which SPARK accepts.
> Then you're just making a custom type which enforces valid state which, again can be done in any language with a rich type system and if statements.
Key word here is can. You can write correct software in any language, from raw binary to theorem-proving languages, but some languages check more properties at compile time than others. What using something like SPARK/Lean/Agda/Rocq buys you is the "more" part, up to allowing you to eliminate runtime checks entirely because you proved at compile time that the corresponding error conditions cannot happen (e.g., proving that indices are always in bounds allows you to omit bounds checks). That's not something the more popular languages are capable of, because their type systems are not expressive enough.
> Formal proofs make sense when you're trying to come up with new formulas or studying the cosmos, but they make no sense when you're trying to predict how bits will behave on a computer, because for the most part that is a solved problem by languages themselves.
This seems contradictory, especially considering that type checking is itself the construction of a formal proof.
Maybe you'll find this example to be a bit more useful: https://blog.adacore.com/i-cant-believe-that-i-can-prove-tha...
The idea is that you define a number of pre- and post-condition predicates for a function that you want proved (in what's effectively the header file of your Ada program). As with tests, these checks showing that the output is correct are often shorter than the function body itself, as in this sorting example.
Then you implement your function body, and the prover attempts to verify that your post-conditions hold given the pre-conditions. Along the way it also checks other things, such as overflows and whether the pre- and post-conditions of the routines called inside are satisfied. So you can use the prover to ensure at compile time that any properties you care about in your program hold, properties you might otherwise only catch at run time via assertions.
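As a loose analogue (illustrative only, not SPARK's actual mechanism), here is the sorting example with the contracts stated as runtime assertions in Rust; the prover's job is to discharge the equivalent conditions at compile time so these checks never need to execute:

```rust
// A loose Rust analogue of the contract idea: SPARK tries to discharge
// such conditions at compile time, while plain Rust can only evaluate
// them as runtime assertions (here, only in debug builds).
fn sort_checked(xs: &mut Vec<i32>) {
    let before = xs.clone(); // keep the input so the postcondition can refer to it

    xs.sort();

    // Postcondition 1: the output is ordered.
    debug_assert!(xs.windows(2).all(|w| w[0] <= w[1]));

    // Postcondition 2: the output is a permutation of the input,
    // checked here by comparing sorted copies.
    let mut b = before;
    b.sort();
    debug_assert_eq!(*xs, b);
}

fn main() {
    let mut v = vec![3, 1, 2];
    sort_checked(&mut v);
    assert_eq!(v, vec![1, 2, 3]);
}
```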
C++ is a good alternative to "C/C++"
Unfortunately, even big C++ powerhouses use plenty of C-style programming in their pseudo-modern C++.
How does Ada/Spark compare to theorem provers like Agda and Lean, in terms of guarantees?
I’m not really into languages like this. Anybody got some resources regarding how strict the guarantees can get in either of these types of environments?
The average HN comment section has kind of become a new Reddit, and I'm kind of pessimistic about it. I see the same low-effort comments lately, and I moved away from Reddit for that very reason. Maybe these types of questions just attract the same people.
No, just learn C/C++ properly.
I've worked with some of the best C++ programmers on the planet, and none of them would ever propose this as a solution. They are the first to recognize that developing software in C++ is expensive and error-prone.
We've tried this for decades.
yes, you should.
Personally I think Swift can be a good choice that is more familiar to existing C++ developers than the others, and a lot of people seem to be sleeping on it.
It has full native interop with C and C++, so you can already use all your existing libraries. Historically it lacked cross-platform support, but that is no longer true. It does lack a native GUI framework, but for now you can keep using C++ ones.
Some people complain about its ties to Apple, but with it gaining much wider cross-platform support, hopefully that won't matter much in the future. I guess it remains to be seen.
Swift, while slightly friendlier to extreme beginners than Rust, is so kneecapped and bloated that it has the opposite problem from Rust: at some moderate level of competence it actively hinders learning. All of the weird closure and multithreading syntax does more harm than good. Rust, by contrast, has a high floor and a somewhat infuriating borrow checker, but once you figure your way out of those, you've actually learned a lot about how computers work along the way.
(2024)
Alternatively, just get better at C/C++. It isn't going anywhere, and it feels like more developers are coming around to the idea that the security guarantees aren't worth throwing the baby out with the bathwater.
It's a common misconception that the point of Rust is just security. Rust helps avoid a very broad class of bugs, of which security bugs are only a subset.
They'd simply tell you that by just magically "getting better" at C/C++, those would be resolved too. And if my grandmother had wheels, she'd have been a bike [0].
It's like a JRPG that may be a slog at the beginning, but then really gets going after the first 50-60 hours. Just gotta replace the hours with years, and huff even more glue.
And these are the people moaning about Rust and religious thinking... good old DARVO at it again, and it's getting more and more blatant.
[0] https://youtu.be/A-RfHC91Ewc
In the RPG analogy, why waste your skill points on maxing out "checking code for UB", when you can get +10 bonus on UB checks from having Rust, and put the skill points into other things.
I'd rather use Rust even if safety weren't a concern. Basically for all the reasons Torvalds doesn't like C++ but is fine with Rust.
Google has published a few papers stating that Rust code has fewer defects than Go and Java services of similar complexity.
It's not just the memory safety, but also the design of the type system and the error-handling semantics, which let it deal smoothly with exceptional behavior.
Go's biggest flaw for backends is its error handling: no exceptions, and nothing checks that you actually use the err.
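For contrast, a small Rust sketch of the difference (the file name is hypothetical): `Result` is marked `#[must_use]`, so silently dropping an error is at least a compiler warning, and `?` makes propagation explicit:

```rust
use std::fs;
use std::io;

// Rust's Result is #[must_use]: an ignored error produces at least a
// compiler warning, and `?` makes propagation a single character.
// "config.toml" is a made-up file name for illustration.
fn read_config() -> io::Result<String> {
    let text = fs::read_to_string("config.toml")?; // error propagates to the caller
    Ok(text)
}

fn main() {
    fs::read_to_string("config.toml"); // warning: unused `Result` that must be used

    match read_config() {
        Ok(text) => println!("{} bytes of config", text.len()),
        Err(e) => eprintln!("could not read config: {e}"),
    }
}
```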
Java's issue might be the lack of cooperative multitasking until recently (virtual threads). The best you could do was those promise frameworks that mangle your code, and Google in particular uses something a hundred times worse called Guice (which is also DI).
Java's biggest problem is that mutability is so baked into the language. I'm working on a project now where I always need to dig deep to find out whether something has been mutated or not. Yes, there are records, and we are now getting into data-oriented programming, but older codebases are really hard to read.
Java also has (in my experience) a higher concentration of inept developers who seem to have never heard of guard conditions and early returns, and who prefer instead to wrap everything in if conditions, pushing the actual logic into ever-deeper nesting levels. Pyramids of doom all over.
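For what it's worth, the contrast looks roughly like this Rust sketch (hypothetical names; the shape is the same in any language):

```rust
// The "pyramid of doom" shape complained about above (made-up names):
fn handle_nested(user: Option<&str>, active: bool, balance: i64) -> String {
    if let Some(name) = user {
        if active {
            if balance >= 0 {
                format!("welcome, {name}")
            } else {
                "account overdrawn".into()
            }
        } else {
            "account inactive".into()
        }
    } else {
        "no user".into()
    }
}

// The same logic with guard conditions and early returns: each failure
// case exits immediately, and the happy path stays unindented.
fn handle_guarded(user: Option<&str>, active: bool, balance: i64) -> String {
    let Some(name) = user else { return "no user".into() };
    if !active {
        return "account inactive".into();
    }
    if balance < 0 {
        return "account overdrawn".into();
    }
    format!("welcome, {name}")
}

fn main() {
    assert_eq!(
        handle_nested(Some("ada"), true, 10),
        handle_guarded(Some("ada"), true, 10)
    );
    println!("{}", handle_guarded(Some("ada"), true, 10));
}
```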