Obviously the errno should have been obtained at the time of failure and included in the exception, maybe using a simple subclass of std::exception. Trying to compute information about the failure at handling time is just stupid.
My biggest beef with exceptions is invisible code flow. Add the attribute throws<T> to the function signature (so that it's visible) and enforce handling by generating a compiler error if it is ignored. Bubbling up errors is OK. In essence, this is Result<T, E>. That's OK. Even for constructors.
What I dislike is having a mechanism to skip 10 layers of bubbling deep inside call stack by a “mega” throw of type <n> which none of the layers know about. Other than
This strikes me as all wrong. The whole point of exceptions is control flow and destructors. By getting rid of RAII for the sake of simplifying the callsite a little, the author fails to obtain the real advantage, which is automatic resource unwinding of all local resources under failure conditions both during initialization and usage.
If you want to simplify the callsite, just move the exception handling to a higher scope. I can admit it’s a little irritating to put a try/catch in main but it’s trivial to automate, and most programs are not written inline in main.
The main problems I see with destructors have to do with hidden control flow and hidden type information. That said, hiding exceptional control flow from the mainline control flow of a function is also a useful feature, and the “exception type” of any given function is the recursive sum of all the exception types of any functions or operators it calls, including for example allocation. That quickly becomes either an extremely large and awkward flat sum or an extremely large and awkward nested sum, with awkward cross-namespace inclusion, and it becomes part of the hard contract of your API. This means, transitively, that if any deep inner call needs to extend or break its error types, it must either propagate all the way up into all of your callers, or you must recover and merge that error into _your_ API.
For _most_ usecases, it is just simpler to implement a lean common interface such as std::exception and allow callers who care to look for more detailed information that you document. That said, there is a proposal (P3166 by Lewis Baker) for allowing functions to specify their exception set, including via deduction (auto):
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p31...
Java and Rust are the only languages I know of that have proper handling of exceptions: mandatory declaration as part of the method declaration, since exceptions ARE an integral part of the contract. (Yes, I consider Result<T, E> to correspond to an exception declaration, since the return value MUST be checked prior to use of T.)
Swift user here: I have to say one of the best features of Swift is the exception handling. Which is to say, exceptions in Swift are not C++/Java/Obj-C style exceptions, but instead are a way to return an error result from a function. And Swift enforces that the error is handled.
That is, a `throw` statement in Swift simply returns an `Error` value to the caller via a special return path instead of the normal result.
More here: https://github.com/swiftlang/swift/blob/main/docs/ErrorHandl...
I saw that in Swift, a method can declare that it throws an exception, but it doesn't (can't) declare the exception _type_. I'm not a regular user of Swift (I usually use Java - I'm not sure what other languages you are familiar with), but just thinking about it: isn't it strange that you don't know the exception type? Isn't this kind of like an untyped language, where you have to read the documentation on what a method can return? Isn't this a source of errors itself, in practice?
> isn't it strange that you don't know the exception type?
Java experience taught us that, when writing an interface, it is common not to know the exception type. You often can’t know, for example, whether an implementation can time out (e.g. because it will make network calls) or will access a database (and thus can throw RollbackException). Consequently, when implementing an interface, it is common in Java to wrap exceptions in an exception of the type declared in the interface (https://wiki.c2.com/?ExceptionTunneling)
Yes, I know Java and the challenges with exceptions there (checked vs unchecked exceptions, errors). But at least (arguably) in Java, the methods (for checked exceptions at least) declare what class the exception or exceptions are. I personally do not think wrapping exceptions in other exception types, in Java, is a major problem. In Swift, you just have "throws" without _any_ type. And so the caller has to be prepared for everything: a later version of the library might suddenly return a new type of exception.
One could argue Rust is slightly better than Java, because in Rust there are no unchecked exceptions. However, in Rust there is panic, which is in a way like unchecked exceptions, which you can also catch (with panic unwinding). But at least in Rust, regular exceptions are fast.
> And so the caller has to be prepared for everything: a later version of the library might suddenly return a new type of exception.
But you get the same with checked exceptions in Java. Yes, an interface will say foo can only throw FooException, but if you want to do anything when you get a FooException, you have to look inside to figure out what exactly was wrong, and what’s inside that FooException isn’t limited.
A later version of the library may suddenly throw a FooException with a BarException inside it.
What I liked about Boost's error_code[1], which is part of the standard library now, is that it carries not just the error but the error category, and with it machinery for categories to compare error_codes from other categories.
So as a user you could check for a generic file_not_found error, and if the underlying library uses http it could just pass on the 404 error_code with an http_category say, and your comparison would return true.
This allows you to handle very specific errors yet also allow users to handle errors in a more generic fashion in most cases.
[1]: https://www.boost.org/doc/libs/latest/libs/system/doc/html/s...
When using a language forcing checked exceptions, you would know, wouldn't you?
Swift gained limited support for “typed throws” in Swift 6.0 (2024).
https://github.com/swiftlang/swift-evolution/blob/main/propo...
I say limited because the compiler doesn't (yet, as of 6.2) perform typed throw inference for closures (a closure that throws is inferred to throw `any Error`). I have personally found this sufficiently limiting that I've given up using typed throws in the few places I want to, for now.
Another really nice thing about Swift is that you have to put the `try` keyword in front of any expression that can throw. This means there's no hidden control flow: if some function call can throw, you're informed at the call site and don't have to look at the function declaration.
That sounds very similar to noexcept(bool) to me, except that noexcept can be computed, for example by deriving it from some generic trait, and we presume throwing unless specified non-throwing by noexcept.
From what I can read, Swift gives you a stack trace, which is good. At the moment I'm using Go, where the stack trace is only generated where the panic is triggered, which could be much higher up. That makes it a lot more unwieldy to figure out where an error happened, because everyone uses:
> if err != nil return err
Swift doesn't capture a stack trace in the `Error` object, but Xcode can break when an error is thrown if you set a “Swift Error Breakpoint”, and the debugger will show you the stack trace. Under the hood it just sets breakpoints on the runtime functions `swift_willThrow` and `swift_willThrowTypedImpl`.
In high-level code, pretty much everything can fail in many different ways, and usually you're either just passing the error up or handling it in some catchall manner. Rust's behavior makes sense for its use cases, but it'd get exhausting doing this in like a web backend.
It's already too exhausting to use for OOM, which is why Rust effectively punted on that from the very beginning. And the ironic thing is that anyhow::Error (or similar) seems poised to become idiomatic in Rust, which AFAIU always allocates, while the extremely belated try_ APIs... does anybody even use them?
It's a shame. Were I designing a "low-level" or "systems" language, rather than put the cart before the horse and pick error results or exceptions, my litmus test would be how to make handling OOM as easy as possible. Any language construct that can make gracefully handling OOM convenient is almost by definition the fabled One True Way. And if you don't have a solution for OOM from the very beginning (whether designing a language or a project), it's just never gonna happen in a satisfactory way.
Lua does it--exceptions at the language level, and either setjmp/longjmp or C++ exceptions in the VM implementation. But for a strongly typed, statically compiled language, I'm not sure which strategy (or hybrid strategy) would be best.
I think Rust's choice to panic on OOM is the right choice for 95% of the code people write. Handling it everywhere would be absurdly verbose, and OOM can be very tricky to recover from without careful top-down design, because it's very easy for error-handling code to itself try to allocate (to print/format a message), leading to a process abort. The mistake was not having the fallible APIs on allocating containers from the start, for the users who do care and can tangibly recover.
Depending on the OS you likely won't even get the chance to handle the error because the allocation never fails, instead you just over commit and kill the process when physical memory gets exhausted.
I guess what I'm trying to say is designing your language's error handling primitives around OOM is probably not a good idea, even in a systems programming language, because it's a very rare class of generally unrecoverable error that only very carefully designed code can recover from.
Anyhow allocating isn't that absurd either. It keeps the size of Result<T, E> down by limiting the 'E' to be a pointer. Smaller Result<T, E> can help the compiler generate better code for passing the return value, and the expectation is the error path is rare so the cost of allocating might be outweighed by the lighter Result<T, E> benefits. Exceptions take a similar standpoint, throwing is very slow but the non-exceptional path can (theoretically) run without any performance cost.
Hm, not sure how you'd do this. OOM error auto bubbling up sounds dangerous because the inner code might not leave things in a consistent state if it exits unexpectedly, so that makes sense to manually handle, but it's tedious. Rust at least has nice ? and ! syntax for errors, unlike Go where the error-prone error handling actually ruins the entire language.
Don't forget, failure modes pierce abstraction boundaries. An abstraction that fully specifies failure modes leaks its implementation.
This is why I think checked exceptions are a dreadful idea; that, and the misguided idea that you should catch exceptions.
Only code close to the exception, where it can see through the abstraction, and code far away from the exception, like a dispatch loop or request handler, where the failure mode is largely irrelevant beyond 4xx vs 5xx, should catch exceptions.
Annotating all the exceptions on the call graph in between is not only pointless, it breaks encapsulation.
This is a better way of expressing what I had been thinking about putting exception details behind an interface, except that in my mind encapsulating errors is just good design rather than implementation hiding, since the programmer might want to express a public error API, for example to tell the user whether a given fopen failed due to not finding the file or due to a filesystem fault.
If your error codes leak the implementation details through the whole call stack, you are doing it wrong. Each error code describes what failed in terms of its function-call semantics. A layer isn't supposed to just return this upwards, that wouldn't make sense, but to use it to choose its own error return code, which is in the abstraction domain of its function interface.
Author completely misunderstands how to use exceptions and is just bashing them. A lot of what he says is inaccurate if not outwardly incorrect.
Also Bjarne's control of C++ is quite limited, and he is semi-retired, so asking him to "fix his language" is fairly misguided. It's designed by a committee of 200+ people.
Anyway, what you want seems to be not to use exceptions, but monads instead. These are also part of the standard: it's called std::expected.
Agreed. Author is trying to mix paradigms. Simplest approach if they want local handling and non-propagation of errors is to just have the file holder not check for open success, and check that manually after construction. Then you get guaranteed closure of file no matter how the function is exited.
```
class File_handle {
    FILE *p;
public:
    File_handle(const char *pp, const char *r) { p = fopen(pp, r); }
    ~File_handle() { if ( p ) fclose(p); }
    FILE* file() const { return p; }
};

void f(string s)
{
    File_handle fh { s.c_str(), "r" };
    if ( fh.file() == NULL ) {
        // note: s.c_str(), not p -- p is a private member of File_handle
        fprintf(stderr, "failed to open file '%s', error=%d\n", s.c_str(), errno);
        return;
    }
    // use fh
}
```
> Author completely misunderstands how to use exceptions and is just bashing them. A lot of what he says is inaccurate if not outwardly incorrect.
Would you mind elaborating on what you believe the misunderstandings are? Examples of incorrect/inaccurate statements and/or an article with better explanations of the mentioned use cases would be helpful.
> it's called std::expected
How does std::expected play together with all other possible error handling schemas? Can I get unique ids for errors to record (error) traces along functions?
What is the ABI of std::expected? Stable(ish) or is something planned, ideally to get something C compatible?
Well, he's using try/catch locally. You're not supposed to handle errors locally with exceptions. The whole point is inversion of control, you manage them at the point where you can do meaningful recovery, which then triggers the cleanup of the whole stack of frames once the exception bubbles up, rather than systematically cleaning up on the normal control flow path.
Regardless, in his example, he could achieve what he wants by wrapping the try in a lambda, and returning either the value from try or nullopt from catch. But clearly, that's just converting exceptions to another error-handling mechanism because he isn't doing it right.
He claimed that not handling an exception causes the program to crash, that's just plain incorrect. To be fair many people use the term "crash" liberally.
std::expected or equivalent is often used with std::error_code, which is an extensible system (error codes are arranged in categories) that, among other things, interops with errno.
Most criticism of C++ comes from people not really into the language. Sure, learning curve might be an issue. But if you are really into C++, there are a lot of things to like about it. At least I do love it. However, only since C++11. Before that, the language felt very strange to me, possibly due to the same effect, I didn't know enough about it.
I'm the opposite. I like C++ but only until C++11. From that point onward, the rules got WAY more complicated, and there's only so much I can/want to hold in my brain at once, I just prefer simpler rules I guess.
Occasionally I do like to use auto or lambdas from C++11 but even then I have to remember more rules for initialization because braces were introduced.
You probably do want that exception to bubble up, actually. You probably don't want to catch it immediately after open. Because you need to communicate a failure mode to your caller, and what are you going to do then? Throw another exception? Fall back to error codes? Unwind manually with error codes all the way up? And if so, logging was the wrong thing to do, since the caller is probably going to log as well, based on the same philosophy, and you're going to get loads of error messages for one failure mode, and no stack trace (yes, there are ways of getting semi-decent stack traces from C++ exceptions).
Exception safety has a lot of problems in C++, but it's mostly around allowing various implicit operations to throw (copies, assignments, destructors on temporaries and so forth). And that does come down to poor design of C++.
> you're going to get loads of error messages for one failure mode
> and no stack trace
That load of error messages, meaning every layer describes what it tried to do and what failed, IS a user-readable variant of a stack trace. The user would be confused by a real stack trace, but nice error messages serve both the user and the developer.
I think I agree with the commenters that the article kind of uses exceptions wrong.
But this shows a problem with C++ exceptions: C++ codebases are littered with bad exception usage because "normal" programmers don't get it. Most of us didn't get it for a long time. So they are complex enough that using them is a risk.
Anyway, IMO C++ exceptions have two fundamental problems:
* lack of a stack trace, and so a lack of _debuggability_; that's why people try/catch everything
* the destructor problem (there are actually exceptions that cannot be propagated further and are lost)
That's not his own invention, but an idea introduced by the language's founder to sell the idea of RAII and exceptions. He did say that he would prefer the C version, but then why use C++ in the first place?
Or do without exceptions for control flow. For example Elixir does have try/catch but it's used very rarely because most functions return tuples with the first element being either :ok or :error. Then we can pattern match it and return an :error to the caller if we have to, possibly bubbling up several levels return after return. Or let the process crash and restart while the rest of the application keeps running. A surprising number of errors eventually fix themselves (API calls, disk space, missing data) when you design the system to attempt more times to complete its tasks. That's not only a characteristic of Elixir and the BEAM languages. You can do it more or less easily on any language. Maybe you need a queue and workers reading from the queue and all it takes to manage them, and BEAM makes it convenient by including most of it.
I agree with the other comments that this understanding of exceptions is wrong. He's not wrong about these two points:
- Correctness: you don’t know if the exception type you’ve caught matches what the code throws
- Exhaustiveness: you don’t know if you’ve caught all exceptions the code can throw
But that's actually not a problem. Most of the time you shouldn't catch a specific exception (so always correct) and if you are catching then you should catch them all (exhaustive). A better solution is merely:
That's all you need. But actually this code is also bad because this function f() shouldn't have a try/catch in it that prints an error message. That's a job for a function (possibly main()) further up the call stack.
I didn’t really understand the writer’s comments with exceptions and I don’t code in C++.
Their main complaint about exceptions seems to be that you can’t handle all of them and that you don’t know which you’ll get? If we compare this to python, what’s the difference here? It looks like it works the same here as in python; you catch and handle some exceptions, and others that you miss will crash your program (unless you catch the base class). Is there something special about C++ that makes it work differently, or would the author have similar problems with python?
"You can't handle all of them and you don't know which you'll get" is a great summary of the first two problems, and this same problem also applies to Python. I'll add that these only start becoming an issue when you start adding more exceptions to your codebase, especially if those exceptions start appearing deep in a call stack and seemingly unrelated code starts needing to be aware of them or handle them.
The third problem (RAII) is a C++-specific problem that Python doesn't have. Partly because in Python try/catch doesn't introduce a new scope, and partly because Python tends not to need a lot of RAII because of the nature of interpreted languages.
In normal use it's essentially the same, yes. The one interesting edge case that might catch some people out: there's actually nothing special about std::exception. You can throw anything - "throw 123;" is valid and would skip any std::exception handlers - but you can also just catch anything with "catch (...)".
> would the author have similar problems with python?
I would expect yes. It is true that in a lot of modern languages you need to live with that dynamism. But to people used to C, not knowing whether the error handling is exhaustive feels deeply uncomfortable.
Or, if you don't want to go to the trouble of writing an RAII wrapper class for FILE*, just use scope_guard and (after determining that fopen() succeeded) register a lambda to close the FILE* on exiting the function (including by throwing an exception). I'm not a huge fan of scope_guard (or defer() in other languages), but it gets the job done for one-off cases.
All this hassle can be avoided by using `cleanup` compiler attribute.
Manage classical C resources by auto-cleanup variables and do error-handling the normal way. If everything is OK, pass the ownership of these resources from auto-cleanup variables to C++ ctor.
Note this approach plays nicely with C++ exception, and will enter C standard in the form of `defer`.
The first two problems can be solved in a straightforward way with more custom exception types. For the "bigger problem", catch(...) can be used to prevent your code from crashing. If you really want to handle each case explicitly you could also use enums in combination with compiler flags that enable exhaustive checking.
Honestly, I thought the diatribe would focus on needless complexity.
The starting example is how I'd do it in C:
```
void f(const char* p) // unsafe, naive use
{
FILE *f = fopen(p, "r"); // acquire
// use f
fclose(f); // release
}
```
Wouldn't the simpler solution be ensuring your function doesn't exit before release? All that C++ destructor stuff appears somewhat unnecessary and, as the author points out, creates even more problems.
In C, you're correct. The problem is that, in C++, one must account for the fact that anything could throw an exception. If something throws an exception between the time that f is opened and f is closed, the file handle is leaked. This is the "unsafe" that Bjarne is talking about here. Specifically, exception unsafety that can leak resources.
As an aside, it is one of the reasons why I finally decided to let go of C++ after 20 years of use. It was just too difficult to teach developers all of the corner cases. Instead, I retooled my system programming around C with model checking to enforce resource management and function contracts. The code can be read just like this example and I can have guaranteed resource management that is enforced at build time by checking function contracts.
The function contracts are integrated into the codebase. Bounded model checking tools, such as CBMC, can be used to check for integer UB, memory safety, and to evaluate custom user assertions. The latter feature opens the door for creating function contracts.
I include function contracts as part of function declarations in headers. These take the form of macros that clearly define the function contract. The implementation of the function evaluates the preconditions at the start of the function, and is written with a single exit so the postconditions can be evaluated at the end of the function. Since this function contract is defined in the header, shadow functions can be written that simulate all possibilities of the function contract. The two are kept in sync because they both depend on the same header. This way, model checks can be written to focus on individual functions with any dependencies simulated by shadows.
The model checks are included in the same project, but are separate from the code under instrumentation, similar to how unit tests are commonly written. I include the shadow functions as an installation target for the library when it is installed in development mode, so that downstream projects can use existing shadow functions instead of writing their own.
The problem is that it's easy to do it wrong and the C compiler doesn't help you. RAII prevents you from leaking the resource, but the complaint in the post is that it can be cumbersome to use RAII in C++ if acquisition can fail and you want to handle that failure.
That means you cannot use early exit, and all your variables must be checked as to whether they were initialized (which on top of the checks might also require further state).
You can use goto to jump to one of several exit conditions based on the level of cleanup you need. It also nicely unifies all error exits into one place. The kernel makes heavy use of this style.
>The first is that our error message may not be correct. It’s possible that the exception we’ve caught was not introduced by opening this file, and, the errno may not reflect the errno at the time fopen was called.
All that is needed is a better File_error type that includes the error that happened.
This post completely misunderstands how to use exceptions and provides "solutions" that are error-prone to a problem that doesn't exist.
And this is coming from someone that dislikes exceptions.
what about the misreported errno problem?
obviously the errno should have been obtained at the time of failure and included in the exception, maybe using a simple subclass of std exception. trying to compute information about the failure at handling time is just stupid.
My biggest beef with exceptions is invisible code flow. Add the attribute throws<t> to function signature (so that its visible) and enforce handling by generating compiler error if ignored. Bubbling up errors is OK. In essence, this is result<t,e>. Thats OK. Even for constructors.
What I dislike is having a mechanism to skip 10 layers of bubbling deep inside call stack by a “mega” throw of type <n> which none of the layers know about. Other than
This strikes me as all wrong. The whole point of exceptions is control flow and destructors. By getting rid of RAII for the sake of simplifying the callsite a little, the author fails to obtain the real advantage, which is automatic resource unwinding of all local resources under failure conditions both during initialization and usage.
If you want to simplify the callsite, just move the exception handling to a higher scope. I can admit it’s a little irritating to put a try/catch in main but it’s trivial to automate, and most programs are not written inline in main.
The main problems I see with destructors have to do with hidden control flow and hidden type information. That said, hiding exceptional control flow from the mainline control flow of a function is also a useful feature, and the “exception type” of any given function is the recursive sum of all the exception types of any functions or operators it calls, including for example allocation. That quickly becomes either an extremely large and awkward flat sum or an extremely large and awkward nested sum, with awkward cross-namespace inclusion, and it becomes part of the hard contract of your API. This means, transitively, that if any deep inner call needs to extend or break its error types, it must either propagate all the way up into all of your callers, or you must recover and merge that error into _your_ API.
For _most_ usecases, it is just simpler to implement a lean common interface such as std::exception and allow callers who care to look for more detailed information that you document. That said, there is a proposal (P3166 by Lewis Baker) for allowing functions to specify their exception set, including via deduction (auto):
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p31...
Java and Rust are the only language that I know of that has proper handling of exceptions; mandatory declaration as part of method declaration, since exceptions ARE an integral part of the contract. (Yes, I consider the Result<T, E> being corresponding to exception declaration, since the return value MUST be checked prior to use of T.)
Swift user here: I have to say one of the best features of Swift is the exception handling. Which is to say, exceptions in Swift are not C++/Java/Obj-C style exceptions, but instead are a way to return an error result from a function. And Swift enforces that the error is handled.
That is, a `throw` statement in Swift simply returns an `Error` value to the caller via a special return path instead of the normal result.
More explicitly, a Swift function declared as:
Could be read as More here: https://github.com/swiftlang/swift/blob/main/docs/ErrorHandl...I saw that in Swift, a method can declare it throws an exception, but it doesn't (can't) declare the exception _type_. I'm not a regular user of Swift (I usually use Java - I'm not sure what other languages you are familiar with), but just thinking about it: isn't it strange that you don't know the exception type? Isn't this kind of like an untyped language, where you have to read the documentation on what a method can return? Isn't this a source of errors itself, in practise?
> isn't it strange that you don't know the exception type?
Java experience taught us that, when writing an interface, it is common not to know the exception type. You often can’t know, for example, whether an implementation can time out (e.g. because it will make network calls) or will access a database (and thus can throw RollbackException). Consequently, when implementing an interface, it is common in Java to wrap exceptions in an exception of the type declared in the interface (https://wiki.c2.com/?ExceptionTunneling)
Yes I know Java and the challenges with exceptions there (checked vs unchecked exceptions, errors). But at least (arguably) in Java, the methods (for checked exceptions at least) declares what class the exception / exceptions is. I personally do not think wrapping exceptions in other exception types, in Java, is a major problem. In Swift, you just have "throws" without _any_ type. And so the caller has to be prepared for everything: a later version of the library might suddenly return a new type of exception.
One could argue Rust is slightly better than Java, because in Rust there are no unchecked exceptions. However, in Rust there is panic, which is in a way like unchecked exceptions, which you can also catch (with panic unwinding). But at least in Rust, regular exceptions are fast.
> And so the caller has to be prepared for everything: a later version of the library might suddenly return a new type of exception.
But you get the same with checked exceptions in Java. Yes, an interface will say foo can only throw FooException, but if you want to do anything when you get a FooException, you have to look inside to figure out what exactly was wrong, and what’s inside that FooException isn’t limited.
A later version of the library may suddenly throw a FooException with a BarException inside it.
What I liked about Bosst's error_code[1], which is part of the standard library now, is that it carties not just the error but the error category, and with it a machinery for categories to compare error_codes from other categories.
So as a user you could check for a generic file_not_found error, and if the underlying library uses http it could just pass on the 404 error_code with an http_category say, and your comparison would return true.
This allows you to handle very specific errors yet also allow users to handle errors in a more generic fashion in most cases.
[1]: https://www.boost.org/doc/libs/latest/libs/system/doc/html/s...
When using a language forcing checked exceptions, you would know, wouldn't you?
Swift gained limited support for “typed throws” in Swift 6.0 (2024).
https://github.com/swiftlang/swift-evolution/blob/main/propo...
I say limited because the compiler doesn't (yet, as of 6.2) perform typed throw inference for closures (a closure that throws is inferred to throw `any Error`). I have personally found this sufficiently limiting that I've given up using typed throws in the few places I want to, for now.
Another really nice thing about Swift is that you have to put the `try` keyword in front of any expression that can throw. This means there's no hidden control flow: if some function call can throw, you're informed at the call site and don't have to look at the function declaration.
that sounds very similar to noexcept(bool) to me, except that noexcept can be computed, for example by deriving from some generic trait, and we presume throws unless specified non-throwing by noexcept.
From what I can read, Swift gives you a stack trace, which is good. At the moment I'm using Go, where the stack is only captured where the panic is triggered, which could be much higher up than the original error. That makes it a lot more unwieldy to figure out where an error happens, because everyone uses:

> if err != nil { return err }
Swift doesn't capture a stack trace in the `Error` object, but Xcode can break when an error is thrown if you set a “Swift Error Breakpoint”, and the debugger will show you the stack trace. Under the hood it just sets breakpoints on the runtime functions `swift_willThrow` and `swift_willThrowTypedImpl`.
In high-level code, pretty much everything can fail in many different ways, and usually you're either just passing the error up or handling it in some catchall manner. Rust's behavior makes sense for its use cases, but it'd get exhausting doing this in like a web backend.
It's already too exhausting to use for OOM, which is why Rust effectively punted on that from the very beginning. And the ironic thing is that anyhow::Error (or similar) seems poised to become idiomatic in Rust, which AFAIU always allocates, while the extremely belated try_ APIs... does anybody even use them?
It's a shame. Were I designing a "low-level" or "systems" language, rather than put the cart before the horse and pick error results or exceptions, my litmus test would be how to make handling OOM as easy as possible. Any language construct that can make gracefully handling OOM convenient is almost by definition the fabled One True Way. And if you don't have a solution for OOM from the very beginning (whether designing a language or a project), it's just never gonna happen in a satisfactory way.
Lua does it--exceptions at the language level, and either setjmp/longjmp or C++ exceptions in the VM implementation. But for a strongly typed, statically compiled language, I'm not sure which strategy (or hybrid strategy) would be best.
I think Rust's choice to panic on OOM is the right choice for 95% of code people write. Handling OOM everywhere would be absurdly verbose, and OOM can be very tricky to recover from without careful top-down design, because it's very easy for error-handling code to itself try to allocate (print/format a message), leading to a process abort. The mistake was not having the fallible APIs on allocating containers from the start, for the users that do care and can tangibly recover.
Depending on the OS you likely won't even get the chance to handle the error because the allocation never fails, instead you just over commit and kill the process when physical memory gets exhausted.
I guess what I'm trying to say is designing your language's error handling primitives around OOM is probably not a good idea, even in a systems programming language, because it's a very rare class of generally unrecoverable error that only very carefully designed code can recover from.
Anyhow allocating isn't that absurd either. It keeps the size of Result<T, E> down by limiting the 'E' to be a pointer. Smaller Result<T, E> can help the compiler generate better code for passing the return value, and the expectation is the error path is rare so the cost of allocating might be outweighed by the lighter Result<T, E> benefits. Exceptions take a similar standpoint, throwing is very slow but the non-exceptional path can (theoretically) run without any performance cost.
you mean kind of like dynamically implementing std::exception? hmm.
Hm, not sure how you'd do this. OOM error auto bubbling up sounds dangerous because the inner code might not leave things in a consistent state if it exits unexpectedly, so that makes sense to manually handle, but it's tedious. Rust at least has nice ? and ! syntax for errors, unlike Go where the error-prone error handling actually ruins the entire language.
If only there were some sort of automated idiom for cleaning up state under failure modes
> while the extremely belated try_ APIs... does anybody even use them?
I want to say the main driver for those right now is Rust for Linux since the "normal" panicking behavior is generally undesirable there.
Don't forget, failure modes pierce abstraction boundaries. An abstraction that fully specifies failure modes leaks its implementation.
This is why I think checked exceptions are a dreadful idea; that, and the misguided idea that you should catch exceptions.
Only code close to the exception, where it can see through the abstraction, and code far away from the exception, like a dispatch loop or request handler, where the failure mode is largely irrelevant beyond 4xx vs 5xx, should catch exceptions.
Annotating all the exceptions on the call graph in between is not only pointless, it breaks encapsulation.
This is a better way of expressing what I had been thinking about putting exception details behind an interface. Except that, in my mind, encapsulating errors is just good design rather than implementation hiding, since the programmer might want to express a public error API, for example to tell the user whether a given fopen failed because the file wasn't found or because of a filesystem fault.
If your error codes leak implementation details through the whole call stack, you are doing it wrong. Each error code describes what failed in terms of its function's call semantics. A layer isn't supposed to just return this upwards (that wouldn't make sense) but to use it to choose its own error return code, which is in the abstraction domain of its function interface.
`Result<T, E>` comes from Haskell's `Either a b` type. F# also has a `Result<'T, 'E>` type.
It's funny how often functional programming languages lead the way, but imperative languages end up with the credit.
Actually (insert nerd emoji), this is a direct descendant of the tagged union type, which existed in ALGOL 68, an imperative language.
Java's checked exceptions are just a (very anti-ergonomic) implementation of the tagged union type.
In fairness, Rust also has unchecked exceptions (panics, if you use panic-unwind).
Java too has unchecked exceptions, though the post praising "mandatory declaration" didn't mention it for either.
But they CAN be caught with a standard catch, which may be non-intuitive if you don't know, in advance, that they exist and can be caught.
Author completely misunderstands how to use exceptions and is just bashing them. A lot of what he says is inaccurate if not outwardly incorrect.
Also Bjarne's control of C++ is quite limited, and he is semi-retired, so asking him to "fix his language" is fairly misguided. It's designed by a committee of 200+ people.
Anyway what you want seems to be to not use exceptions, but monads instead. These are also part of the standard, it's called std::expected.
Agreed. Author is trying to mix paradigms. Simplest approach if they want local handling and non-propagation of errors is to just have the file holder not check for open success, and check that manually after construction. Then you get guaranteed closure of file no matter how the function is exited.
It goes against "resource acquisition is initialization", though, to have objects which exist but can't be used. (I think that's the relevant pattern?)
However, where the language and its objects and memory end and the external world begins, like files and sockets... that's always tricky.
this may lose the value of errno, right?
Yes, it would. It would be trivial to add errno as an instance variable for File_Handle class though.
> Author completely misunderstands how to use exceptions and is just bashing them. A lot of what he says is inaccurate if not outwardly incorrect.
Do you mind to elaborate what you believe are the misunderstandings? Examples of incorrect/inaccurate statements and/or an article with better explanations of mentioned use cases would be helpful.
> it's called std::expected
How does std::expected play together with all other possible error handling schemas? Can I get unique ids for errors to record (error) traces along functions? What is the ABI of std::expected? Stable(ish) or is something planned, ideally to get something C compatible?
Well, he's using try/catch locally. You're not supposed to handle errors locally with exceptions. The whole point is inversion of control, you manage them at the point where you can do meaningful recovery, which then triggers the cleanup of the whole stack of frames once the exception bubbles up, rather than systematically cleaning up on the normal control flow path.
Regardless, in his example, he could achieve what he wants by wrapping the try in a lambda, and returning either the value from try or nullopt from catch. But clearly, that's just converting exceptions to another error-handling mechanism because he isn't doing it right.
He claimed that not handling an exception causes the program to crash, that's just plain incorrect. To be fair many people use the term "crash" liberally.
std::expected or equivalent is often used with std::error_code, which is an extensible system (error codes are arranged in categories) that among others interops with errno.
> Well, he's using try/catch locally. You're not supposed to handle errors locally with exceptions.
Tell that to Bjarne!
> Do you mind to elaborate what you believe are the misunderstandings?
Exceptional C++, by Herb Sutter is an excellent resource that explains this. It’s quite outdated these days but the core concepts still hold well.
When done well, most of your code can be happy path programming with errors/invalid state taken care of mostly automatically.
However it’s also very easy not to do this well, and that ends up looking like the suggestions the author of the article makes.
https://godbolt.org/z/363oqqKfv
Most criticism of C++ comes from people not really into the language. Sure, learning curve might be an issue. But if you are really into C++, there are a lot of things to like about it. At least I do love it. However, only since C++11. Before that, the language felt very strange to me, possibly due to the same effect, I didn't know enough about it.
I'm the opposite. I like C++ but only until C++11. From that point onward, the rules got WAY more complicated, and there's only so much I can/want to hold in my brain at once, I just prefer simpler rules I guess.
Occasionally I do like to use auto or lambdas from C++11 but even then I have to remember more rules for initialization because braces were introduced.
This is true of me as well. I started with C++ in 1991, and I generally knew the language spec quite well (as far as users go) up through C++11.
But now the language packs so much complexity that I can't be sure I understand all of the code I'm looking at.
My usual quality standard for production code is that it doesn't just need to be correct, it needs to be obviously correct. So that's a problem.
Doubly so when the other programmers on the team aren't C++ geeks.
You probably do want that exception to bubble up, actually. You probably don't want to catch it immediately after open. Because you need to communicate a failure mode to your caller, and what are you going to do then? Throw another exception? Fall back to error codes? Unwind manually with error codes all the way up? And if so, logging was the wrong thing to do, since the caller is probably going to log as well, based on the same philosophy, and you're going to get loads of error messages for one failure mode, and no stack trace (yes, there are ways of getting semi-decent stack traces from C++ exceptions).
Exception safety has a lot of problems in C++, but it's mostly around allowing various implicit operations to throw (copies, assignments, destructors on temporaries and so forth). And that does come down to poor design of C++.
> you're going to get loads of error messages for one failure mode
> and no stack trace
Those loads of error messages, where every layer describes what it tried to do and what failed, ARE a user-readable variant of a stack trace. The user would be confused by a real stack trace, but nice error messages serve both the user and the developer.
I think I agree with the commenters that the article kind of uses exceptions wrong.
But this shows a problem with C++ exceptions: C++ codebases are littered with bad exception usage because "normal" programmers don't get it. Most of us didn't get it for a long time. They are complex enough that using them is a risk.
Anyway, IMO C++ exceptions have two fundamental problems:
* lack of stack traces, and so a lack of _debuggability_; that's why people try/catch everything
* the destructor problem (there are exceptions that cannot be propagated further and are lost)
This is a common problem with try-catch syntax. An alternative, and arguably more useful, syntax would be one where the "use fh" part is not covered by the exception handler. This is covered (in an ML context) in https://www.microsoft.com/en-us/research/publication/excepti...

Don't use try/catch in the first place; that's where his error lies.
That's not his own invention, but an idea introduced by the language's creator to sell the idea of RAII and exceptions. He did say that he would prefer the C version; but then why use C++ in the first place?
Or do without exceptions for control flow. For example Elixir does have try/catch but it's used very rarely because most functions return tuples with the first element being either :ok or :error. Then we can pattern match it and return an :error to the caller if we have to, possibly bubbling up several levels return after return. Or let the process crash and restart while the rest of the application keeps running. A surprising number of errors eventually fix themselves (API calls, disk space, missing data) when you design the system to attempt more times to complete its tasks. That's not only a characteristic of Elixir and the BEAM languages. You can do it more or less easily on any language. Maybe you need a queue and workers reading from the queue and all it takes to manage them, and BEAM makes it convenient by including most of it.
The page about try/catch explains it well https://hexdocs.pm/elixir/try-catch-and-rescue.html
I agree with the other comments that this understanding of exceptions is wrong. He's not wrong about these two points:
- Correctness: you don’t know if the exception type you’ve caught matches what the code throws
- Exhaustiveness: you don’t know if you’ve caught all exceptions the code can throw
But that's actually not a problem. Most of the time you shouldn't catch a specific exception (so always correct) and if you are catching then you should catch them all (exhaustive). A better solution is merely:
That's all you need. But actually this code is also bad, because this function f() shouldn't have a try/catch in it that prints an error message. That's a job for a function (possibly main()) further up the call stack.

It's a bad post, but I find it funny someone thinks C++ can be fixed (:
I didn’t really understand the writer’s comments with exceptions and I don’t code in C++.
Their main complaint about exceptions seems to be that you can’t handle all of them and that you don’t know which you’ll get? If we compare this to python, what’s the difference here? It looks like it works the same here as in python; you catch and handle some exceptions, and others that you miss will crash your program (unless you catch the base class). Is there something special about C++ that makes it work differently, or would the author have similar problems with python?
"You can't handle all of them and you don't know which you'll get" is a great summary of the first two problems, and, this same problem also applies to Python. I'll add that these only start becoming an issue when you start adding more exceptions to your codebase, especially if those exceptions start appearing deep in a callstack and seemingly unrelated code starts needing to be aware of them/handle them.
The third problem (RAII) is a C++-specific problem that Python doesn't have. Partly because in Python try/catch doesn't introduce a new scope, and partly because Python tends not to need a lot of RAII because of the nature of interpreted languages.
I found this video a fascinating take on comparing C++ to Python if you haven't seen it: https://www.youtube.com/watch?v=9ZxtaccqyWA
In normal use it's essentially the same, yes. The one interesting edge case that might catch some people out is that there's actually nothing special about std::exception: you can throw anything. `throw 123;` is valid and would skip any std::exception handlers, but you can also catch anything with `catch (...)`.
> would the author have similar problems with python?
I would expect yes. It is true, that in a lot of modern languages you need to live with that dynamism. But to people used to C, not knowing that the error handling is exhaustive, feels deeply uncomfortable.
Or, if you don't want to go to the trouble of writing an RAII wrapper class for FILE*, just use scope_guard and (after determining that fopen() succeeded) register a lambda to close the FILE* on exiting the function (including by throwing an exception). I'm not a huge fan of scope_guard (or defer() in other languages), but it gets the job done for one-off cases.
All this hassle can be avoided by using the `cleanup` compiler attribute.
Manage classical C resources with auto-cleanup variables and do error handling the normal way. If everything is OK, pass ownership of these resources from the auto-cleanup variables to the C++ ctor.
Note this approach plays nicely with C++ exceptions, and will enter the C standard in the form of `defer`.
Yes, but now your code is no longer C or C++ standards compliant as it relies on compiler-specific attributes, if that matters to the programmer.
Unfortunately, even the Linux kernel is no longer standard C, because it uses GCC compiler extensions (and adding MS ones is currently being discussed too).
A kernel will make use of asm, and can't abstract over the machine, so it will always be unportable and relying on compiler extensions.
The first two problems can be solved in a straightforward way with more custom exception types. For the "bigger problem", catch(...) can be used to prevent your code from crashing. If you really want to handle each case explicitly you could also use enums in combination with compiler flags that enable exhaustive checking.
The classic "Frequently Questioned Answers" is the ultimate takedown of this abomination of a language.
https://yosefk.com/c++fqa/
Honestly, I thought the diatribe would focus on needless complexity.
The starting example is how I'd do it in C:
```
void f(const char* p) // unsafe, naive use
{
    FILE* f = fopen(p, "r"); // acquire
    // ... use f ...
    fclose(f);               // release
}
```
Wouldn't the simpler solution be ensuring your function doesn't exit before release? All that C++ destructor stuff appears somewhat unnecessary and, as the author points out, creates even more problems.
In C, you're correct. The problem is that, in C++, one must account for the fact that anything could throw an exception. If something throws an exception between the time that f is opened and f is closed, the file handle is leaked. This is the "unsafe" that Bjarne is talking about here. Specifically, exception unsafety that can leak resources.
As an aside, it is one of the reasons why I finally decided to let go of C++ after 20 years of use. It was just too difficult to teach developers all of the corner cases. Instead, I retooled my system programming around C with model checking to enforce resource management and function contracts. The code can be read just like this example and I can have guaranteed resource management that is enforced at build time by checking function contracts.
Could you elaborate on the model checking? You have two codebases then (model and C), or something more integrated?
The function contracts are integrated into the codebase. Bounded model checking tools, such as CBMC, can be used to check for integer UB, memory safety, and to evaluate custom user assertions. The latter feature opens the door for creating function contracts.
I include function contracts as part of function declarations in headers. These take the form of macros that clearly define the function contract. The implementation of the function evaluates the preconditions at the start of the function, and is written with a single exit so the postconditions can be evaluated at the end of the function. Since this function contract is defined in the header, shadow functions can be written that simulate all possibilities of the function contract. The two are kept in sync because they both depend on the same header. This way, model checks can be written to focus on individual functions with any dependencies simulated by shadows.
The model checks are included in the same project, but are separate from the code under instrumentation, similar to how unit tests are commonly written. I include the shadow functions as an installation target for the library when it is installed in development mode, so that downstream projects can use existing shadow functions instead of writing their own.
The problem is that it's easy to do it wrong and the C compiler doesn't help you. RAII prevents you from leaking the resource, but the complaint in the post is that it can be cumbersome to use RAII in C++ if acquisition can fail and you want to handle that failure.
That means you cannot use early exit, and all your variables must be checked as to whether they were initialized (which on top of the checks might also require further state).
You can use goto to jump to one of several exit conditions based on the level of cleanup you need. It also nicely unifies all error exits into one place. The kernel makes heavy use of this style.
sure, but that has limitations as well, since gotos can't cross lexical scopes, so you can't introduce variables later on, and it's easy to mess up.
Destructors are a higher-level and safer approach.
> since gotos can't cross lexical scopes, so you can't introduce variables later on
That's a C++ specific limitation, it works just fine in C.
>The first is that our error message may not be correct. It’s possible that the exception we’ve caught was not introduced by opening this file, and, the errno may not reflect the errno at the time fopen was called.
All that is needed is a better File_error type that includes the error that happened.
If you use names that frigging bad, exceptions are the least of your problems.