So the f-string literal produces a basic_formatted_string, which is basically a reified argument list for std::format, instead of a basic_string. This allows e.g. println to be overloaded to operate on basic_formatted_string without allocating an intermediate string:
std::println("Center is: {}", getCenter());
std::println(f"Center is: {getCenter()}"); // same thing, no basic_string allocated
In exchange we have the following problems:
// f-strings have unexpected type when using auto or type deduction.
// basic_string is expected here, but we get basic_formatted_string.
// This is especially bad because basic_formatted_string can contain
// dangling references.
auto s = f"Center is: {getCenter()}";
// f-strings won't work in places where providing a string currently
// works by using implicit conversion. For example, filesystem methods
// take paths. Providing a string is okay, since it will be implicitly
// converted to a path, but an f-string would require two implicit
// conversions, first to a string, then to path.
std::filesystem::exists(f"file{n}.dat"); // error, no matching overload
There are two other proposals to fix these problems.
Most new features of C++ are introduced to fix problems created by previously new features added to C++.
This is becoming such a tiresome opinion. How are concepts fixing a problem created by previous features added to the language? What about ranges? Auto? Move semantics? Coroutines? Constexpr? Consteval? It is time for this narrative to stop.
Move semantics is only needed because C++ introduced implicit copies (the copy constructor) and they of course fucked it up by making them non-destructive, so they aren't even 'zero cost'.
Constexpr and consteval are hacks that 1) should have just been the default, and 2) shouldn't even be on the function definition; it should instead have been a keyword at the usage site (and just use const):
int f() { ... } // any old regular function
const int x = f(); // this always gets evaluated at compile time (or, if it can't be, fails to compile)
int y = f(); // this is evaluated at runtime
That would be the sane way to do compile time functions.
I agree that I would have preferred destructive moves, but move semantics makes C++ a much richer and better language. I kinda think pre-move semantics, C++ didn't quite make "sense" as a systems programming language. Move semantics really tied the room together.
const int x = f(); // this always gets evaluated at compile time (or, if it can't be, fails to compile)
That's very silly. You're saying this should fail to compile?
void foo(int x) {
const int y = bar(x);
}
There's no way the compiler can run that, because it doesn't know what x is (indeed, it would have a different value every time you run the function with a new argument). So your proposal would ditch const completely except in the constexpr case, everything runtime would have to be mutable.
So you respond "well, I didn't mean THAT kind of const, you should have a different word for compile-time constants and run-time non-mutability!" Congratulations, you just invented constexpr.
There are many bad things about C++, but constexpr ain't one of them.
>There's no way the compiler can run that, because it doesn't know what x is (indeed, it would have a different value every time you run the function with a new argument). So your proposal would ditch const completely except in the constexpr case, everything runtime would have to be mutable.
Yeah, I see no problem with that.
Non-constant-expression usage of 'const' has always just seemed like a waste of time to me; I've never found it useful.
But I guess a lot of people really like typing const and "preventing themselves from accidentally mutating a variable" (when has that ever happened?), so as a compromise I guess you can have a new keyword to force constant expressions:
constexpr auto x = foo(); // always eval at compile time
const auto x = foo(); // old timey const, probably runtime but maybe got constant folded.
but it's not really a big deal what the keyword is; the main point was that "give me a constant value" should be at the usage site, not at the function definition.
Could have been, if backwards compatibility was not a thing indeed.
Move constructors are not needed; they don't solve a 'problem', but improve on previous semantics.
Eh not really accurate because C's const means immutable not actually constant. So I get introducing constexpr to actually mean constant. But, yeah, constexpr x = f() should probably have worked as you described.
const is different in C++ from const in C. const variables in C++ are proper compile-time constants. In C they are not (the nearest equivalents are #define and enum values).
So in C++ "const x = EXPR" would make sense to request compile-time evaluation, but in C it wouldn't.
I thought the whole point of ranges is to solve problems created by iterators, move semantics to take care of scenarios where NRVO doesn't apply, and constexpr and auto because we were hacking around them with macros (if you can even call it that)?
Iteratively improving on previously released features does not imply fixing issues caused by those features.
Constexpr and auto have nothing to do with macros.
To me, redoing things that are not orthogonal implies that the older version is being fixed. Being fixed implies that it was incorrect. And to clarify, sure, auto types and constexpr are entirely new things we didn't have (auto changed meaning, but yeah), but we were trying to "get something like that" using macros.
Honestly can't tell if this is sarcasm. XD
One of the two other proposals is user defined type decay, which lets you choose what type auto will be deduced as. i.e. "auto x = y", x might not have the type of y, instead it can be anything you choose…
This is like implicit type conversion on steroids. And all this because C++ lacks the basic safety features to avoid dangling pointers.
Stop using C++ already!
> lacks the basic safety features to avoid dangling pointers
It doesn't. Unfortunately, C++ programmers choose not to use basic safety features for performance reasons (or aesthetics, or disagreement with the idea that a language should take into account that a programmer might make a mistake, but at least performance is a good one), but C++ actually has quite a few tricks to prevent the memory management issues that cause C/C++ bugs.
Using modern C++ safety features won't completely prevent bugs and memory issues, just like using Rust won't, but the mess that causes the worst bugs is the result of a choice, not the language itself.
Tell that to the designers of the C++ standard library, and the new features being added. They're the ones that keep adding new features that depend on references and pointers instead of std::shared_ptr or std::unique_ptr.
I think one problem here is that a lot of codebases have their own smart pointers and unfortunately the only currency type is the unsafe one :(
I don't think this is the only reason. If it were, they could easily have added overloads that work with both std smart pointers and with plain pointers for compatibility. Or they could add pointer type template parameters, maybe with concepts for the right ownership semantics.
> the mess that causes the worst bugs is the result of a choice, not the language itself.
Even the creator of the language admitted that the "just write better code bro" approach doesn't work.
Has he? He at least used to be the biggest proponent of it: "just follow these standards and development practices that I had to meticulously develop for the US military, that no tool can automatically check, and you'll be fine!"
First we need to rewrite the likes of LLVM, GCC, V8, CUDA,... into something else.
Which is not going to happen in our lifetime; even the mighty Rust depends on LLVM for its reference implementation.
Stop producing new C++ code as much as possible, then. It doesn't help to nitpick the weakest possible interpretation without acknowledging it as such.
I would not say stop using it. But just stick to the really needed features, and stop adding more features every 3 years. Nobody can keep up, not the developers, not the compilers... it's just insane.
Smart pointers were added to the language 14 years ago. You're free to use old C++ with raw pointers and manual memory management, risking dangling pointers, or use modern C++, which provides smart pointers to avoid those issues.
And yet most if not all of the standard library keeps using pointer or reference arguments, not the new smart pointers that would actually document the ownership semantics.
Most arguments to standard library calls don't need to take ownership over memory, using a raw pointer or (const) reference is correct. Generally - smart pointers to designate ownership, raw pointers to "borrow".
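(To illustrate the convention being described; Widget, adopt and inspect are made-up names, not std API:)
#include <memory>

struct Widget { int id = 0; };

// Takes ownership: the caller hands the Widget over for good.
void adopt(std::unique_ptr<Widget> w) { /* keeps or destroys w */ }

// Borrows: may use the Widget only for the duration of the call.
void inspect(const Widget& w) { /* reads w, stores nothing */ }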
If a function takes a raw pointer, you need to check the docs to know if it is taking ownership or not. There is no general rule that applies to the whole of std that functions taking raw pointers assume that they are borrowing the value.
And even if you could assume that pointer parameters represent borrowing, they are definitely not guaranteed to represent scoped borrowing: the function could store them somewhere, and then you end up with other issues. So shared_ptr is the only solution if you care about safety to represent a borrowed pointer. And if that's too costly but the std designers did care about safety, they could have introduced a std::borrowed_ptr<T> that is just a wrapper around T*, used uniformly in all std functions that borrow a pointer and guarantee not to store it.
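(A minimal sketch of what such a wrapper could look like; borrowed_ptr is the commenter's hypothetical, and nothing is enforced here beyond documentation:)
#include <string>

// Non-owning pointer whose type says "borrowed, do not store past the call".
template <class T>
class borrowed_ptr {
public:
    explicit borrowed_ptr(T* p) : p_(p) {}
    T* get() const { return p_; }
    T& operator*() const { return *p_; }
    T* operator->() const { return p_; }
private:
    T* p_; // never owned, never freed here
};

// By convention, a function taking borrowed_ptr promises not to keep it.
void print_name(borrowed_ptr<const std::string> name);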
Hah. What's interesting about this is that since it doesn't require everything to actually be converted to a string, one can implement things other than just printing. So you could also implement interpretation, e.g.:
pylist = python(f"[ y*{coef} for y in {pylist} if y > {threshold}]")
It also allows for things that will set off spidey senses in programmers everywhere, despite theoretically being completely safe assuming mydb::sql() handles escaping in the format string:
cursor = mydb::sql(f"UPDATE user SET password={password} WHERE user.id={userid}")
Yeah. You really want "mydb::sql" to not take a basic_string, only a basic_formatted_string, so it will not compile if the conversion actually happened somehow.
Yes. The basic idea is that there's a specifier that allows a formatted string to transparently decay into an ordinary string (à la array-to-pointer decay) so that "auto" doesn't produce dangling references, and so that chains of more than one implicit conversion can take place.
This seems pretty similar to Rust's `format_args!` macro, which however avoids these issues by being much more verbose and thus something people are less likely to use like in those examples. It does however have issues due to the abundant use of temporaries, which makes it hard to use when not immediately passed to a function. I wonder if C++'s f-strings have the same issue.
It would be nice to take care to allow the use of GNU gettext() or any other convenient translation tool.
Recap: _("foo %s") macroexpands to gettext("foo %s"), then "foo %s" is extracted to a lexicon of strings by an external tool, which can be translated and compiled into .po files, which are loaded at runtime so gettext() can use a translated string based on $LC_MESSAGES. (And there is also _N(..) for correct plural handling.)
To do this with f-strings, _(f"foo {name()}") (which is a bit ugly...) needs to translate to make_formatted_string(_("foo {}"), name()) -- note that the _(...) needs to be called before calling make_formatted_string, to be able to return a translated string.
I would wish for an f-string proposal to consider translated strings, because we live in a world with many languages. And maybe cite gettext as a convenient method, and think about what could be done. Or point to a better tool. Or state: 'in that case, f-strings cannot be used'.
I guess _ could be a function that both takes and returns basic_formatted_string? (I.e. not gettext().)
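(For what it's worth, the translated-format-string dance already works with the C++20 runtime-format API; a sketch assuming GNU gettext from libintl, with tr_format as a made-up helper:)
#include <format>
#include <string>
#include <libintl.h>

// The translated pattern must be fetched before formatting, and a runtime
// format string requires std::vformat (std::format wants a checked literal).
template <class... Args>
std::string tr_format(const char* msgid, Args&&... args) {
    return std::vformat(gettext(msgid), std::make_format_args(args...));
}
// usage: std::string s = tr_format("foo {}", name);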
Tangent: this sort of thing can be implemented without any change to libc++ (the runtime). Updates to compiler versions are sometimes postponed by users with big codebases that treat a libc++ change as something major.
Why don't we see gcc or clang or msvc backporting stuff like this to an older version with a sort of future tag? It's normal to see __future__ in the python ecosystem, for instance.
Because C++, just like C, Ada, Cobol, Fortran, Modula-2, and Pascal, is an ISO driven language.
Whereas Python language evolution is driven by whatever the CPython reference implementation does.
Compilers are free to do whatever they want, but then that code isn't portable.
This is also true about
#pragma once
But it became a de facto standard at some point.
Thank you for the clarification. You are 100% right about the general difference. I didn't consider the level of "confidence" Python has in directing its own evolution that I don't detect in the C++ committee.
If a codebase is fragile enough that libc++ changes have to be assumed breaking until proven otherwise, why take the risk? Presumably the application already has a "standard" way of formatting strings. If it ain't broke yada yada
It's not about assumed breaking; it's that when you upgrade libc++ you can become incompatible at runtime with your distro or any number of other libraries outside your control, in ways that are difficult to detect.
Somehow I manage to get by just fine with C++11. I have refactored more than a few codebases that use 17 or greater.
Strangely, the codebase became more maintainable afterwards.
C++20's concepts IMHO are a massive update over C++11. You can basically remove almost 90% of inheritance with them without incurring in any issue (you could do that earlier too, but at the expense of incredibly hard to read error messages - now that's basically solved thanks to concepts).
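(A sketch of that pattern: a concept standing in for an abstract base class; Drawable and render are made-up names:)
// Any type with a const draw() qualifies; no inheritance required.
template <class T>
concept Drawable = requires(const T& t) { t.draw(); };

void render(const Drawable auto& d) { d.draw(); }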
I don't find the error messages produced by concepts much better than old school template errors. Maybe I got used to the latter with experience and definitely the compilers got better at generating useful error messages for templates as the years passed. On the other hand when I have to review code where a significant portion of the source relates to concepts, my heart sinks.
In my opinion, C++ "concepts" are the least useful C++20 addition to the language - awful syntax, redundancy everywhere (multiple ways of writing the same thing). And for what? Potentially better error messages?
Another gripe: of all the generic overloaded words available to describe this C++ feature, "concept" must be the least descriptive, least useful. Why pick such a meaningless name that does absolutely nothing to even suggest what the feature does?
> In my opinion, C++ "concepts" are the least useful C++20 addition to the language - awful syntax, redundancy everywhere (multiple ways of writing the same thing).
They're not the least useful C++20 addition, in fact they're amongst the most useful ones.
In particular the addition of the "requires" expression is the real killer here.
> And for what? Potentially better error messages?
Removing even more enable_if and making template code even easier to read (you could do some of that with if constexpr + static_assert in C++17, but there were gotchas). Oh and it allows you to check for the presence of members in classes, which you couldn't do before.
C++ concepts are a failure due to them only checking one side of the contract. And the other side is basically impossible to implement without breaking other parts of the language.
Concepts finally allowed retiring stupid SFINAE tricks. They are a huge success if only for that.
Came here to post the same thing. C++11 was a major and practical step up from previous versions. I haven't seen anything in later standards that looked like a tool I'd use day-to-day building actual production software. Much of what the subsequent versions added is probably interesting to compiler and language academics. "Default constructible and assignable stateless lambdas?" Really?
Off the top of my head, C++17 brought slicker notation for nested namespaces, digit separators for numeric literals (so you can more easily read 1'000'000'000), improvements in type deduction for pairs / tuples (so std::make_pair / make_tuple are basically unnecessary now), and guarantees in the standard for copy elision / return value optimization in specific circumstances. Oh, and structured bindings (so you can now write `for (const auto& [key, value] : map) { ... }`).
edit: I guess digit separators came in C++14, I'm always a little fuzzy there since at work, we jumped straight from 11 -> 17.
C++20 brought a feature that C had decades prior: designated initializers, except it's in a slightly crappier form. Also, spaceship operator (three-way comparison).
Looking at cppreference, it looks like C++17 also brought if constexpr, and standardized a bunch of nonstandard compiler extensions like [[fallthrough]]. C++20 continued standardizing more of those extensions, and also brought concepts / constraints, which are a lot easier to use than template metaprogramming.
You're at least somewhat right though -- none of these are paradigm shifts as C++11 was compared to C++03 (especially with the notion of ownership, especially in the context of std::unique_ptr and std::move).
17 is worth it for std::filesystem alone. It also has optional and variant.
Filesystem is great. It's insane it took so long.
Optional is nice but slightly awkward in a non-garbage collected language.
IMO variant is one of those things that should not exist in standard.
It tries to implement discriminated union in C++ but that feature is lame without true pattern matching. And you can’t implement pattern matching without thorough syntax level support. So in my books it’s in this academic “let’s pretend a while we are using some other language…” category.
It's _occasionally_ convenient for sure.
> It tries to implement discriminated union in C++ but that feature is lame without true pattern matching. And you can’t implement pattern matching without thorough syntax level support. So in my books it’s in this academic “let’s pretend a while we are using some other language…” category.
I agree, they should have made it a language/syntax feature. However: if you wanna do a sum type, it does do that. I'd rather have that than nothing.
Might still land in C++26, but most likely C++29; then you'll have pattern matching.
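(Until then, std::visit over an overload set is the closest approximation; a sketch using the usual 'overloaded' helper:)
#include <iostream>
#include <string>
#include <variant>

// Inherits the call operator of every lambda passed in.
template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>; // not needed in C++20

int main() {
    std::variant<int, std::string> v = std::string("hi");
    std::visit(overloaded{
        [](int i) { std::cout << "int: " << i << '\n'; },
        [](const std::string& s) { std::cout << "string: " << s << '\n'; },
    }, v);
}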
Designated initializers are so nice. Even though they have C++-isms like required order, they add safety and readability to code.
The safety and readability are nice; but WHY do they have to be in order? That is so typically clueless. Such an obvious feature, screwed up in a way that only a C++ committee member could.
Initializers for members are _always_ run in the order the member is declared - this applies even in constructor initializing lists, see https://en.cppreference.com/w/cpp/language/constructor#:~:te... - it doesn't matter what order you declare the initializers in, and clang will warn you if you write your initializers in an order other than what they will be executed in. Designated initializers are the same way, it's probably best that the behavior is consistent across methods of constructing.
Why is it this way? As best as I can find it's so that destructors always run in reverse order of construction, I guess there could be some edge cases there that matter. It's not the strongest argument, but it's not nothing.
Why does construction and destruction order need to be deterministic?
Well, consider what would happen if you had members whose value depends on other members. For example, one member is a pointer to another. Or perhaps one member uses RAII to hold a lock and another controls a resource.
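(A small made-up example of why declaration order matters:)
// Members are initialized in declaration order, so 'end' can rely on 'begin'
// already being set; destruction runs in reverse order.
struct Buffer {
    char* begin = new char[64];
    char* end = begin + 64; // OK only because begin is declared first
    ~Buffer() { delete[] begin; }
};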
Deterministic construction and destruction order is a fundamental feature of C++. Calling it clueless is just an indication one does not know C++.
> "Default constructible and assignable stateless lambdas?"
This comes up surprisingly often.
constexpr in 11 vs 14 was a night and day difference.
constexpr "if" statements (C++20) are also a game-changer.
Those are C++17.
Initially, yes; C++20 and C++23 extended its use cases, and combined with concepts it's a pretty sweet spot for compile-time metaprogramming without SFINAE or tag dispatch tricks.
Much better than having yet another syntax for macros.
To me, moving from C++11 to 17 and then 20 was just a matter of convenience. When digging into how to do this and that, I've found a few things that just saved my time here and there. Also, a couple of valuable libs I wanted to use required newer C++ versions.
As if compile times aren't long enough. They should just let the compiler do the job, like -Wformat=2, and skip any preprocessing and constexpr functions.
It is fast enough when using precompiled headers, binary libraries, and best of all C++20 modules with the C++23 modularised standard library.
Could it be better? Most likely.
> best of all C++20 modules
Do any compilers besides VS support this?
Clang with CMake and ninja as the build system (cmake.js for nodejs does as well).
The only missing piece is that CMake is still in the process of supporting header units.
GCC is getting there.
GCC supports it with a flag. C++23's import std is not there at all though.
C++ modules do not exist in practice and probably never will.
The Office team apparently is of another opinion, as are the Vulkan folks, and a few other early adopters.
You could just as well say that anything beyond C89 doesn't exist, given its prevalence in some circles.
Yes. There are a very small handful of early adopters in the year 2025 for a feature ostensibly added in C++20.
So, like I said, modules don’t exist in practice and I’d be shocked if in 2030 modules were considered normal.
C++11 was pretty game changing. C++14 and C++17 only took a few years to reach widespread adoption.
It’s very safe to require C++17 today. C++20 was a little slower and because of the modules fuckup it’s a bit inconsistent. But it’s largely fine to use.
C++23 probably needs another year or two. But also C++20 and beyond haven’t added much that’s worth upgrading for.
Like I said, it is a matter of point of view, and yes such is the karma of ISO driven languages with multiple implementations, when one cares about cross platform code.
There are many folks that don't care though, for them it is "one platform, one compiler, language standard is whatever my compiler allows me to do, including extensions".
I am also quite bullish on the opinion that eventually C++26 might be the last standard; not that WG21 will stop working on new ones, rather that it is what many will care about when using C++ in a polyglot environment, as is already the case on mobile OS platforms, the two major desktop platforms, and in distributed computing (the CNCF project landscape).
Why C++26 and not earlier? Reflection.
> Reflection
Oh yes please! :-)
> C++20 and beyond haven’t added much that’s worth upgrading for.
std::format is pretty nice (although not yet available on Ubuntu 24.04 LTS).
Lambda capture of parameter packs is actually huge!
And ... I think it still remains to be seen what the outcome of modules will be.
One hopes (against hope) that the big payoff for modules will be in the tool-ability of C++. IDE support for languages like C#, Java, and TypeScript is vastly superior to C++ IDE tooling. Perhaps, maybe, modules will provide a path that allows that to change. I don't think the benefits of modules have yet fully played out.
Ironically, C++ had such tooling in the past but it got lost, a bit like Roman technology as the Empire fell.
Visual Age for C++ v4.0 had a Smalltalk like experience with a database storage for the code, and Lucid Energize C++ already had something that people now know as LSP (Cadillac on their implementation), with incremental compilation and linking (at method/function level).
They failed commercially due to high prices and hardware requirements.
We have had C++ Builder for decades for GUI RAD development, Delphi/VB style, but due to how Borland went after the enterprise and various changes of hands, very few are aware that it exists and its capabilities.
C++ Builder with VCL was Java/.NET before these were even an idea looking for an implementation.
Problem now is that C++ has become a specialized tooling for high performance code, language runtimes, drivers and GPGPU, so you write 90% of the code in Java/C#/nodejs/..... and then reach out to native libraries, for various reasons.
Still, CLion, Visual Studio, and C++ Builder are quite good as far as development experience goes.
This links to a "decays_to" proposal:
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p33...
And observes that this additional feature is needed to avoid dangling references. And, as a long time C++ programmer, this illustrates one of the things I dislike most about C++. In most languages, if you make a little mistake involving mixing up something that references something else with something that contains a copy, you end up with potential overhead or maybe accidental mutation. In Rust, you get a compiler error. In C++, you get use-after-free, and the code often even seems to work!
So now we expect people to type:
auto s = f"{foo}";
And those people expect s to act like a string. But the designers (reasonably!) do not want f to unconditionally produce an actual std::string for efficiency reasons, so there’s a proposal to allow f to produce a reference-like type (that’s a class value, not actually a reference), but for s to actually be std::string.
But, of course, more advanced users might know what they’re doing and want to bypass this hack, so:
explicit auto s = f"{foo}";
Does what the programmer actually typed: s captures foo by reference.
What could possibly go wrong?
(Rust IMO gets this exactly right: shared xor mutable, plus disallowing code that would be undefined behavior, means that cases like this where the code might do the wrong thing don't compile. Critically, none of this strictly requires Rust's approach to memory management, although a GC'd version might end up with (deterministic) runtime errors instead unless some extra work is done to have stronger static checking. And I think other languages should learn from this.)
IOW I believe it's the same thing as Rust's format_args! macro, but trying to get away without needing a separate format! macro by using implicit conversions.
std::format_args! gets you an Arguments<'a>, which we'll note means it has an associated lifetime.
Today-I-learned, Arguments<'a> has a single useful function, which appeared before I learned Rust but only very recently became usable in compile time constants, as_str() -> Option<&'static str>
format_args!("Boo!").as_str() is Some("Boo!")
If you format a literal, this always works; if you format some non-literal, the compiler might realise the answer is a compile-time fixed string anyway and give you that string, but it might not even if you think it should, and no promises are given.
It is not a shortcut because it can't be implemented without knowing the `Arguments` internals. `format_args!("{}", "boo").as_str()` returns None for example.
It’s a shortcut in the sense that most, if not all optimisations are shortcuts. This one allows you to shortcut the usual formatting machinery if the result of formatting is a static string.
Like all shortcuts, it’s not something you can always rely on.
> But, of course, more advanced users might know what they’re doing and want to bypass this hack, so:
explicit auto s = f"{foo}";
> Does what the programmer actually typed, so s captures foo by reference.
Wouldn't this problem be best solved by... not declaring s to have a guess-what-I-mean type? If you want to be explicit about the type of s, why not just say what that type is? Wouldn't that be even more explicit than "explicit auto"?
A general issue with C++ (and many statically typed languages with generics) is hilariously long type names that may even be implementation details. Using auto can be a huge time saver and is even necessary for some generic code. And people get in the habit of using it.
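(A quick illustration of the kind of spelling auto spares you; the map type is just an example:)
#include <map>
#include <string>
#include <vector>

int main() {
    std::map<std::string, std::vector<int>> m;
    // The full spelling of the iterator type, versus auto:
    std::map<std::string, std::vector<int>>::const_iterator it = m.cbegin();
    auto it2 = m.cbegin(); // same type, none of the noise
    return it == it2 ? 0 : 1;
}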
Is it a coincidence that all these quality-of-life things start to pop up after C++ is facing real competition for the first time? Seems a bit odd to add print after using std::cout for 30 years.
Nerd alt-history story: What if Graydon decides he should attend WG21 and so instead of Rust what we get is a decade of attempts to fix C++ and reform the process, followed by burn out?
Then we'd be supporting a different language that shares the same or similar ideals as Rust. Whether that's something already in existence or something entirely new.
Rust isn't really that unique, there are plenty of other safe languages out there. And if Graydon was alone in wanting something like Rust then Rust wouldn't have grown in popularity like it has.
Rust exists because enough people thought there was a need for Rust to exist. So if that wasn't Graydon with Rust, then it would have been someone else with something else.
This isn't meant to take anything away from Graydon nor Rust. Just saying that innovations seldom happen in silos. They're usually a result of teams of people lusting for change.
Rust was helped by being a Mozilla language, and some of the personalities it had around it.
The big plus of the language was proving that Cyclone's ideas to improve C, from the AT&T research project, were sound and could be made mainstream.
And now other languages are building on it as well, that is why Swift, Chapel, Haskell, OCaml, D are also having a go at a mix of linear types, affine types and effects.
However many folks credit Rust for type system features that are actually available in any ML derived language, or Ada/SPARK, so it isn't as if knowledge is that well spread.
> Rust was helped by being a Mozilla language, and some of the personalities it had around it.
Indeed. But my point is there was already widespread movement behind building a programming language. So if Mozilla hadn’t taken charge then I’m certain someone will.
My point is that Rust was born from a wider desire for change rather than that desire existing because of Rust. Thus that desire would have been met in one form or another regardless of the invention of Rust.
Making any changes to the core language is a sensitive thing as it inevitably imposes new demands on compilers, a learning curve for all users of the language, and risks breaking compatibility and introducing unforeseen issues that will need to be fixed with future changes to the language.
Personally, I'd much prefer a smaller and more stable language.
Learning curve can decrease as a result of better design; same re. the chance of those unforeseen issues (and it can even decrease the chance of existing bugs popping up).
> This is the sort of change that adds complexity to the language but reduces complexity in the code written in the language. We take those
An admirable statement of policy, but I'm not sure it's possible. Adding complexity to the language means there are more gotchas and edge-cases that a programmer must consider, even if they don't use the feature in question.
It depends, case by case; I wouldn't generalize it to every case. As a daily C++ engineer, I think overall many features added over the years have mostly been positive. There are features that I don't use, and I don't think that really affects much. That said, I do get the sentiment of the language becoming too syntactically complex.
I like this feature as string formatting is something frequently used and this certainly looks cleaner and quicker to write.
> Adding complexity to the language means there are more gotchas and edge-cases that a programmer must consider, even if they don't use the feature in question.
Since this is C++, this is not a problem we have to consider
This is a meme by now, yet it isn't as if Python 3.13 is as simple as Python 1.0, Java 23 versus Java 1.0, .NET 9 with C# 13 versus .NET 1.0 with C# 1.0 and a Framework reboot, ....
C# already has enough material for a pub quiz, and no, not all of it is syntactic sugar; much of it requires deep knowledge of the .NET runtime and the way it interacts with the host platforms.
I imagine you never went too deep into unsafe, cross language interop, lambda evolution since the delegate days, events infrastructure, pluggable GC, RCW/CCW, JIT monitoring, the new COM replacement, how the runtime and language features differ across .NET Framework, Core, .NET MicroFramework, UWP, AOT compilation, Mono, .NET standard versus Portable Class Libraries, CLS friendly libraries,...
On top of that, all the standard frameworks that are part of a full .NET install on Visual Studio, expected that most C# developers know to at least have some passing knowledge on how to use them.
For other readers - more than half of these are irrelevant.
Writing general purpose application code rarely involves thinking about implications of most of these (save for NAOT as of lately I suppose).
Writing systems C# involves additional learning curve, but if you are already familiar with C++, it comes down to understanding the correct mapping between features, learning strengths and weaknesses of the compiler and the GC and maybe doing a cursory disassembly check now and then, if you care about it.
The original comment was about the divergence of the complexity of a language and the complexity of programs implemented in the language. I think the comment you replied to with all its keywords and jargon beautifully illustrated the point
how would this work with internationalized strings? especially if you have to change the order of things? You'd still need a string version with object ordering I would think
The question was, how would you use this if you have i18n requirements. Format strings are normally part of a translation. I think the bad answer is to embed the entire f-string for a translation as usual, except this can't work because C++ f-strings would need to be compiled. The better answer is, don't use f-strings for this because you don't want translators to monkey around with code and you don't want to compile 50 versions of your code.
Even if you told them, "just copy the names from the original string" it's still asking for trouble, and maybe even security holes if they don't follow instructions. But the biggest problem with the idea is surely that the strings need to be compiled.
Do what? Allow translators to reorder the appearance of arguments in a translated format string? It's a completely routine (and completely necessary) feature when doing translations.
C++ also has std::format, which was introduced in C++20. This is just sugar on top of it, except it also returns a container type so that printing functions can have overloads that format into a file or stream directly from an f-string, instead of going through the overhead of a temporary string.
I wonder what this mysterious application is that does heavy formatting of strings but can't afford the overhead of a temporary string, and therefore requires horrifying and inscrutable and dangerous language extensions.
Being able to use string formatting without a heap is pretty cool.
Rust's string formatting machinery does not require any heap allocations at all; you can, for example, impl fmt::Write for a struct that writes directly to a serial console byte-by-byte with no allocations, and then you have access to all of Rust's string formatting features available to print over a serial console!
I'm not sure about the horrifying and dangerous extensions part though, I'm not really a C++ expert so I don't know if there's a better way to do what they want to do.
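(The rough C++20 counterpart, formatting into a fixed stack buffer with std::format_to_n; for simple scalar arguments like this no std::string is involved:)
#include <cstdio>
#include <format>
#include <iterator>

int main() {
    char buf[64];
    // Format at most 63 chars into the stack buffer.
    auto result = std::format_to_n(buf, std::size(buf) - 1, "x = {}", 42);
    *result.out = '\0'; // format_to_n does not NUL-terminate
    std::puts(buf);
}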
Boost is not allowed because of the complexity. So some people disallow Boost; here is the solution: just add the complexity directly to the language definition!
I agree that we should have safe-by-default "decay" behavior to a plain ol' std::string, but I'm also picking up that many aren't certain it's useful syntactic sugar on top of the fmt lib? Many other languages have this same syntax, and it quickly becomes your go-to way to concatenate variables into a string. Even if it didn't handle UTF-8 out of the box, so what? The amount of utility is still worth it.
But they got the type decay right without introducing further user-defined conversions, unlike this proposal. The syntax is ad hoc, thus so should be the typing rule.
So, the f-string in Python is "spelled" that way because another leading character was the only ASCII syntax left for such a thing. It's odd that PRQL and now potentially C++ might copy it. In the PRQL case it was a new thing, so they could have chosen anything: double quotes (like shell interpolation) or even backticks, which seem to make more sense.
Also the f- prefix was supposed to be short for format and pronounced that way. But "eff" caught on and now devs the world over are calling them "eff strings" ... funny. :-D
That is a valid point and something I've also been thinking about lately. I can't speak for the others but in my case the Python string interpolation syntax was the one I was most familiar with, other than bash, so it was just the default. The big idea really is to have string interpolation and the syntax is somewhat secondary but we do aim for ergonomics with PRQL so it is a consideration.
Since then I've seen more alternatives like `Hello ${var}!` in JS/TS and $"Hello {var}!" in F#. Not sure that there's a clear way to prefer one approach over the others.
What would you consider to be factors that would make you prefer one over the others?
> [...] another leading character was the only ASCII syntax left for such a thing.
Not really? The original PEP [1] for example considered `i"asdf"` as an alternative syntax. Any ASCII Latin letter besides `b`, `r` and `u` would have been usable.
I'm going to make an asinine prediction. We will be exploring F-strings in future languages in 100 years time, encountering the same problems and questions.
I still use printf semantics in Python3 despite trying to get with the program for symbolic string/template logic. I don't need to be told it's better, I need some Philip-K-Dick level brain re-wiring not to reach for
"%d things I hate about f-strings\n" % (int(many()))
It's not broken (try it!). Any value is interpreted as an implicit 1-tuple if it's not a tuple nor a dict. A better example would have been `"..." % many()` where `many` returns a tuple or dict.
When I saw the title I thought "F-strings" might be some novel variant of P-strings. I was disappointed that this is just about formatting. I really would prefer safer string handling in modern C/C++.
F-strings is one of my favorite features of Python to be honest.
That doesn't automatically mean it's a good idea in C++, knowing C++ there are gonna be a whole lot of gotchas which aren't in Python, but it means that, at least in my opinion, how F-strings worked in Python is an argument in favor of them rather than against them.
Yeah, I really missed ubiquitous C preprocessor macros in C++, so let's bring them back, but now inside string literals. Sweet.
Seriously, I just keep being amazed that people are running with the idea of having a full-blown untyped and unchecked formatting mini language (that's what libfmt, which became C++20 format, literally calls it) inside string literals — i.e., the part of the source code that you're specifically telling the compiler to not treat as code.
Format strings in C++ are checked completely at compile time. There are no hacks or compiler intrinsics involved (like what C does for printf to verify format strings).
Eh? C++20 format is checked at compile-time. This has been possible ever since string literals became constant expressions. These features are within the standard compile-time capabilities. People have done impressive compile-time parsing and codegen using it.
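(A concrete demonstration that the mini language is checked at compile time when the format string is a literal; uncomment the second line to see the compile error:)
#include <format>
#include <string>

int main() {
    std::string ok = std::format("{:>8}", 42); // parsed and type-checked at compile time
    // std::string bad = std::format("{:d}", "hi"); // ill-formed: 'd' is not valid for a string
}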
So the f-string literal produces a basic_formatted_string, which is basically a reified argument list for std::format, instead of a basic_string. This allows eg. println to be overloaded to operate on basic_formatted_string without allocating an intermediate string
In exchange we have the following problems There are two other proposals to fix these problems.> There are two other proposals to fix these problems.
Most new features of C++ are introduced to fix problems created by previously new features added to C++.
This is becoming such a tiresome opinion. How are concepts fixing a problem created by previous features to the langue? What about ranges? Auto? Move semantics? Coroutines? Constexpr? Consteval? It is time for this narrative to stop.
Move semantics is only needed because C++ introduced implicit copies (copy constructor) and they of course fucked it up my making them non-destructive so they aren't even 'zero cost'.
Constexpr and consteval are hacks that 1) should have just been the default, and 2) shouldn't even be on the function definition, it should instead have been a keyword on the usage site: (and just use const)
That would be the sane way to do compile time functions.I agree that I would have preferred destructive moves, but move semantics makes C++ a much richer and better language. I kinda think pre-move semantics, C++ didn't quite make "sense" as a systems programming language. Move semantics really tied the room together.
That's very silly. You're saying this should fail to compile? There's no way the compiler can run that, because it doesn't know what x is (indeed, it would have a different value every time you run the function with a new argument). So your proposal would ditch const completely except in the constexpr case, everything runtime would have to be mutable.So you respond "well, I didn't mean THAT kind of const, you should have a different word for compile-time constants and run-time non-mutability!" Congratulations, you just invented constexpr.
There are many bad things about C++, but constexpr ain't one of them.
>There's no way the compiler can run that, because it doesn't know what x is (indeed, it would have a different value every time you run the function with a new argument). So your proposal would ditch const completely except in the constexpr case, everything runtime would have to be mutable.
Yeah, I see no problem with that. Non-constant expressions usage of 'const' has always just seemed like a waste of time for me, never found it useful. But I guess a lot of people really liking typing const and "preventing themselves from accidentally mutating a variable" (when has that ever happened?), so as a compromise I guess you can have a new keyword to force constant expressions:
but it's not really a big deal what they keyword is, the main point was that "give me a constant value" should be at the usage site, not at the function definition.Could have been if backwards compatibility was not a thing indeed.
Move constructors are not needed, they don't solve a 'problem', but improve on previous semantics.
Eh not really accurate because C's const means immutable not actually constant. So I get introducing constexpr to actually mean constant. But, yeah, constexpr x = f() should probably have worked as you described.
const is different in C++ from const in C. const variables in C++ are proper compile-time constants. In C they are not (the nearest equivalents are #define and enum values).
So in C++ "const x = EXPR" would make sense to request compile-time evaluation, but in C it wouldn't.
They absolutely are not. Look at this range for-loop:
`item` is not a compile-time constant. It's different every run of the loop.I thought the whole point of ranges is to solve problems created by iterators, move semantics to take care of scenarios where nrvo doesn't apply, constexpr and auto because we were hacking around it with macros (if you can even call it that)?
Iteratively improving in previously released features does not imply fixing issues caused by those features.
Constexpr and auto have nothing to do with macros.
To me, redoing things that are not orthogonal implies that the older version is being fixed. Being fixed implies that it was incorrect. And to clarify, sure, auto types and constexpr are entirely new things we didn't have (auto changed meaning but yeah), but we were trying to "get something like that" using macros.
Honestly can't tell if this is sarcasm. XD
One of the two other proposals is user defined type decay, which lets you choose what type auto will be deduced as. i.e. "auto x = y", x might not have the type of y, instead it can be anything you choose…
This is like implicit type conversion on steroids. And all this because C++ lacks the basic safety features to avoid dangling pointers.
Stop using C++ already!
> lacks the basic safety features to avoid dangling pointers
It doesn't. Unfortunately, C++ programmers choose not to use basic safety features for performance reasons (or aesthetics, or disagreement with the idea that a language should take into account that a programmer might make a mistake, but at least performance is a good one), but C++ actually has quite a few tricks to prevent the memory management issues that cause C/C++ bugs.
Using modern C++ safety features won't completely prevent bugs and memory issues, just like using Rust won't, but the mess that causes the worst bugs is the result of a choice, not the language itself.
Tell that to the designers of the C++ standard library, and the new features being added. They're the ones that keep adding new features that depend on references and pointers instead of std::shared_ptr or std::unique_ptr.
I think one problem here is that a lot of codebases have their own smart pointers and unfortunately the only currency type is the unsafe one :(
I don't think this is the only reason. If it were, they could easily have added overloads that work with both std smart pointers and with plain pointers for compatibility. Or they could add pointer type template parameters, maybe with concepts for the right ownership semantics.
> the mess that causes the worst bugs is the result of a choice, not the language itself.
Even the creator of the language admitted that "just write better code bro" approach doesn't work.
Has he? He at least used to be the biggest proponent of it, "just follow these standards and development practices that I had to meticulously develop for the US military, that no tool can automatically check, and you'll be fine!".
First we need to rewrite the likes of LLVM, GCC, V8, CUDA,... into something else.
Which is not going to happen in our lifetime, even the mighty Rust depends on LLVM for its reference implementation.
Stop producing a new C++ code as much as possible, then. It doesn't help to nitpick the weakest possible interpretation without acknowledging as such.
I would not say stop using it. But just stick to the really needed features, and stop adding more features every 3 years. Nobody can keep up, not the developers, not the compilers... is just insane.
Smart pointers were added to the language 14 years ago. You're free to use old C++ with raw pointers and manual memory management, risking dangling pointers, or use modern C++, which provides smart pointers to avoid those issues.
And yet most if not all of the standard library keeps using pointer or reference arguments, not the new smart pointers that would actually document the ownership semantics.
Most arguments to standard library calls don't need to take ownership over memory, using a raw pointer or (const) reference is correct. Generally - smart pointers to designate ownership, raw pointers to "borrow".
If a function takes a raw pointer, you need to check the docs to know if it is taking ownership or not. There is no general rule that applies to the whole of std that functions taking raw pointers assume that they are borrowing the value.
And even if you could assume that pointer parameters represent borrowing, they are definitely not guaranteed to represent scoped borrowing: the function could store them somewhere, and then you end up with other issues. So shared_ptr is the only solution if you care about safety to represent a borrowed pointer. And of that's too costly, but the std designers did care about safety, they could have introduced a std::borrowed_ptr<T> that is just a wrapper around T* but that is used uniformly in all std functions that borrow a pointer and guarantee not to store it.
Hah. What's interesting about this is that since it doesn't require everything to actually be converted to a string, one can implement things other than just printing. So you could also implement interpretation, eg:
It also allow for things that will set off spidey senses in programmers everywhere despite theoretically being completely safe assuming mydb::sql() handles escaping in the format string:
Yeah. You really want "mydb::sql" to not take a basic_string, only a basic_formatted_string, so it will not compile if the conversion actually happened somehow.
Yes. The basic idea is that there's a specifier that allows a formatted string to transparently decay into an ordinary string (à la array-to-pointer decay) so that "auto" doesn't produce dangling references, and so that chains of more than one implicit conversion can take place.
This seems pretty similar to Rust's `format_args!` macro, which however avoids these issues by being much more verbose and thus something people are less likely to use like in those examples. It does however have issues due to the abundant use of temporaries, which makes it hard to use when not immediately passed to a function. I wonder if C++'s fstrings have the same issue.
It would be nice to take care to allow the use of GNU gettext() or any other convenient translation tool.
Recap: _("foo %s") macroexpands to gettext("foo %s"), then "foo %s" is extracted to a lexicon of strings by an external tool, which can be translated and compiled into .po files, which are loaded at runtime so gettext() can use a translated string based on $LC_MESSAGES. (And there is also _N(..) for correct plural handling.)
To do this with f-strings, _(f"foo {name()}") (which is a bit ugly...) needs to translate to make_formatted_string(_("foo {}"), name()) -- note that the _(...) needs to be called before calling make_formatted_string, to be able to return a translated string.
I would wish for a proposal for f-strings to consider translating strings, because we live in a world with many languages. And maybe cite gettext as a convenient method, and think about what could be done. Or point to a better tool. Or state: 'in that case, f-strings cannot be used'.
I guess _ could be a function that both takes and returns basic_formatted_string? (I.e. not gettext()).
Tangent: this sort of thing can be implemented without any change to libc++ (the runtime). Updates to compiler versions are sometimes postponed by users with big codebases that treat a libc++ change as something major.
Why don't we see gcc or clang or msvc back porting stuff like this to an older version with a sort of future tag. It's normal to see __future__ in the python ecosystem, for instance.
Because C++, just like C, Ada, Cobol, Fortran, Modula-2, Pascal is an ISO driven language.
Whereas Python language evolution is driven by whatever CPython reference implementation does.
Compilers are free to do whatever they want, but then that code isn't portable.
This is also true about
#pragma once
But it became a de facto standard at some point.
thank you for the clarification. You are 100% right about the general difference. I didn't consider the level of "confidence" python has in directing it's own evolution that I don't detect in the C++ committee
If a codebase is fragile enough that libc++ changes have to be assumed breaking until proven otherwise, why take the risk? Presumably the application already has a "standard" way of formatting strings. If it ain't broke yada yada
It's not about assumed breaking, it's that when you upgrade libc++ you can become incompatible at runtime with your distro or any other number of libraries outside your control in ways that are difficult to detect
Somehow I manage to get by just fine with c++11. I have refactored more than a few codebases that use 17 or greater.
Strangely, the codebase became more maintainable afterwards.
C++20's concepts IMHO are a massive update over C++11. You can basically remove almost 90% of inheritance with them without incurring in any issue (you could do that earlier too, but at the expense of incredibly hard to read error messages - now that's basically solved thanks to concepts).
I don't find the error messages produced by concepts much better than old school template errors. Maybe I got used to the latter with experience and definitely the compilers got better at generating useful error messages for templates as the years passed. On the other hand when I have to review code where a significant portion of the source relates to concepts, my heart sinks.
In my opinion, C++ "concepts" are the least useful C++20 addition to the language - awful syntax, redundancy everywhere (multiple ways of writing the same thing). And for what? Potentially better error messages?
Another gripe; of all the generic overloaded words available to describe this C++ feature, "concept" must be the least descriptive, least useful. Why pick such a meaningless name that does absolutely nothing to even suggest what the feature does?
> In my opinion, C++ "concepts" are the least useful C++20 addition to the language - awful syntax, redundancy everywhere (multiple ways of writing the same thing).
They're not the least useful C++20 addition, in fact they're amongst the most useful ones.
In particular the addition of the "requires" expression is the real killer here.
> And for what? Potentially better error messages?
Removing even more enable_if and making template code even easier to read (you could do some of that with if constexpr + static_assert in C++17, but there were gotchas). Oh and it allows you to check for the presence of members in classes, which you couldn't do before.
C++ concepts are a failure due to them only checking one side of the contract. And the other is basically impossible to implement without breaking other parts of the language
concepts finally allowed retiring stupid SFINAE tricks. They are a huge success if only for that.
Came here to post the same thing. C++11 was a major and practical step up from previous versions. I haven't seen anything in future standards that looked like it a tool I'd use day-to-day building actual production software. Much of the subsequent versions added things probably interesting to compiler and language academics. "Default constructible and assignable stateless lambdas?" Really?
Off the top of my head, C++17 brought slicker notation for nested namespaces, digit separators for numeric literals (so you can more easily read 1'000'000'000), improvements in type deduction for pairs / tuples (so std::make_pair / make_tuple are basically unnecessary now), guarantees in the standard for copy elision / return value optimization in specific circumstances,. Oh, and structured bindings (so you can now write `for (const auto& [key, value] : map) { ... }`).
edit: I guess digit separators came in C++14, I'm always a little fuzzy there since at work, we jumped straight from 11 -> 17.
C++20 brought a feature that C had decades prior: designated initializers, except it's in a slightly crappier form. Also, spaceship operator (three-way comparison).
Looking at cppreference, it looks like C++17 also brought if constexpr, and standardized a bunch of nonstandard compiler extensions like [[fallthrough]]. C++20 continued standardizing more of those extensions, and also brought concepts / constraints, which are a lot easier to use than template metaprogramming.
You're at least somewhat right though -- none of these are paradigm shifts as C++11 was compared to C++03 (especially with the notion of ownership, especially in the context of std::unique_ptr and std::move).
17 is worth it for std::filesystem alone. It also has optional and variant.
Filesystem is great. It’s insane it took so long.
Optional is nice but slightly awkward in a non-garbage collected language.
IMO variant is one of those things that should not exist in standard.
It tries to implement discriminated union in C++ but that feature is lame without true pattern matching. And you can’t implement pattern matching without thorough syntax level support. So in my books it’s in this academic “let’s pretend a while we are using some other language…” category.
It’s _occassionally_ convenient for sure.
> It tries to implement discriminated union in C++ but that feature is lame without true pattern matching. And you can’t implement pattern matching without thorough syntax level support. So in my books it’s in this academic “let’s pretend a while we are using some other language…” category.
I agree, they should have made it a language/syntax feature. However: if you wanna do a sum type, it does do that. I'd rather have that than nothing.
Might still land on C++26, but most likely C++29, then you'll have pattern matching.
Designated initializers are so nice. Even though they have C++-isms like required order, it adds safety and readability to code.
The safety and readability are nice; but WHY do they have to be in order? That is so typically clueless. Such an obvious feature, screwed up in a way that only a C++ committee member could.
Initializers for members are _always_ run in the order the member is declared - this applies even in constructor initializing lists, see https://en.cppreference.com/w/cpp/language/constructor#:~:te... - it doesn't matter what order you declare the initializers in, and clang will warn you if you write your initializers in an order other than what they will be executed in. Designated initializers are the same way, it's probably best that the behavior is consistent across methods of constructing.
Why is it this way? As best as I can find it's so that destructors always run in reverse order of construction, I guess there could be some edge cases there that matter. It's not the strongest argument, but it's not nothing.
Why does construction and destruction order need to be deterministic?
Well, consider what would happen if you had members whose value depends on other members. For example, one member is a pointer to another. Or perhaps one member uses RAII to hold a lock and another controls a resource.
Deterministic construction and destruction order is a fundamental feature of C++. Calling it clueless is just an indication one does not know C++.
> "Default constructible and assignable stateless lambdas?
this comes up surprisingly often.
constexpr in 11 vs 14 was night and day difference.
constexpr "if" statements (C++20) are also a game-changer.
Those are c++17
Initially, C++20 and C++23 extended its use cases, and combined with concepts is a pretty sweet spot for compile time metaprogramming without SFINAE or tag dispatch tricks.
Much better than having yet another syntax for macros.
To me moving from C++11 to 17 and then 20 was just a matter of convenience. When digging on how to do this and that I've found few things that just saved my time here and there. Also couple of valuable libs I wanted to use required newer C++ versions.
Like the compile time isn't long enough. They should just let the compiler do the job like Wformat=2 and skip any preprocess and constexpr function.
It is fast enough when using precompiled headers, binary libraries, and best of all C++20 modules with C++23 modularised standard library.
Could it be better? Most likely.
> best of all C++20 modules
Do any compilers besides VS support this?
Clang with CMake and ninja as build system (cmake.js for nodejs does as well).
The only missing piece is that cmake is still in the process to support header units.
GCC is getting there.
GCC supports it with a flag. C++23's import std is not there at all though.
C++ modules do not exist in practice and probably never will.
Office team apparently is of another opinion, as Vulkan folks, and a few other early adopters.
You could just as well say that anything beyond C89 doesn't exist, given its prevalence in some circles.
Yes. There are a very small handful of early adopters in the year 2025 for a feature ostensibly added in C++20.
So, like I said, modules don’t exist in practice and I’d be shocked if in 2030 modules were considered normal.
C++11 was pretty game changing. C++14 and C++17 only took a few years to reach widespread adoption.
It’s very safe to require C++17 today. C++20 was a little slower and because of the modules fuckup it’s a bit inconsistent. But it’s largely fine to use.
C++23 probably needs another year or two. But also C++20 and beyond haven’t added much that’s worth upgrading for.
Like I said, it is a matter of point of view; and yes, such is the karma of ISO-driven languages with multiple implementations, when one cares about cross-platform code.
There are many folks that don't care though, for them it is "one platform, one compiler, language standard is whatever my compiler allows me to do, including extensions".
I am also quite bullish on the opinion that C++26 might eventually be the last standard. Not that WG21 will stop working on new ones; rather, it is the last one many will care about when using C++ in a polyglot environment, as is already the case on mobile OS platforms, the two major desktop platforms, and in distributed computing (the CNCF project landscape).
Why C++26 and not earlier? Reflection.
> Reflection
Oh yes please! :-)
> C++20 and beyond haven’t added much that’s worth upgrading for.
std::format is pretty nice (although not yet available on Ubuntu 24.04 LTS).
Lambda capture of parameter packs is actually huge!
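For anyone who hasn't seen it, C++20 allows init-capture of an entire pack (a sketch; make_printer is a made-up name):
#include <iostream>
#include <utility>
template <typename... Args>
auto make_printer(Args&&... args) {
    // [...xs = ...] moves the whole pack into the closure (C++20)
    return [...xs = std::forward<Args>(args)]() {
        ((std::cout << xs << ' '), ...);
        std::cout << '\n';
    };
}
int main() {
    auto p = make_printer(1, "two", 3.5);
    p(); // prints: 1 two 3.5
}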
And ... I think it still remains to be seen what the outcome of modules will be.
One hopes (against hope) that the big payoff for modules will be in the toolability of C++. IDE support for languages like C#, Java, and TypeScript is vastly superior to C++ IDE tooling. Perhaps, maybe, modules will provide a path that will allow that to change. I don't think the benefits of modules have fully played out yet.
Ironically, C++ had such tooling in the past but lost it, a bit like Roman technology as the Empire fell.
Visual Age for C++ v4.0 had a Smalltalk-like experience with database storage for the code, and Lucid Energize C++ already had something that people now know as LSP (called Cadillac in their implementation), with incremental compilation and linking at the method/function level.
They failed commercially due to high prices and hardware requirements.
We have had C++ Builder for decades for GUI RAD development, Delphi/VB style, but due to how Borland went after the enterprise, and various changes of hands, very few are aware that it exists and of its capabilities.
C++ Builder with VCL was Java/.NET before these were even an idea looking for an implementation.
The problem now is that C++ has become specialized tooling for high-performance code, language runtimes, drivers and GPGPU, so you write 90% of the code in Java/C#/nodejs/... and then reach out to native libraries, for various reasons.
Still, CLion, Visual Studio, and C++ Builder are quite good as far as development experience goes.
This links to a “decays_to” proposal:
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p33...
And observes that this additional feature is needed to avoid dangling references. And, as a long time C++ programmer, this illustrates one of the things I dislike most about C++. In most languages, if you make a little mistake involving mixing up something that references something else with something that contains a copy, you end up with potential overhead or maybe accidental mutation. In Rust, you get a compiler error. In C++, you get use-after-free, and the code often even seems to work!
So now we expect people to type:
auto s = f"...{foo}...";
And those people expect s to act like a string. But the designers (reasonably!) do not want f to unconditionally produce an actual std::string for efficiency reasons, so there's a proposal to allow f to produce a reference-like type (that's a class value, not actually a reference), but for s to actually be std::string. But, of course, more advanced users might know what they're doing and want to bypass this hack, so:
explicit auto s = f"...{foo}...";
does what the programmer actually typed: s captures foo by reference. What could possibly go wrong?
(Rust IMO gets this exactly right: shared-xor-mutable, plus disallowing code that would be undefined behavior, means that cases like this where the code might do the wrong thing don't compile. Critically, none of this strictly requires Rust's approach to memory management, although a GC'd version might end up with (deterministic) runtime errors instead, unless some extra work is done to have stronger static checking. And I think other languages should learn from this.)
IOW I believe it's the same thing as Rust's format_args! macro, but trying to get away without needing a separate format! macro by using implicit conversions.
std::format_args! gets you an Arguments<'a>, which we'll note means it has an associated lifetime.
Today I learned: Arguments<'a> has a single useful function, which appeared before I learned Rust but only very recently became usable in compile-time constants: as_str() -> Option<&'static str>
format_args!("Boo!").as_str() is Some("Boo!")
If you format a literal, this always works; if you format some non-literal, the compiler might realise the answer is a compile-time fixed string anyway and give you that string, but it might not, even if you think it should, and no promises are given.
The most useful function is Arguments::fmt. “as_str” is just a shortcut utility function.
It is not a shortcut because it can't be implemented without knowing the `Arguments` internals. `format_args!("{}", "boo").as_str()` returns None for example.
It’s a shortcut in the sense that most, if not all optimisations are shortcuts. This one allows you to shortcut the usual formatting machinery if the result of formatting is a static string.
Like all shortcuts, it’s not something you can always rely on.
It can be used to shortcut the formatting process, the function itself is however not a shortcut in my opinion.
> But, of course, more advanced users might know what they’re doing and want to bypass this hack, so:
> Does what the programmer actually typed, so s captures foo by reference.
Wouldn't this problem be best solved by... not declaring s to have a guess-what-I-mean type? If you want to be explicit about the type of s, why not just say what that type is? Wouldn't that be even more explicit than "explicit auto"?
A general issue with C++ (and many statically typed languages with generics) is hilariously long type names that may even be implementation details. Using auto can be a huge time saver, and is even necessary for some generic code. And people get in the habit of using it.
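E.g. (a trivial sketch):
#include <map>
#include <string>
#include <vector>
std::map<std::string, std::vector<int>> m;
std::map<std::string, std::vector<int>>::const_iterator it1 = m.cbegin(); // spelled out
auto it2 = m.cbegin();                                                    // same thing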
Is it a coincidence that all these quality-of-life things start to pop up after C++ is facing real competition for the first time? Seems a bit odd to add print after using std::cout for 30 years.
What is this referring to? I would imagine whatever you consider recent competition is actually not that recent.
On the time scale of c++, rust is very recent. :-)
Nerd alt-history story: What if Graydon decides he should attend WG21 and so instead of Rust what we get is a decade of attempts to fix C++ and reform the process, followed by burn out?
Then we'd be supporting a different language that shares the same or similar ideals as Rust. Whether that's something already in existence or something entirely new.
Rust isn't really that unique, there are plenty of other safe languages out there. And if Graydon was alone in wanting something like Rust then Rust wouldn't have grown in popularity like it has.
Rust exists because enough people thought there was a need for Rust to exist. So if that wasn't Graydon with Rust, then it would have been someone else with something else.
This isn't meant to take anything away from Graydon nor Rust. Just saying that innovations seldom happen in silos. They're usually a result of teams of people lusting for change.
Rust was helped by being a Mozilla language, and some of the personalities it had around it.
The big plus of the language was proving that the ideas of Cyclone, the AT&T research project to improve C, were sound and could be made mainstream.
And now other languages are building on it as well; that is why Swift, Chapel, Haskell, OCaml and D are also having a go at a mix of linear types, affine types and effects.
However, many folks credit Rust for type-system features that are actually available in any ML-derived language, or in Ada/SPARK, so it isn't as if that knowledge is well spread.
> Rust was helped by being a Mozilla language, and some of the personalities it had around it.
Indeed. But my point is there was already widespread movement behind building such a programming language. So if Mozilla hadn't taken charge, then I'm certain someone else would have.
My point is that Rust was born from a wider desire for change rather than that desire existing because of Rust. Thus that desire would have been met in one form or another regardless of the invention of Rust.
Making any changes to the core language is a sensitive thing, as it inevitably imposes new demands on compilers and a learning curve for all users of the language, and risks breaking compatibility and introducing unforeseen issues that will need to be fixed with future changes to the language.
Personally, I'd much prefer a smaller and more stable language.
The learning curve can decrease as a result of better design; same re: the chance of those unforeseen issues (and it can even decrease the chance of existing bugs popping up).
I'm pretty sure boost::format can do this, though not inline in the string. Do we really need more complexity in C++? Isn't it complex enough?
This is the sort of change that adds complexity to the language but reduces complexity in the code written in the language. We take those.
> This is the sort of change that adds complexity to the language but reduces complexity in the code written in the language. We take those.
An admirable statement of policy, but I'm not sure it's possible. Adding complexity to the language means there are more gotchas and edge-cases that a programmer must consider, even if they don't use the feature in question.
It depends, on a case-by-case basis; I wouldn't generalize it to every case. As a daily C++ engineer, I think many of the features added over the years have been positive overall. There are features that I don't use, and I don't think that really affects much. That said, I do get the sentiment of the language becoming too syntactically complex.
I like this feature as string formatting is something frequently used and this certainly looks cleaner and quicker to write.
> Adding complexity to the language means there are more gotchas and edge-cases that a programmer must consider, even if they don't use the feature in question.
Since this is C++, this is not a problem we have to consider
This is a meme by now, yet it isn't as if Python 3.13 is as simple as Python 1.0, or Java 23 versus Java 1.0, or .NET 9 with C# 13 versus .NET 1.0 with C# 1.0 and a Framework reboot, ...
C# has a lot of features but most of them feel like simple syntactic sugar that make the language a joy to use and they interact nicely together.
C++ has lots of features that interact with each other in unexpected ways that could leak memory or access freed memory etc.
C# already has enough material for a pub quiz, and no, not all of it is syntactic sugar; much of it requires deep knowledge of the .NET runtime and the way it interacts with the host platforms.
I imagine you never went too deep into unsafe, cross language interop, lambda evolution since the delegate days, events infrastructure, pluggable GC, RCW/CCW, JIT monitoring, the new COM replacement, how the runtime and language features differ across .NET Framework, Core, .NET MicroFramework, UWP, AOT compilation, Mono, .NET standard versus Portable Class Libraries, CLS friendly libraries,...
On top of that, there are all the standard frameworks that come with a full .NET install of Visual Studio, which most C# developers are expected to have at least a passing knowledge of how to use.
There is no need to just throw keywords around.
For other readers - more than half of these are irrelevant.
Writing general-purpose application code rarely involves thinking about the implications of most of these (save for NAOT lately, I suppose).
Writing systems C# involves an additional learning curve, but if you are already familiar with C++, it comes down to understanding the correct mapping between features, learning the strengths and weaknesses of the compiler and the GC, and maybe doing a cursory disassembly check now and then, if you care about it.
The original comment was about the divergence between the complexity of a language and the complexity of programs implemented in the language. I think the comment you replied to, with all its keywords and jargon, beautifully illustrates the point.
How would this work with internationalized strings? Especially if you have to change the order of things? You'd still need a string version with argument ordering, I would think.
f-strings are not an internationalization library.
The question was, how would you use this if you have i18n requirements. Format strings are normally part of a translation. I think the bad answer is to embed the entire f-string for a translation as usual, except this can't work because C++ f-strings would need to be compiled. The better answer is, don't use f-strings for this because you don't want translators to monkey around with code and you don't want to compile 50 versions of your code.
C++ is such a narrow skillset that I'd rather not roll the dice on translators knowing what to do
Even if you told them, "just copy the names from the original string" it's still asking for trouble, and maybe even security holes if they don't follow instructions. But the biggest problem with the idea is surely that the strings need to be compiled.
I'm skeptical that people would want to do this in a single expression.
Do what? Allow translators to reorder the appearance of arguments in a translated format string? It's a completely routine (and completely necessary) feature when doing translations.
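Concretely, with std::format the reordering lives in the translation, and since translations normally arrive at runtime you'd reach for vformat (a sketch; the strings are made up):
#include <format>
#include <string>
int main() {
    std::string user = "Ana", file = "a.txt";
    std::string fmt_en = "{0} deleted {1}";
    std::string fmt_de = "{1} wurde von {0} gelöscht"; // translator reordered the arguments
    auto en = std::vformat(fmt_en, std::make_format_args(user, file));
    auto de = std::vformat(fmt_de, std::make_format_args(user, file));
}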
C++ also has std::format, which was introduced in C++20. This is just sugar on top of it, except it also returns a container type so that printing functions can have overloads that format into a file or stream directly from an f-string, instead of going through the overhead of a temporary string.
I wonder what this mysterious application is that does heavy string formatting but can't afford the overhead of a temporary string, and therefore requires horrifying, inscrutable, and dangerous language extensions.
Being able to use string formatting without a heap is pretty cool.
Rust's string formatting machinery does not require any heap allocations at all. You can, for example, impl fmt::Write for a struct that writes directly to a serial console byte-by-byte with no allocations, and then you have access to all of Rust's string formatting features for printing over a serial console!
I'm not sure about the horrifying and dangerous extensions part, though; I'm not really a C++ expert, so I don't know if there's a better way to do what they want to do.
freestanding maybe? Embedded apps often don't shy away from using something like printf, yet they don't like unnecessary allocations.
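Something along these lines is already possible with C++20 (a sketch):
#include <array>
#include <format>
int main() {
    std::array<char, 64> buf;
    // format_to_n writes into a fixed buffer: no std::string, no heap allocation
    auto r = std::format_to_n(buf.data(), buf.size(), "sensor {} = {}", 3, 19.5);
    // r.out points one past the last character written
}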
So it's less complex bringing in a 3rd party library and having to pass arguments?
The fmt library can also do something similar, but it still requires the complexity of adding the library and passing arguments.
especially bringing in boost which isn't allowed in some codebases
boost is not allowed because of the complexity. So some people disallow boost, and here is the solution: just add the complexity directly to the language definition!
Just skimmed the proposal; I don't see how inline rendered f-strings are more complicated than the alternative.
I agree that we should have safe-by-default "decay" behavior to a plain ol' std::string, but I'm also picking up that many aren't certain it's useful syntactic sugar on top of the fmt lib? Many other languages have this same syntax, and it quickly becomes your go-to way to concatenate variables into a string. Even if it didn't handle UTF-8 out of the box, so what? The amount of utility is still worth it.
Reflection is going to change things so much.
Reinventing C#’s FormattableString and interpolated string handlers :)
C# wasn't the first language to introduce such a mechanism.
But they got the type decay right without introducing further user-defined conversions, unlike this proposal. The syntax is ad hoc, thus so should be the typing rule.
So, the f-string in Python is "spelled" that way because a leading character was the only ASCII syntax left for such a thing. It's odd that PRQL, and now potentially C++, might copy it. In the PRQL case it was a new thing, so they could have chosen anything: double quotes (like shell interpolation) or even backticks would seem to make more sense.
Also, the f- prefix was supposed to be short for "format" and pronounced that way. But "eff" caught on, and now devs the world over are calling them "eff strings" ... funny. :-D
PRQL contributor here:
That is a valid point and something I've also been thinking about lately. I can't speak for the others, but in my case the Python string interpolation syntax was the one I was most familiar with, other than bash, so it was just the default. The big idea really is to have string interpolation; the syntax is somewhat secondary, but we do aim for ergonomics with PRQL, so it is a consideration.
Since then I've seen more alternatives like `Hello ${var}!` in JS/TS and $"Hello {var}!" in F#. Not sure that there's a clear way to prefer one approach over the others.
What would you consider to be factors that would make you prefer one over the others?
Why is it odd to copy a popular and fitting alternative? What's the better one?
> [...] another leading character was the only ASCII syntax left for such a thing.
Not really? The original PEP [1], for example, considered `i"asdf"` as an alternative syntax. Any ASCII Latin letter besides `b`, `r` and `u` would have been usable.
[1] https://peps.python.org/pep-0498/#how-to-denote-f-strings
I'm going to make an asinine prediction. We will be exploring F-strings in future languages in 100 years time, encountering the same problems and questions.
I still use printf semantics in Python3 despite trying to get with the program for symbolic string/template logic. I don't need to be told it's better, I need some Philip-K-Dick-level brain re-wiring not to reach for
"..." % int(many())
modes of thinking.
FYI that code sample is broken, it should be `(int(many()),)`
Ironic I guess?
It's not broken (try it!). Any value is interpreted as an implicit 1-tuple if it's neither a tuple nor a dict. A better example would have been `"..." % many()` where `many` returns a tuple or dict.
Bless compilers able to catch wrong format specifiers.
Decline based on usage of pascalCase in the first example. How did that even happen?
That isn't even PascalCase. This is camelCase.
When I saw the title I thought "F-strings" might be some novel variant of P-strings. I was disappointed that this is just about formatting. I really would prefer safer string handling in modern C/C++.
Jesus. This is such a bad idea. Don't repeat the mistakes of Python. Look at what Swift does and make a SANE system ffs.
F-strings are one of my favorite features of Python, to be honest.
That doesn't automatically mean they're a good idea in C++; knowing C++, there are going to be a whole lot of gotchas that aren't in Python. But it means that, at least in my opinion, how f-strings worked out in Python is an argument in favor of them rather than against them.
Yea, f-strings are nice. But f-strings, r-strings, \ escaping, {{ escaping, ''' strings? Horrible.
Swift strings have ONE string. Just a single clean design that does all of that. With a simple set of rules.
Most people really like python f-strings.
You might be on your own here.
You missed half of what I said. Look at Swift.
To quote myself:
Yea, f-strings are nice. But f-strings, r-strings, \ escaping, {{ escaping, ''' strings? Horrible.
Swift strings have ONE string. Just a single clean design that does all of that. With a simple set of rules.
Sincere question: what's wrong with Python f-strings?
Three people in a row and not one of you guys checked out Swift strings before commenting, thus making exactly the mistake I complained about.
Look up Swift strings. Python has 4 types of string literals. Swift has 1. And they are BETTER and more powerful. Cleaner.
Yeah, I really missed ubiquitous C preprocessor macros in C++, so let's bring them back, but now inside string literals. Sweet.
Seriously, I just keep being amazed that people are running with the idea of having a full-blown untyped and unchecked formatting mini language (that's what libfmt, which became C++20 format, literally calls it) inside string literals — i.e., the part of the source code that you're specifically telling the compiler to not treat as code.
Think about it for a minute.
Format strings in C++ are checked completely at compile time. There are no hacks or compiler intrinsics involved (like what C does for printf to verify format strings).
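E.g. (the bad line is rejected before the program ever runs):
#include <format>
int main() {
    auto ok = std::format("{} + {} = {}", 1, 2, 3); // checked against argument types at compile time
    // auto bad = std::format("{:d}", "hello");     // ill-formed: fails to compile
}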
So basically doing the same at runtime for the last 55 years was somewhat OK?
Eh? C++20 format is checked at compile-time. This has been possible ever since string literals became constant expressions. These features are within the standard compile-time capabilities. People have done impressive compile-time parsing and codegen using it.