This is a very confusing blog post. I found myself looking for ChatGPT markers, because it doesn't make sense to say "omg Zig is so much cooler than C, here's why" and then start listing the absolute basics of the language, things that are identical in most modern languages, without any actual reflection on why writing the same thing in a different syntax somehow makes Zig superior.
I've learnt this the hard way. The most important thing is to get you to click.
Sometimes I'll first iterate over the title before even writing on substack.
I have learned that too. If you write about C, almost no one clicks. It is not new, it is not flashy, and it does not promise easy results. Yet almost everything still runs on it. The quiet parts of computing rarely get attention, even though they keep everything working.
I still write about C anyway. It may not trend, but it lasts.
I'm sure I'm not alone in - after decades - already knowing far too much about C, so that with any article I'm likely to read, either I'm like "No, that's wrong, and I even understand why you thought that, but it's still wrong" or I just nod along and sigh.
I spent a substantial fraction of my professional career writing C, and I remain interested in WG14 (the language committee) and in several projects written in C though I avoid writing any more of it myself.
The reason it's so widespread is called "Worse is Better", and I believe that has somewhat run its course. If you weren't aware of "Worse is Better", a quick Google should find you the original essay on the topic from years back.
In contrast when I read an article about say Zig, or Swift, I am more likely to learn something new.
But I can certainly endorse your choice to write about whatever you want - life is too short to try to get a high score somehow.
Thanks for sharing your thoughts. I have never deployed any production C code and I would not choose C for professional work either, but learning it, with all its rough edges, has made me a better engineer. It helps me understand how things really work under the hood. No pain, no gain.
Maybe I am biased, but for professional work, I stay with Go. I have built large distributed data systems that handle hundreds of millions of business transactions daily, and Go has been steady and reliable for that scale. Its simplicity, strong concurrency model, and easy deployment make it practical for production systems. I still enjoy exploring Zig and Rust in my spare time, but for shipping real systems, Go continues to get the job done without getting in the way.
> I'm sure I'm not alone in - after decades - already knowing far too much about C, so that with any article I'm likely to read, either I'm like "No, that's wrong, and I even understand why you thought that, but it's still wrong" or I just nod along and sigh.
If you have some spare time, I would really like to hear more about your experiences. It sounds like you have worked with C for a long time, and that kind of insight is hard to find now.
Most people around me started with JavaScript or TypeScript as their first language, and for many, that is still all they know. I mean no disrespect, it is just how things are today. It would be great to hear how your view of programming has changed over the years and what lessons from C still matter in your work today.
An alternative view of "not new and flashy" is "known and expected", which not 100% of C conversations have to be. Just look at the excitement around Fil-C lately!
Oh, and I just submitted a link to my article about C. I am pretty sure no one will click it.
Articles about C never get much traffic, but that is fine. I wrote it because I care about how things really work, not because I expect it to trend. If even a few people read it and see the beauty in the old language that still runs the world, that is enough.
Honestly, this (the fact it is being massively upvoted) looks a lot more like paid promotion. Not the first time, and not the only such submission, btw.
- pointers to bitfields
- checked bitshifts
- small ints like u4
- imperative array initialization blocks
- test code blocks
- equivalent of debugger; keyword from js
- some vague stuff about what you're able to do at compile time (a few of these are sketched below)
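For the curious, a couple of those basics in one place (a minimal sketch, not from the article; assumes a recent Zig):

```zig
const std = @import("std");

test "small ints, checked shifts, test blocks" {
    // arbitrary-width integers: a u4 holds 0..15
    const nibble: u4 = 0xF;

    // shift amounts are typed: shifting a u8 takes a u3 (0..7),
    // so an out-of-range shift amount is a compile error rather than UB
    const byte: u8 = 1;
    const shifted = byte << 7;

    try std.testing.expect(nibble == 15);
    try std.testing.expect(shifted == 128);
}
```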
I had to dig further on the compile-time execution stuff. It's actually pretty cool-looking. Recommend digging into it. I don't know that it's a killer enough feature to draw me away from Rust's guarantees, but it is interesting.
It is difficult to overstate how useful compile-time execution is in practice. I can't imagine using a systems language without it now. The term "modern C++" largely denotes when compile-time execution was added to that language.
I would love to see Rust get compile-time execution that is as capable as Zig or C++20.
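For readers who haven't seen it, a minimal sketch of the kind of thing Zig's comptime does (the example itself is made up; assumes a recent Zig):

```zig
const std = @import("std");

fn fib(n: u32) u32 {
    return if (n < 2) n else fib(n - 1) + fib(n - 2);
}

// The table is fully computed during compilation; no runtime cost.
const fib_table = blk: {
    @setEvalBranchQuota(100_000); // naive recursion needs a bigger comptime budget
    var t: [16]u32 = undefined;
    for (&t, 0..) |*slot, i| slot.* = fib(@intCast(i));
    break :blk t;
};

test "comptime-built table" {
    try std.testing.expect(fib_table[10] == 55);
}
```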
The "how to modify an environment variable" bit and the bin-dec-hex table made me feel the same way. Then I saw the part explaining how to check for duplicates in a row... I'm struggling to understand the point of the article. Testing a text generator?
In my opinion the biggest issue with Zig is that it doesn't allow attaching data to errors. The data can only be passed via a side channel, which is inconvenient and ENCOURAGES TOOL DEVELOPERS TO NOT PASS ERROR DATA, which greatly increases debugging difficulty.
Sometimes there are 100 things that can possibly go wrong. With error data you can easily know which exact thing went wrong. But with an error code alone you just know "something is wrong, don't know what exactly".
> I just spent way longer than I should have debugging an issue of my project's build not working on Windows, given that all I had to work with from the zig compiler was an error: AccessDenied and the build command that failed. When I finally gave up and switched to rewriting and then debugging things through Node, the error that it returned was EBUSY and the specific path in question that Windows considered to be busy, which made the problem actually tractable ... I think the fact that even the compiler can't consistently implement this pattern points to it perhaps being too manual/tedious/unergonomic/difficult to expect the Zig ecosystem at large to do the same
Interestingly, I just read an article from matklad (who works a lot with Zig) talking about the benefits of splitting up error codes and error diagnostics, and the pattern of using a diagnostic sink to provide human-readable diagnostic information:
Honestly I was quite convinced by that, because it kind of matches my own experiences that, even when using complex `Error` objects in languages with exceptions, it's still often useful to create a separate diagnostics channel to feed information back to the user. Even for application errors for servers and things, that diagnostics channel is often just logging information out when it happens, then returning an error.
The separation of error codes and diagnostics is fine, but the language needs a standard mechanism to optionally pass this error diagnostic information. Otherwise, everyone will develop their own different way with ZERO consistency and many will simply not pass error diagnostics at all.
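For reference, the hand-rolled pattern in question usually looks something like this (a minimal, hypothetical sketch; every library spells it a bit differently, which is exactly the problem):

```zig
const std = @import("std");

// Hypothetical diagnostics struct; callers that don't care pass null.
const Diagnostics = struct {
    line: usize = 0,
    message: []const u8 = "",
};

fn parse(input: []const u8, diag: ?*Diagnostics) error{SyntaxError}!void {
    if (input.len == 0) {
        if (diag) |d| d.* = .{ .line = 1, .message = "unexpected end of input" };
        return error.SyntaxError;
    }
}

test "diagnostics out-parameter" {
    var d: Diagnostics = .{};
    try std.testing.expectError(error.SyntaxError, parse("", &d));
    try std.testing.expectEqualStrings("unexpected end of input", d.message);
}
```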
Your and GP's two statements are not mutually exclusive. This paradigm can have significant benefits, and at the same time be too cumbersome for people to want to use consistently.
I tend to follow the rest of the ecosystem when developing libraries. If I wanted to make a Zig lib, I'd look at what other major libs are doing (or not doing) and copy that.
If I found no consistency, I'd be making a post like OP's, but from a different perspective.
The "correct" way is highly context dependent with the added proviso that Zig assumes a low-level systems context.
In this context, adding data to an error may be expedient but 1) it has a non-trivial overhead on average and 2) may be inadvisable in some circumstances due to system state. I haven't written any systems in Zig yet but in low-level high-performance C++20 code bases we basically do the same thing when it comes to error handling. The conditional late binding of error context lets you choose when and where to do it when it makes sense and is likely to be safe.
A fundamental caveat of systems languages is that expediency takes a back seat to precision, performance, and determinism. That's the nature of the thing.
If the error rarely happens, then passing error data shouldn't affect performance in any visible way. If the error occurs on a common path, then it's designed wrongly.
I agree that in special states like OOM, passing error data that requires allocation is not OK.
Error data being returned instead of just error codes doesn't require allocation at all, and never would, unless the specific unions that you're returning require as much. Zig already has tagged unions with a tag field and associated payload, that is exactly what you would return. The overhead isn't remarkably worse than the cost of modifying the value someone passed in to "Fill this in in case of errors" (which is what you have to do now in Zig).
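A minimal sketch of that idea (illustrative names only, not an actual Zig proposal):

```zig
const std = @import("std");

// A tagged union carrying either a result or a payload-bearing error case.
const ParseResult = union(enum) {
    ok: u32,
    invalid_char: struct { index: usize, byte: u8 },
    empty,
};

fn parseDigit(s: []const u8) ParseResult {
    if (s.len == 0) return .empty;
    const c = s[0];
    if (c < '0' or c > '9') return .{ .invalid_char = .{ .index = 0, .byte = c } };
    return .{ .ok = c - '0' };
}

test "error payload without allocation" {
    switch (parseDigit("x")) {
        .invalid_char => |e| try std.testing.expect(e.index == 0 and e.byte == 'x'),
        else => return error.TestUnexpectedResult,
    }
}
```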
For quite a long time, I have been wondering why I like to code in Raku so much … in a roundabout way you set me thinking. Perhaps it's because, in Raku, precision, performance and determinism take a back seat to expediency. (Sorry for the tangent.)
Went to have a look at the (beautiful and informative) website for the Raku language to refresh my memory, and looking at the examples I thought "Oh god, those sigils, those cryptic short keywords... it looks like a modern Perl, I doubt we would be happy together". Then I went to Wikipedia to check, and yes, indeed, that's Perl 6!
I'll pass. :)
I love it. My largest project is about 20k lines … so nothing too big. But if you need to be expedient (just quickly make a data extract/load/transform or a command line thingy) it is great fun. The LLMs seem to be pretty good too, just the usual hallucination here and there.
I thought it was useful information for people who did not know this. Of course Wikipedia would have sufficed, too: "Raku, formerly known as Perl 6 [...]".
People are working on this. std.zon is generally considered to be a good example of how to handle errors and diagnostics, though it's an area of active exploration. The plan is to eventually collect all the good patterns people have come up with and (1) publish them in a collection, and (2) update std to actually use them.
> how to handle errors and diagnostics, though it's an area of active exploration
I am flabbergasted and exasperated by this sentiment. Zig is over 9 years old at this point. This feels like the same kind of circular argument from Golang "defenders" about generics and error handling.
Go gets a lot of flack for getting some things wrong but it was a stable and productive language within a couple of years.
If you look at the current Zig website the hello world example doesn’t compile because they changed the IO interface. Something as simple as writing to the console.
It's easier to get things right if you have no issues breaking backward compatibility for a decade. It feels like it'll be well over 10 years before Zig is "1.0".
Agreed, this is probably my biggest ongoing issue with Zig. I really enjoy it overall but this is a really big sticking point.
I find it really amusing that we have a language that has built its brand around "only one obvious way to do things", "reducing the amount one must remember", and passing allocators around so that callers can control the most suitable memory allocation strategy.
And yet in this language we supposedly can't have error payloads, because not every error reporting strategy is suitable for every environment due to memory constraints, so we must rely on every library implementing its own slightly different version of the diagnostic pattern, which should really be codified as some sort of language construct where the caller decides which allocator to use for error payloads (if any).
Instead we must hope that library authors are experienced and curious enough to have gone out of their way to learn this pattern because it isn't mentioned in any official documentation and doesn't have any supporting language constructs and isn't standardized in any way.
There must be an argument against this (rather obvious) observation but I'm not aware of it?
In any case, when debugging, annotating an error with extra context is often not enough. One often needs a detailed trace of what happened before.
So what I would like to see in any programming language is the ability to do structured logging with extra context from the call stack (including asynchronous support in languages that have that) that has almost zero overhead when the log is not printed.
Various languages and runtimes have some libraries that try to do that, but the usage is awkward and the performance overhead is not trivial.
I made a go of this using the stacktrace functionality built into C++23. The overhead and complexity it introduced made it not worth it, unfortunately. There may be a way to do this but it seems non-trivial in implementation.
I know that Zig doesn't allow attaching data to errors for valid reasons. If error data contains an interior pointer, then it can easily cause memory safety problems. Zig doesn't have a borrow checker or ownership system to prevent that.
If you wanted to have a parameter that gets filled in when there is an error, this exact issue would remain; it's completely unrelated to which language construct you use to capture errors, and has more to do with having a good idea of how your errors are allocated, if they require allocation. I don't think the commenter in the GitHub issue thought this through at all, and probably didn't expect to have it held up as some example of why you can't return tagged unions (because it's not an example of that, not even remotely).
> Preventing data being attached to an error forces more clear and precise errors.
Okay, maybe theoretically, but in the real world I would like to have the filename on a "file not found", an address on a "connection timeout", a retry count on a "too many failures", etc.
But also in the real world I may not be interested in any error information for the library I'm using. I'd like to be able to pass a null for the error information structure and have the compiler optimize away everything related to tracking and storing error information.
I’d like my parser library to be able to give me the exact file, line and column number an error occurred. But I’d also like to use the library in a “just give me an error if something failed, I don’t really care why” mode.
I don't follow. Because there's a possibility that someone somewhere might create a bad, overly generic error set if they were allowed to stuff details into the payload when those should be reflected in the error "type", it's a good idea to make the vast majority of error reporting bad and overly generic by eliminating error payloads entirely?
Yeah, every single newbie programming language designer starts with a maximalist position of "exceptions are hard, just return an error code", and then ends up inventing their own shitty, ad-hoc and malfeatured exception handling system.
This seems kinda contrived. In practice that "ERROR DATA" tends not to exist. Unexpected errors almost never originate within the code in question. In basically all cases that "ERROR DATA" is just recapitulating the result of a system call, and the OS doesn't have any data to pass.
And even if it did, interpreting the error generally doesn't ever work by putting a microscope over the attached data. You got an error from a write. What does the data contain? The file descriptor? Not great, since you really want to know the path to the file. But even then, it turns out it doesn't really matter, because what really happened was the storage filled up due to a misbehaving process somewhere else.
"Error data" is one of those conceits that sounds like a good idea but in practice is mostly just busy work. Architect your systems to fail gracefully, don't fool yourself into pretending you can "handle" errors in clever ways.
That's not error data, that's (one level of) a stack trace. And you can do that in zig, but not by putting call stack data into error return codes.
The conflation between exception handling and error flagging (something that C++ did largely as a mistake, and that has been embraced by managed runtimes like Python or Java) is actually precisely what this feature is designed to untangle. Exception support actually turns out to have very non-trivial impact on the generated code, and there's a reason why languages like Rust and Zig don't include them.
> That's not error data, that's (one level of) a stack trace.
They're not talking about the stack trace, but about the common case where the error is not helpful without additional information, for example a JSON parsing library that wants to report the position (line number) in the string where the error appears.
There's no way of doing that in Zig; the best you can do is return a "ParseError" and build your own, non-standard diagnostic facilities to report detailed information through output arguments.
Another way to look at this example is that, for the parser, this is not an error. The parser is doing its job correctly, providing an accurate interpretation of its input, and for the parser, this is qualitatively different from something that prevents it doing its job (say, running out of memory).
At the next level up, though, there might be code that expects to be able to read a JSON config file at a certain location, and if it fails, it’s reasonable to report which file it tried to read, the line number, and what the error was.
Error data should specify where the error occurred and what failed. So you'll know which file had a problem, and that the problem in question was a failure to write. From that you can make the inference that maybe the disk is full, etc.
For one example, a number of years back, I built a Python package, env, and version manager. It was built entirely in Rust and distributed as a binary. Since I knew users would likely have pip installed, it provided an easy way for them to install it, regardless of OS.
You could go further, like in this case, and use wheels + PyPI for something unrelated to Python.
It's useful as a distro-agnostic distribution method. CMake is also installable like this despite having nothing to do with Python.
Or I should say it was useful as a distribution method, because most people had Python already available. Since most distros now don't allow you to install stuff outside a venv you need uv to install things (via `uv tool install`) and we're not yet at the point where most people already have uv installed.
Regular Python bindings / c extensions don’t depend on a pypi-packaged instance of gcc or llvm though. It’s understood that these things are provided externally from the “system” environment.
I know some of it has already happened with Rust, but perhaps there's a broader reckoning that needs to occur here wrt standards around how language-specific build and packaging systems handle cross-language projects… which could well point to phasing those out in favour of nix or pixi, which are designed from the get-go to support this use case.
That's really cool actually. Now that AI is a little more commonly available for developer tooling, I feel like it's easier than ever to learn any programming language, since you can braindrain the model.
The standard models are pretty bad at Zig right now since the language is so new and changes so fast. The entire language spec is available in one html file though so you can have a little better success feeding that for context.
> The entire language spec is available in one html file though so you can have a little better success feeding that for context.
This is what I've started doing for every library I use. I go to their GitHub, download their docs, and drop the whole thing into my project. Then whenever the AI gets confused, I say "consult docs/somelib/"
I'm afraid this article kinda fails at its job. It starts out with a very bold claim ("Zig is not only a new programming language, but it's a totally new way to write programs"), but ends up listing a bunch of features that are not unique to Zig or even introduced by Zig: type inference (invented in the late 60s, first practically implemented in the 80s), anonymous structs (C#, Go, TypeScript, many ML-style languages), labeled breaks, functions that are not globally public by default...
It seems like this is written from the perspective of C/C++ and Java and perhaps a couple of traditional (dynamically typed) languages.
On the other hand, the concept that makes Zig really unique (comptime) is not touched upon at all. I would argue compile-time evaluation is not entirely new (you can look at Lisp macros back in the 60s), but the way Zig implements this feature and how it is used instead of generics is interesting enough to make Zig unique. I still feel like the claim is a bit hyperbolic, but there is a story that you can sell about Zig being unique. I wanted to read this story, but I feel like this is not it.
Hello Mr. Bright. I've seen similar comments from you in response to Zig before, specifically in the comments on a blog post I made about Zig's comptime. I took some time reading D's documentation to try to understand your point (I didn't want to miss some prior art, after all). By the time I felt like I could give a reply, the thread was days old, so I didn't bother.
The parent comment acknowledges that compile time execution is not new. There is little in Zig that is, broad strokes, entirely new. It is in the specifics of the design that I find Zig's ergonomics to be differentiated. It is my understanding that D's compile time function execution is significantly different from Zig's comptime.
Mostly, this is in what Zig doesn't have as a specific feature, but uses comptime for. For generics, D has templates, Zig has functions which take types and return types. D has conditional compilation (version keyword), while Zig just has if statements. D has template mixins, Zig trusts comptime to have 90% of the power for 10% of the headache. The power of comptime is commonly demonstrated, but I find the limitations to be just as important.
A difference I am uncertain about is whether there's any D equivalent for Zig having types be expressions. You can, for example, calculate what the return type should be given the type of an argument.
Maybe I don't understand; in D, how do I write a function which makes a new type?
For example, Zig has a function ArrayHashMapWithAllocator which returns, well, a hash table type in a fairly modern style, no separate chaining and so on.
Not an instance of that type: it returns the type itself. The type didn't exist; we called the function; now it does exist, at compile time (because clearly we can't go around making new types at runtime in this sort of language).
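For readers unfamiliar with Zig, a tiny sketch of a function that returns a type (simplified; the real std containers are much more involved):

```zig
const std = @import("std");

// Evaluated at compile time: constructs and returns a brand-new struct type.
fn Pair(comptime T: type) type {
    return struct {
        first: T,
        second: T,

        pub fn sum(self: @This()) T {
            return self.first + self.second;
        }
    };
}

test "calling a function to get a type" {
    const P = Pair(u32); // the type comes into existence here
    const p = P{ .first = 1, .second = 2 };
    try std.testing.expect(p.sum() == 3);
}
```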
You use templates and string mixins alongside each other.
The issue with mixins is that using string concatenation to build types on the fly isn't the greatest debugging experience, as there is only printf debugging available for them.
But Zig doesn't need a keyword to trigger it either? If it's possible at all, it will be done. The keyword should just prevent run-time evaluation. (Unless I grossly misunderstood something.)
Partial evaluation has been quite well known at least since 1943 and Kleene's s-m-n theorem. It has since been put to use, in various forms, by quite a few languages (including C++ in 1990, and even C in the early seventies). But the extent and the way in which Zig specifically puts it to use -- which includes, but is not limited to, how it is used to replace other features that can then be avoided (and all without macros) -- is unprecedented.
Pointing out that other languages have used partial evaluation, sometimes even in ways that somewhat overlap with Zig's use, completely misses the point. It's at least as misplaced as saying that there was nothing new or special about iPhone's no-buttons design because touch screens had existed since the sixties.
If you think Zig's comptime is just about running some computations at compile time, you should take a closer look.
Perl5 had it before. Either by constant-folding, or by BEGIN blocks.
Constant-folding just got watered down by the many dynamic evangelists in the decades after, such that even C or C++ didn't enforce it properly. In perl5 it was watered down on add (+) by some hilariously wrong argumentation back then. So you could precompute mult const expressions, but not add.
How are perl5’s BEGIN blocks equivalent to comptime? It’s been awhile, but I recall BEGIN blocks executing at require time—which, in complicated pre-forking setups that had to be careful about only requiring certain modules later during program execution because they did dumb things like opening connections when loaded, meant that reasoning about BEGIN blocks required a lot more careful thought than reasoning about comptime.
The same is true for templates, or macros—all of which are distinguished by being computed in a single pass (you don’t have to think about them later, or worry about their execution being interleaved with the rest of the program), before runtime start (meaning that certain language capabilities like IO aren’t available, simplifying reasoning). Those two properties are key to comptime’s value and are not provided by perl5’s BEGIN blocks—or probably even possible at all in the language, given that it has eval and runtime require.
BEGIN blocks execute at compile-time. require is just a wrapper to load a module at compile-time.
When you want to use state, like opening a file for run-time, use INIT blocks instead. These are executed first before runtime, after compile-time.
My perl compiler dumps the state of the program after compile-time. So everything executed in BEGIN blocks is already evaluated. Opening a file in BEGIN would not open it later when required at run-time, and compile-time is separated from run-time.
All BEGIN state is constant-folded.
I think we’re using different definitions of “compile time”.
I know who you are, and am sure everything you say about the mechanisms of BEGIN is correct, but when I refer to “compile time”, I’m referring to something that happens before my program runs. Perl5’s compilation happens the first time a module is required, which may happen at runtime.
Perhaps there’s a different word for what we’re discussing here: one of the primary benefits of comptime and similar tools is that they are completed before the program starts. Scripting languages like perl5 “compile” (really: load code into in-memory intermediate data structures to be interpreted) at arbitrary points during runtime (require/use, eval, do-on-code).
On the other hand, while code in C/Zig/etc. is sometimes loaded at runtime (e.g. via dlopen(3)), its compile-time evaluation is always done before program start.
That “it completed before my code runs at all” property is really important for locality of behavior/reasoning. If the comptime/evaluation step is included in the runtime-code-load step, then your comptime code needs to be vastly more concerned with its environment, and code loading your modules has to be vastly more concerned with the side effects of the import system.
(I guess that doesn’t hold if you’re shelling out to compile code generated dynamically from runtime inputs and then dlopen-ing that, but that’s objectively insane and hopefully incredibly rare.)
But I would not put comptime forward as some sort of magical invention. It's still just a newish take on metaprogramming. We have had that since forever. From my minimal time with Zig, I kind of think of comptime as a better version of C++ templates.
That said, Zig is possibly a better alternative to C++, but not that exciting for me. I kind of don't get why so many think it's the holy grail; first it was Rust, and now Zig.
IMHO a programming language doesn't need a single USP, it just needs to include good existing ideas and (more importantly) exclude bad existing ideas (of course what's actually a good and bad idea is highly subjective, that's why we need many programming languages, not few).
Rust's borrow checker is unique in the sense that it is production-ready. Cyclone is indeed prior art, but it's not as if it ever got beyond the research project stage.
The code samples are so weird... Some are images, others are not, and there's like 10 different color schemes (even among the textual ones, it's not consistent). That actually takes some kind of effort to achieve :D.
> Zig is not only a new programming language, but it’s a totally new way to write programs
I'd say the same thing about Rust. I find it the best way to express what code should run at any given point in the program, and the design is freakin' interstellar: it is basically a "query engine" where you write a query of some code against the entire available "code space", including the root crate and its dependencies. Once you understand that, programming becomes naming bits and then writing queries for the ones you wish to execute.
As someone not really familiar with Rust, this sounds intriguing, but I don't fully understand. Do you have any links or examples that could clarify this for someone who is just starting out with Rust?
When I read "I can easily say that Zig is not only a new programming language, but it's a totally new way to write programs", I expected to see something as shocking as LISP/Smalltalk/Realtalk/EVE/FORTH/Prolog... A whole new paradigm, a whole new way to program. Or at least a new concept like the pure functionalism of Haskell, or prototyping like in Lua/JS/Io. And I was so damn shocked at how I must have missed something so huge, having read the entirety of Zig's documentation and not noticed anything. As you mentioned, it turned out to be nothing, and I wondered then why it was at the top of HN. Based on the comments, that also turned out to be for no reason.
The idea of modern society is "get hyped for the new thing". The tech crowd did not escape that, unfortunately, and keeps rediscovering techniques that were already possible more than 50 years ago, because they don't want to learn the history of the technology they are using.
> I'm afraid this article kinda fails at its job
Yeah, I know nothing about Zig, and was excited by the author's opening statement that Zig is the most surprising language he has encountered in a 45 yr software career...
But this is then immediately followed by saying that ability to compile C code, and to cross-compile, are the most incredible parts of it, which is when I immediately lost interest. Having a built-in C compiler is certainly novel, and perhaps convenient for inter-op, but if the value goes significantly beyond that then the author is failing to communicate that.
It has been several decades since putting a slash between these two made sense, lumping them together like this. It would be similar to saying something like Java/Scala or ObjectiveC/Swift. These are completely different languages.
Nope, that is an English grammar construct that is a shortcut for "and" and "or", as any good English grammar book will explain.
Indeed you see those for Java/Scala and Objective-C/Swift in technical books and job adverts.
Any search on the careers sites, or documentation, of companies that have seats at ISO and sell/develop C and C++ compilers will turn up such C/C++ references in a couple of places.
In the general case yes, but "C/C++" became an idiom for the stance that C and C++ are essentially the same, that C++ is a superset of C, or that C++ is just the replacing successor of C and C should be treated as superseded. This is quite wrong, and thus there is a lot of rightful pushback against the term. Personally I use "C, C++" when I want to talk about both without claiming that they are the same language.
Nah, that is what pedantic folks without English grammar knowledge keep complaining about, instead of actually discussing better security practices in both languages.
It is a bikeshedding discussion that doesn't help with anything regarding the lack of security in C, or the legions of folks that keep using C data types in C++, including bare-bones null-terminated strings and plain arrays instead of collection types with bounds checking enabled.
This has nothing to do with bikeshedding, it is a genuine misunderstanding of these two languages that is propagated in this way. This is not about grammar.
In my opinion, this is an important issue and not "bikeshedding", but it can be discussed whether the term "C/C++" is always an example of that idea or not. I think it is not, but it is connected enough that I won't use it to sidestep the issues.
Proper C++ should use new, delete, custom allocators, and standard collection types.
Even better, all heap allocations should be done via ownership types.
Calling into malloc() is writing C in C++, and should only be used for backwards compatibility with existing C code.
Additionally, there is no requirement in the C++ standard that new and delete call into malloc()/free(); that is usually done as a matter of convenience, as all C++ compilers are also C compilers.
> Calling into malloc() is writing C in C++, and should only be used for backwards compatibility
And this is exactly the stance I am arguing against. C++ is not the newer version of C. It forked off at some point and is a quite different language now.
One of the reasons I do use malloc is compatibility with C. It is not for backward compatibility, because the C code is newer. In fact, I actively change code, when it needs a rewrite anyway, from C++ to C.
The other reason for using it even when writing C++ is that new alone doesn't allow you to allocate without also calling the constructor. For that I call malloc first and then invoke the constructor with placement new. For deallocating, I call the destructor and then free. This also has the additional benefit that your constructor and destructor implementations can fail and you can roll back.
As a C++ developer who's heard of Zig but never dived into it, I was reading this article scratching my head, wondering what is actually so unique about it.
Why the blog has a section on how to install it on the path is also very puzzling.
I like how Zig feels clear and simple to start with. I like that it gives one toolchain and makes cross compilation easy. I like that it helps people see how systems programming can feel approachable again.
I also like that C has done these things for many years. I can use different tools, link libraries, and trust that it will still work. I can depend on standards that keep improving while staying familiar.
I think Zig is exciting for what it adds today. I think C is cooler because it has proved itself everywhere and still runs the world.
I've used it for well over a year now, and I identify with this comment... well, no, but I used to. In between stints of Zig, another language I enjoyed was Python, and it was a breath of fresh air to come back to the C style that I know and love within Zig. I would have said exactly this when I first started writing Zig.
Today, Zig is so much better than C. I used to refer to Zig as an improved version of C. But I don't anymore. C may have come first, but the chronological roles have reversed. If Zig is a programming language, then C is a toy trying to copy Zig's functionality and usability.
Calling C easier to use in a cross platform context is absolutely insane. If I was only concerned about $HOST I would consider using C. Today, when I might want to copy a binary to literally any other system, I wouldn't even consider C. Zig wants code to work. C wants code to compile. There's a stark and critically important difference between the two.
> I think Zig is exciting for what it adds today. I think C is cooler because it has proved itself everywhere and still runs the world.
I couldn't have put it better myself, the only thing C has over Zig is inertia. But I wouldn't consider that a selling point....
I abandoned the goal of investing more time into C when they couldn't get defer into their latest version.
Two years later, already enjoying it in Zig, `defer` is a lot less important to me now. But I still view it as a symptom of the death of the language. C isn't dead, by any stretch of the imagination, but it's no longer learning from its mistakes, whereas I still am.
I started learning C again for one simple reason: to understand the Linux kernel. You cannot do that without knowing C, and soon you end up learning about GCC, linkers, and how programs really run.
Once I spent time with it, I saw how many smart ideas from the kernel could be used anywhere: the initcall system that runs modules in order, the way structs with function pointers create flexible drivers, the use of macros to build type-safe lists, and so on.
I totally vibe with the intro, but then the rest of the article goes on to showcase bits of Zig.
I feel what is missing is an explanation of how each feature is so cool compared to other languages.
As a language nerd, I find Zig syntax just so cool. It doesn't feel the need to adhere to any conventions and seems to solve problems in the most direct and simple way.
An example of this is declaring a label versus referring to a label. By moving the colon to either end of the name, it is instantly understood which form a label is.
And then there are the runtime promises, such as no hidden control flow. There are no magical @decorators or destructors. Instead we have explicit control flow like defer.
Finally there is comptime. No need to learn another macro syntax. It's just more Zig during compilation.
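To make the label and defer points concrete (a small sketch):

```zig
const std = @import("std");

test "labels and defer" {
    // declaring a label puts the colon after the name (blk:),
    // referring to it puts the colon before (:blk)
    const total = blk: {
        var sum: u32 = 0;
        for (0..4) |i| sum += @intCast(i);
        break :blk sum;
    };
    try std.testing.expect(total == 6);

    // no hidden control flow: cleanup is an explicit defer at the call site
    var open = true;
    defer open = false; // runs when the enclosing scope exits
    try std.testing.expect(open);
}
```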
I was also curious what direction the article was going to take. The showcase is cool, and the features you mentioned are cool. But for me, Zig is cool is because all the pieces simply fit together with essentially no redundancy or overloading. You learn the constructs and they just compose as you expect. There's one feature I'd personally like added, but there's nothing actually _missing_. Coding in it quickly felt like using a tool I'd used for years, and that's special.
Zig's big feature imo is just the relative absence of warts in the core language. I really don't know how to communicate that in an article. You kind of just have to build something in it.
> Coding in it quickly felt like using a tool I'd used for years, and that's special.
That's been my exact experience too. I was surprised how fast I felt confident writing Zig code. I only started using it a month ago, and already I've made it to 5000 lines of a custom Tcl interpreter. It just gets out of the way of me expressing the code I want to write, which is an incredible feeling. Want to focus on fitting data structures in L1 cache? Go ahead. Want to automatically generate lookup tables from an enum? 20 lines of understandable comptime. Want to use tagged pointers? Using "align(128)" ensures your pointers are aligned so you can pack enough bits in.
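As an illustration of the enum-to-lookup-table trick (a sketch along those lines, not the commenter's actual code):

```zig
const std = @import("std");

const Opcode = enum { add, sub, mul };

// Built at compile time by reflecting over the enum's fields.
const opcode_names = blk: {
    const fields = std.meta.fields(Opcode);
    var table: [fields.len][]const u8 = undefined;
    for (fields, 0..) |f, i| table[i] = f.name;
    break :blk table;
};

test "enum-indexed lookup table" {
    try std.testing.expectEqualStrings("mul", opcode_names[@intFromEnum(Opcode.mul)]);
}
```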
Having spent a year tinkering in Zig, its absence of features has made me want to drop C#/Java professionally and pick up Golang. It's quite annoying when you see a codebase written in C#/Java and you can tell in which year/era it was written because of the language features. The way of writing things in C# changes like every 4 years or so.
There's a certain beauty in only having to know 1~2 loop/iteration concepts compared to 4~5 in modern multi-paradigm languages (various forms of loops, multiple shapes of LINQ, the functional stuff, etc.).
The feature I want is multimethods -- function overloading based on the runtime (not compile time) type of all the arguments.
Programming with it is magical, and it's a huge drag to go back to languages without it. Just so much better than common OOP, which dispatches only on the type of one special argument (self, this, etc.).
Common Lisp has had it forever, and Dylan transferred that to a language with more conventional syntax -- but is very near to dead now, certainly hasn't snowballed.
On the other hand Julia does it very well and seems to be gaining a lot of traction as a very high performance but very expressive and safe language.
I think this is a major mistake for Zig's target adoption market - low level programmers trying to use a better C.
Julia is phenomenally great for solo/small projects, but as soon as you have complex dependencies that _you_ can't update - all the overloading makes it an absolute nightmare to debug.
>The feature I want is multimethods -- function overloading based on the runtime (not compile time) type of all the arguments.
>Programming with it is magical, and its a huge drag to go back to languages without it. Just so much better than common OOP that depends only on the type of one special argument (self, this etc).
Can you give one or two examples? And why is programming with it magical?
For a start it means you can much more naturally define arithmetic operators for a variety of built in and user-defined types, and this can all be done with libraries not the core language.
Because methods aren't "inside" objects, but just look like functions taking (references to) structs, you can add your own methods to someone else's types.
It's really hard to give a concise example that doesn't look artificial, because it's really a feature for large code bases.
The article's claim of Zig being a "totally new way to write programs" is quite mad but I'd like to make a different claim: Zig's own development is a totally new way of writing programming languages (or is at least very rare).
While I don't wholly agree with all choices made by Andrew and the Zig team, I greatly appreciate the care with which they develop features. The slow pace of deliberating over features, refining them, and removing unnecessary ones seems in sharp contrast to the development of any other language I'm aware of. I'm no language historian though, happy to be challenged.
I am not sure if a slow pace is as beneficial as you say. I scrolled through the error handling issue brought up in this comment section ( https://github.com/ziglang/zig/issues/2647#issuecomment-2670... ) and it's clear that the only thing that happened there was that communication on the issue was hindered. I come from the C++ side, and our "ISO C++ committee" language development process leaves a lot to be desired. Now look at the error handling they did pass in C++23 (std::expected). It raises some questions about how slow you can be while still appearing to move forward.
Disclaimer: I would like to see Zig and other new languages become viable alternatives to C++ in gamedev. But I understand that it might happen way after I retire =)
I don't think Java and Rust were so ok with completely removing features. For example, in Zig 0.15 they completely overhauled the IO interface, meaning all libraries now have to rewrite their usage. Just to make sure they did it right.
> I don’t think Java and Rust were so ok with completely removing features.
This just shows that you weren't around for pre-1.0 Rust. Back then Rust was infamous for the language making breaking changes every week. Check out this issue from 2013 tracking support for features which were deprecated but had yet to be removed from the compiler: https://github.com/rust-lang/rust/issues/4707 , and that's just a single snapshot from one moment in Rust's prehistory.
Semantic major/minor version 0.15 means it's still in development. It's not supposed to be stable.
Going from 0.14 to 0.15 allows breaking changes.
Try making a similar change between version 5.0 and 6.0, with hundreds of thousands of existing users, programs, packages and frameworks that all have to be updated. (Yes, also the users who have to learn the new thing.)
While some of the features the author references are really interesting, personally I don't see how any of that would justify creating a new memory-unsafe language in 2016. I thought it was pretty obvious by now [1][2][3] that memory safety is best left to tooling/compilers and not to programmers.
Tanenbaum was right, the future of the Linux kernel is dire, and it's been a huge setback to operating systems research in practical terms.
Fortunately, vendors are gradually moving away from Linux, having been hamstrung by its failures. Google is planning to move to a capability-based microkernel in the coming years for Android and ChromeOS, and Huawei has already done so with HarmonyOS.
In a hundred years, Linux will be a footnote in computing history.
I think even in the year of our lord 2016 there's room for a language with safe defaults but seamless interoperability with existing unsafe code. It's certainly an improvement on the status quo and provides an alternative to rewriting the world in Rust or a GC language.
Unlike C/C++, Zig is not inherently memory-unsafe.
Where Rust insists on having either partial safety through the checker or lack of control in unsafe code, Zig provides a toolkit for constructing safe frameworks. Zig also doesn't have the main sources of unsafety that come from certain C design mistakes.
Besides, if you are after true memory safety then garbage collection is the way to go.
I've tried writing a similar post, but I think it's a bit difficult to sound convincing when talking about why Zig is so pleasant. it's really not any one thing. it's a culmination of a lot of well-made, pragmatic decisions that don't sound significant on their own. they just add up to a development experience that feels pleasantly unique.
a few of those decisions seem radical, and I often disagreed with them... but quite reliably, as I learned more about the decision making, and got deeper into the language, I found myself agreeing with them after all. I had many moments of enlightenment as I dug deeper.
so anyways, if you're curious, give it an honest chance. I think it's a language and community that rewards curiosity. if you find it fits for you, awesome! luckily, if it doesn't, there's plenty of options these days (I still would like to spend some quality time with Odin)
I prefer Odin to Zig after trying both... but it seems Odin's performance is a bit lower than Zig's, C's and Rust's?! Have you noticed any performance issues, or is it not something to worry about?
No, I write Odin for production and there is no performance difference to speak of coming from the way the compiler or language works. If you have one it's likely because of an older/different LLVM version being used, but AFAIK Odin stays as up-to-date as you can without tearing your hair out (and that's good because GingerBill has none of that to spare).
There might be a few pathological code paths in the core libraries or whatever for certain things that aren't what they should be, but in terms of the raw language you're in the land of C as much as with any of these languages; Odin really doesn't do much on top of C, and what it's doing is identifiable and can be opted out of; if you find that a function in a hot loop is marginally slower than it ought to be, you can make it contextless, for example, and see whether that makes a difference.
We haven't found (in a product where performance is explicitly a feature, also containing a custom 3D engine on top of that) that the context being passed automatically in Odin is of much concern performance-wise.
Out of the languages mentioned Rust is the one I've seen in benchmarks be routinely marginally slower, but it's not by a meaningful amount.
I don't think Zig--which certainly is innovative in a number of ways--benefits from this sort of thing. Up front is a claim that it's "totally new way to write programs", but zero support is offered, and almost nothing else "meta" said about the language, other than a couple of sentences in the conclusion that are likewise inaccurate hype. I've programmed in many languages including Zig and it definitely is not a new way of programming. It imposes disciplines that are different from those of other languages, but the same is true of other languages.
The final paragraph says "This is all quite surprising" -- why so?
"and let one think that many advantages previously found only in interpreted languages are gradually migrating to compiled languages in order to offer more performance" -- sure, but Zig is hardly the first ... D and Nim both have interpreters built into the compiler that allow extensive comptime computation--both of those languages have far more metalanguage facilities than Zig, in addition to many other language features that Zig lacks--which is not necessarily a fault, as it aims for a certain kind of simplicity and close-to-the-metal performance ... although both D and Nim are highly performant (both have optional garbage collection, though Nim is more advanced in making GC-free programming approachable). One thing you can say about Zig though--it compiles like a bat out of hell.
P.S. Another thing about Zig worth mentioning that came up in some comments is cross compilation. I don't think people understand how Zig is different and what an engineering feat it is (Andrew has a writeup somewhere of how it's done--it's shocking):
If you install Zig, you can now generate executables for virtually any target with just a command line argument specifying the target, regardless of what machine you installed it on. Nothing else does that--cross compilation generally requires recompiling the compiler and library to target a different architecture. Zig comes with precompiled libraries for a huge number of targets.
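Concretely, it looks like this from any host (assuming a recent Zig toolchain; the target triples are just examples):

```
zig build-exe hello.zig -target aarch64-linux-musl
zig build-exe hello.zig -target x86_64-windows-gnu

# the bundled C compiler cross-compiles the same way
zig cc -target riscv64-linux-musl -o hello hello.c
```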
I noticed a comment where someone said they love Zig but they've never programmed in it--they use it to cross-compile their Nim programs. (The Nim compiler has a C code backend, and Zig has a C compiler built in, so Nim inherits instant arbitrary cross-compilation to any target via Zig).
It's incredibly silly, but I dislike Zig's identifier policy. Mixing snake case and camel case for functions is cursed.
That said, amazing effort, progress and results from the ecosystem.
Bursting on the scene with amazing compilation dx, good allocator (and now io) hygiene/explicitness, and a great build system (though somewhat difficult to ramp on). I’m pretty committed to Rust but I am basically permanently zig curious at this point.
[EDIT] “hate” > “dislike”. Hate is a strong word and surely I just need to spend some time writing zig and I’d get used to it.
I love systems programming languages and have worked on the Ada language for a long time. I find Zig to be incredibly underwhelming. Absolutely nothing about it is new or novel, the closest being comptime, which is not actually new.
Also highly subjective but the syntax hurts my eyes.
So I'm kind of interested in an answer to the question this article fails to answer: why do you guys find Zig so cool?
It's hard to do something that is truly novel these days. Though I'd argue that Zig's upcoming approach to async IO is indeed novel on its own. I haven't seen anything like it in an imperative language.
What's important is the integration of various ideas, and the nuances of their implementation. Walter Bright brings up D's comptime in every Zig post. I've used D. Yet I find Zig's comptime to be more useful and innovative in its implementation details. It's conceptually simpler yet - to me - better.
You mention Ada. I've only dabbled with it, so correct me if I'm wrong, but it doesn't have anything as powerful as Zig's comptime? I think people get excited about not just the ideas themselves, but the combination and integration of the ideas.
In the end I think it’s also subjective. A lot of people like the syntax and combination of features that Zig provides. I can’t point to one singular thing that makes me excited about Zig
As someone who still thinks one should write C (so, as a completely uncool person), what I like about Zig is that it is a no-nonsense language that just makes everything work as it is supposed to, without unnecessary complications.
D is similar, except that it fell into the trap of adding too many features.
So, no, I do not really see anything fundamentally new either. But to me this is the appealing part. Syntax is ok (at least compared to Rust or C++).
Having said this, I am still skeptical about comptime for various reasons.
One of the things I like about Zig is that it pretty explicitly recognizes all the weird edge cases that exist in low-level systems code. A rather large cross-section of languages kind of pretend these cases don’t exist because addressing it would violate the aesthetic they are trying to achieve with the language. Nonetheless, these are real cases because low-level hardware and system behavior doesn’t care about aesthetics as might be expressed in a programming language.
Even C++ didn’t fully repent from this sin until around C++17. I appreciate the non-begrudging acceptance of this reality in Zig.
I would highlight `std::launder` as an example. It was added in C++17. Famously, most people have no idea what it is used for or why it exists. For low-level systems it was a godsend because there wasn’t an official way to express the intent, though compilers left backdoors open because some things require it.
It generates no code, it is a compiler barrier related to constant folding and lifetime analysis that is particularly useful when operating on objects in DMA memory. As far as a compiler is concerned DMA doesn’t exist, it is a Deus Ex Machina. This is an annotation to the compiler that everything it thinks it understands about the contents and lifetime of a bit of memory is now voided and it has to start over. This case is endemic in high-end database engines.
It should be noted that `std::launder` only works for different instances of the same type. If you want to dynamically re-type memory there is a different set of APIs for informing the compiler that DMA dropped a completely different type in the same memory address.
All of this is compiled down to nothing. It annotates for the compiler things it can’t understand just by inspecting the code.
I don't think that's quite right. For DMA you would normally use an empty asm block, which is what's typically referred to as a "compiler barrier" and does tell the compiler to discard everything it knows about the contents of some memory. But std::launder doesn't have the same effect. It only affects type-based optimizations, mainly aliasing, plus the assumption that an object's const fields and vtable can't change.
GCC generates a store followed by a load from the same location, because of the asm block (compiler barrier) in between. But if you change `if (1)` to `if (0)`, making it use `std::launder` instead of an asm block, GCC doesn't generate a load. GCC still assumes that the value read back from the pointer must be 42, despite the use of `std::launder`.
This doesn't seem quite right. The asm block case is equivalent to adding a volatile qualifier to the pointer. If you add this qualifier then `std::launder` produces the same codegen.
I think the subtle semantic distinction is that `volatile` is a current property of the type whereas `std::launder` only indicates that it was a former property not visible in the current scope. Within the scope of that trivial function in which the pointer is not volatile, the behavior of `std::launder` is what I'd expect. The practical effect is to limit value propagation of types marked `const` in that memory. Or at least this is my understanding.
DMA memory (and a type residing therein) is often only operationally volatile within narrow, controlled windows of time. The rest of the time you really don't want that volatile qualifier to follow those types around the code.
One thing that I've found really useful is being able to annotate a pointer's alignment. I'm working on an interpreter, and I'm using tagged pointers (6 bits), so the data structure needs to have 128-byte alignment. I can define a function like `fn toInt(ptr: *align(128) LongString) u56` and the compiler will track and enforce the alignment.
You might also find some of the builtin functions interesting as well[1], they have a lot of really useful functions that in other languages are only accessible via the blessed stdlib, such as @addrSpaceCast, @atomicLoad, @branchHint, @fieldParentPtr, @frameAddress, @prefetch, @returnAddress(), and more.
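A rough sketch of how the alignment annotation enables the tagging (hypothetical types; the real interpreter will differ):

```zig
const std = @import("std");

const LongString = struct { len: usize, bytes: [120]u8 };

// 128-byte alignment guarantees the low 7 bits of the address are zero,
// leaving room for a small tag.
fn withTag(ptr: *align(128) LongString, tag: u6) usize {
    return @intFromPtr(ptr) | tag;
}

fn fromTagged(v: usize) *align(128) LongString {
    return @ptrFromInt(v & ~@as(usize, 127));
}

test "tagged pointer round trip" {
    var s: LongString align(128) = .{ .len = 0, .bytes = undefined };
    const tagged = withTag(&s, 42);
    try std.testing.expect(fromTagged(tagged) == &s);
    try std.testing.expect((tagged & 127) == 42);
}
```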
Why are people so obsessed with Zig when Odin has been stable (though not yet with an official spec) for such a long time and used in real production for years? Is it just syntax preference, or does Zig provide something amazing that I am missing? Not that I use either of them; I am not interested in manual memory management and I stick to Go. But I'm curious.
Zig has a lot of manpower behind it in comparison to Odin, and this is one of the most important things for people: they see a proverbial crowd, and that builds a lot more interest.
With that said, here are a couple of things you have in Zig that you don't get in Odin:
- Cross-compilation & cross-linking (more or less works): Odin doesn't do cross-linking.
- Comptime; you can actually use it to effectively get functors from ML, which means passing interfaces in to modules and getting compile-time-generated modules (structs, in this case) back - see the sketch after this list
- Error set inference; Zig can figure out the complete set of errors a code path can return and make sure you handle them, or bubble that exact set (plus your own errors) up. This comes with the caveat that Zig has no capability to attach actual data to the errors, so you have to side-channel that info if you have it. Odin doesn't do error inference apart from the type checking side of it, but does allow using tagged unions as errors, which is great. They still interact exactly as they ought to with the zero-value-as-no-error machinery.
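Here's a minimal sketch of the functor-style use of comptime (all names made up): a "module" is just a struct of declarations, and a comptime function maps one module to another.

    const std = @import("std");

    // Takes any "module" M that declares an Elem type and a `less`
    // function, and returns a new module specialized for it.
    fn Sorter(comptime M: type) type {
        return struct {
            pub fn sort(items: []M.Elem) void {
                std.mem.sort(M.Elem, items, {}, lessThan);
            }
            fn lessThan(_: void, a: M.Elem, b: M.Elem) bool {
                return M.less(a, b);
            }
        };
    }

    const IntModule = struct {
        pub const Elem = i64;
        pub fn less(a: i64, b: i64) bool {
            return a < b;
        }
    };

    test "functor-style module generation" {
        var xs = [_]i64{ 3, 1, 2 };
        Sorter(IntModule).sort(&xs);
        try std.testing.expectEqualSlices(i64, &.{ 1, 2, 3 }, &xs);
    }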
I didn't use comptime much when I used Zig, and I like tagged unions as errors much more than I value being able to cross-link, so I decided that Odin was better for me. Defaulting to zero-values and the zero-value being blessed in terms of language features didn't feel right to me before I started using it, but now I can't really imagine going back to not being able to assume the zero-value is there.
Thanks for the info. I'm curious to see what people will do when Jai is finally released next year. So far, Rust has been gaining a lot of traction, although with a lot of controversies attached to it. Zig seems to be doing well, but the lack of progress towards v1.0 after all those years is quite concerning, making it look more and more like a toy project rather than something serious. Odin seems to be flying under the radar of most people a bit too much. Jai will have John's name behind it and I am hearing a lot of praise from insiders (people in the beta program). As I said, I have no use for such languages, but if I do in the future, I'd like to have a clear choice rather than a myriad of languages in various stages of development, all trying to do the same thing.
If Jai is ever actually released to a meaningful amount of people I think we'll see just how little Blow's name means to people in practice. There is an artificial mystery around Jai right now and when the lid comes off the pot I think a lot of that is going to dissipate very fast.
With that said, I'll try it out. I'm not really impressed by what I've seen so far, though, it's very middle-of-the-pack with some really nonsense ideas. The possibility of easily creating your own checks with the compile-time machinery is potentially interesting but would probably turn into a nothingburger for us.
I think that's where most of this is at: After so many years of "waiting" (I think most people stopped actually waiting after a few years of mostly talking and very little actual productive doing) we'll end up with a very meh language that was touted as super special... And a painfully simple sokoban game that people are going to pretend is somehow super complex and hard to make.
Are you implying that Zig hasn't been used in production? What about TigerBeetle, Bun and Ghostty? I'm using Ghostty as my terminal right now.
I feel like Zig is aiming a lot higher. So that’s why it’s taking longer and also why people are more obsessed with it. The work on doing their own backend and incremental linker is impressive and interesting. So is their attempt at getting IO and async right.
> Zig: `for (0..9) |i| { }`
> C: `for (i = 0; i < 9; i++) { }`
I know a half-open interval [0, 9) makes sense in many cases, but it's counterintuitive and I often forget whether it includes the last value or not. It's the same for Python's range(0, 9).
Rust's solution to this is quite good: 0..9 is exclusive, and if you want to include 9 it's 0..=9. It looks a bit funny, but knowing the variant with an = sign exists removes any doubt.
I'm actually curious now how this is stored on `Range` in Rust. I've certainly used ..= for exactly the reason you say, but as far as I'm aware `.end` on the range is the exclusive upper bound in all cases. What happens to `.end` in the overflowing case?
Edit: it doesn't use Range for ..=, but rather RangeInclusive, which works fine.
The better solution to forgetting whether an interval is closed or half-open is to always use only half-open intervals, without any exceptions.
In most cases half-open intervals result in the simplest program, so I agree with the choice of Zig, which is inherited from other languages well-designed from this point of view, e.g. Icon.
I find half-open intervals more intuitive than either closed intervals or open intervals, and much less prone to errors, for various reasons, e.g. the size of a half-open interval is equal to the difference between its limits, unlike for closed intervals or open intervals. Also when accessing the points in the interval backwards or circularly, there are simplifications in comparison with closed intervals.
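A small Zig sketch of why the arithmetic works out (hypothetical values):

    const std = @import("std");

    test "half-open intervals compose without off-by-one" {
        const items = [_]u8{ 10, 20, 30, 40, 50 };

        // the size of [lo, hi) is simply hi - lo
        try std.testing.expectEqual(@as(usize, 3), items[1..4].len);

        // adjacent half-open ranges tile a slice exactly: [0,2) then [2,5)
        try std.testing.expectEqual(items.len, items[0..2].len + items[2..5].len);

        // for-loop ranges follow the same convention: 0..5 visits 0,1,2,3,4
        var count: usize = 0;
        for (0..items.len) |_| count += 1;
        try std.testing.expectEqual(items.len, count);
    }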
By "..._MAX" I assume that you mean the maximum value of a given integer type.
In a language where half-open intervals are supported consistently in all the places, this would be solved trivially, e.g. for a signed byte the _MIN and the _MAX values would be defined as -128 and +128, more intuitively than when using closed intervals, where you must remember to subtract 1 from the negated minimum value.
Even the C language has some support for half-open intervals, because the index pointing after the last element of an array is a valid index value, not an out-of-range value (though obviously, attempting to access the array through that index value would be trapped as an out-of-range access, if that is enabled).
Applied consistently, the same method would ensure that the value immediately above the last representable value of an integer type is valid in ranges of that type, even if it would be invalid in an expression as an operand of that type.
The article doesn't answer the question; it's all just "the basics of Zig" (there is nothing cool about manually editing environment variables on Windows in 8 labeled steps, with 5 preliminary steps missing)
and the actual cool stuff is missing:
> with its concept of compile time execution, unfortunately not stressed enough in this article.
Zig being able to (cross-)compile C and C++ feels very similar to how uv functions as a drop-in replacement for pip/pip-tools. Seems like a fantastic way to gain traction in already established projects.
Some days ago I decided to look at Zig a bit more in detail. So I skipped the usual marketing and I checked why it could be an alternative to C or Rust. Could it be?
It seems that (debug) allocators are a nice idea; however, they seem to exist "somehow" already for C, so I wonder: why would you pick this language for your next low-level program? They provide runtime checks, so you need thorough testing before you can spot use-after-free and the like. It's very similar to the existing situation with C/C++ and the sanitizers, although they work a bit differently.
So the question I have for hardcore low-level programmers: why don't they invest more in memory allocators like hardened_malloc[0] instead of starting a new programming language? It would probably be less expensive in terms of time and would help fix existing software.
> So the question I have for hardcore low level programmers: why don't they invest more on the memory allocators
A partial answer is that some low-level programmers avoid memory allocation and threads like the plague. In some cases they are not even an option (small embedded programming is nearly as low-level as you can get before going hardcore for real with assembly programming). Even when they are, the keywords are efficiency, reliability, predictability, and simplicity: you can statically allocate everything in advance because the product typically has max specs written on the box (e.g. a max number of entries in a phone book, to take a generic dumb example), and you have to meet those requirements even if the customer uses every capability to the max. No memory overbooking is allowed, which is basically what dynamic allocation is, in a sense.
> instead of starting a new programming language
If I were to start a new low-level programming language, I would basically just fix C's weak typing problem, fix the UB problems that only come from issues with long-gone processors (like C++20 finally did by mandating two's-complement sign encoding), "backport" some C++ features (templates? constexpr?), add a pinch of syntactic sugar, and fix union types to have proper sum types. But probably I've just described D and apparently a significant chunk of C23.
> I can’t think of any other language in my 45 years long career that surprised more than Zig.
I can say the same (although my career spans only 30 years), or, more accurately, that it's one of the few languages that surprised me most.
Coming to it from a language design perspective, what surprised me is just how far partial evaluation can be taken. While strictly weaker than AST macros in expressive power (macros are "referentially opaque" and therefore more powerful than a referentially transparent partial evaluation - e.g. partial evaluation has no access to an argument's name), it turns out that it's powerful enough to replace not only most "reasonable" uses of macros, but also generics and interfaces. What gives Zig's partial evaluation (comptime) this power is its access to reflection.
Even when combined with reflection, partial evaluation is more pleasurable to work with than macros. In fact, to understand the program's semantics, partial evaluation can be ignored altogether (as it doesn't affect the meaning of computations). I.e. the semantics of a Zig program are the same as if it were interpreted by some language Zig' that is able to run all of Zig's partial-evaluation code (comptime) at runtime rather than at compile time.
Since it also removes the need for other specialised features (generics, interfaces) - even at the cost of an aesthetic that may not appeal to fans of those specialised features - it ends up creating a very expressive, yet surprisingly simple and easy-to-understand language (Lisps are also simple and expressive, but the use of macros makes understanding a Lisp program less easy).
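A tiny illustration of comptime reflection doing interface-like work (a sketch; `dump` is a made-up name):

    const std = @import("std");

    // Reflection at comptime: works for any struct type, no trait or
    // interface declaration needed, and it's plain Zig code.
    fn dump(value: anytype) void {
        const T = @TypeOf(value);
        inline for (std.meta.fields(T)) |f| {
            std.debug.print("{s} = {any}\n", .{ f.name, @field(value, f.name) });
        }
    }

    test "reflective dump" {
        const Point = struct { x: i32, y: i32 };
        dump(Point{ .x = 3, .y = 4 });
    }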
Being simple and easy to understand makes code reviews easier, which may have a positive impact on correctness. The simplicity can also reduce compilation time, which may also have a positive impact on correctness.
Zig's insistence on explicitness - no overloading, no hidden control flow - which also assists reviews, may not be appropriate for a high-level language, but it's a great fit for an unabashedly low-level language, where being able to see every operation as explicit code "on the page" is important. While its designer may or may not admit this, I think Zig abandons C++'s belief that programs of all sizes and kinds will be written in the same language (hence its "zero-cost abstractions", made to give the illusion of a high-level language without its actual high-level abstraction). Developers writing low-level code lose the explicitness they need for review, while those writing high-level programs don't actually gain the level of abstraction they need for smooth program evolution. That belief may have been reasonable in the eighties, but I think it has since been convincingly disproved.
Some Zig decisions surprised me in a way that made me go more "huh" than "wow", such as it having little encapsulation to speak of. In a high-level language I wouldn't have that (after years of experience with Java's wide ecosystem of libraries, we learned that we need even more and stronger encapsulation than we originally had to keep compatibility while evolving code). But perhaps this is the right choice for a low-level language where programs are expected to be smaller and with fewer dependencies (certainly shallower dependency graphs). I'm curious to see how this pans out.
Zig's terrific support for arenas also makes one of the most powerful low-level memory management techniques (that, like a tracing garbage collector, gives the developer a knob to trade off RAM usage for CPU) very accessible.
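For example - a sketch using std.heap.ArenaAllocator, with made-up allocation sites:

    const std = @import("std");

    test "arena: one bulk free for many allocations" {
        var arena = std.heap.ArenaAllocator.init(std.testing.allocator);
        defer arena.deinit(); // releases everything allocated below at once

        const a = arena.allocator();
        const xs = try a.alloc(u32, 1000);
        const name = try a.dupe(u8, "scratch");
        xs[0] = 42;
        _ = name;
        // no per-object frees: the arena trades peak RAM for less bookkeeping
    }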
I have no idea or prediction on whether Zig will become popular, but it's certainly fascinating. And, being so remarkably easy to learn (especially if you're familiar with low-level programming), it costs little effort to give it a try.
Well put. The majority of language development for the last 20 years has proceeded by adding more features into languages, as they all borrow keywords and execution semantics from each other. It's like a neighborhood version of corporate bureaucracies, where each looks across the street, and decides "they've got a department we don't have, we better add one of those".
I like languages that dare to try to do more with less. Zig's comptime, especially the way it supplants generics, is pretty darn awesome.
I was having a similar feeling with Elixir the other day, when I realized that I could build every single standard IPC mechanism that you might find in something like Python's threading module (Queue, Lock, RLock, Condition, Barrier, etc.) with the Erlang/BEAM process mailbox.
This is the real answer (amongst other goodness): comptime is well executed and differentiated.
Every language at scale needs a preprocessor (look at the "use server" and "use gpu" silliness happening in TS) - why should it not be the same language as the one you already use?
Great comment! I agree about comptime, as a Rust programmer I consider it one of the areas where Zig is clearly better than Rust with its two macro systems and the declarative generics language. It's probably the biggest "killer feature" of the language.
> as a Rust programmer I consider it one of the areas where Zig is clearly better than Rust with its two macro systems and the declarative generics language
IMHO "clearly better" might be a matter of perspective; my impression is that this is one of those things where the different approaches buy you different tradeoffs. For example, by my understanding Rust's generics allows generic functions to be completely typechecked in isolation at the definition site, whereas Zig's comptime is more like C++ templates in that type checking can only be completed upon instantiation. I believe the capabilities of Rust's macros aren't quite the same as those for Zig's comptime - Rust's macros operate on syntax, so they can pull off transformations (e.g., #[derive], completely different syntax, etc.) that Zig's comptime can't (though that's not to say that Zig doesn't have its own solutions).
Of course, different people can and will disagree on which tradeoff is more worth it. There's certainly appeal on both sides here.
I look forward to a future high-level language that uses something like comptime for metaprogramming/interfaces/etc, is strongly typed, but lets you write scripts as easily as python or javascript.
Try out Nim: it has powerful comptime/metaprogramming, is statically typed, has automatic memory management, and is as easy to program as Python or JavaScript while still allowing low-level work.
For me it'd be hard to go back to languages that don't have all that. Only Swift comes close.
D comes close ... it too has a full-language comptime interpreter and other metaprogramming features (though not as rich as Nim's), is statically typed, has optional garbage collection, and you can write
#!/usr/bin/env rdmd
[D code]
and run it as if it were an executable. (The compilation is cached so it runs just as fast on subsequent runs.)
Thing is, having a good JIT gives you the performance of partial evaluation pretty much automatically (at the cost of less predictability), as compilation occurs at runtime, so the distinction between compile-time and runtime largely disappears. E.g., in Java, a reflective call will eventually be compiled by the JIT into a direct call; virtual dispatch will also be compiled into direct dispatch or even inlined (when appropriate) etc..
There's at least one way in which Zig is better than Rust: the Zig compiler for Windows can be downloaded, unzipped, and used without admin rights. Rust needs MSVC, which cannot be installed without admin rights. It is said that Rust on Windows can use Cygwin, but I could not make it work even with AI help.
cygwin is a POSIX-emulating library intended for porting POSIX-only programs to Windows.
That is: when compiling for cygwin, you'd use the cygwin POSIX APIs instead of the Windows APIs. So anything compiled with cygwin won't be a normal Windows program.
There's no reason to use cygwin with Rust, since Rust has native Windows support. The only reason to use x86_64-pc-cygwin is if you would need your program to use a C library that is not available for Windows, but is available for cygwin.
If you don't want to/can't use the MSVC linker, the usual alternative is Rust's `x86_64-pc-windows-gnu` toolchain.
> Probably the most incredible virtue of Zig compiler is its ability to compile C code. This associated with the ability to cross-compile code to be run in another architecture, different than the machine where it is was originally compiled, is already something quite different and unique.
Isn't cross compilation very, very ordinary? Inline C is cool, like C has inline ASM (for the target arch). But cross-compiling? If you built a phone app on your computer you did that as a matter of course, and there are many other common use cases.
If you install Zig, you can now generate executables for virtually any target with just a CLI argument specifying the target, regardless of what machine you installed it on. Nothing else does that--cross compilation generally requires compiling the compiler to target a different architecture.
Yes, very rare and there is a strong cartel of companies ensuring it doesn't happen in more mainstream langs through multiple avenues to protect their interests!
From helicoptering folks onto steering committees to indoctrinating young CS majors.
If I had the ability to downvote a comment yet, I'd downvote you. If you're going to spout conspiracy-theory-sounding stuff, at least provide some evidence for your claims!
It doesn't sound like a conspiracy theory; you just have an incredibly poorly calibrated sense of judgement as to the tone of a statement.
Not uncommon in this space though, especially as you get closer to the metal (close as cross-compilation is relative to something like React frontends, at least)
I like the idea of the `defer` keyword - you can have automatic cleanup at the end of the scope, but you have to make it obvious you are doing so; no hidden execution of anything (unlike C++ destructors).
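A minimal sketch of the pattern (hypothetical function; file APIs per recent pre-1.0 Zig, so details may churn):

    const std = @import("std");

    fn readConfig(allocator: std.mem.Allocator, path: []const u8) ![]u8 {
        const file = try std.fs.cwd().openFile(path, .{});
        defer file.close(); // always runs when this scope exits

        const buf = try allocator.alloc(u8, 4096);
        errdefer allocator.free(buf); // runs only if a later error occurs

        const n = try file.readAll(buf);
        return buf[0..n];
    }

Both cleanups are visible at the acquisition site, yet still run automatically on every exit path.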
P.S. In D it's `scope(exit)` = defer, `scope(failure)` = Zig's errdefer, and `scope(success)` -- which no one else has and which I have made good use of. e.g., I have a mixin that traces entry and exit from a function, the latter with scope(success). If I use scope(exit) instead then when an exception is thrown all the leave messages are printed and then the stack trace, rather than seeing the stack trace at the point of failure (this baffled me when it first happened).
I would like a language to support both defer and C++ style destructors / Rust Drop. There are good use-cases for having both. For things like a mutex or straight-forward resource cleanup - having a bunch of brain-dead defer statements adds little value and only bloats unnecessary line count. Let the resource type handle its own release/cleanup at scope close. Code is made sweet, succinct and safe.
In Rust, there's a drop guard pattern to do this, which leverages the lazy execution of closures; check out the scopeguard crate. C++ should be able to do that easily too, I think.
Zig is not cool. It's a mediocre new language, missing key features needed for industrial development, like destructors or overall memory safety. But for some reason it's overhyped.
If you think destructors/`Drop` traits or the like are good then Zig was never for you. It has nothing to do with "industrial development", neither does memory safety. The irony is that memory safety as a concept definitely is overhyped.
Destructors aren't just good. They are one of the most important innovations in programming, since they reduce boilerplate and prevent many bugs. Developing a language without them means introducing more bugs which could be avoided.
Zig structs are "modules" in themselves: apart from C-like struct fields, they can have constants, nested structs, and functions declared and used inside them.
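A small sketch (all names hypothetical):

    const std = @import("std");

    const Temperature = struct {
        celsius: f64, // ordinary field, as in a C struct

        pub const absolute_zero = Temperature{ .celsius = -273.15 }; // constant

        const Scale = enum { celsius, fahrenheit }; // nested type

        pub fn toFahrenheit(self: Temperature) f64 { // function
            return self.celsius * 9.0 / 5.0 + 32.0;
        }
    };

    test "struct as module" {
        const t = Temperature{ .celsius = 100 };
        try std.testing.expectEqual(@as(f64, 212), t.toFahrenheit());
    }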
The biggest advantages of Zig for me are that everything is explicit (no hidden features like overloads or implicit conversions) and that its metaprogramming is powerful, easy to use, and easy to understand.
Unfortunately the article glosses too quickly over aspects which seem unique. For instance, I don't get what labeled breaks have to do with comp-time or why a labeled break was used in this situation over a normal function call:
>>>
Labeled breaks
Zig can do many things in compilation time. Let’s initialize an array, for example. Here, a labelled break is used. The block is labelled with an : after its name init and then a value is returned from the block with break.
>>>
The article hasn't even talked about how the language decides what an opening curly brace introduces.
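For reference, the pattern the quoted passage describes looks roughly like this (a sketch; squares and init are made-up names). Unlike a function call, the labeled block can read locals already in scope and keeps the initialization right next to the declaration; at container scope the block runs at compile time:

    const squares = init: {
        var t: [10]u32 = undefined;
        for (&t, 0..) |*slot, i| slot.* = @intCast(i * i);
        break :init t; // the block yields this value
    };

    test "comptime-initialized table" {
        const std = @import("std");
        try std.testing.expectEqual(@as(u32, 81), squares[9]);
    }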
I've heard good things about Zig. I want to pick it up and experiment with it but at ~2% market share I find it hard to justify spending the time to learn and master it right now. It's usually much easier to find the time to learn a new language if there is a project (work or open source) that is also using it.
It can be useful sometimes to learn things irrespective of what the rest of the world thinks of them.
My personal experience was (back in 2019) that Zig was basically a language you could learn in a weekend and end up being reasonably productive after a week. With that in mind, you might find that you can try it out and either find something that you really like in it and continue, or simply drop it (I ended up picking Odin over Zig, for example, and have found it delightful even 1+ years into production).
The truth is that if you only ever learn what is already popular you'll end up being the professional equivalent of a gray mass with zero definition and unique value proposition.
Is dvui something you want to see? Although the backends are still C based, the core part of the GUI seems to be written fully in Zig rather than being a binding to a C library.
Zig defaults to statically linking musl when targeting Linux, so the output will not be very interesting unless you target dynamic musl, or glibc, or FreeBSD/NetBSD.
Is the inline testing good in practice? I do like the clear proximity and scope of the code being tested but I can also imagine trying to cram in all the unit tests and mocking and logging and such.
Does the feature end up feeling unused, dominating app code with test code, or do people end up finding a happy medium?
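For reference, a minimal sketch of what the inline style looks like (hypothetical function). Test blocks are only compiled by `zig test`, so they add nothing to a normal build, which is part of why the proximity stays cheap; bigger suites tend to move into separate test files that the build script references:

    const std = @import("std");

    pub fn clamp(x: i32, lo: i32, hi: i32) i32 {
        return @min(hi, @max(lo, x));
    }

    // lives next to the code it tests, compiled only under `zig test`
    test "clamp stays within bounds" {
        try std.testing.expectEqual(@as(i32, 5), clamp(9, 0, 5));
        try std.testing.expectEqual(@as(i32, 0), clamp(-3, 0, 5));
    }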
Zig doesn't seem like a bad language, but I also don't see anything to make it hands down better than Rust for systems programming. So it kind of fails my "why yet another language?" test. I don't think another language can be justified by marginal improvements.
Rust passes that test because it's categorically better than C and C++ in several ways: much better type system, safety, better modules and code reuse, etc. It's complex, but as far as I can tell most of its complexity is required to offer its level of safety guarantees in a pure systems language without a garbage collector or any kind of true dynamic typing. To make a safe systems language you need to have a very rich type system that can prove safety across a wide array of situations. Either that or you'd have to go to the other far end of the simplicity-complexity spectrum and have a language with virtually no features, which would result in very verbose code and probably a lot of boilerplate.
Zig's coolest feature to me seems like "comptime" and the lack of a weird macro side-language, which is one of Rust's anti-features that feels bolted on. Don't make me learn yet another language. Of course sophisticated macros in Rust are usually instead written in Rust itself via procedural macros, but that is more unwieldy than "comptime."
Still not enough to justify a whole new language and ecosystem though. Again: don't make me learn yet another language unless there's a big payoff.
It's more of an in-between C and Rust than Go as it is a systems language with no built-in garbage collector for memory management. It has a lot of memory safety features, but it's not as memory safe as Rust. However, it avoids a lot of the complexity of Rust like implicit macro expansion, managing lifetimes, generics and complex trait system, etc. It also compiles much more compactly than Rust, in my experience.
In my mind, it's an accessible systems language. Very readable. Minimal footprint.
If you are not using a GC language, you WILL be managing lifetimes. Rust just makes them explicit when the compiler can't prove safety, whereas Zig and C don't really care.
TigerBeetle has a clear purpose and needed those brilliant optimizations. Do you think Zig is suitable as, say, a Go replacement for prod network services and such?
Aside from the fact that Zig is still a bit immature in its std library and ecosystem, I mean. Is it a suitable systems language going forward?
Well, it's insanely simple, insanely fast, often more performant than Rust with lower resource usage, with first-class C interop and cross-compiling out of the box. It's easily my favorite language now, with Go being a close second. Both are opinionated and have a standard formatter that makes Zig code instantly readable when you see it, similar to Go. Rust was once interesting, but it's firmly in macro-hell territory now, just like Swift, with concealed execution paths aplenty, and neither cross-compiles out of the box.
>often more performant than Rust with lower resource usage
[citation needed]
If we are to trust this page [0], Rust beats Zig on most benchmarks. In the TechEmpower benchmarks [1], Rust submissions dominate the top, while Zig is... quite far down.
Several posts which I've seen in the past about Zig beating Rust by 3x or so all turned out to be based on low-quality Rust code with performance pitfalls, like measuring the performance of writing to stdout (which Rust locks by default and Zig does not) or iterating over ..= ranges, which are known to be problematic from a performance perspective.
I would say in most submission-based benchmarks among languages that should perform similar, this mostly reflects the size and enthusiasm of the community.
For a language that’s so low level and performance focused, I’m surprised that it has those extra io and allocator arguments to functions. Isn’t that creating code bloat and runtime overhead?
Given that Zig has functions which can return functions, maybe you could capture the top level io and allocator and return a struct with a bunch of functions that now have the top scope io and allocator visible.
Don’t know.
That’s how people usually get rid of repeat arguments (or OOP constructor).
The answer I've seen when this has been brought up before is that (for allocators) there is no practical impact on performance -- allocating takes way more time than the virtual dispatch does, so it ends up being negligible. As for code bloat, I'm not sure what you mean exactly; the allocator interface is implemented via a vtable, and the impact on binary size is pretty minimal. You're also not really creating more than a couple of allocators in an application (typically a general-purpose allocator, and maybe an arena allocator that wraps it in specific scenarios).
As for Io, which is new and which I have not actually used yet, here are some relevant paragraphs:
The new Io interface is non-generic and uses a vtable for dispatching function calls to a concrete implementation. This has the upside of reducing code bloat, but virtual calls do have a performance penalty at runtime. In release builds the optimizer can de-virtualize function calls but it’s not guaranteed.
...
A side effect of proposal #23367, which is needed for determining upper bound stack size, is guaranteed de-virtualization when there is only one Io implementation being used (also in debug builds!).
He's talking about passing pointers to the allocator and Io objects as parameters throughout the program, not about how the allocator's vtable of virtual functions is implemented. But context pointers are a requirement in any program. Consider that a context pointer (`this`) is passed to every single method call ... it's no more "code bloat" than having to save and restore registers on every call.
Every class method in other languages receives a hidden argument. Odin passes a hidden context argument that contains the allocator. The alternative is global variables--which you can also use in Zig if you're so inclined. The extra arguments aren't something the Zig language imposes, it's a convention.
Regarding runtime overhead, I'd assume you would still need an io implementation; it is just shown to you explicitly instead of being hidden behind the std lib.
For simple projects where you don't want to pass it around in function parameters, you can create a global object with one implementation and use it from everywhere.
You still have to pass arguments to library functions that need to allocate or do I/O ... but the alternative is worse. This is really a bogus issue ... no one is crying over having to pass a `this` pointer to every single call of a method in other languages. Context pointers are a requirement in any sizeable or multi-threaded program, and Zig gives the user full control over what the context object looks like.
Yeah, the thing is, it's usually better to have the allocator in particular defined as a parameter so that you can use the testing allocator in your tests to detect memory leaks, double frees, etc. Then you use more optimal allocators for release mode.
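A minimal sketch of that workflow (hypothetical function): std.testing.allocator fails the test if anything leaks.

    const std = @import("std");

    fn makeGreeting(allocator: std.mem.Allocator, name: []const u8) ![]u8 {
        return std.fmt.allocPrint(allocator, "hello, {s}", .{name});
    }

    test "testing allocator reports leaks" {
        const a = std.testing.allocator;
        const msg = try makeGreeting(a, "zig");
        defer a.free(msg); // delete this line and `zig test` reports the leak
        try std.testing.expectEqualStrings("hello, zig", msg);
    }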
A lot of comments here kind of miss the point, but that's to be expected because you can only really get it when you have the experience. Like hearing a description of a painting will not give you the same emotion as looking at it yourself.
Zig has completely changed the way I program (even outside of it). A lot of the goals and heuristics I used to have while writing code have completely changed. It's like seeing programming itself in a new way.
> I can’t think of any other language in my 45 years long career that surprised more than Zig. I can easily say that Zig is not only a new programming language, but it’s a totally new way to write programs, in my opinion. To say it’s merely a language to replace C or C++, it’s a huge understatement.
I don't understand how the things presented in this article are surprising. Zig has several nice features shared by many modern programming languages?
> One may wonder how the compiler discovers the variable type. The type in this case is *inferred* by the initialization.
That the author feels the need to emphasize this means either that they haven't paid attention to modern languages for a very long time, or this article is for people who haven't paid attention to modern languages for a very long time.
Type inference has left academy and proliferated into mainstream languages for so many years that I almost forgot that it's a worth mentioning feature.
> One is Zig’s robustness. In the case of the shift operation no wrong behavior is allowed and the situation is caught at execution time, as has been shown.
Panicking at runtime is better than just silently overflowing, but I don't know if it's the best example to show the 'robustness' of a language...
> Type inference has left academy and proliferated into mainstream languages for so many years that I almost forgot that it's a worth mentioning feature.
I'm not even sure I'd call this type inference (other people definitely do call it type inference) given that it's only working in one direction. Even Java (var) and C23 (auto), the two languages the author calls out, have that. It's much less convenient than something like Hindley-Milner.
And it's not caught in ReleaseFast builds ... which is not at all unique to Zig (although Zig does do many innovative things to catch errors in debug builds).
> Type inference has left academy and proliferated into mainstream languages for so many years that I almost forgot that it's a worth mentioning feature.
It’s not common in lower level languages without garbage collectors or languages focused on compilation speed.
The only popular language I can think of is C (prior to C23). If you want to include Fortran and Ada, that would be three, but these are all very old languages. All modern system languages have type deduction for variable declarations.
I meant for focused on compilation speed to apply only to lower level languages. And when I say lower level I don’t really include D because it has a garbage collector (I know it’s optional but much of the standard library uses it I believe).
That a language has a garbage collector is completely orthogonal to whether it has type inference ... what the heck does it matter what "much of the standard library uses" to this issue? It's pure sophism. Even C now has type inference. The plain fact is that the claim is wrong.
The x axis is orthogonal to the y axis, so I can’t be interested in the area where x < 1 and y = 5?
> what the heck does it matter what "much of the standard library uses" to this issue?
It matters in that most people looking for a low level manually memory managed language won’t likely choose D, so for the purposes of “is this relatively novel among lower level, memory managed languages” D doesn’t fit my criteria.
> Even C now has type inference. The plain fact is that the claim is wrong.
I feel like the article didn't really hit on the big ones: comptime functions, no hidden control flow, elegant defaults, safe buffers, etc.
What Zig really does is make systems programming more accessible. Rust is great, but its guarantees of memory safety come with a learning curve that demands mastering lifetimes, generics, macros, and a complex trait system. Zig is in the same class of programming languages as C, C++, and Rust, and unlike Go, C#, Java, Python, JS, etc., which have built-in garbage collection.
The explicit control flow allows you as a developer to avoid some optimizations done in Rust (or common in 3rd party libraries) that can bloat binary sizes. This means there's no target too small for the language, including embedded systems. It also means it's a good choice if you want to create a system that maximizes performance by, for example, preventing heap allocations altogether.
The built-in C/C++ compiler and language features for interacting with C code easily also ensures that devs have access to a mature ecosystem despite the language being young.
My experience with Zig so far has been pleasurable. The main downside to the language has been the churn between minor versions (language is still pre-1.0 so makes perfect sense, but still). That being said, I like Zig's new approach to explicit async I/O that parallels how the language treats Allocators. It feels like the correct way to do it and allows developers again the flexibility to control how async and concurrency is handled (can choose single-threaded event loop or multi-threaded pool quite easily).
Zig's generics cause bloat just like any other language with generics--explicit flow control has nothing to do with it.
Zig is a good language. So are Rust, D, Nim, and a bunch of others. People tend to think that the ones they know about are better than all the rest because they don't know about the rest and are implicitly or explicitly comparing their language to C.
Zig's generics can potentially cause bloat, but not necessarily, because Zig's generics are explicitly controlled through comptime functions, which give the developer a lot of control over how the generic code is instantiated. They're also generally used less than Rust generics.
Of course both Zig and Rust are good languages. But my experience -- and I believe yours will be too if you compile programs of similar complexity using the standard practices of each language -- is that Zig compiles much more compactly in .ReleaseSmall mode than Rust does even with optimization flags, which in my opinion makes it more suitable for embedded systems. I learned this on my own by implementing the same library in both languages using the standard default practices of each.
Of course, at the desktop runtime level, binary size is frequently irrelevant as a concern. I just feel that since Zig makes writing "magic" code more difficult while Rust encourages things like macros, it is much easier to be mindful of things that do impact binary size (and perhaps performance).
Rust has macros that allow for arbitrary compile-time generated code, just like Zig. Most Rust-compiled programs are a bit bloated because libstd is statically linked and not rebuilt from scratch with a project-specific trimmed feature set, which leads to potentially unwanted code being included for e.g. recoverable panics, backtraces, UTF-8 string handling etc. A set of new RFC's is being worked on that may at some point allow libstd to be rebuilt from scratch within Stable Rust projects, with well-defined, stable, subsetted features.
> Rust has macros that allow for arbitrary compile-time generated code, just like Zig.
This is not true. Zig, D, and Nim all have full-language interpreters built into the compiler; Rust does not. Its macros (like macros generally) manipulate source tokens, they don't do arbitrary compile-time calculations (they live in separate crates that are compiled and then run on source code, which is very different from Zig/D/Nim comptime which is intermixed with the source code and is interpreted). Zig has no macros (Andrew hates them)--you cannot "generate code" in Zig (you can in D and Nim); that's not what comptime does. Zig's comptime allows functions written in Zig to execute at compile time (the same functions can also be used to run at execution time if they only use execution-time types). The Zig trick is that comptime code can not only operate on normal data like ints and structs, but also types, which are first class comptime objects. Comptime code has access to the TypeInfo of types, both to read the attributes of types and to create types with specified attributes, which is how Zig implements generics.
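To make that concrete, a minimal sketch of the type-returning-function style (made-up names; this shows the types-as-comptime-values half, not @typeInfo/@Type reflection):

    const std = @import("std");

    // "Generics" are ordinary comptime code: a function from types to a type.
    fn Pair(comptime A: type, comptime B: type) type {
        return struct {
            first: A,
            second: B,

            pub fn swap(self: @This()) Pair(B, A) {
                return .{ .first = self.second, .second = self.first };
            }
        };
    }

    test "types are comptime values" {
        const p = Pair(u8, bool){ .first = 7, .second = true };
        const q = p.swap();
        try std.testing.expectEqual(@as(u8, 7), q.second);
    }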
> This means there's no target too small for the language, including embedded systems. It also means it's a good choice if you want to create a system that maximizes performance by, for example, preventing heap allocations altogether.
I don't think there is any significant difference here between Zig, C, and Rust for bare-metal code size. I can get the compiler to generate the same tiny machine code in any of these languages.
That's not been my experience with Rust. On average it produces binaries at least 4x bigger than the Zig I've compiled (and yes, I've set all the build optimization flags for binary size). I know it's probably theoretically possible to achieve similar results with Rust; it's just that you have to be much more careful about things like monomorphization of generics, inlining, macro expansion, and implicit memory allocation that happen under the hood. Even Rust's standard library is quite hefty.
C, yes, you can compile C quite small very easily. Zig is like a simpler C, in my mind.
The Rust standard library in its default config should not be used if you care about code size (std is compiled with panic/fmt and backtrace machinery on by default). no_std has no visible deps besides memcpy/memset, and is comparable to bare metal C.
I understand this, but that is a pain that you don't get with Zig. The no_std constraint is painful to deal with as a dev even with no dependencies and also means that if you're working on a target that needs small binaries, that the crates.io ecosystem is largely unavailable to you (necessitating filtering by https://crates.io/categories/no-std and typically further testing for compilation size beyond that).
Zig on the other hand does lazy evaluation and tree shaking so you can include a few features of the std library without a big concern.
Rustc does a good job of removing unused code, especially with LTO. The trick is to make sure the std library main/panic/backtrace logic doesn't call code you don't want to pay for.
IIRC there's also a mutex somewhere in there used to workaround some threading issues in libc, which brings in a bespoke mutex implementation; I can't remember whether that mutex can be easily disabled, but I think there's a way to use the slower libc mutex implementation instead.
Also, std::fmt is notoriously bad for code size, due to all the dyn vtable shenanigans it does. Avoid using it if you can.
Regardless, the only way to fix many of the problems with std is rebuilding it with the annoying features compiled out. Cargo's build-std feature should make this easy to do in stable Rust soon (and it's available in nightly today).
This. Is Zig an interesting language? Yes sure. But “a totally new way to write programs”? No, I don’t see a single feature that is not found in any other programming languages.
Zig is not cool at all, it's ugly as sin, and has zero use case other than mingling with legacy c code, and who in their right mind wants to be doing that
it's a hipster language; absolute insanity to use it when rust exists unless you have that very specific c related slave work to do
Uncalled for and subjective. Certainly plenty of people call Rust's syntax ugly. Discussing syntax and not semantics is a waste of time.
> has zero use case other than mingling with legacy c code
So it has a use case?
> who in their right mind wants to be doing that
Some people have to.
> absolute insanity to use it when rust exists unless you have that very specific c related slave work to do
Some people do.
What's the need for such emotionally charged language in your comment?
I have my own reasons not to use Zig at this moment. I want enforced memory safety and am waiting on 1.0 to see what the language finally looks like. Until stabilization I certainly won't be using it in production. But that doesn't mean the project is meritless, that experimenting with language features before then is wrong, that making a language suitable for specific niches is a bad idea.
I don't see Zig as a replacement for tools that would have been written in Go, Java or C#, and I would rather we had less memory unsafe software out there, but it is a clear step function ahead of C.
Just like I and many others spend a lot of time trying to make Rust the best it can be, their team is doing the same.
"This associated with the ability to cross-compile code to be run in another architecture, different than the machine where it is was originally compiled, is already something quite different and unique."
Perhaps I'm missing something but this is utterly routine. It even has the name used here: Cross-compiling.
If you install Zig, you can now generate executables for virtually any target with just a CLI argument specifying the target, regardless of what machine you installed it on. Nothing else does that--cross compilation generally requires compiling the compiler to target a different architecture.
Apparently; I wasn't aware. But unlike Zig this doesn't work with FFI ... everything has to be Go code ... cross compilation works by compiling the library code for the target and caching it ... but if you need anything outside of that you're out of luck ... or maybe not ... I ran across this tidbit:
"When a Go project utilizes CGo to interact with C code, standard Go cross-compilation might require additional steps. This is because Go can cross-compile Go code but not C code directly, necessitating the availability of target system libraries on the development machine. Tools like Zig can be used as a C compiler (zcc) to facilitate cross-compilation for CGo-dependent projects by providing the necessary cross-compilation capabilities for the C code."
Do you need pointer arithmetic? I think that's one feature a modern C replacement should do away with, the other being support for arithmetic with unsigned types.
How on earth is it unique to compile code for different architectures? This has been a solved problem since the 80s.
It basically looks like C with different syntax. I'm also not convinced the 0..9 implicit range is better for iteration - I prefer it explicit for lower-level languages.
Nothing against (or for) Zig, but the article author seems unfamiliar with other modern languages in common use... imagine if they saw Swift or Rust. Their mind would be utterly, utterly blown.
Titles get the headlines.
I would like to find more articles on C so feel free to share, thanks
Here is the link for you: https://github.com/little-book-of/c/blob/main/articles/zig-i...
I hope next month I will have more time to write deep dives into the internals of SQLite, PostgreSQL, Redis and maybe curl, all written in C.
Honestly this (the fact it is being massively upvoted) looks a lot more like paid promotion. Not the first time and not the only example of submission btw.
What I got was:
The rest is pretty generic. I had to dig farther on the compile-time execution stuff. It's actually pretty cool-looking; recommend digging into it. I don't know that it's a killer enough feature to draw me away from Rust's guarantees, but it is interesting.
It is difficult to overstate how useful compile-time execution is in practice. I can't imagine using a systems language without it now. The term "modern C++" largely denotes when compile-time execution was added to that language.
I would love to see Rust get compile-time execution that is as capable as Zig or C++20.
You're absolutely correct! </s>
The "how to modify an environment variable" bit and the bin-dec-hex table made me feel the same way. Then I saw the part explaining how to check for duplicates in a row... I'm struggling to understand the point of the article. Testing a text generator?
Also bizarre that it got to the front page of HN while being so low quality :/
Well, I think that is why it got there - people really love hating :)
In my opinion the biggest issue with Zig is that it doesn't allow attaching data to errors. Error data can only be passed via a side channel, which is inconvenient and ENCOURAGES TOOL DEVELOPERS NOT TO PASS ERROR DATA, which greatly increases debugging difficulty.
Sometimes there are 100 things that could possibly go wrong. With error data you can easily know which exact thing went wrong. But with an error code alone you just know "something is wrong", not what exactly.
See: https://github.com/ziglang/zig/issues/2647#issuecomment-1444...
> I just spent way longer than I should have to debugging an issue of my project's build not working on Windows given that all I had to work with from the zig compiler was an error: AccessDenied and the build command that failed. When I finally gave up and switched to rewriting and then debugging things through Node the error that it returned was EBUSY and the specific path in question that Windows considered to be busy, which made the problem actually tractable ... I think the fact that even the compiler can't consistently implement this pattern points to it perhaps being too manual/tedious/unergonomic/difficult to expect the Zig ecosystem at large to do the same
Interestingly, I just read an article from matklad (who works a lot with Zig) talking about the benefits of splitting up error codes and error diagnostics, and the pattern of using a diagnostic sink to provide human-readable diagnostic information:
https://matklad.github.io/2025/11/06/error-codes-for-control...
Honestly I was quite convinced by that, because it kind of matches my own experiences that, even when using complex `Error` objects in languages with exceptions, it's still often useful to create a separate diagnostics channel to feed information back to the user. Even for application errors for servers and things, that diagnostics channel is often just logging information out when it happens, then returning an error.
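For concreteness, a rough Zig sketch of that pattern (all names hypothetical, not a std API): the error stays a bare code for control flow, and detail travels through an optional out-parameter the caller controls.

    const std = @import("std");

    const Diagnostics = struct {
        path: []const u8 = "",
        detail: []const u8 = "",
    };

    fn openDatabase(path: []const u8, diag: ?*Diagnostics) !void {
        const locked = true; // stand-in for a real failure condition
        if (locked) {
            if (diag) |d| d.* = .{ .path = path, .detail = "locked by another process" };
            return error.AccessDenied;
        }
    }

    test "caller opts into diagnostics" {
        var diag = Diagnostics{};
        openDatabase("data/app.db", &diag) catch |err| {
            std.debug.print("{s}: {s} ({s})\n", .{ @errorName(err), diag.path, diag.detail });
            return;
        };
    }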
The separation of error codes and diagnostics is fine, but the language needs a standard mechanism to optionally pass this error diagnostic information. Otherwise everyone will develop their own different way with ZERO consistency, and many will simply not pass error diagnostics at all.
Your and GP's two statements are not mutually exclusive. This paradigm can have significant benefits, and at the same time be too cumbersome for people to want to use consistently.
Having metadata with the error doesn't exclude having a separate diagnostics system. You don't have to use errors with metadata.
I agree that building a dedicated diagnostic system is better than just using the language's builtin error system. However, that takes effort.
Library developers tend to choose the path of least resistance, which is to not pass diagnostic information.
The most convenient diagnostic system is good old logging. Logging is easy.
Maybe logging will become the de facto way of passing error data in the Zig ecosystem, for psychological reasons.
I tend to follow the rest of the ecosystem when developing libraries. If I wanted to make a Zig lib, I'd look at what other major libs are doing (or not doing) and copy that.
If I found no consistency, I'd be making a post like OP's but from a different perspective.
The "correct" way is highly context dependent with the added proviso that Zig assumes a low-level systems context.
In this context, adding data to an error may be expedient but 1) it has a non-trivial overhead on average and 2) may be inadvisable in some circumstances due to system state. I haven't written any systems in Zig yet but in low-level high-performance C++20 code bases we basically do the same thing when it comes to error handling. The conditional late binding of error context lets you choose when and where to do it when it makes sense and is likely to be safe.
A fundamental caveat of systems languages is that expediency takes a back seat to precision, performance, and determinism. That's the nature of the thing.
If the error rarely happens, then passing error data shouldn't affect performance in any visible way. If the error occurs on a common path, then the design is wrong.
I agree that in special states like OOM, passing error data that requires allocation is not OK.
Error data being returned instead of just error codes doesn't require allocation at all, and never would, unless the specific unions that you're returning require as much. Zig already has tagged unions with a tag field and associated payload, that is exactly what you would return. The overhead isn't remarkably worse than the cost of modifying the value someone passed in to "Fill this in in case of errors" (which is what you have to do now in Zig).
For quite a long time, I have been wondering why I like to code in Raku so much … in a roundabout way you set me thinking. Perhaps it's because, in Raku, precision, performance and determinism take a back seat to expediency. (Sorry for the tangent).
I went to have a look at the (beautiful and informative) website for the Raku language to refresh my memory, and looking at the examples I thought "Oh god, those sigils, those cryptic short keywords... it looks like a modern Perl, I doubt we would be happy together". Then I went to Wikipedia to check, and yes indeed, that's Perl 6! I'll pass. :)
Wow, Raku looks like a really interesting language, I'd never heard of it before!
Have you used it in any large projects?
I love it. My largest project is about 20k lines … so nothing too big. But if you need to be expedient (just quickly make a data extract/load/transform or a command line thingy) it is great fun. The LLMs seem to be pretty good too, just the usual hallucination here and there.
Had you heard of Perl 6?
Perl I used in 2008 or so but not since. Haven't come across Perl 6.
Perl 6 = Raku.
Please, stop deadnaming the Raku Programming Language :-)
I thought it was useful information for people who did not know this. Of course Wikipedia would have sufficed, too: "Raku, formerly known as Perl 6 [...]".
=b
People are working on this. std.zon is generally considered to be a good example of how to handle errors and diagnostics, though it's an area of active exploration. The plan is to eventually collect all the good patterns people have come up with and (1) publish them in a collection, and (2) update std to actually use them.
Go gets a lot of flak for getting some things wrong, but it was a stable and productive language within a couple of years.
If you look at the current Zig website the hello world example doesn’t compile because they changed the IO interface. Something as simple as writing to the console.
It’s easier to get things right if you have no issues breaking backward compatibility for a decade. It feels it’ll be well over 10 years before Zig is “1.0”.
+1
Agreed, this is probably my biggest ongoing issue with Zig. I really enjoy it overall but this is a really big sticking point.
I find it really amusing that we have a language that has built its brand around "only one obvious way to do things", "reducing the amount one must remember", and passing allocators around so that callers can control the most suitable memory allocation strategy.
And yet in this language we supposedly can't have error payloads because not every error reporting strategy is suitable for every environment due to memory constraints, so we must rely on every library implementing its own, yet slightly unique version of the diagnostic pattern that should really be codified as some sort of a language construct where the caller decides which allocator to use for error payloads (if any).
Instead we must hope that library authors are experienced and curious enough to have gone out of their way to learn this pattern because it isn't mentioned in any official documentation and doesn't have any supporting language constructs and isn't standardized in any way.
There must be an argument against this (rather obvious) observation but I'm not aware of it?
There are plans in Zig to allow including custom information in error stack traces: https://github.com/ziglang/zig/issues/14446.
But that is not implemented.
In any case, when debugging, annotating an error with extra context often is not enough. One often needs a detailed trace of what happened before.
So what I would like to see in any programming language is the ability to do structured logging with extra context from the call stack (including asynchronous support in languages that have that) that has almost zero overhead when the log is not printed.
Various languages and runtimes have some libraries that try to do that, but the usage is awkward and the performance overhead is not trivial.
I made a go of this using the stacktrace functionality built into C++23. The overhead and complexity it introduced made it not worth it, unfortunately. There may be a way to do this but it seems non-trivial in implementation.
Note this is explicitly marked as Not Planned
closed as not planned :)
I know that Zig doesn't allow attaching data to an error for valid reasons. If error data contains an interior pointer then it can easily cause a memory safety problem. Zig doesn't have a borrow checker or ownership system to prevent that.
https://github.com/ziglang/zig/issues/2647#issuecomment-2670...
If you wanted to have a parameter that gets filled in when there is an error, this exact issue will remain, it's completely unrelated to which language construct you use to capture errors and has more to do with having a good idea of how your errors are allocated, if they require allocation. I don't think the commenter in the GitHub issue thought this through at all, and probably didn't expect to have it be held up as some example of why you can't return tagged unions (because it's not an example of that, not even remotely).
> I know that Zig doesn't allow attaching data to error for good reasons. If error data contains interior pointer then it causes memory safety problem
IMO this is not a good reason at all.
Changed to "valid reasons"
The problem of dangling pointers is not unique to error data.
I can see pros and cons. Preventing data being attached to an error forces more clear and precise errors.
Whereas lazy devs could just attach all possible data in a giant generic error if they don’t want to think about it.
> Preventing data being attached to an error forces more clear and precise errors.
Okay, maybe theoretically, but in the real world I would like to have the filename on a "file not found", an address on a "connection timeout", a retry count on a "too many failures", etc.
But also in the real world I may not be interested in any error information for the library I’m using. I’d like to be able to pass a null for the error information structure and have the compiler optimize away everything related to tracking and storing error information.
I’d like my parser library to be able to give me the exact file, line and column number an error occurred. But I’d also like to use the library in a “just give me an error if something failed, I don’t really care why” mode.
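To illustrate, here is a minimal sketch of the optional-diagnostics pattern being discussed - hypothetical names, not a std API. The caller opts in with a pointer, or passes null for the "just give me an error, I don't care why" mode:

    const std = @import("std");

    // Hypothetical diagnostics record; the caller opts in by passing a pointer.
    const Diagnostics = struct {
        line: usize = 0,
        column: usize = 0,
        message: []const u8 = "",
    };

    const ParseError = error{UnexpectedEof};

    fn parse(input: []const u8, diags: ?*Diagnostics) ParseError!void {
        if (input.len == 0) {
            // Fill in details only if the caller asked for them.
            if (diags) |d| d.* = .{ .line = 1, .column = 1, .message = "unexpected end of input" };
            return error.UnexpectedEof;
        }
        // ... real parsing would go here ...
    }

    pub fn main() void {
        var d: Diagnostics = .{};
        parse("", &d) catch {
            std.debug.print("parse failed at {d}:{d}: {s}\n", .{ d.line, d.column, d.message });
        };
        // Callers that don't care pass null and pay essentially nothing:
        parse("", null) catch {};
    }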
I'd rather have data in a generic error type than no data in a specific error type.
How useful is a file not found error type without data (the filename) when the program is looking for 50 files? Not very.
How useful is a generic error type with '{filename} not found' as generic string data packed in? Quite.
I don't follow: because there's a possibility that someone somewhere might create a bad, overly-generic error set if they were allowed to stuff details in the payload (when those should be reflected in the error "type"), it's a good idea to make the vast majority of error reporting bad and overly-generic by eliminating error payloads entirely?
Yeah, every single newbie programming language designer starts with a maximalist position of "exceptions are hard, just return an error code", and then ends up inventing their own shitty, ad-hoc and malfeatured exception handling system.
I want off this ride.
This is annoying. It’s because errors were designed to be a bitset and not have pointers. I would also prefer that they were a `union(enum)`.
We are free to do that as a return type like `Result(T)` and just forgo using `try`, but yeah, I wish this was in there.
This seems kinda contrived. In practice that "ERROR DATA" tends not to exist. Unexpected errors almost never originate within the code in question. In basically all cases that "ERROR DATA" is just recapitulating the result of a system call, and the OS doesn't have any data to pass.
And even if it did, interpreting the error generally doesn't ever work with a microscope over attached data. You got an error from a write. What does the data contain? The file descriptor? Not great, since you really want to know the path to the file. But even then, it turns out it doesn't really matter, because what really happened was the storage filled up due to a misbehaving process somewhere else.
"Error data" is one of those conceits that sounds like a good idea but in practice is mostly just busy work. Architect your systems to fail gracefully, don't fool yourself into pretending you can "handle" errors in clever ways.
I think you've skipped over all the cases where knowing the filename is actually helpful? It's true that sometimes it isn't.
Also, a line number is often helpful, which is why compilers include it. Some JSON parsers omit that, which is annoying.
> Also, a line number is often helpful
That's not error data, that's (one level of) a stack trace. And you can do that in zig, but not by putting call stack data into error return codes.
The conflation between exception handling and error flagging (something that C++ did largely as a mistake, and that has been embraced by managed runtimes like Python or Java) is actually precisely what this feature is designed to untangle. Exception support actually turns out to have very non-trivial impact on the generated code, and there's a reason why languages like Rust and Zig don't include them.
> That's not error data, that's (one level of) a stack trace.
They're not talking about the stack trace, but about the common case where the error is not helpful without additional information, for example a JSON parsing library that wants to report the position (line number) in the string where the error appears.
There's no way of doing that in Zig; the best you can do is return a "ParseError" and build your own, non-standard diagnostic facilities to report detailed information through output arguments.
Another way to look at this example is that, for the parser, this is not an error. The parser is doing its job correctly, providing an accurate interpretation of its input, and for the parser, this is qualitatively different from something that prevents it doing its job (say, running out of memory).
At the next level up, though, there might be code that expects to be able to read a JSON config file at a certain location, and if it fails, it’s reasonable to report which file it tried to read, the line number, and what the error was.
Serializing error data to text and then dumping that in a log can be pretty useful.
Error data should specify where the error occurred and what failed. So you'll know which file had a problem, and that the problem in question was a failure to write. From that you can make the inference that maybe the disk is full, etc.
A neat little thing I like about Zig is one of the options for installing it is via PyPI like this: https://pypi.org/project/ziglang/
Which means you don't even have to install it separately to try it out via uvx. If you have uv installed already, try something like the sketch below.
For anyone not familiar: You can bundle arbitrary software as Python wheels. Can be convenient in cases like this!
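Picking up the "try this" above - assuming the wheel's documented `python -m ziglang` entry point (the exact incantation may vary), something like this should work:

    uv run --with ziglang python -m ziglang version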
What "cases" are those? Tell me one useful and neat case. Why is it useful and neat, you think?
For one example, a number of years back, I built a Python package, env, and version manager. It was built entirely in Rust and distributed as a binary. Since I knew users would likely have pip installed, it provided an easy way for them to install, regardless of OS.
You could go further like in this case, and use wheels + PyPi for something unrelated to Python.
It's useful as a distro-agnostic distribution method. CMake is also installable like this despite having nothing to do with Python.
Or I should say it was useful as a distribution method, because most people had Python already available. Since most distros now don't allow you to install stuff outside a venv you need uv to install things (via `uv tool install`) and we're not yet at the point where most people already have uv installed.
reinventing nix but worse.
Not even close, that's still imperative package management
That's a nice trick!
I wish we had that for Nim too!
try pixi!
For this sort of stuff I find micromamba / pixi a better way of managing packages, as opposed to the pip / uv family of tools.
Pixi, Conan, or Nix— all better choices than abusing the Python ecosystem to ship arbitrary executables.
It could easily be the case that the zig compiler is useful in some mixed-language project and this is not actually "abuse".
Regular Python bindings / c extensions don’t depend on a pypi-packaged instance of gcc or llvm though. It’s understood that these things are provided externally from the “system” environment.
I know some of it has already happened with Rust, but perhaps there’s a broader reckoning that needs to occur here wrt standards around how language-specific build and packaging systems handle cross-language projects… which could well point to phasing those out in favour of nix or pixi, which are designed from the get-go to support this use case.
That's really cool actually. Now that AI is a little more commonly available for developer tooling, I feel like it's easier than ever to learn any programming language, since you can braindrain the model.
The standard models are pretty bad at Zig right now since the language is so new and changes so fast. The entire language spec is available in one HTML file though, so you can have a little better success feeding that in as context.
> The entire language spec is available in one HTML file though, so you can have a little better success feeding that in as context.
This is what I've started doing for every library I use. I go to their Github, download their docs, and drop the whole thing into my project. Then whenever the AI gets confused, I say "consult docs/somelib/"
Just use gh_grep mcp and the model will fetch what it needs if you tell it to, no need to download from GitHub manually like this
I on the other hand see most languages become superfluous, as coding agents keep improving.
During the last year I have been observing how MCP, tools and agents, have reduced the amount of language specific code we used to write.
I'm afraid this article kinda fails at its job. It starts out with a very bold claim ("Zig is not only a new programming language, but it’s a totally new way to write programs"), but ends up listing a bunch of features that are not unique to Zig or even introduced by Zig: type inference (invented in the late 60s, first practically implemented in the 80s), anonymous structs (C#, Go, TypeScript, many ML-style languages), labeled breaks, functions that are not globally public by default...
It seems like this is written from the perspective of C/C++ and Java and perhaps a couple of traditional (dynamically typed) languages.
On the other hand, the concept that makes Zig really unique (comptime) is not touched upon at all. I would argue compile-time evaluation is not entirely new (you can look at Lisp macros back in the 60s), but the way Zig implements this feature and how it is used instead of generics is interesting enough to make Zig unique. I still feel like the claim is a bit hyperbolic, but there is a story that you can sell about Zig being unique. I wanted to read this story, but I feel like this is not it.
D has had compile time function execution since 2007 or so.
https://dlang.org/spec/function.html#interpretation
It doesn't need a keyword to trigger it. Any expression that is a const-expression in the grammar triggers it.
Hello Mr. Bright. I've seen similar comments from you in response to Zig before. Specifically, in the comments on a blog post I made about Zig's comptime. I took some time reading D's documentation to try to understand your point (I didn't want to miss some prior art, after all). By the time I felt like I could give a reply, the thread was days old, so I didn't bother.
The parent comment acknowledges that compile time execution is not new. There is little in Zig that is, broad strokes, entirely new. It is in the specifics of the design that I find Zig's ergonomics to be differentiated. It is my understanding that D's compile time function execution is significantly different from Zig's comptime.
Mostly, this is in what Zig doesn't have as a specific feature, but uses comptime for. For generics, D has templates, Zig has functions which take types and return types. D has conditional compilation (version keyword), while Zig just has if statements. D has template mixins, Zig trusts comptime to have 90% of the power for 10% of the headache. The power of comptime is commonly demonstrated, but I find the limitations to be just as important.
A difference I am uncertain about is whether there's any D equivalent of Zig's types being expressions. You can, for example, calculate what the return type should be given the type of an argument.
Is this a fair assessment?
Maybe I don't understand, in D, how do I write a function which makes a new type?
For example, Zig has a function ArrayHashMapWithAllocator which returns, well, a hash table type in a fairly modern style, no separate chaining and so on
Not an instance of that type, it returns the type itself, the type didn't exist, we called the function, now it does exist, at compile time (because clearly we can't go around making new types at runtime in this sort of language)
You use templates and string mixins alongside each other.
The issue with mixins is that using string concatenation to build types on the fly isn't the greatest debugging experience, as there is only printf debugging available for them.
But Zig doesn't need a keyword to trigger it either? If it's possible at all, it will be done. The keyword should just prevent run-time evaluation. (Unless I grossly misunderstood something.)
Yes and D's comptime is much more fun, IMHO than Zig's! Yet everyone talks about Zig's comptime as if it were unique or new.
Partial evaluation has been quite well known at least since 1943 and Kleene's Smn proof. It has since been put to use, in various forms, by quite a few languages (including C++ in 1990, and even C in the early seventies). But the extent and the way in which Zig specifically puts it to use -- which includes, but is not limited to, how it is used to replace other features that can then be avoided (and all without macros) -- is unprecedented.
Pointing out that other languages have used partial evaluation, sometimes even in ways that somewhat overlap with Zig's use, completely misses the point. It's at least as misplaced as saying that there was nothing new or special about iPhone's no-buttons design because touch screens had existed since the sixties.
If you think Zig's comptime is just about running some computations at compile time, you should take a closer look.
Hi Walter! Big fan. What do you think of Zig? How would you like to see it evolve? Are there any things from Zig that inspire you to work in D?
Perl5 had it before. Either by constant-folding, or by BEGIN blocks.
Constant-folding just got watered down by the many dynamic evangelists in the decades after, such that even C or C++ didn't enforce it properly. In Perl 5 it was watered down on add (+) by some hilariously wrong argumentation back then. So you could precompute mult const expressions, but not add.
How are perl5’s BEGIN blocks equivalent to comptime? It’s been awhile, but I recall BEGIN blocks executing at require time—which, in complicated pre-forking setups that had to be careful about only requiring certain modules later during program execution because they did dumb things like opening connections when loaded, meant that reasoning about BEGIN blocks required a lot more careful thought than reasoning about comptime.
The same is true for templates, or macros—all of which are distinguished by being computed in a single pass (you don’t have to think about them later, or worry about their execution being interleaved with the rest of the program), before runtime start (meaning that certain language capabilities like IO aren’t available, simplifying reasoning). Those two properties are key to comptime’s value and are not provided by perl5’s BEGIN blocks—or probably even possible at all in the language, given that it has eval and runtime require.
BEGIN blocks execute at compile-time. require is just a wrapper to load a module at compile-time.
When you want to use state, like opening a file for run-time, use INIT blocks instead. These are executed first before runtime, after compile-time.
My perl compiler dumps the state of the program after compile-time. So everything executed in BEGIN blocks is already evaluated. Opening a file in BEGIN would not open it later when required at run-time, and compile-time is separated from run-time. All BEGIN state is constant-folded.
I think we’re using different definitions of “compile time”.
I know who you are, and am sure everything you say about the mechanisms of BEGIN is correct, but when I refer to “compile time”, I’m referring to something that happens before my program runs. Perl5’s compilation happens the first time a module is required, which may happen at runtime.
Perhaps there’s a different word for what we’re discussing here: one of the primary benefits of comptime and similar tools is that they are completed before the program starts. Scripting languages like perl5 “compile” (really: load code into in-memory intermediate data structures to be interpreted) at arbitrary points during runtime (require/use, eval, do-on-code).
On the other hand, while code in C/Zig/etc. is sometimes loaded at runtime (e.g. via dlopen(3)), its compile-time evaluation is always done before program start.
That “it completed before my code runs at all” property is really important for locality of behavior/reasoning. If the comptime/evaluation step is included in the runtime-code-load step, then your comptime code needs to be vastly more concerned with its environment, and code loading your modules has to be vastly more concerned with the side effects of the import system.
(I guess that doesn’t hold if you’re shelling out to compile code generated dynamically from runtime inputs and then dlopen-ing that, but that’s objectively insane and hopefully incredibly rare.)
Agreed.
But I would not put comptime as some sort of magical invention. It's still just a newish take on metaprogramming. We had that since forever. From my minimal time with Zig I kind of think of comptime as a better version of C++ templates.
That said, Zig is possibly a better alternative to C++, but not that exciting for me. I kind of don't get why so many think it's the holy grail; first it was Rust, and now Zig.
As much as I dislike Rust, I gotta give it credit where it's due. It has something unique: a borrow checker. What is so unique in Zig?
> It has something unique: a borrow checker.
Rust's borrow checker isn't unique either but was inspired by Cyclone: https://en.wikipedia.org/wiki/Cyclone_(programming_language)
IMHO a programming language doesn't need a single USP, it just needs to include good existing ideas and (more importantly) exclude bad existing ideas (of course what's actually a good and bad idea is highly subjective, that's why we need many programming languages, not few).
Rust's borrow checker is unique in the sense that it is production-ready. Cyclone is indeed prior art, but it's not as if it ever got beyond the research project stage.
I don't necessarily disagree, of course. That is why I like Odin the most so far, and perhaps C3.
The code samples are so weird... Some are images, others are not, and there's like 10 different color schemes (even among the textual ones, it's not consistent). That actually takes some kind of effort to achieve :D.
gives you a preview of the experience of using it :)
> Zig is not only a new programming language, but it’s a totally new way to write programs
I'd say the same thing about Rust. I find it the best way to express what code should run, and when, at any given point in the program, and the design is freakin' interstellar: it is basically a "query engine" where you write a query of some code against the entire available "code space", including the root crate and its dependencies. Once you understand that, programming becomes naming bits and then writing queries for the ones you wish to execute.
As someone not really familiar with Rust, this sounds intriguing, but I don’t fully understand. Do you have any links or examples that could clarify this for someone who is just starting out with Rust?
When I read "I can easily say that Zig is not only a new programming language, but it’s a totally new way to write programs" I expected to see something as shocking as LISP/Smalltalk/Realtalk/EVE/FORTH/Prolog... A whole new paradigm, a whole new way to program. Or at least a new concept like the pure functionalism of Haskell, or prototyping like in Lua/JS/Io. And I was so damn shocked: how could I have missed something so huge, having read the entirety of Zig's documentation, and not have noticed anything? As you mentioned, it turned out to be nothing, and then I was shocked about why it was at the top of HN. That also turned out to be for no reason, based on the comments.
The idea of modern society is "get hyped for the new thing". The tech crowd did not escape that, unfortunately, and keeps rediscovering techniques that were already possible more than 50 years ago. Because they don't want to learn the history of the technology they are using.
"Computing is a fashion show"
-- Alan Kay
Dev celebs makes blogposts and videos on how Zig is awesome and unique, so the herd repeats.
> I'm afraid this article kinda fails at its job
Yeah, I know nothing about Zig, and was excited by the author's opening statement that Zig is the most surprising language he has encountered in a 45 yr software career...
But this is then immediately followed by saying that ability to compile C code, and to cross-compile, are the most incredible parts of it, which is when I immediately lost interest. Having a built-in C compiler is certainly novel, and perhaps convenient for inter-op, but if the value goes significantly beyond that then the author is failing to communicate that.
Compile time seems to be a standard feature in D-lang as well.
Powerful macros that generate code that then gets compiled =)
Anonymous structs and type inference are things even C has, although support for the latter is quite recent and limited.
"this article kinda fails at at its job"
Definitely.
> C/C++
It has been several decades since putting a slash between these two made sense, lumping them together like this. It would be similar to saying something like Java/Scala or ObjectiveC/Swift. These are completely different languages.
Nope, that is an English grammar construct that is a shortcut for "and" and "or", as any good English grammar book will explain.
Indeed you see those for Java/Scala and Objective-C/Swift in technical books and job adverts.
Any search of the careers sites, or documentation, of companies that have seats at ISO and sell/develop C and C++ compilers will turn up such C/C++ references in a couple of places.
Do you need any example?
In the general case yes, but "C/C++" became an idiom for the stance that C and C++ are essentially the same, that C++ is a superset of C, or that C++ is just the replacing successor of C and it should be treated as superseded. This is quite wrong, and thus there is a lot of rightful intervention against that term. Personally I use "C, C++" when I want to talk about both without claiming that they are the same language.
Nah, that is what pedantic folks without English grammar knowledge keep complaining about, instead of actually discussing better security practices in both languages.
It is a bikeshedding discussion that doesn't help with anything regarding the lack of security in C, or the legions of folks that keep using C data types in C++, including bare-bones null-terminated strings and plain arrays instead of collection types with bounds checking enabled.
This has nothing to do with bikeshedding, it is a genuine misunderstanding of these two languages that is propagated in this way. This is not about grammar.
In my opinion, this is an important issue and not "bikeshedding", but it can be discussed whether the term "C/C++" is always an example of that idea or not. I think it is not, but it is connected enough, that I won't use it to side step the issues.
So there will be zero C language constructs, and C standard library functions being called, on your C++ source code?
I mostly write C, but yes even a simple call to e.g. malloc has different semantics in C++ (you need to cast).
Proper C++ should use new, delete, custom allocators, and standard collection types.
Even better, all heap allocations should be done via ownership types.
Calling into malloc () is writing C in C++, and should only be used for backwards compatibility with existing C code.
Additionally, there is no requirement in the C++ standard that new and delete call into malloc()/free(); that is usually done as a matter of convenience, as all C++ compilers are also C compilers.
> Calling into malloc () is writing C in C++, and should only be used for backwards compatibility
And this is exactly the stance I am arguing against. C++ is not the newer version of C. It forked off at some point and is a quite different language now.
One of the reasons I do use malloc for, is for compatibility with C. It is not for backward compatibility, because the C code is newer. In fact I actively change the code, when it needs a rewrite anyway, from C++ to C.
The other reason for using it even when writing C++ is that new alone doesn't let you allocate without also calling the constructor. For that I call malloc first and then invoke the constructor with placement new. For deallocating I call the destructor and then free. This also has the additional benefit that your constructor and destructor implementation can fail and you can roll it back.
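For illustration, a minimal C++ sketch of that split between allocation and construction (the Widget type is hypothetical):

    #include <cstdlib>
    #include <new>

    struct Widget {
        int id;
        explicit Widget(int i) : id(i) {}
    };

    Widget *make_widget(int id) {
        // Allocate raw memory first (compatible with C callers/allocators)...
        void *mem = std::malloc(sizeof(Widget));
        if (!mem) return nullptr;
        // ...then run the constructor separately via placement new.
        return new (mem) Widget(id);
    }

    void destroy_widget(Widget *w) {
        if (!w) return;
        w->~Widget();  // run the destructor explicitly...
        std::free(w);  // ...then release the raw memory.
    }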
Not in this context, that’s incorrect.
The problem is that it's a bit tricky to type the intersection symbol (∩), because C ∩ C++ makes more sense.
Yeah, as I keep repeating, it is a Modula-2 in C clothes, minus comptime, which as others have mentioned D has had for quite some time.
> C/C++
No such thing. Also C++ has most of those features too.
As a C++ developer who's heard of Zig but never dived into it, I was reading this article scratching my head wondering what is actually so unique about it.
Why the blog has a section on how to install it on the PATH is also very puzzling.
Zig is so cool, but C is cooler.
I like how Zig feels clear and simple to start with. I like that it gives one toolchain and makes cross compilation easy. I like that it helps people see how systems programming can feel approachable again.
I also like that C has done these things for many years. I can use different tools, link libraries, and trust that it will still work. I can depend on standards that keep improving while staying familiar.
I think Zig is exciting for what it adds today. I think C is cooler because it has proved itself everywhere and still runs the world.
I've used Zig for well over a year now, and I identify with this comment... well no, but I used to. In between, the other language I enjoyed was Python, and it was a breath of fresh air to come back to the C style that I know and love within Zig. I would have said exactly this when I first started writing Zig.
Today, Zig is so much better than C. I used to refer to Zig as an improved version of C. But I don't anymore. C may have come first, but the chronological roles have reversed. If Zig is a programming language, then C is a toy trying to copy Zig's functionality and usability.
Calling C easier to use in a cross platform context is absolutely insane. If I was only concerned about $HOST I would consider using C. Today, when I might want to copy a binary to literally any other system, I wouldn't even consider C. Zig wants code to work. C wants code to compile. There's a stark and critically important difference between the two.
> I think Zig is exciting for what it adds today. I think C is cooler because it has proved itself everywhere and still runs the world.
I couldn't have put it better myself, the only thing C has over Zig is inertia. But I wouldn't consider that a selling point....
Have you tried C23, especially its new Unicode support? It really surprised me after returning to C more than ten years later.
You can now write wide and UTF-8 string literals directly:
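Something along these lines - a sketch from my reading of C23, needing a recent compiler in C23 mode (u8"" literals actually date back to C11; the char8_t type is what's new in C23):

    #include <stdio.h>
    #include <uchar.h>
    #include <wchar.h>

    int main(void) {
        const char8_t  *u8s = u8"héllo, wörld"; // UTF-8, typed char8_t since C23
        const wchar_t  *ws  = L"wide héllo";    // wide string literal
        const char16_t *s16 = u"UTF-16 héllo";
        const char32_t *s32 = U"UTF-32 héllo";
        printf("%s\n", (const char *)u8s);
        (void)ws; (void)s16; (void)s32;
        return 0;
    }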
It just works across compilers, no special libraries or hacks needed. C still feels like C, but cleaner, safer, and more consistent.
I abandoned the goal of investing more time into C when they couldn't get defer into their latest version.
Two years later, already enjoying it in Zig, `defer` is a lot less important to me now. But I still view it as a symptom of the death of the language. C isn't dead, by any stretch of the imagination, but it's no longer learning from its mistakes, whereas I still am.
I started learning C again for one simple reason: to understand the Linux kernel. You cannot do that without knowing C, and soon you end up learning about GCC, linkers, and how programs really run.
Once I spent time with it, I saw how many smart ideas from the kernel could be used anywhere: the initcall system that runs modules in order, the way structs with function pointers create flexible drivers, the use of macros to build type-safe lists, and so on.
https://www.collabora.com/news-and-blog/blog/2020/07/14/intr...
For real work, though, life is short. I use Go.
I totally vibe with the intro, but then the rest of the article goes on to showcase bits of Zig.
I feel what is missing is an explanation of why each feature is so cool compared to other languages.
As a language nerd, I find Zig's syntax just so cool. It doesn't feel the need to adhere to any conventions and seems to solve problems in the most direct and simple way.
An example of this is declaring a label versus referring to a label. By moving the colon to one end or the other, it makes labels instantly understood: you can tell at a glance which form it is.
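A tiny sketch of the two forms (the label name and values here are made up):

    const std = @import("std");

    pub fn main() void {
        // Declaring a label: the colon follows the name ("search:").
        const index: ?usize = search: {
            const items = [_]u8{ 3, 1, 4, 1, 5 };
            for (items, 0..) |item, i| {
                // Referring to a label: the colon precedes the name (":search").
                if (item == 4) break :search i;
            }
            break :search null;
        };
        std.debug.print("{any}\n", .{index});
    }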
And then there are the runtime promises, such as no hidden control flow. There are no magical @decorators or destructors. Instead we have explicit control flow like defer.
Finally there is comptime. No need to learn another macro syntax. It's just more Zig, run during compilation.
matklad did it justice in his post here, in my opinion
https://matklad.github.io/2025/08/09/zigs-lovely-syntax.html
Thread: https://news.ycombinator.com/item?id=44855881
I was also curious what direction the article was going to take. The showcase is cool, and the features you mentioned are cool. But for me, Zig is cool is because all the pieces simply fit together with essentially no redundancy or overloading. You learn the constructs and they just compose as you expect. There's one feature I'd personally like added, but there's nothing actually _missing_. Coding in it quickly felt like using a tool I'd used for years, and that's special.
Zig's big feature imo is just the relative absence of warts in the core language. I really don't know how to communicate that in an article. You kind of just have to build something in it.
> Coding in it quickly felt like using a tool I'd used for years, and that's special.
That's been my exact experience too. I was surprised how fast I felt confident in writing zig code. I only started using it a month ago, and already I've made it to 5000 lines in a custom tcl interpreter. It just gets out of the way of me expressing the code I want to write, which is an incredible feeling. Want to focus on fitting data structures on L1 cache? Go ahead. Want to automatically generate lookup tables from an enum? 20 lines of understandable comptime. Want to use tagged pointers? Using "align(128)" ensures your pointers are aligned so you can pack enough bits in.
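As a taste of that lookup-table point, here is a rough sketch of a comptime-generated table (the enum and the costs are made up for illustration):

    const std = @import("std");

    const Opcode = enum { add, sub, mul, div };

    // Lookup table generated at compile time: one entry per enum value.
    const cost: [std.enums.values(Opcode).len]u8 = blk: {
        var table: [std.enums.values(Opcode).len]u8 = undefined;
        for (std.enums.values(Opcode)) |op| {
            table[@intFromEnum(op)] = switch (op) {
                .add, .sub => 1,
                .mul => 3,
                .div => 20,
            };
        }
        break :blk table;
    };

    pub fn main() void {
        std.debug.print("div costs {d}\n", .{cost[@intFromEnum(Opcode.div)]});
    }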
Having spent a year tinkering in Zig, its absence of features has made me want to drop C#/Java professionally and pick up Golang. It's quite annoying when you see a codebase written in C#/Java and can tell in which year/era it was written because of the language features. The way of writing things in C# changes like every 4 years or so.
There's a certain beauty in only having to know 1~2 loops/iteration concepts compared to 4~5 in modern multi paradigm languages(various forms of loops, multiple shapes of LINQ, the functional stuff etc).
You already have that in Go: before and after modules, before and after generics, before and after ranges over function types.
Skipping other minor changes.
However I do agree C# is adding too much stuff; the team seems to be trying to justify its existence.
Yeah, the real strength of Zig isn't what's there, but what isn't.
out of curiosity, what feature do you want?
The feature I want is multimethods -- function overloading based on the runtime (not compile time) type of all the arguments.
Programming with it is magical, and it's a huge drag to go back to languages without it. Just so much better than common OOP that depends only on the type of one special argument (self, this etc).
Common Lisp has had it forever, and Dylan transferred that to a language with more conventional syntax -- but is very near to dead now, certainly hasn't snowballed.
On the other hand Julia does it very well and seems to be gaining a lot of traction as a very high performance but very expressive and safe language.
I think this is a major mistake for Zig's target adoption market - low level programmers trying to use a better C.
Julia is phenomenally great for solo/small projects, but as soon as you have complex dependencies that _you_ can't update - all the overloading makes it an absolute nightmare to debug.
Ada has them, and I guess we all agree on its systems programming nature.
Erlang/Elixir also has that
>The feature I want is multimethods -- function overloading based on the runtime (not compile time) type of all the arguments.
>Programming with it is magical, and it's a huge drag to go back to languages without it. Just so much better than common OOP that depends only on the type of one special argument (self, this etc).
Can you give one or two examples? And why is programming with it magical?
For a start it means you can much more naturally define arithmetic operators for a variety of built in and user-defined types, and this can all be done with libraries not the core language.
Because methods aren't "inside" objects, but just look like functions taking (references to) structs, you can add your own methods to someone else's types.
It's really hard to give a concise example that doesn't look artificial, because it's really a feature for large code bases.
Here's a tutorial example for Julia
https://scientificcoder.com/the-art-of-multiple-dispatch
Thanks.
The article's claim of Zig being a "totally new way to write programs" is quite mad but I'd like to make a different claim: Zig's own development is a totally new way of writing programming languages (or is at least very rare).
While I don't wholly agree with all choices made by Andrew and the Zig team, I greatly appreciate the care with which they develop features. The slow pace of deliberating over features, refining them, and removing unnecessary ones seems in sharp contrast to the development of any other language I'm aware of. I'm no language historian though, happy to be challenged.
I am not sure if a slow pace is as beneficial as you say. I scrolled through the error handling issue brought up in this comment section ( https://github.com/ziglang/zig/issues/2647#issuecomment-2670... ) and it's clear that the only thing that happened there was that communication on the issue was hindered. I come from the C++ side, and our "ISO C++ committee" language development process leaves a lot to be desired. Now look at the error handling that they did pass in C++23 (std::expected). It raises some questions about how slow you can be while still appearing to move forward.
Disclaimer: I would like to see Zig and other new languages become viable alternatives to C++ in gamedev. But I understand that it might happen way after I retire =)
In terms of programming language development, take a look at Clojure. The clarity of reasoning behind every decision is unmatched.
It seems pretty common to me?! Java is developed that way. So is Rust. And many others. What exactly do you see as different in Zig?
Java and Rust have surpassed 1.0 version a long time ago, so they don’t remove features left and right on each feature release.
Not that it’s a bad thing. Python removes stuff, and it takes time to upgrade to new versions.
And Zig has surpassed 1.0 or where is the argument?
Zig has not surpassed 1.0 and explicitly strives to remove features, which Java and Rust don’t do anymore. That’s why it feels different.
I don’t think Java and Rust were so ok with completely removing features. For example, in Zig 0.15 they completely overhauled the IO, meaning all libraries now have to rewrite their usage. Just to make sure they did it right
> I don’t think Java and Rust were so ok with completely removing features.
This just shows that you weren't around for pre-1.0 Rust. Back then Rust was infamous for the language making breaking changes every week. Check out this issue from 2013 tracking support for features which were deprecated but had yet to be removed from the compiler: https://github.com/rust-lang/rust/issues/4707 , and that's just a single snapshot from one moment in Rust's prehistory.
Semantic major/minor version 0.15 means it's still in development. It's not supposed to be stable. Going from 0.14 to 0.15 allows breaking changes.
Try making a similar change between version 5.0 and 6.0, with hundreds of thousands of existing users, programs, packages and frameworks that all have to be updated. (Yes, also the users who have to learn the new thing.)
> Just to make sure they did it right
Let me guess: they didn't, and now there is a third-party "right" way to do it.
(We've been here before, many times.)
While some of the features the author references are really interesting, personally I don't see how any of that would justify creating a new memory unsafe language in 2016. I thought it was pretty obvious by now [1][2][3] memory safety is best left to tooling / compilers and not to programmers.
[1] https://research.google/pubs/secure-by-design-googles-perspe...
[2] https://www.microsoft.com/en-us/msrc/blog/2019/07/we-need-a-...
[3] https://www.cisa.gov/case-memory-safe-roadmaps
This sounds a bit like Tanenbaum's rejection of Torvalds' project, because monolithic kernels are obsolete.
Most OSes use either hybrid kernels, or type 1 hypervisors, which are microkernels by another name.
Tanenbaum was right, the future of the Linux kernel is dire, and it's been a huge setback to operating systems research in practical terms.
Fortunately, vendors are gradually moving away from Linux, having been hamstrung by its failures. Google is planning to move to a capability-based microkernel in the coming years for Android and ChromeOS, and Huawei has already done so with HarmonyOS.
In a hundred years, Linux will be a footnote in computing history.
Regardless of whether this is true, it has not prevented the adoption of Linux.
I think even in the year of our lord 2016 there's room for a language with safe defaults but seamless interoperability with existing unsafe code. It's certainly an improvement on the status quo and provides an alternative to rewriting the world in Rust or a GC language.
Except Zig's defaults could already be found in the years of our Lord 1976, 1978, 1983 and 1986.
Exercise from other posts of mine which languages those might be.
Unlike C/C++, Zig is not inherently memory-unsafe.
Where Rust insists on having either partial safety through the checker or lack of control in unsafe code, Zig provides a toolkit for constructing safe frameworks. Zig also doesn't have the main sources of unsafety coming from certain C design mistakes.
Besides, if you are after true memory safety then garbage collection is the way to go.
I've tried writing a similar post, but I think it's a bit difficult to sound convincing when talking about why Zig is so pleasant. it's really not any one thing. it's a culmination of a lot of well made, pragmatic decisions that don't sound significant on their own. they just culminate in a development experience that feels pleasantly unique.
a few of those decisions seem radical, and I often disagreed with them.. but quite reliably, as I learned more about the decision making, and got deeper into the language, I found myself agreeing with them after all. I had many moments of enlightenment as I dug deeper.
so anyways, if you're curious, give it an honest chance. I think it's a language and community that rewards curiosity. if you find it fits for you, awesome! luckily, if it doesn't, there's plenty of options these days (I still would like to spend some quality time with Odin)
+1 for Odin. I wrote a little game in it last year and found it delightful.
I prefer Odin to Zig after trying both... but it seems Odin's performance is a bit lower than Zig, C and Rust?! Have you noticed any performance issues or it's not something to worry about?
No, I write Odin for production and there is no performance difference to speak of coming from the way the compiler or language works. If you have one it's likely because of an older/different LLVM version being used, but AFAIK Odin stays as up-to-date as you can without tearing your hair out (and that's good because GingerBill has none of that to spare).
There might be a few pathological code paths in the core libraries or whatever for certain things that aren't what they should be, but in terms of the raw language you're in the land of C as much as with any of these languages; Odin really doesn't do much on top of C, and what it's doing is identifiable and can be opted out of; if you find that a function in a hot loop is marginally slower than it ought to be, you can make it contextless, for example, and see whether that makes a difference.
We haven't found (in a product where performance is explicitly a feature, also containing a custom 3D engine on top of that) that the context being passed automatically in Odin is of much concern performance-wise.
Out of the languages mentioned Rust is the one I've seen in benchmarks be routinely marginally slower, but it's not by a meaningful amount.
Author is apparently unaware of alternatives like Ada, Object Pascal and Modula-2, where most of those "innovations" were already available.
It is kind of interesting that packaging the same ideas with a C like syntax suddenly makes them "cool", 40 years later.
I don't think Zig--which certainly is innovative in a number of ways--benefits from this sort of thing. Up front is a claim that it's "totally new way to write programs", but zero support is offered, and almost nothing else "meta" said about the language, other than a couple of sentences in the conclusion that are likewise inaccurate hype. I've programmed in many languages including Zig and it definitely is not a new way of programming. It imposes disciplines that are different from those of other languages, but the same is true of other languages.
The final paragraph says "This is all quite surprising" -- why so? "and let one think that many advantages previously found only in interpreted languages are gradually migrating to compiled languages in order to offer more performance" -- sure, but Zig is hardly the first ... D and Nim both have interpreters built into the compiler that allow extensive comptime computation--both of those languages have far more metalanguage facilities than Zig, in addition to many other language features that Zig lacks--which is not necessarily a fault, as it aims for a certain kind of simplicity and close-to-the-metal performance ... although both D and Nim are highly performant (both have optional garbage collection, though Nim is more advanced in making GC-free programming approachable). One thing you can say about Zig though--it compiles like a bat out of hell.
P.S. Another thing about Zig worth mentioning that came up in some comments is cross compilation. I don't think people understand how Zig is different and what an engineering feat it is (Andrew has a writeup somewhere of how it's done--it's shocking):
If you install Zig, you can now generate executables for virtually any target with just a command line argument specifying the target, regardless of what machine you installed it on. Nothing else does that--cross compilation generally requires recompiling the compiler and library to target a different architecture. Zig comes with precompiled libraries for a huge number of targets.
I noticed a comment where someone said they love Zig but they've never programmed in it--they use it to cross-compile their Nim programs. (The Nim compiler has a C code backend, and Zig has a C compiler built in, so Nim inherits instant arbitrary cross-compilation to any target via Zig).
It’s incredibly silly but I dislike Zig's identifier policy. Mixing snake case and camel case for functions is cursed.
That said, amazing effort, progress and results from the ecosystem.
Bursting on the scene with amazing compilation dx, good allocator (and now io) hygiene/explicitness, and a great build system (though somewhat difficult to ramp on). I’m pretty committed to Rust but I am basically permanently zig curious at this point.
[EDIT] “hate” > “dislike”. Hate is a strong word and surely I just need to spend some time writing zig and I’d get used to it.
Me too, so I don't follow a convention for private functions. Good thing is you barely interact with ones defined in dependencies.
Prefixes and the different naming conventions of C-imported libraries are no less annoying.
I love systems programming languages and have worked on the Ada language for a long time. I find Zig to be incredibly underwhelming. Absolutely nothing about it is new or novel, the closest being comptime, which is not actually new.
Also highly subjective but the syntax hurts my eyes.
So I’m kind of interested in an answer to the question this article fails to answer. Why do you guys find Zig so cool?
It’s hard to do something that is truly novel these days. Though I’d argue that Zig's upcoming approach to async IO is indeed novel on its own. I haven’t seen anything like it in an imperative language.
What’s important is the integration of various ideas, and the nuances of their implementation. Walter Bright brings up D comptime in every Zig post. I’ve used D. Yet I find Zig's comptime to be more useful and innovative in its implementation details. It’s conceptually simpler yet - to me - better.
You mention Ada. I’ve only dabbled with it, so correct me if I’m wrong, but it doesn’t have anything as powerful as Zig's comptime? I think people get excited about not just the ideas themselves, but the combination and integration of the ideas.
In the end I think it’s also subjective. A lot of people like the syntax and combination of features that Zig provides. I can’t point to one singular thing that makes me excited about Zig
As someone who still thinks one should write C (so as a completely uncool person), what I like about Zig is that it is a no-nonsense language that just makes everything work as it is supposed to, without unnecessary complications. D is similar, except that it fell into the trap of adding too many features.
So, no, I do not really see anything fundamentally new either. But to me this is the appealing part. Syntax is ok (at least compared to Rust or C++).
Having said this, I am still skeptical about comptime for various reasons.
It gets hyped by a few SV influencers.
One of the things I like about Zig is that it pretty explicitly recognizes all the weird edge cases that exist in low-level systems code. A rather large cross-section of languages kind of pretend these cases don’t exist because addressing it would violate the aesthetic they are trying to achieve with the language. Nonetheless, these are real cases because low-level hardware and system behavior doesn’t care about aesthetics as might be expressed in a programming language.
Even C++ didn’t fully repent from this sin until around C++17. I appreciate the non-begrudging acceptance of this reality in Zig.
Zig makes the standard library accessible. Just by clicking "go to definition", you run into all the weird cases.
For example, apparently the plan9 OS gets special page_allocator handling: https://ziglang.org/documentation/master/std/#std.heap.page_...
Interesting, but really in need of some examples.
I would highlight `std::launder` as an example. It was added in C++17. Famously, most people have no idea what it is used for or why it exists. For low-level systems it was a godsend because there wasn’t an official way to express the intent, though compilers left backdoors open because some things require it.
It generates no code, it is a compiler barrier related to constant folding and lifetime analysis that is particularly useful when operating on objects in DMA memory. As far as a compiler is concerned DMA doesn’t exist, it is a Deus Ex Machina. This is an annotation to the compiler that everything it thinks it understands about the contents and lifetime of a bit of memory is now voided and it has to start over. This case is endemic in high-end database engines.
It should be noted that `std::launder` only works for different instances of the same type. If you want to dynamically re-type memory there is a different set of APIs for informing the compiler that DMA dropped a completely different type in the same memory address.
All of this is compiled down to nothing. It annotates for the compiler things it can’t understand just by inspecting the code.
I don't think that's quite right. For DMA you would normally use an empty asm block, which is what's typically referred to as a "compiler barrier" and does tell the compiler to discard everything it knows about the contents of a some memory. But std::launder doesn't have the same effect. It only affects type-based optimizations, mainly aliasing, plus the assumption that an object's const fields and vtable can't change.
For example, in this test case:
https://gcc.godbolt.org/z/j3Ko7rf7z
GCC generates a store followed by a load from the same location, because of the asm block (compiler barrier) in between. But if you change `if (1)` to `if (0)`, making it use `std::launder` instead of an asm block, GCC doesn't generate a load. GCC still assumes that the value read back from the pointer must be 42, despite the use of `std::launder`.
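For readers who don't want to click through, here is a rough reconstruction of the shape of that test, not the linked code verbatim (the USE_BARRIER toggle is my own):

    #include <new>

    #ifndef USE_BARRIER
    #define USE_BARRIER 1 // build with -DUSE_BARRIER=0 to try std::launder instead
    #endif

    int probe(int *p) {
        *p = 42;
    #if USE_BARRIER
        // Empty asm with a memory clobber: a real compiler barrier, so GCC
        // must reload *p instead of reusing the 42.
        asm volatile("" ::: "memory");
        return *p;
    #else
        // std::launder alone: GCC may still constant-fold the load to 42.
        return *std::launder(p);
    #endif
    }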
This doesn't seem quite right. The asm block case is equivalent to adding a volatile qualifier to the pointer. If you add this qualifier then `std::launder` produces the same codegen.
I think the subtle semantic distinction is that `volatile` is a current property of the type whereas `std::launder` only indicates that it was a former property not visible in the current scope. Within the scope of that trivial function in which the pointer is not volatile, the behavior of `std::launder` is what I'd expect. The practical effect is to limit value propagation of types marked `const` in that memory. Or at least this is my understanding.
DMA memory (and a type residing therein) is often only operationally volatile within narrow, controlled windows of time. The rest of the time you really don't want that volatile qualifier to follow those types around the code.
This is a good example because I'm familiar with it (I'm a C++ programmer; I haven't had occasion to use `launder`, but I read about it back then).
But what's the Zig equivalent?
One thing that I've found really useful is being able to annotate a pointer's alignment. I'm working on an interpreter, and I'm using tagged pointers (6 bits), so the data structure needs to have 128-byte alignment. I can define a function like `fn toInt(ptr: *align(128) LongString) u56` and the compiler will track and enforce the alignment.
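A rough sketch of that trick, with hypothetical names - align(128) frees the low 7 bits of every address, and the tag here uses 6 of them:

    const std = @import("std");

    // Hypothetical heap object; align(128) leaves the low 7 bits of every
    // address free for tag bits.
    const LongString = struct {
        len: usize,
        data: [*]const u8,
    };

    const Tag = enum(u6) { long_string, symbol, number };

    // Pack a 6-bit tag into the low bits of a 128-byte-aligned pointer.
    fn pack(ptr: *align(128) const LongString, tag: Tag) usize {
        return @intFromPtr(ptr) | @intFromEnum(tag);
    }

    fn tagOf(word: usize) Tag {
        return @enumFromInt(@as(u6, @truncate(word)));
    }

    fn ptrOf(word: usize) *align(128) const LongString {
        return @ptrFromInt(word & ~@as(usize, 127));
    }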
You might also find some of the builtin functions interesting as well[1], they have a lot of really useful functions that in other languages are only accessible via the blessed stdlib, such as @addrSpaceCast, @atomicLoad, @branchHint, @fieldParentPtr, @frameAddress, @prefetch, @returnAddress(), and more.
[1] https://ziglang.org/documentation/master/#Builtin-Functions
Why are people so obsessed with Zig when Odin has been stable, though not yet with an official spec, for such a long time and used in real production for years? Is it just syntax preference or does Zig provide something amazing that I am missing? Not that I use any of them, I am not interested in manual memory management and I stick to Go. But I'm curious.
Zig has a lot of manpower behind it in comparison to Odin and this is one of the most important things for people, they see a proverbial crowd and that builds a lot more interest.
With that said, here are a couple of things you have in Zig that you don't get in Odin:
- Cross-compilation & cross-linking (more or less works): Odin doesn't do cross-linking.
- Comptime; you can actually use it to effectively get Functors from ML, which means passing in interfaces to modules and getting compile-time generated modules back (structs in this case)
- Error set inference; Zig can figure out the complete set of errors a code path can return and make sure you handle them, or bubble that exact set (plus your own errors) up. This comes with the caveat that Zig has no capability to attach actual data to the errors, so you have to side-channel that info if you have it. Odin doesn't do error inference apart from the type checking side of it, but does allow using tagged unions as errors, which is great. They still interact exactly as they ought to with the zero-value-as-no-error machinery.
I didn't use comptime much when I used Zig, and I like tagged unions as errors much more than I value being able to cross-link, so I decided that Odin was better for me. Defaulting to zero-values and the zero-value being blessed in terms of language features didn't feel right to me before I started using it but now I can't really imagine going back to not assuming the zero-value being there.
Thanks for the info. I'm curious to see what people will do when Jai is finally released next year. So far, Rust has been gaining a lot of traction, although with a lot of controversies attached to it. Zig seems to be doing well, but the lack of progress towards v1.0 after all these years is quite concerning, making it look more and more like a toy project rather than something serious. Odin seems to be flying under the radar of most people a bit too much. Jai will have John's name behind it, and I am hearing a lot of praise from insiders (people in the beta program). As I said, I have no use for such languages, but if I do in the future, I'd like to have a clear choice rather than a myriad of languages in various stages of development, all trying to do the same thing.
If Jai is ever actually released to a meaningful number of people, I think we'll see just how little Blow's name means to people in practice. There is an artificial mystery around Jai right now, and when the lid comes off the pot I think a lot of that is going to dissipate very fast.
With that said, I'll try it out. I'm not really impressed by what I've seen so far, though; it's very middle-of-the-pack with some real nonsense ideas. The possibility of easily creating your own checks with the compile-time machinery is potentially interesting, but would probably turn into a nothingburger for us.
I think that's where most of this is at: after so many years of "waiting" (I think most people stopped actually waiting after a few years of mostly talking and very little actual productive doing) we'll end up with a very meh language that was touted as super special... and a painfully simple Sokoban game that people are going to pretend is somehow super complex and hard to make.
Are you implying that Zig hasn’t been used in production? What about Tigerbeetle, Bun and Ghostty? I’m using Ghostty as my terminal right now.
I feel like Zig is aiming a lot higher. So that’s why it’s taking longer and also why people are more obsessed with it. The work on doing their own backend and incremental linker is impressive and interesting. So is their attempt at getting IO and async right.
Rust's solution to this is quite good: it's `0..9`, and if you want to include 9 it's `0..=9`. It looks a bit funny, but knowing the one with an `=` sign in it exists removes any doubt.
Adding additional syntax to a language for this case seems bonkers to me. People can just write 0..10.
If you need `0..=n`, you can't write `0..(n+1)` because that addition might overflow.
I'm actually curious now how this is stored on `Range` in Rust. I've certainly used ..= for exactly the reason you say, but as far as I'm aware `.end` on the range is the exclusive upper bound in all cases. What happens to `.end` in the overflowing case?
Edit: it doesn't use Range for ..=, but rather RangeInclusive, which works fine.
It's more meant for usage with variables: `start..=end` lets you include `end` without writing `end + 1`, which might overflow.
The better solution to forgetting whether an interval is closed or half-open is to always use only half-open intervals, without any exceptions.
In most cases half-open intervals result in the simplest program, so I agree with the choice of Zig, which is inherited from other languages well-designed from this point of view, e.g. Icon.
I find half-open intervals more intuitive than either closed intervals or open intervals, and much less prone to errors, for various reasons, e.g. the size of a half-open interval is equal to the difference between its limits, unlike for closed intervals or open intervals. Also when accessing the points in the interval backwards or circularly, there are simplifications in comparison with closed intervals.
> always use only half-open intervals
That means you have to waste bytes for the index when you need to include ..._MAX.
By "..._MAX" I assume that you mean the maximum value of a given integer type.
In a language where half-open intervals are supported consistently in all the places, this would be solved trivially, e.g. for a signed byte the _MIN and the _MAX values would be defined as -128 and +128, more intuitively than when using closed intervals, where you must remember to subtract 1 from the negated minimum value.
Even the C language has some support for half-open intervals, because the index pointing after the last element of an array is a valid index value, not an out-of-range value (though obviously, attempting to access the array through that index value would be trapped as an out-of-range access, if that is enabled).
Applied consistently, the same method would ensure that the value immediately above the last representable value of an integer type is valid in ranges of that type, even if it would be invalid in an expression as an operand of that type.
I completely agree. One of Zig's big competitors, Odin, has a more explicit syntax for this, where `0..<5` is a half-open interval and `0...5` is closed.
I think that comes from Ruby, right? I know Groovy is inspired by Ruby and has exactly the same syntax.
EDIT: oh, just noticed it's 3 dots in the closed case... in Groovy it's just 2.
I even forget which word means what, "open", "closed"
> an open interval [0..9)
See Dijkstra for why this is the right way to represent ranges: https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD831...
The article doesn't answer the question; it's all just "the basics of Zig" (there is nothing cool about manually editing environment variables on Windows in 8 labeled steps, with 5 preliminary steps missing)
and the actual cool stuff is missing:
> with its concept of compile time execution, unfortunately not stressed enough in this article.
indeed
Zig being able to (cross)compile C and C++ feels very similar to how uv functions as a drop-in replacement for pip/pip-tools. Seems like a fantastic way to gain traction in already established projects.
I love Zig. Never tried to write it though =).
I just use it as a cross-compiler for my Nim[0] programs.
[0] - https://nim-lang.org
Strangler fig
This is referring to the Strangler Fig design pattern, which is relevant: https://learn.microsoft.com/en-us/azure/architecture/pattern...
Some days ago I decided to look at Zig a bit more in detail. So I skipped the usual marketing and I checked why it could be an alternative to C or Rust. Could it be?
Debug allocators seem like a nice idea; however, they already exist in some form for C, so I wonder: why would you pick this language for your next low-level program? They provide runtime checks, so you need thorough testing before you can spot a use-after-free or the like. It's very similar to the existing situation with C/C++ and the sanitizers, although they work a bit differently.
So the question I have for hardcore low level programmers: why don't they invest more on the memory allocators like hardened_malloc[0] instead of starting a new programming language? It would probably be less expensive in terms of time and would help fix existing software.
[0]: https://github.com/GrapheneOS/hardened_malloc
> So the question I have for hardcore low level programmers: why don't they invest more on the memory allocators
A partial answer is that some low-level programmers avoid memory allocation and threads like the plague. In some cases they are not even an option (small embedded programming is nearly as low-level as you can get before going hardcore for real with assembly programming). Even when they are, the keywords are efficiency, reliability, predictability, and simplicity: you can statically allocate everything in advance because the product typically ships with max specs written on the box (e.g. the max number of entries in a phone book, to take a generic dumb example), and you have to meet those requirements even if the customer uses all of the capabilities to the max. No memory overbooking allowed, which is basically what dynamic allocation is, in a sense.
> instead of starting a new programming language
If I were to start a new low-low level programming language, I would basically just fix C's weak typing problem, fix the UB problems that only come from issues with long-gone processors (like C++20 finally did with sign encoding), "backport" some C++ features (templates? constexpr?), add a pinch of syntactic sugar, and fix union types to have proper sum types. But probably I've just described D and apparently a significant chunk of C23.
Indeed, and if someone wants to help work on C, this is very much possible both on the compiler side or on the standards side.
People are also doing that, see Fil-C. It's just different people doing different things because, well, they have freedom to do what they want.
Slices and UB-explicitness are quite nice compared to C. It frees your head to think about the really important things.
> The @breakpoint built-in
Inserting the literal one-byte instruction (on x86), INT 3, is the least a compiler should be able to do.
> I can’t think of any other language in my 45 years long career that surprised more than Zig.
I can say the same (although my career spans only 30 years), or, more accurately, that it's one of the few languages that surprised me most.
Coming to it from a language design perspective, what surprised me is just how far partial evaluation can be taken. While strictly weaker than AST macros in expressive power (macros are "referentially opaque" and therefore more powerful than a referentially transparent partial evaluation - e.g. partial evaluation has no access to an argument's name), it turns out that it's powerful enough to replace not only most "reasonable" uses of macros, but also generics and interfaces. What gives Zig's partial evaluation (comptime) this power is its access to reflection.
Even when combined with reflection, partial evaluation is more pleasurable to work with than macros. In fact, to understand the program's semantics, partial evaluation can be ignored altogether (as it doesn't affect the meaning of computations). I.e. the semantics of a Zig program are the same as if it were interpreted by some language Zig' that is able to run all of Zig's partial-evaluation code (comptime) at runtime rather than at compile time.
Since it also removes the need for other specialised features (generics, interfaces) - even at the cost of an aesthetic that may not appeal to fans of those specialised features - it ends up creating a very expressive, yet surprisingly simple and easy-to-understand language (Lisps are also simple and expressive, but the use of macros makes understanding a Lisp program less easy).
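For readers unfamiliar with it, a tiny sketch of that point: a generic is just an ordinary function from types to types, evaluated at compile time (my example, not from the article):

  const std = @import("std");

  // No dedicated generics syntax: a comptime function returning a type.
  fn Pair(comptime T: type) type {
      return struct {
          first: T,
          second: T,

          pub fn swapped(self: @This()) @This() {
              return .{ .first = self.second, .second = self.first };
          }
      };
  }

  test "Pair is an ordinary value of type `type`" {
      const p = Pair(u32){ .first = 1, .second = 2 };
      try std.testing.expectEqual(@as(u32, 2), p.swapped().first);
  }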
Being simple and easy to understand makes code reviews easier, which may have a positive impact on correctness. The simplicity can also reduce compilation time, which may also have a positive impact on correctness.
Zig's insistence on explicitness - no overloading, no hidden control flow - which also assists reviews, may not be appropriate for a high-level language, but it's a great fit for an unabashedly low-level language, where being able to see every operation as explicit code "on the page" is important. While its designer may or may not admit this, I think Zig abandons C++'s belief that programs of all sizes and kinds will be written in the same language (hence its "zero-cost abstractions", made to give the illusion of a high-level language without its actual high-level abstraction). Developers writing low-level code lose the explicitness they need for review, while those writing high-level programs don't actually gain the level of abstraction required for a smooth program evolution that they need. That belief may have been reasonable in the eighties, but I think it has since been convincingly disproved.
Some Zig decisions surprised me in a way that made me go more "huh" than "wow", such as it having little encapsulation to speak of. In a high-level language I wouldn't have that (after years of experience with Java's wide ecosystem of libraries, we learned that we need even more and stronger encapsulation than we originally had to keep compatibility while evolving code). But perhaps this is the right choice for a low-level language where programs are expected to be smaller and with fewer dependencies (certainly shallower dependency graphs). I'm curious to see how this pans out.
Zig's terrific support for arenas also makes one of the most powerful low-level memory management techniques (that, like a tracing garbage collector, gives the developer a knob to trade off RAM usage for CPU) very accessible.
I have no idea or prediction on whether Zig will become popular, but it's certainly fascinating. And, being so remarkably easy to learn (especially if you're familiar with low-level programming), it costs little effort to give it a try.
Well put. The majority of language development for the last 20 years has proceeded by adding more features into languages, as they all borrow keywords and execution semantics from each other. It's like a neighborhood version of corporate bureaucracies, where each looks across the street, and decides "they've got a department we don't have, we better add one of those".
I like languages that dare to try to do more with less. Zig's comptime, especially the way it supplants generics, is pretty darn awesome.
I was having a similar feeling with Elixir the other day, when I realized that I could build every single standard IPC mechanism that you might find in something like python.threading (Queue, Mutex, RLock, Condition, Barrier, etc.) with the Erlang/BEAM process mailbox.
This is the real answer (amongst other goodness) - this one is well executed and differentiated
Every language at scale needs a preprocessor (look at the "use server" and "use gpu" silliness happening in TS); why is it not the same as the language you use?
Languages such as D and Nim (both greatly underappreciated) offer full-language compile-time interpretation.
Great comment! I agree about comptime, as a Rust programmer I consider it one of the areas where Zig is clearly better than Rust with its two macro systems and the declarative generics language. It's probably the biggest "killer feature" of the language.
> as a Rust programmer I consider it one of the areas where Zig is clearly better than Rust with its two macro systems and the declarative generics language
IMHO "clearly better" might be a matter of perspective; my impression is that this is one of those things where the different approaches buy you different tradeoffs. For example, by my understanding Rust's generics allows generic functions to be completely typechecked in isolation at the definition site, whereas Zig's comptime is more like C++ templates in that type checking can only be completed upon instantiation. I believe the capabilities of Rust's macros aren't quite the same as those for Zig's comptime - Rust's macros operate on syntax, so they can pull off transformations (e.g., #[derive], completely different syntax, etc.) that Zig's comptime can't (though that's not to say that Zig doesn't have its own solutions).
Of course, different people can and will disagree on which tradeoff is more worth it. There's certainly appeal on both sides here.
I agree.
I look forward to a future high-level language that uses something like comptime for metaprogramming/interfaces/etc, is strongly typed, but lets you write scripts as easily as python or javascript.
Try out Nim: it has powerful comptime/metaprogramming, is statically typed, has automatic memory management, and is as easy to program as Python or JavaScript while still allowing low-level stuff.
For me it'd be hard to go back to languages that don't have all that. Only Swift comes close.
D comes close ... it too has a full-language comptime interpreter and other metaprogramming features (though not as rich as Nim's), statically typed, optional garbage collection, and you can write
  #!/usr/bin/env rdmd
  [D code]
and run it as if it were an executable. (The compilation is cached so it runs just as fast on subsequent runs.)
Thing is, having a good JIT gives you the performance of partial evaluation pretty much automatically (at the cost of less predictability), as compilation occurs at runtime, so the distinction between compile-time and runtime largely disappears. E.g., in Java, a reflective call will eventually be compiled by the JIT into a direct call; virtual dispatch will also be compiled into direct dispatch or even inlined (when appropriate) etc..
D and Nim both offer that. D has a tool, rdmd, that compiles (with caching) and runs a script written in D, so you write
  #!/usr/bin/env rdmd
  D code ...
and run it as if it were an executable.
If you want to write a code example on HN you can just indent it by 2 spaces and it'll work like you'd expect. For example:
  some code, indented by two spaces
Thanks. I didn't catch that it didn't display correctly until it was too late to edit it.
Perhaps Mojo might be your cup of tea?
This review should be much higher as TFA provides little substance
There's at least one thing where Zig beats Rust: the Zig compiler for Windows can be downloaded, unzipped, and used without admin rights. Rust needs MSVC, which cannot be installed without admin rights. It is said that Rust on Windows can use Cygwin, but I could not make it work, even with AI help.
Cygwin is a POSIX-emulating library intended for porting POSIX-only programs to Windows. That is: when compiling for Cygwin, you'd use the Cygwin POSIX APIs instead of the Windows APIs. So anything compiled with Cygwin won't be a normal Windows program.
There's no reason to use cygwin with Rust, since Rust has native Windows support. The only reason to use x86_64-pc-cygwin is if you would need your program to use a C library that is not available for Windows, but is available for cygwin.
If you don't want to/can't use the MSVC linker, the usual alternative is Rust's `x86_64-pc-windows-gnu` toolchain.
Have you tried the GNU toolchain? IIRC rustup provides the option to use it instead of the MSVC toolchain during the initial installation.
You should check out cargo-zigbuild which makes use of zig for cross compiling rust projects. https://github.com/rust-cross/cargo-zigbuild
> Probably the most incredible virtue of Zig compiler is its ability to compile C code. This associated with the ability to cross-compile code to be run in another architecture, different than the machine where it was originally compiled, is already something quite different and unique.
Isn't cross compilation very, very ordinary? Inline C is cool, like C has inline ASM (for the target arch). But cross-compiling? If you built a phone app on your computer you did that as a matter of course, and there are many other common use cases.
If you install Zig, you can now generate executables for virtually any target with just a CLI argument specifying the target, regardless of what machine you installed it on. Nothing else does that--cross compilation generally requires compiling the compiler to target a different architecture.
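For example, on any host OS, with no extra toolchain (these target triples are from Zig's supported list):

  zig build-exe hello.zig -target aarch64-linux-musl
  zig cc -o hello.exe hello.c -target x86_64-windows-gnu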
> Isn't cross compilation very, very ordinary?
Working cross compilation out of the box any-to-any still isn't.
Yes, very rare and there is a strong cartel of companies ensuring it doesn't happen in more mainstream langs through multiple avenues to protect their interests!
From helicoptering folks onto steering committee and indoctrination of young CS majors.
If I had the ability to downvote a comment yet, I'd downvote you. If you're going to spout conspiracy-theory-sounding stuff, at least provide some evidence for your claims!
It doesn't sound like a conspiracy theory; you just have an incredibly poorly calibrated sense of judgement as to the tone of a statement.
Not uncommon in this space though, especially as you get closer to the metal (close as cross-compilation is relative to something like React frontends, at least)
This comment deserves a [citation needed] visible from geosynchronous orbit.
I like the idea of the `defer` keyword: you can have automatic cleanup at the end of the scope, but you have to make it obvious you are doing so; no hidden execution of anything (unlike C++ destructors).
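e.g. (a sketch; the file-reading example is mine, and the std file APIs have been shifting between Zig versions):

  const std = @import("std");

  fn readSome(allocator: std.mem.Allocator, path: []const u8) ![]u8 {
      const file = try std.fs.cwd().openFile(path, .{});
      defer file.close(); // runs on every exit path, success or error

      const buf = try allocator.alloc(u8, 256);
      errdefer allocator.free(buf); // runs only if an error escapes below

      _ = try file.readAll(buf);
      return buf; // ownership moves to the caller; errdefer doesn't fire
  }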
Adopted from Go; first appeared in D, invented by one of its major developers, Andrei Alexandrescu.
P.S. In D it's `scope(exit)` = defer, `scope(failure)` = Zig's errdefer, and `scope(success)` -- which no one else has and which I have made good use of. e.g., I have a mixin that traces entry and exit from a function, the latter with scope(success). If I use scope(exit) instead then when an exception is thrown all the leave messages are printed and then the stack trace, rather than seeing the stack trace at the point of failure (this baffled me when it first happened).
I vaguely remember reading somewhere recently that Andrei left the D community / foundation. Do you know if that is true?
He's moved on: https://research.nvidia.com/person/andrei-alexandrescu
But you can forget to write it, unlike C++ destructors.
The GNU C and C++ dialect also has attribute cleanup. https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attribute...
I would like a language to support both defer and C++ style destructors / Rust Drop. There are good use-cases for having both. For things like a mutex or straight-forward resource cleanup - having a bunch of brain-dead defer statements adds little value and only bloats unnecessary line count. Let the resource type handle its own release/cleanup at scope close. Code is made sweet, succinct and safe.
In Rust, there's a drop guard pattern to do this, which leverages the lazy execution of closures; check out the scopeguard crate. It should be easy to do that in C++ too, I think.
GNU C++?
Zig is not cool. It's a mediocre new language, missing key features needed for industrial development, like destructors or overall memory safety. But for some reason it's overhyped.
If you think destructors/`Drop` traits or the like are good then Zig was never for you. It has nothing to do with "industrial development", neither does memory safety. The irony is that memory safety as a concept definitely is overhyped.
Destructors aren't just good. They are one of the most important innovations in programming, since they reduce boilerplate and prevent many bugs. Developing a language without them means introducing more bugs which could be avoided.
It's ""simple"", so low-information, high-blogspam software developers have something to talk about instead of programming.
A super cool feature is labeled switches: https://codeberg.org/ziglings/exercises/src/branch/main/exer...
They allow for super ergonomic coding of state machines, which is a lot of fun.
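For those who haven't seen the feature, roughly (a sketch; needs a recent Zig, 0.14+, where `continue :label` on a switch re-dispatches with a new operand):

  const State = enum { start, digits, done };

  fn countLeadingDigits(s: []const u8) usize {
      var i: usize = 0;
      fsm: switch (State.start) {
          .start => {
              if (i < s.len and s[i] >= '0' and s[i] <= '9')
                  continue :fsm .digits; // jump to the next state
              continue :fsm .done;
          },
          .digits => {
              i += 1;
              continue :fsm .start;
          },
          .done => {},
      }
      return i;
  }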
Zig structs are "modules" in themselves: apart from C-like struct fields, they can have variables, structs, and functions declared and used inside of them.
In fact, files in Zig are just structs!
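i.e. something like this (a made-up two-file example):

  // counter.zig: the whole file is itself a struct type.
  count: u32 = 0,

  pub fn increment(self: *@This()) void {
      self.count += 1;
  }

  // elsewhere:
  //   const Counter = @import("counter.zig");
  //   var c = Counter{};
  //   c.increment();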
The biggest advantages of Zig for me are that everything is explicit (no hidden features like overloads or implicit conversions) and that its metaprogramming is powerful, easy to use, and easy to understand.
Unfortunately the article glosses too quickly over aspects which seem unique. For instance, I don't get what labeled breaks have to do with comp-time or why a labeled break was used in this situation over a normal function call:
>>>
Labeled breaks
Zig can do many things in compilation time. Let's initialize an array, for example. Here, a labelled break is used. The block is labelled with an `:` after its name `init`, and then a value is returned from the block with `break`.
>>>
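Presumably the example looks something like this (my reconstruction; the article's code is an image):

  // Evaluated at compile time because it initializes a container-level const.
  const squares = init: {
      var arr: [10]u32 = undefined;
      for (&arr, 0..) |*slot, i| slot.* = @intCast(i * i);
      break :init arr; // the labeled break yields the block's value
  };

The connection to comptime is only that the article happens to run such a block at compile time; the same labeled-break syntax works at runtime too, which makes the pairing confusing.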
The article hasn't even talked about how the language decides what an open curly brace introduces.
I've heard good things about Zig. I want to pick it up and experiment with it but at ~2% market share I find it hard to justify spending the time to learn and master it right now. It's usually much easier to find the time to learn a new language if there is a project (work or open source) that is also using it.
https://survey.stackoverflow.co/2025/technology
It can be useful sometimes to learn things irrespective of what the rest of the world thinks of them.
My personal experience was (back in 2019) that Zig was basically a language you could learn in a weekend and end up being reasonably productive after a week. With that in mind, you might find that you can try it out and either find something that you really like in it and continue, or simply drop it (I ended up picking Odin over Zig, for example, and have found it delightful even 1+ years into production).
The truth is that if you only ever learn what is already popular you'll end up being the professional equivalent of a gray mass with zero definition and unique value proposition.
Check out Ghostty. It's a relatively new and ambitious open source project but is rapidly gaining popularity.
To the author: code samples as images are great for syntax highlighting, but I wanted to play with the examples and... got stuck trying to copy the content.
(I also expected tesseract to do a bit better than this:
tesseract does well for me...
The trick is to preprocess the image a little bit like so:
Thank you!
Unfortunately I get the same kind of garbage around closing curly braces / closing parenthesis / dots with this magick filter... It seems to do slightly better with an extra `-resize 400%`, but still very far from as good as what you're getting (to be fair the monochrome filter is not pretty (bleeding) when inspecting the result).
I wonder what's different? ( ImageMagick-7.1.1.47-1.fc42.x86_64 and tesseract-5.5.0-5.fc42.x86_64 here, no config, langpack(s) also from the distro)
Is there a decent native GUI library for Zig yet? I don't want to use bloated toolkits like GTK and Qt.
I like the simplicity and speed of Rust's egui. Something similar for Zig would be amazing.
There is capy https://capy-ui.org/
Is dvui something like what you want to see? Although the backends are still C-based, the core part of the GUI seems to be written fully in Zig rather than being a binding to a C library.
https://github.com/david-vanderson/dvui
I would like to see the output of the:
Zig defaults to statically linking musl when targeting Linux, so the output will not be very interesting unless you target dynamic musl, or glibc, or FreeBSD/NetBSD.
I am surprised Native SIMD support is not mentioned in an article with this title. Not sure if it could be applied to the sudoku example he used.
Is the inline testing good in practice? I do like the clear proximity and scope of the code being tested but I can also imagine trying to cram in all the unit tests and mocking and logging and such.
Does the feature end up feeling unused, dominating app code with test code, or do people end up finding a happy medium?
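For reference, the feature in question (a trivial sketch):

  const std = @import("std");

  fn add(a: i32, b: i32) i32 {
      return a + b;
  }

  // Compiled and run only by `zig test`; stripped from normal builds.
  test "add" {
      try std.testing.expectEqual(@as(i32, 3), add(1, 2));
  }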
Zig doesn't seem like a bad language, but I also don't see anything to make it hands down better than Rust for systems programming. So it kind of fails my "why yet another language?" test. I don't think another language can be justified by marginal improvements.
Rust passes that test because it's categorically better than C and C++ in several ways: much better type system, safety, better modules and code reuse, etc. It's complex, but as far as I can tell most of its complexity is required to offer its level of safety guarantees in a pure systems language without a garbage collector or any kind of true dynamic typing. To make a safe systems language you need to have a very rich type system that can prove safety across a wide array of situations. Either that or you'd have to go to the other far end of the simplicity-complexity spectrum and have a language with virtually no features, which would result in very verbose code and probably a lot of boilerplate.
Zig's coolest feature to me seems like "comptime" and the lack of a weird macro side-language, which is one of Rust's anti-features that feels bolted on. Don't make me learn yet another language. Of course sophisticated macros in Rust are usually instead written in Rust itself via procedural macros, but that is more unwieldy than "comptime."
Still not enough to justify a whole new language and ecosystem though. Again: don't make me learn yet another language unless there's a big payoff.
Is it cool? It seems to be in a no-man's land between Rust and Go. Not sure what the unique use case for Zig is.
It's more of an in-between C and Rust than Go as it is a systems language with no built-in garbage collector for memory management. It has a lot of memory safety features, but it's not as memory safe as Rust. However, it avoids a lot of the complexity of Rust like implicit macro expansion, managing lifetimes, generics and complex trait system, etc. It also compiles much more compactly than Rust, in my experience.
In my mind, it's an accessible systems language. Very readable. Minimal footprint.
> managing lifetimes
If you are not using a GC language, you WILL be managing lifetimes. Rust just makes it explicit when the compiler can't prove it's safe, whereas Zig and C don't really care.
We could not have written TigerBeetle, at least not the way it is, without Zig:
https://tigerbeetle.com/blog/2025-10-25-synadia-and-tigerbee...
Read that and what part of that can’t be done in Rust in 2025?
TigerBeetle has a clear purpose and needed those brilliant optimizations. Do you think Zig is suitable as, say, a Go replacement for prod network services and such?
Aside from the fact that Zig is still a bit immature in its std library and ecosystem, I mean. Is it a suitable systems language going forward?
As far as I can tell from my outsider perspective, Rust might be used instead of C++, Zig instead of C, and Go instead of Java.
Well, it's insanely simple, insanely fast, often more performant than Rust with lower resource usage, with first-class C interop and cross-compiling out of the box. It's easily my favorite language now, with Go being a close second. Both are opinionated and have a standard formatter that makes Zig code instantly readable when you see it, similar to Go. Rust was once interesting, but it's firmly in macro-hell territory now, just like Swift, with concealed execution paths aplenty, and neither cross-compiles out of the box.
>often more performant than Rust with lower resource usage
[citation needed]
If we are to trust this page [0] Rust beats Zig on most benchmarks. In the Techempower benchmarks [1] Rust submissions dominate the TOP, while Zig is... quite far.
Several posts which I've seen in the past about Zig beating Rust by 3x or so all turned out to be based on low-quality Rust code with performance pitfalls, like measuring the performance of writing to stdout (which Rust locks by default and Zig does not) or iterating over ..= ranges, which are known to be problematic from a performance perspective.
[0]: https://programming-language-benchmarks.vercel.app/rust-vs-z...
[1]: https://www.techempower.com/benchmarks/
I would say in most submission-based benchmarks among languages that should perform similar, this mostly reflects the size and enthusiasm of the community.
not being a direct competitor to either of these already existing languages is exactly why it is interesting!
I just discovered that you can add build-time arguments that get baked in. It's soo awesome!
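If you haven't seen it, it's driven from build.zig; a sketch from memory (the option names here are hypothetical, and the exact API has moved around between Zig versions):

  // build.zig (inside pub fn build):
  const options = b.addOptions();
  options.addOption(u32, "max_users", 1000);
  exe.root_module.addOptions("config", options);

  // main.zig: the baked-in values arrive as a comptime-known module.
  const config = @import("config");
  // config.max_users == 1000, usable even in comptime contexts.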
For a language that’s so low level and performance focused, I’m surprised that it has those extra io and allocator arguments to functions. Isn’t that creating code bloat and runtime overhead?
Given that Zig has functions which can return functions, maybe you could capture the top level io and allocator and return a struct with a bunch of functions that now have the top scope io and allocator visible.
Don't know. That's how people usually get rid of repeated arguments (or use an OOP constructor).
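Something like this, wiring the allocator in once (a hypothetical sketch; `App` is made up):

  const std = @import("std");

  const App = struct {
      allocator: std.mem.Allocator,

      fn dup(self: App, s: []const u8) ![]u8 {
          return self.allocator.dupe(u8, s);
      }
  };

  pub fn main() !void {
      const app = App{ .allocator = std.heap.page_allocator };
      const copy = try app.dup("hello");
      defer app.allocator.free(copy);
  }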
the answer I've seen when it has been brought up before is that (for allocators) there is not a practical impact on performance -- allocating takes way more time than the virtual dispatch does, so it ends up being negligible. for code bloat, I'm not sure what you mean exactly; the allocator interface is implemented via a VTable, and the impact on binary size is pretty minimal. you're also not really creating more than a couple of allocators in an application (typically a general purpose allocator, and maybe an arena allocator that wraps it in specific scenarios).
for IO, which is new and I have not actually used yet, here are some relevant paragraphs:
https://kristoff.it/blog/zig-new-async-io/
https://github.com/ziglang/zig/issues/23367
He's talking about passing the pointers to the allocators and Io objects as parameters throughout the program, not how allocator vtables for calling the allocator's virtual functions are implemented. But context pointers are a requirement in any program. Consider that a context pointer (`this`) is passed to every single method call ... it's no more "code bloat" than having to save and restore registers on every call.
Every class method in other languages receives a hidden argument. Odin passes a hidden context argument that contains the allocator. The alternative is global variables--which you can also use in Zig if you're so inclined. The extra arguments aren't something the Zig language imposes, it's a convention.
Regarding runtime overhead, I'd assume you would still need an io implementation, it is just showing it to you explicitly instead of it being hidden behind the std lib.
For simple projects where you don't want to pass it around in function parameters, you can create a global object with one implementation and use it from everywhere.
You still have to pass arguments to library functions that need to allocate or do I/O ... but the alternative is worse. This is really a bogus issue ... no one is crying over having to pass a `this` pointer to every single call of a method in other languages. Context pointers are a requirement in any sizeable or multi-threaded program, and Zig gives the user full control over what the context object looks like.
Yeah, the thing is, it's usually better to have the allocator in particular defined as a parameter, so that you can use the testing allocator in your tests to detect memory leaks, double frees, etc. Then you use more optimal allocators for release mode.
I haven't looked too deeply, but I haven't noticed any performance impact. Inlining probably helps too.
io and allocator objects each only contain 4 pointers or so. They are very fast to wire up and don't create much overhead at all.
A lot of comments here kind of miss the point, but that's to be expected because you can only really get it when you have the experience. Like hearing a description of a painting will not give you the same emotion as looking at it yourself.
Zig has completely changed the way I program (even outside of it). A lot of the goals and heuristics I used to have while writing code have completely changed. It's like seeing programming itself in a new way.
Examples?
> I can’t think of any other language in my 45 years long career that surprised more than Zig. I can easily say that Zig is not only a new programming language, but it’s a totally new way to write programs, in my opinion. To say it’s merely a language to replace C or C++, it’s a huge understatement.
I don't understand how the things presented in this article are surprising. Zig has several nice features shared by many modern programming languages?
> One may wonder how the compiler discovers the variable type. The type in this case is *inferred* by the initialization.
That the author feels the need to emphasize this means either that they haven't paid attention to modern languages for a very long time, or this article is for people who haven't paid attention to modern languages for a very long time.
Type inference left academia and proliferated into mainstream languages so many years ago that I almost forgot it's a feature worth mentioning.
> One is Zig’s robustness. In the case of the shift operation no wrong behavior is allowed and the situation is caught at execution time, as has been shown.
Panicking at runtime is better than just silently overflowing, but I don't know if it's the best example to show the 'robustness' of a language...
> Type inference left academia and proliferated into mainstream languages so many years ago that I almost forgot it's a feature worth mentioning.
I'm not even sure I'd call this type inference (other people definitely do call it type inference) given that it's only working in one direction. Even Java (var) and C23 (auto), the two languages the author calls out, have that. It's much less convenient than something like Hindley-Milner.
And it's not caught in ReleaseFast builds ... which is not at all unique to Zig (although Zig does do many innovative things to catch errors in debug builds).
> Type inference left academia and proliferated into mainstream languages so many years ago that I almost forgot it's a feature worth mentioning.
It’s not common in lower level languages without garbage collectors or languages focused on compilation speed.
The only popular language I can think of is C (prior to C23). If you want to include Fortran and Ada, that would be three, but these are all very old languages. All modern system languages have type deduction for variable declarations.
That's true if the only lower level languages one considers are C and assembler. Virtually every other language has moved way beyond that.
C++ added auto 14 years ago. Swift had it since day 1 back in 2014 if I remember right. What else is there?
C, Ada, Fortran, Pascal.
Compilation speed — OCaml, Go, D, C#, Java
“Low-level” languages — Rust, C++, D
I meant for "focused on compilation speed" to apply only to lower-level languages. And when I say lower-level, I don't really include D, because it has a garbage collector (I know it's optional, but much of the standard library uses it, I believe).
That a language has a garbage collector is completely orthogonal to whether it has type inference ... what the heck does it matter what "much of the standard library uses" to this issue? It's pure sophism. Even C now has type inference. The plain fact is that the claim is wrong.
The x axis is orthogonal to the y axis, so I can’t be interested in the area where x < 1 and y = 5?
> what the heck does it matter what "much of the standard library uses" to this issue?
It matters in that most people looking for a low level manually memory managed language won’t likely choose D, so for the purposes of “is this relatively novel among lower level, memory managed languages” D doesn’t fit my criteria.
> Even C now has type inference. The plain fact is that the claim is wrong.
Almost no one is using C23 yet.
I feel like the article didn't really hit on the big ones: comptime functions, no hidden control flow, elegant defaults, safe buffers, etc.
What Zig really does is make systems programming more accessible. Rust is great, but its guarantees of memory safety come with a learning curve that demands mastering lifetimes and generics and macros and a complex trait system. Zig is in that class of programming languages like C, C++, and Rust, and unlike Golang, C#, Java, Python, JS, etc that have built-in garbage collection.
The explicit control flow allows you as a developer to avoid some optimizations done in Rust (or common in 3rd party libraries) that can bloat binary sizes. This means there's no target too small for the language, including embedded systems. It also means it's a good choice if you want to create a system that maximizes performance by, for example, preventing heap allocations altogether.
The built-in C/C++ compiler and language features for interacting with C code easily also ensures that devs have access to a mature ecosystem despite the language being young.
My experience with Zig so far has been pleasurable. The main downside to the language has been the churn between minor versions (language is still pre-1.0 so makes perfect sense, but still). That being said, I like Zig's new approach to explicit async I/O that parallels how the language treats Allocators. It feels like the correct way to do it and allows developers again the flexibility to control how async and concurrency is handled (can choose single-threaded event loop or multi-threaded pool quite easily).
Zig's generics cause bloat just like any other language with generics--explicit flow control has nothing to do with it.
Zig is a good language. So are Rust, D, Nim, and a bunch of others. People tend to think that the ones they know about are better than all the rest because they don't know about the rest and are implicitly or explicitly comparing their language to C.
Zig's generics can potentially cause bloat, but not necessarily, because Zig's generics are explicitly controlled through comptime functions, which give the developer a ton of control over how the generic code is instantiated. They're also generally used less heavily than Rust generics.
Of course both Zig and Rust are good languages. But my experience, and I believe your experience will be too if you try to compile programs of similar complexity using standard practices of each language, is that Zig compiles much more compactly in .ReleaseSmall mode than Rust does even with optimization flags, which makes it more ideal for embedded systems, in my opinion. I learned this on my own by implementing the same library in both languages using standard default practices of each.
Of course, at the desktop runtime level, binary size is frequently irrelevant as a concern. I just feel that since Zig makes writing "magic" code more difficult while Rust encourages things like macros, it is much easier to be mindful of things that do impact binary size (and perhaps performance).
Rust has macros that allow for arbitrary compile-time generated code, just like Zig. Most Rust-compiled programs are a bit bloated because libstd is statically linked and not rebuilt from scratch with a project-specific trimmed feature set, which leads to potentially unwanted code being included for e.g. recoverable panics, backtraces, UTF-8 string handling etc. A set of new RFC's is being worked on that may at some point allow libstd to be rebuilt from scratch within Stable Rust projects, with well-defined, stable, subsetted features.
> Rust has macros that allow for arbitrary compile-time generated code, just like Zig.
This is not true. Zig, D, and Nim all have full-language interpreters built into the compiler; Rust does not. Its macros (like macros generally) manipulate source tokens, they don't do arbitrary compile-time calculations (they live in separate crates that are compiled and then run on source code, which is very different from Zig/D/Nim comptime which is intermixed with the source code and is interpreted). Zig has no macros (Andrew hates them)--you cannot "generate code" in Zig (you can in D and Nim); that's not what comptime does. Zig's comptime allows functions written in Zig to execute at compile time (the same functions can also be used to run at execution time if they only use execution-time types). The Zig trick is that comptime code can not only operate on normal data like ints and structs, but also types, which are first class comptime objects. Comptime code has access to the TypeInfo of types, both to read the attributes of types and to create types with specified attributes, which is how Zig implements generics.
> This means there's no target too small for the language, including embedded systems. It also means it's a good choice if you want to create a system that maximizes performance by, for example, preventing heap allocations altogether.
I don't think there is any significant difference here between Zig, C, and Rust for bare-metal code size. I can get the compiler to generate the same tiny machine code in any of these languages.
That's not been my experience with Rust. On average it produces binaries at least 4x bigger than the Zig I've compiled (and yes, I've set all the build optimization flags for binary size). I know it's probably theoretically possible to achieve similar results with Rust; it's just that you have to be much more careful about things like monomorphization of generics, inlining, macro expansion, implicit memory allocation, etc. that happen under the hood. Even Rust's standard library is quite hefty.
C, yes, you can compile C quite small very easily. Zig is like a simpler C, in my mind.
The Rust standard library in its default config should not be used if you care about code size (std is compiled with panic/fmt and backtrace machinery on by default). no_std has no visible deps besides memcpy/memset, and is comparable to bare metal C.
I understand this, but that is a pain that you don't get with Zig. The no_std constraint is painful to deal with as a dev even with no dependencies and also means that if you're working on a target that needs small binaries, that the crates.io ecosystem is largely unavailable to you (necessitating filtering by https://crates.io/categories/no-std and typically further testing for compilation size beyond that).
Zig on the other hand does lazy evaluation and tree shaking so you can include a few features of the std library without a big concern.
Rustc does a good job of removing unused code, especially with LTO. The trick is to make sure the std library main/panic/backtrace logic doesn't call code you don't want to pay for.
IIRC there's also a mutex somewhere in there used to workaround some threading issues in libc, which brings in a bespoke mutex implementation; I can't remember whether that mutex can be easily disabled, but I think there's a way to use the slower libc mutex implementation instead.
Also, std::fmt is notoriously bad for code size, due to all the dyn vtable shenanigans it does. Avoid using it if you can.
Regardless, the only way to fix many of the problems with std is rebuilding it with the annoying features compiled out. Cargo's build-std feature should make this easy to do in stable Rust soon (and it's available in nightly today).
This. Is Zig an interesting language? Yes sure. But “a totally new way to write programs”? No, I don’t see a single feature that is not found in any other programming languages.
Of which, perhaps, the author isn't aware? Perhaps the author has very narrow experience in programming languages.
Or it's hyperbolic.
>Perhaps the author has very narrow experience in programming languages.
I got that impression as well.
He's impressed that types are optional because they can be inferred.
That's ... hardly a novelty ...
>Only the first and third part are compulsory in Zig, which is kind of puzzling, coming from Java or C.
Funny they mention Java, which has had type inference for a few years now. Even C got a weaker version of C++'s auto in C23.
Winning at chess is more "avoid gigantic blunders" than "make brilliant moves".
Zig feels like one of the few programming languages that mostly just avoids gigantic blunders.
I have some beefs with some decisions, but none of them is an immutable failure mode that couldn't be fixed in a straightforward manner.
Zig is not cool at all, it's ugly as sin, and has zero use case other than mingling with legacy C code, and who in their right mind wants to be doing that
It's a hipster language, absolute insanity to use it when Rust exists unless you have that very specific C-related slave work to do
> it's ugly as sin
Uncalled for and subjective. Certainly plenty of people call Rust's syntax ugly. Discussing syntax and not semantics is a waste of time.
> has zero use case other than mingling with legacy C code
So it has a use case?
> who in their right mind wants to be doing that
Some people have to.
> absolute insanity to use it when Rust exists unless you have that very specific C-related slave work to do
Some people do.
What's the need for such emotionally charged language in your comment?
I have my own reasons not to use Zig at this moment. I want enforced memory safety, and I am waiting on 1.0 to see what the language finally looks like. Until stabilization I certainly won't be using it in production. But that doesn't mean the project is meritless, that experimenting with language features before then is wrong, or that making a language suitable for specific niches is a bad idea.
I don't see Zig as a replacement for tools that would have been written in Go, Java or C#, and I would rather we had less memory unsafe software out there, but it is a clear step function ahead of C.
Just like I and many others spend a lot of time trying to make Rust the best it can be, their team is doing the same.
This article made me want to beat up Zig.
Why a new lang every day?
Because people like to have fun.
That inline test syntax is pretty cool; where does it come from?
>totally new way to write programs
To me it seems like a better C but not at all unique since most concepts in Zig are already present in other languages.
Zig is cool but not unique. And that is cool, too. Originality for the sake of originality doesn't add value in programming.
Top comment: this article sucks.
Then HN proceeds to keep the article at the head of the front page for the day.
Why would I write in Zig instead of Rust? Only meaningful comments, please.
"This associated with the ability to cross-compile code to be run in another architecture, different than the machine where it is was originally compiled, is already something quite different and unique."
Perhaps I'm missing something but this is utterly routine. It even has the name used here: Cross-compiling.
If you install Zig, you can now generate executables for virtually any target with just a CLI argument specifying the target, regardless of what machine you installed it on. Nothing else does that--cross compilation generally requires compiling the compiler to target a different architecture.
Doesn't Golang support this as well, out of the box?
Apparently; I wasn't aware. But unlike Zig this doesn't work with FFI ... everything has to be Go code ... cross compilation works by compiling the library code for the target and caching it ... but if you need anything outside of that you're out of luck ... or maybe not ... I ran across this tidbit:
"When a Go project utilizes CGo to interact with C code, standard Go cross-compilation might require additional steps. This is because Go can cross-compile Go code but not C code directly, necessitating the availability of target system libraries on the development machine. Tools like Zig can be used as a C compiler (zcc) to facilitate cross-compilation for CGo-dependent projects by providing the necessary cross-compilation capabilities for the C code."
> this is utterly routine. It even has the name used here: Cross-compiling.
Zig makes cross-compilation trivial and part of the language philosophy.
Other languages either rely on external toolchains (C/C++, Rust with C deps) or are limited in target flexibility (Go).
For projects targeting multiple OS/architectures, Zig is currently the most straightforward option.
Do you need pointer arithmetic? I think that's the one feature a modern C replacement should do away with. The other being support for arithmetic with unsigned types.
Man who has only ever written C++ discovers other programming languages exist, news at 11
How on earth is it unique to compile code for different architectures? This is a solved problem since the 80s.
It basically looks like C with different syntax. I'm also not convinced the 0..9 implicit range is better for iteration; I prefer it explicit for lower-level languages.
Nothing against (or for) Zig, but the article author seems unfamiliar with other modern languages in common use... imagine if they saw Swift or Rust. Their mind would be utterly, utterly blown.
I like D better. It has had some of the "cool" features of Zig for quite some time, such as scope(exit), which is clearer.
I don't find Zig nearly as readable as my D code, but alas, I don't do systems programming.
What is Zig?
What comes before Zag?
Move Zig. For great justice!