So much goodness in this release. Struct redefinition combined with Revise.jl makes development much smoother. Package apps are also an amazing (and long awaited) feature!
I can't wait to try out trimming and see how well it actually works in its current experimental instantiation.
How's the Julia ecosystem these days? I used it for a couple of years in the early days (2013-2016ish) and things initially felt like they were going somewhere, but since then I haven't seen it make much inroads.
Any thoughts from someone more plugged in to the community today?
My company (a hedge fund) has been using Julia for our major data/numeric pipelines for 4 years. It's been great. Very easy to translate math/algorithms into code, lots of syntactical niceties, parallelism/concurrency is easy, macros for the very rare cases you need them. It's easy to get high performance and possible to get extremely high performance.
It does have some well-known issues (like slow startup/compilation time) but if you're using it for long-running data pipelines it's great.
What kind of library stack do you use? Julia has lots of interesting niche libraries for online inference, e.g. Gen.jl, which can be quite relevant for a hedge fund.
If you can't talk about library stacks, it'd be at least interesting to hear your thoughts about how you minimize memory allocation.
In my experience starting with Julia in 2025, the main thing missing from the ecosystem tends to be boring glue-type packages, like a production-grade gRPC client/server. I've heard HTTP.jl is also slow, but I haven't dug into that sufficiently myself. At least we have an excellent ProtoBuf implementation, so you can roll your own performant RPC protocol.
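To give a rough idea, you can also get surprisingly far with just the Sockets stdlib while the glue packages catch up. A quick sketch of length-prefixed framing (the port, payload, and echo handler here are made up) that ProtoBuf-encoded bytes could ride on:

using Sockets

# 4-byte big-endian length header, then the payload bytes.
send_frame(sock, payload::Vector{UInt8}) = (write(sock, hton(UInt32(length(payload)))); write(sock, payload))

function recv_frame(sock)
    len = Int(ntoh(read(sock, UInt32)))
    return read!(sock, Vector{UInt8}(undef, len))
end

# Toy echo server and client on localhost.
server = listen(ip"127.0.0.1", 9000)
@async while true
    conn = accept(server)
    @async send_frame(conn, recv_frame(conn))   # echo one frame back
end

client = connect(ip"127.0.0.1", 9000)
send_frame(client, Vector{UInt8}("ping"))
String(recv_frame(client))   # "ping"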
As for the actual numerical stuff, I tend to roll my own implementations of most algorithms to better control the relevant tradeoffs. Sometimes a particular algorithm is implemented by a Julia package but has performance issues or bugs in edge cases. For example, in my testing I wasn't able to get ImageContrastAdjustment's CLAHE to run very fast, and it throws an exception on an image of all zeros. You also can't easily call the OpenCV version, since OpenCV implements CLAHE through an object that doesn't have a Julia binding available. After not getting anywhere within the ecosystem I just wrote my own optimized CLAHE implementation in Julia, which I'm very happy with; this is truly where Julia shines. It's worth noting, however, that there are many excellent packages to build on, such as InterprocessCommunication, ResumableFunctions, StaticArrays, ThreadPinning, Makie, and more. If you don't mind filling in some gaps here and there, it's completely serviceable.
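Going back to the CLAHE example: for anyone who wants to try the packaged route first, the call I was benchmarking against looks roughly like this (the test image and parameter values are illustrative, from memory):

using TestImages, ImageContrastAdjustment

img = testimage("mandril_gray")
# CLAHE is spelled AdaptiveEqualization in this package; block counts and clip limit are illustrative.
out = adjust_histogram(img, AdaptiveEqualization(nbins = 256, rblocks = 8, cblocks = 8, clip = 0.1))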
As for the core language and runtime, we are deploying a Julia service to production next release and haven't had any stability/GC/runtime issues after a fairly extensive testing period. Replacing the Python code gave us a ~40% speedup, and improvements to numerical precision led to measurably better predictions. Development with Revise takes some getting used to, but once you're familiar with it you will miss it in other languages. All in all, the language feels like it's in a good place and is only getting better. I'd like to eventually contribute back to help with some of the ecosystem gaps that affected me.
Disclaimer: I am not plugged into the community.
The other day that old article "Why I no longer recommend Julia" got passed around. On the very same day I hit my own bug in the Julia ecosystem, in JuliaFormatter, that silently poisoned my results. I went to the GitHub issues and someone else had encountered it the same day. I'm sure they will fix it (they haven't yet; JuliaFormatter at this very moment is a subtle codebase-destroyer), but as a newcomer to the ecosystem I'm not equipped to judge which bog-standard packages can be trusted and which cannot. As an experiment I switched to R, and the language is absolute filth compared to Julia, but I haven't seen anyone complain about bugs (the opposite, in fact), and packages install fast without my needing to ship prebuilt sysimages like I do in Julia. Those are the only two good things about R, but they're really important.
I think Julia will get there once they have more time in the oven for everything to stabilize and become battle hardened, and then Julia will be a force to be reckoned with. An actually good language for analysis! Amazing!
just to be fair, the very first words in the README for JuliaFormatter are a warning that v2 is broken and users should stick to v1. so it is not a "subtle" codebase-destroyer so much as a "loud" codebase-destroyer.
That's fair, and my bug was in 2.x, but it doesn't really make me feel better. If anything, I feel worse knowing this is OffsetArrays again--the ecosystem made cross-cutting changes that it doesn't have the manpower to absorb across the board, so everything is just buggy everywhere as a result. This is now a pattern.
The codebase destruction warning was not super loud, though. Obviously I missed it despite using JuliaFormatter constantly. It doesn't get printed when you install the package nor when you use it. It's not on the docs webpage for JuliaFormatter. 2.x is still the version you get when you install JuliaFormatter without specifying a version. The disclaimer is only in the GitHub readme, and I was reading the docs. What other packages have disclaimers that I'm not seeing because I'm "only" reading the user documentation and not the GitHub developer readme?
> so everything is just buggy everywhere as a result
I don't think this is an accurate summary. the bug here is that JuliaFormatter should put a <=1.9 compatibility bound in its Project.toml if it isn't correct with JuliaSyntax.jl
OffsetArrays was different because it exposed a bunch of buggy and common code patterns that relied on (incorrect) assumptions about the array interface.
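For reference, the fix being suggested is a one-line [compat] entry in JuliaFormatter's Project.toml; something like the following, where both version bounds are illustrative rather than the actual ones:

[compat]
JuliaSyntax = "0.4 - 1.9"

And until something like that lands, users can pin themselves to the v1 series from the Pkg REPL with `add JuliaFormatter@1`.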
You're purposefully being disingenuous. The README says "If you're having issues with v2 outputs use the latest v1". That's a big "if". If it's not ready for production use, say so explicitly in the README - not "maybe use it but maybe don't use it".
Going well, regardless of the regular doom and gloom comments on HN.
https://juliahub.com/case-studies
One of those case studies is me at my former company. We ended up moving away from Julia
Because of Julia flaws, or management decisions completely unrelated to the tooling?
We were a startup and I was the "management", but it was mostly for HR reasons. The original dev who convinced me to try Julia ended up leaving, and when we pivoted to a new niche that required a rethinking of the codebase, we took the opportunity to rewrite in C# (mostly because we _needed_ C# to develop a plugin, and it would simplify things if everything was C#).
for many types of scientific computing, there's a case to be made it is the best language available. often this type of computing would be in scientific/engineering organizations and not in most software companies. this is its best niche, an important one, but not visible to people with SWE jobs making most software.
it can be used for deep learning but you probably shouldn't, currently, except as a small piece of a large problem where you want Julia for other reasons (e.g. scientific machine learning). They do keep improving this and it will probably be great eventually.
i don't know what the experience is like using it for traditional data science tasks. the plotting libraries are actually pretty nicely designed and no longer have horrible compilation delays.
people who like type systems tend to dislike Julia's type system.
they still have the problem of important packages being maintained by PhD students who graduate and disappear.
as a language it promises a lot and mostly delivers, but those compromises where it can't deliver can be really frustrating. this also produces a social dynamic of disillusioned former true believers.
> people who like type systems tend to dislike Julia's type system.
This is true. As far as I understand it, there is no type-theoretic basis for Julia's design (type theory seems to have little to say about subtyping over type lattices). Relatedly, another comment mentioned that Julia needs sum types.
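For anyone wondering what that last point means in practice, the usual stand-ins today are an abstract type with one concrete struct per case (matched by dispatch), or a small Union; a sketch:

abstract type Shape end
struct Circle <: Shape
    r::Float64
end
struct Rect <: Shape
    w::Float64
    h::Float64
end

area(s::Circle) = pi * s.r^2
area(s::Rect)   = s.w * s.h

const MaybeInt = Union{Int, Nothing}   # small Unions like this are handled well by the compiler

Neither gives you the exhaustiveness checking that a real sum type (and a pattern match) would.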
I do wonder in particular about the startup time "time-to-plot" issue. I last used Julia about 2021-ish to develop some signal processing code, and restarting the entire application could have easily taken tens of seconds. Both static precompilation and hot reloading were in early development and did not really work well at the time.
That was fixed in 1.9. It makes a huge difference now that the first run is quick.
On a 5 year old i5-8600, with Samsung PM871b SSD:

$ time julia -e "exit"
real 0m0.156s
user 0m0.096s
sys 0m0.100s

$ time julia -e "using Plots"
real 0m1.219s
user 0m0.981s
sys 0m0.408s

$ time julia -e "using Plots; display(plot(rand(10)))"
real 0m1.581s
user 0m1.160s
sys 0m0.400s

Not a super fair test since everything was already hot in the I/O cache, but it still shows how much things have improved.

On a Mac mini (i.e. fast RAM), time to display:

julia> @time @eval (using Plots; display(plot(rand(3))))

- Plots.jl: 1.4 seconds (including package loading)
- CairoMakie.jl: 4 seconds (including package loading)
My shop just moved back to Julia for digital signal processing and it’s accelerated development considerably over our old but mature internal C++ ecosystem.
Mine did the same for image processing, but coming from python/numpy/numba. We initially looked at using Rust or C++, but I'm glad we chose to stick it out with Julia despite some initial setbacks. Numerical code flows and reads so nicely in Julia. It's also awesome seeing the core language continuously improve so much.
Can you elaborate on what libraries, platform, and tooling you use?
How do you deploy it?
StaticCompiler.jl is the main workhorse.
I wish a) that I was a Julia programmer and b) that Julia had taken off instead of python for ML. I’m always jealous when I scan the docs.
Python predates Julia by 3 decades. In many ways Julia is a response to Python's shortcomings. Julia could've never taken off "instead of" python but it clearly hopes to become the mature and more performant alternative eventually
Some small additional details: 23 years not 30. Also, I think Julia was started as much in response to Octave/Matlab’s shortcomings. I don’t know if it is written down, but I was told a big impetus was that Edelman had just sold his star-p company to Microsoft, and star-p was based around octave/matlab.
- https://julialang.org/blog/2012/02/why-we-created-julia/
When Julia came out, neither Python nor data science and ML had the popularity they have today. Even 7-8 years ago people were still having Python vs R debates.
In 2012, Python was already well-established in ML, though not as dominant as it is today. scikit-learn was mature and Theano was pretty popular. Most of the top entries on Kaggle were C++ or Python.
I'm excited to see `--trim` finally make it, but it only works when all code reachable from the entry points is statically inferrable. In any non-toy Julia program that's not going to be the case. Julia sorely needs a static mode and a static analyzer that can check for correctness. It also needs better sum type support and better error messages (static and runtime).
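To make "statically inferrable" concrete, here is the kind of toy contrast I mean (not from the release notes, just an illustration):

# Not statically inferrable: the return type depends on a runtime value,
# so callers need dynamic dispatch -- exactly what trimming can't resolve.
describe(x) = x > 0 ? 1.0 : "negative"

# Inferrable alternative: every branch returns the same concrete type.
describe_stable(x) = x > 0 ? 1.0 : -1.0

# `@code_warntype describe(1)` flags the first as returning Union{Float64, String}.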
In 2020, I thought Julia would be _the_ language to use in 2025. Today I think that won't happen until 2030, if even then. The community is growing too slowly, core packages have extremely few maintainers, and Python and Rust are sucking the air out of the room. This talk at JuliaCon was a good summary of how developers found themselves so much more productive in Rust than in Julia that they switched away from Julia:
https://www.youtube.com/watch?v=gspuMS1hSQo
Which is pretty telling. It takes overcoming a certain inertia to move away from any language.
Given all that, outside of depending heavily on DifferentialEquations.jl, I don't know why someone would pick Julia over Python + Rust.
I don't think Julia was designed for low-overhead projects in memory-constrained environments, or for squeezing out that last 2% of hardware performance to cut costs, like C++, Rust or Zig.
Julia is the language to use in 2025 if what you’re looking for is a JIT-compiled, multiple-dispatch language that lets you write high-performance technical computing code to run on a cluster or on your laptop for quick experimentation, while also being metaprogrammable and highly interactive, whether for modelling, simulation, optimisation, image processing etc.
actually I think it sort of was. I remember Berkeley squeezing a ton of perf out of their Cray for a crazy task, because it was easy to specialize some wild semi-sparse matrix computations onto an architecture with strange memory/cache bottlenecks while being guaranteed that the results were still okay.
Telling what? Did you actually listen to the talk that you linked to, or read the top comment there by Chris Rackauckas?
> Given all that, outside of depending heavily on DifferentialEquations.jl, I don't know why someone would pick Julia over Python + Rust.
See his last slide. And no, they didn't replace their Julia use in its entirety with Rust, despite his organization being a Rust shop. Considering Rust as a replacement for Julia makes as much sense to me as considering C as a replacement for Mathematica; Julia and Mathematica are domain-specific (scientific computation) languages, not general systems programming languages.
Neither Julia nor Mathematica is a good fit for embedded device programming.
I also find it amusing how you criticize Julia while praising Python (which was originally a "toy" scripting language succeeding ABC, but found some accidental "gaps" to fit in historically) within the narrative that you built.
> In any non-toy Julia program that's not going to be the case.
Why?
Python has a useful and rich ecosystem that grows every day. Julia is mostly a pile of broken promises (it neither reads like Python nor runs like C, at least not without the significant effort required to produce curated benchmarks) and desperate hype generators.
Since you have a rosy picture of Python, I assume you're young. Python was mostly a fringe/toy language for two decades, until around ~2010, when a Python fad started, not too different from the Rust fad of today; at some point Google started using it seriously and thought they could fix Python, but gave up eventually. The fad lived on and kept evolving and somehow found popularity with SciPy and then ML. I used it a little in the '90s, and I found the language bad for anything other than replacing simple bash scripts, simple desktop applications, or a desktop calculator, and I still think it is (but sure, there are people who disagree and think it is a good language). It was slow and didn't have a type system, you didn't know whether your code would crash or not until you ran that line of code, and the correctness of your program depended on invisible characters.
"Ecosystem" is not a part of the language, and in any case, the Python ecosystem is not written in Python, because Python is not a suitable language for scientific computing, which is unsurprising because that's not what it was designed for.
It is ironic you bring up hype to criticize Julia while praising Python which found popularity thanks to hype rather than technical merit.
What promise are you referring to? Who promised you what? It's a programming language.
> "Ecosystem" is not a part of the language, and in any case, the Python ecosystem is not written in Python, because Python is not a suitable language for scientific computing
Doesn't matter. Languages do not matter, ecosystems do, for they determine what is practically achievable.
And it doesn't matter that the Python ecosystem relies on huge amounts of C/C++ code. Python people made the effort to wrap this code, document it, and maintain those wrappers. Other people use that code through Python APIs. Yes, every language with an FFI could do the same. For some reason, none has achieved that.
Even people using Julia use PythonCall.jl; that's how unsuitable Python is.
> What promise are you referring to? Who promised you what? It's a programming language.
Acting dumb is a poor rhetorical strategy, and it ignores such good rhetorical advice as the principle of charity: it is quite obvious that I didn't mean that a programming language made any promise. Making a promise is something only people can do. And Julia's creators and the people promoting it have made quite bombastic claims over the years that turned out to not have much support in reality.
I leave your assumptions about my age or other properties to you.
Ecosystems matter, but runtimes do as well. Take Java, for instance. It didn’t have to wrap C/C++ libraries, yet it became synonymous with anything data-intensive. From Apache Hadoop to Flink, from Kafka to Pulsar. Sure, this is mostly ETL, streaming, and databases rather than numeric or scientific computing, but it shows that a language plus a strong ecosystem can drive a movement.
This is why I see Julia as the Java for technical computing. It’s tackling a domain that’s more numeric and math-heavy, not your average data pipeline, and while it hasn’t yet reached the same breadth as Python, the potential is there. Hopefully, over time, its ecosystem will blossom in the same way.
If what determines the value of a language is its libraries (which makes no sense to me at all, but let's play your game), then that is one more argument against Python. You don't need FFI to use a Fortran library from Fortran, and I (and many physicists) have found Fortran better suited to HPC than Python since... the day Python came into existence. And no, many other scripting languages have wrappers too, and no, scientific computing is not restricted to ML, which is the only area where Python can be argued to have the most wrapper libraries for external code.
Language matters, and the two-language problem is a real problem; you can't make it go away by closing your ears and chanting "doesn't matter! doesn't matter!"
Julia is a real step toward solving this problem, and it allows you to interact with libraries/packages in ways that are not possible in Python + Fortran + C/C++ + others. You are free to keep pretending the problem doesn't exist.
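To make the FFI point concrete: calling straight into libc from the Julia REPL is a one-liner, no wrapper package needed (the string is just an example):

julia> s = "two-language problem";

julia> @ccall strlen(s::Cstring)::Csize_t
0x0000000000000014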
You are making disparaging and hyperbolic claims about hyperbolic claims without proper attribution, and when asked for a source, you cry foul and try to appear smart by saying "you're acting dumb". You should take your own advice and, instead of "acting dumb", explicitly cite what "promises" or "bombastic claims" you are referring to. That is what I asked you to do, but instead you are doing what you are doing, which is interesting.
in the early aughts educators loved the shit out of python because "it forced kids to organize their code with indentation". This was about a decade before formatting linters became low key required for languages.
These are exactly the feelings that I left with from the community in ~2021 (along with the AD story, which never really materialized _within_ Julia - Enzyme had to come from outside Julia to “save it” - or materialized in a way (Zygote) whose compilation times were absolutely unacceptable compared to competitors like JAX)
More and more over time, I’ve begun to think that the method JIT architecture is a mistake, that subtyping is a mistake.
Subtyping makes abundant sense when paired with multiple dispatch — so perhaps my qualms are not precise there … but it also seems like several designs for static interfaces have sort of bounced off the type system. Not sure, and can’t defend my claims very well.
Julia has much right, but a few things feel wrong in ways that spiral up to the limitations in features like this one.
Anyways, excited to check back next year to see myself proven wrong.
> For example, the all-inference benchmarks improve by about 10%, an LLVM-heavy workload shows a similar ~10% gain, and building corecompiler.ji improves by 13–16% with BOLT. When combined with PGO and LTO, total improvements of up to ~23% have been observed.
> To build a BOLT-optimized Julia, run the following commands
Is BOLT the default build (e.g. fetched by juliaup) on the supported Linux x86_64 and aarch64? I'm assuming not, based on the wording here, but I'm interested in what the blocker is and whether there are plans to make it part of the default build process. Is it still considered immature? Are there downsides to it other than the harmless warnings the post mentions?
BOLT isn't on by default. The main problem is that no one has tested it much (because you can only get it by building your own Julia). We should try distributing BOLT by default. It should just work...
Wow, there are so many amazing practical improvements in this release. It's better at both interactive use _and_ ahead-of-time compilation use. Workspaces and apps and trimmed binaries are massive - letting Julia easily do things normally done in other languages. It will be interesting to see what "traditional" desktop software comes out of that (CLI tools? GUI apps?).
I am so excited - well done everyone!
This is it. Anyone who's anyone has been waiting for the 1.12 release, with the (admittedly experimental) juliac compiler and its --trim feature. This will allow you to create small, redistributable binaries.
If it is experimental, it doesn't allow you to create small, redistributable binaries; it allows you to hope that it will create such binaries.
Though it is quite a step forward after years of insisting that this additional package, PackageCompiler.jl, is all you need.
Honest take: yes, it's not ready. When I tried it, the generated binary crashed.
For what it's worth, I am able to generate a non-small (>1 GiB) binary with 1.11 that runs on other people's machines. Not shipping in production, but it could be if you're willing to put in the effort. So in a sense, PackageCompiler.jl is all you need. ;)
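In case anyone wants to try that route, the PackageCompiler workflow looks roughly like this (the package name and output paths are illustrative):

using PackageCompiler

# Build a relocatable app bundle from a package directory; the result in
# MyAppCompiled/bin runs on machines without Julia installed.
create_app("MyApp", "MyAppCompiled")

# Or just bake heavy dependencies into a sysimage to cut load latency:
create_sysimage(["Plots"]; sysimage_path = "sys_plots.so")
# ...then start Julia with: julia --sysimage sys_plots.so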
How small are we talking?
~1 MB for hello world (almost all of which is runtime); fairly complicated simulations in ~50 MB (solving 500-state ODEs with implicit solvers).
This is a fantastic release, been looking forward to --trim since the 2024 JuliaCon presentation. All of the other features look like fantastic QoL additions too - especially redefinition of structs and the introduction of apps.
Congrats Julia team!
Being able to redefine structs is what I always wanted when prototyping using Revise.jl :) great to have it
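If I understand the feature correctly, something like this now just works in the 1.12 REPL, where it used to throw "invalid redefinition of constant":

julia> struct Point
           x::Float64
       end

julia> struct Point   # same name, different layout
           x::Float64
           y::Float64
       end

julia> Point(1.0, 2.0)
Point(1.0, 2.0)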
Has anyone tried the `--trim` option? I wonder how well it works in "real life".
I've tried it on some of my julia code. The lack of support for dynamic dispatch severely limits the use of existing libraries. I spent a couple days pruning out dependencies that caused problems, before hitting some that I decided would be more effort to re-implement than I wanted to spend.
So for now we will continue rewriting code that needs to run on small systems rather than deploy the entire julia environment, but I am excited about the progress that has been made in creating standalone executables, and can't wait to see what the next release holds.
it works well --- IF your code is already written in a manner amenable to static analysis. if your coding style is highly dynamic it will probably be difficult to use this feature for the time being (although UX should of course improve over time)
Somehow I thought Julia had been around for much longer than this.
I think Julia missed the boat with Python totally dominating the AI area.
Which is a shame, because now Python has all the same problems with the long startup time. On my computer, it takes almost 15 seconds just to import all the machine-learning libraries. And I have to do that on every app relaunch.
Waiting 15+ seconds to test small changes to my PyTorch training code on NFS is rather annoying. I know there are ways to work around it, but sometimes I wish we could have a training workflow similar to how Revise works. Make changes to the code, Revise patches it, then run it via a REPL on the main node. Not sure if Revise actually works in a distributed context, but that would be amazing if it did. No need to start/fork a million new Python processes every single time.
Of course I would also rather be doing all of the above in Julia instead of Python ;)
Revise can work on your server for hot reloading if you need it - you copy your new code files in place over the old ones.
Of course there are caveats - it won't update actively running code, but if your code is structured reasonably and you are aware of Revise's API and the very basics of Julia's world age, you can do it pretty easily IME.
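Concretely, the loop looks something like this (the file and function names here are made up):

julia> using Revise

julia> includet("train.jl")    # track the file; later edits are picked up automatically

julia> train(nepochs = 1)

# ...edit train.jl in your editor, then just call it again: no restart,
# no re-importing the heavy dependencies.
julia> train(nepochs = 1)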