A hobby audio and text analysis application I've written, with no specific concern for low level performance other than algorithmically, runs 4x as fast in .net10 vs .net8. Pretty much every optimization discussed here applies to that app. Great work, kudos to the dotnet team. C# is, imo, the best cross platform GC language. I really can't think of anything that comes close in terms of performance, features, ecosystem, developer experience.
I write C# and rust fulltime. Native discriminated unions (and their integration throughout the ecosystem) are often the deciding factor when choosing rust over C#.
Very hard to imagine teams cross shopping C# and Rust and DU's being the deciding factor. The tool chains, workflows, and use cases are just so different, IMO. What heuristics were your team using to decide between the two?
I do like/respect C#, but come on now. I know they're fixing it, but the rest of the language was designed the same way and thus still has this vestigial layer of OOP hubris.
It's up to each team to decide how they want to write their code. TypeScript is the same with JS having a "vestigial" `class` (you can argue that "it's not the same", but nevertheless, it is possible to write OOP style code in JS/TS and in fact, is the norm in many packages like Nest.js).
The language is a tool; teams decide how to use the tool.
For me, it will be if they ever get checked errors of some sort. I don’t want to use a language with unchecked exceptions flying about everywhere. This isn't saying I want checked exceptions either, but I think if they get proper unions and then have some sort of error union type it would go a long way.
I'd much rather code F# than Python, it's more principled, at least at the small scale. But F# is in many ways closer to modern mainstream languages than a modern pure functional language. There's nothing scary about it. You can write F# mostly like Python if you want, i.e. pervasive mutation and side effects, if that's your thing.
If Python is the only language you have to compare other languages to, all other programming languages are going to look like "Python with X and Y differences". It makes no sense to compare Python to F# when OCaml exists and is a far closer relative. F# isn't quite "OCaml on .NET" but it's pretty close.
It absolutely does make sense to compare it to the world's most popular programming language, especially when it's dismissed as "functional programming". Who benefits from an OCaml comparison? You think F# should be marketed to OCaml users who might want to try dotnet? That's a pretty small market.
Python is the world's most used scripting language, but for application programming languages there are other languages that are widely used and better to compare to F#. For example, C# and Java.
It all depends on the lens one chooses to view them through. None of them is really "functional programming" in the truly modern sense, even F#. As more and more mainstream languages get pattern matching and algebraic data types (such as Python), plus lambdas and immutable values, these languages converge. However, you don't really get the promises of functional programming, such as guaranteed correct composition and easier reasoning/analysis; for that one needs at least purity and perhaps even totality. That carries a burden of proof, which means things get harder, and perhaps too hard for some (e.g. the parent poster).
If purity is a requirement for "real" functional programming, then OCaml or Clojure aren't functional. Regarding totality, even Haskell has partial functions and exceptions.
Both OCaml and Clojure are principled and well designed languages, but they are mostly evolutions of Lisp and ML from the 70s. That's not where functional programming is today. Both encourage a functional style, which is good. And maybe that's your definition of a "functional language". But I think that definition will get increasingly less useful over time.
Haskell. But there are other examples of "pure functional programming". And the state of the art is dependently typed languages, which are essentially theorem provers but can be used to extract working code.
Sure, Python has types as part of the syntax, but Python doesn't have types like Java, C#, etc. have types. They are not pervasive and the semantics are not locked down.
Exactly what I've observed in practice because most devs have no background in writing functional code and will complain when asked to do so.
Passing or returning a function seems a foreign concept to many devs. They know how to use lambda expressions, but rarely write code that works this way.
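For example, even something this small, a function that builds and returns another function, rarely shows up (a minimal C# sketch; the names are made up):

    using System;

    // A function that returns a configured function (hypothetical example).
    static Func<decimal, decimal> MakeDiscount(decimal percent)
        => price => price * (1 - percent / 100m);

    var tenPercentOff = MakeDiscount(10m);  // a function built by a function
    Console.WriteLine(tenPercentOff(200m)); // 180.0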
We adopted ErrorOr[0] and have a rule that core code must return ErrorOr<T>. Devs have struggled with this and continue to misunderstand how to use the result type.
Honestly, this sounds like you've never really done it.
FP is much better for ergonomics, developer productivity, correctness. All the important things when writing code.
I like FP, but your claim is just as baseless as the parent’s.
If FP was really better at “all the important things”, why is there such a wide range of opinions, good but also bad? Why is it still a niche paradigm?
Java is supported on more platforms, has more developers, more jobs, more OSS projects, and is more widely used (TIOBE 2024). Performance was historically better, but C# caught up.
Reified generics, value types, and LINQ are just a few things you would miss when going to Java. Also, Java and .NET are both big; that's not a real argument here. Not that I would trust the TIOBE index too much, but as of September 2025 C# is right behind Java, in 5th place.
My experience is that .NET programs have typically been more tunable for performance than Java for many years now, even if that didn't come for free out of the box, which is generally what matters with performance. Being able to further optimise what needs optimising means you generally end up faster for your business domain than the alternative; with Java code it is generally harder and/or less ergonomic to do this.
For example, just having value types and reified generics in combination meant you could write generic code against value types, which for hot algorithmic loops or certain data structures usually meant a big win w.r.t. memory and CPU consumption. For a collection type critical to an app I wrote many years ago, using value types almost halved the memory footprint compared to the best Java equivalent I could find, and was somewhat faster with fewer cache misses. The Java alternative wasn't an amateur effort either, but they couldn't get the perf out of it even with significant effort.
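To make that combination concrete, here's a minimal sketch (illustrative numbers, not the original collection):

    using System;
    using System.Collections.Generic;

    var points = new List<Point>(1_000_000);
    for (int i = 0; i < 1_000_000; i++)
        points.Add(new Point { X = i, Y = i });

    // List<Point> stores the structs inline in one contiguous buffer:
    // roughly 8 MB of payload, no per-element allocation, no pointer chasing.
    // A Java ArrayList<Point> would hold references to boxed objects instead.
    Console.WriteLine(points.Count);

    struct Point { public int X, Y; } // 8 bytes per element, no object header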
Java also, last time I checked, doesn't have a value decimal type for financial math, which IMO can mean a significant performance loss for financial/money-based systems. For anything with heavy math and lots of processing/data structures, I would find .NET significantly faster after doing the optimisation work. If I had to choose between the two targets these days, I would find .NET in general an easier target w.r.t. performance. Of course, perf isn't everything, depending on the domain.
Having worked with C# professionally for a decade, going through the changes with LINQ, async/await, Roslyn, and the rise of .NET Core, to .NET Core becoming .NET, I disagree. I certainly think that C# is a great tool and that it's the best it has ever been. But it also relies on very implicit behaviour and is built upon OOP design principles and a bunch of "needless" abstraction, things I personally have come to view as anti-patterns over the years. This isn't because I specifically dislike C#; you could find me saying something similar about Java.
I suspect that the hidden indirection and runtime magic may be part of why you love the language. In my experience, however, it leads to poor observability, opaque control flow, and difficult debugging sessions in every organisation and company I've ever worked for. It's fair to argue that this is because the people working with C# are bad at software engineering, similar to how Uncle Bob will always be correct when he calls teams out for getting his principles wrong. To me that means the language itself is a poor design fit for software development in 2025. Which is probably why we see more and more Go adoption, due to its explicit philosophies. Though to be fair, Python seems to be "winning" as far as adoption goes in the cross-platform GC language space. Having worked with Django-Ninja I can certainly see why: it's so productive, and with stuff like Pyrefly, UV, and Ruff it's very easy to make it a YAGNI experience with decent type control.
I am happy you enjoy C# though, and it's great to see that it is evolving. If they did more to enhance the developer experience, so that people were less inclined to do bad engineering on a Thursday afternoon after a long day of useless meetings, then I would probably agree with you. I'm not sure any of the changes going toward .NET 10 are going in that direction though.
C# has increasingly become more terse (e.g. switch expressions, collection initializers, object initializers, etc) and, IMO, is a good balance between OOP and functional[0].
Functions are first class objects in C# and teams can write functional style C# if they want. But I suspect that this doesn't scale well in human terms as we've encountered a LOT of trouble trying to get TypeScript devs to adopt more functional programming techniques. We've found that many devs like to use functional code (e.g. Linq, `.filter()`, `.map()`), but dislike writing functional code because most devs are not wired this way and do not have any training in how to write functional code and understanding monads. Asking these devs to use a monad has been like asking kids to eat their carrots.
Across much of our backend TS codebase, there are very, very few cases where developers accept a function as input or return a function as output (almost all of it written by 1 dev out of a team of ~20).
> ...it is built upon OOP design principles and a bunch of "needless" abstraction
Having been working with Nest.js for a while, it's clear to me that most of these abstractions are not "needless" but actually "necessary" to manage complexity of apps beyond a certain scale and the reasons are less technical and more about scaling teams and communicating concepts.
Anyone that looks at Nest.js will immediately see the similarities to Spring Boot or .NET Web APIs because it fills the same niche. Whether you call a `*Factory` a "factory" or something else, the core concept of what the thing does still exists whether you're writing C#, Java, Go, or JS: you need a thing that creates instances of things.
You can say "I never use a factory in Go", but if you have a function that creates other things or other functions, that's a factory...you're just not using the nomenclature. Good for you? Or maybe you misunderstand why there is standard nomenclature of common patterns in the first place and are associating these patterns with OOP when in reality, they are almost universal and are rather human language abstractions for programming patterns.
I notice that none of the examples in your blog entry on functional C# deal with error handling. I know that is not the point of your article, but error handling is actually one of my key issues with C# and its reliance on the implicit: like so many other parts of C#, you'd probably hand it over to an exception handler. I'd much rather you deal with it explicitly right where it happens, and I would prefer if you were actually forced to do so for examples like yours. This is because implicit error handling is hard. I have no doubt you do it well, but it is frankly rare to meet a C# developer with as much understanding of the language as you clearly have.
I think this is an excellent blog post by the way. My issues with C# (and this applies to a lot of other GC languages) is that most developers would learn a lot from your article. Because none of it is an intuitive part of the language philosophy.
I don't think you should never use OOP or abstractions. I don't think there is a golden rule for when you should use either. I do think you need to understand why you are doing it, though, and C# sort of makes people go to abstractions first, not last, in my experience. I don't think these changes to the GC are going to help people who write C# without understanding C#, which is frankly most C# developers around here. Because Go is opinionated and explicit, it's simply an issue I have to deal with less in Go teams. It's not an issue I have to deal with less in Python teams, but then, everyone who loves Python knows it sucks.
My team just recently made the switch from a TS backend to a C# backend for net new work. When we made this switch, we also introduced `ErrorOr`[0] which is a monadic result type.
I would not have imagined this to be controversial nor difficult, but it turns out that developers really prefer and understand exceptions. That's because for a backend CRUD API, it's really easy to just throw and catch at a global HTTP pipeline exception filter, and for 95% of cases this is OK and good enough; you're not really going to be able to handle it, nor is it worth it to handle it.
We'll stick with ErrorOr, but developers aren't using it as a monad and are simply unwrapping the value and the error because, as it turns out, most devs just have a preference for and greater familiarity with imperative try-catch handling of errors. Practically, in an HTTP backend, there's nothing wrong in most cases with just having a global exception filter do the heavy lifting unless the code path has a clear recovery path.
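For context, the "global exception filter" approach amounts to something like this minimal ASP.NET Core sketch (hypothetical handler, implicit usings assumed, not our actual code):

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // One catch-all: anything thrown below becomes a 500 with a JSON body.
    app.UseExceptionHandler(errorApp =>
        errorApp.Run(async context =>
        {
            context.Response.StatusCode = StatusCodes.Status500InternalServerError;
            await context.Response.WriteAsJsonAsync(new { error = "Something went wrong." });
        }));

    // Handlers can just throw; recovery only happens where there's a clear path.
    app.MapGet("/boom", string () => throw new InvalidOperationException("oops"));

    app.Run();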
> I don't think you should never use OOP or abstractions. I don't think there is a golden rule for when you should use either.
I do think there is a "silver rule": OOP when you need structural scaffolding, functional when you have "small contracts" over big ones. An interface or abstract class is a "big contract" that means to understand how to use it, you often have to understand a larger surface area. A function signature is still a contract, but a micro-contract.
Depending on what you're building, having structural scaffolding and "big contracts" makes more sense than having lots of micro-contracts (functions). Case in point: REST web APIs make a lot more sense with structural scaffolding. If you write it without structural scaffolding of OOP, it ends up with a lot of repetition and even worse opaqueness with functions wrapping other functions.
The silver rule for OOP vs FP for me: OOP for structural templating for otherwise repetitive code and "big contracts"; FP for "small contracts" and algorithmically complex code. I encourage devs on the team to write both styles depending on what they are building and the nature of complexity in their code. I think this is also why TS and C# are a sweet spot, IMO, because they straddle both OOP and have just enough FP when needed.
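A toy sketch of the distinction (hypothetical types, not from a real codebase):

    using System;

    var order = new Order(100m);
    Console.WriteLine(Pricing.Total(order, o => o.Subtotal * 0.10m)); // 90.0

    public record Order(decimal Subtotal);

    // "Big contract": to use this, you must understand its whole surface area.
    public interface IPricingStrategy
    {
        decimal Discount(Order order);
        decimal Tax(Order order, string region);
        decimal Shipping(Order order);
    }

    // "Micro-contract": the signature is the entire agreement.
    public static class Pricing
    {
        public static decimal Total(Order order, Func<Order, decimal> discount)
            => order.Subtotal - discount(order);
    }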
You introduced a pattern that is simply different than the usual in C#. It's also not clearly better, it's different. In languages designed for result types like this the ergonomics of such a type are usually better.
All the libraries you use and all methods from the standard library use exceptions. So you have to deal with exceptions in any case.
There are also a million or so libraries that implement types like this. There is no standard, so no interoperability, and people have to learn the peculiarities of the chosen library.
I like result types like this, but I'd never try to introduce them in C# (unless at some point they get more language support).
These are developers that have never written C# before so there's no difference between whether it's language supported or not. It was in the core codebase on day 1 when they onboarded so it may as well have been native.
But what I takeaway from this is that Go's approach to error handling is "also not clearly better, it's different".
Even if C# had core language support for result types, you would be surprised how many developers would struggle with it (that is my takeaway from this experience).
If you're not coming from a strongly typed functional language, it's still a pattern you're not used to. Which might be a bit of a roundabout way to say that I agree about your last part, developers without contact to that kind of language will struggle at first with a pattern like this.
I know how to use this pattern, but the C# version still feels weird and cumbersome. Usually you combine this with pattern matching and other functional features, and the whole thing becomes convenient in the end. That part is missing in C#. And I think it makes a difference in understanding, as you would usually build on your experience with pattern matching to understand how to handle this case of Result|Error.
> Usually you combine this with pattern matching and other functional features and the whole thing makes it convenient in the end. That part is missing in C#
You mean like this?
    string foo = result.MatchFirst(
        value => value,
        firstError => firstError.Description);
Or this?
    ErrorOr<string> foo = result
        .Then(val => val * 2)
        .Then(val => $"The result is {val}");
Or this?
    ErrorOr<string> foo = await result
        .ThenDoAsync(val => Task.Delay(val))
        .ThenDo(val => Console.WriteLine($"Finished waiting {val} seconds."))
        .ThenAsync(val => Task.FromResult(val * 2))
        .Then(val => $"The result is {val}");
With pattern matching like this?
    var holidays = new DateTime[] {...};

    var output = new Appointment(
        DayOfWeek.Friday,
        new DateTime(2021, 09, 10, 22, 15, 0),
        false
    ) switch
    {
        { SocialRate: true } => 5,
        { Day: DayOfWeek.Sunday } => 25,
        Appointment a when holidays.Contains(a.Time) => 25,
        { Day: DayOfWeek.Saturday } => 20,
        { Day: DayOfWeek.Friday, Time.Hour: > 12 } => 20,
        { Time.Hour: < 8 or >= 18 } => 15,
        _ => 10,
    };
C# pattern matching is pretty damn good[0] (it seems you're not aware?).
None of your examples use native C# pattern matching. And without language support like e.g. discriminated unions you can't have exhaustive pattern matching in C#. So you'll have to silence the warnings about the missing default case or always add one, which is annoying.
I mean, it's not a stretch to see how you can use native pattern matching with ErrorOr result types.
    #!/usr/local/share/dotnet/dotnet run
    #:package ErrorOr@2.0.1

    using ErrorOr;

    var computeRiskFactor = ErrorOr<decimal> ()
        => 0.5m; // Just an example

    var applyAdjustments = ErrorOr<decimal> (decimal baseRiskFactor)
        => baseRiskFactor + 0.1m; // Just an example

    var approvalDecision = computeRiskFactor()
        .Then(applyAdjustments)
        .Match(
            riskFactor => riskFactor switch {
                < 0.5m => "Approved",
                < 0.75m and >= 0.5m => "Approved with Conditions",
                >= 0.75m and < 0.9m => "Manual Review",
                _ => "Declined"
            },
            errors => "Error computing risk factor"
        );

    Console.WriteLine($"Loan application: {approvalDecision}");
(A fully self-contained program, BTW.)
Here's the OCaml version:
    let compute_risk_factor () = 0.5
    let apply_adjustments base_risk_factor = base_risk_factor +. 0.1

    let approval_decision =
      let risk_factor = compute_risk_factor () |> apply_adjustments in
      match risk_factor with
      | r when r < 0.5 -> "Approved"
      | r when r < 0.75 -> "Approved with Conditions"
      | r when r < 0.9 -> "Manual Review"
      | _ -> "Declined"

    let () =
      print_endline approval_decision
Still not functional enough?... Or do you just not like C#? No point moving goalposts.
My experience of .NET even from version 1 is that it has the best debugging experience of any modern language, from the visual studio debugger to sos.dll debugging crash dumps.
I have been coding in C# for 16 years and I have no idea what you mean by "hidden indirection and runtime magic". Maybe it's just invisible to me at this point, but GC is literally the only "invisible magic" I can think of that's core to the language. And I agree that college-level OOP principles are an anti-pattern; stop doing them. C# does not force you to do that at all, except very lightly in some frameworks where you extend a Controller class if you have to (annoying but avoidable). Other than that, I have not used class inheritance a single time in years, and 98% of my classes and structs are immutable. Just don't write bad code; the language doesn't force you to at all.
Hidden indirection & runtime magic almost always refer to DI frameworks.
Reflection is what makes DI feel like "magic". Type signatures don't mean much in reflection-heavy code. Newcomers won't know many of a DI framework's implicit behaviors and conventions until they either shoot themselves in the foot or get RTFM'd.
My pet theory is this kind of "magic" is what makes some people like Golang, which favors explicit wiring over implicit DI framework magic.
> Just don't write bad code
Reminds me of C advice: "Just don't write memory leaks & UAFs!"
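To make the "magic" concrete, here's a minimal sketch using Microsoft.Extensions.DependencyInjection. Nothing in `App`'s signature says where `IGreeter` comes from, and deleting the registration line still compiles; it only fails at runtime:

    using System;
    using Microsoft.Extensions.DependencyInjection;

    var services = new ServiceCollection();
    services.AddSingleton<IGreeter, ConsoleGreeter>(); // remove this: compiles fine, throws at runtime
    services.AddSingleton<App>();

    var provider = services.BuildServiceProvider();
    provider.GetRequiredService<App>().Run();

    interface IGreeter { void Greet(string name); }
    class ConsoleGreeter : IGreeter { public void Greet(string name) => Console.WriteLine($"Hello, {name}!"); }
    class App(IGreeter greeter) { public void Run() => greeter.Greet("world"); }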
Attributes do nothing at all on their own. It's someone else's code that does magic by reflecting on your types and looking for those attributes. That may seem like a trivial distinction, but there's a big difference between "the language is doing magic" and "some poorly documented library I'm using is doing magic". I rarely use and generally dislike attributes. I sometimes wonder if C# would be better off without them, but there are some legitimate usages like interop with unmanaged code that would be really awkward any other way. They are OK if you think of them as a weakly enforced part of the type system, and relegate their use to when a C# code object is representing something external like an API endpoint or an unmanaged call. Even this is often over-done.
Yes, the ASP.NET pipeline is a bit of a mess. My strategy is to plug in a couple adapters that allow me to otherwise avoid it. I rolled my own DI framework, for instance.
Source generators are present in all languages and terrible in all languages, so that certainly is not a criticism of C#. It would be a valid criticism if a language required you to use source generators to work efficiently (e.g. limited languages like VB6/VBA). But I haven't used source generators in C# in at least 10 years, and I honestly don't know why anyone would at this point.
Maybe it sounds like I'm dodging by saying C# is great even though the big official frameworks Microsoft pushes (not to mention many of their tutorials) are kinda bad. I'd be more amenable to that argument if it took more than an afternoon to plug in the few adapters you need to escape their bonds and just do it all your own way with the full joy of pure, clean C#. You can write bad code in any language.
That's not to say there's nothing wrong with C#. There are some features I'd still like to see added (e.g. co-/contra-variance on classes & structs), some that will never be added but I miss sometimes (e.g. higher-kinded types), and some that are wonderful but lagging behind (e.g. Expressions supporting newer language features).
> But I haven't used source generators in C# in at least 10 years, and I honestly don't know why anyone would at this point.
A challenge with .NET web APIs is that it's not possible to detect when interacting with a payload deserialized from JSON whether it's `null` because it was set to `null` or `null` because it was not supplied.
A common way to work around this is to provide an `IsSet` boolean:
    private bool _isNameSet;
    private string? _name;
    public string? Name { get => _name; set { _name = value; _isNameSet = true; } }
Now you can check if the value is set.
However, you can see how tedious this gets without a source generator. With a source generator, we simply take nullable partial properties and generate the stub automatically:
    public partial string? Name { get; set; }
Now a single marker attribute will generate as many `Is*Set` properties as needed.
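Presumably the generated half then looks something like this (a sketch of hypothetical generator output; the class name is illustrative):

    public partial class UpdateUserRequest
    {
        private string? _name;
        public bool IsNameSet { get; private set; }

        public partial string? Name
        {
            get => _name;
            set { _name = value; IsNameSet = true; }
        }
    }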
Of course, the other use case is AOT: avoiding reflection by generating the source at compile time.
I am at a (YC, series C) startup that just recently made the switch from TS backend on Nest.js to C# .NET Web API[0]. It's been a progression from Express -> Nest.js -> C#.
What we find is that having attributes in both Nest.js (decorators) and C# allows one part of the team to move faster and another smaller part of the team to isolate complexity.
The indirection and abstraction are explicit decisions to reduce verbosity for 90% of the team for 90% of the use cases because otherwise, there's a lot of repetitive boilerplate.
The use of attributes, reflection, and source generation make the code more "templatized" (true both in our Nest.js codebase as well as the new C# codebase) so that 90% of the devs simply need to "follow the pattern" and 10% of the devs can focus on more complex logic backing those attributes and decorators.
Having the option to dip into source generation, for example, is really powerful in allowing the team to reduce boilerplate.
[0] We are hiring, BTW! Seeking experienced C# engineers; very, very competitive comp and all greenfield work with modern C# in a mixed Linux and macOS environment.
> so that 90% of the devs simply need to "follow the pattern" and 10% of the devs can focus on more complex logic backing those attributes and decorators.
Works well until the 10% who understand the behind-the-scenes magic leave, and you are left with a bunch of developers copy-pasting patterns that they don't understand.
I love express because things are very explicit. This is the JSON schema being added to this route. This route is taking in JSON parameters. This is the function that handles this POST request endpoint.
I joined a team using Spring Boot and the staff engineer there couldn't tell me if each request was handled by its own thread or not, he couldn't tell me what variables were shared across requests vs what was uniquely instantiated per request. Absolute insanity, not understanding the very basics of one's own runtime.
Meanwhile in Express, the threading model is stupid simple (there isn't one) and what is shared between requests is obvious (everything declared in an outer scope).
The trade-off, though, is that patterns and behind-the-scenes source generation are another layer that the devs who follow them have to deal with when debugging and understanding why something isn't working. They either spend more time understanding the bespoke things or are bottlenecked relying on a team or person to help them get through those moments. It's a trade-off, and one that has bitten me and others before.
I am not talking about C# specifically, but more generally, and I agree.
Implicit and magic look nice at first, but sometimes they can be annoying. I remember the first time I tried Ruby on Rails and I was looking for a piece of config.
Yes, "convention over configuration". Namely, ungreppable magic.
This kind of stuff must be used with a lot of care.
I usually favor explicit code and, for config, plain data (usually TOML).
This can be extended to hidden or non-obvious allocations and other stuff (when I work with C++).
It is better to know what is going on when you need to, and burying it in a couple of layers can make things unnecessarily difficult.
Would you rather a team move faster and be more productive or be a purist and disallow abstractions to avoid some potential runtime tracing challenges which can be mitigated with good use of OTEL and logging? I don't know about you, but I'm going to bias towards productivity and use integration tests + observability to safeguard code.
Disallow bespoke abstractions and use the industry standard ones instead. People who make abstractions inflate how productive they’re making everyone else. Your user base is much smaller than popular libs, so your docs and abstractions are not as battle tested and easy to use as much as you think.
Well, the point of using abstractions is that you don't need to know the things being abstracted. I think the abstraction here explains what it does, and you can certainly understand and use it without needing to understand all the specifics behind it.
What are you working on that you're debugging annotations everyday? I'd say you've made a big mistake if you're doing that/you didn't read the docs and don't understand how to use the attribute.
(Of course you are also free to write C# without any of the built in frameworks and write purely explicit handling and routing)
On the other hand, we write CRUD every day so anything that saves repetition with CRUD is a gain.
It has been worth the abstraction in my organization with many teams. Thinking 1000+ engineers, at minimum. It helps to abstract as necessary for new teammates that want to simply add a new endpoint yet follow all the legal, security, and data enforcement rules.
Better than no magic abstractions imo. In our large monorepo, LSP feedback can often be so slow that I can’t even rely on it to be productive. I just intuit and pattern match, and these magical abstractions do help. If I get stuck, then I’ll wade into the docs and code myself, and then ask the owning team if I need more help.
People were so afraid of macros they ended up with something even worse.
At least with macros I don't need to consider the whole of the codebase and every library when determining what is happening. Instead I can just... Go to the macro.
They are not. They are generators. Macros tend to be local and explicit, as the other commenters have said; they are more like templates. Generators can be fairly involved and feel like a mini language, one that is not as observable as macros.
Maybe you're confusing `System.Reflection.Emit` and source generators? Source generators are just a source tree walker + string templates to write source files.
Attributes and reflection are still used in C# for source generators, JSON serialization, ASP.NET routing, dependency injection... The amount of code that can fail at runtime because of reflection has probably increased in modern C#. (Not from C# source generators of course, but those only made interop even worse for F#-ers).
- Code generators: I think I've only seen this required for regex. Logging can be done via `LoggerMessage.Define` too, so attributes are optional. Also, code generators have access to the full tokenized structure of the code, which means attributes are just a design choice of the particular generator you are using. And finally, code generators don't produce runtime errors unless the code they generated is invalid.
- JSON serialization: sure, but you can use your own converters. Attributes are not necessary.
- ASP.NET routing: yes, but those are in controllers; my impression is that minimal APIs are now the go-to solution, and with `app.MapGet(path)` there are no attributes. You can inject services into minimal APIs and this does not require attributes either. Most of the time minimal APIs don't require attributes at all.
- Dependency injection: attributes are required when you inject services into controller endpoint methods, which I never liked, nor understood why people do it. What is the use case over injecting through the controller constructor? It is not as if the controller is a singleton or long-lived object: it is constructed during the ASP.NET HTTP pipeline and discarded when no longer necessary.
So occasional usage may still occur from time to time, in endpoints and DTOs (`[JsonIgnore]`, for example), but you have other means to do the same things. It is done via attributes because it is easier and faster to develop.
Also, your team should invest some time into testing, in my opinion. Integration testing helps a lot with catching those runtime errors.
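For reference, a minimal-API endpoint with route binding and DI and no attributes at all looks roughly like this (a self-contained sketch, implicit usings assumed):

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddSingleton<ITimeService, SystemTimeService>();
    var app = builder.Build();

    // 'name' binds from the route, 'clock' resolves from DI, both by convention.
    app.MapGet("/hello/{name}", (string name, ITimeService clock) =>
        $"Hello {name}, it is {clock.UtcNow():O}");

    app.Run();

    interface ITimeService { DateTime UtcNow(); }
    class SystemTimeService : ITimeService { public DateTime UtcNow() => DateTime.UtcNow; }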
> Json serialization, sure but you can use your own converters
And going through converters is (was?) significantly slower for some reason than the built-in serialisation.
> my impression is that minimal APIs are now the go to solution and you have `app.MapGet(path)` so no attribute
Minimal APIs use attributes to explicitly configure how parameters are mapped to the path, query, header fields, body content or for DI dependencies. These can't always be implicit, which BTW means you're stuck in F# if you ever need them, because the codegen still doesn't match what the reflection code expects.
I haven't touched .NET during work hours in ages, these are mostly my pains from hobbyist use of modern .NET from F#. Although the changes I've seen in C#'s ecosystem the last decade don't make me eager to use .NET for web backends again, they somehow kept going with the worst aspects.
I'm fed up by the increasing use of reflection in C#, not the attributes themselves, as it requires testing to ensure even the simplest plumbing will attempt to work as written (same argument we make for static types against dynamic, isn't it?), and makes interop from F# much, much harder; and by the abuse of extension methods, which were the main driver for implicit usings in C#: no one knows which ASP.NET namespaces they need to open anymore.
I am working on an entirely new hobby project written on minimal APIs, and I checked today before writing this answer: I did not use any attributes there besides one `[FromBody]`, and that one only because otherwise it tries to map the model from everywhere, so in theory you could pass it via the query string.
Which was extremely weird.
Where did you see all of those attributes in minimal APIs? I'm honestly curious, because in my experience it is very forgiving and works mostly without them.
Aye, I was involved in some really messed-up outages caused by New Relic's agent libraries generating bogus bytecode at runtime. It was an absolute nightmare for the teams trying to debug, because none of the code causing the crashes existed anywhere you could easily inspect it. We replaced the opaque magic from New Relic with simpler OTEL; no more outages.
Don't we have automated tests for catching this kind of thing, or is everyone just YOLOing it in nowadays? Serialization, routing, etc. can fail at runtime regardless of whether you use attributes or reflection.
Ease of comprehension is more important than tests for preventing bugs. A highly testable DI nightmare will have more bugs than a simple system that people can understand just by looking at it.
If the argument is that most developers can't understand what a DI system does, I don't know if I buy that. Or is the argument that it's hard to track down dependencies? Because if that's the case, idiomatic C# has the dependencies declared right in the ctor.
But the "simple" system will be full of repetition and boilerplate, meaning the same bugs are scattered around the code base, and obscured by masses of boilerplate.
Isn't a GC also magic? Or anything above assembly? While I also understand the reluctance to use too much magic, in my experience it's not the magic, it's how well the magic is tested and developed.
I used to work with Play framework, a web framework built around Akka, an async bundle of libraries. Because it wasn't too popular, only the most common issues were well documented. I thought I hated magic.
Then, I started using Spring Boot, and I loved magic. Spring has so much documentation that you can also become the magician, if you need to.
If I wanted explicitness for every little detail I would keep writing in Assembly like in the Z80, 80x86, 68000 days.
Unfortunately we never got Lisp or Smalltalk mainstream, so we got to metaprogramming with what is available, and it is quite powerful when taken advantage of.
Some people avoid wizard jobs, others avoid jobs where magic is looked down upon.
I would also add that in the age of LLM and AI generated applications, discussing programming languages explicitness is kind of irrelevant.
Explicitness is different than verbosity. Often annotations and the like are abused to create a lot of accidental complexity just to not write a few keywords. In almost every lisp project you'll find that macros are not intended for reducing verbosity, they are there to define common patterns. You can have something like
    (define-route METHOD PATH BODY)
You can then easily inspect the generated code. But in Java and others, you'll have something like
    @GET(path=PATH)
And there's a whole system hidden behind this, that you have to carefully understand as every annotation implementation is different.
This is the trade-off with macros and annotation/code-generation systems.
I tend to do obvious things when I use these kinds of tools. In fact, I try to avoid macros.
Even if configurability is not important, I favor simplification over reuse. When I do need reuse, I go for higher-order functions if I can. A macro is the last bullet.
In some circumstances, like JSON or serialization, maybe they can be slightly abused to mark fields and such. But full code generation can take things so far into magic territory that it is not worth it in many circumstances IMHO, though every tool has its use cases, even macros and annotations.
IMO, macros and such should be to improve coding UX. But using it for abstractions and the like is very much not worth it. So something like JSX (or the loop system in Common Lisp) is good. But using it for DI is often a code smell for me.
> IMO, macros and such should be to improve coding UX
Coding UX critically leans on familiarity and the spread of knowledge. By definition, writing a non-obvious macro not known by others makes the UX worse, for a definition of "worse" that means "less manageable by anyone who looks at it without prior knowledge".
That is also the reason why standard libraries always have an advantage in usability just because people know them or the language constructs themselves.
Only if those Lisp projects are done by newbies. Clojure is quite known for having a community that takes that approach to macros, versus everyone else in Lisp since its early days.
Using macros for DSLs has been common for decades, and is how frameworks like CLOS were initially implemented.
I think this is fair criticism. OOP advocates like Uncle Bob always try to sell you 100 often contradictory and ill-defined rules and guidelines for how to “use it right”. Stuff like
* objects should model a single concept, or
* every domain concept should be an object.
These two alone are already contradictory. And what do they even mean? Concretely?
Then, when OOP invariably breaks down, they can always point to any of the 100 rules that you supposedly violated, and blame the failure on that. "Yes, it did not work out because you did not do it right." It's the no true Scotsman fallacy.
It’s like communism. It would work out if somebody just finally did it properly.
Maybe a system that requires 100 hard to follow rules to have even a chance at success just isn’t a great one.
> I really can't think of anything that comes close in terms of performance, features, ecosystem, developer experience.
This is also why I prefer to use Unity over all other engines by an incredible margin. The productivity gains of using a mature C# scripting ecosystem in a tight editor loop are shocking compared to the competition. Other engines do offer C# support, but if I had to make something profitable to save my own standard of living the choice is obvious.
There are only two vendors that offer built-in SIMD-accelerated linear math libraries capable of generating projection matrices out of the box. One is Microsoft and the other is Apple. The benefits of having stuff like this baked into your ecosystem are very hard to overstate. The amount of time you could waste looking for and troubleshooting primitives like this can easily kill an otherwise perfect vision.
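For example, with .NET's built-in System.Numerics, a projection matrix is one call (a small sketch):

    using System;
    using System.Numerics;

    // SIMD-accelerated matrix types ship in the box; a perspective projection:
    var projection = Matrix4x4.CreatePerspectiveFieldOfView(
        fieldOfView: MathF.PI / 3f,    // 60 degrees
        aspectRatio: 16f / 9f,
        nearPlaneDistance: 0.1f,
        farPlaneDistance: 1000f);

    var world = Matrix4x4.CreateRotationY(MathF.PI / 4f);
    var worldViewProjection = world * projection; // the operator is vectorized where hardware allows
    Console.WriteLine(worldViewProjection);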
Yeah, but if you use F# then you'll have all the features C# has been working on for years, only in complete and mature versions, and also an opinionated language encouraging similar styles between teams instead of wild dialects of kinda-sorta immutability and kinda-sorta DUs, and everything in between, requiring constant vigilance and training… ;)
I’m a fan of all three languages, but C# spent the first years relearning why Visual Basic was very productive and the last many years learning why OCaml was chosen to model F# after. It’s landed in a place where I can make beautiful code the way I need to, but the mature libraries I’ve crafted to make it so simply aren’t recreate-able by most .Net devs, and the level of micro-managing it takes to maintain across groups is a line-by-line slog against orthodoxy and seeming ‘shortcuts’, draining the productivity those high level guarantees should provide. And then there’s the impact of EF combined with sloppy Linq which makes every junior/consultant LOC a potentially ticking time bomb without more line-by-line, function-by-function slog.
Is this open source? Do you have numbers? I've been coding in .NET since it was "a thing" and frankly I'm having trouble mapping the optimizations to a local application at that magnitude.
The optimizations are seen at scale, they really won't mean much for your local application. Not a 3x+ improvement at least.
I've used it, and am still using it, to generate lots of value in a very large org. Having a language where I can bring Go, Node, etc. developers over and get relatively better performance, without having to teach OOP and all the implicit conventions on the C# side, is a bit like a cheat code. With modern .NET it's better-than-Java perf, with a better GC, and you get to write generic Python/JS-looking code while still having type checking (HM inference). There are C# libraries we do use, but with standard templates for those few, plus patterns to interface to mostly-F# layers, you can get very far in a style of code more fitting of a higher-level, more dynamic language. Ease of use vs perf: it's kind of in the middle, and it has also benefited from C# features (e.g. spans recently).
It's not one feature with F#, IMO; it's little things that add up, which is generally why it's hard to convince someone to use it. To the point that when the developers (under my directive) had to write two products in C#, they argued with me to switch back.
> I really can't think of anything that comes close in terms of [...] developer experience.
Of all the languages that I have to touch professionally, C# feels by far the most opaque and unusable.
Documentation tends to be somewhere between nonexistent and useless, and MSDN's navigation feels like it was designed by a sadist. (My gold standard would be Rustdoc or Scala 2.13-era Scaladoc, but even Javadoc has been... fine for basically forever.) For third-party libraries it tends to be even more dire and inconsistent.
The Roslyn language server crashes all the time, and when it does work.. it doesn't do anything useful? Like cross-project "go-to-definition" takes me to either a list of members or a decompiled listing of source code, even when I have the actual source code right there! (I know there's this thing called "SourceLink" which is.. supposed to solve this? I think? But I've never seen it actually use it in practice.)
Even finding where something comes from is ~impossible without the language server, because `using` statements don't mention.. what they're even importing. (Assuming that you have them at all. Because this is also the company that thought project-scoped imports were a good idea!)
And then there's the dependency injection, where I guess someone thought it would be cute if every library just had an opaque extension method on the god object, that didn't tell you anything about what it actually did. So good luck finding where the actual implementation of anything is.
I almost exclusively work in C# and have never experienced the Roslyn crashes you mentioned. I am using either Rider or Visual Studio though.
> Like cross-project "go-to-definition" takes me to either a list of members or a decompiled listing of source code, even when I have the actual source code right there!
If these are projects you have in the same solution then it should never do this. I would only expect this to happen if either symbol files or source files are missing.
Very mixed feelings about this as there’s a strong case for the decisions made here but it also moves .NET further away from WASMGC, which makes using it in the client a complete non-starter for whole categories of web apps.
It’s a missed opportunity and I can’t help but feel that if the .NET team had gotten more involved in the proposals early on then C# in the browser could have been much more viable.
Those changes affect the .NET runtime, designed for real computers. This does not preclude the existence of a special runtime designed for Wasm with WasmGC support.
The .NET team appears to be aware of WasmGC [0], and they have provided their remarks when WasmGC was being designed [1].
.NET was already incompatible with WASM GC from the start [1]. The changes in .NET 10 are nothing in comparison to those. AFAIK WASM GC was designed with only JavaScript in mind so that's what everyone is stuck with.
1: JavaScript _interoperability_, i.e. same heap but incompatible objects (nobody is doing static JS).
2: Java, Schemes, and many other GC'd languages have more "pure" GC models; C# traded some of that for practicality, and supporting it would've required some complications to the regular JS GCs.
A lot of the features here, stuff like escape analysis for methods etc. does not directly involve the GC - it reduces the amount of objects that go to the GC heap so the GC has less work to do in the first place.
How would this move .NET further away from WASMGC? This is a new GC for .NET, but doesn't add new things to the language that would make it harder to use WASMGC (nor easier).
For example, .NET has internal pointers which WASMGC's MVP can't handle. This doesn't change that so it's still a barrier to using WASMGC. At the same time, it isn't adding new language requirements that WASMGC doesn't handle - the changes are to the default GC system in .NET.
I agree it's disappointing that the .NET team wasn't able to get WASMGC's MVP to support what .NET needs. However, this change doesn't move .NET further away from WASMGC.
Don't mix up mainstream adoption at the same level as regular JavaScript and TypeScript with availability.
Microsoft would wish Blazor would take off like React and Angular, in reality it is seldom used outside .NET shops intranets in a way similar to WebForms.
I wouldn't be surprised if it did take off; classic Wasm semantics were horrible, since you needed a lot of language support to even have simple kludges when referring to DOM objects via indices and extra liveness checking.
WASM-GC will remove a lot of those and make quite a few languages possible as almost first-class DOM-manipulating languages (there will still be kludges, as the objects are opaque, but they'll be far less bad, since they can at least avoid external ID mappings and dual-GC systems that would behave leakily like old IE ref-counts did).
You still usually need to install plenty of moving pieces to produce a wasm file out of the "place language here", write boilerplate initialisation code, and debugging is miserable, only for a few folks to avoid writing JavaScript.
There will always be enthusiasts to take the initial steps, the question is if they have the taste to make it a coherent system that isn't horrible to use.
Counted out over N languages, we should see something decent land before long.
The JVM famously boxes everything though, probably because it was originally designed to run a dynamic language. An array list of floats is an array list of pointers. This created an entire cottage industry of alternative collections libraries with concrete array list implementations.
Arrays have a static fixed size though, making them far less useful in practice. Anything one builds with generics is boxed. Dotnet doesn't have this problem.
Valhalla is over 10 years in the works already and there is still no clear date when or if at all it would be released. It's very difficult to change (or fix) such fundamental things so late in the game.
Almost none of this is in the JVM. Escape analysis is extremely limited on the standard JVM, and it's one of GraalVM's "enterprise" features. You have to pay for it.
One limitation of the stack is that it needs to be contiguous virtual addresses, so it was often limited when devices just didn't have the virtual address space to "waste" on a large stack for every thread in a process.
But 64 bits of virtual address space is large enough that you can keep the stacks far enough apart that even for pretty extreme numbers of threads you'll run out of physical memory before they start clashing. So you can always just allocate more physical pages to the stack as needed, similar to the heap.
I don't know if the .net runtime actually does this, though.
> So you can always just allocate more physical pages to the stack as needed, similar to the heap.
You set the (max) stack size once when you create the thread and you can’t increase the (max) size after that.
Processes see a virtual address space that is handled by the OS, so you would have to involve the OS if you needed to add to the stack size dynamically.
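In .NET terms, the maximum is a constructor argument and there is no API to raise it later (a minimal sketch):

    using System;
    using System.Threading;

    // The maximum stack size is fixed at thread creation; it cannot grow afterwards.
    var worker = new Thread(() => Console.WriteLine("running on a 16 MB stack"),
                            maxStackSize: 16 * 1024 * 1024);
    worker.Start();
    worker.Join();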
> Won't this potentially cause stack overflows in programs that ran fine in older versions though?
That's certainly a possibility, and one that's come up before even between .net framework things migrated to .net core. Though usually it's a sign that something is awry in the first place. Thankfully the default stack sizes can be overridden with config or environment variables.
I am surprised that they didn't already do a lot of optimizations informed by escape analysis, even though they have had value types from the beginning. Hotspot is currently hampered by only having primitive and reference types, which Project Valhalla is going to rectify.
FWIW, Tiered Compilation has been enabled by default since .NET Core 3.1. If code tries to use reflection to mutate static readonly fields and fails, it's the fault of that code.
I always felt that the JIT and the GC are a marriage destined to come together that never entirely found one another. The JIT marks the hot loop in code, and thus can tell the GC in detail what a generation really is and how long a generation's lifetime really lasts.
It can reveal secret cull conditions for long-lived generational objects. If that side branch is hit in the hot loop, all long-term objects of that generation are going to get culled in a single stroke… so bundle them and keep them bundled.
And now they have started using it, at least to detect objects that do not escape lambdas. So it's all stack, no more GC involved at all. It's almost at the static-allocation thing we do for games: if the model proves that every hot loop allocates 5 objects that live until an external event occurs, static allocation and it's done.
Great start. But you could do so much more with this than a JIT that just detects and compiles hot loops: a custom JIT whose goal is to build a complete multi-lifetime model of object generation.
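Roughly what the non-escaping case looks like in code (a sketch; whether the current JIT stack-allocates this exact shape is up to the runtime):

    using System;

    Console.WriteLine(GetDistance(3, 4)); // 5

    static double GetDistance(double x, double y)
    {
        // 'p' never leaves this method, so escape analysis may place it
        // on the stack instead of the GC heap.
        var p = new Point(x, y);
        return Math.Sqrt(p.X * p.X + p.Y * p.Y);
    }

    class Point { public double X, Y; public Point(double x, double y) { X = x; Y = y; } }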
Any interpreter could theoretically do that kind of "marking". Also, JITs do far more than just compile hot loops: _all_ cooperative modern GCs are enabled by JIT semantics for things like read and/or write barriers (these help a GC keep track of objects that keep getting "touched" while the GC works in parallel).
Beyond that, things like detecting fine-grained lifetimes are very, very hard, and the mentioned escape analysis is an optimization that needs to be capped to avoid the halting problem. (1)
A fairly deep coverage of GC behaviours can be found in Bacon's "A Unified Theory of Garbage Collection", where the authors theoretically connect previous work on tracing collectors and reference-counting systems and show that the optimized variants often exist in a design space between them. (2)
I am considering dotnet Maui for a project. On the one hand, I am worried about committing to the Microsoft ecosystem where projects like Maui have been killed in the past and Microsoft has a lot of control. Also XML… On the other hand, I’ve been seeing so many impressive technical things about dotnet itself. Has anyone here used Maui and wants to comment on their experience?
I've been a C# developer my entire career and spent a few years building apps with Xamarin/Uno. At my current company, we evaluated MAUI and Flutter for our mobile app rewrite (1M+ monthly active users).
We first built a proof of concept with 15 basic tasks to implement in both MAUI and Flutter. Things like authentication, navigation, API calls, localization, lists, map, etc. In MAUI, everything felt heavier than it should've been. Tooling issues, overkill patterns, outdated docs, and a lot of small frustrations that added up. In Flutter, we got the same features done much faster and everything just worked. The whole experience was just nicer. The documentation, the community, the developer experience... everything is better.
I love C#, and that's what we use for our backend, but for mobile development Flutter was the clear winner. We launched the new app a year ago and couldn't be happier with our decision.
Aside from using an esoteric language and being a Google product with a risk of shutting down just because, Flutter's game-like UI rendering on a canvas was confirmed to be quite a questionable approach with the whole Liquid Glass transition. If anything, React Native is a more reliable choice: endless supply of React devs and native UI binding similar to MAUI.
I'd say Uno Platform[0] is a better alternative to Flutter for those who do not care much about the native look: it replicates WinUI API on iOS, Mac, Android, and Linux, while also providing access to the whole mature .NET ecosystem – something Flutter can't match for being so new and niche.
It simply can't use it because it does not use native UIs, but instead mimics them with its own rendering engine. This approach worked to some extent during the flat minimalist era, but now that Apple has added so many new animations and transitions, reproducing them all has become close to impossible.
At best, Flutter can implement some shaders for the glassy look of the controls, but something as basic as the Liquid Glass tab bar would require a huge effort to replicate inside Flutter, while in MAUI and RN it's an automatic update.
Not a single user cares about "native ui", it's only a debate among developers. Take the top 20 apps people are using, all of them use their own design system which isn't native.
Flutter will always have multiple advantages against React Native (and even Native toolkits themselves) in terms of upgradability, you can do 6 months of updates with only 30mins of work and make sure it 100% works everywhere.
The quality of the testing toolkit is also something which is still unmatched elsewhere and makes a big difference on the app reliability.
Classic HN comment with unapologetic statements. If Flutter were that good, it wouldn't have flatlined so fast after the initial hype a few years ago. I tried it last year, only to see rendering glitches in the sample project.
All those stats look great on paper, but a few months ago I checked job postings for different mobile frameworks, and Flutter listings were 2-3 times fewer than RN. Go on Indeed and see for yourself.
For a "28% of new iOS apps", the Flutter subreddit is a ghost town with regular "is it dying? should I pick RN?" posts. I just don't buy the numbers because I'm myself in a rather stagnant cross-platform ecosystem, so I know this vibe well.
If I ever leave .NET, no way I'd move to something like Flutter. Even Kotlin Multiplatform is more promising concept-wise. LLMs are changing cross-platform development and Flutter's strong sides are not that important anymore, while its weak sides are critical.
Rendering glitches may be due to the completely new, lightweight rendering engine, made from scratch, that has replaced Skia. Shouldn't be a problem once it matures a bit.
Not everything is related to tech. In my company, for example, they picked React Native because they have the ability to tap into the front-end job market (or they think they do), certainly not for its intrinsic qualities.
Personally I've done a 50k+ line project in Flutter and I didn't hit any of these. There's been a few issues for sure but nowhere near what I experienced with React Native (and don't start me on native itself)
Speaking as an experienced desktop .NET Dev, we've avoided it due to years of instability and no real confidence it'll get fully adopted. We've stuck with WPF, which is certainly a bit warty, but ultimately fine. If starting fresh at this point I'd give a real look at Avalonia, seems like they've got their head on their shoulders and are in it for the long haul.
Last time I had to create a C# desktop app, I went with Blazor Hybrid [1]. I'd say it's "Electron for C#". I don't want to use outdated stuff like WPF / WinForms, and I don't trust more recent frameworks, so for me building on top of the web platform felt safest.
I highly recommend using MvvmCross with native UIs instead of MAUI: you get your model and view model 100% cross-platform, and then build native UIs twice (with UIKit and Android SDK), binding them to the shared VM. It also works with AppKit and WinUI.
In the past it was rather painful for a solo dev to do them twice, but now Claude Code one-shots them. I just do the iOS version and tell it to repeat it on Android – in many cases 80% is done instantly.
Just in case, I have an app with half a million installs on both stores that has been running perfectly since 2018 using this ".NET with native UIs" approach.
I used MAUI at my previous job to build 3 different apps, used only on mobile (Android and iOS). I don't know why many people dislike XAML; to me it felt natural to use for UI. I researched Flutter and liked MAUI/XAML more, although the development loop felt smoother with Flutter. What I didn't like was the constant bugs: with each new version that I was eager to update to, hoping it would fix current issues, something new appeared. After spending countless hours searching through the project's GitHub, I am under the impression that there aren't many resources dedicated to MAUI development from Microsoft; the project is carried forward by a few employees and volunteers. If I were to start another project, I would seriously look into Avalonia. But I was always a backend guy, so now at my current job I do server backend development in C# and couldn't be happier.
If you're Windows-based, I'd unironically consider WinForms: it's been re-added to modern .NET on Windows, and it's one of the easiest and best ways to make simple GUI applications.
Sadly it's not cross-platform, which is a benefit of MAUI.
I don't really understand why Microsoft didn't do a Tauri-like thing for C# devs instead of this MAUI stuff. It would be a tiny project in comparison, and it wouldn't go completely against the grain like MAUI does. If you want a write once / run in more places compromise, the browser already does that very well.
Because web UI for a desktop app sucks compared to actual native UI. As a user, any time that I see an app uses Electron, Tauri or any of that ilk, I immediately look for an alternative because the user experience will be awful.
Maui Blazor Hybrid has a cool model where the HTML UI binds to native code (not WASM) for mobile and desktop. That is the closest you can get to Tauri-like. If you want to run that same app in a browser, then it'll use Blazor with WASM.
MAUI Blazor Hybrid is great if you don't want to learn XAML. Apple killed Silverlight; Microsoft kept it running for ~20 years. If you stayed close to what Xamarin was, the migration to MAUI isn't bad from what I've seen.
I would say it really depends on your target. If you want only mobile, then there are different options (see other comments). If you want only desktop, then Avalonia is good. However, if you want both (like my team), then MAUI is where we ended up. We use MAUI Blazor, as we also want to run on a server. We're finding iOS difficult to target, but I don't think that has anything to do with MAUI.
The Benchmarks Game[0] shows C# just behind C/C++ and Rust across a variety of benchmark types. C# has good facilities for dipping into unmanaged code and utilizing hardware intrinsics, so you'd have to tap into those and bypass managed code in many cases to achieve higher performance.
There are plenty of domains where the competition is not one of pure latency (where FPGAs and custom hardware have even taken over from C++). In these domains managed languages can be sufficient to get to "fast enough" and the faster iteration speed and other comforts they provide can give an edge over native languages.
LINQ doesn't need the JIT for that. I don't even think it is the JIT's responsibility to be aware of a specific library and optimize for it.
LINQ does a lot of work behind the scenes to optimize for speed and reduce allocations; an example can be found here [1]. These optimizations are mostly about reducing the various LINQ patterns to simple for loops.
Without JIT support, using LINQ involves at least allocating an IEnumerator object on the heap, plus a closure object and a delegate over it (if the delegate captures local vars). Each call to `Select` or `Where` is also a virtual call.
This is hugely expensive compared to a plain for loop. With this update, it seems the JIT can do escape analysis to stack-allocate the closure object, and the delegate as well (it could devirtualize calls even before that). It appears to have everything needed to optimize away the whole LINQ overhead, though I'm not sure what happens in practice.
It'd be neat since that was a major argument against actually using LINQ in perf-sensitive code.
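To make the overhead concrete, a small sketch; the capture of `threshold` is what forces the closure allocation, and whether the JIT elides it in any given case is its call, so treat this as illustrative:

    using System;
    using System.Linq;

    int[] numbers = { 5, 12, 42, 7 };
    int threshold = 10;

    // The LINQ form historically allocates: a closure object capturing `threshold`,
    // a delegate over that closure, and an enumerator for the Where iterator; each
    // MoveNext is an interface call. Escape analysis lets the JIT stack-allocate
    // some of these when it can prove they don't outlive the call.
    int sum = numbers.Where(n => n > threshold).Sum();

    // The hand-written loop has none of that overhead.
    int sum2 = 0;
    foreach (var n in numbers)
        if (n > threshold)
            sum2 += n;

    Console.WriteLine($"{sum} {sum2}"); // 54 54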
I think DATAS also has more knobs to tune than the old GC. I plan to set the Throughput Cost Percentage (TCP) via System.GC.DTargetTCP to some low value so that it has little impact on latency.
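If anyone wants to try the same, it should be settable from runtimeconfig.json like other `System.GC.*` properties (sketch – the value is illustrative; double-check the knob name against the GC config docs):

    {
      "runtimeOptions": {
        "configProperties": {
          "System.GC.DTargetTCP": 2
        }
      }
    }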
> You may not disclose the results of any benchmark test of the .NET Framework component of the Software to any third party without Microsoft’s prior written approval.
I seem to vaguely recall such a thing from way back in the early days, but the only copy[1] of the .Net Framework EULA I could readily find says it's OK as long as you publish all the details.
It's because you aren't looking at 20-year-old EULAs:
> 3.4 Benchmark Testing. The Software may contain the Microsoft .NET Framework. You may not disclose the results of any benchmark test of the .NET Framework component of the Software to any third party without Microsoft’s prior written approval.
This person is not likely familiar with the history of the .net framework and .net core because they decided a long time ago they were never going to use it.
As long as it's your deployment target and nothing else. For development, both macOS and Linux continue to be second-class citizens, and I don't see this changing, as it goes against their interests. In most .NET shops around me, the development and deployment tooling is so closely tied to VS that you can't really not use it.
It's fine if you stick to JetBrains and pay for their IDE (or do non-commercial projects only), and either work in a shop which isn't closely tied to VS (basically non-existent in my area), or work by yourself.
No. My entire office is Linux and macOS. Not a single windows machine. Mixture of people using VS Code and Rider. No issues building and deploying to Linux. We pay for rider. Pay nothing for vscode.
> The development and deployment tooling is so closely tied to VS that you can't really not use it.
Development tooling: it's 50-50. Some use Visual Studio, some use Rider. It's fine. The only drawback is that VS Live Share and the JetBrains equivalent don't interoperate.
Deployment tooling: there is deployment tooling tied to the IDE? No-one uses that; it seems like a poor idea. I see automated build/test/deploy pipelines in GitHub Actions and in Octopus Deploy. TeamCity still gets used, I guess.
It's true though that the most common development OS is Windows by far (with Mac as second) and the most common deployment target by far is Linux.
However the fact that there is close to no friction in this dev vs deploy changeover means that the cross-platform stuff just works. At least for server-side things such as HTTP request and queue message processing. I know that the GUI toolkit story is more complex and difficult, but I don't have to deal with it at all so I don't have details or recommendations.
VS has the “Publish” functionality for direct deployment to targets. It works well for doing that and nothing else. As you said, CI/CD keeps deployment IDE agnostic and has far more capabilities (e.g. Azure DevOps, GitHub Actions).
Yeah? Ncurses still a thing? I only ask because that's the only api name I remember from forever ago.
I worked on a MUD on Linux right after high school for a while. Spent most of the time on the school's BSDi server prior to that, though.
Then I went Java, and as they got less permissive and .NET got more permissive, I switched at some point. I've really loved the direction C# has gone, merging in functional programming idioms, and have stuck with it for most personal projects, but I am currently learning GDScript for some reason, even though Godot has C# as an option.
The only thing that has become "less permissive" is Oracle's proprietary OpenJDK build, which isn't really needed or recommended in 99.9% of cases (except for when the vendor of your proprietary application requires it to provide support).
The rest of the ecosystem is "more permissive" than .NET since there are far more FOSS libraries for every task under the sun (which don't routinely go commercial without warnings), and fully open / really cross-platform development tooling, including proper IDEs.
The fact that you even need to be very careful when choosing a JDK is a lot bigger problem than some simple, easily replaceable library going commercial (not that this hasn't also happened in Java land). Also, .NET has been fully open and really cross-platform for a long time already, and it includes more batteries than Java out of the box; you may not even need any third-party dependencies (although there are also plenty to choose from – 440k packages on NuGet). .NET also has proper IDEs – or is JetBrains Rider not a proper IDE for you?
Funny, because one of the libraries I was using at the time went hyper-commercial (javafxports). Java burned me on two fronts at the very same time and lost me. YMMV, I guess. It's always a good time to try something new anyway... I also moved to Kotlin on Android and couldn't be happier with it; it's a clearly superior language.
It works just fine out of the box. The articles/manuals are just if you want to really understand how it works and get the most out of it. What's the issue with that?
In my 20+ years using C#, there's only been one instance where I needed to explicitly control some behavior of the GC (it would prematurely collect the managed handle on a ZMQ client) and that only required one line of code to pin the handle.
It pretty much never gets in your way for probably 98% of developers.
Dr. Dobb's and The C/C++ Users Journal archives are full of articles and ads for special memory allocators, because the ones in the standard library for C or C++ also don't work in many cases; they are only good enough for general-purpose allocation.
You need these settings when you drive your application hard into circumstances where manual memory management arguably starts making sense again: humongous heaps; lots of big, unwieldy objects; or tight latency (or tail latency) requirements. But unless you're using something like Rust or Swift, the price of manual memory management is the need to investigate segmentation faults. I'd prefer to spend developer time on feature development and benchmarking instead.
A hobby audio and text analysis application I've written, with no specific concern for low level performance other than algorithmically, runs 4x as fast in .net10 vs .net8. Pretty much every optimization discussed here applies to that app. Great work, kudos to the dotnet team. C# is, imo, the best cross platform GC language. I really can't think of anything that comes close in terms of performance, features, ecosystem, developer experience.
C# will be a force to reckon with if/when discriminated unions finally land as a language feature.
I think people who last looked at C# 10 years ago or haven't adapted to new language features seriously don't know how good C# is these days.
Switch expressions with pattern matching are absolutely killer[0] for their terseness (quick taste below).
Also, it is possible to use OneOf[1] and Dunet[2] to get access to DUs today.
[0] https://timdeschryver.dev/blog/pattern-matching-examples-in-...
[1] https://github.com/mcintyre321/OneOf
[2] https://github.com/domn1995/dunet
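A tiny self-contained taste of that terseness (the shapes are made up; the matching itself is standard C#):

    using System;

    public abstract record Shape;
    public sealed record Circle(double R) : Shape;
    public sealed record Rect(double W, double H) : Shape;

    public static class Geometry
    {
        // One expression covers all cases; positional patterns pull out the fields.
        public static double Area(Shape s) => s switch
        {
            Circle(var r)      => Math.PI * r * r,
            Rect(var w, var h) => w * h,
            _ => throw new ArgumentOutOfRangeException(nameof(s)),
        };
    }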
I write C# and rust fulltime. Native discriminated unions (and their integration throughout the ecosystem) are often the deciding factor when choosing rust over C#.
Very hard to imagine teams cross shopping C# and Rust and DU's being the deciding factor. The tool chains, workflows, and use cases are just so different, IMO. What heuristics were your team using to decide between the two?
This surprises me.
If you want the .NET ecosystem and GC conveniences, there is already F#. If you want no GC and RAII-style control, then you would already pick Rust.
> OneOf
I do like/respect C# but come on now. I know they're fixing it but the rest of the language was designed the same way and thus still has this vestigial layer of OOP-hubris
It's up to each team to decide how they want to write their code. TypeScript is the same with JS having a "vestigial" `class` (you can argue that "it's not the same", but nevertheless, it is possible to write OOP style code in JS/TS and in fact, is the norm in many packages like Nest.js).
The language is a tool; teams decide how to use the tool.
For me, it will be if they ever get checked errors of some sort. I don’t want to use a language with unchecked exceptions flying about everywhere. This isn't saying I want checked exceptions either, but I think if they get proper unions and then have some sort of error union type it would go a long way.
You can get an error union now: https://github.com/amantinband/error-or
The issue is the ecosystem and standard library: they will still be throwing unchecked exceptions everywhere.
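A hand-rolled sketch of the error-union idea (deliberately not ErrorOr's real API), keeping the throwing surface at one boundary so callers get a value instead of an exception:

    using System;

    // Minimal result type; libraries like ErrorOr are richer takes on the same idea.
    public readonly record struct Result<T>(T? Value, string? Error)
    {
        public bool IsOk => Error is null;
        public static Result<T> Ok(T value) => new(value, null);
        public static Result<T> Fail(string error) => new(default, error);
    }

    public static class SafeParse
    {
        // Wrap the BCL call once; everything past this point is value-based.
        public static Result<int> Int(string s) =>
            int.TryParse(s, out var n) ? Result<int>.Ok(n) : Result<int>.Fail($"not an int: '{s}'");
    }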
> C# is, imo, the best cross platform GC language. I really can't think of anything that comes close
How about F#? Isn't F# mostly C# with better ergonomics?
Personally I love F#, but I feel the community is probably even smaller than OCaml's...
I once got a temporary F# role without any F# experience simply by having 7 YoE with C# and the knowledge that F# exists.
As much as I'd like to do more with it, the "just use F#" idea flaunted in this thread is a distant pipe dream for the vast majority of teams.
He means the runtime ".NET CLR". They have the same runtime.
It is but in practice it’s very hard to find programmers for it.
Lmao, functional programming is far from ergonomic
F# is hardly modern functional programming. It's more like a better python with types. And that's much more ergonomic than C#.
Python and F# are not very similar. A better comparison is OCaml. F# and OCaml are similar. They're both ML-style functional languages.
I'd much rather code F# than Python, it's more principled, at least at the small scale. But F# is in many ways closer to modern mainstream languages than a modern pure functional language. There's nothing scary about it. You can write F# mostly like Python if you want, i.e. pervasive mutation and side effects, if that's your thing.
If Python is the only language you have to compare other languages to, all other programming languages are going to look like "Python with X and Y differences". It makes no sense to compare Python to F# when OCaml exists and is a far closer relative. F# isn't quite "OCaml on .NET" but it's pretty close.
It absolutely does make sense to compare it to the worlds most popular programming language, especially when dismissed as "functional programming". Who benefits from an OCaml comparison? You think F# should be marketed to OCaml users who might want to try dotnet? That's a pretty small market.
Python is the world's most used scripting language, but for application programming languages there are other languages that are widely used and better to compare to F#. For example, C# and Java.
It's so weird to describe F# as "Python with Types." First of all, Python is Python with Types. And C# is much more similar to Python than F# is.
It all depends on the lens one chooses to view them. None of them are really "functional programming" in the truly modern sense, even F#. As more and more mainstream languages get pattern matching and algebraic data types (such as Python), feature lambdas and immutable values, then these languages converge. However, you don't really get the promises of functional programming such as guaranteed correct composition and easier reasoning/analysis, for that one needs at least purity and perhaps even totality. That carries the burden of proof, which means things get harder and perhaps too hard for some (e.g. the parent poster).
If purity is a requirement for "real" functional programming, then OCaml or Clojure aren't functional. Regarding totality, even Haskell has partial functions and exceptions.
Both OCaml and Clojure are principled and well designed languages, but they are mostly evolutions of Lisp and ML from the 70s. That's not where functional programming is today. Both encourage a functional style, which is good. And maybe that's your definition of a "functional language". But I think that definition will get increasingly less useful over time.
What is an example of a real functional language for you?
Haskell. But there are other examples of "pure functional programming". And the state of the art is dependently typed languages, which are essentially theorem provers but can be used to extract working code.
Like Lean 4?
I, too, am curious and keep checking back for a reply!
Sure, Python has types as part of the syntax, but Python doesn't have types like Java, C#, etc. have types. They are not pervasive and the semantics are not locked down.
Exactly what I've observed in practice because most devs have no background in writing functional code and will complain when asked to do so.
Passing or returning a function seems a foreign concept to many devs. They know how to use lambda expressions, but rarely write code that works this way.
We adopted ErrorOr[0] and have a rule that core code must return ErrorOr<T>. Devs have struggled with this and continue to misunderstand how to use the result type.
[0] https://github.com/amantinband/error-or
That really depends on your preferred coding style.
honestly this sounds like you've never really done it. FP is much better for ergonomics, developer productivity, correctness. All the important things when writing code.
I like FP, but your claim is just as baseless as the parent’s.
If FP was really better at “all the important things”, why is there such a wide range of opinions, good but also bad? Why is it still a niche paradigm?
Java?
It is supported on more platforms, has more developers, more jobs, more OSS projects, and is more widely used (TIOBE 2024). Performance was historically better, but C# caught up.
Reified generics, value types, and LINQ are just a few things you would miss when going to Java. Also, Java and .NET are both big, so that's not a real argument here. Not that I would trust the TIOBE index too much, but as of September 2025, C# is right behind Java in 5th place.
My experience has been that .NET programs are typically more tunable for greater perf than Java, and have been for many years now, even if it doesn't come free out of the box – and that tunability is generally what matters with performance. The ability to further optimize what needs optimizing means you generally end up faster for your business domain than the alternative; with Java code it is generally harder and/or less ergonomic to do this.
For example, just having value types and reified generics in combination meant you could write generic code against value types, which for hot algorithmic loops or certain data structures was a big win w.r.t. memory and CPU consumption. For a collection type critical to an app I wrote many years ago, the use of value types almost halved the memory footprint compared to the best Java one I could find, and it was somewhat faster, with fewer cache misses. The Java alternative wasn't an amateur effort either, but they couldn't get the perf out of it even with significant effort.
Java also, last time I checked, doesn't have a value decimal type for financial math, which IMO can be a significant performance loss for financial/money-based systems. For anything with math and lots of processing/data structures, I would find .NET significantly faster after doing the optimization work. If I had to choose between the two targets these days, I would find .NET in general an easier target w.r.t. performance. Of course, perf isn't everything, depending on the domain.
Having worked with C# professionally for a decade – going through the changes with LINQ, async/await, Roslyn, and the rise of .NET Core, to .NET Core becoming .NET – I disagree. I certainly think that C# is a great tool and that it's the best it has ever been. It also relies on very implicit behaviour, though; it is built upon OOP design principles and a bunch of "needless" abstraction, things I personally have come to view as anti-patterns over the years. This isn't because I specifically dislike C#; you could find me saying something similar about Java.
I suspect that the hidden indirection and runtime magic may be part of why you love the language. In my experience, however, it leads to poor observability, opaque control flow, and difficult debugging sessions in every organisation and company I've ever worked for. It's fair to argue that this is because the people working with C# are bad at software engineering, similar to how Uncle Bob will always be correct when he calls teams out for getting his principles wrong. To me that means the language itself is a poor design fit for software development in 2025, which is probably why we see more and more Go adoption, with its explicit philosophies. Though to be fair, Python seems to be "winning" as far as adoption goes in the cross-platform GC language space, and having worked with Django Ninja I can certainly see why: it's so productive, and with stuff like Pyrefly, uv, and Ruff it's very easy to make it a YAGNI experience with decent type control.
I am happy you enjoy C#, though, and it's great to see that it is evolving. If they did more to enhance the developer experience so that people were less inclined to do bad engineering on a Thursday afternoon after a long day of useless meetings, then I would probably agree with you. I'm not sure any of the changes going into .NET 10 are moving in that direction, though.
You are missing the forest for the trees.
C# has increasingly become more terse (e.g. switch expressions, collection initializers, object initializers, etc) and, IMO, is a good balance between OOP and functional[0].
Functions are first-class objects in C#, and teams can write functional-style C# if they want. But I suspect this doesn't scale well in human terms, as we've encountered a LOT of trouble trying to get TypeScript devs to adopt more functional programming techniques. We've found that many devs like to use functional code (e.g. LINQ, `.filter()`, `.map()`) but dislike writing functional code, because most devs are not wired this way and have no training in how to write functional code or in understanding monads. Asking these devs to use a monad has been like asking kids to eat their carrots.
Across much of our backend TS codebase, there are very, very few cases where developers accept a function as input or return a function as output (almost all of it written by 1 dev out of a team of ~20).
Having been working with Nest.js for a while, it's clear to me that most of these abstractions are not "needless" but actually necessary to manage the complexity of apps beyond a certain scale, and the reasons are less technical and more about scaling teams and communicating concepts. Anyone who looks at Nest.js will immediately see the similarities to Spring Boot or .NET Web APIs, because it fills the same niche. Whether you call a `*Factory` a "factory" or something else, the core concept of what the thing does still exists whether you're writing C#, Java, Go, or JS: you need a thing that creates instances of things.
You can say "I never use a factory in Go", but if you have a function that creates other things or other functions, that's a factory – you're just not using the nomenclature (tiny sketch below). Good for you? Or maybe you misunderstand why there is standard nomenclature for common patterns in the first place, and are associating these patterns with OOP when in reality they are almost universal, human-language abstractions for programming patterns.
[0] https://medium.com/itnext/getting-functional-with-c-6c74bf27...
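The same pattern with and without the OOP spelling, for illustration (names made up):

    using System;

    // The OOP spelling of a factory...
    public interface IGreeterFactory
    {
        Func<string, string> Create(string prefix);
    }

    public sealed class GreeterFactory : IGreeterFactory
    {
        public Func<string, string> Create(string prefix) => name => $"{prefix}, {name}!";
    }

    // ...and the bare-function spelling of the exact same pattern:
    public static class Greeters
    {
        public static Func<string, string> Make(string prefix) => name => $"{prefix}, {name}!";
    }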
I notice that none of the examples in your blog entry on functional C# deal with error handling. I know that is not the point of your article, but it is actually one of my key issues with C# and its reliance on the implicit: like so many other parts of C#, you'd probably hand it over to an exception handler. I'd much rather you dealt with it explicitly, right where it happens, and I would prefer if you were actually forced to do so in examples like yours. This is because implicit error handling is hard. I have no doubt you do it well, but it is frankly rare to meet a C# developer with as much understanding of the language as you clearly have.
I think this is an excellent blog post by the way. My issues with C# (and this applies to a lot of other GC languages) is that most developers would learn a lot from your article. Because none of it is an intuitive part of the language philosophy.
I don't think you should never use OOP or abstractions, and I don't think there is a golden rule for when to use either. I do think you need to understand why you are doing it, though, and C# sort of makes people go to abstractions first, not last, in my experience. I don't think these changes to the GC are going to help people who write C# without understanding C#, which is frankly most C# developers around here. Because Go is opinionated and explicit, it's simply an issue I have to deal with less in Go teams. It's not an issue I deal with less in Python teams, but then, everyone who loves Python knows it sucks.
My team just recently made the switch from a TS backend to a C# backend for net new work. When we made this switch, we also introduced `ErrorOr`[0] which is a monadic result type.
I would not have imagined this to be controversial or difficult, but it turns out that developers really prefer and understand exceptions. That's because for a backend CRUD API it's really easy to just throw and catch at a global HTTP pipeline exception filter, and for 95% of cases this is OK and good enough; you're not really going to be able to handle the error, nor is it worth it to handle it.
We'll stick with ErrorOr, but developers aren't using it as a monad, simply unwrapping the value and the error, because, as it turns out, most devs just have a preference for and greater familiarity with imperative try-catch handling of errors. And practically, in an HTTP backend, there's nothing wrong in most cases with having a global exception filter do the heavy lifting, unless the code path has a clear recovery path.
I do think there is a "silver rule": OOP when you need structural scaffolding, functional when you have "small contracts" over big ones. An interface or abstract class is a "big contract": to understand how to use it, you often have to understand a larger surface area. A function signature is still a contract, but a micro-contract. Depending on what you're building, having structural scaffolding and "big contracts" makes more sense than having lots of micro-contracts (functions). Case in point: REST web APIs make a lot more sense with structural scaffolding. If you write one without the structural scaffolding of OOP, it ends up with a lot of repetition and even worse opaqueness, with functions wrapping other functions.
The silver rule for OOP vs FP for me: OOP for structural templating of otherwise repetitive code and "big contracts"; FP for "small contracts" and algorithmically complex code. I encourage devs on the team to write both styles depending on what they are building and the nature of the complexity in their code. I think this is also why TS and C# are a sweet spot, IMO: they straddle OOP and have just enough FP when needed.
[0] https://github.com/amantinband/error-or
You introduced a pattern that is simply different from the usual in C#. It's also not clearly better; it's different. In languages designed around result types like this, the ergonomics of such a type are usually better.
All the libraries you use and all methods from the standard library use exceptions. So you have to deal with exceptions in any case.
There are also a million or so libraries that implement types like this. There is no standard, so no interoperability. And people have to learn the peculiarities of the chosen library.
I like result types like this, but I'd never try to introduce them in C# (unless at some point they get more language support).
These are developers who had never written C# before, so there's no difference between whether it's language-supported or not. It was in the core codebase on day 1 when they onboarded, so it may as well have been native.
But what I take away from this is that Go's approach to error handling is "also not clearly better, it's different".
Even if C# had core language support for result types, you would be surprised how many developers would struggle with it (that is my takeaway from this experience).
If you're not coming from a strongly typed functional language, it's still a pattern you're not used to. Which might be a roundabout way of saying that I agree with your last part: developers without contact with that kind of language will struggle at first with a pattern like this.
I know how to use this pattern, but the C# version still feels weird and cumbersome. Usually you combine this with pattern matching and other functional features, and the whole thing becomes convenient in the end. That part is missing in C#. And I think it makes a difference in understanding, as you would usually build on your experience with pattern matching to understand how to handle this case of Result|Error.
[0] https://timdeschryver.dev/blog/pattern-matching-examples-in-...
None of your examples use native C# pattern matching. And without language support like e.g. discriminated unions you can't have exhaustive pattern matching in C#. So you'll have to silence the warnings about the missing default case or always add one, which is annoying.
I mean, it's not a stretch to see how you can use native pattern matching with ErrorOr result types.
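Something in this direction, say (a hand-rolled stand-in result type so it stays self-contained; this is my sketch, not ErrorOr's API nor the commenter's original program):

    using System;

    Console.WriteLine(Describe(Parse("42")));   // got 42
    Console.WriteLine(Describe(Parse("nope"))); // failed: 'nope' is not an int

    static Parsed Parse(string s) =>
        int.TryParse(s, out var n) ? new Ok(n) : new Err($"'{s}' is not an int");

    // Native positional patterns over the result; the default arm is the price
    // of not having compiler-checked exhaustiveness over a class hierarchy.
    static string Describe(Parsed p) => p switch
    {
        Ok(var v)  => $"got {v}",
        Err(var m) => $"failed: {m}",
        _          => throw new InvalidOperationException("unreachable"),
    };

    abstract record Parsed;
    sealed record Ok(int Value) : Parsed;
    sealed record Err(string Message) : Parsed;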
Still not functional enough? Or do you just not like C#? No point moving goalposts.
What sort of issues do you get debugging?
My experience of .NET even from version 1 is that it has the best debugging experience of any modern language, from the visual studio debugger to sos.dll debugging crash dumps.
I have been coding in C# for 16 years and I have no idea what you mean by "hidden indirection and runtime magic". Maybe it's just invisible to me at this point, but GC is literally the only "invisible magic" I can think of that's core to the language. And I agree that college-level OOP principles are an anti-pattern; stop doing them. C# does not force you to do that at all, except very lightly in some frameworks where you extend a Controller class if you have to (annoying but avoidable). Other than that, I have not used class inheritance a single time in years, and 98% of my classes and structs are immutable. Just don't write bad code; the language doesn't force you to at all.
Hidden indirection & runtime magic almost always refer to DI frameworks.
Reflection is what makes DI feel like "magic". Type signatures don't mean much in reflection-heavy code. Newcomers won't know many of a DI framework's implicit behaviors and conventions until they either shoot themselves in the foot or get RTFM'd.
My pet theory is that this kind of "magic" is what makes some people like Golang, which favors explicit wiring over implicit DI-framework magic.
Reminds me of C advice: "Just don't write memory leaks & UAF!" Some examples:
- Attributes can do a lot of magic that is not always obvious or well documented.
- ASP.NET pipeline.
- Source generators.
I love C#, but I have to admit we could have done with less “magic” in cases like these.
Attributes do nothing at all on their own. It's someone else's code that does magic by reflecting on your types and looking for those attributes. That may seem like a trivial distinction, but there's a big difference between "the language is doing magic" and "some poorly documented library I'm using is doing magic". I rarely use and generally dislike attributes. I sometimes wonder if C# would be better off without them, but there are some legitimate usages like interop with unmanaged code that would be really awkward any other way. They are OK if you think of them as a weakly enforced part of the type system, and relegate their use to when a C# code object is representing something external like an API endpoint or an unmanaged call. Even this is often over-done.
Yes, the ASP.NET pipeline is a bit of a mess. My strategy is to plug in a couple adapters that allow me to otherwise avoid it. I rolled my own DI framework, for instance.
Source generators are present in all languages and terrible in all languages, so that certainly is not a criticism of C#. It would be a valid criticism if a language required you to use source generators to work efficiently (e.g. limited languages like VB6/VBA). But I haven't used source generators in C# in at least 10 years, and I honestly don't know why anyone would at this point.
Maybe it sounds like I'm dodging by saying C# is great even though the big official frameworks Microsoft pushes (not to mention many of their tutorials) are kinda bad. I'd be more amenable to that argument if it took more than an afternoon to plug in the few adapters you need to escape their bonds and just do it all your own way with the full joy of pure, clean C#. You can write bad code in any language.
That's not to say there's nothing wrong with C#. There are some features I'd still like to see added (e.g. co-/contra-variance on classes & structs), some that will never be added but I miss sometimes (e.g. higher-kinded types), and some that are wonderful but lagging behind (e.g. Expressions supporting newer language features).
A common way to work around this is to provide an `IsSet` boolean:
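(A sketch of the shape; the DTO and property names are made up.)

    // `Name` alone can't distinguish "field omitted" from "field explicitly set
    // to null", so the setter records that a value arrived.
    public class UpdateUserRequest
    {
        private string? _name;

        public bool NameIsSet { get; private set; }

        public string? Name
        {
            get => _name;
            set { _name = value; NameIsSet = true; }
        }
    }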
Now you can check whether the value is set. However, you can see how tedious this gets without a source generator. With a source generator, we simply take nullable partial properties and generate the stub automatically: a single marker attribute will generate as many `Is*Set` properties as needed.
The other use case is AOT: avoiding reflection by generating the source at compile time instead.
I don't really consider any of these magic, particularly source generators.
It's just code that generates code. Some of the syntax is awkward, but it's not magic imo.
I am paid to work in Java and C# among Go, Rust, Kotlin, Scala and I wholeheartedly agree.
I hate the implicitness of Spring Boot, Quarkus etc. as much as the one in C# projects.
All these magic annotations that save you a few lines of code until they don't, because you get runtime errors due to incompatible annotations.
And then it takes digging through pages of docs or even reporting bugs on repos instead of just fixing a few explicit lines.
Explicitness and verbosity are mostly orthogonal concepts!
I disagree on this.
I am at a (YC, series C) startup that just recently made the switch from TS backend on Nest.js to C# .NET Web API[0]. It's been a progression from Express -> Nest.js -> C#.
What we find is that having attributes in both Nest.js (decorators) and C# allows one part of the team to move faster and another smaller part of the team to isolate complexity.
The indirection and abstraction are explicit decisions to reduce verbosity for 90% of the team for 90% of the use cases because otherwise, there's a lot of repetitive boilerplate.
The use of attributes, reflection, and source generation make the code more "templatized" (true both in our Nest.js codebase as well as the new C# codebase) so that 90% of the devs simply need to "follow the pattern" and 10% of the devs can focus on more complex logic backing those attributes and decorators.
Having the option to dip into source generation, for example, is really powerful in allowing the team to reduce boilerplate.
[0] We are hiring, BTW! Seeking experienced C# engineers; very, very competitive comp and all greenfield work with modern C# in a mixed Linux and macOS environment.
> so that 90% of the devs simply need to "follow the pattern" and 10% of the devs can focus on more complex logic backing those attributes and decorators.
Works well until the 10% who understand what's behind the scenes leave, and you are left with a bunch of developers copying and pasting magic patterns that they don't understand.
I love Express because things are very explicit: this is the JSON schema being added to this route; this route is taking in JSON parameters; this is the function that handles this POST request endpoint.
I joined a team using Spring Boot and the staff engineer there couldn't tell me if each request was handled by its own thread or not, he couldn't tell me what variables were shared across requests vs what was uniquely instantiated per request. Absolute insanity, not understanding the very basics of one's own runtime.
Meanwhile in Express, the threading model is stupid simple (there isn't one) and what is shared between requests is obvious (everything declared in an outer scope).
The trade-off, though, is that patterns and behind-the-scenes source code generation are another layer that the devs who follow them have to deal with when debugging and understanding why something isn't working. They either spend more time understanding the bespoke things or are bottlenecked relying on a team or person to help them get through those moments. It's a trade-off, and one that has bitten me and others before.
I am not talking about C# specifically, but it applies there too, and I agree.
Implicit and magic look nice at first, but sometimes they can be annoying. I remember the first time I tried Ruby on Rails and I was looking for a piece of config.
Yes, "convention over configuration". Namely, ungreppable and magic.
This kind of stuff must be used with a lot of care.
I usually favor explicit and, for config, plain data (usually TOML).
This can be extended to hidden or non-obvious allocations and other stuff (when I work with C++).
It is better to know what is going on when you need to, and burying it under a couple of layers can make things unnecessarily difficult.
Would you rather a team move faster and be more productive, or be a purist and disallow abstractions to avoid some potential runtime tracing challenges that can be mitigated with good use of OTel and logging? I don't know about you, but I'm going to bias toward productivity and use integration tests + observability to safeguard the code.
Disallow bespoke abstractions and use the industry-standard ones instead. People who make abstractions overestimate how much more productive they're making everyone else. Your user base is much smaller than popular libs', so your docs and abstractions are not as battle-tested and easy to use as you think.
This is raw OpenFGA code compared with an abstraction we wrote on top of it. You would make the case that the former is better than the latter?
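The flavor of the wrapper is roughly this (a made-up sketch that hides the SDK behind an interface, not the real OpenFGA client surface):

    using System.Threading.Tasks;

    // Strongly-typed ids carry their own "user:" / "document:" prefixes,
    // so a "usr:alice_123"-style typo becomes unrepresentable.
    public readonly record struct UserId(string Value)
    {
        public override string ToString() => $"user:{Value}";
    }

    public readonly record struct DocumentId(string Value)
    {
        public override string ToString() => $"document:{Value}";
    }

    public enum Relation { Viewer, Editor }

    // Thin seam over the raw client; the only place that still sees raw strings.
    public interface ITupleWriter
    {
        Task Write(string user, string relation, string obj);
    }

    public sealed class Authz
    {
        private readonly ITupleWriter _writer;
        public Authz(ITupleWriter writer) => _writer = writer;

        public Task Grant(UserId user, Relation relation, DocumentId doc) =>
            _writer.Write(user.ToString(), relation.ToString().ToLowerInvariant(), doc.ToString());
    }

Usage then reads `await authz.Grant(new UserId("alice_123"), Relation.Viewer, new DocumentId("xyz"));` instead of hand-assembling stringly-typed tuples at every call site.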
In the first example, I have to learn and understand OpenFGA; in the second example, I have to learn and understand OpenFGA and your abstractions.
Well, the point of using abstractions is that you don't need to know the things being abstracted. I think the abstraction here is self-explanatory about what it does, and you can certainly understand and use it without needing to understand all the specifics behind it.
More importantly: it prevents "usr:alice_123" instead of "user:alice_123" by using the type constraint to generate the prefix for the identifier.
How much faster are we talking? Because you'd have to account for the time lost debugging annotations.
What are you working on that you're debugging annotations every day? I'd say you've made a big mistake if you're doing that / you didn't read the docs and don't understand how to use the attribute.
(Of course you are also free to write C# without any of the built in frameworks and write purely explicit handling and routing)
On the other hand, we write CRUD every day so anything that saves repetition with CRUD is a gain.
I don't debug them every day, but when I do, it takes days for a nasty bug to be worked out.
Yes, they make CRUD stuff very easy and convenient.
It has been worth the abstraction in my organization with many teams – think 1000+ engineers, at minimum. It helps to abstract as necessary so that new teammates who want to simply add a new endpoint still follow all the legal, security, and data-enforcement rules.
Better than no magic abstractions imo. In our large monorepo, LSP feedback can often be so slow that I can’t even rely on it to be productive. I just intuit and pattern match, and these magical abstractions do help. If I get stuck, then I’ll wade into the docs and code myself, and then ask the owning team if I need more help.
That's the deal with all metaprogramming.
People were so afraid of macros that they ended up with something even worse.
At least with macros I don't need to consider the whole of the codebase and every library when determining what is happening. Instead I can just... Go to the macro.
C# source generators are...just macros?
They are not. They are generators. Macros tend to be local and explicit, as the other commenters have said; they are more like templates. Generators can be fairly involved and feel like a mini language, one that is not as observable as macros.
Isn't this just a string template? https://github.com/CharlieDigital/SKPromptGenerator/blob/mai...
Maybe you're confusing `System.Reflection.Emit` and source generators? Source generators are just a source tree walker + string templates to write source files.
> Generators can be fairly involved and feels like a mini language, one that is not as observable as macros.
I agree the syntax is awkward, but all it boils down to is concatenating code into strings and adding them as files to your codebase.
And the syntax will 100% get cleaner (it's already happening with stuff like ForAttributeWithMetadataName).
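A minimal sketch of the whole mechanism (the Roslyn generator APIs are real; the generated content is a toy):

    using Microsoft.CodeAnalysis;

    // Minimal incremental generator: all the "magic" is building a string
    // and handing it to AddSource, which adds a file to the compilation.
    [Generator]
    public sealed class HelloGenerator : IIncrementalGenerator
    {
        public void Initialize(IncrementalGeneratorInitializationContext context)
        {
            context.RegisterPostInitializationOutput(ctx =>
                ctx.AddSource("Hello.g.cs", @"
    namespace Generated;

    public static class Hello
    {
        public const string Greeting = ""Hello from the generator"";
    }
    "));
        }
    }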
What are those magic annotations you are talking about? Attributes? Not many of those are left in modern .NET.
Attributes and reflection are still used in C# for source generators, JSON serialization, ASP.NET routing, dependency injection... The amount of code that can fail at runtime because of reflection has probably increased in modern C#. (Not from C# source generators of course, but those only made interop even worse for F#-ers).
OK, let's break this down:
- Code generators: I think I've seen attribute-driven ones only in Regex. Logging can be done via `LoggerMessage.Define` too, so attributes are optional there. Also, code generators have access to the full tokenized structure of the code, which means attributes are just a design choice of the particular generator you are using. And finally, code generators do not produce runtime errors unless the code they generated is invalid.
- JSON serialization: sure, but you can use your own converters. Attributes are not necessary.
- ASP.NET routing: yes, but those are in controllers; my impression is that minimal APIs are now the go to solution and you have `app.MapGet(path)` so no attributes (example below); you can inject services into minimal APIs and this does not require attributes. Most of the time, minimal APIs do not require attributes at all.
- Dependency injection: requires attributes when you inject services into controller endpoints, which I never liked nor understood why people do. What is the use case over injecting through the controller constructor? It is not as if the controller were a singleton, long-lived object: it is constructed during the ASP.NET HTTP pipeline and discarded when no longer necessary.
So occasional usage may still occur from time to time, in endpoints and DTOs (`[JsonIgnore]`, for example), but you have other means to do the same things. It is done via attributes because it is easier and faster to develop.
Also, your team should invest some time into testing, in my opinion. Integration testing helps a lot with catching those runtime errors.
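The minimal-API example mentioned above, as a self-contained sketch (`IUserStore` and the route are made up):

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddSingleton<IUserStore, InMemoryUserStore>(); // plain registration, no attributes

    var app = builder.Build();

    // No [Route]/[FromServices]: the route pattern binds `id`, and
    // `IUserStore` is resolved from the container by type.
    app.MapGet("/users/{id:int}", (int id, IUserStore store) =>
        store.Find(id) is { } user ? Results.Ok(user) : Results.NotFound());

    app.Run();

    public interface IUserStore { User? Find(int id); }
    public record User(int Id, string Name);
    public class InMemoryUserStore : IUserStore
    {
        public User? Find(int id) => id == 1 ? new User(1, "alice") : null;
    }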
> Json serialization, sure but you can use your own converters
And going through converters is (was?) significantly slower for some reason than the built-in serialisation.
> my impression is that minimal APIs are now the go to solution and you have `app.MapGet(path)` so no attribute
Minimal APIs use attributes to explicitly configure how parameters are mapped to the path, query, header fields, body content, or DI dependencies. These can't always be implicit – which, BTW, means you're stuck if you're in F# and ever need them, because the codegen still doesn't match what the reflection code expects.
I haven't touched .NET during work hours in ages; these are mostly my pains from hobbyist use of modern .NET from F#. The changes I've seen in C#'s ecosystem over the last decade don't make me eager to use .NET for web backends again; they somehow kept going with the worst aspects.
I'm fed up with the increasing use of reflection in C# – not the attributes themselves – as it requires testing to ensure even the simplest plumbing will attempt to work as written (the same argument we make for static types against dynamic, isn't it?), and it makes interop from F# much, much harder. And with the abuse of extension methods, which were the main driver for implicit usings in C#: no one knows which ASP.NET namespaces they need to open anymore.
I am working on an entirely new hobby project written on minimal APIs, and I checked today before writing this answer: I did not use any attributes there, besides one `[FromBody]` – and that one only because otherwise it tries to map the model from everywhere, so you could in theory pass it via the query string. Which was extremely weird.
Where did you see all of those attributes in minimal APIs? I'm honestly curious, because in my experience it is very forgiving and works mostly without them.
Aye – I was involved in some really messed-up outages caused by New Relic's agent libraries generating bogus bytecode at runtime. An absolute nightmare for the teams trying to debug it, because none of the code causing the crashes existed anywhere you could easily inspect it. We replaced the opaque magic from New Relic with simpler OTel – no more outages.
That's likely the old emit approach. Newer source gen will actually generate source that is included in the compilation.
Don't we have automated tests for catching these kinds of things, or is everyone just YOLOing it in nowadays? Serialization, routing, etc. can fail at runtime regardless of whether you use attributes or reflection.
Ease of comprehension is more important than tests for preventing bugs. A highly testable DI nightmare will have more bugs than a simple system that people can understand just by looking at it.
If the argument is that most developers can't understand what a DI system does, I don't know if I buy that. Or is the argument that it's hard to track down dependencies? Because if that's the case, idiomatic C# has the dependencies declared right in the ctor (sketch below).
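i.e. the dependency list is the constructor signature, and the wiring is one greppable registration per dependency (service names made up):

    // The dependency surface is visible right in the ctor; nothing appears
    // out of thin air at call sites.
    public interface IOrderRepository { }
    public interface IClock { }

    public class OrderService
    {
        private readonly IOrderRepository _orders;
        private readonly IClock _clock;

        public OrderService(IOrderRepository orders, IClock clock)
        {
            _orders = orders;
            _clock = clock;
        }
    }

    // Registration, e.g. in Program.cs:
    //   services.AddScoped<IOrderRepository, SqlOrderRepository>();
    //   services.AddSingleton<IClock, SystemClock>();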
But the "simple" system will be full of repetition and boilerplate, meaning the same bugs are scattered around the code base, and obscured by masses of boilerplate.
Isn't a GC also magic? Or anything above assembly? While I also understand the reluctance to use too much magic, in my experience it's not the magic, it's how well the magic is tested and developed.
I used to work with Play framework, a web framework built around Akka, an async bundle of libraries. Because it wasn't too popular, only the most common issues were well documented. I thought I hated magic.
Then, I started using Spring Boot, and I loved magic. Spring has so much documentation that you can also become the magician, if you need to.
I haven't experienced a DI 'nightmare' myself yet, but then again, we have integration tests to cover for that.
Try Nest.js and you'll know true DI "nightmares".
As polyglot developer, I also disagree.
If I wanted explicitness for every little detail, I would keep writing in assembly like in the Z80, 80x86, and 68000 days.
Unfortunately we never got Lisp or Smalltalk mainstream, so we got to metaprogramming with what is available, and it is quite powerful when taken advantage of.
Some people avoid wizard jobs, others avoid jobs where magic is looked down upon.
I would also add that in the age of LLM and AI generated applications, discussing programming languages explicitness is kind of irrelevant.
Explicitness is different from verbosity. Often annotations and the like are abused to create a lot of accidental complexity just to avoid writing a few keywords. In almost every Lisp project you'll find that macros are not intended for reducing verbosity; they are there to define common patterns. You can have a small macro that captures such a pattern,
and you can then easily inspect the code it expands to. But in Java and others, you'll have an annotation instead, and there's a whole system hidden behind it that you have to carefully understand, as every annotation's implementation is different. This is the trade-off with macros versus annotation/code-generation systems.
I tend to do obvious things when I use these kinds of tools. In fact, I try to avoid macros.
Even if configurability is not important, I favor simplification over reuse. In case I need reuse, I go for higher-order functions if I can. Macros are the last bullet.
In some circumstances, like JSON or serialization, maybe they can be slightly abused to mark fields and such. But whole-code generation can take things so far into magic that it is not worth it in many circumstances, IMHO – though every tool has its use cases, even macros and annotations.
IMO, macros and such should be to improve coding UX. But using it for abstractions and the like is very much not worth it. So something like JSX (or the loop system in Common Lisp) is good. But using it for DI is often a code smell for me.
> IMO, macros and such should be to improve coding UX
Coding UX critically leans on familiarity and spread of knowledge. By definition, a non-obvious macro not known by others makes the UX just worse – for a definition of "worse" that means "less manageable by anyone who looks at it without previous knowledge".
That is also the reason why standard libraries always have an advantage in usability: people know them, or the language constructs themselves.
Only if those Lisp projects are done by newbies. Clojure is quite known for having a community that takes that approach to macros, versus everyone else in Lisp since its early days.
Using macros for DSLs has been common for decades, and is how frameworks like CLOS were initially implemented.
> Similar to how Uncle Bob will always be correct when he calls teams out for getting his principles wrong.
Is this sarcasm?
I think this is fair criticism. OOP advocates like Uncle Bob always try to sell you 100 often-contradictory and ill-defined rules and guidelines for how to "use it right". Stuff like:
* objects should model a single concept, or
* every domain concept should be an object.
These two alone are already contradictory. And what do they even mean? Concretely?
Then, when OOP invariably breaks down, they can always point to any of the 100 rules that you supposedly violated and blame the failure on that. "Yes, it did not work out because you did not do it right." It's the no-true-Scotsman fallacy.
It’s like communism. It would work out if somebody just finally did it properly.
Maybe a system that requires 100 hard to follow rules to have even a chance at success just isn’t a great one.
> I really can't think of anything that comes close in terms of performance, features, ecosystem, developer experience.
This is also why I prefer Unity over all other engines by an incredible margin. The productivity gains of using a mature C# scripting ecosystem in a tight editor loop are shocking compared to the competition. Other engines do offer C# support, but if I had to make something profitable to save my own standard of living, the choice is obvious.
There are only two vendors that offer built-in SIMD-accelerated linear math libraries capable of generating projection matrices out of the box: one is Microsoft and the other is Apple. The benefits of having stuff like this baked into your ecosystem are very hard to overstate. The amount of time you could waste looking for and troubleshooting primitives like this can easily kill an otherwise perfect vision.
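On the .NET side that's System.Numerics; the APIs here are real, the camera numbers are arbitrary:

    using System;
    using System.Numerics;

    // SIMD-accelerated math in the base library: a perspective projection
    // matrix with one call, no third-party package hunt.
    var projection = Matrix4x4.CreatePerspectiveFieldOfView(
        fieldOfView: MathF.PI / 3f,  // 60 degrees
        aspectRatio: 16f / 9f,
        nearPlaneDistance: 0.1f,
        farPlaneDistance: 1000f);

    var view = Matrix4x4.CreateLookAt(
        cameraPosition: new Vector3(0, 2, 5),
        cameraTarget: Vector3.Zero,
        cameraUpVector: Vector3.UnitY);

    Console.WriteLine(view * projection);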
Or you can use the "C# without the line noise", which goes under the name of F#.
Yeah, but if you use F# then you'll have all the features C# has been working on for years, only in complete and mature versions, and also an opinionated language encouraging similar styles between teams instead of wild dialects of kinda-sorta immutability and kinda-sorta DUs and everything in between, requiring constant vigilance and training... ;)
I'm a fan of all three languages, but C# spent its first years relearning why Visual Basic was very productive and its last many years learning why OCaml was chosen as the model for F#. It's landed in a place where I can make beautiful code the way I need to, but the mature libraries I've crafted to make it so simply aren't recreatable by most .NET devs, and the level of micro-managing it takes to maintain that across groups is a line-by-line slog against orthodoxy and seeming 'shortcuts', draining the productivity those high-level guarantees should provide. And then there's the impact of EF combined with sloppy LINQ, which makes every junior/consultant LOC a potentially ticking time bomb without more line-by-line, function-by-function slog.
Compiler guarantees mean a lot.
>runs 4x as fast in .net10 vs .net8.
Is this open source? Do you have numbers? I've been coding in .NET since it was "a thing" and frankly I'm having trouble mapping the optimizations to a local application at that magnitude.
The optimizations are seen at scale, they really won't mean much for your local application. Not a 3x+ improvement at least.
Except for F#, which also gets all the .NET10 cross-platform GC improvements for free and is a better programming language than C#.
+1 F# is criminally under-used
I've used it, and am still using it, to generate lots of value in a very large org. Having a language where I can bring Go, Node, etc. developers over and get relatively better performance, without having to teach OOP and all the implicit conventions on the C# side, is a bit like a cheat code. With modern .NET it's better than Java perf, with a better GC, and you can write generic Python/JS-looking code whilst still having type checking (HM inference). There are C# libraries we do use, but with standard templates for those few, and patterns to interface them to the mostly-F# layers, you can get very far in a style of code more fitting of a higher-level, more dynamic language. Ease of use vs perf: it's kind of in the middle – and it has also benefited from C# features (e.g. spans recently).
It's not one feature with F#, IMO; it's little things that add up, which is generally why it is hard to convince someone to use it. To the point that when the developers (under my directive) had to write two products in C#, they argued with me to switch back.
> I really can't think of anything that comes close in terms of [...] developer experience.
Of all the languages that I have to touch professionally, C# feels by far the most opaque and unusable.
Documentation tends to be somewhere between nonexistent and useless, and MSDN's navigation feels like it was designed by a sadist. (My gold standard would be Rustdoc or Scala 2.13-era Scaladoc, but even Javadoc has been... fine for basically forever.) For third-party libraries it tends to be even more dire and inconsistent.
The Roslyn language server crashes all the time, and when it does work.. it doesn't do anything useful? Like cross-project "go-to-definition" takes me to either a list of members or a decompiled listing of source code, even when I have the actual source code right there! (I know there's this thing called "SourceLink" which is.. supposed to solve this? I think? But I've never seen it actually use it in practice.)
Even finding where something comes from is ~impossible without the language server, because `using` statements don't mention.. what they're even importing. (Assuming that you have them at all. Because this is also the company that thought project-scoped imports were a good idea!)
And then there's the dependency injection, where I guess someone thought it would be cute if every library just had an opaque extension method on the god object, that didn't tell you anything about what it actually did. So good luck finding where the actual implementation of anything is.
I almost exclusively work in C# and have never experienced the Roslyn crashes you mentioned. I am using either Rider or Visual Studio though.
> Like cross-project "go-to-definition" takes me to either a list of members or a decompiled listing of source code, even when I have the actual source code right there!
If these are projects you have in the same solution then it should never do this. I would only expect this to happen if either symbol files or source files are missing.
I use VS Code on macOS for all of my C# code over the last 5 years and also never experienced Roslyn crashes.
Try "go to implementation" in place of go to definition.
Very mixed feelings about this: there's a strong case for the decisions made here, but it also moves .NET further away from WASMGC, which makes using it in the client a complete non-starter for whole categories of web apps.
It’s a missed opportunity and I can’t help but feel that if the .NET team had gotten more involved in the proposals early on then C# in the browser could have been much more viable.
Those changes affect the .NET runtime, designed for real computers. This does not preclude the existence of a special runtime designed for Wasm with WasmGC support.
The .NET team appears to be aware of WasmGC [0], and they have provided their remarks when WasmGC was being designed [1].
[0] https://github.com/dotnet/runtime/issues/94420
[1] https://github.com/WebAssembly/gc/issues/77
.NET was already incompatible with WASM GC from the start [1]. The changes in .NET 10 are nothing in comparison to those. AFAIK WASM GC was designed with only JavaScript in mind so that's what everyone is stuck with.
[1] https://github.com/dotnet/runtime/issues/94420
There are two things:
1: JavaScript _interoperability_, i.e. same heap but incompatible objects (nobody is doing static JS).
2: Java, Schemes, and many other GC'd languages have more "pure" GC models; C# traded some of that for practicality, and accommodating it would have required some complications to the regular JS GCs.
A lot of the features here, stuff like escape analysis for methods etc., don't directly involve the GC - they reduce the number of objects that go to the GC heap, so the GC has less work to do in the first place.
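A sketch of the kind of allocation that targets (illustrative only; the names are made up, and whether the JIT actually elides it depends on inlining and runtime version):

    class Point { public int X, Y; public Point(int x, int y) { X = x; Y = y; } }

    static class Demo
    {
        // `p` never escapes this method, so escape analysis can prove it
        // safe to allocate on the stack instead of the GC heap.
        static int LengthSquared(int x, int y)
        {
            var p = new Point(x, y);
            return p.X * p.X + p.Y * p.Y;
        }
    }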
How would this move .NET further away from WASMGC? This is a new GC for .NET, but doesn't add new things to the language that would make it harder to use WASMGC (nor easier).
For example, .NET has internal pointers which WASMGC's MVP can't handle. This doesn't change that so it's still a barrier to using WASMGC. At the same time, it isn't adding new language requirements that WASMGC doesn't handle - the changes are to the default GC system in .NET.
I agree it's disappointing that the .NET team wasn't able to get WASMGC's MVP to support what .NET needs. However, this change doesn't move .NET further away from WASMGC.
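For reference, an internal (interior) pointer is trivial to produce in C#, and it's exactly the kind of reference the WASMGC MVP has no way to represent (minimal sketch):

    int[] data = { 1, 2, 3, 4 };
    ref int third = ref data[2];   // points into the array's payload, not at
    third = 42;                    // an object header: an interior pointer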
Webassembly taking off on the browser is wishful thinking.
There are a couple unicorns like Figma and that is it.
Performance is much better option with WebGPU compute, and not everyone hates JavaScript.
Whereas on the server it is basically a bunch of companies trying to replicate application servers, been there done that.
> Webassembly taking off on the browser is wishful thinking.
It has taken off in the browser. If you've ever used Google Sheets you've used WebAssembly.
Another niche use case.
Google Sheets is one of the most widely used applications on the planet. It's not niche.
Amazon switched their Prime Video app from JavaScript to WebAssembly for double the performance. Is streaming video a niche use case?
I think they meant most people aren’t building a high performance spreadsheet, not most people aren’t using a high performance spreadsheet.
> most people aren’t building a high performance spreadsheet
Lots of people are building Blazor applications:
https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blaz...
> not most people aren’t using a high performance spreadsheet
A spreadsheet making use of WebAssembly couldn't be deployed to the browser if WebAssembly hadn't taken off in browsers.
Practical realities contradict pjmlp's preconceptions.
Don't conflate mainstream adoption at the level of regular JavaScript and TypeScript with mere availability.
Microsoft wishes Blazor would take off like React and Angular; in reality it is seldom used outside .NET shops' intranets, in a way similar to WebForms.
> Blazor is seldom used outside .NET shops intranets
So, in other words, widely used in lots and lots of deployments.
Do you have a number for us?
Can you actually build something like Figma in Blazor? Does Blazor somehow facilitate that?
I think that was sarcasm :)
I wouldn't be surprised if it did take off. Classic Wasm semantics were horrible, since you needed a lot of language support to manage even simple kludges when referring to DOM objects via indices, plus extra liveness checking.
WASM-GC will remove a lot of that and make quite a few languages viable as almost first-class DOM-manipulating languages (there will still be kludges, since the objects are opaque, but they'll be far less bad: they can at least avoid external ID mappings and dual-GC systems that would leak the way old IE ref-counting did).
All great and dandy, except tooling still sucks.
You still usually need to install plenty of moving pieces to produce a wasm file out of the language of your choice, write boilerplate initialisation code, and suffer miserable debugging, only for a few folks to avoid writing JavaScript.
There will always be enthusiasts to take the initial steps, the question is if they have the taste to make it a coherent system that isn't horrible to use.
Counted out over N languages, we should see something decent land before long.
I think you may be underestimating how many people really dislike JavaScript.
As many that dislike PHP, C, C++, yet here we are.
Interesting, I mostly work in JVM, and am always impressed how much more advanced feature-wise the .NET runtime is.
Won't this potentially cause stack overflows in programs that ran fine in older versions though?
I don't think the runtime is "much more advanced", the JVM has had most of these optimizations for years.
The JVM famously boxes everything though, probably because it was originally designed to run a dynamic language. An array list of floats is an array list of pointers. This created an entire cottage industry of alternative collections libraries with concrete array list implementations.
A float[] is packed and not a list of pointers in the jvm.
An ArrayList<Float> is a list of pointers though.
Arrays have a static fixed size though, making them far less useful in practice. Anything one builds with generics is boxed. .NET doesn't have this problem.
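A minimal sketch of the difference: .NET generics are reified, so a List<float> is backed by a packed float[] with no per-element boxing, while still growing dynamically:

    using System.Collections.Generic;

    var floats = new List<float> { 1.0f, 2.5f };
    floats.Add(3.5f);   // stored inline in the backing float[],
                        // not as pointers to boxed values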
Currently you can get around this with Panama, even if the API is kind of verbose for the purpose.
Eventually value classes might close the gap; they're finally available as an early-access (EA) build.
Valhalla is over 10 years in the works already and there is still no clear date when or if at all it would be released. It's very difficult to change (or fix) such fundamental things so late in the game.
Because it is a huge engineering effort to add value types without breaking existing binary libraries.
Doing a Python 3 would mean no one was going to adopt it.
Yes, it is a long process.
Some of the JEPs in the last few versions are the initial baby steps toward integration.
They're famously working on changing that. I think we're all hopeful that we'll start seeing the changes from Valhalla roll in post-25.
Almost none of this is in the JVM. Escape analysis is extremely limited on the standard JVM, and it's one of GraalVM's "enterprise" features. You have to pay for it.
> one of GraalVM's "enterprise" features. You have to pay for it.
Free for some (most?) use cases these days.
Basically enterprise edition does not exist anymore as it became the "Oracle GraalVM" with a new license.
https://www.graalvm.org/faq/
One limitation of the stack is that it needs to be contiguous virtual addresses, so it was often limited when devices just didn't have the virtual address space to "waste" on a large stack for every thread in a process.
But 64 bits of virtual address space is large enough that you can keep the stacks far enough apart that even for pretty extreme numbers of threads you'll run out of physical memory before they start clashing. So you can always just allocate more physical pages to the stack as needed, similar to the heap.
I don't know if the .net runtime actually does this, though.
> So you can always just allocate more physical pages to the stack as needed, similar to the heap.
You set the (max) stack size once when you create the thread and you can’t increase the (max) size after that.
Processes see a virtual address space that is handled by the OS, so you would have to involve the OS if you needed to add to the stack size dynamically.
> Won't this potentially cause stack overflows in programs that ran fine in older versions though?
That's certainly a possibility, and one that's come up before even between .net framework things migrated to .net core. Though usually it's a sign that something is awry in the first place. Thankfully the default stack sizes can be overridden with config or environment variables.
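For per-thread control there has long been a documented constructor overload, for example (sketch; DeepWork stands in for whatever recursed too far):

    using System.Threading;

    static void DeepWork() { /* deep recursion here */ }

    // Request a 16 MB stack instead of the platform default (0 = default).
    var worker = new Thread(DeepWork, maxStackSize: 16 * 1024 * 1024);
    worker.Start();
    worker.Join();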
I am surprised that they didn't already do a lot of optimizations informed by escape analysis, even though they have had value types from the beginning. Hotspot is currently hampered by only having primitive and reference types, which Project Valhalla is going to rectify.
On the topic of DATAS, there was a discussion here recently: https://news.ycombinator.com/item?id=45358527
Thanks! Macroexpanded:
Preparing for the .NET 10 GC - https://news.ycombinator.com/item?id=45358527 - Sept 2025 (60 comments)
DATAS has been great for us. Literally no effort, upgrade the app to net8 and flip it on. Huge reduction in memory.
TieredCompilation on the other hand caused a bunch of esoteric errors.
FWIW Tiered Compilation has been enabled by default since .NET Core 3.1. If code tries to use reflection to mutate static readonly fields and fails, it's the fault of that code.
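The failure mode in question looks like this (hedged sketch; Config/Initialized are made-up names). Under tiered compilation the original value may already be baked into optimized code, so a write like this can throw or be silently invisible:

    using System.Reflection;

    var field = typeof(Config).GetField(nameof(Config.Initialized),
        BindingFlags.Public | BindingFlags.Static);
    field!.SetValue(null, true);   // mutating a static readonly field via
                                   // reflection is unsupported; on modern
                                   // .NET this throws FieldAccessException

    class Config { public static readonly bool Initialized; }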
Comprehensive and (I thought) interesting article on perf improvements in .NET 10:
Performance Improvements in .NET 10
https://devblogs.microsoft.com/dotnet/performance-improvemen...
This is a great article, as soon as you’re beyond the introductory 5 paragraphs on the minutiae of the opening song of Disney’s Frozen.
I always found that JIT and GC are a marriage destined to come together, but they never quite found one another. The JIT marks the hot loop in code, and thus can tell the GC in detail what a generation really is and how long a generation's lifetime really lasts.
It can reveal hidden cull conditions for long-lived generational objects. If that side branch is hit in the hot loop, all long-term objects of that generation are going to get culled in a single stroke… so bundle them and keep them bundled. And now they've started using it, at least to detect objects that do not escape lambdas: it's all stack, no GC involved at all. That's almost the static-allocation thing we do for games. If the model proves that every hot loop allocates 5 objects that live until an external event occurs, allocate them statically and be done.
Great start. But you could do so much more with this: write a custom JIT whose goal is not just to detect and compile hot loops, but to build a complete multi-lifetime model of object generation.
Any interpreter could theoretically do that kind of "marking", and JITs do far more than just compile hot loops: _all_ cooperative modern GCs are enabled by JIT semantics for things like read and/or write barriers (this helps a GC keep track of objects that keep getting "touched" while the GC works in parallel).
Beyond that, detecting fine-grained lifetimes is very, very hard, and the escape analysis mentioned is an optimization that needs to be capped to avoid the halting problem. (1)
A fairly deep coverage of GC behaviours can be found in Bacon's "A Unified Theory of Garbage Collection", where the authors theoretically connect previous work on tracing collectors and reference-counting systems and show that optimized variants often exist in a design space between the two. (2)
1: https://en.wikipedia.org/wiki/Halting_problem
2: https://web.eecs.umich.edu/~weimerw/2008-415/reading/bacon-g...
I am considering dotnet Maui for a project. On the one hand, I am worried about committing to the Microsoft ecosystem where projects like Maui have been killed in the past and Microsoft has a lot of control. Also XML… On the other hand, I’ve been seeing so many impressive technical things about dotnet itself. Has anyone here used Maui and wants to comment on their experience?
I've been a C# developer my entire career and spent a few years building apps with Xamarin/Uno. At my current company, we evaluated MAUI and Flutter for our mobile app rewrite (1M+ monthly active users).
We first built a proof of concept with 15 basic tasks to implement in both MAUI and Flutter. Things like authentication, navigation, API calls, localization, lists, map, etc. In MAUI, everything felt heavier than it should've been. Tooling issues, overkill patterns, outdated docs, and a lot of small frustrations that added up. In Flutter, we got the same features done much faster and everything just worked. The whole experience was just nicer. The documentation, the community, the developer experience... everything is better.
I love C#, that's what we use for our backend, but for mobile development Flutter was the clear winner. We launched the new app a year ago and couldn't be happier with our decision.
Aside from using an esoteric language and being a Google product at risk of being shut down just because, Flutter's game-like UI rendering on a canvas was confirmed to be quite a questionable approach by the whole Liquid Glass transition. If anything, React Native is a more reliable choice: an endless supply of React devs, and native UI binding similar to MAUI.
I'd say Uno Platform[0] is a better alternative to Flutter for those who do not care much about the native look: it replicates WinUI API on iOS, Mac, Android, and Linux, while also providing access to the whole mature .NET ecosystem – something Flutter can't match for being so new and niche.
[0]: https://platform.uno/
> Flutter's game-like UI rendering on a canvas was confirmed to be quite a questionable approach with the whole Liquid Glass transition.
I'm not a Flutter dev and I'm very interested to hear how it doesn't play well with Liquid Glass.
It simply can't use it because it does not use native UIs, but instead mimics them with its own rendering engine. This approach worked to some extent during the flat minimalist era, but now that Apple has added so many new animations and transitions, reproducing them all has become close to impossible.
At best, Flutter can implement some shaders for the glassy look of the controls, but something as basic as the Liquid Glass tab bar would require a huge effort to replicate inside Flutter, while in MAUI and RN it's an automatic update.
Not a single user cares about "native UI"; it's only a debate among developers. Take the top 20 apps people are using: all of them use their own design system, which isn't native.
Flutter will always have multiple advantages over React Native (and even native toolkits themselves) in terms of upgradability: you can do 6 months of updates with only 30 minutes of work and be sure it 100% works everywhere.
The quality of the testing toolkit is also still unmatched elsewhere and makes a big difference to app reliability.
Classic HN comment with unapologetic statements. If Flutter were that good, it wouldn't have flatlined so fast after the initial hype a few years ago. I tried it last year, only to see rendering glitches in the sample project.
28% of new iOS apps are made with Flutter and it's the #1 cross-platform framework in the Stack Overflow 2024 survey, so I highly doubt it has flatlined.
https://flutter.dev/multi-platform/ios
https://survey.stackoverflow.co/2024/technology#1-other-fram...
All those stats look great on paper, but a few months ago I checked job postings for different mobile frameworks, and Flutter listings were 2-3 times fewer than RN. Go on Indeed and see for yourself.
For a "28% of new iOS apps", the Flutter subreddit is a ghost town with regular "is it dying? should I pick RN?" posts. I just don't buy the numbers because I'm myself in a rather stagnant cross-platform ecosystem, so I know this vibe well.
If I ever leave .NET, no way I'd move to something like Flutter. Even Kotlin Multiplatform is more promising concept-wise. LLMs are changing cross-platform development and Flutter's strong sides are not that important anymore, while its weak sides are critical.
Maybe you are just not in the target market? I just checked FlutterShark and I have 14 apps installed with Flutter in them.
Flutter is starting to become the default framework for building apps, in Asia at least.
And I disagree about the LLMs: Flutter provides strong standardisation and strong typing, which makes it an ideal target for LLMs.
As for Kotlin Multiplatform, maybe it will take off the way Flutter did, but that hasn't happened yet.
Rendering glitches may be due to the completely new, lightweight rendering engine built from scratch that has replaced Skia. Shouldn't be a problem once it matures a bit.
Not everything is related to tech. In my company, for example, they picked React Native because they have the ability to tap into the front-end job market (or they think they do), certainly not for its intrinsic qualities.
Personally I've done a 50k+ line project in Flutter and I didn't hit any of these issues. There have been a few for sure, but nowhere near what I experienced with React Native (and don't get me started on native itself).
Speaking as an experienced desktop .NET dev: we've avoided it due to years of instability and no real confidence it'll get fully adopted. We've stuck with WPF, which is certainly a bit warty but ultimately fine. If starting fresh at this point I'd give Avalonia a real look; they seem to have their heads on their shoulders and are in it for the long haul.
Would also recommend Avalonia. It's truly cross-platform (supports also Linux) unlike MAUI.
I would personally prefer Avalonia (https://avaloniaui.net/) over MAUI.
Last time I had to create a C# desktop app, I went with Blazor Hybrid [1]. I'd say it's "Electron for C#". I don't want to use outdated stuff like WPF / WinForms, and I don't trust more recent frameworks, so for me building on top of the web platform felt safest.
[1]: https://learn.microsoft.com/en-us/aspnet/core/blazor/hybrid/...
I highly recommend using MvvmCross with native UIs instead of MAUI: you get your model and view model 100% cross-platform, and then build native UIs twice (with UIKit and Android SDK), binding them to the shared VM. It also works with AppKit and WinUI.
In the past it was rather painful for a solo dev to do them twice, but now Claude Code one-shots them. I just do the iOS version and tell it to repeat it on Android – in many cases 80% is done instantly.
Just in case, I have an app with half a million installs on both stores that has been running perfectly since 2018 using this ".NET with native UIs" approach.
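To make the shape concrete, the shared piece is just plain C#. A minimal sketch in the MvvmCross style (CounterViewModel is made up; MvxViewModel/SetProperty are the base-class pieces as I recall them, so treat the exact names as approximate):

    using MvvmCross.ViewModels;

    public class CounterViewModel : MvxViewModel
    {
        private int _count;
        public int Count
        {
            get => _count;
            set => SetProperty(ref _count, value);   // raises PropertyChanged
        }

        // Bound to a button in both the UIKit and Android SDK layouts.
        public void Increment() => Count++;
    }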
I used MAUI at my previous job to build 3 different apps, used only on mobile (Android and iOS). I don't know why many people dislike XAML; to me it felt natural for UI. I researched Flutter and liked MAUI/XAML more, although the development loop felt smoother with Flutter. What I didn't like were the constant bugs: with each new version that I was eager to update to, hoping it would fix current issues, something new appeared. After spending countless hours searching through the project's GitHub, I am under the impression that there aren't many resources dedicated to MAUI development at Microsoft; the project is carried forward by a few employees and volunteers. If I were starting another project, I would seriously look into Avalonia. But I was always a backend guy, so now at my current job I do server backend development in C# and couldn't be happier.
I do think server/backend is C#'s sweet spot, because EF Core is soooo good.
But it's curious that it's used widely with game engines (Unity, Godot), but has a pretty weak and fractured UI landscape.
If you're Windows-based, I'd unironically consider WinForms. It's been re-added to modern .NET on Windows, and is one of the easiest and best ways to make simple GUI applications.
Sadly it's not cross-platform, which is a benefit of MAUI.
I don't really understand why Microsoft didn't do a Tauri-like thing for C# devs instead of this MAUI stuff. It would be a tiny project in comparison, and it wouldn't go completely against the grain like MAUI does. If you want a write-once, run-in-more-places compromise, the browser already does that very well.
Because web UI for a desktop app sucks compared to an actual native UI. As a user, any time I see an app uses Electron, Tauri or any of that ilk, I immediately look for an alternative, because the user experience will be awful.
Worse than WPF?
Maui Blazor Hybrid has a cool model where the HTML UI binds to native code (not WASM) for mobile and desktop. That is the closest you can get to Tauri-like. If you want to run that same app in a browser, then it'll use Blazor with WASM.
MAUI Blazor Hybrid is great if you don't want to learn XAML. Apple killed Silverlight; Microsoft kept it running for ~20 years. If you stayed close to what Xamarin was, the migration to MAUI isn't bad from what I've seen.
I would say it really depends on your target. If you want only mobile, then there are different options (see other comments). But if you want only desktop, then Avalonia is good. However, if you want both (like my team), then MAUI is where we ended up. We use MAUI Blazor, as we also want to run on a server. We're finding iOS difficult to target, but I don't think that has anything to do with MAUI.
Certainly wouldn't recommend MAUI. Even using it as a simple shell for Blazor Hybrid was noticeably harder than WPF.
If Microsoft isn't using it themselves in any real capacity, then it's not a good bet IMO.
I wonder if this makes .net competitive for high frequency trading...
It's been competitive for a long time now.
https://medium.com/@ocoanet/improving-net-disruptor-performa...
The Benchmarks Game[0] shows C# just behind C/C++ and Rust across a variety of benchmark types. C# has good facilities for dipping into unmanaged code and utilizing hardware intrinsics, so you'd have to tap into those and bypass managed code in many cases to achieve higher performance.
[0] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
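For a taste of the intrinsics support mentioned, the portable SIMD types live right in the BCL on recent .NET (illustrative sketch):

    using System;
    using System.Runtime.Intrinsics;

    Vector128<float> a = Vector128.Create(1f, 2f, 3f, 4f);
    Vector128<float> b = Vector128.Create(5f, 6f, 7f, 8f);
    Console.WriteLine(a + b);   // a single SIMD add on SSE/NEON hardware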
Why would you ever pick a language like this for HFT? It seems like a nonstarter to me, but I guess Java is out there in use.
There are plenty of domains where the competition is not one of pure latency (where FPGAs and custom hardware have even taken over from C++). In these domains managed languages can be sufficient to get to "fast enough" and the faster iteration speed and other comforts they provide can give an edge over native languages.
"Why we chose Java for our High-Frequency Trading application"
https://medium.com/@jadsarmo/why-we-chose-java-for-our-high-...
Do these updates mean that the JIT can finally optimize LINQ away into simple loops?
LINQ doesn't need the JIT for that. I don't even think it is the JIT's responsibility to be aware of a specific library and optimize for it.
LINQ does a lot of work behind the scenes to optimize for speed and reduce allocations. An example can be found here [1]. These optimizations are mostly about reducing the various LINQ patterns to simple for loops.
[1] https://github.com/dotnet/runtime/blob/main/src/libraries/Sy...
Without JIT support, using LINQ involves allocating at least an IEnumerator object on the heap, plus a closure object and a delegate (if the delegate captures local vars). Each call to `Select` or `Where` is also a virtual call.
This is hugely expensive compared to a plain for loop. With this update it seems the JIT can do escape analysis to stack-allocate the closure object, and the delegate as well (it could devirtualize calls even before that). It looks like it has everything needed to optimize away the whole LINQ overhead, though I'm not sure what happens in practice.
It'd be neat since that was a major argument against actually using LINQ in perf-sensitive code.
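Concretely, the overhead being discussed (sketch; CountAbove is a made-up example):

    using System.Linq;

    // Allocates an iterator object, a closure capturing `threshold`, and a
    // delegate; each element goes through interface/virtual calls.
    static int CountAbove(int[] values, int threshold) =>
        values.Where(v => v > threshold).Count();

    // The allocation-free loop the JIT would ideally reduce it to.
    static int CountAboveLoop(int[] values, int threshold)
    {
        var count = 0;
        foreach (var v in values)
            if (v > threshold) count++;
        return count;
    }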
There's also ZLinq: https://github.com/Cysharp/ZLinq
I think DATAS also has more knobs for tuning than the old GC. I plan to set the Throughput Cost Percentage (TCP) via System.GC.DTargetTCP to some low value so that it has little impact on latency.
https://learn.microsoft.com/en-us/dotnet/core/runtime-config...
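In runtimeconfig.json form, that's something like this (the property name is from the doc linked above; the value here is purely illustrative):

    {
      "runtimeOptions": {
        "configProperties": {
          "System.GC.DTargetTCP": 2
        }
      }
    }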
Are you now allowed to benchmark the .Net runtime / GC?
Edit: Looks like you are allowed to benchmark the runtime now. I was able to locate an ancient EULA which forbade this (see section 3.4): https://download.microsoft.com/documents/useterms/visual%20s...
> You may not disclose the results of any benchmark test of the .NET Framework component of the Software to any third party without Microsoft’s prior written approval.
Yes, you probably mixed it up with SQL Server.
> Publishing SQL Server benchmarks without prior written approval from Microsoft is generally prohibited by the standard licensing agreements.
why wouldn't you be?
...Were you not before?
IIRC the EULA forbids it. This is why you don't see .NET vs. Java GC comparisons, for example.
I seem to vaguely recall such a thing from way back in the early days, but the only copy[1] of the .Net Framework EULA I could readily find says it's OK as long as you publish all the details.
[1]: https://docs.oracle.com/en/industries/food-beverage/micros-w...
I can't find mention of anything resembling this. The .NET runtime is under the MIT license.
https://download.microsoft.com/documents/useterms/visual%20s...
That's because you aren't looking at 20-year-old EULAs:
>3.4 Benchmark Testing. The Software may contain the Microsoft .NET Framework. You may not disclose the results of any benchmark test of the .NET Framework component of the Software to any third party without Microsoft’s prior written approval.
This person is not likely familiar with the history of the .net framework and .net core because they decided a long time ago they were never going to use it.
Yeah, you got me there. I have moved on to Linux development since then. Haven't kept up with Microsoft developer tools.
As a dotnet developer all my code these days is run on Linux.
.net core on Linux works great btw.
In recent versions (i.e. since .NET 5 in 2020) ".NET core" is just called ".NET"
The cross-platform version is mainstream, and this isn't new any more.
.NET on Linux works fine for services. Our .NET services are deployed to Linux hosts, and it's completely unremarkable.
As long as it's your deployment target and nothing else. For development, both macOS and Linux continue to be second-class citizens, and I don't see this changing, as it goes against their interests. In most .NET shops around me, the development and deployment tooling is so closely tied to VS that you can't really not use it.
It's fine if you stick to JetBrains and pay for their IDE (or do non-commercial projects only), and either work in a shop that isn't closely tied to VS (basically non-existent in my area) or work by yourself.
No. My entire office is Linux and macOS. Not a single windows machine. Mixture of people using VS Code and Rider. No issues building and deploying to Linux. We pay for rider. Pay nothing for vscode.
Well, in most .NET shops around me:
> The development and deployment tooling is so closely tied to VS that you can't really not use it.
Development tooling: It's 50-50. Some use Visual Studio, some use Rider. It's fine. The only drawback is that VS Live Share and the Jetbrains equivalent don't interoperate.
deployment tooling: There is deployment tooling tied to the IDE? No-one uses that; it seems like a poor idea. I see automated build/test/deploy pipelines in GitHub Actions and in Octopus Deploy. TeamCity still gets used, I guess.
It's true though that the most common development OS is Windows by far (with Mac as second) and the most common deployment target by far is Linux.
However the fact that there is close to no friction in this dev vs deploy changeover means that the cross-platform stuff just works. At least for server-side things such as HTTP request and queue message processing. I know that the GUI toolkit story is more complex and difficult, but I don't have to deal with it at all so I don't have details or recommendations.
> is there deployment tooling tied to the IDE?
VS has the “Publish” functionality for direct deployment to targets. It works well for doing that and nothing else. As you said, CI/CD keeps deployment IDE agnostic and has far more capabilities (e.g. Azure DevOps, GitHub Actions).
Yeah? Is ncurses still a thing? I only ask because that's the only API name I remember from forever ago.
I worked on a MUD on Linux right after high school for a while. Spent most of my time on the school's BSDi server prior to that, though.
Then I went Java, and as they got less permissive and .NET got more permissive, I switched at some point. I've really loved the direction C# has gone, merging in functional programming idioms, and have stuck with it for most personal projects, but I am currently learning GDScript for some reason, even though Godot has C# as an option.
The only thing that has become "less permissive" is Oracle's proprietary OpenJDK build, which isn't really needed or recommended in 99.9% of cases (except for when the vendor of your proprietary application requires it to provide support).
The rest of the ecosystem is "more permissive" than .NET since there are far more FOSS libraries for every task under the sun (which don't routinely go commercial without warnings), and fully open / really cross-platform development tooling, including proper IDEs.
The fact that you even need to be very careful when choosing a JDK is a much bigger problem than some simple, easily replaceable library going commercial (not that this hasn't also happened in Java land). Also, .NET has been fully open and really cross-platform for a long time already, and it includes more batteries than Java out of the box; you may not even need any third-party dependencies (although there are plenty to choose from: 440k packages on NuGet). .NET also has proper IDEs, or is JetBrains Rider not a proper IDE for you?
Funny, because one of the libraries I was using at the time went hyper-commercial (javafxports). Java burned me on two fronts at the very same time and lost me. YMMV I guess. It's always a good time to try something new anyway... I also moved to Kotlin on Android and couldn't be happier with it; it's a clearly superior language.
Wow didn't know that. Can you provide some links?
What are you talking about?
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Yes.
Use a managed language, it will handle memory stuff for you, you don't have to care.
But also read these 400 articles to understand our GC. If you're lucky, we'll let you change 3 settings.
You can provide your own GC implementation if you really wanted to:
https://learn.microsoft.com/en-us/dotnet/core/runtime-config...
https://github.com/dotnet/runtime/blob/main/src/coreclr/gc/g...
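Per those links, a standalone collector can be swapped in at startup via configuration. As I recall, the knob looks like this, but verify the exact property name against the first link (sketch):

    {
      "runtimeOptions": {
        "configProperties": {
          "System.GC.Name": "clrgc.dll"
        }
      }
    }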
Interesting!
It works just fine out of the box. The articles/manuals are just if you want to really understand how it works and get the most out of it. What's the issue with that?
In my 20+ years using C#, there's only been one instance where I needed to explicitly control GC behavior (it would prematurely collect the managed handle of a ZMQ client), and that only required one line of code to pin the handle.
It pretty much never gets in your way for probably 98% of developers.
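The one-liner class of fix looks roughly like this (reconstructed sketch; zmqSocket stands in for the wrapper that was being collected early):

    using System.Runtime.InteropServices;

    object zmqSocket = new object();        // stand-in for the real ZMQ wrapper
    var handle = GCHandle.Alloc(zmqSocket); // keeps it reachable for native code
    try
    {
        // ... hand the socket to native callbacks, pump messages, etc. ...
    }
    finally
    {
        handle.Free();                      // allow normal collection again
    }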
The Dr. Dobb's and C/C++ Users Journal archives are full of articles and ads for special memory allocators, because the ones in the C and C++ standard libraries also don't work in many cases; they are only good enough for general-purpose allocation.
You need these settings when you drive your application hard into circumstances where manual memory management arguably starts making sense again: humongous heaps, lots of big, unwieldy objects, or tight latency (or tail-latency) requirements. But unless you're using something like Rust or Swift, the price of manual memory management is the need to investigate segmentation faults. I'd prefer to spend developer time on feature development and benchmarking instead.