I don't trust you to get it right.
One of my formative experiences was typing in a terminal emulator for CP/M from a 1984 Byte magazine and porting it to OS-9 on my TRS-80 Color Computer. It was quite the trauma to see that 80% of the code was error handling, built on the error-prone pattern of checking errno. When I saw Java, which had try-catch, I was so delighted.
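For anyone who never typed in one of those listings, a minimal sketch of that style (a made-up copy_first_line helper, not the actual Byte code) shows why so much of it was failure handling:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Every call that can fail gets its own check, and errno has to be
       read before anything else can clobber it. */
    int copy_first_line(const char *path, char *out, size_t out_len)
    {
        FILE *f = fopen(path, "r");
        if (f == NULL) {
            fprintf(stderr, "fopen %s: %s\n", path, strerror(errno));
            return -1;
        }
        if (fgets(out, (int)out_len, f) == NULL) {
            if (ferror(f))
                fprintf(stderr, "read %s: %s\n", path, strerror(errno));
            fclose(f);
            return -1;
        }
        if (fclose(f) != 0) {
            fprintf(stderr, "fclose %s: %s\n", path, strerror(errno));
            return -1;
        }
        return 0;
    }

A handful of lines do useful work; the rest exist only to cope with things going wrong.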
"Feels like freedom" is one of the most dangerous feelings out that that reminds you that feelings are not facts. Wasn't it Orwell that coined the slogan "Freedom is slavery?"
I want to amplify this. It isn't just that we don't trust you personally or something at your current level of development. It is that we have seen all the smartest people use C, augment it with all manner of static analysis tools, develop endless "technically this is a C dialect and not C" practices like making bespoke rules about how and when to allocate and how to deal with recursion...
... and they still write programs blasted full of security holes and bugs that would be prevented in any other language. (Other than C++.)
Not that other languages are perfect or anything. But we've got decades of proof that C is full of sharp edges and that no, you (and this time I do mean you personally) aren't avoiding them. You just haven't noticed the bleeding yet... and the odds are good it's actually your users that will pay.
I consider greenfielding in C without the absolute best in static analysis backing you up to be professional malpractice. If you're not acting as a professional, then that doesn't apply. But no professional should be doing that in 2025. And I consider "C backed with the absolute best static analysis" to still be something you need to be backed into, against your will, because of something very compelling for your project.
Also, the type system is bad, it lacks good closures, it can't abstract worth a toot; by 2025 it is legitimately a bad language despite all the attempted fixes over the decades. And I'm not a "use Haskell for everything" sort of guy... but C is just way, way too deep into the "costs" side of the cost/benefit equation. It was a great language for its time, but we did not reach the epitome of programming perfection in the 1970s. I doubt we've reached it even now. Nobody should be reaching for C routinely now.
I advise spending some time learning Rust. I'm not saying that as a crazed Rust advocate... my actual experience with the language is limited to "I compiled Hello World once". But the borrow checker will teach you a lot of important things about how to think about memory management. Then you will go back to C, and within a couple of hours you'll find yourself appalled. The resulting clarity you will see C with is completely accurate, and while you can get it from other places, Rust is a very efficient teacher of the problems I'm referencing here. In 2025 C is truly an appalling language.
I truly appreciate the depth of your response. I can see that it comes from long experience and serious thought.
I don’t claim to be immune to the sharp edges of C. I’ve already cut myself more than once. Maybe I’m bleeding now and don’t even know it yet.
But there’s something in me that still wants that control — even if it hurts. I’m not building for clients or large teams. I’m building for myself with intention.
Maybe I’ll shift later. Maybe I’ll come to hate C myself. But for now, every time I compile a C binary and see it run exactly as I built it — without layers, runtimes, or abstractions — I feel like I’m touching something real.
I’ll take a look at Rust. Not to escape C — but to understand memory from a different angle.
Thanks for your honesty.
"But there’s something in me that still wants that control"
There are many modern languages that give you control without giving you the grotesquely unsafe constructs that C gives you. You don't need to pervasively be able to access arrays out of bounds. You don't need to pervasively be able to do arbitrary pointer arithmetic. Even if you do need to fool with that stuff you can always do it in some unsafe block. We've built methods that give you the control you need without the control you don't need.
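To make that concrete, both of those constructs look completely unremarkable in C; a toy sketch like this will typically compile without complaint on default settings, and the behaviour is simply undefined:

    #include <stdio.h>

    int main(void)
    {
        int a[4] = {1, 2, 3, 4};
        int *p = a;

        a[4] = 42;                 /* write one past the end of the array */
        printf("%d\n", *(p + 7));  /* arbitrary pointer arithmetic, then a read */
        return 0;
    }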
Alternatively, consider learning assembler. No joke. While theoretically all my complaints apply only moreso, assembler has the advantage that being "the language we speak to the CPU with" means that it will always still have some use until such time as we switch CPU architectures. At least in the assembler world you know you're juggling lit torches all the time.
Thanks — I actually plan to start learning assembler this fall. Not for pain, but for deeper understanding.
Step by step, I want to get closer to what’s really going on.
Yeah, I agree — sometimes you can really run into that and spend a lot of time handling errors.
Java is also a great language. I get why it clicked for you.
I'm an old scientist that did lots of computer simulations that ate up many many cpu cycles. I started with BASIC in the 70s, then learned machine language, and used Fortran throughout the 80s. My simulation programs ran for 24 hours or more. Once I learned C, the concepts of structures and pointers were tremendously useful for storing and passing information. I don't believe any other language could be as efficient when program run times are measured in hours.
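A toy sketch of the struct-and-pointer pattern being described (a hypothetical particle record, not the original simulation code):

    #include <stddef.h>

    /* One record per particle; the whole simulation state sits in one
       contiguous array. */
    struct particle {
        double x, y, z;
        double vx, vy, vz;
        double mass;
    };

    /* Passing a pointer hands the whole record over without copying it,
       which matters when the run lasts for hours. */
    static void step(struct particle *p, double dt)
    {
        p->x += p->vx * dt;
        p->y += p->vy * dt;
        p->z += p->vz * dt;
    }

    static void step_all(struct particle *ps, size_t n, double dt)
    {
        for (size_t i = 0; i < n; i++)
            step(&ps[i], dt);
    }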
That’s deeply encouraging to hear — thank you.
I often doubt myself when I see all the modern hate for C, but hearing from someone who used it when cycles mattered…
That means a lot.
I hope to earn that same kind of precision.
I learned to program in C, and even though I use other languages more often these days, I’m very glad C was my starting point.
It forced me to understand memory, resource management, and what actually happens when you call a function or access a variable. No abstractions to hide behind — just the machine and you.
Even now, when performance matters, I often go back to C-level tools and techniques — mmap, manual memory control, system calls — because they give me the precision I need.
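For instance, a stripped-down sketch of the mmap approach (POSIX only, a hypothetical count_lines function, error handling cut to the bone) looks something like this:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Map a whole file read-only and count newlines, with no read()
       calls and no intermediate buffers. */
    long count_lines(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;

        struct stat st;
        if (fstat(fd, &st) < 0 || st.st_size == 0) {
            close(fd);
            return -1;
        }

        char *data = mmap(NULL, (size_t)st.st_size, PROT_READ,
                          MAP_PRIVATE, fd, 0);
        close(fd);                   /* the mapping outlives the fd */
        if (data == MAP_FAILED)
            return -1;

        long lines = 0;
        for (off_t i = 0; i < st.st_size; i++)
            if (data[i] == '\n')
                lines++;

        munmap(data, (size_t)st.st_size);
        return lines;
    }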
More importantly, learning C gave me fundamentals I don’t think you get as deeply with higher-level languages. It made it easier to pick up other languages, not just syntactically, but conceptually. I understand what they abstract away — and at what cost.
C isn’t perfect, and it’s not the right tool for everything. But if you want to understand computers, I still think it’s one of the best places to start.
And like it or not, there are still things that — for legacy, for portability, or for raw control — can only really be done in C.
I also like C, for reasons you mention. I dislike many of the features of other more modern programming languages (even if they do have some advantages, many of the things they do are not as good in my opinion, so I prefer C programming).
I’m really glad to hear that. It’s easy to feel alone in that view these days — but the more I talk to people who build things close to the metal, the more I realize we’re not as rare as it seems.
I remember trying Python once — it didn’t click. Then C++ — and when I hit memory management, I didn’t turn away… I got curious.
So I looked deeper — into C. And somehow… I stayed.
Thanks for sharing that.
I really don't like how they always try to do everything on the heap. Realizing how memory worked in Rust was a pretty major letdown given how it's sold.
I’ve heard similar thoughts from others who went into Rust expecting something radically freeing — and found themselves wrestling with the borrow checker and unexpected heap usage.
I still want to explore Rust at some point, but I think it’s important to hear real experiences like yours.
Sometimes the simplicity of C — for all its sharp edges — feels more honest.
For me, I started with BASIC on various machines, then moved to CP/M, then MS-DOS. I learned Turbo Pascal to be able to maintain a program that talked to a GTEK EPROM burner, and fell in love with the language after a few months.
I stuck with it through the various iterations, right up to the point where Borland's management went insane, and they lost their chief architect to Microsoft. I tried C++ after that, but the amount of boilerplate and cruft compared to Delphi was just unbearable.
Things I personally hate about C include
* Case Sensitivity
* NULL terminated strings
* Macros (there are usually many skunks worth of code smells compressed into C macros)
* Pointer syntax that is way too easy to confuse with line noise.
* Slow, oh so slow compiler/linker cycles
* Usual association with make
Pascal is faster in compile and runtime. It's smaller, and has almost magical strings these days.
I really appreciate you sharing that journey — it reads like a personal history of programming.
I’ve heard others mention the elegance of Turbo Pascal and Delphi, especially the string handling.
And yes — C macros and pointer syntax can absolutely feel like arcane incantations sometimes.
I guess in the end, we all gravitate toward the languages that feel like home.
It's great when you're in the driver seat. It's not great when they need to launch in 3 months and management throws a bunch of intermediate engineers at you who can't do anything without "type safety".
Software will continue to grow to the point where people move rapidly until they hit a limit. We laugh at future devs being weak, future devs laugh at our tech being unsafe.
That’s a really insightful way to put it.
I think you're right — being in the driver seat gives one kind of experience, but leading a team under pressure is a different challenge entirely.
I guess every generation of developers finds their own balance between freedom and safety.
Appreciate your perspective.
It's not about disliking C. If I have to build an MVP quickly, I need a higher-level language to do some of the heavy lifting for me. Memory management or implementing thread safety is not something I wish to do every day. Doing so would not pay my bills.
Absolutely fair.
If I needed to ship a product quickly and prove something to investors or customers, I would choose a higher-level language too.
For me, C programming is not about speed, it's about deep understanding, intention, and creating things that I truly understand and can control.
But I get it - at the end of the day, we all have bills to pay. Thanks for sharing your insight.
For simple things I also like shell scripts. But for bigger, more complex systems, especially with long lifetimes, many users and many developers, the sharp edges get too dangerous. In those cases I move to Java or at least something with stronger typing and less chance of memory management errors.
My last big project was 100kloc C/C++ in a radiator valve though. Not many languages with a run-time would have fit in the 32kB code space for that project.
I think that's the beauty of C — it lets you fit logic into 32kB without screaming.
For me it's not about "modern safety" — it's about knowing that every byte is mine. But yeah, I get the tradeoff when you’ve got teams, timelines and a JVM.
I still use C to write little programs for Arduino, but it drives me nuts thinking about all the useless stuff C is doing with the stack and calling convention when I could just use global variables for everything and sometimes even use nothing but registers for inner loop variables.
I stick with C because (1) I'd have to figure out the tooling for 100% assembly development on AVR-8, and (2) C is portable to other platforms, so if I want to migrate to an ARM or ESP-32 board it would be easy. (The upward migration path for AVR-8 is a soft core on an FPGA, but talk about jumping from the frying pan into the fire.)
The 100kloc thing had no malloc/free, ie most significant persistent data objects were static, and the inlining with LTO (and some help) was pretty intense, so the stack was used, but not as much as it seemed on the surface. We did (re)invent some carefully managed buffers shared through several levels of call, since we were doing, for example, AES-GCM: https://github.com/opentrv/OTAESGCM
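Roughly the shape of that static, no-malloc style, as a toy sketch (a hypothetical borrow_msg_buf helper, not the actual OTAESGCM code):

    #include <stddef.h>
    #include <stdint.h>

    /* No malloc/free anywhere: every persistent object has a fixed
       address decided at link time, so worst-case RAM use is known
       before the firmware ever runs. */
    #define MSG_BUF_LEN 64

    static uint8_t msg_buf[MSG_BUF_LEN];

    /* Callers borrow the single shared work buffer instead of
       allocating their own; by convention nobody holds on to it
       across a blocking call. */
    uint8_t *borrow_msg_buf(size_t *len_out)
    {
        *len_out = MSG_BUF_LEN;
        return msg_buf;
    }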
Yeah, the stack can be such a headache. I remember how much pain I went through just trying to wrap my head around it — even gave up a few times.
But then I’d cool down… and come back to it. Some languages just never “clicked” for me, but somehow I keep coming back to C.
I've been trending towards "C with namespaces" for Arduino on AVR-8.
With namespaces you can have globals and few collisions. At the expense of more typing, of course.
You do know about the "register" storage class/keyword, I am sure. Sometimes it works.
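For example, on an inner loop it is nothing more than a hint, and modern compilers are free to ignore it (a hypothetical checksum routine, just to show the shape):

    #include <stdint.h>

    /* 'register' asks the compiler to keep the loop variables out of
       memory; taking their address is then forbidden. */
    uint16_t checksum(const uint8_t *buf, uint16_t len)
    {
        register uint16_t sum = 0;
        register uint16_t i;

        for (i = 0; i < len; i++)
            sum += buf[i];
        return sum;
    }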
Edit - the tooling for AVR assembler seems to be avr-gcc and make. Check out Nerd Ralph's "ArduinoShrink" library for an example of blended C and assembler:
https://github.com/nerdralph/ArduinoShrink/tree/master
BTW you can get very close to the metal in Java too if you are so inclined - I was part of a team doing high-speed trading with microseconds timing and minimal GC in Java. A very few 'unsafe' lines made that possible (but also the possibility to crash the JVM far far faster than it could exit otherwise!)...
That’s fascinating. I never thought Java could get that close without tripping on GC. Must’ve taken a lot of careful design and deep understanding.
I’d honestly love to read more about setups like that — maybe in my quiet hours. Feels like something worth studying, even if I keep walking the C path for now.
All your points are valid. I don't think most people "dislike" C. People have options and most choose non-C. From my perspective, when the software or the system itself is already extremely complex, using C just adds more complexity on top. Many people including me choose not to add more.
For me it’s because I enjoy building useful things more than I enjoy futzing with the details of code. I could build something useful in C. Or I could spend the same amount of time building a dozen useful things in Python. With the added benefit that they’re less likely to segfault.
WHY does it matter to you to be that close to the machine? For many of us, we value different things. What you perceive as control, we perceive as fussiness.
That’s a really fair question.
For me, it’s not just about building tools — it’s about understanding what I’m building.
I like knowing where the bytes go. What the memory looks like. How the binary behaves. It slows me down, sure — but it teaches me things I didn’t even know I didn’t know.
I’m not building for scale or clients. I’m building to see.
That kind of closeness makes the machine feel less like a mystery, and more like a partner.
I think if you learned to program 20 years ago or more you're likely to be faster at C than Python unless you jumped on Python in the Python 2 days or earlier.
For me it's pretty hard to get in the flow with Python because I keep having to stop and read the documentation. I think this is just a personal thing but it's possible that Python may just be a significantly more complex language and that's driving some of it. I'm sure I'm not the only one like this.
There was a string heavy app I wrote years ago and I had multiple false starts in Python and then one day I just said "this needs to get done" and plowed through it in C nearly one shot in a couple hours. I've experienced this kind of thing multiple times. It's a little hard to really communicate how this feels.
EDIT: Yeah I think it really depends. I certainly do a lot of heavy numerical/ML stuff in Python just because that's where the libraries are (and IMO the libraries being written in python isn't a coincidence, it's a fantastic language for that.)
It's not just "string handling algorithms" though. I've written an entire web browser in straight C and it was mostly just walking through standards and coding. Maybe some of it is familiarity but I think part of the lack of familiarity comes from C just not having complex built-in data structures. There's no hashmap in C. There are no iterators in C etc. There's a tiny standard library to become familiar with and that's about it.
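To illustrate what "no hashmap" means in practice, the kind of thing you end up writing for yourself looks roughly like this (a sketch only, with hypothetical map_put/map_get helpers: fixed capacity, open addressing, no deletion, and the table should be zero-initialised with struct map m = {0}):

    #include <stddef.h>
    #include <string.h>

    #define MAP_SLOTS 64                 /* must stay a power of two */

    struct map {
        const char *keys[MAP_SLOTS];     /* NULL means the slot is empty */
        const char *vals[MAP_SLOTS];
    };

    static size_t map_hash(const char *s)
    {
        size_t h = 5381;                 /* djb2 */
        while (*s)
            h = h * 33 + (unsigned char)*s++;
        return h;
    }

    /* Insert or overwrite; returns -1 if the table is full. */
    int map_put(struct map *m, const char *key, const char *val)
    {
        size_t start = map_hash(key) & (MAP_SLOTS - 1);
        for (size_t probe = 0; probe < MAP_SLOTS; probe++) {
            size_t j = (start + probe) & (MAP_SLOTS - 1);
            if (m->keys[j] == NULL || strcmp(m->keys[j], key) == 0) {
                m->keys[j] = key;
                m->vals[j] = val;
                return 0;
            }
        }
        return -1;
    }

    /* Returns NULL if the key is absent. */
    const char *map_get(const struct map *m, const char *key)
    {
        size_t start = map_hash(key) & (MAP_SLOTS - 1);
        for (size_t probe = 0; probe < MAP_SLOTS; probe++) {
            size_t j = (start + probe) & (MAP_SLOTS - 1);
            if (m->keys[j] == NULL)
                return NULL;
            if (strcmp(m->keys[j], key) == 0)
                return m->vals[j];
        }
        return NULL;
    }

Not much code, and once you have written it a couple of times there is not much left in the language itself to be unfamiliar with.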
I was taught in C. Over 20 years ago. I’m massively faster in Python, and have been since about 6 weeks after I picked it up.
It sounds to me like part of that for you is about familiarity and another part might be about the specific problems you’re solving.
I’m usually working on solving problems at much higher levels of abstraction than string processing algorithms. For me, a big part of Python’s productivity is its ecosystem of useful abstractions (there’s an XKCD for that). And not having to worry about memory management or null pointers. How powerful exception handling is. Etc.
Python may in some ways be a more complex language, but I find it much simpler to reason about.
Yeah, I think you’re right — Python really shines in many areas.
Honestly, a lot comes down to the person, the kind of problems they solve, their mindset… and sometimes, their scars.
I don’t even fully know why Python didn’t click with me. I understood it. I saw how fast it can be. But for some reason, I kept drifting back to C.
Maybe it’s just how my brain works. Maybe it’s where I found meaning. Hard to explain.
Ultimately, there are no right and wrong programming languages (except Visual Basic, lol, or maybe PHP 3). Sometimes a language is wrong for a specific project, but there’s nothing wrong with any particular language preference. I find it hilarious that there have been flame wars over the topic. So if you ultimately find more meaning or enjoyment in writing C code… sure, it’s not for everyone, but it’s awesome that you’ve figured out what you like!
It’s true — programming languages spark some of the fiercest debates. Everyone wants to defend the one that speaks to them.
That’s why it’s been such a pleasure having this kind of conversation — thoughtful, respectful, and grounded. Thanks again. I’ve really enjoyed it.
> For me, a big part of Python’s productivity is its ecosystem of useful abstractions
Yeah the huge amount of libraries that are readily available and can be used right away without messing with a complex build system is a huge deal.
For me personally, C# is like the middle ground. A language I like much better than Python but also has a fairly rich ecosystem, and usually NuGet makes including dependencies easy.
That’s a great point — C# really does hit a sweet spot between control and convenience.
I’ve never spent serious time with it, but I’ve heard a lot of good things about the tooling and the ecosystem.
Maybe one day I’ll give it a try — especially if I ever need something with more abstractions but less ceremony than Python.
Thanks for sharing your take — it’s always helpful to hear how others find their flow.
I think when .Net 10 is released it's a good time to give it a whirl.
New in the upcoming release[1] will be the ability to run C# files as if they were scripts[2], ie without an explicit build step. Should lower the barrier to just fooling around.
I also like how they've gone away from the "everything must be an object" style ala Java, and allow top-level statements so it reads more like C/C++. It's just sugar, but for smaller programs that really makes a difference IMHO.
[1]: https://news.ycombinator.com/item?id=44699174
[2]: https://devblogs.microsoft.com/dotnet/announcing-dotnet-run-...
That actually sounds really cool. Being able to run C# like a script lowers the friction a lot. Thanks again.
I have never worked in a legacy C code base, but I can imagine it might not be a great experience. However, writing low-level projects in C just by myself is very interesting.
Exactly. That’s how I feel too — writing small, precise tools in C by myself is strangely rewarding. Thanks for sharing that — it’s good to know others feel it too.