Have you considered not doing that? It's not obligatory to have an LLM shit out unreviewed code for you, you're making a choice to do that, and you can make a choice not to.
Review the code. Hell, maybe even write some code yourself.
What you're describing is how I feel whenever I use an LLM for anything more than the most basic of tasks. I've gone from being a senior-level software developer to managing a barely competent junior developer whose only redeeming skill is the ability to type really, really quickly. I quit the management track some time ago because I hated doing all my software development via the medium of design documents which would then be badly implemented by people who didn't care. There's no way you're going to get me to volunteer for that.
Not OP, but you nailed my feelings perfectly. I did not like managing for precisely this reason, and it never got better. The trenches are for me.
Re LLMs: I love collaborative coding because I can sometimes pick up or teach new tricks. If I'm too tired to type the boilerplate, I sometimes use an LLM. These are the only two redeeming values of LLM agents: they produce code or designs I can start from when I ask them to. I rarely do.
I hope OP can find a balance that works. It's sad to see the (claimed) state of the art be a soulless crank we have to turn.
> managing a barely competent junior developer whose only redeeming skill is the ability to type really, really quickly
Hits the nail on the head. An actual junior developer would at least learn over time. With an LLM, you open a new chat and start over with a new hire.
I take a hybrid approach. I will describe a simplified problem to the LLM and have it generate a well-commented, reasonable approach to the problem. I then use that as a cheat sheet for implementing my actual code. This still gives me hands-on coding and more control, without needing to agonize over the details of each coding technique.
I think you're on to something: It sounds like the developer needs to be more hands-on in making changes, as opposed to treating the AI like a subordinate with autonomy.
---
For example: About two years ago I worked with a contractor who was a lot more junior than our team needed. I'd give him instructions, and then the next day spend 2-3 hours fixing his code. In total, I was spending half of my time handholding him through his project.
The problem was that I had my own assignments, and the contractor was supposed to be able to do his job with minimal oversight (i.e., taking up roughly 0.5-1.5 hours of my day).
If the contractor had been helping me with my assignments (i.e., if he had been my assistant), I'd have loved the arrangement.
(In case you're wondering how it ended up for the contractor, we let him go and hired someone who worked out great.)
---
I suspect if the OP can figure out how to make the AI an assistant, instead of an employee with autonomy, then it will be a better arrangement. I personally haven't found an AI that does that for me, but I suspect I'm either using it incorrectly or using the wrong tools.
Technology is about efficiency, and for people to adopt it you need to be 10x more efficient. LLMs took a 15-year process of burning out and shortened it to 1.5 years.
I feel the opposite. I appreciate the ability to iterate and prototype in a way which lowers friction. Sure I have to plan steps out ahead of time, but that's expected with any kind of software architecture. The stimulating part is the design and thought and learning, not digging the ditch.
If you're just firing off prompts all day with no design/input, yea I'm sure that sucks. You might as well "push the big red button" all day.
> If it fails, I just switch to another model—and usually, one of them gets the job done.
This is a huge red flag that you have no idea what you're doing at the fundamental software architecture level imo. Or at least you have bad process (prior to LLMs).
> I feel the opposite. I appreciate the ability to iterate and prototype in a way which lowers friction.
I feel the same way. Things I like: Thinking about architectures and algorithms. Things I don't like: Starting out with a blank slate, looking up the exact function names or parameters. I find it much easier to take something roughly implemented and improve upon it than to start from nothing and build it.
I think about what I want fairly specifically. I discuss it with the LLM. It implements something. Half of the time it's what I expect, I can move on. Sometimes it's done something I wasn't expecting in a better way, which is nice. Frequently it's done something I wasn't expecting in a worse way; I either tell it to fix it, or just fix it myself.
In my previous role, I did a huge amount of patch review, which I always found quite tedious. Even though this looks superficially similar, it doesn't have the same vibe at all. I think it's because the LLM will accept being told what to do in a way no self-respecting coder would. (One complaint I'd heard about another person's reviews was that the person whose code was reviewed felt like they were a marionette, just typing exactly what the reviewer told them to type.)
This way I can do the things I enjoy, while neither having to worry about some human being's feelings, nor having to do the low-level stuff that's a chore.
As the saying goes, mathematics is not a spectator sport. The same applies to programming. If you don't do the lower level work, you are a spectator that is rearranging other people's laundered code, or even their laundered architectures.
> This is a huge red flag that you have no idea what you're doing at the fundamental software architecture level imo. Or at least you have bad process (prior to LLMs).
Particularly in the present. If any of the current models can consistently make senior-level decisions I'd like to know which ones they are. They're probably going to cross that boundary soon, but they aren't there yet. They go haywire too often. Anyone who codes only using the current generation of LLM without reviewing the code is surely capping themselves in code quality in a way that will hurt maintainability.
I always enjoyed problem solving, and programming was more of a means to that end for me. These days, focusing on syntax feels a bit tedious, especially when LLMs can handle so much of it. That being said, I still find myself obsessing over code quality, reading and reviewing code, and thinking a lot about architecture and best practices. I still get a lot of satisfaction from building things well, even if the actual mechanics of typing out code aren't always the most exciting part.
Many of us will make that shift effectively, sure. I think the problem is that to really be a good architect, you need 10+ years of actually doing things to understand what should/should not be built, and the industry is rapidly removing the jobs that let people acquire that experience.
Is AI genuinely that good for you all? I can't leave it to its own devices, I have to review everything because (from experience) I don't trust it. I think it's an amazing technological advancement, perhaps will go down as one of the top 10 in the history of our species. But I can't just "fire and forget".
And that's not just because its output is often not the best, but also because doing it myself causes me to think deeply about the problem and come up with a better solution that considers edge cases. Furthermore, it leaves knowledge in my head about the project that helps me with the next change.
I see comments here where people seem to have eliminated almost all of their dev work, and it makes me wonder what I'm doing wrong.
I'm in the same boat: I'm mostly doing C# in Visual Studio (classic) with Copilot, and it very rarely gives useful code from prompts. Oftentimes the auto-suggestions are hallucinations, and they frequently interfere with "normal" tab completion.
I'm wondering if I'm using the wrong tool, or if Visual Studio (classic) Copilot is just far behind industry norms.
I felt the same way until I tried Claude Code. Moving from an autocomplete-based workflow to a conversation-based workflow changed everything. I find traditional Copilot useless by comparison.
The main problem I have with auto-suggestions is that they distract my flow of thinking. Suddenly, I go from thinking about my code carefully, to reviewing someone else's code. To the point where I get a bit stressed typing, worrying that if I go too slow, the suggestion will pop up. As you may guess, I therefore have them turned off :)
I am playing with Zed now though, and it has a "subtle" mode for suggestions which is great. When I explicitly want to see them, I press option key. Otherwise, I don't see them.
I think it depends on your niche and model. Gemini Pro worked amazingly for me when doing (relatively simple) graph algorithms in Python, but completely sucked when I switched to (relatively complicated) LaTeX layouts.
I don’t think you’re doing anything wrong. Some shops just have very low quality bars where they can ship things that are, to be frank, broken. I tend to use Sonnet 4 these days, and use it for tasks that aren’t too important or ones that require prototyping and iteration over perfection.
I find it’s really great for augmenting specific kinds of concentrated tasks. But just like you, I have to review everything it creates. Even Claude Opus 4 on MAX produces many bugs on a regular basis that I fix before merging in a change. I don’t mind it though, as I can choose to use it on the most annoying areas, and leave the ones I enjoy to work on myself.
Vibe coding sounds miserable. I use LLMs pretty heavily but never as a replacement for my own mind. I'm glad it exists for the people who can't program but it's much less pleasant than being explicit myself.
For me, vibe coding is becoming more of a niche. I still use it, but not that often. I prefer the Codex-style workflow, in which you spend a lot more time specifying things, providing examples, and reviewing PRs.
Essentially, vibe coding is synchronous: you have to wait around for the LLM to respond. Codex is async and lets you focus on other things while it works. The primary issue with async workflows is that you really don't want to have to iterate much, so investing more time upfront clearly defining the prompt, the agents.md, and prior examples becomes really important. Usually if it is >90% correct I will just fix the rest myself. For hand coding, I use a fairly basic vim setup with no LLM plugins; I do not like LLMs jumping in and trying to auto-complete stuff.
No, I feel the exact opposite. This post makes me want to stay away from agentic coding altogether. Idk about everyone else, but I love programming more than any other thing I do. I do not want to end up like this.
> Recently, I realised I no longer enjoy programming.
You don't? Sounds to me like you just don't enjoy prompting. Try doing some programming again. Engage your brain with a challenge and try to solve the problem itself, rather than just explaining it to an AI and never even looking at the code. You enjoy the driving, not the destination; getting a taxi there removes your purpose.
I feel you, but I think that if you want to make exceptional software, and not just a large volume of mediocre software, the best way is still to write code manually.
I go through waves. Sometimes I'm in awe of what the LLM does for me, the rapid progress through boilerplate code in seconds that would have taken me forever, leaving me to ponder the actual core issues of the problems I'm solving.
Sometimes I want to hunt it down and erase the lazy, lying, gas-lighting **** from existence.
I actually brought back some of the joy of programming for myself by leveraging LLMs.
Context: I'm burnt out, having done web software development for business apps for 15 years now and counting.
I started to get into game development, testing out ChatGPT and Claude to assist, and it's been going great. I make so much progress, and the results are fun, which makes the coding process fun. The LLM covers gaps in my knowledge of math, physics, and game-dev strategy and architecture. But since I know how to code, I can take what it gives me and accomplish all kinds of things that would be much more difficult on my own.
I'm definitely on the opposite side where despite being in the nascent era of code assisting agents, they've become a productivity multiplier and I shudder thinking about how productive my last 25 years of programming could've been if we had coding agents back when I started my career. Young aspirational developers graduating now are going to be able to accomplish much more than us over their entire career.
It's also got me to explore a lot more domains than I would've considered otherwise, e.g. using Python to accomplish tasks with local pytorch/onnx models and creating ComfyUI nodes or using bash for large complex scripts that I would've previously used bun .ts shell scripts to implement.
Even non dev tasks like resolving Linux update/configuration/conflicts have become a breeze to resolve with Claude/Gemini Pro being able to put me on the right track which no amount of old-school Google searches were able to.
Although it's not all upside: LLMs typically generate a lot more code than I would've used to accomplish the same task, so code maintenance is likely to require more effort. For that reason I don't like using LLMs to change code I've written, but I'm more than happy to get them to make changes to code that other LLMs have created.
As a technical person who is not a professional programmer, but finds or makes whatever I need, LLMs (Gemini) are dizzyingly powerful.
I've made so many things I would never have even attempted without it: a change-based timelapse tool, virtual hand-controlled web-based theremin, automated sunrise and sunset webcam timelapse creator, healthy-eating themed shoot'em up, content-based podcast ad removal tool, virtual linescan camera, command line video echo effect, video player progress bar benchmark tool, watermark remover, irregular panoramic photo normalizer, QR code Game of Life, 3D terrain bike share usage map, movie "barcode" generator, tool to edit video by transcript, webcam-based window parallax effect, hidden-image mosaic generator, and all kinds of other toys I've already lost track of.
I had to rip all the LLM crap out of my editor to feel like I was doing anything. My programming ability has gotten better and if I do actually need to use an LLM I just open ChatGPT, DeepSeek, or punch stuff into Ollama.
The majority of what I end up using langle mangles for is trivially verifiable but tedious to do things like "turn this Go struct into an OpenAPI Schema" or "take this protocol buffer definition and write the equivalent in Rust prost".
I suspect that you were never truly interested in programming; otherwise you wouldn't have preferred talking to several LLM models over writing the code yourself.
Nobody forced you to switch LLM models until one of them eventually solved your problem.
What I have found is that using LLM’s to do the same stuff I already knew how to do is not super enjoyable. I know web application development, and having some agent build it for me is just a productivity gain with no job satisfaction. So I’ve been where you are recently.
But on the flip-side, using the AI to help me learn the bits of programming that I’ve spent my whole career ignoring, like setting up DevOps pipelines or containerisation, has been very enjoyable indeed. Pre-AI the amount of hassle I’d have to go through to get the syntax right and the infrastructure set up was just prohibitively annoying. But now that so much of the boilerplate is just done for me, and now that I’ve got a chat window I can use to ask all my stupid questions, it’s all clicking into place. And it’s like discovering a new programming paradigm all over again.
Can 100% recommend stepping outside your comfort zone and using AI to help where you didn’t want to go before.
Long story short, LLMs are great for people who never wanted to become "code artists" (aka hackers) which many people within CS and SWE do not wish to be.
If your goal is to be able to express your ideas fluently, though, you'll have to get good at coding. The differentiator is how you look at the pain and struggle involved. If your goal is to improve yourself, the struggle has value: you learn by trying to do harder and harder things. If your goal isn't to learn, though, you may as well outsource the struggling to a bot.
I feel the exact same way (using the tools they force on us at work).
I've become very lazy. For most tasks, I explain the problem to the LLM and go browse the web while it computes. More often than not it fails, and I reiterate my prompt several times. Eventually, I need to review the changes before submitting them for review, which isn't very fun.
Overall, I feel I'm losing my skills and the competitive advantage I had at my role (I'm a decent coder, but don't care too much about product discussions). The way I'm using the tool right now, I'm pretty sure I'm not more productive.
We'll see how it goes. It's still a pretty new tech and I think I should learn when not to use it and try to have good hygiene with it.
(e.g. did you consider simply not using LLMs to write code and maybe just use them for rubberducking, cross-checking your code and as StackOverflow replacement?)
Use the LLM to code away the boilerplate if you must, but get stuck in and deal with the novel stuff and get yourself the dopamine hit of doing the hard thing.
Sure, get an LLM to suggest an approach, but how can you feel joy when you've turned yourself into a system architect working with a particularly stupid and relentlessly optimistic bunch of idiots who never really learn?
You can choose how you do your work. You have autonomy. So, choose.
No, because I only use LLMs as code assistants (and I don't think they do that great a job of spitting out code without me needing to review it). Typically I use LLMs to write stuff that I find boring and repetitive: unit tests, glue code for APIs (like JSON mapping, for example), dependency configuration, that sort of thing.
The actual meat I prefer to code myself with minor LLM support (for example, I ask it to review my code).
If you see "programming" as "writing code" then I can see where that joy is lost. If you see "programming" as "creating software" then you can maintain that joy. I would argue the goal has always been to write less code. That has driven the design of programming languages for the past 50 years.
You're more or less describing what happened to me when I changed from dev to dev team lead. I couldn't afford the time anymore to do code reviews, left that to my colleagues, and only tested the resulting software. When it didn't work, I returned a bug task, and then we iterated until it worked.
No more joy in writing software. Instead my time is spent writing user stories and specifications as well as possible.
You should take a break, it's normal to go through down periods. Work coding is also generally dissatisfactory compared to personal fun coding.
Personally I have found that the sky is now the limit thanks to AI assistants! I was never the best coder out there, probably a median level programmer, but now I can code anything I imagine and it's tons of fun.
Find some creative projects you want to work on and code them up!
These posts are so exhausting and telling of those who never enjoyed programming in the first place. Wish these were a wake up call to those posting that they’ve been in it for reasons they haven’t been able to understand till now.
Generally I agree. I use LLMs to solve work problems faster, but I work on my own personal projects by hand, only using LLMs to teach me concepts, give me direction, or help with complex portions of code (I'm learning Rust).
I can't escape the feeling that in 2-4 years no one will be even looking at the code and the IDE+chat interface of today will be a weird relic and most people will be just prompting finished artifacts.
I'd say that if you used to find pleasure and satisfaction in the art of writing code, then unless you're willing to stop using AI, it might be worth finding a different pursuit to channel that energy into. If you don't enjoy prompting now, it's only going to get worse from here, and your energy will be better spent finding something you do enjoy.
> Recently, I realized I no longer enjoy programming. It feels like I’m just going through the pain of explaining to the LLM what I want, then sitting and waiting for it to finish.
This isn't programming. Delete the AI stuff and start programming again. It's fine to use LLMs if you want, but nobody is forcing you to.
Why should labor care about the quality of their work when management also does not because it's hard at the prospect of the largest displacement of labor in history?
Well I believe the thread you're commenting in has answered that question.
Programming is also the field where it's the easiest to strike out on your own. Seizing the means of production in programming amounts to grabbing a $200 laptop.
I agree. Initially I felt liberated that it was doing all the tedious work for me so I could concentrate on the "good stuff" of creative thinking.
I usually start out with good intentions like
1. planning out work
2. crafting really good prompts
3. accepting bare minimum code changes
4. reviewing and testing code changes
But most 'agentic code tools' are not really well aligned with this philosophy. They are always over-eager and do more than what you ask them to. If you ask one to change a button color, it goes and builds out a color picker.
They sneak in more and more code and vex you with all the extra junk until you slowly stop caring about the extra stuff being snuck in, and the whole thing spirals out of control. Now you are just doing pure vibe coding.
Yea, a little. I think it's fun to see what it can do, but that gets old quick.
The real kick in the nuts is that people don't care about quality. Honestly, they never have, but now it's just worse. People see productivity gains, and that's literally all that matters. I guess they know they can ship bad stuff and still sell it. Only when retention numbers get bad do they complain about it and demand higher quality (without, of course, even thinking about taking the time to do things properly).
I think there's going to be a high demand for AI slop fixers in the future. Don't get me wrong, it's not that AI itself is incapable, it's that people aren't putting any effort in.
I think we'll push the people who code for enjoyment away and they'll be replaced by people who aren't as senior.
I feel exactly the opposite. To explain: programming had become very tedious, looking through endless lines of code, where lately everything appears to be generated, and trying to make sense of it. With programming bots, I can focus on what I want rather than spending an inordinate amount of time finding just the right syntax for something that was itself generated by another machine.
Honestly and simply: get with the program.
Using LLMs is just a different way of programming. Think of it as a high-level language, similar to how conventional programming languages relate to assembly.
Getting tasks done, tasks you would never have had the time or courage to tackle the traditional way, is another source of joy.
I've always found it a bit strange that people enjoy the act of coding so much that LLMs make them sad. For me it's always been about what I can make, not the actual typing of code into the editor. With LLMs I can make better stuff, faster, and it's really exciting. It used to be that if I needed to use a new library for one little task, it would be hours or days of reading the manual and playing around. Now it's minutes, and I can understand how the API works and write good, robust code that solves my problem.
Maybe it's more of a problem with your job and the tasks you're assigned?
> I've always found it a bit strange that people enjoy the act of coding so much that LLMs make them sad.
Why do you find this strange? It's like saying you find it strange that a carpenter enjoys working with wood, that it's only about the end product and not the process.
That's fair. I was actually a professional carpenter for a bit too, and you're right: I love touching the wood, sanding it, admiring it, etc. More so than I've ever liked inputting code into an editor.
I do, however, use electric planers, table saws and miter saws, because I want to produce the product fast and efficiently, because the end product still is the goal.
It is never just inputting text into an editor. When I write a piece of code, it is just the first draft of what I have in mind. Most of the time I change the code interactively until I'm satisfied. The act of writing the code myself helps clarify my ideas.
Fun or not is a personal perspective. What I was saying is that when you write code, you also think deeply about what you actually write, and you have the opportunity to optimize, change ideas, and improve overall. A programming language is more precise and succinct than natural language for expressing algorithms and ideas.
> LLMs took a 15-year process of burning out and shortened it to 1.5 years.
That is a hilarious and depressing perspective!
I think that's just the way you're doing it?
> They're probably going to cross that boundary soon
How? There’s no understanding, just output of highly probable text suggestions that sometimes coincide with correct text suggestions.
Correctness exists only in the understanding of humans.
In the case of writing to tests, there are infinite ways to have green tests and still break things.
I always enjoyed problem solving, and programming was more of a means to that end for me. These days, focusing on syntax feels a bit tedious, especially when LLMs can handle so much of it. That being said, I still find myself obsessing over code quality, reading and reviewing code, and thinking a lot about architecture and best practices. I still get a lot of satisfaction from building things well, even if the actual mechanics of typing out code aren't always the most exciting part.
In this world of LLM coding, we've jumped to the architect level.
Many of us will make that shift effectively, sure. I think the problem is that to really be a good architect, you need 10+ years of actually doing things to understand what should/should not be built, and the industry is rapidly removing the jobs that let people acquire that experience.
Is AI genuinely that good for you all? I can't leave it to its own devices, I have to review everything because (from experience) I don't trust it. I think it's an amazing technological advancement, perhaps will go down as one of the top 10 in the history of our species. But I can't just "fire and forget".
And that's not just because its output is often not the best, but also because by doing it myself it causes me to think deeply about the problem, come up with a better solution that considers edge cases. Furthermore, it gives me knowledge in my head about that project that helps me for the next change.
I see comments here where people seem to have eliminated almost all of their dev work, and it makes me wonder what I'm doing wrong.
These people aren't being honest, or they aren't dealing with any real level of complexity.
> it makes me wonder what I'm doing wrong
I'm in the same boat: I'm mostly doing C# in Visual Studio (classic) with Copilot, and it very rarely gives useful code from prompts. Oftentimes the auto-suggestions are hallucinations, and frequently they interfere with "normal" tab completion.
I'm wondering if I'm using the wrong tool, or if Visual Studio (classic) Copilot is just far behind industry norms?
I felt the same way until I tried Claude Code. Moving from an autocomplete-based workflow to a conversation-based workflow changed everything. I find traditional Copilot useless by comparison.
You're 100% either being dishonest or not dealing with any sort of complexity.
The main problem I have with auto-suggestions is that they distract my flow of thinking. Suddenly, I go from thinking about my code carefully, to reviewing someone else's code. To the point where I get a bit stressed typing, worrying that if I go too slow, the suggestion will pop up. As you may guess, I therefore have them turned off :)
I am playing with Zed now though, and it has a "subtle" mode for suggestions which is great. When I explicitly want to see them, I press option key. Otherwise, I don't see them.
I think it depends on your niche and model. Gemini Pro worked amazingly for me when doing (relatively simple) graph algorithms in Python, but completely sucked when I switched to (relatively complicated) LaTeX layouts.
I don’t think you’re doing anything wrong. Some shops just have very low quality bars where they can ship things that are to be frank, broken. I tend to use Sonnet 4 these days, and use it for tasks that aren’t too important or ones that require prototyping and iteration over perfection.
I find it’s really great for augmenting specific kinds of concentrated tasks. But just like you, I have to review everything it creates. Even Claude Opus 4 on MAX produces many bugs on a regular basis that I fix before merging in a change. I don’t mind it though, as I can choose to use it on the most annoying areas, and leave the ones I enjoy to work on myself.
[dead]
I've been through this a few times during my career.
This book helped me put it in perspective: https://pragprog.com/titles/cfcar2/the-passionate-programmer...
Vibe coding sounds miserable. I use LLMs pretty heavily but never as a replacement for my own mind. I'm glad it exists for the people who can't program but it's much less pleasant than being explicit myself.
For me, vibe coding is becoming more of a niche. I still use it, but not that often. I prefer the Codex-style workflow, in which you spend a lot more time specifying things, providing examples, and reviewing PRs.
Essentially, vibe coding is synchronous, as you have to wait around for the LLM to respond. Codex is async and lets you focus on other things while it works. The primary issue with async workflows is that you really don't want to have to iterate much. Therefore, investing more time upfront in clearly defining the prompt/AGENTS.md and prior examples becomes really important. Usually if it is > 90% correct I will just fix the rest myself. For hand coding, I use a fairly basic vim setup with no LLM plugins. I do not like LLMs jumping in and trying to auto-complete stuff.
what is 'codex style workflow' ?
OpenAI Codex. You provide it access to your github repo, provide prompts and AGENTS.md and it creates PRs for you.
No, I feel the exact opposite. This post makes me want to stay away from agentic coding altogether. Idk about everyone else, but I love programming, more than any other thing I do. I do not want to end up like this.
You're doing it wrong, even with language model...
You stopped reviewing the code..? You're not gonna make it.
You still need the visceral feel of writing the code, this builds the mental model in your head.
>Recently, I realised I no longer enjoy programming.
You don't? Sounds to me like you just don't enjoy prompting. Try doing some programming again. Engage your brain with a challenge and try to solve the problem itself, not just explain it to an AI and never even look at the code. You enjoy the driving, not the destination; getting a taxi there removes your purpose.
I feel you, but I think that if you want to make exceptional software, and not just a large volume of mediocre software, the best way is still to write code manually.
I go through waves. Sometimes I'm in awe of what the LLM does for me, the rapid progress through boilerplate code in seconds that would have taken me forever, leaving me to ponder the actual core issues of the problems I'm solving.
Sometimes I want to hunt it down and erase the lazy, lying, gas-lighting **** from existence.
I actually brought back some of the joy of programming for myself by leveraging LLMS
Context: I'm burnt out, having done web software development for business apps for 15 years now and counting.
I started to get into game development. I started testing out ChatGPT and Claude to assist, and it's been going great. I make so much progress, and the results are fun, which makes the coding process fun. The LLM covers gaps in my knowledge of math, physics, and game dev strategy and architecture. But since I know how to code, I can take what it gives me and accomplish all kinds of things that would be much more difficult going it alone.
I'm definitely on the opposite side where despite being in the nascent era of code assisting agents, they've become a productivity multiplier and I shudder thinking about how productive my last 25 years of programming could've been if we had coding agents back when I started my career. Young aspirational developers graduating now are going to be able to accomplish much more than us over their entire career.
It's also got me to explore a lot more domains than I would've considered otherwise, e.g. using Python to accomplish tasks with local pytorch/onnx models and creating ComfyUI nodes or using bash for large complex scripts that I would've previously used bun .ts shell scripts to implement.
Even non dev tasks like resolving Linux update/configuration/conflicts have become a breeze to resolve with Claude/Gemini Pro being able to put me on the right track which no amount of old-school Google searches were able to.
Although it's not all upside as LLMs typically generate a lot more code than I would've used to accomplish the same task so code maintenance is likely to require more effort, so I don't like using LLMs to change code I've written, but am more than happy to get them to make changes to code that other LLMs have created.
> I Lost Joy of Programming
I found the joy of making things.
As a technical person who is not a professional programmer, but finds or makes whatever I need, LLMs (Gemini) are dizzyingly powerful.
I've made so many things I never would have even attempted without it: a change-based timelapse tool, virtual hand-controlled web-based theremin, automated sunrise and sunset webcam timelapse creator, healthy-eating themed shoot'em up, content-based podcast ad removal tool, virtual linescan camera, command line video echo effect, video player progress bar benchmark tool, watermark remover, irregular panoramic photo normalizer, QR code Game of Life, 3D terrain bike share usage map, movie "barcode" generator, tool to edit video by transcript, webcam-based window parallax effect, hidden-image mosaic generator, and all kinds of other toys I've already lost track of.
I had to rip all the LLM crap out of my editor to feel like I was doing anything. My programming ability has gotten better and if I do actually need to use an LLM I just open ChatGPT, DeepSeek, or punch stuff into Ollama.
The majority of what I end up using langle mangles for is trivially verifiable but tedious to do things like "turn this Go struct into an OpenAPI Schema" or "take this protocol buffer definition and write the equivalent in Rust prost".
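(To illustrate the kind of mechanical, trivially verifiable translation described above — here a hypothetical Python analogue of the struct-to-OpenAPI-schema task; the `User` type, `to_openapi_schema` helper, and type mapping are all illustrative, not from any real library:)

```python
from dataclasses import dataclass, fields

# Hypothetical struct-like type we want an OpenAPI schema for.
@dataclass
class User:
    id: int
    name: str
    active: bool

# Minimal mapping from Python types to OpenAPI schema type names.
_TYPE_MAP = {int: "integer", str: "string", bool: "boolean", float: "number"}

def to_openapi_schema(cls) -> dict:
    """Translate a dataclass into a (very simplified) OpenAPI object schema."""
    props = {f.name: {"type": _TYPE_MAP[f.type]} for f in fields(cls)}
    return {
        "type": "object",
        "properties": props,
        "required": [f.name for f in fields(cls)],
    }

print(to_openapi_schema(User))
```

The translation is pure bookkeeping — every field maps one-to-one to a schema property — which is exactly why checking an LLM's output here is quick even when writing it by hand is tedious.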
I suspect that you never were truly interested in programming otherwise you wouldn't have preferred talking to several LLM models instead of writing code yourself.
Nobody forced you to switch LLM models until eventually one of them solves your problem.
What I have found is that using LLM’s to do the same stuff I already knew how to do is not super enjoyable. I know web application development, and having some agent build it for me is just a productivity gain with no job satisfaction. So I’ve been where you are recently.
But on the flip-side, using the AI to help me learn the bits of programming that I’ve spent my whole career ignoring, like setting up DevOps pipelines or containerisation, has been very enjoyable indeed. Pre-AI the amount of hassle I’d have to go through to get the syntax right and the infrastructure set up was just prohibitively annoying. But now that so much of the boilerplate is just done for me, and now that I’ve got a chat window I can use to ask all my stupid questions, it’s all clicking into place. And it’s like discovering a new programming paradigm all over again.
Can 100% recommend stepping outside your comfort zone and using AI to help where you didn’t want to go before.
Reread this: https://paulgraham.com/hp.html
Long story short, LLMs are great for people who never wanted to become "code artists" (aka hackers) which many people within CS and SWE do not wish to be.
If your goal is to be able to express your ideas fluently, though, you'll have to get good at coding. The differentiator is how you look at the pain and struggle involved. If your goal is to improve yourself, the struggle has value. You learn by trying to do harder and harder things. If your goal isn't to learn, though, you may as well outsource the struggling to a bot.
I feel the exact same way (using the tools they force on us at work).
I've become very lazy. For most tasks, I explain to the LLM and go browse the web while it computes. More often than not, it fails, and I reiterate my prompt several times. Eventually, I need to review the changes before submitting them for review, which isn't very fun.
Overall, I feel I'm losing my skills and the competitive advantage I had at my role (I'm a decent coder, but don't care too much about product discussions). The way I'm using the tool right now, I'm pretty sure I'm not more productive.
We'll see how it goes. It's still a pretty new tech and I think I should learn when not to use it and try to have good hygiene with it.
"Doctor it hurts when I do that..." ;)
(e.g. did you consider simply not using LLMs to write code and maybe just use them for rubberducking, cross-checking your code and as StackOverflow replacement?)
Use the LLM to code away the boilerplate if you must, but get stuck in and deal with the novel stuff and get yourself the dopamine hit of doing the hard thing.
Sure, get an LLM to suggest an approach, but how can you feel joy when you've turned yourself into a system architect working with a particularly stupid and relentlessly optimistic bunch of idiots who never really learn?
You can choose how you do your work. You have autonomy. So, choose.
No, because I only use LLMs as code assistants (and I don't think they do that great a job in spitting out code without me needing to review it). Typically I use LLMs to write stuff that I find boring and repetitive: unit tests, glue code for APIs (JSON mapping, for example), dependency configuration, that sort of thing.
The actual meat I prefer to code myself with minor LLM support (for example, I ask it to review my code).
If you see "programming" as "writing code" then I can see where that joy is lost. If you see "programming" as "creating software" then you can maintain that joy. I would argue the goal has always been to write less code. That has driven the design of programming languages for the past 50 years.
Thanks for this. It helps to rethink one's motives.
You're more or less describing what happened to me when I changed from dev to dev team lead. I could no longer afford the time to do code reviews, left that to my colleagues, and only tested the resulting software. When it didn't work, I returned a bug task, and then we iterated until it worked.
No more joy in writing software. Instead my time is spent writing user stories and specifications as well as possible.
You should take a break, it's normal to go through down periods. Work coding is also generally dissatisfactory compared to personal fun coding.
Personally I have found that the sky is now the limit thanks to AI assistants! I was never the best coder out there, probably a median level programmer, but now I can code anything I imagine and it's tons of fun.
Find some creative projects you want to work on and code them up!
Seems like you were never programming in the first place. If by programming we mean solving engineering problems with code.
> At this point, I’ve even stopped reviewing the exact code changes. I just keep pushing forward until the task is done.
This is definitely going to end well.
Which products are you working on so I can be sure to avoid using them?
I don't understand why people only think in extremes.
You don't have to give up anything you did before at all.
LLMs are just here to increase your productivity; cranking out unreviewed code when you don't want to is just silly to me.
These posts are so exhausting and telling of those who never enjoyed programming in the first place. Wish these were a wake up call to those posting that they’ve been in it for reasons they haven’t been able to understand till now.
Generally I agree, I use LLM to solve work problems faster but I work on my own personal projects by hand only using LLM to teach me concepts, give me direction or help with complex portions of code (learning rust)
Seems like you have given the part you enjoy in software development to LLM.
There's programming for fun, and there's programming for work.
Why would you program in a non-joyus way if you're doing it for fun? For professional work I fully get why you'd want to optimize.
I can't escape the feeling that in 2-4 years no one will be even looking at the code and the IDE+chat interface of today will be a weird relic and most people will be just prompting finished artifacts.
I'd say if you used to find pleasure and satisfaction in the art of writing code, then unless you're willing to stop using AI, it might be worth finding a different pursuit to channel that energy into. If you don't enjoy prompting now, it's only going to get worse from here, and your energy will be better spent finding something you do enjoy.
Sounds like you are going through the classic transition to management
>> Recently, I realized I no longer enjoy programming. It feels like I’m just going through the pain of explaining to the LLM what I want, then sitting and waiting for it to finish.
This isn't programming. Delete the AI stuff and start programming again. It's fine to use LLM's if you want but nobody is forcing you to.
I’ve had the same feelings. It’s tough for sure.
I’ve pivoted to architecture and higher level problem solving to continue my growth.
I have also found I do my best work when I’m happy. It’s important that the tool works for me and I don’t work for the tool.
> At this point, I’ve even stopped reviewing the exact code changes. I just keep pushing forward until the task is done.
How is this the future of software engineering?
Because capitalism
How is capitalism causing this?
Why should labor care about the quality of their work when management doesn't either, excited as it is by the prospect of the largest displacement of labor in history?
Well I believe the thread you're commenting in has answered that question.
Programming is also the field where it's the easiest to strike out on your own. Seizing the means of production in programming amounts to grabbing a $200 laptop.
Are you genuinely asking?
Yes.
I agree. Initially I felt liberated that it was doing all the tedious work for me so I could concentrate on the "good stuff": creative thinking.
I usually start out with good intentions like
1. planning out work
2. crafting really good prompts
3. accepting bare minimum code changes
4. reviewing and testing code changes
But most 'agentic code tools' are not really well aligned with this philosophy. They are always over-eager and do more than what you ask them to. Like if you ask it to change a button color, it goes and builds out a color picker.
They sneak in more and more code and vex you with all the extra junk, until you slowly stop caring about the extra stuff being snuck in and the whole thing spirals out of control. Now you're just doing pure vibe coding.
Yea, a little. I think it's fun to see what it can do, but that gets old quick.
The real kick in the nuts is that people don't care about quality. Honestly they never have, but now it's just worse. People see productivity gains and that's literally all that matters. I guess they know they can ship bad stuff and still sell it. Only when retention numbers get bad do they complain - not even think about taking the time to do things proper of course - about it and demand higher quality.
I think there's going to be a high demand for AI slop fixers in the future. Don't get me wrong, it's not that AI itself is incapable, it's that people aren't putting any effort in.
I think we'll push the people who code for enjoyment away and they'll be replaced by people who aren't as senior.
yeah don’t do that
have self-respect
[dead]
I feel exactly the opposite. To explain: programming had become very tedious, looking through endless lines of code, and lately everything appears generated anyway, with no one making sense of it. With programming bots I can focus on what I want, rather than spending an inordinate amount of time finding just the right syntax for something that itself was generated by another machine. Honestly and simply: get with the program.
Using LLMs is just a different way of programming. Think of it as a high-level language, similar to how conventional programming languages relate to assembly.
Getting tasks done, tasks you would never have had the time or courage to tackle the traditional way, is another source of joy.
Man these comments are a rancid minefield of soapboxing.
OP, take some time off and evaluate what you want.
> Man these comments are a rancid minefield of soapboxing.
s/these comments/this website/
I've always found it a bit strange that people enjoy the act of coding so much that LLMs make them sad. For me it's always been about what I can make, not the actual typing of code into the editor. With LLMs I can make better stuff, faster, and it's really exciting. It used to be that if I needed to use a new library for one little task, it would be hours or days of reading the manual and playing around. Now it's minutes, and I can understand how the API works and write good, robust code that solves my problem.
Maybe it's more of a problem with your job and the tasks you're assigned?
> I've always found it a bit strange that people enjoy the act of coding so much that LLMs make then sad.
Why do you find this strange? It's like saying you find it strange that a carpenter enjoys working with wood, that it's only about the end product and not the process.
That's fair. I was actually a professional carpenter for a bit too, and you're right: I love touching the wood, sanding it, admiring it, etc. More so than I've ever liked inputting code into an editor.
I do, however, use electric planers, table saws and miter saws, because I want to produce the product fast and efficiently, because the end product still is the goal.
Your point is well taken however.
You find it strange that people like different things than you do?
No, I find it strange that people like to input text into an editor so much that LLMs make them sad.
It is never just inputing text into an editor. When I write a piece of code, it is just the first draft of what I have in mind. Most of the time I change the code interactively until I'm satisfied. The act of writing the code myself helps in clarifying my ideas.
Are you saying that that is no longer fun with LLMs?
Fun or not fun it is a personal perspective. What I was saying was that when you write code you also think deeply about what you actually write and you have the opportunity to optimize, change ideas and overall improve. A programming language is more precise and succinct than natural language in expressing algorithms/ideas.