I'm watching a process to purchase an AI coding assistant right now, and it made me wonder: when the rubber hits the road, how much would I actually spend on the assistant?
I don't know if I'd call myself a booster or a skeptic. Loosely speaking, I'm all in at the office, but what would I actually spend?
On the one hand, my - I dunno - thought-leader-y hat would say: why wouldn't I spend $10k/head on them? These tools are amazing; they could double someone's output.
But they're also these infinite toys that can do anything, and you can spend a lot of money trying all the things. Does that actually provide value? Actual, rubber-hits-the-road monetary value on a per-head basis? If you're really focused and good, probably. But if you're not disciplined, you can spend a bunch of money on tokens, then spend a bunch of money on the new features/services in production, and then spend a lot of your colleagues' time cleaning up that mess.
And this is all the human stuff. This all assumes the LLM works relatively perfectly at interpreting what you're trying to do and doing it.
So: does it actually provide a benefit of X dollars per engineer per year? Because it wouldn't have to; it could in fact go the opposite way.
If I were working for you and I heard that you spent $10k on a bullshit LLM subscription instead of paying me $10k to work harder, I would look for a new job immediately.
I like what you’re saying, but if I had changed jobs every time my employer set $10k on fire, I guess I would have a pretty long resume.
Yup. While I like being invested in at work (sending me to learn new shit at conferences and stuff like that), I'd never agree with spending $10k on some tool where even the tool vendors say "use with caution, don't use this in production, bla bla bla".
If it really augmented my output, sure. Currently I just watch my tokens drop to 0 within 3-4 days of using it and then have to wait a month for them to reset, because I won't pay for more parrot tokens. It speeds up some small things, but the effect on my overall speed isn't really noticeable.
I'm not sure how you could spend $10k/head when, e.g., a Claude Team premium plan is $150/mo/head.
They're all usage-based plans. You probably wouldn't hit $10k/head for most users, but thousands is not unheard of. And it's kind of anticipating that that's what they'll want to charge.
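To put rough numbers on it, here's a quick back-of-envelope sketch. The $150/mo seat price is taken from the sibling comment; the overage figures are pure assumptions for illustration, not real vendor rates:

    # Rough per-head annual cost on a usage-based plan.
    # The $150/mo seat price comes from the sibling comment; the
    # overage figures are illustrative assumptions, not vendor rates.

    MONTHS = 12
    SEAT = 150  # $/month flat seat price (from the comment above)

    for overage in (0, 100, 350, 700):  # assumed extra $/month of token spend
        annual = (SEAT + overage) * MONTHS
        print(f"${overage}/mo overage -> ${annual:,}/year per head")

    # Output:
    # $0/mo overage   -> $1,800/year per head
    # $100/mo overage -> $3,000/year per head
    # $350/mo overage -> $6,000/year per head
    # $700/mo overage -> $10,200/year per head

So under these assumed numbers, a heavy but plausible overage lands you in the mid-thousands per head per year, and $10k/head only happens for the truly undisciplined.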
There is no ceiling to how much waste you can create.
Much like stacks and stacks of badly written web frameworks made things like collapsing a comment on new Reddit take 200 ms of JavaScript execution ( https://bvisness.me/high-level/burnitwithfire.png ), I can easily imagine people layering stuff together until the token burn is beyond insane.
I mean, just look at the Gastown repository. It's literally hundreds of thousands of lines of Go and Markdown files.
You can buy 10 individual Max licenses per dev (have seen it done :)).
I participated in the personal computer boom of the 1980s and remember mainframe guys, who were the establishment, saying the same thing about personal computers.
AI is starting in a much easier business environment, where so much data is online and things can be done via APIs that would’ve required moving paper around in the 80s. But it seems like we’re still missing the clear wins PCs had almost immediately in the 80s: tons of small businesses were eyeing spreadsheets or accounting apps, engineers not working in the PC space and scientists saw immediate productivity wins, authors and journalists were all over word processors, etc.
There are some wins from AI now, but it’s kind of telling that companies like Microsoft have to trick customers into paying for it through changes to their plans, because there just aren’t enough people seeing real value to the point where they’re jumping to pay more for it.
What are MS, Google, and today's other massive LLM-boosters if not "the establishment?"
Citibank and Merck executives are the establishment; MS and Google are the providers. Microsoft became a trillion-dollar company by selling software for personal computers, which establishment types at the time were skeptical of.
"People were wrong once and now everyone who says something vaguely similar about a related topic has to be wrong, forever"
I didn't say that, so nice strawman. My point is that C-suite executives are often wrong about new technology. They are inherently conservative.
It seems to me that the numbers don't lie. Either the tool is producing value (additional revenue, fewer costs) in excess of its cost, or it isn't. I don't think you need to be adept at technology to make that evaluation.
This is a reductionist perspective that is unhelpful. Does buying a water cooler for the office increase profit margins? What about a coffee machine? Across a wide portfolio of decisions, a business does need to be profitable. However, measuring the individual impact of a single vendor is often a very difficult task.
How do you measure developer productivity? Code quality? Developer happiness? As far as I know, no one in the industry can put concrete numbers to these things. This makes it basically impossible to answer the question you pose.
The survey was about operational costs and revenue. Water cooler and coffee machine manufacturers don't market their products as being "smarter than people in many ways" and "able to significantly amplify the output of people using them"[1]. If these claims are true, then surely relying on this technology should bring both lower operational costs, since human labor is expensive, and an increase in revenue, since the superhuman intelligence and significantly amplified output of humans using these tools should produce higher quality products and benefits across the board.
There are of course many factors at play here, and a substantial percentage of CEOs report a positive ROI, but the fact that a majority don't shouldn't be dismissed on the basis of this being difficult to measure.
[1]: https://blog.samaltman.com/the-gentle-singularity
Numbers don't lie if your instrumentation is measuring the contributions of new technology accurately. The productivity gains of middle managers using personal computers, often at home bought out of personal funds, didn't show up at first either. Managers bought home computers to do spreadsheets to make their jobs easier. Those productivity gains were eventually measured.
Bush Jr. was president twice and it wasn't that bad, so having the same Republican president twice will not be a problem.
Just say you hate AI and hope it fails. It's cleaner.
I absolutely reject your premise here. I feel like Bush Jr. is very responsible for the state of the world we're in right now. You can track back a whole lot of policies and erosion of democratic norms in our system directly to his administrations.
Or the fact that we are woefully unprepared for a peer conflict. We wasted how many trillions in the Middle East? We cancelled how many modernization programs to fund counterinsurgency programs instead?
On the bright side, 38% saw some improvement. Also, what's the base rate? I would expect that a lot of attempts to adopt new software neither reduce costs nor increase revenue.
Given the rate of change happening with everything AI right now, I'm a little surprised the rate is that high.
I'm fairly confident it'll improve over time though.
What would the average software engineer pay for an AI coding subscription, compared to not having one at all? Running a survey on that question would give some interesting results.
I may be a bit of an anomaly since I don't really do personal projects outside of work, but if I'm spending my own money, then $0. If the company is buying it for me, whatever they're willing to pay, but past a couple hundred a month I'd rather they just pay me more instead, or hire extra people.
I pay $20/month for Cursor. It allowed me to revamp my home lab in a weekend.
> It allowed me to revamp my home lab in a weekend.
So, what did you learn from that project??
I pay $20 for ChatGPT, I ask it to criticize my code and ideas. Sometimes it's useful, sometimes it says bullshit.
For a few months I used Gemini Pro. There was a period when it was better than OpenAI's model, but they did something and now it's worse, even though it answers faster, so I cancelled my Google One subscription.
I tried Claude Code over a few weekends. It can definitely do tiny projects quickly, but I work in an industry where I need to understand every line of code and basically own my projects, so it's not useful at all. Also, doing anything remotely complex involves so many twists that I find the net benefit negative. And since a normal side effect of doing something is learning, here I feel like my skills devolve.
I also occasionally use Cerebras for quick queries, it's ultrafast.
I also do a lot of ML so use Vast.ai, Simplepod, Runpod and others - sometimes I rent GPUs for a weekend, sometimes for a couple of months, I'm very happy with the results.
I find the actual survey from PwC a bit ambiguous. They say 42% of respondents saw no impact from AI, but nothing about how much those respondents have invested in AI. What if they are only at the POC stage, or have made minuscule investments?
‘Only 12% reported both lower costs and higher revenue, while 56% saw neither benefit. 26% saw reduced costs, but nearly as many experienced cost increases.’
Stock up on dry beans and rice. See if your parents have a spare room. Don’t buy anything expensive. This bubble is gonna hurt.
Why the doom and gloom? It's going to hurt those who jumped on the bandwagon hoping to cash out. I couldn't care less about them, but I would like for computer hardware to be affordable again, and for the tech job market to go back to normal.
Though I'm more concerned about the effects of the current political climate, than the "AI" bubble popping. In the scenario of that going south, nothing will be normal for a long time.
These things are never self-contained. If the subprime crisis had only affected predatory bankers, nobody would have cared...
True, but a better comparison is the dot-com crash. The effects of that were mainly contained to the tech industry and the stock market. People who weren't invested in either barely noticed the crash.
This time around the ramifications might be larger, but it will still mostly be felt by those inside the bubble.
Personally, I would rather experience a slight discomfort from the crash followed by a normalization of hardware prices and the job market, than continue bearing the current insanity.
The C-levels who jumped on the bandwagon are definitely not going to fall on their swords should it go south. They’ll blame the tech, fire some subordinates, blame their customers for “not understanding it”, and their shareholders will eat it up as long as they get a pound of flesh.
> blame their customers for “not understanding it”
See Microsoft's recent "We don't understand how you all are not impressed by AI."
In the case of MS, you're right, Satya isn't going to fall on his own sword. They will just continue to bundle and raise prices, make it impossible to buy anything else (because you still need the other tools), and then pitch that to shareholders as success: "Look how many people buy Copilot" (even though it's forcibly bundled into every offering they sell).
They can't really stop swords falling on them though...
They’ll just move on to the next company that will hire them with a golden parachute.
There’s minimal risk to the decision makers. Meanwhile, every one of us peons is significantly more at risk of losing our jobs whether we could be effectively replaced with these AI tools or not because our own C-level execs decided to drink the snake oil that is the current bubble.
I think CEOs are terrified of AI and all the talk of “replacing workers” is more a wish list or a cope than a prediction.
Personally I think AI is super useful, but at my job the amount of progress we’ve made has basically ground to a halt compared to before AI.
The reason is that the people who they chose to lead the new, most innovative “AI initiatives” were the least innovative most corporate drone-y people I’ve ever met. The kind of people who in 2025 would unironically say stuff like “we need to work on our corporate synergies”.
They never wanted innovation, they just wanted people to toe the line for as long as possible until they could jump the sinking ship.
That's because they were scammed.
I heard it posited that the AI stuff is uncovering corporate dysfunction better than any tool in history, i.e. garbage in, garbage out, because their processes are so broken.
I must admit the idea has a lot of appeal, because there are people seeing good ROIs, so it does not seem to be the tool so much as the tool user.
It's partly dysfunction, for sure, but it's mostly just reality being messy. Anyone who has worked on a bigger engineering problem, solving a real business problem, learns very quickly that reality is messy and doesn't conform to rules. And that's usually the main challenge of building any project.
And it's not a problem specific to software engineering. Every engineering discipline deals with it: when you manufacture physical things, for example, tolerances, safety factors, etc. are all tools for dealing with reality being messy.
Yeah, I think the moral of the story is if your company is not good at technology AI does not magically make you good at technology.
The company needs to have the right culture and ability to integrate leading technology, whatever it is.
> garbage in garbage out, because their processes are so broken
That's, I'd argue, the majority of companies though, which still spells a problem for the AI bubble.
I've been a part of enough failed ERP implementation projects to know that there's actually very few enterprises out there that collectively have their shit together and are good at implementing technology.
If AI also can't solve that problem for them, it'll just join the long list of already existing boring enterprise tech that some successful companies use to great effect, and others ignore or fail to adopt, which isn't exactly a multi-trillion dollar industry to live up to the current hype.
> uncovering corporate dysfunction better than any tool in history, i.e. garbage in garbage out
In the times when Agile was still just a way of working (not a mantra), adopting it exposed exactly the glaring troubles in the overall (human) pipeline. But it needed quite some time, like months of work, to really show.
This seems like the same thing, only much faster.
Actually, drawing out the parallels might be interesting: Agile was/is also touted as "the silver bullet".
this is a great analogy
My first job out of university was with a startup that had been recently acquired by IBM. I have never seen true Agile since; similarly, maybe 1 in 100 actually do it really well and properly. I should see if I still have the slide deck I made to explain it to the rest of IBM. It would make for a great blog post, and tying in this thread? Chef's kiss.
That's an interesting take, but doesn't it work backwards from the desired result ("AI" performs near miracles) to "processes are so broken"? That seems counter to all the enthusiastic application of LLMs to everything. People want "AI" to solve problems, and seem to be doing everything they can to put it everywhere and try everything.
Considering that "AI" providers are now adopting advertising, I wonder how many of them are actually seeing lower costs and higher revenue from dogfooding.
The hype train must go on, and I'm sure all employees are under strict NDAs, so we may never know.
Adopting advertising is almost an inevitability in any tech at this point. I wouldn’t necessarily attribute it to anything they’re seeing in usage; IMO it’s just the standard “we’re leaving money on the table by not doing this” that we’ve seen time and time again.
This is a really backwards reading of the data. It's like reading signups for internet service in the early years and going "more than half of households have not signed up for internet, looks like this internet thing is a flop." Of course you're not going to see overnight transformation. It's a snowball effect that eventually builds bigger and bigger.
Umm no, your example is a backwards reading of the data.
From the PwC survey:
> More than half (56%) say their company has seen neither higher revenues nor lower costs from AI, while only one in eight (12%) report both of these positive impacts.
So The Register article title is correct.
> It's a snowball effect that eventually builds bigger and bigger.
That's just wishful thinking based on zero evidence.
Ahem, so some orgs are adopting a new technology effectively and others aren't. Whether that's because of the nature of their workflows, their organizational effectiveness, or available talent, this sounds like... the way these things work