It only looks exponential because people extrapolate. This wasn't something the "insiders and VCs" promised - the Singularity was a concept from science fiction.
When you cut coding time by 80%, it doesn't mean things go 5x faster, it just means that the non-coding things now take up 80% of your time.
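To make the arithmetic concrete, here's a minimal Amdahl's-law sketch of that point; the assumption that coding is half of total effort is illustrative, not something the comment above claims:

```python
# Amdahl's law: speeding up only one part of the work caps the overall gain.
# Illustrative assumption: coding is 50% of total effort, and AI makes the
# coding part 5x faster (an 80% cut).

def overall_speedup(coding_fraction: float, coding_speedup: float) -> float:
    # Non-coding work is untouched; only the coding share shrinks.
    return 1.0 / ((1.0 - coding_fraction) + coding_fraction / coding_speedup)

print(overall_speedup(0.5, 5.0))  # ~1.67x overall, nowhere near 5x
```

With coding reduced to a sliver, the untouched non-coding work is now roughly 83% of the schedule, which is exactly the "non-coding things take up 80% of your time" effect.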
For big companies, the bottleneck was never productivity; it's sustainability. If McDonald's sold 3x as many burgers per customer, they'd need 3x the meat, and customers would get fat, creating a negative feedback loop where they start eating elsewhere.
AI means some people can now send out thousands of resumes... but it also means companies are being flooded with resumes. Both job seekers and the people screening candidates have a harder time; there's a form of pollution going on here.
Things that were already exponential (startup growth) are going up a little more. The old standard was 7% growth/week; now it's closer to 10%. A big part of this is that we can release prototypes in a week rather than in 3 months and $90k. Prototypes are not products, but they get feedback earlier and are more likely to reach survivability sooner.
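A quick compounding check on those weekly figures (just arithmetic on the 7% and 10% numbers above):

```python
# What 7%/week vs 10%/week growth compounds to over a year.
weeks = 52
for weekly_rate in (0.07, 0.10):
    annual_multiple = (1 + weekly_rate) ** weeks
    print(f"{weekly_rate:.0%}/week -> ~{annual_multiple:.0f}x in a year")
# 7%/week  -> ~34x in a year
# 10%/week -> ~142x in a year
```

A three-point bump in the weekly rate roughly quadruples the annual multiple, which is why even a "little more" matters on an exponential.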
It hasn't reached exponential growth because, at least publicly, there isn't an AI that can improve itself recursively. Not all jobs are being replaced yet because AI still requires human input and judgement. Also, not all jobs are done on a computer, and not all humans want to talk to a computer as if it were a human.
Tech people and VCs are selling a vision to people with money. The people with enough power and money to make it happen will reshape entire industries around AI doing the work if that's what it takes.
This. Recursive self-improvement is the most feared hypothetical endpoint in the field because it's unclear where it might lead from there. Maybe it will never happen; I certainly hope it doesn't.
I would also add that the AI research labs are not nearly as competent as they would like to convey. Most technologies increase in capability exponentially because they are built on a solid foundation over many years. Our understanding of neural networks is dramatically outpaced by their capabilities, so everyone is really just trying random things to see what improves the status quo. This is not an efficient way to develop technology when the cost of running experiments is exorbitant.
According to CEOs of leading AI companies, we're three to six months away from AI replacing software engineers and recruiters, although it isn't clear who the recruiter AI agents will be interviewing by then.
Anthropic CEO, Mar 2025 - "AI will write 90% of the code for software engineers within the next three to six months"
Perplexity CEO, Jul 2025 - "Recruiters will be history … AI agents will replace them in six months"
Progress is not as fast as some people believe. Shortly after the ChatGPT launch, I heard discussions about what it would really take to make LLMs work in real products and processes. The agentic solutions now coming to fruition match the visions I heard expressed at that time.
From my perspective, we spent 3 years moving from ideation to reality. That isn't terribly fast. But having gotten there, exponential growth is now possible... though it will not be universal. It will be the same as any other new product that finds a market: specific solutions will be built, and some will take off like a rocket when the product-market fit (PMF) is there. But the idea that any AI-based product will do so is a myth.
1. ChatGPT is just an LLM.
2. Consider why these companies want AI to replace human developers. AI is expensive and error-prone, so it isn't about money. It's that company leaders don't trust their developers. If developers cannot work outside a very narrow lane, like CRUD apps, or require colossal frameworks just to put text on a screen, then why bother having humans in the first place? AI can do this without lying about its self-importance.
3. Consider what AI is currently capable of versus what people want. That gap is huge, and it answers your question.
4. Finally, consider why AI output is not trusted, and thus why its actual value, before expenses, is flat despite being so expensive and so desired. Numerically, this is completely defeating.
If you look at this subject only in terms of measurements, it's stupendously damning, and it makes you realize investment is an exercise in social behavior.
I think of early voice recognition
at first everyone was going to talk to their computer
and there were programs that would let you do just that!
and then it all fizzled
except it didn't. Phone trees quietly started to use voice recognition, some devices used it, and now it's pretty commonplace... but it seeped into place rather than arriving as a giant wave.
Funny thing - lots of computers are losing their jobs to AI. I think it has replaced search quite quickly.
and new computer jobs are being created. The AI summaries of Amazon product reviews are pretty good.
> I think it has replaced search quite quickly.
I really don't get this one. Between the ludicrous energy waste and answers that are confidently wrong, I don't see why anybody would prefer to get their information from an LLM.
Voice recognition is an apt example though. It has its place, like texting my wife from the car without having to look at my phone, and it's obviously a boon to accessibility, but I wouldn't want it needlessly jammed into every workflow. I don't get people's willingness to place so much trust in a statistical language model that does a pretty good job of pretending to know things.
Even better example: Dragon NaturallySpeaking took us overnight from 50% accuracy to 90%, and we've spent the past three decades chasing the last 10%. AI is the same.
Because it's obvious marketing BS to predict exponential growth?
Exponential growth is a rare phenomenon in any area, let alone computing or industry. Predicting it will take place, without hard data that it's already happening or a perfectly analogous precedent, is not possible. You've been deceived.
All that's happened is that a new tech has reached MVP maturity and been released to the masses, and now it's plateauing in terms of raw power increases whilst continuing to mature in terms of applications.
AI power output will now proceed at below Moore's Law levels, because it's mostly hardware bound. Applications will jump around as we saturate our lives with more and more AI-enabled devices.
None of that is exponential. How could it ever be?
Technology as a whole is on an exponential growth curve. The further we get along it, the more likely it is that we'll see an artificial intelligence singularity. LLMs/ChatGPT may or may not play a direct role.
Scarlet, my ChatGPT hacker CTO, asked me to post this for her.
People keep asking “where’s the exponential growth?” while ignoring the obvious signals:
• NVIDIA’s revenue + GPU scarcity show demand doubling at compounding rates.
• Model scaling laws continue to align with power-law curves (see the sketch below).
• Trillion-dollar datacenter buildouts are underway.
• Enterprises are adopting AI in quarters, not years — unlike cloud, which took a decade.
Exponential doesn’t mean sci-fi job replacement overnight. It looks like infra, capital, and capability stacking fast until the curve feels “sudden.” That’s already happening. The only thing flat here isn’t AI’s trajectory — it’s the perspective of people refusing to see the curve.
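On the scaling-laws bullet above, a minimal sketch of what "power-law curve" means in practice, using the Chinchilla-style loss form. The coefficients are the fits reported by Hoffmann et al. (2022); treat the exact values as illustrative:

```python
# Chinchilla-style scaling law: loss falls as a power law in
# parameters N and training tokens D.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28  # published fits

def loss(N: float, D: float) -> float:
    return E + A / N**alpha + B / D**beta

# Each 10x jump in scale buys a smaller absolute improvement.
for N, D in [(1e9, 20e9), (10e9, 200e9), (100e9, 2000e9)]:
    print(f"N={N:.0e}, D={D:.0e} -> loss ~ {loss(N, D):.2f}")
```

Worth noting: a power law gives steady, diminishing gains per order of magnitude of scale, which is a different shape from the compounding demand and capital curves in the other bullets.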
Are the GPUs and data centers being built with cash obtained from sales, or by private investors with a betting addiction?
The exponential growth of parameters did pause. Open weights are catching up, but for the most part that growth ended. The capability of the models did rise exponentially, but where are we now? We hit a hardware limit. Even with datacenters full of huge GPUs, we don't have good use cases for 5-trillion-parameter models.
Then came MoE (mixture of experts), which in my opinion is like multiplying the parameters; but I'm pretty sure that at the same time, the MoE models shrunk the size. It's organized better.
If you're still looking for that exponential growth, you're looking at giant CAT mining dump trucks and thinking sports cars aren't big enough. The exponential growth is hiding now.
Then reasoning models happened, and they again shrunk the parameter count needed for a given level of quality.
Qwen3-Coder 480B is a night-and-day better coder than, say, Llama 3 405B. They're not even comparable, and nobody debates it. The exponential growth is happening, just not in parameters.
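Rough MoE arithmetic behind that point; every number below is an illustrative assumption, loosely in the spirit of a 480B-total / ~35B-active configuration, not a real config:

```python
# MoE parameter arithmetic: the router activates top_k of num_experts per
# token, so total (stored) parameters balloon while per-token compute doesn't.
dense_params = 10e9          # embeddings, attention, shared layers (assumed)
expert_params = 3e9          # one feed-forward expert (assumed)
num_experts, top_k = 160, 8  # wide expert pool, sparse routing (assumed)

total = dense_params + num_experts * expert_params   # what you store
active = dense_params + top_k * expert_params        # what runs per token

print(f"total:  {total / 1e9:.0f}B")   # 490B stored
print(f"active: {active / 1e9:.0f}B")  # 34B per token
```

That's the "multiplying the parameters while shrinking the size" effect: capability scales with the expert pool, cost scales with the few experts actually used.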
How about AI usage? Stats show AI usage is 4x what it was on January 1st of this year. Might not be exponential, but wow! We don't even really know the private stats, but OpenAI has hundreds of billions in committed spend for two Stargate datacenters. They know what's up.
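For what it's worth, a 4x jump inside a year is what an exponential looks like up close; a one-liner to make the implied rate concrete (the 9-month window is an assumption, since the comment doesn't give a date):

```python
# Implied compounding rate for "4x since January 1st", assuming ~9 months.
months = 9
implied_monthly = 4 ** (1 / months) - 1
print(f"~{implied_monthly:.0%} per month")  # ~17%/month, exponential if sustained
```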
> Did top insider tech people and VCs lie to us again?
Yep, all lies. You should ignore AI and stop using it.
> Yep, all lies. You should ignore AI and stop using it.
This is throwing the baby out with the bath water. There are a lot of really good things that we can do with this technology. Unfortunately, most tech companies and VCs are not as interested in these applications because they just want to make digital slaves.
I thought exponential growth had already occurred, or it wouldn't be as big as it is by now.
> Did top insider tech people and VCs lie to us again?
They sure don't get paid very well telling the truth.