https://archive.ph/2025.08.20-113134/https://www.latimes.com...
What a bad article. I mean how biased can you be, to put the first big quote from someone who wrote a book called "The AI con". Come on! This feels like the "deepseek r1 is the death of nvda" of 6 months ago. Someone is making a play, and whoever wrote this article fell for it.
gpt5 has always been about making a "collection of models" work together and not about model++. This was announced what, a year ago? And they delivered. Capabilities ~90-110% of their top tier old models at 4-6x lower price. That's insane!
gpt5-mini is insane for its price in agentic coding. I've had great sessions with it at $0.x per session. It can do things that claude 3.5/3.7 couldn't do ~6 months ago, at 10-15x the price. Whatever RL they did is working wonders.
> gpt5 has always been about making a "collection of models" work together and not about model++.
No, it wasn’t. Have you read and listened to Altman’s hype around GPT-5 from a year ago? They changed the narrative after the flop of 4.1 (the model they thought would be GPT-5), and it seems some people fell for it.
> Capabilities ~90-110% of their top tier old models at 4-6x lower price
Maybe they finally implemented the DeepSeek paper.
This is Altman before the release:
OpenAI's CEO says he's scared of GPT-5
https://www.techradar.com/ai-platforms-assistants/chatgpt/op...
Sam Altman Compares OpenAI To The Manhattan Project—And He's Not Joking About the Risks
https://finance.yahoo.com/news/sam-altman-compares-openai-ma...
This is Altman after the release:
Sam Altman says ‘yes,’ AI is in a bubble
https://www.theverge.com/ai-artificial-intelligence/759965/s...
> No, it wasn’t.
I replied below in this thread with the specific post, 6 months ago.
> After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks.
> In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model.
"the delta between 5 and 4 will be the same as between 4 and 3"[1]
Obviously it's not.
1. https://lexfridman.com/sam-altman-2-transcript/
GPT-4 was a long time ago, and honestly mostly useless. But a lot of that progress was already present in the intervening models, and it's easy to forget it happened when comparing GPT-5 to the state of the art a month ago rather than two years ago.
This is hard to quantify exactly, since very few benchmarks have the kind of scales where comparing two deltas would be meaningful. But if we pick the Artificial Analysis composite score[0] as the baseline, GPT-3.5 Turbo was at 11, GPT-4 at 25, and GPT-5 at 69. It's just that most of the post-GPT-4 improvement came with o1 and o3 (quick arithmetic below).
Feels like a pretty fair statement.
[0] https://artificialanalysis.ai/#frontier-language-model-intel...
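To put rough numbers on the delta comparison, here's the quick arithmetic as a sketch (the only inputs are the composite scores quoted above):

    # Compare the GPT-3.5->GPT-4 jump with the GPT-4->GPT-5 jump,
    # using the Artificial Analysis composite scores quoted above.
    scores = {"gpt-3.5-turbo": 11, "gpt-4": 25, "gpt-5": 69}

    delta_old = scores["gpt-4"] - scores["gpt-3.5-turbo"]  # 14 points
    delta_new = scores["gpt-5"] - scores["gpt-4"]          # 44 points
    print(delta_old, delta_new, round(delta_new / delta_old, 1))  # 14 44 3.1

On this composite, the 4-to-5 delta is roughly three times the 3.5-to-4 delta; it just arrived incrementally through o1 and o3 rather than in a single launch.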
> What a bad article. I mean how biased can you be, to put the first big quote from someone who wrote a book called "The AI con".
It's an op-ed. It's supposed to be biased.
This is my biggest issue with online newspapers. With print, it is very clear when you are in the op-ed section. Online, not so much.
I have a feeling if someone wrote an article about how great GPT-5 is and the first big quote was from Sam Altman, you'd say it's a cool article.
It's only "bias" when you don't like it.
> This was announced what, a year ago?
Source? Others are calling out this as being incorrect, so a source would help people evaluate this claim. Personally I'm much more likely to believe that AI companies are moving the goalposts rather than making significant leaps forward.
I posted the tweet below; please check it there. It was 6 months ago.
This is an Opinion piece, not a news article. The distinction between the two seems to be lost on most people nowadays.
One way I leverage opinion pieces for things with which I disagree, is to treat it as a sort of "devil's advocate". What argument are they making? Is that really the strongest one they have? Does my understanding of that domain effectively counter those arguments? etc.
In this case, the main argument is that ChatGPT is not the miraculous genie it was hyped up to be. That's a fair statement, but extrapolating that into "the AI bubble is crashing now" overlooks a host of other facts about its usefulness. Yes, we'll eventually hit the trough of disillusionment, but I don't think we're there yet.
You are right, but newspapers make it difficult to distinguish them. Opinion pieces are liberally distributed amid news items. The LA Times has a Columns section on the right-hand side, but this particular piece is not listed there; it is listed next to other news articles.
Media orgs (well, journalists really) are especially hostile towards AI, and it's easy to understand why.
> gpt5 has always been about making a "collection of models" work together and not about model++.
That is revisionist history. Look at Altman's hype statements in the weeks and months leading up to gpt5, some of which were quoted in the article. He never pitched gpt5 as what you're describing; indeed, he claimed a massive leap in model performance.
> https://x.com/sama/status/1889755723078443244?lang=en
> After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks.
> In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model.
6 months ago.
There's also an earlier one that says gpt5 will be about routing things to the appropriate model, not necessarily a new model per se. Could have been in a podcast. Anyway, receipts have been posted :)
I sure am looking forward to what will happen to my power bill once Facebook decides to default on its share of the bill for the massive power plants Entergy is building solely to power the huge-ass data center FB's building in northern Louisiana. https://www.knoe.com/2025/08/19/entergy-power-plant-meta-dat...
I was just in the Louisiana public service commission meeting where Entergy told the regulators that Meta Platforms was worth $2 trillion, and they got some opinion from a New York law firm that their word was good, so YOLO. Passed 4-1. The exact words were “Meta Platforms is worth more than the five biggest banks combined” (implying there was no point in asking a bank for a loan guarantee or backstop).
They really went with the "too big to fail" argument, hah. Not that I'd be afraid of that, just of their short attention span. How is the metaverse these days anyway?
I just don't get why Altman had to hype this release so much. What was the plan?
Also, what was the deal with all those mysterious Star Wars pictures?
Initially politicians were responsive. In Feb. 2024 he sought trillions in investment to ramp up "AI" [1]. Then he got the White House announcement with Trump and Softbank for the $500 billion Stargate deal. The project has flopped; only one data center will be built.
So I assume he thought hype would work again, but people are beginning to scrutinize the real capabilities of "AI".
[1] https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-do...
I think the announcement was mostly a ploy to get OpenAI access to the White House and not much else (especially because Musk was already in there).
But they are clearly on their way to building 20 data centers[1]. OpenAI raising $500B over 10-15 years to build inference capacity isn't really that hard to believe, or that impressive, at this point tbh. Like, that could just be venture debt that is constantly serviced.
[1]: https://builtin.com/articles/stargate-project
After the crash, tech "industry leaders" will struggle to explain why/how they were conned into believing that intelligence was a simple database function with some probability and statistics sprinkled on top.
I remain convinced the whole hype was a way to overfire after the big overhire.
They're still aggressively outsourcing to India, the Philippines, and LATAM under cover of AI to tighten the screws on labor costs. Domestic hiring is dragged out so they can bring in new employees at today's lower market wages, which will be sticky for some time.
The internet didn't go away after the dotcom crash in the early 2000s, and neither will AI, *IF* such a crash happens.
There are a lot of businesses discovering its benefits now; companies will continue building things on top of it.
No they won't. They won't struggle to explain anything, because in US business culture nobody ever actually takes blame except underlings. Nobody ever even asks them to explain themselves. Even in public companies, the shareholder questions that actually get asked on quarterly calls are vague humblebrags: "What are we going to do about the problem of winning so hard?" while they had record layoffs the previous quarter.
Those talking heads haven't had to mea culpa for: hype about Hadoop, hype about blockchain, hype about no-code, hype about the previous AI bubble, hype about "agile", hype about whatever JS framework is popular this week, etc.
We don't have to look far for examples. Meta dropped VR in favor of AI without much explanation. All they need is just start talking about the next shiny thing.
But who's to say humans are any different than simple databases with probability and statistics firing neurons??
/s
Some of them are very much like this. They think *intelligence* is a measure of your ability to regurgitate data that you have been fed. A genius is someone who wins on Jeopardy.
In engineering school, it was easy to spot the professors who taught this way. I avoided them like the plague.
> But who's to say humans are any different than simple databases with probability and statistics firing neurons??
Nobody has said they're simple databases, they would obviously be complex databases.
Dot-com bubble is a good analogy.
Nvidia will be the Cisco of this era. Cisco was the world's most valuable company when the dot-com bubble peaked, then went down almost 90% in 2 years. There was lots of "dark fiber" all around (fiber-optic cable already installed but not used).
I think OpenAI and most small AI companies go down. Microsoft, Google, and Meta scale down, write down losses, but keep going and don't stop research.
I hope the AI bubble leaves behind massive amounts of cloud compute that companies are forced to sell at the price of electricity and upkeep for years. New startups with new ideas can build on it.
Investors will feel poor, the crypto market will crash, and so on.
I have also concluded that it is more likely a bubble than not.
Has anyone looked at buying S&P sector-specific ETFs? For people who want to keep their portfolio spread as widely as possible but are frightened by how tech-heavy the S&P index is, these seem a good option. But they all seem to have high expense ratios (the first one I pulled up charges 0.39%).
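For a sense of what that fee difference costs, a quick illustrative calculation (the 7% annual return and the 0.03% comparison fund are assumptions for the example, not recommendations):

    # Illustrative fee drag: 0.39% sector ETF vs. a hypothetical
    # 0.03% broad-market fund, $10k over 20 years at an assumed 7%.
    def final_value(principal, annual_return, expense_ratio, years):
        return principal * (1 + annual_return - expense_ratio) ** years

    sector = final_value(10_000, 0.07, 0.0039, 20)
    broad = final_value(10_000, 0.07, 0.0003, 20)
    print(f"${sector:,.0f} vs ${broad:,.0f}")  # ~ $36,000 vs ~ $38,500

The extra 0.36 points of fees compound to a balance roughly 6-7% smaller after 20 years.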
The only thing that crystallizes it is that the guys in that one meeting were right... there is no moat. The author might be right, but the problem will be oversupply.
A comment in a previous thread that stuck with me said something like "AI is successful because nothing else interesting is happening".
That rings true and I suspect the bubble won't burst until something else comes along to steal the show.
Meanwhile, Claude is helping me build a robot and is also writing the code that runs its subsystems, but okay. Sure, I could do it all myself (maybe?), but not nearly as quickly or in the interstitial moments of life.
Sounds more like you're helping Claude build a robot.
I have more hands and a larger context window. It's a collab (/s). But it's cool that I can do it more or less solely with that tool and not necessarily use Google or other resources (obviously, for any source, from Google to the Encyclopedia Britannica, one must evaluate the quality of the information).
What are you getting out of this though? Do you think this robot is going to have some kind of positive economic impact? Will you turn it into a business? Do you anticipate it will solve a personal need, like folding your laundry for you? Because a lot of people do robot projects in their free time to learn, but you're doing it without the learning part...so what's the point?
My brother built a robot when he was 14
Hum... Does anybody expect the US government to reduce the money supply or distribute it? Or for the dollar to devalue enough that their money doesn't make much of a difference anymore?
If the answer is "no" for all above, then you should expect some bubble to keep going. At most, they will change the bubble subject.
This article tries to argue that the AI bubble has burst by pointing to the failed release of GPT-5. Admittedly, the release of GPT-5 was somewhat of a flop, but I think it was more a failure of the launch than of the model itself. In fact, if you use the GPT-5 Thinking model, it's actually quite good. They attempted to make the model automatically route to different levels of thinking intensity, but the routing didn't work very well, which led to the various bad cases people experienced.
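The routing idea itself is easy to sketch. Below is a toy illustration only: the model names and the heuristic are invented for the example and say nothing about how OpenAI's actual router works.

    # Toy model router: a cheap heuristic decides whether a prompt
    # goes to a fast default model or a slower "thinking" model.
    # Names, hints, and thresholds are all made up for illustration.
    REASONING_HINTS = ("prove", "step by step", "debug", "plan")

    def route(prompt: str) -> str:
        text = prompt.lower()
        looks_hard = len(text) > 2000 or any(h in text for h in REASONING_HINTS)
        return "thinking-model" if looks_hard else "fast-model"

    print(route("What's the capital of France?"))        # fast-model
    print(route("Debug this deadlock step by step..."))  # thinking-model

The failure mode is exactly what the launch showed: when the heuristic misfires, a hard question lands on the shallow model and the user gets a bad answer with no idea why.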
Yeah Altman says "I think we totally screwed up some things on the rollout" https://fortune.com/2025/08/18/sam-altman-openai-chatgpt5-la...
also
>ChatGPT is already the fifth biggest website in the world, according to Altman, and he plans for it to leapfrog Instagram and Facebook to become the third, though he acknowledged: “For ChatGPT to be bigger than Google, that’s really hard.”
>“We have better models, and we just can’t offer them, because we don’t have the capacity,” he said. GPUs remain in short supply, limiting the company’s ability to scale.
So Altman wants to keep building. Whether investors will pay up for that remains to be seen I guess.
Using how GPT-5 generates text within an image is a terrible way to test it.
If you ask it to list all 50 states or all US presidents, it does it no problem. Asking it to generate the text of the answer in an image is a piss-poor way of testing a language model.
I heavily dislike GPT-5, but at least give it a fair review.
The core thesis seems valid " AI bots seem intelligent, because they’ve achieved the ability to seem coherent in their use of language. But that’s different from cognition."
As it happens, LLMs work comparatively well with code. Is this because code does not refer (much) to the outside world and fits well with the workings of a statistical machine? In that case, the LLM's output can also be verified more easily: by expert inspection, compiling, typechecking, linting, and running. Although there might be hidden bugs that only show up later.
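That verification loop can be made concrete. A rough sketch of the machine-checkable part, assuming the model's output has already been written to a file (the generation step is elided, and tool choices like ruff and pytest are just examples that must be installed):

    # Mechanical gates for LLM-generated Python: compile, lint, test.
    # Passing all three raises the floor; as noted above, it does not
    # rule out bugs that only show up later.
    import subprocess
    import sys

    def verify(path: str) -> bool:
        checks = [
            [sys.executable, "-m", "py_compile", path],     # does it parse?
            [sys.executable, "-m", "ruff", "check", path],  # does it lint?
            [sys.executable, "-m", "pytest", "-q"],         # do tests pass?
        ]
        return all(subprocess.run(cmd).returncode == 0 for cmd in checks)

    print("accept" if verify("generated_module.py") else "send back to the model")

Typechecking with mypy would slot in the same way, as one more command in the list.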
>perceptions of AI’s relentless march toward becoming more intelligent ... came to a screeching halt Aug. 7
Overstates things a bit. It seems unlikely OpenAI will release human-level AI in the next year or two, but the march of AI improving goes on.
Also, re the AI Con book saying AI is a marketing term, I'm more inclined to go with Wikipedia and "a field of research in computer science".
Though there is a bit of a dot com bubble feel to valuations.
The author isn't exactly a thought leader in the space, or really any space for that matter. Opinion worth nothing.
I've never met a "thought leader" whose opinion was worth anything.
"Thought leader" isn't an actual title (or at least it shouldn't be). In my mind, its simply someone who you recognize as having the expertise worth paying attention to.
It’s a title that is given to people to get them to present at junkets, a modern socially and legally acceptable way to bribe people. No one should take them seriously.
Ok, I didn't know that. I thought it was just a shorthand for people in leadership positions with lots of expertise.
I'm a "thought follower". Wherever my leaders tell me to go, I follow.
Never quite realized how much I disliked the term "Thought Leader" until I read it 5x in the comment responses in this thread.
I should have said domain expert instead. These two casually chosen words really riled them up.
I just said the word "domain expert" in my head 5x, and I don't like it any better.
Both of them give off "influencer" vibes. They're meaningless without more context. We used to just call people "experts", but now that's an arbitrarily bad word.
Have bubbles ever been successfully called by thought leaders?
Yes. Say there are 10,000 thought leaders with different views. There's a chance that at least one is right.
Yep. Then they're the lottery winner who gets to go on TV and write a book about it as if it were expertise that led to their prediction.
If by "thought leader" you mean domain experts making criticism then yes.
For example, Nouriel Roubini calling out the risks of the 2008 recession before it happened, Michael Pettis calling out the risks of a real-estate balance-sheet crisis in China before Evergrande happened, and Arvind Subramanian calling out the risks of a shadow-bank crisis in India before the IL&FS collapse in 2018.
For AI/ML, I'd tend to trust Emily Bender, given her background in NLP, the field that LLMs originated from.
Hrm. I'd read "thought leader" to mean "hype man"; that's how the term is normally used. I certainly wouldn't read it as "domain expert"; the people generally referred to as 'thought leaders' frequently are not.
It's just another anecdote, but the "vibes" feel like they're shifting.
The employment numbers, the inflation numbers, government austerity, the gpt-5 disappointment... the valuations are all more like meme stocks and not based on reality.
If enough articles about the crash start appearing, and enough people believe the crash is coming, then congratulations: the crash will occur.
Is Sam Altman a thought leader?
IMO “thought leaders” only set the agenda of groupthink. So… yes?
But people being just a "thought leader in the space" is exactly the reason there's a bubble.
Bubbles are a lot easier to visualise from the outside.
nothing ever happens
tap tap tap
Well would you look at the time.
"Indeed, as long ago as the 1960s, that phenomenon was noticed by Joseph Weizenbaum, the designer of the pioneering chatbot ELIZA, which replicated the responses of a psychotherapist so convincingly that even test subjects who knew they were conversing with a machine thought it displayed emotions and empathy.
"What I had not realized," Weizenbaum wrote in 1976, "is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." Weizenbaum warned that the "reckless anthropomorphization of the computer" - that is, treating it as some sort of thinking companion - produced a "simpleminded view of intelligence.""
https://www.theguardian.com/technology/2023/jul/25/joseph-we...
Weizenbaum's 1976 book: https://news.ycombinator.com/item?id=36875958
HN commenter rates this "greatest tech book of all-time":
https://news.ycombinator.com/item?id=36592209
Internet traffic kept growing throughout the dotcom bubble. That valuations got ahead of themselves didn't mean that there wasn't something real driving the hype.
Even if AI valuations have a sharp correction, there will still be a great need—and demand—for compute.
Meh.
Crashes come when there was no real business value.
I use AI all day and I’m sure I’m not the only one.
> Crashes come when there was no real business value.
You fall into all-or-nothing logic. That's a thinking failure.
If the real business value is 10% of the price, there will be a massive crash and years of slow advance.
Dot-com bust was like that. Internet clearly had value, but not as much and not as quickly as people thought.
95 per cent of organisations are getting zero return from AI according to MIT - https://news.ycombinator.com/item?id=44956648 - August 2025
State of AI in Business 2025 [pdf] - https://news.ycombinator.com/item?id=44941374 - August 2025
https://web.archive.org/web/20250818145714/https://nanda.med...
> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return. The outcomes are so starkly divided across both buyers (enterprises, mid-market, SMBs) and builders (startups, vendors, consultancies) that we call it the GenAI Divide. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact. This divide does not seem to be driven by model quality or regulation, but seems to be determined by approach.
This is not new - the quote was "87% of data science projects fail" in 2019.
https://venturebeat.com/ai/why-do-87-of-data-science-project...
GenAI is barely out of the research phase and is only now being fine-tuned for specific applications. Check back in 3 years.
Hasn't e.g. Deepseek been releasing special coder LLMs for like two years now?
Two things can be true simultaneously:
a) AI is an extremely useful productivity tool to accomplish tasks that other programming paradigms can't do.
b) Investment in AI is disproportionate to the impact of (a), leading to a low probability of sufficient ROI.
I use my house all day and I'm sure I'm not the only one.
That didn't stop the housing bubble in the 2000s.
Likewise, if I argue that Dutch "Tulip mania" [0] was a bubble, "but tulips are pretty" is not an effective counter-argument. Tulips being pretty was a necessary precondition for the bubble to form.
The existence of a foo bubble does not mean that foo has zero value; it means that the real-world usefulness of foo has become untethered from market perceptions of its monetary value.
0: https://en.wikipedia.org/wiki/Tulip_mania
There’s a $5 bill taped to every prompt you do. It’s unlikely you’d be using it all day if you were paying by the drink.
There is definitely value but not sure if it is as much as the AI bosses are promising. I don’t know if it will crash or not but they are definitely overselling it. GPT-4o to 5 is so incremental compared to 3.5 and 4.
I also consume all the heavily subsidized LLM tokens I can find a use for. Great deal for us. For the people who funded it? Not so fun.
LLMs also cost what they cost because NVIDIA won't sell you a 5090 upgraded to 80GB of VRAM for $3000 instead of $2000 (which is overkill in the first place). You have to buy an H200 for $40,000.
If there is demand, someone will sell that eventually - while NVIDIA has a headstart, they "just" fab stuff on TSMC anyway. AMD and to a degree Intel are already starting to sell cards with more VRAM.
Apparently you can get these on the black market in China, at least according to “Gamers Nexus”
https://m.youtube.com/watch?v=1H3xQaf7BFI
> Crashes come when there was no real business value.
Indeed. That's why we don't have trains or the internet anymore; once they had their big crashes we knew there was no business value, so they went away.
... I mean, what? You generally can't get a big bubble without _some_ business value, so bursting bubbles almost always have _something_ behind them (the crypto one may be the exception).
Crashes also happen when there is a huge mismatch between price and value.
My take is the reckoning will come for the billion businesses that offer B2B AI solutions that don't offer any meaningful value. "Analyze customer intent and improve conversion with XYZ AI!" The tools of the AI revolution will continue to exist, though development (read: money) will presumably slow as businesses recalibrate and stop paying for the silver-bullet solutions they discovered don't work. Then the snake-oil business people who built businesses around LLMs will move on to whatever BlockChain 2: Electric Boogaloo looks like.
I made this comment earlier, and it's just easier to copy it:
>There's 2 AI conversations on HN occurring simultaneously.
> Convo A: Is AI actually reasoning? does it have a world model? etc..
> Convo B: Is it good enough right now? (for X, Y, or Z workflow)
The internet reshaped the entire global economy, yet the dot com crash occurred all the same.
Convo A leads to questioning whether the insane money being poured into AI makes sense. The fact that many people are finding utility doesn't preclude things from being overvalued and overhyped.
I used the internet every day in 2000. Bubbles happen with useful technologies not because we decide they aren't useful, but because we were over-sold on what they could do before they could do it.
A lot of AI investment right now is hinged on promises of "AGI" that are failing to materialize, and models themselves are seeing diminishing returns as we throw more hardware at them.
The issue isn't that AI has no value but that the amount of money invested in it is out of proportion to the value it's going to generate within a reasonable time frame. Useful new technologies are invented all the time. But not many of them will yield a return on a trillion dollar investment.
There was a lot of business value during the dotcom boom and we still had a crash. The question is how many AI companies have strong fundamentals and will survive, vs the ones that have weak fundamentals and will die if/when the investment money dries up.
The sad thing about bubbles based on overhyped but nevertheless useful tech is the collateral damage of the pop. Small promising companies that are simply too young to have good fundamentals will go under from the backlash created among investors and potential customers. It’s destruction that could’ve been avoided if we had a more measured and sober society that doesn’t need a new craze every 5 years.
pets.com
Crashes, rather, must come when there is an enormous, industry-wide mismatch between perceived value (e.g. assessed in terms of expected return on investment) and actual value in terms of real return on investment within the expected period.
Evidence is emerging that the former could be twenty times the latter, or more.
The value you perceive has been much, much more expensive to deliver than investors would like, I suspect.
Some of us have lived through multiple bubbles and know that often, the underlying bits are useful and will gain widespread acceptance. Just play long, and don’t feed the hypemonster.