Where's the evidence? This writing strikes me as purely belief-based. For all the overhypers of AI, you also get extreme skeptics like this, and neither side has good evidence. It's speculation. If he truly knew the future, he'd short all the companies right before the collapse.
Here's some evidence:
Oracle's share price recently went up 40% on an earnings miss, because apart from the earnings miss they declared $455b in "Remaining Performance Obligations" (which is such an unusual term it caused a spike in Google Trends as people tried to work out what it meant).
Of the $455b of work they expect to do and get paid for, $300b comes from OpenAI. OpenAI has about $10b in annual revenue, and makes a loss on it.
So OpenAI aren't going to be able to pay their obligations to Oracle unless something extraordinary happens with Project Stargate. Meanwhile Oracle are already raising money to fund their obligations to build the things that they hope OpenAI are going to pay them for.
These companies are pouring hundreds of billions of dollars into building AI infrastructure without any good idea of how they're going to recover the cost.
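To put that mismatch in perspective, a back-of-the-envelope check using only the figures above; the five-year contract term is my own assumption for illustration, not a disclosed number:

```python
# Rough sanity check using the approximate figures cited above
# (not audited financials).
openai_annual_revenue = 10e9   # ~$10b/year, currently loss-making
oracle_obligation = 300e9      # OpenAI's share of Oracle's $455b RPO

# Years of OpenAI's *entire* current revenue needed to cover the
# Oracle commitment alone, ignoring every other cost they have:
years_of_revenue = oracle_obligation / openai_annual_revenue
print(f"{years_of_revenue:.0f} years of current revenue")  # 30 years

# If the obligation were spread over a hypothetical 5-year term, the
# required annual payment dwarfs today's entire revenue:
contract_years = 5
required_annual = oracle_obligation / contract_years
print(f"${required_annual / 1e9:.0f}b/year needed vs "
      f"${openai_annual_revenue / 1e9:.0f}b/year earned today")
```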
I'm slightly confused about how solid the expected revenue has to be to be counted as RPO. Does this mean OpenAI actually signed a contract binding them to spend that $300b with Oracle?
The second interesting part is also the part you're assuming in your argument: does it matter that OpenAI doesn't have $300b now, and doesn't have the revenue or profit to generate that much? Unless there are deals in the background that have already secured funding, this seems like very shady accounting.
If I earnt £10k a year from my job, and I was spending more than £10k a year getting myself deeper in debt every year, I wouldn't go out and sign up for £300k of goods and services. But maybe that's just me.
I guess we'll find out.
It's a case of major FOMO. They would rather burn with the others who bet wrong than be the ones left behind.
Pre-banking 30 years of a customer's net revenue is eron-level accounting
> eron-level
Enron?
Elon?
There are various links in the article with more information. Clicking through those references gives the evidence for the bad-unit-economics claims and whatnot.
As for predicting the moment: the author has made a prediction and wants it to be wrong. They expect the system will continue to grow larger for some time before collapse, and would prefer that timeline be shortened to reduce the negative economic impacts. They are advising others on how to take economic advantage of the prediction, which is likely their own way of shorting the market. It may not be options trading, but making plans for the bust is functionally similar.
Side note: If you're going to short an AI company (or really, buy put options, so you don't have unlimited downside exposure), I would suggest shorting NVIDIA. My reasoning is that if we actually get a fully automated software engineer, NVIDIA stock is liable to lose a bunch of value anyways -- if I understand correctly, their moat is mostly in software.
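For anyone unclear on why puts cap the downside where a naked short doesn't, here's a minimal payoff sketch; the entry price, strike, and premium are all made up for illustration:

```python
# Payoff comparison: naked short vs. buying a put.
# All prices below are hypothetical.
entry = 100.0    # price when you short; also the put's strike
premium = 5.0    # what the put option costs

def short_pnl(close: float) -> float:
    # Profits if the stock falls; losses are unbounded if it rises.
    return entry - close

def put_pnl(close: float) -> float:
    # Profits if the stock falls below the strike; loss capped at premium.
    return max(entry - close, 0.0) - premium

for price in (50.0, 100.0, 200.0, 400.0):
    print(f"close {price:6.1f}: short {short_pnl(price):+7.1f}, "
          f"put {put_pnl(price):+7.1f}")
# At 400 the short is down 300; the put holder is only out the 5 premium.
```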
Wile E Coyote sprints as fast as possible, realizes he zoomed off a cliff, looks down in horror, then takes a huge fall.
Specifically I envision a scenario like: Google applies the research they've been doing on autoformalization and RL-with-verifiable-rewards to create a provably correct, superfast TPU. Initially it's used for a Google-internal AI stack. Gradually they start selling it to other major AI players, taking the 80/20 approach of dominating the most common AI workflows. They might make a deliberate effort to massively undercut NVIDIA just to grab market share. Once Google proves that this approach is possible, it will increasingly become accessible to smaller players, until eventually GPU design and development is totally commoditized. You'll be able to buy cheaper non-NVIDIA chips which implement an identical API, and NVIDIA will lose most of its value.
Will this actually happen? Hard to say, but it certainly seems more feasible than superintelligence, don't you think?
NVIDIA is about the only company actually making money in the AI bubble; they're not the one I would choose to short.
The papers he linked all fail to support his claim. The first paper he linked simply counts the mentions of the term "deep learning" in papers. The second surveyed people who lived in… Denmark and tried to extrapolate that to everyone globally.
His points are not backed by much evidence.
If you told me all this about Enron or FTX today, while they were still an industry darling, I for one wouldn't want to bet against them. For every FTX, where cooked books lead to epic failure, there is a Tether, where cooked books lead to accidentally opening up an unlimited money tap through all sorts of dubious means.
Either the AI hype will slow down and the market will crash.
Or AI really does deliver 100x productivity gains and fewer humans are needed. And you lose your job.
I don't see a positive outcome in either of these scenarios…
Not an expert, but I'm convinced they will all pivot to military applications before they go bankrupt, and that will unleash a whole new type of hell
The new doctrine: drown the enemy in slop!
The attack of the Slop did not take place.
There's always somebody predicting an apocalypse. This guarantees that, regardless of what happens, there's somebody who can claim they were right.
Which makes those predictions completely useless. You could as well read your horoscope.
The funny thing, though, is that on balance I feel GPT has delivered more than it has disappointed since its public launch almost three years back. I still feel that we should run into limitations soon, and maybe GPT-5 is an example of that.
Surely the tech still has a long way to go and will keep improving, especially as the money has attracted everyone to work on it in different ways that weren't considered important until now. But the financial side of things has to correct a bit for healthy growth.
>> A much-discussed MIT paper found that 95% of companies that had tried AI had either nothing to show for it, or experienced a loss
The paper they linked to just analyzed how many times “deep learning” appears in academic papers…
This is the proof that most companies unsuccessfully tried AI?
The link is wrong; I believe they meant to link this one: https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...
I think they linked to the wrong thing. It’s been discussed several times here:
https://news.ycombinator.com/item?id=45170164
3% of world GDP is more than the $2 trillion needed to fund AI.
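Quick arithmetic on that claim, assuming world GDP of roughly $105 trillion (the approximate recent figure; treat it as an order-of-magnitude input):

```python
# Checking the claim above with a rough world-GDP figure.
world_gdp = 105e12             # ~$105 trillion, approximate
ai_funding_needed = 2e12       # the $2 trillion figure from the comment

three_percent = 0.03 * world_gdp
print(f"3% of world GDP: ${three_percent / 1e12:.2f} trillion")  # ~$3.15T
print(three_percent > ai_funding_needed)                          # True
```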
I'm not convinced unit economics is the right lens here, given that it's a general-purpose technology.
For the very near term, perhaps, but the large-scale infra rollouts strike me as a 10+ year strategic bet, and on that scale what matters is whether this delivers on automation and productivity.
They're all going to start selling ads, obviously.
Is there going to be enough new ad spend to justify that model? Will everyone else spend even more on ads than they do now?
It doesn't need to be new ad spend. Just the possibility of the existing ad spend being up for grabs justifies all the capex so far.
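One way to read that argument is as an expected-value bet. A toy sketch; every number below is hypothetical, chosen only to show the shape of the reasoning:

```python
# Hedged sketch of the "existing ad spend justifies the capex" argument
# as an expected-value calculation. All inputs are hypothetical.
annual_ad_market = 700e9   # rough global digital ad spend per year
capture_share = 0.10       # hypothetical: AI products grab 10% of it
margin = 0.5               # hypothetical operating margin on that revenue
years = 10                 # horizon of the strategic bet
prob_success = 0.3         # hypothetical chance the grab works at all

expected_value = annual_ad_market * capture_share * margin * years * prob_success
print(f"expected value of the bet: ${expected_value / 1e9:.0f}b")  # ~$105b
# Whether that covers hundreds of billions of capex depends entirely on
# the probabilities you plug in -- which is rather the point.
```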
Great points, but timing it can be very hard. It can last many more years, because this time they have a thing called the "money printer". When the crash happens, they will use it.
Yes it prints whatever amount they want, even trillions. Magically(!)
Most people who try to time these usually get it completely wrong and end up losing huge amounts of money. I just stay invested in the indexes and some long-term stocks; every time I try to predict something, it goes badly.
Bear in mind that your index becomes more and more of those six or seven companies, the more they grow. I think they're over 30% of the market? So an index tracker is still greatly exposed to this.
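A toy decomposition of what that concentration means for a tracker; the 32% weight is illustrative, not current index data:

```python
# How concentrated an index tracker's bet really is.
# The 32% top-7 weight is an illustrative figure, per the comment above.
top7_weight = 0.32
rest_weight = 1.0 - top7_weight

def tracker_return(top7_move: float, rest_move: float) -> float:
    # Weighted return of the whole index given the two segments' moves.
    return top7_weight * top7_move + rest_weight * rest_move

# If the AI-heavy mega-caps drop 50% while everything else stays flat:
print(f"{tracker_return(-0.50, 0.0):+.1%}")  # -16.0% for the "diversified" index
```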
I wish I could get an index without them but it would probably have no growth basically - the rest of the market is struggling in comparison, right?
> When crash happens, they will use it.
You're suggesting that _governments_ will bail out the AI industry? I mean, why would they do that?
I could see governments having a strong interest in bailing out Nvidia, Microsoft, etc at least?
Why would Nvidia be in trouble? They're selling shovels during a gold rush, and they've slow-boated their scale-up - they're in no trouble.
Agreed, I also feel like Microsoft is diversified enough that this would not bring them down.
Probably the hordes of startups would be most impacted. It isn't clear the government would bail them out.
Companies like Nvidia, Microsoft, Amazon, and Google are going nowhere; it's just that their valuations will, in my opinion, take a massive dip, which will then have all sorts of other effects.
They are not going to zero, but they can lose a lot from the current price.
Yeah, I agree with this; maybe it's not obvious from my point. I suspect companies will have RIFs and haircuts but not need bailouts.
Nvidia might have a secondary crash if cheap GPUs flood the market. Or we get a resurgence of crypto mining, who knows.
If demand for compute and CUDA dropped suddenly, would they be okay going back to selling graphics cards?
Their profitability would shrink, but they'd only be in trouble if they were taking on debt to expand operations on the expectation of future growth. AFAIK one of the annoyances gamers have had with Nvidia is that after crypto and now with AI, they've generally been very careful to control how they expand production since they seem quite aware the party could stop any time. It certainly helps to have a lot of product lock-in - people will bear much higher prices to stay with Nvidia at this point (due to, as noted - CUDA).
Sure, the stock price wouldn't be to the moon anymore, but that doesn't materially affect operations if they're still moving product - and gaming isn't going anywhere.
The stock price of a company can crash without materially affecting the company in any way... provided the company isn't taking on expansion operations on the basis of that continued growth. Historically, Nvidia have avoided this.
Well they sell graphics cards now so yes. Why would they suddenly not be okay with selling graphics cards if the AI bubble popped?
"If we won't build it, China will"
I am sure that you already heard this sort of argument for AI. It's a way to bait that juicy government money.
It's way too expensive. We bail out banks so the little people don't lose their shirts too. There's no equivalent in AI.
You're missing the point: once the AI companies go down, they will take down the S&P 500 too, so people's retirement accounts will be affected.
I think there's a bit of a difference between preventing a run on the banks and propping up the entire stock market for the sake of just a handful of companies that all have big enough pockets to fail.
The COVID crash, and what they did in response to that market crash, disagrees.
I want the hype to die and the bubble to pop as much as Ed and Cory and everyone else writing about it but right now it’s just them basically recycling the same bad news and posting about it. I’d love to see some writing which looks at the factors which caused previous pops and to line them up with factors today to try and determine what’s actually coming. Clearly the market is irrational as hell right now but seemingly very little is going to change that. The closest I’ve seen to what I’m looking for is the coverage over at Notes on the Crises[0] and he also seems bewildered.
0: https://www.crisesnotes.com/
Can anyone give more than a hand-waved explanation of how this crash will come about? This paragraph reads kind of like: companies not profitable, no more money coming in, ????, crash.
>> I firmly believe the (economic) AI apocalypse is coming. These companies are not profitable. They can't be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people's money and then lighting it on fire. Eventually those other people are going to want to see a return on their investment, and when they don't get it, they will halt the flow of billions of dollars. Anything that can't go on forever eventually stops.
How will this actually lead to a crash? What would that crash look like? Are banks going bust? Which banks would go bust? Who is losing money, why are they losing money?
See comments elsewhere in this thread. To cite one well-known recent example, Oracle stock went crazy recently (and Larry Ellison briefly became the world's richest person) after they disclosed in their earnings report that they are expecting something like $400b in revenue from serving OpenAI in the coming years. These overinflated expectations systematically multiply and propagate until you arrive at the situation we're in. As soon as that does _not_ happen and everyone realizes it, the whole house of cards comes crashing down, in a very 1929 sort of way.
This is the point TFA is making, albeit a bit hyperbolically.
It also happened in the dotcom bubble: telecom companies were providing leasing/loans to their customers to purchase more networking equipment.
Very similar to the circular funding happening between Nvidia and its customers: Nvidia funds investments in AI datacenters, which get spent on Nvidia equipment. Each step of the cycle has to take a cut to pay its own OpEx, so the money getting back to Nvidia diminishes on each pass through the cycle.
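That diminishing return is just a geometric series. A toy simulation; the 20% OpEx cut per hop and three hops per cycle are made-up parameters:

```python
# Toy model of the circular-funding loop: money goes out from Nvidia,
# each hop skims a cut for its own OpEx, and the remainder comes back
# as equipment orders. Parameters are made up for illustration.
initial_investment = 100.0   # $b, hypothetical
opex_cut_per_hop = 0.20      # each intermediary keeps 20%
hops_per_cycle = 3           # investor -> datacenter -> equipment order

money = initial_investment
for cycle in range(1, 6):
    money *= (1.0 - opex_cut_per_hop) ** hops_per_cycle
    print(f"cycle {cycle}: ${money:5.1f}b returns to Nvidia")
# Each pass returns only ~51% of the previous one, so the loop shrinks
# geometrically and can't sustain itself without fresh outside money.
```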
Everyone will lose money.
How? Everyone everyone?
How would some guy who isn't invested in the stock market, is building a house, and works as a plumber be impacted?
If his clients lose all their money in stocks or lose their jobs, he also loses work. Depressions tend to impact everyone in the end, as unemployment gets high enough.
https://www.derekthompson.org/p/this-is-how-the-ai-bubble-wi...
> What would that crash look like?
What it usually looks like when one of the valley's "revolutions" fails to materialize: a formerly cheap and accessible tech becomes niche and expensive, acres of e-waste, the job market is flooded with applicants with years of experience in something no longer considered valuable, and the people responsible sail off into the sunset now richer for having rat fucked everyone else involved in the scheme.
In this case, though, given the sheer scale of the money about to go away, I would also add: a lot of pensions are going to see huge losses; a lot of cities that waived various taxes to encourage data-center build-outs are going to be left holding the bag, possibly with huge, ugly concrete buildings in their limits that will need to be destroyed; and, a special added one for this bubble in particular, we'll have a ton of folks out there psychologically dependent on a product that is either priced out of their ability to pay or completely unavailable, and the ensuing mental health crises that might entail.
Isn't the thing that costs everyone an arm and a leg at the moment the race for better models? So all of the training everyone is doing to get SOTA in some obscure AI benchmark? From all of the analysis I've read, inference is quite profitable for the AI companies. So at least for the last part:
> we'll have a ton of folks out there psychologically dependent on a product that is either priced out of their ability to pay or completely unavailable, and the ensuing mental health crises that might entail.
I doubt that this will become true. If there's one really tangible asset these companies are producing, which would be worth quite a bit in a bankruptcy, it's the model architectures and weights, no?
> Isn't the thing that costs everyone an arm and a leg at the moment the race for better models? So all of the training everyone is doing to get SOTA in some obscure AI benchmark? From all of the analysis I've read, inference is quite profitable for the AI companies.
From what I've read: the cost to AI companies, per inference as a single operation, is going down. However, all the newer models, all the reasoning models, and their "agents" thing that's still trying desperately to be an actual product category require orders of magnitude more inferences per request to operate. It's also worth noting that code generation and debugging, one of the few LLM applications I will actually say has a use and is reasonably good, also costs far more inferences per request to operate. And that number of inferences can increase massively with a sufficiently large chunk of code you're asking it to look at or change.
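The per-request arithmetic is what matters here. A sketch with illustrative multipliers, not measured figures:

```python
# Why falling per-inference cost can still mean rising per-request cost.
# All multipliers are illustrative, not measured.
old_cost_per_inference = 1.0      # arbitrary units
old_inferences_per_request = 1    # a single completion

new_cost_per_inference = 0.1      # say per-inference cost fell 10x
new_inferences_per_request = 100  # reasoning/agent loops multiply calls

old_request_cost = old_cost_per_inference * old_inferences_per_request
new_request_cost = new_cost_per_inference * new_inferences_per_request
print(new_request_cost / old_request_cost)  # 10.0: each request now costs 10x more
```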
> If there's one really tangible asset these companies are producing, which would be worth quite a bit in a bankruptcy, it's the model architectures and weights, no?
I mean, not really? If the companies enter bankruptcy, that's a pretty solid indicator that the models are not profitable to operate. Unless you're envisioning this as a long-tail support model like you see with old MMO games, where a company picks up a hugely expensive-to-produce product, like LOTRO, runs it with basically a skeleton crew of devs and support folks for the handful of users who still want to play it, and ekes out a humble if legitimate profit for doing so. I guess I could see that, but it's also worth noting that type of business has extremely thin margins, and operating servers for old MMO games is WAY less energy- and compute-intensive than running any version of ChatGPT post-2023.
> I firmly believe the (economic) AI apocalypse is coming. These companies are not profitable. They can't be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people's money and then lighting it on fire.
This is what I don't like. Debating in extremes. How can AI have bad unit economics? They are literally selling compute and code for a handsome markup. This is classic software economics, some of the best unit economics you can get. Look at Midjourney: it pulled in hundreds of millions without raising a single dime.
Companies are unprofitable because they are chasing user growth and subsidising free users. This is not to say there isn't a bubble, but it's a rock-solid business and it's here to stay. Yes, the music will stop one day, and there will be a crash, but I'd bet that most of the big players we see today will still be around after the shakeout. Anecdote: my wife is so dependent on ChatGPT that if the free version ever stopped being good enough, she'd happily pay for premium. And this is coming from someone who usually questions why anyone needs to pay for software.
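To make the "good unit economics, bad profitability" claim concrete, here's a toy model; all figures are hypothetical:

```python
# Paid users can be margin-positive while free users sink the blended
# number. Every figure below is hypothetical.
paid_users = 10e6
free_users = 400e6
revenue_per_paid = 20.0     # $/month subscription
serving_cost_paid = 8.0     # $/month of compute per paid user
serving_cost_free = 1.5     # $/month of compute per free user

unit_margin = revenue_per_paid - serving_cost_paid   # +$12 per paid user
blended = paid_users * unit_margin - free_users * serving_cost_free
print(f"unit margin: ${unit_margin:.0f} per paid user per month")
print(f"blended result: ${blended / 1e6:,.0f}M per month")  # -$480M/month
```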
> Further: the topline growth that AI companies are selling comes from replacing most workers with AI, and re-tasking the surviving workers as AI babysitters ("humans in the loop"), which won't work. Finally: AI cannot do your job, but an AI salesman can 100% convince your boss to fire you and replace you with an AI that can't do your job
This hits home. A lot of the supposed claims of improvements due to AI that I see are not really supported by measurements in actual companies. Or they could have been just some regular automation 10 years ago, except requiring less code.
If anything I see a tendency of companies, and especially AI companies, to want developers and other workers to work 996 in exchange for magic beans (shares) or some other crazy stupid grift.
So what metric would you look at that would support the idea that AI is improving a company?
I guess anything other than just claims from people that have a stake in it?
If companies are shipping AI bots with a "human in the loop" to replace what could have been "a button with a human in the loop", but deploying the AI takes longer, then it's DEFINITELY not really an improvement; it's just a pure waste of money and electricity.
Similarly, what I see that's different from the pre-AI era is way too many companies, in SV and elsewhere, staying roughly the same size and shipping roughly the same number of features as before (or fewer!), but now requiring employees to do 996. That's the definition of a loss of productivity.
I'm not saying I hold the truth, but what I see in my day to day is that companies are masters at absorbing any kind of improvement or efficiency gain. Inertia still rules.
So would lower headcount with stable or improving revenue be a metric you would look at?
Every day I see articles on HN discussing the AI bubble potentially crashing. The large number of such articles appearing daily is increasing my confidence that the AI space will be fine.
> and when the bubble bursts, the money-hemorrhaging "foundation models" will be shut off
This is not a serious piece of writing.
Of course it isn't. It's a FUD piece.