> “The intelligence of an AI model roughly equals the log of the resources used to train and run it,” [Sam Altman]
Taking that at face value, it means we would have to invest exponential resources just to get linear improvements. That’s not exactly an optimistic outlook.
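Taken literally as I = k·log2(R) (a toy reading of the quote; the constant k and the log base are arbitrary assumptions), a quick sketch makes the trade-off concrete: every additional unit of "intelligence" costs a doubling of resources.

```python
import math

def intelligence(resources: float, k: float = 1.0) -> float:
    """Toy reading of the quote: intelligence ~ k * log(resources)."""
    return k * math.log2(resources)

# Each +1 "unit" of intelligence requires doubling the resources:
for r in (1, 2, 4, 8, 1024):
    print(f"{r:>5}x resources -> intelligence {intelligence(r):.0f}")
```

Under this model, going from 8x to 1024x resources (128 times more spend) buys only 7 extra units.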
Even today's frontier models, without any further improvement, have incredible commercial potential. And even when improvements show diminishing returns in terms of results, the market shifts toward improved models in a supralinear fashion. So a linear improvement resulting from exponential investment might still net you an exponential commercial return.
Also, the LLM space is a red queen environment. Stop investing and you are done.
All that said, IMHO short- to medium-term breakthroughs will come from hybrid AI systems, with the LLM serving as the universal putty between all users and systems.
OpenAI literally lost $12b last quarter, where is this "incredible commercial potential" you are talking about sir?? Is it monetizing the seven second slop memes? Where is the commercial potential you scam artist, stonk pumper?
Even if you want to give OpenAI the benefit of the doubt by comparing it to other software giants, they're doing terribly. Google, Facebook, Apple, Amazon, etc. were profitable almost immediately after their founding. In the cases where they accumulated losses, it was a deliberate effort to capture as much of the market as possible. They could simply hit the brakes and become profitable at will.
In OpenAI's case, every week yet another little-known lab in China releases a 99% competitive LLM at a fraction of their costs.
It's not looking good at all now or in the long-term.
And in his interviews he talks about the near-vertical progress they're making. Of course nobody agrees, but if both of those claims are true, keeping pace requires doubly exponential resources.
At what point will the massive investments into AI show a respectable return? With the literal Trillion dollars OpenAI is constantly trying to raise what type of revenue would make that type of investment make sense? Even if you're incredibly bullish I don't know how you make that math work anymore.
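Back-of-envelope, with every number below a hypothetical assumption rather than a reported figure: at a conventional ~10% required return, a $1T investment needs on the order of $100B per year in profit, which at a generous 25% operating margin implies roughly $400B per year in revenue.

```python
# All numbers are hypothetical assumptions, not reported figures.
investment = 1_000_000_000_000   # the ~$1T figure floated in the press
required_return = 0.10           # conventional cost-of-capital assumption
operating_margin = 0.25          # generous software-style margin

profit_needed = investment * required_return       # profit per year
revenue_needed = profit_needed / operating_margin  # revenue per year

print(f"profit needed:  ${profit_needed:,.0f}/yr")
print(f"revenue needed: ${revenue_needed:,.0f}/yr")
```

For scale, that revenue figure would exceed the current annual revenue of almost every company on Earth, which is the arithmetic behind the skepticism.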
I think it’s hard for individuals to think at the scale of very large institutional investors. They have lakes of money [1][2] that they have to invest in a balanced way, including investing a small percentage into “it probably won’t work but if it does we’ll make a fortune”-type bets. Given the size of these funds even a small percentage is a very large number.
There are also a finite number of opportunities to invest in, so companies that have “buzz” can create a bidding war among potential investors that drives up valuations.
So that’s one possible reason, but in the end we can’t know why another investor invests the way they do. We assume that the investor is making a rational decision based on their portfolio. It’s fun to speculate about, though, which is why there’s so much press attention.
The problem is we now have municipal and state governments taking on infrastructural investments (usually via subsidies) and energy companies racing to meet the load demands. There are all kinds of institutions, both private and public, dumping obscene amounts of money into this speculative investment that can’t be a winner for everybody.
What happens to the ones that built for projects that end up failing? Seems to me the only way the story ends is with taxpayers on the hook once again.
Yes, many of us are investing in this (even indirectly) and may not realize it! The same rules still apply for municipal and state treasuries though: Only a small percentage of the overall portfolio should be allocated to high-risk investments.
Power generation, power grids are more generally useful today and less speculative than trying to win the AI race, so the risk for those types of things is somewhat lower, but there IS risk even in those.
I think the concern is something like the huge Entergy investment going on in Louisiana. Facebook basically cut a deal with them to build out all kinds of electrical load just for them. We also saw how committed Facebook was to the metaverse - they basically spent the GDP of a small nation, nothing came of it, fired a bunch of people, and moved on.
Entergy is not just going to sit around and take the L if the project doesn’t ultimately turn out to be a good long-term investment. They’re simply going to pass the cost on to their customers in the region (more so than they already plan to in the event of success). Meanwhile Louisiana taxpayers are footing the bill for all the subsidies going through these projects.
So yeah, I agree it’s not quite as high risk because at least there’s some infrastructural investment, but that’s not the kind of investment that is really needed in the region right now, and having that extra capacity is unfortunately not a good thing.
To be clear I’m not really disagreeing with you. I’m just kind of bickering over the nuances lol
Au contraire: he and associated people (Musk, among others) will be, or already are being, received as heroes within his class for helping to seemingly break the bargaining power of software engineers and of middle/upper-middle-class information workers and creatives generally.
The folks who would have to press charges are the folks who would be far too embarrassed to admit how transparent the fraud they fell for was, and pressing charges would nuke any remaining asset value.
I think the market & political/economic actors as a whole are justifying these investments on the basis that the benefit is distributed across the labour market generally.
That is, it doesn't matter so much if OpenAI and individual investors get fleeced, if there's a 20-50% labour cost reduction generally for capitalism as a whole (especially cost reduction in our own tech professions, which have been very well paid for a generation) -- Institutional investors and political actors will benefit regardless, by increasing the productivity or rate of exploitation of intellectual / information workers.
It will be interesting to see who owns all the compute hardware in a few years, that cost billions now, and what becomes of it. With an expected useful lifetime so short the depreciation rate is insane.
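A minimal straight-line depreciation sketch (the purchase cost and useful life here are purely hypothetical assumptions, not anyone's reported figures) shows why a short lifetime hurts:

```python
# Straight-line depreciation: hypothetical numbers for illustration only.
purchase_cost = 10_000_000_000   # $10B of accelerators
useful_life_years = 3            # a commonly assumed lifespan for frontier GPUs

annual_depreciation = purchase_cost / useful_life_years
print(f"${annual_depreciation:,.0f} written off per year")
```

Assuming a 5-year life instead would cut the annual write-off from about $3.3B to $2B, which is why the lifetime assumption is so contested in these debates.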
> Experts note chief dealmaker Altman doesn’t have anything to lose. He has repeatedly claimed he does not have a stake in the company, and won’t have a stake even after OpenAI has restructured to become a public benefit corporation. “He has the upside, in a sense, in terms of influence, if it all succeeds,” said Ofer Eldar, a corporate governance professor at the UC Berkeley School of Law. “He's taking all this commitment knowing that he's not going to actually face any consequences because he doesn't have a financial stake.”
> That’s not good corporate governance, according to Jo-Ellen Pozner, a professor of management and entrepreneurship at Santa Clara University’s Leavey School of Business. “We allow leaders that we see as being super pioneering to behave idiosyncratically, and when things move in the opposite direction and somebody has to pay, it's unclear that they're the ones that are going to have to pay,” she said.
> Luria adds: “He can commit to as much as he wants. He can commit to a trillion dollars, ten trillion, a hundred trillion dollars. It doesn't matter. Either he uses it, he renegotiates it, or he walks away.” There are of course more indirect stakes for Altman, experts said, like the reputational blow he’d take if the deals fall apart. But on paper, he’d seemingly be off the hook, they said.
Is an intended take-home message of the article that Altman should invest his own money (with potential loss, or profit), or that he should be given a big compensation deal (maybe like the one Musk just got)?
I've heard it over and over again that the ultimate problem with corporate governance is that CEOs and VPs are incentivized to pump the stock at any cost to pad their own coffers. This is basically taken as gospel in many circles.
Now you have a company where the leader at least notionally doesn't have that kind of a financial stake... and we think that's bad? I disagree with Altman on almost everything, but it feels like grasping at straws.
Of course it is bad; he is "selling" other folks AI snake oil to get them to part with their money, which might have been better spent on proper companies that make actual products or provide services.
The description of a "stupid" person by Carlo Cipolla (recent HN thread The Basic Laws of Human Stupidity - https://news.ycombinator.com/item?id=45829210) seems to rather fit Sam Altman.
A stupid person is a person who causes losses to another person or to a group of persons while himself deriving no gain and even possibly incurring losses.
Well if you have absolutely no stake then you're just playing with other people's money. If your entire stake is tied to the stock price then that's also bad for the known reasons. In a somewhat better governance model I would expect "the board" to reward / punish the CEO financially based on tangible growth metrics. This is how my current CEO gets paid at $dayjob.
Agreed. The speculation is telling and reflects on the speculators more than reflecting a reality we understand. No one knows what will happen next.
I remember the days when Facebook and Google and other familiar giants were painted with the same brush-- all negative speculation as to them being overvalued and on the cliff's edge of doom, because no one could imagine how they'd survive their obscene overvaluations. How and when will they monetize? How will they ever bring in enough revenue to justify the insane capital spending?
Yes, exactly this. No one is releasing the details of the contracts because they aren't real commitments in most cases. They are almost certainly agreements to spend up to x at y pricing mechanism, can be cancelled, etc etc.
I think sama thinks money won’t really exist within the next few years. ASI will take over and create a world of radical abundance. Any revenue OpenAI generates is just to create leverage.
If any of the super-wealthy people actively promoting this fantasy actually believed it, they wouldn’t be so worried about amassing wealth today. "Over-abundance" talk happened during previous technological revolutions too: at best it was just silly over-optimism; at this point I tend to think they’re just obliquely preparing us for underemployment and lower incomes.
What a weird take. Of course he won't be on the hook. No CEO is personally financially liable for a company's potential losses nor should they be. Otherwise why would anyone ever take any risks?
He doesn't even have a financial interest in the company, apparently. Obviously, the people who will lose their investments if everything goes south are… the investors. As it's supposed to be.
The implied premise of this headline, that somehow there's something wrong with the fact that a CEO won't be personally financially responsible for potential future losses, is truly bizarre.
How many actual AI researchers from one of the big AI companies do we have here? I ask because they always seem to be extremely quiet, but I believe there is no way you could be involved in the development of LLMs at a deep level and not understand at this point that the entire thing is a scam. LLMs are very much like a magic trick: they seem truly miraculous to those who don’t understand how the trick is done. But those who designed the trick certainly know it’s a deception. They’ve done enough research by this point to see that it’s not intelligent at all, but generates a very good illusion of intelligence by returning text that seems very similar to human output (because that’s what it was trained on).
Useful? Yep - it’s like the best autocomplete you could ever imagine. Paradigm-changing even, as we now have a big chunk of human knowledge in a much more easily searchable format. It’s just not intelligent.
I have to imagine that just like a magic trick, eventually someone will come up with a way to clearly communicate to the layperson how the trick is done. At that point, the illusion collapses.
Both Yann LeCun and Richard Sutton (author of "The Bitter Lesson" essay where he argued scaling of general purpose methods brings about significant returns) have already pointed out that LLMs are a dead end.
All that the AI industry is doing is scaling computation/data in the hope that the result may encompass more of the existing real-world data and thus give the illusion of thinking. You don't know whether a correct answer is due to reasoning or due to parroting of previously seen answer data. I always tell laypeople to think of LLMs as very, very large dictionaries: e.g., with the words from a pocket Oxford dictionary you can construct only so many sentences, whereas from a multi-volume set of large Oxford dictionaries you can construct orders of magnitude more sentences, and thus the probability of finding your specific answer sentence is much, much higher. Then they can understand the scaling issue, realize its limits, and see why this approach can never lead to AGI.
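The dictionary analogy can be made concrete with a toy count (the vocabulary sizes and sentence length below are made-up round numbers): the number of length-L word sequences grows as V^L, so a 10x larger vocabulary multiplies the space of 10-word "sentences" by a factor of 10^10.

```python
# Made-up round numbers for the analogy, not real dictionary sizes.
pocket_vocab = 30_000     # pocket dictionary
full_vocab = 300_000      # multi-volume dictionary (10x the vocabulary)
sentence_len = 10         # words per "sentence"

pocket_sentences = pocket_vocab ** sentence_len
full_sentences = full_vocab ** sentence_len

print(full_sentences // pocket_sentences)  # 10_000_000_000: ten orders of magnitude
```

The point of the analogy is that the sentence space grows combinatorially with vocabulary, not that LLMs literally store sentences.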
> At an event this week, OpenAI CFO Sarah Friar seemed to suggest that the government could act as a “backstop” for the company’s commitments
She said the quiet part out loud? This was always the play; it is obvious. Too big to fail. National security concerns. China/Russia very scary. Blah blah blah.
Altman’s libertarian pontification is so obviously insincere it’s laughable.
I love your username. Off topic: I have complicated thoughts about crypto, but I would genuinely agree that for a normal person, crypto is just a scam.
I just have money in USDC, and I might like Monero for privacy sometimes, but I guess right now I am just using USDC as a bank account.
I would personally like it if middlemen like Visa/Stripe could be cut out; it's honestly insane that we still can't figure out the issue of middlemen taking cuts.
Maybe the issue isn't technological but regulatory
But overall I agree that crypto, and especially web3, is mostly a scam.
Yep, he saw the Mamdani win and Vance gave new marching orders on X by pretending to care about jobs, housing etc.
Sacks was one of the most prominent whiners on X asking for the bailout of Silicon Valley Bank. He is lying, just as the All-In podcast was lying before Trump's election, and then they dropped the mask.
Yes, the whole play by OpenAI is to raise enormous amounts of money as quickly as possible from as many people and companies as they can. Their only real goal is to put themselves in a position as the "indispensable company" that is too big to fail and will bring every AI investment to its knees if not supported by the government.
You don’t even have to read the article. There is no such thing as a CEO held accountable.
Just look at Elon’s insane pay package, approved in a landslide. The skulls of the average shareholder must echo like a cave.
And the rich accuse the poor of their poverty being their own fault, because they are just being irresponsible, making bad decisions, and spending unwisely. They should look in the mirror.
Investors in companies such as Tesla are adherents of the greater fool theory. As long as somebody else will pay more for the useless shares, they'll continue to support the company.
> The skulls of the average shareholder must echo like a cave.
Given you calling these people stupid, I'd be shocked if you out-earned them over the past 15 years[1]. And if your counter-argument will be "that's a bad yardstick" then what yardstick shall we use? I own no TSLA stock, nor do I particularly like Musk, but this weird irrational hatred just doesn't seem to contribute anything other than reiterating the (oh so boring) liberal "Musk bad" zeitgeist talking points.
If you're a trader, isn't the whole idea maximizing your earnings yield? My point was that gp called these people stupid, but the numbers show they're smart.
The rich are stupid, just like everyone else. That in itself goes against the narrative that has been woven in our corporate mass media, the accepted narrative that makes it seem as though the rich are meritorious and deserve to hoard wealth.
This illusion is why CEOs will never be held accountable for their actions. Shareholders are part of the pyramid scheme feeding everyone on the very top. Failures of the visionary founder CEO are like failures of Kim Jong Un: certainly not their fault! Your undying support as a shareholder is required to right the ship. Your lack of faith is what caused the failure in the first place.
Of course I haven’t out-earned them; I don’t have an army of accountants and lawyers working for me to dodge taxes and avoid complying with laws, nor access to private equity investment or insider trading knowledge, and all of my income is W-2, so my tax rate is about 4x higher than any billionaire’s.
Yeah, a CEO being "held accountable" looks like "we will pay you enormous sums of money to leave the company". Just once I would like to see the CEO of some big corp face actual consequences for running the company badly, but I'm not holding my breath.
The alternative interpretation of the approval of Musk's pay package is that the stakeholders know the goals are unrealistic, but they want to keep the hype going in order to exit quietly.
The Optimus narrative is so obviously a fraud. The things can "dance" and play chess, but they cannot operate in dirt, scrub the kitchen, etc. Even if they succeed, BYD will build a $7000 Optimus. Intimidation of crowds and barking orders at humans (for example for the human to clean the kitchen floor) seem the only somewhat realistic goals.
In its first decade (1998-2008), Google's total revenue was approximately $27.9 billion.
Haha:)
> have incredible commercial potential
That assertion is unsupported and unproven.
Also, if a commercial use for LLMs is ever found, it will be in the local, personal computing market, not the "cloud".
It's actually been completely disproven: Open"AI" is burning over ten billion a quarter in net losses. It has no commercial value.
To play the Devil's advocate, OpenAI actually has hundreds of millions in income, even though they are spending far more in training new models.
> Taking that at face value, it means we would have to invest exponential resources just to get linear improvements.
Not necessarily. Approaches such as mixture of experts help lower training costs by covering domains with specialized models.
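For readers unfamiliar with the idea, here is a minimal (and entirely toy) sketch of sparse mixture-of-experts routing in NumPy: a gate scores all experts, but only the top-k actually run, so per-input compute stays roughly flat as the expert pool grows. The shapes, the softmax gate, and the random linear "experts" are all illustrative assumptions, not any lab's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_experts = 4, 8

def moe_forward(x, experts, gate_w, top_k=2):
    """Route x to its top-k experts only; the rest are never evaluated."""
    scores = x @ gate_w                   # one gating logit per expert
    top = np.argsort(scores)[-top_k:]     # indices of the top-k experts
    w = np.exp(scores[top])
    w /= w.sum()                          # softmax over the selected experts
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

# Eight tiny "experts" (random linear maps); only two run per input.
experts = [lambda x, W=rng.normal(size=(dim, dim)): x @ W
           for _ in range(n_experts)]
gate_w = rng.normal(size=(dim, n_experts))
y = moe_forward(rng.normal(size=dim), experts, gate_w)
print(y.shape)  # (4,)
```

The design point is that total parameter count can grow with the number of experts while the per-token FLOPs are bounded by top_k.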
haha good one, so why haven't they done this yet? What are they waiting for? Let's see these super advanced "experts" with "specialized models"!!
I encourage you to rethink your identity. You are way out of your depth on this, and posting nonsensical things as fact.
"A deep incisive point, it seems like you want to turn the entire mass of the solar system into computronium to run ChatGPT27".
Grey goo scenario, but the goo are NVIDIA cards used to train LLMs.
We probably already live in a simulation where an LLM is trying to compute how many “r”s are in “strawberry”
If so, expect seahorses to never have existed by next week
[1] https://en.wikipedia.org/wiki/List_of_largest_pension_scheme...
[2] https://en.wikipedia.org/wiki/List_of_sovereign_wealth_funds...
It's gonna be very fun to watch SA being tried for fraud and deceiving investors about the "future profits" of his startup.
What fraud?
You should view your contributions as a donation. What donation has an ROI?
It’s why Musk is also safe from similar problems.
As others have said before me: "the hype IS the product".
That's just a roundabout way of saying they don't expect their money back they just hope to sell before the bubble bursts.
Modern American investing 101.
Which is the same thing as a scam.
Why Fears of a Trillion-Dollar AI Bubble Are Growing - https://www.bloomberg.com/news/articles/2025-10-04/why-ai-bu...
What Would an AI Crash Look Like? - https://www.bloomberg.com/news/newsletters/2025-10-12/what-h...
If the bubble pops, the only upside I’m looking forward to is the cheap liquidated hardware
That will be nice, because right now 64 GB of RAM is $500.
Without knowing the details of these "agreements," everyone is just speculating.
OpenAI as an entity is only in trouble if they've bound themselves completely without any way out.
These "commitments" may just function as memorandums of understanding.
Bloomberg maps the $8T AI bubble: a closed loop of money and momentum - https://www.linkedin.com/posts/michael-lee-4049593_signal-bl...
See also - https://news.ycombinator.com/item?id=45857769
https://archive.ph/LYVMZ
> 6 GW of AMD’s chips
Rule no. 1 of scam-flavored hype: make up impressive-sounding units that are opaque and meaningless
How many actual AI researchers from one of the big AI companies do we have here? I ask because they always seem to be extremely quiet, but I believe there is no way you could be involved in the development of LLMs on a deep level and not understand at this point that the entire thing is a scam. LLMs are very much like a magic trick - they seem truly miraculously to those who don’t understand how the trick is done. But those who designed the trick certainly know it’s deception. They’ve done enough research by this point to see that it’s not intelligent at all, but generates a very good illusion of intelligence by returning text that seems very similar to human output (because that’s what it was trained on).
Useful? Yep - it’s like the best autocomplete you could ever imagine. Paradigm-changing even, as we now have a big chunk of human knowledge in a much more easily searchable format. It’s just not intelligent.
I have to imagine that just like a magic trick, eventually someone will come up with a way to clearly communicate to the layperson how the trick is done. At that point, the illusion collapses.
Both Yann LeCun and Richard Sutton (author of "The Bitter Lesson" essay where he argued scaling of general purpose methods brings about significant returns) have already pointed out that LLMs are a dead end.
All the AI industry is doing is scaling computation/data in the hope that the result may encompass more "existing real-world data" and thus give the illusion of thinking. You don't know whether a correct answer is due to reasoning or due to parroting of previously seen answer data. I always tell common folks to think of LLMs as very, very large dictionaries: e.g. with the words from a pocket Oxford dictionary you can construct only so many sentences, whereas from a multi-volume set of large Oxford dictionaries you can construct orders of magnitude more sentences, and thus the probability of finding your specific answer sentence is much, much higher. Now they can understand the scaling issue, realize its limits, and see why this approach can never lead to AGI.
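The dictionary analogy above can be made concrete with a back-of-the-envelope calculation (the vocabulary sizes and sentence length here are my own illustrative assumptions, not figures from the comment): the space of possible word sequences grows as vocab**length, so even a modest increase in vocabulary expands the "sentence space" by many orders of magnitude.

```python
# Toy sketch of the dictionary analogy: how the space of possible
# fixed-length word sequences grows with vocabulary size.
# All numbers below are assumed for illustration only.
pocket_vocab = 30_000   # rough word count of a pocket dictionary (assumption)
full_vocab = 600_000    # rough word count of a multi-volume dictionary (assumption)
length = 10             # words per "sentence"

pocket_sentences = pocket_vocab ** length
full_sentences = full_vocab ** length

# A 20x larger vocabulary yields a 20**10 (~10 trillion) times larger
# space of 10-word sequences.
ratio = full_sentences // pocket_sentences
print(ratio)  # 20**10 = 10,240,000,000,000
```

This is only the combinatorial intuition behind the analogy; it says nothing about which sequences are meaningful, which is exactly the gap the original comment is pointing at.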
> At an event this week, OpenAI CFO Sarah Friar seemed to suggest that the government could act as a “backstop” for the company’s commitments
She said the quiet part out loud? This was always the play, it is obvious. Too big to fail. National security concerns. China/Russia very scary. Blah blah blah.
Altman’s libertarian pontification is so obviously insincere it’s laughable.
And David Sacks immediately responded with "there will be no government bailout":
https://www.cnbc.com/2025/11/06/trump-ai-sacks-federal-bailo...
Surely the government would never lie.
I love your username. Off-topic, but I have complicated thoughts about crypto, and I would genuinely agree that for a normal person crypto is just a scam.
I just have money in USDC, and I might like Monero for privacy sometimes, but I guess I am just using it as a bank account right now with USDC.
I would personally like it if middlemen like Visa/Stripe could be cut out; it's honestly insane how we still can't solve the problem of middlemen taking cuts.
Maybe the issue isn't technological but regulatory
But overall I agree that crypto, and especially web3, is mostly a scam.
Yep, he saw the Mamdani win and Vance gave new marching orders on X by pretending to care about jobs, housing etc.
Sacks was one of the most prominent whiners on X asking for the bailout of Silicon Valley Bank. He is lying, just as the All-In podcast was lying before Trump's election and then dropped the mask.
Yes, the whole play by OpenAI is to raise enormous amounts of money as quickly as possible from as many people and companies as they can. Their only real goal is to put themselves in a position as the "indispensable company" that is too big to fail and will bring every AI investment to its knees if not supported by the government.
You don’t even have to read the article. There is no such thing as a CEO held accountable.
Just look at Elon’s insane pay package, approved in a landslide. The skulls of the average shareholder must echo like a cave.
And the rich accuse the poor of their poverty being their own fault, because they are just being irresponsible, making bad decisions, and spending unwisely. They should look in the mirror.
Investors in companies such as Tesla are adherents of the greater fool theory. As long as somebody else will pay more for the useless shares, they'll continue to support the company.
> The skulls of the average shareholder must echo like a cave.
Given that you're calling these people stupid, I'd be shocked if you out-earned them over the past 15 years[1]. And if your counter-argument will be "that's a bad yardstick," then what yardstick shall we use? I own no TSLA stock, nor do I particularly like Musk, but this weird irrational hatred just doesn't seem to contribute anything other than reiterating the (oh so boring) liberal "Musk bad" zeitgeist talking points.
[1] https://www.macrotrends.net/stocks/charts/TSLA/tesla/roe
It doesn't seem boring or unreasonable to me to hate a Nazi who's contributing to building a mechahitler.
At the very least, hating such people used to be an American value.
yes yes, my favorite objective measure of intelligence: earnings yield
If you're a trader, isn't the whole idea maximizing your earnings yield? My point was that gp called these people stupid, but the numbers show they're smart.
Oh you have mistaken me greatly.
This isn’t a liberal/conservative thing.
This is a class conflict thing.
The rich are stupid, just like everyone else. That in itself goes against the narrative that has been woven in our corporate mass media, the accepted narrative that makes it seem as though the rich are meritorious and deserve to hoard wealth.
This illusion is why CEOs will never be held accountable for their actions. Shareholders are part of the pyramid scheme feeding everyone on the very top. Failures of the visionary founder CEO are like failures of Kim Jong Un: certainly not their fault! Your undying support as a shareholder is required to right the ship. Your lack of faith is what caused the failure in the first place.
Of course I haven't out-earned them. I don't have an army of accountants and lawyers working for me to dodge taxes and avoid complying with laws, nor access to private equity investment or insider trading knowledge, and all of my income is W2, so my tax rate is about 4x higher than any billionaire's.
>this weird irrational hatred
He's a literal Nazi who gave the sieg heil salute on national television.
It's not irrational for people to think he's a bad person.
Yeah, a CEO being "held accountable" looks like "we will pay you enormous sums of money to leave the company". Just once I would like to see the CEO of some big corp face actual consequences for running the company badly, but I'm not holding my breath.
May I introduce you to the Enron scandal?
https://en.wikipedia.org/wiki/Jeffrey_Skilling
Can you imagine such a punishment occurring under the current administration?
https://www.bbc.com/news/articles/cn7ek63e5xyo
One of many reasons why I say that the CEO class is no longer accountable for anything. Laws do not exist for them.
Are you sure you can't think of any examples?
SBF was convicted and sentenced to 25 years.
CEOs who defraud the super-rich totally go to jail.
but if you play your cards right, you can then get pardoned.
Playing your cards right means not upsetting the super-rich. Robbing taxpayers and pension funds is OK.
The alternative interpretation of the approval of Musk's pay package is that the stakeholders know the goals are unrealistic, but they want to keep the hype going in order to exit quietly.
The Optimus narrative is so obviously a fraud. The things can "dance" and play chess, but they cannot operate in dirt, scrub the kitchen, etc. Even if they succeed, BYD will build a $7000 Optimus. Intimidation of crowds and barking orders at humans (for example for the human to clean the kitchen floor) seem the only somewhat realistic goals.