The great thing about LLMs being more or less commoditized is that switching is so easy.
I use Claude Code via the VS Code extension. When I got a couple of 500 errors just now, I simply copy-pasted my last instructions into Codex and kept going.
It's pretty rare that switching costs are THAT low in technology!
It’s not a moat, it’s a tiny groove on the sidewalk.
I’ve experienced the same. Even guide markdown files that work well for one model or vendor will work reasonably well for the other.
I genuinely don't know how any of these companies can make extreme profit for this reason. If a company makes a significantly better model, shouldn't it be possible to explain what makes it better to any competitor, who could then replicate it?
Google succeeded because it understood the web better than its competitors. I don't see how any of the players in this space could be so much better that they could take over the market. It seems like these companies will create commodities, which can be profitable, but are also incredibly risky for early investors and don't generate the profits that would be necessary to justify today's valuations.
Which is exactly why these companies are now all focused on building products rather than (or alongside) improving their base models. Claude Code, Cowork, Gemini CLI/Antigravity, Codex - all proprietary and don't allow model swapping (or do with heavy restrictions). As models get more and more commoditized the idea is to enforce lock-in at the app level instead.
FWIW, OpenAI Codex is open source and they help other open source projects like OpenCode integrate their accounts (not just the expensive API), unlike Anthropic, which blocked that last month and forces people to use its closed-source CLI.
Gemini CLI is open source too, though I think the consensus is it's a distant third behind Claude Code and Codex
The classic commoditize your complements.
I only integrate with models via MCP. I highly encourage everybody to do the same to preserve the commodity status
Using "low cost" and LLM's in the same sentence is kind of funny to me.
> It's pretty rare that switching costs are THAT low in technology!
Look harder. Swapping USB devices (mouse, …) takes even less time. Switching Wi-Fi is also easy. Switching browsers works the same. I can equally use vim/emacs/vscode/sublime/… for programming.
You make it sound like lock-in doesn't exist. But your examples are cherry picked. And they're all standards anyway, their _purpose_ was for easy switching between implementations.
Good point: they are standards, so by definition society forced vendors to behave and play nice together. LLMs are not standards yet, and it is just pure bliss that English works fine across different LLMs for now. Some labs are trying to push their own formats and break that, especially around reasoning traces, e.g. Codex removing reasoning traces between calls and Gemini requiring reasoning history. So don't take this for granted.
Switching between vim <-> emacs <-> IDEs is way harder than swapping a USB (unless you already know how to use them).
Most people only have one mouse or Wi-Fi network. If my Wi-Fi goes down, my only other option is to use a mobile hotspot, which is inferior in almost every way.
I mean sublime died overnight when vscode showed up.
Their GitHub issues are wild; random people are posting the same useless "bug reports" over and over multiple times per minute.
https://github.com/anthropics/claude-code/issues
Gives you a good window into a vibe coder's mentality. They do not care about anything except what they want to get done. If something is in the way, they will just try to brute force it until it works, not giving a duck if they are being an inconvenience to others. They're not aware of existing guidelines/conventions/social norms and they couldn't care less.
This sounds like a case of a bias called availability heuristic. It'd be worth remembering that you often don't notice people who are polite and normal nearly as much as people who are rude and obnoxious.
Could it be that you're creating a stereotype in your head and getting angry about it?
People say these things about any group they dislike. It happens so much that these days it feels like most social groups are defined by outsiders, by the things those outsiders dislike about them.
Well not really, vibe coding is literally brute forcing things until it works, not caring about the details of it.
If history doesn't repeat but it rhymes, does vibe coding rhyme with Eternal September?
I am starting to get concerned about how much “move fast break things” has basically become the average person’s mantra in the US. Or at least it feels that way.
You're about a decade+ late to the party; this isn't some movement that happened overnight, it's a slow cultural shift that has been happening for quite some time already. Quality and stability used to be valued; judging by what most people and companies put out today, they seem to be focusing on quantity and "seeing what sticks" instead.
I’m not saying it’s a sudden/brand new thing, I think I’m just really seeing the results of the past decade clearly and frequently. LLM usage philosophies really highlight it.
Are these superpredator vibe coders in the room with us right now?
Wow are these submitted automatically by claude code? I'm not comfortable with the level of details they have (user's anthropic email, full path of the project they were working on, stack traces...)
Scanning a few. Some are definitely written by AI but most seem genuinely human (or at least, not claude).
Anecdata: I read five and only found one was AI. Your sampling may vary.
I think claude code has a /bug command which auto-fills those details in a github report.
I consider revealing my file structure and file paths to be PII so naturally seeing people's comfort with putting all that up there makes me queasy.
Definitely some automation involved; no way the typical user of Claude Code (no offense) would by default put so much detail into reporting an issue, especially users who don't seem to understand that it's Anthropic's backend that is the issue (given the status code) rather than the client/harness.
No, but they are submitted by the sort of people who will use AI to write the GitHub issue details
How could they be? Claude was down
It's wild that people check the box
> I have searched existing issues and this hasn't been reported yet
when the first 50 issues are about the 500 error.
and every single one of them checked "I have searched existing issues and this hasn't been reported yet"
A long time ago I was taking flight lessons and was working through the takeoff checklist item by item, but my instructor had to remind me that I am not just reading the checklist - I need to understand/verify each item before moving on. Always stuck with me.
A few times a year I have to remind my co-workers that reading & understanding error messages is a critical part of being in the IT business. I'm not perfect in that regard, but the number of times the error message explaining exactly what's wrong and how to solve it is included in the screenshot they share is a little depressing.
Application Error:
The exception illegal instruction
An attempt was made to execute an illegal instruction.
(0xc000001d) occurred in the application at location.
Click on OK to terminate the program.
Some of them don't even have error messages.
I've made a feature request there to add another GitHub Actions bot to auto-close issues reporting errors like this when an outage is happening. Would definitely help to cut through the noise.
https://github.com/anthropics/claude-code/issues/22848
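For anyone curious what that might look like, here's a minimal sketch of such a bot - not anything Anthropic actually runs. It assumes the status page exposes a statuspage.io-style /api/v2/status.json endpoint and that a GITHUB_TOKEN with triage rights is available; the keyword matching is purely illustrative:

    # Hedged sketch of an auto-close bot for outage-duplicate issues.
    # Assumptions (not from the thread): the status page serves a
    # statuspage.io-style /api/v2/status.json, and GITHUB_TOKEN has triage rights.
    import os
    import requests

    STATUS_URL = "https://status.claude.com/api/v2/status.json"  # assumed endpoint
    REPO = "anthropics/claude-code"
    KEYWORDS = ("api error 500", "internal server error")  # illustrative

    def outage_in_progress() -> bool:
        indicator = requests.get(STATUS_URL, timeout=10).json()["status"]["indicator"]
        return indicator != "none"  # "minor"/"major"/"critical" mean an active incident

    def close_outage_duplicates(token: str) -> None:
        headers = {"Authorization": f"Bearer {token}",
                   "Accept": "application/vnd.github+json"}
        issues = requests.get(f"https://api.github.com/repos/{REPO}/issues",
                              params={"state": "open", "per_page": 100},
                              headers=headers, timeout=10).json()
        for issue in issues:
            if "pull_request" in issue:
                continue  # the issues endpoint also returns PRs; skip them
            text = (issue["title"] + " " + (issue["body"] or "")).lower()
            if any(k in text for k in KEYWORDS):
                n = issue["number"]
                requests.post(f"https://api.github.com/repos/{REPO}/issues/{n}/comments",
                              json={"body": "Closing as a duplicate of the ongoing outage; see the status page."},
                              headers=headers, timeout=10)
                requests.patch(f"https://api.github.com/repos/{REPO}/issues/{n}",
                               json={"state": "closed", "state_reason": "not_planned"},
                               headers=headers, timeout=10)

    if __name__ == "__main__" and outage_in_progress():
        close_outage_duplicates(os.environ["GITHUB_TOKEN"])

A real version would probably be gentler (label instead of close, dedupe into one pinned tracking issue), but the moving parts are just those two API calls.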
This is the kind of abuse that will cause them to just close GitHub issues.
Or they'll have to put something in the system prompt to handle this special case where it first checks for existing bugs and just upvotes it, rather than creating a new one.
I'm not too sympathetic toward Anthropic. They did it to themselves by hyping AI and attracting that kind of people.
And it's not like they have been taking care of issues anyway.
The automation of the SWE.
should enable some kind of agent automation
There has to be some sort of automation making these issues; too many of them are identical but posted by different people.
Also love how many have the “I searched for issues” checked which is clearly a lie.
Does Claude Code make issue reports automatically? (And if so, how exactly would it be doing that while Anthropic was down, given that the use of an LLM in the reports is obvious?)
That's what happens when people outsource their mental capacity to a machine
Couldn't have happened to a better Repo, I needed that chuckle.
Github issues will be the real social network for AI agents, no humans allowed!
Goes to show that nobody reads error messages and it reminds me of this old blogpost:
> A kid knocks on my office door, complaining that he can't login. 'Have you forgotten your password?' I ask, but he insists he hasn't. 'What was the error message?' I ask, and he shrugs his shoulders. I follow him to the IT suite. I watch him type in his user-name and password. A message box opens up, but the kid clicks OK so quickly that I don't have time to read the message. He repeats this process three times, as if the computer will suddenly change its mind and allow him access to the network. On his third attempt I manage to get a glimpse of the message. I reach behind his computer and plug in the Ethernet cable. He can't use a computer.
http://coding2learn.org/blog/2013/07/29/kids-cant-use-comput...
Hey folks, I'm Alex from the reliability team at Anthropic. We're sorry for the downtime and we've posted a mini retrospective on our status page. We'll also be doing a more in-depth retrospective in the coming days.
https://status.claude.com/incidents/pr6yx3bfr172
If this overly impacts you as an "engineer" beyond "oh, that's minorly annoying, I'll go do it another way", please do some soul searching.
OR just take some time off from the grind to enjoy your life.
Anthropic might have the best product for coding but good god the experience is awful. Random limits where you _know_ you shouldn’t hit them yet, the jankiness of their client, the service being down semi-frequently. Feels like the whole infra is built on a house of cards and badly struggles 70% of the time.
I think my $20 openai sub gets me more tokens than claude’s $100. I can’t wait until google or openai overtake them.
In other news: https://fortune.com/2026/01/29/100-percent-of-code-at-anthro...
For folks who have the latest version (0.4.1) of LM Studio installed: I just noticed they added Claude Code-compatible endpoints, so maybe this is an excellent moment to play around with local models, if you have the GPU for it. zai-org/glm-4.7-flash (Q4) is supposed to be OK-ish and should fit within 24GB VRAM. It's not great, but always fun to experiment, and if the API stays down, you have some time to waste :)
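If anyone wants to try it from a script rather than through Claude Code itself, here's a minimal sketch that points the official anthropic Python SDK at a local server instead of api.anthropic.com. The base URL is LM Studio's usual local-server address and the model name is just the one mentioned above; both are assumptions, so check what your LM Studio instance actually exposes.

    # Minimal sketch: anthropic SDK pointed at a local LM Studio server.
    # The base_url and model name are assumptions, not taken from LM Studio docs.
    from anthropic import Anthropic

    client = Anthropic(
        base_url="http://localhost:1234",  # LM Studio's default local server (assumed)
        api_key="lm-studio",               # local servers typically ignore the key
    )

    message = client.messages.create(
        model="zai-org/glm-4.7-flash",     # whatever model you have loaded locally
        max_tokens=512,
        messages=[{"role": "user", "content": "Write a haiku about a 500 error."}],
    )
    print(message.content[0].text)

Claude Code itself also seems to respect an ANTHROPIC_BASE_URL environment variable if you'd rather swap the backend under the harness, though I haven't verified that against the new LM Studio endpoints.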
I find it a bit annoying that the last place where I can learn about an Anthropic outage is the Anthropic Status page.
As best as I can tell, there was less than 10 minutes between the last successful request I made and when the downtime was added to their status page - and I'm not particularly crazy with my usage or anything, so the gap could have been even less than that.
Honestly, that seems okay to me. Certainly better than what AWS usually does.
https://status.claude.com/
What do you mean? It's right there. Judging by the GitHub issues, it only took them about 10 minutes to post the incident message.
It took them about 15 minutes to update that page
It appeared there like 5 minutes ago; it was down for at least 20 before that.
That's 20 minutes of millions of people visiting the status page, seeing green, and then spending that time resetting their context, looking at their system and network configs, etc.
It's not a huge deal, but for $200/month it'd be nice if, after the first two-thousand 500s went out (which I imagine is less than 10 seconds), the status page automatically went orange.
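The mechanics of that wouldn't even be complicated. A rough sketch of the kind of tripwire I mean - the window, threshold, and report_degraded() hook are all made up for illustration, not how Anthropic's monitoring actually works:

    # Rough sketch of threshold-based status automation: if ERROR_THRESHOLD or
    # more 5xx responses land within WINDOW seconds, flip the public status.
    import time
    from collections import deque

    WINDOW = 10.0           # seconds
    ERROR_THRESHOLD = 2000  # the "two-thousand 500s" from the comment above

    class StatusTripwire:
        def __init__(self):
            self.errors = deque()  # timestamps of recent 5xx responses
            self.degraded = False

        def record(self, status_code: int, now: float | None = None) -> None:
            now = time.time() if now is None else now
            if status_code >= 500:
                self.errors.append(now)
            # drop errors that have aged out of the window
            while self.errors and now - self.errors[0] > WINDOW:
                self.errors.popleft()
            if len(self.errors) >= ERROR_THRESHOLD and not self.degraded:
                self.degraded = True
                self.report_degraded()

        def report_degraded(self) -> None:
            # in real life this would call the status-page API; here it just prints
            print("status page -> orange")

    # feed it the status code of every API response as it completes
    tripwire = StatusTripwire()
    tripwire.record(500)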
Big models are single points of failure. I don't want to rely on those for my business, security, wealth, health and governance.
Why do people have to learn the same lessons over and over again? What makes them forget or blind to the obvious pitfalls?
Aren't most developers accessing Anthropic's models via vscode/github? It takes seconds to switch to a different model. Today I'm using Gemini.
The entire AI hype was started because Silicon Valley wanted a new SaaS product to keep themselves afloat, notice that LLMs started getting pushed right after Silicon Valley Bank collapsed.
Status page: https://status.claude.com/incidents/pr6yx3bfr172
I've had the $20/month account for OpenAI, Google, and Anthropic for months. Anthropic consistently has more downtime and throws more errors than the other two. Claude (on the web) also has a lot of seemingly false positive errors. It will claim an error occurred but then work normally. I genuinely like Claude the best but its performance does not inspire confidence.
Was getting 500 errors from Claude Code this morning but their status page was green. So frustrating that these pages aren't automated, especially considering there are paying users affected.
Claude Code seems to be back up
I just checked; no version 5 models listed in Claude Code yet.
is back online
Both the CC api and their website -- hopefully related to the rumored Sonnet 5 release
That will be one strange way to release a model.
I mean, can you expect a vibecoding company to do stuff with 0 downtime? They brought the models down and are now panicking at HQ since there's no one to bring them back up
This made me laugh only because I imagine there could possibly be some truth to it. This is the world we are in. Maybe they all loaded codex to fix their deploy? ;)
it is not, sounds like an issue with AWS
OpenClaw agents on Anthropic API taking an unscheduled coffee break.
Their website seems fine to me but CC is throwing API error 500.
-edit- CC is back up on my machine
I was trying to set up OpenClaw and broke it. My bad guys.
What’s updog?
Not much. Anthropic was down. What's up with you?
Not much, what's up with you?
No big deal for people who know how to do their job.
Unless of course your job is writing an agent that uses an Anthropic model.
We need an updated XKCD comic where they are sword fighting, but instead of "compiling" it's "Anthropic is down".
What's updog?
Not much, what’s up with you?
Probably everyone refreshing to see if Sonnet5 is out yet :)
What does Down mean in this context? I imagine running inference on any server suffices. Just rent AWS since Amazon owns it anyways and keep Claude running.
Status page reports it's back up
Yup. SOS
Now people are opening issues and LLMs are responding in completely nonsensical ways, nice one https://github.com/anthropics/claude-code/issues/22843
Signal too
Seems like a wider issue. Signal is down too, youtube seems to be struggling. Here we go again...
its up !
Fortunately before working hours on the west coast so it shouldn't impact that many people.
I hear you on the west coast, but two thirds of the population lives east of the Mississippi and is in working hours
Ah yes, because only the west coast of a single country is the priority for a globally distributed company.
HN users assuming the US is the only country in the world, episode #53985902582.
Only one half of the US :)
Us east coasters here are having a chuckle (what else can we do? we can't get work done while Claude is down... I'll be damned if I have to type in code letter by letter ever again!).
We all have a chuckle when AWS east is down.
Yikes man.
Just a healthy reminder that places outside of the USA exist.
Everybody that uses it knows it is down, so what value does this add? No context here either. Posts like these feel so much like a low-hanging, first-to-post-it karma grab.
It's like.. Popular service is down, let me post that to hn first! Low effort but can still end up popular.
I dunno. Maybe I'm being overly critical. Thoughts?
To add something to the discussion though: this is a reminder of why you should not invest in one tool, Claude or otherwise. Also, don't go enhancing one of these agents, and only one of these agents. beads spent the better part of a medium-sized country's worth of energy to create a simple TODO list and got smeared in 10 minutes once Claude integrated todos into their client.
The usual impetus is that official status pages habitually under-report, and sharing whichever source shows the best information gives people a place to discuss where/how/why it was down. Unfortunately, many seem to take the other interpretation instead, and said discussion often ends up being low quality in a self-fulfilling loop.
The second note probably deserves to be a separate comment.
This is a good take. You are right, they rarely do report actual status.
And yes, it probably should have been :)
Not everyone that interacts with a service interacts with it directly. How is this a serious question?
A thing called APIs exists, and if your users rely on one but you're not interacting with it directly yourself, seeing this could save you time investigating an issue.
Or you are using it yourself and seeing this post confirms it is not just you having an issue and you can move on with your day.
This has nothing to do with it being AI or with it being a large service. It is the same with posts about an Azure or AWS outage.
I wasn't using it. I was just on hn. And yes, it is the same about all of these services of course, which was my point.
Not sure how "nothing to do with it being a large service" matches with then bringing up Azure and AWS, but fair enough.
Now I know you guys care, so fair game.
It’s more about the ensuing discussion.
I don't use it but I'd like to know, even if it's just for entertainment value. But I can imagine people who intend to use it learn something from it: redundancy must be part of their strategy, like you said.
Completely agree, not to mention it was only down for a couple of minutes. Who even cares?
Everybody who uses anything knows that it's down
So what value does it add saying "X is down" anywhere?
It's just for discussion; you can't just ignore it and not talk about it with anyone when a particular service is down. Posts like this are pretty common on HN and I haven't seen anyone complaining, so yes, you're being overly critical.
Fair enough!