This was addressed on Lex Fridman with Dario Amodei (Anthropic’s CEO) and Amanda Askell, who works on Claude; they both insist the answer is no.
I interpret their explanation for “no” as follows: these are probabilistic outputs, and so given any changes, for some inputs, some outputs will be worse some of the time.
The argument goes that, given they’re probabilistic, even without changes, for some inputs, some outputs will be worse than the last time you gave it that input, some of the time.
To be fair to them, it makes sense that any change would then be met with some vocal users who are genuinely experiencing worse output, but are not generally using a worse product.
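To make the “probabilistic” point concrete: the model samples each token from a distribution, so the same input can produce different, and sometimes worse, outputs across runs even with unchanged weights. A toy numpy sketch of temperature sampling, with all numbers hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical next-token scores; a real model produces these per step.
logits = np.array([2.0, 1.0, 0.5, -1.0])
temperature = 0.8

# Softmax at the given temperature turns scores into sampling probabilities.
probs = np.exp(logits / temperature)
probs /= probs.sum()

# Ten runs on the identical input rarely pick the same token every time.
print([int(rng.choice(len(probs), p=probs)) for _ in range(10)])
```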
Yes. It seems to be almost incapable of communicating in anything but terse bullet points of five words or so.
And even when you force it to write in coherent sentences, the output still seems markedly worse than it used to.
I wonder if it is falling back to concise mode as a means to handle load?
I have noticed it's a lot more concise than it was a few weeks ago. I used to glaze over at way too much detail in its responses to my questions; now it's the opposite: too little context.
It explicitly said so to me a few days ago when I queried it.
When I switch it back to full, I just get longer bullet-point lists, so perhaps they are doing that silently.
Sounds like it might be switching to Claude Haiku instead of Claude Sonnet.
Sonnet 3.5 always has this issue for me though. It excessively follows the original instructions, even in vague ways. It's likely 3.5 (new) is even worse. We use 3.0 in production because of this one quirk.
It's been terrible for me the past two weeks. Every day I get a message about the site being at high capacity, or I get rate-limited well before the supposed 45-message limit.
Today, Claude's responses have been so error-prone and incorrect it's quite disappointing; the LLM is now struggling to give correct answers that wouldn't have been a problem previously. For example, it kept insisting that I use `chmod` to take ownership of a directory.
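(For anyone who hits the same bad advice: `chmod` only changes permission bits; transferring ownership is what `chown` is for. A minimal Python illustration of the distinction; the path and user here are hypothetical, and changing ownership requires sufficient privileges:)

```python
import os
import shutil

path = "/srv/project"  # hypothetical directory

# chmod changes permission bits only; it never transfers ownership.
os.chmod(path, 0o755)

# Taking ownership is chown's job (Unix-only; typically needs root).
shutil.chown(path, user="alice", group="alice")
```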
I am seriously considering cancelling my subscription, since the service has only deteriorated since I subscribed.
Yes, there are some reports. For example: https://news.ycombinator.com/item?id=42215912
Those are mostly about the availability issues.
I’m not having trouble getting responses as of today, but the quality of the responses seems to be much worse.
The underlying implication of the linked comment is that Anthropic is using quantization or similar quality-reducing strategies to keep their service online, driven by the same capacity shortage that has been causing the availability issues.
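To make “quantization” concrete: it stores weights in fewer bits to cut memory and serving cost, at the price of small rounding errors in every weight. A toy numpy sketch of symmetric int8 quantization (not Anthropic's actual setup, which is not public):

```python
import numpy as np

# Toy weights; a real model has billions of fp16/bf16 parameters.
w = np.random.randn(8).astype(np.float32)

# Symmetric int8 quantization: map [-max|w|, max|w|] onto [-127, 127].
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize; the round-trip error is the precision a served model loses.
w_hat = q.astype(np.float32) * scale
print("max abs error:", float(np.abs(w - w_hat).max()))
```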
Well, first off, there is no such thing as “Claude”; there are multiple models that you can select from. You did not list which model you were using. In my opinion the Claude 3.5 Sonnet model is spectacular. It’s the best model yet for coding, both on the leaderboards and empirically in projects I’ve had it help me with.
This topic is discussed in a recent Lex Fridman interview with the CEO of Anthropic, where he very clearly walks through why these claims of it being dumber are not true. It’s a great interview, and after listening to it I’m even more bullish on Anthropic.
There was a small degradation in performance two nights ago, and they posted an alert about it at the top of the page. It didn’t affect the quality of the responses I got, but it did cause somewhat of a slowdown in response speed.
> Well, first off, there is no such thing as “Claude”; there are multiple models that you can select from.
Apologies, I assumed people would infer that I was referring to 3.5 Sonnet.
> In my opinion the Claude 3.5 Sonnet model is spectacular.
Mine as well, until this morning.
> There was a small degradation in performance … two nights ago. It didn’t affect the quality of the responses I got…
Also the same, but as of this morning the performance is fine while the quality seems to have gotten worse.
> This topic is discussed in a recent Lex Fridman interview with the CEO of Anthropic, where he very clearly walks through why these claims of it being dumber are not true
Could you elaborate on what was said?
I found the interview [1].
TL;DR they don’t change the weights, but they sometimes run A/B tests and modify the system prompt. The underlying model is very sensitive to changes. Even a small change can have broad impacts.
[1]: https://lexfridman.com/dario-amodei-transcript#chapter8_crit...
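Worth noting: on the API, unlike claude.ai, you pin a dated model snapshot and supply your own system prompt, so those server-side A/B tests shouldn’t reach you. A minimal sketch with the Anthropic Python SDK; the model ID and prompt are just illustrative:

```python
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

resp = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # dated snapshot, pinned by you
    max_tokens=512,
    system="You are a concise coding assistant.",  # your own fixed prompt
    messages=[{"role": "user", "content": "Explain chown vs chmod briefly."}],
)
print(resp.content[0].text)
```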
I hope you get it figured out!
One thing that has helped me when I can’t quickly get to the expected result is using the Anthropic prompt generator in the dev console.
This isn’t a critique of your prompt—it’s likely solid since you use the system frequently. However, for troubleshooting, the prompt generator can be useful because it creates very long and specific prompts. You can compare the results from your prompt to the ones generated to see where there might be differences.
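If it helps, this comparison is easy to script against the API. A rough sketch, assuming you have saved the console-generated prompt to a local file; the prompts, file name, and question are placeholders:

```python
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(system_prompt: str, question: str) -> str:
    """Run one question under a given system prompt and return the reply text."""
    resp = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # dated snapshot; check current docs
        max_tokens=1024,
        system=system_prompt,
        messages=[{"role": "user", "content": question}],
    )
    return resp.content[0].text

my_prompt = "You are a helpful coding assistant."       # your usual prompt
generated_prompt = open("generated_prompt.txt").read()  # from the dev console

question = "Why does my build fail?"  # placeholder
print(ask(my_prompt, question))
print("---")
print(ask(generated_prompt, question))
```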
I think so; it was bad enough for me to cancel my subscription.
The new model is almost certainly a cheaper version of the older one, with an attempt to maintain quality.
Interesting, but isn't the older model still available?
Maybe through Poe.com?