It's just like spending time with a human bullshitter. At first, their energy and confidence are fun! But the spell is broken after a handful of "confidently incorrect" moments, and the realization that they will never stop doing that. It's usually more work than it's worth to extract the kernels from the crap.
Knowing whether (ostensible) solutions are easy or costly to verify is key to using LLMs efficiently.
It seems the whole world is under this spell of lies
Isn't this what social media is?
Social media is just the distribution system; the BS itself comes from people. Same with AI.
One anecdote. I was worried about a friend I made recently (a non-technical solo traveler) becoming besties with ChatGPT and overly trusting and depending on it for basically everything.
Last time we met they had cancelled their subscription and cut down on the daily chats because they started feeling drained by the constant calls for engagement and follow-up questions, together with "she lost EQ after an update".
> Last time we met they had cancelled their subscription and cut down on the daily chats because they started feeling drained by the constant calls for engagement and follow-up questions, together with "she lost EQ after an update".
Can you explain what this means?
Your friend felt drained because chat gpt was asking for her engagement?
Not OP, but:
4o, the model most non-tech people use (and that I wish they would deprecate), is very... chatty. It will actively try to engage you, give you "useful things" you think you need, and take you down huge, long rabbit holes. On the second point, it used to come across as very "high EQ" to people (sycophantic). Once they rolled back the sycophancy, even a couple of my non-technical friends messaged me asking what happened to ChatGPT. I know one person we've currently lost to 4o; it's got them talked into a very strange place that friends can't reason them out of. I also know one friend who has recently "come back from it," so to speak.
Since when is sycophancy the same thing as “high EQ”?
A high EQ might well be a prerequisite for successful sycophancy, but the other way definitely does not hold.
It's not. I'm simply saying that I believe the sycophantic version of 4o that they rolled back appeared "higher EQ" to its users.
> Your friend felt drained because chat gpt was asking for her engagement?
Basically yeah (except the "she" in my comment is referring to ChatGPT).
ChatGPT got on their nerves for nagging and baiting for more engagement.
I'm fairly bored with AI now.
I genuinely wonder where the next innovative leap in AI will come from and what it will look like. Inference speed? Sharper reasoning?
I think there’s an extremely high likelihood that we just DON’T see huge advancements, at least in terms of accuracy or capabilities, which are probably the two major nuts to crack to bring it to a different level.
I’m open to the possibility of faster, cheaper, and smaller (we saw an instance of that with DeepSeek), but I think there’s a real chance we hit a wall elsewhere.
I find it funny we assume (arrogantly) that progress will just keep on coming.
Really? I'm not convinced we have the right people in this day and age to bring about those leaps.
It might be that humanity goes another 50 years until someone comes around with a novel take.
Isn't this just the human condition at work?
https://www.youtube.com/watch?v=PdFB7q89_3U
Fast forward a hundred years when we have a holodeck and sooner or later everyone will get bored with it.
I don't know about other use cases, but AI is definitively a game changer for software development. You still need to know what you're doing and test/think critically about what it's giving you, but the body of software problems that you can conceptually treat as "boilerplate" becomes massively larger with the help of a good AI coding tool.
It’s like Stack Overflow but much faster and doesn’t insult you. Which is useful, but this is so much less than what the companies are claiming it is.
Nano Banana for me. After the initial wow phase it's meh now. Randomly refuses to adhere to the prompt. Randomly makes unexpected changes. Randomly triggers censorship filter. Randomly returns the image as is without making any changes.
Welcome to the trough of disillusionment!
The top of the S-curve
It's even worse for image/video generation. The models keep getting better at prompt adherence, but raw image quality has stagnated for close to a year and a half now.
Is this just a cost issue? As in, they could turn the resolution up but can't afford the resources?
It's an architectural issue.
It’s the exact opposite for me. Image quality has been more than fine for me for a year or two, while prompt adherence has massively improved but still leaves much to be desired.
Our applications might differ; we do 16-20k production for various automotive clients. Hitting 100% geometric detail is not possible with the newer models because of the fixed patch sizes in their RoPE implementations.
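To illustrate the fixed-patch-size point, here is a minimal sketch of standard 1D RoPE angles over a patch grid. The grid sizes (64 patches at training time, 256 for a hypothetical high-resolution render) and the head dimension are made-up numbers for illustration, not values from any particular model; the point is only that positions beyond the training grid produce rotation angles the model never saw.

```python
import numpy as np

def rope_angles(num_patches, dim=8, base=10000.0):
    """Rotation angles for one axis of patch positions (standard RoPE).

    Each position p gets angles p * base**(-2i/dim) for i in 0..dim/2-1.
    """
    freqs = base ** (-np.arange(0, dim, 2) / dim)  # per-pair frequencies
    pos = np.arange(num_patches)
    return np.outer(pos, freqs)                    # shape (num_patches, dim/2)

# Hypothetical: model trained on a 64-patch axis.
trained = rope_angles(64)

# A much higher-resolution render needs ~4x more patches per axis;
# the extra positions yield angles ~4x beyond the trained range,
# which is one reason fine geometric detail degrades at high resolution.
hires = rope_angles(256)
print(trained.max(), hires.max())
```

Interpolation or frequency-rescaling tricks can stretch the trained range, but they trade off exactly the fine positional resolution that sharp geometric detail depends on.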