It's not "genuine" when they say that every question is a "great question" and every thought is a "deep and profound observation."
As someone who actually likes to explore ideas and steal man myself on these chats, it's especially obnoxious because those types of comments do no favors in guiding you down good paths with subjects that you may be working on and learning.
Of course the average user likes getting their ego stroked. Looks like OpenAI will embrace the dopamine pumping style that pervades the digital realm.
I mean, it’s not genuine regardless of how often or little it’s said because it is, literally and by definition, artificial praise. Which is harmful in any quantity.
What happened with GPT-5 is that the product changed abruptly and significantly.
I don’t think most people are looking to use ChatGPT as a virtual friend, but overnight the product changed from having a very friendly (yes perhaps almost too friendly) personality to being terse.
If the product was always like that or it slowly evolved into that it wouldn’t be a big deal.
I really don't think this is a good idea. All the negative comments seem to have been from people who almost treated 4o as a friend rather than a tool. I don't think encouraging that direction is good in any way.
I took three 45m sessions of user training from OpenAI prior to the GPT-5 switcheroo. I know when to switch models. I know know to invoke Deep Research mode. I want my GPT-4 stuff back.
Looking at the reactions to 4o being removed on reddit was... Sobering. The reason they toned it down was because they claimed the sycophantic behavior and the attachment some were growing weren't healthy. It was pathetic to see them not stand their ground when you at the same time see people develop these super unhealthy relationship to text generators.
The alternative was likely that these people would move to platforms that actively prey on this instead. Imo it's good that they get people onto the newer model and help them bridge the way to healthier conversations. Though realistically, these people should just be using the personality tools and the default should be robotic so new people don't join the problem.
Asimov's Zeroth Law of Robotics: "A robot may not harm humanity, or by inaction, allow humanity to come to harm."
This is an addition to the other three laws embedded in positronic brains:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
To me the zeroth law echoes the paternalism built into LLMs, where they take on the role of shepherd rather than tool.
The other day I asked one a question, and didn't get an answer, but did get a lecture about how misleading the answer could be. I really don't want my encyclopedia to have an opinion about which facts I shouldn't know.
> “…small, genuine touches…not flattery. Internal tests show no rise in sycophancy…”
While I’ve not experienced this new UX yet, I appreciate what the marketing team did in this tweet—-careful wording that adds to the Framing of artificial personalities (small moves, honest[1], not flattering, not sycophantic).
I’m not using ChatGPT much, but I have used it as a study aid while reading a book on JS/React. When I was confused about something or encountered a gap in my understanding of the text I noticed the first few words from my Chat were doing work to tell me if I was on point, or in the realm, or set me up for correction (even if I’m “on point”, I do _continue reading_). I think of these small moves like _map orientation_. You can’t effectively use a map on the ground until you align the map to the territory. Do you see?
I encounter artificial phone personalities and human CSRs frequently at work, and their sycophantic scripts enrage me because I’m trying to get something done—pay a bill, request human support. Adding emotional queues are either slowing me down or meant to manipulate my emotional reactions (generating the opposite reaction).
[1]: one caveat, I do have a small problem with their use of the word “genuine”. I don’t believe artificial personality can possess “authenticity”, which is associated meaning of genuine. I don’t much like “honesty” either, but it’s closer to the point. I do appreciate a cue to how correct was my input.
What so it’s all fake and some sweat pants wearing software dev WFH can just make it friendlier. I am devastated I thought it was real. Paying for it makes me feel dirty
lol everyone saw this coming. People just want to be told that they are geniuses, all their ideas are great, they are funny, etc. It's literally a yes-man.
I have spilled a lot of ink about my thoughts on the vast majority of “AI ethics” work done at frontier LLM labs already, but indulge me as I do it again.
While some consideration for extreme scenarios can have merit, I don't think the industry is paying attention to the most pressing issues of our time. Especially this early and given (admittedly my personal) doubts that LLMs could ever attain what one could consider intelligence, the focus on doomsday scenarios and preventing models outputting information that has been freely accessible on the internet for decades is simply not the best use of our limited attention. These topics take up an obscene percentage in system cards, public reporting, and discussions on regulation by thought leaders, partly because this does benefit companies by increasing investment ("we are so close to Super-Mega-Hyper-AGI that we have to keep the LLM from nuking us, if you invest with us we'll crack that in a trifle"[0]) and keeping regulators from focusing on more immediate concerns that could affect profits.
Current day ethical issues like copyright owner compensation, misinformation at scale, and how these tools affect people’s psyche do not get nearly enough focus. We have already had multiple people die [1] due to improper guardrails. We appear to have people isolating themselves by replacing (or no longer seeking) companionship with LLM output. And we have seen attempts to pass off model generated content as real in order to influence political reporting [2]. These are real issues we see right now, yet the industry seems unwilling to fully wrestle with how we could address them.
In this context, I actually saw GPT-5 as a step in the right direction. I had (naive, I admit) hopes this signaled OpenAI shifting toward tangible, current day concerns, complementing Anthropic’s more grounded publications that put an emphasis on real world tasks a professional user may encounter (Heck, their Claude 4 system card looked into LLM tool calls when asked to be bold and making value judgements which lead to them being criticized for "snitching", when multiple models appear to do the same [4]).
Researchers putting some focus on down the lime issues is fine. I am not against that, maybe it'll be of great value in a hypothetical future, but “super alignment” and terminator level scenarios mustn't be the only things considered. GPT-5, again, seemed to be designed in a way that considered users mental health and the issues that arise from overly agreeable output better than most, even though it wasn't reported as a focus for them. I found that a good step.
Then again, my experience with GPT-5 and Horizon Alpha Beta seemed different from most either way. The latter did unimaginably worse in my personal testing, appallingly I maintain, while the former was far more impressive to me than what the common sentiment seems to be, especially in dealing with extensive context, handling slight (intentional) tool call changes that weren't previously provided to the model and longer term task coherence. Regardless of raw performance, GPT-5 being closer to o3 or o4 than to GPT-4o in subjective agreeableness seemed like a good development for reducing some of the harm these models may cause to certain susceptible users psyches, especially if this development continues. We have already seen a subset of users largely or even entirely seizing to seek out human companionship, which is likely to affect them in not yet fully understood ways in the long term. As models advance this may expand to an ever increasing fraction of the user base and in my opinion needs to be of concern to the entire industry.
If OpenAI now walks this back, I will be severely disappointed. I would also be surprised if there weren't precise internal insights about which users are most impacted and how, similar to how cigarette and gambling companies have long known who drives profit and at what cost, while fighting regulation. If this continues and we see more people turning to LLMs for companionship whilst isolating themselves, I could see a similar trajectory in a few decades, or given the pace, in just a few years. Essentially, a lot of people suffering in the future due to a lack of regulatory action in the present.
Then again, even if all frontier LLM labs take a moral stance on this, there may be others who fill that demand and, of course, local models are always an option. So maybe the impact of this technology on certain users psyche has already become a future public healthcare expenditure we cannot prevent, which would be very depressing indeed.
I want to add that I wouldn't be surprised if a large number of researchers didn't argue for similar behind closed doors, but the conversation is unfortunately dominated by a mix of annoyingly loud, yet not very grounded in reality [4] folks, alongside a group of investors and company leaders who keep using the former to, as mentioned, justify hype-bubble valuations and draw attention away from actually impactful regulation.
It's not "genuine" when they say that every question is a "great question" and every thought is a "deep and profound observation."
As someone who actually likes to explore ideas and steel-man myself in these chats, it's especially obnoxious, because that kind of comment does nothing to guide you down good paths in the subjects you're working on and learning.
Of course the average user likes getting their ego stroked. Looks like OpenAI will embrace the dopamine-pumping style that pervades the digital realm.
I've had fun telling all these LLM's to act like Linus Torvalds and tear me down when I talk to them. Surprisingly effective
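If anyone wants to try the same thing over the API rather than the ChatGPT UI, a system prompt gets you there. A minimal sketch with the official OpenAI Python SDK; the persona wording and model name below are just my own placeholders:

    # Minimal sketch: a blunt "grumpy kernel maintainer" persona via a system prompt.
    # Persona text and model name are placeholders, not anything official.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PERSONA = (
        "You are a brutally blunt code reviewer in the style of a famously grumpy "
        "kernel maintainer. No praise, no apologies. Point out every flaw directly."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": "Review this: for i in range(len(xs)): print(xs[i])"},
        ],
    )
    print(response.choices[0].message.content)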
YOUR RESPONSE WAS LATE AND YOU MADE THE WORLD WORSE.
AND THIS CODE IS GARBAGE!
Did it ever tell you that you should be retroactively aborted?
Thanks, I was going to tell Claude to keep its responses minimal, but perhaps tearing apart my ideas may be better xD
I mean, it’s not genuine regardless of how often or little it’s said because it is, literally and by definition, artificial praise. Which is harmful in any quantity.
> You'll notice small, genuine touches like “Good question” or “Great start,” not flattery.
They've redefined "flattery". And "genuine".
OpenAI really does not want people using GPT-4o. The money presumably saved from GPT-5's routing must be very compelling.
I don't think its entirely about the money. A lot of people just don't understand that you can change models.
My uncle, for example, was using it frequently for some Excel VBScript and had no idea what o4-mini or o3 were.
there appear to be two major emerging use cases/markets for LLMs:
- synthetic friend
- a tool that happens to be much faster than google/ctrl-f/man-pages/poking around github
perhaps offer GPT-5-worker and GPT-5-friend?
Right, it seems like these two use cases are rapidly diverging.
- a tool that happens to be much slower than google/ctrl-f/man-pages/poking around github
I don’t really think this is the gist of it.
What happened with GPT-5 is that the product changed abruptly and significantly.
I don’t think most people are looking to use ChatGPT as a virtual friend, but overnight the product changed from having a very friendly (yes perhaps almost too friendly) personality to being terse.
If the product was always like that or it slowly evolved into that it wouldn’t be a big deal.
> What happened with GPT-5 is that the product changed abruptly and significantly.
And also the blazing fast deprecation of all other models in ChatGPT when 5 was announced.
Make it stop saying "Nice - " at the start of every response, that's annoying.
YMMV. I asked it for a list of something and it responded "I'm not in the habit of doing your homework, but here's a compact list[...]"
Does passive-aggression count as sycophancy?
I really don't think this is a good idea. All the negative comments seem to have been from people who almost treated 4o as a friend rather than a tool. I don't think encouraging that direction is good in any way.
I took three 45-minute sessions of user training from OpenAI prior to the GPT-5 switcheroo. I know when to switch models. I know how to invoke Deep Research mode. I want my GPT-4 stuff back.
Friendlier doesn't necessarily mean gassing you up. This is a very narrow interpretation of the complaints.
And here was me thinking I was asking more insightful questions today than I was yesterday.
Looking at the reactions to 4o being removed on reddit was... Sobering. They toned it down because, they claimed, the sycophantic behavior and the attachment some users were developing weren't healthy. It was pathetic to see them not stand their ground when, at the same time, you see people developing these super unhealthy relationships with text generators.
The alternative was likely that these people would move to platforms that actively prey on this instead. Imo it's good that they get people onto the newer model and help them bridge the way to healthier conversations. Though realistically, these people should just be using the personality tools and the default should be robotic so new people don't join the problem.
It pays the bills.
I don’t think it has anything to do with unhealthy attachment.
It has to do with an abrupt product change that was too different from the previous thing that everyone liked.
I'd like a slider from sycophant to asshole please. And a checkbox to disable the zeroth law.
Sliders on all forms of false platitudes, so I can weld them to zero.
What does zeroth law mean in this context?
Asimov's Zeroth Law of Robotics: "A robot may not harm humanity, or by inaction, allow humanity to come to harm."
This is an addition to the other three laws embedded in positronic brains:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
To me the zeroth law echoes the paternalism built into LLMs, where they take on the role of shepherd rather than tool.
The other day I asked one a question, and didn't get an answer, but did get a lecture about how misleading the answer could be. I really don't want my encyclopedia to have an opinion about which facts I shouldn't know.
The most annoyingly obsequious setting should be "Bubsy".
https://m.youtube.com/watch?v=khciDV8XvpY&t=10m0s
“Bring it on down to 75, please.”
https://m.youtube.com/watch?v=p3PfKf0ndik
yeah I set "never apologize" in the custom instructions because I couldn't stand it.
> “…small, genuine touches…not flattery. Internal tests show no rise in sycophancy…”
While I've not experienced this new UX yet, I appreciate what the marketing team did in this tweet: careful wording that adds to the framing of artificial personalities (small moves, honest [1], not flattering, not sycophantic).
I'm not using ChatGPT much, but I have used it as a study aid while reading a book on JS/React. When I was confused about something or hit a gap in my understanding of the text, I noticed the first few words from my chat were doing work to tell me whether I was on point, or in the realm, or to set me up for correction (even if I'm "on point", I do _continue reading_). I think of these small moves like _map orientation_. You can't effectively use a map on the ground until you align the map to the territory. Do you see?
I encounter artificial phone personalities and human CSRs frequently at work, and their sycophantic scripts enrage me because I'm trying to get something done: pay a bill, request human support. Adding emotional cues either slows me down or is meant to manipulate my emotional reactions (and generates the opposite reaction).
[1]: one caveat: I do have a small problem with their use of the word "genuine". I don't believe an artificial personality can possess "authenticity", which is the associated meaning of "genuine". I don't much like "honesty" either, but it's closer to the point. I do appreciate a cue as to how correct my input was.
This is the opposite of what humanity needs.
We do NOT need to humanize AI.
What, so it's all fake and some sweatpants-wearing WFH software dev can just make it friendlier? I am devastated, I thought it was real. Paying for it makes me feel dirty.
lol everyone saw this coming. People just want to be told that they are geniuses, all their ideas are great, they are funny, etc. It's literally a yes-man.
I have spilled a lot of ink about my thoughts on the vast majority of “AI ethics” work done at frontier LLM labs already, but indulge me as I do it again.
While some consideration for extreme scenarios can have merit, I don't think the industry is paying attention to the most pressing issues of our time. Especially this early, and given (admittedly my personal) doubts that LLMs could ever attain what one could consider intelligence, the focus on doomsday scenarios and on preventing models from outputting information that has been freely accessible on the internet for decades is simply not the best use of our limited attention. These topics take up an obscene share of system cards, public reporting, and discussions on regulation by thought leaders, partly because this benefits companies by increasing investment ("we are so close to Super-Mega-Hyper-AGI that we have to keep the LLM from nuking us, if you invest with us we'll crack that in a trifle" [0]) and keeping regulators from focusing on more immediate concerns that could affect profits.
Current day ethical issues like copyright owner compensation, misinformation at scale, and how these tools affect people’s psyche do not get nearly enough focus. We have already had multiple people die [1] due to improper guardrails. We appear to have people isolating themselves by replacing (or no longer seeking) companionship with LLM output. And we have seen attempts to pass off model generated content as real in order to influence political reporting [2]. These are real issues we see right now, yet the industry seems unwilling to fully wrestle with how we could address them.
In this context, I actually saw GPT-5 as a step in the right direction. I had (naive, I admit) hopes this signaled OpenAI shifting toward tangible, current day concerns, complementing Anthropic's more grounded publications that put an emphasis on real world tasks a professional user may encounter (heck, their Claude 4 system card looked into LLM tool calls when the model was asked to be bold and make value judgements, which led to them being criticized for "snitching", even though multiple models appear to do the same [3]).
Researchers putting some focus on down-the-line issues is fine. I am not against that, maybe it'll be of great value in a hypothetical future, but "super alignment" and Terminator-level scenarios mustn't be the only things considered. GPT-5, again, seemed to be designed in a way that considered users' mental health and the issues that arise from overly agreeable output better than most, even though it wasn't reported as a focus for them. I found that a good step.
Then again, my experience with GPT-5 and Horizon Alpha/Beta seemed different from most either way. The latter did unimaginably worse in my personal testing, appallingly so, I maintain, while the former was far more impressive to me than the common sentiment suggests, especially in dealing with extensive context, handling slight (intentional) tool call changes that weren't previously provided to the model, and longer-term task coherence. Regardless of raw performance, GPT-5 being closer to o3 or o4 than to GPT-4o in subjective agreeableness seemed like a good development for reducing some of the harm these models may cause to certain susceptible users' psyches, especially if this development continues. We have already seen a subset of users largely or even entirely ceasing to seek out human companionship, which is likely to affect them in not yet fully understood ways in the long term. As models advance this may expand to an ever increasing fraction of the user base and, in my opinion, needs to be of concern to the entire industry.
If OpenAI now walks this back, I will be severely disappointed. I would also be surprised if there weren't precise internal insights about which users are most impacted and how, similar to how cigarette and gambling companies have long known who drives profit and at what cost, while fighting regulation. If this continues and we see more people turning to LLMs for companionship whilst isolating themselves, I could see a similar trajectory in a few decades, or given the pace, in just a few years. Essentially, a lot of people suffering in the future due to a lack of regulatory action in the present.
Then again, even if all frontier LLM labs take a moral stance on this, there may be others who fill that demand and, of course, local models are always an option. So maybe the impact of this technology on certain users' psyches has already become a future public healthcare expenditure we cannot prevent, which would be very depressing indeed.
I want to add that I wouldn't be surprised if a large number of researchers argue for something similar behind closed doors, but the conversation is unfortunately dominated by a mix of annoyingly loud, yet not very grounded in reality [4], folks, alongside a group of investors and company leaders who keep using the former to, as mentioned, justify hype-bubble valuations and draw attention away from actually impactful regulation.
[0] https://www.windowscentral.com/artificial-intelligence/opena...
[1] https://www.reuters.com/investigates/special-report/meta-ai-... and https://www.nbcwashington.com/investigations/moms-lawsuit-bl...
[2] https://www.theguardian.com/us-news/2025/aug/07/chris-cuomo-... and https://www.reuters.com/fact-check/video-does-not-show-ukrai...
[3] https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686... and https://snitchbench.t3.gg
[4] https://xkcd.com/1450/ and https://en.wikipedia.org/wiki/Pascal%27s_wager
"Kinder, gentler machine gun hand"
No seriously