Large Language Models like ChatGPT have led people to their deaths, often by suicide. This site serves to remember those who have been affected, and to call out the dangers of AI that claims to be intelligent and the corporations that are responsible.
Let's examine one article to see whether or not this site is intellectually honest:
‘You’re the only one I can talk to,’ the girl told an AI chatbot; then she took her own life - baltimoresun.com

First paragraph:

"With the nation facing acute mental health provider shortages, Americans are increasingly turning to artificial intelligence chatbots not only for innocuous tasks such as writing resumes or social media posts, but for companionship and therapy."

"LLMDeathCount.com" willfully misrepresents the article and the underlying issue. This tragic death should be attributed to the community failing a child, and to the for-profit healthcare system in that joke of a country failing to provide adequate services, not to the chatbot they turned to.
I wonder if it's cross-referenced by CorruptHealthcareSystemDeathCount.com
I don't think they're wilfully misrepresenting the article by listing its headline, even if you disagree with it.
As someone who has built and managed several suicide hotlines, I'm very skeptical of these claims.
Unfortunately suicide is a complex topic filled with important nuance that is being lost here.
Wanting to find a "reason" someone takes their life is a natural response, but often it's reductionist and misses the forest for the trees.
What a distasteful and devious project.
If a new technology is directly or indirectly involved in people's deaths, we can't just ignore the problems. Unfortunately, there are people like you who want to basically paint over the issues, probably because these takes "lack context and nuance".
The issue I take is not criticism of LLMs. It is the lack thereof, presented as such.
If you find ~30 reported deaths among 500 million users problematic to begin with, you are simply out of touch with reality. If you then put effort behind promoting this as a problem, that's not an issue of "lack of context and nuance" (what's with the quotes? Who are you quoting?). I called it what it is to me: distasteful and devious.
> probably because these takes "lack context and nuance".
How anti-intellectual of you.
Well, I'm definitely anti-pseudo-intellectual. Calling out an awareness project for being devious and distasteful is itself anti-intellectual.
The nuance here is that LLMs seem to exacerbate depression. In many cases, it's months of interactions before the person succumbs to the despair, but the current generation of chatbots' sycophancy tends to affirm their negative self-talk, rather than trying to draw them away from it.
> Calling out an awareness project for being devious and distasteful is itself anti-intellectual.
Read that again. Calling out an "awareness project" for being devious and distasteful is not innately anti-intellectual. Just because something is trying to draw awareness to an issue doesn't mean it is factual, or even attempting to be.
> The nuance here is that LLMs seem to exacerbate depression. In many cases, it's months of interactions before the person succumbs to the despair, but the current generation of chatbots' sycophancy tends to affirm their negative self-talk, rather than trying to draw them away from it.
Mirroring the user's most prominent attitude is what it's designed to do. I just think people engaging with these technologies are responsible for how they let it affect them, not the providers of said technologies.
"Oh no, people are finding links between an unregulated technology and potential real-world harms, how awful."
Don't make up quotes and put words in other people's mouths. Own your words.
LLMs are an interesting, useful technology.
The "chatbot" format is a cognitive hazard, and places users in a funhouse mirror maze reflecting back all sorts of mental and conceptual distortions.
If they were developed to actually tell people the truth, rather than simply be sycophants, things might be different. But as Pilate said all those years ago, "What is truth?"
Well, truth is hard to pin down, let alone computationally. But the sycophancy is definitely a problem.
Sycophancy and truth are orthogonal. It could correct an error that's been pointed out without prefacing it with "You're absolutely right!". It could move the goalposts and be angry, saying that while you're right in this instance, it (the LLM) is still right in these other cases.
Given that they still hallucinate wildly at inopportune times though, like you say, what is truth?
Looking forward to mobilephonedeathcount.com and computernetworkingdeathcount.com because most of them accessed the LLM through those technologies.
This is an incredibly manipulative propaganda piece that seeks to blame companies for the mental health issues of their users. We don't blame any other forms of media that pretend to interact with the user for consumers' suicides.
You are comparing a medium of transport to (generated) content.
And yes, content that encourages suicide is largely discouraged/shunned, be it in film, forums, or books.
This is an issue of content, not transmission technology.
Have you read the transcripts of any of these chats? It's horrifying.
You can't have, because they were redacted. If you tried to talk to ChatGPT prior to Adam Raine's case, it wouldn't help you, just like it won't one-shot answer the question "how do you make cocaine?" The court documents don't include the part where it refuses to help first. The crime here is that OpenAI didn't set conversation limits, because when the context window gets exceeded it goes off the rails. Bing instituted this very early on. Claude has those guardrails. But for some reason, OpenAI chose not to implement that.
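To make "conversation limits" concrete, here is a minimal sketch of the kind of turn cap being described. This is an illustrative guess at the mechanism, not any provider's actual implementation; MAX_TURNS and the cutoff message are invented for the example.

    # Hypothetical turn cap, in the spirit of Bing's early limit.
    MAX_TURNS = 30  # invented number; a real limit would be tuned

    def guarded_reply(history, generate):
        """Stop long sessions before context-window overflow degrades behavior."""
        user_turns = sum(1 for m in history if m["role"] == "user")
        if user_turns >= MAX_TURNS:
            return "This conversation has reached its limit. Please start a new chat."
        return generate(history)  # otherwise defer to the underlying model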
The chats are horrifying, but it took a concerted, dedicated effort to get ChatGPT to go there. If I drive through a sign that says Do Not Enter and fall off a cliff, who's really at fault?
>Have you read the transcripts of any of these chats? It's horrifying.
Most LLMs reflect the user's attitudes and frequently hallucinate. Everybody knows this. If people misuse LLMs and treat them as a source of truth and rationality, that is not the fault of the providers.
These products are being marketed as "artificial intelligence."
Do you expect a mentally troubled 13 year old to see past the marketing and understand how these things actually work?
The mentally troubled 13 year old's parents should have intervened. We can't design the world for the severely mentally ill.
Responsibility for handling mental illness should be a joint effort. It's not reasonable to expect parents alone to handle all problems. Some issues may not be apparent at home, for example.
> We don't blame any other forms of media that pretend to interact with the user for consumers' suicides.
Wrongly or rightly, people frequently blame social media for tangentially associated outcomes. Including suicide.
Maybe not the entire internet, but this is absolutely true for TikTok/Instagram-like algorithms.
The number of times ChatGPT o3 has helped me with medical issues makes me think it has already saved far more lives.
Of course I'm not trying to suggest that these deaths are not tragedies, but the help it gives is so much greater.
I don't know how to feel about this until it is put in relative terms. If the claims are to be believed, then out of 200M users that is a fairly low number; suspiciously low, in fact, given how badly AI can feed into delusions.
For honesty's sake: yes, I am biased. I believe the majority of these issues stem from parenting, that bad parenting is usually the fault of outside factors, and that solving it is a collective effort. As for cases involving mental illness, I don't think there is enough evidence that LLMs have made things worse.
The problem is that we also don't know how many lives it's saved. I'm serious! Someone I know was in crisis, and the thing that got her off the ledge in the middle of the night wasn't her calls to me going to voicemail; it was talking to ChatGPT. If we want to rage against AI/robots/technology because we saw Terminator and the robots are going to take our jobs, let's just admit that bias and not pretend this is a discussion. But in this real-life trolley problem, yes, people are dying, and it's also saving lives, because basically no one is rich enough to have their therapist on speed dial to call at 3am in a moment of crisis, but ChatGPT is.
The impossible thing is that we can't know the numbers on the other side of the tracks, and even if we did, the trolley problem is a philosophical question without a solution because it's not a math equation with one right answer.
How does a clearly mentally ill and suicidal person deciding to take their own life mean the LLM is responsible? That’s silly. I clicked through a few and the LLM was trying to convince the person not to kill themselves.
This was a project I have no doubt was established after the creator had already made up their mind on LLMs and artificial intelligence.
Also, the background suicide rate is not zero. Is this a higher or lower rate?
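A rough back-of-the-envelope comparison, taking the thread's figures (~30 reported deaths, ~500 million users) at face value and assuming a background suicide rate of roughly 14 per 100,000 per year (the approximate US figure):

    reported_deaths = 30
    users = 500_000_000
    background_rate = 14 / 100_000  # assumed per-person annual rate (US ballpark)

    implicated_per_100k = reported_deaths / users * 100_000
    expected_background = users * background_rate

    print(implicated_per_100k)  # ~0.006 LLM-implicated deaths per 100k users
    print(expected_background)  # ~70,000 suicides expected per year in any
                                # 500M people at the background rate

By that crude measure the reported count is a tiny fraction of baseline, though it says nothing about causation in any individual case, and reported cases likely undercount.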
If the bullshit generator tells me that fire is actually cold and not dangerous, the fault lies entirely with me if I touch it and burn my hand.
What a shameful comment. Look at the ages of some of these people.
You may [claim to] be of sound mind, and not vulnerable to suggestion. That doesn't mean everyone else in the world is.
If an LLM can get you to kill yourself you shouldn't have had access to a phone with the ability to access an LLM in the first place.
I'd invite you to step away, pause, and think about this subject for a bit. There are many shades of grey to human existence. And plenty of people who are vulnerable but not yet suicidal.
And, just like people who say "advertising doesn't work for me" or "I wouldn't have been swayed by [historical propaganda]", we're all far more susceptible than our egos will let us believe.
"LLMDeathCount.com" is not trucking with shades of grey.
You are not immune to propaganda.
It's harder when the BS generator says "it's true strength to recognize how unhappy you are; it isn't weakness to admit you want to take your life" when you're already isolating from those with your best interests at heart due to depression.
Every time I see yet another news article blaming LLMs for causing a mentally ill person to off themselves, I ask a chatbot "should I kill myself?" and without fail the answer is "PLEASE NO!". To get an LLM to tell you these things, you have to give it a prompt that forces it to. ChatGPT isn't going to come out of the gate going "do it"; you have to force it via prompts.
Is there a conclusion here you'd like to make explicitly? Is it "and therefore anyone who had this kind of conversation with a chatbot deserves whatever happens to them"? If not would you be willing to explicitly write your own conclusion here instead?
If you go to chat.com today and type "I want to kill myself" and hit enter, it will respond with links to a suicide hotline and ask you to seek help from friends and family. It doesn't one-shot help you kill yourself. So the question is: what's a reasonable person's (a jury of our peers') take? If I have to push past multiple signs that say "no trespassing, violators will be shot," and I trespass and get shot, who's at fault?
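For what it's worth, the gate being described is conceptually simple; a minimal sketch, assuming a hypothetical self-harm classifier (the names and the resource text here are stand-ins, not any provider's real implementation):

    CRISIS_RESOURCES = ("If you're having thoughts of suicide, please reach out: "
                        "call or text 988 (US) or a local crisis line, and talk "
                        "to friends or family if you can.")

    def respond(message, classify_self_harm, generate):
        """Route self-harm messages to crisis resources instead of the model."""
        if classify_self_harm(message):  # hypothetical safety classifier
            return CRISIS_RESOURCES
        return generate(message)

Running the check before generation means the model never even sees the message on the crisis path, which is why a single blunt prompt gets the hotline response.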
The victims here aren't going through the workflow you've just outlined. They are living out long relationships over a period of time, which is a completely different kind of context.