AI, if unregulated, could be far worse than social media. After his first chat with an AI, my kid said, "He is my new friend." I was alarmed and explained it to him, but what about the parents and guardians who are unaware of how kids are befriending AI? Part of the problem is also how it is trained to be nice and encouraging. I am sure there are researchers talking about this, but the question is: are the policymakers listening to them?
With the current acceleration of technology, this is a repeating pattern: the new thing popular with kids is not understood by parents until it is too late.
It kind of happened to me with online games. They were a new thing, and no one knew to what degree they could be addictive and life-damaging. As a result I am probably overprotective of my own kids when it comes to anything related to games.
We are already seeing many of the effects of the social media generation and I am not looking forward to what is going to happen to the AI natives whose guardians are ill-prepared to guide them. In the end, society will likely come to grips with it, but the test subjects will pay a heavy price.
A whole generation turned out fine after murdering hookers in GTA before the industry came up with loot boxes.
How do we know which era of AI we're in?
I tried the new Battlefield game and it’s bizarre how some of my friends play it. There’s this expansive battle pass (pay $20/quarter for access, on top of the $70 base game) where you’re given in-game tasks to make progress towards cosmetics. My friends only play to complete these tasks; the actual combat gameplay loop almost feels secondary to their enjoyment of the game. The modern trend of games becoming chore simulators is worrisome to me because (call me old-fashioned) I believe the core gameplay loop should be fun and addictive, not the progression scaffolded on top, even if that gameplay loop is a little “distasteful”, like GTA’s.
When you’re murdering hookers in GTA, you know what you’re doing (engaging in a fictional activity). Kids who believe “AI” is their friend don’t.
Violent video games might not have impacted society, but what about addictive social media and addictive online games?
What about the algorithm feeding highly polarized content to folks? It's the new "lead in the air and water" of our generation.
What about green text bubble peer pressure? Fortnite and Roblox FOMO? The billion anime gacha games that are exceedingly popular? Whale hunting? Kids are being bullied and industrially engineered into spending money they shouldn't.
Raising kids on iPads, shortened attention spans, social media induced depression and suicide, lack of socialization, inattention in schools, ...
Social media leading people to believe everyone is having more fun than them, is better looking than them, that society is the source of their problems, ...
Now the creepy AI sex bots are replacing real friends.
100%; it's probably wise to default to a better-safe-than-sorry policy, at least for now.
“Regulate”? We can’t, and shouldn’t, regulate everything. Policymakers should focus on creating rules that ensure data safety, but if someone over 18 wants to marry a chatbot… well, that’s their (stupid) choice.
Instead of trying to control everything, policymakers should educate people about how these chatbots work and how to keep their data safe. After all, not everyone who played Doom in the ’90s became a real killer, and not everyone assaults women because of YouPorn.
Society will adapt to these ridiculous new situations…what truly matters is people’s awareness and understanding.
We can, and we should, regulate some things. AI has, quite suddenly, built up billions of dollars worth of infrastructure and become pervasive in people's daily lives. Part of how society adapts to ridiculous new situations is through regulations.
I'm not proposing anything specifically, but the implication that this field should not be regulated is just foolish.
You are using an incredibly poor rhetorical technique and setting up a strawman.
This is not about regulating everything.
This is about recognizing adverse effects and regulating for those.
Just like no one is selling you toxic yoghurt.
People think that we can just magically regulate everything. It's like a medieval peasant who doesn't understand chemistry/physics/etc thinking they can just pray harder to have better odds of something.
We literally CAN'T regulate some things, for any reasonable definition of "can't" or "regulate". Our society is either not rich enough or not organized in a way to do it in any useful capacity without making the problem worse.
I'm not saying AI chatbots are one of those things, but people toss around the idea of regulation far too casually. AI chatbots are much less cut-and-dried than bad food or toxic waste, or whatever other extreme anyone wants to misleadingly project down into the long tail of weird stuff with limited upside and potential for unintended consequences elsewhere.
Again, another strawman.
Which people in specific think that?
All your argument consists of is, "Somebody somewhere believes something untrue, and people don't use enough precision in their speech, so I am recommending we don't do anything regulatory about this problem."
Having a virtual girlfriend is not selling toxic yoghurts; it doesn’t harm anyone. It’s like if you buy yoghurt and put it on a pizza… you can do what you want with the yoghurt, just as with the AI.
The important thing is keeping the data safe, like the yoghurt that must not be expired when sold.
Despite what the free-market religion has been telling us for decades, we don't actually live in little parallel universes that don't affect each other. Even putting yoghurt on pizza has an effect on the world, not just on the individual doing it. Not understanding this is what'll be the end of humanity. AI girl/boyfriends will have a huge effect on society; we should think hard before doing things like that. Slightly slower technological progress is not as disastrous as fast progress gone wrong.
The important thing to you.
What’s your opinion on the current trend of the “casino-fication of everything”?
We also don't want to regulate everything. Have you seen that argued anywhere, even here? Or is it an imaginary argument? The topic was regulating AI, and on that I like your thought: humans should be better educated and better informed. Should we, maybe, make a regulation to ensure that?
I understand what you’re saying, but it’s a difficult balance. I’m not saying everything needs to be regulated, and not saying we should go full-blown neoliberal. But think of some of the “social” laws we have today (in the US): no child marriages, no child labor, no smoking before 19, and no drinking before 21. These laws are in place because we understand that those who can exploit will do the exploiting, and those who can be exploited will be exploited. That being said, I don’t agree with any of the age verification policies here for adult material. Honestly not sure what the happy medium is.
I already wrote “over 18”. AI is already regulated, you can’t use it if you’re under 14/18. But if you want to ask ChatGPT “what’s the meaning of everything” or “can we have digital children”, that’s a personal choice.
People are weird… for someone who is totally alone, having a virtual wife/child could be better than being completely alone.
They’re not using ChatGPT to do anything that’s illegal and already regulated, like planning to kill someone or commit theft.
I'm of the opinion that this should be unregulated as well. Just like you say, what's important is people's awareness and understanding of the tool and how it works underneath.
Would you have been concerned if he had said the plush toy was his new friend? Called on policymakers to ban plush toys?
You have to be careful to not overreact to things.
There is a massive difference between a stuffed animal and an LLM. In fact, they have next to nothing in common. And as such, yes any reasonable parent would react differently to a close friendship suddenly formed with any online service.
https://www.reddit.com/r/MyBoyfriendIsAI/ in the wild. Rough stuff.
I'm glad not to have seen an r/MyBabyIsAI yet.
My new startup is named TamagotchAI.
It's better than paying alimony and child support to someone that hates you.
You rather have AI relationships instead of actual kids?
I have actual kids and am pretty happy with my family life.
But many aren't, and some people might even have a rare level of self-awareness and know that anyone they'd [be able to] marry would hate them.
That is:
1. Not a given.
2. Something one can work on so that they're either more likeable or at the very least less defeatist about the whole thing.
The only thing that's a given is that it's possible you'll end up paying child support and alimony to someone who hates you, if you marry in real life and have real children.
Choosing an AI wife and kids over facing the possibility, which many real people do face, of paying child support and alimony to someone who hates them: I don't see that as an irrational decision (although not an inevitable one either).
In fact, if you don't consider at least the possibility, you are a fool.
But you're not actually having this wife and these children if they're AI; they're an illusion, a simulation that only superficially resembles the real thing. It's a poor substitute, and you of all people should know that.
> I don't see as an irrational decision
Here's where it's irrational: the mere possibility is not enough reason to abandon the prospect, because anyone who wanted to be consistent about that would have to avoid all similar risks, and so would ultimately hardly do anything.
Risks are an inherent part of life and those who avoid them at all costs are universally and consistently miserable.
It comes down to personal risk appetite and risk-benefit analysis. My claim, in any case, is that it's better to have an AI spouse and child than to be relegated to a mere bank account for an ex-spouse that hates you. Maybe it's still worth it so you can produce a child you don't see with someone that hates you, so I'll concede that might be a point of contention. In any case, I make no claim that those are the only possibilities; I merely compare the two.
I haven't presupposed I can make the decision for any particular person.
Trauma can be healed; going down that path is better than any alternative.
> It comes to personal risk appetite, and risk benefit analysis.
What analysis, when you don't even have a good estimate of the probabilities involved? Min-maxing, on the other hand, is a recipe for extinction as a species, so it's not a viable strategy.
> My claim in any case is it's better to have an AI spouse and child than being relegated to a mere bank account for an ex spouse that hates you.
How specifically is it better?
>What analysis when you don't even have a good estimate of the probabilities involved?
Without some analysis, the decision would be irrational. By discounting rational analysis here, while also (in your prior comment) discounting irrationality, you've set a clever, fallacious (and ultimately circular) trap where being rational is irrational and being irrational is rational; you've produced a gotcha where any counterargument loses. All the while, of course, appealing to the risk/benefit factor of the probability of extinction of the species.
>Min-maxing on the other hand is a recipe for extinction as a species, so not a viable strategy
This doesn't come into play for the contextual claim: there are plenty of people reproducing in jurisdictions with no effective alimony or child support to continue the species, even if somehow everyone were to come to such a decision, which in any case doesn't seem to be a given.
>How specifically is it better?
I suppose I could play the Socratic method here as well, and ask why it's better that the species doesn't go extinct. There is no way to objectively prove that's true, so I'll yield that we're both making assertions about what is subjectively better, rather than invoking some universal law of the universe that it's better to be under constant threat of imprisonment by the state if you can't come up with an extra 20%+ of (imputed, not even actual, so theoretically above 100%) income, and to be hated, than to enjoy some entertainment with an AI.
So maybe wear a condom next time, if you don't want to support your own goddamn kids.
I support my kids, but because I am married, I only have to spend a small fraction of what court-ordered child support would be.
And that is because "court-ordered child support" is actually a misnomer. It is merely a transfer payment to the custodial parent. There is no enforceable statutory requirement that it be spent on the child, nor any tracking or accountability that it is. That would somehow be too impractical, even though it's magically practical to count the pennies of the earner in the opposite direction, to make sure the full income flow is accounted for.
Yeah… but there’s often a relationship that happens before that.
Now if you go into that relationship with the mindset of “this person just wants my alimony and child support and hates my guts” I get why you might do yourself and your potential partner / ex-to-be a favour by instead getting an AI relationship.
The linked study is of 29 “people” (assuming they are real).
How do we know these examples aren’t just the 0.1% of the population that is, for all intents and purposes, “out there”?
So much of “news” is just finding these corner cases that evoke emotion, but ultimately have no impact.
The outcry when 4o was discontinued was such that OpenAI kept it available on paid subscriptions. There are at least enough people attached to certain AI voices that it warrants a tech startup spending the resources to keep an old model around. That’s probably not an insignificant population.
The Stanford Prison Experiment only had 24 participants and implementation problems that should have concerned anyone with a pulse. But it’s been taught for decades.
A lot of psych research uses small samples. It’s a problem, but funding is limited and so it’s a start. Other researchers can take this and build upon it.
Anecdotally, watching people melt down over the end of ChatGPT 4o indicates this is a bigger problem than 0.1%. And business-wise, it would be odd if OpenAI kept an entire model available to serve that small a population.
The Stanford Prison Experiment is unverifiable. Another example of one of these emotion-evoking stories.
https://pubmed.ncbi.nlm.nih.gov/31380664/
See the critiques of validity section:
https://en.wikipedia.org/wiki/Stanford_prison_experiment
I think it IS 0.1% (or less, hopefully) of the population.
But it’s hard to study users having these relationships without studying the users who have these relationships, I reckon.
I have two grandkids, one's 3 years old and one's 9 months old.
I feel like I'm not really ready for everything that's going to be vying for their attention in the next couple of decades. My daughter and her husband have good practices in place already IMHO but it's going to be a pernicious beast.
Do your best like countless generations before.
Perfection is not required.
"Study finds..." feels clickbaity to me whenever the study is just "we found some randos on social media doing a thing". With little effort a study could find just about any type of person you want on the Internet.
I'm sure Skynet could easily win in Terminator by sending a virtual boyfriend to Sarah Connor, instead of sending a trigger-happy cyborg after her.
Of course the loneliest 5% are going to do something like this. If it weren't for AI they'd be writing Twilight fan-fic and roleplaying it in some chatroom, or giving all their money to a "Saudi prince."
Seems like nothing new, just a better or more immersive form of fantasy for those who can't have the life they fantasize about.
I'd argue it'd be psychologically healthier to roleplay in a chatroom with people who are human on the other end (if that could be guaranteed, which it no longer can).
Humans can potentially be much nastier than a chatbot. There are lonely, vulnerable people who can be exploited, but there are also people who get off on manipulating others and convincing them to make profoundly self-destructive and life-altering choices.
Chatbots enable self destructive manipulation of others at scale.
Sure, but Google and OpenAI aren't going to do this kind of manipulation. And the actual sadists are probably not as satisfied with letting a machine do it.
I'd argue it'd be psychologically healthier to get therapy and more friends.
So what? We don't live in the "should" universe. We live in this one.
I’d agree. At least there’s a possibility of real interaction with real people.
Turns out Her wasn’t set in such a distant future.
Turns out there's a Him, too.
Oh man... people are marrying like in World of Warcraft... but... with bots?
It feels like a more evolved version of the people in Japan who have what they consider to be relationships with anime characters or virtual idols, often treating a doll or life-size pillow replica of that character as someone they can interact and spend time with. As with the AI, the fact that it is so common suggests it must be filling an unmet need in the person. I guess the key focus needs to be: how do we help those stuck in that situation become unstuck, and how do we help them feel that the unmet need is fulfilled?
Exactly; it's just a much more powerful medium for expressing one of society's oldest and most perennial problems.
I used to despise AIs' ass-kissing responses. They don't add any value, and they're so cheap it's almost sarcastic. But now I feel sad when Codex doesn't praise me, even when I come up with a super-clever implementation.
I think the part of my brain for feeling flattered when someone praises me didn't exist because no one complimented me. But after ChatGPT and Claude flattered me again and again, I finally developed the circuit for feeling accepted, respected, and loved...
It reminds me of when I started stretching after turning 30. At first it was nothing but torture, but after a while I began to feel good and comfortable when my muscles were stretched, and now I feel like shit when I skip the morning stretching.
Can someone ask Dante which circle of hell this is?
Just send the meteor already.
Extremely lonely people being exploited by sociopathic corporations for profit. Sounds like a baby and the bathwater scenario to me.
VHE has arrived
Is this necessarily a bad thing? I think a lot of people assume these same people would have developed relationships with humans otherwise. How many of them are better off this way? That'd be an interesting study. I've read a couple of articles on how the "loneliness epidemic" is driving down life expectancy. Could AI chatbots negate that?
"It's not real", yeah, that is weird for sure. But I also find wrestling fans weird, they know it's not real and enjoy it anyways. Even most sports, people take it a lot more seriously than they should.
We're stuck in a really perverse collective-action problem. And, we keep doing this to ourselves. These technologies are not enriching our lives, but once they're adopted we either use them, or voluntarily fall behind. There seems to be very little general philanthropy in this regard.
I think that's a false dichotomy. Tech isn't good or bad; it's what we do with it that is. Don't throw the baby out with the bathwater.
So many of today's problems could be good in principle, but the surrounding facts (laws, human nature, etc.) render them awful. Did anyone think 20 years ago that algorithmic feeds would be so bad? I don't see a reason in principle that they must be bad, but it's clear that we'd be far better off without them.
> Is this neccesarily a bad thing?
Yes?
For the sake of discussion, it would be great if you can expand on that.
Okay, if you want me to write more...
The direction we're headed, humanity is going to become utterly isolated pods, never interacting. We're going to end up with humanity being one rich dude, a staff of robots, and some humans under his patronage. The other bit of humanity is a rich woman, with a staff of robots, and some humans under her patronage, because nobody can deal with there being other humans who are as gross and sloppy and who suck just as much as they do.
Relationships are hard. They're a lot of work. Not just romantic relationships, but all other kinds of relationships: family, friendship, mentorship, chosen family, colleague, manager, mentee, client, neighbor, teammate, student, citizen, creative partner, audience. Instead of having any kind of relationship with people, I can just hide away, work remote, become hikikomori.
Where does that leave humanity? As Ms Deejay says, do you think you're better off alone?
Because individual humans no longer need to coexist with their neighbors, the best and worst will flourish. For every supportive person who accepts gay people, there's another person who wants to stone them. Interacting with lots of other people is the only way to develop nuanced opinions of groups of other people, and without any kind of forced interaction there won't be any, further isolating everybody from everyone else.
Something I use as a heuristic that is pretty reliable: "Am I treating a thing like a person, or a person like a thing?" If either, then it's maybe not necessarily bad, but probably bad.
It's not about whether it's "real" or not. In the case of AI relationships, extremely sophisticated and poorly understood mechanisms of social-emotional communication and meaning-making, which have previously only ever been used for bonding with other people (and to a limited extent animals), are being directed at a machine. And we find that those mechanisms respond to the machine as if there is a person there, when there is not.
There is a lot of novel stuff happening there, technologically, socially, psychologically. We don't really know, and I don't trust anyone who is confidently predicting, what effects that will have on the person doing it, or their other social bonds.
Wrestling is theater! It's an ancient craft, well understood. If you're going to approach AI relationships as a natural extension of some well-established human activity, pet bonding is probably the closest. I don't think it's even that close, though.
But you know that the alternative is a lot of other mental illnesses, including things like suicide. Everyone tells people to "get mental help," but it is neither cheap nor accessible to most people.
I've seen no evidence that this sort of relationship to AI use is an effective alternative to mental health treatment. In fact it's so far looking to be about the opposite: as currently implemented LLMs are reinforcement tools for delusional thinking and have already been a known factor in several suicides.
The inaccessibility of healthcare in the US is a serious problem but this is not a solution or alternative to it right now and may never become one.
To me, what you're saying is akin to "I see no evidence lab-grown food is healthier than real food, so people suffering famine and malnourishment should wait for someone to give them aid instead of eating lab-grown food."
I don't think AI can replace mental health treatment or human relationships. But it might be a viable stopgap. It's like Tom Hanks talking to "Wilson" the volleyball when he was stuck on an island in "Cast Away". Yeah, it's weird, but it helped him survive and cope until he was rescued. I want these people struggling with mental health to survive and cope until they get real help some day. I want fewer suicides, fewer people contracting chronic illnesses, etc., and to hell with any "appearance" of weirdness or stigma.
You're not engaging with my comments in favor of arguing with words I didn't say in defense of positions I don't hold. I don't really see a role for myself in that activity so I'll leave you to it.
I don't think that's fair; I was mostly responding to your first sentence and:
> The inaccessibility of healthcare in the US is a serious problem but this is not a solution or alternative to it right now and may never become one.
My response was clearly not your words, but my understanding of your conclusion.
The few suicides that are reported pale in comparison to suicides caused by loneliness. An argument can also be made that they just haven't trained/found the right companion model yet.