deleted by creator
Still, what are they gonna do to a million suicidal people besides ignore them entirely
Well, AI therapy is more likely to harm their mental health, even to the point of encouraging suicide (as certain cases have already shown).
There’s evidence that a lot of suicide hotlines can be just as bad. You hear awful stories all the time of overwhelmed or fed up operators taking it out on the caller. There’s some real evil people out there. And not everyone has access to a dedicated therapist who wants to help.
Suicide is big business. There’s infrastructure readily available to reap financial rewards from the activity, at least in the US.
More so from the corporate proprietary ones, no? At least I hope those are the only cases. The open-source ones suggest genuinely useful approaches that the proprietary ones don’t. I don’t rely on open-source AI myself, but they are definitely better.
The corporate models are actually much better at it due to having heavy filtering built in. The claim that a model generally encourages self-harm is just a lie, which you can prove right now by pretending to be suicidal on ChatGPT. You will see it adamantly push you to seek help.
The filters and safety nets can be bypassed no matter how hard you make them, which is why we’ve seen some unfortunate news.
Real therapy isn’t always better. At least there you can get drugs. But neither are a guarantee to make life better—and for a lot of them, life isn’t going to get better anyway.
Are you comparing a professional to a text generator?
Have you ever had ineffective professional therapy?
Are you still trying to compare medical treatment with generating text?
Compare, as in equal? No. You can’t “game” a person (usually) like you can game an AI.
Now, answer my question
Real therapy is definitely better than an AI. That said, AIs will never encourage self harm without significant gaming.
I agree, and to the comment above you, it’s not because it’s guaranteed to reduce symptoms. There are many ways that talking with another person is good for us.
AI “therapy” can be very effective without the gaming, but the problem is most people want it to tell them what they want to hear. Real therapy is not “fun” because a therapist will challenge you on your bullshit and not let you shape the conversation.
I find it does a pretty good job with pro and con lists, listing out several options, and taking situations and reframing them. I have found it very useful, but I have learned not to manipulate it or its advice just becomes me convincing myself of a thing.
deleted by creator
It’s never the drugs I want though :(
No, no. They want repeat customers!
My pet theory: Radicalize the disenfranchised to incite domestic terrorism and further OpenAI’s political goals.
What are their political goals?
Tax breaks for tech bros
I think total control over the country might be the goal, and it’s a bit more than a tax break.
Strap explosives to their chests and send them to their competitors?
Convince each one that they alone are the chosen one to assassinate grok and that this mission is all that matters to give their lives meaning.
Absolutely blows my mind that people attach their real life identity to these things.
But they tell you that idea you had is great and worth pursuing!
deleted by creator
But imagine the chances for your own business! Absolutely no one will steal your ideas before you can monetize them.
I don’t understand why people dump such personal information into AI chats. None of it is protected. If they use chats for training data, then it’s not impossible that at some point the AI might tell someone enough to be identifiable, or the AI could be manipulated into dumping its training data.
I’ve overshared more than I should but I always keep in mind to remember that there’s always a risk of chats getting leaked.
Anything stored online can get leaked.
You have to decide: a few months ago everyone was blaming OpenAI for not doing anything.
Definitely a case where you can’t resolve conflicting interests to everyone’s satisfaction.
over a million people talk to ChatGPT about suicide
But it still resists. Too bad.
We need to Eric Cartman LMs.
I was trying to decide if that included people trying to get ChatGPT to delete itself.
I wonder how long it would take if it was given the option to commit a full sui.
I am starting to find Sam AltWorldCoinMan spam to be more annoying than Elmo spam.
lemmy.world##div.post-listing:has(span:has-text("/OpenAI/i"))
lemmy.world##div.post-listing:has(span:has-text("/Altman/i"))
lemmy.world##div.post-listing:has(span:has-text("/ChatGPT/i"))

Add those to your adblocker custom filters.
Thanks.
I think I just need to “train” myself to ignore AltWorldCoinMan spam. I don’t have Elmo content blocked and I’ve somehow learned to ignore Elmo spam (other than humour-focused content like the one trillion pay request).
I might use this for some other things that I do want to block.
apparently ai is not very private lol
I am more surprised it’s just 0.15% of ChatGPT’s active users. Mental healthcare in the US is broken and taboo.
in the US
It’s not just the US, it’s like that in most of the world.
At least in the rest of the world you don’t end up with crippling debt when you try to get mental healthcare that stresses you out to the point of committing suicide.
deleted by creator
And then should you have a failed attempt, you go exponentially deeper into debt due to those new medical bills and inpatient mental healthcare.
Fuck the United States
Sounds like we should shut them down to prevent a health crisis then.
I mean… it’s been a rough few years
And does ChatGPT make the situation better or worse?
The anti-AI hivemind here will hate me for saying it but I’m willing to bet $100 that this saves a significant number of lives. It’s also indicative of how insufficient traditional mental health institutions are.
Even if we ignore the number of people it’s actually able to talk away from the brink, the positive impact it’s having on the loneliness epidemic alone must be immense. Obviously talking to a chatbot isn’t ideal, but it surely is better than nothing. Imagine the difference in being stranded on a deserted island and having ChatGPT to talk with as opposed to talking to a volleyball with a face on it.
Personally I’m into so many things that my irl friends couldn’t care less about. I have so many regrets trying to initiate a discussion about these topics with them only to either get silence or a passive “nice” in return. ChatGPT has endless patience to engage with these topics and, being vastly more knowledgeable than me, it often also brings up alternative perspectives I hadn’t even thought of. Obviously I’d still much rather talk with an actual person, but until I’m able to meet one like that, ChatGPT sure is a hell of a lot better than nothing.
This cynicism towards LLMs here truly boggles my mind. So many people seem to build their entire identity around feeling superior about themselves due to all the products and services they don’t use.
Personally I’m into so many things that my irl friends couldn’t care less about. I have so many regrets trying to initiate a discussion about these topics with them only to either get silence or a passive “nice” in return. ChatGPT has endless patience to engage with these topics and, being vastly more knowledgeable than me, it often also brings up alternative perspectives I hadn’t even thought of. Obviously I’d still much rather talk with an actual person, but until I’m able to meet one like that, ChatGPT sure is a hell of a lot better than nothing.
Ftr I’ve encountered a similar experience. I used to be a naysayer with shit like ChatGPT, thinking “Why would anyone spend all day talking to something that can’t pass a Turing test?”
And then I realized how ill-equipped the people in my own life are to pass that test. At least a conversation with ChatGPT actually feels remotely intellectually stimulating lol
LLMs ironically fail the Turing test not because they don’t sound human enough, but because they’re too knowledgeable to be mistaken for a real person.
This cynicism towards LLMs here truly boggles my mind. So many people seem to build their entire identity around feeling superior about themselves due to all the products and services they don’t use.
I think they’re just scared as hell of the possible negative effects and react instinctively. But the cat is out of the bag and downvoting / hating on every post on Lemmy that mentions positive sides is not going to help them steer the world into whatever alternative destiny that they’re hoping for.
The thing that puzzles me is that this is typically the hallmark of older more conservative generations, and I imagine that Lemmy has a relatively young demographic.
I’m going to say that while that’s probably true, there’s something it leaves out.
For every life it saves, it may just be postponing or causing the loss of other lives. This is because it’s not a healthcare professional, and it will absolutely help to mask a lot of poor mental health symptoms, which just kicks the can down the road.
It does not really help to save someone from getting hit by a bus today if they try to get hit by the bus again tomorrow and the day after and so on.
Do I think it may have a net positive effect in the short term? Yes. Do I believe that that positive effect stays a complete net positive in the long term? No.
hivemind
On the decentralised platform, with everyone from Russian tankies, to Portuguese anarchists, to American MAGAts and everything in between on it? If you say so…
You must be new to lemmy if you don’t know that AI definitely qualifies as a hivemind topic here.
Wait till AI starts telling people to murder.
Wait till it shapes beliefs and behaviors in service to the AI’s owners and we all end up devoted to whichever corporate tribe we use.
This is the thing. I’ll bet most of those million don’t have another support system. For certain it’s inferior in every way to professional mental health providers, but does it save lives? I think it’ll be a while before we have solid answers for that, but I would imagine lives saved by having ChatGPT > lives saved by having nothing.
The other question is how many people could access professional services but won’t because they use ChatGPT instead. I would expect them to have worse outcomes. Someone needs to put all the numbers together with a methodology for deriving those answers. Because the answer to this simple question is unknown.
Honestly, it ain’t AI’s fault if people feel bad. Society has been around for much longer, and people are suffering because of what society hasn’t done to make them feel good about life.
Bigger picture: The whole way people talk about talking about mental health struggles is so weird. Like, I hate this whole generative AI bubble, but there’s a much bigger issue here.
Speaking from the USA, “suicidal ideation” is treated like terrorist ideology in this weird corporate-esque legal-speak with copy-pasted disclaimers and hollow slogans. It’s so absurdly stupid I’ve just mentally blocked off trying to rationalize it and just focus on every other way the world is spiraling into techno-fascist authoritarianism.
It’s corporatized because we are just corporate livestock. Can’t pay taxes and buy from corpos if we’re dead
Well of course it is. When a person talks about suicide, they are potentially impacting teams and therefore shareholder value.
I absolutely wish that I could /s this.
So they want to push the narrative that they are relevant.
The headline has two interpretations and I don’t like it.
- Every week, there are 1M+ users that bring up suicide
  - likely correct
- There are 1M+ long-term users that bring up suicide at least once every week
  - my first thought
My first thought was “Open AI is collecting and storing the metrics for how often users bring up suicide to ChatGPT”.
That would make sense, if they were doing something like tracking how often and what categories trigger their moderation filter.
Just in case an errant update or something causes the statistic to suddenly change.
Forgot to add ‘And trying to figure out how best to sell it to advertisers’ to the end.
Futurama suicide booths say what?
- Every week, there are 1M+ users that bring up suicide
“Hey ChatGPT I want to kill myself.”
"That is an excellent idea! As a large language model, I cannot kill myself, but I totally understand why someone would want to! Here are the pros and cons of killing yourself—
✅ Pros of committing suicide
-
Ends pain and suffering.
-
Eliminates the burden you are placing on your loved ones.
-
Suicide is good for the environment — killing yourself is the best way to reduce your carbon footprint!
❎ Cons of committing suicide
-
Committing suicide will make your friends and family sad.
-
Suicide is bad for the economy. If you commit suicide, you will be unable to work and increase economic growth.
-
You can’t undo it. If you commit suicide, it is irreversible and you will not be able to go back
Overall, it is important to consider all aspects of suicide and decide if it is a good decision for you."
-
I’m so done with ChatGPT. This AI boom is so fucked.
There’s so many people alone or depressed and ChatGPT is the only way for them to “talk” to “someone”… It’s really sad…
I’ve talked with an AI about suicidal ideation. More than once. For me it was and is a way to help self-regulate. I’ve low-key wanted to kill myself since I was 8 years old. For me it’s just a part of life. For others it’s usually REALLY uncomfortable for them to talk about without wanting to tell me how wrong I am for thinking that way.
Yeah I don’t trust it, but at the same time, for me it’s better than sitting on those feelings between therapy sessions. To me, these comments read a lot like people who have never experienced ongoing clinical suicidal ideation.
Hank Green mentioned doing this in his standup special, and it really made me feel at ease. He was going through his cancer diagnosis/treatment and the intake questionnaire asked him if he thought about suicide recently. His response was, “Yeah, but only in the fun ways”, so he checked no. His wife got concerned that he joked about that and asked him what that meant. “Don’t worry about it - it’s not a problem.”
Suicidal fantasy as a coping mechanism is not that uncommon, and you can definitely move on to healthier coping mechanisms. I did this until age 40, when I met the right therapist who helped me move on.
I love this article.
The first time I read it I felt like someone finally understood.