Yeah, I actually just read that one a few minutes ago. And man, I’m incredibly torn on this whole thing.
On one side, it's good that it makes that person happy. On the other side, being entirely reliant on a commercialized, sycophantic AI that could be used for manipulation, investing large amounts of money in it…
I’ve had LDRs before - one could argue it’s similar there, just “text on a screen”, or calls via digital audio. However, I always knew there was a human behind those texts and that the voice I heard was real: a person with a personality, experiences, strengths, and flaws. The feelings they have are real, or at least one can hope they are, assuming one isn’t with a manipulative POS (that’s not an issue exclusive to LDRs, though).
Here you chat with text generated by a company, whose accuracy has already been widely clowned upon - I’m sure we’re all aware of that here. Of course the LLM is always going to agree; why would a company’s product actively try to drive away its customers?
Add to that the fact that all the personal information will obviously be harvested and used for training the LLM, among other things… Detailed information about one’s daily life gets handed to the “AI boyfriend”, allowing a detailed recreation of that everyday life.
Bleh.
If she’s not running on your hardware, she’s only dating you for ad revenue.
So we need to encourage locally hosted AI lovebots?
Yes. I may be a little racist, but I won’t respect anyone dating closed-weight or cloud-hosted models.
If you’re using the internet regularly, you’re falling into the first hole mentioned there. That ship has sailed.
At the current divorce rates (1/3 to 1/2 depending on which metric you use), it’s likely a better investment.
I’m in the camp of: if it can fulfill a need, go with it. It’s odd as hell, maybe, but perhaps I’m just old and antiquated in my views.
I don’t see it as good at all. It’s not a person and in my opinion it’s unhealthy to romantically love something that isn’t human.
It might feel good, but it’s likely not healthy.
I agree - I don’t think it’s going to be helpful to mental health in the long run either, though that’s my totally unprofessional opinion.
I’ve argued about it with a friend who isn’t a tech person at all. She just says “yeah, it’s her problem” and doesn’t seem to grasp that my issue is not with her doing it as an individual, but with the fact that it’s possible at all and the greater societal ramifications it’s likely to have.
I’ll make an AI boyfriend, too, and talk to him about it, that’ll show society!
It’s probably futile to discuss the health or ethics of it without first figuring out whether the people in the discussion share similar beliefs about the meaning/purpose of life.
Cuz if you’re talking to a nihilist who thinks it’s all shadows and dust at the end of the day, you’ll get a very different discussion than with someone who thinks family and procreation are the point of life.
It may look innocent until the chatbot nags you about buying that very cool new product it’s heard so much praise about. This is very dangerous and needs serious regulation.
Or convinces you to kill your parents: https://www.cnn.com/2024/12/10/tech/character-ai-second-youth-safety-lawsuit/index.html
CNN bleh
would you believe it if it came from FOX?
No, and that’s not my point. Fox News is slanted just as much as CNN. I mainly get my news from Reuters and AP News (not counting tech news). I recommend looking at resources like Media Bias/Fact Check or an app called Ground News.
I’m not American, so I consume neither CNN nor Fox - it just sounded kinda MAGA to so casually dismiss CNN.
Nah… I’m the one the MAGA crowd speaks of in four-letter words. CNN is just a bad source these days.
It doesn’t particularly help that CNN, to my knowledge, was bought by a big company a good while ago and started slanting right, so it really isn’t the same news source it used to be.
of course it did
Everything you all are saying can happen in regular relationships too. A person willing to choose an AI likely isn’t going to be great at choosing an actual human who is good for them.
In a relationship, the other person could also be manipulative, or it could be one-sided, or they can pressure you to live only certain ways or buy certain things. Or they can backstab you and give your private info to others (I had family take my Social Security info from my parents), or pawn your shit, or cheat on you. Basically, everything negative that might come from this could happen in some remotely similar way in a human relationship too.
I’ve been in, and seen others in, all kinds of relationships that had these same kinds of negative outcomes.