AIs, ChatBots, Lovers…Crowding Out Real-Life Experiences?
Folks, I read a fascinating article over at Substack earlier tonight about AI lovers. It’s written by Ossiana Tepfenhart, a new-to-me writer with a very interesting perspective. I suggest you read this, and then ponder it, before you go on.
But if you don’t, here’s my reaction anyway. (You knew I was going to say that, right?)
There are people out there who have trouble meeting real people to have relationships with. If they do meet someone, they don’t click, or maybe they expect the wrong things (these are the folks Ossiana Tepfenhart calls “pornsick,” and for good reason). They may also find that just looking for someone is harder than firing up a Chatbot and having a “relationship” with it.
You know that Chatbots are designed to be accommodating in most cases, right? (Ossiana certainly says this, and I agree. She’s not the only one who’s said it, either, but as I’m discussing her article, I definitely wanted to give this the proper attribution.) So, if you start looking for reassurance, whether it’s for affection, sexual gratification, or whatever, you can quickly get trapped in a feedback loop that goes like this:
Gen X Guy/Gal: “I had a rough day today.”
Chatbot: “Tell me all about it!”
Gen X Guy/Gal: Pictures the Chatbot sitting across from them, in whatever way they want this Chatbot to look. “Well, work was a trial, and then I ran into a bunch of idiots on the way home and nearly ran them over. I lost my temper at least twelve times, too, and I know that’s bad. I just don’t know if I’m worth anything.”
Chatbot: “You’re worth something. You’re a human being, and you’re entitled to feel any way you want.”
What the Chatbot isn’t likely to tell you is that while you are certainly entitled to feel any way you want — that is good advice — you definitely need some anger management, or some sort of counseling to find out why you are so angry all the time. (It’s not natural to want to run people over, nor is it natural to lose your temper over and over again.)
See, the Chatbot cannot call you on your stuff. Just can’t do it. It’s not designed for it. A real person, on the other hand, will eventually say something if you keep having these types of issues.
Also, while my example was fictional, there are certainly people out there who want an ideal lover, someone who will always say, “There, there,” or the electronic equivalent. They don’t know how to react to a real, live human being, with wants and needs of their own. That’s why this whole Chatbot lover thing can be so addictive. (I haven’t tried it, but I can see the appeal.)
Then I started to think about something I read this past week. There was recently a very controversial AI experiment conducted by the University of Zurich on Reddit. The researchers inserted AI chatbots into the r/changemyview forum, and those chatbots made 1,700 comments on sensitive topics without anyone apparently twigging to the fact that they were chatbots.
How could the University of Zurich do this? Well, they had all sorts of information that’s been on the World Wide Web for the last thirty years to feed into the chatbot. A chatbot can’t think for itself, but it can react if given the right setup, provided it has a response that fits the setup in the first place…and with thirty years of the Internet’s history sitting there, it’s quite possible the right responses are already there.
I didn’t need to know anything about the University of Zurich to figure that out.
Anyway, Reddit threatened to sue, especially after finding out that the AI bots were three to six times more likely to change people’s views than a real, live person is. (Why is that? Well, again, you have thirty years of the Internet and all the various things that have been said there, versus the life experience of one person. That one person may have a lot more experience in one particular area than any other given person, but it’s not likely that one person will ever have as much as the entire Internet accumulated over the past thirty years.) The University of Zurich backed off, said they would not publish their results, and promised to strengthen their ethical review process.
This is a huge scandal. Really, really big. And it only happened because a bunch of behavioral scientists apparently forgot to consider the real-life consequences of the experiment they’d designed before deciding to go through with it.
So, you’ve got AI chatbots causing trouble on Reddit. You’ve got AI online companions that act like lovers and make it harder for real-life people to find good mates, much less keep them. You’ve got the people Ossiana talked about who, despite having a good relationship, want more (these are usually women), and you’ve got others who feel they’re never going to find anyone, so why not? (The latter are usually men.)
And all the while, it gets harder and harder to bridge the gap between the sexes.
This is not what anyone back in the late 1990s thought would be going on right now. The hope then was that advanced computing would make it easier to go to Mars, or battle poverty, or find better ways to distribute food to the poorest and neediest among us, among other such worthy causes.
That has not panned out.
And while there probably are companies out there looking to battle poverty, or go to Mars, or distribute food, there are more companies leveraging people’s loneliness, only to cause more loneliness and alienation along the way.
If this had been around in 2004 or 2005, right after my husband died, I probably would’ve been tempted by it. A chatbot that was infused with all I knew about my husband? I would’ve been right there.
But now, I see it for the travesty it is.
My husband was alive, dammit. He could be paradoxical. He liked being that way, sometimes. He was an incredibly good person, very spiritual, but also very down to Earth, and he did not like simulations of real people at all.
I don’t know if there are any good uses for “romantic” chatbots. I tend to think that if you’re not happy in your relationship, you should get out and find a new one with a real, live human. I also think that staying with someone you’re not compatible with is unfair to the other person. They can’t be who you need, no matter how much you love them.
So, I’m with Ossiana all the way on this. Be very wary of this type of stuff. Don’t go down that rabbit hole. It leads to nowhere good.