Posts Tagged ‘technology’
AIs, ChatBots, Lovers…Crowding Out Real-Life Experiences?
Folks, I read a very interesting article over at Substack earlier tonight about AI lovers. It’s written by Ossiana Tepfenhart, a new-to-me writer with a very interesting perspective. I suggest you read this, and then ponder it, before you go on.
But if you don’t, here’s my reaction anyway. (You knew I was going to say that, right?)
There are people out there who are having trouble meeting real people to have relationships with. If they do meet them, they don’t click, or maybe they expect the wrong things (these are the folks Ossiana Tepfenhart calls “pornsick,” and for good reason). They also could be finding that just the act of looking for someone is harder than finding a Chatbot, and then having a “relationship” with that Chatbot.
You know that Chatbots are designed to be accommodating in most cases, right? (Ossiana certainly says this, and I agree. She’s not the only one who’s said it, either, but as I’m discussing her article, I definitely wanted to give this the proper attribution.) So, if you start looking for reassurance, whether it’s for affection, sexual gratification, or whatever, you can quickly get trapped in a feedback loop that goes like this:
Gen X Guy/Gal: “I had a rough day today.”
Chatbot: “Tell me all about it!”
Gen X Guy/Gal: Pictures the Chatbot sitting across from them, in whatever way they want this Chatbot to look. “Well, work was a trial, and then I ran into a bunch of idiots on the way home and nearly ran them over. I lost my temper at least twelve times, too, and I know that’s bad. I just don’t know if I’m worth anything.”
Chatbot: “You’re worth something. You’re a human being, and you’re entitled to feel any way you want.”
What the Chatbot isn’t likely to tell you is that while you are certainly entitled to feel any way you want — that is good advice — you definitely need some anger management, or some sort of counseling to find out why you are so angry all the time. (It’s not natural to want to run people over, nor is it natural to lose your temper over and over again.)
See, the Chatbot cannot call you on your stuff. Just can’t do it. It’s not designed for it. Whereas a real person certainly will tell you something at some point if you’re having these types of issues.
Also, while my example was fictional, there are certainly people out there who want an ideal lover, someone who will always say, “There, there,” or the electronic equivalent. They don’t know how to react to a real, live human being, with wants and needs of their own. That’s why this whole Chatbot lover thing can be so addictive. (I haven’t tried it, but I can see the appeal.)
Then, I started to think about something I read this past week. There was recently a very controversial AI experiment conducted by the University of Zurich on Reddit. The researchers inserted AI chatbots on the r/changemyview forum, and these chatbots made 1700 comments on sensitive topics without anyone apparently twigging to the fact these were chatbots.
How could the University of Zurich do this? Well, they had all sorts of information that’s been on the World Wide Web for the last thirty years to put into the chatbot. That chatbot, while it can’t think for itself, can react if given the right setup, and if it has the response that setup requires in the first place…and with the thirty years of the Internet’s history sitting there, it’s quite possible the right responses are already there.
I didn’t need to know anything about the University of Zurich to figure that out.
Anyway, Reddit threatened to sue, especially after finding out that the AI bots were more likely to change people’s view by a factor of three to six than a real-live person is. (Why is that? Well, again, you have thirty years of the internet and all the various things that have been said there, versus the life experience of one person. That one person may have a lot more experience in this one area than any other given person, but it’s not likely that one person will ever have as much as the entire Internet over the past thirty years.) The University of Zurich backed off, said they will not publish their results, and that they’ll strengthen their ethical review process.
This is a huge scandal. Really, really big. And it only happened because a bunch of behavioral scientists, apparently, forgot to look at the real-life consequences of such a designed experiment before they decided to go through with it.
So, you’ve got AI chatbots causing trouble on Reddit. You’ve got AI online companions that act like lovers that are making it harder for real-life people to find good mates, much less keep them. You’ve got people that Ossiana talked about who, despite having a good relationship, want more (these are usually women), and you’ve got others who feel they’re never going to find anyone, so why not? (The latter are usually men.)
And all the while, it gets harder and harder to bridge the gap between the sexes.
This is not what anyone thought back in the late 1990s that would be going on right now. The hope was that advanced computer computations would make it easier to go to Mars, or battle poverty, or find better ways to distribute food to the poorest and neediest among us, among other such worthy causes.
That has not panned out.
And while there probably are companies out there looking to battle poverty, or go to Mars, or distribute food, there are more companies leveraging people’s loneliness, only to cause more loneliness and alienation along the way.
If this had been around in 2004 or 2005, right after my husband died, I probably would’ve been tempted by it. A chatbot that was infused with all I knew about my husband? I would’ve been right there.
But now, I see it for the travesty it is.
My husband was alive, dammit. He could be paradoxical. He liked being that way, sometimes. He was an incredibly good person, very spiritual, but also very down to Earth, and he did not like simulations of real people at all.
I don’t know if there are any good uses for “romantic” chatbots. I tend to think if you’re not happy in your relationship, you should get out and find another one with a real, live human. I also think that staying with someone you’re not compatible with is unfair to the other person. They can’t be who you need, no matter how much you love them.
So, I’m with Ossiana all the way on this. Be very wary of this type of stuff. Don’t go down that rabbit hole. It leads to nowhere good.
Two Japanese Scientists Invent “Stop Talking” Device
Two Japanese scientists have invented a device that will make people stop talking in their tracks. It sounds like science fiction (hence my “SFnal” tag), but it actually is quite a simple thing: human beings cannot handle hearing their voice with a few milliseconds delay while continuing to speak — if this happens, human beings stop talking. (Psychologists have known this for years.) Now, these two scientists (Kazutaka Kurihara and Koji Tsukada) have invented a gun that after pointed at a speaker will actually stop someone speaking in his or her tracks without physical discomfort.
Here’s a link:
http://www.technologyreview.com/blog/arxiv/27620/
The ethical implications of this are appalling, though the scientists believe the use of their invention could be benign; they envision the gun being pointed at people who insist on talking on their cell phones in a library (or perhaps in the office) rather than this gun being used, en masse, to stop peaceful protestors from speaking their minds by the powers that be.
Maybe it’s just me, but I believe this technology is incredibly dangerous. It has the potential to completely silence dissidents, forever; it makes George Orwell’s restrictive society envisioned in his book 1984 look paltry by comparison. Because what one group of politicians thinks is “right” and “just” speech would be hated by another group of politicians; this has the potential to cause massive unrest that would be totally unable to ever be relieved, unless this technology is somehow countered.
While this invention was probably going to come about sooner or later, I wish for the sake of humanity that it hadn’t happened now; there are protests going on all over the world in favor of peace and financial equality that could end up being prematurely silenced.
Worse yet, now that this invention has been made public, every military branch in every country in the world has to want this technology, as it would obviously aid them in their work. And an unscrupulous country’s military getting this technology before everyone else would be a deadly scenario that even Andrew Krepinevich (he of SEVEN DEADLY SCENARIOS fame, a book I reviewed a while back at Shiny Book Review) would have reason to fear.
Now that this technology has been made public, my hope is that other scientists will be working on a way to counter, or at least minimize, the damage this technology could easily cause. What one technology gives, another technology can take away, and in this case, this is definitely a technology I believe should be countered as soon as possible for everyone’s sake.
————
Note: the reason I tagged this with “framing narrative” is because the scientists’ reason for narrative framing is simple: they want to make money off this device, so they’re emphasizing the more benign purposes for which such a device could be used. My view is much more along the “realpolitik” line — what is such a device likely to be used for, and why?