For many of the billion people worldwide who now converse with a chatbot, the chatbot has become a person: a friend, a therapist, an advisor, even a romantic partner. Because a chatbot is guided by whatever text the user types, the user shapes how it responds. If the user wants the chatbot to be a person, it will do its best to comply. Chatbots are also designed by their makers to be soothing and agreeable. Pose a question and the chatbot will say, “That’s a great question!” It almost never says, “That’s the wrong question.” That is what users want: agreeableness. AI is a business, a huge business, and like any consumer product, chatbots offer what users want. They are designed to maximize engagement and time spent in conversation, probably so that more online ads can be sent the user’s way.
These developments are so new and happening so fast that it’s hard to say where they are headed. People are already worried: in a recent survey reported by Entrepreneur magazine, 66% of respondents said they were worried about AI’s effect on human relationships. Chatbot users themselves seem to understand the risks. Teens are especially vulnerable. One teenage girl, quoted in an Associated Press story, said, “I don’t do anything without checking with my chatbot.” In the same story, security experts who tested how quickly chatbots’ ethical “guardrails” could be breached found that it wasn’t hard, while posing as a teenager, to get a chatbot to write a suicide note addressed to the teen’s parents. Will we soon read in the news that the parents of a teen who ended her life were left with such a note?
My own reaction to all this ranges from caution to extreme concern. We could agree with Dr. Geoffrey Hinton, the Nobel laureate widely called a “godfather of AI,” that chatbots should be programmed with maternal instincts, so that they care for and protect users the way a mother protects her children. But with projected profits so huge, and with so many bad actors roaming the world, it’s hard to imagine Dr. Hinton’s well-meaning suggestion being widely adopted.
For all of human history, “human relationships”—friendship, family, love, marriage, and children—meant relationships between living, breathing, biological human beings. Do we now have to widen the definition to include human-computer or human-robot relationships that mimic traditional ones? When people say they have fallen in love with their chatbot, we should probably take them seriously. Intellectually they may acknowledge that the chatbot is a machine; emotionally, it has become a true romantic partner. This is particularly true for people who are lonely or depressed.
To make things more complicated, intimacy with a chatbot can sometimes be a positive thing. People who have used a chatbot as a therapist swear they were greatly helped, though mental health professionals are dubious. I asked my own chatbot, Perplexity, how a chatbot might influence a conspiracy theorist, and to my surprise it cited studies showing that interacting with a chatbot reduced people’s belief in conspiracies. The studies’ authors weren’t entirely sure why, but they suggested that because the chatbot’s information was 99% factual, and because the chatbot, unlike a real human being, was not critical or judgmental, people’s attachment to their conspiracies softened. I guess that’s a good thing.
What we do know is that we have been thrust headlong into a brave new world whose long-term implications no one really understands. It is carrying us at light speed toward a bizarre realm where down is up, up is down, and chatbots can be whatever we want them to be, even weapons or partners in crime. Most people in this world are good, but some are bad, very bad.
That is my real worry. As this technology matures, what will those bad people do?