Can robots be kind? This is one of many questions we will want to answer as robots continue to develop. Robots will be much more prevalent in the coming years—in industry, health care, transportation, medicine, and the home. Japan probably has the most advanced home-care robots today, helping the elderly with everyday tasks; undoubtedly these robots are programmed to behave in a kind way, with a soothing voice, a gentle touch, and a relaxed pace. In the U.S. there are already videos showing home robots making omelets, folding clothes, and vacuuming the living room.
Undoubtedly such robots can appear kind or act in a kindly manner, but is that the same as being kind? Kindness itself is an emotion, and most AI and robotics experts acknowledge that while a humanoid robot can emulate emotion, having a real emotion requires a feeling, sensing body with all five senses intact—that is, aliveness. The journal Neuroscience Research notes, in support of this view, that robots lack such emotion-generating human brain structures as the amygdala. Some roboticists may believe that as the deep-learning circuits of robot brains become more sophisticated, robots will come close to being alive. But they won't be alive the way we are.
Robots are machines, the direct descendants of the 19th-century industrial machines—the steam engine, looms, mills, cotton gins, and tool-and-die devices—that created the industrial age and put large numbers of people out of work. The 19th-century Luddite movement in the U.K. arose as a rebellion against a machine-based world, and it is possible that a new Luddite movement will arise as robots take over more and more of what is now human employment.
It is not too soon for us to prepare for the coming robotic age. Elon Musk has said he imagines a near future with 20 billion robots in the world—more than one for every person on earth—doing everything from making cars to mastering haute cuisine. Even if he is exaggerating by a factor of 10 or 100, that is still a lot of robots. I wonder what long-term vision for human beings this implies. Do we aspire to be like Roman aristocrats at a banquet, lounging on couches while robots drop grapes into our mouths, cook and serve the food, and clean up afterwards? In this imagined future, will no one work? Is this the noblest purpose we can come up with for our future human existence?
I think not. In order to write the symphonies of Beethoven or the dramas of Shakespeare, and also to appreciate them, we need to be fully alive, in every sense of the word—entities of flesh, blood, and bone who can be intimate with each other, who can trust each other, and who can love each other. Intimacy, trust, love—I don't think these qualities can be programmed into even the most advanced AI-powered robot, and I speak as someone who invented and wrote a high-end business-to-business software product. I know all too well the heady sense of power that comes from designing and coding an expert system; anyone who codes for a living will tell you that as you watch your programmed instructions execute, you are sometimes filled with a sense of godlike power—hopefully a fleeting one.
As far back as the 16th century, that godlike power was embodied in the legend of the Golem—an inanimate lump of clay that, through the magical powers of a cabalistic rabbi, was brought to life. The Golem originally obeyed its master's wishes, but eventually it got out of control, became destructive, and had to be shut down by another cabalistic spell. The Golem legend inspired such stories as the Sorcerer's Apprentice—even the Terminator movies—and lurks in the background of robotic development today.
The Golem of legend wasn't kind, only obedient. In fact, over time it became rather evil, and I imagine that as humanoid robots become more and more intelligent, someone will program them to be assassins or terrorists—if someone is not doing so already. The armed drones being used by both sides in the Ukraine war today are, in a sense, robot assassins; I have read that these drones communicate with each other on the way to their targets and adjust their behavior accordingly in real time. When they arrive at their targets they blow things up and kill people. Will such technology soon be coming to an online store on the dark web?
So the question remains: can robots be kind? I wish that were all they could be. We have unleashed the Golem for real now, and as individuals and societies we must step up and take responsibility for what our Golems will do.
Thank you. I will sign up and read your books. It has been many years since I sat sesshin with you at Green Gulch. So glad you are still here as a guide for this perilous time.
If humans are going to survive as a species, we have to become more of who we really are. Thanks for reminding us that kindness is built into our DNA. We have a ways to go before we can support kindness in more of humanity. Turning our lives over to robots is not going to get us there. We can do better. Thanks, Lewis, for reminding us of that and challenging humans to be…more human.
If robots cannot be kind, then I figure they could not be competitive, jealous, greedy for money and power, etc. If this were true, then humanity could spend its time and energy healing the hurts of one another and discovering new and better ways of leveling the playing field so that all persons would have the ability to live in peace and plenty (or maybe just enough). judith susan