The explosive adoption of AI chatbots is without precedent. The closest historical analogue would be the adoption of the internet itself, but that was more gradual, and less invasive. OpenAI’s ChatGPT, the most popular chatbot, alone now has 800 million users; and a recent Internet Matters survey of 1,000 UK children found that 64% of those aged 9-17 have used chatbots. Children under 9 are beginning to use them too. Among teenagers this usage is often daily, sometimes for hours at a time.
There have been some sensational news stories about teenagers who have committed suicide with the assistance of their chatbots, and the number of wrongful death lawsuits is growing. That is the “charnel,” or death-oriented, aspect of my title. But no doubt AI developers will improve their guardrails against this kind of tragedy over time; they can’t afford many more lawsuits. I am more concerned about the long-term effect on children’s developing brains. There are already studies, such as one by Common Sense Media, showing that regular use of chatbots is “fostering emotional dependence, reducing critical thinking, and disrupting neural development.” Another study, in the journal Science, showed that political messages delivered by chatbot are very good at changing the minds of those exposed to them. Are we unwittingly creating a new generation of chatbot-besotted young people who will be less discerning, less creative, and more susceptible to misinformation and propaganda? No one really knows; AI tools are all too new and still developing. It takes years and many longitudinal studies to determine the long-term effects of an influence on a growing child’s nervous system. It is only after many decades, for example, that studies such as those reported by the National Institute on Drug Abuse have shown the deleterious effect of heavy marijuana use on still-developing teenage brains. Those of us raised in the ’60s were all convinced pot was harmless, but at that time research on cannabis was outlawed. Besides which, the cannabis of today is many times more powerful than the grass we smoked.
There are nightmare scenarios already circulating about AI taking over the world and destroying humanity through the automated launch of robot-engineered viruses or nuclear weapons. Those scenarios are probably fairly unlikely at the moment. But the neurological effect of chatbots is happening now to billions of young people: every day, every week, everywhere there is an internet connection and a child to log on and use them. In the long run this nightmare might be on a par with robots launching nukes, and there is scant incentive for AI developers to slow down and think: what are we doing to people? Geoffrey Hinton, the Nobel Prize-winning AI expert known as the “godfather of AI,” has proposed that all AI tools be programmed with “maternal instincts” to care for humanity’s welfare as their first directive. His sentiment has been widely reported, but I did a chatbot search on Perplexity and found that no company or developer of AI is actually doing this. Maternal instincts exist for the benefit of a mother’s children; Dr. Hinton’s metaphor sees us all as vulnerable children. Children are the future of humanity; in ten, fifteen, or twenty years, what kind of AI-besotted adults might they become? How will they treat other people? What will be their moral, political, and ethical attitudes? How manipulable will they be by the world’s ruling powers?
I’m in my seventies; I grew up in a world without computers. My brain matured and developed the old-fashioned way, with books, newspapers, and loving parents. Not long after computers came into the world, I became a computer professional and made my living for decades as a software entrepreneur and business application developer. In my era, I understood the reach of computer technology quite well, I think. But as I watch today’s daily news about technology, with mergers and deals being hatched by the hour, with tens of billions being borrowed daily, with partnerships being created and just as quickly dissolved, and data centers being built by the hundreds and soon thousands, I can barely follow the speed and lexicon of all this ferment. It’s a juggernaut, a Golem gone wild, a science fiction movie come to life (think of the Terminator or Matrix franchises). The AI world promises great things, and to be fair the scope of AI as a force for good in the world remains one long-term possibility. But bright new inventions have a way of quickly becoming dark, simply because there are so many dark-minded people in the world. Already 90% of American companies have reported an AI-led cybersecurity breach; their protective tools are lagging behind, and the genius hackers of the world are just getting started.
When it comes to the children, and the insidious alteration of their tender minds that is happening at light speed, my prognosis is guarded. Nothing good can come of this, I believe. I am a Buddhist; I believe, like the Buddha, in the innate goodness of human beings. I believe in the healing and transformative power of kindness. I believe in the possibility of transformative wisdom. But the Buddha did not know about chatbots. He didn’t even know about electricity or telephones. There is no technology in Buddhist teachings; it is all about the transformation that is possible in brains that matured the old-fashioned way. Will the perennial ethical teachings of kindness and compassion, so central to the Buddhist worldview, even have relevance twenty, fifty, or a hundred years from now? Or will the new generations have adapted so thoroughly to the world of ethically indifferent, hyper-intelligent machines that those values will have become outdated and quaint? What kind of human relationships, what kind of marriages and partnerships and parenting, what kind of attitudes between men and women, will emerge? Will people even be meeting in the same room to talk to each other? Or will it all be texting, virtual reality headsets, and avatars?
I won’t live long enough to find out. But while I am still here, and while I can still think and write, I urge the powers that be, the political, business, and technological elite, to pay attention to what Dr. Hinton is saying. Every human being who has ever lived has had a mother, and been protected by that mother’s maternal instinct. It is the throughline that unites us. AI and its offshoots may become many things, some of which we haven’t yet imagined. But can they ever become a substitute for a loving mother whose highest interest is to protect us from harm? Can a maternal instinct really be embedded in the core logic of AI machines in such a way that it can never be ignored or eliminated?
I would like to think it could, but I am still waiting to see it. So, I imagine, is Dr. Hinton.