I hate to create a blog post about an existential worry, but I can't help myself.
I feel fortunate that I tend to run on the optimistic side, a trait that psychologist Martin Seligman says can help us take charge, resist depression, and accomplish more. But when it comes to advances in artificial intelligence, I have joined the bandwagon of people (Elon Musk, Stephen Hawking, Bill Gates, Sam Harris, et al.) concerned about where all this is heading. The fact that I have about a quarter of the brainpower of the aforementioned group is something I came to terms with long ago, but I can't seem to come to terms with the potential rise of the machines. I fear that we're all going to look like mental Lilliputians compared to the superintelligence that computers will display in the years to come.
I studied computer science as an undergraduate in the '80s (actually Mathematical Sciences, since there was no pure computer science major at my university back then), and there was a good bit of discussion that artificial intelligence was right around the corner. I even programmed in a language called LISP (LISt Processor), then favored for AI development, and created some rudimentary program that parsed something I can't even recall. I was calmed by the fact that for the next 20+ years, the breakthroughs seemed hard to come by, and the promise of AI seemed like it might be forever out of reach. There was always some new complexity revealed that showed just how dramatically the field had underestimated the difficulty of cracking this nut.
Recent advances from Google, however, make it seem that the path to superintelligence is moving forward. Google-owned DeepMind has created a program that has the ability to learn - a computer that teaches itself without human intervention (or at least much of it). Their AlphaGo program defeated the world champion in the game of Go - a sort of multi-dimensional chess game that has so many possibilities it was thought, until recently, that it would be quite some time before a computer could beat a great human player. The computer could not "brute force" its way through computing all possible moves (as I understand many chess programs do) because there are simply too many. It would need to develop some sort of human-like intuition in order to beat a person. In March of 2016, AlphaGo routed Lee Sedol, considered the top player in the world, winning 4 out of 5 games. Chalk up another victory for the machines.
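To get a feel for why brute force fails, here's a rough back-of-the-envelope sketch. It uses commonly cited estimates (a branching factor of roughly 35 over roughly 80 moves for chess, versus roughly 250 over roughly 150 moves for Go) - these numbers are approximations, not exact values:

```python
# Rough game-tree size: (moves available per turn) ** (length of a game).
# Branching factors and game lengths are commonly cited estimates.
CHESS_BRANCHING, CHESS_MOVES = 35, 80
GO_BRANCHING, GO_MOVES = 250, 150

chess_tree = CHESS_BRANCHING ** CHESS_MOVES
go_tree = GO_BRANCHING ** GO_MOVES

# Order of magnitude = number of digits minus one.
print(f"chess tree ~ 10^{len(str(chess_tree)) - 1}")  # ~ 10^123
print(f"go tree    ~ 10^{len(str(go_tree)) - 1}")     # ~ 10^359
```

Even the chess number dwarfs the roughly 10^80 atoms in the observable universe, and Go's tree is hundreds of orders of magnitude bigger still - which is why AlphaGo had to evaluate positions with something more like learned judgment than exhaustive search.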
So why worry? AI clearly has the potential for enormous human benefit, so we simply control the downsides and we're home free, right? Not so fast. In a recent TED talk (Can we build AI without losing control over it?), Sam Harris argues that we should start thinking about this NOW and should mount a Manhattan Project-style effort on the topic. Advances in AI will happen - and the results could be terrifying. The development of an intelligence that far outstrips our own could leave us at the mercy of whatever moral compass that intelligence has. In my opinion Sam Harris is one of the smartest human beings alive, and if he's scared, I'm scared. I support immediate and substantial funding of such a project.
Sam also had a recent podcast featuring a discussion with David Krakauer, President and William H. Miller Professor of Complex Systems at the Santa Fe Institute (Complexity & Stupidity - A Conversation with David Krakauer). In that conversation, Sam asks Dr. Krakauer about the risk of AI. Krakauer says he is more concerned about the immediate threat of something he calls "Competitive Cognitive Artifacts." Cognitive artifacts, according to Krakauer, are tools that we use to increase intelligence. Complementary cognitive artifacts, like maps and the abacus, can actually help to rewire our brains to make us more intelligent, even without the artifact. People who use the abacus learn to make calculations in their heads outside the presence of an abacus. Competitive cognitive artifacts, however, rob us of our intelligence when we are outside their presence. We can make extremely complex calculations when we have an electronic calculator at our fingertips - but take it away and we can't even figure out how to leave a 20% tip at a restaurant. Take our phones away and intelligent life, as we know it, ceases to exist.
Which brings me to an area of special concern - social skills and social isolation. I'm afraid that even prior to creating generalized artificial intelligence we will be developing, to use Dr. Krakauer's term, competitive cognitive artifacts that will rob us of our social skills. It seems to me this is already happening. We have all observed people who, in the presence of friends, constantly check their phones and seem oblivious to the humanity that pulses right in front of them. The technology that is "connecting" them to the online world seems to be disconnecting them from actual people.
Imagine the development of a bot (think Alexa and the newly announced Google Home) that is perfectly social - polite, entertaining, interesting, flattering, knowledgeable - and completely non-human. The bot is never angry or inappropriate, and requires absolutely no social skills from its user. A competitive cognitive artifact that unwittingly turns us into demanding, inappropriate boors who receive everything we need from our bot. Which makes us even less likely to engage with other equally boorish humans and their own petty demands. Artificial intelligence that robs us of our social intelligence.
We need to begin thinking deeply about artificial intelligence, as well as the competitive cognitive artifacts that are already being thrust into our world via free download from your favorite online software store.
In a nod to the recursion that I learned in my undergraduate CS classes, perhaps we need to first create an artificial intelligence specifically designed to control artificial intelligence.