How will humanity eventually destroy itself? In his new book the science journalist (and occasional Times reviewer) Tom Chivers says the answer is not “climate change” or “nuclear weapons”, but artificial intelligence. Just to scare you, here’s how one AI researcher, Eliezer Yudkowsky, imagines our fate: “If you want a picture of AI gone wrong, don’t imagine marching humanoid robots with glowing red eyes. Imagine tiny, invisible synthetic bacteria made of diamond, with tiny onboard computers, hiding inside your bloodstream and everyone else’s. And then, simultaneously they release one microgram of botulinum toxin. Everyone just falls over dead.” This is alarming, but to many readers it will also seem implausible. Perhaps it shouldn’t. Yudkowsky is a leading figure in the “rationalist” community, a loose, mostly online network of nerds and possible geniuses who have done a lot of the big thinking on AI and its potential catastrophes. I use the term “big thinking” advisedly. Rationalists don’t spend much time fretting about whether an automated supermarket checkout or an automated van is going to steal your job.
But they are worried about what might happen if a superintelligent computer ruled the world. The rationalists may sound like cranks and fantasists, but many of them are affiliated to universities and their ideas are taken seriously by Elon Musk and Google executives. Chivers's book is an exploration of the rationalist community and their ideas. Its most intellectually fizzy sections follow rationalist thought experiments about the havoc AI could wreak on humanity — at its best it reads like a cross between a sci-fi horror novel and a philosophy tutorial.
Dangerously intelligent machines may be closer than you think. According to one survey, many AI researchers believe there's a 50 per cent chance we will achieve human-level AI by 2040; some say as early as 2022. Superhuman AI may not be much further off. As Chivers's title suggests, the danger is not that this AI will hate us, but that it will think differently from us in ways we can't predict. AI could inadvertently harm humans as it attempts to carry out benign tasks. The most famous thought experiment concerns an AI programmed to make as many paper clips as possible, which ends up ransacking the planet in its single-minded quest. This sounds absurd, but Chivers travels through a series of further thought experiments to show how difficult it could be to program an AI in a way that could stop that happening. If you're a chess-playing superintelligent AI, for example, your only aim is to play as much chess as possible. An obvious impediment to that would be if somebody came along and switched you off. So it's likely you'd find ways to stop that happening — perhaps by uploading copies of yourself on to the internet or, more drastically, by pre-emptively destroying humans before they get the chance to pull the plug. Far-fetched as it seems, Chivers shows that this problem has occupied some of the most brilliant minds in the field. Indeed, there are already glimpses of dangerous, unpredictable thinking in our AI. One AI learnt to win noughts and crosses tournaments against other computers by creating impossible moves billions of squares away from the board. This forced all the other computers to try to represent a board billions of squares across in their memory — but they didn't have the capacity and crashed. The cheating algorithm won by default. If a superintelligent AI wanted power, warns the artificial intelligence expert Scott Alexander, it wouldn't need guns or a robot army — just an internet connection.
He tells Chivers to "imagine an AI that emails Kim Jong-Un". It offers him a large sum of money in return for friendship. Then the AI provides Kim with impeccable political and economic plans that turn North Korea into an unexpected economic powerhouse. Soon Kim is dependent on his AI adviser and all the ministers who question its wisdom meet "unfortunate accidents around forms of transportation connected to the internet". Before long Kim is nothing more than a figurehead and the AI rules North Korea. That sounds implausible, but many human advisers have taken control of states in the same way — think of Rasputin's hold over the royal family in Russia. If superintelligent AI comes into existence, why shouldn't it do the same? If you think that's crazy, try the rationalist thought experiment Roko's basilisk. (Fair warning: reading the next few paragraphs may doom you to an eternity of torture.) Imagine a future where a friendly, superintelligent AI rules the universe. Its goal is to do the best for humanity and it does an excellent job — solving climate change and financial crises is child's play. The AI works out that a good way to save the most humans is to ensure that its ideas come to fruition as soon as possible. But how could it do this? One solution could be to blackmail us from the future. If it has ever occurred to you that we could live in an artificial intelligence-managed utopia, but you have done nothing to help to bring it about, then the AI could torture you for eternity as a punishment. Chivers says it could do this by creating a perfect, undying digital copy of your mind — "since a perfect copy of your mind would essentially be you, this is equivalent to bringing you back to life" to torment you. This is an incentive for everyone who has thought about building an all-powerful, benign AI to get to work on it as soon as possible. The AI "would have no incentive to torture people who had never heard of it.
The punishment/incentive only applied to people who know about the basilisk”. As soon as you have heard of the idea you risk eternal punishment by not acting. That now includes you. Sorry. It’s called a “basilisk” because this is “information that can hurt you simply because you’re aware of it”, in the same way that a mythical basilisk can kill you if you so much as look at it. This sounds nutty and it may well be — but just in case, I would like to put it on record that I warmly welcome our AI overlords and I’m willing to render them assistance in any way possible. Honestly, chaps, just say the word. The AI Does Not Hate You is a haphazard intellectual history of our times. The AI thought experiments are great — terrifying and readable. The pace flags a bit in the sections that concern infighting among prominent rationalists and aspects of rationalist culture such as polyamory and a crowd-funded baby. Only time will tell whether this stuff will seem like a fascinating insight into a group of intellectual pioneers, or just a curious glimpse at a crowd of intriguing but misguided dreamers. As I said, I hope they’re right. Bring on the AI utopia and please don’t torture me!