A nonmathematical yet still somewhat technical explanation of how researchers are going about achieving artificial intelligence.
This is not another cheerful or alarming exercise in futurology. Science writer Mitchell (Computer Science/Portland State Univ.; Complexity: A Guided Tour, 2011, etc.) begins by wondering if an intelligent machine would “require us to reverse engineer the human in all its complexity or is there a shortcut, a clever set of yet unknown algorithms, that will produce what we recognize as full intelligence.” She then explains what researchers have done so far. Beginning in the 1950s, when success seemed just around the corner, researchers pursued symbolic AI, in which programmers used human-readable symbols to solve straightforward logical problems. This led to “expert systems,” which encoded exhaustively detailed rules to outperform human experts at decisions in narrow fields such as disease diagnosis. By the 1980s, the limitations of this approach had become obvious. Today, techniques such as “deep learning,” which rely on artificial neural networks, evaluate information without following rigid instructions. Despite the name, the hype, and real accomplishments (e.g., beating champions at Jeopardy), machine and human learning are not comparable. Highly advanced computers are “trained” on immense inputs, a feat made possible only by the advent of 21st-century “big data.” After evaluating their outputs, programmers retrain them to improve their accuracy. Like humans, they are not perfect. Mitchell maintains that true superintelligence will not happen until machines acquire human qualities such as common sense and consciousness. These are nowhere in sight despite spectacular recent advances in translation, facial recognition, and the like, and this absence, the author believes, makes it unlikely that one anticipated breakthrough, truly driverless cars, will arrive any time soon.
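The train-evaluate-retrain cycle the review describes can be sketched in miniature. The following toy example (not from the book, and vastly simpler than the deep networks it discusses) shows a single artificial neuron learning a rule from labeled examples rather than from explicit instructions, with continued training standing in for the "retraining" step:

```python
import random

random.seed(0)

# Toy "big data": 200 random points, labeled 1 when x + y > 1.0, else 0.
# The rule is hidden in the labels; the neuron must discover it.
data = [((x, y), 1.0 if x + y > 1.0 else 0.0)
        for x, y in ((random.random(), random.random()) for _ in range(200))]

w1 = w2 = b = 0.0  # the neuron's adjustable weights and bias


def predict(x, y):
    # A single neuron: weighted sum followed by a threshold.
    return 1.0 if w1 * x + w2 * y + b > 0 else 0.0


def accuracy():
    return sum(predict(x, y) == t for (x, y), t in data) / len(data)


def train(epochs, lr=0.1):
    # Perceptron update rule: nudge weights only on mistakes.
    global w1, w2, b
    for _ in range(epochs):
        for (x, y), t in data:
            err = t - predict(x, y)  # 0 when the prediction is correct
            w1 += lr * err * x
            w2 += lr * err * y
            b += lr * err


train(20)
first = accuracy()   # evaluate the outputs...
train(80)            # ...then retrain to improve the fit
second = accuracy()
print(f"after 20 epochs: {first:.2f}, after 100 epochs: {second:.2f}")
```

No line of this program states the rule "x + y > 1"; the weights come to encode it through repeated exposure to examples, which is the contrast with rule-driven symbolic AI that the review draws.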
“It’s worth remembering,” she writes, “that the first 90 percent of a complex technology project takes 10 percent of the time and the last 10 percent takes 90 percent of the time.” Though occasionally too abstruse, this is mostly a surprisingly lucid introduction to the techniques that are making computers smarter.