  In 2009, Kahneman and Klein took the unusual step of coauthoring a paper in which they laid out their views and sought common ground. And they found it. Whether or not experience inevitably led to expertise, they agreed, depended entirely on the domain in question. Narrow experience made for better chess and poker players and firefighters, but not for better predictors of financial or political trends, or of how employees or patients would perform. The domains Klein studied, in which instinctive pattern recognition worked powerfully, are what psychologist Robin Hogarth termed “kind” learning environments. Patterns repeat over and over, and feedback is extremely accurate and usually very rapid. In golf or chess, a ball or piece is moved according to rules and within defined boundaries, a consequence is quickly apparent, and similar challenges occur repeatedly. Drive a golf ball, and it either goes too far or not far enough; it slices, hooks, or flies straight. The player observes what happened, attempts to correct the error, tries again, and repeats for years. That is the very definition of deliberate practice, the type identified with both the ten-thousand-hours rule and the rush to early specialization in technical training. The learning environment is kind because a learner improves simply by engaging in the activity and trying to do better. Kahneman was focused on the flip side of kind learning environments; Hogarth called them “wicked.”

  In wicked domains, the rules of the game are often unclear or incomplete, there may or may not be repetitive patterns and they may not be obvious, and feedback is often delayed, inaccurate, or both.

  In the most devilishly wicked learning environments, experience will reinforce the exact wrong lessons. Hogarth noted a famous New York City physician renowned for his skill as a diagnostician. The man’s particular specialty was typhoid fever, and he examined patients for it by feeling around their tongues with his hands. Again and again, his testing yielded a positive diagnosis before the patient displayed a single symptom. And over and over, his diagnosis turned out to be correct. As another physician later pointed out, “He was a more productive carrier, using only his hands, than Typhoid Mary.” Repetitive success, it turned out, taught him the worst possible lesson. Few learning environments are that wicked, but it doesn’t take much to throw experienced pros off course. Expert firefighters, when faced with a new situation, like a fire in a skyscraper, can find themselves suddenly deprived of the intuition formed in years of house fires, and prone to poor decisions. With a change of the status quo, chess masters too can find that the skill they took years to build is suddenly obsolete.

  * * *

  In a 1997 showdown billed as the final battle for supremacy between natural and artificial intelligence, IBM supercomputer Deep Blue defeated Garry Kasparov. Deep Blue evaluated two hundred million positions per second. That is a tiny fraction of possible chess positions—the number of possible game sequences is more than the number of atoms in the observable universe—but plenty enough to beat the best human. According to Kasparov, “Today the free chess app on your mobile phone is stronger than me.” He is not being rhetorical.

  “Anything we can do, and we know how to do it, machines will do it better,” he said at a recent lecture. “If we can codify it, and pass it to computers, they will do it better.” Still, losing to Deep Blue gave him an idea. In playing computers, he recognized what artificial intelligence scholars call Moravec’s paradox: machines and humans frequently have opposite strengths and weaknesses.

  There is a saying that “chess is 99 percent tactics.” Tactics are short combinations of moves that players use to get an immediate advantage on the board. When players study all those patterns, they are mastering tactics. Bigger-picture planning in chess—how to manage the little battles to win the war—is called strategy. As Susan Polgar has written, “you can get a lot further by being very good in tactics”—that is, knowing a lot of patterns—“and have only a basic understanding of strategy.”

  Thanks to their calculation power, computers are tactically flawless compared to humans. Grandmasters predict the near future, but computers do it better. What if, Kasparov wondered, computer tactical prowess were combined with human big-picture, strategic thinking?

  In 1998, he helped organize the first “advanced chess” tournament, in which each human player, including Kasparov himself, paired with a computer. Years of pattern study were obviated. The machine partner could handle tactics so the human could focus on strategy. It was like Tiger Woods facing off in a golf video game against the best gamers. His years of repetition would be neutralized, and the contest would shift to one of strategy rather than tactical execution. In chess, it changed the pecking order instantly. “Human creativity was even more paramount under these conditions, not less,” according to Kasparov. Kasparov settled for a 3–3 draw with a player he had trounced four games to zero just a month earlier in a traditional match. “My advantage in calculating tactics had been nullified by the machine.” The primary benefit of years of experience with specialized training was outsourced, and in a contest where humans focused on strategy, he suddenly had peers.

  A few years later, the first “freestyle chess” tournament was held. Teams could be made up of multiple humans and computers. The lifetime-of-specialized-practice advantage that had been diluted in advanced chess was obliterated in freestyle. A duo of amateur players with three normal computers not only destroyed Hydra, the best chess supercomputer, they also crushed teams of grandmasters using computers. Kasparov concluded that the humans on the winning team were the best at “coaching” multiple computers on what to examine, and then synthesizing that information for an overall strategy. Human/computer combo teams—known as “centaurs”—were playing the highest level of chess ever seen. If Deep Blue’s victory over Kasparov signaled the transfer of chess power from humans to computers, the victory of centaurs over Hydra symbolized something more interesting still: humans empowered to do what they do best without the prerequisite of years of specialized pattern recognition.

  In 2014, an Abu Dhabi–based chess site put up $20,000 in prize money for freestyle players to compete in a tournament that also included games in which chess programs played without human intervention. The winning team comprised four people and several computers. The captain and primary decision maker was Anson Williams, a British engineer with no official chess rating. His teammate, Nelson Hernandez, told me, “What people don’t understand is that freestyle involves an integrated set of skills that in some cases have nothing to do with playing chess.” In traditional chess, Williams was probably at the level of a decent amateur. But he was well versed in computers and adept at integrating streaming information for strategy decisions. As a teenager, he had been outstanding at the video game Command & Conquer, known as a “real-time strategy” game because players move simultaneously. In freestyle chess, he had to consider advice from teammates and various chess programs and then very quickly direct the computers to examine particular possibilities in more depth. He was like an executive with a team of mega-grandmaster tactical advisers, deciding whose advice to probe more deeply and ultimately whose to heed. He played each game cautiously, expecting a draw, but trying to set up situations that could lull an opponent into a mistake.

  In the end, Kasparov did figure out a way to beat the computer: by outsourcing tactics, the part of human expertise that is most easily replaced, the part that he and the Polgar prodigies spent years honing.

  * * *

  In 2007, National Geographic TV gave Susan Polgar a test. They sat her at a sidewalk table in the middle of a leafy block of Manhattan’s Greenwich Village, in front of a cleared chessboard. New Yorkers in jeans and fall jackets went about their jaywalking business as a white truck bearing a large diagram of a chessboard with twenty-eight pieces in midgame play took a left turn onto Thompson Street, past the deli, and past Susan Polgar. She glanced at the diagram as the truck drove by, and then perfectly re-created it on the board in front of her. The show was reprising a series of famous chess experiments that pulled back the curtain on kind-learning-environment skills.

  The first took place in the 1940s, when Dutch chess master and psychologist Adriaan de Groot flashed midgame chessboards in front of players of different ability levels, and then asked them to re-create the boards as well as they could. A grandmaster repeatedly re-created the entire board after seeing it for only three seconds. A master-level player managed that half as often as the grandmaster. A lesser, city-champion-level player and an average club player were never able to re-create the board accurately. Just like Susan Polgar, grandmasters seemed to have photographic memories.

  After Susan succeeded in her first test, National Geographic TV turned the truck around to show the other side, which had a diagram with pieces placed at random. When Susan saw that side, even though there were fewer pieces, she could barely re-create anything at all.

  That test reenacted an experiment from 1973, in which two Carnegie Mellon University psychologists, William G. Chase and soon-to-be Nobel laureate Herbert A. Simon, repeated the De Groot exercise, but added a wrinkle. This time, the chess players were also given boards with the pieces in an arrangement that would never actually occur in a game. Suddenly, the experts performed just like the lesser players. The grandmasters never had photographic memories after all. Through repetitive study of game patterns, they had learned to do what Chase and Simon called “chunking.” Rather than struggling to remember the location of every individual pawn, bishop, and rook, the brains of elite players grouped pieces into a smaller number of meaningful chunks based on familiar patterns. Those patterns allow expert players to immediately assess the situation based on experience, which is why Garry Kasparov told me that grandmasters usually know their move within seconds. For Susan Polgar, when the van drove by the first time, the diagram was not twenty-eight items, but five different meaningful chunks that indicated how the game was progressing.

  Chunking helps explain instances of apparently miraculous, domain-specific memory, from musicians playing long pieces by heart to quarterbacks recognizing patterns of players in a split second and making a decision to throw. The reason that elite athletes seem to have superhuman reflexes is that they recognize patterns of ball or body movements that tell them what’s coming before it happens. When tested outside of their sport context, their superhuman reactions disappear.

  We all rely on chunking every day in skills in which we are expert. Take ten seconds and try to memorize as many of these twenty words as you can:

  Because groups twenty patterns

  meaningful are words easier into chunk remember

  really sentence familiar can to you much in a.

  Okay, now try again:

  Twenty words are really much easier to

  remember in a meaningful sentence because

  you can chunk familiar patterns into groups.

  Those are the same twenty pieces of information, but over the course of your life, you’ve learned patterns of words that allow you to instantly make sense of the second arrangement, and to remember it much more easily. Your restaurant server doesn’t just happen to have a miraculous memory; like musicians and quarterbacks, they’ve learned to group recurring information into chunks.
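  To see how far chunking stretches a limited memory, here is a minimal sketch in Python (my own illustration, not from the book or from Chase and Simon’s experiments). The seven-chunk capacity and the list of “known” phrases are assumptions chosen for the example; the point is only that the same twenty words collapse into a few chunks when they match familiar patterns, and remain twenty separate items when they don’t.

```python
# Toy model of chunking: a "memory" that holds only a fixed number of chunks
# stores far more raw words when familiar multi-word patterns can be grouped.

CAPACITY = 7  # assumed short-term limit, measured in chunks rather than words

# Hypothetical patterns this "expert" already knows by heart.
KNOWN_PHRASES = [
    "twenty words are really much easier to remember",
    "in a meaningful sentence",
    "because you can chunk familiar patterns into groups",
]

def chunk(words, known_phrases):
    """Greedily group words into known phrases; leftovers stay single-word chunks."""
    chunks, i = [], 0
    while i < len(words):
        for phrase in known_phrases:
            p = phrase.split()
            if words[i:i + len(p)] == p:
                chunks.append(phrase)   # one familiar pattern -> one chunk
                i += len(p)
                break
        else:
            chunks.append(words[i])     # unfamiliar word -> its own chunk
            i += 1
    return chunks

sentence = ("twenty words are really much easier to remember in a meaningful "
            "sentence because you can chunk familiar patterns into groups").split()
scrambled = ("because groups twenty patterns meaningful are words easier into "
             "chunk remember really sentence familiar can to you much in a").split()

for name, words in [("meaningful order", sentence), ("scrambled order", scrambled)]:
    chunks = chunk(words, KNOWN_PHRASES)
    verdict = "fits" if len(chunks) <= CAPACITY else "overflows"
    print(f"{name}: {len(words)} words -> {len(chunks)} chunks "
          f"({verdict} a {CAPACITY}-chunk memory)")
```

  Run as written, the meaningful order reduces to three chunks while the scrambled order stays at twenty items, which is roughly the difference between Susan Polgar’s first and second pass of the truck.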

  Studying an enormous number of repetitive patterns is so important in chess that early specialization in technical practice is critical. Psychologists Fernand Gobet (an international master) and Guillermo Campitelli (coach to future grandmasters) found that the chances of a competitive chess player reaching international master status (a level down from grandmaster) dropped from one in four to one in fifty-five if rigorous training had not begun by age twelve. Chunking can seem like magic, but it comes from extensive, repetitive practice. Laszlo Polgar was right to believe in it. His daughters don’t even constitute the most extreme evidence.

  For more than fifty years, psychiatrist Darold Treffert studied savants, individuals with an insatiable drive to practice in one domain, and ability in that area that far outstrips their abilities in other areas. “Islands of genius,” Treffert calls it.* Treffert documented the almost unbelievable feats of savants like pianist Leslie Lemke, who can play thousands of songs from memory. Because Lemke and other savants have seemingly limitless retrieval capacity, Treffert initially attributed their abilities to perfect memories; they are human tape recorders. Except, when they are tested after hearing a piece of music for the first time, musical savants reproduce “tonal” music—the genre of nearly all pop and most classical music—more easily than “atonal” music, in which successive notes do not follow familiar harmonic structures. If savants were human tape recorders playing notes back, it would make no difference whether they were asked to re-create music that follows popular rules of composition or not. But in practice, it makes an enormous difference. In one study of a savant pianist, the researcher, who had heard the man play hundreds of songs flawlessly, was dumbstruck when the savant could not re-create an atonal piece even after a practice session with it. “What I heard seemed so unlikely that I felt obliged to check that the keyboard had not somehow slipped into transposing mode,” the researcher recorded. “But he really had made a mistake, and the errors continued.” Patterns and familiar structures were critical to the savant’s extraordinary recall ability. Similarly, when artistic savants are briefly shown pictures and asked to reproduce them, they do much better with images of real-life objects than with more abstract depictions.

  It took Treffert decades to realize he had been wrong, and that savants have more in common with prodigies like the Polgar sisters than he thought. They do not merely regurgitate. Their brilliance, just like the Polgar brilliance, relies on repetitive structures, which is precisely what made the Polgars’ skill so easy to automate.

  * * *

  With the advances made by the AlphaZero chess program (owned by an AI arm of Google’s parent company), perhaps even the top centaurs would be vanquished in a freestyle tournament. Unlike previous chess programs, which used brute processing force to calculate an enormous number of possible moves and rate them according to criteria set by programmers, AlphaZero actually taught itself to play. It needed only the rules, and then to play itself a gargantuan number of times, keeping track of what tends to work and what doesn’t, and using that to improve. In short order, it beat the best chess programs. It did the same with the game of Go, which has many more possible positions. But the centaur lesson remains: the more a task shifts to an open world of big-picture strategy, the more humans have to add.
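  For a rough sense of what playing itself and keeping track of what tends to work can look like, here is a minimal self-play sketch in Python. It is my own illustration under heavy simplifications, not DeepMind’s method: the game is a trivial one-pile Nim (take one to three stones, whoever takes the last stone wins) rather than chess or Go, and simple win-rate counts stand in for AlphaZero’s neural networks and tree search.

```python
# Minimal self-play loop: start from the rules alone, play many games against
# yourself, count which moves tend to win, and gradually prefer those moves.
import random
from collections import defaultdict

WINS = defaultdict(int)    # (stones_left, move) -> games won after choosing move
PLAYS = defaultdict(int)   # (stones_left, move) -> games in which move was chosen

def choose(stones, explore=0.1):
    """Pick the move with the best empirical win rate, exploring occasionally."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: WINS[(stones, m)] / (PLAYS[(stones, m)] or 1))

def self_play_game(stones=21):
    """Play one game against itself, then update the win/play counts."""
    history = {0: [], 1: []}          # moves made by each side
    player = 0
    while stones > 0:
        move = choose(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            winner = player           # taking the last stone wins
        player = 1 - player
    for p in (0, 1):
        for key in history[p]:
            PLAYS[key] += 1
            WINS[key] += (p == winner)

for _ in range(50_000):
    self_play_game()

# With enough games the counts usually rediscover the known strategy for this
# toy game (leave the opponent a multiple of four stones), so from 21 stones
# the greedy choice tends to be 1. No guarantee; this is a sketch, not AlphaZero.
print(choose(21, explore=0.0))
```

  The spirit is the same as the passage above, learning purely from the rules plus self-play statistics, but AlphaZero does this at an entirely different scale and with far richer machinery.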

  AlphaZero programmers touted their impressive feat by declaring that their creation had gone from “tabula rasa” (blank slate) to master on its own. But starting with a game is anything but a blank slate. The program is still operating in a constrained, rule-bound world. Even in video games that are less bound by tactical patterns, computers have faced a greater challenge.

  The latest video game challenge for artificial intelligence is StarCraft, a franchise of real-time strategy games in which fictional species go to war for supremacy in some distant reach of the Milky Way. It requires much more complex decision making than chess. There are battles to manage, infrastructure to plan, spying to do, geography to explore, and resources to collect, all of which inform one another. Computers struggled to win at StarCraft, Julian Togelius, an NYU professor who studies gaming AI, told me in 2017. Even when they did beat humans in individual games, human players adjusted with “long-term adaptive strategy” and started winning. “There are so many layers of thinking,” he said. “We humans sort of suck at all of them individually, but we have some kind of very approximate idea about each of them and can combine them and be somewhat adaptive. That seems to be what the trick is.”

  In 2019, in a limited version of StarCraft, AI beat a pro for the first time. (The pro adapted and earned a win after a string of losses.) But the game’s strategic complexity provides a lesson: the bigger the picture, the more unique the potential human contribution. Our greatest strength is the exact opposite of narrow specialization. It is the ability to integrate broadly. According to Gary Marcus, a psychology and neural science professor who sold his machine learning company to Uber, “In narrow enough worlds, humans may not have much to contribute much longer. In more open-ended games, I think they certainly will. Not just games, in open-ended real-world problems we’re still crushing the machines.”

  The progress of AI in the closed and orderly world of chess, with instant feedback and bottomless data, has been exponential. In the rule-bound but messier world of driving, AI has made tremendous progress, but challenges remain. In a truly open-world problem devoid of rigid rules and reams of perfect historical data, AI has been disastrous. IBM’s Watson destroyed at Jeopardy! and was subsequently pitched as a revolution in cancer care, where it flopped so spectacularly that several AI experts told me they worried its reputation would taint AI research in health-related fields. As one oncologist put it, “The difference between winning at Jeopardy! and curing all cancer is that we know the answer to Jeopardy! questions.” With cancer, we’re still working on posing the right questions in the first place.

  In 2009, a report in the esteemed journal Nature announced that Google Flu Trends could use search query patterns to predict the winter spread of flu more rapidly than and just as accurately as the Centers for Disease Control and Prevention. But Google Flu Trends soon got shakier, and in the winter of 2013 it predicted more than double the prevalence of flu that actually occurred in the United States. Today, Google Flu Trends is no longer publishing estimates, and just has a holding page saying that “it is still early days” for this kind of forecasting. Tellingly, Marcus gave me this analogy for the current limits of expert machines: “AI systems are like savants.” They need stable structures and narrow worlds.