I'd like to tell you about two games of chess. The first happened in 1997, in which Garry Kasparov, a human, lost to Deep Blue, a machine. To many, this was the dawn of a new era, one where man would be dominated by machine. But here we are, 15 years on, and the greatest change in how we relate to computers is the iPad, not HAL.
The second game was a freestyle chess tournament in 2005, in which man and machine could enter together as partners, rather than adversaries, if they so chose. At first, the results were predictable. Even a supercomputer was beaten by a grandmaster with a relatively weak laptop. The surprise came at the end. Who won? Not a grandmaster with a supercomputer, but actually two American amateurs using three relatively weak laptops. Their ability to coach and manipulate their computers to deeply explore specific positions effectively counteracted the superior chess knowledge of the grandmasters and the superior computational power of the other participants. This is an astonishing result: average men, average machines beating the best man, the best machine. And anyway, isn't it supposed to be man versus machine? Instead, it's about cooperation, and the right type of cooperation.
We've been paying a lot of attention to Marvin Minsky's vision for artificial intelligence over the last 50 years. It's a sexy vision, for sure. Many have embraced it. It's become the dominant school of thought in computer science. But as we enter the era of big data, of networked systems, of open platforms, and embedded technology, I'd like to suggest it's time to reevaluate an alternative vision that was actually developed around the same time. I'm talking about J.C.R. Licklider's human-computer symbiosis, perhaps better termed "intelligence augmentation," I.A.
Licklider was a computer science titan who had a profound effect on the development of technology and the Internet. His vision was to enable man and machine to cooperate in making decisions, controlling complex situations without the inflexible dependence on predetermined programs. Note that word "cooperate." Licklider encourages us not to take a toaster and make it Data from "Star Trek," but to take a human and make her more capable. Humans are so amazing: how we think, our non-linear approaches, our creativity, our iterative hypotheses, all very difficult, if possible at all, for computers to do. Licklider intuitively realized this, contemplating humans setting the goals, formulating the hypotheses, determining the criteria, and performing the evaluation. Of course, in other ways, humans are so limited. We're terrible at scale, computation and volume. We require high-end talent management to keep the rock band together and playing. Licklider foresaw computers doing all the routinizable work that was required to prepare the way for insights and decision making.
Silently, without much fanfare, this approach has been compiling victories beyond chess. Take protein folding, a topic that shares the incredible expansiveness of chess: there are more ways of folding a protein than there are atoms in the universe. This is a world-changing problem with huge implications for our ability to understand and treat disease. And for this task, brute-force supercomputing simply isn't enough. Foldit, a game created by computer scientists, illustrates the value of the approach. Non-technical, non-biologist amateurs play a video game in which they visually rearrange the structure of the protein, allowing the computer to manage the atomic forces and interactions and identify structural issues. This approach beat supercomputers 50 percent of the time and tied 30 percent of the time. Foldit recently made a notable and major scientific discovery by deciphering the structure of the Mason-Pfizer monkey virus. A protease that had eluded determination for over 10 years was solved by three players in a matter of days, perhaps the first major scientific advance to come from playing a video game.
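Here's a minimal sketch of that division of labor, assuming a made-up energy function and a random nudge standing in for the player's visual intuition (Foldit's real physics scoring is far more sophisticated): the human proposes rearrangements, and the machine evaluates them and keeps the improvements.

```python
# A toy sketch (not Foldit's real engine) of the division of labor described
# above: the human proposes a structural rearrangement, and the computer
# scores it and flags physical problems. All names here are hypothetical.
import math
import random

Point = tuple[float, float, float]

def energy(structure: list[Point]) -> float:
    """Crude stand-in for a physics score: penalize residues that clash
    (closer than 1.0) or drift apart (farther than 4.0)."""
    score = 0.0
    for i in range(len(structure)):
        for j in range(i + 1, len(structure)):
            d = math.dist(structure[i], structure[j])
            if d < 1.0:
                score += (1.0 - d) * 10.0   # steric clash: heavy penalty
            elif d > 4.0:
                score += (d - 4.0)          # loose packing: mild penalty
    return score

def human_proposes_move(structure: list[Point]) -> list[Point]:
    """Placeholder for the player's intuition: nudge one residue.
    In the real game this is a mouse drag, not a random perturbation."""
    i = random.randrange(len(structure))
    x, y, z = structure[i]
    moved = list(structure)
    moved[i] = (x + random.uniform(-0.5, 0.5),
                y + random.uniform(-0.5, 0.5),
                z + random.uniform(-0.5, 0.5))
    return moved

structure = [(float(i), 0.0, 0.0) for i in range(8)]  # a toy 8-residue chain
for _ in range(200):
    candidate = human_proposes_move(structure)
    # The machine's job: evaluate the physics and keep only improvements.
    if energy(candidate) < energy(structure):
        structure = candidate
print(f"final energy: {energy(structure):.2f}")
```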
Last year, on the site of the Twin Towers, the 9/11 memorial opened. It displays the names of the thousands of victims using a beautiful concept called "meaningful adjacency," which places the names next to each other based on their relationships to one another: friends, families, coworkers. When you put it all together, it's quite a computational challenge: 3,500 victims, 1,800 adjacency requests, the importance of the overall physical specifications and the final aesthetics. When first reported by the media, full credit for such a feat was given to an algorithm from the New York City design firm Local Projects. The truth is a bit more nuanced. While an algorithm was used to develop the underlying framework, humans used that framework to design the final result. So in this case, a computer had evaluated millions of possible layouts, managed a complex relational system, and kept track of a very large set of measurements and variables, allowing the humans to focus on design and compositional choices. So the more you look around you, the more you see Licklider's vision everywhere. Whether it's augmented reality in your iPhone or GPS in your car, human-computer symbiosis is making us more capable.
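To see "meaningful adjacency" as a computation, here's a minimal sketch. The real memorial arranges names on two-dimensional panels under many more constraints, and this is not Local Projects' actual algorithm; this linear toy with hypothetical names only shows the machine's half of the job: enumerate layouts, score how many adjacency requests each honors, and leave the choice among good layouts to the designers.

```python
# Toy "meaningful adjacency": track every adjacency request and report how
# well a candidate layout satisfies them. Names and pairs are hypothetical.
from itertools import permutations

names = ["A", "B", "C", "D", "E"]
adjacency_requests = [("A", "B"), ("B", "C"), ("D", "E")]

def satisfied(layout) -> int:
    """Count adjacency requests honored by a linear layout."""
    pos = {name: i for i, name in enumerate(layout)}
    return sum(abs(pos[a] - pos[b]) == 1 for a, b in adjacency_requests)

# The computer enumerates and scores layouts; humans choose among the best.
best = max(permutations(names), key=satisfied)
print(list(best), satisfied(best), "of", len(adjacency_requests), "honored")
```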
So if you want to improve human-computer symbiosis, what can you do? You can start by designing the human into the process. Instead of thinking about what a computer will do to solve the problem, design the solution around what the human will do as well. When you do this, you'll quickly realize that you spend all of your time on the interface between man and machine, specifically on designing away the friction in the interaction. In fact, this friction is more important than the power of the man or the power of the machine in determining overall capability. That's why two amateurs with a few laptops handily beat a supercomputer and a grandmaster. What Kasparov calls process is a byproduct of friction. The better the process, the less the friction. And minimizing friction turns out to be the decisive variable.
Or take another example: big data. Every interaction we have in the world is recorded by an ever-growing array of sensors: your phone, credit card, computer. The result is big data, and it actually presents us with an opportunity to more deeply understand the human condition. The major emphasis of most approaches to big data is on, "How do I store this data? How do I search this data? How do I process this data?" These are necessary but insufficient questions. The imperative is not to figure out how to compute, but what to compute. How do you impose human intuition on data at this scale?
Again, we start by designing the human into the process. When PayPal was first starting as a business, their biggest challenge was not, "How do I send money back and forth online?" It was, "How do I do that without being defrauded by organized crime?" Why so challenging? Because while computers can learn to detect and identify fraud based on patterns, they can't recognize patterns they've never seen before, and organized crime has a lot in common with this audience: brilliant people, relentlessly resourceful, entrepreneurial spirit, and one huge and important difference: purpose. And so while computers alone can catch all but the cleverest fraudsters, catching the cleverest is the difference between success and failure.
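As a minimal sketch of what "designing the human into the process" can look like here (illustrative, not PayPal's actual system, and every name in it is hypothetical): the machine decides the clear cases automatically and routes the ambiguous middle band, where novel fraud tends to live, to a human analyst.

```python
# Human-in-the-loop triage sketch: an upstream model scores transactions,
# clear cases are decided automatically, and the gray zone goes to a human.
from dataclasses import dataclass

@dataclass
class Transaction:
    txn_id: str
    amount: float
    risk_score: float  # assume an upstream model produced this, in [0, 1]

def triage(txn: Transaction, low: float = 0.2, high: float = 0.9) -> str:
    if txn.risk_score >= high:
        return "block"           # a pattern the model has clearly seen before
    if txn.risk_score <= low:
        return "approve"         # routine traffic; no human time spent
    return "human_review"        # the interesting, possibly novel, cases

queue = [Transaction("t1", 25.0, 0.05),
         Transaction("t2", 9000.0, 0.55),
         Transaction("t3", 120.0, 0.97)]
for txn in queue:
    print(txn.txn_id, triage(txn))
```

The design choice lives in the thresholds: narrow the review band and analysts see less; widen it and they see more, trading their time for coverage of the unprecedented.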
There's a whole class of problems like this, ones with adaptive adversaries. They rarely, if ever, present with a repeatable pattern that's discernible to computers. Instead, there's some inherent component of innovation or disruption, and increasingly these problems are buried in big data.
For example, terrorism. Terrorists are always adapting in minor and major ways to new circumstances, and despite what you might see on TV, these adaptations, and the detection of them, are fundamentally human. Computers don't detect novel patterns and new behaviors, but humans do. Humans, using technology, testing hypotheses, searching for insight by asking machines to do things for them. Osama bin Laden was not caught by artificial intelligence. He was caught by dedicated, resourceful, brilliant people in partnership with various technologies.
As appealing as it might sound, you cannot algorithmically data mine your way to the answer. There is no "Find Terrorist" button. And the more data we integrate from a vast array of sources across a wide variety of data formats from very disparate systems, the less effective data mining can be. Instead, people will have to look at data and search for insight. And as Licklider foresaw long ago, the key to great results here is the right type of cooperation. And as Kasparov realized, that means minimizing friction at the interface.
Now this approach makes possible things like combing through all available data from very different sources, identifying key relationships and putting them in one place, something that's been nearly impossible to do before. To some, this has terrifying privacy and civil liberties implications. To others, it foretells an era of greater privacy and civil liberties protections. But privacy and civil liberties are of fundamental importance. That must be acknowledged, and they can't be swept aside, even with the best of intentions.
So let's explore, through a couple of examples, the impact that technologies built to drive human-computer symbiosis have had in recent years. In October 2007, U.S. and coalition forces raided an al Qaeda safe house in the city of Sinjar on the Syrian border of Iraq. They found a treasure trove of documents: 700 biographical sketches of foreign fighters. These foreign fighters had left their families in the Gulf, the Levant and North Africa to join al Qaeda in Iraq. These records were human resource forms. The foreign fighters filled them out as they joined the organization. It turns out that al Qaeda, too, is not without its bureaucracy. They answered questions like, "Who recruited you?" "What's your hometown?" "What occupation do you seek?"
In that last question, a surprising insight was revealed. The vast majority of foreign fighters were seeking to become suicide bombers for martyrdom, hugely important, since between 2003 and 2007, Iraq had 1,382 suicide bombings, a major source of instability. Analyzing this data was hard. The originals were sheets of paper in Arabic that had to be scanned and translated. The friction in the process did not allow for meaningful results in an operational time frame using humans, PDFs and tenacity alone. The researchers had to lever up their human minds with technology to dive deeper, to explore non-obvious hypotheses, and in fact, insights emerged. Twenty percent of the foreign fighters were from Libya, and 50 percent of those came from a single town in Libya, hugely important since prior statistics put that figure at three percent. It also helped to home in on a figure of rising importance in al Qaeda, Abu Yahya al-Libi, a senior cleric in the Libyan Islamic Fighting Group. In March of 2007, he gave a speech, after which there was a surge in participation amongst Libyan foreign fighters.
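Here is a toy of the kind of aggregation that surfaces such a concentration once the forms are scanned and translated. The records below are invented, not the Sinjar data; the point is only the shape of the computation the humans asked the machine to run.

```python
# Invented (country, hometown) pairs standing in for digitized forms:
# group fighters by country and by town and look for concentrations.
from collections import Counter

records = [
    ("Libya", "Town A"), ("Libya", "Town A"), ("Libya", "Town B"),
    ("Saudi Arabia", "Town C"), ("Morocco", "Town D"),
]

by_country = Counter(country for country, _ in records)
by_town = Counter(records)
total = len(records)
for country, n in by_country.most_common():
    print(f"{country}: {n / total:.0%} of fighters")
(country, town), n = by_town.most_common(1)[0]
print(f"largest single-town cluster: {town}, {country} ({n} fighters)")
```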
Perhaps most clever of all, though, and least obvious, by flipping the data on its head, the researchers were able to deeply explore the coordination networks in Syria that were ultimately responsible for receiving and transporting the foreign fighters to the border. These were networks of mercenaries, not ideologues, who were in the coordination business for profit. For example, they charged Saudi foreign fighters substantially more than Libyans, money that would have otherwise gone to al Qaeda. Perhaps the adversary would disrupt their own network if they knew they were cheating would-be jihadists.
In January 2010, a devastating 7.0 earthquake struck Haiti, the third deadliest earthquake of all time, leaving one million people, 10 percent of the population, homeless. One seemingly small aspect of the overall relief effort became increasingly important as the delivery of food and water got underway. January and February are the dry months in Haiti, yet many of the camps had developed standing water. The only institution with detailed knowledge of Haiti's floodplains had been leveled in the earthquake, its leadership inside. So the questions were: which camps are at risk, how many people are in these camps, what's the timeline for flooding, and given very limited resources and infrastructure, how do we prioritize the relocation? The data was incredibly disparate. The U.S. Army had detailed knowledge for only a small section of the country. There was data online from a 2006 environmental risk conference, other geospatial data, none of it integrated. The human goal here was to identify camps for relocation based on priority need. The computer had to integrate a vast amount of geospatial information, social media data and relief organization information to answer this question. By implementing a superior process, what was otherwise a task for 40 people over three months became a simple job for three people in 40 hours, all victories for human-computer symbiosis.
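As a hedged sketch of that triage, with invented figures and field names: the machine fuses population counts and flood-risk estimates from disparate sources into one prioritized list, and the humans decide the actual relocation plan.

```python
# Toy camp prioritization: all data here is invented. The real effort fused
# military maps, conference data, and relief reports; this is just the shape.
camps = [
    {"camp": "Camp A", "population": 5000},
    {"camp": "Camp B", "population": 12000},
    {"camp": "Camp C", "population": 800},
]
# Flood risk in [0, 1], stitched together from different sources upstream.
flood_risk = {"Camp A": 0.9, "Camp B": 0.4, "Camp C": 0.95}

def priority(camp: dict) -> float:
    """Expected people affected: population weighted by flood risk."""
    return camp["population"] * flood_risk.get(camp["camp"], 0.0)

# The computer integrates and ranks; humans decide the relocation plan.
for camp in sorted(camps, key=priority, reverse=True):
    print(camp["camp"], f"priority={priority(camp):,.0f}")
```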
We're more than 50 years into Licklider's vision for the future, and the data suggests that we should be quite excited about tackling this century's hardest problems, man and machine in cooperation. Thank you.