If there's one city in the world where it's hard to find a place to buy or rent, it's Sydney. And if you've tried to find a home here recently, you're familiar with the problem. Every time you walk into an open house, you get some information about what's out there and what's on the market, but every time you walk out, you're running the risk of the very best place passing you by. So how do you know when to switch from looking to being ready to make an offer?
This is such a cruel and familiar problem that it might come as a surprise that it has a simple solution. 37 percent.
If you want to maximize the probability that you find the very best place, you should look at 37 percent of what's on the market, and then make an offer on the next place you see that's better than anything you've seen so far. Or if you're looking for a month, take 37 percent of that time, 11 days, to set a standard, and then you're ready to act.
We know this because trying to find a place to live is an example of an optimal stopping problem, a class of problems that has been studied extensively by mathematicians and computer scientists.
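To make the rule concrete, here is a minimal Monte Carlo sketch (my illustration, not part of the talk): it assumes 100 places come on the market in random order, that each one can only be compared against the places already seen, and the function name and trial count are arbitrary.

```python
import random

def search_with_lookfirst(n_options, look_fraction=0.37):
    """Simulate one housing search: look at the first `look_fraction` of the
    options without committing, then take the first one that beats everything
    seen so far (settling for the last option if nothing ever does)."""
    ranks = list(range(n_options))       # 0 is the best place on the market
    random.shuffle(ranks)                # places are viewed in random order
    cutoff = int(n_options * look_fraction)
    best_seen = min(ranks[:cutoff], default=n_options)
    for rank in ranks[cutoff:]:
        if rank < best_seen:             # first place better than the benchmark
            return rank == 0             # did we land the very best one?
    return ranks[-1] == 0                # never beat the benchmark; took the last place

trials = 100_000
wins = sum(search_with_lookfirst(100) for _ in range(trials))
print(f"Found the very best place in {wins / trials:.1%} of searches")
# With a 37 percent look-first phase this hovers around 37 percent,
# which is also the success probability mentioned later in the talk.
```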
I'm a computational cognitive scientist. I spend my time trying to understand how it is that human minds work, from our amazing successes to our dismal failures. To do that, I think about the computational structure of the problems that arise in everyday life, and compare the ideal solutions to those problems to the way that we actually behave. As a side effect, I get to see how applying a little bit of computer science can make human decision-making easier.
I have a personal motivation for this. Growing up in Perth as an overly cerebral kid...
I would always try and act in the way that I thought was rational, reasoning through every decision, trying to figure out the very best action to take. But this is an approach that doesn't scale up when you start to run into the sorts of problems that arise in adult life. At one point, I even tried to break up with my girlfriend, because trying to take into account her preferences as well as my own, and then find perfect solutions, was just leaving me exhausted.
She pointed out that I was taking the wrong approach to solving this problem—and she later became my wife.
Whether it's as basic as trying to decide what restaurant to go to or as important as trying to decide who to spend the rest of your life with, human lives are filled with computational problems that are just too hard to solve by applying sheer effort. For those problems, it's worth consulting the experts: computer scientists.
When you're looking for life advice, computer scientists probably aren't the first people you think to talk to. Living life like a computer—stereotypically deterministic, exhaustive and exact—doesn't sound like a lot of fun. But thinking about the computer science of human decisions reveals that in fact, we've got this backwards. When applied to the sorts of difficult problems that arise in human lives, the way that computers actually solve those problems looks a lot more like the way that people really act.
Take the example of trying to decide what restaurant to go to. This is a problem that has a particular computational structure. You've got a set of options, you're going to choose one of those options, and you're going to face exactly the same decision tomorrow. In that situation, you run up against what computer scientists call the "explore-exploit trade-off." You have to make a decision about whether you're going to try something new—exploring, gathering some information that you might be able to use in the future—or whether you're going to go to a place that you already know is pretty good—exploiting the information that you've already gathered so far. The explore/exploit trade-off shows up any time you have to choose between trying something new and going with something that you already know is pretty good, whether it's listening to music or trying to decide who you're going to spend time with. It's also the problem that technology companies face when they're trying to do something like decide what ad to show on a web page. Should they show a new ad and learn something about it, or should they show you an ad that they already know there's a good chance you're going to click on?
Over the last 60 years, computer scientists have made a lot of progress understanding the explore/exploit trade-off, and their results offer some surprising insights. When you're trying to decide what restaurant to go to, the first question you should ask yourself is how much longer you're going to be in town. If you're just going to be there for a short time, then you should exploit. There's no point gathering information. Just go to a place you already know is good. But if you're going to be there for a longer time, explore. Try something new, because the information you get is something that can improve your choices in the future. The value of information increases the more opportunities you're going to have to use it.
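A small simulation makes the horizon effect visible. This sketch is mine rather than the talk's: the one known "pretty good" restaurant, the hidden qualities of the new places, and the simple explore-then-commit strategy are all illustrative assumptions.

```python
import random

KNOWN_QUALITY = 0.6                 # the place you already know is pretty good
NEW_QUALITIES = [0.8, 0.3, 0.2]     # hidden chance of a good meal at untried places

def trip(nights, explore_first):
    """One stay in town. If exploring, spend the first nights trying each new
    place once, then commit to whatever looked best (new or known) for the rest.
    Otherwise exploit: eat every night at the place you already know is good."""
    estimates = [None] * len(NEW_QUALITIES)
    good_meals = 0
    for night in range(nights):
        if explore_first and night < len(NEW_QUALITIES):
            quality = NEW_QUALITIES[night]           # try a new place tonight
            estimates[night] = 1 if random.random() < quality else 0
            good_meals += estimates[night]
        else:
            best = max((e for e in estimates if e is not None), default=0)
            if best > KNOWN_QUALITY:                 # a new place looked better
                quality = NEW_QUALITIES[estimates.index(best)]
            else:                                    # stick with the known place
                quality = KNOWN_QUALITY
            good_meals += 1 if random.random() < quality else 0
    return good_meals / nights

random.seed(1)
for nights in (4, 100):                              # short trip vs. long stay
    for explore in (False, True):
        rate = sum(trip(nights, explore) for _ in range(5000)) / 5000
        label = "explore first" if explore else "always exploit"
        print(f"{nights:>3} nights, {label:>14}: {rate:.2f} good meals per night")
# Exploring drags down a short trip, but over a long stay the information
# gathered early pays for itself night after night.
```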
This principle can give us insight into the structure of a human life as well. Babies don't have a reputation for being particularly rational. They're always trying new things, and you know, trying to stick them in their mouths. But in fact, this is exactly what they should be doing. They're in the explore phase of their lives, and some of those things could turn out to be delicious. At the other end of the spectrum, the old guy who always goes to the same restaurant and always eats the same thing isn't boring—he's optimal.
He's exploiting the knowledge that he's earned through a lifetime's experience. More generally, knowing about the explore/exploit trade-off can make it a little easier for you to sort of relax and go easier on yourself when you're trying to make a decision. You don't have to go to the best restaurant every night. Take a chance, try something new, explore. You might learn something. And the information that you gain is going to be worth more than one pretty good dinner.
Computer science can also help to make it easier on us in other places at home and in the office. If you've ever had to tidy up your wardrobe, you've run into a particularly agonizing decision: you have to decide what things you're going to keep and what things you're going to give away. Martha Stewart turns out to have thought very hard about this, and she has some good advice. She says, "Ask yourself four questions: How long have I had it? Does it still function? Is it a duplicate of something that I already own? And when was the last time I wore it or used it?" But there's another group of experts who perhaps thought even harder about this problem, and they would say one of these questions is more important than the others. Those experts? The people who design the memory systems of computers. Most computers have two kinds of memory systems: a fast memory system, like a set of memory chips that has limited capacity, because those chips are expensive, and a slow memory system, which is much larger. In order for the computer to operate as efficiently as possible, you want to make sure that the pieces of information you want to access are in the fast memory system, so that you can get to them quickly. Each time you access a piece of information, it's loaded into the fast memory, and because that memory has limited capacity, the computer has to decide which item to remove to make room.
Over the years, computer scientists have tried a few different strategies for deciding what to remove from the fast memory. They've tried things like choosing something at random or applying what's called the "first-in, first-out principle," which means removing the item that has been in the memory the longest. But the strategy that's most effective focuses on the items which have been least recently used. This says that if you're going to remove something from memory, you should take out the thing that was last accessed furthest in the past. And there's a certain kind of logic to this. If it's been a long time since you last accessed that piece of information, it's probably going to be a long time before you need to access it again. Your wardrobe is just like the computer's memory. You have limited capacity, and you need to keep in it the things that you're most likely to need, so that you can get to them as quickly as possible. Recognizing that, maybe it's worth applying the least recently used principle to organizing your wardrobe as well. So if we go back to Martha's four questions, the computer scientists would say that of these, the last one is the most important.
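Here is a minimal sketch of that least-recently-used rule, using Python's collections.OrderedDict to stand in for the fast memory; the tiny capacity and the wardrobe-flavoured item names are just for illustration.

```python
from collections import OrderedDict

class LRUCache:
    """A tiny 'fast memory' with limited capacity: when it fills up, evict
    the item whose last access lies furthest in the past."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()      # ordered from least to most recently used

    def access(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)     # touching an item makes it most recent
        else:
            self.items[key] = value         # load it in from "slow memory"
            if len(self.items) > self.capacity:
                evicted, _ = self.items.popitem(last=False)   # least recently used goes
                print(f"evicting {evicted!r}")
        return self.items[key]

wardrobe = LRUCache(capacity=3)
for garment in ["coat", "jeans", "boots", "coat", "scarf"]:
    wardrobe.access(garment, garment)
# "jeans" gets evicted rather than "coat", because the coat was used again
# more recently -- exactly Martha's last question.
```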
This idea of organizing things so that the things you are most likely to need are most accessible can also be applied in your office. The Japanese economist Yukio Noguchi actually invented a filing system that has exactly this property. He started with a cardboard box, and he put his documents into the box from the left-hand side. Each time he'd add a document, he'd move what was in there along and he'd add that document to the left-hand side of the box. And each time he accessed a document, he'd take it out, consult it and put it back in on the left-hand side. As a result, the documents would be ordered from left to right by how recently they had been used. And he found he could quickly find what he was looking for by starting at the left-hand side of the box and working his way to the right.
Before you dash home and implement this filing system—it's worth recognizing that you probably already have.
That pile of papers on your desk... typically maligned as messy and disorganized, a pile of papers is, in fact, perfectly organized. As long as, when you take a paper out, you put it back on the top of the pile, those papers are going to be ordered from top to bottom by how recently they were used, and you can probably quickly find what you're looking for by starting at the top of the pile.
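The move-to-front behaviour behind the pile (and behind Noguchi's box) fits in a few lines; this is an illustrative sketch, and the paper names are invented.

```python
def retrieve(pile, name):
    """Find a paper by scanning from the top of the pile, then put it back
    on top, so the pile stays ordered by how recently each paper was used."""
    depth = pile.index(name)            # how far down we had to dig
    pile.insert(0, pile.pop(depth))     # back on top: most recently used first
    return depth + 1

pile = ["tax return", "meeting notes", "insurance form", "draft talk"]
for wanted in ["meeting notes", "tax return", "meeting notes"]:
    sheets = retrieve(pile, wanted)
    print(f"{wanted!r} was {sheets} sheet(s) down; pile is now {pile}")
# Papers you keep coming back to drift toward the top, so most searches stay short.
```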
Organizing your wardrobe or your desk are probably not the most pressing problems in your life. Sometimes the problems we have to solve are simply very, very hard. But even in those cases, computer science can offer some strategies and perhaps some solace. The best algorithms are about doing what makes the most sense in the least amount of time. When computers face hard problems, they deal with them by making them into simpler problems—by making use of randomness, by removing constraints or by allowing approximations. Solving those simpler problems can give you insight into the harder problems, and sometimes produces pretty good solutions in their own right.
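As one illustration of that last point (my example, not one from the talk): packing a bag is a small knapsack problem. Checking every combination gives the exact answer but blows up exponentially, while a greedy approximation or a handful of random guesses gets close in a tiny fraction of the time. The items, weights and values below are made up.

```python
import itertools
import random

# A made-up packing problem: item -> (weight in kg, how much you value it).
ITEMS = {"camera": (1.5, 9), "laptop": (2.0, 8), "books": (3.0, 4),
         "tripod": (2.5, 5), "jacket": (1.0, 6), "snacks": (0.5, 3),
         "drone": (2.2, 7), "shoes": (1.8, 5)}
LIMIT = 7.0   # baggage weight allowance in kg

def value(bag):
    """Total value of a bag, or 0 if it is over the weight limit."""
    weight = sum(ITEMS[i][0] for i in bag)
    return sum(ITEMS[i][1] for i in bag) if weight <= LIMIT else 0

# Exact answer: try all 2^n combinations. Fine for 8 items, hopeless for 80.
exact = max(value(bag) for r in range(len(ITEMS) + 1)
            for bag in itertools.combinations(ITEMS, r))

# Simplification 1: an approximation -- greedily pack by value per kilogram.
bag, weight = [], 0.0
for name in sorted(ITEMS, key=lambda i: ITEMS[i][1] / ITEMS[i][0], reverse=True):
    if weight + ITEMS[name][0] <= LIMIT:
        bag.append(name)
        weight += ITEMS[name][0]
greedy = value(bag)

# Simplification 2: make use of randomness -- just try a few hundred random bags.
random.seed(2)
best_random = max(value([i for i in ITEMS if random.random() < 0.5])
                  for _ in range(300))

print(f"exhaustive: {exact}, greedy: {greedy}, random sampling: {best_random}")
# The shortcuts aren't guaranteed to match the exhaustive answer, but they come
# close while doing a tiny fraction of the work.
```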
Knowing all of this has helped me to relax when I have to make decisions. You could take the 37 percent rule for finding a home as an example. There's no way that you can consider all of the options, so you have to take a chance. And even if you follow the optimal strategy, you're not guaranteed a perfect outcome. If you follow the 37 percent rule, the probability that you find the very best place is—funnily enough...37 percent. You fail most of the time. But that's the best that you can do.
Ultimately, computer science can help to make us more forgiving of our own limitations. You can't control outcomes, just processes. And as long as you've used the best process, you've done the best that you can. Sometimes those best processes involve taking a chance—not considering all of your options, or being willing to settle for a pretty good solution. These aren't the concessions that we make when we can't be rational—they're what being rational means.
Thank you.