When we make decisions based on what we think someone else will do, in anything from chess to warfare, we must use reason to infer the other's next move, or the next three or more moves, to know what we must do. This so-called recursive reasoning ability in humans has been thought to be quite limited.
But now, in just-published research led by a psychologist at the University of Georgia, it appears that people can engage in much higher levels of recursive reasoning than was previously thought.
“In fact, they do it fairly easily and automatically,” said Adam Goodie, head of the Georgia Decision Lab at UGA, “if the game is one that is simple and engages the tendency to pay attention to competition.”
The study was published in the Journal of Behavioral Decision Making. Co-authors of the new research are Prashant Doshi, of UGA’s department of computer science, and Diana Young, now of Georgia College and State University. (She was at UGA when the study was done.) At UGA, the departments of psychology and computer science are both part of the Franklin College of Arts and Sciences.
Decision-making is part of day-to-day life, but when it involves competition, the complexity grows exponentially. Think of the classic scene in The Princess Bride when Vizzini and the Man in Black argue over which of two wine cups is poisoned. ("The battle of wits has begun. It ends when you decide and we both drink, and find out who is right . . . and who is dead.") In games such as chess, "thinking ahead" and trying to ferret out your opponent's moves is what distinguishes a casual player from a grandmaster.
When people typically make decisions, especially in competitive situations, they try to choose the path that has the most advantageous outcome, said Goodie. And while sometimes the best path is obvious, often it’s less clear, especially in games or military conflict.
“The question we asked was this: What level of reasoning do human beings engage in when they aren’t master chess players?” Goodie said. “Previous findings had been extremely pessimistic, suggesting that people were about equally likely merely to acknowledge the immediate preferences of an opponent as they were to go beyond that to higher levels of reasoning. If they do go to a higher level, it seemed that they only thought one step ahead.”
To find out how many "moves ahead" people can actually reason, Goodie and his colleagues set up an experiment in which large samples of student participants (136 in one trial, 232 in another) "played" against a programmed computer.
Called the "3-2-1-4 Game," the experiment was laid out on four spaces arranged in a square and numbered in that sequence, starting with 3. The students were told they were playing against another participant in another room. They and their invisible opponent (actually the computer) would walk around the spaces together, starting and stopping together, and alternating on who decides whether to stop where they are or to continue moving forward. The complicating factor was that each player's chance of winning money depended on where they finished: the students won more on the highest possible number, and the opponent won more on the lowest.
“The ideal solution is to think ahead to what will happen if you get all the way around to 1,” said Goodie, “and you have to choose whether to stay there or move to 4.”
Contrary to previous literature, those in the experiment had no trouble with the game, ramping up from what is called “first-level reasoning” to “second-level reasoning” easily and consistently.
After discovering that participants had little trouble with the four spaces, the researchers made the game even more complicated by adding five stops in the order 3-2-4-5-1, with the same rules applying.
“To our surprise, participants had just as little trouble learning the game and playing it at the highest possible level,” said Goodie.
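The stop-or-continue structure of both games lends itself to backward induction: work from the last decision cell back to the first, and at each cell assume the current decider picks whichever of "stay" or "keep going" suits their own payoff ranking. The sketch below is a minimal illustration, not the study's actual materials. It assumes the participant prefers higher cell numbers and the opponent lower ones (as the article describes), that decisions alternate with the participant deciding at cell 1 (as Goodie's quote implies, which makes the participant the first mover), and that the last listed cell in each path is terminal.

```python
def solve_stop_game(cells, first_mover="participant"):
    """Backward-induction sketch of a stop-or-continue game.

    cells: payoff values along the path; the last cell is terminal and
    is reached only if every decider chooses to keep moving.
    The participant prefers higher numbers, the opponent lower ones.
    Returns the cell value where play ends under fully rational play.
    """
    outcome = cells[-1]                   # value if nobody ever stops
    order = ["participant", "opponent"]
    start = order.index(first_mover)
    # Walk the decision cells (all but the terminal one) in reverse.
    for i in range(len(cells) - 2, -1, -1):
        decider = order[(start + i) % 2]
        stay = cells[i]
        if decider == "participant":
            outcome = max(stay, outcome)  # participant keeps the higher value
        else:
            outcome = min(stay, outcome)  # opponent keeps the lower value
    return outcome

# Four-cell game: decisions at 3, 2, and 1; cell 4 is terminal.
print(solve_stop_game([3, 2, 1, 4]))
# Five-cell variant with the same rules, path 3-2-4-5-1.
print(solve_stop_game([3, 2, 4, 5, 1]))
```

Under these assumptions, both games end with play stopping on cell 3: the participant at cell 1 would move on to 4, so the opponent prefers to stop at 2, so the participant stops at 3 from the start. Arriving at that answer requires reasoning at least two steps ahead, which is the kind of higher-level recursive reasoning the study measured.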
In another popular film, WarGames, humans who appear unable to decide whether or not to launch nuclear missiles are replaced by a computer with chaotic and potentially disastrous results. The new research shows, as the film hints, that maybe people could have done the job all along.
The research at UGA was supported by the U.S. Air Force.