Amy Zegart, a Stanford professor, argues that more and more of the future is becoming predictable.
In the coming days, President Obama will be reviewing intelligence. He will be poring over data and asking for expert opinions. He will also examine history, consider options, and combine intuition with experience. No, it’s not to launch a drone attack, intervene in Syria, or fight Chinese cyber-hacking. Instead, like millions of Americans, Obama will be picking his NCAA men’s basketball bracket for March Madness.
Many professions are in the business of predicting the future: doctors, Wall Street traders, movie executives. Each sits at a different point on the predictability spectrum. March Madness bracketology lies at the easy end: predicting which teams will reach the finals of this year’s basketball tournament. At the opposite, very difficult extreme sits assessing national security threats and outcomes. These two extremes shed light on the factors that make some human activities more predictable than others. They also suggest that more things are predictable than we once thought.
The first factor separating the easy end of the spectrum from the hard end is data: how much information exists about similar events in previous years. Sports competitions are famously data-rich. That doesn’t mean sportscasters are always right. Without upsets, March Madness wouldn’t be maddening: everyone loves a Cinderella team that wins against all odds. They don’t call it “winning despite the odds” for nothing. How often does the lowest-seeded team in a bracket take home the title? Never. The lowest-seeded NCAA champion in history was Villanova in 1985, and it was seeded 8th out of 16 teams in its bracket. As Louisville fans well know, the Final Four is where the usual suspects end up. In March Madness, history may not determine your fate, but it is a good guide to the future.
Intelligence analysts have no such extensive archive of similar cases from which to predict future outcomes. Take the current Iranian nuclear crisis. Only nine countries have nuclear weapons. Five of them had the bomb before anyone had landed on the Moon. North Korea may be the latest nuclear rogue, but its bizarre ruling family is hardly a generalizable model. South Africa is the only country to have developed a nuclear arsenal and then dismantled it voluntarily, mainly because apartheid was crumbling and the departing white regime feared the incoming black government would inherit the bomb. This is no bracketology database.
Second is how visible biases are. Biases can never be eliminated, but their distortions can be reduced if people know about them. At sporting events, we wear our preferences on our sleeves. Everyone knows I will overestimate the Louisville Cardinals’ chances of winning the NCAA title every March because they are my hometown team. Because this bias is obvious, people take my pro-Louisville predictions with a grain of salt. In the CIA, however, no one wears a T-shirt saying, “I often succumb to confirmation bias, giving greater weight to information that confirms my prior beliefs and discounting information that contradicts them.”
Third is information asymmetry. In March Madness, everyone has access to the same information; your level of expertise depends on how much time you spend watching ESPN and researching past statistics. In intelligence, information is classified, leaving analysts working from different data sets. Imagine distributing NCAA brackets to 1,000 people who do not all know one another, some of whom have no idea what a bracket is or why it matters. If they share anything with the wrong person, they could face discipline, termination, or even prosecution. Yet to succeed, they must collectively pick the winner.
Fourth is whether it is clear who is winning and who is losing. This matters because clear metrics create feedback loops that analysts can use to improve future predictions. In sports, it is easy to tell who won and who lost. In foreign policy, it is not. Is al Qaeda on the road to defeat? Is Iraq heading toward stability? Is the war in Afghanistan being won? Who was right, and who was wrong? It is hard to tell. Today’s answer could differ in one year, ten years, or 100 years. Today’s headlines and tomorrow’s history books are seldom the same. Without a clear track record to examine, analysts cannot learn from past predictions to improve future ones.
The fifth and final factor is deception. University of Louisville coach Rick Pitino may be saving something for the tournament, perhaps a new play or a surprise substitution. But that is nothing compared to the lengths to which states and transnational actors go to hide their true intentions and capabilities.
So much for the extremes. The bigger news is that predictability in the middle of the spectrum is increasing. People are developing clever ways to generate better data, identify and counteract biases, and share information on a scale unimaginable even 10 or 20 years ago. The result: a growing range of human activity has moved from analysis-by-gut-check to analysis-by-evidence. Nobody predicted just how much could now be predicted.
Three of my favorite examples of this prediction revolution are election forecasting, medical decision-making, and the study of ethnic conflict.
In the 2012 election, Nate Silver, then a New York Times columnist, used statistical models and polling data to beat the gut feelings and experience of veteran pundits who predicted a Romney win. Conservative columnist George Will called Minnesota his “wildcard” and predicted that the Republican would win with 321 electoral votes. On the eve of the election, Peggy Noonan wrote, “While everyone is looking at the polls and the storm, Romney’s slipping into the presidency.” Romney slipped all right, but not in the polls Noonan was watching. It was a big, public triumph of big data and sound analysis over reasoning-by-anecdote-and-wishful-thinking, and it made old-school pundits look old.
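To make the contrast concrete: the core idea behind poll-based forecasting is aggregation, combining many noisy polls into one estimate instead of leaning on any single survey or anecdote. The sketch below is a minimal illustration of that idea, not Silver’s actual model; the function name, the weighting scheme, and the poll numbers are all hypothetical, invented for this example.

```python
# Minimal sketch of poll aggregation (illustrative only, not Nate
# Silver's actual model). Each poll is weighted by sample size and
# recency, so large, recent polls count more than small, stale ones.

def weighted_poll_average(polls):
    """Return the weighted average candidate margin across polls.

    polls: list of (margin_in_points, sample_size, days_old) tuples.
    """
    total_weight = 0.0
    weighted_sum = 0.0
    for margin, sample_size, days_old in polls:
        # Hypothetical weighting: grows with sample size, decays with age.
        weight = sample_size / (1.0 + days_old)
        weighted_sum += margin * weight
        total_weight += weight
    return weighted_sum / total_weight

# Hypothetical polls, NOT real 2012 data:
# (candidate margin in points, sample size, days old)
polls = [
    (+2.0, 800, 1),    # recent mid-sized poll: candidate up 2
    (-1.0, 400, 10),   # older, smaller poll: candidate down 1
    (+3.0, 1200, 3),   # recent large poll: candidate up 3
]

print(round(weighted_poll_average(polls), 2))
```

The outlier (the candidate down 1) barely moves the estimate because it is older and smaller; that damping of individual noisy data points is what separates aggregation from reasoning off a single memorable poll or anecdote.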