Daniel Patrick Moynihan, the late US Senator from New York, has often been credited with one of the go-to sayings of American politics: “Everyone is entitled to their own opinion, but not their own facts.” How quaint. Today, many political movements, interest groups, and politicians aggressively circulate their own flagrantly false “facts,” getting followers to buy in.
The Russian misinformation campaign leading up to the 2016 US presidential election was unprecedented in American politics. In the aftermath, companies such as Facebook and Twitter are still wrestling with how to handle the bogus conspiracy theories their platforms have been used to circulate. And the effects of political lies extend beyond the ballot box: in a notorious 2016 incident, a gunman fired an AR-15 rifle in a Washington, DC, pizza parlor in response to a political misinformation campaign falsely alleging that children were being held there as sex slaves in a child-trafficking conspiracy led by Hillary Clinton.
In short, the problem of misinformation in politics is “not going away,” says Adam Berinsky, the Mitsui Professor of Political Science at MIT and director of the MIT Political Experiments Research Lab (PERL). Moreover, the vitriolic rhetoric accompanying these falsehoods “has infected politics,” Berinsky adds, and “is definitely something that has gotten worse over the last decade or so.”
No one has an easy way to stop the flow of political misinformation or reverse its effects. But we can study it. And so in recent years, MIT scholars have increasingly tackled issues of truth and lies in politics, producing a variety of data-driven studies, often with surprising results. Want to know how fast false news travels, how sticky rumors can be, and how to combat them? MIT researchers have answers.
“It’s hard to imagine a better environment to study fake news, rumors, and how to dispel them than MIT,” says Andrea Campbell, head of the Department of Political Science. She adds that scholars in the Department of Political Science, the Media Lab, the Sloan School, EECS, and other programs are uniquely well positioned to address what she calls “this profound threat to democracies everywhere.”
How to (slightly) quash a lie
If you want to put a stop to a rumor, you’re probably going to need help.
Berinsky has been digging into the subject of political misinformation longer than most scholars—about a decade. His research has produced the vexing finding that attempting to debunk falsehoods can entrench them further. In politics, lies can produce a quicksand effect: the struggle to escape may make things worse.
He first observed this in 2009, when opponents of President Barack Obama’s proposed Affordable Care Act falsely claimed the legislation would fund “death panels” to cut off medical care for the unwell. In reality, the provision in question simply allowed doctors to counsel patients about their end-of-life care options. But as Berinsky found, having Democratic Party figures or even neutral parties attempt to debunk the lie made more people believe it.
The one palliative Berinsky has found is that some people do believe corrections when they come from figures who would stand to benefit if the lie were true. In a 2015 paper testing the effect of actual quotes from Republican and Democratic senators as well as from a nonpartisan speaker, he showed that only the correction from a Republican politician significantly reduced belief in “death panels,” raising the proportion of people rejecting the rumor from 57 to 69 percent.
The ineffectiveness of even neutral-party corrections, Berinsky thinks, helps explain the persistence of a wide range of false political beliefs, from the claim that Obama was not born in the US to the notion that the terrorist attacks of September 11, 2001, were an “inside job” perpetrated by the US government. Perhaps it’s because of the sheer power of repetition, or perhaps it’s because most people—Republicans and Democrats alike—won’t accept corrections coming from outside their own political tribe. Either way, expecting the truth to prevail simply because it’s true is a wildly ineffectual strategy.
“Having fact checkers is great, because we should have a sense of what is true and not true in the world,” says Berinsky. “But just the existence of fact checkers alone is not going to ensure that the truth wins out.” After all, he adds, “because people do have partisan and tribal loyalties, they don’t give rumors up just because they’re false.”
Why we’re drawn to wild stories
One reason it’s so hard to quash fake news is that people can be quite eager to acquire—and pass along—false information in the first place. A 2018 study by three MIT scholars, appearing in the journal Science, found that on Twitter, false news stories are 70 percent more likely to be retweeted than true stories are.
“False news is more novel, and people are more likely to share novel information,” says Sinan Aral, a professor at the MIT Sloan School of Management and coauthor of the paper. Fellow coauthor Deb Roy, an associate professor of media arts and sciences at the MIT Media Lab and director of its Laboratory for Social Machines (LSM), was also Twitter’s chief media scientist from 2013 to 2017. And coauthor Soroush Vosoughi ’08, SM ’10, PhD ’15, an LSM postdoc, researched the spread of rumors online for his PhD.
As the deepest look of its kind at Twitter, the study casts new light on “fundamental aspects of our online communication ecosystem,” as Roy puts it. The research project tracked roughly 126,000 chains of news stories retweeted by about three million people from 2006 to 2017, and evaluated their accuracy using the assessments of six fact-checking organizations.
The researchers also concluded that, surprisingly, programmed bots designed to disseminate false stories were not responsible for the spread of these untruths; human users bore most of the blame. Indeed, the phenomenon seems rooted in human psychology.
Analytical thinkers spot fake news better
Associate professor David Rand believes that people may fall prey to false information when they fail to process it properly. Rand, who joined the MIT Sloan School of Management faculty in 2018 from Yale, has long studied cognition, game theory, and cooperation, among other issues. After the 2016 election, he started running experiments about political falsehoods, too.
“I really reoriented a lot of my research in that direction, feeling like misinformation was posing a really serious challenge to our society,” he says.
Rand’s work suggests that people are not necessarily too steeped in ideology to tell truth from falsehood; rather, they simply vary widely in their basic proficiency at identifying lies. In one study, published in the journal Cognition this year, Rand and coauthor Gordon Pennycook of Yale had over 3,000 participants examine fake and real news headlines and also take a cognitive test. They found that people more inclined to think analytically were also more likely to reject untrue stories.
“What we find is that people who are better at reasoning are better at telling fake from real, or hyperpartisan from real,” Rand says. “They’re better at identifying true, accurate headlines.” He adds: “People are falling for bad content because they’re not thinking about it.” A series of more than 40 additional studies on the topic, Rand says, reinforces this point.
Thus, he suggests, we should put less emphasis on “motivated reasoning”—people’s supposed desire to interpret everything through a partisan framework—and be wary of solutions that rely on this concept.
Those kinds of solutions wrongly imply that “the way you’re going to get people to be more discerning in their consumption of media is to get them to be less partisan,” Rand says. “Particularly in the media aftermath of the [2016] election, there were a ton of articles basically saying that.” But his work suggests that neither liberals nor conservatives are more inclined toward ideologically motivated reasoning.
Nevertheless, he did find that Hillary Clinton’s supporters from 2016 were generally better at distinguishing false news from the real thing than Donald Trump’s supporters were. “The present results indicate that there is, in fact, a political asymmetry when it comes to the capacity to discern the truth in news media,” Rand and Pennycook write in the Cognition paper, adding that the reasons for this split “remain unclear.” In a separate study, the authors have also found that Trump voters were less reflective than Clinton voters or third-party voters—but that much of the difference stemmed from Democrats who voted for Trump.
The polarized states of America
What is clearer is that partisanship in politics is increasing. Consider the historical scholarship of Devin Caughey, an associate professor of political science at the Institute. Along with his colleague Chris Warshaw—formerly of MIT, now of George Washington University—and a team of researchers, Caughey has created an entirely new data set on state-level politics in the US from 1936 to 2014, covering almost 150 policy issues over time.
One of their key findings, published in 2015, is that regions in the US have diverged sharply over the last two decades: the already-conservative South has become relatively more conservative (while flipping from Democratic Party to Republican Party control) and the Midwest slightly more conservative, while the Northeast and West Coast states have become more liberal.
An even bigger historical trend, however, is that state-level policies became more economically liberal from 1936 until around 1970. Since then, the trend toward liberalism, where it exists, has focused more on social issues such as gay marriage. Or, as Caughey puts it, “economic policies have been constant” since about 1970, “but social policies have gone in a more liberal direction.”
In a follow-up paper published in 2018, the researchers integrated historical polling into the project and found that these state-level policies are fairly responsive to public opinion over time.
Caughey has further explored some of these divisions in a new book out this fall, The Unsolid South, published by Princeton University Press. In it, he contends that the supposedly “solid South”—almost entirely under Democratic Party control in midcentury, now controlled by the Republican Party—has long been riddled with divisions. Southern congressmen largely supported the New Deal, for instance, but many of them rebelled against it by 1947, backing the Taft-Hartley Act, which limited the power of unions.
Caughey says this was a “critical turning point in American political development” that occurred for a variety of reasons, including regional hostility to organized labor as well as a sense that the New Deal, having helped Southern whites enormously, was now about to lift blacks up as well—something anti-civil-rights politicians hoped to avoid.
“Part of it was the growing fear that the New Deal state posed a potential and maybe actual threat to Jim Crow in the South,” Caughey says. “So racial fears came to the fore.”
Scrutinizing the history of America’s substantial political divisions is a sobering reminder of the conditions in which partisan acrimony, rumors, false stories, and misinformation can flourish. The social splits in the US have long existed, but new media can now turbocharge the volume of falsehoods that reach people and may keep them living in separate factual domains.
Still, as Berinsky acknowledges with a bit of weary irony, the things that fracture the polity are ripe for study: “Fake news may be bad for democracy, but good for business,” he says.
Indeed, he is currently working to complete a four-year project, funded by the National Science Foundation, to examine the effects of political media in a methodologically sophisticated way. The project—in which Berinsky is partnering with Teppei Yamamoto, an associate professor in the Department of Political Science—aims to disentangle the threads of cause and effect in the realm of media influence.
“Do people seek out news that fits their beliefs, or do they learn their beliefs from watching the news?” Berinsky says. “Basically, do people become conservative because they watch Fox News, or do they watch Fox News because they’re conservative?”
While the researchers are still working on the project, their studies are specifically designed to disentangle the varying effects of media exposure on people who already hold different ideologies. Watching Fox News probably affects liberals and conservatives in different ways, and may also affect those who try to avoid news altogether. Thus, as Berinsky puts it, the researchers are attempting to understand “how people choose media, and how they react to media, in a unified framework.”
After all, a better grasp of media influence can tell us how much political information (and misinformation) is really affecting US democracy. This is something that Berinsky says scholars at MIT are well equipped to examine. The Institute has “probably the best methods training in the country right now” to give students the tools to study such things, he adds. “For me, MIT is a great place because we’re at the forefront of these big substantive questions that are animating American politics.”