‘Fake News’ Can’t Be Flagged Down

Even when the content is flagged as potentially false, Facebook users believe headlines that align with their political beliefs.

Based on the research of Tricia Moravec


Telling the difference between fact and fiction on Facebook is more complicated than most of us would like to believe. It can be challenging for users to reach judgments about truth and falsehood when legitimate news appears alongside animal memes, family updates, videos from friends, and paid content.

“We all believe that we are better than the average person at detecting fake news, but that’s simply not possible,” says Texas McCombs Assistant Professor Patricia Moravec. “The environment of social media and our own biases make us all much worse than we think.”

Facebook’s algorithmically determined personalized feeds supply targeted information — and disinformation. But do users really believe the fake news that populates their Facebook accounts? Are there tools that can help readers tell what’s legitimate news and what’s not?

“Scrolling through social media is something people do to relax, so they’re in a passive, pleasure-seeking state of mind when they do so,” says lead author Moravec, assistant professor of information, risk and operations management. “This is different from the more purposeful mindset of a person who actively chooses to take time out of their day to read, watch, or listen to a daily news report from a reputable news source.”

Facebook Fact or Fiction

Differences in context and mindset have previously been shown to affect people’s ability to discern the truth. So Moravec’s team set out to investigate whether one of the tools meant to help social media users hone their news judgment actually works: In late 2016, Facebook incorporated fact-checking into its platform and began flagging certain news articles as “disputed by third-party fact checkers.”

Moravec, along with Randall K. Minas of the University of Hawaii at Manoa and Alan R. Dennis of Indiana University, conducted an experiment to find out how well the Facebook “fake news” warning flag worked to help users differentiate between legitimate and illegitimate news when scrolling through their feeds.

In the study, 80 undergraduate students who were experienced social media users first answered 10 questions about their own political beliefs. Each participant was then fitted with a wireless electroencephalography headset that tracked brain activity related to cognition. The students were asked to read 50 political news headlines presented as they would appear in a Facebook feed and to assess their credibility. The headlines ranged from true to false, with some gradations in between: “Trump Signs New Executive Order on Immigration” (clearly true), “Nominee to Lead EPA Testifies He’ll Enforce Environmental Laws” (true), “Russian Spies Present at Trump’s Inauguration — Seated on Inauguration Platform” (false). The researchers randomly assigned fake news flags to some of the headlines to see what impact the flags had on the participants’ responses. The students rated each headline’s believability, credibility, and truthfulness.

As they worked through the exercise, the participants spent more time and showed significantly more activity in their frontal cortices — the brain area associated with arousal, memory access, and consciousness — when headlines supported their beliefs but were flagged as false. This discomfort signaled cognitive dissonance: headlines matching their beliefs were being contradicted.

But this dissonance wasn’t enough to make participants change their minds. They overwhelmingly said that headlines conforming with their preexisting beliefs were true, regardless of whether they were flagged as potentially fake. The flag didn’t change their initial response to the headline, even if it did make them pause a moment longer and study it a bit more carefully.

“Our research shows that users are going to believe articles that align with their beliefs regardless of the validity of the article. Facebook’s fake news flag isn’t going to change that. We need better flags if Facebook isn’t going to remove fake news from its platform.” — Patricia Moravec

In late October 2019, Twitter announced a ban on political advertisements. That is a step in the right direction for reducing fake news, Moravec says, and now Facebook and Instagram need to do their parts.

Falling Hard for ‘Fake News’

Flags aside, participants generally found it difficult to tell which Facebook headlines were true and which were false. They assessed only 44 percent of the headlines correctly.

Political affiliation made no difference in their ability to determine what was true or false.

“People’s self-reported identity as Democrat or Republican didn’t influence their ability to detect fake news. And it didn’t determine how skeptical they were about what’s news and what’s not.” — Patricia Moravec

On Facebook, we’re more alike than different. “When we’re on social media, we’re passively pursuing pleasure and entertainment,” Moravec says. “We’re avoiding something else.” Unlike when we’re at work or focused on a goal, in this passive, pleasure-seeking mindset we’re much less able to think critically.

The experiment showed that social media users are highly susceptible to confirmation bias, the unintentional tendency to gravitate toward and process information that is consistent with existing beliefs, Moravec says. This can result in decision-making that ignores information that is inconsistent with those beliefs.

The fact that social media perpetuates and feeds this bias is a disturbing issue for society, Moravec says. “People have to constantly make decisions without having all the facts,” she says. “But if the facts that you do have are polluted by fake news that you truly believe, then the decisions you make are going to be much worse.”

Moravec advises swift action by lawmakers and tech titans to address the issue. “We need policymakers and social media leaders to seek out and find people who are intentionally creating fake news and strip them of their ability to do so,” she says.

We face a national emergency, Moravec says. “It poses a threat to our democracy. People are not voting based on actual truth.”


“Fake News on Social Media: People Believe What They Want to Believe When it Makes No Sense at All” is online in Management Information Systems Quarterly.

Story by Sloan Wyatt and Molly Dannenmaier