We let our guard down when others are around.
AUGUST 01, 2017
Since the 2016 U.S. Presidential election, concerns over the circulation of “fake” news and other unverified digital content have intensified. As people have grown to rely on social media as a news source, there has been considerable debate about its role in aiding the spread of misinformation. Much recent attention has centered on putting fact-checking filters in place, as false claims often persist in the public consciousness even after they are corrected.
We set out to test how the context in which we process information affects our willingness to verify ambiguous claims. Results across eight experiments reveal that people fact-check less often when they evaluate statements in a collective setting (e.g., in a group or on social media) than when they do so alone. Simply perceiving that others are present appeared to reduce participants’ vigilance when processing information, resulting in lower levels of fact-checking.
Our experiments surveyed over 2,200 U.S. adults via Amazon Mechanical Turk. The general paradigm went as follows: As part of a study about “modes of communication on the internet,” respondents logged onto a simulated website and evaluated a series of statements. These statements consisted of ambiguous claims (of which half were true and half were false) on a range of topics, from current events (e.g., “Scientists have officially declared the Great Barrier Reef to be dead”) to partisan remarks made by political candidates (e.g., “Undocumented immigrants pay $12 billion a year into Social Security”).
Participants could identify each statement as true or false, or they could raise a fact-checking “flag” to learn its accuracy. On top of a fixed payment for participating, each person could earn a bonus based on performance: +1 point for each correct answer and −1 point for each incorrect one, with each point worth 5¢. In some studies, people gained no points for flagging; in others, flagging carried a small penalty or a small reward. In still others, we entered anyone who scored in the 90th percentile into a lottery for $100. These different incentive structures did not change the overall patterns we found.
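The baseline incentive scheme can be sketched as a simple payoff function. This is an illustrative sketch, not the study’s actual code: the function names, the `flag_points` parameter (standing in for the no-points, small-penalty, and small-reward variants), and the assumption that bonuses cannot go below zero are all our own.

```python
POINT_VALUE = 0.05  # each point is worth 5 cents

def score_responses(responses, flag_points=0):
    """Tally points for a list of per-statement outcomes.

    responses: list of 'correct', 'incorrect', or 'flag' outcomes.
    Correct answers earn +1 point, incorrect answers -1, and flags earn
    flag_points (0 in some studies, a small penalty or reward in others).
    """
    points = 0
    for outcome in responses:
        if outcome == "correct":
            points += 1
        elif outcome == "incorrect":
            points -= 1
        elif outcome == "flag":
            points += flag_points
    return points

def bonus_dollars(responses, flag_points=0):
    # Assumption: a negative point total simply yields no bonus.
    return max(0, score_responses(responses, flag_points)) * POINT_VALUE

# Example: 10 correct, 2 incorrect, 4 flagged with no flag reward
# nets 8 points, i.e. a 40-cent bonus.
example = ["correct"] * 10 + ["incorrect"] * 2 + ["flag"] * 4
```

Under this sketch, flagging trades the chance at +1 (or the risk of −1) for a known, usually zero payoff, which is what makes the flag a measure of willingness to verify rather than to guess.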
In the first experiment, participants gave responses (true, false, or flag) for 36 statements described as news headlines published by a U.S. media organization. Throughout the task, half the participants saw their own username displayed alone on the side of the screen, while the other half also saw those of 102 respondents described as currently logged on, presumably completing the same task. People flagged (fact-checked) fewer statements when they perceived that others were present.
We next tried to simulate social presence in a more natural environment. In addition to exposing people to either their own or others’ names, half the participants evaluated “news headlines” on the website used in the previous study (reflecting a more “traditional” media platform), while the other half read the same headlines presented as the news organization’s posts in a Facebook feed. On the traditional site, people again flagged less often when they saw others online compared to when they thought they were alone. But, participants who read Facebook posts flagged few statements regardless of whether they saw others’ names on the screen. Browsing information on social media, an inherently social context, seemed to make individuals behave as if they were in a group.
In another experiment, we learned that others’ presence may be felt even when they’re not engaged in an activity at the same time. People flagged less often when they saw other names on the screen even when we described those other participants as users who had logged in and completed the task a week ago.
Why might collective settings suppress fact-checking? One reason could be that people flagged fewer statements simply because they felt more confident about their answers when others were around. But this doesn’t appear likely. When we asked participants to report their confidence and certainty in their responses, we found that these did not vary according to whether they evaluated claims alone or in the presence of others. We also found that performance on the task did not differ consistently across our alone and group conditions.
A second argument is that people may expect to free-ride on others’ effort, as shown in research on responsibility diffusion and the bystander effect (e.g., “If everyone else is verifying, why should I?”). Participants in most of our studies, though, could not rely on others to fact-check for them. A separate experiment tested whether making people feel individually responsible within a group can correct for this kind of “loafing” mentality. Respondents read 38 statements about members of the U.S. Congress; some saw their own names appear alone, while others saw those of other “team members” working on the same task. A third group saw their own name highlighted in red text, which was meant to distinguish them from everyone else’s names in black. Although these participants felt a greater sense of responsibility, they still flagged fewer statements than those who did the task alone. So, loafing does not appear to fully explain the behaviors we observed.
We also investigated whether a particular type of conversational norm — that we often assume a speaker is telling the truth and thus avoid expressing skepticism so as not to offend him or her, especially in group environments — helps explain the findings. Our results do not support this explanation because participants did not tend to believe information more in the presence of others; rather, they just tended to fact-check it less. We also directly assessed whether individuals in group settings are more willing to fact-check when this conversational norm isn’t as salient, as is the case when evaluating claims from political candidates. Given that people usually expect politicians to be dishonest (as data from a pretest suggests), they should have fewer qualms expressing their mistrust by fact-checking their statements.
Participants evaluated 50 campaign statements from two U.S. politicians before an election: Candidate A’s statements reflected a conservative view, Candidate B’s a liberal one. As in previous studies, respondents either saw their own names appear alone or alongside others’ names. Although people identified more statements as true when the views expressed matched their own political affiliation, this alignment didn’t affect fact-checking rates; how much people flagged depended only on whether they evaluated claims alone or in a group. In sum, even for sources perceived as less trustworthy (i.e., politicians), people flagged fewer claims when they believed they were in a group.
Another possibility is that being around others somehow automatically lowers our guard. Research on animal and human behavior has pointed to a “safety in numbers” heuristic in which crowds (or herds) decrease vigilance, perhaps because we believe any risk would be divided among the group. Because fact-checking demands some measure of wariness, a similar mechanism might apply when people are attuned to other individuals online.
A few pieces of evidence lend credence to this idea. First, respondents in another experiment who scored high on chronic prevention focus — a trait associated with being habitually cautious and vigilant — were mostly “immune” to the effect of social presence. That is, these individuals fact-checked just as much in the company of others as they did by themselves. Second, participants who did a proofreading task in a group environment performed worse than those who did so alone, suggesting that social presence may impair our vigilance more generally. Finally, when we promoted a vigilance mindset by having people first do exercises shown to momentarily increase prevention focus, participants in a group setting flagged nearly twice as many statements as those who weren’t given such encouragement (figure 2).
All in all, these findings add to the ongoing conversation about misinformation in increasingly connected online environments. Critics of social media often point to its complicity in creating “echo chambers” that selectively expose us to likeminded people and to content that matches and reinforces our beliefs. But our participants seemed reluctant to question claims even in the presence of strangers, suggesting that this reluctance may be amplified further among likeminded others.
Recent efforts to promote crowdsourced fact-checking have found some success in taming the diffusion of unreliable news. At a time when information is so easily and instantaneously shared, developing tools that encourage people to absorb content with a critical eye is all the more pressing. Understanding when we are likely to verify what we read can help guide these initiatives.
Rachel Meng is a doctoral candidate in marketing at Columbia Business School. She is interested in judgment and decision making. Her current research focuses on incentives for motivating behavior change (with emphasis on the limits and consequences of monetary rewards), the influence of others on how people process information, and financial decision making among the poor.
Youjung Jun is a doctoral candidate in marketing at Columbia Business School. She studies social and media influences on how people process information. Her current research focuses on shared reality (experiencing something in common with others) and its effects on people’s memories, performance, and construction of new knowledge in social processes.
Gita V. Johar is the Meyer Feldberg Professor of Business at Columbia Business School and a co-editor of the Journal of Consumer Research. She also serves as the Faculty Director for Online Initiatives at Columbia Business School and as the Chair of the Faculty Steering Committee, Columbia Global Centers Mumbai.