Research: Being in a Group Makes Us Less Likely to Fact-Check

We let our guard down when others are around.

Source: Research: Being in a Group Makes Us Less Likely to Fact-Check

By Rachel Meng, Youjung Jun and Gita V. Johar

AUGUST 01, 2017

Since the 2016 U.S. Presidential election, concerns over the circulation of “fake” news and other unverified digital content have intensified. As people have grown to rely on social media as a news source, there has been considerable debate about its role in aiding the spread of misinformation. Much recent attention has centered around putting fact-checking filters in place, as false claims often persist in the public consciousness even after they are corrected.

We set out to test how the context in which we process information affects our willingness to verify ambiguous claims. Results across eight experiments reveal that people fact-check less often when they evaluate statements in a collective setting (e.g., in a group or on social media) than when they do so alone. Simply perceiving that others are present appeared to reduce participants’ vigilance when processing information, resulting in lower levels of fact-checking.

Our experiments surveyed over 2,200 U.S. adults via Amazon Mechanical Turk. The general paradigm went as follows: As part of a study about “modes of communication on the internet,” respondents logged onto a simulated website and evaluated a series of statements. These statements consisted of ambiguous claims (of which half were true and half were false) on a range of topics, from current events (e.g., “Scientists have officially declared the Great Barrier Reef to be dead”) to partisan remarks made by political candidates (e.g., “Undocumented immigrants pay $12 billion a year into Social Security”).

Participants could identify each statement as true or false; or, they could raise a fact-checking “flag” to learn its accuracy. On top of a fixed payment for participating, each person had the chance to earn a bonus depending on how well they performed (e.g., they received +1 point and -1 point per correct and incorrect answer, respectively, with each point awarding 5¢). In some studies, people gained no points for flagging; in others, they received a small penalty or a small reward for flagging. In still others, we entered them into a lottery for $100 if they scored in the 90th percentile. These different incentive structures did not change the overall patterns we found.
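
To make the payoff arithmetic concrete, here is a minimal sketch (ours, not the authors') of how a participant's bonus could be computed. Only the +1/-1 points per correct/incorrect answer and the 5¢-per-point rate come from the description above; the flag value, the example numbers, and the assumption that the bonus cannot go negative are illustrative.

```python
# Minimal sketch of the bonus scheme described above (illustrative, not the
# authors' code). Known from the article: +1 point per correct answer,
# -1 per incorrect answer, each point worth 5 cents. Points for flagging
# varied by study (zero, a small penalty, or a small reward).
CENTS_PER_POINT = 5

def bonus_cents(n_correct, n_incorrect, n_flagged, flag_points=0.0):
    """Return one participant's performance bonus in cents."""
    points = n_correct - n_incorrect + flag_points * n_flagged
    return max(points, 0) * CENTS_PER_POINT  # assume the bonus cannot go negative

# Example: 20 correct, 10 incorrect, 6 flagged, no points for flagging
print(bonus_cents(20, 10, 6))  # 50, i.e. a 50-cent bonus
```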

In the first experiment, participants gave responses (true, false, or flag) for 36 statements described as news headlines published by a U.S. media organization. Throughout the task, half the participants saw their own username displayed alone on the side of the screen, while the other half also saw those of 102 respondents described as currently logged on, presumably completing the same task. People flagged (fact-checked) fewer statements when they perceived that others were present.

We next tried to simulate social presence in a more natural environment. In addition to exposing people to either their own or others’ names, half the participants evaluated “news headlines” on the website used in the previous study (reflecting a more “traditional” media platform), while the other half read the same headlines presented as the news organization’s posts in a Facebook feed. On the traditional site, people again flagged less often when they saw others online compared to when they thought they were alone. But, participants who read Facebook posts flagged few statements regardless of whether they saw others’ names on the screen. Browsing information on social media, an inherently social context, seemed to make individuals behave as if they were in a group.

 


In another experiment, we learned that others’ presence may be felt even when they’re not engaged in an activity at the same time. People flagged less often when they saw other names on the screen even when we described those other participants as users who had logged in and completed the task a week ago.

Why might collective settings suppress fact-checking? One reason could be that people flagged fewer statements simply because they felt more confident about their answers when others were around. But this doesn’t appear likely. When we asked participants to report their confidence and certainty in their responses, we found that these did not vary according to whether they evaluated claims alone or in the presence of others. We also found that performance on the task did not differ consistently across our alone and group conditions.

A second argument is that people may expect to free-ride on others’ effort, as shown in research on responsibility diffusion and the bystander effect (e.g., “If everyone else is verifying, why should I?”). Participants in most of our studies, though, could not rely on others to fact-check for them. A separate experiment tested whether making people feel individually responsible within a group can correct for this kind of “loafing” mentality. Respondents read 38 statements about U.S. congressmen/women; some saw their names appear alone, while others saw those of other “team members” working on the same task. A third group saw their own name highlighted in red text, which was meant to distinguish them from everyone else’s names in black. Although these participants felt a greater sense of responsibility, they still flagged fewer statements than those who did the task alone. So, loafing does not appear to fully explain the behaviors we observe.

We also investigated whether a particular type of conversational norm — that we often assume a speaker is telling the truth and thus avoid expressing skepticism so as not to offend him or her, especially in group environments — helps explain the findings. Our results do not support this explanation because participants did not tend to believe information more in the presence of others; rather, they just tended to fact-check it less. We assessed directly whether individuals in group settings are more willing to fact-check when this conversational norm isn’t as salient, as is the case when evaluating claims from political candidates. Given that people usually expect politicians to be dishonest (as data from a pretest suggests), they should have fewer qualms about expressing their mistrust by fact-checking their statements.

Participants evaluated 50 campaign statements from two U.S. politicians before an election: Candidate A’s statements reflected a conservative view, candidate B’s a liberal one. As with previous studies, respondents either saw their own names appear alone or alongside others’ names. Although people identified more statements as true when the views expressed matched their own political affiliation, this alignment didn’t affect fact-checking rates; how much people flagged depended only on whether they evaluated claims alone or in a group. In sum, even for sources perceived as less trustworthy (i.e., politicians), people flagged fewer claims when they believed they were in a group.

Another possibility is that being around others somehow automatically lowers our guard. Research on animal and human behavior has pointed to a “safety in numbers” heuristic in which crowds (or herds) decrease vigilance, perhaps because we believe any risk would be divided. Because fact-checking demands some measure of wariness, a similar mechanism might apply when people are attuned to other individuals online.

A few pieces of evidence lend credence to this idea. First, respondents in another experiment who scored high on chronic prevention focus — a trait associated with being habitually cautious and vigilant — were mostly “immune” to the effect of social presence. That is, these individuals fact-checked just as much in the company of others as they did by themselves. Second, participants who did a proofreading task in a group environment performed worse than those who did so alone, suggesting that social presence may impair our vigilance more generally. Finally, when we promoted a vigilance mindset by having people first do exercises shown to momentarily increase prevention focus, participants in a group setting flagged nearly twice as many statements as those who weren’t given such encouragement.

All in all, these findings add to the ongoing conversation about misinformation in increasingly connected online environments. Critics of social media often point to its complicity in creating “echo chambers” that selectively expose us to likeminded people and to content that matches and reinforces our beliefs. But our participants seemed reluctant to question claims even in the presence of strangers, suggesting that among likeminded connections this reluctance to fact-check may be even stronger.

Recent efforts to promote crowdsourced fact-checking have found some success in taming the diffusion of unreliable news. At a time when information is so easily and instantaneously shared, developing tools that encourage people to absorb content with a critical eye is all the more pressing. Understanding when we are likely to verify what we read can help guide these initiatives.

Rachel Meng is a doctoral candidate in marketing at Columbia Business School. She is interested in judgment and decision making. Her current research focuses on incentives for motivating behavior change (with emphasis on the limits and consequences of monetary rewards), the influence of others on how people process information, and financial decision making among the poor.

Youjung Jun is a doctoral candidate in marketing at Columbia Business School. She studies how social and media influences shape the way people process information. Her current research focuses on shared reality (experiencing something in common with others) and its effects on people’s memories, performance, and construction of new knowledge in social processes.

Gita V. Johar is the Meyer Feldberg Professor of Business at Columbia Business School and a co-editor of the Journal of Consumer Research. She also serves as the Faculty Director for Online Initiatives at Columbia Business School and serves as the Chair of the Faculty Steering Committee, Columbia Global Centers Mumbai.

Facts About The Biggest Brands

by Joshua Moraes

The truth behind the brand.

Source: 11 Unknown Facts & Stories About The World’s Biggest Brands

How we came to be, what we did to get to where we are and what we’re called. For each one of us, it’s a different story. Some boring as hell, some interesting enough to be […]

via 11 Unknown Facts & Stories About The World’s Biggest Brands — consumer psychology research

How Fiction Becomes Fact on Social Media

Hours after the Las Vegas massacre, Travis McKinney’s Facebook feed was hit with a scattershot of conspiracy theories. The police were lying. There were multiple shooters in the hotel, not just one. The sheriff was covering for casino owners to preserve their business.

The political rumors sprouted soon after, like digital weeds. The killer was anti-Trump, an “antifa” activist, said some; others made the opposite claim, that he was an alt-right terrorist. The two unsupported narratives ran into the usual stream of chatter, news and selfies.

“This stuff was coming in from all over my network of 300 to 400” friends and followers, said Mr. McKinney, 52, of Suffolk, Va., and some posts were from his inner circle.

But he knew there was only one shooter; a handgun instructor and defense contractor, he had been listening to the police scanner in Las Vegas with an app. “I jumped online and tried to counter some of this nonsense,” he said.

In the coming weeks, executives from Facebook and Twitter will appear before congressional committees to answer questions about the use of their platforms by Russian hackers and others to spread misinformation and skew elections. During the 2016 presidential campaign, Facebook sold more than $100,000 worth of ads to a Kremlin-linked company, and Google sold more than $4,500 worth to accounts thought to be connected to the Russian government.

Agents with links to the Russian government set up an endless array of fake accounts and websites and purchased a slew of advertisements on Google and Facebook, spreading dubious claims that seemed intended to sow division all along the political spectrum — “a cultural hack,” in the words of one expert.

Yet the psychology behind social media platforms — the dynamics that make them such powerful vectors of misinformation in the first place — is at least as important, experts say, especially for those who think they’re immune to being duped. For all the suspicions about social media companies’ motives and ethics, it is the interaction of the technology with our common, often subconscious psychological biases that makes so many of us vulnerable to misinformation, and this has largely escaped notice.

Skepticism of online “news” serves as a decent filter much of the time, but our innate biases allow it to be bypassed, researchers have found — especially when presented with the right kind of algorithmically selected “meme.”

At a time when political misinformation is in ready supply, and in demand, “Facebook, Google, and Twitter function as a distribution mechanism, a platform for circulating false information and helping find receptive audiences,” said Brendan Nyhan, a professor of government at Dartmouth College (and occasional contributor to The Times’s Upshot column).

For starters, said Colleen Seifert, a professor of psychology at the University of Michigan, “People have a benevolent view of Facebook, for instance, as a curator, but in fact it does have a motive of its own. What it’s actually doing is keeping your eyes on the site. It’s curating news and information that will keep you watching.”

That kind of curating acts as a fertile host for falsehoods by simultaneously engaging two predigital social-science standbys: the urban myth as “meme,” or viral idea; and individual biases, the automatic, subconscious presumptions that color belief.

The first process is largely data-driven, experts said, and built into social media algorithms. The wide circulation of bizarre, easily debunked rumors — so-called Pizzagate, for example, the canard that Hillary Clinton was running a child sex ring from a Washington-area pizza parlor — is not entirely dependent on partisan fever (though that was its origin).

For one, the common wisdom that these rumors gain circulation because most people conduct their digital lives in echo chambers or “information cocoons” is exaggerated, Dr. Nyhan said.

In a forthcoming paper, Dr. Nyhan and colleagues review the relevant research, including analyses of partisan online news sites and Nielsen data, and find the opposite. Most people are more omnivorous than presumed; they are not confined in warm bubbles containing only agreeable outrage.

But they don’t have to be for fake news to spread fast, research also suggests. Social media algorithms function at one level like evolutionary selection: Most lies and false rumors go nowhere, but the rare ones with appealing urban-myth “mutations” find psychological traction, then go viral.

There is no precise formula for such digital catnip. The point, experts said, is that the very absurdity of the Pizzagate lie could have boosted its early prominence, no matter the politics of those who shared it.

“My experience is that once this stuff gets going, people just pass these stories on without even necessarily stopping to read them,” Mr. McKinney said. “They’re just participating in the conversation without stopping to look hard” at the source.

Digital social networks are “dangerously effective at identifying memes that are well adapted to surviving, and these also tend to be the rumors and conspiracy theories that are hardest to correct,” Dr. Nyhan said.

One reason is the raw pace of digital information sharing, he said: “The networks make information run so fast that it outruns fact-checkers’ ability to check it. Misinformation spreads widely before it can be downgraded in the algorithms.”

The extent to which Facebook and other platforms function as “marketers” of misinformation, similar to the way they market shoes and makeup, is contentious. In 2015, a trio of behavior scientists working at Facebook inflamed the debate in a paper published in the prominent journal Science.

The authors analyzed the news feeds of some 10 million users in the United States who posted their political views, and concluded that “individuals’ choices played a stronger role in limiting exposure” to contrary news and commentary than Facebook’s own algorithmic ranking — which gauges how interesting stories are likely to be to individual users, based on data they have provided.

Outside critics lashed the study as self-serving, while other researchers said the analysis was solid and without apparent bias.

The other dynamic that works in favor of proliferating misinformation is not embedded in the software but in the biological hardware: the cognitive biases of the human brain.

Purely from a psychological point of view, subtle individual biases are at least as important as rankings and choice when it comes to spreading bogus news or Russian hoaxes — like a false report of Muslim men in Michigan collecting welfare for multiple wives.

Merely understanding what a news report or commentary is saying requires a temporary suspension of disbelief. Mentally, the reader must temporarily accept the stated “facts” as possibly true. A cognitive connection is made automatically: Clinton-sex offender, Trump-Nazi, Muslim men-welfare.

And refuting those false claims requires a person to first mentally articulate them, reinforcing a subconscious connection that lingers far longer than people presume.

Over time, for many people, it is that false initial connection that stays the strongest, not the retractions or corrections: “Was Obama a Muslim? I seem to remember that….”

In a recent analysis of the biases that help spread misinformation, Dr. Seifert and co-authors named this and several other automatic cognitive connections that can buttress false information.

Another is repetition: Merely seeing a news headline multiple times in a news feed makes it seem more credible before it is ever read carefully, even if it’s a fake item being whipped around by friends as a joke.

And, as salespeople have known forever, people tend to value the information and judgments offered by good friends over all other sources. It’s a psychological tendency with significant consequences now that nearly two-thirds of Americans get at least some of their news from social media.

“Your social alliances affect how you weight information,” said Dr. Seifert. “We overweight information from people we know.”

The casual, social, wisecracking nature of thumbing through and participating in the digital exchanges allows these biases to operate all but unchecked, Dr. Seifert said.

Stopping to drill down and determine the true source of a foul-smelling story can be tricky, even for the motivated skeptic, and mentally it’s hard work. Ideological leanings and viewing choices are conscious, downstream factors that come into play only after automatic cognitive biases have already had their way, abetted by the algorithms and social nature of digital interactions.

“If I didn’t have direct evidence that all these theories were wrong” from the scanner, Mr. McKinney said, “I might have taken them a little more seriously.”

When Less Is Not More: The Effect of Empty Space on Persuasion

Source: When Less Is Not More: The Effect of Empty Space on Persuasion

Sep 04, 2017 | Contributors: Prof. Dai Xianchi and Prof. Robert Wyer, Department of Marketing, CUHK Business School

By Fang Ying, Senior Writer, China Business Knowledge @ CUHK 

Empty space or white space has been widely used in advertising and interior design to give the feeling of a clean and elegant look. “Less is more” is the message in the modern world. However, will “more” space make communication “less” effective?

Only a few empirical studies have investigated the effect of empty space on consumer behavior, and the findings are unclear and sometimes contradictory. For instance, one study found that surrounding the picture of a product with empty space increases perceptions of the product’s prestige value, thereby raising evaluations of the product. Other research, however, suggests that empty space surrounding a verbal message can draw people’s attention away from the message, reduce the resources they devote to processing it, and thereby decrease the message’s impact.

In a recent study[1], Prof. Dai Xianchi, Associate Professor in the Department of Marketing at CUHK Business School, looked further into the effect of empty space on persuasion. The study was carried out with his collaborators Prof. Robert Wyer, a visiting professor in the same department, and PhD student Canice Kwan, now an assistant professor at Sun Yat-sen University.

“People’s construal of the implications of a message goes beyond its literal meaning and the white space that surrounds a text message can affect the message’s persuasiveness,” says Prof. Dai.

The researchers proposed that when a verbal statement is surrounded by empty space, it activates the more general concept that there is room for doubt about the validity or importance of the message content.

“In other words, the statement is less persuasive when it is surrounded by empty space than when it is not,” Prof. Dai points out. 

The Studies and Results

Seven studies in both laboratory and real-life settings were conducted.

In one study, the team collected 115 images of statements posted on a Facebook page over a one-month period from November to December 2013. For each message image, they recorded the amount of space (its image size and text space), the audience response (the total number of likes, shares, and comments), and the presence of non-text elements (pictures of cartoon characters or celebrities, nature-scene backgrounds, etc.). The numbers of likes, shares, and comments served as the indicators of each statement’s effectiveness.

The results showed that liking for the statements decreased as the amount of empty space increased. In other words, the impact of a statement falls when it is surrounded by more empty space.
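
To illustrate the kind of relationship the team measured, here is a minimal sketch of how one might regress engagement on the proportion of empty space. This is not the authors' analysis code; the file name, column names, log transformation, and the control for non-text elements are all assumptions made for illustration.

```python
# Illustrative sketch only, not the authors' analysis code.
# Assumed input: one row per Facebook post image, with hypothetical columns
# for engagement counts, the measured proportion of empty space, and a flag
# for non-text elements (cartoon characters, celebrities, scenery, etc.).
import numpy as np
import pandas as pd

posts = pd.read_csv("facebook_posts.csv")  # hypothetical data file

# Outcome: total engagement (likes + shares + comments), log-transformed
# because such counts are usually heavily right-skewed.
y = np.log1p(posts["likes"] + posts["shares"] + posts["comments"])

# Predictors: intercept, proportion of empty space (0-1), and a 0/1 control
# for the presence of non-text elements.
X = np.column_stack([
    np.ones(len(posts)),
    posts["empty_space_proportion"],
    posts["has_nontext_elements"],
])

# Ordinary least squares; a negative coefficient on empty space would mirror
# the finding that liking falls as empty space grows.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("empty-space coefficient:", coef[1])
```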

In another study, 126 Hong Kong undergraduate students completed several marketing studies that were unrelated to the experiment. Afterwards, the researchers announced that participants could take copies of a research paper related to the studies from a table next to the exit.

The copies were placed next to two pasteboards, each with a note that said “PICK ME!”

The text, font size, and font type of the note were exactly the same, but the pasteboards came in two sizes, creating two conditions: A4, with substantial empty space surrounding the text, and A5, with limited space surrounding the text.

The results revealed that more students picked up the papers in the limited-space condition (59.6%) than in the empty-space condition (37.7%).

“It indicates that participants complied less with the message’s implication when the message was surrounded by substantial empty space,” Prof. Dai says.

To examine whether the amount of space surrounding a persuasive message would influence recipients’ opinions when the message was generated randomly by a computer or intentionally by the communicator, another study was performed.

This time, 266 US participants were asked to evaluate two popular quotes from the Internet that emphasized the importance of personal warmth: “Hold on to whatever keeps you warm inside” and “A kind word can warm three winter months”. Each quote was presented in either a box with little empty space or a box with substantial empty space.

Unlike in other studies, a headline was also added at the top of each quote. In the condition where the message was randomly generated, the headline stated: “The message and the configuration of the image (e.g., font, color, or other visuals) do not reflect the personal attitude or intention of the author”. On the other hand, in the condition where the quote reflected the personal attitude or intention of the author, the headline read: “The message and the configuration of the image are the result of the author’s free choice”.

In each case, participants rated the persuasiveness of each statement on three questions: “To what extent do you like the quote?”; “To what extent do you think the quote is important?”; and “To what extent do you agree with the quote?”, each on a scale from 1 (not at all) to 7 (very much). They also reported how strongly they thought the quote conveyed its opinion, and the time they took to make their evaluations was recorded.

As predicted, the results showed that when the message was generated intentionally by the communicator, participants perceived it to convey a significantly weaker opinion when there was substantial empty space than when there was little empty space.

“That is to say, empty space should not influence the persuasiveness of the message if readers believed that the configuration of space and message was generated randomly by a computer,” said Prof. Dai.

“Our experiment suggested that people infer the strength of a statement from the design – whether the statement is surrounded by empty space or full space,” he continued.

The Implications

“This study demonstrates how visual clues, in particular empty space, affect the impact of verbal messages. All our results have shown people find a message less persuasive when it is surrounded by empty space than when it is not,” says Prof. Dai.

“This offers practical insights for advertising and even political campaigns. For example, a candidate may want to present his messages in limited space rather than empty space to convey his messages more effectively,” says Prof. Dai.

Reference

[1] Kwan, Canice, Xianchi Dai, and Robert Wyer (2017), “Contextual Influences on Message Persuasion: The Effect of Empty Space,” Journal of Consumer Research.

The psychology behind why we value physical objects over digital

Stated simply, it’s easier to develop meaningful feelings of ownership over a physical entity than a digital one.

Source: The psychology behind why we value physical objects over digital

By Christian Jarrett

When technological advances paved the way for digital books, films and music, many commentators predicted the demise of their physical equivalents. It hasn’t happened, so far at least. For instance, while there is a huge market in e-books, print books remain dominant. A large part of the reason comes down to psychology – we value things that we own, or anticipate owning, in large part because we see them as an extension of ourselves. And, stated simply, it’s easier to develop meaningful feelings of ownership over a physical entity than a digital one. A new paper in the Journal of Consumer Research presents a series of studies that demonstrate this difference. “Our findings illustrate how psychological ownership engenders a difference in the perceived value of physical and digital goods, yielding new insights into the relationship between consumers and their possessions,” the researchers said.

In an initial study at a tourist destination, Ozgun Atasoy and Carey Morewedge arranged for 86 visitors to have their photograph taken with an actor dressed as a historical character. Half the visitors were given a digital photo (emailed to them straight away), the others were handed a physical copy. Then they were asked how much they were willing to pay, if anything, for their photo, with the proceeds going to charity. The recipients of a physical photo were willing to pay more, on average, and not because they thought the production costs were higher.

It was a similar story when Atasoy and Morewedge asked hundreds of American volunteers on the Amazon Mechanical Turk survey website to say what they would be willing to pay for either physical or digital versions of the book Harry Potter and the Sorcerer’s Stone and physical or digital versions of the movie The Dark Knight. The participants placed higher monetary value on the physical versions, and this seemed to be because they expected to have a stronger sense of ownership over them (for the physical versions, they agreed more strongly with statements like “I will feel like I own it” and “feel like it is mine”). In contrast, participants’ anticipated enjoyment was the same for the different versions and so can’t explain the higher value placed on the physical ones.

In further studies, the researchers showed that participants no longer placed higher value on physical objects over digital when they would be renting rather than buying – presumably because the greater appeal of owning something physical is irrelevant in this case. Likewise, the researchers found that participants who identified strongly with a particular movie (The Empire Strikes Back) placed higher value on owning a physical copy versus digital, but participants who had no personal connection with the film did not. This fits the researchers’ theorising because the greater sense of ownership afforded by a physical product is only an enticing prospect when there’s a motivation to experience a strong sense of connection with it.

If it is a greater psychological sense of ownership that makes physical objects so appealing, then, the researchers reasoned, people with a stronger “need for control” will be particularly attracted to them – after all, to own something is to control it. Atasoy and Morewedge found some support for this in their final study. The higher participants scored on a “need for control” scale (they agreed with items like “I prefer doing my own planning”), the more they tended to say that physical books would engender a greater sense of ownership, and, in turn, this was associated with their being willing to pay a higher amount for them compared with digital versions.

The findings have some intriguing implications for companies seeking to boost the appeal of digital products, the researchers said. Any intervention that engenders a greater psychological sense of ownership over digital entities will likely boost their value – such as allowing for personalisation or letting people interact with them in some way. Similarly, the results may help explain the ubiquity of digital piracy – because people generally place a lower value on digital products (even when they see the production costs as the same as for physical ones), it follows that many of us consider the theft of digital products less serious than physical theft.

Digital Goods are Valued Less than Physical Goods

Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest and author of the TED-ED Lesson Why are we so attached to our things?

Facebook’s founding president admitted how it exploits human psychology

 

By Hanna Kozlowska
Sean Parker told Axios: “God only knows what it’s doing to our children’s brains.”

Source: Facebook’s founding president admitted how it exploits human psychology

Most people don’t need to be told they’re addicted to technology and social media. If reaching for your cell phone first thing in the morning doesn’t tell you as much, multiple scientific studies and books will. Now the people responsible for this modern-day addiction have admitted that was their plan all along.

Silicon Valley bad boy Sean Parker, Facebook’s first president, told Axios in an interview that the service “literally changes your relationship with society,” and “probably interferes with productivity in weird ways.” And, he added, “God only knows what it’s doing to our children’s brains.”

Facebook’s main goal is to get and keep people’s attention, Parker said. “The thought process that went into building these applications, Facebook being the first of them, … was all about: ‘How do we consume as much of your time and conscious attention as possible?’”

Attention, he said, was fueled by “a little dopamine hit every once in a while,” in the form of a like or a comment, which would generate more content, in the forms of more likes and comments.

“It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”

Parker said that the inventors of social media platforms, including himself, Facebook’s Mark Zuckerberg and Instagram’s Kevin Systrom, “understood consciously” what they were doing. “And we did it anyway.”

How Information Overload Robs Us of Our Creativity: What the Scientific Research Shows

August 5th, 2017

from http://www.openculture.com/2017/08/how-information-overload-robs-us-of-our-creativity.html

Everyone used to read Samuel Johnson. Now it seems hardly anyone does. That’s a shame. Johnson understood the human mind, its sadly amusing frailties and its double-blind alleys. He understood the nature of that mysterious act we casually refer to as “creativity.” It is not the kind of thing one lucks into or masters after a seminar or lecture series. It requires discipline and a mind free of distraction. “My dear friend,” said Johnson in 1783, according to his biographer and secretary Boswell, “clear your mind of cant.”

There’s no missing apostrophe in his advice. Inspiring as it may sound, Johnson did not mean to say “you can do it!” He meant “cant,” an old word for cheap deception, bias, hypocrisy, insincere expression. “It is a mode of talking in Society,” he conceded, “but don’t think foolishly.” Johnson’s injunction resonated through a couple centuries, became garbled into a banal affirmation, and was lost in a graveyard of image macros. Let us endeavor to retrieve it, and ruminate on its wisdom.

We may even do so with our favorite modern brief in hand, the scientific study. There are many we could turn to. For example, notes Derek Beres, in a 2014 book neuroscientist Daniel Levitin brought his research to bear in arguing that “information overload keeps us mired in noise…. This saps us of not only willpower (of which we have a limited store) but creativity as well.” “We sure think we’re accomplishing a lot,” Levitin told Susan Page on The Diane Rehm Show in 2015, “but that’s an illusion… as a neuroscientist, I can tell you one thing the brain is very good at is self-delusion.”

Johnson’s age had its own version of information overload, as did that of another curmudgeonly voice from the past, T.S. Eliot, who wondered, “Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?” The question leaves Eliot’s readers asking whether what we take for knowledge or information really is such. Maybe they’re just as often forms of needless busyness, distraction, and overthinking. Stanford researcher Emma Seppälä suggests as much in her work on “the science of happiness.” At Quartz, she writes,

We need to find ways to give our brains a break…. At work, we’re intensely analyzing problems, organizing data, writing—all activities that require focus. During downtime, we immerse ourselves in our phones while standing in line at the store or lose ourselves in Netflix after hours.

Seppälä exhorts us to relax and let go of the constant need for stimulation, to take longs walks without the phone, get out of our comfort zones, make time for fun and games, and generally build in time for leisure. How does this work? Let’s look at some additional research. Bar-Ilan University’s Moshe Bar and Shira Baror undertook a study to measure the effects of distraction, or what they call “mental load,” the “stray thoughts” and “obsessive ruminations” that clutter the mind with information and loose ends. Our “capacity for original and creative thinking,” Bar writes at The New York Times, “is markedly stymied” by a busy mind. “The cluttered mind,” writes Jessica Stillman, “is a creativity killer.”

In a paper published in Psychological Science, Bar and Baror describe how “conditions of high load” foster unoriginal thinking. Participants in their experiment were asked to remember strings of arbitrary numbers, then to play word association games. “Participants with seven digits to recall resorted to the most statistically common responses (e.g., white/black),” writes Bar, “whereas participants with two digits gave less typical, more varied pairings (e.g., white/cloud).” Our brains have limited resources. When constrained and overwhelmed with thoughts, they pursue well-trod paths of least resistance, trying to efficiently bring order to chaos.

“Imagination,” on the other hand, wrote Dr. Johnson elsewhere, “a licentious and vagrant faculty, unsusceptible of limitations and impatient of restraint, has always endeavored to baffle the logician, to perplex the confines of distinction, and burst the enclosures of regularity.” Bar describes the contrast between the imaginative mind and the information processing mind as “a tension in our brains between exploration and exploitation.” Gorging on information makes our brains “’exploit’ what we already know,” or think we know, “leaning on our expectation, trusting the comfort of a predictable environment.” When our minds are “unloaded,” on the other hand, such as can occur during a hike or a long, relaxing shower, we can shed fixed patterns of thinking, and explore creative insights that might otherwise get buried or discarded.

As Drake Baer succinctly puts it at New York Magazine’s Science of Us, “When you have nothing to think about, you can do your best thinking.” Getting to that state in a climate of perpetual, unsleeping distraction, opinion, and alarm requires another kind of discipline: the discipline to unplug, wander off, and clear your mind.

For another angle on this, you might want to check out Cal Newport’s 2016 book, Deep Work: Rules for Focused Success in a Distracted World.

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

How Smartphones Hijack Our Minds

Research suggests that as the brain grows dependent on phone technology, the intellect weakens.

Source: How Smartphones Hijack Our Minds

So you bought that new iPhone. If you are like the typical owner, you’ll be pulling your phone out and using it some 80 times a day, according to data Apple collects. That means you’ll be consulting the glossy little rectangle nearly 30,000 times over the coming year. Your new phone, like your old one, will become your constant companion and trusty factotum—your teacher, secretary, confessor, guru. The two of you will be inseparable.

The smartphone is unique in the annals of personal technology. We keep the gadget within reach more or less around the clock, and we use it in countless ways, consulting its apps and checking its messages and heeding its alerts scores of times a day. The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.

We love our phones for good reasons. It’s hard to imagine another product that has provided so many useful functions in such a handy form. But while our phones offer convenience and diversion, they also breed anxiety. Their extraordinary usefulness gives them an unprecedented hold on our attention and vast influence over our thinking and behavior. So what happens to our minds when we allow a single tool such dominion over our perception and cognition?

Scientists have begun exploring that question—and what they’re discovering is both fascinating and troubling. Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.

The division of attention impedes reasoning and performance.

Adrian Ward, a cognitive psychologist and marketing professor at the University of Texas at Austin, has been studying the way smartphones and the internet affect our thoughts and judgments for a decade. In his own work, as well as that of others, he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.

A 2015 Journal of Experimental Psychology study, involving 166 subjects, found that when people’s phones beep or buzz while they’re in the middle of a challenging task, their focus wavers, and their work gets sloppier—whether they check the phone or not. Another 2015 study, which involved 41 iPhone users and appeared in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.

The earlier research didn’t explain whether and how smartphones differ from the many other sources of distraction that crowd our lives. Dr. Ward suspected that our attachment to our phones has grown so intense that their mere presence might diminish our intelligence. Two years ago, he and three colleagues — Kristen Duke and Ayelet Gneezy from the University of California, San Diego, and Disney Research behavioral scientist Maarten Bos — began an ingenious experiment to test his hunch.

The researchers recruited 520 undergraduate students at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available cognitive capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.

As the phone’s proximity increased, brainpower decreased.

The results were striking. In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.

In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.

A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.

In an April article in the Journal of the Association for Consumer Research, Dr. Ward and his colleagues wrote that the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.” Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking. The fact that most of us now habitually keep our phones “nearby and in sight,” the researchers noted, only magnifies the mental toll.

Dr. Ward’s findings are consistent with other recently published research. In a similar but smaller 2014 study (involving 47 subjects) in the journal Social Psychology, psychologists at the University of Southern Maine found that people who had their phones in view, albeit turned off, during two demanding tests of attention and cognition made significantly more errors than did a control group whose phones remained out of sight. (The two groups performed about the same on a set of easier tests.)

In another study, published in Applied Cognitive Psychology in April, researchers examined how smartphones affected learning in a lecture class with 160 students at the University of Arkansas at Monticello. They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly. A study of 91 secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.

It isn’t just our reasoning that takes a hit when phones are around. Social skills and relationships seem to suffer as well. Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.

In a study conducted at the University of Essex in the U.K., 142 participants were divided into pairs and asked to converse in private for 10 minutes. Half talked with a phone in the room, while half had no phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in 2013 in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.” The downsides were strongest when “a personally meaningful topic” was being discussed. The experiment’s results were validated in a subsequent study by Virginia Tech researchers, published in 2016 in the journal Environment and Behavior.

The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.

Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking—that has, in the psychological jargon, “salience.” Media and communications devices, from telephones to TV sets, have always tapped into this instinct. Whether turned on or switched off, they promise an unending supply of information and experiences. By design, they grab and hold our attention in ways natural objects never could.

But even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings—which it always is. Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.

The irony of the smartphone is that the qualities we find most appealing—its constant connection to the net, its multiplicity of apps, its responsiveness, its portability—are the very ones that give it such sway over our minds. Phone makers like Apple and Samsung and app writers like Facebook and Google design their products to consume as much of our attention as possible during every one of our waking hours, and we thank them by buying millions of the gadgets and downloading billions of the apps every year.

A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it isn’t that simple. The way a media device is designed and used exerts at least as much influence over our minds as does the information that the device unlocks.

People’s knowledge may dwindle as gadgets grant them easier access to online data.

As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores. In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read 40 brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be immediately erased.

Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it. The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”

Now that our phones have made it so easy to gather information online, our brains are likely offloading even more of the work of remembering to technology. If the only thing at stake were memories of trivial facts, that might not matter. But, as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.” Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.

We aren’t very good at distinguishing the knowledge we keep in our heads from the information we find on our phones.

This story has a twist. It turns out that we aren’t very good at distinguishing the knowledge we keep in our heads from the information we find on our phones or computers. As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”

That insight sheds light on our society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media by Russian agents and other bad actors. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.

Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains. When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning. Upgrading our gadgets won’t solve the problem. We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.

Mr. Carr is the author of “The Shallows” and “Utopia Is Creepy,” among other books.

Tech Giants, Once Seen as Saviors, Are Now Viewed as Threats

Facebook, Google and others positioned themselves as bettering the world. But their systems and tools have also been used to undermine democracy.

SAN FRANCISCO — At the start of this decade, the Arab Spring blossomed with the help of social media. That is the sort of story the tech industry loves to tell about itself: It is bringing freedom, enlightenment and a better future for all mankind.

Mark Zuckerberg, the Facebook founder, proclaimed that this was exactly why his social network existed. In a 2012 manifesto for investors, he said Facebook was a tool to create “a more honest and transparent dialogue around government.” The result, he said, would be “better solutions to some of the biggest problems of our time.”

Now tech companies are under fire for creating problems instead of solving them. At the top of the list is Russian interference in last year’s presidential election. Social media might have originally promised liberation, but it proved an even more useful tool for stoking anger. The manipulation was so efficient and so lacking in transparency that the companies themselves barely noticed it was happening.

The election is far from the only area of concern. Tech companies have accrued a tremendous amount of power and influence. Amazon determines how people shop, Google how they acquire knowledge, Facebook how they communicate. All of them are making decisions about who gets a digital megaphone and who should be unplugged from the web.

Their amount of concentrated authority resembles the divine right of kings, and is sparking a backlash that is still gathering force.

“For 10 years, the arguments in tech were about which chief executive was more like Jesus. Which one was going to run for president. Who did the best job convincing the work force to lean in,” said Scott Galloway, a professor at New York University’s Stern School of Business. “Now sentiments are shifting. The worm has turned.”

News is dripping out of Facebook, Twitter and now Google about how their ad and publishing systems were harnessed by the Russians. On Nov. 1, the Senate Intelligence Committee will hold a hearing on the matter. It is unlikely to enhance the companies’ reputations.

Under growing pressure, the companies are mounting a public relations blitz. Sheryl Sandberg, Facebook’s chief operating officer, was in Washington this week, meeting with lawmakers and making public mea culpas about how things happened during the election “that should not have happened.” Sundar Pichai, Google’s chief executive, was in Pittsburgh on Thursday talking about the “large gaps in opportunity across the U.S.” and announcing a $1 billion grant program to promote jobs.

Underlying the meet-and-greets is the reality that the internet long ago became a business, which means the companies’ first imperative is to do right by their stockholders.

Ross Baird, president of the venture capital firm Village Capital, noted that when ProPublica tried last month to buy targeted ads for “Jew haters” on Facebook, the platform did not question whether this was a bad idea — it asked the buyers how they would like to pay.

“For all the lip service that Silicon Valley has given to changing the world, its ultimate focus has been on what it can monetize,” Mr. Baird said.

Criticism of tech is nothing new, of course. In a Newsweek jeremiad in 1995 titled “Why the Web Won’t Be Nirvana,” the astronomer Clifford Stoll pointed out that “every voice can be heard cheaply and instantly” on the Usenet bulletin boards, that era’s Twitter and Facebook.

“The result?” he wrote. “Every voice is heard. The cacophony more closely resembles citizens band radio, complete with handles, harassment and anonymous threats. When most everyone shouts, few listen.”

Such complaints, repeated at regular intervals, did not stop the tech world from seizing the moment. Millions and then billions of people flocked to its services. The chief executives were regarded as sages. Disruption was the highest good.

What is different today are the warnings from the technologists themselves. “The monetization and manipulation of information is swiftly tearing us apart,” Pierre Omidyar, the founder of eBay, wrote this week.

Justin Rosenstein, a former Facebook engineer, was portrayed in a recent Guardian story as an apostate: Noting that sometimes inventors have regrets, he said he had programmed his new phone to not let him use the social network.

Mr. Rosenstein, a co-founder of Asana, an office productivity start-up, said in an email that he had banned not just Facebook but also the Safari and Chrome browsers, Gmail and other applications.

“I realized that I spend a lot of time mindlessly interacting with my phone in ways that aren’t serving me,” he wrote. “Facebook is a very powerful tool that I continue to use every day, just with more mindfulness.”


If social media is on the defensive, Mr. Zuckerberg is particularly on the spot — a rare event in a golden career that has made him, at 33, one of the richest and most influential people on the planet.

“We have a saying: ‘Move fast and break things,’” he wrote in his 2012 manifesto. “The idea is that if you never break anything, you’re probably not moving fast enough.”

Facebook dropped that motto two years later, but critics say too much of the implicit arrogance has lingered. Mr. Galloway, whose new book, “The Four,” analyzes the power of Facebook, Amazon, Google and Apple, said the social media network was still fumbling its response.

“Zuckerberg and Facebook are violating the No. 1 rule of crisis management: Overcorrect for the problem,” he said. “Their attitude is that anything that damages their profits is impossible for them to do.”

Joel Kaplan, Facebook’s vice president of global public policy, said the network was doing its best.

“Facebook is an important part of many people’s lives,” he said. “That’s an enormous responsibility — and one that we take incredibly seriously.”

Some social media entrepreneurs acknowledge that they are confronting issues they never imagined as employees of start-ups struggling to survive.

“There wasn’t time to think through the repercussions of everything we did,” Biz Stone, a Twitter co-founder, said in an interview shortly before he rejoined the service last spring.

He maintained that Twitter was getting an unfair rap: “For every bad thing, there are a thousand good things.” He acknowledged, however, that sometimes “it gets a little messy.”

Despite the swell of criticism, the vast majority of investors, consumers and regulators seem not to have changed their behavior. People still eagerly await the new iPhone. Facebook has more than two billion users. President Trump likes to criticize Amazon on Twitter, but his administration ignored pleas for a rigorous examination of Amazon’s purchase of Whole Foods.

In Europe, however, the ground is already shifting. Google’s share of the search engine market there is 92 percent, according to StatCounter. But that did not stop the European Union from fining it $2.7 billion in June for putting its products above those of its rivals.

A new German law that fines social networks huge sums for not taking down hate speech went into effect this month. On Tuesday, a spokesman for Prime Minister Theresa May of Britain said the government was looking “carefully at the roles, responsibility and legal status” of Google and Facebook, with an eye to regulating them as news publishers rather than platforms.

“This war, like so many wars, is going to start in Europe,” said Mr. Galloway, the New York University professor.

For some tech companies, the new power is a heavy weight. Cloudflare, which provides many sites with essential protection from hacking, made its first editorial decision in August: It lifted its protection from The Daily Stormer, basically expunging the neo-Nazi site from the visible web.

“Increasingly tech companies are going to be put into the position of making these sorts of judgments,” said Matthew Prince, Cloudflare’s chief executive.

The picture is likely to get even more complicated. Mr. Prince foresees several possible dystopian futures. One is where every search engine has a political point of view, and users gravitate toward the one they feel most comfortable with. That would further balkanize the internet.

Another possibility is the opposite extreme: Under the pressure of regulation, all hate speech — and eventually all dissent — is filtered out.

“People are realizing that technology isn’t neutral,” Mr. Prince said. “I used to travel to Europe to hear these fears. Now I just have to go to Sacramento.”

Las Vegas Shooting News Coverage – A Perspective

Last night I received a text from my mom wondering if we should still attend the Bruno Mars concert coming up in November. I bought the tickets for her birthday this year, and we have been excited about going. What brought on this sudden second-guessing? The news coverage of the mass shooting in Las Vegas, of course. What happened in Vegas was truly horrible, and many people are now questioning how safe it is to attend concerts and other events. While I scrolled through my news feed and perused Facebook, my friends wondered in their posts how such a horrific event could happen. As expected, proponents of tighter gun laws have been in the news, which has started a lively debate in my Facebook feed. This post is not about my political views on gun laws, nor is it intended to downplay what has happened; my heart truly goes out to everyone affected. My aim is to offer some food for thought as we all absorb the events and the news coverage.

The likelihood of being killed in a firearm homicide is relatively low compared with other potential causes of death. In 2014 there were 11,008 firearm homicide deaths in the U.S., which works out to roughly 3.5 people out of 100,000, or about a 0.0035% chance in a given year (CDC, 2017). (A quick sanity check of that conversion appears after the list below.) Firearm homicides are also dwarfed by the top 10 causes of death in 2016, which were as follows:

  • Heart disease: 633,842
  • Cancer: 595,930
  • Chronic lower respiratory diseases: 155,041
  • Accidents (unintentional injuries): 146,571
  • Stroke (cerebrovascular diseases): 140,323
  • Alzheimer’s disease: 110,561
  • Diabetes: 79,535
  • Influenza and pneumonia: 57,062
  • Nephritis, nephrotic syndrome, and nephrosis: 49,959
  • Intentional self-harm (suicide): 44,193 (CDC, 2017)
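To make the rate conversion explicit, here is a minimal sketch of the arithmetic behind the 3.5-per-100,000 figure. The 2014 U.S. population estimate (roughly 318.9 million) is my assumption for the denominator; it is not part of the CDC citation above.

```python
# Back-of-the-envelope check of the firearm-homicide rate quoted above.
deaths_2014 = 11_008            # firearm homicide deaths in 2014 (CDC, 2017)
population_2014 = 318_900_000   # assumed U.S. population in 2014 (approximate)

rate_per_100k = deaths_2014 / population_2014 * 100_000
percent_chance = deaths_2014 / population_2014 * 100

print(f"{rate_per_100k:.1f} per 100,000")  # ~3.5 per 100,000
print(f"{percent_chance:.4f}% chance")     # ~0.0035% in a given year
```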

Looking at the numbers, we should all be more concerned about the lifestyles and choices that directly contribute to heart disease and cancer. So why aren’t stories about the leading causes of death receiving the same amount of media coverage? Because the media’s number-one job is to build audiences, and anything sensational or out of the ordinary does the best job of attracting attention (much like trying to pass a car crash on the freeway without looking). Building those audiences, however, is far more hyper-targeted than it used to be. News media companies collect personally identifiable information about our viewing and reading habits through cookies, device IDs, and set-top box data, to name a few sources. That data is then used to sell advertisers the best target audiences across their platforms.

For example, Apple’s algorithms know I have recently been following hurricanes, since I was in Florida right before Irma. On October 3rd, the “For You” section showed an article from the Miami Herald about the tropical depression moving toward the Caribbean. Right below that article, an advertisement from Wells Fargo (my bank) was strategically placed. Wells Fargo has my personal information and so does Apple, so they can use an intermediary to anonymize and match my data between the two companies while remaining privacy compliant. From there, my anonymized profile lets Wells Fargo target its advertisement in my Apple News feed. Because the targeting is more precise, Wells Fargo in theory sees a lift in its ROI and Apple can command higher advertising rates.
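To illustrate the kind of matching described above, here is a minimal, hypothetical sketch: each party hashes the identifiers it holds, and an intermediary matches only the hashes, so neither side exchanges raw personal data. The party names, field values, and the use of a plain SHA-256 hash are illustrative assumptions, not a description of how Apple, Wells Fargo, or any real data-matching vendor actually operates.

```python
import hashlib

def hash_id(email: str) -> str:
    """Normalize and hash an identifier so raw emails never leave either party."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical data held separately by an advertiser and a publisher.
advertiser_customers = {"jane@example.com", "sam@example.com"}
publisher_readers = {"sam@example.com", "alex@example.com"}

# Each party shares only hashes with the intermediary.
advertiser_hashes = {hash_id(e) for e in advertiser_customers}
publisher_hashes = {hash_id(e) for e in publisher_readers}

# The intermediary finds the overlap without ever seeing raw identifiers.
matched_audience = advertiser_hashes & publisher_hashes
print(f"Matched {len(matched_audience)} reader(s) eligible for targeted ads")
```

In practice the matched segment, not the underlying identities, is what gets passed back for ad targeting, which is how the two companies can claim to remain privacy compliant.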

While the media uses sensational headlines and stories to capture more of our attention, the bad news it carries affects our stress levels. One study found that news coverage of the 2007 Virginia Tech shootings increased “acute stress” in students at other universities who followed the events in the media; moreover, the more coverage of the shootings students consumed, the more likely they were to report stronger stress symptoms (Fallahi & Lesik, 2009). Constant negativity in the news can also exacerbate our own feelings of sadness and anxiety, as well as how severely we perceive our own situation (Davey, 2012). A big daily dose of negative news can certainly send me into a spin of constantly checking my phone for updates and an overall pessimistic outlook for the day.

Does this mean we should all turn off the news and stop paying attention to what is going on in the world? Of course not; the news media plays a positive role in society as well. We just need to remember that its first priority is to build audiences, and consume it accordingly.

References:

CDC. (2017, March 17). Assault or homicide. National Center for Health Statistics, Centers for Disease Control and Prevention. Retrieved October 6, 2017, from https://www.cdc.gov/nchs/fastats/homicide.htm

CDC. (2017, March 17). Leading causes of death. National Center for Health Statistics, Centers for Disease Control and Prevention. Retrieved October 2017, from https://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm

Davey, G. (2012). The psychological effects of TV news. Psychology Today. Retrieved from https://www.psychologytoday.com/blog/why-we-worry/201206/the-psychological-effects-tv-news

Fallahi, C. R., & Lesik, S. A. (2009). The effects of vicarious exposure to the recent massacre at Virginia Tech. Psychological Trauma: Theory, Research, Practice, and Policy, 1(3), 220–230. https://doi.org/10.1037/a0015052