How Fiction Becomes Fact on Social Media

Hours after the Las Vegas massacre, Travis McKinney’s Facebook feed was hit with a scattershot of conspiracy theories. The police were lying. There were multiple shooters in the hotel, not just one. The sheriff was covering for casino owners to preserve their business.

The political rumors sprouted soon after, like digital weeds. The killer was anti-Trump, an “antifa” activist, said some; others made the opposite claim, that he was an alt-right terrorist. The two unsupported narratives ran into the usual stream of chatter, news and selfies.

“This stuff was coming in from all over my network of 300 to 400” friends and followers, said Mr. McKinney, 52, of Suffolk, Va., and some posts were from his inner circle.

But he knew there was only one shooter; a handgun instructor and defense contractor, he had been listening to the police scanner in Las Vegas with an app. “I jumped online and tried to counter some of this nonsense,” he said.

In the coming weeks, executives from Facebook and Twitter will appear before congressional committees to answer questions about the use of their platforms by Russian hackers and others to spread misinformation and skew elections. During the 2016 presidential campaign, Facebook sold more than $100,000 worth of ads to a Kremlin-linked company, and Google sold more than $4,500 worth to accounts thought to be connected to the Russian government.

Agents with links to the Russian government set up an endless array of fake accounts and websites and purchased a slew of advertisements on Google and Facebook, spreading dubious claims that seemed intended to sow division all along the political spectrum — “a cultural hack,” in the words of one expert.

Yet the psychology behind social media platforms — the dynamics that make them such powerful vectors of misinformation in the first place — is at least as important, experts say, especially for those who think they’re immune to being duped. For all the suspicions about social media companies’ motives and ethics, it is the interaction of the technology with our common, often subconscious psychological biases that makes so many of us vulnerable to misinformation, and this has largely escaped notice.

Skepticism of online “news” serves as a decent filter much of the time, but our innate biases allow it to be bypassed, researchers have found — especially when presented with the right kind of algorithmically selected “meme.”

At a time when political misinformation is in ready supply, and in demand, “Facebook, Google, and Twitter function as a distribution mechanism, a platform for circulating false information and helping find receptive audiences,” said Brendan Nyhan, a professor of government at Dartmouth College (and occasional contributor to The Times’s Upshot column).

For starters, said Colleen Seifert, a professor of psychology at the University of Michigan, “People have a benevolent view of Facebook, for instance, as a curator, but in fact it does have a motive of its own. What it’s actually doing is keeping your eyes on the site. It’s curating news and information that will keep you watching.”

That kind of curating acts as a fertile host for falsehoods by simultaneously engaging two predigital social-science standbys: the urban myth as “meme,” or viral idea; and individual biases, the automatic, subconscious presumptions that color belief.

The first process is largely data-driven, experts said, and built into social media algorithms. The wide circulation of bizarre, easily debunked rumors — so-called Pizzagate, for example, the canard that Hillary Clinton was running a child sex ring from a Washington-area pizza parlor — is not entirely dependent on partisan fever (though that was its origin).

For one, the common wisdom that these rumors gain circulation because most people conduct their digital lives in echo chambers or “information cocoons” is exaggerated, Dr. Nyhan said.

In a forthcoming paper, Dr. Nyhan and colleagues review the relevant research, including analyses of partisan online news sites and Nielsen data, and find the opposite. Most people are more omnivorous than presumed; they are not confined in warm bubbles containing only agreeable outrage.

But they don’t have to be for fake news to spread fast, research also suggests. Social media algorithms function at one level like evolutionary selection: Most lies and false rumors go nowhere, but the rare ones with appealing urban-myth “mutations” find psychological traction, then go viral.

There is no precise formula for such digital catnip. The point, experts said, is that the very absurdity of the Pizzagate lie could have boosted its early prominence, no matter the politics of those who shared it.


“My experience is that once this stuff gets going, people just pass these stories on without even necessarily stopping to read them,” Mr. McKinney said. “They’re just participating in the conversation without stopping to look hard” at the source.

Digital social networks are “dangerously effective at identifying memes that are well adapted to surviving, and these also tend to be the rumors and conspiracy theories that are hardest to correct,” Dr. Nyhan said.

One reason is the raw pace of digital information sharing, he said: “The networks make information run so fast that it outruns fact-checkers’ ability to check it. Misinformation spreads widely before it can be downgraded in the algorithms.”

The extent to which Facebook and other platforms function as “marketers” of misinformation, similar to the way they market shoes and makeup, is contentious. In 2015, a trio of behavioral scientists working at Facebook inflamed the debate in a paper published in the prominent journal Science.

The authors analyzed the news feeds of some 10 million users in the United States who posted their political views, and concluded that “individuals’ choices played a stronger role in limiting exposure” to contrary news and commentary than Facebook’s own algorithmic ranking — which gauges how interesting stories are likely to be to individual users, based on data they have provided.

Outside critics lashed the study as self-serving, while other researchers said the analysis was solid and without apparent bias.

The other dynamic that works in favor of proliferating misinformation is not embedded in the software but in the biological hardware: the cognitive biases of the human brain.

Purely from a psychological point of view, subtle individual biases are at least as important as rankings and choice when it comes to spreading bogus news or Russian hoaxes — like a false report of Muslim men in Michigan collecting welfare for multiple wives.

Merely understanding what a news report or commentary is saying requires a temporary suspension of disbelief. Mentally, the reader must temporarily accept the stated “facts” as possibly true. A cognitive connection is made automatically: Clinton-sex offender, Trump-Nazi, Muslim men-welfare.

And refuting those false claims requires a person to first mentally articulate them, reinforcing a subconscious connection that lingers far longer than people presume.

Over time, for many people, it is that false initial connection that stays the strongest, not the retractions or corrections: “Was Obama a Muslim? I seem to remember that….”

In a recent analysis of the biases that help spread misinformation, Dr. Seifert and co-authors named this and several other automatic cognitive connections that can buttress false information.

Another is repetition: Merely seeing a news headline multiple times in a news feed makes it seem more credible before it is ever read carefully, even if it’s a fake item being whipped around by friends as a joke.

And, as salespeople have known forever, people tend to value the information and judgments offered by good friends over all other sources. It’s a psychological tendency with significant consequences now that nearly two-thirds of Americans get at least some of their news from social media.

“Your social alliances affect how you weight information,” said Dr. Seifert. “We overweight information from people we know.”

The casual, social, wisecracking nature of thumbing through and participating in the digital exchanges allows these biases to operate all but unchecked, Dr. Seifert said.

Stopping to drill down and determine the true source of a foul-smelling story can be tricky, even for the motivated skeptic, and mentally it’s hard work. Ideological leanings and viewing choices are conscious, downstream factors that come into play only after automatic cognitive biases have already had their way, abetted by the algorithms and social nature of digital interactions.

“If I didn’t have direct evidence that all these theories were wrong” from the scanner, Mr. McKinney said, “I might have taken them a little more seriously.”

A version of this article appears in print on October 24, 2017, on Page D1 of the New York edition with the headline: How Fiction Becomes Fact on Social Media


Why Personal Tech Is Depressing


It’s more than Instagram envy. And thanks to our ever-increasing digital dependence, it’s likely to get worse.

Source: Why Personal Tech Is Depressing

 

We live in an era of previously unimaginable luxury. Without leaving our sofas, we can conjure almost any book or film on our phone and enjoy it with exotic cuisine delivered right to our doorstep via an app. But there is a cost to this convenience that doesn’t appear on your credit-card statement. Our indoor, sedentary and socially isolated lives leave us vulnerable to depression. The U.S., the most technologically advanced nation on the planet, is also the most depressed: Three in 10 Americans will battle depressive illness at some point in their lives, an estimated tenfold increase since World War II.

Although antidepressant use in the U.S. has risen 400% since 1990, so has the rate of depression—and not just in America. The World Health Organization says depression is the leading cause of disability around the world.

Labor-saving inventions, from the Roomba to Netflix, spare us the arduous tasks of our grandparents’ generation. But small actions like vacuuming and returning videotapes can have a positive impact on our well-being. Even modest physical activity can mitigate stress and stimulate the brain’s release of dopamine and serotonin—powerful neurotransmitters that help spark motivation and regulate emotions. Remove physical exertion, and our brain’s pleasure centers can go dormant. As AI renders the need for human activity increasingly superfluous, rates of depressive illness will likely get worse.

In theory, labor-saving apps and automation create free time that we could use to hit the beach or join a kickball league. But that isn’t what tends to happen. We’re wired, like our ancestors, to conserve energy whenever possible—to be lazy when no exertion is required—an evolutionary explanation for your tendency to sit around after work. Excessive screen time lulls us ever deeper into habitual inactivity, overstimulates the nervous system and increases production of the stress hormone cortisol. In the short term, cortisol helps us react to high-pressure situations, but when chronically activated, it triggers the brain’s toxic runaway stress response, which researchers have identified as an ultimate driver of depressive illness.

At first blush, it seems as if our smartphones should keep us better connected than ever through an endless stream of texts, instant messages, voice calls and social-media interactions. But as smartphones have become ubiquitous over the past decade, the proportion of Americans who report feelings of chronic loneliness has surged to 40%, from 15% 30 years ago. The psychological burden is particularly pronounced for those who don’t balance screen time with in-person interactions. Face-to-face conversations immerse us in a continuous multichannel sensory experience—only a fraction of which can be transferred via text or video message. Communicating solely through technology robs us of the richer neurological effects of in-person interactions and their potential to alleviate feelings of loneliness and depression.

A few generations ago, people spent most of their waking hours outdoors. Direct sunlight boosts the brain’s serotonin circuitry, protects against seasonal affective disorder and triggers the eyes’ light receptors, which regulate the body’s internal clock and sleep patterns—yet we spend 93% of our time inside. Our mood suffers, and our body loses the ability to find restorative sleep. And bathing our eyes in artificial lighting—especially the blueshifted hues of flat screens—stalls the body’s nightly release of melatonin, the drowsiness-inducing hormone, until 45 minutes after we power down. The resulting sleep deprivation can both trigger and compound depression.

But perhaps the most telling evidence of technology’s effect on our well-being comes from the so-called unplugged study from 2010, in which about 1,000 students at 19 universities around the world pledged to give up all screens for 24 hours. Most students dropped out of the study in a matter of hours, and many reported symptoms of withdrawal associated with substance addiction. But those who pushed through the initial discomfort and completed the experiment discovered a surprising array of benefits: greater calm, less fragmented attention, more meaningful conversations, deeper connections with friends and a greater sense of mindfulness.

This isn’t a Luddite manifesto. Personal tech is here to stay, and a mass unplugging is about as likely as the discovery of Atlantis. Luckily for us, the same technology that’s wrecking our emotional well-being can, when smartly employed, reduce and even reverse the symptoms of depressive illness. Sometimes the problem contains the solution.

Ilardi is a professor of clinical psychology at the University of Kansas and the author of ‘The Depression Cure.’

 

 

Reading Information Aloud to Yourself Improves Memory

You are more likely to remember something if you read it out loud, a study from the University of Waterloo has found.

This latest study shows that part of the memory benefit of speech stems from it being personal and self-referential.


Whether you are studying for a big exam or just need to remember a few minor details, researchers say reading aloud can help you retain information.

Source: Reading Information Aloud to Yourself Improves Memory

Source: University of Waterloo.

A recent Waterloo study found that speaking text aloud helps to get words into long-term memory. The study determined that it is the dual action of speaking and hearing oneself, dubbed the “production effect,” that has the most beneficial impact on memory.

“This study confirms that learning and memory benefit from active involvement,” said Colin M. MacLeod, a professor and chair of the Department of Psychology at Waterloo, who co-authored the study with the lead author, post-doctoral fellow Noah Forrin. “When we add an active measure or a production element to a word, that word becomes more distinct in long-term memory, and hence more memorable.”

The study tested four methods for learning written information: reading silently, hearing someone else read, listening to a recording of oneself reading, and reading aloud in real time. Results from tests with 95 participants showed that reading information aloud to yourself produced the best retention.

“When we consider the practical applications of this research, I think of seniors who are advised to do puzzles and crosswords to help strengthen their memory,” said MacLeod. “This study suggests that the idea of action or activity also improves memory.

“And we know that regular exercise and movement are also strong building blocks for a good memory.”

This research builds on previous studies by MacLeod, Forrin, and colleagues that measure the production effect of activities, such as writing and typing words, in enhancing overall memory retention.

 

ABOUT THIS NEUROSCIENCE RESEARCH ARTICLE

Source: Matthew Grant – University of Waterloo
Original Research: Abstract for “This time it’s personal: the memory benefit of hearing oneself” by Noah D. Forrin & Colin M. MacLeod in Memory. Published online October 2 2017 doi:10.1080/09658211.2017.1383434

 

University of Waterloo “Reading Information Aloud to Yourself Improves Memory.” NeuroscienceNews. NeuroscienceNews, 30 November 2017.
<http://neurosciencenews.com/memory-reading-aloud-8084/>.

Abstract

This time it’s personal: the memory benefit of hearing oneself

The production effect is the memory advantage of saying words aloud over simply reading them silently. It has been hypothesised that this advantage stems from production featuring distinctive information that stands out at study relative to reading silently. MacLeod (2011) (I said, you said: The production effect gets personal. Psychonomic Bulletin & Review, 18, 1197–1202. doi:10.3758/s13423-011-0168-8) found superior memory for reading aloud oneself vs. hearing another person read aloud, which suggests that motor information (speaking), self-referential information (i.e., “I said it”), or both contribute to the production effect. In the present experiment, we dissociated the influence on memory of these two components by including a study condition in which participants heard themselves read words aloud (recorded earlier) – a first for production effect research – along with the more typical study conditions of reading aloud, hearing someone else speak, and reading silently. There was a gradient of memory across these four conditions, with hearing oneself lying between speaking and hearing someone else speak. These results imply that oral production is beneficial because it entails two distinctive components: a motor (speech) act and a unique, self-referential auditory input.

“This time it’s personal: the memory benefit of hearing oneself” by Noah D. Forrin & Colin M. MacLeod in Memory. Published online October 2 2017 doi:10.1080/09658211.2017.1383434

 

Facebook’s Red Herring


Cambridge Analytica collected data on over 50 million Facebook users without their consent. How? Dr. Aleksander Kogan, a psychology professor at Cambridge University, built a survey on Facebook that many users participated in. However, only 270,000 of those participants consented to their data being used, and according to The New York Times that consent was for “academic purposes” only. Cambridge Analytica told the Times that it did receive the data, but it blamed Mr. Kogan for violating Facebook’s terms. Facebook claims that when it learned of this violation it took the app off its site and demanded that all the data Cambridge Analytica acquired be deleted. Facebook now believes that not all of that data was destroyed. The problem described is an issue of data LEAVING Facebook and consumer consent, not data going into Facebook.

Yesterday, Facebook announced that it will sever all ties with third-party data partners to protect consumer privacy. Some privacy proponents believe that this is a great step in protecting the privacy of consumers. In my opinion, this is a red herring and has nothing to do with the Cambridge Analytica scandal, nor has Facebook done anything substantial with this decision to protect your privacy. Let’s start with some standard definitions before I go on and explain why this is nothing other than a distraction.

Third-party data is used by marketers to help them create marketing messages that are more relevant to a target segment. Segmentation is when a marketer defines a group of consumers (current customers or prospects) that share similar attributes. Marketers then send the same message to that group (a different message to every individual would not be practical). For example, a segment could look something like “people between 25 and 45, with young children in the household and an interest in skiing”.
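To make that concrete, here is a minimal sketch in Python of what segmentation amounts to mechanically: filtering a pool of consumer profiles down to the ones that match a segment definition like the example above. The field names, records and rules are invented for illustration; no real marketing system is this simple.

```python
# Hypothetical illustration of audience segmentation: field names and data are invented.
from dataclasses import dataclass

@dataclass
class Profile:
    age: int
    has_young_children: bool
    interests: set

profiles = [
    Profile(34, True, {"skiing", "cooking"}),
    Profile(61, False, {"golf"}),
    Profile(28, True, {"hiking"}),
]

def in_ski_family_segment(p):
    """Matches the example segment: aged 25-45, young children at home, interested in skiing."""
    return 25 <= p.age <= 45 and p.has_young_children and "skiing" in p.interests

segment = [p for p in profiles if in_ski_family_segment(p)]
print(len(segment))  # every profile in `segment` would receive the same marketing message
```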

Third-party data comes from many sources (there are thousands of them). For example, demographic information, which describes the individual (age, income, marital status, children in the home, etc.), is largely public information that can be gathered from sources such as the census. Another type of third-party data is behavioral data, which can include what types of interests you have. For example, if you have a subscription to a golf magazine or a hiking magazine, that magazine has likely sold its subscriber-list data, which associates you with an interest in that category. Behavioral data is also collected by observing what content you click on, read or share socially. For example, if you share an article on Aspen ski vacations, you are likely categorized as someone with an interest in skiing. Purchase data is collected on individuals as well, and this can be done in many ways. For instance, whenever you buy anything with a warranty that you register (like consumer electronics), your personal information is then associated with that purchase. Other sources of purchase data can be credit card companies, banks and credit bureaus. Your purchases can be analyzed, and assumptions can then be inferred about other consumers who have attributes similar to yours (this is called look-alike, or LAL, in the data industry). Your actual personal information tied to your specific transactions is NOT available on the market to buy (unless there has been a data breach at one of these organizations, in which case your information could be on the darknet). All these data sources in combination can be used to infer assumptions about you and others to improve advertisement targeting.
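The look-alike (LAL) idea mentioned above can likewise be sketched in a few lines: score prospects by how similar their attribute vectors are to those of known customers and keep the closest matches. This is a toy illustration under assumed features and an arbitrary similarity threshold, not any data vendor’s actual model.

```python
# Toy look-alike (LAL) scoring: the features and threshold are invented for illustration.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Feature vectors: [age / 100, skiing interest, golf interest, registered-electronics purchase]
known_customers = [[0.34, 1.0, 0.0, 1.0], [0.29, 1.0, 0.0, 0.0]]
prospects = {"user_a": [0.31, 1.0, 0.0, 1.0], "user_b": [0.67, 0.0, 1.0, 0.0]}

def lal_score(vec):
    """Average similarity to the seed audience of known customers."""
    return sum(cosine(vec, c) for c in known_customers) / len(known_customers)

lookalikes = [uid for uid, vec in prospects.items() if lal_score(vec) > 0.8]
print(lookalikes)  # prospects similar enough to the seed audience to be targeted
```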

It would be nearly impossible for every marketer to bring thousands of data sources together, vet them for privacy compliance, and then analyze them to create the elements and models needed for audience segmentation. Therefore, data aggregators such as Acxiom, Experian, Equifax, Trans Union, Oracle, Cardlytics and many others step in. They aggregate multiple data sources so that the data can be easily transformed into elements and models that enable segmentation. Some aggregators are much better than others at ethically sourcing their data. The best ones have a process where they vet data sources through their privacy departments and policies to ensure consumers gave consent for the data being included as a source in the elements and models created for marketing use. Furthermore, the best aggregators are also transparent with consumers, allowing them to see what data they have and providing the ability to opt out. Finally, the more credible aggregators will not source data in sensitive categories such as sexual orientation or health indications.

When marketers leverage social media or digital publishers to advertise, they can select elements and models supplied by aggregators to narrow the target audience. Marketers don’t want to advertise baby products to people who are most likely not parents, nor do they want to push golf products on someone who is likely never going to be a golfer. Furthermore, if marketers were not able to target audiences, advertising costs would become unrealistic for all digital media, and those increased costs would need to be absorbed somewhere. Possibilities could include increased prices on goods or subscriber fees to use social platforms, which would be wildly unpopular with consumers.

Facebook’s announcement yesterday was that it will no longer partner with data aggregators that enable marketers to target on the platform. Regardless of how you feel about third-party data for targeting, this has nothing to do with the Cambridge Analytica controversy. The Cambridge Analytica issue was about Facebook’s user data going out of Facebook to be analyzed and used without consent from the user. Facebook’s recent action targets aggregators of data going into the platform. In my opinion, this is a red herring to distract consumers who do not understand what Facebook is really doing.

In my March 2018 article, When Advertisements Become Too Personal, I noted how Facebook leverages trackers that passively surveil consumers and collect that data. Facebook uses trackers to conduct surveillance on you without your conscious knowledge, since you likely agreed to this surveillance in the terms of service and privacy policies on its platform and on advertiser websites. Consumers don’t have the time to read through long privacy policies, terms and conditions. Furthermore, if consumers don’t agree to a digital property’s terms, often they can’t do business on or use the platform. Advertisers can still target on Facebook without aggregator data being available on the platform. For example, a Facebook Custom Audience is a targeting option created from an advertiser-owned customer list, so advertisers can target those users on Facebook. Facebook Pixel is a small piece of code for websites that allows the website owner AND Facebook to log any Facebook user who visits that site. Facebook also tracks the kind of content you share, who you are friends with, what your friends share, what you like, what you talk about in Messenger, what you share on Instagram and other Facebook-owned properties. Facebook can aggregate all that data itself to create targeting tools.
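As a concrete example of the customer-list mechanism, advertiser lists are generally matched to platform accounts by hashing identifiers such as email addresses before upload. The sketch below shows the general idea in Python; the emails, the platform index and the matching flow are illustrative assumptions, not Facebook’s actual pipeline.

```python
# Illustrative sketch of customer-list matching via hashed identifiers (not Facebook's actual pipeline).
import hashlib

def normalize_and_hash(email):
    """Lowercase/trim the identifier, then hash it so raw emails never leave the advertiser."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

advertiser_list = ["Jane.Doe@example.com", "buyer@example.org"]               # advertiser's own customers
platform_accounts = {normalize_and_hash("jane.doe@example.com"): "uid_123"}   # hypothetical platform index

matched_audience = [
    platform_accounts[h]
    for h in (normalize_and_hash(e) for e in advertiser_list)
    if h in platform_accounts
]
print(matched_audience)  # accounts that can now be shown the advertiser's ads
```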

Facebook’s own passive surveillance of us across all its platforms, messaging applications, other websites and even the texts on our phones (if you haven’t locked down those permissions) is a much larger concern, in my opinion. Instead, Facebook is using this announcement to distract consumers into thinking it is taking a huge step to protect consumer privacy, when in fact the data it has already collected, and continues to collect, is much more unsettling.

Hard Questions: Is Spending Time on Social Media Bad for Us?


By David Ginsberg, Director of Research, and Moira Burke, Research Scientist at Facebook

With people spending more time on social media, many rightly wonder whether that time is good for us. Do people connect in meaningful ways online? Or are they simply consuming trivial updates and polarizing memes at the expense of time with loved ones?

These are critical questions for Silicon Valley — and for both of us. Moira is a social psychologist who has studied the impact of the internet on people’s lives for more than a decade, and I lead the research team for the Facebook app. As parents, each of us worries about our kids’ screen time and what “connection” will mean in 15 years. We also worry about spending too much time on our phones when we should be paying attention to our families. One of the ways we combat our inner struggles is with research — reviewing what others have found, conducting our own, and asking questions when we need to learn more.

A lot of smart people are looking at different aspects of this important issue. Psychologist Sherry Turkle asserts that mobile phones redefine modern relationships, making us “alone together.” In her generational analyses of teens, psychologist Jean Twenge notes an increase in teen depression corresponding with technology use. Both offer compelling research.

But it’s not the whole story. Sociologist Claude Fischer argues that claims that technology drives us apart are largely supported by anecdotes and ignore the benefits. Sociologist Keith Hampton’s study of public spaces suggests that people spend more time in public now — and that cell phones in public are more often used by people passing time on their own, rather than ignoring friends in person.

We want Facebook to be a place for meaningful interactions with your friends and family — enhancing your relationships offline, not detracting from them. After all, that’s what Facebook has always been about. This is important as we know that a person’s health and happiness rely heavily on the strength of their relationships.

In this post, we want to give you some insights into how the research team at Facebook works with our product teams to incorporate well-being principles, and review some of the top scientific research on well-being and social media that informs our work. Of course, this isn’t just a Facebook issue — it’s an internet issue — so we collaborate with leading experts and publish in the top peer-reviewed journals. We work with scientists like Robert Kraut at Carnegie Mellon; Sonja Lyubomirsky at UC Riverside; Dacher Keltner, Emiliana Simon-Thomas, and Matt Killingsworth from the Greater Good Science Center at UC Berkeley, and have partnered closely with mental health clinicians and organizations like Save.org and the National Suicide Prevention Lifeline.

What Do Academics Say? Is Social Media Good or Bad for Well-Being?

According to the research, it really comes down to how you use the technology. For example, on social media, you can passively scroll through posts, much like watching TV, or actively interact with friends — messaging and commenting on each other’s posts. Just like in person, interacting with people you care about can be beneficial, while simply watching others from the sidelines may make you feel worse.

The bad: In general, when people spend a lot of time passively consuming information — reading but not interacting with people — they report feeling worse afterward. In one experiment, University of Michigan students randomly assigned to read Facebook for 10 minutes were in a worse mood at the end of the day than students assigned to post or talk to friends on Facebook. A study from UC San Diego and Yale found that people who clicked on about four times as many links as the average person, or who liked twice as many posts, reported worse mental health than average in a survey. Though the causes aren’t clear, researchers hypothesize that reading about others online might lead to negative social comparison — and perhaps even more so than offline, since people’s posts are often more curated and flattering. Another theory is that the internet takes people away from social engagement in person.

The good: On the other hand, actively interacting with people — especially sharing messages, posts and comments with close friends and reminiscing about past interactions — is linked to improvements in well-being. This ability to connect with relatives, classmates, and colleagues is what drew many of us to Facebook in the first place, and it’s no surprise that staying in touch with these friends and loved ones brings us joy and strengthens our sense of community.

A study we conducted with Robert Kraut at Carnegie Mellon University found that people who sent or received more messages, comments and Timeline posts reported improvements in social support, depression and loneliness. The positive effects were even stronger when people talked with their close friends online. Simply broadcasting status updates wasn’t enough; people had to interact one-on-one with others in their network. Other peer-reviewed longitudinal research and experiments have found similar positive benefits between well-being and active engagement on Facebook.

In an experiment at Cornell, stressed college students randomly assigned to scroll through their own Facebook profiles for five minutes experienced boosts in self-affirmation compared to students who looked at a stranger’s Facebook profile. The researchers believe self-affirmation comes from reminiscing on past meaningful interactions — seeing photos they had been tagged in and comments their friends had left — as well as reflecting on one’s own past posts, where a person chooses how to present themselves to the world.

In a follow-up study, the Cornell researchers put other students under stress by giving them negative feedback on a test and then gave them a choice of websites to visit afterward, including Facebook, YouTube, online music and online video games. They found that stressed students were twice as likely to choose Facebook to make themselves feel better as compared with students who hadn’t been put under stress.

In sum, our research and other academic literature suggests that it’s about how you use social media that matters when it comes to your well-being.


So what are we doing about it?

We’re working to make Facebook more about social interaction and less about spending time. As our CEO Mark Zuckerberg recently said, “We want the time people spend on Facebook to encourage meaningful social interactions.” Facebook has always been about bringing people together — from the early days when we started reminding people about their friends’ birthdays, to showing people their memories with friends using the feature we call “On This Day.” We’re also a place for people to come together in times of need, from fundraisers for disaster relief to groups where people can find an organ donor. We’re always working to expand these communities and find new ways to have a positive impact on people’s lives.

We employ social psychologists, social scientists and sociologists, and we collaborate with top scholars to better understand well-being and work to make Facebook a place that contributes in a positive way. Here are a few things we’ve worked on recently to help support people’s well-being.

News Feed quality: We’ve made several changes to News Feed to provide more opportunities for meaningful interactions and reduce passive consumption of low-quality content — even if it decreases some of our engagement metrics in the short term. We demote things like clickbait headlines and false news, even though people often click on those links at a high rate. We optimize ranking so posts from the friends you care about most are more likely to appear at the top of your feed because that’s what people tell us in surveys that they want to see. Similarly, our ranking promotes posts that are personally informative. We also recently redesigned the comments feature to foster better conversations.

Snooze: People often tell us they want more say over what they see in News Feed. Today, we launched Snooze, which gives people the option to hide a person, Page or group for 30 days, without having to permanently unfollow or unfriend them. This will give people more control over their feed and hopefully make their experience more positive.

Take a Break: Millions of people break up on Facebook each week, changing their relationship status from “in a relationship” to “single.” Research on people’s experiences after breakups suggests that offline and online contact, including seeing an ex-partner’s activities, can make emotional recovery more difficult. To help make this experience easier, we built a tool called Take a Break, which gives people more centralized control over when they see their ex on Facebook, what their ex can see, and who can see their past posts.

Suicide prevention tools: Research shows that social support can help prevent suicide. Facebook is in a unique position to connect people in distress with resources that can help. We work with people and organizations around the world to develop support options for people posting about suicide on Facebook, including reaching out to a friend, contacting help lines and reading tips about things they can do in that moment. We recently released suicide prevention support on Facebook Live and introduced artificial intelligence to detect suicidal posts even before they are reported. We also connect people more broadly with mental health resources, including support groups on Facebook.


What About Related Areas Like Digital Distraction and the Impact of Technology on Kids?

We know that people are concerned about how technology affects our attention spans and relationships, as well as how it affects children in the long run. We agree these are critically important questions, and we all have a lot more to learn.

That’s why we recently pledged $1 million toward research to better understand the relationship between media technologies, youth development and well-being. We’re teaming up with experts in the field to look at the impact of mobile technology and social media on kids and teens, as well as how to better support them as they transition through different stages of life.

We’re also making investments to better understand digital distraction and the factors that can pull people away from important face-to-face interactions. Is multitasking hurting our personal relationships? How about our ability to focus? Next year we’ll host a summit with academics and other industry leaders to tackle these issues together.

We don’t have all the answers, but given the prominent role social media now plays in many people’s lives, we want to help elevate the conversation. In the years ahead we’ll be doing more to dig into these questions, share our findings and improve our products. At the end of the day, we’re committed to bringing people together and supporting well-being through meaningful interactions on Facebook.

Your Data and “Those Pictures” Are Less Secure Than You Think….


My writing lately has revolved around media, technology, the use of data and the consequential psychological impacts. However, in a conversation with my friend Michael Becker of Identity Praxis, he urged me to write about Personally Identifiable Information (PII) security fundamentals. According to Michael, personal data privacy is “the new luxury good,” and we have all heard about the malicious hackers who find creative ways to steal it. Mismanagement of identity and personal information, for the individual and company alike, can lead to reputation damage, debt, criminal records, loss of income, damaged employment prospects and, yes, even death. For those of us “non-techies,” when thinking about security on our devices we often default to, “I have antivirus software on my computer, so I am good.” Well congratulations, I’m sure that hacker from who knows where has never gotten past antivirus software. Those questionable pictures of you at your bachelorette party are completely safe and your privacy is protected, NOT (Wayne’s World reference). For your reading pleasure, below are actions, recommended by Michael and explained by me, that you can take to protect your devices from being compromised and unleashing holy hell on you personally.

Begin with using common sense before sharing your PII. This doesn’t involve buying expensive software; it requires taking an extra two seconds to think before acting. Consider the trustworthiness of a website, mobile site or application you engage with before sharing your personal data; if something seems suspicious, don’t share. Furthermore, don’t complete a transaction online or in a phone app if you don’t feel it is secure. Either call the company or go to a different site where you can order the same product. With email, if you don’t know the sender and they ask you to click on a link, it could be a phishing attack, which can grab data off your computer. Don’t just look at the sender’s name or the link text; look at the actual email address and the URL within the link, as the name can be used to mask a malicious address. Sorry, that email you received from a stranger asking for your SSN and credit card information to redeem your grand prize is likely about as real as the Easter Bunny.

According to an article in the Telegraph last year, more than 50 percent of people use at least one of the top 25 passwords, and almost 17 percent use the password “123456” (Wasn’t this password used in “Spaceballs”?). When creating passwords, the best practice is to include capitals and special characters and to use a different username and password for each account. The reality is that with all the different accounts we have now, it is tough to keep track of it all, so we all pick a favorite username and password for everything. Therefore, if a hacker can figure out the credentials to one account, they will likely work on several others. Password managers such as LastPass or 1Password are good programs that can make your life easier. A password manager is an application that stores all your different usernames and passwords and opens with the use of one master password. They also often have the ability to auto-fill login credentials on websites. What’s nice about this feature is that it is obviously faster and more accurate, but it also protects against keylogging attacks. Password managers are also able to detect whether you are on the right URL, which helps protect you from phishing sites. Some of them also have random password generators, so you don’t have to think of new passwords for every account. DO NOT use the autofill features available in your browser; these are not secure! Finally, enable two-factor authentication (either SMS or an authenticator app) on accounts that support it, e.g. banking and retailer sites. I know it is a pain in the ass, but so is having your bank account drained or your social media account hacked.
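To make “strong, unique passwords” concrete, here is a small Python sketch, using only the standard library, that generates the kind of random password a password manager would create and remember for you. The length and character set are arbitrary choices for illustration.

```python
# Minimal sketch: generate a strong random password using Python's standard library.
import secrets
import string

def generate_password(length=16):
    """Draw each character from letters, digits and punctuation using a cryptographic RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # unique per account, stored in a password manager rather than memorized
```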

With the Equifax breach last year, most of us have at least heard about the risks from news coverage. However, most people think there are only three major credit bureaus (go ahead, name them in your head…). BUT NO, Michael reminded me there are in fact FOUR. Make sure to visit all four major credit bureaus to freeze your credit (Trans Union, Equifax, Experian, and Innovis). Freezing your credit stops any credit inquiries on you, which stops anyone from opening a credit account without your knowledge. When freezing your credit, you will receive a PIN code from each bureau to “unfreeze” it should you need to have a company run your credit, perhaps to get a loan. Keep those PIN codes in a protected place (how about that password manager above?). While I know some people are concerned about the inconvenience of needing to unfreeze credit when applying for legitimate credit, it can act as a loan deterrent. True story: my husband and I were considering a larger purchase where a credit application was needed and then never did it because of the time it would take to unfreeze our credit, but I digress. Put it on your calendar to check your credit score annually. You can go directly to the credit bureaus and get the reports for free or use companies like Credit Karma, Credit Sesame, or Quizzle (each offers different services). You might also want to consider getting cyber/identity insurance and darknet monitoring services. The darknet is a hidden layer built on top of the Internet, designed specifically for anonymity, whose biggest use is peer-to-peer file sharing. You can only access the darknet with special tools and software, so most of us can’t see what data is on there about us. Besides monetary compensation and support in the case of identity theft, this type of service will provide you with alerts about the kinds of things you wouldn’t otherwise know, such as an unauthorized USPS address change. There are a number of companies like LifeLock, Identity Guard and Experian that offer this service, and I recommend you check out this PC Magazine article on the subject.

Yes, I know my introduction started with a rant about how antivirus software will not protect you from everything, but YES, YOU STILL NEED IT. PC Magazine recently tested the best antivirus software and the reviews can be seen here. However, antivirus software should not be your last line of defense. For example, antivirus software doesn’t always protect against malware, and what if you lose your laptop? Encryption solutions prevent access to your files (remember those pictures?). On a Mac you can use the built-in FileVault feature, and for Windows, PC Magazine recently wrote a review of the best encryption software for 2018.
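For the curious, here is a minimal sketch of what encryption looks like programmatically, using the third-party Python cryptography package (my own choice for illustration, not one of the products reviewed above): generate a key, encrypt the data, and nothing can be read back without that key.

```python
# Minimal sketch using the third-party `cryptography` package (pip install cryptography).
# This illustrates the idea behind file encryption; it is not one of the products reviewed above.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this key somewhere safe (e.g., your password manager)
cipher = Fernet(key)

plaintext = b"those bachelorette party photos"
token = cipher.encrypt(plaintext)  # the ciphertext is useless without the key
assert cipher.decrypt(token) == plaintext
print(token[:20], b"...")
```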

Running your computers on the latest operating system and paying attention to those annoying notifications for OS updates can stave off major attacks (my husband, a former systems administrator, is rolling his eyes right now because I used to ignore them). According to a Popular Science article, an update released two months before the WannaCry malware attack protected users who had installed it from that attack. The same article calls out the importance of selecting a good email provider and mentions Google and Microsoft as smart choices, since they filter many suspicious emails (but not all) before they get to your inbox.

Make sure to password-protect your home Wi-Fi router (yes, I know people who don’t) and use a VPN when you are connecting to a public Wi-Fi network, such as at an airport, hotel or the nearest Starbucks. You can also consider installing a cybersecurity hub on your home router, such as Bitdefender Box, Fing or Cujo. These tools monitor and block suspicious traffic on your network coming from any of your connected devices (they often come with a virus protection software package). I also like that those mentioned come with parental controls, allowing you to block offensive websites, limit social media and control Internet access by device. What I really liked about Bitdefender is that there are features to detect cyberbullying and online predators.

Identity theft is big business, affecting more than 15 million consumers with fraud losses of $16 billion in 2016, according to an identity fraud study released by Javelin Strategy and Research in 2017. Digitally connected consumers, defined as those who “have extensive social network activity, frequently shop online or with mobile devices, and are quick to adopt new digital technologies,” are at a 30 percent higher risk of identity fraud than the average person. The costs associated with the above suggestions range from free to a few hundred dollars, which could likely be offset by avoiding a couple of unnecessary purchases. Will it take some time? A few hours per year, maybe, but the return on that effort outpaces the hours you already spend checking your social media or reading the latest salacious news story about identity theft or privacy invasion that stresses you out.

You Should Never, Ever Argue With Anyone on Facebook, According to Science

New research shows how we interact makes a huge difference.

Source: You Should Never, Ever Argue With Anyone on Facebook, According to Science

By Minda Zetlin

You’ve seen it happen dozens if not hundreds of times. You post an opinion, or a complaint, or a link to an article on Facebook. Somebody adds a comment, disagreeing (or agreeing) with whatever you posted. Someone else posts another comment disagreeing with the first commenter, or with you, or both. Then others jump in to add their own viewpoints. Tempers flare. Harsh words are used. Soon enough, you and several of your friends are engaged in a virtual shouting match, aiming insults in all directions, sometimes at people you’ve never even met.

There’s a simple reason this happens, it turns out: We respond very differently to what people write than to what they say–even if those things are exactly the same. That’s the result of a fascinating new experiment by UC Berkeley and University of Chicago researchers. In the study, 300 subjects either read, watched video of, or listened to arguments about such hot-button topics as war, abortion, and country or rap music. Afterward, subjects were interviewed about their reactions to the opinions with which they disagreed.

Their general response was probably very familiar to anyone who’s ever discussed politics: a broad belief that people who don’t agree with you are either too stupid or too uncaring to know better. But there was a distinct difference between those who had watched or listened to someone speak the words out loud and those who had read the identical words as text. Those who had listened to or watched someone say the words were less likely to dismiss the speaker as uninformed or heartless than they were if they had just read the commenter’s words.

That result was no surprise to at least one of the researchers, who was inspired to try the experiment after a similar experience of his own. “One of us read a speech excerpt that was printed in a newspaper from a politician with whom he strongly disagreed,” researcher Juliana Schroeder told the Washington Post. “The next week, he heard the exact same speech clip playing on a radio station. He was shocked by how different his reaction was toward the politician when he read the excerpt compared to when he heard it.” Whereas the written comments seemed outrageous to this researcher, the same words spoken out loud seemed reasonable.

We’re using the wrong medium.

This research suggests that the best way for people who disagree with each other to work out their differences and arrive at a better understanding or compromise is by talking to each other, as people used to do at town hall meetings and over the dinner table. But now that so many of our interactions take place over social media, chat, text message, or email, spoken conversation or discussion is increasingly uncommon. It’s probably no coincidence that political disagreement and general acrimony have never been greater. Russians used this speech-vs.-text disharmony to full advantage by creating Facebook and Twitter accounts to stir up even more ill will among Americans than we already had on our own. No wonder they were so successful at it.

So what should you do about it? To begin with, if you want to make a persuasive case for your political opinion or proposed action, you’re better off doing it by making a short video (or linking to one by someone else) rather than writing out whatever you have to say. At the same time, whenever you’re reading something someone else wrote that seems outlandish to you, keep in mind that the fact that you’re seeing this as text may be part of the problem. If it’s important for you to be objective, try reading it out loud or having someone else read it to you.

Finally, if you’re already in the middle of an argument over Facebook (or Twitter, or Instagram or email or text), and the person on the other side of the issue is someone you care about, please don’t just keep typing out comments and replies and replies to replies. Instead, make a coffee date so you can speak in person. Or at the very least, pick up the phone.

Giving your child a smartphone is like giving them a gram of cocaine, says top addiction expert

Ofcom figures suggest more than four in ten parents of 12-15 year-olds find it hard to control their children’s screen time


Harley Street clinic director Mandy Saligari says many of her patients are 13-year-old girls who see sexting as ‘normal’

Rachael Pells, Education Correspondent, Wednesday 7 June 2017

Source: Giving your child a smartphone is like giving them a gram of cocaine, says top addiction expert

Giving your child a smartphone is like “giving them a gram of cocaine”, a top addiction therapist has warned.

Time spent messaging friends on Snapchat and Instagram can be just as dangerously addictive for teenagers as drugs and alcohol, and should be treated as such, school leaders and teachers were told at an education conference in London.

Speaking alongside experts in technology addiction and adolescent development, Harley Street rehab clinic specialist Mandy Saligari said screen time was too often overlooked as a potential vehicle for addiction in young people.

“I always say to people, when you’re giving your kid a tablet or a phone, you’re really giving them a bottle of wine or a gram of coke,” she said.

“Are you really going to leave them to knock the whole thing out on their own behind closed doors?

“Why do we pay so much less attention to those things than we do to drugs and alcohol when they work on the same brain impulses?”

Her comments follow news that children as young as 13 are being treated for digital technology addiction – with a third of British children aged 12-15 admitting they do not have a good balance between screen time and other activities.

“When people tend to look at addiction, their eyes tend to be on the substance or thing – but really it’s a pattern of behaviour that can manifest itself in a number of different ways,” Ms Saligari said, naming food obsessions, self-harm and sexting as examples.

Concern has grown recently over the number of young people seen to be sending or receiving pornographic images, or accessing age inappropriate content online through their devices.

Ms Saligari, who heads the Harley Street Charter clinic in London, said around two thirds of her patients were 16-20 year-olds seeking treatment for addiction – a “dramatic increase” on ten years ago – but many of her patients were even younger.

In a recent survey of more than 1,500 teachers, around two-thirds said they were aware of pupils sharing sexual content, with as many as one in six of those involved of primary school age.

More than 2,000 children have been reported to police for crimes linked to indecent images in the past three years.

“So many of my clients are 13- and 14-year-old girls who are involved in sexting, and describe sexting as ‘completely normal’,” said Ms Saligari.

Many young girls in particular believe that sending a picture of themselves naked to someone on their mobile phone is “normal”, and that it only becomes “wrong” when a parent or adult finds out, she added.

“If children are taught self-respect they are less likely to exploit themselves in that way,” said Ms Saligari. “It’s an issue of self-respect and it’s an issue of identity.”

Speaking alongside Ms Saligari at the Highgate Junior School conference on teenage development, Dr Richard Graham, a consultant psychiatrist and technology addiction lead at the Nightingale Hospital, said the issue was a growing area of interest for researchers, as parents report struggling to find the correct balance for their children.

Ofcom figures suggest more than four in ten parents of 12-15 year-olds find it hard to control their children’s screen time.

Even three and four year olds consume an average of six and a half hours of internet time per week, according to the broadcasting regulator.

Greater emphasis was needed on sleep and digital curfews at home, the experts suggested, as well as a systematic approach within schools, for example by introducing a smartphone amnesty at the beginning of the school day.

“With sixth formers and teenagers, you’re going to get resistance, because to them it’s like a third hand,” said Ms. Saligari, “but I don’t think it’s impossible to intervene. Schools asking pupils to spend some time away from their phone I think is great.

“If you catch [addiction] early enough, you can teach children how to self-regulate, so we’re not policing them and telling them exactly what to do,” she added.

“What we’re saying is, here’s the quiet carriage time, here’s the free time – now you must learn to self-regulate. It’s possible to enjoy periods of both.”

 

This surgeon wants to connect you to the Internet with a brain implant

Eric Leuthardt believes that in the near future we will allow doctors to insert electrodes into our brains so we can communicate directly with computers and each other.

Source: This surgeon wants to connect you to the Internet with a brain implant

by Adam Piore                    November 30, 2017

It’s the Monday morning following the opening weekend of the movie Blade Runner 2049, and Eric C. Leuthardt is standing in the center of a floodlit operating room clad in scrubs and a mask, hunched over an unconscious patient.

“I thought he was human, but I wasn’t sure,” Leuthardt says to the surgical resident standing next to him, as he draws a line on the area of the patient’s shaved scalp where he intends to make his initial incisions for brain surgery. “Did you think he was a replicant?”

“I definitely thought he was a replicant,” the resident responds, using the movie’s term for the eerily realistic-looking bioengineered androids.

“What I think is so interesting is that the future is always flying cars,” Leuthardt says, handing the resident his Sharpie and picking up a scalpel. “They captured the dystopian component: they talk about biology, the replicants. But they missed big chunks of the future. Where were the neural prosthetics?”

It’s a topic that Leuthardt, a 44-year-old scientist and brain surgeon, has spent a lot of time imagining. In addition to his duties as a neurosurgeon at Washington University in St. Louis, he has published two novels and written an award-winning play aimed at “preparing society for the changes ahead.” In his first novel, a techno-thriller called RedDevil 4, 90 percent of human beings have elected to get computer hardware implanted directly into their brains. This allows a seamless connection between people and computers, and a wide array of sensory experiences without leaving home. Leuthardt believes that in the next several decades such implants will be like plastic surgery or tattoos, undertaken with hardly a second thought.

Eric Leuthardt.

“I cut people open for a job,” he notes. “So it’s not hard to imagine.”

But Leuthardt has done far more than just imagine this future. He specializes in operating on patients with intractable epilepsy, all of whom must spend several days before their main surgery with electrodes implanted on their cortex as computers aggregate information about the neural firing patterns that precede their seizures. During this period, they are confined to a hospital bed and are often extremely bored. About 15 years ago, Leuthardt had an epiphany: why not recruit them to serve as experimental subjects? It would both ease their tedium and help bring his dreams closer to reality.

Leuthardt began designing tasks for them to do. Then he analyzed their brain signals to see what he might learn about how the brain encodes our thoughts and intentions, and how such signals might be used to control external devices. Was the data he had access to sufficiently robust to describe intended movement? Could he listen in on a person’s internal verbal monologues? Was it possible to decode cognition itself?

Though the answers to some of these questions were far from conclusive, they were encouraging. Encouraging enough to instill in Leuthardt the certitude of a true believer—one who might sound like a crackpot, were he not a brain surgeon who deals in the life-and-death realm of the operating room, where there is no room for hubris or delusion. Leuthardt knows better than most that brain surgery is dangerous, scary, and difficult for the patient. But his understanding of the brain has also given him a clear-eyed view of its inherent limitations—and the potential of technology to help overcome them. Once the rest of the world understands the promise, he insists—and once the technologies progress—the human race will do what it has always done. It will evolve. This time with the help of chips implanted in our heads.

One of Leuthardt’s patients is positioned for minimally invasive laser surgery to treat a brain tumor. Such highly precise surgical techniques have made implanting electrodes safer and less daunting for patients.

“A true fluid neural integration is going to happen,” Leuthardt says. “It’s just a matter of when. If it’s 10 or 100 years in the grand scheme of things, it’s a material development in the course of human history.”

Leuthardt is by no means the only one with exotic ambitions for what are known as brain-computer interfaces. Last March Elon Musk, a founder of Tesla and SpaceX, launched Neuralink, a venture aiming to create devices that facilitate mind-machine melds. Facebook’s Mark Zuckerberg has expressed similar dreams, and last spring his company revealed that it has 60 engineers working on building interfaces that would let you type using just your mind. Bryan Johnson, the founder of the online payment system Braintree, is using his fortune to fund Kernel, a company that aims to develop neuroprosthetics he hopes will eventually boost intelligence, memory, and more.

These plans, however, are all in their early phases and have been shrouded in secrecy, making it hard to assess how much progress has been made—or whether the goals are even remotely realistic. The challenges of brain-computer interfaces are myriad. The kinds of devices that people like Musk and Zuckerberg are talking about won’t just require better hardware to facilitate seamless mechanical connection and communication between silicon computers and the messy gray matter of the human brain. They’ll also have to have sufficient computational power to make sense out of the mass of data produced at any given moment as many of the brain’s nearly 100 billion neurons fire. One other thing: we still don’t know the code the brain uses. We will have to, in other words, learn how to read people’s minds.

But Leuthardt, for one, expects he will live to see it. “At the pace at which technology changes, it’s not inconceivable to think that in a 20-year time frame everything in a cell phone could be put into a grain of rice,” he says. “That could be put into your head in a minimally invasive way, and would be able to perform the computations necessary to be a really effective brain-computer interface.”

Decoding the brain

Scientists have long known that the firing of our neurons is what allows us to move, feel, and think. But breaking the code by which neurons talk to each other and the rest of the body—developing the capacity to actually listen in and make sense of precisely how it is that brain cells allow us to function—has long stood as one of neuroscience’s most daunting tasks.

In the early 1980s, a neuroscientist named Apostolos Georgopoulos, at Johns Hopkins, paved the way for the current revolution in brain-computer interfaces. Georgopoulos identified neurons in the higher-level processing areas of the motor cortex that fired prior to specific kinds of movement—such as a flick of the wrist to the right, or a downward thrust with the arm. What made Georgopoulos’s discovery so important was that you could record these signals and use them to predict the direction and intensity of the movements. Some of these neuronal firing patterns guided the behavior of scores of lower-level neurons working together to move the individual muscles and, ultimately, a limb.

Using arrays of dozens of electrodes to track these high-level signals, Georgopoulos demonstrated that he could predict not just which way a monkey would move a joystick in three-dimensional space, but even the velocity of the movement and how it would change over time.

It was, it seemed clear, precisely the kind of data one might use to give a paralyzed patient mind control over a prosthetic device. Which is the task that one of Georgopoulos’s protégés, Andrew Schwartz, took on in the 1990s. By the late 1990s Schwartz, who is currently a neurobiologist at the University of Pittsburgh, had implanted electrodes in the brains of monkeys and begun to demonstrate that it was indeed possible to train them to control robotic limbs just by thinking.

Leuthardt, in St. Louis to do a neurosurgery residency at Washington University in 1999, was inspired by such work: when he needed to decide how to spend a mandated year-long research break, he knew exactly what he wanted to focus on. Schwartz’s initial success had convinced Leuthardt that science fiction was on the verge of becoming reality. Scientists were finally taking the first tentative steps toward the melding of man and machine. Leuthardt wanted to be part of the coming revolution.

He thought he might devote his year to studying the problem of scarring in mice: over time, the single electrodes that Schwartz and others implanted as part of this work caused inflammatory reactions, or ended up sheathed in brain cells and immobilized. But when Leuthardt and his advisor sat down to map out a plan, the two came up with a better idea. Why not see if they might be able to use a different brain recording technique altogether?

“We were like, ‘Hey, we’ve got humans with electrodes in them all the time!’” Leuthardt says. “Why don’t we just do some experiments with them?”

A surgeon prepares to drill a hole in a patient’s skull to place a laser probe.

A stereotactic frame fixed to a patient’s skull guides a laser probe that pinpoints a location in the brain.


Georgopoulos and Schwartz had collected their data using a technique that relies on microelectrodes next to the cell membranes of individual neurons to detect voltage changes. The electrodes Leuthardt used, which are implanted before surgery in epilepsy patients, were far larger and were placed on the surface of the cortex, under the scalp, on strips of plastic, where they recorded the signals emanating from hundreds of thousands of neurons at the same time. To install them, Leuthardt performed an initial operation in which he removed the top of the skull, cut through the dura (the brain’s outermost membrane), and placed the electrodes directly on top of the brain. Then he connected them to wires that snaked out of the patient’s head in a bundle and plugged into machinery that could analyze the brain signals.

Such electrodes had been used successfully for decades to identify the exact origin in the brain of an epilepsy patient’s intractable seizures. After the initial surgery, the patient stops taking anti-seizure medication, which will eventually prompt an epileptic episode—and the data about its physical source helps doctors like Leuthardt decide which section of the brain to resect in order to forestall future episodes.

But many were skeptical that the electrodes would yield enough information to control a prosthetic. To help find out, Leuthardt recruited Gerwin Schalk, a computer scientist at the Wadsworth Center, a public-health laboratory of the New York State Department of Health. Progress was swift. Within a few years of testing, Leuthardt’s patients had shown the capacity to play Space Invaders—moving a virtual spaceship left and right—simply by thinking. Then they moved a cursor in three-dimensional space on a screen.

In 2006, after a speech on this work at a conference, Schalk was approached by Elmar Schmeisser, a program manager at the U.S. Army Research Office. Schmeisser had in mind something far more complex. He wanted to find out if it was possible to decode “imagined speech”—words not vocalized, but simply spoken silently in one’s mind. Schmeisser, also a science fiction fan, had long dreamed of creating a “thought helmet” that could detect a soldier’s imagined speech and transmit it wirelessly to a fellow soldier’s earpiece.

Laser probe.

Leuthardt recruited 12 bedridden epilepsy patients, confined to their rooms and bored as they waited to have seizures, and presented each one with 36 words that had a relatively simple consonant-vowel-consonant structure, such as “bet,” “bat,” “beat,” and “boot.” He asked the patients to say the words out loud and then to simply imagine saying them—conveying the instructions visually (written on a computer screen), with no audio, and again vocally, with no video, to make sure that he could identify incoming sensory signals in the brain. Then he shipped the data to Schalk for analysis.

Schalk’s software relies on pattern recognition algorithms—his programs can be trained to recognize the activation patterns of groups of neurons associated with a given task or thought. With a minimum of 50 to 200 electrodes, each one producing 1,000 readings per second, the programs must churn through a dizzying number of variables. The more electrodes and the smaller the population of neurons per electrode, the better the chance of detecting meaningful patterns—if sufficient computing power can be brought to bear to sort out irrelevant noise.

“The more resolution the better, but at the minimum it’s about 50,000 numbers a second,” Schalk says. “You have to extract the one thing you are really interested in. That’s not so straightforward.”
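
For a sense of the data volumes Schalk describes, here is a minimal back-of-the-envelope sketch in Python. It only restates the arithmetic in the passage above; the hour-long session length and the 4-byte sample size are illustrative assumptions, not figures from the article.

# A minimal sketch of the data-rate arithmetic Schalk describes. The session
# length and 4-byte sample size are assumptions for illustration only.

N_ELECTRODES = 50          # low end of the 50-200 electrode range
SAMPLES_PER_SECOND = 1000  # each electrode produces 1,000 readings per second

readings_per_second = N_ELECTRODES * SAMPLES_PER_SECOND
print(f"{readings_per_second:,} numbers per second")   # 50,000

# Storing an hour of recording as 4-byte floats (assumed format):
bytes_per_hour = readings_per_second * 3600 * 4
print(f"~{bytes_per_hour / 1e9:.2f} GB per hour")       # ~0.72 GB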

Schalk’s results, however, were surprisingly robust. As one might expect, when Leuthardt’s subjects vocalized a word, the data indicated activity in the areas of the motor cortex associated with the muscles that produce speech. The auditory cortex, and an area in its vicinity long believed to be associated with speech processing, were also active at the exact same moments. Remarkably, there were similar yet slightly different activation patterns even when the subjects only imagined the words silently.

Schalk, Leuthardt, and others involved in the project believe they have found the little voice that we hear in our mind when we imagine speaking. The system has never been perfect: after years of effort and refinements to his algorithms, Schalk’s program guesses correctly 45 percent of the time. But rather than attempt to push those numbers higher (they expect performance to improve with better sensors), Schalk and Leuthardt have focused on decoding increasingly complex components of speech.

In recent years, Schalk has continued to extend the findings on real and imagined speech (he can tell whether a subject is imagining speaking Martin Luther King Jr.’s “I Have a Dream” speech or Lincoln’s Gettysburg Address). Leuthardt, meanwhile, has attempted to push on into the next realm: identifying the way the brain encodes intellectual concepts across different regions.

The data on that effort is not published yet, “but the honest truth is we’re still trying to make sense of it,” Leuthardt says. His lab, he acknowledges, may be approaching the limits of what’s possible using current technologies.

Implanting the future

“The moment we got early evidence that we could decode intentions,” Leuthardt says, “I knew it was on.”

Soon after obtaining those results, Leuthardt took seven days off to write, visualize the future, and think about both short- and long-term goals. At the top of the list of things to do, he decided, was preparing humanity for what’s coming, a job that is still very much in progress.

Leuthardt drills a hole in the skull.

On this control room computer screen, the laser is monitored in real time.

Reclining in a chair in his office after performing surgery, Leuthardt insists that with sufficient funding he could already create a prosthetic implant for a general market that would allow someone to use a computer and control a cursor in three-dimensional space. Users could also do things like turn lights on and off, or turn heat up and down, using their thoughts alone. They might even be able to experience artificially induced tactile sensations and access some rudimentary means of turning imagined speech into text. “With current technology, I could make an implant—but how many people are going to want that now?” he says. “I think it’s very important to take practical, short interval steps to get people moved along the pathway toward this road of the long-term vision.”

To that end, Leuthardt founded NeuroLutions, a company aimed at demonstrating that there is a market, even today, for rudimentary devices that link mind and machine—and at beginning to use the technology to help people. NeuroLutions has raised several million so far, and a noninvasive brain interface for stroke victims who have lost function on one side is currently in human trials.

The device consists of brain-monitoring electrodes that sit on the scalp and are attached to an arm orthosis; it can detect a neural signature for intended movement before the signal reaches the motor area of the brain. The neural signals are on the opposite side of the brain from the area usually destroyed by the stroke—and thus are usually spared any damage. By detecting them, amplifying them, and using them to control a device that moves the paralyzed limb, Leuthardt has found, he can actually help a patient regain independent control over the limb, far faster and more effectively than is possible with any approach currently on the market. Importantly, the device can be used without brain surgery.
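
To make that detect-amplify-trigger loop concrete, here is a deliberately simplified, hypothetical sketch in Python. The sampling rate, the band-power feature, the threshold, and every function name are assumptions chosen for illustration; they are not NeuroLutions’ actual design, which the article does not describe at this level of detail.

# A toy version of the control loop described above: scalp electrodes pick up
# a movement-intention signature, the signal is compared against a resting
# baseline, and a detection triggers a command to the arm orthosis. All
# numbers and names here are illustrative assumptions.

import numpy as np

SAMPLE_RATE = 256          # Hz; a typical scalp-EEG rate (assumed)
WINDOW = SAMPLE_RATE // 2  # analyze half-second windows
THRESHOLD = 2.5            # detection threshold, in baseline standard deviations

def band_power(window):
    """Crude proxy for movement-related activity: mean signal power in the window."""
    return float(np.mean(window ** 2))

def detect_intent(eeg_windows, baseline_mean, baseline_std):
    """Yield True whenever a window's power rises well above the resting baseline."""
    for window in eeg_windows:
        z = (band_power(window) - baseline_mean) / baseline_std
        yield z > THRESHOLD

# Usage with synthetic data standing in for a real acquisition system.
rng = np.random.default_rng(0)
resting = [rng.normal(0, 1, WINDOW) for _ in range(20)]
powers = [band_power(w) for w in resting]
baseline_mean, baseline_std = float(np.mean(powers)), float(np.std(powers))

# Five quiet windows followed by one carrying a much stronger signal.
live = [rng.normal(0, 1, WINDOW) for _ in range(5)] + [rng.normal(0, 3, WINDOW)]
for i, intent in enumerate(detect_intent(live, baseline_mean, baseline_std)):
    if intent:
        print(f"window {i}: intent detected -> command the orthosis to move the hand")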

Though the technology is decidedly modest compared with Leuthardt’s grand designs for the future, he believes this is an area where he can meaningfully transform people’s lives right now. There are about 700,000 new stroke patients in the U.S. each year, and the most common motor impairment is a paralyzed hand. Finding a way to help more of them regain function—and showing that he can do it faster and more effectively—would not only demonstrate the power of brain-computer interfaces but meet a huge medical need.

Leuthardt plans the laser probe’s trajectory with the assistance of a stereotactic navigation system.

Leuthardt’s surgical tools.

Using noninvasive electrodes that sit on the outside of the scalp makes the invention much less off-putting for patients, but it also imposes severe limitations. The voltage signals coming from brain cells may be muffled as they travel through the scalp to reach the sensors, and they may be diffused as they pass through bone. Either makes them harder to detect and their origins harder to interpret.

Leuthardt can achieve far more transformative feats using his implanted electrodes that sit directly on the cortex of the brain. But he has learned through painful experience that elective brain surgery is a tough sell—not just with patients, but with investors as well.

When he and Schalk founded NeuroLutions, in 2008, they hoped to restore movement to the paralyzed by bringing just such an interface to market. But the investment community wasn’t interested. For one thing, neuroscientist-led startups have been testing brain-computer interfaces for more than a decade but have had little success in turning the technology into a viable treatment for paralyzed patients (see “Implanting Hope”). The population of potential patients is limited—at least compared with some of the other conditions being targeted by medical-device startups competing for venture capital. (Roughly 40,000 people in the U.S. have complete quadriplegia.) And most of the tasks that could be accomplished using such an interface can already be handled with noninvasive devices. Even most locked-in patients can still blink an eye or perhaps wiggle a finger. Methods that rely on this residual movement can be used to input data or move a wheelchair without the danger, recovery time, or psychological toll involved in having electrodes implanted directly on one’s cortex.

So after their initial fund-raising efforts failed, Leuthardt and Schalk set their sights on a more modest goal: the noninvasive stroke-rehabilitation device now in trials. Unexpectedly, they found that many patients continued to recover additional function even after the orthosis was removed—extending to, for instance, fine motor control of their fingers. Often, it turned out, all the patients needed was a little push. Then, once new neural pathways were established, the brain continued to remodel and expand them so that they could convey more complex motor commands to the hand.

The initial success in these patients, Leuthardt hopes, will encourage some to move on to a more robust invasive system. “A couple years down the road you might say, ‘You know what? For that noninvasive version, you can get this much benefit, but I think that now, given the science that we know and everything, we can give you this much more benefit,’” he says. “We can enhance your function even more.”

Leuthardt is so eager for the world to share his passion for the technology’s potentially transformative effects that he has also sought to engage the public through art. In addition to writing his novels and play, he is working on a podcast and YouTube series with a fellow neurosurgeon, in which the two discuss technology and philosophy over coffee and doughnuts.

In Leuthardt’s first book, RedDevil 4, one character uses his “cortical prosthetic” to experience hiking the Himalayas while sitting on his couch. Another, a police detective, confers telepathically with a colleague about how to question a murder suspect standing right in front of them. Every character has instant access to all the knowledge in the world’s libraries—can access it as quickly as a person can think any spontaneous thought. No one ever has to be alone, and our bodies no longer limit us. On the flip side, everyone’s brain is vulnerable to computer viruses that can turn people into psychopaths.

Leuthardt acknowledges that at present, we still lack the power to record and stimulate the number of neurons it would take to replicate these visions. But he claims his conversations with some Silicon Valley investors have only fueled his optimism that we’re on the brink of an innovation explosion.

Schalk is a little less sanguine. He’s skeptical that Facebook, Musk, and others are adding much of their own to the quest for a better interface.

“They are not going to do anything different than the scientific community by itself,” Schalk says. “Maybe something is going to come of it, but it’s not like they have this new thing that nobody else has.”

Schalk says it’s “very, very obvious” that in the next five to 10 years some form of brain-computer interface will be used to rehabilitate victims of strokes, spinal cord injuries, chronic pain, and other disorders. But he compares the current recording techniques to the IBM computers of the 1960s, saying that they are now “archaic.” For the technology to reach its true long-term potential, he believes, a new sort of brain-scanning technology will be needed—something that can read far more neurons at once.

“What you really want is to be able to listen to the brain and talk to the brain in a way that the brain cannot distinguish from the way it communicates internally, and we can’t do that right now,” Schalk says. “We really don’t know how to do it at this point. But it’s also obvious to me that it is going to happen. And if and when that happens, our lives are going to change, and our lives are going to change in a way that is completely unprecedented.”

Where and when the breakthroughs will come from is unclear. After decades of research and progress, many of the same technological challenges remain daunting. Still, the progress in neuroscience and computer hardware and software makes the outcome—at least to true believers—inevitable.

At the very least, says Leuthardt, the buzz emanating from Silicon Valley has generated “real excitement and real thinking about brain-computer interfaces being a practical reality.” That, he says, is “something we haven’t seen before.” And though he acknowledges that if this turns out to be hype it could “set the field back a decade or two,” nothing, he believes, will stop us from reaching the ultimate goal: a technology that will allow us to transcend the cognitive and physical limitations previous generations of humankind have taken for granted.

“It’s going to happen,” he insists. “This has the potential to alter the evolutionary direction of the human race.”

Adam Piore is the author of The Body Builders: Inside the Science of the Engineered Human, a book about bioengineering published last March.

The Psychology of The Walking Dead—The Appeal of Post-Apocalyptic Stories

by Dr. Donna Roberts

 

The Story

I’m not a Walking Dead fan, which is surprising because I love binging on TV series and I loved horror movies as a teenager. Or maybe, more accurately, I loved watching horror movies with my girlfriends. I was in high school when the Friday the 13th and Halloween series came out, and we frequently headed to the theater in a group, where we huddled together in our seats and clutched each other frantically as we screamed at all the shocking surprises. Good times. But I can’t say that my love for the genre persisted into adulthood.

When I saw that some Facebook friends from high school and current colleagues were TWD fans, I knew I had to give it a try. It just didn’t gel with me at the time. Maybe it still will. Timing is everything.

Though I wasn’t compelled to keep watching the series, I am fascinated enough with dissecting the human condition and the psychology of popular culture to know when I have a gem of some sort in my midst. I am a believer that life imitates art imitates life in a chicken-and-egg circularity. Beginning with its third season, The Walking Dead attracted the most 18- to 49-year-old viewers of any cable or broadcast television series. That’s a pretty wide range of viewers that no marketing segmentation plan would usually put together. It was even well received by critics.

So, the popularity of TWD is enough to make me want to put it on the proverbial couch and see what it has to say.

*

Psych Pstuff’s Summary

Turns out that since the beginning of humanity, or at least since we’ve been writing about it, we’ve been contemplating the end of humanity. From Bible stories to campfire stories, we revel in envisioning the ultimate destruction of the world as we know it, and what ensues in the aftermath.

In 2012, the Daily Mail published results of a survey that polled 16,262 people in more than 20 countries. The results indicated that 22% of Americans believed the world would end in their lifetime, with 10% thinking the apocalypse was coming that very year. Certainly, if this is your mindset, then it is only logical to be a wee bit obsessed with what might be in store for you.

Actually, skipping only a few years here and there, predictions of the end of the world have occurred for almost every year since 1910 and there are plenty more scheduled for the future. Historically, even various scientists have weighed in with estimates of cataclysmic destruction that would endanger human existence, though their dates typically range from a comfortable 300,000 to 22 billion years from now. However, given the instability of both climate and the political landscape, more do seem to be cropping up with sooner best-before dates.

The media, including broadcast journalism, popular talk shows, documentaries and fictionalized productions, have always played a role in our apocalyptic obsession. Adding a twist to the usual plot of following the experiences of survivors, beginning in 2009 the History Channel aired a two-season (20-episode) series in which experts speculated on how the earth would evolve after the demise of humans. With its ominous opening line, “Welcome to Earth … Population: Zero,” it captured the morbid fascination of 5.4 million viewers, making it the most watched program in the history of the History Channel.

From 2011 to 2014 the National Geographic channel ran a reality show, Doomsday Preppers, that profiled real survivalists preparing for various scenarios of the end of civilization. While some critics called it absurd and exploitative, it was the most watched and highest rated show in the history of the network.

Typically, there are only a few oft-repeated variations on the theme—the deadly virus, the meteor strike, nuclear devastation and, the newest kid on the block, the “gray goo” scenario where nanotechnology runs amok and robots commit ecophagy. TWD in particular, and the zombie craze in general, seems to be the latest, and rather enduring, fascination with all things apocalyptic. Now in its 8th season, the show seems as strong as ever. The review site Rotten Tomatoes concludes, “Blood-spattered, emotionally resonant, and white-knuckle intense, The Walking Dead puts an intelligent spin on the overcrowded zombie subgenre.”

But just why do we engage in so much pursuit of these devastating what-ifs?

In one respect, the contemplation of ever-increasing disaster scenarios is just a gradual slippery slope from very functional, and necessary, learned behavior. From the time we are children, through both direct experience and the hypothetical, we learn cause-effect relationships, and thus how to avoid unpleasant and dangerous consequences. We learn not to touch the hot stove or play in traffic. We learn to think ahead and anticipate possible consequences. But in learning these, we also come to understand that there are some things that happen that you can’t anticipate. Sometimes life turns on a dime. Sometimes disasters happen. Sometimes the world runs amok and all you can do is deal with the aftermath.

Enter the captivating world of the post-apocalypse.

Another cognitive construct that feeds our fascination with these doomsday scenarios is the need to combat feelings of powerlessness and mistrust of those in power. There’s nothing like all-out devastation to level the proverbial playing field.


There is also a surreal romanticizing of the post-apocalyptic world. Taking us back to the basics of human survival releases us from the complex entanglements and overbearing demands of the modern world, if only for that short time of suspended disbelief.

Child psychiatrist and author of Zombie Autopsies, Steven Schlozman, M.D., notes, “All of this uncertainty and all of this fear comes together and people think maybe life would be better after a disaster. I talk to kids in my practice and they see it as a good thing. They say, ‘life would be so simple—I’d shoot some zombies and wouldn’t have to go to school.’” Similarly, he recounts the following statement from another teenager, “Dude—a zombie apocalypse would be so cool. No homework, no girls, no SATs. Just make it through the night, man … make it through the night.”

While in reality we might not share the exuberance of these kids or long for a disaster to avoid another work deadline, we can sometimes fantasize about a simpler world where our true strengths are utilized and appreciated. Our brains are always seeking a solution to what is plaguing us (pun intended) and causing anxiety. When no plausible solution is readily available we can resort to more fantastical scenarios. Projecting ourselves into future worlds, where life can be better and we can be better, is akin to reverse nostalgia.

The power and endurance of TWD lies not in its clichéd deadly virus plotline, but instead in the development of characters who touch us on a deeper level. While the circumstances are surreal, the resilience of the characters in the face of total devastation and imminent threat to survival can reflect something much more real, and more universal. As John Russo, co-creator of the genre’s predecessor Night of the Living Dead, noted, “It has important things to say about the human condition, which is one of frailty and nobility, weakness and courage, fear and hope, good and evil. These are the enduring puzzles and enigmas of our existence, and we can delve into them and learn from them vicariously when we sit down to watch The Walking Dead.”

What more could you ask for from any form of entertainment?

I think I just might give Season 2 a try.