Facebook’s Red Herring

Cambridge Analytica collected data on over 50 million Facebook users without their consent. How? Dr. Aleksander Kogan, a psychology professor at Cambridge University, built a survey on Facebook that many users participated in. However, only 270,000 of those participants consented to their data being used, and according to the New York Times that consent was for “academic purposes” only. Cambridge Analytica told the Times that it did receive the data, but blamed Mr. Kogan for violating Facebook’s terms. Facebook claims that when it learned of this violation it took the app off its site and demanded that all the data Cambridge Analytica acquired be deleted. Facebook now believes that data was never destroyed. The problem described is an issue of data LEAVING Facebook without consumer consent, not data going into Facebook.

Yesterday, Facebook announced that it will sever all ties with third-party data partners to protect consumer privacy. Some privacy proponents believe this is a great step toward protecting consumers’ privacy. In my opinion, it is a red herring that has nothing to do with the Cambridge Analytica scandal, nor has Facebook done anything substantial with this decision to protect your privacy. Let’s start with some standard definitions before I explain why this is nothing other than a distraction.

Third-party data is used by marketers to help them create marketing messages that are more relevant to a target segment. Segmentation is when a marketer defines a group of consumers (current customers or prospects) who share similar attributes. Marketers then send the same message to that group (a different message for every individual would not be practical). For example, a segment could look something like “people aged 25-45, with young children in the household and an interest in skiing”.
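
In code terms, a segment like this is just a filter over consumer records. Here is a minimal sketch in Python (the records and field names are hypothetical examples, not real data or any platform’s actual schema):

```python
# Toy example of audience segmentation: a segment is a predicate applied
# to consumer records. All records and field names here are hypothetical.

consumers = [
    {"name": "A", "age": 34, "kids_under_10": True,  "interests": {"skiing", "hiking"}},
    {"name": "B", "age": 52, "kids_under_10": False, "interests": {"golf"}},
    {"name": "C", "age": 29, "kids_under_10": True,  "interests": {"skiing"}},
]

def in_segment(c):
    # "people aged 25-45, with young children and an interest in skiing"
    return 25 <= c["age"] <= 45 and c["kids_under_10"] and "skiing" in c["interests"]

segment = [c["name"] for c in consumers if in_segment(c)]
print(segment)  # ['A', 'C']
```

Real segmentation runs over millions of records inside databases and ad platforms, but the logic reduces to predicates like this one.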

Third-party data comes from many sources (there are thousands of them). For example, demographic information, a description of the individual such as age, income, marital status, or children in the home, is largely public information that can be gathered from sources such as the census. Another type of third-party data is behavioral data, which can include the types of interests you have. For example, if you have a subscription to a golf magazine or a hiking magazine, that magazine has likely sold its subscriber list and associated you with an interest in that category. Behavioral data is also collected by observing what content you click on, read or share socially; if you share an article on Aspen ski vacations, you are likely categorized as someone with an interest in skiing. Purchase data is collected on individuals as well, and in many ways. For instance, whenever you buy anything with a warranty that you register (like consumer electronics), your personal information is associated with that purchase. Other sources of purchase data are credit card companies, banks and credit bureaus. Your purchases can be analyzed, and assumptions can then be inferred about other consumers who have attributes similar to yours (this is called look-alike, or LAL, in the data industry). Your actual personal information tied to your specific transactions is NOT available on the market to buy (unless there has been a data breach at one of these organizations, in which case your information could be on the darknet). In combination, all these data sources can be used to infer assumptions about you and others to improve advertisement targeting.
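
The look-alike (LAL) idea can be illustrated with a toy similarity score: rank prospects by how much their attributes overlap with a seed set of known customers. This is a hedged sketch only; the attributes are invented and real LAL systems use far richer features and machine-learned models:

```python
# Toy "look-alike" (LAL) expansion: score prospects by average attribute
# overlap (Jaccard similarity) with known customers. All data is invented.

customers = [
    {"skiing", "young_kids", "age_25_45"},
    {"skiing", "age_25_45", "hiking"},
]

prospects = {
    "P1": {"skiing", "age_25_45", "golf"},
    "P2": {"golf", "age_55_plus"},
}

def jaccard(a, b):
    """Overlap of two attribute sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

def lal_score(attrs):
    # Average similarity to the seed customer set.
    return sum(jaccard(attrs, c) for c in customers) / len(customers)

ranked = sorted(prospects, key=lambda p: lal_score(prospects[p]), reverse=True)
print(ranked)  # ['P1', 'P2']
```

P1 shares skiing and the age band with the seed customers, so it scores higher than P2, who shares nothing; an advertiser would then target the top-scoring prospects.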

Bringing thousands of data sources together would be nearly impossible for every marketer to vet for privacy compliance, much less analyze and transform into the elements and models needed for audience segmentation. Therefore, data aggregators such as Acxiom, Experian, Equifax, TransUnion, Oracle, Cardlytics and many others step in. They aggregate multiple data sources so the data can be easily transformed into elements and models that enable segmentation. Some aggregators are much better than others at ethically sourcing their data. The best ones vet data sources through their privacy departments and policies to ensure consumers gave consent for the data to be included in the elements and models created for marketing use. Furthermore, the best aggregators are transparent with consumers, allowing them to see what data is held on them and providing the ability to opt out. Finally, the more credible aggregators will not source data in sensitive categories such as sexual orientation or health indications.

When marketers leverage social media or digital publishers to advertise, they can select elements and models supplied by aggregators to narrow the target audience. Marketers don’t want to advertise baby products to people who are most likely not parents, nor do they want to push golf products on someone who is likely never going to be a golfer. Furthermore, if marketers were not able to target audiences, advertising costs would become unrealistic for all digital media, and those increased costs would need to be absorbed somewhere. Possibilities include increased prices on goods or subscriber fees to use social platforms, which would be wildly unpopular with consumers.

Facebook’s announcement yesterday was that it will no longer partner with data aggregators to enable marketer targeting on the platform. Regardless of how you feel about third-party data for targeting, this has nothing to do with the Cambridge Analytica controversy. The Cambridge Analytica issue was about Facebook’s user data going out of Facebook to be analyzed and used without consent from the user. Facebook’s recent action targets aggregators of data going into the platform. In my opinion, this is a red herring meant to distract consumers who do not understand what Facebook is really doing.

In my March 2018 article, When Advertisements Become Too Personal, I noted how Facebook leverages trackers that passively surveil consumers and collect that data. Facebook conducts this surveillance without your conscious knowledge, since you likely agreed to it in the terms of service and privacy policies on its platform and on advertiser websites. Consumers don’t have the time to read through long privacy policies and terms and conditions. Furthermore, if consumers don’t agree to a digital property’s terms, they often can’t do business on or use the platform. Advertisers can still target on Facebook without aggregator data being available on the platform. For example, a Facebook Custom Audience is a targeting option created from an advertiser-owned customer list, allowing the advertiser to target those users on Facebook. Facebook Pixel is a small piece of code for websites that allows the website owner AND Facebook to log any Facebook user who visits. Facebook also tracks the kind of content you share, who you are friends with, what your friends share, what you like, what you talk about in Messenger, and what you share on Instagram and other Facebook-owned properties. Facebook can aggregate all that data itself to create targeting tools.

Facebook’s own passive surveillance of us across all its platforms, messaging applications, other websites and even the texts on our phones (if you haven’t locked down those permissions) is a much larger concern in my opinion. Instead, Facebook is distracting consumers with this announcement into thinking it is taking a huge step to protect consumer privacy, when in fact the data it has collected and continues to collect on consumers is much more unsettling.

Hard Questions: Is Spending Time on Social Media Bad for Us?


By David Ginsberg, Director of Research, and Moira Burke, Research Scientist at Facebook

With people spending more time on social media, many rightly wonder whether that time is good for us. Do people connect in meaningful ways online? Or are they simply consuming trivial updates and polarizing memes at the expense of time with loved ones?

These are critical questions for Silicon Valley — and for both of us. Moira is a social psychologist who has studied the impact of the internet on people’s lives for more than a decade, and I lead the research team for the Facebook app. As parents, each of us worries about our kids’ screen time and what “connection” will mean in 15 years. We also worry about spending too much time on our phones when we should be paying attention to our families. One of the ways we combat our inner struggles is with research — reviewing what others have found, conducting our own, and asking questions when we need to learn more.

A lot of smart people are looking at different aspects of this important issue. Psychologist Sherry Turkle asserts that mobile phones redefine modern relationships, making us “alone together.” In her generational analyses of teens, psychologist Jean Twenge notes an increase in teen depression corresponding with technology use. Both offer compelling research.

But it’s not the whole story. Sociologist Claude Fischer argues that claims that technology drives us apart are largely supported by anecdotes and ignore the benefits. Sociologist Keith Hampton’s study of public spaces suggests that people spend more time in public now — and that cell phones in public are more often used by people passing time on their own, rather than ignoring friends in person.

We want Facebook to be a place for meaningful interactions with your friends and family — enhancing your relationships offline, not detracting from them. After all, that’s what Facebook has always been about. This is important because we know that a person’s health and happiness rely heavily on the strength of their relationships.

In this post, we want to give you some insights into how the research team at Facebook works with our product teams to incorporate well-being principles, and review some of the top scientific research on well-being and social media that informs our work. Of course, this isn’t just a Facebook issue — it’s an internet issue — so we collaborate with leading experts and publish in the top peer-reviewed journals. We work with scientists like Robert Kraut at Carnegie Mellon; Sonja Lyubomirsky at UC Riverside; and Dacher Keltner, Emiliana Simon-Thomas, and Matt Killingsworth from the Greater Good Science Center at UC Berkeley; and we have partnered closely with mental health clinicians and organizations like Save.org and the National Suicide Prevention Lifeline.

What Do Academics Say? Is Social Media Good or Bad for Well-Being?

According to the research, it really comes down to how you use the technology. For example, on social media, you can passively scroll through posts, much like watching TV, or actively interact with friends — messaging and commenting on each other’s posts. Just like in person, interacting with people you care about can be beneficial, while simply watching others from the sidelines may make you feel worse.

The bad: In general, when people spend a lot of time passively consuming information — reading but not interacting with people — they report feeling worse afterward. In one experiment, University of Michigan students randomly assigned to read Facebook for 10 minutes were in a worse mood at the end of the day than students assigned to post or talk to friends on Facebook. A study from UC San Diego and Yale found that people who clicked on about four times as many links as the average person, or who liked twice as many posts, reported worse mental health than average in a survey. Though the causes aren’t clear, researchers hypothesize that reading about others online might lead to negative social comparison — and perhaps even more so than offline, since people’s posts are often more curated and flattering. Another theory is that the internet takes people away from social engagement in person.

The good: On the other hand, actively interacting with people — especially sharing messages, posts and comments with close friends and reminiscing about past interactions — is linked to improvements in well-being. This ability to connect with relatives, classmates, and colleagues is what drew many of us to Facebook in the first place, and it’s no surprise that staying in touch with these friends and loved ones brings us joy and strengthens our sense of community.

A study we conducted with Robert Kraut at Carnegie Mellon University found that people who sent or received more messages, comments and Timeline posts reported improvements in social support, depression and loneliness. The positive effects were even stronger when people talked with their close friends online. Simply broadcasting status updates wasn’t enough; people had to interact one-on-one with others in their network. Other peer-reviewed longitudinal research and experiments have found similar positive benefits between well-being and active engagement on Facebook.

In an experiment at Cornell, stressed college students randomly assigned to scroll through their own Facebook profiles for five minutes experienced boosts in self-affirmation compared to students who looked at a stranger’s Facebook profile. The researchers believe self-affirmation comes from reminiscing on past meaningful interactions — seeing photos they had been tagged in and comments their friends had left — as well as reflecting on one’s own past posts, where a person chooses how to present themselves to the world.

In a follow-up study, the Cornell researchers put other students under stress by giving them negative feedback on a test and then gave them a choice of websites to visit afterward, including Facebook, YouTube, online music and online video games. They found that stressed students were twice as likely to choose Facebook to make themselves feel better as compared with students who hadn’t been put under stress.

In sum, our research and other academic literature suggests that it’s about how you use social media that matters when it comes to your well-being.


So what are we doing about it?

We’re working to make Facebook more about social interaction and less about spending time. As our CEO Mark Zuckerberg recently said, “We want the time people spend on Facebook to encourage meaningful social interactions.” Facebook has always been about bringing people together — from the early days when we started reminding people about their friends’ birthdays, to showing people their memories with friends using the feature we call “On This Day.” We’re also a place for people to come together in times of need, from fundraisers for disaster relief to groups where people can find an organ donor. We’re always working to expand these communities and find new ways to have a positive impact on people’s lives.

We employ social psychologists, social scientists and sociologists, and we collaborate with top scholars to better understand well-being and work to make Facebook a place that contributes in a positive way. Here are a few things we’ve worked on recently to help support people’s well-being.

News Feed quality: We’ve made several changes to News Feed to provide more opportunities for meaningful interactions and reduce passive consumption of low-quality content — even if it decreases some of our engagement metrics in the short term. We demote things like clickbait headlines and false news, even though people often click on those links at a high rate. We optimize ranking so posts from the friends you care about most are more likely to appear at the top of your feed because that’s what people tell us in surveys that they want to see. Similarly, our ranking promotes posts that are personally informative. We also recently redesigned the comments feature to foster better conversations.

Snooze: People often tell us they want more say over what they see in News Feed. Today, we launched Snooze, which gives people the option to hide a person, Page or group for 30 days, without having to permanently unfollow or unfriend them. This will give people more control over their feed and hopefully make their experience more positive.

Take a Break: Millions of people break up on Facebook each week, changing their relationship status from “in a relationship” to “single.” Research on people’s experiences after breakups suggests that offline and online contact, including seeing an ex-partner’s activities, can make emotional recovery more difficult. To help make this experience easier, we built a tool called Take a Break, which gives people more centralized control over when they see their ex on Facebook, what their ex can see, and who can see their past posts.

Suicide prevention tools: Research shows that social support can help prevent suicide. Facebook is in a unique position to connect people in distress with resources that can help. We work with people and organizations around the world to develop support options for people posting about suicide on Facebook, including reaching out to a friend, contacting help lines and reading tips about things they can do in that moment. We recently released suicide prevention support on Facebook Live and introduced artificial intelligence to detect suicidal posts even before they are reported. We also connect people more broadly with mental health resources, including support groups on Facebook.


What About Related Areas Like Digital Distraction and the Impact of Technology on Kids?

We know that people are concerned about how technology affects our attention spans and relationships, as well as how it affects children in the long run. We agree these are critically important questions, and we all have a lot more to learn.

That’s why we recently pledged $1 million toward research to better understand the relationship between media technologies, youth development and well-being. We’re teaming up with experts in the field to look at the impact of mobile technology and social media on kids and teens, as well as how to better support them as they transition through different stages of life.

We’re also making investments to better understand digital distraction and the factors that can pull people away from important face-to-face interactions. Is multitasking hurting our personal relationships? How about our ability to focus? Next year we’ll host a summit with academics and other industry leaders to tackle these issues together.

We don’t have all the answers, but given the prominent role social media now plays in many people’s lives, we want to help elevate the conversation. In the years ahead we’ll be doing more to dig into these questions, share our findings and improve our products. At the end of the day, we’re committed to bringing people together and supporting well-being through meaningful interactions on Facebook.

Your Data and “Those Pictures” Are Less Secure Than You Think….


My writing lately has revolved around media, technology, the use of data and the consequential psychological impacts. However, in a conversation with my friend Michael Becker of Identity Praxis, he urged me to write about the fundamentals of securing Personally Identifiable Information (PII). According to Michael, personal data privacy is “the new luxury good”, and we have all heard about the malicious hackers who find creative ways to steal it. Mismanagement of identity and personal information, for the individual and company alike, can lead to reputation damage, debt, criminal records, loss of income, damaged employment prospects, and yes, even death. Those of us “non-techies”, when thinking about security on our devices, often default to, “I have antivirus software on my computer, so I am good”. Well congratulations, I’m sure that hacker from who knows where has never gotten past antivirus software. Those questionable pictures of you at your bachelorette party are completely safe and your privacy is protected, NOT (Wayne’s World reference). For your reading pleasure, below are actions, recommended by Michael and explained by me, that you can take to protect your devices from being compromised and unleashing holy hell on you personally.

Begin by using common sense before sharing your PII. This doesn’t involve buying expensive software; it requires taking an extra two seconds to think before acting. Consider the trustworthiness of a website, mobile site or application before sharing your personal data with it – if something seems suspicious, don’t share. Furthermore, don’t complete a transaction online or in a phone app if you don’t feel it is secure. Either call the company or go to a different site where you can order the same product. With email, if you don’t know the sender and they ask you to click on a link, it could be a phishing attack that grabs data off your computer. Don’t just look at the sender name or the link text; look at the actual email address and the URL within the link, as the display name can be used to mask a malicious address. Sorry, that email you received from a stranger asking for your SSN and credit card information to redeem your grand prize is likely about as real as the Easter Bunny.
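
The advice to check the real URL behind a link’s display text can even be mechanized. Here is a small illustrative check using Python’s standard library (the domains below are made-up examples, and real phishing detection involves far more signals than this):

```python
# Sketch: flag links whose visible text names a site the href doesn't
# actually point to. Domains are made-up examples, not real sites.
from urllib.parse import urlparse

def normalize(domain_text):
    """Lowercase and strip scheme/'www.' so domains compare cleanly."""
    t = domain_text.lower().strip().rstrip("/")
    for prefix in ("https://", "http://", "www."):
        t = t.removeprefix(prefix)  # Python 3.9+
    return t

def looks_suspicious(display_text, href):
    """True when the display text is a domain that the href doesn't match."""
    shown = normalize(display_text)
    real = normalize(urlparse(href).hostname or "")
    if "." not in shown:
        return False  # display text isn't a domain; nothing to compare
    # Allow exact match or a subdomain of the shown domain.
    return not (real == shown or real.endswith("." + shown))

print(looks_suspicious("mybank.com", "https://www.mybank.com/login"))      # False
print(looks_suspicious("mybank.com", "http://mybank.com.evil.example/x"))  # True
```

The second link is exactly the trick described above: the familiar domain appears at the front of the hostname, but the link actually resolves to a different site.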

According to an article in the Telegraph last year, more than 50 percent of people use at least one of the top 25 passwords, and almost 17 percent use the password “123456” (wasn’t this the combination in “Spaceballs”?). When creating passwords, the best practice is to include capitals and special characters and to use a different username and password for each account. The reality is that with all the accounts we have now, it is tough to keep track of it all, so we all pick a favorite username and password for everything. Therefore, if a hacker figures out the credentials to one account, they will likely work on several others. Password managers such as LastPass or 1Password are good programs that can make your life easier. A password manager is an application that stores all your different usernames and passwords and opens with one master password. They also often can autofill login credentials on websites. What’s nice about this feature is that it is obviously faster and more accurate, but it also protects against keylogging attacks. Password managers can also detect whether you are on the right URL, which helps protect you from phishing sites. Some of them even have random password generators so you don’t have to think of new passwords for every account. DO NOT use the autofill features available in your browser; these are not secure! Finally, enable two-factor authentication (either SMS or an app) on accounts that support it, e.g. banking and retailer sites. I know it is a pain in the ass, but so is having your bank account drained or your social media account hacked.
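
The random-password feature is easy to approximate with Python’s standard library. This is a simplified sketch of what such a generator does, not any specific product’s implementation:

```python
# Minimal random password generator, similar in spirit to the feature
# password managers offer. Uses the secrets module, which is designed
# for cryptographic use (unlike the random module).
import secrets
import string

SPECIALS = "!@#$%^&*"

def generate_password(length=16):
    alphabet = string.ascii_letters + string.digits + SPECIALS
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until the best-practice mix is present: at least one
        # capital, one digit and one special character.
        if (any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in SPECIALS for c in pw)):
            return pw

pw = generate_password()
print(len(pw))  # 16
```

Each password comes from a cryptographically secure source, so unlike a “favorite password”, guessing one account’s credentials tells an attacker nothing about the others.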

With the Equifax breach last year, most of us have at least heard about the risks from news coverage. However, most people think there are only three major credit bureaus (go ahead, name them in your head…). BUT NO, Michael reminded me there are in fact FOUR. Make sure to visit all four major credit bureaus to freeze your credit (TransUnion, Equifax, Experian, and Innovis). Freezing your credit stops any credit inquiries on you, which stops anyone from opening a credit account without your knowledge. When freezing your credit, you will receive a PIN code from each bureau to “unfreeze” it should you need a company to run your credit, perhaps for a loan. Keep those PIN codes in a protected place (how about that password manager above?). While I know some people are concerned about the inconvenience of needing to unfreeze credit when applying for legitimate credit – it can act as a loan deterrent. True story: my husband and I were considering a larger purchase that required a credit application and then never did it because of the time it would take to unfreeze our credit, but I digress.

Put it on your calendar to check your credit score annually. You can go directly to the credit bureaus and get the reports for free, or use companies like Credit Karma, Credit Sesame, or Quizzle (each offers different services). You might also want to consider cyber/identity insurance and darknet monitoring services. The darknet is a hidden layer built on top of the Internet, designed specifically for anonymity, whose biggest use is peer-to-peer file sharing. You can only access the darknet with special tools and software, so most of us can’t see what data is on there about us. Besides monetary compensation and support in the case of identity theft, this type of service will alert you to the kinds of things you wouldn’t otherwise know about, such as an unauthorized USPS address change. There are a number of companies like LifeLock, Identity Guard and Experian that offer this service, and I recommend you check out this PC Magazine article on the subject.

Yes, I know my introduction started with a rant about how antivirus software will not protect you from everything, but YES, YOU STILL NEED IT. PC Magazine recently tested the best antivirus software, and the reviews can be seen here. However, antivirus software should not be your last line of defense. For example, antivirus software doesn’t always protect against malware, and what if you lose your laptop? Encryption solutions prevent access to your files (remember those pictures?). On a Mac you can use the FileVault feature, and for Windows, PC Magazine recently wrote a review of the best encryption software for 2018.

Running your computers on the latest operating system and paying attention to those annoying notifications for OS updates can stave off major attacks (my husband, a former systems administrator, is rolling his eyes right now because I used to ignore them). According to a Popular Science article, an update released two months before the WannaCry malware attack protected users from it. The same article calls out the importance of selecting a good email provider and mentions Google and Microsoft as smart choices, since they filter many suspicious emails (but not all) before they get to your inbox.

Make sure to password-protect your home Wi-Fi router (yes, I know people who don’t) and use a VPN when you are connecting to a public Wi-Fi network, such as at an airport, hotel or your nearest Starbucks. You can also consider installing a cybersecurity hub on your home network, such as Bitdefender Box, Fing or Cujo. These tools monitor and block suspicious Internet traffic coming from any of your connected devices (and they often come with a virus protection software package). I also like that those mentioned come with parental controls, allowing you to block offensive websites, limit social media and control Internet access by device. What I really liked about Bitdefender is that it has features to detect cyberbullying and online predators.

Identity theft is big business, affecting more than 15 million consumers with fraud losses of $16 billion in 2016, according to an identity fraud study released by Javelin Strategy and Research in 2017. Digitally connected consumers, defined as those who “have extensive social network activity, frequently shop online or with mobile devices, and are quick to adopt new digital technologies”, are at a 30 percent higher risk of identity fraud than the average person. The costs associated with the above suggestions range from free to a few hundred dollars, which could likely be offset by avoiding a couple of unnecessary purchases. Will it take some time? A few hours per year, maybe, but the return on that effort outpaces the same number of hours you already spend checking your social media or reading the latest salacious story about identity theft or privacy invasion that stresses you out.

You Should Never, Ever Argue With Anyone on Facebook, According to Science

New research shows that how we interact makes a huge difference.

Source: You Should Never, Ever Argue With Anyone on Facebook, According to Science

By Minda Zetlin

You’ve seen it happen dozens if not hundreds of times. You post an opinion, or a complaint, or a link to an article on Facebook. Somebody adds a comment, disagreeing (or agreeing) with whatever you posted. Someone else posts another comment disagreeing with the first commenter, or with you, or both. Then others jump in to add their own viewpoints. Tempers flare. Harsh words are used. Soon enough, you and several of your friends are engaged in a virtual shouting match, aiming insults in all directions, sometimes at people you’ve never even met.

There’s a simple reason this happens, it turns out: We respond very differently to what people write than to what they say – even if those things are exactly the same. That’s the result of a fascinating new experiment by UC Berkeley and University of Chicago researchers. In the study, 300 subjects either read, watched video of, or listened to arguments about such hot-button topics as war, abortion, and country or rap music. Afterward, subjects were interviewed about their reactions to the opinions with which they disagreed.

Their general response was probably familiar to anyone who’s ever discussed politics: a broad belief that people who don’t agree with you are either too stupid or too uncaring to know better. But there was a distinct difference between those who had watched or listened to someone speak the words out loud and those who had read the identical words as text. Those who listened to or watched someone say the words were less likely to dismiss the speaker as uninformed or heartless than those who had only read the commenter’s words.

That result was no surprise to at least one of the researchers, who was inspired to try the experiment after a similar experience of his own. “One of us read a speech excerpt that was printed in a newspaper from a politician with whom he strongly disagreed,” researcher Juliana Schroeder told the Washington Post. “The next week, he heard the exact same speech clip playing on a radio station. He was shocked by how different his reaction was toward the politician when he read the excerpt compared to when he heard it.” Whereas the written comments seemed outrageous to this researcher, the same words spoken out loud seemed reasonable.

We’re using the wrong medium.

This research suggests that the best way for people who disagree with each other to work out their differences and arrive at a better understanding or compromise is by talking to each other, as people used to do at town hall meetings and over the dinner table. But now that so many of our interactions take place over social media, chat, text message, or email, spoken conversation or discussion is increasingly uncommon. It’s probably no coincidence that political disagreement and general acrimony have never been greater. Russians used this speech-vs.-text disharmony to full advantage by creating Facebook and Twitter accounts to stir up even more ill will among Americans than we already had on our own. No wonder they were so successful at it.

So what should you do about it? To begin with, if you want to make a persuasive case for your political opinion or proposed action, you’re better off doing it by making a short video (or linking to one by someone else) rather than writing out whatever you have to say. At the same time, whenever you’re reading something someone else wrote that seems outlandish to you, keep in mind that the fact that you’re seeing this as text may be part of the problem. If it’s important for you to be objective, try reading it out loud or having someone else read it to you.

Finally, if you’re already in the middle of an argument over Facebook (or Twitter, or Instagram or email or text), and the person on the other side of the issue is someone you care about, please don’t just keep typing out comments and replies and replies to replies. Instead, make a coffee date so you can speak in person. Or at the very least, pick up the phone.

Giving your child a smartphone is like giving them a gram of cocaine, says top addiction expert

Ofcom figures suggest more than four in ten parents of 12-15 year-olds find it hard to control their children’s screen time


Harley Street clinic director Mandy Saligari says many of her patients are 13-year-old girls who see sexting as ‘normal’

Rachael Pells, Education Correspondent, Wednesday 7 June 2017

Source: Giving your child a smartphone is like giving them a gram of cocaine, says top addiction expert

Giving your child a smartphone is like “giving them a gram of cocaine”, a top addiction therapist has warned.

Time spent messaging friends on Snapchat and Instagram can be just as dangerously addictive for teenagers as drugs and alcohol, and should be treated as such, school leaders and teachers were told at an education conference in London.

Speaking alongside experts in technology addiction and adolescent development, Harley Street rehab clinic specialist Mandy Saligari said screen time was too often overlooked as a potential vehicle for addiction in young people.

“I always say to people, when you’re giving your kid a tablet or a phone, you’re really giving them a bottle of wine or a gram of coke,” she said.

“Are you really going to leave them to knock the whole thing out on their own behind closed doors?

“Why do we pay so much less attention to those things than we do to drugs and alcohol when they work on the same brain impulses?”

Her comments follow news that children as young as 13 are being treated for digital technology addiction – with a third of British children aged 12-15 admitting they do not have a good balance between screen time and other activities.

“When people tend to look at addiction, their eyes tend to be on the substance or thing – but really it’s a pattern of behaviour that can manifest itself in a number of different ways,” Ms Saligari said, naming food obsessions, self-harm and sexting as examples.

Concern has grown recently over the number of young people seen to be sending or receiving pornographic images, or accessing age-inappropriate content online through their devices.

Ms Saligari, who heads the Harley Street Charter clinic in London, said around two-thirds of her patients were 16- to 20-year-olds seeking treatment for addiction – a “dramatic increase” on ten years ago – but many of her patients were even younger.

In a recent survey of more than 1,500 teachers, around two-thirds said they were aware of pupils sharing sexual content, with as many as one in six of those involved of primary school age.

More than 2,000 children have been reported to police for crimes linked to indecent images in the past three years.

“So many of my clients are 13 and 14-year-old girls who are involved in sexting, and describe sexting as ‘completely normal’,” said Ms Saligari.

Many young girls in particular believe that sending a picture of themselves naked to someone on their mobile phone is “normal”, and that it only becomes “wrong” when a parent or adult finds out, she added.

“If children are taught self-respect they are less likely to exploit themselves in that way,” said Ms Saligari. “It’s an issue of self-respect and it’s an issue of identity.”

Speaking alongside Ms Saligari at the Highgate Junior School conference on teenage development, Dr Richard Graham, Consultant Psychiatrist and Technology Addiction Lead at the Nightingale Hospital, said the issue was a growing area of interest for researchers, as parents report struggling to find the correct balance for their children.

Ofcom figures suggest more than four in ten parents of 12-15 year-olds find it hard to control their children’s screen time.

Even three- and four-year-olds consume an average of six and a half hours of internet time per week, according to the broadcasting regulator.

Greater emphasis was needed on sleep and digital curfews at home, the experts suggested, as well as a systematic approach within schools, for example by introducing a smartphone amnesty at the beginning of the school day.

“With sixth formers and teenagers, you’re going to get resistance, because to them it’s like a third hand,” said Ms Saligari, “but I don’t think it’s impossible to intervene. Schools asking pupils to spend some time away from their phone I think is great.

“If you catch [addiction] early enough, you can teach children how to self-regulate, so we’re not policing them and telling them exactly what to do,” she added.

“What we’re saying is, here’s the quiet carriage time, here’s the free time – now you must learn to self-regulate. It’s possible to enjoy periods of both.”

 

This surgeon wants to connect you to the Internet with a brain implant

Eric Leuthardt believes that in the near future we will allow doctors to insert electrodes into our brains so we can communicate directly with computers and each other.

Source: This surgeon wants to connect you to the Internet with a brain implant

by Adam Piore, November 30, 2017

It’s the Monday morning following the opening weekend of the movie Blade Runner 2049, and Eric C. Leuthardt is standing in the center of a floodlit operating room clad in scrubs and a mask, hunched over an unconscious patient.

“I thought he was human, but I wasn’t sure,” Leuthardt says to the surgical resident standing next to him, as he draws a line on the area of the patient’s shaved scalp where he intends to make his initial incisions for brain surgery. “Did you think he was a replicant?”

“I definitely thought he was a replicant,” the resident responds, using the movie’s term for the eerily realistic-looking bioengineered androids.

“What I think is so interesting is that the future is always flying cars,” Leuthardt says, handing the resident his Sharpie and picking up a scalpel. “They captured the dystopian component: they talk about biology, the replicants. But they missed big chunks of the future. Where were the neural prosthetics?”

It’s a topic that Leuthardt, a 44-year-old scientist and brain surgeon, has spent a lot of time imagining. In addition to his duties as a neurosurgeon at Washington University in St. Louis, he has published two novels and written an award-winning play aimed at “preparing society for the changes ahead.” In his first novel, a techno-thriller called RedDevil 4, 90 percent of human beings have elected to get computer hardware implanted directly into their brains. This allows a seamless connection between people and computers, and a wide array of sensory experiences without leaving home. Leuthardt believes that in the next several decades such implants will be like plastic surgery or tattoos, undertaken with hardly a second thought.

Eric Leuthardt.

“I cut people open for a job,” he notes. “So it’s not hard to imagine.”

But Leuthardt has done far more than just imagine this future. He specializes in operating on patients with intractable epilepsy, all of whom must spend several days before their main surgery with electrodes implanted on their cortex as computers aggregate information about the neural firing patterns that precede their seizures. During this period, they are confined to a hospital bed and are often extremely bored. About 15 years ago, Leuthardt had an epiphany: why not recruit them to serve as experimental subjects? It would both ease their tedium and help bring his dreams closer to reality.

Leuthardt began designing tasks for them to do. Then he analyzed their brain signals to see what he might learn about how the brain encodes our thoughts and intentions, and how such signals might be used to control external devices. Was the data he had access to sufficiently robust to describe intended movement? Could he listen in on a person’s internal verbal monologues? Is it possible to decode cognition itself?

Though the answers to some of these questions were far from conclusive, they were encouraging. Encouraging enough to instill in Leuthardt the certitude of a true believer—one who might sound like a crackpot, were he not a brain surgeon who deals in the life-and-death realm of the operating room, where there is no room for hubris or delusion. Leuthardt knows better than most that brain surgery is dangerous, scary, and difficult for the patient. But his understanding of the brain has also given him a clear-eyed view of its inherent limitations—and the potential of technology to help overcome them. Once the rest of the world understands the promise, he insists—and once the technologies progress—the human race will do what it has always done. It will evolve. This time with the help of chips implanted in our heads.

One of Leuthardt’s patients is positioned for minimally invasive laser surgery to treat a brain tumor. Such highly precise surgical techniques have made implanting electrodes safer and less daunting for patients.

“A true fluid neural integration is going to happen,” Leuthardt says. “It’s just a matter of when. If it’s 10 or 100 years in the grand scheme of things, it’s a material development in the course of human history.”

Leuthardt is by no means the only one with exotic ambitions for what are known as brain-computer interfaces. Last March Elon Musk, a founder of Tesla and SpaceX, launched Neuralink, a venture aiming to create devices that facilitate mind-machine melds. Facebook’s Mark Zuckerberg has expressed similar dreams, and last spring his company revealed that it has 60 engineers working on building interfaces that would let you type using just your mind. Bryan Johnson, the founder of the online payment system Braintree, is using his fortune to fund Kernel, a company that aims to develop neuroprosthetics he hopes will eventually boost intelligence, memory, and more.

These plans, however, are all in their early phases and have been shrouded in secrecy, making it hard to assess how much progress has been made—or whether the goals are even remotely realistic. The challenges of brain-computer interfaces are myriad. The kinds of devices that people like Musk and Zuckerberg are talking about won’t just require better hardware to facilitate seamless mechanical connection and communication between silicon computers and the messy gray matter of the human brain. They’ll also have to have sufficient computational power to make sense out of the mass of data produced at any given moment as many of the brain’s nearly 100 billion neurons fire. One other thing: we still don’t know the code the brain uses. We will have to, in other words, learn how to read people’s minds.

But Leuthardt, for one, expects he will live to see it. “At the pace at which technology changes, it’s not inconceivable to think that in a 20-year time frame everything in a cell phone could be put into a grain of rice,” he says. “That could be put into your head in a minimally invasive way, and would be able to perform the computations necessary to be a really effective brain-computer interface.”

Decoding the brain

Scientists have long known that the firing of our neurons is what allows us to move, feel, and think. But breaking the code by which neurons talk to each other and the rest of the body—developing the capacity to actually listen in and make sense of precisely how it is that brain cells allow us to function—has long stood as one of neuroscience’s most daunting tasks.

In the early 1980s, a neuroscientist named Apostolos Georgopoulos, at Johns Hopkins, paved the way for the current revolution in brain-computer interfaces. Georgopoulos identified neurons in the higher-level processing areas of the motor cortex that fired prior to specific kinds of movement—such as a flick of the wrist to the right, or a downward thrust with the arm. What made Georgopoulos’s discovery so important was that you could record these signals and use them to predict the direction and intensity of the movements. Some of these neuronal firing patterns guided the behavior of scores of lower-level neurons working together to move the individual muscles and, ultimately, a limb.

Using arrays of dozens of electrodes to track these high-level signals, Georgopoulos demonstrated that he could predict not just which way a monkey would move a joystick in three-dimensional space, but even the velocity of the movement and how it would change over time.

It was, it seemed clear, precisely the kind of data one might use to give a paralyzed patient mind control over a prosthetic device. Which is the task that one of Georgopoulos’s protégés, Andrew Schwartz, took on in the 1990s. By the late 1990s Schwartz, who is currently a neurobiologist at the University of Pittsburgh, had implanted electrodes in the brains of monkeys and begun to demonstrate that it was indeed possible to train them to control robotic limbs just by thinking.

Leuthardt, in St. Louis to do a neurosurgery residency at Washington University in 1999, was inspired by such work: when he needed to decide how to spend a mandated year-long research break, he knew exactly what he wanted to focus on. Schwartz’s initial success had convinced Leuthardt that science fiction was on the verge of becoming reality. Scientists were finally taking the first tentative steps toward the melding of man and machine. Leuthardt wanted to be part of the coming revolution.

He thought he might devote his year to studying the problem of scarring in mice: over time, the single electrodes that Schwartz and others implanted as part of this work caused inflammatory reactions, or ended up sheathed in brain cells and immobilized. But when Leuthardt and his advisor sat down to map out a plan, the two came up with a better idea. Why not see if they might be able to use a different brain recording technique altogether?

“We were like, ‘Hey, we’ve got humans with electrodes in them all the time!’” Leuthardt says. “Why don’t we just do some experiments with them?”

A surgeon prepares to drill a hole in a patient’s skull to place a laser probe.

A stereotactic frame fixed to a patient’s skull guides a laser probe that pinpoints a location in the brain.


Georgopoulos and Schwartz had collected their data using a technique that relies on microelectrodes next to the cell membranes of individual neurons to detect voltage changes. The electrodes Leuthardt used, which are implanted before surgery in epilepsy patients, were far larger and were placed on the surface of the cortex, under the scalp, on strips of plastic, where they recorded the signals emanating from hundreds of thousands of neurons at the same time. To install them, Leuthardt performed an initial operation in which he removed the top of the skull, cut through the dura (the brain’s outermost membrane), and placed the electrodes directly on top of the brain. Then he connected them to wires that snaked out of the patient’s head in a bundle and plugged into machinery that could analyze the brain signals.

Such electrodes had been used successfully for decades to identify the exact origin in the brain of an epilepsy patient’s intractable seizures. After the initial surgery, the patient stops taking anti-seizure medication, which will eventually prompt an epileptic episode—and the data about its physical source helps doctors like Leuthardt decide which section of the brain to resect in order to forestall future episodes.

But many were skeptical that the electrodes would yield enough information to control a prosthetic. To help find out, Leuthardt recruited Gerwin Schalk, a computer scientist at the Wadsworth Center, a public-health laboratory of the New York State Department of Health. Progress was swift. Within a few years of testing, Leuthardt’s patients had shown the capacity to play Space Invaders—moving a virtual spaceship left and right—simply by thinking. Then they moved a cursor in three-dimensional space on a screen.

In 2006, after a speech on this work at a conference, Schalk was approached by Elmar Schmeisser, a program manager at the U.S. Army Research Office. Schmeisser had in mind something far more complex. He wanted to find out if it was possible to decode “imagined speech”—words not vocalized, but simply spoken silently in one’s mind. Schmeisser, also a science fiction fan, had long dreamed of creating a “thought helmet” that could detect a soldier’s imagined speech and transmit it wirelessly to a fellow soldier’s earpiece.

Laser probe.

Leuthardt recruited 12 bedridden epilepsy patients, confined to their rooms and bored as they waited to have seizures, and presented each one with 36 words that had a relatively simple consonant-vowel-consonant structure, such as “bet,” “bat,” “beat,” and “boot.” He asked the patients to say the words out loud and then to simply imagine saying them—conveying the instructions visually (written on a computer screen), with no audio, and again vocally, with no video, to make sure that he could identify incoming sensory signals in the brain. Then he shipped the data to Schalk for analysis.

Schalk’s software relies on pattern recognition algorithms—his programs can be trained to recognize the activation patterns of groups of neurons associated with a given task or thought. With a minimum of 50 to 200 electrodes, each one producing 1,000 readings per second, the programs must churn through a dizzying number of variables. The more electrodes and the smaller the population of neurons per electrode, the better the chance of detecting meaningful patterns—if sufficient computing power can be brought to bear to sort out irrelevant noise.

“The more resolution the better, but at the minimum it’s about 50,000 numbers a second,” Schalk says. “You have to extract the one thing you are really interested in. That’s not so straightforward.”
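Schalk’s throughput figure is easy to verify with a little arithmetic. A minimal sketch, using only the electrode counts and sampling rate quoted above (the helper function and the 2-bytes-per-reading storage assumption are illustrative, not from the article):

```python
# Back-of-envelope estimate of the raw data rate Schalk describes:
# each electrode produces ~1,000 readings per second, and arrays
# run from roughly 50 to 200 electrodes.

def samples_per_second(n_electrodes: int, sampling_hz: int = 1000) -> int:
    """Raw readings per second produced by an electrode array."""
    return n_electrodes * sampling_hz

low = samples_per_second(50)    # 50,000 readings/sec -- the "minimum" Schalk cites
high = samples_per_second(200)  # 200,000 readings/sec at the high end

# Assuming 2 bytes per reading, one hour of the low-end array alone:
bytes_per_hour = low * 2 * 3600  # 360,000,000 bytes, roughly 360 MB per hour

print(low, high, bytes_per_hour)
```

At these rates, a multi-day epilepsy monitoring stay accumulates tens of gigabytes per patient, which is why Schalk emphasizes that extracting “the one thing you are really interested in” from the stream is not straightforward.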

Schalk’s results, however, were surprisingly robust. As one might expect, when Leuthardt’s subjects vocalized a word, the data indicated activity in the areas of the motor cortex associated with the muscles that produce speech. The auditory cortex, and an area in its vicinity long believed to be associated with speech processing, were also active at the exact same moments. Remarkably, there were similar yet slightly different activation patterns even when the subjects only imagined the words silently.

Schalk, Leuthardt, and others involved in the project believe they have found the little voice that we hear in our mind when we imagine speaking. The system is far from perfect: after years of effort and refinements to his algorithms, Schalk’s program guesses correctly 45 percent of the time. But rather than attempt to push those numbers higher (they expect performance to improve with better sensors), Schalk and Leuthardt have focused on decoding increasingly complex components of speech.
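To put that 45 percent in context, compare it with chance performance on the 36-word vocabulary described earlier. Note the pairing of the two figures is my assumption for illustration; the article does not state which vocabulary the 45 percent was measured on:

```python
# Hypothetical illustration: compare the reported decoding accuracy
# to random guessing on a 36-word vocabulary (both figures appear in
# the article; pairing them is an assumption).

vocabulary_size = 36
chance_accuracy = 1 / vocabulary_size   # about 0.028, i.e. 2.8%
reported_accuracy = 0.45                # Schalk's decoder, per the article

improvement = reported_accuracy / chance_accuracy
print(f"{improvement:.0f}x better than chance")  # prints "16x better than chance"
```

Seen this way, a decoder that is "wrong" more often than it is right is still picking up a strong, structured signal from imagined speech.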

In recent years, Schalk has continued to extend the findings on real and imagined speech (he can tell whether a subject is imagining speaking Martin Luther King Jr.’s “I Have a Dream” speech or Lincoln’s Gettysburg Address). Leuthardt, meanwhile, has attempted to push on into the next realm: identifying the way the brain encodes intellectual concepts across different regions.

The data on that effort is not published yet, “but the honest truth is we’re still trying to make sense of it,” Leuthardt says. His lab, he acknowledges, may be approaching the limits of what’s possible using current technologies.

Implanting the future

“The moment we got early evidence that we could decode intentions,” Leuthardt says, “I knew it was on.”

Soon after obtaining those results, Leuthardt took seven days off to write, visualize the future, and think about both short- and long-term goals. At the top of the list of things to do, he decided, was preparing humanity for what’s coming, a job that is still very much in progress.

Leuthardt drills a hole in the skull.

On this control room computer screen, the laser is monitored in real time.

With sufficient funding, Leuthardt insists, reclining in a chair in his office after performing surgery, he could already create a prosthetic implant for a general market that would allow someone to use a computer and control a cursor in three-dimensional space. Users could also do things like turn lights on and off, or turn heat up and down, using their thoughts alone. They might even be able to experience artificially induced tactile sensations and access some rudimentary means of turning imagined speech into text. “With current technology, I could make an implant—but how many people are going to want that now?” he says. “I think it’s very important to take practical, short interval steps to get people moved along the pathway toward this road of the long-term vision.”

To that end, Leuthardt founded NeuroLutions, a company aimed at demonstrating that there is a market, even today, for rudimentary devices that link mind and machine—and at beginning to use the technology to help people. NeuroLutions has raised several million so far, and a noninvasive brain interface for stroke victims who have lost function on one side is currently in human trials.

The device consists of brain-monitoring electrodes that sit on the scalp and are attached to an arm orthosis; it can detect a neural signature for intended movement before the signal reaches the motor area of the brain. The neural signals are on the opposite side of the brain from the area usually destroyed by the stroke—and thus are usually spared any damage. By detecting them, amplifying them, and using them to control a device that moves the paralyzed limb, Leuthardt has found, he can actually help a patient regain independent control over the limb, far faster and more effectively than is possible with any approach currently on the market. Importantly, the device can be used without brain surgery.

Though the technology is decidedly modest compared with Leuthardt’s grand designs for the future, he believes this is an area where he can meaningfully transform people’s lives right now. There are about 700,000 new stroke patients in the U.S. each year, and the most common motor impairment is a paralyzed hand. Finding a way to help more of them regain function—and demonstrating that he can do it faster and more effectively—would not only demonstrate the power of brain-computer interfaces but meet a huge medical need.

Leuthardt plans the laser probe’s trajectory with the assistance of a stereotactic navigation system.

Leuthardt’s surgical tools.

Using noninvasive electrodes that sit on the outside of the scalp makes the invention much less off-putting for patients, but it also imposes severe limitations. The voltage signals coming from brain cells may be muffled as they travel through the scalp to reach the sensors, and they may be diffused as they pass through bone. Either makes them harder to detect and their origins harder to interpret.

Leuthardt can achieve far more transformative feats using his implanted electrodes that sit directly on the cortex of the brain. But he has learned through painful experience that elective brain surgery is a tough sell—not just with patients, but with investors as well.

When he and Schalk founded NeuroLutions, in 2008, they hoped to restore movement to the paralyzed by bringing just such an interface to market. But the investment community wasn’t interested. For one thing, neuroscientist-led startups have been testing brain-computer interfaces for more than a decade but have had little success in turning the technology into a viable treatment for paralyzed patients (see “Implanting Hope”). The population of potential patients is limited—at least compared with some of the other conditions being targeted by medical-device startups competing for venture capital. (Roughly 40,000 people in the U.S. have complete quadriplegia.) And most of the tasks that could be accomplished using such an interface can already be handled with noninvasive devices. Even most locked-in patients can still blink an eye or perhaps wiggle a finger. Methods that rely on this residual movement can be used to input data or move a wheelchair without the danger, recovery time, or psychological wherewithal involved in implanting electrodes directly on one’s cortex.

So after their initial fund-raising efforts failed, Leuthardt and Schalk set their sights on a more modest goal. Unexpectedly, they found that many patients continued to recover additional function even after the orthosis was removed—extending to, for instance, fine motor control of their fingers. Often, it turned out, all the patients needed was a little push. Then, once new neural pathways were established, the brain continued to remodel and expand them so that they could convey more complex motor commands to the hand.

The initial success Leuthardt expects in these patients, he hopes, will encourage some to move on to a more robust invasive system. “A couple years down the road you might say, ‘You know what? For that noninvasive version, you can get this much benefit, but I think that now, given the science that we know and everything, we can give you this much more benefit,’” he says. “We can enhance your function even more.”

Leuthardt is so eager for the world to share his passion for the technology’s potentially transformative effects that he has also sought to engage the public through art. In addition to writing his novels and play, he is working on a podcast and YouTube series with a fellow neurosurgeon, in which the two discuss technology and philosophy over coffee and doughnuts.

In Leuthardt’s first book, RedDevil 4, one character uses his “cortical prosthetic” to experience hiking the Himalayas while sitting on his couch. Another, a police detective, confers telepathically with a colleague about how to question a murder suspect standing right in front of them. Every character has instant access to all the knowledge in the world’s libraries—can access it as quickly as a person can think any spontaneous thought. No one ever has to be alone, and our bodies no longer limit us. On the flip side, everyone’s brain is vulnerable to computer viruses that can turn people into psychopaths.

Leuthardt acknowledges that at present, we still lack the power to record and stimulate the number of neurons it would take to replicate these visions. But he claims his conversations with some Silicon Valley investors have only fueled his optimism that we’re on the brink of an innovation explosion.

Schalk is a little less sanguine. He’s skeptical that Facebook, Musk, and others are adding much of their own to the quest for a better interface.

“They are not going to do anything different than the scientific community by itself,” Schalk says. “Maybe something is going to come of it, but it’s not like they have this new thing that nobody else has.”

Schalk says it’s “very, very obvious” that in the next five to 10 years some form of brain-computer interface will be used to rehabilitate victims of strokes, spinal cord injuries, chronic pain, and other disorders. But he compares the current recording techniques to the IBM computers of the 1960s, saying that they are now “archaic.” For the technology to reach its true long-term potential, he believes, a new sort of brain-scanning technology will be needed—something that can read far more neurons at once.

“What you really want is to be able to listen to the brain and talk to the brain in a way that the brain cannot distinguish from the way it communicates internally, and we can’t do that right now,” Schalk says. “We really don’t know how to do it at this point. But it’s also obvious to me that it is going to happen. And if and when that happens, our lives are going to change, and our lives are going to change in a way that is completely unprecedented.”

Where and when the breakthroughs will come from is unclear. After decades of research and progress, many of the same technological challenges remain daunting. Still, the progress in neuroscience and computer hardware and software makes the outcome—at least to true believers—inevitable.

At the very least, says Leuthardt, the buzz emanating from Silicon Valley has generated “real excitement and real thinking about brain-computer interfaces being a practical reality.” That, he says, is “something we haven’t seen before.” And though he acknowledges that if this turns out to be hype it could “set the field back a decade or two,” nothing, he believes, will stop us from reaching the ultimate goal: a technology that will allow us to transcend the cognitive and physical limitations previous generations of humankind have taken for granted.

“It’s going to happen,” he insists. “This has the potential to alter the evolutionary direction of the human race.”

Adam Piore is the author of The Body Builders: Inside the Science of the Engineered Human, a book about bioengineering published last March.

The Psychology of The Walking Dead—The Appeal of Post-Apocalyptic Stories

by Dr. Donna Roberts

 

The Story

I’m not a Walking Dead fan, which is surprising because I love binging on TV series and I loved horror movies as a teenager. Or maybe, more accurately I loved watching a horror movie with my girlfriends. I was in high school at the time when the Friday the 13th and Halloween series came out and we frequently headed to the theater in a group, where we huddled together in our seats and clutched each other frantically as we screamed at all the shocking surprises. Good times. But I can’t say that my love for the genre persisted into adulthood.

When I saw that some Facebook friends from high school and current colleagues were TWD fans, I knew I had to give it a try. It just didn’t gel with me at the time. Maybe it still will. Timing is everything.

Though I wasn’t compelled to keep watching the series, I am fascinated enough with dissecting the human condition and the psychology of popular culture to know when I have a gem of some sort in my midst. I am a believer that life imitates art imitates life in a chicken-and-egg circularity. Beginning with its third season, The Walking Dead attracted the most 18- to 49-year-old viewers of any cable or broadcast television series. That’s a pretty wide range of viewers that no marketing segmentation plan would usually put together. It was even well received by critics.

So, the popularity of TWD is enough to make me want to put it on the proverbial couch and see what it has to say.

*

Psych Pstuff’s Summary

Turns out that since the beginning of humanity, or at least since we’ve been writing about it, we’ve been contemplating the end of humanity. From Bible stories to campfire stories, we revel in envisioning the ultimate destruction of the world as we know it, and what ensues in the aftermath.

In 2012, the Daily Mail published results of a survey that polled 16,262 people in more than 20 countries. The results indicated that 22% of Americans believed the world would end in their lifetime, with 10% thinking the apocalypse was coming in that very year. Certainly, if this is your mindset, then it is only logical to be a wee bit obsessed with what might be in store for you.

Actually, skipping only a few years here and there, predictions of the end of the world have been made for almost every year since 1910, and there are plenty more scheduled for the future. Historically, even various scientists have weighed in with estimates of cataclysmic destruction that would endanger human existence, though their dates typically range from a comfortable 300,000 to 22 billion years from now. However, given the instability of both the climate and the political landscape, more predictions do seem to be cropping up with sooner best-before dates.

The media, including broadcast journalism, popular talk shows, documentaries and fictionalized productions, have always played a role in our apocalyptic obsession. Adding a twist to the usual plot of following the experiences of survivors, beginning in 2009 the History Channel aired a two-season (20-episode) series in which experts speculated on how the earth would evolve after the demise of humans. With the ominous opening, Welcome to Earth … Population: Zero, it captured the morbid fascination of 5.4 million viewers, making it the most watched program in the history of the History Channel.

From 2011 to 2014 the National Geographic channel ran a reality show, Doomsday Preppers, that profiled real survivalists preparing for various scenarios of the end of civilization. While some critics called it absurd and exploitative, it was the most watched and highest rated show in the history of the network.

Typically, there are only a few oft-repeated variations on the theme—the deadly virus, the meteor strike, nuclear devastation and, the newest kid on the block, the “gray goo” scenario where nanotechnology runs amok and robots commit ecophagy. TWD in particular, and the zombie craze in general, seems to be the latest, and rather enduring, expression of this fascination with all things apocalyptic. Now in its 8th season, the show seems as strong as ever. The review site Rotten Tomatoes concludes, “Blood-spattered, emotionally resonant, and white-knuckle intense, The Walking Dead puts an intelligent spin on the overcrowded zombie subgenre.”

But just why do we engage in so much pursuit of these devastating what-ifs?

In one respect, the contemplation of ever-increasing disaster scenarios is just a gradual slippery slope from very functional, and necessary, learned behavior. From the time we are children, through both direct experience and the hypothetical, we learn cause-effect relationships, and thus how to avoid unpleasant and dangerous consequences. We learn not to touch the hot stove or play in traffic. We learn to think ahead and anticipate possible consequences. But in learning these, we also come to understand that there are some things that happen that you can’t anticipate. Sometimes life turns on a dime. Sometimes disasters happen. Sometimes the world runs amok and all you can do is deal with the aftermath.

Enter the captivating world of the post-apocalypse.

Another cognitive construct that leads to our fascination with these doomsday scenarios is to combat the feelings of powerlessness and mistrust of those with power. There’s nothing like all-out devastation to level the proverbial playing field.

There is also a surreal romanticizing of the post-apocalyptic world. Taking us back to the basics of human survival releases us from the complex entanglements and overbearing demands of the modern world, if only for that short time of suspended disbelief.

Child psychologist and author of Zombie Autopsies, Steven Schlozman, M.D., notes, “All of this uncertainty and all of this fear comes together and people think maybe life would be better after a disaster. I talk to kids in my practice and they see it as a good thing. They say, ‘life would be so simple—I’d shoot some zombies and wouldn’t have to go to school.’” Similarly, he recounts the following statement from another teenager: “Dude—a zombie apocalypse would be so cool. No homework, no girls, no SATs. Just make it through the night, man … make it through the night.”

While in reality we might not share the exuberance of these kids or long for a disaster to avoid another work deadline, we can sometimes fantasize about a simpler world where our true strengths are utilized and appreciated. Our brains are always seeking a solution to what is plaguing us (pun intended) and causing anxiety. When no plausible solution is readily available we can resort to more fantastical scenarios. Projecting ourselves into future worlds, where life can be better and we can be better, is akin to reverse nostalgia.

The power and endurance of TWD lies not in its clichéd deadly virus plotline, but instead in the development of characters who touch us on a deeper level. While the circumstances are surreal, the resilience of the characters in the face of total devastation and imminent threat to survival can reflect something much more real, and more universal. As John Russo, co-creator of the TWD predecessor Night of the Living Dead, noted, “It has important things to say about the human condition, which is one of frailty and nobility, weakness and courage, fear and hope, good and evil. These are the enduring puzzles and enigmas of our existence, and we can delve into them and learn from them vicariously when we sit down to watch The Walking Dead.”

What more could you ask for from any form of entertainment?

I think I just might give Season 2 a try.

What happens in your brain when you binge-watch a TV show

A Netflix survey found that 73 percent of participants reported positive feelings associated with binge-watching.


Is watching the entire second season of “Stranger Things” on your weekend to-do list? Here’s what you need to know.

Source: What happens in your brain when you binge-watch a TV show

by Danielle Page

You sit yourself down in front of the TV after a long day at work, and decide to start watching that new show everyone’s been talking about. Cut to midnight and you’ve crushed half a season — and find yourself tempted to stay up to watch just one more episode, even though you know you’ll be paying for it at work the next morning.

It happens to the best of us. Thanks to streaming platforms like Netflix and Hulu, we’re granted access to several hundred show options that we can watch all in one sitting — for a monthly fee that shakes out to less than a week’s worth of lattes. What a time to be alive, right?

And we’re taking full advantage of that access. According to a survey done by the U.S. Bureau of Labor Statistics, the average American spends around 2.7 hours watching TV per day, which adds up to almost 20 hours per week in total.

361,000 people watched all nine episodes of the second season of ‘Stranger Things’ on the first day it was released.

As for the amount of binge watching we’re doing, a Netflix survey found that 61 percent of users regularly watch between 2-6 episodes of a show in one sitting. A more recent study found that most Netflix members choose to binge-watch their way through a series rather than taking their time — finishing an entire season in one week, on average (shows that fall in the sci-fi, horror and thriller categories are the most likely to be binged).

In fact, according to Nielsen, 361,000 people watched all nine episodes of season 2 of ‘Stranger Things,’ on the first day it was released.

Of course, we wouldn’t do it if it didn’t feel good. Indeed, the Netflix survey also found that 73 percent of participants reported positive feelings associated with binge-watching. But if you spent last weekend watching season two of “Stranger Things” in its entirety, you may have found yourself feeling exhausted by the end of it — and downright depressed that you’re out of episodes to watch.

A Netflix survey found that 61 percent of users regularly watch between 2-6 episodes of a show in one sitting.

There are a handful of reasons that binge-watching gives us such a high — and then leaves us emotionally spent on the couch. Here’s a look at what happens to our brain when we settle in for a marathon, and how to watch responsibly.

THIS IS YOUR BRAIN ON BINGE WATCHING

When binge watching your favorite show, your brain is continually producing dopamine, and your body experiences a drug-like high.

Watching episode after episode of a show feels good — but why is that? Dr. Renee Carr, Psy.D, a clinical psychologist, says it’s due to the chemicals being released in our brain. “When engaged in an activity that’s enjoyable such as binge watching, your brain produces dopamine,” she explains. “This chemical gives the body a natural, internal reward of pleasure that reinforces continued engagement in that activity. It is the brain’s signal that communicates to the body, ‘This feels good. You should keep doing this!’ When binge watching your favorite show, your brain is continually producing dopamine, and your body experiences a drug-like high. You experience a pseudo-addiction to the show because you develop cravings for dopamine.”

According to Dr. Carr, the process we experience while binge watching is the same one that occurs when a drug or other type of addiction begins. “The neuronal pathways that cause heroin and sex addictions are the same as an addiction to binge watching,” Carr explains. “Your body does not discriminate against pleasure. It can become addicted to any activity or substance that consistently produces dopamine.”

Your body does not discriminate against pleasure. It can become addicted to any activity or substance that consistently produces dopamine.

Spending so much time immersed in the lives of the characters portrayed on a show is also fueling our binge watching experience. “Our brains code all experiences, be it watched on TV, experienced live, read in a book or imagined, as ‘real’ memories,” explains Gayani DeSilva, M.D., a psychiatrist at Laguna Family Health Center in California. “So when watching a TV program, the areas of the brain that are activated are the same as when experiencing a live event. We get drawn into story lines, become attached to characters and truly care about outcomes of conflicts.”

According to Dr. DeSilva, there are a handful of different forms of character involvement that contribute to the bond we form with the characters, which ultimately make us more likely to binge watch a show in its entirety.

“‘Identification’ is when we see a character in a show that we see ourselves in,” she explains. “‘Modern Family,’ for example, offers identification for the individual who is an adoptive parent, a gay husband, the father of a gay couple, the daughter of a father who marries a much younger woman, etc. The show is so popular because of its multiple avenues for identification. ‘Wishful identification’ is where plots and characters offer opportunity for fantasy and immersion in the world the viewer wishes they lived in (ex. ‘Gossip Girl,’ ‘America’s Next Top Model’). Also, the identification with power, prestige and success makes it pleasurable to keep watching. ‘Parasocial interaction’ is a one-way relationship where the viewer feels a close connection to an actor or character in the TV show.”

If you’ve ever found yourself thinking that you and your favorite character would totally be friends in real life, you’ve likely experienced this type of involvement. Another type of character involvement is “perceived similarity, where we enjoy the experience of ‘I know what that feels like,’ because it’s affirming and familiar, and may also allow the viewer increased self-esteem when seeing qualities valued in another story.” For example, you’re drawn to shows with a strong female lead because you often take on that role at work or in your social groups.

BINGE WATCHING CAN BE A STRESS RELIEVER

The act of binge watching offers us a temporary escape from our day-to-day grind, which can act as a helpful stress management tool, says Dr. John Mayer, Ph.D, a clinical psychologist at Doctor On Demand. “We are all bombarded with stress from everyday living, and with the nature of today’s world where information floods us constantly,” Dr. Mayer says. “It is hard to shut our minds down and tune out the stress and pressures. A binge can work like a steel door that blocks our brains from thinking about those constant stressors that force themselves into our thoughts. Binge watching can set up a great boundary where troubles are kept at bay.”

A binge can work like a steel door that blocks our brains from thinking about those constant stressors that force themselves into our thoughts.

Binge watching can also help foster relationships with others who have been watching the same show as you. “It does give you something to talk about with other people,” says Dr. Ariane Machin, Ph.D, clinical psychologist and professor of psychology. “Cue the ‘This Is Us’ phenomenon and feeling left out if you didn’t know what was going on! Binge watching can make us feel a part of a community with those that have also watched it, where we can connect over an in-depth discussion of a show.”

Watching a show that features a character or scenario that ties into your day-to-day routine can also end up having a positive impact on your real life. “Binge watching can be healthy if your favorite character is also a virtual role model for you,” says Carr, “or, if the content of the show gives you exposure to a career you are interested in. Although most characters and scenes are exaggerated for dramatic effect, it can be a good teaching lesson and case study. For example, if a shy person wants to become more assertive, remembering how a strong character on the show behaves can give the shy person a vivid example of how to advocate for herself or try something new. Or, if experiencing a personal crisis, remembering how a favorite character or TV role model solved a problem can give the binge watcher new, creative or bolder solutions.”

THE LET DOWN: WHAT HAPPENS WHEN THE BINGE IS OVER

Have you ever felt sad after finishing a series? Mayer says that when we finish binge watching a series, we actually mourn the loss. “We often go into a state of depression because of the loss we are experiencing,” he says. “We call this situational depression because it is stimulated by an identifiable, tangible event. Our brain stimulation is lowered (depressed), just as in other forms of depression.”

In a study done by the University of Toledo, 142 out of 408 participants identified themselves as binge-watchers. This group reported higher levels of stress, anxiety and depression than those who were not binge-watchers. But in examining the habits that come with binge-watching, it’s not hard to see why it would start to impact our mental health. For starters, if you’re not doing it with a roommate or partner, binge-watching can quickly become isolating.

When we disconnect from humans and over-connect to TV at the cost of human connection, eventually we will ‘starve to death’ emotionally.

“When we substitute TV for human relations we disconnect from our human nature and substitute for [the] virtual,” says Dr. Judy Rosenberg, psychologist and founder of the Psychological Healing Center in Sherman Oaks, CA. “We are wired to connect, and when we disconnect from humans and over-connect to TV at the cost of human connection, eventually we will ‘starve to death’ emotionally. Real relationships and the work of life is more difficult, but at the end of the day more enriching, growth producing and connecting.”

If you find yourself choosing a night in with Netflix over seeing friends and family, it’s a sign that this habit is headed into harmful territory. (A word of warning to those of us who decided to stay in and binge watch “Stranger Things” instead of heading to that Halloween party.)

HOW TO BINGE-WATCH RESPONSIBLY

The key to reaping the benefits of binge-watching without suffering from the negative repercussions is to set parameters for the time you spend with your television — which can be tough to do when you’re faced with cliffhangers that might be resolved if you just stay up for one more episode. “In addition to pleasure, we often binge-watch to obtain psychological closure from the previous episode,” says Carr. “However, because each new episode leaves you with more questions, you can engage in healthy binge-watching by setting a predetermined end time for the binge. For example, commit to saying, ‘after three hours, I’m going to stop watching this show for the night.’”

If setting a time limit cuts you off at a point in your binge where it’s hard to stop (and makes it too easy to tell yourself just ten more minutes), Carr suggests committing to a set number of episodes at the onset instead. “Try identifying a specific number of episodes to watch, then watching only the first half of the episode you have designated as your stopping point,” she says. “Usually, questions from the previous episode will be answered by this half-way mark and you will have enough psychological closure to feel comfortable turning off the TV.”

Also, make sure that you’re balancing your binge with other activities. “After binge-watching, go out with friends or do something fun,” says Carr. “By creating an additional source of pleasure, you will be less likely to become addicted to or binge watch the show. Increase your physical exercise activity or join an adult athletic league. By increasing your heart rate and stimulating your body, you can give yourself a more effective and longer-term experience of fun and excitement.”

When Advertisements Become Too Personal

GuerraGPhoto’s/Shutterstock.com

With the proliferation of media channels over the last 20 years, advertisers have combined marketing technologies with data to serve more personalized advertisements to consumers. Personalization is a marketing strategy that delivers specific messages to you by leveraging data analysis and marketing technology to target (that is, to identify a specific person or audience). To do this, companies leverage many data sources about you, whether obtained directly from you, purchased from data brokers, or collected passively by tracking your online behavior. There are advantages to this as a consumer, such as advertisement relevance, time savings and product pricing. For example, I don’t like to see the media I consume littered with advertisements for golf equipment or hunting gear, since those products are of no interest to me. Likewise, I hate it when a product I have already purchased keeps showing up on Facebook, as this is just a waste of my attention; the marketer should instead show me something complementary to what I have already bought. There is also a good economic reason for optimizing advertising: if targeting were not available, companies would need to increase their advertising budgets every time a new media channel appeared, resulting in price increases for consumers. From an advertiser’s perspective, there is no argument with the return on investment that leveraging data for targeting provides across all channels, which is why almost all companies engage in the practice. However, there are times when advertisers’ personalization attempts cross the line, and it recently happened to me.
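To make the segmentation idea concrete, here is a minimal, illustrative sketch. The profile fields and values are made up, and real targeting platforms obviously operate at far larger scale with far richer attributes; the point is only that a segment is, at bottom, a filter over consumer attributes:

```python
def build_segment(profiles, min_age, max_age, interest):
    """Return the IDs of profiles in an age range that share an interest."""
    return [
        p["id"]
        for p in profiles
        if min_age <= p["age"] <= max_age and interest in p["interests"]
    ]

# Toy consumer profiles (invented for illustration).
profiles = [
    {"id": "u1", "age": 34, "interests": {"skiing", "hiking"}},
    {"id": "u2", "age": 52, "interests": {"golf"}},
    {"id": "u3", "age": 29, "interests": {"skiing"}},
]

# The "people between 25-45 with an interest in skiing" segment.
print(build_segment(profiles, 25, 45, "skiing"))  # ['u1', 'u3']
```

Every member of the resulting segment can then be sent the same message, which is what makes segmentation practical compared with crafting a different message per individual.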

Last December I had a health matter I needed to address. My doctor recommended I try a supplement that can only be bought online. After trying some samples provided by my doc, I went directly to the company’s website and made the purchase. I never viewed the company’s page nor saw an advertisement for the product on Facebook (i.e., I left no previous online behavior there that could be tracked). One day later, a post showed up on my Facebook feed from that same company.

I immediately yelled “Are You F***ing Kidding Me???” among other things. So, dear reader… you now know I bought a supplement called Serenol, which helps alleviate PMS symptoms – hence my use of four-letter words above (yes, it works). From my perspective this was a complete invasion of my privacy and feels unethical. It may also be against HIPAA laws, or it should be! In the end, what this means is that Serenol, without my permission, disclosed my health condition. Furthermore, it raises the question: now that Facebook has this data on me, how will they use it moving forward?

Being from the data integration and marketing technology industry myself, I have a moderate perspective on the use of data attributes for targeted marketing. I don’t want to see advertisements from companies that are completely irrelevant to me, nor do I want to pay increased prices for goods and services, so I have some comfort with the use of my data. However, this scenario violated my personal boundaries, so I downloaded a tracker monitor and followed the data.

Ghostery provides a free mobile browser and a search-engine plug-in for tracking the trackers, which anyone can use at no cost.

Ghostery shows you what type of trackers are firing on any website that you visit. With this tool I learned there were multiple pixels firing on Serenol’s site, Facebook being one of many.  The two pixels that interested me most were the “Facebook Custom Audiences” and the “Facebook Pixel” trackers. The custom audience pixel enables Serenol (or any other advertiser) to create Facebook Custom Audiences based on their website visitors.

A Facebook Custom Audience is essentially a targeting option created from an advertiser-owned customer list, so the advertiser can target those users on Facebook (Advertiser Help Center, 2018). Facebook Pixel is a small piece of code for websites that allows the site owner AND Facebook to log any Facebook users who visit (Brown, Why Facebook is not telling you everything it knows about you, 2017). Either of these methods could have enabled the survey post I was shown from Serenol. What likely happened is that Serenol and Facebook used these tags to conduct surveillance on me without my conscious knowledge and re-targeted me, hence the offending post. Yes – this is technically legal. Why? Because I most likely agreed to this surveillance in the terms of service and privacy policies on each site. Also, this method of targeting does not tell Serenol who I am on Facebook; only Facebook knows. However, Facebook now has data that associates me with PMS!
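The list-based variant of this matching is worth sketching, because it explains why the advertiser never learns who you are on the platform. Custom Audiences built from customer lists work on hashed identifiers (SHA-256 of normalized emails or phone numbers); the function names, normalization details, and data below are my own simplification, not Facebook’s actual pipeline:

```python
import hashlib

def norm_hash(value: str) -> str:
    """SHA-256 of a normalized identifier (trimmed, lowercased)."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

# Advertiser side: hash the customer list before uploading it.
customer_emails = ["Jane@example.com ", "sam@example.com"]
uploaded_hashes = {norm_hash(e) for e in customer_emails}

# Platform side: hash its own users the same way and intersect.
# Only the platform sees which of its accounts matched.
platform_users = {"jane@example.com": "user_1", "alex@example.com": "user_2"}
matched = [uid for email, uid in platform_users.items()
           if norm_hash(email) in uploaded_hashes]
print(matched)  # ['user_1']
```

The advertiser only ever handles hashes and never receives the matched account identities back, which is exactly the asymmetry described above: Serenol doesn’t learn who I am on Facebook, but Facebook learns what I bought.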

Facebook collects information on things you do, such as content you share, groups you are part of, things someone may share about you (regardless of whether you granted permission), payment information, the internet-connected devices you and your family own, and information from third-party partners including advertisers (Data Policy, 2016). They can monitor your mouse movements, track the amount of time you spend on anything, and detect the subjects of your photos via machine-learning algorithms. Furthermore, when you do upload photos, Facebook scans the image and detects information about that photo, such as whether it contains humans, animals or inanimate objects, and potential people you should tag in the picture (Brown, The amount of data facebook collects from your photos will terrify you, 2017). The social media company directly states in its data policy that it uses the information it collects to improve its advertising (this means targeting) and then to measure such advertising’s effectiveness (Data Policy, 2016). While Facebook’s data policy states that they do not share personally identifiable information (PII), they do leverage non-personally identifying demographic information that can be used for advertisement targeting purposes, provided advertisers adhere to their advertiser guidelines (Data Policy, 2016). This policy applies to all Facebook companies, including WhatsApp, Facebook Messenger and Instagram. So that private message you are sending on Messenger isn’t as private as you think; Facebook is collecting data on that content. With Facebook owning four of the top five social media applications, isn’t this a little creepy?

The next obvious question is how this data can be used for nefarious purposes. Facebook’s advertiser policies state that an advertiser can’t use targeting options to discriminate against people or engage in predatory advertising practices (Advertising Policies, n.d.). While Facebook does withhold some demographics from certain types of advertising, like housing, other targeting practices remain questionable. For example, last year an article appeared in AdAge that called out Facebook, LinkedIn and Google, all of which allow employment advertising to be targeted using age as a criterion. Facebook has defended using the demographic despite criticism that the practice contributes to ageism in the workforce and that such discrimination is illegal in the actual hiring practices of public companies (Sloane, 2017).

So, can Facebook use data about my PMS for targeting? Will they allow potential employers to use this data? What about health insurance companies? This is a slippery slope indeed. The answer is yes, and no. Facebook recently updated its policies and now prevents advertisers from using targeting attributes such as medical conditions (Perez, 2018). This means that Facebook will not provide demographic selection data in its targeting tools to select or deselect users based on medical conditions. That type of targeting relies on third-party data, meaning the advertiser uses data provided by Facebook or other data aggregators to create an audience. However, I did not find anything that prevents companies like Serenol from using first-party data to find me on Facebook. Furthermore, when I went to the Serenol site on February 21st, 2018 (after the Facebook policy update), Ghostery showed that Facebook’s Pixel and Facebook for Developers, along with other pixels and tags from The Trade Desk, Adobe, Google, etc., were all live on the site.

This month’s Harvard Business Review published an article about how consumers react to personalization. The authors ran a series of experiments to understand what causes consumers to object to targeting and found that we don’t always behave logically when it comes to privacy. People will often share details with complete strangers while keeping that information secret from those with whom they have close relationships. Furthermore, the nature of the information affects how we feel about its use – for example, data on sex, health and finances is much more sensitive. The way that data changes hands (information flows) also matters. The authors found that sharing data with a company directly (first-party sharing) generally feels fine because it is necessary to purchase something or engage with that company. However, when that information is shared without our knowledge (third-party sharing), consumers react much as if a friend had shared a secret or talked behind their backs. While third-party sharing of data is legal, the study showed that scenarios where companies obtain information outside the website one interacted with, or infer information about someone from analytics, elicit a negative reaction from consumers. The study also found that when consumers believe their data has been shared unacceptably, purchase interest substantially declines (John, Kim, & Barasz, 2018). The authors’ recommendations for mitigating consumer backlash include staying away from sensitive subjects, maintaining transparency, and giving consumers choice and the ability to opt out.

I reached out to Michael Becker, Managing Partner at Identity Praxis, for his point of view on the subject. Michael is an entrepreneur, academic and industry evangelist who has been engaging and supporting the personal identity economy for over a decade. “People are becoming aware that their personal information has value and are awakening to the fact that its misuse is not just annoying, but can lead to material and lasting emotional, economic, and physical harm. They are awakening to the fact that they can enact control over their data. Consumers are starting to use password managers, identity anonymization tools, and tracker management tools [like Ghostery]; for instance, 38% of US adults have adopted ad blockers, and this is just the beginning. Executives should take heed that a new class of software and services, personal information management solutions, is coming to the market. These solutions, alongside new regulations (like the EU GDPR), give individuals, at scale, the power to determine what information about them is shared, who has access to it, when it can be used, and on what terms. In other words, the core terms of business may change in the very near future, from people having to agree to the business’s terms of service to businesses having to agree to the individual’s terms of access.”

In the United States, the approach to regulating personal data collection and use is that if an action isn’t expressly forbidden, companies can do it, regardless of whether it is ethical. Unfortunately, regulation does not necessarily keep up with the pace of innovation in data collection. In Europe, the approach is the opposite: unless a personal data collection practice and its use is explicitly permitted, companies CANNOT do it. There are some actions you can take to manage passive data collection, though this list is not meant to be exhaustive:

  • Use the Brave browser: it blocks ads and trackers on the sites you visit. Brave claims this increases download speeds, saves you money on mobile data since ads aren’t loaded, and protects your information.
  • Use Ghostery: it lets you choose which trackers are allowed on each site you visit, or block trackers entirely.
  • Add a script-blocker plug-in such as NoScript to your browser. NoScript keeps a whitelist of trustworthy websites and lets you choose which sites are allowed to run scripts.
  • Review which apps have permission to track data on your mobile device, and limit them. Do you really want Apple sharing your contact list and calendar with other applications? Do all applications need access to your fitness and activity data? You can find helpful instructions on how to do this for iPhone here or for Android here.
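Conceptually, tools like Ghostery or an ad blocker work by comparing each outgoing request against a list of known tracker domains. Here is a rough sketch of that idea only; the two-entry blocklist is an example rather than a real list, and real blockers use far larger lists and more sophisticated matching rules:

```python
from urllib.parse import urlparse

# Illustrative blocklist of tracker domains (not a real blocklist).
BLOCKLIST = {"facebook.net", "doubleclick.net"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a listed domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

print(is_blocked("https://connect.facebook.net/en_US/fbevents.js"))  # True
print(is_blocked("https://example.com/page"))                        # False
```

A blocker that sits between the page and the network simply refuses to load any request for which this kind of check returns True, which is why blocked pixels never get the chance to report your visit.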

Regardless of what is legal or illegal, comfort levels with how our personal data is used vary by individual. When you think about it, there is a similarity to the debate in the ’60s over what constituted obscenity. When we find a use of our personal data offensive, we will likely say, “I’ll know it when I see it.”

References:

Advertiser Help Center. (2018). Retrieved from Facebook Business: https://www.facebook.com/business/help/610516375684216

Advertising Policies. (n.d.). Retrieved February 20, 2018, from Facebook: https://www.facebook.com/policies/ads/

Brown, A. (2017, January 6). The amount of data Facebook collects from your photos will terrify you. Retrieved February 20, 2018, from Express: https://www.express.co.uk/life-style/science-technology/751009/Facebook-Scan-Photos-Data-Collection

Brown, A. (2017, January 2). Why Facebook is not telling you everything it knows about you. Retrieved February 20, 2018, from Express: https://www.express.co.uk/life-style/science-technology/748956/Facebook-Login-How-Much-Data-Know

Data Policy. (2016, September 29). Retrieved from Facebook: https://www.facebook.com/full_data_use_policy

John, L. K., Kim, T., & Barasz, K. (2018, February). Ads that don’t overstep. Harvard Business Review, pp. 62-69.

Perez, S. (2018, February 8). Facebook updates its ad policies and tools to protect against discriminatory practices. Retrieved from Techcrunch: https://techcrunch.com/2017/02/08/facebook-updates-its-ad-policies-and-tools-to-protect-against-discriminatory-practices/

Sloane, G. (2017, December 21). Facebook defends targeting job ads based on age. Retrieved from Ad Age: http://adage.com/article/digital/facebook-defends-targeting-job-ads-based-age/311726/
A new study shows that students learn way more effectively from print textbooks than screens

Students told researchers they preferred reading on screens and believed they comprehended better that way. But their actual performance tended to suffer.

Source: A new study shows that students learn way more effectively from print textbooks than screens

Today’s students see themselves as digital natives, the first generation to grow up surrounded by technology like smartphones, tablets and e-readers.

Teachers, parents and policymakers certainly acknowledge the growing influence of technology and have responded in kind. We’ve seen more investment in classroom technologies, with students now equipped with school-issued iPads and access to e-textbooks.

In 2009, California passed a law requiring that all college textbooks be available in electronic form by 2020; in 2011, Florida lawmakers passed legislation requiring public schools to convert their textbooks to digital versions.

Given this trend, teachers, students, parents and policymakers might assume that students’ familiarity and preference for technology translates into better learning outcomes. But we’ve found that’s not necessarily true.

As researchers in learning and text comprehension, our recent work has focused on the differences between reading print and digital media. While new forms of classroom technology like digital textbooks are more accessible and portable, it would be wrong to assume that students will automatically be better served by digital reading simply because they prefer it.

Speed – at a cost

Our work has revealed a significant discrepancy. Students said they preferred reading on screens and believed they performed better there. But their actual performance tended to suffer.

For example, from our review of research done since 1992, we found that students were able to better comprehend information in print for texts that were more than a page in length. This appears to be related to the disruptive effect that scrolling has on comprehension. We were also surprised to learn that few researchers tested different levels of comprehension or documented reading time in their studies of printed and digital texts.

To explore these patterns further, we conducted three studies that explored college students’ ability to comprehend information on paper and from screens.

Students first rated their medium preferences. After reading two passages, one online and one in print, these students then completed three tasks: Describe the main idea of the texts, list key points covered in the readings and provide any other relevant content they could recall. When they were done, we asked them to judge their comprehension performance.

Across the studies, the texts differed in length, and we collected varying data (e.g., reading time). Nonetheless, some key findings emerged that shed new light on the differences between reading printed and digital content:

  • Students overwhelmingly preferred to read digitally.
  • Reading was significantly faster online than in print.
  • Students judged their comprehension as better online than in print.
  • Paradoxically, overall comprehension was better for print versus digital reading.
  • The medium didn’t matter for general questions (like understanding the main idea of the text).
  • But when it came to specific questions, comprehension was significantly better when participants read printed texts.

Placing print in perspective

From these findings, there are some lessons that can be conveyed to policymakers, teachers, parents and students about print’s place in an increasingly digital world.

1. Consider the purpose

We all read for many reasons. Sometimes we’re looking for an answer to a very specific question. Other times, we want to browse a newspaper for today’s headlines.

As we’re about to pick up an article or text in a printed or digital format, we should keep in mind why we’re reading. There’s likely to be a difference in which medium works best for which purpose.

In other words, there’s no “one medium fits all” approach.

2. Analyze the task

One of the most consistent findings from our research is that, for some tasks, medium doesn’t seem to matter. If all students are being asked to do is to understand and remember the big idea or gist of what they’re reading, there’s no benefit in selecting one medium over another.

But when the reading assignment demands more engagement or deeper comprehension, students may be better off reading print. Teachers could make students aware that their ability to comprehend the assignment may be influenced by the medium they choose. This awareness could lessen the discrepancy we witnessed in students’ judgments of their performance vis-à-vis how they actually performed.

Elementary school children use electronic tablets on the first day of class in the new school year in Nice, September 3, 2013. REUTERS/Eric Gaillard

3. Slow it down

In our third experiment, we were able to create meaningful profiles of college students based on the way they read and comprehended from printed and digital texts.

Among those profiles, we found a select group of undergraduates who actually comprehended better when they moved from print to digital. What distinguished this atypical group was that they actually read slower when the text was on the computer than when it was in a book. In other words, they didn’t take the ease of engaging with the digital text for granted. Using this select group as a model, students could possibly be taught or directed to fight the tendency to glide through online texts.

4. Something that can’t be measured

There may be economic and environmental reasons to go paperless. But there’s clearly something important that would be lost with print’s demise.

In our academic lives, we have books and articles that we regularly return to. The dog-eared pages of these treasured readings contain lines of text etched with questions or reflections. It’s difficult to imagine a similar level of engagement with a digital text. There should probably always be a place for print in students’ academic lives – no matter how technologically savvy they become.

Of course, we realize that the march toward online reading will continue unabated. And we don’t want to downplay the many conveniences of online texts, which include breadth and speed of access.

Rather, our goal is simply to remind today’s digital natives – and those who shape their educational experiences – that there are significant costs and consequences to discounting the printed word’s value for learning and academic development.