Cambridge Analytica collected data on over 50 million Facebook users without their consent. How? Dr. Aleksandr Kogan, a psychology professor at Cambridge University, built a survey on Facebook that many users participated in. However, only 270,000 of those participants consented to their data being used, and according to the New York Times that consent was for “academic purposes” only. Cambridge Analytica told the Times that it did receive the data, but blamed Dr. Kogan for violating Facebook’s terms. Facebook claims that when it learned of this violation it took the app off its site and demanded that all the data Cambridge Analytica acquired be deleted. Facebook now says it believes not all of that data was destroyed. The problem described is an issue of data LEAVING Facebook without consumer consent, not data going into Facebook.
Yesterday, Facebook announced that it will sever all ties with third-party data partners to protect consumer privacy. Some privacy proponents believe this is a great step toward protecting consumers’ privacy. In my opinion, it is a red herring: it has nothing to do with the Cambridge Analytica scandal, nor has Facebook done anything substantial with this decision to protect your privacy. Let’s start with some standard definitions before I explain why this is nothing other than a distraction.
Third-party data is used by marketers to help them create marketing messages that are more relevant to a target segment. Segmentation is when a marketer defines a group of consumers (current customers or prospects) that share similar attributes. Marketers then send the same message to that group (a different message to every individual would not be practical). For example, a segment could look something like “people between 25 and 45, with young children in the household and an interest in skiing”.
Third-party data comes from thousands of sources. Demographic information, a description of the individual such as age, income, marital status, and children in the home, is public information that can be gathered from sources such as the census.

Another type of third-party data is behavioral data, which can include your interests. For example, if you have a subscription to a golf magazine or a hiking magazine, that magazine has likely sold its subscriber list and associated you with an interest in that category. Behavioral data is also collected by observing what content you click on, read or share socially. If you share an article on Aspen ski vacations, you are likely categorized as someone with an interest in skiing.

Purchase data is collected on individuals as well, and in many ways. For instance, whenever you register a purchase with a warranty (like consumer electronics), your personal information is associated with that purchase. Other sources of purchase data are credit card companies, banks and credit bureaus. Your purchases can be analyzed, and assumptions can then be inferred about other consumers with attributes similar to yours (this is called look-alike, or LAL, in the data industry). Your actual personal information tied to your specific transactions is NOT available on the market to buy (unless there has been a data breach at one of these organizations, in which case your information could be on the darknet).

All these data sources in combination can be used to infer assumptions about you and others to improve advertisement targeting.
Bringing the thousands of data sources together would be near impossible for every marketer to vet for privacy compliance, much less analyze and transform into the elements and models needed for audience segmentation. Therefore, data aggregators such as Acxiom, Experian, Equifax, TransUnion, Oracle, Cardlytics and many others step in. They aggregate multiple data sources so that they can be easily transformed into elements and models that enable segmentation. Some aggregators are much better than others at ethically sourcing their data. The best ones have a process where they vet data sources through their privacy departments and policies to ensure consumers gave consent for the data being included as a source in the elements and models created for marketing use. Furthermore, the best aggregators are also transparent with consumers, allowing them to see what data is held on them and providing the ability to opt out. Finally, the more credible aggregators will not source data in sensitive categories such as sexual orientation or health indications.
When marketers leverage social media or digital publishers to advertise, they can select elements and models supplied by aggregators to narrow the target audience. Marketers don’t want to advertise baby products to people who are most likely not parents, nor do they want to push golf products on someone who is likely never going to be a golfer. Furthermore, if marketers could not target audiences, advertising costs would become unrealistic for all digital media, and those increased costs would need to be absorbed somewhere. Possibilities include increased prices on goods or subscriber fees to use social platforms, which would be wildly unpopular with consumers.
Facebook’s announcement yesterday was that it will no longer partner with data aggregators that enable marketers to target on the platform. Regardless of how you feel about third-party data for targeting, this has nothing to do with the Cambridge Analytica controversy. The Cambridge Analytica issue was about Facebook’s user data going out of Facebook to be analyzed and used without user consent. Facebook’s recent action targets aggregators of data going into the platform. In my opinion, this is a red herring to distract consumers who do not understand what Facebook is really doing.
In my March 2018 article, “When Advertisements Become Too Personal,” I noted how Facebook leverages trackers that passively surveil consumers and collect that data. Facebook conducts this surveillance without your conscious awareness, even though you likely agreed to it in the terms of service and privacy policies on its platform and on advertiser websites. Consumers don’t have the time to read through long privacy policies and terms and conditions. Furthermore, if consumers don’t agree to a digital property’s terms, often they can’t do business on or use the platform. Advertisers can still target on Facebook without aggregator data being available on the platform. For example, a Facebook Custom Audience is a targeting option created from an advertiser-owned customer list, so the advertiser can target those users on Facebook. Facebook Pixel is a small piece of code for websites that allows the website owner AND Facebook to log any Facebook users. Facebook also tracks the kind of content you share, who you are friends with, what your friends share, what you like, what you talk about in Messenger, and what you share on Instagram and other Facebook-owned properties. Facebook can aggregate all that data itself to create targeting tools.
Facebook’s own passive surveillance of us across all its platforms, messaging applications, other websites and even the texts on our phones (if you haven’t locked down those permissions) is a much larger concern in my opinion. Instead, Facebook is distracting consumers with this announcement into thinking it is taking a huge step to protect consumer privacy, when in fact the data it has collected and continues to collect on consumers is much more unsettling.
With people spending more time on social media, many rightly wonder whether that time is good for us. Do people connect in meaningful ways online? Or are they simply consuming trivial updates and polarizing memes at the expense of time with loved ones?
These are critical questions for Silicon Valley — and for both of us. Moira is a social psychologist who has studied the impact of the internet on people’s lives for more than a decade, and I lead the research team for the Facebook app. As parents, each of us worries about our kids’ screen time and what “connection” will mean in 15 years. We also worry about spending too much time on our phones when we should be paying attention to our families. One of the ways we combat our inner struggles is with research — reviewing what others have found, conducting our own, and asking questions when we need to learn more.
A lot of smart people are looking at different aspects of this important issue. Psychologist Sherry Turkle asserts that mobile phones redefine modern relationships, making us “alone together.” In her generational analyses of teens, psychologist Jean Twenge notes an increase in teen depression corresponding with technology use. Both offer compelling research.
But it’s not the whole story. Sociologist Claude Fischer argues that claims that technology drives us apart are largely supported by anecdotes and ignore the benefits. Sociologist Keith Hampton’s study of public spaces suggests that people spend more time in public now — and that cell phones in public are more often used by people passing time on their own, rather than ignoring friends in person.
We want Facebook to be a place for meaningful interactions with your friends and family — enhancing your relationships offline, not detracting from them. After all, that’s what Facebook has always been about. This is important, as we know that a person’s health and happiness rely heavily on the strength of their relationships.
In this post, we want to give you some insights into how the research team at Facebook works with our product teams to incorporate well-being principles, and review some of the top scientific research on well-being and social media that informs our work. Of course, this isn’t just a Facebook issue — it’s an internet issue — so we collaborate with leading experts and publish in the top peer-reviewed journals. We work with scientists like Robert Kraut at Carnegie Mellon; Sonja Lyubomirsky at UC Riverside; Dacher Keltner, Emiliana Simon-Thomas, and Matt Killingsworth from the Greater Good Science Center at UC Berkeley, and have partnered closely with mental health clinicians and organizations like Save.org and the National Suicide Prevention Lifeline.
What Do Academics Say? Is Social Media Good or Bad for Well-Being?
According to the research, it really comes down to how you use the technology. For example, on social media, you can passively scroll through posts, much like watching TV, or actively interact with friends — messaging and commenting on each other’s posts. Just like in person, interacting with people you care about can be beneficial, while simply watching others from the sidelines may make you feel worse.
The bad: In general, when people spend a lot of time passively consuming information — reading but not interacting with people — they report feeling worse afterward. In one experiment, University of Michigan students randomly assigned to read Facebook for 10 minutes were in a worse mood at the end of the day than students assigned to post or talk to friends on Facebook. A study from UC San Diego and Yale found that people who clicked on about four times as many links as the average person, or who liked twice as many posts, reported worse mental health than average in a survey. Though the causes aren’t clear, researchers hypothesize that reading about others online might lead to negative social comparison — and perhaps even more so than offline, since people’s posts are often more curated and flattering. Another theory is that the internet takes people away from social engagement in person.
The good: On the other hand, actively interacting with people — especially sharing messages, posts and comments with close friends and reminiscing about past interactions — is linked to improvements in well-being. This ability to connect with relatives, classmates, and colleagues is what drew many of us to Facebook in the first place, and it’s no surprise that staying in touch with these friends and loved ones brings us joy and strengthens our sense of community.
In an experiment at Cornell, stressed college students randomly assigned to scroll through their own Facebook profiles for five minutes experienced boosts in self-affirmation compared to students who looked at a stranger’s Facebook profile. The researchers believe self-affirmation comes from reminiscing on past meaningful interactions — seeing photos they had been tagged in and comments their friends had left — as well as reflecting on one’s own past posts, where a person chooses how to present themselves to the world.
In a follow-up study, the Cornell researchers put other students under stress by giving them negative feedback on a test and then gave them a choice of websites to visit afterward, including Facebook, YouTube, online music and online video games. They found that stressed students were twice as likely to choose Facebook to make themselves feel better as compared with students who hadn’t been put under stress.
In sum, our research and the broader academic literature suggest that how you use social media is what matters for your well-being.
So what are we doing about it?
We’re working to make Facebook more about social interaction and less about spending time. As our CEO Mark Zuckerberg recently said, “We want the time people spend on Facebook to encourage meaningful social interactions.” Facebook has always been about bringing people together — from the early days when we started reminding people about their friends’ birthdays, to showing people their memories with friends using the feature we call “On This Day.” We’re also a place for people to come together in times of need, from fundraisers for disaster relief to groups where people can find an organ donor. We’re always working to expand these communities and find new ways to have a positive impact on people’s lives.
We employ social psychologists, social scientists and sociologists, and we collaborate with top scholars to better understand well-being and work to make Facebook a place that contributes in a positive way. Here are a few things we’ve worked on recently to help support people’s well-being.
News Feed quality: We’ve made several changes to News Feed to provide more opportunities for meaningful interactions and reduce passive consumption of low-quality content — even if it decreases some of our engagement metrics in the short term. We demote things like clickbait headlines and false news, even though people often click on those links at a high rate. We optimize ranking so posts from the friends you care about most are more likely to appear at the top of your feed because that’s what people tell us in surveys that they want to see. Similarly, our ranking promotes posts that are personally informative. We also recently redesigned the comments feature to foster better conversations.
Snooze: People often tell us they want more say over what they see in News Feed. Today, we launched Snooze, which gives people the option to hide a person, Page or group for 30 days, without having to permanently unfollow or unfriend them. This will give people more control over their feed and hopefully make their experience more positive.
Take a Break: Millions of people break up on Facebook each week, changing their relationship status from “in a relationship” to “single.” Research on people’s experiences after breakups suggests that offline and online contact, including seeing an ex-partner’s activities, can make emotional recovery more difficult. To help make this experience easier, we built a tool called Take a Break, which gives people more centralized control over when they see their ex on Facebook, what their ex can see, and who can see their past posts.
What About Related Areas Like Digital Distraction and the Impact of Technology on Kids?
We know that people are concerned about how technology affects our attention spans and relationships, as well as how it affects children in the long run. We agree these are critically important questions, and we all have a lot more to learn.
That’s why we recently pledged $1 million toward research to better understand the relationship between media technologies, youth development and well-being. We’re teaming up with experts in the field to look at the impact of mobile technology and social media on kids and teens, as well as how to better support them as they transition through different stages of life.
We’re also making investments to better understand digital distraction and the factors that can pull people away from important face-to-face interactions. Is multitasking hurting our personal relationships? How about our ability to focus? Next year we’ll host a summit with academics and other industry leaders to tackle these issues together.
We don’t have all the answers, but given the prominent role social media now plays in many people’s lives, we want to help elevate the conversation. In the years ahead we’ll be doing more to dig into these questions, share our findings and improve our products. At the end of the day, we’re committed to bringing people together and supporting well-being through meaningful interactions on Facebook.
My writing lately has revolved around media, technology, the use of data and the consequential psychological impacts. However, in a conversation with my friend Michael Becker of Identity Praxis, he urged me to write about Personally Identifiable Information (PII) security fundamentals. According to Michael, personal data privacy is “the new luxury good,” and we have all heard about the malicious hackers who find creative ways to steal it. The consequences of identity and personal information mismanagement, for individuals and companies alike, can include reputation damage, debt, criminal records, loss of income, damaged employment prospects, and yes, even death. For those of us “non-techies,” when thinking about security on our devices we often default to, “I have antivirus software on my computer, so I am good.” Well congratulations, I’m sure that hacker from who knows where has never gotten past antivirus software. Those questionable pictures of you at your bachelorette party are completely safe and your privacy is protected. NOT (Wayne’s World reference). For your reading pleasure, below are actions, recommended by Michael and explained by me, that you can take to protect your devices from being compromised and unleashing holy hell on you personally.
Begin by using common sense before sharing your PII. This doesn’t involve buying expensive software; it requires taking an extra two seconds to think before acting. Consider the trustworthiness of a website, mobile site or application before sharing your personal data with it. If something seems suspicious, don’t share. Furthermore, don’t complete a transaction online or in a phone app if it doesn’t feel secure; either call the company or go to a different site where you can order the same product. With email, if you don’t know the sender and they ask you to click on a link, it could be a phishing attack designed to grab data from your computer. Don’t just look at the sender’s display name or the link text; look at the actual email address and the URL behind the link, as the displayed name can be used to mask a malicious address. Sorry, that email you received from a stranger asking for your SSN and credit card information to redeem your grand prize is likely about as real as the Easter Bunny.
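For the technically curious, the “check the URL behind the link” advice can be sketched in a few lines of Python. This is a minimal illustration, not a real security tool; the function names and the domains in the example are made up for demonstration, and a genuine phishing filter would do far more than compare domain strings.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkExtractor(HTMLParser):
    """Collect (visible link text, actual href) pairs from an HTML email body."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._current_href = None
        self._current_text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href")
            self._current_text = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._current_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            self.links.append(("".join(self._current_text).strip(), self._current_href))
            self._current_href = None


def suspicious_links(html: str) -> list:
    """Flag links whose visible text names a different domain than the real target."""
    parser = LinkExtractor()
    parser.feed(html)
    flagged = []
    for text, href in parser.links:
        real_domain = urlparse(href).hostname or ""
        # If the visible text looks like a domain but doesn't match the
        # actual destination, treat the link as suspicious.
        if "." in text and text.lower().replace("www.", "") not in real_domain.lower():
            flagged.append((text, href))
    return flagged


# A link whose text says "www.mybank.com" but actually points elsewhere:
email_body = '<a href="http://evil.example.net/login">www.mybank.com</a>'
print(suspicious_links(email_body))
```

The point is simply that the display text and the real destination are two separate things, which is exactly why hovering over a link (or long-pressing on mobile) to reveal the true URL is worth those extra two seconds.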
According to an article from the Telegraph last year, more than 50 percent of people use at least one of the top 25 passwords, and almost 17 percent use the password “123456” (wasn’t this password used in “Spaceballs”?). When creating passwords, the best practice is to include capitals and special characters, and to use a different username and password for each account. The reality is that with all the accounts we have now, it is tough to keep track of it all, so we pick a favorite username and password for everything. Therefore, if a hacker figures out the credentials to one account, they will likely work on several others. Password managers such as LastPass or 1Password are good programs that can make your life easier. A password manager is an application that stores all your different usernames and passwords and opens with one master password. Password managers can often autofill login credentials on websites; this is obviously faster and more accurate, and it also protects you from keylogging attacks. They can also detect whether you are on the right URL, which helps protect you from phishing sites. Some include unique random password generators, so you don’t have to think of a new password for every account. DO NOT use the autofill features built into your browser; these are not secure! Finally, enable two-factor authentication (via SMS or an authenticator application) on accounts that support it, e.g. banking and retailer sites. I know it is a pain in the ass, but so is having your bank account drained or your social media account hacked.
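To show how little magic is involved in those random password generators, here is a minimal sketch in Python using the standard library’s `secrets` module (which is designed for security-sensitive randomness, unlike `random`). The character set and length are illustrative choices, not a standard:

```python
import secrets
import string


def generate_password(length: int = 16) -> str:
    """Generate a random password mixing letters, digits, and special characters."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until the password contains at least one uppercase letter,
        # one digit, and one special character.
        if (any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in "!@#$%^&*" for c in password)):
            return password


print(generate_password())
```

Each run produces a different, hard-to-guess string. The practical takeaway is the same as above: let a password manager generate and remember these for you, because nobody is going to memorize sixteen random characters per account.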
With the Equifax breach last year, most of us have at least heard about the risks from news coverage. However, most people think there are only three major credit bureaus (go ahead, name them in your head…). BUT NO, Michael reminded me there are in fact FOUR. Make sure to visit all four major credit bureaus to freeze your credit (TransUnion, Equifax, Experian, and Innovis). Freezing your credit stops any credit inquiries on you, which stops anyone from opening a credit account without your knowledge. When freezing your credit, you will receive a PIN code from each bureau to “unfreeze” it should you need a company to run your credit, perhaps to get a loan. Keep those PIN codes in a protected place (how about that password manager above?). While I know some people are concerned about the inconvenience of needing to unfreeze credit when applying for legitimate credit, that friction can even act as a loan deterrent. True story: my husband and I were considering a larger purchase that required a credit application and then never made it because of the time it would take to unfreeze our credit, but I digress. Put it on your calendar to check your credit report annually. You can go directly to the credit bureaus and get the reports for free, or use companies like Credit Karma, Credit Sesame, or Quizzle (each offers different services). You might also want to consider cyber/identity insurance and darknet monitoring services. The darknet is a hidden layer built on top of the Internet, designed specifically for anonymity, whose biggest use is peer-to-peer file sharing. You can only access the darknet with special tools and software, so most of us can’t see what data is on there about us. Besides monetary compensation and support in the case of identity theft, this type of service will alert you to the kinds of things you wouldn’t otherwise know about, such as an unauthorized USPS address change.
There are a number of companies like LifeLock, Identity Guard and Experian that offer this service and I recommend you check out this PC Magazine article on the subject.
Yes, I know my introduction started with a rant about how antivirus software will not protect you from everything, but YES, YOU STILL NEED IT. PC Magazine recently tested the best antivirus software, and the reviews can be seen here. However, antivirus software should not be your last line of defense. For example, antivirus software doesn’t always protect against malware, and what if you lose your laptop? Encryption solutions prevent access to your files (remember those pictures?). On a Mac you can use the FileVault feature, and for Windows, PC Magazine recently wrote a review of the best encryption software for 2018.
Running your computers on the latest operating system and paying attention to those annoying notifications for OS updates can stave off major attacks (my husband, a former systems administrator, is rolling his eyes right now because I used to ignore them). According to a Popular Science article, an update protecting users from the WannaCry malware attack had been released two months before the event. The same article calls out the importance of selecting a good email provider and mentions Google and Microsoft as smart choices, since they filter many suspicious emails (but not all) before they get to your inbox.
Make sure to password-protect your home Wi-Fi router (yes, I know people who don’t) and use a VPN when connecting to a public Wi-Fi network, such as at an airport, hotel or your nearest Starbucks. You can also consider installing a cybersecurity hub on your home router, such as Bitdefender Box, Fing or Cujo. These tools monitor and block suspicious Internet traffic coming from any of your connected devices (they often come with a virus protection software package). I also like that those mentioned come with parental controls, allowing you to block offensive websites, limit social media and control Internet access by device. What I really liked about Bitdefender is that it has features to detect cyberbullying and online predators.
Identity theft is big business, affecting more than 15 million consumers with fraud losses of $16 billion in 2016, according to an identity fraud study released by Javelin Strategy & Research in 2017. Digitally connected consumers, defined as those who “have extensive social network activity, frequently shop online or with mobile devices, and are quick to adopt new digital technologies,” are at a 30 percent higher risk of identity fraud than the average person. The costs associated with the above suggestions range from free to a few hundred dollars, which could likely be offset by avoiding a couple of unnecessary purchases. Will it take some time? A few hours per year, maybe, but the return on effort outpaces the same number of hours you already spend checking your social media or reading the latest salacious news story about identity theft or privacy invasion that stresses you out.
You’ve seen it happen dozens if not hundreds of times. You post an opinion, or a complaint, or a link to an article on Facebook. Somebody adds a comment, disagreeing (or agreeing) with whatever you posted. Someone else posts another comment disagreeing with the first commenter, or with you, or both. Then others jump in to add their own viewpoints. Tempers flare. Harsh words are used. Soon enough, you and several of your friends are engaged in a virtual shouting match, aiming insults in all directions, sometimes at people you’ve never even met.
There’s a simple reason this happens, it turns out: We respond very differently to what people write than to what they say–even if those things are exactly the same. That’s the result of a fascinating new experiment by UC Berkeley and University of Chicago researchers. In the study, 300 subjects either read, watched video of, or listened to arguments about such hot-button topics as war, abortion, and country or rap music. Afterward, subjects were interviewed about their reactions to the opinions with which they disagreed.
Their general response was probably very familiar to anyone who’s ever discussed politics: a broad belief that people who don’t agree with you are either too stupid or too uncaring to know better. But there was a distinct difference between those who had watched or listened to someone speak the words out loud and those who had read the identical words as text. Those who heard or watched someone say the words were less likely to dismiss the speaker as uninformed or heartless than those who had only read them.
That result was no surprise to at least one of the researchers, who was inspired to try the experiment after a similar experience of his own. “One of us read a speech excerpt that was printed in a newspaper from a politician with whom he strongly disagreed,” researcher Juliana Schroeder told the Washington Post. “The next week, he heard the exact same speech clip playing on a radio station. He was shocked by how different his reaction was toward the politician when he read the excerpt compared to when he heard it.” Whereas the written comments seemed outrageous to this researcher, the same words spoken out loud seemed reasonable.
We’re using the wrong medium.
This research suggests that the best way for people who disagree with each other to work out their differences and arrive at a better understanding or compromise is by talking to each other, as people used to do at town hall meetings and over the dinner table. But now that so many of our interactions take place over social media, chat, text message, or email, spoken conversation or discussion is increasingly uncommon. It’s probably no coincidence that political disagreement and general acrimony have never been greater. Russians used this speech-vs.-text disharmony to full advantage by creating Facebook and Twitter accounts to stir up even more ill will among Americans than we already had on our own. No wonder they were so successful at it.
So what should you do about it? To begin with, if you want to make a persuasive case for your political opinion or proposed action, you’re better off doing it by making a short video (or linking to one by someone else) rather than writing out whatever you have to say. At the same time, whenever you’re reading something someone else wrote that seems outlandish to you, keep in mind that the fact that you’re seeing this as text may be part of the problem. If it’s important for you to be objective, try reading it out loud or having someone else read it to you.
Finally, if you’re already in the middle of an argument over Facebook (or Twitter, or Instagram or email or text), and the person on the other side of the issue is someone you care about, please don’t just keep typing out comments and replies and replies to replies. Instead, make a coffee date so you can speak in person. Or at the very least, pick up the phone.
Giving your child a smartphone is like “giving them a gram of cocaine”, a top addiction therapist has warned.
Time spent messaging friends on Snapchat and Instagram can be just as dangerously addictive for teenagers as drugs and alcohol, and should be treated as such, school leaders and teachers were told at an education conference in London.
Speaking alongside experts in technology addiction and adolescent development, Harley Street rehab clinic specialist Mandy Saligari said screen time was too often overlooked as a potential vehicle for addiction in young people.
“I always say to people, when you’re giving your kid a tablet or a phone, you’re really giving them a bottle of wine or a gram of coke,” she said.
“Are you really going to leave them to knock the whole thing out on their own behind closed doors?
“Why do we pay so much less attention to those things than we do to drugs and alcohol when they work on the same brain impulses?”
“When people tend to look at addiction, their eyes tend to be on the substance or thing – but really it’s a pattern of behaviour that can manifest itself in a number of different ways,” Ms Saligari said, naming food obsessions, self-harm and sexting as examples.
Concern has grown recently over the number of young people seen to be sending or receiving pornographic images, or accessing age inappropriate content online through their devices.
Ms Saligari, who heads the Harley Street Charter clinic in London, said around two thirds of her patients were 16- to 20-year-olds seeking treatment for addiction – a “dramatic increase” on ten years ago – but many of her patients were even younger.
“So many of my clients are 13- and 14-year-old girls who are involved in sexting, and describe sexting as ‘completely normal’,” said Ms Saligari.
Many young girls in particular believe that sending a picture of themselves naked to someone on their mobile phone is “normal”, and that it only becomes “wrong” when a parent or adult finds out, she added.
“If children are taught self-respect they are less likely to exploit themselves in that way,” said Ms Saligari. “It’s an issue of self-respect and it’s an issue of identity.”
Speaking alongside Ms Saligari at the Highgate Junior School conference on teenage development, Dr Richard Graham, consultant psychiatrist and Technology Addiction Lead at the Nightingale Hospital, said the issue was a growing area of interest for researchers, as parents report struggling to find the correct balance for their children.
Ofcom figures suggest more than four in ten parents of 12-15 year-olds find it hard to control their children’s screen time.
Even three- and four-year-olds consume an average of six and a half hours of internet time per week, according to the broadcasting regulator.
Greater emphasis was needed on sleep and digital curfews at home, the experts suggested, as well as a systematic approach within schools, for example by introducing a smartphone amnesty at the beginning of the school day.
“With sixth formers and teenagers, you’re going to get resistance, because to them it’s like a third hand,” said Ms Saligari, “but I don’t think it’s impossible to intervene. Schools asking pupils to spend some time away from their phone I think is great.
“If you catch [addiction] early enough, you can teach children how to self-regulate, so we’re not policing them and telling them exactly what to do,” she added.
“What we’re saying is, here’s the quiet carriage time, here’s the free time – now you must learn to self-regulate. It’s possible to enjoy periods of both.”
It’s the Monday morning following the opening weekend of the movie Blade Runner 2049, and Eric C. Leuthardt is standing in the center of a floodlit operating room clad in scrubs and a mask, hunched over an unconscious patient.
“I thought he was human, but I wasn’t sure,” Leuthardt says to the surgical resident standing next to him, as he draws a line on the area of the patient’s shaved scalp where he intends to make his initial incisions for brain surgery. “Did you think he was a replicant?”
“I definitely thought he was a replicant,” the resident responds, using the movie’s term for the eerily realistic-looking bioengineered androids.
“What I think is so interesting is that the future is always flying cars,” Leuthardt says, handing the resident his Sharpie and picking up a scalpel. “They captured the dystopian component: they talk about biology, the replicants. But they missed big chunks of the future. Where were the neural prosthetics?”
It’s a topic that Leuthardt, a 44-year-old scientist and brain surgeon, has spent a lot of time imagining. In addition to his duties as a neurosurgeon at Washington University in St. Louis, he has published two novels and written an award-winning play aimed at “preparing society for the changes ahead.” In his first novel, a techno-thriller called RedDevil 4, 90 percent of human beings have elected to get computer hardware implanted directly into their brains. This allows a seamless connection between people and computers, and a wide array of sensory experiences without leaving home. Leuthardt believes that in the next several decades such implants will be like plastic surgery or tattoos, undertaken with hardly a second thought.
“I cut people open for a job,” he notes. “So it’s not hard to imagine.”
But Leuthardt has done far more than just imagine this future. He specializes in operating on patients with intractable epilepsy, all of whom must spend several days before their main surgery with electrodes implanted on their cortex as computers aggregate information about the neural firing patterns that precede their seizures. During this period, they are confined to a hospital bed and are often extremely bored. About 15 years ago, Leuthardt had an epiphany: why not recruit them to serve as experimental subjects? It would both ease their tedium and help bring his dreams closer to reality.
Leuthardt began designing tasks for them to do. Then he analyzed their brain signals to see what he might learn about how the brain encodes our thoughts and intentions, and how such signals might be used to control external devices. Was the data he had access to sufficiently robust to describe intended movement? Could he listen in on a person’s internal verbal monologues? Is it possible to decode cognition itself?
Though the answers to some of these questions were far from conclusive, they were encouraging. Encouraging enough to instill in Leuthardt the certitude of a true believer—one who might sound like a crackpot, were he not a brain surgeon who deals in the life-and-death realm of the operating room, where there is no room for hubris or delusion. Leuthardt knows better than most that brain surgery is dangerous, scary, and difficult for the patient. But his understanding of the brain has also given him a clear-eyed view of its inherent limitations—and the potential of technology to help overcome them. Once the rest of the world understands the promise, he insists—and once the technologies progress—the human race will do what it has always done. It will evolve. This time with the help of chips implanted in our heads.
“A true fluid neural integration is going to happen,” Leuthardt says. “It’s just a matter of when. If it’s 10 or 100 years in the grand scheme of things, it’s a material development in the course of human history.”
Leuthardt is by no means the only one with exotic ambitions for what are known as brain-computer interfaces. Last March Elon Musk, a founder of Tesla and SpaceX, launched Neuralink, a venture aiming to create devices that facilitate mind-machine melds. Facebook’s Mark Zuckerberg has expressed similar dreams, and last spring his company revealed that it has 60 engineers working on building interfaces that would let you type using just your mind. Bryan Johnson, the founder of the online payment system Braintree, is using his fortune to fund Kernel, a company that aims to develop neuroprosthetics he hopes will eventually boost intelligence, memory, and more.
These plans, however, are all in their early phases and have been shrouded in secrecy, making it hard to assess how much progress has been made—or whether the goals are even remotely realistic. The challenges of brain-computer interfaces are myriad. The kinds of devices that people like Musk and Zuckerberg are talking about won’t just require better hardware to facilitate seamless mechanical connection and communication between silicon computers and the messy gray matter of the human brain. They’ll also have to have sufficient computational power to make sense out of the mass of data produced at any given moment as many of the brain’s nearly 100 billion neurons fire. One other thing: we still don’t know the code the brain uses. We will have to, in other words, learn how to read people’s minds.
But Leuthardt, for one, expects he will live to see it. “At the pace at which technology changes, it’s not inconceivable to think that in a 20-year time frame everything in a cell phone could be put into a grain of rice,” he says. “That could be put into your head in a minimally invasive way, and would be able to perform the computations necessary to be a really effective brain-computer interface.”
Decoding the brain
Scientists have long known that the firing of our neurons is what allows us to move, feel, and think. But breaking the code by which neurons talk to each other and the rest of the body—developing the capacity to actually listen in and make sense of precisely how it is that brain cells allow us to function—has long stood as one of neuroscience’s most daunting tasks.
In the early 1980s, an engineer named Apostolos Georgopoulos, at Johns Hopkins, paved the way for the current revolution in brain-computer interfaces. Georgopoulos identified neurons in the higher-level processing areas of the motor cortex that fired prior to specific kinds of movement—such as a flick of the wrist to the right, or a downward thrust with the arm. What made Georgopoulos’s discovery so important was that you could record these signals and use them to predict the direction and intensity of the movements. Some of these neuronal firing patterns guided the behavior of scores of lower-level neurons working together to move the individual muscles and, ultimately, a limb.
Using arrays of dozens of electrodes to track these high-level signals, Georgopoulos demonstrated that he could predict not just which way a monkey would move a joystick in three-dimensional space, but even the velocity of the movement and how it would change over time.
It was, it seemed clear, precisely the kind of data one might use to give a paralyzed patient mind control over a prosthetic device. Which is the task that one of Georgopoulos’s protégés, Andrew Schwartz, took on in the 1990s. By the late 1990s Schwartz, who is currently a neurobiologist at the University of Pittsburgh, had implanted electrodes in the brains of monkeys and begun to demonstrate that it was indeed possible to train them to control robotic limbs just by thinking.
Leuthardt, in St. Louis to do a neurosurgery residency at Washington University in 1999, was inspired by such work: when he needed to decide how to spend a mandated year-long research break, he knew exactly what he wanted to focus on. Schwartz’s initial success had convinced Leuthardt that science fiction was on the verge of becoming reality. Scientists were finally taking the first tentative steps toward the melding of man and machine. Leuthardt wanted to be part of the coming revolution.
He thought he might devote his year to studying the problem of scarring in mice: over time, the single electrodes that Schwartz and others implanted as part of this work caused inflammatory reactions, or ended up sheathed in brain cells and immobilized. But when Leuthardt and his advisor sat down to map out a plan, the two came up with a better idea. Why not see if they might be able to use a different brain recording technique altogether?
“We were like, ‘Hey, we’ve got humans with electrodes in them all the time!’” Leuthardt says. “Why don’t we just do some experiments with them?”
Georgopoulos and Schwartz had collected their data using a technique that relies on microelectrodes next to the cell membranes of individual neurons to detect voltage changes. The electrodes Leuthardt used, which are implanted before surgery in epilepsy patients, were far larger and were placed on the surface of the cortex, under the scalp, on strips of plastic, where they recorded the signals emanating from hundreds of thousands of neurons at the same time. To install them, Leuthardt performed an initial operation in which he removed the top of the skull, cut through the dura (the brain’s outermost membrane), and placed the electrodes directly on top of the brain. Then he connected them to wires that snaked out of the patient’s head in a bundle and plugged into machinery that could analyze the brain signals.
Such electrodes had been used successfully for decades to identify the exact origin in the brain of an epilepsy patient’s intractable seizures. After the initial surgery, the patient stops taking anti-seizure medication, which will eventually prompt an epileptic episode—and the data about its physical source helps doctors like Leuthardt decide which section of the brain to resect in order to forestall future episodes.
But many were skeptical that the electrodes would yield enough information to control a prosthetic. To help find out, Leuthardt recruited Gerwin Schalk, a computer scientist at the Wadsworth Center, a public-health laboratory of the New York State Department of Health. Progress was swift. Within a few years of testing, Leuthardt’s patients had shown the capacity to play Space Invaders—moving a virtual spaceship left and right—simply by thinking. Then they moved a cursor in three-dimensional space on a screen.
In 2006, after a speech on this work at a conference, Schalk was approached by Elmar Schmeisser, a program manager at the U.S. Army Research Office. Schmeisser had in mind something far more complex. He wanted to find out if it was possible to decode “imagined speech”—words not vocalized, but simply spoken silently in one’s mind. Schmeisser, also a science fiction fan, had long dreamed of creating a “thought helmet” that could detect a soldier’s imagined speech and transmit it wirelessly to a fellow soldier’s earpiece.
Leuthardt recruited 12 bedridden epilepsy patients, confined to their rooms and bored as they waited to have seizures, and presented each one with 36 words that had a relatively simple consonant-vowel-consonant structure, such as “bet,” “bat,” “beat,” and “boot.” He asked the patients to say the words out loud and then to simply imagine saying them—conveying the instructions visually (written on a computer screen), with no audio, and again vocally, with no video, to make sure that he could identify incoming sensory signals in the brain. Then he shipped the data to Schalk for analysis.
Schalk’s software relies on pattern recognition algorithms—his programs can be trained to recognize the activation patterns of groups of neurons associated with a given task or thought. With anywhere from 50 to 200 electrodes, each producing 1,000 readings per second, the programs must churn through a dizzying number of variables. The more electrodes and the smaller the population of neurons per electrode, the better the chance of detecting meaningful patterns—if sufficient computing power can be brought to bear to sort out irrelevant noise.
“The more resolution the better, but at the minimum it’s about 50,000 numbers a second,” Schalk says. “You have to extract the one thing you are really interested in. That’s not so straightforward.”
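Schalk’s “50,000 numbers a second” figure follows directly from the electrode counts and sampling rate described above. A minimal back-of-the-envelope sketch (the 16-bit sample size is an assumption for illustration, not a figure from the article):

```python
# Back-of-the-envelope data rate for the electrode grids described above.
# Electrode counts and the 1,000-readings-per-second rate come from the
# article; the 16-bit sample size is an assumed, typical ADC resolution.

SAMPLES_PER_SECOND = 1000  # each electrode produces 1,000 readings per second
BYTES_PER_SAMPLE = 2       # assumed 16-bit samples

def data_rate(num_electrodes: int) -> tuple[int, float]:
    """Return (readings per second, kilobytes per second) for a grid."""
    readings = num_electrodes * SAMPLES_PER_SECOND
    kilobytes = readings * BYTES_PER_SAMPLE / 1024
    return readings, kilobytes

for n in (50, 200):
    readings, kb = data_rate(n)
    print(f"{n} electrodes -> {readings:,} readings/s (~{kb:.0f} KB/s)")
```

At the 50-electrode minimum this works out to 50,000 readings per second, matching Schalk’s figure; a 200-electrode grid quadruples the stream before any pattern recognition even begins.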
Schalk’s results, however, were surprisingly robust. As one might expect, when Leuthardt’s subjects vocalized a word, the data indicated activity in the areas of the motor cortex associated with the muscles that produce speech. The auditory cortex, and an area in its vicinity long believed to be associated with speech processing, were also active at the exact same moments. Remarkably, there were similar yet slightly different activation patterns even when the subjects only imagined the words silently.
Schalk, Leuthardt, and others involved in the project believe they have found the little voice that we hear in our mind when we imagine speaking. The system is far from perfect: after years of effort and refinements to his algorithms, Schalk’s program guesses correctly 45 percent of the time. But rather than attempt to push those numbers higher (they expect performance to improve with better sensors), Schalk and Leuthardt have focused on decoding increasingly complex components of speech.
In recent years, Schalk has continued to extend the findings on real and imagined speech (he can tell whether a subject is imagining speaking Martin Luther King Jr.’s “I Have a Dream” speech or Lincoln’s Gettysburg Address). Leuthardt, meanwhile, has attempted to push on into the next realm: identifying the way the brain encodes intellectual concepts across different regions.
The data on that effort is not published yet, “but the honest truth is we’re still trying to make sense of it,” Leuthardt says. His lab, he acknowledges, may be approaching the limits of what’s possible using current technologies.
Implanting the future
“The moment we got early evidence that we could decode intentions,” Leuthardt says, “I knew it was on.”
Soon after obtaining those results, Leuthardt took seven days off to write, visualize the future, and think about both short- and long-term goals. At the top of the list of things to do, he decided, was preparing humanity for what’s coming, a job that is still very much in progress.
With sufficient funding, Leuthardt insists, reclining in a chair in his office after performing surgery, he could already create a prosthetic implant for a general market that would allow someone to use a computer and control a cursor in three-dimensional space. Users could also do things like turn lights on and off, or turn heat up and down, using their thoughts alone. They might even be able to experience artificially induced tactile sensations and access some rudimentary means of turning imagined speech into text. “With current technology, I could make an implant—but how many people are going to want that now?” he says. “I think it’s very important to take practical, short interval steps to get people moved along the pathway toward this road of the long-term vision.”
To that end, Leuthardt founded NeuroLutions, a company aimed at demonstrating that there is a market, even today, for rudimentary devices that link mind and machine—and at beginning to use the technology to help people. NeuroLutions has raised several million dollars so far, and a noninvasive brain interface for stroke victims who have lost function on one side is currently in human trials.
The device consists of brain-monitoring electrodes that sit on the scalp and are attached to an arm orthosis; it can detect a neural signature for intended movement before the signal reaches the motor area of the brain. The neural signals are on the opposite side of the brain from the area usually destroyed by the stroke—and thus are usually spared any damage. By detecting them, amplifying them, and using them to control a device that moves the paralyzed limb, Leuthardt has found, he can actually help a patient regain independent control over the limb, far faster and more effectively than is possible with any approach currently on the market. Importantly, the device can be used without brain surgery.
Though the technology is decidedly modest compared with Leuthardt’s grand designs for the future, he believes this is an area where he can meaningfully transform people’s lives right now. There are about 700,000 new stroke patients in the U.S. each year, and the most common motor impairment is a paralyzed hand. Finding a way to help more of them regain function—and demonstrating that he can do it faster and more effectively—would not only demonstrate the power of brain-computer interfaces but meet a huge medical need.
Using noninvasive electrodes that sit on the outside of the scalp makes the invention much less off-putting for patients, but it also imposes severe limitations. The voltage signals coming from brain cells may be muffled as they travel through the scalp to reach the sensors, and they may be diffused as they pass through bone. Both effects make the signals harder to detect and their origins harder to interpret.
Leuthardt can achieve far more transformative feats using his implanted electrodes that sit directly on the cortex of the brain. But he has learned through painful experience that elective brain surgery is a tough sell—not just with patients, but with investors as well.
When he and Schalk founded NeuroLutions, in 2008, they hoped to restore movement to the paralyzed by bringing just such an interface to market. But the investment community wasn’t interested. For one thing, neuroscientist-led startups have been testing brain-computer interfaces for more than a decade but have had little success in turning the technology into a viable treatment for paralyzed patients (see “Implanting Hope”). The population of potential patients is limited—at least compared with some of the other conditions being targeted by medical-device startups competing for venture capital. (Roughly 40,000 people in the U.S. have complete quadriplegia.) And most of the tasks that could be accomplished using such an interface can already be handled with noninvasive devices. Even most locked-in patients can still blink an eye or perhaps wiggle a finger. Methods that rely on this residual movement can be used to input data or move a wheelchair without the danger, recovery time, or psychological wherewithal involved in implanting electrodes directly on one’s cortex.
So after their initial fund-raising efforts failed, Leuthardt and Schalk set their sights on a more modest goal. Unexpectedly, they found that many patients continued to recover additional function even after the orthosis was removed—extending to, for instance, fine motor control of their fingers. Often, it turned out, all the patients needed was a little push. Then, once new neural pathways were established, the brain continued to remodel and expand them so that they could convey more complex motor commands to the hand.
The initial success in these patients, Leuthardt hopes, will encourage some to move on to a more robust invasive system. “A couple years down the road you might say, ‘You know what? For that noninvasive version, you can get this much benefit, but I think that now, given the science that we know and everything, we can give you this much more benefit,’” he says. “We can enhance your function even more.”
Leuthardt is so eager for the world to share his passion for the technology’s potentially transformative effects that he has also sought to engage the public through art. In addition to writing his novels and play, he is working on a podcast and YouTube series with a fellow neurosurgeon, in which the two discuss technology and philosophy over coffee and doughnuts.
In Leuthardt’s first book, RedDevil 4, one character uses his “cortical prosthetic” to experience hiking the Himalayas while sitting on his couch. Another, a police detective, confers telepathically with a colleague about how to question a murder suspect standing right in front of them. Every character has instant access to all the knowledge in the world’s libraries—can access it as quickly as a person can think any spontaneous thought. No one ever has to be alone, and our bodies no longer limit us. On the flip side, everyone’s brain is vulnerable to computer viruses that can turn people into psychopaths.
Leuthardt acknowledges that at present, we still lack the power to record and stimulate the number of neurons it would take to replicate these visions. But he claims his conversations with some Silicon Valley investors have only fueled his optimism that we’re on the brink of an innovation explosion.
Schalk is a little less sanguine. He’s skeptical that Facebook, Musk, and others are adding much of their own to the quest for a better interface.
“They are not going to do anything different than the scientific community by itself,” Schalk says. “Maybe something is going to come of it, but it’s not like they have this new thing that nobody else has.”
Schalk says it’s “very, very obvious” that in the next five to 10 years some form of brain-computer interface will be used to rehabilitate victims of strokes, spinal cord injuries, chronic pain, and other disorders. But he compares the current recording techniques to the IBM computers of the 1960s, saying that they are now “archaic.” For the technology to reach its true long-term potential, he believes, a new sort of brain-scanning technology will be needed—something that can read far more neurons at once.
“What you really want is to be able to listen to the brain and talk to the brain in a way that the brain cannot distinguish from the way it communicates internally, and we can’t do that right now,” Schalk says. “We really don’t know how to do it at this point. But it’s also obvious to me that it is going to happen. And if and when that happens, our lives are going to change, and our lives are going to change in a way that is completely unprecedented.”
Where and when the breakthroughs will come from is unclear. After decades of research and progress, many of the same technological challenges remain daunting. Still, the progress in neuroscience and computer hardware and software makes the outcome—at least to true believers—inevitable.
At the very least, says Leuthardt, the buzz emanating from Silicon Valley has generated “real excitement and real thinking about brain-computer interfaces being a practical reality.” That, he says, is “something we haven’t seen before.” And though he acknowledges that if this turns out to be hype it could “set the field back a decade or two,” nothing, he believes, will stop us from reaching the ultimate goal: a technology that will allow us to transcend the cognitive and physical limitations previous generations of humankind have taken for granted.
“It’s going to happen,” he insists. “This has the potential to alter the evolutionary direction of the human race.”
Adam Piore is the author of The Body Builders: Inside the Science of the Engineered Human, a book about bioengineering published last March.