Your Data and “Those Pictures” Are Less Secure Than You Think….


My writing lately has revolved around media, technology, the use of data, and their consequent psychological impacts. However, in a conversation, my friend Michael Becker of Identity Praxis urged me to write about Personally Identifiable Information (PII) security fundamentals. According to Michael, personal data privacy is “the new luxury good,” and we have all heard about the malicious hackers who find creative ways to steal it. Mismanagement of identity and personal information, for the individual and company alike, can lead to reputation damage, debt, criminal records, loss of income, damaged employment prospects, and yes, death. For those of us “non-techies,” when thinking about security on our devices we often default to, “I have antivirus software on my computer, so I am good.” Well congratulations, I’m sure that hacker from who knows where has never gotten past antivirus software. Those questionable pictures of you at your bachelorette party are completely safe and your privacy is protected, NOT (Wayne’s World reference). For your reading pleasure, below are actions, recommended by Michael and explained by me, that you can take to protect your devices from being compromised and unleashing holy hell on you personally.

Begin by using common sense before sharing your PII. This doesn’t involve buying expensive software; it requires taking an extra two seconds to think before acting. Consider the trustworthiness of a website, mobile site, or application before sharing your personal data with it – if something seems suspicious, don’t share. Likewise, don’t complete a transaction online or in a phone app if you don’t feel it is secure. Either call the company or go to a different site where you can order the same product. With email, if you don’t know the sender and they ask you to click on a link, it could be a phishing attack that can grab data off your computer. Don’t just look at the sender’s name or the link text; look at the actual email address and the URL behind the link, because the display name can be used to mask a malicious address. Sorry, that email you received from a stranger asking for your SSN and credit card information to redeem your grand prize is likely about as real as the Easter Bunny.
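If you want to see what “look at the actual URL behind the link” means in practice, here is a tiny Python sketch of the idea. It is purely illustrative (the bank URLs are made up), but it shows how different the address a link actually points to can be from the text you see.

```python
# Illustrative sketch: compare the domain shown in a link's text with the
# domain the link actually points to. The example URLs are made up.
from urllib.parse import urlparse

def looks_suspicious(display_text: str, actual_href: str) -> bool:
    """Flag a link whose visible text names one site but whose href goes elsewhere."""
    shown = display_text if "://" in display_text else "https://" + display_text
    shown_host = urlparse(shown).hostname or ""
    real_host = urlparse(actual_href).hostname or ""
    # If the domain the reader sees is not where the link really goes, be wary.
    return not real_host.endswith(shown_host)

# The email shows "www.mybank.com" but the link goes somewhere else entirely.
print(looks_suspicious("www.mybank.com", "https://mybank.account-verify.example.net/login"))  # True
print(looks_suspicious("www.mybank.com", "https://www.mybank.com/account"))                   # False
```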

According to an article in the Telegraph last year, more than 50 percent of people use at least one of the top 25 passwords, and almost 17 percent use the password “123456” (wasn’t this password used in “Spaceballs”?). When creating passwords, best practice is to include capital letters and special characters and to use a different username and password for each account. The reality is that with all the different accounts we have now, it is tough to keep track of them all, so we pick a favorite username and password and reuse them everywhere. As a result, if a hacker figures out the credentials to one account, they will likely work on several others. Password managers such as LastPass or 1Password can make your life easier. A password manager is an application that stores all your different usernames and passwords and opens with one master password. They also often can autofill login credentials on websites. This feature is obviously faster and more accurate, but it also protects you from keylogging attacks. Password managers can also detect whether you are on the right URL, which helps protect you from phishing sites. Some of them even have random password generators, so you don’t have to think of a new password for every account. DO NOT use the autofill features built into your browser; these are not secure! Finally, enable two-factor authentication (via SMS or an authenticator app) on accounts that support it, e.g., banking and retailer sites. I know it is a pain in the ass, but so is having your bank account drained or your social media account hacked.
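Curious what those built-in random password generators actually do? Here is a minimal sketch in Python, my own illustration rather than anything LastPass or 1Password actually ships, using the standard `secrets` module, which is designed for exactly this kind of job.

```python
# Minimal sketch of a random password generator with capitals and special
# characters, in the spirit of the generators built into password managers.
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password containing upper/lowercase letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Redraw until the password contains at least one capital, one digit, and one symbol.
        if (any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(not c.isalnum() for c in candidate)):
            return candidate

print(generate_password())  # e.g. 'q7W#vPz!m2Kd@r9T' -- different every time
```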

With the Equifax breach last year, most of us have at least heard about the risks from news coverage. However, most people think there are only three major credit bureaus (go ahead, name them in your head…). BUT NO, Michael reminded me there are in fact FOUR. Make sure to visit all four major credit bureaus to freeze your credit (TransUnion, Equifax, Experian, and Innovis). Freezing your credit blocks credit inquiries, which stops anyone from opening a credit account in your name without your knowledge. When freezing your credit, you will receive a PIN code from each bureau to “unfreeze” it should you need a company to run your credit, perhaps to get a loan. Keep those PIN codes in a protected place (how about that password manager above?). While I know some people are concerned about the inconvenience of unfreezing their credit when applying for legitimate credit, that friction can also act as a loan deterrent. True story: my husband and I were considering a larger purchase that required a credit application and never went through with it because of the time it would take to unfreeze our credit, but I digress. Put it on your calendar to check your credit report annually. You can go directly to the credit bureaus and get the reports for free or use companies like Credit Karma, Credit Sesame, or Quizzle (each offers different services). You might also want to consider cyber/identity insurance and darknet monitoring services. The darknet is a hidden layer built on top of the Internet, designed specifically for anonymity; its biggest use is peer-to-peer file sharing. You can only access the darknet with special tools and software, so most of us can’t see what data about us is on there. Besides monetary compensation and support in the case of identity theft, this type of service will alert you to things you wouldn’t otherwise know about, such as an unauthorized USPS address change. A number of companies, including LifeLock, Identity Guard, and Experian, offer this service, and I recommend you check out this PC Magazine article on the subject.

Yes, I know my introduction started with a rant about how antivirus software will not protect you from everything, but YES, YOU STILL NEED IT. PC Magazine recently tested the best antivirus software, and the reviews can be seen here. However, antivirus software should not be your only line of defense. For example, antivirus software doesn’t always protect against malware, and what if you lose your laptop? Encryption prevents anyone without the key from accessing your files (remember those pictures?). On a Mac you can use the built-in FileVault feature, and for Windows, PC Magazine recently wrote a review of the best encryption software for 2018.
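FileVault and the Windows tools in that review encrypt your whole disk, but the idea is the same as encrypting a single file: without the key, what’s stored is unreadable. Here is a minimal Python sketch of that concept using the third-party cryptography package (`pip install cryptography`); the file names are hypothetical.

```python
# Minimal sketch of file encryption: without the key, the stored file is gibberish.
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this key somewhere safe (that password manager, perhaps)
cipher = Fernet(key)

# Encrypt a (hypothetical) photo and write only the encrypted version to disk.
with open("bachelorette_party.jpg", "rb") as f:
    encrypted = cipher.encrypt(f.read())
with open("bachelorette_party.jpg.enc", "wb") as f:
    f.write(encrypted)

# Later, only someone holding the key can recover the original bytes.
original = cipher.decrypt(encrypted)
```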

Running your computers on the latest operating system and paying attention to those annoying notifications for OS updates can stave off major attacks (my husband, a former systems administrator, is rolling his eyes right now because I used to ignore them). According to a Popular Science article, an update released two months before the WannaCry malware attack protected the users who installed it from that attack. The same article calls out the importance of selecting a good email provider and mentions Google and Microsoft as smart choices, since they filter many suspicious emails (but not all) before they get to your inbox.

Make sure to password-protect your home Wi-Fi router (yes, I know people who don’t) and use a VPN when you are connecting to a public Wi-Fi network, such as at an airport, hotel, or the nearest Starbucks. You can also consider installing a cybersecurity hub on your home network, such as Bitdefender Box, Fing, or Cujo. These tools monitor and block suspicious Internet traffic coming from any of your connected devices (they often come bundled with antivirus software). I also like that the ones mentioned come with parental controls, allowing you to block offensive websites, limit social media, and control Internet access by device. What I really like about Bitdefender is that it has features to detect cyberbullying and online predators.

Identity theft is big business, affecting more than 15 million consumers with fraud losses of $16 billion in 2016, according to an identity fraud study released by Javelin Strategy & Research in 2017. Digitally connected consumers, defined as those who “have extensive social network activity, frequently shop online or with mobile devices, and are quick to adopt new digital technologies,” are at a 30 percent higher risk of identity fraud than the average person. The costs associated with the above suggestions range from free to a few hundred dollars, which could likely be offset by skipping a couple of unnecessary purchases. Will it take some time? A few hours per year, maybe, but the return on that effort far outpaces the same number of hours you already spend checking social media or reading the latest salacious news story about identity theft or privacy invasion that stresses you out.


You Should Never, Ever Argue With Anyone on Facebook, According to Science

New research shows how we interact makes a huge difference.

Source: You Should Never, Ever Argue With Anyone on Facebook, According to Science

By Minda Zetlin

You’ve seen it happen dozens if not hundreds of times. You post an opinion, or a complaint, or a link to an article on Facebook. Somebody adds a comment, disagreeing (or agreeing) with whatever you posted. Someone else posts another comment disagreeing with the first commenter, or with you, or both. Then others jump in to add their own viewpoints. Tempers flare. Harsh words are used. Soon enough, you and several of your friends are engaged in a virtual shouting match, aiming insults in all directions, sometimes at people you’ve never even met.

There’s a simple reason this happens, it turns out: We respond very differently to what people write than to what they say–even if those things are exactly the same. That’s the result of a fascinating new experiment by UC Berkeley and University of Chicago researchers. In the study, 300 subjects either read, watched video of, or listened to arguments about such hot-button topics as war, abortion, and country or rap music. Afterward, subjects were interviewed about their reactions to the opinions with which they disagreed.

Their general response was probably very familiar to anyone who’s ever discussed politics: a broad belief that people who don’t agree with you are either too stupid or too uncaring to know better. But there was a distinct difference between those who had watched or listened to someone speak the words out loud and those who had read the identical words as text. Those who had listened to or watched someone say the words were less likely to dismiss the speaker as uninformed or heartless than those who had only read the commenter’s words.

That result was no surprise to at least one of the researchers, who was inspired to try the experiment after a similar experience of his own. “One of us read a speech excerpt that was printed in a newspaper from a politician with whom he strongly disagreed,” researcher Juliana Schroeder told the Washington Post. “The next week, he heard the exact same speech clip playing on a radio station. He was shocked by how different his reaction was toward the politician when he read the excerpt compared to when he heard it.” Whereas the written comments seemed outrageous to this researcher, the same words spoken out loud seemed reasonable.

We’re using the wrong medium.

This research suggests that the best way for people who disagree with each other to work out their differences and arrive at a better understanding or compromise is by talking to each other, as people used to do at town hall meetings and over the dinner table. But now that so many of our interactions take place over social media, chat, text message, or email, spoken conversation or discussion is increasingly uncommon. It’s probably no coincidence that political disagreement and general acrimony have never been greater. Russians used this speech-vs.-text disharmony to full advantage by creating Facebook and Twitter accounts to stir up even more ill will among Americans than we already had on our own. No wonder they were so successful at it.

So what should you do about it? To begin with, if you want to make a persuasive case for your political opinion or proposed action, you’re better off doing it by making a short video (or linking to one by someone else) rather than writing out whatever you have to say. At the same time, whenever you’re reading something someone else wrote that seems outlandish to you, keep in mind that the fact that you’re seeing this as text may be part of the problem. If it’s important for you to be objective, try reading it out loud or having someone else read it to you.

Finally, if you’re already in the middle of an argument over Facebook (or Twitter, or Instagram or email or text), and the person on the other side of the issue is someone you care about, please don’t just keep typing out comments and replies and replies to replies. Instead, make a coffee date so you can speak in person. Or at the very least, pick up the phone.

Giving your child a smartphone is like giving them a gram of cocaine, says top addiction expert

Ofcom figures suggest more than four in ten parents of 12-15 year-olds find it hard to control their children’s screen time


Harley Street clinic director Mandy Saligari says many of her patients are 13-year-old girls who see sexting as ‘normal’

Rachael Pells, Education Correspondent, Wednesday 7 June 2017

Source: Giving your child a smartphone is like giving them a gram of cocaine, says top addiction expert

Giving your child a smartphone is like “giving them a gram of cocaine”, a top addiction therapist has warned.

Time spent messaging friends on Snapchat and Instagram can be just as dangerously addictive for teenagers as drugs and alcohol, and should be treated as such, school leaders and teachers were told at an education conference in London.

Speaking alongside experts in technology addiction and adolescent development, Harley Street rehab clinic specialist Mandy Saligari said screen time was too often overlooked as a potential vehicle for addiction in young people.

“I always say to people, when you’re giving your kid a tablet or a phone, you’re really giving them a bottle of wine or a gram of coke,” she said.

“Are you really going to leave them to knock the whole thing out on their own behind closed doors?

“Why do we pay so much less attention to those things than we do to drugs and alcohol when they work on the same brain impulses?”

Her comments follow news that children as young as 13 are being treated for digital technology addiction – with a third of British children aged 12-15 admitting they do not have a good balance between screen time and other activities.

“When people tend to look at addiction, their eyes tend to be on the substance or thing – but really it’s a pattern of behaviour that can manifest itself in a number of different ways,” Ms Saligari said, naming food obsessions, self-harm and sexting as examples.

Concern has grown recently over the number of young people seen to be sending or receiving pornographic images, or accessing age inappropriate content online through their devices.

Ms Saligari, who heads the Harley Street Charter clinic in London, said around two thirds of her patients were 16-20 year-olds seeking treatment for addiction – a “dramatic increase” on ten years ago – but many of her patients were even younger.

In a recent survey of more than 1,500 teachers, around two-thirds said they were aware of pupils sharing sexual content, with as many as one in six of those involved being of primary school age.

More than 2,000 children have been reported to police for crimes linked to indecent images in the past three years.

“So many of my clients are 13- and 14-year-old girls who are involved in sexting, and describe sexting as ‘completely normal’,” said Ms Saligari.

Many young girls in particular believe that sending a picture of themselves naked to someone on their mobile phone is “normal”, and that it only becomes “wrong” when a parent or adult finds out, she added.

“If children are taught self-respect they are less likely to exploit themselves in that way,” said Ms Saligari. “It’s an issue of self-respect and it’s an issue of identity.”

Speaking alongside Ms Saligari at the Highgate Junior School conference on teenage development, Dr Richard Graham, Consultant Psychiatrist and Technology Addiction Lead at the Nightingale Hospital, said the issue was a growing area of interest for researchers, as parents report struggling to find the correct balance for their children.

Ofcom figures suggest more than four in ten parents of 12-15 year-olds find it hard to control their children’s screen time.

Even three- and four-year-olds consume an average of six and a half hours of internet time per week, according to the broadcasting regulator.

Greater emphasis was needed on sleep and digital curfews at home, the experts suggested, as well as a systematic approach within schools, for example by introducing a smartphone amnesty at the beginning of the school day.

“With sixth formers and teenagers, you’re going to get resistance, because to them it’s like a third hand,” said Ms. Saligari, “but I don’t think it’s impossible to intervene. Schools asking pupils to spend some time away from their phone I think is great.

“If you catch [addiction] early enough, you can teach children how to self-regulate, so we’re not policing them and telling them exactly what to do,” she added.

“What we’re saying is, here’s the quiet carriage time, here’s the free time – now you must learn to self-regulate. It’s possible to enjoy periods of both.”


This surgeon wants to connect you to the Internet with a brain implant

Eric Leuthardt believes that in the near future we will allow doctors to insert electrodes into our brains so we can communicate directly with computers and each other.

Source: This surgeon wants to connect you to the Internet with a brain implant

by Adam Piore, November 30, 2017

It’s the Monday morning following the opening weekend of the movie Blade Runner 2049, and Eric C. Leuthardt is standing in the center of a floodlit operating room clad in scrubs and a mask, hunched over an unconscious patient.

“I thought he was human, but I wasn’t sure,” Leuthardt says to the surgical resident standing next to him, as he draws a line on the area of the patient’s shaved scalp where he intends to make his initial incisions for brain surgery. “Did you think he was a replicant?”

“I definitely thought he was a replicant,” the resident responds, using the movie’s term for the eerily realistic-looking bioengineered androids.

“What I think is so interesting is that the future is always flying cars,” Leuthardt says, handing the resident his Sharpie and picking up a scalpel. “They captured the dystopian component: they talk about biology, the replicants. But they missed big chunks of the future. Where were the neural prosthetics?”

It’s a topic that Leuthardt, a 44-year-old scientist and brain surgeon, has spent a lot of time imagining. In addition to his duties as a neurosurgeon at Washington University in St. Louis, he has published two novels and written an award-winning play aimed at “preparing society for the changes ahead.” In his first novel, a techno-thriller called RedDevil 4, 90 percent of human beings have elected to get computer hardware implanted directly into their brains. This allows a seamless connection between people and computers, and a wide array of sensory experiences without leaving home. Leuthardt believes that in the next several decades such implants will be like plastic surgery or tattoos, undertaken with hardly a second thought.

Eric Leuthardt.

“I cut people open for a job,” he notes. “So it’s not hard to imagine.”

But Leuthardt has done far more than just imagine this future. He specializes in operating on patients with intractable epilepsy, all of whom must spend several days before their main surgery with electrodes implanted on their cortex as computers aggregate information about the neural firing patterns that precede their seizures. During this period, they are confined to a hospital bed and are often extremely bored. About 15 years ago, Leuthardt had an epiphany: why not recruit them to serve as experimental subjects? It would both ease their tedium and help bring his dreams closer to reality.

Leuthardt began designing tasks for them to do. Then he analyzed their brain signals to see what he might learn about how the brain encodes our thoughts and intentions, and how such signals might be used to control external devices. Was the data he had access to sufficiently robust to describe intended movement? Could he listen in on a person’s internal verbal monologues? Is it possible to decode cognition itself?

Though the answers to some of these questions were far from conclusive, they were encouraging. Encouraging enough to instill in Leuthardt the certitude of a true believer—one who might sound like a crackpot, were he not a brain surgeon who deals in the life-and-death realm of the operating room, where there is no room for hubris or delusion. Leuthardt knows better than most that brain surgery is dangerous, scary, and difficult for the patient. But his understanding of the brain has also given him a clear-eyed view of its inherent limitations—and the potential of technology to help overcome them. Once the rest of the world understands the promise, he insists—and once the technologies progress—the human race will do what it has always done. It will evolve. This time with the help of chips implanted in our heads.

One of Leuthardt’s patients is positioned for minimally invasive laser surgery to treat a brain tumor. Such highly precise surgical techniques have made implanting electrodes safer and less daunting for patients.

“A true fluid neural integration is going to happen,” Leuthardt says. “It’s just a matter of when. If it’s 10 or 100 years in the grand scheme of things, it’s a material development in the course of human history.”

Leuthardt is by no means the only one with exotic ambitions for what are known as brain-computer interfaces. Last March Elon Musk, a founder of Tesla and SpaceX, launched Neuralink, a venture aiming to create devices that facilitate mind-machine melds. Facebook’s Mark Zuckerberg has expressed similar dreams, and last spring his company revealed that it has 60 engineers working on building interfaces that would let you type using just your mind. Bryan Johnson, the founder of the online payment system Braintree, is using his fortune to fund Kernel, a company that aims to develop neuroprosthetics he hopes will eventually boost intelligence, memory, and more.

These plans, however, are all in their early phases and have been shrouded in secrecy, making it hard to assess how much progress has been made—or whether the goals are even remotely realistic. The challenges of brain-computer interfaces are myriad. The kinds of devices that people like Musk and Zuckerberg are talking about won’t just require better hardware to facilitate seamless mechanical connection and communication between silicon computers and the messy gray matter of the human brain. They’ll also have to have sufficient computational power to make sense out of the mass of data produced at any given moment as many of the brain’s nearly 100 billion neurons fire. One other thing: we still don’t know the code the brain uses. We will have to, in other words, learn how to read people’s minds.

But Leuthardt, for one, expects he will live to see it. “At the pace at which technology changes, it’s not inconceivable to think that in a 20-year time frame everything in a cell phone could be put into a grain of rice,” he says. “That could be put into your head in a minimally invasive way, and would be able to perform the computations necessary to be a really effective brain-computer interface.”

Decoding the brain

Scientists have long known that the firing of our neurons is what allows us to move, feel, and think. But breaking the code by which neurons talk to each other and the rest of the body—developing the capacity to actually listen in and make sense of precisely how it is that brain cells allow us to function—has long stood as one of neuroscience’s most daunting tasks.

In the early 1980s, an engineer named Apostolos Georgopoulos, at Johns Hopkins, paved the way for the current revolution in brain-computer interfaces. Georgopoulos identified neurons in the higher-level processing areas of the motor cortex that fired prior to specific kinds of movement—such as a flick of the wrist to the right, or a downward thrust with the arm. What made Georgopoulos’s discovery so important was that you could record these signals and use them to predict the direction and intensity of the movements. Some of these neuronal firing patterns guided the behavior of scores of lower-level neurons working together to move the individual muscles and, ultimately, a limb.

Using arrays of dozens of electrodes to track these high-level signals, Georgopoulos demonstrated that he could predict not just which way a monkey would move a joystick in three-dimensional space, but even the velocity of the movement and how it would change over time.
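The decoding trick described here is usually called a population vector, the approach Georgopoulos is credited with pioneering: each neuron has a preferred direction, and a weighted average of those directions, with weights set by how far each neuron’s firing rate rises above baseline, points roughly where the movement will go. Below is an illustrative NumPy sketch of that idea with made-up numbers; it is not a reconstruction of the actual experiments.

```python
# Illustrative sketch of population-vector decoding: each neuron "votes" for its
# preferred movement direction, weighted by how far its firing rate rises above
# baseline. All numbers are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 60

# Each neuron gets a random preferred direction (unit vector in 3-D).
preferred = rng.normal(size=(n_neurons, 3))
preferred /= np.linalg.norm(preferred, axis=1, keepdims=True)

baseline = 10.0                               # resting firing rate, spikes/s (made up)
true_direction = np.array([1.0, 0.0, 0.0])    # the movement we pretend is about to happen

# Cosine tuning: a neuron fires more when the movement matches its preferred direction.
rates = baseline + 8.0 * preferred @ true_direction + rng.normal(0.0, 1.0, n_neurons)

# Population vector: sum of preferred directions weighted by above-baseline firing.
population_vector = ((rates - baseline)[:, None] * preferred).sum(axis=0)
decoded = population_vector / np.linalg.norm(population_vector)

print(decoded)  # close to [1, 0, 0], the true movement direction
```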

It was, it seemed clear, precisely the kind of data one might use to give a paralyzed patient mind control over a prosthetic device. Which is the task that one of Georgopoulos’s protégés, Andrew Schwartz, took on in the 1990s. By the late 1990s Schwartz, who is currently a neurobiologist at the University of Pittsburgh, had implanted electrodes in the brains of monkeys and begun to demonstrate that it was indeed possible to train them to control robotic limbs just by thinking.

Leuthardt, in St. Louis to do a neurosurgery residency at Washington University in 1999, was inspired by such work: when he needed to decide how to spend a mandated year-long research break, he knew exactly what he wanted to focus on. Schwartz’s initial success had convinced Leuthardt that science fiction was on the verge of becoming reality. Scientists were finally taking the first tentative steps toward the melding of man and machine. Leuthardt wanted to be part of the coming revolution.

He thought he might devote his year to studying the problem of scarring in mice: over time, the single electrodes that Schwartz and others implanted as part of this work caused inflammatory reactions, or ended up sheathed in brain cells and immobilized. But when Leuthardt and his advisor sat down to map out a plan, the two came up with a better idea. Why not see if they might be able to use a different brain recording technique altogether?

“We were like, ‘Hey, we’ve got humans with electrodes in them all the time!’” Leuthardt says. “Why don’t we just do some experiments with them?”

A surgeon prepares to drill a hole in a patient’s skull to place a laser probe.

A stereotactic frame fixed to a patient’s skull guides a laser probe that pinpoints a location in the brain.


Georgopoulos and Schwartz had collected their data using a technique that relies on microelectrodes next to the cell membranes of individual neurons to detect voltage changes. The electrodes Leuthardt used, which are implanted before surgery in epilepsy patients, were far larger and were placed on the surface of the cortex, under the scalp, on strips of plastic, where they recorded the signals emanating from hundreds of thousands of neurons at the same time. To install them, Leuthardt performed an initial operation in which he removed the top of the skull, cut through the dura (the brain’s outermost membrane), and placed the electrodes directly on top of the brain. Then he connected them to wires that snaked out of the patient’s head in a bundle and plugged into machinery that could analyze the brain signals.

Such electrodes had been used successfully for decades to identify the exact origin in the brain of an epilepsy patient’s intractable seizures. After the initial surgery, the patient stops taking anti-seizure medication, which will eventually prompt an epileptic episode—and the data about its physical source helps doctors like Leuthardt decide which section of the brain to resect in order to forestall future episodes.

But many were skeptical that the electrodes would yield enough information to control a prosthetic. To help find out, Leuthardt recruited Gerwin Schalk, a computer scientist at the Wadsworth Center, a public-health laboratory of the New York State Department of Health. Progress was swift. Within a few years of testing, Leuthardt’s patients had shown the capacity to play Space Invaders—moving a virtual spaceship left and right—simply by thinking. Then they moved a cursor in three-dimensional space on a screen.

In 2006, after a speech on this work at a conference, Schalk was approached by Elmar Schmeisser, a program manager at the U.S. Army Research Office. Schmeisser had in mind something far more complex. He wanted to find out if it was possible to decode “imagined speech”—words not vocalized, but simply spoken silently in one’s mind. Schmeisser, also a science fiction fan, had long dreamed of creating a “thought helmet” that could detect a soldier’s imagined speech and transmit it wirelessly to a fellow soldier’s earpiece.

Laser probe.

Leuthardt recruited 12 bedridden epilepsy patients, confined to their rooms and bored as they waited to have seizures, and presented each one with 36 words that had a relatively simple consonant-vowel-consonant structure, such as “bet,” “bat,” “beat,” and “boot.” He asked the patients to say the words out loud and then to simply imagine saying them—conveying the instructions visually (written on a computer screen), with no audio, and again vocally, with no video, to make sure that he could identify incoming sensory signals in the brain. Then he shipped the data to Schalk for analysis.

Schalk’s software relies on pattern recognition algorithms—his programs can be trained to recognize the activation patterns of groups of neurons associated with a given task or thought. With a minimum of 50 to 200 electrodes, each one producing 1,000 readings per second, the programs must churn through a dizzying number of variables. The more electrodes and the smaller the population of neurons per electrode, the better the chance of detecting meaningful patterns—if sufficient computing power can be brought to bear to sort out irrelevant noise.

“The more resolution the better, but at the minimum it’s about 50,000 numbers a second,” Schalk says. “You have to extract the one thing you are really interested in. That’s not so straightforward.”
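Schalk’s “50,000 numbers a second” is easy to reproduce from the figures above: it is simply the electrode count multiplied by the per-electrode sampling rate. A quick sketch:

```python
# Back-of-the-envelope data rate from the numbers quoted above:
# each electrode produces 1,000 readings per second.
samples_per_second = 1_000

for electrodes in (50, 200):
    print(f"{electrodes} electrodes -> {electrodes * samples_per_second:,} numbers per second")
# 50 electrodes -> 50,000 numbers per second
# 200 electrodes -> 200,000 numbers per second
```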

Schalk’s results, however, were surprisingly robust. As one might expect, when Leuthardt’s subjects vocalized a word, the data indicated activity in the areas of the motor cortex associated with the muscles that produce speech. The auditory cortex, and an area in its vicinity long believed to be associated with speech processing, were also active at the exact same moments. Remarkably, there were similar yet slightly different activation patterns even when the subjects only imagined the words silently.

Schalk, Leuthardt, and others involved in the project believe they have found the little voice that we hear in our mind when we imagine speaking. The system has never been perfect: after years of effort and refinements to his algorithms, Schalk’s program guesses correctly 45 percent of the time. But rather than attempt to push those numbers higher (they expect performance to improve with better sensors), Schalk and Leuthardt have focused on decoding increasingly complex components of speech.

In recent years, Schalk has continued to extend the findings on real and imagined speech (he can tell whether a subject is imagining speaking Martin Luther King Jr.’s “I Have a Dream” speech or Lincoln’s Gettysburg Address). Leuthardt, meanwhile, has attempted to push on into the next realm: identifying the way the brain encodes intellectual concepts across different regions.

The data on that effort is not published yet, “but the honest truth is we’re still trying to make sense of it,” Leuthardt says. His lab, he acknowledges, may be approaching the limits of what’s possible using current technologies.

Implanting the future

“The moment we got early evidence that we could decode intentions,” Leuthardt says, “I knew it was on.”

Soon after obtaining those results, Leuthardt took seven days off to write, visualize the future, and think about both short- and long-term goals. At the top of the list of things to do, he decided, was preparing humanity for what’s coming, a job that is still very much in progress.

Leuthardt drills a hole in the skull.

On this control room computer screen, the laser is monitored in real time.

With sufficient funding, Leuthardt insists, reclining in a chair in his office after performing surgery, he could already create a prosthetic implant for a general market that would allow someone to use a computer and control a cursor in three-dimensional space. Users could also do things like turn lights on and off, or turn heat up and down, using their thoughts alone. They might even be able to experience artificially induced tactile sensations and access some rudimentary means of turning imagined speech into text. “With current technology, I could make an implant—but how many people are going to want that now?” he says. “I think it’s very important to take practical, short interval steps to get people moved along the pathway toward this road of the long-term vision.”

To that end, Leuthardt founded NeuroLutions, a company aimed at demonstrating that there is a market, even today, for rudimentary devices that link mind and machine—and at beginning to use the technology to help people. NeuroLutions has raised several million so far, and a noninvasive brain interface for stroke victims who have lost function on one side is currently in human trials.

The device consists of brain-monitoring electrodes that sit on the scalp and are attached to an arm orthosis; it can detect a neural signature for intended movement before the signal reaches the motor area of the brain. The neural signals are on the opposite side of the brain from the area usually destroyed by the stroke—and thus are usually spared any damage. By detecting them, amplifying them, and using them to control a device that moves the paralyzed limb, Leuthardt has found, he can actually help a patient regain independent control over the limb, far faster and more effectively than is possible with any approach currently on the market. Importantly, the device can be used without brain surgery.

Though the technology is decidedly modest compared with Leuthardt’s grand designs for the future, he believes this is an area where he can meaningfully transform people’s lives right now. There are about 700,000 new stroke patients in the U.S. each year, and the most common motor impairment is a paralyzed hand. Finding a way to help more of them regain function—and demonstrating that he can do it faster and more effectively—would not only demonstrate the power of brain-computer interfaces but meet a huge medical need.

Leuthardt plans the laser probe’s trajectory with the assistance of a stereotactic navigation system.

Leuthardt’s surgical tools.

Using noninvasive electrodes that sit on the outside of the scalp makes the invention much less off-putting for patients, but it also imposes severe limitations. The voltage signals coming from brain cells may be muffled as they travel through the scalp to reach the sensors, and they may be diffused as they pass through bone. Either makes them harder to detect and their origins harder to interpret.

Leuthardt can achieve far more transformative feats using his implanted electrodes that sit directly on the cortex of the brain. But he has learned through painful experience that elective brain surgery is a tough sell—not just with patients, but with investors as well.

When he and Schalk founded NeuroLutions, in 2008, they hoped to restore movement to the paralyzed by bringing just such an interface to market. But the investment community wasn’t interested. For one thing, neuroscientist-led startups have been testing brain-computer interfaces for more than a decade but have had little success in turning the technology into a viable treatment for paralyzed patients (see “Implanting Hope”). The population of potential patients is limited—at least compared with some of the other conditions being targeted by medical-device startups competing for venture capital. (Roughly 40,000 people in the U.S. have complete quadriplegia.) And most of the tasks that could be accomplished using such an interface can already be handled with noninvasive devices. Even most locked-in patients can still blink an eye or perhaps wiggle a finger. Methods that rely on this residual movement can be used to input data or move a wheelchair without the danger, recovery time, or psychological wherewithal involved in implanting electrodes directly on one’s cortex.

So after their initial fund-raising efforts failed, Leuthardt and Schalk set their sights on a more modest goal. Unexpectedly, they found that many patients continued to recover additional function even after the orthosis was removed—extending to, for instance, fine motor control of their fingers. Often, it turned out, all the patients needed was a little push. Then, once new neural pathways were established, the brain continued to remodel and expand them so that they could convey more complex motor commands to the hand.

The initial success Leuthardt expects in these patients, he hopes, will encourage some to move on to a more robust invasive system. “A couple years down the road you might say, ‘You know what? For that noninvasive version, you can get this much benefit, but I think that now, given the science that we know and everything, we can give you this much more benefit,’” he says. “We can enhance your function even more.”

Leuthardt is so eager for the world to share his passion for the technology’s potentially transformative effects that he has also sought to engage the public through art. In addition to writing his novels and play, he is working on a podcast and YouTube series with a fellow neurosurgeon, in which the two discuss technology and philosophy over coffee and doughnuts.

In Leuthardt’s first book, RedDevil 4, one character uses his “cortical prosthetic” to experience hiking the Himalayas while sitting on his couch. Another, a police detective, confers telepathically with a colleague about how to question a murder suspect standing right in front of them. Every character has instant access to all the knowledge in the world’s libraries—can access it as quickly as a person can think any spontaneous thought. No one ever has to be alone, and our bodies no longer limit us. On the flip side, everyone’s brain is vulnerable to computer viruses that can turn people into psychopaths.

Leuthardt acknowledges that at present, we still lack the power to record and stimulate the number of neurons it would take to replicate these visions. But he claims his conversations with some Silicon Valley investors have only fueled his optimism that we’re on the brink of an innovation explosion.

Schalk is a little less sanguine. He’s skeptical that Facebook, Musk, and others are adding much of their own to the quest for a better interface.

“They are not going to do anything different than the scientific community by itself,” Schalk says. “Maybe something is going to come of it, but it’s not like they have this new thing that nobody else has.”

Schalk says it’s “very, very obvious” that in the next five to 10 years some form of brain-computer interface will be used to rehabilitate victims of strokes, spinal cord injuries, chronic pain, and other disorders. But he compares the current recording techniques to the IBM computers of the 1960s, saying that they are now “archaic.” For the technology to reach its true long-term potential, he believes, a new sort of brain-scanning technology will be needed—something that can read far more neurons at once.

“What you really want is to be able to listen to the brain and talk to the brain in a way that the brain cannot distinguish from the way it communicates internally, and we can’t do that right now,” Schalk says. “We really don’t know how to do it at this point. But it’s also obvious to me that it is going to happen. And if and when that happens, our lives are going to change, and our lives are going to change in a way that is completely unprecedented.”

Where and when the breakthroughs will come from is unclear. After decades of research and progress, many of the same technological challenges remain daunting. Still, the progress in neuroscience and computer hardware and software makes the outcome—at least to true believers—inevitable.

At the very least, says Leuthardt, the buzz emanating from Silicon Valley has generated “real excitement and real thinking about brain-computer interfaces being a practical reality.” That, he says, is “something we haven’t seen before.” And though he acknowledges that if this turns out to be hype it could “set the field back a decade or two,” nothing, he believes, will stop us from reaching the ultimate goal: a technology that will allow us to transcend the cognitive and physical limitations previous generations of humankind have taken for granted.

“It’s going to happen,” he insists. “This has the potential to alter the evolutionary direction of the human race.”

Adam Piore is the author of The Body Builders: Inside the Science of the Engineered Human, a book about bioengineering published last March.

The Psychology of The Walking Dead—The Appeal of Post-Apocalyptic Stories

by Dr. Donna Roberts


The Story

I’m not a Walking Dead fan, which is surprising because I love binging on TV series and I loved horror movies as a teenager. Or maybe, more accurately, I loved watching horror movies with my girlfriends. I was in high school when the Friday the 13th and Halloween series came out, and we frequently headed to the theater in a group, where we huddled together in our seats and clutched each other frantically as we screamed at all the shocking surprises. Good times. But I can’t say that my love for the genre persisted into adulthood.

When I saw that some Facebook friends from high school and current colleagues were TWD fans, I knew I had to give it a try. It just didn’t gel with me at the time. Maybe it still will. Timing is everything.

Though I wasn’t compelled to keep watching the series, I am fascinated enough with dissecting the human condition and the psychology of popular culture to know when I have a gem of some sort in my midst. I am a believer that life imitates art imitates life in a chicken-and-egg circularity. Beginning with its third season, The Walking Dead attracted the most 18- to 49-year-old viewers of any cable or broadcast television series. That’s a pretty wide range of viewers that no marketing segmentation plan would usually put together. It was even well received by critics.

So, the popularity of TWD is enough to make me want to put it on the proverbial couch and see what it has to say.


Psych Pstuff’s Summary

Turns out that since the beginning of humanity, or at least since we’ve been writing about it, we’ve been contemplating the end of humanity. From Bible stories to campfire stories, we revel in envisioning the ultimate destruction of the world as we know it, and what ensues in the aftermath.

In 2012, the Daily Mail published results of a survey that polled 16,262 people in more than 20 countries. The results indicated that 22% of Americans believed the world would end in their lifetime, with 10% thinking the apocalypse was coming that very year. Certainly, if this is your mindset, then it is only logical to be a wee bit obsessed with what might be in store for you.

Actually, skipping only a few years here and there, predictions of the end of the world have occurred for almost every year since 1910 and there are plenty more scheduled for the future. Historically, even various scientists have weighed in with estimates of cataclysmic destruction that would endanger human existence, though their dates typically range from a comfortable 300,000 to 22 billion years from now. However, given the instability of both climate and the political landscape, more do seem to be cropping up with sooner best-before dates.

The media, including broadcast journalism, popular talk shows, documentaries and fictionalized productions have always played a role in our apocalyptic obsession. Adding a twist to the usual plot of following the experiences of survivors, beginning in 2009 the History Channel aired a two-season (20 episode) series where experts speculated on how the earth would evolve after the demise of humans. With the ominous opening, Welcome to Earth … Population: Zero, it captured the morbid fascination of 5.4 million viewers, making it the most watched program in the history of the History Channel.

From 2011 to 2014 the National Geographic channel ran a reality show, Doomsday Preppers, that profiled real survivalists preparing for various scenarios of the end of civilization. While some critics called it absurd and exploitative, it was the most watched and highest rated show in the history of the network.

Typically, there are only a few oft-repeated variations on the theme—the deadly virus, the meteor strike, nuclear devastation and, the newest kid on the block, the “gray goo” scenario where nanotechnology runs amok and robots commit ecophagy. TWD in particular, and the zombie craze in general, seems to be the latest, and rather enduring, expression of our fascination with all things apocalyptic. Now in its 8th season, the show seems as strong as ever. The review site Rotten Tomatoes concludes, “Blood-spattered, emotionally resonant, and white-knuckle intense, The Walking Dead puts an intelligent spin on the overcrowded zombie subgenre.”

But just why do we engage in so much pursuit of these devastating what-ifs?

In one respect, the contemplation of ever-increasing disaster scenarios is just a gradual slippery slope from very functional, and necessary, learned behavior. From the time we are children, through both direct experience and the hypothetical, we learn cause-effect relationships, and thus how to avoid unpleasant and dangerous consequences. We learn not to touch the hot stove or play in traffic. We learn to think ahead and anticipate possible consequences. But in learning these, we also come to understand that there are some things that happen that you can’t anticipate. Sometimes life turns on a dime. Sometimes disasters happen. Sometimes the world runs amok and all you can do is deal with the aftermath.

Enter the captivating world of the post-apocalypse.

Another cognitive driver of our fascination with these doomsday scenarios is the urge to combat feelings of powerlessness and mistrust of those in power. There’s nothing like all-out devastation to level the proverbial playing field.

There is also a surreal romanticizing of the post-apocalyptic world. Taking us back to the basics of human survival releases us from the complex entanglements and overbearing demands of the modern world, if only for that short time of suspended disbelief.

Child psychologist and author of Zombie Autopsies, Steven Schlozman, M.D., notes, “All of this uncertainty and all of this fear comes together and people think maybe life would be better after a disaster. I talk to kids in my practice and they see it as a good thing. They say, ‘life would be so simple—I’d shoot some zombies and wouldn’t have to go to school.’” Similarly, he recounts the following statement from another teenager: “Dude—a zombie apocalypse would be so cool. No homework, no girls, no SATs. Just make it through the night, man … make it through the night.”

While in reality we might not share the exuberance of these kids or long for a disaster to avoid another work deadline, we can sometimes fantasize about a simpler world where our true strengths are utilized and appreciated. Our brains are always seeking a solution to what is plaguing us (pun intended) and causing anxiety. When no plausible solution is readily available we can resort to more fantastical scenarios. Projecting ourselves into future worlds, where life can be better and we can be better, is akin to reverse nostalgia.

The power and endurance of TWD lies not in its clichéd deadly-virus plotline, but in the development of characters who touch us on a deeper level. While the circumstances are surreal, the resilience of the characters in the face of total devastation and imminent threats to survival reflects something much more real, and more universal. As John Russo, co-creator of TWD’s predecessor Night of the Living Dead, noted, “It has important things to say about the human condition, which is one of frailty and nobility, weakness and courage, fear and hope, good and evil. These are the enduring puzzles and enigmas of our existence, and we can delve into them and learn from them vicariously when we sit down to watch The Walking Dead.”

What more could you ask for from any form of entertainment?

I think I just might give Season 2 a try.

What happens in your brain when you binge-watch a TV show

Netflix survey found that 73 percent of participants reported positive feelings associated with binge-watching.

Is watching the entire second season of “Stranger Things” on your weekend to-do list? Here’s what you need to know.

Source: What happens in your brain when you binge-watch a TV show

by Danielle Page

You sit yourself down in front of the TV after a long day at work, and decide to start watching that new show everyone’s been talking about. Cut to midnight and you’ve crushed half a season — and find yourself tempted to stay up to watch just one more episode, even though you know you’ll be paying for it at work the next morning.

It happens to the best of us. Thanks to streaming platforms like Netflix and Hulu, we’re granted access to several hundred show options that we can watch all in one sitting — for a monthly fee that shakes out to less than a week’s worth of lattes. What a time to be alive, right?

And we’re taking full advantage of that access. According to a survey done by the U.S. Bureau of Labor Statistics, the average American spends around 2.7 hours watching TV per day, which adds up to almost 20 hours per week in total.

361,000 people watched all nine episodes of the second season of ‘Stranger Things’ on the first day it was released.

As for the amount of binge watching we’re doing, a Netflix survey found that 61 percent of users regularly watch between 2-6 episodes of a show in one sitting. A more recent study found that most Netflix members choose to binge-watch their way through a series versus taking their time — finishing an entire season in one week, on average (shows that fall in the sci-fi, horror and thriller categories are the most likely to be binged).

In fact, according to Nielsen, 361,000 people watched all nine episodes of season 2 of ‘Stranger Things’ on the first day it was released.

Of course, we wouldn’t do it if it didn’t feel good. In fact, the Netflix survey also found that 73 percent of participants reported positive feelings associated with binge-watching. But if you spent last weekend watching season two of “Stranger Things” in its entirety, you may have found yourself feeling exhausted by the end of it — and downright depressed that you’re out of episodes to watch.

A Netflix survey found that 61 percent of users regularly watch between 2-6 episodes of a show in one sitting.

There are a handful of reasons that binge-watching gives us such a high — and then leaves us emotionally spent on the couch. Here’s a look at what happens to our brain when we settle in for a marathon, and how to watch responsibly.


When binge watching your favorite show, your brain is continually producing dopamine, and your body experiences a drug-like high.

Watching episode after episode of a show feels good — but why is that? Dr. Renee Carr, Psy.D, a clinical psychologist, says it’s due to the chemicals being released in our brain. “When engaged in an activity that’s enjoyable such as binge watching, your brain produces dopamine,” she explains. “This chemical gives the body a natural, internal reward of pleasure that reinforces continued engagement in that activity. It is the brain’s signal that communicates to the body, ‘This feels good. You should keep doing this!’ When binge watching your favorite show, your brain is continually producing dopamine, and your body experiences a drug-like high. You experience a pseudo-addiction to the show because you develop cravings for dopamine.”

According to Dr. Carr, the process we experience while binge watching is the same one that occurs when a drug or other type of addiction begins. “The neuronal pathways that cause heroin and sex addictions are the same as an addiction to binge watching,” Carr explains. “Your body does not discriminate against pleasure. It can become addicted to any activity or substance that consistently produces dopamine.”

Your body does not discriminate against pleasure. It can become addicted to any activity or substance that consistently produces dopamine.

Spending so much time immersed in the lives of the characters portrayed on a show is also fueling our binge watching experience. “Our brains code all experiences, be it watched on TV, experienced live, read in a book or imagined, as ‘real’ memories,” explains Gayani DeSilva, M.D., a psychiatrist at Laguna Family Health Center in California. “So when watching a TV program, the areas of the brain that are activated are the same as when experiencing a live event. We get drawn into story lines, become attached to characters and truly care about outcomes of conflicts.”

According to Dr. DeSilva, there are a handful of different forms of character involvement that contribute to the bond we form with the characters, which ultimately make us more likely to binge watch a show in its entirety.

“‘Identification’ is when we see a character in a show that we see ourselves in,” she explains. “‘Modern Family,’ for example, offers identification for the individual who is an adoptive parent, a gay husband, the father of a gay couple, the daughter of a father who marries a much younger woman, etc. The show is so popular because of its multiple avenues for identification. ‘Wishful identification,’ is where plots and characters offer opportunity for fantasy and immersion in the world the viewer wishes they lived in (ex. ‘Gossip Girl,’ ‘America’s Next Top Model’). Also, the identification with power, prestige and success makes it pleasurable to keep watching. ‘Parasocial interaction’ is a one-way relationship where the viewer feels a close connection to an actor or character in the TV show.”

If you’ve ever found yourself thinking that you and your favorite character would totally be friends in real life, you’ve likely experienced this type of involvement. Another type of character involvement is “perceived similarity, where we enjoy the experience of ‘I know what that feels like,’ because it’s affirming and familiar, and may also allow the viewer increased self-esteem when seeing qualities valued in another story.” For example, you’re drawn to shows with a strong female lead because you often take on that role at work or in your social groups.


The act of binge watching offers us a temporary escape from our day-to-day grind, which can act as a helpful stress management tool, says Dr. John Mayer, Ph.D, a clinical psychologist at Doctor On Demand. “We are all bombarded with stress from everyday living, and with the nature of today’s world where information floods us constantly,” Dr. Mayer says. “It is hard to shut our minds down and tune out the stress and pressures. A binge can work like a steel door that blocks our brains from thinking about those constant stressors that force themselves into our thoughts. Binge watching can set up a great boundary where troubles are kept at bay.”

A binge can work like a steel door that blocks our brains from thinking about those constant stressors that force themselves into our thoughts.

Binge watching can also help foster relationships with others who have been watching the same show as you. “It does give you something to talk about with other people,” says Dr. Ariane Machin, Ph.D, clinical psychologist and professor of psychology. “Cue the ‘This Is Us’ phenomenon and feeling left out if you didn’t know what was going on! Binge watching can make us feel a part of a community with those that have also watched it, where we can connect over an in-depth discussion of a show.”

Watching a show that features a character or scenario that ties into your day-to-day routine can also end up having a positive impact on your real life. “Binge watching can be healthy if your favorite character is also a virtual role model for you,” says Carr, “or, if the content of the show gives you exposure to a career you are interested in. Although most characters and scenes are exaggerated for dramatic effect, it can be a good teaching lesson and case study. For example, if a shy person wants to become more assertive, remembering how a strong character on the show behaves can give the shy person a vivid example of how to advocate for herself or try something new. Or, if experiencing a personal crisis, remembering how a favorite character or TV role model solved a problem can give the binge watcher new, creative or bolder solutions.”


Have you ever felt sad after finishing a series? Mayer says that when we finish binge watching a series, we actually mourn the loss. “We often go into a state of depression because of the loss we are experiencing,” he says. “We call this situational depression because it is stimulated by an identifiable, tangible event. Our brain stimulation is lowered (depressed) such as in other forms of depression.”

In a study done by the University of Toledo, 142 out of 408 participants identified themselves as binge-watchers. This group reported higher levels of stress, anxiety and depression than those who were not binge-watchers. But in examining the habits that come with binge-watching, it’s not hard to see why it would start to impact our mental health. For starters, if you’re not doing it with a roommate or partner, binge-watching can quickly become isolating.

When we disconnect from humans and over-connect to TV at the cost of human connection, eventually we will ‘starve to death’ emotionally.

“When we substitute TV for human relations we disconnect from our human nature and substitute for [the] virtual,” says Dr. Judy Rosenberg, psychologist and founder of the Psychological Healing Center in Sherman Oaks, CA. “We are wired to connect, and when we disconnect from humans and over-connect to TV at the cost of human connection, eventually we will ‘starve to death’ emotionally. Real relationships and the work of life is more difficult, but at the end of the day more enriching, growth producing and connecting.”

If you find yourself choosing a night in with Netflix over seeing friends and family, it’s a sign that this habit is headed into harmful territory. (A word of warning to those of us who decided to stay in and binge watch “Stranger Things” instead of heading to that Halloween party.)


The key to reaping the benefits of binge-watching without suffering from the negative repercussions is to set parameters for the time you spend with your television — which can be tough to do when you’re faced with cliffhangers that might be resolved if you just stay up for one more episode. “In addition to pleasure, we often binge-watch to obtain psychological closure from the previous episode,” says Carr. “However, because each new episode leaves you with more questions, you can engage in healthy binge-watching by setting a predetermined end time for the binge. For example, commit to saying, ‘After three hours, I’m going to stop watching this show for the night.’”

If setting a time limit cuts you off at a point in your binge where it’s hard to stop (and makes it too easy to tell yourself just ten more minutes), Carr suggests committing to a set number of episodes at the onset instead. “Try identifying a specific number of episodes to watch, then watching only the first half of the episode you have designated as your stopping point,” she says. “Usually, questions from the previous episode will be answered by this half-way mark and you will have enough psychological closure to feel comfortable turning off the TV.”

Also, make sure that you’re balancing your binge with other activities. “After binge-watching, go out with friends or do something fun,” says Carr. “By creating an additional source of pleasure, you will be less likely to become addicted to or binge watch the show. Increase your physical exercise activity or join an adult athletic league. By increasing your heart rate and stimulating your body, you can give yourself a more effective and longer-term experience of fun and excitement.”

When Advertisements Become Too Personal





With the proliferation of media channels over the last 20 years, advertisers have taken advantage of marketing technologies combined with data to serve more personalized advertisements to consumers. Personalization is a marketing strategy that delivers specific messages to you by pairing data analysis with marketing technology that enables targeting (the ability to identify a specific person or audience). To do this, companies leverage many data sources about you, whether obtained directly from you, purchased from data brokers, or passively collected (tracking your online behavior). There are advantages for the consumer, such as advertisement relevance, time savings and better product pricing. For example, I don’t like to see the media I consume littered with advertisements for golf equipment or hunting gear, since those products are of no interest to me. Likewise, I hate it when a product I have already purchased keeps showing up in Facebook, as this is just a waste of my attention; the marketer should show me something complementary to what I already bought instead of wasting my time. There is also a good economic reason for optimizing advertising: if targeting were not available, companies would need to increase their advertising budgets every time a new media channel appeared, and those costs would be passed on to consumers as higher prices. From an advertiser perspective, there is no argument with the return on investment that leveraging data for targeting provides across all channels, which is why almost all companies engage in the practice. However, there are times when advertiser personalization attempts cross the line, and it recently happened to me.

Last December I had a health matter I needed to address. My doctor recommended I try a supplement that can only be bought online. After trying some samples provided by my doc, I went directly to the company’s website and made the purchase. I never viewed the company’s Facebook page nor saw an advertisement for the product on Facebook (i.e., I left no previous behavior on Facebook that could be tracked). One day later, a post from that same company showed up in my Facebook feed.

[Screenshot: Serenol ad in my Facebook feed]

I immediately yelled “Are You F***ing Kidding Me???” among other things. So dear reader… you now know I bought a supplement called Serenol, which helps alleviate PMS symptoms – hence my use of four-letter words above (yes, it works). From my perspective this was a complete invasion of my privacy, and it feels unethical. It may also be against HIPAA laws, or it should be! In the end, what this means is that Serenol, without my permission, disclosed my health condition to Facebook. It also raises the question: now that Facebook has this data on me, how will they use it moving forward?

Being from the data integration and marketing technology industry myself I personally have a moderate perspective on the use of data attributes for targeted marketing. I don’t want to see advertisements from companies that are completely irrelevant to me nor do I want to pay increased prices for goods and services, thus I have some comfort with use of my data. However, this scenario violated my personal boundaries, so I downloaded a tracker monitor and followed the data.

Ghostery provides a free mobile browser and browser plug-in for tracking the trackers – something anyone can access at no cost.

[Screenshot: Ghostery]

Ghostery shows you what types of trackers are firing on any website you visit. With this tool I learned there were multiple pixels firing on Serenol’s site, Facebook’s being among many. The two that interested me most were the “Facebook Custom Audiences” and “Facebook Pixel” trackers. The custom audience pixel enables Serenol (or any other advertiser) to build Facebook Custom Audiences from their website visitors.

A Facebook Custom Audience is essentially a targeting option created from an advertiser-owned customer list, so the advertiser can target those users on Facebook (Advertiser Help Center, 2018). Facebook Pixel is a small piece of code for websites that allows the site owner AND Facebook to log any Facebook user who visits the site (Brown, Why Facebook is not telling you everything it knows about you, 2017). Either of these methods would have enabled the survey post I was shown from Serenol. What likely happened is that Serenol and Facebook used these tags to conduct surveillance on me without my conscious knowledge and re-targeted me, hence the offending post. Yes – this is technically legal. Why? Because I most likely agreed to this surveillance in the terms of service and privacy policies on each site. Also, this method of targeting does not tell Serenol who I am on Facebook – only Facebook knows that. However, Facebook now has data that associates me with PMS!
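To make the mechanics a bit more concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of beacon a tracking pixel assembles when a page loads. The endpoint, parameter names and cookie value are hypothetical stand-ins, not Facebook’s actual pixel code or API.

from urllib.parse import urlencode

def build_pixel_beacon(pixel_id: str, page_url: str, ad_platform_cookie: str) -> str:
    """Assemble the kind of request a tracking pixel fires on page load.

    All names are illustrative; a real pixel is a JavaScript snippet served
    by the ad platform, but the information it carries is similar.
    """
    params = {
        "id": pixel_id,              # identifies the advertiser's pixel
        "event": "PageView",         # what the visitor just did
        "page": page_url,            # which page they did it on
        "uid": ad_platform_cookie,   # ties the visit to an existing ad-platform cookie
    }
    return "https://tracker.example.com/collect?" + urlencode(params)

# Example: the beacon that might fire on a product page visit.
print(build_pixel_beacon("1234567890",
                         "https://shop.example.com/products/serenol",
                         "abc123-cookie"))

The point of the sketch is simply that the page visit and the platform’s own cookie travel together, which is all the platform needs to match a site visitor to an account and build a retargeting audience without ever telling the advertiser who that visitor is.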

Facebook collects information on things you do, such as content you share, groups you are part of, things someone else may share about you (regardless of whether you granted permission), payment information, the internet-connected devices you and your family own, and information from third-party partners including advertisers (Data Policy, 2016). They can monitor your mouse movements, track the amount of time you spend on anything, and identify the subjects of your photos via machine learning algorithms. Furthermore, when you upload photos, Facebook scans each image and detects information about it, such as whether it contains humans, animals or inanimate objects, and which people you should tag in the picture (Brown, The amount of data Facebook collects from your photos will terrify you, 2017). The company states directly in its data policy that it uses the information it collects to improve its advertising (this means targeting) and to measure advertising effectiveness (Data Policy, 2016). While Facebook’s data policy states that it does not share personally identifiable information (PII), it does leverage non-personally-identifying demographic information for advertisement targeting, provided advertisers adhere to Facebook’s guidelines (Data Policy, 2016). This policy applies to all Facebook companies, including WhatsApp, Facebook Messenger and Instagram. So that private message you are sending on Messenger isn’t as private as you think – Facebook is collecting data on that content too. With Facebook owning four of the top five social media applications, isn’t this a little creepy?

The next obvious question is: how can this data be used for nefarious purposes? Facebook’s advertiser policies state that an advertiser can’t use targeting options to discriminate against people or engage in predatory advertising practices (Advertising Policies, n.d.). While Facebook does withhold some demographics from certain types of advertising, like housing, other targeting practices remain questionable. For example, last year an article in AdAge called out Facebook, LinkedIn and Google, all of which allow employment advertising to be targeted using age as a criterion. Facebook has defended using that demographic despite criticism that the practice contributes to ageism in the workforce and would be illegal in actual hiring practices (Sloane, 2017).

So, can Facebook use data about my PMS for targeting? Will they allow potential employers to use this data? What about health insurance companies? This is a slippery slope indeed. The answer is yes, and no. Facebook recently updated its policies and now prevents advertisers from using targeting attributes such as medical conditions (Perez, 2018). This means Facebook will not offer demographic selections in its targeting tools that select or exclude users based on medical conditions. That kind of targeting relies on third-party data – data provided by Facebook or other aggregators that the advertiser uses to create an audience. However, I did not find anything that prevents companies like Serenol from using first-party data to find me on Facebook. Furthermore, when I went to the Serenol site on February 21, 2018 (after the Facebook policy update), Ghostery showed that Facebook’s Pixel and Facebook for Developers, along with other pixels and tags from The Trade Desk, Adobe, Google, etc., were all live on the site.

This month’s Harvard Business Review includes an article about how consumers react to personalization. The authors ran a series of experiments to understand what causes consumers to object to targeting and found that we don’t always behave logically when it comes to privacy. People will often share details with complete strangers while keeping the same information secret from those with whom they have close relationships. The nature of the information also affects how we feel about it – data on sex, health and finances, for example, is much more sensitive. The way data changes hands (the information flow) matters too. The authors found that sharing data with a company directly (first-party sharing) generally feels fine, because it is necessary to purchase something or engage with the company. However, when that information is shared without our knowledge (third-party sharing), consumers react much as they would if a friend shared a secret or talked behind their backs. While third-party sharing of data is legal, the study showed that scenarios where companies obtain information outside the website the consumer interacted with, or infer information about someone from analytics, elicit a negative reaction. The study also found that when consumers believe their data has been shared unacceptably, purchase interest substantially declines (John, Kim, & Barasz, 2018). The authors’ recommendations for mitigating consumer backlash included staying away from sensitive subjects, maintaining transparency, and giving consumers choice and the ability to opt out.

I reached out to Michael Becker, Managing Partner at Identity Praxis, for his point of view on the subject. Michael is an entrepreneur, academic and industry evangelist who has been engaging and supporting the personal identity economy for over a decade. “People are becoming aware that their personal information has value and are awakening to the fact that its misuse is not just annoying, but can lead to material and lasting emotional, economic, and physical harm. They are awakening to the fact that they can enact control over their data. Consumers are starting to use password managers, identity anonymization tools, and tracker management tools [like Ghostery]; for instance, 38% of US adults have adopted ad blockers and this is just the beginning. Executives should take heed that a new class of software and services, personal information management solutions, are coming to the market. These solutions, alongside new regulations (like the EU GDPR), give individuals, at scale, the power to determine what information about them is shared, who has access to it, when it can be used, and on what terms. In other words, the core terms of business may change in the very near future from people having to agree to the business’s terms of service to businesses having to agree to the individual’s terms of access.”

In the United States, the approach to regulating personal data collection and use is that if an action isn’t expressly forbidden, companies can do it, ethical or not. Unfortunately, regulation does not necessarily keep pace with innovation in data collection. In Europe, the approach is the opposite: unless a personal data collection practice and its use is explicitly permitted, companies CANNOT do it. There are some actions you can take to manage passive data collection; this list is not meant to be exhaustive:

  • Use Brave Browser: This browser blocks ads and trackers on the sites you visit. Brave claims it will increase page-load speeds, save money on your mobile data plan since you don’t have to load ads, and protect your information.
  • Use Ghostery: It lets you choose which trackers to allow on each site you visit, or block trackers entirely.
  • Add a script-blocker plug-in to your browser, such as No-script. No-script maintains a whitelist of trustworthy websites and lets you choose which sites are allowed to run scripts (a minimal sketch of this whitelist idea follows this list).
  • Review which applications have permission to track your data on your mobile device and limit them. Do you really want Apple sharing your contact list and calendar with other applications? Do all applications need access to your fitness and activity data? You can find helpful instructions for iPhone here or for Android here.
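To illustrate the whitelist idea behind the script blockers mentioned above, here is a minimal sketch (again in Python, purely for illustration): scripts run only when the page’s host is on a list the user has explicitly approved. The hosts and function name are hypothetical, not the extension’s actual code.

from urllib.parse import urlparse

# Hosts the user has explicitly approved; everything else is blocked by default.
TRUSTED_HOSTS = {"mybank.com", "news.example.com"}

def allow_scripts(page_url: str) -> bool:
    """Return True only when the page's host is on the trusted list."""
    host = urlparse(page_url).hostname or ""
    if host.startswith("www."):      # treat "www.mybank.com" like "mybank.com"
        host = host[4:]
    return host in TRUSTED_HOSTS

print(allow_scripts("https://www.mybank.com/login"))        # True: whitelisted
print(allow_scripts("https://random-tracker.example/ad"))   # False: blocked by default

The design choice worth noticing is the default: deny everything and opt sites in, which is the opposite of how most browsers treat third-party scripts today.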

Regardless of what is legal or illegal, comfort levels with how our personal data is used vary by individual. When you think about it, there is a similarity to the debate in the ’60s over what constituted obscenity. When we find a use of our personal data offensive, we will likely say, “I’ll know it when I see it.”


Advertiser Help Center. (2018). Retrieved from Facebook Business:

Advertising Policies. (n.d.). Retrieved February 20, 2018, from Facebook:

Brown, A. (2017, January 6). The amount of data Facebook collects from your photos will terrify you. Retrieved February 20, 2018, from Express:

Brown, A. (2017, January 2). Why Facebook is not telling you everything it knows about you. Retrieved February 2018, from Express:

Data Policy. (2016, September 29). Retrieved from Facebook:

John, L. K., Kim, T., & Barasz, K. (2018, February). Ads that don’t overstep. Harvard Business Review, pp. 62-69.

Perez, S. (2018, February 8). Facebook updates its ad policies and tools to protect against discriminatory practices. Retrieved from Techcrunch:

Sloane, G. (2017, December 21). Facebook defends targeting job ads based on age. Retrieved from Ad Age:







A new study shows that students learn way more effectively from print textbooks than screens



Students told researchers they preferred reading on screens and believed they performed better that way. But their actual performance tended to suffer.

Source: A new study shows that students learn way more effectively from print textbooks than screens

Today’s students see themselves as digital natives, the first generation to grow up surrounded by technology like smartphones, tablets and e-readers.

Teachers, parents and policymakers certainly acknowledge the growing influence of technology and have responded in kind. We’ve seen more investment in classroom technologies, with students now equipped with school-issued iPads and access to e-textbooks.

In 2009, California passed a law requiring that all college textbooks be available in electronic form by 2020; in 2011, Florida lawmakers passed legislation requiring public schools to convert their textbooks to digital versions.

Given this trend, teachers, students, parents and policymakers might assume that students’ familiarity and preference for technology translates into better learning outcomes. But we’ve found that’s not necessarily true.

As researchers in learning and text comprehension, our recent work has focused on the differences between reading print and digital media. While new forms of classroom technology like digital textbooks are more accessible and portable, it would be wrong to assume that students will automatically be better served by digital reading simply because they prefer it.

Speed – at a cost

Our work has revealed a significant discrepancy. Students said they preferred reading on screens and believed they performed better that way. But their actual performance tended to suffer.

For example, from our review of research done since 1992, we found that students were able to better comprehend information in print for texts that were more than a page in length. This appears to be related to the disruptive effect that scrolling has on comprehension. We were also surprised to learn that few researchers tested different levels of comprehension or documented reading time in their studies of printed and digital texts.

To explore these patterns further, we conducted three studies that explored college students’ ability to comprehend information on paper and from screens.

Students first rated their medium preferences. After reading two passages, one online and one in print, these students then completed three tasks: Describe the main idea of the texts, list key points covered in the readings and provide any other relevant content they could recall. When they were done, we asked them to judge their comprehension performance.

Across the studies, the texts differed in length, and we collected varying data (e.g., reading time). Nonetheless, some key findings emerged that shed new light on the differences between reading printed and digital content:

  • Students overwhelmingly preferred to read digitally.
  • Reading was significantly faster online than in print.
  • Students judged their comprehension as better online than in print.
  • Paradoxically, overall comprehension was better for print versus digital reading.
  • The medium didn’t matter for general questions (like understanding the main idea of the text).
  • But when it came to specific questions, comprehension was significantly better when participants read printed texts.


Placing print in perspective

From these findings, there are some lessons that can be conveyed to policymakers, teachers, parents and students about print’s place in an increasingly digital world.

1. Consider the purpose

We all read for many reasons. Sometimes we’re looking for an answer to a very specific question. Other times, we want to browse a newspaper for today’s headlines.

As we’re about to pick up an article or text in a printed or digital format, we should keep in mind why we’re reading. There’s likely to be a difference in which medium works best for which purpose.

In other words, there’s no “one medium fits all” approach.

2. Analyze the task

One of the most consistent findings from our research is that, for some tasks, medium doesn’t seem to matter. If all students are being asked to do is to understand and remember the big idea or gist of what they’re reading, there’s no benefit in selecting one medium over another.

But when the reading assignment demands more engagement or deeper comprehension, students may be better off reading print. Teachers could make students aware that their ability to comprehend the assignment may be influenced by the medium they choose. This awareness could lessen the discrepancy we witnessed in students’ judgments of their performance vis-à-vis how they actually performed.

[Photo: Elementary school children use electronic tablets on the first day of class in the new school year in Nice, September 3, 2013. REUTERS/Eric Gaillard]

3. Slow it down

In our third experiment, we were able to create meaningful profiles of college students based on the way they read and comprehended from printed and digital texts.

Among those profiles, we found a select group of undergraduates who actually comprehended better when they moved from print to digital. What distinguished this atypical group was that they actually read slower when the text was on the computer than when it was in a book. In other words, they didn’t take the ease of engaging with the digital text for granted. Using this select group as a model, students could possibly be taught or directed to fight the tendency to glide through online texts.

4. Something that can’t be measured

There may be economic and environmental reasons to go paperless. But there’s clearly something important that would be lost with print’s demise.

In our academic lives, we have books and articles that we regularly return to. The dog-eared pages of these treasured readings contain lines of text etched with questions or reflections. It’s difficult to imagine a similar level of engagement with a digital text. There should probably always be a place for print in students’ academic lives – no matter how technologically savvy they become.

Of course, we realize that the march toward online reading will continue unabated. And we don’t want to downplay the many conveniences of online texts, which include breadth and speed of access.

Rather, our goal is simply to remind today’s digital natives – and those who shape their educational experiences – that there are significant costs and consequences to discounting the printed word’s value for learning and academic development.

Something universal occurs in the brain when it processes stories, regardless of language


New brain research shows that reading stories generates activity in the same regions of the brain for speakers of three different languages.


English, Farsi and Mandarin readers use the same parts of the brain to decode the deeper meaning of what they’re reading.
Credit: Morteza Dehghani, et al.


Source: Something universal occurs in the brain when it processes stories, regardless of language

Date: October 5, 2017

Source: University of Southern California

New brain research by USC scientists shows that reading stories is a universal experience that may result in people feeling greater empathy for each other, regardless of cultural origins and differences.

And in what appears to be a first for neuroscience, USC researchers have found patterns of brain activation when people find meaning in stories, regardless of their language. Using functional MRI, the scientists mapped brain responses to narratives in three different languages — English, Farsi and Mandarin Chinese.

The USC study opens up the possibility that exposure to narrative storytelling can have a widespread effect on triggering better self-awareness and empathy for others, regardless of the language or origin of the person being exposed to it.

“Even given these fundamental differences in language, which can be read in a different direction or contain a completely different alphabet altogether, there is something universal about what occurs in the brain at the point when we are processing narratives,” said Morteza Dehghani, the study’s lead author and a researcher at the Brain and Creativity Institute at USC.

Dehghani is also an assistant professor of psychology at the USC Dornsife College of Letters, Arts and Sciences, and an assistant professor of computer science at the USC Viterbi School of Engineering.

The study was published on Sept. 20 in the journal Human Brain Mapping.

Making sense of 20 million personal anecdotes

The researchers sorted through more than 20 million blog posts of personal stories using software developed at the USC Institute for Creative Technologies. The posts were narrowed down to 40 stories about personal topics such as divorce or telling a lie.

They were then translated into Mandarin Chinese and Farsi, and read by a total of 90 American, Chinese and Iranian participants in their native language while their brains were scanned by MRI. The participants also answered general questions about the stories while being scanned.

Using state-of-the-art machine learning and text-analysis techniques, and an analysis involving over 44 billion classifications, the researchers were able to “reverse engineer” the data from these brain scans to determine the story the reader was processing in each of the three languages. In effect, the neuroscientists were able to read the participants’ minds as they were reading.
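That decoding step is, at its core, a classification problem: given a pattern of brain activity, predict which of the 40 stories the participant was reading. The toy sketch below shows the setup with scikit-learn on random stand-in data; it is only an illustration of the idea, not the USC team’s actual pipeline, and the data shapes are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_participants, n_stories, n_features = 90, 40, 200

# One stand-in "brain response" vector per participant per story
# (a real study would use fMRI-derived features, not random numbers).
X = rng.normal(size=(n_participants * n_stories, n_features))
y = np.tile(np.arange(n_stories), n_participants)  # label: which story was read

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, X, y, cv=5)
print(f"Mean decoding accuracy on random data: {scores.mean():.3f} "
      f"(chance is {1 / n_stories:.3f})")

On real data, decoding accuracy well above chance across English, Farsi and Mandarin readers is what supports the claim that the brain represents a story’s meaning in a broadly language-independent way.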

The brain is not resting

In the case of each language, reading each story resulted in unique patterns of activations in the “default mode network” of the brain. This network engages interconnected brain regions such as the medial prefrontal cortex, the posterior cingulate cortex, the inferior parietal lobe, the lateral temporal cortex and hippocampal formation.

The default mode network was originally thought to be a sort of autopilot for the brain at rest, active only when someone is not engaged in externally directed thinking. Continued studies, including this one, suggest that the default mode network actually works behind the scenes while the mind is ostensibly at rest, continually finding meaning in narrative and serving an autobiographical memory retrieval function that influences our cognition related to the past, the future, ourselves and our relationship to others.

“One of the biggest mysteries of neuroscience is how we create meaning out of the world. Stories are deep-rooted in the core of our nature and help us create this meaning,” said Jonas Kaplan, corresponding author at the Brain and Creativity Institute and an assistant professor of psychology at USC Dornsife.

Story Source:

Materials provided by University of Southern California. Note: Content may be edited for style and length.

Journal Reference:

  1. Morteza Dehghani, Reihane Boghrati, Kingson Man, Joe Hoover, Sarah I. Gimbel, Ashish Vaswani, Jason D. Zevin, Mary Helen Immordino-Yang, Andrew S. Gordon, Antonio Damasio, Jonas T. Kaplan. Decoding the neural representation of story meanings across languages. Human Brain Mapping, 2017; DOI: 10.1002/hbm.23814


What is Your Phone Doing to Your Relationships?

New research is exploring how phubbing—ignoring someone in favor of our mobile phone—hurts our relationships, and what we can do about it.

Source: What is Your Phone Doing to Your Relationships?


Phubbing is the practice of snubbing others in favor of our mobile phones. We’ve all been there, as either victim or perpetrator. We may no longer even notice when we’ve been phubbed (or are phubbing); it has become such a normal part of life. However, research studies are revealing the profound impact phubbing can have on our relationships and well-being.

There’s an irony in phubbing. When we’re staring at our phones, we’re often connecting with someone on social media or through texting. Sometimes, we’re flipping through our pictures the way we once turned the pages of photo albums, remembering moments with people we love. Unfortunately, however, this can severely disrupt our actual, present-moment, in-person relationships, which also tend to be our most important ones.

The research shows that phubbing isn’t harmless—but the studies to date also point the way to a healthier relationship with our phones and with each other.

What phubbing does to us

According to one study of 145 adults, phubbing decreases marital satisfaction, in part because it leads to conflict over phone use. The researchers found that phubbing, by lowering marital satisfaction, affected a partner’s depression and satisfaction with life. A follow-up study by Chinese scientists assessed 243 married adults with similar results: Partner phubbing, because it was associated with lower marital satisfaction, contributed to greater feelings of depression. In a study poignantly titled “My life has become a major distraction from my cell phone,” Meredith David and James Roberts suggest that phubbing can lead to a decline in one of the most important relationships we can have as an adult: the one with our life partner.

Phubbing also shapes our casual friendships. Not surprisingly to anyone who has been phubbed, phone users are generally seen as less polite and attentive. Let’s not forget that we are extremely attuned to people. When someone’s eyes wander, we intuitively know what brain studies also show: The mind is wandering. We feel unheard, disrespected, disregarded.

A set of studies actually showed that just having a phone out and present during a conversation (say, on the table between you) interferes with your sense of connection to the other person, the feelings of closeness experienced, and the quality of the conversation. This phenomenon is especially the case during meaningful conversations—you lose the opportunity for true and authentic connection to another person, the core tenet of any friendship or relationship.

In fact, many of the problems with mobile interaction relate to distraction from the physical presence of other people. According to these studies, conversations with no smartphones present are rated as significantly higher-quality than those with smartphones around, regardless of people’s age, ethnicity, gender, or mood. We feel more empathy when smartphones are put away.

This makes sense. When we are on our phones, we are not looking at other people and not reading their facial expressions (tears in their eyes, frowns, smiles). We don’t hear the nuances in their tone of voice (was it shaky with anxiety?), or notice their body posture (slumped and sad? or excited and enthusiastic?).

No wonder phubbing harms relationships.

The way of the phubbed

What do “phubbed” people tend to do?

According to a study published in March of this year, they themselves start to turn to social media. Presumably, they do so to seek inclusion. They may turn to their cell phone to distract themselves from the very painful feelings of being socially neglected. We know from brain-imaging research that being excluded registers as actual physical pain in the brain. Phubbed people in turn become more likely to attach themselves to their phones in unhealthy ways, thereby increasing their own feelings of stress and depression.

A Facebook study shows that how we interact on Facebook affects whether it makes us feel good or bad. When we use social media just to passively view others’ posts, our happiness decreases. Another study showed that social media actually makes us more lonely.

“It is ironic that cell phones, originally designed as a communication tool, may actually hinder rather than foster interpersonal connectedness,” write David and Roberts in their study “Phubbed and Alone.” Their results suggest the creation of a vicious circle: A phubbed individual turns to social media and their compulsive behavior presumably leads them to phub others—perpetuating and normalizing the practice and problem of “phubbing.”

“It is ironic that cell phones, originally designed as a communication tool, may actually hinder rather than foster interpersonal connectedness”

―Meredith David and James Roberts

Why do people get into the phubbing habit in the first place? Not surprisingly, fear of missing out and lack of self-control predict phubbing. However, the most important predictor is addiction—to social media, to the cell phone, and to the Internet. Internet addiction has brain correlates similar to physiological addictions, such as addiction to heroin and other recreational drugs. The impact of this addiction is particularly worrisome for children, whose brains and social skills are still developing.

Nicholas Kardaras, former Stony Brook Medicine clinical professor and author of Glow Kids, goes so far as to liken screen time to digital cocaine. Consider this: The urge to check social media is stronger than the urge for sex, according to research by the University of Chicago’s Wilhelm Hoffman.

These findings come as no surprise—decades of research have shown that our greatest need after food and shelter is for positive social connections with other people. We are profoundly social people for whom connection and a sense of belonging are crucial for health and happiness. (In fact, lack thereof is worse for you than smoking, high blood pressure, and obesity.) So, we err sometimes. We look for connection on social media at the cost of face-to-face opportunities for true intimacy.

The urge to check social media might be stronger than the urge for sex.

How to stop phubbing people

To prevent phubbing, awareness is the only solution. Know that what drives you and others is to connect and to belong. While you may not be able to control the behavior of others, you yourself have opportunities to model something different.

Research by Barbara Fredrickson, beautifully described in her book Love 2.0, suggests that intimacy happens in micro-moments: talking over breakfast, the exchange with the UPS guy, the smile of a child. The key is to be present and mindful. A revealing study showed that we are happiest when we are present, no matter what we are doing. Can we be present with the person in front of us right now, no matter who it is?

Studies by Paula Niedenthal reveal that the most essential and intimate form of connection is eye contact. Yet social media is primarily verbal. Research conducted by scientists like the GGSC’s Dacher Keltner and others have shown that posture and the most minute facial expressions (the tightening of our lips, the crow’s feet of smiling eyes, upturned eyebrows in sympathy or apology) communicate more than our words.

Most importantly, they are at the root of empathy—the ability to sense what another person is feeling—which is so critical to authentic human connection. Research shows that altruism and compassion also make us happier and healthier, and can even lengthen our lives. True connection thrives on presence, openness, observation, compassion, and, as Brené Brown has so beautifully shared in her TED talk and her bestselling book Daring Greatly, vulnerability. It takes courage to connect with another person authentically, yet it is also the key to fulfillment.

What to do if you are phubbed

What if you are phubbed? Patience and compassion are key here. Understand that the phubber is probably not doing it with malicious intent, but rather is following an impulse (sometimes irresistible) to connect. Just like you or I, their goal is not to exclude. To the contrary, they are looking for a feeling of inclusion. After all, a telling sociological study shows that loneliness is rising at an alarming rate in our society.

What’s more, age and gender play a role in people’s reactions to phubbing. According to studies, older participants and women advocate for more restricted phone use in most social situations. Men differ from women in that they view phone calls as more appropriate in virtually all environments, including—and this is quite shocking—intimate settings. Similarly, in classrooms, male students find phubbing far less disturbing than their female counterparts.

Perhaps even worse than disconnecting from others, however, Internet addiction and phubbing disconnect us from ourselves. Plunged into a virtual world, we hunch over a screen, strain our eyes unnecessarily, and tune out completely from our own needs—for sleep, exercise, even food. A disturbing study indicates that for every minute we spend online for leisure, we’re not just compromising our relationships, we are also losing precious self-care time (e.g., sleep, household activities) and productivity.

So, the next time you’re with another human and you feel tempted to pull out your phone—stop. Put it away. Look them in the eyes, and listen to what they have to say. Do it for them, do it for yourself, do it to make the world a better place.


This article was adapted from Greater Good, the online magazine of UC Berkeley’s Greater Good Science Center, one of Mindful’s partners. View the original article.