2 Psychological Tactics Marketers Use to Target Impulse Purchasing – Espada Blue

 

About common psychological retail practices used to persuade consumers to buy more.

Source: 2 Psychological Tactics Marketers Use to Target Impulse Purchasing – Espada Blue

Self-Control and Impulse Purchasing

There’s a bunch of really interesting literature on self-control and impulse purchasing. From the view of functional brain neuroanatomy, self-control is defined as the effortful inhibition of impulses, and it is likely to be lost when certain brain areas, such as the lateral prefrontal cortex, are highly activated. Counterintuitively, one might expect self-control failure to follow from insufficient activation; instead, it is hypothesised that higher activation reflects a fatigue of cognitive resources, which can no longer operate in an inhibiting fashion, so the individual’s capacity to regulate their own actions starts to decrease. Unsurprisingly, this has an enormous effect on consumer behaviour, because higher activation of the lateral prefrontal cortex impairs decision making and results in impulsive actions and choices.

What’s interesting about the mechanics of self-control is that it relates to detailed versus abstract thinking at a given point in time. Specifically, detailed thoughts have been shown to diminish self-control efforts, while abstract thoughts strengthen them. This is believed to happen because detailed thoughts tend to be more emotionally charged and therefore activate associative emotional experiences, known as “hot states”. These “hot states” are typically more difficult to control and thus require more cognitive resources to inhibit impulsive behaviour such as impulse purchasing. In contrast, abstract thoughts are more neutral, so they operate on the basis of more rational “cool states” and allow behaviour to be regulated in a more reflective manner.

Undoubtedly, these ideas can be practically applied to shape a consumer’s mindset by activating detailed or abstract thoughts and thereby influence purchasing choices. To this end, marketers often apply prevention and promotion frames, which have been shown to amplify the effect of a given mindset.

 

Prevention Frame

The prevention frame relates to an individual’s self-control strategy of avoiding losses. As an example, “25% less fat” versus “75% more lean meat” would be prevention- and promotion-focused framings of the same message. What’s even more interesting is that, despite the different framing, the messages communicate essentially the same information yet elicit a different psychological response from the consumer.

The prevention frame emphasises the avoidance of losses and is related to strategic vigilance. This type of strategy is more compatible with negative emotions and undesired product attributes, because it evokes detailed rather than abstract thought processing. Once these two mechanisms are matched, the shared valence of the information increases the speed of information processing and creates a perception of ease. Specifically, this mechanism induces a liking effect, which builds extra willingness to own that specific product and is linked to impulse purchasing. In other words, fluent information processing influences liking of the product and has a persuasive effect on purchasing behaviour.

 

Promotion Frame

The promotion frame emphasises the pursuit of gains and is related to strategic eagerness. This type of strategy is more compatible with positive emotions and less concrete product attributes such as great, safe or comfortable, because it evokes abstract rather than detailed thought processing. Abstract thinking strengthens self-control efforts, which helps the consumer stay focused on an end-state goal. This mechanism prevents consumers from deviating from their planned actions and increases the motivational intensity with which they pursue a defined goal. In other words, the promotion frame is more powerful for influencing consumers who know what they came for.

Have you noticed this type of framing in supermarkets?

 

References

Fujita, K., Trope, Y., Liberman, N., & Levin-Sagi, M. (2006). Construal levels and self-control. Journal of Personality and Social Psychology, 90(3), 351.

Lee, A. Y., & Aaker, J. L. (2004). Bringing the frame into focus: the influence of regulatory fit on processing fluency and persuasion. Journal of Personality and Social Psychology, 86(2), 205.

Trope, Y., & Liberman, N. (2003). Temporal construal. Psychological Review, 110(3), 403.

The scientists who make apps addictive

By IAN LESLIE | OCTOBER/NOVEMBER 2016

Tech companies use the insights of behaviour design to keep us returning to their products. But some of the psychologists who developed the science of persuasion are worried about how it is being used

Source: The scientists who make apps addictive

In 1930, a psychologist at Harvard University called B.F. Skinner made a box and placed a hungry rat inside it. The box had a lever on one side. As the rat moved about it would accidentally knock the lever and, when it did so, a food pellet would drop into the box. After a rat had been put in the box a few times, it learned to go straight to the lever and press it: the reward reinforced the behaviour. Skinner proposed that the same principle applied to any “operant”, rat or man. He called his device the “operant conditioning chamber”. It became known as the Skinner box.

Skinner was the most prominent exponent of a school of psychology called behaviourism, the premise of which was that human behaviour is best understood as a function of incentives and rewards. Let’s not get distracted by the nebulous and impossible to observe stuff of thoughts and feelings, said the behaviourists, but focus simply on how the operant’s environment shapes what it does. Understand the box and you understand the behaviour. Design the right box and you can control behaviour.

Skinner turned out to be the last of the pure behaviourists. From the late 1950s onwards, a new generation of scholars redirected the field of psychology back towards internal mental processes, like memory and emotion. But behaviourism never went away completely, and in recent years it has re-emerged in a new form, as an applied discipline deployed by businesses and governments to influence the choices you make every day: what you buy, who you talk to, what you do at work. Its practitioners are particularly interested in how the digital interface – the box in which we spend most of our time today – can shape human decisions. The name of this young discipline is “behaviour design”. Its founding father is B.J. Fogg.

Earlier this year I travelled to Palo Alto to attend a workshop on behaviour design run by Fogg on behalf of his employer, Stanford University. Roaming charges being what they are, I spent a lot of time hooking onto Wi-Fi in coffee bars. The phrase “accept and connect” became so familiar that I started to think of it as a Californian mantra. Accept and connect, accept and connect, accept and connect.

I had never used Uber before, and since I figured there is no better place on Earth to try it out, I opened the app in Starbucks one morning and summoned a driver to take me to Stanford’s campus. Within two minutes, my car pulled up, and an engineering student from Oakland whisked me to my destination. I paid without paying. It felt magical. The workshop was attended by 20 or so executives from America, Brazil and Japan, charged with bringing the secrets of behaviour design home to their employers.

Fogg is 53. He travels everywhere with two cuddly toys, a frog and a monkey, which he introduced to the room at the start of the day. Fogg dings a toy xylophone to signal the end of a break or group exercise. Tall, energetic and tirelessly amiable, he frequently punctuates his speech with peppy exclamations such as “awesome” and “amazing”. As an Englishman, I found this full-beam enthusiasm a little disconcerting at first, but after a while, I learned to appreciate it, just as Europeans who move to California eventually cease missing the seasons and become addicted to sunshine. Besides, Fogg was likeable. His toothy grin and nasal delivery made him endearingly nerdy.

In a phone conversation prior to the workshop, Fogg told me that he read the classics in the course of a master’s degree in the humanities. He never found much in Plato, but strongly identified with Aristotle’s drive to organise and catalogue the world, to see systems and patterns behind the confusion of phenomena. He says that when he read Aristotle’s “Rhetoric”, a treatise on the art of persuasion, “It just struck me, oh my gosh, this stuff is going to be rolled out in tech one day!”

In 1997, during his final year as a doctoral student, Fogg spoke at a conference in Atlanta on the topic of how computers might be used to influence the behaviour of their users. He noted that “interactive technologies” were no longer just tools for work, but had become part of people’s everyday lives: used to manage finances, study and stay healthy. Yet technologists were still focused on the machines they were making rather than on the humans using those machines. What, asked Fogg, if we could design educational software that persuaded students to study for longer or a financial-management programme that encouraged users to save more? Answering such questions, he argued, required the application of insights from psychology.

Fogg presented the results of a simple experiment he had run at Stanford, which showed that people spent longer on a task if they were working on a computer which they felt had previously been helpful to them. In other words, their interaction with the machine followed the same “rule of reciprocity” that psychologists had identified in social life. The experiment was significant, said Fogg, not so much for its specific finding as for what it implied: that computer applications could be methodically designed to exploit the rules of psychology in order to get people to do things they might not otherwise do. In the paper itself, he added a qualification: “Exactly when and where such persuasion is beneficial and ethical should be the topic of further research and debate.”

Fogg called for a new field, sitting at the intersection of computer science and psychology, and proposed a name for it: “captology” (Computers as Persuasive Technologies). Captology later became behaviour design, which is now embedded into the invisible operating system of our everyday lives. The emails that induce you to buy right away, the apps and games that rivet your attention, the online forms that nudge you towards one decision over another: all are designed to hack the human brain and capitalise on its instincts, quirks and flaws. The techniques they use are often crude and blatantly manipulative, but they are getting steadily more refined, and, as they do so, less noticeable.

Fogg’s Atlanta talk provoked strong responses from his audience, falling into two groups: either “This is dangerous. It’s like giving people the tools to construct an atomic bomb”; or “This is amazing. It could be worth billions of dollars.”

The second group has certainly been proved right. Fogg has been called “the millionaire maker”. Numerous Silicon Valley entrepreneurs and engineers have passed through his laboratory at Stanford, and some have made themselves wealthy.

Fogg himself has not made millions of dollars from his insights. He stayed at Stanford, and now does little commercial work. He is increasingly troubled by the thought that those who told him his ideas were dangerous may have been on to something.

At the workshop, Fogg explained the building blocks of his theory of behaviour change. For somebody to do something – whether it’s buying a car, checking an email, or doing 20 press-ups – three things must happen at once. The person must want to do it, they must be able to, and they must be prompted to do it. A trigger – the prompt for the action – is effective only when the person is highly motivated, or the task is very easy. If the task is hard, people end up frustrated; if they’re not motivated, they get annoyed.

One of Fogg’s current students told me about a prototype speech-therapy program he was helping to modify. Talking to its users, he discovered that parents, who really wanted it to work, found it tricky to navigate – they were frustrated. Their children found it easy to use, but weren’t bothered about doing so – they were merely annoyed. Applying Fogg’s framework helped identify a way forward. Parents would get over the action line if the program was made simpler to use; children if it felt like a game instead of a lesson.

Frustration, says Fogg, is usually more fixable than annoyance. When we want people to do something our first instinct is usually to try to increase their motivation – to persuade them. Sometimes this works, but more often than not the best route is to make the behaviour easier. One of Fogg’s maxims is, “You can’t get people to do something they don’t want to do.” A politician who wants people to vote for her makes a speech or goes on TV instead of sending a bus to pick voters up from their homes. The bank advertises the quality of its current account instead of reducing the number of clicks required to open one.
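The model described above reduces to three things that must line up at the same moment: motivation, ability and a trigger, with motivation and ability trading off against each other across the “action line”. Below is a minimal Python sketch of that idea; the multiplicative score, the numeric threshold and the diagnose() helper are illustrative assumptions, not anything Fogg specifies.

```python
# A toy rendering of Fogg's behaviour model: a behaviour fires only when a
# trigger arrives while motivation x ability clears an "action line".
# The threshold and the scoring rule are invented for illustration.

from dataclasses import dataclass

ACTION_LINE = 0.25  # assumed threshold; the model gives no numeric value


@dataclass
class Moment:
    motivation: float  # 0.0 (none) to 1.0 (very high)
    ability: float     # 0.0 (very hard) to 1.0 (effortless)
    prompted: bool     # did a trigger arrive at this moment?


def behaviour_occurs(m: Moment) -> bool:
    """The behaviour happens only if a trigger lands above the action line."""
    if not m.prompted:
        return False  # no trigger, no behaviour, however motivated the person
    return m.motivation * m.ability >= ACTION_LINE


def diagnose(m: Moment) -> str:
    """Roughly mirror Fogg's advice: fix ability before chasing motivation."""
    if behaviour_occurs(m):
        return "behaviour occurs"
    if not m.prompted:
        return "no trigger: put a hot trigger in the path of this person"
    if m.ability < m.motivation:
        return "frustration: make the behaviour easier"
    return "annoyance: the prompt is wasted on an unmotivated person"


if __name__ == "__main__":
    print(diagnose(Moment(motivation=0.9, ability=0.1, prompted=True)))   # frustrated parents
    print(diagnose(Moment(motivation=0.1, ability=0.9, prompted=True)))   # annoyed children
    print(diagnose(Moment(motivation=0.8, ability=0.8, prompted=False)))  # nothing happens
    print(diagnose(Moment(motivation=0.8, ability=0.8, prompted=True)))   # behaviour occurs
```

Run on the examples above, the sketch labels the high-motivation, low-ability case as frustration (make it easier) and the low-motivation, high-ability case as annoyance, mirroring the speech-therapy example.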

When you get to the end of an episode of “House of Cards” on Netflix, the next episode plays automatically unless you tell it to stop. Your motivation is high, because the last episode has left you eager to know what will happen and you are mentally immersed in the world of the show. The level of difficulty is reduced to zero. Actually, less than zero: it is harder to stop than to carry on. Working on the same principle, the British government now “nudges” people into enrolling into workplace pension schemes, by making it the default option rather than presenting it as a choice.

When motivation is high enough, or a task easy enough, people become responsive to triggers such as the vibration of a phone, Facebook’s red dot, the email from the fashion store featuring a time-limited offer on jumpsuits. The trigger, if it is well designed (or “hot”), finds you at exactly the moment you are most eager to take the action. The most important nine words in behaviour design, says Fogg, are, “Put hot triggers in the path of motivated people.”

If you’re triggered to do something you don’t like, you probably won’t return, but if you love it you’ll return repeatedly – and unthinkingly. After my first Uber, I never even thought of getting around Palo Alto any other way. This, says Fogg, is how brands should design for habits. The more immediate and intense a rush of emotion a person feels the first time they use something, the more likely they are to make it an automatic choice. It’s why airlines bring you a glass of champagne the moment you sink into a business-class seat, and why Apple takes enormous care to ensure that a customer’s first encounter with a new phone feels magical.

Such upfront deliveries of dopamine bond users to products. Consider the way Instagram lets you try 12 different filters on your picture, says Fogg. Sure, there’s a functional benefit: the user has control over their images. But the real transaction is emotional: before you even post anything, you get to feel like an artist. Hence another of Fogg’s principles: “Make people feel successful” or, to rephrase it, “Give them superpowers!”

Fogg took ambivalent satisfaction from the example of Instagram, since he felt distantly responsible for it and perhaps distantly guilty. In 2006, two students in Fogg’s class collaborated on a project called Send the Sunshine. Their insight was that one day mobile phones (this was the pre-smartphone era) would be used to send emotions: if your friend was in a place where the weather wasn’t good and you were standing in sunshine, your phone could prompt you to take a picture and send it to them to cheer them up. One of the two students, Mike Krieger, went on to co-found Instagram, where over 400m users now share sunrises, sunsets and selfies.

Fogg built his theory in the years before social media conquered the world. Facebook, Instagram and others have raised behaviour design to levels of sophistication he could hardly have envisaged. Social-media apps plumb one of our deepest wells of motivation. The human brain releases pleasurable, habit-forming chemicals in response to social interactions, even to mere simulacra of them, and the hottest triggers are other people: you and your friends or followers are constantly prompting each other to use the service for longer.

Fogg introduced me to one of his former students, Noelle Moseley, who now consults for technology companies. She told me that she had recently interviewed heavy users of Instagram: young women who cultivated different personas on different social networks. Their aim was to get as many followers as possible – that was their definition of success. Every new follow and every comment delivered an emotional hit. But a life spent chasing hits didn’t feel good. Moseley’s respondents spent all their hours thinking about how to organise their lives in order to take pictures they could post to each persona, which meant they weren’t able to enjoy whatever they were doing, which made them stressed and unhappy. “It was like a sickness,” said Moseley.

B.J. Fogg comes from a Mormon family, which has endowed him with his bulletproof geniality and also with a strong need to believe that his work is making the world a better place. The only times during our conversations when his tone darkened were when he considered the misuse of his ideas in the commercial sphere. He worries that companies like Instagram and Facebook are using behaviour design merely to keep consumers in thrall to them. One of his alumni, Nir Eyal, went on to write a successful book, aimed at tech entrepreneurs, called “Hooked: How to Build Habit-Forming Products”.

“I look at some of my former students and I wonder if they’re really trying to make the world better, or just make money,” said Fogg. “What I always wanted to do was un-enslave people from technology.”

When B.F. Skinner performed further experiments with his box, he discovered that if the rat got the same reward each time, it pulled the lever only when it was hungry. The way to maximise the number of times the rat pulled the lever was to vary the rewards it received. If it didn’t know whether it was going to get one pellet, or none, or several when it pulled the lever, then it pulled the lever over and over again. It became psychologically hooked. This became known as the principle of variable rewards.
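The distinction Skinner found is between a fixed reward schedule and a variable one. The short sketch below only shows what the two schedules look like from the lever’s side; the pellet amounts and probabilities are invented so that both average one pellet per press, and it makes no attempt to model the rat’s behaviour.

```python
# Fixed versus variable reward schedules, as seen from the lever. Both hand
# out the same number of pellets on average; only the predictability of each
# individual press differs, which is the property "variable rewards" exploits.
# The pellet menu below is invented for illustration.

import random

random.seed(7)  # fixed seed so the demo is reproducible


def fixed_schedule(presses: int) -> list[int]:
    """The same reward on every press: one pellet, completely predictable."""
    return [1] * presses


def variable_schedule(presses: int) -> list[int]:
    """Sometimes nothing, sometimes one pellet, occasionally several.
    The menu also averages out to one pellet per press."""
    return [random.choice([0, 0, 1, 1, 1, 3]) for _ in range(presses)]


if __name__ == "__main__":
    n = 20
    fixed = fixed_schedule(n)
    variable = variable_schedule(n)
    print("fixed:   ", fixed)
    print("variable:", variable)
    print("pellets per press:", sum(fixed) / n, "vs", sum(variable) / n)
```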

In “Hooked”, Eyal argues that successful digital products incorporate Skinner’s insight. Facebook, Pinterest and others tap into basic human needs for connection, approval and affirmation, and dispense their rewards on a variable schedule. Every time we open Instagram or Snapchat or Tinder, we never know if someone will have liked our photo, or left a comment, or written a funny status update, or dropped us a message. So we keep tapping the red dot, swiping left and scrolling down.

Eyal has added his own twists to Fogg’s model of behavioural change. “BJ thinks of triggers as external factors,” Eyal told me. “My argument is that the triggers are internal.” An app succeeds, he says, when it meets the user’s most basic emotional needs even before she has become consciously aware of them. “When you’re feeling uncertain, before you ask why you’re uncertain, you Google. When you’re lonely, before you’re even conscious of feeling it, you go to Facebook. Before you know you’re bored, you’re on YouTube. Nothing tells you to do these things. The users trigger themselves.”

Eyal’s emphasis on unthinking choices raises a question about behaviour design. If our behaviours are being designed for us, to whom are the designers responsible? That’s what Tristan Harris, another former student of Fogg’s, wants everyone to think about. “BJ founded the field of behaviour design,” he told me. “But he doesn’t have an answer to the ethics of it. That’s what I’m looking for.”

Harris was Mike Krieger’s collaborator on Send the Sunshine in Fogg’s class of 2006. Like Krieger, Harris went on to create a real-world app, Apture, which was designed to give instant explanations of complex concepts to online readers: a box would pop up when the user held their mouse over a term they wanted explaining. Apture had some success without ever quite taking off, and in 2011 Google acquired Harris’s startup.

The money was nice but it felt like a defeat. Harris believed in his mission to explain, yet he could not persuade publishers that incorporating his app would lead to people spending more time on their sites. He came to believe that the internet’s potential to inform and enlighten was at loggerheads with the commercial imperative to seize and hold the attention of users by any means possible. “The job of these companies is to hook people, and they do that by hijacking our psychological vulnerabilities.”

Facebook gives your new profile photo a special prominence in the news feeds of your friends, because it knows that this is a moment when you are vulnerable to social approval, and that “likes” and comments will draw you in repeatedly. LinkedIn sends you an invitation to connect, which gives you a little rush of dopamine – how important I must be! – even though that person probably clicked unthinkingly on a menu of suggested contacts. Unconscious impulses are transformed into social obligations, which compel attention, which is sold for cash.

After working for Google for a year or so, Harris resigned in order to pursue research into the ethics of the digital economy. “I wanted to know what responsibility comes with the ability to influence the psychology of a billion people? What’s the Hippocratic oath?” Before leaving, he gave a farewell presentation to Google’s staff in which he argued that they needed to see themselves as moral stewards of the attention of billions of people. Unexpectedly, the slides from his talk became a viral hit inside the company, travelling all the way to the boardroom. Harris was persuaded to stay on and pursue his research at Google, which created a new job title for him: design ethicist and product philosopher.

After a while, Harris realised that although his colleagues were listening politely, they would never take his message seriously without pressure from the outside. He left Google for good earlier this year to become a writer and advocate, on a mission to wake the world up to how digital technology is diminishing the human capacity for making free choices. “Behaviour design can seem lightweight, because it’s mostly just clicking on screens. But what happens when you magnify that into an entire global economy? Then it becomes about power.”

Harris talks fast and with an edgy intensity. One of his mantras is, “Whoever controls the menu controls the choices.” The news we see, the friends we hear from, the jobs we hear about, the restaurants we consider, even our potential romantic partners – all of them are, increasingly, filtered through a few widespread apps, each of which comes with a menu of options. That gives the menu designer enormous power. As any restaurateur, croupier or marketer can tell you, options can be tilted to influence choices. Pick one of these three prices, says the retailer, knowing that at least 70% of us will pick the middle one.

Harris’s peers have, he says, become absurdly powerful, albeit by accident. Menus used by billions of people are designed by a small group of men, aged between 25 and 35, who studied computer science and live in San Francisco. “What’s the moral operating system running in their head?” Harris asks. “Are they thinking about their ethical responsibility? Do they even have the time to think about it?”

The more influence that tech products exert over our behaviour, the less control we have over ourselves. “Companies say, we’re just getting better at giving people what they want. But the average person checks their phone 150 times a day. Is each one a conscious choice? No. Companies are getting better at getting people to make the choices they want them to make.”

In “Addiction by Design”, her remarkable study of machine gambling in Las Vegas, Natasha Dow Schüll, an anthropologist, quotes an anonymous contributor to a website for recovering addicts. “Slot machines are just Skinner boxes for people! Why they keep you transfixed is not really a big mystery. The machine is designed to do just that.” The gambling industry is a pioneer of behaviour design. Slot machines, in particular, are built to exploit the compelling power of variable rewards. The gambler pulls the lever without knowing what she will get or whether she will win anything at all, and that makes her want to pull it again.

The capacity of slot machines to keep people transfixed is now the engine of Las Vegas’s economy. Over the last 20 years, roulette wheels and craps tables have been swept away to make space for a new generation of machines: no longer mechanical contraptions (they have no lever), they contain complex computers produced in collaborations between software engineers, mathematicians, script writers and graphic artists.

The casinos aim to maximise what they call “time-on-device”. The environment in which the machines sit is designed to keep people playing. Gamblers can order drinks and food from the screen. Lighting, decor, noise levels, even the way the machines smell – everything is meticulously calibrated. Not just the brightness, but also the angle of the lighting is deliberate: research has found that light drains gamblers’ energy fastest when it hits their foreheads.

But it is the variation in rewards that is the key to time-on-device. The machines are programmed to create near misses: winning symbols appear just above or below the “payline” far more often than chance alone would dictate. The player’s losses are thus reframed as potential wins, motivating her to try again. Mathematicians design payout schedules to ensure that people keep playing while they steadily lose money. Alternative schedules are matched to different types of players, with differing appetites for risk: some gamblers are drawn towards the possibility of big wins and big losses, others prefer a drip-feed of little payouts (as a game designer told Schüll, “Some people want to be bled slowly”). The mathematicians are constantly refining their models and experimenting with new ones, wrapping their formulae around the contours of the cerebral cortex.
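The economics underneath “keep playing while steadily losing” is just an expected-value calculation: each spin returns slightly less than it costs on average, while frequent small wins and dressed-up near misses keep the session feeling rewarding. The sketch below checks a payout schedule for exactly that property; the pay table, probabilities and near-miss rate are all invented for illustration and bear no relation to any real machine.

```python
# An invented slot payout schedule, checked for its house edge. The point is
# only that a designer can tune such a table so the expected return per spin
# stays below the stake while small wins and near misses arrive often.

import random

STAKE = 1.00  # cost of one spin

# (probability, payout) pairs for a single spin; probabilities sum to 1.0
PAY_TABLE = [
    (0.55, 0.0),   # plain loss
    (0.25, 0.0),   # "near miss": a loss displayed as almost-a-win
    (0.17, 3.0),   # frequent small win
    (0.03, 15.0),  # rare larger win
]


def expected_return() -> float:
    """Average payout of a single spin."""
    return sum(p * payout for p, payout in PAY_TABLE)


def spin() -> tuple[float, bool]:
    """Return (payout, is_near_miss) for one spin."""
    r, cumulative = random.random(), 0.0
    for i, (p, payout) in enumerate(PAY_TABLE):
        cumulative += p
        if r < cumulative:
            return payout, i == 1
    return 0.0, False  # guard against floating-point rounding


if __name__ == "__main__":
    ev = expected_return()
    print(f"expected return per ${STAKE:.2f} spin: ${ev:.2f} (house edge {1 - ev:.0%})")
    bankroll, near_misses = 100.0, 0
    for _ in range(500):
        payout, near = spin()
        bankroll += payout - STAKE
        near_misses += int(near)
    print(f"after 500 spins: bankroll ${bankroll:.2f}, near misses seen {near_misses}")
```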

Gamblers themselves talk about “the machine zone”: a mental state in which their attention is locked into the screen in front of them, and the rest of the world fades away. “You’re in a trance,” one gambler explains to Schüll. “The zone is like a magnet,” says another. “It just pulls you in and holds you there.”

A player who is feeling frustrated and considering quitting for the day might receive a tap on the shoulder from a “luck ambassador”, dispensing tickets to shows or gambling coupons. What the player doesn’t know is that data from his game-playing has been fed into an algorithm that calculates how much that player can lose and still feel satisfied, and how close he is to the “pain point”. The offer of a free meal at the steakhouse converts his pain into pleasure, refreshing his motivation to carry on.

Schüll’s book, which was published in 2013, won applause for its exposure of the dark side of machine gambling. But some readers spotted opportunities in it. Schüll told me that she received an approach from an online education company interested in adopting the idea of “luck ambassadors”. Where is the pain point for a student who isn’t getting the answers right, and what does she need to get over it instead of giving up? Schüll found herself invited to speak at conferences attended by marketers and entrepreneurs, including one on habit formation organised by Nir Eyal.

Las Vegas is a microcosm. “The world is turning into this giant Skinner box for the self,” Schüll told me. “The experience that is being designed for in banking or health care is the same as in Candy Crush. It’s about looping people into these flows of incentive and reward. Your coffee at Starbucks, your education software, your credit card, the meds you need for your diabetes. Every consumer interface is becoming like a slot machine.”

These days, of course, we all carry slot machines in our pockets.

Natasha Dow Schüll accepted her invitation to speak at Eyal’s conference. “It was strange. Nobody in that room wanted to be addicting anyone – they were hipsters from San Francisco, after all. Nice people. But at the same time, their charter is to hook people for startups.” Tristan Harris thinks most people in the world of technology are unwilling to confront the inherent tension in what they do. “Nir and BJ are nice guys. But they overestimate the extent to which they’re empowering people, as opposed to helping to hook them.”

Silicon Valley is bathed in sunshine. The people who work there are optimists who believe in the power of their products to extend human potential. Like Fogg, Eyal sincerely wants to make the world better. “I get almost religious about product design. Product-makers have the ability to improve people’s lives, to find the points when people are in pain, and help them.” He rejects the idea that trying to hook people is inherently dubious. “Habits can be good or bad, and technology has the ability to create healthy habits. If the products are getting better at drawing you in, that’s not a problem: that’s progress.”

The gambling executives Schüll interviewed were not evil. They believe they are simply offering customers more and better ways to get what they want. Nobody was being coerced or deceived into parting with their money. As one executive put it, in a coincidental echo of Fogg, “You can’t make people do something they don’t want to do.” But the relationship, as Schüll points out, is asymmetric. For the gamblers, the zone is an end in itself; for the gambling industry, it is a means of extracting profit.

Tristan Harris sees the entire digital economy in similar terms. No matter how useful the products, the system itself is tilted in favour of its designers. The house always wins. “There is a fundamental conflict between what people need and what companies need,” he explained. Harris isn’t suggesting that tech companies are engaged in a nefarious plot to take over our minds – Google and Apple didn’t set out to make phones like slot machines. But the imperative of the system is to maximise time-on-device, and it turns out the best way of doing that is to dispense rewards to the operant on a variable schedule.

It also means shutting the door to the box. Things that aren’t important to a person are bound up with things that are very important: the machine on which you play games and read celebrity gossip is the one on which you’ll find out if your daughter has fallen ill. So you can’t turn it off or leave it behind. Besides, you might miss a magic moment on Instagram.

“There are people who worry about AI [artificial intelligence],” Harris said. “They ask whether we can maximise its potential without harming human interests. But AI is already here. It’s called the internet. We’ve unleashed this black box which is always developing new ways to persuade us to do things, by moving us from one trance to the next.”

In theory, we can all opt out of the loops of incentive and reward which encircle us, but few of us choose to. It is just so much easier to accept and connect. If we are captives of captology, then we are willing ones.

 

They’ve Got You, Wherever You Are | by Jacob Weisberg

By Jacob Weisberg

The old cliché about advertising was, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” The new cliché is, “If you’re not paying for it, you’re the product.” In an attention economy, you pay for free content and services with your time. The compensation isn’t very good.

Source: They’ve Got You, Wherever You Are | by Jacob Weisberg

1.

Earlier this year (2016), Facebook announced a major new initiative called Facebook Live, which was intended to encourage the consumption of minimally produced, real-time video on its site. The videos would come from news organizations such as The New York Times, as well as from celebrities and Facebook users. Interpreted by some as an effort to challenge Snapchat, the app popular with teenagers in which content quickly vanishes, Live reflects the trend toward video’s becoming the dominant consumer and commercial activity on the Web. Following the announcement, one executive at the company predicted that in five years the Facebook News Feed wouldn’t include any written articles at all, because video “helps us to digest more of the information” and is “the best way to tell stories.”

Facebook’s News Feed is the largest source of traffic for news and media sites, representing 43 percent of their referrals, according to the web analytics firm Parse.ly. So when Facebook indicates that it favors a new form of content, publishers start making a lot of it. In this case, news organizations including the Times, BuzzFeed, NPR, and Al Jazeera began streaming live videos, which were funded in part by $50 million in payments from Facebook itself. These subsidies were thought necessary because live video carries no advertising, and thus produces no revenue for Facebook or its partners.

Why, if it generates no revenue, is Facebook pushing video streaming so insistently? For the same reason that it does almost everything: in hopes of capturing more user attention. According to the company’s research, live videos—which feel more spontaneous and authentic—are viewed an average of three times longer than prerecorded videos.

Facebook is currently the fourth most valuable American company. Its stock price is based less upon its current revenues, which are much lower than those of other companies with similar valuations, than upon expectations about revenues it will one day be able to earn. That future revenue depends on reselling to advertisers the attention of 1.7 billion global users, who currently spend an average of fifty minutes a day on Facebook’s sites and apps.

Facebook promotes video, plays publisher-generated content up or down in relation to user-generated content, and tinkers continually with the algorithm that determines what appears on its News Feed; it does this not out of any inherent high- or low-mindedness, but in an effort to harvest an ever greater quantity of our time. If the written word happens to fall out of favor, or if journalism becomes economically unworkable as a consequence, these results, so far as Facebook is concerned, are unintentional. They’re merely collateral damage from the relentless expansion of the most powerful attention-capture machine ever built.

The economist Herbert A. Simon first developed the concept of an attention economy in a 1971 essay.1 Taking note of the new phenomenon of “information overload,” Simon pointed out something that now seems obvious—that “a wealth of information creates a poverty of attention.” In recent years, thinking about attention as a scarce resource has come into vogue as a way to appraise the human and psychological impact of digital and social media.2

The animating insight of Tim Wu’s illuminating new book, The Attention Merchants, is to apply this concept as a backward-facing lens as well as a contemporary one. Modern media, he argues, have always been based on the reselling of human attention to advertisers. Wu, who teaches at Columbia Law School, is a broad thinker about law, technology, and media who has had a varied career as an academic, a journalist, and a 2014 candidate for lieutenant governor of New York. He is best known for developing “net neutrality”—the principle that access providers (such as Comcast or Time Warner) should treat all Internet traffic equally—which formed the basis of a federal regulation that went into effect last year.

Wu’s earlier book, The Master Switch (2010), interpreted the history of mass communications as the ongoing pursuit of media monopoly. In The Attention Merchants, he narrates a history of media built around a model of “free stuff”—for example, radio and TV programs—in exchange for the ingestion of advertising. Wu relates the sequential conquest, by marketing, of formerly exempt spheres: public spaces through posters and billboards, the family home by radio and TV, person-to-person communication by online portals and social media, and physical movement through smartphones.

His story begins with the New York newspaper wars of the Jacksonian Era. Wu names as the first merchant of attention Benjamin Day, who in 1833 disrupted the placid market for printed dailies costing six cents. Day’s larger-circulation New York Sun cost only a penny thanks to revenue from advertising. The battle between the Sun and its competitors established what Wu calls the basic dynamic of such industries: a race to the bottom, since “attention will almost invariably gravitate to the more garish, lurid, outrageous alternative.” In the case of the New York Sun that meant, among other salacious inventions, a five-part series on the astronomical discovery that the moon was covered with trees and populated by unicorns and four-foot-tall winged man-bats. Within two years of its founding, the Sun was the most widely read newspaper in the world.

At the dawn of the twentieth century, advertising remained largely a medium to sell products like “Clark Stanley’s Snake Oil Liniment” and “the Elixir of Life,” whose manufacturer promised a cure for “any and every disease that is known to the human body.” Such claims and the occasionally poisonous products they promoted were a favorite target of Progressive Era journalists. Lest we congratulate ourselves on having outgrown such flimflam, Wu reminds us that the “secret ingredient” pitch used to sell patent medicine is still routine in our ostensibly less credulous era. He writes:

As devotees of technology we are, if anything, more susceptible to the supposed degree of difference afforded by some ingenious proprietary innovation, like the “air” in Nike’s sports shoes, triple reverse osmosis in some brands of water, or the gold-plating of audio component cables.

During the 1920s, ad spending in the United States and Europe rose tenfold. The development of mass communications gave rise to an advertising industry with pretensions to be “scientific.” Three techniques, developed then for magazines and radio, are still with us today: (1) “demand engineering,” which is a fancy term for creating desire for new products like orange juice and mouthwash; (2) “branding,” which means building loyalty to names like Buick and Coca-Cola; and (3) “targeted advertising,” which originally meant focusing on the female consumer, who did most household purchasing.

In the radio era the great breakthrough was the minstrel comedy Amos ’n’ Andy, which first aired on a Chicago station in 1928. At its peak, the show drew 40 to 50 million nightly listeners, out of a national population of only 122 million. For David Sarnoff’s National Broadcasting Company, this was “the equivalent of having today’s Super Bowl audiences each and every evening—and with just one advertiser”—Pepsodent toothpaste. (The TV version of the show was canceled in 1953 following protests from the NAACP.) After World War II, NBC began establishing similar dominance over “prime time” television with programs like Texaco Star Theater and Your Show of Shows. Wu calls the 1950s through the 1970s the period of “‘peak attention’…the moment when more regular attention was paid to the same set of messages at the same time than ever before or since.”

What was actually happening so far as economics was concerned? In the conventional analysis, advertising is part of the “discovery” process whereby consumers find out what products, from sliced bread to political candidates, are available in the marketplace. To critics, however, mass marketers were not providing useful information, but misleading the public and exploiting emotion to build monopolies. Dishonest claims, like Lucky Strike’s contention that its cigarettes “soothed the throat,” led to efforts to ban deceptive advertising practices. But in 1931, the Supreme Court ruled that the Federal Trade Commission lacked authority to prohibit or change an ad. The decision prompted the New Dealer Rexford Tugwell, an economist at the Department of Agriculture, to propose more stringent regulation. The advertising industry and its allies in ad-supported media helped scuttle Tugwell’s bill. In 1938 far weaker legislation passed, giving the FTC the power to ban factually untrue statements, but not much more.

Another rebellion came in the 1950s, with the exposure of the rigged outcome of television quiz shows and Vance Packard’s best seller The Hidden Persuaders, which depicted an ad industry engaged in psychological manipulation. A still-bigger wave hit with criticism of consumer society from such commentators as Timothy Leary and Marshall McLuhan. But advertising has a marvelous ability to absorb antagonism and retune its messages to it. This is an ongoing theme in the later seasons of Mad Men, in which the soul-damaged Don Draper applies his talent to making a commodity out of dissent.

Jon Hamm as Don Draper in the final episode of Mad Men, 2015

2.

The research scientists who designed the World Wide Web intended it to be free from commercial exploitation. The company that decisively fastened it to an advertising model was America Online (AOL), which by the mid-1990s had emerged as victorious over its competitors, its success driven less by its reputation as a family-friendly space than by the use of chat rooms, which enabled people to participate in graphic descriptions of sex—“cybersex.” Though AOL was originally ad-free, earning its money from hourly access charges, the canny executive Bob Pittman, the co-founder of MTV, saw greater potential in subsidizing access—by lowering subscription fees and selling advertising. AOL did this in notoriously unscrupulous fashion, booking phony revenue, exploiting its users, and engaging in other dishonest practices that caused it to implode after swallowing Time Warner in a $164 billion merger that took place in January 2001.

Google originally rejected the attention model too, only to succumb to it in turn. Wu writes that Larry Page and Sergey Brin, the company’s cofounders, were concerned about the corrupting potential of advertising, believing that it would bias their search engine in favor of sellers and against consumers. But because search is so closely allied with consumption, selling paid advertising became irresistible to them. With its AdWords product—the text ads that appear above and alongside search results—Google became “the most profitable attention merchant in the history of the world.” When Google sells ads accompanying the results of searches people make, it uses a real-time bidding exchange. This electronic auction process remains its central cash machine, generating most of the company’s $75 billion a year in revenue.
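Under the hood, a real-time bidding exchange is an automated auction that clears in the time it takes a page to load. A common textbook mechanism is the second-price auction, where the highest bidder wins the impression but pays just above the runner-up’s bid; the sketch below implements that simplified version with invented bidders and prices. It is a generic illustration, not Google’s actual mechanism, which also weights bids by ad quality and other factors.

```python
# A simplified second-price auction for one ad impression: the highest bidder
# wins but pays just above the second-highest bid. Bidder names and bid values
# are invented; real exchanges add quality scores, price floors and fees.

def run_auction(bids: dict[str, float], increment: float = 0.01) -> tuple[str, float]:
    """Return (winning bidder, clearing price) for a single impression."""
    if not bids:
        raise ValueError("no bids received for this impression")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else top_bid
    return winner, min(top_bid, runner_up + increment)


if __name__ == "__main__":
    # One impression becomes available; three advertisers bid in real time.
    bids = {"shoe_brand": 2.40, "travel_site": 1.75, "insurance_co": 3.10}
    winner, price = run_auction(bids)
    print(f"{winner} wins and pays ${price:.2f}")  # pays ~$2.41, set by the losing bid
```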

Mark Zuckerberg began with the same aversion to advertising, famously rejecting a million dollars from Sprite to turn Facebook green for a day in 2006. He thought obtrusive advertising would disrupt the user experience he needed to perfect in order to undermine his main competitor, Myspace, which had become infested by mischief-making trolls. By requiring the use of real names, Zuckerberg kept the trolls at bay. But he was ultimately no better able to resist the sale of advertising than his predecessors. Facebook’s vast trove of voluntarily surrendered personal information would allow it to resell segmented attention with unparalleled specificity, enabling marketers to target not just the location and demographic characteristics of its users, but practically any conceivable taste, interest, or affinity. And with ad products displayed on smartphones, Facebook has ensured that targeted advertising travels with its users everywhere.

Today Google and Facebook are, as Wu writes, the “de facto diarchs of the online attention merchants.” Their deferred-gratification model, by which the company only starts selling advertisers the audience that it assembles after operating free of ads for a certain period, is now the standard for aspiring tech companies. You begin with idealistic hauteur and visions of engineering purity, proceed to exponential growth, and belatedly open the spigot to fill buckets with revenue. This sequence describes the growth of tech-media companies including YouTube, Twitter, Pinterest, Instagram, and Snapchat, their success underpinned by kinds of user data that television and radio have never had. Many younger start-ups pursuing this trajectory are focused on the next attention-mining frontier: wearable technology, including virtual reality headsets, that will send marketing messages even more directly to the human body.

This is not, however, the only possible business model to support content and services. In the 1990s, HBO developed the alternative of paid programming offered through cable providers such as Time Warner. Netflix has pursued a freestanding subscription model that, in the words of its founder Reed Hastings, doesn’t “cram advertisements down people’s throats.” Under its CEO Tim Cook, Apple has rejected the prevailing model of gathering private information and selling it to marketers to subsidize free services. From Cook’s perspective, advertising doesn’t merely harm privacy. It depletes battery life, eats up mobile data plans, and creates a less pleasing experience on Apple’s beautiful devices. To the chagrin of publishers, the company now offers ad-blocking apps on the iPhone that allow users to gain access to the Internet without any ads at all.

3.

Antonio García Martínez, author of the autobiographical book Chaos Monkeys, is a member of the new class of attention merchants constructing the marketplace for “programmatic” advertising, which is advertising sold on electronic exchanges without the traditional lubricated palaver between buyers and sellers. He is, by his own account, a dissolute character: bad boyfriend, absent father, and often drunk. A tech wise guy working the start-up racket, he was quick to deceive and betray the two less worldly foreign-born engineers who left one advertising technology company with him to start another. He is nonetheless, by the end of his account, a winning antihero, a rebel against Silicon Valley’s culture of nonconformist conformity.

Part of Martínez’s seditiousness is his refusal to accept that the work he does serves some higher social purpose. He writes:

Every time you go to Facebook or ESPN.com or wherever, you’re unleashing a mad scramble of money, data, and pixels that involves undersea fiber-optic cables, the world’s best database technologies, and everything that is known about you by greedy strangers.

To be fair, there’s no reason to think that people in ad tech are greedier than anyone else. Their work is simply more obscure. Some 2,500 companies are part of the technical supply chain for digital advertising. What many of them do to create and transmit ads is largely incomprehensible and uninteresting to outsiders. The simplest explanation is that they interpose themselves between buyers and sellers in an attempt to capture a cut of the revenue.

As a former doctoral student in physics at Berkeley and quantitative analyst on Goldman Sachs’s corporate credit desk, Martínez was well suited to develop this type of intermediation. The advertising technology he works on somewhat resembles the secretive world of high-frequency financial trading depicted by Michael Lewis in Flash Boys (2014). It works to extract value from millions of small daily transactions around captured attention. Adchemy, the company where Martínez first worked, was an intermediary between users of Google’s search engine and companies seeking access to them. It created an information arbitrage by finding potential customers interested in mortgages and online education degrees, and then selling those leads to buyers like Quicken Loans and the University of Phoenix. (“I want to take a shower just reading those names,” he writes.) Martínez worked on building a real-time bidding engine allowing buyers to communicate with Google’s ad exchange—new software that would use Google’s code more efficiently.

The concept behind AdGrok, the company he started after leaving Adchemy with two colleagues, was to break into Google’s search ad business in a different way by allowing store owners to buy location-based ads for products using a barcode scan. By his own admission, this was a half-baked idea. As Martínez writes, a good start-up plan should require no more than one miracle to succeed. AdGrok needed at least five. His stroke of luck was being accepted into Y Combinator, a tech incubator that helps start-ups get going in exchange for a stake in their business. With a Y Combinator pedigree and connections, he was able to sell AdGrok to Twitter after nine months in business. The deal was what’s called an “acqui-hire.” Twitter wanted the company’s engineers, not its software. Facebook was interested as well, but didn’t want the other two engineers or the software, just Martínez. With a sleight-of-hand, he managed to separate himself from the transaction, infuriating Twitter, his two cofounders, and his investors, while landing himself at Facebook, the more promising company.

Martínez arrived at Facebook in 2011 and immediately recognized that he’d never fit in with the hoodie people. He’s an up-from-nowhere cynic; Facebook is an ingenuous company populated by privileged true believers who are cultishly devoted to their boy leader. He was genuinely surprised to discover that the company significantly lagged in its advertising efforts. Facebook’s digital display units were nearly invisible, its tools for use by ad buyers limited, and its general attitude toward advertising one of lordly disdain. Despite its wealth of user data, its ads were—and remain—vastly less effective than Google’s.

Martínez also noticed that status and resources didn’t accrue to advertising product managers at Facebook. He received minimal support for his project of creating a real-time bidding platform called Facebook Exchange, modeled on Google’s AdWords. Thanks in part to his impatience with lesser minds, his project lost out to an internal competitor, making him superfluous at the company. By the end of the book the reader can’t help rooting for him to get hold of more of his 70,000 pre-IPO stock options before getting fired in 2014, even though he deserves to go.

Martínez takes personally the seeming irrationality of Facebook’s throwing away half a billion dollars a year in Facebook Exchange revenue. What he may not fully appreciate is the extent to which a dismissive attitude toward advertising was a feature of Facebook’s business strategy, not a bug in it. Facebook has succeeded because of its relentless focus on increasing its user base and the addictiveness of its product, which constantly promotes more and better connections with other people. The company introduced ads into the News Feed, its core revenue producer, only in 2012, as it approached a billion users and was preparing to become a public company. Zuckerberg spent his first decade focused on harvesting attention—while postponing the feast. Had Martínez arrived at Facebook five years later, he would have found a company much more like Google was in 2011: still focused on growth but bent on improving its advertising products to drive earnings.

As with Google, Facebook’s passage to maturity has required compromise with the purity of the product and the founder’s original vision. But as Wu makes clear, this kind of transformation is almost irresistible. Whatever high-minded things attention merchants say about their mission of connecting the entire world or putting information at its fingertips, they’re in the same business as Benjamin Day, David Sarnoff, and Bob Pittman. The business is advertising.

Ad exchanges, in which advertising units are bought and sold automatically using software designed to target specific audiences, have made digital advertising more efficient without necessarily making it more effective in increasing sales. Facebook holds out the promise of mass personalization; advertisers can pinpoint users with extraordinary precision. That doesn’t mean, however, that ads on Facebook have any special impact. Unlike on Google, where people go to search for goods and services, Facebook ads are still, in the industry’s jargon, “upper funnel,” a better way for marketers to breed awareness than to make a sale. This has made it a high priority for Facebook to establish “attribution,” to demonstrate that it plays an important part in purchase decisions taking place elsewhere, e.g., through search engines and on other websites.

“Adtech” has also done much to make the Web a squalid, chaotic place. On news sites, video ads begin playing automatically and without permission, sometimes with the sound blaring. Pop-up windows ask you to participate in brief surveys before continuing. “Retargeting” ads for whatever product you last browsed follow you around like lost puppies. Pages load in balky fashion because of the click-bait monetization links, real-time bidding for available ad units, and also because of data-collecting cookies and other tracking technologies. Add to that the misery of trolling on social media and on sites that can’t manage to police user comments effectively.3 This is what the value exchange around free content has become.

The old cliché about advertising was, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” The new cliché is, “If you’re not paying for it, you’re the product.” In an attention economy, you pay for free content and services with your time. The compensation isn’t very good. Consider the pre-roll commercials you are forced to watch to gain access to most video clips, increasingly the dominant type of content on the Web. At an above-average $10 CPM, the cost per thousand views, an advertiser is paying one cent per thirty-second increment of distraction. For the user, that works out to a rate of sixty cents an hour for watching ads. The problem isn’t simply that attention has been made into a commodity, it’s that it’s so undervalued. Marketers buy our time for far less than its worth.

Wu suggests that we may be in the middle of another periodic revolt against advertising, based on “a growing sense that media had overtaxed our attentions to the point of crisis.” Though he points to trends like ad-blocking and to higher-quality paid television programming on Netflix and HBO, he doesn’t offer much in the way of a broader prescription. His book is about how we got here, not what to do about it. Based on his reading of media history, Wu doesn’t see much likelihood of replacing the basic model of obtaining apparently “free stuff” in exchange for absorbing commercial messages. Rather, he proposes that we try harder to conserve our mental space through a kind of zoning that declares certain times and spaces off-limits to commercial messages. That might mean a digital sabbath every weekend, or technology to keep advertising out of classrooms. We should appreciate that attention is precious, he writes, and not “part with it as cheaply or unthinkingly as we so often have.”

  1. “Designing Organizations for an Information-Rich World,” in Computers, Communications and the Public Interest, edited by Martin Greenberger (Johns Hopkins University Press, 1971). 
  2. See my “We Are Hopelessly Hooked,” The New York Review, February 25, 2016. 
  3. See Joel Stein, “How Trolls Are Ruining the Internet,” Time, August 29, 2016. 

How video games unwittingly train the brain to justify killing

Teodora Stoica is a PhD student in the translational neuroscience programme at the University of Louisville. She is interested in the relationship between emotion and cognition, and clinical and cognitive psychology.

Source: https://aeon.co/ideas/how-video-games-unwittingly-train-the-brain-to-justify-killing

Published in association with the Cognitive Neuroscience Society, an Aeon Partner

Mortal Kombat X gameplay. NetherRealm/Warner Bros. Interactive Entertainment/Wikipedia

Let’s play a game. One of the quotes below belongs to a trained soldier speaking of killing the enemy, while the other to a convicted felon describing his first murder. Can you tell the difference?

(1) ‘I realised that I had just done something that separated me from the human race and it was something that could never be undone. I realised that from that point on I could never be like normal people.’

(2) ‘I was cool, calm and collected the whole time. I knew what I had to do. I knew I was going to do it, and I did.’

Would you be surprised to learn that the first statement, suggesting remorse, comes from the American mass murderer David Alan Gore, while the second, of cool acceptance, was made by Andy Wilson, a soldier in the SAS, Britain’s elite special forces? In one view, the two men are separated by the thinnest filament of morality: justification. One killed because he wanted to, the other because he was acting on behalf of his country, as part of his job.

While most psychologically normal individuals agree that inflicting pain on others is wrong, killing others appears socially sanctioned in specific contexts such as war or self-defence. Or revenge. Or military dictatorships. Or human sacrifice. In fact, justification for murder is so pliant that the TV series Dexter (2006-13) flirted exquisitely with the concept: a sociopath who kills villainous people as a vehicle for satisfying his own dark urges.

Operating under strict ‘guidelines’ that target only the guilty, Dexter (a forensics technician) and the viewer come to believe that the kill is justified. He forces the audience to question their own moral compass by asking them to justify murder in their minds in the split second prior to the kill. Usually when we imagine directly harming someone, the image is preventive: envision a man hitting a woman; or an owner abusing her dog. Yet, sometimes, the opposite happens: a switch is flipped with aggressive, even violent consequences. How can an otherwise normal person override the moral code and commit cold-blooded murder?

That was the question asked at the University of Queensland in Australia, in a study led by the neuroscientist Pascal Molenberghs, in which participants entered an fMRI scanner while viewing a first-person video game. In one scenario, a soldier kills an enemy soldier; in another, the soldier kills a civilian. The game enabled each participant to privately enter the mind of the soldier and control which person to execute.

Screenshot of what each participant saw

The results were, overall, surprising. It made sense that a mental simulation of killing an innocent person (unjustified kill) led to overwhelming feelings of guilt and subsequent activation of the lateral orbitofrontal cortex (OFC), an area of the brain involved in aversive, morally sensitive situations. By contrast, researchers predicted that viewing a soldier killing a soldier would create activity in another region of the brain, the medial OFC, which assesses thorny ethical situations and assigns them positive feelings such as praise and pride: ‘This makes me feel good, I should keep doing it.’

But that is not what occurred: the medial OFC did not light up when participants imagined themselves as soldiers killing the enemy. In fact, none of the OFC did. One explanation for this puzzling finding is that the OFC’s reasoning ability isn’t needed in this scenario because the action is not ethically compromising. That is to say – it is seen as justified. Which brings us to a chilling conclusion: if killing feels justified, anyone is capable of committing the act.

Since the Korean War, the military has altered basic training to help soldiers overcome existing norms of violence, desensitise them to the acts they might have to commit, and train them to shoot reflexively on cue. Even the drill sergeant is portrayed as the consummate professional personifying violence and aggression.

The same training takes place unconsciously through contemporary video games and media. Young children have unprecedented access to violent movies, games and sports events at an early age, and learning brutality is the norm. The media dwells upon real-life killers, describing every detail of their crime during prime-time TV. The current conditions easily set up children to begin thinking like soldiers and even justify killing. But are we in fact suppressing critical functions of the brain? Are we engendering future generations who will accept violence and ignore the voice of reason, creating a world where violence will become the comfortable norm?

The Queensland study had something to say about this as well. When participants were viewing unjustified killings, researchers noticed increased connectivity between the OFC and an area called the temporal parietal junction (TPJ), a part of the brain that has previously been associated with empathy. Earlier work has shown that disrupting the function of the TPJ leads people to judge harmful actions as more morally permissible, which is why the TPJ is considered a critical region for empathy. The increased connectivity between the two regions suggests that the participants were actively putting themselves in the shoes of the observer, judging whether killing civilians was morally acceptable or not.

Increased connectivity between left OFC and left and right TPJ for simulating shooting a civilian

‘Emotional and physical distance can allow a person to kill his foe,’ says Lt Colonel Dave Grossman, director of the Killology Research Group in Illinois and one of the world’s foremost experts in human aggression and the roots of violence. ‘Emotional distance can be classified as mechanical, social, cultural and emotional distance.’ In other words, a lack of connection to humans allows a justified murder. The writer Primo Levi, a Holocaust survivor, believed that this was exactly how the Nazis succeeded in killing so many: by stripping away individuality and reducing each person to a generic number.

In 2016, technology and media have turned genocide viral. The video game Mortal Kombat X features spines being snapped, heads crushed and players being diced into cubes. In Hatred, gamers play as a sociopath who attempts to kill innocent bystanders and police officers with guns, flamethrowers and bombs to satisfy his hatred of humanity. Characters beg for mercy before execution, frequently during profanity-laced rants.

A plethora of studies now associate playing such games with greater tolerance of violence, reduced empathy, aggression and sexual objectification. Compared with males who have not played violent video games, males who do play them are 67 per cent more likely to engage in non-violent deviant behaviour, 63 per cent more likely to commit a violent crime or a crime related to violence, and 81 per cent more likely to have engaged in substance use. Other studies have found that engaging in cyberviolence leads people to perceive themselves as less human, and facilitates violence and aggression.

This powerful knowledge could be used to turn violence on its head. Brain-training programs could use current neuroscientific knowledge to serve up exhilarating games to train inhibition, instead of promoting anger. Creating games with the capability to alter thought patterns is itself ethically questionable and could be easily implemented to control a large population. But we’ve already gone down that road, and in the direction of violence. With today’s generation so highly dependent on technology, phasing in games from an early age that encourage tolerance could be a potent tool for building a more humane, more compassionate world.

Attention, Students: Put Your Laptops Away

JAMES DOUBEK

Heard on NPR Weekend Edition Sunday

Researchers Pam Mueller and Daniel M. Oppenheimer found that students remember more when they take notes longhand than when they type them on a laptop. It has to do with what happens when you’re forced to slow down.

Source: Attention, Students: Put Your Laptops Away

As laptops become smaller and more ubiquitous, and with the advent of tablets, the idea of taking notes by hand just seems old-fashioned to many students today. Typing your notes is faster — which comes in handy when there’s a lot of information to take down. But it turns out there are still advantages to doing things the old-fashioned way.

For one thing, research shows that laptops and tablets have a tendency to be distracting — it’s so easy to click over to Facebook in that dull lecture. And a study has shown that the very fact that you have to slow down when taking notes by hand is what makes handwritten notes more useful in the long run.

In the study published in Psychological Science, Pam A. Mueller of Princeton University and Daniel M. Oppenheimer of the University of California, Los Angeles sought to test how note-taking by hand or by computer affects learning.

“When people type their notes, they have this tendency to try to take verbatim notes and write down as much of the lecture as they can,” Mueller tells NPR’s Rachel Martin. “The students who were taking longhand notes in our studies were forced to be more selective — because you can’t write as fast as you can type. And that extra processing of the material that they were doing benefited them.”

Mueller and Oppenheimer noted that note-taking can be categorized in two ways: generative and nongenerative. Generative note-taking pertains to “summarizing, paraphrasing, concept mapping,” while nongenerative note-taking involves copying something verbatim.

And there are two hypotheses to why note-taking is beneficial in the first place. The first idea is called the encoding hypothesis, which says that when a person is taking notes, “the processing that occurs” will improve “learning and retention.” The second, called the external-storage hypothesis, is that you learn by being able to look back at your notes, or even the notes of other people.

Because people can type faster than they write, using a laptop will make people more likely to try to transcribe everything they’re hearing. So on the one hand, Mueller and Oppenheimer were faced with the question of whether the benefits of being able to look at your more complete, transcribed notes on a laptop outweigh the drawbacks of not processing that information. On the other hand, when writing longhand, you process the information better but have less to look back at.

For their first study, they took university students (the standard guinea pig of psychology) and showed them TED talks about various topics. Afterward, they found that the students who used laptops typed significantly more words than those who took notes by hand. When testing how well the students remembered information, the researchers found a key point of divergence in the type of question. For questions that asked students to simply remember facts, like dates, both groups did equally well. But for “conceptual-application” questions, such as, “How do Japan and Sweden differ in their approaches to equality within their societies?” the laptop users did “significantly worse.”

The same thing happened in the second study, even when they specifically told students using laptops to try to avoid writing things down verbatim. “Even when we told people they shouldn’t be taking these verbatim notes, they were not able to overcome that instinct,” Mueller says. The more words the students copied verbatim, the worse they performed on recall tests.

And to test the external-storage hypothesis, for the third study they gave students the opportunity to review their notes in between the lecture and test. The thinking is, if students have time to study their notes from their laptops, the fact that they typed more extensive notes than their longhand-writing peers could possibly help them perform better.

But the students taking notes by hand still performed better. “This is suggestive evidence that longhand notes may have superior external storage as well as superior encoding functions,” Mueller and Oppenheimer write.

Do studies like these mean wise college students will start migrating back to notebooks?

“I think it is a hard sell to get people to go back to pen and paper,” Mueller says. “But they are developing lots of technologies now like Livescribe and various stylus and tablet technologies that are getting better and better. And I think that will be sort of an easier sell to college students and people of that generation.”

Virtual Reality Can Leave You With an Existential Hangover

by REBECCA SEARLES

After exploring a virtual world, some people can’t shake the sense that the actual world isn’t real, either.

Source: Virtual Reality Can Leave You With an Existential Hangover

When Tobias van Schneider slips on a virtual reality headset to play Google’s Tilt Brush, he becomes a god. His fingertips become a fiery paintbrush in the sky. A flick of the wrist rotates the clouds. He can jump effortlessly from one world that he created to another.

When the headset comes off, though, it’s back to a dreary reality. And lately van Schneider has been noticing some unsettling lingering effects. “What stays is a strange feeling of sadness and disappointment when participating in the real world, usually on the same day,” he wrote on the blogging platform Medium last month. “The sky seems less colorful and it just feels like I’m missing the ‘magic’ (for the lack of a better word). … I feel deeply disturbed and often end up just sitting there, staring at a wall.”

Van Schneider dubs the feeling “post-VR sadness.” It’s less a feeling of depression, he writes, and more a sense of detachment. And while he didn’t realize it when he published the post, he’s not the only one who has experienced this. Between virtual reality subreddits and Oculus Rift online forums, there are dozens of stories like his. The ailments range from feeling temporarily fuzzy, light-headed, and in a dream-like state, to more severe detachment that lasts days—or weeks. Many cases have bubbled up in the last year, likely as a response to consumer VR headsets becoming more widely available. But some of the stories date as far back as 2013, when an initial version of the Oculus Rift was released for software developers.

“[W]hile standing and in the middle of a sentence, I had an incredibly strange, weird moment of comparing real life to the VR,” wrote the video-game developer Lee Vermeulen after he tried Valve’s SteamVR system back in 2014. He was mid-conversation with a coworker when he started to feel off, and the experience sounds almost metaphysical. “I understood that the demo was over, but it was [as] if a lower level part of my mind couldn’t exactly be sure. It gave me a very weird existential dread of my entire situation, and the only way I could get rid of that feeling was to walk around or touch things around me.”

It seems that VR is making people ill in a way no one predicted. And as hard as it is to articulate the effects, it may prove even harder to identify their cause.

* * *

The notion of virtual-reality devices having a physical effect on their users is certainly familiar. Virtual-reality sickness, also known as cybersickness, is a well-documented type of motion sickness that some people feel during or after VR play, with symptoms that include dizziness, nausea, and imbalance. It’s so common that researchers say it’s one of the biggest hurdles to mass adoption of VR, and companies like Microsoft are already working rapidly to find ways to fix it.

Some VR users on Reddit have pointed out that VR sickness begins to fade with time and experience in a headset. Once they grew their “VR legs,” they wrote, they experienced less illness. Van Schneider has noticed the same thing. “[The physical symptoms] usually fade within the first 1–2 hours and get better over time,” he wrote. “It’s almost like a little hangover, depending on the intensity of your VR experience.” Indeed, VR sickness is often referred to as a “VR hangover.”

“I was very fatigued. I was dizzy. And it definitely hits that strange point where the real world feels surreal.”

The dissociative effects that van Schneider and others have written about, however, are much worse. In an attempt to collectively self-diagnose, many of the internet-forum users have pointed to a study by the clinical psychology researcher Frederick Aardema from 2006 — the only study that looks explicitly at virtual reality and clinical dissociation, a state of detachment from one’s self or reality. Using a questionnaire to measure participants’ levels of dissociation before and after exposure to VR, Aardema found that VR increases dissociative experiences and lessens people’s sense of presence in actual reality. He also found that the greater the individual’s pre-existing tendency for dissociation and immersion, the greater the dissociative effects of VR.

Dissociation itself isn’t necessarily an illness, Aardema said. It works like a spectrum: On the benign side of the spectrum, there is fantasizing and daydreaming — a coping mechanism for boredom or conflict. On the other side, however, there are more pathological types of dissociation, which include disorders like derealization-depersonalization (DPDR).

While derealization is the feeling that the world isn’t real, depersonalization is the feeling that one’s self isn’t real. People who’ve experienced depersonalization say that it feels like they’re outside of their bodies, watching themselves. Derealization makes a person’s surroundings feel strange and dream-like, in an unsettling way, despite how familiar they may be.

When I spoke with Aardema on the phone, he had been wondering why his paper from ten years ago had suddenly been getting so many hits on the science-networking site ResearchGate. His study measured mild dissociative effects — think, “I see things around me differently than usual” — so he emphasized that there is a need to explore how these effects may relate to mood and depressive feelings. “There was some indication in our initial study that feelings of depression were important in relation to dissociation,” he said.

* * *

I’ve never felt depersonalization, but I have felt derealization, the result of a severe panic disorder I developed when I was 25. It was nothing short of nightmarish. When the effects were tolerable, it felt like I was permanently high on psychedelics — a bad trip that wouldn’t end. When it was at its most intense, it was like living in my own scary movie: You look around at your everyday life and nothing feels real. Even faces that I knew and loved looked like a jumbled mess of features.

DPDR often occurs after a traumatic event, as a defense mechanism that separates someone from emotional issues that are too difficult to process. My case was triggered by stress. But according to a 2015 study in the journal Multisensory Research, feelings of unreality can also be triggered by contradicting sensory input — like one might experience inside a VR headset.

The study, by Kathrine Jáuregui-Renaud, a health researcher at the Mexican Institute of Social Security, explains that in order for the mind to produce a coherent representation of the outside world, it relies on integrating sensory input—combining and making sense of the information coming in through the senses. When there’s a mismatch between the signals from the vestibular system — a series of fluid-filled tubes in the inner ear that senses balance — and the visual system, the brain short-circuits. Part of the brain may think the body is moving, for instance, while another part thinks the feet are firmly planted on the ground. Something feels amiss, which can cause anxiety and panic.

VR’s very purpose is to make it difficult to distinguish simulation from reality.

This, Aardema pointed out, could explain why books, movies, and video games don’t tend to cause the same kinds of dissociative aftereffects. Books don’t have moving visuals, and the movement in movies and video games is usually not intense enough. It also helps that these experiences are usually enjoyed while sitting still. So they just don’t have the same capacity to offset balance and vestibular function. (Though for some, movies can cause motion sickness. And for those people there is Moviehurl.com — a website devoted to rating movies on their likelihood of giving a viewer motion sickness.)

Scientists also believe that this kind of conflicting information is what causes motion-sickness symptoms like nausea and dizziness. So why do some VR users get motion sickness, while others end up experiencing something more serious? Research suggests that there is a link between serotonin levels, which play a role in mood regulation, and the vestibular system. So for those who may already suffer from a serotonin-related imbalance, like the 40 million Americans who suffer from anxiety disorders, VR’s disruption of the vestibular system may have a more profound effect.

* * *

As van Schneider illustrated in his blog post, the appeal of virtual reality’s “superpowers” is compelling. VR’s very purpose is to make it difficult to distinguish simulation from reality. But what happens when the primitive brain is not equipped to process this? To what extent is VR causing users to question the nature of their own reality? And how easily are people able to tackle this line of questioning without losing their grip?

One evening during my DPDR phase, I was riding in a cab down a main street in the West Village, looking out the window. It was summer and there were tourists everywhere, and the light before sunset was still lingering. It was a perfect time to be out in the street, walking with friends and family, taking in New York City. But I remember the distinct feeling of hating everyone I saw. They had brains that just worked, brains that collected streams of sensory information and painted a satisfying picture of reality, just like brains are supposed to do. They most likely never questioned if what they were experiencing was real.

For some people, at least, it seems that VR could change that. In March, Alanah Pearce, an Australian video game journalist and podcast host, recounted troubling post-VR symptoms after the Game Developers Conference in San Francisco. “I was very fatigued. I was dizzy. And it definitely hits that strange point where the real world feels surreal,” she said. “I’m not going to go into that too in-depth, because it’s something I haven’t yet grasped. But I know that I’m not alone, and other people who play VR feel the same thing, where it’s like, nothing really feels real anymore. It’s very odd.”

 

The Internet of Things: explained

Written by Joe Myers, Formative Content

A guide to the Internet of Things.

Source: The Internet of Things: explained

The internet of things is a term you may have heard a lot recently. It features heavily in discussions about our future – and, increasingly, our present. But what is it?

This is a simple guide to the term, the impact it’s set to have, and how it might change your life.

The internet of what?

At its heart, the concept is very simple. It’s about connecting devices to the internet. This doesn’t just mean smartphones, laptops and tablets. Jacob Morgan from Forbes talks of connecting everything with an “on and off switch.”

The ‘smart fridge’ dominates media discussions: a fridge that could let you know when you’re running out of something, write a shopping list, or alert you when something has gone out of date. But, in theory, anything from a jet engine to a washing machine could be connected.

Connected devices can be controlled remotely – think: adjusting your heating via an app – and can gather useful data.
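
To make “controlled remotely” a little more concrete, here is a minimal sketch of how a connected device might be told to do something over MQTT, a lightweight publish/subscribe messaging protocol widely used for the internet of things. The broker address, topic name, and temperature value are hypothetical placeholders for illustration, not details from the article.

```python
# Minimal sketch: remotely adjusting a (hypothetical) smart thermostat over MQTT.
# Requires the paho-mqtt package; the broker and topic below are placeholders.
import json
import paho.mqtt.publish as publish

publish.single(
    topic="home/livingroom/thermostat/set",        # hypothetical topic naming scheme
    payload=json.dumps({"target_celsius": 21.5}),  # desired temperature setting
    hostname="broker.example.com",                 # hypothetical MQTT broker address
    port=1883,                                     # standard unencrypted MQTT port
    qos=1,                                         # broker acknowledges delivery
)
# A thermostat subscribed to the same topic would apply the new setting, and it
# could publish its own readings on another topic for the data gathering
# described above (e.g. an app charting temperature over time).
```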

According to SAP, the number of connected devices is set to exceed 50 billion by 2020. Revenue for the providers of IoT services is also growing rapidly, as this chart shows.

 Projected global revenue of the Internet of Things from 2007 to 2020

Image: Statista

Solving problems on a massive scale

The IoT is about much more than connecting multiple objects in your home to the internet. As the World Economic Forum’s Intelligent Assets: unlocking the circular economy potential report has highlighted, the IoT has the potential to transform entire cities.

Sensors, combined with smart phones, will allow for more efficient energy networks (across cities and in your home), reduced congestion and improved transport, as well as recyclable, multi-use buildings.

Houses, offices, factories and public buildings could all generate electricity from renewable sources. Sensors would then coordinate the distribution and storage of this power, making whole systems cleaner, more efficient and more stable.

 Intelligent assets making cities smarter by...

Image: World Economic Forum

Smart cities could also make your journey to and from work much easier. Real-time traffic data, gathered from sensors, could reduce journey times. Mobile parking apps will make finding a space much easier, while smart street lights would light your way home.

Connected cars are set to be a major part of the IoT. Gartner forecasts that by 2020 there will be more than 250 million connected vehicles on the road. Live traffic and parking information, real-time navigation, and automated driving could all become a reality as connectivity spreads.

 Smart transport systems

Image: World Economic Forum

The installation of 42 ‘nodes’ – data collection boxes – is set to begin this summer in Chicago. By 2018, the Array of Things project hopes to have installed 500 across the city. This map shows the location of the original 42 nodes, which will gather data on topics from air quality to traffic volume.

 AoT Initial Locations

Image: Array of Things

All this data will be made available to the public. It will provide real-time, location-specific information about Chicago’s “environment, infrastructure and activity”, according to Array of Things.

The IoT has the potential to make our lives better. More efficient heating systems could save us money, transport apps could save us time, and new electrical grid systems could help save the planet.

So it’s all great then?

Not quite. There are numerous security concerns around having so many connected devices. Each connected device in theory becomes a risk, and a possible target for hackers.

Many of these devices contain a lot of personal information and data. Consider a smart electricity meter. It knows your electricity use and typical times you’re at home. All of this could be available to a hacker.

If a whole city is connected, the risk becomes much greater.

In this World Economic Forum video, Lorrie Cranor, Director of Carnegie Mellon University’s CyLab Usable Privacy and Security Laboratory, explores the threat IoT could pose to our privacy. She also looks at what we can do about it.

“In our smart homes we want our fridge to remind us to buy more butter at the store but we don’t want it to tell our health insurers,” she says.


What you read matters more than you might think

WRITTEN BY Susan Reynolds, Contributor, Psychology Today

Source: What you read matters more than you might think

A study published in the International Journal of Business Administration in May 2016 found that what students read in college directly affects the level of writing they achieve. In fact, researchers found that reading content and frequency may exert more significant impacts on students’ writing ability than writing instruction and writing frequency. Students who read academic journals, literary fiction, or general nonfiction wrote with greater syntactic sophistication (more complex sentences) than those who read genre fiction (mysteries, fantasy, or science fiction) or exclusively web-based aggregators like Reddit, Tumblr, and BuzzFeed. The highest scores went to those who read academic journals; the lowest scores went to those who relied solely on web-based content.

The difference between deep and light reading

Recent research also revealed that “deep reading”—defined as reading that is slow, immersive, rich in sensory detail and emotional and moral complexity—is distinct from light reading—little more than the decoding of words. Deep reading occurs when the language is rich in detail, allusion, and metaphor, and taps into the same brain regions that would activate if the reader were experiencing the event. Deep reading is great exercise for the brain and has been shown to increase empathy, as the reader dives deeper and adds reflection, analysis, and personal subtext to what is being read. It also offers writers a way to appreciate all the qualities that make novels fascinating and meaningful—and to tap into their own ability to write on a deeper level.

Light reading is equated with what one might read in online blogs, or on “headline news” or “entertainment news” websites, particularly those that breezily rely on lists or punchy headlines and even occasionally use emojis to communicate. These types of light reading lack a genuine voice, a viewpoint, or the sort of analysis that might stimulate thought. It’s light and breezy reading that you can skim through and will likely forget within minutes.

Deep reading synchronizes your brain

Deep reading activates our brain’s centers for speech, vision, and hearing, all of which work together to help us speak, read, and write. Reading and writing engage Broca’s area, which enables us to perceive rhythm and syntax; Wernicke’s area, which impacts our perception of words and meaning; and the angular gyrus, which is central to perception and use of language. These areas are wired together by a band of fibers, and this interconnectivity likely helps writers mimic and synchronize the language and rhythms they encounter while reading. Your reading brain senses a cadence that accompanies more complex writing, which your brain then seeks to emulate when writing.

Here are two ways you can use deep reading to fire up your writing brain:

Read poems

In an article published in the Journal of Consciousness Studies, researchers reported finding activity in a “reading network” of brain areas that were activated in response to any written material. In addition, more emotionally charged writing aroused several regions in the brain (primarily on the right side) that respond to music. In a specific comparison between reading poetry and prose, researchers found evidence that poetry activates the posterior cingulate cortex and medial temporal lobes, parts of the brain linked to introspection. When volunteers read their favorite poems, areas of the brain associated with memory were stimulated more strongly than “reading areas,” indicating that reading poems you love is the kind of recollection that evokes strong emotions—and strong emotions are always good for creative writing.

Read literary fiction

Understanding others’ mental states is a crucial skill that enables the complex social relationships that characterize human societies—and that makes a writer excellent at creating multilayered characters and situations. Not much research has been conducted on the theory of mind (our ability to realize that our minds are different from other people’s minds and that their emotions are different from ours) that fosters this skill, but recent experiments revealed that reading literary fiction led to better performance on tests of affective theory of mind (understanding others’ emotions) and cognitive theory of mind (understanding others’ thinking and state of being) compared with reading nonfiction, popular fiction, or nothing at all. Specifically, these results showed that reading literary fiction temporarily enhances theory of mind, and, more broadly, that theory of mind may be influenced more by engagement with true works of art. In other words, literary fiction provokes thought, contemplation, expansion, and integration. Reading literary fiction stimulates cognition beyond the brain functions related to reading, say, magazine articles, interviews, or most online nonfiction reporting.

Instead of watching TV, focus on deep reading

Time spent watching television is almost always pointless (your brain powers down almost immediately) no matter how hard you try to justify it, and reading fluff magazines or lightweight fiction may be entertaining, but it doesn’t fire up your writing brain. If you’re serious about becoming a better writer, spend lots of time deep-reading literary fiction and poetry and articles on science or art that feature complex language and that require your lovely brain to think.

This post originally appeared at PsychologyToday.com. Susan Reynolds is the author of Fire Up Your Writing Brain, a Writer’s Digest book. You can follow her on Twitter or Facebook.

A new brain study sheds light on why it can be so hard to change someone’s political beliefs

 


Why we react to inconvenient truths as if they were personal insults.

 

INFOGRAPHIC: How the World Reads

A bunch of interesting facts about reading in one handy infographic

Source: INFOGRAPHIC: How the World Reads

Did you know that people in India read an average of 10.4 hours a week? Or that regular readers are 2.5 times less likely to develop Alzheimer’s disease? This handy infographic from FeelGood gathers a number of interesting facts about reading in one place.