Media Psychology

~ Informing, Educating and Influencing

Monthly Archives: March 2017

They’ve Got You, Wherever You Are | by Jacob Weisberg

Wednesday, March 29, 2017

Posted by Donna L. Roberts, PhD in Psychology


By Jacob Weisberg

The old cliché about advertising was, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” The new cliché is, “If you’re not paying for it, you’re the product.” In an attention economy, you pay for free content and services with your time. The compensation isn’t very good.

Source: They’ve Got You, Wherever You Are | by Jacob Weisberg

1.

Earlier this year (2016), Facebook announced a major new initiative called Facebook Live, which was intended to encourage the consumption of minimally produced, real-time video on its site. The videos would come from news organizations such as The New York Times, as well as from celebrities and Facebook users. Interpreted by some as an effort to challenge Snapchat, the app popular with teenagers in which content quickly vanishes, Live reflects the trend toward video’s becoming the dominant consumer and commercial activity on the Web. Following the announcement, one executive at the company predicted that in five years the Facebook News Feed wouldn’t include any written articles at all, because video “helps us to digest more of the information” and is “the best way to tell stories.”

Facebook’s News Feed is the largest source of traffic for news and media sites, representing 43 percent of their referrals, according to the web analytics firm Parse.ly. So when Facebook indicates that it favors a new form of content, publishers start making a lot of it. In this case, news organizations including the Times, BuzzFeed, NPR, and Al Jazeera began streaming live videos, which were funded in part by $50 million in payments from Facebook itself. These subsidies were thought necessary because live video carries no advertising, and thus produces no revenue for Facebook or its partners.

Why, if it generates no revenue, is Facebook pushing video streaming so insistently? For the same reason that it does almost everything: in hopes of capturing more user attention. According to the company’s research, live videos—which feel more spontaneous and authentic—are viewed an average of three times longer than prerecorded videos.

Facebook is currently the fourth most valuable American company. Its stock price is based less upon its current revenues, which are much lower than those of other companies with similar valuations, than upon expectations about revenues it will one day be able to earn. That future revenue depends on reselling to advertisers the attention of 1.7 billion global users, who currently spend an average of fifty minutes a day on Facebook’s sites and apps.

Facebook promotes video, plays publisher-generated content up or down in relation to user-generated content, and tinkers continually with the algorithm that determines what appears on its News Feed; it does this not out of any inherent high- or low-mindedness, but in an effort to harvest an ever greater quantity of our time. If the written word happens to fall out of favor, or if journalism becomes economically unworkable as a consequence, these results, so far as Facebook is concerned, are unintentional. They’re merely collateral damage from the relentless expansion of the most powerful attention-capture machine ever built.

The economist Herbert A. Simon first developed the concept of an attention economy in a 1971 essay.[1] Taking note of the new phenomenon of “information overload,” Simon pointed out something that now seems obvious—that “a wealth of information creates a poverty of attention.” In recent years, thinking about attention as a scarce resource has come into vogue as a way to appraise the human and psychological impact of digital and social media.[2]

The animating insight of Tim Wu’s illuminating new book, The Attention Merchants, is to apply this concept as a backward-facing lens as well as a contemporary one. Modern media, he argues, have always been based on the reselling of human attention to advertisers. Wu, who teaches at Columbia Law School, is a broad thinker about law, technology, and media who has had a varied career as an academic, a journalist, and a 2014 candidate for lieutenant governor of New York. He is best known for developing “net neutrality”—the principle that access providers (such as Comcast or Time Warner) should treat all Internet traffic equally—which formed the basis of a federal regulation that went into effect last year.

Wu’s earlier book, The Master Switch (2010), interpreted the history of mass communications as the ongoing pursuit of media monopoly. In The Attention Merchants, he narrates a history of media built around a model of “free stuff”—for example, radio and TV programs—in exchange for the ingestion of advertising. Wu relates the sequential conquest, by marketing, of formerly exempt spheres: public spaces through posters and billboards, the family home by radio and TV, person-to-person communication by online portals and social media, and physical movement through smartphones.

His story begins with the New York newspaper wars of the Jacksonian Era. Wu names as the first merchant of attention Benjamin Day, who in 1833 disrupted the placid market for printed dailies costing six cents. Day’s larger-circulation New York Sun cost only a penny thanks to revenue from advertising. The battle between the Sun and its competitors established what Wu calls the basic dynamic of such industries: a race to the bottom, since “attention will almost invariably gravitate to the more garish, lurid, outrageous alternative.” In the case of the New York Sun that meant, among other salacious inventions, a five-part series on the astronomical discovery that the moon was covered with trees and populated by unicorns and four-foot-tall winged man-bats. Within two years of its founding, the Sun was the most widely read newspaper in the world.

At the dawn of the twentieth century, advertising remained largely a medium to sell products like “Clark Stanley’s Snake Oil Liniment” and “the Elixir of Life,” whose manufacturer promised a cure for “any and every disease that is known to the human body.” Such claims and the occasionally poisonous products they promoted were a favorite target of Progressive Era journalists. Lest we congratulate ourselves on having outgrown such flimflam, Wu reminds us that the “secret ingredient” pitch used to sell patent medicine is still routine in our ostensibly less credulous era. He writes:

As devotees of technology we are, if anything, more susceptible to the supposed degree of difference afforded by some ingenious proprietary innovation, like the “air” in Nike’s sports shoes, triple reverse osmosis in some brands of water, or the gold-plating of audio component cables.

During the 1920s, ad spending in the United States and Europe rose tenfold. The development of mass communications gave rise to an advertising industry with pretensions to be “scientific.” Three techniques, developed then for magazines and radio, are still with us today: (1) “demand engineering,” which is a fancy term for creating desire for new products like orange juice and mouthwash; (2) “branding,” which means building loyalty to names like Buick and Coca-Cola; and (3) “targeted advertising,” which originally meant focusing on the female consumer, who did most household purchasing.

In the radio era the great breakthrough was the minstrel comedy Amos ’n’ Andy, which first aired on a Chicago station in 1928. At its peak, the show drew 40 to 50 million nightly listeners, out of a national population of only 122 million. For David Sarnoff’s National Broadcasting Company, this was “the equivalent of having today’s Super Bowl audiences each and every evening—and with just one advertiser”—Pepsodent toothpaste. (The TV version of the show was canceled in 1953 following protests from the NAACP.) After World War II, NBC began establishing similar dominance over “prime time” television with programs like Texaco Star Theater and Your Show of Shows. Wu calls the 1950s through the 1970s the period of “‘peak attention’…the moment when more regular attention was paid to the same set of messages at the same time than ever before or since.”

What was actually happening so far as economics was concerned? In the conventional analysis, advertising is part of the “discovery” process whereby consumers find out what products, from sliced bread to political candidates, are available in the marketplace. To critics, however, mass marketers were not providing useful information, but misleading the public and exploiting emotion to build monopolies. Dishonest claims, like Lucky Strike’s contention that its cigarettes “soothed the throat,” led to efforts to ban deceptive advertising practices. But in 1931, the Supreme Court ruled that the Federal Trade Commission lacked authority to prohibit or change an ad. The decision prompted the New Dealer Rexford Tugwell, an economist at the Department of Agriculture, to propose more stringent regulation. The advertising industry and its allies in ad-supported media helped scuttle Tugwell’s bill. In 1938 far weaker legislation passed, giving the FTC the power to ban factually untrue statements, but not much more.

Another rebellion came in the 1950s, with the exposure of the rigged outcome of television quiz shows and Vance Packard’s best seller The Hidden Persuaders, which depicted an ad industry engaged in psychological manipulation. A still-bigger wave hit with criticism of consumer society from such commentators as Timothy Leary and Marshall McLuhan. But advertising has a marvelous ability to absorb antagonism and retune its messages to it. This is an ongoing theme in the later seasons of Mad Men, in which the soul-damaged Don Draper applies his talent to making a commodity out of dissent.

Jon Hamm as Don Draper in the final episode of Mad Men, 2015

2.

The research scientists who designed the World Wide Web intended it to be free from commercial exploitation. The company that decisively fastened it to an advertising model was America Online (AOL), which by the mid-1990s had emerged as victorious over its competitors, its success driven less by its reputation as a family-friendly space than by the use of chat rooms, which enabled people to participate in graphic descriptions of sex—“cybersex.” Though AOL was originally ad-free, earning its money from hourly access charges, the canny executive Bob Pittman, the co-founder of MTV, saw greater potential in subsidizing access—by lowering subscription fees and selling advertising. AOL did this in notoriously unscrupulous fashion, booking phony revenue, exploiting its users, and engaging in other dishonest practices that caused it to implode after swallowing Time Warner in a $164 billion merger that took place in January 2001.

Google originally rejected the attention model too, only to succumb to it in turn. Wu writes that Larry Page and Sergey Brin, the company’s cofounders, were concerned about the corrupting potential of advertising, believing that it would bias their search engine in favor of sellers and against consumers. But because search is so closely allied with consumption, selling paid advertising became irresistible to them. With its AdWords product—the text ads that appear above and alongside search results—Google became “the most profitable attention merchant in the history of the world.” When Google sells ads accompanying the results of searches people make, it uses a real-time bidding exchange. This electronic auction process remains its central cash machine, generating most of the company’s $75 billion a year in revenue.

Mark Zuckerberg began with the same aversion to advertising, famously rejecting a million dollars from Sprite to turn Facebook green for a day in 2006. He thought obtrusive advertising would disrupt the user experience he needed to perfect in order to undermine his main competitor, Myspace, which had become infested by mischief-making trolls. By requiring the use of real names, Zuckerberg kept the trolls at bay. But he was ultimately no better able to resist the sale of advertising than his predecessors. Facebook’s vast trove of voluntarily surrendered personal information would allow it to resell segmented attention with unparalleled specificity, enabling marketers to target not just the location and demographic characteristics of its users, but practically any conceivable taste, interest, or affinity. And with ad products displayed on smartphones, Facebook has ensured that targeted advertising travels with its users everywhere.

Today Google and Facebook are, as Wu writes, the “de facto diarchs of the online attention merchants.” Their deferred-gratification model, by which the company only starts selling advertisers the audience that it assembles after operating free of ads for a certain period, is now the standard for aspiring tech companies. You begin with idealistic hauteur and visions of engineering purity, proceed to exponential growth, and belatedly open the spigot to fill buckets with revenue. This sequence describes the growth of tech-media companies including YouTube, Twitter, Pinterest, Instagram, and Snapchat, their success underpinned by kinds of user data that television and radio have never had. Many younger start-ups pursuing this trajectory are focused on the next attention-mining frontier: wearable technology, including virtual reality headsets, that will send marketing messages even more directly to the human body.

This is not, however, the only possible business model to support content and services. In the 1990s, HBO developed the alternative of paid programming offered through cable providers such as Time Warner. Netflix has pursued a freestanding subscription model that, in the words of its founder Reed Hastings, doesn’t “cram advertisements down people’s throats.” Under its CEO Tim Cook, Apple has rejected the prevailing model of gathering private information and selling it to marketers to subsidize free services. From Cook’s perspective, advertising doesn’t merely harm privacy. It depletes battery life, eats up mobile data plans, and creates a less pleasing experience on Apple’s beautiful devices. To the chagrin of publishers, the company now offers ad-blocking apps on the iPhone that allow users to gain access to the Internet without any ads at all.

3.

Antonio García Martínez, author of the autobiographical book Chaos Monkeys, is a member of the new class of attention merchants constructing the marketplace for “programmatic” advertising, which is advertising sold on electronic exchanges without the traditional lubricated palaver between buyers and sellers. He is, by his own account, a dissolute character: bad boyfriend, absent father, and often drunk. A tech wise guy working the start-up racket, he was quick to deceive and betray the two less worldly foreign-born engineers who left one advertising technology company with him to start another. He is nonetheless, by the end of his account, a winning antihero, a rebel against Silicon Valley’s culture of nonconformist conformity.

Part of Martínez’s seditiousness is his refusal to accept that the work he does serves some higher social purpose. He writes:

Every time you go to Facebook or ESPN.com or wherever, you’re unleashing a mad scramble of money, data, and pixels that involves undersea fiber-optic cables, the world’s best database technologies, and everything that is known about you by greedy strangers.

To be fair, there’s no reason to think that people in ad tech are greedier than anyone else. Their work is simply more obscure. Some 2,500 companies are part of the technical supply chain for digital advertising. What many of them do to create and transmit ads is largely incomprehensible and uninteresting to outsiders. The simplest explanation is that they interpose themselves between buyers and sellers in an attempt to capture a cut of the revenue.

As a former doctoral student in physics at Berkeley and quantitative analyst on Goldman Sachs’s corporate credit desk, Martínez was well suited to develop this type of intermediation. The advertising technology he works on somewhat resembles the secretive world of high-frequency financial trading depicted by Michael Lewis in Flash Boys (2014). It works to extract value from millions of small daily transactions around captured attention. Adchemy, the company where Martínez first worked, was an intermediary between users of Google’s search engine and companies seeking access to them. It created an information arbitrage by finding potential customers interested in mortgages and online education degrees, and then selling those leads to buyers like Quicken Loans and the University of Phoenix. (“I want to take a shower just reading those names,” he writes.) Martínez worked on building a real-time bidding engine allowing buyers to communicate with Google’s ad exchange—new software that would use Google’s code more efficiently.

The concept behind AdGrok, the company he started after leaving Adchemy with two colleagues, was to break into Google’s search ad business in a different way by allowing store owners to buy location-based ads for products using a barcode scan. By his own admission, this was a half-baked idea. As Martínez writes, a good start-up plan should require no more than one miracle to succeed. AdGrok needed at least five. His stroke of luck was being accepted into Y Combinator, a tech incubator that helps start-ups get going in exchange for a stake in their business. With a Y Combinator pedigree and connections, he was able to sell AdGrok to Twitter after nine months in business. The deal was what’s called an “acqui-hire.” Twitter wanted the company’s engineers, not its software. Facebook was interested as well, but didn’t want the other two engineers or the software, just Martínez. With a sleight-of-hand, he managed to separate himself from the transaction, infuriating Twitter, his two cofounders, and his investors, while landing himself at Facebook, the more promising company.

Martínez arrived at Facebook in 2011 and immediately recognized that he’d never fit in with the hoodie people. He’s an up-from-nowhere cynic; Facebook is an ingenuous company populated by privileged true believers who are cultishly devoted to their boy leader. He was genuinely surprised to discover that the company significantly lagged in its advertising efforts. Facebook’s digital display units were nearly invisible, its tools for use by ad buyers limited, and its general attitude toward advertising one of lordly disdain. Despite its wealth of user data, its ads were—and remain—vastly less effective than Google’s.

Martínez also noticed that status and resources didn’t accrue to advertising product managers at Facebook. He received minimal support for his project of creating a real-time bidding platform called Facebook Exchange, modeled on Google’s AdWords. Thanks in part to his impatience with lesser minds, his project lost out to an internal competitor, making him superfluous at the company. By the end of the book the reader can’t help rooting for him to get hold of more of his 70,000 pre-IPO stock options before getting fired in 2014, even though he deserves to go.

Martínez takes personally the seeming irrationality of Facebook’s throwing away half a billion dollars a year in Facebook Exchange revenue. What he may not fully appreciate is the extent to which a dismissive attitude toward advertising was a feature of Facebook’s business strategy, not a bug in it. Facebook has succeeded because of its relentless focus on increasing its user base and the addictiveness of its product, which constantly promotes more and better connections with other people. The company introduced ads into the News Feed, its core revenue producer, only in 2012, as it approached a billion users and was preparing to become a public company. Zuckerberg spent his first decade focused on harvesting attention—while postponing the feast. Had Martínez arrived at Facebook five years later, he would have found a company much more like Google was in 2011: still focused on growth but bent on improving its advertising products to drive earnings.

As with Google, Facebook’s passage to maturity has required compromise with the purity of the product and the founder’s original vision. But as Wu makes clear, this kind of transformation is almost irresistible. Whatever high-minded things attention merchants say about their mission of connecting the entire world or putting information at its fingertips, they’re in the same business as Benjamin Day, David Sarnoff, and Bob Pittman. The business is advertising.

Ad exchanges, in which advertising units are bought and sold automatically using software designed to target specific audiences, have made digital advertising more efficient without necessarily making it more effective in increasing sales. Facebook holds out the promise of mass personalization; advertisers can pinpoint users with extraordinary precision. That doesn’t mean, however, that ads on Facebook have any special impact. Unlike on Google, where people go to search for goods and services, Facebook ads are still, in the industry’s jargon, “upper funnel,” a better way for marketers to breed awareness than to make a sale. This has made it a high priority for Facebook to establish “attribution,” to demonstrate that it plays an important part in purchase decisions taking place elsewhere, e.g., through search engines and on other websites.
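
To make the mechanics of an ad exchange a little more concrete, here is a minimal sketch, in Python, of the per-impression auction such exchanges run. It is illustrative only: the bidder names and prices are invented, and real exchanges layer on user-level targeting data, publisher price floors, fraud filtering, and strict millisecond time budgets.

from dataclasses import dataclass

@dataclass
class Bid:
    buyer: str    # advertiser or demand-side platform submitting the bid
    cpm: float    # offered price per thousand impressions, in dollars

def run_auction(bids, floor_cpm=0.50):
    # Second-price auction, a common convention in real-time bidding:
    # the highest bidder wins but pays the runner-up's price (or the floor).
    eligible = sorted((b for b in bids if b.cpm >= floor_cpm),
                      key=lambda b: b.cpm, reverse=True)
    if not eligible:
        return None, 0.0
    winner = eligible[0]
    clearing_cpm = eligible[1].cpm if len(eligible) > 1 else floor_cpm
    return winner, clearing_cpm

# One user loads one page; one unit of attention goes up for sale.
bids = [Bid("brand_a", 4.20), Bid("brand_b", 3.10), Bid("retargeter_c", 6.75)]
winner, price = run_auction(bids)
print(f"{winner.buyer} wins the impression at ${price:.2f} CPM")  # retargeter_c, $4.20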

“Adtech” has also done much to make the Web a squalid, chaotic place. On news sites, video ads begin playing automatically and without permission, sometimes with the sound blaring. Pop-up windows ask you to participate in brief surveys before continuing. “Retargeting” ads for whatever product you last browsed follow you around like lost puppies. Pages load in balky fashion because of the click-bait monetization links, real-time bidding for available ad units, and also because of data-collecting cookies and other tracking technologies. Add to that the misery of trolling on social media and on sites that can’t manage to police user comments effectively.[3] This is what the value exchange around free content has become.

The old cliché about advertising was, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” The new cliché is, “If you’re not paying for it, you’re the product.” In an attention economy, you pay for free content and services with your time. The compensation isn’t very good. Consider the pre-roll commercials you are forced to watch to gain access to most video clips, increasingly the dominant type of content on the Web. At an above-average $10 CPM, the cost per thousand views, an advertiser is paying one cent per thirty-second increment of distraction. For the user, that works out to a rate of sixty cents an hour for watching ads. The problem isn’t simply that attention has been made into a commodity, it’s that it’s so undervalued. Marketers buy our time for far less than its worth.
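
The arithmetic behind that valuation is easy to check. Below is a back-of-the-envelope sketch, in Python, using the figures above; it assumes, purely for illustration, that ads are watched back to back, so the exact hourly figure depends on how much of an hour is actually filled with ads.

cpm = 10.00                 # dollars an advertiser pays per 1,000 ad views
cost_per_view = cpm / 1000  # $0.01: one cent per thirty-second pre-roll
ad_seconds = 30             # length of one pre-roll
per_hour_of_ads = cost_per_view * (3600 / ad_seconds)
print(f"${cost_per_view:.2f} per ad, ${per_hour_of_ads:.2f} per hour of solid ad-watching")
# Whatever the exact assumptions, the order of magnitude is the point:
# attention is being bought for roughly a dollar an hour or less.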

Wu suggests that we may be in the middle of another periodic revolt against advertising, based on “a growing sense that media had overtaxed our attentions to the point of crisis.” Though he points to trends like ad-blocking and to higher-quality paid television programming on Netflix and HBO, he doesn’t offer much in the way of a broader prescription. His book is about how we got here, not what to do about it. Based on his reading of media history, Wu doesn’t see much likelihood of replacing the basic model of obtaining apparently “free stuff” in exchange for absorbing commercial messages. Rather, he proposes that we try harder to conserve our mental space through a kind of zoning that declares certain times and spaces off-limits to commercial messages. That might mean a digital sabbath every weekend, or technology to keep advertising out of classrooms. We should appreciate that attention is precious, he writes, and not “part with it as cheaply or unthinkingly as we so often have.”

1. “Designing Organizations for an Information-Rich World,” in Computers, Communications and the Public Interest, edited by Martin Greenberger (Johns Hopkins University Press, 1971).
2. See my “We Are Hopelessly Hooked,” The New York Review, February 25, 2016.
3. See Joel Stein, “How Trolls Are Ruining the Internet,” Time, August 29, 2016.

How video games unwittingly train the brain to justify killing

Wednesday, March 22, 2017

Posted by Donna L. Roberts, PhD in Psychology


Teodora Stoica is a PhD student in the translational neuroscience programme at the University of Louisville. She is interested in the relationship between emotion and cognition, and clinical and cognitive psychology.

Source: https://aeon.co/ideas/how-video-games-unwittingly-train-the-brain-to-justify-killing

Published in association with Cognitive Neuroscience Society, an Aeon Partner

Mortal Kombat X gameplay. NetherRealm/Warner Bros. Interactive Entertainment/Wikipedia

Let’s play a game. One of the quotes below belongs to a trained soldier speaking of killing the enemy, while the other to a convicted felon describing his first murder. Can you tell the difference?

(1) ‘I realised that I had just done something that separated me from the human race and it was something that could never be undone. I realised that from that point on I could never be like normal people.’

(2) ‘I was cool, calm and collected the whole time. I knew what I had to do. I knew I was going to do it, and I did.’

Would you be surprised to learn that the first statement, suggesting remorse, comes from the American mass murderer David Alan Gore, while the second, of cool acceptance, was made by Andy Wilson, a soldier in the SAS, Britain’s elite special forces? In one view, the two men are separated by the thinnest filament of morality: justification. One killed because he wanted to, the other because he was acting on behalf of his country, as part of his job.

While most psychologically normal individuals agree that inflicting pain on others is wrong, killing others appears socially sanctioned in specific contexts such as war or self-defence. Or revenge. Or military dictatorships. Or human sacrifice. In fact, justification for murder is so pliant that the TV series Dexter (2006-13) flirted exquisitely with the concept: a sociopath who kills villainous people as a vehicle for satisfying his own dark urges.

Operating under strict ‘guidelines’ that target only the guilty, Dexter (a forensics technician) and the viewer come to believe that the kill is justified. He forces the audience to question their own moral compass by asking them to justify murder in their minds in the split second prior to the kill. Usually when we imagine directly harming someone, the image is preventive: envision a man hitting a woman; or an owner abusing her dog. Yet, sometimes, the opposite happens: a switch is flipped with aggressive, even violent consequences. How can an otherwise normal person override the moral code and commit cold-blooded murder?

That was the question asked at the University of Queensland in Australia, in a study led by the neuroscientist Pascal Molenberghs, in which participants entered an fMRI scanner while viewing a first-person video game. In one scenario, a soldier kills an enemy soldier; in another, the soldier kills a civilian. The game enabled each participant to privately enter the mind of the soldier and control which person to execute.

Screenshot of what each participant saw

The results were, overall, surprising. It made sense that a mental simulation of killing an innocent person (unjustified kill) led to overwhelming feelings of guilt and subsequent activation of the lateral orbitofrontal cortex (OFC), an area of the brain involved in aversive, morally sensitive situations. By contrast, researchers predicted that viewing a soldier killing a soldier would create activity in another region of the brain, the medial OFC, which assesses thorny ethical situations and assigns them positive feelings such as praise and pride: ‘This makes me feel good, I should keep doing it.’

But that is not what occurred: the medial OFC did not light up when participants imagined themselves as soldiers killing the enemy. In fact, none of the OFC did. One explanation for this puzzling finding is that the OFC’s reasoning ability isn’t needed in this scenario because the action is not ethically compromising. That is to say – it is seen as justified. Which brings us to a chilling conclusion: if killing feels justified, anyone is capable of committing the act.

Since the Korean War, the military has altered basic training to help soldiers overcome existing norms of violence, desensitise them to the acts they might have to commit, and train them to shoot reflexively on cue. Even the drill sergeant is portrayed as the consummate professional personifying violence and aggression.

The same training takes place unconsciously through contemporary video games and media. Young children have unprecedented access to violent movies, games and sports events at an early age, and learning brutality is the norm. The media dwells upon real-life killers, describing every detail of their crime during prime-time TV. The current conditions easily set up children to begin thinking like soldiers and even justify killing. But are we in fact suppressing critical functions of the brain? Are we engendering future generations who will accept violence and ignore the voice of reason, creating a world where violence will become the comfortable norm?

The Queensland study had something to say about this as well. When participants were viewing unjustified killings, researchers noticed increased connectivity between the OFC and an area called the temporal parietal junction (TPJ), a part of the brain that has previously been associated with empathy. Earlier work has shown that disrupting the function of the TPJ turns participants into virtual psychopaths who judge any action as morally permissible, making the TPJ a critical region for empathy. Increased connectivity between the two regions suggests that the participants were actively putting themselves in the shoes of the observer, judging whether killing civilians was morally acceptable or not.

Increased connectivity between left OFC and left and right TPJ for simulating shooting a civilian

‘Emotional and physical distance can allow a person to kill his foe,’ says Lt Colonel Dave Grossman, director of the Killology Research Group in Illinois and one of the world’s foremost experts in human aggression and the roots of violence. ‘Emotional distance can be classified as mechanical, social, cultural and emotional distance.’ In other words, a lack of connection to humans allows a justified murder. The writer Primo Levi, a Holocaust survivor, believed that this was exactly how the Nazis succeeded in killing so many: by stripping away individuality and reducing each person to a generic number.

In 2016, technology and media have turned genocide viral. The video game Mortal Kombat X features spines being snapped, heads crushed and players being diced into cubes. In Hatred, gamers play as a sociopath who attempts to kill innocent bystanders and police officers with guns, flamethrowers and bombs to satisfy his hatred of humanity. Characters beg for mercy before execution, frequently during profanity-laced rants.

A plethora of studies now associate playing such games with greater tolerance of violence, reduced empathy, aggression and sexual objectification. Compared with males who have not played violent video games, males who do play them are 67 per cent more likely to engage in non-violent deviant behaviour, 63 per cent more likely to commit a violent crime or a crime related to violence, and 81 per cent more likely to have engaged in substance use. Other studies have found that engaging in cyberviolence leads people to perceive themselves as less human, and facilitates violence and aggression.

This powerful knowledge could be used to turn violence on its head. Brain-training programs could use current neuroscientific knowledge to serve up exhilarating games to train inhibition, instead of promoting anger. Creating games with the capability to alter thought patterns is itself ethically questionable and could be easily implemented to control a large population. But we’ve already gone down that road, and in the direction of violence. With today’s generation so highly dependent on technology, phasing in games from an early age that encourage tolerance could be a potent tool for building a more humane, more compassionate world.


Attention, Students: Put Your Laptops Away

Wednesday, March 15, 2017

Posted by Donna L. Roberts, PhD in Psychology


By James Doubek

Heard on NPR Weekend Edition Sunday

Researchers Pam Mueller and Daniel M. Oppenheimer found that students remember more via taking notes longhand rather than on a laptop. It has to do with what happens when you’re forced to slow down.

Source: Attention, Students: Put Your Laptops Away

As laptops become smaller and more ubiquitous, and with the advent of tablets, the idea of taking notes by hand just seems old-fashioned to many students today. Typing your notes is faster — which comes in handy when there’s a lot of information to take down. But it turns out there are still advantages to doing things the old-fashioned way.

For one thing, research shows that laptops and tablets have a tendency to be distracting — it’s so easy to click over to Facebook in that dull lecture. And a study has shown that the fact that you have to be slower when you take notes by hand is what makes it more useful in the long run.

In the study published in Psychological Science, Pam A. Mueller of Princeton University and Daniel M. Oppenheimer of the University of California, Los Angeles sought to test how note-taking by hand or by computer affects learning.

“When people type their notes, they have this tendency to try to take verbatim notes and write down as much of the lecture as they can,” Mueller tells NPR’s Rachel Martin. “The students who were taking longhand notes in our studies were forced to be more selective — because you can’t write as fast as you can type. And that extra processing of the material that they were doing benefited them.”

 

Mueller and Oppenheimer noted that note-taking can be categorized in two ways: generative and nongenerative. Generative note-taking pertains to “summarizing, paraphrasing, concept mapping,” while nongenerative note-taking involves copying something verbatim.

And there are two hypotheses as to why note-taking is beneficial in the first place. The first idea is called the encoding hypothesis, which says that when a person is taking notes, “the processing that occurs” will improve “learning and retention.” The second, called the external-storage hypothesis, is that you learn by being able to look back at your notes, or even the notes of other people.

Because people can type faster than they write, using a laptop will make people more likely to try to transcribe everything they’re hearing. So on the one hand, Mueller and Oppenheimer were faced with the question of whether the benefits of being able to look at your more complete, transcribed notes on a laptop outweigh the drawbacks of not processing that information. On the other hand, when writing longhand, you process the information better but have less to look back at.

For their first study, they took university students (the standard guinea pig of psychology) and showed them TED talks about various topics. Afterward, they found that the students who used laptops typed significantly more words than those who took notes by hand. When testing how well the students remembered information, the researchers found a key point of divergence in the type of question. For questions that asked students to simply remember facts, like dates, both groups did equally well. But for “conceptual-application” questions, such as, “How do Japan and Sweden differ in their approaches to equality within their societies?” the laptop users did “significantly worse.”

The same thing happened in the second study, even when they specifically told students using laptops to try to avoid writing things down verbatim. “Even when we told people they shouldn’t be taking these verbatim notes, they were not able to overcome that instinct,” Mueller says. The more words the students copied verbatim, the worse they performed on recall tests.

And to test the external-storage hypothesis, for the third study they gave students the opportunity to review their notes in between the lecture and test. The thinking is, if students have time to study their notes from their laptops, the fact that they typed more extensive notes than their longhand-writing peers could possibly help them perform better.

But the students taking notes by hand still performed better. “This is suggestive evidence that longhand notes may have superior external storage as well as superior encoding functions,” Mueller and Oppenheimer write.

Do studies like these mean wise college students will start migrating back to notebooks?

“I think it is a hard sell to get people to go back to pen and paper,” Mueller says. “But they are developing lots of technologies now like Livescribe and various stylus and tablet technologies that are getting better and better. And I think that will be sort of an easier sell to college students and people of that generation.”


Virtual Reality Can Leave You With an Existential Hangover

Wednesday, March 8, 2017

Posted by Donna L. Roberts, PhD in Psychology


By Rebecca Searles

After exploring a virtual world, some people can’t shake the sense that the actual world isn’t real, either.

Source: Virtual Reality Can Leave You With an Existential Hangover

When Tobias van Schneider slips on a virtual reality headset to play Google’s Tilt Brush, he becomes a god. His fingertips become a fiery paintbrush in the sky. A flick of the wrist rotates the clouds. He can jump effortlessly from one world that he created to another.

When the headset comes off, though, it’s back to a dreary reality. And lately van Schneider has been noticing some unsettling lingering effects. “What stays is a strange feeling of sadness and disappointment when participating in the real world, usually on the same day,” he wrote on the blogging platform Medium last month. “The sky seems less colorful and it just feels like I’m missing the ‘magic’ (for the lack of a better word). … I feel deeply disturbed and often end up just sitting there, staring at a wall.”

Van Schneider dubs the feeling “post-VR sadness.” It’s less a feeling of depression, he writes, and more a sense of detachment. And while he didn’t realize it when he published the post, he’s not the only one who has experienced this. Between virtual reality subreddits and Oculus Rift online forums, there are dozens of stories like his. The ailments range from feeling temporarily fuzzy, light-headed, and in a dream-like state, to more severe detachment that lasts days—or weeks. Many cases have bubbled up in the last year, likely as a response to consumer VR headsets becoming more widely available. But some of the stories date as far back as 2013, when an initial version of the Oculus Rift was released for software developers.

“[W]hile standing and in the middle of a sentence, I had an incredibly strange, weird moment of comparing real life to the VR,” wrote the video-game developer Lee Vermeulen after he tried Valve’s SteamVR system back in 2014. He was mid-conversation with a coworker when he started to feel off, and the experience sounds almost metaphysical. “I understood that the demo was over, but it was [as] if a lower level part of my mind couldn’t exactly be sure. It gave me a very weird existential dread of my entire situation, and the only way I could get rid of that feeling was to walk around or touch things around me.”

It seems that VR is making people ill in a way no one predicted. And as hard as it is to articulate the effects, it may prove even harder to identify its cause.

* * *

The notion of virtual-reality devices having a physical effect on their users is certainly familiar. Virtual-reality sickness, also known as cybersickness, is a well-documented type of motion sickness that some people feel during or after VR play, with symptoms that include dizziness, nausea, and imbalance. It’s so common that researchers say it’s one of the biggest hurdles to mass adoption of VR, and companies like Microsoft are already working rapidly to find ways to fix it.

Some VR users on Reddit have pointed out that VR sickness begins to fade with time and experience in a headset. Once they grew their “VR legs,” they wrote, they experienced less illness. Van Schneider has noticed the same thing. “[The physical symptoms] usually fade within the first 1–2 hours and get better over time,” he wrote. “It’s almost like a little hangover, depending on the intensity of your VR experience.” Indeed, VR sickness is often referred to as a “VR hangover.”


The dissociative effects that van Schneider and others have written about, however, are much worse. In an attempt to collectively self-diagnose, many of the internet-forum users have pointed to a study by the clinical psychology researcher Frederick Aardema from 2006 — the only study that looks explicitly at virtual reality and clinical dissociation, a state of detachment from one’s self or reality. Using a questionnaire to measure participants’ levels of dissociation before and after exposure to VR, Aardema found that VR increases dissociative experiences and lessens people’s sense of presence in actual reality. He also found that the greater the individual’s pre-existing tendency for dissociation and immersion, the greater the dissociative effects of VR.

Dissociation itself isn’t necessarily an illness, Aardema said. It works like a spectrum: On the benign side of the spectrum, there is fantasizing and daydreaming — a coping mechanism for boredom or conflict. On the other side, however, there are more pathological types of dissociation, which include disorders like derealization-depersonalization (DPDR).

While derealization is the feeling that the world isn’t real, depersonalization is the feeling that one’s self isn’t real. People who’ve experienced depersonalization say that it feels like they’re outside of their bodies, watching themselves. Derealization makes a person’s surroundings feel strange and dream-like, in an unsettling way, despite how familiar they may be.

When I spoke with Aardema on the phone, he had been wondering why his paper from ten years ago had suddenly been getting so many hits on the science-networking site ResearchGate. His study measured mild dissociative effects — think, “I see things around me differently than usual” — so he emphasized that there is a need to explore how these effects may relate to mood and depressive feelings. “There was some indication in our initial study that feelings of depression were important in relation to dissociation,” he said.

* * *

I’ve never felt depersonalization, but I have felt derealization, the result of a severe panic disorder I developed when I was 25. It was nothing short of nightmarish. When the effects were tolerable, it felt like I was permanently high on psychedelics — a bad trip that wouldn’t end. When it was at its most intense, it was like living in my own scary movie: You look around at your everyday life and nothing feels real. Even faces that I knew and loved looked like a jumbled mess of features.

DPDR often occurs after a traumatic event, as a defense mechanism that separates someone from emotional issues that are too difficult to process. My case was triggered by stress. But according to a 2015 study in the journal Multisensory Research, feelings of unreality can also be triggered by contradicting sensory input — like one might experience inside a VR headset.

The study, by Kathrine Jáuregui-Renaud, a health researcher at the Mexican Institute of Social Security, explains that in order for the mind to produce a coherent representation of the outside world, it relies on integrating sensory input—combining and making sense of the information coming in through the senses. When there’s a mismatch between the signals from the vestibular system — a series of fluid-filled tubes in the inner ear that senses balance — and the visual system, the brain short-circuits. Part of the brain may think the body is moving, for instance, while another part thinks the feet are firmly planted on the ground. Something feels amiss, which can cause anxiety and panic.


This, Aardema pointed out, could explain why books, movies, and video games don’t tend to cause the same kinds of dissociative aftereffects. Books don’t have moving visuals, and the movement in movies and video games is usually not intense enough. It also helps that these experiences are usually enjoyed while sitting still. So they just don’t have the same capacity to offset balance and vestibular function. (Though for some, movies can cause motion sickness. And for those people there is Moviehurl.com — a website devoted to rating movies on their likelihood of giving a viewer motion sickness.)

Scientists also believe that this kind of conflicting information is what causes motion-sickness symptoms like nausea and dizziness. So why do some VR users get motion sickness, while others end up experiencing something more serious? Research suggests that there is a link between serotonin levels, which play a role in mood regulation, and the vestibular system. So for those that may already suffer from a serotonin-related imbalance, like the 40 million Americans who suffer from anxiety disorders, VR’s disruption of the vestibular system may have a more profound effect.

* * *

As van Schneider illustrated in his blog post, the appeal of virtual reality’s “superpowers” is compelling. VR’s very purpose is to make it difficult to distinguish simulation from reality. But what happens when the primitive brain is not equipped to process this? To what extent is VR causing users to question the nature of their own reality? And how easily are people able to tackle this line of questioning without losing their grip?

One evening during my DPDR phase, I was riding in a cab down a main street in the West Village, looking out the window. It was summer and there were tourists everywhere, and the light before sunset was still lingering. It was a perfect time to be out in the street, walking with friends and family, taking in New York City. But I remember the distinct feeling of hating everyone I saw. They had brains that just worked, brains that collected streams of sensory information and painted a satisfying picture of reality, just like brains are supposed to do. They most likely never questioned if what they were experiencing was real.

For some people, at least, it seems that VR could change that. In March, Alanah Pearce, an Australian video game journalist and podcast host, recounted troubling post-VR symptoms after the Game Developers Conference in San Francisco. “I was very fatigued. I was dizzy. And it definitely hits that strange point where the real world feels surreal,” she said. “I’m not going to go into that too in-depth, because it’s something I haven’t yet grasped. But I know that I’m not alone, and other people who play VR feel the same thing, where it’s like, nothing really feels real anymore. It’s very odd.”

 


The Internet of Things: explained

Wednesday, March 1, 2017

Posted by Donna L. Roberts, PhD in Psychology


Written by Joe Myers, Formative Content
An illustration picture shows a projection of binary code on a man holding a laptop computer, in an office in Warsaw June 24, 2013. REUTERS/Kacper Pempel

A guide to the Internet of Things.

Source: The Internet of Things: explained

The internet of things is a term you may have heard a lot recently. It features heavily in discussions about our future – and, increasingly, our present. But what is it?

This is a simple guide to the term, the impact it’s set to have, and how it might change your life.

The internet of what?

At its heart, the concept is very simple. It’s about connecting devices to the internet. This doesn’t just mean smartphones, laptops and tablets. Jacob Morgan from Forbes talks of connecting everything with an “on and off switch.”

The ‘smart fridge’ dominates media discussions. A fridge that could let you know when you’re running out of something, write a shopping list, or alert you if something has gone out of date. But, in theory, anything from a jet engine, to a washing machine could be connected.

Connected devices can be controlled remotely – think: adjusting your heating via an app – and can gather useful data.
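
As a purely illustrative sketch of that pattern, the snippet below (Python, standard library only) shows a connected thermostat packaging a sensor reading for a cloud service and accepting a remote command of the kind a phone app might send. The endpoint URL, device ID, and payload fields are hypothetical; a real product would use an established IoT protocol such as MQTT, along with authentication and encryption.

import json, time, urllib.request

DEVICE_ID = "thermostat-42"
CLOUD_URL = "https://example-iot-cloud.invalid/readings"  # hypothetical endpoint

def read_temperature_c() -> float:
    # Stand-in for a real sensor driver.
    return 21.5

def report_reading() -> None:
    # Package the latest reading as JSON for the cloud service to store and analyze.
    payload = json.dumps({
        "device": DEVICE_ID,
        "ts": int(time.time()),
        "temperature_c": read_temperature_c(),
    }).encode()
    req = urllib.request.Request(CLOUD_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # not invoked below, since the endpoint is a placeholder

def handle_command(command: dict) -> None:
    # Remote control: e.g. a new target temperature pushed from a phone app.
    if "set_target_c" in command:
        print(f"{DEVICE_ID}: target temperature set to {command['set_target_c']} C")

handle_command({"set_target_c": 19.0})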

According to SAP, the number of connected devices is set to exceed 50 billion by 2020. Revenue for the providers of IoT services is also growing rapidly, as this chart shows.

 Projected global revenue of the Internet of Things from 2007 to 2020

Image: Statista

Solving problems on a massive scale

The IoT is about much more than connecting multiple objects in your home to the internet. As the World Economic Forum’s Intelligent Assets: unlocking the circular economy potential report has highlighted, the IoT has the potential to transform entire cities.

Sensors, combined with smart phones, will allow for more efficient energy networks (across cities and in your home), reduced congestion and improved transport, as well as recyclable, multi-use buildings.

Houses, offices, factories and public buildings could all generate electricity from renewable sources. Sensors would then coordinate the distribution and storage of this power, making whole systems cleaner, more efficient and more stable.

 Intelligent assets making cities smarter by...

Image: World Economic Forum

Smart cities could also make your journey to and from work much easier. Real-time traffic data, gathered from sensors, could reduce journey times. Mobile parking apps will make finding a space much easier, while smart street lights would light your way home.

Connected cars are set to be a major part of the IoT. Gartner forecasts that by 2020 there will be more than 250 million connected vehicles on the road. Live traffic and parking information, real-time navigation, and automated driving could all become a reality as connectivity spreads.

 Smart transport systems

Image: World Economic Forum

The installation of 42 ‘nodes’ – data collection boxes – is set to begin this summer in Chicago. By 2018, the Array of Things project hopes to have installed 500 across the city. This map shows the location of the original 42 nodes, which will gather data on topics from air quality to traffic volume.

 AoT Initial Locations

Image: Array of Things

All this data will be made available to the public. It will provide real-time, location-specific information about Chicago’s “environment, infrastructure and activity”, according to Array of Things.

The IoT has the potential to make our lives better. More efficient heating systems could save us money, transport apps could save us time, and new electrical grid systems could help save the planet.

So it’s all great then?

Not quite. There are numerous security concerns around having so many connected devices. Each connected device in theory becomes a risk, and a possible target for hackers.

Many of these devices contain a lot of personal information and data. Consider a smart electricity meter. It knows your electricity use and typical times you’re at home. All of this could be available to a hacker.

If a whole city is connected, the risk becomes much greater.

In this World Economic Forum video, Lorrie Cranor, Director of Carnegie Mellon University’s CyLab Usable Privacy and Security Laboratory, explores the threat IoT could pose to our privacy. She also looks at what we can do about it.

“In our smart homes we want our fridge to remind us to buy more butter at the store but we don’t want it to tell our health insurers,” she says.

