• About the Authors
  • Blogs and Shows
  • Journals
  • Open Invitation
  • References
  • Resources
  • Taxonomy
  • Who’s Who?

Media Psychology

~ Informing, Educating and Influencing


Category Archives: Media Psychology

Distracted Boyfriend meme is sexist, rules Swedish ad watchdog

26 Wednesday Sep 2018

Posted by Ken S. Heller in Media Effects, Media Psychology, Psychology

≈ Leave a comment

Tags

#MeToo, Advertising, Internet, Marketing, Memes, Sexism, Stereotypes

From The Guardian

Popular image of man ogling another woman deemed degrading and discriminatory

The popular Distracted Boyfriend meme, based on a photo of a man turning away from his outraged girlfriend to stare admiringly at another woman, has been ruled sexist by Sweden’s advertising ombudsman.

The stock image, also known as Man Looking at Other Woman, by Antonio Guillem, a photographer from Barcelona, was named meme of the year in April and was one of the most widely shared memes in 2017, providing comment on anything from music to politics to hit TV shows.


The ombudsman said recruitment advertisements posted on Facebook by the internet services provider Bahnhof, which labelled the boyfriend “You”, the girlfriend “Your current workplace”, and the second woman “Bahnhof”, were gender-discriminatory, the Local reported.

“The advertisement objectifies women,” the ombudsman, RO, said. “It presents women as interchangeable items and suggests only their appearance is interesting … It also shows degrading stereotypical gender roles of both men and women and gives the impression men can change female partners as they change jobs.”

The ombudsman said the image objectified the two women by presenting them as workplaces, but the man as an individual, and added that the “other woman” was clearly a “sex object … unrelated to the advertisement, which is for recruiting salespeople, operating engineers and a web designer”.


The Swedish advertising industry is self-regulating, meaning that the ombudsman can criticise ads but it does not have the power to impose sanctions.

The ad, posted in April, drew nearly 1,000 comments, many from women who complained it was sexist. “1. You really don’t want to attract women to your company,” one commenter, Susanne Lahti Hagbard, said. “2. You really don’t want to attract sensible guys either.”

Another, Sofie Sundåker, said: “It doesn’t matter if it’s a popular meme. If you do not see how this picture is sexist whatever words are on the people, you are clearly not a workplace for any woman who wants to be taken seriously in her work.”

The company said on its Facebook page that its aim had been “to illustrate a situation that shows Bahnhof is an attractive employer, and that people who have a slightly duller workplace might be interested in us. This was the situation illustrated in this meme.

“Anyone familiar with the internet and meme culture knows how this meme is used and interpreted. Gender is usually irrelevant in the context. We explained meme culture to the ombudsman, but it chose to interpret the post differently”.

If the company should be punished for anything, it concluded, “it should be for using a tired old meme”.

While it frequently features near the top of world gender-equality rankings, a 2016 study found Sweden was the worst of the Nordic countries at combating sexist advertising. This year, Stockholm council voted to bar ads deemed sexist or degrading from the city’s public billboards.



Tech Titans Dish Advice About Phone Addiction – Great Escape – Medium

24 Monday Sep 2018

Posted by Donna L. Roberts, PhD in Advertising, Media Effects, Media Literacy, Media Psychology, Psychology

≈ Comments Off on Tech Titans Dish Advice About Phone Addiction – Great Escape – Medium

Tags

Addiction, Email, Gaming, Mental Health, Smartphone, Social Media

 

Your phone is training you to be its servant. Here’s how to fight back.

by Clint Carter

Source: Tech Titans Dish Advice About Phone Addiction – Great Escape – Medium

With every Facebook post you like, tweet you send, or question you type into Google, you’re giving the internet strength. Feeding the algorithms. Paying the advertisers. You’re also helping to fill server farms that will ultimately be replaced by bigger server farms, effectively anchoring the internet in the real world. This is all sweet and rosy, if the internet-human relationship is mutually beneficial. But it’s not clear that it is.

In some ways, our nonstop online lives are bringing us closer. But at least as often, the relentless pace of social media, email, and constant pings and beeps only serve to pull us further apart. And all this tech is certainly bad for our health and happiness: Research links social media to depression and high-speed internet to poor sleep. Simply having a phone visible during meals has been shown to make conversation among friends less enjoyable.

It’s probably hard to imagine life without a high-powered computer in your pocket or purse at all times, but it’s worth remembering that you’re still an autonomous being.

That said, these effects aren’t inevitable. Not yet, anyway. It’s probably hard to imagine life without a high-powered computer in your pocket or purse at all times, but it’s worth remembering that you’re still an autonomous being. You can decide how often and in what way you interact with the internet. And if you talk to the researchers, authors, and entrepreneurs who understand digital technology best, you discover that many of them already have.

We reached out to eight digital experts to find out how they maintain a (reasonably) healthy relationship with technology. All agreed that push notifications are evil, so you should go ahead and turn those off right now. Some of the experts even said they keep their ringers and text notifications off, at least some of the time. Beyond that, they all had unique strategies for defending themselves against the intrusive, obnoxious, and possibly destructive effects of technology.


Give Yourself One Honest Hour of Work Each Day

Dan Ariely, PhD

Professor of psychology and behavioral economics at Duke University, author of Predictably Irrational: The Hidden Forces That Shape Our Decisions

Much of Dan Ariely’s work — including Timeful, the A.I.-powered calendar app he built and sold to Google — focuses on making the most of limited time. One way he does this is by starting each morning in a distraction-free environment. “I think very carefully about the first hour of the day,” he says. “I used to have two computers, and one had no email or browser on it.” That’s the one he used for writing in the mornings.

“The thing is to realize that our time to work is actually quite precious.”

Ariely’s travel schedule forced him to abandon the dual-computer setup, but the experiment was fruitful enough that he now relies on a self-imposed internet ban to get work done. “The last thing I do each day is turn my computer off,” he says. “The next day, when I turn it back on, my browser and email are still off.” And Ariely keeps it that way until he’s powered through that first hour. “The thing is to realize that our time to work is actually quite precious,” he says. “We need to protect it.”


Quit Cold Turkey

Steve Blank

Stanford professor, retired entrepreneur, and founder of the Lean Startup movement

Over the two-plus decades that Steve Blank helped shape Silicon Valley, he ushered eight technology startups into the world. But it was during his tenure at Rocket Science Games, a company he founded in the mid-1990s, that Blank began getting high on his own supply. “I found myself drug addicted,” he says. “I’d be up playing games until four in the morning.”

“The devices started as tools and ended up as drugs for most people.”

Video games are hardly a Schedule 1 narcotic, but Blank was losing sleep and, he felt, setting a bad example for his children. Emerging research confirms his idea that games and social media can exert drug-like forces over users. A study published in the journal PLOS One even found that digital addictions can shrink the amount of white matter at certain brain sites, creating changes similar to those seen in alcohol, cocaine, and methamphetamine addictions. “The devices started as tools and ended up as drugs for most people,” Blank says. “App manufacturers are incentivized to make us addicted. I’ll contend that a ton of social media is actually a lot like oxycontin.”

When Blank realized that his gaming habit was robbing him of happiness by way of lost sleep and family time, he snapped his CD-ROMs in half (this was the ’90s, remember). Then he threw the pieces into the trash. “I literally went cold turkey,” he says. “And I haven’t played a video game since.”


Create an Email System and Stick to It

Ethan Kross, PhD

Professor of psychology and director of the Emotion and Self-Control Laboratory at the University of Michigan

After studying Facebook — and, more important, after finding that the biggest users were the least satisfied with life — Ethan Kross decided to refrain from any social media use. But he still checks his email more often than he’d like. “It’s a self-control failure from a self-control expert,” he says.

To be fair, the professor is probably selling himself short. The truth is he relies on three solid rules to prevent compulsive emailing.

“So I just try to change my digital environment. We know from research that can be a powerful tool for enhancing self-control.”

First, Kross pushes all fast-moving work conversations to Slack. “That way I can get information from my lab collaborators quickly, and my email becomes less urgent.”

Second, he uses the snooze function, which is available on Gmail and services like Boomerang for Outlook, for any email that isn’t urgent. “If there are 50 things in my inbox, that can be disruptive to my immediate goals,” Kross says. So he snoozes them for a few hours or a few days, depending on the urgency.

Finally, Kross relies on an email-free iPad for reading, so he can’t check his incoming mail even if he wants to. “I don’t like checking my email when I’m in bed, because once every month I’ll receive something that makes me not sleep well,” he says. “So I just try to change my digital environment. We know from research that can be a powerful tool for enhancing self-control.”


Take Weeklong Breaks as Necessary

Jean Twenge, PhD

Researcher and professor of psychology at San Diego State University and the author of iGen, a book about how the internet is changing young adults

In April of last year, Jean Twenge signed up for Twitter. It’s her first and only social media account, and almost immediately she found herself clashing with people who disagreed with her research. “It’s a public forum, and I felt a compulsion to defend my arguments,” Twenge says. “But is that the right response? I don’t know. For my own mental health, I know it’s not.”

“It’s a public forum, and I felt a compulsion to defend my arguments.”

It’s not that she wanted to be on Twitter, but as an academic with a book to promote, Twenge felt like she had to. After six months with the service, though, Twenge noticed that she was increasingly giving in to a compulsion to check up on conversations that were making her miserable. “It completely confirmed why I don’t have social media,” she says. And so she scaled back. Twenge kept the account for promotional reasons and still has periods of time when she’s active, but when she needs a refresh, she consciously steps away for days or weeks.

When asked if she’s tempted to open an Instagram or Facebook account — even if just for research purposes — she replies quickly, “Nope.”


Dock Your Gadget and Walk Away

Erik Peper, PhD

Professor at San Francisco State University and president of the Biofeedback Federation of Europe

As a researcher who explores the impact of excessive phone use (it makes us feel lonely) and the bad posture brought on by constantly staring at a screen, Erik Peper makes a point of keeping his phone at a distance. When he leaves home in the morning, he packs it into his backpack instead of his pocket. And when he returns in the evening, he docks it at the charging station by his front door.

What’s the point? There are two, actually.

“There are very few things that are truly urgent.”

First, the microwaves coming off mobile devices could present a small risk to their owners, Peper says. In a paper he wrote for the journal Biofeedback, Peper cites epidemiological research showing that people who use cellphones for more than 10 years are more likely than nonusers to have tumors on their salivary glands and inside their ear canals. They’re also three times as likely to have certain brain and spinal-cord tumors on the side of their head where they hold their phone. “The data is weak and controversial,” Peper admits. “But I believe in the precautionary principle, which says that you have to first prove something is totally safe before you can use it.”

The second reason is that, simply put, it’s a distraction. “The phone hijacks our evolutionary patterns,” Peper says. “We don’t do good with multitasking, so if you’re writing an article, and every five minutes you pop back to answer a message, you’re much less productive in the long term.” The same logic applies to socializing, he says, which is why his phone is stored out of sight when he’s with friends and family.

Does it matter that he’s a little slow to reply to messages? Or that he occasionally misses a call? “There are very few things that are truly urgent,” Peper says. “It’s different if you’re a firefighter, but beyond that, whether I answer the email this minute, later today, or even this evening — it really makes no difference.”


Eliminate Email on Your Phone

Linden Tibbets

CEO of IFTTT, a service that lets you program your apps and smart devices to carry out rote tasks

Years ago, Linden Tibbets decided he didn’t want to be a slave to his email. Which meant, in short, that he would read and send messages only while sitting at his desk.

“The only time I send email on my phone is if I’m running late to a meeting and there’s no other way to communicate,” Tibbets says. “That’s literally the only time.”

“You can be endlessly entertained with what’s happening in the world around you. You don’t need your phone.”

The upshot, he says, is that he’s able to address his correspondence with better focus. “I would much rather spend an extra hour in the evening responding to email than to be distracted by it off and on throughout the day,” Tibbets says. If it takes a while to reply to people, no big deal. “I just say, ‘Thanks for your patience. I apologize for being slow to get back to you.’”

And if he finds himself with a moment of downtime — standing in line for groceries, for instance — Tibbets considers it a rare opportunity for mind wandering. “I play a game with myself where I try not to look at my phone,” he says. “I look at people. I read food labels. I observe things in the environment. You can be endlessly entertained with what’s happening in the world around you. You don’t need your phone.”


Schedule Moments of Disconnection

Adam Alter, PhD

Professor of marketing at New York University and author of Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked

In Irresistible, Adam Alter argues that in some ways, tech addiction may actually be worse than cigarette addiction. Because the web is built on social connections, each new addict makes it harder for the rest of us to abstain. “Addictive tech is part of the mainstream in a way that addictive substances never will be,” Alter writes. “Abstinence isn’t an option.”

“I try to put my phone on airplane mode on weekends.”

So what does the tech critic do to protect his own mental autonomy? He disconnects when the workweek’s done. “I try to put my phone on airplane mode on weekends so I can take photos of my two young kids without interruptions from emails and other needy platforms.”


Swap Out the Brain-Rot Apps for Ones That Enrich

Ali Brown

Entrepreneurial consultant, host of Glambition, a podcast for women in business

Last year, Ali Brown had a social media reckoning. “It was after the election, when everything was getting toxic and weird,” she says. “I was getting all my news from Facebook, and I felt this sense of unease all the time.”

So Brown did an entirely logical thing that most of us haven’t done: She drained the swamp on her phone. In one heroic moment of full-steam bravado, Brown deleted Facebook, Twitter, and Instagram and replaced them with apps from the Wall Street Journal and the New York Times. “I decided to pay for some really good journalism,” she says. “I’ll use my time to read those instead.”

“Responding to social media all day is going to get you nowhere.”

Once her healthier new phone routine was established, Brown added back one social media app — but just one! “I like Instagram because it’s generally happy and fun,” she says. “I post about my kids.”

Brown is lucky enough to have a team to run her Twitter and Facebook accounts, but she knows there are better uses for investing her personal time. “If you’re here in this life to do great, powerful work, then you need to create some space in your day to be a freethinker,” she says. “Responding to social media all day is going to get you nowhere.”

To her clients — mostly women running seven- and eight-figure companies — Brown generally offers this advice: “Try deleting social media for a week. You won’t miss anything, you won’t cease to exist, and you’ll thank me later.”


Written by Clint Carter

Writer for publications such as Entrepreneur, Men’s Health, Men’s Journal, New York magazine, and Wall Street Journal.


Wicked Witch or Job Candidate?

19 Wednesday Sep 2018

Posted by Melissa Chyba in Media Psychology, Psychology

≈ Leave a comment

Tags

Cultivation Theory, Film, Influence, Sexism, Television


I have observed an increasing number of articles coming across my news feeds and social media about how inaccurate perceptions of aging women affect them in the workplace. A recent WSJ article about women over 50 looking for work caught my attention because the trends in the workplace and in media have some similarities. The article cited a 2015 study at the University of California, Irvine, in which researchers submitted 40,000 fake job applications from both male and female "candidates" across three age ranges and found significant evidence of age discrimination against older women. The author also noted that women often take jobs below their capacity, skill level and pay grade, and are judged more harshly than their male counterparts for their appearance (Weber, 2017). Being someone who believes that "data doesn't lie," I looked at Census and labor statistics.

Women over 40 make up 48% of the U.S. population, while men over 40 are roughly 44% (United States Census Bureau, 2017). However, when it comes to unemployment, women tend to fare worse than men as they age. The unemployment rate in the 45-54 age range is higher for women (3.6% vs. 3% for men), equal in the 55-64 range (2.8% for both), and higher again in the over-65 segment (4% vs. 3.1% for men) (Labor Force Statistics from the Current Population Survey, 2018).
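
To make the gender gap easier to scan, here is a minimal Python sketch that simply lays the BLS rates quoted above side by side. The figures are the ones cited in this paragraph; the snippet itself is my own illustration, not something from the cited sources.

```python
# Unemployment rates by age band and gender, as cited above (BLS, 2018).
rates = {
    "45-54": {"women": 3.6, "men": 3.0},
    "55-64": {"women": 2.8, "men": 2.8},
    "65+":   {"women": 4.0, "men": 3.1},
}

for band, r in rates.items():
    gap = r["women"] - r["men"]
    print(f"{band}: women {r['women']}%, men {r['men']}%, gap {gap:+.1f} pts")
```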

In an age of awareness and press coverage around unconscious bias, you would think discrimination and false perceptions associated with age and gender would be giving way to a more enlightened public. So, the question is, why? Why are women in higher age groups subjected to tougher hurdles and unfair perceptions? One variable to look at is the media we have consumed. Think about the media consumed by multiple generations: older women have not usually been portrayed in a positive light. Snow White had an evil older stepmother, The Little Mermaid had Ursula, the gray-haired villain, and 101 Dalmatians' villain was Cruella de Vil. The list goes on and on. See a theme here? If you don't think these portrayals have impacted our perceptions, please read on…

Cultivation theory posits that media shapes the public's worldview, especially in children. Media-created worldviews, especially with heavy exposure, can influence schemas about what is perceived as normal, particularly for individuals who have little contact with other groups except through media (Signorielli, 2004). Portrayals of age groups in television and film can influence our perceptions of a demographic group's size as well as its competencies. Negative portrayals of older age groups can and will shape perceptions, particularly among younger demographics, because they are less likely to critically examine media portrayals. However, perception formation does not only affect younger generations; those in the aging group tend to hold negative stereotypes about their own group as well (Lauzen & Dozier, 2005b).

A double standard for aging men and women exists in television, film and advertising. In many films, female characters are younger than their male counterparts and are described as elderly at an earlier age. In the film and television industry, women are often considered old by age 35, while the threshold for men is higher (Bazzini, McIntosh, Smith, Cook, & Harris, 1997). Women's value in film emphasizes looks and youth, whereas men have additional attributes that define their worth. In an analysis of the top 100 grossing films of 2002, Lauzen and Dozier (2005a) found that male characters over the age of 50 were depicted as active in all aspects of life, whereas female characters were not. Men are portrayed as if they still have things to accomplish as they age, while women are portrayed as leading less purposeful lives, with fewer career aspirations (Lauzen & Dozier, 2005a).

Television isn't any better than film: over time it has portrayed aging women as becoming old earlier in life and as less visible than males. Furthermore, aging female characters are portrayed as less useful and with diminished capacity, particularly around prestige and other markers of importance and vitality, compared to men of the same age (Bazzini et al., 1997; Signorielli, 2004). A 2005 study of prime-time television characters found that representation, recognition and respect are not the same for men and women as they age. Specifically:

  • Aging female characters had less representation than their male counterparts starting in their 40s.
  • Portrayals of leadership increased with age; however, men were much more likely than women to play leadership roles in their 40s and 50s.
  • Portrayals of occupational power had a positive linear relationship with age for both genders; however, men in their 50s were more likely to have occupational power than women of the same age.
  • Male characters of all ages were likely to have goals, whereas among women only those in their 40s were likely to have goals.

Lauzen and Dozier's research concluded that there is a double standard of respect afforded to aging characters based on gender. Male characters were more likely than women to have leadership roles, occupational power and goals, which could reinforce stereotype bias against older women in the workplace (Lauzen & Dozier, 2005b).

Some of you reading this article may look at the age of the research I am referencing and say, "This research is 10 to 20 years old, and so much has changed." With women's issues receiving more attention in the media, it wouldn't be farfetched to offer proof points of the changing times by pointing to actors such as Lily Tomlin and Jane Fonda in Grace and Frankie, or Judi Dench and Helen Mirren in powerful roles in recent years. However, this is a false assumption, because cultivation theory posits that what we see in the media creates our worldviews regardless of its veracity. A 2016 study analyzed over 2,000 movie screenplays and the gender associated with the dialogue. As women aged, their percentage of dialogue quickly diminished, while men's dialogue increased with age. For example, women aged 22-31 received 38% of screenplay words (men received 20%), while women aged 42-65 received 20% and men received 39%. The numbers for over 65 were abysmal for both genders, but women fared worse, with 3% compared to 5% for men (Anderson & Daniels, 2016).
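
For readers curious what this kind of screenplay analysis looks like mechanically, here is a toy Python sketch. The records, field layout and age bands are hypothetical stand-ins rather than the actual Pudding dataset; the point is only to show how dialogue share by gender can be tallied within age groups.

```python
from collections import defaultdict

# Hypothetical dialogue records: (character_age, gender, words_in_line).
lines = [
    (27, "F", 120), (29, "M", 90), (45, "M", 200),
    (48, "F", 60), (70, "F", 15), (68, "M", 40),
]

def age_band(age):
    """Map a character's age to the bands discussed above."""
    if 22 <= age <= 31:
        return "22-31"
    if 42 <= age <= 65:
        return "42-65"
    if age > 65:
        return "over 65"
    return "other"

# Tally words spoken per gender within each age band.
totals = defaultdict(lambda: defaultdict(int))
for age, gender, words in lines:
    totals[age_band(age)][gender] += words

for band, counts in sorted(totals.items()):
    all_words = sum(counts.values())
    women_share = 100 * counts.get("F", 0) / all_words
    print(f"{band}: women speak {women_share:.0f}% of the words")
```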

The Center for the Study of Women in Television and Film's analysis of the top 100 grossing films of 2017 did not provide an encouraging picture. Women accounted for only 34% of all speaking roles, which is sad considering they represent half the population. When that already unfair share of speaking roles is broken down by age, the story continues to favor younger women: men over 40 accounted for 46% of all male characters, whereas women over 40 accounted for only 29% of female characters (Lauzen, 2018). While it is wonderful to see some older women taking on powerful lead roles, the attention those roles receive is certainly not the norm.

There you have it: as women age in media and entertainment, if they appear at all, they are often portrayed as old, ugly, evil, less competent, less powerful, with little left to accomplish, and they receive less respect than their male counterparts. American culture associates beauty with goodness, and therefore a woman's value tends to be tied to her looks, favoring the young (Bazzini, McIntosh, Smith, Cook, & Harris, 1997). The time has come for all supervisors, recruiters and human resources departments to rethink their assumptions and check their unconscious bias about aging women as well. Women over 40 are a sizeable portion of the population, we are not invisible, and dammit, we are just as smart, capable and appealing as our male counterparts. America's unemployment is low, skilled talent is a growing issue, and women over 40 represent an opportunity to fill the gap. Is your perception of that woman's qualifications based on data, or is Cinderella's evil stepmother influencing your opinion?

References

Anderson, H., & Daniels, M. (2016, April). Film dialogue from 2000 screenplays, broken down by gender and age. Retrieved from The Pudding: https://pudding.cool/2017/03/film-dialogue/index.html

Bazzini, D. G., McIntosh, W. D., Smith, S. M., Cook, M., & Harris, C. (1997). The aging woman in popular film: Underrepresented, unattractive, unfriendly, and unintelligent. Sex Roles: A Journal of Research, 36(7-8), 531-543. doi:10.1007/BF0276689

Labor force statistics from the current population survey. (2018, July 6). Retrieved from Bureau of Labor Statistics: https://www.bls.gov/web/empsit/cpsee_e16.htm

Lauzen, M. M. (2018). It’s a man’s (celluloid) world 2017. Retrieved September 1, 2018, from Center for the Study of Women in Television and Film: https://womenintvfilm.sdsu.edu/wp-content/uploads/2018/03/2017_Its_a_Mans_Celluloid_World_Report_3.pdf

Lauzen, M., & Dozier, D. (2005a). Maintaining the double standard: Portrayals of age and gender in popular films. Sex Roles, 52(7/8), 437-446. doi:10.1007/s11199-005-3710-1

Lauzen, M., & Dozier, D. (2005b). Recognition and respect revisited: Portrayals of age and gender in prime-time television. Mass Communication & Society, 8(3), 241-256.

Signorielli, N. (2004). Aging on television: Messages relating to gender, race and occupation in prime time. Journal of Broadcasting & Electronic Media, 48(2), 279-301.

United States Census Bureau. (2017, June). Annual estimates of the resident population by sex, age, race, and Hispanic origin for the United States and states: April 1, 2010 to July 1, 2016. Retrieved from American Fact Finder: https://factfinder.census.gov/faces/tableservices/jsf/pages/productview.xhtml?src=bkmk

Weber, L. (2017, October 10). After 50, women struggle to find a foothold at work. Retrieved from Wall Street Journal: https://www.wsj.com/articles/after-50-women-struggle-to-find-a-foothold-at-work-1507608181?ns=prod/accounts-wsj




Calling BS on Facebook’s PR Ruse

15 Friday Jun 2018

Tags

Cambridge Analytica, Data, Facebook, Personally Identifiable Information, Privacy


In March 2018, in response to the Cambridge Analytica scandal, Facebook announced it would no longer integrate with third-party data providers that enable marketers to create targeted audiences on its platform. I wrote an article about this entitled Facebook's Red Herring, because that is exactly what it was: a very artful distraction, an attempt to convince consumers that Facebook's action was about protecting their privacy. But that is not what it was about.

The Cambridge Analytica scandal was about data leaving Facebook and being used in ways the survey participants never authorized. The decision to dissolve third-party data partnerships is about data that goes into Facebook to segment audiences for relevant targeting. What has received far less publicity is that Facebook has since modified its stance on third-party partnerships so that this data can still be used. The point I made in my article is that Facebook demonized third-party data providers in the press by announcing the dissolution of those partnerships, while avoiding the same public scrutiny of the real reason for its action.

Marketers can still append third-party data, which is compiled by a vendor to provide context, to a customer or prospect outside of Facebook and then "onboard" it for digital marketing. Marketers simply need to sign an agreement with an onboarding provider that includes Facebook's new terms and conditions. They can then append third-party information to customer lists and create target groups, or they can obtain prospect lists for their target groups from a third-party data provider. From there, advertisers onboard that data and upload it to Facebook via Campaign Manager. Some onboarders, such as LiveRamp, have third-party data available in their platforms, so prospect audiences can be created and pushed to Facebook without the need to purchase a prospect list containing personally identifiable information (PII) from the third-party data provider.
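
As a rough illustration of that "append and onboard" workflow, here is a minimal Python sketch. It assumes a CRM-style list with an appended third-party attribute; custom-audience uploads typically expect identifiers to be normalized and SHA-256 hashed, so the sketch shows that preparation step. The field names, segment labels and file layout are hypothetical, not any vendor's actual specification.

```python
import csv
import hashlib

def normalize_and_hash(email: str) -> str:
    """Lowercase/trim the identifier and hash it so no raw PII is uploaded."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical CRM records with an appended third-party attribute.
crm_records = [
    {"email": "Jane.Doe@example.com", "appended_segment": "frequent_traveler"},
    {"email": "sam@example.com", "appended_segment": "new_homeowner"},
]

# Keep only the records in the segment to be targeted, then hash the IDs.
audience = [
    normalize_and_hash(r["email"])
    for r in crm_records
    if r["appended_segment"] == "frequent_traveler"
]

# Write the hashed audience file that would be handed off for onboarding.
with open("audience_upload.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["hashed_email"])
    writer.writerows([h] for h in audience)
```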

Regardless of how the marketer goes about it, once data is onboarded or audiences are created in an onboarding platform, they can be activated (used for media purchases) on Facebook. Voilà: third-party data is still being used on Facebook. Facebook's move to divorce itself from third-party data did not make that data unusable; it merely added a step that many marketers already execute proficiently.

If you are unfamiliar with how consumer data onboarding works, here is a short explanation: consumer data onboarders like LiveRamp, Neustar and Oracle take offline marketing lists containing PII, such as CRM data, loyalty databases and prospecting lists, and move them into the online ecosystem by matching or linking them (via a common identifier such as an email address) to cookies and device IDs in a privacy-compliant manner. The matching is considered privacy compliant because the consumer PII is anonymized: marketers never learn which specific cookies and device IDs are associated with a given consumer profile.
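
To picture that match step, here is a toy Python sketch. The "ID graph" below is a hypothetical stand-in for an onboarder's matching database, keyed on a hashed email; the cookie and device IDs are made up. The essential point it illustrates is that the marketer gets back only anonymous online identifiers, never the reverse mapping.

```python
import hashlib

def match_key(email: str) -> str:
    # Hash the common identifier so matching happens on an anonymized key.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Offline CRM list (contains PII) -- stays on the marketer's side.
offline_list = ["jane.doe@example.com", "sam@example.com"]

# Hypothetical onboarder ID graph: hashed email -> anonymous online IDs.
id_graph = {
    match_key("jane.doe@example.com"): ["cookie_8f2a", "device_17c9"],
    match_key("pat@example.com"): ["cookie_b771"],
}

# The onboarder returns only the anonymous online IDs for matched records.
matched_ids = []
for email in offline_list:
    matched_ids.extend(id_graph.get(match_key(email), []))

print(matched_ids)  # ['cookie_8f2a', 'device_17c9']
```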

Onboarders can connect consumer PII to cookies because consumers visit websites that are part of the onboarder's network and have granted permission to share their information with third parties. One example of a website that collects consumer PII along with online attributes such as cookies and device IDs is Tripit. When you create an account on Tripit, you provide information that associates a cookie or device ID with your PII. Tripit's privacy policy, under "Cookies, Analytics and Tracking", expressly states: "…providers may also automatically collect the above information about you through the App and on other sites and services, including personally identifiable information about your online activities over time and across different websites, devices, online services, and applications when you use our App. Some third parties help us and others associate your activities across the browsers and devices you use, or that your household uses, for retargeting, cross-device advertising, analytics, and measurement purposes". Because an onboarded list includes PII, it can be matched to a cookie or device ID whenever a website with these permissions is in the onboarder's network of partner contributors.

Sorry, dear consumer: Facebook's dissolution of third-party data partnerships continues to be a red herring and does not prevent such data from being used on its platform. Furthermore, Facebook continues to collect and store first-party data (i.e., data it owns) about you that advertisers can leverage to create target audiences, and it has those rights because they are buried in the terms and conditions you consented to when your account was created.

So, while Facebook demonized third-party data in the press right after the Cambridge Analytica scandal (even though such data was completely unrelated to that scandal's dubious practices), it has not prevented its use. Frankly, I find Facebook's use of first-party data and its passive surveillance via its pixel on other websites, which results in those creepy retargeting advertisements, much more intrusive than being part of a target audience based on my demographics and other modeled assumptions.

Consumer trust is the new “oil” in today’s data economy, and it requires more than lip service. Perhaps it is time for Facebook to figure that out.


Posted by Melissa Chyba | Filed under Advertising, Media Literacy, Media Psychology, Personal Data, Psychology

≈ Leave a comment

How Fiction Becomes Fact on Social Media

16 Monday Apr 2018

Posted by Donna L. Roberts, PhD in Media Psychology, Propaganda, Psychology

≈ Comments Off on How Fiction Becomes Fact on Social Media

Tags

Media Psychology, Propaganda

By Benedict Carey, Oct. 20, 2017

Source: https://www.nytimes.com/2017/10/20/health/social-media-fake-news.html?_r=0

Hours after the Las Vegas massacre, Travis McKinney’s Facebook feed was hit with a scattershot of conspiracy theories. The police were lying. There were multiple shooters in the hotel, not just one. The sheriff was covering for casino owners to preserve their business.

The political rumors sprouted soon after, like digital weeds. The killer was anti-Trump, an “antifa” activist, said some; others made the opposite claim, that he was an alt-right terrorist. The two unsupported narratives ran into the usual stream of chatter, news and selfies.

“This stuff was coming in from all over my network of 300 to 400” friends and followers, said Mr. McKinney, 52, of Suffolk, Va., and some posts were from his inner circle.

But he knew there was only one shooter; a handgun instructor and defense contractor, he had been listening to the police scanner in Las Vegas with an app. “I jumped online and tried to counter some of this nonsense,” he said.

In the coming weeks, executives from Facebook and Twitter will appear before congressional committees to answer questions about the use of their platforms by Russian hackers and others to spread misinformation and skew elections. During the 2016 presidential campaign, Facebook sold more than $100,000 worth of ads to a Kremlin-linked company, and Google sold more than $4,500 worth to accounts thought to be connected to the Russian government.

Agents with links to the Russian government set up an endless array of fake accounts and websites and purchased a slew of advertisements on Google and Facebook, spreading dubious claims that seemed intended to sow division all along the political spectrum — “a cultural hack,” in the words of one expert.

Yet the psychology behind social media platforms — the dynamics that make them such powerful vectors of misinformation in the first place — is at least as important, experts say, especially for those who think they’re immune to being duped. For all the suspicions about social media companies’ motives and ethics, it is the interaction of the technology with our common, often subconscious psychological biases that makes so many of us vulnerable to misinformation, and this has largely escaped notice.

Skepticism of online “news” serves as a decent filter much of the time, but our innate biases allow it to be bypassed, researchers have found — especially when presented with the right kind of algorithmically selected “meme.”

At a time when political misinformation is in ready supply, and in demand, “Facebook, Google, and Twitter function as a distribution mechanism, a platform for circulating false information and helping find receptive audiences,” said Brendan Nyhan, a professor of government at Dartmouth College (and occasional contributor to The Times’s Upshot column).

For starters, said Colleen Seifert, a professor of psychology at the University of Michigan, “People have a benevolent view of Facebook, for instance, as a curator, but in fact it does have a motive of its own. What it’s actually doing is keeping your eyes on the site. It’s curating news and information that will keep you watching.”

That kind of curating acts as a fertile host for falsehoods by simultaneously engaging two predigital social-science standbys: the urban myth as “meme,” or viral idea; and individual biases, the automatic, subconscious presumptions that color belief.

The first process is largely data-driven, experts said, and built into social media algorithms. The wide circulation of bizarre, easily debunked rumors — so-called Pizzagate, for example, the canard that Hillary Clinton was running a child sex ring from a Washington-area pizza parlor — is not entirely dependent on partisan fever (though that was its origin).

For one, the common wisdom that these rumors gain circulation because most people conduct their digital lives in echo chambers or “information cocoons” is exaggerated, Dr. Nyhan said.

In a forthcoming paper, Dr. Nyhan and colleagues review the relevant research, including analyses of partisan online news sites and Nielsen data, and find the opposite. Most people are more omnivorous than presumed; they are not confined in warm bubbles containing only agreeable outrage.

But they don’t have to be for fake news to spread fast, research also suggests. Social media algorithms function at one level like evolutionary selection: Most lies and false rumors go nowhere, but the rare ones with appealing urban-myth “mutations” find psychological traction, then go viral.

There is no precise formula for such digital catnip. The point, experts said, is that the very absurdity of the Pizzagate lie could have boosted its early prominence, no matter the politics of those who shared it.

Photo Credit: Stephen Savage

“My experience is that once this stuff gets going, people just pass these stories on without even necessarily stopping to read them,” Mr. McKinney said. “They’re just participating in the conversation without stopping to look hard” at the source.

Digital social networks are “dangerously effective at identifying memes that are well adapted to surviving, and these also tend to be the rumors and conspiracy theories that are hardest to correct,” Dr. Nyhan said.

One reason is the raw pace of digital information sharing, he said: “The networks make information run so fast that it outruns fact-checkers’ ability to check it. Misinformation spreads widely before it can be downgraded in the algorithms.”

The extent to which Facebook and other platforms function as “marketers” of misinformation, similar to the way they market shoes and makeup, is contentious. In 2015, a trio of behavior scientists working at Facebook inflamed the debate in a paper published in the prominent journal Science.

The authors analyzed the news feeds of some 10 million users in the United States who posted their political views, and concluded that “individuals’ choices played a stronger role in limiting exposure” to contrary news and commentary than Facebook’s own algorithmic ranking — which gauges how interesting stories are likely to be to individual users, based on data they have provided.

Outside critics lashed the study as self-serving, while other researchers said the analysis was solid and without apparent bias.

The other dynamic that works in favor of proliferating misinformation is not embedded in the software but in the biological hardware: the cognitive biases of the human brain.

Purely from a psychological point of view, subtle individual biases are at least as important as rankings and choice when it comes to spreading bogus news or Russian hoaxes — like a false report of Muslim men in Michigan collecting welfare for multiple wives.

Merely understanding what a news report or commentary is saying requires a temporary suspension of disbelief. Mentally, the reader must temporarily accept the stated “facts” as possibly true. A cognitive connection is made automatically: Clinton-sex offender, Trump-Nazi, Muslim men-welfare.

And refuting those false claims requires a person to first mentally articulate them, reinforcing a subconscious connection that lingers far longer than people presume.

Over time, for many people, it is that false initial connection that stays the strongest, not the retractions or corrections: “Was Obama a Muslim? I seem to remember that….”

In a recent analysis of the biases that help spread misinformation, Dr. Seifert and co-authors named this and several other automatic cognitive connections that can buttress false information.

Another is repetition: Merely seeing a news headline multiple times in a news feed makes it seem more credible before it is ever read carefully, even if it’s a fake item being whipped around by friends as a joke.

And, as salespeople have known forever, people tend to value the information and judgments offered by good friends over all other sources. It’s a psychological tendency with significant consequences now that nearly two-thirds of Americans get at least some of their news from social media.

“Your social alliances affect how you weight information,” said Dr. Seifert. “We overweight information from people we know.”

The casual, social, wisecracking nature of thumbing through and participating in the digital exchanges allows these biases to operate all but unchecked, Dr. Seifert said.

Stopping to drill down and determine the true source of a foul-smelling story can be tricky, even for the motivated skeptic, and mentally it’s hard work. Ideological leanings and viewing choices are conscious, downstream factors that come into play only after automatic cognitive biases have already had their way, abetted by the algorithms and social nature of digital interactions.

“If I didn’t have direct evidence that all these theories were wrong” from the scanner, Mr. McKinney said, “I might have taken them a little more seriously.”

A version of this article appears in print on October 24, 2017, on Page D1 of the New York edition with the headline: How Fiction Becomes Fact on Social Media


When Advertisements Become Too Personal

23 Friday Feb 2018

Posted by Melissa Chyba in Advertising, Media Literacy, Media Psychology, Personal Data, Psychology

≈ 3 Comments

Tags

Advertising, Analytics, Data Use, Facebook, Marketing, Privacy, Technology

GuerraGPhoto’s/Shutterstock.com

With the proliferation of media channels over the last 20 years, advertisers have taken advantage of marketing technologies combined with data to serve more personalized advertisements to consumers. Personalization is a marketing strategy that delivers specific messages to you by combining data analysis with marketing technology that enables targeting (the ability to identify a specific person or audience). To do this, companies leverage many data sources about you, whether obtained directly from you, purchased from data brokers, or passively collected (by tracking your online behavior). There are advantages to this as a consumer, such as ad relevance, time savings and product pricing. For example, I don't like to see the media I consume littered with advertisements for golf equipment or hunting gear, since those products are of no interest to me. I also hate it when a product I have already purchased keeps showing up in my Facebook feed, because that is simply a waste of my attention; the marketer should show me something that is at least complementary to what I already bought. There is a good reason for optimizing advertising: if targeting were not available, companies would need to increase their advertising budgets every time a new media channel appeared, resulting in price increases for consumers. From an advertiser's perspective, there is no arguing with the return on investment that leveraging data for targeting provides across channels, which is why almost all companies engage in the practice. However, there are times when personalization attempts cross the line, and it recently happened to me.

Last December I had a health matter I needed to address. My doctor recommended I try a supplement that can only be bought online. After trying some samples provided by my doc, I went directly to the company's website and made the purchase. I had never viewed the company's Facebook page or seen an advertisement for the product on Facebook (i.e., I left no previous online behavior that could be tracked). One day later, a post from that same company showed up in my Facebook feed.

[Screenshot: Serenol ad]

I immediately yelled "Are you f***ing kidding me???" among other things. So, dear reader, you now know I bought a supplement called Serenol, which helps alleviate PMS symptoms – hence my use of four-letter words above (yes, it works). From my perspective this was a complete invasion of my privacy, and it feels unethical. It may also be against HIPAA laws, or it should be! In the end, what this means is that Serenol, without my permission, disclosed my health condition. It also raises the question: now that Facebook has this data on me, how will it be used going forward?

Coming from the data integration and marketing technology industry myself, I have a moderate perspective on the use of data attributes for targeted marketing. I don't want to see advertisements that are completely irrelevant to me, nor do I want to pay increased prices for goods and services, so I have some comfort with the use of my data. However, this scenario violated my personal boundaries, so I downloaded a tracker monitor and followed the data.

Ghostery provides a free mobile browser and a browser plug-in for tracking the trackers, something anyone can access for free.

[Screenshot: Ghostery]

Ghostery shows you what types of trackers are firing on any website you visit. With this tool I learned there were multiple pixels firing on Serenol's site, Facebook's being among them. The two that interested me most were the "Facebook Custom Audiences" and "Facebook Pixel" trackers. The custom audience pixel enables Serenol (or any other advertiser) to create Facebook Custom Audiences based on its website visitors.

A Facebook Custom Audience is essentially a targeting option created from an advertiser-owned customer list, allowing the advertiser to target those users on Facebook (Advertiser Help Center, 2018). The Facebook Pixel is a small piece of code for websites that allows the site owner AND Facebook to log any Facebook users who visit (Brown, Why Facebook is not telling you everything it knows about you, 2017). Either of these methods would have enabled the post I was shown from Serenol. What likely happened is that Serenol and Facebook used these tags to conduct surveillance on me without my conscious knowledge and then retargeted me, hence the offending post. Yes, this is technically legal. Why? Because I most likely agreed to this surveillance in the terms of service and privacy policies of each site. Also, this method of targeting does not tell Serenol who I am on Facebook; only Facebook knows. However, Facebook now has data that associates me with PMS!
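
Conceptually, what a tool like Ghostery does on a page like that is straightforward: it inspects the resources the page loads and flags the ones served from known tracking domains. Here is a toy Python sketch of the idea, using the standard-library HTML parser. The blocklist is a tiny illustrative sample, not Ghostery's actual database, and the page source is a made-up example that loads the Facebook Pixel script.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Illustrative sample of tracking domains (not a real, complete blocklist).
TRACKER_DOMAINS = {"connect.facebook.net", "www.googletagmanager.com"}

class TrackerScanner(HTMLParser):
    """Collects script/img/iframe sources whose host is a known tracker."""
    def __init__(self):
        super().__init__()
        self.found = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            src = dict(attrs).get("src") or ""
            host = urlparse(src).netloc
            if host in TRACKER_DOMAINS:
                self.found.append(host)

# Hypothetical page source that loads the Facebook Pixel loader script.
page_html = (
    '<html><body>'
    '<script src="https://connect.facebook.net/en_US/fbevents.js"></script>'
    '</body></html>'
)

scanner = TrackerScanner()
scanner.feed(page_html)
print(scanner.found)  # ['connect.facebook.net']
```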

Facebook collects information on things you do, such as content you share, groups you are part of, things someone else may share about you (regardless of whether you granted permission), payment information, the internet-connected devices you and your family own, and information from third-party partners including advertisers (Data Policy, 2016). It can monitor your mouse movements, track the amount of time you spend on anything, and identify the subject of your photos via machine learning algorithms. When you upload photos, Facebook scans each image and detects information about it, such as whether it contains humans, animals or inanimate objects, and suggests people you should tag in the picture (Brown, The amount of data Facebook collects from your photos will terrify you, 2017). The company states directly in its data policy that it uses the information it collects to improve its advertising (this means targeting) and then to measure that advertising's effectiveness (Data Policy, 2016). While Facebook's data policy says it does not share personally identifiable information (PII), it does leverage non-personally identifying demographic information that advertisers can use for targeting, provided they adhere to its advertiser guidelines (Data Policy, 2016). This policy applies to all Facebook companies, including WhatsApp, Facebook Messenger and Instagram. So that private message you are sending on Messenger isn't as private as you think; Facebook is collecting data on that content. With Facebook owning four of the top five social media applications, isn't this a little creepy?

The next obvious question is: how can this data be used for nefarious purposes? Facebook's advertiser policies state that an advertiser can't use targeting options to discriminate against people or engage in predatory advertising practices (Advertising Policies, n.d.). While Facebook does withhold some demographics from certain types of advertising, like housing, other targeting practices remain questionable. For example, last year an article in Ad Age called out Facebook, LinkedIn and Google, all of which allow employment advertising to be targeted using age as a criterion. Facebook has defended the practice despite criticism that it contributes to ageism in the workforce and that such age-based screening is illegal in the actual hiring practices of public companies (Sloane, 2017).

So, can Facebook use data about my PMS for targeting? Will it allow potential employers to use this data? What about health insurance companies? This is a slippery slope indeed. The answer is yes and no. Facebook recently updated its policies to prevent advertisers from using targeting attributes such as medical conditions (Perez, 2018). This means Facebook will not provide demographic selections in its targeting tools that let advertisers include or exclude users based on medical conditions. That kind of targeting relies on third-party data, meaning the advertiser uses data provided by Facebook or other data aggregators to create an audience. However, I did not find anything that prevents companies like Serenol from using first-party data to find me on Facebook. Furthermore, when I went to the Serenol site on February 21, 2018 (after the Facebook policy update), Ghostery showed that the Facebook Pixel and Facebook for Developers tags, along with other pixels and tags from The Trade Desk, Adobe, Google, etc., were all live on the site.

This month's Harvard Business Review includes an article about how consumers react to personalization. The authors ran a series of experiments to understand what causes consumers to object to targeting and found that we don't always behave logically when it comes to privacy. People will often share details with complete strangers while keeping the same information secret from those closest to them. The nature of the information also matters: data on sex, health and finances is much more sensitive. So does the way data changes hands (the information flow). The authors found that sharing data with a company directly (first-party sharing) generally feels fine, because it is necessary to purchase something or engage with the company. However, when that information is shared without our knowledge (third-party sharing), consumers react much as they would if a friend shared a secret or talked behind their back. While third-party sharing of data is legal, the study showed that scenarios in which companies obtain information outside the website a person interacted with, or infer information about someone from analytics, elicit a negative reaction from consumers. The study also found that when consumers believe their data has been shared unacceptably, purchase interest declines substantially (John, Kim, & Barasz, 2018). The authors' recommendations for mitigating consumer backlash include staying away from sensitive subjects, maintaining transparency, and giving consumers a choice and the ability to opt out.

I reached out to Michael Becker, Managing Partner at Identity Praxis, for his point of view on the subject. Michael is an entrepreneur, academic and industry evangelist who has been engaging and supporting the personal identity economy for over a decade. "People are becoming aware that their personal information has value and are awakening to the fact that its misuse is not just annoying, but can lead to material and lasting emotional, economic, and physical harm. They are awakening to the fact that they can enact control over their data. Consumers are starting to use password managers, identity anonymization tools, and tracker management tools [like Ghostery]; for instance, 38% of US adults have adopted ad blockers and this is just the beginning. Executives should take heed that a new class of software and services, personal information management solutions, is coming to the market. These solutions, alongside new regulations (like the EU GDPR), give individuals, at scale, the power to determine what information about them is shared, who has access to it, when it can be used, and on what terms. In other words, the core terms of business may change in the very near future from people having to agree to the business's terms of service to businesses having to agree to the individual's terms of access."

In the United States, the approach to regulating personal data collection and use is that if an action by a business or technology isn't expressly forbidden, then companies can do it, regardless of whether it is ethical. Unfortunately, regulations do not necessarily keep up with the pace of innovation in the world of data collection. In Europe, the approach to data privacy is the opposite: unless a personal data collection practice and its use is explicitly permitted, companies CANNOT do it. There are some actions you can take to manage passive data collection; the list below is not meant to be exhaustive:

  • Use the Brave browser: it blocks ads and trackers on the sites you visit. Brave claims this speeds up downloads, saves money on mobile data (since ads are not loaded) and protects your information.
  • Ghostery lets you choose which trackers are allowed on each site you visit, or block trackers entirely.
  • Add a script-blocking plug-in to your browser such as NoScript. NoScript maintains a whitelist of trustworthy websites and lets you choose which sites are allowed to run scripts.
  • Review which permissions on your mobile device allow your data to be tracked, and limit them. Do you really want Apple sharing your contact list and calendar with other applications? Do all applications need access to your fitness and activity data? You can find helpful instructions for iPhone here or for Android here.
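For readers curious about what tools like Brave, Ghostery and NoScript are doing conceptually, here is a minimal sketch of blocklist-based tracker filtering. The domains and the allowlist entries are made-up placeholders; real blockers ship curated lists and far more sophisticated matching rules.

```python
# Conceptual sketch of what tracker blockers do: check each outgoing request
# against a blocklist and drop matches, unless the user has allow-listed the
# site. All domains below are illustrative placeholders, not a real blocklist.
from urllib.parse import urlparse

TRACKER_BLOCKLIST = {"tracker.example", "ads.example", "analytics.example"}
USER_ALLOWLIST = {"news.example"}  # sites where the user chose to allow everything

def is_blocked(request_url: str, page_host: str) -> bool:
    """True if the request should be dropped: the requested host (or a parent
    domain) is on the blocklist and the page has not been allow-listed."""
    if page_host in USER_ALLOWLIST:
        return False
    host = urlparse(request_url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRACKER_BLOCKLIST)

# Example usage
print(is_blocked("https://cdn.tracker.example/pixel.gif", "news.example"))  # False: page allow-listed
print(is_blocked("https://cdn.tracker.example/pixel.gif", "blog.example"))  # True: tracker domain
print(is_blocked("https://images.blog.example/photo.jpg", "blog.example"))  # False: not a tracker
```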

Regardless of what is legal or illegal, comfort levels with how our personal data is used vary by individual. In that sense there is a similarity to the 1960s debate over what constituted obscenity: when we find a use of our personal data offensive, we will likely say, “I’ll know it when I see it.”

References:

Advertiser Help Center. (2018). Retrieved from Facebook Business: https://www.facebook.com/business/help/610516375684216

Advertising Policies. (n.d.). Retrieved February 20, 2018, from Facebook: https://www.facebook.com/policies/ads/

Brown, A. (2017, January 6). The amount of data Facebook collects from your photos will terrify you. Retrieved February 20, 2018, from Express: https://www.express.co.uk/life-style/science-technology/751009/Facebook-Scan-Photos-Data-Collection

Brown, A. (2017, January 2). Why Facebook is not telling you everything it knows about you. Retrieved February 2018, from Express: https://www.express.co.uk/life-style/science-technology/748956/Facebook-Login-How-Much-Data-Know

Data Policy. (2016, September 29). Retrieved from Facebook: https://www.facebook.com/full_data_use_policy

John, L. K., Kim, T., & Barasz, K. (2018, February). Ads that don’t overstep. Harvard Business Review, pp. 62-69.

Perez, S. (2018, February 8). Facebook updates its ad policies and tools to protect against discriminatory practices. Retrieved from Techcrunch: https://techcrunch.com/2017/02/08/facebook-updates-its-ad-policies-and-tools-to-protect-against-discriminatory-practices/

Sloane, G. (2017, December 21). Facebook defends targeting job ads based on age. Retrieved from Ad Age: http://adage.com/article/digital/facebook-defends-targeting-job-ads-based-age/311726/



A new study shows that students learn way more effectively from print textbooks than screens

19 Monday Feb 2018

Posted by Donna L. Roberts, PhD in Media Psychology, Psychology

≈ Comments Off on A new study shows that students learn way more effectively from print textbooks than screens

Tags

Books, Digital Natives, Monitors

Students told researchers they preferred and performed better when reading on screens. But their actual performance tended to suffer.

Source: A new study shows that students learn way more effectively from print textbooks than screens

  • Patricia A. Alexander and Lauren M. Singer, The Conversation

Today’s students see themselves as digital natives, the first generation to grow up surrounded by technology like smartphones, tablets and e-readers.

Teachers, parents and policymakers certainly acknowledge the growing influence of technology and have responded in kind. We’ve seen more investment in classroom technologies, with students now equipped with school-issued iPads and access to e-textbooks.

In 2009, California passed a law requiring that all college textbooks be available in electronic form by 2020; in 2011, Florida lawmakers passed legislation requiring public schools to convert their textbooks to digital versions.

Given this trend, teachers, students, parents and policymakers might assume that students’ familiarity and preference for technology translates into better learning outcomes. But we’ve found that’s not necessarily true.

As researchers in learning and text comprehension, our recent work has focused on the differences between reading print and digital media. While new forms of classroom technology like digital textbooks are more accessible and portable, it would be wrong to assume that students will automatically be better served by digital reading simply because they prefer it.

Speed – at a cost

Our work has revealed a significant discrepancy. Students said they preferred and performed better when reading on screens. But their actual performance tended to suffer.

For example, from our review of research done since 1992, we found that students were able to better comprehend information in print for texts that were more than a page in length. This appears to be related to the disruptive effect that scrolling has on comprehension. We were also surprised to learn that few researchers tested different levels of comprehension or documented reading time in their studies of printed and digital texts.

To explore these patterns further, we conducted three studies that explored college students’ ability to comprehend information on paper and from screens.

Students first rated their medium preferences. After reading two passages, one online and one in print, these students then completed three tasks: Describe the main idea of the texts, list key points covered in the readings and provide any other relevant content they could recall. When they were done, we asked them to judge their comprehension performance.

Across the studies, the texts differed in length, and we collected varying data (e.g., reading time). Nonetheless, some key findings emerged that shed new light on the differences between reading printed and digital content:

  • Students overwhelmingly preferred to read digitally.
  • Reading was significantly faster online than in print.
  • Students judged their comprehension as better online than in print.
  • Paradoxically, overall comprehension was better for print versus digital reading.
  • The medium didn’t matter for general questions (like understanding the main idea of the text).
  • But when it came to specific questions, comprehension was significantly better when participants read printed texts.


Placing print in perspective

From these findings, there are some lessons that can be conveyed to policymakers, teachers, parents and students about print’s place in an increasingly digital world.

1. Consider the purpose

We all read for many reasons. Sometimes we’re looking for an answer to a very specific question. Other times, we want to browse a newspaper for today’s headlines.

As we’re about to pick up an article or text in a printed or digital format, we should keep in mind why we’re reading. There’s likely to be a difference in which medium works best for which purpose.

In other words, there’s no “one medium fits all” approach.

2. Analyze the task

One of the most consistent findings from our research is that, for some tasks, medium doesn’t seem to matter. If all students are being asked to do is to understand and remember the big idea or gist of what they’re reading, there’s no benefit in selecting one medium over another.

But when the reading assignment demands more engagement or deeper comprehension, students may be better off reading print. Teachers could make students aware that their ability to comprehend the assignment may be influenced by the medium they choose. This awareness could lessen the discrepancy we witnessed in students’ judgments of their performance vis-à-vis how they actually performed.

Elementary school children use electronic tablets on the first day of class in the new school year in Nice, September 3, 2013. REUTERS/Eric Gaillard

3. Slow it down

In our third experiment, we were able to create meaningful profiles of college students based on the way they read and comprehended from printed and digital texts.

Among those profiles, we found a select group of undergraduates who actually comprehended better when they moved from print to digital. What distinguished this atypical group was that they actually read slower when the text was on the computer than when it was in a book. In other words, they didn’t take the ease of engaging with the digital text for granted. Using this select group as a model, students could possibly be taught or directed to fight the tendency to glide through online texts.

4. Something that can’t be measured

There may be economic and environmental reasons to go paperless. But there’s clearly something important that would be lost with print’s demise.

In our academic lives, we have books and articles that we regularly return to. The dog-eared pages of these treasured readings contain lines of text etched with questions or reflections. It’s difficult to imagine a similar level of engagement with a digital text. There should probably always be a place for print in students’ academic lives – no matter how technologically savvy they become.

Of course, we realize that the march toward online reading will continue unabated. And we don’t want to downplay the many conveniences of online texts, which include breadth and speed of access.

Rather, our goal is simply to remind today’s digital natives – and those who shape their educational experiences – that there are significant costs and consequences to discounting the printed word’s value for learning and academic development.


How Smartphones Hijack Our Minds

24 Tuesday Oct 2017

Posted by Donna L. Roberts, PhD in Media Effects, Media Psychology, Psychology

≈ Comments Off on How Smartphones Hijack Our Minds

Tags

Attention, Cognitive Psychology, Intelligence, Smartphone, Technology

Research suggests that as the brain grows dependent on phone technology, the intellect weakens.

Source: How Smartphones Hijack Our Minds

By Nicholas Carr, Oct. 6, 2017

So you bought that new iPhone. If you are like the typical owner, you’ll be pulling your phone out and using it some 80 times a day, according to data Apple collects. That means you’ll be consulting the glossy little rectangle nearly 30,000 times over the coming year. Your new phone, like your old one, will become your constant companion and trusty factotum—your teacher, secretary, confessor, guru. The two of you will be inseparable.

The smartphone is unique in the annals of personal technology. We keep the gadget within reach more or less around the clock, and we use it in countless ways, consulting its apps and checking its messages and heeding its alerts scores of times a day. The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.

We love our phones for good reasons. It’s hard to imagine another product that has provided so many useful functions in such a handy form. But while our phones offer convenience and diversion, they also breed anxiety. Their extraordinary usefulness gives them an unprecedented hold on our attention and vast influence over our thinking and behavior. So what happens to our minds when we allow a single tool such dominion over our perception and cognition?

Scientists have begun exploring that question—and what they’re discovering is both fascinating and troubling. Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.


Adrian Ward, a cognitive psychologist and marketing professor at the University of Texas at Austin, has been studying the way smartphones and the internet affect our thoughts and judgments for a decade. In his own work, as well as that of others, he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.

A 2015 Journal of Experimental Psychology study, involving 166 subjects, found that when people’s phones beep or buzz while they’re in the middle of a challenging task, their focus wavers, and their work gets sloppier—whether they check the phone or not. Another 2015 study, which involved 41 iPhone users and appeared in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.


The earlier research didn’t explain whether and how smartphones differ from the many other sources of distraction that crowd our lives. Dr. Ward suspected that our attachment to our phones has grown so intense that their mere presence might diminish our intelligence. Two years ago, he and three colleagues (Kristen Duke and Ayelet Gneezy from the University of California, San Diego, and Disney Research behavioral scientist Maarten Bos) began an ingenious experiment to test his hunch.

The researchers recruited 520 undergraduate students at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available cognitive capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.


The results were striking. In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.

In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.

A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.

In an April article in the Journal of the Association for Consumer Research, Dr. Ward and his colleagues wrote that the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.” Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking. The fact that most of us now habitually keep our phones “nearby and in sight,” the researchers noted, only magnifies the mental toll.

Dr. Ward’s findings are consistent with other recently published research. In a similar but smaller 2014 study (involving 47 subjects) in the journal Social Psychology, psychologists at the University of Southern Maine found that people who had their phones in view, albeit turned off, during two demanding tests of attention and cognition made significantly more errors than did a control group whose phones remained out of sight. (The two groups performed about the same on a set of easier tests.)

In another study, published in Applied Cognitive Psychology in April, researchers examined how smartphones affected learning in a lecture class with 160 students at the University of Arkansas at Monticello. They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly. A study of 91 secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.

It isn’t just our reasoning that takes a hit when phones are around. Social skills and relationships seem to suffer as well. Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.


In a study conducted at the University of Essex in the U.K., 142 participants were divided into pairs and asked to converse in private for 10 minutes. Half talked with a phone in the room, while half had no phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in 2013 in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.” The downsides were strongest when “a personally meaningful topic” was being discussed. The experiment’s results were validated in a subsequent study by Virginia Tech researchers, published in 2016 in the journal Environment and Behavior.

The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.

Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking—that has, in the psychological jargon, “salience.” Media and communications devices, from telephones to TV sets, have always tapped into this instinct. Whether turned on or switched off, they promise an unending supply of information and experiences. By design, they grab and hold our attention in ways natural objects never could.

But even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings—which it always is. Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.

The irony of the smartphone is that the qualities we find most appealing—its constant connection to the net, its multiplicity of apps, its responsiveness, its portability—are the very ones that give it such sway over our minds. Phone makers like Apple and Samsung and app writers like Facebook and Google design their products to consume as much of our attention as possible during every one of our waking hours, and we thank them by buying millions of the gadgets and downloading billions of the apps every year.

A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it isn’t that simple. The way a media device is designed and used exerts at least as much influence over our minds as does the information that the device unlocks.


As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores. In a seminal 2011 study published in Science, a team of researchers—led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner—had a group of volunteers read 40 brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be immediately erased.

Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it. The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”

Now that our phones have made it so easy to gather information online, our brains are likely offloading even more of the work of remembering to technology. If the only thing at stake were memories of trivial facts, that might not matter. But, as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.” Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.


This story has a twist. It turns out that we aren’t very good at distinguishing the knowledge we keep in our heads from the information we find on our phones or computers. As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”

That insight sheds light on our society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media by Russian agents and other bad actors. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.

Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains. When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning. Upgrading our gadgets won’t solve the problem. We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.

Mr. Carr is the author of “The Shallows” and “Utopia Is Creepy,” among other books.


Las Vegas Shooting News Coverage – A Perspective

06 Friday Oct 2017

Posted by Melissa Chyba in Advertising, Media Effects, Media Psychology, Psychology

≈ 1 Comment

Tags

Advertising, Cookies, Depression, Facebook, Marketing, Media, news coverage


Last night I received a text from my mom wondering if we should attend the Bruno Mars concert coming up in November. I bought tickets for her birthday this year and we have been excited about attending. What brought on this sudden second-guessing? The news coverage of the mass shooting in Las Vegas, of course. What happened in Vegas was truly horrible, and many people are now second-guessing how safe it is to attend concerts and other events. While I scrolled through my news feed and perused Facebook, my friends wondered in their posts how such a horrific event could happen. As expected, proponents of tighter gun laws have been in the news, which has started a lively debate in my Facebook feed. This post is not about my political views on gun laws, nor is it intended to downplay what has happened; my heart truly goes out to everyone affected. My aim is to offer some food for thought as we all absorb the events and the news coverage.

The likelihood of being killed in a firearm homicide is relatively low compared with other potential causes of death. In 2014 there were 11,008 firearm homicide deaths in the U.S., which translates to about 3.5 people out of 100,000, or a 0.0035% chance (CDC, 2017); the arithmetic is spelled out in the short calculation after the list below. Firearm homicides are dwarfed by the top 10 causes of death in 2016, which are as follows:

  • Heart disease: 633,842
  • Cancer: 595,930
  • Chronic lower respiratory diseases: 155,041
  • Accidents (unintentional injuries): 146,571
  • Stroke (cerebrovascular diseases): 140,323
  • Alzheimer’s disease: 110,561
  • Diabetes: 79,535
  • Influenza and pneumonia: 57,062
  • Nephritis, nephrotic syndrome, and nephrosis: 49,959
  • Intentional self-harm (suicide): 44,193 (CDC, 2017)
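For transparency, here is the arithmetic behind the 3.5-per-100,000 figure quoted above. The 2014 U.S. population of roughly 318.6 million is an outside assumption added for the calculation, not a number taken from the post or its CDC citation.

```python
# The arithmetic behind "3.5 people out of 100,000 or a 0.0035% chance".
# The 2014 U.S. population (~318.6 million) is an assumption added here.
firearm_homicides_2014 = 11_008
us_population_2014 = 318_600_000  # approximate

rate_per_100k = firearm_homicides_2014 / us_population_2014 * 100_000
percent_chance = firearm_homicides_2014 / us_population_2014 * 100

print(f"{rate_per_100k:.1f} per 100,000")  # -> 3.5 per 100,000
print(f"{percent_chance:.4f}% chance")     # -> 0.0035% chance
```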

Looking at the numbers, we should all be more concerned about the lifestyles and choices that directly contribute to heart disease and cancer. So why aren’t stories about the leading causes of death receiving the same amount of media coverage? Because the media’s number one job is to create audiences, and anything sensational or out of the ordinary does the best job of attracting attention (it is like trying to pass a car crash on the freeway and not look). However, creating audiences is much more hyper-targeted than it used to be. News media companies collect personally identifiable information on our viewing and reading habits through cookies, device IDs and set-top box data, to name a few sources. That data is then used to sell advertisers the best target audiences across their platforms. For example, Apple’s algorithms know I have recently been following hurricanes, since I was in Florida right before Irma. On October 3rd, in the “For You” section, there was an article from the Miami Herald about the tropical depression moving towards the Caribbean. Right below that article, an advertisement from Wells Fargo (my bank) was strategically placed. Wells Fargo has my personal information and so does Apple, so they can use an intermediary to anonymize and match my data between the two companies while remaining privacy compliant. From there, my anonymized information is used to place Wells Fargo’s advertisement in my Apple news feed. Because the targeting is more precise, Wells Fargo in theory sees a lift in ROI and Apple commands higher advertising rates.
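As a rough illustration of how two companies can match customers through an intermediary without handing each other raw personal data, here is a sketch of salted-hash matching. The salt, the email addresses, and the suggestion that this is exactly what Apple and Wells Fargo do are all assumptions for the example; real identity-matching and “data clean room” services are considerably more elaborate.

```python
# Hypothetical sketch of privacy-compliant audience matching through an
# intermediary: both companies hash the same identifier (say, an email
# address) with an agreed salt and compare only the hashes, so neither side
# hands over raw personal data. Salt and addresses are illustrative only.
import hashlib

SHARED_SALT = b"agreed-upon-salt"

def hashed_id(email: str) -> str:
    """Normalize an identifier and hash it so it can be matched, not read."""
    normalized = email.strip().lower().encode("utf-8")
    return hashlib.sha256(SHARED_SALT + normalized).hexdigest()

bank_customers = {hashed_id(e) for e in ["reader@example.com", "someone@example.com"]}
news_app_users = {hashed_id(e) for e in ["Reader@Example.com", "another@example.com"]}

# The intersection is the matched audience an advertiser could target in-feed.
matched_audience = bank_customers & news_app_users
print(len(matched_audience))  # -> 1
```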

While the media uses sensational headlines and stories to gain more of our attention, the bad news affects our stress levels. One study found that news coverage of the 2007 Virginia Tech shootings increased “acute stress” in students at other universities who followed the events in the news media; the more coverage of the subject students consumed, the more likely they were to report more severe stress symptoms (Fallahi & Lesik, 2009). Constant news negativity can exacerbate our own feelings of sadness and anxiety, as well as the severity with which we perceive our own situation (Davey, 2012). A big daily dose of negative news can certainly send me into a spiral of constantly checking my mobile device for updates and an overall pessimistic view of the day.

Does this mean we should all turn off the news and stop paying attention to what is going on in the world? Of course not; the news media plays a positive role in society as well. We just need to remember that the news media’s first priority is to create audiences, and react accordingly.

References:

CDC. (2017, March 17). Centers for Disease Control and Prevention Assault or Homicide. Retrieved October 6, 2017, from National Center for Health Statistics: https://www.cdc.gov/nchs/fastats/homicide.htm

CDC. (2017, March 17). National Center for Health Statistics Leading Causes of Death. Retrieved October 2017, from Centers for Disease Control and Prevention: https://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm

Davey, G. (2012). Psychology Today. Retrieved from https://www.psychologytoday.com/blog/why-we-worry/201206/the-psychological-effects-tv-news

Fallahi, C. R., & Lesik, S. A. (2009). The effects of vicarious exposure to the recent massacre at Virginia Tech. Psychological Trauma: Theory, Research, Practice and Policy, 1(3), 220-230. Retrieved from http://dx.doi.org/10.1037/a0015052



What Horror Movies Do to Your Brain

25 Wednesday Jan 2017

Posted by Donna L. Roberts, PhD in Media Effects, Media Psychology, Psychology

≈ Leave a comment

Tags

Arousal Transfer Theory, Horror Movies, Physiological Psychology

Source: http://www.psychology-spot.com/2016/04/horror-movies-affect-brain.html#.WGUGYA86d_M.facebook

When we watch a movie, we know what we are seeing isn’t real. Yet sometimes the scenes are realistic enough to keep us in suspense throughout the film, and we seem to experience the protagonist’s feelings first hand.

The movie is fiction, but the emotions we feel and the reactions they trigger are real. It is undoubtedly a powerful effect, one now being studied by a young field called neurocinema, dedicated to studying the influence of movies on our brains.

Do you remember the last time you jumped in your chair while watching a horror movie? Let’s look at exactly what happened in your brain and how your body reacted.

Scenes of terror directly activate the primitive brain

Usually, while watching a movie, we “unplug” the motor areas of the brain because they are not needed. But some scenes have an impact strong enough to break through the inhibition of the motor system and make us react.

We jump in our chairs or cry out because the scene overrides this brain block and unleashes our instincts. The content is so emotionally powerful that we react immediately to protect ourselves or to alert others to the danger. By shouting, we warn those around us, or even the characters in the movie, that there is a danger and they must save themselves. It is an atavistic reaction.

All of this happens in a matter of milliseconds; we have no time to process what we are seeing or to modulate our reaction. Essentially, we react this way because in those few milliseconds our brain is not aware that it is just a movie and that we are safe.

If you think about it, this reaction is not surprising, since our brain is programmed to assume that everything we see is real. It is therefore very difficult to convince the most primitive parts of the brain, which are the ones activated in these cases, that what we are seeing is fiction. As a result, the body reacts immediately.

In fact, although the cases are isolated, some people have suffered post-traumatic stress as a result of watching a movie, a problem more common in children, for whom it is harder to distinguish the boundary between reality and fantasy.

In adults, this disorder may be caused by excessive identification with the characters. In horror movies the viewer often knows as little as the characters do, which makes it much easier to identify with them. When this identification occurs, the experience can leave deep psychological scars, almost as deep as those left by a real event. But that’s not all.

3 changes that occur in our body when we watch a horror movie

The reaction to what we see on the screen is not limited to the brain; it extends throughout the body. The brain sends an alarm signal that activates the autonomic nervous system, increasing the production of cortisol and adrenaline, two stress hormones that produce changes at the physiological level.

1. Heart rate increases. A study conducted on a group of young people found that watching a horror movie raises the heart rate by about 14 beats per minute. A significant increase in blood pressure was also found. In addition, researchers observed an increase in white blood cells and a higher hematocrit concentration, as if the body were defending itself against an intruder.

2. You start to sweat. Skin conductance is one of the first indicators of emotional arousal; in other words, when you are afraid, you sweat. Researchers at the University of Wollongong analyzed the responses of a group of people to violent and horror movies and noticed that the more empathic viewers tended to sweat more while watching, showing no sign of getting used to the films.

3. Muscles contract. Once the primitive brain has detected a threat and sounded the alarm, it is difficult to stop it, especially if the horror scenes follow one after another and are accompanied by a chilling soundtrack. Researchers at the University of Amsterdam found that in these movies the music generates what is known as an “alarm reaction”, a simultaneous response of mind and body to a sudden, unexpected stimulus that leads to contraction of the muscles of the arms and legs. That is why we always tense our muscles when watching a horror movie.

But then, why do we continue to watch horror movies?

At this point it is clear that, for most of us, watching a horror movie is not exactly pleasant. Yet despite everything, many of us continue to fall under the “charm” of these dark characters. Why?

Arousal transfer theory suggests that the negative feelings these movies create intensify the positive feelings we experience when the hero finally triumphs. In short, we like these movies because watching them is like riding an emotional roller coaster.

Another theory holds that horror and violent movies help us manage our own fear. In practice, these films would have a cathartic effect, helping us work through our oldest and most hidden fears.

Or it may simply be morbid curiosity, fostered by our innate need to keep ourselves safe from the dangers that threaten us.

Sources:

Bos, M., et al. (2013). Psychophysiological response patterns to affective film stimuli. PLoS ONE, 8(4).

Mian, R., et al. (2003). Observing a fictitious stressful event: Haematological changes, including circulating leukocyte activation. Stress: The International Journal on the Biology of Stress, 6(1), 41-47.

Barry, R. J., & Bruggemann, J. M. (2002). Eysenck’s P as a modulator of affective and electrodermal responses to violent and comic film. Personality and Individual Differences, 32(6), 1029-1048.

Jennifer Delgado Suárez

Psychologist by profession and passion, dedicated to stringing words together.
