We still call them “phones,” but they are seldom used for talking. They have become like a substitute for memory—and other brain functions. Is that good for us in the long run?
Source: Smart Phone, Lazy Brain
You probably know the Google Effect: the first rigorous finding in the booming research into how digital technology affects cognition. It’s also known as digital amnesia, and it works like this: When we know where to find a piece of information, and when it takes little effort to do so, we are less likely to remember that information. First discovered by psychologist Betsy Sparrow of Columbia University and her colleagues, the Google Effect causes our brains to take a pass on retaining or recalling facts such as “an ostrich’s eye is bigger than its brain” (an example Sparrow used) when we know they are only a few keystrokes away.
“Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally,” Sparrow explained in her 2011 paper. “When we need it, we will look it up.” Storing information requires mental effort—that’s why we study before exams and cram for presentations—so unless we feel the need to encode something into a memory, we don’t try. Result: Our recollection of ostrich anatomy, and much else, dissipates like foam on a cappuccino.
It’s tempting to leap from the Google Effect to dystopian visions of empty-headed dolts who can’t remember even the route home (thanks a lot, GPS), let alone key events of history (cue Santayana’s hypothesis that those who can’t remember history are doomed to repeat it). But while the short-term effects of digital tech on what we remember and how we think are real, the long-term consequences are unknown; the technology is simply too new for scientists to have figured it out.
People spend an average of 3 to 5 minutes at their computer working on the task at hand before switching to Facebook or other enticing websites.
Before we hit the panic button, it’s worth remembering that we have been here before. Plato, for instance, bemoaned the spread of writing, warning that it would decimate people’s ability to remember (why make the effort to encode information in your cortex when you can just consult your handy papyrus?). Writing did not trigger a cognitive apocalypse, but scientists are nonetheless finding more and more evidence that smartphones and internet use are already affecting cognition.
The Google Effect? We’ve probably all experienced it. “Sometimes I spend a few minutes trying hard to remember some fact”—like whether a famous person is alive or dead, or what actor was in a particular movie—“and if I can retrieve it from my memory, it’s there when I try to remember it two, five, seven days later,” said psychologist Larry Rosen, professor emeritus at California State University, Dominguez Hills, who researches the cognitive effects of digital technology. “But if I look it up, I forget it very quickly. If you can ask your device any question, you do ask your device any question” rather than trying to remember the answer or doing the mental gymnastics to, say, convert Celsius into Fahrenheit.
“Doing that is profoundly impactful,” Rosen said. “It affects your memory as well as your strategy for retrieving memories.” That’s because a memory’s physical embodiment in the brain is essentially a long daisy chain of neurons, adding up to something like “architect I.M. Pei is alive” or “swirling water is called an eddy.” Whenever we mentally march down that chain, we strengthen the synapses connecting one neuron to the next. The very act of retrieving a memory therefore makes it easier to recall the next time around. If we succumb to the LMGTFY (“let me Google that for you”) bait, which smartphones have made ridiculously easy, that doesn’t happen.
To which the digital native might say, so what? I can still Google whatever I need, whenever I need it. Unfortunately, when facts are no longer accessible to our conscious mind, but only look-up-able, creativity suffers. New ideas come from novel combinations of disparate, seemingly unrelated elements. Just as having many kinds of Legos lets you build more imaginative structures, the more elements—facts—knocking around in your brain the more possible combinations there are, and the more chances for a creative idea or invention. Off-loading more and more knowledge to the internet therefore threatens the very foundations of creativity.
Besides letting us outsource memory, smartphones let us avoid activities that many people find difficult, boring, or even painful: daydreaming, introspecting, thinking through problems. Those are all so aversive, it seems, that in a 2014 experiment nearly half of the participants whose smartphones were briefly taken away preferred receiving electric shocks to being alone with their thoughts. Yet surely our mental lives are the poorer every time we check Facebook or play Candy Crush instead of daydreaming.
But why shouldn’t we open the app? The appeal is undeniable. We have each downloaded an average of nearly 30 mobile apps, and we spend 87 hours per month browsing the internet on our smartphones, according to digital marketing company Smart Insights. As a result, distractions are just a click away—and we’re really, really bad at resisting distractions. Our brains evolved to love novelty (perhaps human ancestors who were attracted to new environments won the “survival of the fittest” battle), so we flit among different apps and websites.
As a result, people spend an average of just three to five minutes at their computer working on the task at hand before switching to Facebook or another enticing website or, with phone beside them, a mobile app. The most pernicious effect of the frenetic, compulsive task switching that smartphones facilitate is to impede the achievement of goals, even small everyday ones. “You can’t reach any complex goal in three minutes,” Rosen said. “There have always been distractions, but while giving in used to require effort, like getting up and making a sandwich, now the distraction is right there on your screen.”
The mere existence of distractions is harmful because resisting distractions that we see out of the corner of our eye (that Twitter app sitting right there on our iPhone screen) takes effort. Using fMRI to measure brain activity, neuroscientist Adam Gazzaley of the University of California, San Francisco, found that when people try to ignore distractions it requires significant mental resources. Signals from the prefrontal cortex race down to the visual cortex, suppressing neuronal activity and thereby filtering out what the brain’s higher-order cognitive regions have deemed irrelevant. So far, so good.
The problem is that the same prefrontal regions are also required for judgment, attention, problem solving, weighing options, and working memory, all of which are required to accomplish a goal. Our brains have limited capacity to do all that. If the prefrontal cortex is mightily resisting distractions, it isn’t hunkering down to finish the term paper, monthly progress report, sales projections, or other goal it’s supposed to be working toward. “We are all cruising along on a superhighway of interference” produced by the ubiquity of digital technology, Gazzaley and Rosen wrote in their 2016 book The Distracted Mind. That impedes our ability to accomplish everyday goals, to say nothing of the grander ones that are built on the smaller ones.
The constant competition for our attention from all the goodies on our phone and other screens means that we engage in what a Microsoft scientist called “continuous partial attention.” We just don’t get our minds deeply into any one task or topic. Will that have consequences for how intelligent, creative, clever, and thoughtful we are? “It’s too soon to know,” Rosen said, “but there is a big experiment going on, and we are the lab rats.”
Tech Invasion: LMGTFY
“Let me Google that for you” may be some of the most damaging words for our brains. Psychologists have theorized that the “Google Effect” weakens our memories merely because we know we can look something up, which means we don’t keep pounding away at the pathways that strengthen memory. Meanwhile, research suggests that relying on GPS weakens our age-old ability to navigate our surroundings. And to top it all off, the novel info constantly popping up on our phones means that, according to Deloitte, people in the US check their phones an average of 46 times per day—which is more than a little disruptive.
Sharon Begley is a senior science writer with The Boston Globe Media Group, author of Train Your Mind, Change Your Brain, and coauthor with Richard Davidson of The Emotional Life of Your Brain. She writes a regular column for Mindful magazine called Brain Science.
Do we know instinctively what kind of activities are conducive to lasting happiness? If so, why don’t more of us do them more often? By Christian Jarrett
The luxury microwave meal was delicious, the house is warm, work’s going OK, but you’re just not feeling very happy. Some positive psychologists believe this is because many of us in rich, Western countries spend too much of our free time on passive activities, like bingeing on Netflix and browsing Twitter, rather than on active, psychologically demanding activities, like cooking, sports or playing music, that allow the opportunity to experience “flow” – that magic juncture where your abilities only just meet the demands of the challenge. A new paper in the Journal of Positive Psychology examines this dilemma. Do we realise that pursuing more active, challenging activities will make us happier in the long-run? If so, why then do we opt to spend so much more time lazing around engaged in activities that are pleasant in the moment, but unlikely to bring any lasting fulfilment?
Across two studies, L. Parker Schiffer and Tomi-Ann Roberts of Claremont Graduate University and Colorado College surveyed nearly 300 people (presumably US citizens, average age 33 to 34 years) via Amazon’s Mechanical Turk website about what they thought of dozens of different activities: some passive, like listening to music or watching movies; others more active and potentially flow-inducing, such as making art or meditating. Specifically, the participants rated how enjoyable, effortful, and daunting they considered the activities to be, as well as how often they engaged in each of them in a typical week. The participants also identified which activities they considered the most and least conducive to lasting happiness.
There was a clear pattern in the participants’ answers: they identified more effortful activities as being more associated with lasting happiness, yet they said they spent much more time on passive, relaxation-based activities, like watching TV. Looking at their other judgments, the key factor that seemed to deter participants from engaging in more active, flow-inducing activities was that they tended to be seen as particularly daunting and less enjoyable, even while being associated with lasting happiness. The more daunting an activity was deemed to be, the less frequently it was undertaken (by contrast, and to the researchers’ surprise, the perceived effort involved in the activity did not seem to be a deterrent).
Schiffer and Roberts consider this to be a paradox of happiness: we know which kind of activities will bring us lasting happiness, but because we see them as daunting and less enjoyable in the moment, we choose to spend much more of our time doing passive, more immediately pleasant things with our free time. Their advice is to plan ahead “to try to ease the physical transition into flow activities” to make them feel less daunting. For example, they suggest getting your gym clothes and bag ready the night before, and choosing a gym that’s close and convenient; or getting your journal and pen, or easel and paintbrushes, ready in advance.
The other thing they suggest is using mindfulness, meditation or some other “controlled consciousness” technique to help yourself to disregard the initial “transition costs” of a flow activity, such as the early pain of a run, and to focus instead on its pleasurable aspects and the long-term rewards.
“Future research is needed in order to empirically back our proposal that preplanning, prearranging, and controlled consciousness may aid overcoming the activation energy and transition costs that stand in the way of our true happiness,” the researchers said.
…and how you can learn to ignore them
Douglas Van Praet
Every day we navigate a cluttered media environment of thousands of ads vying for our precious time and limited attention. Studies in North America have shown that on average we are exposed to 3,000 ads per day. If you think you can simply choose to ignore these messages, think again. The best ads are designed to slip through your best defenses.
That’s because every consumer, i.e., human, has an automatic hardwired process for attention and awareness. And our decision to pay attention to stimuli in our environment (such as advertising) is often determined by our emotions, not our thoughts. But here is the challenge for viewers. We don’t choose our emotions. They happen unconsciously. We can only try to choose how to think about our feelings after the fact. So when an advertisement triggers a strong emotion, brands can rise to the top of shopping lists and markets. Because at this stage of human evolution, our feelings influence our thinking way more than our thoughts influence our emotions.
Think of emotions as automated action programs that guide us through our (media) environment without our having to think. Ads that trigger emotions can literally hijack critical thought and conscious awareness. Research has shown that ads processed with high levels of attention are six times more impactful at driving brand choice than ads that aren’t consciously recalled. And cognitive science experiments corroborate that familiarity breeds affection through mere exposure.
Every second your senses take in about 11 million bits of information, but you are aware of only about 40 of those bits. Because our conscious mind is so limited, it works on a need-to-know basis. Think of the human brain as a survival machine vigilantly scanning the environment, always making predictions about what will happen next. It works by recognizing and responding to patterns. Cognitive science tells us we don’t notice the world around us when it’s reliably predicted away, when what we are experiencing in the moment matches our intuitive predictions.
However, missed predictions fire a hardwired neural response that biologically commands our attention. This reaction is what neuroscientists technically call the “Oh Shit!” circuit. When we expect something to happen and it does not, a distress signal is released from the anterior cingulate cortex (ACC). The ACC is closely wired to the thalamus, a dual-lobed mass of gray matter beneath the cerebral cortex that plays a critical role in awareness by helping direct conscious attention. Nothing grabs our attention better than the element (and emotion) of surprise. Advertisers do this best by interrupting expected patterns.
In addition, novelty primarily activates the dopamine system in our brain, which is responsible for wanting behavior. The dopamine system also has a close relationship with the opioid system of the brain, which produces pleasurable sensations. Since learning is so important to human survival, it makes sense that natural selection has also instilled in us feel-good emotional responses to novel stimuli.
For instance, the Old Spice brand completely transformed its old-fashioned image thanks to an infectious effort brimming with pattern interrupts. The campaign embedded a cooler, more contemporary brand image in people’s minds by introducing the world to the charismatic hunk Isaiah Mustafa, “the man your man could smell like.”
The magic behind this amazingly impactful campaign is not just the smooth pitchman of Old Spice body wash but the equally smooth interruptions. The introductory commercial featured a series of seamless transitional pattern interrupts as Mustafa directs the viewer’s attention from unsuspecting scene to scene: he goes from his bathroom to dropping in on a sailboat and finally ends up atop a horse. Our brains are surprised and delighted with a blast of dopamine and the payoff of attention again, again, and again. The decision to watch this ad is not a conscious choice; it is the neurobiological equivalent of a forced exposure. Not surprisingly, the campaign generated an amazing 1.4 billion media impressions and a 27 percent increase in sales during the first six months post launch.
Similarly, there are certain stimuli—such as babies, for example—that come prepackaged with positive emotional responses. We don’t consciously choose to find babies adorable, any more than we choose the “aww” reaction that commandeers our thoughts or the impetus to post pictures all over Facebook. The decision to find babies so compelling was made millions of years ago through evolution and natural selection. If our forebears had not been instinctually compassionate toward these innocent, helpless creatures, the babies would never have survived, and our DNA and species would eventually have ceased to exist.
So when ads add novel twists to these mini mush magnets, attention and engagement soar. Take, for instance, the computer-generated Evian babies on roller skates who break-danced and back-flipped their way to what the Guinness Book of World Records declared the most viewed online ad in history. More recently, the most watched ad on YouTube in 2013 was another Evian spot called “Baby & Me,” which featured grown-ups dancing while unexpectedly discovering their inner babies dancing in sync as their reflections in a mirror.
Just because you are aware of seeing an ad or buying a brand doesn’t mean you are aware of the unconscious forces that prompted you to do so. The only way to avoid becoming glued to these types of advertising is to become aware of the patterns. So many of today’s ads are based on interrupting patterns and generating deep primal emotions because our attention is an increasingly scarce resource. By becoming aware of these patterns, your mind will intuitively learn to predict and ignore them in the future, and you’ll gain back precious seconds of your busy life.
And remember to push the pause button in your mind and rationally contemplate what draws you to advertising and products in the first place. When it comes to buying brands we often don’t have free will, but we do have free won’t. We can’t help the feelings tugging at our heartstrings and desires, but we can rationally reject those suggestions come shopping time if they don’t make sense.
For more information check out my book: Unconscious Branding
Does Reading Give Us Access to Other People’s Minds?
Source: The Reading Brain
In her book The Shaking Woman, Siri Hustvedt delights in reading’s power to recast her “internal narrator”:
The closest we can get to . . . entrance into another person’s psyche is through reading. Reading is the mental arena where different thought styles, tough and tender, and the ideas generated by them become more apparent. We have access to a stranger’s internal narrator. Reading, after all, is a way of living inside another person’s words. His or her voice becomes my narrator for the duration. Of course, I retain my own critical faculties, pausing to say to myself, Yes, he’s right about that or No, he’s forgotten this point entirely or That’s a clichéd character, but the more compelling the voice on the page is, the more I lose my own. I am seduced and give myself up to the other person’s words.
Of course, reading doesn’t simply give us access to “another person’s psyche.” Hustvedt argues it’s as close as we get, without the onus to define how close that might be. She describes the capacity of a writer’s voice to become her narrator, to mix with the stream of her consciousness, to give her access to unfamiliar “thought styles” that may lead to new ideas, new ways of understanding the world—and, ultimately, living with it.
Neuroscientist Stanislas Dehaene argues that “the human brain never evolved for reading. . . . The only evolution was cultural—reading itself progressively evolved toward a form adapted to our brain circuits.” Reading is a human invention, made possible by pre-existing brain systems devoted to representing shapes, sound, and speech. Nonetheless, Dehaene acknowledges that “an exponential number of cultural forms can arise from the multiple combinations of a restricted selection of fundamental traits.” In other words, the malleability of the brain’s representational systems enables the continuous evolution of new forms of representation.
The literary wing of the so-called “neurohumanities” has been busy, with researchers and theorists investigating what it might mean to “live inside another’s words” and the variations of reading possible within the physiological constraints Dehaene describes. Three books in particular have made a splash: Lisa Zunshine’s Why We Read Fiction: Theory of Mind and the Novel (2006), Suzanne Keen’s Empathy and the Novel (2007), and Blakey Vermeule’s Why Do We Care about Literary Characters? (2009). The titles of these books reflect the clarity of their purposes and their shared interest in so-called “mind reading”: how we know what another person thinks and feels, or how literature trains us to guess.
Zunshine draws on theory of mind research in cognitive science to argue that literary texts satisfy, create, and test “cognitive cravings,” focusing mostly on our cognitive capacity to imagine other people’s mental experiences—and the centrality of doing so to navigating social relations. She makes a strong argument that writers like Virginia Woolf and Jane Austen offer a kind of cognitive exercise, pushing us to practice levels of “cognitive embedment”: for example, she realized that he thought she was laughing inside, and this worried her. We practice imagining each other imagining each other’s minds.
Keen emphasizes neuro-cognitive research that links empathy to so-called mirror neurons. Responding to influential fMRI studies of empathy and mirror systems by Tania Singer, she observes that “Singer and her colleagues conclude that empathy is mediated by the part of the pain network associated with pain’s affective qualities, but not its sensory qualities.” In other words, we can imagine other people’s pain, but we can’t feel it. As a result, Keen’s conclusions are multifarious—and not entirely rosy: it may be easier to empathize with fictional characters than with real people; novelists (and writers and artists in general) may be more empathetic than the general population; empathetic responses occur more readily in response to negative emotions; empathy does not necessarily lead to altruism or action; and empathy can lead to an aversive response as well as a sympathetic one.
Vermeule focuses on literary characters, as “tools to think with”: “Literary narratives prove us and make us worry about what it is to interact with fictional people. And we should worry, because interacting with fictional people turns out to be a central cognitive preoccupation, one that exposes many of the aspects of how our minds work.” Vermeule’s “fictional people” include characters like Clarissa Dalloway or Humbert Humbert, but also representations of actual people we don’t know like Barack Obama or Caitlyn Jenner and people we do know, even those we’re intimate with. When we imagine other people’s mental lives, we create a kind of productive fiction. Literature, she argues, makes us attentive to forms of representation that shape the ways we live. If we don’t recognize the role of representation in the shaping of social relations we will mistake our mental reproductions of others for “the real properties” of those people, rather than recognizing the cognitive filters that enable us to relate to them.
Some of this research has gotten a lot of press—for example, Natalie Phillips’s fMRI research on reading Jane Austen, featured on NPR, the Huffington Post, and Salon well before it was published in journals. Phillips conducted her research on a fellowship at Stanford, which touted it with the headline “This Is Your Brain on Jane Austen.” Phillips’s research is a multi-disciplinary collaboration—whose process mirrors its premises with a productive irony Austen might appreciate. She’s interested in the limits of attention, studying Austen’s fiction to make arguments about how it challenges readers to adopt multiple perspectives that test those limits.
Samantha Holdsworth, a neuroimaging expert on the project, describes the challenges: “We were all interested, but working at the edge of our capacity to understand even 10 percent of what each other were saying”—an estimate revised to 30 percent in the academic article that finally fleshed out the results that had received so much preliminary hype. Phillips presents her research with the enthusiasm of a hypothesis that requires further study. In short, close reading (attending to questions about form) and pleasure reading (getting lost in a book) involve related but different forms of representation.
The “neural signatures” involved multiple brain systems, and Phillips envisions future research using a “functional connectivity” approach to measure “synchronous patterns that emerge in parallel across the brain and investigates how these connections change as we engage stimulus over time.” Close reading seems to initiate more widespread activity than pleasure reading, including the somatosensory cortex and motor cortex—areas involved in space and movement.
This is nascent research, and its hypotheses are tentative. That seems appropriate. If Jane Austen abhorred anything, it was too definitive a conclusion. In Austen, mind reading is always misreading.
Jason Tougaw is the author of The Elusive Brain: Literary Experiments in the Age of Neuroscience (Yale UP) and The One You Get: Portrait of a Family Organism (Dzanc Books).
Disparagement humor makes a punchline out of a marginalized group. Racist or sexist jokes, for instance, aren’t just harmless fun – psychologists find they can foster discrimination.
Q: Why did the woman cross the road?
A: Who cares! What the hell is she doing out of the kitchen?
Q: Why hasn’t NASA sent a woman to the moon?
A: It doesn’t need cleaning yet!
These two jokes represent disparagement humor – any attempt to amuse through the denigration of a social group or its representatives. You know it as sexist or racist jokes – basically anything that makes a punchline out of a marginalized group.
Disparagement humor is paradoxical: It simultaneously communicates two conflicting messages. One is an explicit hostile or prejudiced message. But delivered alongside is a second implicit message that “it doesn’t count as hostility or prejudice because I didn’t mean it — it’s just a joke.”
By disguising expressions of prejudice in a cloak of fun and frivolity, disparagement humor, like the jokes above, appears harmless and trivial. However, a large and growing body of psychology research suggests just the opposite – that disparagement humor can foster discrimination against targeted groups.
Jokes that release restraints
Most of the time prejudiced people conceal their true beliefs and attitudes because they fear others’ criticism. They express prejudice only when the norms in a given context clearly communicate approval to do so. They need something in the immediate environment to signal that it is safe to freely express their prejudice.
Disparagement humor appears to do just that by affecting people’s understanding of the social norms – implicit rules of acceptable conduct – in the immediate context. And in a variety of experiments, my colleagues and I have found support for this idea, which we call prejudiced norm theory.
For instance, in studies, men higher in hostile sexism – antagonism against women – reported greater tolerance of gender harassment in the workplace upon exposure to sexist versus neutral (nonsexist) jokes. Men higher in hostile sexism also recommended greater funding cuts to a women’s organization at their university after watching sexist versus neutral comedy skits. Even more disturbing, other researchers found that men higher in hostile sexism expressed greater willingness to rape a woman upon exposure to sexist versus nonsexist humor.
How did sexist humor make the sexist men in these studies feel freer to express their sexist attitudes? Imagine that the social norms about acceptable and unacceptable ways of treating women are represented by a rubber band. Everything on the inside of the rubber band is socially acceptable; everything on the outside is unacceptable.
Sexist humor essentially stretched the rubber band; it expanded the bounds of acceptable behavior to include responses that would otherwise be considered wrong or inappropriate. So, in this context of expanded acceptability, sexist men felt free to express their antagonism without the risk of violating social norms and facing disapproval from others. Sexist humor signaled that it’s safe to express sexist attitudes.
Who’s the target?
In another study, my colleagues and I demonstrated that this prejudice-releasing effect of disparagement humor varies depending on the position in society occupied by the butt of the joke. Social groups are vulnerable to different degrees depending on their overall status.
Some groups occupy a unique social position of what social psychologists call “shifting acceptability.” For these groups, the overall culture is changing from considering prejudice and discrimination against them completely justified to considering them completely unjustified. But even as society as a whole becomes increasingly accepting of them, many individuals still harbor mixed feelings.
For instance, over the past 60 years or so, the United States has seen a dramatic decline in overt and institutional racism. Public opinion polls over the same period have shown whites holding progressively less prejudiced views of minorities, particularly blacks. At the same time, however, many whites still covertly have negative associations with and feelings toward blacks – feelings they largely don’t acknowledge because they conflict with their ideas about themselves being egalitarian.
Disparagement humor fosters discrimination against social groups – like black Americans – that occupy this kind of shifting ground. In our study, we found that off-color jokes promoted discrimination against Muslims and gay men – measured, for instance, as greater recommended budget cuts to a gay student organization. However, disparagement humor didn’t have the same effect against two “justified prejudice” groups: terrorists and racists. Social norms are such that people didn’t need to wait for jokes to justify expressions of prejudice against these groups.
An important implication of these findings is that disparagement humor can be more or less detrimental based on the social position occupied by the targeted groups. Movies, television programs or comedy clips that humorously disparage groups such as gays, Muslims or women can potentially foster discrimination and social injustice, whereas those that target groups such as racists will have little social consequence.
On the basis of these findings, one might conclude that disparagement humor targeting oppressed or disadvantaged groups is inherently destructive and thus should be censured. However, the real problem might not be with the humor itself but rather with an audience’s dismissive viewpoint that “a joke is just a joke,” even if disparaging. One study found that such a “cavalier humor belief” might indeed be responsible for some of the negative effects of disparagement humor. For prejudiced people, the belief that “a disparaging joke is just a joke” trivializes the mistreatment of historically oppressed social groups – including women, gay people, racial minorities and religious minorities – which further contributes to their prejudiced attitude.
Can you be ‘in on the joke’?
However, if one initiates disparagement humor with the deliberate intention of exposing the absurdity of stereotypes and prejudice, the humor can ironically subvert or undermine prejudice rather than release it.
Chris Rock is one comedian well-known for using subversive disparagement humor to challenge the status quo of racial inequality in the United States. For instance, in his opening monologue for the 2016 Academy Awards, he used humor to call attention to racism in the film industry and hierarchical race relations more generally:
I’m here at the Academy Awards, otherwise known as the White People’s Choice Awards. You realize if they nominated hosts, I wouldn’t even get this job. So y’all would be watching Neil Patrick Harris right now.
The problem is that in order for the humor to realize its goal of subverting prejudice, the audience must understand and appreciate that intention. And there’s no guarantee that they will.
Comedian Dave Chappelle ran into exactly this problem. He has described how, while taping a sketch that traded on racial stereotypes, a reaction on set made him question whether his subversive intent was coming through:

There was a good-spirited intention behind it. So then when I’m on the set, and we’re finally taping the sketch, somebody on the set [who] was white laughed in such a way – I know the difference of people laughing with me and people laughing at me – and it was the first time I had ever gotten a laugh that I was uncomfortable with. Not just uncomfortable, but like, should I fire this person?
Chappelle’s intentions with his racially charged comedy were misunderstood. By lampooning the stereotype, he meant to call attention to the ridiculousness of racism. However, it became apparent that not everyone was able or motivated to look past the stereotypical portrayal itself to grasp his subversive intent.
One study found that people higher in prejudice are particularly prone to misinterpret subversive humor. Researchers in the 1970s studied amusement with the television show “All in the Family,” which focused on the bigoted character Archie Bunker. They found that low-prejudiced people perceived “All in the Family” as a satire on bigotry and that Archie Bunker was the target of the humor. They “got” the true subversive intent of the show.
In contrast, high-prejudiced people enjoyed the show for satirizing the targets of Archie’s prejudice. Thus, for high-prejudiced people, the subversive disparagement humor of the show backfired. Rather than calling attention to the absurdity of prejudice, for them the show communicated an implicit prejudiced norm, conveying a tolerance of discrimination.
Psychology research suggests that disparagement humor is far more than “just a joke.” Regardless of its intent, when prejudiced people interpret disparagement humor as “just a joke” intended to make fun of its target and not prejudice itself, it can have serious social consequences as a releaser of prejudice.