How video games unwittingly train the brain to justify killing

Teodora Stoica is a PhD student in the translational neuroscience programme at the University of Louisville. She is interested in the relationship between emotion and cognition, and clinical and cognitive psychology.


Published in association with
Cognitive Neuroscience Society
an Aeon Partner

Mortal Kombat X gameplay. <em>NetherRealm/Warner Bros. Interactive Entertainment/Wikipedia</em>

Let’s play a game. One of the quotes below comes from a trained soldier speaking of killing the enemy; the other from a convicted felon describing his first murder. Can you tell the difference?

(1) ‘I realised that I had just done something that separated me from the human race and it was something that could never be undone. I realised that from that point on I could never be like normal people.’

(2) ‘I was cool, calm and collected the whole time. I knew what I had to do. I knew I was going to do it, and I did.’

Would you be surprised to learn that the first statement, suggesting remorse, comes from the American mass murderer David Alan Gore, while the second, of cool acceptance, was made by Andy Wilson, a soldier in the SAS, Britain’s elite special forces? In one view, the two men are separated by the thinnest filament of morality: justification. One killed because he wanted to, the other because he was acting on behalf of his country, as part of his job.

While most psychologically normal individuals agree that inflicting pain on others is wrong, killing others appears socially sanctioned in specific contexts such as war or self-defence. Or revenge. Or military dictatorships. Or human sacrifice. In fact, justification for murder is so pliant that the TV series Dexter (2006-13) flirted exquisitely with the concept: a sociopath who kills villainous people as a vehicle for satisfying his own dark urges.

Operating under strict ‘guidelines’ that target only the guilty, Dexter (a forensics technician) and the viewer come to believe that the kill is justified. He forces the audience to question their own moral compass by asking them to justify murder in their minds in the split second prior to the kill. Usually when we imagine directly harming someone, the image repels us: envision a man hitting a woman, or an owner abusing her dog. Yet sometimes the opposite happens: a switch is flipped, with aggressive, even violent consequences. How can an otherwise normal person override the moral code and commit cold-blooded murder?

That was the question asked at the University of Queensland in Australia, in a study led by the neuroscientist Pascal Molenberghs, in which participants entered an fMRI scanner while viewing a first-person video game. In one scenario, a soldier kills an enemy soldier; in another, the soldier kills a civilian. The game enabled each participant to privately enter the mind of the soldier and control which person to execute.

Screenshot of what each participant saw

Parts of the results made sense. Mentally simulating the killing of an innocent person (an unjustified kill) led to overwhelming feelings of guilt and activation of the lateral orbitofrontal cortex (OFC), an area of the brain involved in aversive, morally sensitive situations. The researchers also predicted that viewing a soldier killing an enemy soldier would activate another region, the medial OFC, which assesses thorny ethical situations and assigns them positive feelings such as praise and pride: ‘This makes me feel good, I should keep doing it.’

But that is not what occurred: the medial OFC did not light up when participants imagined themselves as soldiers killing the enemy. In fact, no part of the OFC did. One explanation for this puzzling finding is that the OFC’s reasoning ability isn’t needed in this scenario because the action is not ethically compromising; that is to say, it is seen as justified. Which brings us to a chilling conclusion: if killing feels justified, anyone is capable of committing the act.

Since the Korean War, the military has altered basic training to help soldiers overcome existing norms of violence, desensitise them to the acts they might have to commit, and reflexively shoot upon cue. Even the drill sergeant is portrayed as the consummate professional personifying violence and aggression.

The same training takes place unconsciously through contemporary video games and media. Children have unprecedented access to violent movies, games and sports events from an early age, and learning brutality has become the norm. The media dwell upon real-life killers, describing every detail of their crimes during prime-time TV. These conditions easily set children up to begin thinking like soldiers and even to justify killing. But are we in fact suppressing critical functions of the brain? Are we engendering future generations who will accept violence and ignore the voice of reason, creating a world where violence becomes the comfortable norm?

The Queensland study had something to say about this as well. When participants viewed unjustified killings, researchers noticed increased connectivity between the OFC and an area called the temporal parietal junction (TPJ), a part of the brain previously associated with empathy. Earlier work has shown that disrupting TPJ function leads people to judge harmful actions as more morally permissible, making the TPJ a critical region for empathy. The increased connectivity between the two regions suggests that participants were actively putting themselves in the shoes of those they watched, judging whether killing civilians was morally acceptable or not.

Increased connectivity between left OFC and left and right TPJ for simulating shooting a civilian

‘Emotional and physical distance can allow a person to kill his foe,’ says Lt Colonel Dave Grossman, director of the Killology Research Group in Illinois and one of the world’s foremost experts in human aggression and the roots of violence. ‘Emotional distance can be classified as mechanical, social, cultural and emotional distance.’ In other words, a lack of connection to humans allows a justified murder. The writer Primo Levi, a Holocaust survivor, believed that this was exactly how the Nazis succeeded in killing so many: by stripping away individuality and reducing each person to a generic number.

In 2016, technology and media have turned genocide viral. The video game Mortal Kombat X features spines being snapped, heads crushed and players being diced into cubes. In Hatred, gamers play as a sociopath who attempts to kill innocent bystanders and police officers with guns, flamethrowers and bombs to satisfy his hatred of humanity. Characters beg for mercy before execution, frequently during profanity-laced rants.

A plethora of studies now associate playing such games with greater tolerance of violence, reduced empathy, aggression and sexual objectification. Compared with males who have not played violent video games, males who do play them are 67 per cent more likely to engage in non-violent deviant behaviour, 63 per cent more likely to commit a violent crime or a crime related to violence, and 81 per cent more likely to have engaged in substance use. Other studies have found that engaging in cyberviolence leads people to perceive themselves as less human, and facilitates violence and aggression.

This powerful knowledge could be used to turn violence on its head. Brain-training programs could use current neuroscientific knowledge to serve up exhilarating games to train inhibition, instead of promoting anger. Creating games with the capability to alter thought patterns is itself ethically questionable and could be easily implemented to control a large population. But we’ve already gone down that road, and in the direction of violence. With today’s generation so highly dependent on technology, phasing in games from an early age that encourage tolerance could be a potent tool for building a more humane, more compassionate world.

Attention, Students: Put Your Laptops Away


Heard on NPR Weekend Edition Sunday

Researchers Pam Mueller and Daniel M. Oppenheimer found that students remember more when they take notes longhand than when they type them on a laptop. It has to do with what happens when you’re forced to slow down.


As laptops become smaller and more ubiquitous, and with the advent of tablets, the idea of taking notes by hand just seems old-fashioned to many students today. Typing your notes is faster — which comes in handy when there’s a lot of information to take down. But it turns out there are still advantages to doing things the old-fashioned way.

For one thing, research shows that laptops and tablets have a tendency to be distracting — it’s so easy to click over to Facebook in that dull lecture. And a study has shown that the very fact that you must slow down when you take notes by hand is what makes handwriting more useful in the long run.

In the study published in Psychological Science, Pam A. Mueller of Princeton University and Daniel M. Oppenheimer of the University of California, Los Angeles sought to test how note-taking by hand or by computer affects learning.

“When people type their notes, they have this tendency to try to take verbatim notes and write down as much of the lecture as they can,” Mueller tells NPR’s Rachel Martin. “The students who were taking longhand notes in our studies were forced to be more selective — because you can’t write as fast as you can type. And that extra processing of the material that they were doing benefited them.”

Mueller and Oppenheimer note that note-taking can be categorized two ways: generative and nongenerative. Generative note-taking pertains to “summarizing, paraphrasing, concept mapping,” while nongenerative note-taking involves copying something verbatim.

And there are two hypotheses about why note-taking is beneficial in the first place. The first, called the encoding hypothesis, says that when a person is taking notes, “the processing that occurs” will improve “learning and retention.” The second, called the external-storage hypothesis, is that you learn by being able to look back at your notes, or even the notes of other people.

Because people can type faster than they write, using a laptop will make people more likely to try to transcribe everything they’re hearing. So on the one hand, Mueller and Oppenheimer were faced with the question of whether the benefits of being able to look at your more complete, transcribed notes on a laptop outweigh the drawbacks of not processing that information. On the other hand, when writing longhand, you process the information better but have less to look back at.

For their first study, they took university students (the standard guinea pig of psychology) and showed them TED talks about various topics. Afterward, they found that the students who used laptops typed significantly more words than those who took notes by hand. When testing how well the students remembered information, the researchers found a key point of divergence in the type of question. For questions that asked students to simply remember facts, like dates, both groups did equally well. But for “conceptual-application” questions, such as, “How do Japan and Sweden differ in their approaches to equality within their societies?” the laptop users did “significantly worse.”

The same thing happened in the second study, even when they specifically told students using laptops to try to avoid writing things down verbatim. “Even when we told people they shouldn’t be taking these verbatim notes, they were not able to overcome that instinct,” Mueller says. The more words the students copied verbatim, the worse they performed on recall tests.

And to test the external-storage hypothesis, for the third study they gave students the opportunity to review their notes in between the lecture and test. The thinking is, if students have time to study their notes from their laptops, the fact that they typed more extensive notes than their longhand-writing peers could possibly help them perform better.

But the students taking notes by hand still performed better. “This is suggestive evidence that longhand notes may have superior external storage as well as superior encoding functions,” Mueller and Oppenheimer write.

Do studies like these mean wise college students will start migrating back to notebooks?

“I think it is a hard sell to get people to go back to pen and paper,” Mueller says. “But they are developing lots of technologies now like Livescribe and various stylus and tablet technologies that are getting better and better. And I think that will be sort of an easier sell to college students and people of that generation.”

Virtual Reality Can Leave You With an Existential Hangover


After exploring a virtual world, some people can’t shake the sense that the actual world isn’t real, either.


When Tobias van Schneider slips on a virtual reality headset to play Google’s Tilt Brush, he becomes a god. His fingertips become a fiery paintbrush in the sky. A flick of the wrist rotates the clouds. He can jump effortlessly from one world that he created to another.

When the headset comes off, though, it’s back to a dreary reality. And lately van Schneider has been noticing some unsettling lingering effects. “What stays is a strange feeling of sadness and disappointment when participating in the real world, usually on the same day,” he wrote on the blogging platform Medium last month. “The sky seems less colorful and it just feels like I’m missing the ‘magic’ (for the lack of a better word). … I feel deeply disturbed and often end up just sitting there, staring at a wall.”

Van Schneider dubs the feeling “post-VR sadness.” It’s less a feeling of depression, he writes, and more a sense of detachment. And while he didn’t realize it when he published the post, he’s not the only one who has experienced this. Between virtual reality subreddits and Oculus Rift online forums, there are dozens of stories like his. The ailments range from feeling temporarily fuzzy, light-headed, and in a dream-like state, to more severe detachment that lasts days—or weeks. Many cases have bubbled up in the last year, likely as a response to consumer VR headsets becoming more widely available. But some of the stories date as far back as 2013, when an initial version of the Oculus Rift was released for software developers.

“[W]hile standing and in the middle of a sentence, I had an incredibly strange, weird moment of comparing real life to the VR,” wrote the video-game developer Lee Vermeulen after he tried Valve’s SteamVR system back in 2014. He was mid-conversation with a coworker when he started to feel off, and the experience sounds almost metaphysical. “I understood that the demo was over, but it was [as] if a lower level part of my mind couldn’t exactly be sure. It gave me a very weird existential dread of my entire situation, and the only way I could get rid of that feeling was to walk around or touch things around me.”

It seems that VR is making people ill in a way no one predicted. And as hard as it is to articulate the effects, it may prove even harder to identify their cause.

* * *

The notion of virtual-reality devices having a physical effect on their users is certainly familiar. Virtual-reality sickness, also known as cybersickness, is a well-documented type of motion sickness that some people feel during or after VR play, with symptoms that include dizziness, nausea, and imbalance. It’s so common that researchers say it’s one of the biggest hurdles to mass adoption of VR, and companies like Microsoft are already working rapidly to find ways to fix it.

Some VR users on Reddit have pointed out that VR sickness begins to fade with time and experience in a headset. Once they grew their “VR legs,” they wrote, they experienced less illness. Van Schneider has noticed the same thing. “[The physical symptoms] usually fade within the first 1–2 hours and get better over time,” he wrote. “It’s almost like a little hangover, depending on the intensity of your VR experience.” Indeed, VR sickness is often referred to as a “VR hangover.”


The dissociative effects that van Schneider and others have written about, however, are much worse. In an attempt to collectively self-diagnose, many of the internet-forum users have pointed to a study by the clinical psychology researcher Frederick Aardema from 2006 — the only study that looks explicitly at virtual reality and clinical dissociation, a state of detachment from one’s self or reality. Using a questionnaire to measure participants’ levels of dissociation before and after exposure to VR, Aardema found that VR increases dissociative experiences and lessens people’s sense of presence in actual reality. He also found that the greater the individual’s pre-existing tendency for dissociation and immersion, the greater the dissociative effects of VR.

Dissociation itself isn’t necessarily an illness, Aardema said. It works like a spectrum: On the benign side of the spectrum, there is fantasizing and daydreaming — a coping mechanism for boredom or conflict. On the other side, however, there are more pathological types of dissociation, which include disorders like derealization-depersonalization (DPDR).

While derealization is the feeling that the world isn’t real, depersonalization is the feeling that one’s self isn’t real. People who’ve experienced depersonalization say that it feels like they’re outside of their bodies, watching themselves. Derealization makes a person’s surroundings feel strange and dream-like, in an unsettling way, despite how familiar they may be.

When I spoke with Aardema on the phone, he had been wondering why his paper from ten years ago had suddenly been getting so many hits on the science-networking site ResearchGate. His study measured mild dissociative effects — think, “I see things around me differently than usual” — so he emphasized that there is a need to explore how these effects may relate to mood and depressive feelings. “There was some indication in our initial study that feelings of depression were important in relation to dissociation,” he said.

* * *

I’ve never felt depersonalization, but I have felt derealization, the result of a severe panic disorder I developed when I was 25. It was nothing short of nightmarish. When the effects were tolerable, it felt like I was permanently high on psychedelics — a bad trip that wouldn’t end. When it was at its most intense, it was like living in my own scary movie: you look around at your everyday life and nothing feels real. Even faces that I knew and loved looked like a jumbled mess of features.

DPDR often occurs after a traumatic event, as a defense mechanism that separates someone from emotional issues that are too difficult to process. My case was triggered by stress. But according to a 2015 study in the journal Multisensory Research, feelings of unreality can also be triggered by contradicting sensory input — like one might experience inside a VR headset.

The study, by Kathrine Jáuregui-Renaud, a health researcher at the Mexican Institute of Social Security, explains that in order for the mind to produce a coherent representation of the outside world, it relies on integrating sensory input—combining and making sense of the information coming in through the senses. When there’s a mismatch between the signals from the vestibular system — a series of fluid-filled tubes in the inner ear that senses balance — and the visual system, the brain short-circuits. Part of the brain may think the body is moving, for instance, while another part thinks the feet are firmly planted on the ground. Something feels amiss, which can cause anxiety and panic.


This, Aardema pointed out, could explain why books, movies, and video games don’t tend to cause the same kinds of dissociative aftereffects. Books don’t have moving visuals, and the movement in movies and video games is usually not intense enough. It also helps that these experiences are usually enjoyed while sitting still. So they just don’t have the same capacity to offset balance and vestibular function. (Though for some people, movies can cause motion sickness — there is even a website devoted to rating movies on their likelihood of giving a viewer motion sickness.)

Scientists also believe that this kind of conflicting information is what causes motion-sickness symptoms like nausea and dizziness. So why do some VR users get motion sickness, while others end up experiencing something more serious? Research suggests that there is a link between serotonin levels, which play a role in mood regulation, and the vestibular system. So for those who may already suffer from a serotonin-related imbalance, like the 40 million Americans with anxiety disorders, VR’s disruption of the vestibular system may have a more profound effect.

* * *

As van Schneider illustrated in his blog post, the appeal of virtual reality’s “superpowers” is compelling. VR’s very purpose is to make it difficult to distinguish simulation from reality. But what happens when the primitive brain is not equipped to process this? To what extent is VR causing users to question the nature of their own reality? And how easily are people able to tackle this line of questioning without losing their grip?

One evening during my DPDR phase, I was riding in a cab down a main street in the West Village, looking out the window. It was summer and there were tourists everywhere, and the light before sunset was still lingering. It was a perfect time to be out in the street, walking with friends and family, taking in New York City. But I remember the distinct feeling of hating everyone I saw. They had brains that just worked, brains that collected streams of sensory information and painted a satisfying picture of reality, just like brains are supposed to do. They most likely never questioned if what they were experiencing was real.

For some people, at least, it seems that VR could change that. In March, Alanah Pearce, an Australian video game journalist and podcast host, recounted troubling post-VR symptoms after the Game Developers Conference in San Francisco. “I was very fatigued. I was dizzy. And it definitely hits that strange point where the real world feels surreal,” she said. “I’m not going to go into that too in-depth, because it’s something I haven’t yet grasped. But I know that I’m not alone, and other people who play VR feel the same thing, where it’s like, nothing really feels real anymore. It’s very odd.”


The Internet of Things: explained

Written by Joe Myers, Formative Content
An illustration picture shows a projection of binary code on a man holding a laptop computer, in an office in Warsaw June 24, 2013. REUTERS/Kacper Pempel

A guide to the Internet of Things.


The internet of things is a term you may have heard a lot recently. It features heavily in discussions about our future – and, increasingly, our present. But what is it?

This is a simple guide to the term, the impact it’s set to have, and how it might change your life.

The internet of what?

At its heart, the concept is very simple. It’s about connecting devices to the internet. This doesn’t just mean smartphones, laptops and tablets. Jacob Morgan from Forbes talks of connecting everything with an “on and off switch.”

The ‘smart fridge’ dominates media discussions: a fridge that could let you know when you’re running out of something, write a shopping list, or alert you when something has gone out of date. But, in theory, anything from a jet engine to a washing machine could be connected.

Connected devices can be controlled remotely – think: adjusting your heating via an app – and can gather useful data.
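To make that concrete, here is a minimal sketch in Python of what “remote control plus data gathering” amounts to. The device, its command format, and the readings are all hypothetical, invented for illustration rather than drawn from any real product’s API:

```python
import json

class SmartThermostat:
    """Toy model of a connected device: it senses, reports, and accepts remote commands."""

    def __init__(self, target=20.0):
        self.target = target   # desired temperature, in degrees Celsius
        self.readings = []     # sensor data gathered over time

    def sense(self, temperature):
        # Gather data locally, ready to be sent upstream later
        self.readings.append(temperature)

    def handle_command(self, message):
        # A remote app sends a JSON command over the network,
        # e.g. '{"set_target": 22.5}' from your phone
        command = json.loads(message)
        if "set_target" in command:
            self.target = float(command["set_target"])

    def report(self):
        # What the device would publish to a cloud service
        latest = self.readings[-1] if self.readings else None
        return {"target": self.target, "latest": latest}

thermostat = SmartThermostat()
thermostat.sense(18.2)                             # living room is 18.2 °C
thermostat.handle_command('{"set_target": 22.5}')  # owner turns up the heat remotely
print(thermostat.report())                         # {'target': 22.5, 'latest': 18.2}
```

Real devices add a network layer (commonly a publish/subscribe protocol such as MQTT) on top of this loop, but the essential shape — sense, report, accept commands — is the same.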

According to SAP, the number of connected devices is set to exceed 50 billion by 2020. Revenue for the providers of IoT services is also growing rapidly, as this chart shows.

 Projected global revenue of the Internet of Things from 2007 to 2020

Image: Statista

Solving problems on a massive scale

The IoT is about much more than connecting multiple objects in your home to the internet. As the World Economic Forum’s Intelligent Assets: unlocking the circular economy potential report has highlighted, the IoT has the potential to transform entire cities.

Sensors, combined with smartphones, will allow for more efficient energy networks (across cities and in your home), reduced congestion and improved transport, as well as recyclable, multi-use buildings.

Houses, offices, factories and public buildings could all generate electricity from renewable sources. Sensors would then coordinate the distribution and storage of this power, making whole systems cleaner, more efficient and more stable.

 Intelligent assets making cities smarter by...

Image: World Economic Forum

Smart cities could also make your journey to and from work much easier. Real-time traffic data, gathered from sensors, could reduce journey times. Mobile parking apps will make finding a space much easier, while smart street lights would light your way home.

Connected cars are set to be a major part of the IoT. Gartner forecasts that by 2020 there will be more than 250 million connected vehicles on the road. Live traffic and parking information, real-time navigation, and automated driving could all become a reality as connectivity spreads.

 Smart transport systems

Image: World Economic Forum

The installation of 42 ‘nodes’ – data collection boxes – is set to begin this summer in Chicago. By 2018, the Array of Things project hopes to have installed 500 across the city. This map shows the location of the original 42 nodes, which will gather data on topics from air quality to traffic volume.

 AoT Initial Locations

Image: Array of Things

All this data will be made available to the public. It will provide real-time, location-specific information about Chicago’s “environment, infrastructure and activity”, according to Array of Things.

The IoT has the potential to make our lives better. More efficient heating systems could save us money, transport apps could save us time, and new electrical grid systems could help save the planet.

So it’s all great then?

Not quite. There are numerous security concerns around having so many connected devices. Each connected device in theory becomes a risk, and a possible target for hackers.

Many of these devices contain a lot of personal information and data. Consider a smart electricity meter. It knows your electricity use and typical times you’re at home. All of this could be available to a hacker.
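To see how telling that data can be, consider a toy sketch: a single day of hourly meter readings is enough to guess when someone was home. The numbers and the naive threshold here are invented for illustration; real inference would be far more sophisticated.

```python
# Hourly electricity readings in kWh for one day (hypothetical values)
readings = [0.2, 0.2, 0.2, 0.2, 0.2, 0.3, 1.5, 1.8,   # overnight, then the morning routine
            0.3, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.3,   # low usage: probably nobody home
            0.4, 1.9, 2.2, 2.0, 1.6, 1.1, 0.5, 0.3]   # evening: occupants are back

BASELINE = 0.5  # kWh; above this we guess someone is actively using appliances

# Hours where usage rises above the idle baseline
occupied_hours = [hour for hour, kwh in enumerate(readings) if kwh > BASELINE]
print(occupied_hours)  # [6, 7, 17, 18, 19, 20, 21]
```

From one list comprehension, an attacker could infer a wake-up time, an empty house during working hours, and an evening return — exactly the kind of pattern a burglar would want.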

If a whole city is connected, the risk becomes much greater.

In this World Economic Forum video, Lorrie Cranor, Director of Carnegie Mellon University’s CyLab Usable Privacy and Security Laboratory, explores the threat IoT could pose to our privacy. She also looks at what we can do about it.

“In our smart homes we want our fridge to remind us to buy more butter at the store but we don’t want it to tell our health insurers,” she says.



What you read matters more than you might think

Written by Susan Reynolds, Contributor, Psychology Today


A study published in the International Journal of Business Administration in May 2016 found that what students read in college directly affects the level of writing they achieve. In fact, researchers found that reading content and frequency may exert a more significant impact on students’ writing ability than writing instruction and writing frequency. Students who read academic journals, literary fiction, or general nonfiction wrote with greater syntactic sophistication (more complex sentences) than those who read genre fiction (mysteries, fantasy, or science fiction) or exclusively web-based aggregators like Reddit, Tumblr, and BuzzFeed. The highest scores went to those who read academic journals; the lowest scores went to those who relied solely on web-based content.

The difference between deep and light reading

Recent research also revealed that “deep reading”—defined as reading that is slow, immersive, rich in sensory detail and emotional and moral complexity—is distinct from light reading, which is little more than the decoding of words. Deep reading occurs when the language is rich in detail, allusion, and metaphor, and taps into the same brain regions that would activate if the reader were experiencing the event. Deep reading is great exercise for the brain and has been shown to increase empathy, as the reader dives deeper and adds reflection, analysis, and personal subtext to what is being read. It also offers writers a way to appreciate all the qualities that make novels fascinating and meaningful—and to tap into their own ability to write on a deeper level.

Light reading is what one might find in online blogs or on “headline news” and “entertainment news” websites, particularly those that breezily rely on lists or punchy headlines, and even occasionally use emojis to communicate. This kind of reading lacks a genuine voice, a viewpoint, or the sort of analysis that might stimulate thought—light and breezy content that you can skim through and will likely forget within minutes.

Deep reading synchronizes your brain

Deep reading activates our brain’s centers for speech, vision, and hearing, all of which work together to help us speak, read, and write. Reading and writing engage Broca’s area, which enables us to perceive rhythm and syntax; Wernicke’s area, which impacts our perception of words and meaning; and the angular gyrus, which is central to the perception and use of language. These areas are wired together by a band of fibers, and this interconnectivity likely helps writers mimic and synchronize the language and rhythms they encounter while reading. Your reading brain senses a cadence in more complex writing, which your brain then seeks to emulate when you write.

Here are two ways you can use deep reading to fire up your writing brain:

Read poems

In an article published in the Journal of Consciousness Studies, researchers reported finding activity in a “reading network” of brain areas that were activated in response to any written material. In addition, more emotionally charged writing aroused several regions in the brain (primarily on the right side) that respond to music. In a specific comparison between reading poetry and prose, researchers found evidence that poetry activates the posterior cingulate cortex and medial temporal lobes, parts of the brain linked to introspection. When volunteers read their favorite poems, areas of the brain associated with memory were stimulated more strongly than “reading areas,” indicating that reading poems you love is the kind of recollection that evokes strong emotions—and strong emotions are always good for creative writing.

Read literary fiction

Understanding others’ mental states is a crucial skill that enables the complex social relationships that characterize human societies—and that makes a writer excellent at creating multilayered characters and situations. Not much research has been conducted on the theory of mind (our ability to realize that our minds are different from other people’s, and that their emotions are different from ours) that fosters this skill, but recent experiments revealed that reading literary fiction led to better performance on tests of affective theory of mind (understanding others’ emotions) and cognitive theory of mind (understanding others’ thinking and state of being) compared with reading nonfiction, popular fiction, or nothing at all. Specifically, these results showed that reading literary fiction temporarily enhances theory of mind and, more broadly, that theory of mind may be more strongly influenced by engagement with true works of art. In other words, literary fiction provokes thought, contemplation, expansion, and integration. Reading literary fiction stimulates cognition beyond the brain functions related to reading, say, magazine articles, interviews, or most online nonfiction reporting.

Instead of watching TV, focus on deep reading

Time spent watching television is almost always pointless (your brain powers down almost immediately), no matter how hard you try to justify it. Reading fluff magazines or lightweight fiction may be entertaining, but it doesn’t fire up your writing brain. If you’re serious about becoming a better writer, spend lots of time deep-reading literary fiction, poetry, and articles on science or art that feature complex language and require your lovely brain to think.

Susan Reynolds is the author of Fire Up Your Writing Brain, a Writer’s Digest book. You can follow her on Twitter or Facebook.

A new brain study sheds light on why it can be so hard to change someone’s political beliefs



Why we react to inconvenient truths as if they were personal insults.


INFOGRAPHIC: How the World Reads

A bunch of interesting facts about reading in one handy infographic

Source: INFOGRAPHIC: How the World Reads

Did you know that people in India read an average of 10.4 hours a week? Or that regular readers are 2.5 times less likely to develop Alzheimer’s disease? This handy infographic from FeelGood gathers a bunch of interesting facts about reading in one place.

Heavy Screen Time Rewires Young Brains, For Better And Worse


Bombarding young mice with video and audio stimulation changes the way the brain develops. But some scientists think those sorts of brain changes could protect kids from stressing out in a busy world.

Source: Heavy Screen Time Rewires Young Brains, For Better And Worse

There’s new evidence that excessive screen time early in life can change the circuits in a growing brain.

Scientists disagree, though, about whether those changes are helpful, or just cause problems. Both views emerged during the Society for Neuroscience meeting in San Diego this week.

The debate centered on a study of young mice exposed to six hours daily of a sound and light show reminiscent of a video game. The mice showed “dramatic changes everywhere in the brain,” said Jan-Marino Ramirez, director of the Center for Integrative Brain Research at Seattle Children’s Hospital.

“Many of those changes suggest that you have a brain that is wired up at a much more baseline excited level,” Ramirez reported. “You need much more sensory stimulation to get [the brain’s] attention.”

So is that a problem?

On the plus side, it meant that these mice were able to stay calm in an environment that would have stressed out a typical mouse, Ramirez explained. But it also meant they acted like they had an attention deficit disorder, showed signs of learning problems, and were prone to risky behavior.

A more optimistic interpretation came from Leah Krubitzer, an evolutionary neurobiologist at the University of California, Davis. “The benefits may outweigh the negative sides to this,” Krubitzer said, adding that a less sensitive brain might thrive in a world where overstimulation is a common problem.

The debate came just weeks after the American Academy of Pediatrics relaxed its longstanding rule against any screen time for kids under two. And it reflected an evolution in our understanding of how sensory stimulation affects developing brains.

Researchers learned many decades ago that young brains need a lot of stimulation to develop normally. So, for a long time parents were encouraged to give kids as many sensory experiences as possible.

“The idea was, basically, the more you are exposed to sensory stimulation, the better you are cognitively,” Ramirez said.

Then studies began to suggest that children who spent too much time watching TV or playing video games were more likely to develop ADHD. So scientists began studying rats and mice to see whether intense audio-visual stimulation early in life really can change brain circuits.

Studies like the one Ramirez presented confirm that it can. The next question is what that means for children and screen time.

“The big question is, was our brain set up to be exposed to such a fast pace,” Ramirez said. “If you think about nature, you would run on the savanna and you would maybe once in your lifetime meet a lion.”

In a video game, he said, you can meet the equivalent of a lion every few seconds. And human brains probably haven’t evolved to handle that sort of stimulation, he said.

Krubitzer, and many other scientists, said they aren’t so sure. It’s true this sort of stimulation may desensitize a child’s brain in some ways, they said. But it also may prepare the brain for an increasingly fast-paced world.

“Less than 300 years ago we had an industrial revolution and today we’re using mobile phones and we interact on a regular basis with machines,” Krubitzer said. “So the brain must have changed.”

Krubitzer rejected the idea that the best solution is to somehow turn back the clock.

“There’s a tendency to think of the good old days, when you were a kid, and [say], ‘I didn’t do that and I didn’t have TV and look how great I turned out,’ ” Krubitzer said.

Gina Turrigiano, a brain researcher at Brandeis University, thinks lots of screen time may be fine for some young brains, but a problem for others.

“Parents have to be really aware of the fact that each kid is going to respond very, very differently to the same kinds of environments,” she said.

What Horror Movies Do to Your Brain




When we watch a movie, we know what we are seeing isn’t real. Yet sometimes the scenes are so realistic that they keep us in suspense throughout the movie, and we seem to experience the protagonist’s feelings first-hand.

The movie is a fiction, but the emotions we feel and the reactions they trigger are real. It is undoubtedly a very powerful effect, one now being studied by a young discipline called neurocinema, dedicated to studying the influence of movies on our brains.

Do you remember the last time you jumped out of your chair while watching a horror movie? Let’s find out exactly what happened in your brain and how your body reacted.

Scenes of terror directly activate the primitive brain

Usually, while watching a movie, we “unplug” the motor areas of the brain because they aren’t needed. But sometimes a scene has a strong enough impact to break through this inhibition of the motor system and make us react.

We jump out of the chair or cry out because the scene overrides this brain block and unleashes our instincts. The content is so emotionally powerful that it makes us react immediately, either to protect ourselves or to alert others to the danger. By shouting, we warn those around us, or even the characters in the movie, that danger is near and they must save themselves. It is an atavistic reaction.

All this happens in a matter of milliseconds; we have no time to process what we’re seeing or to modulate our reaction. We react this way because, in those few milliseconds, our brain is not yet aware that it’s just a movie and that we’re safe.

If you think about it, this reaction is not surprising, since our brain is programmed to assume that everything we see is real. It is therefore very difficult to convince the most primitive parts of the brain, the ones activated in these cases, that what we are seeing is fiction. As a result, the body reacts immediately.

In fact, there are isolated cases of people who have suffered post-traumatic stress as a result of watching a movie, a problem more common in children, for whom it is harder to draw the boundary between reality and fantasy.

In adults, this disorder may be caused by excessive identification with the characters. In horror movies, the viewer knows as little as the characters do, which makes it much easier to identify with them. When this identification occurs, the brain may develop deep scars, almost as deep as those left by a real experience. But that’s not all.

3 changes that occur in our body when we watch a horror movie

The reaction to what we see on the screen is not limited to the brain but extends throughout the body. This is because the brain sends an alarm signal that activates the autonomic nervous system, increasing the production of cortisol and adrenaline, two stress hormones that trigger changes at the physiological level.

1. Heart rate increases. A study conducted on a group of young people revealed that watching a horror movie raises the heart rate by an average of 14 beats per minute, along with a significant increase in blood pressure. The researchers also found an increase in circulating white blood cells and a higher hematocrit, as if the body were preparing to defend itself against an intruder.


2. You start to sweat. Skin conductance is one of the first indicators of emotional arousal; in other words, when you are afraid, you sweat. Researchers at the University of Wollongong analyzed the responses of a group of people watching violent and horror movies and noticed that more empathic viewers tend to sweat more during these movies, showing no signs of habituation.

3. Muscles contract. Once the primitive brain has detected a threat and sounded the alarm, it is difficult to stop it, especially if the horror scenes follow one after another and are accompanied by a chilling soundtrack. Researchers at the University of Amsterdam found that the music in these movies generates what is known as an “alarm reaction”: a simultaneous response of mind and body to a sudden, unexpected stimulus that contracts the muscles of the arms and legs. That’s why we always tense our muscles while watching a horror movie.

But then, why do we continue to watch horror movies?

At this point it is clear that, for most of us, watching a horror movie is anything but relaxing. Yet many of us keep falling under the “charm” of these dark characters. Why?

The arousal transfer theory suggests that the negative feelings created by these movies intensify the positive feelings we experience when the hero finally triumphs. Basically, we like these movies because watching them is like riding an emotional roller coaster.

Another theory holds that horror and violent movies help us manage our own fear. In practice, these films would have a cathartic effect, helping us confront our most ancient and hidden fears.

Or maybe it is just morbid curiosity, fostered by our innate need to keep ourselves safe from the dangers that threaten us.


Bos, M. et al. (2013) Psychophysiological Response Patterns to Affective Film Stimuli. PLoS One; 8(4).

Mian, R. et al. (2003) Observing a Fictitious Stressful Event: Haematological Changes, Including Circulating Leukocyte Activation. Stress: The International Journal on the Biology of Stress; 6(1): 41-47.

Barry, R. J. & Bruggemann, J. M. (2002) Eysenck’s P as a modulator of affective and electrodermal responses to violent and comic film. Personality and Individual Differences; 32(6): 1029–1048.

Jennifer Delgado Suárez

Psychologist by profession and passion, dedicated to stringing words together.

Here’s What Happens in Your Brain When You Hear a Pun


New research explains the neuroscience of wordplay.

Source: Here’s What Happens in Your Brain When You Hear a Pun

Why do spiders make great baseball players?

Because they know how to catch flies.

Sorry, sorry, I know that was bad. And that puns, in general, are among the most despised forms of humor. But pun-haters, bear with me: there’s a reason I made you suffer through those last couple of sentences. In the split second between when you read the pun and when you rolled your eyes, something pretty cool was happening in your brain. As writer Roni Jacobson explained in a recent Scientific American column, new research published earlier this year in the journal Laterality: Asymmetries of Body, Brain and Cognition sheds some light on how our minds process the complexities of wordplay.

For the study, led by University of Windsor psychologist Lori Buchanan, a team of researchers presented participants with a pun on one side of their visual field, so that it would be processed first by one side of the brain: things viewed on the right go to the left hemisphere, and things on the left go to the right. Among the puns they used was a variation on the spider joke above, along with this gem: “They replaced the baseball with an orange to add some zest to the game.” (“In honor of M. P. Bryden’s love for the game,” they wrote, referring to a psychologist who studied left-right differences, “our pun examples will be baseball-related when possible.”)

With each pun, Buchanan and her colleagues timed how long it took the participant to catch the wordplay on the screen. Overall, they found, puns in the right visual field sparked a quicker reaction time, suggesting that the left side of the brain takes the lead when it comes to sorting out puns from straight language. “The left hemisphere is the linguistic hemisphere, so it’s the one that processes most of the language aspects of the pun, with the right hemisphere kicking in a bit later,” Buchanan told Scientific American.
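The logic of that comparison can be sketched in a few lines of Python. The numbers below are invented for illustration (the study’s actual reaction times aren’t reported here); the sketch only shows how a right-visual-field advantage would be computed from two sets of timings.

```python
# Hypothetical reaction times in milliseconds. Puns shown in the right
# visual field reach the left (language) hemisphere first.
rt_right_visual_field = [612, 598, 650, 571, 630]  # left hemisphere sees it first
rt_left_visual_field = [701, 688, 655, 720, 694]   # right hemisphere sees it first

def mean(values):
    """Average of a list of reaction times."""
    return sum(values) / len(values)

# A positive difference means right-visual-field puns were caught faster,
# consistent with the left hemisphere taking the lead on wordplay.
advantage = mean(rt_left_visual_field) - mean(rt_right_visual_field)
print(f"Right-visual-field advantage: {advantage:.1f} ms")
```

With these made-up numbers, the sketch reports an advantage of roughly 79 ms for puns presented to the right visual field; the real study’s conclusion rests on the sign of this difference, not its size.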

The interaction between the right and left hemispheres “enables us to ‘get’ the joke because puns, as a form of word play, complete humor’s basic formula: expectation plus incongruity equals laughter,” Jacobson wrote. (The concept she’s describing is known as the benign violation theory of humor, the idea that to be funny, a joke has to subvert our expectations of the norm in a way that isn’t harmful or malevolent. A slapstick bit about someone falling down the stairs, for example, wouldn’t be funny if the person got seriously hurt in the process.) “In puns—where words have multiple, ambiguous meanings—the sentence context primes us to interpret a word in a specific way, an operation that occurs in the left hemisphere,” she continued. “Humor emerges when the right hemisphere subsequently clues us in to the word’s other, unanticipated meaning, triggering what Buchanan calls a ‘surprise reinterpretation.’”

For a pun to land, in other words, both sides of your brain have to engage in a little teamwork. And speaking of teamwork, did you hear the one about the baseball team’s new batter? He was a real hit.