Video game addiction is a term that has been used for years by parents and mental health professionals who believe that it’s a real disorder. Now, there’s more weight behind their argument: The World Health Organization (WHO) has included “gaming disorder” as a new mental health condition in the 11th edition of its International Classification of Diseases.
According to WHO, there are three major criteria for the diagnosis of gaming disorder: Gaming takes precedence over other activities so much that a person often stops doing other things, a person continues gaming even when it causes issues in their life or they feel that they can’t stop, and gaming causes significant distress and impairments in a person’s relationships with others, as well as their work or school life. If your child gets sucked into a game for a few days, but goes back to normal after that, they wouldn’t qualify: Instead, people must engage in this behavior for at least 12 months, WHO says.
It’s worth noting that WHO’s stance on gaming addiction is different from that of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), the handbook used by health professionals in the U.S. and other countries to help diagnose mental health disorders. The DSM-5 calls out “Internet Gaming Disorder” but says it’s a condition that warrants more clinical research and experience before it can be classified in the book as a formal disorder.
WHO says on its website that all people who participate in gaming should be aware that gaming disorder is a real condition, and that it’s important to be mindful of how often they play video games. However, the organization also points out that gaming disorder affects only a small proportion of people who game.
It’s only natural that the news would make you give your child’s gaming system the side-eye.
In general, parents should limit the amount of screen time their children have daily, and gaming is included in that, along with TV, computers, phones and tablet use, Gina Posner, MD, a pediatrician at MemorialCare Orange Coast Medical Center in Fountain Valley, Calif., tells Yahoo Lifestyle.
Screen time isn’t recommended at all for kids who are 18 months or younger, but for children older than that, up to age five, it’s generally recommended that they have no more than one hour of screen time, she says. For those who are six and up, it’s more at the parents’ discretion. “The maximum amount of screen time should be two hours a day, but less is always better,” Posner says.
Posner says that it’s important to set clear limits for your child when it comes to screen time and gaming. For example, say that your child has to do their homework first and/or get out and play for an hour before they’re allowed to game. And even then, make it clear that they’re only allowed to do so for a set period of time.
If your child throws a fit when they aren’t allowed to game all day, it’s a clear sign that you need to cut back, Posner says.
Treatment for gaming disorder is generally based in cognitive behavioral therapy, which is typically done in two phases, Simon Rego, PsyD, chief psychologist at Montefiore Medical Center/Albert Einstein College of Medicine, tells Yahoo Lifestyle. The first is helping your child recognize that their gaming is a problem, and looking for triggers and cues that could make the gaming habit better or worse. A mental health professional would also address problematic thoughts associated with stopping play, as well as the thoughts that keep them gaming, he says.
The goal then is to step the behavior down from pathological to merely problematic, and then to manage it in a “reasonable way,” Rego says. People don’t necessarily have to quit gaming altogether, but they do need to learn to manage it better within parameters, like gaming only with friends at select times during the day vs. doing it alone in their room at night.
If you suspect that your child has a gaming disorder, it’s important to seek help for it.
Just know that this is still a new diagnosis and you may need to do some sleuthing to find someone who specializes in this kind of behavior.
Smartphones have by now been implicated in so many crummy outcomes—car fatalities, sleep disturbances, empathy loss, relationship problems, failure to notice a clown on a unicycle—that it almost seems easier to list the things they don’t mess up than the things they do. Our society may be reaching peak criticism of digital devices.
Even so, emerging research suggests that a key problem remains underappreciated. It involves kids’ development, but it’s probably not what you think. More than screen-obsessed young children, we should be concerned about tuned-out parents.
Yes, parents now have more face time with their children than did almost any parents in history. Despite a dramatic increase in the percentage of women in the workforce, mothers today astoundingly spend more time caring for their children than mothers did in the 1960s. But the engagement between parent and child is increasingly low-quality, even ersatz. Parents are constantly present in their children’s lives physically, but they are less emotionally attuned. To be clear, I’m not unsympathetic to parents in this predicament. My own adult children like to joke that they wouldn’t have survived infancy if I’d had a smartphone in my clutches 25 years ago.
To argue that parents’ use of screens is an underappreciated problem isn’t to discount the direct risks screens pose to children: Substantial evidence suggests that many types of screen time (especially those involving fast-paced or violent imagery) are damaging to young brains. Today’s preschoolers spend more than four hours a day facing a screen. And, since 1970, the average age of onset of “regular” screen use has gone from four years to just four months.
Some of the newer interactive games kids play on phones or tablets may be more benign than watching TV (or YouTube), in that they better mimic children’s natural play behaviors. And, of course, many well-functioning adults survived a mind-numbing childhood spent watching a lot of cognitive garbage. (My mother—unusually for her time—prohibited Speed Racer and Gilligan’s Island on the grounds of insipidness. That I somehow managed to watch every single episode of each show scores of times has never been explained.) Still, no one really disputes the tremendous opportunity costs to young children who are plugged in to a screen: Time spent on devices is time not spent actively exploring the world and relating to other human beings.
Yet for all the talk about children’s screen time, surprisingly little attention is paid to screen use by parents themselves, who now suffer from what the technology expert Linda Stone more than 20 years ago called “continuous partial attention.” This condition is harming not just us, as Stone has argued; it is harming our children. The new parental-interaction style can interrupt an ancient emotional cueing system, whose hallmark is responsive communication, the basis of most human learning. We’re in uncharted territory.
Child-development experts have different names for the dyadic signaling system between adult and child, which builds the basic architecture of the brain. Jack P. Shonkoff, a pediatrician and the director of Harvard’s Center on the Developing Child, calls it the “serve and return” style of communication; the psychologists Kathy Hirsh-Pasek and Roberta Michnick Golinkoff describe a “conversational duet.” The vocal patterns parents everywhere tend to adopt during exchanges with infants and toddlers are marked by a higher-pitched tone, simplified grammar, and engaged, exaggerated enthusiasm. Though this talk is cloying to adult observers, babies can’t get enough of it. Not only that: One study showed that infants exposed to this interactive, emotionally responsive speech style at 11 months and 14 months knew twice as many words at age 2 as ones who weren’t exposed to it.
Child development is relational, which is why, in one experiment, nine-month-old babies who received a few hours of Mandarin instruction from a live human could isolate specific phonetic elements in the language while another group of babies who received the exact same instruction via video could not. According to Hirsh-Pasek, a professor at Temple University and a senior fellow at the Brookings Institution, more and more studies are confirming the importance of conversation. “Language is the single best predictor of school achievement,” she told me, “and the key to strong language skills are those back-and-forth fluent conversations between young children and adults.”
A problem therefore arises when the emotionally resonant adult–child cueing system so essential to early learning is interrupted—by a text, for example, or a quick check-in on Instagram. Anyone who’s been mowed down by a smartphone-impaired stroller operator can attest to the ubiquity of the phenomenon. One consequence of such scenarios has been noted by an economist who tracked a rise in children’s injuries as smartphones became prevalent. (AT&T rolled out smartphone service at different times in different places, thereby creating an intriguing natural experiment. Area by area, as smartphone adoption rose, childhood ER visits increased.) These findings attracted a decent bit of media attention to the physical dangers posed by distracted parenting, but we have been slower to reckon with its impact on children’s cognitive development. “Toddlers cannot learn when we break the flow of conversations by picking up our cellphones or looking at the text that whizzes by our screens,” Hirsh-Pasek said.
In the early 2010s, researchers in Boston surreptitiously observed 55 caregivers eating with one or more children in fast-food restaurants. Forty of the adults were absorbed with their phones to varying degrees, some almost entirely ignoring the children (the researchers found that typing and swiping were bigger culprits in this regard than taking a call). Unsurprisingly, many of the children began to make bids for attention, which were frequently ignored. A follow-up study brought 225 mothers and their approximately 6-year-old children into a familiar setting and videotaped their interactions as each parent and child were given foods to try. During the observation period, a quarter of the mothers spontaneously used their phone, and those who did initiated substantially fewer verbal and nonverbal interactions with their child.
Yet another rigorously designed experiment, this one conducted in the Philadelphia area by Hirsh-Pasek, Golinkoff, and Temple’s Jessa Reed, tested the impact of parental cellphone use on children’s language learning. Thirty-eight mothers and their 2-year-olds were brought into a room. The mothers were then told that they would need to teach their children two new words (blicking, which was to mean “bouncing,” and frepping, which was to mean “shaking”) and were given a phone so that investigators could contact them from another room. When the mothers were interrupted by a call, the children did not learn the word, but otherwise they did. In an ironic coda to this study, the researchers had to exclude seven mothers from the analysis, because they didn’t answer the phone, “failing to follow protocol.” Good for them!
It has never been easy to balance adults’ and children’s needs, much less their desires, and it’s naive to imagine that children could ever be the unwavering center of parental attention. Parents have always left kids to entertain themselves at times—“messing about in boats,” in a memorable phrase from The Wind in the Willows, or just lounging aimlessly in playpens. In some respects, 21st-century children’s screen time is not very different from the mother’s helpers every generation of adults has relied on to keep children occupied. When parents lack playpens, real or proverbial, mayhem is rarely far behind. Caroline Fraser’s recent biography of Laura Ingalls Wilder, the author of Little House on the Prairie, describes the exceptionally ad hoc parenting style of 19th-century frontier parents, who stashed babies on the open doors of ovens for warmth and otherwise left them vulnerable to “all manner of accidents as their mothers tried to cope with competing responsibilities.” Wilder herself recounted a variety of near-calamities with her young daughter, Rose; at one point she looked up from her chores to see a pair of riding ponies leaping over the toddler’s head.
Occasional parental inattention is not catastrophic (and may even build resilience), but chronic distraction is another story. Smartphone use has been associated with a familiar sign of addiction: Distracted adults grow irritable when their phone use is interrupted; they not only miss emotional cues but actually misread them. A tuned-out parent may be quicker to anger than an engaged one, assuming that a child is trying to be manipulative when, in reality, she just wants attention. Short, deliberate separations can of course be harmless, even healthy, for parent and child alike (especially as children get older and require more independence). But that sort of separation is different from the inattention that occurs when a parent is with a child but communicating through his or her nonengagement that the child is less valuable than an email. A mother telling kids to go out and play, a father saying he needs to concentrate on a chore for the next half hour—these are entirely reasonable responses to the competing demands of adult life. What’s going on today, however, is the rise of unpredictable care, governed by the beeps and enticements of smartphones. We seem to have stumbled into the worst model of parenting imaginable—always present physically, thereby blocking children’s autonomy, yet only fitfully present emotionally.
Fixing the problem won’t be easy, especially given that it is compounded by dramatic changes in education. More young children than ever (about two-thirds of 4-year-olds) are in some form of institutional care, and recent trends in early-childhood education have filled many of their classrooms with highly scripted lessons and dull, one-sided “teacher talk.” In such environments, children have few opportunities for spontaneous conversation.
One piece of good news is that young children are prewired to get what they need from adults, as most of us discover the first time our diverted gaze is jerked back by a pair of pudgy, reproaching hands. Young children will do a lot to get a distracted adult’s attention, and if we don’t change our behavior, they will attempt to do it for us; we can expect to see a lot more tantrums as today’s toddlers age into school. But eventually, children may give up. It takes two to tango, and studies from Romanian orphanages showed the world that there are limits to what a baby brain can do without a willing dance partner. The truth is, we don’t really know how much our kids will suffer when we fail to engage.
Of course, adults are also suffering from the current arrangement. Many have built their daily life around the miserable premise that they can always be on—always working, always parenting, always available to their spouse and their own parents and anyone else who might need them, while also staying on top of the news, while also remembering, on the walk to the car, to order more toilet paper from Amazon. They are stuck in the digital equivalent of the spin cycle.
Under the circumstances, it’s easier to focus our anxieties on our children’s screen time than to pack up our own devices. I understand this tendency all too well. In addition to my roles as a mother and a foster parent, I am the maternal guardian of a middle-aged, overweight dachshund. Being middle-aged and overweight myself, I’d much rather obsess over my dog’s caloric intake, restricting him to a grim diet of fibrous kibble, than address my own food regimen and relinquish (heaven forbid) my morning cinnamon bun. Psychologically speaking, this is a classic case of projection—the defensive displacement of one’s failings onto relatively blameless others. Where screen time is concerned, most of us need to do a lot less projecting.
If we can get a grip on our “technoference,” as some psychologists have called it, we are likely to find that we can do much more for our children simply by doing less—regardless of the quality of their schooling and quite apart from the number of hours we devote to them. Parents should give themselves permission to back off from the suffocating pressure to be all things to all people. Put your kid in a playpen, already! Ditch that soccer-game appearance if you feel like it. Your kid will be fine. But when you are with your child, put down your damned phone.
When I was a kid, we often went out for ice cream and a game of mini-golf. Most of the set-ups were fun and relatively easy to negotiate. But there was always that one hole. That one where you gotta time it just right to get the ball through the series of three tunnels, making sure the rotating blades of the windmills don’t get in the way. UGH! I hated that one.
Certainly by the time you were on your 12th attempt, the game started to lose its carefree feel and performance anxiety set in.
In our family, we devised a rule to deal with this and keep the game fun. If, after 5 tries you could not get that ball where you wanted it to be, you got a Do-Over. You got to wipe the slate clean and start over again. Usually this worked.
Sometimes you just have to step back, take a deep breath and start back at the beginning with a fresh attitude.
In “real life” we rarely get do-overs. Most of the time you can’t un-ring a bell.
Enter . . . regret.
Psych Pstuff’s Summary
Regrets: everyone has them to some extent. Harsh words, career mistakes, missed opportunities — these are all common experiences. Sometimes we regret the way we acted or failed to act. Other times, we think we wouldn’t do anything differently but regret that the outcome was not as intended.
Regret is generally considered a negative emotion, in the classic way we default to automatically judging something either “good” or “bad.” While it surely doesn’t feel good, regret can be good for us in several ways, helping to clarify and focus the confusing aspects of a situation.
After all, for the most part, the majority of us are doing the best we can, given the circumstances. Most of us make bad choices because we don’t have all the information about just how bad that choice is. Regret gives us the gift of hindsight to tuck away for “next time.”
Regret gives you perspective nothing else can.
Psychologist Carl Jung once said, “Even a happy life cannot be without a measure of darkness, and the word ‘happy’ would lose its meaning if it were not balanced by sadness.” Knowing what you don’t want, and how you don’t want to be, from first-hand experience helps you truly understand what you do want to do or be.
Regret can also keep us humble. And at least a little bit of humility is a good thing. Its opposite is not. Regret reminds us that we are not perfect and puts us in touch with our humanity.
Of course, like just about anything run amok, regret can be negative and harmful. Contemplating is good. Reflecting is good. Ruminating? Not so much. Obsessing over what you could have and should have done better can lead to feelings of worthlessness and depression that paralyze us instead of inspiring us to do better.
Research has indicated a cultural component to the experience of regret. Collectivistic cultures that emphasize the group over the individual tend to report experiencing less regret. Individualistic societies place an emphasis on individual choice, independence, and performance, setting the stage for self-doubt and blame.
Other research, conducted by Neal Roese of the Kellogg School of Management at Northwestern University, has indicated regret is considered the most effective negative emotion, specifically with respect to: (1) making sense of the world, (2) avoiding future negative behaviors, (3) gaining insight, (4) achieving social harmony, and (5) improving ability to approach desired opportunities.
While wallowing in missed opportunities or less-than-stellar behavior has been shown to have negative effects on physical and emotional health, regret used sparingly, in an introspective and constructive manner, can clearly be a tool to facilitate decision-making and increase satisfaction.
Regrets can be big or small based on the severity of the negative outcomes for ourselves and others. But either way, they can provide a guiding light and the wisdom that can only come from experience.
Perhaps a healthy way to consider some of our less-than-perfect life decisions was best expressed by the classic crooner Frank Sinatra: “Regrets, I’ve had a few, but then again, too few to mention.”
And with a little luck, life even gives you a do-over.
I was 40-something. I walked across the threshold of the house I grew up in. It was . . .
From the Greek philosopher Heraclitus, who said, “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man,” to Bon Jovi, who asked, “Who says you can’t go home?” humanity has ruminated on returning to our childhood homes.
The theme is one we revisit repeatedly.
We see it in movies, too, from comedies (The Royal Tenenbaums, Home for the Holidays, This is Where I Leave You) to dramas (On Golden Pond, Young Adult, The Judge) that have poignantly depicted heading home for a visit or a re-nesting. Homecoming is equally well represented in classic and contemporary literature (You Can’t Go Home Again by Thomas Wolfe, Gilead and its sequel Home by Marilynne Robinson, and An American Childhood by Annie Dillard).
Regardless of the circumstances of return — joyous or tragic — the experience is … well … it’s complicated.
For some, home is the ultimate safety net as one walks the tightrope of life — always there, always solid, always ready to catch you should you stumble. For others the concept of home dissipates like a morning dream and there is not much left to go back to, except in one’s head. Some childhood rooms are preserved like a time capsule. Others are transformed into the sewing room Mom always wanted, just days after one’s departure. Millennials are notorious for going back home — provided they left in the first place. This privilege has, supposedly, given them the freedom to pursue their dreams, to fail and fail again, without dire consequences.
So, is it paradise or inferno? As with so much in life, it depends.
To answer Bon Jovi: it was in fact Thomas Wolfe who insisted you can’t go home again, and yet we have Dorothy returning from Oz to proclaim, “There’s no place like home!”
In the end, good or bad, love it or hate it, curse it or miss it, perhaps Nickelback says it best:
I miss that town
I miss the faces
You can’t erase
You can’t replace it
I miss it now
I can’t believe it
So hard to stay
Too hard to leave it
It’s hard to say it, time to say it
Goodbye … goodbye
Psych Pstuff’s Summary
Psychologist Carl Gustav Jung, in his autobiography Memories, Dreams, Reflections, spoke of the home that he built as a “self-realization of the unconscious … a concretization of the individuation process … a symbol of psychic wholeness.” Building on Jung’s work, Clare Cooper Marcus, architect, psychologist and author of House as a Mirror of Self: Exploring the Deeper Meaning of Home, asserts that “as we change and grow throughout our lives, our psychological development is punctuated not only by meaningful emotional relationships with people, but also by close, affective ties with a number of significant physical environments, beginning in childhood.” She insists that, regardless of the external nature of our dwellings — mansions or shacks — we all have a strong emotional relationship, positive or negative, with our homes.
Our census tells us that fewer than 10% of the population remain in the same house they lived in 30 years prior. We are mobile in this 21st century, very mobile. In fact, it is estimated that the average American will move 11.7 times in his or her lifetime. We don’t stay permanently attached to our childhood homes and neighborhoods — at least not physically. But we do, psychologically.
The lure of nostalgia is strong. Millions of adults revisit their childhood homes long after they, or their family members, have moved out of that space. Some are content to merely drive by and observe from outside. Others write letters to the current owners or even knock on the door and ask to have a look at their old bedroom.
We remember, and feel compelled to re-visit, the view from our bedroom window, the schoolyard playground, our dinner table, the front porch and backyard.
Propelled by his own experience of revisiting his childhood haunts, the psychologist Jerry Burger surveyed other adults about their personal pilgrimages “home.” This is some of what he discovered:
There are three primary reasons for making a trip back to one’s childhood home or neighborhood:
• To reconnect with childhood — 42% of people Burger interviewed visited their childhood homes in hopes of jogging their memory and getting back in touch with who they were as a child.
• To help resolve a current crisis or problem by reflecting on their past — 15% of those studied expressed the need to reevaluate how they developed their values and what led them to make the decisions that they made.
• To bring closure to unfinished business from childhood — 12% reported abuse or trauma and hoped that returning to the home where they experienced that pain would be therapeutic and cathartic.
Regardless of the underlying motivations for the return, Burger discovered that in almost all of the cases, people reported being glad they made the journey to their childhood home, even though it was often a deeply emotional and unpredictable experience.
He found three exceptions, where the experience was not a positive one and people reported wishing they had not made the trip back to their past:
• When the house in which they grew up had significantly changed or was no longer there — this usually proved unexpected and very upsetting.
• For those who returned anticipating an escape from problems and hoping to relive the romanticized memory of their childhood — in these cases, reality did not match their expectations and they were deeply disappointed and disillusioned.
• For those who returned to work through childhood trauma — often the painful memories seemed more intense while visiting the childhood home and they did not experience the anticipated relief or closure. (Burger recommends people revisiting the past to confront a traumatic period in their lives do so with the help of a professional counselor.)
In any event, there are few experiences in one’s life that can move a person as deeply and unpredictably as returning “home.”
People with higher empathy differ from others in the way their brains process music, according to a study by researchers at Southern Methodist University in Dallas and UCLA.
The researchers found that compared to low empathy people, those with higher empathy process familiar music with greater involvement of the reward system of the brain, as well as in areas responsible for processing social information.
“High-empathy and low-empathy people share a lot in common when listening to music, including roughly equivalent involvement in the regions of the brain related to auditory, emotion, and sensory-motor processing,” said lead author Zachary Wallmark, an assistant professor in the SMU Meadows School of the Arts.
But there is at least one significant difference.
Highly empathic people process familiar music with greater involvement of the brain’s social circuitry, such as the areas activated when feeling empathy for others. They also seem to experience a greater degree of pleasure in listening, as indicated by increased activation of the reward system.
“This may indicate that music is being perceived weakly as a kind of social entity, as an imagined or virtual human presence,” Wallmark said.
Researchers in 2014 reported that about 20 percent of the population is highly empathic. These are people who are especially sensitive and respond strongly to social and emotional stimuli.
The SMU-UCLA study is the first to find evidence supporting a neural account of the music-empathy connection. Also, it is among the first to use functional magnetic resonance imaging (fMRI) to explore how empathy affects the way we perceive music.
The new study indicates that among higher-empathy people, at least, music is not solely a form of artistic expression.
“If music was not related to how we process the social world, then we likely would have seen no significant difference in the brain activation between high-empathy and low-empathy people,” said Wallmark, who is director of the MuSci Lab at SMU, an interdisciplinary research collective that studies — among other things — how music affects the brain.
“This tells us that over and above appreciating music as high art, music is about humans interacting with other humans and trying to understand and communicate with each other,” he said.
This may seem obvious.
“But in our culture we have a whole elaborate system of music education and music thinking that treats music as a sort of disembodied object of aesthetic contemplation,” Wallmark said. “In contrast, the results of our study help explain how music connects us to others. This could have implications for how we understand the function of music in our world, and possibly in our evolutionary past.”
The co-authors are Choi Deblieck, with the University of Leuven, Belgium, and Marco Iacoboni, UCLA. The research was carried out at the Ahmanson-Lovelace Brain Mapping Center at UCLA.
“The study shows on one hand the power of empathy in modulating music perception, a phenomenon that reminds us of the original roots of the concept of empathy — ‘feeling into’ a piece of art,” said senior author Marco Iacoboni, a neuroscientist at the UCLA Semel Institute for Neuroscience and Human Behavior.
“On the other hand,” Iacoboni said, “the study shows the power of music in triggering the same complex social processes at work in the brain that are at play during human social interactions.”
Comparison of brain scans showed distinctive differences based on empathy
Participants were 20 UCLA undergraduate students. They were each scanned in an MRI machine while listening to excerpts of music that were either familiar or unfamiliar to them, and that they either liked or disliked. The familiar music was selected by participants prior to the scan.
Afterward each person completed a standard questionnaire to assess individual differences in empathy — for example, frequently feeling sympathy for others in distress, or imagining oneself in another’s shoes.
The researchers then did controlled comparisons to see which areas of the brain during music listening are correlated with empathy.
Analysis of the brain scans showed that high empathizers experienced more activity in the dorsal striatum, part of the brain’s reward system, when listening to familiar music, whether they liked the music or not.
The reward system is related to pleasure and other positive emotions. Malfunction of the area can lead to addictive behaviors.
Empathic people process music with involvement of social cognitive circuitry
In addition, the brain scans of higher-empathy people in the study showed greater activation in medial and lateral areas of the prefrontal cortex, which are responsible for processing the social world, and in the temporoparietal junction, which is critical to analyzing and understanding others’ behaviors and intentions.
Typically, those areas of the brain are activated when people are interacting with, or thinking about, other people. Observing their correlation with empathy during music listening might indicate that music to these listeners functions as a proxy for a human encounter.
Beyond analysis of the brain scans, the researchers also looked at purely behavioral data — answers to a survey asking the listeners to rate the music afterward.
Those data also indicated that higher empathy people were more passionate in their musical likes and dislikes, such as showing a stronger preference for unfamiliar music.
Precise neurophysiological relationship between empathy and music is largely unexplored
A large body of research has focused on the cognitive neuroscience of empathy — how we understand and experience the thoughts and emotions of other people. Studies point to a number of areas of the prefrontal, insular, and cingulate cortices as being relevant to what brain scientists refer to as social cognition.
Studies have shown that activation of the social circuitry in the brain varies from individual to individual. People with more empathic personalities show increased activity in those areas when performing socially relevant tasks, including watching a needle penetrating skin, listening to non-verbal vocal sounds, observing emotional facial expressions, or seeing a loved one in pain.
In the field of music psychology, a number of recent studies have suggested that empathy is related to intensity of emotional responses to music, listening style, and musical preferences — for example, empathic people are more likely to enjoy sad music.
“This study contributes to a growing body of evidence,” Wallmark said, “that music processing may piggyback upon cognitive mechanisms that originally evolved to facilitate social interaction.” — Margaret Allen, SMU
Did you have a favorite bedtime story as a child? I loved the fairy tale Snow White. My mother, on the other hand, was not so thrilled about reading it to me every night. Not only because of the boring repetition, but also because every time she got to the part where the witch enticed Snow White to bite into the poisonous apple that caused her to fall into a deep sleep, I would start crying. Every time. Even though I knew the happily-ever-after ending that was coming. I was that engrossed in the emotion of the story. She had to stop reading and convince me that it was all ok, and that it would all work out in the end. But I was unconvinced. Happy ending on its way or not, there was pain and loss along the journey. That was what the author was trying to show and that was exactly what I was experiencing.
Isn’t that what writers aspire to — to connect readers with the feelings of their characters? Isn’t that part of the mystery and the magic that makes a good story?
I can’t remember exactly when I transitioned from having bedtime stories read to me to reading them myself under the covers with a flashlight (long after “lights out”). But I do know that reading (and writing) has remained an important part of my life.
Philip Pullman, a British writer of children’s books, science fiction and fantasy, once described the importance of reading and writing by noting, “After nourishment, shelter and companionship, stories are the thing we need most in the world.”
Psych Pstuff’s Summary
Friedrich Nietzsche and Wilhelm Fliess first explored the psychological meaning of the concept of sublimation as a diversion of aggressive tendencies and impulses into socially acceptable outlets. Later, Freud incorporated this into his psychoanalytic theories. He posited that creative endeavors represent examples of the ego defense mechanism of sublimation, the only defense mechanism that he considered functional and healthy for the individual psyche as well as society at large. Sublimation, he wrote, “is what makes it possible for higher psychical activities, scientific, artistic or ideological, to play such an important part in civilized life.”
The more mystical psychologist Carl Jung, who considered creativity an aspect of psychic transformation of the highest order, believed, “Sublimation is not a voluntary and forcible channeling of instinct into a spurious field of application. It is a great mystery.” Whereas Freud considered creative endeavors to be intentional and directed, Jung incorporated the mystique of the muse and the sublime of the sacred, akin to what modern guru Mihaly Csikszentmihalyi calls “flow.”
From a psychological standpoint, stories and the process of storytelling have merit and purpose both for the reader and the writer. Creative activities in various media, from journaling to artwork to dance, have often been incorporated into therapeutic settings with both children and adults. Therapists will encourage clients to artistically express their emotions, especially when the traditional “talking cure” seems blocked or plateaued. The basic premise is that self-awareness and understanding will result both from the creative process and from the interpretation of the product.
However, as Jung intimated, there is still a bit of mystery involved in the healing power of creativity. He believed, “The neurotic is ill not because he has lost his old faith but because he has not yet found a new form for his finest aspirations.”
For readers, curling up with a good book has long been an acceptable form of escapism, a way to detach from the banalities of their real world for a time and vicariously indulge in someone else’s reality (or fantasy).
Bibliotherapy represents a focused treatment plan whereby a specific text, fiction or non-fiction, is assigned as reading to address a particular problem or facilitate insight. Lest you think this is a new age idea, the ancient Greeks inscribed ψυχῆς ἰατρεῖον — House of Healing for the Soul — above the entrance to what is believed to be the world’s oldest library.
Poetry has traditionally been a medium to express the angst and sorrow of both the poet, and vicariously, the reader. It has the enigma of a higher level of emotion. And song lyrics, especially those that tell a story, seem to speak to us with an uncanny level of understanding — as Elton John crooned, “Sad songs say so much.” Somehow, connecting with the angst of others, even nameless, faceless or “made-up” others, helps us manage our own angst.
Creative expression, it seems, in all its various forms, is good for the body, the mind and the soul, and can serve many purposes both for the creator and those who enjoy the creation.
Perhaps the Eagles summed it up most succinctly in the line from their song Hotel California, “Some dance to remember. Some dance to forget.”
Carmela enters … “Just a small town girl, livin’ in a lonely world, she took the midnight train goin’ anywhere.”
Cut to Tony … “Just a city boy, born and raised in South Detroit, he took the midnight train goin’ anywhere.”
Suddenly, Journey’s Don’t Stop Believin’ is abruptly silenced and the screen cuts to black …
When the highly anticipated finale of the HBO series The Sopranos aired, fans reeled. Not because their beloved Tony was killed, but because, well, they weren’t sure whether he was or not.
It was a cliffhanger. But cliffhangers are not supposed to happen in the finale of a long-running series.
The debate raged on blogs, on talk-shows, on media pages and certainly over cocktails. So much so that David Chase, creator of the series and director of the last episode, was called on to explain himself and settle the deliberation once and for all. And he did … but not really … saying things like “Whether Tony Soprano is alive or dead is not the point. To continue to search for this answer is fruitless. The final scene of The Sopranos raises a spiritual question that has no right or wrong answer,” and, “Life is short. Either it ends here for Tony or some other time. But in spite of that, it’s really worth it. So don’t stop believing.” Even when asked directly if Tony was shot in the last scene, he replied, “I’m not saying anything. I’m not trying to be coy. It’s just that I think that to explain it would diminish it.”
Chase has never answered the one burning question of that final scene — not in the interview the morning after the finale aired and not in any of his numerous public comments since the show ended over eight years ago. He has explained, in great detail, the symbolism he used and how he employed various elements to subtly create tension in the last scene. But what he won’t say, no matter how it is asked or how much we need to know, is what happened at the end.
But that doesn’t mean the fans have let it go. On the eighth anniversary of The Sopranos finale, one blogger posted a new and updated Sopranos: Definitive Explanation of the Final Scene Annotated Guide in which every shot of the final scene is analyzed in detail and references are made to prophetic quotes from previous seasons. Comments continue to be posted on that site, reflecting on his observations and continuing the debate.
But why, eight years later, are we still asking a question we have been told repeatedly will never be definitively answered — about a fictional character, anyway?
In this, as in many of our human endeavors, we have an undeniable desire to “close the loop,” to tie up the package with a pretty bow, or at least a string with a tight knot, and put it on the shelf, accessible if we need it but out of the way of our daily endeavors. In short, in the stories we tell — from our entertainment to our real-life relationships to justice for wrongdoing — we want unfinished business finished.
Closure, or more accurately the lack thereof, is often blamed for our inability to move on and is thus sought as the Holy Grail that will set our minds at ease. Even heal us.
But does closure really bring the relief we seek? Does it really do all it promises to? Can justice heal the wounds of loss? Can just knowing make the bad somehow more ok?
In a word: sometimes. It depends. On what? On whether or not we have done the emotional work to accompany it.
Seeking closure can become an intellectual pursuit, a distraction, a physical reality that tricks the mind and heart into thinking we are actively addressing a problem, pain, the cruel randomness and injustice of the human condition, when all we are really doing is, in a sense, wallowing.
Is the closure of a diagnosis really better? Most people say so, even if it is bad news. And yet closure doesn’t necessarily relieve the symptoms; it simply changes our perception of them. Closure means the mind can relax — oh, it’s that. OK, now I know what I am dealing with. Now I can move on.
From the whimsical to the serious, the need to know and know with finality, is so strong it will drive us to seek the unanswerable and run off tilting at windmills.
Psych Pstuff’s Summary
Psychologically speaking, what we call closure is actually referred to as the need for cognitive closure (NFCC). It is generally defined as both the desire for definitive answers and the corresponding aversion to ambiguity. For psychologists it is, like so many other traits, considered a defining and relatively stable aspect of character. In short, you either crave it or you don’t, and if you do, you really, really crave it.
Also like many things in psychology, researchers have struggled to quantify the need for closure — in psych speak, to operationalize it — so they can compare apples with apples. The Need for Closure Scale (NFCS) was developed by researchers Arie Kruglanski, Donna Webster, and Adena Klem in 1993 as a standard way to measure the concept and compare individuals along this trait. The NFCS is a forty-seven-item test that measures five separate motivational facets that comprise our underlying affinity for clarity and resolution: a preference for (1) order, (2) predictability, and (3) decisiveness, along with a corresponding (4) discomfort with ambiguity and (5) closed-mindedness. Taken together, these elements indicate one’s level of need for closure. You can take an online version of this test at terpconnect.umd.edu.
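For the curious, the mechanics of a multi-facet questionnaire like this are simple to sketch: each item is rated on a fixed scale, some items are reverse-keyed, and item ratings are summed into facet subscores and a total. The snippet below is a hypothetical illustration only — the item-to-facet mapping, the six-item mini-version, and the function name `score_nfcs` are invented for this sketch and are not the published NFCS scoring key.

```python
# Hypothetical sketch of multi-facet questionnaire scoring (NFCS-style).
# Items are rated 1-6; reverse-keyed items are flipped; facet subscores
# are summed into a total. The mapping below is illustrative, not the
# real scale key (the actual NFCS has 47 items).

def score_nfcs(responses, facet_items, reverse_items, scale_max=6):
    """Return per-facet sums and a total from a dict of item -> rating."""
    facet_scores = {}
    for facet, items in facet_items.items():
        subtotal = 0
        for item in items:
            rating = responses[item]
            if item in reverse_items:          # flip reverse-keyed items
                rating = (scale_max + 1) - rating
            subtotal += rating
        facet_scores[facet] = subtotal
    facet_scores["total"] = sum(facet_scores[f] for f in facet_items)
    return facet_scores

# Illustrative six-item mini-version, one or two items per facet:
facet_items = {
    "order": [1, 2],
    "predictability": [3],
    "decisiveness": [4],
    "ambiguity_discomfort": [5],
    "closed_mindedness": [6],
}
reverse_items = {4}                            # say item 4 is reverse-keyed
responses = {1: 5, 2: 6, 3: 4, 4: 2, 5: 5, 6: 3}

print(score_nfcs(responses, facet_items, reverse_items))
```

A high total would suggest a strong craving for resolution; a low one, comfort with loose ends. Real instruments also validate responses and norm scores against a population, which this sketch omits.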
The problem with an unbridled pursuit of closure is that it tends to be paradoxical and feeds into our general fear of the unknown. According to Kruglanski, the need for closure exerts its effects via two general tendencies — the urgency tendency (the inclination to attain closure as quickly as possible) and the permanence tendency (the tendency to maintain it for as long as possible). Together, these tendencies may cause us to embrace a solution or make a judgment without considering all the possibilities. In short, needing an answer too desperately can cause us to accept any answer as soon as it comes along, simply to resolve the anxiety. This can block the way to finding a better alternative.
In popular psychology today the term most often refers to a proposed goal state in the process of overcoming grief or responding to tragedy. Its lure is certainly understandable. Faced with loss there is a natural tendency to desire a resolution to all things disrupted when one’s world is turned upside down. It may be comforting to imagine there is something concrete to be done that will set things somehow right again and help us to move on to a new normal.
However, for many, this fantasized state of resolution is elusive and the very thing we think will bring peace of mind and clarity is, in fact, an empty promise. Counting too much on the achievement of an external milestone to bring comfort and balance after a loss without engaging in the required internal grief work only leaves one feeling empty and still full of unresolved emotions. Certain overt actions can be symbolic and hold the power of ritual, but they are only as effective as a culmination of a larger process of healing and insight.
Some therapists maintain that true closure is a myth and impossible to achieve. They argue that instead of trying to find closure, which may never be possible, it is more psychologically healthy to pursue meaning, even if there is no final “end” or resolution.
Hmm … that sounds a bit like what David Chase said in response to the “whatever happened to Tony” questions.
While it might be perfectly natural and part of our psychological makeup to desire resolution, learning how to be comfortable with not having all the answers can lead to deeper personal growth. Learning how to tolerate ambiguity — in fiction and in reality — strengthens one’s ability to tolerate the anxiety and uncertainty that is an inevitable part of the human condition.