End of the world: MIT prediction from 1973 is proving true

An MIT model predicted when and how human civilization would end. Hint: it’s soon.

Credit: ABC.

Source: End of the world: MIT prediction from 1973 is proving true

by PAUL RATNER

In 1973, a computer program was developed at MIT to model global sustainability. Instead, it predicted that by 2040 our civilization would end. While many in history have made apocalyptic predictions that have so far failed to materialize, what the computer envisioned in the 1970s has by and large been coming true. Could the machine be right?

Why the program was created

The prediction, which recently reappeared in Australian media, was made by a program dubbed World One. It was originally created by the computer pioneer Jay Forrester, who was commissioned by the Club of Rome to model how well the world could sustain its growth. The Club of Rome is an organization of thinkers, former heads of state, scientists, and UN bureaucrats with the mission to “promote understanding of the global challenges facing humanity and to propose solutions through scientific analysis, communication, and advocacy.”

The predictions

What World One showed was that by 2040 there would be a global collapse if the expansion of population and industry were to continue at current levels.

As reported by the Australian broadcaster ABC, the model’s calculations took into account trends in pollution levels, population growth, the amount of natural resources and the overall quality of life on Earth. The model’s predictions for the worsening quality of life and the dwindling natural resources have so far been unnervingly on target.

In fact, 2020 is the first milestone envisioned by World One. That’s when the quality of life is supposed to drop dramatically. The broadcaster presented the scenario that leads to the demise of large numbers of people:

At around 2020, the condition of the planet becomes highly critical. If we do nothing about it, the quality of life goes down to zero. Pollution becomes so serious it will start to kill people, which in turn will cause the population to diminish, lower than it was in 1900. At this stage, around 2040 to 2050, civilised life as we know it on this planet will cease to exist.

Alexander King, the then-leader of the Club of Rome, interpreted the program’s results to also mean that nation-states would lose their sovereignty, forecasting a New World Order with corporations managing everything.

“Sovereignty of nations is no longer absolute,” King told ABC. “There is a gradual diminishing of sovereignty, little bit by little bit. Even in the big nations, this will happen.”

How did the program work?

World One, the computer program, looked at the world as one system. The report called it “an electronic guided tour of our behavior since 1900 and where that behavior will lead us.” The program produced graphs that showed what would happen to the planet decades into the future. It plotted statistics and forecasts for such variables as population, quality of life, the supply of natural resources, pollution, and more. Following the trend lines, one could see where the crises might take place.
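For a sense of how such a one-system model operates, here is a minimal, purely illustrative Python sketch. It does not reproduce Forrester’s actual equations or parameters; the variables, coefficients, and feedback loops below are invented solely to show how a system-dynamics simulation steps coupled quantities such as population, resources, pollution, and quality of life forward in time from 1900.

```python
# Illustrative sketch only: the update rules and numbers below are toy
# stand-ins, NOT the real World One model.

def simulate(start_year=1900, end_year=2060, dt=1):
    population = 1.6   # billions, rough 1900 figure
    resources = 1.0    # remaining fraction of an initial resource stock
    pollution = 0.05   # arbitrary index
    history = []

    year = start_year
    while year <= end_year:
        quality_of_life = max(resources - pollution, 0.0)  # toy proxy
        history.append((year, population, resources, pollution, quality_of_life))

        # Toy feedback loops: growth draws down resources and emits pollution,
        # and pollution in turn suppresses population growth.
        growth = 0.012 * population * resources - 0.02 * pollution * population
        population = max(population + growth * dt, 0.0)
        resources = max(resources - 0.004 * population * dt, 0.0)
        pollution = max(pollution + 0.003 * population * dt - 0.01 * pollution * dt, 0.0)
        year += dt

    return history

for year, pop, res, pol, qol in simulate()[::20]:
    print(f"{year}: population={pop:.2f}B resources={res:.2f} "
          f"pollution={pol:.2f} quality_of_life={qol:.2f}")
```

Plotting those columns against the year gives the kind of trend lines the report describes; the real model’s behavior, of course, depends entirely on its empirically calibrated equations and data.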

Can we stave off disaster?

As one measure to prevent catastrophe, the Club of Rome predicted that some nations, like the U.S., would have to cut back on their appetites for gobbling up the world’s resources. It hoped that in the future world, prestige would stem from “low consumption”—a shift that has so far not materialized. Currently, nine in ten people around the world breathe air with high levels of pollution, according to data from the World Health Organization (WHO). The agency estimates that 7 million deaths each year can be attributed to pollution.


Control of Screen Time Should Begin by Age 2

 

Source: Control of Screen Time Should Begin by Age 2

By Rick Nauert PhD

A Canadian study suggests that watching too much television can contribute to poor eating habits in adolescence and suboptimal school performance. While the concept is not new, the study suggests that screen time should be controlled beginning as early as age two, confirming new recommendations from the American Academy of Pediatrics.

Researchers at Université de Montréal’s School of Psychoeducation performed a longitudinal study of a birth cohort of nearly 2,000 Quebec boys and girls born between spring 1997 and 1998. The children have been followed since they were five months old as part of the Quebec Longitudinal Study of Child Development.

When they reached two years of age, their parents reported on their daily television habits. Then, at age 13, the youths themselves reported on their dietary habits and behavior in school.

The research appears in the journal Preventive Medicine.

“Not much is known about how excessive screen exposure in early childhood relates to lifestyle choices in adolescence,” explains Professor Linda Pagani, who supervised the research of graduate student Isabelle Simonato.

“This birth cohort is ideal, because the children were born before smartphones and tablets, and before any pediatric viewing guidelines were publicized for parents to follow. They were raising their children with TV and seeing it as harmless. This makes our study very naturalistic, with no outside guidelines or interference — a huge advantage.”

Simonato added, “Watching TV is mentally and physically sedentary behavior because it does not require sustained effort. We hypothesized that when toddlers watch too much TV it encourages them to be sedentary, and if they learn to prefer effortless leisure activities at a very young age, they likely won’t think much of non-leisure ones, like school, when they’re older.”

In their study, the researchers found that toddlers’ TV viewing forecast bad eating habits down the road: every additional hour of viewing at age two predicted an eight percent increase in unhealthy eating habits at age 13.

In questionnaires, those early-TV adolescents reported consuming more French fries, prepared meats and cold cuts, white bread, regular and diet soft drinks, fruit-flavored drinks, sports drinks, energy drinks, salty or sweet snacks, and desserts.

Early TV viewing also translated into a 10 percent drop in eating breakfast on school days and into more overall screen time at age 13.

Every additional hour of watching TV also predicted a higher body mass index (a 10 percent increase) and less effortful behavior at school in the first year of secondary school, ultimately affecting performance and ambition.

“This study tells us that overindulgent lifestyle habits begin in early childhood and seem to persist throughout the life course,” Pagani noted. “An effortless existence creates health risks. For our society that means a bigger health care burden associated with obesity and lack of cardiovascular fitness.”

The researchers also measured their results against revised screen-time guidelines from the American Academy of Pediatrics, which reduced the recommended daily viewing from two hours to one hour for children between the ages of two and five.

Compared to children who viewed less than one hour a day at age two, those who viewed between one and four hours a day later reported (at age 13) having less healthy dietary habits, skipping breakfast on weekdays, having a higher BMI, engaging in more intense screen time, and being less engaged as students.

“Because we had a lot of information on each child and family, we were able to eliminate other psychological and socio-demographic factors that could have explained the results, which is a really ideal situation,” said Simonato.

“We even removed any influence of screen time habits at age 13 to really isolate long-term associations with toddler viewing.”

Source: University of Montreal/EurekAlert

Dr. Rick Nauert has over 25 years’ experience in clinical, administrative, and academic healthcare. He is currently an associate professor in Rocky Mountain University of Health Professions’ doctoral program in health promotion and wellness. Dr. Nauert began his career as a clinical physical therapist and served as a regional manager for a publicly traded multidisciplinary rehabilitation agency for 12 years. He has master’s degrees in health-fitness management and healthcare administration and a doctoral degree from The University of Texas at Austin focused on health care informatics, health administration, health education, and health policy. His research efforts included the area of telehealth with a specialty in disease management.

Don’t Become an Information Junkie: A Balance Between Learning And Taking Action

One big trap in self improvement is becoming an “information junkie.” This is when we spend more time learning new information than putting it into action.

Source: Don’t Become an Information Junkie: A Balance Between Learning And Taking Action

by Steven Handel

An “information junkie” is someone who spends a lot of time reading books, watching videos, and listening to podcasts about self improvement, but they spend very little time actually putting what they learn into practice.

This is a very common problem for many people. We stuff our brains with loads of information, but then we find ourselves not knowing what to do with all of it. This is especially true in our current “information age,” where we are constantly consuming stuff on the internet and social media.

Of course, it’s a very positive thing to want to learn as much as possible and to do your own research into various topics. Overall, reading books, watching videos, and listening to podcasts are healthy and beneficial things to do. Even the occasional surfing on Google and Wikipedia can be fun and informative.

But there comes a point where you have to ask: if you’re NOT able to apply this information to your everyday life, how useful is it really?

Endlessly seeking new information can ultimately become a distraction. We feel we’re not ready to make a change yet, so we think “Well, I should really read more articles or books before I decide what the best course of action is!”

But this can often become an impossible and never-ending task.

You’ll never know everything about a topic. Oftentimes, being successful with your goals means learning how to “take action” even when you realize you don’t have perfect knowledge and perfect information.

And even more importantly, much of what we learn throughout our lives comes not just from books and videos, but through personal experience.

By focusing on information and not action, you’re actually limiting your education and self-growth by ignoring the importance of getting hands-on experience and real world knowledge.

It’s like reading books about how to play baseball without ever picking up a baseball and throwing it, or watching videos of people riding a bike without ever getting on a bike yourself. How good can you really get without any experience?

Have you fallen into the trap of becoming an “information junkie?” Do you spend too much time “learning” and not enough time “doing?”

Here’s advice on how to break out of this habit.

The “Consumer” vs. “Producer” Mindset

One important shift in your attitude is to go from a “consumer mindset” to a “producer mindset.”

The “information junkie” typically views themselves as a consumer. They feel they need to find the right book, the right video, or the right podcast that finally reveals to them some important piece of information that they’ve been waiting for.

Ultimately, they are searching for something outside of themselves before they can move forward, and not simply looking inside and doing the best with what they have.

Unlike the “consumer,” the “producer” is someone who takes action with the knowledge they have and creates something of value that they can share with the world.

One important question to ask yourself is: “What am I creating on a daily basis? How am I adding value to the world and not just subtracting from it?”

This is a great question for everyone – not just people who are actively seeking self improvement.

In many ways, our culture has turned us all into crazed consumers. We’re constantly searching for the next movie to watch, the next video game to buy, the next fashion trend to jump on, etc. And this is where we draw a lot of our “happiness” from.

But we must also learn how to think of ourselves as “producers,” and not just “consumers.” And often this shift in your mindset can be far more fulfilling.

One important shift for me was making it a personal mission to create something new every day. Even if it was just working on a new article or new video, I wanted to at least have something that I could show people and say, “I created this!”

The best part is: When you shift into a “producer mindset,” it does wonders for your confidence and self-esteem.

You stop seeing yourself as just a mindless consumer that depends on others. Instead, you become someone who is actually adding to the world and creating stuff – and that gives you an important sense of accomplishment that every human being craves.

There’s no better feeling than being able to point at something in the real world and say “I did this.” It shows you are participating in life and making a difference, however small it may seem to others.

To avoid becoming an “information junkie,” ask yourself, “What am I doing on a daily basis that brings me closer to my goals?” Take a second and write down the small steps you can begin taking within the next 24-48 hours.

Another important rule of thumb: for every book, article, video, or podcast you consume, try to identify at least one action you can take based on the information you’ve learned.

Always remember: learning isn’t enough. We must put our knowledge into action, or whatever we learn will be meaningless.

 

Steven Handel is a self improvement author, blogger, speaker, and coach. He first started The Emotion Machine in June 2009 and has since published over 800 articles covering a wide range of topics including Positive Psychology, Cognitive-Behavioral Therapy, Social Psychology, Mindfulness Meditation, Emotional Intelligence, and much more!

The More Miserable You Are, the Happier Your Social Media Posts, and This Twitter Thread Proves It

A huge online discussion shows why you should never be envious of other people’s glamorous online lives.

By Jessica Stillman

Of all the ways social media can be bad for you, one of the worst, according to science, is the ability of Facebook and the like to induce envy. You see your friends posting smiling selfies at exotic destinations and humblebragging about their professional and personal accomplishments, and you end up thinking your own life doesn’t measure up.

Of course, intellectually we all know that our real life selves and our highly curated online selves differ hugely, but it’s still easy to fall into the trap of letting other people’s perfect social-media profiles convince you that you’re somehow falling short. An emotional and revealing new Twitter thread should explode that worry for good.

The grass really, really isn’t greener.

The deeply revealing discussion was kicked off by a tweet from Tracy Clayton, host of the BuzzFeed podcast Another Round. (Hat tip Quartz.)

Apparently, she hit a nerve, as responses poured in. People shared a torrent of posts about the reality behind seemingly cheerful vacation snaps, glamorous selfies, smiling family portraits, and sports triumphs. Happy-looking couples confessed to fighting moments before the photo, while others bravely told of the mental health issues they were hiding in their smiling posts.

These posts obviously testify to the courage of those who shared them. They also speak volumes about our yearning for genuine human connection and authenticity, even at the cost of potential embarrassment. But on a less personal level, the sheer scale of the response to Clayton’s tweet is a useful reminder that what you see on social media bears basically no resemblance to people’s actual lives.

Remember that next time you’re feeling bad after comparing yourself to something you’ve seen online. Or even let this torrent of truth motivate you to consider scaling back your social media for good. Science suggests you’ll be happier for ditching a habit proven to induce envy, disconnection, and loneliness.

Have you ever posted a happy pic online to mask your real-life suffering?

 

Jessica Stillman is a freelance writer based in Cyprus with interests in unconventional career paths, generational differences, and the future of work. She has blogged for CBS MoneyWatch, GigaOM, and Brazen Careerist.

 

A New Study Has Found a Way to Stop People From Believing in Conspiracy Theories

Mockery feels good but it just makes conspiracy theorists dig in their heels. Try this research-backed idea instead.

Source: A New Study Has Found a Way to Stop People From Believing in Conspiracy Theories

By Jessica Stillman

Apple, YouTube, and Facebook have pulled the plug on Infowars’ Alex Jones for peddling loathsome lies such as the idea that the Sandy Hook massacre was an elaborate hoax. Twitter has failed to follow suit, stirring up heated debate about the proper role of media and tech platforms in reining in hateful speech and disinformation.

But while that’s certainly a debate worth having, it’s also worth asking: Does banning those who peddle lies actually reduce the number of people who believe them? Are there other ways to fight back against conspiracy theories and baseless rumors?

Who believes in Pizzagate anyway?

To start answering that question it’s important to understand exactly what sort of person believes the moon landing was faked.

Belief in conspiracy theories is more common than you might think. One study found roughly half of Americans believe at least one (and hey, a few past “conspiracy theories” actually proved true). This popularity is supported by biases hard-wired into us all, psychologists say, such as our tendency to look for information that confirms our beliefs and disregard information that challenges them, or the desire to find big causes for big events.

That means conspiracy theories will probably always be with us to some extent, but there are also demographic and psychological factors that make it more likely people will believe in them, including:

  • Being less educated. This one hardly needs much explaining.

  • A desire to feel special. Those who want to stand out from the crowd (aka those with narcissistic tendencies) can adopt extreme beliefs in order to do so.

  • Feelings of powerlessness. An explanation for events beyond a person’s control — no matter how ludicrous those explanations sound to others — can still be psychologically preferable to being the victim of blind chance or happenstance.

  • A need for certainty. “Seeking explanations for events is a natural human desire,” explains psychology professor David Ludden. “And we don’t just ask questions. We also quickly find answers to those questions–not necessarily the true answers, but rather answers that comfort us or that fit into our worldview.”

Management professors vs. tinfoil hat peddlers

Knowing this, what sorts of interventions actually seem to persuade people to see the light and give up on conspiracy theories? As tempting as it can feel to non-believers, mocking conspiracy theorists usually just makes them dig in their heels. And it’s an open question whether taking away the microphones of their leaders will make any real dent.

But when Kellogg School management professor Cynthia Wang and colleagues recently went searching for a way to reduce belief in conspiracy theories they found one promising technique. You can’t quickly make someone more educated or less narcissistic to inoculate them against lies, but you can encourage them to take concrete action in pursuit of their goals. That simple step, which reduces feelings of powerlessness and reinforces the link between cause and effect, seems to move the needle.

Simply by prompting study participants to write about their aspirations the researchers were able to nudge people away from coming to wild-eyed conclusions when asked to evaluate fictional scenarios that might be viewed as conspiracies (for instance, a bank filing for bankruptcy). Subjects were also less likely to endorse existing conspiracy theories after focusing on how to improve their futures.

“You can actually shift someone’s mindset so they see fewer conspiracies,” Wang concluded from the findings.

More control equals fewer conspiracy theories (at work too)

The key to doing that is giving people a sense of control over their lives, even in small ways. “Wang and her co-authors suggest that government organizations such as the Centers for Disease Control can increase public trust by promoting messages that emphasize the ways individuals have control over their health outcomes,” notes the Kellogg Insight write-up of the research.

Whether any intervention along these lines is enough to stop a truly malignant character like Alex Jones is doubtful, though it is handy to know that in order to stop lies like his from spreading you need to build people up rather than tear them down. Broad public applications of this truth remain an open (but interesting) question. Managers can put them to use today, however.

Want less speculating around the office about backroom deals or arbitrary promotions? Science suggests that your best bet is to talk to your people often about their goals and help them understand the steps to take to get there. If people see real, controllable paths to power and self-betterment, they’re far less likely to think a tinfoil hat or a snake oil merchant is the answer.

Jessica Stillman is a freelance writer based in Cyprus with interests in unconventional career paths, generational differences, and the future of work. She has blogged for CBS MoneyWatch, GigaOM, and Brazen Careerist.

The Mystery of People Who Speak Dozens of Languages

What can hyperpolyglots teach the rest of us?

One researcher of language acquisition describes her basic question as “How do I get a thought from my mind into yours?”

Illustration by Oliver Munday; source photograph from Universal History Archive / Getty (face)

Source: The Mystery of People Who Speak Dozens of Languages

Last May, Luis Miguel Rojas-Berscia, a doctoral candidate at the Max Planck Institute for Psycholinguistics, in the Dutch city of Nijmegen, flew to Malta for a week to learn Maltese. He had a hefty grammar book in his backpack, but he didn’t plan to open it unless he had to. “We’ll do this as I would in the Amazon,” he told me, referring to his fieldwork as a linguist. Our plan was for me to observe how he went about learning a new language, starting with “hello” and “thank you.”

Rojas-Berscia is a twenty-seven-year-old Peruvian with a baby face and spiky dark hair. A friend had given him a new pair of earrings, which he wore on Malta with funky tank tops and a chain necklace. He looked like any other laid-back young tourist, except for the intense focus—all senses cocked—with which he takes in a new environment. Linguistics is a formidably cerebral discipline. At a conference in Nijmegen that had preceded our trip to Malta, there were papers on “the anatomical similarities in the phonatory apparati of humans and harbor seals” and “hippocampal-dependent declarative memory,” along with a neuropsychological analysis of speech and sound processing in the brains of beatboxers. Rojas-Berscia’s Ph.D. research, with the Shawi people of the Peruvian rain forest, doesn’t involve fMRI data or computer modelling, but it is still arcane to a layperson. “I’m developing a theory of language change called the Flux Approach,” he explained one evening, at a country inn outside the city, over the delicious pannenkoeken (pancakes) that are a local specialty. “A flux is a dynamism that involves a social fact and an impact, either functionally or formally, in linguistic competence.”

Linguistic competence, as it happens, was the subject of my own interest in Rojas-Berscia. He is a hyperpolyglot, with a command of twenty-two living languages (Spanish, Italian, Piedmontese, English, Mandarin, French, Esperanto, Portuguese, Romanian, Quechua, Shawi, Aymara, German, Dutch, Catalan, Russian, Hakka Chinese, Japanese, Korean, Guarani, Farsi, and Serbian), thirteen of which he speaks fluently. He also knows six classical or endangered languages: Latin, Ancient Greek, Biblical Hebrew, Shiwilu, Muniche, and Selk’nam, an indigenous tongue of Tierra del Fuego, which was the subject of his master’s thesis. We first made contact three years ago, when I was writing about a Chilean youth who called himself the last surviving speaker of Selk’nam. How could such a claim be verified? Pretty much only, it turned out, by Rojas-Berscia.

Superlative feats have always thrilled average mortals, in part, perhaps, because they register as a victory for Team Homo Sapiens: they redefine the humanly possible. If the ultra-marathoner Dean Karnazes can run three hundred and fifty miles without sleep, he may inspire you to jog around the block. If Rojas-Berscia can speak twenty-two languages, perhaps you can crank up your high-school Spanish or bat-mitzvah Hebrew, or learn enough of your grandma’s Korean to understand her stories. Such is the promise of online language-learning programs like Pimsleur, Babbel, Rosetta Stone, and Duolingo: in the brain of every monolingual, there’s a dormant polyglot—a genie—who, with some brisk mental friction, can be woken up. I tested that presumption at the start of my research, signing up on Duolingo to learn Vietnamese. (The app is free, and I was curious about the challenges of a tonal language.) It turns out that I’m good at hello—chào—but thank you, cảm ơn, is harder.

The word “hyperpolyglot” was coined two decades ago, by a British linguist, Richard Hudson, who was launching an Internet search for the world’s greatest language learner. But the phenomenon and its mystique are ancient. In Acts 2 of the New Testament, Christ’s disciples receive the Holy Spirit and can suddenly “speak in tongues” (glōssais lalein, in Greek), preaching in the languages of “every nation under heaven.” According to Pliny the Elder, the Greco-Persian king Mithridates VI, who ruled twenty-two nations in the first century B.C., “administered their laws in as many languages, and could harangue in each of them.” Plutarch claimed that Cleopatra “very seldom had need of an interpreter,” and was the only monarch of her Greek dynasty fluent in Egyptian. Elizabeth I also allegedly mastered the tongues of her realm—Welsh, Cornish, Scottish, and Irish, plus six others.

With a mere ten languages, Shakespeare’s Queen does not qualify as a hyperpolyglot; the accepted threshold is eleven. The prowess of Giuseppe Mezzofanti (1774-1849) is more astounding and better documented. Mezzofanti, an Italian cardinal, was fluent in at least thirty languages and studied another forty-two, including, he claimed, Algonquin. In the decades that he lived in Rome, as the chief custodian of the Vatican Library, notables from around the world dropped by to interrogate him in their mother tongues, and he flitted as nimbly among them as a bee in a rose garden. Lord Byron, who is said to have spoken Greek, French, Italian, German, Latin, and some Armenian, in addition to his immortal English, lost a cursing contest with the Cardinal and afterward, with admiration, called him a “monster.” Other witnesses were less enchanted, comparing him to a parrot. But his gifts were certified by an Irish scholar and a British philologist, Charles William Russell and Thomas Watts, who set a standard for fluency that is still useful in vetting the claims of modern Mezzofantis: Can they speak with an unstilted freedom that transcends rote mimicry?

Mezzofanti, the son of a carpenter, picked up Latin by standing outside a seminary, listening to the boys recite their conjugations. Rojas-Berscia, by contrast, grew up in an educated trilingual household. His father is a Peruvian businessman, and the family lives comfortably in Lima. His mother is a shop manager of Italian origin, and his maternal grandmother, who cared for him as a boy, taught him Piedmontese. He learned English in preschool and speaks it impeccably, with the same slight Latin inflection—a trill of otherness, rather than an accent—that he has in every language I can vouch for. Maltese had been on his wish list for a while, along with Uighur and Sanskrit. “What happens is this,” he said, over dinner at a Chinese restaurant in Nijmegen, where he was chatting in Mandarin with the owner and in Dutch with a server, while alternating between French and Spanish with a fellow-student at the institute. “I’m an amoureux de langues. And, when I fall in love with a language, I have to learn it. There’s no practical motive—it’s a form of play.” An amoureux, one might note, covets his beloved, body and soul.

My own modest competence in foreign languages (I speak three) is nothing to boast of in most parts of the world, where multilingualism is the norm. People who live at a crossroads of cultures—Melanesians, South Asians, Latin-Americans, Central Europeans, sub-Saharan Africans, plus millions of others, including the Maltese and the Shawi—acquire languages without considering it a noteworthy achievement. Leaving New York, on the way to the Netherlands, I overheard a Ghanaian taxi-driver chatting on his cell phone in a tonal language that I didn’t recognize. “It’s Hausa,” he told me. “I speak it with my father, whose family comes from Nigeria. But I speak Twi with my mom, Ga with my friends, some Ewe, and English is our lingua franca. If people in Chelsea spoke one thing and people in SoHo another, New Yorkers would be multilingual, too.”

Linguistically speaking, that taxi-driver is a more typical citizen of the globe than the average American is. Consider Adul Sam-on, one of the teen-age soccer players rescued last July from the cave in Mae Sai, Thailand. Adul grew up in dire poverty on the porous Thai border with Myanmar and Laos, where diverse populations intersect. His family belongs to an ethnic minority, the Wa, who speak an Austroasiatic language that is also widespread in parts of China. In addition to Wa, according to the Times, Adul is “proficient” in Thai, Burmese, Mandarin, and English—which enabled him to interpret for the two British divers who discovered the trapped team.

Nearly two billion people study English as a foreign language—about four times the number of native speakers. And apps like Google Translate make it possible to communicate, almost anywhere, by typing conversations into a smartphone (presuming your interlocutor can read). Ironically, however, as the hegemony of English decreases the need to speak other languages for work or for travel, the cachet attached to acquiring them seems to be growing. There is a thriving online community of ardent linguaphiles who are, or who aspire to become, polyglots; for inspiration, they look to Facebook groups, YouTube videos, chat rooms, and language gurus like Richard Simcott, a charismatic British hyperpolyglot who orchestrates the annual Polyglot Conference. This gathering has been held, on various continents, since 2009, and it attracts hundreds of aficionados. The talks are mostly in English, though participants wear nametags listing the languages they’re prepared to converse in. Simcott’s winkingly says “Try Me.”

No one becomes a hyperpolyglot by osmosis, or without sacrifice—it’s a rare, herculean feat. Rojas-Berscia, who gave up a promising tennis career that interfered with his language studies, reckons that there are “about twenty of us in Europe, and we all know, or know of, one another.” He put me in touch with a few of his peers, including Corentin Bourdeau, a young French linguist whose eleven languages include Wolof, Farsi, and Finnish; and Emanuele Marini, a shy Italian in his forties, who runs an export-import business and speaks almost every Slavic and Romance language, plus Arabic, Turkish, and Greek, for a total of nearly thirty. Neither willingly uses English, resenting its status as a global bully language—its prepotenza, as Marini put it to me, in Italian. Ellen Jovin, a dynamic New Yorker who has been described as the “den mother” of the polyglot community, explained that her own avid study of languages—twenty-five, to date—“is almost an apology for the dominance of English. Polyglottery is an antithesis to linguistic chauvinism.”

Much of the data on hyperpolyglots is still sketchy. But, from a small sample of prodigies who have been tested by neurolinguists, responded to online surveys, or shared their experience in forums, a partial profile has emerged. An extreme language learner has a more-than-random chance of being a gay, left-handed male on the autism spectrum, with an autoimmune disorder, such as asthma or allergies. (Endocrine research, still inconclusive, has investigated the hypothesis that these traits may be linked to a spike in testosterone during gestation.) “It’s true that L.G.B.T. people are well represented in our community,” Simcott told me, when we spoke in July. “And a lot identify as being on the spectrum, some mildly, others more so. It was a subject we explored at the conference last year.”

Simcott himself is an ambidextrous, heterosexual, and notably outgoing forty-one-year-old. He lives in Macedonia with his wife and daughter, a budding polyglot of eleven, who was, he told me, trilingual at sixteen months. His own parents were monolingual, though he was fascinated, as a boy, “by the different ways people spoke English.” (Like Henry Higgins, Simcott can nail an accent to a precise point on the map, not only in the British Isles but all over Europe.) “I’m mistaken for a native in about six languages,” he told me, even though he started slow, learning French in grade school and Spanish as a teen-ager. At university, he added Italian, Portuguese, Swedish, and Old Icelandic. His flawless German, acquired post-college, as an au pair, made Dutch a cinch.

As Simcott entered late adolescence, he said, “the Internet was starting up,” so he could practice his languages in chat rooms. He also found a sense of identity that had eluded him. There was, in particular, a mysterious polyglot who haunted the same rooms. “He was the first person who really encouraged me,” Simcott said. “Everyone else either warned me that my brain would burst or saw me as a talking horse. Eventually, I made a video using bits and bobs of sixteen languages, so I wouldn’t have to keep performing.” But the stranger gave Simcott a validation that he still recalls with emotion. He founded the conference partly to pay that debt forward, by creating a clubhouse for the kind of geeky kid he had been, to whom no tongue was foreign but no place was home.

A number of hyperpolyglots are reclusive savants who bank their languages rather than using them to communicate. The more extroverted may work as translators or interpreters. Helen Abadzi, a Greek educator who speaks nineteen languages “at least at an intermediate level,” spent decades at the World Bank. Kató Lomb, a Hungarian autodidact, learned seventeen tongues—the last, Hebrew, in her late eighties—and in middle age became one of the world’s first simultaneous interpreters. Simcott joined the British Foreign Service. On tours of duty in Yemen, Bosnia, and Moldova, he picked up some of the lingo. Every summer, he set himself the challenge of learning a new tongue more purposefully, either by taking a university course—as he did in Mandarin, Japanese, Czech, Arabic, Finnish, and Georgian—or with a grammar book and a tutor.

However they differ, the hyperpolyglots whom I met all winced at the question “How many languages do you speak?” As Rojas-Berscia explained it, the issue is partly semantic: What does the verb “to speak” mean? It is also political. Standard accents and grammar are usually those of a ruling class. And the question is further clouded by the “chauvinism” that Ellen Jovin feels obliged to resist. The test of a spy, in thrillers, is to “pass for a native,” even though the English-speaking natives of Glasgow, Trinidad, Delhi, Lagos, New Orleans, and Melbourne (not to mention Eliza Doolittle’s East End) all sound foreign to one another. “No one masters all the nuances of a language,” Simcott said. “It’s a false standard, and one that gets raised, ironically, mostly by monoglots—Americans in particular. So let’s just say that I have studied more than fifty, and I use about half of them.”

Richard Hudson’s casual search for the ultimate hyperpolyglot was inconclusive, but it led him to an American journalist, Michael Erard, who had embarked on the same quest more methodically. Erard, who has a doctorate in English, spent six years reading the scientific literature and debriefing its authors, visiting archives (including Mezzofanti’s, in Bologna), and tracking down every living language prodigy he had heard from or about. It was his online survey, conducted in 2009, that generated the first systematic overview of linguistic virtuosity. Some four hundred respondents provided information about their gender and their orientation, among other personal details, including their I.Q.s (which were above average). Nearly half spoke at least seven languages, and seventeen qualified as hyperpolyglots. The distillation of this research, “Babel No More,” published in 2012, is an essential reference book—in its way, an ethnography of what Erard calls a “neural tribe.”

The awe that tribe members command has always attracted opportunists. There are, for example, “bizglots” and “broglots,” as Erard calls them. The former hawk tutorials with the dubious promise that anyone can become a prodigy, while the latter engage in online bragfests, like “postmodern frat boys.” And then there are the fauxglots. My favorite is “George Psalmanazar” (his real name is unknown), a vagabond of mysterious provenance and endearing chutzpah who wandered through Europe in the late seventeenth century, claiming, by turns, to be Irish, Japanese, and, ultimately, Formosan. Samuel Johnson befriended him in London, where Psalmanazar published a travelogue about his “native” island which included translations from its language—an ingenious pastiche of his invention. Erard pursued another much hyped character, Ziad Fazah, a Guinness-record holder until 1997, who claimed to speak fifty-eight languages fluently. Fazah flamed out spectacularly on a Chilean television show, failing to answer even simple questions posed to him by native speakers.

Rojas-Berscia derides such theatrics as “monkey business,” and dismisses prodigies who monetize their gifts. “Where do they get the time for it?” he wonders. Erard, in his survey for “Babel No More,” queried his subjects on their learning protocols, and, while some were vague (“I accept mistakes and uncertainty; I listen and read a lot”), others gave elaborate accounts of drawing “mind maps” and of building “memory anchors,” or of creating an architectural model for each new language, to be furnished with vocabulary as they progressed. When I asked Simcott if he had any secrets, he paused to think about it. “Well, I don’t have an amazing memory,” he said. “At many tasks, I’m just average. A neurolinguist at the City University of New York, Loraine Obler, ran some tests on me, and I performed highly on recalling lists of nonsense words.” (That ability, Obler’s research suggests, strongly correlates with a gift for languages.) “I was also a standout at reproducing sounds,” he continued. “But, the more languages you learn, in the more families, the easier it gets. Each one bangs more storage hooks into the wall.”

Alexander Argüelles, a legendary figure in the community, warned Erard that immodesty is the hallmark of a charlatan. When Erard met him, ten years ago, Argüelles, an American who lives in Singapore, started his day at three in the morning with a “scriptorium” exercise: “writing two pages apiece in Arabic, Sanskrit, and Chinese, the languages he calls the ‘etymological source rivers.’ ” He continued with other languages, from different families, until he had filled twenty-four notebook pages. As dawn broke, he went for a long run, listening to audiobooks and practicing what he calls “shadowing”: as the foreign sounds flowed into his headphones, he shouted them out at the top of his lungs. Back at home, he turned to drills in grammar and phonetics, logging the time he had devoted to each language on an Excel spreadsheet. Erard studied logs going back sixteen months, and calculated that Argüelles had spent forty per cent of his waking life studying fifty-two languages, in increments that varied from four hundred and fifty-six hours (Arabic) to four hours (Vietnamese). “The way I see it, there are three types of polyglots,” he told Erard. There were the “ultimate geniuses . . . who excel at anything they do”; the Mezzofantis, “who are only good at languages”; and the “people like me.” He refused to consider himself a special case—he was simply a Stakhanovite.

Erard is a pensive man of fifty, still boyish-looking, with a gift for listening that he prizes in others. We met in Nijmegen, at the Max Planck Institute, where he was finishing a yearlong stint as the writer-in-residence, and looking forward to moving back to Maine with his family. “I saw only when the book was finished that many of the stories had a common thread,” he told me. We had been walking through the woods that surround the institute, listening to the vibrant May birdsong, a Babel of voices. His subjects, he reflected, had been cut from the herd of average mortals by their wiring or by their obsession. They had embraced their otherness, and they had cultivated it. Yet, if speech defines us as human, a related faculty had eluded them: the ability to connect. Each new language was a potential conduit—an escape route from solitude. “I hadn’t realized that was my story, too,” he said.

Rojas-Berscia and I took a budget flight from Brussels to Malta, arriving at midnight. The air smelled like summer. Our taxi-driver presumed we were mother and son. “How do you say ‘mother’ in Maltese?” Rojas-Berscia asked him, in English. By the time we had reached the hotel, he knew the whole Maltese family. Two local newlyweds, still in their wedding clothes, were just checking in. “How do you say ‘congratulations’?” Rojas-Berscia asked. The answer was nifrah.

We were both starving, so we dropped our bags and went to a local bar. It was Saturday night, and the narrow streets of the quarter were packed with revellers grooving to deafening music. I had pictured something a bit different—a quaint inn on a quiet square, perhaps, where a bronze Knight of Malta tilted at the bougainvillea. But Rojas-Berscia is not easily distracted. He took out his notebook and jotted down the kinship terms he had just learned. Then he checked his phone. “I texted the language guide I lined up for us,” he explained. “He’s a personal trainer I found online, and I’ll start working out with him tomorrow morning. A gym is a good place to get the prepositions for direction.” The trainer arrived and had a beer with us. He was overdressed, with a lacquered mullet, and there was something shifty about him. Indeed, Rojas-Berscia prepaid him for the session, but he never turned up the next day. He had, it transpired, a subsidiary line of work.

I didn’t expect Rojas-Berscia to master Maltese in a week, but I was surprised at his impromptu approach. He spent several days raptly eavesdropping on native speakers in markets and cafés and on long bus rides, bathing in the warm sea of their voices. If we took a taxi to some church or ruin, he would ride shotgun and ask the driver to teach him a few common Maltese phrases, or to tell him a joke. He didn’t record these encounters, but in the next taxi or shop he would use the new phrases to start a conversation. Hyperpolyglots, Erard writes, exhibit an imperative “will to plasticity,” by which he means plasticity of the brain. But I was seeing plasticity of a different sort, which I myself had once possessed. In my early twenties, I had learned two languages simultaneously, the first by “sleeping with my dictionary,” as the French put it, and the other by drinking a lot of wine and being willing to make a fool of myself jabbering at strangers. With age, I had lost my gift for abandon. That had been my problem with Vietnamese. You have to inhabit a language, not only speak it, and fluency requires some dramatic flair. I should have been hanging out in New York’s Little Saigon, rather than staring at a screen.

The Maltese were flattered by Rojas-Berscia’s interest in their language, but dumbfounded that he would bother to learn it—what use was it to him? Their own history suggests an answer. Malta, an archipelago, is an almost literal stepping stone from Africa to Europe. (While we were there, the government turned away a boatload of asylum seekers.) Its earliest known inhabitants were Neolithic farmers, who were succeeded by the builders of a temple complex on Gozo. (Their mysterious megaliths are still standing.) Around 750 B.C., Phoenician traders established a colony, which was conquered by the Romans, who were routed by the Byzantines, who were kicked out by the Aghlabids. A community of Arabs from the Muslim Emirate of Sicily landed in the eleventh century and dug in so deep that waves of Christian conquest—Norman, Swabian, Aragonese, Spanish, Sicilian, French, and British—couldn’t efface them. Their language is the source of Maltese grammar and a third of the lexicon, making Malti the only Semitic language in the European Union. Rojas-Berscia’s Hebrew helped him with plurals, conjugations, and some roots. As for the rest of the vocabulary, about half comes from Italian, with English and French loanwords. “We should have done Uighur,” I teased him. “This is too easy for you.”

Linguistics gave Rojas-Berscia tools that civilians lack. But he was drawn to linguistics in part because of his aptitude for systematizing. “I can’t remember names,” he told me, yet his recall for the spoken word is preternatural. “It will take me a day to learn the essentials,” he had reckoned, as we planned the trip. The essentials included “predicate formation, how to quantify, negation, pronouns, numbers, qualification—‘good,’ ‘bad,’ and such. Some clausal operators—‘but,’ ‘because,’ ‘therefore.’ Copular verbs like ‘to be’ and ‘to seem.’ Basic survival verbs like ‘need,’ ‘eat,’ ‘see,’ ‘drink,’ ‘want,’ ‘walk,’ ‘buy,’ and ‘get sick.’ Plus a nice little shopping basket of nouns. Then I’ll get our guide to give me a paradigm—‘I eat an apple, you eat an apple’—and voilà.” I had, I realized, covered the same ground in Vietnamese—tôi ăn một quả táo—but it had cost me six months.

It wasn’t easy, though, to find the right guide. I suggested we try the university. “Only if we have to,” Rojas-Berscia said. “I prefer to avoid intellectuals. You want the street talk, not book Maltese.” How would he do this in the Amazon? “Monolingual fieldwork on indigenous tongues, without the reference point of a lingua franca, is harder, but it’s beautiful,” he said. “You start by making bonds with people, learning to greet them appropriately, and observing their gestures. The rules of behavior are at least as important in cultural linguistics as the rules of grammar. It’s not just a matter of finding the algorithm. The goal is to become part of a society.”

After the debacle with the “trainer,” we went looking for volunteers willing to spend an hour or so over a drink or a coffee. We auditioned a tattoo artist with blond dreadlocks, a physiology student from Valletta, a waiter on Gozo, and a tiny old lady who sold tickets to the catacombs outside Mdina (a location for King’s Landing in “Game of Thrones”). Like nearly all Maltese, they spoke good English, though Rojas-Berscia valued their mistakes. “When someone says, ‘He is angry for me,’ you learn something about his language—it represents a convention in Maltese. The richness of a language’s conventions is the highest barrier to sounding like a native in it.”

On our third day, Rojas-Berscia contacted a Maltese Facebook friend, who invited us to dinner in Birgu, a medieval city fortified by the Knights of Malta in the sixteenth century. The sheltered port is now a marina for super-yachts, although a wizened ferryman shuttles humbler travellers from the Birgu quays to those of Senglea, directly across from them. The waterfront is lined with old palazzos of coralline limestone, whose façades were glowing in the dusk. We ordered some Maltese wine and took in the scene. But the minute Rojas-Berscia opened his notebook his attention lasered in on his task. “Please don’t tell me if a verb is regular or not,” he chided his friend, who was being too helpful. “I want my brain to do the work of classifying.”

Rojas-Berscia’s brain is of great interest to Simon Fisher, his senior colleague at the institute and a neurogeneticist of international renown. In 2001, Fisher, then at Oxford, was part of a team that discovered the FOXP2 gene and identified a single, heritable mutation of it that is responsible for verbal dyspraxia, a severe language disorder. In the popular press, FOXP2 has been mistakenly touted as “the language gene,” and as the long-sought evidence for Noam Chomsky’s famous theory, which posits that a spontaneous mutation gave Homo sapiens the ability to acquire speech and that syntax is hard-wired. Other animals, however, including songbirds, also bear a version of the gene, and most of the researchers I met believe that language is probably, as Fisher put it, a “bio-cultural hybrid”—one whose genesis is more complicated than Chomsky would allow. The question inspires bitter controversy.

Fisher’s lab at Nijmegen focusses on pathologies that disrupt speech, but he has started to search for DNA variants that may correlate with linguistic virtuosity. One such quirk has already been discovered, by the neuroscientist Sophie Scott: an extra loop of gray matter, present from birth, in the auditory cortex of some phoneticians. “The genetics of talent is unexplored territory,” Fisher said. “It’s a hard concept to frame for an experiment. It’s also a sensitive topic. But you can’t deny the fact that your genome predisposes you in certain ways.”

The genetics of talent may thwart average linguaphiles who aspire to become Mezzofantis. Transgenerational studies are the next stage of research, and they will seek to establish the degree to which a genius for language runs in the family. Argüelles is the child of a polyglot. Kató Lomb was, too. Simcott’s daughter might contribute to a science still in its infancy. In the meantime, Fisher is recruiting outliers like Rojas-Berscia and collecting their saliva; when the sample is broad enough, he hopes, it will generate some conclusions. “We need to establish the right cutoff point,” he said. “We tend to think it should be twenty languages, rather than the conventional eleven. But there’s a trade-off: with a lower number, we have a bigger cohort.”

I asked Fisher about another cutoff point: the critical period for acquiring a language without an accent. The common wisdom is that one loses the chance to become a spy after puberty. Fisher explained why that is true for most people. A brain, he said, sacrifices suppleness to gain stability as it matures; once you master your mother tongue, you don’t need the phonetic plasticity of childhood, and a typical brain puts that circuitry to another use. But Simcott learned three of the languages in which he is mistaken for a native when he was in his twenties. Corentin Bourdeau, who grew up in the South of France, passes for a local as seamlessly in Lima as he does in Tehran. Experiments in extending or restoring plasticity, in the hope of treating sensory disabilities, may also lead to opportunities for greater acuity. Takao Hensch, at Harvard, has discovered that Valproate, a drug used to treat epilepsy, migraines, and bipolar disorder, can reopen the critical period for visual development in mice. “Might it work for speech?” Fisher said. “We don’t know yet.”

Rojas-Berscia and I parted on the train from Brussels to Nijmegen, where he got off and I continued to the Amsterdam airport. He had to finish his thesis on the Flux Approach before leaving for a research job in Australia, where he planned to study aboriginal languages. I asked him to assess our little experiment. “The grammar was easy,” he said. “The orthography is a little difficult, and the verbs seemed chaotic.” His prowess had dazzled our consultants, but he wasn’t as impressed with himself. He could read bits of a newspaper; he could make small talk; he had learned probably a thousand words. When a taxi-driver asked if he’d been living on Malta for a year, he’d laughed with embarrassment. “I was flattered, of course,” he added. “And his excitement for my progress excited him to help us.” “Excitement about your progress,” I clucked. It was a rare lapse.

A week later, I was on a different train, from New York to Boston. Fisher had referred me to his collaborator Evelina Fedorenko. Fedorenko is a cognitive neuroscientist at Massachusetts General Hospital who also runs what her postdocs call the EvLab, at M.I.T. My first e-mail to her had bounced back—she was on maternity leave. But then she wrote to say that she would be delighted to meet me. “Are you claustrophobic?” she added. If not, she said, I could take a spin in her fMRI machine, to see what she does with her hyperpolyglots.

Fedorenko is small and fair, with delicate features. She was born in Volgograd in 1980. “When the Soviet Union fell apart, we were starving, and it wasn’t fun,” she said. Her father was an alcoholic, but her parents were determined to help her fulfill her exceptional promise in math and science, which meant escaping abroad. At fifteen, she won a place in an exchange program, sponsored by Senator Bill Bradley, and spent a year in Alabama. Harvard gave her a full scholarship in 1998, and she went on to graduate school at M.I.T., in linguistics and psychology. There, she met the cognitive scientist Ted Gibson. They married, and they now have a one-year-old daughter.

One afternoon, I visited Fedorenko at her home, in Belmont. (She spends as much time as she can with her baby, who was babbling like a songbird.) “Here is my basic question,” she said. “How do I get a thought from my mind into yours? We begin by asking how language fits into the broader architecture of the mind. It’s a late invention, evolutionarily, and a lot of the brain’s machinery was already in place.”

She wondered: Does language share a mechanism with other cognitive functions? Or is it autonomous? To seek an answer, she developed a set of “localizer tasks,” administered in an fMRI machine. Her first goal was to identify the “language-responsive cortex,” and the tasks involved reading or listening to a sequence of sentences, some of them garbled or composed of nonsense words.

The responsive cortex proved to be separate from regions involved in other forms of complex thought. We don’t, for example, use the same parts of our brains for music and for speech, which seems counterintuitive, especially in the case of a tonal language. But pitch, Fedorenko explained, has its own neural turf. And life experience alters the picture. “Literate people use one region of their cortex in recognizing letters,” she said. “Illiterate people don’t have that region, though it develops if they learn to read.”

In order to draw general conclusions, Fedorenko needed to study the way that language skills vary among individuals. They turned out to vary greatly. The intensity of activity in response to the localizer tests was idiosyncratic; some brains worked harder than others. But that raised another question: Did heightened activity correspond to a greater aptitude for language? Or was the opposite true—that the cortex of a language prodigy would show less activity, because it was more efficient?

I asked Fedorenko if she had reason to believe that gay, left-handed males on the spectrum had some cerebral advantage in learning languages. “I’m not prepared to accept that reporting as anything more than anecdotal,” she said. “Males, for one thing, get greater encouragement for intellectual achievement.”

Fedorenko’s initial subjects had been English-speaking monolinguals, or bilinguals who also spoke Spanish or Mandarin. But, in 2013, she tested her first prodigy. “We heard about a local kid who spoke thirty languages, and we recruited him,” she said. He introduced her to other whizzes, and as the study grew Fedorenko needed material in a range of tongues. Initially, she used Bible excerpts, but “Alice’s Adventures in Wonderland” came to seem more congenial. The EvLab has acquired more than forty “Alice” translations, and Fedorenko plans to add tasks in sign language.

Twelve years on, Fedorenko is confident of certain findings. All her subjects show less brain activity when working in their mother tongue; they don’t have to sweat it. As the language in the tests grows more challenging, it elicits more neural activity, until it becomes gibberish, at which point it elicits less—the brain seems to give up, quite sensibly, when a task is futile. Hyperpolyglots, too, work harder in an unfamiliar tongue. But their “harder” is relaxed compared with the efforts of average people. Their advantage seems to be not capacity but efficiency. No matter how difficult the task, they use a smaller area of their brain in processing language—less tissue, less energy.

All Fedorenko’s guinea pigs, including me, also took a daunting nonverbal memory test: squares on a grid flash on and off as you frantically try to recall their location. This trial engages a neural network separate from the language cortex—the executive-function system. “Its role is to support general fluid intelligence,” Fedorenko said. What kind of boost might it give to, say, a language prodigy? “People claim that language learning makes you smarter,” she replied. “Sadly, we don’t have evidence for it. But, if you play an unfamiliar language to ‘normal’ people, their executive-function systems don’t show much response. Those of polyglots do. Perhaps they’re striving to grasp a linguistic signal.” Or perhaps that’s where their genie resides.

Barring an infusion of Valproate, most of us will never acquire Rojas-Berscia’s twenty-eight languages. As for my own brain, I reckoned that the scan would detect a lumpen mass of mac and cheese embedded with low-wattage Christmas lights. After the memory test, I was sure that it had. “Don’t worry,” Matt Siegelman, Fedorenko’s technician, reassured me. “Everyone fails it—well, almost.”

Siegelman’s tactful letdown woke me from my adventures in language land. But as I was leaving I noticed a copy of “Alice” in Vietnamese. I report to you with pride that I could make out “white rabbit” (thỏ trắng), “tea party” (tiệc trà), and ăn tôi, which—you knew it!—means “eat me.” ♦

This article appears in the print edition of the September 3, 2018, issue, with the headline “Maltese for Beginners.”

A Picture Of Language: The Fading Art Of Diagramming Sentences

Once a popular way to teach grammar, the practice of diagramming sentences has fallen out of favor.

The design firm Pop Chart Lab has taken the first lines of famous novels and diagrammed those sentences. This one shows the opening of Franz Kafka’s Metamorphosis. Credit: Pop Chart Lab.

Source: A Picture Of Language: The Fading Art Of Diagramming Sentences

When you think about a sentence, you usually think about words — not lines. But sentence diagramming brings geometry into grammar.

If you weren’t taught to diagram a sentence, this might sound a little zany. But the practice has a long — and controversial — history in U.S. schools.

And while it was once commonplace, many people today don’t even know what it is.

So let’s start with the basics.

“It’s a fairly simple idea,” says Kitty Burns Florey, the author of Sister Bernadette’s Barking Dog: The Quirky History and Lost Art of Diagramming Sentences. “I like to call it a picture of language. It really does draw a picture of what language looks like.”

I asked her to show me, and for an example she used the first sentence she recalls diagramming: “The dog barked.”

“By drawing a line and writing ‘dog’ on the left side of the line and ‘barked’ on the right side of the line and separating them with a little vertical line, we could see that ‘dog’ was the subject of the sentence and ‘barked’ was the predicate or the verb,” she explains. “When you diagram a sentence, those things are always in that relation to each other. It always makes the same kind of picture. And supposedly, it makes it easier for kids who are learning to write, learning to use correct English.”
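The “picture” Burns Florey describes is simple enough to mock up in a few lines of code. The sketch below is purely illustrative, not anything from the article or from Reed and Kellogg’s actual system: it assumes only her description of “The dog barked,” with subject and verb on a baseline split by a vertical bar and the article hanging on a slanted line beneath the word it modifies. The function name and text layout are invented for the example.

# A toy, hypothetical text rendering of a minimal subject/verb diagram.
def diagram(subject, verb, modifier=""):
    """Render a bare Reed-Kellogg-style diagram as plain text."""
    top = f" {subject} | {verb}"
    base = "-" * (len(subject) + 2) + "+" + "-" * (len(verb) + 2)
    lines = [top, base]
    if modifier:
        # A modifier such as an article hangs on a slanted line below the subject.
        lines.append("   \\")
        lines.append("    " + modifier)
    return "\n".join(lines)

print(diagram("dog", "barked", modifier="The"))

Run as is, this prints “dog” and “barked” on the baseline, divided by the vertical bar, with “The” slanting down beneath “dog”: the same relation, Burns Florey notes, that every such diagram preserves.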

An Education ‘Phenomenon’

Burns Florey and other experts trace the origin of diagramming sentences back to 1877 and two professors at Brooklyn Polytechnic Institute. In their book, Higher Lessons in English, Alonzo Reed and Brainerd Kellogg made the case that students would learn better how to structure sentences if they could see them drawn as graphic structures.

After Reed and Kellogg published their book, the practice of diagramming sentences had something of a Golden Age in American schools.

“It was a purely American phenomenon,” Burns Florey says. “It was invented in Brooklyn, it swept across this country like crazy and became really popular for 50 or 60 years and then began to die away.”

By the 1960s, new research had heaped criticism on the practice.

“Diagramming sentences … teaches nothing beyond the ability to diagram,” declared the 1960 Encyclopedia of Educational Research.

In 1985, the National Council of Teachers of English declared that “repetitive grammar drills and exercises” — like diagramming sentences — are “a deterrent to the improvement of students’ speaking and writing.”

Nevertheless, diagramming sentences is still taught — you can find it in textbooks and see it in lesson plans. My question is, why?

Burns Florey says it might still be a good tool for some students. “When you’re learning to write well, it helps to understand what the sentence is doing and why it’s doing it and how you can improve it.”

But does it deserve a place in English class today? (The Common Core doesn’t mention it.)

“There are two kinds of people in this world — the ones who loved diagramming, and the ones who hated it,” Burns Florey says.

She’s in the first camp. But she understands why, for some students, it never clicks.

“It’s like a middle man. You’ve got a sentence that you’re trying to write, so you have to learn to structure that, but also you have to learn to put it on these lines and angles and master that, on top of everything else.”

So many students ended up frustrated, viewing the technique “as an intrusion or as an absolutely confusing, crazy thing that they couldn’t understand.”

Science Discovered That Banning Small Talk from Your Conversations Makes You Happier (Try Asking These 13 Questions Instead)

It’s time to delete questions like ‘what do you do?’ and ‘where do you live?’ from your vocabulary forever.

Source: Science Discovered That Banning Small Talk from Your Conversations Makes You Happier (Try Asking These 13 Questions Instead)

By Marcel Schwantes Principal and founder, Leadership From the Core

Ever walk into a networking event or cocktail party where all you hear is superficial chit-chat? The small talk is deafening and never evolves into anything substantial. You can barely suppress an eye-roll between sips of your mojito.

Questions like what do you do? and where do you live? are predictable and exhausting; commentary about the weather or last night’s game fills the awkward moments as people size each other up to determine: is this someone I want to talk to?

As it turns out, the types of conversations you’re engaging in truly matter for your personal wellbeing. In 2010, scientists from the University of Arizona and Washington University in St. Louis investigated whether happy and unhappy people differ in the types of conversations they have.

The findings

Seventy-nine participants wore a recording device for four days and were periodically recorded as they went about their lives. From more than 20,000 recordings, the researchers classified each conversation as either trivial small talk or substantive discussion.

As published in Psychological Science, the happiest participants had twice as many genuine conversations and one third as much small talk as the unhappiest participants.

These findings suggest that the happy life is social and conversationally deep rather than isolated and superficial. The research also confirmed what most people know but don’t practice: surface-level small talk does not build relationships.

The new trend: Ban the small talk

Obviously inspired, behavioral scientists Kristen Berman and Dan Ariely, co-founders of Irrational Labs, a non-profit behavioral consulting company, raised the bar by hosting a dinner party where small talk was literally banned and only meaningful conversations were allowed.

As documented in a Wired article, Berman and Ariely’s invited guests were given index cards featuring examples of meaningful (and odd) conversation starters, such as the theory of suicide prevention or, um … “the art of the dominatrix.”

The party was a hit. The authors report that “everyone was happier” without the obligation of trivial small talk.

Seizing the opportunity as any innovative entrepreneur would, Carolina Gawroński, founder of No Small Talk dinners, launched her business in Hong Kong last month, and it is quickly spreading to cities around the world.

“Growing up I was surrounded by, on the one side, [my father’s] interesting friends. But on the other side, there was this whole element of being social and being at bullshit social events,” Gawroński tells Hong Kong Free Press. “Since a young age, I’ve always questioned it: ‘Why do people talk like this? What’s the point?'”

The rules at a No Small Talk dinner event are simple: no phones and no small talk. Guests also receive cards with meaningful-conversation prompts.

Then, there’s Sean Bisceglia, a partner at Sterling Partners, a private equity firm. Bisceglia has hosted Jefferson-style dinners at his home for the past eight years.

The concept is basically the same but shared as a group in a whole-table conversation with a purpose: One person speaks at a time to the whole table, there are no side conversations, and small talk is completely banned.

“I do it because the shallowness of cocktail chitchat kind of drove me crazy,” Bisceglia tells Crain’s Chicago Business. “There was never any conversation deeper than two minutes. I really felt that if we could bring together a group of people, you could get into the issues and hear different people’s perspectives.”

13 questions to start great conversations

If you’ve bought into this idea of banning small talk from your conversations, here are thirteen no-fail conversation starters cherry-picked from a few credible sources:

  1. What’s your story?
  2. What’s the most expensive thing you’ve ever stolen?
  3. What is your present state of mind?
  4. What absolutely excites you right now?
  5. What book has influenced you the most?
  6. If you could do anything you wanted tonight (anywhere, for any amount of money), what would you do and why?
  7. If you had the opportunity to meet one person you haven’t met, who would it be, why, and what would you talk about?
  8. What’s the most important thing I should know about you?
  9. What do you value more, intelligence or common sense?
  10. What movie is your favorite guilty pleasure, and why?
  11. You are stuck on a deserted island, and you can only take three things. What would they be?
  12. When and where were you happiest in your life?
  13. What do you think is the driving force in your life?

This Style of Entertainment Makes You Smarter – PsyBlog

An unsettling feeling, like the absurdity of life, can engender the desired state.

Source: This Style of Entertainment Makes You Smarter – PsyBlog

Surreal books and films could make you smarter, research finds.

Stories by Franz Kafka or films by master of the absurd David Lynch could boost learning.

Even an unsettling feeling, like the absurdity of life, can engender the desired state.

The reason is that surreal or nonsensical things put our minds into overdrive looking for meaning.

When people are more motivated to search for meaning, they learn better, the psychologists found.

Dr Travis Proulx, the study’s first author, explained:

“The idea is that when you’re exposed to a meaning threat –– something that fundamentally does not make sense –– your brain is going to respond by looking for some other kind of structure within your environment.

And, it turns out, that structure can be completely unrelated to the meaning threat.”

For the study, people read a short story by Franz Kafka called ‘The Country Doctor’, which involves a nonsensical series of events.

A version of the story was rewritten to make more sense and read by a control group.

Afterwards, both groups were given an unconscious learning task that involved spotting strings of letters.

Dr Proulx said:

“People who read the nonsensical story checked off more letter strings –– clearly they were motivated to find structure.

But what’s more important is that they were actually more accurate than those who read the more normal version of the story.

They really did learn the pattern better than the other participants did.”
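The article doesn’t spell out the mechanics of that letter-string task. Tasks of this general kind are usually built around an artificial grammar: a small rule system generates “legal” strings, participants are exposed to them, and “spotting” a string later means judging whether it could have come from the same rules. The sketch below is a hypothetical toy version for illustration only, not the materials Proulx and Heine actually used; the grammar, letters, and function names are all invented.

import random

# A toy finite-state grammar: each state maps to (letter, next_state) choices.
# State 4 is the accepting state (no outgoing transitions).
GRAMMAR = {
    0: [("M", 1), ("V", 2)],
    1: [("T", 1), ("V", 3)],
    2: [("X", 2), ("T", 3)],
    3: [("R", 4), ("X", 4)],
    4: [],
}

def generate_string():
    """Walk the grammar from the start state until the accepting state."""
    state, letters = 0, []
    while GRAMMAR[state]:
        letter, state = random.choice(GRAMMAR[state])
        letters.append(letter)
    return "".join(letters)

def is_grammatical(s):
    """Return True if the string can be produced by the grammar."""
    def walk(state, rest):
        if not rest:
            return not GRAMMAR[state]  # must end exactly in the accepting state
        return any(
            walk(next_state, rest[1:])
            for letter, next_state in GRAMMAR[state]
            if letter == rest[0]
        )
    return walk(0, s)

exposure = [generate_string() for _ in range(20)]    # the "reading" phase
test_items = exposure[:3] + ["VTX", "MXR", "QQQ"]    # old, novel-but-legal, illegal
print([(s, is_grammatical(s)) for s in test_items])

In a study of implicit learning, participants never see the rules themselves; the question is whether mere exposure to rule-governed strings lets them later pick out new legal ones, which is the kind of accuracy advantage Proulx describes for the Kafka readers.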

In a second study, people were made to feel their own lives didn’t make sense.

This was done by pointing out the contradictory decisions they had made.

Dr Proulx said:

“You get the same pattern of effects whether you’re reading Kafka or experiencing a breakdown in your sense of identity.

People feel uncomfortable when their expected associations are violated, and that creates an unconscious desire to make sense of their surroundings.

That feeling of discomfort may come from a surreal story, or from contemplating their own contradictory behaviors, but either way, people want to get rid of it.

So they’re motivated to learn new patterns.”

The study tested only unconscious learning, so it doesn’t tell us whether you could use this trick intentionally.

Dr Proulx said:

“It’s important to note that sitting down with a Kafka story before exam time probably wouldn’t boost your performance on a test.

What is critical here is that our participants were not expecting to encounter this bizarre story.

If you expect that you’ll encounter something strange or out of the ordinary, you won’t experience the same sense of alienation.

You may be disturbed by it, but you won’t show the same learning ability.

The key to our study is that our participants were surprised by the series of unexpected events, and they had no way to make sense of them.

Hence, they strived to make sense of something else.”

The study was published in the journal Psychological Science (Proulx & Heine, 2009).

How Language Shapes the Way We Think

There are about 7,000 languages spoken around the world — and they all have different sounds, vocabularies and structures. But do they shape the way we think? Cognitive scientist Lera Boroditsky shares examples of language — from an Aboriginal community in Australia that uses cardinal directions instead of left and right to the multiple words for blue in Russian — that suggest the answer is a resounding yes. “The beauty of linguistic diversity is that it reveals to us just how ingenious and how flexible the human mind is,” Boroditsky says. “Human minds have invented not one cognitive universe, but 7,000.”

 

 

Lera Boroditsky · Cognitive scientist

Lera Boroditsky is trying to figure out how humans get so smart.