Neuropsychologists at the Ruhr-Universität Bochum pitted video gamers against non-gamers in a learning task. The gamers performed significantly better and showed increased activity in brain areas relevant for learning. Prof Dr Boris Suchan, Sabrina Schenk and Robert Lech report their findings in the journal Behavioural Brain Research.
The weather prediction task
The research team studied 17 volunteers who – by their own account – played action-based games on a computer or console for more than 15 hours a week. The control group consisted of 17 volunteers who didn’t play video games regularly. Both groups completed the so-called weather prediction task, a well-established test for investigating probabilistic learning, while the researchers recorded their brain activity via magnetic resonance imaging.
The participants were shown combinations of three cue cards with different symbols and asked to estimate whether each combination predicted sun or rain; they received immediate feedback on whether their choice was right or wrong. On the basis of this feedback, the volunteers gradually learned which card combinations stood for which weather outcome. Each combination was linked to a higher or lower probability of sun or rain. After completing the task, the participants filled out a questionnaire assessing their acquired knowledge of the cue card combinations.
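The structure of such a probabilistic learning task can be sketched in a short simulation. The cue patterns and probabilities below are illustrative, not the study's actual stimuli; the simulated learner simply tallies feedback per pattern and guesses the more frequent outcome:

```python
import random

random.seed(0)

# Illustrative cue combinations and rain probabilities (hypothetical values,
# not the actual stimuli used in the study).
patterns = {"ABC": 0.8, "ABD": 0.6, "ACD": 0.4, "BCD": 0.2}
counts = {p: {"rain": 0, "sun": 0} for p in patterns}

def learner_guess(pattern):
    # Guess the outcome seen more often so far for this pattern.
    c = counts[pattern]
    return "rain" if c["rain"] >= c["sun"] else "sun"

trials, correct = 2000, 0
for _ in range(trials):
    p = random.choice(list(patterns))
    outcome = "rain" if random.random() < patterns[p] else "sun"
    if learner_guess(p) == outcome:
        correct += 1
    counts[p][outcome] += 1  # feedback updates the learner's tallies

print(f"accuracy: {correct / trials:.2f}")
```

Because the outcomes are only probabilistic, even a perfect learner cannot exceed the base rates: with these values the ceiling is about 70 percent, which is why performance on the high-uncertainty combinations is the interesting measure.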
Video gamers better with high uncertainties
The gamers were notably better than the control group at matching the cue cards to the weather outcomes. They fared especially well with combinations that carried high uncertainty, such as one predicting 60 percent rain and 40 percent sunshine.
The analysis of the questionnaire revealed that the gamers had acquired more knowledge about the meaning of the card combinations than the control group. “Our study shows that gamers are better in analysing a situation quickly, to generate new knowledge and to categorise facts – especially in situations with high uncertainties,” says first author Sabrina Schenk.
This kind of learning is linked to an increased activity in the hippocampus, a brain region that plays a key role in learning and memory. “We think that playing video games trains certain brain regions like the hippocampus”, says Schenk. “That is not only important for young people, but also for older people; this is because changes in the hippocampus can lead to a decrease in memory performance. Maybe we can treat that with video games in the future.”
ABOUT THIS NEUROSCIENCE RESEARCH ARTICLE
Funding: Funded by the German Research Foundation.
Source: Boris Suchan – RUB
Image Source: NeuroscienceNews.com image is in the public domain.
Original Research: Abstract for “Games people play: How video games improve probabilistic learning” by Schenk S, Lech RK, and Suchan B in Behavioural Brain Research. Published online August 24 2017 doi:10.1016/j.bbr.2017.08.027
Our visual attention is drawn to parts of a scene that have meaning, rather than to those that are salient or “stick out,” according to new research from the Center for Mind and Brain at the University of California, Davis. The findings, published Sept. 25 in the journal Nature Human Behaviour, overturn the widely held model of visual attention.
“A lot of people will have to rethink things,” said Professor John Henderson, who led the research. “The saliency hypothesis really is the dominant view.”
Our eyes perceive a wide field of view in front of us, but we focus our attention on only a small part of that field. How do we decide where to direct our attention, without thinking about it?
The dominant theory in attention studies is “visual salience,” Henderson said. Salience means things that “stick out” from the background, like colorful berries on a background of leaves or a brightly lit object in a room.
Saliency is relatively easy to measure. You can map the amount of saliency in different areas of a picture by measuring relative contrast or brightness, for example.
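A crude version of such a saliency map can be computed directly, for instance as each pixel's brightness contrast against its local neighbourhood. This is a toy sketch of the idea, not the actual saliency model used in attention research:

```python
import numpy as np

def saliency_map(image, k=7):
    # Toy saliency: absolute difference between each pixel's brightness
    # and the mean brightness of its k x k neighbourhood.
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    h, w = image.shape
    local_mean = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            local_mean[y, x] = padded[y:y + k, x:x + k].mean()
    return np.abs(image - local_mean)

# A dark scene with one bright patch: the patch boundary dominates the map.
img = np.zeros((32, 32))
img[12:18, 12:18] = 1.0
sal = saliency_map(img)
```

Uniform regions score zero, while edges of the bright patch score highest, which is exactly the "stick out" property the saliency hypothesis appeals to.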
Henderson called this the “magpie theory”: our attention is drawn to bright and shiny objects.
“It becomes obvious, though, that it can’t be right,” he said, otherwise we would constantly be distracted.
Making a Map of Meaning
Henderson and postdoctoral researcher Taylor Hayes set out to test whether attention is guided instead by how “meaningful” we find an area within our view. They first had to construct “meaning maps” of test scenes, where different parts of the scene had different levels of meaning to an observer.
To make their meaning map, Henderson and Hayes took images of scenes, broke them up into overlapping circular tiles, and submitted the individual tiles to the online crowdsourcing service Mechanical Turk, asking users to rate the tiles for meaning.
Conventional thinking on visual attention is that our attention is automatically drawn to “salient” objects that stand out from the background. Researchers at the UC Davis Center for Mind and Brain mapped hundreds of images (examples far left) by eye tracking (center left), “meaning” (center right) and “salience” or outstanding features (far right). Statistical analysis shows that eyes are drawn to “meaningful” areas, not necessarily those that are most outstanding. NeuroscienceNews.com image is credited to John Henderson and Taylor Hayes, UC Davis.
By tallying the votes of Mechanical Turk users they were able to assign levels of meaning to different areas of an image and create a meaning map comparable to a saliency map of the same scene.
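The tallying step can be sketched as follows: each circular tile carries a mean crowd rating, and a pixel's meaning score is the average rating of all tiles covering it. The coordinates and ratings here are made up for illustration, not the study's data:

```python
import numpy as np

def meaning_map(shape, tiles):
    # Average crowd-sourced ratings of overlapping circular tiles into a
    # per-pixel map. `tiles` is a list of (cx, cy, radius, mean_rating).
    h, w = shape
    total = np.zeros(shape)
    count = np.zeros(shape)
    yy, xx = np.mgrid[0:h, 0:w]
    for cx, cy, r, rating in tiles:
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
        total[mask] += rating
        count[mask] += 1
    return np.where(count > 0, total / np.maximum(count, 1), 0.0)

# Hypothetical mean ratings for two overlapping tiles (scale 1-6).
tiles = [(6, 12, 7, 5.0), (18, 12, 7, 3.0)]
mm = meaning_map((24, 24), tiles)
print(mm[12, 6], mm[12, 12], mm[12, 18])  # → 5.0 4.0 3.0
```

Where tiles overlap, the map blends their ratings, which is what makes the resulting meaning map directly comparable, pixel for pixel, to a saliency map of the same scene.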
Next, they tracked the eye movements of volunteers as they looked at the scene. Those eye tracks gave them a map of which parts of the image attracted the most attention. This “attention map” was closer to the meaning map than to the salience map, Henderson said.
In Search of Meaning
Henderson and Hayes don’t yet have firm data on what makes part of a scene meaningful, although they have some ideas. For example, a cluttered table or shelf attracted more attention than a highly salient splash of sunlight on a wall. With further work, they hope to develop a “taxonomy of meaning,” Henderson said.
Although the research is aimed at a fundamental understanding of how visual attention works, there could be some near-term applications, Henderson said, for example in developing automated visual systems that allow computers to scan security footage or to automatically identify or caption images online.
ABOUT THIS NEUROSCIENCE RESEARCH ARTICLE
Funding: The work was supported by the National Science Foundation.
Source: Andy Fell – UC Davis
Image Source: NeuroscienceNews.com image is credited to John Henderson and Taylor Hayes, UC Davis.
Original Research: Abstract for “Meaning-based guidance of attention in scenes as revealed by meaning maps” by John M. Henderson & Taylor R. Hayes in Nature Human Behaviour. Published online September 25 2017 doi:10.1038/s41562-017-0208-0
Our research has identified three types of bias that make the social media ecosystem vulnerable to both intentional and accidental misinformation. That is why our Observatory on Social Media at Indiana University is building tools to help people become aware of these biases and protect themselves from outside influences designed to exploit them.
To counter cognitive bias, and to help people pay more attention to the source of a claim before sharing it, we developed Fakey, a mobile news literacy game (free on Android and iOS) simulating a typical social media news feed, with a mix of news articles from mainstream and low-credibility sources. Players get more points for sharing news from reliable sources and flagging suspicious content for fact-checking. In the process, they learn to recognize signals of source credibility, such as hyperpartisan claims and emotionally charged headlines.
Bias in society
Another source of bias comes from society. When people connect directly with their peers, the social biases that guide their selection of friends come to influence the information they see.
The tendency to evaluate information more favorably if it comes from within their own social circles creates “echo chambers” that are ripe for manipulation, either consciously or unintentionally. This helps explain why so many online conversations devolve into “us versus them” confrontations.
To study how the structure of online social networks makes users vulnerable to disinformation, we built Hoaxy, a system that tracks and visualizes the spread of content from low-credibility sources, and how it competes with fact-checking content. Our analysis of the data collected by Hoaxy during the 2016 U.S. presidential elections shows that Twitter accounts that shared misinformation were almost completely cut off from the corrections made by the fact-checkers.
When we drilled down on the misinformation-spreading accounts, we found a very dense core group of accounts retweeting each other almost exclusively – including several bots. The only times that fact-checking organizations were ever quoted or mentioned by the users in the misinformed group were when questioning their legitimacy or claiming the opposite of what they wrote.
Bias in the machine
The third group of biases arises directly from the algorithms used to determine what people see online. Both social media platforms and search engines employ them. These personalization technologies are designed to select only the most engaging and relevant content for each individual user. But in doing so, they may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation.
Our own research shows that social media platforms expose users to a less diverse set of sources than do non-social media sites like Wikipedia. Because this is at the level of a whole platform, not of a single user, we call this the homogeneity bias.
Another important ingredient of social media is information that is trending on the platform, according to what is getting the most clicks. We call this popularity bias, because we have found that an algorithm designed to promote popular content may negatively affect the overall quality of information on the platform. This also feeds into existing cognitive bias, reinforcing what appears to be popular irrespective of its quality.
To study these manipulation strategies, we developed a tool to detect social bots called Botometer. Botometer uses machine learning to detect bot accounts, by inspecting thousands of different features of Twitter accounts, like the times of its posts, how often it tweets, and the accounts it follows and retweets. It is not perfect, but it has revealed that as many as 15 percent of Twitter accounts show signs of being bots.
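A toy illustration of feature-based bot scoring follows. The features and thresholds here are invented for illustration; Botometer itself trains a machine-learning classifier over thousands of account features rather than using hand-set rules:

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float
    retweet_fraction: float   # share of posts that are retweets
    followers: int
    following: int

def bot_score(a: Account) -> float:
    # Toy heuristic score in [0, 1]; a real system learns these
    # weights and cutoffs from labeled bot/human accounts.
    score = 0.0
    if a.tweets_per_day > 72:                    # far above typical human volume
        score += 0.4
    if a.retweet_fraction > 0.9:                 # almost exclusively retweets
        score += 0.3
    if a.following > 10 * max(a.followers, 1):   # follows far more than follow back
        score += 0.3
    return round(score, 2)

human = Account(tweets_per_day=5, retweet_fraction=0.3, followers=300, following=400)
bot = Account(tweets_per_day=500, retweet_fraction=0.97, followers=12, following=2000)
print(bot_score(human), bot_score(bot))  # → 0.0 1.0
```

The point of the sketch is the shape of the problem: no single feature is decisive, so many weak signals are combined into one score per account.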
Using Botometer in conjunction with Hoaxy, we analyzed the core of the misinformation network during the 2016 U.S. presidential campaign. We found many bots exploiting the confirmation and popularity biases of their victims as well as Twitter’s algorithmic biases.
These bots are able to construct filter bubbles around vulnerable users, feeding them false claims and misinformation. First, they can attract the attention of human users who support a particular candidate by tweeting that candidate’s hashtags or by mentioning and retweeting the person. Then the bots can amplify false claims smearing opponents by retweeting articles from low-credibility sources that match certain keywords. This activity also makes the algorithm highlight for other users false stories that are being shared widely.
Understanding complex vulnerabilities
Even as our research, and others’, shows how individuals, institutions and even entire societies can be manipulated on social media, there are many questions left to answer. It’s especially important to discover how these different biases interact with each other, potentially creating more complex vulnerabilities.
Tools like ours offer internet users more information about disinformation, and therefore some degree of protection from its harms. The solutions will not likely be only technological, though there will probably be some technical aspects to them. But they must take into account the cognitive and social aspects of the problem.
Tauhid Zaman, Associate Professor of Operations Management, MIT Sloan School of Management
Nearly two-thirds of the social media bots with political activity on Twitter before the 2016 U.S. presidential election supported Donald Trump. But all those Trump bots were far less effective at shifting people’s opinions than the smaller proportion of bots backing Hillary Clinton. As my recent research shows, a small number of highly active bots can significantly change people’s political opinions. The main factor was not how many bots there were – but rather, how many tweets each set of bots issued.
My work focuses on military and national security aspects of social networks, so naturally I was intrigued by concerns that bots might affect the outcome of the upcoming 2018 midterm elections. I began investigating what exactly bots did in 2016. There was plenty of rhetoric – but only one basic factual principle: If information warfare efforts using bots had succeeded, then voters’ opinions would have shifted.
I wanted to measure how much bots were – or weren’t – responsible for changes in humans’ political views. I had to find a way to identify social media bots and evaluate their activity. Then I needed to measure the opinions of social media users. Lastly, I had to find a way to estimate what those people’s opinions would have been if the bots had never existed.
Then we made a list of the roughly 78,000 Twitter users who posted the tweets in our data set and constructed the network of who followed whom among them. To identify the bots, we used an algorithm based on our observation that bots often retweeted humans but were rarely retweeted themselves.
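That observation translates into a simple heuristic over retweet data. This is a sketch with made-up accounts, not the authors' actual algorithm:

```python
from collections import Counter

# retweets: (retweeter, original_author) pairs observed in the data set
# (hypothetical accounts for illustration)
retweets = [
    ("bot1", "alice"), ("bot1", "bob"), ("bot1", "alice"), ("bot1", "carol"),
    ("bot2", "alice"), ("bot2", "bob"), ("bot2", "bob"),
    ("alice", "bob"), ("bob", "alice"),
]

out_rt = Counter(src for src, _ in retweets)   # how often an account retweets
in_rt = Counter(dst for _, dst in retweets)    # how often it is retweeted

def looks_like_bot(user, min_out=3):
    # Heuristic in the spirit of the article: retweets others heavily
    # but is itself never retweeted.
    return out_rt[user] >= min_out and in_rt[user] == 0

users = set(out_rt) | set(in_rt)
print(sorted(u for u in users if looks_like_bot(u)))  # → ['bot1', 'bot2']
```

The asymmetry is the signal: humans both retweet and get retweeted, while amplification bots mostly push content outward.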
This method found 396 bots – or less than 1 percent of the active Twitter users. And just 10 percent of the accounts followed them. I felt good about that: It seemed unlikely that such a small number of relatively disconnected bots could have a major effect on people’s opinions.
A closer look at the people
Next we set out to measure the opinions of the people in our data set. We did this with a type of machine learning algorithm called a neural network, which in this case we set up to evaluate the content of each tweet, determining the extent to which it supported Clinton or Trump. Individuals’ opinions were calculated as the average of their tweets’ opinions.
Once we had assigned each human Twitter user in our data a score representing how strong a Clinton or Trump backer they were, the challenge was to measure how much the bots shifted people’s opinions – which meant calculating what their opinions would have been if the bots hadn’t existed.
Fortunately, a model from as far back as the 1970s had established a way to gauge people’s sentiments in a social network based on connections between them. In this network-based model, individuals’ opinions tend to align with the people connected to them. After slightly modifying the model to apply it to Twitter, we used it to calculate people’s opinions based on who followed whom on Twitter – rather than looking at their tweets. We found that the opinions we calculated from the network model matched well with opinions measured from the content of their tweets.
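The network-based model described here is broadly in the spirit of DeGroot-style opinion averaging from the 1970s. The sketch below uses a toy follower graph, a made-up 0–100 opinion scale, and illustrative "stubborn" nodes; it is not the authors' actual model, but it shows how removing a bot and re-running the model yields counterfactual opinions:

```python
import numpy as np

# Toy follower graph: follows[i] lists the accounts user i follows.
# Users 0-3 are humans; user 4 is a bot. Opinion scale: 0 to 100.
follows = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2], 4: []}

def equilibrium_opinions(follows, fixed, iters=500):
    # DeGroot-style averaging: each non-fixed user's opinion moves to the
    # mean opinion of the accounts they follow; "fixed" nodes never change.
    x = np.full(len(follows), 50.0)
    for i, v in fixed.items():
        x[i] = v
    for _ in range(iters):
        new = x.copy()
        for i, nbrs in follows.items():
            if i not in fixed and nbrs:
                new[i] = np.mean([x[j] for j in nbrs])
        x = new
    return x

# With the bot (node 4) pushing a fixed opinion of 90:
with_bot = equilibrium_opinions(follows, {3: 20.0, 4: 90.0})
# Counterfactual: delete the bot from the graph and recompute.
no_bot_graph = {i: [j for j in nbrs if j != 4] for i, nbrs in follows.items() if i != 4}
without_bot = equilibrium_opinions(no_bot_graph, {3: 20.0})
print(with_bot[:4].mean(), without_bot.mean())
```

Even though only one human follows the bot directly, its influence propagates through the averaging step to users who never see its tweets, which is the amplification effect the article describes.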
Life without the bots
So far we had shown that the follower network structure in Twitter could accurately predict people’s opinions. This now allowed us to ask questions such as: What would their opinions have been if the network were different? The different network we were interested in was one that contained no bots. So for our last step, we removed the bots from the network and recalculated the network model, to see what real people’s opinions would have been without bots. Sure enough, bots had shifted human users’ opinions – but in a surprising way.
Given much of the news reporting, we were expecting the bots to help Trump – but they didn’t. In a network without bots, the average human user had a pro-Clinton score of 42 out of 100. With the bots, though, we had found the average human had a pro-Clinton score of 58. That shift was a far larger effect than we had anticipated, given how few and unconnected the bots were. The network structure had amplified the bots’ power.
We wondered what had made the Clinton bots more effective than the Trump bots. Closer inspection showed that the 260 bots supporting Trump posted a combined 113,498 tweets, or 437 tweets per bot. However, the 150 bots supporting Clinton posted 96,298 tweets, or 708 tweets per bot. It appeared that the power of the Clinton bots came not from their numbers, but from how often they tweeted. We found that most of what the bots posted were retweets of the candidates or other influential individuals. So they were not really crafting original tweets, but sharing existing ones.
It’s worth noting that our analysis looked at a relatively small number of users, especially when compared to the voting population, and only during a relatively short period of time around a specific event in the campaign. Therefore, these findings don’t suggest anything about the overall election results. But they do show the potential effect bots can have on people’s opinions.
It’s a reminder to be careful about what you read – and what you believe – on social media. We recommend double-checking that you are following people you know and trust – and keeping an eye on who is tweeting what on your favorite hashtags.
Why our screens make us less happy – Adam Alter
What are our screens and devices doing to us? Psychologist Adam Alter has spent the last five years studying how much time screens steal from us and how they’re getting away with it. He shares why all those hours you spend staring at your smartphone, tablet or computer might be making you miserable — and what you can do about it.
According to psychiatrist Dr David Veal: “Two out of three of all the patients who come to see me with Body Dysmorphic Disorder since the rise of camera phones have a compulsion to repeatedly take and post selfies on social media sites.”
“Cognitive behavioral therapy is used to help a patient to recognize the reasons for his or her compulsive behavior and then to learn how to moderate it,” he told the Sunday Mirror.
A British teenager tried to commit suicide after failing to take the perfect selfie. Danny Bowman became so obsessed with capturing the perfect shot that he spent 10 hours a day taking up to 200 selfies. The 19-year-old lost nearly 30 pounds, dropped out of school and did not leave the house for six months in his quest to get the right picture. He would take 10 pictures immediately after waking up. Frustrated at his attempts to take the one image he wanted, Bowman eventually tried to take his own life by overdosing, but was saved by his mother.
Is it possible that taking selfies causes mental illness, addiction, narcissism and suicide? Many psychologists say yes, and warn parents to pay close attention to what kids are doing online to avoid future cases like Bowman’s.
“I was constantly in search of taking the perfect selfie and when I realized I couldn’t, I wanted to die. I lost my friends, my education, my health and almost my life,” he told The Mirror.
The teenager is believed to be the UK’s first selfie addict and has had therapy to treat his technology addiction as well as OCD and Body Dysmorphic Disorder.
Part of his treatment at the Maudsley Hospital in London included taking away his iPhone for intervals of 10 minutes, which increased to 30 minutes and then an hour.
“It was excruciating to begin with but I knew I had to do it if I wanted to go on living,” he told the Sunday Mirror.
Public health officials in the UK have declared that addiction to social media such as Facebook and Twitter is an illness, and more than 100 patients seek treatment for it every year.
“Selfies frequently trigger perceptions of self-indulgence or attention-seeking social dependence that raises the damned-if-you-do and damned-if-you-don’t spectre of either narcissism or very low self-esteem,” said Pamela Rutledge in Psychology Today.
The big problem with the rise of digital narcissism is that it puts enormous pressure on people to achieve unfeasible goals, without making them hungrier. Wanting to be Beyoncé, Jay Z or a model is hard enough already, but when you are not prepared to work hard to achieve it, you are better off just lowering your aspirations. Few things are more self-destructive than a combination of high entitlement and a lazy work ethic. Ultimately, online manifestations of narcissism may be little more than a self-presentational strategy to compensate for a very low and fragile self-esteem. Yet when these efforts are reinforced and rewarded by others, they perpetuate the distortion of reality and consolidate narcissistic delusions.
If one of your New Year’s resolutions is to be a nicer person who is more sensitive and aware of other people’s feelings, read more novels. Really.
Once you are absorbed in the world of Anthony Doerr’s All the Light We Cannot See and other popular novels, you might find yourself a more empathetic person. Researchers who study how reading literature affects us have found that just like anything else, we get better at a subject the more we practice it; the more fiction we read, the more we understand how and what other people think (Djikic & Oatley, 2014).
It may be that in the process of appreciating others’ lives, we incorporate these experiences into our own personality, resulting in a new and reconfigured self. Readers often experience emotions similar to those of fictional characters, which increases our empathy for them. In doing so, “Literature can help us navigate our self-development by transcending our current self while at the same time making available to us a multitude of potential future selves” (Djikic & Oatley, 2014, p. 503). So the more we read, the more we expose ourselves to other ways of being, and other potential identities.
If you are wondering whether television or film have the same effect, the answer is unclear; more research is needed. But television and film provide audiovisual information that novels do not, so literature likely requires more cognitive effort unless the show or film is complex and challenging (and much contemporary media is).
Novels therefore provide ideal opportunities to practice emotional intelligence skills such as empathy, as well as the awareness and monitoring of our emotions (Mar, Oatley, Djikic, & Mullin, 2011). And what we read matters: suspense and romance novels seem to foster greater interpersonal sensitivity than science fiction novels do (Fong, Mullin, & Mar, 2013). There are subtle distinctions within genres, though. As a fan of Margaret Atwood’s speculative fiction, I look forward to more research on the differences among various genres and sub-genres of literature.
Regardless, the next time you are running errands and waiting in line, consider dipping into that novel you started rather than texting mindlessly or zoning out with a game — if you do so regularly, you will likely become a more sensitive and thoughtful person.
Djikic, M. & Oatley, K. (2014). The art in fiction: From indirect communication to changes of the self. Psychology of Aesthetics, Creativity, and the Arts, 8(4), 498-505.
Fong, K., Mullin, J. B., & Mar, R. A. (2013). What you read matters: The role of fiction genre in predicting interpersonal sensitivity. Psychology of Aesthetics, Creativity, and the Arts, 7(4), 370-376.
Mar, R. A., Oatley, K., Djikic, M., & Mullin, J. (2011). Emotion and narrative fiction: Interactive influences before, during, and after reading. Cognition and Emotion, 25(5), 818-833.
A new book has revealed the crucial hidden details you have missed in some of the world’s most well-known paintings.
A New Way of Seeing: The History of Art in 57 Works hopes to change the way people view these incredible pieces forever through recognising their hidden meanings.
From Sandro Botticelli’s The Birth of Venus to Edvard Munch’s The Scream, the book helps guide viewers to seemingly innocuous details that are actually bursting with meaning.
‘I wrote A New Way of Seeing because I wanted to understand what makes great art great,’ author Kelly Grovier said.
‘I sensed there were hidden mysteries and strange depths to the paintings and sculptures that we all know by heart but never really look at. I wanted to help readers reconnect with those masterpieces that have the power to enrich our experience of the world.’
Here, Kelly reveals the details you might have missed in some very recognisable paintings…
Sandro Botticelli, The Birth of Venus
J. M. W. Turner, Rain, Steam and Speed – The Great Western Railway
Édouard Manet, A Bar at the Folies-Bergère
Gustav Klimt, The Kiss
Hieronymus Bosch, The Garden of Earthly Delights
Edvard Munch, The Scream
A New Way of Seeing: The History of Art in 57 Works by Kelly Grovier (Thames & Hudson)
We thought we’d already seen all the Christmas adverts this year, with only a few days left until the big day itself. However, Google has arrived late to the party with what could be the best of them all.
Its Home Alone Again commercial, posted on YouTube, is quite simply brilliant. It brings Kevin McCallister (Macaulay Culkin) back for a modern take on the classic Christmas movie – one of our favourites of all time.
You can watch it below, along with a selection of the best Christmas adverts that have appeared on UK TVs or online during the 2018 festive period.
Google: Home Alone Again
Imagine what Kevin McCallister’s Home Alone experience would have been like with a Google Home digital assistant.
It must also be said that Macaulay Culkin is looking great these days. It would be good to see him more active in TV or film in 2019.
John Lewis: The Boy and the Piano
You might be a bit sick of it by now, and it’s not a patch on former years’ efforts, but the 2018 John Lewis Christmas ad is still one of the best around.
We’re not convinced many small kids will be getting pianos this year though.
While you can watch the actual John Lewis advert above, spare a moment for the real John Lewis who is regularly inundated on Twitter by confused customers.
Twitter brilliantly captured this in its own festive advert this year.
Waitrose: Fast Forward
Another great John Lewis spoof comes from one of the retailer’s own brand partners, Waitrose.
It apes many families’ thoughts on the annual unveiling of the JL Christmas ad.
Aldi: Kevin the Carrot and the Wicked Parsnip
Aldi went all out with its Kevin the Carrot Christmas adverts this year, with several reimagined fairy tales featuring an evil parsnip.
This is our fave, not least for the punchline.
Sainsbury’s: The Big Night
Sainsbury’s went with the tried and trusted children’s Christmas play for its 2018 commercial.
Here, you can see a much longer version than the one aired on TV. We still like the bit with the plug.
Iceland: Say Hello to Rang-tan
You won’t have seen this Iceland advert on British TV this Christmas as it was banned for being too political.
However, it is a great commercial with a good message that’s well worth a watch.
Apple: Share your Gifts
To highlight the creative applications possible with Apple devices, the company made a wonderfully animated short film about a girl afraid to show others her work.
The much longer version than shown on TV is available above.
McDonald’s: Reindeer Ready
As a follow up to last year’s McDonald’s ad, the 2018 version now features Santa treating his own herd to the fast food chain’s “Reindeer Treats”.
To be honest, they’d probably have preferred Big Macs.
Visa: Keep it Local this Christmas
Finally, another good message, this time from Visa.
With online shopping and Christmas deliveries being easier than ever, don’t forget the humble high street shop keeper who relies on your custom – especially at this time of year.
In 1973, a computer program was developed at MIT to model global sustainability. Instead, it predicted that by 2040 our civilization would end. While many in history have made apocalyptic predictions that have so far failed to materialize, what the computer envisioned in the 1970s has by and large been coming true. Could the machine be right?
Why the program was created
The prediction, which recently re-appeared in Australian media, was made by a program dubbed World One. It was originally created by the computer pioneer Jay Forrester, who was commissioned by the Club of Rome to model how well the world could sustain its growth. The Club of Rome is an organization comprised of thinkers, former world heads of states, scientists, and UN bureaucrats with the mission to “promote understanding of the global challenges facing humanity and to propose solutions through scientific analysis, communication, and advocacy.”
What World One showed was that there would be a global collapse by 2040 if population growth and industrial expansion continued at then-current levels.
As reported by the Australian broadcaster ABC, the model’s calculations took into account trends in pollution levels, population growth, the amount of natural resources and the overall quality of life on Earth. The model’s predictions for the worsening quality of life and the dwindling natural resources have so far been unnervingly on target.
In fact, 2020 is the first milestone envisioned by World One: that is when the quality of life is supposed to drop dramatically. The broadcaster presented this scenario, which would lead to the demise of large numbers of people:
At around 2020, the condition of the planet becomes highly critical. If we do nothing about it, the quality of life goes down to zero. Pollution becomes so serious it will start to kill people, which in turn will cause the population to diminish, lower than it was in 1900. At this stage, around 2040 to 2050, civilised life as we know it on this planet will cease to exist.
Alexander King, the then-leader of the Club of Rome, evaluated the program’s results to also mean that nation-states will lose their sovereignty, forecasting a New World Order with corporations managing everything.
“Sovereignty of nations is no longer absolute,” King told ABC. “There is a gradual diminishing of sovereignty, little bit by little bit. Even in the big nations, this will happen.”
How did the program work?
World One, the computer program, looked at the world as one system. The report called it “an electronic guided tour of our behavior since 1900 and where that behavior will lead us.” The program produced graphs that showed what would happen to the planet decades into the future. It plotted statistics and forecasts for such variables as population, quality of life, the supply of natural resources, pollution, and more. Following the trend lines, one could see where the crises might take place.
Can we stave off disaster?
As one measure to prevent catastrophe, the Club of Rome predicted that some nations, like the U.S., would have to cut back on their appetite for gobbling up the world’s resources. It hoped that in the future, prestige would stem from “low consumption” – a prediction that has so far not materialized. Currently, nine in ten people around the world breathe air with high levels of pollution, according to data from the World Health Organization (WHO). The agency estimates that 7 million deaths each year can be attributed to pollution.
Here, Parag Khanna gets into the specifics of what the world may be like in the near future, if we don’t change course: