The dark side of gamifying work

One of the most common user experiences of our time is also a tool of social control. And nowhere is that more true than in the workplace.

Source: The dark side of gamifying work

Deep under the Disneyland Resort Hotel in California, far from the throngs of happy tourists, laundry workers clean thousands of sheets, blankets, towels and comforters every day. Workers feed the heavy linens into hot, automated presses to iron out wrinkles, and load dirty laundry into washers and dryers large enough to sit in. It’s loud, difficult work, but bearable. The workers were protected by union contracts that guaranteed a living wage and affordable healthcare, and many had worked decades at the company. They were mostly happy to work for Disney.

This changed in 2008. The union contracts were up, and Disney wouldn’t renew without adjustments. One of the changes involved how management tracked worker productivity. Before, the number of sheets, towels and comforters each worker washed, dried or folded was tracked on paper notes turned in at the end of the day. But Disney was replacing that system with an electronic tracking system that monitored their progress in real time.

Electronic monitoring wasn’t unusual in the hotel business. But Disney took the highly unusual step of displaying the productivity of their workers on scoreboards all over the laundry facilities, says Austin Lynch, director of organizing for Unite Here Local 11. According to Lynch, every worker’s name was compared with the names of coworkers, each one colour-coded like traffic signals. If you were keeping up with the goals of management, your name was displayed in green. If you slowed down, your name was in yellow. If you were behind, your name was in red. Managers could see the monitors from their office, and change production targets from their computers. Each laundry machine would also monitor the rate of worker input, and flash red and yellow lights at the workers directly if they slowed down.


‘They had a hard time ignoring it,’ said Beatriz Topete, a union organiser for Unite Here Local 11 at the time. ‘It pushes you mentally to keep working. It doesn’t give you breathing space.’ Topete recalled an incident where she was speaking to workers on the night shift, feeding hand-towels into a laundry machine. Every time the workers slowed down, the machine would flash at them. They told her they felt like they couldn’t stop.

The workers called this ‘the electronic whip’.

While this whip was cracking, the workers sped up. ‘We saw a higher incidence of injuries,’ Topete said. ‘Several people were injured on the job.’ The formerly collegial environment degenerated into a race. The laundry workers competed with each other, and got upset when coworkers couldn’t keep up. People started skipping bathroom breaks. Pregnant workers fell behind. ‘The scoreboard incentivises competition,’ said Topete. ‘Our human competitiveness, whatever makes us like games, whatever keeps us wanting to win, it’s a similar thing that was happening. Even if you didn’t want to.’

The electronic whip is an example of gamification gone awry.

Gamification is the application of game elements to nongame spaces. It is the permeation of ideas and values from the sphere of play and leisure into other social spaces. It’s premised on a seductive idea: if you layer elements of games, such as rules, feedback systems, rewards and videogame-like user interfaces, over reality, it will make any activity motivating, fair and (potentially) fun. ‘We are starving and games are feeding us,’ writes Jane McGonigal in Reality Is Broken (2011). ‘What if we decided to use everything we know about game design to fix what’s wrong with reality?’

Consequently, gamification is everywhere. It’s in coupon-dispensing loyalty programmes at supermarkets. Big Y, my local supermarket chain in Boston, employs digital slot machines at the checkout for its members. Winning dispenses ‘coins’ that can be redeemed for deals. Gamification is in the driver interfaces of Lyft and Uber, which give badges for miles driven. Gamification is the premise of fitness games such as Zombies, Run!, where users push themselves to exercise by outrunning digital zombies, and of language-learning apps such as Duolingo, where scoring prompts one to master more. The playground offices of Silicon Valley, complete with slides and ball pits, have been gamified. Your credit score is one big game, too.

But gamification’s trappings of total fun mask the fact that we have very little control over the games we are made to play – and that these games are not games at all. Gamified systems are tools, not toys. They can teach complex topics and engage us with otherwise difficult problems. Or they can function as subtle systems of social control.


Games are probably as old as the human species itself. Archaeologists have unearthed mancala-like boards made of stone in Jordan, dated to 6,000 BC. The application of games to serious matters has probably been with us almost as long. The Egyptian board game senet represented the passage of the ka (or vital spark) to the afterlife; its name is commonly translated as ‘the game of passing’. The Roman senatorial class played latrunculi, an abstract game of military strategy, to train the mind and pass the time. Dice-based games of chance are thought to have originated with ancient divination practices involving thrown knucklebones. Native American ball games served as proxies of war and were probably crucial to keeping the Iroquois Confederation together. As many as 1,000 players would converge to play what the Mohawk called baaga’adowe (the little brother of war).

The conflation of game and ritual is likely by design. The Dutch cultural historian Johan Huizinga observed in Homo Ludens (1938) that both invoke a magic circle, a time and place outside of the norms of reality. During play, as during ritual, new rules supersede the old. Players are not tried as thieves for ‘stealing’ a base in baseball. The Eucharist doesn’t literally become flesh during Catholic transubstantiation rituals. Through play and games, Egyptians could metaphorically engage with the afterlife without the inconvenience of dying.

An important aspect of early games was that they were still limited in size and scope. One-thousand-player stickball games between whole villages were a rarity. We don’t see the emergence of anything analogous to modern gamification until the 18th century, when Europe underwent a renaissance of games and game design. In 18th-century Paris, Rome, Vienna and London, an international leisure class emerged that communicated across national and linguistic divides through the medium of games. For example, one of the earliest four-person card games in Europe was ombre – from el hombre (the man) – which originated in 16th-century Spain. The game didn’t become known outside Spain until almost the end of the 17th century, with the marriage of Maria Theresa of Spain to Louis XIV of France. Within a few years, the game spread across the continent and was being played in the courts and salons of every capital in Europe.

The spread of ombre coincided with a boom in games and game culture in Europe. Abraham and David Roentgen became a father-and-son pair of rockstars for building foldable game-tables that could be rearranged to suit everything from backgammon to ombre. Play rooms appeared in the homes of the aristocracy and the emergent bourgeoisie. Books of rules such as Pleasant Pastime with Enchanting and Joyful Games to Be Played in Society (1757) were translated into multiple languages. The Catholic Church got in on the act with the liberalisation of lottery laws by popes Clement XII and Pius VI. In the 1750s, the Swiss mathematician and physicist Daniel Bernoulli even declared: ‘The century that we live in could be subsumed in the history books as … the Century of Play.’

In the mid-18th century, Gerhard Tersteegen, an enterprising priest, developed the ‘Pious Lottery’, a deck of 365 cards with various tasks of faith. ‘You’d read a prayer straight from the card,’ explains the historian Mathias Fuchs of Leuphana University in Germany. It is reminiscent of modern mindfulness or religious apps that attempt to algorithmically generate spiritual fulfilment.

Soon, 18th-century musicians were incorporating the logic of game design into their music through randomised card- or dice-based systems for musical composition. Johann Philipp Kirnberger, a student of Johann Sebastian Bach, and Carl Philipp Emanuel Bach, Bach’s second son, both wrote musical composition games – respectively, ‘The Ever-Ready Minuet and Polonaise Composer’ and ‘A Method for Making Six Bars of Double Counterpoint at the Octave Without Knowing the Rules’ (Musikalisches Würfelspiel), which was also attributed to Mozart. These games asked aspiring composers to roll a pair of dice to randomly select pre-written measures for minuets. According to one estimate, Mozart’s game features 1.3 × 10²⁹ possible combinations. Players would stitch measures of music together in the order rolled to compose a final product, in essence enacting an algorithm. In a way, these resemble modern musical rhythm games such as Guitar Hero that provide the illusion of musical mastery for the sake of entertainment.
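The procedure these games describe really is a tiny program, and it can be sketched in a few lines of Python. This is only an illustration, assuming the commonly described form of the Mozart-attributed game (a 16-bar minuet whose bars are chosen by the sum of two dice, plus a 16-bar trio whose bars are chosen by one die, which is where the 1.3 × 10²⁹ figure comes from); the variable and function names are mine, not the game’s.

    import random

    BARS = 16
    MINUET_OPTIONS = 11   # two-dice sums run from 2 to 12, so 11 candidate bars per position
    TRIO_OPTIONS = 6      # one die gives 6 candidate bars per position

    # Total distinct pieces the tables allow: 11^16 * 6^16, roughly 1.3 x 10^29.
    total = (MINUET_OPTIONS ** BARS) * (TRIO_OPTIONS ** BARS)
    print(f"{total:.1e}")

    def roll_minuet():
        """Roll two dice per bar; each sum picks one pre-written measure from the table."""
        return [random.randint(1, 6) + random.randint(1, 6) for _ in range(BARS)]

    def roll_trio():
        """Roll one die per bar of the trio."""
        return [random.randint(1, 6) for _ in range(BARS)]

    print(roll_minuet())  # e.g. a list of 16 sums such as 7, 4, 10, ...
    print(roll_trio())

The player simply looks up each rolled number in the game’s table and copies out the corresponding bar, enacting the algorithm by hand.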

It’s not clear what ended the century of play. Perhaps the rococo play culture of the 18th century ended with the wars and nationalistic fervour of the 19th. Fuchs suggests the French Revolution of 1789 as the likely cause. What’s clear is that the centrality of games as a cultural force wouldn’t reach 18th-century levels of saturation until the development of computers.

By the end of the 20th century, video-game consoles and then computers became more ubiquitous and user-friendly, and digital games rose in scale and scope. To make computers more accessible, human-computer interface designers borrowed elements from early video games. Graphical user interfaces replaced code. Games and gamers became distinct subsets of the computer software and computer hobbyist landscapes. Because the first computer games were experiments in software design, computer and hobby magazines regularly printed and distributed lines of code. Programs, including games, were freely available to remix and experiment on. Importantly, this hobbyist culture, while not a utopia of gender equality, was not strictly male-coded initially.

As software development became more corporate, and the user experience more centralised, the discourse shifted away from the quality of the software to gameplay and user experience. Game development corporations seized on a booming market, cultivating gamers as a distinct category of consumer, and focusing on white, adolescent and teenage boys. Jennifer deWinter, a video-game scholar at Worcester Polytechnic Institute in Massachusetts, refers to this as the construction of technomasculinity. ‘It takes over the ideology of what it takes to be a successful man … the gamer identity was constructed for them to consume as white, male and tech-savvy,’ she explains. The workers of the future would be gamers.


By 2008, the gamification of work felt absolutely natural to a generation of people raised on ubiquitous digital technology and computer games. Tech startups were faced with the challenge of attracting and retaining users. Game designers and marketers including Jane McGonigal and Ethan Zuckerman promoted the use of immersive game mechanics as a way of ‘hacking happiness’ and building user engagement at summits, speeches and TED talks. By 2010, interest in gamification intensified with the success of the social network game FarmVille, which seemed to have solved the problem of user retention and engagement. Marketers and consultants were quick to seize on gamification as a tool to create customer loyalty and manage human desire. They sought to capitalise on the ‘addictive fun’ of gambling and games by introducing ‘pseudo-goals’ unrelated to the primary goals of either the consumer or the business in question. Game design elements such as badges, points, scoreboards and progress-tracking proliferated across different platforms, apps and workspaces. In doing so, they unknowingly borrowed from the Pious Lottery. Saying a Hail Mary or going to church because of a game isn’t necessarily aligned with the goal of eternal salvation, in much the same way as buying blood oranges for loyalty points isn’t really the goal of grocery shopping.

This brings us back to the electronic whip; Disney was hardly alone. The US retail giant Target implemented the Checkout Game, which tracked and scored the speed of minimum-wage checkout clerks. The clerks could see themselves scored in real time on their point-of-sale computers. The US ice-cream parlour chain Cold Stone Creamery marshalled the power of games to teach workers how to be expert ice-cream mixers with the game Stone City, which used motion controls to teach people how to ‘feel’ out the correct scoops. The game calculated how large the scoops were in relation to the optimal sizes, and then told the players how much their over-scoops cost the store. Workers were asked to download the game and play it in their off-hours.

Amazon has also bought into gamifying work in a big way. Warehouse workers are subject to scoreboards that display the silhouettes of workers who were caught stealing, what they were caught stealing, and how they were caught. Their productivity is monitored by handheld devices that scan and locate products. If their productivity drops, workers are disciplined with points on a scorecard. As in golf, more points are bad. Accrue enough points, and the worker is fired. White-collar workers, too, are scored and ranked by digital metrics, and by their peers and bosses. Until 2016, the bottom scorers were fired in what employees called ‘rank and yank’.

Through gamified technology, corporations such as Amazon and Disney now have an unprecedented level of control over the individual bodies of their employees. Steve Sims, a vice-president at the gamification firm Badgeville (now CallidusCloud) in California, said: ‘We like to think of it as behaviour management.’ In other words, how to get other people to do more stuff, more often.

This kind of micromanagement resembles Taylorism, a system developed by the American engineer Frederick Winslow Taylor during the 1890s to codify the movements and habits of mind that led to productivity. To eliminate inefficiency and waste, Taylor followed around the ‘most productive’ factory workers, recording the timing of all their movements with a stopwatch. He set managers, similarly armed with stopwatches, to micromanage every detail of a job. Taylor was also famous for fudging his numbers in favour of speed, driving workers to exhaustion and, in some cases, to strike.

But the modern gamified workplace enables control beyond Taylor’s wildest dreams. Games are sets of rules prescribing both actions and outcomes. A gamified workplace sets not just goals for workers but precisely how those goals can be achieved. Managers don’t need to follow workers with stopwatches. They can use smartphones or apps. It’s micromanagement with unprecedented granularity. ‘This is Taylorism 2.0,’ according to the media expert Steven Conway of Swinburne University of Technology in Australia. ‘Activities are more rigidly defined and processed than ever.’ The gamified workplace is not a game in the original sense, nor does it cultivate playful ends.

The problem of the gamified workplace goes beyond micromanagement. The business ethicist Tae Wan Kim at Carnegie Mellon University in Pittsburgh warns that gamified systems have the potential to complicate and subvert ethical reasoning. He cites the example of a drowning child. If you save the child, motivated by empathy, sympathy or goodwill – that’s a morally good act. But say you gamify the situation. Say you earn points for saving drowning children. ‘Your gamified act is ethically unworthy,’ he explained to me in an email. Providing extrinsic gamified motivators, even if they work as intended, deprives us of the option to live worthy lives, Kim argues. ‘The workplace is a sacred space where we develop ourselves and help others,’ he notes. ‘Gamified workers have difficulty seeing what contributions they really make.’

The problem isn’t limited to work. Social platforms all employ some form of gamification in their stats, figures, points, likes and badges. Dating apps gamify our romantic life; Facebook gamifies friendship.

Even war has been gamified: drone pilots operate in a highly gamified environment. Foeke Postma, a researcher and programme officer at the Dutch peace organisation PAX, says that drone warfare often takes the shape of a game, right down to the joysticks or PlayStation-like controllers that the pilots use. ‘The US Air Force and the Royal Air Force have specifically targeted gamers to recruit as drone operators,’ he explains. The US drone programme also employs game-like terminology when discussing targets. High-value assassination targets are called ‘jackpots’. Anyone caught near a jackpot during an airstrike is called ‘bugsplatter’. When drone pilots retire or transfer, they’re given a scorecard of kills. Postma says that this framework risks the total dehumanisation of the targets of drone warfare. In an interview with The Guardian, a drone pilot said: ‘Ever step on ants and never give it another thought?’

The expansion of game-like elements into nongame spaces is a global phenomenon. We are all living in expanding, overlapping magic circles, with some places moving faster than others. China is introducing a national, gamified social credit score through public-private partnerships. Eight credit-scoring systems have been granted charters, and each has a share of the national credit system. One social credit system ranks you based on how well you repay loans, the scores of your friends, where you shop and what you post to social media. This ranking determines whether you can receive loans or obtain a visa. In the US, the more limited FICO score can determine whether you get an apartment, a car, or a job.

The 20th-century French philosopher Michel Foucault would have said that these are technologies of power. Today, the interface designer and game scholar Sebastian Deterding says that this kind of gamification expresses a modernist view of a world with top-down managerial control. But the concept is flawed. Gamification promises easy, centralised overviews and control. ‘It’s a comforting illusion because de facto reality is not as predictable as a simulation,’ Deterding says. You can make a model of a city in SimCity that bears little resemblance to a real city. Mistaking games for reality is ultimately mistaking map for territory. No matter how well-designed, a simulation cannot account for the unforeseen.

A prime example of gamification gone awry is Go365, a health app introduced in 2017 by the Public Employees Insurance Agency (PEIA) in West Virginia and the Humana health insurance company. The app was presented as a motivating tool and game, not unlike smartphone fitness apps. Go365’s advertisements featured white, upper-middle-class joggers and attractively dishevelled soccer moms buying carrots. The app tracked physical activity, steps and location. It also allowed users to give more sensitive information to Humana, such as blood glucose levels, sleep cycle, diet and the results of doctor’s visits. Users were asked how often they drank and whether they smoked. Family medical histories were probed. The app awarded points, set milestones and gave rewards for participation in the form of ‘Bucks’ that could be redeemed for gift cards. The agency claimed that the app was voluntary, but failure to accrue enough points (and to increase points annually) meant an extra $500 in premiums and an additional $1,000 on top of existing deductibles. That might not sound like a lot, but most teachers and support staff in West Virginia make less than $40,000 a year. Many have second jobs. Many more are elderly or have chronic illnesses.

The legislature gave teachers no option but to play Go365 – but how they were supposed to play was another matter. ‘It was the cherry on top of a shit sundae,’ said Michael Mochaidean, a teacher and organiser in West Virginia. The teachers didn’t want to give up sensitive medical data. They didn’t want their locations tracked. After years of funding cuts to the PEIA, they saw the app as a way to kick teachers off their healthcare altogether.

Enraged, the teachers of West Virginia took to Facebook. They complained, they organised, and in March of 2018 thousands of them descended on the capitol in Charleston in a wildcat strike. After years of low pay and slashed benefits, their dissatisfaction had finally crystallised around the imposition of Go365. They would not participate in the game. By the end of the strike, the teachers had won a pay raise, and forced West Virginia to end its contract with Humana. Go365 was phased out. The teachers had sent a message to their bosses. Neither their work nor their health was a game.

This article was republished under a Creative Commons license from Aeon. Read the original here.


Can Our Brains Really Read Jumbled Words as Long as The First And Last Letters Are Correct?

Don’t believe everything you read online.

Source: Can Our Brains Really Read Jumbled Words as Long as The First And Last Letters Are Correct?

MICHELLE STARR

You’ve probably seen this classic piece of “internet trivia” before – it’s been circulating since at least 2003.

At first glance, it seems legit. Because you can actually read it, right? But while the meme contains a grain of truth, the reality is always more complicated.

The meme asserts, citing an unnamed Cambridge scientist, that if the first and last letters of a word are in the correct places, you can still read a piece of text.

We’ve unjumbled the message verbatim.

“According to a researche [sic] at Cambridge University, it doesn’t matter in what order the letters in a word are, the only importent [sic] thing is that the first and last letter be at the right place. The rest can be a total mess and you can still read it without problem. This is because the human mind does not read every letter by itself but the word as a whole.”

In fact, there never was a Cambridge researcher (the earliest form of the meme actually circulated without that particular addition), but there is some science behind why we can read that particular jumbled text.

The phenomenon has been given the slightly tongue-in-cheek name “Typoglycaemia,” and it works because our brains don’t just rely on what they see – they also rely on what we expect to see.

In 2011, researchers from the University of Glasgow, conducting unrelated research, found that when something is obscured from or unclear to the eye, human minds can predict what they think they’re going to see and fill in the blanks.

“Effectively, our brains construct an incredibly complex jigsaw puzzle using any pieces it can get access to,” explained researcher Fraser Smith. “These are provided by the context in which we see them, our memories and our other senses.”

However, the meme is only part of the story. Matt Davis, a researcher at the University of Cambridge’s MRC Cognition and Brain Sciences Unit, wanted to get to the bottom of the “Cambridge” claim, since he believed he should have heard of the research before.

He managed to track down the original demonstration of letter randomisation to a researcher named Graham Rawlinson, who wrote his PhD thesis on the topic at Nottingham University in 1976.

He conducted 16 experiments and found that yes, people could recognise words if the middle letters were jumbled, but, as Davis points out, there are several caveats.

  • It’s much easier to do with short words, probably because there are fewer variables.
  • Function words that provide grammatical structure, such as and, the and a, tend to stay the same because they’re so short. This helps the reader by preserving the structure, making prediction easier.
  • Switching adjacent letters, such as porbelm for problem, is easier to translate than switching more distant letters, as in plorebm.
  • None of the words in the meme are jumbled to make another word – Davis gives the example of wouthit vs witohut. This is because words that differ only in the position of two adjacent letters, such as calm and clam, or trial and trail, are more difficult to read.
  • The words all more or less preserved their original sound – order was changed to oredr instead of odrer, for instance.
  • The text is reasonably predictable.

It also helps to keep double letters together. It’s much easier to decipher aoccdrnig and mttaer than adcinorcg and metatr, for example.

There is evidence to suggest that ascending and descending elements play a role, too – that what we’re recognising is the shape of a word. This is why mixed-case text, such as alternating caps, is so difficult to read – it radically changes the shape of a word, even when all the letters are in the right place.

If you have a play around with this generator, you can see for yourself how properly randomising the middle letters of words can make text extremely difficult to read. Try this:

The adkmgowenlcent – whcih cmeos in a reropt of new mcie etpnremxeis taht ddin’t iotdncure scuh mantiotus – isn’t thelcclnaiy a rtoatriecn of tiher eearlir fidginns, but it geos a lnog way to shnwiog taht the aalrm blels suhold plarobby neevr hvae been sdnuoed in the fsrit plcae.

Maybe that one is cheating a little – it’s a paragraph from a ScienceAlert story about CRISPR.

The acknowledgment – which comes in a report of new mice experiments that didn’t introduce such mutations – isn’t technically a retraction of their earlier findings, but it goes a long way to showing that the alarm bells should probably never have been sounded in the first place.

See how you go with this one.

Soaesn of mtiss and mloelw ftisnflurues,
Csloe boosm-feinrd of the mrtuniag sun;
Cnponsiirg wtih him how to laod and besls
Wtih friut the viens taht runod the tahtch-eevs run

That’s the first four lines of the poem “To Autumn” by John Keats.

Season of mists and mellow fruitfulness,
Close bosom-friend of the maturing sun;
Conspiring with him how to load and bless
With fruit the vines that round the thatch-eves run
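For the curious, a generator like the one mentioned above is easy to sketch. The following is a minimal Python illustration, not the actual tool linked earlier, and the function names are just for the example: it keeps each word’s first and last letters in place and fully shuffles everything in between.

    import random
    import re

    def jumble_word(word):
        """Shuffle a word's interior letters, leaving the first and last in place."""
        if len(word) <= 3:          # nothing to shuffle in words of three letters or fewer
            return word
        middle = list(word[1:-1])
        random.shuffle(middle)
        return word[0] + "".join(middle) + word[-1]

    def jumble_text(text):
        """Jumble every alphabetic run; punctuation, spaces and numbers stay put."""
        return re.sub(r"[A-Za-z]+", lambda m: jumble_word(m.group(0)), text)

    print(jumble_text("Season of mists and mellow fruitfulness"))
    # e.g. "Sosaen of mtiss and molelw felisnuftrus" -- output varies on every run

Note that this sketch makes no attempt to keep double letters together, preserve a word’s sound, or favour adjacent swaps, so, as the caveats above predict, its output is usually far harder to read than the gently jumbled meme.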

So while there are some fascinating cognitive processes behind how we use prediction and word shape to improve our reading skills, it really isn’t as simple as that meme would have you believe.

If you want to delve into the topic further, you can read Davis’ full and fascinating analysis here.


Brand Connected – Media Psychology

Tiffany White tells a cautionary tale about the modern, connected consumer. Tiffany Barnett White is Associate Professor of Business Administration and Bruce and Anne Strohm Faculty Fellow at the University of Illinois, College of Business. She joined the faculty at Illinois in 1999 and received a Ph.D. in marketing from Duke University in 2000. […]

via The [brand] connected consumer – Tiffany White  — consumer psychology research

Digital Nation

Is our 24/7 wired world causing us to lose as much as we’ve gained?

 

 

Source: Digital Nation

SEASON 28: EPISODE 9

Over a single generation, the Web and digital media have remade nearly every aspect of modern culture, transforming the way we work, learn, and connect in ways that we’re only beginning to understand. FRONTLINE producer Rachel Dretzin (Growing up Online) teams up with one of the leading thinkers of the digital age, Douglas Rushkoff (The Persuaders, Merchants of Cool), to continue to explore life on the virtual frontier. The film is the product of a unique collaboration with visitors to the Digital Nation website, who for the past year have been able to react to the work in progress and post their own stories online. [Explore more stories on the original Digital Nation website.]

Watch this documentary at https://www.pbs.org/wgbh/frontline/film/digitalnation/?fbclid=IwAR1306NU_-_a2KeXTN4chlR2rRj_b66DD6hsD5wWN-MPh25JdQLTyB3xr0s

 

Smart Phone, Lazy Brain

We still call them “phones,” but they are seldom used for talking. They have become like a substitute for memory—and other brain functions. Is that good for us in the long run?


Source: Smart Phone, Lazy Brain

BY SHARON BEGLEY

You probably know the Google Effect: the first rigorous finding in the booming research into how digital technology affects cognition. It’s also known as digital amnesia, and it works like this: When we know where to find a piece of information, and when it takes little effort to do so, we are less likely to remember that information. First discovered by psychologist Betsy Sparrow of Columbia University and her colleagues, the Google Effect causes our brains to take a pass on retaining or recalling facts such as “an ostrich’s eye is bigger than its brain” (an example Sparrow used) when we know they are only a few keystrokes away.

“Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally,” Sparrow explained in her 2011 paper. “When we need it, we will look it up.” Storing information requires mental effort—that’s why we study before exams and cram for presentations—so unless we feel the need to encode something into a memory, we don’t try. Result: Our recollection of ostrich anatomy, and much else, dissipates like foam on a cappuccino.

It’s tempting to leap from the Google Effect to dystopian visions of empty-headed dolts who can’t remember even the route home (thanks a lot, GPS), let alone key events of history (cue Santayana’s hypothesis that those who can’t remember history are doomed to repeat it). But while the short-term effects of digital tech on what we remember and how we think are real, the long-term consequences are unknown; the technology is simply too new for scientists to have figured it out.


Before we hit the panic button, it’s worth reminding ourselves that we have been this way before. Plato, for instance, bemoaned the spread of writing, warning that it would decimate people’s ability to remember (why make the effort to encode information in your cortex when you can just consult your handy papyrus?). On the other hand, while writing did not trigger a cognitive apocalypse, scientists are finding more and more evidence that smartphones and internet use are affecting cognition already.

The Google Effect? We’ve probably all experienced it. “Sometimes I spend a few minutes trying hard to remember some fact”—like whether a famous person is alive or dead, or what actor was in a particular movie—“and if I can retrieve it from my memory, it’s there when I try to remember it two, five, seven days later,” said psychologist Larry Rosen, professor emeritus at California State University, Dominguez Hills, who researches the cognitive effects of digital technology. “But if I look it up, I forget it very quickly. If you can ask your device any question, you do ask your device any question” rather than trying to remember the answer or doing the mental gymnastics to, say, convert Celsius into Fahrenheit.

“Doing that is profoundly impactful,” Rosen said. “It affects your memory as well as your strategy for retrieving memories.” That’s because memories’ physical embodiment in the brain is essentially a long daisy chain of neurons, adding up to something like architect I.M. Pei is alive or swirling water is called an eddy. Whenever we mentally march down that chain we strengthen the synapses connecting one neuron to the next. The very act of retrieving a memory therefore makes it easier to recall next time around. If we succumb to the LMGTFY (let me Google that for you) bait, which has become ridiculously easy with smartphones, that doesn’t happen.

To which the digital native might say, so what? I can still Google whatever I need, whenever I need it. Unfortunately, when facts are no longer accessible to our conscious mind, but only look-up-able, creativity suffers. New ideas come from novel combinations of disparate, seemingly unrelated elements. Just as having many kinds of Legos lets you build more imaginative structures, the more elements—facts—knocking around in your brain the more possible combinations there are, and the more chances for a creative idea or invention. Off-loading more and more knowledge to the internet therefore threatens the very foundations of creativity.

Besides letting us outsource memory, smartphones let us avoid activities that many people find difficult, boring, or even painful: daydreaming, introspecting, thinking through problems. Those are all so aversive, it seems, that nearly half of people in a 2014 experiment whose smartphones were briefly taken away preferred receiving electric shocks to being alone with their thoughts. Yet surely our mental lives are the poorer every time we check Facebook or play Candy Crush instead of daydreaming.

But why shouldn’t we open the app? The appeal is undeniable. We each have downloaded an average of nearly 30 mobile apps, and spend 87 hours per month internet browsing via smartphone, according to digital marketing company Smart Insights. As a result, distractions are just a click away—and we’re really, really bad at resisting distractions. Our brains evolved to love novelty (maybe human ancestors who were attracted to new environments won the “survival of the fittest” battle), so we flit among different apps and websites.

As a result, people spend an average of just three to five minutes at their computer working on the task at hand before switching to Facebook or another enticing website or, with phone beside them, a mobile app. The most pernicious effect of the frenetic, compulsive task switching that smartphones facilitate is to impede the achievement of goals, even small everyday ones. “You can’t reach any complex goal in three minutes,” Rosen said. “There have always been distractions, but while giving in used to require effort, like getting up and making a sandwich, now the distraction is right there on your screen.”

The mere existence of distractions is harmful because resisting distractions that we see out of the corner of our eye (that Twitter app sitting right there on our iPhone screen) takes effort. Using fMRI to measure brain activity, neuroscientist Adam Gazzaley of the University of California, San Francisco, found that when people try to ignore distractions it requires significant mental resources. Signals from the prefrontal cortex race down to the visual cortex, suppressing neuronal activity and thereby filtering out what the brain’s higher-order cognitive regions have deemed irrelevant. So far, so good.

The problem is that the same prefrontal regions are also required for judgment, attention, problem solving, weighing options, and working memory, all of which are required to accomplish a goal. Our brains have limited capacity to do all that. If the prefrontal cortex is mightily resisting distractions, it isn’t hunkering down to finish the term paper, monthly progress report, sales projections, or other goal it’s supposed to be working toward. “We are all cruising along on a superhighway of interference” produced by the ubiquity of digital technology, Gazzaley and Rosen wrote in their 2016 book The Distracted Mind. That impedes our ability to accomplish everyday goals, to say nothing of the grander ones that are built on the smaller ones.

The constant competition for our attention from all the goodies on our phone and other screens means that we engage in what a Microsoft scientist called “continuous partial attention.” We just don’t get our minds deeply into any one task or topic. Will that have consequences for how intelligent, creative, clever, and thoughtful we are? “It’s too soon to know,” Rosen said, “but there is a big experiment going on, and we are the lab rats.”

Tech Invasion LMGTFY

“Let me Google that for you” may be some of the most damaging words for our brain. Psychologists have theorized that the “Google Effect” causes our memories to weaken due merely to the fact that we know we can look something up, which means we don’t keep pounding away at the pathways that strengthen memory. Meanwhile, research suggests that relying on GPS weakens our age-old ability to navigate our surroundings. And to top it all off, the access to novel info popping up on our phone means that, according to Deloitte, people in the US check their phones an average of 46 times per day—which is more than a little disruptive.

Sharon Begley is a senior science writer with The Boston Globe Media Group, author of Train Your Mind, Change Your Brain, and coauthor with Richard Davidson of The Emotional Life of Your Brain. She writes a regular column for Mindful magazine called Brain Science.


Do We Form Similarly Close Relationship With Brands As With Our Loved One? – Media Psychology

Are brands addictive?

Source: Do We Form Similarly Close Relationship With Brands As With Our Loved One?

Martin Reimann

For human relationships it is known that after an initial electrifying honeymoon period, excitement for the loved partner often goes down and is maintained at a lower level. At the same time, however, we include our […]

via Do We Form Similarly Close Relationship With Brands As With Our Loved One? — consumer psychology research

We know what will make us happy, why do we watch TV instead?

Do we know instinctively what kind of activities are conducive to lasting happiness? If so, why don’t more of us do them more often?

Source: We know what will make us happy, why do we watch TV instead?

By Christian Jarrett

The luxury microwave meal was delicious, the house is warm, work’s going OK, but you’re just not feeling very happy. Some positive psychologists believe this is because many of us in rich, Western countries spend too much of our free time on passive activities, like bingeing on Netflix and browsing Twitter, rather than on active, psychologically demanding activities, like cooking, sports or playing music, that allow the opportunity to experience “flow” – that magic juncture where your abilities only just meet the demands of the challenge. A new paper in the Journal of Positive Psychology examines this dilemma. Do we realise that pursuing more active, challenging activities will make us happier in the long-run? If so, why then do we opt to spend so much more time lazing around engaged in activities that are pleasant in the moment, but unlikely to bring any lasting fulfilment?

Across two studies, L. Parker Schiffer and Tomi-Ann Roberts at the Claremont Graduate University and Colorado College, surveyed nearly 300 people (presumably US citizens, average age 33/34 years) via Amazon’s Mechanical Turk website about what they thought of dozens of different activities: some passive like listening to music or watching movies, others more active and potentially flow-inducing, such as making art or meditating. Specifically, the participants rated how enjoyable, effortful, and daunting they considered the activities to be, as well as how often they engaged in each of them in a typical week. The participants also identified which activities they considered the most and least conducive to lasting happiness.

There was a clear pattern in the participants’ answers: they identified more effortful activities as being more associated with lasting happiness, yet they said they spent much more time on passive, relaxation-based activities, like watching TV. Looking at their other judgments, the key factor that seemed to deter participants from engaging in more active, flow-inducing activities is that they tended to be seen as particularly daunting and less enjoyable, even while being associated with lasting happiness. The more daunting an activity was deemed to be, the less frequently it was undertaken (by contrast, and to the researchers’ surprise, the perceived effort involved in the activity did not seem to be a deterrent).

Schiffer and Roberts consider this to be a paradox of happiness: we know which kind of activities will bring us lasting happiness, but because we see them as daunting and less enjoyable in the moment, we choose to spend much more of our time doing passive, more immediately pleasant things with our free time. Their advice is to plan ahead “to try to ease the physical transition into flow activities” to make them feel less daunting. For example, they suggest getting your gym clothes and bag ready the night before, and choosing a gym that’s close and convenient; or getting your journal and pen, or easel and paintbrushes, ready in advance.

The other thing they suggest is using mindfulness, meditation or some other “controlled consciousness” technique to help yourself to disregard the initial “transition costs” of a flow activity, such as the early pain of a run, and to focus instead on its pleasurable aspects and the long-term rewards.

“Future research is needed in order to empirically back our proposal that preplanning, prearranging, and controlled consciousness may aid overcoming the activation energy and transition costs that stand in the way of our true happiness,” the researchers said.

The paradox of happiness: Why are we not doing what we know makes us happy?

Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest

How Your Brain Forces You to Watch Ads

…and how you can learn to ignore them


Source: How Your Brain Forces You to Watch Ads

Douglas Van Praet

Every day we navigate through a cluttered media environment of thousands of ads vying for our precious time and limited attention. Studies in North America have shown that on average we are exposed to 3,000 ads per day. If you think you can simply choose to ignore these messages, think again. The best ads are designed to slip through your best defenses.

That’s because every consumer, i.e., human, has an automatic hardwired process for attention and awareness. And our decision to pay attention to stimuli in our environment (such as advertising) is often determined by our emotions, not our thoughts. But here is the challenge for viewers. We don’t choose our emotions. They happen unconsciously. We can only try to choose how to think about our feelings after the fact. So when an advertisement triggers a strong emotion, brands can rise to the top of shopping lists and markets. Because at this stage of human evolution, our feelings influence our thinking way more than our thoughts influence our emotions.

Think of emotions as automated action programs that guide us through our (media) environment without our having to think. Ads that trigger emotions can literally hijack critical thought and conscious awareness. Research has shown that ads processed with high levels of attention are six times more impactful at driving brand choice than ads that aren’t consciously recalled. And cognitive science experiments corroborate that familiarity breeds affection through mere exposure.

Every second your senses are taking in about 11 million bits of information, but you are only aware of about 40 of those bits. Because our conscious mind is so limited, it works on a need-to-know basis. Think of the human brain as a survival machine vigilantly scanning the environment, always making predictions about what will happen next. It works by recognizing and responding to patterns. Cognitive science tells us we don’t notice the world around us when it’s reliably predicted away, when what we are experiencing in the moment matches our intuitive predictions.

However, missed predictions fire a hardwired neural response that biologically commands our attention. This reaction is what neuroscientists technically call the “Oh Shit!” circuit. When we expect something to happen and it does not, a distress signal is released from the anterior cingulate cortex (ACC). The ACC is closely wired to the thalamus, a dual-lobed mass of gray matter beneath the cerebral cortex that plays a critical role in awareness by helping direct conscious attention. Nothing grabs our attention better than the element (and emotion) of surprise. Advertisers do this best by interrupting expected patterns.

In addition, novelty primarily activates the dopamine system in our brain, which is responsible for wanting behavior. The dopamine system also has a close relationship with the opioid system of the brain, which produces pleasurable sensations. Since learning is so important to human survival, it makes sense that natural selection has also instilled within us feel-good emotional responses to novel stimuli.

For instance, the Old Spice brand completely transformed its old-fashioned image thanks to an infectious effort that was brimming with pattern interrupts. This campaign embedded a much cooler and contemporary brand image in the minds of people by introducing the world to the charismatic hunk Isaiah Mustafa, or “the man your man can smell like.”

The magic behind this amazingly impactful campaign is not just the smooth pitchman of Old Spice body wash, but the equally smooth interruptions. The introductory commercial featured a series of seamless transitional pattern interrupts as Isaiah directs the viewer’s attention from unsuspecting scene to scene. He goes from his bathroom, to dropping in on a sailboat, to finally ending up atop a horse. Our brains are surprised and delighted with a blast of dopamine and the payout of attention again, again, and again. The decision to watch this ad is not a conscious choice. It is the neurobiological equivalent of a forced exposure. Not surprisingly, this campaign generated an amazing 1.4 billion media impressions and a 27% increase in sales during the first six months post launch.

Similarly, there are certain stimuli—such as babies, for example—that come prepackaged with positive emotional responses. We don’t consciously choose to find babies adorable, any more than we choose to feel the “aww” reaction that commandeers our thoughts or the impetus to post pictures all over Facebook. The decision to find babies so compelling was made millions of years ago through evolution and natural selection. If our forebears were not instinctually compassionate towards these innocent, helpless creatures, they would have never survived, and our DNA and species would eventually have ceased to exist.

So when ads add novel twists to these mini mush magnets, attention and engagement soar. Take for instance the computer-generated Evian babies on roller skates who break-danced and back-flipped their way to what the Guinness Book of World Records declared was the most viewed online ad in history. More recently, the most watched ad on YouTube in 2013 was another spot by Evian called “Baby & Me.” This approach featured grown-ups dancing while unexpectedly discovering their inner babies dancing in sync as their reflections in a mirror.

Just because you are aware of seeing an ad or buying a brand doesn’t mean you are aware of the unconscious forces that prompted you to do so. The only way to avoid the trap of becoming glued to these types of advertising is to become aware of the patterns. So many of today’s ads are based on interrupting patterns and generating deep primal emotions because our attention span is an increasingly rare resource. By becoming aware of these patterns, your mind will intuitively learn to predict and ignore them in the future, and you’ll gain back precious seconds of your busy life.

And remember to push the pause button in your mind and rationally contemplate what draws you to advertising and products in the first place. When it comes to buying brands we often don’t have free will, but we do have free won’t. We can’t help having the feelings tugging at our heartstrings and desires. But we can also rationally reject these suggestions come shopping time if they don’t make sense.

For more information check out my book: Unconscious Branding

www.unconsciousbranding.com

https://twitter.com/DouglasVanPraet

The Reading Brain

Does Reading Give Us Access to Other People’s Minds?

Source: The Reading Brain

Jason Tougaw

In her book The Shaking Woman, Siri Hustvedt delights in reading’s power to recast her “internal narrator”:

The closest we can get to . . . entrance into another person’s psyche is through reading. Reading is the mental arena where different thought styles, tough and tender, and the ideas generated by them become more apparent. We have access to a stranger’s internal narrator. Reading, after all, is a way of living inside another person’s words. His or her voice becomes my narrator for the duration. Of course, I retain my own critical faculties, pausing to say to myself, Yes, he’s right about that or No, he’s forgotten this point entirely or That’s a clichéd character, but the more compelling the voice on the page is, the more I lose my own. I am seduced and give myself up to the other person’s words.


Of course, reading doesn’t simply give us access to “another person’s psyche.” Hustvedt argues it’s as close as we get, without the onus to define how close that might be. She describes the capacity of a writer’s voice to become her narrator, to mix with the stream of her consciousness, to give her access to unfamiliar “thought styles” that may lead to new ideas, new ways of understanding the world—and, ultimately, living with it.

Neuroscientist Stanislas Dehaene argues that “the human brain never evolved for reading. . . .  The only evolution was cultural—reading itself progressively evolved toward a form adapted to our brain circuits.” Reading is a human invention, made possible by pre-existing brain systems devoted to representing shapes, sound, and speech.  Nonetheless, Dehaene acknowledges that “an exponential number of cultural forms can arise from the multiple combinations of restricted selection of fundamental traits.” In other words, the malleability of the brain’s representational systems enables the continuous evolution of new forms of representation.

The literary wing of the so-called “neurohumanities” has been busy with researchers and theorists investigating what it might mean to “live inside another’s words” and the variations of reading possible within the physiological constraints Dehaene describes. Three books in particular have made a splash: Lisa Zunshine’s Why We Read Fiction: Theory of Mind and the Novel (2006), Suzanne Keen’s Empathy and the Novel (2007), and Blakey Vermeule’s Why Do We Care about Literary Characters? (2009). The titles of these books represent the clarity of their purposes and their shared interests in so-called “mind reading“–how we know what another person thinks and feels, or how literature trains us to guess.

Zunshine draws on theory of mind research in cognitive science to argue that literary texts satisfy, create, and test “cognitive cravings,” focusing mostly on cognitive capacities to imagine other people’s mental experiences—and the centrality of doing so to navigating social relations. She makes a strong argument that writers like Virginia Woolf and Jane Austen offer a kind of cognitive exercise, pushing us to practice levels of “cognitive embedment”–for example, she realized that he thought she was laughing inside, and this worried her. We practice imagining each other imagining each other’s minds.

Keen emphasizes neuro-cognitive research—especially the fMRI studies of Tania Singer—that links empathy to so-called mirror neurons. She observes that “Singer and her colleagues conclude that empathy is mediated by the part of the pain network associated with pain’s affective qualities, but not its sensory qualities.” In other words, we can imagine other people’s pain, but we can’t feel it. As a result, Keen’s conclusions are multifarious—and not entirely rosy: it may be easier to empathize with fictional characters than real people; novelists (and writers and artists in general) may be more empathetic than the general population; empathetic responses occur more readily in response to negative emotions; empathy does not necessarily lead to altruism or action; and empathy can lead to an aversive response as well as a sympathetic one.

Vermeule focuses on literary characters as “tools to think with”: “Literary narratives prove us and make us worry about what it is to interact with fictional people. And we should worry, because interacting with fictional people turns out to be a central cognitive preoccupation, one that exposes many of the aspects of how our minds work.” Vermeule’s “fictional people” include characters like Clarissa Dalloway or Humbert Humbert, but also representations of actual people we don’t know, like Barack Obama or Caitlyn Jenner, and people we do know, even those we’re intimate with. When we imagine other people’s mental lives, we create a kind of productive fiction. Literature, she argues, makes us attentive to forms of representation that shape the ways we live. If we don’t recognize the role of representation in the shaping of social relations, we will mistake our mental reproductions of others for “the real properties” of those people, rather than recognizing the cognitive filters that enable us to relate to them.

Some of this research has gotten a lot of press—for example, Natalie Phillips’s fMRI research on reading Jane Austen, featured on NPR, the Huffington Post, and Salon well before it was published in journals. Phillips conducted her research on a fellowship at Stanford, which touted it with the headline “This Is Your Brain on Jane Austen.” Phillips’s research is a multi-disciplinary collaboration—whose process mirrors its premises with a productive irony Austen might appreciate. She’s interested in the limits of attention, studying Austen’s fiction to make arguments about how it challenges readers to adopt multiple perspectives that test those limits.

Samantha Holmsworth, a neuroimaging expert on the project, describes the challenges: “We were all interested, but working at the edge of our capacity to understand even 10 percent of what each other were saying”—an estimate revised to 30 percent in an academic article that finally fleshed out the results that had received so much preliminary hype. Phillips presents her research with enthusiasm, as a hypothesis that requires further study. In short, close reading (attending to questions about form) and pleasure reading (getting lost in a book) involve related but different forms of representation.

The “neural signatures” involved multiple brain systems, and Phillips envisions future research using a “functional connectivity” approach to measure “synchronous patterns that emerge in parallel across the brain and investigates how these connections change as we engage stimulus over time.” Close reading seems to initiate more widespread activity than pleasure reading, including the somatosensory cortex and motor cortex—areas involved in space and movement.

This is nascent research, and its hypotheses are tentative. That seems appropriate. If Jane Austen abhorred anything, it was too definitive a conclusion. In Austen, mind reading is always misreading.

 

Jason Tougaw is the author of The Elusive Brain: Literary Experiments in the Age of Neuroscience (Yale UP) and The One You Get: Portrait of a Family Organism (Dzanc Books).