What Is A Digital Fast And Should I Go On One?

The answer depends on how you see your relationship with technology.

Source: What Is A Digital Fast And Should I Go On One?

Utpal Dholakia Ph.D.

What is a digital fast?

Also known by various other terms like “digital detox,” “digital Sabbath,” and “unplugging,” the idea behind a digital fast is to voluntarily and deliberately stop using all connected devices – smartphones, computers, tablets, and so on – that connect you to the internet for a pre-specified amount of time. The abstention could be for as little as a few hours (say, from 7 pm until the next morning). However, most digital fasts are at least a day long, and many span an entire weekend or even longer. Just like food fasts, longer digital fasts are thought to be more effective (up to a point) in weaning oneself away from digital connectivity and in regaining self-control.

Source: distraction by underminingme, Flickr, Licensed Under CC BY 2.0

Some versions of the digital fast recommend off-line activities you should perform instead (with all the spare time freed up from not texting and Facebooking): going on a walk or a hike, or organizing a pot-luck meal with friends who are also digitally fasting.

In theory, the idea of a digital fast has generated a lot of interest, with organizations such as Adbusters and the Huffington Post each promoting their respective versions of digital fasts. It is especially popular among Silicon Valley employees of tech companies like Facebook and Google.

But in practice, digital fasting really hasn’t moved beyond the fringes of our culture. One reason is that there is a raging debate about whether a digital fast is even desirable. Advocates take the stance that digital technology has addictive properties that cause harm in various ways, and that we need to regain control over its use. Its opponents counter with the argument that digital technology satisfies our fundamental needs today and is irreplaceable. So, they say, there is no need for anyone to fast or unplug.

In this blog post, I want to provide brief synopses of both positions, so that you can make a better decision for yourself about whether a digital fast is something worth doing.

The Addiction Perspective: I am Addicted to Digital Technology and this is a Growing & Serious Problem

Source: The Hummingbird Doll by Paree, Flickr, Licensed Under CC BY 2.0

Over the past decade, a growing number of psychologists have started to see addiction to digital technologies as a form of behavioral addiction, similar to pathological gambling and even to substance-dependence addictions. They point to the fact that when using smartphones, playing online games, or using social media, many people exhibit features that are very similar to those displayed by drug addicts. These features include excessive use of the technology without discretion, symptoms of withdrawal such as feelings of anger, tension, or depression when not using it, and negative repercussions from use such as a lowered ability to focus and insomnia. One study from 2014 led by psychologist Julia Hormes concluded:

“The use of online social networking sites is potentially addictive… Disordered online social networking use seems to arise as part of a cluster of symptoms of poor emotion regulation skills and heightened susceptibility to both substance and non-substance addiction.”

Under this perspective, unchecked use of digital technology is a potential problem that could have serious consequences. One avenue to gain control over the problem is to disconnect from the source of the addiction in a deliberate and systematic way by means of a digital fast.

The Unalloyed Empowerment Perspective: Digital Technologies Satisfy My Fundamental Needs and Allow Me to do Important Things That I Cannot Otherwise Do

Source: Love at a Distance by Cubmundo, Flickr, Licensed Under CC BY 2.0

Technology enthusiasts take an entirely different view. They see nothing problematic or dangerous about our incessant digital connectivity. They believe that there is no need for anyone to unplug at all, even momentarily. Rather, they argue that digital technologies now satisfy many of our most basic needs, and that we no longer have other, old-fashioned ways of meeting these needs. For instance, (at least in the United States) people no longer stroll to the town square every evening, or call each other to talk for hours on a landline phone. Our online and offline lives have now blended together so completely that one is not possible without the other. Thus, fasting digitally would do nothing more than deprive us of basic need fulfillment, and it constitutes a case of unnecessary technological asceticism. This view is expressed nicely by business consultant Alexandra Samuel:

“When we’re online — not just online, but participating in social media — we’re meeting some of our most basic human needs.  Needs like creative expression. The need to connect with other people. The need to be part of a community. Most of all, the need to be seen: not in a surface, aren’t-you-cute way, but in a deep, so-that’s-what’s-going-on-inside-your-head way.”

Under this perspective on technology and digital connectivity, a digital fast is clearly anathema. This view would see each of us as an electric appliance: unless it is plugged in to a power source, it is useless. In the same way, we are only functioning when we are digitally connected.

Which perspective makes sense – Addiction or Unalloyed Empowerment?

It is worth thinking deeply about which perspective on your relationship with technology – addiction or empowerment – resonates with you. When I thought about this question, it was clear that even though using digital technology and being connected is empowering, I am also addicted to its use. Some experts suggest that people like me should approach digital connectivity like a diet instead of a fast; in other words, use technology in a restrained way rather than not at all. Nevertheless, I choose to unplug from all digital devices from time to time, usually for a few hours, or sometimes for a weekend. (I have never gone on a digital fast longer than this.) If nothing else, digital fasting is a way to prove to myself that I have the strength to act willfully in my relationship with technology.

 

Utpal M. Dholakia, Ph.D., is the George R. Brown Professor of Marketing at Rice University.

I teach marketing and pricing to MBA students at Rice University. You can find more information about me on my website or follow me on LinkedIn, Facebook, or Twitter @ud.



How Brands Addict Us – Media Psychology

It’s done through broken promises and spikes of dopamine. Photo by Joshua Earle on Unsplash. Source: How Brands Addict Us. Douglas Van Praet. There’s a reason why marketers spend billions of dollars on advertising every year. It works! That’s because humans, and by extension, all consumers, are wired for the joys of anticipation more […]

via How Brands Addict Us — consumer psychology research

Why Email Is Only 7 Percent as Effective as Talking

… and 4 ways to make it better.

Photo by Anete Lūsiņa on Unsplash

Source: Why Email Is Only 7 Percent as Effective as Talking

Douglas Van Praet

The development of email and texting has enhanced our ability to communicate productively, efficiently, and quickly. But, based on new research into how human communication works, it’s easy to see a downside to our over-reliance on emails and texts. In fact, some of our online habits may be undermining our efforts to communicate successfully.

For example, have you ever made a joke in an email that didn’t go over well because the recipient couldn’t discern your sarcasm (even with the addition of an emoji)? Research by UCLA psychology professor emeritus Albert Mehrabian found that 7 percent of a message was derived from the words, 38 percent from the intonation, and 55 percent from the facial expression or body language. In other words, the vast majority of communication is not carried by our words alone.

Not surprisingly, research shows we communicate most effectively in real-life, real-time conversation. New neurological evidence shows that effective communication physically resounds in the brain of the receiver, echoing the thoughts and sentiments of the communicator by inducing and shaping neurological responses. A remarkable study led by Princeton University’s Greg Stephens determined through fMRI brain scans that in both the communicator and listener, similar regions of the brain fired when engaged in unrehearsed, real-life storytelling, leading the team to conclude that our brain cells actually synchronize during successful communication. As the study says:

“The findings shown here indicate that during successful communication, speakers’ and listeners’ brains exhibit joint, temporally coupled response patterns. Such neural coupling substantially diminishes in the absence of communication. Moreover, more extensive speaker-listener neural couplings result in more successful communication.”

The deeper the conversation, the more deeply our minds meld. In some instances, the listener’s brain patterns actually anticipate where the story is going, in deep rapport with the speaker.

These findings support studies that link “mirror” neurons to empathy. The neuroscientist Giacomo Rizzolatti and his team discovered that empathy is mediated by neurons in the brain’s motor system. These “mirror neurons,” as Rizzolatti named them, give humans the capacity for shared experiences by enabling us to project ourselves into the minds, emotions, and actions of others through the direct simulation of feeling, not thinking. This happens best live and in person rather than through the shadowy substitutes of digital communication.

As it happens, online communication may have given rise to completely different standards of trustworthiness. Judy Olson, a professor of information and computer sciences who has researched the essentials of building trust in digital communication, found that in the absence of traditional trust indicators like voice intonation, emotional expression, and body language in online, text-based messages, research participants default to speed of response as a key marker of trustworthiness.

The mind is a prediction machine and pattern recognizer that hates an open loop or unresolved pattern. On the web, this trigger is often exploited through headlines that beg for closure like: “What happened next will blow your mind.” We are compelled to click on the link to resolve the uncertainty. Similarly, not getting a response to email can cause significant if unintended psychological unrest. But in an email-default communication environment, the non-response has become the norm for messages that appear to lack urgency. In some ways, it may be better to give someone bad news than no news at all.

Given what we’ve learned, here are a few suggestions on how to enhance your own text-based communication:

  1. Play it straight. We don’t process communication at face value: Our minds work mostly through implicit inference, not direct suggestion. We look for the hidden meaning, often to avoid deception or unmask others’ agendas. As evolutionary biologist Richard Dawkins puts it, “We are evolved to second-guess the behaviors of others by becoming brilliant intuitive psychologists.” The best bet, then, is to be clear rather than clever.
  2. Close the loop. Have you ever reached out to someone to congratulate or compliment them and not heard back? That sense of injustice and anger you felt is not healthy, for either party. It’s a violation of our most deeply ingrained social norm of reciprocal altruism to repay in kind what others have done for or to you. Do you really need people hating on you for such a simple omission? Acknowledge what you’ve received.
  3. Respond quickly. As the digital age obviates the need for live interactions, gaining trust becomes more of a challenge. Person-to-person interactions carry benefits (such as facial expressions and gestures) that facilitate the manner in which humans typically generate trust. Trust is the glue that binds people and the means by which we succeed as social beings who rely on the resources of others. In the absence of these cues, research would indicate that, as a rule of thumb, if you are quick to reply, others will respect you more (even if your message is not what they want to hear).
  4. Move the conversation offline. For an important message, try a phone call, a video conference, or an in-person talk. A phone call has the benefit of real-time conversation and the intonation of one’s voice to convey the real meaning of the words—as does using Skype or Google Hangouts, which can add the further contextual cues of body language and help complete the picture. (In general, video conference services are underrated and underused.) But by far the best way is to sit down in person, a rarity these days as we increasingly hide behind emails—and sometimes pay a price for it.

www.unconsciousbranding.com

https://twitter.com/DouglasVanPraet

The dark side of gamifying work

One of the most common user experiences of our time is also a tool of social control. And nowhere is that more true than in the workplace.

Source: The dark side of gamifying work

Deep under the Disneyland Resort Hotel in California, far from the throngs of happy tourists, laundry workers clean thousands of sheets, blankets, towels and comforters every day. Workers feed the heavy linens into hot, automated presses to iron out wrinkles, and load dirty laundry into washers and dryers large enough to sit in. It’s loud, difficult work, but bearable. The workers were protected by union contracts that guaranteed a living wage and affordable healthcare, and many had worked decades at the company. They were mostly happy to work for Disney.

This changed in 2008. The union contracts were up, and Disney wouldn’t renew without adjustments. One of the changes involved how management tracked worker productivity. Before, workers would record how many sheets or towels or comforters they washed, dried, or folded on paper notes turned in at the end of the day. But Disney was replacing that system with an electronic tracking system that monitored their progress in real time.

Electronic monitoring wasn’t unusual in the hotel business. But Disney took the highly unusual step of displaying the productivity of their workers on scoreboards all over the laundry facilities, says Austin Lynch, director of organizing for Unite Here Local 11. According to Lynch, every worker’s name was compared with the names of coworkers, each one colour-coded like traffic signals. If you were keeping up with the goals of management, your name was displayed in green. If you slowed down, your name was in yellow. If you were behind, your name was in red. Managers could see the monitors from their office, and change production targets from their computers. Each laundry machine would also monitor the rate of worker input, and flash red and yellow lights at the workers directly if they slowed down.
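To make the mechanism concrete, here is a minimal sketch of how a traffic-light scoreboard like the one described above could work. The thresholds, names, and numbers are illustrative assumptions, not details of Disney’s actual system.

```python
# Hypothetical sketch of a colour-coded productivity board (illustrative only).

def status_colour(units_done: int, target: int, warning_ratio: float = 0.8) -> str:
    """Colour-code a worker's output against a management-set target."""
    if units_done >= target:
        return "green"    # keeping up with the goal
    if units_done >= warning_ratio * target:
        return "yellow"   # slowing down
    return "red"          # behind

# Made-up data: a per-shift target that managers can adjust in real time.
target = 140
workers = {"Worker A": 150, "Worker B": 120, "Worker C": 95}

for name, done in workers.items():
    print(f"{name}: {done} items -> {status_colour(done, target)}")
```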

[Source Photo: klikk/iStock]

‘They had a hard time ignoring it,’ said Beatriz Topete, a union organiser for Unite Here Local 11 at the time. ‘It pushes you mentally to keep working. It doesn’t give you breathing space.’ Topete recalled an incident where she was speaking to workers on the night shift, feeding hand-towels into a laundry machine. Every time the workers slowed down, the machine would flash at them. They told her they felt like they couldn’t stop.

The workers called this ‘the electronic whip’.

While this whip was cracking, the workers sped up. ‘We saw a higher incidence of injuries,’ Topete said. ‘Several people were injured on the job.’ The formerly collegial environment degenerated into a race. The laundry workers competed with each other, and got upset when coworkers couldn’t keep up. People started skipping bathroom breaks. Pregnant workers fell behind. ‘The scoreboard incentivises competition,’ said Topete. ‘Our human competitiveness, whatever makes us like games, whatever keeps us wanting to win, it’s a similar thing that was happening. Even if you didn’t want to.’

The electronic whip is an example of gamification gone awry.

Gamification is the application of game elements into nongame spaces. It is the permeation of ideas and values from the sphere of play and leisure to other social spaces. It’s premised on a seductive idea: if you layer elements of games, such as rules, feedback systems, rewards and videogame-like user interfaces over reality, it will make any activity motivating, fair and (potentially) fun. ‘We are starving and games are feeding us,’ writes Jane McGonigal in Reality Is Broken (2011). ‘What if we decided to use everything we know about game design to fix what’s wrong with reality?’

Consequently, gamification is everywhere. It’s in coupon-dispensing loyalty programmes at supermarkets. Big Y, my local supermarket chain in Boston, employs digital slot machines at the checkout for its members. Winning dispenses ‘coins’ that can be redeemed for deals. Gamification is in the driver interfaces of Lyft and Uber, which give badges for miles driven. Gamification is the premise of fitness games such as Zombies, Run!, where users push themselves to exercise by outrunning digital zombies, and of language-learning apps such as Duolingo, where scoring prompts one to master more. The playground offices of Silicon Valley, complete with slides and ball pits, have been gamified. Your credit score is one big game, too.

But gamification’s trappings of total fun mask the fact that we have very little control over the games we are made to play – and hide the fact that these games are not games at all. Gamified systems are tools, not toys. They can teach complex topics and engage us with otherwise difficult problems. Or they can function as subtle systems of social control.

[Source Photo: klikk/iStock]

Games are probably as old as the human species itself. Archaeologists have unearthed mancala-like boards made of stone in Jordan, dated to 6,000 BC. The application of games to serious matters has probably been with us almost as long. The Egyptian board game senet represented the passage of the ka (or vital spark) to the afterlife; its name is commonly translated as ‘the game of passing’. The Roman senatorial class played latrunculi, an abstract game of military strategy, to train the mind and pass the time. Dice-based games of chance are thought to have originated with ancient divination practices involving thrown knucklebones. Native American ball games served as proxies of war and were probably crucial to keeping the Iroquois Confederation together. As many as 1,000 players would converge to play what the Mohawk called baaga’adowe (the little brother of war).

The conflation of game and ritual is likely by design. The Dutch cultural historian Johan Huizinga observed in Homo Ludens (1938) that both invoke a magic circle, a time and place outside of the norms of reality. During play, as during ritual, new rules supersede the old. Players are not tried as thieves for ‘stealing’ a base in baseball. The Eucharist doesn’t literally become flesh during Catholic transubstantiation rituals. Through play and games, Egyptians could metaphorically engage with the afterlife without the inconvenience of dying.

An important aspect of early games was that they were still limited in size and scope. One-thousand-player stickball games between whole villages were a rarity. We don’t see the emergence of anything analogous to modern gamification until the 18th century when Europe underwent a renaissance of games and game design. In 18th-century Paris, Rome, Vienna and London, an international leisure class emerged that communicated across national and linguistic divides through the medium of games. For example, one of the earliest four-person card games in Europe was ombre – from el hombre (the man) – which originated in 16th-century Spain. The game didn’t become known outside Spain until almost the end of the 17th century, with the marriage of Maria Theresa of Spain to Louis XIV of France. Within a few years, the game spread across the continent and was playable in the courts and salons of every capital in Europe.

The spread of ombre coincided with a boom in games and game culture in Europe. Abraham and David Roentgen became a father-and-son pair of rockstars for building foldable game-tables that could be rearranged to suit everything from backgammon to ombre. Play rooms appeared in the homes of the aristocracy and the emergent bourgeoisie. Books of rules such as Pleasant Pastime with Enchanting and Joyful Games to Be Played in Society (1757) were translated into multiple languages. The Catholic Church got in on the act with the liberalisation of lottery laws by popes Clement XII and Pius VI. In the 1750s, the Swiss mathematician and physicist Daniel Bernoulli even declared: ‘The century that we live in could be subsumed in the history books as … the Century of Play.’

In the mid-18th century, Gerhard Tersteegen, an enterprising priest, developed the ‘Pious Lottery’, a deck of 365 cards with various tasks of faith. ‘You’d read a prayer straight from the card,’ explains the historian Mathias Fuchs of Leuphana University in Germany. It is reminiscent of modern mindfulness or religious apps that attempt to algorithmically generate spiritual fulfilment.

Soon, 18th-century musicians were incorporating the logic of game design into their music through randomised card- or dice-based systems for musical composition. Johann Sebastian Bach’s student Johann Philipp Kirnberger and his second son, Carl Philipp Emanuel Bach, both wrote musical composition games – respectively, ‘The Ever-Ready Minuet and Polonaise Composer’ and ‘A Method for Making Six Bars of Double Counterpoint at the Octave Without Knowing the Rules’ (Musikalisches Würfelspiel), which was also attributed to Mozart. These games asked would-be composers to roll a pair of dice to randomly select pre-written measures for minuets. According to one estimate, Mozart’s game features 1.3 × 10²⁹ possible combinations. Players would stitch measures of music together in the order rolled to compose a final product, in essence enacting an algorithm. In a way, these resemble modern musical rhythm games such as Guitar Hero that provide the illusion of musical mastery for the sake of entertainment.
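As a rough illustration of how such a game enacts an algorithm, here is a minimal sketch in Python. It assumes the commonly described structure of the Mozart-attributed game – 16 minuet bars with 11 choices each (two dice) and 16 trio bars with 6 choices each (one die) – and uses placeholder bar and roll indices rather than the game’s actual measure tables; the count it prints matches the roughly 1.3 × 10²⁹ combinations cited above.

```python
import random

MINUET_BARS, MINUET_CHOICES = 16, 11   # two dice: totals 2..12 give 11 options per bar
TRIO_BARS, TRIO_CHOICES = 16, 6        # one die: faces 1..6 give 6 options per bar

def roll_minuet() -> list:
    """Pick a pre-written measure for each minuet bar by rolling two dice."""
    return [(bar, random.randint(1, 6) + random.randint(1, 6)) for bar in range(MINUET_BARS)]

def roll_trio() -> list:
    """Pick a pre-written measure for each trio bar by rolling one die."""
    return [(bar, random.randint(1, 6)) for bar in range(TRIO_BARS)]

# Each (bar, roll) pair would index into the game's lookup table of composed measures.
piece = roll_minuet() + roll_trio()

# Total number of distinct pieces the game can generate:
combinations = MINUET_CHOICES ** MINUET_BARS * TRIO_CHOICES ** TRIO_BARS
print(f"{combinations:.1e}")   # ~1.3e+29
```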

It’s not clear what ended the century of play. Perhaps the rococo play culture of the 18th century ended with the wars and nationalistic fervour of the 19th. Fuchs suggests the French Revolution of 1789 as the likely cause. What’s clear is that the centrality of games as a cultural force wouldn’t reach 18th-century levels of saturation until the development of computers.

By the end of the 20th century, video-game consoles and then personal computers became more ubiquitous and user-friendly, and digital games rose in scale and scope. To make computers more accessible, human-computer interface designers borrowed elements from early video games. Graphical user interfaces replaced code. Games and gamers became distinct subsets of the computer software and computer hobbyist landscapes. Because the first computer games were experiments in software design, computer and hobby magazines regularly printed and distributed lines of code. Programs, including games, were freely available to remix and experiment on. Importantly, this hobbyist culture, while not a utopia of gender equality, was not strictly male-coded initially.

As software development became more corporate, and the user experience more centralised, the discourse shifted away from the quality of the software to gameplay and user experience. Game development corporations seized on a booming market, cultivating gamers as a distinct category of consumer, and focusing on white, adolescent and teenage boys. Jennifer deWinter, a video-game scholar at Worcester Polytechnic Institute in Massachusetts, refers to this as the construction of technomasculinity. ‘It takes over the ideology of what it takes to be a successful man … the gamer identity was constructed for them to consume as white, male and tech-savvy,’ she explains. The workers of the future would be gamers.

[Source Photo: klikk/iStock]

By 2008, the gamification of work felt absolutely natural to a generation of people raised on ubiquitous digital technology and computer games. Tech startups were faced with the challenge of attracting and retaining users. Game designers and marketers including Jane McGonigal and Ethan Zuckerman promoted the use of immersive game mechanics as a way of ‘hacking happiness’ and building user engagement at summits, speeches and TED talks. By 2010, interest in gamification intensified with the success of the social network game FarmVille, which seemed to have solved the problem of user retention and engagement. Marketers and consultants were quick to seize on gamification as a tool to create customer loyalty and manage human desire. They sought to capitalise on the ‘addictive fun’ of gambling and games by introducing ‘pseudo-goals’ unrelated to the primary goals of either the consumer or the business in question. Game design elements such as badges, points, scoreboards and progress-tracking proliferated across different platforms, apps and workspaces. In doing so, they unknowingly borrowed from the Pious Lottery. Saying a Hail Mary or going to church because of a game isn’t necessarily aligned with the goal of eternal salvation, in much the same way as buying blood oranges for loyalty points isn’t really the goal of grocery shopping.

This brings us back to the electronic whip; Disney was hardly alone. The US retail giant Target implemented the Checkout Game which tracked and scored the speed of minimum-wage checkout clerks. The clerks could see themselves scored in real time on their point-of-sale computers. The US ice-cream parlour chain Cold Stone Creamery marshalled the power of games to teach workers how to be expert ice-cream mixers with the game Stone City, which uses motion controls to teach people how to ‘feel’ out the correct scoops. The game calculates how large the scoops are in relation to the optimal sizes, and then tells the players how much their over-scoops cost the store. Workers were asked to download the game and play it in their off-hours.

Amazon has also bought into gamifying work in a big way. Warehouse workers are subject to scoreboards that display the silhouettes of workers who were caught stealing, what they were caught stealing, and how they were caught. Their productivity is monitored by handheld devices that scan and locate products. If their productivity drops, workers are disciplined with points on a scorecard. As in golf, more points are bad. Accrue enough points, and the worker is fired. White-collar workers, too, are scored and ranked by digital metrics, and by their peers and bosses. Until 2016, the bottom scorers were fired in what employees called ‘rank and yank’.

Through gamified technology, corporations such as Amazon and Disney now have an unprecedented level of control over the individual bodies of their employees. Steve Sims, a vice-president at the gamification firm Badgeville (now CallidusCloud) in California, said: ‘We like to think of it as behaviour management.’ In other words, how to get other people to do more stuff, more often.

This kind of micromanagement resembles Taylorism, a system developed by the American engineer Frederick Winslow Taylor during the 1890s to codify the movements and habits of mind that led to productivity. To eliminate inefficiency and waste, Taylor followed around the ‘most productive’ factory workers, recording the timing of all their movements with a stopwatch. He set managers, similarly armed with stopwatches, to micromanage every detail of a job. Taylor was also famous for fudging his numbers in favour of speed-driving workers to exhaustion and, in some cases, to strike.

But the modern gamified workplace enables control beyond Taylor’s wildest dreams. Games are sets of rules prescribing both actions and outcomes. A gamified workplace sets not just goals for workers but precisely how those goals can be achieved. Managers don’t need to follow workers with stopwatches. They can use smartphones or apps. It’s micromanagement with unprecedented granularity. ‘This is Taylorism 2.0,’ according to the media expert Steven Conway of Swinburne University of Technology in Australia. ‘Activities are more rigidly defined and processed than ever.’ The gamified workplace is not a game in the original sense, nor does it cultivate playful ends.

The problem of the gamified workplace goes beyond micromanagement. The business ethicist Tae Wan Kim at Carnegie Mellon University in Pittsburgh warns that gamified systems have the potential to complicate and subvert ethical reasoning. He cites the example of a drowning child. If you save the child, motivated by empathy, sympathy or goodwill – that’s a morally good act. But say you gamify the situation. Say you earn points for saving drowning children. ‘Your gamified act is ethically unworthy,’ he explained to me in an email. Providing extrinsic gamified motivators, even if they work as intended, deprives us of the option to live worthy lives, Kim argues. ‘The workplace is a sacred space where we develop ourselves and help others,’ he notes. ‘Gamified workers have difficulty seeing what contributions they really make.’

The problem isn’t limited to work. Social platforms all employ some form of gamification in their stats, figures, points, likes and badges. Dating apps gamify our romantic life; Facebook gamifies friendship.

Even war has been gamified: drone pilots operate in a highly gamified environment. Foeke Postma, a researcher and programme officer at the Dutch peace organisation PAX, says that drone warfare often takes the shape of a game, right down to the joysticks or PlayStation-like controllers that the pilots use. ‘The US Air Force and the Royal Air Force have specifically targeted gamers to recruit as drone operators,’ he explains. The US drone programme also employs game-like terminology when discussing targets. High-value assassination targets are called ‘jackpots’. Anyone caught near a jackpot during an airstrike is called ‘bugsplatter’. When drone pilots retire or transfer, they’re given a scorecard of kills. Postma says that this framework risks the total dehumanisation of the targets of drone warfare. In an interview with The Guardian, a drone pilot said: ‘Ever step on ants and never give it another thought?’

The expansion of game-like elements into nongame spaces is a global phenomenon. We are all living in expanding, overlapping magic circles, with some places moving faster than others. China is introducing a national, gamified social credit score through public-private partnerships. Eight credit-scoring systems have been granted charters, and each has a share of the national credit system. One social credit system ranks you based on how well you repay loans, the scores of your friends, where you shop and what you post to social media. This ranking determines whether you can receive loans or obtain a visa. In the US, the more limited FICO score can determine whether you get an apartment, a car, or a job.

The 20th-century French philosopher Michel Foucault would have said that these are technologies of power. Today, the interface designer and game scholar Sebastian Deterding says that this kind of gamification expresses a modernist view of a world with top-down managerial control. But the concept is flawed. Gamification promises easy, centralised overviews and control. ‘It’s a comforting illusion because de facto reality is not as predictable as a simulation,’ Deterding says. You can make a model of a city in SimCity that bears little resemblance to a real city. Mistaking games for reality is ultimately mistaking map for territory. No matter how well-designed, a simulation cannot account for the unforeseen.

A prime example of gamification gone awry is Go365, a health app introduced in 2017 by the Public Employees Insurance Agency (PEIA) in West Virginia and the Humana health insurance company. The app was presented as a motivating tool and game, not unlike smartphone fitness apps. Go365’s advertisements featured white, upper-middle-class joggers and attractively dishevelled soccer moms buying carrots. The app tracked physical activity, steps and location. It also allowed users to give more sensitive information to Humana, such as blood glucose levels, sleep cycle, diet and the results of doctor’s visits. Users were asked how often they drank and whether they smoked. Family medical histories were probed. The app awarded points, set milestones and gave rewards for participation in the form of ‘Bucks’ that could be redeemed for gift cards. The agency claimed that the app was voluntary, but failure to accrue enough points (and to increase points annually) meant an extra $500 in premiums and an additional $1,000 on top of existing deductibles. That might not sound like a lot, but most teachers and support staff in West Virginia make less than $40,000 a year. Many have second jobs. Many more are elderly or have chronic illnesses.

The legislature gave teachers no option but to play Go365; how they were supposed to play was another matter. ‘It was the cherry on top of a shit sundae,’ said Michael Mochaidean, a teacher and organiser in West Virginia. The teachers didn’t want to give up sensitive medical data. They didn’t want their locations tracked. After years of funding cuts to the PEIA, they saw the app as a way to kick teachers off their healthcare altogether.

Enraged, the teachers of West Virginia took to Facebook. They complained, they organised, and in March of 2018 thousands of them descended on the capitol in Charleston in a wildcat strike. After years of low pay and slashed benefits, their dissatisfaction had finally crystallised around the imposition of Go365. They would not participate in the game. By the end of the strike, the teachers had won a pay raise, and forced West Virginia to end its contract with Humana. Go365 was phased out. The teachers had sent a message to their bosses. Neither their work nor their health was a game.

This article was republished under a Creative Commons license from Aeon. Read the original here.

Can Our Brains Really Read Jumbled Words as Long as The First And Last Letters Are Correct?

Don’t believe everything you read online.

Source: Can Our Brains Really Read Jumbled Words as Long as The First And Last Letters Are Correct?

MICHELLE STARR

You’ve probably seen the classic piece of “internet trivia” in the image above before – it’s been circulating since at least 2003.

At first glance, it seems legit. Because you can actually read it, right? But while the meme contains a grain of truth, the reality is always more complicated.

The meme asserts, citing an unnamed Cambridge scientist, that if the first and last letters of a word are in the correct places, you can still read a piece of text.

Here is the message, unjumbled but otherwise verbatim:

“According to a researche [sic] at Cambridge University, it doesn’t matter in what order the letters in a word are, the only importent [sic] thing is that the first and last letter be at the right place. The rest can be a total mess and you can still read it without problem. This is because the human mind does not read every letter by itself but the word as a whole.”

In fact, there never was a Cambridge researcher (the earliest form of the meme actually circulated without that particular addition), but there is some science behind why we can read that particular jumbled text.

The phenomenon has been given the slightly tongue-in-cheek name “Typoglycaemia,” and it works because our brains don’t just rely on what they see – they also rely on what we expect to see.

In 2011, researchers from the University of Glasgow, conducting unrelated research, found that when something is obscured from or unclear to the eye, human minds can predict what they think they’re going to see and fill in the blanks.

“Effectively, our brains construct an incredibly complex jigsaw puzzle using any pieces it can get access to,” explained researcher Fraser Smith. “These are provided by the context in which we see them, our memories and our other senses.”

However, the meme is only part of the story. Matt Davis, a researcher at the University of Cambridge’s MRC Cognition and Brain Sciences Unit, wanted to get to the bottom of the “Cambridge” claim, since he believed he should have heard of the research before.

He managed to track down the original demonstration of letter randomisation to a researcher named Graham Rawlinson, who wrote his PhD thesis on the topic at Nottingham University in 1976.

He conducted 16 experiments and found that yes, people could recognise words if the middle letters were jumbled, but, as Davis points out, there are several caveats.

  • It’s much easier to do with short words, probably because there are fewer variables.
  • Function words that provide grammatical structure, such as and, the and a, tend to stay the same because they’re so short. This helps the reader by preserving the structure, making prediction easier.
  • Switching adjacent letters, such as porbelm for problem, is easier to translate than switching more distant letters, as in plorebm.
  • None of the words in the meme are jumbled to make another word – Davis gives the example of wouthit vs witohut. This is because words that differ only in the position of two adjacent letters, such as calm and clam, or trial and trail, are more difficult to read.
  • The words all more or less preserved their original sound – order was changed to oredr instead of odrer, for instance.
  • The text is reasonably predictable.

It also helps to keep double letters together. It’s much easier to decipher aoccdrnig and mttaer than adcinorcg and metatr, for example.

There is evidence to suggest that ascending and descending elements play a role, too – that what we’re recognising is the shape of a word. This is why mixed-case text, such as alternating caps, is so difficult to read – it radically changes the shape of a word, even when all the letters are in the right place.
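These constraints are easy to see if you scramble text yourself. Below is a minimal Python sketch of a first-and-last-letter-preserving scrambler (an illustrative stand-in, not the generator linked below): it leaves short words intact but, unlike the meme, makes no effort to preserve sound, letter adjacency, or double letters, which is why its output is usually much harder to read.

```python
import random
import re

def scramble_word(word: str) -> str:
    """Shuffle a word's interior letters, keeping the first and last letters fixed."""
    if len(word) <= 3:
        return word                      # short function words stay as they are
    middle = list(word[1:-1])
    random.shuffle(middle)               # no attempt to preserve sound or adjacency
    return word[0] + "".join(middle) + word[-1]

def scramble_text(text: str) -> str:
    # Scramble only alphabetic tokens; spacing and punctuation pass through untouched.
    return re.sub(r"[A-Za-z]+", lambda m: scramble_word(m.group()), text)

print(scramble_text("According to a researcher at Cambridge University, it doesn't matter"))
```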

If you have a play around with this generator, you can see for yourself how properly randomising the middle letters of words can make text extremely difficult to read. Try this:

The adkmgowenlcent – whcih cmeos in a reropt of new mcie etpnremxeis taht ddin’t iotdncure scuh mantiotus – isn’t thelcclnaiy a rtoatriecn of tiher eearlir fidginns, but it geos a lnog way to shnwiog taht the aalrm blels suhold plarobby neevr hvae been sdnuoed in the fsrit plcae.

Maybe that one is cheating a little – it’s a paragraph from a ScienceAlert story about CRISPR.

The acknowledgment – which comes in a report of new mice experiments that didn’t introduce such mutations – isn’t technically a retraction of their earlier findings, but it goes a long way to showing that the alarm bells should probably never have been sounded in the first place.

See how you go with this one.

Soaesn of mtiss and mloelw ftisnflurues,
Csloe boosm-feinrd of the mrtuniag sun;
Cnponsiirg wtih him how to laod and besls
Wtih friut the viens taht runod the tahtch-eevs run

That’s the first four lines of the poem “To Autumn” by John Keats.

Season of mists and mellow fruitfulness,
Close bosom-friend of the maturing sun;
Conspiring with him how to load and bless
With fruit the vines that round the thatch-eves run

So while there are some fascinating cognitive processes behind how we use prediction and word shape to improve our reading skills, it really isn’t as simple as that meme would have you believe.

If you want to delve into the topic further, you can read Davis’ full and fascinating analysis here.


Brand Connected – Media Psychology

Tiffany White tells a cautionary tale about the modern, connected consumer. Tiffany Barnett White is Associate Professor of Business Administration and Bruce and Anne Strohm Faculty Fellow at the University of Illinois, College of Business. She joined the faculty at Illinois in 1999 and received a Ph.D. in marketing from Duke University in 2000. […]

via The [brand] connected consumer – Tiffany White  — consumer psychology research

Digital Nation

Is our 24/7 wired world causing us to lose as much as we’ve gained?

 

 

Source: Digital Nation

SEASON 28: EPISODE 9

Over a single generation, the Web and digital media have remade nearly every aspect of modern culture, transforming the way we work, learn, and connect in ways that we’re only beginning to understand. FRONTLINE producer Rachel Dretzin (Growing up Online) teams up with one of the leading thinkers of the digital age, Douglas Rushkoff (The Persuaders, Merchants of Cool), to continue to explore life on the virtual frontier. The film is the product of a unique collaboration with visitors to the Digital Nation website, who for the past year have been able to react to the work in progress and post their own stories online. [Explore more stories on the original Digital Nation website.]

Watch this documentary at https://www.pbs.org/wgbh/frontline/film/digitalnation/

 

Smart Phone, Lazy Brain

We still call them “phones,” but they are seldom used for talking. They have become like a substitute for memory—and other brain functions. Is that good for us in the long run?

Illustration by Edmon Haro

Source: Smart Phone, Lazy Brain

BY SHARON BEGLEY

You probably know the Google Effect: the first rigorous finding in the booming research into how digital technology affects cognition. It’s also known as digital amnesia, and it works like this: When we know where to find a piece of information, and when it takes little effort to do so, we are less likely to remember that information. First discovered by psychologist Betsy Sparrow of Columbia University and her colleagues, the Google Effect causes our brains to take a pass on retaining or recalling facts such as “an ostrich’s eye is bigger than its brain” (an example Sparrow used) when we know they are only a few keystrokes away.

“Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally,” Sparrow explained in her 2011 paper. “When we need it, we will look it up.” Storing information requires mental effort—that’s why we study before exams and cram for presentations—so unless we feel the need to encode something into a memory, we don’t try. Result: Our recollection of ostrich anatomy, and much else, dissipates like foam on a cappuccino.

It’s tempting to leap from the Google Effect to dystopian visions of empty-headed dolts who can’t remember even the route home (thanks a lot, GPS), let alone key events of history (cue Santayana’s hypothesis that those who can’t remember history are doomed to repeat it). But while the short-term effects of digital tech on what we remember and how we think are real, the long-term consequences are unknown; the technology is simply too new for scientists to have figured it out.

People spend an average of 3 to 5 minutes at their computer working on the task at hand before switching to Facebook or other enticing websites.

Before we hit the panic button, it’s worth reminding ourselves that we have been this way before. Plato, for instance, bemoaned the spread of writing, warning that it would decimate people’s ability to remember (why make the effort to encode information in your cortex when you can just consult your handy papyrus?). On the other hand, while writing did not trigger a cognitive apocalypse, scientists are finding more and more evidence that smartphones and internet use are affecting cognition already.

The Google Effect? We’ve probably all experienced it. “Sometimes I spend a few minutes trying hard to remember some fact”—like whether a famous person is alive or dead, or what actor was in a particular movie—“and if I can retrieve it from my memory, it’s there when I try to remember it two, five, seven days later,” said psychologist Larry Rosen, professor emeritus at California State University, Dominguez Hills, who researches the cognitive effects of digital technology. “But if I look it up, I forget it very quickly. If you can ask your device any question, you do ask your device any question” rather than trying to remember the answer or doing the mental gymnastics to, say, convert Celsius into Fahrenheit.

“Doing that is profoundly impactful,” Rosen said. “It affects your memory as well as your strategy for retrieving memories.” That’s because memories’ physical embodiment in the brain is essentially a long daisy chain of neurons, adding up to something like “architect I.M. Pei is alive” or “swirling water is called an eddy.” Whenever we mentally march down that chain, we strengthen the synapses connecting one neuron to the next. The very act of retrieving a memory therefore makes it easier to recall next time around. If we succumb to the LMGTFY (let me Google that for you) bait, which has become ridiculously easy with smartphones, that doesn’t happen.

To which the digital native might say, so what? I can still Google whatever I need, whenever I need it. Unfortunately, when facts are no longer accessible to our conscious mind, but only look-up-able, creativity suffers. New ideas come from novel combinations of disparate, seemingly unrelated elements. Just as having many kinds of Legos lets you build more imaginative structures, the more elements—facts—knocking around in your brain the more possible combinations there are, and the more chances for a creative idea or invention. Off-loading more and more knowledge to the internet therefore threatens the very foundations of creativity.

Besides letting us outsource memory, smartphones let us avoid activities that many people find difficult, boring, or even painful: daydreaming, introspecting, thinking through problems. Those are all so aversive, it seems, that nearly half of the people in a 2014 experiment whose smartphones were briefly taken away preferred receiving electric shocks to being alone with their thoughts. Yet surely our mental lives are the poorer every time we check Facebook or play Candy Crush instead of daydreaming.

But why shouldn’t we open the app? The appeal is undeniable. We each have downloaded an average of nearly 30 mobile apps, and spend 87 hours per month internet browsing via smartphone, according to digital marketing company Smart Insights. As a result, distractions are just a click away—and we’re really, really bad at resisting distractions. Our brains evolved to love novelty (maybe human ancestors who were attracted to new environments won the “survival of the fittest” battle), so we flit among different apps and websites.

As a result, people spend an average of just three to five minutes at their computer working on the task at hand before switching to Facebook or another enticing website or, with phone beside them, a mobile app. The most pernicious effect of the frenetic, compulsive task switching that smartphones facilitate is to impede the achievement of goals, even small everyday ones. “You can’t reach any complex goal in three minutes,” Rosen said. “There have always been distractions, but while giving in used to require effort, like getting up and making a sandwich, now the distraction is right there on your screen.”

The mere existence of distractions is harmful because resisting distractions that we see out of the corner of our eye (that Twitter app sitting right there on our iPhone screen) takes effort. Using fMRI to measure brain activity, neuroscientist Adam Gazzaley of the University of California, San Francisco, found that when people try to ignore distractions it requires significant mental resources. Signals from the prefrontal cortex race down to the visual cortex, suppressing neuronal activity and thereby filtering out what the brain’s higher-order cognitive regions have deemed irrelevant. So far, so good.

The problem is that the same prefrontal regions are also required for judgment, attention, problem solving, weighing options, and working memory, all of which are required to accomplish a goal. Our brains have limited capacity to do all that. If the prefrontal cortex is mightily resisting distractions, it isn’t hunkering down to finish the term paper, monthly progress report, sales projections, or other goal it’s supposed to be working toward. “We are all cruising along on a superhighway of interference” produced by the ubiquity of digital technology, Gazzaley and Rosen wrote in their 2016 book The Distracted Mind. That impedes our ability to accomplish everyday goals, to say nothing of the grander ones that are built on the smaller ones.

The constant competition for our attention from all the goodies on our phone and other screens means that we engage in what a Microsoft scientist called “continuous partial attention.” We just don’t get our minds deeply into any one task or topic. Will that have consequences for how intelligent, creative, clever, and thoughtful we are? “It’s too soon to know,” Rosen said, “but there is a big experiment going on, and we are the lab rats.”

Tech Invasion: LMGTFY

“Let me Google that for you” may be some of the most damaging words for our brain. Psychologists have theorized that the “Google Effect” causes our memories to weaken due merely to the fact that we know we can look something up, which means we don’t keep pounding away at the pathways that strengthen memory. Meanwhile, research suggests that relying on GPS weakens our age-old ability to navigate our surroundings. And to top it all off, the access to novel info popping up on our phone means that, according to Deloitte, people in the US check their phones an average of 46 times per day—which is more than a little disruptive.

Sharon Begley is a senior science writer with The Boston Globe Media Group, author of Train Your Mind, Change Your Brain, and coauthor with Richard Davidson of The Emotional Life of Your Brain. She writes a regular column for Mindful magazine called Brain Science.


Do We Form Similarly Close Relationship With Brands As With Our Loved One? – Media Psychology

Are brands addictive? Source: Do We Form Similarly Close Relationship With Brands As With Our Loved One? Martin Reimann For human relationships it is known that after an initial electrifying honeymoon period, excitement for the loved partner often goes down and is maintained at a lower level. At the same time, however, we include our […]

via Do We Form Similarly Close Relationship With Brands As With Our Loved One? — consumer psychology research