Social media is undoubtedly a large part of most people's lives these days, with the average person spending about 135 minutes daily on social media. That is more than two hours a day! Statistics reveal that teens spend up to nine hours daily on social media. With this in mind, we cannot help but ask what draws people to spend so much of their time on social media. A recent study tackled certain aspects of this issue.
Motivations for Social Media Use
The study, conducted by Ozimek and colleagues, identified three motives for social media use, among them:
Self-presentation, or the need to present yourself and your life as positively as possible (to both yourself and others)
Social interaction and the need to belong (staying in touch with friends and family members)
Although there might be more reasons than the ones outlined above, most of them stem from one of the three. For instance, if you tend to scroll back through your feed to remind yourself of some of the things you have posted earlier, self-presentation does matter to you. Regardless of your motivation for social media use, it is important to be aware of both the positive and negative effects it has on your wellbeing.
The researchers found that many people use social media in order to obtain materialistic goals and wondered whether materialism could be yet another motivation for social media use.
Materialism and Social Media
A recent study suggested that people with many Facebook friends are more materialistic than those with fewer Facebook friends. “Materialistic people use Facebook more frequently because they tend to objectify their Facebook friends – they acquire Facebook friends to increase their possessions,” concluded the study’s researcher, Phillip Ozimek. The study used a questionnaire to measure how much people compare themselves to others, along with their materialistic goals.
This falls in line with a previous study on materialism, which found that materialists collect things they can display publicly; it is not simply about having them. And Facebook is the ideal place for a materialist to display their possessions. In addition, there is yet another aspect of materialism, objectification, whereby materialists look on other people as objects. This is clearly seen on social media, where users tend to place a high value on the number of friends they have.
“More generally, we suggest that materialists have a tendency to view and treat non-material events (like friendships) as a possession or as means to attain their materialistic goals,” the Ozimek study states. This can be seen on job networking sites such as LinkedIn.
The Effects of Materialism
First and foremost, materialists often neglect the emotions of those they objectify, which can in turn damage interpersonal relationships. People who feel devalued tend to disconnect from the person devaluing them.
Secondly, materialism can also lead to emotional health problems. When material items are the key to a person’s sense of value, losing them can trigger a crisis or other forms of stress. If one person unfriends you, you can swap them for another; but if many people do the same, you are likely to question your own value, aren’t you?
It can be a full-time hobby to keep up with technology as it evolves. Every year, I find myself donating or selling my favorite gadgets as they become obsolete. However, there’s one ancient technology that I’ve been buying more than selling and that’s vinyl. And I’m not alone. Vinyl record sales hit a 28-year high in 2016, according to Fortune Magazine.
Raoul Benavides, owner of Flashlight Vinyl, explains why he was able to open a record store in 2016, and why we miss listening to vinyl records.
People like to think of themselves as savvy shoppers, but they are still vulnerable to common psychological tricks. How do consumers decide what to buy? The truth is that stores know you better than you do—both online and offline. […] (Source: “How Stores Trick You Into Buying More Things,” video by The Atlantic, Oct. 11, 2017.)
The new International Classification of Diseases (ICD-11), the WHO’s official diagnostic manual, will be published in 2018, having last been updated in 1990, so this new addition is quite significant.
“Health professionals need to recognise that gaming disorder may have serious health consequences,” Vladimir Poznyak at the WHO’s Department of Mental Health and Substance Abuse told New Scientist.
Of course, most people who indulge in a spot of Super Mario Odyssey or Zelda aren’t addicted, so the criteria for diagnosing the disorder have been carefully considered.
According to a current draft, the criteria include making gaming a priority “to the extent that gaming takes precedence over other life interests”, and continuing this despite the risk of it being detrimental to your health – such as lack of sleep and sustenance. However, this behavior must be observed for at least a year before diagnosis can be confirmed.
According to Poznyak, the WHO has been considering this inclusion for the best part of a decade, and now, after consultations with mental health experts, the organization is satisfied it meets the criteria of a disorder. When asked why other technology-based addictions were not being included Poznyak said: “There is simply a lack of evidence that these are real disorders.”
Of course, there are plenty of arguments against this new inclusion, including the fear of unnecessarily attaching a stigma to people and trivializing what people consider “real” conditions.
Psychiatrist Allen Frances, former chair of the Diagnostic and Statistical Manual of Mental Disorders task force, has previously said that the DSM, amassed by experts to help define and classify mental disorders, refused to include Internet addiction as a condition for fear of mislabelling and overtreating millions of people who just really, really like their smartphones.
As he points out, “billions of people around the world are hooked on caffeine for fun or better functioning, but only rarely does this cause more trouble than it’s worth.”
However, it was also the DSM’s reclassification of gambling disorder from a compulsion to an addiction in 2013 that legitimized non-substance addiction as a diagnostic category – one that is very hard to define as it is based mostly on symptoms – opening up the possibility that almost anything could be considered pathological.
Indeed, multiple studies have been carried out asking whether or not a wide variety of subjects from shopping to sugar to suntanning to love can be officially described as addictive. Whether they too will one day be recognized as official conditions remains to be seen.
Just about every article I write sets the stage by giving recent estimates of the number of hours children are spending in front of screens. The numbers vary by survey or research study but the fact that they are high and getting higher does not. It’s easy to look at some stats:
Parents estimate their kids 5-to-18-years old spend 4.9 hours per day on a digital device.
Broken out by age in different studies, those numbers look like this:
Parents estimate children up to age eight spend 2 hours and 19 minutes with screens.
Parents estimate children aged 8-to-12 years old spend 4 hours and 36 minutes using screens.
Teens spend an average of 6 hours and 40 minutes engaged with screen entertainment (excluding school work).
Examining these shocking statistics usually leads to the conclusion: “This can’t be good,” with the same foreboding feeling as when a character goes out exploring in a horror flick.
Popular media and parents have been talking about addiction in relation to screen-based media since the advent of the iPhone in 2007. The American Psychiatric Association is very conservative about behavioral addiction diagnoses. It took decades of research and consensus for the association to add Gambling Disorder as a behavioral addiction diagnosis to the DSM-5 (the manual of diagnosable mental health conditions) in 2013. Long before its addition to the DSM-5, many families struggled with Gambling Disorder, and many treatments were available. The American Psychiatric Association has strict guidelines regarding the research needed to validate a diagnosis and provide information on prevalence rates, comorbid conditions, and course.
In 2013, the American Psychiatric Association also added Internet Gaming Disorder to its list of “Conditions for Further Study.” Once there is a sufficient research basis, it could become a diagnosable disorder. However, it is restricted to online gaming, not screen-time in general. While people may feel addicted to screen-time, research has not yet demonstrated the same issues with tolerance and unsuccessful attempts to cut back.
Yet parents and children alike are using the term “addiction” to describe their relationship with screen-based technology. Recent survey research suggests that over 50% of teenagers “feel addicted” to their mobile devices. The survey was conducted by Common Sense Media, whose founder and CEO, James Steyer, stated: “What we’ve discovered is that kids and parents feel addicted to their mobile devices, that it is causing daily conflict in homes, and that families are concerned about the consequences. We also know that problematic media use can negatively affect children’s development and that multitasking can harm learning and performance. As a society we all have a responsibility to take media use and addiction seriously and make sure parents have the information to help them make smart choices for their families.”
Another recent survey study asked about addiction and digital devices: 67% of the 394 U.S. parents of children aged 5-to-18-years-old surveyed described their children as addicted. A virtually identical percentage of parents said they are addicted to digital devices as well.
Basically, we know that screen-time addiction is not a diagnosable mental health disorder, and yet a large percentage of parents and children report feeling “addicted” to screen-based media or digital devices. The next step is to clarify and quantify what kids and parents mean when they say they are addicted to screens.
How do I know if my kid’s screen-time is problematic?
Most parents have an idea of when their child’s screen-time has become problematic. However, new research has given us a standardized way to determine whether those hours of screen-time are problematic. A group of researchers created the Problematic Media Use Measure specifically for parents of children aged 4-to-11-years. The scale items were based on the nine criteria for Internet Gaming Disorder in the DSM-5 and then validated in a series of studies. Importantly, the researchers found that the scale was able to predict problems in functioning over and above children’s total number of hours of screen-time. This incremental validity demonstrates that the measure adds something to our understanding of problematic screen-time beyond “she spends how many hours on that thing?”
The development of this scale is big news to parents (who can have a standardized method of examining their children’s screen habits) and to researchers (who have a more sensitive measure than total hours to examine screen-time problems). The scale items ask about those things that concern parents about screen-media.
Here are some areas to think about if you are worried about your child’s screen-time:
The study and associated measure were just published this year, and more studies will need to be conducted to develop clinical cut-off scores. But for now, this measure can help parents, clinicians, and researchers parse out media use from problematic media use.
To learn more about the study, see the abstract here. The full citation for the study is:
Domoff, S. E., Harrison, K., Gearhardt, A. N., Gentile, D. A., Lumeng, J. C., & Miller, A. L. (2017). Development and Validation of the Problematic Media Use Measure: A Parent Report Measure of Screen Media “Addiction” in Children. Psychology of Popular Media Culture. Advance online publication. http://dx.doi.org/10.1037/ppm0000163
Using seven or more different social media platforms is linked to a tripling in depression risk, psychological research finds.
The study asked about the 11 most popular social media platforms: Facebook, YouTube, Twitter, Google Plus, Instagram, Snapchat, Reddit, Tumblr, Pinterest, Vine and LinkedIn.
Those who used between 7 and 11 of these had 3.1 times the depression risk.
They also had 3.3 times the risk of having high levels of anxiety symptoms.
Professor Brian A. Primack, who led the study, said:
“This association is strong enough that clinicians could consider asking their patients with depression and anxiety about multiple platform use and counseling them that this use may be related to their symptoms.
While we can’t tell from this study whether depressed and anxious people seek out multiple platforms or whether something about using multiple platforms can lead to depression and anxiety, in either case the results are potentially valuable.”
There are a number of ways in which using multiple platforms might lead to depression and anxiety, the authors argue:
Multitasking is known to lead to poor mental health and weakened thinking skills.
Using more platforms might lead to more opportunities for embarrassing mistakes.
Professor Primack said:
“It may be that people who suffer from symptoms of depression or anxiety, or both, tend to subsequently use a broader range of social media outlets.
For example, they may be searching out multiple avenues for a setting that feels comfortable and accepting.
However, it could also be that trying to maintain a presence on multiple platforms may actually lead to depression and anxiety.
More research will be needed to tease that apart.”
The results come from a 2014 survey of 1,787 US adults aged between 19 and 32.
Since the 2016 U.S. Presidential election, concerns over the circulation of “fake” news and other unverified digital content have intensified. As people have grown to rely on social media as a news source, there has been considerable debate about its role in aiding the spread of misinformation. Much recent attention has centered around putting fact-checking filters in place, as false claims often persist in the public consciousness even after they are corrected.
We set out to test how the context in which we process information affects our willingness to verify ambiguous claims. Results across eight experiments reveal that people fact-check less often when they evaluate statements in a collective setting (e.g., in a group or on social media) than when they do so alone. Simply perceiving that others are present appeared to reduce participants’ vigilance when processing information, resulting in lower levels of fact-checking.
Our experiments surveyed over 2,200 U.S. adults via Amazon Mechanical Turk. The general paradigm went as follows: As part of a study about “modes of communication on the internet,” respondents logged onto a simulated website and evaluated a series of statements. These statements consisted of ambiguous claims (of which half were true and half were false) on a range of topics, from current events (e.g., “Scientists have officially declared the Great Barrier Reef to be dead”) to partisan remarks made by political candidates (e.g., “Undocumented immigrants pay $12 billion a year into Social Security”).
Participants could identify each statement as true or false; or, they could raise a fact-checking “flag” to learn its accuracy. On top of a fixed payment for participating, each person had the chance to earn a bonus depending on how well they performed (e.g., they received +1 point and -1 point per correct and incorrect answer, respectively, with each point awarding 5¢). In some studies, people gained no points for flagging; in others, they received a small penalty or a small reward for flagging. In still others, we entered them into a lottery for $100 if they scored in the 90th percentile. These different incentive structures did not change the overall patterns we found.
In the first experiment, participants gave responses (true, false, or flag) for 36 statements described as news headlines published by a U.S. media organization. Throughout the task, half the participants saw their own username displayed alone on the side of the screen, while the other half also saw those of 102 respondents described as currently logged on, presumably completing the same task. People flagged (fact-checked) fewer statements when they perceived that others were present.
We next tried to simulate social presence in a more natural environment. In addition to exposing people to either their own or others’ names, half the participants evaluated “news headlines” on the website used in the previous study (reflecting a more “traditional” media platform), while the other half read the same headlines presented as the news organization’s posts in a Facebook feed. On the traditional site, people again flagged less often when they saw others online compared to when they thought they were alone. But, participants who read Facebook posts flagged few statements regardless of whether they saw others’ names on the screen. Browsing information on social media, an inherently social context, seemed to make individuals behave as if they were in a group.
In another experiment, we learned that others’ presence may be felt even when they’re not engaged in an activity at the same time. People flagged less often when they saw other names on the screen even when we described those other participants as users who had logged in and completed the task a week ago.
Why might collective settings suppress fact-checking? One reason could be that people flagged fewer statements simply because they felt more confident about their answers when others were around. But this doesn’t appear likely. When we asked participants to report their confidence and certainty in their responses, we found that these did not vary according to whether they evaluated claims alone or in the presence of others. We also found that performance on the task did not differ consistently across our alone and group conditions.
A second argument is that people may expect to free-ride on others’ effort, as shown in research on responsibility diffusion and the bystander effect (e.g., “If everyone else is verifying, why should I?”). Participants in most of our studies, though, could not rely on others to fact-check for them. A separate experiment tested whether making people feel individually responsible within a group can correct for this kind of “loafing” mentality. Respondents read 38 statements about U.S. congressmen/women; some saw their names appear alone, while others saw those of other “team members” working on the same task. A third group saw their own name highlighted in red text, which was meant to distinguish them from everyone else’s names in black. Although these participants felt a greater sense of responsibility, they still flagged fewer statements than those who did the task alone. So, loafing does not appear to fully explain the behaviors we observe.
We also investigated whether a particular type of conversational norm — that we often assume a speaker is telling the truth and thus avoid expressing skepticism so as not to offend him or her, especially in group environments — helps explain the findings. Our results do not support this explanation: participants did not tend to believe information more in the presence of others; rather, they simply tended to fact-check it less. We also assessed directly whether individuals in group settings are more willing to fact-check when this conversational norm is less salient, as when evaluating claims from political candidates. Given that people usually expect politicians to be dishonest (as data from a pretest suggest), they should have fewer qualms expressing their mistrust by fact-checking those politicians’ statements.
Participants evaluated 50 campaign statements from two U.S. politicians before an election: Candidate A’s statements reflected a conservative view, candidate B’s a liberal one. As with previous studies, respondents either saw their own names appear alone or alongside others’ names. Although people identified more statements as true when the views expressed matched their own political affiliation, this alignment didn’t affect fact-checking rates; how much people flagged depended only on whether they evaluated claims alone or in a group. In sum, even for sources perceived as less trustworthy (i.e., politicians), people flagged fewer claims when they believed they were in a group.
Another possibility is that being around others somehow automatically lowers our guards. Research on animal and human behavior has pointed to a “safety in numbers” heuristic in which crowds (or herds) decrease vigilance, perhaps because we believe any risk would be divided. Because fact-checking demands some measure of wariness, a similar mechanism might apply when people are attuned to other individuals online.
A few pieces of evidence lend credence to this idea. First, respondents in another experiment who scored high on chronic prevention focus — a trait associated with being habitually cautious and vigilant — were mostly “immune” to the effect of social presence. That is, these individuals fact-checked just as much in the company of others as they did by themselves. Second, participants who did a proofreading task in a group environment performed worse than those who did so alone, suggesting that social presence may impair our vigilance more generally. Finally, when we promoted a vigilance mindset by having people first do exercises shown to momentarily increase prevention focus, participants in a group setting flagged nearly twice as many statements as those who weren’t given such encouragement (figure 2).
All in all, these findings add to the ongoing conversation about misinformation in increasingly connected online environments. Critics of social media often point to its complicity in creating “echo chambers” that selectively expose us to likeminded people and to content that matches and reinforces our beliefs. But our participants seemed reluctant to question claims even in the presence of strangers, suggesting that the effect may be amplified further among likeminded connections.
Recent efforts to promote crowdsourced fact-checking have found some success in taming the diffusion of unreliable news. At a time when information is so easily and instantaneously shared, developing tools that encourage people to absorb content with a critical eye is all the more pressing. Understanding when we are likely to verify what we read can help guide these initiatives.
Rachel Meng is a doctoral candidate of Marketing at Columbia Business School. She is interested in judgment and decision making. Her current research focuses on incentives for motivating behavior change (with emphasis on the limits and consequences of monetary rewards), the influence of others on how people process information, and financial decision making among the poor.
Youjung Jun is a doctoral candidate of marketing at Columbia Business School. She studies social and media influences on how people process information. Her current research focuses on shared reality (experiencing something in common with others) and its effects on people’s memories, performance, and the construction of new knowledge in a social process.
Gita V. Johar is the Meyer Feldberg Professor of Business at Columbia Business School and a co-editor of the Journal of Consumer Research. She also serves as the Faculty Director for Online Initiatives at Columbia Business School and serves as the Chair of the Faculty Steering Committee, Columbia Global Centers Mumbai.
The truth behind the brand: how brands came to be, what they did to get to where they are, and what they’re called. For each one, it’s a different story: some boring as hell, some interesting enough to be […] (Source: “11 Unknown Facts & Stories About The World’s Biggest Brands,” by Joshua Moraes.)
How Fiction Becomes Fact on Social Media
Hours after the Las Vegas massacre, Travis McKinney’s Facebook feed was hit with a scattershot of conspiracy theories. The police were lying. There were multiple shooters in the hotel, not just one. The sheriff was covering for casino owners to preserve their business.
The political rumors sprouted soon after, like digital weeds. The killer was anti-Trump, an “antifa” activist, said some; others made the opposite claim, that he was an alt-right terrorist. The two unsupported narratives ran into the usual stream of chatter, news and selfies.
“This stuff was coming in from all over my network of 300 to 400” friends and followers, said Mr. McKinney, 52, of Suffolk, Va., and some posts were from his inner circle.
But he knew there was only one shooter. A handgun instructor and defense contractor, he had been listening to the police scanner in Las Vegas with an app. “I jumped online and tried to counter some of this nonsense,” he said.
In the coming weeks, executives from Facebook and Twitter will appear before congressional committees to answer questions about the use of their platforms by Russian hackers and others to spread misinformation and skew elections. During the 2016 presidential campaign, Facebook sold more than $100,000 worth of ads to a Kremlin-linked company, and Google sold more than $4,500 worth to accounts thought to be connected to the Russian government.
Agents with links to the Russian government set up an endless array of fake accounts and websites and purchased a slew of advertisements on Google and Facebook, spreading dubious claims that seemed intended to sow division all along the political spectrum — “a cultural hack,” in the words of one expert.
Yet the psychology behind social media platforms — the dynamics that make them such powerful vectors of misinformation in the first place — is at least as important, experts say, especially for those who think they’re immune to being duped. For all the suspicions about social media companies’ motives and ethics, it is the interaction of the technology with our common, often subconscious psychological biases that makes so many of us vulnerable to misinformation, and this has largely escaped notice.
Skepticism of online “news” serves as a decent filter much of the time, but our innate biases allow it to be bypassed, researchers have found — especially when presented with the right kind of algorithmically selected “meme.”
At a time when political misinformation is in ready supply, and in demand, “Facebook, Google, and Twitter function as a distribution mechanism, a platform for circulating false information and helping find receptive audiences,” said Brendan Nyhan, a professor of government at Dartmouth College (and occasional contributor to The Times’s Upshot column).
For starters, said Colleen Seifert, a professor of psychology at the University of Michigan, “People have a benevolent view of Facebook, for instance, as a curator, but in fact it does have a motive of its own. What it’s actually doing is keeping your eyes on the site. It’s curating news and information that will keep you watching.”
That kind of curating acts as a fertile host for falsehoods by simultaneously engaging two predigital social-science standbys: the urban myth as “meme,” or viral idea; and individual biases, the automatic, subconscious presumptions that color belief.
The first process is largely data-driven, experts said, and built into social media algorithms. The wide circulation of bizarre, easily debunked rumors — so-called Pizzagate, for example, the canard that Hillary Clinton was running a child sex ring from a Washington-area pizza parlor — is not entirely dependent on partisan fever (though that was its origin).
For one, the common wisdom that these rumors gain circulation because most people conduct their digital lives in echo chambers or “information cocoons” is exaggerated, Dr. Nyhan said.
In a forthcoming paper, Dr. Nyhan and colleagues review the relevant research, including analyses of partisan online news sites and Nielsen data, and find the opposite. Most people are more omnivorous than presumed; they are not confined in warm bubbles containing only agreeable outrage.
But they don’t have to be for fake news to spread fast, research also suggests. Social media algorithms function at one level like evolutionary selection: Most lies and false rumors go nowhere, but the rare ones with appealing urban-myth “mutations” find psychological traction, then go viral.
There is no precise formula for such digital catnip. The point, experts said, is that the very absurdity of the Pizzagate lie could have boosted its early prominence, no matter the politics of those who shared it.
“My experience is that once this stuff gets going, people just pass these stories on without even necessarily stopping to read them,” Mr. McKinney said. “They’re just participating in the conversation without stopping to look hard” at the source.
Digital social networks are “dangerously effective at identifying memes that are well adapted to surviving, and these also tend to be the rumors and conspiracy theories that are hardest to correct,” Dr. Nyhan said.
One reason is the raw pace of digital information sharing, he said: “The networks make information run so fast that it outruns fact-checkers’ ability to check it. Misinformation spreads widely before it can be downgraded in the algorithms.”
The extent to which Facebook and other platforms function as “marketers” of misinformation, similar to the way they market shoes and makeup, is contentious. In 2015, a trio of behavioral scientists working at Facebook inflamed the debate in a paper published in the prominent journal Science.
The authors analyzed the news feeds of some 10 million users in the United States who posted their political views, and concluded that “individuals’ choices played a stronger role in limiting exposure” to contrary news and commentary than Facebook’s own algorithmic ranking — which gauges how interesting stories are likely to be to individual users, based on data they have provided.
Outside critics lashed the study as self-serving, while other researchers said the analysis was solid and without apparent bias.
The other dynamic that works in favor of proliferating misinformation is not embedded in the software but in the biological hardware: the cognitive biases of the human brain.
Purely from a psychological point of view, subtle individual biases are at least as important as rankings and choice when it comes to spreading bogus news or Russian hoaxes — like a false report of Muslim men in Michigan collecting welfare for multiple wives.
Merely understanding what a news report or commentary is saying requires a temporary suspension of disbelief. Mentally, the reader must temporarily accept the stated “facts” as possibly true. A cognitive connection is made automatically: Clinton-sex offender, Trump-Nazi, Muslim men-welfare.
And refuting those false claims requires a person to first mentally articulate them, reinforcing a subconscious connection that lingers far longer than people presume.
Over time, for many people, it is that false initial connection that stays the strongest, not the retractions or corrections: “Was Obama a Muslim? I seem to remember that….”
In a recent analysis of the biases that help spread misinformation, Dr. Seifert and co-authors named this and several other automatic cognitive connections that can buttress false information.
Another is repetition: Merely seeing a news headline multiple times in a news feed makes it seem more credible before it is ever read carefully, even if it’s a fake item being whipped around by friends as a joke.
And, as salespeople have known forever, people tend to value the information and judgments offered by good friends over all other sources. It’s a psychological tendency with significant consequences now that nearly two-thirds of Americans get at least some of their news from social media.
“Your social alliances affect how you weight information,” said Dr. Seifert. “We overweight information from people we know.”
The casual, social, wisecracking nature of thumbing through and participating in the digital exchanges allows these biases to operate all but unchecked, Dr. Seifert said.
Stopping to drill down and determine the true source of a foul-smelling story can be tricky, even for the motivated skeptic, and mentally it’s hard work. Ideological leanings and viewing choices are conscious, downstream factors that come into play only after automatic cognitive biases have already had their way, abetted by the algorithms and social nature of digital interactions.
“If I didn’t have direct evidence that all these theories were wrong” from the scanner, Mr. McKinney said, “I might have taken them a little more seriously.”
Sep 04, 2017 | Contributors: Prof. Dai Xianchi and Prof. Robert Wyer, Department of Marketing, CUHK Business School
By Fang Ying, Senior Writer, China Business Knowledge @ CUHK
Empty space or white space has been widely used in advertising and interior design to give the feeling of a clean and elegant look. “Less is more” is the message in the modern world. However, will “more” space become “less” effective in communication?
Only a few empirical studies have investigated the effect of empty space on consumer behavior, and the findings are unclear and sometimes contradictory. For instance, a previous study found that surrounding the picture of a product with empty space increases perceptions of the product's prestige value, thereby improving evaluations of the product. However, other research suggests that the empty space surrounding a verbal message can draw people's attention away from the message and reduce the resources they devote to processing it, thereby decreasing the message's impact.
In a recent study, Prof. Dai Xianchi, Associate Professor of the Department of Marketing at CUHK Business School, further examined the effect of empty space on persuasion. The study was carried out with his collaborators, Prof. Robert Wyer, Visiting Professor in the same department, and PhD student Canice Kwan, now Assistant Professor at Sun Yat-sen University.
“People’s construal of the implications of a message goes beyond its literal meaning and the white space that surrounds a text message can affect the message’s persuasiveness,” says Prof. Dai.
The researchers proposed that when a verbal statement is surrounded by empty space, it activates the more general concept that there is room for doubt about the validity or importance of the message content.
“In other words, the statement is less persuasive when it is surrounded by empty space than when it is not,” Prof. Dai points out.
The Studies and Results
Seven studies in both laboratory and real-life settings were conducted.
In one study, the team collected 115 images of statements posted on a Facebook page over a one-month period from November to December 2013. For each message image, they took a screenshot and recorded the amount of space (image size and text space), the presence of non-text elements (pictures of cartoon characters or celebrities, nature-scene backgrounds, etc.), and audience responses, using the total numbers of likes, shares and comments as the indicators of the message's effectiveness.
The results showed that individuals' liking for the statements decreased as the amount of empty space increased. In other words, the impact of a statement decreases when it is surrounded by empty space.
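The pattern the study describes can be illustrated with a small, purely hypothetical calculation: given each post's empty-space proportion and its total engagement (likes + shares + comments), the reported relationship would show up as a negative correlation. The data below are invented for illustration only; the actual study analyzed 115 Facebook images and also accounted for non-text elements.

```python
import math

# Hypothetical (invented) data: proportion of each post image that is
# empty space, and the post's total engagement (likes + shares + comments).
empty_space = [0.10, 0.25, 0.40, 0.55, 0.70, 0.85]
engagement  = [320,  280,  210,  190,  120,   90]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(empty_space, engagement)
print(f"r = {r:.2f}")  # strongly negative for this made-up sample
```

A negative value of r here simply mirrors the study's finding in miniature: more surrounding empty space, less engagement.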
In another study, 126 Hong Kong undergraduate students completed several marketing studies unrelated to the experiment. Afterward, the researchers announced that participants could take copies of a research paper related to the studies from a table next to the exit.
The copies were placed next to two pasteboards, each bearing a note that read "PICK ME!".
The text, font size and typeface of the notes were identical, but the pasteboards came in two sizes, creating two conditions: A4, with substantial empty space surrounding the text, and A5, with limited space surrounding the text.
The results revealed that more students picked up the papers in the limited-space condition (59.6%) than in the empty-space condition (37.7%).
“It indicates that participants complied less with the message’s implication when the message was surrounded by substantial empty space,” Prof. Dai says.
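To see why a gap like 59.6% versus 37.7% is meaningful rather than noise, one can run a standard two-proportion z-test. The per-condition sample sizes are not reported in this article, so the sketch below assumes, for illustration only, an even 63/63 split of the 126 students.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z-statistic for H0: p1 == p2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Reported pickup rates; the 63/63 split is an assumption, since the
# article does not state how the 126 students divided across conditions.
z = two_proportion_z(0.596, 63, 0.377, 63)
print(f"z = {z:.2f}")  # exceeds 1.96, the 5% two-tailed critical value
```

Under the assumed split, the difference would be statistically significant at the conventional 5% level, consistent with the authors' interpretation.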
Another study examined whether the amount of space surrounding a persuasive message influences recipients' opinions differently depending on whether the message was generated randomly by a computer or composed intentionally by the communicator.
This time, 266 US participants were asked to evaluate two popular quotes from the Internet that emphasized the importance of personal warmth: “Hold on to whatever keeps you warm inside” and “A kind word can warm three winter months”. Each quote was presented in either a box with little empty space or a box with substantial empty space.
Unlike in other studies, a headline was also added at the top of each quote. In the condition where the message was randomly generated, the headline stated: “The message and the configuration of the image (e.g., font, color, or other visuals) do not reflect the personal attitude or intention of the author”. On the other hand, in the condition where the quote reflected the personal attitude or intention of the author, the headline read: “The message and the configuration of the image are the result of the author’s free choice”.
In each case, participants rated the persuasiveness of each statement on three questions: "To what extent do you like the quote?"; "To what extent do you think the quote is important?"; and "To what extent do you agree with the quote?", each on a scale from 1 (not at all) to 7 (very much). They also reported how strongly they perceived the quote to convey its opinion, and the time they took to make their evaluations was recorded.
As predicted, the results showed that when the message was generated intentionally by the communicator, participants perceived it to convey a significantly weaker opinion when there was substantial empty space than when there was little empty space.
“That is to say, empty space should not influence the persuasiveness of the message if readers believed that the configuration of space and message was generated randomly by a computer,” said Prof. Dai.
“Our experiment suggested that people infer the strength of a statement from the design – whether the statement is surrounded by empty space or full space,” he continued.
“This study demonstrates how visual cues, in particular empty space, affect the impact of verbal messages. All our results have shown people find a message less persuasive when it is surrounded by empty space than when it is not,” says Prof. Dai.
“This offers practical insights on advertising and even in political campaigns. For example, a candidate may want to present his messages in limited space rather than empty space to convey his messages more effectively,” says Prof. Dai.
Kwan, Canice, Xianchi Dai, and Robert Wyer, "Contextual Influences on Message Persuasion: The Effect of Empty Space," Journal of Consumer Research, 2017.