The Future of Consumer Research January 2013
A post-election analysis of the 2012 US elections offers insights about the future of consumer research, which the winning campaign used in innovative ways and to excellent effect. Some lessons apply to any closely contested race, regardless of country, and also to companies fighting for market share. The first lesson is to pay close attention to changing demographics, such as the increasing number of single women and the greater ethnic diversity of young adults. Other important lessons include seeking input from behavioral and social psychologists, avoiding polling pitfalls, putting big data to operational use, and paying attention to finely tuned media buying.
"In the way that it used research, [the 2012 Obama campaign] was a campaign like no other," asserts Todd Rogers, a Harvard psychologist in a New York Times article ("Academic 'Dream Team' Helped Obama's Effort," 13 November 2012). The Obama campaign took advantage of input it invited from a team of social scientists: the consortium of behavioral scientists (COBS). The consortium members included eminent academics from the University of California (Los Angeles and San Diego), Princeton and Columbia Universities, the University of Chicago, and Arizona State University. Ideas included how to characterize Romney, how to counter false claims from the Romney campaign, and "research-based advice" about how to mobilize voters. The consortium of academics did not know if or how its ideas might serve; participation was voluntary and informal. One idea that saw good use was how to neutralize false claims from the opposition—a persistent problem for John Kerry in the 2004 presidential election. The recommendation from social scientists was not to deny a false claim (such as, "Obama is a Muslim") but to focus on the correct information (that Obama is a Christian). Other COBS input resulted in scripts used by on-the-ground volunteers to encourage previous voters to commit to vote by signing an informal contract to do so, to make an election-day plan to vote, and to inform registered voters of neighbors who were planning to vote. Research shows, for example, that people who make a formal commitment are more likely to follow through on promised actions than are people who do not commit. According to the New York Times article, COBS "efforts to contact the Romney campaign were unsuccessful." Companies as well as candidates will ignore research from social scientists at their peril. The competition may be talking to them!
A plethora of political polls in 2012 demonstrated one result clearly: unlike in the twentieth century, relying on the results of a single poll can be dicey in the twenty-first. Thanks in part to technological change and a more fragmented population, representative samples of the population are more difficult for pollsters to devise. For example, landline-only households no longer represent the majority, and regulations prevent robocalls to cell phones. Many 2012 polls missed the strength of Obama's appeal, according to the New York Times' 12 November 2012 FiveThirtyEight blog; the majority of Republicans disputed the poll results that showed Obama in the lead. Overall, technology-enabled methods of reaching potential voters fared better than traditional polling methods; Google, for instance, produced more accurate results than did Gallup. When polls designed and "conducted by independent organizations clash with internal polls released by campaigns, the public polls usually prove more reliable," according to Nate Silver, American author and statistician. However, the clearest picture emerges when one aggregates and averages poll results. Two organizations that statistically adjusted aggregate poll data to produce the most reliable forecasts were FiveThirtyEight and PollyVote, whose panel of experts includes Andreas Graefe, a research fellow at Ludwig-Maximilians-Universität (LMU; Munich, Germany), and J. Scott Armstrong (Wharton School, University of Pennsylvania). Candidates perceive positive poll results as critical because they create momentum, and many people want to vote for a winner. In the case of a US election, the electorate ultimately decides the outcome in early November of an election year. A company that uses research from only one survey, poll, focus group, or feedback loop to develop strategy and communications won't find out whether it has "won" consumers' approval until the cash register fails to ring, by which point it is too late to recoup R&D, manufacturing, and marketing costs and lost time. (For related information, see Social-Media Use for Focus Groups.)
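To illustrate the aggregation principle, the sketch below combines several polls into a single weighted estimate, weighting each poll by sample size and recency. The poll figures and the weighting scheme are hypothetical simplifications for illustration only; FiveThirtyEight and PollyVote apply far more elaborate adjustments (pollster house effects, combining multiple forecasting methods, and so on).

```python
from datetime import date

# Hypothetical poll records; the figures below are illustrative, not actual 2012 results.
polls = [
    {"pollster": "Poll A", "end": date(2012, 10, 28), "n": 1200, "obama": 0.49, "romney": 0.47},
    {"pollster": "Poll B", "end": date(2012, 11, 1),  "n": 800,  "obama": 0.50, "romney": 0.48},
    {"pollster": "Poll C", "end": date(2012, 11, 3),  "n": 1500, "obama": 0.48, "romney": 0.48},
]

ELECTION_DAY = date(2012, 11, 6)

def weight(poll):
    """Weight a poll by sample size and recency (a simple stand-in for the
    more elaborate statistical adjustments the forecasters actually use)."""
    recency = 1.0 / (1 + (ELECTION_DAY - poll["end"]).days)  # fresher polls count more
    return poll["n"] ** 0.5 * recency                        # diminishing returns on sample size

def aggregate(polls, key):
    """Weighted average of one candidate's share across all polls."""
    total_weight = sum(weight(p) for p in polls)
    return sum(weight(p) * p[key] for p in polls) / total_weight

print(f"Aggregated Obama share:  {aggregate(polls, 'obama'):.1%}")
print(f"Aggregated Romney share: {aggregate(polls, 'romney'):.1%}")
```

Even this toy version shows the point of the paragraph above: no single poll drives the estimate, so one outlier cannot skew the picture the way it can when a campaign, or a company, relies on a single survey.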
Obama strategists studied information that they collected and fed into a database from email lists, "Facebook and millions of door-to-door discussions conducted by volunteers in swing states." They ranked people from least likely to most likely to support Obama. They used the results in a number of ways, but of primary importance was how they put the data to work on the ground. Strategists identified the paramount concerns of small voter groups and fed that information to local Obama-support organizations. As a result, many volunteers were able to have meaningful conversations with individual voters about the president's position on the issue, or issues, most important to those voters. Technology enables big data collection; many companies are exploring ways to use big data effectively for communications. If a company is collecting big data, it may want to take a page from the Obama strategists' playbook: they used big data successfully with small groups of people. Big data needs to match a company's operational goals; sometimes business applications are not obvious or appropriate.
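A minimal sketch of that ground game follows, assuming a handful of hypothetical contact records with precomputed support scores: rank individuals by likelihood of support, isolate the persuadable middle, and surface the paramount concern in each small local group. The field names, scores, and thresholds are illustrative stand-ins, not the campaign's actual data model.

```python
from collections import Counter, defaultdict

# Hypothetical voter-contact records; simplified stand-ins for a database
# built from email lists, Facebook, and door-to-door conversations.
contacts = [
    {"name": "Voter 1", "precinct": "Springfield-03", "support_score": 0.92, "top_issue": "health care"},
    {"name": "Voter 2", "precinct": "Springfield-03", "support_score": 0.35, "top_issue": "jobs"},
    {"name": "Voter 3", "precinct": "Springfield-03", "support_score": 0.55, "top_issue": "jobs"},
    {"name": "Voter 4", "precinct": "Shelbyville-01", "support_score": 0.61, "top_issue": "education"},
    {"name": "Voter 5", "precinct": "Shelbyville-01", "support_score": 0.48, "top_issue": "education"},
]

# 1. Rank individuals from least likely to most likely to support the candidate.
ranked = sorted(contacts, key=lambda v: v["support_score"])

# 2. Flag the persuadable middle for volunteer conversations (threshold is arbitrary here).
persuadable = [v for v in ranked if 0.40 <= v["support_score"] <= 0.60]

# 3. Surface the paramount concern in each small local group so volunteers
#    can open with the issue that matters most in that neighborhood.
issues_by_precinct = defaultdict(Counter)
for voter in persuadable:
    issues_by_precinct[voter["precinct"]][voter["top_issue"]] += 1

for precinct, issues in issues_by_precinct.items():
    issue, count = issues.most_common(1)[0]
    print(f"{precinct}: lead with '{issue}' ({count} persuadable voters)")
```

The design point is the same one the paragraph makes: the value of the big dataset shows up only when it is reduced to something a volunteer on a doorstep, or a sales rep with a single account, can act on.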
Another way in which Obama strategists used big data to good effect was in media buying, television-media buying in particular. Campaign-media dollars were stretched thin because of the number of swing states (nine) in the 2012 election and hotly contested Senate and congressional races. Rather than buy media on the basis of sex-and-age demographic groups (the basis most commercial advertisers use), the team combined traditional Nielsen Media Research ratings with newly available, detailed cable set-top-box data. They worked backward to identify which programs undecided and low-information voters would be most likely to watch. As a result, they bought more niche programming, such as late-night talk shows (Jimmy Kimmel Live), ESPN, and the cable channel TV Land, than they had in 2008. They call their technology-based solution to television-ad buying "the Optimizer." Although Optimizer media buys may not make a difference in a runaway election, in a close contest every detail has the potential to make a big difference in the outcome. Most companies can optimize their media buys in much the same way the Obama strategists did. Most companies use an advertising agency or media-buying firm to place television buys, and most agencies subscribe to large consumer-data studies such as GfK MRI's Survey of the American Consumer. This study contains not only consumer-product-behavior information (with which to create populations of best customers or of people with certain attitudes toward a product category) but also extensive media-viewing, reading, and listening data. For companies not held hostage by their ad agency, it is quite easy to identify a population of interest and develop highly targeted media-channel plans and buys.
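The sketch below shows the "work backward" idea in miniature: given the share of a target group (say, undecided, low-information voters or a brand's best customers) that watches each program and the share of the general population that watches it, rank programs by a composition index so that niche programs that over-deliver the target rise to the top. The program names and reach figures are hypothetical, and real Nielsen and set-top-box data are far more granular than this toy table; the index itself is a standard media-planning measure, not the campaign's proprietary Optimizer.

```python
# Hypothetical viewing data: for each program, the fraction of the target group
# watching it and the fraction of all adults watching it.
programs = {
    "Prime-time network drama": {"target_reach": 0.060, "total_reach": 0.110},
    "Late-night talk show":     {"target_reach": 0.045, "total_reach": 0.050},
    "Cable rerun block":        {"target_reach": 0.030, "total_reach": 0.028},
    "Sports highlights":        {"target_reach": 0.050, "total_reach": 0.065},
}

def composition_index(program):
    """How concentrated the target audience is in a program's viewership.
    Values above 100 mean the program over-delivers the target group
    relative to the general population."""
    return 100 * program["target_reach"] / program["total_reach"]

# Work backward from the target group to the programs it is most likely to watch.
ranked = sorted(programs.items(), key=lambda item: composition_index(item[1]), reverse=True)

for name, data in ranked:
    print(f"{name:26s} index = {composition_index(data):6.1f}")
```

Run on these illustrative numbers, the smaller cable and late-night programs outrank the big prime-time buy, which is exactly the shift toward niche programming the paragraph describes.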
When faced with a well-funded opponent, it is vital to pay close attention to details, have a long-term strategy, and employ technology in innovative ways. Many companies already do so. But as competition heats up for more limited consumer dollars, a good idea for the future is to take a few pointers from the winning team: use a diversity of research methods and multiple data inputs to succeed.