Special Report Consumer Research: The art of listening: What respondents really mean

A few months ago, Hugo Powell, president of Labatt Breweries of Canada, and marketing consultant Richard Kelly, in a joint speech to the Grocery Products Manufacturers of Canada, questioned whether companies are really listening to consumers.

They said that while companies do ask consumers questions, and usually get answers, in many cases the research findings do not accurately reflect what consumers really think or feel.

They provided a telling example to illustrate their point.

They said a U.S. airline conducted research which reported high levels of customer satisfaction.

It was the kind of result which made the client feel good – until someone questioned whether this ‘fact’ was really a fact.

New survey

A new survey was developed, including a question to measure levels of agreement with the statement, ‘I hate airlines.’

As it turned out, it was a statement with which almost all of the ‘highly satisfied’ respondents agreed.

Powell concluded: ‘So, while travellers would, in a traditional survey, say they were satisfied, what they meant was that the airline was only satisfactory in relation to their relatively low expectations.’

The speech, in which Powell and Kelly explained how Labatt has fundamentally changed its approach to the research process over the past couple of years, focussed attention on two questions of profound importance to researchers and to the companies that hire them, namely:

What must marketers do to ensure they understand what consumers mean, not merely what they say? And, by extension, if it is true that marketers sometimes do not understand their consumers, what are the underlying causes?

Mike Nestler, a partner at Toronto-based research consultancy Commins Wingrove and president of the Professional Marketing Research Society, says there are many reasons why companies sometimes fail to understand what consumers want.

Nestler says principal among them is the unwillingness of clients to allow the research process sufficient time and latitude to weed out marginal or illusory marketing opportunities.

‘My own experience tells me that the majority of failures are caused by the aggressive business agenda of the company as a whole,’ he says.

‘So often the pressure for results in a certain timeframe seems to rule the decision-making process and does not allow for appropriate caution and stop-points along the way to evaluate the true nature of the feedback you are getting,’ he says.

Nestler says new product development, in particular, seems to roll along like a juggernaut, adding that once volume and anticipated profits have been written into a company’s financial plan, it is hard for clients to stay objective.

Rationalizations

‘The new product development process turns from one of true exploration into a series of rationalizations, where the company will say, “What we are seeing is probably okay, and should probably continue, and will probably be okay in the end,”’ he says.

Marilyn Sandler, president of North York, Ont.-based Creative Research International, agrees the pressure for immediate answers often does not allow sufficient time to think things through.

‘That’s the reality of doing business today,’ Sandler says.

‘Everyone is afraid of being beaten by the next guy, or being out there later than the next guy,’ she says. ‘As opposed to making sure they get it right.’

Angus Reid, president of Vancouver-based Angus Reid Group, attributes the failure of new products, in particular, less to bad research than to a process which begins with the product and ends with the market, rather than the other way around.

‘I think there are a lot of companies that are still very much product-driven as opposed to marketing-driven,’ Reid says.

‘Some engineer at Pepsi says, “I have invented a technique to take the color out of Pepsi and make it [clear],”’ he says.

‘It may not be, from a strategic standpoint, the most fertile area in which to develop a new product, but the research company is often boxed into a corner and asked to help sell it.’

Support agenda

Reid says, far too often, market research is used not to understand what consumers want, but to support a broader political agenda within the client’s organization.

Tery Poole, president of Toronto-based Poole Adamson Research Consultants, says companies that fail to understand their consumers do so because they have no appreciation for the true role of the researcher.

‘There is a vast difference between a true researcher and an answer-gatherer,’ Poole says.

Says answer is right

‘The answer-gatherer is somebody that asks a question and expects to get an answer, and says that answer is right,’ he says.

‘The researcher questions it. And says, “Under what circumstances was that question asked? Was it the right question?”’

Poole says he has had field survey companies come to him shaking their heads at the kind of questionnaires they get asked to implement by so-called researchers.

‘The naivete of the wording, the flow between the questions, the thinly disguised, terribly leading questions that are supposed to be objective brand usage questions – it’s like some brand manager, or some agency executive sat down and said, “Here’s a bunch of questions, ask these,”’ he says.

‘They think that trying to understand people is as simple as asking a question. It is not. People are much more fascinating than that.’

Henry Fiorillo, managing partner at Toronto-based Research Management Group – and, with Richard Kelly, one of the key thinkers behind Labatt’s decision to question even its most basic assumptions about its customers – says companies often fail in their quest to understand consumer behavior because they adopt standardized approaches that do not answer the business problem at hand.

‘It is a template approach rather than an investigative approach,’ Fiorillo says.

Standardized tools

‘People say, “Let’s do a tracking study,” or, “Let’s do a [usage and attitude study].” There are a bunch of standardized tools that produce homogenized findings.’

Fiorillo blames the rise of MBA-trained middle managers for what he sees as an overdependence on pre-packaged research solutions.

‘Back 25 years ago, when there were fewer MBA-trained middle managers, who had not had exposure to marketing research methodologies, you had managers rely far more on the professional expertise of the researcher to conduct better-designed research investigations,’ he says.

‘Now, with the proliferation of MBAs, many of whom have had only a course, or even a half-course in market research methodologies, there is a greater confidence to plunge headlong into their own interpretation and relegate the role of researchers to almost that of a clerical status and function.’

Order-taker

Fiorillo says reducing the status of the researcher to that of an order-taker leaves little room for one of the most important parts of the research process.

‘Real research, in marketing terms, goes back to an ability on the part of the researcher to help articulate the business problem,’ he says. ‘Only when that is done can the research be properly framed.’

Fiorillo believes there is a premium to be gained by moving more slowly at the outset of a research project – doing more background research, following one’s instincts and not committing immediately to a standardized tool.

He says he and Kelly once spent more than 12 days designing a questionnaire that might have taken only two days had they used conventional methods.

But they managed to derive some fascinating insights for their client.

Unfortunately, Fiorillo says, it’s rare that a client will allow a researcher to experiment.

He recalls that early in his career he told one of his clients there was a serious flaw in the approach it was using to gather data.

‘Their answer at the time was, “It must be right, we have been doing it that way so often,”’ Fiorillo says.

‘They didn’t want to question the way they had been doing things,’ he says.

Gary Edwards, research director at Gallup Canada in Toronto, says many companies would rather ask the wrong questions of their customers than have to deal with answers that would force them to make changes.

Edwards says this is especially true of companies that conduct conventional customer satisfaction research.

He says that by asking basic questions about such things as accuracy of invoicing and length of response time, and by combining responses to the top three points of a five-point satisfaction scale, many companies will report that 90% of their customers are satisfied, when that usually is not the case.

‘Doesn’t predict anything’

‘It’s a good thing for someone in the marketing department to bring to their CEO and say, “Look, we are doing great, so we need our budget for next year,” but it doesn’t predict anything in terms of that company’s growth,’ Edwards says.

He says that only by isolating the top score, and by asking customers tougher, more open-ended questions, can companies hope to gauge customer satisfaction with any degree of accuracy.
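
The arithmetic behind that gap is simple enough to sketch. The few lines of Python below are for illustration only (the ratings are invented, not drawn from any study mentioned here); they show how the same sample of five-point answers reads as 90% satisfied under top-three-box scoring and 15% satisfied when only the top score is counted.

    # A rough sketch of the scoring difference Edwards describes, using an
    # invented sample of five-point satisfaction ratings (5 = very satisfied).
    ratings = [5, 4, 3, 3, 4, 2, 5, 3, 4, 3, 4, 3, 5, 3, 4, 3, 2, 4, 3, 3]

    def share(ratings, accepted):
        # Percentage of respondents whose rating falls in the accepted set.
        hits = sum(1 for r in ratings if r in accepted)
        return 100.0 * hits / len(ratings)

    # 'Top-three-box' scoring counts 3, 4 and 5 as satisfied ...
    top_three_box = share(ratings, {3, 4, 5})
    # ... while isolating the top score counts only the 5s.
    top_box = share(ratings, {5})

    print(f"Top-three-box 'satisfied': {top_three_box:.0f}%")  # 90% on this sample
    print(f"Top-box 'satisfied': {top_box:.0f}%")              # 15% on this sample

Both figures describe the same respondents; only the cut-off changes, which is why the headline number alone says so little.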

Not only do companies ask their customers easy questions, they ask them questions that are irrelevant, says Ruth Corbin, president of Toronto-based research consultancy Decision Resources.

‘I remember we once did a survey for a financial institution, and asked people whether they liked the new uniforms of the tellers,’ Corbin says.

No correlation

‘And the uniforms got high ratings, but had absolutely no correlation whatsoever with people’s satisfaction,’ she says.

‘So, to delude yourself that consumers are happy because they like the uniforms of the staff would be a mistake, and would involve looking at things from your own point of view, rather than the consumer’s point of view.’

Asked how companies can ensure they are asking the right questions, Corbin says there are three ways to do that.

The first is to use a focus group to ask consumers what they believe are the relevant issues.

Although Corbin stresses that focus groups are not a statistically representative way of gathering data, she says scientifically trained researchers will use them as a rough guide when deciding what to include in a quantitative survey.

A second way to ensure relevancy is to look at the entire scenario of the consumer’s experience rather than just portions of it.

‘If I find out that the airline screwed up by losing your luggage, I might conclude that you are dissatisfied with the airline,’ Corbin says.

Different conclusion

‘But, if I had taken you through the whole process, from the time you checked in until two weeks later, I might discover that the airline had done so well at making good on your complaint that you were, in fact, a more loyal customer than you were before,’ she says.

And the third way is to include checks in the questionnaire.

If, for example, a company were to conduct a survey that included questions to measure aided and unaided brand awareness, it might include in its list of products an imaginary brand, as a way of measuring how many people will say ‘Yes’ to anything.

Sue Ince, vice-president of Criterion Research in Toronto, says she once included the brand manager’s last name in a list of wines.

‘He did somewhat better than some of the client’s brands,’ Ince says.
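
For readers who want the mechanics, here is a minimal sketch in Python of how such a check might be read; the brand names (including the phantom ‘Zorvane’), the awareness figures, and the simple subtraction are all assumptions made for illustration, not part of any survey described here.

    # A rough sketch of the 'imaginary brand' check: an aided-awareness list
    # that includes a phantom brand to estimate how many respondents will say
    # yes to anything. Brand names and figures are invented for illustration.
    aided_awareness = {
        "Brand A": 0.62,   # share of respondents claiming to recognize each name
        "Brand B": 0.48,
        "Brand C": 0.35,
        "Zorvane": 0.11,   # the phantom brand; it does not exist
    }

    yea_saying_rate = aided_awareness["Zorvane"]

    for brand, claimed in aided_awareness.items():
        if brand == "Zorvane":
            continue
        # One crude adjustment: treat the phantom-brand score as background noise.
        adjusted = max(0.0, claimed - yea_saying_rate)
        print(f"{brand}: claimed {claimed:.0%}, adjusted to roughly {adjusted:.0%}")

Treating the phantom-brand score as a floor is only one crude way to use the check; its real value is simply flagging how much of the ‘awareness’ is noise.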

Just as companies must ensure they are asking the right questions if they are to better understand their consumers, they must stop taking literally what the consumer is saying, Reid says.

‘The fact is, the majority of consumers do not fully understand what it is that motivates them in their choice of retail establishments or their choice of products or services within those establishments,’ he says.

‘Price’

If, for example, someone were to ask the principal shopper in a household what most motivates their choice of a retail store, as many as 60% would respond ‘price,’ he says.

‘There is a consumer, who, in classic terms, is saying, “This is what I want.” Yet, we know, from some of the tests that we have done, that many of those same consumers don’t have any idea what the price of anything in their basket really is.

‘What they are really saying is, “I want to think of myself as a smart consumer. I don’t want to be ripped off.”’

Reid says the essence of understanding the consumer is closer to good psychiatry than it is to good journalism.

‘Responsibility of listening’

‘The journalist is faced with the responsibility of listening and accurately recording what was said, and maybe trying to interpret that in a broader context if he or she is given some editorial licence, but what is said is very important,’ he says.

‘In market research, in terms of trying to divine the consumer’s mind, one has to listen very carefully to questions that are well put, but the biggest challenge is providing an interpretive context within which those answers can be understood.’

As a result, Reid says, market researchers are starting to move away from research models which assume that consumers can easily articulate their wants and desires.

Unfortunately, he says, that trend has come too late for many brands.

‘Over the years, there has been some very bad research done in which people have said all the consumer is interested in is price, and we have had people move away from supporting brands, all because we have been reading the data wrong, and, in some cases, asking the wrong questions,’ Reid says.

‘Psychiatrist’s couch’

‘What we are dealing with here is the need to put the consumer on the psychiatrist’s couch as much as simply listening to what the consumer articulates,’ he says.

‘We have got to get at deeper needs, values and identities, of which the consumer may not be aware.’

Poole agrees that companies that base their decision-making only on what people say will be sadly misled.

The trick, he says, is to correlate what people say they will do, with their actual behavior – in a systematic, scientific way.

Compare attitudes

If he were trying to determine support for a particular charity, for example, Poole would conduct an experiment comparing the attitudes, perceptions and reported intent of known contributors with those of non-contributors, to find out how reliable his wording of the ‘intent to contribute’ question really is.
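
A minimal sketch of what that comparison might look like, assuming the survey can be matched against a list of known contributors, follows; the respondent records and the 1-to-5 intent scale are invented for illustration.

    # A rough sketch of the validation Poole describes: comparing answers to an
    # 'intent to contribute' question across people whose actual behaviour is
    # already known. All respondent records here are invented for illustration.
    respondents = [
        {"known_contributor": True,  "stated_intent": 5},
        {"known_contributor": True,  "stated_intent": 4},
        {"known_contributor": True,  "stated_intent": 5},
        {"known_contributor": False, "stated_intent": 4},
        {"known_contributor": False, "stated_intent": 2},
        {"known_contributor": False, "stated_intent": 3},
    ]

    def mean_intent(records, contributor):
        # Average stated intent (1-5 scale) within one behaviour group.
        scores = [r["stated_intent"] for r in records
                  if r["known_contributor"] == contributor]
        return sum(scores) / len(scores)

    gap = mean_intent(respondents, True) - mean_intent(respondents, False)
    print(f"Known contributors average: {mean_intent(respondents, True):.1f}")
    print(f"Non-contributors average:   {mean_intent(respondents, False):.1f}")
    # A small gap would suggest the wording does not separate the two groups.
    print(f"Gap: {gap:.1f}")

If the question is worded well, people who actually give should score visibly higher than people who do not; a negligible gap is a warning that the item measures politeness rather than intent.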

The discrepancy between people’s reported attitude and their actual behavior – called the social desirability bias – has been well-documented in the marketing research literature, Corbin says, and comes from the respondent’s desire not to look ignorant, or politically incorrect, or cheap in front of other people.

And while it is entrenched, there are a couple of ways researchers can arrive at answers closer to the truth.

They can either use their experience to adjust the results, or, better still, phrase the questions in a way that makes it acceptable for people to tell the truth.

Change wording

Corbin says that rather than asking people whether they voted in the last election, she might change the wording of the question to read, ‘Did you happen to vote in the last election?’ or ‘Many people were not able to get to the polls in the last election. Were you able to get there?’

In the same way, if she were measuring people’s risk tolerance for a new investment product, she would approach the subject in an indirect manner.

‘Rather than saying, “Do you have a high tolerance for risk?” – that’s a high-pressure question and no one likes to admit they are cowardly – you might have to ask them in a couple of different ways,’ Corbin says.

Less threatening questions

‘We would ask them questions like “I like to have a little danger in my life,” or “A dollar isn’t much money. I wouldn’t worry if I lost it,”’ she says.

‘So you test the dimensions of risk. If you were to ask them simply, “Do you like to take risks?” you might get a one-dimensional answer that wouldn’t tell you anything about more complicated attitudes.’
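
One way to picture ‘testing the dimensions of risk’ is as a small multi-item scale, sketched below in Python. It is illustrative only: the first two items echo Corbin’s examples, but the third statement, the agreement values and the reverse-keying are assumptions added for the example, not part of any instrument she describes.

    # A rough sketch of the indirect, multi-item approach: agreement with
    # several statements (1 = strongly disagree, 5 = strongly agree) is
    # combined into one risk-tolerance score instead of asking about risk
    # outright. Items, responses and keying are invented, not a validated scale.
    responses = {
        "I like to have a little danger in my life": 4,
        "A dollar isn't much money; I wouldn't worry if I lost it": 2,
        "I would rather have a small sure gain than a chance at a big one": 5,
    }
    reverse_keyed = {
        # High agreement with this item signals low risk tolerance.
        "I would rather have a small sure gain than a chance at a big one",
    }

    def risk_score(responses, reverse_keyed):
        total = 0
        for statement, agreement in responses.items():
            total += (6 - agreement) if statement in reverse_keyed else agreement
        return total / len(responses)

    print(f"Risk-tolerance score (1-5): {risk_score(responses, reverse_keyed):.1f}")

Averaging several indirect items, some of them reverse-keyed, gives a more rounded reading than a single blunt question about liking risk.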

Experts say that while it is clear the research process is considerably more complicated than asking questions and recording answers, marketers who want to better understand their consumers are pretty much on their own when it comes to choosing a researcher who can do the job.

‘What you get is a function of your own best judgment,’ says Poole, who advises clients to look for a researcher who can bring something to the discussion, and not just sit there taking notes.

More responsibility

For their part, clients, too, are starting to take more responsibility for the research they commission.

David Ringler, vice-president of marketing at Winona, Ont.-based Andres Wines, likens the process to that of advertising.

‘If you are uncomfortable with what you are getting from your creative team, it’s your fault as much as theirs,’ Ringler says.

‘Can’t blame others’

‘Because you are guiding and directing, and you’ve got to be into it,’ he says. ‘And if you don’t like it, when all is said and done, you can’t blame it on someone else.’

Graham Freeman, president and chief executive officer at Ault Foods in Etobicoke, Ont., says he encourages researchers to delve beneath the behavioral level to get to the heart of what is driving consumer motivations.

And if, in the process, they come across something negative, he wants them to tell him the bad news as soon as possible.

‘A researcher that is worth his money is continually being rewarded for telling the client what it’s all about,’ Freeman says. ‘Any client that thwarts that is asking for a much bigger pain later on.’

More attention to customers

Reid says he finds it encouraging that CEOs, many of whom spent their time in the 1980s orchestrating mergers and acquisitions, are now paying more attention to their customers.

Consequently, he says, the role of the external researcher has evolved from one of order-taker to that of a professional service on the same level as senior legal corporate counsel.

‘At one time, researchers were order-takers,’ Reid says.

Like waiters

‘They were similar to waiters in a restaurant,’ he says. ‘They would hand out a menu, and the customer was expected to sit back and say they wanted this, that and the other.’

‘Today, we have somewhat more sophisticated restaurants, where the market researcher is duty-bound to at least impress upon his or her client those things the researcher does well and does not do well.

‘And, also, to take a more activist role. To say, “Maybe you should go for something that is heart-healthy, instead of consuming all that grease, Mr. Customer.”’