
PR Measurement and Evaluation Needs More Than a One-Trick Survey Monkey


This is the second of a series of articles by Jim Macnamara on the limitations of quantitative metrics in public relations measurement and evaluation. Read the first article: “Why Metrics Don’t Add Up To Evaluation For Public Relations.”

It is understandable that PR and corporate communication practitioners look for simple solutions to measurement and evaluation. A 2013 International Communication Consultancy Organization (ICCO) survey of practitioners reported that the most significant barrier to doing measurement is that it is “too complex”. Having done measurement and evaluation as a practitioner, as a research supplier, and as an academic advisor, I fully recognize the need for M&E to be doable and affordable.

However, at the opposite extreme of highly complex research methodologies are two traps that have ensnared the PR industry and prevented valid, robust, and insightful measurement:

  1. Looking for a single ‘silver bullet’ – a one-size-fits-all approach; and
  2. Overuse and misuse of popular and easy research methods, such as online surveys, that do not really measure what they purport to measure (in research terms, a problem of validity).

I won’t say much about the futile search for a ‘silver bullet’ here, as this has been extensively discussed, and most researchers now recognize that a universal ‘PR Index’ or a single ‘PR ROI’ formula is up there with Bigfoot, the Unicorn, and Indonesia’s Garuda bird as a myth.

But there is another problem endemic in the PR research field that results in simplistic data and sometimes invalid and misleading findings: the predominance of quantitative surveys as a measurement instrument.

Studies by social researchers have shown that quantitative methodology dominates the research landscape generally (e.g., Newman & Benz, 1998, Qualitative-Quantitative Research Methodology: Exploring the Interactive Continuum, Southern Illinois University Press) and, in particular, research in PR and corporate and marketing communication (e.g., Daymon & Holloway, 2005, Qualitative Research Methods in Public Relations and Marketing Communications, Routledge).

So why aren’t surveys the best way to go? While there is no disputing that surveys have a role to play in research and that, with well-selected and sufficiently large samples, they can produce generalizable findings (that is, statistically reliable findings across a population category), they also have some major limitations as follows.

  1. Surveys are generally self-administered – that is, respondents fill out the questionnaires with no validation that they are telling the truth and no supporting evidence. It is well established that people exaggerate when rating themselves on positive attributes, whether in relation to their skills, knowledge, proficiency, professionalism, creativity, innovation, or ethical standards (what researchers term social desirability bias). Thus, surveys are prone to what is called ‘response generation’ – that is, they allow and even facilitate particular responses (what in TV-land is called ‘playing to the camera’).
  2. Samples need to be very carefully selected, but convenience samples are often used. For instance, many industry surveys focus on members of professional PR organizations. But in many countries, PR organizations represent a minority of practitioners, so such surveys miss a large part of the industry and are therefore not representative. Also, while purposive samples can be valid in some cases (e.g., when targeting a particular group), they are often flawed by the lack of a clear, consistent sampling frame (the selection criteria).
  3. A survey questionnaire by its very nature provides a fixed set of questions with a limited choice of responses. It is largely a ‘tick the box’ exercise. As such, surveys capture approximate answers (the nearest match) and do not discover the richness, complexity, nuance, contradiction, or ambivalence that is common in human attitudes and thinking. Furthermore, while they tell us what people claim to do and think, they tell us little about why (e.g., underlying reasons, motives, fears, feelings, influences, and so on).
  4. A fourth limitation, which applies to surveys and all quantitative research methods, is that data analysis focuses on means (i.e., averages), and sometimes medians and modes, which marginalize or exclude ‘outliers’ (effectively any responses outside the standard ‘Bell Curve’ distribution). While calculation of statistical significance (p values) and standard deviation (SD) can support the reliability of what is reported, what is left out of quantitative findings is a major limitation. In short, statistically-based quantitative research methods tell us about averages, but they tell us nothing about the diversity or range in a field of study, and they provide little of the granular information needed to gain deep insights – a point the brief sketch after this list illustrates.
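To make that last point concrete, here is a minimal illustrative sketch in Python. The 1-to-10 ratings are invented for demonstration, not drawn from any real survey: two samples with identical means and medians can describe very different audiences.

    import statistics

    # Hypothetical 1-10 ratings from two survey samples (invented for illustration).
    # Sample A clusters tightly around the middle; Sample B is sharply polarized.
    sample_a = [5, 5, 6, 6, 5, 6, 5, 6, 5, 6]
    sample_b = [1, 10, 1, 10, 2, 9, 1, 10, 2, 9]

    for name, data in [("A (clustered)", sample_a), ("B (polarized)", sample_b)]:
        print(
            f"Sample {name}: mean={statistics.mean(data):.1f}, "
            f"median={statistics.median(data):.1f}, "
            f"stdev={statistics.stdev(data):.1f}"
        )

    # Both samples average about 5.5. Only the standard deviation hints at the
    # polarization in Sample B, and none of these figures explain why
    # respondents feel as they do.

Both samples report a mean of 5.5, yet one describes a lukewarm audience and the other a deeply divided one – exactly the kind of diversity that headline averages conceal.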

It is precisely these limitations that qualitative research methods set out to address – although, to be fair and balanced, qualitative research also has limitations. Qualitative research methods such as focus groups; in-depth interviews; detailed textual, narrative, conversation, and discourse analysis; and ethnography allow a researcher to probe, ask clarifying questions, challenge, seek examples, analyze ‘natural’ situations such as conversations, and observe actual behaviour. Qualitative research does not produce statistically reliable findings that can be generalized across a population category – a factor that causes many to see ‘qual’ as a weaker method and to favour quantitative methods such as surveys. But qualitative research discovers deep insights within specific groups and contexts.

The best way to demonstrate the need to use qualitative research as a preferred measurement method in many cases, or in conjunction with quantitative research in what is referred to as a ‘mixed method’ approach, is through two simple examples.

Example #1: What do you eat for breakfast?

A major manufacturer of breakfast cereals wanted to find out how and what people consumed for breakfast. One year, its advertising agency conducted a survey of a sample of consumers. The survey found that most people claimed they ate breakfast in their kitchen and regularly consumed fruit, juice, and cereal. Good news, the cereal manufacturer thought. And the advertising agency claimed success for its campaign.

However, a social researcher who was subsequently engaged suggested an entirely different approach. With the permission of a sample of consumers, she installed cameras in their kitchens and dining rooms for a period of six weeks to conduct direct observation (ethnography). In the first few weeks, the consumers, who knew the research was about eating habits, were mindful of the cameras and dutifully made some effort at eating a healthy breakfast – i.e., they played to the cameras. Then, in the later weeks, the cameras recorded the natural breakfast-eating habits of the consumers, who by this time had forgotten about the cameras. This turned out to be, in most cases, opening the refrigerator door, glugging some juice or milk from the container, and sometimes grabbing a processed breakfast bar from the pantry while rushing out the door to work. A very different story emerged from in-depth qualitative research compared with the self-administered survey.

Example #2: Do organizations listen?

The second example is a current research project of mine. I am researching whether, to what extent, and how organizations listen – the important corollary of distributing information and messages (i.e., speaking). Unless organizations listen, claims of two-way communication, dialogue, and engagement ring hollow, yet listening is little addressed in most PR and corporate communication textbooks and journals. If I used a survey, there is no doubt that heads of communication and PR would say they use two-way communication and dialogue, including listening to their publics/stakeholders. In fact, even in the face-to-face interviews that I am conducting, that’s usually the opening response. Some communicators even make grand claims of listening.

However, in-depth interviews allow me to go deeper. I ask participants to show me their strategic communication plan and look at the activities listed. Where do formative research, public consultation, feedback, social media monitoring and response, and other forms of listening appear? Do they appear at all? I also ask to see some evaluation reports, because measurement usually focuses on what is considered important. Do they measure and evaluate the effectiveness of information coming into the organization from stakeholders, or only the effectiveness of sending information out?

Then I go even further to triangulate multiple data sets. I interview more than one person in most organizations. I even look at job descriptions: does listening in any form appear in those (e.g., doing formative research, understanding audiences, public consultation, etc.)? I also ask participants for examples demonstrating a case where they have listened to stakeholders and taken on board at least some of their feedback or requests. Finally, I am running a field experiment in which research assistants and a group of my students go onto the websites and social media of all organizations studied to submit ‘real life’ inquiries, questions, or comments, and we monitor the organizations’ responses.
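As an aside on the field-experiment strand: once each inquiry and any reply is logged, summarizing organizational responsiveness is simple arithmetic. The Python sketch below is purely illustrative – the organization names, dates, and the idea of a hand-built log are my assumptions, not details of the actual study instruments.

    from datetime import date

    # Hypothetical inquiry log: (organization, date submitted, date of reply or None).
    # All entries are invented for illustration.
    inquiries = [
        ("Org A", date(2015, 3, 2), date(2015, 3, 4)),
        ("Org A", date(2015, 3, 9), None),
        ("Org B", date(2015, 3, 2), date(2015, 3, 16)),
        ("Org C", date(2015, 3, 3), None),
        ("Org C", date(2015, 3, 10), None),
    ]

    # Keep only answered inquiries, with the delay in days for each reply.
    answered = [(org, (replied - sent).days) for org, sent, replied in inquiries if replied]
    response_rate = len(answered) / len(inquiries)
    delays = [days for _, days in answered]

    print(f"Inquiries sent: {len(inquiries)}")
    print(f"Response rate: {response_rate:.0%}")
    if delays:
        print(f"Reply delay in days: min={min(delays)}, max={max(delays)}")

Even this toy tally makes the point: a low response rate or long delays are hard evidence about listening that no self-administered survey of communicators would surface.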

That’s in-depth qualitative research. It’s not soft, or easy. In fact, it is much harder and slower than quantitative surveys, which are often done because they are easy and relatively quick. But which approach do you think will get the most accurate view of organizational listening?

Of course, there is not always time to conduct such multi-faceted triangulated research. But the measurement and evaluation field needs to look beyond self-administered surveys and other ‘thin and wide’ quantitative studies to discover reality vs. rhetoric and gain deep insights vs. ‘tick a box’ responses.

###

 

Jim Macnamara

Jim Macnamara PhD, FAMI, CPM, FAMEC, FPRIA is Professor of Public Communication at the University of Technology Sydney, a position he took up in 2007 after a 30-year career working in journalism, PR and media research which culminated in selling the CARMA Asia Pacific franchise that he founded to iSentia (formerly Media Monitors) in 2006. He is the author of 15 books, including his latest, Organizational Listening: The Missing Essential in Public Communication (Peter Lang, 2015), as well as Public Relations Theories, Practices, Critiques (Pearson, 2012); The 21st Century Media (R)evolution: Emergent Communication Practices (Peter Lang, New York, 2010, 2014); and Journalism and PR: Unpacking ‘Spin’, Stereotypes and Media Myths (Peter Lang, New York, 2014).


Comments

  1. Joe Smith

    Of course there is bad research out there, but that is a lame reason to attack scientifically valid research methods. It is not the method that is at issue, it is the practice. Effective qualitative research is exponentially more difficult to conduct well. It is not the tool that is at fault, it is how the tool is used and misused.


