In this second part of “The Meaning of Measurement” we learn from our panel of experts that (a) “measurement” is actually a very good term for what our communications measurement industry does, and (b) there are specific ways it needs to improve.
Last month, in “The Meaning of Measurement, Part 1,” we discussed two meanings of the term “measurement” that apply to the communications measurement industry:
- The narrow sense: To gather comparative data on something.
- The broad sense: The process of improving a communications program by setting objectives, gathering data, analyzing it, and then refining the program.
We heard from The Dictionary of Public Relations Measurement and our panel of measurement experts—Tina McCorkindale, President and CEO of the Institute for PR; David Geddes, Geddes Analytics LLC; and Jim Macnamara, University of Technology, Sydney—all of whom concurred with the narrow definition.
The interesting thing about the narrow definition is that most people in the communications measurement industry don’t use it. When most people talk about “measurement” they mean it in the broad sense: the whole process of improving communications through collecting and comparing data. As in “The X Steps of Measurement,” or “the IPR Measurement Commission,” or “the Measurement Summit.”
Does “the measurement industry” really just measure?
Here at The Measurement Standard our goal is to inspire the industry to aim high. I have argued before that “measurement” is too narrow a term for what communications measurement does—or aspires to do—because it implies that we simply collect data. I’ve suggested that the communications measurement industry should rebrand itself under a term other than “measurement,” because by not taking credit for the full scope of what we really do (measurement in the broad sense, above), we shortchange our profession and ourselves. Do we want to just measure something, or do we want to improve communications?
Turns out, according to our panel of experts, I’m wrong—or at least I’m giving the measurement industry a bit more credit than it is due.
Jim Macnamara notes that most measurement companies do just that—measure—and not much more:
“I think this is symbolic and telling about the state of the industry. It is about metrics and measures; it does not really do evaluation. In other words, it can produce lots of measures, but it can’t in most cases show the value of PR or communication through analysis and interpretation, and it can’t do inferential and predictive analysis. There is a difference between analytics and analysis, and between metrics and evaluation.”
David Geddes is even more pointed in his criticism:
“Unfortunately, public relations research, measurement, and evaluation remains rooted in the measurement of PR outputs. At the highest level, top-performing organizations have highly talented PR research, measurement, and evaluations teams… But, despite aspirations to the contrary, a large portion of what occurs in the business is measuring what is easy to measure, rather than measuring what is important.”
Tina McCorkindale agrees:
“Some of the companies who work in this space are really measurement companies and that’s it. They specifically measure the volume of social and traditional, or changes in the tone month-over-month. Just because you measure media, doesn’t mean you are doing anything outside the measurement process that adds value.
“I’m not trying to be harsh here. As an industry, we throw around this term ‘insights,’ but what does that mean? It doesn’t mean a four-percentage-point increase in coverage.”
OK then, so what should the measurement industry aspire to?
David Geddes argues that a mere name change or rebranding will not resolve our problems:
“Changing the name of our function will not solve the underlying problems. The profession needs to adopt a more sophisticated framework of research, measurement, and evaluation; take the challenging but essential step of setting measurable objectives; and measure at multiple levels within a theory-based framework of how communications actually works.
“Unfortunately, organizations often fail to take the first step: setting measurable objectives. PR practitioners and PR measurement specialists chase after the illusion of measuring ROI or linking PR to sales, when they should be setting appropriate objectives and measuring against those objectives.”
Tina McCorkindale reminds us that measurement, especially media measurement, is but one small slice of a much bigger pie:
“We, as an industry, need to take a step back and take a holistic approach to research, measurement, and evaluation. What questions are we trying to answer? What outcomes are we trying to achieve? What information do we need to make useful and strategic decisions? What are the gaps in our knowledge base? Understanding the internal and external environment is critical. Measurement is just one aspect of it. The questions should drive the method and not the other way around.
“In focusing on the big picture, we should spend less time on the minutiae. Of course, I wholeheartedly disagree with AVEs and think we should educate people on why they are bad, but this issue is just one grain of sand on a beach. Social and traditional media measurement, in this particular case, is only one element in an ocean of communication, relationship building, and engagement. Integrating effectively and ensuring the right information is available to make decisions can be challenging, but it’s necessary.
“We should turn more to behavioral sciences, focusing on areas such as understanding influence and the science behind it and what really matters, or how to improve integration or employee alignment (and measure it). So much more data is available to us than ever before; how do we make sense of it?”
And finally, back to Jim Macnamara:
“The so-called ‘measurement’ industry and the users of ‘measurement’ need to get beyond playing with basic metrics such as reach, impressions, likes, and views, and start doing evaluation. Evaluation is about showing the value that is generated by activities. When practitioners are counting inputs, activities, and outputs, they are only showing that they are a cost centre. It is only when they can produce evidence of outcomes and impact of their work that they can show they are a value-adding centre.
“Management doesn’t care too much for cost centres. But they are prepared to invest in value-adding centres.”
Big, big thanks to our experts for their insights.
“What Is the Meaning of Measurement, Part 2,” by Bill Paarlberg, July 21, 2017.