Advancing communications measurement and evaluation

Evaluation and Insights Are What “The Measurement Industry” Actually Do – Or Should Do


Professor Jim Macnamara provides insight on what the measurement industry actually does, and why “evaluation and insight” is a better description of it than “measurement.” See this article for more on rebranding measurement.

I don’t understand the PR industry’s adoption of the term ‘measurement’ as the overarching description of a process that actually involves measurement, assessment, evaluation, and the application of insights to planning (e.g., for program improvement). The term ‘measurement’ is much too narrow for what the industry is really talking about – (1) identifying the impact and value of PR/communication and (2) gaining insights to inform future planning and program improvement.

There are many definitions, but in simple terms:

Measurement is mostly defined as the taking of measures and the collection of data, including metrics. I say “including” metrics because ‘metrics’ is also a widely misunderstood term, used generically to denote measurement and even evaluation. Metrics are quantitative data. What’s more, metrics are raw data and are meaningless without evaluation to determine what they show or establish (see evaluation). It is also important to recognize that qualitative data inform many evaluations (e.g., the stated feelings and perceptions of audiences collected through interviews).

Evaluation involves the determination of the effectiveness/likely effectiveness and value of something within a context (e.g., to an organisation, or stakeholders, or society) against one or more objectives. But another problem in the PR field is that evaluation is largely seen as a post-hoc activity. In best practice, evaluation is conducted before, during, and after activities and programs – referred to as formative, process, and summative evaluation. For example, formative evaluation can include pre-testing of creative concepts to determine their likely effectiveness and research to identify existing awareness, attitudes, etc. Summative evaluation reports on the outcomes and impact of activities.

Because measurement and evaluation are two quite different things, some use the two terms together as ‘measurement and evaluation’ (M&E). This works, but is a mouthful.

Research is often thrown into the mix to create even longer descriptions such as ‘measurement, research, and evaluation’ and ‘PR research and evaluation’. Research is the application of valid and reliable methodology and methods to collect and analyse data to inform evaluation and produce insights.

What is the answer?

  1. It is important for practitioners to recognize the difference between measurement and evaluation, and to recognize that valid and reliable research methods should be applied to measurement rather than subjective assessments or ‘black box’ techniques.
  2. However, effective evaluation is the end game – i.e., showing the effectiveness and value of activities and gaining insights to inform future activities.
  3. Therefore, it is preferable to focus on evaluation. Most practitioners involved in evaluation in other fields – such as public administration, organisational development, health communication, and education – use the term ‘evaluation’ to include the application of research methods to undertake measurement (usually quantitative and qualitative) as the basis of evaluation.

The above conclusion is supported by the fact that ‘program evaluation’ is a major field of study across many industries based on program theory, program evaluation theory, and theory of change. PR evaluation models that structure communication as inputs, outputs, outcomes, etc. are based on program logic models borrowed from the field of program evaluation – although the PR industry has ignored most knowledge about program evaluation in fields such as public administration and bastardized program logic models to a large extent.

###

Jim Macnamara


Jim Macnamara PhD, FAMI, CPM, FAMEC, FPRIA is Professor of Public Communication at the University of Technology Sydney, a position he took up in 2007 after a 30-year career working in journalism, PR and media research which culminated in selling the CARMA Asia Pacific franchise that he founded to iSentia (formerly Media Monitors) in 2006. He is the author of 15 books, including his latest, Organizational Listening: The Missing Essential in Public Communication (Peter Lang, 2015), as well as Public Relations Theories, Practices, Critiques (Pearson, 2012); The 21st Century Media (R)evolution: Emergent Communication Practices (Peter Lang, New York, 2010, 2014); and Journalism and PR: Unpacking ‘Spin’, Stereotypes and Media Myths (Peter Lang, New York, 2014).
