Advancing communications measurement and evaluation

Finding a Standard Model for Barcelona Principles 1, 2 & 3: A New Task Force

"standards" spelled in many-colored blocks

The first three Barcelona Principles are:

#1: Importance of Goal Setting and Measurement
#2: Measuring the Effect on Outcomes is Preferred to Measuring Outputs
#3: The Effect on Business Results Can and Should Be Measured Where Possible

Taken together, these three principles suggest a series of steps or stages: goals could be set – and measured against – at an Output stage, an Outcome stage and a Business Results stage. I have taken the liberty of placing the three stages in this particular order because, when one examines the various goal-setting and measurement models for public relations/communications campaigns, this is indeed a common order.
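To make the stage-by-stage idea concrete, here is a minimal illustrative sketch in Python. The stage names come from the principles above, but the goals and metrics are hypothetical placeholders, not prescribed by any of the models discussed here.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One stage at which a goal is set and later measured against."""
    name: str       # e.g. "Output", "Outcome", "Business Results"
    goal: str       # the objective set at this stage
    metrics: list   # what is measured against that goal

# A hypothetical three-stage campaign plan reflecting Principles 1-3:
campaign = [
    Stage("Output", "Place campaign messages in target media",
          ["items placed", "impressions"]),
    Stage("Outcome", "Shift awareness and attitudes in the target audience",
          ["pre/post awareness", "attitude change"]),
    Stage("Business Results", "Contribute to organizational objectives",
          ["qualified leads", "customer retention"]),
]

for stage in campaign:
    print(f"{stage.name}: goal = {stage.goal!r}; measured by {stage.metrics}")
```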

But, this three-stage model is not at all common. Most models have four or five or even more steps or stages at which communication goal setting and measurement can occur.

A plethora of multi-stage models exist, including models developed by Cutlip, Center & Broom (PII Model), Macnamara (Pyramid & MEIA models), Watson & Noble (Unified Evaluation Model), Lindenmann (PR Effectiveness Yardstick), AMEC (Valid Metrics), the Public Relations Institute of Australia, Skelley & Ziviani (PR Evaluation Model), Lewis (Advertising AIDA), Koski Research (Purchase Journey), Grunig & Hunt (Domino Model), Broom & Dozier, Ketchum Global Research & Analytics, the German DPRG/ICV, Stacks & Michaelson (Communication Lifecycle), Stacks (Return on Expectations Model), Geddes, Marklein, Likely (Likely Performance Measurement Model), Bartholomew, Wippersberg (Goal Categories) and Booz Allen Hamilton (Return on Engagement).

Macnamara MEIA model
Jim Macnamara’s MEIA Model (above) is one of the many existing models of public relations measurement and evaluation that will inform the new Task Force’s work.

As yet, these models have not been compared to each other in any systematic way, nor have they been tested and peer reviewed. The result is that the profession has a variety of competing models – and no standard for a multi-stage goal setting and measurement process for a communication campaign.

Even more disturbing is that the terminology employed in these models to describe each stage is applied differently from model to model. Terms such as output, effect, outtake, outcome, impact, outgrowth, result and outlook may be used – or not used – or, when used, may mean something different from model to model.
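As a purely hypothetical illustration of that mapping problem (the model names and term assignments below are invented placeholders, not taken from the actual models listed above), one way to see the confusion is to line up each model's vocabulary against a common set of stages:

```python
# Invented stage-term assignments for three placeholder models.
# The point: the same word ("outcome") can sit at different stages.
MODEL_TERMS = {
    "Model A": {"exposure": "output", "reception": "outtake", "effect": "outcome"},
    "Model B": {"exposure": "output", "reception": "outcome", "effect": "impact"},
    "Model C": {"exposure": "result", "reception": "outgrowth", "effect": "outcome"},
}

def stage_for_term(model: str, term: str):
    """Return the common stage a model-specific term refers to, if any."""
    for stage, model_term in MODEL_TERMS.get(model, {}).items():
        if model_term == term:
            return stage
    return None

for model in MODEL_TERMS:
    print(f"{model}: 'outcome' refers to the {stage_for_term(model, 'outcome')} stage")
```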

There is no consensus. In fact, there’s confusion. This confusion is manifest in submissions to award programs, where neither planning/goal setting nor evaluation/measurement models seem to have influenced the conduct of the campaign.

Recent research by Maureen Schriner (University of Wisconsin-Eau Claire), Rebecca Swenson (University of Minnesota) and Nathan Gilkerson (Marquette University) highlighted the need for a standard model with standard terminology. Their paper, presented at the 2015 Miami IPRRC, "Outputs or Outcomes? An Assessment of Evaluation Measurement from Award-Winning Public Relations Campaigns," analyzed PRSA Silver Anvil winners from 2010 to 2014. Ultimately, the paper will be included in the Proceedings for 2015: http://iprrc.org/paperinfo_proceedings.php

Recently, under the auspices of the Coalition for Public Relations Research Standards (founding members: the International Association for the Measurement and Evaluation of Communication (AMEC), the Institute for Public Relations (IPR), the Public Relations Society of America (PRSA), the Council of Public Relations Firms (CPRF) and the Global Alliance for Public Relations and Communication Management (GA)), a new task force was created: the Task Force on Standardization of Communication Planning/Objective Setting and Evaluation/Measurement Models.

The purpose of the Task Force is "To bring together relevant groups and individuals to research, discuss, find consensus on, peer review, and approve a single model – a standard – for communication planning/objective setting and evaluation/measurement."

Members of the Task Force are:

  • Dr. David Geddes (USA);
  • Dr. Jim Macnamara (Australia);
  • Tim Marklein (USA);
  • Mark Weiner (USA);
  • Dr. Mark-Steffen Buchele (Germany);
  • Dr. Rebecca Swenson (USA);
  • Dr. Nathan Gilkerson (USA);
  • Forrest Anderson (USA);
  • Michael Ziviani (Australia);
  • Alexander Buhmann (Switzerland);
  • Fraser Likely (Task Force Operational Manager) (Canada); and
  • Sarab Kochhar, the IPR’s Director of Research, who will also serve as the liaison with the IPR.

The IPR will provide financial support for the Task Force’s research.

The Task Force will follow the International Organization for Standardization’s (ISO) six-step process:

  1. Proposal stage;
  2. Development stage with proposed interim standards;
  3. Customer approval by the Coalition customer panel;
  4. Publication of interim standards for academic and practitioner peer review;
  5. Validation through testing; and
  6. Review and revision to create final standards.

Stay tuned for your chance to review a draft standard for a multi-stage communication goal setting and measurement model and for the terminology employed in that model.

###

Fraser Likely

Fraser Likely is President and Managing Partner of Likely Communication Strategies, a Fellow of AMEC, an Emeritus Member of the Institute for Public Relations Measurement Commission, and a Fellow of CPRS.


3 Comments

  1. Daniel Johnson

    Hi Fraser, do you not consider AMEC’s Integrated Evaluation Framework to be the standard model?

  2. Fraser Likely

    Hi Daniel,

    Richard Bagnall, CEO of PRIME Research UK, has done a wonderful job of leading the team that created the new six-stage evaluation/measurement model. The important thing about this model is that it is built on a theoretical framework. In building it, Richard’s team was ably supported by the AMEC Academic Advisory Group, led by Dr. Jim Macnamara, with such noted measurement scholars as Don Stacks, Tina McCorkindale, Tom Watson, Anne Gregory and Ansgar Zerfass.

    AMEC’s Integrated Evaluation Framework is both an initial theoretical framework and a model, with the former underpinning the latter. Having both elements is fundamental to the process of standardization. The fingerprints of the advisory group are all over the framework aspect.

    Is it a standard? The marketplace, ultimately, will determine that – and the only marketplace of import is the PR/Communication department within an organization. If a critical mass of CCOs adopt the framework and the model, then yes, it can become a standard. But, there is much water to flow under the bridge first.

    Many questions must be decided in that marketplace.

    For example, is a six-stage model appropriate for all situations? Is a six-stage model appropriate for a single communication product (a news release, tweet, Facebook post, etc.), for a communication campaign (various products and channels with a single objective), and for a communication program (a number of campaigns over time, with multiple objectives but directed at the same stakeholder)? What about four- or five-stage models: do they have utility? Where and when?

    For example, is the terminology used for each stage the language that the marketplace will adopt? Is what we’ve taken from the management literature – inputs/throughputs and outputs – important in all situations? Is what we have created – such as the concept and term outtakes – important in all situations? Is what we’ve borrowed from the program evaluation literature – outcomes at all stages or levels – important in all situations? Is what we’ve taken from the financial/management literature – outflows, impacts, ROI, outgrowths – important in all situations? What are the correct terms? What terms will the C-suite recognize and find valuable?

    For example, what model – be it one with three, four, five, six or more stages – is better for marketing communication, internal communication, stakeholder relationships, issues management, etc.? Does the use of digital, social or traditional media influence the choice of terms and stages? Or is the model affected by the choice of PESO channels?

    My original piece above, on which you’ve commented Daniel, was written in June 2015. The new AMEC framework and model came out in June 2016. Besides AMEC’s work, the Task Force on the Standardization of Communication Planning/Objective Setting and Evaluation/Measurement Models has also progressed down the road to standardization. The Task Force has conducted a major literature review, examining the development of models both historically and comparatively – but also with an eye on the antecedents we’ve borrowed from other disciplines. Without an appreciation of where models, or model stages and terms, came from and how they’ve evolved, we risk – as a profession – continually reinventing the wheel. Many of the models that exist now are really simple redos of existing models, with a new coat of paint in the form of a different stage or term. And, most importantly, most of the models are just that: models drawn on paper, lacking any theoretical underpinning or empirical testing.

    The Task Force has entered its second stage. We’ll produce two papers on the evolution of measurement models. We’ll also conduct two or three qualitative research projects examining framework and model uptake in agencies/measurement suppliers and PR/C departments, from the vantage points of CEOs, CCOs and research/measurement heads. Once we have this research in hand, the Task Force will examine all models closely and compare them to what the research has told us. We’ll develop a theoretical framework and suggested model(s). Then we will peer review, test with PR/C departments, and have the Coalition for Public Relations Research Standards – on which both AMEC and the IPR Measurement Commission sit – examine the reports and approve the framework and model(s).

    It would be great for Richard’s team and the Task Force members (now including Sophia Volk on the list above) to continue to benefit from each other’s work going forward. By the way, Jim Macnamara is a member of the Task Force and a member of the AMEC Academic Advisory Group, and that cross-influence has already been felt.

    So, have we arrived at a standard? Not yet. But a lot of work has been and is being done.

    Comment again in June 2017! Then we’ll see how close we are!

    Fraser

    1. Daniel Johnson

      Thanks for providing such a comprehensive response Fraser. Good luck with your efforts!

