
Don W. Stacks Interview: IPRRC, the Replicability Crisis, and Why He Does Measurement

This month the Measurement Life Interview is very pleased to welcome Don Winslow Stacks. Don is a professor of public relations and corporate communication in the Department of Strategic Communication at the University of Miami. He is a former chair of the Institute for Public Relations Measurement Commission, an IPR Research Fellow, and the author of over 200 books, articles, chapters, and professional papers. In 2014, Don received the Dorothy Bowles Award for Outstanding Public Service, and also the PRIDE Contribution to Public Relations Education Award from the National Communication Association’s Public Relations Division. The Measurement Standard wrote about Don previously, way back in 2008 when covering the Measurement Summit: “The Loneliness of the Measurement Mathematician.”

Don Stacks at the beach with his stand up paddleboard.

— The Measurement Standard: Welcome to the Measurement Life Interview, Don. To start, tell us a little about this photo of you at the beach. That’s a stand up paddleboard, right? Are you an avid SUP-er? Where were you?

— Don W. Stacks:  Correct. Just started last summer. This is at our condo on South Hutchinson Island, Florida.

— TMS: And what’s on your iPod, turntable, or Pandora channel right now?

— DWS:  It would have to be iPod, and it is full of 60s music.

— TMS: So, how did you become interested in measurement and evaluation?

DWS: It was back in the days when most of our campaigns had a goal, and that was all. The basic “proof” was the clipbook, and more was argued to be better, even if it did not result in a call-back for more information. I was concerned about how poorly the profession was evaluating—actually, not evaluating—its successes and failures. As a social scientist and someone who did social measurement and statistical analyses, I was dumbfounded at the lack of precise measurable objectives. We failed to answer the basic questions:

  • Who actually sees this?
  • What do they recall?
  • How accurate was that recall?

So I began to put together models for the programmatic use of research to better understand why campaigns did or did not work. Part of that is found in the Campaign Excellence Model that David Michaelson, Donald K. Wright, and I published several years ago.

“Students should come out of school with an understanding of what we do: Establish expectations that are measurable and can be correlated with business outcomes to provide support for return on communication investment.”

— TMS: What course of study did you follow?

DWS: I was the typical college student, but more closely aligned to a “jock.” I went to college to wrestle and run cross-country. Started in pre-med and quickly found it wasn’t for me. I then moved into English/Communication with a focus on creative writing. My graduate work started in the U.S. Army, while stationed in Washington, DC, where I was introduced to communication research and statistical analysis. I followed that up with an MA from Auburn University and a Ph.D. from the University of Florida in Communication Studies, with minors in social psychology (focus on attitude change and measurement of attitude) and statistics.

— TMS: What would you recommend for today’s students?

DWS: I think that much of what I was taught is now done at the undergraduate level in the top schools (U. of Miami, Boston U., Syracuse U., etc.). Students should come out of school with an understanding of what we do: Establish expectations that are measurable and can be correlated with business outcomes to provide support for return on communication investment.

“Why do people behave as they do and what are the norms we should test against? That is where I am coming from.”

— TMS: What’s so special about measurement and evaluation? Why are you doing it instead of something else?

DWS:  I’m curious and I want to know the whys behind the what. I also have coursework at the MA level in humanistic psychology, which focuses on the individual and how he/she makes decisions. Why do people behave as they do and what are the norms we should test against? That is where I’m coming from.

— TMS: Let’s talk about the International Public Relations Research Conference. The IPRRC has been your baby from its beginning back in the 20th century. The conference is known for its roundtable format of rapid presentation of papers. How many papers would you guess have been presented over the years?

Roundtable discussion at the 2016 IPRRC. Thanks to the IPR for the image.

DWS: IPRRC, now going into its 20th year, adopted the current format 17 years ago. We took a chance that people would prefer to listen to an executive summary and then discuss it informally around the tables. It was an instant hit, and we quickly found that we received more high-quality papers than we could schedule.

In an average year we have over 100 papers presented, for a total of probably around 1,700 in all. Many of these have found their way into our Proceedings, which average about 800 pages a year, and many have been published in academic and professional journals.

Don Stacks at the 2016 IPRRC.

I’m most proud that we have kept our registration fee about the same as 20 years ago, yet have raised over $10,000 for professional paper awards, and have treated the participants (all attending are participants, given our roundtable format) like professionals with breakfasts, lunches, and two open bar socials.

Many other groups have tried to duplicate the roundtables, but haven’t quite come up to our standards.  If anyone is attending PRSA, we have an annual program that presents some of the top papers.

— TMS: The so-called Replication Crisis broke over psychological research last summer, when a team of researchers tried to replicate recently published findings and was unable to do so in 40% of cases. How do you think this crisis affects public relations research? What lessons, if any, can our profession learn from it?

DWS: First, if you were at the casino, would a 60% winning average keep you there? Clearly, however, there are problems.

Back in 1978 when I was studying under Dr. Marvin Shaw—one of the first social psychologists in modern psychometrics—we attempted to start the “Journal of Replication.” As you might guess, it didn’t gain traction.

Personally, I think it is a crisis in a teapot.

Let’s start by looking at how we do research. First, an approach leads to a typically deductive theory. That theory suggests the methodology for answering research questions and hypotheses. The methodology then sets the analytics, which in turn sets the evaluation of the initial RQs or Hs. Further, since it is theory-based, our research can be reviewed. Since the methodologies are scientific, we can examine them for methodological and analytical problems.

There is a mistaken assumption that results hold over time, which is simply not true. Each population or sample differs from the next (or preceding). This means that there will always be error, much like a car’s speedometer, which can be off by 1-5% at any given time.
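
To put a rough number on that speedometer analogy: for a simple random sample, the 95% margin of error shrinks as the sample grows but never reaches zero. Here is a minimal sketch (the sample sizes and the 50/50 split are illustrative assumptions, not figures from the interview):

```python
# Minimal sketch: the 95% margin of error for a simple random sample shrinks
# with sample size but never disappears. Sample sizes and the worst-case 50%
# proportion below are illustrative assumptions.
from math import sqrt

p = 0.5  # worst-case observed proportion
for n in (100, 400, 1000, 2000):
    moe = 1.96 * sqrt(p * (1 - p) / n)  # normal-approximation margin of error
    print(f"n = {n:4d}: +/-{moe:.1%}")
# prints roughly +/-9.8%, +/-4.9%, +/-3.1%, +/-2.2%
```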

Further, social science is a one-stop program that builds upon previous research, so we don’t actually replicate but extend findings. Some researchers, such as Jim Grunig and associates, have built studies on previous studies; others simply move to something of interest. Unlike medical research, which takes years of replication and often uses a .80 probability and then refines and refines with different populations and conditions, we build on ours without that replication.

Interestingly, the 95% confidence interval (probability of error less than 5%; p < .05) is an agreed-upon standard across almost all quantitative research. It doesn’t have to be, and with our current statistical packages, you can present the probability of error out to 0.0001%.
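
As a concrete illustration of that last point, here is a minimal sketch of how a standard statistical package reports the exact probability of error rather than only the .05 convention (the two sets of scores are invented for illustration):

```python
# Minimal sketch: report the exact probability of error instead of only
# "p < .05". The two score lists below are invented for illustration.
from scipy import stats

group_a = [72, 68, 75, 70, 74, 69, 73, 71]  # e.g., message recall scores, group A
group_b = [65, 70, 66, 68, 64, 67, 69, 66]  # e.g., message recall scores, group B

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # independent-samples t test
print(f"t = {t_stat:.2f}, exact p = {p_value:.4f}")  # exact p, not just "p < .05"
```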

Finally, only about 15-20% of the total studies submitted for publication are accepted. And remember that more than 50% of studies never get written up at all. Sometimes I think it would be interesting to see how many publications come out of how many studies. Not a lot, I bet. Now, add to this the proprietary nature of non-academic research and you have a black hole that we really don’t know about.

— TMS: When a client or your boss asks you to do measurement or evaluation in a way that you know to be misguided, how do you handle it?

Don receives the 2014 PRIDE Contribution to Public Relations Education Award.

DWS: Simple: I don’t do it.

— TMS: Suppose you have to address a tough audience about a tricky project. What A-game presentation techniques will you bring to the meeting?

DWS: I start with a common problem that everyone faces. Then I demonstrate the various ways that we can approach it. Then I move toward establishing the empirical indicators that provide norms, and then to the qualitative information we can get to tie to the quantitative, thus triangulating the approaches/methods toward a common goal and objectives. I use that to address who should be targeted, what type of messages need to be created, and how to gather data to support strategy or serve as a feedback loop for adjustment.

— TMS: What are your favorite measurement tools or projects?

DWS: I always start with the BASIC model developed by David Michaelson. Once I’ve established where I am in the communication lifecycle, I prefer to gather quantitative data—survey or experimental—and, if needed, to run focus groups to better understand violations of the normative data.

— TMS: Tell us a story of when you used measurement or evaluation to significantly improve a client’s program. Yes, when you were the hero; go ahead and brag.

DWS: I was once asked to evaluate a major energy company’s internal communications. So we ran surveys and multiple focus groups, and it became apparent that what the company thought was happening with their communication strategy was not. We found multiple, competing communication programs, an inconsistent message strategy, and that the home company was not listening to its subsidiaries. We helped this company by providing insights and developing a holistic communication strategy to ensure that messages were getting out, being received, and had demonstrable impact on the company’s triple bottom line.

— TMS: Where are measurement and evaluation going? What do you see in your crystal ball?

DWS: We’ve made great strides in the last 20 years. The mission of the IPR Measurement Commission was to (1) bring awareness to the sorry state of M&E, and (2) add to the knowledge base. Just last month, we declared VICTORY! Where do I see M&E going? We will be addressing those factors that mediate the expectation outcome, which will be expensive but will help demonstrate the holistic phenomena of communication.

— TMS: If you could invent one magical measurement or evaluation tool to accomplish anything, what would it be?

DWS: I’d like to be at the stage where we are able to simulate outcomes based on non-financial and financial data. We have the latter, but the former are still causing many problems. I’d also like us to agree that we are responsible for the non-financial outcomes of credibility, confidence, relationship, reputation, and trust—all strategized through a return on expectations prism that can then be correlated with other business functions. Sort of like what Cong Li and I recently published on social media ROI and financial outcomes.

— TMS: Thanks for the interview, Don. All the best.
###


Bill Paarlberg co-founded The Measurement Standard in 2002 and was its editor until July 2017. He also edits The Measurement Advisor newsletter. He is editor of the award-winning "Measuring the Networked Nonprofit" by Beth Kanter and Katie Paine, and editor of two other books on measurement by Katie Paine, "Measure What Matters" and "Measuring Public Relationships." Visit Bill Paarlberg's page on LinkedIn.


4 Comments

1. Don Stacks

   Variance accounted for (what you know). Subtract it from 1.00 (what you hope to know) and you get what you don’t know.

2. Bruce Berger

   Thanks, Don. I really enjoyed the interview. And like many others, I greatly appreciate the Miami conferences. Now, back to the water!

3. Angela Jeffrey

   Don, wonderful interview and good to see you claim “victory” for the work the Measurement Commission has done toward furthering measurement & evaluation. While there is much to do, it’s good to see a positive word in this area.

