Advancing communications measurement and evaluation

Jim Grunig: Why He Does Research, His Toughest Audiences, and His Favorite Rock and Roll Music


This month the Measurement Life Interview is very pleased to welcome the genial elder statesman of public relations, Jim Grunig. Jim’s Two-Way Symmetrical Model of Public Relations and his Excellence Theory of PR together form the foundation of modern PR and measurement. An excellent review of his contribution to PR and measurement can be found in Fraser Likely and Tom Watson’s book chapter “Measuring the Edifice.” You can also watch a large number of videos of Jim on YouTube.

Jim is a retired professor of communication at the University of Maryland, where he taught public relations from 1969 to 2005. He holds a Ph.D. in Mass Communication from the University of Wisconsin. He is the coauthor of five books and editor of a sixth. Jim has written 250 other publications, including book chapters, journal articles, reports, and papers. He has won six major awards in public relations as well as the most prestigious lifetime award of the Association for Education in Journalism and Mass Communication (AEJMC), the Paul J. Deutschmann Award for Excellence in Research. He was the founding coeditor of the Journal of Public Relations Research. He has been awarded honorary doctorates by the Universidad San Martin de Porres in Peru, the University of Bucharest in Romania, Istanbul University in Turkey, and the University of Quebec at Montreal in Canada.

— The Measurement Standard: Welcome to the Measurement Life Interview, Jim. First, let’s learn a little about you: What’s on your iPod, turntable, or Pandora channel right now?

Jim Grunig: I am most likely to listen to music on Sirius XM in my car or occasionally on Amazon Music on my iPhone. On Sirius XM, I almost always listen to the 50s channel, although I occasionally move over to the 60s channel. I was in high school in the 1950s, and I don’t think there has ever been rock and roll music as good as it was then.

I’m more likely to listen to sports talk radio, however, and read sports blogs online. I have been a huge sports fan since playing basketball and baseball in high school, and I have had season tickets for Maryland Terrapins basketball and football continuously since 1969. Two highlights of my life were scoring 42 points in a high school basketball game and the Terps’ national championship in basketball in 2002.

I was in high school in the 1950s, and I don’t think there has ever been rock and roll music as good as it was then.

— TMS: 42 points!!!! I read that one summer in college you wrote 100 press releases about chickens. How did you become interested in measurement and evaluation?

JG: Except for part-time and summer jobs while I was an undergraduate and graduate student and numerous consulting jobs over the years, I have been an academic researcher and teacher my entire career. So, in that context, I have always been interested in measurement and evaluation. Academic research could not exist without measurement and evaluation.

More accurately, though, I have always been interested in research—not just measurement and evaluation. They are part of research, but there is more to research than measurement and evaluation. Conceptualization, which essentially is a fancy word for thinking, is perhaps more important in the research process.

Why do I do research instead of something else? The answer is simple: I could not do public relations without it.

Research begins when we identify a problem, think about solutions to that problem, and hypothesize which solutions will be most likely to solve the problem. After implementing a solution, we then must measure the variables in that solution to evaluate its effects. So, without conceptualization, measurement and evaluation are often meaningless. It doesn’t help much to measure and evaluate a dumb idea.

Although research has always been the focus of my career, I first became interested in doing applied research on public relations programs in the 1970s, when Jim Tirone of AT&T called to ask if I would be interested in working with him to develop an evaluation program for Bell System companies throughout the United States. These companies were repeatedly asked by rate commissions why they should spend money on public relations programs.

I worked with Jim and his staff for about five years to develop objectives, measures, and research protocols for five programs that every Bell System company had: employee relations, media relations, community relations, educational relations, and long-distance advertising. I did most of the work on the community relations program and advised Jim as he worked with others on the other programs. Jim was a brilliant man, and his emphasis on the need for practicality in research has influenced my work in public relations research ever since.

— TMS: What course of study did you follow? What would you recommend for today’s students?

JG: I studied agricultural journalism as an undergraduate at Iowa State University, with a minor in agricultural economics. At the time there was no specialization in public relations at Iowa State, although most agricultural journalism graduates went into public relations with agricultural businesses, government agencies, or research institutions.

For example, I worked part time with the agricultural information service at Iowa State during my last three years as an undergraduate and then had summer jobs with the U.S. Department of Agriculture in Washington, DC, and the International Harvester Company in Chicago. I then completed an M.A. in agricultural economics at the University of Wisconsin and finished with a Ph.D. in mass communication there.

Our research has shown that the extent to which public relations departments or firms use research as an ongoing part of their day-to-day practice is the single best indicator of excellent public relations.

The Ph.D. degree was interdisciplinary, and my minor area again was economics. I also took a course in public relations from Scott Cutlip and a number of behavioral and social sciences courses in psychology, sociology, and political science. Throughout both my undergraduate and graduate programs, I also studied research methods, statistics, and econometrics. All of this came to fruition in my doctoral dissertation, for which I spent two years studying and evaluating the effects of communication programs on agricultural development in Colombia under a USAID grant.

In retrospect, this combination of public relations and communication theory, economics, social sciences, research methods, and statistics provided an excellent background for practicing and teaching public relations and especially for understanding its value and for researching and evaluating communication programs.

I believe it is critical for future practitioners to study public relations and communication theory and research—if not in a university, they should study them on their own or in continuing education programs. I think the days are over when anyone, with any background, can enter public relations and learn on the job. Too much conventional wisdom, most of it ineffective, gets passed on in this way.

A degree in public relations provides a professional with a set of theoretical tools to think about, conceptualize, and evaluate what he or she is doing. Public relations scholars have developed an excellent body of knowledge about public relations, yet too few practitioners are familiar with, or have even heard of, that knowledge. Most degrees in public relations today also include thorough training in research and statistics, which I believe are essential for today’s practitioners.

— TMS: What’s so special about measurement and evaluation? Why are you doing it instead of something else?

JG: Our research has shown that the extent to which public relations departments or firms use research as an ongoing part of their day-to-day practice is the single best indicator of excellent public relations. We found this in the IABC Excellence project as well as from an analysis that my colleague Jeong-Nam Kim of Purdue University has done of the Excellence data and of data from the GAP studies done by the University of Southern California Center for Public Relations. Similar results can be found in the ongoing study of public relations in Europe—the European Communication Monitor. Public relations departments that report regularly using research in their work are more likely to report that their programs meet objectives, and CEOs are more likely to report that they value public relations and that the chief communication officer is part of their strategic management team.

Effective public relations departments use formative research to identify their most strategic publics, identify problems that publics expect organizations to solve or that organizations create for publics, and listen to publics before making decisions. They then use this research to counsel management on strategic decisions and to identify scenarios that might lead to issues and crises before they occur. Excellent public relations departments then use this formative research to plan communication programs such as employee relations, member relations, media relations, customer relations, investor relations, community relations, and public affairs and government relations.

That is where evaluation comes in: The most effective departments set objectives for their communication programs and then measure those objectives to evaluate the programs. What should be clear, however, is that measurement and evaluation alone are not enough. Often, public relations people evaluate programs that are not planned for specific publics and without understanding the objectives they are supposed to achieve.

Why do I do research instead of something else? The answer is simple: I could not do public relations without it.

— TMS: When a client or your boss asks you to do measurement or evaluation in a way that you know to be misguided, how do you handle it?

JG: I’m probably not a good person to answer this question, because I have never had to make a living only doing research for clients. I have always received a salary from the University of Maryland for teaching and research. I have worked for outside clients only on an occasional basis as clients came to me for help, or as part of projects for graduate or undergraduate seminars for which I was not paid.

As a result, my answer is fairly straightforward: I don’t take on a client who wants to do measurement and evaluation in a misguided way. In most cases, clients have come to me because they know the approach I take, so I haven’t had a problem.

In a few cases, however, I have worked with clients who worked for organizations that wanted something less than I wanted to offer. Most typically, they just wanted to prove that their programs were effective and didn’t want to learn how to conduct those programs more effectively or to learn from the research. In most of these cases, I have added concepts and approaches to what the client wanted, such as identifying publics for their programs or suggesting objectives that their programs should achieve. These objectives almost always have been outcome objectives: changes in awareness, cognitions, attitudes, and behavior. I will measure whether members of publics are exposed to messages, but I would never settle for something like AVEs or media monitoring alone. Thus, I could usually tell clients what they wanted to know, namely whether their programs were effective, but I also went much further with the research to show them who their publics really are, to show the quality of their relationships with those publics, and to suggest objectives to evaluate that go beyond message placement in the media or elsewhere.

— TMS: Suppose you have to address a tough audience about a tricky project. What A-game presentation techniques will you bring to the meeting?

JG: I have done hundreds of presentations over the years to public relations professionals and academic audiences in over 50 countries. Most of these groups have been what I call active publics or active audiences. In most of these presentations, I addressed problems they recognized, so they usually helped with the communication process. That is, they actively listened, asked questions, and tried to understand the theories and methods I was explaining.

My role then is to be well organized and logical to make it easier for them to understand what I am saying. I have found PowerPoint to be useful in this process, unless the slides contain too much information or are too complicated. If the slides are too complicated, the audience spends most of the time trying to read them rather than listening to me or interacting with me. My model for this process was the late Pat Jackson, who never used more than newsprint displays to write down key points. He refused to use PowerPoint, saying “why does the audience need to see my notes?” Pat could look people in the eye, engage them, and explain himself clearly.

My toughest audiences probably have been undergraduate students in introductory courses.

My toughest audiences probably have been undergraduate students in introductory courses. They usually do not have the experiences necessary to understand the relevance of what I am teaching. The most common way to approach this audience is to entertain them in one way or another. I have never been good at that, however.

Instead, I try to provide a context for what I am teaching, i.e., examples of organizational problems and activities, so that I can then explain how the theories I am teaching relate to problems they haven’t experienced yet. John Dewey said many years ago that all learning is problem solving, so the key is to get people to experience or at least think about problems that concepts are designed to solve. This also is a method that works well with audiences who have no experience with research, measurement, or evaluation. You have to explain to them how the research can help to solve problems they are experiencing.

— TMS: What are your favorite measurement tools or projects?

JG: This all depends on the research problem I am dealing with. Public relations programs can be evaluated at several levels:

The program level. The most basic level is the program level, or even the message level. At this level, my preferred short-term objectives are outcomes: changes in awareness, cognition, attitudes, and behaviors. These can be measured both quantitatively and qualitatively.

Our research essentially shows that reputation is a byproduct of relationships, so that if communication can be used to develop and cultivate relationships a good reputation usually will follow.

Programs also have long-term effects on organization-public relationships. Relationships are the key long-term measure of the value of public relations. Along with many students and colleagues, I have developed quantitative and qualitative measures of relationships. A guidebook for measuring relationships, developed by Linda Hon and me, is available on the Institute for Public Relations website. We can also measure reputation as a long-term outcome of public relations, but I think it is less important than relationships. Our research essentially shows that reputation is a byproduct of relationships, so that if communication can be used to develop and cultivate relationships a good reputation usually will follow.

The departmental level. Public relations also can be measured at the departmental level. That is, the organization and activities of a public relations department can be compared to a benchmark of what the most effective departments do. Along with Pat Jackson and others, I did an annual review of the communication department at Brookhaven National Laboratory for several years that fit this type of evaluation. My favorite measurement tool at this level is the profile of an excellent public relations program that we developed in the IABC Excellence project. It provides a theoretical benchmark that can be compared with the quantitative or qualitative profiles of a department’s staffing and activities.

The organizational level. Thirdly, public relations can be evaluated at the organizational level by measuring the value that the communication function creates for the organization. In the Excellence project, we did this by asking the CEO to assign a cost-benefit ratio to the investment that the organization makes in the public relations function. My former colleague Bill Ehling called this the method of compensating variation—how much would someone be willing to pay to keep something of value or pay to get something he or she does not have. It worked very well in the Excellence project, and it also correlates well with measures of the quality of relationships. If we ask a CEO, for example, how much a good relationship with a particular public has been worth, or how much a bad relationship has cost, we can get a good estimate of the value of public relations to an organization.

The societal level. Finally, we can measure the value of public relations at the societal level. At this level, we get into questions of sustainability, social responsibility, and ethics. Many organizations, such as the United Nations and the Caux Roundtable, have developed methods to measure these concepts. Organizations with effective public relations departments usually engage in more ethical, sustainable, and responsible behaviors, so we can evaluate the quality of public relations by measuring the behavior of the organization on one or more of these scales.

— TMS: Tell us a story of when you used measurement or evaluation to significantly improve a client’s program. Yes, when you were the hero. Go ahead and brag.

JG: For many years, my graduate seminar in Public Relations Publics worked with a client organization to identify their publics and to evaluate their current publications and communication activities. I also did similar research in my undergraduate senior seminar in public relations. In each case, we provided a profile of publics and communication activities that the organization could use to plan future programs. I think all of them significantly improved the clients’ communication programs.

The one I remember best, perhaps, was the National Bureau of Standards near Washington, DC. I also worked for several years with the Maryland State Department of Education to study an ongoing communication program it called People on the Grow. The research was used to develop the program as well as to evaluate it.

Finally, the yearly evaluation project at the Brookhaven National Laboratory, which I mentioned previously, also significantly contributed to what I think was an outstanding public relations program. And, lastly, I think the evaluation program I helped develop at AT&T in the 1970s helped to strengthen what was then already an outstanding set of communication programs.

— TMS: Where are measurement and evaluation going? What great strides do you see in your crystal ball?

JG: Most of the recent attention in public relations practice has been to social media and other digital media. I don’t believe that digital media require any fundamental changes in the concepts we use to understand public relations, to identify publics, to formulate objectives, or to evaluate communication programs or long-term relationships. In fact, digital media make such concepts as dialogue, interactivity, symmetrical communication, and engagement much easier than old media did. These media do provide a new set of communication tools as well as a new source of data for measurement and evaluation and for formative research. However, we are only beginning to develop means for using the data available to us in digital media.

Unfortunately, most public relations practitioners approach the new media in the same way that they approached the old media. They think of message generation and media placement as the principal outcome of public relations without thinking of the publics that are most strategic for the organization, the problems these publics experience, and the nature of the relationship an organization should have with publics. Likewise, measurement experts follow this same path: They measure placements, hits, likes, and sometimes “engagement” (which usually is interpreted as someone actually reading or viewing a digital message). So, the massive message generation machine that many people call public relations seems to have expanded exponentially with digital media.

I think we instead should think of the digital media as a source of data more than as a means of dumping more and more irrelevant messages on nonpublics…

I think we instead should think of the digital media as a source of data more than as a means of dumping more and more irrelevant messages on nonpublics (i.e., people who don’t need or seek this information). More than anything, digital media have empowered publics to find the information they need wherever they can find it. Therefore, organizations need to identify the publics for whom they are relevant, engage these publics (i.e., establish a relationship with them), and establish a dialogue with these publics. Public relations professionals can use these dialogues as a source of data to identify problems and issues that management needs to deal with. They also can monitor the many websites, blogs, discussions, Facebook pages, and other sites for data that can be analyzed to identify problems and issues that an organization might face or might have created.

In this way, digital media provide an enormous source of data (“big data,” I suppose) that public relations people can use in a strategic management role. I also think it is possible to code and analyze what people say online and other data available digitally to measure objectives of communication programs, relationships, and reputation—therefore replacing a lot of the survey research that has become increasingly difficult to do.
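To make that idea concrete, here is a minimal sketch of how online posts might be coded against outcome categories like awareness, attitude, and behavior. The categories come from the interview; the keyword lists and sample posts are purely hypothetical illustrations, not a validated coding scheme (real content analysis would use trained coders or statistical text classification):

```python
from collections import Counter

# Hypothetical coding scheme: outcome categories named in the interview
# (awareness, attitudes, behavior) mapped to illustrative keyword lists.
CODES = {
    "awareness": ["heard about", "learned", "didn't know"],
    "attitude": ["love", "hate", "trust", "disappointed"],
    "behavior": ["bought", "signed up", "switched", "recommended"],
}

def code_post(text):
    """Return the set of outcome categories whose keywords appear in a post."""
    text = text.lower()
    return {code for code, words in CODES.items()
            if any(word in text for word in words)}

def tally(posts):
    """Count how many posts touch each outcome category."""
    counts = Counter()
    for post in posts:
        counts.update(code_post(post))
    return counts

# Invented sample posts, standing in for monitored social media data.
posts = [
    "Just heard about their community program, didn't know it existed.",
    "I trust this company more after how they handled the outage.",
    "Signed up for the newsletter and recommended it to a friend.",
]
print(tally(posts))  # Counter({'awareness': 1, 'attitude': 1, 'behavior': 1})
```

Even this toy version shows the shift Grunig describes: instead of counting placements or likes, the unit of analysis is whether a post reflects an outcome the program was supposed to achieve.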

— TMS: If you could invent one magical measurement or evaluation tool to accomplish anything, what would it be?

JG: I would empower all public relations practitioners, and researchers as well, with the ability to think conceptually about what they are doing, why they are doing it, what they hope to accomplish, the likelihood that they can actually accomplish it, how they can measure their objectives, and about what to do next to improve each program after they have completed and evaluated it.

— TMS: Thanks for the interview, Jim. All the best.

###

Image credits: PR Communications Box, marketingmreza, Shel Israel, Bill Paarlberg, La Nacion, hotnews at campusnews.ro, brasileconomico, Arthur W. Page Center.

Bill Paarlberg
Bill Paarlberg co-founded The Measurement Standard in 2002 and was its editor until July 2017. He also edits The Measurement Advisor newsletter. He is editor of the award-winning "Measuring the Networked Nonprofit" by Beth Kanter and Katie Paine, and editor of two other books on measurement by Katie Paine, "Measure What Matters" and "Measuring Public Relationships." Visit Bill Paarlberg's page on LinkedIn.
