Measuring Stakeholder Relationships: A Case Study

December 5, 2008


We work in an increasingly ‘direct-to-stakeholder’ world. Relationships with stakeholders, influencers, and key opinion leaders are everything in this business. So if the industry is investing time and money in initiating, building, and maintaining relationships with stakeholders for mutual benefit, there had better be a way to first diagnose the situation, respond to it, and then measure success.

In Theory

There is. A method exists to quantifiably benchmark and track the quality of stakeholder relationships (customers, interest groups, investors, employees, vendors, government officials, etc.) over time. It’s an index used to establish a diagnostic benchmark, build a campaign around addressing the gaps it reveals, and measure again for lift.

The index, which comes to us from the granddaddy of contemporary PR academia, Jim Grunig, Ph.D., of the University of Maryland, can be executed through a series of interviews, focus groups, and/or (ideally) a survey.

It measures success in familiar terms such as mutual awareness, accuracy, understanding, and agreement, along with the less familiar symbiotic behaviour.
Six elements of a relationship are tested using an agree-disagree scale:

  • control mutuality (the extent to which stakeholders feel they have control over the direction of the relationship, the organization, the strategy, or whatever’s at issue)
  • trust (integrity, dependability, competence)
  • satisfaction
  • commitment
  • exchange value
  • communal value (anybody remember their Marx readings?)

Each of the above categories generally has at least a dozen or so agree-disagree statements behind it.
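Nothing exotic is happening under the hood. As a minimal sketch of the roll-up, assuming a five-point agree-disagree scale and defining agreement as the share of ‘agree’ and ‘strongly agree’ answers (the category names follow the list above; the statements and response counts are invented for illustration):

```python
# Hypothetical per-statement response tallies for two of the six categories,
# on a 5-point scale (index 0 = strongly disagree ... index 4 = strongly agree).
responses = {
    "control mutuality": [
        [10, 15, 25, 30, 20],  # statement 1
        [12, 18, 30, 25, 15],  # statement 2
    ],
    "trust": [
        [20, 25, 25, 20, 10],
        [15, 20, 30, 25, 10],
    ],
}

def percent_agree(counts):
    """Share of respondents choosing 'agree' or 'strongly agree'."""
    return 100 * (counts[3] + counts[4]) / sum(counts)

for category, statements in responses.items():
    scores = [percent_agree(c) for c in statements]
    print(f"{category}: {sum(scores) / len(scores):.0f}% agreed on average")
```

The overall index is then just the average of those category scores, weighted equally or otherwise.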

So, does it work? Yes, it does. Let’s look at a recent example.

In Practice

A large public sector organization operating in a very complex environment, with a wide and diverse range of external stakeholders, wanted to understand what those stakeholders thought of it. The project set out to test the hypothesis that the organization had fairly strong (read: good quality) relationships with its stakeholders.

The result was much poorer than assumed: 34/100. The score means that, with all questions and all categories equally weighted (they aren’t always, or at least don’t have to be) and looking across all stakeholder groups, on average 34% of survey respondents either agreed or strongly agreed with the statements put to them. Or, looked at another way, one might infer that only 34% of respondents agreed that they have a ‘quality’ relationship with the organization. Examples of those statements, with the percentage who agreed with each, appear below; a quick worked calculation follows the list.

  • This organization wants to develop a partnership with clients – 29% agreed
  • This organization treats people like me fairly and justly – 44%
  • This organization is responsive to me – 39%
  • When this organization makes an important decision, it will be concerned about people like me – 24%
  • This organization can be relied on to keep its promises – 27%
  • This organization really listens to what people like me have to say – 29%
  • I feel that this organization is trying to maintain a long-term commitment to people like me – 32%
  • I can see that this organization wants to maintain a relationship with people like me – 37%
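
Since everything here is equally weighted, the overall index is simply the mean of the per-statement agreement percentages. As a quick sanity check using only the eight example statements quoted above (a subset of the full instrument, so it lands near, not exactly on, the reported 34/100):

```python
# Agreement percentages for the eight example statements above.
example_scores = [29, 44, 39, 24, 27, 29, 32, 37]

index = sum(example_scores) / len(example_scores)
print(f"Index over the example statements: {index:.0f}/100")  # prints 33/100
```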

More importantly, the results essentially mean that 66% of respondents didn’t feel they had a ‘quality’ relationship with the organization with which they deal. And broken down by type of stakeholder (say, investors versus suppliers versus government officials), the results in many cases are even more troubling. As a former employer used to say, “data can validate the intuitive.” In this case the data did quite the opposite: it disproved an assumption. But that’s not always a bad thing. The good news is that the organization that commissioned the study now has a diagnostic benchmark that helps it identify problem areas with laser focus, prioritize stakeholders, and set about improving those relationships.

A perfect example of research that is both pre-campaign formative (objective setting, prioritizing audiences, influencing strategy) and post-campaign evaluative.

Alan Chumley, Director of Communications Research, Leger Marketing, is an instructor of communications research in the PR programs at Ryerson and McMaster Universities, an associate member of the CPRS measurement committee, as well as an industry speaker, conference chair, and blogger: https://alanchumley.wordpress.com