Engaging with the evidence to raise the PR bar

About the author

Kevin is a co-founder of PR Academy and editor/co-author of Exploring Internal Communication, published by Routledge. He leads the CIPR Internal Communication Diploma course. PhD, MBA, BA Hons, PGCE, FCIPR, CMgr, MCMI.

In one of her recent Reith Lectures, the author Hilary Mantel made a strong point about engaging with the evidence to raise the bar. She was referring to the way history is told, but the point applies equally to public relations and communication management.

Sweeping generalisations based on misinterpretations of data or very small samples are as misleading as the flawed logic behind AVEs. Indeed, the discussion about the use of AVEs is itself based on limited data. One recent blog on AVEs claims that survey results show that, among respondents working in-house (or for other types of organisation), 45 per cent agree that PR industry bodies should ban AVEs versus 44 per cent who disagree. What the blog does not say is that the number of in-house respondents who took part in the survey was just 15. This is hardly representative of a large community of in-house practitioners and certainly does not justify the blog's dramatic conclusions about practice.

The real issue here is the absence of robust data on the use of AVEs. For example, a recent PR Week and PRCA survey reported that more than 35 per cent of UK PR agencies and just over 23 per cent of in-house teams still use AVEs. However, this is based on a sample of just 132 respondents, with no figure given for the number who work in-house. Again, because of the small sample size, the results cannot be treated as representative of the whole industry.
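To put those sample sizes in perspective, here is a rough rule-of-thumb calculation, a minimal sketch in Python that assumes the standard 95 per cent confidence level and the most conservative 50/50 split (the function name is mine, for illustration only):

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Approximate margin of error for a survey proportion.

    z = 1.96 corresponds to a 95 per cent confidence level, and
    p = 0.5 is the most conservative assumption (it gives the
    widest possible margin).
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(132):.1%}")  # ~8.5%: the PR Week/PRCA sample
print(f"{margin_of_error(15):.1%}")   # ~25.3%: the 15 in-house respondents
```

On those figures, the "35 per cent of agencies" finding could plausibly sit anywhere between roughly 27 and 44 per cent, and the 45 versus 44 per cent in-house split on AVEs is a one-point difference sitting inside a margin of roughly 25 points, so it tells us very little.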

Of course, all data is useful to some degree. All of the research highlighted above is interesting. However, the survey numbers are not large enough to warrant general statements without some serious caveats.

So how should we go about assessing the robustness of quantitative research?

There are two things we need to look at – size and who did it.

Size. For quantitative data (such as questionnaires), check how many respondents there were. In simple terms, the more there are, the more certain you can be that the reported data truly reflects the population. But how do you know what the right number is? This is something I get asked a lot when people are thinking about doing research, and it is actually fairly easy to work out.

Let’s say we want to find out how many UK PR practitioners agree that the PESO approach to PR management is a useful model for practice, using a survey. Here is how we might go about it in a few quick steps:

  1. Find out the size of the whole population. For our example let’s say that there are 83,000 PR practitioners in the UK.
  2. How confident do you want to be in the data? This is known as the "confidence level". Remember that the more confident you want to be, the more responses you need. Let's say we want to be 95 per cent confident in our data.
  3. What margin of error are you prepared to accept? (This is also known as the "confidence interval".) Often this is set at four percentage points. The smaller the margin of error, the more responses you need, and it can make a big difference. Let's say that our survey shows that 40 per cent of practitioners agree that PESO is a good model. With a margin of error of four points, the real number could be anything between 36 per cent and 44 per cent. For a survey such as ours, that difference may not matter, but if you are trying to win an election it absolutely could!
  4. The final step is to pop the total population size, confidence level and acceptable margin of error into an online sample size calculator, which will tell you how many responses you need. You can adjust the figures to see how the required number of respondents changes. In our example we would need 596 responses to hit 95 per cent confidence with a four-point margin of error (the sketch after this list shows the arithmetic).
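If you would rather see the arithmetic than trust a black box, here is a minimal Python sketch of the standard sample size formula that these calculators typically use (the normal approximation with a finite population correction; the function name is mine):

```python
import math

def required_sample_size(population, margin, z=1.96, p=0.5):
    """Responses needed to estimate a proportion.

    population: size of the whole group (e.g. 83,000 practitioners)
    margin:     acceptable margin of error as a fraction (0.04 = 4 points)
    z:          1.96 corresponds to a 95 per cent confidence level
    p:          0.5 is the most conservative (worst-case) assumption
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

print(required_sample_size(83_000, 0.04))  # -> 596, matching our example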

Who did it? Check the qualifications of the report's authors. Designing research is specialist work. Look on LinkedIn to see whether the authors have a research methodology qualification. If a team of PR practitioners (or a PR agency) is behind the research, does it include an academic or another expert with relevant knowledge and experience of research design?

A good example is the CIPR State of the Profession 2016 report, which is based on 1,500 respondents and is therefore more accurate than other industry reports with far fewer respondents.

If you spot any research about PR practice that you think is not as robust as it could be, I'd love to hear from you. And if you're interested in the broader discussion about research and measurement in PR, look out for the next "Mind the PR Gap" conference, which I am helping to organise for 2018.