Nine out of ten doctors agree

About the author

Kevin is a co-founder of PR Academy and editor/co-author of Exploring Internal Communication published by Routledge. Kevin leads the CIPR Internal Communication Diploma course. PhD, MBA, BA Hons, PGCE, FCIPR, CMgr, MCMI.

So, it must be right.

You might well translate ‘nine out of ten doctors’ as ‘90 percent of doctors’. However, that could be misleading. It all depends on the sample size.

Here are some points to look out for when assessing research reports.

Sampling – if a report is based on a survey but does not say how many people responded, it should be treated with caution – especially if it is worded in a way that suggests it speaks for a whole industry or profession.

Margins of error – if data is being compared with a previous year, then margins of error should be taken into account. For example, if a result is 66% satisfaction with a margin of error of 3 points, then you are looking at a range of 63–69%. Comparing 66% for 2020 with 63% for 2019 falls within that margin of error, so it may not be a ‘real’ change of 3 points. Margins of error are calculated from the sample size and the size of the population concerned: the larger the sample, the smaller the margin of error.
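As an illustration, the standard 95% margin of error for a survey proportion can be approximated in a few lines. This is a sketch of the common textbook formula, not a calculation taken from any particular report; the function name, the sample figures, and the optional finite population correction are all illustrative.

```python
import math

def margin_of_error(p, n, population=None, z=1.96):
    """Approximate margin of error for a survey proportion.

    p          -- observed proportion (e.g. 0.66 for 66% satisfaction)
    n          -- number of respondents
    population -- optional total size of the group surveyed
    z          -- z-score for the confidence level (1.96 = 95%)
    """
    # Standard error of a proportion: sqrt(p(1-p)/n)
    se = math.sqrt(p * (1 - p) / n)
    if population is not None and population > n:
        # Finite population correction: shrinks the error when the
        # sample is a large fraction of a small, known population.
        se *= math.sqrt((population - n) / (population - 1))
    return z * se

# 66% satisfaction from 400 respondents gives roughly a
# +/- 4.6 point margin at 95% confidence.
moe = margin_of_error(0.66, 400)
print(f"{moe * 100:.1f} percentage points")
```

Doubling the sample size does not halve the margin of error – because of the square root, you need four times the respondents to halve it, which is why credible reports state their sample sizes.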

Double-barrelled questions – some surveys include questions that are actually two questions in one, so they cannot be answered accurately. For example, ‘rate the check-in and check-out service at a hotel’ (with options such as ‘very good’, ‘good’, ‘neither good nor poor’, ‘poor’ or ‘very poor’). The issue here is that the check-in could have been good while the check-out was poor. Any data reported for a double-barrelled question should be treated very cautiously.

Mixed methods – reports based purely on a survey may not be as informative as reports that mix questionnaires with open comments, interviews, or focus groups. Results from surveys are good at telling you what the general situation is like – although it’s best to be wary of claims for representing wider groups. However, they don’t always explain why the situation is as it is, and that’s where qualitative research can be useful.

Open comments in a survey – should be thoroughly analysed. Sentiment analysis based on ‘natural language processing’ systems that produce ‘positive’, ‘neutral’ and ‘negative’ scores is often only useful for broad generalisations. Reports that simply use quotes without deep analysis of the data should also be treated with caution. More robust approaches include ‘coding’ and ‘clustering’ comments to generate themes – and it is the themes that are interesting and useful.

Interviews and focus groups – reports that discuss results from interviews or focus groups and imply that the analysis generalises to a wider group should be treated with caution, because the sample is unlikely to be large enough. The value of interviews and focus groups lies in the insights they offer into the deeper thoughts and feelings that may lie behind a given situation or topic.

By applying these quick tests, you can make more informed decisions about interpreting data presented in reports – and this is vital to avoid leaping into campaigns or actions that are based on flawed research designs.