Macnamara’s measurement masterclass

Why we should plan backwards

About the author

Richard Bailey Hon FCIPR is editor of PR Academy's PR Place Insights. He has taught and assessed undergraduate, postgraduate and professional students.

Distinguished Professor Dr Jim Macnamara promised ‘One-Third of [a] PhD in Evaluation in Less Than One Hour’ from his presentation to the AMEC Global Summit in Sofia, Bulgaria on Wednesday 22 May.

Here’s my five-minute summary of his talk. But first, let’s introduce AMEC, the global summit and Jim Macnamara for those who may be less familiar with them.

AMEC has emerged from its humble origins as a trade body representing media clippings agencies into an organisation with a global mission to educate the public relations profession on measurement and evaluation principles and practice. It is perhaps best known for its Barcelona Principles, first articulated in 2010 and subsequently revised twice, proclaiming among other things that ‘AVEs are not the value of communication.’

So, we have moved on from a focus on media clippings and spurious measures of media value to a more holistic view of measurement and evaluation. We have also moved on from parochial and national considerations to global issues driven by AI and copyright concerns. The AMEC Global Summit may be conducted in English, but that English is as likely to be spoken by citizens of India or Germany as of the US or UK.

Macnamara is an Australian professor of what he describes as public communication, and he has researched and published widely across public relations and journalism, the digital landscape and latterly organisational listening. But if he’s best known for one area of expertise over his 35-year career, it is evaluation (he gave his name to an early and widely cited evaluation model, Macnamara’s Macro Model, later renamed the Pyramid Model). Last year he presented his Program Logic Model (shown below), which summarises everything you need to know about evaluation in one visual.

And that’s another characteristic of the AMEC Summit. It’s part technology conference (the vendors are necessarily software companies), part public relations and corporate communication conference (with notable case studies from global organisations such as McDonald’s and Tata Steel), and part academic conference concerned with principles and practice – and the dissemination of knowledge.

Models and frameworks were at the heart of Macnamara’s presentation.

He based his talk on the Measurement, Evaluation and Learning (MEL) framework, in which evaluation is part of the planning cycle and is used for continuous improvement.

Evaluation is part of planning: they are not separate things. I can’t do planning without evaluation and I can’t do evaluation without planning.

Jim Macnamara

Distinguished Professor

University of Technology Sydney

Jim's LinkedIn profile

He also observed how theory of change was consistent with realistic approaches to evaluation as both were highly contextual.

‘In overview, developing a theory of change is a process of thinking carefully and thoroughly about what will cause a desired change (i.e., desired impact) and then designing what behaviour change specialists call the “missing middle between what a program or change initiative does (i.e., its activities and interventions) and how these lead to desired goals being achieved”.’

He noted how theory of change appears back to front: it starts with outcomes (the desired change). ‘Theory of change is the ultimate planning model. It forces us to identify what communication can deliver.’

Working backwards from this, what (communication) actions are needed to achieve these outcomes? ‘Pathways to change are the exact opposite of usual practice.’

Program logic models arise from theory of change. ‘As this backwards mapping process takes shape and is fleshed out, it informs a program logic model – the logic on how desired impacts will be achieved in a program or campaign.’ He cited the AMEC Integrated Evaluation Framework and the UK Government Communication Service evaluation framework as two examples of program logic models applied to evaluation.

Yet ‘the downside of program logic models is they’ve taught us to plan based on what we hope to achieve.’ He warned that the industry still predominantly measures outputs (noting that media sentiment is in the media, not necessarily in the minds of the public).

Evaluation has three stages: formative, process (monitoring) and summative. Formative evaluation gives us a baseline. Summative evaluation is about contribution and attribution (impact is unlikely ever to be achieved solely by communication).

He advised that since we can’t measure everything, we should focus on KPIs (KEY performance indicators, agreed in advance with management). SMART objectives are most useful in helping us reach KPIs (there should be no more than five of these). Select those that you have the means and resources to measure. Some of the KPIs can be output-related (eg media), but others must relate to outcomes and impact.

This leads to a taxonomy – or categorisation – of measurement (eg inputs to impacts). This helps us to understand what’s an activity, what’s an output, what’s an outcome and so on.

Macnamara provided delegates with a poster of an academic taxonomy of methods, metrics and indicators at each stage within the Measurement, Evaluation & Learning booklet that comes complete with planning templates.

‘The MEL model comprising measurement, evaluation, and learning facilitates a positive, forward-looking approach rather than a retrospective focus that mainly reports on what has been done and can’t be changed, or it can report ineffectiveness with no guidance for the future. Conversely, the three-part process of measurement, evaluation, and learning (MEL) supports forward planning and continuous improvement.’

He warned against false logic in evaluation (‘substitution error’); the two tests to check for this are ‘what practitioners do’ and ‘what audiences/publics do’.

‘Theory is just knowledge. There is no single tool or method; that’s why we work with frameworks.’

Dissected Program Logic Model for public communication

We read the program logic model from left to right following the arrows, from what we do to the outcomes we hope to achieve. Yet the biggest takeaway from Macnamara’s talk is that we should reverse this, and plan backwards from the desired impacts to outcomes, to outputs and so on.

If you do this, it’s inevitable that planning and evaluation will be integrated into one process and your communication efforts will become more strategic.

Based on Macnamara, J. (2024). Jim Macnamara’s MEL Manual for Public Communication