Briefing: AI in PR

About the author

Richard Bailey Hon FCIPR is editor of PR Academy's PR Place Insights. He teaches and assesses undergraduate, postgraduate and professional students.

Created in Bing Image Creator

Much has been written this year about artificial intelligence (AI) and its impact on public relations (PR). Various experts in the field are grappling with the implications for tasks, jobs, salaries and consultancy fees, and anticipating the many legal, societal and ethical issues arising from increased use of AI and automation.

This briefing has not been written for them. It’s been written for those who may not have had time to keep up with developments or to read the various reports and publications produced by the experts. It attempts to provide a broad state-of-play update: a quick primer on where we are, what we know, and what we don’t yet know.

What we know about public relations

Let me suggest three propositions that I believe to be true.

  1. Public relations has always been an adopter of technology – from the telephone and camera (still and moving), through recorded sound to fax machines to word processing, the internet, the smartphone and now artificial intelligence. The practice is media neutral, which means that we use whichever channels and technologies will help us share ideas and communicate with people.
  2. That said, there’s a body of opinion that maintains that the processes involved defy being turned into straightforward workflows subject to automation, and that the impact of public relations cannot be quantified as a simple number. This is the body of opinion that holds there’s something human and emotional about the practice that remains beyond the capabilities of a mere machine or software program.
  3. Public relations is about change. We use it to change attitudes and beliefs and to ultimately change behaviour. We are agents of change.

Though a simplification, these propositions demonstrate the paradoxical responses from within public relations to the advent of automation and artificial intelligence. While there are some cheerleaders for change, there are many more living in denial, even though we have integrated other new developments (such as social media) into our practice despite initial delays and misgivings.

The future may be unpredictable, but we should learn lessons from the past. There have always been scare stories – moral panic even – about the risks of each new technology, from the printing press onwards. (Reading for leisure, it was once feared, would create anti-social introverts filling their heads with unhealthy ideas, rather as we sometimes condemn teenagers for their screen time and reluctance to engage in face-to-face conversations.) Yet we have embraced literacy, we have welcomed cameras and cinemas, the internet and social media and smartphones. We have adopted technologies that have enabled us to communicate with individuals and groups to establish mutual understanding and to enhance relationships.

To date, technology has served human purposes. Will it be any different with AI?

What we know about artificial intelligence

Artificial intelligence has been a feature of computer science for more than half a century – and a subject for science fiction for even longer.

Alan Turing is now revered for his work as a Bletchley Park code breaker during the Second World War. In 1950 he proposed what we now call the Turing Test (he named it the Imitation Game) as a means of distinguishing between human conversations and a machine posing as a human. He was already anticipating a time when computers would be able to exhibit human-like intelligence.

Another highlight from the long history of AI came in 1997, when IBM’s Deep Blue machine beat world chess champion Garry Kasparov at this ancient and complex game.

It’s no wonder that AI has made great strides in the quarter of a century since then, thanks to the exponential increase in computing power and the large amounts of data available on the internet as a source for the large language models underlying AI chatbots.

Yet most public interest in the field is less than a year old and relates to generative AI – the ability of systems to generate text or images. ChatGPT is the most-discussed and most accessible AI chatbot, able to provide (mostly) credible answers to questions within seconds.

It’s not yet ready to replace humans for drafting technical documents such as news releases and crisis responses, but it is a very efficient tool for producing first drafts at speed.

So let’s explain the key terms used in relation to AI.

Generative AI: This describes tools and applications used to generate something (typically text and images). This is the area that has received the most attention since the release of ChatGPT in late November 2022. 

Large language models: Chatbots are able to create credible text having ‘learnt’ from large amounts of text. They are in effect probability engines that are able to work out the likely sequence of words to create the desired meaning. As such, they can be trained to produce text in multiple languages – or to translate text from one language to another.
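
For readers comfortable with a little code, the ‘probability engine’ idea can be seen in miniature. The sketch below is an illustration only: it assumes the open-source Hugging Face transformers library and the small GPT-2 model (not the technology behind any particular commercial chatbot), and simply asks the model which words it considers most probable next.

```python
# Minimal sketch of a language model as a 'probability engine'.
# Assumes the Hugging Face transformers library and the small GPT-2 model,
# used here purely for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Public relations is the practice of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the vocabulary

# Convert the scores for the *next* position into probabilities
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Show the five words the model thinks are most likely to come next
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>15}  {prob.item():.3f}")
```

Commercial chatbots work on the same principle, just with vastly larger models and training data, repeatedly picking a likely next word until a full answer emerges.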

Transformer technology: The letters in ChatGPT stand for Generative Pre-Trained Transformer – the technology that enables the program to create credible text or images by transforming inputs into new digital outputs.

Chatbot: A computer program designed to simulate conversations with human users.

Prompt engineering: The concept of a chatbot is that the application is waiting to provide answers, which means it first needs to be prompted by questions. If its answers are not very useful, then one approach is to ask a better question. This emerging skill is leading to talk of a new role or job description – that of prompt engineer. Christopher S. Penn, author of The Woefully Incomplete Book of Generative AI, talks about prompt optimisation being akin to working with keywords in search engine optimisation (SEO). He also describes it as programming using plain English.
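
To make the idea concrete, here is a minimal sketch contrasting a vague prompt with a more carefully engineered one. It assumes the OpenAI Python SDK (version 1 or later) with an API key set in the environment; the model name and the prompts themselves are invented for illustration. The point is Penn’s: a better question yields a better answer.

```python
# A hedged sketch of prompt optimisation using the OpenAI Python SDK (v1+).
# The model name and both prompts are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

vague_prompt = "Write a press release about our new product."

better_prompt = (
    "Act as a senior PR consultant. Draft a 300-word press release announcing "
    "a hypothetical battery-recycling service aimed at UK local councils. "
    "Use a neutral, factual tone, include one short quote from the (fictional) "
    "managing director, and end with a boilerplate 'About us' paragraph."
)

for label, prompt in [("Vague", vague_prompt), ("Specific", better_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever is available
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content[:300], "...\n")
```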

Artificial General Intelligence and Autonomous AI: Artificial General Intelligence is the scary future point (the ‘singularity’) at which systems can learn and will take over without human input. Autonomous AI is not yet sentient, is still in its early days, but promises more sophisticated output than is possible with current chatbots. It is the equivalent of autonomous vehicles, those capable of driving without human intervention.

Now let’s summarise some of the main runners and riders in the race to dominate the AI tools market. (Tools come and go rapidly in an emerging market, so it’s more useful to look at what the main platforms and software companies are doing in this space.)

Main AI vendors and platforms

IBM has been a pioneer; Meta has a stake; Amazon Web Services offers a platform. But the two competitors vying to lead this emerging space are Microsoft and Google, with Microsoft having early-mover advantage.

Microsoft Azure / Bing / OpenAI

As marketing author Christopher S. Penn observes, the two most accessible AI tools currently are OpenAI’s ChatGPT (for text) and Bing Image Creator (which uses DALL·E, also from OpenAI). When you consider that Microsoft has made a multi-billion dollar investment in OpenAI, and that AI tools are about to be embedded in Office 365 applications, Microsoft is positioned as a strong leader in this space.

Google Cloud Platform / Bard 

The release of OpenAI’s ChatGPT in November 2022 initially stole Google’s thunder, and Google released its Bard chatbot in March 2023. Currently Google is in the unfamiliar position of playing catch-up, though the ubiquity of its search engine and Chrome browser, and its history of innovation and free tools (AI tools are about to be integrated with Google’s suite of applications, including Docs and Sheets), mean there’s a large potential platform for further adoption.

AI tools for public relations

The CIPR’s AI in PR panel has identified some 6,000 tools applicable to public relations. It’s time-consuming and expensive to keep up with the latest tools, so a more practical approach is to look at your tasks and workflows and ask questions about where AI can provide assistance. ‘Think tasks and not tools’ is their advice.

We have mentioned content creation (text and images) but there’s also voice to text transcription, summaries of meetings, translations, media analysis and more. Indeed, the headline figure from the latest CIPR report, Humans needed, more than ever, is that 40% of what we currently do is being assisted by AI in some way.

Implications for educators and learners

It’s a familiar story. Electronic calculators, online encyclopaedias and now generative AI. The first instinct of educators has always been to ban their use for fear that students will take shortcuts and reach for easy answers without demonstrating learning. So generative AI was initially viewed in education as a plagiarism problem. But as author Christopher S. Penn argues, since AI provides ‘better, faster, cheaper’, it’s not going away. So educators should instead be upskilling students to thrive in a world of AI by being better able to critique a text, say, and to ask better questions (i.e. prompt engineering).

Implications for fees and salaries

Questions are being asked about the traditional time-based billing models used by agencies/consultancies. A forthcoming book by Crispin Manners for the PRCA will argue instead for values-based billing.

The problem with time-based billing (also used by lawyers) is the suspicion that inefficiency pays. So a news release that took six hours to research, write and gain approval for release would be worth more to the consultancy than one that took half as long. But the client will be wondering how even one hour is justifiable when generative AI could produce a workable first draft in minutes.

There’s an even bigger problem when it comes to the most junior roles. Since researching information, generating lists and drafting content are key tasks handed to juniors, even a graduate salary of £24,000 a year looks very expensive when compared to monthly subscriptions to a range of AI tools (these might add up to hundreds a month, but would be much cheaper than this modest salary). So just as the learner has had to move from memorising knowledge, to finding information, to prompting a chatbot to produce an answer, to critiquing that answer, so a junior public relations practitioner will need to find new ways to make themselves indispensable in the age of AI.

One stark best practice recommendation from the authors of the CIPR’s latest report is to state the role of AI in public relations content (even if it was only used for background research). The authors say they have met resistance only from consultants who fear that clients might ask for a discount, or fire the consultant and take on the AI tools instead. If the junior PR practitioner looks expensive in my previous example, the consultant who only offers junior-level tasks looks even more so.

If junior roles are most at risk from AI and automation, then the CIPR’s latest report may be reassuring for senior practitioners whose advisory work across reputation, relationship and crisis management currently appears beyond the capacity of today’s tools.

Yet no one should be complacent. Philippe Borremans has written an ebook showing various ways the free version of ChatGPT can be used to support (though not replace) the work of crisis communicators.

Legal and ethical questions

There are many questions, but as yet few answers – though legislators are working on them and some insider voices within artificial intelligence have been calling for action.

There are questions around copyright. We know we can’t simply take an image sourced on the web, because that would breach the copyright of the image’s creator. Yet when we prompt an AI tool to create an image, even though it may draw on others’ copyrighted work, we assume the result is free for us to use. Who owns the copyright in that created image – the tool’s provider or the person who prompted it – is not yet clear.

Then there’s the problem of misinformation and disinformation. There have been many documented examples of errors in ChatGPT responses: it provides plausible answers based on the information available to it, but will also make assumptions where information is unknown. Spreading inaccuracies counts as misinformation, and it’s especially dangerous coming from a reputable and trusted source. But think how easy it is to spread disinformation (deliberate misinformation) through a combination of artificial intelligence and social media algorithms.

Utopia or dystopia?

While the focus of the professional bodies is primarily on the effective incorporation of AI into practice, along with raising the legal and ethical implications, critical scholars are thinking more broadly about the impacts of AI-powered public relations on society and on the emotional wellbeing of practitioners.

For Dr Clea Bourne, author of Public Relations and the Digital, AI and automation are worrying developments that risk taking public relations back to its roots in propaganda: ‘Machine-learning bots are now embedded in PR discourses; their ability to manipulate public opinion via hashtag spamming, smear campaigns and political propaganda is a reputational issue. Even benign PR campaigns can go wrong, since bots have had a difficult time interacting with people in all their human complexity.’

She proceeds to ask questions about non-human public relations in the age of algorithmic public relations.

One intriguing future she explores is that a non-human PR avatar could be always on, forever willing to pander to a narcissistic boss or client. The avatar would never suffer from burnout. 

 ‘A PR avatar could validate narcissistic personalities by providing constant attention, while simultaneously getting on with the job. A PR avatar could answer anxious emails sent at midnight, and smile sweetly after receiving testy comments from clients or bosses, all while remaining reliable and self-assured.’

This may sound like utopia, but what happens when the PR practitioner reports to an algorithm? 

‘Current forms of algorithmic management conjure a number of possibilities for the PR profession. The first is that existing algorithmic PR management is likely to intensify. This kind of automated oversight might reduce the number of human contact hours required to manage a PR brief. Meanwhile, failure to meet PR commitments could end a contractual engagement for a PR agency or affect an in-house PR practitioner’s appraisal. There is the additional possibility that AI and automation will consign many PR practitioners to the gig economy, as happened with journalism, where gig journalists find themselves poorly compensated based on social media ‘likes’ or subscribers’ read time.’

We’re back staring at a dystopian future.

Dr Ana Adi of Quadriga University, Berlin, has gathered 11 contributions from international scholars and practitioners in a freely available ebook reflecting on ‘AI’s impact on PR and for PR’.

There’s a lot to summarise but, without the help of generative AI, here are some highlights.

Ana Adi argues:

‘Responding to AI as a wicked problem, complex and with many known and unknown unknowns, requires a reorientation of the PR/Comms profession’s focus:

  • from servant of organizations and their interests to stakeholder facilitator,
  • from speaker on behalf of organizations to listener,
  • from short-term measurement to long-term evaluation,
  • from organizational benefit to social impact and social value.’

I’ll also quote her disclaimer, because it goes beyond the expectation set in the new CIPR report that practitioners should disclose the contribution of AI in their work.

‘No AI has been used in writing this chapter – neither as a helper to brainstorm ideas, nor to help summarize texts or to rephrase and refine writing. The temptation was there but the need did not emerge, the editorial process providing the author plenty of insight and food for thought to require neither of those.’

I hope any generative AI-written news release about this ebook would carry the headline ‘Academics still needed’.

Thomas Stoeckle looks back at public relations as a ‘persuasion technology’, dating from the writings of Edward Bernays early in the twentieth century. He calls on us to focus ‘less on the impact on the profession or the economy, and more on societal impact, addressing “AI’s programmed inequalities – towards race, gender and identities”, for example through improved listening, planning, measurement and evaluation.’

Internal communication consultants Monique Zytnik and Matthew Lequick, taking a people-centred view, worry about the emerging inequalities exposed by patchy use of generative AI (GAI): 

‘When powerful AI is something everyone can suddenly tap into, differences in business acumen are potentially no longer a differentiator between people. Inequality based on educational background, privilege, resources, etc. could be limited in the future with broad access to GAI.’

Or could the exact same forces have the opposite impact, with early adopters of AI gaining an unfair advantage over later adopters? ‘We need to remain grounded with the reality that our people need to be supported on the journey and that everyone operates with change at a different pace. We need to shelter them from the sometimes-overwhelming winds of change.’

‘As internal communication professionals, the best thing we can do right now is work with our business areas responsible for people and culture to understand where our people are, what training they need, and work closely with our tech teams on user experience and adoption. We must also keep close to and understand our colleagues and our organization to reduce fear and add support to the change journey we all face.’

Taking a political communication perspective, Franziska Mueller-Rech was a sceptic at the outset.

‘My team and I initially felt skeptical when we discussed using AI in our daily work. How can a piece of software build a personal relationship with voters or with other stakeholders? How can a machine communicate human decisions empathetically and furthermore let them be distinguishable from competitors?’

She came to find that AI was like a very effective intern. ‘We analyzed which tasks AI could assist us with and quickly began referring to it as our ‘new intern’. Just like a new intern, AI came into our team initially knowing little about what we do, what the demands of my party, of me personally, my political key issues or key stakeholders’ expectations of me are. And of course, it had no clue of what kind of person I am as a representative, my concerns and how I communicate. Bit by bit, we were, and still are, training the new intern and delegating tasks to them. As with all interns, however, we always keep in mind to check every piece of work before we use it.’

Hemant Gaule introduces a non-European perspective.

‘Academia in India is poised to get ahead of the curve when it comes to AI adoption in curricula and practice. The key driver of this readiness is strong understanding of students and faculty members in technical fundamentals of programming.’

In exploring the ethical implications of AI, Dr René Seidenglanz and Dr Melanie Baier consider the downsides of an explosion of communication content.

‘It’s paradoxical but totally possible that not only will organizations be communicating more thus continuing to contribute to noise instead of listening, but they might also be paying more for it too instead of saving money or time as they might hope by adopting and implementing AI systems.’

I’m reminded of a brilliant Seth Godin aphorism about advertising from two decades ago: ‘the less it works the more it costs. The more it costs, the less it works.’ Will this also apply to AI-powered communication?

Like each of the contributors, I’ve avoided the use of AI to summarise their arguments. But having reached the back page, I discover the following AI-generated blurb about the ebook. What a shameless piece of promotional copy (but then AI, not being sentient, has no sense of shame).

‘In a world where Artificial Intelligence is no longer just a buzzword, the realm of Public Relations and Communications stands at a crossroads. From the silver screens of Hollywood to the boardrooms of tech giants, AI has captured imaginations and headlines. But how does it truly impact the PR industry? Dive into a comprehensive exploration that delves deep into the challenges, opportunities, and ethical dilemmas posed by AI. With contributions from leading professionals, educators, and academics, this book offers a panoramic view of AI’s current role in PR and its potential future implications. From understanding the technical intricacies to grappling with ethical challenges, this book is a must-read for anyone navigating the dynamic landscape of PR in the age of AI. Discover the legacy that today’s choices will leave for tomorrow.’

Disclaimer

The image accompanying this article was generated with Bing Image Creator. No AI tools were used to create this text (though some AI text is included in the quoted sections where specified).

Reading and resources

Books and ebooks

Ana Adi (ed) (2023) Artificial Intelligence in Public Relations and Communications: cases, reflections and predictions, Quadriga University of Applied Sciences

Philippe Borremans (2023) Mastering Crisis Communication with ChatGPT: A Practical Guide to Writing Perfect Prompts and Communicating Effectively in Any Situation, Kindle ebook (reviewed on PR Place Insights)

Clea Bourne (2022) Public Relations and the Digital: Professional Discourse and Change, Palgrave Macmillan

Christopher S. Penn (2023) The Woefully Incomplete Book on Generative AI, Trust Insights

Reports 

Chartered Institute of Public Relations AI in PR reports (download them all from this page)

Reuters (2023) Powering Trusted News with AI

Podcasts

Digital Download episode: Paul Sutton with Stephen Waddington – The Impact of AI on Communications

The Small Data Forum episode: Thomas Stoeckle with Sam Knowles and Neville Hobson

Experts to follow on LinkedIn

Andrew Bruce Smith

Ethan Mollick