Briefing: AI policy and governance
About the author
Richard Bailey Hon FCIPR is editor of PR Academy's PR Place Insights. He has taught and assessed undergraduate, postgraduate and professional students.
A smart approach to the age of intelligence
A year ago, we published an introductory briefing on AI and PR. Since then, much has changed – but some concerns have stayed the same.
Updating an introduction to AI is probably unnecessary: awareness levels are high and most practitioners will have at least dabbled. (The recent CIPR State of the Profession 2024 report provided a somewhat mixed picture of AI adoption: 17% of respondents say they use it daily, 28% often and 28% sometimes, while 19% rarely use it and 8% say they never do.)
The same general question probably cannot be asked in future surveys: as AI becomes embedded into voice and text search and into office apps, how will we know if and when we’re using it?
For those wanting a thoughtful overview of artificial intelligence, I recommend US academic Ethan Mollick’s book Co-Intelligence: Living and Working with AI (reviewed here at PR Academy Insights). He’s also well worth a follow on LinkedIn alongside CIPR AI in PR panel member Andrew Bruce Smith.
Explaining the historical significance of artificial intelligence, Mollick wrote: ‘We have invented technologies, from axes to helicopters, that boost our physical capabilities; and others, like spreadsheets, that automate complex tasks; but we have never built a generally applicable technology that can boost our intelligence.’
So it may not be hyperbole when Sam Altman of OpenAI describes this as the age of intelligence.
In his book, Ethan Mollick cites an academic study to quantify the productivity gains from using AI in a management consultancy. Two randomised groups at Boston Consulting Group worked on a series of typical tasks; one group used GPT-4 and the other did not use AI. The tasks ranged from creative to analytical to writing and persuasiveness exercises (NB they were very similar to tasks performed by PR practitioners; one even involved writing a press release).
The results were conclusive. ‘The group working with the AI did significantly better than the consultants who were not…. The AI powered consultants were faster, and their work was considered more creative, better written, and more analytical than that of their peers.’
Yet this masked a problem. The results from AI were so good that humans tended not to edit the AI responses.
So one further experiment was conducted – a test that AI would find hard to answer (in itself, a hard test to create). It involved a tricky statistical problem built on misleading data.
Now the tables turned. ‘Human consultants got the problem right 84 percent of the time without AI help, but when consultants used the AI, they did worse – getting it right only 60 to 70 percent of the time.’
The conclusion? ‘The powerful AI made it likelier that the consultants fell asleep at the wheel and made big errors when it counted.’
As an analogy, we use calculators for large sums, but also need to retain some level of proficiency in mental arithmetic in order to spot rogue calculations.
So this briefing is not about the daily use of AI, but rather about the systems and policies we should put in place to govern its use.
There’s a more encouraging finding on this in the CIPR’s State of the Profession 2024 report: most respondents (67%) say they already have workplace policies on AI. Yet can we believe this statistic? It looks very different from a Muck Rack survey in the US, which cited 64% AI usage among practitioners but only 22% saying their employers had AI policies.
But first, here’s a warning from history about counterproductive policies on technology adoption. One of my favourite case studies about early social media adoption came from West Midlands Police towards the end of the first decade of this century. Mark Payne – a detective inspector who had taken on a comms role – wanted to run a hard-hitting awareness campaign on knife crime. Since the victims of knife crime and its perpetrators are both overwhelmingly young men, he saw little benefit in running a traditional media campaign targeting the local press. Instead, he wanted to use YouTube (then a new kid on the block). There was just one problem with this: the force had banned use of social media including YouTube in the workplace (it was widely considered to be a distraction from work back then). His robust response: to change the policy on social media use and then proceed with his planned campaign.
We’re at a similar point with AI adoption. How can we encourage its responsible use while mitigating the obvious risks around confidentiality, copyright, bias and accuracy? How can we best combine the strengths of AI with those things that humans are still stronger at?
For consultants there is an additional question. How can you continue to justify time-based billing when clients might reasonably expect the improved speed of many routine tasks to lead to lower monthly fees? AI is one reason for the talk about values-based billing: for more on this, see our review of How to Sell Value – Demystified by Crispin Manners.
As the author explains it: ‘If PR people define their value by WHAT they do – rather than WHY they do it – to achieve a business outcome – then they will have little defence against clients replacing them with tools like ChatGPT.’
So the pressing issues facing communicators and managers relate to the guardrails they put in place around AI usage and the encouragement they can give to speed its safe adoption. Where to turn for help with this?
As it so often does, the Government Communication Service (GCS) has taken a leadership position by sharing its policy on generative AI use. It’s a good starting point, whether you work in the public, private or not-for-profit sectors.
Let’s walk through the main elements of an AI policy applicable to communicators (with the GCS text quoted in italics).
Introduction
You should state your overall approach to the use of (generative) AI. If you encourage its use, what are you doing to ensure it’s used responsibly? Who will benefit from its use?
‘The Government Communications Service (GCS) is committed to embracing responsible use of generative AI technology to deliver effective, engaging and increasingly relevant communications to the public.
‘Through the responsible use of generative AI and commitment to our public service values, we aim to set the standard for excellence in government communication in the AI age, and inspire trust and confidence. The policy [sic] set out clear principles for all GCS members to follow in their use of AI within their organisations.
‘Our aim is to seize the benefits of the revolution in generative AI, and ensure all of Government Communications can responsibly harness this exciting new technology, for the benefit of the public.’
Definitions
The GCS policy specifically covers generative AI, but definitions of both AI and generative AI are provided.
- ‘Artificial Intelligence (AI): computer systems able to perform tasks usually requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
- ‘Generative AI is a type of AI system capable of generating text, images, or other media in response to prompts.’
Guiding principles
An AI policy should address a number of key areas.
Law and ethics: Your policy should state that AI usage will conform to applicable laws and ethical principles.
- ‘Always use generative AI in accordance with the latest official government guidance. For example: the Generative AI framework for HM Government, the Introduction to AI Assurance, and the GCS Ethical Framework for Responsible Innovation. This is in addition to the latest regulations and legislations in the UK. Please see Annex A for more information. For GCS, acting responsibly in our use of generative AI means operating consistently in line with our values. This includes the Civil Service Code, the Government Publicity Conventions, and is underpinned by the values of Democracy, Rule of Law, and Individual Liberty. See the GCS Innovating with Impact Strategy for further information.’
Training and support: What guidance and support is your organisation providing for teams and individuals? Will you be providing training?
- ‘The central GCS team at the Cabinet Office will provide training on responsible use of generative AI to all government communicators, in particular around ensuring accuracy, inclusivity, and mitigating biases. For example, this could be biases against the race, religion, gender, age, of an individual or group.’
Use of suppliers and contractors: You may have a strong policy in place governing AI use in your workplace, but this does not cover your suppliers (for example, external consultancies or freelance contractors). So how are you ensuring they make responsible use of AI in the work they do on your behalf?
- ‘Require that all our contracted and framework suppliers adhere to this GCS policy on the responsible use of generative AI, and have safeguards in place to ensure this. Ultimately, our contracted and framework suppliers remain responsible for their use of the technology.’
Importance of truth and accuracy: There is much less talk of ‘hallucinations’ now than there was a year ago as generative AI systems have continued to improve at a rapid pace. Yet that progress raises problems: there is much more concern about AI’s ability to create deepfake video and audio and to be used for manipulative purposes. So your policy should cover truth, accuracy and misinformation.
- ‘Uphold factuality in our use of generative AI. This means not creating content that misleads, in addition to not altering or changing existing digital content in a way that removes or replaces its original meaning or messaging.’
Future developments: It’s remarkable how far generative AI has developed in less than two years since the first release of ChatGPT, and we’re still only at the beginning. So how will you ensure you keep up to date with developments and how will you seek to inform policy makers and influence developers?
- ‘Engage with appropriate government organisations, strategic suppliers, technology providers, and civil society, around significant developments in generative AI, and the implications for its use in government communications.’
Public concerns: A technology that promises such productivity gains will inevitably produce winners and losers. How will you remain aware of public opinion and how should policy reflect any concerns?
- ‘Continue to review reputable research into the public’s attitudes towards generative AI and consider how this policy should evolve in response.’
Usage examples: opportunities and risks
Policies are necessarily broad-brush and brief (the GCS policy statement runs to some two dozen paragraphs); rather than anticipating every usage example, they can give some general indications.
The GCS policy gives examples of generative AI being used for brainstorming and for creating first drafts of text, visuals and audio. It also makes clear that using AI does not remove users’ obligations to those who may be the source of, for example, images (thus seeking to avoid future complaints or copyright challenges).
The two main risk warnings in the GCS policy are against using AI without human oversight, and against sharing private or sensitive data with AI providers ‘without having appropriate data sharing and security agreements in place.’ This is a particular concern for in-house teams. Customised AI systems based on your organisation’s data will deliver a stronger tone of voice and greater insight than AI systems trained on non-proprietary data – but at the risk that you could give away your own data for others to use.
So just as communicators needed to get up to speed with data protection principles and practice when the General Data Protection Regulation (GDPR) came into effect in 2018, so we all need some understanding of data governance in the age of intelligence.
We have alluded to human oversight of AI, but what about AI oversight of humans?
This briefing has so far drawn on just one trusted source (GCS), so what about asking AI for some AI policy guidelines suitable for a public relations team? Here is Microsoft Copilot’s response to that prompt (Copilot is built on OpenAI’s GPT models).
AI Policy for Public Relations: Guidelines and Best Practices
- When to Use AI Tools
- Guideline: We will use AI tools when they enhance efficiency, accuracy, or creativity in PR tasks.
- Explanation: AI can assist with media monitoring, sentiment analysis, content generation, and audience segmentation. We’ll evaluate each use case to determine if AI is appropriate.
- Transparency and Disclosure
- Guideline: We commit to transparently disclosing when AI tools are used in our PR efforts.
- Explanation: Whether it’s automated chatbots, data analytics, or content creation, we’ll clearly communicate when AI is involved. Honesty builds trust with our stakeholders.
- Content Quality Assurance
- Guideline: All content generated by AI must align with our company’s voice, values, and messaging.
- Explanation: We’ll review AI-generated press releases, social media posts, and articles to ensure they reflect our brand identity and maintain consistency.
- Guarding Against Bias
- Guideline: We’ll actively monitor and address bias in AI algorithms.
- Explanation: Bias can inadvertently creep into AI systems. We’ll regularly assess and adjust to minimize any unintended discriminatory impact.
- Data Privacy and Security
- Guideline: Protect user data and respect privacy regulations.
- Explanation: AI often relies on data. We’ll handle it securely, obtain proper consent, and comply with relevant laws (e.g., GDPR).
- Human Oversight and Decision-Making
- Guideline: Human judgment remains crucial in PR.
- Explanation: While AI assists, our PR professionals will make final decisions. AI is a tool, not a replacement.
- Feedback and Continuous Improvement
- Guideline: We encourage feedback from PR team members regarding AI tools.
- Explanation: Regular input helps us refine our AI processes and adapt to changing needs.
- Training and Awareness
- Guideline: PR team members will receive training on AI basics.
- Explanation: Understanding AI’s capabilities and limitations empowers us to use it effectively.
Copilot cites four sources and makes it clear that its primary source was a blog post from US agency Axia Public Relations (two clues were the US spellings of ‘minimize’ and ‘judgment’ above).
The one improvement here over the GCS policy was the very clear statement around transparency and disclosure (perhaps because the GCS policy implies generative AI is suitable only for first drafts):
- We commit to transparently disclosing when AI tools are used in our PR efforts.
This is consistent with the recommendations arising from two 2023 publications from the CIPR’s AI in PR panel: its Ethics Guide to Artificial Intelligence in PR and the Humans Needed More Than Ever report. It also matches the first of the “Warsaw Principles” adopted by the International Communications Consultancy Organisation (ICCO) in 2023, covering transparency, disclosure and authenticity.
- PR professionals should transparently disclose when generative AI is used to create purely artificial content that might pose a threat of distorting the public’s perceptions of reality. Disclosing the use of AI-generated content is vital, especially in the age of deep fakes.
There is a useful discussion of the leading role of the EU in AI governance in Cayce Myers’s new book The Rules of Public Relations, reviewed here.
Disclosure: This briefing was researched and written by a named human. Information sources have been declared, including the one section created by AI chatbot Copilot, which itself draws on a named source.