The trust dilemma

Managing content in the age of generative AI

About the author

Zarrion Walker prepared this article as part of a CIPR Professional PR Diploma assignment while studying with PR Academy.

Image created using Microsoft Copilot

Since ChatGPT became mainstream in late 2022, generative AI-based technologies have democratised content creation at scale.

Zarrion Walker

For those new to an evolving glossary of terms: these AI-powered tools, built on large language models, can produce human-like text, audio, and imagery at scale, without users needing specialist skills.

Generative language models offer transformative potential for content creation and management. However, they also pose ethical and practical dilemmas for the profession. Maintaining trust is challenging in a new age as revolutionary as the internet and social media. Like it or loathe it, PR experts see this as an opportunity to seize as advisers.

CIPR AI in PR Panel Chair Andrew Bruce Smith and the panel's founder, Stephen Waddington, urge PR professionals to learn more about the impact of AI tools. Writing in The AI Journal, they explain why it's important to understand how to use them effectively and to guide others on the reputational consequences.

CIPR’s comprehensive analysis of tech tools in PR, published by Smith and Waddington in February 2023, predicted an explosion of AI-based tools in the wake of ChatGPT. It also prompted practitioners to think about bigger questions for a sustainable future. What impact does AI have on stakeholder relationships? How is the management of messaging and reputations affected?

“You can’t communicate your way out of bad behaviour or poor decisions – and AI doesn’t offer a solution to either of these issues. In fact it may make it worse if your bad decisions and what others say about them are the narrative from which AI tools draw their data.”

2023: The year of generative AI

It’s no surprise that the most significant emerging technology of last year was generative AI, according to Gartner’s IT research. At the start of the year, ChatGPT became one of the fastest-growing consumer internet apps, estimated to have reached 100 million users within two months, outpacing TikTok’s launch.

The arrival of more sophisticated models, such as Google’s Gemini, Claude by Anthropic, and the more powerful GPT-4, signals rapid technological advancement. These early models are more than incremental releases; they’re catalysts making significant waves in the tech market.

Microsoft aims to “fundamentally change the way we work” by integrating ChatGPT technology into Office 365 applications through its generative AI assistant, “Copilot”. Others, like Amazon, Apple, Google, and Meta, also plan to integrate AI into everyday products, further blurring the line between human and artificial content.

Meanwhile, KPMG says generative AI could eventually boost UK productivity by £31 billion, and Goldman Sachs forecasts that global AI investment could reach $200bn by 2025.

Multinational brands have embarked on early experiments to unlock its potential, earning recognition for in-house AI innovations, especially in advertising and marketing.

Some examples:

  • Coca-Cola used it for its campaign “Create Real Magic”, inviting fans to compete in creating AI-generated artwork using Coca-Cola’s iconic brand assets.
  • Nestlé worked with GPT-4 and DALL-E 2 to generate imaginative campaign briefs that humans then refined.
  • Unilever is using AI to create product descriptions matched to brand tone for its TRESemmé products on Amazon.

AI’s impact on PR content and consumption

Where is PR right now? According to studies, adoption has been cautious and slow over the past five years but is expected to accelerate. Prowly found that of 303 PR professionals surveyed in May 2023, 63% felt positive about integrating AI into their work.

CIPR’s Humans Needed More Than Ever report, published in September 2023, provides the most comprehensive picture yet of AI in public relations.

AI tools now assist with up to 40% of PR tasks on average. Automating tasks such as transcription, media monitoring, and press release distribution relieves high-volume workloads. AI can also help edit content, generate ideas, and draft articles and social media posts.

Writing assistants like Grammarly, Hemingway App, and ProWritingAid are commonplace. However, AI has made them more efficient and personalised, enabling some to write in distinctive styles and tones. Time efficiency is reported as one of the most significant advantages of AI use, freeing practitioners for more strategic thinking.

Yet generative AI is still in its infancy and has several limitations. As a result, its content creation capabilities come with risks, as KPMG finds: it might create large volumes of low-quality or biased content, or be exploited for misinformation, making it ethically challenging.

Gartner’s Hype Cycle model indicates generative AI is currently at the “peak of inflated expectations”, which suggests its benefits may be exaggerated, even “overhyped.” The technology is expected to take two to ten years to reach maturity and achieve its greater potential.

For now, large language models are predictive and lack genuine human comprehension of the meaning behind the content they produce. They sometimes generate factual errors that look credible, termed “hallucinations.” Because of these flaws, Google has labelled its chatbot Bard “experimental”, and Microsoft, whose Copilot uses ChatGPT, openly admits its systems will get things wrong.

“AI-produced content and outputs may contain inaccuracies, biases, or sensitive materials because they were trained on information from the internet, as well as other sources. AI may not know about recent events yet, and struggles to understand and interpret sarcasm, irony, or humor. Please remember that it’s not a person.”

This is not just a PR dilemma but one for journalists and the media. Digital rights activist Samantha Floreani writes for The Guardian about the potential cause and effect of AI generated content quality on giving journalism a bad reputation. Floreani is concerned that as new AI models are trained on the output of other AI models, this could create “mutant news” that’s even more inaccurate and distorted.

Vacancy: Humans needed, to establish and maintain trust

Generative AI’s potential pitfalls leave much to be desired in terms of digital trust. KPMG’s global study with the University of Queensland, spanning 17 countries including the UK, found that almost two in three people (63%) said they were ambivalent about or unwilling to trust AI.

The technology’s potential across sectors suggests it could lead to job losses as organisations seek time and cost savings. A BBC Worklife article examines the anxiety brought on by the uncertainty AI creates, even felt as a threat to personal skills and belonging. One way to overcome this is to demystify the unknown through learning.

Whether in 12 months or five years, Wadds Inc.’s management paper acknowledges that AI will likely displace functional – mainly junior – roles within PR. However, AI also creates new opportunities, such as mastering the craft of writing AI prompts.

Telefónica blogs about the emerging role of prompt engineers, which demands effective communication skills; PR sounds like a match. However, professionals must avoid complacency and keep up with best practices, or risk being replaced by those who do – using creative and lateral thinking, linguistic knowledge, and analytical and evaluative skills to get the best results.

A gap in talent and trust is evident. Demand for AI skills is growing, but Salesforce reveals a skills deficit in the UK and the wider economy: only one in ten workers feel confident they have AI abilities. Another survey of 4,000 full-time desk workers confirms the knowledge gap, with just over half (52%) admitting they’re unsure how to get the most value out of generative AI.

Grammarly’s CEO, Rahul Roy-Chowdhury, blogs about the company’s “pro-human” approach to generative AI, “augmenting” human intelligence rather than replacing it. He highlights how the effective use of AI tools can increase accessibility, allowing individuals to overcome pain points and communicate clearly and effectively whilst respecting individual autonomy and identity.

Whilst some sectors will see more automation of work than others, the clue in CIPR’s report title confirms that human intervention is still vital for public relations, including in content creation and management. The report finds that developing relationships and managing reputation cannot be reduced to tasks or replaced by AI. Humans must continue to oversee the correct application of AI tools, assure the ethics and quality of outputs, and contribute the creative and intellectual input that is uniquely human.

Ethics and regulation of AI generated content

Given the rapid pace of change, the Future of Life Institute published an open letter earlier this year, signed by Elon Musk among others, calling for AI labs to pause the training of systems more powerful than GPT-4 for at least six months. AI experts and public figures, including the CEO of OpenAI, the maker of ChatGPT, also signed the Centre for AI Safety’s statement encouraging global action to mitigate catastrophic risks. However, there are few signs of stopping.

Opinions on regulation in the UK remain divided. The UK Government published a White Paper in March 2023 detailing its “pro-innovation”, principles-based approach rather than a rules-based one. It outlines no current intention to introduce AI legislation; instead, it tailors cross-sectoral principles and embeds them into the existing work of the UK data watchdog and the communications regulator, Ofcom.

The BBC reports that UK MPs want stronger regulatory efforts to avoid falling behind tougher EU regulations, which are set to introduce transparency requirements.

In its Ethics Guide to Artificial Intelligence in PR, published in 2020, CIPR suggests that new laws or regulations are unlikely to keep up with modern technologies.

As a result, PR professionals should advocate to ensure their organisations are protected from intentional harm arising from AI-powered tools, and from corporate overexcitement. CIPR follows a principles-based approach to ethical decision-making, referring to 16 principles adopted by the Global Alliance for PR.

Here are several principles to be applied in practice:

  • Working in the public interest – content moderation should ensure generative AI helps the public rather than causing harm or spreading misinformation. Specific AI tools can help these efforts.
  • Obeying laws and respecting diversity and local customs – ensuring compliance with legal regulations surrounding AI as policies develop, and being mindful of cultural sensitivities in operational environments when generating content.
  • Honesty, truth and fact-based communication – verifying facts and sources used to train models to ensure accuracy, including correction where needed.
  • Transparency and disclosure – clearly labelling AI generated content to support openness and trust and supplying reassurance through human validation.
  • Privacy – protecting personal and sensitive information from misuse by understanding, anonymising and obtaining consent.
  • Commitment to continuous learning and training – keeping current with developments, including experimentation, and following the latest ethical standards and best practices.
  • Advocating for the profession – participating in dialogues with stakeholder groups to shape AI governance and policymaking and educating others on correct use.

The US-based PR Council also released guidelines on generative AI last year. They cover protecting the integrity of information, acting responsibly, committing to accuracy, communicating with openness and transparency, valuing diversity and inclusion, and increasing society’s confidence in the practice. Leaders are encouraged to consult their own legal counsel to tailor and implement policies and training for their organisations and clients.

The future of generative AI

There is much to play for in how content and channels are managed in the future, as AI hopefully becomes better and more ethically minded over time.

Whilst industries globally face open questions, it’s fine to admit that PR cannot make definitive decisions right now. Some look for guidance from the UK, which hosted the world’s first AI safety summit aiming to find common ground.

The profession must keep up as emerging technology like generative AI will continue to disrupt the oldest and most influential communication models of our time, calling for new ideas.

As for now, professionals should consider what they can influence:

  • Educating and training to dispel anxiety, myths, and uncertainty, such as upskilling and developing guidance on AI usage.
  • Applying ethical principles and standards that underpin codes of conduct that build credibility and trust.
  • Finding the best use of generative tools for the right tasks.
  • Remembering that human value comes in the form of developing connections, creativity, and intellect – augmented by AI, not replaced by it.

I advise you to stay updated with the latest from CIPR’s #AIinPR panel.

This article was researched and written by Zarrion Walker. AI was used to create the main image.