Will AI ruin your reputation?

Welcome to AI Communicator!

In today's issue:

  • Will AI ruin your reputation?

  • Guidelines on using AI - an excellent example

  • Willy Wonka and the AI overpromise

Will AI ruin your reputation?

"It takes 20 years to build a reputation and five minutes to ruin it. If you think about that, you'll do things differently." – Warren Buffett

It feels like the current crop of AI apps has only been with us for five minutes.

We are still learning the consequences of using these tools.

That’s why it's important to be alive to the risks as well as the benefits.

Looking through a communications and public relations lens, reputational risks - and how to mitigate them - are an obvious place to start.

Risk One - Perception

People might think less of your skills and knowledge if they know, or think, AI did most of the heavy lifting and hard thinking.

They may fear that you’ve accepted AI-generated output uncritically - failing to give it the necessary thought and attention.

Risk Two - Hallucinations

ChatGPT, Gemini, Copilot. All large language models make things up.

In fact, making things up is their superpower. But it also leads to hallucinations.

It could mean the output ChatGPT gives you includes entirely plausible-sounding - but false - quotes, citations, research papers, people, books or events.

A cut-and-paste approach to using AI output that includes these errors could lead to a reputation for playing fast and loose with the facts.

It could even lead to critical decision making errors if you or others rely on this information. 

Risk Three - Fakes

It’s never been easier to fake a photo, video or audio clip of someone. And never harder to spot one.

What untold damage could AI-generated fakes do to you or your organisation? 

Even AI-generated audio or video of yourself could be problematic - eroding trust and reducing authenticity.

Risk Four - Responsibility

Outsourcing to AI without understanding its limitations and being clear about them is bad enough.

Not taking responsibility and blaming the AI is the clincher.

Air Canada tried to argue their chatbot was a "separate legal entity that is responsible for its own actions" when it gave a passenger incorrect advice about a bereavement discount. A tribunal disagreed.

The worst bit is this could've been a reputation enhancer if it had been handled with common sense.

Mitigating the risks

  1. Perception - Consider a content authenticity statement setting out how, and how much, AI was used - like the one used by Christopher Penn in his newsletter.

  2. Hallucinations - Double-check and cross-reference any facts generated by AI. If using ChatGPT Plus, you can ask it to browse the web to give you sources. Gemini includes a G button to double-check the response. Bear in mind, this is asking Gemini to mark its own homework.

  3. Fakes - Monitor. Act fast. Be transparent. Consider legal action.

  4. Responsibility - Minimise unforeseen issues through thorough testing. But prepare in advance in the expectation things will go wrong anyway. Accept responsibility quickly. Make it right.

Bonus - How to enhance your reputation with AI

Be known for:

  • Transparent and ethical use of AI

  • Innovative use of AI

  • AI skills and knowledge

  • Use of AI to help others.

Guidelines on using AI - an excellent example

The University of Cambridge’s Communications Team has put together a well-thought-out and transparent set of guidelines on their use of generative AI.

They are a great place to start when considering your own. Which you should.

Willy Wonka and the AI overpromise

A few weeks ago, my wife sent me a link to the Willy Wonka Experience in Glasgow.

We have three young kids who are fans. “A place where chocolate dreams become reality” sounded like it would keep them entertained for a while.

But there was something off about the AI generated images used to promote it - and no sign of actual photography of what was in store. So I passed.

This meant my kids missed out on a unique and character-building experience.

And a life lesson on the gap between marketing and reality - a gap which is now much easier to exploit thanks to AI.

Caveat emptor.

I’d love to hear what you think about the newsletter. Reply to this email with feedback or questions.

Thanks for reading. 

P.S. To sign up or share with a colleague, you can find the newsletter here. Make sure you get the next issue by moving this email to your primary inbox.