Avoiding the fat finger slip of AI
Most of us have experienced that awkward moment of a fat finger slip. In most cases the consequences are minimal, like inputting the wrong figure into a spreadsheet.
Unfortunately, in some instances, a fat finger slip can be serious. So, as we start using AI to generate content for PR campaigns, we must ask: how can marketers avoid the potentially negative consequences?
The phrase fat finger error originated in the financial trading markets, where there have been numerous examples of an input error leading to stock market transactions at unusually high or low prices.
The problem is that a single trading error can be replicated by algorithms across the globe, meaning that a stock price can rocket or crash overnight as the consequence of a single misplaced decimal point.
The cost of the mistake multiplies as it's replicated, with each transaction compounding the original error. An error worth a fraction of a penny in the pound when it was made can turn into hundreds of thousands, or even millions, of pounds by the next morning.
The analogue in content production is clear. A single piece of content placed in a respected outlet, but containing errors generated by AI, has the potential to multiply over months and years. Eventually it will generate enough mentions to become accepted as true – at least as far as other AIs writing about the same thing are concerned.
In a document published in June 2023, entitled “The Guardian’s approach to generative AI,” the newspaper in question wrote, “We will guard against the dangers of bias embedded within generative tools and their underlying training sets.
“If we wish to include significant elements generated by AI in a piece of work, we will only do so with clear evidence of a specific benefit, human oversight, and the explicit permission of a senior editor. We will be open with our readers when we do this.”
A scientific example
Maintaining integrity is vital in any industry, but it's particularly important in the publication of technical, engineering, and scientific content. Researchers and writers have a responsibility to ensure any information they share is accurate, transparent, and reliable, to build trust between scientists, engineers, technologists, and the public. When mistakes are discovered, that trust is called into question.
In August 2023 a prominent physics journal retracted a materials science paper to investigate reports that one of the authors had included fabricated and falsified data. While the investigations are still ongoing, some would argue that the damage is already done, with previous work from the same author, a professor of physics and mechanical engineering writing on superconductivity, now questioned for its validity.
The researcher, Dr Dias, maintains that any errors were introduced accidentally when collaborators on the paper used Adobe Illustrator and its AI tool to create scientific charts. He claims that any inconsistencies were an unintentional consequence of using the software, rather than an effort to mislead.
Thankfully, this mistake was spotted and is, as a result, not subject to the multiplication inherent in the fat finger slip. Crucially, it was published in a peer-reviewed journal, whose contents are analysed by a global community of scientists who rightly employ the scientific method to prove or disprove findings.
Is the same true of an article published in a leading engineering or technology publication? Or in a non-peer-reviewed but still respected scientific publication? The answer is no, and that lack of critical faculty means that a mistake like this in another context could easily be accepted as ‘internet fact’.
So, what has any of this got to do with your PR and marketing campaign?
Avoiding the fat finger when using AI
Many businesses are using AI and open-source tools to streamline their operations, whether for data management, HR processes or asset creation. A recent report by the Chartered Institute of Public Relations (CIPR) found that there are now around 5,800 technology tools that the public relations industry could use for research, planning and measurement.
While the report — Artificial Intelligence (AI) tools and the impact on public relations practice — charts the impressive growth of generative AI and potential tools to support PR practices, it also highlights concerns. The ethical issues associated with AI, for example, include the question of whether practitioners need to declare when they use AI in their work, as The Guardian does, and the risk of the tool being used to create misinformation.
Effective communication is integral to showcasing a business as credible and trustworthy. PR professionals can genuinely benefit from AI, using it to support technical research, monitor the media, and manage reporting and content. We just need to make sure anything we say is factual and authentic.
Generative AI is still developing, so its output requires fact checking, and often amendment, before the content is shared externally. Sending this content out without adequate review could lead to a business being accused of spreading misinformation and suffering reputational damage similar to that which Dr Dias has experienced.
We’re all seeing the benefits of AI for marketing and discovering how tools can enhance our work and improve effectiveness. While these tools can automate tasks, human intellect is still essential to ensure that a fat finger slip won’t lead to spreading misinformation.
Inadequate review of AI-generated content is ethically irresponsible, financially damaging, and reputationally destructive. And you can't really blame your errors on a fat finger slip.