
To build trust, one must be able to say “I don’t know” – whether human or AI

Will AI strengthen or break down trust? It depends on whether we can understand and accept its limitations, and our own, writes Sarah Lidé in a column.

The past year, 2023, can be considered the year AI went mainstream. “ChatGPT” was The Economist’s word of the year, and for the average person the name has become all but synonymous with generative AI.

But for every article that touts AI’s potential, there’s a corresponding one that highlights its risks, particularly for generative AI. General technology optimist though I am, the one risk I keep mulling over is its impact on trust.

A commonly mentioned challenge with ChatGPT is its tendency to hallucinate – for example, generating false research references. That is perhaps manageable, since references can easily be verified against legitimate research databases. The larger problem is ChatGPT’s ability to present uncertain or incorrect answers as confident truth. Or, as a recent report from industry trade body UK Finance puts it, these tools are designed to “prioritise fluency over accuracy”.

A Nature editorial described this as an issue that goes beyond mere hallucination – it enters the territory of fabrication and falsification. For example, a study in the medical journal Cureus investigating the authenticity and accuracy of references in medical articles generated by ChatGPT found that, of 115 references provided, 47% were fabricated, 46% were authentic but inaccurate, and only 7% were both authentic and accurate. A letter published in JAMA in November described how researchers used a large language model to create a realistic but fake clinical-trial data set to support an unverified scientific claim.

In a world where soundbites are the norm and attention spans are growing shorter, my concern is this: when using generative AI becomes as common as Googling an answer, we may lose the rigour and discipline of reading widely, thinking deeply, and questioning whether the results we get are grounded in fact and truth – or whether we accept them simply because they sound plausible.

Research suggests that people are more likely to follow advice delivered with confidence, and to reject advice delivered with hesitancy or uncertainty. Given the believability of generated responses, and ChatGPT’s tendency to “double down” when confronted with its inaccuracies, prioritising fluency over accuracy scales up the risk of false information being acted upon.

Trust, on the other hand, is built differently – research also suggests that, in many cases, people trust those who are able to express uncertainty and are willing to say “I don’t know” – in other words, those who recognise the limits of their knowledge. One major bank, for instance, has limited its internal chatbot (based on OpenAI’s generative AI technology) to banking matters; asked about anything unrelated, such as football, it will simply say it does not know.

Until ChatGPT and similar tools prove more reliable, we have a responsibility to use them with care – embracing their potential while not shying away from confronting their limitations – and to stand firm on the fundamental human value of integrity, demanding it not just of the tools we use but of ourselves. And that starts with being courageous enough to say, “I don’t know”.
