Transparency needed to tackle lying AI

Publicly released: International
Yasmine Boudiaf & LOTI / Better Images of AI / Data Processing / CC-BY 4.0

Trusted but incorrect machine-generated information is entering human conversations with the rise of large language models, and the legal protection against this is unclear. Oxford University researchers found that truth-related legal obligations often don't apply to the private sector, and cover platforms or people but not hybrids such as chatbots. To fill this gap, they propose a new, broad legal requirement for large language model providers to minimise careless speech and to avoid centralised, private control of the truth, through transparency and public involvement.

Media release

From: The Royal Society

Truth be told – Should large language models be legally obligated to tell the truth? Researchers examined whether a legal duty for AI to be truthful already exists and whether it would be feasible. Current frameworks were found to be limited and sector-specific. The authors propose a pathway to ‘create a legal truth duty for providers of narrow- and general-purpose LLMs’.

Attachments

Note: Not all attachments are visible to the general public. Research URLs will go live after the embargo ends.

Research: The Royal Society, Web page — The URL will go live after the embargo lifts
Journal/conference: Royal Society Open Science
Research: Paper
Organisation/s: University of Oxford, UK
Funder: n/a
Media Contact/s
Contact details are only visible to registered journalists.