Transparency needed to tackle lying AI

With the rise of large language models, machine-generated information that sounds trustworthy but is incorrect is entering human conversations, and the legal safeguards against this are unclear. Oxford University researchers found that existing truth-related legal obligations often do not apply to the private sector, and that they cover platforms or people but not hybrids such as chatbots. To fill this gap, they propose a broad new legal duty for providers of large language models to minimise careless speech, enforced through transparency and public involvement rather than centralised, private control of the truth.
