
AI Hallucinations Pose a Significant Threat to Scientific Integrity, Oxford Study Warns

By Editor

Nov 20, 2023

Researchers at the Oxford Internet Institute are warning scientists about the dangers of relying on Large Language Models (LLMs), the technology behind AI chatbots. These models are designed to generate helpful and convincing responses, but they offer no guarantee of accuracy or alignment with fact.

A recent paper published in Nature Human Behaviour highlights this issue. LLMs are often treated as knowledge sources and used to generate information in response to questions or prompts, but the data they are trained on may not be factually correct. This can lead users to believe that responses are accurate even when they have no basis in fact or present a biased or partial version of the truth.

One reason for this is that LLMs are typically trained on online sources, which can contain false statements, opinions, and inaccurate information. Because the models are designed as helpful, human-sounding agents, users tend to trust their output as they would trust a human informant. The researchers warn, however, that it is crucial to verify LLM output and confirm that it is factually correct and consistent with the input it was given.

To address this issue, the researchers suggest using LLMs as “zero-shot translators”: users supply the model with the appropriate data and ask it to transform that data into a conclusion or into code, rather than relying on the model itself as a source of knowledge. This approach makes it far easier to verify that the output is factually correct and consistent with the provided input, as the sketch below illustrates.
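
For a concrete sense of what this pattern looks like in practice, here is a minimal sketch in Python. It assumes a hypothetical `call_llm` helper standing in for whatever chatbot or API a researcher actually uses; the study does not prescribe any particular tool, and the data shown is purely illustrative.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: replace this with a call to your own LLM client."""
    return "<model reply would appear here>"


# Instead of asking the model to recall a fact (treating it as a knowledge
# source), supply the trusted data yourself and ask only for a transformation.
measurements = {
    "sample_A": [2.1, 2.4, 2.2],
    "sample_B": [3.8, 3.9, 4.1],
}

prompt = (
    "Using only the data below, write a Python function that returns the "
    "mean of each sample as a dictionary. Do not add any other facts.\n\n"
    f"Data: {measurements}"
)

generated = call_llm(prompt)

# Because the data was supplied explicitly, the reply is easy to check:
# run the generated code against the same data and compare the results
# with means computed by hand.
print(generated)
```

The benefit of working this way is that verification becomes a mechanical check of a transformation over known data, rather than an attempt to fact-check a claim the model produced from its training data.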

While LLMs will undoubtedly assist with scientific workflows, it is crucial for the scientific community to use them responsibly and maintain clear expectations of how they can contribute. By doing so, we can ensure that these powerful tools are used ethically and effectively in our quest for knowledge and understanding.
