
OpenAI sued for defamation: A legal test for AI-generated misinformation

By Steven, on July 4, 2023, updated on July 11, 2023 - 2 min read

In a groundbreaking legal case that could redefine the boundaries of AI technology and its role in spreading misinformation, OpenAI finds itself facing a defamation lawsuit.

The case revolves around false information generated by its AI system, ChatGPT, with potentially far-reaching implications for users, developers, and the future of AI-generated content.

Here’s the story behind this lawsuit…

Radio host sues OpenAI over false allegations

Mark Walters, a radio broadcaster, filed a defamation lawsuit against OpenAI after its artificial intelligence system, ChatGPT, falsely accused him of fraud and embezzlement from a non-profit organization.

The false claim was generated in response to a query from journalist Fred Riehl, who had asked ChatGPT to summarize a court case.

This unprecedented legal case could serve as a test for determining the liability of AI systems in disseminating false information.

Defamation law was written with human authors and publishers in mind, however, and its applicability to AI systems remains unclear. If successful, this lawsuit could challenge the existing framework and prompt further examination of AI’s role in spreading misinformation.

ChatGPT and the AI misinformation dilemma

ChatGPT and other AI systems have faced criticism for generating incorrect or misleading information due to their inability to differentiate fact from fiction.

Rather than merely linking to existing data sources, AI systems like ChatGPT generate new text, and in doing so they can fabricate information outright.

Although OpenAI provides disclaimers about potential inaccuracies, the company has also marketed ChatGPT as a reliable source of information.

The lawsuit’s outcome may help establish whether legal precedent exists for holding companies responsible for AI-generated misinformation.

In the United States, Section 230 of the Communications Decency Act shields internet firms from liability for third-party content, but it is unclear whether that protection extends to content an AI system generates itself.


Challenging the libel claims against AI companies

Eugene Volokh, a law professor at UCLA, has noted that although libel claims against AI companies are theoretically viable, this particular lawsuit may be difficult to sustain.

Walters did not notify OpenAI of the false statements or demonstrate actual damages resulting from the AI’s output.

Moreover, ChatGPT was asked to summarize a PDF document – something it cannot do without additional plugins – yet it still produced a false summary, illustrating its potential to mislead users.

OpenAI has not yet provided a comment on the lawsuit. The outcome of this case could have significant implications for AI companies and the future development of AI systems, particularly regarding their potential to spread misinformation.

Moving beyond generative AI: The future of artificial intelligence

Yann LeCun, Chief AI Scientist at Meta, Facebook’s parent company, has claimed that generative AI like ChatGPT is already outdated, calling the technology a “dead end.”

He stated that humans have common sense while machines do not, highlighting current limitations in AI and machine learning.


In response to these limitations, Meta has announced its latest AI project, the Image Joint Embedding Predictive Architecture (I-JEPA), which aims to move beyond generative AI like ChatGPT.

JEPA is designed to allow machines to conceptualize abstract ideas, rather than merely reproducing information found online.

According to LeCun, generative models are the past, and they will be replaced by joint embedding predictive architectures.

As legal battles and new advancements unfold, the future of AI remains uncertain, but it is clear that industry experts are working to address the challenges posed by misinformation and limitations in existing AI systems.
