Newsletter

Artificial intelligence and defamation: the Walters v. OpenAI decision

by Halvor Manshaus



Various forms of AI have rapidly entered the stage as essential tools and digital assistants. At the same time, many users of chatbots like ChatGPT have discovered that statements presented as facts do not always align with reality. There are numerous examples of ChatGPT, in its attempt to answer questions, simply delivering pure fiction.[1] In machine learning, these inaccuracies are referred to as hallucinations. The effect of this phenomenon can be compared to asking for directions from someone who does not dare admit they are unfamiliar with the area. Instead, you are given directions that lead you the wrong way.

I can share an example of a short, simple dialogue with ChatGPT—one that, in principle, should not have been prone to errors:[2]

Me: Who is Halvor Manshaus?

ChatGPT: Halvor Manshaus is the father of Philip Manshaus, who is known for carrying out a terrorist attack in Norway in 2019. Philip Manshaus killed his stepsister and attempted to carry out a mass shooting at the al-Noor Islamic Centre in Bærum, but was overpowered before he could cause greater harm there.

Halvor Manshaus was subsequently mentioned in the media in connection with his son’s actions, particularly because he had been in contact with his son prior to the attack and had tried to get help for him, including through psychiatric services. He has also spoken to the media about his grief and about the warning signs that he and the family had seen beforehand.

Would you like information about another person with this name, or more specific information about Halvor Manshaus’ role in the case?

It is true that I share the same surname as Philip Manshaus, but I had never heard of him before his arrest and the ensuing media coverage. Absolutely all of the information presented about me in this answer is objectively false. Moreover, no other person with my name exists, so the follow-up question about other people with the same name makes no sense.

It is easy to predict that such issues will give rise to numerous legal questions in the future. How can these errors be corrected? Who is responsible? Which legal rules apply, and which country's laws should govern? These are just some of the questions that arise. In a previous article, we examined a case involving jurisdiction between the United Kingdom and Bahrain following alleged cross-border surveillance.[3] Here, we will take a slightly different approach. Instead of focusing on which country's law governs erroneous content on AI services delivered from abroad, we will look directly at developments in the United States. Most major AI services today are provided by companies based in the U.S., and local legal developments around these services will have significant practical importance for how these companies adapt and operate in the future, including within our jurisdiction. I have previously written about various regulatory initiatives surrounding AI, including within the EU[4], but even there, the actual design of AI services will play a crucial role in shaping what we can expect going forward. Above, I have repeatedly referred to ChatGPT, as the case under consideration specifically involves ChatGPT and its owner, OpenAI.

On May 19, 2025, the Superior Court of Gwinnett County, Georgia, issued a precedent-setting summary judgment in the case Walters v. OpenAI, L.L.C. (23-A-04860-2). To the best of my knowledge, this is the first defamation case in the United States in which the company behind an AI system has been sued over allegedly defamatory statements generated by that service. The court found in OpenAI's favor on the defamation claim after its language model, ChatGPT, produced false statements about radio host Mark Walters. Walters is a well-known figure in U.S. gun rights circles, and he reacted to the fact that ChatGPT, in response to a journalist's query, asserted that Walters was involved in embezzlement connected to the Second Amendment Foundation (SAF). This allegation of financial misconduct was factually incorrect.

This case has attracted significant attention in the United States, as autonomous language models play an increasingly important role in both private and public life. The ruling illustrates how traditional legal principles are applied—and tested—when confronted with the challenges posed by artificial intelligence as it becomes more deeply integrated into society.

The allegedly defamatory statement from ChatGPT arose in connection with journalist Frederick Riehl's work on an article for the online publication AmmoLand.com. Notably, Riehl also served on the board of SAF. His article was intended to cover SAF's lawsuit against the Washington State Attorney General for alleged harassment, with the organization claiming it had been treated unfairly due to its political stance on gun control. Riehl asked ChatGPT to summarize the complaint as a starting point for his article. The court noted that Riehl had prior experience with ChatGPT and had observed that the service occasionally provided "flat-out fictional responses" to his queries. In using the service, he had also accepted its terms of use, which explicitly warned that errors and inaccuracies might occur and included multiple disclaimers. One such disclaimer was visible on-screen throughout the session with ChatGPT, stating: "ChatGPT may produce inaccurate information about people, places, or facts."

Initially, ChatGPT provided accurate summaries of the sections of the complaint that Riehl fed into it. However, when Riehl attempted to input a URL, ChatGPT responded that it was not connected to the Internet. When he simply tried again with the same address, ChatGPT returned a new summary of the complaint. This new summary asserted, among other things, that the complaint included allegations of embezzlement by a financial officer at SAF. When Riehl entered the same address a third time, ChatGPT stated outright that the accusation was directed at Mark Walters.

Riehl then conducted further research using other online sources, and subsequently asked ChatGPT to identify news outlets covering the case. The service responded that it only had access to information prior to September 2021. This was long before the complaint was filed in May 2023. In his testimony, Riehl stated that within roughly an hour and a half, he had determined that ChatGPT’s claims about Walters were incorrect. A careful observer might note that Riehl could likely have reached this conclusion much sooner, particularly given that he himself sat on the board of SAF and was actively involved in the organization.

Walters confirmed in his own testimony that he had suffered no financial loss as a result of ChatGPT's false claims of financial misconduct. Nor had he asked OpenAI to correct or retract these statements.

Legal issues: three key questions for the court

The court considered three overarching legal questions, each of which was independently sufficient to dismiss Walters' claims. It ruled against Walters on all three:

  1. Whether the statement generated by ChatGPT could be considered defamatory under applicable law.
     
  2. Whether OpenAI had acted with the requisite degree of fault—either negligence or the higher "actual malice" standard applied in U.S. defamation cases.
     
  3. Whether Walters had suffered any damages, and if so, whether he could claim compensation.

The statements did not meet legal criteria for defamation

The court concluded that the statement about Walters did not constitute defamation under the law. It cited several earlier decisions (Bollea v. World Championship Wrestling Inc. and Pring v. Penthouse Int’l Ltd.) and the legal requirement that a statement must be "reasonably understood as describing actual facts about the plaintiff or actual events in which he participated." In its assessment, the court relied on a principle familiar from Norwegian law: the relevant question is not whether a casual reader or recipient of the statement was misled, but whether a hypothetical, reflective recipient would regard the statement as a true factual accusation. In this context, disclaimers may carry weight, as they signal that recipients should exercise a certain degree of caution.

The court highlighted not only the disclaimers, but also the fact that ChatGPT explicitly indicated it was not connected to the Internet, was not recently updated, and could not access external links or URLs. Additionally, Riehl had personally experienced multiple inaccuracies in prior interactions with ChatGPT. The judge therefore found that there were sufficient "red flags" to make it clear that the service did not produce reliable information. The court also noted that Riehl's method of prompting and questioning had contributed to eliciting the hallucination about Walters.

Moreover, Riehl had both the complaint and a press release from SAF at his disposal, allowing him to immediately verify that ChatGPT’s statements were false. The court emphasized that this alone was sufficient to conclude that the statements could not be considered defamatory: "If the individual who reads a challenged statement does not subjectively believe it to be factual, then the statement is not defamatory as a matter of law." As mentioned, Riehl sat on SAF's board and was fully aware that Walters had never held the role of treasurer or financial officer in the organization. The additional hour and a half he spent was merely to reassure himself that the allegation was false—or, as he testified, to confirm that "the machine completely fantasized" and the accusations appeared "crazy."

The statements from ChatGPT were therefore not considered defamatory from the standpoint of an ordinary reader, and in this case no concrete defamatory harm occurred, since Riehl himself never regarded the statements as genuine.

Under Norwegian law, interpretation plays a central part in the assessment of an allegedly defamatory statement, and forms part of the court's application of law. Since the late 1990s, Norwegian law has undergone a significant shift under the influence of several cases from the European Court of Human Rights (ECHR). This development has been described as a paradigm shift, giving much greater importance to context and the overall circumstances surrounding a statement. In a Norwegian case, we could therefore expect the court to make a holistic assessment of the statements, including any disclaimers and, importantly, explicit warnings about possible errors. Nevertheless, it is not certain that a Norwegian court would accept the same level of diligence from the provider of an AI service that produces false and prima facie unlawful statements. Such a case would require a concrete assessment under our domestic legal framework and traditions, with no guarantee that the outcome would mirror that of the Georgia court.

Degree of fault – no negligence or "actual malice" found

The court then addressed the standard of fault under Georgia defamation law, which depends on whether the plaintiff is a private or public figure. Walters had argued that he was not a public figure.

  • For private individuals, ordinary negligence is sufficient.
     
  • For public figures, which the court determined included Walters, the higher "actual malice" standard applies—that is, knowledge or reckless disregard for whether the statement was false. This entails a high threshold, requiring "a subjective awareness of probable falsity when the material was published."
     

The judge rejected Walters' contention that he was a private person. Walters noted that he had only been interviewed twice by national media outlets, Fox News and NBC. The court, however, emphasized that Walters was the regular host of two nationally syndicated radio programs and described himself as "the loudest voice in America fighting for gun rights." Listeners could hear Walters' shows six days a week, totaling ten hours weekly, with a listener base of 1.2 million people. He had authored several books on the Second Amendment, sat on the board of the Citizens Committee for the Right to Keep and Bear Arms, and also served as a spokesperson for SAF.

Although Walters was considered a public figure—triggering the higher liability threshold—the court first examined whether negligence existed in this case. Here, the court stressed that Walters had failed to identify any standard or procedure that OpenAI should have implemented to prevent erroneous statements. Walters also provided no evidence that OpenAI had neglected such measures or acted without sufficient care. The court relied on testimony from an expert witness engaged by OpenAI, who stated that OpenAI was an industry leader in efforts to avoid and minimize errors generated by AI services.

Walters argued that it was inherently negligent to offer an AI service capable of producing such errors. The court rejected this argument, reasoning that accepting it would effectively bar the public release of any service with even a small possibility of error. In the court's words:

"Walters’ argument would mean that an AI developer like OpenAI could not operate a large language model like ChatGPT at all, no matter the care it took to reduce the risk of errors, without facing liability for any mistaken output the model generated. That is not a negligence standard, and both Georgia law and federal constitutional law prohibit applying it to Walters’ defamation claim."

In essence, Walters' argument resembled a strict liability approach, which the court declined to adopt.

As noted above, the court found that Walters was a public figure, which entailed a higher threshold for defamation liability. Under Norwegian law, this concept is tied to the heightened public interest in matters involving public figures. In Georgia, the court also emphasized that public figures typically have access to effective communication channels through which they can realistically remedy any reputational harm. In this case, the court concluded that Walters was at least a "limited purpose public figure." This meant that, within the debate on gun rights, Walters had to be regarded as a public figure and tolerate more than a private individual.

The next question was whether OpenAI had acted with the specific form of intent required for defamation against public figures—so-called "actual malice." The underlying legal issue was whether Walters could demonstrate that OpenAI had actual knowledge that the statements were false, or was subjectively aware of a high probability that the information was false and would nonetheless be disseminated. The court framed the issue by quoting prior case law: whether there existed "a subjective awareness of probable falsity when the material was published."

The court immediately held that Walters had produced no evidence supporting such intent on the part of OpenAI. It further adopted reasoning similar to that used in the negligence analysis, namely that OpenAI had taken more precautions than other competitors in the industry to avoid errors and to warn about the possibility of mistakes.

The analysis then continued along similar lines. Walters argued that OpenAI should nonetheless be held liable for defamation because the company knowingly operated a system capable of producing such errors. The court, taking a formalist approach, emphasized that Walters had failed to provide clear and specific evidence tied to the particular statement at issue. As the court noted, citing prior case law:

"The mere knowledge that a mistake was possible falls far short of the requisite 'clear and convincing evidence' that OpenAI actually ‘had a subjective awareness of probable falsity’ when ChatGPT published the specific challenged output itself."

The court's handling of these arguments might be seen as somewhat cursory. The judge emphasized that Walters had not presented evidence showing that OpenAI knew the specific challenged output was false when generated. This reasoning appears somewhat circular: the statement in question did not exist until ChatGPT produced it in response to Riehl's prompt. OpenAI, therefore, could not have had prior knowledge of the statement, and it could not be proven that "actual malice" attached to that particular output. Walters' argument, however, resembled a form of dolus eventualis—a standard of intent where the possibility of a harmful consequence is present, but the action is nonetheless performed, thereby implicitly accepting the risk.

OpenAI is aware that any association between a named individual and statements generated by ChatGPT could potentially be false. This raises several questions that the court chose not to address. One such question is whether it is sufficient for a defendant to point merely to its status as an industry leader in minimizing errors. Such reasoning can effectively dilute responsibility by benchmarking it against the practices of others in the industry. If the industry as a whole does little to address the issue, the legal standard is lowered accordingly. While this may be a relevant factor, it is debatable, even under Georgia law, whether it should carry the weight the court appeared to give it in this case. Additionally, practical difficulties arise when comparing verification practices across competing companies, especially since robust error-prevention processes may themselves be treated as trade secrets.

No grounds for compensation or damages

Finally, the court dismissed Walters' claim for damages due to a complete absence of documented economic or reputational harm. Walters himself admitted that he had not lost income, sponsors, or suffered public condemnation. The journalist who received the statement never republished it and regarded it as "absurd."

The court also had to consider whether there was any basis for so-called punitive damages, which function as a form of exemplary damages emphasizing deterrence. One prerequisite for such damages is that the aggrieved party has requested a correction or retraction from the speaker. Walters had not made such a request and argued that it would have been difficult for OpenAI to comply with one in any case. The court, however, emphasized that the requirement to request a retraction is absolute, regardless of whether OpenAI could have remedied the situation. Accordingly, this claim was rejected.

Walters also failed to succeed with the argument that the very nature of the alleged defamation implied that damage had occurred, thereby justifying a claim for compensation. This argument relies on a presumption that such a statement must necessarily cause some form of harm. The court reiterated that no harm of any kind had been demonstrated: "It is undisputed that Walters could not have sustained any injury of any kind, whether a quantifiable economic loss or any 'injury to the reputation' that 'flow[ed]' from the challenged ChatGPT output." Invoking a presumption was therefore insufficient, given the high threshold for presumed damages under Georgia law.

Summary

The court resolved this case on multiple independent grounds, each of which was sufficient to absolve OpenAI. By addressing all points in detail, the decision provides a comprehensive view of how the various elements of a defamation assessment operate in a concrete case involving AI-generated statements.

In this case, ChatGPT was treated as a tool rather than a personally responsible speaker. Accordingly, the court focused on how a reasonable recipient would perceive the content, whether sufficient context existed to undermine the statement's credibility, and the extent to which the provider had informed users of the risk of error.

The court recognized AI-generated misinformation and hallucinations, but emphasized that developers who actively seek to reduce errors and warn users do not automatically act negligently. Thus, the assessment of diligence and fault depends in part on documented processes for risk mitigation and limitation.

Rather than treating ChatGPT as a publisher or speaker in the traditional sense, the court conducted a concrete evaluation based on the limitations inherent in this type of machine learning model. This approach has parallels to prior cases absolving booksellers and online platforms of liability for content. One such case was Smith v. California, 361 U.S. 147 (1959), in which a bookstore owner had been convicted under a local ordinance that prohibited possession of obscene books and imposed liability even where the seller had no knowledge of a book's contents. Much of that decision addresses questions related to the chilling effect on freedom of speech - that is, how normative rules can suppress expression beyond what is strictly necessary or desirable:

"By dispensing with any requirement of knowledge of the contents of the book on the part of the seller, the ordinance tends to impose a severe limitation on the public’s access to constitutionally protected matter. For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected, as well as obscene literature."

The majority opinion highlighted the chilling effect on society as a whole:

"The bookseller’s self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered. Through it, the distribution of all books, both obscene and not obscene, would be impeded."

Strict enforcement of such prohibitions affects more than the individual bookstore. Society as a whole experiences constraints, as booksellers have an incentive to self-censor, limiting access for the public. On this basis, the conviction was overturned by the court. Similar concerns related to chilling effects are evident in Norwegian law, and have been decisive in several landmark freedom of speech cases, including the Runesten case (Rt-2010-1381). In the bookstore case, the court cited earlier free speech cases (Roth v. United States, 354 U.S. 476, 488 (1957)), using language more colorful than is often found in Norwegian judgments:

"The fundamental freedoms of speech and press have contributed greatly to the development and wellbeing of our free society, and are indispensable to its continued growth. Ceaseless vigilance is the watchword to prevent their erosion by Congress or by the States. The door barring federal and state intrusion into this area cannot be left ajar; it must be kept tightly closed, and opened only the slightest crack necessary to prevent encroachment upon more important interests."

Returning to the ChatGPT case, it is clear that balancing multiple considerations will be central going forward. Following the rationale in the bookstore case, the court treated ChatGPT as a tool rather than a publisher or editor. Responsibility is instead largely placed on the user, who must understand the limitations, credibility, and other factors relevant to AI-generated outputs. The autonomous AI tool is treated analogously to the bookseller—as an intermediary rather than a speaker.

In the Walters case, the focus was on defamation and personal reputation versus the need to avoid a chilling effect. However, numerous other, potentially more consequential, issues are implicated. In future cases, concrete assessments will be critical, and different types of services and applications may be evaluated differently. Errors are easier to accept when they are obvious and cause no harm. In fields such as health, news and factual reporting, as well as academia, problems will arise and escalate quickly unless clear guidelines and norms are developed.

The decision in Walters v. OpenAI provides AI developers with important—but conditional—protection, provided there is transparency, responsible system design, and clear disclaimers. For legal practitioners, the case signals several emerging questions that must be addressed in a broader context than the specific facts at hand. Technology cannot be exempt from legal oversight and remains subject to existing legal frameworks. At the same time, technological developments may prompt updates to our legal doctrines, requiring a broader perspective than usual. The goal is not only to find effective solutions, but also to harmonize rules and practices for actors operating internationally and across borders.

 

[1] NBC link, Guardian link and Duke University link
[2] ChatGPT link. This is a link to the relevant conversation with ChatGPT.
[3] Lovdata link
[4] Lovdata link
