
Legal Nightmares: A Lawyer’s Tale of Using ChatGPT in a Trial Gone Wrong

AIs are good, yes, but they are even better at lying.

Pedro Domínguez

Yes, ChatGPT is an artificial intelligence that lets us do practically anything: school assignments, movie scripts, political speeches, poems… But, for all the versatility of OpenAI’s language model, it’s not the first time we’ve told you that AI loves to lie.


Because, yes, when ChatGPT doesn’t have enough information to answer a question, the AI tends to mix up facts and make things up to get by. This “habit,” combined with a lack of professional diligence, can lead to situations like a recent trial in the United States, in which ChatGPT invented several legal cases.

As reported by The New York Times, defense lawyers for the Colombian airline Avianca pointed out in the trial, held in New York, that the plaintiff’s legal team had cited nonexistent legal cases, complete with fake citations. The presiding judge, Kevin Castel, confirmed this, declaring that at least six of the cases submitted were invented.

Steven A. Schwartz, the plaintiff’s lawyer, admitted in an affidavit that he had used ChatGPT in his research and had only verified the information by asking the chatbot itself about its sources and whether it was lying. The AI doubled down, firmly insisting that the cited cases were real and even claiming they could be found in legal research services such as LexisNexis and Westlaw.

OpenAI itself warns that ChatGPT may generate “incorrect information”.

In one of the invented cases, ChatGPT took data from a real case but completely changed the names of those involved, the details, and even the date of the case. Schwartz, who “was unaware of the possibility that the content could be false,” now “deeply regrets using generative artificial intelligence to supplement the legal research conducted here and will never do so in the future without absolute verification of its authenticity.”

Judge Castel is now weighing what sanctions to impose on the plaintiff’s lawyers. Another lawyer involved in the case, Peter LoDuca, will also have to appear before the judge to explain himself.


If there’s one thing the Terminator saga has taught us, it’s never to trust artificial intelligence. So, if you use ChatGPT to help you with any work, always verify the accuracy of the information, or you may run into problems down the line. Consider yourselves warned.

Some of the links in this article are part of affiliate campaigns and may generate revenue for Softonic.

Pedro Domínguez

Publicist and audiovisual producer in love with social networks. I spend more time thinking about which video games I’ll play than actually playing them.
