Chatbot makes dangerous error

Posted: Mon May 29, 2023 9:57 am
by MaxPC
A New York lawyer is facing a court hearing of his own after his firm used AI tool ChatGPT for legal research.

A judge said the court was faced with an "unprecedented circumstance" after a filing was found to reference example legal cases that did not exist.

The lawyer who used the tool told the court he was "unaware that its content could be false".

ChatGPT creates original text on request, but comes with warnings it can "produce inaccurate information".
...
"Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," Judge Castel wrote in an order demanding the man's legal team explain itself.

Over the course of several filings, it emerged that the research had not been prepared by Peter LoDuca, the lawyer for the plaintiff, but by a colleague of his at the same law firm. Steven A Schwartz, who has been an attorney for more than 30 years, used ChatGPT to look for similar previous cases.

https://www.bbc.com/news/world-us-canada-65735769

Re: Chatbot makes dangerous error

Posted: Mon May 29, 2023 10:02 am
by Robert
I have known computers will lie for years. Ever since I had it say, "Hello handsome" every time I turn on the machine.

Re: Chatbot makes dangerous error

Posted: Mon May 29, 2023 10:04 am
by mike
The good thing about this is that it will have the effect of making people more wary of things like legal arguments and studies, which can easily be created out of thin air by someone skilled at writing or by AI engines. Who knows how much legitimate-sounding content is routinely accepted as credible just because it sounds authoritative.

Re: Chatbot makes dangerous error

Posted: Mon May 29, 2023 10:05 am
by Szdfan
Yup. This is one of the issues with AI -- it makes mistakes and because it sounds authoritative, people assume it's correct.

I've had ChatGPT spit out information that I know isn't true.

Always, ALWAYS verify information from ChatGPT.

Re: Chatbot makes dangerous error

Posted: Mon May 29, 2023 10:05 am
by Szdfan
mike wrote: Mon May 29, 2023 10:04 am The good thing about this is that it will have the effect of making people more wary of things like legal arguments and studies, which can easily be created out of thin air by someone skilled at writing or by AI engines. Who knows how much legitimate-sounding content is routinely accepted as credible just because it sounds authoritative.
Information literacy is a skill that needs to be taught.