Lawyer in hot water after using AI to present made up information: 'incompetent'

FOX News 

A New York lawyer could face discipline after it was discovered that a case she cited was generated by artificial intelligence and did not actually exist. The 2nd U.S. Circuit Court of Appeals referred lawyer Jae Lee to its grievance panel last week after discovering she had used OpenAI's ChatGPT to research prior cases for a medical malpractice lawsuit but failed to confirm whether the case she was citing actually existed, according to a report from Reuters.

The attorney included the fictitious state court decision in an appeal for her client's lawsuit, which claimed that a Queens doctor botched an abortion, according to the report. The court ordered Lee to submit a copy of the decision, which the lawyer later admitted she was "unable to furnish."

The lawyer's conduct "falls well below the basic obligations of counsel," the 2nd U.S. Circuit Court of Appeals concluded in its disciplinary review, which was sent to Lee. Lee would later admit to using a case that was "suggested" to her by ChatGPT, a popular AI chatbot, and to failing to verify the results herself.

The lawyer used the popular application even though experts have warned against such practices, noting that AI is a relatively new technology that is well-known for "hallucinating" false or misleading results.
