New York – Judges around the world are dealing with a growing problem: legal filings generated with the help of artificial intelligence and submitted with errors, such as citations to cases that do not exist, according to lawyers and court documents.
The trend serves as a warning to people learning to use AI tools at work. Many employers want to hire workers who can use technology to help with tasks such as conducting research and writing reports. As teachers, accountants, and marketing professionals begin to interact with chatbots and AI assistants to generate ideas and improve productivity, they are also discovering that the programs can make mistakes.
A French data scientist and lawyer, Damien Charlotin, has cataloged at least 490 court filings in the past six months that contained “hallucinations,” which are AI responses that contain false or misleading information. The pace is accelerating as more people use AI, he said.
“Even the most sophisticated player can have a problem with this,” Charlotin said. “AI can be a blessing. It’s wonderful, but there are also these pitfalls.”
Charlotin, a senior researcher at HEC Paris, a business school located just outside France’s capital, created a database to track cases in which a judge ruled that generative AI produced hallucinated content, such as fabricated case law and fake citations. Most of the rulings are from U.S. cases in which plaintiffs represented themselves without a lawyer, he said. While most judges issued warnings about the errors, some imposed fines.
But even high-profile companies have filed problematic legal documents. A federal judge in Colorado ruled that a lawyer for MyPillow Inc. filed a brief containing nearly 30 flawed quotes as part of a defamation case against the company and founder Michael Lindell.
The legal profession is not the only one struggling with the weaknesses of AI. AI overviews that appear at the top of web search results pages often contain errors.
And AI tools also raise privacy concerns. Workers in all industries should be careful about the details they upload or enter into prompts to ensure they are safeguarding confidential employer and client information.
Legal and employment experts share their experiences with AI mistakes and outline pitfalls to avoid.
Think of AI as an assistant
Don’t trust AI to make big decisions for you. Treat the tool like an intern: assign it tasks, but always review its completed work.
“Think of AI as augmenting your workflow,” said Maria Flynn, CEO of Jobs for the Future, a nonprofit focused on workforce development. It can act as an assistant for tasks like composing an email or researching a travel itinerary, but don’t think of it as a substitute that can do all the work, she said.
When preparing for a meeting, Flynn experimented with an internal AI tool, asking it to suggest discussion questions based on an article she shared with the team.
“Some of the questions it proposed weren’t really the right context for our organization, so I was able to give it some of that feedback…and it came back with five very thoughtful questions,” she said.
Check accuracy
Flynn has also found problems in the output of the AI tool, which is still in a pilot stage. She once asked it to gather information about the work her organization had done in several states. But the AI tool treated completed work and funding proposals as if they were the same.
“In that case, our AI tool couldn’t identify the difference between something that had been proposed and something that had been completed,” Flynn said.
Fortunately, she had the institutional knowledge to recognize mistakes. “If you’re new to an organization, ask your coworkers if they find the results accurate,” Flynn suggested.
While AI can help with brainstorming, relying on it to provide objective information is risky. Take the time to verify the accuracy of what the AI generates, even if it’s tempting to skip that step.
“People are assuming because it sounds so plausible that it’s right, and it’s convenient,” said Justin Daniels, an Atlanta-based attorney and shareholder at the Baker Donelson law firm. “Having to go back and check all the quotes, or when I look at a contract that the AI has summarized, I have to go back and read what the contract says, that’s a little inconvenient and time consuming, but that’s what you have to do. As much as you think the AI can replace that, it can’t.”
Beware of note takers
It may be tempting to use AI to record and take notes during meetings. Some tools generate useful summaries and outline action steps based on what was said.
But many jurisdictions require consent from participants before recording conversations. Before using AI to take notes, pause and consider whether the conversation should be kept privileged and confidential, said Danielle Kays, a Chicago-based partner at law firm Fisher Phillips.
Consult with colleagues in legal or human resources departments before implementing a note taker in high-stakes situations, such as investigations, performance evaluations or legal strategy discussions, she suggested.
“People are saying that with the use of AI there should be multiple levels of consent, and that’s something that’s making its way through the courts,” Kays said. “That’s an issue that I would say companies should continue to watch as it’s litigated.”
Protect confidential information
If you’re using free AI tools to draft a memo or a marketing campaign, don’t feed them identifying information or corporate secrets. Once you’ve uploaded that information, others using the same tool may be able to find it.
That’s because when other people ask an AI tool questions, it will search available information, including the details you revealed, while it constructs its answer, Flynn said. “It doesn’t discern whether something is public or private,” she added.
Seek education
If your employer doesn’t offer AI training, try experimenting with free tools like ChatGPT or Microsoft Copilot. Some universities and technology companies offer classes that can help you develop your understanding of how AI works and the ways it can be useful.
A course that teaches people how to build the best AI prompts or hands-on courses that provide opportunities to practice are valuable, Flynn said.
Despite potential problems with the tools, learning how they work can be beneficial at a time when they are ubiquitous.
“The biggest potential pitfall in learning to use AI is not learning to use it at all,” Flynn said. “We are all going to need to master AI, and taking the first steps to build your familiarity, your literacy, your comfort with the tool is going to be vitally important.”