We’ve recently discussed the history of Artificial Intelligence, AI’s role in eDiscovery, and how you can leverage the power of today’s AI in your review process.

Today we will examine the limitations of an AI model like ChatGPT.

We all know that when something seems too good to be true, it usually is. That’s not to say I’m not a true believer in AI making the eDiscovery process much better. Rather, it’s a cautionary tale that no solution is perfect.

When it comes to generative AI models like ChatGPT, that limitation has become known as a “hallucination.” An AI model is said to have hallucinated when it returns incorrect information in answer to your question. The limitation is that the model only knows what its source data says; it doesn’t know whether that information is true.

In a recent case, two New York lawyers were sanctioned for submitting a legal brief that included six fictitious case citations generated by ChatGPT. In his decision, U.S. District Judge P. Kevin Castel clearly stated there was nothing “inherently improper” about lawyers using AI for assistance, but he said the lawyer ethics rules “impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

I share this case as a cautionary tale, not to discourage using ChatGPT for legal research, but to warn you against blindly trusting the answers this type of technology gives you. The AI can only reveal what information is present; it cannot validate that the information is true.

AI models can be a game changer in your practice, expediting the review process and helping you make better-informed decisions. Take advantage of these powerful tools before your adversaries beat you to it.

We stand ready to help. Contact us to learn more.