Last week we shared the history of Artificial Intelligence, today we take a look at the role of AI in eDiscovery.

Most of the leading eDiscovery platforms offer multiple types of AI models for use in Technology Assisted Review. These technologies are often available at little or no additional cost. That’s the good news. The challenge can often come with matching the correct AI model to your specific need.

For example, if you’re poring over hours of surveillance footage, a product that automatically detects and redacts Personally Identifiable Information (PII) may not be much help.

Fortunately, the new generation of AI tools is far more universal in its impact, benefiting both sides of a case.

The Beginnings of AI and Document Review

Since the mid-2000s, several different iterations of AI have been available to assist attorneys with document review. Understanding the value and the limitations of each is a helpful way to appreciate the iterations that followed.

Technology Assisted Review (TAR) 1.0

Technology Assisted Review (TAR) involves the interplay of humans and computers to identify the documents in a collection that are responsive to a production request or, for example, those that should be withheld. The early versions of these algorithms involved a human subject matter expert manually reviewing a seed set of documents for responsiveness. Those decisions were used to train the algorithm, which then identified similar documents throughout the remainder of the collection and applied those decisions across the entire document universe.
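
To make that workflow a bit more concrete, here is a minimal, hypothetical sketch in Python of a TAR 1.0-style process: a classifier is trained once on a human-coded seed set of extracted text and then applied to the rest of the collection. The libraries, function names, and label scheme are illustrative assumptions, not a description of any particular platform’s implementation.

```python
# Illustrative sketch of a TAR 1.0-style workflow (not any vendor's actual implementation).
# Assumes each document is plain extracted text; seed_labels are 1 (responsive) / 0 (not responsive).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def tar_one_point_oh(seed_docs, seed_labels, remaining_docs):
    # Convert extracted text to numeric features (TAR 1.0 largely ignored metadata).
    vectorizer = TfidfVectorizer(stop_words="english")
    X_seed = vectorizer.fit_transform(seed_docs)

    # Train once on the human-reviewed seed set; after this, the model is effectively "locked in".
    model = LogisticRegression(max_iter=1000)
    model.fit(X_seed, seed_labels)

    # Apply the seed-set decisions across the rest of the collection.
    X_rest = vectorizer.transform(remaining_docs)
    return model.predict(X_rest)  # 1 = predicted responsive, 0 = predicted not responsive
```

The key point the sketch illustrates is the one-way nature of TAR 1.0: once the seed set is coded and the model is trained, there is no mechanism for later coding decisions to improve it.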

While this version of TAR was helpful in grouping, or segmenting, similar documents, it still left plenty to be desired. Depending on the desired accuracy rate, these technologies often called for extremely large seed sets to train the algorithm.

Also, many early algorithms learned from only one or two reviewers and could not be adjusted once programmed. Once you trained the AI, it was locked in, and it made grouping and clustering decisions based mostly on each document’s extracted text, largely ignoring additional sources of metadata when making determinations.

Perhaps the biggest drawback of this iteration of AI was that you had to predetermine your acceptable error rate. As you can imagine, in the mid-2000s, settling on an acceptable error rate posed a significant challenge. For instance, achieving a 65% accuracy rate might only require reviewing 10-15% of the documents, while reaching a 75% accuracy rate could necessitate reviewing closer to 50%. For 95% accuracy? You had to be prepared to review 92% of the documents.

It’s difficult to agree to a 50% accuracy rate in a negotiation, but early studies showed that human review (for responsiveness and relevance) is at best 55% accurate. In some studies, those numbers are far lower, well below 50%. Dr. Maura Grossman, J.D., is one of the original experts in this area and a flag bearer for researching and advocating for the responsible use of technology in litigation. I’d encourage anyone interested in a deeper knowledge of this topic to seek out her work.

Ultimately, TAR 1.0’s largest headwind may have been that it was the first technology to force attorneys and courts to face the statistical reality of human document review.

To accept the limitations of TAR 1.0, we first had to accept the rather large limitations of our own review abilities.

Continuous Active Learning (CAL) Models

Continuous Active Learning (CAL) models (sometimes referred to as TAR 2.0) were introduced in the mid-2010s and are now widely used across the industry. CAL’s underlying algorithms begin assessing the data and calculating relevancy scores after only a few dozen documents have been reviewed. With the introduction of CAL, we no longer had to predetermine accuracy rates or conduct enormous initial reviews.

As soon as the reviewers begin making coding decisions, these models go to work, assessing the similarity of each document in the database against the decisions of the reviewers. With each coding decision, the algorithm evaluates itself against the decision of the reviewer, continuously learning and improving.

The end result is usually expressed as a relevancy ranking, typically on a scale of 1 to 100. A document scoring 100 is almost assuredly relevant, while one with a score of 10 probably is not. This doesn’t eliminate the need for human review; rather, CAL prioritizes the review process so you can spend your early review time on the most relevant documents.
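
As a rough illustration only, the Python loop below sketches the CAL idea: after each batch of human coding decisions, the model retrains on everything coded so far and rescores every unreviewed document on a 1-100 style relevancy scale, so the next batch served for review is the highest-ranked material. The batch size, libraries, and the `get_human_decision` callback are assumptions made for the sketch, not any product’s actual behavior.

```python
# Illustrative CAL-style loop (a sketch, not any platform's actual algorithm).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def cal_review(documents, get_human_decision, batch_size=50):
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(documents)

    reviewed, labels = [], []                 # indices coded so far and their 1/0 decisions
    unreviewed = list(range(len(documents)))
    next_batch = unreviewed[:batch_size]      # CAL needs only a few dozen decisions to begin scoring

    while unreviewed:
        for idx in list(next_batch):
            labels.append(get_human_decision(documents[idx]))  # hypothetical reviewer callback
            reviewed.append(idx)
            unreviewed.remove(idx)

        if not unreviewed or len(set(labels)) < 2:
            # Need at least one responsive and one non-responsive decision before training.
            next_batch = unreviewed[:batch_size]
            continue

        # Retrain on every decision made so far, then rescore the rest of the collection.
        model = LogisticRegression(max_iter=1000).fit(X[reviewed], labels)
        scores = model.predict_proba(X[unreviewed])[:, 1] * 100  # 1-100 style relevancy score

        # Serve the highest-scoring unreviewed documents as the next review batch.
        ranked = sorted(zip(unreviewed, scores), key=lambda pair: pair[1], reverse=True)
        next_batch = [idx for idx, _ in ranked[:batch_size]]

    return dict(zip(reviewed, labels))        # every document ends up with a human decision
```

The contrast with the earlier sketch is the loop itself: instead of a single locked-in training pass, every new coding decision feeds back into the model and reorders what the reviewers see next.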

While an improvement over earlier versions of AI, these models still rely on humans to train the technology. The obvious limitation being the humans.

In our next blog post, we will explore how you can leverage the power of new technologies like Generative AI / Large Language Models (LLMs), Natural Language Processing, and Computer Vision / Photo and Video Intelligence in your review process.

In the meantime, we stand ready to help. Contact us to learn more.