For years, keyword search has been widely used in e-discovery to divine the core set of documents used in litigation. For most firms, it remains the main tool in their e-discovery arsenal. As they kick off litigation, counsel will still try to agree on which keywords to use—before understanding the full document set.
I’ve been in litigation support for 30 years and have been a proponent of keyword search in our firm’s e-discovery practice. I’d even go so far as to call myself a fanatic. When I started to hear about technology-assisted review, I had to see what it was all about. If the new technology was as powerful—and accurate—as I was hearing, it could positively impact Procopio’s ability to serve clients.
I learned that technology-assisted review, or TAR, is a process for selecting and ranking documents using a computerized system that incorporates lawyers’ decisions on a smaller set of documents, then applies that logic to the remaining document pool. I sought out demonstrations of TAR, many of which used data from the Enron case. What I saw was enough to sell me on the idea, or at least convince me to try it out. The trick was to make the case for technology-assisted review at a diehard keyword search firm.
It will come as no surprise that I was met with skepticism. Here are four of the most common doubts I heard, and how I addressed them—including an experiment I conducted to generate data to support the cause.
1. “We’ve always used keyword search. Why change?”
We’ve used keyword search for 20 years. We also used to print everything out and code it by hand—remember Bates stamps? (Anyone?) Since the 1980s and 1990s, the sheer volume and complexity of data have grown to staggering levels. Technology got us into this mess; now technology has to get us out. Our fear of artificial intelligence is really a fear of the unknown, and deeper understanding helps us build trust in technology we haven’t used before.
2. “We need eyes on every single document. A computer can’t replace a trained reviewer.”
Studies have shown that manual review is far less accurate than we thought. Even if it were our best option, we can’t review everything manually: there is too much to possibly review, and data volumes will only grow. And to clarify, using technology-assisted review doesn’t mean eliminating trained reviewer involvement. The process relies on a subject-matter expert manually coding a subset of documents; the system then applies those coding decisions to rank the remaining document pool. The technology is intended to help eliminate documents that aren’t relevant and narrow the pool for manual review. Of course, you’ll still review any documents you intend to produce to the other side. Human expertise is still a big part of the equation.
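To make the seed-and-rank idea concrete, here is a minimal sketch of the underlying concept using a toy Naive Bayes text classifier. This is an illustration only, not the actual algorithm behind any commercial TAR product; the document texts, labels, and function names are all hypothetical.

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(seed_docs):
    # seed_docs: (text, label) pairs coded by a subject-matter expert
    word_counts = {"relevant": Counter(), "not_relevant": Counter()}
    label_counts = Counter()
    for text, label in seed_docs:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def score(text, word_counts, label_counts):
    # Log-odds that a document is relevant (Naive Bayes, add-one smoothing)
    vocab = set(word_counts["relevant"]) | set(word_counts["not_relevant"])
    totals = {label: sum(c.values()) for label, c in word_counts.items()}
    log_odds = math.log(label_counts["relevant"] / label_counts["not_relevant"])
    for w in tokenize(text):
        p_rel = (word_counts["relevant"][w] + 1) / (totals["relevant"] + len(vocab))
        p_not = (word_counts["not_relevant"][w] + 1) / (totals["not_relevant"] + len(vocab))
        log_odds += math.log(p_rel / p_not)
    return log_odds

# The expert codes a small seed set...
seed = [
    ("merger agreement draft attached", "relevant"),
    ("board approved the merger terms", "relevant"),
    ("lunch menu for friday", "not_relevant"),
    ("office closed monday holiday", "not_relevant"),
]
wc, lc = train(seed)

# ...and the model ranks the untouched remainder of the pool for review.
pool = ["revised merger agreement terms", "friday holiday lunch plans"]
ranked = sorted(pool, key=lambda d: score(d, wc, lc), reverse=True)
print(ranked[0])  # the merger document rises to the top
```

The point of the sketch is the workflow, not the math: human judgments on a small set drive the ranking of everything else, and humans still review what the model surfaces.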
3. “Analytics technology is a pricey investment.”
An investment in new tools will cost money and time for adequate training, but the ROI is huge. Also, consider the opportunity cost of preserving the status quo. Insisting on manual review is costly—especially in cases with a high volume of data. There’s also the risk of not identifying documents relevant to your case, and the resulting cost to you and your client.
4. “How do we know it really works?”
The mathematical equations behind analytics engines are admittedly complicated. We see demonstrations of the technology with sample data sets and statistical reports of the results—but using unfamiliar data for a complex process makes it hard to tell whether the technology is working. There are plenty of studies and rulings to validate the use of TAR in general, but it’s a fair point—and it inspired me to investigate. The key would be to test the technology in a familiar situation.
I wanted to create a test that would be interesting and relatable for my colleagues to help them better understand the technology. What kind of data set would be approachable and familiar? Storybooks.
Imagine you were carrying 2,190 loose pages from Gone with the Wind, The Wizard of Oz, The Grapes of Wrath, Moby Dick, Alice in Wonderland, On the Origin of Species, and Pride and Prejudice, and then dropped them. How long would it take to find all of the pages related to Alice in Wonderland?
In this scenario, I’d already know the answers—which pages matched which book—so I could test the analytics technology underlying TAR to see how accurately the electronic pages would be sorted.
I downloaded electronic copies of each book and tagged them correctly for tracking, then scrambled the files. I decided to use Relativity Analytics and Assisted Review to attempt to correctly reassemble the information, employing keyword search as a comparison. I compiled my results and began presenting the experiment to my colleagues, where it received a warm reception. The storybook angle helped lower the barrier to understanding and opened the conversation about a complex technology.
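Because every page’s true book was known in advance, the results could be scored directly. Here is a minimal sketch of that scoring step with a handful of hypothetical predicted labels; in the real experiment, each of the 2,190 pages carried its source-book tag.

```python
# Ground-truth vs. predicted book labels for a few pages (hypothetical data).
truth     = ["alice", "alice", "oz", "moby", "alice", "oz"]
predicted = ["alice", "oz",    "oz", "moby", "alice", "oz"]

target = "alice"  # the book we are trying to find, as in the demonstration

true_pos  = sum(1 for t, p in zip(truth, predicted) if t == target and p == target)
false_pos = sum(1 for t, p in zip(truth, predicted) if t != target and p == target)
false_neg = sum(1 for t, p in zip(truth, predicted) if t == target and p != target)

recall    = true_pos / (true_pos + false_neg)   # share of Alice pages found
precision = true_pos / (true_pos + false_pos)   # share of flagged pages that are Alice
print(f"recall={recall:.2f} precision={precision:.2f}")
```

Recall and precision are the same statistics that TAR validation reports rely on, which is why a known-answer test like this one makes those reports easier to trust.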
Are you thinking about setting up a similar demonstration of technology for your colleagues? I’ll leave you with a few tips:
- Eliminate as much complexity as possible. Providing something people can relate to simplifies the demonstration and keeps attention on the tool itself. We also could have revisited an old case, where we knew the outcome, to see how the new tool fared.
- Make the sandbox real. Using the tool I wanted to use long term brought the experiment to life and served as valuable training at the same time.
- Be open and inclusive. Bring your team along with you. No one wants technology forced upon them, so be inclusive about building understanding of the tool and its potential business benefits.