by Constantine Pappas on April 18, 2014
At LTNY 2014, members of the advice team helped staff the Relativity Showcase throughout the show. Several Relativity users were also on hand to share their experiences with the software.
Paul Laven of Merrill Corporation was one of those users. He shared his story with attendees and the kCura staff and, after the show, sat down with Constantine Pappas to further discuss his experiences with Relativity.
Constantine: You’re one of our few Analytics Experts. Tell us about that experience.
Paul: It was really exciting for me to have that opportunity, and Merrill was very supportive in letting me take the exam. It was a lot of work—and it’s one of the toughest exams I’ve ever taken—but it covers such a volume of product information that it has to be that way.
I started using analytics in 2010, first with some clustering for a large defense contract with several million documents. From there, I read a lot about it to learn how I could use it to offer value to my clients. At the time, we were looking at the emergence of computer-assisted review, too, so I was able to pilot a couple of those early cases. It was exciting, and I was fortunate to work with clients who were very interested in using the software to see what it could do to help them get through their data.
I’ve been using Analytics whenever I can since then. All of us are facing the Big Data problem, and Analytics is really about making that less painful for our clients. It’s tough to collect millions of documents and face the daunting task of hiring reviewers and getting through the documents, so I like putting this technology to work to cut through that.
Since that start back in 2010, you’ve worked with Relativity Assisted Review often in recent years. How many projects—and what kinds of projects—have you worked on?
I started working with computer-assisted review way back in 2011, when it was just emerging. I’ve been involved in several dozen in one way or another, and I’ve been able to use it in a lot of ways.
When I work with a client who’s new to it, it’s easy to get them started with the technology by introducing it as a QC measure for validating reviewers’ work. I can say confidently that every time I’ve used Assisted Review as a QC measure, it’s been smarter than the reviewers who looked at the documents first. That’s always great to show off to reluctant clients.
The QC option is also helpful when clients face projects with very large productions that may not receive a really thorough second-level review. In cases like that, we’ll use sets of known privileged items to train Assisted Review. The system then identifies privileged documents we hadn’t found on our own—because they were worded slightly differently, or for any number of other reasons. In fact, during one of those projects, Assisted Review identified an entirely new lawyer in a case during a QC pass on privilege. That ended up making a huge difference.
I’ve also used Assisted Review to organize data early in a case. For example, if you have a case with a tight deadline and the client is uncomfortable diving into a complete Assisted Review project, even running just a couple of training rounds can paint a great early portrait of their data. In projects like that, Assisted Review helps highlight some highly responsive documents first so we can send them to reviewers right away, prioritizing the most useful information on the front-end.
Have you ever used Assisted Review for quick productions that didn’t require a manual review?
Yes, definitely. One example was a case that involved about 2.5 million documents and faced substantial pressure for a quick and accurate review. It was a financial investigation and there was a lot of media attention around it, and some production leaks were making a quick turnaround really important.
After a really broad keyword search, 875,000 docs were identified as potentially responsive. From there, we ran five training rounds of Assisted Review and identified about 600,000 docs that were firmly set within that responsive bucket. We ran a quick privilege screen on those documents and—after just two weeks—we were done. That was a fantastic experience because we got all of our highly responsive documents identified and out the door right away. After that, because we were a couple of weeks ahead of schedule, we were able to take a look at the non-responsive set and run some analysis to make sure we had a good handle on the data.
It also became helpful to have the Assisted Review data available throughout the case. A few weeks later, when we got 500,000 documents produced back to us with all the same custodians, we already had a great training set. We were able to recycle that work product to train the system for the production review, too.
How do you evaluate Assisted Review as an option for new cases that come your way?
I look for a sizable data set to make sure we’re saving time and money over a linear approach. The lower threshold there is usually somewhere above 50,000 documents.
I also consider the type of case at hand. For me, the big question is whether or not there’s any potential for ambiguity in the data or the facts of the matter. Sometimes you have cases where every document that includes a particular keyword is going to be responsive, and those tend to be pretty straightforward with traditional approaches. But often, you have cases that require some investigation—for example, if there’s any chance that a document in the universe references the matter at hand without actually naming it.
What I mean is that, especially in government and financial investigations, it’s helpful to use Assisted Review to dig through data because the focus of a case may not have a specific project, event, or organization name. When folks within the same team talk about subjects like those, the language they use isn’t as explicit as a keyword search requires, so a subpoena that requests every document using a specific keyword can miss a lot of potentially relevant information.
When you’re working on a project, are there any reporting trends you look for to identify and address any problems as they arise?
The designation-issue comparison report, included in recent updates to Assisted Review, is beautiful. I use it a lot to dig into the details of the machine’s decisions and identify coding inconsistencies in real time.
A lot of folks familiar with computer-assisted review understand that there’s an art and a science to identifying how many rounds are appropriate for each project. To evaluate that, I examine how closely my documents are matched to responsiveness or issues in that report. If I have a lot of records sitting in a grey, middle area that aren’t clearly distinguishable in either direction, I’ll bring that observation to clients so we can make sure case teams are coding consistently and following best practices.
I understand you also use Assisted Review for issue coding. What’s your take on that workflow?
I love issue coding because it has rescued me several times. A common example is when you have a case where certain items are responsive but not to be produced—for example, if they’re privileged. That comes up a lot. Any time you’re working with a very large data set, you can send a production out and get a request for another one based on what the opposing side has learned from that first production. By issue coding with Assisted Review from the start, you can easily code and keep track of the bulk of that huge data set throughout the lifecycle of a case. When a second production is required, you can make sure you stick to the original production requirements and avoid oversharing by referencing that log of documents relating to the issues you identified the first time around.
Plus, having that ability to categorize and identify groups of documents is very helpful even if you aren’t producing, because it gives you a full look at the data and helps prevent vulnerabilities in your case.
Assisted Review for responsive versus non-responsive decisions is easy to use and pitch to clients. Beyond that, helping a client see how Assisted Review can boost efficiency in a case with 20 or more issues is rewarding because it makes a big difference in the quality of that review.
That’s because we find that, no matter how smart we were the first time, we’ll almost always uncover some disconnect between responsiveness coding and issue coding during QC. Assisted Review can help find and resolve those discrepancies.
In your Assisted Review projects, how much of your workflow have you typically had to disclose to opposing counsel?
Every case is different but, for the most part, we haven’t had to disclose much. We’ve had a number of cases where attorneys just allowed us to use the software and produce responsive documents with few questions asked. Now that there’s a lot of talk around computer-assisted review in the marketplace, we’ve also had cases involving open conversation about what is and isn’t allowed, which have also gone smoothly.
Between when you started with Assisted Review and now, have you seen a shift in perception?
I wouldn’t say the industry has a really solid understanding of it yet, but I will say a lot of people have an idea of what it can do for them. People have read the brochures and followed the press, and they know it’s out there. Recently, we’ve even had clients come to us with ESI protocols that include references to computer-assisted review standards.
Some folks are still resistant to change. For my part, I’m glad that computer-assisted review is becoming more acceptable. I can now tell my clients that they’re not the only ones using this technology, and that makes it much easier to get them to explore and embrace the option. Now they know, with a little support from us, that there’s nothing to fear—it’s just a matter of learning how the system works.