by Jay Leib on March 29, 2013
In this installment of his interview series, Jay Leib—resident computer-assisted review expert at kCura—discusses computer-assisted review at an Am Law 100 firm with Alison Silverstein of McDermott Will & Emery. Alison is the managing director of discovery and dispute services at McDermott.
Jay Leib: What’s your focus at McDermott?
Alison Silverstein: My job is to help make our firm the best provider of all the discovery services we can offer—including consulting, collection and processing, big data management, document review services, and a lot in between.
Can you walk us through a recent matter where you used computer-assisted review?

This particular case was extremely well suited to computer-assisted review, because it was a second request—not litigation—so we were comfortable exploring the new workflow. We had participated in heavy negotiations with the U.S. Department of Justice to reduce the scope of their demand, but in exchange for those allowances, they gave us a very tight deadline. It was a terrific deal for the client, but it made for a challenging process for us: 10 days to review hundreds of thousands of documents.
As a group, we put our heads together to discuss our options. We had already performed a linear review when we collected the documents, covering about 200,000 records, so we had a good idea of what our responsive rate would be and a good understanding of the collection. Because of that, we were comfortable setting the computer-assisted review protocol. We started going down that path with the goal of using the process to identify buckets—responsive, non-responsive, and uncategorized documents. From there, we performed a traditional review of the responsive and uncategorized documents, running additional Assisted Review rounds until we achieved a specified degree of confidence in the categorization of non-responsive documents.
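As a rough illustration of the stopping condition Alison describes—not the firm's actual protocol, and with invented names and thresholds—the check on the non-responsive bucket can be sketched as a random sample whose overturn ("elusion") rate is estimated with a normal-approximation confidence interval:

```python
import math
import random

def elusion_check(non_responsive_ids, label, sample_size=400, z=1.96):
    """Sample the machine-coded non-responsive bucket and estimate the
    rate of responsive documents hiding in it. `label(doc_id)` stands in
    for the reviewers' true responsiveness call; z=1.96 gives a 95%
    normal-approximation confidence interval on the estimate."""
    sample = random.sample(non_responsive_ids, min(sample_size, len(non_responsive_ids)))
    overturned = sum(1 for d in sample if label(d))
    p = overturned / len(sample)
    margin = z * math.sqrt(p * (1 - p) / len(sample))
    return p, margin

# Toy data: 50,000 docs coded non-responsive, 1% of them actually responsive.
random.seed(42)
ids = list(range(50_000))
truly_responsive = set(random.sample(ids, 500))
rate, moe = elusion_check(ids, lambda d: d in truly_responsive)
print(f"estimated elusion: {rate:.3%} ± {moe:.3%}")
```

If the estimated rate plus its margin stays under the threshold agreed with the requesting party, the round can stop; otherwise another Assisted Review round runs.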
How receptive were your clients and partners to the idea of using Assisted Review?
We talked to the client directly to explain our approach and the goals up front. Our clients are good business people, and it’s easy to explain the merit and efficacy of using a more automated process like this. They understand that this will help us save them a lot of time and money, so they’re sold.
The lawyers just needed to be reassured that we were creating a legally defensible process that would be a responsible option for their clients. After all, it’s our job to articulate risk to our clients, and then provide good recommendations for a smart course of action that weighs the costs and benefits of all approaches.
In every case, there’s an intersection between business requirements, legal obligations, and technical capabilities. We need to consider all of those pieces, inform our clients, and empower them to make the right decisions.
How did you prepare your team for the Assisted Review workflow?
We outlined our protocol, deciding on the confidence interval and related settings. In this case, we were working with the DOJ in a partnership of sorts, so we made sure they were comfortable, too. We tightened the protocol as we progressed through the project, but having clear goals from the start was a successful approach for building our seed set.
As for reviewers, we sat them all in one room together—two people who had led the linear review, and two partners who were working on the case. All of them reviewed the seed set together to ensure that everyone was making the same decisions about the same documents. If they had any differences, they debated, discussed, and solved the issues as they came up, all together. It was incredibly important to have that collaboration early on. All of the training was done with that core set of reviewers, and we think that approach is critical in these cases.
Were you pleased with the outcome?
We were very pleased. It absolutely met our expectations and testing requirements, both on our side and on the DOJ’s side. They did a sampling of their own from our non-responsive set to QC the results, and they were pleased. It helped our relationship with them going forward—and, therefore, improved the support we can provide for our clients. Additionally, we saved costs for our client, meeting the deadline and presenting a clean, defensible set of documents. Between Assisted Review and early culling, we consistently see an 80-percent cost reduction compared to human review. That’s a big success in itself.
Have you experienced any unexpected challenges in other Assisted Review projects?
We have smart lawyers who are good at thinking through their problems. The unexpected challenge I can think of is getting those attorneys on board with the process. At the beginning, there’s a leap of faith—you just have to follow the protocol and understand that it’ll work for you in the long run, based on the algorithms and statistics. But some lawyers just want to mark a certain group of documents as non-responsive for reasons that don’t live within the four corners of the documents, or maybe gerrymander the sample sets to adjust the output. Unfortunately, it doesn’t work like that. You’ve got to follow the protocol, look at text that’s only on the face of the document, and take the out-of-the-box thinking cap off for a second to really kick start the workflow in the right way.
What advice would you offer someone who’s diving into their first computer-assisted review project?
It’s not a button that you push. You can’t hand off a hard drive to an admin and expect it to get done independently. You need to make some conscious, precise decisions to get there. Before you even start the process, you need to outline the protocol you’re comfortable with, understanding the confidence interval and related statistics. Understand that there will be a margin of error. In our experience it’s lower than the error rate of human review, but it’s there. Everyone should know that, including opposing counsel.
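The confidence interval and margin of error Alison mentions reduce to standard sampling statistics. As a hedged sketch (the function and its defaults are illustrative, not any product's settings), the minimum sample size for estimating a proportion at a given confidence level and margin of error is:

```python
import math

def sample_size(confidence_z=1.96, margin=0.02, p=0.5, population=None):
    """Minimum random-sample size for estimating a proportion at the
    given confidence level (z-score) and margin of error. p=0.5 is the
    conservative worst case. When the collection size is known, a
    finite-population correction shrinks the required sample."""
    n = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    if population:
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# 95% confidence, ±2% margin of error:
print(sample_size())                     # → 2401
# Same target over a 200,000-document collection:
print(sample_size(population=200_000))   # → 2373
```

Note how weakly the sample size depends on collection size: a 200,000-document matter needs barely fewer sampled documents than an unbounded one, which is why sampling-based QC scales so well.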
Case teams need to understand what they’re doing, and what the importance of that is. The case manager has to be a consultant, not a behind-the-scenes admin. You need someone with true knowledge and experience to lead your case.