A Review Manager's Guide to Convincing an e-Discovery Skeptic

One of the many takeaways from last November’s US presidential election is that the mechanics of e-discovery aren’t particularly well-known.

Case in point: a tweet from former National Security Adviser Michael Flynn, posted days before the American public voted amid what one might call a second-pass review of Hillary Clinton’s emails.

[Embedded tweet from Michael Flynn]

As forensic experts and e-discovery pros were quick to point out, the task facing the FBI was not just possible, but routine. Even though the election is long over, Flynn’s tweet still prompts a good question: How can you use Relativity to convince a skeptic—maybe even your own client—that the millions of documents in these projects can be reviewed on time?

Step 1: Show Them How Analytics Can Expedite the Process

The major flaw in Flynn’s analysis is his assumption that there is a one-to-one relationship between the number of emails collected by the FBI and the number of documents that were subject to human review. But, of course, computers are pretty smart, and we use their capabilities to tackle problems of relevance like this all the time (think about how products like Pandora work, for example).

When it comes to e-discovery, processing techniques like de-duplication and structured analytics tools like email threading are widely used. Technology-assisted review also gets a lot of attention. That being said, there are more tools in the kit. When used strategically, analytics features like clustering and categorization can help your review teams power through large document sets in ways that your naysayer might not think possible.  

For example, before your review begins, enable clustering to map out groups of conceptually similar documents. You can then run a saved search by cluster and create a batch of documents for review based on it. This will allow each reviewer, like the best machine-learning algorithms, to home in on a narrow set of issues from the start, performing incrementally better with each decision.
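If your skeptic wants to see the idea in action, here is a minimal sketch of conceptual clustering in Python using scikit-learn. It is purely illustrative: the documents and cluster count are invented, and this stands in for the concept rather than Relativity's own implementation.

```python
# Illustrative only: group conceptually similar documents together.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "Please review the merger agreement draft before Friday.",
    "Attached is the revised merger agreement for your review.",
    "Team lunch is moved to noon on Thursday.",
    "Reminder: lunch reservation changed to Thursday at noon.",
]

# Represent each document as a weighted term vector.
vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)

# Group the vectors into two clusters (a real matter would involve far more documents).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for doc, label in zip(documents, labels):
    print(f"cluster {label}: {doc}")
```

The takeaway is simply that similar documents land in the same bucket, so a batch built from a single cluster keeps each reviewer focused on one topic.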


Taking advantage of categorization is another way to distill your document set down to the most relevant pieces of evidence. Simply ask the system to categorize your documents using hot documents you’ve found as examples. Your results will provide a quick way to find what else might be responsive.
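Under the hood, categorization is a similarity problem: score each document against your examples and surface the closest matches. Here is a rough, hypothetical sketch of that idea using cosine similarity; it illustrates the concept only and is not Relativity's actual engine.

```python
# Illustrative only: score documents against known "hot" examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hot documents you've already found, used as examples.
hot_examples = ["Attached is the revised pricing agreement we discussed on the call."]

# The rest of the document set to be categorized.
corpus = [
    "Here is the final pricing agreement, ready for signature.",
    "The cafeteria menu for next week is attached.",
]

vectorizer = TfidfVectorizer(stop_words="english").fit(hot_examples + corpus)
scores = cosine_similarity(vectorizer.transform(corpus), vectorizer.transform(hot_examples))

# Documents that score above a cutoff become candidates for priority review.
for doc, score in zip(corpus, scores.max(axis=1)):
    print(f"{score:.2f}  {doc}")
```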

Step 2: Create Dashboards to Track Reviewer Progress

Preparing a workspace with the proper permissions and efficient workflows is an important part of a review manager’s job. But once the review is underway, your responsibility shifts to understanding the throughput and accuracy of your reviewers, course-correcting when necessary, and providing periodic updates to the client (internal or external) overseeing the case.

There are a few ways to track review progress in Relativity. We’ll start with dashboards, which, when applied to reviewer progress, can provide dynamic insight into how documents are being coded. A typical Reviewer Progress dashboard will include widgets that group by responsiveness or issues. To take things a step further, you can layer information about those coding decisions with the reviewer who is making them. Solutions like “Track Document Field Edits by Reviewer,” available for download in the Relativity Community, make your widgets even more powerful.
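For a sense of what that grouping boils down to, here is a toy version run against a hypothetical coding export outside of Relativity; the field names and data are made up.

```python
# Illustrative only: cross-tabulate coding decisions by the reviewer who made them.
import pandas as pd

# A hypothetical coding export: one row per document decision.
coding = pd.DataFrame({
    "reviewer": ["akim", "akim", "jlee", "jlee", "jlee"],
    "designation": ["Responsive", "Not Responsive", "Responsive",
                    "Responsive", "Not Responsive"],
})

print(pd.crosstab(coding["reviewer"], coding["designation"]))
```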

Creating dashboards up front is worth it down the line, especially because you can export them to a .csv file to share with other members of your team and package them as an application to reuse across your cases.

Step 3: Use Relativity Applications to Gain Deeper Insight

Relativity has been architected as a platform to be not only extensible, but open and connected, as well. This has empowered developers from both within and outside kCura to create applications that take understanding your review even further.

For example, we created Review Manager to help you forecast and track the time and cost of your review project. Using the application, you can generate reports to better understand the status of your review project and whether you need to add or remove reviewers to meet your deadline.
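At its core, that forecast is arithmetic on pace and time remaining. Here is a back-of-the-envelope sketch with invented numbers; Review Manager’s actual reports go well beyond this.

```python
# Invented numbers for illustration; Review Manager derives its figures from your workspace.
docs_remaining = 500_000
docs_per_reviewer_per_hour = 55
hours_per_reviewer_per_day = 8
review_days_left = 10

docs_per_reviewer = docs_per_reviewer_per_hour * hours_per_reviewer_per_day * review_days_left
reviewers_needed = -(-docs_remaining // docs_per_reviewer)  # ceiling division

print(f"Reviewers needed to finish on time: {reviewers_needed}")
```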

If you’re purely interested in your reviewers’ speed and accuracy, there’s a new application called Case Metrics specifically for that. Available in the Relativity Community, it bundles two popular solutions—Reviewer Statistics and Reviewer Productivity—with Review Manager’s Reviewer Overturns report, but it sits at the instance level, allowing you to compare data across all your workspaces.
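An overturn is simply a first-pass coding decision that a QC reviewer later reverses. Here is a hypothetical sketch of how an overturn rate per reviewer might be tallied; it shows the underlying idea, not Case Metrics’ actual report logic.

```python
# Illustrative only: compute an overturn rate per reviewer from a hypothetical QC sample.
import pandas as pd

qc = pd.DataFrame({
    "reviewer": ["akim", "akim", "jlee", "jlee"],
    "first_pass": ["Responsive", "Not Responsive", "Responsive", "Responsive"],
    "qc_decision": ["Responsive", "Responsive", "Responsive", "Not Responsive"],
})

qc["overturned"] = qc["first_pass"] != qc["qc_decision"]
print(qc.groupby("reviewer")["overturned"].mean())  # overturn rate per reviewer
```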

 

[Animated screenshot: Case Metrics]

 

There are even Relativity applications for understanding what your review means, at least from a dollars-and-cents standpoint. RelativityOne customers are familiar with MaxBilling, an application that provides data usage dashboards and the ability to automatically invoice clients using that information.

We’d also be remiss not to mention groups in our third-party ecosystem, Relativity developer partners like Esquify and Milyli, who have built their own innovative solutions for review management on top of Relativity.

While some might think what the FBI did was “impossible,” combining the Relativity tools that improve reviewer throughput with those that track reviewer progress should help illustrate that what made headlines around Election Day is just another day in the life for those in e-discovery.

Rob Galliani was a member of the product management team at Relativity, where he focused on Review Manager, Case Metrics, and RelativityOne.

 
