by Bill Bailey
on May 20, 2020
Even in the best of times, maintaining high standards of quality, consistency, and productivity in your e-discovery project is an ongoing challenge. With changing circumstances and a need to transition to distributed teams, the effort is magnified. On top of that, distractions are everywhere as team members settle into new routines.
A key element of any successful e-discovery project is ensuring that your review team is on the same page and working productively. Ongoing review monitoring and quality control procedures are essential to verify that team members are applying decisions accurately and consistently.
Keeping tabs on geographically dispersed teams isn’t easy, but it’s certainly not impossible. In fact, you likely have some powerful tools at your disposal that you can use right now to help you meet the challenges of keeping your distributed teams on track.
For those using Relativity, let’s explore some of the tools you can use today to help bolster your quality control and keep your team working productively when team members are distributed.
Ensuring that your remote workers have what they need to be productive can be a concern when transitioning to a distributed work environment. Distributed workers can certainly be as effective as co-located ones, but even under normal circumstances, a change of scenery can make for a tricky transition—and with kids home from school, quarantines in place, and a lot of anxiety hanging over many of us, these are no normal circumstances.
You’ll want to be supportive of your review teams to ensure they can deliver the results your case requires in the timeline you’ve been given. Good reporting can help you identify slowdowns and home in on cases or reviewers who may need a check-in or different accommodations to help them work productively.
In Relativity, a set of reports is always at your fingertips to help you remain confident that your review is on track. Armed with these reports, you can make course corrections early on—minimizing rework, missed deadlines, and upset clients.
For instance, you can monitor reviewer productivity with the Reviewer Statistics Report. It shows the amount of time reviewers have spent reviewing documents, how many documents they have reviewed, and the average number of documents reviewed per day and per hour. You can generate these reports for specific users or for users across multiple clients, cases, or groups.
Reviewer Statistics Report:
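To make the metrics concrete, here is a minimal sketch of how these statistics could be derived from a simple review log. This is purely illustrative: the log structure and field names are hypothetical, not Relativity's actual schema, and the real report handles this for you.

```python
from collections import defaultdict
from datetime import date

# Hypothetical review log: one entry per completed document review.
# Field names are illustrative, not Relativity's actual data model.
review_log = [
    {"reviewer": "alice", "day": date(2020, 5, 18), "minutes": 2.0},
    {"reviewer": "alice", "day": date(2020, 5, 18), "minutes": 1.5},
    {"reviewer": "alice", "day": date(2020, 5, 19), "minutes": 2.5},
    {"reviewer": "bob",   "day": date(2020, 5, 18), "minutes": 4.0},
]

def reviewer_stats(log):
    """Per-reviewer totals: documents reviewed, docs/day, docs/hour."""
    by_reviewer = defaultdict(list)
    for entry in log:
        by_reviewer[entry["reviewer"]].append(entry)
    stats = {}
    for reviewer, entries in by_reviewer.items():
        docs = len(entries)
        active_days = len({e["day"] for e in entries})
        hours = sum(e["minutes"] for e in entries) / 60.0
        stats[reviewer] = {
            "docs_reviewed": docs,
            "docs_per_day": docs / active_days,
            "docs_per_hour": docs / hours if hours else 0.0,
        }
    return stats
```

Benchmarking these per-reviewer numbers against your team's historical averages is what turns raw counts into actionable signals.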
In times like these, productivity reporting can help you benchmark current progress against what you’d normally expect to see from your team. If it’s not on track to hit your deadlines, you can get clear insight on what areas need your attention—and how you can evolve your case management strategy to balance those needs with those of other stakeholders.
Review speeds are important. But hitting your benchmark for docs per hour means little if the coding decisions made on those documents are rushed, confused, or inaccurate. Round out your productivity reporting by closely monitoring your quality control results and how they compare to first-pass review.
After more senior reviewers have performed a quality control (QC) review of first-pass coding decisions, take a look at how often the first-pass reviewers’ decisions are being overturned by the QC reviewers.
This report helps you zero in on the reviewers who need extra guidance and even allows you to see the documents they were struggling with. Since this report provides you with real-time feedback, a quick chat to understand where the confusion lies and how it can be resolved can get your reviewers back on track with very little time lost.
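The underlying calculation is simple: for each reviewer, what fraction of their QC'd decisions were changed? The sketch below shows the idea, assuming a hypothetical set of coding records; the field names are invented for illustration, and the Reviewer Overturn Report computes this for you in practice.

```python
# Hypothetical coding records after QC: each holds the first-pass call
# and the (possibly corrected) QC call. Field names are illustrative.
qc_results = [
    {"reviewer": "alice", "first_pass": "relevant",     "qc": "relevant"},
    {"reviewer": "alice", "first_pass": "not relevant", "qc": "relevant"},
    {"reviewer": "bob",   "first_pass": "relevant",     "qc": "relevant"},
    {"reviewer": "bob",   "first_pass": "relevant",     "qc": "relevant"},
]

def overturn_rates(records):
    """Fraction of each reviewer's QC'd decisions that were overturned."""
    totals, overturned = {}, {}
    for r in records:
        totals[r["reviewer"]] = totals.get(r["reviewer"], 0) + 1
        if r["first_pass"] != r["qc"]:
            overturned[r["reviewer"]] = overturned.get(r["reviewer"], 0) + 1
    return {rev: overturned.get(rev, 0) / n for rev, n in totals.items()}
```

A reviewer whose overturn rate sits well above the team's baseline is a good candidate for that quick check-in conversation.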
In Relativity, you can schedule these reports to be sent to your inbox daily, and the built-in dashboard provides data visualization tools to give you an at-a-glance overview of how reviewers are progressing in your case.
Reviewer Overturn Report:
Reviewer Metrics dashboard:
Another tool that is likely available to you right now, but may not strike you as a quality control tool for a distributed workforce, is active learning.
Active learning is a technology-assisted review (TAR) solution that uses machine learning to help teams identify the most important documents in their case, allowing them to power through their reviews.
But it isn’t only good for one use case. Active learning is a flexible tool that can be used in a variety of ways—including as a mechanism to help you QC reviewer coding decisions, even if you are performing a traditional, linear review, and not running your review project through active learning. Here's how that would work in Relativity.
As a case administrator, you would set up an active learning project to run in the background. Once a day, or periodically throughout the day, copy over the relevant and not relevant reviewer coding decisions to the active learning review field. That's it.
Active learning will automatically apply a rank score to each document, with a higher score indicating that a document is likely to be relevant, and a lower score indicating that a document is likely not relevant.
With this information, you can spot documents where the machine disagrees with the human reviewers. For instance, you could set up a search that returns documents highly ranked by active learning but that reviewers have coded as not relevant. This type of conflict between the machine and the human reviewer would warrant additional review of these documents.
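In spirit, that conflict search is just a filter over rank scores and coding decisions. Here is a minimal sketch, assuming a 0–100 rank scale and an illustrative threshold of 75; the document structure, field names, and cutoff are all hypothetical, and in practice you would build this as a saved search in Relativity rather than in code.

```python
# Hypothetical per-document data joining the active learning rank score
# (higher = more likely relevant) with the human coding decision.
documents = [
    {"doc_id": "DOC-001", "rank": 92, "coding": "not relevant"},
    {"doc_id": "DOC-002", "rank": 88, "coding": "relevant"},
    {"doc_id": "DOC-003", "rank": 12, "coding": "not relevant"},
]

def find_conflicts(docs, high_rank=75):
    """Return documents the model ranks highly but humans coded not relevant."""
    return [d["doc_id"] for d in docs
            if d["rank"] >= high_rank and d["coding"] == "not relevant"]
```

The mirror-image search, documents ranked low but coded relevant, can be just as useful for surfacing inconsistent coding.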
The great thing about this type of conflict checking is that it can be set up and run behind the scenes by a case administrator, without disrupting an ongoing review. The conflicting documents that are identified can then be presented to the review team for additional QC.
With the ever-present pressures of e-discovery exacerbated by a need to manage remote teams, we hope that some of these approaches can help ease the tension by giving you the ability to stay on top of team quality and output.
If you have any questions please contact us at any time. We’re happy to help.
Please stay tuned for an on-demand webinar that will provide a deeper dive into this topic; we'll update this article with the link once it's available.
You can read more about Case Metrics here, and active learning here and here.
Bill is a senior solutions specialist at Relativity, providing workflow consultation to ensure clients are getting the most out of the platform, with a focus on analytics and technology-assisted review. Bill previously practiced law in the area of mergers and acquisitions and has extensive experience in the software and technology fields.