by Yvette Bula – Commonwealth Legal
on July 15, 2016
Analytics & Assisted Review
It’s impossible not to catch on to the trend of analytics usage for e-discovery in today’s legal world. Alongside growing data volumes, greater acceptance among lawyers and the courts, easier-to-use technology, and updated rules are all shaping how case teams use analytics to increase efficiency and find the needles in their metaphorical haystacks faster and earlier.
For many, the question of analytics isn’t “Should I?” It’s “How should I?” Choosing analytics doesn’t have to be a decision about whether to use one analytics feature or none of them. Taking advantage of a combination of analytics tools can narrow down the scope of complex cases—even when it comes to technology-assisted review. In fact, a combined effort is usually a great approach to TAR. Here’s a look at why.
If you’re anything like me and return from vacation to find an overwhelming amount of email in your inbox, you could decide to focus on the most recent notes in each conversation to see what you may have missed without reading every individual email. Fortunately, email threading brings this efficiency to every e-discovery project—no matter the size—and it’s incredibly useful during TAR, too.
You can take this approach to exclude repetitive content and read only one or two messages—known as the inclusive emails—that capture their entire threads and any attachments. With fewer documents to review for the initial “seed set,” you’re helping your expert team train the system in less time—and reducing the chances that they’ll accidentally make conflicting coding decisions on similar documents, which could confuse the system and lengthen the overall timeline of your project.
What’s even better is that email threading offers a further reduction when you choose to exclude email duplicate spares—emails that have the exact same body content and attachments. Why review the same content more than once?
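The idea behind excluding non-inclusive messages can be sketched in a few lines. This is a hypothetical simplification, not how a production threading engine works: real tools parse headers and quoted text, while here an email is simply treated as redundant if a later message in the same thread contains its full body.

```python
# Hedged sketch: identify "inclusive" emails in a thread.
# Assumption: a later reply that quotes an earlier message contains
# its body verbatim, so a plain substring check approximates inclusiveness.

def inclusive_emails(thread):
    """thread: list of email bodies, ordered oldest to newest.
    Returns only messages whose content no later message captures."""
    keep = []
    for i, body in enumerate(thread):
        later = thread[i + 1:]
        if not any(body in reply for reply in later):
            keep.append(body)
    return keep

thread = [
    "Can you send the Q3 report?",
    "Can you send the Q3 report?\n> Sure, attached.",  # quotes the first
    "Thanks, received.",                               # separate branch
]
print(inclusive_emails(thread))  # the first message drops out
```

Only the last two messages survive: the first is fully quoted inside the reply, so reviewing it separately would add nothing.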
During the early phases of a TAR project, expert reviewers have to ask themselves an extra question after deeming a document responsive to the matter at hand: would it serve as a good example document to help train the system?
There are typical best practices for making this call, such as focusing only on the content within the document itself (as opposed to its metadata) and setting aside family relationships (such as emails and their attachments). But concept searching can be a great help when you’re trying to identify the best documents—or excerpts from documents—to use as fodder for training.
For example, let’s say your team has coded a particular document as responsive but isn’t sure about its usefulness during training. It’s very long, and only one page has text that’s highly relevant to the matter. You can use concept searching to get an idea of what the technology can learn from the document. If a concept search using the relevant excerpt as the search query returns a lot of juicy results, you can feel confident submitting that text as training material for the TAR project. Zeroing in on ultra-relevant content during these training rounds can help your project reach completion faster.
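To make the intuition concrete, here is a minimal sketch of a concept-style search as cosine similarity over bag-of-words vectors. Commercial analytics engines use far richer semantic models (such as latent semantic indexing); the function names and threshold below are illustrative assumptions, not any product’s API.

```python
# Hedged sketch: score documents against a relevant excerpt using
# cosine similarity on simple word-count vectors.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def concept_search(excerpt, documents, threshold=0.2):
    """Rank documents by similarity to the excerpt; a rich result
    list suggests the excerpt is strong training material."""
    query = vectorize(excerpt)
    scored = [(cosine(query, vectorize(d)), d) for d in documents]
    return sorted((s, d) for s, d in scored if s >= threshold)[::-1]
```

Feeding a one-page relevant excerpt into a function like this and seeing many high-scoring hits is the kind of signal that suggests the excerpt will train the system well.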
Cluster visualizations can be an excellent way to both jumpstart a TAR project and support your QC efforts at a glance.
Clustering requires no human input, so you can run this feature on your data set quickly to sort documents into conceptually related groups based on their content. If you perform this function during the early stages of a TAR project, you can use the results to identify prime training examples for your project. This is especially useful if you’ve already made some headway on a review before deciding to perform TAR, because you can locate documents in the same clusters as those you’ve already coded to prioritize training and get to the meaty stuff first.
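The “no human input” point is worth illustrating. Below is a deliberately crude sketch of unsupervised grouping using a greedy single pass and Jaccard word overlap; real clustering engines build conceptual indexes rather than comparing raw words, so treat the thresholds and logic here as assumptions for illustration only.

```python
# Hedged sketch: greedy single-pass clustering by word overlap.
# Real analytics tools cluster on conceptual similarity, not raw tokens.

def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(documents, threshold=0.3):
    """Place each document in the first cluster whose seed document
    is similar enough; otherwise start a new cluster."""
    clusters = []  # each cluster is a list; its first doc is the seed
    for doc in documents:
        for group in clusters:
            if jaccard(doc, group[0]) >= threshold:
                group.append(doc)
                break
        else:
            clusters.append([doc])
    return clusters
```

Run over a coded review set, groupings like these are what let you pull uncoded documents that sit alongside documents you’ve already judged, prioritizing the meaty clusters first.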
During a QC workflow, you can also use cluster visualizations as a heat map to call out coding inconsistencies in your data. If you have documents coded as non-responsive in an otherwise responsive cluster, for example, they’ll be easy to spot. This is another highly visual QC exercise to complement the reporting features that may be included in your TAR software.
As it turns out, there is no “be-all” or “end-all” in analytics, not even when it comes to TAR. The best teams collaborate amongst their experts, internal and external, and make use of the most effective tools for the job to get the most out of the time and money spent on an e-discovery project.
Yvette Bula is managing director of technical services at Commonwealth Legal, a division of Ricoh Canada, where she works collaboratively with clients to architect solutions across the EDRM. She has more than 15 years of experience in e-discovery.