It’s exciting to see more organizations adopting active learning as their go-to technology-assisted review (TAR) tool. Since the workflow was introduced to Relativity in early 2018, you’ve probably heard about the benefits of performing a TAR project using active learning.
What you might not know is that active learning is not just for TAR—it’s also a valuable tool for prioritizing a traditional linear review.
Why Try Active Learning without TAR?
When running a TAR project, it’s important to establish solid, defensible workflows and protocols, and to be up to speed on concepts such as statistically valid sampling, margin of error, confidence level, and elusion rates. Your e-discovery team must have confidence in the process and its supporting technology if you wish to defensibly set aside unreviewed documents the system deems non-responsive.
Investing the time and effort to do this can pay big dividends, but some teams aren’t ready to take the leap, or feel their projects don’t justify doing so. Others worry that a TAR workflow may be challenged by opposing counsel, risking additional costs or delays.
But that doesn’t mean hesitant teams can’t benefit from active learning. In fact, you can realize significant benefits from active learning, without having to master the nuances of these topics, in a workflow that virtually eliminates the risk of being challenged.
That means you’ll learn the ins and outs of the tool in a low-risk way, benefit from its time and cost savings, and better prepare for the next big case.
The Document Hunt
In many projects, the responsiveness rate of the data set can be quite low, with just 30, 20, or even 10 percent (or less) of the documents qualifying as relevant to the case.
In a standard manual review unaided by analytics technology, truly responsive documents will be randomly scattered across the document set. Batching out documents for review in this scenario means that 70-90 percent—possibly even more—of the documents that reviewers see in each batch will be non-responsive.
Batching technology has been a workhorse of review projects for years. But if it’s your primary method of getting documents in front of reviewers and you’re not using the analytics tools at your disposal, you’re likely to fall behind the competition. Reviewers will be sifting through mountains of irrelevant information, and the tedium can make them less attuned to what’s actually responsive and, therefore, less effective in making their designations.
There’s a better way to approach a manual review.
This is where active learning comes in. The technology can supercharge your manual review by automatically prioritizing your data set and getting the most important documents to your reviewers sooner.
Instead of following the standard approach of batching out documents for manual review, create an active learning project, fire up the prioritized review queue, and have your team work from there.
In an active learning project, the system uses machine learning to analyze reviewers’ coding decisions and builds a model from those decisions to assign a rank score to every document in the project. The higher the rank score, the more likely a document is to be relevant.
The system then automatically prioritizes the documents in your review by serving up the highest-ranked documents first. The system continually learns as reviewers code documents, updating the rank scores along the way to ensure that the most important documents are being presented to reviewers first.
Little Risk, Big Rewards
You can conduct your entire review, top to bottom, in the active learning prioritized review queue until the team has coded all the documents in the project.
With this approach, you let the active learning engine automatically prioritize your review so that the best documents bubble up to the top of the review queue.
This front-loaded review strategy has many benefits, including:
- The legal team will be able to construct a well-informed case strategy sooner.
- Rolling productions will be easier to perform, as hot documents surface sooner and lower-priority documents can be produced later.
- Because reviewers won’t have to slog through as much junk, they will stay more engaged, making them more efficient, consistent, and accurate.
- All the while, you and your team will be gaining expertise in setting up and running active learning projects.
Because the review team is putting eyes on all documents in this scenario, there is no need to run elusion tests on unreviewed documents, determine appropriate margins of error and confidence levels, or get into involved discussions about protocols and defensibility. You are essentially performing a manual review, but with the benefit of having the active learning engine automatically prioritize your documents.
You can avoid incurring the up-front costs that can be associated with ramping up your team for a TAR project, while taking on little to no additional risk versus a manual review performed without the aid of active learning.
How to Get Started
Active learning is straightforward to set up and administer, so it’s easy to get started. Check out this documentation for a step-by-step guide. You can also contact us at any time, and we will be happy to help you get going.
There’s no minimum document count, so don’t wait for that big, high-profile case to be your first active learning experience. Take advantage of the technology in a low-risk way to ease into the workflow—and convince your team that it’s worth its weight in gold.