elaw Saves $258,000 and 861 Hours in High-stakes Document Review
When an Australian regulator demanded documents be produced within a very tight timeframe, a large global law firm engaged elaw to help its client avoid substantial fines and imprisonment for non-compliance. Missing the deadline was not an option, and neither was producing anything less than the full set of responsive documents. elaw would use Relativity Assisted Review to identify the 3,367 relevant documents in an initial collection of 80,000, accomplishing something that would have been impossible through a traditional, linear review.
elaw is a niche consulting, litigation support, and legal technology service provider offering expertise in the management, analysis, and presentation of electronic evidence. elaw provides these services to law firms, corporations, and government agencies.
Leonie Taylor is the head of consulting at elaw. Leonie has practiced as a litigation lawyer and also has over 10 years’ experience as an e-discovery consultant, assisting clients to understand the ways in which technology can best assist them with evidence and matter management.
3 Lawyers, 80,000 Documents, and an 8-day Deadline
elaw’s client, a large global law firm, acted for a party that had received a Notice to Produce from an Australian regulator. The Notice required production within a tight timeframe; after collection and data processing, the client had eight days to review and produce the relevant documents. The ramifications of non-compliance were significant: a substantial fine and possible imprisonment of directors.
A team of three lawyers was tasked with reviewing the 80,000 documents in Relativity. The prospect of compliance by traditional, linear review methods was virtually non-existent.
High-level culling based on keywords and date ranges reduced the collection for review to 46,908 documents, still a very challenging collection for effective review by a small legal team. Given the strict timeframe and limited review team, Leonie designed a workflow using Relativity Assisted Review.
“I knew Relativity Assisted Review would be the only effective option to meet the challenge of reviewing tens of thousands of documents in about a week,” said Leonie. “The law firm’s client was enthusiastic about a strategy that minimized the need for extensive legal review, which reinforced our recommendation. With the clock ticking, the decision was made.”
No Rest Between Rounds
While the clock ticked, the legal team had already commenced a manual review of the collection to keep things moving. Rather than resulting in duplicated work, having eyes on the documents right away would prove useful.
Once the prospect of using Assisted Review arose, the manual review shifted its focus to tagging documents that were good examples of what the team considered relevant, so they could be used to train the system. These examples became the pre-coded set for the first training round. A summary of the rounds is set out below.
Assisted Review then returned the first set of coding decisions based on the training documents. This would be the first of five quality control rounds. During quality control, the team would verify the system was on the right track to code the remainder of the documents in the data set.
Overturns Indicate Something’s Amiss
During the QC rounds, reports in Relativity indicated that the overturn rate—the rate at which the team of lawyers disagreed with the coding decisions made by the system—was very high. Strangely, many of the overturned documents were 100 percent conceptually similar to documents they had coded to train the system.
After further investigation into these documents with the legal team, the following issues were identified as the causes of the high overturn rate:
1. Tagging of family documents in the pre-coded set: Entire families of documents were mass-tagged for the pre-coded set when only one document in the family was a good example.
2. Using certain keywords as a basis for inclusion in the pre-coded set: A list of highly relevant keywords was used to mass-tag documents for the pre-coded set when, in fact, the documents returned by those keywords were not always good examples.
3. Tagging of encrypted, password-protected attachments in the pre-coded set: Attachments of a “good example” host document were also tagged for the pre-coded set when they were, in fact, encrypted or password protected.
4. Changing the scope of relevance: Documents that were relevant in the first round of review were no longer relevant due to changes in client instructions.
Issues 1-3 were traced back to the work of one of the lawyers on the team and, after some un-tagging of the example documents, the process was back on track. “Access to the overturn reports was vital in identifying these issues in the review process,” said Leonie.
After five QC rounds, the legal team turned to the approximately 6,068 un-categorized documents. They undertook two rounds of sampling these documents to ensure nothing in that set belonged in the production; after those rounds, they were confident they had not missed anything vital. They were also comfortable with a three percent overturn rate from “not relevant” to “relevant” in the categorized set, given the objective was to produce all the vital relevant documents with less concern about non-relevant material appearing in the production. At the end of the review process, 3,367 documents were categorized as relevant and 37,455 as not relevant.
Nothing Compares for Efficiency
A post-project comparison of Assisted Review and linear review paints a compelling picture. Using Assisted Review, the entire collection was categorized on the basis of legal-team review of just 7,108 documents, requiring 181 billable hours of effort, a review rate of 39 documents per hour. At that rate, a linear review of all 46,908 documents would have required 1,203 billable hours.
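The comparison reduces to simple arithmetic. A minimal sketch, using only the figures quoted in this case study (the linear-review projection follows from the rounded rate of 39 documents per hour):

```python
# Efficiency comparison, reproduced as arithmetic from the case study's figures.
docs_reviewed = 7_108   # documents the legal team actually reviewed
hours_spent = 181       # billable hours using Assisted Review
total_docs = 46_908     # collection size after keyword/date culling

rate = round(docs_reviewed / hours_spent)   # ~39 documents per hour
linear_hours = round(total_docs / rate)     # projected effort for linear review

print(f"Review rate: {rate} docs/hour")
print(f"Projected linear review: {linear_hours} hours")
```

Running this yields the 39 documents per hour and roughly 1,203 linear-review hours cited above.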
The project deadline notwithstanding, the team estimated they saved 861 hours of review work and approximately $258,000 in the process.
“Had the team of three worked 24 hours a day for the eight available days, they would have reviewed little over half the documents, putting their client and its directors in a perilous position,” said Leonie.