Rev Up Your Reviewers

We’d like to talk briefly about a topic that naturally comes to mind when we think about Relativity and e-discovery—stereo speakers.

It is commonly held, with good reason, that a stereo system is only as good as its speakers. All the power and beauty of the other components mean nothing if the speakers can’t pump some bass. We think the same concept applies to document review in general, but especially to computer-assisted review projects. Restated (and a bit more on topic), your assisted review project is only as good as your reviewers. Your index could be tight as a drum, your de-duping strategy above reproach, your custom-designed workflow the toast of Broadway—yet it simply won’t matter without strong reviewers.

In our observations, most assisted review difficulties stem from reviewer-level issues. That trend prompted us to create our Reviewer Protocol documentation. In particular, we wanted to provide early, comprehensive education on the vital process of selecting good example documents from which the system can learn. The material has gained good traction with our end users because it lays out guidelines for teaching Relativity Assisted Review to categorize accurately. For instance, our interpretation of the four-corners rule helps reviewers understand how document families should be treated within an Assisted Review project: if the content that makes a document responsive isn't contained within the four corners of the document itself, don't count it as responsive. Other tips cover how to work with spreadsheets during assisted review and how to select quality example text.

We’re here to make your job easier. If you have any questions—or if you’d like to request a copy of the Reviewer Protocol document—please feel free to drop us a line. You can email us or call us at (312) 870-5555.