Quick Answers to Common Questions for Relativity Assisted Review Projects: Part 2

by Constantine Pappas on August 26, 2014

Analytics & Assisted Review, Litigation Support, Product Spotlight

Following up on this post from last May, we’re continuing to hear more complex questions from customers as they become more interested in Relativity Assisted Review. Our team is available to guide you through the workflow and any questions you may have—so hopefully these easy answers to more of our most-heard questions are helpful as you get started.

Can I add documents to the project in the middle of the review?

Absolutely. It is very common for new documents to arrive in the middle of a case, or for a team to want to begin a project immediately—even as documents are still being added to the workspace. Documents can be added to an Assisted Review project at any time, but ideally before you finish your current round. To include them in the project, add the documents to the workspace, update the index, and finish the round. The great thing is that all of your existing example documents can provide values for the new documents right away.

What do Seed Count and Eligible Sample Documents represent in the project overview?

There is a difference between documents that are coded as examples and those that are not. Documents submitted as examples are called seeds: they are given a designation and help train the system.

Eligible documents are all documents that can be reviewed for the project. Because Assisted Review is most effective if it learns new information in each round, the system will not batch out documents that have been reviewed as part of a previous round. Therefore, eligible documents cannot have a value present in the RAR Designation field.
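As a rough sketch of that eligibility rule—using hypothetical field and object names, not Relativity's actual object model—filtering out previously designated documents might look like:

```python
# Illustrative only: "rar_designation" is a stand-in for the RAR
# Designation field; real Relativity objects are not plain dicts.
def eligible_documents(documents):
    """Return documents not yet coded in a prior round, i.e. those
    with no value in the designation field."""
    return [d for d in documents if d.get("rar_designation") is None]

docs = [
    {"id": 1, "rar_designation": "Responsive"},
    {"id": 2, "rar_designation": None},
    {"id": 3, "rar_designation": "Non-Responsive"},
]
print([d["id"] for d in eligible_documents(docs)])  # → [2]
```

Documents 1 and 3 already carry a designation from an earlier round, so only document 2 remains eligible for batching.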

Does Assisted Review work with foreign language documents?

Assisted Review should work the same with other languages as with English, but there are some special considerations to keep in mind.

Relativity Analytics—which powers Assisted Review—looks mathematically at strings of characters and does not distinguish languages in the index. It doesn't use a dictionary or word list to understand concepts, so the words included in the index come only from the documents you provide to it.

That said, the standard stop words list is in English and helps the engine ignore common words like “the” and “it.” If you wish to index other languages, you can download standard stop lists for those languages and add them to your index settings. While not absolutely necessary, the quality of the index will improve if stop words for the languages present are applied.
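To see why stop words matter, here is a minimal sketch of term filtering during indexing. The word lists are tiny samples for illustration only—real stop lists are much longer, and this is not how Relativity's indexing is actually implemented:

```python
# Common function words add noise to an index without carrying
# conceptual meaning; dropping them leaves the substantive terms.
ENGLISH_STOPS = {"the", "it", "and", "of", "a"}
SPANISH_STOPS = {"el", "la", "y", "de", "un"}

def index_terms(text, stop_words):
    """Lowercase, split on whitespace, and drop stop words."""
    return [w for w in text.lower().split() if w not in stop_words]

print(index_terms("The contract and the invoice", ENGLISH_STOPS))
# → ['contract', 'invoice']

# Without a Spanish stop list, Spanish function words survive as noise:
print(index_terms("El contrato y la factura", ENGLISH_STOPS))
# → ['el', 'contrato', 'y', 'la', 'factura']
print(index_terms("El contrato y la factura", ENGLISH_STOPS | SPANISH_STOPS))
# → ['contrato', 'factura']
```

The second call shows the problem the post describes: an English-only stop list does nothing for other languages, so adding a per-language list cleans up the index.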

Additionally, when indexing foreign language documents, Analytics will not translate them—it will only group documents of the same language together, and then further group them based on the concepts it’s learned.

What does kCura recommend for confidence level and margin of error?

We don’t have a direct recommendation. There hasn’t been a clear court decision to indicate a particular sampling method or statistical threshold. Additionally, many factors affect which sampling method is best for each project, and what you intend to do with the documents after this phase of the review can also dictate the choice. Those details make it difficult to provide a golden metric for all cases.
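Whatever thresholds you settle on, confidence level and margin of error translate into a minimum sample size in the standard way. Here is a minimal sketch using the usual normal-approximation formula for a simple random sample—general statistics, not a kCura recommendation or Relativity's internal calculation:

```python
from math import ceil
from statistics import NormalDist

def sample_size(confidence, margin_of_error, p=0.5):
    """Required simple-random-sample size under the normal approximation:
    n = z^2 * p * (1 - p) / e^2.  p = 0.5 is the most conservative
    assumption about the true proportion."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided z-score
    return ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(sample_size(0.95, 0.025))  # 95% confidence, ±2.5% → 1537
print(sample_size(0.99, 0.05))   # 99% confidence, ±5.0% → 664
```

Note how tightening the margin of error drives the sample size up much faster than raising the confidence level does—one reason the “right” settings depend on the time and review budget of each project.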

To allow you to use the workflow that best suits each case, Assisted Review offers multiple sampling methods and accuracy measurements. If you need assistance identifying which approach is best—or if you have any other questions about how to set up your Assisted Review project—please don’t hesitate to contact us.

Posted by Greg Houston.

