When our team sat down to build the Relativity 7.5 feature set for our processing solution, the goal was to solve a specific problem for our end users, particularly processing administrators. Across the litigation support industry, these administrators routinely face demanding tasks on tight timelines.
Relativity Processing includes a feature set that is built to accommodate those urgent, last-minute requests, which often involve 100 gigabytes of data or less. Built on a powerful, enterprise-grade engine, Processing is specially designed to make smaller jobs quick and easy to ingest directly into Relativity. The workflow avoids unnecessary complexities, tightly integrating processing, review, and analysis in the same interface.
Why are we so passionate about this approach to processing? It’s because we’ve heard from our users and tried to understand their pain points when it comes to processing. We wanted to build a solution that addressed their challenges. The following narrative from the POV of a processing admin illustrates what we found our users often go through during this process. We hope it helps clarify the role ease of use plays in Processing.
Please feel free to contact us with any questions, or to learn more about Processing.
* * * * *
Saturday, 9:13 a.m. – FedEx just arrived with a portable hard drive. Last night, our team received an email from one of our attorneys about a new project that needs to be made available for review in an ongoing case immediately. The drive holds 89 gigabytes of compressed data to be processed. Because we're already in the middle of this case, sufficient storage space isn't available, so I need to contact IT to make room for the new project.
While I’m waiting on that, I fill out a project kickoff form (PKF) and submit it to the case team. I need to confirm specs for processing the data and setting it up in the workspace—things like custodian names, de-duplication standards, OCR protocol, which time zones we’ll reference, how we’ll handle embedded objects, and whether to maintain the folder structure once the data is in the workspace, among other details.
Saturday, 1:12 p.m. – IT reported back that the necessary disk space has been provisioned, so I can start copying data from the external drive into our network storage. I need to follow a forensic copying process to maintain metadata for the files, which is key to defensibility. The forensic workflow also helps with reporting, as well as maintaining the chain of custody. While the data is copying over, I can start setting up our workspace in Relativity and build our network share location.
Saturday, 5:04 p.m. – We still haven’t received the case team’s PKF response, so I’ll probably wrap up for the day. I’ve spent most of the afternoon monitoring the forensic copying. Before I head home, I’ll check on the process one last time. It looks like it’s progressing smoothly.
Sunday, 8:02 a.m. – First things first: I log in to confirm that the data copy completed successfully overnight and we have a detailed confirmation report from the forensic copying application. I then confirm that the data on the network share and the data on the original hard drive are identical. Meanwhile, I receive confirmation from the case team on the PKF, so I can be confident in the settings for data ingestion.
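As an aside, the "confirm the copies are identical" step typically comes down to comparing cryptographic hashes of the source and destination trees. Here is a minimal, generic sketch of that idea in Python; this is an illustration only, not Relativity's or any forensic tool's actual implementation, and the function names are invented:

```python
import hashlib
from pathlib import Path


def hash_tree(root: str) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    root_path = Path(root)
    digests = {}
    for path in sorted(root_path.rglob("*")):
        if path.is_file():
            h = hashlib.sha256()
            with path.open("rb") as f:
                # Read in chunks so large evidence files don't exhaust memory.
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            digests[str(path.relative_to(root_path))] = h.hexdigest()
    return digests


def verify_copy(source: str, destination: str) -> bool:
    """True only if both trees contain the same files with identical content."""
    return hash_tree(source) == hash_tree(destination)


if __name__ == "__main__":
    # Tiny self-contained demo using temporary directories.
    import shutil
    import tempfile

    with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
        (Path(src) / "custodian_a").mkdir()
        (Path(src) / "custodian_a" / "mail.pst").write_bytes(b"example data")
        shutil.copytree(src, dst, dirs_exist_ok=True)
        print(verify_copy(src, dst))  # prints True for a byte-identical copy
```

Real forensic copy tools go further (e.g., preserving timestamps and producing a signed report), but the hash comparison is the core of the "identical data" confirmation.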
Sunday, 9:43 a.m. – I create a new processing set in our Relativity workspace—a quick setup. When I’m done, I specify a secured staging folder where the relevant data will be published in the Documents tab. That way, it’s not immediately available to reviewers, so the data can be culled down and we can perform QC on it before it’s formally released for review. I kick off the data discovery phase and wait for the application to process it.
Sunday, 3:45 p.m. – The data discovery is complete, so I run the associated reports. I use built-in reports to provide a quick update on the status of the project to our stakeholders. The discovery report provides a breakdown of the processed data by custodian and file type. It’s designed for the case team, as it paints a picture of the data set and makes it easy for them to start building a plan of attack for review. The error report is for my team, helping us determine our next steps on error resolution. With the report, we can quickly determine if any errors need to be addressed immediately, or if they can be resolved while the formal review progresses.
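Conceptually, the custodian-and-file-type breakdown in the discovery report is a simple aggregation over per-file metadata. The sketch below shows the idea generically in Python; it is not Relativity's implementation, and the field names are illustrative:

```python
from collections import Counter


def discovery_breakdown(files):
    """Count discovered files by (custodian, lowercased file extension)."""
    counts = Counter()
    for item in files:
        counts[(item["custodian"], item["extension"].lower())] += 1
    return counts


# Hypothetical metadata records for a handful of discovered files.
files = [
    {"custodian": "Smith", "extension": "PST"},
    {"custodian": "Smith", "extension": "docx"},
    {"custodian": "Jones", "extension": "pst"},
    {"custodian": "Smith", "extension": "pst"},
]
print(discovery_breakdown(files))  # e.g. ('Smith', 'pst') maps to 2
```

A table of these counts is essentially what the case team sees: how much data each custodian contributed, and in which formats.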
Monday, 8:01 a.m. – After resolving the errors from yesterday’s report, I can start the publishing phase to make the data available in the case workspace. I click a button and, a few minutes later, Relativity’s Import APIs have pushed the data into the Documents tab automatically. I navigate to the Documents tab, and the data is ready to go. From there, I can use standard Relativity features—like searching and filtering—to cull down the data per the instructions confirmed in the PKF. I’ll give the case team access to the corresponding saved searches so they can review and approve them.
Monday, 10:08 a.m. – Finally, I drop the data into a final review folder in Relativity. Following our PKF standards, I maintain the folder structure that already exists in the workspace. After roughly 48 hours, the data is off the hard drive and in review. I breathe a sigh of relief, and check my email. There’s a note saying I should expect 200 gigabytes of data that need to be immediately available for review in another ongoing case…