by Kyle Disterheft on July 07, 2016
Nobody wants to look through user audit logs. No human, on an otherwise enjoyable afternoon, wants to comb through page after page of densely packed, mind-numbing, teeny tiny text in an application’s history in search of what a user did or didn’t do.
But investigations into audit data play an integral role in helping e-discovery teams understand how a case plays out and why. They also help identify red flags, increase efficiency, and provide important lessons for the next project.
What’s it like to be a litigation support pro with the responsibility of monitoring, exploring, and acting on audit data? Here’s a “week in the life” based on conversations we’ve had with many of them.
Monday, 4:58 p.m. – I was considering leaving on time today. As I’ve come to expect, however, it’s never too late in the day for litigation support to fix a catastrophe. We recently hired a large review team, and their coding decisions on responsiveness are kind of important. Actually, they’re critical. That must be why nobody wanted to tell me that Jack accidentally mass updated every document in the system and overwrote the responsiveness decisions.
I jump into the Data Grid for Audit application in my Relativity workspace, pull back the list of documents that were accidentally updated by Jack, and use the audit log to overlay the original value back onto the documents. Problem solved.
“How did I do it?” I write back to the group. “I’m an audit superhero.”
Wednesday, 3:14 p.m. – I’ve been getting pinged all day by the case team about our reviewers’ accuracy in that large case I helped course-correct the other day. The next stage of review has begun, and it seems to the case team that second-pass review is overturning tons of the original coding decisions. So many that it’s looking suspicious. I want to help get to the bottom of it.
I open up an audit investigation dashboard in the workspace and begin filtering down to see all document updates made by members of the second-pass review team. The graphs show me that the majority of the updates were actually made yesterday, and they were all made by the same person. It looks like Simon has single-handedly overturned more documents than all the other members of the second-pass review team combined. I’m going to have a talk with him about this.
Wednesday, 3:45 p.m. – Apparently when it rains, it pours. The second I fire off an email to Simon, I get a knock on the door. Several members of the review team are seeing slowness when they run searches. Something tells me I won’t like what I see when I dig in.
I jump into the workspace, filter for these reviewers, and look at their recent saved searches with wait times. A moment or two later, I see the culprit: These reviewers are all running the same document search, which happens to reference 12 other document searches simultaneously. The funny thing is, they could accomplish the same search with only one search instead. I’m going to give Donna, one of our litigation support managers, a heads up on a search training opportunity. Maybe she could send me to training, too…in Hawaii…
Thursday, 10:32 a.m. – Just got a call from an attorney asking how she can review the coding history for a set of key documents. She’s getting a jumpstart on some case strategizing alongside review, but is noticing some inconsistencies compared with where she left off on the documents late yesterday.
Some days my job is easier than others. In this case, I simply open up the doc and click to view the document-specific audits—showing everything from when images were created for the document to who deleted that attorney comment without asking.
Friday, 9:05 a.m. – I think it’s time to come clean with my team on how easy it is to work with audits.
Better yet, I’ll make a few audit dashboards in our case template so all of our cases can be investigated and searched more easily. I set up one dashboard as the default to show the case team stats on reviewer progress over time. They can just click around and filter for information on review status and batch completion if they want to.
I set up a second dashboard to show metrics on saved search performance to make sure those issues are resolved and we aren’t left with any more poorly constructed searches.
Finally, I make one more dashboard showing a list of all the documents that have been viewed. This will come in handy if we later need a quick report of whether someone viewed an inadvertent disclosure.
Making these dashboards available to my team will help us be more efficient at resolving challenges during the normal course of review throughout the week (and hopefully not the weekend). Plus, putting dashboards together is kinda fun—an ideal Friday task. So it’s pretty much a win-win.
Kyle is a product manager at kCura, where he helps guide the development of Data Grid as well as some of Relativity's core features.