Editor’s Note: Anderson Hind, director of technology at Sky Discovery, also contributed as an author on this piece.
Not every fact in e-discovery can be uncovered with words.
A few years ago, our team worked on a construction litigation that revolved around a failure in a hydroelectric dam. Specifically, it came down to cracks in the dam wall.
The legal team on the case was busy organising experts, and they needed as much evidence as they could find relating to these defects in order to brief them. In this instance, they knew the company had captured the evidence through photographs. The challenge, however, was trying to find the photos that showed the defect.
Like many matters in construction litigation, the data set for this case involved a huge number of documents—and within that set, a large number of photographs.
To uncover text-based documents, we can use search terms. But for photos, typical e-discovery workflows tend to come up a bit short. If we’re lucky, we can get a Created or Last Saved date out of the metadata, and if we’re really lucky maybe even get GPS coordinates. Often, though, that’s about where the clues end. If you want insight into what’s going on in an image, unfortunately, you just have to review it manually.
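When GPS data does survive in a photo's EXIF block, it arrives as degrees, minutes, and seconds plus a hemisphere reference, which most tools want converted to decimal coordinates. A minimal sketch of that conversion (the coordinate values below are invented for illustration, not from a real photo):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds to a decimal coordinate.

    ref is the hemisphere reference tag: "N", "S", "E", or "W".
    Southern and western hemispheres are negative by convention.
    """
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# Hypothetical GPS values as they might be pulled from a photo's EXIF data
latitude = dms_to_decimal(33, 51, 35.9, "S")    # ≈ -33.8600
longitude = dms_to_decimal(151, 12, 40.0, "E")  # ≈ 151.2111
```

Coordinates like these can help place a photo on a map, but they still say nothing about what's in the frame.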
Or at least, that’s how it used to be. With the right technology, this predicament is no more.
High Tech for Visual Data
For the hydroelectric matter a few years ago, the manual solution was what we went with. Needless to say, it was time-consuming and expensive to get eyeballs on all of those photos.
So we’ve known about this photo problem in litigation for some time without being able to offer much in the way of help. However, recently, there has been an interesting development in the cloud provider space.
All of the big providers—Amazon Web Services, Microsoft Azure, and Google Cloud—have been developing new services, offering them to the public via their clouds, and making them accessible to developers.
If we can use these innovations to simplify our search for our favourite beach or a memorable cake in our personal photo libraries, why not tap into them for e-discovery?
One such innovation that piqued our interest was object recognition. With it, you can submit a photo and the service will analyse it against a model trained on millions of images. For anything it recognises, it returns a matching label with a confidence score.
The service is pretty helpful across a variety of images:
For a photo of a city skyline, the image recognition returns Building, Bridge, Suspension Bridge, Town, Urban, Metropolis, and City.
A mountain landscape comes back as Slope, Nature, Mountain Range, Outdoors, Mountain, Peak, Plant, Fir, Tree, and Countryside.
A photo of a person outdoors returns Human, Person, Clothing, Apparel, Outdoors, Nature, Art, Field, Painting, and Water.
After playing with this technology, we've been impressed. Thinking back on our hydroelectric case, we of course had to know whether it would have helped us with the defects in the dam wall, so we ran a photo of the dam through the service.
It came back with the label "Dam" at 95 percent confidence, so we knew it could at least point us in the right direction.
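Working with a result like that is straightforward: the labelling call hands back a list of labels with confidence scores, which you can filter against a threshold. A rough sketch—the response below is fabricated to mirror the shape of AWS Rekognition's DetectLabels output, not a real API result:

```python
# Sample response shaped like AWS Rekognition's DetectLabels output.
# Values are illustrative, not from a real call.
response = {
    "Labels": [
        {"Name": "Dam", "Confidence": 95.0},
        {"Name": "Water", "Confidence": 88.2},
        {"Name": "Architecture", "Confidence": 41.7},
    ]
}

def confident_labels(resp, threshold=90.0):
    """Keep only labels at or above the confidence threshold."""
    return [label["Name"] for label in resp["Labels"]
            if label["Confidence"] >= threshold]

print(confident_labels(response))  # ['Dam']
```

Tuning the threshold is a trade-off: too high and you miss relevant photos, too low and reviewers wade through noise.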
Equalizing Data Types for Discovery Work
The exciting thing about bringing these services to e-discovery is that, for the first time, they give us a tool for meaningfully searching photos in case data. Like our hydroelectric case, legal teams around the world are dealing with matters whose bottom lines hinge on the truth captured in photographic data.
A picture may be worth 1,000 words, but if you can’t figure out which words those are and how to find them in a tangled mess of millions of other words, that value won’t do you much good.
So our team set out to bring that value to the forefront.
We’ve been experimenting with this service for a while in an ad hoc manner for our clients, but recently wrapped it up into a nice solution called Sky Photo Search. The app won the Community Choice category of the 2020 Relativity Innovation Awards, and we’re quite proud of how it’s come together.
With Sky Photo Search, a Relativity user can effectively click one button inside the platform to reveal more about visual data. Behind the scenes, the application securely connects with a designated cloud provider (such as AWS), gets the photo label results, and then puts them back into the user’s review workspace. When it’s done, every photo in the workspace is text searchable.
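The plumbing behind that one button can be sketched roughly as follows. To be clear, the function and field names here are hypothetical stand-ins for illustration, not Sky Photo Search's actual code, and `label_photo` stands in for the real cloud API call:

```python
def label_photo(photo_bytes):
    """Stand-in for a cloud labelling call (e.g. Rekognition).

    A real implementation would send the bytes to the provider
    and return the labels it recognises; these are illustrative.
    """
    return ["Dam", "Water", "Concrete"]

def enrich_workspace(documents):
    """Attach a text-searchable label field to every photo document."""
    for doc in documents:
        doc["PhotoLabels"] = "; ".join(label_photo(doc["image"]))
    return documents

# A hypothetical review workspace with one photo record
docs = [{"DocID": "PHOTO-0001", "image": b"..."}]
enriched = enrich_workspace(docs)
print(enriched[0]["PhotoLabels"])  # Dam; Water; Concrete
```

Once the label text sits in a field alongside each photo, the platform's ordinary keyword search does the rest.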
Since the application's initial launch, we've expanded the solution to find other traditionally non-searchable items, too. Using this same tool, we're now able to detect handwriting in documents and transcribe the results.
We’ve found it’s actually much easier for the reviewer to read the transcribed text than the original human scrawl. (One note of warning, though: We haven’t actually put through any doctor’s notes yet, and wonder if those may be just too far beyond saving.)
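For the curious, a text-detection response follows the same pattern as the label results: detections come back at both line and word level, each with a confidence score, and the line-level entries can be joined into a transcript. A rough sketch, modelled on the shape of AWS Rekognition's DetectText output (the detected text is invented for illustration):

```python
# Sample response shaped like AWS Rekognition's DetectText output.
# Values are illustrative, not from a real call.
response = {
    "TextDetections": [
        {"DetectedText": "Inspection notes:", "Type": "LINE", "Confidence": 92.1},
        {"DetectedText": "crack at panel 4", "Type": "LINE", "Confidence": 87.5},
        {"DetectedText": "Inspection", "Type": "WORD", "Confidence": 93.0},
    ]
}

def transcript(resp, threshold=80.0):
    """Join confident line-level detections into a searchable transcript."""
    lines = [d["DetectedText"] for d in resp["TextDetections"]
             if d["Type"] == "LINE" and d["Confidence"] >= threshold]
    return "\n".join(lines)

print(transcript(response))
```

As with photo labels, the transcript then lands in an ordinary text field where reviewers can search it like any other document.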
In summary, this solution can offer a significant improvement over manual review of photos for construction, infrastructure, and many other matters. Let us know what you think in the comments or by contacting our team.
Anderson Hind, Director of Technology, Sky Discovery
As the director of technology at Sky Discovery, Anderson has significant experience working with partners, senior associates, and CIOs to provide strategic advice and innovative solutions in service of resolving challenging situations. He has worked with a variety of software to design solutions and integrate the results into Relativity and the e-discovery process.