Generative AI has rapidly evolved from a buzzword to a transformative force in the e-discovery industry. In a recent fireside chat, industry experts shared their experiences with AI-driven review tools, particularly Relativity aiR for Review, shedding light on the challenges, benefits, and future directions of this technology.
Meet the session speakers:
- Jonathan Moody (Moderator) – Vice President, eDiscovery Sales, JND eDiscovery
- Ben Sexton – Senior Vice President, Innovation and Strategy, JND eDiscovery
- Jeff Gilles – Lead Generative AI Solutions Engineer, Relativity
- Rachel Koy – Director of Innovation and Strategy, JND eDiscovery
- Hannah Baxter – Lead Account Executive, Relativity
Read on for key insights from the discussion.
How Generative AI is Impacting Industry Professionals
The panelists had diverse experiences with generative AI. Ben Sexton discussed JND’s methodical six-month evaluation of AI tools to assess their real-world benefits. Rachel Koy, initially skeptical, found herself impressed after stress-testing aiR for Review. Jeff Gilles acknowledged AI’s potential, though he doesn’t personally use it daily, while Hannah Baxter emphasized her role in educating clients on AI’s capabilities and workflow integration.
The Challenges of AI Adoption: Getting Buy-In
One of the key discussions centered around securing buy-in for AI-driven review. Ben highlighted that JND engaged existing clients by exploring use cases relevant to their matters, discovering that prompting was more straightforward for producing parties than for plaintiffs. He emphasized the importance of demonstrating AI’s internal benefits on small document sets before addressing broader validation concerns.
Hannah stressed education as the cornerstone of AI adoption, advocating for a deep understanding of the tools to confidently answer stakeholder questions. Rachel and Jeff both underscored the value of hands-on experience; for example, testing AI on as few as ten documents can showcase its effectiveness and drive adoption.
Choosing the Right Documents for AI Testing
An audience member asked about selecting an initial test set of documents. Ben suggested using a “special interest sample” of highly relevant or borderline documents to quickly gauge AI’s effectiveness. He advised against over-engineering the process, recommending simple exposure to the UI and results. He also noted that multiple checkpoints allow users to back out if the tool doesn’t meet expectations.
Is There a Minimum Data Requirement for AI Effectiveness?
Unlike traditional TAR methods that require significant data volumes, aiR for Review provides value whether analyzing ten or a million documents. However, as Jeff pointed out, the level of prompt refinement should correspond to the data set size: more extensive data sets warrant greater upfront investment in prompt iteration.
Rachel also highlighted that AI’s consistency helps mitigate human fatigue in document review.
Unforeseen AI Features that Resonate with Clients
Clients have gravitated towards aiR’s ability to provide rationales and counterpoints for its decisions, addressing the common concern that AI operates as a “black box.”
Ben praised the user-friendly prompt iteration interface, while Jeff noted the tool’s structured ranking system (from 0 for irrelevant to 4 for hot documents) as particularly beneficial.
Defining AI’s Role: Is This TAR 3.0?
The panel debated whether generative AI represents the next evolution of technology-assisted review (TAR). Ben suggested that, while it might be classified under TAR for legal consistency, calling it “review automation” could also be appropriate. An audience member proposed “intelligent augmented review” (IAR) as an alternative.
Jeff pointed out key differences from TAR: generative AI relies on instructions rather than training with seed sets, incurs costs per run, and more closely mimics human review workflows.
Optimizing Review Strategies with AI
Pricing concerns arose regarding re-running AI on evolving document sets. Jeff recommended leveraging clustering and small AI passes to refine search strategies before full-scale review.
Ben advised focusing on clear, high-level objectives rather than overcomplicating prompts, illustrating with a data breach example where broad descriptions yielded surprisingly precise results.
Hybrid Approaches: Active Learning Meets AI
JND has experimented with hybrid models, using aiR for Review to prime active learning models or vice versa to streamline review workflows. While not yet widely used for QC, these approaches hint at future possibilities.
Combining tools in this way can improve accuracy, cost-effectiveness, and flexibility across the workflows each unique matter may require.
Addressing Common AI Concerns
Jeff tackled the fear of AI hallucinations, explaining that aiR for Review requires citations to validate its decisions, reducing erroneous outputs. He also reassured users that aiR does not train on proprietary data.
Hannah emphasized proactive education as the best way to alleviate concerns—letting skeptics test the tool on a small sample often wins them over.
Final Thoughts
AI is undeniably reshaping e-discovery, offering efficiency, cost savings, and new insights into document review. As the panelists reflected on the future, they each shared key takeaways:
- Jeff Gilles: “AI is like an ambidextrous utility player in baseball—it can review in multiple languages and surface relevant foreign-language documents with rationale and considerations translated into English.”
- Jonathan Moody: “aiR for Review is cheaper than traditional translation, and I’ve advised clients to use it to identify relevant foreign-language documents before committing to full translation.”
- Ben Sexton: “Q&A tools are coming, but they’re still limited by context window constraints. In time, they’ll likely be the most utilized generative AI tools in this industry.”
- Rachel Koy: “Gen AI should be treated like an employee: attorneys have a duty to supervise their teams, and that same oversight should apply to AI-driven document review.”
- Hannah Baxter: “The best way to overcome skepticism is to let people see AI in action. Running aiR for Review on just a few documents can change minds quickly.”
This fireside chat provided valuable perspectives on AI’s growing influence in e-discovery. Whether you’re exploring AI for the first time or looking to refine your approach, these insights can help navigate the evolving landscape.
Graphics for this article were created by Kael Rose.
