An intentional, intelligent ESI protocol can set the tone for your entire e-discovery project.
And, unfortunately, so can a … not-great ESI protocol.
“The importance of strategic foresight, client customization, and staying ahead of modern data challenges can't be overstated,” Alex Jacobs, a director at Berkeley Research Group (BRG), explained after an in-person panel on this topic back in 2023. “Therefore, you should actively engage with clients and technologists from the outset to ensure a defensible, flexible, and cost-efficient ESI protocol. This will not only stand up to scrutiny down the line, but will also allow for adaptive measures as case dynamics change.”
The importance of a sound ESI strategy hasn’t changed since then, but the tools certainly have. Awareness and adoption of generative and agentic AI have grown rapidly over the last two years; both now regularly appear in e-discovery.
So where does modern AI show up in your meet-and-confers?
Do you disclose how you’re using generative AI? Should you negotiate around prompt criteria? Are opposing counsel’s tools and your own subject to the same standards of validation and explainability?
There’s no one-size-fits-all answer yet. But what’s clear is that, as AI becomes more integrated into document review, you must approach early ESI protocol negotiations as the foundation for the kind of cooperation, transparency, and even courtroom defensibility you hope to benefit from down the line.
At a recent Relativity Fest, a packed house of attorneys gathered for “Negotiating ESI Protocols for the Use of Generative AI in e-Discovery,” featuring a panel of brilliant speakers. Through a mock negotiation, they explored the big question: how do you balance innovation with respect for professional caution when the rules are still being written?
Their debate revealed a lot about the different personalities you might encounter when negotiating an AI-inclusive protocol. So let’s meet your potential adversaries—and the best way to work with them.
For a whimsical visualization of this mock negotiation, check out our illustrated e-book—"Drawing Conclusions: Negotiating AI in Your ESI Protocols"—here.
The Willing Collaborator
Tone: Cooperative, curious, forward-thinking
Position: Open to using generative AI, and maybe already doing so. Wants to create shared protocols that reduce friction and build mutual confidence.
This adversary is your dream opponent: collaborative, pragmatic, and genuinely interested in finding efficiencies that benefit everyone. They’re not here to fight about whether AI belongs in discovery; they’re here to make sure it’s used responsibly.
How to recognize them: They’ll ask questions like:
- “What kind of validation workflow are you proposing?”
- “How can we make sure the process is explainable to a judge if needed?”
These are all good signs. As one speaker put it during the Fest session, “You need a protocol established. It makes sense from the perspective of getting the other side on board; this isn’t the kind of process we want to go through again.”
In other words, the Willing Collaborator understands that defining a clear process early prevents re-litigation of discovery issues later.
How to engage them: Share your process proactively. Offer transparency—especially around validation metrics and safeguards—and emphasize proportionality and cooperation. Highlight how your validation process builds confidence without overexposing privileged or proprietary work.
If you’re both using tools like Relativity aiR for Review, point to validation workflows that make your process defensible, explainable, and repeatable. Show how your review meets (or exceeds) the same quality thresholds as traditional and early-generation TAR methods. This is your chance to build mutual trust and set a model for future matters.
The Cautious Realist
Tone: Guarded, thoughtful, risk-aware
Position: Not opposed to AI—but concerned about its reliability, explainability, or misuse. Likely to demand more validation or partial disclosures.
How to recognize them: The Cautious Realist isn’t trying to block progress; they just don’t want to be the test case that goes sideways. Their skepticism is practical. They’ll ask:
- “How do we know your AI didn’t miss something important?”
- “Can we see examples of your prompts or validation reports?”
- “What happens if the model fails—who’s accountable?”
All very fair questions. During the Relativity Fest session, Aurora Bryant noted that negotiating protocols early “can help frontload issues that will come up as the matter progresses.” Ultimately, this will help you “build trust and facilitate cooperation, and it will be helpful in managing the entire litigation.”
For the Realist, that early discussion is crucial to feeling secure.
How to engage them: You’ll need to bridge the knowledge gap without overstepping. Start with education: explain that generative AI can enhance review consistency and speed, but that it still relies on attorneys’ oversight. Emphasize iterative validation—such as multi-stage prompt refinement or statistical calculations like recall and precision.
When pressed about disclosure, take a cooperative stance without giving away the store. One panelist felt that “Things can be negotiated to validate and show methodology; validation can be iterative. But the prompts and review protocol are beyond the pale” when it comes to disclosure expectations.
You can respect that line while still being transparent about quality controls.
You might also lean on Sedona Principle 6, advocating for a responding party’s ability to choose their own methodologies, when responding to requests for exhaustive transparency. The key is positioning your disclosures as a trust-building gesture, not an invitation to second-guess every click.
The Staunch Skeptic
Tone: Adversarial, defensive, maybe nostalgic for paper productions
Position: Deeply distrustful of AI in discovery. Believes it invites risk.
How to recognize them: They’re not here to collaborate—they’re here to protect their position.
For the Staunch Skeptic, AI is just the latest in a long line of technological threats to attorney control. You’ll hear things like:
- “Why should we trust a machine to determine responsiveness?”
- “I need to be involved in every step of the process.”
- “We’re not here to make your life easier.”
You’re not imagining the hostility. It may happen. At the Fest session, for instance, one of our mock negotiators argued: “You can’t sit in my review room and listen to me instructing my reviewers; the same way, you can’t see prompt criteria. It’s not comparable to automated retrieval like search terms—it’s comparable to human review.”
Meanwhile, another countered that hiding prompt criteria “seems suspicious. If it’s reflective of the RFP, why is it a secret?”
This back-and-forth captured the heart of the debate: does transparency about prompt criteria promote fairness, or erode work product protection?
How to engage them: This one’s tough. Don’t try to convert them. Instead, contain the conflict. Focus on validating results rather than exposing the process. Reinforce that your AI review meets the same ethical obligations of completeness and accuracy as human reviewers.
Consider a third panelist's perspective that “we’re focused on the wrong part of it. We’re focused on all the steps we’re gonna take, instead of whether or when we get there. Getting here, by a certain time, is the important part. Meeting the needs of a particular report card isn’t as important as meeting the requirements you’re under obligation to meet.”
In other words, the outcome—compliance with Rule 26 and proportional discovery—is what matters, not micromanaging every iteration of an AI-powered workflow.
If opposing counsel digs in on full disclosure, pivot to an argument about proportionality and efficiency. Offer to exchange validation reports instead of raw prompts, and remind them that overregulating the process could discourage the very innovation that professional bodies and even courts are trying to encourage.
Lessons from the Mock Courtroom
At Relativity Fest last year, Cristin Traylor offered a follow-up session to these mock negotiations: “AI in the Courtroom: A Mock Argument on Generative AI for Document Review.” In that interactive setting, Cristin moderated a hypothetical argument before the court where plaintiff and defense counsel went head-to-head arguing for and against the use of Relativity aiR for Review in their case.
It was a great example of how these discussions play out in action. In the session’s mock argument before a fictional judge, panelists emphasized preparation as the ultimate defense against an adversarial showdown. If your negotiation collapses, you’ll want to have:
- Statistical validation demonstrating accuracy, like precision, recall, and elusion rates. (Are you an aiR for Review user? Read about how this is calculated in the platform here.)
- Clear documentation of your workflow and quality checks.
- Sedona Principle 6 at the ready: “Responding [producing] parties are best situated to evaluate the procedures, methodologies, and technologies appropriate for preserving and producing their own electronically stored information.”
- Standard transparency features from your review tool (like aiR’s built-in citations, rationale, and considerations) that illustrate process integrity.
Being ready to answer challenges and questions with evidence is half the battle. Don’t hesitate to bring a technology expert to the table with you to help tackle each one as it comes.
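If you want a feel for what those validation statistics actually measure, here’s a minimal sketch in Python. This is purely illustrative—it is not how any particular platform computes its reports, and the function name and sample counts are hypothetical—but it shows how precision, recall, and elusion fall out of a coded validation sample:

```python
def review_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Illustrative validation metrics from a coded sample.

    tp: docs the AI flagged responsive that reviewers agreed were responsive
    fp: docs flagged responsive that reviewers coded non-responsive
    fn: responsive docs the AI missed (the risky category)
    tn: non-responsive docs the AI correctly set aside
    """
    return {
        # Of everything the AI flagged, how much was truly responsive?
        "precision": tp / (tp + fp),
        # Of all truly responsive docs, how many did the AI find?
        "recall": tp / (tp + fn),
        # Of the discard pile, what fraction was actually responsive?
        "elusion": fn / (fn + tn),
    }

# Hypothetical sample of 1,000 coded documents
metrics = review_metrics(tp=90, fp=10, fn=10, tn=890)
print(metrics)  # precision 0.9, recall 0.9, elusion ~0.011
```

Numbers like these are exactly what you’d exchange in a validation report instead of raw prompts—evidence of outcome quality without exposing work product.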
It’s Not About the Tool
Ultimately, your strategy for negotiating AI-related protocols isn’t just about the tool you use. It’s about relationships, trust, and framing. Aurora summed it up best: “The more cooperative and transparent you can be, the better and more smoothly things will move forward.”
Whether you’re dealing with a Willing Collaborator, a Cautious Realist, or a Staunch Skeptic, your goal is the same: establish a defensible, explainable process that aligns with professional obligations and keeps the discovery train on the tracks.
Looking to practice your own negotiation strategy? Try drafting an ESI clause that covers generative AI workflows, validation methods, and transparency terms. Test it against each adversary type. The more scenarios you explore, the more confident you’ll be when it’s time for the real meet and confer.
And while you’re at it—why not practice your generative AI skills simultaneously?
We’ve put together some prompt materials you can use to create a custom GPT that will chat with you about just this topic. I’ll note that this is just for fun, with no legal advice offered or implied. Consider it good, lighthearted practice verbalizing your ESI strategies, plus a little experiment with customizing and using AI for a specific task.
To begin, download the training materials, which include a custom prompt, detailed notes from our “Negotiating ESI Protocols” session, and the session deck. Simply attach these documents to the configurations of your custom GPT and, in the Instructions prompt box, add the following:
See attached "Custom GPT Prompt" document for detailed prompt instructions.
Refer to "Negotiating ESI Protocols for the Use of Generative AI in e-Discovery" documents for slides and notes from a live session modeling the kind of mock conversations you should facilitate with the user.
Click here for the files you need and a screenshot of how to set it up in ChatGPT, specifically (or feel free to try the experiment with your gen AI tool of choice).
Let us know what you think—and share your custom GPT ideas for fellow legal practitioners—via LinkedIn.
Graphics for this article were created by Caroline Patterson.