Generative AI tools like Relativity aiR for Review are revolutionizing the way legal teams work, and effective prompt writing has quickly become a must-have skill. By leveraging well-crafted prompts, aiR can rapidly cut through irrelevant documents, allowing you to zero in on key evidence faster than ever.
While powerful, aiR is not here to replace attorneys; rather, it is meant to enhance efficiency and accuracy, allowing you to spend less time sifting through noise and more time focusing on high-value legal strategy and case analysis. For those who have not yet used the application, don’t be intimidated: simply start using the tool and the results will speak for themselves.
Unsure where to start? Let’s explore how to craft effective prompts and build buy-in from your legal teams.
Goodbye to the “Blank Page” Problem
When I discuss writing the first draft of a prompt, I compare it to onboarding a new attorney who is starting on a case with no background knowledge. You have to guide them: explain what makes a document relevant, what context matters, and how to spot key issues. Crafting an aiR for Review prompt works the same way. Think of it as writing a review memo for aiR: the clearer and more detailed your instructions, the better the results.
The challenge? Getting aiR projects moving while waiting for legal teams to draft that very first prompt. That is why Miller Thomson was excited when Relativity invited us to participate in the Advance Access program for aiR for Review’s prompt kickstarter. It generates prompt criteria based on the matter pleadings, allowing legal teams to avoid starting from a “blank page” and jump straight to fine-tuning. In some instances, we were so happy with the kickstarter’s output that we started iterating on the prompt almost immediately, saving time and eliminating roadblocks to getting strong results quickly. The feature is especially helpful for first-time aiR users, as it takes the guesswork out of how a prompt should be crafted.
Building Prompt Mastery: Lessons from the Field
1. Speak the LLM’s language.
Another key benefit of using AI to help write your prompt is that an LLM responds remarkably well when you speak its own language. We found this particularly important for nuanced phrasing: if a phrase leaves room for interpretation among human reviewers, the same ambiguity applies to LLMs. Consider, for example, a reference to “internal communications.” This is how ChatGPT defines an internal communication in one sentence when asked to do so:
“An internal communication is any message, document, or exchange of information shared within an organization among its members for work-related purposes.”
That sounds exactly like what we might be looking for in a project, but what if another entity was included in a lower thread within the same chain? Would ChatGPT still consider it internal? Here is its answer when asked:
“No—even if the topmost email in the thread was internal-only, the moment an external party appears lower in the chain, the entire thread can no longer be treated as purely internal.”
This response is certainly surprising. Interpretations become even more complex when more than one entity could meet the definition of “internal,” as with affiliated organizations, joint ventures, or parent companies and subsidiaries.
Understanding how an LLM interprets your prompt allows you to make the necessary changes to get the desired outputs. Asking the AI to draft parts of the aiR prompt based on your parameters therefore helps you avoid misinterpretations. Much like mastering any new language, over time you will become more fluent in “speaking AI” and get to results more quickly.
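To make this concrete, here is a minimal sketch of that probing step, assuming the OpenAI Python SDK (openai >= 1.0) and an API key in the environment. The model name and question wording are illustrative assumptions, not part of aiR for Review itself; the point is simply to test how a model interprets a term before you rely on it in a prompt.

```python
# A minimal sketch of probing how an LLM interprets a term before using
# it in a prompt. Assumes the OpenAI Python SDK (openai >= 1.0) with
# OPENAI_API_KEY set; the model name and questions are illustrative.
from openai import OpenAI

client = OpenAI()

questions = [
    "In one sentence, how would you define an 'internal communication'?",
    "If the top email in a thread is between employees only, but an "
    "external party appears lower in the same chain, is the thread still "
    "an internal communication? Answer in one sentence.",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    print(response.choices[0].message.content)
```

If the model’s answers diverge from your team’s working definition, write the definition out explicitly in the prompt rather than relying on the bare term.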
2. Involve attorneys where they shine.
Attorneys’ writing and language skills will work in harmony with the language of LLMs. Prompting AI in legal workflows mirrors the art of asking leading questions in litigation. Both rely on carefully controlled inputs to extract a desired or targeted response.
Just as an attorney uses strategic phrasing to elicit particular testimony, legal professionals must craft AI prompts to be precise, clear, and direct, because vague or overly broad questions can yield generic or unhelpful answers (or, in aiR terms, too many borderlines). For example, a criterion like “communications about pricing” will sweep in far more borderlines than “communications between [Company] and its distributors discussing the pricing of [Product] during [the relevant period].” But beware of being too specific: an overly narrow prompt can box you in, potentially misclassifying relevant documents that don't fit your strict criteria.
Finding the sweet spot between specificity and breadth is where strong recall happens. To hit that mark, work backwards: identify the types of documents you need and reverse-engineer your prompt around them, partnering with attorneys for guidance.
3. Validate—and use the right sample.
Once the first iteration of the prompt is complete, it is best practice to validate it on a small data set before applying it to your full target population. Ideally, this sample should be rich in both relevant and non-relevant content while excluding clearly irrelevant documents such as junk mail, email subscriptions, and content relating to other matters. Standard review exclusions should also be applied, such as removing non-inclusive emails, attachment duplicates, encrypted files, and documents without text. Keeping your aiR for Review source data as clean as possible ensures you get the most value from the tool.
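As a rough illustration of that cleanup step, here is a sketch that filters a document listing with pandas. The column names are hypothetical stand-ins for whatever fields your workspace exposes; in practice, this filtering is typically done with saved searches inside Relativity rather than against an export.

```python
# A hypothetical sketch of applying standard review exclusions to an
# exported document listing. All column names are invented for
# illustration; in Relativity this is normally done via saved searches.
import pandas as pd

docs = pd.read_csv("document_listing.csv")  # hypothetical export

clean = docs[
    docs["is_inclusive"]                # keep inclusive emails only
    & ~docs["is_attachment_duplicate"]  # drop attachment duplicates
    & ~docs["is_encrypted"]             # drop encrypted files
    & docs["extracted_text"].notna()    # drop documents without text
]

print(f"{len(clean)} of {len(docs)} documents remain after exclusions")
```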
There are many approaches to gathering this sample, including stratified sampling using the clustering fields, statistical sampling, or leveraging existing coding from prior reviews. Typically, the initial sample ranges from 150 to 300 documents. Once these documents have been reviewed by the legal team, we compare aiR’s output against the reviewers’ tagging. The prompt is then adjusted as needed based on conflicts and borderline documents and rerun on the same sample. Once conflicts from that sample are resolved, we test the prompt on a new sample and repeat the process until we are confident enough to run aiR for Review across the entire document population.
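Here is a minimal sketch of that comparison step, again with hypothetical column names and values: it lines up aiR’s calls against the reviewers’ coding on the validation sample, reports simple recall and precision, and surfaces the conflicts and borderline documents that drive the next iteration of the prompt.

```python
# A minimal sketch of comparing aiR for Review output against reviewer
# coding on a validation sample. The column names ("reviewer_call",
# "air_call") and values ("Relevant", "Borderline") are hypothetical.
import pandas as pd

sample = pd.read_csv("validation_sample.csv")  # hypothetical export

reviewer_rel = sample["reviewer_call"] == "Relevant"
air_rel = sample["air_call"] == "Relevant"

# Recall: share of reviewer-relevant documents that aiR also flagged.
recall = (reviewer_rel & air_rel).sum() / reviewer_rel.sum()
# Precision: share of aiR-relevant documents the reviewers agreed with.
precision = (reviewer_rel & air_rel).sum() / air_rel.sum()
print(f"recall={recall:.0%}  precision={precision:.0%}")

# Conflicts and borderline calls feed the next round of prompt edits.
conflicts = sample[reviewer_rel != air_rel]
borderlines = sample[sample["air_call"] == "Borderline"]
print(f"{len(conflicts)} conflicts, {len(borderlines)} borderline documents")
```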
Make Prompting an Art You Own
There are countless ways to use aiR for Review. Its value isn’t limited by the size of your data: any firm or matter, large or small, can benefit from its adoption.
For smaller firms, it’s a no-brainer: aiR for Review accelerates first-pass review with minimal setup, delivering fast, cost-effective results without overwhelming resources. Meanwhile, medium and large firms can get their feet wet with smaller cases or with targeted review on ongoing projects, giving legal teams a low-risk approach that builds confidence before scaling up to more complex reviews. Visually demonstrating aiR’s rationale and consideration outputs is far more persuasive than abstract explanations. Pair that with a simple cost analysis comparing manual review against the efficiency of aiR, and the case for adoption becomes undeniable.
But in a world of LLMs and the solutions built on them, quick and accurate results hinge on the quality of the prompts you provide, and aiR for Review sets you up for success from the start. The secret is simple: start, refine, repeat.
Over time, the more you use the solution, the more you will see that prompt writing isn’t just a technical skill—it’s an art. The earlier you start honing that art, the faster you will turn AI from a tool you use into a skill you own. AI is already changing the rules of document review. The real question is: will you be the one shaping that change, or waiting for it to shape you?
Graphics for this article were created by Caroline Patterson.