
Committed to Responsible AI: Introducing Relativity's AI Principles

Chris Brown

Artificial intelligence (AI) continues to play an integral role in data discovery. It benefits our customers daily—from automating repetitive tasks to searching through massive data sets. With it, legal professionals can focus more on high-value activities and more quickly develop and act on their insights.

But the use of AI is not without risks—and to minimize those risks, new applications of AI must be built with responsibility and usefulness top of mind. At Relativity, we are committed to responsible AI. This means that we develop and deploy valuable AI technologies with measured intention, using processes that are thoughtful, disciplined, and engender trust among our customers.

This commitment is particularly important to us given the vital role that our products, customers, partners, and the whole Relativity community play in supporting justice systems—and the people who count on them—around the world. With AI, the how is as important as the what.

To that end, I’m excited to announce that we’ve established a set of principles to guide our product development and underscore our commitment to you—our community. We may evolve these principles over time as we continue to learn, but what won’t change is our dedication to being a responsible steward within our industry.

Relativity’s Responsible AI Principles

1. We build AI with purpose that delivers value for our customers.

Every AI system we create is designed to help people solve a specific legal or compliance challenge easily, productively, and defensibly. Our AI development isn’t dictated by the latest trends or news headlines. Instead, it is fit for purpose—driven by the people who use our AI, the problems they’re trying to solve, and the capabilities and limitations of the technologies we create.

We also invest significant resources to develop AI models that function across the various types of data our customers encounter and meet the requirements of each use case we define. Ultimately, our AI solutions should be intuitive and tailored to our customers’ needs.

2. We empower our customers with clarity and control.

We design systems and interactions that harness and amplify human expertise to help our users best accomplish their work. It’s important that we’re open about our AI so customers like you are well informed and able to defend your processes.

This includes offering clear information to help you understand how our models are trained, the purpose they were built for, and what each model looks at when making decisions.

We are also honest about the fact that no AI system is perfect, including ours. Despite the best intentions, AI systems can at times deliver incorrect results, behave unexpectedly, or contribute to biased decision-making. By documenting our development processes and training our users, we can equip our community to manage these factors appropriately and use our AI in a way they can trust.

3. We ensure fairness is front and center in our AI development.

Throughout development we carefully consider the fair treatment of not only our users, but anyone who could be impacted by the AI we’re creating, such as custodians in a litigation or subjects of an investigation. We strive for fairness by seeking diverse perspectives from a wide variety of sources so our models can be as representative as possible, and we use human-centric design processes to focus on user needs.

We test our models for potential bias, and if we find any, we pause to thoughtfully consider mitigations, document our decisions, and ensure our customers are informed about the model's appropriate purpose and use. We believe that responsible use of AI can lead to more equitable outcomes for all, and we work to contribute to this ideal as we build our models.

4. We champion privacy throughout the AI product development lifecycle.

Relativity is a global company that operates in an ever-evolving data privacy landscape. Given this, our privacy principle reflects the latest applicable privacy regulations and guidance, leveraging the principles of Privacy by Design.

We make privacy the ‘default setting’ within our AI products and fully embed privacy into our AI design process. In doing so, we use various operational, contractual, and security measures to uphold the technical and procedural safeguards we commit to for our customers. We also practice data minimization in the models we train, using personal data only to the extent necessary to fit the purpose of the model and help users efficiently organize information.

5. We place the security of our customers’ data at the heart of everything we do.

Relativity has a best-in-class security program, championed by Calder7, our award-winning security team. This team works around the world and around the clock to protect and defend our customer and corporate environments, proactively mitigate threats to our company, and enable clients to control the security of their data through greater transparency.

We’ve built a strong culture of security at Relativity, cultivated with the right mix of people, technology, and processes. One aspect of our security culture is “defense in depth,” a strategy that uses several independent security control layers to protect a company’s assets. Calder7’s comprehensive tactics include real-time defect detection, strict access controls, proper segregation, encryption, integrity checks, and firewalls.

Our proactive, continual focus on security means that you can trust us with your data, and trust our AI solutions that reference, automate, and augment that data to accelerate your productivity.

6. We act with a high standard of accountability.

We understand that people rely on our AI technologies to find the truth in their data, and we’re committed to developing these technologies in a responsible way. We put every AI model within Relativity through an extensive peer review and validate each one on a representative set of customer use cases to ensure we deliver on the expectations of our community.

Being accountable for the AI we develop also means we’re responsible for building AI that’s trustworthy. We earn this trust by delivering reliable, well-documented output that you can confidently understand and defend. Our AI teams are accountable for model quality and for ensuring that we meet the requirements of the use cases we support. But accountability doesn’t rest solely with these teams—it’s shared by everyone at Relativity. We’re all responsible for following our AI principles and empowered to do what’s needed to keep our AI systems safe.

Our Principles in Practice

Having strong AI principles is important, but to make any real impact, they must actively guide our everyday decision-making. One example of how we’ve used our principles to help ensure that our AI systems are aligned, useful, and safe can be found in the development of our sentiment analysis capability in RelativityOne.

The feature provides machine learning-powered labels that indicate the presence of positive or negative tone in documents, along with emotions like desire, fear, and anger. Users can then search on these behavior signals to quickly pinpoint items of interest to a case, or scan the top positive and negative sentences of each document. Additionally, an updated document viewer highlights sentences that contain behavior signals and shows why documents were flagged. You’ll find sentiment analysis useful in many data exploration workflows, such as analyzing a judge’s comments in court transcripts or finding harassing communications in an investigation.
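To make the idea of sentence-level behavior signals concrete, here is a minimal sketch of that kind of labeling. It is illustrative only, not Relativity's implementation: the score_sentiment function is a hypothetical stand-in for whatever model actually assigns the scores.

```python
# Illustrative sketch of sentence-level sentiment labeling, not
# Relativity's implementation. `score_sentiment` is a hypothetical
# stand-in for the model that scores each sentence.
import re

def score_sentiment(sentence: str) -> float:
    """Hypothetical model call returning a score in [-1.0, 1.0],
    where -1.0 is strongly negative and 1.0 strongly positive."""
    raise NotImplementedError("plug in a real sentiment model here")

def top_sentences(document: str, k: int = 3) -> dict[str, list[str]]:
    """Score each sentence and surface the k most negative and k most
    positive ones, mirroring the 'top sentences' view described above."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    ranked = sorted(sentences, key=score_sentiment)
    return {"most_negative": ranked[:k], "most_positive": ranked[-k:]}
```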

In early versions of the model, our applied science team observed during testing that the model’s public training data carried underlying human biases around certain ethnicities, genders, and religions. For example, when the model saw the same sentence with different countries as the subject, it would incorrectly assign different sentiment scores based solely on the country. We determined that the risk of introducing harm via this bias was too high, and we delayed release of the feature until we could build a new model—entirely from scratch—that corrected for it. Read the full story here.
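The kind of check described here can be approximated with a simple counterfactual probe: score the same template sentence with only the country swapped, and flag any pair whose scores diverge. The sketch below illustrates the general technique, not Relativity's internal test suite; the score_sentiment function, the template sentence, and the 0.05 threshold are all hypothetical.

```python
# Minimal sketch of a counterfactual bias probe: otherwise-identical
# sentences should receive (near-)identical sentiment scores regardless
# of which country is mentioned. `score_sentiment`, TEMPLATE, and the
# threshold are hypothetical, not Relativity's internal tooling.
from itertools import combinations

TEMPLATE = "The people of {} submitted their filings on time."
COUNTRIES = ["France", "Nigeria", "India", "Brazil", "Japan"]

def score_sentiment(sentence: str) -> float:
    """Hypothetical call into the model under test, returning [-1.0, 1.0]."""
    raise NotImplementedError("plug in the model under test")

def probe_country_bias(threshold: float = 0.05) -> list[tuple[str, str, float]]:
    """Score the same sentence with only the country swapped and flag
    any pair whose scores differ by more than `threshold`."""
    scores = {c: score_sentiment(TEMPLATE.format(c)) for c in COUNTRIES}
    flagged = []
    for a, b in combinations(COUNTRIES, 2):
        gap = abs(scores[a] - scores[b])
        if gap > threshold:  # identical sentences, divergent scores
            flagged.append((a, b, gap))
    return flagged
```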

Why Now?

You may be asking, “Why is Relativity sharing these now?” Great question. There is no denying that we’re at a disruptive inflection point in technological advancement. Generative AI is the elephant in the room, and at the center of discussion across the legal industry and beyond—and rightly so. These models are poised to transform the role of AI in our personal and professional lives. In light of this surge in momentum, we believe that now is the time to clarify how we approach our own AI advancement by formalizing our AI principles and sharing them with our community. 

But let’s be clear—neither AI nor the rigor we place around how we leverage and advance it is new for Relativity. Since we delivered the first version of Analytics to the market in 2008, proactively focusing on the intersection of artificial and human intelligence to accelerate your work has been at the heart of everything we do.

Cut to present day, and Relativity continues to innovate as we strategically and responsibly integrate AI across the end-to-end e-discovery journey. From tackling sensitive data with Text IQ for Privilege, Personal Information, and Data Breach, to uncovering emotion and tone with sentiment analysis, to game-changing efficiency with Translate (where we’re already leveraging generative models), and, finally, to our most recent launch of Review Center—we are determined to transform how you do your work.

As we look ahead, it is essential in this next phase of technological advancement to actively collaborate with our community to unlock new possibilities and optimize the value we deliver. To that end, and thanks to our unique collaboration with Microsoft’s engineering teams, we’re already piloting solutions built on generative models such as GPT through the Azure OpenAI Service. These pilots aim to discover how we can improve outcomes for key use cases in your investigations and litigation projects, including relevance review, privilege review, and PII detection, with more to come.

In summary, our AI principles signify our commitment to approaching innovation with a deep sense of responsibility—both in how we operate internally, and in how we show up as a leader in our industry. We’ll continue to learn and adapt as we go, with an unwavering commitment to always exceeding the expectations of our community—and of each other. 

Graphics for this article were created by Natalie Andrews.


Chris Brown is the chief product officer at Relativity. He leads our product and user experience teams and is responsible for the development of Relativity’s product vision, strategy, and product roadmap in collaboration with engineering.
