Building Responsible AI
Relativity’s AI Principles
Artificial intelligence benefits our customers in their daily data discovery work, from automating repetitive tasks to searching through massive data sets. With it, legal professionals can place more of their focus on high-value activities and more quickly develop and act on their insights.
But the use of AI is not without challenges. To minimize the risks, we are committed to building new applications with responsibility and usefulness top of mind.
Our AI Principles
These principles guide our product development and underscore our commitment to you: our community. We may evolve these principles over time as we continue to learn, but what won’t change is our dedication to being a responsible steward within our industry.
We build AI with purpose that delivers value for our customers.
- Every AI system Relativity creates is designed to help people solve a specific legal or compliance challenge easily, productively, and defensibly.
- Our AI development is driven by the people who use the technology, the problems they’re trying to solve, and the capabilities and limitations of the technologies we create.
- We invest significant resources to develop AI models that function across the various types of data our customers encounter and meet the requirements of each use case we define.
- Ultimately, our AI solutions should be intuitive and tailored to our customers’ needs.
We empower our customers with clarity and control.
- We’re open about our AI so customers are well informed and able to defend their processes.
- We offer clear information to help users understand how our models are trained, the purpose they were built for, and what each model looks at when making decisions.
- We are honest about the fact that no AI system is perfect, including ours. Despite the best intentions, AI systems can at times deliver incorrect results, behave unexpectedly, or contribute to biased decision-making.
- We document our development processes and train our users so our community can manage these factors appropriately and use our AI in a way they can trust.
We ensure fairness is front and center in our AI development.
- We believe the responsible use of AI can lead to more equitable outcomes for all, and we work to contribute to this ideal as we build our models.
- We carefully consider the fair treatment of not only our users, but anyone who could be impacted by the AI we’re creating, such as custodians in a litigation or subjects of an investigation.
- We strive for fairness by seeking diverse perspectives from a wide variety of sources so our models can be as representative as possible, and we use human-centric design processes to focus on user needs.
- We test our models for potential bias, and if we find any, we pause to thoughtfully consider mitigations, document our decisions, and ensure our customers are informed about the model's appropriate purpose and use.
We champion privacy throughout the AI product development lifecycle.
- Our privacy practices reflect the latest applicable privacy regulations and guidance, building on the principles of Privacy by Design.
- We make privacy the default setting within our AI products and fully embed it into our AI design process.
- We use a range of operational, contractual, and security measures to uphold the technical and procedural safeguards we commit to for our customers.
- We apply data minimization to the models we train, using personal data only to the extent necessary to serve the model's purpose and help users efficiently organize information.
We place the security of our customers’ data at the heart of everything we do.
- We’ve built a strong culture of security at Relativity, cultivated with the right mix of people, technology, and processes. Our award-winning security team, Calder7, works around the world and around the clock to protect and defend our customer and corporate environments, proactively mitigate threats to our company, and enable clients to control the security of their data through greater transparency.
- Calder7’s comprehensive tactics include real-time threat detection, strict access controls, data segregation, encryption, integrity checks, and firewalls.
- Our proactive, continual focus on security means that you can trust us with your data, and trust our AI solutions that reference, automate, and augment that data to accelerate your productivity.
We act with a high standard of accountability.
- We put every AI model through an extensive peer review and validate each one on a representative set of customer use cases to ensure we deliver on the expectations of our community.
- We’re responsible for building AI that’s trustworthy. We earn this trust by delivering reliable, well-documented output that customers can confidently understand and defend.
- Accountability doesn’t rest solely with the AI teams; it’s shared by everyone at Relativity. We’re all responsible for following our AI principles and empowered to do what’s needed to keep our AI systems safe.
See Relativity's AI principles in practice
Harm, Less? Why Relativity built fit-for-purpose AI models to power sentiment analysis
Does causing harm require intentionality? Recklessness? Sentience? See how we engineered sentiment analysis in RelativityOne with responsible AI in mind.
Developing Responsible AI Solutions for e-Discovery and Investigation
Relativity's sentiment analysis model is industry-leading in mitigating bias. Learn how we built it, and why we couldn't bring it to market any other way.
Artificial Intelligence in e-Discovery: Explaining, Describing, and Defending What is Beyond our Field of Vision
We asked our partners (and ChatGPT) to give us their take on artificial intelligence in e-discovery. Interestingly, we found four common themes.