You’ve probably heard the words “user experience” used to describe everything from e-discovery software to salad spinners, but what does it mean for a product to have a good user experience? Ultimately, it comes down to your own perspective on the experience you have with a product. There is no user experience without you, the user.
Products don’t end up being pleasant to use by accident. Building products that people enjoy requires designers to understand the needs and expectations of their users: the steps they take to complete their goals, as well as how they believe the tools and processes work. We call this belief a mental model.
As Relativity designs new features or updates existing ones, we start by updating our knowledge about the work users need to get done and how they use our product now. This gives us an understanding of the problems we need to solve and ideas about how to approach them. We build prototypes that leverage these insights along with UX design best practices (tactics like reducing clicks, avoiding ambiguous or irrelevant language or visuals, and providing an appealing look and feel), and then test them directly with our users. We’ll do this until we feel convinced that the design meets their needs and is easy for them to use. Only after this testing is complete are we ready to launch an update.
This is true for all parts of Relativity’s UI, including AI features. We work to understand our users so that they don’t have to deeply understand AI algorithms and background processes. Instead, they can get started right away in a simple, approachable interface.
So, while the basics of good UI design are applicable to AI, there are still some specific considerations that come into play when designing for AI.
Staying Focused on Solving Problems
Technologies are often built to address a particular challenge, but they can also be leveraged to build other new and exciting things. Some of these new use cases are more beneficial than others. For example, when you think about it, there are plenty of things you could do with a hammer, but not all of them would be helpful, right?
Similarly, unless you begin your design with a focus on user needs, it would be easy to build features that leverage AI but are not very helpful. That's why, with new technologies like AI, we start by asking questions like, “What problems could this new technology solve?” and “Which use of this technology would be most helpful for making our users’ work easier or better?”
A good understanding of what’s needed versus what’s possible keeps us focused on the problems that matter. It also allows our product teams to make the most of their often limited resources.
For example, a recent feature we’ve investigated is named entity recognition (NER). NER identifies proper nouns in text. It is most often associated with people or corporations, but there are many other types of entities, such as identification numbers, locations, and even dates (think date of birth, launch date, or closing date). It’d be easy to go down the rabbit hole of implementing everything that’s possible, but which entities are actually useful or valuable? In talking to our users about how a feature like this might impact their workflow, it’s become clear that individuals are the priority, and that such a feature would be even more valuable than a similar feature we already have: name normalization.
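To make the concept concrete, here’s a minimal sketch of NER using the open-source spaCy library. This is purely illustrative and is not how Relativity implements entity recognition; the example text, and the exact labels the model returns, are assumptions.

```python
# A minimal NER illustration using the open-source spaCy library.
# Not Relativity's implementation; just a sketch of the concept.
import spacy

# Requires the small English pipeline: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

text = "Maria Lopez emailed Acme Corp's counsel on March 3, 2021 from Chicago."
doc = nlp(text)

for ent in doc.ents:
    print(ent.text, "->", ent.label_)

# Typical output (labels can vary by model version):
#   Maria Lopez -> PERSON
#   Acme Corp -> ORG
#   March 3, 2021 -> DATE
#   Chicago -> GPE
```

Even this toy example hints at the design question above: the model can surface people, organizations, dates, and locations alike, so deciding which of those to expose, and how, is a user-needs question rather than a purely technical one.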
Gaining User Trust
At Relativity, we believe AI is like a collaborator in your work. Good collaborations are built on trust—you have confidence that your collaborator will do their job, and will do it right. Many things can factor into this trust, but the most important ones are often transparency, communication, and previous experience.
One big thing we’re investigating in RelativityOne is how we present AI recommendations and insights in a trustworthy way. AI and machine learning are often perceived as opaque; their results can be hard to understand or explain, which makes it harder to accept or trust them in a defensible way.
Our colleagues at Text IQ have addressed some of these issues with their Priv IQ product by including explanations of why a document was classified a certain way, as well as providing supporting evidence. A document might be classified as privileged, for example, because of the type of language being used or because an individual identified as an attorney is in the conversation. This information is crucial for use in a privilege log, the document that describes why certain documents were withheld from production.
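To illustrate the idea, an explainable classification result might pair a label with the reasons and evidence behind it. The sketch below is hypothetical; the field names and values are invented for illustration and do not reflect Text IQ’s actual data model or API.

```python
# Hypothetical sketch of an explainable classification result. All field
# names and values here are invented; this is not Text IQ's actual data model.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    snippet: str  # excerpt from the document that supports the classification
    reason: str   # human-readable rationale

@dataclass
class PrivilegeClassification:
    document_id: str
    label: str         # e.g., "privileged" or "not privileged"
    confidence: float  # model confidence, 0.0 to 1.0
    evidence: list[Evidence] = field(default_factory=list)

result = PrivilegeClassification(
    document_id="DOC-00042",
    label="privileged",
    confidence=0.91,
    evidence=[
        Evidence("Please keep this advice confidential...",
                 "Language characteristic of legal advice"),
        Evidence("From: jdoe@lawfirm.com",
                 "Sender identified as an attorney"),
    ],
)

# The evidence list gives a reviewer the "why" behind the label, which is
# exactly the kind of detail that feeds into a privilege log.
for e in result.evidence:
    print(f"{result.label} ({result.confidence:.0%}): {e.reason}")
```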
Working with Complexity
AI is complex and technical. But when you think about it, so are many objects and products in the world, and plenty of them don’t require technical knowledge to use effectively. For example, we may have complicated electrical systems in our houses, but we don’t need to understand that complexity to use power outlets and light switches.
A big part of the job of UX in AI is taking technological complexity and simplifying how users interact with it. This requires a deep understanding of the opportunities and constraints presented by both the technology and the user, so that the UX designer can act as a go-between and build an interface that reduces the perceived complexity.
For example, active learning in RelativityOne has a complex and technical underpinning, but it is very user-friendly to set up and use. Under the hood, classification algorithms evaluate documents across multiple dimensions and predict whether each one is relevant based on previous coding decisions. To put the software to work, though, a user only needs to understand that they can create a new active learning project by providing a handful of sample documents coded for relevance, which gets the computer started on making its own classifications.
From there, the algorithm works behind the scenes to surface the documents predicted most likely to be relevant, and it automatically improves its predictions over time. Users can also QC these predictions iteratively, as appropriate for their project.
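For the technically curious, here’s a toy sketch of the general active learning pattern using scikit-learn. This is not RelativityOne’s algorithm; the documents, labels, and scores below are invented for illustration.

```python
# Toy sketch of the active learning pattern using scikit-learn.
# Not RelativityOne's algorithm; documents and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A handful of documents already coded for relevance (1 = relevant, 0 = not).
seed_docs = [
    "merger agreement draft attached for review",
    "lunch menu for the company picnic",
    "due diligence findings on the acquisition target",
    "reminder: parking garage closed on Friday",
]
seed_labels = [1, 0, 1, 0]

# Turn text into features and train a simple classifier on the seed set.
vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score the unreviewed population; the highest-scoring documents
# would be routed to reviewers first.
unreviewed = [
    "revised terms for the merger agreement",
    "holiday party RSVP list",
]
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]

for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")

# In a real loop, reviewers' new coding decisions feed back into the model,
# which retrains and improves its predictions over time.
```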
Iterative Design
Of course, UI and UX design aren’t all about coding and following industry best practices. A big part of our process is to iterate on our designs after gaining feedback from the experts and users who are in our software every day. We know that our initial designs are likely to have built-in assumptions about users’ needs, ways of working, mental models of AI, and more, which may or may not be valid. When we are working on a design, we also know too much about it to see it with fresh eyes. And we are not you, the real users of our software, so we will inevitably miss things that only you might notice about our designs. As a result, we test our designs with users and iterate on them until they work well for you.
To inform our designs in this way, we are always looking for more e-discovery professionals to participate in the research studies that shape our roadmap. If you are interested in being invited to participate, please consider joining our UX Research Participant Pool at relativity.com/uxresearch. We’ll reach out to participants throughout the year with opportunities to engage with us, and you’re welcome to accept or decline these invitations as you wish, so your time commitment is yours to control. We look forward to hearing from you!
Editor's Note: Miguel Martinez, product design manager in AI at Relativity, also contributed to this article.
