
AI and Legal: Four Trends to Watch in 2023

Omar Haroun

Trends are not born in a vacuum. Before diving into the trends shaping the evolution of AI and legal tech in 2023, let’s take a look at the major developments from 2022 that foreshadowed them:

  • Expanding data volumes and the proliferation of new categories of unstructured data as remote work became normalized and more workplace interactions continued to take place online.
  • The emergence of generative AI, a new class of AI models—like DALL-E 2—that takes human-AI collaboration to new heights.
  • The recognition of data security and privacy as boardroom priorities, as several high-profile data breaches highlighted the need to stay on top of protecting PII and other types of sensitive data like trade secrets and confidential information.
  • The White House’s publication of the Blueprint for an AI Bill of Rights—a major institutional acknowledgment of the perils of algorithmic bias and the need for organizations to make earnest efforts to maintain ethical oversight of the development and use of AI models.

These developments will continue to have important ramifications in the years ahead. Here is a detailed look at four major trends, rooted in the factors above, that will shape artificial intelligence and legal technology in 2023:

#1: New data challenges will lay the groundwork for new legal tech use cases.

Early this year, NVIDIA was hit by a massive data breach exposing over 1 terabyte of data, reportedly including the chipmaker’s trade secrets, source code, and employee credentials. Like the infamous SolarWinds hack of 2021, the incident reinforced corporate America’s growing concerns around data security and privacy. These challenges are compounded by ballooning data volumes—particularly of human-generated unstructured data, which is more difficult to manage and protect—leaving organizations struggling to stay on top of their data risks.

As general counsel grapple with these challenges, they will look to legal technologists and legal professionals with the skills to manage sensitive data and the know-how to apply technologies like AI. Indeed, the same skills and AI-enabled technologies used in litigation discovery work can also be applied across a variety of adjacent use cases: data breach response, sensitive data governance, and contract lifecycle management, to name a few.
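To make the overlap concrete, here is a minimal, hypothetical sketch of the kind of sensitive data identification shared by discovery, breach response, and governance workflows. The patterns are illustrative only; production tools pair rules like these with AI models that recognize context, not just format:

```python
import re

# Illustrative patterns for two common categories of sensitive data.
# (Hypothetical and simplified; real tools layer trained AI models
# on top of rules like these to catch what patterns miss.)
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
}

def scan_for_pii(text):
    """Return the matches found in `text` for each sensitive data category."""
    return {label: pattern.findall(text) for label, pattern in PII_PATTERNS.items()}

sample = "Contact jdoe@example.com. SSN on file: 123-45-6789."
print(scan_for_pii(sample))
# {'ssn': ['123-45-6789'], 'email': ['jdoe@example.com']}
```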

#2: Generative AI models will become commoditized and challenge existing jurisprudence on intellectual property law.

The recent public release of OpenAI’s DALL-E 2 opened the floodgates to generative AI and, overnight, created a new category of creative professionals who leverage these models to produce high-quality art and media that look and feel human-made. Trained on public data scraped from the internet, generative AI models can produce images, videos, and text from only a few keyword prompts. A Colorado-based artist, for instance, won an art contest with an image he created using generative AI.
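To give a sense of how low the barrier to entry has become, here is a minimal sketch of generating an image from a short text prompt. It assumes OpenAI’s Python client and image API as of this writing; the prompt and API key are placeholders:

```python
import openai  # OpenAI's Python client, as of this writing

openai.api_key = "YOUR_API_KEY"  # placeholder credential

# A few descriptive keywords are all the model needs.
response = openai.Image.create(
    prompt="a watercolor painting of a courthouse at sunset",
    n=1,
    size="1024x1024",
)

print(response["data"][0]["url"])  # link to the generated image
```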

Far from being a viral fad, generative AI actually represents an inflection point in how humans use AI. Art is just the tip of the iceberg as these AI models are a base layer on which users can build thousands of applications that could, in collaboration with humans, perform complex functions like designing synthetic proteins, writing contracts, and coding software. Indeed, in the long run, the benefits of these AI models could accrue to knowledge workers and builders across industries. Already, we are beginning to see tech companies like Microsoft, GitHub, and Adobe integrate generative AI models into their products—not to mention the emergence of an entire ecosystem of startup unicorns offering solutions built on generative AI.

But as these AI models undergo exponential improvements and become increasingly accessible, they will challenge the foundational legal concepts and assumptions underlying intellectual property law.

At issue will be fundamental questions of authorship and copyright. Imagine, for example, an AI model trained on paintings by the artist Banksy that a user prompts with a set of keywords to produce a painting in Banksy’s style. Can the user claim copyright over the painting? Can Banksy? Could the AI model itself?

Current laws are not set up to address these novel questions around copyright and intellectual property, so jurists and lawmakers will, sooner or later, need to embark on the complex task of answering them.

#3: AI adoption will accelerate as companies face recessionary headwinds.

With rising concerns over the likelihood of a recession next year, legal departments will be compelled to drive down costs. As budgets tighten, the legal workforce will be forced to do more with less and increase efficiency, relying increasingly on AI models to scale their work output and automate time-consuming tasks. These factors will accelerate AI adoption and lay the groundwork for larger AI deployments.

#4: When it comes to ethical AI, organizations will need to walk the talk.

New York City’s Automated Employment Decision Tool (AEDT) Law, which regulates the use of AI in screening job candidates and making hiring decisions, is slated to go into effect next year; the White House’s Blueprint for an AI Bill of Rights was published earlier this year; and, across the pond, the European Union is expected to finalize its Artificial Intelligence Act before its parliamentary elections in 2024.

Across the board, regulators, watchdogs, and governments are waking up to the pernicious impact that algorithmic bias can have on society—particularly on disadvantaged groups. As the public discourse on AI matures, individuals, groups, and governments will need to hold technology companies more accountable for the impact of their AI models. AI companies, in turn, will need to do more than pay lip service to ethical AI: they will need to take concrete steps to audit their AI models—particularly in high-risk use cases—for bias, transparency, and fairness.
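As a deliberately simplified example of what one piece of such an audit can look like, the sketch below compares an automated hiring tool’s selection rates across two hypothetical demographic groups; a wide gap between groups is one signal auditors look for. The data and group labels are made up for illustration:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Rate of positive outcomes per group, from (group, selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {group: sel / total for group, (sel, total) in counts.items()}

# Hypothetical screening outcomes from an automated hiring tool.
outcomes = [("Group A", True), ("Group A", True), ("Group A", False),
            ("Group B", True), ("Group B", False), ("Group B", False)]

rates = selection_rates(outcomes)
print(rates)  # {'Group A': 0.667..., 'Group B': 0.333...}

# Demographic parity gap: a large difference is a red flag worth auditing.
print(max(rates.values()) - min(rates.values()))
```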

Currently head of AI strategy at Relativity, Omar Haroun helps drive commercialization efforts and executive relationships for our team and customers.
