Your single source for new lessons on legal technology, e-discovery, and the people innovating behind the scenes.

Beyond the Jargon: 4 Generative AI Terms You Should Know

Nitant Narang

The viral emergence of ChatGPT has turned the spotlight on generative AI. The chatbot, built on a large language model (LLM) and reportedly the fastest-growing consumer application in history, is spawning a new ecosystem on the internet as startups and companies scramble to build applications on top of it.

With billions of dollars of venture capital being readied for deployment, generative AI is being seen as a rare bright spot in an otherwise uncertain economic environment.

Indeed, its significance extends beyond the tech bubble, and the legal industry, which has historically been slower to adopt new technologies, is taking notice. ChatGPT piqued the industry's curiosity further when it passed four exams at the University of Minnesota's law school, answering questions and writing essays in areas such as constitutional, tort, and taxation law.

Despite the inconsistencies, biases, and inaccuracies that have been widely reported and scrutinized, the underlying technology is creating waves in the legal world. From symposiums at law schools and panel discussions at legal conferences to small talk at cocktail parties, generative AI has become a popular subject of discussion and debate.

In this post, we will attempt to demystify some key technical concepts that might come in handy the next time you are pulled into a conversation on generative AI.

1. What is Generative AI?

Generative AI is a type of AI model that, as the term suggests, generates new data, such as images, text, or videos, in response to a prompt and informed by the large data sets it is trained on. The most obvious example is ChatGPT, an AI system that uses a generative model to produce answers to questions.

By contrast, a discriminative AI model is an AI model that can classify and distinguish between different types of input data.

Generative AI models can be applied to a variety of use cases in the legal industry: writing contracts, composing briefs, conducting legal research.       

2. What are Large Language Models (LLMs)?

A large language model is an AI model that learns the structure and meaning of text. The model underlying ChatGPT was reportedly trained on 45 terabytes of text-based data encompassing half a trillion words crawled from the internet.

LLMs represent a significant breakthrough in AI with their ability to understand semantics and the context of natural language.

3. What are Transformers?

Transformers are what equip LLMs like ChatGPT to intuit the context of our questions and generate meaningful answers; put simply, they are what make LLMs uncannily good. A transformer is the neural network architecture that is fueling the explosive growth of generative AI models. The technology is not novel to ChatGPT; it was introduced by Google researchers in 2017 and released as an open-source project.

Besides being more computationally efficient than previous neural network (NN) architectures and mapping more naturally onto GPUs, transformers have one key differentiator that gives them a significant edge over other NNs: whereas earlier recurrent networks decode the meaning of words in a sentence sequentially (i.e., word by word), transformers use a mechanism called "self-attention." This gives them the unique ability to compare and directly model relationships between all the words in a sentence, regardless of their position.

For instance, it’s easier for a transformer model to disambiguate the two different meanings of the word “bank” in the example sentences below:  

A) I arrived at the bank after crossing the street.

B) I arrived at the bank after crossing the river.

This is because a transformer model is able to compare “bank” with each of the other words in the sentence to divine the context in which the word is being used.       
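The "bank" example can be sketched in a few lines of Python. This is a deliberately contrived illustration, not real model code: the two-dimensional "embeddings" are hand-picked so that "bank" shares meaning with both "street" and "river", and the function computes scaled dot-product attention weights (scores normalized with a softmax) the way a single attention head would.

```python
import math

# Toy 2-d "embeddings", hand-picked for illustration: dimension 0
# loosely encodes an urban/financial sense, dimension 1 a waterside sense.
embeddings = {
    "bank":   [1.0, 1.0],   # ambiguous: carries both senses
    "street": [1.5, 0.0],   # strongly urban/financial
    "river":  [0.0, 1.8],   # strongly waterside
}

def attention_weights(query, context_words):
    """Scaled dot-product attention: score each context word against the
    query word, then normalize the scores into weights with a softmax."""
    d = len(embeddings[query])
    scores = [
        sum(q * k for q, k in zip(embeddings[query], embeddings[w])) / math.sqrt(d)
        for w in context_words
    ]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# In sentence B, "bank" attends more strongly to "river" than to "street",
# nudging the model toward the waterside sense of the word.
w_street, w_river = attention_weights("bank", ["street", "river"])
```

In a real transformer the embeddings are learned, the vectors have hundreds or thousands of dimensions, and queries and keys are separate learned projections, but the core idea is the same: every word scores its relationship to every other word directly, with no sequential bottleneck.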

4. What is Prompt Engineering?

This is not engineering as the term is traditionally understood. Prompt engineering describes the practice of interfacing with a generative AI model and crafting the prompts needed to produce the desired output.

Prompt engineering is an experimental and iterative process: it involves trying out different phrasings, adding instructions or examples, and adjusting how much context the model is given. Prompt engineers can make language models more performant, particularly on specific tasks; they generally have a substantive understanding of language, and they are clear and creative communicators.

The engineering of thoughtful and effective prompts helps train models and ensure they deliver optimized results.

We will publish more educational content on generative AI soon, so subscribe to the blog to ensure you catch everything. Additionally, feel free to write to us and share what topics you would like us to explore in the future, submit ideas for your own articles, and find us on LinkedIn to keep the conversation going.    

Graphics for this article were created by Sarah Vachlon.


Nitant Narang is a member of the marketing team at Relativity.