
AI in Our Hands: Lessons from a Masterclass on Ethical AI with Timnit Gebru

Sam Bock

Although “AI” has quickly become a bit of a buzzword across industries, the real impact—realized and potential—of artificial intelligence on our world is profound. Its consequences simply cannot be overstated.

This isn’t because we’re on the cusp of a reality where our computer overlords take control over the functioning of society from us mere mortals.

It’s because AI isn’t perfect, and it needs a lot of thoughtful human input to become useful in a way that maximizes efficiency while minimizing harm.

At Relativity Fest 2022, we welcomed Dr. Timnit Gebru to lead our closing keynote on the final afternoon of the conference.

Timnit is a highly regarded voice on ethical AI, a prominent researcher of algorithmic bias, and the founder and executive director of the Distributed AI Research Institute (DAIR). Put simply, she has a lot of wisdom to share on the potential of AI, the reality of how we use it in this moment of history, and the implications for our shared future if it’s developed and applied without ethics staying top of mind.

We’re covering a bit of what Timnit shared in this article, but to experience everything she had to say for yourself, make sure you jump into the Relativity Fest event platform before December 16, 2022, to watch the recording of her keynote. (You can still register to gain access to recorded sessions, if you haven’t already.)

Big Data ≠ Diverse Data

As Relativity CEO Mike Gamson said in his introduction for Timnit, “part of the risk profile of AI is not about our intention, but the unconsciousness of our biases.”

Discussing the need for ethical AI is less about chasing down black-hat developers writing nefarious algorithms, and more about rooting out the unconscious biases that unintentionally inform our work—and our work products—before they can be multiplied by AI applications in the real world.

That means being deliberate about diverse representation in development teams, of course. It also means identifying the potential inherent biases of data used to train algorithms.

Machine learning models are trained on vast data sets that, we hope, will inform their understanding of certain concepts and their context. Some of the decisions we ask AI to make based on this learned understanding carry relatively low risk, such as which Netflix shows users might want to watch next. Others can have life-changing implications—like which criminal offenders are more likely to reoffend in the future.

AI can be taught potentially devastating biases if the data used to train it is biased. As the saying goes, “garbage in, garbage out.” And simply adding more data to these sets in the hopes that volume alone will diversify them, and therefore address the problem, is not a real solution.

“We think the scale of the internet will represent diverse voices, but think about who’s on the internet,” Timnit told Relativity Fest attendees. “Some people don’t have access at all. Many others have hegemonic views.”

Scale and representation do not go hand in hand.

Confronting Cognitive Biases

Another layer to the danger of biased AI applications is that they are hard to discredit once they’re out in the world.  

Timnit noted a couple of predispositions people have in how we perceive AI. One is our tendency to anthropomorphize the technology, as seen in a story from earlier this year in which a Google engineer claimed one of the company’s algorithms had become sentient. Timnit also mentioned ELIZA, a chatbot developed at MIT in the 1960s; in that case, creator Joseph Weizenbaum was disturbed by how easily people were misled about the true intelligence of the relatively basic computer program.

“A danger comes from people attributing communicative intent to a model, which is essentially anthropomorphizing it and thinking it’s a person. If you have the ability to unleash a lot of harmful content in a huge way, with people perceiving this is coming from a lot of real people, it can have terrible consequences,” Timnit said.

Social media bots are particularly threatening in this capacity. And misleading people into believing the information they’re reading was put forth by a human is its own kind of unethical.

“My understanding of the potential harms of AI is pretty expansive. People being misled about these systems is a harm,” Timnit noted.

Moreover, she said, by claiming an algorithm might be sentient, “with its own thoughts and feelings, we abdicate our responsibility as people who develop and deploy it.”

She continued: “AI is not the Terminator. It’s not a sentient thing that you’re talking to—it’s an artifact, a tool, built by people and impacting people. It can be controlled by us as people. So we must develop and deploy it responsibly.”

Additionally, consumers and developers alike must be mindful of what’s known as automation bias. This is our tendency to trust automated systems at face value, perhaps because we believe them to be smarter than we are.

Timnit described an experiment where a group of people were together inside a building when a simulated fire broke out. The subjects couldn’t see the flames, but they could see smoke and hear the fire alarms. A robot appeared and offered instructions it claimed would guide them to safety. The instructions ended up leading nowhere—in fact, they led away from marked, safe exits—but all of the participants followed them anyway. They all trusted the machine.

“If I tried to do that to you, I’d hope you’d ask me questions—‘That doesn’t seem like an exit?’—because you know that I can be wrong, using your critical thinking skills on me,” Timnit joked with the audience. “But when it’s an automated system, we have an automation bias, and we somehow think ‘it must be right!’”

To drive the point home, she asked: “How many of you have used Google Maps and it takes you to a random place, but you think, ‘Oh, it must know something I don’t know?’”

(The chorus of sheepish chuckles from the audience suggested we all had to admit some guilt there.)

“These systems are not all-knowing. We need to remember that.”

Building Transparency into the How and Why of AI

We all have big ideas about the value, potential, and power of AI. But overstating its authority leads us to biased places.

“One thing we really need to do when talking about AI is bring it back to earth,” Timnit said.

“Remember that AI is not its own thing. We build it; we control it,” she reminded us. Taking responsibility for that power is crucial. And for now, most of the accountability for doing so comes from within the field.

“Think about when you’re building bridges. You can’t say ‘I made it but I didn’t test it on a truck,’ or ‘it might not work if it gets too hot.’ You can’t do that; there are laws outlining what tests to do,” Timnit said. “We don’t have laws like that in our field, but I propose transparency.”

AI creators, she said, “need to tell people what a system was built for. Is it supposed to be used in high-risk scenarios? If you’ll use something on cancer patients, it needs different characteristics than if it’s in some toy.”

Highlighting these intentions and limitations helps “make it clear that the systems are not perfect.”

But long before getting to that stage—before building anything in the first place—Timnit said, “we have to ask: should we build this thing? Should this exist?”

Sometimes the answer will be no, and that should be respected. Timnit reinforced the importance of involving diverse perspectives from the very start of a project to avoid wasting time and, worse, causing harm down the road.

“Who are the people who will tell us whether something is harmful or not? The people who are most likely to be negatively impacted by these models,” she said. “If we don’t have them at the table making decisions with us, we’re not going to know about those harms. We’ll put something out there, try to diversify the data set, and in the end, they’ll tell us it shouldn’t have existed in the first place.”

Facing a Realistic Future

At the close of the session, Mike posed a question: “You’ve been courageous in speaking out and holding your ground on what you believe. In terms of your view of the future, is it optimistic? Is it scary?”

Timnit’s answer is food for thought for all of us who might have mixed feelings on the promise of AI.

“What I say about the future is always that I want people to remember that we control technology. We control how we build it, what it should be used for, what it should not be used for. We can make tech work for us; it’s in our hands,” she said. “If we believe that, if we feel in control, and if we feel we have actions we can take—holding companies and developers accountable, holding others responsible—we can make it work for us.”


Sam Bock is a member of the marketing team at Relativity, and serves as editor of The Relativity Blog.
