
AI Is Not What the World Thinks It Is: Insights from Microsoft CTO Kevin Scott

Sam Bock

It’s not often that you get to hear a noted AI leader’s first-hand story of how he made his way into tech. At Relativity Fest last year, attendees were treated to just that as Microsoft CTO Kevin Scott sat down for a Zoom-style fireside chat with Greg Ball, Relativity’s VP of engineering in AI and machine learning.

The two discussed Kevin’s introduction to computer science, thoughts on today’s tech landscape, and predictions for what’s down the road. Read on to dive into some of the insights he shared.

Getting Started in Tech: Just Try It

Kevin shared that he’s been interested in computer science since childhood.

“I was fascinated by how it all worked. It’s one thing I inherited from my farming family: curiosity and a passion to learn and understand how things work,” he explained. The field of computer science was in an exciting growth period at the time, inspiring unique opportunities to engage with a blossoming digital frontier.

After starting with programming at age 12, Kevin said, “it felt great to do something that other people found valuable.” He was particularly interested in pursuing a career in academia as he progressed through high school and college. “After meeting my computer science professors, the act of teaching and unlocking this world for others seemed like such a great thing,” he explained.

Obviously that path changed, but Kevin emphasized that one of the greatest beauties of the computer science world today is that resources abound for those who want to dig in—even without formal schooling on the subject.

“We have so many more resources for kids to learn about software engineering now than we did then,” he said. “The machines are more powerful, software is more powerful, but even more important, the resources you have now to learn how to make those things work for you are more powerful now.”

In general, he said, the attainability of the tools and skills required to make computers do fascinating new things leaves a lot of room for groundbreaking work by just about anyone: “Thinking of the things I built 17 years ago, which required stacks of grad papers and textbooks and C++ to accomplish, I think a reasonably motivated high school student could write the same program over a weekend using existing cloud platforms, frameworks, and training resources available now. They may not have to do much coding at all.”

In fact, he said, this kind of accessibility is one of the things he’s most passionate about in the industry. “Watching what people are doing with these things is truly extraordinary. That excites me—not just that we have these models, but that we can package them up and get them into the hands of lots of people.”

Why AI Is Only as Good as the Data …

Despite the ubiquity of artificial intelligence in his work today, Kevin said he was initially skeptical about what the technology could actually do. It took a change in perspective for him to get excited about the possibilities.

“What was popular then was focused on expert systems and planning—things where the philosophy was, ‘We’re going to discover rules that help us encode intelligence into software, and that’s how we’ll build powerful systems,’” he recalled. “When I went to Google, they needed me to help solve a machine learning problem, so I spent time researching it as quickly as possible to become more helpful. It was an aha moment as I realized that, instead of having to figure out those rules of intelligence independently, you could deduce them by training models over data.”

In other words, it wasn’t the AI itself that had a wealth of knowledge to share. It was the data—and the AI was a way of uncovering that wealth faster.

“Early on at Google I learned that the data was more interesting than the algorithm; you get better results with more data and a more basic algorithm than you can with a more sophisticated algorithm and less data,” he said. “In the 17 years since I first wrote machine learning code, the things we can do today are really stunning. Large models themselves are beginning to function as platforms, so instead of solving very narrow problems, you invest in a platform and find a lot of solutions to tackle with it.”

… And the Humans Who Use It

Even setting aside the potential of the algorithms themselves, Kevin had some fascinating perspectives to share on the coexistence of human intelligence and artificial intelligence.

“Comparing artificial intelligence to human intelligence has a long history of people assuming something that’s hard for machines will be hard for humans and vice versa, but it’s more the opposite: humans find becoming a chess grandmaster, a world champion of Go, and a lot of complicated or repetitive work difficult, whereas machines are easily able to do those things,” he said. “But in areas we take for granted—like common sense reasoning—machines still have a long way to go. So it’s clear that framing the debate in these terms is the wrong approach.”

It should be less about replacing human intelligence with AI, he said, and more about allowing the two systems to balance one another for exponentially better results.

“AI tools help humans do cognitive work. The story isn’t about whether AI is becoming the exact equivalent of human intelligence—that’s not even a goal I’m working toward,” Kevin explained. “Instead, the goal is finding which types of cognitive work a machine could do instead, so humans are freed up to do things that are more productive, fulfilling, and interesting. Then we can get a divide between what’s human work and what’s machine work, like we have in the physical world.”

As an example, he explained that “we don’t fret over a forklift depriving people of their dignity because it can lift a heavier load than a human; it’s just a machine that helps us do that physical work.” Over time, he suggested, AI can come to be understood in the same way.

Building a Better Future with a Little Help from AI

Ultimately, moving in this direction will not just result in better technology and more robust applications. It can mean a better, more thoughtful world, where human intelligence is applied where it’s needed most—and is informed by the cognitive work of artificial intelligence.

“We don’t have enough cognitive power across all the humans in the world to do all the cognitive work that needs to be done to solve complex problems like vaccine development, social justice, and so on. Certainly not enough to solve them in a way that gets us to where the world is more equitable, just, inclusive, and sustainable. That’s where AI has an enormous role to play,” Kevin predicted.

“In the current order of things, we like to think that every problem that can be solved has been solved. But technology changes the zero-sum games where we fight over who gets what into non-zero-sum games where you have abundance,” he continued. “If institutions like the Electronic Frontier Foundation, the ACLU, and the Southern Poverty Law Center have infinitely more ability to solve the problems they care about and be super meaningful in that more equitable and just future—that would be an unalloyed good.”

In particular, Kevin said, he’s “really hopeful that the progress we’ve made in recent years with natural language understanding, given how the law is about common use of language, will lead to great innovation in this area.”

Innovators who acknowledge that the human-AI partnership offers the most potential for breaking new ground can build an advantage into their tech: “There are some things where we will forevermore want humans to be the final decision makers, so building the tools at the outset to support human decision making instead of substituting it is an incredibly important first consideration.”

This is the sort of approach that encourages forward-thinking solutions, yes, but also that gets stakeholders on board with giving AI a try right now. And that first step is crucial at a moment like this.

“At Microsoft, we probably have hundreds of people involved in pushing the frontier—building big models and infrastructure, writing big papers, appearing on the leaderboards, prospecting for the frontier. Then we have thousands of people who are aware enough of the importance of machine learning, or are active practitioners of it, that they’re using it to help them do their work,” he explained. “These thousands also come to an internal data science conference every six months; all of them benefit from what the folks on the frontier are doing. But we package it up so it’s easier for them to consume.”

Looking Toward an AI-Enabled Future

Though the path forward with AI is certainly exciting, it’s not going to progress without some hurdles.

“As recently as two to three years ago, very smart people said that AI was ‘solved’—that we’ve got supervised learning and now it’s just a matter of figuring out how best to use it. But now the big progress is happening in unsupervised learning,” Kevin told Relativity Fest attendees. “We’re going to have another five of those paradigmatic shifts over the next 10-15 years, and some really important conversations that need to happen center around what we want tech to be doing or not doing for the global public good.”

And it should be, he emphasized, a collective conversation: “We had the Apollo program, with an arbitrary target of putting people on the moon—and we can say that, for AI, we want another initiative like that.”

The humanitarian applications of AI abound, but not without some amount of risk. That’s why these conversations are so important.

“Generative models can generate all sorts of data that’s never existed before, which sounds sort of terrifying—but, thought of another way, is actually quite useful,” Kevin shared as an example. “Applied to eliminating bias in our vision models, generative models can generate debiasing data to be fed into recognition models. But the flip side is that you’ll have people using these generative tools for misinformation, creating deepfakes or fraudulent data.”

We should all approach these very real possibilities with caution, but without allowing ourselves to fear the technology outright because of them.

“Start with a very open mind for AI—casting aside assumptions you have about what it is and isn’t capable of, as those are what always trip people up in the beginning,” he said.

And for those who want to jump in as engineers and practitioners on the more technical side of AI? It’s easier than you might think.

“Start with learning Python, how to work with Jupyter notebooks, and how to use tools like Azure Machine Learning or SageMaker. Experiment. Be aware of resources like Hugging Face, where you can download models and experiment for yourself,” Kevin advised. “It’s very easy, relative to where it once was, to dip your toe in the water—just sit down and within a half hour, you’ll have tools assembled to write a program that does something interesting.”
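To make that concrete, here is a minimal sketch of the kind of half-hour first experiment he describes, written in Python with the Hugging Face transformers library. The task and example sentences are illustrative assumptions on our part, not anything Kevin specified; the point is simply how little code a first experiment with a downloaded model requires.

```python
# A minimal sketch of a first experiment with a pretrained model.
# Assumes `pip install transformers torch`; the task and sentences below
# are illustrative choices, not examples from the talk itself.
from transformers import pipeline

# pipeline() downloads a default pretrained model for the named task
# from the Hugging Face Hub the first time it runs.
classifier = pipeline("sentiment-analysis")

examples = [
    "Getting started with machine learning is easier than it used to be.",
    "Wiring all of this up by hand in C++ would have taken weeks.",
]

for text, result in zip(examples, classifier(examples)):
    print(f"{result['label']} ({result['score']:.2f}): {text}")
```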

Again, Kevin said, it’s this kind of broadly accessible—and broadly pursued—education that will help forge a smarter, more responsible, and more exciting future for AI applications across areas of business and humanitarianism.

“This stuff is so fascinating; my advice to everyone is to give yourself enough time where you can learn a new thing every day and challenge your assumptions,” he emphasized.

“I’m super optimistic about the future. There hasn’t been a better time to be a technologist or someone who is using tech to solve problems,” Kevin told attendees toward the end of the session. “What we all ultimately want to do is solve problems, make our lives and our families’ and communities’ lives better, and make that progress that lets us continue to live good lives.”

AI can help make it happen.


Sam Bock is a member of the marketing team at Relativity, and serves as editor of The Relativity Blog.
