by Shelley Podolny - H5
on January 02, 2019
Education & Certification
Editor's Note: Relativity Authorized Partner H5 originally published this fun article on their blog last year. We thought our readers might like to kick off 2019 with a level set on their AI IQs.
The spotlight on the technological competence of counsel—called out by the ABA’s Model Rules of Professional Conduct and the 31 states that have thus far adopted an ethical duty of technology competence in their own rules—continues to raise the pressure on lawyers to keep pace with, or at least attempt to chase, the moving train of advancing technology in the legal realm.
Let’s face it, it’s tough to be considered competent in any part of the real world now without being somewhat tech-savvy, but most of us aren’t faced with tech savvy as part of an ethical duty in doing our jobs. Lawyers, on the other hand, have an ethical duty to “keep abreast of … the benefits and risks associated with relevant technology” as they do theirs. How much knowledge is enough to keep up with the moving technology train? And how do you know how much (or little) you really know?
That question is becoming ever more compelling as the burgeoning field of AI injects yet another layer of technological complexity into the legal realm. The use of AI introduces a variety of strategic, tactical, and ethical issues that lawyers must address.
Not so long ago, the use of machine learning algorithms for technology-assisted review (TAR) seemed like a challenging legal question, but now, a plethora of mind-bending AI applications present ethical and practical dilemmas that few could ever have contemplated. TAR seems like a no-brainer compared to liability issues for self-driving automobiles, predictive analytics for criminal sentencing, or how our privacy may be affected by bots. We are learning more every day about the use (and possible abuse) of “intelligence” over which we may have little control.
As the use of AI grows, various think tanks and AI-targeted committees are springing to life to address such questions, and the legal community must heartily engage in the conversation as they consider AI a part of relevant technology, the benefits and risks of which they must “keep abreast.” AI, after all, raises some of the most challenging ethical concerns we have ever faced as a society, forcing us to consider in the broadest sense: “To what extent should societies delegate to machines decisions that affect people?”
For further insight into that question and the intersection of AI, society and the law, read what some prominent thinkers have to say (including H5’s own CEO, Nicolas Economou) in articles here, here, and here. And stay tuned to True North and The Relativity Blog, where there will be much more to come on this topic.
In the meantime, here’s a little diversion in the form of a quiz to quickly test your “AI IQ.” If you pass with flying colors, you know you’re keeping up with the moving train of advancing technology. If not, grab a coffee. You’re stuck in the station. (Answers below.)
1. Who has been dubbed the “father of AI”?
a. Albert Einstein
b. Alan Turing
c. Professor Irwin Corey
d. John McCarthy
2. The first AI program to run in the United States, written in 1952 by Arthur Samuel, did what?
a. Played checkers
b. Played chess
c. Played Go
3. True or false: Predictive coding, as used for e-discovery, is a form of AI.
4. True or false: Although it is being considered, AI has not yet been used by judges to determine sentencing for convicted criminals.
5. True or false: A spam filter that weeds out irrelevant email uses a form of AI.
6. True or false: The Rules of Professional Conduct explicitly call out AI when describing a lawyer’s corresponding ethical duties to clients.
7. True or false: Although it was close, IBM Watson was unable to beat human intelligence in a game of Jeopardy!
1. b (or d, depending on whom you ask)
2. a (Chess-playing AI was being developed around the same time, but checkers came first. Go is acknowledged as the real challenge, and AI recently won there, too. Learn more here.)
3. True (It’s machine learning, a form of AI.)
4. False (See Loomis v. Wisconsin.)
5. True (Machine learning again.)
6. False (At least not yet.)
7. False (Win Watson did.)
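For the curious: a couple of the answers above note that spam filters and predictive coding are machine learning. Here’s a minimal sketch of the idea behind a learning spam filter—a naive Bayes classifier that learns word frequencies from labeled examples. The training messages and words below are invented for illustration; real filters use far larger training sets and richer features.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs, label is 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the higher log-posterior score."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        n = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical labeled training data.
training = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "ham"),
    ("please review the attached brief", "ham"),
]
counts, totals = train(training)
print(classify("free prize money", counts, totals))   # classified as spam
print(classify("review the agenda", counts, totals))  # classified as ham
```

The same basic pattern—learn statistical weights from examples labeled by humans, then score new items—is what predictive coding applies to documents in e-discovery.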
Shelley Podolny is the director of marketing and information management consulting at H5. She has more than two decades of experience in the legal and technology spheres.