Editor’s Note: A version of this article was first published in Legaltech News.
Today marks the first full day of ILTACON, the annual celebration of law, technology, and the professionals working at that intersection, presented by the International Legal Technology Association (ILTA). As legal technology teams gather in Nashville this week, one wouldn’t be surprised if the 2024 edition were renamed “AICON.”
At least 55 of ILTACON 2024’s sessions have some reference to artificial intelligence, and for almost every attendee in Nashville this week, legal ethics should be an important issue, especially when it comes to generative AI (often abbreviated as “GenAI” or, in the case of the American Bar Association, “GAI”).
The American Bar Association (ABA) made big news on July 29 when its Standing Committee on Ethics and Professional Responsibility released ABA Formal Opinion 512: Generative Artificial Intelligence Tools.
However, Formal Opinion 512 is not the first attempt to address the legal ethics of AI; far from it. In fact, state legal authorities from Florida, Pennsylvania, and West Virginia in the East to California in the West, along with others, including Texas, have weighed in, and courts across the nation have issued standing orders and other measures regulating AI generally and generative AI specifically.
So, as you go to some of those 55 AI-related sessions at ILTACON, speak with the Legaltech News editorial team on the ground in Nashville, visit with software developers and service providers with AI offerings, or simply discuss AI and the law this week, here’s a roadmap of how we got here with the technology and an educational reference guide to where we are with some of the case law and selected legal ethics rules in the Era of Generative AI.
The Technology: Optimus Prime for a New Generation
Some might argue the fictional character Optimus Prime and his fellow Transformers changed the world of toys and Hollywood kids’ movies after Hasbro representatives found their early versions at the International Tokyo Toy Show in the 1980s.
After all, transforming yourself from a living bio-mechanical “autobot” into a Freightliner FL86 18-wheeler is really kind of impressive.
Perhaps more impressive is the transformer that arrived on the scene on November 30, 2022, when OpenAI released ChatGPT, its generative AI chatbot. This transformer affected technology, the law, and, in some ways, the world.
ChatGPT is built on generative pre-trained transformer (GPT) large language model (LLM) technology. GPT is “generative” in that it generates new content; it’s “pre-trained” with supervised learning in addition to unsupervised learning; and it’s a “transformer,” changing, or transforming, one group of words, or “prompt,” into something different.
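For a concrete picture of that prompt-to-output transformation, here is a minimal sketch, assuming the OpenAI Python SDK (openai 1.0 or later) and an OPENAI_API_KEY set in the environment; the model name and prompt are illustrative assumptions, not a recommendation of any particular tool or workflow.

```python
# A minimal sketch, assuming the OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY environment variable; the model name and prompt are
# purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "prompt" is the group of words the model will transform.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Explain technology-assisted review in two sentences."}],
)

# The completion is the newly generated ("transformed") content.
print(response.choices[0].message.content)
```

Conceptually, that single round trip, a prompt in and newly generated text out, is the transformation the name describes.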
Generative AI’s transformative feats over the past 20 months—everything from researching case law to creating art—have been impressive, and the more data the GPT systems get, the better they become.
OpenAI and its technology provide an excellent example. It’s worth noting that, although OpenAI refers to its offerings as “GPT,” GPT is a model, and OpenAI does not own the term. In fact, not unlike over a decade ago, when an e-discovery software developer tried to trademark the term “predictive coding,” the U.S. Patent and Trademark Office rejected OpenAI’s attempt to trademark GPT.
Having said that, OpenAI does own its large language models, and it is appealing the U.S. Patent and Trademark Office’s rejection of its trademark application to the Trademark Trial and Appeal Board.
Founded in 2015 by a group of well-known Big Tech luminaries—including Sam Altman, Reid Hoffman, Elon Musk, and Peter Thiel—OpenAI developed GPT-1 in 2018, GPT-2 in 2019, and GPT-3 in 2020, before releasing ChatGPT (based on GPT-3.5) to the public in 2022. Since then, OpenAI released GPT-4 in 2023 and GPT-4o (“o” for “omni”) earlier this year.
Each new version expands on the work of the previous ones, often working from a larger corpus of data. For instance, in an oft-cited example, GPT-3.5 flunked the bar exam, scoring in the 10th percentile, while GPT-4 passed with flying colors, an impressive accomplishment, even if an MIT doctoral student claimed the test was stacked in GPT-4’s favor.
AI in the Law: Nothing New
At its most basic level, “artificial intelligence” simply describes a system that demonstrates behavior that could be interpreted as human intelligence.
Whether one considers it artificial intelligence, augmented intelligence, or something else, legal teams have been using machine learning technology for years.
In fact, it’s been over 12 years since then-U.S. Magistrate Judge Andrew Peck’s landmark 2012 opinion and order in Da Silva Moore v. Publicis Groupe SA, the first known judicial blessing of the use of technology-assisted review (TAR), a form of machine learning, in the discovery phase of litigation. Ireland in Quinn and the United Kingdom in Pyrrho were not far behind.
Questions of robots and the law are also nothing new as we saw in 2015 in Lola v. Skadden, Arps, Slate, Meagher & Flom LLP, where, in reversing a U.S. district court, the Second Circuit noted the parties agreed at oral argument that tasks that could be performed entirely by a machine could not be said to constitute the practice of law. In an era of machine learning, we didn’t buy the argument then, and we certainly don’t buy it now.
Roadmap for the Rules
As courts have grappled with AI in case law, the ABA Model Rules of Professional Conduct, and state rules often based on them, provide additional guidance. The Model Rules are not binding law. As the name implies, they serve as a model for the state bars that regulate the conduct of the legal profession.
The ABA’s efforts to regulate the professional behavior of lawyers and their legal teams did not begin with the Model Rules of Professional Conduct.
Before the ABA House of Delegates adopted the Model Rules in 1983, there were the Canons of Professional Ethics in 1908 and Model Code of Professional Responsibility in 1969.
Even the U.S. Constitution has 27 amendments; the Model Rules change as well, having been amended 10 times in the past 22 years.
A review of some of the Model Rules illustrates the legal ethics issues generative AI can present.
The Robot Lawyer and Rule 5.5: Unauthorized Practice of Law; Multijurisdictional Practice of Law
In Lola, we saw the conundrum of whether machines being able to perform a task meant that task could not possibly be the practice of law.
But what if the robots can practice law?
In In re Patterson and In re Crawford, two bankruptcy matters in Maryland, we saw the issue of whether software was providing legal advice.
Not unlike in Lola, where the issue was whether a contract attorney doing e-discovery document review was using “criteria developed by others to simply sort documents into different categories” or actually practicing law, in In re Patterson and In re Crawford, the question was whether an access to justice organization’s bankruptcy software functioned as a mere bankruptcy petition preparer under 11 U.S.C. § 110 or was the software actually practicing law.
Not surprisingly, the software did not possess a law license.
Although U.S. Bankruptcy Judge Stephen St. John noted the noble goals of the access to justice organization, Upsolve, he cited Janson v. LegalZoom.com, Inc., and wrote:
“Upsolve fails to recognize that the moment the software limits the options presented to the user based upon the user's specific characteristics—thus affecting the user's discretion and decision-making—the software provides the user with legal advice.”
The TechnoLawyer and Rule 1.1: Competence
Perhaps the most significant legal ethics and technology development in the Model Rules came in 2012 with the new Comment 8 to Rule 1.1: Competence.
The new comment provided:
[8] To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject. (Emphasis added.)
Adding the benefits and risks of relevant technology was a watershed moment, especially for those who went to law school, at least in part, to avoid studying technology. However, at the time, some commentators asked, “Where’s the beef?” noting that this technology “requirement” was no requirement at all: it was buried in a comment and did not even appear in the text of the rule itself.
However, the beef manifested itself in state rules across the nation.
The states—many of them basing their own ethics rules and opinions on the Model Rules—embraced Comment 8, and knowledge of technology has become a requirement for most lawyers around the country.
To keep current on these state requirements, lawyer and journalist Bob Ambrogi, of LawSites, provides an excellent resource for tracking state technology competence requirements for lawyers and their legal teams. The count is now up to 40 states.
The new Formal Opinion 512 on generative AI tools also focuses on Rule 1.1 on competence. Lawyers can rest easy: the opinion does not mandate that they go out and obtain a PhD in data science.
“To competently use a GAI tool in a client representation, lawyers need not become GAI experts,” Formal Opinion 512 states.
However, the opinion also provides that “lawyers must have a reasonable understanding of the capabilities and limitations of the specific GAI technology that the lawyer might use,” adding, “This is not a static undertaking,” and noting that lawyers should read about generative AI tools targeted at the legal profession, attend continuing legal education (CLE) classes, and consult with others who are proficient in generative AI technology.
Rule 1.1 on competence is a focus for the states as well. Referencing California Rule of Professional Conduct 1.1, which also has a technology provision in the comment, the State Bar of California’s Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law notes: “It is possible that generative AI outputs could include information that is false, inaccurate, or biased.”
With this risk in mind, the California guidance provides, “A lawyer’s professional judgment cannot be delegated to generative AI and remains the lawyer’s responsibility at all times,” adding, “a lawyer may supplement any AI-generated research with human-performed research and supplement any AI-generated argument with critical, human-performed analysis and review of authorities.”
It Takes a Village: Rules 5.1 and 5.3
Whether you’re at ILTACON, ALM’s Legalweek, Relativity Fest, or another legal technology conference, it matters not whether you’re a lawyer, a paralegal, or a technologist: if you’re part of a legal team, your conduct could affect the application of these rules.
Rule 5.1: Responsibilities of Partners, Managers, and Supervisory Lawyers and Rule 5.3: Responsibilities Regarding Nonlawyer Assistance both implicate conduct by members of legal teams who are not lawyers, and the legal ethics guidance includes these professionals as well.
“Managerial lawyers must establish clear policies regarding the law firm’s permissible use of GAI, and supervisory lawyers must make reasonable efforts to ensure that the firm’s lawyers and nonlawyers comply with their professional obligations when using GAI tools,” Formal Opinion 512 provides.
To comply with this duty, the opinion notes, “Supervisory obligations also include ensuring that subordinate lawyers and nonlawyers are trained, including in the ethical and practical use of the GAI tools relevant to their work as well as on risks associated with relevant GAI use.”
The formal opinion suggests, “Training could include the basics of GAI technology, the capabilities and limitations of the tools, ethical issues in use of GAI, and best practices for secure data handling, privacy, and confidentiality.”
Duty to Disclose? Rule 1.4: Communication
Do you need to tell your client if you’re using generative AI in your legal work on her case?
ABA Model Rule of Professional Conduct 1.4: Communications provides, in part, that a lawyer shall “reasonably consult with the client about the means by which the client’s objectives are to be accomplished” and that the lawyer will “promptly comply with reasonable requests for information.”
But does Rule 1.4 trigger a requirement to inform your client about your use of ChatGPT?
In a classically lawyeresque approach to the question, Formal Opinion 512 says basically, “It depends.”
“The facts of each case will determine whether Model Rule 1.4 requires lawyers to disclose their GAI practices to clients or obtain their informed consent to use a particular GAI tool. Depending on the circumstances, client disclosure may be unnecessary,” the opinion provides.
However, the opinion notes that, in some matters, the duty to disclose the use of generative AI is clear: “Of course, lawyers must disclose their GAI practices if asked by a client how they conducted their work, or whether GAI technologies were employed in doing so, or if the client expressly requires disclosure under the terms of the engagement agreement or the client’s outside counsel guidelines.”
Keeping Confidences and Rule 1.6: Confidentiality of Information
Lawyers and legal teams have few responsibilities more sacred than keeping client confidences. It’s a fundamental aspect of an adversarial legal system, and the requirement is codified in ABA Model Rule of Professional Conduct 1.6: Confidentiality of Information.
Florida has a corresponding rule, Rule Regulating The Florida Bar 4-1.6. In January of this year, The Florida Bar issued Ethics Opinion 24-1 on the use of generative AI, which includes substantial references to Rule 4-1.6 and the requirement of confidentiality of information.
Generative AI presents new potential challenges for client confidentiality—depending on what type of model is used.
In A.T. v. OpenAI LP, a putative class of plaintiffs argued their data privacy rights were violated when GPT was trained by allegedly scraping data about them from various sources without their consent. Carrying this concept over to client data, however, the risk varies by tool: some generative AI systems do not require sharing data with outside entities.
In addition, as The Florida Bar notes, we don’t have to reinvent the ethical wheel here.
“Existing ethics opinions relating to cloud computing, electronic storage disposal, remote paralegal services, and metadata have addressed the duties of confidentiality and competence to prior technological innovations and are particularly instructive,” Florida’s Ethics Opinion 24-1 provides, citing Florida Ethics Opinion 12-3, which, in turn, cites New York State Bar Ethics Opinion 842 and Iowa Ethics Opinion 11-01 (Use of Software as a Service—Cloud Computing).
No Windfall Generative AI Profits – Rule 1.5: Fees
ABA Model Rule 1.5: Fees addresses appropriate fees and client billing practices, providing, in part, “A lawyer shall not make an agreement for, charge, or collect an unreasonable fee or an unreasonable amount for expenses.”
Say, for instance, a legal research project legitimately took 10 hours for a competent lawyer, but with your snazzy new generative AI tool, you can accomplish the task in 30 minutes.
Have you just earned a 9.5-hour windfall? After all, 10 hours would have been a legitimate bill for a competent attorney completing the task.
Not exactly.
“GAI tools may provide lawyers with a faster and more efficient way to render legal services to their clients, but lawyers who bill clients an hourly rate for time spent on a matter must bill for their actual time,” Formal Opinion 512 provides.
However, the opinion also notes that a lawyer may bill for the time spent checking the generative AI work product for accuracy and completeness.
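To make the arithmetic concrete, here is a minimal sketch in Python with hypothetical numbers; the hourly rate, task times, and review time are illustrative assumptions, not figures drawn from Formal Opinion 512.

```python
# Hypothetical numbers illustrating the "bill actual time" principle;
# the rate and hours below are assumptions, not figures from Opinion 512.
HOURLY_RATE = 400.0        # hypothetical hourly rate in dollars
pre_ai_estimate = 10.0     # hours the research might have taken without GenAI
ai_assisted_time = 0.5     # hours actually spent using the GenAI tool
review_time = 1.5          # hours actually spent verifying the output

# Under an hourly arrangement, only time actually worked is billable,
# including the time spent checking the AI-generated work product.
billable_hours = ai_assisted_time + review_time
print(f"Billable: {billable_hours} hours (${billable_hours * HOURLY_RATE:,.2f})")
print(f"Not billable: the {pre_ai_estimate - billable_hours} hours of the "
      f"pre-AI estimate that were never actually worked")
```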
Bottom line: As Opinion 512 states, citing Attorney Grievance Comm’n v. Monfried, “A fee charged for which little or no work was performed is an unreasonable fee.”
Why This Roadmap Matters and the Road Ahead
The Model Rules of Professional Conduct and their state counterparts don’t operate in a vacuum. Although the comments provide guidance, the rules are open to interpretation.
As we’ve seen, courts weigh in on these legal ethics issues. In perhaps the most well-known example of bad results from generative AI, last year’s hallucination hijinks of submitting fake, AI-generated cases to the court in Mata v. Avianca, Inc., the court focused more on the lawyer’s violations of Federal Rule of Civil Procedure 11(b)(2) than on the ethics rules.
Citing Muhammad v. Walmart Stores East, L.P., the Mata court wrote, “Under Rule 11, a court may sanction an attorney for, among other things, misrepresenting facts or making frivolous legal arguments.”
Of course, the Federal Rules of Civil Procedure may have taken center stage in Mata, but counsel in the future should be aware of the requirements of Model Rule of Professional Conduct 3.3: Candor Toward the Tribunal.
On a more positive note, this year, in Snell v. United Specialty Ins. Co., U.S. Circuit Judge Kevin Newsom highlighted the positive impact of generative AI on the law, using it to help him analyze the issues and draft his concurring opinion in a decision from the U.S. Court of Appeals for the 11th Circuit.
In addition, courts are reacting—some would argue overreacting—to generative AI and the law by issuing new standing orders and local rules governing the use of AI. EDRM has developed a useful tool for tracking this proliferation of court orders.
But is this veritable cornucopia of standing orders really necessary?
Judge Scott Schlegel, a Louisiana state court appellate judge and leading jurist on technology issues, wrote convincingly last year in A Call for Education Over Regulation: An Open Letter, saying that, in essence, this wave of judicial reaction may be a solution in search of a problem.
“In my humble opinion, an order specifically prohibiting the use of generative AI or requiring a disclosure of its use is unnecessary, duplicative, and may lead to unintended consequences,” Judge Schlegel wrote, arguing that many of the current Model Rules cited here address the issue of generative AI adequately.
Looking at the road ahead—and considering Judge Schlegel’s call for education over regulation—it’s important to note that it’s not only the courts, rulemakers, and governments considering the ethics of AI. Private initiatives are important as well, going beyond legal ethics into the general ethics of artificial intelligence.
For instance, the National Academy of Medicine is organizing a group of medical, academic, corporate, and legal leaders to establish a Health Care Artificial Intelligence Code of Conduct.
In addition to non-governmental organizations, private companies are starting initiatives as well.
Relativity has established Relativity’s AI Principles, which guide the company’s artificial intelligence efforts as part of “our dedication to being a responsible steward within our industry.” The six principles are:
- Principle #1: We build AI with purpose that delivers value for our customers.
- Principle #2: We empower our customers with clarity and control.
- Principle #3: We ensure fairness is front and center in our AI development.
- Principle #4: We champion privacy throughout the AI product development lifecycle.
- Principle #5: We place the security of our customers’ data at the heart of everything we do.
- Principle #6: We act with a high standard of accountability.
For each one of these principles, Relativity goes into additional detail. For example, on Principle 6, referring to our internal security team: “Calder7’s comprehensive tactics include real-time defect detection, strict access controls, proper segregation, encryption, integrity checks, and firewalls.”
The work of Relativity and others to help ensure the safety of artificial intelligence is important, but perhaps the concern over generative AI and the law is overblown.
As U.S. Magistrate Judge William Matthewman, a leading jurist in the field of e-discovery law, observed after a Relativity Fest Judicial Panel discussion of AI, “I don’t fear artificial intelligence. Instead, I look forward to it and embrace it. This is primarily because artificial intelligence cannot possibly be worse than certain levels of human intelligence I’ve suffered over the years.”
