State Bars Weigh In on AI Tech (Spoiler Alert: They Don’t Like It)

Until recently, Artificial Intelligence (AI) belonged to the future. It was something we wondered about and saw in sci-fi movies – but AI is here now. In fact, everyday applications of AI have been around for some time, from the facial recognition technology in your phone to Amazon’s voice assistant Alexa.

On November 30, 2022, OpenAI quietly released ChatGPT to the public. Its ability to write code, provide seemingly coherent answers to questions, and generate human-sounding content caused many to sound the alarm about AI taking our jobs – and even replacing lawyers.


Some intrepid companies have already started using AI to create content, with some negative results. For example, AI-generated content published by CNET and its sister company Bankrate was riddled with errors, forcing them to pause the program.

I’ve already explored some practical issues related to creating marketing content with AI. What hasn’t been explored yet is what state licensing authorities have to say about it. In light of the startup DoNotPay’s attempt to have a “robot lawyer” represent a defendant in court, we’ve got our first look at how state bars are reacting to AI lawyering – and it’s not pretty.

State Bar Associations Seem Hostile to AI

Until a few days ago, a California traffic court was set to see its first “robot lawyer” in February. Joshua Browder, CEO of DoNotPay, planned to have an AI-powered robot argue a defendant’s traffic ticket case in court. The plan was to have the defendant wear smart glasses that record court proceedings while using a small speaker near the defendant’s ear to dictate appropriate legal responses. The system relied on several AI text generators, including ChatGPT and OpenAI’s DaVinci model.
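DoNotPay hasn’t published how its system works under the hood, but the pipeline described above – record audio, transcribe it, prompt a text generator, relay the suggested reply to an earpiece – can be sketched in a few lines. The following is a minimal, hypothetical illustration using OpenAI’s public Python SDK; the model names, prompt, and structure are my assumptions for illustration, not DoNotPay’s actual implementation:

```python
# Hypothetical sketch of a "courtroom copilot" loop like the one DoNotPay
# described: transcribe what was said, ask a text generator for a reply,
# and hand the text to an earpiece. NOT DoNotPay's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def suggest_response(audio_path: str) -> str:
    # 1. Speech-to-text on the recorded courtroom audio.
    with open(audio_path, "rb") as audio:
        heard = client.audio.transcriptions.create(
            model="whisper-1", file=audio
        ).text

    # 2. Ask a text generator for a short reply the defendant could speak.
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; the article names ChatGPT and DaVinci
        messages=[
            {
                "role": "system",
                "content": "Suggest a brief, polite response a pro se "
                           "defendant could give in traffic court.",
            },
            {"role": "user", "content": heard},
        ],
    ).choices[0].message.content

    # 3. A speaker near the defendant's ear would read this text aloud.
    return reply
```

Note that even this toy version trips over the legal problem discussed below: step 1 requires recording court proceedings, which many courts prohibit outright.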


Never before had this type of technology been used in a courtroom. Upon catching wind of Browder’s plans, several state bars and other entities quickly voiced their opposition. Browder said multiple state bars threatened him with legal action. One even suggested it might refer him to the district attorney’s office and that prosecution and jail time were possible, as the unauthorized practice of law is a misdemeanor in some states, punishable by up to six months in county jail.

State bars license and regulate attorneys, ensuring that people who need legal assistance hire lawyers who understand the law. In their view, using Browder’s AI technology in the courtroom amounts to the unauthorized practice of law and is impermissible.

Although Browder has declined to disclose which state bars warned him, DoNotPay is under investigation by several of them, including the California State Bar. Furthermore, the rules of federal courts and many state courts currently prohibit recording court proceedings – something technology like DoNotPay’s requires.

Even so, Browder hopes it’s not the end of the road for AI in the courtroom. “The truth is, most people can’t afford lawyers,” he said. “This could’ve shifted the balance and allowed people to use tools like ChatGPT in the courtroom that could’ve helped them win cases.”


Will AI-Generated Content Be Met with Similar Hostility?

We already know that AI-generated content on well-known news sites hasn’t panned out well so far. We also know that state bars are hostile to the use of AI – at least within the courtroom. Given these facts, how will state bars view AI-generated content?

While publishing online content is undoubtedly different from the practice of law, this is the first time state bars have weighed in on the use of AI in the legal profession, and they are not impressed, to say the least. This shouldn’t come as a surprise: the legal profession is historically slow to adopt new technologies and has an incentive to protect its authority to regulate who (or what, apparently) can represent clients in court and provide legal advice.

The Rules of Professional Conduct May Regulate the Use of AI

Rule 7.1 (Communications Concerning a Lawyer’s Services) of the American Bar Association’s Model Rules of Professional Conduct provides:

A lawyer shall not make a false or misleading communication about the lawyer or the lawyer’s services. A communication is false or misleading if it contains a material misrepresentation of fact or law, or omits a fact necessary to make the statement considered as a whole not materially misleading.

Further, the comments clarify that:

A truthful statement is also misleading if there is a substantial likelihood that it will lead a reasonable person to formulate a specific conclusion about the lawyer or the lawyer’s services for which there is no reasonable factual foundation.

Violating this rule can lead to sanctions, censure, or even disbarment. Mindful of these consequences, most lawyers do all they can to follow it.

However, AI-generated content stands to blur these lines. At first glance, the ability to generate content at scale quickly, easily, and presumably cheaply is extremely attractive.

What’s not to like? The tendency of AI to spit out incorrect or even subtly misleading information.

In fact, OpenAI, ChatGPT’s creator, admits as much, noting that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers … is often excessively verbose … [and] will sometimes respond to harmful instructions or exhibit biased behavior.”

Considered against Rule 7.1, there’s a real threat that publishing AI-generated content without substantial review could be deemed false and misleading advertising – even when its individual statements are accurate. AI-generated content may save you time and money on the front end, but between the substantial review it requires and the lingering risk of violating the Rules of Professional Conduct, sticking with human-generated content is the safer course.

Update: OpenAI Has Taken Steps to Identify Content Created by ChatGPT

One of the major questions that has been at issue since the release of ChatGPT is whether AI content detectors work. Such a detector would assuage some of the fears associated with AI-generated content, including academic plagiarism and the mass creation and dissemination of propaganda or false information. In addition, many people in the SEO world have wondered whether Google can recognize AI-generated content and whether it will label it as spam (likely hurting the rankings of websites that use AI content).

On January 31, 2023, ChatGPT’s creator, OpenAI, released a tool to detect AI-generated content. Not only does this give people a way to identify content created by ChatGPT, but it also provides a preliminary answer to the question of whether OpenAI wants that content to be identifiable. OpenAI has also discussed cryptographically watermarking ChatGPT’s output so that it can be easily identified in the future.
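OpenAI’s classifier is a hosted tool rather than a published algorithm, but the statistical intuition behind many detectors is simple: text that a language model finds highly predictable (low perplexity) is weak evidence of machine generation. Here is a minimal, hypothetical sketch of that heuristic using the open-source GPT-2 model; the threshold is an arbitrary assumption, and real detectors, OpenAI’s included, are both more sophisticated and still far from reliable:

```python
# Minimal perplexity heuristic for AI-text detection (illustrative only).
# Low perplexity means the model finds the text very predictable, which
# is weak evidence of machine generation. Real detectors are more complex.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing input_ids as labels makes the model return the average
        # cross-entropy loss over the sequence; exp(loss) is perplexity.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()


THRESHOLD = 25.0  # arbitrary cutoff, assumed purely for illustration


def looks_machine_generated(text: str) -> bool:
    return perplexity(text) < THRESHOLD
```

The unreliability of this kind of signal is exactly why detection remains an open question: polished human prose can score as “predictable,” and lightly edited AI output can score as “human.”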

In light of these issues, it’s likely a good idea to err on the side of caution and wait to see what Google and your state bar have to say about AI content before posting it on your law firm’s website.

To Use AI or Not to Use AI?

After Bankrate and CNET’s embarrassing blunders with AI-generated content, anyone who wants to ensure they post only factual, non-misleading content – especially attorneys and law firms – should stay away from content generated entirely by any AI engine, ChatGPT included.

At this point, it’s not worth the risk to an attorney’s practice, bar admission, and reputation. Human-generated content is currently the only way to guarantee that you stay on the good side of state bars and publish only true, factual information.

David Arato

David is a 2009 graduate of the St. Louis University School of Law. He got his start in legal marketing as a content writer at a now-defunct legal marketing agency. It was during this experience that David recognized the difficulty firms and agencies have sourcing quality legal content at scale, and he founded Lexicon Legal Content to solve this problem. Since 2012, Lexicon has been providing firms and marketing agencies with accurate, optimized, and ethics-compliant content across a variety of practice areas and jurisdictions.
