Professional Responsibility

Adopting Emerging Technology Responsibly

By Merisa K. Bowers | Ohio Lawyer | February 14, 2024

With the advent of recorded dictation 70 years ago, digitized legal research 50 years ago, the internet 30 years ago and multifunctional firm management software in the last 20 years, we don’t research, write or practice like my grandfather did in the 1960s. The times are a-changin’ yet again, with an impact on the legal profession just as dramatic as the rise of the internet in the ’90s. This decade, however, the impact will come from advancements in artificial intelligence (AI) technology.

Just as we were trained to get the most out of online legal research tools – refining our skills to search accurately for cases and concepts and learning what it means to “Shepardize” – this next wave of technology comes with a learning curve and an opportunity: those who lean into the new tools can set themselves apart with both clients and colleagues.

However, attorneys are wise to maintain a degree of skepticism when applying generative AI to the legal sector. Some general-use AI tools, like ChatGPT, face limitations in tasks essential to legal professionals. Accurate legal research, case law analysis, recommendation of sound legal strategies and factual investigation remain on the fringes of, or beyond, the current capabilities of publicly available applications. And while a recent proliferation of legal-specific tools has made AI genuinely useful for law practices, lawyers should still heed some best practices and, as always, keep the professional rules in mind.

Cautionary Tales

Last year saw legal consequences stemming from the increased use of generative AI in the legal sector. A cautionary tale from New York stands out, where attorneys were sanctioned for the careless use of ChatGPT in a legal brief in March 2023. The attorneys cited fake cases “hallucinated” by ChatGPT and only belatedly admitted that they were unaware of the tool’s limitations; a federal judge found that they acted in bad faith, largely because of their attempts to avoid responsibility.

That same spring, a Colorado lawyer filed a motion containing case citations generated by ChatGPT. He was disciplined not so much for the underlying error as for failing to alert the court or withdraw the motion after learning the cases were fictitious, and for misrepresenting to the court that the error was attributable to a legal intern.

As recently as Nov. 29, 2023, a lawyer representing former attorney Michael Cohen filed a brief without checking its citations. In a December affidavit, Mr. Cohen acknowledged using Google’s Bard for what he thought was legal research and submitting his findings to his attorney. His attorney then included the cases in a filed brief, suggesting that another attorney had produced the research. The court has not yet ruled on the underlying motion or on sanctions or other consequences for submitting a brief containing hallucinated case law.

All three of these incidents underscore the need for legal professionals to fully comprehend the capabilities and limitations of AI tools and to ensure their responsible integration into legal processes. What emerges from a closer look at these examples, however, is that the sanctions were driven more by the attorneys’ attempts to cover up their errors than by the inappropriate use of the AI technology itself.

Last year, amid the hype surrounding the emergence of generative AI, one federal judge in Texas issued an order requiring attorneys to include with their filings a certificate regarding the use of AI. The certificate had to confirm whether generative AI was used and, if so, whether the information contained in the brief had been subject to human review and validation. This proactive approach is a reminder of the role of judicial oversight in the application of AI to legal practice.

The legal industry further witnessed OpenAI, the company behind ChatGPT, face substantial legal challenges in 2023, which are carrying forward into this year. Content creators and copyright owners initiated various suits, including a class action, alleging copyright infringement from generative AI trained on copyrighted works without consent, attribution or compensation. These legal battles, coupled with claims of unlawful collection of personal information, underscore the complex legal terrain surrounding AI technologies and raise critical questions about ownership, consent and privacy.

In addition to reaction from the judiciary, the regulatory landscape is evolving. The Biden administration issued an executive order in late October 2023 emphasizing the importance of ensuring the safety and trustworthiness of generative artificial intelligence. The order marked a starting point for administrative regulatory action, urging comprehensive data privacy standards and safeguards against AI-related threats in critical areas such as employment, housing, credit, education and healthcare.

"Especially for small and mid-size law firms, this new landscape comes with some new responsibilities."

Emerging Guidance Across the Nation

While analytic AI tools have been in use for several years, the leap forward with generative AI presents opportunities and risks that lawyers must evaluate regularly, striking a balance between innovation and ethical considerations as they shape responsible practices.

In shaping those practices, legal professionals must grapple with questions of transparency, accountability and privacy. While the Ohio Board of Professional Conduct has not (yet) issued any advisory opinions regarding the use of generative AI, attorneys are well-advised to stay alert to legal news and disciplinary decisions from other states.

The Florida Bar Board of Governors’ Review Committee on Professional Ethics issued Advisory Opinion 24-1, Regarding Lawyers’ Use of Generative Artificial Intelligence, in January 2024. Advisory Opinion 24-1 concludes that ethical use of generative AI is permissible, “but only to the extent that the lawyer can reasonably guarantee compliance with the lawyer’s ethical obligations.” Various rules of professional conduct are implicated in the opinion, including advertising rules, legal fees and competency.

The State Bar of California Standing Committee on Professional Responsibility and Conduct recently published “Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law,” which identifies implicated duties and responsibilities for attorneys. Like Florida’s advisory opinion, the California publication urges awareness of the risks and benefits of the emerging technology and emphasizes caution, as “generative AI poses the risk of encouraging greater reliance and trust on its outputs because of its purpose to generate responses and its ability to do so in a manner that projects confidence and effectively emulates human responses.”

Applicable Ohio Rules

Several critical professional conduct principles must be considered when employing generative AI and, more generally, new technologies in law practice. The Florida advisory opinion discussed above summarizes the implicated rules, most of which are substantially similar to Ohio’s rules. These ethical principles include:

1. Duty to maintain confidentiality of information (Ohio RPC 1.6).

Attorneys must be attentive to how information entered into AI systems will later be used. Client names and sensitive information should never be included in publicly accessible generative AI tools.

2. Obligations of candor toward the tribunal and truthfulness in statements to others (Ohio RPC 3.3 and 4.1).

As demonstrated by recent cases involving the misuse of AI tools, as soon as an attorney becomes aware of an error, the attorney may have an obligation to disclose it. When questioned by a court, attorneys have an obligation to respond truthfully and should consider consulting with an ethics attorney.

3. The requirement to avoid frivolous contentions and claims (Ohio R. Civil Proc. 11 and ORC § 2323.51).

A hallucinated case or other misrepresentation of case law should be immediately remedied. Egregious mis-citation of cases can result in Rule 11 sanctions for improper and frivolous arguments.

4. Duty to ensure reasonableness in fees and costs (Ohio RPC 1.5).

Attorneys have an obligation to bill reasonably. Time should not be inflated to account for any efficiency gained by AI use, and attorneys should not bill time to a client to gain general competence, whether in an area of law or in the use of technology. Attorneys should seek guidance from ethics counsel on whether it is proper to pass along the costs of the technology itself to clients.

5. Compliance with advertising rules (Ohio RPC 7.1 – 7.4).

As certain aspects of communication with clients or prospective clients are delegated to AI assistants, attorneys must consider appropriate disclosures to clients and prospective clients, the limitations of those AI assistants, and compliance with the Ohio Rules of Professional Conduct governing communication of information about legal services.

6. Responsibilities of reasonable oversight of subordinate attorneys, staff and vendors (Ohio RPC 5.1 – 5.3).

Not only must an attorney adhere to professional standards, but attorneys with supervisory authority also have an obligation to take reasonable steps to ensure that subordinate attorneys, staff and vendors comply. This includes reasonable investigation into the practices of third-party vendors, such as technology companies that develop or provide services to law firms.

Practical Guidance

Especially for small and mid-size law firms, this new landscape comes with some new responsibilities, namely:

  • Staying informed about emerging technology.

  • Prioritizing confidentiality and data privacy.

  • Ensuring attorney oversight and critical review of AI outputs.

Additionally, developing and implementing an internal office AI use policy is essential. Such a policy should include:

  • Directives to disclose the use of AI tools to clients.

  • Oversight and review of all outputs from AI tools.

  • Prevention of the disclosure of sensitive client information to non-secure AI tools.

  • Limitations on which AI tools or applications may be used by the firm.

  • Requirements that any AI tool deployed be able to justify or explain how it generated its response.

The integration of generative AI into the legal industry presents both opportunities and challenges. As legal professionals embrace these tools, a delicate balance must be maintained, acknowledging the transformative potential while adhering to ethical principles and legal obligations.

The legal community's proactive engagement in shaping policies and practices will define how generative AI contributes to the future of legal services. The journey involves continuous evaluation, a commitment to ethical integration and a collective effort to harness the benefits of AI responsibly. Just as we’ve embraced a wide variety of innovative technology in the last 50 years, the new wave of generative and analytic AI tools has the potential to make the practice of law stronger, faster, and better so long as we prioritize the needs of our clients, integrity of our systems and respect for the rules that govern us.


About the Author

Merisa K. Bowers joined the Ohio Bar Liability Insurance Company (OBLIC) as loss prevention counsel in early 2023 after 13 years in private practice. A born problem-solver, Merisa strives to support fellow attorneys through proactive policy development and risk management.
