Lawyers, LLMs, and the Line of Integrity: The Need for AI Guidelines for the Bar and Bench

Introduction

In 2025, the use of Large Language Models (hereafter ‘LLMs’ or ‘models’) such as ChatGPT, Gemini, DeepSeek, and others has continued to burgeon.1 A simple working definition of an LLM is that it is a type of Artificial Intelligence (AI) program that can, among other things, recognize and generate text.2

The rise in the use of LLMs has not been limited to the sphere of private use by individuals seeking answers to everyday questions; it has gained equal significance as an essential tool for professionals. Legal professionals are among those who have embraced LLMs in their daily tasks. One need not think too deeply to conclude that any model able to draw on an enormous amount of data and respond to questions is useful to a lawyer. It is therefore no surprise that a plethora of evidence shows lawyers have boarded the AI train. As with leveraging any technological advancement without caution, however, this laissez-faire approach to using LLMs has raised questions about the professionalism of lawyers.

Thus far, this piece has presented three key premises: first, that LLMs are growing in popularity; second, that legal professionals frequently utilize LLMs in their work; and third, that reliance on AI has challenged professional integrity. A logical conclusion to draw from these premises is the need for safeguards to shield legal professionals from such reputational risks and to maintain the ethical standards established in the profession. However, this conclusion rests on two underlying presuppositions: first, that legal professionals are exposing themselves to risk and ridicule; and second, that such exposure is undesirable and should be mitigated.

It must be said that, aside from the ‘image’ issues and potential embarrassment that unbounded use of AI may bring with it, there is undoubtedly an issue of professional ethics at play. The Bar and the Bench are both governed by a strict code of conduct; it is in the very nature of the profession that a code exists, and it must be followed to the very letter. Take, for instance, the Law Society of Kenya’s Code of Conduct. It is imperative for advocates to maintain key ethical considerations such as honesty and integrity; it emphasizes that any form of dishonourable conduct, whether in private or professional spheres, adversely impacts the advocate, the legal profession’s integrity, and the administration of justice. By the foregoing metric, then, uncritical reliance on AI tools, which may produce erroneous or fabricated information, poses a significant risk of violating these ethical obligations. And so, it must be said that this piece is important because it seeks to protect these strict codes.

In this article, I will begin by proving the third premise as a necessary step to achieve the main goal of this piece: to sound a clarion call for the creation of guidelines to regulate the ethical and professional use of AI within the legal profession.

Artificial Errors, Real Consequences: AI’s Legal Pitfalls

As alluded to above, this part begins by examining various instances in which AI has caused mishaps for lawyers.

Take, for instance, a 2023 personal injury case. In May, New York attorney Steven A. Schwartz faced potential sanctions after submitting a legal brief in a federal case that included fictitious case citations generated by ChatGPT. Schwartz, representing plaintiff Roberto Mata against Avianca Airlines, used ChatGPT to supplement his legal research. Unaware of the AI’s propensity for fabricating information, he included six non-existent cases in his filing. When questioned by Judge P. Kevin Castel, Schwartz admitted his reliance on ChatGPT and acknowledged the tool’s unreliability.3

In July 2024, a Melbourne lawyer representing a husband in a family court dispute submitted a list of prior case citations generated using the legal software Leap, which incorporates artificial intelligence. Upon review, Justice Amanda Humphreys and her associates were unable to verify the cited cases, leading to the revelation that the AI-generated citations were fictitious. The lawyer admitted to not verifying the accuracy of the information before submission and issued an unconditional apology. Despite this, Justice Humphreys referred the matter to the Victorian Legal Services Board and Commissioner for investigation, emphasizing the importance of due diligence given the increasing use of AI tools in legal practice. This incident highlights the critical need for legal professionals to thoroughly review AI-assisted research to maintain the integrity of legal proceedings.4

At the turn of the new year, a product liability lawsuit against Walmart and Jetson Electric Bikes took an unexpected turn when attorneys representing the plaintiffs cited fabricated legal cases, generated by an AI tool, in a court filing. The Wyoming District Court identified eight of the nine cases cited in a January 2025 motion as non-existent or incorrect, prompting the judge to demand an explanation. Two of the attorneys, Taly Goody and T. Michael Morgan, admitted the mistake and expressed embarrassment. A third attorney, Rudwin Ayala, took full responsibility, revealing he had used an internal AI tool to generate case law. The law firm has since implemented safeguards for AI use, and Ayala apologized profusely to the court, acknowledging the professional and personal repercussions of his error.5

To put at ease those who may claim that the foregoing cases are foreign to the African continent, a recent case in the Johannesburg Regional Court is an apt response.6 A lawyer, representing a plaintiff in a defamation suit, admitted to using ChatGPT, which produced a fictitious case citation. Magistrate Arvin Chaitram had no choice but to impose a punitive cost order on the attorney.

The preceding examples illustrate the pervasive nature of the challenges posed by AI. While only four instances have been discussed, they effectively demonstrate the broader issue at hand. These examples substantiate the premise that AI has contributed to significant embarrassment within the legal profession. This naturally leads to the question of what solutions may be implemented to address these challenges.

Taming the Tech and Guiding the Algorithm: A Blueprint for the Bar and the Bench

Before delving into a solution-oriented discussion, it is worth addressing the argument that the issue could be resolved simply by prohibiting the use of AI in the legal field. Such a course of action would be hasty and rash, given AI’s numerous benefits to the legal profession. These advantages are well-documented. For instance, AI significantly enhances efficiency by automating tasks such as document review. AI-powered platforms can expedite the discovery process by swiftly identifying relevant documents, thereby reducing both the time and costs associated with manual review.7 With its ability to canvass large pools of data, AI can minimise human error and ensure more accurate outcomes in tasks like contract analysis and due diligence.8 As for the bottom line, by automating tasks that traditionally require significant human resources, AI helps law firms reduce operational costs and offer more affordable services to clients.

In sum, if the complete eradication of AI robs us of significant utility, and unfettered access leads to equally potent disaffection, a middle point must be found. In this regard, this article posits well-informed AI-use guidelines for the bar and the bench. The idea is not entirely novel. The Global Toolkit on AI and the Rule of Law, developed by international institutions such as the UN and The Hague Institute for Innovation of Law, offers a valuable framework: it provides cross-jurisdictional principles such as transparency, accountability, and human oversight, all of which align squarely with the pillars proposed in this article. As such, this piece borrows from the ideas espoused in the toolkit to advance the conversation on regulating AI in the law. Below are the considerations and pillars that any guidelines should address.

First, competence is essential. To effectively use a tool, one must acquire a deep understanding of the tool itself. Similarly, lawyers must begin by acquiring a deep understanding of AI technologies, including their capabilities and limitations. Without this foundational knowledge, there is a danger of misapplying AI outputs or failing to recognize the potential for error. The bar and the bench must act quickly to ensure that lawyers and judges receive structured training through which these skills can be acquired. By mandating continuous education and training in AI, the legal profession can ensure that its practitioners are equipped to harness these tools responsibly. The term ‘continuous’ is employed to keep lawyers updated on the constantly evolving power of AI tools. Furthermore, given that the use of AI tools is only expected to grow, this competence must begin at a formative level, such as bar or law school.

Another pillar of the guidelines ought to be confidentiality. The legal field is built on trust and the safeguarding of sensitive information; AI applications, if not properly managed, could inadvertently expose confidential data, jeopardizing client privacy and breaching ethical standards. Strict protocols must therefore be established to ensure that any AI system complies with stringent data protection standards, so that client information remains secure and confidential at all times. This could be done in two ways. First, on the part of institutions such as OpenAI, the development of legal-specific LLM tools with more stringent privacy standards could be of great benefit. Second, on the part of the lawyer, stricter internal protocols to safeguard client data could be implemented.

Equally important is the principle of consent. Clients deserve transparency about the role of AI in their representation. Informing clients about how and when AI is used reinforces trust and satisfies ethical obligations by securing informed consent. Clear communication about AI’s involvement in legal processes helps demystify the technology and mitigates potential concerns over its impact on legal outcomes. This could be achieved through engagement letters, in which clients sign to consent to the use of AI within clearly defined boundaries.

Supervision and accountability are critical practical safeguards. If AI tools are prone to erring, then guidelines must emphasise that they should serve as aids, not replacements, for human judgment. Legal professionals must rigorously verify AI-generated outputs to avoid the pitfalls of relying on unvetted information, a failure that can lead to serious ethical breaches and judicial sanctions. Establishing accountability frameworks ensures that a human element always oversees AI contributions, since it is the human operating the system who must ultimately be held accountable. This, in turn, incentivises lawyers to be critical in their supervision, preventing errors and upholding the highest standards of legal practice.

Conclusion

AI offers significant benefits to legal practice. But, like anything, its unbridled use can also lead to serious professional missteps and reputational harm. The examples discussed highlight the risks of overreliance on AI, and they make clear that a proper middle ground is increasingly necessary. As this article has claimed, guidelines should begin with bolstering understanding through rigorous competence training, ensuring that legal professionals fully appreciate both the capabilities and limitations of AI tools. They must also safeguard confidentiality through strict data protection protocols, guarantee informed client consent, and enforce robust supervision and accountability measures.

Doing this does two things. First, it will protect the integrity of the legal profession, which, as authors such as Deborah Rhode have argued, is set apart from other professions. Second, it will ensure that AI serves as an aid to justice rather than a risk to professional integrity.

Image used was generated by Grok

 

1 See for instance, Lin X, Luterbach K, Gregory KH and Sconyers SE, ‘A Case Study Investigating the Utilization of ChatGPT in Online Discussions’ (2024) 28(2) Online Learning; Chivose EM, ‘The Adoption and Usage Patterns of ChatGPT Among Students and Faculty Members in Higher Education: A Study of the University of Nairobi, Faculty of Education’ (PhD thesis, University of Nairobi 2023); Trending WT, ‘ChatGPT or Google Scholar?’ (2023); Liang W and others, ‘Mapping the Increasing Use of LLMs in Scientific Papers’ (2024) arXiv preprint arXiv:2404.01268; Liao Z and others, ‘LLMs as Research Tools: A Large Scale Survey of Researchers’ Usage and Perceptions’ (2024) arXiv preprint arXiv:2411.05025.

2 M U Hadi, R Qureshi, A Shah, M Irfan, A Zafar, M B Shaikh and S Mirjalili, ‘Large Language Models: A Comprehensive Survey of Its Applications, Challenges, Limitations, and Future Prospects’ (2023) Authorea Preprints.

3 Lyle Moran, ‘Lawyer Cites Fake Cases Generated by ChatGPT in Legal Brief’ (Legal Dive, 30 May 2023) https://www.legaldive.com/news/chatgpt-fake-legal-cases-generative-ai-hallucinations/651557/ accessed 8 April 2025.

4 Josh Taylor, ‘Melbourne Lawyer Referred to Complaints Body After AI Generated Made-up Case Citations in Family Court’ (The Guardian, 10 October 2024) https://www.theguardian.com/law/2024/oct/10/melbourne-lawyer-referred-to-complaints-body-after-ai-generated-made-up-case-citations-in-family-court-ntwnfb accessed 8 April 2025.

5 Thomas Claburn, ‘Attorney Faces Sanctions Filing Fake Cases Dreamed up by AI’ (The Register, 14 February 2025) https://www.theregister.com/2025/02/14/attorneys_cite_cases_hallucinated_ai/ accessed 8 April 2025.

6 Legal Interact, ‘SA Lawyer Fined for ChatGPT Use: Importance of Legal Technology Solutions’ (Legal Interact, 1 July 2023) https://legalinteract.com/legal-technology-solutions/ accessed 8 April 2025.

7 Thomson Reuters, ‘How AI is Transforming the Legal Profession’ (Legal Blog, 16 January 2025) https://legal.thomsonreuters.com/blog/how-ai-is-transforming-the-legal-profession/ accessed 8 April 2025.

8 AIT Staff Writer, ‘Transforming Legal Landscape: How AI is Becoming the Ultimate Sidekick for Lawyers’ (AiThority, 5 September 2023) https://aithority.com/ai-machine-learning-projects/transforming-legal-landscape-how-ai-is-becoming-the-ultimate-sidekick-for-lawyers/ accessed 8 April 2025.
