October 17th 2024 | Jason Lafita
Edited by Stella Lee
Equity, or the lack thereof, in the justice system is an increasingly discussed issue in modern political spheres. Much of the discussion in recent years has centered on pro se (self-represented) litigants, who do not have access to the same resources as represented parties and often come from underrepresented and underserved communities. Given recent rapid advances in artificial intelligence, it is worth exploring how these developments could help level the playing field for pro se litigants.
Artificial Intelligence
In recent months, it has grown difficult to read the news without encountering at least one mention of some new development in artificial intelligence. Artificial intelligence (AI) is a subfield of computer science defined as “the science and engineering of making intelligent machines” by emeritus Stanford Professor John McCarthy, who coined the term in 1955 [1]. At the core of this definition is intelligence, or “the ability to learn and perform suitable techniques to solve problems and achieve goals, appropriate to the context in an uncertain, ever-varying world” [1]. AI technology has been advancing rapidly, with generative AI systems like ChatGPT and DALL-E becoming increasingly prevalent in the public eye as their accuracy and human-likeness improve. Beyond these general-purpose platforms, there are AI tools built specifically for attorneys, such as Lexis+ AI and CARA AI [2].
Policymaking around AI has also increased, with states like Connecticut and Texas implementing AI oversight boards and states like North Dakota passing legislation that specifically excludes AI from the definition of a person [3]. In relation to the justice system, one major question that has arisen is whether the use of AI in the courtroom is legal. Well before the recent AI explosion, some states passed laws declaring electronic legal service providers to be engaged in the “unauthorized practice of law,” meaning they provide legal services without proper licensing [4].
Pro Se Litigants
“Pro se” is a Latin phrase meaning “for oneself.” In legal proceedings, it refers to litigants who have foregone their right to formal representation by counsel and instead represent themselves [5]. Pro se litigants (PSLs) are no small minority of the legal system: one study of federal filings found that 28 percent of all federal cases involved at least one pro se party [6]. While there is no evidence that the rate at which parties choose to be unrepresented has “exploded” as many policymakers have warned, the numbers are steadily rising, with little government action to counteract the so-called “pro se crisis” [6].
PSLs are often among the most underserved individuals, including currently incarcerated plaintiffs, victims of domestic violence, and disenfranchised employees, groups with high rates of poverty or near-poverty [7]. Because of these systemic disadvantages, stark differences in outcomes exist between pro se and represented litigants in our adversarial legal system. Pro se plaintiffs win 4 percent of the time; pro se defendants win 14 percent of the time [8]. When both plaintiff and defendant are represented, the win rate for each is around 50 percent [8]. This disproportionately low win rate for pro se litigants undermines the American ideal of equal access to justice.
Roadmap
This article analyzes whether the use of AI constitutes the unauthorized practice of law. That analysis is then applied to pro se litigants to determine whether they can ethically use AI and, if so, whether its use could advance equity in the justice system.
Legal AI as Unauthorized Practice of Law
Legal AI is shorthand for a range of machine learning tools that can assist lawyers and non-lawyers alike in legal matters, including analyzing briefs, finding cases, honing arguments, and more [9]. As this technology grows, it has become increasingly essential for attorneys to learn and master it [10]. However, whenever a new technology arises that threatens to change society, whether the radio, the television, the computer, or some other innovation, people tend to “panic” over its societal implications [11].
The same has been true of AI over the past year [12]. In legal circles, the question has arisen of whether using AI constitutes the unauthorized practice of law (UPL).
By federal standards, a lawyer must be authorized to practice law by being admitted to the bar, either generally or within a certain jurisdiction or for a certain class of issues [13]. The exact definition of the “practice of law” varies by jurisdiction, but the rule does not prohibit lawyers from “employing the services of paraprofessionals and delegating functions to them, so long as the lawyer supervises the delegated work and retains responsibility for their work” [13]. If legal AI is considered a tool, it could be utilized by attorneys much as paralegals are. An argument could therefore be made that attorneys’ use of AI is not UPL, since the AI acts simply as a supervised non-lawyer assistant.
However, this does not cover the use of AI by PSLs, who are non-lawyers. In general, courts have held that it is not the practice of law to sell “informational publications” that assist people without providing specific legal advice [14]. Some courts even publish full glossaries of legal terms and other information about litigation specifically to assist pro se litigants [14]. This is not the first time UPL has been called into question through the lens of new technology. When pre-AI legal software emerged, its various uses faced legal challenges; form-filling programs, for example, were deemed UPL in both California and Texas in the 1990s [14]. This led Texas to pass a law providing that the “design, creation, publication, distribution, display, or sale [of]” computer software is not UPL so long as the product clearly states that it is not a substitute for the advice of an attorney [15].
A similar principle could be applied to AI. Pro se litigants have the right to use other types of computer software and guidebooks to assist them, and AI could constitute another such tool. The difference is that AI is capable of providing specific advice, a major element of the definition of the practice of law. However, if, as in Texas, it is clearly stated that legal AI does not substitute for an attorney’s advice, it could be deemed not to constitute the practice of law, and therefore be ethically and legally used by pro se litigants. To ensure that the use of legal AI is protected, further legislation must be passed; the definition of the practice of law is fluid, and there are as yet no clear answers about AI’s place within it.
Data Privacy and Attorney-Client Privilege
A major issue in technology policy as a whole is data privacy. It is no secret that many prominent websites and applications collect and sell user data. If the user agrees to a site’s terms of service, companies cannot face legal consequences for this practice unless data privacy laws are in place [16], and such laws are established in only a few states [17]. There is also the issue of data being sold illegally, outside the terms and conditions. Specific to AI, there have been numerous instances of personal data being leaked, as when a Microsoft AI employee accidentally exposed 38 terabytes of data [18]. The issue of data leakage grows more fraught when the data concerns legal matters. Privacy is paramount within the justice system, with attorney-client privilege held in high regard when an individual chooses to be represented.
In the interest of privacy, the question arises of whether the relationship between the publishers of a legal AI and its users should be treated as that of attorney and client. An attorney-client relationship is formed when a person willingly seeks the advice of an attorney and the attorney agrees to provide it [19]. If a person reasonably believes that such a relationship exists, the attorney bears the professional obligations of one [19]. A PSL using legal AI willingly seeks the advice of the software, but as argued above, using AI for legal research does not constitute the practice of law, so no professional obligations attach. Therefore, AI publishers cannot be considered attorneys in an attorney-client relationship.
An attorney-client relationship can be formed through, though not with, non-lawyers, but only if the non-lawyer is under the supervision of an authorized attorney [20]. No supervising attorney is present when PSLs use AI, since the AI is not acting as a paralegal assisting a barred attorney. Thus, even under the framework of attorney-client relationships formed through non-lawyers, AI users cannot form attorney-client relationships with publishers.
Even if a formal attorney-client relationship does not exist, there is still room for data privacy laws covering sensitive information, especially sensitive legal information. Currently, only three states have comprehensive data privacy laws, which give users some choice in how their information is shared [21]. There are no federal laws against the processing of personal data, even when that data involves legal matters [22]. Thus, no legal protection exists for the sensitive information that a pro se litigant might provide to a legal AI.
Disclosure of Use of AI
While it has been established that AI can ethically be used in the courtroom, it is unclear whether judges should mandate disclosure of its use. PSLs are typically granted more grace on technical formalities in the interest of judicial fairness, given that it is an impossible expectation for a layperson to “negotiate a thicket of legal formalities at peril of losing his or her right to be heard” [23]. However, they are awarded no additional rights, just increased leniency. With AI as a new development, the leeway PSLs are afforded could be called into question if they are compelled to disclose their use of it.
Traditionally, disclosure in the courtroom refers to the evidence that will be presented in court, including supporting documents and summaries of expert witness testimony [24]. In recent months, however, judges have begun mandating that parties disclose whether AI was used to craft or supplement their arguments. For example, Judge Stephen Vaden of the United States Court of International Trade ordered that any party appearing before him must disclose whether generative AI was used to craft any part of their briefs or other supporting documents, and must certify that no confidential information was disclosed to “any unauthorized party,” meaning the AI publisher [25]. The order was issued in the interest of preserving the security of confidential information, echoing the data-leakage concerns outlined above [25].
Under orders like this, a pro se litigant would be forced to disclose whether they used generative AI to write their arguments. It is unclear, however, whether disclosure is required when AI was used only for research or brainstorming and never directly quoted or summarized. So far, such measures have been enacted on a jurisdictional basis, so disclosure should be compelled only where a judge has ordered it.
The question of leniency for PSLs remains. An analogous issue in law is “ghostwriting” for PSLs: the practice of lawyers drafting arguments for PSLs without any acknowledgement, often for payment after the fact [26]. The American Bar Association (ABA) supports the practice of ghostwriting, reasoning that if a lawyer worked on the arguments, that assistance will be evident in the finished result [27]. This opinion is not universally held; many jurisdictions treat ghostwriting as impermissible or require disclosure of whether a document was ghostwritten [14].
Legal AI could be considered a ghostwriter of sorts for PSLs. Since legal AI does not constitute the practice of law, it does not fit every facet of the ghostwriting analogy. Under the ABA’s reasoning, if a lawyer’s ghostwriting is evident when PSLs litigate, the use of AI should logically be just as evident, so no additional disclosure is needed. Still, using AI would most likely strip PSLs of the leeway the court affords them, since drafting arguments with generative AI would demonstrate command of technical terminology.
Conclusion
It is evident that legal AI has made great strides in the last year and will continue to innovate. If AI constitutes a general-purpose tool rather than the unauthorized practice of law, its use in the courtroom will expand accordingly. Limits and drawbacks do exist on how it can be applied within the justice system, however, including concerns over privacy and security for users.
Given this nuance, it remains unclear whether legal AI actually improves equity for PSLs. Legal AI has the capacity to write clear, technically sound arguments for individuals who typically lack the legal know-how to do so themselves. However, weak privacy protections could lead to PSLs’ sensitive information being leaked, which could produce worse outcomes in the long run. The same goes for compelled disclosure of use: if that policy becomes more common, it could erode the grace judges usually award PSLs.
Legal scholars have warned about the so-called pro se crisis since the turn of the twenty-first century. The number of PSLs is steadily rising, but conditions for them are not improving. Even with the leniency judges often provide, it is extremely difficult for PSLs, who often come from underserved demographics, to argue for themselves in a courtroom setting. Gray areas and potential unintended consequences remain, as outlined in this article. But if legal AI can reduce this burden and provide more resources to those who need them most, it is worth putting more stake into its application within the justice system. Further research must be conducted to mitigate the possible side effects that would lessen the equalizing force legal AI could provide.
Sources:
[1] Manning, Christopher. “Artificial Intelligence Definitions.” (Stanford University, September 2020). https://hai.stanford.edu/sites/default/files/2020-09/AI-Definitions-HAI.pdf.
[2] “Legal AI Tools: Essential for Attorneys.” Thomson Reuters Law Blog, January 17, 2023. https://legal.thomsonreuters.com/blog/legal-ai-tools-essential-for-attorneys/.
[3] “Artificial Intelligence 2023 Legislation.” National Conference of State Legislatures, July 20, 2023. https://www.ncsl.org/technology-and-communication/artificial-intelligence-2023-legislation.
[4] Snyder, Justin. “RoboCourt: How Artificial Intelligence Can Help pro Se Litigants and Create a ‘Fairer’ Judiciary, 10 Ind.” Maurer School of Law, 2022. https://www.repository.law.indiana.edu/cgi/viewcontent.cgi?article=1136&context=ijlse.
[5] “Pro Se.” Legal Information Institute, n.d., https://www.law.cornell.edu/wex/pro_se.
[6] Gough, Mark D., and Emily S. Taylor Poppe. “(Un)Changing Rates of pro Se Litigation in Federal Court.” Law & Social Inquiry 45, no. 3 (January 20, 2020): 567–89, https://doi.org/10.1017/lsi.2019.69.
[7] Seifter, Miriam. “State Institutions and Democratic Opportunity.” University of Wisconsin Law School, February 24, 2022. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4042959.
[8] Levy, Mitchell. “Empirical Patterns of pro Se Litigation in Federal District Courts.” University of Chicago Law Review, November 2018. https://lawreview.uchicago.edu/print-archive/empirical-patterns-pro-se-litigation-federal-district-courts.
[9] “How Artificial Intelligence Is Used in Legal Practice.” Bloomberg Law, August 1, 2023. https://pro.bloomberglaw.com/brief/ai-in-legal-practice-explained/.
[10] “Legal AI Tools: Essential for Attorneys | Legal Blog.” Thomson Reuters Law Blog, January 17, 2023. https://legal.thomsonreuters.com/blog/legal-ai-tools-essential-for-attorneys/.
[11] Orben, Amy. “The Sisyphean Cycle of Technology Panics.” Perspectives on Psychological Science 15, no. 5 (June 30, 2020): 1143–57, https://doi.org/10.1177/1745691620919372.
[12] Williams, Rhiannon. “The Download: Explaining the Recent AI Panic, and Digital Inequality in the US.” MIT Technology Review, June 20, 2023. https://www.technologyreview.com/2023/06/20/1075191/the-download-explaining-the-recent-ai-panic-and-digital-inequality-in-the-us/.
[13] “Comment on Rule 5.5: Unauthorized Practice of Law; Multijurisdictional Practice of Law.” American Bar Association, 2019. https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_5_5_unauthorized_practice_of_law_multijurisdictional_practice_of_law/comment_on_rule_5_5_unauthorized_practice_of_law_multijurisdictional_practice_of_law/.
[14] Brimo, Brooke. “How Should Legal Ethics Rules Apply When Artificial Intelligence Assists pro Se Litigants?” The Georgetown Journal of Legal Ethics, 2022. https://www.law.georgetown.edu/legal-ethics-journal/wp-content/uploads/sites/24/2023/03/GT-GJLE220037.pdf.
[15] Texas Government Code § 81.101(c) (1999).
[16] “Your Data Is Shared and Sold…What’s Being Done about It?” Knowledge at Wharton, October 28, 2019. https://knowledge.wharton.upenn.edu/article/data-shared-sold-whats-done/.
[17] Murray, Conor. “U.S. Data Privacy Protection Laws: A Comprehensive Guide.” Forbes, August 19, 2023. https://www.forbes.com/sites/conormurray/2023/04/21/us-data-privacy-protection-laws-a-comprehensive-guide/?sh=47ed951b5f92.
[18] Kan, Michael. “Microsoft AI Employee Accidentally Leaks 38TB of Data.” PC Magazine, September 18, 2023. https://www.pcmag.com/news/microsoft-ai-employee-accidentally-leaks-38tb-of-data.
[19] “Section 1 – Establishing the Attorney-Client Relationship.” Louisiana State Bar Association, n.d. https://www.lsba.org/PracticeAidGuide/PAG1.aspx.
[20] Bagby, Laura. “ABA Opinion Provides Guidance on Proper Client Intake by Nonlawyer Assistants.” 2Civility, June 16, 2023. https://www.2civility.org/aba-opinion-provides-guidance-on-proper-client-intake-by-nonlawyer-assistants/.
[21] Klosowski, Thorin. “The State of Consumer Data Privacy Laws in the US (and Why It Matters).” Wirecutter (New York Times, September 6, 2021). https://www.nytimes.com/wirecutter/blog/state-of-privacy-laws-in-us/.
[22] “Legal Bases for Processing of Sensitive Data.” Baker McKenzie Resource Hub, December 23, 2023. https://resourcehub.bakermckenzie.com/en/resources/global-data-privacy-and-cybersecurity-handbook/north-america/united-states/topics/legal-bases-for-processing-of-sensitive-data.
[23] Gray, Cynthia. “Pro Se Litigants in the Code of Judicial Conduct.” Judicial Conduct Report 36, no. 3 (2014). https://www.ncsc.org/__data/assets/pdf_file/0013/15250/jcr-fall-2014.pdf.
[24] “Disclosure.” Legal Information Institute, n.d. https://www.law.cornell.edu/wex/disclosure.
[25] Vaden, Stephen. “Order on Artificial Intelligence.” US Court of International Trade, June 8, 2023. https://www.cit.uscourts.gov/sites/cit/files/Order%20on%20Artificial%20Intelligence.pdf.
[26] Loudenslager, Michael. “Giving up the Ghost: A Proposal for Dealing with Attorney ‘Ghostwriting’ of pro Se Litigants’ Court Documents through Explicit Rules Requiring Disclosure and Allowing Limited Appearances for Such Attorneys.” Marquette Law Review 103, no. 1 (2008): 92, https://scholarship.law.marquette.edu/cgi/viewcontent.cgi?params=/context/mulr/article/1301/&path_info=Loudenslager_13.pdf.
[27] “Formal Opinion 07-446: Undisclosed Legal Assistance to pro Se Litigants.” American Bar Association, May 7, 2007. https://www.americanbar.org/products/ecd/chapter/220008/.