Artificial intelligence and judicial ethics

March 14, 2024

By Cynthia Gray

In 2023, advisory opinions from Michigan and West Virginia addressed, for the first time, the ethical issues that the use of artificial intelligence raises for judges. Michigan Advisory Opinion JI-155 (2023); West Virginia Advisory Opinion 2023-22.

The Michigan opinion explains that “artificial intelligence (AI) is not a single piece of hardware or software but a multitude of technologies that provide a computer system with the ability to perform tasks, solve problems, or draft documents that would otherwise require human intelligence.” The West Virginia opinion notes that, whether judges realize it or not, they already use some form of AI in their everyday lives, citing facial recognition on their cell phones, smart email categorization, friend suggestions from Facebook, recommendations on streaming apps, and navigation sites such as Google Maps.

Both opinions conclude that judges have a duty to maintain competence in technology, including AI.

The Michigan opinion describes why knowledge of AI technology is essential to ensure that a judge’s use of AI does not conflict with the code of judicial conduct. For example, code requirements could be implicated if the algorithm or training data for an AI tool is biased.

“Specifically, if an AI tool’s algorithm’s output deviates from accepted norms, would the output influence judicial decisions . . . ? An algorithm may weigh factors that the law or society deem inappropriate or do so with a weight that is inappropriate in the context presented . . . . AI does not understand the world as humans do, and unless instructed otherwise, its results may reflect an ignorance of norms or case law precedent.”
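To make that concern concrete, consider a minimal hypothetical sketch in Python. Everything in it is invented for illustration: neither opinion references any code, and the factor names, weights, and `risk_score` function do not describe any real tool. The sketch shows how a scoring model’s output turns entirely on the weights assigned to each factor, including a geographic proxy factor that the law might deem inappropriate.

```python
# Hypothetical, deliberately simplified risk-scoring model.
# All factor names and weights are invented for illustration only;
# they are not drawn from either advisory opinion or any real tool.

def risk_score(defendant: dict) -> float:
    """Return a 'risk' score as a weighted sum of input factors."""
    weights = {
        "prior_convictions": 2.0,       # arguably a legitimate factor
        "age": -0.1,                    # weight may be inappropriate in context
        "zip_code_poverty_rate": 1.5,   # proxy for where the defendant lives
    }
    return sum(weights[f] * defendant.get(f, 0.0) for f in weights)

# Two hypothetical defendants, identical except for neighborhood:
a = {"prior_convictions": 1, "age": 30, "zip_code_poverty_rate": 0.05}
b = {"prior_convictions": 1, "age": 30, "zip_code_poverty_rate": 0.40}
print(risk_score(a))  # 2.0 - 3.0 + 0.075 = -0.925
print(risk_score(b))  # 2.0 - 3.0 + 0.600 = -0.400
```

Because the weights are buried inside the tool, a judge relying on the score might never see that a geographic proxy, rather than criminal history, drove the difference between the two results, which is exactly why the opinions call for technological competence.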

Further, the Michigan opinion stresses the ethical requirement that judicial officers have “competency with advancing technology,” such as “knowing the benefits and risks associated with the technology that judicial officers and their staff use daily, as well as the technology used by lawyers who come before the bench.”

West Virginia advises that a judge may use AI for research but, “because of perceived bias that may be built into the program,” “a judge should NEVER use AI to reach a conclusion on the outcome of a case” (emphasis in original). The opinion also states that using AI to prepare an opinion or order is “a gray area” that requires “extreme caution.” Thus, the opinion advises judges to think of AI as a “law clerk,” adding that just as a judge “cannot say, ‘the law clerk made me do it,’” they cannot “say, ‘AI made me do it.’” Likewise, the judge must decide how he or she intends to rule and instruct the program in advance so that its product conforms with the judge’s decision. As with a law clerk’s final draft, the judge must review the AI-generated draft to ensure accuracy and make changes where needed.

The Michigan opinion concludes: “AI is becoming more advanced every day and is rapidly integrating within the judicial system, which requires continual thought and ethical assessment of the use, risks, and benefits of each tool. The most important thing courts can do today is to ask the right questions and place their analysis and application of how they reached their conclusion on the record.”
