
Tech series: Procedural fairness and AI

September 13, 2023

By Bill Raftery

New research sheds light on the role of artificial intelligence (AI) in procedural fairness in the courts. This post, a version of one that appeared on NCSC’s restarted Procedural fairness for judges and courts blog, highlights three recent attempts to answer questions about AI and procedural justice.

  • “AI Decision-making and the courts: A guide for judges, tribunal members and court administrators” focuses on the use of AI in Australian courts and references its use in the United States and elsewhere. The report looks at AI in every step of the judicial process, from e-filing and case triage to sentencing, and raises several concerns. Are AI models making judicial decisions (which the report concludes would deny procedural fairness)? Is there a registry of all AI systems used by the courts or whose outputs are relied on by tribunals? Can system outputs be challenged where litigants feel that the inputs were in error or that the system failed to take account of relevant factors? While the report does reach some conclusions, its greater value lies in the list of questions it raises for courts and judges to ponder before using or relying on AI.
  • “Algorithms in the court: Does it matter which part of the judicial decision-making is automated?” looks at the perceived procedural fairness of algorithmic decision making (ADM) at four stages of the judicial process: information acquisition, information analysis, decision selection, and decision implementation. Based on survey data, people generally believe that low levels of automation produce the fairest outcomes. However, there may be a willingness to accept automation in the information acquisition stage as procedurally fair, a perception that holds both inside and outside the legal profession. Why this occurs is unclear. Perhaps people have become so accustomed to using search engines to find initial information that they will accept AI doing a similar initial search in the judicial context. Another possibility is that using AI at this stage reduces confirmation bias: instead of judges forming an opinion and then searching for initial information to support it, having the AI system handle the initial information gathering may boost perceptions of procedural fairness.
  • The researchers in “Having your day in robot court” ran two experiments to ascertain how the total or near-total removal of humanity from the judicial process affects perceptions of procedural fairness, focusing on one key element: the human voice. In the first experiment’s three scenarios (a consumer refund for a damaged camera, bail before trial, and sentencing for manslaughter), the party or defendant “lost”: no refund was given, bail was denied, and the maximum sentence was handed down. In some scenarios the determination was made by a human; in others, by an algorithm. The scenarios were further subdivided by whether the decision was made with or without a hearing and whether the decision was “interpretable” (was the decision maker’s reasoning easy to understand?). The researchers concluded that, with a human decision maker, both having a hearing and the interpretability of the decision matter, reaffirming past research. When the decision is made by an algorithm, it is viewed as less procedurally fair. However, holding a hearing may make people more willing to see the result as fair, more so than whether the algorithm’s ultimate decision is interpretable.

How is your court approaching AI and procedural fairness? Follow our Trending Topics Tech Series, email us at Knowledge@ncsc.org, or call 800-616-6164 and let us know. Follow the National Center for State Courts on Facebook, Twitter, Instagram, LinkedIn, and Vimeo.