Tech Series: The intersection of AI and cybersecurity

July 11, 2024

By Michael Navin

Generative AI, with its ability to create convincing forgeries and automate tasks at scale, presents a new set of challenges for court systems in technology adoption and cybersecurity. In the wrong hands, these powerful tools could be used to manipulate the legal process, disrupt proceedings, and erode public trust in the justice system.

One of the most concerning applications of generative AI is fabricated evidence. AI can generate realistic-looking documents, such as contracts, wills, or witness statements, with relative ease. These forgeries could be used to support a fraudulent claim, cast doubt on legitimate evidence, or manipulate the narrative of a case. Similarly, AI can be used to create deepfakes: doctored audio, images, or video recordings that purport to show someone saying or doing something they never did. Deepfakes could be used to smear a party's reputation, influence a jury's verdict, or even extort someone into dropping a lawsuit.

These tools also give cybercriminals a potent weapon for disinformation campaigns, generating fake news articles, social media posts, and online reviews that target jurors, judges, or the public. This fabricated information could sway public opinion in favor of a particular party, damage someone's reputation, or even incite violence. AI's ability to personalize this misinformation, tailoring it to the specific biases and vulnerabilities of a target audience, makes it particularly dangerous.

Cybercriminals could also use AI to overwhelm court systems. By automating tasks like filing lawsuits or motions, criminals could flood the system with bogus cases, clogging the docket and delaying legitimate proceedings. The resulting backlog would strain court resources and make it difficult for judges to hear real cases in a timely manner. Furthermore, AI could be used to identify vulnerabilities in a court's IT infrastructure. By analyzing legal documents and public records (such as RFP awards, which can reveal the vendors and software a court relies on), attackers could pinpoint weaknesses in the system and exploit them to launch cyberattacks, steal sensitive information, or disrupt court proceedings.
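On the defensive side, e-filing gateways can blunt this kind of automated flooding with per-filer rate limiting. Below is a minimal token-bucket sketch in Python; the class name, parameters, and limits are illustrative assumptions, not features of any actual court e-filing system.

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter for a hypothetical e-filing endpoint.

    Allows short bursts up to `capacity` filings, then throttles
    new submissions to `refill_rate` filings per second per filer.
    """

    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity        # maximum burst size
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        """Return True if one more filing may proceed right now."""
        now = time.monotonic()
        # Add tokens for the time elapsed since the last check.
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per filer: a burst of 5 filings, then 1 filing per minute.
bucket = TokenBucket(capacity=5, refill_rate=1 / 60)
for i in range(7):
    print(f"filing {i + 1}: {'accepted' if bucket.allow() else 'throttled'}")
```

A legitimate filer rarely notices a limit like this, while a script submitting hundreds of machine-generated complaints is quickly throttled.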

Impersonation is another threat posed by generative AI, which can mimic someone's writing style and create fake emails or social media messages that appear to come from a judge, lawyer, or court official. These messages could be used in phishing scams, tricking individuals into revealing sensitive information such as passwords or financial data. They could also be used to intimidate witnesses, discourage people from filing lawsuits, or manipulate the outcome of a case.
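As one narrow illustration, a court's mail gateway can flag messages whose sender claims a court affiliation but whose sending domain is not an official one. The sketch below uses only Python's standard-library email parser; the domain allowlist and the sample message are hypothetical.

```python
from email import message_from_string
from email.utils import parseaddr

# Hypothetical allowlist of official court domains.
OFFICIAL_DOMAINS = {"courts.example.gov", "ncsc.org"}

def flag_impersonation(raw_message: str) -> bool:
    """Return True if the message claims a court identity
    but was sent from a non-official domain."""
    msg = message_from_string(raw_message)
    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    claims_court = any(word in display_name.lower()
                       for word in ("judge", "clerk", "court"))
    return claims_court and domain not in OFFICIAL_DOMAINS

raw = ("From: Judge A. Smith <a.smith@mail-example.net>\n"
       "Subject: Urgent\n\nPlease verify your login.")
print(flag_impersonation(raw))  # True: court identity, unofficial domain
```

A check like this catches only the crudest impersonation attempts; in practice it would sit alongside standard email authentication such as SPF, DKIM, and DMARC.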

Courts are aware of these potential threats and are taking steps to mitigate them. Some judges have already begun requiring disclosure of AI use in legal filings, and courts may implement stricter protocols for electronic submissions to ensure the authenticity of documents.
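One building block for such protocols is recording a cryptographic fingerprint of each document at the moment of submission, so any later alteration is detectable. Here is a minimal sketch, assuming the clerk's system stores the hash alongside the docket entry; the file name is illustrative.

```python
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a filed document."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# At filing time, the clerk's system records the digest with the docket entry.
recorded = fingerprint("motion_to_dismiss.pdf")

# Later, anyone can re-hash the document and compare digests;
# any mismatch means the file was altered after submission.
assert fingerprint("motion_to_dismiss.pdf") == recorded
```

A hash proves a document has not changed since filing; it does not prove the content was genuine to begin with, which is why it complements, rather than replaces, disclosure rules and forgery detection.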

Additionally, there's a growing focus on cybersecurity measures to protect court IT systems from cyberattacks. However, the rapid development of generative AI means that cybercriminals are constantly finding new ways to exploit its capabilities. Staying ahead of this curve will require ongoing collaboration between legal professionals, technology experts, and policymakers to develop safeguards and best practices that protect the integrity of the justice system.

Is your court prepared for generative AI? Share your experiences with us. For more information, see NCSC's technology page, contact knowledge@ncsc.org, or call 800-616-6164. Follow the National Center for State Courts on Facebook, X, LinkedIn, and Vimeo. For more Trending Topics posts, visit ncsc.org/trendingtopics and subscribe to the LinkedIn newsletter.