The Integration of Artificial Intelligence in the Legal System

This is The Marshall Project’s Conclusion Newsletter, a weekly deep dive into a critical criminal justice issue. Want this delivered to your inbox? Sign up for future newsletters here.

As criminal justice reporters, my colleagues and I read a lot of legal documents.

In the past, if I encountered a citation in a document — for instance, “Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014)” — I could be reasonably confident the case was real, even if the filing perhaps exaggerated its significance.

Artificial intelligence is making that less certain. The example above is a fabricated case invented by the AI chatbot ChatGPT. But the citation appeared in a genuine medical malpractice lawsuit against a New York doctor, and last week the Second Circuit Court of Appeals upheld penalties against Jae S. Lee, the lawyer who filed the suit.

These kinds of “hallucinations” are not rare for large language model AI, which composes text by predicting which word is most likely to come next, based on the text that came before. Lee isn’t the first lawyer to get into trouble for including such a hallucination in a court filing. Others in Colorado and New York — including former Donald Trump lawyer Michael Cohen — have also been burned by apparently failing to verify the AI’s work. In response, the Fifth Circuit Court of Appeals proposed a new rule last year that would require litigants to certify that any AI-generated text had been reviewed for accuracy. Professional legal organizations have issued similar guidance.

There’s no evidence that a majority of lawyers are using AI in this way, but soon, most will be using it in one way or another. The American Lawyer, a legal trade magazine, recently asked 100 large law firms whether they were using generative AI in their day-to-day business, and 41 firms said yes — typically for summarizing documents, creating transcripts and conducting legal research. Advocates argue that the productivity gains will mean clients get more service in less time and at lower cost.

Similarly, some see the growth of AI lawyering as a potential boon for access to justice, envisioning a world in which the technology helps public interest lawyers serve more clients. As we examined in a prior newsletter, access to lawyers in the U.S. is often in short supply. About 80% of criminal defendants can’t afford to hire a lawyer, according to some estimates, and 92% of the civil legal problems that low-income Americans face go entirely or mostly unaddressed, according to a study by the Legal Services Corporation.

The California Innocence Project, a law clinic at the California Western School of Law that works to overturn wrongful convictions, is using an AI legal assistant called CoCounsel to identify patterns in documents, such as inconsistencies in witness statements. “We are spending a lot of our resources and time trying to figure out which cases deserve investigation,” former managing attorney Michael Semanchik told the American Bar Association Journal. “If AI can just tell me which ones to focus on, we can focus on the investigation and litigation of getting people out of prison.”

But the new technology also presents plenty of opportunities for things to go wrong, beyond embarrassing lawyers who try to pass off AI-generated work as their own. One significant issue is confidentiality. What happens when a client provides information to a lawyer’s chatbot instead of to the lawyer? Is that information still protected by attorney-client privilege? What happens if a lawyer enters a client’s personal information into an AI tool that is also training itself on that information? Could the right prompt from an opposing lawyer using the same tool surface that information?

These questions are mostly hypothetical now, and the answers may need to play out in courts as the technology becomes more common. Another ever-present worry with all AI — not just in law — is that bias baked into the data used to train AI will show itself in the text that large language models produce.

While some lawyers are looking to AI to assist their practices, there are also tech entrepreneurs looking to replace attorneys in certain settings. In the best-known example, the legal service DoNotPay briefly flirted with having its AI “robot lawyer” argue a case in a live courtroom (by feeding lines to a human wearing an earbud) before backing out, citing alleged legal threats.

DoNotPay launched in 2015, offering clients legal templates to fight parking tickets and file simple civil suits, and it still mostly offers services in that vein, rather than the showy spectacle of robot lawyers arguing in court. But even the automation of these seemingly humdrum corners of law could have dramatic consequences for the legal system.

Writing in Wired last summer, Keith Porcaro concluded that AI lawyers could end up democratizing law, making legal services available to people who otherwise wouldn’t have access, while simultaneously helping powerful people “use the legal system as a cudgel.”

He notes that if AI makes it easier for debt collectors to seek wage garnishments and file evictions, it could unleash a wave of default judgments against poor people who fail to show up in court. And even if, as a counterbalance, AI becomes a tool to help ordinary people defend themselves from predatory cases, the resulting torrent of legal disputes could grind the current court system to a halt. “Nearly every application of large language models in courts becomes a volume problem that courts aren’t equipped to handle,” Porcaro writes.

Then again, perhaps not. While it’s still far off, the American Bar Association has wondered whether AI, in this brave new legal world, might best serve in the role of judge, delivering an “impartial, ‘quick-and-dirty’ resolution for those who simply need to move on, and move on quickly.”
