
AI’s Wrestling Match with the Law

Every day we humans wrestle with three questions: what do I want to do, what should I do, and what must I do? This constant questioning is what it means to have agency in a world where we desire many things, instinctively feel some actions are right or wrong, and frequently encounter constraints on our behavior we call “the law”.

Amidst this welter of human agency, a new participant has arrived – AI. AI is expanding our choices for desire, and confusing (and sometimes assisting) our sense of right and wrong. Here we consider the impact of AI on the “must do” question of the law.

Ubiquitous and increasingly powerful AI could radically transform our wrestling match with the law. This relates vitally to the work of AI and Faith because reliance on reasonable application of the rule of law is a cornerstone of human flourishing. Events throughout modern history have dramatically illustrated the importance of the rule of law, and we feel its weight in this present moment.

Like earlier tools, AI can be deployed by humans for good or malevolent purposes. What’s different now is that AI is developing agentic capability of its own. It operates with feedback loops that seem like “learning”. We often cannot explain or understand this process, but largely accept the outcome because the results are so tantalizing: solving the great issues of science, nature, society and commerce, increasing productivity, doing work we would rather not do, and opening other exciting, hitherto impossible opportunities. Even if humans remain in the loop for the foreseeable future, we are increasingly ceding control to AI tools.

This is as true for how the law governs us as for other parts of life. AI is transforming the law in at least three ways: the processes by which law is applied, the structure of the legal profession, and our understanding of the law.

How dramatically AI is changing the process of practicing law is evident in the growth of legal technology conferences and expos. The granddaddy of these is the annual TechShow of the American Bar Association, now in its 42nd year, with thousands of attendees, over 100 exhibitors, and dozens of sponsors. Each year advance registrants vote on the most promising legal tech start-ups. This year’s fifteen fell into such categories as AI legal writing and research agents, visualization tools to reduce a lawsuit into juror-digestible (fun-size!) narrative nuggets, and virtual client communication tools.

Tradeoffs abound for these use cases, as in other service sectors. For what is gained, something is often lost.

For example, crafting a well-researched, accurate legal document is time consuming and made more difficult by the short attention spans of the anticipated recipients. Legal documents need to clearly reflect agreements (like contracts, wills, property arrangements), or accurately explain and argue the applicable law to clients and judges. Moving from paper to digitized documents over the past 30 years already radically changed how lawyers write and analyze the law and available evidence. But these changes pale against the opportunity LLMs now pose for more precisely researching the law, speeding up tedious initial drafting of contracts or briefs, and sorting and analyzing documents for “smoking guns” or patterns of behavior buried deep within gargantuan piles of evidentiary data.

There are some big gains that can be achieved, such as reducing client costs, potentially better marshalling of facts and law, and better systematizing law firm knowledge. What may be lost is similar to those native map skills we lose to Google Maps. Lawyers’ ultimate stock in trade is experience gained by working with particular aspects of the law and the resulting advice and arguments they can offer to clients and judges. Will the granular, thought-intensive skill of assembling facts and law into a persuasive case or agreement suffer when the majority of arguments, risks, and factual connections are made by an AI agent?

Judges similarly gain wisdom over time by making decisions based on written briefs, oral arguments, expert testimonies, and facts filtered by rules of evidence. But increasingly they also are expected to use software tools developed by private companies to predict risks of criminal reoffending for bail, sentencing, parole and other future-oriented judgments. In civil cases judges must evaluate evidence around complex models purporting to assess what actually happened in the facts underlying the suit, the proper amount of damages, or other key factual questions. The gain is said to be more objective, better informed decision making. But because the creators of these AI tools normally object to model disclosure on intellectual property grounds, the reliability and fairness of these tools have been extensively contested. Meanwhile, judges may come to rely less on their own judgment and feel outcomes are more “defensible” even if they do not fully understand the basis for their own rulings.

AI is also changing the structure of the law, mainly by reducing the number of humans in the loop. In litigation, for example, AI-powered case development platforms and better legal research and drafting tools streamline the work of legal assistants and younger attorneys, again lowering labor costs. One obvious tradeoff is fewer job openings for young attorneys and legal assistants and narrower pathways into legal employment. A less obvious tradeoff is the loss of the fresh perspectives and cultural awareness that new, younger employees bring to the field.

Systematizing legal advice and practice through data analysis, workflows, and standardization, as well as the use of intermediary bots, can democratize legal tools and expand access to cheaper substitutes for direct attorney advice. Greatly expanding access to legal services like wills and property transfers should count as a gain, so long as the tools are well-crafted and the bots stay on track. But as legal services move outside a professional context, who will regulate them for quality and be held responsible for model “malpractice”? Ending the professional monopoly is good, but accountability will remain vital.

Finally, AI will change our actual understanding of the law and how it works.

Take our common law system by which judges base their legal rulings on the huge body of prior decisions. Such “precedents” act as a brake on one-off, inconsistent, highly subjective judicial decisions. AI-powered research will likely make accessing and analyzing that entire body of common law precedents easier. This leads to the possibility of abstracting out of that body what “the weight of the law” actually is. Instead of relying on a couple of cases that are especially apt precedents, AI pattern analysis may allow a litigant to effectively “boil the ocean” of precedents and tell a trial judge how the actual weight of thousands of precedents favors the client’s position. It has never before been possible to see and argue the common law this way.

As with sentencing and parole tools, it is not really possible for the decision maker to test the analysis – it’s impossible for the judge to read all the cases or assess the claims. Moreover, this large-scale pattern-finding capability may also erode skilled legal analysis: the careful parsing and comparison of cases that lawyers have long been taught as the way to best understand the “wisdom” of past decisions.

AI’s powerful predictive quality may also change how we use jurors and judges to decide the facts of a case. AI’s power to analyze probabilities and model outcomes from a vastly greater base than any human, no matter how experienced and insightful, seems likely to sideline trial lawyers’ advice based on the actual evidence and their years of trial experience. All the research undertaken in the course of developing AI tools, showing the fallibility of human memory and our tendency to shortcut rational thinking, makes us even less likely to trust the outcome of a lawsuit to eyewitnesses and jury deliberation. AI’s ability to generate highly convincing deepfakes, coupled with social media’s corrosion of our ability to discern truth from falsity, undercuts our trust in human ability to decide facts and causation.

Thus, many parties to a legal dispute may defer to Monte Carlo simulations that weigh all possible variables into a probability-weighted outcome, rather than trust fallible human memory and mental shortcuts. As a result, our thousand-year-old system for evaluating on a peer-to-peer basis what is true and what actually happened may give way to the best answer of a machine.
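To make the Monte Carlo idea concrete, here is a minimal sketch of how such a weighted-outcome valuation works. Everything in it is hypothetical – the function name, the win probability, and the damages range are illustrative assumptions, not the workings of any actual litigation-analytics product:

```python
import random

def simulate_claim_value(win_probability, damages_low, damages_high,
                         trials=100_000, seed=42):
    """Monte Carlo estimate of a claim's expected value (hypothetical model).

    Each trial flips a weighted coin for liability, then draws a damages
    award uniformly from the plausible range. The mean over all trials
    approximates the probability-weighted outcome of the dispute.
    """
    rng = random.Random(seed)  # fixed seed so the estimate is reproducible
    total = 0.0
    for _ in range(trials):
        if rng.random() < win_probability:  # was the defendant found liable?
            total += rng.uniform(damages_low, damages_high)
    return total / trials

# Assumed inputs: a 60% chance of prevailing, with damages between
# $100,000 and $500,000 if liability is found.
expected = simulate_claim_value(0.6, 100_000, 500_000)
```

Run over enough trials, the estimate converges toward the analytic expectation (here, 0.6 × $300,000 = $180,000) – which is precisely the kind of single weighted number a party might weigh against the uncertainty of a jury.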

Time will tell who will win this wrestling match. The best outcome of all may be a draw, applying the law with both time-tested human and powerful new AI abilities alike.


Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.


David Brenner

Currently serves as the board chair of AI and Faith. For 35 years he practiced law in Seattle and Washington DC, primarily counseling clients and litigating claims related to technology, risk management and insurance. He is a graduate of Stanford University and UC Berkeley’s Law School.
