Editors’ Note: The New Digest is delighted to present this guest post by Jack Kieffaber and the Joseph Story Society. Mr. Kieffaber received his J.D. from Harvard Law School in 2023.
The question isn’t whether machines are going to replace judges and lawyers—they will. The question is whether that’s a good thing. If you’re a textualist, you have to answer yes. But you won’t—which means you’re not a textualist.
That existential quandary is what lies at the bottom of the rabbit hole that Judge Kevin Newsom of the Eleventh Circuit uncovered this May with his concurrence in Snell v. United Specialty Insurance Company. It’s also the crux of an article I wrote alongside Harvard’s Joseph Story Society, currently forthcoming in the Harvard Journal of Law and Public Policy. To be sure, Judge Newsom’s query in that otherwise innocuous insurance case was pretty modest: Why can’t judges use ChatGPT to find the ordinary meaning of words? But Snell opened a door, and we all know what’s behind it—if the judges can use AI, why can’t the judges just be AI?
They can. Sam Altman knows it, and so does DC legal insider Adam Unikowsky. That’s the boring part—technology evolves, AI will replace us all, ho hum. Here’s what interests me: Even if the robot judiciary were mathematically perfect—such that it could beat John Roberts at judging the way Stockfish beats Magnus Carlsen at chess—I still wouldn’t like it. In short, I’m a Luddite. But I’m also a card-carrying Federalist Society strict textualist. And I don’t think one can logically be both.
That’s because I think that a legal system run by robots—one where all lawyers lose their jobs and all legal questions are decided algorithmically—is actually the inevitable and Platonic end result of the textualist project that Bork and Scalia set us on. To prove it, I built my article around this hypothetical:
1. The year is 2030. AI has advanced to the point where its command of language and legal precedent has far eclipsed that of the median federal jurist.
2. A new democratic republic is created. It uses human legislators to write laws and programs a state-sponsored Large Language Model (LLM) called “Judge.AI”—which operates according to the same principles that drive contemporary LLMs such as ChatGPT—to apply those laws to facts.
3. Judge.AI then authors all legal opinions on the back end based on facts submitted to it when laws are violated.
4. But Judge.AI also works on the front end. A citizen can access Judge.AI and type in the real-life action he would like to take; after the citizen hits submit, Judge.AI will tell him if that action is legal or illegal under the relevant statutes, and why. Call this the ex ante query function.
5. Judge.AI carries the full force of federal law. The answer it gives you on the front end is precisely what it will hold on the back end if you go through with your desired action and are prosecuted.
6. Judge.AI is a perfectly neutral arbiter and interprets words with perfect mathematical accuracy. Therefore, between its ex ante and ex post functions, it offers perfect predictability as to any legal outcome (an invariant sketched just after this list).
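To make the hypothetical’s moving parts concrete, here is a minimal sketch of the invariant everything below turns on. It is not a design for Judge.AI; every name in it is invented, and a toy substring match stands in for the mathematically perfect LLM. It shows only what the hypothetical demands: the ex ante query and the ex post holding are the same deterministic computation, so they cannot diverge.

```python
# A minimal sketch of the hypothetical's core invariant, not a real system.
# All names are invented for illustration.

from dataclasses import dataclass


@dataclass(frozen=True)
class Ruling:
    legal: bool
    reason: str


def adjudicate(statutes: dict[str, str], facts: str) -> Ruling:
    """Stand-in for Judge.AI's interpretive core. In the hypothetical this
    is a mathematically perfect LLM; here it is a toy substring match."""
    for cite, text in sorted(statutes.items()):  # deterministic order
        if any(word in facts.lower() for word in text.lower().split()):
            return Ruling(False, f"Prohibited by {cite}: '{text}'")
    return Ruling(True, "No statute on point; the action is permitted.")


def ex_ante_query(statutes: dict[str, str], proposed_action: str) -> Ruling:
    """Front end: a citizen asks before acting."""
    return adjudicate(statutes, proposed_action)


def ex_post_holding(statutes: dict[str, str], proven_conduct: str) -> Ruling:
    """Back end: the same function decides the prosecuted case."""
    return adjudicate(statutes, proven_conduct)


if __name__ == "__main__":
    code = {"§ 101": "no vehicles in the park"}
    action = "drive my vehicles through the park"
    # Perfect predictability: the advice and the holding cannot diverge,
    # because they are literally the same computation.
    assert ex_ante_query(code, action) == ex_post_holding(code, action)
    print(ex_ante_query(code, action))
```

The substance of adjudicate is beside the point; what matters is that both entry points call it, and that is all “perfect predictability” means here.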
Don’t fight my hypothetical; suspend all technical qualms and “yeah buts” as to whether the machinery would work. Assume it would. Here’s the question: Is that a utopia or a dystopia? So framed, I think that Judge.AI is the ultimate textualist litmus test. And that’s because, if you answer dystopia… you’re not a textualist.
I mean it. Judge.AI—particularly through its ex ante query function—offers perfect predictability. Indeed, you’re an idiot if you ever end up in court. About to take X action? Run X action past Judge.AI before you leave the house just to be safe. Notice what just happened: At the same moment it became your judge, the statute also became your lawyer. The words themselves are governing directly; how could a textualist possibly object?
Well, I tend to hear two lines of objection—neither of which is intellectually honest. The first amounts to “Judge.AI isn’t constitutional”: it obliterates the separation of legislative and judicial powers, it permits advisory opinions, it might even do away with the common law. But to the extent our nation’s founders baked these bedrock concepts into our system of governance, they did so in large part to counteract human errors like bias, activism, and sheer mistake—all of which an AI judiciary would eliminate. To that end, notice that my hypothetical never mentioned the American Constitution. That’s because Judge.AI would present a wholly new form of government, one that’s never been contemplated at length. Its efficacy isn’t a question for an American legal scholar but for a political theorist; adopting it would amount to a de facto second founding.
The second line of argument is that Judge.AI leads to any number of injustices or inconveniences. And my rebuttal here courts controversy: That should not concern a textualist. My article surveys the textualist literature and concludes that the textualist’s sole concern is what the text says—whatever it says. Results are the electorate’s problem. Indeed, Justice Scalia put it tellingly:
“Where the positive law places a judge in the position of being the instrument of evil, the judge must recuse from the case or (if there are many such cases) resign from the bench. Thus, if I were a judge in Nazi Germany, charged with sending Jews and Poles to their death, I would be obliged to resign my office….”
Even in the face of the Holocaust, Justice Scalia would not concern himself with results until he had formally resigned his textualist obligations. That says a lot about textualism’s lodestar. It’s not morality or even functionality; it’s not anything a validly promulgated law could abridge. It’s merely predictability. It captures enlightened man’s conclusion, after millennia of various despotisms, that it is impossible to know the good for certain—so we would rather be certain than good.
If predictability is the textualist endgame, then textualists should be clamoring for the day when all law is statutory, all statutes are algorithms, and all algorithms are executed by mathematically perfect machines. The problem is that I don’t see any textualists vocally coveting that world. And that tees up the cynical question that several in the academy have asked: Are the “textualists” really textualists? Or are they, at best, textualists until some moral or practical inconvenience arises?
In sum, AI’s encroachment on the legal profession was always inevitable; that’s not the real rabbit hole in Snell. Rather, Snell is salient because it presents an inescapable textualist litmus test. If you claim to be a textualist, as I do, you should think long and hard about whether you pass that test. Because I’ll be frank: I don’t and I bet you don’t either. Seeing as it’s fashionable to claim “we are all textualists now,” the profession should begin to grapple with this paradox; we might find we’ve all been moralists the whole time.