Our current system disempowers the executive, the police, and the legislature, and empowers judges and bureaucrats. The police are already expected to enforce orders from judges regardless of how they feel about their morality, unless the press also agrees that an order is immoral. Take that same impulse and apply it to judges: expect them to obey the legislature regardless of how they feel about it.
Nicely done. FWIW, we seem to be nowhere close to the sort of AI your hypothetical requires. But I completely agree: the idea of a purely mechanical "law" is an oxymoron. I rarely write law reviews anymore (I'm a law professor), but I did flirt with an article that linked self-executing contracts to the realist-formalist debates over the nature of contract (Corbin & Williston) and to the refusal to enforce racially restrictive covenants. Bottom line, I think, is that some notion of equity is intrinsic to the notion of contract, even if historically they had different courts. If you'd send me an email, I'd gladly share a short essay on AI and law that you might find of interest. For now, kudos, and I look forward to hearing more from you in due course.
Thought-provoking. I'm a textualist, and I agree that algorithmic, highly predictive judging would indeed be a legal utopia, certainly in the sense of Hayek and Fuller, and most certainly when graded relative to today's messy welter. Honestly, I think most other textualists will follow suit and "get with the program," so to speak, dropping their weak, tenuous objections as they reconcile their position with what is more consistent and coherent, not to mention a way to address major and timely systemic problems. My impression is that we are dealing mostly with emotional discomfort and squeamishness, derived from a combination of an imagination-limiting lack of time and familiarity with the power of the new tech, along with the embarrassment of admitting one's legitimate anxieties: being replaced by machines, pushed out of a high-status job, and needing to hunt for alternative employment in middle or advanced age, with debts to pay off and no qualifications or credentials for anything else. To riff off Sinclair: it is very hard to get a man to imagine, in an intellectually serious and rigorous way, a future in which he is no longer getting his paycheck.
This seems like a flawed argument. It rests on a false assumption: that a textualist is committed to a view on which there is a univocity between the meaning of a law and its application to a specific case. Only if that were so would the argument hold. Isn't this merely affirming the consequent? If textualists hold an untenable hermeneutic, then rejecting the AI hypothetical proves its falsity. If one admits that textualism does distinguish between these two things, and that the role of a judge is to make an abductive inference (i.e., result + rule -> case), then the hypothetical can be summarily dismissed as follows:
1. Abductive inference is material to legal judgment.
2. Freedom/responsibility are constitutive of the possibility of abductive inference, unlike induction and unlike deduction.
3. AI does not have, and will never have, freedom/responsibility.
4. Sure, AI can accurately interpret and apply law, i.e., issue a judgment of a case as it relates to a specific law, but this judgment will always be parasitic on the aggregation of human abductions, since making an abductive inference requires freedom/responsibility. And whatever abduction it makes will be indicative of the sample set given to the AI, not of the proper legal judgment of the case.
5. AI entails slavery to an aggregate of human abductions, not the perspicuous legal judgment that only a human can carry out.
6. Q.E.D. Whether or not textualism is false, this argument should be rejected out of hand. All the textualist must do is claim the obvious: (1) legal judgment involves abduction, (2) AI can't perform abduction, (3) so AI can't interpret law. Any sensible hermeneutic of law (textualism included) will dismiss this hypothetical as fallacious. It proves nothing other than what it already assumed: that textualism is nonsense. (I also posted this as a note: https://substack.com/profile/55162219-stephen-weller/note/c-72895384)
Thanks for the thoughtful article. Do you have recommendations for other people or articles grappling with the Judge.AI concept? It's hard to think through what a brave new world we would (will?) be in. E.g., are laws automatically and near-instantly enforced? If Judge.AI can do everything in discovery flawlessly, it seems all you would need to do is press a button, or maybe the button could press itself, and then get a judgment against someone in as much time as it needs to "think" (plus maybe a bit of time to depose people, until Neuralink is fully adopted). I haven't come across academic articles on how this could all shake out, and the bar-association-type articles I see on AI seem to grasp about 0.01% of the potential consequences.
antonin scalia, ora pro nobis.