4 Comments
Adrian Vermeule

Which never quite made sense, did it, precisely because the executive (ordinarily) has to accept or reject the incoherent mishmash as a whole. The unity of reason would be better ensured, on that logic, by having the draft proceed from a single mind, or from a conseil d'Etat responding to the direction of a single mind. Cf. the Code Napoléon, arguably the most coherent and principled legal document ever enacted at one time.

Kevin Hawickhorst

The practice of the early Congress was that, in place of standing committees, the general outline of a bill was agreed on the floor and then handed over to an ad hoc committee of the bill's supporters to draft.

It was much closer to this model. I'm uncertain that legislation was strikingly more rational then, however.

Kevin Hawickhorst

Fascinating question, although I'm not quite convinced.

First, AI drafting would probably be an improvement, as it would be a more direct reflection of a legislator's reason than the status quo. As it stands, laws are sometimes written through a somewhat random adaptation of old legislative language, rounds of revision by more or less informed commentators, and so on. AI drafting would be more direct.

Second, this is a general problem with legislation and, under classical legal thought, is a reason for royal/executive signature to bills. Hegel explains this point clearly (or at least as clearly as Hegel explains anything). A law emerges from a legislature as a mishmash of individual visions. The executive then reasons about whether to accept the law as a whole or reject it, and his signature therefore adds the unity of reason that the legislation previously lacked.

Vikram V.

Thank you for your post, Professor Christiansen. Taking your well-explained definition of what "law" is as given, I do not think your overall conclusion about AI-generated law ("it is not an ordinance of reason") necessarily follows.

Firstly, the decision to produce a law in a given area with AI does reflect a human's judgment. The human author decides to prompt the AI, to make the prompt be about lawmaking, and to make the prompt refer to a specific area. While the resultant AI output is much less determinate than directly translating the human intent into words, that seems like a question of degree, not kind. A human could also generate indeterminate law by using vague, open-ended, or just poorly drafted language in a law. While there may be *some* level of vagueness that makes a written text "not a law" (like a prohibition on being "annoying"), it seems like some AI-drafted laws carry sufficient intentionality from the human prompter to clear this bar. For example, if I prompted ChatGPT to "Generate an amended version of 47 U.S.C. Section 230(c)(1) ratifying Judge Paul Matey's dissent in the TikTok case he recently ruled on," it seems like I, as a human, would be making a judgment (in agreement with Judge Matey) about what ordinance would promote the common good. In contrast, I agree that merely asking an AI to "write a good, totally unique, law to promote the common good" would probably not produce a law under this theory.

Secondly, the AI generation tools currently employed today are all text prediction algorithms. They have ingested a large corpus of human-produced writing and use it to predict what a (selectively coded for performance) human would produce. Given that AI is totally trained on human output, it seems odd to say that there is *no* human intentionality going into an AI-generated law...

Thirdly, AI can produce text that is obviously declaratory of natural law, without need for further human thought. For example, if I asked an AI program to give me "good laws," and it recited the Ten Commandments verbatim (replacing the "me" in the First Commandment, of course), what the AI produced would be entitled to the utmost respect. If a hypothetical society with no knowledge of the Bible constructed an AI that produced the Ten Commandments and then followed those commandments, would they be committing some grievous error about the nature of law? I think not. (Maybe this hypothetical is fanciful. Getting a current AI to generate the commandments without direct prompting is extremely difficult.)

Fourthly, I am not sure about your invocation of imago dei here. I have it on good authority that at least one Pope would have been willing to baptize non-human Martians. Assuming he was not misled by his wicked counselors, why would this logic not extend to an AI that really could reason for itself? There does not seem to be a well-motivated reason to distinguish biological life from mechanical life, if independent intelligence/reason exists in both. Current AI systems fall far short of this, but we are seeing progress by the day...

Lastly, I would like to express my disappointment about the way this post treats Congress. I am not aware of any enacted bill where a provision was drafted by AI without further human review. However, I am aware of a major Executive Branch tariff policy decision that was made on the strength of AI-generated charts.

This is a monarchist publication, and I have no problem with you forcefully raising the many criticisms of Democracy that have merit. You can even use scare quotes around "liberty" and "separation of powers"! But inventing hypothetical scenarios about evil legislatures while ignoring the *same* actually-existing problem in the executive is highly misleading. If the Truth is on your side, why do that?
