When AI Writes the Rules: Administrative Law's Next Test
Government agencies increasingly use generative AI to draft regulatory text, guidance documents, and policy memos. This trend raises urgent questions about transparency, legal responsibility, and the integrity of the administrative record: can machine-assisted deliberation satisfy the APA's notice-and-comment requirements and the demand for reasoned decisionmaking? This article traces the legal background and recent policy moves, then assesses the practical implications for democratic accountability and rule-of-law safeguards in governance.
Background: Administrative Rulemaking Meets Automation
Administrative rulemaking in the United States has long been governed by the Administrative Procedure Act of 1946, which established notice-and-comment requirements and, through its arbitrary-and-capricious standard, grounded the courts' demand for reasoned decisionmaking by agencies. Historically, drafting and deliberation occurred within agency counsel offices and program teams and were documented in memoranda. Over the past decade, advances in natural language processing and the commercial availability of generative models have introduced a new toolset: models that can draft, summarize, or propose regulatory text at scale. This development shifts not only the mechanics of drafting but also the architecture of the administrative record that courts scrutinize when reviewing agency action.
Doctrinal Tensions: Notice, Record, and Reasoned Decisions
Administrative law doctrines that courts use to review agency action—reasoned decisionmaking, the substantive adequacy of explanations, and the integrity of the administrative record—are tested when AI participates in drafting. The APA requires agencies to articulate the basis and purpose of rules in a way that enables public scrutiny and judicial review. When an AI system contributes to the reasoning or the text of a proposed rule, key questions arise: What must agencies disclose in the notice of proposed rulemaking? How should the underlying inputs, prompts, and model outputs be preserved in the administrative record? Courts have emphasized that agencies cannot rest their actions on critical grounds kept outside the record; algorithmic contributions that materially inform policy choices will likely have to be included in the record for judicial review to be satisfied.
Legal Responsibility and Authorship
If a model generates a regulatory provision or a crucial argument, who is the author in legal terms? Agencies remain legally responsible for their actions and cannot outsource decisionmaking to private entities in a manner that evades accountability. Reliance on proprietary models developed by private contractors raises further complications: contractual confidentiality, trade secrets, and intellectual property claims can conflict with transparency obligations. Delegation concerns also surface when agencies effectively cede substantive policymaking to third-party systems without meaningful human oversight. Existing doctrines—such as the bar on abdicating discretionary authority—supply conceptual guardrails, but implementing these principles in the context of probabilistic, opaque models will require doctrinal adaptation.
Recent Policy Developments and Comparative Regulation
Regulatory attention to government use of AI accelerated globally in the early 2020s. In the United States, Executive Order 14110 of October 2023 directed agencies to inventory AI use, assess risks, and adopt governance measures to ensure safety and accountability, and the Office of Management and Budget followed with government-wide guidance emphasizing documentation, transparency, and risk assessment for agency AI deployments. Internationally, the European Union’s AI Act treats many public-sector uses of AI as high risk, imposing obligations on public authorities to ensure systems meet transparency, accuracy, and oversight standards. These policy moves signal a growing consensus that public-sector AI requires rules distinct from those governing private-sector applications, especially where governance processes and legal obligations are implicated.
Litigation Risks and Practical Challenges
Agencies that use AI in rulemaking face litigation risks on multiple fronts. Plaintiffs may challenge an agency’s failure to disclose AI-generated reasoning as arbitrary and capricious, or allege that reliance on an opaque model deprived stakeholders of meaningful notice. Records management poses a practical challenge: preserving prompts, model versions, training-data provenance, and output logs may be necessary for judicial review but can conflict with procurement confidentiality. Agencies must also grapple with bias and fairness concerns beyond the familiar privacy context: where AI outputs systematically favor certain regulated parties or policy outcomes, the question will be whether the agency adequately tested, audited, and explained those results. Courts will likely require a demonstrable human role in final decisionmaking, but the contours of “meaningful human oversight” remain to be defined.
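To make the recordkeeping challenge concrete, the sketch below illustrates one way an agency might capture AI drafting artifacts for the administrative record. It is a hypothetical illustration, not a prescribed standard: the DraftingRecord class, its field names, and the hashing approach are assumptions introduced here for clarity.

```python
# Hypothetical sketch of an administrative-record entry for AI-assisted drafting.
# All class and field names are illustrative assumptions, not an established schema.
import hashlib
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class DraftingRecord:
    rulemaking_docket: str   # docket number of the proposed rule (placeholder below)
    model_identifier: str    # vendor model name and version actually used
    prompt_text: str         # the prompt submitted to the model
    model_output: str        # the raw output before any human editing
    human_reviewer: str      # official responsible for reviewing the output
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def integrity_hash(self) -> str:
        """Hash the prompt and output so later alteration is detectable."""
        payload = (self.prompt_text + self.model_output).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

    def to_record_entry(self) -> str:
        """Serialize the entry, plus its hash, for inclusion in the record."""
        entry = asdict(self)
        entry["sha256"] = self.integrity_hash()
        return json.dumps(entry, indent=2)


if __name__ == "__main__":
    # Example usage: one entry documenting a model-drafted definition section.
    record = DraftingRecord(
        rulemaking_docket="AGENCY-HQ-XXXX-XXXX (hypothetical)",
        model_identifier="vendor-model-v2 (hypothetical)",
        prompt_text="Draft a definition of 'covered facility' for the proposed rule.",
        model_output="'Covered facility' means any stationary source that ...",
        human_reviewer="Office of General Counsel, attorney of record",
    )
    print(record.to_record_entry())
```

The hash ties each prompt to its output so the agency could later show that the preserved material was not altered after the decision; whether such a log would satisfy a reviewing court is precisely the open question discussed above.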
Policy Design: Toward a Coherent Governance Framework
Designing a governance framework that reconciles technological utility with legal norms involves several elements. First, agencies should adopt clear internal policies requiring disclosure when AI substantially contributes to regulatory text or reasoning, and preserve the relevant artifacts in the administrative record. Second, standardized algorithmic impact assessments tailored to rulemaking can help identify legal, economic, and equity risks before a proposal is issued. Third, procurement contracts should balance legitimate confidentiality interests with the public’s right to assess government action; mechanisms such as redactions, government-held model escrow, or independent audit panels can help bridge these competing interests. Fourth, Congress and state legislatures can consider targeted statutes clarifying disclosure obligations and establishing minimum audit and documentation standards for public-sector AI. Finally, a tiered approach that treats AI use in regulatory design as higher risk than routine drafting—requiring stricter transparency and human review—aligns regulatory burden with democratic stakes.
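As a thought experiment, the sketch below expresses the tiered approach described above as a simple policy table in which each tier of AI use maps to minimum transparency and review obligations. The tier names and the obligations listed are assumptions introduced for illustration, not drawn from any existing statute or guidance.

```python
# Hypothetical sketch of a tiered governance policy for AI use in rulemaking.
# Tier names and obligations are illustrative assumptions, not existing guidance.
from enum import Enum


class UseTier(Enum):
    ROUTINE_DRAFTING = "routine drafting"      # formatting, summarizing comments
    SUBSTANTIVE_INPUT = "substantive input"    # proposing regulatory text or analysis
    REGULATORY_DESIGN = "regulatory design"    # shaping policy choices or alternatives


# Minimum obligations attached to each tier, ordered from lightest to strictest.
OBLIGATIONS = {
    UseTier.ROUTINE_DRAFTING: [
        "log model identifier and date of use",
    ],
    UseTier.SUBSTANTIVE_INPUT: [
        "log model identifier and date of use",
        "preserve prompts and outputs in the administrative record",
        "disclose the AI contribution in the notice of proposed rulemaking",
    ],
    UseTier.REGULATORY_DESIGN: [
        "log model identifier and date of use",
        "preserve prompts and outputs in the administrative record",
        "disclose the AI contribution in the notice of proposed rulemaking",
        "complete an algorithmic impact assessment before publication",
        "document a named human official's review and sign-off",
    ],
}


def required_obligations(tier: UseTier) -> list[str]:
    """Return the minimum documentation and review steps for a given tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for step in required_obligations(UseTier.REGULATORY_DESIGN):
        print("-", step)
```

The point of the table is the escalation itself: obligations accumulate as AI use moves from routine drafting toward regulatory design, matching regulatory burden to democratic stakes.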
Implications for Democratic Accountability
The integration of AI into rulemaking is not a purely technical problem; it implicates democratic legitimacy. Rulemaking is a public process through which agencies translate statutory mandates into concrete obligations and rights. When the mechanics of that translation become opaque, public participation and informed civic contestation are undermined. Conversely, thoughtful governance of AI-assisted drafting can increase administrative capacity, improve clarity of proposed rules, and enable faster iteration when accompanied by robust disclosure and oversight. The critical policy challenge is to capture the efficiency gains of automation without eroding the procedural protections that enable citizens, regulated parties, and courts to hold government to account.
Rules for Machines That Help Make Rules
AI offers agencies powerful drafting and analytic tools, but its adoption in regulatory design must be governed by principles rooted in longstanding administrative law: transparency, reasoned explanation, and accountability. Recent executive and international moves show momentum toward tailored governance, but doctrinal and statutory refinement is required to resolve authorship, recordkeeping, and oversight puzzles. Policymakers should prioritize clear disclosure obligations, preservation of the administrative record, and enforceable human oversight standards so that regulation shaped with AI remains subject to democratic scrutiny and legal review.