The United States is experiencing a growing dispute over the regulation of artificial intelligence (AI), as federal lawmakers and the White House pursue measures to preempt state-level AI legislation. Efforts are reportedly underway to include language in the National Defense Authorization Act (NDAA) that would block states from enacting their own AI laws, while a leaked draft of a White House executive order (EO) also indicated support for overriding state regulatory actions. This federal push has ignited a contentious debate concerning regulatory authority and consumer protection across the nation.
Industry proponents, including technology giants and startups, contend that a "patchwork" of state-specific laws could hinder innovation and put the U.S. at a disadvantage in the global AI race, according to Josh Vlasto, co-founder of the pro-AI PAC Leading the Future. States have already introduced dozens of bills, including California's AI safety bill SB-53 and Texas's Responsible AI Governance Act, which prohibits intentional misuse of AI systems. Pro-AI PACs such as Leading the Future have invested significant capital in campaigns advocating a single national standard.
Conversely, a substantial segment of Congress, along with state attorneys general and cybersecurity experts, opposes broad preemption. Lawmakers voted down a similar moratorium earlier this year, arguing that blocking state regulation without an established federal standard could leave consumers vulnerable to harm and allow technology companies to operate without sufficient oversight. New York Assembly member Alex Bores, who sponsored the RAISE Act requiring safety plans from large AI labs, supports a national policy but notes that states can address emerging risks more swiftly.
As of November 2025, 38 states have enacted over 100 AI-related laws, primarily addressing deepfakes, transparency, and government use of AI. This contrasts with the slower pace of federal legislation, where hundreds of AI bills have been introduced but few have become law. Cybersecurity expert Bruce Schneier and data scientist Nathan E. Sanders assert that the "patchwork" complaint is overstated, pointing to industries already navigating varied state laws and AI companies complying with stricter EU regulations.
House lawmakers, including Rep. Ted Lieu (D-CA), are preparing a comprehensive package of federal AI bills covering consumer protections in areas such as fraud, healthcare, and child safety. Lieu said his goal is to enact legislation during the current term, acknowledging that any bill must be able to pass a Republican-controlled House, Senate, and White House. Meanwhile, the leaked White House EO draft outlined an "AI Litigation Task Force" to challenge state laws and empower agencies to push for national standards, with David Sacks, co-founder of Craft Ventures, cited as sharing lead authority over creating a uniform legal framework. Sacks has publicly favored industry self-regulation to "maximize growth."