Trump vows to block state AI regulations, calling them a threat to innovation
President Donald Trump just announced that he plans to issue an executive order this week to set federal rules around artificial intelligence—and prevent states from setting their own.
“I will be doing a ONE RULE Executive Order this week. You can’t expect a company to get 50 Approvals every time they want to do something,” Trump wrote in a Truth Social post on Monday. “We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS.”
The executive order is just the latest dramatic act of deregulation from Trump, who, since taking office, has slashed regulations on everything from banking to environmental protection. Under Trump’s plan, the federal government’s framework on AI would override any rules that individual states might put in place to shape the technology’s use or development.
Trump’s AI executive order isn’t out yet, but a draft version that circulated last month proposed an aggressive framework that would go as far as creating a federal legal task force designed to punish states with AI regulations. Under the order, which would likely attract its own legal challenges, states with AI laws could be denied federal funds.
The White House’s push to preempt state AI regulations would be a huge windfall for AI companies and investors who have lobbied against state protections. In a hearing on Capitol Hill in May, OpenAI CEO Sam Altman stressed that any rules slowing AI down in the U.S. would allow China to speed ahead.
The proposed executive order is the Trump administration’s latest effort to end-run state AI laws, but it isn’t the first. This summer, Congress rejected a moratorium on state AI laws slipped into Trump’s One Big Beautiful Bill Act. Similar language that appeared in the year-end defense budget also looks unlikely to make it through, Politico reports, because Republicans remain divided on the issue.
States step in on AI
Florida Gov. Ron DeSantis slammed the idea of limiting states’ ability to regulate AI as “federal overreach” in a post on X last month, a position he shares with many other red state governors.
“Stripping states of jurisdiction to regulate AI is a subsidy to Big Tech and will prevent states from protecting against online censorship of political speech, predatory applications that target children, violations of intellectual property rights, and data center intrusions on power/water resources,” DeSantis wrote.
AI technology has exploded over the last few years with little to stand in its way. It is the latest example of how the tech world’s breakneck speed easily outstrips the U.S. government’s ability to craft meaningful regulations. Congress in particular is slow, often gridlocked, and ineffective at regulating new industries, which leaves states to move quickly to put their own protections in place.
A scenario in which states actually place the most stringent limits on AI wouldn’t be unprecedented. In the absence of federal protections, an Illinois law known as the Biometric Information Privacy Act (BIPA) shields state residents from companies that would use their facial recognition data without permission. While BIPA only applies to Illinois residents, the law has proven strong enough to trip up Meta, which paid out $650 million to settle a related lawsuit before backing away from the technology altogether.
For AI companies like OpenAI, navigating a vast patchwork of varying state laws is anathema to the pace of progress—and to their skyrocketing valuations. But states are increasingly wary of the technology: In 2025, all 50 states introduced legislation on AI, and 38 states put new rules in place. In Oregon, a new state law prevents AI agents from using medical titles when dispensing advice. In Arkansas, an amendment to an existing law now restricts how AI can imitate someone’s voice or appearance.
In November, dozens of state attorneys general sent a letter to lawmakers urging Congress to reject any limits on states’ abilities to regulate AI. “New applications for AI are regularly being found for healthcare, hiring, housing markets, customer service, law enforcement and public safety, transportation, banking, education, and social media,” they wrote. “Federal inaction paired with a rushed, broad federal preemption of state regulations risks disastrous consequences for our communities.”