More than 100 organizations are raising alarms about a provision in the House’s sweeping tax and spending cuts package that would hamstring the regulation of artificial intelligence systems.
Tucked into President Donald Trump’s “one big, beautiful” agenda bill is a rule that, if passed, would prohibit states from enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for 10 years.
With AI rapidly advancing and extending into more areas of life — such as personal communications, health care, hiring and policing — blocking states from enforcing even their own laws related to the technology could harm users and society, the organizations said. They laid out their concerns in a letter sent Monday to members of Congress, including House Speaker Mike Johnson and House Democratic Leader Hakeem Jeffries.
“This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm — regardless of how intentional or egregious the misconduct or how devastating the consequences — the company making or using that bad tech would be unaccountable to lawmakers and the public,” the letter, provided exclusively to CNN ahead of its release, states.
The bill cleared a key hurdle when the House Budget Committee voted to advance it on Sunday night, but it still must undergo a series of votes in the House before it can move to the Senate for consideration.
The 141 signatories on the letter include academic institutions such as the University of Essex and Georgetown Law’s Center on Privacy and Technology, and advocacy groups such as the Southern Poverty Law Center and the Economic Policy Institute. Employee coalitions such as Amazon Employees for Climate Justice and the Alphabet Workers Union, the labor group representing workers at Google’s parent company, also signed the letter, underscoring how widespread concerns about the future of AI development have become.
“The AI preemption provision is a dangerous giveaway to Big Tech CEOs who have bet everything on a society where unfinished, unaccountable AI is prematurely forced into every aspect of our lives,” said Emily Peterson-Cassin, corporate power director at non-profit Demand Progress, which drafted the letter.
The idea that some applications of AI should be regulated has been a rare point of bipartisan agreement on Capitol Hill. On Monday, Trump is set to sign into law the Take It Down Act, which makes it illegal to share non-consensual, AI-generated explicit images and which passed both the House and Senate with support from both sides of the aisle.
The budget bill provision would run counter to the calls from some tech leaders for more regulation of AI.
OpenAI CEO Sam Altman testified to a Senate subcommittee in 2023 that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” More recently on Capitol Hill, Altman said he agreed that a risk-based approach to regulating AI “makes a lot of sense,” although he urged federal lawmakers to create clear guidelines to help tech companies navigate a patchwork of state regulations.
“We need to make sure that companies like OpenAI and others have legal clarity on how we’re going to operate. Of course, there will be rules. Of course, there need to be some guardrails,” he said. But, he added, “we need to be able to understand how we’re going to offer services, and where the rules of the road are going to be.”