The States Leading the Way on Regulating AI

Artificial intelligence is rushing into our lives at a breakneck pace. And the AI mavens know it: In the merging language of Silicon Valley and Capitol Hill, they like to describe our present state of affairs as a “race.” There are reports that the technology can now execute large-scale cyberattacks, that it can engineer bioweapons. AI chatbots are melting our brains while AI video generators churn out deepfake revenge porn. The GOP’s support for the industry, guided by a friendly President Donald Trump and the considerable influence of the Big Tech money lining the coffers of lawmakers—including, it must be said, some Democrats loath to confront the administration on the matter—has ensured federal regulation is off the table. The first item listed in Trump’s AI Action Plan is a promise to the AI industry that it will be “unencumbered by bureaucratic red tape.”

In response, state legislatures are passing their own regulations, red states included: Texas and West Virginia both passed AI laws in 2025. But the biggest story this year was right here in the industry’s California backyard, where Governor Gavin Newsom signed an AI transparency bill that capped a years-long lobbying battle, immediately becoming a top target in Washington.

California’s new AI law is aimed at preventing “catastrophic risk,” which it defines as “serious injury to more than 50 people” or damages of $1 billion. Colloquially discussed as S.B. 53 but formally titled the Transparency in Frontier Artificial Intelligence Act, the law has already been replicated in New York where, as in California, they’re concerned with an AI model going—let’s say … “semi-sentient”—and “evading the control of its frontier developer or user.” Similar legislation is winding its way through statehouses in Michigan, Massachusetts, and Illinois.

This encroaching regulation has congressional Republicans (and the AI lobby) hell-bent on instituting a federal override on state regulations. In a conversation with The New Republic, state Senator Scott Wiener, the California lawmaker behind Senate Bill 53, called the AI preemption push “a Night of the Living Dead … it keeps coming back.”

Republicans tried to put the preemption in the reconciliation bill in June. They failed. Spectacularly, actually: shot down by a vote of 99–1 in the Senate. Then they tried to slip a 10-year preemption into the NDAA, but that too fell apart, leaving an undaunted House Majority Leader Steve Scalise promising to “look for other places” for the language.

This legislative scramble takes place in the foreground of the death of Adam Raine—a 16-year-old who killed himself after a monthslong ChatGPT conversation so disturbing that I’d rather not recount it here. His family is now suing OpenAI. The company says that Raine “misused” its chatbot, and that ChatGPT isn’t designed to do things like congratulate teenagers for attempting suicide. Its legal argument seems to be that ChatGPT evaded the control of its developer.

Stymied on the legislative side, Trump recently drafted an executive order that would cut funding to states with laws like S.B. 53, arguing that regulation hampers progress and that we’re in a “race with our adversaries.” The executive order refers to S.B. 53 as a “burdensome disclosure and reporting law.” Donald Trump obviously didn’t write the order; he probably knows as much about AI as I do about particle accelerators. The more likely author is White House AI czar David Sacks, who has pushed to loosen export controls, thereby allowing America’s advanced AI chips to be sold to China.

Now, it can’t be true both that (a) we should sell our advanced AI chips to China and that (b) we shouldn’t regulate the industry because we need to beat China in the AI race. The only logic capable of supporting both arguments at once is a common motivation: fattening the already bloated bank accounts of tech billionaires like Sacks, Marc Andreessen, and Sam Altman.

Andreessen and Altman both opposed California’s efforts to regulate AI. Altman’s company, OpenAI, even subpoenaed policy wonk Nathan Calvin, requesting “all documents concerning SB 53 or its potential impact on OpenAI.” Calvin is a lawyer at the AI think tank Encode, where he worked on S.B. 53. He told The New Republic that the bill represents “the first time that we’ve seen any jurisdiction in the United States say very clearly, ‘We think that catastrophic risk from the most advanced AI models is worth taking seriously, and we should take affirmative steps to have companies guard against that and have government prepare for that.’”

Now that the law is on the books, OpenAI has asked Governor Newsom to deem it compliant with state requirements because it signed an AI code of conduct in the European Union. Adherence to the EU code is voluntary.

Wiener spent years working on the law that became S.B. 53, first passing a bill known as S.B. 1047, which would have instituted more guardrails on AI companies, requiring third-party audits and a kill switch on AI models. OpenAI lobbied against S.B. 1047. Google lobbied against it. Meta lobbied against it. Andreessen Horowitz lobbied against it. Eight members of the California congressional delegation—Lofgren, Eshoo, Khanna, Cárdenas, Correa, Barragán, Bera, and Peters—sent a letter to Gavin Newsom asking him to veto 1047, complaining that the bill was “skewed toward addressing extreme misuse scenarios and hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts and workplace displacement.”

Wiener says the congressional letter was “very odd,” and that it’s the only time in his statehouse tenure that members of Congress have lobbied against one of his bills. He calls their argument a “bad-faith whataboutism.”

Bad faith or not, it worked. Newsom vetoed the law and formed a working group to produce a report that eventually informed S.B. 53. The working group included three people, among them Stanford professor Fei-Fei Li, who herself leads an AI start-up worth $1 billion and backed by, surprise, Andreessen Horowitz.

It’s obviously highly unusual for members of Congress to weigh in on state legislation. You’d have a difficult time figuring out exactly how much money the tech industry has pumped into those campaign coffers. I spent hours trying. But beyond the obvious red flags, there are two problems with the congressional letter on S.B. 1047: First, we probably should be concerned about the “extreme misuse scenarios” of AI—that’s precisely what OpenAI is calling the death of Adam Raine, after all. Secondly, California already has laws prohibiting deceptive AI in elections, nonconsensual deepfakes, and employment discrimination.

All of those laws, of course, would become useless if a federal AI preemption were to be put in place. If the preemption comes through Congress—as Scalise is promising—Wiener says that it “will be litigated,” adding, “Congress’s preemption power is typically tied to enactment of a comprehensive regulatory scheme. And then you can have a fight about what that scheme should be, but the idea that Congress would just ban states from doing this without replacing it with anything else, that’s a legal question.” He seemed even less convinced by Trump’s threats of an executive order, saying, “Trump thinks he’s a king, but he’s not. The president can’t nullify state law by executive order, and it’s a fever dream to suggest otherwise.”

These laws are overwhelmingly popular. They do things like protect the elderly from AI cyberscams. That’s why the Senate vote killing the AI preemption was a staggering 99–1. And these laws represent the most likely avenues for future regulation, if only at the state level.

Nicholas Farnsworth, a lawyer at Orrick who specializes in state-level AI laws, told The New Republic that “we’ll see more and more states adopting the transparency regulations” like the companion chatbot law recently passed in California that requires a disclosure that users are interacting with AI. Farnsworth added that high-risk AI laws like the new one in Colorado “will likely go across the United States.” These deal with high-risk decisions: those in the fields of health care, education, and employment. You can’t, for example, use an AI as the ultimate decision-maker in denying somebody a loan or firing them. Or if you do, you have to give them a chance to appeal for a review with a real live human being.

The Colorado law was also targeted by Trump’s executive order. He complained that the law could “force AI models to embed DEI in their programming.” And while the DEI criticism sounds like vintage Trump, Sacks is also more than fluent in Republican grievance language. He complains constantly about “woke AI,” even fretting that such a thing would be “Orwellian.”

The language of the AI industry is similarly melodramatic, which should come as little surprise since AI is the industry’s latest idée fixe—where every new line of code seems to be about the technology’s self-evident beneficence (and only incidentally about making money). The AI race metaphor is particularly suspicious. Dr. Julia Powles, director of the Institute for Technology, Policy and Law at UCLA, called it “an industry narrative.” Nathan Calvin concurred, telling The New Republic that “some of these companies like Meta lobbying most fervently for blocking state AI regulation are not even focused on trying to build the forms of AI that do seem really important to national security. The stuff they’re trying to build is like automated infinite-scroll AI video slop. Do we really need to beat China in the race to addict our kids and citizens in general to as much AI short-form video as we can? Is that the best way to protect our national competitiveness and security?”

It’s worth noting that Calvin, like Wiener and all of the other pro-regulation figures I spoke to, sees real promise in artificial intelligence. Calvin said that “AI is here to stay and is really important.” Wiener described himself as a “fan” of AI, adding, “I want AI. I want folks to be able to innovate, to help solve the world’s problems.”

But we’re at a crossroads, all the same. Regulation is coming. Powles noted that “there’s a real sensibility from policymakers that we missed a trick with social media and we don’t want to do that again with AI.” Even the companies seem to be aware that they are in the crosshairs. That’s why the brass at Andreessen Horowitz and OpenAI have a new hundred-million-dollar super PAC and are spending big to fight regulation.

Calvin says that “some of the companies’ approach is to effectively take as much as they can now, grow and embed themselves as fast as they can, and then by the time the public has woken up to what’s happening or is demanding changes, be in a sufficiently entrenched position to prevent any meaningful oversight or changes.” Of course, with a federal moratorium legalizing AI cyberscams on grannies and revenge porn on teens, the public might wake up even quicker than the latest semi-sentient chatbot.
