
Let a Thousand AI Regimes Bloom



LOS ANGELES – It is widely assumed that America is “deregulating” AI, and it’s not hard to see why.

The Trump administration has rescinded President Joe Biden’s AI executive orders and unveiled an “AI Action Plan” promising to “dismantle unnecessary regulatory barriers.”

When Vice President JD Vance attended the Paris AI Action Summit back in February, he warned Europeans that “excessive regulation” would cripple their AI industry.

But this deregulatory narrative elides how the United States is actually governed. The federal government is not the only regulator in town.

There are 50 state governments that can legislate and enforce rules when federal authorities decline to act, as well as independent courts that intervene in cases brought before them by private individuals or entities. The result is a decentralized patchwork, not a regulatory vacuum.

As of October, lawmakers in 42 states had introduced 210 AI-related bills to regulate the private sector, with Democratic-controlled states leading the charge.

One is California, which just enacted a sweeping package that requires bot disclosures and youth safeguards for “companion chatbots”; introduces a phased data-disclosure regime so that platforms can flag AI-generated content; sets limits on unlicensed “AI medical” advice; strengthens the remedies for nonconsensual deepfake pornography; circumscribes the “AI-did-it” defense in litigation; and explicitly bans algorithmic price-fixing.

Meanwhile, New York has barred landlord rent-pricing algorithms and is weighing measures on frontier-model safety, labeling, and disclosures for synthetic performers (AI “actors”). Illinois has banned “AI therapy” without licensed professionals and limited AI use in hiring.

And last year, Colorado adopted the first comprehensive “high-risk AI” law focused on preventing algorithmic discrimination.

Republican-controlled states have also moved this year. Texas passed an omnibus law targeting manipulative and discriminatory uses of AI. Tennessee (home of the country-music industry) has regulated the use of AI voice cloning. And Utah has set disclosures and guardrails for chatbots offering mental-health services.

Clearly, there is bipartisan support for protecting children, regulating bots, and banning AI deception, which helps to explain why a recent effort to pre-empt all state AI laws for ten years – tucked into the Republicans’ “One Big Beautiful Bill” – failed after strong pushback from states and civil society.

To be sure, decentralization increases compliance costs, especially for smaller companies, and supporters of pre-emption argue that legislation across 50 states could lead to 50 definitions of AI, 50 disclosure requirements, and 50 enforcement approaches.

But that is a caricature. Markets rarely run 50 playbooks, because patchworks tend to converge, with companies harmonizing to the strictest workable standard. Many industries are already familiar with the “California effect”: rules set in Sacramento often become the de facto national norm, as seen in auto emissions and data privacy.

Moreover, a decentralized system enables the US to run real-time experiments in AI regulation. When states adopt rules with varying intensity, they become laboratories of governance. Policymakers can see what works, discard what doesn’t, and devise a practical template for better regulation.

At the same time, the divergence among states is unlikely to be as stark as many assume. Since states compete for investment, heavy-handed rules can drive away innovative firms.

That is why California Governor Gavin Newsom vetoed SB 1047, legislation that would have imposed heavy compliance duties on frontier AI developers. In the end, state lawmakers opted for a narrower transparency-and-disclosure measure instead. OpenAI’s restructuring offers another example of the same pattern. As the firm shifted from a nonprofit to a public-benefit structure, California’s attorney general, Rob Bonta, pushed for stronger guardrails. After securing governance and safety concessions from OpenAI, he declined to object.

If this all sounds familiar, that is because AI’s rollout is replaying the politics of the US data-privacy debate. Here, too, it is incorrect to say that America has “no data privacy law.” The US has many such laws – just not a single, coherent federal one.

Although the American Data Privacy and Protection Act sailed out of the US House Energy and Commerce Committee with overwhelming bipartisan support in 2022, it then stalled.

Democratic-controlled states, led by California, urged Congress to adopt a federal floor (not a ceiling) and to preserve concurrent state attorney-general investigative and enforcement powers, while US businesses lobbied against any legislation that would not override stronger state-level regimes.

We can expect the same dynamic with AI regulation, unless Congress opts for a bill that sets federal baselines while allowing states to go further. Yes, critics will warn that patchwork regulation might handicap US firms relative to their Chinese competitors. China’s centralized system can indeed move fast: when market chaos emerges, the authorities can impose sweeping controls with little institutional resistance; and when growth falters under heavy-handed intervention, they can just as quickly ease rules to lure investors back.

But Americans tempted by China’s approach should remember the other side of the coin. As I elaborate in my book High Wire: How China Regulates Big Tech and Governs Its Economy, China’s mishandling of the COVID-19 pandemic, its sweeping internet crackdown (which erased vast tech fortunes), and the lingering economic malaise from heavy-handed property reforms are stark reminders that with centralization comes fragility.

So, no: America isn’t deregulating AI. It is decentralizing its governance. This messier, noisier, more pluralistic path is an inherent feature of its constitutional order and the foundation of its regulatory resilience.

Rather than a drag on innovation, decentralization should be viewed as one of America’s most important competitive advantages.

Angela Huyue Zhang, Professor of Law at the University of Southern California, is the author of High Wire: How China Regulates Big Tech and Governs Its Economy (Oxford University Press, 2024) and Chinese Antitrust Exceptionalism: How the Rise of China Challenges Global Regulation (Oxford University Press, 2021).

Copyright: Project Syndicate, 2025.
www.project-syndicate.org