Don't be fooled. The US is regulating AI – just not the way you think

The Guardian 

Early frameworks like the EU's AI Act focused on highly visible applications, restricting high-risk uses in health, employment and law enforcement to prevent societal harms. But governments are now targeting the underlying building blocks of AI. China restricts models to combat deepfakes and inauthentic content. Citing national security risks, the US controls exports of the most advanced chips and, under Biden, even model weights - the "secret sauce" that turns user queries into results. These AI regulations hide in dense administrative language: titles such as "Implementation of Additional Export Controls" or "Supercomputer and Semiconductor End Use" bury the lede. But behind the jargon lies a clear trend: regulation is moving from AI's applications to its building blocks.