The White House released a new framework for national AI legislation Friday morning, focusing on protecting children and boosting the industry while calling for sharp limits both on developers’ legal liability and on state laws that it says would slow the technology’s development.
The Trump administration’s legislative proposal emphasizes the need for Congress to establish a unifying federal approach to artificial intelligence rather than let states create “a patchwork” of individual rules. Politicians and activists across the political spectrum have instead advocated for states’ ability to regulate AI in the absence of meaningful federal action regarding the fast-moving technology.
“The Federal government is uniquely positioned to set a consistent national policy that enables us to win the AI race and deliver its benefits to the American people,” the White House said in an announcement accompanying the framework’s release.
“The Administration looks forward to working with Congress in the coming months to turn this framework into legislation that the President can sign.”
The framework is split into seven main areas, from “Protecting Children and Empowering Parents” and “Respecting Intellectual Property Rights and Supporting Creators” to “Educating Americans and Developing an AI-Ready Workforce.”
Among other specific provisions, the document calls on Congress to fight AI-enabled scams and require that AI platforms verify the age of their users while respecting their privacy, ensuring that minors are protected against “sexual exploitation and self-harm.”
Several of the framework’s provisions, including the focus on child protections and support for building American AI infrastructure, were previewed in President Donald Trump’s executive order from December. That order directed David Sacks, the White House’s AI czar, and Michael Kratsios, director of the Office of Science and Technology Policy, to create Friday’s draft framework.
The framework supports limiting AI developers’ liability for harms caused by their AI systems, railing in particular against “open-ended liability” that “could give rise to excessive litigation” over issues related to child safety. The framework also advances limits on states’ ability to “penalize AI developers for a third party’s unlawful conduct involving their models.”
These proposed restrictions on liability align with messaging from Sacks, a venture capitalist, and many leading Silicon Valley investors who say significant liability provisions would harm American AI innovation and scare away future investment.
The need to regulate America’s booming AI industry has quickly become a uniting factor for MAGA conservatives and progressive activists.
In recent months, for example, slowing the spread and construction of data centers has become a key bipartisan issue in many state capitols. Friday’s framework calls on Congress to ensure that residential electricity rates do not rise as a result of new data center construction or operation.
While no sweeping federal legislation governing AI currently exists, California’s SB 53 and New York’s RAISE Act have established the current standard for AI legislation. Both laws mandate that leading AI companies, such as OpenAI, Anthropic and Google, establish additional whistleblower protections for employees, report any significant safety-related events to state offices and disclose how they test their models for key risks.
The Trump administration’s calls to restrict states’ ability to legislate AI have recently rankled many Republicans. In a letter to Trump at the beginning of March, more than 50 Republicans said that “recent attempts to halt state AI legislation suggest not merely a desire for coordination, but an effort to prevent the passage of measures holding the tech industry accountable.”
The letter was issued in response to the Trump administration’s pressure campaign against a proposed Utah bill that would require AI companies to be more transparent about how they strive to protect children using their systems and how they plan to limit catastrophic risks from their models, such as assisting terrorists in the creation of bioweapons or crippling cyberattacks.
Friday’s proposed framework says that states should retain the power to enforce laws on matters that would normally fall under state jurisdiction, such as preventing fraud and protecting consumers.
The policy document also prioritizes protections against AI-related censorship, advocating that Congress should prevent the federal government “from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas.”
This anti-censorship messaging comes shortly after Trump and Defense Secretary Pete Hegseth cut off Anthropic, one of America’s leading AI companies, from government business for being “woke” and misaligned with government priorities.
Anthropic is now suing the federal government, claiming that the abrupt cancellation of its work with the government infringed on its First Amendment rights.