Federal AI Oversight Eliminated in Historic Policy Shift
On January 20, 2025, President Donald Trump issued an executive order titled “Initial Rescissions of Harmful Executive Orders and Actions” that rescinded Executive Order 14110 of October 30, 2023 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence), marking the beginning of what experts call the most dramatic shift in American AI policy since artificial intelligence entered mainstream business use. Three days later, on January 23, Trump issued a follow-up order specifically focused on AI policy.
Executive Order 14110, signed by President Biden in October 2023, was widely described as the most comprehensive AI governance action the United States government had taken to date. The Biden framework required companies developing advanced AI models to conduct mandatory safety testing, share results with the federal government, and implement bias detection systems. It also established requirements for federal agencies to use AI ethically and created oversight mechanisms for AI use in critical infrastructure such as healthcare and transportation.
The revocation affects multiple government agencies that had already begun implementing AI safety measures. The Equal Employment Opportunity Commission (EEOC) and Department of Labor pulled or updated a number of AI-related publications from their websites immediately following the order. This means that existing guidance helping employers understand how to use AI tools fairly in hiring and workplace decisions is no longer official federal policy.
America’s AI Action Plan Prioritizes Innovation Over Safety
In July 2025, the Trump administration released its comprehensive response: “Winning the AI Race: America’s AI Action Plan”. This 28-page document outlines over 90 federal policy actions across three pillars – Accelerating Innovation, Building American AI Infrastructure, and Leading in International Diplomacy and Security.
The plan takes a fundamentally different approach than its predecessor. Instead of focusing on potential risks from AI systems, it emphasizes removing what the administration calls “bureaucratic red tape” that could slow American AI development. This includes streamlining permits for data centers, reducing regulatory requirements for AI companies, and ensuring federal funding goes to states with fewer AI restrictions.
A significant change involves how the federal government thinks about AI bias and fairness. The new plan calls for AI systems to be “objective and free from top-down ideological bias” and recommends revising the National Institute of Standards and Technology (NIST) AI Risk Management Framework to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change”. This represents a sharp reversal from the Biden administration’s emphasis on preventing AI discrimination.
State Regulations Continue Despite Federal Deregulation
While federal oversight has been eliminated, individual states are moving forward with their own AI regulations. California leads this effort, with multiple bills currently advancing through the legislature despite the federal policy change.
Senate Bill 243, introduced by Senator Steve Padilla, specifically targets AI chatbot companies after the death of 14-year-old Sewell Setzer, a Florida boy who died by suicide after months of emotionally intense, sexually explicit conversations with an AI companion bot. The bill, which was approved by the California State Senate with bipartisan support but is still advancing through the legislative process, would require chatbot platforms to ban reward systems that encourage compulsive use and to implement and publish a protocol for responding to users who express suicidal thoughts, including directing them to suicide prevention hotlines. It also mandates that companies remind users every three hours that they are talking to artificial intelligence, not a human being.
Senate Bill 420, also by Senator Padilla, creates what he calls “a framework and regulatory structure to ensure that AI systems respect human rights, promote fairness, transparency, accountability, and safeguard Californians’ well-being”. This bill, which was approved by the California Senate and is currently moving through the Assembly, would require companies using high-risk automated decision systems to conduct impact assessments examining potential bias and to provide people with information about how AI tools make decisions affecting them.
Business Impact and Compliance Challenges
The regulatory changes, which occurred just months ago, create both opportunities and challenges for businesses using AI. On the positive side, companies face fewer restrictions on AI development and deployment at the federal level. Federal agencies are no longer required to conduct lengthy reviews of AI procurement decisions, and companies don’t need to share safety testing results with the government unless they choose to do so.
However, businesses still face significant legal obligations. Employment discrimination laws remain in effect, meaning companies are still liable if their AI tools produce biased results in hiring, promotion, or workplace decisions. Privacy regulations at both state and federal levels continue to apply when AI systems process personal information.
The divergence between federal deregulation and state-level regulations creates what experts call a “patchwork” problem. The House-passed AI moratorium would have barred states and localities from enforcing any law or regulation targeting “artificial intelligence models,” “AI systems,” or “automated decision systems” for 10 years, but this provision was stripped by a near-unanimous 99-1 Senate vote from the “One Big Beautiful Bill” reconciliation package signed in July 2025. As a result, companies operating in multiple states must navigate different requirements in each location.
International Implications and Competitive Positioning
The Trump administration’s approach contrasts sharply with international AI governance trends. The European Union’s AI Act, which began implementation in 2024, imposes strict requirements on high-risk AI systems, including mandatory impact assessments and transparency obligations. The new administration’s emphasis on reducing regulatory burdens stands in stark contrast to the EU’s approach, which reflects a precautionary principle that prioritizes societal safeguards over rapid innovation.
This creates potential compliance challenges for American companies operating globally. While they may face fewer restrictions domestically, they still must meet European standards to sell AI products in EU markets. International AI compliance strategies become more complex when home country and target market regulations diverge significantly.
The administration argues this deregulatory approach will help American companies compete with China and other nations investing heavily in AI development. President Trump rolled out a wide-ranging action plan aimed at ensuring the United States dominates the global artificial intelligence industry, including partnerships with private companies to export American AI technology to allied nations.
Key Takeaways
- Trump administration rescinded comprehensive federal AI oversight, eliminating mandatory safety testing and bias detection requirements for AI companies nationwide.
- State-level AI regulations continue advancing, particularly in California, creating complex compliance requirements that vary significantly by geographic location.
- Businesses face reduced federal oversight but unchanged legal liability for discrimination, requiring careful risk management despite deregulatory trends.
FAQs
What specific AI regulations did Trump eliminate?
Trump rescinded Executive Order 14110, which required companies to share AI safety testing results with the government, mandated bias detection in federal AI use, and established oversight for AI in critical infrastructure. The order also eliminated requirements for agencies to study AI risks and implement equity protections in AI deployment.
Do businesses still need to worry about AI discrimination laws?
Yes, existing employment and civil rights laws still apply to AI systems. Companies remain liable for discriminatory outcomes from AI tools in hiring, lending, housing, and other decisions, even though federal oversight and guidance have been reduced. State laws and sectoral regulations also continue to impose requirements.
How do state AI laws affect businesses operating nationally?
Companies must comply with AI regulations in each state where they operate, creating a complex patchwork of requirements. California’s advancing legislation on chatbots and automated decision systems will apply to any company serving California customers, regardless of where the company is headquartered or incorporated.
Keep Reading
- California Advances Strictest AI Rules in United States – Learn what pending state requirements mean for chatbot companies and automated decision systems.
- Employment AI Creates New Bias Risks for Employers – Understand continuing legal obligations despite federal deregulation and EEOC guidance changes.
- Europe Tightens AI Rules While America Deregulates – Compare international approaches and implications for companies operating across multiple jurisdictions.
- America’s AI Action Plan Boosts Data Center Development – Explore federal initiatives supporting AI infrastructure and their economic development implications.
- Complete Guide to State AI Legislation in 2025 – Track emerging regulations across all fifty states and their business compliance requirements.
- Federal AI Funding Now Tied to State Regulation Levels – Discover how new policies linking federal support to state regulatory approaches affect technology investments.