
And we’re off! America’s AI action plan to win the AI race*

7/25/25


By: Danielle A. Ocampo

*Text below not written by AI!

We’ve come a long way since the race to space. The track is now set for the world’s race to AI dominance. To win this race, the White House released “America’s AI Action Plan” on July 23, 2025. The plan emphasizes urgency and the need to “innovate faster and more comprehensively than our competitors in the development and distribution of new AI technology across every field.” This means removing any and all hurdles, including dismantling the “unnecessary regulatory barriers that hinder the private sector,” to achieve a win for the home team.

The twenty-three-page plan outlines three pillars: (1) innovation, (2) infrastructure and (3) international diplomacy and security. The plan also names three underlying principles, among others. First, American workers must benefit from this technological revolution through the creation of high-paying jobs, and AI must improve the standard of living for all Americans. Second, AI systems must be free from ideological bias and social engineering agendas to preserve objective truth and factual information or analysis. Third, mitigating the risk of AI and preventing its misuse by malicious actors requires constant vigilance.

Pillar One: Accelerating AI Innovation

On July 1, 2025, the U.S. Senate voted to fully remove a previously proposed ten-year moratorium on states’ regulation of AI and enforcement of those laws (the “AI Moratorium”) as part of the Administration’s “One Big Beautiful Bill.” Pillar One of the Trump Administration’s AI Action Plan, however, calls for the removal of red tape and onerous regulation because “AI is far too important to smother in bureaucracy at this early stage.” The plan provides that federal agencies like the Office of Management and Budget (OMB) should “not allow AI-related funding to be directed towards states with burdensome AI regulation, but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.” To achieve this, the plan recommends that OMB work with federal agencies to consider a state’s regulatory climate when making AI-related funding decisions and limit funding if a state’s regulatory regime may “hinder the effectiveness of that funding[.]”

Other recommended actions for the federal government include:

  • Investigating all FTC orders, consent decrees and injunctions that unduly burden AI innovation;
  • Revising the NIST AI Risk Management Framework to eliminate references to misinformation; Diversity, Equity and Inclusion; and climate change;
  • Partnering with leading technology companies to increase the research communities’ access to private sector computing, models, data and software resources to promote open-source and open-weight AI;
  • Promoting integration of AI skill development and AI literacy into federally funded technical education and workforce skills-training initiatives;
  • Investing in cloud-enabled labs for AI-enabled science;
  • Adopting AI within the federal government to promote efficiency;
  • Combatting malicious deepfakes and AI-generated media in the legal system by issuing guidance and additions to the Federal Rules of Evidence.

Pillar Two: Building American AI Infrastructure

Pillar Two focuses on clearing the way for American AI infrastructure. To do that, the plan calls for building the data centers needed to run AI and the factories needed to produce chips, along with new sources of energy to power them. It also calls for revitalizing the U.S. chip industry to generate jobs, reinforce technological leadership and protect our products and supply chains from our foreign adversaries. Importantly, this pillar contemplates the likelihood that “some of the U.S. government’s most sensitive data” will be input into AI systems, which would require bolstering cybersecurity. Recommended actions include:

  • Establishing new exclusions under the National Environmental Policy Act (NEPA) to “cover data-related actions that normally do not have a significant effect on the environment”;
  • Constructing data centers on Federal land;
  • Creating new technical standards for high-security AI data centers led by NIST, the Department of Defense (DOD) and other federal agencies; 
  • Promoting information sharing across critical infrastructure sectors;
  • Providing guidance to private sector entities on remediating and responding to AI threats and vulnerabilities;
  • Modifying existing standards and frameworks for incident response, such as CISA’s Cyber Incident & Vulnerability Response Playbooks, to incorporate AI considerations.

Pillar Three: Leading AI Diplomacy and Security

Lastly, Pillar Three stresses the importance of America’s role as a global AI leader. This pillar specifically names China as a top competitor in the race and calls on America to take the lead in shaping international AI governance approaches that promote innovation. Recommended actions include:

  • Implementing new and strong export controls on sensitive technologies;
  • Developing a technology diplomacy strategic plan for an AI global alliance to align initiatives and policies;
  • Evaluating and assessing potential security vulnerabilities and foreign influence arising from malicious use of AI systems in critical infrastructure and the economy;
  • Investing in biosecurity to guard against the misuse of AI to enable biological warfare.

Conclusion

This plan suggests that a comprehensive federal AI law is not on the horizon during this Administration. The Administration is mainly concerned with ramping up AI innovation; however, the states may have a differing view. Businesses adopting AI must continue to monitor the budding patchwork of state AI laws. Perhaps states developing their own comprehensive AI laws will consider creative ways to ensure it is “safe” for businesses to innovate. Texas, for example, recently passed the Texas Responsible AI Governance Act (TRAIGA), which carves out a regulatory sandbox program for responsible AI testing to promote innovation. With the AI Moratorium now out of the way, TRAIGA will take full effect on January 1, 2026.

The plan does not call on businesses to use AI irresponsibly. Instead, the Administration believes in a “try-first” culture for AI adoption across American industries. Though a federal AI law may not be in our line of sight, we can anticipate new standards, regulations and guidance from agencies such as NIST, CISA, DOD and DOJ that adhere to the plan’s recommendations and prepare the country for a technological AI shift. This will likely reshape the ever-changing regulatory landscape for businesses’ obligations, such as implementing and maintaining reasonable cybersecurity, as AI integration increases. Even as a nation in a race for AI against global competitors, we can expect to see conflicting positions between the federal government and the states when it comes to regulating AI and data, which can lead to further tension around preemption principles, states’ rights and consumer protections.

For more information, please contact Danielle A. Ocampo at danielle.ocampo@fmglaw.com or your local FMG attorney.

Information conveyed herein should not be construed as legal advice or represent any specific or binding policy or procedure of any organization. Information provided is for educational purposes only. These materials are written in a general format and are not intended to be advice applicable to any specific circumstance. Legal opinions may vary when based on subtle factual distinctions. All rights reserved. No part of this presentation may be reproduced, published or posted without the written permission of Freeman Mathis & Gary, LLP.