Artificial Intelligence Priorities for the Next Administration
The rise of generative artificial intelligence (AI) has been breathtaking, and American firms are leading the way in showing the potential of a new AI-propelled world. But rivals like China are gaining ground, with major consequences for the U.S. economy and security.
November 26, 2024 11:12 am (EST)
- Expert Brief
Sebastian Elbaum is the Technologist-in-Residence at the Council on Foreign Relations. Adam Segal is the Ira A. Lipman Chair in Emerging Technologies and National Security and Director of the Digital and Cyberspace Policy program at CFR.
By many measures, the United States dominates the AI landscape: it is home to more top AI models and more leading AI companies than China or Europe, and it invests more in AI development than either. The U.S. market is dominated by a handful of private companies producing foundational models—large models trained on vast data sets that can perform many tasks—but a rapidly growing ecosystem of smaller companies is building specialized systems, often on top of foundational ones.
The hardware sustaining these systems is also led by U.S. companies, primarily Nvidia along with Advanced Micro Devices (AMD) and Intel, but Chinese firms like Huawei are becoming increasingly competitive in semiconductor design and manufacturing. Over the longer term, supply chain vulnerabilities, especially Nvidia’s dependence on Taiwan’s manufacturing capacity, and questions about the availability of the massive amounts of energy needed to train models cloud the future of American AI. Failure to address these and other issues, and to create a regulatory environment that balances opportunities and risks, could slow American innovation and threaten U.S. economic and national security.
What have been the main federal government responses to the emergence of AI?
Over the last decade, the U.S. federal government has made strides toward AI adoption. Most agencies have appointed chief AI officers, collected potential AI use cases, adjusted their compliance mechanisms, and integrated AI usage guidelines into their practices. The newly established AI Safety Institute has adapted the risk management framework developed by the National Institute of Standards and Technology (NIST) to AI and set up a voluntary assessment program with the major technology companies building the most sophisticated AI systems. The National Science Foundation is spearheading the National Artificial Intelligence Research Resource Pilot to provide additional computing power, datasets, and models for researchers and educators, thus building a broader base for technology innovation and diffusion.
The incoming Donald Trump administration has signaled that it will prioritize accelerating AI innovation and dismantling some of the regulatory barriers put in place by its predecessor that it believes could hamper innovation. Still, there are a number of steps that the administration should take to ensure both the safety of AI systems and the competitiveness of the U.S. AI ecosystem.
What laws govern AI?
There is, so far, no comprehensive federal AI law. Congress has considered hundreds of bills that touch on AI, as well as several that directly address its risks, but the most consequential AI law it has passed is not even specifically an AI law. The CHIPS and Science Act of 2022 funded the investments necessary to boost the semiconductor manufacturing capacity and scientific research that will support the next wave of AI advances. Beyond that, Congress has embedded several AI provisions into the annual National Defense Authorization Acts. The 2021 act, for example, created the National Artificial Intelligence Initiative Office and provided a structure to coordinate research and development (R&D) between the defense and intelligence communities and civilian federal agencies.
Still, despite bipartisan support for some type of AI-specific regulation, the prospects for congressional action remain uncertain. Republicans and Democrats differ on the purpose and proposed methods of legislation: the former have so far focused more on AI and content moderation, the latter on AI’s impact on equity and economic inclusion. As a result, the executive branch has taken the lead on AI governance thus far.
During his first term, President Donald Trump issued two executive orders on artificial intelligence. Executive Order (EO) 13859, signed in February 2019, highlighted the importance of AI leadership to the United States. It outlined a coordinated federal strategy to prioritize AI research and development, develop standards that reduce barriers to deployment while ensuring safety, equip workers with AI skills, foster public trust in AI technologies, and engage with international partners. EO 13960, signed in December 2020, directed the federal government’s use of AI to enhance the efficiency and effectiveness of operations and services. It also established principles for AI applications and mandated that agencies create inventories of AI use cases, ensure compliance with those principles, and promote interagency coordination to foster public trust in AI technologies.
In July 2023, President Joe Biden announced voluntary commitments from leading AI companies to advance the safe, secure, and transparent development of AI technology. In October 2023, the president signed EO 14110, which focused on the responsible development and deployment of emerging AI technology, mandating risk assessments, robust evaluations, standardized testing, and safeguards for AI systems to provide a broad range of user protections. The order also supported job training and education for the AI era and sought to enhance the federal government’s capacity to use AI responsibly, including by attracting and retaining talent. Finally, it promoted U.S. global leadership through engagement with international partners to develop a framework for managing AI risks and benefits.
Biden followed the EO with the first National Security Memorandum (NSM) on AI in October 2024. The NSM is expected to accelerate the development of cutting-edge AI tools, protect AI R&D from theft and exploitation, and promote the adoption of advanced AI capabilities to address national security needs, while ensuring that defense and intelligence uses of AI protect human rights and advance democratic values.
Do U.S. states have any rules governing AI use?
In the absence of federal legislation, more than a dozen states, including California, Colorado, New York, Texas, and Virginia, have approved bills supporting various forms of consumer protection, stakeholder participation in the development and monitoring of AI systems, and accountability for violations. For example, the California state legislature has passed bills protecting individuals from having their voice or likeness copied and requiring watermarks on AI-generated images. Another bill, SB 1047, mandated safety testing for companies that spend more than $100 million training frontier models and required AI developers to take “reasonable care” to avoid critical harms, such as mass casualties or cyberattacks causing catastrophic damage to critical infrastructure. It was vetoed by Governor Gavin Newsom after vocal opposition from AI researchers, tech firms, and venture capitalists, as well as members of Congress. There is a real risk that state-level initiatives, particularly those providing user protections from AI systems, could create a patchwork of conflicting laws that technology companies will struggle to navigate, slowing innovation.
Is there any international coordination on establishing regulatory norms?
The United States internationalized the voluntary commitments signed by the tech companies through what is known as the Group of Seven (G7) Hiroshima AI Process. The United States also participated in the United Kingdom’s AI Safety Summit and signed the Bletchley Declaration, which encouraged transparency and accountability from actors developing frontier AI technology. In September 2024, the United States signed the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law, which aims to ensure that AI systems align with those principles.
What should an AI governing roadmap look like?
Set up an AI Commission. While the specifics of AI policy under Trump’s second term are not yet clear, the president-elect has said he would revoke EO 14110, lifting regulations so the United States can compete more effectively with China. The incoming administration has also been clear about its desire to limit the federal government and cut its spending.
Still, the Trump administration should create an AI Commission to ensure the safety of AI systems. Private technology companies are driving the AI revolution, but they are unable to ensure that the benefits of their products outweigh the risks. Moreover, the market often incentivizes them to be first to introduce a product, before they can assure its safety. No existing government agency has the expertise, resources, and authority required to formulate federal AI policies and regulations, conduct inspections of AI systems, verify that those systems are fit for their intended use cases, and levy penalties when systems fail.
The AI Commission could be modeled on the National Highway Traffic Safety Administration, which oversees the safety, security, and efficiency of vehicles. Such a commission would also collect and analyze incident reports, investigate failures, and order the recall of an AI system if necessary. These measures would incentivize companies to improve and accelerate their quality control practices for AI systems.
Spur AI investment in universities. The Trump administration should also support investment in AI systems outside of the big tech players, especially in federal labs and universities. The cutting-edge AI landscape is now monopolized by a handful of private companies that invest in and rely on massive computing resources to advance the technology. Universities have the talent but cannot match that level of computing investment. The National Science Foundation’s annual funding for AI-related research—over $800 million in 2023—is comparable to the cost of training a couple of foundational AI models and an order of magnitude smaller than the total training budgets of the top AI companies; OpenAI’s training and inference costs alone could reach $7 billion in 2024.
The lack of investment curtails universities’ ability to develop the next generation of technology and to train the young researchers and engineers who will create and staff new companies. More fundamentally, if this trend continues, universities will no longer be in a credible position to independently judge the strengths and weaknesses of emerging technology against social goals such as job creation and educational access, rather than the market value that drives private companies.
Align AI and energy policy. As AI systems become more sophisticated, they require data centers with more computing power, which in turn require more energy. One study found that AI could account for 0.5 percent of worldwide electricity use by 2027. Companies can reduce energy costs by developing more efficient chips, architectures, algorithms, and models, but it is unclear whether those savings will lower overall energy consumption or simply encourage more usage. The government can help ensure sufficient energy capacity by streamlining the regulatory process, incentivizing private sector investment in grid modernization, and supporting the research, demonstration, and deployment of advanced nuclear projects, including by easing their permitting.
This work represents the views and opinions solely of the authors. The Council on Foreign Relations is an independent, nonpartisan membership organization, think tank, and publisher, and takes no institutional positions on matters of policy.