What you need to know about Biden’s AI executive order

US President Joe Biden has announced a sweeping executive order covering artificial intelligence. Here’s everything you need to know about how the US government is looking to regulate the technology.
October 31, 2023

The keenly anticipated White House executive order on AI was published on October 30, setting out US President Joe Biden’s approach to regulating the technology and joining other governments’ attempts to manage a quickly emerging set of societal risks.

Officially titled the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”, the document highlights that artificial intelligence (AI) technology has “extraordinary potential for both promise and peril”. The directive tasks the US federal government with ensuring the technology delivers the former rather than the latter.

The document outlines the importance of AI risk management in key areas like workplaces, housing, education, and government benefits. It includes requirements for federal agencies to adopt processes that will put the protection of civil rights and privacy at the forefront of any AI deployment, including the appointment of chief AI officers, whose responsibility will be to reduce the risk of bias in models.

Companies developing AI systems that could impact national security, economic security, or public health and safety must also regularly test their systems and report the results to the US government. Rules for how to properly ‘red team’ AI models (probing them for security flaws before they are released) will be developed by the National Institute of Standards and Technology (NIST) and must be followed by companies developing AI models, such as OpenAI and Meta.

The order also attempts to position the United States as a global hub for AI innovation and talent. Specifically, it directs agencies to “clarify and modernize immigration pathways for experts in AI and other critical and emerging technologies”, including startup founders running AI companies.

A step in the right direction

The order has been generally well-received by AI experts, but the US government will have more work to do to solidify these rules beyond an executive order, which, unlike legislation, can be revoked by subsequent presidents.

“In general, I’m pleased with how the overall executive order presented itself as a good-hearted attempt towards a pretty comprehensive set of legislation,” says Deb Raji, a fellow at Mozilla working on AI accountability and evaluation.

The order is “an important next step to get federal agencies better prepared to use [their] decision-making power in a way that increases protections, rather than letting AI run unchecked,” says Maya Wiley, president and CEO of The Leadership Conference on Civil and Human Rights.

It was also praised by Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology. “It’s notable to see the administration focus on both the emergent risks of sophisticated foundation models and the many ways in which AI systems are already impacting people’s rights,” she says. “The administration rightly underscores that US innovation must also include pioneering safeguards to deploy technology responsibly.”

What are other countries doing?

This regulatory work by the US government runs in parallel with a number of different legislative approaches worldwide, including those of the European Union, the G7 group of countries, and the United Kingdom, which is hosting an AI Safety Summit this week.

The EU is taking a tiered, risk-based approach, outlining the potential risks of AI and requiring that more dangerous uses of the technology follow more stringent rules. The UK’s summit has opted to focus on a narrow set of largely existential and weaponized risks. This sets the US apart in its attempt at broad regulation of AI.

“The approach of an executive order is a little tricky,” says Raji. “Because there’s only so much that they can be able to execute in terms of enforcement.” Raji worries that it’s likely to come under challenge in court from interested parties who want to profit from the AI revolution, but says “at a high level, it’s a pretty solid start”.