
Biden’s AI plans focus on US workers’ protection

The US president has issued an Executive Order that sets out his administration’s strategy for AI safety and security

US president Joe Biden has set out the country’s strategy for artificial intelligence (AI), balancing responsible innovation with safety and security. The White House said the Executive Order aims to ensure the US establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, and advances American leadership.

Among the key areas Biden has focused on is the development of principles and best practices to mitigate the harms and maximise the benefits of AI for workers. The principles include addressing job displacement; labour standards; workplace equity, health and safety; and data collection.

The Executive Order includes new standards for AI safety and security that require AI developers to share their safety test results and other critical information with the US government.

In accordance with the Defense Production Act, the Order requires companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety to notify the federal government when training the model, and to share the results of red-team safety tests conducted against rigorous standards set by the National Institute of Standards and Technology.

The White House said the Department of Homeland Security will apply these standards to critical infrastructure sectors and establish an AI Safety and Security Board. The Departments of Energy and Homeland Security will also address the threats AI systems pose to critical infrastructure, as well as chemical, biological, radiological, nuclear and cyber security risks.

The administration has also focused on protecting against the risk of AI being used to engineer dangerous biological materials, by developing and introducing new standards for biological synthesis screening. According to the White House, agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.

The AI strategy will also see the Department of Commerce develop guidance for content authentication and watermarking, so that AI-generated content is clearly labelled.


The US also plans to develop an AI-powered cyber security programme for finding and fixing vulnerabilities in critical software.

Beyond these measures, Biden has called on Congress to pass bipartisan data privacy legislation to protect Americans' privacy, and has prioritised federal support for accelerating the development and use of privacy-preserving techniques – including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.

The US administration has also committed to strengthening privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy, by funding a Research Coordination Network to advance breakthroughs and development.

The White House said that the National Science Foundation will also work with this network to promote the adoption of privacy-preserving technologies by federal agencies.
