Biden's Executive Order on AI and the Implications for Industrial Organizations

Author photo: Colin Masson
ByColin Masson
Category:
Technology Trends

President Joe Biden's administration issued an Executive Order on October 30, 2023, outlining a comprehensive approach to ensuring the safety, security, and trustworthiness of artificial intelligence (AI) technologies. This pivotal Biden's Executive Order on AImove has far-reaching implications for the mainstream adoption of frontier AI technologies.

Biden’s Executive Order on AI does not explicitly target frontier models, but it does address some of the issues and challenges that they pose. For example, the Executive Order requires that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.

Frontier AI models are a term used to describe highly capable and powerful artificial intelligence systems that can perform a wide variety of tasks across different domains. They are often based on large-scale machine learning models that exceed the capabilities of the most advanced existing models. Some examples of frontier AI models are GPT-4, DALL·E, and 3, which are able to generate natural language, images, and code respectively.

While all frontier models are foundation models, not all foundation models are frontier models. The difference lies in their capabilities and potential risks, with frontier models representing the most advanced and potentially risky subset of foundation models.

Frontier AI models have the potential to bring tremendous benefits to humanity, but they also pose significant challenges and risks to public safety. Therefore, it is important to ensure that they are developed and used in a safe and responsible manner. To address this issue, some leading AI companies and research organizations have formed the Frontier Model Forum, an industry body that aims to advance AI safety research, identify best practices, share knowledge, and support efforts to leverage AI for social good.

Key Components of the Biden Order:

The Biden administration's Executive Order establishes new standards for AI safety and security, protects Americans' privacy, and advances equity and civil rights. It also seeks to safeguard against threats posed by artificial intelligence, providing a comprehensive framework to guide the development of AI technologies.

This order, while fostering the adoption of AI technologies, also necessitates industrial organizations to understand its implications, and the potential impact of government regulations on specific emerging AI technologies and align their Industrial AI strategies accordingly.

In particular, this Executive Order appears to focus attention on frontier AI models that pose a serious risk to national security, national economic security, or national public health and safety.

The Executive Order directs the National Institute of Standards and Technology to set rigorous standards for extensive red-team testing to ensure safety before public release. New frontier models will need to undergo thorough evaluation and verification.

Furthermore, the Executive Order establishes an AI Safety and Security Advisory Board, chaired by the Secretary of Homeland Security, to support the responsible development of AI. This board could provide recommendations and best practices for various AI use cases, likely prioritizing those involving frontier models.

Some of the most relevant sections from the Executive Order that are relevant to frontier AI models are:

  • "The Secretary of Commerce shall require that any person developing a foundation model that poses a serious risk to national security, national economic security, or national public health and safety notify the Secretary of Commerce prior to commencing training of such model."

  • "The Secretary of Commerce shall require that any person developing a foundation model that poses a serious risk to national security, national economic security, or national public health and safety share with the Secretary of Commerce the results of all red-team tests conducted on such model."

  • "The Director of NIST shall develop standards for red-team testing of AI systems prior to public release, including standards for testing robustness, security, privacy, and fairness."

  • "The Secretary of Homeland Security shall establish an AI Safety and Security Advisory Board (AISSB) to support the responsible development of AI."

Industrial AI: No Cause for Panic

The Industrial AI (R)Evolution was well underway in the industrial sector long before the advent of generative AI. As outlined in the ARC Advisory Group's blog post on "25 Industrial AI Use Cases for Sustainable Business Outcomes", most industrial AI use cases leverage proven mainstream data science and AI tools and technologies.

While many use cases can indeed be enhanced by the frontier models powering Generative AI, -particularly using the breakthroughs in natural language processing offered by the latest Language Models (LLMs) as a new way of interacting with complex systems - industrial organizations need not panic in light of the new Executive Order. rather, they should continue to focus on leveraging proven AI tools and technologies to drive sustainable business outcomes.

ARC's Industrial AI Impact Assessment Model

ARC's Industrial AI Impact Assessment Model takes into consideration a wide range of AI techniques, including machine learning, neural networks, computer vision, and natural language processing. It also factors in emerging AI techniques including Generative AI, but also Causal AI, Explainable AI,  NeuroSymbolic AI, etc. This comprehensive approach ensures that industrial organizations can effectively assess the impact of various AI technologies on their operations and make informed decisions.

Conclusion

President Biden's Executive Order on AI is a significant development in the regulation of AI technologies. It attempts to promote innovation and mainstream adoption of responsible AI technologies, with a sharp focus on the risks of high impact frontier AI models, mostly associated with Generative AI.

Industrial organizations should maintain their focus on leveraging proven AI tools and technologies to drive sustainable business outcomes. ARC's Industrial AI Impact Assessment Model provides a robust framework for these organizations to navigate the evolving AI landscape and harness the full potential of AI for their operations.

For more information or to contribute to Industrial AI research, please contact Colin Masson at cmasson@arcweb.com .

Engage with ARC Advisory Group

Representative End User Clients
Representative Automation Clients
Representative Software Clients