
A Fresh Take

Insights on M&A, litigation, and corporate governance in the US.


US Signals Expanded AI Export Regulations in Forthcoming Reporting Requirements

On September 11, 2024, the Biden administration released a Notice of Proposed Rulemaking (the Proposed Rule) that would establish mandatory reporting requirements for companies that develop or plan to develop advanced artificial intelligence (AI) models and computing clusters.  The Proposed Rule aims to balance the rapid advance of AI technologies with the US government’s interest in national security and regulatory oversight.  There is a 30-day public comment period for the Proposed Rule, closing on October 11, 2024, which is expected to be followed by a final rule. 

The Proposed Rule was issued pursuant to Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued on October 30, 2023 (EO 14110).  Section 4.2 of EO 14110 essentially requires companies that develop (or demonstrate an intent to develop) potential dual-use foundation models (i.e., models that can be applied across various civilian and military contexts, including those that pose significant security risks) to report those models to the US government on an ongoing basis. 

The newly released Proposed Rule, if adopted in its current form, would require certain advanced AI developers that are Covered US persons[1] to share with the US Department of Commerce’s Bureau of Industry and Security (BIS) a host of information relating to AI development activities and model training, cybersecurity measures, and the outcomes of red-team testing (a structured testing effort to find flaws and vulnerabilities in an AI system).

The Proposed Rule foreshadows the nature of—and expected increase in—US export restrictions on AI technologies in the coming months and years.  Even companies that are not obligated to report under the Proposed Rule may be impacted by the US export controls that later emerge.

Reporting Requirements

Who must report?

The Proposed Rule’s reporting requirements apply to “Covered US persons” who are engaged in “applicable activities,” who plan to engage in “applicable activities” within six months, or who have engaged in “applicable activities” within the last seven quarters. 

What must be reported?

A company must report its “applicable activities,” which are defined to include (i) AI model training using more than 10^26 computational operations (e.g., integer or floating-point operations); and (ii) acquiring, developing, or coming into possession of a computing cluster with a set of machines transitively connected by data center networking of over 300 Gbit/s and having a theoretical maximum above 10^20 computational operations per second for AI training (with sparsity). 
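To make the scale of these thresholds concrete, the sketch below runs a rough back-of-envelope check.  It is purely illustrative: the 6 × parameters × tokens estimate of dense-transformer training compute is a common industry heuristic, not a methodology the Proposed Rule prescribes, and the model and cluster figures are hypothetical.

```python
# Illustrative back-of-envelope check against the Proposed Rule's two
# thresholds.  The 6 * parameters * tokens estimate of dense-transformer
# training compute is a common industry heuristic, not part of the Rule.

TRAINING_OPS_THRESHOLD = 1e26         # total operations in one training run
CLUSTER_OPS_PER_SEC_THRESHOLD = 1e20  # theoretical max operations per second
NETWORK_THRESHOLD_GBIT_S = 300        # data center networking between machines

def estimated_training_ops(parameters: float, tokens: float) -> float:
    """Rough total training operations for a dense transformer."""
    return 6.0 * parameters * tokens

# Hypothetical example: a 2-trillion-parameter model trained on 10T tokens.
ops = estimated_training_ops(parameters=2e12, tokens=10e12)
print(f"Estimated training ops: {ops:.2e}")                       # ~1.20e+26
print("Reportable training run?", ops > TRAINING_OPS_THRESHOLD)   # True

def cluster_reportable(peak_ops_per_sec: float, interconnect_gbit_s: float) -> bool:
    """Both the throughput and the networking criteria must be met."""
    return (peak_ops_per_sec > CLUSTER_OPS_PER_SEC_THRESHOLD
            and interconnect_gbit_s > NETWORK_THRESHOLD_GBIT_S)

# Hypothetical example: 50,000 accelerators at ~2.5e15 ops/sec each.
print("Reportable cluster?", cluster_reportable(5e4 * 2.5e15, 400))  # True
```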

Upon receipt of a report, BIS may send follow-up questions to the Covered US person; the responses must address, but are not limited to:

  1. any ongoing or planned activities relating to the training, developing, or producing of “dual-use foundation models,”[2] including the physical and cybersecurity protections taken to assure the integrity of the training process against sophisticated threats;
  2. the ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights; 
  3. the results of any developed dual-use foundation model's performance in relevant AI red-team testing, including a description of any associated measures the company has taken to meet safety objectives, such as mitigations to improve performance on these red-team tests and strengthen overall model security; and
  4. a catch-all encompassing “other information” relating to the safety and reliability of dual-use foundation models, or activities that present concerns to US national security.

When is a report submitted?

Reports must be submitted to BIS quarterly, and the reporting obligation continues for seven consecutive quarters following any quarter covered by a report of “applicable activities.”  
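As a purely illustrative sketch of that reporting window (the function names and date logic here are hypothetical illustrations, not drawn from the Proposed Rule’s text), the seven-quarter tail could be reasoned about as follows.

```python
from datetime import date

def quarter_of(d: date) -> tuple[int, int]:
    """Return (year, quarter) for a calendar date."""
    return d.year, (d.month - 1) // 3 + 1

def quarters_between(start: tuple[int, int], end: tuple[int, int]) -> int:
    """Whole quarters elapsed from start to end."""
    return (end[0] - start[0]) * 4 + (end[1] - start[1])

def still_reporting(activity_quarter: tuple[int, int],
                    current_quarter: tuple[int, int]) -> bool:
    """Hypothetical check: quarterly reports continue for seven consecutive
    quarters following a quarter covered by a report of applicable activities."""
    elapsed = quarters_between(activity_quarter, current_quarter)
    return 0 <= elapsed <= 7

# Hypothetical example: applicable activities reported for Q4 2024.
print(still_reporting((2024, 4), quarter_of(date(2026, 9, 1))))  # Q3 2026 -> True
print(still_reporting((2024, 4), quarter_of(date(2027, 1, 1))))  # Q1 2027 -> False
```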

Key Considerations and Takeaways

The Proposed Rule’s stated purpose is to enhance visibility and oversight of AI technology that could be used to threaten US national security, economic security, and public health or safety.  This includes ensuring that AI technologies are not misused by foreign adversaries or non-state actors.

Senior BIS officials stated that the new reporting requirements, as summarized above, would inform the federal government about emerging risks and help it develop a system to identify frontier capabilities in AI research. BIS also affirmed that the US government’s “proactive thinking” about advanced AI would help BIS assess defense-relevant capabilities at the bleeding edge of AI research and development. Commerce Secretary Gina Raimondo observed that the reporting requirements would help the United States “bolster…national defense and safeguard…national security.”

The Proposed Rule provides insight into the regulatory posture of the US government towards AI—and potentially foreshadows regulatory and policy decisions that may come next. 

  1. Reporting requirements may signal additional regulations.  The proposed mandatory reporting requirement is unlikely to have a significant direct impact on established AI industry actors. However, it could lay the foundation for upcoming related regulations that do. If the reporting regime serves as the basis for a new, analogous set of export controls developed on the back of such reports, it could expose covered AI developers to additional regulatory risk and could impact US innovation and competitiveness in the AI sector.
  2. Potential costs and reach of proposed reporting requirements. The proposed reporting requirements are not expected to create significant costs for established AI companies, significantly alter their operations, or affect their hosting services.  After all, this is not the first set of defense industrial base reporting obligations, and in past cases, analogous reporting requirements have not materially impacted company profits or location decisions.  BIS estimates that between zero and fifteen companies exceeded the proposed reporting thresholds at the time of publication, and that all of these companies are “well-resourced technology companies.” 
  3. Export controls are likely to be a piece of the AI regulatory puzzle. US export control regulators have been increasingly focused on future AI regulation and have committed resources to developing cross-agency expertise on AI, such as the Department of Justice’s recent AI convenings, the Treasury Department’s exploration of AI in the financial services sector, the Department of Defense’s AI Adoption Strategy, and the Federal Trade Commission’s “Operation AI Comply,” an enforcement initiative targeting companies that use AI to deceive consumers or engage in unfair conduct that harms them.  The Proposed Rule’s reporting requirement will likely inform future regulatory decisions and may signal additional regulatory scrutiny of AI, including export controls and related regulatory standards for AI development beyond those provided for by the reporting requirements alone. 
  4. National security reaffirmed as the policy objective for AI export controls.  Companies should expect future export controls on dual-use foundation models to reflect the Proposed Rule’s stated policy goals: (i) ensuring the defense industrial base is able to integrate dual-use foundation models; (ii) ensuring that dual-use foundation models produced by US companies are available to the defense industrial base; (iii) ensuring that dual-use foundation models operate in a safe and reliable manner; (iv) minimizing vulnerability to cyberattacks; and (v) preparing the defense industrial base for the possibility that adversaries will use their own models to develop weapons or other dangerous technologies. 

Conclusion

The US government is increasingly interested in AI and its potential consequences, and the Proposed Rule signals that BIS will likely increase export controls on advanced dual-use foundation models and computing clusters.  Companies might consider submitting comments on the Proposed Rule.  Companies might also consider preparing to identify AI models and computing clusters that would fall within the Proposed Rule’s scope and to answer regulators’ questions, as described in the Proposed Rule.  These steps can help companies prepare for the seemingly inevitable increase in AI-focused export controls and navigate the evolving regulatory landscape.
 

[1] A “Covered US person” means any US citizen, lawful US permanent resident, entity organized under the laws of the United States or any jurisdiction within the United States, or any person located in the United States.

[2] For the purposes of the Proposed Rule, a “dual-use foundation model” is an AI model that is (i) trained on broad data; (ii) generally uses self-supervision; (iii) contains at least tens of billions of parameters; and (iv) exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.  The Proposed Rule’s definition offers examples that include AI models relating to weapons of mass destruction, offensive cyber operations, and the evasion of human control or oversight by means of deception or obfuscation. 

 

Tags

sanctions and trade, ai