For companies interested in selling AI to U.S. defense and intelligence agencies, the 2026 National Defense Authorization Act (NDAA) contains a clear signal: strict supply chain and security requirements are on the way.
Congress is following a familiar playbook, refined over the past decade to purge “covered” technologies and impose strict cybersecurity standards throughout the Defense Industrial Base. Additionally, Congress has mandated that the Defense Department develop AI security standards that expand on the nascent Cybersecurity Maturity Model Certification (CMMC) program.
This presents both risk and opportunity. Experience shows that these types of requirements are bound to become high-priority issues in M&A diligence and prime targets for False Claims Act (FCA) enforcement. But there is a silver lining: the companies that anticipate and tailor their product lines to satisfy these unique security concerns will be well positioned to supply AI capabilities to U.S. defense and intelligence agencies in the years to come.
Overview: AI Provisions in the 2026 NDAA
Several provisions scattered throughout the 2026 NDAA address AI policy and practice across the defense and intelligence agencies.
Many of the 2026 NDAA’s provisions are aimed at nudging the defense and intelligence agencies to expand the adoption of AI. For example, Congress requires several pilot programs and studies focused on how AI can be used to address logistics tracking, planning, operations, and ground vehicle maintenance. And Congress requires creation of “digital sandbox environments” to support experimentation and training across the Department.
The NDAA also creates the governance bodies that will be responsible for leading defense AI strategy and policy:
- At the strategic level, a new “AI Futures Steering Committee” will be co-chaired by the Deputy Secretary of Defense and the Vice Chairman of the Joint Chiefs of Staff. The Steering Committee is directed to consider a number of frontier issues, including analyzing the forecasted trajectory of AI technologies that could enable artificial general intelligence (AGI), monitoring the threat landscape associated with the use of advanced AI by adversaries, and developing counter-strategies. Congress identifies several categories of technologies for the Steering Committee to consider relevant to enabling AGI, including: emerging AI frontier and world models, agentic algorithms, neuromorphic computing, cognitive science applications for AI development, emerging microelectronics technologies, and infrastructure requirements associated with development of emerging AI technologies.
- At the policy level, a new “Cross Functional Team,” led by the “Chief Digital and Artificial Intelligence Officer,” is tasked with developing “Department of Defense-wide guidelines for evaluating future artificial intelligence models being considered for use by the Department,” including AI performance standards, requirements for documentation of AI development, and AI security and compliance requirements.
While these new entities have significant discretion and broad mandates, Congress does not leave them to draft on an entirely blank slate. Instead, the NDAA imposes detailed requirements to shape the supply chain and security rules that will apply to AI acquired and used by the defense and intelligence agencies.
Supply Chain Restrictions
Over the last decade Congress has increasingly relied on NDAA prohibitions to purge the defense supply chain of “covered” technologies originating from certain countries and companies of concern. The most notable example of this was Section 889 of the 2019 NDAA, which banned “covered telecommunications equipment or services,” particularly those associated with Huawei.
The trend continues in the 2026 NDAA, with broad prohibitions against the use of “Covered AI” during performance of contracts with defense or intelligence agencies. The provisions primarily target AI developed by one corporate family—DeepSeek and its parent, High Flyer. Yet the full definition of “Covered AI” is multi-faceted and requires diligence into the origins and ownership structure of the AI to determine if it is developed by:
- DeepSeek, High Flyer, or any entity that is owned, funded, or supported by High Flyer
- A company that is domiciled in, or subject to unmitigated foreign ownership, control, or influence by, a “covered nation,” which is currently defined to include the Democratic People's Republic of Korea, the People's Republic of China, the Russian Federation, and the Islamic Republic of Iran
- A company that is included on the “Consolidated Screening List” maintained by the International Trade Administration of the Department of Commerce
- A company listed on the “civil-military fusion list” of entities that are identified as “Chinese Military Companies Operating in the United States”
Notably, Congress has gone beyond ensuring that AI used in performance of a defense contract is not developed by DeepSeek itself; it has also restricted AI technology that may have been “supported by,” or is 20% or more indirectly owned by, High Flyer. The prohibition also extends to “successor AI” developed by DeepSeek and High Flyer affiliates, raising potentially complex questions about how broadly it will be interpreted in practice. Companies doing business with defense or intelligence agencies (even as downstream subcontractors and suppliers) should anticipate being required to certify compliance with these provisions as a condition of contract award, at risk of FCA liability.
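The multi-factor diligence described above can be pictured as a screening checklist. The sketch below is a purely hypothetical illustration, not a legal determination: the field names, data model, and 20% threshold logic are assumptions for demonstration, loosely mirroring the four statutory categories, and any real compliance decision requires counsel's review of the actual statutory text.

```python
# Hypothetical illustration of a "Covered AI" vendor-screening checklist.
# All field names and structures are assumptions, not statutory terms of art.
from dataclasses import dataclass, field

COVERED_NATIONS = {"China", "Russia", "North Korea", "Iran"}

@dataclass
class AIVendor:
    name: str
    developers: set = field(default_factory=set)   # entities that developed the AI
    high_flyer_ownership_pct: float = 0.0          # direct + indirect ownership
    domicile: str = ""
    unmitigated_foci_nations: set = field(default_factory=set)
    on_consolidated_screening_list: bool = False
    on_chinese_military_companies_list: bool = False

def flags_for_diligence(v: AIVendor) -> list:
    """Return the categories (if any) that warrant deeper legal review."""
    flags = []
    if {"DeepSeek", "High Flyer"} & v.developers:
        flags.append("developed by DeepSeek/High Flyer")
    if v.high_flyer_ownership_pct >= 20.0:  # 20% indirect-ownership trigger noted above
        flags.append("High Flyer ownership at or above 20%")
    if v.domicile in COVERED_NATIONS or v.unmitigated_foci_nations & COVERED_NATIONS:
        flags.append("covered-nation domicile or unmitigated FOCI")
    if v.on_consolidated_screening_list:
        flags.append("on Consolidated Screening List")
    if v.on_chinese_military_companies_list:
        flags.append("on Chinese Military Companies list")
    return flags
```

The point of the sketch is that each category is an independent trigger: a vendor with no DeepSeek connection at all can still supply “Covered AI” by virtue of domicile, ownership, or a screening-list entry alone.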
Physical and Cybersecurity Standards
Perhaps the greatest practical challenge (and risk of FCA liability) for defense contractors today is rooted in the patchwork of cybersecurity standards and certifications required by mandatory defense contract clauses and the CMMC program. Throughout 2025, the Department of Justice’s Civil Cyber-Fraud Initiative maintained a consistent drumbeat of settlement agreements targeting companies that accepted these contract requirements without investing the time and resources to fully understand and comply. At the same time, for those companies that took early action and invested in developing compliant systems, the unique security requirements associated with the U.S. defense market have become a valuable competitive edge.
The 2026 NDAA foreshadows that a similar story will play out in the market for AI capabilities in the defense sector. Congress has directed the Secretary of Defense to develop “a framework for the implementation of cybersecurity and physical security standards and best practices relating to covered artificial intelligence and machine learning technologies to mitigate risks to the Department of Defense from the use of such technologies.” Among other issues, this security framework must address risks relating to:
- “the supply chains of such systems, including counterfeit parts or data poisoning risks”
- “adversarial tampering with artificial intelligence systems”
- “unintended exposure or theft of artificial intelligence systems or data”
- “security posture management practices, including governance of security measures, continuous monitoring and incident reporting procedures”
Congress directs that this new AI security framework “be implemented as an extension or augmentation of existing cybersecurity frameworks developed by the Department of Defense, including the [CMMC] framework.”
Taken together, Congress seems to be endorsing the CMMC program as the natural platform for forthcoming AI security requirements throughout the Defense Industrial Base. Companies doing business with the Pentagon should anticipate that the CMMC program is not likely to be abandoned and, in fact, will be extended to AI. As with the current CMMC program, the new AI security requirements are likely to extend deep into the supply chain, regardless of a company’s size or status as a “non-traditional” contractor.
* * *
The supply chain restrictions and security requirements described in the 2026 NDAA are not check-the-box compliance certifications that can be addressed with paper policies and training sessions. As these rules are further refined and implemented, companies that are locked in to non-compliant vendors and business practices will find themselves locked out of the Defense Industrial Base.
On the other hand, companies that design their products and business models around the forthcoming requirements will find themselves well positioned to capitalize on the growing demand for AI and other emerging technologies in U.S. defense and intelligence agencies.
Successfully navigating these issues requires deep, practical experience at the intersection of AI, national security, and defense contracting.
