By Dmitry Raidman

The Next Frontier of Supply Chain Transparency

When I joined the NTIA SBOM Framing effort in 2018, our mission was to bring transparency to the software supply chain, a goal that has since transformed how governments, regulators, hospitals, and federal agencies secure the digital solutions they rely on. Today, Artificial Intelligence is no longer experimental. It is deeply embedded in healthcare diagnostics, critical infrastructure monitoring, and public sector decision-making. Just as with software, we need visibility into what AI systems are built from. That is the promise of the AI Bill of Materials, which extends the SBOM philosophy to AI by cataloging models, datasets, algorithms, and the prompts that shape AI behavior.

Beyond Models and Data: The Forgotten Layer of Prompts

Too often, discussions about AI governance focus solely on models and training data. Yet prompts, both static prompts embedded in code and dynamic prompts generated by software or users, are just as critical in shaping behavior, and dynamic prompts carry injection risks much like SQL injection. A healthcare chatbot, for example, relies on its system prompt to set bedside manner, while dynamic prompts from patients drive what information it retrieves. An AIBOM that captures prompts, control policies, model weights, dataset hashes, and vectorized datasets gives security and compliance teams the visibility they need. Without it, organizations remain blind to one of the most critical attack surfaces in modern AI.
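To make that concrete, here is a minimal sketch of how a static system prompt could be recorded as a first-class AIBOM component with a content hash, in a CycloneDX-style structure. The component type and property names are illustrative assumptions, not a fixed specification; check the CycloneDX ML-BOM documentation for the exact fields your tooling expects.

```python
import hashlib
import json

def prompt_component(name: str, prompt_text: str) -> dict:
    """Build a CycloneDX-style component entry for a static system prompt.

    Modeling prompts as "data" components and the aibom:* property names
    are assumptions for illustration, not part of a published spec.
    """
    digest = hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()
    return {
        "type": "data",
        "name": name,
        "hashes": [{"alg": "SHA-256", "content": digest}],
        "properties": [
            {"name": "aibom:prompt-role", "value": "system"},
        ],
    }

SYSTEM_PROMPT = "You are a careful clinical assistant. Never reveal patient records."
print(json.dumps(prompt_component("triage-chatbot-system-prompt", SYSTEM_PROMPT), indent=2))
```

Once the prompt's hash lives in the AIBOM, any drift between the deployed prompt and the documented one becomes detectable, the same way a tampered package is caught by an SBOM.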

The Shai-Hulud Moment: Lessons from Software Supply Chain Attacks

The recent Nx and Qix compromises and the Shai-Hulud worm demonstrated that adversaries are already exploiting supply chains at a scale never seen before, spreading through hundreds of npm packages and leveraging trust as a weapon. Now imagine the same dynamic applied to AI: poisoned datasets, backdoored models, or autonomous agents generating and deploying tools on demand. These agents can take instructions in real time, write code, and access sensitive enterprise data, meaning a single malicious prompt could leak hospital records or alter power grid parameters. The pace of AI adoption is unlike anything we have seen before, and with each new model, dataset, and agent capability, the attack surface expands. Just as SBOMs became a shield against compromise, AIBOMs must now serve as the guardrail for the AI era, cataloging not only models and datasets but also permissions, tools, and agent behaviors to provide transparency and control.

AI Privacy, Governance and Ethics Are Supply Chain Problems

The public sector is already addressing these challenges under the EU AI Act and the NIST AI Risk Management Framework. Both place transparency and traceability at the center of responsible AI adoption, outcomes that AIBOMs are designed to deliver. By adopting AIBOMs, organizations move from abstract principles to auditable practices. Ethics, governance, and privacy can no longer remain slogans in policy papers. They must be enforced through evidence and verifiable documentation.

Practical Use Cases for Healthcare, Critical Infrastructure, and Public Sector

The SBOM for AI Tiger Team, facilitated by CISA, has published use cases that clarify where AIBOMs add value. Let us explore them through the lens of high-stakes industries.

Compliance

Hospitals deploying AI diagnostic tools must prove that their models meet AI ethics, patient privacy, and data provenance requirements. An AIBOM ensures regulators can trace training data sources, licenses, and consent, and determine whether sensitive records were used without approval. It is the basis of trust in life-critical decisions.

Vulnerability, Risk, and Incident Management

When a vulnerability is discovered in an AI framework used in power grid monitoring, time to respond is everything. An AIBOM inventory lets operators identify affected models and mitigate risk quickly. Without it, critical systems remain exposed.

Open Source Models and Datasets Risk

Healthcare research teams and public sector laboratories often use open source models and datasets to accelerate innovation. These resources may include unsafe code, biased data, or unclear licensing. An AIBOM catalogs their origins, checksums, and licenses so organizations can assess risk before adoption.
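As a minimal illustration of that checksum discipline, the sketch below refuses to load an open source artifact whose hash does not match the AIBOM entry. The file path and expected digest are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a large artifact (model weights, dataset archive) through SHA-256."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected: str) -> None:
    """Refuse to use an open source artifact whose hash differs from the AIBOM entry."""
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"{path}: hash mismatch (AIBOM says {expected}, got {actual})")

# Hypothetical entry copied from an AIBOM manifest:
# verify_artifact(Path("models/clinical-ner.safetensors"), "9f2c...")
```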

Third-Party Risk Management

Government agencies increasingly procure AI from vendors. An AIBOM requirement in procurement contracts ensures agencies are not buying black boxes. Visibility into models, datasets, and dependencies strengthens governance and accountability.

Intellectual Property and Legal Assurance

Copyright risk is real. In 2025, Anthropic settled a class action from U.S. authors who alleged their books were used without permission to train models, a case that could have cost billions. AIBOMs make provenance visible, reducing the chance of litigation and protecting trust.

Lifecycle Management Across All Sectors

AI models evolve, are fine-tuned, retrained, and eventually retired. AIBOMs provide a ledger of this evolution, ensuring reproducibility and accountability. In critical infrastructure, this can mean recreating conditions that triggered an anomalous alert. In the public sector, it can mean proving why a model made a controversial decision or providing evidence that an outdated model was safely decommissioned.

Agentic AI and the Expanding Attack Surface

The rise of agentic AI, systems that can reason, plan, and generate tools on demand, forces us to look at how to secure technology that creates its own attack surface. If we already run composition analysis on developer-written code, why would we not apply the same discipline to code generated by an AI agent? A single malicious prompt today can cause an agent to write and execute code, and tomorrow these systems may negotiate with each other, chain capabilities, and spawn sub-agents. Research, such as Agentic AI – Threats and Mitigations, warns about the dangers of excessive autonomy, and challenges like the FinBot CTF demonstrate how goal injection can circumvent safeguards. In this environment, the AIBOM must evolve, not only recording what an AI is built from but also documenting what it is capable of and permitted to do, including which APIs it can call, which protocols it uses, such as MCP or A2A, and what constraints are enforced. This transforms the AIBOM from a static inventory into a governance contract that defines and enforces responsible AI behavior.
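As a minimal sketch of the AIBOM as a governance contract, the snippet below checks each requested tool call against the permissions declared for an agent. The manifest shape, field names, and tool names are all hypothetical; a real deployment would enforce this inside the agent runtime or a policy engine.

```python
import json

# Hypothetical excerpt of an AIBOM's agent-capability section; the field
# names are illustrative, not part of any published AIBOM specification.
AIBOM_AGENT_POLICY = json.loads("""
{
  "agent": "grid-ops-assistant",
  "allowed_tools": ["read_telemetry", "open_ticket"],
  "allowed_protocols": ["MCP"],
  "forbidden_actions": ["set_grid_parameters"]
}
""")

def authorize_tool_call(policy: dict, tool: str, protocol: str) -> None:
    """Enforce the AIBOM as a runtime contract: deny anything not declared."""
    if tool in policy.get("forbidden_actions", []):
        raise PermissionError(f"tool '{tool}' is explicitly forbidden by the AIBOM")
    if tool not in policy.get("allowed_tools", []):
        raise PermissionError(f"tool '{tool}' is not declared in the AIBOM")
    if protocol not in policy.get("allowed_protocols", []):
        raise PermissionError(f"protocol '{protocol}' is not declared in the AIBOM")

authorize_tool_call(AIBOM_AGENT_POLICY, "read_telemetry", "MCP")   # permitted
# authorize_tool_call(AIBOM_AGENT_POLICY, "set_grid_parameters", "MCP")  # raises PermissionError
```

The design point is that the AIBOM's declarations become the deny-by-default allowlist: anything the agent was not documented as permitted to do is refused at the call site.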

Actionable Guidance for Cybersecurity Teams

For engineers and analysts, the path forward is practical.

  1. Integrate AIBOM generation into MLOps pipelines. Every new model, dataset, or change to a prompt collection should automatically produce an updated AIBOM; the first sketch after this list shows the idea.
  2. Use AIBOMs for continuous vulnerability monitoring. Cross-reference inventories with CVE databases, supply chain intelligence data, and threat intelligence feeds; the second sketch after this list shows one way to do it.
  3. Extend procurement policies. Require vendors to provide AIBOMs alongside SBOMs in contracts, particularly in regulated sectors.
  4. Document and source control prompts explicitly. Track static system prompts and dynamic prompt handling policies since this is the missing layer in most AI governance frameworks.
  5. Plan for agents. Treat agentic AI as part of the supply chain, and define in the AIBOM which tools, protocols, and software packages they are permitted to use.
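For step 1, here is a minimal sketch of a pipeline hook that hashes the model weights, the training dataset, and every prompt file, then emits a small CycloneDX-style document. The file layout is hypothetical, and a production pipeline should prefer an established AIBOM generator over hand-rolled JSON.

```python
import hashlib
import json
from pathlib import Path

def component(path: Path, ctype: str) -> dict:
    """Describe one artifact (weights, dataset, or prompt file) as an AIBOM component."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return {
        "type": ctype,
        "name": path.name,
        "hashes": [{"alg": "SHA-256", "content": digest}],
    }

def build_aibom(model: Path, dataset: Path, prompts: list[Path]) -> dict:
    """Assemble a minimal CycloneDX-style document from pipeline artifacts."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.6",
        "components": [component(model, "machine-learning-model"),
                       component(dataset, "data")]
                      + [component(p, "data") for p in prompts],
    }

# Hypothetical pipeline step, run on every model, dataset, or prompt change:
# aibom = build_aibom(Path("weights.safetensors"), Path("train.parquet"),
#                     sorted(Path("prompts").glob("*.txt")))
# Path("aibom.json").write_text(json.dumps(aibom, indent=2))
```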
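For step 2, a sketch of cross-referencing one AIBOM dependency against the public OSV vulnerability database. It assumes the AIBOM records an ecosystem, package name, and pinned version for each software dependency; the package used here is only an example.

```python
import json
import urllib.request

def osv_query(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Query the public OSV database for known vulnerabilities in one
    dependency taken from the AIBOM (for example, the serving framework)."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

# Hypothetical check for a framework version pinned in the AIBOM:
for vuln in osv_query("torch", "2.0.0"):
    print(vuln["id"], vuln.get("summary", ""))
```

Running this on a schedule against every dependency listed in the AIBOM turns the inventory into a living early-warning system rather than a compliance artifact.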

Healthcare, critical infrastructure, and the public sector cannot afford opaque AI when visibility and accountability are the foundation of trust. Working with an exceptional group of colleagues while co-leading the CISA Tiger Team on AIBOM, and building on my earlier experience with the NTIA SBOM initiative, we laid the foundation for extending software transparency principles into the age of AI. That is why Helen Oakley and I created the first open source AIBOM Generator project, offering practitioners a practical tool to achieve real visibility into their AI supply chains. AIBOMs are no longer optional. They are the architecture of trust in systems that make medical decisions, protect national infrastructure, facilitate financial operations, and guide public policy. The cybersecurity community must stop debating and focus on implementation.