– By Allie Howie.
It was predicted that 2025 would be the year of AI Agents, and I believe weʼve seen that come to fruition. As we near the end of 2025, I canʼt help but reflect on how the state of AI Security has changed along the way. At the start of the year, agents hadnʼt notably made their way into our day-to-day workflows; now they are not only integrated into those workflows but heavily depended upon. As an example, I have personally enjoyed using coding agents and canʼt imagine going back to coding without them. Coding agents have gotten dramatically better as the year has gone on, allowing developers like me to 10x our productivity.
Another change Iʼve noticed: at the beginning of the year there seemed to be a lack of frameworks for AI Security, but now it almost seems like there are too many.
As we race to secure AI, govern it, and assess its trustworthiness, we have generated a sea of noise with no clear leading framework serving as the official stamp of approval for Trustworthy AI. In this environment, customers appear to care less about whether the company they are buying from is compliant with a certain framework and more about whether they can trust the AI they are purchasing.
Donʼt Get Lost in the Noise: A Bias for Action
Recently I had Steven Vandenburg, AI Security Architect at Cotiviti, on the Insecure Agents Podcast to talk about AI Governance and Security. Steven shared how his team chose to pursue multiple frameworks at Cotiviti and how that was key to building their AI Security program: they did HITRUST with the AI Risk Management Assessment and supplemented it with Pillar Securityʼs SAIL framework.
Steven talked about the importance of not getting lost in the noise and not taking too long to evaluate and compare frameworks. As agents become more powerful and a bigger part of our everyday work, itʼs clear Sequoia didnʼt get it wrong when they said the total addressable market for AI Agents is three orders of magnitude larger than that of traditional SaaS. Customers are eager to pay for agents that help them do more work in less time. However, they are also eager to understand whether the agents they are buying are a security liability.
In this environment itʼs beneficial to have a bias for action: decide which frameworks make sense and add supplemental controls as needed. This allows teams selling AI Agents to go to market quickly and confidently.
Popular AI Framework Choices Today
I want to be clear that none of these are inherently bad. In fact they are likely worthwhile for your business. However, like Stevenʼs team, I think youʼll find that youʼll need to combine controls across several of these to create an AI Security program that generates trust.
ISO 42001
If you asked which AI Security framework is the leading standard today, most people would say ISO 42001. ISO 42001 is the first international standard focusing on the governance of AI management systems (AIMS). According to the International Organization for Standardization (ISO), an AI management system is a set of interrelated or interacting elements of an organization intended to establish policies and objectives, as well as processes to achieve those objectives, in relation to the responsible development, provision, or use of AI systems.
This framework has gathered significant momentum since its debut in December 2023. Some of the first US companies to get certified were AWS in late 2024 and Vanta in early 2025. One reason for its success is that it is one of the few frameworks that is a certifiable international standard rather than a self-attestation. A self-attestation means there is no third party coming in to assess your compliance; you are assessing it yourself. Another reason it has gained momentum is that many companies already do ISO 27001, and a good amount of the work that goes into that framework can be extended to complete ISO 42001.
While ISO 27001 can make completing ISO 42001 easier, ISO 27001 is not a prerequisite for ISO 42001.
Some of the drawbacks of ISO 42001 are that it takes several months (around 4-9) to get certified and that the required external audits can cost upwards of $20,000. This makes it less favorable for small startups and more approachable for larger companies with more resources. ISO 42001 begins with a risk assessment covering AI-specific risks and an AI impact assessment. The results of these determine the design of the AIMS in stage 1 and the management of AI risks in stage 2. The work during these stages includes creating several policies, plans, or procedures for access to and management of the AIMS, and documenting the data used by the AI system.
SAIL
SAIL is a new framework created this year by Pillar Security in collaboration with security leaders and practitioners (as a disclaimer, I was one of the collaborators). SAIL, like Pillarʼs AI lifecycle security platform, focuses on evaluating the threats present at each stage of the AI lifecycle: plan, code, build, test, deploy, operate, and monitor. Multiple threats are listed for each stage, with an example of how each might be exploited and suggestions for mitigating it. Compared to other frameworks, SAIL focuses more on AI Security and actionable guidance for implementing technical controls, and less on documentation.
There is no associated governing body, and compliance is done by self-attestation.
NIST AI RMF
The NIST AI RMF was created by NIST in 2023 and focuses on four main functions: Govern, Map, Measure, and Manage. The framework is a self-attestation and takes less time to complete than ISO 42001 or HITRUST. NIST AI RMF focuses heavily on documentation and AI-specific risk assessments. How well you perform this risk assessment heavily influences the value you will get out of the framework: if you donʼt have the AI Security experience needed to find the threats you should be most worried about, you wonʼt be able to manage them appropriately. The framework also has a good amount of controls dedicated to bias, which may be more or less applicable depending on your AI use case.
HITRUST with AI Risk Management Assessment or AI Security Assessment and Certification
HITRUST combines controls from NIST 800-53, ISO 27001, HIPAA, FedRAMP, and PCI and is typically pursued by companies in heavily regulated industries, such as health tech. While HITRUST has been around for many years, its AI assessments are new. There are two: the AI Risk Management Assessment, which was inspired by NIST AI RMF and is a self-attestation, and the AI Security Assessment and Certification, which is more in depth and includes third-party validation and a centralized quality review. While the latter offers 44 controls that can be tailored to specific use case scenarios, these controls donʼt necessarily cover the threats present at different stages of the AI lifecycle. Thatʼs why some teams may want to supplement HITRUST with SAIL, like Stevenʼs team did.
This is by no means an exhaustive list and is merely here to provide some common examples of AI Security frameworks. There are many other frameworks that have been created by companies, governments, and open source and community initiatives.
Compliance Should Be an Outcome, Not a Catalyst
While the frameworks we went over above are arguably good choices for demonstrating trustworthy AI, others are not. Take SOC 2, for example. Many companies pursue SOC 2 compliance because their customers require it, or simply because itʼs an industry standard for B2B companies in the US.
While implementing SOC 2 controls does create some material security value, it provides very little help in demonstrating that your AI is trustworthy. If the frameworks you choose are the catalyst for which security controls you implement, you might still be struggling to convey trust in deal cycles after becoming compliant. For example, Iʼve seen AI startups become SOC 2 compliant hoping to skip lengthy security reviews in deal cycles, only to find out thatʼs not the case because the controls in place donʼt help prove theyʼve built trustworthy AI.
While this scenario may be more obviously common with SOC 2, it can also happen with an AI-specific framework. Take ISO 42001, for example. You could be compliant with that framework without having any technical controls for context-aware agent auth in place. In my mind, the pillars of trustworthy AI are security, safety, reliability, privacy, and transparency, and context-aware authentication and authorization contribute significantly to security and data privacy. Without technical controls in place that build a concrete narrative around your AI Security posture, you might find you donʼt generate enough trust to skip the AI security questionnaires. A sketch of what such a control could look like follows below.
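To make that concrete, here is a minimal sketch of a context-aware authorization check gating an agent tool call. Everything in it is an illustrative assumption: the AgentContext fields, the policy table, and the tool and task names are hypothetical, not any particular productʼs API.

```python
from dataclasses import dataclass

@dataclass
class AgentContext:
    user_id: str           # the human the agent is acting on behalf of
    agent_id: str          # which agent is making the call
    task: str              # the task the agent was delegated
    data_sensitivity: str  # classification of the data the tool would touch

# Hypothetical policy: which delegated tasks may call which tools,
# and at what data sensitivity levels.
POLICY: dict[tuple[str, str], set[str]] = {
    ("summarize_claims", "read_records"): {"public", "internal"},
    ("export_report", "export_records"): {"public"},
}

def authorize(ctx: AgentContext, tool: str) -> bool:
    """Allow the tool call only if this task/tool pair permits this data sensitivity."""
    allowed = POLICY.get((ctx.task, tool), set())
    return ctx.data_sensitivity in allowed

# The agent was delegated an export task but is touching internal data,
# so the call is denied rather than inherited from the user's broad access.
ctx = AgentContext("u-123", "report-bot", "export_report", "internal")
if not authorize(ctx, "export_records"):
    print("denied: agent is not authorized for this data in this context")
```

The point of the design is that the decision keys off the delegated task and the dataʼs sensitivity, not just the identity of the user the agent inherits.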
Customers Care if Youʼre Trustworthy
While some customers may have hard-and-fast requirements that only permit them to buy from companies that demonstrate compliance with specific frameworks, what Iʼm largely seeing through my work is that the majority care less about which frameworks you are compliant with and more about whether the AI youʼve built is trustworthy.
My advice is to start with an architecture review of your AI application and do some threat modeling across three stages of the AI lifecycle: build, test, and deploy. In the build stage you might implement model scanning to prevent serialization attacks (a sketch of one such control follows below). In the test stage you might employ AI red teaming. In the deploy stage you might install LLM guardrails. Add technical controls like these across the three stages where they make sense for your business. Then map what youʼve got to the frameworks you know you have to pursue, plus those that seem like a good fit for what is already in place. After that, fill in any gaps that exist across the frameworks youʼve chosen to reach a robust and effective AI Security posture.
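As an illustration of a build-stage control, below is a minimal sketch of a model scanner using only Pythonʼs standard library. It flags pickle opcodes that can execute arbitrary code when a serialized model is loaded, which is the mechanism behind most model serialization attacks. The file name and the opcode denylist are assumptions for illustration; dedicated model-scanning tools go much further.

```python
import pickletools

# Opcodes that can import or invoke arbitrary callables during unpickling.
# This denylist is illustrative; real scanners track many more patterns.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_pickle(path: str) -> list[str]:
    """Return findings for opcodes in a pickled file that can run code on load."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS_OPCODES:
                findings.append(f"{opcode.name} at byte offset {pos}: {arg!r}")
    return findings

if __name__ == "__main__":
    # "model.pkl" is a hypothetical artifact name; in practice this check
    # would run in CI before a model is promoted to your registry.
    for finding in scan_pickle("model.pkl"):
        print("SUSPICIOUS:", finding)
```

A clean scan is not proof of safety, but a hit on any of these opcodes is a strong signal that a model artifact deserves scrutiny before it ships.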
Lastly, remember that AI Security programs are not one-size-fits-all, so take the pressure off yourself to do it a certain way. Thereʼs a lot of noise out there about how to build a program and which frameworks to choose. You may find that mixing and matching controls from several frameworks is right for you.
Overall, customers donʼt care whether youʼre compliant with a certain framework; they care whether your AI is trustworthy in a world where untrustworthy AI is the default.
