
By Zinet Kemal

In 2024, IBM reported that 13% of organizations had already experienced breaches involving AI models or applications, and that 97% of those breached lacked proper AI access controls. This data underscores what many of us are witnessing firsthand: AI is no longer an experimental tool; it’s embedded in daily enterprise workflows. Large language models (LLMs) are generating code snippets, generative AI tools are drafting reports and, unfortunately, phishing emails, and machine learning models analyze network traffic at scale.

And now, agentic AI systems, capable of reasoning, planning, and acting with minimal human input, are beginning to emerge, first in experimental customer service bots and in early use cases such as automated incident response.

Each of these capabilities creates real advantages such as faster detection, more efficient workflows, and new ways to serve customers. But they also expand the attack surface. A generative model that can write code can also be manipulated to introduce insecure logic. An agentic AI that takes autonomous actions could escalate privileges or misclassify a threat if left unchecked.

That’s why governance matters. The same tools that drive productivity also create new risks if deployed without oversight.

Governance vs. security: two sides of the same coin

Too often, the conversation about AI in cybersecurity stops at “how do we secure AI?” Security is critical, but it’s not the whole picture. AI security is about protecting the models, pipelines, and data from manipulation. AI governance is about accountability, setting policies, meeting compliance requirements, and ensuring AI is used ethically and responsibly.

Without governance, security has no framework; there’s no structure for applying protections consistently. Without security, governance is just words on paper; policies exist, but there’s no way to enforce them. True resilience comes when governance and security intersect, where protections are enforced within a framework of accountability.

You might think this is all theoretical. What troubles me personally, as an example, is how AI is already being used in warfare, sometimes to target civilians and innocent people. That reality scares me. The International Committee of the Red Cross (ICRC) has warned that military AI systems, particularly in targeting, risk worsening civilian harm when oversight and accountability are lacking. It’s a reminder that without accountability, AI can be weaponized in ways that undermine our humanity. If we don’t hold AI users and creators accountable, we risk allowing technology to become a tool for harm rather than progress. To me, governance is about ensuring AI is used to benefit human good, never to take it away.

Embedding governance into the day-to-day

Governance can sound abstract, but for practitioners it shows up in very concrete ways. Take data quality: a phishing detection model underperformed for months simply because its training set included duplicate and mislabeled emails. The system itself wasn’t broken; that was a governance failure, one that routine data audits could have prevented.
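Even a lightweight audit can catch problems like these. Below is a minimal sketch in Python, assuming a labeled email dataset with illustrative “text” and “label” columns; a real audit would also check class balance, provenance, and label freshness.

```python
import pandas as pd

def audit_training_set(df: pd.DataFrame) -> None:
    # Flag exact duplicate emails, which inflate apparent accuracy
    # and skew what the model learns.
    dupes = df[df.duplicated(subset="text", keep=False)]
    print(f"{len(dupes)} rows share identical text with another row")

    # Flag conflicting labels: the same email text labeled both
    # "phish" and "benign" is a strong signal of mislabeling.
    conflicts = df.groupby("text")["label"].nunique()
    conflicts = conflicts[conflicts > 1]
    print(f"{len(conflicts)} distinct emails carry contradictory labels")

if __name__ == "__main__":
    sample = pd.DataFrame({
        "text": ["verify your account", "verify your account", "team lunch"],
        "label": ["phish", "benign", "benign"],
    })
    audit_training_set(sample)
```

Run on a schedule against the live training set, even a check this simple turns “data quality” from an abstract policy into an enforceable control.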

Supply chain security is another area where governance matters. Teams often pull open-source ML libraries into production without a thorough review, only to later discover that a critical vulnerability went unpatched. Understanding and managing what enters your environment is a core governance responsibility.
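Governance here can start with something as basic as knowing exactly which libraries and versions are in the environment. The sketch below builds that inventory with Python’s standard library; in practice, the output would be fed into a vulnerability scanner such as pip-audit and reviewed before anything ships.

```python
from importlib import metadata

def inventory_environment() -> dict[str, str]:
    # Map each installed distribution to its exact version so the list
    # can be checked against vulnerability advisories.
    return {
        dist.metadata["Name"]: dist.version
        for dist in metadata.distributions()
        if dist.metadata["Name"]  # skip broken installs with no name
    }

if __name__ == "__main__":
    for name, version in sorted(inventory_environment().items()):
        print(f"{name}=={version}")
```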

Then there are the new threats AI introduces. Adversarial tactics like data poisoning and model inversion exploit how AI learns, which makes oversight even more important. Strong governance also needs to be built directly into the pipeline: some DevSecOps teams now run automated integrity checks, audit logging, and access control validation in their CI/CD process. This is governance in action, not something added on after deployment.
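As one concrete illustration of such a pipeline check, the sketch below verifies a model artifact’s checksum before deployment; failing the check fails the build. The expected digest would come from a trusted model registry, and the file path and function names here are illustrative assumptions rather than any specific CI system’s API.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    # Stream the file so large model artifacts don't exhaust memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_sha256: str) -> None:
    # Compare against the digest recorded when the model was approved.
    actual = sha256_of(path)
    if actual != expected_sha256:
        # A mismatch means the artifact was tampered with or swapped;
        # exiting non-zero blocks the deployment step.
        raise SystemExit(f"Integrity check failed for {path}: got {actual}")
    print(f"Integrity check passed for {path}")
```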

From my own experience shadowing an enterprise AI governance team, I saw how governance comes alive when multiple disciplines collaborate. Security experts focused on adversarial attacks and model drift, legal teams ensured data privacy and regulatory compliance, and risk managers pushed for accountability and impact assessments. Each AI use case was reviewed from several angles before approval. It was a clear reminder that governance isn’t just policy; it’s cross-functional oversight embedded from day one. Organizations that wait to establish these practices often find themselves scrambling once risks emerge.

Leadership sets the tone

Governance isn’t just a practitioner’s task; leaders shape the culture around it. Regulations like the EU AI Act, and frameworks like NIST’s AI Risk Management Framework, are already signaling what “responsible AI” means. Leaders who wait until compliance deadlines hit are already behind.

Accountability has to be clear. Too often, AI risk is treated as “everyone’s problem,” which usually means it’s no one’s problem. Defining ownership matters. So does transparency: if your AI makes decisions that no one can explain, trust erodes fast.

The urgency is backed by data. EY’s 2024 study found that while 72% of executives say AI is embedded across their initiatives, only about one-third have strong governance protocols in place. The message is clear: leaders can’t afford to let adoption outpace oversight.

A shared responsibility

AI governance isn’t only the responsibility of CISOs or compliance officers. It requires collaboration across legal, ethics, business, and technical teams. Cybersecurity professionals contribute a unique lens, spotting risks others might miss, and when that expertise is paired with governance, enterprises can innovate with confidence rather than fear. We’ve seen this play out before: a decade ago, cloud governance felt optional, until breaches forced it to become a baseline business requirement. AI is on the same trajectory, but accelerating even faster.

The best way to begin is to start small. Inventory your AI use cases, map the associated risks, and align them with frameworks such as the NIST AI RMF. From there, scale governance until it’s fully embedded in enterprise-wide risk management. Importantly, organizations should form cross-functional AI governance teams, bringing together security, legal, risk, and business leaders to review AI use cases and ensure responsible adoption from day one.
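That inventory doesn’t require special tooling to start. Here’s a minimal sketch of what one record might look like; the fields and the NIST AI RMF function tags (Govern, Map, Measure, Manage) are illustrative assumptions, not an official schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    owner: str             # accountable team or role
    data_sensitivity: str  # e.g., "public", "internal", "regulated"
    autonomy_level: str    # e.g., "assistive", "agentic"
    risks: list[str] = field(default_factory=list)
    rmf_functions: list[str] = field(default_factory=list)  # Govern/Map/Measure/Manage

# A first entry; real inventories grow one reviewed use case at a time.
inventory = [
    AIUseCase(
        name="phishing-detection-model",
        owner="security-engineering",
        data_sensitivity="internal",
        autonomy_level="assistive",
        risks=["data poisoning", "label drift"],
        rmf_functions=["Map", "Measure"],
    ),
]

for uc in inventory:
    print(f"{uc.name}: owner={uc.owner}; risks={', '.join(uc.risks)}")
```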

Final thought

According to ModelOp’s 2024 survey of 200 enterprise organizations, 81% are already using AI in production, but only 15% report having effective governance in place. That gap is a real risk. You don’t have to be an AI expert to take the lead, but you do need AI literacy. As cybersecurity professionals and leaders, understanding how AI works, and how to govern it, is what keeps it secure, responsible, and trusted. Just as cloud governance evolved from “nice to have” to non-negotiable, AI governance is following the same path, but at a faster pace. The organizations that wait will be scrambling when risks catch up. The time to start building that muscle is now.