By Victor Khaemba
In financial services, cyber defense has always depended on people making hard calls under pressure. We now want those calls to happen faster, with fewer mistakes, and with cleaner audit trails. So we are layering machine learning on top of the security stack, then assuming the people running it will stay calm, curious, and consistent.
That assumption is becoming a risk in its own right.
What Is Human-State Risk?
I call it human-state risk. It is the danger that the internal state of the humans who deploy, tune, and oversee AI-enabled cyber defense becomes a driver of vulnerability.
| THE HUMAN-STATE RISK FACTORS
Stress, fatigue, cognitive overload, and low psychological safety do not just reduce performance. They reshape judgment. They change:
• What gets noticed
• What gets escalated
• What gets automated without enough challenge |
Why This Matters in Finance
This matters in finance because the sector is primed for cascading effects. Similar vendor stacks appear across banks, insurers, and payment firms. When an AI tool is deployed widely, a blind spot turns into shared exposure. Add exhausted teams, and you have a multiplier.
| “Most governance discussions about AI security tools focus on model accuracy, drift, and robustness. Those are essential, yet they assume a stable human system around the model. In reality, the human system is often the least stable part.” |
The Cognitive Load Problem
Under sustained pressure, people simplify, defer, accept defaults, and avoid friction. They do not stop caring, but their brains start conserving energy.
AI-enabled defense can intensify that pattern. Analysts are no longer only hunting and validating. They are supervising automation, interpreting probabilistic outputs, and deciding when to trust a system that cannot explain itself.
| THE PATH OF LEAST RESISTANCE
When a tool speaks with confidence, the easiest move is acceptance. When a tool is opaque, the easiest move is to route around it. Oversight becomes fragile. |
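To make that concrete, consider how deference gets encoded in the pipeline itself. The sketch below is illustrative, not any vendor's real product: the alert structure, the 0.90 threshold, and the routing labels are all assumptions. What it shows is how a confidence gate quietly turns “the model sounds sure” into “no human looks at this.”

```python
from dataclasses import dataclass

# Illustrative threshold: above this confidence, an alert is
# auto-dispositioned with no human review at all.
AUTO_ACCEPT_THRESHOLD = 0.90  # hypothetical value, not a recommendation

@dataclass
class Alert:
    alert_id: str
    model_verdict: str       # e.g. "benign" or "malicious"
    model_confidence: float  # probabilistic output in [0.0, 1.0]

def triage(alert: Alert) -> str:
    """Route an alert: confident outputs skip the human entirely."""
    if alert.model_confidence >= AUTO_ACCEPT_THRESHOLD:
        # The path of least resistance: confidence becomes a bypass.
        return f"auto-{alert.model_verdict}"
    # Everything else lands on the analyst queue.
    return "human-review"

print(triage(Alert("a-1", "benign", 0.97)))  # auto-benign: no one looks
print(triage(Alert("a-2", "benign", 0.62)))  # human-review
```

Nothing in that logic is malicious. It is simply tuned for throughput, and every notch the threshold drops under alert pressure widens the set of decisions no one reviews.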
The Promise vs. The Reality
Then there is the promise of relief. Reduce noise. Catch what humans miss. Let the team focus on what matters. Sometimes that happens. Often, the relief is partial.
Someone still tunes thresholds, manages false positives, and explains automated decisions to auditors. A new tool can become a new source of operational load, falling on the same overstretched team it was meant to help.
The Incentive Cascade
This is where misaligned incentives take over. The chain is predictable:
| THE MISALIGNMENT CASCADE
Boards press executives for faster digital transformation →
Executives press CISOs for rapid capability deployment →
CISOs press procurement for speed and cost efficiency →
Procurement presses vendors for quick wins →
Vendors optimize for detection volume because those metrics close deals →
Analysts inherit systems built to impress buyers rather than support operators →
When the architecture fails, customers absorb the fraud, the frozen accounts, and the long recovery |
| “Each link acts rationally within its own constraints. The cumulative effect is security optimized for appearance rather than resilience.”
– Victor Khaemba |
What Incentives Teach
If leadership celebrates rapid adoption, rapid adoption is what you get. If procurement is measured on speed and cost, speed and cost will win. If vendors are rewarded for detection volume, detection volume will grow, even when triage capacity does not.
| THE DEFERENCE TRAP
If analysts are punished for missing a signal but never rewarded for challenging a model, deference becomes the safe move. Incentives teach the organization what to ignore, and the organization becomes easier to surprise. |
Governance leaders should treat incentive design as part of cyber architecture. You do not need to read a technical paper to ask the right question: What behaviors are we rewarding, and do those behaviors reduce human-state risk or intensify it?
A Human-State Risk Lens
| 1. Start in Procurement
Contracts for AI security tools should reflect human outcomes, not just technical ones. Does the tool reduce alert burden during normal operations and incidents? How does alert volume change after updates? If a system forces humans to fight it, that is hidden labor that will surface at the worst moment. |
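One way to give those questions teeth is to write alert burden into the contract as something measurable, such as alerts per analyst-hour tracked across model updates. A minimal sketch, with hypothetical figures:

```python
def alert_burden(alert_count: int, analyst_hours: float) -> float:
    """Alerts per analyst-hour: a crude but auditable load metric."""
    return alert_count / analyst_hours

def burden_change(before: float, after: float) -> float:
    """Relative change across a model update. A positive number means
    the update pushed more triage work onto the same team."""
    return (after - before) / before

# Hypothetical figures: same team, one model update apart.
pre = alert_burden(alert_count=1200, analyst_hours=160)   # 7.5 alerts/hr
post = alert_burden(alert_count=1800, analyst_hours=160)  # 11.25 alerts/hr
print(f"Burden change after update: {burden_change(pre, post):+.0%}")  # +50%
```

The metric is deliberately crude. Its value is that it turns hidden labor into a number that procurement and the vendor can both be held to.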
| 2. Continue in Governance and Assurance
Bring security operations into AI governance reviews, and bring human factors into cyber reviews. Ask how the tool will be used on a bad day, not a demo day. Run exercises where the AI is wrong in a plausible way, and watch how quickly humans notice. The goal is not to prove the model is perfect. The goal is to prove the system fails safely. |
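One simple way to run that exercise is to silently flip a small fraction of the model's verdicts and measure how long it takes a human to notice and escalate. The sketch below covers only the injection step; the function name, flip rate, and verdict labels are assumptions for illustration:

```python
import random

def inject_plausible_errors(verdicts, flip_rate=0.05, seed=None):
    """Silently flip a fraction of (alert_id, verdict) pairs for a drill.

    The measurement that matters happens outside this function: how
    long until an analyst notices the flips and escalates. Keep the
    returned `flipped` list sealed until the debrief.
    """
    rng = random.Random(seed)
    drilled, flipped = [], []
    for alert_id, verdict in verdicts:
        if rng.random() < flip_rate:
            # Plausible, not absurd: swap the label, change nothing else.
            verdict = "malicious" if verdict == "benign" else "benign"
            flipped.append(alert_id)
        drilled.append((alert_id, verdict))
    return drilled, flipped
```

If no one notices within the drill window, that is not an analyst failure. It is evidence the system does not fail safely.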
| 3. Treat Psychological Safety as a Control
In high-reliability environments, people must raise doubts early. If a junior analyst cannot challenge an automated recommendation, you have created an authority gradient between human and machine. Leaders can change this by rewarding early escalation and running reviews that focus on learning rather than blame. |
| 4. Push Regulators to Ask the Right Questions
If regulators ask about tooling but not about the people running it, institutions will buy tools and burn out people. If they reward operational resilience, institutions will invest in the staffing and testing that reflect real cognitive strain. |
The Real Argument
None of this argues against AI in cyber defense. It argues against assuming that AI can compensate for depleted human systems. In finance, where the cost of failure is shared, AI becomes a force multiplier only when the humans behind it are supported and protected from incentive traps.
| “The next breach that matters may not begin with a novel exploit. It may begin with a tired team accepting a confident output, or a rushed leader signing off on a deployment because incentives made slowing down feel riskier.” |
If we want AI to strengthen the defenses of financial systems, we must govern the technology and the human state that surrounds it.
* * *
| Victor Khaemba works at the intersection of responsible AI, financial systems, and cyber psychology. He serves as Global Ambassador for the Global Council for Responsible AI and focuses on governance approaches that connect technical controls with human decision-making. |
