This is Part 3 of a three-part series on how AI is reshaping the real attack surface of modern enterprises.
In Part 1, we showed how attackers increasingly win by exploiting the space between controls: the areas your tools don’t fully see or own.
In Part 2, we examined how AI-driven variance is breaking detection models built for repetition, flooding Security Operations Centers (SOCs) with noise while real threats slip through.
In Part 3, we arrive at the most uncomfortable conclusion:
Even when identity is verified, devices are compliant, and policies are followed, organizations are still making catastrophic decisions because identity alone can no longer guarantee trust.
In a world where anyone can look, sound, and type like your CEO, the core problem isn’t identity theft. It’s authority theft.
Your stack can confidently assert who is on the line, logged in, or present in a meeting. What it cannot assert is whether the request they’re making is legitimate.
That’s not theoretical. In 2024, a Hong Kong employee wired the equivalent of $25M after a video conference in which every “executive” was an AI-generated deepfake. Identity checks passed; the request still should never have been trusted.
Our architectures were built to “prove you are who you say you are.”
Attackers have moved on to “prove you deserve to be obeyed right now.”
Security Has Shifted from Systems to Trust
Single Sign-On (SSO), Multi-Factor Authentication (MFA), device posture, and Zero Trust all answer a narrow question: “Is this identity allowed here?”
In 2026, that’s no longer enough. Because:
- Deepfake fraud has exploded. Some analyses estimate thousands of percent growth in deepfake-enabled scams since 2022, with hundreds of millions in losses in 2025 alone.
- Voice cloning now needs only seconds of audio; AI video can convincingly mimic real people in real time. Human “gut feel” simply can’t keep up.
Identity infrastructure still does its job. The person (or agent) appears authenticated. The device appears compliant. The channel appears encrypted.
What’s missing is everything between identity and action:
- Should this person be making this request?
- Through this channel?
- With this level of urgency?
- In this sequence, given what just happened in the workflow?
That missing layer is digital trust, and right now, it’s collapsing.
The Three Modern Trust Failures
1. Voice Verification Failure
For years, we told employees: “If something feels off, call and verify.” That advice assumes humans can tell when a voice is wrong.
They can’t anymore.
Recent research and fraud data show:
- Voice cloning is now one of the top attack vectors in deepfake fraud, thanks to cheap tools and minimal audio requirements.
- High-quality synthetic audio routinely fools human listeners; detection rates hover around coin-flip levels in controlled experiments.
Real-world attacks have already hit the C-suite. At WPP, the world’s largest advertising group, scammers cloned the CEO’s voice and orchestrated a fake Teams call attempting to extract money and data. Only unusually vigilant executives stopped it.
You can’t “train your way out” of that. We’ve asked humans to be the final verification layer in a game they are now mathematically incapable of winning.
There is no widely adopted standard for real-time, cryptographic “verified caller” signals in enterprise voice and video. Until that exists, every urgent call is a coin toss dressed up as a process.
2. Workflow Approval Failure
The second failure lives in your day-to-day business workflows.
A CFO gets an urgent Slack from the CEO’s real account:
“Board just greenlit this. Need you to push this payment now; I’ll brief you after the call.”
MFA was satisfied weeks ago when that account was compromised.
The Enterprise Resource Planning (ERP) login is protected.
The approval policy is technically followed.
What never gets evaluated is whether the interaction pattern makes sense:
- Is this how the CEO has historically handled sensitive approvals?
- Does this vendor exist in our system?
- Does this request align with the current initiative load and board activity?
The $25M Hong Kong deepfake heist is essentially this failure at scale: a series of seemingly “legitimate” approvals, over a “legitimate” video call, resulting in catastrophic loss because no system asked whether this authority chain made sense in context.
Banks and regulators are now explicitly warning about AI-enhanced executive impersonation and payment fraud, including scams that ride on top of existing, authenticated sessions and bypass MFA through real-time hijacking and social engineering.
Our approvals are still built on a 2015 threat model: protect credentials, enforce MFA, and log who clicked “approve.”
The 2026 threat model is: the identity is clean; the authority is counterfeit.
3. AI Agent Verification Failure
The third failure is emerging inside our own defenses.
Enterprises are deploying “agentic AI” for procurement, support, financial ops, and even security.
The market for AI agents is already measured in billions and growing fast; by mid-decade, most large organizations will have autonomous agents acting on critical systems.
But almost none of these are being deployed with:
- Strong provenance guarantees on the data agents consume.
- Granular, enforceable permissions and guardrails.
- Auditability of why a given decision was made.
Researchers and practitioners are warning that agents can be manipulated not just by model exploits, but by poisoning upstream data and prompts.
When an AI agent approves a suspicious transaction or alters an access policy:
- You can’t easily reconstruct why it did so.
- You can’t prove the inputs weren’t tampered with.
- You can’t explain the decision to auditors in language they’ll accept.
We’ve effectively handed authority to systems with no chain-of-custody and no human-understandable reasoning trail. That’s not “automation.” That’s unmanaged delegation.
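To make that concrete, here is a hypothetical sketch of two of the missing guardrails: provenance checks on the data an agent consumes, and scoped permissions enforced before its decision executes. Every name, limit, and registry in it is an assumption made for illustration, not a description of any particular agent platform.

```python
# Hypothetical sketch of two agent guardrails: input provenance checks and
# scoped permissions. All identifiers, limits, and the registry itself are
# illustrative assumptions.

import hashlib

# Provenance registry: hashes recorded when trusted data was ingested, so an
# agent can later prove its inputs were not swapped or poisoned upstream.
_provenance_registry: dict[str, str] = {}

def register_trusted_input(source_id: str, raw_bytes: bytes) -> None:
    _provenance_registry[source_id] = hashlib.sha256(raw_bytes).hexdigest()

def provenance_intact(source_id: str, raw_bytes: bytes) -> bool:
    return _provenance_registry.get(source_id) == hashlib.sha256(raw_bytes).hexdigest()

# Scoped permissions: what each agent may do, and up to what limit.
AGENT_SCOPE = {"procurement-agent": {"approve_po": 50_000.0}}

def within_scope(agent_id: str, action: str, amount: float) -> bool:
    limit = AGENT_SCOPE.get(agent_id, {}).get(action)
    return limit is not None and amount <= limit

def gate_agent_action(agent_id: str, action: str, amount: float,
                      source_id: str, raw_bytes: bytes) -> bool:
    """An agent action executes only if both checks pass; either way, the
    outcome should be written to an auditable decision record."""
    return provenance_intact(source_id, raw_bytes) and within_scope(agent_id, action, amount)
```

None of this is exotic. The gap is that controls like these are rarely wired in before agents are handed authority over real systems.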
What a Modern Trust Framework Requires
Fixing this isn’t a matter of bolting on one more control. It’s an architectural shift.
1. Cryptographically Backed Identity + Contextual Risk
Identity remains table stakes, but it’s only the first half of the equation.
The second half is a continuous, contextual risk assessment.
Think of this as “identity × context” rather than identity alone: cryptographic assurance that the actor is real, multiplied by an environment-specific score reflecting whether this interaction makes sense right now.
Analysts and consultancies are already arguing that deepfake disruption demands new trust standards that combine technical identity proof with contextual authenticity checks.
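As a rough illustration of the shape of that calculation, consider the minimal Python sketch below. The signals, weights, and thresholds are assumptions invented for the example, not a prescribed scoring model; what matters is that the two halves are multiplied, so a clean identity cannot compensate for a request that makes no sense in context.

```python
# Minimal sketch of an "identity x context" trust score. Signals and weights
# are illustrative assumptions, not a reference model.

from dataclasses import dataclass

@dataclass
class RequestContext:
    channel_matches_baseline: bool   # e.g., CEO normally uses the ERP workflow, not chat
    amount_within_norm: bool         # payment size typical for this approver
    sequence_expected: bool          # the prior workflow steps actually happened
    urgency_flagged: bool            # "push this now, I'll explain later" language detected

def identity_assurance(mfa_passed: bool, device_compliant: bool, credential_age_days: int) -> float:
    """The cryptographic / posture half of the equation, scaled to 0..1."""
    score = 0.0
    score += 0.5 if mfa_passed else 0.0
    score += 0.3 if device_compliant else 0.0
    score += 0.2 if credential_age_days < 30 else 0.1
    return min(score, 1.0)

def context_score(ctx: RequestContext) -> float:
    """Does this interaction make sense right now? Scaled to 0..1."""
    score = 1.0
    if not ctx.channel_matches_baseline:
        score -= 0.4
    if not ctx.amount_within_norm:
        score -= 0.3
    if not ctx.sequence_expected:
        score -= 0.2
    if ctx.urgency_flagged:
        score -= 0.2
    return max(score, 0.0)

def trust_score(identity: float, context: float) -> float:
    # Multiplicative on purpose: a perfectly authenticated session carrying a
    # nonsensical request still scores low.
    return identity * context
```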
2. Behavioral Consistency Scoring
If you can’t trust what you see or hear, you must trust what people consistently do.
That means building baselines for:
- How your executives normally make requests.
- Which channels they use, for which types of decisions.
- How they phrase urgency and exceptions.
- How approvals typically flow for specific transaction types.
This is the same logic modern User and Entity Behavior Analytics (UEBA) applies to logins and endpoint behavior; now it needs to be applied to human authority patterns and communications.
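A sketch of what one such baseline comparison might look like follows. The baseline fields and thresholds are assumptions made up for the example; a real deployment would learn them from history rather than hard-code them.

```python
# Illustrative behavioral consistency check for authority requests. Baseline
# fields and thresholds are assumptions for the example, not a UEBA product API.

from dataclasses import dataclass, field

@dataclass
class AuthorityBaseline:
    """What 'normal' looks like for one executive's sensitive requests."""
    usual_channels: set = field(default_factory=lambda: {"signed_email", "erp_workflow"})
    usual_approval_hours: range = range(8, 19)   # local business hours
    max_typical_amount: float = 250_000.0
    typical_urgency_rate: float = 0.05           # how often they invoke urgency

@dataclass
class AuthorityRequest:
    channel: str
    hour: int
    amount: float
    urgent: bool

def consistency_flags(req: AuthorityRequest, base: AuthorityBaseline) -> list[str]:
    """Return human-readable deviations rather than a bare number, so the
    output can feed both a risk engine and an audit trail."""
    flags = []
    if req.channel not in base.usual_channels:
        flags.append(f"unusual channel: {req.channel}")
    if req.hour not in base.usual_approval_hours:
        flags.append(f"outside normal approval hours: {req.hour}:00")
    if req.amount > base.max_typical_amount:
        flags.append(f"amount {req.amount:,.0f} exceeds typical ceiling")
    if req.urgent and base.typical_urgency_rate < 0.1:
        flags.append("urgency language from an executive who rarely uses it")
    return flags

# Example: an 'urgent' 22:00 chat request for 2,000,000 would raise all four flags.
```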
3. Automatic, Out-of-Band Verification
We cannot keep relying on “just call to verify” as a manual step.
Out-of-band verification needs to be systemic and automatic, not optional.
The user doesn’t decide whether to verify; the risk engine does.
Done right, this doesn’t punish normal work. It selectively adds friction where authority is being exercised in ways that deviate from baseline.
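In code, that decision might look something like the sketch below, building on the trust score above. The thresholds are illustrative, and the helper functions are stand-ins for whatever payment-hold, push-challenge, and ticketing hooks an organization actually has.

```python
# Sketch of risk-driven, automatic out-of-band verification. Thresholds are
# illustrative; the helpers are placeholders for real workflow integrations.

STEP_UP_THRESHOLD = 0.6
BLOCK_THRESHOLD = 0.3

def hold_transaction(request_id: str) -> None:
    print(f"[hold] {request_id} paused pending verification")

def send_push_challenge(request_id: str) -> None:
    # In practice: a challenge sent to a pre-registered device, not a reply on
    # the channel the request arrived on.
    print(f"[verify] out-of-band challenge sent for {request_id}")

def open_incident(request_id: str) -> None:
    print(f"[escalate] incident opened for {request_id}")

def handle_authority_request(score: float, request_id: str) -> str:
    """The risk engine, not the user, decides whether verification happens."""
    if score >= STEP_UP_THRESHOLD:
        return "allow"                      # normal work keeps flowing, no added friction
    if score >= BLOCK_THRESHOLD:
        hold_transaction(request_id)
        send_push_challenge(request_id)
        return "pending_verification"
    hold_transaction(request_id)
    open_incident(request_id)               # too risky to resolve with a prompt alone
    return "escalated"
```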
4. Explainable AI for Every Autonomous Defensive Action
As AI takes a larger role in defense, “the model said so” is not a sufficient answer.
Every autonomous action, whether it blocks a transaction, quarantines a communication, or escalates an incident, must come with an audit-ready explanation:
- Which signals drove the risk score up?
- Which historical patterns did it compare against?
- Why was the chosen response appropriate?
Emerging AI-safety work is converging around the need for guardrails, permissions, and auditability in agentic AI, and regulators (via instruments like the EU AI Act) are pushing towards explainability and traceability as mandatory properties.
If your own security AI can’t explain itself, you’ve simply created another opaque authority you can’t fully trust.
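One way to make that tangible is to emit a structured, integrity-checked explanation record with every autonomous action. The sketch below is a minimal illustration; the field names and the simple hash are assumptions, not a compliance-grade audit scheme.

```python
# Minimal sketch of an explanation record emitted alongside every autonomous
# defensive action. Field names and the integrity hash are illustrative.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class ActionExplanation:
    action: str                      # e.g., "hold_payment", "quarantine_message"
    risk_score: float
    contributing_signals: list[str]  # which signals drove the risk score up
    baseline_comparison: str         # which historical pattern it deviated from
    response_rationale: str          # why this response and not a lighter one
    model_version: str
    timestamp: str

def record_action(explanation: ActionExplanation) -> dict:
    """Serialize the explanation and hash it so reviewers can check it was
    not edited after the fact (a simple integrity stand-in, not full
    chain-of-custody)."""
    payload = asdict(explanation)
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {"explanation": payload, "integrity_sha256": digest}

example = ActionExplanation(
    action="hold_payment",
    risk_score=0.22,
    contributing_signals=["unusual channel: video_call", "amount exceeds typical ceiling"],
    baseline_comparison="no prior wire ever initiated by this executive over video",
    response_rationale="score below block threshold and out-of-band verification not completed",
    model_version="authority-risk-0.3",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(record_action(example), indent=2))
```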
Why CISOs Can’t Solve This Alone
Even the strongest CISO can’t brute-force a trust fabric into existence with the tools they have today.
Three hard constraints get in the way:
1. Vendor Fragmentation
The trust story for a single approval request might pass through:
- Okta or another identity provider (IdP)
- Slack or Teams
- Zoom or a telephony provider
- Your ERP or billing system
- A bank portal
Each has its own data model, logs, APIs, and blind spots. None were designed to share the rich, real-time signals needed to evaluate authority in context across the chain.
A CISO can’t fix that in a policy doc. It needs an architectural layer that sits above the tools and stitches their signals into one coherent view.
2. Channel Blindness
Executives increasingly operate across:
- Corporate email and chat.
- Personal messaging apps (WhatsApp, Signal, iMessage).
- Mixed environments while traveling or working remotely.
The Hong Kong deepfake case succeeded in part because the critical interaction happened on a video platform and in a context where security tooling had little to no visibility.
You can mandate “corporate channels only,” but reality will ignore that memo. Trust frameworks must account for how work actually happens, not how we wish it did.
3. Compliance Lag
Frameworks like SOC 2, ISO 27001, PCI DSS, and even many cloud security standards still focus on:
- Access controls
- Encryption
- Logging and retention
- Traditional incident response
They say almost nothing about:
- Synthetic media and deepfake risk.
- Behavioral trust evaluation for authority decisions.
- AI agent governance, provenance, and explainability.
Thought leaders are starting to frame deepfakes and generative AI as “cybersecurity-scale” challenges requiring new standards for trust and provenance, but regulation and audit criteria are behind the curve.
So CISOs are forced to justify real, urgent trust investments with language written for yesterday’s threats.
Pulling It All Together
Across this series, a single pattern emerges:
- Controls can work and still fail.
- Detection can fire and still miss the point.
- Identity can be verified and still be abused.
AI hasn’t just introduced new attack techniques. It has invalidated old assumptions about how trust, legitimacy, and authority function in digital systems.
The organizations that adapt won’t be the ones with the most tools or the loudest alerts. They’ll be the ones to rethink security as a continuous trust problem, spanning people, systems, workflows, and machines.
That’s the shift this moment demands.
Building the Trust Layer Between People, Systems, and AI
VikingCloud’s stance is simple: Identity is necessary. Trust context is decisive.
Everything we’ve described (voice fraud, workflow abuse, agentic AI gone wrong) happens after identity checks succeed. That’s where we operate.
If you’re ready to move beyond “strong identity” and start engineering defensible trust, reach out to a member of our team.

