Welcome to the first of our three-part blog series exploring the evolving landscape of cybersecurity in the age of artificial intelligence. In this first installment, we’ll examine the hidden gaps between security controls: the spaces where AI-driven attackers thrive and where traditional defenses often falter.
The biggest misconception in enterprise security isn’t whether your controls work. It’s the assumption that working controls add up to a secure environment.
Your email gateway blocks 99% of commodity phishing. Your EDR (Endpoint Detection and Response) stops most commodity malware before it executes. Your IdP (Identity Provider) enforces MFA (Multi-Factor Authentication) and conditional access. On paper, the stack looks solid.
But the breaches you actually care about rarely go through those hardened controls.
They slide between them.
The attack doesn’t look like “malicious payload blocked by anti-virus.”
It looks like a user:
- Getting a “mandatory training” notice in Slack.
- Scanning a QR code with their personal phone.
- Opening a browser in private mode.
- Entering real credentials into what looks like an internal portal.
No corporate control ever actually “sees” the attack chain.
Meanwhile, AI is quietly changing the economics of that whole operation. IBM and others are already reporting huge jumps in losses driven by AI-assisted phishing and social engineering, with billions lost annually to scams that are increasingly machine-crafted, personalized, and automated.
This is where AI-driven attacks win: not by being impossibly sophisticated, but by ruthlessly exploiting the space between your controls.
The Three Gaps CISOs Can’t Currently Close
1. Managed vs. Unmanaged Devices: The Quishing Gap
Quishing makes this painfully obvious.
A user receives an email that passes SPF (Sender Policy Framework), DKIM (DomainKeys Identified Mail), and DMARC (Domain-based Message Authentication, Reporting, and Conformance). It contains a QR code for “updated benefits enrollment” or “new security training.” To your secure email gateway, it’s just an image attachment. No URL to analyze, no obvious IOC (Indicator of Compromise) to sandbox.
The user does what you trained them to do:
“Don’t click suspicious links on your work machine.” So, they scan the code with their personal phone.
That personal phone:
- Isn’t enrolled in MDM (Mobile Device Management).
- Isn’t covered by your EDR.
- Isn’t feeding logs into your SIEM (Security Information and Event Management).
The QR sends them to a fake login page that proxies traffic to Microsoft 365 or your IdP, harvesting credentials and session tokens in real time via adversary-in-the-middle (AitM) techniques that are now widely commoditized.
By the time they realize something’s off, the attacker is already inside, using a legitimate session.
From your stack’s perspective, the trail shows a clean email (no obvious IOC), a successful MFA challenge, and an endpoint that never existed in your inventory.
This isn’t “control failure.” It’s a boundary failure. The attack wins in the unmanaged space you never see.
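One practical way to shrink the gateway blind spot in this chain is to decode QR payloads out of image attachments before delivery, so the embedded URL can be analyzed like any other link. Below is a minimal sketch using the open-source Pillow and pyzbar libraries; the attachment filename and the downstream analysis hook are assumptions, not a specific gateway’s API:

```python
# pip install pillow pyzbar   (pyzbar also needs the zbar system library)
from PIL import Image
from pyzbar.pyzbar import decode

def extract_qr_urls(image_path: str) -> list[str]:
    """Decode any QR codes embedded in an email image attachment.

    Returns the decoded payloads so they can be fed into the same
    URL reputation / sandboxing pipeline used for ordinary links.
    """
    payloads = []
    for symbol in decode(Image.open(image_path)):
        if symbol.type == "QRCODE":
            payloads.append(symbol.data.decode("utf-8", errors="replace"))
    return payloads

# Hypothetical usage inside a gateway processing hook:
for url in extract_qr_urls("attachment_benefits_enrollment.png"):
    print("QR payload to analyze:", url)  # hand off to URL analysis
```

This doesn’t close the unmanaged-device gap by itself, but it turns “just an image attachment” back into a link your existing controls can inspect.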
2. Identity vs. Authenticity: When “Verified” Isn’t Real
Modern security architecture assumes a simple equation:
Verified identity = trusted intent.
AI just broke that equation.
Voice cloning and deepfake tools now need only a few seconds of audio to generate convincing, real-time speech. Law enforcement and regulators are publicly warning about AI-assisted vishing, and we’re already seeing multimillion-dollar frauds in which finance teams wired funds after “speaking” with executives who were never on the call.
In practice, that looks like:
- A “CFO” calling from the right number.
- Using the right jargon, right projects, and right tone.
- Asking for a one-time exception to an established payment process.
Finance follows the playbook.
Caller ID checks out. The details are specific. The voice sounds exactly right.
What they can’t verify is whether the person on that line is who they claim to be.
The same pattern plays out in email, Slack, Teams, and internal portals. Compromised or spoofed accounts, AI-generated text and voice, and context-aware pretexts combine into communications that look more legitimate than most of your internal traffic.
Identity is verified. Authenticity is completely unknown.
3. User Trust vs. Verification: Humans vs. Machine-Optimized Deception
Humans operate on heuristics.
If it looks official, comes from the right channel, and uses familiar language and context, we treat it as legitimate by default. That’s not “user carelessness.” It’s how people survive a workday without drowning in verification.
AI is explicitly optimized to pass those heuristics.
Recent threat research shows AI-enhanced spear phishing achieves dramatically higher click-through rates—even against trained users—precisely because it mimics local patterns: your templates, your tone, and your workflows.
So, users behave rationally.
They “verify” with the same mental shortcuts that have worked 99.9% of the time.
AI patiently optimizes to be the 0.1% exception.
Security’s response, “always use a second channel” and “never trust an urgent request,” collapses under operational reality. People cannot secondary-verify 50 messages a day.
There’s no “awareness” gap. Just humans playing linear defense against a non-stop, machine-speed offense tuned to their exact behavior.
Why Traditional Security Tools Miss These Attacks
Most enterprise tools were built for a world where “bad” has fingerprints.
- Known malware hashes.
- Reused phishing templates.
- Obvious anomalies in login geography or device.
Signature-based anything assumes:
- Malicious artifacts are reusable.
- The difference between good and bad is stable enough to encode.
AI-generated attacks destroy both assumptions:
Every phish is unique in wording, tone, and structure. And every lure is tuned to sit inside “normal-looking” behavior.
Email security sees 10,000 unique messages that don’t match any known bad patterns. Behavioral models see variance within normal thresholds because the content is explicitly crafted to remain within those thresholds.
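To make the signature problem concrete, here is a toy illustration with two invented lures: both carry the same pretext, yet they share no fingerprint a hash- or template-based control could reuse.

```python
import hashlib

# Two hypothetical AI-paraphrased lures: same pretext, zero shared fingerprint.
lure_a = "Hi Dana, please complete the updated benefits enrollment by Friday."
lure_b = "Dana - a reminder that the revised benefits enrollment closes Friday."

for lure in (lure_a, lure_b):
    digest = hashlib.sha256(lure.encode()).hexdigest()
    print(digest[:16], "-", lure)

# The digests share nothing, so a block list built from lure_a tells
# you nothing about lure_b - and an LLM can emit thousands of such
# variants per hour at effectively zero cost.
```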
At the same time, your architecture is siloed.
Email security in one console. Endpoint in another. Identity in a third. Collaboration tools in none.
A quishing attack flows from email to personal phone to fake login to legitimate cloud session, yet it never appears as a coherent story anywhere. Each control logs “clean email,” “successful MFA,” or “normal app access.”
And because unmanaged devices don’t feed telemetry, your SIEM has nothing to correlate, even if you wanted to.
The result: each control is doing its job, and the attack still wins.
What CISOs Actually Need to Close the Gaps
This isn’t solved by “more tools.” It’s solved by changing the way you think about control and trust. But beware: if controls become too rigid, users will inevitably look for workarounds and avoid the “official” managed tools entirely. When that happens, you lose visibility—and true control slips away.
1. Cross-Channel Behavioral Correlation
Security must move from “event detection” to “narrative detection.” To see the previously mentioned QR-based attack on your user’s phone as one attack chain, you need:
- Signals from email, identity, and applications.
- Metadata from intersections with unmanaged devices.
- A data model that can tell when these belong to the same user, same timeframe, and same risk pattern.
That’s correlation at the behavioral level, not just IPs and timestamps.
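As a sketch of what narrative detection can mean in practice, the fragment below groups otherwise-clean events by user and flags anyone whose timeline contains a risky sequence within a short window. The event names and the two-hour window are illustrative assumptions, not any product’s schema:

```python
from collections import defaultdict
from datetime import timedelta

# Each event individually looks benign; only the ordered sequence is risky.
RISKY_SEQUENCE = ["qr_email_delivered", "login_unfamiliar_device", "new_oauth_session"]

def find_attack_chains(events, window=timedelta(hours=2)):
    """events: iterable of (user, timestamp, source, event_type) tuples.

    Returns users whose events contain RISKY_SEQUENCE, in order,
    inside the time window - i.e., one narrative, not three alerts.
    """
    by_user = defaultdict(list)
    for user, ts, source, etype in events:
        by_user[user].append((ts, etype))

    flagged = []
    for user, items in by_user.items():
        items.sort()
        idx, start = 0, None
        for ts, etype in items:
            if start and ts - start > window:
                idx, start = 0, None   # window expired; start over
            if etype == RISKY_SEQUENCE[idx]:
                if idx == 0:
                    start = ts
                idx += 1
                if idx == len(RISKY_SEQUENCE):
                    flagged.append(user)
                    break
    return flagged
```

The point isn’t this particular matcher. It’s that the join key is the user and their behavior over time, not an IP address or a log source.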
2. Unified Visibility Across Managed + Unmanaged Intersections
You will never fully control personal devices. But you must instrument the boundary crossings.
When a QR code in a corporate email is scanned. When credentials are entered from an unfamiliar endpoint. When a workflow jumps from a managed laptop to a mobile device mid-stream.
You don’t need full telemetry from every phone. You need reliable signals that a boundary was crossed, and policies that treat those crossings as risk events in their own right.
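Here is a minimal sketch of treating a boundary crossing as a first-class risk event; the schema and the emit sink are assumptions rather than any specific SIEM’s API:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class BoundaryCrossing:
    """A signal that activity jumped from managed to unmanaged space."""
    user: str
    trigger: str          # e.g. "qr_scan", "unfamiliar_endpoint_login"
    managed_origin: str   # the last managed asset in the chain
    observed_at: str
    risk_weight: float    # crossings carry risk even with no malware

def emit(event: BoundaryCrossing) -> None:
    # Stand-in for shipping the event to your SIEM / risk engine.
    print(json.dumps(asdict(event)))

emit(BoundaryCrossing(
    user="dana@example.com",
    trigger="qr_scan",
    managed_origin="corporate-email",
    observed_at=datetime.now(timezone.utc).isoformat(),
    risk_weight=0.6,
))
```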
3. Policy-Driven, Adaptive Trust Models
Trust can’t be a binary allow/deny decision. It must be contextual and provisional.
Low-risk wiki from a known laptop on the corporate network?
- Smooth, low-friction access.
High-risk wire transfer after an “urgent” executive message and a weird device change?
- Step-up verification, extra approvals, or even temporary blocks until trust is re-established.
That means:
- Real-time risk scoring per user and transaction.
- Policies that adapt based on behavior, not just attributes.
- Friction that scales with risk—not with how loudly compliance yells.
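In code, an adaptive trust policy is less a static allow/deny rule than a function from context to friction. A toy sketch follows, in which the signals, weights, and thresholds are all invented for illustration:

```python
def risk_score(ctx: dict) -> float:
    """Combine illustrative per-transaction signals into a 0..1 score."""
    score = 0.0
    if ctx.get("unfamiliar_device"):        score += 0.3
    if ctx.get("urgent_executive_request"): score += 0.3
    if ctx.get("payment_over_threshold"):   score += 0.3
    if ctx.get("recent_boundary_crossing"): score += 0.2
    return min(score, 1.0)

def required_friction(score: float) -> str:
    """Friction scales with risk, not with who complains loudest."""
    if score < 0.3:
        return "allow"
    if score < 0.6:
        return "step_up_mfa"
    if score < 0.85:
        return "out_of_band_approval"
    return "hold_until_verified"

# Low-risk wiki read vs. a risky wire transfer:
print(required_friction(risk_score({"unfamiliar_device": False})))  # allow
print(required_friction(risk_score({
    "unfamiliar_device": True,
    "urgent_executive_request": True,
    "payment_over_threshold": True,
})))                                                                # hold_until_verified
```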
4. Automated Verification Inside High-Risk Workflows
If AI can perfectly mimic legitimacy, humans alone cannot be your last line of verification.
You need programmatic checks embedded into your workflows:
- High-value payments automatically trigger out-of-band callbacks to pre-verified numbers.
- Sensitive access requests from chat or email automatically route through a bot that enforces secondary verification.
- First-time external file shares from certain roles always require a manager’s sign-off.
Done right, this doesn’t just add friction. It breaks attacker economics.
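As an example of how such a check changes attacker economics, consider a payment hook that never trusts the requesting channel for confirmation. The callback registry and helper below are hypothetical:

```python
# Hypothetical directory of pre-verified callback numbers, maintained
# out-of-band so a compromised inbox or cloned voice can't edit it.
VERIFIED_CALLBACKS = {"cfo@example.com": "+1-555-0100"}

def approve_payment(requester: str, amount: float, threshold: float = 25_000) -> bool:
    """High-value payments always trigger an out-of-band callback.

    The attacker now has to control the pre-verified phone line too,
    not just the email thread or the voice on the inbound call.
    """
    if amount < threshold:
        return True
    number = VERIFIED_CALLBACKS.get(requester)
    if number is None:
        return False  # no pre-verified channel: fail closed
    return confirm_by_callback(number, amount)

def confirm_by_callback(number: str, amount: float) -> bool:
    # Stand-in for the actual human or IVR callback workflow.
    print(f"Calling {number} to confirm ${amount:,.2f}...")
    return False  # fail closed until explicitly confirmed
```

The deepfaked “CFO” can win the conversation, but the wire still waits on a channel the attacker never touched.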
Ready to See Your Own Gaps?
If your stack “looks good on paper” but you’re still uneasy about AI-driven social engineering, quishing, and deepfake-enabled fraud, you’re not paranoid. You’re paying attention.
That discomfort is a signal.
At VikingCloud, we turn that signal into a concrete plan.
Reach out to a member of our team to see exactly where attackers are winning between your controls and how to take that space back.
In Part 2 of our series, AI Variance: The Real Threat Security Teams Are Not Built to Handle, we’ll dive deeper into the unpredictable nature of AI variance and discuss why today’s security teams are ill-equipped to manage these emerging risks.