AI in cybersecurity: The arms race nobody's winning

Daniel Okonkwo, Security Engineer · 15.02.2026, 00:23:32

Every few months, someone publishes a breathless article about AI revolutionizing cybersecurity. Defenders will detect threats instantly. Attackers will be stopped before they start. The future is secure.

I've spent eight years in security operations. The reality is messier. AI is changing both offense and defense simultaneously, and neither side is pulling ahead. We're in an arms race where the weapons keep getting better but the war never ends.

The defender's new tools

Let me start with what's genuinely improved.

Anomaly detection actually works now. Traditional security tools used signatures — known patterns of malicious behavior. If an attack looked different from anything seen before, it slipped through. Machine learning changed this. Modern systems learn what "normal" looks like for your specific environment and flag deviations.

This matters because attackers constantly modify their techniques. A signature-based system might catch yesterday's malware but miss today's variant. An ML system that understands your baseline traffic patterns notices when something behaves strangely, regardless of whether it matches a known signature.
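
To make that concrete, here's a minimal sketch of the baseline idea using scikit-learn's IsolationForest. The traffic features, numbers, and thresholds are all invented for illustration; production systems use far richer features and continuously updated baselines.

```python
# Minimal sketch of baseline-based anomaly detection.
# Features and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend baseline: [bytes_out_per_min, distinct_dest_ports, failed_logins]
baseline = np.column_stack([
    rng.normal(5_000, 800, 10_000),   # typical egress volume
    rng.poisson(3, 10_000),           # typical port fan-out
    rng.poisson(0.2, 10_000),         # occasional failed logins
])

# Learn what "normal" looks like for this environment.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: one ordinary, one exfiltration-shaped.
new_events = np.array([
    [5_200, 2, 0],        # looks like the baseline
    [250_000, 180, 40],   # huge egress, port fan-out, brute force
])

for event, verdict in zip(new_events, model.predict(new_events)):
    label = "ANOMALY" if verdict == -1 else "normal"
    print(label, event)
```

Nothing in that model knows what an attack looks like; it only knows what this environment usually looks like, which is exactly why novel variants still get flagged.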

Alert triage is another genuine win. Security teams drown in alerts — thousands per day, mostly false positives. Investigating each one manually is impossible. AI systems now correlate alerts, assess likelihood, prioritize by risk, and sometimes resolve obvious false positives automatically. The analyst's job shifts from "look at everything" to "look at what matters."
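
The correlation step can be sketched without any ML at all. The alert types, weights, and chained-stage pairs below are invented for illustration; real triage engines learn these relationships from historical analyst decisions rather than hardcoding them.

```python
# Toy alert-triage scorer: correlate related alerts and rank hosts by risk.
# Alert kinds, severities, and bonuses are invented for illustration.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    kind: str        # e.g. "failed_login", "new_admin", "large_egress"
    severity: int    # 1 (low) .. 5 (critical)

# Hypothetical boosts for alert types that often chain together.
KILL_CHAIN_BONUS = {("failed_login", "new_admin"): 3,
                    ("new_admin", "large_egress"): 4}

def triage(alerts: list[Alert]) -> list[tuple[str, int]]:
    """Group alerts by host, boost scores when suspicious kinds co-occur,
    and return hosts ranked by combined risk."""
    by_host = defaultdict(list)
    for a in alerts:
        by_host[a.host].append(a)

    scores = {}
    for host, host_alerts in by_host.items():
        kinds = {a.kind for a in host_alerts}
        score = sum(a.severity for a in host_alerts)
        for (first, second), bonus in KILL_CHAIN_BONUS.items():
            if first in kinds and second in kinds:
                score += bonus  # correlated stages outrank isolated noise
        scores[host] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

alerts = [Alert("web-01", "failed_login", 2),
          Alert("web-01", "new_admin", 4),
          Alert("db-02", "failed_login", 2)]
print(triage(alerts))  # web-01 first: a correlated chain beats a lone alert
```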

Threat hunting has become more systematic. Instead of analysts manually searching through logs hoping to find something suspicious, AI can surface statistical anomalies across massive datasets. Patterns that would take humans weeks to notice appear in minutes.
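
At its simplest, that kind of hunt is just statistics over logs. A toy example with fabricated login counts and a crude z-score threshold; real hunts use robust statistics and many more dimensions:

```python
# Sketch of statistical hunting: flag hosts whose daily login counts
# deviate sharply from the fleet norm. Data is fabricated.
import statistics

daily_logins = {"web-01": 40, "web-02": 38, "web-03": 44,
                "db-01": 41, "jump-01": 39, "build-07": 310}

counts = list(daily_logins.values())
mean = statistics.mean(counts)
stdev = statistics.stdev(counts)

for host, n in daily_logins.items():
    z = (n - mean) / stdev
    if abs(z) > 2:  # crude threshold; real hunts use robust estimators
        print(f"investigate {host}: {n} logins (z = {z:.1f})")
```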

The attacker's new tools

Now the uncomfortable part.

Phishing got terrifyingly good. Generic "Dear Customer" emails are obsolete. AI can analyze a target's writing style, social media presence, and professional connections, then craft personalized messages that feel authentic. The spelling mistakes and awkward phrasing that used to signal phishing? Gone. Modern AI-generated phishing is grammatically perfect and contextually relevant.

Voice cloning crossed from "impressive demo" to "practical attack tool." With a few minutes of audio, attackers can generate convincing voice messages. That call from your CEO requesting an urgent wire transfer? It might not be your CEO. We've seen this in real incidents, not just proofs of concept.

Vulnerability discovery accelerated. AI can analyze code faster than humans, identifying potential security flaws at scale. This helps defenders audit their own systems — but it also helps attackers find weaknesses to exploit. The same technology that makes security auditing efficient makes attack reconnaissance efficient.
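
The LLM-driven end of this is proprietary, but the older automated end has existed for years as pattern scanning. A deliberately crude sketch of that defensive-auditing pattern; the rules below are illustrative examples, and modern tools do semantic analysis rather than regex matching:

```python
# Crude illustration of automated code auditing: scan source files for
# calls that commonly introduce vulnerabilities. Patterns are examples,
# not a complete or reliable ruleset.
import re
import sys
from pathlib import Path

RISKY_PATTERNS = {
    r"\beval\(": "arbitrary code execution",
    r"\bpickle\.loads\(": "unsafe deserialization",
    r"subprocess\..*shell=True": "shell injection",
    r"\bmd5\(": "weak hash for security use",
}

def scan(root: str) -> None:
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for pattern, risk in RISKY_PATTERNS.items():
                if re.search(pattern, line):
                    print(f"{path}:{lineno}: possible {risk}: {line.strip()}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```

The point of the sketch is the economics: once a check like this is written, running it against a million repositories costs almost nothing, and that cut works the same whether you're auditing your own code or someone else's.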

Malware evasion improved. AI helps create polymorphic code that changes its appearance while maintaining functionality, making signature-based detection even less reliable. It also helps attackers model defender behavior, testing whether their malware will trigger specific security tools before launching real attacks.

Why neither side wins

Here's the fundamental problem: AI amplifies capabilities, it doesn't change the asymmetry.

Defenders must protect everything. Attackers only need to find one weakness. AI makes defenders faster at finding and fixing vulnerabilities, but it also makes attackers faster at finding new ones. The ratio doesn't change.

Defenders need to avoid false positives. Blocking legitimate users or crashing production systems is unacceptable. Attackers have no such constraint. They can try a thousand approaches; only one needs to work. AI makes defenders more accurate, but the tolerance for error remains asymmetric.

Defenders operate transparently. Security tools are documented, purchased through official channels, discussed at conferences. Attackers operate covertly. They can study defensive AI systems extensively before attacking. Defenders rarely get the same reconnaissance opportunity.

The bottom line: AI is a capability multiplier for both sides. It doesn't fundamentally advantage either.

What actually helps

Given this reality, what should security teams focus on?

First, basics still matter most. The majority of successful attacks exploit known vulnerabilities, weak passwords, or human error. AI doesn't change this. Patch your systems, enforce strong authentication, train your users. Boring advice, but it addresses how attacks actually succeed.

Second, reduce attack surface before deploying fancy detection. Every system you expose is a potential entry point. Every permission granted is a potential lateral-movement path. Every piece of software installed is a potential vulnerability. Minimizing what attackers can target beats detecting attacks after they start.

Third, assume breach and plan response. No defense is perfect. AI or not, some attacks will succeed. The organizations that recover fastest have practiced incident response, maintained offline backups, and segmented their networks to contain damage. Resilience matters as much as prevention.

Fourth, be skeptical of vendor claims. Security vendors love AI marketing. "AI-powered threat detection" might mean sophisticated machine learning — or it might mean a simple rules engine with AI buzzwords. Evaluate actual capabilities, not brochure copy.

The uncomfortable truth

I wish I could tell you AI is about to solve cybersecurity. It's not. The same technology that improves defense improves offense. We're running faster to stay in the same place.

This doesn't mean AI is useless for security — it's genuinely valuable. But it's valuable to everyone, attackers included. The competitive advantage isn't having AI; it's having better fundamentals, better processes, better people. AI amplifies whatever you already have.

If your security program is solid, AI makes it more solid. If your security program has gaps, AI helps attackers find them faster. The technology is a multiplier, not a solution.

Daniel Okonkwo works in security operations for a financial services company. He previously held roles at a major cloud provider and a security startup. Views expressed are his own.

#AI

