AI‑Driven XDR for Incident Response: The Future of SOC Defense
I’ve spent the better part of seven years getting the call no organization wants to receive. Ransomware is on the network. Data is moving somewhere it shouldn’t. Systems are going dark. In those moments, your security stack either works for you or it doesn’t—and I can tell you from painful, repeated experience that the gap between those two outcomes almost never comes down to which product you bought.
It comes down to whether your tools could actually tell the story of the attack fast enough for you to do something about it.
That’s why I keep coming back to XDR as one of the most consequential shifts in how we run security operations. Not because it’s a shiny acronym—we’ve had plenty of those—but because the underlying problem it tries to solve is the exact problem that burns us in incident response, over and over again.
The Problem XDR Was Built to Solve
If you’ve ever led an incident response engagement, you’ve lived this reality: attacks don’t stay in one place.
A typical ransomware intrusion I’ve worked might start with a compromised identity—maybe a phished credential, maybe a session token lifted from an infostealer log. From there, the attacker authenticates through legitimate remote access. They enumerate Active Directory. They move laterally using native Windows tooling. They stage data. They deploy ransomware across the domain, often in the middle of the night.
That kill chain touches the email platform, the identity provider, the VPN concentrator, the endpoint, the network, and probably a cloud tenant. Historically, each of those domains had its own detection tool, its own log format, its own console, and its own team staring at it.
EDR for endpoints. SIEM for logs. NDR for traffic. Identity tools for authentication events.
The result? Analysts manually pivoting between six different panes of glass trying to reconstruct what happened. In an active incident, that fragmentation doesn’t just slow you down. It costs you the engagement.
XDR—at its best—attempts to close those seams. The concept is straightforward: collect and correlate telemetry across multiple security layers so that multi-stage attacks surface as coherent incidents rather than disconnected alerts (CrowdStrike, 2025; IBM, n.d.). Instead of an analyst spending three hours manually stitching together a timeline from four different tools, the platform assembles the attack narrative and hands it to you.
In theory, that’s transformative. In practice, it depends entirely on how well the detection engineering underneath it actually works.
Detection Engineering Is Still the Bottleneck
This is the part most vendor pitches skip, and it’s the part that matters most for anyone running a real SOC.
XDR platforms are correlation engines. They are only as useful as the detections feeding them. And writing good detections—detections that catch real attacker behavior without drowning your team in false positives—is genuinely hard. It requires normalized telemetry, well-mapped threat intelligence, continuous tuning, and people who understand both the tooling and the adversary tradecraft.
Without that investment, XDR becomes exactly what a lot of SIEM deployments became a decade ago: an expensive log aggregator generating noise that nobody trusts enough to act on.
The organizations I’ve seen get real operational value from XDR are the ones that treat detection engineering as a discipline, not a checkbox. They have people writing and testing detection logic. They run purple team exercises to validate coverage. They map their detection library against MITRE ATT&CK and know where their blind spots are (MITRE, n.d.).
The platform matters. But the people and the process behind the platform matter more.
Where AI Actually Changes the Calculus
Here’s where the RSA 2026 conversation gets genuinely interesting for me—and where I think the industry is crossing a meaningful threshold.
The detection engineering bottleneck I described above is fundamentally a scale problem. Adversaries are iterating faster than human detection engineers can write rules. The “monitor-diff-test-weaponize” loop for a motivated attacker is measured in hours now, not weeks. Meanwhile, most SOC teams are understaffed and buried in alert volume.
AI—applied thoughtfully—can attack this problem from multiple angles.
Building detections.
We’re beginning to see AI tools that can ingest threat intelligence, analyze attacker techniques, and draft detection logic that a human engineer can review, refine, and deploy. This doesn’t replace the detection engineer. It accelerates them. Instead of starting from a blank page every time a new technique surfaces, the engineer starts from a working draft. For a team writing detections against dozens of new techniques per quarter, that’s a meaningful reduction in cycle time.
Improving existing detections.
Large language models are surprisingly good at analyzing detection rules for logical gaps, suggesting tuning adjustments based on false positive patterns, and identifying coverage overlaps or conflicts across a detection library. Think of it as a code review partner for your Sigma rules.
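To make that review idea concrete, here's a minimal sketch of the kind of static sanity checks such a pass might automate. The rule structure loosely mirrors Sigma's fields but is a simplified, hypothetical representation, and the specific checks are illustrative, not a real linter.

```python
# Sketch: automated review of a detection rule, the kind of pass an
# LLM or linter might perform. Rule layout is a simplified stand-in
# for a Sigma rule, not the real schema.

def review_rule(rule: dict) -> list[str]:
    """Return a list of findings for one detection rule."""
    findings = []
    if not rule.get("falsepositives"):
        findings.append("no documented false-positive scenarios")
    detection = rule.get("detection", {})
    if detection.get("condition", "").strip() == "selection" and len(detection) <= 2:
        # A single selection with no filter is often too broad in production.
        findings.append("single selection with no filter; consider exclusions")
    if not rule.get("tags"):
        findings.append("no ATT&CK technique tags; coverage mapping will miss it")
    return findings

rule = {
    "title": "Suspicious PowerShell EncodedCommand",
    "detection": {
        "selection": {"Image|endswith": "\\powershell.exe",
                      "CommandLine|contains": "-enc"},
        "condition": "selection",
    },
    "tags": ["attack.t1059.001"],
}

for finding in review_rule(rule):
    print(finding)
```

A real deployment would feed the rule plus false-positive history to a model rather than hard-coded checks, but the workflow is the same: draft findings a human engineer accepts or rejects.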
Building correlations.
This is where the intersection of AI and XDR gets most compelling for incident response. The hardest part of correlation isn’t the technology—it’s defining what “related” means across disparate data sources. An identity event, a network connection, and an endpoint process execution are three fundamentally different data types. Knowing that they’re part of the same attack requires contextual reasoning. AI models that can cluster related signals across those boundaries—grouping a suspicious OAuth token refresh with a geographically anomalous VPN login and an unusual PowerShell execution into a single incident—represent a real step forward. Research into AI-driven investigation and response workflows has demonstrated that these systems can meaningfully accelerate analyst triage and surface patterns humans miss in massive datasets (Freitas et al., 2024).
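As a toy illustration of what "defining related" means mechanically, here's a sketch that clusters identity, network, and endpoint events into one incident when they share an entity inside a time window. Real platforms use entity graphs and learned similarity rather than a single join key; the event fields here are hypothetical.

```python
# Sketch: cross-domain correlation by shared entity within a time
# window. A stand-in for the contextual reasoning described above,
# not how any particular XDR engine works.
from datetime import datetime, timedelta

events = [
    {"source": "identity", "ts": datetime(2025, 3, 1, 2, 14), "user": "jdoe",
     "detail": "OAuth token refresh from new device"},
    {"source": "network",  "ts": datetime(2025, 3, 1, 2, 19), "user": "jdoe",
     "detail": "VPN login from non-corporate geography"},
    {"source": "endpoint", "ts": datetime(2025, 3, 1, 2, 41), "user": "jdoe",
     "detail": "Encoded PowerShell execution"},
    {"source": "endpoint", "ts": datetime(2025, 3, 1, 14, 5), "user": "asmith",
     "detail": "Blocked macro"},
]

def correlate(events, window=timedelta(hours=1)):
    """Cluster events into incidents by shared user within a sliding window."""
    incidents = []
    for ev in sorted(events, key=lambda e: e["ts"]):
        for inc in incidents:
            if inc["user"] == ev["user"] and ev["ts"] - inc["last_ts"] <= window:
                inc["events"].append(ev)
                inc["last_ts"] = ev["ts"]
                break
        else:
            incidents.append({"user": ev["user"], "last_ts": ev["ts"],
                              "events": [ev]})
    return incidents

incidents = correlate(events)
# jdoe's three cross-domain signals collapse into a single incident;
# asmith's unrelated event stays separate.
print(len(incidents))
```

The hard part AI adds is deciding relatedness when there is no clean shared key, which is exactly where the deterministic version above falls down.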
Reducing Mean Time to Understand.
In every ransomware engagement I’ve worked, the first few hours are spent answering the same question: what actually happened? Translating raw telemetry into a plain-language attack narrative—“this user account was compromised via credential phishing at 2:14 AM, authenticated to the VPN from a non-corporate IP, performed LDAP reconnaissance, then moved laterally to three servers using RDP”—is the most valuable thing a platform can do in an active incident. AI that can generate that narrative from correlated telemetry in seconds instead of hours fundamentally changes the economics of incident response.
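The narrative step itself is mechanically simple once correlation has done its job. A minimal sketch, assuming a correlated incident with hypothetical field names—in production an LLM would render this with far more context and nuance:

```python
# Sketch: render a correlated incident as a plain-language timeline.
# Incident structure and field names are hypothetical.

incident = {
    "user": "jdoe",
    "events": [
        {"ts": "2:14 AM", "detail": "compromised via credential phishing"},
        {"ts": "2:19 AM", "detail": "authenticated to the VPN from a non-corporate IP"},
        {"ts": "2:41 AM", "detail": "performed LDAP reconnaissance"},
    ],
}

def narrate(incident: dict) -> str:
    """Join an incident's ordered events into one attack-story sentence."""
    steps = [f"{e['detail']} at {e['ts']}" for e in incident["events"]]
    return f"User account {incident['user']} was " + ", then ".join(steps) + "."

print(narrate(incident))
```

The point isn't string formatting; it's that once the platform owns the correlated timeline, the analyst starts from the story instead of the raw logs.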
For organizations running lean SOC teams—and that’s most of them—this kind of augmentation isn’t a luxury. It’s becoming a baseline requirement.
My Watchlist for RSA 2026
With all of that as context, here’s what I’ll be paying attention to on the floor this year:
Incident-centric platforms, not alert-centric dashboards.
The XDR systems that win in real-world IR don’t show me 400 individual alerts. They show me an attack narrative. I want to see platforms that present multi-stage intrusions as coherent stories with timelines, affected assets, and recommended containment actions. If I still have to mentally reconstruct the kill chain myself, the platform hasn’t solved my problem.
AI-assisted detection engineering.
I want to see how vendors are helping SOC teams write better detections faster. Not just AI that tunes thresholds, but tools that analyze coverage gaps, draft detection logic from threat intelligence, and help teams validate their rules against realistic attack simulations.
Identity as a first-class telemetry source.
Ransomware cases I’ve worked in the last two years almost universally involve identity compromise as the initial access vector or the mechanism for lateral movement. XDR platforms that still treat identity as a secondary data source—an afterthought bolted onto endpoint telemetry—are missing where the attacks are actually happening. The Verizon Data Breach Investigations Report has consistently shown credential-related attack vectors as a dominant factor in breaches (Verizon, 2024), and platforms need to reflect that reality.
Open vs. closed architectures.
Most real enterprise environments are heterogeneous. They run multiple vendors, multiple clouds, and legacy systems that aren’t going anywhere. XDR platforms that only work well when you’ve bought the entire vendor ecosystem are going to struggle with the messiness of actual production environments. I’ll be looking at how platforms handle integration with third-party telemetry sources and whether their correlation engines degrade when the data doesn’t come from their own stack.
Operationalized response, not just detection.
Detection without response is just expensive observation. The platforms I’m most interested in are the ones that have closed the loop—where a correlated incident can flow directly into containment actions like host isolation, credential revocation, or network segmentation without requiring an analyst to jump into three different admin consoles.
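Closing that loop is, at its core, a playbook dispatch problem. Here's a minimal sketch, where every action function is a hypothetical stand-in for an EDR, IdP, or network API call, with a human-approval gate on disruptive steps:

```python
# Sketch: classification-driven containment with a human-in-the-loop
# gate. All actions are illustrative stand-ins, not real API calls.

def isolate_host(asset):       return f"isolated {asset}"
def revoke_credentials(user):  return f"revoked sessions for {user}"
def block_egress(asset):       return f"segmented {asset} from egress"

PLAYBOOKS = {
    "ransomware-precursor": [isolate_host, revoke_credentials, block_egress],
    "credential-phishing":  [revoke_credentials],
}

def respond(incident: dict, approved: bool = False) -> list[str]:
    """Run the playbook for an incident's classification."""
    actions = []
    for step in PLAYBOOKS.get(incident["classification"], []):
        # Disruptive containment stays behind an approval gate.
        if incident.get("disruptive") and not approved:
            actions.append(f"pending approval: {step.__name__}")
            continue
        target = (incident["user"] if "credential" in step.__name__
                  else incident["asset"])
        actions.append(step(target))
    return actions

incident = {"classification": "ransomware-precursor", "asset": "srv-fs01",
            "user": "jdoe", "disruptive": False}
for line in respond(incident):
    print(line)
```

The design choice worth noticing is the gate: the same dispatch table serves both fully automated response and approval-required response, which is exactly the governance lever discussed below.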
The Governance Question
One thing I want to flag, because I think the industry is moving fast and not everyone is thinking about this carefully enough.
As AI takes a larger role in detection, correlation, and response, we are introducing non-deterministic behavior into critical security workflows. An autonomous agent that isolates a production server because its model flagged anomalous behavior—without sufficient context about a planned maintenance window—can cause a business disruption that looks a lot like the attack it was trying to prevent.
Our job as security leaders is shifting. We’re not just selecting tools anymore. We’re becoming orchestrators of automated decision-making that operates at machine speed on production infrastructure. That requires governance, guardrails, and a clear understanding of where the human stays in the loop.
If a vendor can’t clearly articulate how their AI makes decisions, what its confidence thresholds are, and how you override it when it gets it wrong, that should be a disqualifying conversation.
Final Thought
I’ve been in rooms where the ransomware was already deployed, the backups were encrypted, and the only thing standing between the organization and a catastrophic outcome was how quickly we could understand what the attacker did and where they still had access.
In those moments, you don’t care about marketing categories. You care about whether your security stack can tell you the truth, fast.
That’s the promise of XDR done right. And with AI finally reaching a point where it can meaningfully accelerate detection engineering, correlation, and investigation, I think we’re closer to delivering on that promise than we’ve ever been.
But we’re not there yet. The technology is maturing. The hard part—building the operational discipline, the detection engineering programs, and the governance frameworks to use it well—is still on us.
If you’re heading to RSA 2026 and working on any of this, I’d love to hear what you’re seeing. What’s the one XDR capability you’re hoping to find on the floor this year?
Art Ocain is the Executive Director of Operations at Airiam, where he leads cybersecurity strategy and operations for organizations navigating complex threat landscapes.

References
- CrowdStrike. (2025). What is extended detection and response (XDR)? https://www.crowdstrike.com/en-us/cybersecurity-101/endpoint-security/extended-detection-and-response-xdr/
- Freitas, S., Kalajdjieski, J., Gharib, A., & McCann, R. (2024). AI-driven guided response for security operation centers with Microsoft Copilot for Security. arXiv. https://arxiv.org/abs/2407.09017
- IBM. (n.d.). What is XDR (extended detection and response)? https://www.ibm.com/think/topics/xdr
- MITRE. (n.d.). MITRE ATT&CK. https://attack.mitre.org/
- Verizon. (2024). 2024 Data Breach Investigations Report. https://www.verizon.com/business/resources/reports/dbir/