The Asymmetry That Defines Cybersecurity: Lessons from a Linux Vulnerability and the KEV List
When I saw CISA add a Linux kernel vulnerability from 2023 to the Known Exploited Vulnerabilities (KEV) catalog in mid-2025, one thought came to mind: the striking asymmetry in how we think about cybersecurity.
Many organizations still believe that a vulnerability from 2023 is something long gone — patched, irrelevant, or magically fixed along the way. We assume that someone, somewhere, must be maintaining the system properly. But cybercriminals operate under no such illusions. They don’t care whether a vulnerability is old or new. If it provides a viable attack path, it’s fair game.
There’s also a persistent myth that attackers only go after “fancy” or novel vulnerabilities — the kind that make headlines, involve zero-days, or showcase advanced techniques. But in reality, attackers are often pragmatic, not innovative. They’ll use whatever works, and older, reliable vulnerabilities are often more attractive because they’re well-documented, widely present, and easier to automate at scale.
This is the uncomfortable truth at the heart of cybersecurity: attackers only need one working exploit. Defenders need to be right everywhere, all the time. This is what makes the cyber battlefield so complex — and so intellectually demanding.
The Illusion of Stillness: When Vulnerabilities Appear from Thin Air
There’s something almost magical — and deeply unsettling — about how software vulnerabilities work. Imagine this: you have a perfectly patched Linux server. You shut it down, disconnect it from the network, and lock it in a vault. It doesn’t move. No one touches it. No updates, no changes, no human interaction.
Six months later, you power it back on. The system is identical to the one you secured — same code, same configuration, same binaries. But now… it’s dangerously exposed.
Somewhere during those months of dormancy, researchers discovered new attack vectors, previously unnoticed logic flaws, and hidden side effects in the software it runs. They published their findings. Maybe a proof-of-concept appeared. Maybe it entered the KEV list. And suddenly, your untouched system is a liability — a passive target with newly discovered weaknesses it always had, but no one had noticed yet.
This is the paradox of vulnerability: it doesn’t need an update or a change to appear. It’s not introduced — it’s revealed. The code was always fragile. We just didn’t know how — or where — it could break.
It’s almost as if time itself introduces risk. Not by altering the system, but by expanding the world’s knowledge of how to exploit it.
This is what makes cybersecurity so different from physical security. A locked door remains a locked door, until someone breaks it. But in cyber, a door you never knew existed can suddenly stand wide open, not because it changed, but because the map changed.
And it’s precisely why we can’t rely on static inventories or “last patch applied” checklists. What wasn’t a vulnerability yesterday might be a high-risk exposure today, without the system ever changing. That’s the strange, almost poetic nature of this field: defenders must constantly re-evaluate what they thought was secure, because discovery itself changes the game.
A Timeline That Tells a Larger Story
Understanding the timeline of a vulnerability’s exploitation isn’t just technical detail — it’s a window into the very different mental models that cybercriminals and organizations operate with. Most defenders assume a “rational” timeline: a vulnerability is discovered, patched, disclosed, and then gradually fades into irrelevance. In this model, time is a buffer — “If it’s old, it’s probably fixed.” But attackers don’t think that way. For them, time is an opportunity. The longer a vulnerability exists unpatched in real environments, the more valuable it becomes. They understand what many defenders overlook: the gap between patch availability and real-world remediation is often wide — and persistent.
January 27, 2023 — Vulnerability patched in the Linux source tree
Technically, the problem was solved at this point — but only in the upstream kernel code. For most distributions and end users, this patch existed in theory, not in practice. Many Linux systems depend on distributions like Ubuntu, Debian, or Red Hat to package and release their own kernel updates, often weeks or months later. Worse yet, many organizations delay kernel upgrades due to fear of breaking dependencies or disrupting operations.
March 22, 2023 — CVE-2023-0386 published in NIST’s National Vulnerability Database (NVD)
At this point, defenders became aware of the issue. But awareness doesn’t equal action. Without exploit details or a KEV flag, many security teams deprioritize even serious vulnerabilities — especially if the CVSS score doesn’t place it near the top of their scanner reports. This is a flaw in the mental model defenders use: visibility without context often leads to complacency.
May 4, 2023 — Proof-of-concept (PoC) exploit appears on GitHub
This is when the game changes. A public exploit lowers the technical bar to weaponization. What used to require expertise now requires copy-paste skills. But still, many organizations don’t respond with urgency unless active exploitation is confirmed or regulators force their hand. Meanwhile, threat actors begin experimenting, adapting the PoC to real-world environments.
June 17, 2025 — CISA adds CVE-2023-0386 to the Known Exploited Vulnerabilities (KEV) catalog
Now it’s official: attackers are actively exploiting this vulnerability in the wild. For some defenders, this is the first time they take it seriously. For cybercriminals, this is old news — they’ve had two years to automate, integrate, and profit from it.
This timeline isn’t just about dates — it’s about the mismatch between how we think attackers operate and how they actually do.
Cybersecurity programs built on assumptions of linearity and closure (“we patched it, so we’re safe”) miss the reality: most threats operate in nonlinear, opportunistic, and recurring cycles. Attackers don’t care if a CVE is two years old, or if it’s been “fixed.” If your environment is still exposed, it’s still a target.
This is why vulnerability management based on CVSS scores or calendar age is fundamentally flawed. It’s not about how critical a vulnerability looks — it’s about how dangerous it is, right now, in your specific context.
And that’s where Risk-Based Vulnerability Management (RBVM) steps in: by dynamically prioritizing vulnerabilities based on real-world exploitability, asset exposure, business criticality, and environmental relationships. RBVM replaces wishful thinking with adaptive prioritization, helping defenders catch up to an attacker’s reality.
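To make the contrast with CVSS-only ranking concrete, here is a minimal, illustrative sketch of RBVM-style scoring. The field names, weights, and the placeholder CVE ID "CVE-2024-XXXX" are all hypothetical; a real implementation would pull these signals from scanners, threat feeds, and asset inventories.

```python
# Illustrative RBVM scoring sketch (hypothetical fields and weights).
# Priority is driven by live context, not by the static CVSS value alone.

def rbvm_score(vuln):
    score = vuln["cvss"]                    # baseline severity (0-10)
    if vuln["in_kev"]:                      # confirmed exploitation in the wild
        score *= 2.0
    elif vuln["public_poc"]:                # a public PoC lowers the attack bar
        score *= 1.5
    score *= vuln["asset_criticality"]      # business weight of affected asset
    if vuln["internet_exposed"]:            # a reachable attack path matters
        score *= 1.5
    return score

vulns = [
    {"id": "CVE-2023-0386", "cvss": 7.8, "in_kev": True,
     "public_poc": True, "asset_criticality": 1.0, "internet_exposed": True},
    {"id": "CVE-2024-XXXX", "cvss": 9.8, "in_kev": False,   # hypothetical CVE
     "public_poc": False, "asset_criticality": 0.3, "internet_exposed": False},
]

# A two-year-old KEV-listed flaw on an exposed, critical asset outranks a
# "more critical" CVE with no known exploitation on a low-value internal host.
for v in sorted(vulns, key=rbvm_score, reverse=True):
    print(v["id"], round(rbvm_score(v), 1))
```

The exact formula is beside the point; what matters is that exploitation evidence and environmental context, not the raw severity score, decide what rises to the top of the queue.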
Just because a fix exists doesn’t mean it’s been applied. Just because a vulnerability is public doesn’t mean it’s been understood. And just because something is old doesn’t mean it’s irrelevant.
This is why cybersecurity cannot rely on static timelines or one-size-fits-all metrics. It requires dynamic prioritization, continuous visibility, and above all, an understanding that risk is a living, moving target.
Not All Vulnerability Timelines Are Equal: The Pwn2Own Alternative
While many vulnerabilities follow chaotic, unpredictable paths — from silent exploitation to late-stage patching — there are alternative timelines that show how things should work. A powerful example of this is the Pwn2Own competition, organized by Trend Micro’s Zero Day Initiative (ZDI).
In contrast to opportunistic attacks and delayed patch cycles, the Pwn2Own timeline is a controlled, transparent, and responsible process — one that benefits the entire security ecosystem:
A researcher discovers a new vulnerability, typically in widely used platforms or software.
They develop an exploit and demonstrate it live during the Pwn2Own event.
Instead of using it maliciously, they submit the finding to ZDI, who acts as a neutral broker.
The affected vendor is immediately notified, triggering the official disclosure and patch process.
A virtual patch is made available through Trend Micro protections to defend customers even before the vendor issues a fix.
The researcher is awarded a cash prize, reinforcing ethical discovery and responsible disclosure.
Once the vendor develops an official patch, users can apply it, closing the loop before any real-world exploitation occurs.
Only after this responsible chain is complete — sometimes months later — is the exploit publicly disclosed via a blog or technical writeup. And in many cases, active exploitation never occurs because the vulnerability was contained early.
Compare this with the typical exploit lifecycle we explored earlier, where:
Patches are delayed or ignored,
Exploits surface on GitHub,
Exploitation happens in the wild,
And agencies like CISA step in only when real damage is unfolding.
The Pwn2Own model is a refreshing counter-narrative. It’s a demonstration of proactive, structured vulnerability management driven by collaboration — not chaos.
It also reminds us that not all vulnerabilities follow the same path, and not all attackers are adversaries. The security research community — when incentivized and supported — can become one of our strongest allies in closing the gap between discovery and defense.
As the visual from the ZDI team shows, this kind of timeline has clear stages, defined handoffs, and embedded protection mechanisms like virtual patching. It’s proof that, with the right ecosystem, vulnerabilities can serve the good of all instead of being weapons in the wrong hands.
Risk-Based Vulnerability Management: Focusing Where It Matters Most
One of the most common mistakes in vulnerability management is trying to treat every vulnerability as equally urgent. But in the real world, not all vulnerabilities are created equal — and not all of them matter to your environment.
The diagram above illustrates this:
The large red circle represents all disclosed vulnerabilities — tens of thousands published across all platforms and software.
The green circle is the subset of vulnerabilities that are actively exploited in the wild — those with real-world impact.
The orange circle represents vulnerabilities present in your environment — discovered through scans, agents, or configuration analysis.
The tiny blue zone, where all three circles overlap, is the high-risk epicenter. This is where Risk-Based Vulnerability Management (RBVM) concentrates its effort, directing your attention to what matters most, at the moment it matters.
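The three circles translate directly into set logic. A toy sketch, with made-up CVE identifiers standing in for real inventories:

```python
# The three circles of the diagram as sets of CVE IDs (toy data).
disclosed = {"CVE-A", "CVE-B", "CVE-C", "CVE-D", "CVE-E"}  # all published CVEs
exploited = {"CVE-B", "CVE-C"}                             # seen in the wild
in_my_env = {"CVE-C", "CVE-D"}                             # found by scanners

# RBVM's focus is the intersection: disclosed AND exploited AND present here.
high_risk = disclosed & exploited & in_my_env
print(high_risk)  # {'CVE-C'}
```

In real deployments each set holds thousands of entries, which is exactly why computing the overlap continuously, rather than eyeballing a quarterly report, is the only workable approach.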
Why Static Prioritization Doesn’t Work Anymore
Cyber risk is not static. It’s dynamic, continuous, and shared — influenced by both the speed of the business and the speed of attacks.
This means a list of prioritized vulnerabilities is not a report you generate once a quarter and track in a spreadsheet. It must be a living, adaptive, continuously updated model that reflects reality as it changes:
A new identity permission granted to a contractor can expose a previously low-risk asset.
A temporary misconfiguration to troubleshoot a production issue can open an unexpected attack path.
A new exploit published on GitHub can instantly elevate the threat level of a vulnerability that was previously deemed low.
A new asset deployed into a subnet with lateral movement potential can suddenly make dormant vulnerabilities critical.
This is why what isn’t critical today could become critical tomorrow — or in the next 10 minutes. The attack surface changes constantly, and so must your understanding of which vulnerabilities pose the greatest risk.
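Any one of the events above can reshuffle the priority queue. The following sketch shows the idea with invented CVE IDs and an arbitrary multiplier; a real system would apply calibrated weights per event type.

```python
# Event-driven re-prioritization sketch (hypothetical IDs and weights).
# A PoC published on GitHub re-ranks a vulnerability within minutes.

priorities = {"CVE-LOW-1": 2.1, "CVE-MED-7": 5.4, "CVE-HI-3": 8.8}

def on_event(event, priorities):
    # A published exploit sharply raises the affected CVE's priority.
    if event["type"] == "poc_published":
        priorities[event["cve"]] *= 5.0
    return priorities

on_event({"type": "poc_published", "cve": "CVE-LOW-1"}, priorities)

# The vulnerability that sat at the bottom of the queue ten minutes ago
# is now at the top, although nothing in the environment itself changed.
queue = sorted(priorities, key=priorities.get, reverse=True)
print(queue)  # ['CVE-LOW-1', 'CVE-HI-3', 'CVE-MED-7']
```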
The Cyber Risk Broker: Real-Time Risk Intelligence for RBVM
This is where the Cyber Risk Broker comes into play: a technology layer that acts as the brain behind risk-based prioritization.
A Cyber Risk Broker is an intelligent system that continuously ingests and correlates:
Vulnerability data from scanners
Threat intelligence and exploit activity
Asset criticality and exposure levels
Configuration and identity data
Environmental and business context
It calculates cyber risk scores dynamically, identifying which vulnerabilities are most urgent in your specific environment, based on how your systems are connected, configured, and exposed right now.
The output of the Cyber Risk Broker becomes the input for RBVM — fueling dashboards, ticketing systems, automation playbooks, and executive reporting with real-time, context-aware prioritization.
In essence, RBVM is the action layer. The Cyber Risk Broker (CRB) is the intelligence layer that powers it.
This modern architecture replaces guesswork with evidence, spreadsheets with systems, and opinion with continuous, risk-driven decisions.
Continuous, Context-Aware Prioritization
RBVM tools are built on the premise that risk must be calculated in real time. They continuously ingest telemetry from:
Asset inventories and configurations
Exploit intelligence and threat actor behavior
Identity and access management systems
Network topology and segmentation
Cloud workloads, containers, and dynamic services
This allows organizations to dynamically re-prioritize vulnerabilities as risk conditions change — ensuring that defense efforts are always aligned with current exposure, not yesterday’s assumptions.
In other words, it’s not just about knowing what vulnerabilities you have. It’s about knowing which ones matter right now.
Visualizing Real-Time Risk: What Happens on Patch Tuesday?
One of the most powerful ways to understand the value of continuous cyber risk calculation is by watching how your Cyber Risk Index evolves in response to real-world events — particularly Microsoft Patch Tuesday.
The image above shows exactly that: a dynamic, time-based risk score that reflects the live state of an organization’s assets, exposures, and misconfigurations. On the surface, the graph looks like a simple trend line. But if you look closely, you’ll notice a recurring pattern:
Every second Tuesday of the month, when Microsoft releases its latest patches, there is a noticeable increase in the Cyber Risk Index.
Why? Because new vulnerabilities are disclosed, including critical ones affecting Windows, Office, Exchange, and Azure services. At the moment of disclosure:
The threat landscape changes.
The attack surface expands.
And the Cyber Risk Broker recalculates risk, identifying which of these newly disclosed CVEs affect your critical assets.
In that instant, even though nothing has been exploited yet, your organization’s cyber risk score goes up — because the conditions for compromise have changed.
But here’s the key insight: The score doesn’t go back down automatically.
It only improves if — and when — your vulnerable assets are actually patched.
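This disclosure-up, remediation-down dynamic can be sketched as a toy simulation. The day numbers and point values below are arbitrary illustrations, not a real scoring model:

```python
# Toy simulation of a Cyber Risk Index across a month (arbitrary units).
# Disclosure raises the index immediately; only remediation lowers it.

risk_index = 40.0
events = [
    ("day 1",  None),
    ("day 9",  +12.0),   # Patch Tuesday: new CVEs disclosed, exposure jumps
    ("day 12", None),    # awareness alone changes nothing; index stays flat
    ("day 16", -9.0),    # vulnerable assets actually patched, risk drops
]

for day, delta in events:
    if delta is not None:
        risk_index += delta
    print(day, risk_index)  # 40.0 -> 52.0 -> 52.0 -> 43.0
```

Note the asymmetry built into the model: the upward step happens to you, on the vendor's schedule, while the downward step only happens when your teams act.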
This reflects a fundamental truth in cyber risk management: Awareness doesn’t reduce cyber risk — action does.
And this is the power of continuous risk scoring: it doesn’t just surface your technical debt; it creates sustained pressure for a timely response. It shows your organization how doing nothing keeps you in the red, and how targeted action — like patching a vulnerable domain controller or updating a cloud workload — visibly reduces risk.
Traditional vulnerability reports are static — they tell you what was vulnerable when the scan ran. But cyber risk is fluid.
With tools that leverage Cyber Risk Brokers, your risk index becomes a living metric, influenced by:
Real-time exploit activity
Business-critical asset exposure
Patch availability and status
Identity, access, and configuration changes
This enables a new level of operational maturity — where cybersecurity decisions can be measured, tracked, and justified, using a common language both technical teams and executives can understand.
And it reinforces the principle that cyber risk isn’t a technical score. It’s a strategic indicator of how well you’re keeping up with the adversary.
From Awareness to Action
Throughout this article, we’ve explored the often misunderstood lifecycle of vulnerabilities — from disclosure to exploitation, and from patch availability to actual remediation. We’ve examined CVE-2023-0386 as a case study in how old vulnerabilities can still be actively weaponized years after disclosure, and how the timeline of exploitation rarely aligns with the assumptions defenders make.
We’ve challenged the outdated mental model that sees patching as a one-time event, and vulnerability lists as static Excel exports. In reality, cyber risk is a moving target — driven by constant changes in attacker behavior, system configurations, business operations, and external disclosures.
We’ve shown how modern defenders must operate with continuous visibility, contextual intelligence, and risk-based decision-making powered by:
Cyber Risk Brokers that calculate real-time risk based on how vulnerabilities, assets, access, and threats intersect
Risk-Based Vulnerability Management (RBVM) that filters out the noise and focuses on what’s actually dangerous in your environment
Cyber Risk Indexes that quantify and communicate risk as it fluctuates — driven by real-world events like Patch Tuesday or changes in system posture
And perhaps most importantly, we’ve reinforced a fundamental truth:
Cybercriminals don’t care how old a vulnerability is. If it works, they’ll use it. So defenders must stop treating cyber risk as a snapshot — and start treating it as a stream.
The future of cybersecurity doesn’t belong to those with the most tools or the most alerts. It belongs to those who can see risk clearly, prioritize quickly, and act decisively. This means shifting from:
Static patch lists → to live, risk-informed prioritization
Quarterly reports → to continuous recalculation
Reactive defense → to proactive risk reduction
It means investing not only in controls, but in the intelligence layer that connects your technical reality to your business priorities.
That layer is the Cyber Risk Broker — and it’s what enables every other part of your security strategy to move at the speed of risk.
Because in the end, it’s not about patching everything. It’s about knowing what to patch next — and why.