How to prioritize CVEs

(And why the CVSS score isn’t enough)

Propolis
6 min read · Sep 11, 2024

Dependency vulnerability scanners generally output a list of findings containing CVE field data such as the CVSS score. While useful, CVSS scores and other intrinsic CVE details don’t answer the most pressing question: how much should my organization care about this finding? A vulnerability’s real-world impact depends on context that scanners can’t provide.

In this article I’ll enumerate some context factors (data points) that help make an accurate prioritization / triage decision. In parallel, let’s look briefly at the ‘automation effort’ for each factor — meaning, how easily that kind of context information can be loaded, machine-accessed and incorporated into an automatic prioritization workflow. Triage personnel already intuitively consider these context factors when triaging a vulnerability detection, but if the context isn’t added automatically, this means a set of manual searches for each finding. The holy grail of vulnerability management, of course, is to spend little-to-no energy on each triage. So ideally, automation is used to patch in the context that can be automatically collected and calculate a new contextual vulnerability score before humans are ever involved — and the humans can spend their energy working from the top of the better-sorted pile.

As you read along, think about how your existing triage processes collect and weigh each factor — automatically, manually, or not at all. Do you have something to add? Please let me know in a comment! :)

Triage factors

(The puzzle pieces needed to assess risk for a vuln)

🩺You’ve just arrived at the ER with a sprained ankle. The nurse asks you to “describe your pain, on a scale of 1–10”. Because you give them the CVSS score, they decide to amputate.

To fully grasp the risk from a vulnerability, it’s essential to evaluate both intrinsic factors (which describe the specific security problem discovered in a piece of code) and contextual factors. Unlike intrinsic factors, contextual factors may change over time and might not be public knowledge (e.g. activity details available through a paid security vendor feed; or a business factor known / decided within your company).

Splitting the contextual factors further by where you’ll probably find the information, we end up with three categories in total: 🕷️intrinsic details, 🌐exploit intelligence, and 🧠business context.

🕷️ Intrinsic details

These factors are inherent to the vulnerability and are generally provided by the scanner. Alternatively, all of them can be queried trivially from a vulnerability database using the CVE ID.

💡These details, together with the impact metrics (Confidentiality, Integrity, Availability), make up the CVSS “Base” score.

  1. Attack Vector (AV):
    Description: How the vulnerability can be exploited (network, local, etc.)
    Automation Effort: 🟢 Simple
  2. Attack Complexity (AC):
    Description: The difficulty level of exploiting the vulnerability.
    Automation Effort: 🟢 Simple
  3. Privileges Required (PR):
    Description: The level of access needed to exploit the vulnerability.
    Automation Effort: 🟢 Simple
  4. User Interaction (UI):
    Description: Whether the exploit requires user interaction (e.g., opening a malicious file).
    Automation Effort: 🟢 Simple
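
Since these intrinsic metrics ship inside the standardized CVSS vector string, extracting them for an automated pipeline is straightforward. Here’s a minimal sketch (the helper name is my own) that pulls the four exploitability metrics out of a CVSS v3.1 vector:

```python
def parse_cvss_vector(vector: str) -> dict:
    """Extract AV/AC/PR/UI from a CVSS v3.1 vector string,
    e.g. 'CVSS:3.1/AV:N/AC:L/PR:N/UI:N/...'."""
    parts = vector.split("/")
    # Skip the leading 'CVSS:3.1' prefix; remaining parts are 'METRIC:VALUE'
    metrics = dict(p.split(":", 1) for p in parts[1:])
    return {k: metrics.get(k) for k in ("AV", "AC", "PR", "UI")}

example = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
print(parse_cvss_vector(example))  # {'AV': 'N', 'AC': 'L', 'PR': 'N', 'UI': 'N'}
```

The same parsing approach works for the impact metrics (C/I/A), which share the vector format.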

🌐 Exploit intelligence

These factors need to be updated over time, and give an indication of how likely any internet-facing vulnerable system is to experience an attack against the specific vulnerability. Generally, you’ll only get these out of the box with paid SaaS vulnerability management solutions.

  1. Exploit Availability:
    Description: Whether exploit code for the vulnerability is publicly available.
    Automation Effort: 🟡 Moderate
    Automation Mechanisms: Paid threat intelligence feeds, EPSS score
  2. Threat Actor Activity:
    Description: Whether the vulnerability has been actively exploited by threat actors.
    Automation Effort: 🔴 Challenging
    Automation Mechanisms: Paid threat intelligence feeds, EPSS score
  3. Exploit Maturity:
    Description: How reliable and well-developed the exploit is.
    Automation Effort: 🟡 Moderate
    Automation Mechanisms: Threat intelligence, exploit databases, forums
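
As a sketch of how such feeds can be folded into a single likelihood factor: assume you’ve pre-fetched EPSS probabilities (FIRST serves these per CVE) and the set of known-exploited CVE IDs (e.g. from the CISA KEV catalog). The weighting below is illustrative, not a standard formula, and the CVE IDs are placeholders:

```python
def exploit_likelihood(cve_id: str, epss_scores: dict, kev_ids: set) -> float:
    """Combine an EPSS probability with known-exploited status into one 0..1 factor."""
    score = epss_scores.get(cve_id, 0.0)  # EPSS: probability of exploitation, 0..1
    if cve_id in kev_ids:                 # confirmed in-the-wild activity
        score = max(score, 0.9)           # illustrative floor for KEV entries
    return score

# Placeholder data standing in for real feed responses:
epss = {"CVE-0000-0001": 0.97, "CVE-0000-0002": 0.02}
kev = {"CVE-0000-0001"}
print(exploit_likelihood("CVE-0000-0001", epss, kev))  # 0.97
print(exploit_likelihood("CVE-0000-0002", epss, kev))  # 0.02
```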

🧠 Business context

These factors relate to the service or system where the vulnerability was found, and require human input as they depend on your specific business context and operational priorities. Neither free nor paid software can add this context for you (you’ll need to label / catalogue your assets), but paid offerings typically make it easier to ingest the annotation / cataloguing you’ve done, for example by understanding environment-detected tags.

  1. Criticality of the Asset:
    Description: The importance of the affected system to business operations.
    Automation Effort: 🔴 Challenging
    Automation Mechanisms: Tagging, asset management systems
  2. Data Sensitivity:
    Description: Whether the system handles sensitive data.
    Automation Effort: 🔴 Challenging
    Automation Mechanisms: Tagging
  3. Compliance Risks:
    Description: Whether the vulnerability impacts regulatory compliance (e.g., GDPR, HIPAA).
    Automation Effort: 🔴 Challenging
    Automation Mechanisms: Tagging
  4. Operational Impact:
    Description: The potential disruption to business operations if the vulnerability is exploited.
    Automation Effort: 🔴 Challenging
    Automation Mechanisms: Tagging
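
Once assets are tagged, the annotation can be reduced to a machine-usable weight. A minimal sketch, where the tag names and weights are entirely assumptions you’d replace with your own catalogue conventions:

```python
# Illustrative mapping from asset tags (maintained in your asset catalogue)
# to a business-criticality weight in 0..1.
TAG_WEIGHTS = {
    "prod": 1.0, "internet-facing": 1.0,
    "pii": 0.9, "pci": 0.9,
    "internal": 0.5, "dev": 0.2,
}

def business_criticality(tags: set) -> float:
    """Take the highest weight among an asset's tags; untagged assets
    get a conservative default so they aren't silently deprioritized."""
    return max((TAG_WEIGHTS.get(t, 0.0) for t in tags), default=0.3)

print(business_criticality({"prod", "pii"}))  # 1.0
print(business_criticality({"dev"}))          # 0.2
```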

What is CVSS useful for?

CVE vulnerabilities are generally given a CVSS (Common Vulnerability Scoring System) score, a number indicating criticality that ranges from 0 to 10. This number is generally derived from the ‘Base’ (intrinsic) metric factors (see 🕷️ Intrinsic details above) and gives a good indication of how bad a security risk the vulnerability can present in the worst case. The problem is that we often see hundreds or thousands of vulnerabilities in scan results, and prioritization efforts end up focusing on the wrong ‘worst case’ score: the generic one. The context-factored ‘worst case’ is almost always lower, and mitigating the much smaller filtered set of still-very-bad-in-context vulnerabilities would be a better use of time for actual defense of the organization.

The makers of CVSS are of course also aware of the context problem, and in version 3 and 4 introduced fields specifically to allow vendors and analysts to add contextual scores: CVSS v3 includes Temporal and Environmental metrics, which theoretically add external (exploit likelihood) and internal (asset criticality) contexts. However, NVD and most other common CVE databases only provide Base (intrinsic) scores and ignore Temporal metrics due to the challenge of keeping them up to date. Environmental scores, which must reflect an organization’s unique environment, are meant to be used for asset criticality adjustments by security analysts / scan tooling, but I’m not aware of any vulnerability management tooling that uses this CVSS section to store that context.

CVSS v4 is not yet widely in use as of late 2024 (few vulnerabilities have been scored with it), and it will not fundamentally address the data collection problem that affects v3. Interestingly, though, it introduces a “Supplemental” metric group, intended for additional vendor-supplied context that does not affect the numerical score.

In short, the CVSS scores provided in scanner outputs are generally the Base (intrinsic) score, and if you want to work with CVSS, you will need tooling / automation that supports:

  1. annotating the finding with asset criticality (you supply this)
  2. annotating the finding with exploit intelligence, where available
  3. adjusting the base score based on the above context.
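
Putting those three steps together, a contextual re-score might look like the sketch below. The multiplicative formula is an illustration of the idea, not the official CVSS Environmental calculation:

```python
def contextual_score(base_score: float, exploit_likelihood: float,
                     asset_criticality: float) -> float:
    """base_score: CVSS Base (0-10); the two context factors are in 0..1.
    A small floor on exploit likelihood keeps no-intel findings from zeroing out."""
    return round(base_score * max(exploit_likelihood, 0.1) * asset_criticality, 1)

# A critical finding on a dev box with no known exploit drops sharply:
print(contextual_score(9.8, 0.05, 0.2))
# The same finding, known-exploited on a production asset, stays urgent:
print(contextual_score(9.8, 0.97, 1.0))
```

Sorting findings by this contextual score (instead of the raw Base score) is what produces the “better-sorted pile” for human triage.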

Summary

CVSS scores provided with CVE details are valuable information, but they are only one piece of the puzzle. A CVE’s structure tells you how a specific piece of code is insecure and provides an intrinsic severity score, but it can’t tell you how the vulnerability will impact your business. Real risk assessment requires context: the importance of the affected assets, the sensitivity of the data, and whether the vulnerability is likely to be exploited in the wild.

To fully protect your organization, you need a balance of automation and human expertise. You’ll want to:

  1. work with security and product owners to catalogue assets by criticality
  2. add tooling to patch in exploit intelligence to findings, if missing and desired
  3. implement scan / triage tooling that can consume the above context and make it available in prioritization flows.

Typically, paid SaaS vendors can help with the last two items (exploit intelligence and context-aware tooling), but the brain work of criticality annotation and customization of threat feeds will still be your responsibility, so don’t overlook it! If you run an open source stack, you will likely have to build custom scripts to integrate the above context details into scanner output.

Ultimately, your goal should be to minimize the time spent triaging / prioritizing each finding. Carefully automating annotation of the needed external and business-specific context after the vulnerability detection stage enables fast, even automatic decisions about some vulnerabilities, where previously an analyst had to collect the data separately for each investigation.

That’s all for now! 👋

Bonus: Some example triage questions

Here are a few example triage questions and the factors / factor categories their answers depend on:

  • How does it spread? (🕷️attack vector)
  • How hard is it to exploit? (🕷️complexity, 🕷️user interaction requirement, 🌐exploit availability, maturity)
  • Where is the vulnerability? (🧠criticality of affected asset)
  • What happens if it’s exploited? (🕷️exploit type, 🧠linked systems knowledge)
  • What happens if it’s left unpatched? (🧠compliance? 🧠contractual obligations?)


