Aaron Frost explores the overly complex world of vulnerability identifiers for end of life software. We discuss how incomplete CVE reporting creates blind spots for users while arming attackers with knowledge. The conversation uncovers the ethical tensions between resource constraints and security transparency, highlighting why the “vulnerable until proven otherwise” approach is the best path forward for end of life software.
Episode Links
This episode is also available as a podcast, search for “Open Source Security” on your favorite podcast player.
End of Life Vulnerabilities
There’s a problem lurking in the world of software security that doesn’t get nearly enough attention. The core dilemma is easy to understand, but the devil is in the details. When a vulnerability is discovered in current software versions, what responsibility do maintainers have to verify and disclose whether that same vulnerability affects older, end of life versions? We can’t expect developers to put substantial effort into the EOL versions, but what does that mean for assigning CVEs?
The Challenge of Backwards Vulnerability Assessment
Many developers understandably focus their security efforts on their currently supported software versions. When a new vulnerability surfaces, they diligently verify whether it affects their active codebases and release appropriate patches. But what do we do about all those EOL versions still running in production environments?
Investigating old versions takes time and money that maintainers rarely have, and this economic reality means that many organizations don't include EOL versions when reporting vulnerabilities. They verify the issue in current versions, issue the CVE identifier for those versions, and consider the matter closed.
But what if an old version is vulnerable? Now there's a blind spot: the affected-version information in security advisories is generally treated as accurate, so if an old version isn't marked as vulnerable, many organizations won't know they have a problem.
This brings us to the question at the heart of this issue. Is it responsible to disclose a vulnerability exists in current versions without investigating its presence in older, widely-deployed versions? Does doing so inadvertently create a roadmap for attackers to exploit systems running legacy software?
At the same time, there's the practical reality that open source maintainers are often working with limited resources. Demanding they verify vulnerabilities across numerous legacy versions may simply not be feasible. Open source owes us nothing; we can't expect maintainers to do our homework for us.
The Default Should Be Vulnerable Until Proven Otherwise
The approach Aaron and I both agree on for this problem is to adopt a “vulnerable until proven otherwise” stance for older software versions. When a vulnerability is discovered in version 2.5, the assumption should be that versions 2.4, 2.3, and earlier are also affected unless specifically verified otherwise, even if version 2.4 is EOL and no effort has been made to investigate whether it's affected.
This more conservative approach to vulnerability reporting would still require additional work, but it shifts the burden of proof in a way that prioritizes user security. Rather than requiring proof that an older version is vulnerable, we would need to put in the effort to prove an old version isn't vulnerable, for example by showing the vulnerable code isn't present in the older version.
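To make the policy concrete, here is a minimal sketch of how a vulnerability scanner might apply the "vulnerable until proven otherwise" default. The `Advisory` structure and helper names are hypothetical illustrations, not part of any real CVE or advisory schema: the point is simply that unexamined versions older than the newest verified-affected release default to vulnerable.

```python
# Sketch of a "vulnerable until proven otherwise" policy for EOL versions.
# The Advisory shape and function names are hypothetical, for illustration.
from dataclasses import dataclass, field


@dataclass
class Advisory:
    # Versions maintainers verified as affected (the supported branches).
    verified_affected: set[str]
    # EOL versions someone actually checked and proved NOT vulnerable.
    verified_not_affected: set[str] = field(default_factory=set)


def parse(version: str) -> tuple[int, ...]:
    """Turn '2.4' into (2, 4) so versions compare in release order."""
    return tuple(int(part) for part in version.split("."))


def assume_vulnerable(advisory: Advisory, version: str) -> bool:
    """Conservative default: any version at or below the newest
    verified-affected release counts as vulnerable unless it was
    explicitly proven otherwise."""
    if version in advisory.verified_not_affected:
        return False
    if version in advisory.verified_affected:
        return True
    newest_affected = max(parse(v) for v in advisory.verified_affected)
    # Older, uninvestigated (often EOL) versions default to vulnerable.
    return parse(version) <= newest_affected


adv = Advisory(verified_affected={"2.5"}, verified_not_affected={"2.2"})
print(assume_vulnerable(adv, "2.5"))  # True  (verified by maintainers)
print(assume_vulnerable(adv, "2.4"))  # True  (EOL, assumed vulnerable)
print(assume_vulnerable(adv, "2.2"))  # False (someone proved it isn't)
```

Note the asymmetry this encodes: version 2.2 only escapes the vulnerable bucket because someone did the work to prove it isn't affected, which is exactly where the burden of proof should sit.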
The Reality of EOL
When discussing this topic, it’s crucial to acknowledge the reality that EOL software continues to run in production environments for many reasons. Despite the common refrain of “just upgrade,” organizations often find themselves locked into older versions due to compatibility requirements, resource constraints, or even contractual obligations.
These situations create a landscape where vulnerability information for EOL software isn't just an academic concern; it's information we need for risk management as well as for adhering to various compliance standards.
The Way Forward
Addressing all of this requires a different way of thinking about vulnerabilities and EOL. Historically we've just sort of ignored EOL. We need to better understand that software ages in complex ways, with codebases persisting in production far longer than their official support lifespans. Our vulnerability management approaches should evolve to reflect this reality. They probably won't, because change is hard and there's not enough consensus about anything in the vulnerability world. But sometimes just pointing out a problem is the start.