There was recently a really good thread about the Copy Fail vulnerability between Will Dormann and Greg K-H. The TL;DR is that vulnerability reporting and disclosure is in a weird state of flux. This discussion got me wondering what’s going on, and I think the extremes of how vulnerability handling has always worked are emerging. The middle of the bell curve has been removed.

There are three groups in this story: the Security Researchers, the Companies, and the Open Source developers. In the above discussion Will is a security researcher (one of the best I’ve ever seen). Greg is part of open source. There isn’t a great company representative, but that’s OK.

The other important thing to keep in mind is that LLMs have made it possible for nearly anyone to find security vulnerabilities, and the tools are getting pretty good pretty fast. Love them or hate them, it’s the reality. The quality of these findings is a matter of debate, but as I will explain soon, it probably doesn’t matter.

Some history

Historically a lot of vulnerability reporting and interactions happened between researchers and companies. The researcher would report something, maybe there was a bug bounty, maybe not. The company would then try to weasel out by calling the vulnerability just a bug, or lowering the severity from critical to low. Companies have an economic incentive to not call things vulnerabilities.

The economic incentives here probably don’t make sense to anyone outside this universe, I’ll try to explain briefly.

Companies have contracts and customers, so vulnerabilities cost them real money. The lower the severity of a vulnerability, the easier it is for them to not fix something or slow walk the fix. I don’t think this is inherently bad, if everything is critical nothing is.

The researchers have incentives too. I Am The Cavalry has something they call the 5 Motivations of Security Researchers: Protect, Puzzle, Prestige, Profit, and Protest/Patriotism. The two I want to focus on are prestige and profit. If they are looking for prestige, they want to find critical vulnerabilities. If they’re looking for profit, they will work with a company and probably accept whatever severity the company wants. If you don’t cooperate you don’t get paid.

In the past a lot of open source vulnerability disclosure happened through a Linux Distro. Somewhere like Red Hat, Debian, or Canonical might get a report then work with the researcher and open source to get it dealt with. We called this “coordination”. This sort of made sense since a lot of open source was delivered by a Linux distribution. The distros cooperated partly because it was the right thing to do and partly because nobody wanted to be left holding the proverbial vulnerability bag by themselves.

But things are different now. The amount of open source is hundreds of times bigger than it was a decade ago, and the number of researchers is much larger, somewhat due to LLMs and somewhat due to the population increasing. But the point is, historically we didn’t really worry about the economics of open source and vulnerabilities; the economics mostly lined up and things seemed to work OK.

It’s different now

Everything is different now. There are a lot of security reports going directly to open source projects. There are many reasons for this. Two big ones are that GitHub made it easy to report directly to the project, and that no single company can claim responsibility for more than a very small slice of open source projects. Most of us aren’t getting our open source from a Linux distribution anymore.

What’s the economic model for open source projects handling vulnerabilities? Most open source projects do not exist for the purpose of fixing vulnerabilities. They exist to solve some problem that probably has nothing to do with security. And the developers want to work on interesting features, not fix security vulnerabilities.

I don’t have any proof, but I suspect the vulnerability incentive for most open source projects is something like “let’s get rid of this report as fast as possible”. There’s no incentive to lower the severity, or declare something a bug instead of a vulnerability on a technicality. It’s almost certainly easier to just fix it and get back to doing whatever they actually want to do.

Open source projects don’t have customers or contracts. There are no SLAs or notification requirements. The incentives today between researchers and projects are probably a lot more aligned than the company incentives.

The Linux Kernel is a good example here. The Kernel assigns lots of CVEs. They say it’s because they don’t really know how the Kernel is being used, so they err on the side of caution. Companies hate this because they have to deal with a lot of CVEs. Does the Kernel do this because it’s easier, or do they have some sort of secret nefarious reason? Probably because it’s just easier and they have zero downside to disclosing and moving on. The Kernel doesn’t have customers and SLAs. It’s an open source project. If you don’t like how they work you can just switch to … hahahahaha just kidding, Linux won.

But anyway, they assign CVEs and that’s it. This is true of most open source projects. Once that release gets cut, they’re done. Just assigning IDs and moving on is cheaper than trying to coordinate or argue with researchers. If you don’t like this process the only real recourse you have is complaining on social media. You can’t threaten to not renew your contract.

What about the poor companies?

So this brings us to the economic incentives for the companies in all this. They don’t really have any power over what’s going on. The Copy Fail vulnerability saw much wailing and gnashing of teeth, and that’s quite literally the most impactful thing an organization can do here.

It was made worse because the researchers behind this used it as a PR stunt. Then a lot of people who became security experts after being pandemic experts, and supply chain experts, and inflation experts … they helped get everyone all riled up. I would like to say not winding everyone up is the solution, but I don’t think that will ever happen.

Anyway, companies have more on the hook when there is a vulnerability. They don’t get to cut a release and call it done. They will need to talk to customers, run some support sessions, document what’s going on, and communicate clearly what they’re doing (or not doing). It’s quite a bit of work. But that’s sort of the point. Customers pay them for this very thing. If you want free stuff with no support or strings, we have that, it’s called open source.

What happens now?

A lot of people will proclaim the old way is dead, and it sort of is. A lot of the new research will end up being reported directly to open source. A concern is that open source projects are being overwhelmed by vulnerability reports. This is a valid concern.

The open source projects that do fix security things will want to get the fix out quickly and move on. They don’t have an incentive to try to drag these things out and tell lots of people.

I’m sure a bunch of companies will demand someone think of the poor companies and insist someone notify the super important companies, which they of course are one of. But that ship is so far gone now it’s laughable. How would a project even figure out who to coordinate with?

There’s probably some sort of message in here about if you use open source you need to support those projects. It’s possible an economic incentive could emerge where an open source project can work with their downstream (for a price) to make vulnerabilities less painful. This won’t work for something like the Linux Kernel or Kubernetes, they’re just too big. It also might not work for projects that are too small. Who knows.

Fundamentally I think there is a new truth we all have to accept: the age of security embargoes is over.

Anyone building software has to exist in a reality where they will get no notice of critical vulnerabilities. Something will come out and you will have to deal with it. This will happen constantly. Anyone in this space has been seeing a new serious thing at least once a week for many months now. Sometimes there will be a patch when the research is released, sometimes there won’t.

We have constructed most of our security practices around a world that is gone. We rely on embargoes and Patch Tuesday being a thing. The smart security teams will understand this and adapt. The bad ones will burn out and blame someone else, like the Linux Kernel.

A lot of vulnerabilities are super lame

At some point we might also need to change what we call a security vulnerability. A lot of the CVEs don’t really matter. It’s hard to know which are lame and which are good. I’ve gotten this wrong many times, as have all security people. Security bugs are usually hard to understand, and a clever exploit can turn a seemingly lame issue into a full-blown emergency. There are also examples where something we all thought would be an emergency was a dud.

I’m not very optimistic this will happen, as a lot of our existing compliance rules don’t account for this reality. And a huge number of the people who care about vulnerabilities don’t care because it keeps things secure, they care because the compliance auditor told them to care. I have no problem with this; the job of compliance is getting a company to do something it doesn’t really want to do.

What can we do?

If you’re a researcher, I suspect the ability to profit from this is going to diminish to zero. Bug bounties only exist because vulnerabilities were scarce. Perhaps critical bugs will continue to be scarce. There will always be researchers, but they will have to be driven less by profit and more by the other P’s.

If you’re an open source maintainer, good luck. Nobody knows what to do with all these vulnerabilities in the world of open source yet. Don’t burn yourself out. It’s OK to take it slow and ask researchers to lend a hand.

If you’re a vulnerability person at a software company, it’s going to be rough for a while, maybe forever. You’re going to get vulnerability reports that are overly long and annoying to figure out. All the open source in your stack is going to have tons of vulnerabilities you need to track. Some of that open source will end up abandoned. There will be vulnerabilities without fixes or clear plans for a fix. You will run scanning tools that find problems everywhere they look.

I would love to say everyone should just move faster. Or that some magic tool will save us. I don’t think those are serious or honest statements. We’re already moving as fast as we can. Lots of companies already have all the tools and it’s not fixing the problems. The people who claim AI can solve this are literal clowns. Ask them which circus they ran away from.

My only real suggestion is to try not to burn yourself out and to be nice to each other. Everyone is going to have it rough, it’s not just you. We probably need a support group or something.