Josh and Kurt talk about how to think about open source in the context of society. Open source is more like a natural resource than a supplier. It’s common to think of open source projects as delivered to us, but it’s more like acquiring raw materials from the forest. The problem is we’re harvesting the raw materials in an unsustainable manner at the moment.
The perverse incentive of vulnerability counting
It seems like every few years the topic of counting vulnerabilities in products shows up. Last time the focus seemed to be around vulnerabilities in Linux distributions, which made distroless and very small container images popular. Today it seems to be around the vulnerabilities in open source dependencies. The general idea is you want as few vulnerabilities as possible in the open source you're using, so logically zero is the goal.
However, trying to get to zero vulnerabilities in your products, projects, and infrastructure is a perverse incentive. It's easy to imagine zero as the end state, but you end up with the cobra effect. A goal of zero vulnerabilities will result in zero vulnerabilities, but not in the way you want. And really, zero isn't what you want; what you want is a process that reduces your risk. If all you focus on is vulnerability counting, there's a very good chance you would lower your vulnerability count and accidentally increase risk elsewhere.
Why do we care about vulnerabilities
It's not hard to understand why vulnerabilities are important. Some of the biggest tech events to ever happen involve vulnerabilities. Heartbleed, Shellshock, Log4Shell, and vulnerabilities yet to come will all be burned into our collective minds forever. It's easy to make the connection that more vulnerabilities are bad, and fewer vulnerabilities are good. This view is comparable to kids believing that when they grow up they will be able to eat candy for supper every day. Unfortunately the topic of vulnerabilities is not something that can fit in a tweet. It's all very complicated. In our world of sound bites and tweets, it can be hard to discuss a complicated problem.
Not all vulnerabilities matter
This problem isn't new, and that means some security folks have been dealing with vulnerabilities for a very long time. Red Hat has a nice blog article that explains why not all vulnerabilities are equal: Do all vulnerabilities really matter? The short version is there are more vulnerabilities than can be fixed in a reasonable amount of time, so they prioritize and batch the fixes. Many companies have teams of people working to understand how a given vulnerability affects their product or service; those vulnerabilities are then dealt with in order of importance.
Here is where we can start writing tweets about using small distributions, or just upgrading multiple times a day, or some other IF YOU DO THIS ONE THING!!! The reality is there are zero easy fixes. Anyone claiming to know how to fix this is running a low budget rendition of The Music Man. Maybe not low budget, actually; the original Music Man budget was only $15 million.
Anyway, running bleeding edge and deploying dozens of times a day comes with its own set of risks that have to be managed. If we can’t just update everything constantly, we have to start to prioritize our work and deal with what matters. Obviously you want to address the most dangerous vulnerabilities first.
Today when we try to understand what matters, we tend to fall back on CVSS scores. There’s a lot wrong with CVSS, but it’s basically all we have now, so logically it gets the most use. One of the great sins of security past has been complaining incessantly about things like CVSS while offering no alternatives. If you find a better way to score your vulnerabilities, that’s great (also you should tell everyone). If not, using CVSS is a fine place to start. Remember the goal should be to reduce risk overall, not reduce it in one place while increasing it somewhere else.
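To make the "address the most dangerous first" idea concrete, here is a minimal triage sketch. The finding data and the 7.0 cutoff are illustrative assumptions, not output from any real scanner or an official triage policy:

```python
# Hypothetical scanner findings; the entries and scores are made up
# for illustration only.
findings = [
    {"id": "CVE-2021-44228", "component": "log4j-core", "cvss": 10.0},
    {"id": "CVE-2014-0160", "component": "openssl", "cvss": 7.5},
    {"id": "CVE-2023-0001", "component": "some-small-lib", "cvss": 3.1},
]

# Work the queue from the highest score down instead of chasing a
# count of zero: the ordering is the point, not the total.
queue = sorted(findings, key=lambda f: f["cvss"], reverse=True)

# Optionally draw a line: anything at or above the cutoff gets a
# human look first; the rest is batched, not hidden.
HIGH_CUTOFF = 7.0  # assumed threshold, tune for your own risk appetite
urgent = [f for f in queue if f["cvss"] >= HIGH_CUTOFF]

for f in urgent:
    print(f'{f["id"]} ({f["component"]}): {f["cvss"]}')
```

The mechanics are trivial on purpose; the hard part is getting scores that reflect your environment, which is exactly where CVSS falls short.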
Zero is impossible
Why is zero vulnerabilities impossible? It doesn’t seem like it should be all that hard to fix all the vulnerabilities. Just run super small containers, use only the dependencies you need, upgrade everything quickly, and DONE!
This is probably true when you're a small team (or maybe giving a conference keynote), but if you've ever been part of a group managing infrastructure more than a few years old, it's rarely as easy as running the minimum and upgrading six times a day. You're on the 12th generation of developers. Nobody remembers why you can't shut down that machine in us-east-1, but if you do, everything breaks. There are dependencies you can't find the source for anymore. The tests broke a week ago and there's no time to fix them because everyone is off on Christmas break.
If you tell people in that situation they need zero vulnerabilities, they will find a way to make the scanner report zero. The scanner won't report zero because everything was upgraded quickly; it will report zero because the vulnerabilities have been hidden. This comes back to the idea of increasing risk elsewhere. While hiding things gives the impression of reducing risk, we've actually increased the overall risk by a lot.
How do we fix this?
First, and most importantly, we need to fix our mindsets. Saying the goal is to reduce risk sounds great, but on its own it's as meaningless as calling everything the "supply chain". What we really want is to reduce the number of vulnerabilities that matter in our infrastructure. Being able to measure your infrastructure is probably the first goal. Do you know what you're running? Assuming we do know what we're running, then we need to know what vulnerabilities we have. Then we need to know which vulnerabilities matter.
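That three-step chain (inventory, then vulnerabilities, then the ones that matter) can be sketched in a few lines. The inventory and advisory records below are invented placeholders, not a real data source or advisory format:

```python
# Hypothetical inventory of what you run: package name -> version.
inventory = {
    "openssl": "1.0.1f",
    "log4j-core": "2.17.1",
    "nginx": "1.22.0",
}

# Hypothetical advisories, each listing the versions it affects.
advisories = [
    {"id": "CVE-2014-0160", "package": "openssl", "affected": {"1.0.1f"}},
    {"id": "CVE-2021-44228", "package": "log4j-core", "affected": {"2.14.1"}},
]

# A vulnerability only makes the list if the affected package AND
# version are actually present in the inventory; everything else is
# noise you would otherwise be counting.
present = [
    a for a in advisories
    if inventory.get(a["package"]) in a["affected"]
]

for a in present:
    print(a["id"], a["package"])
```

Here the patched log4j-core drops out and only the affected openssl remains, which is the whole point: without an accurate inventory, you can't do even this first filtering step.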
If you can't or don't want to hire a security team to go over every vulnerability report and identify which vulnerabilities matter, you're going to use CVSS. It's broken, but it's something, so it gets used. There are other scoring methods in the works, such as the Bugs Framework, the Exploit Prediction Scoring System (EPSS), and the Vulnerability Exploitability eXchange (VEX), that show promise, but nothing is as widely supported as CVSS today. There are bespoke databases that can give out some sort of risk score, and those can be fine too, but often it's a general rating that might not apply to your usage. Due to the complex nature of software, there's never just one way to do something. This infinite number of combinations is why things like CVSS scores are so hard to get right.
Asking vendors for risk ratings is also acceptable. If you are using a product or service, the vendor should know which vulnerabilities matter. Then you can focus on what you're doing rather than all the vulnerabilities in software from a vendor. Keep in mind that open source is not your vendor; demanding vulnerability ratings from open source projects isn't OK, because they owe you nothing.
At the end of the day, this is really about doing things that make sense for each of us. There’s no one way to deal with vulnerabilities. There’s no one way to develop software. The only thing I’m sure about is you shouldn’t be getting your security advice from Twitter. Keep your end goal in mind of reducing risk and watch out for those perverse incentives.
Episode 356 – LastPass ducked up, now what?
Josh and Kurt talk about the LastPass saga. There’s a lot of great explanations about what happened, but there hasn’t been a lot of info on how to start cleaning up this mess. We rehash some of the existing details then try to untangle what existing users can do to try to start recovering. The real problem is how LastPass is dealing with this, not the technical details.
Episode 355 – Security Boxing Day
Josh and Kurt talk about some security gifts for boxing day. We start out with the idea of the security poverty line and discuss a few ideas for how a low resource group can make their open source more secure. There are no simple answers unfortunately.
Episode 354 – Jerry Bell tells us why Mastodon is awesome and MFA is hard
Josh and Kurt talk about how hard multi factor authentication is. This all starts from a Mastodon thread, and Jerry Bell, the administrator of infosec.exchange joins us to discuss password security and all things Mastodon. Infosec.exchange is an incredible story and Jerry weaves a thrilling tale.
Episode 353 – Jill Moné-Corallo on GitHub’s bug bounty program
Josh and Kurt talk to Jill Moné-Corallo about GitHub’s bug bounty and product security team. It’s a treat to discuss bug bounties with someone who is managing a very large bug bounty for one of the most important web sites in the world of software today.
Episode 352 – Stylometry removes anonymity
Josh and Kurt talk about a new tool that can do Stylometry analysis of Hacker News authors. The availability of such tools makes anonymity much harder on the Internet, but it’s also not unexpected. The amount of power and tooling available now is incredible. We also discuss some of the future challenges we will see from all this technology.
Episode 351 – Is security or usability a law of the universe?
Josh and Kurt talk about end to end encrypted messages. This has been a popular topic lately due to the Mastodon popularity. Mastodon has a uniquely insecure messaging system, but they aren't the only one. It's the eternal debate: can security and usability exist together? We suspect they can't, but it's a very complicated topic.
Episode 350 – Spam, Email, Content Moderation, and Infrastructure Oh My
Josh and Kurt talk about email security and the perils of trying to run your own mail infrastructure. We then get into discussing the value and danger of trying to run your own infrastructure, email, blogs, or most anything. There's a lot to juggle these days; it's complicated.
Episode 349 – The cyber is coming from inside the house – the UK is scanning itself
Josh and Kurt talk about the UK plan to scan their country’s IP space. The purpose and outcome of this isn’t completely clear at this point, but we are hopeful the data can be used as a positive force. We are only going to see more programs like this as all the governments are told they have to cyber harder.