Episode 203 – Humans, conferences, and security: let me think and get back to you in a bit

Josh and Kurt talk about human behavior. The conversation makes its way to conferences and the perpetual question of whether a conference is useful or not. We come to the agreement that the big shows aren’t what they used to be, but things like BSides are great experiences.

Show Notes

The ineffective CISO

I’ve been thinking about this one for a while. I’ve seen some CISOs who are amazing at what they do, and I’ve seen plenty who can’t get anything done. After working with one I think is particularly good lately, I’ve made some observations that have changed my mind about the modern day CISO reporting structure.

The TL;DR of this post: if you have a CISO who claims they can only get their job done if they report to the board or the CEO, you have an ineffective CISO.

All change, even change in our organizations, tends to obey Newton’s Third Law of Motion: for every action there is an equal and opposite reaction. Change happens because there is something driving that change. Change doesn’t happen because someone is complaining about it. A CEO demanding action could be your incentive. Maybe you need a better security posture to help sales. Maybe you had an incident and making sure it never happens again is a driver.

What’s the impetus for security change in your organization? If bad security is holding back sales, that’s easy to understand. But what happens when there isn’t an obvious need for security? All change in an organization, especially security change, will be the result of some other action. In our case we are going to call that action our incentive.

Now let’s think about incentives in the context of a CISO. I hear a lot of stories about how a CISO has to report to the board, or the CEO, or how everyone in IT should report to the CISO, or some other crazy reporting structure to give the CISO the power they need to do the job. The number of permutations on this is huge. Why do we think like this? Much of the time, the reason I hear is that the CISO needs to be a part of decision making. It’s important for an organization to be secure, and if the CISO isn’t at the table for every decision, nothing will be secure because nobody else knows how security works. It is of course mostly CISOs saying this, which should surprise nobody.

From what I’ve seen, the reason for talk like this is that in a lot of organizations the security team, and by extension the CISO, is so ineffective at driving change that the only way to get anything done is to have someone at the top of the company demand everyone listen to the CISO, or else! In most instances this isn’t going to be an effective long term strategy. It will work for a little while, then everyone will get sick of it and start looking for a new job.

I’ve seen plenty of examples where the CISO shows up, makes demands with no actionable guidance or plan, gets told no, then blames everyone else because they can’t do their job. Handing the IT team a PDF of the ISO 27001 standard and telling them that’s what they have to do is about as useful as showing up at a restaurant with an actual cow and demanding they make you a hamburger with it. When nothing gets done because nobody in IT knows where to even start, it’s not IT’s fault. It’s the security team’s fault for not giving actual guidance. If they blame IT, having the CISO report to the board isn’t going to suddenly make things better. What it will do is ensure the security team doesn’t get the blame they deserve. It might drive some change if they can get people yelled at or fired, but even then it’s not going to be lasting change. It will be chaotic change.

Now on the other side, I also know of CISOs who have built teams that don’t need a CEO or board to back them up. They have programs and teams that are highly respected within the organization. They’re not seen as a black hole that slows everyone down and generally spreads misery and little else. They empower teams to work better and faster. They understand what risk actually is and they make it work for them. Other groups seek out their advice and ask for help, then get the help they need and probably some things they didn’t even know they wanted. This is what an effective CISO looks like. It doesn’t matter where they report because their leadership is earned, not enforced. A team like this could report to the janitor and still get more done in a week than the CISO reporting to the board gets done in a year. It’s an amazing thing to watch.

Everyone who works in modern IT understands why security matters. There are plenty of stories about leaked databases, stolen logins, unpatched servers, and more. Nobody wants security to be optional, but if your security team is difficult to work with, there is a real incentive to avoid them. The only way you might be able to get other groups to even talk to security is if the CEO is demanding it. They might get the job done, but it will often be in spite of security, not because of security. In many of these settings the incentive is fear and obedience. It’s no secret that these are not good motivators. I do think I would read a business book titled “Fear and Obedience” out of morbid curiosity, but I’m not sure I’d want to work for the author.

I have no doubt not everyone will agree with this particular article. I’m OK with that. I’d love to hear from you. I’m @joshbressers on Twitter. I truly value all feedback, especially feedback that disagrees with my thoughts. I’m not going to say there is no place for a CISO reporting to the board, or the CEO, or the janitor. I think every organization and culture is unique and should be treated as such. I do think anyone who claims they can’t do their job because they lack the authority is probably not going to do much better once they have their fabled authority. It’s the proverbial dog catching the car.

Episode 202 – The convergence of application security

Josh and Kurt talk about the security of applications. We talk about the security of infrastructure all the time, but what happens when we combine infrastructure into an application or solution?

Show Notes

Episode 201 – We broke CVSSv3, now how do we fix it?

Josh and Kurt talk about CVSSv3 and how it’s broken. We started with a blog post to explain why the NVD CVSS scores are so wrong, and we ended up researching CVSSv3 and found out it’s far more broken than any of us expected, in ways we didn’t anticipate. NVD isn’t broken, CVSSv3 is. How did we get here? Are there any options that work today? Where should we go next?

Show Notes

Episode 200 – Talking Container Security with Liz Rice

Josh and Kurt talk to Liz Rice from Aqua Security about container security and her new book on the same topic. What does container security look like today? What are some things you can do now? What will container security look like in the future?

Show Notes

Episode 199 – Special cases are special: DNS, Websockets, and CSV

Josh and Kurt talk about a grab bag of topics. A DNS security flaw, port scanning your machine from a web browser, and CSV files running arbitrary code. All of these things end up being the result of corner cases. Letting a corner case be part of a default setup is always a mistake. Yes, always; not even that one time.

Show Notes

Broken vulnerability severities

This blog post originally started out as a way to point out why the NVD CVSS scores are usually wrong. One of the amazing things about having easy access to data is you can ask a lot of questions, questions you didn’t even know you had, and find answers right away. If you haven’t read it yet, I wrote a very long series on security scanners. One struggle I have is that there are often many “critical” findings in those scan reports that aren’t actually critical. I wanted to write something that explained why that was, but because my data took me somewhere else, this is the post you get. I knew CVSSv3 wasn’t perfect (even the CVSS folks know this), but I found some really interesting patterns in the data. The TL;DR of this post is: it may be time to start talking about CVSSv4.

It’s easy to write a post that makes a lot of assumptions and generally makes up facts to suit whatever argument I’m trying to make (which was the first draft of this). I decided to crunch some data to make sure my hypothesis was correct, and because graphs are fun. It turns out I learned a lot of new things, which of course also means it took me way longer to do this work. The scripts I used to build all these graphs can be found here if you want to play along at home. You can save yourself a lot of suffering by using my work instead of trying to start from scratch.

CVSSv3 scores

Firstly, we’re going to do most of our work with whole integers of CVSSv3 scores. The scores are generally reported as an integer and one decimal place, for example ‘7.3’. Using the decimal place makes the data much harder to read in this post, and the results using only integers were the same. If you don’t believe me, try it yourself.

So this is the distribution of CVSSv3 scores NVD has logged for CVE IDs. Not every ID has a CVSSv3 score, which is OK. It’s a somewhat bell-curve shape, which should surprise nobody.
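If you want a feel for the integer bucketing, here is a minimal sketch. It is not the actual script; the score list is a made-up placeholder standing in for the NVD data.

```python
from collections import Counter

# Placeholder list of CVSSv3 base scores; in practice these would be loaded
# from the NVD data feeds.
nvd_scores = [7.3, 9.8, 5.4, 7.5, 4.3, 9.8, 6.1]

# Bucket each score by its integer part so 7.3 and 7.9 both land in the "7" bucket.
buckets = Counter(int(score) for score in nvd_scores)

for bucket in sorted(buckets):
    print(f"{bucket}: {buckets[bucket]}")
```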

CVSSv2 scores

Just for the sake of completeness, and because someone will ask, here is the CVSSv2 graph. This doesn’t look as nice, which was one of the problems with CVSSv2: it tended to favor certain scores. CVSSv3 was built to fix this. I simply show this graph to point out that progress is being made; please don’t assume I’m trying to bash CVSSv3 here (I am a little). I’m using this opportunity to explain some things I see in the CVSSv3 data. We won’t be looking at CVSSv2 again.

Now I wanted something to compare this data to. How can we decide if the NVD data is good, bad, or something in the middle? I decided to use the Red Hat CVE dataset. Red Hat does a fantastic job capturing things like severity and CVSS scores; their data is insanely open, it’s really good, and it’s easy to download. I would like to do this with some other large datasets someday, like Microsoft, but getting access to that data isn’t so simple and I have limited time.
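If you want to pull the Red Hat data yourself, something like the rough sketch below is one way to do it. The endpoint, parameters, and field names here are assumptions on my part, not a guarantee; check the Red Hat Security Data API documentation for the current interface.

```python
import requests

# Assumed Red Hat Security Data API endpoint for CVE metadata.
URL = "https://access.redhat.com/hydra/rest/securitydata/cve.json"

# Fetch one page of CVE records.
response = requests.get(URL, params={"per_page": 100, "page": 1}, timeout=30)
response.raise_for_status()

for cve in response.json():
    # Each record is expected to carry the CVE ID, Red Hat's severity, and a CVSSv3 score.
    print(cve.get("CVE"), cve.get("severity"), cve.get("cvss3_score"))
```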

Red Hat CVSSv3 scores

Here are the Red Hat CVSSv3 scores. It looks a lot like the NVD CVSSv3 data, which, given how CVSSv3 was designed, is basically what anyone would expect.

Except, it turns out, it’s not quite the same. If we subtract the Red Hat score from the NVD score for every CVE ID and graph the differences, we get something that shows NVD likes to score higher than Red Hat does. For example, let’s look at CVE-2020-10684. Red Hat gave it a CVSSv3 score of 7.9, while NVD gave it 7.1. This means in our dataset the difference would be 7.1 – 7.9 = -0.8

Difference between Red Hat and NVD CVSSv3 scores

This data is more similar than I expected. About 41 percent of the scores are within 1 point of each other. The zero doesn’t mean they match; very few match exactly. It’s pretty clear from that graph that the NVD scores are generally higher than the Red Hat scores. This shouldn’t surprise anyone, as NVD will generally err on the side of caution where Red Hat has a deeper understanding of how a particular vulnerability affects their products.
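For the curious, here is a rough sketch of how that comparison can be computed once the two datasets are paired up by CVE ID. The dictionaries below only contain the single example CVE and are placeholders, not the real data.

```python
# Placeholder dictionaries mapping CVE IDs to CVSSv3 base scores,
# built from the NVD and Red Hat datasets described above.
nvd = {"CVE-2020-10684": 7.1}
redhat = {"CVE-2020-10684": 7.9}

# Only compare CVEs that have a score in both datasets.
common = nvd.keys() & redhat.keys()

# NVD score minus Red Hat score, matching the example above (7.1 - 7.9 = -0.8).
diffs = [nvd[cve] - redhat[cve] for cve in common]

within_one = sum(1 for d in diffs if abs(d) <= 1) / len(diffs)
print(f"{within_one:.0%} of the scores are within 1 point of each other")
```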

Now by itself, we could write about how NVD scores are often higher than they should be. If you receive security scanner reports, you’re no doubt used to a number of “critical” findings that aren’t very critical at all. Those ratings almost always come from this NVD data. I didn’t think this data was compelling enough to stand on its own, so I kept digging: what other relationships existed?

Red Hat severity vs CVSSv3 scores

The graph that really threw me for a loop came when I graphed the Red Hat CVSSv3 scores against the Red Hat assigned severity. Red Hat doesn’t use the CVSSv3 scores to assign severity; they use something called the Microsoft Security Update Severity Rating System. This rating system predates CVSS and in many ways is superior, as it is very simple to score and simple to understand. If you clicked that link and read the descriptions, you can probably score vulnerabilities using this scale now. Learning to use CVSSv3 will take a few days to get started and a long time to be good at.

If we look at the graph we can see the lows are generally on the left side, moderates in the middle, and highs toward the right, but what’s the deal with those critical flaws? Red Hat’s CVSSv3 scores place them in the moderate to high range, but the Microsoft scale says they’re critical. I looked at some of these; strangely, Flash Player accounts for about 2/3 of those critical issues. That’s a name I thought I would never hear again.

The reality is there shouldn’t be a lot of critical flaws; they are meant to be rare occurrences, and generally are. So I kept digging. What is the relationship between the Red Hat severity and the NVD severity? The NVD severity is based on the CVSSv3 score.
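For reference, NVD derives that severity label from the CVSSv3 base score using the standard CVSSv3 qualitative rating scale. A small helper along these lines captures the mapping:

```python
def nvd_severity(score: float) -> str:
    """Map a CVSSv3 base score to the standard qualitative severity rating."""
    if score == 0.0:
        return "none"
    if score <= 3.9:
        return "low"
    if score <= 6.9:
        return "medium"
    if score <= 8.9:
        return "high"
    return "critical"


print(nvd_severity(7.1))  # prints "high"
```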

This is where my research sort of fell off the rails. The ratings provided by NVD and the ratings Red Hat assigns have some substantial differences. I have a few more graphs that help drive this home. If we look at the NVD rating vs the Red Hat ratings, we see the inconsistency.

NVD severity vs Red Hat severity

I think the most telling graph here shows that the Red Hat Low vulnerabilities are mostly medium, high, and critical according to the NVD CVSSv3 scoring. That strikes me as a problem. I could maybe understand a lot of low and moderate issues, but there’s something very wrong with this data. There shouldn’t be this many high and critical findings.
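Here is a sketch of the kind of cross-tabulation behind these graphs. The records are illustrative placeholders; the real input would combine the Red Hat severity with the severity label derived from the NVD CVSSv3 score for each CVE.

```python
from collections import Counter, defaultdict

# Illustrative placeholder records, not real data.
records = [
    {"redhat_severity": "moderate", "nvd_severity": "high"},
    {"redhat_severity": "low", "nvd_severity": "critical"},
    {"redhat_severity": "low", "nvd_severity": "medium"},
]

# For each Red Hat severity, count how NVD rated the same CVEs.
crosstab = defaultdict(Counter)
for record in records:
    crosstab[record["redhat_severity"]][record["nvd_severity"]] += 1

# The interesting row is "low": how many Red Hat low issues does NVD call high or critical?
print(dict(crosstab["low"]))
```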

Red Hat severity vs CVSSv3 scores

Even if we graph the Red Hat CVSSv3 scores for their low issues, the graph doesn’t look like it should, in my opinion. There are a lot of scores of 4 or higher.

Again, I don’t think the problem is Red Hat or NVD; I think they’re using the tools they have the best they can. It should be noted that I only have two sources of data, NVD and Red Hat. I really need to find more data to see if my current hypothesis holds. Then we can determine whether what we see from Red Hat is repeated, or whether Red Hat is an outlier.

There are also some more details that can be dug into. Are there certain CVSSv3 fields where Red Hat and NVD consistently score differently? Are there certain applications and libraries that create the most inconsistency? It will take time to work through this data, and I’m not sure how to start looking at it just yet (if you have ideas or want to try it out yourself, do let me know). I view this post as the start of a journey, not a final explanation. CVSS scoring has helped the entire industry. I have no doubt some sort of CVSS scoring will always exist and should always exist.

The takeaway here was going to be an explanation of why the NVD CVSS scores shouldn’t be used to make decisions about severity. I think the actual takeaway now is that the problem isn’t NVD (well, they’re part of it), the real problem is CVSSv3. CVSSv3 scores shouldn’t be trusted as the only source for calculating vulnerability severity.

Episode 198 – Good advice or bad advice? Hang up, look up, and call back

Josh and Kurt talk about the Krebs blog post titled “When in Doubt: Hang Up, Look Up, & Call Back”. In the world of security there isn’t a lot of actionable advice, so it’s worth discussing whether something like this will work, or even if it’s the right way to handle these situations.

Show Notes

Comment on Twitter with the #osspodcast hashtag

Episode 197 – Beer, security, and consistency; the newer, better, triad

Josh and Kurt talk about what beer and reproducible builds have in common. It’s a lot more than you think, and it mostly comes down to quality control. If you can’t reproduce what you do, you’re not a mature organization, and you need maturity to have quality.

Show Notes

Episode 196 – Pounding square solutions into round holes: forced updates from Ubuntu

Josh and Kurt talk about automatic updates. Specifically we discuss a recent decision by Ubuntu to enable forced automatic updates. There are lessons here for the security community. We have a history of jumping to solutions rather than defining and understanding problems. Sometimes our solutions aren’t the best. Also murder bees.

Show Notes