Episode 217 – How to tell your story with Travis Murdock

Josh and Kurt talk to Travis Murdock about how to tell your story. Travis explains how to talk to the press and tell our story in a way that gets our message across and lets the reporter do their job better.

Show Notes

Episode 216 – Security didn’t find life on Venus

Josh and Kurt talk about how we talk about what we do in the context of life on Venus. We didn’t really discover life on Venus; we discovered a gas that could be created by life on Venus. The world didn’t hear that though. We have a similar communication problem in security. How often are your words misunderstood?

Show Notes

Episode 215 – Real security is boring

Josh and Kurt talk about attacking open source. How serious is the threat of developers being targeted or a git repo being watched for secret security fixes? The reality is that a security journey has many layers, and the most important things you can do are also the least exciting.

Show Notes

Episode 213 – Security Signals: What are you telling the world

Josh and Kurt talk about how your actions can tell the world if you actually take security seriously. We frame the discussion in the context of Slack paying a very low bug bounty and discover some ways we can look at Slack and decide if they do indeed take our security very seriously.

Show Notes

We take security seriously, VERY SRSLY!

Every company tells you they take security seriously. Some even take it very seriously. But do they? I started to think about this because of a recent Slack bug. I think there are a lot of interesting things we can look at to decide if a company is taking security seriously or if the company thinks security is just a PR problem. I’m going to call the behavior we want to look at “security signals”.

On August 28, 2020, a remote code execution (RCE) bug was made public through the Slack bug bounty program. This bug really got me thinking about how a company or project signals its security maturity to the world. Unless you’ve been living under a rock, you know Slack has become one of the most popular communication platforms of the modern era, so one would assume Slack has some of the best security around. It turns out the security signals from Slack are pretty bad.

Let’s first focus on the Slack public bug bounty. Having a bug bounty is often a good sign that you consider security important. In the case of Slack, however, it’s a signal against them. Their bounties are comically low.

To put Slack’s $1,500 critical bounty into context, Mozilla has a critical bounty of $10,000 and Twitter has $20,000. I would expect the bounties for something that is critical infrastructure to be much higher. The security signal here is that security isn’t taken very seriously. It’s likely someone managed to convince leadership they needed a bug bounty, but didn’t manage to convince anyone to properly staff and fund it. Not having a bug bounty is better than having one that is starved of resources.

The next thing I look for is a security landing page. In the case of Slack it’s https://slack.com/security. That’s a great address! The page doesn’t explicitly tell us they take security seriously, so we’ll have to figure it out on our own. They do list their vast array of compliance certifications, but we all know compliance and security are not the same thing. The page has no instructions for reporting security vulnerabilities, no link to the bug bounty program, and no security advisories. There are links to whitepapers. It looks like this page is owned by marketing. Once again we have a security signal telling us the security team is not in charge.

Next I wanted to look at how many CVE IDs have been assigned to Slack products. I could only find one: CVE-2020-11498. If I search for this CVE on slack.com I can’t find it listed anywhere. Slack has a number of libraries and a desktop client. If they took security seriously, they would have CVE IDs for their client so their customers could track their risk. I tried to find security advisories on the Slack web site and found a page owned by their legal team that does link to the bug bounty program. On the plus side, that page does say “We take the security of your data very seriously at Slack.” They’re not just serious, they’re VERY serious!

I could continue looking for signals from Slack, but I think it’s more useful to focus on how this happens and what can be done about it. Plenty of people have discussed not letting marketing, PR, legal, or some other department be in charge of your security. This is of course easier said than done. A great deal of these problems are the result of the security team not understanding how to communicate with the business people. Security is not a tautology; security has to exist as part of the larger strategy.

Signaling Security

The single most important thing you need to learn is to never, NEVER, try to scare people. Nobody likes being scared. If you pull the “if you don’t do this, monsters will eat the children” routine, you’ve already lost. Fear does not work. If you have to, write “FEAR DOES NOT WORK” on your wall in huge red letters.

Now that we have that out of the way, let’s talk about what a bug bounty amount tells us. The idea behind a bug bounty isn’t to entice researchers to report issues to you instead of the black market. Anyone willing to sell exploits will always make more selling them illegally than you will pay. If you pay nothing or a terribly small amount you will still get reports for things people accidentally find.

The purpose of the bounty is to entice a security researcher into spending some time on your project. Good bugs take time, and the more time you spend the more good bugs you can find. It’s a bit circular. Think of these people as contract workers for a project you don’t even know about yet. The other thing you have to understand is that if your bounty amounts are too low, it means either you couldn’t get a proper budget for your program or you have so many critical bugs that a critical has no value. Neither is a good signal.

How do you get enough budget for a proper bug bounty program? You have to frame the bug bounty as something that enables business. A bug bounty seen only as a cost will always have to battle for funding. A proper bug bounty program signals to the world that you take security seriously (you might not even have to tell everyone how seriously you take it). It gives customers a sense of calm and shows you’re more than just some compliance certificates. It gains you unexpected contractors who help you find and fix bugs you may never find on your own. If you run the numbers, it is drastically more expensive to let one of your full-time people hunt for bugs, and you still have to pay them if they don’t find anything!
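If you want to actually run those numbers, here is a rough back-of-envelope sketch. Every figure in it is a made-up assumption for illustration, not real salary or bounty data:

```python
# Back-of-envelope comparison: bug bounty payouts vs. a full-time bug hunter.
# Every number below is a made-up assumption for illustration only.

fte_cost = 200_000             # assumed yearly fully-loaded cost of one engineer
fte_bugs_found = 10            # assumed bugs that engineer finds in a year

bounty_per_critical = 10_000   # assumed payout for a critical report
bounties_paid = 10             # assumed paid reports per year

print(f"Cost per internal bug: ${fte_cost / fte_bugs_found:,.0f}")
print(f"Cost per bounty bug:   ${bounty_per_critical:,.0f}")
print(f"Total bounty spend:    ${bounty_per_critical * bounties_paid:,.0f}")
# The engineer costs the full amount whether or not they find anything;
# the bounty program only pays out when a bug actually arrives.
```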

What about having a security contact page? It’s a great tool for marketing to convince security minded customers how seriously you take security! This again comes back to the business value of having a proper security landing page. This answer will be very cultural. If your business values technical users, you use this as a way to show how smart your security people are. If you value transparency, you show off how open and honest your security is. This is one of those places you have to understand what your company considers business value. If you don’t know this, that’s going to be your single biggest problem to solve before worrying about any other security problems. Make your security landing page reflect the business strategy and culture of your company.

What about CVE IDs? Should your company issue CVE IDs for its products? I would reframe this one a little bit. CVE IDs are great, but they’re really the second step in publishing security advisories. Advisories first, CVEs second. Security advisories are a way to communicate important security details to customers. Advisories show customers you have a competent security team on the inside. Good security advisories are hard. If you have zero, why is that? No product has zero security vulnerabilities.

There is an external value to having a team responsible for advisories. If you have a product, your customers are running 3rd party scanners against it. It’s the cool new thing to do. Do you know what those scans look like? Are you staying ahead of the scanners? Are you publishing details for customers about what you’re fixing? Security advisories aren’t the goal; the goal is to have a product with someone keeping ahead of industry standard security. And the industry standard now says security advisories and 3rd party scanners are table stakes.

This post is already pretty long and I don’t want to make it any longer; every one of these paragraphs could be a series of blog posts on its own. The real takeaway here is that security needs to work with the business. Show how you make the product better and help the bottom line. If security is seen only as a cost, getting the funding it needs will always be a battle. It’s not up to the business leaders to figure out that security can help the bottom line; it’s up to security leadership to show it. If you just expect everyone else to make security a big deal, you’re going to end up with marketing owning your security landing page.

And I want to end on an important concept that is often overlooked. If there is something you do and you can’t prove it adds to the bottom line, you need to stop doing it. Focus on the work that matters; eschew the work that doesn’t. That’s how you take security seriously, not by putting it on a web site.

Episode 212 – Grab Bag: The Security We Deserve Edition

Josh and Kurt talk about Chromium sending traffic to root DNS servers, telemetry watching what we do, cryptocurrency scams, and a few other random topics. Also pandas.

Show Notes

2020 CWE Top 25 I mean 10 or maybe 4.5

A few days ago I ran across this report from MITRE. It’s titled “2020 CWE Top 25 Most Dangerous Software Weaknesses”. I found the report lacking the sort of details I was hoping for, so I’m going rogue and adding those details myself because it’s a topic I care about and I like seeing conclusions. Think of this as a sort of modern graffiti.

Firstly, all of my data and graphs come from the NVD CVE JSON data. You can find my project, which puts this data into Elasticsearch and then does interesting things with it, on GitHub here. All graphs are screenshots from Kibana.
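If you want to follow along without digging through that project, a loader can be surprisingly small. This is a minimal sketch, not the project’s actual code; it assumes the NVD 1.1 JSON feed format and a local Elasticsearch, and the index and field names are ones I made up for illustration:

```python
# Minimal sketch: flatten an NVD 1.1 JSON feed into Elasticsearch so CWE
# names can be aggregated in Kibana. Index and field names are illustrative.
import json

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

def cve_docs(feed_path):
    with open(feed_path) as f:
        feed = json.load(f)
    for item in feed["CVE_Items"]:
        cve_id = item["cve"]["CVE_data_meta"]["ID"]
        # A CVE can carry zero or more CWE names ("CWE-79", "NVD-CWE-Other", ...)
        cwes = [
            desc["value"]
            for pt in item["cve"]["problemtype"]["problemtype_data"]
            for desc in pt["description"]
        ]
        yield {
            "_index": "cve",
            "_id": cve_id,
            "_source": {
                "cve": cve_id,
                "cwe": cwes,  # arrays are fine, Elasticsearch flattens them
                "published": item["publishedDate"],
            },
        }

helpers.bulk(es, cve_docs("nvdcve-1.1-2020.json"))
```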

The first step was to try to reproduce the data MITRE used. The report isn’t exactly clear about how this worked. They claim to have used 2018 and 2019 data to generate the 2020 report, but I couldn’t get my graphs even close using that data. It looks like if we use 2018, 2019, and what we currently have in 2020, we get pretty close. If I graph the top 25 of that data, here is what I get:

There are a few differences from MITRE’s data, but the graph is close enough to get started. Once we have this data in Elasticsearch we can quickly change our queries. Fast queries are the most important part of this project. If it takes hours to answer questions it’s easy to lose interest.

MITRE created a formula that takes CVSS scores into account to create a score for these flaws. They don’t explain why this score makes sense or what its purpose really is. I’ve already made my thoughts on CVSS public, but more importantly, I don’t understand how this score is useful, so I’m not going to use it. The NVD data is good enough to draw conclusions from by just counting the number of CWE IDs.
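Counting is one terms aggregation. Here is a sketch against the illustrative index from the loader above; it assumes the elasticsearch-py 8.x client and the cwe.keyword subfield that Elasticsearch’s default dynamic mapping creates for string fields:

```python
# Count CWE names for CVEs published 2018 through 2020; print the top 25.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="cve",
    size=0,  # we only want the aggregation, not the documents
    query={"range": {"published": {"gte": "2018-01-01", "lt": "2021-01-01"}}},
    aggs={"top_cwes": {"terms": {"field": "cwe.keyword", "size": 25}}},
)
for bucket in resp["aggregations"]["top_cwes"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```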

The one other point of order is that we need to look at the MITRE CWE-787 score. The NVD data shows 822 instances of CWE-787 using 2018, 2019, and 2020 data. If we include every year in the data, we get a number closer to the MITRE score for CWE-787. I don’t want to make a big deal about it; I just want to point it out. The MITRE data isn’t public, so I can’t be certain where that number came from, and the NVD data is quite a bit different. It won’t matter for reasons you’ll see below.

The first thing we notice is that 25 CWEs is a lot. It’s more than I want to draw conclusions from, so let’s roll this back to a top ten, which is slightly more usable for a human.

This top ten list is sort of a lie. I would call it a top 4.5 list.

First we have the web-based CWE IDs. CWE-79 is cross site scripting (XSS). CWE-352 is cross site request forgery (CSRF). And we can sort of include CWE-20, which is “improper input validation”. I will include CWE-20 in every other possible grouping, so it’s where our .5 comes from.

Then we have the memory safety issues. CWE-119, CWE-125, CWE-416, CWE-190, and CWE-787 are all “C is hard” bugs: integer overflows and memory problems (which are usually the result of integer overflows). We can probably include CWE-20 here also.

Then we have CWE-200: Exposure of Sensitive Information to an Unauthorized Actor which is too vague to be very useful to us. I would say that basically all security bugs could be CWE-200. Probably caused by CWE-20.

And finally there’s CWE-89: Improper Neutralization of Special Elements used in an SQL Command (‘SQL Injection’) which again could be caused by CWE-20. SQL injection is pretty easy to understand, no arguments here.

We could stop here and just give some actionable advice: don’t use C, and when you write your web apps, use a framework. This is easy and solid advice. But since we can do whatever we want with the data, let’s see what we can find. (You should still follow this advice.)

I am determined to find some other lessons in this data, so let’s dig deeper.

I want to start with CWE-79 and CWE-352. Let’s look at these CWEs by year instead of trying to only look at 2018, 2019, and 2020.

This data shows that for the last decade or so, CWE-79 and CWE-352 have pretty consistently been around 15% of the CVE IDs per year. That tells us we’re not getting better or worse. Obviously getting better would be preferred, but not getting worse is still better than getting worse. My suspicion was that this data would be getting worse.
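The per-year numbers behind a graph like this come from a date histogram with a filter sub-aggregation; a sketch, again using the illustrative index and field names from earlier:

```python
# Percentage of CVEs per publish year tagged CWE-79 or CWE-352.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="cve",
    size=0,
    aggs={
        "per_year": {
            "date_histogram": {"field": "published", "calendar_interval": "year"},
            "aggs": {
                # Sub-aggregation: how many CVEs in this year carry either CWE
                "web": {"filter": {"terms": {"cwe.keyword": ["CWE-79", "CWE-352"]}}},
            },
        }
    },
)
for year in resp["aggregations"]["per_year"]["buckets"]:
    if year["doc_count"]:
        pct = 100 * year["web"]["doc_count"] / year["doc_count"]
        print(f'{year["key_as_string"][:4]}: {pct:.1f}%')
```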

The “C is hard” bugs look about the same, except they’re about 20% of issues.

I really expected this to be getting better per year. C use is supposed to be in decline. I looked to see if Microsoft and Linux were keeping this number high, but removing them doesn’t drastically change anything. The reality is we’re still using a lot of C and C has a lot of problems. C is hard.

If you just thought “I’m special and smart and my C is OK” you need to stop reading now and go reflect on the life choices that got you here.

I’m a bit annoyed at this data. I want to find a lesson in here, but I feel like I can’t. Every CWE I look up is basically the same percentage per year, every year. I had one last question: what if I graph the top 5 CWE IDs against all the CWE IDs per year?

Holy cow, now we have something that looks exciting! However, this data shows us we can’t really draw conclusions from the NVD data. Sometimes a non-finding can be the finding.

Let’s break the graphs down. If we look at the left side, the NVD-CWE-Other data just means we don’t have CWE names for that old data. CWE didn’t exist back then, so that makes sense.

The big section on the right is the “Other” collection. The “Other” bars are all the CWE IDs that aren’t in the top 5 for a given year. We can see the VAST majority of issues fall outside the top 5. Even if we graph the top 25, the majority are still “Other”.
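Conveniently, the terms aggregation reports how many documents fell outside the buckets it returned, so the “Other” bar requires no extra work; a sketch:

```python
# Top 5 CWE names per year vs. everything else. The terms aggregation's
# sum_other_doc_count is exactly the "Other" bar from the chart.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="cve",
    size=0,
    aggs={
        "per_year": {
            "date_histogram": {"field": "published", "calendar_interval": "year"},
            "aggs": {"cwes": {"terms": {"field": "cwe.keyword", "size": 5}}},
        }
    },
)
for year in resp["aggregations"]["per_year"]["buckets"]:
    top5 = sum(b["doc_count"] for b in year["cwes"]["buckets"])
    other = year["cwes"]["sum_other_doc_count"]
    print(f'{year["key_as_string"][:4]}: top5={top5} other={other}')
```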

It’s possible there are just too many types of security problems and we will never have enough useful data to draw conclusions from. I hope that’s not true. I think if we could better measure this data we could learn lessons from it, so we should find a way to measure it.

There is an effort to minimize the number of CWE IDs used. CWE view 1003, “Weaknesses for Simplified Mapping of Published Vulnerabilities”, is trying to do just this. The idea is to use 127 CWE IDs instead of 1248. Over 1000 CWE IDs is not particularly useful, especially when a lot of them overlap. Using the power of Elasticsearch, we can figure out there are 193 unique CWE names in the NVD data today. Of the 148,559 CVE IDs, 86,173 are currently mapped to the CWE-1003 view and 62,386 are not. I feel like this is a reasonable place to start, but it’s going to be a high hill to climb.
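The counting behind those numbers is simple; the hard part is the CWE-1003 member list itself, which has to be extracted from MITRE’s CWE data. A sketch with a placeholder list standing in for the real 127 entries:

```python
# Sketch of the counting behind those numbers, against the illustrative
# "cve" index. The cwe_1003 set below is a tiny placeholder, NOT the real
# 127-entry member list from MITRE's CWE view 1003.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Unique CWE names across all CVEs (cardinality is approximate by design).
resp = es.search(
    index="cve",
    size=0,
    aggs={"unique_cwes": {"cardinality": {"field": "cwe.keyword"}}},
)
print("unique CWE names:", resp["aggregations"]["unique_cwes"]["value"])

cwe_1003 = {"CWE-20", "CWE-79", "CWE-89", "CWE-787"}  # placeholder subset

# CVEs with at least one CWE from the 1003 view vs. everything else.
mapped = es.count(index="cve", query={"terms": {"cwe.keyword": sorted(cwe_1003)}})["count"]
total = es.count(index="cve")["count"]
print(f"mapped to 1003: {mapped}, not mapped: {total - mapped}")
```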

I think there is one very important takeaway from this data, but it’s not what I had hoped it would be. If we want to rely on this data as an industry, we need to improve it. I use “we” a lot, and in this case it’s not the royal we; it’s literally you and me. The whole security industry is using the NVD data in all of our tools. There are policies and decisions being made based on this data. I covered how scanners use this data in a previous blog post.

The data needs to be improved if we are going to rely on it. The current state of the data is preventing us from making progress in certain places. Have you ever looked at an automated security scan? Part of the reason the results aren’t great is that the NVD data isn’t great. I need to change my talking point from “security scanners produce low quality results” to “security scanners only have access to low quality data”.

If this is a topic you care about, I would suggest starting with the CVE working groups. There is a lot of work to do. If someone who knows more about this than I do has ideas for how to help, let me know.

Episode 211 – The only thing harder than signing files is managing users

Josh and Kurt talk about the two-year-old Microsoft signature bug and GitLab no longer processing MFA resets for free users. Signing things is hard, but trying to manage users and infrastructure at scale is even harder.

Show Notes

Episode 210 – Cult of Information Security

Josh and Kurt talk about the current state of information security. There are aspects that resemble a cult more than we would like. It’s not all bad though; there are some things we can do to help move things forward. This episode shouldn’t be taken too seriously.

Show Notes

Episode 209 – Secure Boot isn’t Secure

Josh and Kurt talk about Secure Boot. The conversation uses the recent “Boot Hole” vulnerability to frame a discussion of what Secure Boot is and isn’t, why the Boot Hole flaw doesn’t really matter, and why Secure Boot was very scary for Linux users back when it came out.

Show Notes