Part 5: Which of these security problems do I need to care about?

If you just showed up here, go back and start at the intro post; you’ll want the missing context before reading this article. Or not, I mean, whatever.

I’ve spent the last few posts going over the challenges of security scanners. I think the most important takeaway is that we need to temper our expectations. Even a broken clock is right twice a day. So assuming some of the security flaws reported are real, how can we figure out which ones we should be paying attention to?

I ran the scan

If you ran a security scanner, running it was the easy part. What you do with the results of your scan is the challenge. I’ve seen teams just send the scan along to the developers without even looking at it. Never do this. It tells your developers two very important things: 1) you think your time is worth more than theirs, and 2) you aren’t smart enough to parse the scan. Even if one or both of these are true, don’t just dump these scans on someone else. If you ran it, you own it. Suddenly that phone book of a scan is more serious.

When you have the results of any security report, automated or human created, how you deal with them depends on a lot of factors. Every organization has different processes, different resources, and different goals. It’s super important to keep in mind the purpose of your organization; resolving security scan reports probably isn’t it. Why did you run this scan in the first place? If you did it because everyone else is doing it, reading this blog series isn’t going to help you. Fundamentally, we run these scanners to make our products and services more secure. That’s the context in which we should read these reports: which of these findings make my product or service less secure, and which should I fix to make it more secure?

I was given a scan

If you were given a scan, good luck. As I mentioned in the previous section, if you were given one of these scans and it’s pretty clear the person giving it to you didn’t read it, there’s nothing wrong with pushing back and asking for some clarification. There’s nothing more frustrating than someone handing you a huge scan with the only comment being “please fix”. As we’ve covered at length, a lot (almost all) of these results are going to be false positives. Now you have to weed through someone else’s problem and try to explain what’s happening.

I’ve seen cases where a group claims they can’t run an application unless the scan comes back clean. That’s not a realistic goal. I would compare it to only buying computers that never crash. You can have it as a requirement, but you aren’t going to find one no matter how hard you try. Silly requirements lead to silly results.

Falsifying false positives

If you ran the scan or you were handed a scan, one of the biggest jobs will be figuring out which results are false positives. I don’t know of a way to do this that isn’t backbreaking manual labor. Every finding has a number of questions you have to answer “yes” to in order for the finding to matter (there’s a rough sketch of one way to track those answers after the list):

  1. Do you actually include the vulnerable dependency?
  2. Is the version you’re using affected by the issue?
  3. Do you actually use the feature in your application?
  4. Can attackers exploit the vulnerability?
  5. Can attackers use the vulnerability to cause actual harm?
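
If it helps to make this concrete, here’s a minimal sketch in Python of one way to track those five answers per finding. Everything in it is hypothetical: the field names, the record shape, and the CVE ID are made up for illustration. The only point it makes is that a single “no” means you can stop caring about that finding.

    from dataclasses import dataclass

    # Hypothetical sketch: record the five answers for a single finding.
    @dataclass
    class FindingTriage:
        cve_id: str
        dependency_included: bool = False  # 1. do we actually ship the dependency?
        version_affected: bool = False     # 2. is our version in the affected range?
        feature_used: bool = False         # 3. do we use the vulnerable feature?
        exploitable: bool = False          # 4. can an attacker actually exploit it?
        causes_harm: bool = False          # 5. can exploitation cause real harm?

        def matters(self) -> bool:
            # The finding only matters if every answer is "yes".
            return all([self.dependency_included, self.version_affected,
                        self.feature_used, self.exploitable, self.causes_harm])

    # Example: the version is affected, but we never use the vulnerable feature,
    # so for us this one is a false positive.
    finding = FindingTriage("CVE-2021-99999", dependency_included=True,
                            version_affected=True)
    print(finding.matters())  # False

The point isn’t the code, it’s that answering “no” anywhere lets you stop and move on to the next finding.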

These steps are hard work for us humans, and it’s likely you can’t do them all by yourself. Find some help; don’t try to do everything yourself.

One really important thing to do as you are answering these questions is to document your work. Write down as much detail as you can, because in three months you’re not going to remember any of this. Also, don’t use whatever scanner ID you get from the vendor; use the CVE ID. Every scanner should be reporting CVE IDs (if they don’t, that’s a bug you should report). Then if you run a second scanner you can know right away if something has already been investigated, since you’ve already documented the CVE ID. Scanner IDs alone aren’t useful across vendors.
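
As a rough illustration of why the CVE ID is the thing to key on, here’s a hypothetical sketch (the IDs and notes are invented, and no particular scanner is assumed): keep your triage notes keyed by CVE, and checking a second scanner’s output against work you’ve already done becomes trivial.

    # Hypothetical triage notes, keyed by CVE ID (IDs and notes are made up).
    investigated = {
        "CVE-2021-11111": "False positive: we don't ship the affected module.",
        "CVE-2021-22222": "Real finding, fixed in release 2.4.1.",
    }

    # Output from a second scanner, reported as CVE IDs.
    second_scan = ["CVE-2021-22222", "CVE-2021-33333"]

    for cve in second_scan:
        if cve in investigated:
            print(f"{cve}: already triaged -> {investigated[cve]}")
        else:
            print(f"{cve}: new, needs triage")

If a vendor only gives you its own IDs, record both, but make the CVE the key.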

Parsing the positive positives

Let’s make the rather large leap from running a scan to having some positive positives to deal with. The false positives have been understood, or maybe the scanners have all been fixed so there aren’t any false positives! (har har har) Now it’s time to deal with the actual findings.

The first and most important thing to understand is that not all of the findings are critical. There is going to be a cornucopia of results. Some will be critical, some will be low. Part of our job is to rank everything in an order that makes sense.

Don’t trust the severity the scanner gives you. A lot of scanners will assign a severity rating to each finding, but they have no idea how you’re using a particular piece of code or dependency. Their severity ratings should be treated with extreme suspicion. They can be an easy way to do a first-pass ranking, but those ratings shouldn’t be used for anything after the first pass. I’ll write a bit more on where these severities come from in a future post; the short version is the sausage is made with questionable ingredients.
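
As a sketch of what “first pass only” could look like (everything here is hypothetical, including the CVE IDs and severity values): let the scanner’s rating pick the initial reading order, then replace it with your own rating and rank on that instead.

    # Hypothetical severity ordering; lower number = look at it sooner.
    ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

    findings = [
        {"cve": "CVE-2021-44444", "scanner_severity": "low"},
        {"cve": "CVE-2021-55555", "scanner_severity": "critical"},
    ]

    # First pass: the scanner's severity decides what gets looked at first...
    findings.sort(key=lambda f: ORDER[f["scanner_severity"]])

    # ...then triage replaces it with our own rating, which is what we rank on.
    findings[0]["our_severity"] = "low"       # "critical" to the scanner, not to us
    findings[1]["our_severity"] = "critical"  # "low" to the scanner, serious for us
    findings.sort(key=lambda f: ORDER[f["our_severity"]])
    print([f["cve"] for f in findings])  # our ranking, not the scanner's

The scanner’s number gets a finding in front of you; your own assessment decides where it actually lands.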

It makes a lot of sense to fix the critical findings first; nobody will argue this point. A point that is a bit more contentious is not fixing low and moderate findings, at least not at first. You have finite resources. If fixing the critical issues consumes all of your resources, that’s OK. You can mark low findings in a way that says you’re not fixing them now, but might fix them later. If your security team comes back claiming that’s not acceptable and you have to fix everything, I suggest a very hearty “patches welcome” be sent their way. In typical software development, minor bugs don’t always get fixed. Security bugs are just bugs: fix the important stuff first, and don’t be afraid to WONTFIX silly things.

It’s also really important to avoid trying to “fix” everything just to make the scanner be quiet. If your goal is a clean report, you will suffer other consequences. Beware the cobra effect: reward a quiet scanner and you’ll get a quiet scanner, not a more secure product.

Can’t we all just get along

The biggest takeaway from all of this is to understand intent and purpose. If you are running a scanner, understand why. If you’re receiving a report, make sure you ask why it was run and what the expectations are of whoever gave it to you. It’s generally a good idea not to assume malice; these scanners are very new and there is a huge knowledge gap, even among the people who historically would consider themselves security experts. It can get even more complicated because there’s a lot of open source thrown into the mix. The amount of knowledge needed for this problem is enormous, so don’t be afraid to ask lots of questions and seek out help.

If you are doing the scanning, be patient.

If you are receiving a scan, be patient.

Remember, it’s nice to be nice.

Part 6: What do we do now?