Part 4: Application scanning

We’ve already discussed the perils of code and composition scanning. If you’ve not already read those, you should go back to the beginning.

Now we’re going to discuss application scanning. The basic idea here is we have a scanner that interacts with a running application and looks for bugs. The other two scanners run against static content. A running application is dynamic and ever-changing. If we thought code scanning was hard, this is even harder. Well, it can be harder; it can also be easier. Sometimes.

Scanning a running application is hard

Back in the code scanning post, we talked about how much harder it is to scan a weakly typed language because there is a distinct lack of rules. Scanning an application can be comparable to this. It mostly depends on what you’re scanning. There are many types of applications. Some have fancy user interfaces. Some are just libraries that get included in other things. Some are just APIs that you access over a network. Of course it’s common for many application scanners to pretend everything is the same.

First, a word on fuzzing. If you’ve never heard of fuzzing that’s OK. It was hugely popular a few years back as a way to stress test certain applications. The basic idea is you stress test inputs by subtly modifying whatever the application is trying to read. An example is you have a program that converts image files from one format to another. You take one good image and flip some bits at random, then see if it crashes your application. Repeat this a few million times and you’re fuzzing. Fuzzing is an example of application scanning that works really well, but generally only with languages that are not memory safe. Fuzzing is less effective on memory safe languages. We will not be talking about fuzzing in the rest of this post.
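Before leaving fuzzing behind, the bit-flipping idea can be sketched in a few lines of Python. This is a toy, not a real fuzzer (tools like AFL and libFuzzer are far more sophisticated), and the converter command and file names are placeholders:

```python
import random
import subprocess

def flip_random_bits(data: bytes, flips: int = 8) -> bytes:
    """Return a copy of data with a few randomly chosen bits flipped."""
    mutated = bytearray(data)
    for _ in range(flips):
        pos = random.randrange(len(mutated))
        mutated[pos] ^= 1 << random.randrange(8)
    return bytes(mutated)

def fuzz_converter(seed_file: str, command: list, iterations: int = 1_000_000):
    """Repeatedly feed mutated copies of a known-good input to the target
    program and collect any inputs that make it crash."""
    seed = open(seed_file, "rb").read()
    crashes = []
    for _ in range(iterations):
        mutated = flip_random_bits(seed)
        with open("mutated.img", "wb") as f:
            f.write(mutated)
        result = subprocess.run(command + ["mutated.img"], capture_output=True)
        if result.returncode < 0:  # killed by a signal, e.g. SIGSEGV
            crashes.append(mutated)
    return crashes
```

Each crash found this way is a memory-safety bug candidate, which is why the technique shines on C and C++ and loses most of its punch on memory safe languages.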

Scanning user interfaces

I want to start with application scanners that go after user interfaces. Today the most common user interface is a website accessed in a browser. It seems like this would be a space that is ripe for scanning since we can teach a robot to parse HTML. You would be right that we’re really good at scanning HTML. What we’re not really good at is finding good results when we scan HTML.

The single biggest challenge an application scanner has in almost every instance is a lack of situational awareness. What I mean by that is the scanner approaches a web app without any real knowledge about how the app works or what it does. The scanners just start throwing spaghetti at the wall to see what sticks. It’s very common for a webapp scanner to tell you about security flaws affecting a webserver you’ve never even heard of; because you returned some HTML snippet that a webserver from 1994 once returned, they assume that’s what you’re running.

Webapp scanners also make many assumptions about the results they get back. If a scanner sends a request that it thinks should have returned a 400 error code, and it doesn’t get one, that’s going to end up in the report. My favorite example of this was a scanner putting a / at the end of every URL; when that resulted in a JSON error message being returned, it reported a security flaw for every URL it could find. Every. Single. URL. It was very silly, and not very useful.

Some scanners will let you configure them to be a bit more clever with respect to your webapp. If you can configure them you should, but even with that the results aren’t going to be amazing. You’re going to see an increase in quality from completely terrible to mostly terrible. You should definitely do some work to decide if running a webapp scanner makes sense for you. This is a great place to figure out some return on investment calculations. And of course keep in mind the big question of “what is our reason for doing this?” If you want a more secure application, keep that in mind while you’re parsing the reports.

Scanning APIs

If your application has an API, that’s great news. APIs are ways for machines to talk to each other, so logically one would expect an application scanner to do a great job scanning an API for problems. One would expect …

The reality here is many application scanners will treat an API as if it was a user interface that is returning HTML. Many scanners will report things like error messages as being security flaws because they don’t respond in a way the scanner is familiar with. Web browsers don’t treat content that has a type of application/json as HTML. Scanners don’t seem to understand this.
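A scanner could avoid a whole class of these mistakes just by honoring the Content-Type header before deciding which checks to run. A minimal sketch of what that dispatch might look like (the check names here are invented for illustration):

```python
def checks_for_response(content_type: str) -> list:
    """Pick scanner checks appropriate for what the server actually returned.
    A response declared as application/json is data, not a document, so
    HTML-oriented checks (reflected markup, form parsing) don't apply."""
    mime = content_type.split(";")[0].strip().lower()
    if mime == "text/html":
        return ["reflected_xss", "form_analysis", "server_banner"]
    if mime in ("application/json", "application/problem+json"):
        return ["verbose_error_detail", "schema_mismatch"]
    return []

print(checks_for_response("text/html; charset=utf-8"))
print(checks_for_response("application/json"))
```

Browsers have done this dispatch correctly for decades; there's no good reason a scanner can't.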

If you are building an API using modern design principles it’s very likely you already have an application scanner running against your API. You just don’t call it an application scanner, you call it “continuous integration”. That’s right, if you have a robust test suite against your API, you can expect far better results from that than you’ll ever see from an automated scanner. If you have a finite budget, you should write more tests, not buy an application scanner for your APIs.
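As a sketch of what this looks like in practice, here is the shape of a CI-style API test. The routing function is a stand-in for a real application, but the tests encode exactly the situational awareness a scanner lacks: for instance, that a trailing slash should produce a structured 404, not a security finding.

```python
import json

# A stand-in for your API's routing layer; in a real project these tests
# would run against the actual application in CI.
def handle_request(method: str, path: str) -> tuple:
    routes = {("GET", "/users"): [{"id": 1, "name": "alice"}]}
    if (method, path) in routes:
        return 200, json.dumps(routes[(method, path)])
    return 404, json.dumps({"error": "not found"})

def test_known_route_returns_json():
    status, body = handle_request("GET", "/users")
    assert status == 200
    assert json.loads(body)[0]["id"] == 1

def test_unknown_route_is_a_structured_404():
    # A trailing slash is not a security flaw; it's a 404 with a JSON body.
    status, body = handle_request("GET", "/users/")
    assert status == 404
    assert json.loads(body) == {"error": "not found"}
```

A test suite like this knows what correct behavior is; a scanner can only guess at it.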

What can we do?

As for what action we can take, I’m going to point at the conclusion for source code scanners rather than try to write something new and interesting. These things have been around for a while and they’ve not improved a lot. Everyone should calculate their own ROI here, but if I were writing the check I would look into composition scanning.

The next post will be far more exciting as we start to tackle how to parse a phone book sized report.

Part 5: Which of these security problems do I need to care about?

Episode 188 – Depressing news sucks, we’re talking about cheating in video games

Josh and Kurt talk about video games. Yeah, video games. Specifically about cheating in video games. There’s a lot of other security themes in the discussion. With the news being horrible these days, we needed to talk about something fun.

Show Notes

Episode 187 – Wireguard vs IPsec: the OK Boomer of security

Josh and Kurt talk about Wireguard. There have been a lot of recent conversations about it and whether it’s better or worse than other VPN solutions. It’s safe to say in our modern age, less is usually more, especially when it comes to security. Wireguard has a lot going for it; it can’t be ignored.

Show Notes

Part 3: Composition scanning

If you just showed up here, go back and start at the intro post, you’ll want the missing context before reading this article.

In this post we’re going to talk about a newer type of scanner called a composition scanner. The idea here is when you build an application today it’s never just what you wrote. It also includes source code from a large number of other sources. Usually these other sources are open source.

A composition scanner will look at your project, specifically the things you didn’t write, and attempt to alert you if you are including components that have known security vulnerabilities. It’s very common to not upgrade the open source we put into our projects. Upgrading is hard and can break things, so doing nothing is easier most of the time. Composition scanners let us see what’s hiding in the depths of our project; sometimes it isn’t very pretty.

An easy example we can use is if you are including OpenSSL code in your application. Do you know if the version of OpenSSL you are using is still vulnerable to Heartbleed? You probably can’t say for certain if this is true or not, but a composition scanner probably can.
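As an illustration, a simplified version-string check for Heartbleed (CVE-2014-0160, which affected OpenSSL 1.0.1 through 1.0.1f) might look like the sketch below. A real composition scanner matches package metadata against whole vulnerability databases rather than hard-coding one check like this:

```python
def openssl_heartbleed_vulnerable(version: str) -> bool:
    """Heartbleed affected OpenSSL 1.0.1 through 1.0.1f; 1.0.1g and later,
    and the older 0.9.8/1.0.0 branches, were not affected."""
    if not version.startswith("1.0.1"):
        return False
    patch_letter = version[len("1.0.1"):]
    # "1.0.1" (no letter) through "1.0.1f" are vulnerable.
    return patch_letter == "" or patch_letter <= "f"

print(openssl_heartbleed_vulnerable("1.0.1f"))  # True
print(openssl_heartbleed_vulnerable("1.0.1g"))  # False
```

The hard part isn't the comparison; it's knowing which version you actually shipped, which is exactly the inventory problem composition scanners exist to solve.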

Who let all this open source in?

Everything we build today is open source. I’m sure a non-trivial number of readers just jumped up and shouted “no it isn’t”. There are two kinds of open source today. The open source we know is open source, and the open source we think isn’t.

I’m only half joking here. The reality is everything is filled to the brim with open source now. If you’re building anything and you aren’t using as much open source software as you can find, you’re a fool. Sure you have some of your own features you’ve written that aren’t technically covered by an open source license, but when that’s only 10% of your codebase, you better be willing to pay attention to the other 90%.

When we use open source in our products, projects, and businesses, we lose the protection of security through obscurity. “But security through obscurity doesn’t work!” you shout. It does actually work, until it doesn’t. There is a huge number of applications out there that are only secure because nobody has ever looked at them, and nobody ever will. These applications have existed for years and will continue to exist for years hidden behind the veil of obscurity.

But once we include some open source, we’re going to start getting noticed. It’s like bringing the best looking person to the party. Everyone notices. It’s a bit poetic that by downloading and using something free, the cost is attention. What I mean by this is if you have a public website, there are people scanning the internet looking for certain known libraries. There are attackers scanning the internet for certain applications running on it. There are researchers scanning all the source code in GitHub looking for known bad libraries. When your website was just some perl code you wrote in 1995, nobody noticed or cared. Now that you’re using AngularJS, everyone sees.

Updating dependencies is harder than not updating them

Now, as we mentioned just above, everything is open source now. Open source comes with a catch though. You include it in your application at a point in time, then the arrow of time marches forward. There are some who like to say software ages like milk, not wine. But I say it ages like humans. It’s exciting, good looking, and nimble when it’s young, then the ravages of time start to set in and things stop being young and beautiful pretty fast.

You have to update your open source dependencies. You can’t just grab some code off the internet and forget it’s there. There are going to be newer versions that fix bugs and security issues. You’ll want those fixes. This is further complicated because sometimes the new version of an open source library will break your application. It could be a bug in the new version, it could be you were using something incorrectly, or they might just break it on purpose because they decided the old way was bad. Every time you pull in a new version of something there will be a cost. Sometimes the cost is as small as pulling in the new version. Sometimes the cost will be refactoring half of your application.

The current best practice advice is to keep your dependencies updated. One could easily write a book debating the pros and cons of updating the open source you use, and there are many ways to manage this problem. Rather than delve into that problem right now, let’s just stick with the idea that we should be updating our open source, but that then leads to the question of when. If we pull in every update for every dependency, that’s going to be a lot of churn. We probably want to update things in a way that makes sense and is manageable. We all have more work to do than time to do it, so we have to be smart about these updates.

Which of these security problems do I need to care about?

OK, so let’s assume if you made it this far you agree that software is incredibly complex. It’s also mostly open source, and that open source will have security flaws that need to be fixed. But you also can’t fix everything, you can only fix some things.

As we’ve already mentioned several times, there are going to be false positives, true positives, and false negatives. The vast majority of your findings will still be false positives, but in composition scanning, there are two types of false positives. There are false positives of the sort where the vulnerability reported doesn’t exist in your dependency. And there is the false positive where the vulnerability exists in the dependency, but you don’t use the vulnerable code, so you’re not vulnerable.

This idea of a vulnerability existing in a dependency but being a false positive can be difficult to understand, so here is a laughably simple example. Let’s say you have a library in your application that has two features. One of the features is designed to remove dangerous HTML from strings. The other feature adds two numbers together (this is meant to be a very simple and ridiculous example). You only need the feature that adds numbers together, so you ignore the string sanitizer. There were a number of security vulnerabilities found in the string sanitizer, but since you don’t use it, you never upgraded the library. Now when you run a composition scanner, it lights up like a Christmas tree due to all the unfixed vulnerabilities in the sanitizer. The vulnerable code is there, but you don’t use it. Are you vulnerable? There isn’t a single answer to this question. I say it is a false positive, but it’s up to you really. This is a very common type of false positive with composition scanners.
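That example can be sketched in a few lines. The function names are invented, and the "reachability check" is deliberately naive compared to real call-graph analysis, but it captures the shape of the argument:

```python
# string_tools: a stand-in dependency with two features, only one of which
# our application uses.
def sanitize_html(text: str) -> str:
    # Pretend several known CVEs live in this function.
    return text.replace("<script>", "")

def add_numbers(a: float, b: float) -> float:
    return a + b

# Our application only ever calls add_numbers, so the sanitizer's CVEs are
# present on disk but unreachable from our code.
USED_FUNCTIONS = {"add_numbers"}
VULNERABLE_FUNCTIONS = {"sanitize_html"}

def reachable_vulnerabilities() -> set:
    # A composition scanner reports VULNERABLE_FUNCTIONS regardless; a
    # reachability-aware one would intersect with what we actually call.
    return USED_FUNCTIONS & VULNERABLE_FUNCTIONS

print(reachable_vulnerabilities())  # empty: flagged in the report, unreachable in practice
```

Whether an empty intersection means "false positive" or "upgrade anyway" is the judgment call the post describes; the code only tells you which question you're answering.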

What now?

It’s important we keep in mind why we are running these scanners. Are we doing it just to run a scanner, or are we doing it to make our application more secure? I think today a lot of users are running them for the sake of running them. But the real reason should be to make things more secure. A vulnerability an attacker can’t exploit isn’t a vulnerability. It’s important we invest our limited resources into fixing vulnerabilities that attackers can attack. There is a lot of nuance in this explanation, I expect to write some future posts about it after this series is complete.

As composition scanners are the new kid on the block, they also currently show the most promise. But that optimism is worthless if we don’t work with the scanner vendors. Today the scanners produce relatively low quality results; the number of false positives is still unacceptably high and the reports are enormous. But composition scanning is a much easier problem to understand than any of the other scanning problems. I do think it has a bright future.

Part 4: Application scanning

Part 2: Scanning the code

If you just showed up here, go back and start at the intro post, you’ll want the missing context before reading this article.

The first type of scanner we’re going to cover are source code scanners. It seems fitting to start at the bottom with the code that drives everything. Every software project has source code. It doesn’t matter what language you use. Some is compiled, some interpreted, it’s all still source code. The idea behind a source code scanner is to review the code a human wrote and find potential security problems with it. This sounds easy enough in theory, but it’s extremely difficult in practice.

Strongly typed languages like C, C++, and Java lend themselves to code scanning. An oversimplified explanation would be a strongly typed language is one where a named variable has to be a certain type. For example if I have a variable named “number” that is a number, I can’t assign a string to it. It can only be a number.

Weakly typed languages, such as JavaScript and Python, are incredibly difficult to properly scan. These are languages where I can assign the string “potato” to my variable named “number”. While weakly typed languages offer great flexibility to developers, they are a nightmare for code scanners.
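A tiny Python example shows why. Nothing stops a name from being rebound to a different type, so a scanner can't assume anything about what a function will receive:

```python
number = 41
number = number + 1    # fine, number is an int

number = "potato"      # also fine in Python: the name is simply rebound to a str

def double(value):
    # A scanner cannot know whether value will be an int or a str here;
    # both calls below are legal, and they mean very different things.
    return value * 2

print(double(21))        # 42
print(double("potato"))  # potatopotato
```

Both calls are correct programs, so the scanner can't flag either one; it has to reason about every possible type flowing through `double`, which gets intractable fast.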

I have software and nobody knows how it works

Software today is infinitely complex. That statement isn’t a joke, it really is infinitely complex. There is no limit to what computers, and by extension software, can do (this is a concept called Turing completeness). An infinitely complex problem will have an infinitely complex solution. It’s important to keep in mind how big infinity is. Since humans can barely solve finite problems, it’s safe to say we can’t actually solve problems that are infinitely complex, even with a scanner. Now just because you can’t solve a problem doesn’t mean you can’t make things better. There’s a lot of space between “solved” and “do nothing”.

So the real problem is basically if you have software running today in any environment, it’s so complex nobody really knows how it all works. If you write software, you’re going to accidentally include security vulnerabilities. Finding those vulnerabilities is a nearly impossible task in many instances. One way to try to uncover some of them is, you guessed it, scanning the source code for security vulnerabilities.

Trying to scan for those flaws turns out to be a really hard problem.

The only thing harder than writing secure software is writing a code scanner

So if software is infinitely complex, it’s safe to say building a scanner is more complex than infinity. I’m not sure what that is, but I’m comfortable assuming it’s really hard. Being able to scan code that can do anything is an incredibly difficult problem. Now, just because it’s really hard doesn’t mean we should do nothing, but it’s important we have reasonable expectations. When I point out shortcomings in something it doesn’t mean we should throw our hands up and declare the problem too hard to solve. This has been the default reaction in the security industry to many problems. It doesn’t work.

A code scanner isn’t going to catch all your bugs. It’s probably not going to catch half of your bugs. Code scanners are plagued by the problem of very high false positive rates and extremely high false negative rates. Most code scanners can only find a certain subset of security vulnerabilities, and of the subset they can find, they will be wrong a lot.

I mentioned strongly and weakly typed languages in the intro. You can imagine that weakly typed languages are incredibly difficult to scan. The flexibility you gain from not defining types can lead to a lot of complexity. Having a subroutine that can return an integer or a string means your scanner now has to figure out what is being returned, and hope it can work out whether there will be problems when processing the output.
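Here is that situation in miniature (the function is invented for illustration): a subroutine that returns an int on success and an error string on failure, and a caller that has to guard against both.

```python
def lookup_port(service: str):
    """Returns an int on success, or an error string on failure: exactly
    the dual-typed return a static scanner struggles to reason about."""
    ports = {"http": 80, "https": 443}
    if service in ports:
        return ports[service]
    return f"unknown service: {service}"

def is_privileged(service: str) -> bool:
    result = lookup_port(service)
    # Without this isinstance guard, comparing a str to an int raises
    # TypeError at runtime; a scanner has to discover that possibility
    # on its own by tracing every return path.
    if isinstance(result, int):
        return result < 1024
    return False
```

A human can see the guard is needed; a scanner has to prove it, for every caller, across the whole codebase.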

Scanning a strongly typed language will have a slightly higher level of success if you structure your code in a way the scanner likes. Some scanners can be augmented with certain comments to help it understand what’s happening. Even if you do everything right your scanner will have a high number of false positives. Scanning code is hard.
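As one concrete example of those helper comments, Bandit (a Python code scanner) honors inline `# nosec` markers to suppress a finding that a human has reviewed; other scanners have their own equivalents, and the exact syntax varies by tool and version.

```python
import subprocess

# Bandit flags subprocess usage as a possible command injection risk. Here
# the argument list is a fixed constant and no shell is involved, so after
# a human review the finding can be suppressed where it occurs.
def list_directory() -> str:
    result = subprocess.run(
        ["ls", "-l", "."],  # nosec - fixed argv, shell=False, reviewed
        capture_output=True,
        text=True,
    )
    return result.stdout
```

Use these markers sparingly and always with a justification comment; a suppression without a reason is just a false negative you wrote yourself.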

The other important thing to keep in mind is these scanners generally only pick up a subset of possible security vulnerabilities. Even if you ran a code scanner and it came back clean, you should not assume your code is free from security vulnerabilities. Scanners tend to be good at finding problems like buffer overflows, but not good at finding logic problems for example.
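For example, nothing in the following function matches a buffer overflow or injection signature, yet it is a serious vulnerability. The inverted comparison is exactly the kind of logic flaw scanners rarely catch (the function is invented for illustration):

```python
# No buffer overflow, no injection, nothing for a signature to match: just
# a backwards comparison in a business rule.
def can_delete_account(user: dict, target_id: int) -> bool:
    # Intended rule: admins may delete any account, a user only their own.
    if user.get("role") == "admin":
        return True
    # The bug: != should be ==, so every user EXCEPT the owner can delete
    # the account. A code scanner has no pattern for "backwards rule".
    return user.get("id") != target_id
```

Only something that understands what the rule is *supposed* to be, like a human reviewer or a test suite, will catch this.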

Every scanner will also have false positives. Some scanners will have a lot of false positives. As mentioned in the last post, make sure you report false positives to the scanner vendors. They are bugs. False negatives are also bugs, but they’re a lot harder to pick out and report.

What can we do?

I would love to tell you security code scanners will get better with time. They’re already about a decade old and the progress we’ve seen is not super impressive. Like most technology you should understand your return on investment for using a code scanner. If that return is negative, you’re wasting resources scanning the code.

One of the most dangerous traps we can fall into in security is using tools or processes “because that’s the way we do it”. We should always be evaluating everything we do and making it better. Because of the arrow of time, a process that isn’t getting better is getting worse. Nothing ever just stays the same. Using this logic I would probably argue code scanners are mostly staying the same (feel free to draw a conclusion here). Newer, safer languages are likely the future, not better code scanners.

In the next post we will cover composition scanners. Composition scanning is newer and currently shows promise. It’s also a problem that’s a lot easier to understand and solve than code scanning is.

Part 1: Is your security scanner running? You better go catch it!

This post is the first part in a series on automated security scanners. I explain some of the ideas and goals in the intro post. Rather than rehash that content here as filler, just go read it; rehashing content isn’t exciting.

There are different kinds of security scanners, but the problem with all of them is basically the same. The results returned by the scanners are not good in the same way catching poison ivy is not good. The more you have, the worse it is. The most important thing to understand, and the whole reason I’m writing this series, is that scanners will get better in the future. How they get better will be driven by all of us. If we do nothing, they will get better in a way that might not make our lives easier. If we can understand the current shortcomings of these systems, we can better work with the vendors to improve them in ways that will benefit everyone.

The quick win: I did something, and something is better than nothing!

One of the easiest problems we can see when running a security scanner is the idea that doing anything is better than doing nothing. Sometimes this is true, sometimes it’s not. You have to decide for yourself if running a scanner is better than not running a scanner, every organization is different. I see a common theme in the security industry where we take actions but then never follow through. Just running a scanner isn’t the goal, the goal is making our products and services more secure.

If you’re running scans and not reviewing the output, doing something is not better than doing nothing. What I see on a regular basis is a security team handing a phone book sized scan report to the development team, demanding they fix everything, clearly not having even looked at it themselves. If you can’t be bothered to review the massive report, why should the dev team? A good dev team will push back with a firm “no”. Even if they wanted to review all the findings, practically speaking it can’t be done in a reasonable amount of time.

So what do we expect to see in the report we didn’t read?

False positives

False positive findings are probably the single biggest problem with security scanners today. A false positive is when the scanner flags a particular line of source code, or dependency, or application behavior, as being a security problem when it isn’t. A very high percentage of the time, whatever findings the report spits out will be false positives. This is problematic because dealing with false positive results has a cost.

No scanner will ever have zero false positives, but a scanner that has orders of magnitude more false positives than true positives is a broken scanner. Anything it tells you should be treated with skepticism. I did some searching to find what other industries consider an acceptable false positive rate to be. I couldn’t find anything I want to link to, it’s all quite dull, but the vast majority of industries seem to put it somewhere around 1%-10%. If I had to guess, most scanners today have a false positive rate very close to 100%. I’ve seen many scan reports where all of the findings were false positives. This is not acceptable.
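Back-of-the-envelope arithmetic shows why this matters. The numbers below are illustrative, not measured:

```python
# Illustrative numbers only, not measurements: what a ~95% false positive
# rate does to the economics of triaging a scan report.
findings = 1000
false_positive_rate = 0.95
minutes_to_triage_one = 15

real_findings = round(findings * (1 - false_positive_rate))
hours_of_triage = findings * minutes_to_triage_one / 60

print(real_findings)    # 50 real issues buried in the report
print(hours_of_triage)  # 250.0 person-hours of triage to find them
```

That is weeks of engineering time spent mostly on confirming that findings aren't real, which is exactly the cost the paragraph above is describing.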

Our job as users of security scanners is to report false positives to whoever creates the scanner. They won’t get better if we don’t tell them what we’re seeing. When you’re paying a company for their product it’s quite appropriate to make reasonable requests. One challenge any product team has is receiving a large number of unrelated requests. When you have 100 customers with 100 different feature requests it’s really hard to prioritize. If you have 100 customers with the same request, the product team can have a razor sharp focus. We should all be asking our scanner vendors for fewer false positives. Every false positive is a bug. Bugs should be reported.

Maybe positives

There’s another class of issues I’ve seen from security scanners that I’m going to call “maybe” positives. The idea here is the scanner flags something, and when a human reviews the result they can’t tell whether it is or isn’t a problem.

These are tough to deal with, and they can be dangerous. I bring it up because this problem created some trouble for Debian some time ago. DSA-1571 was a security flaw that resulted in OpenSSL generating predictable secret keys. The patch that caused the bug was the result of silencing a warning from a memory-checking tool (Valgrind). Sometimes warnings are OK, and it can be very difficult to know when.

The reason I include this is to warn that fixing errors just to make the scanner be quiet can be dangerous. You’re better off fixing the scanners than trying to fix code you don’t really understand just to make the scanner results go away.

Low quality positives

One of the other big problems you will see from security scanners is findings that are positives, but they’re not really security problems. I call these “low quality positives.” They are technically positives, but they shouldn’t be.

An easy example I saw recently was a scanner claiming a container had a vulnerable package in it. The vulnerability didn’t have a CVE ID. After some digging I managed to find a link to the upstream bug. The bug was closed by the upstream project as “not a bug”. That means the finding from the scanner wasn’t actually a vulnerability (or a bug). One could argue the scanner may have added the finding before it was marked “not a bug.” I would argue back that they should be double checking these findings so they know when the bug gets fixed (or closed, in this instance).

These scanners should be getting CVE IDs for their findings. If a scanner isn’t a CVE naming authority (CNA), you should ask them why not. Findings that don’t have CVE IDs are probably low quality findings. Actual security issues will get CVE IDs. If the security issue can’t get a CVE ID, it’s not a security issue.

False negatives

The last point I want to cover is false negatives. A false negative is when there is a vulnerability present in a project but the scanner doesn’t report it. It is inevitable that every scanner will have false negatives, and they are very difficult to identify.

Something we can do here is to try out different scanners every now and then. If you only ever run one scanner you will have blind spots. Running a different scanner can give you some insight into what your primary scanner may be missing.

Wrapping up

If you take away anything from this post I hope it’s that the most important thing we can do about the current quality problems with security scanners is to report the low quality results as bugs. I think we are currently treating these unacceptable reports with more seriousness than they deserve. Remember this industry is in its infancy; it will grow up, but without guidance it could grow into a horrible monster. We need to help nurture it into something beautiful. Or at least not completely ugly.

Part 2 is when we start to talk about specific types of scanners. Source code scanners are up next.

The Security Scanner Problem

Are you running a security scanner? It seems like everyone is doing it, maybe it’s time to get with it. It’s looking like automated security scanning is the next stage in the long winding history of the security industry. If you’ve never run one of these scanners that’s OK. I’m going to explain what they are, how they work, how we’re not using them correctly, and most importantly, what you can do about it. If you are running a scanner I’m either going to tell you why you’re doing it wrong, or why you’re doing it REALLY wrong. If you’re a vendor who builds a security scanner I assure you I understand there is a high probability I am indeed an idiot and don’t know what I’m talking about. I’m sure everything will be fine.

Automated scanning IS changing the world, but right now it’s not changing it for the better, it’s currently the security industry version of lead paint. The technology is still REALLY new, so it’s important we have proper expectations and work together to make things better. One of the challenges with new technology is understanding what you have now, and more importantly understanding what you need next. Like any tool, if you use it wrong it can make things worse than doing nothing at all. Let’s talk about how to make things better.

If you’ve never seen the sort of report an automated scanner generates you should probably consider yourself lucky. The best way to describe these reports is if you had a 10 page report that wasn’t very good, then you made 100 copies of every page, shuffled them around a bit and stapled it all together. There are some useful findings in the report, but they’re really hard to find. Expecting anyone to parse a 1000 page report for one or two findings has a terrible return on investment. It’s even less helpful if you send the report to someone else with unrealistic demands, such as requesting they fix all of the findings. By Friday. If you didn’t read the report, why should they?

There’s also the problem of incentives. Today a lot of scanner vendors talk about all the findings their scanner will … well, find. They fail to mention how many false positives are in those findings. In the case of security reports, more findings is like bragging your house has the most lead paint. It’s not really a contest you want to win, and if you are winning you probably have a lot of work to do. That’s OK though, this is new technology trying to solve a REALLY hard and mostly unsolved problem. Someday we’re going to look back on all this the same way we look back at food safety in the 1900’s. Asbestos was an ice cream flavor, motor oil counted as a vegetable, and nobody worried about where the meat came from.

There are a lot of moving parts in the scanner story, so I’m going to write a series of blog posts to help explain the problem, explain what these scanners are doing, and finally what we can do about it.

The rough outline is going to look something like this:

  1. Is your scanner running? You better go catch it!
    1. The quick win: I did something, and something is better than nothing!
    2. False positives
    3. Maybe positives
    4. Low quality positives
    5. False negatives
  2. Source code scanners
    1. I have software and nobody knows how it works
    2. The only thing harder than writing secure software is writing a code scanner
  3. Composition scanners
    1. Who let all this open source in?
    2. Updating dependencies is harder than not updating them
    3. Which of these security problems do I need to care about?
  4. Application scanning
    1. Scanning a running application is hard
    2. Scanning user interfaces
    3. Scanning APIs
  5. Which of these security problems do I need to care about?
    1. I ran the scan
    2. I was given a scan
    3. Falsifying false positives
    4. Parsing the positive positives
  6. What do we do now?
    1. Understand the problem we want to solve
    2. Push back on scanner vendors
    3. Work with your vendors
    4. Get involved in open source

Ending a blog post with an outline is pretty lame. Since writing good conclusions is hard work, I’m going to just link you to the first post of actual content in the series. I have a number of posts on this blog that talk about open source dependencies and the supply chain. I’m not going to torture you with links to all of them; I’m going to explain the problem from a slightly different angle in this series. If your time has very little value, you can dig through the archives and see if there’s anything there worth reading.

Part 1: Is your security scanner running? You better go catch it!

Episode 186 – Endpoint security with Tony Meehan

Josh and Kurt talk to Tony Meehan from Elastic (formerly Endgame) about endpoint detection, response, protection, and even SIEM. Tony has a great history coming from the NSA and has a number of great stories to help understand the topics.

Show Notes

Episode 185 – Is it even possible to fix open source security?

Josh and Kurt talk about the Linux Foundation Census 2. There is a lot of talk around how to fix open source security, but the reality is we can’t fix it. We need to stop trying to fix what isn’t broken and engineering around the system we have, not the system we want.

Show Notes

Episode 184 – It’s DNS. It’s always DNS

Josh and Kurt talk about the sale of the corp.com domain. Is it going to be the end of the world, or a non event? We disagree on what should happen with it. Josh hopes an evildoer buys it, Kurt hopes for Microsoft. We also briefly discuss the CIA owning Crypto AG.

Show Notes