This tweet from Jim Manico really has me thinking about why we like to consider security bugs special. There are a lot of tools on the market today that scan your GitHub repos, containers, operating systems, web pages … pick something, for security vulnerabilities. I’ve written a very, very long series about these scanners and why they’re generally terrible today, and why they’ll only get better if we demand it. I’m now wondering why we want to consider security special. Why do we have an entire industry focused just on security bugs?
Let’s change the conversation a little bit. Rather than focus on security bugs, let’s ask the question: Which bugs in a given project should we care about?
There are of course bugs an attacker could use to compromise your system. There are also bugs that could result in data loss. Or bugs that could bring everything down. What about a bug that uses 10% more CPU? Every piece of software has bugs. All bugs are equal, but some bugs are more equal than others.
We are at a time in software history where we have decided security bugs are more equal than other bugs. This has created entire industries around scanning just for security problems. Unfortunately the end goal isn’t always to fix problems; the goal is often just to find problems, so problems are found (a LOT of problems). I think this is a pretty typical case of perverse incentives. You will always find what you measure. The pendulum will swing back in time; maybe we can help it swing a little faster.
How do you find bugs in your software? There are probably two big sources. You have users, and those users report problems (and of course demand fixes). You can also have a bug show up during testing. Ideally it’s automated testing. Automated testing is one of the most important innovations in software development in the last few decades. The most important functionality of your project can be tested to make sure nothing obvious breaks when you make a change, like updating a dependency.
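As a minimal sketch of what that looks like in practice (the function and test names here are invented for illustration, not from any real project), an automated regression test pins down behavior you rely on so a dependency update can’t silently break it:

```python
# Hypothetical sketch: a thin wrapper around a dependency, plus a
# regression test that would run in CI on every change.
import json  # stand-in here for any third-party dependency


def load_config(text):
    """Parse a config blob; the shape we rely on is a dict."""
    data = json.loads(text)
    if not isinstance(data, dict):
        raise ValueError("config must be a JSON object")
    return data


def test_load_config_round_trip():
    # If a dependency update changes parsing behavior, this fails
    # loudly in CI instead of breaking production.
    assert load_config('{"retries": 3}') == {"retries": 3}


test_load_config_round_trip()
```

Nothing fancy, but it’s exactly the kind of check that catches an obvious breakage the moment you bump a library version.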
But users and testing don’t really find security bugs. A lot of these are really bizarre corner cases that no normal user would ever hit. If you’ve ever run a fuzzer you know what this means. You might have an API that expects positive integers as the input. When you hand it the string “banana stand” it suddenly decides to crash, because a string isn’t an integer. Why would anyone do that is the obvious question. Most automated tests don’t do extremely silly things like this.
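A toy sketch of that corner case (the `set_retry_count` name is invented for illustration): the happy-path test passes, while the fuzzer-style input crashes with an exception the author never planned for.

```python
# Hypothetical API: the author assumed callers always pass something
# that looks like a positive integer.
def set_retry_count(value):
    count = int(value)  # raises ValueError on "banana stand"
    return max(count, 1)


# A typical automated test only covers expected inputs:
assert set_retry_count("3") == 3

# A fuzzer, by contrast, will happily try garbage:
try:
    set_retry_count("banana stand")
except ValueError:
    print("crashed on unexpected input")
```

Here the crash is just an exception; the interesting security bugs are the ones where the unexpected input does something far worse than crash.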
Yet this is basically how security bugs work. Now imagine that when we send our API “banana stand”, instead of crashing it deletes all your files. That would be a very bad thing. Now imagine someone in Australia figures out they can delete all your files and decides it would be fun to do exactly that on a Friday night. Your weekend is ruined, either because you have to spend it restoring data or because you just lost your job.
And this is probably why we like to make a huge deal about security bugs.
I also think the other half of this story is open source. A lot of these scanners are focused on open source libraries you are using in your application. Initially the threat was open source itself. It’s written by miscreants who want to give away things for free! The first iteration of these scanners was all about just finding hidden open source in your products so you can remove it.
But the siren song of free lured away too many developers. If we can’t scare you away from open source, maybe we can scare you into thinking the open source you have is inherently broken, you just don’t know it, and you need a tool to tell you what’s wrong with it, and we happen to have such a tool!
Security and fear were forged on the same anvil. Forward-thinking security folks know that fear is not a motivator, it is a de-motivator. Fear paralyzes people; it does not empower them. The vast majority of these scanners are focused on what can go wrong and why you should be careful with the libraries you are using. There is some truth in this, but we should be focusing on what can go right and how to make things better. There is more good than bad.
This is where I start to wonder why we focus only on security bugs. Is there a tool that tells us when a new version of a library has a 20% speed improvement? Or what about when a library fixes a bug you had to work around last year? What if a project fixed a bug that could cause data corruption? What about giving advice as to which library has a healthier community and better features? These are arguably more important than 98% of all security bugs affecting your project.
It’s easy to say “But those problems are hard” and I would agree. Solving hard problems is how the future is built. Telling me my project has 600 security vulnerabilities isn’t very useful. How many of those 600 actually matter? If my project is this insecure how is it not literally on fire all of the time? Maybe I should just put a paper bag over my head and ignore the scan. Yes, that sounds easier.
Now, imagine the scanner telling me I should upgrade 3 libraries to get a 20% speed improvement, fix a data corruption bug, and get rid of a cross site scripting vulnerability. Now I’m listening! This is the tool we should be demanding.
The reality is many of these tools cost the price of a full time employee and deliver the value of an intern. That’s probably too harsh. I apologize to the interns.