Why you can’t backdoor cryptography

Once again the topic of backdooring cryptography is in the news. The same people will fight the same fight. Again. So far sanity has prevailed every time we do this, but that doesn’t mean anyone should sit this one out. Make sure you tell everyone to pay attention and care. Trustworthy cryptography is too important.

Given the language used, it sounds a lot like what's really being discussed is the ability to view chat apps, read emails, and unlock phones. All things with a consumer focus. They've lost this fight more times than we can count now; no doubt this change of direction is an attempt to spread confusion.

I also want to look at this from a slightly different angle this time. Generally we talk about how the technology behind a backdoor doesn’t work. That’s still true, but let’s pretend the technology could work. Maybe some grad student is finishing up a paper and next month we’ll hear about a new form of cryptography that can be backdoored without any technical problems. It actually can’t because people are the problem. This is like insisting we build a rocketship out of cardboard to go to the moon. Just no. But in this post, we’re going to pretend we have a technical solution. Put on your cardboard space helmet, it’s time to get real.

Geography

So let’s start with the easiest problem to understand. Geography. We live on a planet that is pretty big. It has a lot of countries and a lot of people. Even inside of countries there are different rules and jurisdictions. There are also a huge number of laws and departments that all do things differently. What happens when you visit another country? What about when someone comes to your country? If only one country has these laws, what happens? What if every country decided to do something like this? Have you ever seen two countries agree on anything this complex? They can’t even agree on simple things to be honest.

Geography is the easiest to understand and biggest hurdle here. There will no doubt be talk of how this will only affect American companies. Or only people in America, maybe even only American citizens. The problem with this is we live in a connected society. There’s no such thing as an “American company” anymore. Even small companies have staff and customers all over the globe. Do you think another country is going to do nothing if America decides it can spy on their citizens? This is not going to have a pleasant ending.

The fringe

What about the non-mainstream apps? There are hundreds of chat apps in any of the app stores. There are thousands of email providers. Many of us only know about a few of the options, but those who wish to do us harm know about all the non-mainstream ones.

There are two possible ways this plays out with all the fringe solutions. One possibility is we don't force the smaller players to include a backdoor. That would naturally drive the criminals to those apps and services. Word gets out pretty fast about what is and isn't safe. Crime is an opportunistic industry; criminals will use whatever they can to their advantage.

The other possibility is small competitors are driven out of the market because they can't possibly comply with a law like this. Adding a backdoor is going to be difficult. Ignoring the technical arguments, when you're small, adding a feature that has zero positive impact on the user experience is a waste of resources. Small companies won't be able to afford to innovate. This will effectively result in worse options as the big players face less competition.

Trust

Trust is becoming really important. Is anyone really going to trust this process? This isn't going to be like a wiretap law. Our phones are part of our lives now. Many people would rather lose their wallet than their phone. Will you trust a phone or app that could leak all your secrets because you suddenly became a person of interest in an investigation?

It's well known that people act differently when they know they are being watched. If you know you can't trust your phone, your chat application, or even your email, that will change how you communicate and what you do. Modern free societies are built on trust; we forget that sometimes.

The why

The most important question we should be asking is the why. Why do they need this? Is there some huge number of unsolved crimes because WhatsApp chats couldn’t be viewed? Is there a warehouse of locked phones somewhere that’s stopping law enforcement from doing their jobs? We will of course never know the real why, but it’s not because law enforcement can’t do their job. In a healthy democracy law enforcement works to ensure the innocent are not charged with crimes they didn’t commit. Even with the current process there are problems. Do we honestly believe this program will result in fewer problems?

A program that gives the state access to our data isn’t going to make us safer. It’s not going to put more criminals behind bars. Why would we do something this disruptive that won’t drastically change things for the better? And if there is a compelling why, let’s hear it. The risk vs reward here doesn’t look very good.

The economics

The last and most important point is the economics that ties everything together. Technology that allows data to be encrypted on the Internet is why we trust the Internet. It’s what has driven most economic growth over the last decade. The positive change we have seen thanks to trustworthy encryption vastly outweighs the possible negative repercussions if we weaken our encryption. Think of it this way. If we give law enforcement the access they want, it comes with a real cost. Will that cost be offset by the crimes this would stop? I have a very strong suspicion the answer is “probably not”.

While we all have new and interesting technology today, law enforcement has new technology too. Rather than trying to recycle old ideas, figure out how you can leverage what you have and move the needle forward by innovating. This isn’t innovation, this is short sighted nonsense.

The security of dependencies

So you’ve written some software. It’s full of open source dependencies. These days all software is full of open source, there’s no way around it at this point. I explain the background in my previous post.

Now that we have all this open source, how do we keep up with it? If you’re using a lot of open source in your code there could be one or more updated dependencies per day!

Step one is knowing what you have. There are a ton of ways to do this, but I’m going to bucket things into 3 areas.

  1. Do nothing
  2. Track things on your own
  3. Use an existing tool to track things

Do nothing

First up is don’t track anything. Ignore the problem.

At first glance you may think I’m joking, but this could be a potential solution. There are two ways to think about this one.

One is you literally ignore the dependencies. You never ever update them. Ever. This is a bad idea: there will be bugs, there will be security problems. They will affect you and someday you'll regret this decision. I wouldn't suggest this to anyone ever. If you do this, make sure you keep your résumé up to date.

The non-bananas way you can do this is to let things auto-update. I don't mean ignore things altogether, I mean ignore knowing exactly what you have. If you're building a container, make sure you update the container to the latest and greatest everything during the build. For example, if you have a Fedora container, you would run "dnf -y upgrade" on every build. That will pull in the latest and greatest packages from Fedora. If you pull in npm dependencies, you make sure the latest and greatest npm packages are installed every time you build. If you're operating in a very devops-style environment you're rebuilding everything constantly (right …. RIGHT!), so why not take advantage of it.
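As a sketch, an update-on-every-build container might look something like this (the base image tag, paths, and application bits here are made-up placeholders, not a real project):

```dockerfile
# Hypothetical Fedora-based image that refreshes every package on each build.
FROM fedora:latest

# Pull in the latest packages at build time, then clean the cache
# so the image doesn't carry the package metadata around.
RUN dnf -y upgrade && dnf clean all

# Your application goes on top of the freshly updated base.
COPY app/ /opt/app/
CMD ["/opt/app/run.sh"]
```

Because every build re-runs the upgrade, each rebuild picks up whatever the distribution has shipped since the last one. That's exactly the trade-off described above: always current, occasionally broken.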

Now, it should be noted that if you operate in this way, sometimes things will break. And by sometimes I mean quite often and by things I mean everything. Updated dependencies will eventually break existing functionality. The more dependencies you have, the more often things will break. It’s not a deal breaker, it’s just something you have to be prepared for.

Track things on your own

The next option is to track things on your own. This one is going to be pretty rough. I've been part of teams that have done this in the past. It's a lot of heavy lifting. A lot. You have to keep track of everything you have, everything that gets added, how it's used, what's being updated, and what has outstanding security problems. I would compare this to juggling 12 balls with one hand.

Now, even though it’s extremely difficult to try to track all this on your own, you do have the opportunity to track exactly what you need and how you need it. It’s an effort that will require a generous number of people.

I’m not going to spend any time explaining this because I think it’s a corner case now. It used to be fairly common mostly because options 1 and 3 either didn’t exist or weren’t practical. If this is something you have interest in, feel free to reach out, I’d be happy to convince you not to do it 🙂

Use an existing tool to track things

The last option is to use an existing tool. In the past few years quite a few tools and companies have emerged with the purpose of tracking what open source you have in your products. Some have a focus on security vulnerabilities. Some focus on licensing. Some look for code that's been copied and pasted. It's really nice to see so many options available.

There are two really important things you should keep in mind if this is the option you're interested in. Firstly, understand what your goal is. If your primary concern is keeping your dependencies up to date in a node.js project, make sure you look for that. Some tools do a better job with certain languages. Some tools inspect containers rather than source code, for example. Some focus on git repositories. Know what you want, then go find it.

The second important thing to keep in mind is that none of these tools are going to be 100% correct. You'll probably see around 80% accuracy, maybe less depending on what you're doing. I often say "perfect and nothing are the same thing". There is no perfect here, so don't expect it. There are going to be false positives, and there will be false negatives. This isn't a reason to write off tools. Account for this in your planning. Things will get missed, and there will be some fire drills. If you're prepared to deal with it, it won't be a huge deal.

The return on investment will be orders of magnitude greater than trying to build your own perfect tracking system. It's best to look at a lot of these things from a return on investment perspective. Perfect isn't realistic. Nothing isn't realistic. Find your minimum viable security.
So now that you know what you’re shipping, how does this all work, what do you do next? We’ll cover that in the near future. Stay tuned.

Supplying the supply chain

A long time ago Marc Andreessen said "software is eating the world". This statement ended up being quite profound in hindsight, as most profound statements are. At the time nobody really understood what he meant, and it probably wasn't until the public cloud caught on that it became something nobody could ignore. The future of technology was less about selling hardware and more about building software.

We’re at a point now where it’s time to rethink software. Well, the rethinking happened quite some time ago, now everyone has to catch up. Today it’s a pretty safe statement to declare open source is eating the world. Open source won, it’s everywhere, you can’t not use it. It’s not always well understood. And it’s powering your supply chain, even if you don’t know it.

In a previous post I talk about what open source dependencies are. This post is meant to explain how all these dependencies interact with each other and what you need to know about it. The topic of supply chains is coming up more and more often, and it's usually not great news. When open source comes up in the context of the supply chain it's very common for the story to center around how dangerous open source is. Of course if you just use this one tool, or this one vendor, or this one something, you'll be able to sleep at night. Buying solutions for problems you don't understand is usually slightly less useful than just throwing your money directly into the fire.

Any application depends on other software. Without getting overly detailed it’s safe to say that most of us develop software using libraries, interpreters, compilers, and operating systems from somewhere else. In most cases these are open source projects. Purely proprietary software is an endangered species. It’s probably already extinct but there are a few deniers who won’t let it go quietly into the night.

The intent of the next few blog posts is going to be to pick apart what using open source in your supply chain means. For the rest of this particular post we're going to put our focus on open source libraries you depend on in your project. Specifically I'm going to pick on npm and containers in my examples. They have two very different ways to deal with dependencies. Containers tend to include packaged dependencies, where npm has a more on-demand approach. I don't think one is right; each has drawbacks and advantages. They're just nice examples that are widely used.

Let's explain containers first.

So in the container world we use what's called a filesystem bundle. It's really just a compressed archive file, but that's not important. The idea is if you need some sort of library to solve a problem, you toss it in the bundle. You can share your bundles, others can add more things on top, then ship a complete package that has all the important bits stuffed inside. This is mostly done because it's far easier to deploy a complete system than it is to give someone hundreds of poorly written instructions to set up and deploy a solution. Sysadmins from the late '90s and early 2000s understand this pain better than anyone ever. The advantages substantially outweigh the drawbacks, which is one of the reasons containers are taking over the world.

The way something like npm does this is a bit different. When you need a dependency, you install it, then it installs whatever it needs. It's sort of turtles all the way down, with dependencies having dependencies of dependencies. Then you get to use it. The thing that's missed sometimes is that if you install something today, then install the exact same thing tomorrow, you could get a different set of packages and versions. If version 1.2 is released today, it couldn't have been the version you installed yesterday. This has the advantage of getting more updated packages, but has the downside of breaking things, as newer packages can behave differently. You can work around this by specifying a certain version of a package at install time. It's not uncommon to pin the version like this, but it does introduce some of the container problems with outdated dependencies.
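To make the trade-off concrete, here's what the two approaches look like in a package.json (the package names and versions are made up for illustration):

```json
{
  "dependencies": {
    "some-library": "^1.1.0",
    "some-pinned-library": "1.4.2"
  }
}
```

The caret range ("^1.1.0") means tomorrow's install can silently pick up 1.2 once it's released. The exact version ("1.4.2") installs the same bits every time, at the cost of never getting updates until you bump it yourself.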

My code is old but it works

There are basically two tradeoffs here.

You have old code in your project, but it works because you don’t have to worry about a newer library version changing something. While the code doesn’t change, the world around it does. There is a 100% chance security flaws and bugs will be discovered and fixed in the dependencies you rely on.

The other option is you don't have old libraries; you update things constantly and quickly, but you run the risk of breaking your application every time you update the dependencies. This risk is also 100%. At some point, something will happen that breaks your application. Sometimes it will be a one-line fix; sometimes you're going to be rewriting huge sections of a feature, or installing the old library and never updating it again.

The constant update option is the more devops style and probably the future, but we have to get ourselves to that future. It’s not practical for every project to update their dependencies at this breakneck speed.

What now

The purpose of this post wasn’t to solve any problems, it’s just to explain where we are today. Problem solving will come as part of the next few posts on this topic. I have future posts that will explain how to handle the dependencies in your project, and a post that explains some of the rules and expectations around handling open source security problems.

Security isn’t a feature

As CES draws to a close, I've seen more than one security person complain that nobody at the show was talking about security. There were an incredible number of consumer devices unveiled; no doubt there is no security in any of them. I think we get caught up in the security world sometimes, so we forget that the VAST majority of people don't care if something has zero security. People want interesting features that amuse them or make their lives easier. Security is rarely either of those; generally it makes their lives worse, so it's an anti-feature to many.

Now the first thing many security people think goes something like this: "if there's no security they'll be sorry when their lightbulb steals their wallet and dumps the milk on the floor!!!" The reality is that argument will convince nobody; it's not even very funny, so they're laughing at us, not with us. Our thoughts by their very nature blame all the wrong people, and we try to scare them into listening to us. It's never worked. Ever. That one time you think it worked, they only pretended to care so you would go away.

So that brings us to the idea that security isn't a feature. Turning your lights on is a feature. Cooking your dinner is a feature. Driving your car is a feature. Not bursting into flames is not a feature. Well, it sort of is, but nobody talks about it. Security is a lot like the bursting-into-flames thing. Security really is about something not happening, and things not happening is the fundamental problem we have when we try to talk about all this. You can't build a plausible story around an event that may or may not happen; trying is incredibly confusing. This isn't how features work; features do positive things, they don't not do negative things (I don't even know if that's right). Security isn't a feature.

So the question you should be asking is how do we make the products being created contain more of this thing we keep calling security. The reality is we can't make this happen given our current strategies. There are two ways products will be produced that are less insecure (see what I did there). One is the market demands it, which given current trends isn't happening anytime soon. People just don't care about security. The second is a government creates regulations that demand it. Given the current state of the world's governments, I'm not confident that will happen either.

Let's look at market demand first. If consumers decide that buying products that are horribly insecure is bad, they could start buying products with more built-in security. But even the security industry can't define what that really means. How can you measure which product has the best security? Consumers don't have a way to know which products are safer. How to measure security could be a multi-year blog series, so I won't get into the details today.

What if the government regulates security? We sort of end up in a similar place to consumer demand. How do we define security? It's a bit like defining safety, I suppose. We're a hundred years into safety regulations and still get a lot wrong, and I don't think anyone would argue defining safety is much easier than defining security. Security regulation would probably follow a similar path. It will be decades before things could be good enough to create real change. It's very possible by then the machines will have taken over (that's the secret third way security gets fixed, perhaps a post for another day).

So here we are again, things seem a bit doom and gloom. That’s not the intention of this post. The real purpose is to point out we have to change the way we talk about security. Yelling at vendors for building insecure devices isn’t going to ever work. We could possibly talk to consumers in a way that resonates with them, but does anyone buy the stove that promises to burst into flames the least? Nobody would ever use that as a marketing strategy. I bet it would have the opposite effect, a bit like our current behaviors and talking points I suppose.

Complaining that companies don’t take security seriously hasn’t ever worked and never will work. They need an incentive to care, us complaining isn’t an incentive. Stay tuned for some ideas on how to frame these conversations and who the audience needs to be.

Misguided misguidings over the EU bug bounty

The EU recently announced they are going to sponsor a security bug bounty program for 14 open source projects in 2019. There has been quite a bit of buzz about this program in all the usual places. The opinions are all over the place. Some people wonder why those 14, some wonder why not more. Some think it’s great. Some think it’s a horrible idea.

I don't want to focus too much on the details as they are unimportant in the big picture. Which applications are part of the program doesn't really matter. What matters is why we are here today and where this should go in the future.

There are plenty of people claiming that a security bug bounty isn’t fair, we need to be paying the project developers, the people who are going to fix the bugs found by the bug bounty. Why are we only paying the people who find the bugs? This is the correct question, but it’s not correct for the reasons most think it is.

There are a lot of details to unpack about all this and I don’t want to write a novel to explain all the nuance and complication around what’s going to happen. The TL;DR is basically this: The EU doesn’t have a way to pay the projects today, but they do have a way to pay security bug bounties.

Right now if you want to pay a particular project, who do you send the check to? In some cases like the Apache Software Foundation it’s quite clear. In other cases when it’s some person who publishes a library for fun, it’s not clear at all. It may even be illegal in some cases, sending money across borders can get complicated very quickly. I’ll give a shoutout to Tidelift here, I think they’re on the right path to make this happen. The honest truth is it’s really really hard to give money to the open source projects you use.

Now, the world of bug bounties has figured out a lot of these problems. They've gotten pretty good at paying people in various countries. Making sure the people getting paid are vetted in some way. And most importantly, they give an organization one place to send the check and one place to hold accountable. They've given us frameworks to specify who gets paid and for what. It's quite nice really. I wrote some musings about this a few years ago. I still mostly agree with my past self.

So what does this all really mean is the big question. The EU is doing the only thing they can do right now. They have money to throw at the problem, the only place they can throw it today is a bug bounty, so that’s what they did. I think it’s great. Step one is basically admitting you have a problem.

Where we go next is the real question. If nothing changes and bug bounties are the only way to spend money on open source, this will fizzle out as there isn’t going to be a massive return on investment. The projects are already overworked, they don’t need a bunch of new bugs to fix. We need a “next step” that will give the projects resources. Resources aren’t always money, sometimes it’s help, sometimes it’s gear, sometimes it’s pizza. An organization like the EU has money, they need help turning that into something useful to an open source project.

I don’t know exactly what the next few steps will look like, but I do know the final step is going to be some framework that lets different groups fund open source projects. Some will be governments, some will be companies, some might even be random people who want to give a project a few bucks.

Everyone is using open source everywhere. It’s woven into the fabric of our most critical infrastructures. It’s imperative we find a way to ensure it has the care and feeding it needs. Don’t bash this bug bounty program for being short sighted, praise it for being the first step of a long journey.

On that note, if you are part of any of these projects (or any project really) and you want help dealing with security reports, get in touch, I will help you with security patches, advisories, and vulnerability coordination. I know what sort of pain you’ll have to deal with, open source security can be even less rewarding than running an open source project 🙂

What’s up with backdoored npm packages?

A story broke recently about a backdoor added to a Node Package Manager (NPM) package called event-stream. This package is downloaded about two million times a week by developers. That’s a pretty impressive amount, many projects would be happy with two million downloads a year.

The Register did a pretty good writeup, so I don't want to recap the details here. I have a different purpose, and that's really to look at how this happens and whether we can stop it.

Firstly, the short answer is we can’t stop it. You can stop reading now if that’s all you came for. Go tell all your friends how smart you are for only using artisan C libraries instead of filthy NPM modules.

The long answer is, well, long.

So the thing is, event-stream is an open source project. There are a lot of open source projects. More than we can count. Probably millions. The VAST majority of open source projects are not well funded or run by people getting paid to work on them. A few are; Linux and Apache are easy examples. These are not the norm though.

We’ll use event-stream as our example for obvious reasons. If we look at the contributions graph, we see a project that isn’t swimming in help or commits. This is probably a pretty normal looking set of contributions for most open source.

So the way it works is if I want to help, I just pretty much start helping. It's likely they'll accept my pull requests without too much fanfare. I could probably become a committer in a few weeks if I really wanted to. People like it when someone helps them out; we like helpers, helpers are our friends. Humans evolved to generally trust each other because we're mostly good. We don't deal well with bad actors. It's a good thing we don't try to account for bad actors in our everyday lives; it would drive you mad (I'm looking at you, security industry).

So basically someone asked if they could help, they were allowed to help, then they showed their true intent once they got into the building. This is not the first time this has happened. It’s happened many times before, I pretty much guarantee it’s happening right now with other open source projects. We don’t have a good way to know who has nefarious intent when they start helping a project.

At this point if your first thought is to stop using open source you should probably slow down a little bit. Open source already won, you can’t avoid it, don’t pretend you can. Anyone who tells you different is an idiot or trying to sell you something (or both).

As long as open source is willing to allow people to contribute, this problem will exist. If people can’t contribute to open source, it’s not very open. There are some who will say we should make sure we review every commit or run some sort of code scanner or some other equally unlikely process. The reality is a small project just can’t do this, they don’t have the resources and probably never will.

It’s not all doom and gloom though, the real point to this story is that open source worked exactly how it is meant to work. Someone did something bad, it was noticed, and quickly fixed. There are some people who will be bitten by this package, it sucks if you’re one of them, but it’s just how things work sometimes.

It’s a bit like public health. We can’t stop all disease, some number of people will get sick, some will even die. We can’t prevent all disease but we can control things well enough that there isn’t an epidemic that wipes out huge numbers of people. Prevention is an impossible goal, containment is not.

This problem was found and fixed in a pretty reasonable amount of time, that’s pretty good. Our focus shouldn’t be on prevention, prevention is impossible. We should focus on containment. When something bad starts to happen, we make sure we can stop it quickly. Open source is really a story about containment, not a story about prevention. We like to focus on prevention because it sounds like the better option, but it’s really impossible. What happened with event-stream isn’t a tire fire, it’s how open source works. It will happen again. The circle will be unbroken.

Dependencies in open source

The topic of securing your open source dependencies just seems to keep getting bigger and bigger. I always expect it to get less attention for some reason, and every year I’m wrong about what’s happening out there. I remember when I first started talking about this topic, nobody really cared about it. It’s getting a lot more traction these days, especially as we see stories about open source dependencies being wildly out of date and some even being malicious backdoors.

So what does it really mean to have dependencies? Ignoring the topic of open source for a minute, we should clarify what a dependency is. If you develop software today, there’s no way you build everything yourself. Even if you’re writing something in a low level language there are other libraries you rely on to do certain things for you. Just printing “hello world” calls into another library to actually print the text on the screen. Nobody builds at this level outside of a few select low level projects. Because of this we use code and applications that someone else wrote. If your business is about selling something online, writing your own web server would be a massive cost. It’s far cheaper to find a web server someone else wrote. The web server would be a dependency. If the web server is open source (which is probably is), we would call that an open source dependency.

Now that we grasp the idea of a dependency, what makes an open source dependency different? Fundamentally there is no difference. The differences revolve around availability and perception. Open source is very available. Anyone can find it and use it with almost no barrier to entry. If you have a library or application you have to purchase from another company, the availability is a very different story. It's going to take some time and effort to acquire that dependency. Open source doesn't need this level of time and effort.

If you visit github.com to download code to include in your project, or you visit Stack Overflow for help, or you find snippets using a search engine, you understand the simplicity of finding and using open source code. This is without question one of the reasons open source won. If you have a deadline of next month, are you going to use the library you can find and use right now, or spend three weeks trying to find and buy a library from another company? Even if it's not as good, having something right now is a massive advantage.

The perception aspect of open source is sort of a unique beast. I still see some people who wonder if this open source thing will catch on. I secretly feel bad for those people (not very bad though). There are also some who think open source is a huge free buffet of solutions: they can take whatever they want and never look at it again. Neither of these attitudes is healthy. I'm going to ignore the group that wonders if open source is a real thing. If you found this blog you're almost certainly not one of those people. What needs to be understood is that open source isn't free. You can't just take things and use them without consequence; all software, including open source, has to be cared for during its life. It's free like a puppy in that regard.

Obviously this is a security focused blog, so if you're using any software you have to worry about security updates. Security bugs are found in all software, and it's up to us to decide how and when to fix them. If you include dependencies in whatever you're building (and you certainly are including them), ignoring security issues is going to end badly someday. Probably not right away, but I've never seen that story end well.

Something that’s not often understood about all this open source, is open source usually depends on other open source. It’s turtles all the way down! It’s very common for any bit of code to depend on something else. Think about this in the context of a complicated machine like your car. Your car has parts, lots of parts. Most of those parts are built from multiple parts, which have parts. Eventually we get to parts that are mostly bolts and pieces of metal. Software works like this to a degree. Complex pieces of software are built with less complex pieces of software.

This post is meant to be the first part in what I suspect will be a very long series talking about what open source dependencies are, how they work, and what you need to do about them. None of this is terribly difficult to understand, but it's not very obvious either.


Targeted vs General purpose security

There seem to be a lot of questions going around lately about how best to give out simple security advice that is actionable. Goodness knows I've talked about this more than I can even remember at this point. The security industry is really bad at giving out actionable advice. It's common for someone to ask what good advice looks like. They'll get a few morsels, then someone will point out whatever corner case makes that advice bad, and the conversation will spiral into nonsense where we find ourselves trying to defend someone mostly concerned about cat pictures from being kidnapped by a foreign nation. By then, whoever asked for help quit listening long ago and decided to just keep their passwords written on a sticky note under the keyboard.

I’m pretty sure the fundamental flaw in all this thinking is we never differentiate between a targeted attack and general purpose security. They are not the same thing. They’re incredibly different in fact. General purpose advice can be reasonable, simple, and good. If you are a target you’ve already lost, most advice won’t help you.

General purpose security is just basic hygiene. These are the really easy concepts. Ideas like using a password manager, enabling multi-factor authentication, and installing updates on your system. These are the activities anyone and everyone should be doing. One could argue these should be the default settings for any given computer or service (that's a post for another day though). You don't need to be a security genius to take these steps. You just have to restrain yourself from acting like a crazy person so whoever asked for help can actually get the advice they need.

Now if you’re the target of a security operation, things are really different. Targeted security is when you’re an active target, someone has picked you out for some reason and has a specific end goal in mind. This is the sort of attack where people will send you very specific phishing mails. They will probably try to brute force your password to a given account. They might call friends and family. Maybe even looking through your trash for clues they can use. If you are a target the goal isn’t to stop the attacker, it’s just to slow them down enough so you know you’re under attack. Once you know you’re under attack you can find a responsible adult to help.

These two things are very different. If you try to lump them together you end up with no solution, and at best a confused audience. In reality you probably end up with no audience because you sound like a crazy person.

Here is an example. Let's say someone asks for advice about connecting to public wifi. Then you get a response about how your pen test used public wifi against an employee to steal their login credentials. That's not a sane comparison. If you have a specific target in mind you can play off their behaviors and typical activities. You know which sites they visit, you know which coffee house they like. You know which web browser and operating system they have. You have a level of knowledge that puts the defender in a position they can't defend against. General security doesn't work like that.

The goal of general purpose advice is to be, well, general. This is like telling people to wash their hands. You don't get into specifics about whether they've been in contact with flesh eating bacteria and how they should be keeping some incredibly strong antiseptic on hand at all times just in case. Actual advice is to get some soap, pretty much any soap is fine, and wash your hands. That's it. If you find yourself in the company of flesh eating bacteria in the future, go find someone who specializes in such a field. They'll know what to actually do. Keeping special soap under your sink isn't going to be one of the things they suggest.

There’s nothing wrong with telling people the coffee house wifi is probably OK for many things. Don’t do banking from it, make sure you have an updated browser and operating system. Stay away from dodgy websites. If things start to feel weird, stop using the wifi. The goal isn’t to eliminate all security threats, it’s just to make things a little bit better. Progress is made one step at a time, not in one massive leap. Massive leaps are how you trip and fall.

And if you are a specific target, you can only lose. You aren’t going to stop that attacker. Targeted attacks, given enough time, never fail.

Millions of unfixed security flaws is a lie

On a pretty regular basis I see claims that the public CVE dataset is missing some large number of security issues. I've seen ranges from tens of thousands all the way up to millions. The purpose behind such statements is to show that the CVE data is woefully incomplete. Of course, almost everyone making that claim has a van filled with security issues and candy they're trying very hard to lure us into. It's a pretty typical sales tactic, as old as time itself. Whatever you have today isn't good enough, but what I have, holy cow, it's better. It's so much better you'd better come right over and see for yourself. After you pay me, of course.

If you take away any single thing from this post, make it this: There are not millions of unfixed security flaws missing from the CVE data.

If you’re not familiar with how CVE works, I’ll give you a very short crash course. Essentially someone (anyone) requests a CVE ID, and if it’s a real security issue, a CVE gets assigned. It really is fundamentally this simple. Using some sort of advanced logic, the obvious question becomes: “why not get a CVE ID for all these untracked security flaws?”

That’s a great question! There are two possible reasons for this. The first is the organizations in question don’t want to share what they know. The second is all the things they claim are security issues really aren’t security issues at all. The second answer is of course correct, but let’s understand why.

The first answer assumes their security flaws are some sort of secret information only they know. This would also suggest the security issues in question are not acknowledged by the projects or vendors. If a project has any sort of security maturity, they are issuing CVE IDs (note: if you are a project who cares about security and doesn't issue CVE IDs, talk to me, I will help you). This means that if the project knows about a security issue, it will release a CVE ID for it. If they don't know about the issue, it not only lacks a CVE ID but is also unfixed. Not telling projects and vendors about security issues would be pretty weaselly. It also wouldn't make anyone any safer. In fact it would make us all a lot less safe.

This brings us to the next stop in our complex logical journey. If you are a company that has the ability to scan and track security issues, and you find an unknown security issue in a project, you will want to make some noise about finding it. That means you follow some sort of security process that includes getting a CVE ID for the issue in question. After all, you want to make sure your security problem is known to the public, and what better way than the largest public security dataset?

This brings us to the logical conclusion about all these untracked security issues: they're not really security problems. Some are just bugs. Some are nothing. Some are probably design decisions. Fundamentally, if there is a security issue that matters, it will get a CVE ID. We should all be working together to make CVE better, not trying to claim our secret data is better than everyone else's. There are no winners and losers when it comes to security issues. We all win or we all lose.

As most of these sort of fantastical claims tend to end, if it sounds too good to be true, it probably is.

Security reviews and microservices

We love to do security reviews on the projects, products, and services our companies use. Security reviews are one of those ways we can show how important security is. If those reviews didn’t get done we might end up using a service that could put our users and data at risk. Every good horror story involving dinosaurs starts with bad security reviews! It’s a lesson too few of us really take to heart.

The reality is someone picks a service, we review it, and it will probably still put our data at risk, but it went through a very rigorous review so we can show how much … review it got? I'm not really sure what that means, but we know security reviews are really important.

In all seriousness, these reviews are quite complex and fairly important. Doing any sort of review of an application takes a certain amount of knowledge and understanding. There's a lot of value in making sure you're not shipping or using something that is a tire fire of security problems. But all these rules are going to change. The world of microservices is going to make us rethink how everything works.

One security review can take a day or more, depending on what you're looking for. If something is large enough, it wouldn't be unreasonable for someone to spend a week going over all the details you need to understand before trusting something with your most important data.

But what happens when we have to review a dozen microservices?

What happens when we have to review a thousand microservices?

We can’t review a thousand microservices. We probably can’t review a dozen in all seriousness. It’s possible some things can be grouped together in some sane and reasonable manner but we all know that’s not going to be the norm, it’s going to be the exception.

What do we do now? There are two basic paths we can take. The first is we spend some quality time crying under our desk. It’s not terribly useful but will make you feel better. The second option is to automate the heck out of this stuff.

Humans don’t scale, not even linearly. In fact adding more humans probably results in worse performance. If you need to review a thousand services you will need an incredible number of people, and anytime people are involved there are going to be a lot of mistakes made. There is no secret option three where we just staff up to get this done. Staffing up probably just means you now have two problems instead of one.

Automation is the only plausible solution.

I did some digging into this, and there isn't a ton of information on automating this sort of process. I find that interesting, as it's a pretty big problem for almost everyone, yet we don't have a nice way to simplify it.

I did manage to find a project Google seems to use called VSAQ.

It’s not exactly automation as a human from the vendor has to fill out a form. Once the details are entered you can do things with the data and results. It puts all the work on the vendor to get things right. I don’t think vendors try to purposely mislead, but mistakes happen. And if you’re using open source there is no vendor to fill out the form.

Unfortunately, this blog post is going to end without any sort of actionable advice. I had hoped to spend time reviewing options in this space, but I found nothing. So the call to action has two parts.

Firstly, if there is something I’m missing, please let me know. Nothing would please me more than a giant “updated” section showing off some tools.

Second, if this is a problem you have, let's collaborate a bit. This would make a great open source project (I have some ideas I'm already working on, more about those in a future post). The best way is to hit me up on Twitter: @joshbressers