Sysadmins used to rule the world. Anyone who’s been around for more than a few years remembers the days when whatever the system administrator wanted, the system administrator got. They were the center of the business. Without them nothing would work. They were generally super smart and could quite often work magic with what they had. It was certainly a different time back then.
Now developers are king and the days of the sysadmin have waned. The systems we run workloads on are becoming a commodity: you either buy a relatively complete solution or you run it in the cloud. These days almost anyone using technology for their business relies on developers instead of sysadmins.
But wait, what about servers in the cloud, or containers (which are like special mini servers), or … other things that sysadmins have to take care of! If you really think about it, containers and cloud are just vehicles for developers. All this new technology, all the new disruption, all the interesting things happening are about enabling developers. Containers and cloud aren’t ends in themselves; they are the boats by which developers deliver their cargo. Cloud didn’t win, developers won; cloud just happens to be their weapon of choice right now.
If we think about all this, the question I keep wondering is “where does security fit in?”
I think the answer is that it doesn’t. It probably should, but we have to change the rules, because what we call security today is an antiquated and broken idea. A substantial amount of our security ideas and methods come from the old sysadmin world. Even our application security revolves around finding individual bugs, then releasing updates for them. This new world changes all the rules.
Much of our security thinking is based on the days when sysadmins ruled the world. They were like a massive T-Rex ruling their domain, instilling fear into those beneath them. Today in security we are trying to build Jurassic Park, except there are no dinosaurs; they all went extinct. Maybe we can use horses instead, nobody will notice … probably. Most security leaders and security conferences feature the same people saying the same things they’ve said for the last ten years. If any of it worked even a little, I think we’d have noticed by now.
If you pay attention to the new hip ideas around development and security you’ve probably heard of DevSecOps, Rugged DevOps, SecDevOps, and a few more. They may be different things, but really it should all just be called “DevOps”. We’re in the middle of disruptive change, and a lot of the old ideas and ways don’t make sense anymore. Security is still pretty firmly entrenched in 2004. Security isn’t a special snowflake, it’s not magic, and it shouldn’t be treated like it’s somehow outside the business. Security should just exist the same way electricity or the internet does. If you write software, having a separate security step makes as much sense as having a separate testing step. You used to have testing as a step; you don’t anymore, because it’s just part of the workflow.
I’ve asked the question in the past “where are all the young security people?” I think I’m starting to figure this out. There are very few because nobody wants to join an industry that is being disrupted (at least nobody smart) and let’s face it, security is seen as a problem, not a solution. The only real reason it’s getting attention lately is because we’ve done a bad job in the past so everything is on fire now. If you want to really scare someone to death, pull out the line “I’m from security and I’m here to help”. You aren’t really, you might think you are, but they know better.
If I asked everyone to tell me what security is, what they do about it, and why they do it, I wouldn’t get two answers that were the same. I probably wouldn’t even get two that were similar. Why is this? After recording Episode 9 of the Open Source Security Podcast I co-host, I started thinking a lot about measurement. It came up in the podcast in the context of bug bounties, which get exactly what they measure. But do they measure the right things? I don’t know the answer, nor does it really matter. It’s just important to keep this in mind: in any system, you will get exactly what you measure.
Why do we do the things we do?
I’ve asked this question before, and I often get answers from people. Some are well thought out, reasonable answers. Some are overly simplistic. Some are just plain silly. All of them are wrong. I’m going to go so far as to say we don’t know why we do what we do in most instances. Sure, there’s compliance, with a bunch of rules that everyone knows don’t really increase security. Some of us fix security bugs so the bad guys don’t exploit them (even though very few ever get exploited). Some of us harden systems using rules that probably won’t stop a motivated attacker.
Are we protecting data? Are we protecting the systems? Are we protecting people? Maybe we’re protecting the business. Sure, that one sounds good.
Measuring a negative
There’s a reason this is so hard and weird though. It’s only sort of our fault; it’s what we try to measure. We are trying to measure something not happening. You cannot measure how many times an event didn’t happen, and it’s impossible to prove a negative.
Do you know how many car accidents you didn’t get in last week? How about how many times you weren’t horribly maimed in an industrial accident? How many times did you not get mugged? These questions don’t even make sense; no sane person would try to measure those things. Yet these are basically our current security metrics.
The way we look at security today is all about the negatives. The goal is to not be hacked. The goal is to not have security bugs. Those aren’t goals, those are outcomes.
What’s our positive?
In order to measure something, it has to be true. We can’t prove a negative; we have to prove something in order to measure it. So what’s the “positive” we need to look for and measure? This isn’t easy. I’ve been in this industry for a long time and I’ve done a lot of thinking about this. I’m not sure the list below is right, but getting others to think about this is more important than being right.
As security people, we need to think about risk. Our job isn’t to stop bad things, it’s to understand and control risk. We cannot stop bad things from happening, the best we can hope for is to minimize damage from bad things. Right about now is where many would start talking about the NIST framework. I’m not going to. NIST is neat, but it’s too big for my liking, we need something simple. I’m going to suggest you build a security score card and track it over time. The historical trends will be very important.
Security Score Card
I’m not saying this is totally correct, it’s just an idea I have floating in my mind, you’re welcome to declare it insane. Here’s what I’m suggesting you track.
1) Number of staff
2) Number of “systems”
3) Lines of code
4) Number of security people
Here’s why though. Let’s think about measuring positives. We can’t measure what isn’t happening, but we can measure what we have and what is happening. If you work for a healthy company, numbers 1–3 will be increasing. What does your #4 look like? I bet in many organizations it’s flat and grossly understaffed. Good staff will help deal with security problems. If you have a good leader and solid staff, a lot of security problems get dealt with. Things like the NIST framework are what happens when you have competent staff who aren’t horribly overworked; you can’t force a framework on a broken organization, it just breaks it worse. Every organization is different, and no single framework or policy will work for all of them. The only way we tackle this stuff is by having competent, motivated staff.
The other really important thing this does is make you answer the questions. I bet a lot of organizations can’t answer #2 and #3. #1 is usually pretty easy (just ask LDAP), #2 is much harder, and #3 may be impossible for some. These look like easy things to measure, but just like in quantum physics, by measuring them we will change them, probably for the better.
If you have 2000 employees, 200 systems, 4 million lines of code, and 2 security people, that’s clearly a disaster waiting to happen. If you have 20, there may be hope. I have no idea what the proper ratios should be, if you’re willing to share ratios with me I’d love to start collecting data. As I said, I don’t have scientific proof behind this, it’s just something I suspect is true.
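To make the score card concrete, here’s a minimal sketch of tracking those four numbers over time and turning them into ratios. All the figures below are invented for illustration; nothing here is a recommended target.

```python
# Hypothetical security score card: the four numbers suggested above,
# tracked per year so the ratios (not the raw counts) tell the story.
# Every figure here is made up for illustration.

scorecard = {
    2015: {"staff": 1800, "systems": 150, "loc": 3_000_000, "security": 2},
    2016: {"staff": 2000, "systems": 200, "loc": 4_000_000, "security": 2},
}

for year, c in sorted(scorecard.items()):
    print(
        f"{year}: "
        f"{c['staff'] / c['security']:.0f} staff, "
        f"{c['systems'] / c['security']:.0f} systems, and "
        f"{c['loc'] / c['security']:,.0f} lines of code per security person"
    )
```

If the first three numbers climb every year while #4 stays flat, the ratios climb with them, and that trend line is the early warning the raw counts hide.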
I should probably add one more thing. What we measure not only needs to be true, it needs to be simple.
Send me your scorecard via Twitter
This title is a bit clickbaity, but it’s true, just not for the reason you think. Keep reading to see why.
If you’ve ever been involved in keeping a software product updated, I mean from the development side of things, you know it’s not a simple task. It’s nearly impossible really. The biggest problem is that even after you’ve tested it to death and gone out of your way to ensure the update is as small as possible, things break. Something always breaks.
If you’re using a typical computer, when something breaks, you sit down in front of it, type away on the keyboard, and you fix the problem. More often than not you just roll back the update and things go back to the way they used to be.
IoT is a totally different story. If you install an update and something goes wrong, you now have a very expensive paperweight. It’s usually very difficult to fix IoT devices when something goes wrong; many are installed in less than ideal places, and some may even be dangerous to get near.
This is why very few things do automatic updates. If you have automatic updates configured, things can just stop working one day. You’ll probably have no idea it’s coming; one day you wake up and your camera is bricked. Of course it’s just as likely things won’t break until the worst possible moment. We all know how Murphy’s Law works out.
This doesn’t even take into account the problems of secured updates, vendors going out of business, hardware going end of life, and devices that fail to update for some reason or other.
The law of truly large numbers
Let’s assume there are 2 million of a given device out there, and that automatic updates are enabled. If we guess that 10% won’t get updates for one reason or another, that means around 200,000 vulnerable devices will miss the first round of updates. And that’s one product. With IoT, the law of truly large numbers kicks in, and crazy things will happen because of it.
The law of truly large numbers tells us that if you have a large enough sample set, every crazy thing that can happen, will happen. Because of this law, the IoT can never be secured.
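The arithmetic above, and the law itself, can be sketched in a few lines. The fleet size and the 10% miss rate come from the example; the “one in a million” failure rate is an assumption picked only to show the effect.

```python
# Sketch: why rare events become near-certainties at IoT scale.
# For a fleet of N devices, an event with per-device probability p
# happens somewhere in the fleet with probability 1 - (1 - p)**N.
# Fleet size and rates below are illustrative assumptions.

fleet = 2_000_000          # devices in the field (from the example above)
miss_rate = 0.10           # fraction that miss an update round
freak_rate = 1e-6          # an assumed "one in a million" per-device failure

missed = int(fleet * miss_rate)
p_somewhere = 1 - (1 - freak_rate) ** fleet

print(f"{missed:,} devices miss the first round of updates")
print(f"P(a one-in-a-million event hits at least one device) = {p_somewhere:.3f}")
```

With those assumed numbers, the “one in a million” event hits at least one device with probability around 0.86; at this scale, rare stops meaning rare.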
Now, all this considered, there’s no reason to lose hope. It just means we have to take this into consideration. Today we don’t build systems that can handle a large number of crazy events. Once we take this into account, we can start to design systems that are robust against these problems. The way we develop these systems and products will need a fundamental change; the way we do things today doesn’t work in a large-numbers situation. It’s not a matter of maybe fixing this. It has to be fixed, someone will fix it, and the rewards will be substantial.
I had a discussion last week with some fellow security folks about how we can discuss security with normal people. If you pay attention to what’s going on, you know the security people and the non security people don’t really communicate well. We eventually made our way to comparing what we do to the door to door religious groups. They’re rarely seen in a positive light, are usually annoying, and only seem to show up when it’s most inconvenient. This got me thinking, we probably have more in common there than we want to admit, but there are also some lessons for us.
Firstly, nobody wants to talk to either group. The reasons are basically the same. People are already mostly happy with whatever choices they’ve made and don’t need someone showing up to mess with their plans. Do you enjoy being told you’re wrong? Even if you are wrong, you don’t want someone telling you this. At best you want to figure it out yourself but in reality you don’t care and will keep doing whatever you want. It’s part of being an irrational human. I’m right, you’re wrong, everything else is just pointless details.
Let’s assume you are certain that the message you have is really important. If you’re not telling people something useful, you’re wasting their time, and it doesn’t matter how important a message is: the audience has to want to hear it. Nobody likes having their time wasted. In this crazy election season, how often do you not just hang up the phone when a pollster calls? You know it’s just a big waste of time.
Most importantly though, you can’t act pretentious. If you think you’re better than whoever you’re talking to, even if you’re trying hard not to show it, they’ll know. Humans are amazing at understanding what another person is thinking by how they act; it’s how we managed to survive this long. Our monkey brains are really good at handling social interactions without us even knowing. How often do you talk to someone who is acting superior to you, and all you want to do is stop talking to them?
Kurt and Josh discuss prime numbers (probably getting a lot of it wrong), Samsung, passwords, National Cyber Security Awareness Month, and bathroom scales.
Kurt and Josh discuss the ORWL computer, crashing systemd with one line, NIST, and a security journal.
Sometimes when you plan a security effort, the expectation is that whatever you’re doing will make some outcome (something bad, probably) impossible. The goal of the security group is to keep the bad guys out, or keep the data in, or keep the servers patched, or find all the security bugs in the code. One way to look at this is that security is often in the business of preventing things from happening, such as making data exfiltration impossible. I’m here to tell you it’s impossible to make something impossible.
As you think about that statement for a bit, let me explain what’s happening here, and how we’re going to tie this back to security, business needs, and some common sense. We’ve all heard of the 80/20 rule; one form of it says the last 20% of the features are 80% of the cost. It’s a bit more nuanced than that if you really think about it. If your goal is impossible, it would be more accurate to say the last 1% of the features are 2000% of the cost. What’s really being described here is a curve that looks like this:
The thinking behind this came about while I was discussing DRM with someone. No matter what sort of DRM gets built, someone will break it. DRM is built by a person, which means, by definition, a smarter person can break it. It can’t be 100% effective; in some cases it’s not even 80%. But when a lot of people or groups think about DRM, the goal is to make acquiring the movie or music or whatever 100% impossible. They even go so far as to play the cat and mouse game constantly: every time a researcher manages to break the DRM, they fix it, the researcher breaks it again, they fix it again, and so on forever.
Here’s the question about the above graph though. Where is the break even point? Every project has a point of diminishing returns. A lot of security projects forget that if the cost of what you’re doing is greater than the cost of the thing you’re trying to protect, you’re wasting resources. Never forget that there is such a thing as negative value. Doing things that don’t matter often create negative value.
This is easiest to explain in the context of ransomware. If you’re spending $2000 to protect yourself from a ransomware infection that will cost you $300, that’s a bad investment. As crime inc. continues to evolve, I imagine they will keep a lot of this in mind: if they can keep their damage low, there won’t be much incentive for security spending, which helps them grow their business. That’s a topic for another day though.
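The ransomware break-even check can be written down as a simple expected-loss comparison. The incident rate here is a hypothetical knob; real-world estimates are much harder to come by.

```python
# Sketch of the break-even logic: a control that costs more than the
# expected loss it prevents is negative value. Numbers are illustrative.

def worth_it(control_cost, incident_cost, incidents_per_year=1.0):
    """True if the control costs less than the expected annual loss it prevents."""
    expected_loss = incident_cost * incidents_per_year
    return control_cost < expected_loss

# A $2000 control against a $300 ransomware incident, once a year:
print(worth_it(2000, 300))                         # prints False: bad investment
# Ten expected incidents a year changes the math:
print(worth_it(2000, 300, incidents_per_year=10))  # prints True
```

The interesting design point is the rate parameter: the same control flips from negative to positive value purely on how often you expect to be hit, which is exactly the lever crime inc. can pull by keeping individual ransoms small.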
The summary of all this is that perfect security doesn’t exist. It might never exist (never say never though). You have to accept good enough security. And more often than not, good enough is close enough to perfect that it gets the job done.