Is there a future view that isn’t a security dystopia?

I recently finished reading the book Ghost Fleet. It’s not a bad read if you’re into what cyberwar could look like, but it’s not great either; I won’t suggest it as the book of the summer. The thing I keep coming back to is that I’ve yet to see a book set in the future, with a focus on technology, that isn’t a dystopian warning. Ghost Fleet is no different.

My favorite part was how everyone knew the technology was totally pwnt, yet everyone still used it. There were various drones, smart display glasses, AI to control boats, rockets, even a space laser (which every book needs). This reminds me of today to a certain degree. We all use web sites we know will be hacked. We know our identities have been stolen. We know our phones aren’t secure. Our TVs record our conversations. You can even get doorbells that stream a video feed. We love this technology even though it’s either already hacked or will be soon. We know it and we don’t care; we just keep buying broken phones, TVs, blenders, cars, anything that comes with WiFi!

Disregarding the fact that we are probably already living in the dystopian future, it really made me wonder whether there are any examples of a future that isn’t a security nightmare. You could maybe make the argument that Star Trek is our hopeful future, but that’s pretty old these days. And even then, the android took over the ship more times than I’d be comfortable with. I think it’s safe to say their security required everyone to be a decent human. If that’s our only solution, we’re pretty screwed.

Most everything I come across is pretty bleak, and I get why. Where is our escape from all the insecure devices we pretend we hate? The only number growing faster than the number of connected devices is the number of security flaws in those devices. There aren’t even bad ideas to fix this stuff; there’s just nothing. The thing about bad ideas is they can often be fixed. A smart person can take a bad idea and turn it into a good idea. Bad ideas are at least something to build on. I don’t see any real ideas to fix these devices. We have nothing to build on. Nothing is dangerous. No matter how many times you improve it, it’s still nothing. I have no space laser, so no matter how many ideas I have to make it better, I still won’t have a space laser (if anyone has one I could maybe borrow, be sure to let me know).

Back to the idea of future technology. Are there any real examples of a future based heavily on technology that isn’t a horrible place? This worries me. One of the best parts of science fiction is getting to dream about a future that’s better than the present. Like that computer on the spaceship in 2001, that thing was awesome! It had pretty good security too … sort of.

So here is the question we should all think about. At what point do connected devices get bad enough that people stop buying them? We’re nowhere near that point today. Will we ever reach it? Maybe people will just accept that their dishwasher will send spam when it’s not running and their toaster will record their kitchen conversations. I really want to live in a nice future, one where our biggest threat is an android taken over by a malevolent intelligence, not one where my biggest threat is my doorbell.

Do you know of any non-dystopian predictions? Let me know: @joshbressers

Regulation can fix security, except you can’t regulate security

Every time I start a discussion about how we can solve some of our security problems, it seems like professional organizations and regulation are where things end up. I think regulations and professional organizations can fix a lot of problems in an industry, but I’m not sure they work for security. First let’s talk about why regulation usually works, then why it won’t work for security.

What is regulation?
You may not know it, but you deal with regulated industries every day. The food we eat, the cars we drive, the buildings we use, the roads, our water, the products we buy, phones, internet, banks; there are literally too many to list. The reasons for the regulation vary greatly, but at the end of the day it’s a way to use laws to protect society. It doesn’t always directly protect people; sometimes it protects the government, or maybe even a giant corporation, but the basic idea is that because of the regulation, society is a better place. There are plenty of corner cases, but for now let’s just assume the goal is to make the world a better place.

Refrigerator regulation
One of my favorite stories about regulation involves refrigerator doors. A long time ago the door of a refrigerator would lock from the outside. If someone found themselves on the inside with a closed door, they couldn’t get out. Given that a refrigerator is designed to be airtight, one wouldn’t last very long on the inside. The government decided to do something about this and told the companies that made refrigerators there had to be a way to get out if you’re stuck inside. Of course this was seen as impossible, and it was expected most companies would have to go out of business or stop making refrigerators. Given that a substantial percentage of the population now owns refrigerators, it’s safe to say that didn’t happen. The solution was to use magnets to hold the door shut. Now the thought of a locking door seems pretty silly, especially when the solution was elegant and better in nearly every way.

Can we regulate cybersecurity?
The short answer is no. It can’t be done. I do hate claiming something can’t be done; someday I might be wrong. I imagine there will be some form of regulation eventually, but it probably won’t really work. Let’s use the last financial crisis to explain this. The financial industry has a lot of regulation, but it also has a lot of possibility. What I mean is that existing regulation mostly covers bad things that were done in the past; it’s nearly impossible to regulate the future, due to the nature of regulation. So here’s the thing. How many people went to jail over the last financial crisis? Not many. I’d bet that in a lot of cases, while some people were certainly horrible humans, they weren’t breaking any laws. This will be the story of security regulation. We can create rules about what happened in the past, but technology, bad guys, and people move very quickly in this space. If you regulated the industry to prevent a famous breach from a few years ago (there are many to choose from), by now the whole technology landscape has changed so much that many of those rules wouldn’t even apply today. This gets even crazier when you think about the brand new technology being invented every day.

Modern computer systems are Turing complete
A refrigerator has one door. One door that the industry didn’t think it could fix. A modern IT system can perform an infinite number of operations. You can’t regulate a machine that can literally do anything. It would be like demanding that the front door of the fridge never lock when the fridge has infinite space inside. If you can’t find the front door, and there are millions of other doors, some of which don’t open, it’s not a useful regulation.

This is our challenge. We have machines that can literally do anything, and we have to make them secure. If there are infinite operations, there are by definition infinite security problems. I know that’s a bit over-dramatic, but the numbers are big enough that they’re basically infinity.

The ideas that generally come up revolve around hiring security professionals, or training staff, or getting tools to lock things down, or shipping better defaults. None of these things will hurt, but none really works either. Even if you have the best staff in the world, you have to work with vendors who don’t. Even if you have the best policies and tools, your developers and sysadmins will make silly mistakes. Even with the best possible defaults, one little error can undo everything.

What can we do?
I’m not suggesting we should curl up in the corner and weep (I’m also not saying not to). While weeping can be less dangerous than letting the new guy configure the server, it’s not very helpful long term. I’m also not suggesting that tools and training and staff are wastes of time and money; they have value up to a point. It’s sort of like taking a CPR course. You can’t do brain surgery, but you can possibly save a life in an emergency. The real fix is going to come from technology and processes that don’t exist yet. Cybersecurity is a new kind of problem, and we can’t use old models to understand it. We need new models, tools, and ideas. They don’t exist yet, but they will someday. Go invent them; I’m impatient and don’t want to wait.

If you have any ideas, let me know: @joshbressers

Thoughts on our security bubble

Last week I spent time with a lot of normal people. Well, they were all computer folks, but not the sort one would find in a typical security circle. It really got me thinking about the bubble we live in as security people.

There are a lot of things we take for granted. I can reference Dunning-Kruger and “turtles all the way down” and not have to explain myself. If I mention a buffer overflow, or almost any security term, I never have to explain what’s going on. Even some of the more obscure technologies, like container scanners and SCAP, need only a few words of explanation. It’s easy to talk to security people; at least, it’s easy for security people to talk to other security people.

Sometimes it’s good to get out of your comfort zone, though. Last week I spent much of my time well outside groups I was comfortable with, and it’s a good thing for us to do. This is a big problem the security universe suffers from: a lot of us never really get out there and see what it’s like. I always assume everyone else knows a lot about security. They don’t. They usually don’t even know a little about security. This puts us in a place where we think everyone else is dumb, and they think we’re idiots. Do you listen to someone who appears to be a smug jerk? Of course not; nobody does. This is one of the reasons it can be hard to get our message across.

If we want people to listen to us, they have to trust us. If we want people to trust us, we have to make them understand us. If we want people to understand us, we have to understand them first. That bit of circular Yoda logic sounds insane, but it really is true. There’s nothing worse than trying to help someone only to have them ignore you, or worse, do the opposite just because they can.

So here’s what I want to do. I have some homework for you, assuming you made it this far, which you probably did if you’re reading this. Go talk to some non-security people. Don’t try to educate them on anything; just listen to what they have to say. Even if they’re wrong, especially if they’re wrong, don’t correct them. Just listen, and see what you can learn. I bet it will be something amazing.

Let me know what you learn: @joshbressers

Security will fix itself, eventually

If you’re in the security industry these days, things often don’t look very good. Everywhere you look, it can feel like everything is on fire. The joke is that there are two types of companies: those that know they’ve been hacked and those that don’t. The world of devices looks even worse. They’re all running old software, most will never see updates, most of the people building them don’t know or care about proper security, and most of the people buying them don’t know any of this is a problem.

I heard a TED talk by Al Gore called The Case for Optimism on Climate Change, and it made me think of security in some ways. The basics of the talk are that things are getting better; we’re surpassing many of the goals set for things like renewable energy. A few years ago the idea of renewable energy beating out something like coal seemed far-fetched.

That reminded me of the current state of security. It’s sometimes hard to see a future that’s very bright. For every problem that gets fixed, at least two new ones show up. The thing that gives me optimism, though, is the same basic idea as with climate change: it has to get better, because there is no alternative.

If we look back at renewable energy, the biggest force keeping it out of the market even five years ago was cost. It was really expensive to build and deploy things like solar panels. Today it’s the same price as conventional power, or cheaper in some instances.

What happened?

The market happened. As new technology emerges and develops, it gets cheaper. This is one of the amazing things about emerging technology; entrenched technology, by its nature, generally doesn’t change price drastically. Solar power is getting better, and it’s not done yet: it will keep improving while costing less. The day will come when we think about current power generation the way we think about using horses for transportation.

Now let’s think about security.

If you want secure devices and a secure infrastructure today, it’s going to cost a fortune. You’re talking about very highly skilled staff and extremely expensive hardware and software (assuming you can even get them in some cases). Security is an added cost in many cases, so lots of producers skip it. Bad security has a cost too, though. Today bad security is generally cheaper than good security. We need to flip this around: good security needs to be cheaper than bad security.

The future.

Here’s my prediction, though. In the future, good security will be cheaper to build, deploy, and run than bad security. This sounds completely insane with today’s technology. A statement like that sounds like some kook ten years ago telling everyone solar power is our future. Ten years ago solar wasn’t a serious thing; today it is. Our challenge is figuring out what the new security future will look like. We don’t really know yet. We know we can’t train our way out of this, and most existing technology is a band-aid at best. If I had to guess I’d reach for the worn-out “Artificial Intelligence will save us all,” but who knows what the future will bring. Thanks to Al Gore, I’m now more optimistic that things will get better. I’m impatient, though; I don’t want to wait for the future, I want it now! So all you smart folks, do me a favor and start inventing the future.

What do you think? Leave your comments on twitter: @joshbressers

Security isn’t a feature, it’s a part of everything

Almost every industry goes through a time when novel new features are sold as some sort of add-on or extra product. Remember needing to buy a TCP stack? What about having to buy a sound card for your computer, or a CD drive? (Does anyone even know what a CD is anymore?) Did you know that web browsers used to cost money? Times were crazy.

Today there is a lot of security that’s some sort of add-on, or maybe a separate product. Some of this is because it’s a clever idea; some of it exists because people are willing to pay for it even though it should be included. No matter what we’re talking about, there is always a march toward commoditization. This is how Linux took over the universe: the operating system is a commodity now, and it’s all about how you put things together using things like containers and devops and cloud.

Now let’s think about security. Of all the things going on, all the products out there, all the methodologies, security is always the special snowflake. For being so special, you’d think we could get more right. If everything were fine, the Red Team wouldn’t win. Every. Single. Time.

The reality is that until we stop treating security like some sort of special add-on, we’re not going to see any real improvement. Think about any product you use: there are always things that are just an expected part of it. Security should fall into this category. Imagine if your car didn’t come with locks. Or if it had locks, but you had to cut your own keys before you could use them. What if every safe shipped with the same combination, and if you wanted a new one you had to pay for it? There are a lot of things we just expect because they make sense.

I’m sure you get the idea I’m shooting for here. Today we treat security like something special. You have to buy a security solution if you want to be secure. Or you have to configure your product a certain way if you want it secure. If we want to really start solving security problems, we have to make sure security isn’t something special we talk about later, or plan to add in version two. It has to just be a part of everything. There shouldn’t be secure options; all the options need to be what we would call “secure” today. The days of security as an optional requirement are long gone. Remember when we thought those old SSL algorithms could just stick around forever? Nobody thinks that anymore.
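
Here’s a small, hypothetical example of what “security as an option” looks like in practice (the server name and paths are made up). In an nginx-style TLS setup, the secure behavior only exists because someone remembered to configure it:

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/pki/example.com.crt;
        ssl_certificate_key /etc/pki/example.com.key;
        # Nothing forces this next line to exist. Leave it out and older,
        # weaker protocols can quietly stay enabled.
        ssl_protocols TLSv1.2;
    }

If security really were part of everything, that last directive wouldn’t need to exist; the safe behavior would be the only behavior.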

How are we going to fix this? That’s the real trick. It’s easy to talk about demanding security and voting with your pocketbook, but the reality is that this isn’t really possible today. Security isn’t usually a big differentiator. If we expect security to just be part of everything, we also can’t expect anyone to shop for it as a feature. How do we ensure there is demand for something that is by definition a secondary requirement? How do we get developers to care about something that isn’t part of a requirement? How do we get organizations to pay for something that doesn’t generate revenue?

There are some groups trying to do the right thing here, and I think almost everyone is starting to understand that security isn’t a feature. Of course, some interest and growing understanding don’t mean everything will be fixed quickly or easily. We still have a long way to go, and it’s possible everything could go off the rails. The only thing harder than security is planning for security 🙂

Do you think you know how to fix this mess? Impress me with your ideas: @joshbressers

Trusting, Trusting Trust

A long time ago Ken Thompson wrote something called Reflections on Trusting Trust. If you’ve never read it, go read it right now. It’s short, and it’s something everyone needs to understand. The paper explains how Ken backdoored the compiler on a UNIX system in such a way that it was extremely hard to get rid of the backdoors (yes, more than one). His conclusion was that you can only trust code you wrote yourself. Given the nature of the world today, that’s no longer an option.

Every now and then someone asks me about Debian’s Reproducible Builds. There are other groups working on similar things, but these folks seem to be the furthest along. I want to make clear right away that this work is really cool and super important, but not exactly for the reasons people assume. The Debian page does a good job of explaining what’s going on, but I think it’s easy to jump to some false conclusions on this one.

Firstly, the point of a reproducible build is to allow two different systems to build the exact same binary. This tells us that the resulting binary was not tampered with. It does not tell us the compiler is trustworthy, or that the thing we built is trustworthy. It tells us only that the system used to build it was clean and the binary wasn’t meddled with before it got to you.
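
As a rough sketch of what “reproducible” means in practice (the build commands and hash here are invented for illustration), two independent parties build the same source and compare results:

$ make clean && make        # builder A, on one machine
$ sha256sum app
4be0a91e…  app

$ make clean && make        # builder B, somewhere else entirely
$ sha256sum app
4be0a91e…  app

If the hashes match, the binary is exactly what that source produces. It says nothing about whether the source, or the compiler, deserves your trust.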

A lot of people assume a reproducible build means there can’t be a backdoor in the binary. There can be, due to how the supply chain works. Let’s break this down into a few stages. In the universe of software creation and distribution there are literally thousands to millions of steps happening, from each commit, to releases, to builds, to consumption. It’s pretty wild. We’ll keep it high level.

Here are the places I will talk about. Each one of these could be a book, but I’ll keep it short on purpose.

  1. Development: Creation of the code in question
  2. Release: Sending the code out into the world
  3. Build: Turning the code into a binary
  4. Compose: Including the binary in some larger project
  5. Consumption: Using the binary to do something useful

Development
The development stage of anything is possibly the hardest to control. We have reached a point in how we build software that development is now really fast; a healthy project can see hundreds or thousands of commits every day. Even with code reviews and sign-offs, bugs can sneak in. A properly managed project will catch egregious attempts to insert a backdoor, but a subtle one could still slip through.

Release
This is the stage where the project in question cuts a release and puts it somewhere it can be downloaded. A good project will include a detached signature, which almost nobody checks. This stage of the trust chain has been attacked in the past; there are many instances of hacked mirrors serving up backdoored content. The detached signature ensures the release is trustworthy even if the mirror isn’t. Trust is mostly solved at this stage, which is why those signatures are so important.
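
Checking one of those signatures is a one-liner; the file names here are hypothetical, and you also need the project’s public key from a source you trust:

$ gpg --verify project-1.0.tar.gz.asc project-1.0.tar.gz
gpg: Good signature from "Project Release Key <releases@example.com>"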

Build
This is the stage where we take the source code and turn it into a binary. This is the step that a reproducible build injects trust into; without it, there was no real trust here. It’s still sort of complicated, though. If you’ve ever looked at the rules that trigger these builds, it wouldn’t be very hard to violate trust there, so it’s not bulletproof. It is a step in the right direction though.
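
To see why those rules matter, here’s a hypothetical Makefile. Its output could be bit-for-bit reproducible on every machine that runs it, and the result would still deserve zero trust:

    # Reproducible? Quite possibly. Trustworthy? No.
    app: app.c helper.c
            cc -o app app.c helper.c

    # The build rule itself fetches code from somewhere you don't control.
    helper.c:
            curl -s https://example.com/helper.c > helper.c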

Compose
This step is where we put a bunch of binaries together to make something useful. It’s pretty rare for a single build to output the end result. I won’t say it never happens, but it’s a bit outside what we’re worried about, so let’s not dwell on it. The threat we see during this stage is the various libraries you bundle with your application. Do you know where they came from? Do they have some level of trust built in? At this point you could have a totally trustworthy chain, but if you include a single bad library, it can undo everything. If you want to be as diligent as possible, you won’t ship anything built by a third party. If you build it all yourself, you can ensure some level of trust up to this point. Of course, building everything yourself generally isn’t practical. I think this is the next stage where we’ll end up adding more trust. Various code scanners are trying to help here.
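
One small thing you can do here today is pin your dependencies to known hashes, so you at least notice when a library changes out from under you. A hypothetical pip example (the package name and hash are invented):

    # requirements.txt: the install fails if the download doesn't match
    somelib==1.4.2 --hash=sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08

$ pip install --require-hashes -r requirements.txt

This doesn’t tell you the library is good, only that it’s the same library you vetted last time.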

Consumption
Here is where whatever you put together gets used. In general nobody is looking for software; they want a solution to a problem they have. This stage can be the most complex and dangerous, though. Even if you have done everything perfectly up to here, if whoever does the deployment makes a mistake, it can open up substantial security problems. Better management tools can help this step a lot.

The point of this article isn’t to scare anyone (even though it is pretty scary if you really think about it). The real point is to stress that nobody can do this alone. There was once a time when a single group could plausibly try to own their entire development stack, but those times are long gone. What you need to do is look at the above steps and decide where you want to draw your line. Do you have a supplier you can trust all the way to consumption? Do you only trust them for development and release? If you can’t draw that line, you shouldn’t be using that supplier. In most cases you have to draw the line at compose. If you don’t trust what your supplier does beneath that stage, you need a new supplier; demanding they give you reproducible builds isn’t going to help you if they could backdoor things during development or release. It’s the old saying: Trust, but verify.

Let me know what you think. I’m @joshbressers on Twitter.

Can we train our way out of security flaws?

I had a discussion about training developers with some people I work with who are smarter than I am. The usual training suggestions came up, but at the end of the day, and this will no doubt enrage some of you, we can’t train developers to write secure code.

It’s OK, my twitter handle is @joshbressers, go tell me how dumb I am, I can handle it.

So anyhow, training. It’s a great idea in theory. It works in many instances, but security isn’t one of them. If you look at where training is really successful, it’s for things like how to use a new device, or how to work with a bit of software. Those are single-purpose items; that’s the trick. If you have a device that really only does one thing, you can train a person how to use it; it has a finite scope. Writing software has no such scope. To quote myself from this discussion:

You have a Turing complete creature, using a Turing complete machine, writing in a Turing complete language, you’re going to end up with Turing complete bugs.

The problem with training in this situation is that you can’t train for infinite permutations. By its very definition, training can only cover a finite amount of content. Programming by definition requires you to draw on an infinite amount of content. The two are mutually exclusive.

Since you’ve made it this far, let’s come to an understanding. Firstly, training, even training in how to write software, is not a waste of time. Even though you can’t train someone to write perfectly secure software, you can teach them to understand the problem (or a subset of it). The tech industry is notorious for seeing everything as all or nothing. It’s really a sliding scale.

So what’s the point?

My thinking here is about how we can approach the challenge differently. Sometimes you have to understand both the problem and the tools you have before you can find better solutions. We love to worry about teaching everyone to be more secure, when in reality it’s all about many layers, with small bits of security in each spot.

I hate car analogies, but this time it sort of makes sense.

We don’t proclaim that the way to stop people getting killed in road accidents is to train them to be better drivers. In fact I’ve never heard anyone claim this is the solution. We have rules that dictate how the road is to be used (which humans ignore). We have cars with lots of safety features (which humans love to disable). We have patrols on the road to ensure the rules are being followed. We have safety built into lots of roads, like guard rails and rumble strips. At the end of the day, even with layers of safety built in, there are accidents, lots of accidents, and almost no calls for more training.

You know what the current talk about making things safer is? Self-driving cars. It’s ironic that software may be the solution to human safety. The point, though, is that every system reaches a stage where the best you can ever do is marginal improvement. Cars are there; software is there. If we want to see substantial change, we need new technology that changes everything.

In the meantime, we can continue to add layers of safety for software; this is where most effort seems to go today. We can leverage our existing knowledge and understanding of the problems to make things marginally better. Some of this could be training, some of this will be technology. What we really need to do, though, is figure out what’s next.

Just as humans are terrible drivers, we are terrible developers. We won’t fix software security with training any more than we will fix auto safety with training. Of course there are basic rules everyone needs to understand, which is why some training is useful. But we’re not going to see any significant security improvements without some sort of new technology breakthrough. I don’t know what that is; nobody does yet. What is self-driving software development going to look like?

Let me know what you think. I’m @joshbressers on Twitter.

Software end of life matters!

Anytime you work on a software project, the big events are always new releases. We love to get our update and see what sort of new and exciting things have been added. New versions are exciting; they’re the result of months or years of hard work. Who doesn’t love to talk about the cool new things going on?

There’s a side of software that rarely gets talked about though, and honestly, in the past it just wasn’t all that important or exciting: end of life. When is it time to kill off old versions, or sometimes even an entire project? When you do, what happens to the people using it? These are hard things to decide; there usually aren’t good answers. It’s just not a topic we’re good at yet.

I bring this up now because apparently Apple has decided that QuickTime on Windows is no longer a thing. I think everyone can agree that expecting users to find some obscure message on the Internet telling them to uninstall something is pretty far-fetched.

The conversation is way bigger than just Apple though. Google is going to brick some old Nest hardware. What about all those old tablets that still work but have no security updates? What about all those Windows XP machines still out there? I bet there are people still using Windows 95!

In some instances, the software and hardware can be decoupled. If you’re running XP you can probably upgrade to something slightly better (maybe); generally speaking, you have some level of control. If you think about tablets or IoT-style devices, though, the software and hardware are basically the same thing, and the software will likely reach end of life before the hardware stops working. So what does that mean? In the case of pure software, if you need it to get work done, you’re not going to uninstall it. It’s all really complex, unfortunately, which is why nobody has figured this out yet.

In the past, you could keep most “hardware” working almost forever. There are cars out there nearly 100 years old; they still work and can be fixed. That’s crazy. The thought of 100-year-old software should frighten you to your core. They may have stopped making your washing machine years ago, but it still works and you can get it fixed. We’ve all seen the power tools our grandfathers used.

Now what happens when we decide to connect something to the Internet? We’ve chained the hardware to the software. Software has a defined lifecycle: it is born, it lives, it reaches end of life. Physical goods do not have a predetermined end of life (I know, it’s complicated; let’s keep it simple): they break, and you get a new one. If we add software to this mix, software that creates a problem once it hits the end-of-life stage, what do we do? There are really two options.

1) End the life of the hardware (brick it).
2) Let the hardware continue to run with the known-bad software.

Neither is ideal. Now, there are some devices where you could just cut off features. A refrigerator, for example: instead of knowing when to order more pickles, it reverts to only keeping things cold. While this could create confusion in the pickle industry, at least you still have a working device. Other things would be trickier. An internet-connected smart house isn’t very useful if the things in it can’t talk to each other. A tablet without internet isn’t good for much.

I don’t have any answers, just questions. We’re still trying to sort out what this all means, I suspect. If you think you know the answer, I imagine you don’t understand the question. This one is turtles all the way down.

What do you think? Tell me: @joshbressers

What happened with Badlock?

Unless you live under a rock, you’ve heard of the Badlock security issue. It went public on April 12. Then things got weird.

I wrote about this a bit in a previous post. I mentioned there that this had better be good, and that if it wasn’t, people would get grumpy. People got grumpy.

The thing is, this is a nice security flaw. Whoever found it is clearly bright, and if you look at the Samba patchset, it wasn’t trivial to fix. Hats off to those two groups.

$ diffstat -s samba-4.4.0-security-2016-04-12-final.patch 

 227 files changed, 14582 insertions(+), 5037 deletions(-)

Here’s the thing though. It wasn’t nearly as good as the hype claimed. It probably couldn’t ever be as good as the hype claimed. This is like waiting for a new Star Wars movie. You have memories from being a child and watching the first few. They were like magic back then. Nothing that ever comes out again will be as good. Your brain has created ideas and memories that are too amazing to even describe. Nothing can ever beat the reality you built in your mind.

Badlock is a similar concept.

Humans are squishy, irrational creatures. When we know something is coming, one of two things happens. We imagine the most amazing thing ever, which nothing can live up to (the end result is disappointment). Or we imagine something stupid, which almost anything will be better than (the end result is pleasant surprise).

I think most of us were expecting the most amazing thing ever. We had weeks to imagine the worst possible security flaw that could affect Samba and Windows, and most of us can imagine some pretty amazing things. We didn’t get that though. We didn’t get amazing. We got a pretty good security flaw, but not one that will change the world. We expected amazing, we got OK, and now we’re angry. If you look at Twitter, the poor guy who discovered this is probably having a bad day. Honestly, nothing could have lived up to the elevated expectations that were set.

All that said, I do think announcing this weeks in advance created the atmosphere. If it had stayed quiet until today, we would have been impressed, even with the name. Hype isn’t something you can usually control. Some try, but by its very nature things get out of hand quickly and easily.

I’ll leave you with two bits of wisdom you should remember.

  1. Name your pets, not your security flaws
  2. Never over-hype security. Always underpromise and overdeliver.

What do you think? Tell me: @joshbressers

Cybersecurity education isn’t good, nobody is shocked

There was a news story published last week about the almost total lack of cybersecurity attention in undergraduate education. Most people in the security industry won’t be surprised by this. In the majority of cases when the security folks have to talk to developers, there is a clear lack of understanding about security.

Every now and then I run across someone claiming that our training and education are going great. Sometimes I believe them for a few seconds, then I remember the state of things. Here’s the thing: while there are a lot of good training and education opportunities, the ratio of competent security people to developers is without doubt going down. Software engineering positions are growing at more than double the rate of other positions, and it’s significantly harder to educate a security person; the math says there’s a problem here (and that disregards the fact that as an industry we do a horrible job of passing on knowledge).

While it’s clear students don’t care about security, the question is: should they?

It’s always easy to pull out an analogy here, comparing this to car safety, or maybe architects vs. civil engineers. Those analogies never really work, though; the rules are just too different. The fundamental problem boils down to the fact that a 12-year-old kid in his basement has access to the exact same tools and technology as the guy working on his PhD at MIT. I’m not sure there has ever been an industry in a similar situation. Generally those in large organizations have had access to significant resources that a normal person doesn’t, like whatever it takes to build a giant rocket, or a bridge.

Here is what we need to think about.

Would we expect a kid learning how to build a game on his dad’s computer to also learn security? If I were that kid, I would say no. I want to build a game; security sounds dumb.

What about a college kid interested in computer algorithms? Security sounds uninteresting and probably like a waste of time. Remember when they made you take that phys ed class and all the jocks laughed at you, while you whispered to yourself about how they’d all be working at a gas station someday? Yeah, that’s us now.

Let’s assume that normal people don’t care about security and don’t want to care about security. What does that mean?

The simple answer would be to “fix the tools,” but that’s sort of chicken and egg. Developers build their own tools at a rather impressive speed these days; you can’t really secure that stuff.

What if we sandbox everything? That really only protects the underlying system; most everything interesting these days is in the database, and you can still steal all of that from inside a sandbox.

Maybe we could … NO, just stop.

So how can we fix this?
We can’t.

It’s not that the problems are unfixable; it’s that we don’t understand them well enough. My best comparison here is the futurists who wondered how New York could possibly deal with all the horse manure if the city kept growing. Clearly they were thinking only in the context of what was available to them at the time. We think this way too. It’s not that we’re dumb; I’m certain we simply don’t understand the problems yet. The problems aren’t insecure code or bad tools. It’s something more fundamental than that. Did we expect the people cleaning up after the horses to solve the manure problem?

If we start to think about the fundamentals, what’s the level below our current development models? In the example above it was really about transportation, not horses, but horses were what everyone obsessed over. Our problems aren’t really developers, code, and education; it’s something more fundamental. What is it, though? I don’t know.

Do you think you know? Tell me: @joshbressers