Episode 221 – Security, magic, and Face ID

Josh and Kurt talk about how to get started in security. It’s like the hero’s journey, but with security instead of magic. We then talk about what WebKit bringing Face ID and Touch ID to the browser will mean.

Show Notes

Episode 220 – Securing network time and IoT

Josh and Kurt talk about Network Time Security (NTS): how it works and what it means for the world (probably not very much). We also talk about Singapore’s Cybersecurity Labelling Scheme (CLS). It probably won’t do a lot in the short term, but we hope it’s a beacon of hope for the future.

Show Notes

Episode 218 – The past was a terrible place

Josh and Kurt talk about change. Specifically we discuss how the past was a terrible place. Never believe anyone who tells you it was better. Part of a career now is learning how to learn. The things you learn today won’t be useful skills in a few years. The future is always better than the past. Even in 2020.

Show Notes

A bug by any other name

This tweet from Jim Manico really has me thinking about why we like to consider security bugs special. There are a lot of tools on the market today to scan your GitHub repos, containers, operating systems, web pages … pick something, for security vulnerabilities. I’ve written a very, very long series about these scanners and why they’re generally terrible today but will get better, but only if we demand it. I’m now wondering why we want to consider security special. Why do we have an entire industry focused just on security bugs?

Let’s change the conversation a little bit. Rather than focus on security bugs, let’s ask the question: Which bugs in a given project should we care about?

There are of course bugs an attacker could use to compromise your system. There are also bugs that could result in data loss. Or bugs that could bring everything down. What about a bug that uses 10% more CPU? Every piece of software has bugs. All bugs are equal, but some bugs are more equal than others.

We are at a time in software history where we have decided security bugs are more equal than other bugs. This has created entire industries around scanning just for security problems. Unfortunately the end goal isn’t always to fix problems; the goal is often just to find problems, so problems are found (a LOT of problems). I think this is a pretty typical case of perverse incentives: you will always find what you measure. The pendulum will swing back in time; maybe we can help it swing a little faster.

Finding bugs

How do you find bugs in your software? There are probably two big sources. You have users, and those users report problems (and of course demand fixes). You can also have a bug show up during testing, ideally automated testing. Automated testing is one of the most important innovations in software development of the last few decades. The most important functionality of your project can be tested to make sure nothing obvious breaks when you make a change, like updating a dependency.
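To make that concrete, here is a minimal sketch of the kind of automated test being described, using pytest. The parse_config function and its behavior are hypothetical placeholders, not from any real project.

```python
import json

def parse_config(text):
    # Imagine this wraps a third-party JSON library you just upgraded.
    return json.loads(text)

def test_core_functionality_still_works():
    # Pin down the behavior you depend on, so a dependency update that
    # breaks it fails in CI instead of in production.
    config = parse_config('{"retries": 3}')
    assert config["retries"] == 3
```

The single assertion isn’t the point; the point is that the check runs automatically on every change.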

But users and testing don’t really find security bugs. A lot of security bugs involve really bizarre corner cases that no normal user would ever hit. If you’ve ever run a fuzzer you know what this means. You might have an API that expects positive integers as input. When you hand it the string "banana stand" it suddenly crashes, because a string isn’t an integer. "Why would anyone do this?" is the obvious question. Most automated testing doesn’t do extremely silly things like this.
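Here is a minimal sketch of that exact failure mode. The set_retry_count function and its contract are hypothetical, invented for illustration.

```python
def set_retry_count(value):
    # Callers are expected to pass a positive integer.
    count = int(value)  # raises ValueError when handed "banana stand"
    if count <= 0:
        raise ValueError("retry count must be positive")
    return count

# What users and typical test suites exercise:
assert set_retry_count(3) == 3

# What a fuzzer tries within its first few seconds:
for weird in ["banana stand", "", "-1", None, 10**100]:
    try:
        set_retry_count(weird)
    except (ValueError, TypeError):
        pass  # a clean error is fine; an unhandled crash is the bug report
```

A fuzzer is essentially this loop, automated and fed millions of generated inputs instead of five.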

Yet this is basically how security bugs work. Now imagine that when we send our API "banana stand", instead of crashing, it deletes all your files. That would be a very bad thing. Now imagine someone in Australia figures out they can delete all your files and decides it would be fun to do exactly that on a Friday night. Your weekend is ruined, either because you have to spend it restoring data or because you just lost your job.

And this is probably why we like to make a huge deal about security bugs.

Open Source

I also think the other half of this story is open source. A lot of these scanners are focused on the open source libraries you are using in your application. Initially the threat was open source itself. It’s written by miscreants who want to give things away for free! The first iteration of these scanners was all about finding hidden open source in your products so you could remove it.

But the siren song of free lured away too many developers. If we can’t scare you away from open source, maybe we can scare you into thinking the open source you have is inherently broken, you just don’t know it, and you need a tool to tell you what’s wrong with it, and we happen to have such a tool!

Security and fear were forged on the same anvil. The forward-thinking security folks know that fear is not a motivator, it is a de-motivator. Fear paralyzes people; it does not empower them. The vast majority of these scanners are focused on what can go wrong and why you should be careful with the libraries you are using. There is some truth in this, but we should be focusing on what can go right and how to make things better. There is more good than bad.

This is where I start to wonder why we focus only on security bugs. Is there a tool that tells us when a new version of a library has a 20% speed improvement? Or what about when a library fixes a bug you had to work around last year? What if a project fixed a bug that could cause data corruption? What about giving advice as to which library has a healthier community and better features? These are arguably more important than 98% of all security bugs affecting your project.

It’s easy to say "But those problems are hard" and I would agree. Solving hard problems is how the future is built. Telling me my project has 600 security vulnerabilities isn’t very useful. How many of those 600 actually matter? If my project is this insecure, how is it not literally on fire all of the time? Maybe I should just put a paper bag over my head and ignore the scan. Yes, that sounds easier.

Now, imagine the scanner telling me I should upgrade 3 libraries to get a 20% speed improvement, fix a data corruption bug, and get rid of a cross site scripting vulnerability. Now I’m listening! This is the tool we should be demanding.

The reality is many of these tools cost the price of a full-time employee and deliver the value of an intern. That’s probably too harsh. I apologize to the interns.

Episode 217 – How to tell your story with Travis Murdock

Josh and Kurt talk to Travis Murdock about how to tell your story. Travis explains how to talk to the press and how to tell our story in a way that helps get our message across and lets the reporter do their job better.

Show Notes

Episode 216 – Security didn’t find life on Venus

Josh and Kurt talk about how we talk about what we do in the context of life on Venus. We didn’t really discover life on Venus, we discovered a gas that could be created by life on Venus. The world didn’t hear that though. We have a similar communication problem in security. How often are your words misunderstood?

Show Notes

Episode 215 – Real security is boring

Josh and Kurt talk about attacking open source. How serious is the threat of developers being targeted or a git repo being watched for secret security fixes? The reality of it all is there are many layers in a security journey; the most important things you can do are also the least exciting.

Show Notes

Episode 213 – Security Signals: What are you telling the world

Josh and Kurt talk about how your actions can tell the world if you actually take security seriously. We frame the discussion in the context of Slack paying a very low bug bounty and discover some ways we can look at Slack and decide if they do indeed take our security very seriously.

Show Notes

We take security seriously, VERY SRSLY!

Every company tells you they take security seriously. Some even take it very seriously. But do they? I started to think about this because of a recent Slack bug. I think there are a lot of interesting things we can look at to decide if a company is taking security seriously or if the company thinks security is just a PR problem. I’m going to call the behavior we want to look at “security signals”.

On August 28, 2020 a remote code execution (RCE) bug was made public via the Slack bug bounty program. This bug really got me thinking about how a company or project signals its security maturity to the world. Unless you’ve been living under a rock, you know Slack has become one of the most popular communication platforms of the modern era, so one would assume Slack has some of the best security around. It turns out the security signals from Slack are pretty bad.

Let’s first focus on the Slack public bug bounty. Having a bug bounty is often a good sign that you consider security important. In the case of Slack, however, it’s a signal against them: their bounties are comically low.

To put this $1,500 critical bounty into context, Mozilla has a critical bounty of $10,000. Twitter has $20,000. I would expect bounties for something that is critical infrastructure to be much higher. The security signal here is that security isn’t taken very seriously. It’s likely someone managed to convince leadership they needed a bug bounty, but didn’t manage to convince anyone to properly staff and fund it. Not having a bug bounty is better than having one that is starved of resources.

The next thing I look for is a security landing page. In the case of Slack it’s https://slack.com/security. That’s a great address! The page doesn’t explicitly tell us they take security seriously so we’ll have to figure it out on our own. They do list their vast array of compliance certifications, but we all know compliance and security are not the same thing. The page has no instructions on how to report security vulnerabilities. No link to the bug bounty program. There are no security advisories. There are links to whitepapers. It looks like this page is owned by marketing. Once again we have a security signal that tells us the security team is not in charge.
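There is a lightweight convention that fixes exactly the "how do I report a vulnerability" gap described above: a security.txt file served at /.well-known/security.txt, standardized as RFC 9116. A minimal sketch, with placeholder addresses:

```
# https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://example.com/security/disclosure-policy
Acknowledgments: https://example.com/security/thanks
```

A security page that links to something like this, and to the bug bounty program, sends the opposite signal: the security team, not marketing, owns the page.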

Next I wanted to look at how many CVE IDs have been assigned to Slack products. I could only find one: CVE-2020-11498. If I search for this CVE on slack.com I can’t find it listed anywhere. Slack has a number of libraries and a desktop client. If they took security seriously, they would have CVE IDs for their client so their customers could track their risk. I tried to find security advisories on the Slack web site and found a page owned by their legal team that does link to the bug bounty program. On the plus side, that page does say "We take the security of your data very seriously at Slack." They’re not just serious, they’re VERY serious!

I could continue looking for signals from Slack, but I think it’s more useful to focus on how this can happen and what can be done about it. There have been plenty of people discussing why you shouldn’t let marketing, or PR, or legal, or some other department be in charge of your security. This is of course easier said than done. A great deal of these problems are the result of the security team not understanding how to properly communicate with the business people. Security is not self-justifying; it has to exist as part of the larger strategy.

Signaling Security

The single most important thing you need to learn is never, NEVER try to scare people. Nobody likes to be scared. If you try to pull the "if you don’t do this, monsters will eat the children" line, you’ve already lost. Fear does not work. If you have to, write "FEAR DOES NOT WORK" on your wall in huge red letters.

Now that we have that out of the way, let’s talk about what a bug bounty amount tells us. The idea behind a bug bounty isn’t to entice researchers to report issues to you instead of selling them on the black market. Anyone willing to sell exploits will always make more selling them illegally than you will ever pay. If you pay nothing, or a terribly small amount, you will still get reports for things people accidentally find.

The purpose of the bounty is to entice a security researcher into spending some time on your project. Good bugs take time, and the more time researchers spend, the more good bugs they can find. It’s a bit circular. Think of these people as contract workers for a project you don’t even know about yet. The other thing you have to understand is that if your bounty amounts are too low, it either means you couldn’t get a proper budget for your program, or you have so many critical bugs that a critical has no value. Neither is a good signal.

How do you get enough budget for a proper bug bounty program? You have to frame the bug bounty as something that can enable business. A bug bounty seen only as a cost will always have to battle for funding. A proper bug bounty program signals to the world that you take security seriously (you might not even have to tell everyone how seriously you take it). It gives customers a sense of calm and shows you’re more than just some compliance certificates. It gains you unexpected contractors who help you find and fix bugs you may never find on your own. If you run the numbers, it will be drastically more expensive to let one of your full-time people hunt for bugs, and you still have to pay them if they don’t find anything!
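Here is a back-of-envelope version of "run the numbers". Every figure is a made-up assumption for illustration, not real data.

```python
# All numbers below are hypothetical assumptions, not real figures.
fte_cost = 150_000       # assumed loaded yearly cost of a full-time bug hunter
avg_bounty = 3_000       # assumed average payout per valid report
fte_bugs_per_year = 15   # assumed output of one person working alone

bounty_reports = fte_cost / avg_bounty  # reports the same money could buy
print(f"bounty program: ~{bounty_reports:.0f} valid reports paid for")
print(f"full-time hunter: ~{fte_bugs_per_year} findings, salaried either way")
```

The exact numbers don’t matter; the structural point is that a bounty only costs money when someone actually delivers a bug.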

What about having a security contact page? It’s a great tool for marketing to convince security-minded customers how seriously you take security! This again comes back to the business value of having a proper security landing page. The answer will be very cultural. If your business values technical users, you use this page to show how smart your security people are. If you value transparency, you show off how open and honest your security is. This is one of those places where you have to understand what your company considers business value. If you don’t know this, that’s going to be your single biggest problem to solve before worrying about any other security problems. Make your security landing page reflect the business strategy and culture of your company.

What about CVE IDs? Should your company issue CVE IDs for your products? I would reframe this one a little bit. CVE IDs are great, but they’re really the second step in publishing security advisories. Advisories first, CVEs second. Security advisories are a way to communicate important security details to customers. Advisories show customers you have a competent security team on the inside. Good security advisories are hard. If you have zero, why is that? No product has zero security vulnerabilities.

There is an external value to having a team responsible for advisories. If you have a product, your customers are running 3rd party scanners against it. It’s the cool new thing to do. Do you know what those scans look like? Are you staying ahead of the scanners? Are you publishing details for customers about what you’re fixing? Security advisories aren’t the goal; the goal is a product with someone keeping it ahead of industry-standard security. And the industry standard now says security advisories and 3rd party scanners are table stakes.

This post is already pretty long and I don’t want to make it any longer; every one of these paragraphs could be a series of blog posts on its own. The real takeaway here is that security needs to work with the business. Show how you make the product better and help the bottom line. If security is seen only as a cost, it will always be a battle to get the funding needed. It’s not up to the business leaders to figure out that security can help the bottom line; it’s up to security leadership to show it. If you just expect everyone else to make security a big deal, you’re going to end up with marketing owning your security landing page.

And I want to end on an important concept that is often overlooked. If there is something you do and you can’t prove it adds to the bottom line, you need to stop doing it. Focus on the work that matters, eschew the work that doesn’t. That’s how you take security seriously, not by putting it on a web site.