Episode 32 – Gambling as a Service

Josh and Kurt discuss random numbers, a lot. Also slot machines, gambling, and dice.

Show Notes

Show tags:

  • #random
  • #prng

Episode 31 – XML is never the solution

Josh and Kurt discuss door locks, Ikea, chair testing sounds, electrical safety, autonomous cars, and XML vs JSON.

Show Notes

Everything you know about security is wrong, stop protecting your empire!

Last week I kept running into old school people trying to justify why something that made sense in the past still makes sense today. Usually I ignore these sorts of statements, but I'm seeing them often enough that it's time to write something up. We're in the middle of disruptive change. That means the way security used to work doesn't work anymore (some people think it does), and in the near future it won't work at all. In some instances it will actually be harmful, if it isn't already.


The real reason I'm writing this up is that there are really two types of leaders: those who lead to inspire change, and those who build empires. For empire builders, change is the enemy; they don't welcome the new, disrupted future. Here's a list of the four things I ran into this week that gave me heartburn.


  • You need AV
  • You have to give up usability for security
  • Lock it all down then slowly open things up
  • Firewall everything


Let's start with AV. A long time ago everyone installed an antivirus application. It's just what you did, sort of like taking your vitamins. Most people can't say why; they just know that if they didn't, everyone would think they're weird. Here's the question for you to think about though: how many times did your AV actually catch something? I bet the answer is very, very low, like number-of-times-you've-seen-Bigfoot low. And how many times have you seen AV fail to stop malware? Probably more times than you've seen Bigfoot. Today malware is big business; its authors likely outspend the AV companies on R&D. You probably have some control in that phone-book-sized policy guide that says you need AV. That control is quite literally wasting your time and money. It would be in your best interest to get it changed.


Usability vs. security is one of my favorite topics these days. Security lost. It's not that usability won; it's that there was never really a battle. Many of us security types don't realize that, though. We believe there is some eternal struggle between security and usability, where we make reasonable and sound tradeoffs between improving the security of a system and adding a text field here or an extra button there. What really happened is the designers asked to use the bathroom and snuck out through the window. We're still waiting for them to come back and discuss where to add in all our great ideas on security.


Another fan favorite is the idea that the best way to improve network security is to lock everything down, then slowly open things up as devices try to get out. See the above point about usability. If you do this, people just work around you. They'll use their own devices with their own network access, or just work from home. I've seen employees using the open wifi of the coffee shop downstairs. Don't lock things down; solve problems that matter. If you think this is a neat idea, you're probably the single biggest security threat your organization has today, so at least identifying the problem won't take long.


And lastly, let's talk about the trusty old firewall. Firewalls are the friend who shows up to help you move, drinks all your beer instead of helping, then tells you they helped because now you have less stuff to move. I won't say they have no value; they're just not great security features anymore. Most network traffic is encrypted (or should be), and users have their own phones and tablets connecting to who knows what network. Firewalls only work if you can trust your network, and you can't trust your network. Do keep them at the edge, though. Zero trust networking doesn't mean you should purposely build a hostile network.

We'll leave it there for now. I encourage you to leave a comment below or tell me how wrong I am on Twitter; I'd love to keep this conversation going. We're in the middle of a lot of change. I won't say I'm totally right, but I am trying really hard to understand where things are going, or where they need to go in some instances. If my silly ramblings above have put you into a murderous rage, you probably need to rethink some life choices; best to do that away from Twitter. I suspect this will be a future podcast topic at some point. These are indeed interesting times.


How wrong am I? Let me know: @joshbressers on Twitter.

Episode 30 – I’m not an expert but I’ve been yelled at by experts

Josh and Kurt discuss security automation: machine learning, AI, and a bunch of moral and philosophical boundaries that the new future will bring. You've been warned.

Show Notes

Return on Risk Investment

I found myself in a discussion earlier this week that worked its way into return on investment topics. Of course nobody could really agree on what the return was, which is how these conversations often end up. It's really hard to decide what the return on investment is for security features and products. It can be hard to even determine the cost sometimes, which should be the easy number to figure out.

All this talk got me thinking about something I'm going to call risk investment. The idea here is that you have a risk, which we'll think about as the cost. You have an investment of some sort: it could be a product, training, maybe staff. This investment in theory reduces your risk in some measurable way. The reduction of the risk is the return on risk investment. We like to think about these things in the context of money, but risk doesn't exactly work that way. Risk isn't something that can often be measured easily. Even incredibly risky behaviors can work out fine, and playing it safe can end horribly. Rather than try to equate everything to money, what if we ignored that for the moment and just worried about risk?
First, how do you measure your risk? There isn't a nice answer for this. There are plenty of security frameworks you can use, and plenty of methodologies exist: threat modeling, attack surface analysis, pen test reports, architecture reviews, automated scanning of products and infrastructure. There's no single good answer to this question. I can't tell you what your risk profile is; you have to decide how you're going to measure it. What are you protecting? If it's some sort of regulated data, there will be substantial cost in losing it, so this risk measurement is easy. It's less obvious if you're not operating in an environment where an incident has a direct cost. It's even possible you have systems and applications that pose zero risk (yeah, I said it).

Assuming we have a way to determine risk, how do we measure the return on controlling it? This is possibly trickier than deciding how to measure the risk itself. In many instances you can't prove a negative; there's no way to show your investment is preventing something from happening. Rather than measure how many times you didn't get hacked, the right way to think about this is: if you were doing nothing, how would you measure your level of risk? We can refer back to our risk measurement method for that. Now think about where you do have certain protections in place: what will an incident look like? How much less trouble will there be? If you can't answer this, you're probably in trouble. This is the important data point though. When there is an incident, how do you think your countermeasures will help mitigate the damage? What was your investment in the risk?

And now this brings us to our Return on Risk Investment, or RORI as I'll call it, because I can and who doesn't like acronyms? Here's the thing to think about if you're a security leader. If you have risk, which we all do, you must find some way to measure it. If you can't measure something, you don't understand it; if you can't measure your risk, you don't understand your risk. Once you have your method to understand what's happening, make note of your risk measurement without any sort of security measures in place, your risk with ideal (not perfect, perfect doesn't exist) measures in place, and your risk with existing measures in place. That will give you an idea of how effective what you're doing is. Here's the thing to watch for: if your existing measures are close to the risk level for no measures at all, that's not a positive return. Those are things you should either fix or stop doing. Sometimes it's OK to stop doing something that doesn't really work. Security theater is real, it doesn't work, and it wastes money. The trick is to find a balance that shows measurable risk reduction without breaking the bank.
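I don't define a formula above, so purely as an illustration, here's a minimal sketch of one way you might put numbers to that three-way comparison. The function name, the single-scale risk scores, and the ratio itself are my assumptions for the example, not a standard metric; the scores would come from whatever measurement method you picked.

```python
# Hypothetical formalization of RORI, for illustration only.
# Scores come from whatever risk-measurement method you chose;
# the scale is arbitrary, only the relative values matter.

def rori(risk_no_measures: float, risk_ideal: float, risk_existing: float) -> float:
    """Fraction of the achievable risk reduction your current measures deliver.

    Near 1.0 means existing measures perform like the ideal ones;
    near 0.0 means they perform about as well as doing nothing.
    """
    achievable = risk_no_measures - risk_ideal
    if achievable <= 0:
        raise ValueError("ideal measures must reduce risk below doing nothing")
    return (risk_no_measures - risk_existing) / achievable

# Example: doing nothing scores 90, ideal measures score 20,
# and the controls actually in place score 80.
print(f"{rori(90, 20, 80):.2f}")  # 0.14 -- close to doing nothing: fix it or stop paying for it
```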



How do you measure risk? Let me know: @joshbressers on Twitter.

Episode 29 – The Security of Rogue One

Josh and Kurt discuss the security of the movie Rogue One! Spoiler: Security in the Star Wars universe is worse than security in our universe.

Show Notes

Mechanical Computer

Episode 28 – RSA Conference 2017

Josh and Kurt discuss their involvement in the upcoming 2017 RSA conference: Open Source, CVEs, and Open Source CVE. Of course IoT and encryption manage to come up as topics.

Show Notes

What does security and USB-C have in common?

I've decided to create yet another security analogy! You can't tell, but I'm very excited to do this. One of my long-standing complaints about security is that there are basically no good analogies that make sense. We always try to talk about auto safety, or food safety, or maybe building security, or even pollution. There's always some existing real-world scenario we try to warp and twist so we can tell a security story that makes sense. So far they've all failed. The analogy always starts out strong, then something happens that makes everything fall apart. I imagine a big part of this is because security is really new, but it's also really hard to understand. It's just not something humans are good at reasoning about.

The other day this article was sent to me by @kurtseifried:
How Volunteer Reviewers Are Saving The World From Crummy—Even Dangerous—USB-C Cables

The TL;DR is that the world of USB-C cables is sort of a modern-day wild west. There's no real way to tell which ones are good and which ones are bad, so some people test the cables. It's nothing official; they're basically volunteers doing this in their free time. Their feedback is literally the only real way to decide which cables are good and which are bad. That's sort of crazy if you think about it.

This really got me thinking: it has a lot in common with our current security problems. We have a bunch of products and technologies. We don't have a good way to tell if something is good or bad. There are some people who try to help with good information. But fundamentally most of our decisions are made with bad or incomplete data.

In the case of the cables, I see two practical ways out of this. The first is some sort of official testing lab: if something doesn't pass testing, it can't be sold. This makes sense; there are plenty of things on the market today that go through similar testing, and if a product fails, it doesn't get sold. In this case the comparable analogies hold up. Auto safety, electrical safety, HDMI: there are plenty of organizations responsible for ensuring the quality and safety of certain products. The cables would be no different.

The alternative way to deal with this problem is to make sure every device assumes bad cables are possible and handles them in hardware. This would mean devices being smart enough to not draw too much power, or not provide too much power, and to detect an imminent failure mode and disconnect. There are a lot of possibilities here, and to be perfectly honest, no device will be able to do this with 100% accuracy. More importantly, no manufacturer will be willing to add this functionality because it would add cost, probably a lot of cost. It's still a remote possibility though, and for the sake of the analogy, we're going to go with it.

The first option, twisted to cybersecurity, would mean we need a nice way to measure security. There would be a lab or organization capable of doing the testing and then giving some sort of stamp of approval. This has proven to be a really hard thing to do in the past; the few attempts have failed. I suspect it's possible, just very difficult to do right. Today Mudge is doing some of this with the CITL, but other than that I'm not really aware of anything of substance. It's a really hard problem to solve, but if anyone can do it right, it's probably Mudge.

This leads us to the second possibility, which is sort of how things work today. There is a certain expectation that an endpoint will handle certain situations correctly. Each endpoint basically has to assume anything talking to it is broken in some way. All data transferred must be verified. Executables must be signed and safely distributed. The networks the data flows across can't really be trusted. Any connection to the machine could be an attacker and must be treated as such. This is proving to be very hard though, and in the context of the cables, it's basically the crazy solution. Our current model of security is the crazy solution. I doubt anyone will argue with that.
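To make the "verify everything" idea concrete, here's a minimal sketch of one small piece of it: checking a downloaded file against a published SHA-256 digest before trusting it. The filename and digest below are hypothetical placeholders, and a real distribution pipeline would also verify a signature (GPG, for example) over the digest itself, so an attacker can't simply swap both the file and the digest.

```python
import hashlib

# Sketch of integrity checking for a downloaded artifact. The expected
# digest would normally come from the publisher; verifying a signature
# over that digest is what adds authenticity on top of integrity.

EXPECTED_SHA256 = "0" * 64  # placeholder: the publisher's advertised digest

def verify_download(path: str, expected_hex: str) -> bool:
    """Return True only if the file's SHA-256 matches the expected digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()

if __name__ == "__main__":
    try:
        ok = verify_download("installer.bin", EXPECTED_SHA256)  # hypothetical file
    except FileNotFoundError:
        ok = False  # nothing to verify counts as a failure too
    print("digest matches; proceed" if ok else "verification failed; do not run this file")
```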

This analogy certainly isn't perfect, but the more I think about it the more I like it. I'm sure there are problems with thinking about it this way, but for the moment, it's something to chew on at least. The goal is to tell a story that normal people can understand so we can justify what we want to do and why. Normal people don't understand security, but they do understand USB cables.


Do you have a better analogy? Let me know @joshbressers on Twitter.

Episode 27 – Prove to me you are human

Josh and Kurt discuss NTP, authentication issues, network security, airplane security, AI, and Minecraft.

Show Notes

Episode 26 – Tell your sister, Stallman was right

Josh and Kurt end up discussing video game speed running, which is really just hacking. They also discuss the pitfalls of the modern world, where you don't own your software or services. Stallman was right!

Show Notes