Holding open source to a higher standard


Open source has always been held to a higher standard. It has always surpassed this standard.

I ran across a story recently about a proposed bill in the US Congress that is meant to “help” open source software. The bill lays out steps CISA should take to help secure open source software. This post isn’t meant to argue whether open source needs to be fixed (it doesn’t), but rather to consider the standards and expectations open source is held to.

In the early days of open source, there was an ENORMOUS amount of attention paid to the fact that open source was built by volunteers. It was dismissed as amateur software, unlike the professional products from companies such as Sun, DEC, and Microsoft. The term FUD (fear, uncertainty, and doubt) was used quite a lot by open source developers to describe what was going on. And even though the big commercial companies kept changing the reasons open source couldn’t be trusted, open source just kept exceeding every expectation. Now the FUD slingers are either out of business or have embraced open source.

The events following Log4Shell created whole new industries bent on convincing us open source can’t be trusted. This time, instead of the answer being closed source software, it’s some sort of trusted open source that only their company can sell you. And don’t forget to be afraid of open source! I don’t think humans will ever learn that fear doesn’t work. It might work for a little while during an emergency, but it’s not a long-term strategy. Open source won because it works. No amount of trying to convince us otherwise will change a single mind.

We’re again at a place where a lot of governments and companies are telling us open source needs to be fixed. We also see attempts to create ways to measure open source so we can determine which bits should be trusted and which should be eschewed. On the surface this seems like a fair idea. Obviously some open source would measure well and some would measure badly (this doesn’t mean a good score is better; more on that another day). All software works like this. There’s no call to measure closed source; consider it homework to figure out why not.

Higher Standards

So this brings us to how open source surpasses whatever standards are put in front of it. It will probably keep happening. Open source is easy to pick on. Nobody is in charge. There’s no legal department to come after you for making up lies. It’s just people building software in an amazing way. When someone starts trying to scare off users, the developers just keep building software and ignore the nonsense. It’s generally very impressive.

If we look at some of the most successful open source companies (Red Hat, Canonical, GitHub, and GitLab, to name a few), they aren’t trying to convince you open source is dangerous. Open source is how they build their products and their business. They understand open source, and that understanding allows them to reap the benefits. These companies directly benefit from the higher standards of open source. Ironically, their competition created the very environment that let the open source companies win.

This isn’t the risk you are looking for

Now, all this said, there is risk when you use open source. There is risk in everything you do: crossing the street, driving too fast, eating expired ketchup. However, the risk that comes with open source isn’t what we may think it is. There are many who will tell you some open source is dangerous because it’s old, or has vulnerabilities, or doesn’t have the backing of a foundation, or fill in some metric. These reasons aren’t the risk; they’re marketing. The actual risk is never a simple one-line explanation.

Open source is like any asset. If you know you have it, you can make informed decisions. There will always be vulnerabilities. There will always be old open source. There will always be projects run by one person in their basement. None of these things is a problem by itself if you know about it.

Here’s an easy example that picks on Red Hat and Google. Google has container images called “distroless”. They are designed to be incredibly stripped down, the idea being you should only run what you need. The Red Hat container images are gigantic in comparison. One reason often brought up for running distroless is that the images have fewer security vulnerabilities, which is completely true. If you have a surface understanding of the issues, this seems like a no-brainer: smaller image equals less risk. But nothing is ever this simple. Red Hat has a great team of people who are constantly providing care and feeding of their packages (I used to be one of them). They don’t just patch the security problems; they are in the trenches helping open source projects with the updates and making sure they understand the vulnerabilities better than anyone else. Red Hat knows exactly how a security vulnerability affects them, and they fix the things that matter very quickly.
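To make the trade-off concrete, here is a minimal sketch of the two approaches. The base image tags are real public images, but the Go build and the `app` binary are hypothetical stand-ins for whatever you actually ship, not a recommendation of either option.

```dockerfile
# Option 1: a Google distroless base. Almost nothing ends up in the
# final image, so scanners find almost nothing, but there is also no
# shell or package manager when something goes wrong at 2am.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]

# Option 2: a Red Hat UBI base. Far more packages (and scanner
# findings) in the image, but every one of them is tracked, analyzed,
# and patched by Red Hat, with a vendor standing behind it.
# FROM registry.access.redhat.com/ubi9/ubi
# COPY app /usr/local/bin/app
# ENTRYPOINT ["/usr/local/bin/app"]
```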

The risk in the container images you run is really about support and tooling. Some projects and companies will benefit more from small container images, maybe because they have fewer vulnerabilities, or maybe because they want to move at lightning speed. Some companies will benefit more from the slower pace Red Hat affords and the world-class support that comes with it. There’s no right answer; everyone has to decide for themselves what makes sense.

The only constant is FUD

I don’t expect how open source is judged or measured to change anytime soon. It’s just too easy to blame open source with one hand and keep using it with the other. Only time will tell if and how governments get involved. I’m sure some ideas will be good and some will be bad. Something about a road paved with good intentions, maybe.

The one thing that gives me the most solace in all of this is how much open source has won. It hasn’t won by a little bit: software ate the world, then open source ate the software. If it uses electricity, it uses open source.