Much has been written about Heartbleed and its significant impact on the security infrastructure of the internet. Articles and blog posts have taken both the “sky is falling” and “it’s not so bad” points of view. However, a more fundamental issue has reared its ugly head – is the use of open source “commercially reasonable” in a security framework?

The Heartbeat Extension for the Transport Layer Security (TLS) protocol is a proposed standard, documented in RFC 6520. Back in 2011, Robin Seggelmann, then a Ph.D. student at the University of Duisburg-Essen, developed the Heartbeat Extension for OpenSSL. Dr. Seggelmann then requested that the result of his work be integrated into OpenSSL. His code was reviewed by Stephen N. Henson, one of OpenSSL’s four core developers. Mr. Henson apparently missed the fact that a malformed Heartbeat request can cause the responding peer to return up to 64 KB of memory beyond the actual payload. Simply put, the reviewer missed validating a variable containing a length parameter.

This sounds like a simple mistake that should never have been made. However, it raises two questions: 1) how was the mistake made in the first place, and 2) why wasn’t it caught?

  • Open Source – How it Works

Open source code is created and managed by a community of developers. This community isn’t a company or a government standards body. It is merely a group of folks who are doing the work in their spare time. Some money gets donated to these projects, but it is usually very little. The OpenSSL project has received a total of $841 in funding. That is hardly enough money to effectively incentivize folks to engage in the project, much less spend large amounts of time reviewing and auditing the effectiveness of the code.

One often-touted benefit of the open source movement is that, with the code out in the open, many people have the opportunity to review and evaluate it independently. Unfortunately, unless those developers are independently wealthy, they have little motivation to conduct such a review.

It is this lack of funding that can be pointed to as a root cause of both problems – why the mistake was made and why it wasn’t caught. That, of course, brings us to the legal question. Businesses have an obligation to manage their systems in a resilient and reliable way. I won’t rehash all the different reasons that mandate good security practices; much ink has been spilled on that topic already. So, is the use of open source consistent with this obligation?

  • Commercially Reasonable Security

One really cannot paint open source initiatives with a generalized “this is good” or “this is bad” brush. Each project has a different level of involvement and a different level of funding. What will be important for businesses is to treat open source applications the same way they treat commercial off-the-shelf products: evaluate the functionality and test the applications before deployment. Some companies have a general prohibition against using open source inside their infrastructure. However, this is usually associated with the need to protect the company’s intellectual property, so the policy isn’t applied to non-IP-related environments. While it isn’t necessary to prohibit open source solutions in the security infrastructure outright, companies need to be cognizant that they should apply the same level of scrutiny to the security applications they use as to the intellectual property they want to protect.