Published by Marco on 2. May 2014 08:35:25

Much of the Internet has been affected by the "Heartbleed"
 vulnerability in the widely used
OpenSSL server-side software. The bug effectively allows anyone to collect
random data from the memory of machines running the affected software, which
served about 60% of encrypted sites worldwide. A massive cleanup effort ensued,
but the vulnerability had been in the software for two years, so there's no
telling how much information was stolen in the interim.

The OpenSSL software is used not only to encrypt HTTPS connections to web
servers but also to generate the certificates that undergird those connections
as well as many PKIs. Since data could have been stolen over a period of two
years, it should be assumed that certificates, usernames and passwords have been
stolen as well. Pessimism is the only way to be sure. [1]

In fact, any data that was loaded into memory on a server running a
pre-Heartbleed version of the OpenSSL software is potentially compromised.
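The bug itself reduces to a single missing bounds check. Here is a hypothetical sketch in C (the struct and function names are invented for illustration, not the actual OpenSSL source): the vulnerable pattern trusts the length field the peer claims, while the fix rejects any request whose claimed length exceeds what was actually received, which is essentially what the official patch did.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of the Heartbleed pattern, not the real OpenSSL code. */
struct heartbeat {
    unsigned short claimed_len;  /* what the peer *says* the payload is */
    unsigned char  payload[16];  /* what the peer actually sent */
    unsigned short actual_len;   /* how many bytes really arrived */
};

/* Vulnerable version: copies claimed_len bytes, reading past payload[]
 * into whatever happens to sit next to it in memory:
 *
 *     memcpy(out, hb->payload, hb->claimed_len);
 */

/* Fixed version: drop any record whose claimed length exceeds the bytes
 * actually received. */
int safe_copy(unsigned char *out, const struct heartbeat *hb)
{
    if (hb->claimed_len > hb->actual_len)
        return -1;               /* silently discard the malformed record */
    memcpy(out, hb->payload, hb->claimed_len);
    return (int)hb->claimed_len;
}
```

A well-formed request is echoed back; a request claiming 16,000 bytes while sending four is rejected instead of leaking 16,000 bytes of adjacent memory.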

[How to respond]

We should all generate new certificates, ensuring that the root certificate from
which we generate has also been re-generated and is clean. We should also choose
new passwords for all affected sites. I use "LastPass" to
manage my passwords, which makes it much easier to use long, complicated and
most importantly unique passwords. If you're not already using a password
manager, now would be a good time to start.

And this goes especially for those who tend to reuse their password on different
sites. If one of those sites is cracked, then the hacker can use that same
username/password combination on other popular sites and get into your stuff
everywhere instead of just on the compromised site.

[Forking OpenSSL]

Though there are those who are blaming open-source software, we should instead
blame ourselves for using software of unknown quality to run our most trusted
connections. That the software was designed and built without the required
quality controls is a different issue. People are going to write bad software.
If you use their free software and it ends up not being as secure as advertised,
you have to take at least some of the blame on yourself. 

Instead, the security experts and professionals who've written so many articles
and done so many reviews over the years touting the benefits of OpenSSL should
take more of the blame. They are the ones who misused their reputations by
touting poorly written software to which they had source-code access but whose
evaluation they were too lazy to perform seriously.

An advantage of open-source software is that we can at least pinpoint exactly
when a bug appeared. Another is that the entire codebase is available to all, so
others can jump in and try to fix it. Sure, it would have been nice if the
expert security programmers of the world had jumped in earlier, but better late
than never.

The site "OpenSSL Rampage" follows the efforts of
the OpenBSD team to refactor and modernize the OpenSSL codebase. They are
documenting their progress live on Tumblr, which collects commit messages,
tweets, blog posts and official security warnings that result from their
investigations and fixes.

They are working on a fork and are making radical changes, so it's unlikely that
the changes will be merged back into the official OpenSSL project, but perhaps a
new TLS/SSL tool will be available soon. [2]

[VMS and custom memory managers]

The messages tell tales of support for extinct operating systems like VMS,
whose continued presence makes the code supporting current OSs much more
complicated. This complexity, in turn, hides further misuses of malloc as well
as misuses of custom buffer-allocation schemes that the OpenSSL team came up
with because "malloc is too slow". Sometimes memory is freed "twice for good
measure".
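The "freed twice" anti-pattern is worth spelling out, since the second free is undefined behavior that can corrupt the allocator's bookkeeping. This is a hypothetical illustration (not actual OpenSSL code) of the bug and of a common defensive idiom against it:

```c
#include <assert.h>
#include <stdlib.h>

/* The anti-pattern: two error paths overlap and both release the buffer. */
void cleanup_bad(char *buf)
{
    free(buf);
    /* ... later, on another error path ... */
    /* free(buf);   <- undefined behavior: double free */
}

/* A common defensive idiom: free through a pointer-to-pointer and null it
 * out, so any accidental second call becomes a harmless free(NULL). */
void cleanup_safe(char **buf)
{
    free(*buf);
    *buf = NULL;
}
```

Nulling the pointer doesn't fix the sloppy control flow, but it turns a memory-corruption bug into a no-op.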

The article "Today's bugs have BRANDS? Be still my bleeding heart [logo]" by
Verity Stob has a (partially) humorous take on the most recent software errors
that have reared their ugly heads. As also mentioned in that article, the
"Heartbleed Explained" cartoon by Randall Munroe illustrates the Heartbleed
issue well, even for non-technical people.

[Lots o' cruft]

This all sounds horrible, and one wonders how the software runs at all. Don't
worry: the code base contains a tremendous amount of cruft that is never used.
It is compiled and still included, but it acts only as a cozy nest of code
wrapped around the code that actually runs.

There are vast swaths of script files that haven't been used for years, which
can build versions of the software under compilers and with options that
haven't been seen on this planet since before... well, since before Tumblr or
Facebook. For example, there's no need to retain a forest of macros at the top
of many header files for the Metrowerks compiler for PowerPC on OS9. No reason
at all.

There are also incompatibly licensed components in regular use, as well as
others attached to components that no longer seem to be used at all.

[Modes and options and platforms: oh my!]

There are compiler options for increasing resiliency that seem to work; turning
them off, however, yields an application that crashes immediately. There are
clearly no tests for any of these modes. OpenSSL sounds like a system that grew
organically, with little in the way of code conventions, patterns or
architecture. There seems to be no one who regularly cleans house and decides
which code to keep and which to make obsolete. And even when code is deemed
obsolete, it remains in the code base a decade later.
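One reason disabling those options crashes the program is that a custom freelist can mask use-after-free bugs: "freed" buffers are kept around and handed back intact, so stale pointers keep working until the real malloc/free is restored. The following is an invented miniature of such an allocator (not OpenSSL's actual code) showing the masking effect:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* A tiny recycling allocator: freed buffers go onto a stack instead of
 * back to the system. This is the general shape of a freelist, invented
 * here for illustration. */
#define POOL 4
static void *freelist[POOL];
static int   top = 0;

void *pool_alloc(size_t n)
{
    if (top > 0)
        return freelist[--top];  /* recycled: old contents untouched */
    return malloc(n);
}

void pool_free(void *p)
{
    if (top < POOL)
        freelist[top++] = p;     /* never actually released */
    else
        free(p);
}
```

A pointer used after `pool_free` still "works", and the next caller of `pool_alloc` can receive a buffer with the previous owner's data in it. Both behaviors hide bugs that the system allocator would expose.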

[Security professionals wrote this?]

This is to say nothing of how the encryption itself actually works. There are
tales on that web site of the OpenSSL developers desperately trying to keep
entropy high by mixing in the current time every once in a while, or even
"mixing in bits of the private key" for good measure.
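To see why "mixing in the current time" buys almost nothing, consider this toy sketch (not OpenSSL's actual RNG): an attacker who knows roughly when a value was produced only has to try a handful of candidate timestamps to undo the mixing.

```c
#include <assert.h>
#include <stdint.h>
#include <time.h>

/* Toy "entropy mixing": XOR the internal state with the wall clock.
 * Invented for illustration; not how any real CSPRNG should work. */
uint32_t weak_mix(uint32_t state, time_t now)
{
    return state ^ (uint32_t)now;
}
```

If the attacker knows the generation time to within a minute, there are only about 60 candidate timestamps, i.e. under 6 bits of "entropy", and a trivial brute-force loop recovers the hidden state.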

[A lack of discipline (or skill)]

The current OpenSSL codebase seems to be a minefield for security reviewers, or
for reviewers of any kind. A codebase like this is also terrible for new
developers, whose onboarding you want to encourage in such a widely used,
distributed, open-source project.

Instead, the current state of the code says: don't touch, you don't know what to
change or remove because clearly the main developers don't know either. The last
person who knew may have died or left the project years ago.

It's clear that the code has not been reviewed in the way that it should be.
Code on this level and for this purpose needs good developers/reviewers who
constantly consider most of the following points during each review:

  * Correctness (does the code do what it should? Does it do it in an
    acceptable way?)
  * Patterns (does this code invent its own way of doing things?)
  * Architecture (is this feature in the right module?)
  * Security implications
  * Performance
  * Memory leaks/management (as long as they're still using C, which they
    honestly shouldn't be)
  * Supported modes/options/platforms
  * Third-party library usage/licensing
  * Automated tests (are there tests for the new feature or fix? Do existing
    tests still run?)
  * Comments/documentation (is the new code clear in what it does? Any tips for
    those who come after?)
  * Syntax ("using braces can be important")
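The braces point deserves an example. The sketch below is hypothetical, echoing the style of Apple's contemporaneous "goto fail" bug rather than any specific OpenSSL code: without braces, a duplicated line after an `if` runs unconditionally, and a verification step is silently skipped.

```c
#include <assert.h>

/* Buggy: the second "goto fail" is NOT governed by the if, so it always
 * jumps, with err still 0 (success), and the signature check never runs. */
int verify_bad(int hash_err, int sig_err)
{
    int err;
    if ((err = hash_err) != 0)
        goto fail;
        goto fail;               /* duplicated line, always taken */
    if ((err = sig_err) != 0)
        goto fail;
fail:
    return err;                  /* 0 means "verified OK" */
}

/* Braces make the control flow explicit; the duplicate would be caught
 * or at least contained. */
int verify_good(int hash_err, int sig_err)
{
    int err;
    if ((err = hash_err) != 0) {
        goto fail;
    }
    if ((err = sig_err) != 0)
        goto fail;
fail:
    return err;
}
```

In the buggy version, a record with a bad signature is reported as verified, which is exactly the class of failure a syntax-level review convention is meant to prevent.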

[Living with OpenSSL (for now)]

It sounds like it is high time that someone did what the OpenBSD team is doing.
A spring cleaning can be very healthy for software, especially once it's
reached a certain age. That goes double for software that was blindly used by
60% of the encrypted web sites in the world.

It's wonderful that OpenSSL exists. Without it, we wouldn't be as encrypted as
we are. But the apparent state of this code bespeaks a failure of management at
all levels. The developers of software this important must be of higher quality.
They must be the best of the best, not just anyone who read about encryption on
Wikipedia and "wants to help". Wanting to help is nice, but you have to know
what you're doing.

OpenSSL will be with us for a while. It may be crap code and it may lack
automated tests, but it has been manually (and possibly regression-) tested and
used a lot, so it has earned a certain badge of reliability and predictability.
The state of the code means only that future changes are riskier, not
necessarily that the current software is not usable.

Knowing that the code is badly written should make everyone suspicious of
patches -- which we now know are likely to break something in that vast pile of
C code -- but not suspicious of the officially supported versions from Debian
and Ubuntu (for example). Even if the developer team of OpenSSL doesn't test a
lot (or not automatically for all options, at any rate -- they may just be
testing the "happy path"), the major Linux distros do. So there's that comfort,
at least.


[1] As Ripley so famously put it in the movie Aliens: "I say we take off and
    nuke the entire site from orbit. It's the only way to be sure."

[2] It will, however, be quite a while before the new fork is as battle-tested
    as OpenSSL.