This should be listed on the front page of the forum. Anyone on a shared host should not have to worry as the hosting company will take care of it. Anyone on a VPS or dedicated server has some work to do.
I could not determine for certain whether Apache and nginx (or whatever web server someone may be using) need to be recompiled, assuming the web server has SSL support. Some sources say yes, they should be rebuilt against the updated, patched version of OpenSSL. Having recently learned to compile Apache on Linux, it seemed to me that Apache doesn’t just call out to a shared OpenSSL library; components of OpenSSL may be compiled into Apache itself (possibly only into mod_ssl.so). The same may be true of PHP. I am not sure at this time.
To be on the safe side, it may be wise for anyone on a VPS or dedicated server to update their system (especially OpenSSL), recompile Apache and PHP if they have SSL enabled, and reboot.
The code that introduced the vulnerability was committed two years ago; it has only been found now.
Destroys the commonly held belief that open source is secure because it has more eyes on it.
One of the other major issues is that OpenSSL is used in a huge amount of hardware, from home routers and modems to firewalls and many commercial internet appliances. Many of these that are not current generation will likely never get firmware updates.
Other than updating OpenSSL and recompiling Apache, you also need to replace your public and private keys and get new certificates issued for the new keys, since the old private keys may already have been leaked.
I am only a beginner level C++ coder, yet I cannot help but be astounded by how many of these missing bounds check vulnerabilities have been discovered in all kinds of software over the years. From Windows to Linux to web browsers to Java to Flash to audio and video players and pretty much everything in between, if it was written in C or C++, there has probably been one of these bugs in it. This is an avoidable mistake. Most of the vulnerabilities that have resulted in mass virus propagations (Nimda, Code Red, and more) and many vulnerabilities resulting in mass data theft have been caused by missing bounds check vulnerabilities exploited by buffer overflows.
All of this software is created by bright people who make stupid little mistakes. A few times I have downloaded open source code and taken a quick look at it, only to be overwhelmed by the number of files in the project. Some files, such as headers, may contain as little as one line of code. Managing code that is scattered among hundreds of different files has to be difficult, doesn’t it? How else could one explain bright people making these missing bounds check mistakes?
Is there not a way for C/C++ compilers to do bounds checking and error on compilation? I don’t know much about it, but I do know there have been way too many of these vulnerabilities over the years, all of which are preventable. We will never have secure computing as long as these simple little mistakes are allowed to happen. There has to be a better way.
I am now being flooded by email messages, from the various services to which I subscribe, urging me to change my password and regenerate API keys.
It strikes me, though, that I should wait to change my credentials until AFTER the vulnerability has been corrected on each service. Regardless of what we do now, if a server is still running a vulnerable OpenSSL, the new credentials can be stolen just as the old ones were.
Secondly, I wonder about services like 1Password and LastPass. Are they vulnerable to attack? For me, that could be more devastating than someone gaining access to my bank account!
If the service provider has updated their server software and created new keys and certificates, then it’s safe to update. I’d presume no competent provider would ask you to change your details while they are still exposed to the vulnerability.
What I fail to understand is how a system that is as prevalent and important as this could go for nearly two years with such a fundamental flaw. I would have imagined that an error of this magnitude would have been discovered/exploited long before this.
It’s not just a buffer overrun. From what I’ve read, the creators of OpenSSL chose to implement their own memory pool which, for performance reasons, doesn’t erase data from memory when it’s no longer needed. Normally, when you don’t need a piece of memory anymore, you overwrite it with zeroes to prevent problems like this, but the OpenSSL developers explicitly chose not to. I wonder how they feel about that decision now…
The GnuTLS bug that sat around letting bogus certs through for nine years before Red Hat noticed it (because of an audit, not someone randomly eyeballing the code) should have destroyed that belief first.
Rather than a missing check on the length of data sent back, that one was caused by two C functions with mismatched conventions: one returned 0 for false, while another returned -1 (or something similar) intending it to mean “false”. Whoops.
If it has really only been found now, then it isn’t a huge problem, as it will be mostly fixed before anyone manages to exploit it.
It is only a problem if someone actually found it at some point in the past two years and exploited it before it got patched.
Presumably the patch was made available by the time that the public announcement was made.
It only destroys the myth about open source being more secure if being open source made it easier for someone to find and exploit the bug long before it was patched. The fact that it is open source means someone did eventually find it and patch it. Because the alternatives are not open source, there is no way to tell whether any of them can be similarly exploited.
You’re right, you should change your password only after the website has been secured; otherwise the new passwords can be stolen again. 1Password and LastPass say that they’re safe. Furthermore, they’re getting great publicity out of telling everyone they should use their services.
It is also worth noting that not every website that uses OpenSSL is vulnerable. Not every build includes the vulnerable code; the heartbeat extension is not essential and not part of the core of OpenSSL, and some builds compile it out. So it is good to confirm with the website administrator if you’re in doubt.
That’s true. It is hard to say whether proprietary is better than open source or vice versa. The one sure thing is that if there is something wrong with open source code, someone will find it one day, because anyone can view the code. Proprietary code may look more secure because finding the error is harder, but that doesn’t mean someone patient enough will never find it.