Wednesday, February 11, 2009

Security vs. Obscurity

This was a great quote from Bruce Schneier that I had to share, or at least write down for future reference.
If I take a letter, lock it in a safe, hide the safe somewhere in New York, then tell you to read the letter, that's not security. That's obscurity. On the other hand, if I take a letter and lock it in a safe, and then give you the safe along with the design specifications of the safe and a hundred identical safes with their combinations so that you and the world's best safecrackers can study the locking mechanism, and you still can't open the safe and read the letter, that's security.
Another element from other articles I have read is that security only exists in relation to time. The above example is a fine ideal, but it doesn't hold absolutely in reality, at least not for any single approach. There are always limits to security: security can always be better, and all security can eventually be broken. So when designing a security system, "how secure" is not whether or not it can be broken; "how secure" is the minimum amount of time we can be reasonably assured the security is going to hold. Or further, how much effort would be necessary to break the system, in terms of cost and the skill level of the cracker.

Maximize the cost and skill necessary to crack a safe, and you have a very secure safe. Minimize the investment needed to build such a safe, and you have a very good safe.
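Back-of-the-envelope, that cost-over-time idea looks like this. Here's a minimal Python sketch, assuming an arbitrary guess rate (real attack speeds vary enormously with hardware and algorithm):

```python
# Work-factor arithmetic: expected time to brute-force a key.
# The guess rate is an illustrative assumption, not a benchmark.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
GUESSES_PER_SECOND = 1e9  # hypothetical attacker throughput

def years_to_crack(key_bits: int) -> float:
    """Expected years to search half the keyspace of a key_bits-bit key."""
    keyspace = 2 ** key_bits
    return (keyspace / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for bits in (40, 56, 128):
    print(f"{bits}-bit key: ~{years_to_crack(bits):.2e} years on average")
```

Same lock, same attacker; the only thing that changes the answer is how much work you force the cracker to do.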

From my understanding of today's computer security, some people stumble over vulnerabilities, and others hunt them out. If it is a problem with Microsoft software, the cracker / hacker can report the vulnerability and hope that Microsoft fixes it before someone less kind finds the same issue. The discoverer also gets no notoriety unless they publish a proof of concept, or mount a full-scale attack on a computer system. Frequently Microsoft will not fix issues until that point has been reached anyway. Why fix a problem unless that specific problem is going to hit your bottom line?

Linux and BSD are different. Hacking and cracking are not merely tolerated; as alluded to above, they are encouraged. If you find a vulnerability, you write your proof of concept and, because you have the code, a possible fix to take care of the problem. If it is a design flaw, you explain why the flaw exists. You get famous (within that circle) and the system gets better. Once the problem is fixed and a patch has been distributed, the proof of concept gets a public release, and the nerds and geeks cheer and update their systems, if they haven't already done so automatically.

For hackers and crackers, operating systems are like puzzle books. Microsoft puts out the same puzzles every time, and each time a puzzle is solved, they may or may not change it, at their leisure, mostly when so many puzzles have been solved that it gets hard to keep selling the same book. Gnu/Linux/BSD, on the other hand, is... harder. Every time a puzzle is solved, no one else gets a shot at the same puzzle, because whoever solved it and whoever wrote it work together, as and with a community, to make it harder.

Now consider this: these are the nerdiest people in the world offering arguably the most challenging puzzles in the world, puzzles with real-life consequences... and they have been going at this for ~25 years. Security patches are frequent, but most often they cover very obscure, unlikely circumstances that create hypothetical vulnerabilities, often only proven in the most ideal of environments.

Not to discredit Microsoft for their desire to be secure, but there really isn't any money in fixing software that has already been sold. There is no community to improve the software, because no one is allowed to see the source code that would make improving it practical. And further, even if someone got their hands on the code, or miraculously managed to write a patch without it, such activity is not only illegal, but there is virtually no process that allows such fixes to be certified.

The last MAJOR bug for Gnu/Linux / BSD was the Debian OpenSSL flaw, where a broken distribution patch crippled the random number generator, so a hundredth of a percent or so of computers were statistically likely to share SSH keys drawn from the same tiny pool of a few tens of thousands of "random" keys. WTF?? Are you joking? This is the most critical vulnerability discovered in years?!? Not to mention that the guy who discovered the vulnerability had a fix submitted upstream within a day, which pretty much fixed the problem worldwide within maybe as long as 48 hours. Compare that to a Microsoft Exchange bug that allows an attacker to do anything they want after simply sending a cleverly malformed email.
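To put a number on how bad a tiny key pool is, here is a quick birthday-problem sketch in Python. The pool size of 32,768 is roughly what the crippled PRNG could produce per key type (it was seeded by little more than the process ID), and the host counts are made-up illustrations:

```python
# Birthday-problem sketch: odds that at least two hosts share a key
# when keys come from a tiny pool. Pool size and host counts are
# illustrative assumptions, not figures from the official advisory.
import math

def p_shared_key(hosts: int, pool: int) -> float:
    """Probability that at least two of `hosts` draw the same key
    from a uniform pool of `pool` possible keys."""
    # Multiply (1 - i/pool) terms in log space to avoid underflow.
    log_p_distinct = sum(math.log1p(-i / pool) for i in range(hosts))
    return 1 - math.exp(log_p_distinct)

POOL = 32_768  # rough keyspace left by the broken PRNG, per key type
for hosts in (10, 100, 1000):
    print(f"{hosts} hosts: P(some shared key) = {p_shared_key(hosts, POOL):.4f}")
```

With only a thousand hosts the collision probability is already essentially 1, which is why the fix (and mass key regeneration) had to ship so fast.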

Personally, if Microsoft says that looking at the code would expose the software to an unlimited number of critical vulnerabilities, compromising your network and all your data, that doesn't make me concerned about Gnu/Linux; that makes me concerned about Windows.

Like seriously, the code is that bad?

I'd tell customers with that concern that Gnu/Linux has been openly audited by the nerdiest geeks for roughly 25 years, geeks who have worked together to develop the best security ever. The Linux community says that open source is more secure; if Microsoft is saying that being able to see the source code exposes you to limitless vulnerabilities, maybe there should be some concern, given that the code to Windows has been leaked on the Internet for quite some time? Not to mention, didn't they recently change to some "shared-source" BS where you can look at the code, but it doesn't actually mean shit the way it does with OpenOffice?

Anyone else having as much difficulty following Microsoft's supposed argument here, and how, if true, it just makes everything look worse for Microsoft?

Further, this article is a study of "Automatic Patch-Based Exploit Generation", where the simple act of Microsoft attempting to fix its software is done so poorly that the patch itself can be used to have quite the reverse of the intended effect: attackers diff the patched and unpatched binaries to derive working exploits against everyone who hasn't updated yet.
