Full disclosure – yet again


I came across this post about ethical hacking and felt the need to respond to it publicly, since (I feel that) the article offers a skewed view and does not present the counter-arguments:

First of all, I would like to stress that discovering and writing exploits for certain types of flaws (and I’m not referring to XSS 🙂 ) does require serious knowledge and skills, which 99.9% of programmers do not possess (and I’m saying this as a malware analyst who does reverse engineering as part of his daily job). While humility is a good thing, the fact of the matter is that these people are part of a select group. Also, as a sidenote: a large percentage of programmers (I don’t want to guess, but certainly more than half) do not understand even the basics of the security risks which may affect their products (in the case of a “web application” this may be SQL injection, or in the case of a binary product something like a stack overflow).
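
To make the first of those concrete, here is a minimal, hypothetical sketch of SQL injection (the table, data, and input are invented for illustration): string concatenation lets attacker-controlled input change the structure of the query itself, while a parameterized query keeps it as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is concatenated into the SQL text, so the quote
# characters in it alter the structure of the query.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # returns every row

# Safe: the driver passes the value separately from the SQL text, so the
# quotes are treated as data, not as query syntax.
print(conn.execute("SELECT * FROM users WHERE name = ?",
                   (user_input,)).fetchall())  # returns nothing
```

The parameterized form is the standard defense because the value travels separately from the SQL text, so quote characters in the input can never become query syntax.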

Second of all, from an economics point of view: software vendors have no financial incentive to fix bugs (and by bugs I mean here problems which wouldn’t come up in real life – i.e. they wouldn’t bother the customers – but which, under exceptional circumstances – like a specially crafted query or input file – could lead to information disclosure, arbitrary code execution, etc.). Fixed bugs don’t sell products. New features sell products. And most of the time the client isn’t knowledgeable enough to assess how secure the product s/he buys really is. One might argue that competitors could disclose the bugs, but this rarely happens, because the competing companies know that their own products are equally buggy, and if they disclose a bug in a competitor’s product, the competitor will try (and most probably succeed) to find bugs in theirs, and the whole thing will come crashing down. In this sense “ethical hacking” and the threat of full disclosure play a role in keeping the players (at least somewhat) honest.

Where the ethics part comes in (in my opinion) is thinking about the customer. As I see it there are two extremes:

  • the “bad guys” discover the vulnerability and use it to take advantage of the users of the product without anybody knowing it
  • the vendor discovers it and patches the problem (hopefully before anybody else discovers it)

Of course (as with everything) there are many shades of gray in between (like customers not deploying the patch right away, and the “bad guys” reverse engineering it to find the flaw it fixes and then exploiting it against the customers who didn’t apply the patch), but I didn’t want to complicate this description.

The “ethical hacker” approach falls somewhere in the middle: after discovery, let the vendor know, and if it doesn’t care (doesn’t communicate with you, doesn’t promise to release a fix within a reasonable time frame), release the vulnerability publicly, preferably with methods for potential customers to mitigate it. Why should it be released? Because as time passes, the probability that the “bad guys” find it increases! As an independent security researcher you don’t have any choice other than to follow this path (because I don’t think that very many companies will admit that they screwed up and bring you in to help them – this would mean admitting failure, which would result in many management types losing their bonus packages, which they don’t want).

There are many bad apples in the “research community” who place personal pride before the interest of the customers, but they are not practicing ethical hacking!

However, the example you cited does not apply. A big vendor cannot disregard a serious vulnerability just because of the style of the communication. Do you consider that just because I write “I’m the king of the world and you know s**** about software development” in an e-mail to MS in which I disclose a remotely exploitable flaw for Vista, they should disregard it? If the vulnerability is genuine and the vendor really doesn’t communicate (doesn’t even acknowledge receipt of the mail), there is no other possibility than going public (again: preferably with a mitigation method for clients) – the alternative being to wait until the “bad guys” discover the vulnerability and exploitation becomes widespread enough that the company is forced to do something about it. Here are some examples which you should consider:

  • Apple trying to discredit security researchers who found exploitable code in their wireless drivers
  • A person being arrested and prosecuted because he discovered an information disclosure vulnerability in the website of a university and tried to notify them!
  • Amazon not fixing a bug for one year (!) which allowed arbitrary websites to manipulate your shopping cart if you were logged in to the Amazon website (see the sketch after this list)
  • Oracle not fixing vulnerabilities for years
  • Microsoft knowing about the recent .ANI vulnerability since December (by their own admission) and not releasing a patch sooner (exactly how many months does it take to test a patch?!) – and, when releasing it, breaking other software (the latter can of course be the fault of poorly written software, but in this case that’s not probable)
  • And finally a personal story: I discovered that using some CSS and JScript I can crash IE at will. This was several years ago. I tried to notify MS several times and got no response. The vulnerability persisted (I last verified a couple of months ago that my fully patched XP box with IE 6.0 was still vulnerable). Now I’m not going to disclose the code to anybody (because I migrated away from MS products), but after such a long period of time, don’t you think I would be justified in doing so?
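
As a side note on the Amazon item above: that is the classic cross-site request forgery (CSRF) pattern, where the browser helpfully attaches the victim’s session cookie to a request triggered from another site. Here is a minimal, hypothetical sketch in Flask (not Amazon’s actual code; the endpoint and parameter names are invented) of the vulnerable shape and the usual token-based mitigation:

```python
import secrets

from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "demo-only-not-a-real-secret"

@app.route("/")
def index():
    # Issue a per-session token that legitimate pages embed in their forms.
    session["csrf_token"] = secrets.token_hex(16)
    return "token issued"

# Vulnerable shape: the endpoint trusts any request carrying the session
# cookie, and browsers attach that cookie even when the POST comes from a
# form hosted on an attacker's site.
@app.route("/cart/add", methods=["POST"])
def add_to_cart():
    cart = session.setdefault("cart", [])
    cart.append(request.form["item"])
    session.modified = True  # tell Flask about the in-place mutation
    return "added"

# Usual fix: require a token that a foreign site cannot read, and reject
# requests that don't echo it back.
@app.route("/cart/add-safe", methods=["POST"])
def add_to_cart_safe():
    token = session.get("csrf_token")
    if not token or request.form.get("csrf_token") != token:
        abort(403)
    cart = session.setdefault("cart", [])
    cart.append(request.form["item"])
    session.modified = True
    return "added"
```

The token works because a page on a foreign origin can trigger the POST but cannot read the victim’s session to learn the token value.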

You can’t rely on companies trying to make the most secure products. They will make the products which generate the most revenue. Cars didn’t have safety belts until they were forced to. In the same way, software vendors won’t place security first (or at least in the top three priorities) until they are forced to.


2 responses to “Full disclosure – yet again”

  1. what you seem to be calling ethical hacking sounds like what (as far as i know) is more generally referred to as responsible disclosure…

    i suppose it should go without saying that an ethical hacker would follow the responsible disclosure guidelines… of course there’s more to ethical hacking than just responsible disclosure (like, for example, not hacking production systems without permission)…

    it seems to me that the referenced post, however, is not an argument for or against the ideal of ethical hacking, but rather a description of how it’s gone wrong in practice and perhaps why it didn’t work out the way it was supposed to… i can’t say i disagree, either, since i’ve certainly seen people claim to be whitehats one moment and then perform blackhat activities (like releasing attack code) in the next… there really are a lot of people claiming to be ethical hackers who deviate wildly from what you or i would consider ethical…

  2. I’m calling it ethical hacking because the original article called it that. It is possible that I overreacted a little; however, I tend to have strong reactions to posts which (in my opinion) present very one-sided views (because that’s how hype is born :)).

    What also bugged me is that he painted Mitnick in a negative light. Now, I’m far from being a fan-boy and I know that Mitnick did his share of illegal things – but he got his punishment, and I think that he is trying to give something back to society (with his book, for example, which I found really interesting).

    Maybe the problem is one of challenge: everybody likes a challenge, and we (as the security community) should try to present in more detail the challenges we face in our jobs, so that capable people know that they can be on the good side and still have a challenging job. For example, since working as a malware analyst I have come to appreciate the difficulties of detecting malware much more, and although I could write an “uber-malware”, I won’t, because fighting against it is more challenging (and also because I want to be on the good side). Exposing this face of the security landscape is difficult, however, because while “cracking” can be performed in solitude, most positive efforts are done inside companies where outsiders have less access.

    As for the issue of naming things: just because others call themselves something, it doesn’t mean the term should be discredited. For example, many “doctors” around the world performed (and maybe still perform) dubious experiments on human subjects, but this doesn’t mean that we should discredit the term “doctor”. Of course there are situations where the odds are against us (like with the term “hacker”), but we should still try.
