I was reading the piece entitled White Listing – The End of Antivirus??? by the “Director of Technical Education”. While it would be fairly easy to launch an ad hominem attack against him, I will stick to the technical details of the post:
First, it argues that one of the approaches whitelisting companies use is to gather as much software as possible and then scan it with (many) AV engines. From this he concludes that whitelisting companies couldn’t work without AV companies. This is not true. Whitelisting companies have at least two things going for them:
- Reputation – they get the software from “reputable sources”, which means that statistically the chance of it being malware is much, much lower. It is interesting that he misses this argument, since – by his own admission – he did something similar at MS (scanning files for viruses before release) and can verify it first-hand: in an older blog post he mentioned that he found infected files in only a very few cases – something like one or two.
- Whitelisting companies could very easily build farms of machines that execute the files and compare the “before” and “after” state of each machine to decide whether a file is malicious. The technology is out there, readily available to anyone.
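The “before/after” comparison above can be sketched very simply. Here is a minimal, hypothetical illustration (the function names `snapshot` and `diff` are my own, and a real sandbox would also watch the registry, processes, and network, not just one directory):

```python
import hashlib
import os

def snapshot(root):
    """Map each file under root to its SHA-256 digest ("state" of the machine)."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def diff(before, after):
    """Return the files added, removed, and modified between two snapshots."""
    added = [p for p in after if p not in before]
    removed = [p for p in before if p not in after]
    modified = [p for p in before if p in after and before[p] != after[p]]
    return added, removed, modified
```

The idea is simply: take a snapshot, run the candidate file, take a second snapshot, and flag the sample if the diff shows suspicious changes (dropped executables, modified system files, and so on).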
Another option for the whitelisting companies would be to classify files based on characteristics like entropy, digital signatures, etc. (like Mandiant Red Curtain does). Of course this is not a 100% solution, but it would cover a large majority of cases.
Now, examining some other claims of the post, we find that “TSA does whitelisting”, which shows a lack of understanding of the terms. The TSA does blacklisting based on the infamous “no-fly list”. The TSA also does blacklisting of objects you can carry on the plane (i.e. they don’t list all the things you can take on board, they list all the things you can’t).
The other argument brought against whitelisting in the post (as a concept – applied to websites this time) is that it doesn’t protect against the current trend of reputable websites being hacked. The problem is that, while the argument is correct, the examples – ironically – are wrong. In all of those examples the sites were only modified to redirect – in one way or another – the browser to a malicious site where the exploitation attempt would take place. Both whitelisting and blacklisting would protect against these attacks (whitelisting – because it wouldn’t allow the redirect away from the site, and blacklisting – because hopefully it would include the target of the redirect). The situation where these solutions would have a problem is when all the malicious content is hosted on the modified website itself (no redirection at all), a practice which is not yet common. For a more technical discussion of the URL blacklisting topic see this blogpost.
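The whitelisting side of the argument above boils down to a simple check: before following a redirect, verify that its target host is on the allowed list. A minimal sketch (the `ALLOWED_HOSTS` set and `redirect_permitted` name are hypothetical; a real filter would also handle subdomains and IP literals):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the user is permitted to visit
ALLOWED_HOSTS = {"example.com", "cdn.example.com"}

def redirect_permitted(target_url: str) -> bool:
    """Allow a redirect only if its target host is on the allowlist."""
    host = urlparse(target_url).hostname or ""
    return host in ALLOWED_HOSTS
```

In the hacked-site examples from the post, the injected redirect points at an attacker-controlled host, which is not on the allowlist, so the check blocks the attack even though the original (whitelisted) site was compromised.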
In the end, both solutions are just layers in a defense-in-depth strategy. Sometimes whitelisting is more appropriate and sometimes blacklisting is. Claiming that one is superior to the other shows either a lack of understanding or an intent to mislead.