Grey Panthers Savannah – https://grey-panther.net

Security vendor's "top-threat" list: proof of their less-than-perfect performance?
(Mon, 11 Jan 2010 – https://grey-panther.net/2010/01/security-vendors-top-threat-list-proof-for-their-less-than-perfect-performance.html)

Here is something I've been thinking about lately: most (all?) security vendors periodically publish their "top threats". Those lists are compiled by centralizing numbers reported by their clients. While it is safe to assume that the majority of the enumerated threats are blocked straight away – before they can execute a single piece of code – a certain percentage represents after-the-fact detection (i.e. the machine gets infected, a signature comes out later, at which point – if you're lucky – the security program will block the malware).

Now I have no idea about the relative size of this subset (or if the companies have it, or how they can collect it for that matter), but I find the idea that marketing material put “out there” can backfire amusing :-).

Picture taken from tigger1fic’s photostream with permission.

A missed opportunity
(Fri, 08 Jan 2010 – https://grey-panther.net/2010/01/a-missed-opportunity.html)

The theory of capitalism (and I'm greatly oversimplifying here, I know) says that, even if we all follow just our own self-interest, a global "good" will somehow emerge. This is what F-Secure is doing in their blogpost, where they write about a specific piece of ransomware which – if you get infected with it – encrypts your data and demands a certain amount of money to decrypt it.

The trouble is that their only recommendation is to "remind everyone to back up their important files regularly" (coincidentally – sarcasm, sarcasm – they have an online backup component in their suite). They could have at least mentioned that Sunbelt provides a tool which may decrypt the files (I say may because I didn't actually try the tool). This is even more inexplicable given that they got the samples from Sunbelt ("Many thanks to Adam Thomas from Sunbelt for providing samples of the dropper").

Shame on you, F-Secure, for putting a (possible) financial interest before the interests of your users!

So I don’t know about you, but instead of claiming that pure self-interest is the solution, I will go with:

Everything in moderation – including moderation.

Picture taken from d3stiny_sm4sher’s photostream with permission.

PS. Who wants to bet that – if these claims are brought to F-Secure's attention – they will claim that they didn't know about the removal tool?

Update: I'm not singling out F-Secure here; Zarestel Ferrer from CA just made a very similar blogpost: here are the facts (he did include some more technical detail, which is nice for us security geeks), you should have used a security product to keep it out:

CA advises to keep your security products signature updated to prevent this kind of ransomware.

The plus side: he doesn't necessarily pimp his company's product. The minus: he doesn't link to the Sunbelt decryption tool either. On the plus side, there is a comment facility on their website, which visitors could use to mention the tool and thus help out people who lost data; on the negative side: it doesn't work, not even with IE!

Congratulations to AV-Comparatives!
(Fri, 25 Dec 2009 – https://grey-panther.net/2009/12/congratulation-to-av-comparatives.html)

AV-Comparatives is an independent, well-known and well-respected testing organization in the AV/anti-malware field. They recently published two reports and one meta-report:

Go read them if you have questions like "which product is the best for me?". Thank you, Andreas, for providing a great and impartial service.

PS. One surprising thing for me was the high detection rates in the dynamic test – upward of 90%. This indicates either that I'm too much of a cynic or that their crawler system still has room to improve – I would expect AV products to be around 60-70% effective against new threats.

What VirusTotal is not
(Mon, 09 Nov 2009 – https://grey-panther.net/2009/11/what-virustotal-is-not.html)

Since its inception, VirusTotal has been used by people to compare different AV products (just in case you don't know: VirusTotal is a great free service which scans an uploaded file with – currently – 40 AV engines and reports back the results). The AV industry has objected to this practice for a couple of reasons, some more valid than others IMHO.

Today, however, I want to talk about the practice of saying "(only) X% of AVs detect this" and then giving a VirusTotal link. Two recent examples: here and here (to be clear: I don't have anything against these particular blogs / companies / authors – there are many more examples of this practice; these are just two recent ones which came to my attention).

Why is this percentage meaningless, and why does it serve only to perpetuate FUD?

  • As a first argument, I could mention all the discussion about AV engine configuration (this is frequently raised in discussions regarding detection, so I won't dissect it further). A very thoroughly discussed argument is also that VT results represent a "point in time" rather than "now" (i.e. detections might have changed since the scan).
  • The second argument would be: VirusTotal goes for quantity, not necessarily quality. I.e. the fact that a given engine is included in the list of engines used by VirusTotal is not a statement about the engine's resource use, detection rate or false-positive rate. Again, this doesn't mean that the engines used are of low quality; it just means that VirusTotal isn't in the AV engine testing business. It doesn't say anything about the market share of the product either.
  • This means that the statement "X% of the engines on VT detect a given file" isn't equivalent to the statement "X% of users running AV are protected" or "AV software is X% effective". However, these are the thoughts which appear (by association) in a reader's mind when seeing the initial statement.
  • Furthermore, some engines appear in multiple products (for example, GData integrates BitDefender – amongst others), while other engines appear "split" (for example, the McAfee desktop product contains both the "classical" and the "cloud" engine, but on VT they appear as two separate entries, "McAfee" and "McAfee+Artemis" respectively). If these relations are not taken into account (and I'm almost sure that they aren't – given that they are not always publicly documented and can change over time), the results come out skewed.
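To make the skew concrete, here is a hypothetical Python sketch of what "merging" related engines before computing a percentage would look like. The ENGINE_GROUPS mapping is illustrative only, built from the two examples above – the real relationships are not fully public and change over time:

```python
# Illustrative only: collapse related VT engines into "groups" before
# computing a detection percentage. The mapping below is an assumption
# based on the two publicly known examples discussed in the text.
ENGINE_GROUPS = {
    "GData": "BitDefender",        # GData integrates BitDefender
    "McAfee+Artemis": "McAfee",    # one desktop product, two VT entries
}

def grouped_detection_rate(vt_results):
    """vt_results: dict of engine name -> bool (detected or not).

    A group counts as detecting if any of its member engines detects.
    """
    groups = {}
    for engine, detected in vt_results.items():
        key = ENGINE_GROUPS.get(engine, engine)
        groups[key] = groups.get(key, False) or detected
    return 100 * sum(groups.values()) / len(groups)

# Made-up scan result for five engine entries:
raw = {"GData": True, "BitDefender": True, "McAfee": False,
       "McAfee+Artemis": True, "Avira": False}
# Naive percentage: 3/5 = 60%; after grouping: 2/3 ~ 66.7%
print(grouped_detection_rate(raw))
```

Even this toy example shifts the headline percentage noticeably, which is exactly the point: without knowing the engine relationships, the number is not meaningful.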

Conclusion: please never, ever take the VT result page and copy-paste the percentage from it! Do provide permalinks to the result pages, and you can even make some sensible general statements (like "most of the major AV vendors detect this threat" or "this threat is not well detected by the smaller, Asian AV companies, but given its reliance on the English language for social engineering, it might not be such a big threat"). However, giving a percentage reeks of FUD and smells of negative propaganda (do we really want to be at each other's throats, analyzing which vendor doesn't detect what? – there would be no winners in such a discussion). Let's concentrate on giving sensible security advice to users instead.

Picture taken from Peter Kaminski’s photostream with permission.

The importance of false positives
(Fri, 30 Oct 2009 – https://grey-panther.net/2009/10/the-importance-of-false-positives.html)

An interesting paper was brought to my attention recently by this blog post: The Base Rate Fallacy and its implications for the difficulty of Intrusion Detection. The central question of the paper is: if we have a flow of N packets per day and our network IDS has a false-positive rate of X, what is the probability that we are experiencing a real attack, given that the IDS says we are? The paper uses Bayes' theorem (of which you can find a nice explanation here) to put some numbers in and gets horrifying results (many false alerts), concluding that such a rate of FPs seriously undermines the credibility of the system.
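The paper's Bayesian arithmetic is easy to reproduce. A minimal sketch with made-up but plausible numbers (one million packets a day, ten of them from a real attack, a 99% detection rate and a 0.1% false-positive rate – these figures are my assumptions, not the paper's):

```python
# The base-rate fallacy in numbers: even a very accurate IDS produces
# mostly false alerts when real attacks are rare.
packets_per_day = 1_000_000
attack_packets = 10                              # assumed base rate

p_attack = attack_packets / packets_per_day      # P(attack) = 1e-5
p_alert_given_attack = 0.99                      # detection rate (assumed)
p_alert_given_benign = 0.001                     # FP rate (assumed)

# Bayes' theorem: P(attack | alert)
p_alert = (p_alert_given_attack * p_attack
           + p_alert_given_benign * (1 - p_attack))
p_attack_given_alert = p_alert_given_attack * p_attack / p_alert

print(f"P(attack | alert) = {p_attack_given_alert:.4f}")  # ~ 0.0098
```

So with these numbers, roughly 99% of the alerts are false – which is the "horrifying result" the paper is talking about.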

The issue of false positives is also a concern in the anti-malware industry. And while I rant quite a bit about the AV industry, you have to give this one to them: the number of false positives is really low. For example, in the AV-Comparatives test 20 false positives is considered many, even though the collection is over 1 500 000 samples (so the acceptable FP rate is below 0.0015%!). Update: David Harley was kind enough to correct me, because I was comparing apples (the number of malware samples) to oranges (the number of clean files falsely detected). So here is an updated calculation: the Bit9 Global File Registry has more than 6 billion files indexed (they index clean files). Assume some percentage of that is used by AV-Comparatives for FP testing (as David correctly pointed out, the cleanset size of AV-Comparatives is not public information – although I would be surprised if it was less than 1 TB). Some back-of-the-napkin calculations: let's say that AV-Comparatives has only one hundredth of one percent of the 6 billion files, which would result in 600 000 files. Even so, 20 files out of 600 000 is just 0.003%.
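For reference, the back-of-the-napkin numbers above can be checked in a couple of lines (the 600 000-file cleanset is the assumption made in the text, not a published figure):

```python
# Reproducing the FP-rate figures from the text.
fp_count = 20

# Against the malware-sample count (the apples-to-oranges version):
naive_rate = 100 * fp_count / 1_500_000
print(f"{naive_rate:.4f}%")        # ~ 0.0013%, below 0.0015%

# Against an assumed cleanset of 600 000 files:
cleanset_size = 600_000
fp_rate = 100 * fp_count / cleanset_size
print(f"{fp_rate:.4f}%")           # ~ 0.0033%, i.e. roughly 0.003%
```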

Now, there were (and will be) a couple of big f***-ups by different companies (like detecting files from Windows), but still, consumers have a very good reason to trust them. Compare this with more "chatty" solutions like software firewalls or – why not – UAC. Any good security solution needs an FP rate at least this low and much better detection. AV companies with low FP rates – we salute you!

PS. There might be an argument to be made that different false positives should be weighted differently (for example, depending on the popularity of the file) to emphasize the big problems (when out-of-control heuristics start detecting Windows components, for example). That is a valid argument which can be analyzed, but the fact remains that the FP rates of AV solutions are very low!

Picture taken from wadem’s photostream with permission.

SMOG button removed!
(Thu, 01 Oct 2009 – https://grey-panther.net/2009/10/smog-button-removed.html)

Almost a year ago I added a SMOG button to each blogpost, which (in a more or less serious manner) evaluated the "reading level" needed to understand the post. However, today the site used for this service came up with a warning from Google saying that it might be malicious. I've looked into it, and indeed, it contains an IFRAME pointing to a malicious site.

So, to protect visitors, I've taken down the script until the issue is resolved. Hopefully that will happen quickly.
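For the curious, the kind of check I did can be sketched in a few lines of Python. This only catches the trivial, unobfuscated case (injected IFRAMEs are often hidden behind obfuscated JavaScript), and the domain names below are made up for illustration:

```python
# Sketch: flag IFRAMEs on a page whose src points to a third-party domain.
from html.parser import HTMLParser
from urllib.parse import urlparse

class IframeFinder(HTMLParser):
    def __init__(self, own_domain):
        super().__init__()
        self.own_domain = own_domain
        self.suspicious = []  # collected third-party IFRAME sources

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        src = dict(attrs).get("src") or ""
        host = urlparse(src).netloc
        if host and host != self.own_domain:
            self.suspicious.append(src)

finder = IframeFinder("example.com")
finder.feed('<html><iframe src="http://evil.example.net/x"></iframe></html>')
print(finder.suspicious)  # ['http://evil.example.net/x']
```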

Picture taken from riNux’s photostream with permission.

The myth of the cognitive quantum jumps
(Fri, 21 Aug 2009 – https://grey-panther.net/2009/08/the-myth-of-the-cognitive-quantum-jumps.html)

Update: see this presentation given by Scott Berkun at Google, which explains my points much more eloquently.

Very often the media (and I'm using the word "media" here in its most comprehensive sense – including things like blogs, Slashdot, etc.) tells us the story of some uber-hyper-mega-cool, new-unseen-until-now method of performing X. This leads many people to believe that progress happens in "quantum leaps" – i.e. that there are no intermediate steps between point A (where we are now) and point B (where we can get using this new discovery). As a side effect, it also makes people think that all they have to do is come up with a "big idea".

This is utter nonsense and I would like to ask everybody to stop propagating this myth! (Of course, I know that it is wishful thinking on my part to believe that this blogpost will have a large impact on humanity, but hey, at least I've vented my frustration, and if just one person is convinced, I'm happy.)

There are at least two factors which mislead people into this fallacy. First, the reader's lack of knowledge in a particular field: there is no chance for the reader to evaluate which prior works the current one builds upon, unless this is explicitly mentioned by the author. And here is the second problem: our tendency to over-emphasize (intentionally or unintentionally) our own contribution.

Also, there is a lot of both empirical and scientific evidence that progress is not as simple as coming up with one great idea. The quote from Thomas Edison ("Genius is 1 percent inspiration and 99 percent perspiration") illustrates this. A more research-based claim comes from Malcolm Gladwell, who says that you need about 10 000 hours (about ten years) of deliberate practice to become great in a given field.

One example which comes to mind from the field of malware research is the case of the Storm worm. When it "appeared", there was a big media frenzy around it, fueled mainly by the AV companies. What nobody mentioned (because it would have broken the myth of "new, ultra-dangerous malware materializing from nowhere") is that "Storm" is in fact the "normal" evolution of a much older malware family, detected by many as "Tibs". If one were to place the samples on a timeline and study them in the order they appeared, one could clearly see how the different methods (like using a simple encryption layer over UPX, using different API calls to thwart emulators, using MMX/SSE instructions, using the return values of API calls in the decoding process, etc.) appeared and evolved. In fact, "Tibs" and "Storm" are very clearly the work of the same group of people, and not something new, as the reports would have you believe.

No quantum leaps (except in theoretical physics :-))!

Picture taken from renrut’s photostream with permission.

Creating a closed standard
(Tue, 18 Aug 2009 – https://grey-panther.net/2009/08/creating-a-closed-standard.html)

After reading on Graham Cluley’s blog that the IEEE came up with a new standard [PDF] for malware interchange, I had to check it out immediately. As always, being a cranky old man, I found several problems with the proposed standard:

  • Even though the presentation has a section about "Re-Inventing the Wheel", it fails to mention that such sample exchange has been going on between participants of the AV industry for at least a decade at this point.
  • It fails to address the issue which traditionally concerned the people the most: who should the samples be shared with?
  • The specification is tied strictly to proprietary products – RAR and PGP – where at least comparable (if not better) open products exist, the adoption of which would ensure that these files can be easily processed on any platform. While both are excellent products, their selection also means that there is a minimal license fee for anybody interested in producing such archives. Also, certain encryption schemes of PGP are not implemented in GnuPG because of patent concerns, but the document doesn't mention this. A much better option would have been to go with 7-Zip and GnuPG, for example (explicitly stating that patent-encumbered encryption algorithms won't be used).
  • The strictly defined attributes (like md5, sha1, sha256) can easily be recalculated at the receiving end. You might argue that they provide an integrity check; however, the presentation explicitly states that the archive provides that function – "RAR-archived (for integrity checking)".
  • Some of the definitions lack detail – for example, they introduce a "classification" tag, but it doesn't seem to include timestamp / engine version / signature version information. Without these, in today's dynamic world, the information is not very useful.
  • Many of the fields are “free-form”, meaning that no complete automatic parsing can be done.
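My point about the hash attributes can be illustrated in a few lines: given the sample bytes, the receiving end can recompute all of them anyway. This is a sketch of that observation, not part of the proposed standard:

```python
# Recompute the fixed hash attributes (md5, sha1, sha256) directly from
# the sample bytes - so shipping them as mandatory metadata is redundant.
import hashlib

def sample_hashes(data: bytes) -> dict:
    return {algo: hashlib.new(algo, data).hexdigest()
            for algo in ("md5", "sha1", "sha256")}

# A placeholder byte string stands in for an actual sample here.
print(sample_hashes(b"not-really-malware"))
```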

The conclusion? This format doesn't bring anything new to the table and is (as it stands) just a poorly thought-out waste of time.

Those who know, do it
(Mon, 17 Aug 2009 – https://grey-panther.net/2009/08/those-who-know-do-it.html)

There is an old joke, which I might have referenced in the past (my memory is almost non-existent :-P), which goes something like this:

Those who know how to do X, do it. Those who don’t, teach it. Those who can’t even teach it, supervise it.

I assume that journalists come in somewhere in the second or third category. Take the following article from Ars Technica, for example: Symantec, 11 others, fail Virus Bulletin's August 2009 test (Updated). Quote (emphasis added):

Microsoft’s success with its Forefront product is promising not only for business users, but for consumers as well, given that the upcoming Microsoft Security Essentials product is closely tied to it.

Now we go over to the Microsoft Forefront site; the first whitepaper in the whitepaper section (The Multiple Scan Engine Advantage and Best Practices for Optimal Security and Performance) contains the following (again, emphasis added):

Forefront Security for Exchange Server and Forefront Security for SharePoint each ship with multiple scan engines, and customers can use up to five scanning engines simultaneously.

The performance of Forefront Security in tests (or in the real world, for that matter) has almost zero relevance to the performance of Morro (aka Microsoft Security Essentials). Coincidentally, I think that both are good products, but not because they share code/signatures.

Another example: Computer viruses slow African expansion (found via the Sunbelt Blog). To be fair, in this case the reporter only repeats the insanity that "Tariq Khokhar, the chief development officer of Aptivate, a non-governmental organisation that focuses on IT" says; but had he been better informed, he could have asked some pointed questions to debunk some of the claims. For example:

Without special pricing, poor countries are forced to rely on free antivirus products, such as AVG. "Writing antivirus software is a fairly brain-intensive task, and AVG just don’t have the resources," Khokhar says. "It’s not to say something’s not better than nothing, but ultimately, the viruses that are going to cause real damage are going to get through."

First, in the long run (meaning a year in this example) AVG is just as effective as any other product (as would be MS Security Essentials, despite what some say). Second, you just can't rely on one "magic pixie-dust solution" (e.g. AV) to solve the problem. You need a layered approach – for example, the ISP blocking known malicious sites. Third, you need user education. He (I assume it is a he) works for an NGO, so go create some education programs for new computer users. Also, there are a lot of dangers out there (like 419 scams) which have nothing to do with malware, but can be just as devastating as (or even more devastating than) a malware attack. Please, people, go inform yourselves!

Picture taken from fsse8info’s photostream with permission.

Sunbelt Software VIPRE Antivirus review
(Fri, 31 Jul 2009 – https://grey-panther.net/2009/07/sunbelt-software-vipre-antivirus-review.html)

Full disclosure: for several years I worked in the AV industry for a company which can be considered a competitor to Sunbelt Software. However, I no longer do.

Sunbelt Software started out as an anti-spyware company; however, a few years ago they re-oriented themselves towards the more general anti-malware market, which is a really nice move (in my opinion) because "anti-spyware" products have a vague definition. Since then they have launched their VIPRE Antivirus product, which I briefly tested.

What I liked:

  • The cool logo 🙂
  • It works perfectly with Windows 7, even though the site mentions only Windows XP / Vista
  • The installation is very quick, there aren’t many options to tweak which could confuse less tech-savvy users
  • Both the EICAR test file and a malware sample were correctly recognized by the on-access scanner (of course I can’t say what the general detection rate of the product is, since I don’t currently have access to a larger malware collection)
  • On a full scan cookies were detected as an issue (a pet peeve of mine – I consider that such detections are not really relevant and only frighten users), but they are classified (correctly!) as low risk and there is a very objective, factual and “calm” description about the issue when you ask for more details

What I didn’t like:

  • To download the setup, you have to give your email address and the download link is emailed to you. The problem (besides the obvious privacy concern) is that the email can take a while to get to your inbox (it might even get lost or land in the junk folder). To work around this, download it from Softpedia (or from download.com) and install it without a serial (it is still good for 15 days). Later, when the link arrives, you can "activate" the product with the serial.
  • The setup workflow is not fully consistent. While the setup itself was quick and painless, after the reboot I had some difficulties: clicking on the update icon didn't do anything; I had to right-click and select update explicitly. Then I started the main interface, which wanted to update the signatures again (???) and downloaded / applied them again, even though no newer signatures were available…
  • In the "process manager" component, all processes (even Microsoft processes) were categorized as "unknown". This could frighten less experienced users. At least executables with valid digital signatures should be categorized as "trusted"…
  • When showing the details of an alert, it first displays the details of the "parent" process (i.e. process X tried to start process Y), which can be a little confusing if X is trusted (for example Windows Explorer, Internet Explorer, etc.), because the first phrase that catches your eye is "known clean", which raises the question "so why is it detected?". Of course, closer examination of the text makes the context clear, but first impressions are important.
  • Multiple alerts can appear for the same file. Fortunately there is a “don’t show this to me again” checkbox, which works well.

So, the final question: would I recommend buying it? Unfortunately (and I say unfortunately, because they seem like a good company), no. For home users I would still recommend AVG (since it is free), while for businesses I would wait until a test from AV-Test / AV-Comparatives / other reputable testing organizations comes out, to be assured that it has a detection rate comparable to the other vendors'.

Full disclosure: this is a paid review from ReviewMe. Under the terms of the agreement, I was not obligated to skew my viewpoint in any way (i.e. to post only positive facts).
