The disadvantages of cloud-based scanning


My fellow blogger Kurt has written a post about the benefits of scanning in the cloud. While I mostly agree with it, there are some disadvantages which also need mentioning:

  • The need to be always connected – how will such a system deal with the disconnected scenario? As much as we are used to being always connected, there are still cases when we need to operate disconnected: when we are on the road, when we are on an airplane, when our ISP has an outage, when we have moved to a new apartment or changed ISPs and are waiting to “get connected”, etc. Two possible solutions come to mind:
    • (a) refuse to start the computer – this is very dramatic and most probably unacceptable…
    • or (b) start, but only allow files which were previously scanned – this seems to be a good solution, but the line between executable and non-executable files is very blurry (for example, the Word DOC you are working on could contain malicious macro code, so it would be nice to scan it at every modification) and thus this mode of operation will either offer lesser protection (by not scanning “document” files which could still contain executable code) or still block the user’s ability to do her work (a rough sketch of such an allowlist check appears after this list).

    Of course, there is always the third possibility of “failing open”, but that is a worst-case scenario which hopefully nobody will choose.

  • Network latency – just what speed impact will the need to contact a server before executing each file have? The protocol will probably look something like the following:
    • send a hash of the whole file or relevant parts of the file to the server
    • wait for the server response
    • if the server determines that the file might be infected, the client will need to upload the entire file

    If you thought that your current desktop AV solution was slow, this might be slower by a factor of ten! (not in the average case, but in some extreme cases). A rough client-side sketch of this hash-then-upload exchange appears after this list.

  • “under-reporting of new samples is reduced/minimized” – two counter-points here: first, most AV vendors already collect “suspicious” files from client computers (of course, this happens with “consent” in the form of the EULA – but that’s another discussion). Second, the problem with under-reporting is not necessarily that companies don’t have access to the files (although that can be a problem sometimes), but that analyzing them is of very low priority for them (if you were a blacklisting company, what would you look at first? A file reported by 1,000 users or a file reported by a single user?)
  • “conventional malware q/a should be entirely thwarted” – while I agree that it will somewhat reduce the problem, the bad guys (and girls, I don’t want to discriminate here :-)) can still use proxies all over the world to circumvent statistical analysis.
  • scanner reverse engineering is almost completely nullified – true; however, a host of other possible vulnerabilities is created: is the communication protocol designed to protect against MITM attacks (mounted, for example, via DNS hijacking)? Is the infrastructure able to withstand a DDoS? Your protection can be disabled by (partially) cutting off network access (again, how does the product react to this?).
  • Also, Kurt mentions that there are sensitive materials which users might not be comfortable sending to the “cloud”. Two points here: first, vendors are already collecting files (the level of awareness about this practice is a different question). Second, there is almost no such thing as a “non-executable” file these days. Word documents, Photoshop files, HTML pages – they all need to be scanned. The scanning will probably be partial (i.e. sending hashes of key areas of the files at first), but there will also need to be a fallback mechanism to send the entire file. Will people be comfortable with the prospect that their software vendor could spy on all of their activities, circumventing protections put in place to prevent data leakage (file encryption, drive encryption, network traffic encryption)? Will this even be legal, given the laws and regulations different institutions and organizations are subject to?
  • Given the previous point, it is possible that some companies will offer “enterprise” products which keep the old model of delivering signature files to the client, to address the concerns enumerated above. They would essentially leverage the clients who do agree to participate in the fluffy (cloud :-)) version as a sensor network. However, at the same time, bad guys could obtain these “corporate” versions and use them for their QA purposes.
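
To illustrate option (b) from the connectivity point above, here is a minimal sketch of a hash-based allowlist check, assuming the client keeps a local cache of hashes that the cloud has already cleared. The names (previously_cleared, allow_while_offline) and the use of an in-memory set are purely illustrative – a real product would persist and protect this store.

    import hashlib

    # Hypothetical local cache of SHA-256 hashes the cloud service has
    # already marked as clean (illustrative only; not any vendor's design).
    previously_cleared = set()

    def sha256_of(path):
        """Hash the file contents so the cached verdict survives renames."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def allow_while_offline(path):
        """Option (b): while disconnected, only run files that were already
        scanned; anything unknown stays blocked until connectivity returns."""
        return sha256_of(path) in previously_cleared

Note that this is exactly where the “blurry line” problem bites: the cache can only vouch for the exact bytes it hashed, so a document that changes with every edit falls out of the allowlist immediately.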
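
And here is a minimal sketch of the hash-first exchange described under the latency point, assuming a hypothetical REST endpoint. The URL, field names, verdict format and function name are all invented for illustration; no real product or API is being described.

    import hashlib
    import requests  # any HTTP client would do; requests is used for brevity

    SCAN_URL = "https://av.example.com/scan"  # hypothetical cloud endpoint

    def cloud_scan(path):
        """The three steps from the latency bullet: send a hash, wait for
        the verdict, upload the whole file only if the server asks for it."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()

        # Steps 1-2: the cheap round trip -- only the hash goes over the wire.
        verdict = requests.post(SCAN_URL, json={"sha256": digest},
                                timeout=5).json()
        if verdict.get("known"):
            return verdict["clean"]

        # Step 3: the slow path -- the server has never seen this file, so
        # the client ships the entire sample and waits for a second verdict.
        with open(path, "rb") as f:
            result = requests.post(SCAN_URL + "/upload",
                                   files={"sample": f}, timeout=60).json()
        return result["clean"]

The second branch is where the extreme slowdowns would come from: a large, never-before-seen file has to cross the network in full before the user gets an answer.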

To finish on a lighter note: these are just speculations since no such product exists (yet). Hopefully the vendors will take all these elements into consideration.


One response to “The disadvantages of cloud-based scanning”

  1. following your points in order:
    1) when you’re disconnected your chances of encountering malware in the first place are vastly reduced… even so, (a) and (b) are not the only options… traditional scanning is still possible (no one says cloud-based scanning and client-side scanning have to be mutually exclusive) and there are other preventative technologies you should still be using in addition to scanning…
    2) i don’t anticipate network latency being the biggest slowdown… in fact i expect (and maybe i should update my post to include this) that the analysis being done in the cloud will be more involved than what is feasible to do client-side… i don’t know that the client must necessarily block while waiting for a response from the server (at least for on-demand scanning that shouldn’t be necessary) rather than continuing and polling for answers later…
    3) under the current operating scenario, analyzing the files submitted is not part of the scanning process – cloud-based scanning implicitly requires it…
    4) they may foil statistical analysis, but they can’t change the fact that they’re either giving vendors their samples or they’re not getting the full picture of the detectability of the sample… neither of these options is conducive to malware q/a…
    5) there are ways to avoid mitm attacks, and the DoS/DDoS scenario is identical to the scenario where the client is simply not connected….
    6) ideally i think consent has to be given in a different way than just accepting an EULA… i agree this is not a simple issue… this is probably the biggest problem people will have with it…
    7) i’m sure the traditional scanners will remain available and malware authors *could* use them for their malware q/a, but if the cloud-based scanning does more than the client-side scanner can feasibly do then the malware author will NOT be hardening his creations against cloud-based scanning…
