My fellow blogger Kurt has written a post about the benefits of scanning in the cloud. While I mostly agree with it, there are some disadvantages which also need mentioning:
- The need to be always connected – how will such a system deal with the disconnected scenario? As much as we are used to being always connected, there are still cases when we need to operate disconnected. For example: when we are on the road, when we are on an airplane, when our ISP has an outage, when we have moved to a new apartment or changed ISP and are waiting to “get connected”, etc. Two possible solutions come to mind:
- (a) refuse to start the computer – this is very dramatic and most probably unacceptable…
- or (b) start, but only allow files which were previously scanned – this seems like a good solution, but the line between executable and non-executable files is very blurry (for example, the Word DOC you are working on could contain malicious macro code, so it would be nice to scan it at every modification). Thus this mode of operation will either offer weaker protection (by not scanning “document” files which could still contain executable code) or still block the user's ability to do her work.
Of course, there is always the third possibility of “failing open”, but that is a worst-case scenario which hopefully nobody will choose.
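To make option (b) concrete, here is a minimal sketch of how such a client might decide whether a file may run. Everything here is hypothetical (the cache, the function names, the verdict strings are mine, not from any real product); the point is just that offline, only previously-scanned-clean files get through:

```python
import hashlib

# Hypothetical local verdict cache: sha256 digest -> "clean" / "infected".
# A real product would persist this and protect it against tampering.
verdict_cache = {}

def allow_execution(data: bytes, online: bool, query_cloud) -> bool:
    """Option (b): when disconnected, only files with a cached
    'clean' verdict are allowed to execute (fail closed)."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in verdict_cache:
        return verdict_cache[digest] == "clean"
    if not online:
        return False  # never-seen file and no connectivity: block it
    verdict = query_cloud(digest)  # assumed cloud lookup, returns a verdict
    verdict_cache[digest] = verdict
    return verdict == "clean"
```

Note that this is exactly where the blurriness bites: if the Word DOC is re-scanned at every modification, its hash changes every time you hit save, so the cache never helps you while offline.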
- Network latency – just what speed impact will the need to contact a server before executing each file have? The protocol will probably look something like the following:
- send a hash of the whole file or relevant parts of the file to the server
- wait for the server response
- if the server determines that the file might be infected, the client will need to upload the entire file
If you thought that your current desktop AV solution is slow, this might be slower by a factor of ten! (not in the average case, but in some extreme cases)
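The three steps above can be sketched as follows. This is purely speculative, since (as noted at the end of the post) no such product exists yet; the function names and verdict strings are placeholders for whatever the real wire protocol would be:

```python
import hashlib

def scan_via_cloud(file_bytes: bytes, lookup_hash, upload_file) -> str:
    """Hash-first protocol sketch:
    1. send a hash of the file to the server,
    2. wait for the server's response,
    3. upload the entire file only if the server is unsure."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    verdict = lookup_hash(digest)      # round trip #1: cheap hash lookup
    if verdict in ("clean", "infected"):
        return verdict
    # "suspicious"/"unknown": the expensive path - ship the whole file
    return upload_file(file_bytes)     # round trip #2: full upload + scan
```

The extreme slowdown lives entirely in that second branch: an unknown multi-megabyte file on a slow uplink means a full upload before the user can open it.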
- “under-reporting of new samples is reduced/minimized” – two counter-points here: first, most AV vendors already collect “suspicious” files from client computers (of course it is with “consent” in the form of the EULA – but that's another discussion). Second, the problem with under-reporting is not necessarily that companies don't have access to the files (although that can sometimes be a problem), but that it is of very low priority for them (if you were a blacklisting company, what would you look at first? A file which is reported by 1000 users or a file which is reported by one user?)
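The triage logic implied by that rhetorical question is trivial but worth spelling out: any sane blacklisting company sorts its incoming queue by prevalence, so a one-reporter sample sinks to the bottom regardless of how it arrived. (The data below is invented for illustration.)

```python
# Hypothetical incoming-sample queue, sorted by how many users reported each
# file - the single-reporter sample ends up last no matter what.
samples = [
    {"sha256": "bb...", "reports": 1},
    {"sha256": "aa...", "reports": 1000},
    {"sha256": "cc...", "reports": 37},
]
triage_order = sorted(samples, key=lambda s: s["reports"], reverse=True)
```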
- “conventional malware q/a should be entirely thwarted” – while I agree that it will somewhat reduce the problem, the bad guys (and girls, I don’t want to discriminate here :-)) can still use proxies all over the world to circumvent statistical analysis.
- scanner reverse engineering is almost completely nullified – true, however a host of other possible vulnerabilities is created: is the communication protocol designed to protect against MITM attacks (mounted, for example, via DNS hijacking)? Is the infrastructure able to withstand a DDoS? Your protection can be disabled by (partially) cutting off network access (again, how does the product react to this?).
- Also, Kurt mentions that there are sensitive materials which users might not be comfortable with sending to the “cloud”. Two points here: vendors are already collecting files (the level of awareness about this practice is a different question). Secondly, there is almost no such thing as a “non-executable” file these days. Word documents, Photoshop files, HTML pages – they all need to be scanned. The scanning will probably be partial (i.e., sending hashes of the key areas of the file at first), but it will also need a fallback mechanism to send the entire file. Will people be comfortable with the prospect that their software vendor could spy on all of their activities, circumventing protections put in place to prevent data leakage (file encryption, drive encryption, network traffic encryption)? Will this even be legal given the laws and regulations different institutions and organizations are subject to?
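For illustration, partial scanning of “key areas” might look something like this. Which regions actually matter is format-specific (PE headers, macro storage in OLE files, etc.); the head/middle/tail split below is my own simplification, not any vendor's scheme:

```python
import hashlib

def partial_hashes(data: bytes, chunk: int = 4096) -> dict:
    """Hash a few regions of the file instead of the whole thing.
    The chosen regions (head, middle, tail) are a guess for illustration."""
    mid = len(data) // 2
    regions = {
        "head": data[:chunk],
        "middle": data[max(0, mid - chunk // 2):mid + chunk // 2],
        "tail": data[-chunk:],
    }
    return {name: hashlib.sha256(part).hexdigest()
            for name, part in regions.items()}
```

The privacy concern is precisely the fallback path: once the server can say “hashes inconclusive, send me the whole file”, the vendor has, in principle, a channel to any document on the machine.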
- Given the previous point, it is possible that some companies will offer “enterprise” products which keep the old model of “delivering signature files to the client” to ameliorate the concerns enumerated above. They would essentially leverage the clients who do agree to participate in the fluffy (cloud :-)) version as a sensor network. However, at the same time, bad guys could obtain these “corporate” versions and use them for their QA purposes.
To finish on a lighter note: these are just speculations since no such product exists (yet). Hopefully the vendors will take all these elements into consideration.