Disclaimer: the viewpoints and ideas expressed here are entirely my own and are by no means representative of any institution I am affiliated with. Also, I do not wish to offend anybody: I realize that the amount of work it takes to create some of these programs, and the wealth of knowledge one must possess to do so, is enormous (most of the features implemented are possible only using kernel-mode drivers).
To start: what are HIPSs (Host Intrusion Prevention Systems)? The term is a little nebulous (you can read the Wikipedia article to see the different shapes and forms it takes), but based on my experience with such products, it is a software application which can apply security restrictions on a per-application basis (rather than on a per-user basis). I will be talking about Windows-based HIPSs in this post.
Because the term itself is hard to define, let's take a concrete example: you have an IM client running on your system. You can define the security boundary of this application with regard to the system resources (in Windows these being: the file system, the registry and the network). It would look something like this:
- It needs to have read-only access to the following files: the executable, extensions / plugins stored in DLLs, …
- It needs to have read-write access to the following files: the conversation log, …
- It needs to have read-only access to the following registry keys: …
- It needs to have read-write access to the following registry keys: …
- It needs to communicate over the Internet with the following hosts on the following TCP ports: … (and by the way, this doesn't necessarily mean all the hosts one chats with, because most of the big IM networks route messages through a central server to avoid having to deal with NATs).
- It has to listen on the following TCP port for incoming connections from a given set of hosts (or from any host): …
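To make the idea concrete, such a boundary could be written down as a policy. Here is a minimal sketch as a plain Python data structure; the field names, paths and ports are all invented for illustration and do not correspond to any real HIPS product's configuration format:

```python
# A hypothetical per-application policy for the IM client example.
# Every path, registry key and port below is made up for illustration.
im_client_policy = {
    "executable": r"C:\Program Files\IMClient\im.exe",
    "files": {
        "read_only": [r"C:\Program Files\IMClient\im.exe",
                      r"C:\Program Files\IMClient\plugins\*.dll"],
        "read_write": [r"C:\Users\*\AppData\IMClient\logs\*"],
    },
    "registry": {
        "read_only": [r"HKLM\Software\IMClient"],
        "read_write": [r"HKCU\Software\IMClient"],
    },
    "network": {
        # Outbound: the central chat server, not every chat partner.
        "outbound": [{"host": "chat.example.com", "tcp_ports": [5222]}],
        # Inbound: listen on one port, from any host.
        "inbound": [{"hosts": "any", "tcp_port": 5223}],
    },
}
```

Even for this toy example, note how much the user must know about the program's internals to fill in each field correctly.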
Once you have defined the security boundary, you can enter it into the configuration of your HIPS and let it enforce it. Here are the reasons I think these systems are unusable by 99.9% of the Windows user base:
It is too complex – how many users are able to take a piece of software and define its behavior at the level of detail required by a HIPS? Probably less than 0.01%. There are several approaches trying to get around this, but none of them is very usable:
- Some HIPSs have a so-called learning mode, which is activated when the software is first installed or when the policy for a new executable must be defined. The problems with this are:
- it assumes that the system is in a clean state when installed
- it won’t learn non-frequent operations (because they are not executed during the learning period)
- it may learn malicious operations and include them in the policy
- if there is no way to transfer this policy from computer to computer, in a large scale environment every instance of the software must be trained separately
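The first three problems above can be illustrated with a toy model of a learning mode (the class and its logic are my own sketch, not any vendor's implementation):

```python
class LearningModeMonitor:
    """Toy model of a HIPS learning mode: every operation observed
    during the learning window becomes part of the policy -- including
    any malicious ones -- while anything the program simply didn't do
    while being watched will later be blocked."""

    def __init__(self):
        self.learning = True
        self.allowed = set()

    def observe(self, operation):
        if self.learning:
            # Learning mode: whatever happens is assumed to be normal.
            self.allowed.add(operation)
            return True
        # Enforcement mode: operations not seen during learning are
        # blocked, even legitimate but infrequent ones.
        return operation in self.allowed

monitor = LearningModeMonitor()
monitor.observe(("write", r"C:\logs\chat.log"))     # learned
monitor.observe(("write", r"C:\evil\dropper.exe"))  # learned too!
monitor.learning = False
```

After the learning window closes, a rare but legitimate operation (say, a yearly log rotation) is blocked, while the malicious write learned above stays allowed forever.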
- Other products have a predefined database (possibly upgradeable from the Internet) which contains a list of known executables and the policies which need to be applied to them. This database may be put together either by the creator of the HIPS or by the community around the product. Setting aside the issue of trust (if it is a community-based effort), it isn't very likely that a comprehensive database of policies can keep up with the rate at which new software comes out. Another issue is that users might need to delay upgrading their software because no policy exists yet for the new version, even though this would expose them to vulnerabilities (if the update includes security fixes) or hinder their ability to operate (if the new version is needed to open some files). It would become a choking point, much like patching is currently.
- Yet another approach is to classify the actions taken as good (for example, opening a text file from the desktop), dubious (writing a new DLL to the system folder) or bad (overwriting an existing executable in the system folder). The problem with this approach is that there is no inherent "goodness value" associated with these actions; it all depends on the context. By exploiting the 80/20 rule, the creators of the system may be right 80% of the time overall, but their accuracy may drop to zero in some cases. For example, setup programs usually write files to the system directory, so an alert asking the user whether this action should be allowed is wrong 99% of the time. This also means that the user will be trained (subconsciously) to click the allow button whenever s/he wants to install something, which leads to zero protection against adware and spyware bundled with setup programs (or even trojans which disguise themselves as setups for useful programs).
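The context problem can be shown with a toy, context-free classifier. The rules below are invented for illustration; real products use far larger rule sets, but share the same blindness to who is performing the action:

```python
def classify_action(action, target):
    """Toy context-free action classifier, sketching the good /
    dubious / bad scheme described above. Rules are illustrative."""
    if action == "open" and target.lower().endswith(".txt"):
        return "good"
    if action == "write_new" and "system32" in target.lower():
        return "dubious"
    if action == "overwrite" and target.lower().endswith(".exe"):
        return "bad"
    return "dubious"

# The verdict is identical whether a trusted installer or a trojan
# performs the write -- the classifier cannot tell them apart.
classify_action("write_new", r"C:\Windows\System32\newlib.dll")  # 'dubious'
```

Since the "dubious" verdict fires for every installer, the resulting prompts train the user to click "allow" reflexively.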
It is useless for large-scale systems (enterprises) – as mentioned earlier, large-scale zero-administration deployment of such a system is impossible; at best it can serve as a kind of parental control by allowing only the execution of programs which are permitted by company policy. This can, however, be done by properly using the configuration features included in Windows (which are integrated with AD – meaning they are easily deployed to a large number of systems).
It is a hack / patch – again, I don't mean to offend anybody, but the application-centric model of access control (meaning that rights and privileges are associated with a given application) offered by HIPSs is almost orthogonal to the user-based security model included in Windows (where rights and privileges are associated with user accounts and groups). Without discussing the merits of one approach over the other, just think about this: many people spent years of their lives thinking about the security system included in Windows (also known as discretionary access control). It is also a standard described in the Orange Book. Now let's ask the question: is it possible that a similar amount and quality of work (yes, Windows has some quality code in it if you know how to configure it right) can be duplicated in 1-2 years by teams much smaller than the force behind MS and the Orange Book? If your answer is no (as is mine), you should be really worried about a technology which is still in its infancy replacing your mature security system.
Another piece of software – installing a HIPS means adding another piece of software to your system. Another piece of software means another set of bugs. Especially in something as critical as a security system, you should rely on something that has proven itself rather than on some new technology.
What could Microsoft do? – Create user awareness about the security features, add a GUI which makes their configuration simpler and, finally, add a way to alert the user when a program tries to do something it's not allowed to do (this is the biggest problem currently: if a program tries to do something it is not allowed to – because it was written with the assumption that it will run under an administrative account, for example – the OS won't give you any help in figuring out what went wrong. You have to rely on the program to give a meaningful error message – which most of the time isn't the case, and many times it doesn't give any error message at all – or use additional tools to figure out what went wrong). In my opinion, as Microsoft integrates its security with the UI better, HIPS systems will become irrelevant. One such example is the LUA implemented in Vista. One area where MS was lagging behind was the firewall, where one could restrict based on the application or on the protocol/port combination, but not both, and it wouldn't filter outgoing traffic. From what I understand, this was resolved in Vista (it was an implementation restriction, not a technical one). Probably the more limited version for WinXP SP2 was chosen in order to deliver on time and avoid conflict with personal firewall vendors.
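The distinction between the pre-Vista firewall and one that combines both criteria can be sketched as a toy rule matcher (a model of the concept only, not the actual Windows Firewall API; field names are invented):

```python
def match_rule(rule, connection):
    """Toy firewall rule that matches on the application AND the
    protocol/port combination together -- the combination the XP SP2
    firewall could not express, since it matched on one or the other.
    All field names here are invented for illustration."""
    return (rule["app"] == connection["app"]
            and rule["protocol"] == connection["protocol"]
            and connection["port"] in rule["ports"])

# Allow only the IM client, only over TCP, only to its chat port.
rule = {"app": r"C:\Program Files\IMClient\im.exe",
        "protocol": "tcp", "ports": {5222}}

conn_ok = {"app": rule["app"], "protocol": "tcp", "port": 5222}
conn_bad = {"app": rule["app"], "protocol": "tcp", "port": 80}
```

With a one-criterion firewall you must choose between "this app may talk anywhere" and "anyone may use this port"; combining both, as above, is what closes the gap.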