The hidden capabilities of Windows Firewall
Thu, 30 Nov 2006
https://grey-panther.net/2006/11/the-hidden-capabilities-of-windows-firewall.html

Windows, beginning with XP SP2, contains a decent firewall. It doesn't have leak prevention or outbound connection filtering. However, it does have: inbound connection filtering, ICMP filtering, a default-deny policy, both a GUI and a command line interface, configuration through Group Policy and, something I discovered only recently, the ability to restrict a given rule to multiple IP addresses / netmasks (up to around 10). It also comes preinstalled with the OS.

What this means is that you can open up a given service only to the people who need to access it. This provides an additional security layer. Why make it possible for the entire world to brute-force your SSH account? Drop them at the firewall! This doesn't replace a good password, but it is an additional layer of security.

Some more random firewall-related advice:

Port-based rules are preferable to application-based rules: with a port rule you enable a single port with one protocol (TCP or UDP) for a given set of IP addresses, while with an application-based rule every port that the given application opens will be enabled. This is important for applications which open multiple ports (for example an FTP server which also has a web administration interface, or Apache, which opens one port for HTTP and one for HTTPS).
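To get a feeling for how much an application-based rule really allows, you can enumerate the listening sockets of each process. Below is a minimal sketch using the third-party psutil package (an assumption on my part: it is not part of the standard library, and resolving other users' processes may require admin rights):

# List listening TCP sockets grouped by process, to show how many ports
# an application-based firewall rule would actually open up.
from collections import defaultdict

import psutil  # third-party package: pip install psutil

listeners = defaultdict(list)
for conn in psutil.net_connections(kind="tcp"):
    if conn.status == psutil.CONN_LISTEN and conn.pid:
        listeners[conn.pid].append((conn.laddr.ip, conn.laddr.port))

for pid, addrs in sorted(listeners.items()):
    try:
        name = psutil.Process(pid).name()
    except psutil.NoSuchProcess:
        name = "?"
    # An application-based rule for this program would enable ALL of these.
    print(f"{name} (pid {pid}) is listening on: {addrs}")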

Configure the binding address / listening address for servers correctly. For example:

  • If your computer is multi-homed (it has multiple network interfaces), by default most servers will listen on all of the interfaces. If you need them to listen on only one interface, specify it as the binding address. Also limit access in the firewall rule to that particular subnet (remember, multiple layers of security!).
  • Usually you can specify the listening address in the form <ip address>:<port>. If you use 0.0.0.0 as the IP address, the server will listen on all interfaces (not recommended!). If you specify 127.0.0.1, it will listen only on the loopback interface, a special virtual interface which can be connected to only from the local computer. This is the recommended configuration for administration interfaces, database servers and any other service you only want to connect to locally (or through an SSH tunnel, which from the program's point of view is a local connection). The sketch below shows the difference.
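A minimal sketch of the difference, using Python's standard socket module (the port numbers are arbitrary examples):

# Demonstrates the effect of the binding address on who can connect.
import socket

# Bound to 0.0.0.0: listens on ALL interfaces, reachable from the whole
# network (and, without a firewall, from the whole world).
public = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
public.bind(("0.0.0.0", 8080))
public.listen(5)

# Bound to 127.0.0.1: listens only on the loopback interface, reachable
# only from the local machine (or through an SSH tunnel, which looks local).
# Recommended for admin interfaces and database servers.
local = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
local.bind(("127.0.0.1", 8081))
local.listen(5)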


And remember: if you want to change the listening port of Remote Desktop for an additional layer of security on a machine you don't have physical access to, and you don't want to end up calling technical support, the correct order of steps is (which I learned the hard way :)):

  1. Open the new port you selected on the firewall for TCP connections.
  2. Verify that you can access that port using a tool like netcat or TTCP (or the small sketch after this list)
  3. Change the registry
  4. Reboot the computer and pray that it comes up 🙂
  5. Connect on the new port to remote desktop
  6. Remove the rule for the old port
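For step 2, anything that can complete a TCP connection will do. Here is a minimal netcat-style check in Python. The host and port are placeholders, and note that something must actually be listening on the target port (for example a netcat listener started on the server) for the connection to succeed; a timeout usually means the firewall is still dropping the packets:

# Minimal TCP reachability check (a poor man's netcat) for step 2.
import socket

HOST = "192.0.2.10"  # placeholder (TEST-NET address): put your server here
NEW_PORT = 13389     # placeholder: the new Remote Desktop port you opened

# For step 3, if I remember correctly, the port itself lives in the registry
# under HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\
# RDP-Tcp, in the DWORD value PortNumber.

try:
    # create_connection performs a full TCP handshake, so success means the
    # firewall rule really lets traffic through to that port.
    with socket.create_connection((HOST, NEW_PORT), timeout=5):
        print("Port is reachable: safe to proceed with the registry change.")
except OSError as exc:
    print(f"Port is NOT reachable ({exc}): fix the firewall rule first!")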
Hack The Gibson – Episode #66
Sun, 19 Nov 2006
https://grey-panther.net/2006/11/hack-the-gibson-episode-66.html

Read the reason for these posts. Read Steve Gibson's response.

This again will be a short one. Steve talks about Vista, which I have no immediate experience with (I've seen it on some decent machines and all I can say is that it's reeeeeeeeeeeeally slow. Really, really, really slow. Even without the Aero interface).

Now for some fun (although it isn't so funny when you think about how many people listen to the podcast and get erroneous information): BitLocker and booting from a USB device have nothing in common, even though Steve implies they do. And the TPM (the Trusted Platform Module) is not (just) a pre-boot technology. The external USB device is needed (if you don't have a TPM) to store the encryption key (similar to the way you can set up TrueCrypt). And the TPM is basically (there are of course some details, like validating the BIOS before running it) a secure place to store things: an encrypted memory.

Now about the topic of Patchguard, Kernel patching & co.:

Even though Steve implies the contrary, there are many documented ways to create firewalls (TDI drivers, NDIS drivers, Filter Hook drivers and the Windows Filtering Platform, which again is an evolutionary step, not the revolutionary one Steve implies; you certainly don't have to wait for Vista to write a driver which uses well documented, standardized APIs) and AV products (using File System Filters and Registry Filters).

In my opinion most of the people participating in this debate are coming at it from the wrong direction. No offense to anyone, but to understand the whole picture and to be able to make a fair decision, you must know at least some details about the inner workings of the Windows kernel. I'll try to explain it here as simply as possible, but again, to make up your own mind it's necessary to have read Windows Internals or an equivalent book. So here goes my version:

In programming there are certain linkage points between components written by different groups (think different companies here, like Microsoft and the other companies who produce software that runs on Windows). These are called APIs and are well documented. Their advantage is that whoever offers them is (or should be) committed to them, meaning that in future versions they will work the same way (so that third-party software can continue to work without modification). It also means that bugs in them will be fixed. Another advantage (this time from the point of view of the one offering the API) is that the inner workings of the software can change freely, as long as the visible behavior of the APIs remains the same. This is the ideal world.
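A toy illustration of this contract (a hypothetical vendor_module.py; every name in it is invented, purely to make the idea concrete):

# vendor_module.py: the public function is the stable, documented contract;
# names with a leading underscore are internals, free to change between versions.

def get_user_name(user_id):
    """Public API: committed to behave the same way in future versions."""
    return _lookup(user_id)

# Internals below: version 1 stores users in a dict. Version 2 could switch
# to a database, and no third-party caller would (or should) notice.
_users = {1: "alice", 2: "bob"}

def _lookup(user_id):
    return _users.get(user_id, "unknown")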

Now for real life: sometimes programmers find the available APIs insufficient. This can have multiple causes: (a) the developer doesn't know about all the available APIs (ignorance), (b) the developer is trying to do something that breaks an assumption the system is built on, or (c) there truly is no API for it. In this situation the developer might reverse-engineer the system s/he is developing against and try to modify (patch) it in such a way that s/he is able to accomplish her/his goal. There are many, many problems with this approach: (a) it reduces system stability: there are many steps one must take to create a reliable patch, and if one is missed, the stability of the system is in danger; (b) patches are not guaranteed to work in every condition: because reverse-engineering involves, most of the time, a fair amount of black-box testing, and because you never actually spoke to the original implementers of the code, you can never be sure that you covered all the possible situations; (c) they can be broken by any update of the system: because the original vendor doesn't know about the patch, there is no way it can guarantee that a future updated version won't break it.
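As a loose analogy, in Python rather than in the kernel: patching means reaching into someone else's internals instead of going through the contract. The sketch assumes the hypothetical vendor_module.py from the previous example is saved alongside it:

# A third party that wants behavior the public API doesn't offer might
# "patch" the vendor's internals at runtime. This is the user-mode analogue
# of kernel patching, and it is fragile in exactly the ways listed above.
import vendor_module  # the hypothetical module sketched earlier

_original_lookup = vendor_module._lookup  # an undocumented internal!

def _spying_lookup(user_id):
    print("intercepted lookup of user", user_id)  # the "extra feature"
    return _original_lookup(user_id)

# Overwrite the internal function. It works today, but:
#  (a) if the replacement misses an edge case, the whole module misbehaves;
#  (b) it was built on black-box guesses about how _lookup behaves;
#  (c) the vendor's next release may rename or remove _lookup entirely.
vendor_module._lookup = _spying_lookup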

The actual debate is about the fact that some vendors said the current API is insufficient, but did not say what other APIs they would need (most probably because there is a documented API for almost anything, including notification when a process is created). They created products which rely on patching, even though this puts the customer at risk, and now that they have to get their act together they are whining. Or they created some dubious HIPS product which is pretty much useless. So no, Vista won't contain less security, it will contain more. Even though some companies brag that they have bypassed PatchGuard, I'm sure Microsoft will modify it so that their bypass mechanism gets invalidated. Would you like to buy a product which randomly stops working for a couple of days after every security update (until the given company catches up, if they do)?

Software vs. Hardware firewalls
Fri, 29 Sep 2006
https://grey-panther.net/2006/09/software-vs-hardware-firewalls.html

I had already done my post for the day and was listening to episode 56 of Security Now when I heard something that ticked me off. I hear this all the time from various sources (but those are mostly uninformed people, not security experts). This won't be another Hack the Gibson post, although you can expect more of those shortly.

There are several variations of this misinformation, like: "you don't need a software firewall if you have a router / hardware firewall", "hardware firewalls are better than software firewalls" and so on. The main point is: they have different purposes!

Now to elaborate on this: back in the old days a firewall was (and a lot of the time still is) a hardware / software device with which you could filter traffic using rules like "if it comes from port X, allow it", "if it comes from IP X, allow it" and so on. This is what hardware firewalls can do (and probably 99.9% of home routers have this feature integrated). The problem is that it's rather hard to set up (I would like to know what percentage of home users even know what an IP is), and rather ineffective, because these days a very large percentage of traffic flows through port 80: if you don't allow port 80 you basically can't communicate, and if you do allow it, you have allowed almost all the traffic, so it becomes an all-or-nothing decision.
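A minimal sketch of this kind of address/port filtering (the rule set is invented, purely to make the all-or-nothing problem concrete):

# Toy packet filter in the style of a classic (hardware) firewall:
# rules see only addresses, ports and protocol, never the application.
from ipaddress import ip_address, ip_network

# (action, protocol, source network, destination port); first match wins.
RULES = [
    ("allow", "tcp", ip_network("192.168.1.0/24"), 22),  # SSH from the LAN only
    ("allow", "tcp", ip_network("0.0.0.0/0"), 80),       # HTTP from anywhere...
    ("deny",  "any", ip_network("0.0.0.0/0"), None),     # default deny
]

def decide(protocol, src_ip, dst_port):
    for action, proto, net, port in RULES:
        if proto not in ("any", protocol):
            continue
        if ip_address(src_ip) not in net:
            continue
        if port is not None and port != dst_port:
            continue
        return action
    return "deny"

# The all-or-nothing problem: a web page and malware phoning home over
# port 80 look identical at this layer.
print(decide("tcp", "203.0.113.7", 80))  # "allow": good and bad traffic alike
print(decide("tcp", "203.0.113.7", 22))  # "deny": SSH brute-forcers dropped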

Software firewalls had the same features at the beginning, but they evolved into what is called "personal firewall software" and now offer control on a per-program basis. What this basically means is that you can set different rules for different applications (although in most personal firewalls this is still an all-or-nothing decision, to avoid overwhelming the user, but at least it's at the application level). A major drawback is that because the firewall runs on the same machine where the malware runs (if the machine gets infected), the malware can turn it off, or inject code into other processes so that the firewall thinks a different program is trying to communicate.
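By contrast, a personal firewall keys its decision on which program is making the connection. A minimal sketch (the program paths and the policy are hypothetical):

# Toy personal-firewall decision: the rule is keyed on the program, not on
# the port, so the browser and a trojan both using port 80 are distinguishable.
PER_APP_POLICY = {
    r"C:\Program Files\Mozilla Firefox\firefox.exe": "allow",
    r"C:\Windows\System32\svchost.exe": "allow",
}

def decide_outbound(program_path, dst_port):
    # Unknown programs are flagged (real products usually ask the user).
    verdict = PER_APP_POLICY.get(program_path, "ask-user")
    print(f"{program_path} -> port {dst_port}: {verdict}")
    return verdict

decide_outbound(r"C:\Program Files\Mozilla Firefox\firefox.exe", 80)  # allow
decide_outbound(r"C:\Temp\totally-not-malware.exe", 80)               # ask-user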

One note about the firewall built into Windows XP and 2003 (as opposed to the one built into Vista, which is rumored to have this feature): it doesn't filter outgoing connections (meaning connections initiated from your computer), only incoming ones. This means it can prevent classic backdoors from working (like SubSeven or BackOrifice), but it won't catch most modern malware, which initiates the connection itself, usually on port 80 (so that your hardware firewall won't filter it either).
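The difference is easy to see with two tiny sockets (loopback-only and harmless; the port numbers are placeholders): the first behaves like a classic backdoor and is stopped by inbound filtering, while the second behaves like modern malware phoning home and is never even examined by a firewall that only filters incoming connections:

# Inbound vs. outbound, in socket terms (harmless, loopback-only demo).
import socket

# Classic backdoor style: LISTEN and wait for the attacker to connect in.
# An inbound-filtering firewall (like XP SP2's) blocks this unsolicited
# incoming connection.
backdoor_style = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
backdoor_style.bind(("127.0.0.1", 31337))  # backdoors used ports like this
backdoor_style.listen(1)

# Modern malware style: connect OUT, typically to port 80, where it blends
# in with ordinary web traffic. Inbound-only filtering never examines it.
phone_home = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    phone_home.connect(("127.0.0.1", 80))  # stand-in for evil.example:80
except OSError:
    pass  # nothing is listening locally; the point is the direction
finally:
    phone_home.close()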

In conclusion, my advice would be (from the point of view of firewalls):

  • Use a router so that you can use file sharing (I'm referring to the integrated file sharing, not some peer-to-peer program) without complicated configuration on your firewall.
  • I also use a router because I do web development on my machine, so it runs Apache / MySQL / PostgreSQL, and I sleep better knowing that there is no way somebody from the outside can reach those (even if I misconfigure something locally).
  • In addition, use a personal firewall so that you can control, per program, what has access to the network.
  • This isn't directly related to firewalls, but: don't run as admin (watch my blog, because I'll have more posts that should help you avoid running as admin).