I recently found out about the following Firefox plugin/addon: RequestPolicy (via this blogpost) – see also the Firefox addon page. Its function is to block cross-domain requests of all kinds – scripts, style-sheets, images, objects (Flash, Java, Silverlight), etc. – unless they are whitelisted. Anything in a webpage hosted on domain A can reference other content from domain A, but if it references content from other domains, that reference must be present in the RequestPolicy whitelist. There are three types of entries which can be added to the whitelist:
There are still some glitches to work out, but all in all it is a good tool for the security conscious. So is it worth it? It depends. If you are not a power-user with some knowledge of HTML (i.e. how CSS, HTML, JS and plugin objects fit together to form the page), I would recommend against it (because you will experience webpages “not working for no good reason”). It takes some initial training (just like NoScript), but after that it is pretty invisible (although not as invisible as NoScript, because it also blocks images / style-sheets).
Does it make you more secure? Yes, but only in the “you don’t have to outrun the bear” sense: once the attacker has enough control to insert a linked resource (script, iframe, etc.) in a page, s/he almost certainly has enough control to insert the attack script directly into the page, rather than linking to it. The current practice of linking to a centralized place exists mostly because the attackers want centralized control (for example to add new exploits) and statistics. Were such a whitelisting solution to become widely used, they could switch with very little effort to the “insert everything into the page” model. Still, such a solution shouldn’t be underestimated, since it gives almost perfect protection under current conditions.
Update: If leaving digital trails is something you like to avoid, take into consideration that the presence of a given site in the whitelist of addons such as NoScript or RequestPolicy can be considered proof that you’ve visited that site (unless it is on the default list of the respective addon). Just something to consider from a privacy standpoint. Life is a series of compromises and everyone has to decide for herself how to make them.
Picture taken from Luke Hoagland’s photostream with permission.
They’ve published a blogpost insinuating that Firefox 3.5 has a remote code execution vulnerability. I’ve tried to inquire whether they notified Mozilla about the issue, but after 4 days (!!!) my comment still awaits moderation – or it has been deleted outright.
So I decided to look a little more into the problem (from the safety of a VM) and arrived at the same conclusion as the F-Secure people: this is not a FF issue, but rather a Flash / other third-party software issue. The given pages seem to contain a link to an “attack kit” which tries to detect the browser version / available plugins, after which it tries to send down a targeted exploit.
What I would have liked from ParetoLogic:
Currently, my opinion still stands: they are a “grey-zone” company and you should avoid their products.
Picture taken from mikebaird’s photostream with permission.
My second thought was: the Browser Security Handbook documents several “pseudo-protocols” which make it possible to address, directly from the browser, files contained inside other files. Although support for them is rather spotty, I thought that by using JAR (supported by Firefox) and MHT (supported by IE) I could cover a large share of users.
The results are rather disappointing, but I’ll document the causes of failure I managed to isolate – maybe it can help someone out.
First up was JAR. JARs are in fact just zip files, so creating them is very straightforward. After creating and testing the archive locally, I uploaded it and tried to access it like this (if you have NoScript, you must add the site to the whitelist for it to work):
jar:http://ghdb.mirror.googlepages.com/ghdb.jar!/_0toc.html
Just to get the following error message:
Unsafe File Type
The page you are trying to view cannot be shown because it is contained in a file type that may not be safe to open. Please contact the website owners to inform them of this problem.
After searching for the error message and not coming up with anything useful, I took a stab at looking at the source code – this is one of the reasons open source is great, after all.
From the code:
// We only want to run scripts if the server really intended to
// send us a JAR file. Check the server-supplied content type for
// a JAR type.
...
mIsUnsafe = !contentType.EqualsLiteral("application/java-archive") &&
            !contentType.EqualsLiteral("application/x-jar");
...
if (prefs) {
    prefs->GetBoolPref("network.jar.open-unsafe-types", &allowUnpack);
}
if (!allowUnpack) {
    status = NS_ERROR_UNSAFE_CONTENT_TYPE;
}
Ignoring the fact that the code uses negative assertions (i.e. mIsUnsafe) rather than positive assertions (i.e. mIsSafe), it tells us that Firefox checks for a correct Content-Type sent by the webserver or, alternatively, for the “network.jar.open-unsafe-types” setting. This is probably there to prevent the GIFAR attack. So it seems that the googlepages server doesn’t return the correct Content-Type. We can quickly confirm this with the command:
curl http://ghdb.mirror.googlepages.com/ghdb.jar --output /dev/null --dump-header /dev/stdout
And indeed the result is:
HTTP/1.1 200 OK
Last-Modified: Wed, 31 Dec 2008 11:25:06 GMT
Cache-control: public
Expires: Fri, 09 Jan 2009 10:54:28 GMT
Content-Length: 2700935
Content-Type: application/octet-stream
Date: Fri, 09 Jan 2009 10:54:28 GMT
Server: GFE/1.3
...
So the options would be to (a) tell people to lower their security or (b) not use Google’s server, neither of which was particularly attractive.
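As an aside, building the JAR itself is just a matter of zipping up the mirror. Here is a rough Perl sketch of one way to do it (using Archive::Zip; the directory and file names are assumptions based on the URLs above, not necessarily what I actually used):
use strict;
use warnings;
use Archive::Zip qw( :ERROR_CODES );

# A JAR is just a ZIP archive with a different extension, so packing
# the mirrored site (assumed to live under WEB/) is straightforward.
my $zip = Archive::Zip->new();
$zip->addTree( 'WEB', '' ) == AZ_OK
    or die "could not add the WEB directory to the archive";
$zip->writeToFileNamed( 'ghdb.jar' ) == AZ_OK
    or die "could not write ghdb.jar";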
Now let’s take a look at the MHT format. As with many other MS formats, it is very sparsely documented (all hail our closed-source overlord), although there have been some standardization efforts. Anyway, here is the Perl script I’ve thrown together to generate an MHTML file from the mirror:
use strict;
use warnings;
use File::Basename;
use MIME::Lite;
use File::Temp qw/tempfile/;
use MIME::Types;

my $mimetypes = MIME::Types->new;

my $msg = MIME::Lite->new(
    From    => 'Saved by Microsoft Internet Explorer 5',
    Subject => 'Google Hacking Data Base',
    Type    => 'multipart/related'
);

my $i = 0;
my @tempfiles;
opendir my $d, 'WEB' or die "Can't open the WEB directory: $!";
while (my $f = readdir $d) {
    $f = "WEB/$f";
    next unless -f $f;
    ++$i;
    next unless $f =~ /\.([^.]+)$/;
    my $ext = lc $1;
    my $mime_type = $mimetypes->mimeTypeOf($ext);
    # fall back for unknown extensions
    $mime_type ||= 'application/octet-stream';

    my $path = $f;
    if ('text/html' eq $mime_type) {
        # HTML parts need their links rewritten to absolute URLs,
        # so the modified markup goes into a temporary file
        my ($fh, $filename) = tempfile( "tmimeXXXXXXXX" );
        open my $fhtml, '<', $f or die "Can't open $f: $!";
        my $html = join('', <$fhtml>);
        close $fhtml;
        $html =~ s/(href|src)\s*=\s*"(.*?)"/manipulate_href($1, $2)/ge;
        $html =~ s/(href|src)\s*=\s*'(.*?)'/manipulate_href($1, $2)/ge;
        $html =~ s/(href|src)\s*=\s*([^'"][^\s>]+)/manipulate_href($1, $2)/ge;
        print $fh $html;
        close $fh;
        $path = $filename;
        push @tempfiles, $path;
    }

    my $part = $msg->attach(
        Type     => $mime_type,
        Path     => $path,
        Filename => basename $f,
    );
    # every part must identify itself with an absolute Content-Location
    $part->attr('Content-Location' => 'http://example.com/' . basename $f);
}
closedir $d;

$msg->print(*STDOUT);
unlink $_ for (@tempfiles);

sub manipulate_href {
    my ($attr, $target) = @_;
    # leave already-absolute links alone, rebase everything else
    return qq{$attr="$target"} if ($target =~ /^http:\/\//i);
    return qq{$attr="http://example.com/$target"};
}
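Running it is simply a matter of redirecting STDOUT to a file from the directory containing the WEB mirror (the script name here is just an example):
perl make_mht.pl > ghdb.mht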
The two important things here are that each element must contain a Content-Location header (OK, this is somewhat of an oversimplification, because there are other ways to identify sub-content, but this is the easiest) and that all URLs must be absolute! This is why all the regex replacement is going on (again, this is a quick hack; if you want production code, you should consider using a proper HTML parser. Another possibility – which I haven’t tried – is to use the BASE tag – you may also want to check out the changes IE7 brings to it, although most probably they wouldn’t affect you).
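To make the structure concrete, the generated file is just a multipart/related MIME message. Schematically it looks roughly like this (boundary, encodings, the exact header set and the image file name are illustrative, not actual output):
MIME-Version: 1.0
From: Saved by Microsoft Internet Explorer 5
Subject: Google Hacking Data Base
Content-Type: multipart/related; boundary="_boundary_"

--_boundary_
Content-Type: text/html
Content-Location: http://example.com/_0toc.html

<html>...markup with absolute links...</html>
--_boundary_
Content-Type: image/gif
Content-Transfer-Encoding: base64
Content-Location: http://example.com/logo.gif

R0lGODlh...
--_boundary_--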
Now, with the MHT file created, time to try it out (with IE obviously):
mhtml:http://ghdb.mirror.googlepages.com/ghdb.mht!http://example.com/_0toc.html
The result is IE consuming 100% CPU (or less if you are on a multi-core system :-)) and seemingly doing nothing. I tried this on two different systems with IE6 and IE7. I assume that in the background it is downloading and parsing the file, but I just got bored with waiting. Update: I did manage to get it working after a fair amount of work, however IE seemed to want to download the entire file at each click, making this solution unusable. It still might be an alternative for smaller files…
Conclusions, future work:
The given theme was marked as experimental, and thus – to download it – I had to create a user account on the site. The F.A.Q. explains it as follows:
Why do I have to log in to install an experimental add-on?
The add-on site requires that users log in to install experimental add-ons as a reminder that you are about to undertake a risky step.
Now let’s analyze the approach a little deeper: Firefox addons (and themes) can be downloaded from any website, not just from the official one. Downloading from other sites is a two-step process, whereby you first have to approve the site, then the addon. Hosting an addon on the official site gives it an air of trustworthiness. Historically, “experimental” / “beta” addons were hosted on the author’s site or on mozdev. I assume that the option of hosting “experimental” extensions on the official site was created as a compromise between people wanting to post less-tested extensions on the mozilla site and the mozilla staff wanting to avoid less-stable addons giving a bad name to Firefox.
However, I argue that such a move is detrimental to both parties. The sign-up process is quite “old school”, and has a couple of usability issues:
Many people will be deterred by one of these obstacles, resulting in less usage (and testing) of the extension. Those who battle their way through (like me) will be frustrated by the experience. The method itself sends a mixed message from the mozilla team: “yes, this is an addon on the official site, but no, we don’t want you to download it”. The only possible benefit would be if the addons showed up when searching on the official site (or from the Firefox UI), however they do not! Luckily most people rarely use the site-specific search engine to find things (this is true for all sites, not just mozilla).
What would be a better solution?
Finally: the paranoia is overblown considering the percentage of Firefox users in general and the percentage of those users who install any extensions. I would argue that people who use more than two extensions are a very, very small percentage of the userbase, making the risk of “bad” extensions tarnishing FF’s name very small.
Also, the “news” was misleading (what a surprise – FUD on Slashdot :-)). Thunderbird is not going away, nor do you have to update to an alpha version of it. They will be supporting Gecko 1.8 with security patches for some time; it’s just that new features won’t be added (which isn’t so critical in the case of mail clients – HTML mails are evil anyway :-)). The new Thunderbird will be released sometime next year with the new Gecko engine, but there is no need to rush the upgrade (or at least nothing related to this announcement – maybe there are features in version 3 which are vital for you).
In conclusion: the sky isn’t falling (yet) and always look at the bright side of life :-).
Update: It seems that the theme is marked as “experimental”, and thus you need an account on addons.mozilla.org to be able to download it. I found the following account from bugmenot to be working: [email protected] / bugmenot.
One of the great features of Firefox 2 is session saving (I know, there were extensions before that to do the same thing, but they somehow never worked for me). If you want to activate it for every start, not just when Firefox crashes, go to Edit -> Preferences (or Tools -> Options on Windows, I think), Main -> Startup and set When Firefox starts to Show my windows and tabs from last time. (Via Otaku, Cedric’s weblog and MozillaZine)
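For the record, the same thing can be set from about:config or a user.js file – if I recall correctly, the relevant preference is browser.startup.page, with 3 meaning “restore the previous session”:
// user.js – assumed equivalent of the UI setting above (3 = resume previous session)
user_pref("browser.startup.page", 3);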
Update: Thanks to Andy for the tip: there are a lot more hidden features of the command shell which make it much more bearable. For a complete description check out The Windows NT Command Shell if you have some time on your hands and/or wish to make your immersions into the command-line world more efficient.
Update to the update: the shell has an emulation layer for DOSKEY, which means you can use all of its features without having to run unsupported 16-bit code!
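A quick illustration of what DOSKEY macros look like when typed into a cmd.exe session (the macro names are just examples; details may vary between Windows versions):
rem alias dir to a more Unix-y name, passing through any arguments
doskey ls=dir $*
rem show the command history of the current session
doskey h=doskey /history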
The post I’m talking about is Is Firefox less secure than IE 7?. First a little disclaimer: I may be biased in this matter (but who isn’t), as someone who’s been using and loving Firefox since version 0.9. The sentence I have the most issue with is the following: “Firefox alone in recent months has had more exploits than Windows XP and Vista combined” (yes, I should complain to George Ou for this one, and be sure that I will). People, let’s try to limit ourselves to useful and meaningful information instead of constructing bogus and meaningless statistics to prove our points. If we have biases, let’s come out and share them (like I did earlier), and let’s try to compare apples to apples and oranges to oranges. This quote was insulting to the intellect of your readers (who are smart enough to realize that within MS there are different teams working on different products, and they are so separated that you could almost call them a company within a company). It is as if I said: “IE had more vulnerabilities than there were full moons in 2006, so it is bad”.
To finish up with another statistic (again biased, but at least this is clear from the context): during 2006 Internet Explorer was vulnerable for 286 days out of 365 without a patch being available (78%), and Firefox for 9 days (2.5%).
Warning! These methods actually execute the script on your machine! They should be used with extreme care, and preferably only in controlled virtual machines or on computers not connected to the network.