Firefox – Grey Panthers Savannah
https://grey-panther.net

RequestPolicy Firefox Plugin – the ultimate NoScript
https://grey-panther.net/2009/10/requestpolicy-firefox-plugin-the-ultimate-noscript.html
Wed, 28 Oct 2009

I recently found out about the following Firefox plugin/addon: RequestPolicy (via this blogpost) – see also the Firefox addon page. Its job is to block all kinds of cross-domain requests – scripts, style-sheets, images, objects (Flash, Java, Silverlight), etc. – unless they are whitelisted. A page hosted on domain A can freely reference other content from domain A, but anything it references from other domains must be covered by the RequestPolicy whitelist. There are three types of entries which can be added to the whitelist (a small sketch of how such rules match follows the list):

  • source (i.e. pages on domain S can reference anything)
  • destination (i.e. anything can reference domain D)
  • source-to-destination (i.e. pages on domain S can reference resources on domain D)
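
To make the three rule types concrete, here is a minimal sketch of how such a whitelist could be matched against a request. This is my own illustration – RequestPolicy stores and matches its rules differently – and the domains are made up:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical whitelist illustrating the three entry types described above.
my @whitelist = (
    { origin => 'trusted-blog.example' },                  # source rule
    { dest   => 'ajax.googleapis.com' },                   # destination rule
    { origin => 'shop.example', dest => 'cdn.example' },   # source-to-destination rule
);

sub is_allowed {
    my ($origin_host, $dest_host) = @_;
    return 1 if $origin_host eq $dest_host;    # same-domain requests always pass
    for my $rule (@whitelist) {
        next if defined $rule->{origin} && $rule->{origin} ne $origin_host;
        next if defined $rule->{dest}   && $rule->{dest}   ne $dest_host;
        return 1;                               # every field present in the rule matched
    }
    return 0;                                   # not whitelisted -> blocked
}

print is_allowed('shop.example', 'cdn.example'),    "\n";   # 1 – allowed by the third rule
print is_allowed('random.example', 'evil.example'), "\n";   # 0 – blocked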

There are still some glitches to work out, but all in all it is a good tool for the security conscious. So is it worth it? It depends. If you are not a power-user who has some knowledge of HTML (i.e. how CSS, HTML, JS and plugin objects fit together to form a page), I would recommend against it (because you will have the experience of webpages “not working for no good reason”). It takes some initial training (just like NoScript), but after that it is pretty unobtrusive (though not as unobtrusive as NoScript, because it also blocks images / style-sheets).

RequestPolicy

Does it make you more secure? Yes, but only in the “you don’t have to outrun the bear, just the other guy” sense: once an attacker has enough control to insert a linked resource (script, iframe, etc.) into a page, s/he almost certainly has enough control to insert the attack script directly into the page, rather than linking to it. The current practice of linking to a centralized place exists mostly because attackers want centralized control (for example to add new exploits) and statistics. Were such a whitelisting solution to become widely used, they could switch to the “insert everything into the page” model with very little effort. Still, such a solution shouldn’t be underestimated, since it gives almost perfect protection under current conditions.

Update: If leaving digital trails is something you like to avoid, consider that the presence of a given site in the whitelist of addons such as NoScript or RequestPolicy can be taken as proof that you’ve visited that site (unless it is on the default list of the respective addon). Just something to consider from a privacy standpoint. Life is a series of compromises and everyone has to decide for herself how to make them.

Picture taken from Luke Hoagland’s photostream with permission.

The fox in the henhouse?
https://grey-panther.net/2009/07/the-fox-in-the-henhouse.html
Mon, 13 Jul 2009

Some time back I ranted about ParetoLogic, a company which used to be known as the maker of a rogue security product (XoftSpy). Today I can rant about them once again:

They’ve published a blogpost insinuating that Firefox 3.5 has a remote code execution vulnerability. I tried to inquire whether they had notified Mozilla about the issue, but after 4 days (!!!) my comment still awaits moderation – or has simply been deleted.

So I decided to look a little more into the problem (from the safety of a VM) and arrived at the same conclusion as the F-Secure people: this is not a FF issue, but rather a Flash / other third-party software issue. The given pages seem to contain a link to an “attack kit” which tries to detect the browser version / available plugins, after which it tries to send down a targeted exploit.

What I would have liked from ParetoLogic:

  • Research the issue in more detail (a remotely exploitable bug in the up-to-date version of a popular browser is not an issue which should be taken lightly)
  • It is ok to make mistakes, but one should own up to them and admit to being wrong (update the original post)
  • Don’t moderate user comments into oblivion (why do you have the “Comments” link then?)

Currently, my opinion still stands: they are a “grey-zone” company and you should avoid their products.

Picture taken from mikebaird’s photostream with permission.

Using a single file to serve up multiple web resources
https://grey-panther.net/2009/01/using-a-single-file-to-serve-up-multiple-web-resources.html
Fri, 09 Jan 2009

While trying to set up my GHDB mirror, my first thought was to use googlepages. I quickly found the bulk upload to googlepages how-to by X de Xavier, which is a very cool tool (and also an interesting way to hack your “chrome”), but unfortunately I found that Google Pages has a limit of 500 files (and the mirror contained around 1400 files), so this was a no-go.

My second thought was: the Browser Security Handbook documents several “pseudo-protocols” which let the browser directly address files contained inside other files. Although support for them is rather spotty, I thought that by using JAR (supported by Firefox) and MHT (supported by IE) I could cover a large share of users.

The results are rather disappointing, but I document the sources of failure which I isolated – maybe it can help someone out.

First up was JAR. JARs are in fact just zip files, so creating them is very straightforward. After creating and testing it locally, I uploaded the archive and tried to access it like this (if you have NoScript, you must add the site to the whitelist for it to work):

jar:http://ghdb.mirror.googlepages.com/ghdb.jar!/_0toc.html

Just to get the following error message:

Unsafe File Type

The page you are trying to view cannot be shown because it is contained in a file type that may not be safe to open. Please contact the website owners to inform them of this problem.

After searching for the error message and not coming up with anything useful, I took a stab at looking at the source code – this is one of the reasons open source is great, after all.

From the code:

// We only want to run scripts if the server really intended to
// send us a JAR file.  Check the server-supplied content type for
// a JAR type.
...
mIsUnsafe = !contentType.EqualsLiteral("application/java-archive") &&
            !contentType.EqualsLiteral("application/x-jar");
...
if (prefs) {
    prefs->GetBoolPref("network.jar.open-unsafe-types", &allowUnpack);
}

if (!allowUnpack) {
    status = NS_ERROR_UNSAFE_CONTENT_TYPE;
}

Ignoring the fact that the code uses negative assertions (i.e. mIsUnsafe) rather than positive assertions (i.e. mIsSafe), it tells us that Firefox is looking for the correct Content-Type sent by the webserver or, alternatively, for the “network.jar.open-unsafe-types” setting. This is probably meant to prevent the GIFAR attack. So it seems that the googlepages server doesn’t return the correct Content-Type. We can quickly confirm this with the command:

curl http://ghdb.mirror.googlepages.com/ghdb.jar --output /dev/null --dump-header /dev/stdout

And indeed the result is:

HTTP/1.1 200 OK
Last-Modified: Wed, 31 Dec 2008 11:25:06 GMT
Cache-control: public
Expires: Fri, 09 Jan 2009 10:54:28 GMT
Content-Length: 2700935
Content-Type: application/octet-stream
Date: Fri, 09 Jan 2009 10:54:28 GMT
Server: GFE/1.3
...

So the options would be to (a) tell people to lower their security or (b) not use Google’s server, neither of which was particularly attractive.
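
For completeness, here is roughly what each option would amount to. These snippets are my own illustration and untested in this setup: option (a) flips the preference checked in the Mozilla code above, at the cost of lowering security for every jar: URL; option (b) assumes you control the webserver (Apache in this example) and can make it send the Content-Type the code expects.

// Option (a): user.js / about:config – applies to all jar: URLs, use with care
user_pref("network.jar.open-unsafe-types", true);

# Option (b): Apache .htaccess on a server you control
AddType application/java-archive .jar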

Now let’s take a look at the MHT format. Like many other MS formats, it is very sparsely documented (all hail our closed-source overlord), although there have been some standardization efforts. Anyway, here is the Perl script I’ve thrown together to generate an MHTML file from the mirror:

use strict;
use warnings;
use File::Basename;
use MIME::Lite;
use File::Temp qw/tempfile/;
use MIME::Types;


my $mimetypes = MIME::Types->new;
my $msg = MIME::Lite->new(
        From    =>'Saved by Microsoft Internet Explorer 5',
        Subject =>'Google Hacking Data Base',
        Type    =>'multipart/related'
    );

my $i = 0;
my @tempfiles;
opendir my $d, 'WEB';
while (my $f = readdir $d) {
  $f = "WEB/$f";
  next unless -f $f;
  ++$i;

  next unless $f =~ /\.([^.]+)$/;
  my $ext = lc $1;
  my $mime_type = $mimetypes->mimeTypeOf($ext);
  my $path = $f;

  if ('text/html' eq $mime_type) {
    my ($fh, $filename) = tempfile( "tmimeXXXXXXXX" );
    
    open my $fhtml, '<', $f;
    my $html = join('', <$fhtml>);
    close $fhtml;
    $html =~ s/(href|src)\s*=\s*"(.*?)"/manipulate_href($1, $2)/ge;
    $html =~ s/(href|src)\s*=\s*'(.*?)'/manipulate_href($1, $2)/ge;
    $html =~ s/(href|src)\s*=\s*([^'"][^\s>]+)/manipulate_href($1, $2)/ge;
    print $fh $html;
    close $fh;

    $path = $filename;
    push @tempfiles, $path;
  }

  my $part = $msg->attach(
      Type        => $mime_type,
      Path        => $path,
      Filename    => basename $f,
  );
  $part->attr('Content-Location' => 'http://example.com/' . basename $f);  
}
closedir $d;

$msg->print(*STDOUT);

unlink $_ for (@tempfiles);

sub manipulate_href {
  my ($attr, $target) = @_;

  return qq{$attr="$target"} if ($target =~ /^http:\/\//i);
  return qq{$attr="http://example.com/$target"};
}

The two important things here are that each element must contain the Content-Location header (ok, this is somewhat of an oversimplification, because there are other ways to identify subcontent, but this is the easiest) and that all URLs must be absolute! This is why all the regex replacement is going on (again, this is a quick hack; if you want to create production code, you should consider using a parser. Another possibility – which I haven’t tried – is to use the BASE tag – you may also want to check out the changes IE7 brings to it, although most probably they wouldn’t affect you).
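
To make the structure concrete, here is a hand-written, stripped-down illustration of what the generated file looks like (the boundary and file names are made up for this example):

From: Saved by Microsoft Internet Explorer 5
Subject: Google Hacking Data Base
MIME-Version: 1.0
Content-Type: multipart/related; boundary="----=_example_boundary"

------=_example_boundary
Content-Type: text/html
Content-Location: http://example.com/_0toc.html

<html><body><a href="http://example.com/page1.html">page 1</a></body></html>

------=_example_boundary
Content-Type: text/html
Content-Location: http://example.com/page1.html

<html><body>Second resource, addressed through its absolute Content-Location.</body></html>

------=_example_boundary--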

Now, with the MHT file created, time to try it out (with IE obviously):

mhtml:http://ghdb.mirror.googlepages.com/ghdb.mht!http://example.com/_0toc.html

The result is IE consuming 100% CPU (or less if you are on a multi-core system :-)) and seemingly doing nothing. I tried this on two different systems with IE6 and IE7. I assume that in the background it is downloading and parsing the file, but I just got bored with waiting. Update: I did manage to get it working after a fair amount of work, however it seemed to want to download the entire file on each click, making this solution unusable. It still might be an alternative for smaller files…

Conclusions, future work:

  • Both solutions want to download the entire file before displaying it, making them very slow for large files.
  • It would be interesting to see if the MHT could incorporate some compressed resources, i.e. something like: Content-Encoding: gzip, base64 (first gzipped, then base64 encoded). This could reduce the size problem.
  • It would also be interesting to know in which context the content is interpreted. Hopefully in the context of the MHT file’s URL (i.e. in this case http://ghdb.mirror.googlepages.com/), rather than the specified URL (i.e. http://example.com), because, if not, it could result in some nasty XSS-type scenarios (e.g. a malicious individual crafts an MHT file with resources referred to as http://powned.com/, hosts it on his/her own server, convinces a user to click on the link mhtml:http://evil.com/pown.mht!http://powned.com/foo.html, and steals the cookies from powned.com, even if powned.com has no vulnerabilities per se!). I’m too lazy to try this out :-), but hopefully it can’t happen.
Effective self-censorship
https://grey-panther.net/2008/11/effective-self-censorship.html
Sat, 29 Nov 2008

No, I won’t be talking about China or Australia here. I would like to talk about my experience of downloading a Firefox theme.

The given theme was marked as experimental, and thus – to download it – I had to create a user account on the site. The F.A.Q. explains it as follows:

Why do I have to log in to install an experimental add-on?

The add-on site requires that users log in to install experimental add-ons as a reminder that you are about to undertake a risky step.

Now let’s analyze the approach a little deeper: Firefox addons (and themes) can be downloaded from any website, not just from the official one. Downloading from other sites is a two-step process, whereby you first have to approve the site, then the addon. Hosting an addon on the official site gives it an air of trustworthiness. Historically, “experimental” / “beta” addons were hosted on the author’s site or on mozdev. I assume that the option of hosting “experimental” extensions on the official site was created as a compromise between people wanting to post less-tested extensions on the mozilla site and the mozilla staff wanting to avoid less-stable addons giving Firefox a bad name.

However, I argue that such a move is detrimental to both parties. The sign-up process is quite “old school”, and has a couple of usability issues:

  • No JavaScript validation of the fields: you have to submit the form to find out that you’ve missed / mistyped something
  • You have to solve a CAPTCHA every time the form is displayed, even though you’ve successfully solved the CAPTCHA for the previous submission
  • You have to validate your e-mail address. This arguably is a security feature, however it could be implemented much more sensibly (for example not letting you do things that modify the “state” of the site – like submitting comments – until you’ve validated your account, but still letting you download things)
  • The confirmation link doesn’t automatically log you in. Again, this is arguably a security feature, however we are not talking about your online banking here, we are talking about a site which tries to “sell” you a product.
  • It doesn’t support OpenID

Many people will be deterred by one of these obstacles, resulting in less usage (and testing) for the extension. Those who battle their way through (like me) will be frustrated by the experience. The method itself sends a mixed message from the mozilla team: “yes, this is an addon on the official site, but no, we don’t want you to download it”. The only possible benefit would be if the addons showed up when searching on the official site (or from the Firefox UI) – however they do not! Luckily, most people rarely use the site-specific search engine to find things anyway (this is true for all sites, not just mozilla).

What would be a better solution?

  • Take a firm stance on the matter: either make these extensions “first class citizens” (don’t require logins to download them, make them show up in search results, etc.) or don’t host them on the official site at all. One compromise which seems acceptable is to place these addons at the end of the search results.
  • Optimize the signup experience. No, you are not protecting Fort Knox!
  • Trust user ratings / reviews! If the given addon is of such poor quality, it will quickly get a reputation as such (or more importantly: it won’t get a reputation as a “must have” extension).

Finally: the paranoia is overblown considering the market share of Firefox in general and the percentage of its users who use any extensions at all. I would argue that people who use more than two extensions are a very, very small percentage of the userbase, making the risk of “bad” extensions tarnishing FF’s name very small.

Firefox 2 end-of-life
https://grey-panther.net/2008/11/firefox-2-end-of-life.html
Tue, 18 Nov 2008

Via Slashdot came the news that version 1.8 of the Gecko engine – used to render HTML in Firefox 2, Thunderbird 2, etc. – is being end-of-lifed. Now I still have a few computers which I’m responsible for that have FF2 on them, just because that’s what the users were accustomed to. So I searched around and found this: Firefox 2.0 Classic Theme for Firefox 3.0. So I will be installing FF3 with a FF2 skin there.

Also, the “news” was misleading (what a surprise – FUD on Slashdot :-)). Thunderbird is not going away, nor do you have to update to an alpha version of it. They will be supporting Gecko 1.8 with security patches for some time; it’s just that new features won’t be added (which isn’t so critical in the case of mail clients – HTML mails are evil anyways :-)). The new Thunderbird will be released sometime next year with the new Gecko engine, but there is no need to rush the upgrade (or at least nothing related to this announcement – maybe there are features in 3 which are vital for you).

In conclusion: the sky isn’t falling (yet) and always look at the bright side of life :-).

Update: It seems that the theme is marked as “experimental”, and thus you need an account on addons.mozilla.com to be able to download it. I found the following account to be working from bugmenot: [email protected] / bugmenot.

Two quick tips
https://grey-panther.net/2007/07/two-quick-tips.html
Thu, 05 Jul 2007

Via the .:Computer Defense:. blog: the Windows command prompt has a history feature: just press F7 in a command window.

One of the great features of Firefox 2 is the session saving (I know, there were extensions before that did the same thing, but somehow they never worked for me). If you want to activate it for every start, not just when Firefox crashes, go to Edit -> Preferences (or Tools -> Options on Windows, I think), Main -> Startup and set When Firefox starts to Show my windows and tabs from the last time. (Via Otaku, Cedric’s weblog and MozillaZine)
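
If you prefer editing preferences directly (for example to set this on several machines at once), the same behaviour is – to the best of my knowledge – controlled by a single integer pref which you can also drop into user.js:

// user.js – a value of 3 means "show my windows and tabs from last time"
user_pref("browser.startup.page", 3);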

Update: Thanks to Andy for the tip: there are a lot more hidden features of the command shell which make it a lot more bearable. For a complete description check out The Windows NT Command Shell, if you have some time on your hands and/or wish to make your immersions in the command-line world more efficient.

Update to the update: the shell has an emulation layer for DOSKEY, which means you can use all of its features without having to run unsupported 16-bit code!
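
A couple of examples (these are the standard DOSKEY invocations as I remember them – check doskey /? on your system):

rem Define two macros for the current console session
doskey ls=dir /b $*
doskey np=notepad $*

rem Show the command history of the current session (the same list F7 pops up)
doskey /history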

Lies, Damn Lies and Statistics
https://grey-panther.net/2007/04/lies-damn-lies-and-statistics.html
Thu, 05 Apr 2007

I’m back with more critique for Deb Shinder (who for one reason or another doesn’t allow commenting on her blog, so I can’t directly post there). Read part one (Biometrics is not the answer!) and part two (Three letter acronyms don’t provide good security!) for more opinionated posts.

The post I’m talking about is Is Firefox less secure than IE 7?. First a little disclaimer: I may be biased in this matter (but who isn’t?) as someone who has been using and loving Firefox since version 0.9. The sentence I have the most issue with is the following: Firefox alone in recent months has had more exploits than Windows XP and Vista combined (yes, I should complain to George Ou for this one, and be sure that I will). People, please, let’s try to limit ourselves to useful and meaningful information instead of constructing bogus and meaningless statistics to prove our points. If we have biases, let’s come out and share them (like I did earlier) and let’s try to compare apples to apples and oranges to oranges. This quote is insulting to the intellect of your readers (who are smart enough to realize that within MS there are different teams working on different products, and they are so separated that you could almost call them a company within a company). It is as if I said: IE had more vulnerabilities than there were full moons in 2006, so it is bad.

To finish up with another statistic (again biased, but at least that is clear from the context): during 2006 Internet Explorer was vulnerable for 286 days without a patch being available (78%) and Firefox for 9 days (2.5%).

Decoding obfuscated Javascript
https://grey-panther.net/2007/02/decoding-obfuscated-javascript.html
Fri, 23 Feb 2007

SANS recently had a posting about methods to decode obfuscated Javascript, and I just wanted to mention 2+1 tools here:

  • In Firefox you can use the View Source Chart extension to view the source after the javascript has executed. There is also the versatile Firebug, but IMHO that’s overkill for this.
  • For Internet Explorer there is the Internet Explorer Developer Toolbar, which is free (as in beer) and, as of this writing, requires no WGA silliness.
  • And the bonus tips: if you are using Firefox, it may be worth installing the User Agent Switcher plugin and switching to IE, because exploit sites have been known to serve up different exploits for different browsers. If you encounter scripts of type JScript.encoded or VBScript.encoded, you should find this tool useful.

Warning! These methods actually execute the script on your machine! They should be used with extreme care, and preferably only in controlled virtual machines or on computers not connected to the network.
