Cookie viruses? Me thinks not
https://grey-panther.net/2006/11/cookie-viruses-me-thinks-not.html
Thu, 09 Nov 2006

The only reader of mine had a question: what is my opinion about cookie viruses? (If you also read this blog, I apologize, and in that case I'm very happy that I have more than one reader. If you have questions or topics you would like me to discuss, please post them in the comments.)

Getting back to the topic here: I don't have an opinion, since there is no such thing as a cookie virus. By definition, a virus is (quoting Wikipedia):

A computer virus is a self-replicating computer program written to alter the way a computer operates, without the permission or knowledge of the user.

There is no such thing because cookies should be (and usually are) treated by browsers as opaque tokens (that is, they are not interpreted in any way and are sent back to the server exactly as received). Now one could imagine the following, really far-fetched, scenario which would be something similar to a virus:

A given site uses cookies to return some JavaScript which is evaluated on the client side by a script embedded in the page (that is, the code embedded in the page reads document.cookie and calls eval on it; a minimal sketch of this pattern follows the list below). Now in this case we could make the client-side JavaScript do whatever we want, however:

  • If we can modify the client-side headers, we probably already have such deep access to either the client or the server that there are much more malicious things we could do directly.
  • The JavaScript will be executed in the very limited context of the web page, so we could only infect other cookies that go to the same site (but we already had access to those when modifying the first cookie, so there is no reason to use such a convoluted method).
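
To make this concrete, here is a minimal sketch of the far-fetched pattern described above, assuming a hypothetical cookie named js that the server (or an attacker) has set; as the list above points out, whatever it evaluates still runs only inside the page's own origin:

    // Hypothetical page that treats one of its own cookies as code.
    // For demonstration we set the cookie ourselves first.
    document.cookie = 'js=' + encodeURIComponent('alert("hello from a cookie")');

    // Read back the "js" cookie and evaluate its contents.
    var match = document.cookie.match(/(?:^|; )js=([^;]*)/);
    if (match) {
      // Dangerous by design: whatever ends up in the cookie runs as script,
      // but only within this page's origin.
      eval(decodeURIComponent(match[1]));
    }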

Now many sensationalist sources use the word virus to refer to all kinds of malicious actions, to drive up hype (and we all know what my opinion is about that). There are, however, some real possibilities of doing harm, most of them in the area of information theft and input validation.

  • The first one (which doesn't fit in either of the two mentioned categories) is the possibility of a buffer overflow in the cookie-handling code at the server. The official standard states that a cookie should be no larger than 4096 bytes, and we all know that when something like this is in the spec, many coders automatically write char buffer[4096];. However, before you think that this is a 0-day against Apache or something, let me say the following: I threw together some quick code and ran it against an Apache server (2.2.something) and it very nicely refused to accept the headers. It also generated a return message which was properly escaped, so there is no possible XSS vulnerability there. I'm also fairly sure that IIS has no such problem, but some home-brew custom HTTP servers might.
  • A scenario on which some papers focus is the following: the cookie contains some text which is relayed back to the server, which in turn embeds it in the HTML output without proper sanitization. This can result in the attacker embedding code of their choice in the page, including JavaScript. However, such an attack has little real-life benefit, since if the attacker can already modify the client's cookies, s/he probably has write access to the file system and can do much more nefarious things with much less complication.
  • A third possibility would be that the server relies on data contained in the cookie for authentication or for some other action. In the first case there are two vulnerabilities: cookie theft, and crafting a custom cookie to gain access (if the server relies, for example, on some value being present in the cookie to indicate that the user authenticated successfully). The second case would be when the cookie carries parameters for a server-side process (shopping cart information, for example). If the server has no way of validating this information upon receiving it, or doesn't do so, one could manipulate it to gain advantages (to buy things at a price of 0, for example). Ideally this information should be kept in server-side session storage or, if you don't want to break the REST model, encoded in the URL; but make sure that you provide a way for the server to verify the posted-back information, for example by encrypting it and then appending a salted hash (where the salt is only known to the server) and verifying this when receiving a new request (a minimal sketch of the verification part follows this list).
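
As a rough illustration of that last point, here is a minimal sketch (Node.js and its crypto module assumed; all names and values are illustrative) of the "sign and verify" idea: the cart data travels with the client, but a keyed hash computed with a server-only secret makes tampering detectable. It covers only the integrity part; encryption could be layered on top if the data must also stay confidential.

    // Illustrative only: sign client-held cart data so the server can detect tampering.
    const crypto = require('crypto');

    const SERVER_SECRET = 'only-the-server-knows-this'; // the "salt" known only to the server

    function signCartValue(data) {
      const payload = Buffer.from(JSON.stringify(data)).toString('base64');
      const mac = crypto.createHmac('sha256', SERVER_SECRET).update(payload).digest('hex');
      return payload + '.' + mac;
    }

    function verifyCartValue(value) {
      const dot = value.lastIndexOf('.');
      if (dot === -1) return null;
      const payload = value.slice(0, dot);
      const mac = value.slice(dot + 1);
      const expected = crypto.createHmac('sha256', SERVER_SECRET).update(payload).digest('hex');
      // Reject anything whose hash does not match (the length check keeps timingSafeEqual from throwing).
      if (mac.length !== expected.length ||
          !crypto.timingSafeEqual(Buffer.from(mac), Buffer.from(expected))) {
        return null;
      }
      return JSON.parse(Buffer.from(payload, 'base64').toString());
    }

    // Usage: the client can store and resend this value, but not change the price unnoticed.
    const cookieValue = signCartValue({ item: 42, price: 19.99 });
    console.log(verifyCartValue(cookieValue));                   // { item: 42, price: 19.99 }
    console.log(verifyCartValue('tampered.' + '0'.repeat(64)));  // null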

In conclusion: Developers – validate your input! validate your input! validate your input! (at every step)

The kind of articles I don't want to see
https://grey-panther.net/2006/10/the-kind-of-articles-i-dont-want-to-see.html
Wed, 11 Oct 2006

After reading this article I was in pain. I don't want to offend anybody, but this is a perfect example of the things against which this blog was created. The article contains a lot of hype words but is vague on technical details, and some of the details it does give are wrong. I don't want to accuse anybody, but it seems to me that this article is scaremongering more than anything else.

The first thing would be that everything which is covered falls in the category of input validation. While it is good to present different aspects and effects of this problem, it is at least misleading to say that these are the Top 10 vulnerability categories. To see a real and comprehensive list of top 10 vulnerability categories in web applications, visit the OWASP site.

Secondly, many of the technologies and problems presented are not new (in the sense that they predate the whole Web 2.0 craze by several years) and are not primarily used in web applications (like WSDL, XPath, SOAP).

Thirdly, the article tends to invent terminology, probably to get as much attention as possible. Let's take the first element in the list as an example: Cross-site scripting in AJAX. This is a needless repetition and also somewhat confusing (you are not doing the cross-site scripting IN AJAX, you are doing it in JavaScript or VBScript). The definition given is also a bit foggy and slightly incorrect: "AJAX gets executed on the client-side by allowing an incorrectly written script to be exploited by an attacker." This is misleading in the sense that one tends to think about client-side scripting when reading the word script in this context; however, most of the time it is the server side which includes incorrectly escaped user data in the final page (there are a few exceptions which use client-side scripting to dynamically generate parts of the page based on user-supplied parameters, but they are few and far between; a small sketch of such a case follows below).
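
For completeness, here is a minimal sketch of one of those exceptions, where a client-side script builds part of the page from a user-controlled URL parameter (the element id and parameter name are made up, and the modern URLSearchParams API is used for brevity), together with the safer alternative:

    // Hypothetical page containing <div id="greeting"></div>, opened as page.html?name=...
    var params = new URLSearchParams(location.search);
    var name = params.get('name') || '';

    // DOM-based XSS: attacker-controlled markup is parsed as HTML.
    document.getElementById('greeting').innerHTML = 'Hello, ' + name;

    // Safer: treat the value strictly as text, never as markup.
    document.getElementById('greeting').textContent = 'Hello, ' + name;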

Last, but not least, some of the things are flat out wrong: at point three of the article, Malicious AJAX code execution, it basically says that using an XMLHttpRequest object one could send requests to any site. This is not true: browsers enforce a same-origin policy on XMLHttpRequest (meaning that the script can send requests only to the domain from which the page was originally loaded). You can send requests to other sites by using IFRAMEs, but IFRAME and XMLHttpRequest are not the same thing (although they can be used in a similar manner). A quick way to see the restriction for yourself follows.
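
A minimal sketch of the restriction (the target origin below is a placeholder; 2006-era browsers simply threw a security error on such a call, while current browsers enforce the same rule through CORS):

    // Run from a page on one origin and point it at a different origin.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'https://some-other-site.example/data', true);
    xhr.onload = function () {
      console.log('response was readable:', xhr.responseText);
    };
    xhr.onerror = function () {
      console.log('blocked: cross-origin request refused by the browser');
    };
    xhr.send();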

My advice to the management type of people who read these articles would be: don't panic or start running around in circles because of such articles. There is a good chance that many of the things mentioned in it don't apply to your systems. Then again, there are many things NOT mentioned there which may apply, so please don't make a checklist out of it and make your people concentrate only on these issues. Read more useful material, like the OWASP list (have I said already how great they are :)).

My advice for programmers: go read the OWASP list and if a manager comes your way about this article, point her/him to the OWASP list and this blog post.

Things you (probably) didn't know about your webserver
https://grey-panther.net/2006/10/things-you-probably-didnt-know-about-your-webserver.html
Wed, 04 Oct 2006

Today's webservers are incredibly complex beasts. I don't know how many of the people operating Apache have read the full specifications. I sure didn't. So it should come as no surprise that there are hidden features in our servers, some of them turned on by default, which can weaken our defenses. There are two that I want to talk about today, both enabled out of the box:

  • The first (and the more important one, although in security every item is important) was only recently publicized and involves sending an invalid header to Apache, which responds with an error page. I got this one from the SecuriTeam blog. If the default error pages were not changed, they will include the invalid header, so a cross-site scripting attack is possible. To test if your site is vulnerable, you can use curl like this: curl http://localhost/asdf -H "Expect: <script>alert('Vulnerable');</script>" -v -i. If the output contains the alert, your server is vulnerable. To make matters worse, Flash or XMLHttpRequest can be used to create these types of requests (although not with Firefox, which disallows the transmission of this header). Don't start whitelisting Mozilla browsers based on this, though, because user agents can also be spoofed. The two possible workarounds are: create custom error pages (harder if you host multiple sites) or enable mod_headers and add the following global rule: RequestHeader unset Expect early (tested with Apache 2.2.3 on WinXP). This might slow your webserver down a little, as described in the documentation, but at least you're not vulnerable while you wait for an updated Apache.
  • The second is a lesser problem, and involves the possibility of stealing cookies, even if they are marked HttpOnly, when the site has an XSS vulnerability. It works by sending a TRACE request to the webserver. This request is usually used for debugging and echoes everything back, including the cookie headers. Again, Flash or XMLHttpRequest can be used to craft these special queries. A more detailed description can be found here: http://www.cgisecurity.com/whitehat-mirror/WhitePaper_screen.pdf. To test if you're vulnerable, telnet to your webserver and enter the following commands (a small script that automates this check and the previous one appears after this list):
    TRACE / HTTP/1.1
    Host: localhost (replace it with your host)
    X-Header: test
    
    (two enters)
    

    and you should see everything echoed back to you. As described here, you can use mod_rewrite to filter this attack, by adding the following rules:

    RewriteEngine On
    RewriteCond %{REQUEST_METHOD} ^TRACE
    RewriteRule .* - [F]
    

    And it is also a good idea to make sure that your sites are not vulnerable to XSS 😉
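
If you would rather not type the requests by hand, here is a small sketch (Node.js assumed; host and port are placeholders, and of course only point it at servers you own) that automates both checks from this list by sending the raw requests over a socket and printing the responses, so you can look for the reflected <script> payload or the echoed X-Header:

    // Automates the two tests above: the malformed Expect header and the TRACE request.
    const net = require('net');

    const host = 'localhost'; // replace with your own server
    const port = 80;

    function rawRequest(requestText, label) {
      const socket = net.connect(port, host, () => socket.write(requestText));
      let response = '';
      socket.on('data', (chunk) => { response += chunk; });
      socket.on('end', () => {
        console.log('--- ' + label + ' ---');
        console.log(response);
      });
      socket.setTimeout(5000, () => socket.destroy());
    }

    rawRequest(
      'GET /asdf HTTP/1.1\r\n' +
      'Host: ' + host + '\r\n' +
      "Expect: <script>alert('Vulnerable');</script>\r\n" +
      'Connection: close\r\n\r\n',
      'Expect header test');

    rawRequest(
      'TRACE / HTTP/1.1\r\n' +
      'Host: ' + host + '\r\n' +
      'X-Header: test\r\n' +
      'Connection: close\r\n\r\n',
      'TRACE test');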
