javascript – Grey Panthers Savannah (https://grey-panther.net)

Proxying pypi / npm / etc for fun and profit!
https://grey-panther.net/2014/02/proxying-pypi-npm-etc-for-fun-and-profit.html – Wed, 05 Feb 2014

Package managers for source code (like pypi, npm, nuget, maven, gems, etc.) are great! We should all use them. But what happens if the central repository goes down? Suddenly all your continuous builds / deploys fail for no reason. Here is a way to prevent that:

Configure Apache as a caching proxy fronting these services. This means that you can tolerate downtime of those services and you get quicker builds (since you don’t need to contact remote servers). It also has a security benefit (you can firewall off your build server so that it can’t make any outgoing connections) and it’s nice to avoid consuming the bandwidth of those registries (especially since they are provided for free).

Without further ado, here are the config bits for Apache 2.4:

/etc/apache2/force_cache_proxy.conf – the general configuration file for caching:

# Security - we don't want to act as a proxy to arbitrary hosts
ProxyRequests Off
SSLProxyEngine On
 
# Cache files to disk
CacheEnable disk /
CacheMinFileSize 0
# cache up to 100MB
CacheMaxFileSize 104857600
# Expire cache in one day
CacheMinExpire 86400
CacheDefaultExpire 86400
# Try really hard to cache requests
CacheIgnoreCacheControl On
CacheIgnoreNoLastMod On
CacheStoreExpired On
CacheStoreNoStore On
CacheStorePrivate On
# If remote can't be reached, reply from cache
CacheStaleOnError On
# Provide information about cache in reply headers
CacheDetailHeader On
CacheHeader On
 
# Only allow requests from localhost
<Location />
        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1
</Location>
 
<Proxy *>
        # Don't send X-Forwarded-* headers - don't leak local hosts
        # And some servers get confused by them
        ProxyAddHeaders Off
</Proxy>

# Small timeout to avoid blocking the build too long
ProxyTimeout    5

Now with this prepared we can create the individual configurations for the services we wish to proxy:

For pypi:

# pypi mirror
Listen 127.1.1.1:8001

<VirtualHost 127.1.1.1:8001>
        Include force_cache_proxy.conf

        ProxyPass         /  https://pypi.python.org/ status=I
        ProxyPassReverse  /  https://pypi.python.org/
</VirtualHost>

For npm:

# npm mirror
Listen 127.1.1.1:8000

<VirtualHost 127.1.1.1:8000>
        Include force_cache_proxy.conf

        ProxyPass         /  https://registry.npmjs.org/ status=I
        ProxyPassReverse  /  https://registry.npmjs.org/
</VirtualHost>

After configuration you need to enable the site (a2ensite) as well as the needed modules (a2enmod – ssl, cache, cache_disk, proxy, proxy_http).
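
On a Debian-style setup, enabling everything boils down to something like this (the site names below are placeholders – use whatever you named the vhost files above):

# enable the required modules and the two mirror vhosts (hypothetical names)
sudo a2enmod ssl cache cache_disk proxy proxy_http
sudo a2ensite pypi-mirror npm-mirror
sudo service apache2 reload

Once Apache is reloaded, the X-Cache / X-Cache-Detail response headers (added by the CacheHeader / CacheDetailHeader directives above) let you check whether a given request was actually served from the cache.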

Finally you need to configure your package manager clients to use these endpoints:

For npm you need to edit ~/.npmrc (or use npm config set) and add:

registry = http://127.1.1.1:8000/
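
The equivalent one-liner, if you prefer not to edit the file by hand:

npm config set registry http://127.1.1.1:8000/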

For Python / pip you need to edit ~/.pip/pip.conf (I recommend having download-cache as per Stavros’s post):

[global]
download-cache = ~/.cache/pip/
index-url = http://127.1.1.1:8001/simple/

If you use setuptools (why!? just stop and use pip :-)), your config is ~/.pydistutils.cfg:

[easy_install]
index_url = http://127.1.1.1:8001/simple/

Also, if you use buildout, the needed config adjustment in buildout.cfg is:

[buildout]
index = http://127.1.1.1:8001/simple/

This is mostly it. If your client is using any kind of local caching, you should clear that cache and reinstall all the dependencies to ensure that Apache has them cached on disk. There are also dedicated solutions for caching these repositories (for example devpi for Python and npm-lazy-mirror for Node), however I found them somewhat unreliable, and with Apache you have a uniform solution which already has things like startup / supervision implemented and which is familiar to most sysadmins.
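
For the cache-clearing step mentioned above, something along these lines should do (just an illustration – it assumes the cache locations configured earlier and a requirements.txt-based Python project):

# wipe the local package manager caches
npm cache clean
rm -rf ~/.cache/pip
# reinstall the dependencies so they get pulled through (and cached by) Apache
npm install
pip install -r requirements.txt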

Cleaning up Google AppEngine Mapreduce Jobs
https://grey-panther.net/2013/11/cleaning-up-google-appengine-mapreduce-jobs.html – Tue, 12 Nov 2013

Do you use the Google MapReduce library on AppEngine? And do you have a lot of completed tasks cluttering your dashboard? Use the JS below by pasting it into your developer console to clean them up! (Use it at your own risk, no warranty is provided :-))

schedule = function() {
  window.setTimeout(function() {
    // Click the first "Cleanup" link on the current page; if none is left,
    // move on to the next page of jobs and try again.
    var c = $('a:contains(Cleanup)').first();
    if (c.length > 0) {
      c.click();
    } else {
      $('a:contains(Next page)').click();
      schedule();
    }
  }, 300);
  return true;
};
// Returning true from the confirm() replacement auto-accepts the cleanup
// dialog and schedules the next iteration.
window.confirm = schedule;
schedule();

Is hand-writing assembly still necessary these days?
https://grey-panther.net/2011/02/is-hand-writing-assembly-still-necessary-these-days.html – Sun, 06 Feb 2011

Some time ago I came across the following article: Fast CRC32 in Assembly. It claimed that the assembly implementation was faster than the one implemented in C. Performance has always been something I’m interested in, so I repeated and extended the experiment.

Here are the numbers I got. This is on a Core 2 Duo T5500 @ 1.66 GHz processor. The numbers express Mbits/sec processed:

  • The assembly version from the blogpost (table taken from here): ~1700
  • Optimized C implementation (taken from the same source): ~1500. The compiler used was Microsoft Visual C++ Express 2010
  • Unoptimized C implementation (i.e. Debug build): ~900
  • Java implementation using polynomials: ~100 (using JRE 1.6.0_23)
  • Java implementation using table: ~1900
  • Built-in Java implementation: ~1700
  • Javascript (for the fun of it) implementation (using the code from here with optimization – storing the table as numeric rather than string) on Firefox 4.0 Beta 10: ~80
  • Javascript on Chrome 10.0.648.18: ~40
  • (No IE9 test – they don’t offer it for Windows XP)

Final thoughts:

  • Hand-coding assembly is not necessary in 99.999% of cases (then again, 80% of all statistics are made up :-p). Using better tools or better algorithms (see “Java table based” vs. “Java polynomial” above, and the sketch after this list) can give just as good a performance improvement. Maintainability and portability (almost always) trump performance
  • Be pragmatic. Are you sure that your performance is CPU bound? If you are calculating a CRC32 of disk files, a gigabit per second is more than enough
  • Revisit your assumptions periodically (especially if you are dealing with legacy code). The performance characteristics of modern systems (CPUs) differ enormously from the old ones. I would wager that on an old CPU with little cache the polynomial version would have fared much better, but now that CPU caches are measured in MB rather than KB, the table-based one clearly wins
  • Javascript engines are getting better and better.
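
To make the “polynomial vs. table” distinction concrete, here is a minimal sketch of the table-driven approach in Javascript (standard reflected CRC-32 with the 0xEDB88320 polynomial) – an illustration only, not the exact code from my repository:

// Build the 256-entry lookup table once, up front.
var CRC_TABLE = (function () {
    var table = [];
    for (var n = 0; n < 256; n++) {
        var c = n;
        for (var k = 0; k < 8; k++) {
            c = (c & 1) ? (0xEDB88320 ^ (c >>> 1)) : (c >>> 1);
        }
        table[n] = c >>> 0;
    }
    return table;
})();

// Process the input one byte at a time: a single table lookup per byte
// instead of the eight shift/XOR steps of the bit-by-bit ("polynomial") version.
function crc32(bytes) {
    var crc = 0xFFFFFFFF;
    for (var i = 0; i < bytes.length; i++) {
        crc = CRC_TABLE[(crc ^ bytes[i]) & 0xFF] ^ (crc >>> 8);
    }
    return (crc ^ 0xFFFFFFFF) >>> 0;
}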

Some other interesting remarks:

  • The source code can be found in my repo. Unfortunately I can’t include the C version since I managed to delete it by mistake 🙁
  • The file used to benchmark the different implementations was a PDF copy of the Producing Open Source Software book
  • The HTML5 File API is implemented surprisingly inconsistently between Firefox and Chrome, so I needed to add the following line to keep them both happy: var blob = file.slice ? file.slice(start, len) : file;
  • The Javascript code doesn’t work unless it is loaded via the http(s) protocol. Loading it from a local file gives “Error no. 4”, so I used a small python webserver
  • Javascript timing has some issues, but my task took longer than 15ms, so I got stable measurements
  • The original post mentions a variation of the algorithm which can take 16 bits at once (rather than 8), which could result in a speed improvement (and maybe it can be extended to 32 bits)
  • Be aware of the “free” tools from Microsoft! This article would have been published sooner if it weren’t for the fact that MSVC++ 2010 Express requires an online registration, and when I had the time, I had no Internet access!
  • Update: If you want to run the experiment with GCC, you might find the following post useful: Intel syntax on GCC

Picture taken from TheGiantVermin’s photostream with permission.

Update to the Blogger Tag Cloud
https://grey-panther.net/2010/04/update-to-the-blogger-tag-cloud.html – Tue, 20 Apr 2010

A small PSA (public service announcement): if you were using the Blogger Tag Cloud I put together based on the WP-Cumulus plugin, you might have noticed that it stopped working some time ago (I’m not entirely sure when, since I didn’t notice it until a reader commented and brought it to my attention – thanks again, Soufiane).

The problem was that the server hosting the SWF and JS files didn’t serve them anymore, instead giving a 403 – access refused – error. To mitigate this problem I’ve uploaded the SWF file to Google Code, used the JS file from the Google AJAX Libraries and brought the plugin back to life.

So, if you are using the plugin and you are subscribed to my feed, go to the original (now updated) post and use the new code.

Thank you and sorry for any inconvenience caused!

Updated YARPG
https://grey-panther.net/2010/04/updated-yarpg.html – Fri, 09 Apr 2010

This has been sitting in my queue for some time: almost four years ago (it’s incredible how time flies!) – amongst the first posts I published on this blog – I wrote a random password generator in Javascript which I named YARPG (“Yet Another Random Password Generator”). The advantages of using it are the same as they were back then:

  • Customizable (password length, types of characters included, etc)
  • Secure (it doesn’t communicate over the network, hence no need for SSL)
  • Fully reviewable (as opposed to server-based solutions, where you have to trust the server)

The only flaw it had (as pointed out by a commenter) was that passwords didn’t always include all the character types you selected (i.e. the checkboxes represented “possible”, not “mandatory”, characters, which was a little counter-intuitive).

I’ve thought about how to create passwords which include at least one character from each set. My first idea was to generate a password, check that it contained at least one character from each set and, if not, replace some of the characters with ones from the missing set. However, this train of thought quickly ran into problems when I had to decide which character to replace. Choosing something fixed (like the first one, the last one, etc.) is too predictable. If I choose a random one, I run the risk of overwriting a previous change. So finally I realized that there is a simple solution: just re-generate the whole password until it satisfies all of the constraints. Although this might seem like a brute-force solution, in practice its speed is indistinguishable from a constant-time solution.
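
Here is a minimal sketch of that “re-generate until it matches” idea (illustrative only – not the actual YARPG code, and a real implementation should use a better entropy source than Math.random):

// sets: one string per character class that must appear at least once
// (assumes length >= sets.length, otherwise no candidate can ever match).
function generatePassword(length, sets) {
    var alphabet = sets.join('');
    while (true) {
        // Generate a candidate from the combined alphabet.
        var candidate = '';
        for (var i = 0; i < length; i++) {
            candidate += alphabet.charAt(Math.floor(Math.random() * alphabet.length));
        }
        // Keep it only if every selected set contributed at least one character,
        // otherwise throw it away and try again.
        var ok = true;
        for (var j = 0; j < sets.length && ok; j++) {
            var found = false;
            for (var k = 0; k < sets[j].length && !found; k++) {
                found = candidate.indexOf(sets[j].charAt(k)) !== -1;
            }
            ok = found;
        }
        if (ok) { return candidate; }
    }
}

// Example: generatePassword(12, ['abcdefghijklmnopqrstuvwxyz', '0123456789', '!@#$%']);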

Below you have the new and improved YARPG:

I’ve also updated the original posting. You can get the source code for it by looking at the source of this webpage, or from my SVN repository: js_password_generator.html. Hopefully you find it useful!

Picture taken from cjc4454’s photostream with permission.

Youtube gadget generator
https://grey-panther.net/2009/08/youtube-gadget-generator.html – Thu, 27 Aug 2009

Some time ago I posted about how the Google Gadget code for Youtube seems to be borked. Now it seems that they have completely removed the option from the YouTube pages, for whatever reason, but the old code still seems functional. So below you can find a small piece of Javascript which generates the equivalent code for a given YouTube user (if you are reading this in an RSS reader, you need to click through to the post page, since chances are the reader stripped out the JS for security reasons).

Of course this is an unofficial and unsupported solution, so it might break at any time and without warning!


A new security provider
https://grey-panther.net/2009/08/a-new-security-provider.html – Tue, 18 Aug 2009

I found out about Dasient via the presentation they did at Google (which you can see embedded below). Their angle seems to be (although this will probably change, them being a young company): we check your rating at Google / McAfee / Symantec, and if they say that you are bad, we find the pages which are bad and “fix” them for you (by removing the malicious code).

What bothers me:

  • The blacklist approach – this means that there will be a lag before new attacks are detected
  • Relying on third-party services (like the Google Safe Browsing API, McAfee SiteAdvisor, etc.). While the Google Safe Browsing API has an explicit TOS stating that you can use it (under certain circumstances, of course), the situation with McAfee and Symantec is not as clear-cut. Does Dasient have a contract with them, or are they just scraping their websites? What if McAfee / Symantec decides that enough is enough and blocks them, or even worse, sues them? Also, relying on these services means a further delay in detecting infected sites (because they must wait until these providers detect the infection)
  • Their touted “dynamic filtering” technology seems over-engineered to me. It also (as far as I understand) can’t handle situations like “the request is directed to a different machine” or “the machine is rootkitted and the malicious code is added on-the-fly”, both of which occurred in the real world (the first with CN CERT and the second with a bunch of compromised Linux machines)
  • Also, I fear that because this filtering masks the problem (much like a WAF does), it will encourage people to be complacent about fixing the root of the problem (“so what if we get compromised twice a day due to weak passwords? we just click the checkbox!”)
  • Finally, the prices seem a little steep to me (starting from ~10 USD a month and going over ~50 USD per month)

All in all it doesn’t seem to me to be worth 2M USD (which they claim to have in funding)…

Pulling a Hanselman
https://grey-panther.net/2009/07/pulling-a-hanselman.html – Fri, 24 Jul 2009

User interface / interaction design 101: if you want something, the least you can do is ask for it. So I decided to take a page out of Scott Hanselman’s book (a blog worth reading BTW if you are interested in programming – it has an emphasis on Microsoft-specific technologies, but other topics come up quite frequently) and created a “banner” which is shown to visitors the first time they arrive at the site (or every time, if they use incognito mode :-)).

(Screenshot of Scott Hanselman’s blog)

So, if you like my ramblings, please subscribe. If you don’t, leave a comment and subscribe :-).

The right way to embed
https://grey-panther.net/2009/07/the-right-way-to-embed.html – Tue, 07 Jul 2009

I occasionally rant about “web 2.0” services which want me to embed Javascript on my page to get the functionality. Besides being a security risk (they can change the JS on their servers at any time and p0wn all my visitors – and it doesn’t have to be malice on their part, they might simply get p0wned themselves – and are you comfortable knowing that if any of the X gadgets you use turns malicious, your visitors might get infected?), it also slows down the page load time, because scripts are loaded synchronously (given that they can affect the structure of the page, browsers need to load them entirely before they can continue).

But today I wanted to praise two widgets which IMHO get it right: Feedjit, which shows the (approximate) physical location of your last N visitors, and the Weather Sticker from Weather Underground. The things I like about them:

  • Minimal fuss (no signup) required to get them.
  • They are just an image! (Feedjit also offers a Javascript version, but it is optional)
  • Given that it is just an image, the browser can download it in parallel and doesn’t need to suspend the processing of the page.
  • Even though it is just an image, the provider gets a link back.
  • Finally, it works when Javascript is not available and it eliminates a very obvious way to attack your users!

I wish that all gadget providers would offer a lo-fi version of their gadgets. A+ to both of these services (and you can find them in my sidebar).
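
For comparison, the “lo-fi” pattern these widgets use is essentially just this (the URLs are made-up placeholders, not the real widget endpoints):

<!-- A plain image wrapped in a link: it loads in parallel, cannot run code,
     and still gives the provider its link back. -->
<a href="http://widget-provider.example.com/">
  <img src="http://widget-provider.example.com/badge.png" alt="visitor widget" />
</a>

<!-- ...versus the blocking, scriptable variant this post argues against: -->
<script src="http://widget-provider.example.com/widget.js"></script>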

More benchmarking in the 127.0.0.1 vs 0.0.0.0 issue
https://grey-panther.net/2009/07/more-benchmarking-in-the-127-0-0-1-vs-0-0-0-0-issue.html – Mon, 06 Jul 2009

I’ve done a little more benchmarking on the 127.0.0.1 vs. 0.0.0.0 issue:

<script>var start=new Date();</script>

<script src="http://ad.a8.net/foo.js"></script>
<script src="http://asy.a8ww.net/foo.js"></script>
<script src="http://a9rhiwa.cn/foo.js"></script>
<script src="http://www.a9rhiwa.cn/foo.js"></script>
<script src="http://acezip.net/foo.js"></script>

<script>var stop=new Date(); alert(stop.getTime() - start.getTime());</script>

What this code does is try to include Javascript files from five sites (whose host names would be mapped to either 127.0.0.1 or 0.0.0.0 in the hosts file) and measure the time it takes to process these tags. This test case was chosen because current browsers load scripts synchronously (the reason being that the loaded script might modify the current document) and because it is a commonly used method of including third-party content (advertisements, gadgets, etc.). The results are:

  • Firefox (3.5) showed no difference: the timing always was a couple of milliseconds
  • Internet Explorer (8) and Google Chrome (2.0) showed quite a large difference in favor of 0.0.0.0: when using it, the pages consistently loaded in tens of milliseconds (between 0 and 30), while using 127.0.0.1 meant a page load time closer to a second (between 900 and 1500 milliseconds).
  • Using Opera (9.64) the results were even further apart: the 0.0.0.0 case took ~1 second, while the 127.0.0.1 case took around 25 (!!!) seconds.

All the measurements were repeated multiple times to ensure their validity. The machine used for this test was running Windows XP with all the latest updates and without any webserver installed. IMHO this is yet another argument in favor of using 0.0.0.0 instead of 127.0.0.1 when trying to block hosts.
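
For reference, the kind of hosts-file blocking discussed here looks like this (on Windows the file is %SystemRoot%\system32\drivers\etc\hosts, on Linux /etc/hosts; the host names are simply the ones used in the test above):

# with 0.0.0.0 the connection attempt fails immediately, while with 127.0.0.1
# the browser may sit waiting for a local webserver that isn't there
0.0.0.0    ad.a8.net
0.0.0.0    asy.a8ww.net
0.0.0.0    a9rhiwa.cn
0.0.0.0    www.a9rhiwa.cn
0.0.0.0    acezip.net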
