I’ve been playing around with Python (mostly because pefile is written in it) and got very annoyed with the whole whitespace-as-a-control-structure idea. In theory it all sounds great: you write beautiful code and it
just works. However, in practice I find this approach lacking in at least two ways:
- When moving code around (within the same file or between files), I often ended up in a situation where the indentation looked alright visually, but the interpreter complained. (Then again, I guess it could have been worse: the interpreter could have silently accepted the line(s), but attached them to the wrong block.)
- When I need to step back (decrease the indentation instead of increasing it), it seems much harder to identify the level of indentation needed. My theory about this is that while you usually increase your indentation level by one (which is easy to follow), it is not uncommon to decrease it by several levels at once (which is much harder to follow). In
C-like languages I find that the following method works reasonably well: when I start a block, I immediately place the ending marker (the closing bracket) at the corresponding indentation. The bracket, combined with editor support for highlighting bracket pairs, then gives a very good indication of my position.
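For what it’s worth, the looks-right-but-isn’t case is easy to reproduce without moving any real code around. The snippet below is my own made-up example: the two indented lines look aligned in most editors, but one is indented with spaces and the other with a tab, so Python 3 rejects the block at parse time:

```python
# Hypothetical example (not from the post): a block whose two lines *look*
# aligned but mix spaces and a tab. Python 3's tokenizer raises TabError.
source = 'if True:\n    print("a")\n\tprint("b")\n'

try:
    compile(source, "<moved-code>", "exec")
except TabError as exc:
    print("interpreter complained:", exc.msg)
```

Python 2 was more forgiving here (it silently mixed tabs and spaces under a tab-size assumption), which is exactly the “attached to the wrong block” failure mode mentioned above.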
Some other (mild) annoyances:
Python seems to have this idea of
interpreting everything as late as possible. Again, this sounds nice, but when I wait 30 minutes just to find out that I forgot to import some module and a function is missing, it makes me ask: where is my Perl strict mode?
This issue also seems to be related to the
dynamism: you can’t use a function until you’ve defined it (which sounds ok, but here is the kicker) even if it is in the same file! Ok, Pascal was great and I loved Delphi, but grow up already. The parser went through the file, it knows that the function exists, now let me use it!
The Python debugger doesn’t have a command to inspect the contents of a class (something like ‘x’ in the Perl debugger). You have p (for print) and pp (for pretty print), but both of those print out something along the lines of
class F at 0xblahblah. So here are some things I found useful:
- Most sites have a very weird attitude: they assume that you would want to add code to your source to debug it (!??). If I have to insert code in my file, I might as well do a bunch of print statements and be done with it. Fortunately the documentation mentions (although very briefly) that you can debug a script by running it as follows:
python -m pdb myscript.py
- The debugger commands can also be found in the documentation. The one I found very useful was the alias example:
alias pi for k in %1.__dict__.keys(): print "%1.",k,"=",%1.__dict__[k]
which creates a new command named pi, which you can use to inspect the class elements (see my earlier complaint with regards to p and pp).
- Although the pi command/alias is very useful, it can screw up the terminal badly if the class contains variables with binary data. In that case you are better off printing only the key names.
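The key-names-only variant can be written as another alias in the same style (the name pk is my own choice):

```
alias pk for k in %1.__dict__.keys(): print k
```

(Under Python 3’s pdb, print is a function, so the same idea becomes `alias pk for k in %1.__dict__.keys(): print(k)`.)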
Update: before I forget – the implementation of pack/unpack (the struct module) was also very annoying. Why reinvent the wheel when there are very good implementations already? And why not include an
‘arbitrarily many’ modifier (like Perl’s * count)? Why do I have to use a custom function (which must be declared beforehand)?
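The workaround I mean looks something like this (my own sketch, with made-up values): where Perl’s pack("N*", @list) simply consumes however many items you pass, Python’s struct wants an explicit count, so you end up building the format string by hand:

```python
import struct

values = [1, 2, 3, 4, 5]

# Perl: pack("N*", @values) -- the * means "as many as there are".
# Python: no such modifier, so compute the count and interpolate it:
fmt = ">%dI" % len(values)          # ">5I": five big-endian unsigned 32-bit ints
packed = struct.pack(fmt, *values)
print(len(packed))                  # 5 values * 4 bytes each = 20

unpacked = struct.unpack(fmt, packed)
print(unpacked)                     # (1, 2, 3, 4, 5)
```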
One response to “The hard edges of Python”
Thanks for the article; I’ve added it to http://www.wikivs.com/wiki/Perl_vs_Python.
Another bad thing (TM) about significant whitespace is that visually impaired users can’t read it.