views: 86467
answers: 148

What are the lesser-known but useful features of the Python programming language?

  • Try to limit answers to Python core.
  • One feature per answer.
  • Give an example and short description of the feature, not just a link to documentation.
  • Label the feature using a title as the first line.
+55  A: 

Main messages :)

import this
# btw look at this module's source :)


Deciphered:

The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!

cleg
Loving the source for that :D
Teifion
Any idea why the source was cyphered that way? Was it just for fun, or was there some other reason?
MiniQuark
the way the source is written goes against the zen!
hasen j
http://svn.python.org/view/python/trunk/Lib/this.py?view=markup
erikprice
It would be easier to understand if it used ord("A") instead of 65, ord("a") instead of 97, and ord("z")-ord("a") instead of 26. The rest is just a Caesar cipher by 13 (a.k.a. ROT13). But indeed it would have been more pythonic to use the str.translate method :-p
fortran
I've updated my /usr/lib/python2.6/this.py replacing the old code with this `print s.translate("".join(chr(64<i<91 and 65+(i-52)%26 or 96<i<123 and 97+(i-84)%26 or i) for i in range(256)))` and it looks much better now!! :-D
fortran
Yeah, that's called irony (the reason why they made it that way).
Joschua
@MiniQuark: quick history lesson: http://www.wefearchange.org/2010/06/import-this-and-zen-of-python.html
Dan
A more hidden feature (or easter egg) of similar vein: `from __future__ import barry_as_FLUFL`
Lie Ryan
+2  A: 

List comprehensions

Compare the more traditional (without list comprehension):

foo = []
for x in xrange(10):
  if x % 2 == 0:
     foo.append(x)

to:

foo = [x for x in xrange(10) if x % 2 == 0]
Oko
In what way are list comprehensions a *hidden* feature of Python?
Eli Bendersky
finnw
The question does ask for "an example and short description of the feature, not just a link to documentation". Any chance of adding one?
Dave Webb
List comprehensions were implemented by Greg Ewing, who was a postdoc at a department where they taught functional programming in a first-year paper.
ConcernedOfTunbridgeWells
If this was a hidden feature of python there would have been 40% more lines of code written in python today.
Vasil
It took me _ages_ to find list comprehensions in Python. Can't live without them now, of course...
Chinmay Kanchi
+1 I think that nested list comprehensions should also be mentioned: http://stackoverflow.com/questions/1198777/double-iteration-in-list-comprehension
inspectorG4dget
+16  A: 

Metaclasses

of course :-) http://stackoverflow.com/questions/100003/what-is-a-metaclass-in-python
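Since the question asks for an example rather than just a link, here is a minimal sketch (the class and attribute names are made up): a metaclass is the type of a class, and overriding its __new__ lets you inspect or rewrite every class created with it.

class UpperAttrMeta(type):
    # Rewrite every non-dunder attribute name to upper case when the class is created.
    def __new__(mcs, name, bases, attrs):
        new_attrs = {}
        for key, value in attrs.items():
            if key.startswith('__'):
                new_attrs[key] = value
            else:
                new_attrs[key.upper()] = value
        return type.__new__(mcs, name, bases, new_attrs)

class Foo(object):
    __metaclass__ = UpperAttrMeta   # Python 2 syntax; Python 3 uses "class Foo(metaclass=UpperAttrMeta)"
    bar = 'bip'

print(hasattr(Foo, 'bar'))   # False
print(Foo.BAR)               # bip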

Matthias Kestenholz
darkest secret!
jeffjose
+2  A: 

Special methods

Absolute power!
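To give the kind of example the question asks for, here is a small sketch (the Vector class is made up): implementing special methods such as __add__ and __repr__ lets your own objects work with built-in operators and with repr().

class Vector(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # invoked for "self + other"
        return Vector(self.x + other.x, self.y + other.y)

    def __repr__(self):
        # invoked by repr() and the interactive prompt
        return 'Vector(%r, %r)' % (self.x, self.y)

print(Vector(1, 2) + Vector(3, 4))   # Vector(4, 6)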

cleg
+234  A: 

Creating generator objects

If you write

x=(n for n in foo if bar(n))

you get a generator object and can assign it to x. Now you can do

for n in x:
    pass  # do something with each n here

The advantage of this is that you don't need intermediate storage, which you would need if you did

x = [n for n in foo if bar(n)]

In some cases this can lead to a significant speed-up.

You can chain several for and if clauses onto the end of the generator expression, basically replicating nested for loops:

>>> n = ((a,b) for a in range(0,2) for b in range(4,6))
>>> for i in n:
...   print i 

(0, 4)
(0, 5)
(1, 4)
(1, 5)
freespace
You could also use a nested list comprehension for that, yes?
shapr
Of particular note is the memory overhead savings. Values are computed on-demand, so you never have the entire result of the list comprehension in memory. This is particularly desirable if you later iterate over only part of the list comprehension.
saffsd
I use ifilter for this kind of thing: http://docs.python.org/library/itertools.html#itertools.ifilter
Dan
This is not particularly "hidden" imo, but also worth noting is the fact that you cannot rewind a generator object, whereas you can iterate over a list any number of times.
susmits
Ditto susmits. Although these are extremely cool, it's a documented feature of Python: http://docs.python.org/tutorial/classes.html. Using callbacks with your generators, also documented, adds to the coolness of generators: http://www.python.org/dev/peps/pep-0255/
Justin
The "no rewind" feature of generators can cause some confusion. Specifically, if you print a generator's contents for debugging, then use it later to process the data, it doesn't work. The data is produced, consumed by print(), then is not available for the normal processing. This doesn't apply to list comprehensions, since they're completely stored in memory.
shavenwarthog
Similar (dup?) answer: http://stackoverflow.com/questions/101268/hidden-features-of-python/165138#165138 Note, however, that the answer I linked here mentions a REALLY GOOD presentation about the power of generators. You really should check it out.
Denilson Sá
+167  A: 

Decorators

Decorators allow you to wrap a function or method in another function that can add functionality, modify arguments or results, etc. You write decorators one line above the function definition, beginning with an "at" sign (@).

The example shows a print_args decorator that prints the decorated function's arguments before calling it:

>>> def print_args(function):
>>>     def wrapper(*args, **kwargs):
>>>         print 'Arguments:', args, kwargs
>>>         return function(*args, **kwargs)
>>>     return wrapper

>>> @print_args
>>> def write(text):
>>>     print text

>>> write('foo')
Arguments: ('foo',) {}
foo
DzinX
When defining decorators, I'd recommend decorating the decorator with @decorator. It creates a decorator that preserves a function's signature when doing introspection on it. More info here: http://www.phyast.pitt.edu/~micheles/python/documentation.html
How is this a hidden feature?
vetler
Well, it's not present in most simple Python tutorials, and I stumbled upon it a long time after I started using Python. That is what I would call a hidden feature, just about the same as other top posts here.
DzinX
vetler, the question asks for "lesser-known but useful features of the Python programming language." How do you measure 'lesser-known but useful features'? I mean, how are any of these responses hidden features?
Johnd
@vetler Most of the thing here are hardly "hidden".
Beau Martínez
Hidden? This is a documented feature: http://www.python.org/dev/peps/pep-0318/
Justin
If the standard is whether or not a feature is documented, then this question should be closed.
Jesse Dhillon
+130  A: 

Readable regular expressions

In Python you can split a regular expression over multiple lines, name your matches and insert comments.

Example verbose syntax (from Dive into Python):

>>> import re
>>> pattern = """
... ^                   # beginning of string
... M{0,4}              # thousands - 0 to 4 M's
... (CM|CD|D?C{0,3})    # hundreds - 900 (CM), 400 (CD), 0-300 (0 to 3 C's),
...                     #            or 500-800 (D, followed by 0 to 3 C's)
... (XC|XL|L?X{0,3})    # tens - 90 (XC), 40 (XL), 0-30 (0 to 3 X's),
...                     #        or 50-80 (L, followed by 0 to 3 X's)
... (IX|IV|V?I{0,3})    # ones - 9 (IX), 4 (IV), 0-3 (0 to 3 I's),
...                     #        or 5-8 (V, followed by 0 to 3 I's)
... $                   # end of string
... """
>>> re.search(pattern, 'M', re.VERBOSE)

Example naming matches (from Regular Expression HOWTO)

>>> p = re.compile(r'(?P<word>\b\w+\b)')
>>> m = p.search( '(((( Lots of punctuation )))' )
>>> m.group('word')
'Lots'

You can also verbosely write a regex without using re.VERBOSE thanks to string literal concatenation.

>>> pattern = (
...     "^"                 # beginning of string
...     "M{0,4}"            # thousands - 0 to 4 M's
...     "(CM|CD|D?C{0,3})"  # hundreds - 900 (CM), 400 (CD), 0-300 (0 to 3 C's),
...                         #            or 500-800 (D, followed by 0 to 3 C's)
...     "(XC|XL|L?X{0,3})"  # tens - 90 (XC), 40 (XL), 0-30 (0 to 3 X's),
...                         #        or 50-80 (L, followed by 0 to 3 X's)
...     "(IX|IV|V?I{0,3})"  # ones - 9 (IX), 4 (IV), 0-3 (0 to 3 I's),
...                         #        or 5-8 (V, followed by 0 to 3 I's)
...     "$"                 # end of string
... )
>>> print pattern
"^M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$"
I don't know if I'd really consider that a Python feature, most RE engines have a verbose option.
Jeremy Banks
Yes, but because you can't do it in grep or in most editors, a lot of people don't know it's there. The fact that other languages have an equivalent feature doesn't make it not a useful and little-known feature of Python.
Mark Baker
In a large project with lots of optimized regular expressions (read: optimized for machine but not human beings), I bit the bullet and converted all of them to verbose syntax. Now, introducing new developers to projects is much easier. From now on we enforce verbose REs on every project.
Berk D. Demir
I'd rather just say: hundreds = "(CM|CD|D?C{0,3})" # 900 (CM), 400 (CD), etc. The language already has a way to give things names, a way to add comments, and a way to combine strings. Why use special library syntax here for things the language already does perfectly well? It seems to go directly against Perlis' Epigram 9.
Ken
@Ken: a regex may not always be directly in the source, it could be read from settings or a config file. Allowing comments or just additional whitespace (for readability) can be a great help.
Roger Pate
If you're writing a Python program and your config file isn't Python, then (Yegge would say and I'd agree that) "you're talking out of both sides of your mouth" re OO: http://sites.google.com/site/steveyegge2/the-emacs-problem
Ken
+47  A: 

Nested list comprehensions and generator expressions:

[(i,j) for i in range(3) for j in range(i) ]    
((i,j) for i in range(4) for j in range(i) )

These can replace huge chunks of nested-loop code.

Rafał Dowgird
"for j in range(i)" - is this a typo? Normally you'd want fixed ranges for i and j. If you're accessing a 2d array, you'd miss out on half your elements.
Peter Gibson
I'm not accessing any arrays in this example. The only purpose of this code is to show that the expressions from the inner ranges can access those from the outer ones. The by-product is a list of pairs (x,y) such that 4>x>y>0.
Rafał Dowgird
sorta like double integration in calculus, or double summation.
RamyenHead
The key point to remember here (which took me a long time to realize) is that the for clauses are written in the order you'd expect them in a standard nested for loop, from the outside inwards.
sykora
+25  A: 

Getter functions in module operator

The functions attrgetter() and itemgetter() in module operator can be used to generate fast accessor functions for use when sorting and searching objects and dictionaries.

Chapter 6.7 in the Python Library Docs
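As a quick sketch of typical usage (the data is made up): itemgetter builds a function that indexes into its argument, and attrgetter does the same for attributes, which makes both handy as sort keys.

from operator import itemgetter, attrgetter

people = [{'name': 'Alice', 'age': 30}, {'name': 'Bob', 'age': 25}]

# Sort a list of dicts by the 'age' key.
print(sorted(people, key=itemgetter('age')))

# Passing several keys makes itemgetter return a tuple, giving a multi-key sort.
print(sorted(people, key=itemgetter('age', 'name')))

# attrgetter works the same way for object attributes, e.g. key=attrgetter('age').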

Ber
s/Capter/Chapter/
J.F. Sebastian
Rite :) Fixed it.
Ber
+166  A: 

Sending values into generator functions. For example having this function:

def mygen():
  """Yield 5 until something else is passed back via send()"""
  a = 5
  while True:
    f = yield(a) #yield a and possibly get f in return
    if f is not None: a = f  #store the new value

You can:

>>> g = mygen()
>>> g.next()
5
>>> g.next()
5
>>> g.send(7)  #we send this back to the generator
7
>>> g.next() #now it will yield 7 until we send something else
7
Rafał Dowgird
You should test f against None, otherwise objects considered false can't be used (for example 0).
Sylvain Defresne
Agreed. Let's treat this as a nasty example of a hidden feature of Python :)
Rafał Dowgird
`if f` -> `if f is not None`
J.F. Sebastian
In other languages, I believe this magical device is called a "variable".
finnw
Coroutines should be coroutines and generators should be themselves too, without mixing. Mega-great link with a talk and examples about this here: http://www.dabeaz.com/coroutines/
kaizer.se
This is a non-hidden feature
Justin
@finnw: the example implements something that's similar to a variable. However, the feature could be used in many other ways ... unlike a variable. It should also be obvious that similar semantics can be implemented using objects (a class implementing Python's __call__ method, in particular).
Jim Dennis
+7  A: 

The ability to substitute even things like file deletion, file opening, etc. - direct manipulation of the language's library. This is a huge advantage when testing. You don't have to wrap everything in complicated containers. Just substitute a function/method and go. This is also called monkey-patching.
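A small sketch of the idea in a test; the cleanup function and the path are invented for illustration:

import os

def cleanup(path):
    # code under test: deletes a file
    os.remove(path)

# In the test, substitute os.remove so nothing is actually deleted.
deleted = []
original_remove = os.remove
os.remove = deleted.append            # the monkey-patch
try:
    cleanup('/tmp/some-file')
    assert deleted == ['/tmp/some-file']
finally:
    os.remove = original_remove       # always restore the real function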

phjr
Creating a test harness which provides classes that have the same interfaces as the objects which would be manipulated by the code under test (the subjects of our testing) is referred to as "Mocking" (these are called "Mock Classes" and their instances are "Mock Objects").
Jim Dennis
A: 
>>> x=[1,1,2,'a','a',3]
>>> y = [ _x for _x in x if not _x in locals()['_[1]'] ]
>>> y
[1, 2, 'a', 3]


"locals()['_[1]']" is the "secret name" of the list being created. Very useful when state of list being built affects subsequent build decisions.

Kevin Little
Ew. This 'name' of the result list depends on too many factors to really consider it more than abuse of a specific implementation (and specific to a particular version, to boot.) On top of that it's an O(n^2) algorithm. Yuck.
Thomas Wouters
Well, at least no one will claim this one isn't hidden.
I. J. Kennedy
+155  A: 

The step argument in slice operators. For example:

>>> a = [1,2,3,4,5]
>>> a[::2]  # iterate over the whole list in 2-increments
[1,3,5]

The special case x[::-1] is a useful idiom for 'x reversed'.

>>> a[::-1]
[5,4,3,2,1]
Rafał Dowgird
Much clearer, in my opinion, is the reversed() function: `list(reversed(range(4)))` gives `[3, 2, 1, 0]`.
Christian Oudard
Then how to write "this i a string"[::-1] in a better way? reversed doesn't seem to help.
Berry Tsakala
"".join(reversed("this i a string"))
erikprice
The problem with reversed() is that it returns an iterator, so if you want to preserve the type of the reversed sequence (tuple, string, list, unicode, user types...), you need an additional step to convert it back.
Rafał Dowgird
def reverse_string(string): return string[::-1]
pi
@pi I think if one knows enough to define reverse_string as you have then one can leave the [::-1] in your code and be comfortable with its meaning and the feeling it is appropriate.
vgm64
Is there a speed difference between `[::-1]` and `reversed()`?
Austin
-1, because it is not hidden and you learn it early enough, but it's a useful feature.
Quonux
+15  A: 

Implicit concatenation:

>>> print "Hello " "World"
Hello World

Useful when you want to make a long text fit on several lines in a script:

hello = "Greaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa Hello " \
        "Word"

or

hello = ("Greaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa Hello " 
         "Word")
e-satis
To make a long text fit on several lines, you can also use the triple quotes.
Rafał Dowgird
Your example is wrong and misleading. After running it, the "Word" part won't be on the end of the hello string. It won't concatenate. To continue on next line like that, you would need implicit line continuation and string concatenation and that only happens if you use some delimiter like () or [].
nosklo
Only one thing was wrong here: the tab before "word" (typo). What's more, you are really unfriendly, especially for somebody who didn't even take the time to check if it works (since you would have seen it does). You may want to read this: steve.yegge.googlepages.com/bambi-meets-godzilla
e-satis
Anyone who has ever forgotten a comma in a list of strings knows how evil this 'feature' is.
Terhorst
Well, a PEP was put forward to get rid of it, but Guido finally decided to keep it. I guess it's more useful than hateful. Actually the drawbacks are not so dangerous (no safety issues) and for long strings, it helps a lot.
e-satis
This is probably my favorite feature of Python. You can forget correct syntax and it's still correct syntax.
sli
even better: hello = "Greaaaaa Hello \<pretend there's a line break here>World"
JAB
+36  A: 

property

class ClassName(object):
    """
    """    
    def __init__(self, foo, bar):
        """
        """
        self.foo = foo # read-write property
        self.bar = bar # simple attribute

    def _set_foo(self, value):
        self._foo = value

    def _get_foo(self):
        return self._foo

    foo = property(_get_foo, _set_foo)

In Python 2.6 and 3.0:

class C(object):
    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        self._x = value

    @x.deleter
    def x(self):
        del self._x

class D(C):
    @C.x.getter
    def x(self):
        return self._x * 2

    @x.setter
    def x(self, value):
        self._x = value / 2
J.F. Sebastian
It would be nice if your pre-2.6 and your 2.6 and 3.0 examples would actually present the exact same thing: classname is different, there are comments in the pre-2.6 version, the 2.6 and 3.0 versions don't contain initialization code.
Confusion
+362  A: 

Chaining comparison operators:

>>> x = 5
>>> 1 < x < 10
True
>>> 10 < x < 20 
False
>>> x < 10 < x*10 < 100
True
>>> 10 > x <= 9
True
>>> 5 == x > 4
True

In case you're thinking it's doing 1 < x, which comes out as True, and then comparing True < 10, which is also True, then no, that's really not what happens (see the last example.) It's really translating into 1 < x and x < 10, and x < 10 and 10 < x * 10 and x*10 < 100, but with less typing and each term is only evaluated once.

Thomas Wouters
Isn't `10 > x <= 9` the same as `x <= 9` (ignoring overloaded operators, that is)?
ΤΖΩΤΖΙΟΥ
Of course. It was just an example of mixing different operators.
Thomas Wouters
That's very helpful. It should be standard for all languages. Sadly, it isn't.
stalepretzel
You should add some examples that return False as well, such as: `10 < x < 20` gives `False`.
ShoeLace
This applies to other comparison operators as well, which is why people are sometimes surprised why code like (5 in [5] is True) is False (but it's unpythonic to explicitly test against booleans like that to begin with).
Miles
They should really really be in all languages, I totally agree.
Andrew Szeto
Lisp does not have anything similar?
Hai
Not that I know of. Perl 6 does have this feature, though :)
ephemient
I tried 0 < x < 100 in C# the other day. Bah, humbug.
Garth T Kidd
Good, but watch out for equal precedence, like 'in' and '=='. 'A in B == C in D' means '(A in B) and (B == C) and (C in D)', which might be unexpected.
Charles Merriam
_"each term evaluated only once"_ That's key.
wilhelmtell
Azafe: Lisp's comparisons naturally work this way. It's not a special case because there's no other (reasonable) way to interpret `(< 1 x 10)`. You can even apply them to single arguments, like `(= 10)`: http://www.cs.cmu.edu/Groups/AI/html/hyperspec/HyperSpec/Body/fun_eqcm_sleq__lteqcm_gteq.html
Ken
@Miles a less confusing example might be "a == b in c" which is equivalent to "a == b and b in c". See http://docs.python.org/reference/expressions.html#notin
poolie
+4  A: 

Everything is dynamic

"There is no compile-time". Everything in Python is runtime. A module is 'defined' by executing the module's source top-to-bottom, just like a script, and the resulting namespace is the module's attribute-space. Likewise, a class is 'defined' by executing the class body top-to-bottom, and the resulting namespace is the class's attribute-space. A class body can contain completely arbitrary code -- including import statements, loops and other class statements. Creating a class, function or even module 'dynamically', as is sometimes asked for, isn't hard; in fact, it's impossible to avoid, since everything is 'dynamic'.

Thomas Wouters
This gives Python the wonderful reload().
sli
Everything is dynamic... Except classes and modules implemented in C, which are not as dynamic as everything else. (try something like `dict.x = 3`, and Python won't let you)
Denilson Sá
Yes, modules and types defined in C are defined at compiletime, but they're still *created* at runtime. Also, `dict.x = 3` has nothing to do with things being dynamic, but with the `dict` type not allowing attributes to be assigned. You can make your own classes, in Python, that don't allow that. You can make your own type, in C, that does allow it. It's unrelated.
Thomas Wouters
+70  A: 

Re-raising exceptions:

# Python 2 syntax
try:
    some_operation()
except SomeError, e:
    if is_fatal(e):
        raise
    handle_nonfatal(e)

# Python 3 syntax
try:
    some_operation()
except SomeError as e:
    if is_fatal(e):
        raise
    handle_nonfatal(e)

The 'raise' statement with no arguments inside an error handler tells Python to re-raise the exception with the original traceback intact, allowing you to say "oh, sorry, sorry, I didn't mean to catch that, sorry, sorry."

If you wish to print, store or fiddle with the original traceback, you can get it with sys.exc_info(), and printing it like Python would is done with the 'traceback' module.

Thomas Wouters
Sorry but this is a well known and common feature of almost all languages.
Lucas S.
I agree with Lucas S.
Cristian Ciupitu
Note the italicized text. Some people will do `raise e` instead, which doesn't preserve the original traceback.
Aaron Gallagher
Maybe more magical, `exc_info = sys.exc_info(); raise exc_info[0], exc_info[1], exc_info[2]` is equivalent to this, but you can change those values around (e.g., change the exception type or message)
ianb
@Lucas S. Well, I didn't know it, and I'm glad it's written here.
e-satis
+126  A: 

In-place value swapping

>>> a = 10
>>> b = 5
>>> a, b
(10, 5)

>>> a, b = b, a
>>> a, b
(5, 10)

The right-hand side of the assignment is an expression that creates a new tuple. The left-hand side of the assignment immediately unpacks that (unreferenced) tuple to the names a and b.

After the assignment, the new tuple is unreferenced and marked for garbage collection, and the values bound to a and b have been swapped.

As noted in the Python tutorial section on data structures,

Note that multiple assignment is really just a combination of tuple packing and sequence unpacking.

Lucas S.
Does this use more real memory than the traditional way? I would guess so, since you are creating a tuple instead of just one swap variable.
Nathan
It doesn't use more memory; it uses less. I just wrote it both ways and decompiled the bytecode; the compiler optimizes, as you'd hope it would. The dis results showed it's setting up the vars and then ROT_TWOing. ROT_TWO means 'swap the two top-most stack vars'... Pretty snazzy, actually.
royal
+90  A: 

Descriptors

They're the magic behind a whole bunch of core Python features.

When you use dotted access to look up a member (eg, x.y), Python first looks for the member in the instance dictionary. If it's not found, it looks for it in the class dictionary. If it finds it in the class dictionary, and the object implements the descriptor protocol, instead of just returning it, Python executes it. A descriptor is any class that implements the __get__, __set__, or __delete__ methods.

Here's how you'd implement your own (read-only) version of property using descriptors:

class Property(object):
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, type):
        if obj is None:
            return self
        return self.fget(obj)

and you'd use it just like the built-in property():

class MyClass(object):
    @Property
    def foo(self):
        return "Foo!"

Descriptors are used in Python to implement properties, bound methods, static methods, class methods and slots, amongst other things. Understanding them makes it easy to see why a lot of things that previously looked like Python 'quirks' are the way they are.

Raymond Hettinger has an excellent tutorial that does a much better job of describing them than I do.

Nick Johnson
+98  A: 

Doctest: documentation and unit-testing at the same time.

Example extracted from the Python documentation:

def factorial(n):
    """Return the factorial of n, an exact integer >= 0.

    If the result is small enough to fit in an int, return an int.
    Else return a long.

    >>> [factorial(n) for n in range(6)]
    [1, 1, 2, 6, 24, 120]
    >>> factorial(-1)
    Traceback (most recent call last):
        ...
    ValueError: n must be >= 0

    Factorials of floats are OK, but the float must be an exact integer:
    """

    import math
    if not n >= 0:
        raise ValueError("n must be >= 0")
    if math.floor(n) != n:
        raise ValueError("n must be exact integer")
    if n+1 == n:  # catch a value like 1e300
        raise OverflowError("n too large")
    result = 1
    factor = 2
    while factor <= n:
        result *= factor
        factor += 1
    return result

def _test():
    import doctest
    doctest.testmod()    

if __name__ == "__main__":
    _test()
Pierre-Jean Coudert
Doctests are certainly cool, but I really dislike all the cruft you have to type to test that something should raise an exception
TM
Doctests are overrated and pollute the documentation. How often do you test a standalone function without any setUp() ?
a paid nerd
who says you can't have setup in a doctest? write a function that generates the context and returns `locals()` then in your doctest do `locals().update(setUp())` =D
Jiaaro
These are nice for making sure examples in docstrings don't go out of sync.
Longpoke
If a standalone function requires a setUp, chances are high that it should be decoupled from some unrelated stuff or put into a class. Class doctest namespace can then be re-used in class method doctests, so it's a bit like setUp, only DRY and readable.
Andy Mikhaylenko
http://bemusement.org/diary/2008/October/24/more-doctest-problems - doctests make for ok docs, bad tests
poolie
+209  A: 

iter() can take a callable argument

For instance:

def seek_next_line(f):
    for c in iter(lambda: f.read(1),'\n'):
        pass

The iter(callable, until_value) function repeatedly calls callable and yields its result until until_value is returned.

mbac32768
This is really cool, I didn't know iter could do that!
Ryan
You should also add the explanation: iter(callable, sentinel) -> iterator; the callable is called until it returns the sentinel.
Cristian Ciupitu
@Cristian Is this clearer?
badp
To be honest, either the generic description of `iter` from the [Python documentation](http://docs.python.org/library/functions.html#iter) (`help(iter)`) or an explanation of what's going on here should be used. For example, something like this: *iter(...) creates an iterator that calls `f.read(1)` until it returns `'\n'`*. Anyway, since I already know what's going on, others (newbies?) should decide.
Cristian Ciupitu
+3  A: 

Too lazy to initialize every field in a dictionary? No problem:

In Python 2.5 and later:

from collections import defaultdict

In earlier versions:

def defaultdict(type_):
    class Dict(dict):
        def __getitem__(self, key):
            return self.setdefault(key, type_())
    return Dict()

In any version:

d = defaultdict(list)
for stuff in lots_of_stuff:
     d[stuff.name].append(stuff)
pi
You may be interested to learn about collections.defaultdict(list).
Thomas Wouters
Thanks. Does not work on my production environment though. Python 2.3.
pi
+16  A: 

The Python Interpreter

>>>

Maybe not lesser known, but certainly one of my favorite features of Python.

davidavr
The #1 reason Python is better than everything else. </fanboi>
sli
Everything else you've seen. </smuglispweenie>
Matt Curtis
And it also has iPython which is much better than the default interpreter
juanjux
I wish I could use iPython like SLIME in all of its glory
nicholas
+11  A: 

First-class functions

It's not really a hidden feature, but the fact that functions are first class objects is simply great. You can pass them around like any other variable.

>>> def jim(phrase):
...   return 'Jim says, "%s".' % phrase
>>> def say_something(person, phrase):
...   print person(phrase)

>>> say_something(jim, 'hey guys')
Jim says, "hey guys".
Jeremy Cantrell
This also makes callback and hook creation (and, thus, plugin creation for your Python scripts) so trivial that you might not even know you're doing it.
sli
Any language that doesn't have first-class functions (or at least some good substitute, like C function pointers) has a misfeature. It is completely unbearable to go without.
TokenMacGuy
This might be a stupider question than I intend, but isn't this essentially a function pointer? Or do I have this mixed up?
inspectorG4dget
@inspectorG4dget: It's certainly related to function pointers, in that it can accomplish all of the same purposes, but it's slightly more general, more powerful, and more intuitive. Particularly powerful when you combine it with the fact that functions can have attributes, or the fact that instances of certain classes can be called, but that starts to get arcane.
eswald
+85  A: 

Creating new types at runtime

>>> NewType = type("NewType", (object,), {"x": "hello"})
>>> n = NewType()
>>> n.x
"hello"

which is exactly the same as

>>> class NewType(object):
>>>     x = "hello"
>>> n = NewType()
>>> n.x
"hello"

Probably not the most useful thing, but nice to know.

Edit: Fixed name of new type, should be NewType to be the exact same thing as with class statement.

Torsten Marek
This has a lot of potential for usefulness, e.g., JIT ORMs
Mark Cidade
I use it to generate HTML-Form classes based on a dynamic input. Very nice!
pi
I also used it to generate dynamic django forms (until i discovered formsets)
Jiaaro
Note: all classes are created at runtime. So you can use the 'class' statement within a conditional, or within a function (very useful for creating families of classes or classes that act as closures). The improvement that 'type' brings is the ability to neatly define a dynamically generated set of attributes (or bases).
spookylukey
+25  A: 

Interleaving if and for in list comprehensions

>>> [(x, y) for x in range(4) if x % 2 == 1 for y in range(4)]
[(1, 0), (1, 1), (1, 2), (1, 3), (3, 0), (3, 1), (3, 2), (3, 3)]

I never realized this until I learned Haskell.

Torsten Marek
way cool. http://docs.python.org/tutorial/datastructures.html#list-comprehensions
jimmyorr
Not so cool, you are just having a list comprehension with two for loops. What is so surprising about that?
Olivier
@Olivier: there's an if between the two for loops.
Torsten Marek
@Torsten: well, the list comprehension already comprises a for .. if, so what is so interesting? You can write `[i for i in range(10) if i%2 for j in range(10) if j%2]`, nothing especially cool or interesting. The if in the middle of your example has nothing to do with the second for.
Olivier
I was wondering, is there a way to do this with an else?`[ a for (a, b) in zip(lista, listb) if a == b else: '-' ]`
Austin
In `[ _ for _ in _ if _ ]` the if is a filter; for the example above it would need to be `[ _ if _ else _ for _ ]`.
Dan D
+101  A: 

Context managers and the "with" Statement

Introduced in PEP 343, a context manager is an object that acts as a run-time context for a suite of statements.

Since the feature makes use of new keywords, it was introduced gradually: it is available in Python 2.5 via the __future__ import. Python 2.6 and above (including Python 3) have it available by default.

I have used the "with" statement a lot because I think it's a very useful construct, here is a quick demo:

from __future__ import with_statement

with open('foo.txt', 'w') as f:
    f.write('hello!')

What's happening here behind the scenes, is that the "with" statement calls the special __enter__ and __exit__ methods on the file object. Exception details are also passed to __exit__ if any exception was raised from the with statement body, allowing for exception handling to happen there.

What this does for you in this particular case is that it guarantees that the file is closed when execution falls out of scope of the with suite, regardless if that occurs normally or whether an exception was thrown. It is basically a way of abstracting away common exception-handling code.

Other common use cases for this include locking with threads and database transactions.

Ycros
I wouldn't approve a code review which imported anything from __future__. The features are more cute than useful, and usually they just end up confusing Python newcomers.
a paid nerd
Yes, such "cute" features as nested scopes and generators are better left to those who know what they're doing. And anyone who wants to be compatible with future versions of Python. For nested scopes and generators, "future versions" of Python means 2.2 and 2.5, respectively. For the with statement, "future versions" of Python means 2.6.
Chris B.
This may go without saying, but with python v2.6+, you no longer need to import from __future__. with is now a first class keyword.
fitzgeraldsteele
In 2.7 you can have multiple `with`s :) `with open('filea') as filea, open('fileb') as fileb: ...`
Austin
+9  A: 

Some of the builtin favorites: map(), reduce(), and filter(). All extremely fast and powerful.
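A quick sketch of all three on made-up data, in the Python 2 style used throughout this thread (in Python 3, map and filter return iterators and reduce lives in functools):

nums = [1, 2, 3, 4, 5]

squares = map(lambda n: n * n, nums)          # [1, 4, 9, 16, 25]
evens = filter(lambda n: n % 2 == 0, nums)    # [2, 4]

# reduce folds the sequence into a single value, here a running product.
product = reduce(lambda a, b: a * b, nums)    # 120

print(squares)
print(evens)
print(product)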

daniel
Be careful of reduce(), If you're not careful, you can write really slow reductions.
S.Lott
And be careful of map(), it's deprecated in 2.6 and removed in 3.0.
sli
list comprehensions can achieve everything you can do with any of those functions.
recursive
It can also obfuscate Python code if you abuse them
juanjux
@sli: map still exists in Python 3, as does filter, and reduce exists as functools.reduce.
kaizer.se
@recursive: I defy you to produce a list comprehension/generator expression that performs the action of `reduce()`
TokenMacGuy
+103  A: 

Function argument unpacking

You can unpack a list or a dictionary as function arguments using * and **.

For example:

def draw_point(x, y):
    # do some magic
    pass

point_foo = (3, 4)
point_bar = {'y': 3, 'x': 2}

draw_point(*point_foo)
draw_point(**point_bar)

Very useful shortcut since lists, tuples and dicts are widely used as containers.

e-satis
I remember when I first found this and had a fun night of caffeine binging trying to figure it out. Ahhh, those were the days.
sli
Use this all the time, love it.
Skurmedel
* is also known as the splat operator
Gabe
+90  A: 

Dictionaries have a 'get()' method. If you do d['key'] and key isn't there, you get an exception. If you do d.get('key'), you get back None if 'key' isn't there. You can add a second argument to get that item back instead of None, eg: d.get('key', 0).

It's great for things like adding up numbers:

sum[value] = sum.get(value, 0) + 1

Rory
also, checkout the setdefault method.
Daren Thomas
also, checkout collections.defaultdict class.
J.F. Sebastian
If you are using Python 2.7 or later, or 3.1 or later, check out the Counter class in the collections module. http://docs.python.org/library/collections.html#collections.Counter
mikez302
+2  A: 

If you use exec in a function the variable lookup rules change drastically. Closures are no longer possible but Python allows arbitrary identifiers in the function. This gives you a "modifiable locals()" and can be used to star-import identifiers. On the downside it makes every lookup slower because the variables end up in a dict rather than slots in the frame:

>>> def f():
...  exec "a = 42"
...  return a
... 
>>> def g():
...  a = 42
...  return a
... 
>>> import dis
>>> dis.dis(f)
  2           0 LOAD_CONST               1 ('a = 42')
              3 LOAD_CONST               0 (None)
              6 DUP_TOP             
              7 EXEC_STMT           

  3           8 LOAD_NAME                0 (a)
             11 RETURN_VALUE        
>>> dis.dis(g)
  2           0 LOAD_CONST               1 (42)
              3 STORE_FAST               0 (a)

  3           6 LOAD_FAST                0 (a)
              9 RETURN_VALUE
Armin Ronacher
Just to nitpick: that only applies to bare exec. If you specify the namespace for it to use, eg "d={}; exec "a=42" in d" this won't happen.
Brian
+154  A: 

From 2.5 onwards dicts have a special method __missing__ that is invoked for missing items:

>>> class MyDict(dict):
...  def __missing__(self, key):
...   self[key] = rv = []
...   return rv
... 
>>> m = MyDict()
>>> m["foo"].append(1)
>>> m["foo"].append(2)
>>> dict(m)
{'foo': [1, 2]}

There is also a dict subclass in collections called defaultdict that does pretty much the same but calls a function without arguments for not existing items:

>>> from collections import defaultdict
>>> m = defaultdict(list)
>>> m["foo"].append(1)
>>> m["foo"].append(2)
>>> dict(m)
{'foo': [1, 2]}

I recommend converting such dicts to regular dicts before passing them to functions that don't expect such subclasses. A lot of code uses d[a_key] and catches KeyErrors to check if an item exists which would add a new item to the dict.
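A small sketch of the pitfall being described (the helper function is made up): code that checks for a key by catching KeyError will silently insert new entries into a defaultdict, while a plain dict behaves as expected.

from collections import defaultdict

def has_key_old_style(d, key):
    # Typical existence check in code written for plain dicts.
    try:
        d[key]
        return True
    except KeyError:
        return False

m = defaultdict(list)
print(has_key_old_style(m, 'bar'))         # True -- the lookup silently inserted 'bar'!
print(dict(m))                             # {'bar': []}
print(has_key_old_style(dict(m), 'baz'))   # False -- a plain dict behaves as expected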

Armin Ronacher
This is where I put fork bombs.
Vince
I prefer using setdefault. m={} ; m.setdefault('foo',1)
grayger
@grayger meant this `m={}; m.setdefault('foo', []).append(1)`.
Cristian Ciupitu
There are however cases where passing the defaultdict is very handy. The function may for example iter over the value and it works for undefined keys without extra code, as the default is an empty list.
Marian
+4  A: 

If you are using descriptors on your classes Python completely bypasses __dict__ for that key which makes it a nice place to store such values:

>>> class User(object):
...  def _get_username(self):
...   return self.__dict__['username']
...  def _set_username(self, value):
...   print 'username set'
...   self.__dict__['username'] = value
...  username = property(_get_username, _set_username)
...  del _get_username, _set_username
... 
>>> u = User()
>>> u.username = "foo"
username set
>>> u.__dict__
{'username': 'foo'}

This helps to keep dir() clean.

Armin Ronacher
+146  A: 

If you don't like using whitespace to denote scopes, you can use the C-style {} by issuing:

from __future__ import braces
eduffy
That's evil. :)
Jason Baker
>>> from __future__ import braces -- File "<stdin>", line 1, SyntaxError: not a chance :P
Benjamin W. Smith
ewww, does Markdown not work in comment boxes?!
Benjamin W. Smith
Wait, isn't the future package future additions to the language? So are they planning to add braces at some point?
James McMahon
Dynamic whitespace is half of python's goodness. That's... twisted.
stalepretzel
Very funny! :-)
MiniQuark
that's blasphemy!
Berk D. Demir
I think that we may have a syntactical mistake here, shouldn't that be "from __past__ import braces"?
Bill K
from __cruft__ import braces
digitala
I admit that's funny, but inversely what about the blind? I remember reading a while back of an individual who was blind and frustrated that he/she couldn't use Python due to the lack of brackets for statements.
David
I can understand the use of braces for minification of code :)
Jiaaro
Totally breaks the Python idiom
jpartogi
@David: How are braces better for the blind? In the best-case scenario (Well-indented code, which Python enforces), braces would only add a minuscule amount of clarity. A block of text with whitespace before would be in my opinion much easier to notice than the presence of a small typographical character. The legibility of braces depends on which version of the OTBS that person believes in. The inline braces I prefer would be horrible to read without proper vision.
Alex Brault
@Alex: How does the text reader say the indentation level? You would need a Python specific text reader to tell you "for <stuff> colon newline indent pass newline <next statement>". Now add some indents: "indent indent indent for <stuff> colon newline indent indent indent indent pass newline indent indent indent <next statement>"
jmucchiello
jmucchiello: Yes you need something python-specific. The screen reader should speak the tokens that the python interpreter uses, "intent in", "indent out".
kaizer.se
@David, @jmucchiello: there is a script that adds braces to every block in a comment (`# }`), and in fact I've read of blind people that uses it to allow them to write Python :)
voyager
@David, @jmucchiello: Ah, you meant blind-blind, not just "horribly bad eyesight"-blind.
Alex Brault
I know a few devs that are learning Python (but know a c style language) who would love this. It's just because they don't know any better ;)
Justin
+12  A: 

__slots__ is a nice way to save memory, but it's very hard to get a dict of the values of the object. Imagine the following object:

class Point(object):
    __slots__ = ('x', 'y')

Now that object obviously has two attributes. Now we can create an instance of it and build a dict of it this way:

>>> p = Point()
>>> p.x = 3
>>> p.y = 5
>>> dict((k, getattr(p, k)) for k in p.__slots__)
{'y': 5, 'x': 3}

This however won't work if Point is subclassed and new slots are added. However Python automatically implements __reduce_ex__ to help the copy module. This can be abused to get a dict of values:

>>> p.__reduce_ex__(2)[2][1]
{'y': 5, 'x': 3}
Armin Ronacher
Oh wow, I might actually have good use for this!
sli
Beware that `__reduce_ex__` can be overridden in subclasses, and since it's also used for pickling, it often is. (If you're making data containers, you should think of using it too! or it's younger siblings `__getstate__` and `__setstate__`.)
Ken Arnold
You can still do `object.__reduce_ex__(p, 2)[2][1]` then.
Armin Ronacher
+47  A: 

Python's advanced slicing operation has a barely known syntax element, the ellipsis:

>>> class C(object):
...  def __getitem__(self, item):
...   return item
... 
>>> C()[1:2, ..., 3]
(slice(1, 2, None), Ellipsis, 3)

Unfortunately it's barely useful as the ellipsis is only supported if tuples are involved.

Armin Ronacher
see http://stackoverflow.com/questions/118370/how-do-you-use-the-ellipsis-slicing-syntax-in-python for more info
molasses
That one's really hidden. +1
gorsky
Actually, the ellipsis is quite useful when dealing with multi-dimensional arrays from `numpy` module.
Denilson Sá
+4  A: 

Builtin methods or functions don't implement the descriptor protocol which makes it impossible to do stuff like this:

>>> class C(object):
...  id = id
... 
>>> C().id()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: id() takes exactly one argument (0 given)

However you can create a small bind descriptor that makes this possible:

>>> from types import MethodType
>>> class bind(object):
...  def __init__(self, callable):
...   self.callable = callable
...  def __get__(self, obj, type=None):
...   if obj is None:
...    return self
...   return MethodType(self.callable, obj, type)
... 
>>> class C(object):
...  id = bind(id)
... 
>>> C().id()
7414064
Armin Ronacher
It's simpler and easier to do this as a property, in this case: class C(object): id = property(id)
Piet Delport
lambda is also a good alternative: `class C(object): id = lambda s, *a, **kw: id(*a, **kw)`; and a better version of bind: `def bind(callable): return lambda s, *a, **kw: callable(*a, **kw)`
Lie Ryan
+73  A: 

Named formatting: %-formatting takes a dictionary (and also applies %i/%s etc. validation).

>>> print "The %(foo)s is %(bar)i." % {'foo': 'answer', 'bar':42}
The answer is 42.

>>> foo, bar = 'question', 123

>>> print "The %(foo)s is %(bar)i." % locals()
The question is 123.

And since locals() is also a dictionary, you can simply pass that as a dict and have %-substitutions from your local variables. I think this is frowned upon, but it simplifies things.

New Style Formatting

>>> print "The {foo} is {bar}".format(foo='answer', bar=42)
Pasi Savolainen
Will be phased out and eventually replaced with string's format() method.
Constantin
Named formatting is very useful for translators as they tend to just see the format string without the variable names for context
pixelbeat
does this work in python3?
Victor
Appears to work in Python 3.0.1 (needed to add parentheses around the print call).
Pasi Savolainen
a *hash*, huh? I see where you came from.
shylent
%-formatting won't go away any time soon, but the "format" method on strings is the new (current) best-practices method. It supports everything %-formatting does and most people think the API and the formatting syntax is much nicer. (Myself included.) Python has a third method, string.Template added in 2.4; basically nobody likes that one.
Larry Hastings
%s formatting will not be phased out. str.format() is certainly more pythonic; however, it is actually about 10x slower for simple string replacement. My belief is %s formatting is still best practice.
Kenneth Reitz
+134  A: 

Be careful with mutable default arguments

>>> def foo(x=[]):
...     x.append(1)
...     print x
... 
>>> foo()
[1]
>>> foo()
[1, 1]
>>> foo()
[1, 1, 1]

Instead, you should use a sentinel value denoting "not given" and replace with the mutable you'd like as default:

>>> def foo(x=None):
...     if x is None:
...         x = []
...     x.append(1)
...     print x
>>> foo()
[1]
>>> foo()
[1]
Jason Baker
That's definitely one of the more nasty hidden features. I've run into it from time to time.
Torsten Marek
I found this a lot easier to understand when I learned that the default arguments live in a tuple that's an attribute of the function, e.g. `foo.func_defaults`. Which, being a tuple, is immutable.
Robert Rossney
Could you explain how it happens in detail?
grayger
@grayger: As the def statement is executed its arguments are evaluated by the interpreter. This creates (or rebinds) a name to a code object (the suite of the function). However, the default arguments are instantiated as objects at the time of definition. This is true of any type of defaulted object, but only significant (exposing visible semantics) when the object is mutable. There's no way of re-binding that default argument name in the function's closure (although it can obviously be overridden for any call, or the whole function can be re-defined).
Jim Dennis
@Robert of course the arguments tuple might be immutable, but the objects it point to are not necessarily immutable.
poolie
One quick hack to make your initialization a little shorter: x = x or []. You can use that instead of the 2 line if statement.
dave mankoff
Default values also become nasty if you use more than one of them. For example, say you wrote a function like `def f(a=[], b=[], c=[]): a.append(3)`. You will have inadvertently changed the values of a, b and c without having touched them. This is because similar default values seem to point to the same thing in memory. Nasty bugs arise
inspectorG4dget
this feature / wart or what you'd call it is one of the most important things to understand when you start learning python. it directly connects you to understanding what is done when in a program, and without that knowledge, any code beyond a pretty low threshold of complexity cannot be written.
flow
+22  A: 

Tuple unpacking:

>>> (a, (b, c), d) = [(1, 2), (3, 4), (5, 6)]
>>> a
(1, 2)
>>> b
3
>>> c, d
(4, (5, 6))

More obscurely, you can do this in function arguments (in Python 2.x; Python 3.x will not allow this anymore):

>>> def addpoints((x1, y1), (x2, y2)):
...     return (x1+x2, y1+y2)
>>> addpoints((5, 0), (3, 5))
(8, 5)
ianb
For what it's worth, tuple unpacking in function definitions is going away in Python 3.0
Ryan
why is it going away?
interstar
Mostly because it makes the implementation really nasty, as far as I understand. (E.g. in inspect.getargs in the standard library - the normal path (no tuple args) is about 10 lines, and there are about 30 extra lines for handling tuple args, which only get used occasionally.) Makes me sad though.
wilberforce
Looks like they are removing some of the batteries in 3.0 :/ .
FeatureCreep
It's good that they're removing it, because it's ugly and you can just emulate this by typing `x1, x2 = x; y1, y2 = y` (if you have x, y arguments).
Joschua
+2  A: 

unzip is not needed in Python

Someone blogged about Python not having an unzip function to go with zip(). unzip is straightforward to calculate because:

>>> t1 = (0,1,2,3)
>>> t2 = (7,6,5,4)
>>> [t1,t2] == zip(*zip(t1,t2))
True

On reflection though, I'd rather have an explicit unzip().

Paddy3118
def unzip(x): return zip(*x) Done!
bukzor
The solution is slightly subtle (I can understand the point of view of anyone who asks for it), but I can also see why it would be redundant
inspectorG4dget
+64  A: 

To add more Python modules (especially 3rd-party ones), most people seem to use PYTHONPATH environment variables, or they add symlinks or directories in their site-packages directories. Another way is to use *.pth files. Here's the official Python docs' explanation:

"The most convenient way [to modify python's search path] is to add a path configuration file to a directory that's already on Python's path, usually to the .../site-packages/ directory. Path configuration files have an extension of .pth, and each line must contain a single path that will be appended to sys.path. (Because the new paths are appended to sys.path, modules in the added directories will not override standard modules. This means you can't use this mechanism for installing fixed versions of standard modules.)"

dgrant
I never made the connection between that .pth file in the site-packages directory from setuptools and this idea. awesome.
dave paola
+49  A: 

Exception else clause:

try:
  put_4000000000_volts_through_it(parrot)
except Voom:
  print "'E's pining!"
else:
  print "This parrot is no more!"
finally:
  end_sketch()

The use of the else clause is better than adding additional code to the try clause because it avoids accidentally catching an exception that wasn’t raised by the code being protected by the try ... except statement.

See http://docs.python.org/tut/node10.html

Constantin
+1 this is awesome. If the try block executes without entering any exception blocks, then the else block is entered. And then of course, the finally block is executed
inspectorG4dget
+136  A: 

The for...else idiom (see http://docs.python.org/ref/for.html )

for i in foo:
    if i == 0:
        break
else:
    print("i was never 0")

The "else" block is normally executed at the end of the for loop, unless the loop is exited with break.

The above code could be emulated as follows:

found = False
for i in foo:
    if i == 0:
        found = True
        break
if not found: 
    print("i was never 0")
rlerallut
I think the for/else syntax is awkward. It "feels" as if the else clause should be executed if the body of the loop is never executed.
codeape
It becomes less awkward if we think of it as for/if/else, with the else belonging to the if. And it's so useful an idiom that I wonder other language designers didn't think of it!
sundar
ah. Never saw that one! But I must say it is a bit of a misnomer. Who would expect the else block to execute only if break never does? I agree with codeape: It looks like else is entered for empty foos.
Daren Thomas
I've added an equivalent code that is not using `else`.
J.F. Sebastian
I find this much less useful than if the else clause executed if the for loop didn't. I've wanted that so many times, but I've never found a case I wanted to use this.
Draemon
Anyone remember the FOR var … NEXT var … END FOR var of Sinclair QL's SuperBasic? Everything between NEXT and END FOR would execute at the end of the loop, unless an EXIT FOR was issued. *That* syntax was cleaner :)
ΤΖΩΤΖΙΟΥ
seems like the keyword should be finally, not else
Jiaaro
Except finally is already used in a way where that suite is always executed.
Roger Pate
This is really convenient, and I use it, but it needs an explaining comment each time.
kaizer.se
Should definitely not be 'else'. Maybe 'then' or something, and then 'else' for when the loop was never executed.
Tor Valamo
I used this on a programming assignment for a class and lost points because the grader had never seen it before... totally got those back.
Matt Nichols
Hey, people forgot to mention that this idiom also works for `while` loops.
Denilson Sá
I've always thought a `for...then...else` construct would be better, where `then` is only executed if the `for` is successful, `else` when the for cannot be entered (eg: `for i in []; pass; else; print "empty list"`. But then I'm a novice. :)
digitala
Does this work ONLY if there is a break statement in the for loop or are there any other circumstances where this trick works this way?
inspectorG4dget
@inspectorG4dget: it works fine without a break... but serves no purpose if there's no break. (The code in the else might as well just be outdented one level)
jkerian
@jkerian: Many thanks. I observed that behavior, but was wondering more along the lines of "would this work the same way if return was used instead of break?"
inspectorG4dget
i shun this feature. every time i want to use it i have to read up on it, and then i still find it hard to get right.
flow
+30  A: 

Many people don't know about the "dir" function. It's a great way to figure out what an object can do from the interpreter. For example, if you want to see a list of all the string methods:

>>> dir("foo")
['__add__', '__class__', '__contains__', (snipped a bunch), 'title',
 'translate', 'upper', 'zfill']

And then if you want more information about a particular method you can call "help" on it.

>>> help("foo".upper)
    Help on built-in function upper:

upper(...)
    S.upper() -> string

    Return a copy of the string S converted to uppercase.
lacker
dir() is essential for development. For large modules I've enhanced it to add filtering. See http://www.pixelbeat.org/scripts/inpy
pixelbeat
You can also directly use help: help('foo')
yuriks
If you use IPython, you can append a question mark to get help on a variable/method.
akaihola
see: An alternative to Python's dir(). Easy to type; easy to read! For humans only: http://github.com/inky/see
compie
I call this python's man pages and can also be implemented to work when 'man' is called rather than 'help'
inspectorG4dget
+1  A: 
class AttrDict(dict):

    def __getattr__(self, name):
        if name in self:
            return self[name]
        raise AttributeError('%s not found' % name)

    def __setattr__(self, name, value):
        self[name] = value

    def __delattr__(self, name):
        del self[name]

person = AttrDict({'name': 'John Doe', 'age': 66})
print person['name']
print person.name

person.name = 'Frodo G'
print person.name

del person.age

print person
amix
no title or explanation? where is the hidden feature here?
Sanjay Manohar
+19  A: 

Python's sort function sorts tuples correctly (i.e. using the familiar lexicographical order):

a = [(2, "b"), (1, "a"), (2, "a"), (3, "c")]
print sorted(a)
#[(1, 'a'), (2, 'a'), (2, 'b'), (3, 'c')]

Useful if you want to sort a list of persons by age and then by name.
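A small sketch of that persons example (the data is made up): building an (age, name) tuple as the sort key gives exactly the age-then-name ordering.

people = [('Carol', 30), ('Alice', 25), ('Bob', 25)]

# Sort by age first, then by name, by making the key an (age, name) tuple.
print(sorted(people, key=lambda p: (p[1], p[0])))
# [('Alice', 25), ('Bob', 25), ('Carol', 30)]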

amix
This is a consequence of tuple comparison working correctly in general, i.e. (1, 2) < (1, 3).
Constantin
This is useful for version tuples: (1, 9) < (1, 10).
Roger Pate
+80  A: 

Conditional Assignment

x = 3 if (y == 1) else 2

It does exactly what it sounds like: "assign 3 to x if y is 1, otherwise assign 2 to x". Note that the parens are not necessary, but I like them for readability. You can also chain it if you have something more complicated:

x = 3 if (y == 1) else 2 if (y == -1) else 1

Though at a certain point, it goes a little too far.

tghw
The assignment is not the special part. You could just as easily do something like: return 3 if (y == 1) else 2.
Brian
An alternate way to do this is: y == 1 and 3 or 2
yuriks
That alternate way is fraught with problems. For one thing, normally this works: if y == 1: # 3, else if y == 70: # 2. Why? y == 1 is only evaluated, THEN y == 70 if y == 1 is false. In this statement: y == 1 and 3 or 2 - 3 and 2 are evaluated as well as y == 1.
kylebrooks
That alternate way is the first time I've seen obfuscated Python.
Craig McQueen
Kylebrooks: It doesn't in that case, boolean operators short circuit. It will only evaluate 2 if bool(3) == False.
RoadieRich
this backwards-style coding confusing me. something like `x = ((y == 1) ? 3 : 2)` makes more sense to me
Mark
I feel just the opposite of @Mark, C-style ternary operators have always confused me, is the right side or the middle what gets evaluated on a false condition? I much prefer Python's ternary syntax.
Jeffrey Harris
@Mark "x = (y == 1) and 3 or 2" is also valid.
Kyle Ambroff
I think C-style ternary operators are simpler, more english-like: `'am I drunk' ? 'yes, make out with her' : 'no, dont even think about it'`
Infinity
`x = 3 if (y == 1) else 2` - I find that in many cases, `x = (2, 3)[y==1]` is actually more readable (normally with really long statements, so you can keep the results (2, 3) together).
Wallacoloo
+31  A: 
  • The underscore, it contains the most recent output value displayed by the interpreter (in an interactive session):
>>> (a for a in xrange(10000))
<generator object at 0x81a8fcc>
>>> b = 'blah'
>>> _
<generator object at 0x81a8fcc>
  • A convenient Web-browser controller:
>>> import webbrowser
>>> webbrowser.open_new_tab('http://www.stackoverflow.com')
  • A built-in http server. To serve the files in the current directory:
python -m SimpleHTTPServer 8000
  • AtExit
>>> import atexit
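For example (a minimal sketch; the goodbye function is just an illustration), atexit.register arranges for a callback to run when the interpreter exits normally:

import atexit

def goodbye():
    print "Goodbye!"   # runs automatically when the interpreter exits normally

atexit.register(goodbye)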
Tzury Bar Yochay
Why not just SimpleHTTPServer?
Andrew Szeto
worth noting that the `_` is available only in interactive mode. when running scripts from a file, `_` has no special meaning.
TokenMacGuy
+3  A: 

__getattr__()

__getattr__ is a really nice way to make generic classes, which is especially useful if you're writing an API. For example, in the FogBugz Python API, __getattr__ is used to pass method calls on to the web service seamlessly:

class FogBugz:
    ...

    def __getattr__(self, name):
        # Let's leave the private stuff to Python
        if name.startswith("__"):
            raise AttributeError("No such attribute '%s'" % name)

        if not self.__handlerCache.has_key(name):
            def handler(**kwargs):
                return self.__makerequest(name, **kwargs)
            self.__handlerCache[name] = handler
        return self.__handlerCache[name]
    ...

When someone calls FogBugz.search(q='bug'), they don't actually call a search method. Instead, __getattr__ handles the call by creating a new function that wraps the __makerequest method, which crafts the appropriate HTTP request to the web API. Any errors will be dispatched by the web service and passed back to the user.

tghw
You can also create semi-custom types in this manner.
sli
+196  A: 

enumerate

Wrap an iterable with enumerate and it will yield the item along with its index.

For example:


>>> a = ['a', 'b', 'c', 'd', 'e']
>>> for index, item in enumerate(a): print index, item
...
0 a
1 b
2 c
3 d
4 e
>>>

Dave
I'm surprised this isn't covered routinely in tutorials talking about python lists.
Draemon
it's such a cool feature/function
hasen j
i think it's been deprecated in python3
Berry Tsakala
And all this time I was coding this way: for i in range(len(a)): ... and then using a[i] to get the current item.
fmartin
@Berry Tsakala: To my knowledge, it has not been deprecated.
JAB
shorter than using zip and count for index, item in zip(itertools.count(), a): print(index,item)
RamyenHead
Great feature, +1. @Draemon: this is actually covered in the Python tutorial that comes installed with Python (there's a section on various looping constructs), so I'm always surprised that this is so little known.
Edan Maor
The nice thing about this is when you're iterating through more than one loop simultaneously
dassouki
Holy crap this is awesome. for i in xrange(len(a)): has always been my least favorite python idiom.
Personman
+16  A: 

Ternary operator

>>> 'ham' if True else 'spam'
'ham'
>>> 'ham' if False else 'spam'
'spam'

This was added in 2.5, prior to that you could use:

>>> True and 'ham' or 'spam'
'ham'
>>> False and 'ham' or 'spam'
'spam'

However, if the values you want to work with would be considered false, there is a difference:

>>> [] if True else 'spam'
[]
>>> True and [] or 'spam'
'spam'
Alexander Kojevnikov
That's "ternary".
recursive
Prior to 2.5, "foo = bar and 'ham' or 'spam'"
a paid nerd
+8  A: 

You can build up a dictionary from a set of length-2 sequences. Extremely handy when you have a list of keys and a list of values.

>>> dict([ ('foo','bar'),('a',1),('b',2) ])
{'a': 1, 'b': 2, 'foo': 'bar'}

>>> names = ['Bob', 'Marie', 'Alice']
>>> ages = [23, 27, 36]
>>> dict(zip(names, ages))
{'Alice': 36, 'Bob': 23, 'Marie': 27}
Dan
I replaced my code:

self.data = {}
_i = 0
for keys in self.VDESC.split():
    self.data[keys] = _data[_i]
    _i += 1

with this one-liner :)

self.data = dict(zip(self.VDESC.split(), _data))

Thanks for the handy tip.
Gökhan Sever
Also helps in Python2.x where there is no dict comprehension syntax. Sou you can write `dict((x, x**2) for x in range(10))`.
Marian
+1  A: 

Tuple unpacking in for loops, list comprehensions and generator expressions:

>>> l=[(1,2),(3,4)]
>>> [a+b for a,b in l ] 
[3,7]

Useful in this idiom for iterating over (key,data) pairs in dictionaries:

d = { 'x':'y', 'f':'e'}
for name, value in d.items():  # one can also use iteritems()
   print "name:%s, value:%s" % (name,value)

prints:

name:x, value:y
name:f, value:e
Rafał Dowgird
+10  A: 

"Unpacking" to function parameters

def foo(a, b, c):
        print a, b, c

bar = (3, 14, 15)
foo(*bar)

When executed prints:

3 14 15
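
The same works for keyword arguments with **; a minimal sketch reusing foo from above (the dict name baz is just for illustration):

baz = {'a': 3, 'b': 14, 'c': 15}
foo(**baz)   # also prints: 3 14 15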
csl
This is the canonical alternative to the old "apply()" built-in.
Jim Dennis
+1  A: 

Objects in boolean context

Empty tuples, lists, dicts, strings and many other objects are equivalent to False in boolean context (and non-empty are equivalent to True).

empty_tuple = ()
empty_list = []
empty_dict = {}
empty_string = ''
empty_set = set()
if empty_tuple or empty_list or empty_dict or empty_string or empty_set:
  print 'Never happens!'

This allows logical operations to return one of its operands instead of True/False, which is useful in some situations:

s = t or "Default value" # s will be assigned "Default value"
                         # if t is false/empty/none
Constantin
actually this is discouraged, you should use the "new" s = t if t else "default value"
Tom
+1  A: 

The first-classness of everything ('everything is an object'), and the mayhem this can cause.

>>> x = 5
>>> y = 10
>>> 
>>> def sq(x):
...   return x * x
... 
>>> def plus(x):
...   return x + x
... 
>>> (sq,plus)[y>x](y)
20

The last line creates a tuple containing the two functions, then evaluates y>x (True) and uses that as an index to the tuple (by casting it to an int, 1), and then calls that function with parameter y and shows the result.

For further abuse, if you were returning an object with an index (e.g. a list) you could add further square brackets on the end; if the contents were callable, more parentheses, and so on. For extra perversion, use the result of code like this as the expression in another example (i.e. replace y>x with this code):

(sq,plus)[y>x](y)[4](x)

This showcases two facets of Python - the 'everything is an object' philosophy taken to the extreme, and the methods by which improper or poorly-conceived use of the language's syntax can lead to completely unreadable, unmaintainable spaghetti code that fits in a single expression.

Dan Udey
why would you ever do this? it is hardly a valid criticism of a language to show how it can be intentionally abused. accidental abuse would be valid, but this would never happen by accident.
Christian Oudard
@Gorgapor: Python's consistency and lack of exceptions and special cases is what makes it easy to learn and, to me at least, beautiful. Any powerful tool, used abusively can cause 'mayhem'. Contrary to your opinion, I think the ability to index into a sequence of functions and call it, in a single expression is a powerful and useful idiom, and I've used it more than once, with explanatory comments.
Don O'Donnell
@Don: Your use case, indexing a sequence of functions, is a good one, and very useful. Dan Udey's use case, using a boolean as an index into an inline tuple of functions, is a horrible and useless one, which is needlessly obfuscated.
Christian Oudard
@Gorganpor: Sorry, I meant to address my comment to Dan Udey, not you. I agree entirely with you.
Don O'Donnell
+14  A: 

Assigning and deleting slices:

>>> a = range(10)
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> a[:5] = [42]
>>> a
[42, 5, 6, 7, 8, 9]
>>> a[:1] = range(5)
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> del a[::2]
>>> a
[1, 3, 5, 7, 9]
>>> a[::2] = a[::-2]
>>> a
[9, 3, 5, 7, 1]

Note: when assigning to extended slices (s[start:stop:step]), the assigned iterable must have the same length as the slice.

Torsten Marek
+19  A: 

dict's constructor accepts keyword arguments:

>>> dict(foo=1, bar=2)
{'foo': 1, 'bar': 2}
So long as the keyword arguments are valid Python identifiers (names). You can't use: dict(1="one", two=2 ...) because the "1" is not a valid identifier even though it's a perfectly valid dictionary key.
Jim Dennis
A: 

Not an out-of-the-box feature, but Pyrex is incredibly useful.

fivebells
Python core only please!
jpartogi
+285  A: 

Get the python regex parse tree to debug your regex

Regular expressions are a great feature of Python, but debugging them can be a pain, and it's just too easy to get a regex wrong.

Fortunately, Python has a really hidden feature to print the regex parse tree: pass the undocumented, experimental, hidden flag re.DEBUG (actually, 128) to re.compile.

>>> re.compile("^\[font(?:=(?P<size>[-+][0-9]{1,2}))?\](.*?)[/font]",
    re.DEBUG)
at at_beginning
literal 91
literal 102
literal 111
literal 110
literal 116
max_repeat 0 1
  subpattern None
    literal 61
    subpattern 1
      in
        literal 45
        literal 43
      max_repeat 1 2
        in
          range (48, 57)
literal 93
subpattern 2
  min_repeat 0 65535
    any None
in
  literal 47
  literal 102
  literal 111
  literal 110
  literal 116

Once you understand the syntax, you can spot your errors. There we can see that I forgot to escape the [] in [/font].

Of course you can combine it with whatever flags you want, like commented regexes:

>>> re.compile("""
 ^              # start of a line
 \[font         # the font tag
 (?:=(?P<size>  # optional [font=+size]
 [-+][0-9]{1,2} # size specification
 ))?
 \]             # end of tag
 (.*?)          # text between the tags
 \[/font\]      # end of the tag
 """, re.DEBUG|re.VERBOSE|re.DOTALL)
BatchyX
Instead of 128 you can also use re.DEBUG. Be aware that the comment in the source says this flag is experimental and you shouldn't rely on it.
Andreas Thomas
If you can use re.DEBUG, then you should. It may be experimental, but it's still the symbolic name, and the actual 128 value is just as experimental, but less readable, and more subject to change.
Lee B
improved the example, thanks ;)
BatchyX
The more idiomatic way to combine flags is using the OR operator, so it should probably be "re.DEBUG | re.VERBOSE | re.DOTALL" instead. They're equivalent in this case, but in other cases where you might want to set a flag in addition to a group of flags that *might* already have it, the OR operator is essential.
sysrqb
This is super handy while parsing HTML ! :)
extraneon
Except parsing HTML using regular expression is slow and painful. Even the built-in 'html' parser module doesn't use regexes to get the work done. And if the html module doesn't please you, there is plenty of XML/HTML parser modules that does the job without having to reinvent the wheel.
BatchyX
A link to documentation on the output syntax would be great.
Personman
This should be an official part of Python, not experimental... RegEx is always tricky and being able to trace what's happening is really helpful.
Cahit
+35  A: 

Built-in base64, zlib, and rot13 codecs

Strings have encode and decode methods. Usually this is used for converting str to unicode and vice versa, e.g. with u = s.decode('utf8'). But there are some other handy builtin codecs. Compression and decompression with zlib (and bz2) is available without an explicit import:

>>> s = 'a' * 100
>>> s.encode('zlib')
'x\x9cKL\xa4=\x00\x00zG%\xe5'

Similarly you can encode and decode base64:

>>> 'Hello world'.encode('base64')
'SGVsbG8gd29ybGQ=\n'
>>> 'SGVsbG8gd29ybGQ=\n'.decode('base64')
'Hello world'

And, of course, you can rot13:

>>> 'Secret message'.encode('rot13')
'Frperg zrffntr'
spiv
Sadly this will stop working in Python 3
Marius Gedminas
Oh, will it stop working? That's too bad :/. I was just thinking how great this feature was. Then I saw your comment.
FeatureCreep
Awe, the base64 one was pretty useful in interactive sessions handling data from the web.
Longpoke
In my opinion it's some type of en/decoding, but on the other hand there should be "only one way to do it", and I think these things are better put in their own module!
Joschua
+21  A: 

Obviously, the antigravity module. xkcd #353

tadeusz
Probably my most used module. After the soul module, of course.
sli
Which actually works. Try putting "import antigravity" in the newest Py3K.
Andrew Szeto
@Andrew Szeto... what does it do?
Jiaaro
@Jim Robert: It opens up the webbrowser to the xkcd site ;)
poke
+7  A: 

Generators

I think that a lot of beginning Python developers pass over generators without really grasping what they're for or getting any sense of their power. It wasn't until I read David M. Beazley's PyCon presentation on generators (it's available here) that I realized how useful (essential, really) they are. That presentation illuminated what was for me an entirely new way of programming, and I recommend it to anyone who doesn't have a deep understanding of generators.
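
For a concrete taste, here is a minimal sketch of a generator (my own toy example, not taken from the presentation): values are produced lazily, one per iteration.

def countdown(n):
    while n > 0:
        yield n        # execution pauses here until the next value is requested
        n -= 1

for i in countdown(3):
    print i            # prints 3, 2, 1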

Robert Rossney
Wow! My brain is fried and that was just the first 6 parts. Starting in 7 I had to start drawing pictures just to see if I really understood what was happening with multi-process / multi-thread / multi-machine processing pipelines. Amazing stuff!
Peter Rowell
+1 for the link to the presentation
Mark Heath
+61  A: 

Interactive Interpreter Tab Completion

try:
    import readline
except ImportError:
    print "Unable to load readline module."
else:
    import rlcompleter
    readline.parse_and_bind("tab: complete")


>>> class myclass:
...    def function(self):
...       print "my function"
... 
>>> class_instance = myclass()
>>> class_instance.<TAB>
class_instance.__class__   class_instance.__module__
class_instance.__doc__     class_instance.function
>>> class_instance.f<TAB>unction()

You will also have to set a PYTHONSTARTUP environment variable.

mjard
This is a very useful feature. So much so I've a simple script to enable it (plus a couple of other introspection enhancements):http://www.pixelbeat.org/scripts/inpy
pixelbeat
IPython gives you this plus tons of other neat stuff
akaihola
@akaihola read the main qn.
Sriram
This would have been more useful at pdb prompt than the regular python prompt (as IPython serves that purpose anyway). However, this doesn't seem to work at the pdb prompt, probably because pdb binds its own for tab (which is less useful). I tried calling parse_and_bind() at the pdb prompt, but it still didn't work. The alternative of getting pdb prompt with IPython is more work so I tend to not use it.
haridsv
Found this recipe, but this didn't work for me (using python 2.6): http://code.activestate.com/recipes/498182/
haridsv
@haridsv -- `easy_install ipdb` -- then you can use `import ipdb; ipdb.set_trace()`
Doug Harris
For me the best tip was to use the try:except:else:. I've forgotten about the else in the try block
neves
+19  A: 

Python has GOTO

...and it's implemented by an external pure-Python module :)

from goto import goto, label
for i in range(1, 10):
    for j in range(1, 20):
        for k in range(1, 30):
            print i, j, k
            if k == 3:
                goto .end # breaking out from a deeply nested loop
label .end
print "Finished"
Constantin
Maybe it is best that this feature remains hidden.
James McMahon
Well, the actual hidden feature here is mechanism used to implement GOTO.
Constantin
Surely, for breaking out of a nested loop you can just raise an exception, no?
shylent
+1 first one I actually did not know about.
TokenMacGuy
@shylent: Exceptions should be exceptional. For that reason they are optimized for the case that they are not thrown. If you expect the condition to occur in the course of normal processing, you should use another method
TokenMacGuy
@shylent, the correct way to break out of a nested loop is to put the loop into a function, and return from the function
Christian Oudard
+5  A: 

Taking advantage of Python's dynamic nature to have an app's config files in Python syntax. For example, if you had the following in a config file:

{
  "name1": "value1",
  "name2": "value2"
}

Then you could trivially read it like:

config = eval(open("filename").read())
pixelbeat
I agree. I've started using a settings.py or config.py file which I then load as a module. Sure beats the extra steps of parsing some other file format.
monkut
I can see this becoming a security issue.
rmw1985
It could be, but sometimes it's not. In those cases, it's awesome.
recursive
Python can be a much more expressive configuration language than any amount of XML or INI files. I'm trying to avoid explicit config, with just an invoke script that does “import myapp; app= myapp.Application(...); app.run()”. Options default sensibly but can be changed using constructor args.
bobince
(This assumes that run-time configuration in the app itself is stored in a database. More significant configuration is possible through allowing the user to subclass Application and set properties/methods on the subclass.)
bobince
That's a bold action for even non-hostile environments. eval() is a loaded gun, that needs intensive caution while handling. On the other hand, using JSON (now in 2.6 stdlib) is much more secure and portable for carrying configuration.
Berk D. Demir
I would never approve a code review which contained an `eval`.
a paid nerd
@Richard Waite: It's usually a security issue if an adversary can modify your config file...
Longpoke
I agree, this is extremely useful in many quick'n'dirty scripts. But it's better to use execfile instead of eval+open+read.
Jukka Suomela
Even in a trusted environment, this is an unacceptable security issue. If you need to parse config files, use `ConfigParser` - 10 lines of code give you a full blown mechanism for creating universally readable configuration file. Your approach is really not portable and not extensible.
Arrieta
Then why does Django store site settings in a .py file (including db password)? Are they out of their minds, are they not using eval(), or is there something I'm missing?
Agos
I personally don't like using `eval()` for anything, especially settings. I always wrap Django settings around `ConfigParser` and save actual information in a permission-guarded file. Like Rasmus Lerdorf said "If eval() is the answer, you’re almost certainly asking the wrong question."
AdmiralNemo
+3  A: 

Nested Function Parameter Re-binding

def create_printers(n):
    for i in xrange(n):
        def printer(i=i): # Doesn't work without the i=i
            print i
        yield printer
ironfroggy
it works without it, but differently. :-)
kaizer.se
No, it doesn't work without it. Omit the i=i and see the difference between map(apply, create_printers(10)) and map(apply, list(create_printers(10))), where converting to a list consumes the generator and now all ten printer functions have i bound to the same value: 9, where calling them one at a time calls them before the next iteration of the generator changes the int i is bound to in the outer scope.
ironfroggy
+6  A: 

A slight misfeature of python. The normal fast way to join a list of strings together is,

''.join(list_of_strings)
Martin Beckett
there are very good reasons that this is a method of string instead of a method of list. this allows the same function to join any iterable, instead of duplicating join for every iterable type.
Christian Oudard
Yes I know why it does - but would anyone discover this if they hadn't been told?
Martin Beckett
Discover? It's pretty hard to remember too, and I've used python since before there were methods om strings.
kaleissin
If this is too ugly for you to cope with, you can write the very same thing as `str.join('',list_of_strings)` but other pythonistas may scorn you for trying to write java.
TokenMacGuy
@TokenMacGuy: the reason why ''.join([...]) is preferred is because many people often mixes up the order of the arguments in string.join(..., ...); by putting ''.join() things become clearer
Lie Ryan
I'm fairly certain that the only reason most pythonistas use `"".join(iterable)` over `str.join("",iterable)` is because it's 4 characters shorter.
TokenMacGuy
+5  A: 

import antigravity

Gurch
this answer was already given
Davide
+1  A: 

Private methods and data hiding (encapsulation)

There's a common idiom in Python of denoting methods and other class members that are not intended to be part of the class's external API by giving them names that start with underscores. This is convenient and works very well in practice, but it gives the false impression that Python does not support true encapsulation of private code and/or data. In fact, Python automatically gives you lexical closures, which make it very easy to encapsulate data in a much more bulletproof way when the situation really warrants it. Here's a contrived example of a class that makes use of this technique:

class MyClass(object):
  def __init__(self):

    privateData = {}

    self.publicData = 123

    def privateMethod(k):
      print privateData[k] + self.publicData

    def privilegedMethod():
      privateData['foo'] = "hello "
      privateMethod('foo')

    self.privilegedMethod = privilegedMethod

  def publicMethod(self):
    print self.publicData

And here's a contrived example of its use:

>>> obj = MyClass()
>>> obj.publicMethod()
123
>>> obj.publicData = 'World'
>>> obj.publicMethod()
World
>>> obj.privilegedMethod()
hello World
>>> obj.privateMethod()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'MyClass' object has no attribute 'privateMethod'
>>> obj.privateData
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'MyClass' object has no attribute 'privateData'

The key is that privateMethod and privateData aren't really attributes of obj at all, so they can't be accessed from outside, nor do they show up in dir() or similar. They're local variables in the constructor, completely inaccessible outside of __init__. However, because of the magic of closures, they really are per-instance variables with the same lifetime as the object with which they're associated, even though there's no way to access them from outside except (in this example) by invoking privilegedMethod. Often this sort of very strict encapsulation is overkill, but sometimes it really can be very handy for keeping an API or a namespace squeaky clean.

In Python 2.x, the only way to have mutable private state is with a mutable object (such as the dict in this example). Many people have remarked on how annoying this can be. Python 3.x will remove this restriction by introducing the nonlocal keyword described in PEP 3104.

zaphod
this is almost never a good idea.
Christian Oudard
"They're local variables in the constructor, completely inaccessible outside of __init__." Not true: >>> [c.cell_contents for c in obj.privilegedMethod.func_closure] --> [{'foo': 'hello '}, <function privateMethod at 0x65530>]
Miles
The right way of preventing attribute access would be have a `__getattribute__` or `__getattr__` sentinal and route accepted calls accordingly. Again, secrecy and python isnt a good idea.
jeffjose
+17  A: 

Using keyword arguments as assignments

Sometimes one wants to build a range of functions depending on one or more parameters. However this might easily lead to closures all referring to the same object and value:

funcs = [] 
for k in range(10):
     funcs.append( lambda: k)

>>> funcs[0]()
9
>>> funcs[7]()
9

This behaviour can be avoided by turning the lambda expression into a function depending only on its arguments. A keyword parameter stores the current value that is bound to it. The function call doesn't have to be altered:

funcs = [] 
for k in range(10):
     funcs.append( lambda k = k: k)

>>> funcs[0]()
0
>>> funcs[7]()
7
A less hackish way to do that (imho) is just to use a separate function to manufacture lambdas that don't close on a loop variable. Like this: `def make_lambda(k): return lambda: k`.
Jason Orendorff
+4  A: 

Method replacement for object instance

You can replace methods of already created object instances. It allows you to create object instance with different (exceptional) functionality:

>>> class C(object):
...     def fun(self):
...         print "C.a", self
...
>>> inst = C()
>>> inst.fun()  # C.a method is executed
C.a <__main__.C object at 0x00AE74D0>
>>> instancemethod = type(C.fun)
>>>
>>> def fun2(self):
...     print "fun2", self
...
>>> inst.fun = instancemethod(fun2, inst, C)  # Now we replace C.fun with fun2
>>> inst.fun()  # ... and fun2 is executed
fun2 <__main__.C object at 0x00AE74D0>

As we can see, C.fun was replaced by fun2() in the inst instance (self didn't change).

Alternatively we may use the new module, but it's deprecated since Python 2.6:

>>> def fun3(self):
...     print "fun3", self
...
>>> import new
>>> inst.fun = new.instancemethod(fun3, inst, C)
>>> inst.fun()
fun3 <__main__.C object at 0x00AE74D0>

Note: This solution shouldn't be used as a general replacement for the inheritance mechanism! But it may be very handy in some specific situations (debugging, mocking).

Warning: This solution will not work for built-in types and for new style classes using slots.

Tupteq
+21  A: 

Referencing a list comprehension as it is being built...

You can reference a list comprehension as it is being built by the symbol '_[1]'. For example, the following function unique-ifies a list of elements without changing their order by referencing its list comprehension.

def unique(my_list):
    return [x for x in my_list if x not in locals()['_[1]']]
Jake
Nifty trick. Do you know if this is accepted behavior or is it more of a dirty hack that may change in the future? The underscore makes me think the latter.
Kiv
Interesting. I think it'd be a dirty hack of the locals() dictionary, but I'd be curious to know for sure.
Rory
Brilliant, I was literally just looking for this yesterday!
Rob Golding
not a good idea for algorithmic as well as practical reasons. Algorithmically, this will give you a linear search of the list so far on every iteration, changing your O(n) loop into O(n**2); much better to just make the list into a set afterwards. Practically speaking, it's undocumented, may change, and probably doesn't work in ironpython/jython/pypy .
llimllib
This is an undocumented implementation detail, not a hidden feature. It would be a bad idea to rely on this.
Marius Gedminas
there is a set() for that
valya
+26  A: 

set/frozenset

Probably an easily overlooked python builtin is "set/frozenset".

Useful when you have a list like this, [1,2,1,1,2,3,4] and only want the uniques like this [1,2,3,4].

Using set() that's exactly what you get:

>>> x = [1,2,1,1,2,3,4] 
>>> 
>>> set(x) 
set([1, 2, 3, 4]) 
>>>
>>> for i in set(x):
...     print i
...
1
2
3
4

And of course to get the number of uniques in a list:

>>> len(set([1,2,1,1,2,3,4]))
4

You can also find if a list is a subset of another list using, surprise, set.issubset():

>>> set([1,2,3,4]).issubset([0,1,2,3,4,5])
True

For more details: http://docs.python.org/library/stdtypes.html#set

monkut
Also useful in cases where a dictionary were used only to test if a value is there.
Jacek Konieczny
I use set about as much as tuple and list.
Longpoke
A: 

Functional support.

Generators and generator expressions, specifically.

Ruby made this mainstream again, but Python can do it just as well. Not as ubiquitous in the libraries as in Ruby, which is too bad, but I like the syntax better, it's simpler.

Because they're not as ubiquitous, I don't see as many examples out there on why they're useful, but they've allowed me to write cleaner, more efficient code.
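
As a minimal sketch of the efficiency point: a generator expression feeds sum() one value at a time, so no intermediate list is ever built.

total = sum(x * x for x in xrange(1000000))  # no million-element list is materialized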

+21  A: 

While debugging complex data structures, the pprint module comes in handy.

Quoting from the docs..

>>> import pprint
>>> import sys
>>> stuff = sys.path[:]
>>> stuff.insert(0, stuff)
>>> pprint.pprint(stuff)
[<Recursion on list with id=869440>,
 '',
 '/usr/local/lib/python1.5',
 '/usr/local/lib/python1.5/test',
 '/usr/local/lib/python1.5/sunos5',
 '/usr/local/lib/python1.5/sharedmodules',
 '/usr/local/lib/python1.5/tkinter']
utku_karatas
pprint is also good for printing dictionaries in doctests, since it always sorts the output by keys
akaihola
+20  A: 
>>> from functools import partial
>>> bound_func = partial(range, 0, 10)
>>> bound_func()
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> bound_func(2)
[0, 2, 4, 6, 8]

Not really a hidden feature, but partial is extremely useful for late evaluation of functions.

You can bind as many or as few parameters in the initial call to partial as you want, and call it with any remaining parameters later (in this example I've bound the begin/end args to range, but call it the second time with a step arg).

I wish currying had a decent operator in Python, though.
poulejapon
A: 
is_ok() and "Yes" or "No"
M. Utku ALTINKAYA
That's strange. Interesting, but strange.
>>> True and "Yes" or "No"
'Yes'
>>> False and "Yes" or "No"
'No'
>>> x = "Yes"
>>> y = "No"
>>> False and x or y
monkut
The preferred way to accomplish this in Python 2.5 or up is " 'Yes' if is_ok() else 'No' ".
Paul Fisher
whether it is preferred or not, the way is correct and I use all the time and I think it is elegant. since this is hidden features question really interesting this post has been negatively voted,
M. Utku ALTINKAYA
"preferred" argument is open to discussion, becouse this way, the execution order is the same as the logical order, while "Yes" if True else "No" is not like that.
M. Utku ALTINKAYA
"Preferred" In this case means that the conditional operator works as expected for all possible operands. Specifically, `True and False or True` is True, but `False if True else True` is false, which is almost certainly what you expected. This is especially important where the operands have side effects, and the conditional operator will ***NEVER*** evaluate more than one of its conditional clauses.
TokenMacGuy
+1  A: 

...that dict.get() has a default value of None, thereby avoiding KeyErrors:

In [1]: test = { 1 : 'a' }

In [2]: test[2]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)

<ipython console> in <module>()

KeyError: 2

In [3]: test.get( 2 )

In [4]: test.get( 1 )
Out[4]: 'a'

In [5]: test.get( 2 ) == None
Out[5]: True

and even to specify this 'at the scene':

In [6]: test.get( 2, 'Some' ) == 'Some'
Out[6]: True
Steen
I hope this isn't too hidden...
bukzor
+28  A: 

You can easily transpose an array with zip.

a = [(1,2), (3,4), (5,6)]
zip(*a)
# [(1, 3, 5), (2, 4, 6)]
FA
+37  A: 

Negative round

The round() function rounds a float number to given precision in decimal digits, but precision can be negative:

>>> str(round(1234.5678, -2))
'1200.0'
>>> str(round(1234.5678, 2))
'1234.57'

Note: round() always returns a float, str() used in the above example because floating point math is inexact, and under 2.x the second example can print as 1234.5700000000001. Also see the decimal module.

Abgan
So often I have to round a number to a multiple. Eg, round 17 to a multiple of 5 (15). But Python's round doesn't let me do that! IMO, it should be structured as `round(num, precision=1) - round "num" to the nearest multiple of "precision"`
Wallacoloo
@wallacoloo what's the matter with (17 / 5)*5 ? Isn't it short and expressive?
silviot
+24  A: 

An interpreter within the interpreter

The standard library's code module lets you include your own read-eval-print loop inside a program, or run a whole nested interpreter. E.g. (copied my example from here)

$ python
Python 2.5.1 (r251:54863, Jan 17 2008, 19:35:17) 
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> shared_var = "Set in main console"
>>> import code
>>> ic = code.InteractiveConsole({ 'shared_var': shared_var })
>>> try:
...     ic.interact("My custom console banner!")
... except SystemExit, e:
...     print "Got SystemExit!"
... 
My custom console banner!
>>> shared_var
'Set in main console'
>>> shared_var = "Set in sub-console"
>>> sys.exit()
Got SystemExit!
>>> shared_var
'Set in main console'

This is extremely useful for situations where you want to accept scripted input from the user, or query the state of the VM in real-time.

TurboGears uses this to great effect by having a WebConsole from which you can query the state of your live web app.

Alabaster Codify
+39  A: 

Operator overloading for the set builtin:

>>> a = set([1,2,3,4])
>>> b = set([3,4,5,6])
>>> a | b # Union
{1, 2, 3, 4, 5, 6}
>>> a & b # Intersection
{3, 4}
>>> a < b # Subset
False
>>> a - b # Difference
{1, 2}
>>> a ^ b # Symmetric Difference
{1, 2, 5, 6}

More detail from the standard library reference: Set Types

Kiv
+6  A: 

You can override the mro of a class with a metaclass

>>> class A(object):
...     def a_method(self):
...         print("A")
... 
>>> class B(object):
...     def b_method(self):
...         print("B")
... 
>>> class MROMagicMeta(type):
...     def mro(cls):
...         return (cls, B, object)
... 
>>> class C(A, metaclass=MROMagicMeta):
...     def c_method(self):
...         print("C")
... 
>>> cls = C()
>>> cls.c_method()
C
>>> cls.a_method()
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
AttributeError: 'C' object has no attribute 'a_method'
>>> cls.b_method()
B
>>> type(cls).__bases__
(<class '__main__.A'>,)
>>> type(cls).__mro__
(<class '__main__.C'>, <class '__main__.B'>, <class 'object'>)

It's probably hidden for a good reason. :)

Benjamin Peterson
That's playing with fire, and asking for ethernal damnation. Better have good reason ;)
gorsky
+5  A: 

The reversed() builtin. It makes iterating much cleaner in many cases.

quick example:

for i in reversed([1, 2, 3]):
    print(i)

produces:

3
2
1

Note that reversed() does not accept arbitrary iterators (such as file objects or generator expressions); it needs a sequence with __len__ and __getitem__, or an object that implements __reversed__.

Christian Oudard
+8  A: 

The Zen of Python

>>> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
sprintf
Hidden? OTOH, This is one of the selling points of Python.
jeffjose
+4  A: 

pdb — The Python Debugger

As a programmer, one of the first things that you need for serious program development is a debugger. Python has one built-in which is available as a module called pdb (for "Python DeBugger", naturally!).

http://docs.python.org/library/pdb.html
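
The most common use (a minimal sketch; the function is just an illustration) is to drop a breakpoint straight into your code:

import pdb

def buggy(x):
    pdb.set_trace()   # execution stops here with an interactive (Pdb) prompt
    return x + 1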

Tom Viner
+3  A: 

Objects for small integers (-5 .. 256) are never created twice:


>>> a1 = -5; b1 = 256
>>> a2 = -5; b2 = 256
>>> id(a1) == id(a2), id(b1) == id(b2)
(True, True)
>>>
>>> c1 = -6; d1 = 257
>>> c2 = -6; d2 = 257
>>> id(c1) == id(c2), id(d1) == id(d2)
(False, False)
>>>

Edit: List objects are never destroyed (only the objects inside them are). CPython keeps an array of up to 80 empty lists; when you destroy a list object, Python puts it into that array, and when you create a new list, Python takes the most recently added list from this array:


>>> a = [1,2,3]; a_id = id(a)
>>> b = [1,2,3]; b_id = id(b)
>>> del a; del b
>>> c = [1,2,3]; id(c) == b_id
True
>>> d = [1,2,3]; id(d) == a_id
True
>>>

Mykola Kharechko
This feature is implementation dependent, so you shouldn't rely on it.
Denis Otkidach
+6  A: 

Creating dictionary of two sequences that have related data

In [15]: t1 = (1, 2, 3)

In [16]: t2 = (4, 5, 6)

In [17]: dict (zip(t1,t2))
Out[17]: {1: 4, 2: 5, 3: 6}
Lakshman Prasad
A: 

Simulating the ternary operator using and and or.

and and or operators in python return the objects themselves rather than Booleans. Thus:

In [18]: a = True

In [19]: a and 3 or 4
Out[19]: 3

In [20]: a = False

In [21]: a and 3 or 4
Out[21]: 4

However, Py 2.5 seems to have added an explicit ternary operator

    In [22]: a = 5 if True else '6'

    In [23]: a
    Out[23]: 5

Well, this works if you are sure that your true clause does not evaluate to False. example:

>>> def foo(): 
...     print "foo"
...     return 0
...
>>> def bar(): 
...     print "bar"
...     return 1
...
>>> 1 and foo() or bar()
foo
bar
1

To get it right, you've got to do just a little bit more:

>>> (1 and [foo()] or [bar()])[0]
foo
0

However, this isn't as pretty. If your version of Python supports it, use the conditional operator.

>>> foo() if True else bar()
foo
0
Lakshman Prasad
Careful with that: with `a and "" or ":("` you'll always get a frowny face back, no matter if a is true or false.
Marius Gedminas
Marius, Only, if a is false. Otherwise U'd want ":(" as "" is false.
Lakshman Prasad
`(falseValue, trueValue)[cond]` is a cleaner (IMO) way to simulate a ternary operator.
Wallacoloo
this is simply bad style.
bukzor
+4  A: 

The inspect module is also a cool feature.
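
For example (a minimal sketch; exact output depends on your Python version and install paths):

import inspect

def f(a, b=1):
    return a + b

print inspect.getargspec(f)           # argument names, *args/**kwargs names and defaults of f
print inspect.getsourcefile(inspect)  # path of the file a module was loaded from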

TheMachineCharmer
+1  A: 

The spam module in standard Python

It is used for testing purposes.

I've picked it from ctypes tutorial. Try it yourself:

>>> import __hello__
Hello world...
>>> type(__hello__)
<type 'module'>
>>> from __phello__ import spam
Hello world...
Hello world...
>>> type(spam)
<type 'module'>
>>> help(spam)
Help on module __phello__.spam in __phello__:

NAME
    __phello__.spam

FILE
    c:\python26\<frozen>
J.F. Sebastian
sorry, why and how would you use this?
Casey
@Casey: read "Accessing values exported from dlls" section from the `ctypes` tutorial http://starship.python.net/crew/theller/ctypes/tutorial.html#accessing-values-exported-from-dlls
J.F. Sebastian
+1  A: 

Memory Management

Python dynamically allocates memory and uses garbage collection to recover unused space. Once an object is out of scope, and no other variables reference it, it will be recovered. I do not have to worry about buffer overruns and slowly growing server processes. Memory management is also a feature of other dynamic languages but Python just does it so well.

Of course, we must watch out for circular references and keeping references to objects which are no longer needed, but weak references help a lot here.
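
A tiny sketch of a weak reference (the class and variable names are just for illustration): it lets you refer to an object without keeping it alive.

import weakref

class Node(object):
    pass

n = Node()
r = weakref.ref(n)
print r() is n    # True: the referent is still alive
del n
print r()         # None: the object has been garbage collected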

+35  A: 

re can call functions!

The fact that you can call a function every time something matches a regular expression is very handy. Here I have a sample of replacing every "Hello" with "Hi," and "there" with "Fred", etc.

import re

def Main(haystack):
  # List of from replacements, can be a regex
  finds = ('Hello', 'there', 'Bob')
  replaces = ('Hi,', 'Fred,', 'how are you?')

  def ReplaceFunction(matchobj):
    for found, rep in zip(matchobj.groups(), replaces):
      if found != None:
        return rep

    # log error
    return matchobj.group(0)

  named_groups = [ '(%s)' % find for find in finds ]
  ret = re.sub('|'.join(named_groups), ReplaceFunction, haystack)
  print ret

if __name__ == '__main__':
  str = 'Hello there Bob'
  Main(str)
  # Prints 'Hi, Fred, how are you?'
Scott Kirkwood
This is insane. I had no idea this existed. awesome. thanks a lot.
jeffjose
+11  A: 

I personally love the 3 different quotes

str = "I'm a string 'but still I can use quotes' inside myself!"
str = """ For some messy multi line strings.
Such as
<html>
<head> ... </head>"""

Also cool: not having to escape regular expressions, avoiding horrible backslash salad by using raw strings:

str2 = r"\n" 
print str2
>> \n

And my favourite:

Getting values from a dict, without having to worry if the key exists, and it even sets the key for you! (I love you Python guys!)

The 3 times happiness dict package:


a = {}
print a.setdefault("mykey",20) 
# Prints value of a['mykey'] if key exists.
# Prints 20, if key doesn't exist.
# And even adds 20 to the dict in that case.
# This has made so many parts of my code so much nicer!
Tom
_Four_ different quotes, if you include `'''`
grawity
+1 'backslash salad'
TokenMacGuy
+7  A: 

One word: IPython

Tab introspection, pretty-printing, %debug, history management, pylab, ... well worth the time to learn well.

Ken Arnold
That's not built in python core is it?
jpartogi
You're right, it's not. And probably with good reason. But I recommend it without reservation to any Python programmer. (However, I heartily recommend turning off autocall. When it does something you don't expect, it can be very hard to realize why.)
Ken Arnold
BPython is cooler :)
Kenneth Reitz
I also love IPython. I've tried BPython, but it was too slow for me (although I agree it has some cool features).
Denilson Sá
+6  A: 

Reloading modules enables a "live-coding" style. But class instances don't update. Here's why, and how to get around it. Remember, everything, yes, everything is an object.

>>> from a_package import a_module
>>> cls = a_module.SomeClass
>>> obj = cls()
>>> obj.method()
(old method output)

Now you change the method in a_module.py and want to update your object.

>>> reload(a_module)
>>> a_module.SomeClass is cls
False # Because it just got freshly created by reload.
>>> obj.method()
(old method output)

Here's one way to update it (but consider it running with scissors):

>>> obj.__class__ is cls
True # it's the old class object
>>> obj.__class__ = a_module.SomeClass # pick up the new class
>>> obj.method()
(new method output)

This is "running with scissors" because the object's internal state may be different than what the new class expects. This works for really simple cases, but beyond that, pickle is your friend. It's still helpful to understand why this works, though.

Ken Arnold
+1 for suggesting `pickle` (or `cPickle`). It was really helpful for me, some weeks ago.
Denilson Sá
+14  A: 

Not very hidden, but functions have attributes:

def doNothing():
    pass

doNothing.monkeys = 4
print doNothing.monkeys
4
Markus
It's because functions can be thought of as objects with a __call__() function defined.
Tomasz Zielinski
It's because functions can be thought of as descriptors with __call__() function defined.
jeffjose
+3  A: 

You can decorate functions with classes - replacing the function with a class instance:

class countCalls(object):
    """ decorator replaces a function with a "countCalls" instance
    which behaves like the original function, but keeps track of calls

    >>> @countCalls
    ... def doNothing():
    ...     pass
    >>> doNothing()
    >>> doNothing()
    >>> print doNothing.timesCalled
    2
    """
    def __init__ (self, functionToTrack):
        self.functionToTrack = functionToTrack
        self.timesCalled = 0
    def __call__ (self, *args, **kwargs):
        self.timesCalled += 1
        return self.functionToTrack(*args, **kwargs)
Markus
+4  A: 

With a minute amount of work, the threading module becomes amazingly easy to use. This decorator changes a function so that it runs in its own thread, returning a placeholder class instance instead of its regular result. You can probe for the answer by checking placeholder.result or wait for it by calling placeholder.awaitResult()

import threading
import time

def threadify(function):
    """
    exceptionally simple threading decorator. Just:
    >>> @threadify
    ... def longOperation(result):
    ...     time.sleep(3)
    ...     return result
    >>> A= longOperation("A has finished")
    >>> B= longOperation("B has finished")

    A doesn't have a result yet:
    >>> print A.result
    None

    until we wait for it:
    >>> print A.awaitResult()
    A has finished

    we could also wait manually - half a second more should be enough for B:
    >>> time.sleep(0.5); print B.result
    B has finished
    """
    class thr (threading.Thread,object):
        def __init__(self, *args, **kwargs):
            threading.Thread.__init__ ( self )  
            self.args, self.kwargs = args, kwargs
            self.result = None
            self.start()
        def awaitResult(self):
            self.join()
            return self.result        
        def run(self):
            self.result=function(*self.args, **self.kwargs)
    return thr
Markus
+91  A: 

ROT13 is a valid encoding for source code, when you use the right coding declaration at the top of the code file:

#!/usr/bin/env python
# -*- coding: rot13 -*-

cevag "Uryyb fgnpxbiresybj!".rapbqr("rot13")
André
Great! Notice how byte strings are taken literally, but unicode strings are decoded: try `cevag h"Uryyb fgnpxbiresybj!"`
kaizer.se
Haha, hillarious! +1
gorsky
unfortunately it is removed from py3k
mykhal
This is good for bypassing antivirus.
Longpoke
That has nothing to do with the encoding, it is just Python written in Welsh. :-P
Olivier
Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn!
Manuel Ferreria
see? you can write unintelligible code in any languages, even in python
Lie Ryan
+2  A: 

If you've renamed a class in your application where you're loading user-saved files via Pickle, and one of the renamed classes is stored in a user's old save, you will not be able to load that pickled file.

However, simply add in a reference to your class definition and everything's good:

e.g., before:

class Bleh:
    pass

now,

class Blah:
    pass

so, your user's pickled saved file contains a reference to Bleh, which doesn't exist due to the rename. The fix?

Bleh = Blah

simple!

Steven Sproat
A reasonable hack, but why has the class name changed? was it because it conflicts with something else? Doing this sort of negates any benefit you might have had from renaming the class in the first place.
TokenMacGuy
I was modelling classes on "drawing" tools - pen, rectangle, select etc, and was using the class name as GUI button labels. I then changed to a class variable to represent the name, later.
Steven Sproat
+1  A: 

The fact that EVERYTHING is an object, and as such is extensible. I can add member variables as metadata to a function that I define:

>>> def addInts(x,y): 
...    return x + y
>>> addInts.params = ['integer','integer']
>>> addInts.returnType = 'integer'

This can be very useful for writing dynamic unit tests, e.g.

Greg
Most things are objects; and some objects do not take property assignments so happily.
pst
+1  A: 

The getattr built-in function :

>>> class C():
    def getMontys(self):
     self.montys = ['Cleese','Palin','Idle','Gilliam','Jones','Chapman']
     return self.montys


>>> c = C()
>>> getattr(c,'getMontys')()
['Cleese', 'Palin', 'Idle', 'Gilliam', 'Jones', 'Chapman']
>>>

Useful if you want to dispatch function depending on the context. See examples in Dive Into Python (Here)

Busted Keaton
+2  A: 

Simple way to test if a key is in a dict:

>>> 'key' in { 'key' : 1 }
True

>>> d = dict(key=1, key2=2)
>>> if 'key' in d:
...     print 'Yup'
... 
Yup
Cixate
This is hopefully not hidden for any non-new Python coder!
kaizer.se
+1  A: 

Classes as first-class objects (shown through a dynamic class definition)

Note the use of the closure as well. If this particular example looks like a "right" approach to a problem, carefully reconsider ... several times :)

def makeMeANewClass(parent, value):
  class IAmAnObjectToo(parent):
    def theValue(self):
      return value
  return IAmAnObjectToo

Klass = makeMeANewClass(str, "fred")
o = Klass()
print isinstance(o, str)  # => True
print o.theValue()        # => fred
pst
+2  A: 

Exposing Mutable Buffers

Using the Python Buffer Protocol to expose mutable byte-oriented buffers in Python (2.5/2.6).

(Sorry, no code here. Requires use of low-level C API or existing adapter module).

pst
+5  A: 

Extending properties (defined as descriptor) in subclasses

Sometimes it's useful to extend (modify) the value "returned" by a descriptor in a subclass. It can easily be done with super():

class A(object):
    @property
    def prop(self):
        return {'a': 1}

class B(A):
    @property
    def prop(self):
        return dict(super(B, self).prop, b=2)

Store this in test.py and run python -i test.py (another hidden feature: the -i option executes the script and allows you to continue in interactive mode):

>>> B().prop
{'a': 1, 'b': 2}
Denis Otkidach
+1 properties! Cant get enough of them.
jeffjose
+5  A: 

The pythonic idiom x = ... if ... else ... is far superior to x = ... and ... or ... and here is why:

Although the statement

x = 3 if (y == 1) else 2

Is equivalent to

x = y == 1 and 3 or 2

if you use the x = ... and ... or ... idiom, some day you may get bitten by this tricky situation:

x = 0 if True else 1    # sets x equal to 0

and therefore is not equivalent to

x = True and 0 or 1   # sets x equal to 1

For more on the proper way to do this, see http://stackoverflow.com/questions/101268/hidden-features-of-python/116480#116480.

Amol
+8  A: 

Python can understand any kind of unicode digits, not just the ASCII kind:

>>> s = u'１０５８５'
>>> s
u'\uff11\uff10\uff15\uff18\uff15'
>>> print s
１０５８５
>>> int(s)
10585
>>> float(s)
10585.0
kaizer.se
+1  A: 

Regarding Nick Johnson's implementation of a Property class (just a demonstration of descriptors, of course, not a replacement for the built-in), I'd include a setter that raises an AttributeError:

class Property(object):
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, type):
        if obj is None:
            return self
        return self.fget(obj)

    def __set__(self, obj, value):
       raise AttributeError, 'Read-only attribute'

Including the setter makes this a data descriptor as opposed to a method/non-data descriptor. A data descriptor has precedence over instance dictionaries. Now an instance can't have a different object assigned to the property name, and attempts to assign to the property will raise an attribute error.

eryksun
+10  A: 

The unpacking syntax was extended in Python 3 (PEP 3132), as can be seen in the example.

>>> a, *b = range(5)
>>> a, b
(0, [1, 2, 3, 4])
>>> *a, b = range(5)
>>> a, b
([0, 1, 2, 3], 4)
>>> a, *b, c = range(5)
>>> a, b, c
(0, [1, 2, 3], 4)
Noctis Skytower
never seen this before, it's pretty nice!
MatToufoutu
which version? as this doesn't work in 2.5.2
Dan D
+3  A: 

Manipulating Recursion Limit

Getting or setting the maximum depth of recursion with sys.getrecursionlimit() & sys.setrecursionlimit().

We can limit it to prevent a stack overflow caused by infinite recursion.
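
A short sketch of how these calls are used:

import sys

print sys.getrecursionlimit()   # 1000 by default in CPython
sys.setrecursionlimit(2000)     # allow deeper recursion; raising it too far can crash the interpreter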

grayger
+32  A: 

Multiplying by a boolean

One thing I'm constantly doing in web development is optionally printing HTML parameters. We've all seen code like this in other languages:

class='<% isSelected ? "selected" : "" %>'

In Python, you can multiply by a boolean and it does exactly what you'd expect:

class='<% "selected" * isSelected %>'

This is because multiplication coerces the boolean to an integer (0 for False, 1 for True), and in python multiplying a string by an int repeats the string N times.

darkporter
+1, that's a nice one. OTOH, as it's just a bit arcane, it's easy to see why you might not want to do this, for readability reasons.
TokenMacGuy
I would write `bool(isSelected)` both for reliability and readability.
Marian
you could also use something like:`('not-selected', 'selected')[isSelected]`if you need an option for False value too..
redShadow
+9  A: 

Mod works correctly with negative numbers

-1 % 5 is 4, as it should be, not -1 as it is in other languages like JavaScript. This makes "wraparound windows" cleaner in Python, you just do this:

index = (index + increment) % WINDOW_SIZE
darkporter
A: 

You can construct a function's kwargs on demand:

kwargs = {}
kwargs[str("%s__icontains" % field)] = some_value
some_function(**kwargs)

The str() call is somehow needed, since Python otherwise complains that it is not a string. Don't know why ;) I use this for dynamic filters within Django's object model:

result = model_class.objects.filter(**kwargs)
Martin
The reason is complains is probably because "field" is unicode, which makes the whole string unicode.
truppo
+8  A: 

Guessing integer base

>>> int('10', 0)
10
>>> int('0x10', 0)
16
>>> int('010', 0)  # does not work on Python 3.x
8
>>> int('0o10', 0)  # Python >=2.6 and Python 3.x
8
>>> int('0b10', 0)  # Python >=2.6 and Python 3.x
2
Xavier Martinez-Hidalgo
+8  A: 

itertools

This module is often overlooked. The following example uses itertools.chain() to flatten a list:

>>> from itertools import *
>>> l = [[1, 2], [3, 4]]
>>> list(chain(*l))
[1, 2, 3, 4]

See http://docs.python.org/library/itertools.html#recipes for more applications.

Xavier Martinez-Hidalgo
+1  A: 

Monkeypatching objects

Nearly every object in Python has a __dict__ attribute which stores its attributes. So, you can do something like this:

class Foo(object):
    def __init__(self, arg1, arg2, **kwargs):
        #do stuff with arg1 and arg2
        self.__dict__.update(kwargs)

f = Foo('arg1', 'arg2', bar=20, baz=10)
#now f is a Foo object with two extra attributes

This can be exploited to add both attributes and functions arbitrarily to objects. This can also be exploited to create a quick-and-dirty struct type.

class struct(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

s = struct(foo=10, bar=11, baz="i'm a string!")
Chinmay Kanchi
except for the classes with `__slots__`
gnibbler
Except for some "primitive" types implemented in C (for performance reasons, I guess). For instance, after `a = 2`, there is no `a.__dict__`
Denilson Sá
+3  A: 

Creating enums

In Python, you can do this to quickly create an enumeration:

>>> FOO, BAR, BAZ = range(3)
>>> FOO
0

But the "enums" don't have to have integer values. You can even do this:

class Colors(object):
    RED, GREEN, BLUE, YELLOW = (255,0,0), (0,255,0), (0,0,255), (0,255,255)

#now Colors.RED is a 3-tuple that returns the 24-bit 8bpp RGB 
#value for saturated red
Chinmay Kanchi
+9  A: 

Manipulating sys.modules

You can manipulate the modules cache directly, making modules available or unavailable as you wish:

>>> import sys
>>> import ham
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named ham

# Make the 'ham' module available -- as a non-module object even!
>>> sys.modules['ham'] = 'ham, eggs, sausages and spam.'
>>> import ham
>>> ham
'ham, eggs, sausages and spam.'

# Now remove it again.
>>> sys.modules['ham'] = None
>>> import ham
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named ham

This works even for modules that are available, and to some extent for modules that already are imported:

>>> import os
# Stop future imports of 'os'.
>>> sys.modules['os'] = None
>>> import os
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named os
# Our old imported module is still available.
>>> os
<module 'os' from '/usr/lib/python2.5/os.pyc'>

As the last line shows, changing sys.modules only affects future import statements, not past ones, so if you want to affect other modules it's important to make these changes before you give them a chance to try and import the modules -- so before you import them, typically. None is a special value in sys.modules, used for negative caching (indicating the module was not found the first time, so there's no point in looking again.) Any other value will be the result of the import operation -- even when it is not a module object. You can use this to replace modules with objects that behave exactly like you want. Deleting the entry from sys.modules entirely causes the next import to do a normal search for the module, even if it was already imported before.

Thomas Wouters
+3  A: 

There are no secrets in Python ;)

Juanjo Conti
+8  A: 

Passing tuple to builtin functions

Many Python functions accept a tuple where you might not expect it. For example, if you want to test whether your variable is a number, you could do:

if isinstance (number, float) or isinstance (number, int):  
   print "yaay"

But if you pass a tuple instead, it looks much cleaner:

if isinstance (number, (float, int)):  
   print "yaay"
evilpie
cool, is this even documented?
Wallacoloo
Yes, but nearly nobody knows about that.
evilpie
What other functions support this?? Good tip
Infinity
+4  A: 

You can assign several variables to the same value

>>> foo = bar = baz = 1
>>> foo, bar, baz
(1, 1, 1)

Useful for initializing several variables to None in a compact way.
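One caveat: all the names end up bound to the same object, which matters as soon as that object is mutable. A quick sketch:

>>> a = b = []      # both names refer to one and the same list
>>> a.append(1)
>>> b
[1]
>>> a = b = None    # harmless for immutable values like None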

haridsv
You could also do: foo, bar, baz = [None]*3 to get the same result.
Van Nguyen
+4  A: 

threading.enumerate() gives access to all Thread objects in the system and sys._current_frames() returns the current stack frames of all threads in the system, so combine these two and you get Java style stack dumps:

import sys, threading, traceback

def dumpstacks(signal, frame):
    id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
    code = []
    for threadId, stack in sys._current_frames().items():
        code.append("\n# Thread: %s(%d)" % (id2name[threadId], threadId))
        for filename, lineno, name, line in traceback.extract_stack(stack):
            code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
            if line:
                code.append("  %s" % (line.strip()))
    print "\n".join(code)

import signal
signal.signal(signal.SIGQUIT, dumpstacks)

Do this at the beginning of a multi-threaded Python program and you get access to the current state of the threads at any time by sending a SIGQUIT. You may also choose signal.SIGUSR1 or signal.SIGUSR2.
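To trigger a dump by hand (say, from a quick test), one option is to send the signal to the process itself; this sketch assumes a Unix-like platform and that the handler registration above has already run:

# equivalent to running `kill -QUIT <pid>` from a shell
import os, signal
os.kill(os.getpid(), signal.SIGQUIT)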


haridsv
A: 

Braces

def g():
    print 'hi!'

def f(): (
    g()
)

>>> f()
hi!
Longpoke
>>> def f(): (... g()... g() File "<stdin>", line 3 g() ^SyntaxError: invalid syntax
bukzor
@bukzor: wat.``
Longpoke
@Longpoke: I was trying to show that your feature doesn't work if you have more than one statement inside the "braces".
bukzor
Everyone knows that Python uses `#{` and `#}` for braces. Subject to certain lexical constraints.
detly
A: 

Top Secret Attributes

>>> class A(object): pass
>>> a = A()
>>> setattr(a, "can't touch this", 123)
>>> dir(a)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', "can't touch this"]
>>> a.can't touch this # duh
  File "<stdin>", line 1
    a.can't touch this
                     ^
SyntaxError: EOL while scanning string literal
>>> getattr(a, "can't touch this")
123
>>> setattr(a, "__class__.__name__", ":O")
>>> a.__class__.__name__
'A'
>>> getattr(a, "__class__.__name__")
':O'
Longpoke
+8  A: 

Nice treatment of infinite recursion in dictionaries:

>>> a = {}
>>> b = {}
>>> a['b'] = b
>>> b['a'] = a
>>> print a
{'b': {'a': {...}}}
Evgeny
That is just the 'nice treatment' of "print", it doesn't imply a nice treatment across the language.
haridsv
Both `str()` and `repr()` return the string you posted above. However, the `ipython` shell returns something a little different, a little more informative: {'b': {'a': <Recursion on dict with id=17830960>}}
Denilson Sá
@denilson: ipython uses pprint module, which is available whithin standard python shell.
rafak
+12  A: 

Multiple references to an iterator

You can create multiple references to the same iterator using list multiplication:

>>> i = (1,2,3,4,5,6,7,8,9,10) # or any iterable object
>>> iterators = [iter(i)] * 2
>>> iterators[0].next()
1
>>> iterators[1].next()
2
>>> iterators[0].next()
3

This can be used to group an iterable into chunks, for example, as in this example from the itertools documentation

from itertools import izip_longest  # zip_longest in Python 3

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return izip_longest(fillvalue=fillvalue, *args)
David Zaslavsky
You can do the opposite with `itertools.tee` -- take one iterator and return `n` that yield the same but do not share state.
Daenyth
I actually don't see the difference to doing this one: "a = iter(i)" and subsequently "b = a" I also get multiple references to the same iterator -- there is no magic about that to me, no hidden feature it is just the normal reference copying stuff of the language. What is done, is creating the iterator, then (the list multiplication) copying this iterator several times. Thats all, its all in the language.
Juergen
@Juergen: indeed, `a = iter(i); b = a` does the same thing and I could just as well have written that into the answer instead of `[iter(i)] * n`. Either way, there is no "magic" about it. That's no different from any of the other answers to this question - none of them are "magical", they are all in the language. What makes the features "hidden" is that many people don't realize they're possible, or don't realize interesting ways in which they can be used, until they are pointed out explicitly.
David Zaslavsky
+4  A: 

You can ask an object which module it came from by looking at its __module__ attribute. This is useful, for example, if you're experimenting at the command line and have imported a lot of things.

Along the same lines, you can ask a module where it came from by looking at its __file__ attribute. This is useful when debugging path issues.
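A short interactive illustration (the file path in the output is just an example and will differ per installation):

>>> import random
>>> random.Random.__module__
'random'
>>> random.__file__
'/usr/lib/python2.5/random.py'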

John D. Cook
+8  A: 

reversing an iterable using negative step

>>> s = "Hello World"
>>> s[::-1]
'dlroW olleH'
>>> a = (1,2,3,4,5,6)
>>> a[::-1]
(6, 5, 4, 3, 2, 1)
>>> a = [5,4,3,2,1]
>>> a[::-1]
[1, 2, 3, 4, 5]
Marcin Swiderski
Good to know, but minor point: that only works with sequences not iterables in general. I.e., `(n for n in (1,2,3,4,5))[::-1]` doesn't work.
Don O'Donnell
That notation will actually create a new (reversed) instance of that sequence, which might be undesirable in some cases. For such cases, `reversed()` function is better, as it returns a reverse iterator instead of allocating a new sequence.
Denilson Sá
+10  A: 

When using the interactive shell, "_" contains the value of the last printed item:

>>> range(10)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> _
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>>
Giampaolo Rodolà
I always forget about this one! It's a great feature.
thebackhand
`_` automatic variable is the best feature when using Python shell as a calculator. Very powerful calculator, by the way.
Denilson Sá
A: 

Slices & Mutability

Copying lists

>>> x = [1,2,3]
>>> y = x[:]
>>> y.pop()
3
>>> y
[1, 2]
>>> x
[1, 2, 3]

Replacing lists

>>> x = [1,2,3]
>>> y = x
>>> y[:] = [4,5,6]
>>> x
[4, 5, 6]
Daniel Hepper
+2  A: 

Combine unpacking with the print function:

# in 2.6 <= python < 3.0, 3.0 + the print function is native
from __future__ import print_function 

mylist = ['foo', 'bar', 'some other value', 1,2,3,4]  
print(*mylist)
Wayne Werner
I prefer something like `print(' '.join([str(x) for x in mylist]))`. Using unpacking like this is too clever.
Brian
Performance wise I think the 'clever' version is faster (after doing some completely non-scientific tests). Plus you know `*` means you're unpacking a list or tuple, and you can use the `sep` keyword.
Wayne Werner
+7  A: 

As of Python 2.7 (and in the 3.x series), dictionary and set comprehensions are supported:

{ a:a for a in range(10) }
{ a for a in range(10) }
Piotr Duda
there is no such thing as tuples comprehension, and this is not a syntax for dict comprehensions.
SilentGhost
Edited the typo with dict comprehensions.
Piotr Duda
uh oh, looks like I have to upgrade my version of python so I can play with dict and set comprehensions
Carson Myers
for dictionaries that way is better but `dict( (a,a) for a in range(10) )` works too and your error is probably due to remembering this form
Dan D
A: 

Python ignores a trailing comma after the last element of a tuple, list or dictionary literal

>>> a_tuple_for_instance = (0,1,2,3,)
>>> another_tuple = (0,1,2,3)
>>> a_tuple_for_instance == another_tuple
True

Be aware that a tuple with only one element needs the comma:

a_tuple_with_one_element = (8,)

Martin
+2  A: 
Using sets to reference contents in sets of frozensets

As you probably know, sets are mutable and thus not hashable, so it's necessary to use frozensets if you want to make a set of sets (or use sets as dictionary keys):

>>> fabc = frozenset('abc')
>>> fxyz = frozenset('xyz')
>>> mset = set((fabc, fxyz))
>>> mset
{frozenset({'a', 'c', 'b'}), frozenset({'y', 'x', 'z'})}

However, it's possible to test for membership and remove/discard members using just ordinary sets:

>>> abc = set('abc')
>>> abc in mset
True
>>> mset.remove(abc)
>>> mset
{frozenset({'y', 'x', 'z'})}

To quote from the Python Standard Library docs:

Note, the elem argument to the __contains__(), remove(), and discard() methods may be a set. To support searching for an equivalent frozenset, the elem set is temporarily mutated during the search and then restored. During the search, the elem set should not be read or mutated since it does not have a meaningful value.

Unfortunately, and perhaps astonishingly, the same is not true of dictionaries:

>>> mdict = {fabc:1, fxyz:2}
>>> fabc in mdict
True
>>> abc in mdict
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
TypeError: unhashable type: 'set'
Don O'Donnell
+10  A: 

The textwrap.dedent utility function in Python comes in quite handy when testing that a returned multiline string equals the expected output, without breaking the indentation of your unit tests:

import unittest, textwrap

class XMLTests(unittest.TestCase):
    def test_returned_xml_value(self):
        returned_xml = call_to_function_that_returns_xml()
        expected_value = textwrap.dedent("""\
        <?xml version="1.0" encoding="utf-8"?>
        <root_node>
            <my_node>my_content</my_node>
        </root_node>
        """)

        self.assertEqual(expected_value, returned_xml)
Remco Wendt
+2  A: 

Slices as lvalues. This Sieve of Eratosthenes produces a list in which each element is either a prime number or 0. Non-primes are zeroed out by the slice assignment in the loop.

def eras(n):
    last = n + 1
    sieve = [0,0] + list(range(2, last))
    sqn = int(round(n ** 0.5))
    it = (i for i in xrange(2, sqn + 1) if sieve[i])
    for i in it:
        sieve[i*i:last:i] = [0] * (n//i - i + 1)
    return filter(None, sieve)

For this to work, an extended slice (one with a step) on the left must be assigned a list on the right of exactly the same length.
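A minimal illustration of that length constraint, separate from the sieve itself:

>>> x = list(range(10))
>>> x[::2] = [0] * 5   # 5 targets, 5 values: fine
>>> x
[0, 1, 0, 3, 0, 5, 0, 7, 0, 9]
>>> x[::2] = [0]       # length mismatch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: attempt to assign sequence of size 1 to extended slice of size 5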

hughdbrown
+13  A: 

Zero-argument and variable-argument lambdas

Lambda functions are usually used for a quick transformation of one value into another, but they can also be used to wrap a value in a function:

>>> f = lambda: 'foo'
>>> f()
'foo'

They can also accept the usual *args and **kwargs syntax:

>>> g = lambda *args, **kwargs: (args[0], kwargs['thing'])
>>> g(1, 2, 3, thing='stuff')
(1, 'stuff')
David Zaslavsky
The main reason I see to keep lambda around: `defaultdict(lambda: 1)`
eswald
+21  A: 

Multi line strings

One approach is to use backslashes:

>>> sql = "select * from some_table \
where id > 10"
>>> print sql
select * from some_table where id > 10

Another is to use the triple-quote:

>>> sql = """select * from some_table 
where id > 10"""
>>> print sql
select * from some_table where id > 10

The problem with those is that they are not indented (they look poor in your code). If you try to indent, the whitespace you add just becomes part of the printed string.

A third solution, which I found out about recently, is to divide your string into lines and surround them with parentheses:

>>> sql = ("select * from some_table " # <-- no comma, whitespace at end
           "where id > 10 "
           "order by name") 
>>> print sql
select * from some_table where id > 10 order by name

Note how there's no comma between the lines (this is not a tuple -- adjacent string literals are simply concatenated), and that you have to account for any trailing/leading whitespace your string needs to have. All of these work with placeholders, by the way (such as "my name is %s" % name).

sa125
+22  A: 

pow() can also calculate (x ** y) % z efficiently.

There is a lesser-known third argument of the built-in pow() function that allows you to calculate x ** y modulo z more efficiently than simply doing (x ** y) % z:

>>> x, y, z = 1234567890, 2345678901, 17
>>> pow(x, y, z)            # almost instantaneous
6

In comparison, (x ** y) % z didn't give a result within a minute on my machine for the same values.
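A quick sanity check with small numbers, where both forms are feasible:

>>> pow(3, 4, 5)
1
>>> (3 ** 4) % 5
1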

Tamás
I've always wondered what the use case is for this. I haven't encountered one, but then again I don't do scientific computing.
bukzor
@buzkor: it's pretty useful for cryptography, too
Agos
Remember, this is the **built-in** `pow()` function. This is **not** the `math.pow()` function, which accepts only 2 arguments.
Denilson Sá
I remember stating very adamantly that I could not code cryptography in pure Python without this feature. This was in 2003, and so the version of Python I was working with was 2.2 or 2.3. I wonder if I was making a fool of myself and `pow` had that third parameter then or not.
Omnifarious
`pow` had that third parameter at least since Python 2.1. However, according to the documentation, "[i]n Python 2.1 and before, floating 3-argument `pow()` returned platform-dependent results depending on floating-point rounding accidents."
Tamás
+2  A: 

Backslashes inside raw strings can still escape quotes. See this:

>>> print repr(r"aaa\"bbb")
'aaa\\"bbb'

Note that both the backslash and the double-quote are present in the final string.

As a consequence, you can't end a raw string with a backslash:

>>> print repr(r"C:\")
SyntaxError: EOL while scanning string literal
>>> print repr(r"C:\"")
'C:\\"'

This happens because raw strings were implemented to help writing regular expressions, and not to write Windows paths. Read a long discussion about this at Gotcha — backslashes in Windows filenames.
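If you do need a string that ends with a backslash, two common workarounds (just sketches):

>>> print r"C:\some\path" "\\"   # adjacent string literals are concatenated
C:\some\path\
>>> print "C:\\some\\path\\"     # or simply escape the backslashes
C:\some\path\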

Denilson Sá
Note that the backslash is *still* part of the string afterwards... So one might not regard this as regular escaping.
huin
+13  A: 

Sequence multiplication and reflected operands

>>> 'xyz' * 3
'xyzxyzxyz'

>>> [1, 2] * 3
[1, 2, 1, 2, 1, 2]

>>> (1, 2) * 3
(1, 2, 1, 2, 1, 2)

We get the same result with reflected (swapped) operands

>>> 3 * 'xyz'
'xyzxyzxyz'

It works like this:

>>> s = 'xyz'
>>> num = 3

To evaluate the expression s * num, the interpreter calls s.__mul__(num)

>>> s * num
'xyzxyzxyz'

>>> s.__mul__(num)
'xyzxyzxyz'

To evaluate the expression num * s, the interpreter calls num.__mul__(s)

>>> num * s
'xyzxyzxyz'

>>> num.__mul__(s)
NotImplemented

If that call returns NotImplemented, the interpreter falls back to the reflected operation s.__rmul__(num), provided the operands have different types

>>> s.__rmul__(num)
'xyzxyzxyz'

See http://docs.python.org/reference/datamodel.html#object.rmul
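A tiny sketch of the same protocol with a user-defined class (the class name here is made up for illustration):

class Repeater(object):
    def __init__(self, text):
        self.text = text
    def __rmul__(self, num):
        # reached via `num * repeater` after int.__mul__ returns NotImplemented
        return self.text * num

print 3 * Repeater("ab")   # prints: ababab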

Ruslan Spivak
+1 I knew about sequence multiplication, but the reflected operands are new to me.
Space_C0wb0y
@Space, it would be unpythonic to have `x * y != y * x`, after all :)
badp
In python you **may** have x * y != y * x (it's just enough to play with the '__mul__' methods).
Roberto Liffredo
+18  A: 

enumerate with different starting index

enumerate has partly been covered in this answer, but recently I've found an even more hidden feature of enumerate that I think deserves its own post instead of just a comment.

Since Python 2.6, you can specify a starting index to enumerate in its second argument:

>>> l = ["spam", "ham", "eggs"]
>>> list(enumerate(l))
>>> [(0, "spam"), (1, "ham"), (2, "eggs")]
>>> list(enumerate(l, 1))
>>> [(1, "spam"), (2, "ham"), (3, "eggs")]

One place where I've found it utterly useful is when I am enumerating over entries of a symmetric matrix. Since the matrix is symmetric, I can save time by iterating over the upper triangle only, but in that case, I have to use enumerate with a different starting index in the inner for loop to keep track of the row and column indices properly:

for ri, row in enumerate(matrix):
    for ci, column in enumerate(matrix[ri:], ri):
        # ci now refers to the proper column index

Strangely enough, this behaviour of enumerate is not documented in help(enumerate), only in the online documentation.

Tamás
`help(enumerate)` has this proper function signature in python2.x, but not in py3k. I guess, a bug needs to be filled.
SilentGhost
`help(enumerate)` is definitely wrong in Python 2.6.5. Maybe they have fixed it already in Python 2.7.
Tamás
`help(enumerate)` from Python 3.1.2 says *class enumerate(object) | enumerate(iterable) -> iterator for index, value of iterable*, but the trick from the answer works fine.
Cristian Ciupitu