I would stop using methods deprecated in 2.6, so your app/script will be ready for, and easier to convert to, Python 3.
Another one is to avoid using keywords as your own identifiers. It's also always good to avoid from somemodule import *.
BTW, wouldn't it be better to post it to community wiki?
some personal opinions, but I find it best NOT to:
- as said, use deprecated modules (turn on deprecation warnings to catch them)
- overuse classes & inheritance (typical of a static-language legacy, maybe)
- spell out algorithms explicitly (e.g., hand-written for loops) where itertools already provides them; see the sketch just after this list
- reimplement functions from the standard lib "because I don't need all of those features"
- use features for the sake of it (reducing compatibility with older Python versions)
- use metaclasses when you really don't have to, and more generally make things too "magic"
- avoid generators (i.e., do use them)
- (more personal) try to micro-optimize CPython code at a low level. Better to spend the time on algorithms, and then optimize by making a small C shared lib called via ctypes (it's so easy to gain 5x perf boosts on an inner loop)
- use unnecessary lists when iterators would suffice
- (controversial now, maybe) code a project directly for 3.x before the libs you need are all available.
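For instance, a minimal sketch (made-up data) of trading a hand-written nested loop for itertools:
import itertools

list_of_lists = [[1, 2], [3, 4], [5]]

# explicit iteration:
flat = []
for sub in list_of_lists:
    for item in sub:
        flat.append(item)

# the itertools way:
flat = list(itertools.chain.from_iterable(list_of_lists))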
Python Language Gotchas -- things that fail in very obscure ways
Using mutable default arguments.
Leading zeroes mean octal: 09 is a very obscure syntax error in Python 2.x.
Misspelling overridden method names in a superclass or subclass. The superclass misspelling mistake is worse, because none of the subclasses override it correctly.
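A minimal sketch of the misspelling trap (names made up):
class Handler(object):
    def on_message(self, msg):
        pass   # default: ignore the message

class LoggingHandler(Handler):
    def on_messge(self, msg):   # typo: defines a NEW method, overrides nothing
        print "got", msg

# LoggingHandler().on_message("hi") silently does nothing, and no error is raised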
Python Design Gotchas
Spending time on introspection (e.g. trying to automatically determine types or superclass identity or other stuff). First, it's obvious from reading the source. More importantly, time spent on weird Python introspection usually indicates a fundamental failure to grasp polymorphism. 80% of the Python introspection questions on SO are failures to grasp polymorphism.
Spending time on code golf. Just because your mental model of your application is four keywords ("do", "what", "I", "mean") doesn't mean you should build a hyper-complex introspective decorator-driven framework to do that. Python allows you to take DRY to a level that is silliness. The rest of the Python introspection questions on SO are attempts to reduce complex problems to code golf exercises.
Monkeypatching.
Failure to actually read through the standard library, and reinventing the wheel.
Conflating interactive type-as-you-go Python with a proper program. While you're typing interactively, you may lose track of a variable and have to use globals(). Also, while you're typing, almost everything is global. In proper programs, you'll never "lose track of" a variable, and nothing will be global.
When you need a population of arrays you might be tempted to type something like this:
>>> a=[[1,2,3,4,5]]*4
And sure enough it will give you what you expect when you look at it
>>> from pprint import pprint
>>> pprint(a)
[[1, 2, 3, 4, 5],
 [1, 2, 3, 4, 5],
 [1, 2, 3, 4, 5],
 [1, 2, 3, 4, 5]]
But don't expect the elements of your population to be separate objects:
>>> a[0][0] = 2
>>> pprint(a)
[[2, 2, 3, 4, 5],
 [2, 2, 3, 4, 5],
 [2, 2, 3, 4, 5],
 [2, 2, 3, 4, 5]]
Unless this is what you need...
It is worth mentioning a workaround:
a = [[1,2,3,4,5] for _ in range(4)]
import this
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
import not_this
Write ugly code.
Write implicit code.
Write complex code.
Write nested code.
Write dense code.
Write unreadable code.
Write special cases.
Strive for purity.
Ignore errors and exceptions.
Write optimal code before releasing.
Every implementation needs a flowchart.
Don't use namespaces.
++n and --n may not work as expected by people coming from a C/Java background:
++n parses as +(+n), the unary plus applied twice, which is just n
--n parses as -(-n), the negative of a negative number, which is again just n
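A quick interpreter session makes it concrete:
>>> n = 5
>>> ++n    # parsed as +(+n): no increment happens
5
>>> --n    # parsed as -(-n): no decrement happens
5
>>> n
5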
A bad habit I had to train myself out of was using X and Y or Z for inline logic. Unless you can 100% always guarantee that Y will be a true value, even when your code changes in 18 months' time, you set yourself up for some unexpected behaviour. Thankfully, in later versions you can use Y if X else Z.
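A minimal sketch of how it bites, with a falsy Y:
x = True
y = []                          # falsy, but it's the value we want back
print x and y or ['default']    # ['default'] -- wrong
print y if x else ['default']   # [] -- the conditional expression is safe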
- don't write large output messages to standard output
- strings are immutable: build them with the str.join() function rather than with repeated use of the "+" operator
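A minimal sketch of the join idiom:
words = ['spam', 'eggs', 'ham']

# repeated += copies the growing string each time (quadratic in the worst case)
menu = ''
for w in words:
    menu += w + ', '

# collect the parts and join once (linear)
menu = ', '.join(words)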
Somewhat related to the default mutable argument, how one checks for the "missing" case results in differences when an empty list is passed:
def func1(toc=None):
    if not toc:          # an empty list passed by the caller is also replaced here
        toc = []
    toc.append('bar')

def func2(toc=None):
    if toc is None:      # only replaces the genuinely "missing" case
        toc = []
    toc.append('bar')

def demo(toc, func):
    print func.__name__
    print ' before:', toc
    func(toc)
    print ' after:', toc

demo([], func1)
demo([], func2)
Here's the output:
func1
 before: []
 after: []
func2
 before: []
 after: ['bar']
Surprised that nobody has said this yet:
Mixing tabs and spaces when indenting.
Really, it's a killer. Believe me. In particular when it still runs.
I don't know whether this is a common mistake, but while Python doesn't have increment and decrement operators, double signs are allowed, so ++i and --i are syntactically correct code, but don't do anything.
I've started learning Python as well, and one of the biggest mistakes I made was constantly writing C++/C#-style indexed "for" loops. Python doesn't have a for(i ; i < length ; i++) type of loop, and for a good reason: most of the time there are better ways to do the same thing.
Example: I had a method that iterated over a list and returned the indexes of selected items:
retVal = []
for i in range(len(myList)):
    if myList[i].selected:
        retVal.append(i)
Instead, Python has list comprehensions, which solve the same problem in a more elegant and easier-to-read way:
retVal = [index for index, item in enumerate(myList) if item.selected]
Mutating a default argument:
def foo(bar=[]):
    bar.append('baz')
    return bar
The default value is evaluated only once, not every time the function is called. Repeated calls to foo() would return ['baz'], ['baz', 'baz'], ['baz', 'baz', 'baz'], ...
If you want to mutate bar do something like this:
def foo(bar=None):
    if bar is None:
        bar = []
    bar.append('baz')
    return bar
Or, if you like arguments to be final:
def foo(bar=[]):
    not_bar = bar[:]
    not_bar.append('baz')
    return not_bar
Assignment in Python copies references, not objects. So if you fill a container by repeatedly mutating the same object and inserting it, you end up with a container full of references to that one object in its final state. Use copy.deepcopy (or build a fresh object on each iteration) instead.
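A minimal sketch of the trap and the fix:
import copy

item = {'n': 0}
container = []
for i in range(3):
    item['n'] = i
    container.append(item)                 # same dict appended three times
print container                            # [{'n': 2}, {'n': 2}, {'n': 2}]

container = []
for i in range(3):
    item['n'] = i
    container.append(copy.deepcopy(item))  # snapshot each state
print container                            # [{'n': 0}, {'n': 1}, {'n': 2}]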
The very first mistake before you even start: Don't be afraid of whitespace.
When you show someone a piece of Python code, they are impressed until you tell them that they have to indent correctly. For some reason, most people feel that a language shouldn't force a certain style on them while all of them will indent the code nonetheless.
Don't use index to loop over a sequence
Don't :
for i in range(len(tab)) :
    print tab[i]
Do :
for elem in tab :
    print elem
The for loop automates most iteration for you. Use enumerate if you really need both the index and the element.
for i, elem in enumerate(tab):
    print i, elem
Be careful when using "==" to check against True or False
if (var == True) :
    # this will execute if var is True or 1, 1.0, 1L
if (var != True) :
    # this will execute if var is neither True nor 1
if (var == False) :
    # this will execute if var is False or 0 (or 0.0, 0L, 0j)
if (var == None) :
    # executes if var is None, but a custom __eq__ can fool it
if var :
    # execute if var is a non-empty string/list/dictionary/tuple, non-0, etc.
if not var :
    # execute if var is "", {}, [], (), 0, None, etc.
if var is True :
    # only execute if var is boolean True, not 1
if var is False :
    # only execute if var is boolean False, not 0
if var is None :
    # like var == None, but immune to a custom __eq__; the recommended test for None
Do not check if you can, just do it and handle the error
Pythonistas usually say "It's easier to ask for forgiveness than permission".
Don't :
if os.path.isfile(file_path) :
    file = open(file_path)
else :
    # do something
Do :
try :
    file = open(file_path)
except IOError as e:
    # do something
Or even better with python 2.6 / 3:
with open(file_path) as file :
It is much better because it's much more general. You can apply "try / except" to almost anything. You don't need to care about how to prevent the failure, just about the error you are risking.
Do not check against type
Python is dynamically typed, therefore checking for type makes you lose flexibility. Instead, use duck typing by checking behaviour. E.g., if you expect a string in a function, use str() to convert any object to a string. If you expect a list, use list() to convert any iterable to a list.
Don't :
def foo(name) :
    if isinstance(name, str) :
        print name.lower()

def bar(listing) :
    if isinstance(listing, list) :
        listing.extend(("1", "2", "3"))
        return ", ".join(listing)
Do :
def foo(name) :
    print str(name).lower()

def bar(listing) :
    l = list(listing)
    l.extend(("1", "2", "3"))
    return ", ".join(l)
Written the last way, foo will accept any object, and bar will accept strings, tuples, sets, lists and much more. Cheap DRY :-)
Don't mix spaces and tabs
Just don't. You would cry.
Use object as first parent
This is tricky, but it will bite you as your program grows. There are old-style and new-style classes in Python 2.x. The old ones are, well, old. They lack some features, and can have awkward behaviour with inheritance. To be usable, any of your classes must be "new-style". To do so, make them inherit from "object" :
Don't :
class Father :
    pass

class Child(Father) :
    pass
Do :
class Father(object) :
    pass

class Child(Father) :
    pass
In Python 3.x all classes are new style so you don't need to do that.
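A quick Python 2.x interpreter session shows one visible difference:
>>> class Old : pass
...
>>> class New(object) : pass
...
>>> type(Old())   # every old-style instance shares the same type
<type 'instance'>
>>> type(New())   # new-style instances report their real class
<class '__main__.New'>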
Don't initialize class attributes outside the __init__ method
People coming from other languages find it tempting, because that's how you do the job in Java or PHP: you write the class name, then list your attributes and give them default values. It seems to work in Python too; however, it doesn't work the way you think.
Doing that sets up class attributes (static attributes); then, when you read the attribute on an object, Python returns the instance attribute if it exists, and falls back to the class attribute otherwise.
It implies two big hazards :
- If the class attribute is changed, the default value changes for every instance that hasn't set its own.
- If you set a mutable object as a default value, you'll get the same object shared across instances.
Don't (unless you want static) :
class Car(object):
    color = "red"
    wheels = [Wheel(), Wheel(), Wheel(), Wheel()]
Do :
class Car(object):
    def __init__(self):
        self.color = "red"
        self.wheels = [Wheel(), Wheel(), Wheel(), Wheel()]
Importing re and using the full regular expression approach to string matching/transformation, when perfectly good string methods exist for every common operation (e.g. capitalisation, simple matching/searching).
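Often the plain string methods are all you need; a minimal sketch:
s = "Monty Python"
s.lower()                    # instead of a regex-based case transform
s.startswith("Monty")        # instead of re.match(r"Monty", s)
"Python" in s                # instead of re.search(r"Python", s)
s.replace("Monty", "Full")   # instead of re.sub(r"Monty", "Full", s)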
Common pitfall: Default arguments are evaluated once:
def x(a, l=[]):
    l.append(a)
    return l
print x(1)
print x(2)
prints:
[1]
[1, 2]
i.e. you always get the same list.
If you're coming from C++, realize that variables declared in a class definition are static. You can initialize nonstatic members in the __init__ method.
Example:
from random import random

class MyClass:
    static_member = 1                       # shared by every instance
    def __init__(self):
        self.non_static_member = random()   # unique to each instance
Similar to mutable default arguments is the mutable class attribute.
>>> class Classy:
...     foo = []
...     def add(self, value):
...         self.foo.append(value)
...
>>> instance1 = Classy()
>>> instance2 = Classy()
>>> instance1.add("Foo!")
>>> instance2.foo
['Foo!']
Not what you expect.
Rolling your own code before looking in the standard library. For example, writing this:
def repeat_list(items):
    while True:
        for item in items:
            yield item
When you could just use this:
from itertools import cycle
Examples of frequently overlooked modules (besides itertools) include:
- optparse for creating command line parsers
- ConfigParser for reading configuration files in a standard manner
- tempfile for creating and managing temporary files
- shelve for storing Python objects to disk, handy when a full-fledged database is overkill
Not using functional tools. This isn't just a mistake from a style standpoint, it's a mistake from a speed standpoint because a lot of the functional tools are optimized in C.
This is the most common example:
temporary = []
for item in itemlist:
    temporary.append(somefunction(item))
itemlist = temporary
The correct way to do it:
itemlist = map(somefunction, itemlist)
The just as correct way to do it:
itemlist = [somefunction(x) for x in itemlist]
And if you only need the processed items available one at a time, rather than all at once, you can save memory and improve speed by using the iterable equivalents
# itertools-based iterator
itemiter = itertools.imap(somefunction, itemlist)
# generator expression-based iterator
itemiter = (somefunction(x) for x in itemlist)
Never assume that having a multi-threaded Python application and an SMP-capable machine (for instance one equipped with a multi-core CPU) will give you the benefit of true parallelism in your application. Most likely it will not, because of the GIL (Global Interpreter Lock), which synchronizes your application at the byte-code interpreter level.
There are some workarounds, like taking advantage of SMP by putting the concurrent code in C API calls, or using multiple processes (instead of threads) via wrappers (for instance the one available at http://www.parallelpython.org), but if one needs true multi-threading in Python, one should look at things like Jython, IronPython, etc. (the GIL is a feature of the CPython interpreter, so other implementations are not affected).
According to Python 3000 FAQ (available at Artima) the above still stands even for the latest Python versions.
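A minimal sketch of the multiple-process route, using the stdlib multiprocessing module (Python 2.6+):
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == '__main__':
    pool = Pool(processes=4)            # worker processes, each with its own GIL
    print pool.map(square, range(10))   # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]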
This has been mentioned already, but I'd like to elaborate a bit on class attribute mutability.
When you define a class attribute, instances do not get their own copy of it; looking the attribute up on an instance falls through to the class, until you assign an instance attribute of the same name.
So if you have something like
class Test(object):
    myAttr = 1

instA = Test()
instB = Test()
instB.myAttr = 2   # assignment creates an instance attribute on instB
It will behave as expected.
>>> instA.myAttr
1
>>> instB.myAttr
2
The problem comes when you have class attributes that are mutable. Since every instance reads the very same class attribute, mutating it through one instance is visible through all of them.
class Test(object):
    myAttr = [1, 2, 3]

instA = Test()
instB = Test()
instB.myAttr[0] = 2   # mutates the shared list; no instance attribute is created
>>> instA.myAttr
[2,2,3]
Assignment, on the other hand, creates a fresh instance attribute that masks the class attribute, so as long as you are actually assigning something new to the attribute you are OK.
You can get around the sharing by making a deep copy of mutable class attributes during __init__:
import copy

class Test(object):
    myAttr = [1, 2, 3]
    def __init__(self):
        self.myAttr = copy.deepcopy(self.myAttr)

instA = Test()
instB = Test()
instB.myAttr[0] = 5
>>> instA.myAttr
[1,2,3]
>>> instB.myAttr
[5,2,3]
It might be possible to write a decorator that would automatically deepcopy all your class attributes during init, but I don't know offhand of one that is provided anywhere.
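For what it's worth, here is a rough sketch of what such a decorator could look like (hypothetical, written for this answer; requires the Python 2.6+ class decorator syntax):
import copy

def fresh_class_attrs(cls):
    # hypothetical decorator: give each instance its own deep copy of
    # every list/dict/set class attribute before __init__ runs
    original_init = cls.__init__
    def __init__(self, *args, **kwargs):
        for name, value in vars(cls).items():
            if isinstance(value, (list, dict, set)):
                setattr(self, name, copy.deepcopy(value))
        original_init(self, *args, **kwargs)
    cls.__init__ = __init__
    return cls

@fresh_class_attrs
class Test(object):
    myAttr = [1, 2, 3]
    def __init__(self):
        pass

With that in place, instB.myAttr[0] = 5 would leave instA.myAttr untouched.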
my_variable = [something]
...
my_varaible = f(my_variable)
...
use my_variable, thinking it contains the result from f and not the initial value
Python won't warn you in any way that on the second assignment you misspelled the variable name and created a new one.
Algorithm blogs has a good post about Python performance issues and how to avoid them: 10 Python Optimization Tips and Issues
You've mentioned default arguments... One that's almost as bad as mutable default arguments: default values which aren't None.
Consider a function which will cook some food:
def cook(breakfast="spam"):
    arrange_ingredients_for(breakfast)
    heat_ingredients_for(breakfast)
    serve(breakfast)
Because it specifies a default value for breakfast, it is impossible for some other function to say "cook your default breakfast" without a special case:
def order(breakfast=None):
    if breakfast is None:
        cook()
    else:
        cook(breakfast)
However, this could be avoided if cook used None as a default value:
def cook(breakfast=None):
    if breakfast is None:
        breakfast = "spam"
    # ... cook as before

def order(breakfast=None):
    cook(breakfast)
A good example of this is Django bug #6988. Django's caching module had a "save to cache" function which looked like this:
def set(key, value, timeout=0):
    if timeout == 0:
        timeout = settings.DEFAULT_TIMEOUT
    _caching_backend.set(key, value, timeout)
But, for the memcached backend, a timeout of 0 means "never timeout"… which, as you can see, would be impossible to specify.
Using the %s formatter in error messages. In almost every circumstance, %r should be used.
For example, imagine code like this:
try:
    get_person(person)
except NoSuchPerson:
    logger.error("Person %s not found." %(person))
Printed this error:
ERROR: Person wolever not found.
It's impossible to tell if the person variable is the string "wolever", the unicode string u"wolever" or an instance of the Person class (which has __str__ defined as def __str__(self): return self.name). Whereas, if %r was used, there would be three different error messages:
...
logger.error("Person %r not found." %(person))
Would produce the much more helpful errors:
ERROR: Person 'wolever' not found.
ERROR: Person u'wolever' not found.
ERROR: Person <Person object at 0x...> not found.
Another good reason for this is that paths are a whole lot easier to copy/paste. Imagine:
try:
    stuff = open(path).read()
except IOError:
    logger.error("Could not open %s" %(path))
If path is some path/with 'strange' "characters", the error message will be:
ERROR: Could not open some path/with 'strange' "characters"
Which is hard to visually parse and hard to copy/paste into a shell.
Whereas, if %r is used, the error would be:
ERROR: Could not open 'some path/with \'strange\' "characters"'
Easy to visually parse, easy to copy-paste, all around better.
Creating a local module with the same name as one from the stdlib. This is almost always done by accident (as reported in this question), but usually results in cryptic error messages.
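A minimal sketch of how it typically happens (file names made up): suppose your project directory contains a helper you named email.py.
# myproject/email.py  (your local helper)
# myproject/main.py:
import email                              # picks up the local email.py, not the stdlib package
msg = email.message_from_string("...")    # AttributeError: 'module' object has no attribute 'message_from_string'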
Don't modify a list while iterating over it.
odd = lambda x : bool(x % 2)
numbers = range(10)
for i in range(len(numbers)):
    if odd(numbers[i]):
        del numbers[i]
One common suggestion to work around this problem is to iterate over the list in reverse:
for i in range(len(numbers)-1, -1, -1):
    if odd(numbers[i]):
        del numbers[i]
But even better is to use a list comprehension to build a new list to replace the old:
numbers[:] = [n for n in numbers if not odd(n)]
Class attributes
Some answers above are incorrect or unclear about class attributes.
They do not become instance attributes, but are readable using the same syntax as instance attributes. They can be changed by accessing them via the class name.
class MyClass:
    attrib = 1    # class attributes named 'attrib'
    another = 2   # and 'another'
    def __init__(self):
        self.instattr = 3          # creates an instance attribute
        self.attrib = 'instance'   # masks the class attribute of the same name

mc0 = MyClass()
mc1 = MyClass()
print mc0.attrib    # 'instance'
print mc0.another   # 2

MyClass.another = 5   # change class attributes
MyClass.attrib = 21   # <- masked by the instance attribute of the same name
print mc0.attrib    # 'instance': the instance attribute is unchanged
print mc0.another   # 5: the changed class attribute shows through
Class attributes can be used as sort of default values for instance attributes, masked later by instance attributes of the same name with a different value.
Intermediate scope local variables
A more difficult matter to understand is the scoping of variables in nested functions.
In the following example, y is unwritable from anywhere other than function 'outer'. x is readable and writable from anywhere, as it is declared global in each function. z is readable and writable in 'inner*' only. y is readable in 'outer' and 'inner*', but not writable except in 'outer'.
x = 1
def outer():
    global x
    y = 2
    def inner1():
        global x, y
        y = y+1   # NameError: no global y exists yet; outer's y cannot be rebound from here
    def inner2():
        global x
        y = y+1   # UnboundLocalError: the assignment makes y local, and it's read before being bound
Python 3 includes a 'nonlocal' keyword for such 'outside this function but not global' cases. In Python 2.x, you are stuck with either making y global, or wrapping it in a mutable object that the inner functions mutate instead of rebinding.
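A minimal sketch of the Python 2 workaround:
def outer():
    y = [2]               # wrap the value in a mutable container
    def inner():
        y[0] = y[0] + 1   # mutating doesn't rebind y, so no scoping problem
    inner()
    return y[0]

print outer()             # 3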
Promiscuous Exception Handling
This is something that I see a surprising amount in production code and it makes me cringe.
try:
    do_something()   # do_something can raise a lot of errors, e.g. files, sockets
except:
    pass             # who cares, we'll just ignore it
Was the exception the one you wanted to suppress, or is it something more serious? There are more subtle cases, too, cases that can make you pull your hair out trying to figure them out.
try:
    foo().bar().baz()
except AttributeError:   # baz() may return None or an incompatible *duck type*
    handle_no_baz()
The problem is that foo or bar could be the culprits too. I think this can be more insidious because this is idiomatic Python, where you are checking your types for proper methods; but each method call has a chance to return something unexpected and to suppress bugs that should be raising exceptions.
Knowing which exceptions a method can throw is not always obvious. For example, urllib and urllib2 use socket, which has its own exceptions that percolate up and rear their ugly head when you least expect it.
Exception handling is a productivity boon for handling errors compared to system-level languages like C. But I have found that suppressing exceptions improperly can create truly mysterious debugging sessions and take away a major advantage that interpreted languages provide.
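A minimal sketch of the safer shape (the handling shown is made up; the point is the narrow except clauses):
import socket
import urllib2

try:
    data = urllib2.urlopen('http://example.com/').read()
except urllib2.URLError as e:
    print 'fetch failed:', e        # the failure we actually anticipated
except socket.timeout:
    print 'timed out, will retry'   # narrow, deliberate suppression
# anything else propagates with a full traceback, as it should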