For example, if you have code that does something like this somewhere in your codebase:
>>> import logging
>>> logging.basicConfig()
>>> logger = logging.getLogger(__name__)
>>> logger.critical("foo: %s", 1, 2)
Traceback (most recent call last):
  File "/pluto/local/lib/python2.6/logging/__init__.py", line 768, in emit
    msg = self.format(record)
  File "/pluto/local/lib/python2.6/logging/__init__.py", line 648, in format
    return fmt.format(record)
  File "/pluto/local/lib/python2.6/logging/__init__.py", line 436, in format
    record.message = record.getMessage()
  File "/pluto/local/lib/python2.6/logging/__init__.py", line 306, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
>>>
...you will just get that stack trace, with no indication of where the original bad logging call is.
It seems like it wouldn't be too hard to write something that uses the tokenize module or the ast module to process Python source and detect these problems, but I can't imagine I'm the first person who's wanted something like this.
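To make it concrete, here's roughly the kind of thing I'm picturing: a minimal ast-based sketch. The placeholder regex, the list of method names, and the "any .debug()/.info()/etc. attribute call is a logging call" heuristic are all simplifying assumptions of mine; it only handles literal %-style format strings and skips anything it can't reason about statically:

import ast
import re
import sys

# Matches %-style conversion specifiers, skipping literal "%%" escapes.
# Deliberately simplified: it does not understand %(name)s mapping keys.
PLACEHOLDER = re.compile(r'%(?!%)[#0 +-]*(?:\*|\d+)?(?:\.(?:\*|\d+))?[diouxXeEfFgGrsc]')

# Heuristic: treat any x.debug(...), x.info(...), etc. call as a logging call.
LOG_METHODS = frozenset(['debug', 'info', 'warning', 'warn', 'error',
                         'critical', 'exception', 'log'])

def check_source(source, filename='<string>'):
    tree = ast.parse(source, filename)
    for node in ast.walk(tree):
        if not (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr in LOG_METHODS):
            continue
        args = node.args
        if node.func.attr == 'log':      # logger.log(level, msg, *args)
            args = args[1:]
        # Skip calls we can't check statically: non-literal format strings
        # or *args expansion (py2 ast exposes the latter as node.starargs).
        if not args or not isinstance(args[0], ast.Str):
            continue
        if getattr(node, 'starargs', None) is not None:
            continue
        expected = len(PLACEHOLDER.findall(args[0].s))
        supplied = len(args) - 1
        if expected != supplied:
            print('%s:%s: format string has %d placeholder(s) '
                  'but %d argument(s) were passed'
                  % (filename, node.lineno, expected, supplied))

if __name__ == '__main__':
    for path in sys.argv[1:]:
        with open(path) as f:
            check_source(f.read(), path)

Run over a file containing the example above, it would report something like "foo.py:4: format string has 1 placeholder(s) but 2 argument(s) were passed". A real tool would obviously need to handle %(name)s mapping keys, a single dict argument, and smarter detection of what actually counts as a logger, but that's the general shape of it.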
Update: I was picturing more of a static-analysis type of tool rather than either a change to the logging code or a process change (e.g. code review). I specifically don't consider code review good enough because humans are fallible. In my particular case we already have many tools that run over the code and catch common mistakes (e.g. a pdb left in, pylint, etc.)... I was really hoping to add a tool to that toolbox.