“I would absolutely love you to bore me with details about the whats and the whys, application-specific stuff, etc.”
Ho. Well, you asked for it!
Like Daniel, I personally use Apache with mod_wsgi. It's still new enough that deploying it in some environments can be a struggle, but if you're compiling everything yourself anyway it's pretty easy. I've found it very reliable, even in the early versions. Props to Graham Dumpleton for keeping control of it pretty much by himself.
However, for me it's essential that WSGI applications work across all possible servers. There is a bit of a hole at the moment in this area: the WSGI standard tells you what a WSGI callable (‘application’) does, but there's no standardisation of deployment; no single way to tell the web server where to find the application, and no standardised way to make the server reload the application when you've updated it.
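For reference, the part that is standardised is the callable itself and nothing more. A minimal example of that interface:

def application(environ, start_response):
    # the whole of the WSGI contract: take the request environ and a
    # start_response callback, return an iterable of body byte strings
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from a WSGI callable']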
The approach I've adopted:

- all application logic lives in modules/packages, preferably in classes
- all website-specific customisations are done by subclassing the main Application class and overriding members (there's a rough sketch of this after the list)
- all server-specific deployment settings (e.g. database connection factory, mail relay settings) are passed in as the class's __init__() parameters
- one top-level ‘application.py’ script initialises the Application class with the correct deployment settings for the current server, then runs it in such a way that it can work deployed as a CGI script, as a mod_wsgi WSGIScriptAlias (or under Passenger, which apparently works the same way), or be driven from the command line
- a helper module takes care of the above deployment issues and allows the application to be reloaded when the modules it relies on change
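Concretely, the first three points come down to something like this (an illustrative sketch only: the constructor signature matches the application.py below, but the member names and the subclass are made up for the example):

# myapplication.py, living somewhere under system/lib/
class Application(object):
    def __init__(self, basedir, dbfactory, debug=False):
        # server-specific deployment settings arrive as constructor arguments
        self.basedir = basedir
        self.dbfactory = dbfactory
        self.debug = debug

    def __call__(self, environ, start_response):
        # the instance itself acts as the WSGI callable
        start_response('200 OK', [('Content-Type', 'text/html')])
        return [self.render_banner()]

    def render_banner(self):
        return b'<h1>My application</h1>'

# a website-specific customisation: subclass and override members
class BrandedApplication(Application):
    def render_banner(self):
        return b'<h1>My application, customised for this site</h1>'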
So what the application.py looks like in the end is something like:
#!/usr/bin/env python
import os.path
basedir = os.path.dirname(__file__)

# server-specific deployment settings live here as plain factory functions
import MySQLdb
def dbfactory():
    return MySQLdb.connect(db='myappdb', unix_socket='/var/mysql/socket', user='u', passwd='p')

def appfactory():
    # imported lazily so the wrapper can unload and re-import it on changes
    import myapplication
    return myapplication.Application(basedir, dbfactory, debug=False)

import wsgiwrap
ismain = __name__ == '__main__'
libdir = os.path.join(basedir, 'system', 'lib')
application = wsgiwrap.Wrapper(appfactory, libdir, 10, ismain)
The wsgiwrap.Wrapper checks every 10 seconds (the third constructor argument above) to see if any of the application modules in libdir have been updated, and if so does some nasty sys.modules magic to unload them all reliably. Then appfactory() is called again to get a new instance of the updated application.
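The ‘magic’ is conceptually along these lines (a simplified sketch for illustration, not the actual wsgiwrap code, which has to cope with more corner cases):

import os
import sys

def modules_changed(libdir, last_check):
    # has any .py file under libdir been touched since the last check?
    for dirpath, dirnames, filenames in os.walk(libdir):
        for name in filenames:
            if name.endswith('.py'):
                if os.path.getmtime(os.path.join(dirpath, name)) > last_check:
                    return True
    return False

def unload_modules(libdir):
    # forget every loaded module whose source file lives under libdir, so
    # the next appfactory() call re-imports the fresh code
    libdir = os.path.abspath(libdir)
    for name, module in list(sys.modules.items()):
        path = getattr(module, '__file__', None)
        if path and os.path.abspath(path).startswith(libdir):
            del sys.modules[name]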
(You can also run command-line invocations such as
./application.py setup
./application.py daemon
to run any setup and background-task hooks provided by the application callable, a bit like how distutils works. It also responds to start/stop/restart like an init script.)
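Conceptually the command-line side is nothing more than a dispatcher along these lines (a sketch only; the setup() and daemon() hook names here are placeholders, not wsgiwrap's real interface):

import sys

def dispatch_command(app, argv):
    # map the first command-line argument onto a hook of the application
    # object; the hook names used here are invented for illustration
    command = argv[0] if argv else 'help'
    if command == 'setup':
        app.setup()                      # one-off installation tasks
    elif command in ('daemon', 'start', 'stop', 'restart'):
        app.daemon(command)              # background-task / init-style control
    else:
        sys.stderr.write('usage: application.py setup|daemon|start|stop|restart\n')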
Another trick I use is to put the deployment settings for multiple servers (development/testing/production) in the same application.py script, and sniff ‘socket.gethostname()’ to decide which server-specific bunch of settings to use.
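That trick is just a dictionary lookup keyed on the hostname, something like this (the hostnames and settings here are invented for the example):

import socket

# per-server settings keyed on hostname
DEPLOYMENTS = {
    'dev-laptop': dict(db='myappdb_dev', debug=True),
    'staging01':  dict(db='myappdb_test', debug=True),
    'www1':       dict(db='myappdb', debug=False),
}
settings = DEPLOYMENTS.get(socket.gethostname(), DEPLOYMENTS['dev-laptop'])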
At some point I might package wsgiwrap up and release it properly (possibly under a different name). In the meantime, if you're interested, you can see a dogfood-development version at http://www.doxdesk.com/file/software/py/v/wsgiwrap-0.5.py.