I have a lot of python unit tests for a project and it's getting to the point where it takes a long time to run them. I don't want to add more because I know they're going to make things slower. How do people solve this problem? Is there any easy way to distribute the test execution over a cluster?
Profile them to see what is really slow. You may be able to solve this problem without distribution. If the tests are truly unit tests, you should have few problems running them across multiple execution engines.
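A minimal sketch of that profiling step, using only the standard library (the test case here is a hypothetical stand-in for your real suite):

```python
import cProfile
import io
import pstats
import unittest

# Hypothetical stand-in for a real test case; substitute your own suite.
class ExampleTests(unittest.TestCase):
    def test_sum(self):
        self.assertEqual(sum(range(1000)), 499500)

suite = unittest.TestLoader().loadTestsFromTestCase(ExampleTests)

# Run the suite under the profiler, discarding the runner's own output.
profiler = cProfile.Profile()
profiler.enable()
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
profiler.disable()

# The most expensive calls sort to the top; slow tests show up immediately.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```

Sorting by cumulative time is usually the right view here, since it charges each test method with everything it calls.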
The first part of the solution to this problem is to only run the tests one needs to run. Between commits to a shared branch, one runs only those tests that one's new work interacts with; that should take all of five seconds. If one adopts this model, it becomes vital to make a point of running the entire test suite before committing to a shared resource.
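One way to sketch that "run only what you touched" habit with the standard `unittest` machinery (the test classes are hypothetical examples):

```python
import unittest

# Hypothetical test classes; ParserTests covers the module just changed.
class ParserTests(unittest.TestCase):
    def test_split(self):
        self.assertEqual("a b".split(), ["a", "b"])

class RendererTests(unittest.TestCase):
    def test_join(self):
        self.assertEqual("-".join(["x", "y"]), "x-y")

# Load only the class you are working on; RendererTests never runs.
suite = unittest.TestLoader().loadTestsFromTestCase(ParserTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

From the command line the same idea is `python -m unittest some.module.ParserTests` (module path hypothetical), which keeps the edit-test loop down to seconds.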
The problem of running the full test suite for regression purposes remains, of course, though it's already partially addressed by simply running the full suite less often. To avoid having to wait around while that job runs, one can offload the task of testing to another machine. That quickly turns into a task for a continuous integration system; Buildbot seems fairly appropriate to your use case.
You should also be able to distribute tests across hosts using buildbot, firing off two jobs with different entry points to the test suite. But I am not convinced that this will gain you much over the first two steps I've mentioned here; it should be reserved for cases when tests take much longer to run than the interval between commits to shared resources.
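Those two entry points need a deterministic way to split the suite so that every test runs exactly once across the two hosts. A sketch of one approach, not specific to any CI system, using a stable checksum rather than `hash()` (whose value differs between Python processes); the test class is a hypothetical stand-in:

```python
import unittest
import zlib

def shard_suite(suite, shard, num_shards):
    """Keep only the tests whose id checksums into this shard.

    zlib.crc32 is used instead of hash() so that the split is
    identical on every machine and in every Python process.
    """
    selected = unittest.TestSuite()
    for test in suite:
        if isinstance(test, unittest.TestSuite):
            selected.addTests(shard_suite(test, shard, num_shards))
        elif zlib.crc32(test.id().encode()) % num_shards == shard:
            selected.addTest(test)
    return selected

# Hypothetical tests to split across two build machines.
class ManyTests(unittest.TestCase):
    def test_a(self): pass
    def test_b(self): pass
    def test_c(self): pass
    def test_d(self): pass

full = unittest.TestLoader().loadTestsFromTestCase(ManyTests)
first_half = shard_suite(full, 0, 2)    # entry point for host 1
second_half = shard_suite(full, 1, 2)   # entry point for host 2
```

Each host then runs its own shard; together the two shards cover the full suite with no overlap.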
D'A
[Caveat lector: My understanding of buildbot is largely theoretical at this point, and it is probably harder than it looks.]
While coding, run only the tests for the class you have just changed, not every test in the whole project.
Still, it is good practice to run all the tests before you commit your code (though the continuous integration server can do that for you).
See py.test, which has the ability to pass unit tests off to a group of machines, or Nose, which (as of trunk, not the currently released version) supports running tests in parallel with the multiprocessing module.
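For reference, the invocations look roughly like this (flags come from the py.test distributed-testing plugin and nose's multiprocess plugin, and may differ between versions; the host name is hypothetical):

```shell
# py.test: spread tests over 4 local CPUs
py.test -n 4

# py.test: farm tests out to a remote machine over ssh,
# rsyncing the current directory to it first
py.test -d --tx ssh=user@buildhost//python=python --rsyncdir . .

# nose multiprocess plugin: run tests in 4 worker processes
nosetests --processes=4
```

Note that parallel runners surface hidden inter-test dependencies: tests that share global state or fixtures may pass serially but fail when split across workers.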
You can't frequently run all your tests, because they're too slow. This is an inevitable consequence of your project getting bigger, and it won't go away. Sure, you may be able to run the tests in parallel and get a nice speedup, but the problem will just come back later, and it will never again be as fast as when your project was small.
For productivity, you need to be able to code, and run relevant unit tests and get results within a few seconds. If you have a hierarchy of tests, you can do this effectively: run the tests for the module you're working on frequently, the tests for the component you're working on occasionally, and the project-wide tests infrequently (perhaps before you're thinking of checking it in). You may have integration tests, or full system tests which you may run overnight: this strategy is an extension of that idea.
All you need to do to set this up is to organize your code and tests to support the hierarchy.
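A sketch of what that hierarchy can look like with plain `unittest` suites (the tier names and test classes are hypothetical examples):

```python
import unittest

# Hypothetical tiers: module-level and component-level tests.
class ParserUnitTests(unittest.TestCase):
    def test_tokenize(self):
        self.assertEqual("a,b".split(","), ["a", "b"])

class ComponentTests(unittest.TestCase):
    def test_roundtrip(self):
        self.assertEqual(",".join("a,b".split(",")), "a,b")

loader = unittest.TestLoader()

# Tier 1: the module you are editing -- run constantly while coding.
module_suite = loader.loadTestsFromTestCase(ParserUnitTests)

# Tier 2: the whole component -- run occasionally; it includes tier 1.
component_suite = unittest.TestSuite(
    [module_suite, loader.loadTestsFromTestCase(ComponentTests)]
)

# Tier 3: in a real project, discovery would gather everything,
# e.g. loader.discover("tests") -- run before checking in.
result = unittest.TextTestRunner(verbosity=0).run(component_suite)
```

Organizing the test directories to mirror these tiers is what makes each level cheap to invoke on its own.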