views:

31

answers:

1

I've taken to putting module code directly in a package's __init__.py, even for simple packages where this ends up being the only file.

So I have a bunch of packages that look like this (though they're not all called pants:)

+ pants/
\-- __init__.py
\-- setup.py
\-- README.txt
\--+ test/
   \-- __init__.py

I started doing this because it allows me to put the code in a separate (and, critically, separately versionable) directory, and have it work in the same way as it would if the package were located in a single module.py. I keep these in my dev python lib directory, which I have added into $PYTHONPATH when working on such things. Each package is a separate git repo.
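To make that concrete, here is a runnable sketch of why the layout works (the package name pants and its wear() function are made up for illustration): with the code in __init__.py, putting the directory that *contains* pants/ on sys.path makes import pants behave exactly like a single-file pants.py would.

```python
import os
import sys
import tempfile

# Stand-in for my dev python lib directory (the one $PYTHONPATH points at).
devlib = tempfile.mkdtemp()
pkg = os.path.join(devlib, 'pants')
os.mkdir(pkg)

# The module code goes straight into __init__.py.
with open(os.path.join(pkg, '__init__.py'), 'w') as f:
    f.write("def wear():\n    return 'worn'\n")

sys.path.insert(0, devlib)  # what adding the dev lib dir to $PYTHONPATH does
import pants

print(pants.wear())  # prints 'worn'; the package is used like a plain module
```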

edit...

Compared to the typical Python package layout, as exemplified in Radomir's answer, this setup saves me from having to add each package's directory into my PYTHONPATH.

.../edit

This has worked out pretty well, but I've hit upon this (somewhat obscure) issue:

When running tests from within the package directory, the package itself, i.e. the code in __init__.py, is not guaranteed to be on sys.path. This is not a problem in my usual environment, but if someone downloads the source distribution pants-4.6.tgz, extracts it, cds into the directory, and runs python setup.py test, the package pants won't normally be on their sys.path.

I find this strange, because I would expect setuptools to run the tests from a parent directory of the package under test. For whatever reason it doesn't, presumably because packages aren't normally laid out this way.

Relative imports don't work because test is a top-level package, having been found as a subdirectory of the current-directory component of sys.path.
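In miniature, the failure looks like the following sketch (which exact exception you get is version-dependent: ValueError on Python 2.x, ImportError on 3.x). With test resolved as a top-level package, '..' has no parent to refer to.

```python
# Simulate executing 'from .. import pants' inside test/__init__.py when
# 'test' has been found as a *top-level* package: the import machinery
# rejects '..' before it ever goes looking for pants.
src = "from .. import pants"
ns = {'__package__': 'test', '__name__': 'test'}
try:
    exec(compile(src, 'test/__init__.py', 'exec'), ns)
    error = None
except (ImportError, ValueError) as exc:  # ImportError on 3.x, ValueError on 2.x
    error = type(exc).__name__

print(error)
```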

I'd like to avoid having to move the code into a separate file and import its public names into __init__.py, mostly because that seems like pointless clutter for a simple module.

I could explicitly add the parent directory to sys.path from within setup.py, but would prefer not to. For one thing, this could, at least in theory, fail, e.g. if somebody decides to run the test from the root of their filesystem (presumably a Windows drive). But mostly it just feels jerry-rigged.
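For reference, the jerry-rig in question is only a couple of lines, sketched here on the assumption that it sits near the top of setup.py; the comment notes why the filesystem root defeats it.

```python
import os
import sys

# Make the package under test importable by putting the directory *above*
# the package onto sys.path. Caveat: at the filesystem root, dirname()
# returns the root itself, so 'parent' is no longer actually above us.
here = os.path.dirname(os.path.abspath(__file__))  # directory holding setup.py
parent = os.path.dirname(here)
if parent not in sys.path:
    sys.path.insert(0, parent)
```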

Is there a better way?

Is it considered particularly bad form to put code in __init__.py?

+1  A: 

I think the standard way to package Python programs would be more like this:

\-- setup.py
\-- README.txt
\--+ pants/
   \-- __init__.py
   \-- __main__.py
   ...
\--+ tests/
   \-- __init__.py
   ...
\--+ some_dependency_you_need/
   ...

Then you avoid the problem.
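A minimal setup.py for that layout might look like the following sketch (the name and version are placeholders); because the code lives in its own pants/ subdirectory, no package_dir gymnastics are needed:

```python
# Hypothetical setup.py sitting next to the pants/ directory.
try:
    from setuptools import setup      # preferred; provides 'test' and 'develop'
except ImportError:
    from distutils.core import setup  # stdlib fallback

setup(
    name='pants',
    version='4.6',
    packages=['pants'],  # the package lives in ./pants/
)
```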

Radomir Dopieralski
Yes, but I also acquire another problem: the package's code is no longer available by virtue of its parent directory being in the PYTHONPATH. I'm hoping to minimize or eliminate the amount of infrastructure/build complications involved in packaging code that I'm actively using.
intuited
It is available just fine, you just do 'from pants import socks' instead of 'import socks', but you should do that anyway to avoid weird name clashes. Most of the Python projects I've seen out there do it like that.
Radomir Dopieralski
If you follow this layout, then you can use "setup.py develop" to add the right directory to sys.path, and won't have to do it manually.
pjeby
I thought about making a module named `underpants` but it seemed like overdressing. <buddum-ching />

What I ended up doing was putting the code in `pants.py` and using `py_modules` to install it. Then I just symlinked to it from the parent directory. I got rid of the top-level `__init__.py`.

@pjeby: thanks for the tip on `setup.py develop`; I don't really understand what it does, but get the impression that it will be pretty self-explanatory when I try it.

@Radomir Dopieralski: With a symlink from my dev python directory this works okay. Thanks.
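For anyone landing here later, the `py_modules` arrangement described above amounts to a setup.py along these lines (a sketch, with placeholder metadata):

```python
# Hypothetical setup.py: the code lives in a single pants.py next to
# this file, and py_modules installs it as a top-level module.
try:
    from setuptools import setup
except ImportError:
    from distutils.core import setup

setup(
    name='pants',
    version='4.6',
    py_modules=['pants'],  # installs pants.py; no package directory at all
)
```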
intuited