Good morning,
I am currently writing a Python library. At the moment, modules and classes are laid out in an ad hoc way, with no deliberate design. As I approach a more official release, I would like to reorganize classes and modules so that they have a better overall design. I drew a diagram of the import dependencies, and I was planning to aggregate classes by layer level. I was also considering some modifications to the classes to reduce these dependencies.
What is your strategy for a good overall design of a potentially complex, still-in-the-making Python library? Do you have any interesting suggestions?
Thanks
Update:
I was indeed looking for a rule of thumb. For example, suppose you have this layout (__init__.py files omitted for clarity):
foo/bar/a.py
foo/bar/b.py
foo/hello/c.py
foo/hello/d.py
Now, if you happen to have d.py importing bar.b and a.py importing hello.c, I would consider this a bad arrangement: the two sibling packages depend on each other. Another case would be
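To make the "bad setting" concrete, this kind of crossing can be detected mechanically with the standard ast module. A minimal sketch, where sibling_deps is a made-up helper (not part of any real tool) and the module sources are reduced to their problematic import lines:

```python
import ast

def sibling_deps(source, own_pkg, siblings):
    """Hypothetical helper: which sibling sub-packages does this
    module's source import?"""
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            top = name.split(".")[0]
            if top in siblings and top != own_pkg:
                deps.add(top)
    return deps

# The layout above, reduced to the offending import lines.
sources = {
    "hello": "import bar.b\n",   # foo/hello/d.py imports bar.b
    "bar":   "import hello.c\n", # foo/bar/a.py imports hello.c
}
pkgs = set(sources)
graph = {pkg: sibling_deps(src, pkg, pkgs) for pkg, src in sources.items()}

# Two packages that import each other form a "crossing".
crossings = sorted((p, q) for p in graph for q in graph[p]
                   if p in graph[q] and p < q)
print(crossings)  # [('bar', 'hello')]
```

Running a check like this over the real source tree (via ast.parse on each file) gives the same dependency diagram I drew by hand.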
foo/bar/a.py
foo/bar/baz/b.py
foo/bar/baz/c.py
Suppose that both a.py and b.py import c. You have three solutions:

1) leave c where it is: b imports c, and a imports baz.c
2) move c up into foo/bar: a.py imports c, and b.py imports ..c
3) move c somewhere else entirely (say foo/cpackage/c.py), and then both a and b import cpackage.c
I tend to prefer 3), but if c.py has no meaning as a standalone module (for example, because you want to keep it "private" to the bar package), I would preferentially go for 1).
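To illustrate option 3 concretely, here is a self-contained sketch that builds that layout in a temporary directory and shows both a.py and b.py resolving c through the shared package. The contents (cpackage, VALUE, use) are invented for the example:

```python
import importlib
import os
import sys
import tempfile

# Made-up module contents: c exposes a constant, a and b both use it.
files = {
    "foo/__init__.py": "",
    "foo/cpackage/__init__.py": "",
    "foo/cpackage/c.py": "VALUE = 42\n",
    "foo/bar/__init__.py": "",
    "foo/bar/a.py": "from foo.cpackage import c\n\ndef use():\n    return c.VALUE\n",
    "foo/bar/baz/__init__.py": "",
    "foo/bar/baz/b.py": "from foo.cpackage import c\n\ndef use():\n    return c.VALUE\n",
}

root = tempfile.mkdtemp()
for path, src in files.items():
    full = os.path.join(root, path)
    os.makedirs(os.path.dirname(full), exist_ok=True)
    with open(full, "w") as f:
        f.write(src)

sys.path.insert(0, root)
a = importlib.import_module("foo.bar.a")
b = importlib.import_module("foo.bar.baz.b")
print(a.use(), b.use())  # 42 42: both depend only on foo.cpackage
```

The point of this arrangement is that neither a nor b reaches into the other's package: the only shared edge points downward to cpackage.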
There are many other similar cases. My rule of thumb is to keep the number of dependencies and crossings to a minimum, so as to prevent a highly branched, highly interwoven setup, but I could be wrong.
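One way to sanity-check that rule of thumb: if the package-level dependency graph has no crossings (i.e. no cycles), the packages can be assigned to layers with a topological sort; if it has a crossing like the bar/hello case above, layering is impossible. A small sketch using graphlib from the standard library, with invented package names (foo.util, foo.core, foo.api):

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical layering: each package maps to the packages it imports.
deps = {
    "foo.util": set(),
    "foo.core": {"foo.util"},
    "foo.api":  {"foo.core", "foo.util"},
}
layers = list(TopologicalSorter(deps).static_order())
print(layers)  # ['foo.util', 'foo.core', 'foo.api'], bottom-up

# A mutual dependency makes layer assignment impossible:
tangled = {"foo.bar": {"foo.hello"}, "foo.hello": {"foo.bar"}}
try:
    list(TopologicalSorter(tangled).static_order())
except CycleError:
    print("cycle detected: these packages cannot be layered")
```

So "aggregate classes by layer level" works exactly when the import graph stays acyclic, which is another way of stating the rule of thumb.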