We have noticed that even though we have a lot of doctests in our Python code, when we trace the testing using the methods described here:
we find that certain lines of code are never executed. We currently sift through the traceit logs to identify blocks of code that are never run, and then try to come up with different test cases to exercise those particular blocks. As you can imagine, this is very time-consuming, and I was wondering whether we are going about this the wrong way. Do you have other advice or suggestions for dealing with this problem? I'm sure it must be common as software becomes sufficiently complex.
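To give a sense of the kind of tracing involved, here is a minimal sketch using only the standard library's `doctest` and `trace` modules (the `gcd` function is an illustrative stand-in, not code from our project). Tools such as coverage.py automate this same line-counting idea.

```python
import doctest
import trace

def gcd(a, b):
    """Greatest common divisor (illustrative stand-in for real code).

    >>> gcd(12, 8)
    4
    """
    while b:
        a, b = b, a % b
    return a

# Collect the doctests attached to gcd; pass globs explicitly so the
# examples can resolve the name `gcd` regardless of how this module runs.
finder = doctest.DocTestFinder()
tests = finder.find(gcd, name='gcd', globs={'gcd': gcd})

# Run each doctest under the stdlib line tracer, counting executed lines.
tracer = trace.Trace(count=True, trace=False)
runner = doctest.DocTestRunner(verbose=False)
for t in tests:
    tracer.runfunc(runner.run, t)

# counts maps (filename, lineno) -> execution count; lines in gcd's body
# that never show up here were not exercised by any doctest.
counts = tracer.results().counts
```

Lines that appear in the function but never in `counts` are exactly the untested blocks we hunt for by hand.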
Any advice appreciated.