The more detail I put in an interface, the less reusable it is. On the other hand, the less detail it has, the more ethereal and useless it seems to become. Is there a standard set of recommendations on how to weigh this trade-off in various situations?
I have just co-authored a paper on granularity (size) of components and one of our conclusions is that there is no simple way to determine "what's right". So no, there is no standard set of recommendations.
I can give you a couple of academic references on the subject just in case you're interested:
- Genero, M., Piattini, M., Calero, C. (eds.): Metrics for Software Conceptual Models. Imperial College Press, London, UK (2005)
- Shekhovtsov, V.A.: On Conceptualization of Quality. Paper presented at the Dagstuhl Seminar on Conceptual Modelling, April 27-30, 2008 (preprint on the seminar website) (2008)
Consider the human genome as a class.
Each instance (a cell object) has all the functions of the genome available to it, although not all cell objects have access to all of those functions (except perhaps stem cells).
I'm bringing up this point because I have seen many instances of single classes trying to perform many functions, instead of having multiple classes, each performing a single function.
This is equivalent to a grain of sand having the instructions encoded in it to build a castle. Evolution has had the benefit of billions of years to work out the bugs. Engineers just don't have the capacity or the time to do this.
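As a rough sketch of that idea (the class names below are hypothetical, just to illustrate splitting one do-everything class into single-purpose ones):

```java
// Hypothetical sketch: instead of one ReportManager class that parses,
// validates, and saves, give each function its own class.

class ReportParser {
    String parse(String raw) {
        return raw.trim(); // placeholder parsing logic
    }
}

class ReportValidator {
    boolean isValid(String report) {
        return !report.isEmpty(); // placeholder validation rule
    }
}

class ReportRepository {
    void save(String report) {
        System.out.println("Saving report: " + report); // placeholder persistence
    }
}
```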
I'm a big fan of the SOLID principles. The "I" in SOLID (the Interface Segregation Principle) leads me to believe that clients shouldn't be forced to depend on interfaces they do not need or use. In other words, if you have an abstract class or an interface, the implementer should not be forced to implement parts that they don't care about.
Ray Houston wrote a good article on it (looking at the Membership Provider) here.
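As a minimal sketch of that principle (the device and interface names below are my own, not taken from the article): rather than one fat interface that forces a plain printer to stub out scanning, split it into role interfaces and let each class implement only the roles it actually supports.

```java
// Hypothetical Interface Segregation example: small role interfaces
// instead of one fat MultiFunctionDevice interface.

interface Printer {
    void print(String document);
}

interface Scanner {
    void scan(String document);
}

// A simple printer implements only the role it supports;
// it is never forced to provide a do-nothing scan() method.
class SimplePrinter implements Printer {
    @Override
    public void print(String document) {
        System.out.println("Printing: " + document);
    }
}

// A combined device can still opt into both roles.
class OfficeMachine implements Printer, Scanner {
    @Override
    public void print(String document) {
        System.out.println("Printing: " + document);
    }

    @Override
    public void scan(String document) {
        System.out.println("Scanning: " + document);
    }
}
```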