Here are two that come to mind.
1) Duck Typing - If an object's methods and properties make it usable as a duck, then it is a duck for all practical purposes. This is a form of inductive reasoning and, if applied blindly, it is subject to the problem of induction ("I have only ever seen white swans" -> "all swans are white"). See, for example, this discussion:
One issue with duck typing is that it forces the programmer to have a much wider understanding of the code he or she is working with at any given time. In a strongly and statically typed language that uses type hierarchies and parameter type checking, it's much harder to supply an unexpected object type to a class. For instance, in Python, you could easily create a class called Wine, which expects an ingredient class implementing a press() method. However, a class called Trousers might also implement a press() method. With duck typing, in order to prevent strange, hard-to-detect errors, the developer needs to be aware of each potential use of the method press(), even when it's conceptually unrelated to what he or she is working on.
In essence, the problem is that, "if it walks like a duck and quacks like a duck", it could be a dragon doing a duck impersonation. You may not always want to let dragons into a pond, even if they can impersonate a duck.
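A minimal sketch of the hazard, using the Wine/Trousers names from the quoted passage (everything else here is illustrative, not code from any real library):

    class Wine:
        """Expects an ingredient with a press() method that yields juice."""
        def __init__(self, ingredient):
            self.ingredient = ingredient

        def make(self):
            # Duck typing: we only check that the ingredient can be pressed,
            # not what kind of thing it actually is.
            return self.ingredient.press()


    class Grapes:
        def press(self):
            return "grape juice"


    class Trousers:
        # Conceptually unrelated, but it satisfies the same "duck" interface.
        def press(self):
            return "a sharp crease"


    print(Wine(Grapes()).make())    # "grape juice"  -- the intended duck
    print(Wine(Trousers()).make())  # "a sharp crease" -- a dragon in the pond

Nothing stops the second call; the error, if it surfaces at all, shows up far from the point where the wrong object was handed in.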
2) Unit testing as documentation (a way of defining an API) - If you can't demonstrate a behaviour with a unit test, then the behaviour you would like to enforce is effectively undefined. This reminds me of Logical Positivism:
Perhaps the view for which the logical positivists are best known is the verifiability criterion of meaning, or verificationism. In one of its earlier and stronger formulations, this is the doctrine that a proposition is "cognitively meaningful" only if there is a finite procedure for conclusively determining whether it is true or false.
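To make the analogy concrete, here is a small sketch; the slugify function and its contract are hypothetical, and exist only to show how a test case plays the role of the "finite procedure" that pins a behaviour down:

    import unittest

    # A hypothetical function whose behaviour we want to define.
    def slugify(title):
        return title.strip().lower().replace(" ", "-")


    class TestSlugify(unittest.TestCase):
        """Each test is a verification procedure for one claimed behaviour."""

        def test_spaces_become_hyphens(self):
            self.assertEqual(slugify("Duck Typing"), "duck-typing")

        def test_surrounding_whitespace_is_stripped(self):
            self.assertEqual(slugify("  Logical Positivism "), "logical-positivism")

        # Any behaviour not pinned down by a test like these is,
        # on this view, effectively undefined.


    if __name__ == "__main__":
        unittest.main()

The tests double as documentation: they state, in an executable and checkable form, exactly which behaviours the API promises, and say nothing about the rest.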