In general, it's not worth worrying too much about the hash functions of the standard JDK classes. Even if you could override String's hash function (you can't-- the class is final), in practice its hash function is almost always good enough. There are a few exceptions-- e.g. certain classes such as BigInteger and the collections recalculate their hash code every time by cycling through every single element they contain, which can be wasteful-- but how often do you key on instances of those classes?
When designing hash codes for your own classes, the aim is to spread the hash codes "randomly" over the range of integers. To do this, you generally want to "mix" the bits of successive fields in your object (you may be interested in an article on my web site that graphically illustrates how the String hash code mixes bits). Multiplying the current hash by an odd number (typically a small prime) and then adding in the hash of the next field generally works well enough as a first attempt-- see the sketch below. (However, problems can occur with this method when, for example, the numbers/hash codes being combined tend to have zeroes in their lower bits-- there's no practical hash function that's absolutely guaranteed to work well in all cases.)
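As a concrete illustration, here's roughly what that multiply-and-add pattern might look like. The class and its fields are invented for the example; the only essential part is the shape of the hashCode() method:

```java
// A minimal sketch of the multiply-and-add pattern described above;
// the class and its fields (name, x, y) are hypothetical.
public final class Point {
    private final String name;
    private final int x;
    private final int y;

    public Point(String name, int x, int y) {
        this.name = name;
        this.x = x;
        this.y = y;
    }

    @Override
    public int hashCode() {
        int hash = 17;                                        // arbitrary non-zero seed
        hash = hash * 31 + (name == null ? 0 : name.hashCode());
        hash = hash * 31 + x;                                 // multiply by an odd prime,
        hash = hash * 31 + y;                                 // then add the next field's hash
        return hash;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y
            && (name == null ? p.name == null : name.equals(p.name));
    }
}
```

There's nothing magic about 31 specifically; it's just an odd prime that's cheap to multiply by, and it's the one the String hash code happens to use.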
Then, you can consider testing your hash code. Generate a series of random objects (or even use some real ones), calculate their hash codes, AND off the bottom, say, 16 bits of each hash code (& 0xffff), and then see how many collisions you get. Check that the count roughly matches the number of collisions you'd expect by chance-- by the "birthday" approximation, that's about n²/(2m) for n objects hashed into m = 65536 slots. So after 1000 random objects, you'd expect about 8 collisions; after 2000, about 30.
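A quick-and-dirty version of that test might look like the following. It uses random strings as stand-ins for "random objects"; in practice you'd substitute instances of your own class and its hashCode():

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Rough sketch of the collision test described above: hash n random
// objects, keep only the bottom 16 bits of each hash code, and count
// how many land in an already-occupied slot.
public class HashCollisionTest {
    public static void main(String[] args) {
        Random rnd = new Random();
        int n = 1000;
        Set<Integer> seen = new HashSet<>();
        int collisions = 0;
        for (int i = 0; i < n; i++) {
            String s = "obj-" + rnd.nextLong();   // stand-in for a random object
            int h = s.hashCode() & 0xffff;        // AND off the bottom 16 bits
            if (!seen.add(h)) collisions++;       // add() is false if already present
        }
        // Birthday approximation: expect roughly n * n / (2 * 65536)
        // collisions-- about 8 for n = 1000, about 30 for n = 2000.
        System.out.println(collisions + " collisions for " + n + " objects");
    }
}
```

If your hash code produces noticeably more collisions than the birthday estimate over several runs, that's a sign the bits aren't being mixed well.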
As far as performance is concerned, up to a point, getting a hash code that's well distributed will generally be more beneficial nowadays than sacrificing hash quality for raw calculation speed.