I know this thread is old, but I just discovered it and see it was never really answered.
Yes, Pearson correlation is commonly mentioned in recommender-engine writeups, and it works reasonably well, but it has some quirks like this one. (By the way, the correlation in your example is 1, not 0.)
Cosine similarity is indeed a good alternative. However, if you "center" the data (shift it so the mean is 0) before computing it, and there are good reasons you should, then it becomes identical to the Pearson correlation. So you end up with the same issues, or, if you don't center, a different set of issues.
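To make the centering point concrete, here's a small sketch (my own illustration, not code from any library) showing that cosine similarity on mean-centered vectors coincides with the Pearson correlation computed the usual way, and that Pearson can be 1 even when two users' raw ratings differ:

```python
import math

def cosine(u, v):
    # plain cosine similarity of two equal-length rating vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pearson(u, v):
    # Pearson correlation via the covariance / std-dev formula
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

u = [5.0, 4.0, 1.0]
v = [4.0, 5.0, 2.0]
cu = [a - sum(u) / len(u) for a in u]  # mean-centered u
cv = [b - sum(v) / len(v) for b in v]  # mean-centered v

# cosine of the centered vectors equals Pearson of the raw vectors
print(cosine(cu, cv), pearson(u, v))

# the quirk: two users whose ratings differ but move together
# in lockstep still get a correlation of exactly 1
print(pearson([1.0, 2.0, 3.0], [3.0, 4.0, 5.0]))
```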
Consider a Euclidean distance-based similarity metric, where user ratings are viewed as points in space and similarity is inversely related to distance. It doesn't have this sparseness problem, though it needs to be normalized for the number of dimensions; otherwise it penalizes users who co-rate many items, since each co-rated item adds another dimension along which their distance can grow.
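Here's one way that normalization can look (a sketch of the general idea, not any particular library's formula): divide the squared distance by the number of co-rated items before converting it to a similarity.

```python
import math

def euclidean_similarity(u, v):
    # u, v: dicts mapping item -> rating; only co-rated items count
    common = set(u) & set(v)
    if not common:
        return 0.0
    # mean squared difference per co-rated item, so the distance doesn't
    # grow just because two users happen to overlap on more items
    msd = sum((u[i] - v[i]) ** 2 for i in common) / len(common)
    # map distance into (0, 1]: identical ratings give similarity 1
    return 1.0 / (1.0 + math.sqrt(msd))

a = {"x": 5.0, "y": 3.0, "z": 4.0}
b = {"x": 4.0, "y": 3.0, "w": 2.0}
print(euclidean_similarity(a, b))
print(euclidean_similarity(a, a))  # 1.0 for identical users
```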
But really, I'd suggest you look at a log-likelihood-based similarity metric. It doesn't have these issues either, and it doesn't even need rating values. It's a great default.
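The usual basis for this is the log-likelihood ratio (Dunning's G² statistic) over a 2x2 contingency table of co-occurrence counts. A sketch of the standard formula, using only which items each user touched, not how they rated them (this is an illustration of the statistic, not Mahout's exact code):

```python
import math

def llr(k11, k12, k21, k22):
    # 2x2 contingency table for two users:
    # k11: items both touched; k12: only user A; k21: only user B;
    # k22: items neither touched
    n = k11 + k12 + k21 + k22
    rows = (k11 + k12, k21 + k22)
    cols = (k11 + k21, k12 + k22)
    g2 = 0.0
    for k, r, c in ((k11, rows[0], cols[0]), (k12, rows[0], cols[1]),
                    (k21, rows[1], cols[0]), (k22, rows[1], cols[1])):
        if k > 0:
            # G^2 = 2 * sum k * ln(k * n / (row_total * col_total))
            g2 += 2.0 * k * math.log(k * n / (r * c))
    return g2

# Overlap far beyond chance scores high...
print(llr(10, 1, 1, 1000))
# ...while counts that exactly match independence score 0
print(llr(1, 9, 9, 81))
```

A high score means the two users' overlap is unlikely to be coincidence, which is exactly the signal you want from sparse, binary interaction data.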
There are more metrics to consider that wouldn't have this issue: Spearman rank correlation, and similarity based on the Tanimoto coefficient (Jaccard coefficient).
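The Tanimoto variant is especially simple; like the log-likelihood metric, it looks only at which items each user has rated, not the rating values. A minimal sketch:

```python
def tanimoto(items_a, items_b):
    # Tanimoto / Jaccard coefficient over the sets of rated items:
    # |intersection| / |union|
    union = len(items_a | items_b)
    if union == 0:
        return 0.0
    return len(items_a & items_b) / union

a = {"x", "y", "z"}
b = {"y", "z", "w"}
print(tanimoto(a, b))  # 2 shared items out of 4 total -> 0.5
```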
Where can you learn more and get an implementation? Voila, Apache Mahout:
http://svn.apache.org/viewvc/lucene/mahout/trunk/core/src/main/java/org/apache/mahout/cf/taste/impl/similarity/