Recently, I read an article entitled "SATA vs. SCSI reliability". It mostly discusses the very high rate of bit errors in consumer SATA drives and concludes that there is "a 56% chance that you can't read all the data from a particular disk now". Even RAID-5 can't save us: the array must be constantly scrubbed for problems, and if a disk does die you are pretty much guaranteed to have some flipped bits on the rebuilt file system.
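For a rough sense of where a number like 56% comes from (my own back-of-the-envelope, not the article's exact math): consumer SATA drives are commonly specced at about one unrecoverable read error per 10^14 bits, and the chance of hitting at least one error grows quickly with the amount of data read:

```python
# Back-of-the-envelope URE math. Assumptions (mine, not the article's):
# a spec of ~1 unrecoverable read error per 1e14 bits, and ~10 TB read,
# roughly what a large RAID-5 rebuild pulls off the surviving disks.
ure_rate = 1e-14             # probability of an unrecoverable error per bit read
bits_read = 10e12 * 8        # ~10 TB expressed in bits

# P(at least one error) = 1 - (1 - p)^n
p_any_error = 1 - (1 - ure_rate) ** bits_read
print(f"{p_any_error:.0%}")  # ~55%, the same ballpark as the article's 56%
```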
Considerations:
I've heard great things about Sun's ZFS with RAID-Z, which checksums every block and can rebuild bad blocks from parity, but the Linux and BSD implementations are still experimental. I'm not sure it's ready for prime time yet.
I've also read quite a bit about the Par2 file format. It seems like storing a few percent of extra parity data alongside each file would let you recover from most problems. However, I'm not aware of a file system that does this internally, and managing the separate parity files by hand seems like it could get messy.
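Par2 itself uses Reed-Solomon coding, which can survive multiple damaged blocks, but the principle shows up even in a toy single-parity sketch (my simplification, not the actual Par2 format): per-block hashes tell you which block went bad, and one XOR parity block is enough to rebuild it:

```python
import hashlib

BLOCK = 4096  # toy block size

def make_parity(blocks):
    """XOR all data blocks together into a single parity block."""
    parity = bytearray(BLOCK)
    for b in blocks:
        for i, byte in enumerate(b):
            parity[i] ^= byte
    return bytes(parity)

def rebuild(blocks, parity, bad):
    """Recover blocks[bad] by XORing the parity block with every good block."""
    out = bytearray(parity)
    for j, b in enumerate(blocks):
        if j != bad:
            for i, byte in enumerate(b):
                out[i] ^= byte
    return bytes(out)

# demo: hash each block at write time, corrupt one, detect it, repair it
blocks = [bytes([n]) * BLOCK for n in range(4)]
hashes = [hashlib.sha256(b).hexdigest() for b in blocks]
parity = make_parity(blocks)

blocks[2] = b"\xff" * BLOCK  # simulate silent corruption
bad = next(j for j, b in enumerate(blocks)
           if hashlib.sha256(b).hexdigest() != hashes[j])
blocks[bad] = rebuild(blocks, parity, bad)
assert hashlib.sha256(blocks[bad]).hexdigest() == hashes[bad]
```

The catch, as with Par2, is that the parity has to be regenerated every time the file changes, which is exactly the management burden that makes separate parity files awkward.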
Backups (Edit):
I understand that backups are paramount. However, without some kind of integrity check in place you could easily be sending bad data to people without even knowing it, and figuring out which backup holds a good copy of that data could be difficult.
For instance, say you have a RAID-5 array running for a year and you find a corrupted file. Now you have to work backwards through your backups until you find a good copy. Ideally you would go straight to the first backup that included a good copy of the file, but that may be difficult to figure out, especially if the file has been edited many times. Even worse, consider a file that was appended to or edited after the corruption occurred: then no backup contains a fully intact copy at all. That alone is reason enough for block-level parity such as Par2.
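Even just the detection half of this can be had today with a checksum manifest: hash every file at write time, keep the manifest alongside the data, and re-verify each backup against it, newest first, to find the most recent good copy directly. A minimal sketch using Python's hashlib (the script name and paths below are made up for illustration):

```python
import hashlib, json, os, sys

def sha256_of(path):
    """Stream a file through SHA-256 so large files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build(root, manifest="manifest.json"):
    """Record a hash for every file under root; store this with your backups."""
    digests = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            digests[os.path.relpath(path, root)] = sha256_of(path)
    with open(manifest, "w") as f:
        json.dump(digests, f, indent=2)

def verify(root, manifest="manifest.json"):
    """Flag files whose current hash no longer matches the recorded one."""
    with open(manifest) as f:
        digests = json.load(f)
    for rel, digest in digests.items():
        path = os.path.join(root, rel)
        if not os.path.exists(path) or sha256_of(path) != digest:
            print("BAD:", rel)

if __name__ == "__main__":
    # e.g. `python manifest.py build /data`, later `python manifest.py verify /mnt/backup3`
    cmd, root = sys.argv[1], sys.argv[2]
    build(root) if cmd == "build" else verify(root)
```

Of course, a manifest only detects corruption; it can't repair anything, and it has to be rebuilt whenever files legitimately change, which is why I keep coming back to block-level parity.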