I think @Icheb's answer covers it all.
I have tried something new this year in a project that I thought I'd share.
For a PHP-based content aggregation / distribution service, the kind of application that runs quietly in the background on some server and that you tend to forget about, we needed an error reporting system that makes sure we notice errors.
Every error that occurs has an Error ID that is specified in the code:
$success = mysql_query($this_and_that);
if (!$success) log_error("Failed Query: " . mysql_error(), "MYSQL_123");
Errors get logged to a file, but more importantly they are sent out by mail to the administrator, together with a full backtrace and variable dump.
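A minimal sketch of what such a log_error() function could look like; the admin address, log path, and the use of $GLOBALS for the variable dump are my assumptions, not necessarily how the original is built:

// Sketch only: address, path and dump source are hypothetical.
function log_error($message, $error_id)
{
    $admin_email = 'admin@example.com';              // hypothetical address
    $log_file    = '/var/log/aggregator/errors.log'; // hypothetical path

    $entry = date('c') . " [$error_id] $message\n";

    // Every error gets recorded in the log file.
    file_put_contents($log_file, $entry, FILE_APPEND);

    // Capture a full backtrace and a dump of the current variables.
    $backtrace = print_r(debug_backtrace(), true);
    $vars      = print_r($GLOBALS, true);

    // Mail the administrator with the full context.
    mail($admin_email, "Error $error_id",
         $entry . "\nBacktrace:\n" . $backtrace . "\nVariables:\n" . $vars);
}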
To avoid flooding with mails - the service has tens of thousands of users on a good day - error mails get sent out only once every x hours for each error code. If an error with the same code occurs again within that timespan, no additional mail is sent. This means every kind of error gets recorded, but you don't get buried in error messages when something happens to hundreds or thousands of users at once.
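The throttle could be as simple as a timestamp marker per error code; the /tmp marker files and the 6-hour default below are assumptions on my part. The log_error() sketch above would call this before mail() and skip the mail when it returns false:

// Sketch only: marker location and window length are hypothetical.
function should_send_mail($error_id, $hours = 6)
{
    $stamp_file = '/tmp/error_mail_' . $error_id; // one marker per error code

    // A mail for this code already went out within the window: stay silent.
    if (file_exists($stamp_file) && time() - filemtime($stamp_file) < $hours * 3600) {
        return false;
    }

    // Record the send time and allow the mail.
    touch($stamp_file);
    return true;
}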
This is fairly easy to implement; the art is getting the error IDs right. You could, for example, give every failed MySQL query in your system the same generic "MYSQL" error ID. In most cases, that will be too generic and block too much. If you give each MySQL query a unique error ID instead, you might get flooded with mails and the filtering effect is gone. But when grouped intelligently, say one ID per subsystem or per table rather than per query, this can be a very good setup.