A: 

I think "Does not store useless or redundant information" and "Is easy to query (lends itself to gathering/displaying useful information)" are mutually exclusive. If you store data very efficiently, as in your example #2, you would need complex queries to recreate what was happening at a point in time or over a range, since you only store changes.
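
A rough illustration of that trade-off (hypothetical, since your example #2 isn't reproduced here; assume each row records a metric's new value only when it changes): answering "what was the value at time T" means replaying every change up to T.

    from datetime import datetime

    # Hypothetical change-only records: (timestamp, metric, new_value),
    # written only when a value actually changes.
    changes = [
        (datetime(2009, 6, 1, 8, 0), "connections", 10),
        (datetime(2009, 6, 1, 9, 30), "connections", 25),
        (datetime(2009, 6, 1, 11, 0), "connections", 7),
    ]

    def value_at(metric, when):
        """Replay changes to recover the value in effect at a point in time."""
        current = None
        for ts, name, value in sorted(changes):
            if name == metric and ts <= when:
                current = value
        return current

    print(value_at("connections", datetime(2009, 6, 1, 10, 0)))  # -> 25

With a sample-per-interval design, that same question would be a single lookup instead.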

You really provide no details, so specific recommendations are difficult. For example, in your example #1, how many samples per minute would you consider? That affects your "Causes no noticeable performance hit on the server being monitored" criterion: one sample per hour is a very different load than thirty per minute.

You provide no information on the kinds of stats you are gathering, so table design is impossible.

Whatever you do, send the stats to a database on a different server. That will minimize the performance impact on the production database.

KM
In response to your first point: 1) I don't agree that all useless information is difficult to query (which would need to be true for those to be mutually exclusive), but your point was really that those two principles conflict, so 2) I agree, the principles conflict. On the lack of specifics: I have approximately 50 servers I want to set up monitoring for, and the characteristics (purpose, design, workload, operating system, hardware, etc.) of the servers vary wildly, so I'm intentionally looking for any general principles regarding statistics collection. I've never collected statistics...
Nathan
That comment was supposed to be nicely formatted in two separate paragraphs, but apparently comment formatting doesn't work the same way as questions and responses.
Nathan
I absolutely agree that the database should be on a separate server, and I do plan to do that. Trying to keep the statistics for each server local to each server would be painful.
Nathan
A: 

For the record, I somewhat agree with KM: it's hard to provide specific answers with the info given; and, as is often the case in this sort of scenario, you'll probably get more value out of the process of thinking things through than out of the end result.

For the storage of the data: the best reporting will be done off a DB that's designed to be reported off - an OLAP-type schema.

How you get the data in there is a different matter - how much data are we talking about, and how do you want to move it across? The reason I ask is that if you're going to insert it in a synchronous manner, you'll want the inserts to be fast - an OLTP-styled DB schema.

Strictly speaking, if you're after bleeding-edge performance, you'll probably want a separate DB for each part (capturing data / reporting off it).

Before you start - if you want elegance - you'll need to carefully consider the logical data model of the data you want to pull in. High on your priority list should be the core dimensions: time, origin (component, etc.), and so on. This is one of the most important things about BI / data-based projects - what questions are you trying to answer?
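
For illustration, a bare-bones sketch of that logical model in Python (names are hypothetical; in an OLAP schema these become fact and dimension tables):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Origin:
        """Dimension: where a measurement came from."""
        server: str
        component: str  # e.g. "cpu", "web_app"

    @dataclass
    class Measurement:
        """Fact: one recorded value, keyed by the core dimensions."""
        when: datetime  # time dimension (often split into date / hour keys)
        origin: Origin  # origin dimension
        metric: str     # e.g. "cpu_percent"
        value: float

    row = Measurement(datetime(2009, 6, 1, 9, 0),
                      Origin("web01", "cpu"), "cpu_percent", 37.5)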

This is also where you'll start to figure out what you want to capture. Make sure you have good definitions of that data (where it comes from, what it means, etc.). By "where it comes from" I'm referring not just to the method / class / component / system, but also to what actually originates the values you're recording and their meaning; this will be especially important for stuff like the number of users logged in - what exactly does that figure mean? If it's a web app and you record every request, you'll be able to report on the number of users "logged in" any way you want: averages, by time of day, peak concurrency, etc.
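
As a sketch of that last point (field names hypothetical): if every request is logged with a timestamp and user, the "users logged in" reports are just aggregations done after the fact.

    from collections import defaultdict
    from datetime import datetime

    # Hypothetical per-request log: (timestamp, user_id).
    requests = [
        (datetime(2009, 6, 1, 9, 5), "alice"),
        (datetime(2009, 6, 1, 9, 20), "bob"),
        (datetime(2009, 6, 1, 10, 15), "alice"),
    ]

    # Distinct active users per hour of day; the peak is then just the max.
    users_by_hour = defaultdict(set)
    for ts, user in requests:
        users_by_hour[ts.hour].add(user)

    counts = {hour: len(users) for hour, users in users_by_hour.items()}
    print(counts)                # {9: 2, 10: 1}
    print(max(counts.values()))  # peak hourly figure: 2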

One final point - depending on what you're doing (and how) the risk of performance loss, due to capturing too much data, is low; it's often better to have it and not need it - than to need it and not have it. Just because you have it doesn't mean tyou have to report on it.

Accuracy: use an existing, well-used industry component for capturing / logging data.
The MS Ent Libs are great for this; they have a large user base, so their quality is high. They include a Trace statement for recording execution time down to a fine level. They're also highly configurable, which helps contribute towards an elegant solution.
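
For the non-Windows case, here's a rough Python analogue of that kind of scoped execution-time trace (a sketch only - this is not the Ent Lib API):

    import time
    from contextlib import contextmanager

    @contextmanager
    def trace(operation):
        """Record wall-clock execution time for the enclosed block."""
        start = time.perf_counter()
        try:
            yield
        finally:
            elapsed = time.perf_counter() - start
            print(f"{operation}: {elapsed * 1000:.2f} ms")

    with trace("load stats page"):
        time.sleep(0.05)  # stand-in for real work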

Adrian K
I appreciate the pointer to OLAP; I had never heard of that kind of database structure before. In my specific case, we probably won't need blazing-fast access to results, so I should be able to get away with a single database. Whether I should use an RDBMS or OLAP (or something else) will apparently take some research...
Nathan
"dimensions" - that's a great formal definition explanation for the "things you want to store." It's good to finally put a name on that.
Nathan
In our case, we have one Windows server and about 49 servers running other operating systems (various Linux distributions, OpenSolaris, FreeBSD, OS X), so the "use an existing well-used..." suggestion won't work for me.
Nathan
One thing I find perplexing is trying to figure out both how to measure in the first place and what data (or sets of data) I need to store to provide different kinds of accurate information in the reports later on. For example, let's say I want to store statistics on "CPU usage". Obviously I can easily get a measurement of current CPU usage at an instant in time on most servers, but storing a series of instantaneous measures may not be helpful when I try to graph "total CPU usage" during different hours of the day. Of course, that's getting down to specifics, and I asked for general...
Nathan
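
To make that CPU example concrete, a hedged sketch (sample format hypothetical): raw instantaneous samples can still be rolled up into an hourly view after the fact, which is usually what an hours-of-the-day graph needs.

    from collections import defaultdict
    from datetime import datetime

    # Hypothetical raw samples: (timestamp, cpu_percent) taken at arbitrary instants.
    samples = [
        (datetime(2009, 6, 1, 9, 5), 12.0),
        (datetime(2009, 6, 1, 9, 35), 48.0),
        (datetime(2009, 6, 1, 10, 10), 90.0),
    ]

    # Bucket the instantaneous readings by hour of day, then average each bucket.
    buckets = defaultdict(list)
    for ts, pct in samples:
        buckets[ts.hour].append(pct)

    hourly_avg = {hour: sum(vals) / len(vals) for hour, vals in buckets.items()}
    print(hourly_avg)  # {9: 30.0, 10: 90.0}
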
Perhaps this is as far as I can go in general, and I should just move on to specifics.
Nathan
Re 1 Windows and 49 others - wow, I see what you mean. As a general abstract approach, I'd look at the Interface Segregation Principle (http://en.wikipedia.org/wiki/Interface_segregation_principle). The idea would be to define a "contract" for each specific "area" you want to measure (like CPU or memory usage), then just work on implementing them as you can.
Adrian K
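
A minimal sketch of that per-area contract idea in Python (class names hypothetical; the Linux body uses the one-minute load average only as a crude stand-in for a real CPU calculation):

    from abc import ABC, abstractmethod

    class CpuProbe(ABC):
        """Contract for the 'CPU' measurement area; one implementation per OS."""

        @abstractmethod
        def sample(self) -> float:
            """Return a point-in-time CPU load figure."""

    class LinuxCpuProbe(CpuProbe):
        def sample(self) -> float:
            # Crude stand-in; a fuller probe would diff /proc/stat
            # counters between two reads.
            with open("/proc/loadavg") as f:
                return float(f.read().split()[0])

    class WindowsCpuProbe(CpuProbe):
        def sample(self) -> float:
            # Stub: on Windows this would query a performance counter instead.
            raise NotImplementedError("query perf counters / WMI here")

The collector then depends only on CpuProbe (and similar per-area contracts), and each OS gets its own small implementation.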