tags:
views: 283
answers: 2

Generally, log levels can be switched to control how detailed the logs are. Usually the least verbose level is enough to identify which area of the code could be wrong. To debug further, one typically increases the verbosity to get more information. However, this results in an unnecessarily large amount of log output that is unrelated to the problem.

The question is: what are the best practices on this issue? Should one define another dimension of logging, e.g. by logical area, by method, or something else?

EDIT: This comes from a real project where the application is deployed in a customer environment. When things go wrong, the log is what the customer sends in for debugging, and they will certainly hate sending a huge amount of logs, or doing the analysis/parsing themselves: usually they are non-technical customers. I guess this is related to the question of how to manage logging efficiently in this situation. Please leave a comment if opening another thread would be more appropriate. Thanks.

+2  A: 

You could use different listeners for different parts of your application. But probably the best thing you could use is Microsoft Log Parser, which lets you run SQL-like queries over your log files, e.g. you can do a SELECT on the data within a plain-text log file. Check it out, it really is quite a powerful tool.
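For instance, you could pull only the error lines out of a plain-text log before sending it in. A minimal sketch, assuming Log Parser 2.2 with its TEXTLINE input format; the file name app.log and the '%ERROR%' pattern are placeholders:

    rem Show only lines containing ERROR from a plain-text log
    LogParser.exe -i:TEXTLINE "SELECT Text FROM app.log WHERE Text LIKE '%ERROR%'"

    rem Same idea, exporting the matching lines to a CSV file
    LogParser.exe -i:TEXTLINE -o:CSV "SELECT Text INTO errors.csv FROM app.log WHERE Text LIKE '%ERROR%'"

This way the customer (or you) can ship a filtered extract instead of the whole log.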

slugster
That is a cool tool, thanks for the tip, slugster!
Dr. Xray
+1  A: 

If you have massive amounts of logs, you can separate them by functional area; this is what NHibernate does using log4net. Example:

NHibernate (root)
NHibernate.Loader
NHibernate.Cache
NHibernate.SQL
...
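With a hierarchy like this you can keep the root logger quiet and raise verbosity only for the area under suspicion. A minimal log4net configuration sketch illustrating the idea; the appender named "LogFile" is a placeholder (defined in the sketch further below):

    <log4net>
      <root>
        <!-- Keep everything quiet by default -->
        <level value="WARN" />
        <appender-ref ref="LogFile" />
      </root>
      <!-- Turn up detail only for the suspect area -->
      <logger name="NHibernate.SQL">
        <level value="DEBUG" />
      </logger>
    </log4net>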

Also, with a good library like log4j/log4net you can use a rolling log file appender, which you can easily configure so it does not fill up your hard drive. For example, you can configure it to write a log file of up to 10MB, roll over to another file up to 10 times, and then go back to the first file and overwrite it.
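An appender definition along those lines might look like this. A sketch only: the file name app.log is a placeholder, and the limits match the 10MB / 10 files example above:

    <appender name="LogFile" type="log4net.Appender.RollingFileAppender">
      <file value="app.log" />
      <appendToFile value="true" />
      <!-- Roll when the current file reaches 10MB -->
      <rollingStyle value="Size" />
      <maximumFileSize value="10MB" />
      <!-- Keep at most 10 rolled files; older ones are discarded -->
      <maxSizeRollBackups value="10" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date %-5level %logger - %message%newline" />
      </layout>
    </appender>

This caps total disk usage at roughly 100MB while still keeping the most recent history.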

Michael Valenty