views:

549

answers:

3

http://logging.apache.org/chainsaw/quicktour.html

First feature.

I completed the tutorial, but it only showed how to use the GUI visually; it didn't go into much detail at all regarding this new feature. The best documentation I have found is this:

Just as Appenders send logging events outside of the log4j environment (to files, to smtp, to sockets, etc), Receivers bring logging events inside the log4j environment.

Receivers are meant to support the receiving of remote logging events from another process. For example, SocketAppender "appends" a logging event to a socket, configured for a specific host and port number. On the receiving side of the socket can be a SocketReceiver object. The SocketReceiver object receives the logging event, and then "posts" it to the log4j environment (LoggerRepository) on the receiving machine, to be handled by the configured appenders, etc. The various settings in this environment (Logger levels, Appender filters & thresholds) are applied to the received logging event.

Receivers can also be used to "import" log messages from other logging packages into the log4j environment.

Receivers can be configured to post events to a given LoggerRepository.
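To make the appender/receiver pairing concrete, here is a minimal sketch of the sending side in a log4j 1.x `log4j.properties` (the host name is a placeholder assumption; 4560 is the conventional default port shared by SocketAppender and SocketReceiver):

```properties
# Send all events at INFO and above to a remote receiver
log4j.rootLogger=INFO, remote

# SocketAppender serializes LoggingEvents over TCP to a remote host
log4j.appender.remote=org.apache.log4j.net.SocketAppender
# Placeholder host running Chainsaw (or any process with a SocketReceiver)
log4j.appender.remote.RemoteHost=loghost.example.com
log4j.appender.remote.Port=4560
# Retry the connection every 10s if the receiver goes away
log4j.appender.remote.ReconnectionDelay=10000
```

On the receiving machine, a SocketReceiver listening on the same port posts each incoming event to the local LoggerRepository, where the locally configured levels, filters, and appenders are applied as described above.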

So...

What kind of logging strategy can I achieve using this new component that I couldn't use just from using chainsaw + simple log4j file appenders?

+4  A: 

There are many interesting things you can do with remote events:
- Avoid creating files on application servers. Files are bad.
- Centralize logs when you have multiple application servers.
- View live production logs from your local environment; even if Chainsaw is not very sexy, its filtering capabilities are handier than plain vi/grep.
- Log to a database instead of files. Files are bad.

And probably much more !
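As a sketch of the database option, log4j 1.x ships a basic `JDBCAppender` (the JDBC URL, credentials, and table schema below are assumptions; the stock JDBCAppender is also explicitly documented as likely to be replaced, so treat this as illustrative rather than production-ready):

```properties
# Append events to a database table instead of a file
log4j.appender.db=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.db.URL=jdbc:mysql://dbhost.example.com/logs
log4j.appender.db.driver=com.mysql.jdbc.Driver
log4j.appender.db.user=log_user
log4j.appender.db.password=secret
# The SQL is run through PatternLayout, so conversion patterns are expanded
log4j.appender.db.sql=INSERT INTO log_events (logger, level, message) VALUES ('%c', '%p', '%m')
```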

MatthieuP
Helpful answer; I would upvote, but for some reason the proxy at work blocks that button and the mark-as-accepted one.
Zombies
Damn! Your proxy does not want me to be successful on SO!
MatthieuP
A: 

I've used remote events in the past with grid environments.

Why? Because we didn't know where our code would be running. We would deploy 'n' jobs, and the grid infrastructure would choose which machines to run those jobs on. Without remote events we would have had to keep track of where those jobs had gone, and then face the hassle of logging in, finding the logs, etc. Because the grid consisted of machines used for other purposes, we couldn't guarantee that they would still be up at a later date to diagnose issues.

So everything was configured to stream log events back to a server where we could create log files per originating server, and manage those logs ourselves. There are issues such as managing the quantity of data streaming across the network to one server, but so long as you're aware of that, that's fine.
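On the collecting side of a setup like this, receivers are declared as plugins in a log4j XML configuration that Chainsaw (or a standalone collector process using the log4j receivers companion) loads at startup. A minimal sketch, assuming the conventional port 4560:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <!-- Listen for serialized LoggingEvents from remote SocketAppenders -->
  <plugin name="socketReceiver" class="org.apache.log4j.net.SocketReceiver">
    <param name="Port" value="4560"/>
  </plugin>
  <root>
    <level value="debug"/>
  </root>
</log4j:configuration>
```

Received events carry the originating host in their properties, which is what makes the per-originating-server log files described above possible on the collector.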

Brian Agnew
+1  A: 

A centralized log server can greatly improve your working environment. It's set up once and for all, and it will serve any application as long as they all talk the same "language". You control what is persisted and how, and how log data is dispatched (if at all). Besides, you don't manage log files locally for every application. In fact you don't manage them at all: they can be created instantly based on any criteria you wish. For example, take a snapshot of the exceptions that took place 3 days ago in a particular application on a particular host. Or, in real time, you can view a correlated flow of events, which is often very hard to anticipate. For example, you can tune a log viewer to show what happens in the data layer when a user logs into the system. Anyway, there are plenty of uses for a centralized logging system. Have a look at logFaces for example.

Dima