We have started using a third-party platform (GigaSpaces) that helps us with distributed computing. One of the major problems we are now trying to solve is how to manage our log files in this distributed environment. Our current setup is as follows.
Our platform is distributed over 8 machines. On each machine we have 12-15 processes that log to separate log files using java.util.logging. On top of this platform we have our own applications, which use log4j and log to separate files. We also redirect stdout to a separate file to catch thread dumps and similar output.
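For context, each platform process ends up with its own file handler, roughly along these lines (a simplified sketch only; the real handlers and file patterns are configured by GigaSpaces and by our log4j setup, and the class and path names here are placeholders):

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class ProcessLogSetup {
    // Hypothetical example: each of the 12-15 processes per machine ends up
    // writing to its own file, e.g. /var/log/platform/worker-3.log
    public static Logger createLogger(String processName) throws IOException {
        Logger logger = Logger.getLogger(processName);
        FileHandler handler = new FileHandler(
                "/var/log/platform/" + processName + ".log", true); // append
        handler.setFormatter(new SimpleFormatter());
        handler.setLevel(Level.INFO);
        logger.addHandler(handler);
        return logger;
    }
}
```

The log4j side of our own applications looks similar, just with a per-application file appender instead.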
This results in about 200 different log files.
As of now we have no tooling to assist in managing these files, which causes us serious headaches in the following cases:
- Troubleshooting when we do not know beforehand in which process the problem occurred. In this case we currently log in to each machine over ssh and start using `grep`.
- Trying to be proactive by regularly checking the logs for anything out of the ordinary. Here too we currently log in to all machines and look at the different logs with `less` and `tail`.
- Setting up alerts. We would like to alert on events that exceed a threshold, which looks like it will be a pain with 200 log files to check (a rough sketch of the kind of per-file check this would require follows this list).
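To illustrate the alerting problem, here is a rough sketch of the kind of per-file check we would otherwise have to write ourselves; the path, pattern, and threshold are placeholders, not something we actually have in place:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

/**
 * Rough sketch of a per-file threshold check: count new lines matching a
 * pattern (e.g. "ERROR") since the last poll and raise an alert when the
 * count exceeds a threshold. Path, pattern and threshold are placeholders.
 */
public class ThresholdCheck {
    private final String path;
    private final String pattern;
    private final int threshold;
    private long lastPosition = 0;

    public ThresholdCheck(String path, String pattern, int threshold) {
        this.path = path;
        this.pattern = pattern;
        this.threshold = threshold;
    }

    public void poll() throws IOException {
        int matches = 0;
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            file.seek(lastPosition);          // resume where the last poll ended
            String line;
            while ((line = file.readLine()) != null) {
                if (line.contains(pattern)) {
                    matches++;
                }
            }
            lastPosition = file.getFilePointer();
        }
        if (matches >= threshold) {
            System.err.println("ALERT: " + matches + " '" + pattern
                    + "' events in " + path);
        }
    }
}
```

Multiply something like that by 200 files on 8 machines and it is clearly not a workflow we want to hand-roll and maintain ourselves.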
Today we have only about 5 log events per second, but that will increase as we migrate more and more code to the new platform.
I would like to ask the community the following questions.
- How have you handled similar cases with many log files distributed over several machines and written by different logging frameworks?
- Why did you choose that particular solution?
- How did your solutions work out? What did you find good and what did you find bad?
Many thanks.