views: 226

answers: 6
We have cases where we write a lot of log files on the host, increasing its I/O load. Are there any good open-source over-the-wire logging solutions?

The application language is C++ on Red Hat Linux 3.

+1  A: 

There are logging libraries that can help you, depending on your application language. But if I/O is a problem, network bandwidth will probably be a bigger one.

Igal Serban
A: 

The application language is C++ on Red Hat Linux 3.

It's better to edit your question and add this information there.
Aaron Fischer
I've added this information to the question, kal; I suggest you remove this answer.
Jonathan Leffler
+1  A: 

First set of questions:

  • Do you need all those log files?
  • All the time?
  • Can you control how much logging occurs?
  • If not, why not?

Second set of questions:

  • Why are your log writing operations slow?
  • Are you using inappropriate operations (for example, O_SYNC or related options on POSIX)?
  • How many disk drives do you have?
  • Can you gain by having different log files on different drives (or, at least, having the log files on a different drive from where the other files are stored)?

As @igalse says, there are logging libraries available. For C++, you should look at what is available in Boost, but there are undoubtedly other sources too.

Jonathan Leffler
+1  A: 

For C++, Boost doesn't contain a library for logging yet. But you can use the most advanced candidate, written by John Torjo, here.

It lets you filter some of your logging (which you probably need, if logging is heavy enough to become a performance problem) and set different destinations, such as a stream.

ckarmann
+3  A: 

A very simple option is to log via syslog and rely on the syslog daemon (suitably configured) to forward the messages to a remote server.

Take a look at:

openlog()

syslog()

closelog()

and:

syslog.conf

Andrew Edgecombe
A: 

If the I/O on the host is being impacted unnecessarily, then in my opinion you are either:

  • Doing FAR too much logging - sustained logging that approaches the disk's write speed is almost certainly too much, since even modest hardware can comfortably write several megabytes per second without a problem
  • Doing too many flushes of your logs - this is more likely. By default syslogd flushes the log files frequently (too often for log-intensive applications) so that they are durable in the event of a crash, but that generates a lot of I/O. Syslogd can be reconfigured on a per-file basis (see its man page) not to flush the files so often.
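Concretely, with the stock sysklogd shipped on Red Hat, prefixing a file name with `-` in syslog.conf tells the daemon not to sync the file after every message (the selector and path below are illustrative):

```
# /etc/syslog.conf
# The leading "-" disables the sync after each message,
# trading crash durability for far less disk I/O.
local0.*    -/var/log/myapp.log
```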

Logging to a network server isn't going to solve these problems if it has the same issue - in fact, it will make them worse if several hosts are logging to the same server.

MarkR