+16  Q: 

Logging in Linux

So I have a daemon running on a Linux system, and I want to have a record of its activities: a log. The question is, what is the "best" way to accomplish this?

My first idea is to simply open a file and write to it.

FILE* log = fopen("logfile.log", "w");
/* daemon works...needs to write to log */
fprintf(log, "foo%s\n", (char*)bar);
/* ...all done, close the file */
fclose(log);

Is there anything inherently wrong with logging this way? Is there a better way, such as some framework built into Linux?

+26  A: 

Unix has long had a dedicated logging facility called syslog. Type this in your shell:

man 3 syslog

and you'll get the manual page for its C interface.

A minimal example:

#include <syslog.h>

int main(void) {
    /* "slog" is the identifier prepended to each message; LOG_PID adds the
       process id, LOG_CONS falls back to the console if syslogd is unreachable,
       and LOG_USER is the facility. */
    openlog("slog", LOG_PID | LOG_CONS, LOG_USER);
    syslog(LOG_INFO, "A different kind of Hello world ... ");
    closelog();

    return 0;
}
Vinko Vrsalovic
"man 3..."! I didn't know about this.
Grant
A: 

There are a lot of potential issues: for example, if the disk is full, do you want your daemon to fail? Also, with "w" you will be overwriting your file every time. Often a circular file is used, so that you have a fixed amount of space allocated for your log but can still keep enough history to be useful. There are tools like log4c that can help you. If your code is C++, then you might consider log4cxx from the Apache project (apt-get install liblog4cxx9-dev on Ubuntu/Debian), but it looks like you are using C.
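
As a rough sketch of the disk-full point: with plain stdio a failed write only surfaces if you check for it. The helper below is purely illustrative (the name log_line, the file name, and the policy of just reporting the error are made up):

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Append one line and report failed writes instead of silently dropping them.
   Returns 0 on success, -1 on error; the caller decides whether to keep going. */
static int log_line(FILE *log, const char *msg)
{
    if (fprintf(log, "%s\n", msg) < 0 || fflush(log) == EOF) {
        /* errno == ENOSPC here is the "disk full" case mentioned above. */
        fprintf(stderr, "log write failed: %s\n", strerror(errno));
        return -1;
    }
    return 0;
}

int main(void)
{
    FILE *log = fopen("logfile.log", "a");   /* "a" avoids the overwrite problem */
    if (log == NULL)
        return 1;
    log_line(log, "daemon started");
    fclose(log);
    return 0;
}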

David Nehme
+10  A: 

This is probably going to be a one-horse race, but yes, the syslog facility, which exists in most if not all Un*x derivatives, is the preferred way to go. There is nothing wrong with logging to a file, but it does leave a number of tasks on your shoulders:

  • Is there a file system at your logging location to save the file?
  • What about buffering (for performance) vs. flushing (to get logs written before a system crash)?
  • If your daemon runs for a long time, what do you do about the ever-growing log file?

Syslog takes care of all this, and more, for you. The API is similar to the printf clan, so you should have no problem adapting your code.
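
For example, the fprintf call from the question maps almost directly onto syslog; the tag "mydaemon", the LOG_DAEMON facility, and the bar variable below are just illustrative:

#include <syslog.h>

int main(void)
{
    const char *bar = "bar";            /* placeholder for the daemon's data */

    /* Once at startup: identify the daemon and pick a facility. */
    openlog("mydaemon", LOG_PID, LOG_DAEMON);

    /* Wherever fprintf(log, "foo%s\n", bar) was called before: */
    syslog(LOG_INFO, "foo%s", bar);     /* no trailing newline needed */

    /* At shutdown. */
    closelog();
    return 0;
}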

Richard
+3  A: 

I spit a lot of daemon messages out to daemon.info and daemon.debug when I am unit testing. A line in your syslog.conf can stick those messages in whatever file you want.
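
For instance, with a traditional syslogd, a single selector line like the following does the routing (the file path is only an example):

# everything the daemon facility logs at debug priority or above goes here
daemon.debug                                /var/log/daemon-debug.log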

http://www.linuxjournal.com/files/linuxjournal.com/linuxjournal/articles/040/4036/4036s1.html has a better explanation of the C API than the man page, imo.

phreakre
+1  A: 

As stated above, you should look into syslog. But if you want to write your own logging code, I'd advise you to use the "a" (append) mode of fopen.

A few drawbacks of writing your own logging code are: log rotation handling, locking (if you have multiple threads), and synchronization (do you want to wait for the logs to be written to disk?). One of the drawbacks of syslog is that the application doesn't know whether the logs have been written to disk (they might have been lost).
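
A minimal sketch of that approach, handling the locking and synchronization points in the simplest possible way (the file name is a placeholder, and the fsync call is optional since it costs performance):

#include <stdio.h>
#include <unistd.h>

static FILE *g_log;                     /* opened once at daemon startup */

/* Append one message; flockfile() serializes threads sharing g_log, and
   fflush() + fsync() push the line to disk before returning. */
static void log_append(const char *msg)
{
    flockfile(g_log);
    fprintf(g_log, "%s\n", msg);
    fflush(g_log);
    fsync(fileno(g_log));               /* optional: wait until data is on disk */
    funlockfile(g_log);
}

int main(void)
{
    g_log = fopen("daemon.log", "a");   /* "a": append, never overwrite */
    if (g_log == NULL)
        return 1;

    log_append("hello from the daemon");

    fclose(g_log);
    return 0;
}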

Mathias Brossard
A: 

Syslog is a good option, but you may wish to consider looking at log4c. The log4[something] frameworks work well in their Java and Perl implementations, and allow you to choose, from a configuration file, whether to log to syslog, the console, flat files, or user-defined log writers. You can define specific log contexts for each of your modules and have each context log at a different level (trace, debug, info, warn, error, critical) as defined by your configuration, and you can have your daemon re-read that configuration file on the fly by trapping a signal, allowing you to manipulate log levels on a running server.
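
The signal-trapping part can be as small as setting a flag from a SIGHUP handler and reloading the logging configuration from the main loop; the reread_logging_config name below is a made-up placeholder, not part of log4c's API:

#include <signal.h>
#include <string.h>

static volatile sig_atomic_t reload_requested;

static void on_sighup(int sig)
{
    (void)sig;
    reload_requested = 1;               /* only set a flag inside the handler */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sighup;
    sigaction(SIGHUP, &sa, NULL);

    for (;;) {                          /* the daemon's main loop */
        if (reload_requested) {
            reload_requested = 0;
            /* reread_logging_config();    hypothetical: reload log levels here */
        }
        /* ... do the daemon's real work ... */
        break;                          /* keep the example finite */
    }
    return 0;
}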

Jon Topper
+3  A: 

One other advantage of syslog in larger (or more security-conscious) installations: The syslog daemon can be configured to send the logs to another server for recording there instead of (or in addition to) the local filesystem.

It's much more convenient to have all the logs for your server farm in one place rather than having to read them separately on each machine, especially when you're trying to correlate events on one server with those on another. And when one gets cracked, you can't trust its logs any more... but if the log server stayed secure, you know nothing will have been deleted from its logs, so any record of the intrusion will be intact.
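
With a traditional syslogd, forwarding is a one-line configuration change on each machine (the host name is an example; a single @ forwards over UDP):

# send a copy of every message to the central log host
*.*                                         @loghost.example.com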

Dave Sherohman
A: 

If you use threading and you use logging as a debugging tool, you will want to look for a logging library that uses some sort of thread-safe but unlocked ring buffer: one buffer per thread, with a global lock only when strictly needed.

This keeps logging from seriously slowing down your software, and it avoids creating heisenbugs that change behavior when you add debug logging.

If it has a high-speed compressed binary log format that doesn't waste time with format operations during logging and some nice log parsing and display tools, that is a bonus.

I'd provide a reference to some good code for this but I don't have one myself. I just want one. :)
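
For illustration only, here is a very rough sketch of the per-thread ring-buffer idea, not the polished code the answer is wishing for. The names, sizes, and the GCC-style __thread keyword are all assumptions, and a real version would still need a (rarely taken) global lock to drain the buffers:

#include <stdio.h>
#include <string.h>

#define RING_SLOTS   256
#define RING_MSG_LEN 128

/* One ring per thread: writes never take a lock, they just overwrite
   the oldest entry once the ring wraps around. */
struct ring {
    char msg[RING_SLOTS][RING_MSG_LEN];
    unsigned head;
};

static __thread struct ring tls_ring;   /* GCC/Clang thread-local storage */

static void ring_log(const char *text)
{
    unsigned slot = tls_ring.head++ % RING_SLOTS;
    snprintf(tls_ring.msg[slot], RING_MSG_LEN, "%s", text);
}

int main(void)
{
    ring_log("started");
    ring_log("working");
    printf("last message: %s\n", tls_ring.msg[(tls_ring.head - 1) % RING_SLOTS]);
    return 0;
}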

Zan Lynx
A: 

Our embedded system doesn't have syslog, so the daemons I write log debugging output to a file using the "a" open mode, similar to what you've described. I have a function that opens a log file, spits out the message, and then closes the file (I only do this when something unexpected happens). However, I also had to write code to handle log rotation, as other commenters have mentioned, which consists of 'tail -c 65536 logfile > logfiletmp && mv logfiletmp logfile'. It's pretty rough and maybe should be called "log frontal truncation", but it stops our small RAM-disk-based filesystem from filling up with log files.
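
That truncation step might look something like this if called from the daemon itself; the file names and the 64 KiB limit come from the command above, everything else is illustrative:

#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>

/* Keep only the last 64 KiB of "logfile", using the same tail+mv trick. */
static void truncate_log_front(void)
{
    struct stat st;

    if (stat("logfile", &st) == 0 && st.st_size > 65536) {
        /* A real daemon would check the return value and use full paths. */
        system("tail -c 65536 logfile > logfiletmp && mv logfiletmp logfile");
    }
}

int main(void)
{
    truncate_log_front();
    return 0;
}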

MattSmith