tags:
views: 423
answers: 3

I have an issue where an application is randomly dying during a DBI call. We cannot reliably reproduce this in our test or acceptance environments, so I have to monitor it on our production system to try to figure out what is happening.

I am logging all the DBI traffic via the DBI_TRACE environment variable.

DBI_TRACE=3=dbi.log script.pl

The problem, however, is that there are no timestamps in the DBI log files, so it is difficult to go back through them and find what was occurring at the time of the die.

Is there any way to enable logging of DBI with time stamps?

+3  A: 

You could use File::Tee to redirect STDERR and add a timestamp prefix to each line.

For example:

use strict;
use warnings;

use File::Tee 'tee';

my $logfile = "/path/to/timestamped/logfile.log";
my $filter  = sub { return localtime() . ' ' . $_[0] };

my $pid = tee STDERR, { preprocess => $filter, reopen => $logfile };

print STDERR "something bad happened.\n";

The advantage here is that it won't interfere with your existing STDERR -- all error messages will continue to go to the same place. But the stream is duplicated and also written to $logfile, with whatever transformation you want via the $filter hook.

friedo
+2  A: 

If overriding STDERR isn't an option, and if you're on a UNIXy system (which various posts suggest you are), you might direct the trace output to a fifo and run a timestamping filter there:

$ mkfifo /tmp/fifo
$ perl -MTime::HiRes=time -npe 's/^/time . " "/e' < /tmp/fifo > /tmp/timestamped.log &
[1] 12345
$ DBI_TRACE=1=/tmp/fifo script.pl

That filtering program could be anything, even logger, which on my system adds timestamps courtesy of syslogd.
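As a sketch of that pattern with a plain shell filter in place of the Perl one-liner (the fifo and log paths here are illustrative, not from the answer above), `date` can stamp each line read from the fifo:

```shell
#!/bin/sh
# Illustrative fifo + timestamping-filter pattern; paths are made up.
FIFO=/tmp/dbi_trace.fifo
LOG=/tmp/dbi_trace.log
rm -f "$FIFO"
mkfifo "$FIFO"

# Background reader: prefix every line arriving on the fifo with a timestamp.
while IFS= read -r line; do
    printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
done < "$FIFO" > "$LOG" &

# In real use this line would instead be:  DBI_TRACE=1=$FIFO script.pl
echo "SELECT * FROM users" > "$FIFO"

wait            # let the reader drain the fifo before we look at the log
cat "$LOG"
```

The `logger` variant mentioned above would simply replace the `while` loop, letting syslogd do the stamping instead of `date`.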

pilcrow
+2  A: 

The DBI documentation contains example code showing how to enable timestamped logs using layered filehandles. The downside is that you lose the convenience of the environment variable and have to set the tracing parameters in code:

$dbh->trace('SQL', $fh);

where $fh holds a filehandle opened through a layer built on PerlIO::via.
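A rough sketch of such a layer (the layer name `Timestamp` and the log path are illustrative, not from the DBI docs) could look like this:

```perl
# Minimal PerlIO::via layer that prefixes each buffered write
# with a timestamp. Layer and path names here are illustrative.
package PerlIO::via::Timestamp;

sub PUSHED { my ($class) = @_; bless {}, $class }

sub WRITE {
    my ($self, $buf, $fh) = @_;
    # Note: WRITE sees buffered chunks, so the stamp marks the
    # flush time, not necessarily each individual trace line.
    print {$fh} scalar(localtime), ' ', $buf or return -1;
    return length $buf;
}

package main;
use strict;
use warnings;

open my $fh, '>:via(Timestamp)', '/tmp/dbi_trace.log'
    or die "Cannot open trace log: $!";

# With DBI loaded, the handle would then be attached like so:
# $dbh->trace('SQL', $fh);
print $fh "connected to database\n";
close $fh;
```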

zakovyrya