I'm running into a weird issue where my delayed jobs are failing in production. I finally narrowed it down to the logger. If I comment out my log function calls, everything works. However, if I try to log, I get this in the delayed_job handler:

 --- !ruby/struct:Delayed::PerformableMethod 
object: AR:User:1
method: :load_and_update_without_send_later
args: []

 | closed stream
/opt/ruby/lib/ruby/1.8/logger.rb:504:in `write'
/opt/ruby/lib/ruby/1.8/logger.rb:504:in `write'
/opt/ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
/opt/ruby/lib/ruby/1.8/logger.rb:496:in `write'
/opt/ruby/lib/ruby/1.8/logger.rb:326:in `add'
/opt/ruby/lib/ruby/1.8/logger.rb:374:in `info'
/home/rails/myapp.com/releases/20100203203031/app/models/gmail.rb:35:in `log'

My logger looks like this:

@@error_log_file = File.open("#{RAILS_ROOT}/log/error.log", 'a')
@@error_log_file.sync = true
@@error_log = Logger.new(@@error_log_file) # Logger wrapping the synced file handle

def log(msg)
  msg.each do |line|
    line = "#{Time.now.strftime('%H:%M:%S')}  #{line}"
    @@error_log.info(line) # this is line 35 where it's failing
    puts line
  end
end

If I comment out the line "@@error_log_file.sync = true", it also works.

Is this a delayed_job problem, or could it be related to my log directory being a symbolic link (as set up by a standard Capistrano deploy)?

Also, nothing is being written to my error.log file, and nothing is being written to delayed_job.log. Totally stumped...

A: 

It looks like you're trying to log to a closed file. Have you perhaps considered trying the SyslogLogger gem?

Here's an article on how to use it with Rails, which should help get you started.
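
As a starting point, here's a minimal sketch of what that setup typically looks like in a Rails 2.x app (this is not from the article; "myapp" is just a placeholder program name used to tag the syslog entries):

# config/environments/production.rb -- rough sketch, assuming the SyslogLogger gem is installed
require 'syslog_logger'
config.logger = SyslogLogger.new('myapp')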

jonnii
Thanks, checking it out. Here I'm just using Logger. But I don't think it should be a closed file, since I explicitly open it. Plus, it works fine in development mode. Also, delayed_job isn't writing its usual logging to log/delayed_job.log, which isn't something I've tinkered with.
Allan Grant
A: 

I've had a similar issue. The logging in the collectiveidea fork is being rewritten; see this issue for more info. Perhaps try the newest version to see if that fixes it for you.
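
(Not part of the original answer: as a rough sketch, one way at the time to pull the newest gem release into a Rails 2.x app, assuming a gem-based rather than plugin-based install.)

# config/environment.rb -- inside the Rails::Initializer.run block
config.gem 'delayed_job'
# then run: rake gems:install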

Jonathan Julian
A: 

Woody Peterson found the problem here: http://groups.google.com/group/delayed_job/browse_thread/thread/f7d0534bb6c7c83f/37b4e8ed7bfaba42

The problem is:

delayed_job is using Rails' buffered logger in production, and the buffer flush is not being triggered for some reason (it's unclear whether it's normally flushed once the buffer reaches a certain size or flushed explicitly after a request).

The temporary fix (credit to Nathan Phelps) is:

In production, the buffered logger is set to an auto_flushing value of 1000, which means flush is not called until 1000 messages have been logged. Assuming you're using collectiveidea's fork of delayed_job, you can address this by setting auto_flushing to a more reasonable value in command.rb, right after the logger is initialized on line 64, i.e.:

Delayed::Worker.logger = Rails.logger

Delayed::Worker.logger.auto_flushing = 1 # or whatever

Works for me perfectly!
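
If you'd rather not edit command.rb inside the plugin, the same effect can be had from your own code. A rough sketch (the initializer file name is mine, not from the thread), relying on the fact noted above that the worker reuses Rails.logger, which in production is a BufferedLogger:

# config/initializers/delayed_job_logging.rb -- hypothetical file name;
# flush after every message instead of buffering 1000 of them.
if Rails.logger.respond_to?(:auto_flushing)
  Rails.logger.auto_flushing = 1
end

Note that this changes flushing for the entire app, not just the worker, so there's a small performance trade-off.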

Allan Grant