So I've had Delayed::Job running swimmingly in production for a while.
Whenever I'd make a change to a job, I would (all in the production env, mind you):
- restart Delayed::Job using the [daemon script](http://wiki.github.com/tobi/delayed_job/running-delayedworker-as-a-daemon) I use
- clear the jobs using `rake jobs:clear`
Also, I have monit running; I've stopped monit, restarted the script, and then started monit, in that order... still no dice.
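For concreteness, here's the sequence sketched as shell commands. The `job_runner` script and monit service names are guesses based on my log and pid file names, so treat them as placeholders:

```shell
# All run on the production box. Names are assumptions from my setup:
# the daemons-based wrapper from the wiki, registered with monit as "job_runner".
monit stop job_runner                                  # keep monit from resurrecting the old worker mid-dance
RAILS_ENV=production ruby script/job_runner restart    # reload the app, picking up new job code
RAILS_ENV=production rake jobs:clear                   # drop queued jobs serialized against the old code
monit start job_runner
```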
Anyway, the restart is what gets the old job code out of memory; I clear the job queue just because that's what I do. That step may not be needed, but in my app it doesn't hurt.
However, these steps have recently stopped picking up my new job code for some reason. When I look at my job_runner.log file, I see this error after I restart the script:
*** below you find the most recent exception thrown, this will be likely (but not certainly) the exception that made the application exit abnormally ***
#<SystemExit: exit>
*** below you find all exception objects found in memory, some of them may have been thrown in your application, others may just be in memory because they are standard exceptions ***
#<NoMemoryError: failed to allocate memory>
#<SystemStackError: stack level too deep>
#<fatal: exception reentered>
#<LoadError: no such file to load -- rubygems/defaults/operating_system>
#<LoadError: no such file to load -- daemons>
#<NameError: uninitialized constant Rails::Plugin::HoptoadNotifier>
#<Errno::ENOENT: No such file or directory - /var/rails/wigify/tmp/pids/job_runner.pid>
#<SystemExit: exit>
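That `Errno::ENOENT` on `job_runner.pid` makes me suspect a stale or missing pid file from the daemons gem. A minimal sketch of the check I've been doing by hand (the `stale_pid?` helper is my own, not part of daemons):

```ruby
# Pid file path taken from the error above; stale_pid? is a hypothetical helper.
pid_file = "/var/rails/wigify/tmp/pids/job_runner.pid"

def stale_pid?(path)
  return false unless File.exist?(path)  # no pid file at all
  pid = File.read(path).to_i
  Process.kill(0, pid)                   # signal 0: existence check, sends nothing
  false                                  # process is alive
rescue Errno::ESRCH
  true                                   # pid file points at a dead process
rescue Errno::EPERM
  false                                  # alive, but owned by another user
end

puts "stale pid file -- safe to delete" if stale_pid?(pid_file)
```

If the file is stale, deleting it before restarting the daemon has cleared this up for me in the past.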
So I'm not sure what is going on. That `SystemStackError: stack level too deep` — does that come from my code? All my integration tests pass, just as they did before.
Do I have memory issues on my slice, even though `free` tells me I have about 300MB free on average when I run it?
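For what it's worth, here's how I'm sanity-checking what `free` reports from inside Ruby — a Linux-only sketch that assumes the slice exposes `/proc/meminfo`:

```ruby
# Linux-only: parse MemFree straight out of /proc/meminfo instead of
# scraping `free` output. Values in meminfo are reported in kB.
meminfo = File.read("/proc/meminfo")
free_mb = meminfo[/MemFree:\s+(\d+)/, 1].to_i / 1024
puts "MemFree: #{free_mb} MB"
```

Of course, the worker could still hit `NoMemoryError` if it spikes between checks, so this only rules out the steady-state case.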
Who here can help a brother out?