So I'm using Delayed::Job workers (on Heroku); a job is enqueued in an after_create callback when a user creates a certain model.

A common pattern, it turns out, is for users to create something and then immediately delete it (likely because they made a mistake).

When this happens the workers fire up, but by the time they query for the model it has already been deleted. Because of the auto-retry feature, that ill-fated job will then retry 25 times and can never succeed.
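Roughly, the setup looks like this (model and job names are made up for illustration):

```ruby
class Item < ActiveRecord::Base
  after_create :enqueue_processing

  private

  def enqueue_processing
    # Picked up later by a Delayed::Job worker on Heroku
    Delayed::Job.enqueue ProcessItemJob.new(id)
  end
end

ProcessItemJob = Struct.new(:item_id) do
  def perform
    # If the user deleted the record before the worker ran, this raises
    # ActiveRecord::RecordNotFound and the job keeps retrying (up to 25 times)
    item = Item.find(item_id)
    # ... actual processing ...
  end
end
```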

Is there any way I can catch certain errors and, when they occur, prevent that specific job from ever retrying again, but if it's not that error, it will retry in the future?

A: 

Abstract the checks into the function you call with delayed_job. Check whether the job can actually proceed, and either do the work or simply return, so the job is marked as successful and never retried.
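For example, something along these lines (a rough sketch; the job and model names are just placeholders):

```ruby
ProcessItemJob = Struct.new(:item_id) do
  def perform
    item = Item.find_by_id(item_id) # returns nil instead of raising if the record is gone

    # The record was deleted before the worker got to it: return normally,
    # so Delayed::Job marks the job as succeeded and never retries it.
    return if item.nil?

    # ... the real work; any error raised here is still retried as usual ...
  end
end
```

Any other error raised inside perform still goes through the normal retry behaviour, which is exactly what you want for transient failures.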

David Lyod