Execute only one of many duplicate jobs with Sidekiq?

Eric Seifert · Feb 5, 2013

I have a background job that runs a map/reduce job on MongoDB. When the user sends in more data to the document, it kicks off a background job that runs on that document. If the user sends in multiple requests, it will kick off multiple background jobs for the same document, but only one really needs to run. Is there a way I can prevent the duplicate instances? I was thinking of creating a queue for each document and making sure it is empty before I submit a new job. Or perhaps I can somehow set a job id that is the same as my document id and check that no such job exists before submitting it?
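
Here is roughly what I mean by the second idea, untested, using Sidekiq's public API (MapReduceWorker and the "default" queue name are just placeholders for my actual setup, and I realize the check-then-enqueue sequence could still race if two requests arrive at once):

require 'sidekiq/api'

# Hypothetical helper: enqueue a map/reduce job only if no identical
# job for this document is already sitting in the queue.
def enqueue_map_reduce(document_id)
  already_queued = Sidekiq::Queue.new("default").any? do |job|
    job.klass == "MapReduceWorker" && job.args == [document_id]
  end
  MapReduceWorker.perform_async(document_id) unless already_queued
end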

Also, I just found the sidekiq-unique-jobs gem, but its documentation is non-existent. Does it do what I want?
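
From what I can piece together, it would be configured via sidekiq_options, something like this (untested; the lock: :until_executed mode is what newer releases of the gem document, and MapReduceWorker is again a placeholder):

# Assumes gem 'sidekiq-unique-jobs' is in the Gemfile (Bundler requires it).
require 'sidekiq'

class MapReduceWorker
  include Sidekiq::Worker
  # Hold a unique lock until the job finishes executing, so duplicate
  # pushes with the same arguments are dropped while one is pending.
  sidekiq_options lock: :until_executed

  def perform(document_id)
    # map/reduce for document_id goes here
  end
end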

Answer

crftr · Feb 5, 2013

My initial suggestion would be a mutex for this specific job. But since you may have multiple application servers working the Sidekiq jobs, I would suggest something at the Redis level.

For instance, use redis-semaphore within your Sidekiq worker definition. An untested example:

require 'redis-semaphore'

def perform
  s = Redis::Semaphore.new(:map_reduce_semaphore, host: "localhost")

  # Verify that this Sidekiq worker is the first to reach this semaphore;
  # if another worker already holds it, skip the run entirely.
  unless s.locked?
    # Block for up to 90 seconds waiting to acquire the lock.
    # Set this to what is reasonable for your worker.
    s.lock(90)
    begin
      your_map_reduce
    ensure
      # Release the semaphore even if the map/reduce raises.
      s.unlock
    end
  end
end

def your_map_reduce
  # ...
end