Multi-threaded algorithm for solving sudoku?

Jon · May 12, 2009 · Viewed 9.9k times

I have a homework assignment to write a multi-threaded sudoku solver, which finds all solutions to a given puzzle. I have previously written a very fast single-threaded backtracking sudoku solver, so I don't need any help with the sudoku solving aspect.

My problem is probably related to not really grokking concurrency, but I don't see how this problem benefits from multi-threading. I don't understand how you can find different solutions to the same problem at the same time without maintaining multiple copies of the puzzle. Given this assumption (please prove it wrong), I don't see how a multi-threaded solution can be any more efficient than a single-threaded one.

I would appreciate it if anyone could give me some starting suggestions for the algorithm (please, no code...)


I forgot to mention: the number of threads to use is specified as an argument to the program, so as far as I can tell it's not related to the state of the puzzle in any way...

Also, there may not be a unique solution - a valid input may be a totally empty board. I have to report min(1000, number of solutions) and display one of them (if one exists).

Answer

Tom Leys · May 12, 2009

Pretty simple, really. The basic concept is that in your backtracking solution you would branch whenever there was a choice: you tried one branch, backtracked, and then tried the other choice.

Now, spawn a thread for each choice and try them simultaneously. Only spawn a new thread if there are fewer than some number of threads already in the system (that limit would be your input argument); otherwise just fall back to your existing single-threaded solution, as sketched below. For added efficiency, get these worker threads from a thread pool.
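For illustration, here is a minimal Java sketch of that idea under my own assumptions (a 9x9 int array as the board, and names like maxThreads, liveThreads and solutionCount that are purely illustrative, not something the answer prescribes): branch on the first empty cell, fork a thread for a candidate digit only while a shared counter stays under the thread-count argument, and otherwise keep recursing in the current thread just as the single-threaded solver would.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelSolver {
    // Thread budget; overridden by the program's thread-count argument in main.
    static int maxThreads = Runtime.getRuntime().availableProcessors();
    static final AtomicInteger liveThreads = new AtomicInteger(1);   // counts the main thread
    static final AtomicInteger solutionCount = new AtomicInteger(0);

    // Backtracking: at the first empty cell, try every legal digit. Fork a thread
    // for a branch only while the budget allows; otherwise recurse in this thread.
    static void solve(int[][] board) {
        int row = -1, col = -1;
        outer:
        for (int r = 0; r < 9; r++)
            for (int c = 0; c < 9; c++)
                if (board[r][c] == 0) { row = r; col = c; break outer; }
        if (row == -1) {                        // no empty cell left: a solution
            solutionCount.incrementAndGet();
            return;
        }

        List<Thread> children = new ArrayList<>();
        for (int digit = 1; digit <= 9; digit++) {
            if (!legal(board, row, col, digit)) continue;
            int[][] copy = deepCopy(board);     // each branch gets its own board
            copy[row][col] = digit;
            if (liveThreads.incrementAndGet() <= maxThreads) {
                Thread t = new Thread(() -> {
                    solve(copy);
                    liveThreads.decrementAndGet();
                });
                t.start();
                children.add(t);
            } else {
                liveThreads.decrementAndGet();
                solve(copy);                    // over budget: stay single-threaded
            }
        }
        for (Thread t : children) {             // wait for the branches we spawned
            try { t.join(); } catch (InterruptedException ignored) {}
        }
    }

    static boolean legal(int[][] b, int row, int col, int d) {
        for (int i = 0; i < 9; i++)
            if (b[row][i] == d || b[i][col] == d) return false;
        int br = row / 3 * 3, bc = col / 3 * 3;
        for (int r = br; r < br + 3; r++)
            for (int c = bc; c < bc + 3; c++)
                if (b[r][c] == d) return false;
        return true;
    }

    static int[][] deepCopy(int[][] b) {
        int[][] copy = new int[9][9];
        for (int r = 0; r < 9; r++) copy[r] = b[r].clone();
        return copy;
    }

    public static void main(String[] args) {
        if (args.length > 0) maxThreads = Integer.parseInt(args[0]);
        String[] rows = {                       // sample puzzle; 0 = empty cell
            "530070000", "600195000", "098000060",
            "800060003", "400803001", "700020006",
            "060000280", "000419005", "000080079"
        };
        int[][] board = new int[9][9];
        for (int r = 0; r < 9; r++)
            for (int c = 0; c < 9; c++)
                board[r][c] = rows[r].charAt(c) - '0';
        solve(board);
        System.out.println("Solutions found: " + solutionCount.get());
    }
}
```

A thread pool (for example an ExecutorService) would avoid creating threads on the fly, but raw threads keep the sketch short.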

This is in many ways a divide-and-conquer technique: you are using the choices as an opportunity to split the search space in half and allocate one half to each thread. Most likely one half is harder than the other, meaning thread lifetimes will vary, but that is what makes the optimisation interesting.

The easy way to handle the obvious synchronisation issues is to copy the current board state and pass it into each invocation of your function, so it becomes a function argument. This copying means you don't have to worry about any shared state between threads. If your single-threaded solution used a global or member variable to store the board state, you will need a copy of it either on the stack (easy) or per thread (harder). All your function needs to return is a board state and the number of moves taken to reach it; a sketch of this copy-and-pass approach follows.
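For example, a minimal sketch of the copy-and-pass idea (the BoardState name and its fields are my own, not part of the answer): each placement produces a fresh copy with a bumped move counter, so threads exploring different branches can never interfere.

```java
// Illustrative immutable board state: a 9x9 grid plus the move count,
// passed by value into each recursive call instead of stored in a shared field.
final class BoardState {
    final int[][] cells;       // 9x9 grid, 0 = empty
    final int movesTaken;      // how many digits have been placed so far

    BoardState(int[][] cells, int movesTaken) {
        this.cells = cells;
        this.movesTaken = movesTaken;
    }

    // Produce a new state with one extra digit placed; the original is untouched.
    BoardState place(int row, int col, int digit) {
        int[][] copy = new int[9][9];
        for (int r = 0; r < 9; r++) copy[r] = cells[r].clone();
        copy[row][col] = digit;
        return new BoardState(copy, movesTaken + 1);
    }
}
```

Because the state is never mutated in place, no locks are needed anywhere; the only cost is the 81-int copy per move.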

Each routine that invokes several threads to do work should spawn n-1 threads when there are n pieces of work, do the nth piece of work itself, and then wait on a synchronisation object until all the other threads are finished. You then evaluate their results: you have n board states, so return the one with the least number of moves. A sketch of this fork-and-join pattern follows.
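Here is a sketch of that pattern with plain threads (SearchResult, exploreBranch and solveBranches are hypothetical stand-ins for your solver's own types, and it assumes the branch list is non-empty): hand the first n-1 branches to new threads, work through the last one on the calling thread, then join and keep the result with the fewest moves.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class ForkAndJoinExample {
    static class SearchResult {
        final int[][] board;
        final int moves;
        SearchResult(int[][] board, int moves) { this.board = board; this.moves = moves; }
    }

    // Placeholder for the real per-branch search; here it just echoes its input.
    static SearchResult exploreBranch(int[][] branchBoard, int movesSoFar) {
        return new SearchResult(branchBoard, movesSoFar);
    }

    static SearchResult solveBranches(List<int[][]> branches, int movesSoFar)
            throws InterruptedException, ExecutionException {
        int n = branches.size();
        List<FutureTask<SearchResult>> tasks = new ArrayList<>();
        // Hand the first n-1 branches to new threads...
        for (int i = 0; i < n - 1; i++) {
            final int[][] b = branches.get(i);
            FutureTask<SearchResult> task =
                new FutureTask<>(() -> exploreBranch(b, movesSoFar + 1));
            tasks.add(task);
            new Thread(task).start();
        }
        // ...do the nth piece of work on the current thread...
        SearchResult best = exploreBranch(branches.get(n - 1), movesSoFar + 1);
        // ...then wait for the others and keep the result with the fewest moves.
        for (FutureTask<SearchResult> task : tasks) {
            SearchResult r = task.get();        // blocks until that thread is done
            if (r != null && (best == null || r.moves < best.moves)) best = r;
        }
        return best;
    }
}
```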