I have a basic implementation of alpha-beta pruning, but I have no idea how to improve the move ordering. I have read that it can be done with a shallow search, iterative deepening, or by storing the best moves in a transposition table.
Any suggestions on how to implement one of these improvements in this algorithm?
public double alphaBetaPruning(Board board, int depth, double alpha, double beta, int player) {
    if (depth == 0) {
        return board.evaluateBoard();
    }
    Collection<Move> children = board.generatePossibleMoves(player);
    if (player == 0) { // maximizing player
        for (Move move : children) {
            Board tempBoard = new Board(board);
            tempBoard.makeMove(move);
            int nextPlayer = next(player);
            double result = alphaBetaPruning(tempBoard, depth - 1, alpha, beta, nextPlayer);
            if (result > alpha) {
                alpha = result;
                if (depth == this.origDepth) {
                    this.bestMove = move; // remember the best move at the root
                }
            }
            if (alpha >= beta) { // beta cutoff
                break;
            }
        }
        return alpha;
    } else { // minimizing player
        for (Move move : children) {
            Board tempBoard = new Board(board);
            tempBoard.makeMove(move);
            int nextPlayer = next(player);
            double result = alphaBetaPruning(tempBoard, depth - 1, alpha, beta, nextPlayer);
            if (result < beta) {
                beta = result;
                if (depth == this.origDepth) {
                    this.bestMove = move; // remember the best move at the root
                }
            }
            if (beta <= alpha) { // alpha cutoff
                break;
            }
        }
        return beta;
    }
}

public int next(int player) { // the two players are identified by 0 and 4
    if (player == 0) {
        return 4;
    } else {
        return 0;
    }
}
Node reordering with a shallow search is trivial: calculate the heuristic value for each child of the state before recursively checking them. Then, sort these states [descending for a max vertex, ascending for a min vertex] and invoke the algorithm recursively on the sorted list. The idea is that if a state looks good at shallow depth, it is more likely to be good at deeper depth as well, and if that holds, you will get more prunings, earlier.
The sorting should be done before this loop [in both the if and else clauses]:

    for (Move move : children) {
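A minimal sketch of that ordering, reusing your Board, Move, and evaluateBoard() names [everything else, including the method name orderMoves, is just illustrative]:

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Comparator;
    import java.util.List;

    // Order moves by a shallow (depth-1) heuristic before the main loop.
    private List<Move> orderMoves(Board board, Collection<Move> children, int player) {
        List<Move> ordered = new ArrayList<>(children);
        // Score each move by the evaluation of the position it leads to.
        Comparator<Move> byShallowScore = Comparator.comparingDouble(move -> {
            Board tempBoard = new Board(board);
            tempBoard.makeMove(move);
            return tempBoard.evaluateBoard();
        });
        if (player == 0) {
            ordered.sort(byShallowScore.reversed()); // max player: highest scores first
        } else {
            ordered.sort(byShallowScore);            // min player: lowest scores first
        }
        return ordered;
    }

Then iterate with for (Move move : orderMoves(board, children, player)) in both branches. If evaluateBoard() is expensive, compute each move's score once into a map instead of re-evaluating inside the comparator.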
Storing moves is also trivial - many states are calculated twice. When you finish calculating any state, store it [with the depth of the calculation! it is important!] in a HashMap. The first thing you do when you start calculating a vertex is check whether it was already calculated - and if it was, return the cached value. The idea behind it is that many states are reachable from different paths, so this way you can eliminate redundant calculations.
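For a HashMap to recognize repeated positions, the key type needs consistent equals() and hashCode(). A hypothetical State key over your board, depth, and player could look like this [it assumes Board itself implements equals() and hashCode(); if it does not, you would need to add them]:

    import java.util.Objects;

    // Hypothetical cache key: a search result is identified by the
    // position, the depth it was searched to, and whose turn it is.
    final class State {
        final Board board;
        final int depth;
        final int player;

        State(Board board, int depth, int player) {
            this.board = board;
            this.depth = depth;
            this.player = player;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof State)) return false;
            State other = (State) o;
            return depth == other.depth
                    && player == other.player
                    && board.equals(other.board); // relies on Board#equals
        }

        @Override
        public int hashCode() {
            return Objects.hash(board, depth, player); // relies on Board#hashCode
        }
    }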
The changes should be done both in the first line of the method [something like if (cache.containsKey(new State(board, depth, player))) return cache.get(new State(board, depth, player)); ] [excuse me for the lack of elegance and efficiency - just explaining an idea here]. You should also add cache.put(...) before each return statement.
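Put together, a cached version of your method might look roughly like this, where cache is an assumed HashMap<State, Double> field, State is the hypothetical key above, and search() stands for your existing method body moved into a private helper [so a single cache.put() covers every return path]:

    import java.util.HashMap;
    import java.util.Map;

    private final Map<State, Double> cache = new HashMap<>();

    public double alphaBetaPruning(Board board, int depth, double alpha, double beta, int player) {
        State key = new State(board, depth, player);
        Double cached = cache.get(key); // one lookup replaces containsKey + get
        if (cached != null) {
            return cached;
        }
        double value = search(board, depth, alpha, beta, player); // the original body, renamed
        cache.put(key, value); // store before returning, as described above
        return value;
    }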