Alpha-beta pruning in Checkers (test cases to prove the efficiency)
I have developed a parallelized checkers (English draughts) game that uses alpha-beta pruning to find the optimal move for the machine. I would like to know whether increasing the depth of the game tree searched with alpha-beta pruning necessarily produces a better move.
I'm running on a low-end machine and cannot search deeper than depth 9. I have checked my program with the following test cases, but I get the same move for every depth from 1 to 9:
case 1
+B+B+B+B
B+B+B+B+
+B+B+B+B
O+O+O+O+
+O+O+O+O
A+A+A+A+
+A+A+A+A
A+A+A+A+ output: (5, 0) => (4, 1)
case 2
+B+B+B+B
O+O+B+B+
+O+O+B+B
O+B+O+O+
+O+O+O+O
A+A+A+O+
+O+O+O+O
O+O+O+O+ output: (5, 2) => (4, 3)
case 3
+O+O+O+O
O+O+O+O+
+B+O+O+O
O+O+O+O+
+B+B+O+O
O+A+A+O+
+O+O+O+O
O+O+A+A+ output: (5, 2) => (3, 4)
case 4
+k+O+O+O
O+B+O+O+
+O+O+O+B
O+O+O+B+
+O+O+B+O
O+O+O+O+
+O+O+O+O
A+A+A+A+ output: (0, 1) => (2, 3)
case 5
+B+B+B+B
O+O+O+O+
+O+O+O+O
O+O+O+O+
+B+B+K+O
O+A+O+O+
+O+O+O+O
A+A+O+A+ output: (5, 2) => (3, 0)
case 6
+k+O+O+O
B+O+O+O+
+O+O+O+O
O+O+O+O+
+O+O+O+O
O+O+O+O+
+O+O+O+O
O+O+O+O+ output: (0, 1) => (1, 2)
where the symbols mean:
O- Empty dark square
+- Empty white square
A- Machine's pawn
B- Opponent's pawn
k- Machine's king
K- Opponent's king
I calculate the heuristic value of a leaf node of the game tree as the number of the machine's pieces left on the board minus the number of the opponent's pieces. Since kings are more powerful than pawns, the heuristic counts each king as two pawns. Alpha-beta search is then applied using this evaluation.
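A minimal sketch of this evaluation in Python (the list-of-strings board representation simply mirrors the test-case diagrams above and is my assumption, not necessarily how the asker stores the board):

```python
# Heuristic sketch: material count where a king is worth two pawns.
# The board is a list of 8 strings using the symbols from the test
# cases (A/B = pawns, k/K = kings, O and + = empty squares).

def evaluate(board):
    """Machine's material minus the opponent's; kings count double."""
    values = {'A': 1, 'k': 2, 'B': -1, 'K': -2}
    return sum(values.get(square, 0) for row in board for square in row)
```

For case 5 above this gives 4 machine pawns against 6 opponent pawns and 1 opponent king, i.e. 4 - (6 + 2) = -4.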
I believe my program works fine, but the heuristic values calculated for the leaf nodes of the game tree did not change as I increased the depth up to 9 (they may change if I increase the depth further). Can anyone please provide some test cases with which I can prove the efficiency within depth 9?
Your question is quite open-ended, but here are a couple of hints.
The heuristic value calculated for a leaf node does not depend on the search depth, because it is a leaf node. So your comment that the "values calculated for the leaf nodes... didn't change" does not make much sense. Perhaps you mean that the value computed for the root node did not change.
Increasing the search depth normally leads to better moves. If you get the same "best" move for every search depth from 1 to 9, there is probably a bug somewhere.
The evaluation function is the most important part of an alpha-beta search solution. You need a better evaluation function than one that just counts material in a simplistic way, especially if you cannot afford a deep search.
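As a sketch of what "better than plain material counting" might look like, the function below adds a small advancement bonus for pawns nearing promotion. The weights are illustrative assumptions, not tuned values, and the board representation again follows the test-case diagrams:

```python
# Material plus a small positional term: pawns closer to their
# promotion row are worth slightly more. The 0.05 weight is an
# illustrative assumption; real engines tune such weights carefully.

def evaluate(board):
    score = 0.0
    for r, row in enumerate(board):
        for square in row:
            if square == 'A':        # machine pawn, promotes at row 0
                score += 1.0 + 0.05 * (7 - r)
            elif square == 'k':      # machine king
                score += 2.0
            elif square == 'B':      # opponent pawn, promotes at row 7
                score -= 1.0 + 0.05 * r
            elif square == 'K':      # opponent king
                score -= 2.0
    return score
```

On the symmetric opening position of case 1 this still evaluates to zero, but it starts to distinguish positions that plain material counting treats as equal.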
In practice, people rarely use plain alpha-beta; techniques such as principal variation search, iterative deepening, and the null-move heuristic are used to increase the practical efficiency of the algorithm.
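To make the iterative-deepening idea concrete, here is a sketch of plain alpha-beta wrapped in an iterative-deepening loop, run over a tiny hand-made game tree. The toy tree and helper names are illustrative assumptions, not the asker's code:

```python
# Iterative deepening around plain alpha-beta on a toy game tree:
# internal states map to child states, leaves map to scores.
import math

TREE = {
    'root': ['a', 'b'],
    'a': ['a1', 'a2'],
    'b': ['b1', 'b2'],
}
LEAF_SCORES = {'a1': 3, 'a2': 5, 'b1': -1, 'b2': 9}

def alphabeta(state, depth, alpha, beta, maximizing):
    if state in LEAF_SCORES or depth == 0:
        return LEAF_SCORES.get(state, 0)
    if maximizing:
        value = -math.inf
        for child in TREE[state]:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: opponent will avoid this branch
        return value
    else:
        value = math.inf
        for child in TREE[state]:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value

def best_move_iterative(state, max_depth):
    """Search depth 1, 2, ..., max_depth. In a real engine the result
    of each shallow pass would be used to order moves for the next,
    deeper pass (move ordering is omitted in this sketch)."""
    best = None
    for depth in range(1, max_depth + 1):
        best = max(TREE[state],
                   key=lambda c: alphabeta(c, depth - 1,
                                           -math.inf, math.inf, False))
    return best
```

The payoff of iterative deepening is not the shallow searches themselves but the move ordering they provide: searching the previous iteration's best move first makes alpha-beta cut off far more aggressively at the next depth.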
Construct your own test cases where you know what the best move actually is, and verify that your algorithm finds it: for example, end-game positions where you know one player can win in three moves.