Chess engine code – minimax

This article is part of a series – the contents page is here.

In this article we cut to the chase and present the function that implements the minimax algorithm.  I’m going to cut out some miscellaneous stuff so that we can focus on the core, which is walking the tree and comparing the lines.  You can get the full version in the code base.

    private getBestMoveMinimax(board: Chess.Board, headStartMoves: Chess.GameMove[], depth: number, isCaptureChain: boolean, alphaScore: number): EvaluatedMove {

        const movesAndReplies = this.getMovesAndReplies(board, headStartMoves);

        // ... snipped the test for checkmate/stalemate, which boils down to whether we have any moves.

        const maxDepth = isCaptureChain ? 6 : 4;

        let bestMoveSoFar: Chess.GameMove = null;
        let bestScoreSoFar: number = null;

        const haveReachedDepthLimit = depth >= maxDepth;
        if (haveReachedDepthLimit) {
            // Temporarily switch sides so that we can ask for the opponent's attacks.
            board.isWhiteToMove = !board.isWhiteToMove;
            const opponentAttacks = board.getPotentialMoves(true);
            board.isWhiteToMove = !board.isWhiteToMove;

            return { move: null, score: EvaluationFunction.getEvaluation(board, movesAndReplies, opponentAttacks) };
        }

        for (let move of movesAndReplies) {

            if ((!move.move.isPawnAttack)
                && move.replies.some(reply => reply.isTheoreticalKingCapture)) {
                continue; // move is not legal.
            }

            const boardAfterMove = board.applyMove(move.move);
            const bestReply = this.getBestMoveMinimax(boardAfterMove, move.replies, depth + 1, isCaptureChain && move.move.isCapture, bestScoreSoFar);
            const lineEvaluation = bestReply.score;

            if (!bestMoveSoFar
                || ComputerPlayer.isBetterScore(lineEvaluation, bestScoreSoFar, board.isWhiteToMove)) {
                bestScoreSoFar = lineEvaluation;
                bestMoveSoFar = move.move;
            }

            if (alphaScore !== null
                && ComputerPlayer.isBetterScore(lineEvaluation, alphaScore, board.isWhiteToMove)) {
                break; // alpha-beta cutoff: our opponent will never let us reach this position.
            }
        }

        return { move: bestMoveSoFar, score: bestScoreSoFar };
    }

    private static isBetterScore(newScore: number, baseScore: number, isWhiteToMove: boolean): boolean {
        if (isWhiteToMove) {
            return newScore > baseScore;
        }
        return baseScore > newScore;
    }

Moves versus replies

The getBestMoveMinimax function starts by calling getMovesAndReplies().  As you can probably guess, this returns an array of all the potential moves for the current player and, for each move, all of the opposing player’s potential replies.  But why do we need to calculate the replies now instead of in the next recursive call?  And what is that headStartMoves parameter for?  The answer to the first question is that we need the replies in order to know whether each move is legal.  Recall that the move generator enumerates the moves without regard to check.  The only way to know whether a move is legal is to generate all the potential replies and see if any of them is a ‘potential king capture.’  If you find one then the move is illegal (you can see this test being performed at the top of the for loop that iterates through the moves; note that it exempts the ‘pawn attack’ pseudo-moves that are generated to help the evaluation function).
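
As a minimal sketch of that legality filter – using simplified, hypothetical types standing in for the real Chess.GameMove machinery, and a made-up name field for illustration – the test at the top of the loop amounts to:

```typescript
// Simplified stand-ins for the engine's types – assumptions, not the real API.
interface Reply { isTheoreticalKingCapture: boolean; }
interface MoveAndReplies {
    name: string;          // hypothetical label, for illustration only
    isPawnAttack: boolean;
    replies: Reply[];
}

// A pseudo-legal move is discarded if any reply could capture our king.
// Mirroring the condition in the function above, the test is skipped for
// 'pawn attack' pseudo-moves (generated to help the evaluation function).
function passesLegalityFilter(m: MoveAndReplies): boolean {
    return m.isPawnAttack
        || !m.replies.some(reply => reply.isTheoreticalKingCapture);
}
```

The for loop above simply continues past any move that fails this test.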

But as we start to walk the tree we will need the opponent’s replies again.  After all, the moves that are the “opponent’s replies” at depth zero become the “player’s moves” at depth one!  It would be inefficient to regenerate the reply moves in the next recursive call, so we pass them down via the headStartMoves parameter (Get it?  They give us a “head start” in calculating the next set of potential moves).

Evaluating leaf nodes

All recursive functions have a base case and ours is no exception.  The hard part is knowing when we’ve reached that base case.  In one sense, we usually don’t: unless we’ve found a checkmate or stalemate we can just keep searching ever deeper and we never seem to run out of moves.  So we have to draw the line somewhere and hope that we don’t get kicked too badly by the horizon effect.

The function above uses a simple cutoff based on the current search depth, with a slight boost for lines that involve nothing but capturing moves.  However the line is drawn, we end up with the haveReachedDepthLimit flag.  When it’s true we call the evaluation function.  One of the inputs to the evaluation is our opponent’s current attack (even though it’s not their turn to move).  To get this we temporarily flip the isWhiteToMove flag and call getPotentialMoves, then we flip the flag back again to restore the board.
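
Isolated from the rest of the search, the cutoff logic is tiny.  Here is a standalone sketch using the same limits as the function above (four plies normally, six for pure capture chains):

```typescript
// Depth limit with a capture-chain extension: lines consisting of nothing but
// captures are searched two plies deeper, softening the horizon effect.
function haveReachedDepthLimit(depth: number, isCaptureChain: boolean): boolean {
    const maxDepth = isCaptureChain ? 6 : 4;
    return depth >= maxDepth;
}
```

Note that isCaptureChain is and-ed with move.move.isCapture on each recursive call, so a single quiet move anywhere in the line drops it back to the normal limit.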

Since we haven’t made an actual move in the base case, the return value is just the evaluation score for the state of the board – the move is null.

Searching non-leaf nodes

If we haven’t reached our depth limit then we enter the for loop that is pretty much a straight implementation of the alpha-beta algorithm that we discussed in a previous article.  Note the following points as we iterate through the moves:

  • As explained earlier, the first thing to do is to check that the move is actually a legal move.
  • We call Board.applyMove() to get a new board that describes the state of the game after the move has been played (remember that in this new board it will now be the opponent’s turn to move).
  • Then comes the recursive call.  Note that we pass in our current best score (the “alpha” of the alpha-beta algorithm).  This call returns to us the best score that we can achieve with this move (assuming that both players always play their best possible moves).
  • If this move has the best score we’ve seen then it becomes our best-move-so-far.  We have to be careful in evaluating “best” – note the call to isBetterScore with its isWhiteToMove parameter.  For White, a more positive score is better, but for Black a more negative score is better.
  • Finally, if we’ve been given an alpha score we perform the alpha-beta short-circuit test.  If the current move’s score (the “beta” score) is better for us than the alpha score then we can bail out of the whole loop.

The most confusing thing when talking about this function (particularly the alpha-beta test) is that the meaning of words like “player”, “us”, “better” and “best” flips around at each depth!
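
To see that flipping in action, here is a toy version of the search run over a hand-built tree of scores rather than a real board.  Everything here (SearchNode, the tree shape) is invented for illustration; only the comparison and cutoff logic mirror the real function:

```typescript
type SearchNode = number | SearchNode[];  // a leaf score, or a list of children

// Same perspective-flipping comparison as in the engine:
// White prefers higher scores, Black prefers lower ones.
function isBetterScore(newScore: number, baseScore: number, isWhiteToMove: boolean): boolean {
    return isWhiteToMove ? newScore > baseScore : baseScore > newScore;
}

// Returns the best achievable score for the side to move, passing our
// best-so-far down as the child's alpha score – just like the real search.
function search(node: SearchNode, isWhiteToMove: boolean, alphaScore: number | null): number {
    if (typeof node === "number") {
        return node;  // leaf: nothing to do but report the score
    }
    let best: number | null = null;
    for (const child of node) {
        const score = search(child, !isWhiteToMove, best);
        if (best === null || isBetterScore(score, best, isWhiteToMove)) {
            best = score;
        }
        // Alpha-beta short circuit: if this line is already better for us than
        // the parent's best, the parent will never play into it.
        if (alphaScore !== null && isBetterScore(score, alphaScore, isWhiteToMove)) {
            break;
        }
    }
    return best!;  // assumes every internal node has at least one child
}
```

For the tree [[3, 5], [2, 9]] with White to move, the minimax value is max(min(3, 5), min(2, 9)) = 3, and the cutoff means the 9 leaf is never visited: once Black finds the 2 in the second branch, White’s guaranteed 3 from the first branch makes the rest of that branch irrelevant.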
