Can Monte-Carlo Tree Search learn to sacrifice?

Nathan Companez, Aldeida Aleti

    Research output: Contribution to journal › Article › Research › peer-review

    2 Citations (Scopus)


    One of the most basic activities performed by an intelligent agent is deciding what to do next. The decision is usually between selecting the move with the highest expected reward and exploring new scenarios. Monte-Carlo Tree Search (MCTS), which was developed as a game-playing agent, deals with this exploration–exploitation ‘dilemma’ using a multi-armed bandit strategy. The success of MCTS in a wide range of problems, such as combinatorial optimisation, reinforcement learning, and games, is due to its ability to rapidly evaluate problem states without requiring domain-specific knowledge. However, it has been acknowledged that the trade-off between exploration and exploitation is crucial for the performance of the algorithm, and affects the efficiency of the agent in learning deceptive states. One type of deception involves states that give immediate rewards but lead to a suboptimal solution in the long run. These states are known as trap states, and have been thoroughly investigated in previous research. In this work, we study the opposite of trap states, known as sacrifice states, which are deceptive moves that result in a local loss but are globally optimal, and investigate the efficiency of MCTS enhancements in identifying this type of move.
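    The multi-armed bandit strategy the abstract refers to is typically realised in MCTS as the UCB1 selection rule (the basis of UCT), which balances a move's average reward against an exploration bonus for rarely visited moves. The sketch below is illustrative, not the paper's implementation; the function name, tuple layout, and default exploration constant `c` are assumptions for this example.

    ```python
    import math

    def ucb1_select(parent_visits, children, c=1.41):
        """Pick the index of the child maximising the UCB1 score.

        children: list of (total_reward, visit_count) tuples.
        The score is (reward / visits) + c * sqrt(ln(parent_visits) / visits):
        the first term exploits known good moves, the second explores
        under-sampled ones. Unvisited children are selected first.
        """
        best_idx, best_score = None, float("-inf")
        for i, (reward, visits) in enumerate(children):
            if visits == 0:
                return i  # an unvisited move has an effectively infinite score
            score = reward / visits + c * math.sqrt(math.log(parent_visits) / visits)
            if score > best_score:
                best_idx, best_score = i, score
        return best_idx
    ```

    Note how this rule embodies the deception problem the paper studies: a sacrifice move accumulates a poor average reward early on, so its exploitation term stays low and the exploration bonus alone must keep it alive long enough for its long-term value to surface.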

    Original language: English
    Pages (from-to): 783-813
    Number of pages: 31
    Journal: Journal of Heuristics
    Issue number: 6
    Publication status: Published - 2016


    • Artificial intelligence
    • Games
    • Monte-Carlo Tree Search
    • Sacrifice moves
