There have been many successes (Silver et al., 2017a, b, 2016; Khalifa et al., 2016) in developing machine learning algorithms that excel at traditional zero-sum board games of perfect information, such as Chess (Silver et al., 2017a; Thrun, 1995) and Go (Silver et al., 2017b), as well as other games (Lagoudakis & Parr, 2003; Waugh & Bagnell, 2015; Conitzer & Sandholm, 2003; Pendharkar, 2012; Guo et al., 2014; Stanley et al., 2006; Finnsson & Björnsson, 2008; Chen et al., 2017). Little research attention, however, has been drawn to solving the game of Chinese Checkers with machine learning techniques. While there are known strategies for approaching Chinese Checkers, such strategies often focus only on the initial starting policy and locally optimal game-play patterns, or they may rely heavily on cooperation between the players, which is often not possible. In addition, with the goal of moving all of a player's checkers to the opponent's side, two particular aspects of the game distinguish it from other traditional games and may lead to an enormously large game-tree and state-space complexity (Allis, 1994) and to game divergence: first, checkers remain on the board indefinitely and cannot be captured; second, the possibility of repetition and backward movement of checkers means that a game can be arbitrarily long without violating the rules. In this work, we present an approach that effectively combines heuristics, Monte Carlo Tree Search (MCTS), and reinforcement learning to build a Chinese Checkers agent without the use of any human game-play data.
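To make the MCTS component concrete, the sketch below runs plain UCT (selection, expansion, random playout, backpropagation) on a deliberately tiny stand-in game rather than Chinese Checkers itself. All names here (`SubtractionGame`, `Node`, `mcts`) are our own illustrative choices, not the paper's implementation, and the toy game (remove 1 or 2 stones, last to take wins) merely exercises the same search loop an agent for a larger game would use.

```python
import math
import random

# Toy two-player zero-sum game standing in for Chinese Checkers
# (illustrative only): players alternately remove 1 or 2 stones;
# whoever takes the last stone wins.
class SubtractionGame:
    def __init__(self, stones=4, player=1):
        self.stones = stones
        self.player = player  # player to move: +1 or -1

    def legal_moves(self):
        return [m for m in (1, 2) if m <= self.stones]

    def play(self, move):
        return SubtractionGame(self.stones - move, -self.player)

    def winner(self):
        # the previous mover took the last stone and wins
        return -self.player if self.stones == 0 else None

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.value = [], 0, 0.0
        self.untried = state.legal_moves()

    def ucb_child(self, c=1.4):
        # UCB1: exploit mean value, explore rarely-visited children
        return max(self.children,
                   key=lambda ch: ch.value / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_state, iters=2000, seed=0):
    rng = random.Random(seed)
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: descend via UCB until a node with untried moves
        while not node.untried and node.children:
            node = node.ucb_child()
        # 2. Expansion: add one child for an untried move
        if node.untried:
            move = rng.choice(node.untried)
            node.untried.remove(move)
            node.children.append(Node(node.state.play(move), node, move))
            node = node.children[-1]
        # 3. Simulation: uniformly random playout to a terminal state
        state = node.state
        while state.winner() is None:
            state = state.play(rng.choice(state.legal_moves()))
        # 4. Backpropagation: credit a node when the player who moved
        #    into it (-node.state.player) ends up winning the playout
        winner = state.winner()
        while node:
            node.visits += 1
            node.value += 1.0 if winner == -node.state.player else 0.0
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(SubtractionGame(stones=4)))  # the optimal move from 4 stones is to take 1
```

In the full agent described above, the uniformly random playout would be replaced by heuristic rollouts and a learned value/policy from reinforcement learning; the surrounding search loop stays the same.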