
Bootstrapping from Game Tree Search
Joel Veness · David Silver · William Uther · Alan Blair

Wed Dec 09 07:00 PM -- 11:59 PM (PST)

In this paper we introduce a new algorithm for updating the parameters of a heuristic evaluation function, by updating the heuristic towards the values computed by an alpha-beta search. Our algorithm differs from previous approaches to learning from search, such as Samuel's checkers player and the TD-Leaf algorithm, in two key ways. First, we update all nodes in the search tree, rather than a single node. Second, we use the outcome of a deep search, instead of the outcome of a subsequent search, as the training signal for the evaluation function. We implemented our algorithm in the chess program Meep, using a linear heuristic function. After initialising its weight vector to small random values, Meep was able to learn high-quality weights from self-play alone. When tested online against human opponents, Meep played at a master level, the best performance of any chess program with a heuristic learned entirely from self-play.
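The core idea — back up minimax values through the search tree and nudge a linear heuristic toward the backed-up value at every interior node, rather than at the root alone — can be sketched as follows. This is an illustrative toy, not the paper's implementation: the tree structure, field names (`phi`, `value`, `children`), and the use of known terminal outcomes are assumptions made for the example; the real algorithm works with alpha-beta search over chess positions.

```python
import numpy as np

def treestrap_update(node, w, alpha=0.1, maximize=True):
    """One sweep of a TreeStrap-style update over a toy game tree (sketch).

    Interior nodes carry a feature vector 'phi'; terminal nodes carry a
    known outcome 'value'. The sweep backs up minimax values and applies a
    delta-rule step moving the linear heuristic w @ phi toward the backed-up
    search value at *every* interior node -- the key contrast with TD-Leaf,
    which trains toward the value of a single (root) node.
    Returns the minimax search value of `node`; mutates w in place."""
    if 'value' in node:                          # terminal: true game outcome
        return node['value']
    vals = [treestrap_update(c, w, alpha, not maximize)
            for c in node['children']]
    v_search = max(vals) if maximize else min(vals)
    phi = node['phi']
    w += alpha * (v_search - w @ phi) * phi      # move heuristic toward search value
    return v_search

# Tiny two-ply example: root (max) over two min nodes with terminal leaves.
tree = {'phi': np.array([1.0, 0.0]),
        'children': [
            {'phi': np.array([0.0, 1.0]),
             'children': [{'value': -1.0}, {'value': 0.5}]},
            {'phi': np.array([1.0, 1.0]),
             'children': [{'value': -0.5}, {'value': -2.0}]}]}
w = np.zeros(2)
for _ in range(200):
    treestrap_update(tree, w)
# w converges toward weights whose linear evaluations match the
# backed-up search values at all three interior nodes.
```

Because all interior nodes contribute training targets in each sweep, one deep search yields many weight updates, which is the sample-efficiency argument for updating the whole tree.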

Author Information

Joel Veness (DeepMind)
David Silver (DeepMind)
William Uther (NICTA)
Alan Blair
