[1910.10906] Optimistic Regret Minimization for Extensive-Form Games via Dilated Distance-Generating Functions
We also show that when optimistic regret-minimization methods are instantiated with dilated distance-generating functions, the regret updates are local to each information set in the game, mirroring the structure of the counterfactual regret minimization framework.

Abstract: We study the performance of optimistic regret-minimization algorithms for
both minimizing regret in, and computing Nash equilibria of, zero-sum
extensive-form games. In order to apply these algorithms to extensive-form
games, a distance-generating function is needed. We study the use of the
dilated entropy and dilated Euclidean distance functions. For the dilated
Euclidean distance function we prove the first explicit bounds on the
strong-convexity parameter for general treeplexes. Furthermore, we show that
the use of dilated distance-generating functions allows us to decompose the
mirror descent algorithm, and its optimistic variant, into local mirror descent
algorithms at each information set. This decomposition mirrors the structure of
the counterfactual regret minimization framework. We experimentally compare our
algorithms to the popular algorithm CFR+. CFR+ is known to often converge at a
rate of $T^{-1}$, or better, in practice. We give an example matrix game where
CFR+ experimentally converges at a relatively slow rate of $T^{-0.74}$, whereas
optimistic methods converge faster than $T^{-1}$. We go on to show that this
fast rate also holds in the Kuhn poker game, which is an extensive-form game.
For games with deeper game trees, however, we find that CFR+ is still faster.
Finally, we show that when the goal is minimizing regret, rather than computing
a Nash equilibrium, optimistic methods can outperform CFR+, even in deep game
trees.

$$
x^{t+1} = \operatorname*{argmin}_{x \in X} \, \langle m^{t+1}, x \rangle + \frac{1}{\eta} D(x \,\|\, z^{t}), \qquad
z^{t+1} = \operatorname*{argmin}_{z \in X} \, \langle \ell^{t+1}, z \rangle + \frac{1}{\eta} D(z \,\|\, z^{t})
$$
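As a minimal sketch of this update, consider the (non-dilated) entropy distance-generating function on the probability simplex, for which $D$ is the KL divergence and each argmin has the closed-form multiplicative-weights solution. Running the update in self-play on a small zero-sum matrix game, with the prediction $m^{t+1}$ set to the previous loss $\ell^{t}$, gives optimistic multiplicative weights. The payoff matrix, step size, and horizon below are illustrative choices, not taken from the paper.

```python
import math

def normalize(v):
    s = sum(v)
    return [vi / s for vi in v]

def mirror_step(z, g, eta):
    # Entropy-DGF mirror step: argmin <g, x> + (1/eta) KL(x || z)
    # has the closed form x_i proportional to z_i * exp(-eta * g_i).
    return normalize([zi * math.exp(-eta * gi) for zi, gi in zip(z, g)])

def matvec(A, y):
    return [sum(a * yj for a, yj in zip(row, y)) for row in A]

def mat_t_vec(A, x):
    return [sum(A[i][j] * x[i] for i in range(len(A))) for j in range(len(A[0]))]

def saddle_gap(A, x, y):
    # x maximizes x^T A y, y minimizes: gap = max_i (A y)_i - min_j (A^T x)_j
    return max(matvec(A, y)) - min(mat_t_vec(A, x))

def optimistic_omd(A, T=1000, eta=0.1):
    m, n = len(A), len(A[0])
    zx, zy = [1.0 / m] * m, [1.0 / n] * n   # secondary iterates z^t
    lx, ly = [0.0] * m, [0.0] * n           # previous losses, used as m^{t+1}
    x, y = zx, zy
    for _ in range(T):
        # Predicted step: x^{t+1} from z^t using the prediction m^{t+1} = l^t
        x = mirror_step(zx, lx, eta)
        y = mirror_step(zy, ly, eta)
        # Observe the actual losses l^{t+1} at the predicted iterates
        lx = [-g for g in matvec(A, y)]     # x-player maximizes, so loss = -A y
        ly = mat_t_vec(A, x)                # y-player minimizes, so loss = A^T x
        # Secondary step: z^{t+1} from z^t using the observed loss l^{t+1}
        zx = mirror_step(zx, lx, eta)
        zy = mirror_step(zy, ly, eta)
    return x, y

# Illustrative 2x2 zero-sum game with a unique interior equilibrium.
A = [[2.0, -1.0],
     [-1.0, 1.0]]
x, y = optimistic_omd(A)
print(saddle_gap(A, x, y))
```

The last iterates `x, y` are the quantities whose saddle-point gap is plotted in the paper's experiments; with the prediction set to the previous loss, the gap of the last iterate shrinks toward zero rather than cycling as plain mirror descent would.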

Figure 1. Sequential action space for the first player in the game of Kuhn poker; observation points and the ends of the decision process are marked. (Treeplexes and Sequence Form)

Figure 2. (Left and upper right) Saddle-point gap as a function of the number of iterations, showing last-iterate convergence for OOMD and OFTRL. (Lower right) Sum of cumulative regret for both players in Leduc. Optimistic OMD (OOMD) and OFTRL use step-size parameter η = 0.1 in Smallmatrix and η = 2 in Kuhn; OFTRL uses step-size parameter η = 200 in Leduc. (Experimental Evaluation)