[1910.00177] Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning

Abstract: In this paper, we aim to develop a simple and scalable reinforcement learning
algorithm that uses standard supervised learning methods as subroutines. Our
goal is an algorithm that utilizes only simple and convergent maximum
likelihood loss functions, while also being able to leverage off-policy data.
Our proposed approach, which we refer to as advantage-weighted regression
(AWR), consists of two standard supervised learning steps: one to regress onto
target values for a value function, and another to regress onto weighted target
actions for the policy. The method is simple and general, can accommodate
continuous and discrete actions, and can be implemented in just a few lines of
code on top of standard supervised learning methods. We provide a theoretical
motivation for AWR and analyze its properties when incorporating off-policy
data from experience replay. We evaluate AWR on a suite of standard OpenAI Gym
benchmark tasks, and show that it achieves competitive performance compared to
a number of well-established state-of-the-art RL algorithms. AWR is also able
to acquire more effective policies than most off-policy algorithms when
learning from purely static datasets with no additional environmental
interactions. Furthermore, we demonstrate our algorithm on challenging
continuous control tasks with highly complex simulated characters.
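The two supervised learning steps described above can be sketched as follows. This is a minimal illustration with linear function approximators and plain least squares, not the paper's implementation (which uses neural networks); the function names, the temperature parameter `beta`, and the weight clipping are illustrative assumptions.

```python
import numpy as np

def fit_value(states, returns):
    # Step 1: standard supervised regression of a value function onto
    # target values (here: least-squares fit of a linear model to returns).
    w, *_ = np.linalg.lstsq(states, returns, rcond=None)
    return w

def fit_policy(states, actions, returns, value_w, beta=1.0):
    # Step 2: supervised regression onto actions, weighted by exponentiated
    # advantages exp(A(s, a) / beta). With a Gaussian policy of fixed
    # variance, this reduces to weighted least squares on the policy mean.
    advantages = returns - states @ value_w
    weights = np.exp(np.clip(advantages / beta, -20.0, 20.0))  # clip for stability
    sw = np.sqrt(weights)[:, None]
    theta, *_ = np.linalg.lstsq(states * sw, actions * sw, rcond=None)
    return theta
```

Each iteration of the full algorithm would collect (or replay) experience, then call these two regressions in sequence, which is why AWR can sit in a few lines on top of standard supervised learning code.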

Figure 1: Complex simulated character trained using advantage-weighted regression. Left: Humanoid performing a spinkick. Right: Dog performing a canter.

Figure 2: Snapshots of AWR policies trained on OpenAI Gym tasks. Our simple algorithm learns effective policies for a diverse set of discrete and continuous control tasks.

Figure 3: Learning curves of the various algorithms when applied to OpenAI Gym tasks. Results are averaged across 5 random seeds. AWR is generally competitive with the best current methods.

Figure 4: Left: Learning curves comparing AWR with various components removed. Each component appears to contribute to improvements in performance, with the best performance achieved when all components are combined. Right: Learning curves comparing AWR with different capacity replay buffers. AWR remains stable with large replay buffers containing primarily off-policy data from previous iterations of the algorithm.

Figure 5: Snapshots of 34 DoF humanoid and 82 DoF dog trained with AWR to imitate reference motions recorded from real-world subjects. AWR is able to learn sophisticated skills for characters with large numbers of degrees of freedom.

Figure 6: Performance of various algorithms on off-policy learning tasks with static datasets. AWR is able to learn policies that are comparable to or better than the original demo policies.

Figure 7: Learning curves of the various algorithms when applied to OpenAI Gym tasks. Results are averaged over 5 random seeds. AWR is generally competitive with the best current methods.

Figure 8: Learning curves on motion imitation tasks. On these challenging tasks, AWR generally learns faster than PPO and RWR.