[1912.02875] Reinforcement Learning Upside Down: Don't Predict Rewards -- Just Map Them to Actions
Experiments in a separate paper show that even our initial pilot version of UDRL can outperform traditional RL methods on certain challenging problems.
Abstract: We transform reinforcement learning (RL) into a form of supervised learning (SL) by turning traditional RL on its head, calling this Upside Down RL (UDRL). Standard RL predicts rewards, while UDRL instead uses rewards as task-defining inputs, together with representations of time horizons and other computable functions of historic and desired future data. UDRL learns to interpret these input observations as commands, mapping them to actions (or action probabilities) through SL on past (possibly accidental) experience. UDRL generalizes to achieve high rewards or other goals through input commands such as: get lots of reward within at most so much time! A separate paper on first experiments with UDRL shows that even a pilot version can outperform traditional baseline algorithms on certain challenging RL problems. We also introduce a related simple but general approach for teaching a robot to imitate humans: first videotape humans imitating the robot's current behaviors, then let the robot learn through SL to map the videos (as input commands) to these behaviors, then let it generalize and imitate videos of humans executing previously unknown behavior. This Imitate-Imitator concept may actually explain why biological evolution has resulted in parents who imitate the babbling of their babies.
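To make the command-to-action mapping concrete, here is a minimal PyTorch sketch of the UDRL idea. It is an illustrative assumption, not the paper's exact architecture or training setup: a behavior function maps (observation, desired return, desired horizon) to action logits and is trained by plain SL on commands extracted from past episodes; the replay data here is a random stand-in for real experience.

```python
# Minimal UDRL sketch: a behavior function maps (observation, command) to
# action probabilities, where the command is (desired return, desired horizon).
# Trained with plain supervised learning on past episodes. All names, sizes,
# and the toy replay buffer are illustrative assumptions.
import random
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS = 4, 2

class BehaviorFunction(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: observation plus a 2-dim command (desired return, horizon).
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + 2, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, obs, desired_return, desired_horizon):
        cmd = torch.stack([desired_return, desired_horizon], dim=-1)
        return self.net(torch.cat([obs, cmd], dim=-1))  # action logits

def make_training_examples(episode):
    """Turn one past episode into SL examples: from each time step t,
    the command is (return actually obtained from t, remaining steps),
    and the target is the action actually taken at t."""
    examples = []
    rewards = [r for (_, _, r) in episode]
    for t, (obs, action, _) in enumerate(episode):
        ret = sum(rewards[t:])       # return actually achieved from t
        horizon = len(episode) - t   # time actually used
        examples.append((obs, float(ret), float(horizon), action))
    return examples

model = BehaviorFunction()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in replay buffer: random episodes of (observation, action, reward).
episodes = [[(torch.randn(OBS_DIM), random.randrange(N_ACTIONS), random.random())
             for _ in range(random.randint(5, 20))] for _ in range(50)]

data = [ex for ep in episodes for ex in make_training_examples(ep)]
for step in range(200):
    obs, ret, hor, act = zip(*random.sample(data, 32))
    logits = model(torch.stack(obs), torch.tensor(ret), torch.tensor(hor))
    loss = loss_fn(logits, torch.tensor(act))
    opt.zero_grad(); loss.backward(); opt.step()

# At test time, issue a command such as "get a return of 20 within 15 steps":
obs = torch.randn(1, OBS_DIM)
logits = model(obs, torch.tensor([20.0]), torch.tensor([15.0]))
action = logits.argmax(dim=-1).item()
```

Note the key inversion relative to standard RL: the return is never predicted; it appears only as an input, and the only learning signal is a supervised loss on the actions that were actually taken.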
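The Imitate-Imitator protocol can be sketched in the same spirit. The following is a hypothetical illustration of its three steps, with random stand-ins for the video and behavior data; the encoder, shapes, and losses are all assumptions, not the paper's method.

```python
# Imitate-Imitator sketch: train a network with SL to map a video (of a human
# imitating a known robot behavior) to that behavior, then feed it videos of
# previously unseen human behaviors. Everything here is an illustrative
# assumption (shapes, encoder, dataset).
import torch
import torch.nn as nn

FRAMES, FRAME_DIM, BEHAVIOR_DIM = 16, 128, 8

class ImitateImitator(nn.Module):
    def __init__(self):
        super().__init__()
        self.frame_enc = nn.Linear(FRAME_DIM, 64)
        self.gru = nn.GRU(64, 64, batch_first=True)  # summarize the video
        self.head = nn.Linear(64, BEHAVIOR_DIM)      # predict behavior params

    def forward(self, video):                        # video: (B, FRAMES, FRAME_DIM)
        h, _ = self.gru(torch.relu(self.frame_enc(video)))
        return self.head(h[:, -1])                   # map video -> behavior

model = ImitateImitator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Step 1 (data): humans were videotaped imitating known robot behaviors,
# yielding (video, behavior) pairs. Here both are random stand-ins.
videos = torch.randn(256, FRAMES, FRAME_DIM)
behaviors = torch.randn(256, BEHAVIOR_DIM)

# Step 2: plain SL maps videos (as input commands) to the imitated behaviors.
for step in range(100):
    idx = torch.randint(0, 256, (32,))
    loss = nn.functional.mse_loss(model(videos[idx]), behaviors[idx])
    opt.zero_grad(); loss.backward(); opt.step()

# Step 3: show a video of a previously unknown human behavior; the output is
# the robot behavior to execute, i.e., generalized imitation.
new_video = torch.randn(1, FRAMES, FRAME_DIM)
command = model(new_video)
```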