[1911.04448v1] Real-Time Reinforcement Learning
We predicted, and confirmed experimentally, that conventional off-policy algorithms perform worse in real-time environments. We then proposed a new actor-critic algorithm, RTAC, that not only avoids the problems conventional off-policy methods have with real-time interaction but also allows us to merge actor and critic, which yields an additional gain in performance.

Abstract
Markov Decision Processes (MDPs), the mathematical framework underlying most algorithms in Reinforcement Learning (RL), are often used in a way that wrongfully assumes that the state of an agent's environment does not change during action selection. As RL systems based on MDPs begin to find application in real-world, safety-critical situations, this mismatch between the assumptions underlying classical MDPs and the reality of real-time computation may lead to undesirable outcomes. In this paper, we introduce a new framework, in which states and actions evolve simultaneously, and show how it is related to the classical MDP formulation. We analyze existing algorithms under the new real-time formulation and show why they are suboptimal when used in real time. We then use those insights to create a new algorithm, Real-Time Actor-Critic (RTAC), that outperforms the existing state-of-the-art continuous control algorithm Soft Actor-Critic both in real-time and non-real-time settings. Code and videos can be found at github.com/rmst/rtrl.

RTMRP(E, π) = TBMRP(RTMDP(E), π)
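The equivalence above says that real-time interaction with an environment E (where the state keeps evolving while the agent computes its action) can be viewed as turn-based interaction with an augmented environment RTMDP(E), whose state also carries the action currently being executed, so that a newly selected action only takes effect one step later. A minimal sketch of this construction, with hypothetical class names (`TurnBasedEnv`, `RTMDP`) not taken from the paper's code:

```python
# Illustrative sketch of the RTMDP construction from the paper.
# The wrapper augments the state with the action in effect and
# delays newly selected actions by one step, modeling the fact
# that the environment evolves during action selection.
# Class names and the toy environment are assumptions for illustration.

class TurnBasedEnv:
    """Toy 1-D environment: the applied action shifts the state."""
    def __init__(self):
        self.s = 0.0

    def reset(self):
        self.s = 0.0
        return self.s

    def step(self, a):
        self.s += a
        reward = -abs(self.s)  # penalize distance from the origin
        return self.s, reward

class RTMDP:
    """Turn-based view of real-time interaction: RTMDP(E).

    Augmented state is (s, a_prev). The action passed to step() is
    stored and only applied to the underlying environment on the
    *next* step; the action already in flight is applied now."""
    def __init__(self, env):
        self.env = env
        self.a_prev = 0.0

    def reset(self):
        s = self.env.reset()
        self.a_prev = 0.0
        return (s, self.a_prev)

    def step(self, a):
        # Apply the previously selected action, not the new one.
        s, reward = self.env.step(self.a_prev)
        self.a_prev = a
        return (s, self.a_prev), reward

rt = RTMDP(TurnBasedEnv())
x0 = rt.reset()          # (0.0, 0.0)
x1, r1 = rt.step(1.0)    # the in-flight action 0.0 is applied: state stays 0.0
x2, r2 = rt.step(0.0)    # now the 1.0 takes effect: state moves to 1.0
```

Running any standard turn-based algorithm on this wrapper is exactly what the paper analyzes: the one-step action delay is what makes conventional off-policy methods suboptimal and motivates RTAC.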

Figure 1: TBMRP. (Background)
Figure 2: RTMRP. (Real-Time Reinforcement Learning)
Figure 3: Real-Time Actor-Critic. (Stabilizing learning)
Figure 4: Return trends for SAC in turn-based environments E and real-time environments RTMDP(E). Mean and 95% confidence interval are computed over eight training runs per environment. (SAC in Real-Time Markov Decision Processes)
Figure 5: Comparison between RTAC and SAC in RTMDP versions of the benchmark environments. Mean and 95% confidence interval are computed over eight training runs per environment. (RTAC and SAC on MuJoCo in Real-Time)
Figure 6: Left: agent's view in RaceSolo. Right: passenger view in CityPedestrians. (Autonomous Driving Task)
Figure 7: Comparison between RTAC and SAC in RTMDP versions of the autonomous driving tasks. RTAC under real-time constraints outperforms SAC even without real-time constraints. Mean and 95% confidence interval are computed over four training runs per environment. (Autonomous Driving Task)
Figure 8: SAC with and without output normalization. SAC in E (no output norm) corresponds to the canonical version presented in Haarnoja et al. (2018a). Mean and 95% confidence interval are computed over eight training runs per environment. (Additional Experiments)
Figure 9: Comparison between different actor loss scales (β). Mean and 95% confidence interval are computed over four training runs per environment. (Additional Experiments)
Figure 10: Comparison between RTAC (real-time) and SAC in E (turn-based). Mean and 95% confidence interval are computed over eight training runs per environment. (Additional Experiments)
Figure 11: RTAC with and without output normalization. Mean and 95% confidence interval are computed over eight and four training runs per environment, respectively. (Additional Experiments)