Mathematics Colloquia and Seminars

Reinforcement Learning as Saddle-Point Optimization

Mathematics of Data & Decisions

Speaker: Lihong Li, Google Inc.
Related Webpage: http://maddd.math.ucdavis.edu
Location: 1147 MSB
Start time: Mon, Oct 22 2018, 4:10PM

When function approximation is used, solving the Bellman optimality equation with stability guarantees has remained a major open problem in reinforcement learning for decades. The fundamental difficulty is that the Bellman operator may become an expansion in general, resulting in oscillating and even divergent behavior in popular algorithms such as Q-learning. In this work, we revisit the Bellman equation and reformulate it as a novel primal-dual optimization problem using Nesterov’s smoothing technique and the Legendre-Fenchel transformation. We then develop a new algorithm, called Smoothed Bellman Error Embedding (SBEED), to solve this optimization problem, in which any differentiable function class may be used. We provide what we believe to be the first convergence guarantee for general nonlinear function approximation, and we analyze the algorithm’s sample complexity. Empirically, our algorithm compares favorably to state-of-the-art baselines on several benchmark control problems. (Joint work with Bo Dai et al., presented at ICML 2018.)
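To make the abstract’s reformulation concrete, here is a rough LaTeX sketch of the construction as I understand it from the ICML 2018 paper; the smoothing weight \lambda, the dual function \nu, and the exact constants are notational assumptions of this sketch, not necessarily the speaker’s own.

% Entropy-smoothed Bellman optimality equation; unlike the hard-max update
% combined with function approximation, the smoothed operator remains a
% gamma-contraction (sketch only; notation assumed, not the speaker's).
\[
  V(s) \;=\; \max_{\pi(\cdot\mid s)}\;
    \mathbb{E}_{a\sim\pi(\cdot\mid s)}\!\Big[ R(s,a) + \gamma\,\mathbb{E}_{s'\mid s,a}\big[V(s')\big] \Big]
    \;+\; \lambda\, H\big(\pi(\cdot\mid s)\big),
  \qquad
  H(\pi) = -\textstyle\sum_{a} \pi(a)\log\pi(a),\quad \lambda > 0.
\]

% Minimizing the squared smoothed Bellman (temporal-consistency) error and applying
% the Legendre--Fenchel transform  x^2 = \max_{\nu}\,(2\nu x - \nu^2)  to the inner
% conditional expectation over s' yields the primal--dual saddle-point problem:
\[
  \min_{V,\,\pi}\ \max_{\nu}\
    \mathbb{E}_{s,a,s'}\!\Big[\, 2\,\nu(s,a)\,\big( R(s,a) + \gamma V(s') - \lambda\log\pi(a\mid s) - V(s) \big)
    \;-\; \nu(s,a)^{2} \,\Big].
\]

The point of the dual variable \nu is that the conditional expectation over s' no longer sits inside a square, so both players can be updated from single sampled transitions with any differentiable parameterization of V, \pi, and \nu.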