Theoretical Foundations of Reinforcement Learning
@ ICML 2020
July 17, 2020


In many settings, such as education, healthcare, drug design, robotics, transportation, and strategic games where better-than-human performance is sought, decisions must be made sequentially. This poses two interconnected algorithmic and statistical challenges: exploring effectively to gather information about the underlying dynamics, and planning effectively using this information. Reinforcement Learning (RL) is the main paradigm that tackles both of these challenges simultaneously, which is essential in the aforementioned applications. In recent years, reinforcement learning has seen enormous progress, both in solidifying our understanding of its theoretical underpinnings and in applying these methods in practice.

This workshop aims to highlight recent theoretical contributions, with an emphasis on addressing significant challenges on the road ahead. Such theoretical understanding is important for designing algorithms with robust and compelling performance in real-world applications. As part of the ICML 2020 conference, this workshop will be held virtually. It will feature keynote talks from six reinforcement learning experts, each tackling a different significant facet of RL. It will also offer the opportunity for contributed material (see the call for papers and our outstanding program committee below). The authors of each accepted paper will prerecord a 10-minute presentation and will also participate in a poster session. Finally, the workshop will include a panel discussing important challenges on the road ahead.



Keynote Speakers

Shipra Agrawal

Assistant Professor
Columbia University

Sham Kakade

University of Washington

Akshay Krishnamurthy

Principal Researcher
Microsoft Research NYC

Gergely Neu

Research Assistant Professor
Universitat Pompeu Fabra

Csaba Szepesvari

University of Alberta / DeepMind

Martha White

Assistant Professor
University of Alberta

Contributed Papers

We invite submissions tackling hurdles in our theoretical understanding of reinforcement learning. Relevant submissions include (but are not limited to) classical topics such as sample-efficient exploration, off-policy learning, policy gradient methods, representation learning, and transfer learning in RL. We are particularly interested in submissions that aim to broaden the range of problem settings and environments for which we have theoretical understanding:

  • constrained settings
  • human-in-the-loop
  • RL beyond classical reward-maximization
  • multi-agent systems
  • risk-sensitive RL
  • adversarial environments
  • attempts to bridge bandits and RL

Finally, we strongly encourage submissions that explore interdisciplinary connections of RL to other areas such as:

  • causality
  • game theory
  • privacy
  • fairness
  • operations research
  • competitive analysis

Papers should be 4 pages in ICML 2020 format, excluding references (an appendix of unlimited length is permitted, but reviewers are only required to read the first 4 pages). Submissions will be single-blind. Papers will be evaluated with respect to four criteria, and we encourage authors to make sure that their submissions address them clearly. First, the paper should be within the (broadly defined) scope of the workshop. Second, the paper should explicitly motivate the question it poses. Third, the paper should adequately contrast with prior work and explain the fundamental limitations preventing previous techniques from resolving that question. Finally, the paper should crisply convey the key theoretical idea that overcomes these limitations and makes progress in our understanding of the question.

Papers accepted to ICML 2020 will not be considered. However, we encourage submission of recent papers accepted at other conferences, especially those drawing interdisciplinary connections.

Program Committee

  • Dhaval Adjodah (MIT)
  • Alon Cohen (Google Research)
  • Sarah Dean (UC Berkeley)
  • Yaqi Duan (Princeton University)
  • Chris Dann (Google Research)
  • Dylan Foster (MIT)
  • Botao Hao (Purdue University)
  • Chi Jin (Princeton University)
  • Alec Koppel (U.S. Army Research Laboratory)
  • Tor Lattimore (DeepMind)
  • Christina Lee Yu (Cornell University)
  • Bo Liu (Auburn)
  • Horia Mania (UC Berkeley)
  • Aditya Modi (U of Michigan at Ann Arbor)
  • Tong Mu (Stanford University)
  • Vidya Muthukumar (UC Berkeley)
  • Aldo Pacchiano (UC Berkeley)
  • Ciara Pike-Burke (Universitat Pompeu Fabra)
  • Tuhin Sarkar (MIT)
  • Karan Singh (Princeton University)
  • Adith Swaminathan (Microsoft Research Redmond)
  • Yi Su (Cornell University)
  • Masatoshi Uehara (Harvard University)
  • Ruosong Wang (CMU)
  • Qiaomin Xie (Cornell University)
  • Renyuan Xu (Oxford University)
  • Lin Yang (UCLA)
  • Zhuoran Yang (Princeton University)
  • Tiancheng Yu (MIT)
  • Andrea Zanette (Stanford University)
  • Angela Zhou (Cornell University)
  • Zhengyuan Zhou (NYU)

Important Dates


Paper Submission Deadline: June 10, 2020, 11:59 PM UTC (OpenReview)

Author Notification: June 25, 2020, 11:59 PM PDT

Final Version: July 10, 2020, 11:59 PM PDT

Workshop: July 17, 2020 (Time: TBD)

Workshop Organizers

Emma Brunskill

Stanford University

Thodoris Lykouris

Microsoft Research NYC

Max Simchowitz

UC Berkeley

Wen Sun

Cornell University / Microsoft Research NYC

Mengdi Wang

Princeton University

We thank Hoang M. Le for providing the website template.