Connectionists: Sergey Levine speaking on July 28 in Developing Minds global online lecture series

Jochen Triesch triesch at fias.uni-frankfurt.de
Tue Jul 26 09:01:49 EDT 2022


Dear colleagues,

On July 28, the Developing Minds global online lecture series will feature Sergey Levine, UC Berkeley, USA:
"From Reinforcement Learning to Embodied Learning“

https://sites.google.com/view/developing-minds-series/home

The live event will take place via Zoom at:
09:00 PDT (Pacific Daylight Time)
16:00 UTC (Coordinated Universal Time)
18:00 CEST (Central European Summer Time)
01:00 JST, July 29 (Japan Standard Time)

To participate, please register here:
https://sites.google.com/view/developing-minds-series/home

Abstract
Reinforcement learning provides one of the most widely studied abstractions for learning-based control. However, while the RL formalism is elegant and concise, real-world embodied learning problems (e.g., in robotics) deviate substantially from the most widely studied RL problem settings. The need to exist in a real physical environment challenges RL methods in terms of generalization, robustness, and capacity for lifelong learning -- all aspects of the RL problem that are often neglected in commonly studied benchmark problems. In this talk, I will discuss how we can devise a framework for learning-based control that is at its core focused on generalization, robustness, and continual adaptation. I will argue that effective utilization of previously collected experience, in combination with multi-task learning, represents one of the most promising paths for tackling these challenges, and present recent research in RL and robotics that studies this perspective.

Bio
Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.

Web: https://people.eecs.berkeley.edu/~svlevine/ 

The talk will also be recorded and the recording made available via the web page:
https://sites.google.com/view/developing-minds-series/home

Stay healthy,
Jochen Triesch

--
Prof. Dr. Jochen Triesch
Johanna Quandt Chair for Theoretical Life Sciences
Frankfurt Institute for Advanced Studies and
Goethe University Frankfurt
http://fias.uni-frankfurt.de/~triesch/
Tel: +49 (0)69 798-47531
Fax: +49 (0)69 798-47611
