Connectionists: NIPS workshop on “Continual Learning and Deep Networks” [apologies for cross-posting]

Razvan Pascanu r.pascanu at gmail.com
Thu Sep 22 09:37:58 EDT 2016


TL;DR: We invite you to our workshop on Continual Learning and Deep
Networks, at this year’s NIPS. Submission deadline for 4-page abstracts is
October 21. Submission page:
https://easychair.org/conferences/?conf=cldl2016

---------------------

Description:

Humans have the extraordinary ability to learn continually from experience.
Not only can we apply previously learned knowledge and skills to new
situations, we can also use these as the foundation for later learning. One
of the grand goals of AI is building an artificial "continual learning"
agent that constructs a sophisticated understanding of the world from its
own experience, through the autonomous incremental development of ever more
complex skills and knowledge.

Hallmarks of continual learning include: interactive, incremental, online
learning (learning occurs at every moment, with no fixed tasks or data
sets); hierarchy or compositionality (previous learning can become the
foundation for later learning); "isolaminar" construction (the same
algorithm is used at all stages of learning); resistance to catastrophic
forgetting (new learning does not destroy old learning); and unlimited
temporal abstraction (both knowledge and skills may refer to or span
arbitrary periods of time).
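
As a purely illustrative aside (not part of the announcement), the
catastrophic-forgetting hallmark above can be seen in a few lines of
PyTorch: a single network trained sequentially on two toy tasks typically
loses accuracy on the first. The tasks, architecture, and hyperparameters
below are illustrative assumptions, not anyone's proposed method.

# Minimal sketch of catastrophic forgetting under sequential training.
# All choices here (toy tasks, MLP size, optimizer settings) are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(dim):
    # Toy binary task: the label is the sign of one input coordinate
    # (coordinate 0 for task A, coordinate 1 for task B).
    x = torch.randn(2000, 2)
    y = (x[:, dim] > 0).long()
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

xa, ya = make_task(0)   # task A
xb, yb = make_task(1)   # task B

train(model, xa, ya)
print("task A accuracy after training on A:", accuracy(model, xa, ya))

# Continue training the same weights on task B data only; accuracy on
# task A usually drops toward chance, i.e. the old task is "forgotten".
train(model, xb, yb)
print("task A accuracy after training on B:", accuracy(model, xa, ya))
print("task B accuracy after training on B:", accuracy(model, xb, yb))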

Continual learning is an unsolved problem which presents particular
difficulties for the deep-architecture approach that is currently the
favoured workhorse for many applications. Some strides have been made
recently, and many diverse research groups have continual learning on their
road map. Hence we believe this is an opportune moment for a workshop
focused on this theme. The goals are to define the different facets
of the continual-learning problem, to tease out the relationships between
different relevant fields (such as reinforcement learning, deep learning,
lifelong learning, transfer learning, developmental learning, computational
neuroscience, etc.) and to propose and explore promising new research
directions.

Confirmed speakers:

   - Claudia Clopath (Imperial College London)
   - Eric Eaton (University of Pennsylvania)
   - Raia Hadsell (Google DeepMind)
   - Honglak Lee (University of Michigan)
   - Joelle Pineau (McGill University)
   - Satinder Singh Baveja (Cogitai and University of Michigan)
   - Alexander Stoytchev (University of Iowa)
   - Richard Sutton (University of Alberta)

Dates:

   - Submission deadline: Friday October 21
   - Workshop: Saturday December 10

Submission format: 4-page extended abstracts, which may include previously
published work. Submit at https://easychair.org/conferences/?conf=cldl2016.

Travel grants are available thanks to our sponsors: DeepMind, Cogitai, and
Sony!

More details at the website: https://sites.google.com/site/cldlnips2016/

We look forward to seeing you in December!

Razvan Pascanu, Mark Ring and Tom Schaul.