Connectionists: FW: Lifelong Machine Learning please
Danny Silver
danny.silver at acadiau.ca
Wed Apr 19 13:31:49 EDT 2017
Not sure this made it to the list the first time.
.. Danny
On 2017-04-17, 6:08 PM, "Danny Silver" <danny.silver at acadiau.ca> wrote:
Thanks Juergen .. There are actually two problems beyond transfer learning that make LML really interesting.
(1) As you have mentioned, there is the issue of how an LML agent improves its search algorithm over time, a meta-level learning problem. Some would say this is resolved simply by selecting the appropriate hyper-bias as per Bayesian inference. Interest in this problem goes back to Ryszard Michalski and work by Tom Mitchell and others. And there remains great value in its pursuit.
(2) And there is the issue of how an LML agent consolidates the knowledge that it has learned over time, either example after example as in continual learning, or task after task as in learning to learn (from my perspective these are the same problem for an agent in a fixed environment with fixed inputs).
The second problem has been examined largely from the perspective of inductive transfer learning; i.e., how does one retain prior knowledge in a form that can be used to selectively bias future learning. However, in consideration of an agent that reasons with what it learns, the correct solution to the second problem has significant new possibilities. Most importantly, it would provide at least part of the solution to the background knowledge problem that has plagued AI. Clearly your work on deep learning and that of others will play a key role in this, as it seems to hold the key to the learning of reusable internal representations. A significant challenge here is how to overcome the stability-plasticity problem within these types of networks as the agent encounters new examples from its environment. My sense is that nature figured out a rehearsal mechanism during REM sleep, as per James McClelland and Bruce McNaughton - http://cseweb.ucsd.edu/~gary/258/jay.pdf.
For a solution we have been working on, see http://dblp.uni-trier.de/rec/html/conf/ai/SilverME15 or https://www.researchgate.net/publication/277871940_Consolidation_using_Sweep_Task_Rehearsal_Overcoming_the_Stability-Plasticity_Problem
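To make the rehearsal idea concrete, here is a minimal sketch in PyTorch (purely illustrative - the network, data, and function names are hypothetical, and this is not the implementation from the paper above). Pseudo-examples capture the current network's input-output behaviour, and are then interleaved with the new task's examples so the net stays plastic for the new task yet stable on prior knowledge:

import torch
import torch.nn as nn

def make_pseudo_examples(net, n_items, in_dim):
    # Probe the trained net with random inputs and record its outputs;
    # these input-output pairs approximate the knowledge to be preserved.
    with torch.no_grad():
        x = torch.rand(n_items, in_dim)
        y = net(x)
    return x, y

def consolidate(net, new_x, new_y, n_pseudo=256, epochs=200, lr=0.01):
    # Train on new-task examples interleaved with pseudo-rehearsal examples.
    px, py = make_pseudo_examples(net, n_pseudo, new_x.shape[1])
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        # Plasticity term (fit the new task) + stability term (keep old behaviour).
        loss = loss_fn(net(new_x), new_y) + loss_fn(net(px), py)
        loss.backward()
        opt.step()
    return net

# Usage: train net on task A, then call consolidate(net, xB, yB) for task B.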
.. Danny
++++
On 2017-04-17, 4:06 PM, "Juergen Schmidhuber" <juergen at idsia.ch> wrote:
Dear all,
indeed, what is in a name? Since my favorite topic “learning to learn” got injected into this thread, I can’t resist the temptation to react.
Most chapters in the mentioned book edited by Thrun & Pratt (1997) use “learning to learn” in the quite limited sense of “transfer learning” from one data set to the next, e.g., through standard backprop.
However, according to my 1987 diploma thesis and numerous follow-up papers (1992, 1993, 1994, 1995, 1996, 1997, 2003, 2004, ...), "learning to learn” or meta-learning in ML is really about inspecting & modifying & learning & improving the learning algorithm itself, where the search space is essentially the set of all possible (learning) algorithms, and where one has to solve the meta-credit assignment problem of recursive self-improvement: which early self-modifications of the lifelong learner’s learning algorithm set the stage for later self-modifications etc ...
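A toy illustration of the difference (just a sketch in PyTorch, not the recursive self-improvement framework above - here only a single parameter of the learning rule, its step size, is meta-learned by hypergradient descent, and all names are hypothetical): the inner learner updates weights w, while an outer step assigns credit to the learning algorithm itself.

import torch

target = torch.tensor([1.0, -2.0, 3.0])  # toy inner problem: fit w to target

def inner_loss(w):
    return ((w - target) ** 2).sum()

w = torch.zeros(3, requires_grad=True)
log_lr = torch.tensor(-3.0, requires_grad=True)   # meta-parameter: the learner's own step size
meta_opt = torch.optim.SGD([log_lr], lr=0.01)

for _ in range(100):
    g = torch.autograd.grad(inner_loss(w), w, create_graph=True)[0]
    w_next = w - log_lr.exp() * g       # one step of the (parameterized) learning algorithm
    meta_loss = inner_loss(w_next)      # how good is the algorithm after that step?
    meta_opt.zero_grad()
    meta_loss.backward()                # credit assignment into the learning rule itself
    meta_opt.step()
    w = w_next.detach().requires_grad_(True)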
Overview pages with papers on “learning to learn” since 1987:
http://people.idsia.ch/~juergen/metalearner.html
http://people.idsia.ch/~juergen/oops.html
http://people.idsia.ch/~juergen/goedelmachine.html
Slides from the overview talk at the NIPS 2016 MAIN workshop:
http://people.idsia.ch/~juergen/rsi2016white.pdf
Cheers,
Jürgen
Jürgen Schmidhuber
Scientific Director, Swiss AI Lab IDSIA
Professor of AI, USI & SUPSI, Switzerland
President, NNAISENSE
http://www.idsia.ch/~juergen/
> On 17 Apr 2017, at 16:51, Danny Silver <danny.silver at acadiau.ca> wrote:
>
> Dear Hava and others …
>
> What is in a name?
> Lifelong Learning Machines <= Lifelong Machine Learning <= Machine Lifelong Learning <= Learning to Learn
>
> All of the above are concerned with the persistent and cumulative nature of learning with machines. They are based on the hypothesis that more efficient (shorter training times, fewer training examples) and more effective (more accurate hypotheses) learning relies on an appropriate inductive bias, one source being prior knowledge from related tasks (or examples from the same task domain). They should also be concerned with the consolidation of knowledge acquired through learning to support inductive bias, which forces one to look at the representation of learned knowledge.
>
> I have been studying Lifelong Machine Learning since 1993. The field has gone from having no name to several vintages. This is an appeal for the community to stay with the title “Lifelong Machine Learning” unless there is some need to distinguish “Lifelong Learning Machines” as a separate discipline.
>
> In 1995, Rich Caruana and I organized the first NIPS workshop on “Learning to Learn: Knowledge Consolidation
> and Transfer in Inductive Systems". See http://plato.acadiau.ca/courses/comp/dsilver/NIPS95_LTL/transfer.workshop.1995.html
> This workshop produced a seminal book edited by Sebastian Thrun and Lorien Pratt that solidified the title “Learning to Learn” or L2L. See http://robots.stanford.edu/papers/thrun.book3.html
>
> Over the next decade, several others and I started to use the term “Machine Lifelong Learning” or ML3.
> Our lab created an ML3 contributor website that has fallen behind over the years (http://ml3.acadiau.ca/), being replaced by material on our current lab website http://mlrl.acadiau.ca/ and by ResearchGate websites such as https://www.researchgate.net/profile/Daniel_Silver
>
> The L2L and ML3 titles lasted well into the first decade of the 2000s and were used at the second NIPS workshop on the subject, “Inductive Transfer: 10 Years Later”. See http://socrates.acadiau.ca/courses/comp/dsilver/Share/2005Conf/NIPS2005_ITWS/Website/index.htm
>
> Along the way, Mark Ring has distinguished “Continual Learning” in the Reinforcement Learning paradigm as a process of learning ever more complicated skills by building on those skills already developed. See
> https://www.cs.utexas.edu/~ring/Diss/index.html and his new company http://www.cogitai.com/
>
> Around 2010, Eric Eaton and others started to use the term “Lifelong Machine Learning” or LML, which many people have come to like. Please see Eric’s webpage for some of the work he has been involved in: https://www.seas.upenn.edu/~eeaton/research.html
>
> So given that we have the well-used term “Lifelong Machine Learning” and that the name has changed a few times already, I really do not cherish the community moving toward yet another permutation of the three words “Lifelong”, “Machine”, and “Learning”, unless it is really a different research area … In which case, I would ask that we use a significantly different moniker. I make my case for sticking with the title “Lifelong Machine Learning” with the list of its uses shown below my signature.
>
> Note that a new research theme is emerging that brings together machine learning and knowledge representation to solve one of the big AI problems: how to learn background knowledge so that it can be used for reasoning. The new title is “Lifelong Machine Learning and Reasoning”.
> Recently, I created a ResearchGate project which is gaining followers:
> https://www.researchgate.net/project/Lifelong-Machine-Learning-and-Reasoning
>
> .. Danny
>
> ==========================
> Daniel L. Silver
> Professor and Acting Director, Jodrey School of Computer Science
> Director, Acadia Institute for Data Analytics
> Acadia University,
> Office 314, Carnegie Hall,
> Wolfville, Nova Scotia Canada B4P 2R6
>
> t. (902) 585-1413
> f. (902) 585-1067
>
> acadiau.ca
>
>
> In recent years, a wide variety of websites have come to use “Lifelong Machine Learning”, including those related to:
>
> Books:
> https://www.cs.uic.edu/~liub/lifelong-machine-learning.html
>
> Papers:
> Lifelong machine learning: a paradigm for continuous learning
> http://www.aaai.org/ocs/index.php/SSS/SSS13/paper/viewFile/5802/5977
> http://dl.acm.org/citation.cfm?id=2433459
> https://pdfs.semanticscholar.org/fb24/b6917eb42ccbf354371ee9565a3014b51e7c.pdf
> https://cs.byu.edu/colloquium/sentiment-analysis-and-lifelong-machine-learning
> https://scholar.google.com/citations?user=Z_vWXgsAAAAJ&hl=en
>
> Popular Press Articles:
> https://www.weforum.org/agenda/2017/01/lifelong-machine-learning/
> http://www.rollproject.org/lifelong-machine-learning-systems-optimisation/
>
> Videos:
> https://www.youtube.com/watch?v=wc2xn4g1-uU
>
> Courses:
> https://www.cs.uic.edu/~liub/lifelong-learning.html
>
> Tutorials and Workshops:
> https://www.cs.uic.edu/~liub/IJCAI15-tutorial.html
> https://www.seas.upenn.edu/~eeaton/AAAI-SSS13-LML/
> http://repository.ust.hk/ir/Record/1783.1-73755
> https://bigdata.cs.dal.ca/news/2014-06-09-000000/seminar-lifelong-machine-learning-and-reasoning
>
> Research Websites:
> http://mlrl.acadiau.ca/
> https://www.cs.uic.edu/~liub/lifelong-learning.html
> https://www.seas.upenn.edu/~eeaton/research.html
> https://jaimefernandezdcu.wordpress.com/2016/10/24/lml/
>
> ++++++++ +++++++++ ++++++++++
>
> From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of Hava Siegelmann <hava.siegelmann at gmail.com>
> Date: Sunday, April 16, 2017 at 1:57 PM
> To: "connectionists at mailman.srv.cs.cmu.edu" <connectionists at mailman.srv.cs.cmu.edu>
> Subject: Re: Connectionists: Lifelong Learning Machines - Call for Grants
>
> Dear Connectionists, it was a typo: the Lifelong Learning Machines call is NOW AVAILABLE - please read, get together, and apply.
> We have the chance to start a new chapter of AI.
>
> Hava
>
>
> On Fri, Apr 14, 2017 at 5:41 PM, Hava Siegelmann <hava.siegelmann at gmail.com> wrote:
>> Dear Friends
>>
>> Lifelong Learning Machines (L2M) call for proposals (or, in DARPA lingo, a BAA (Broad Agency Announcement)) is not available online from the DARPA portal
>>
>> Note that there is also a link for teaming to enable creating small groups if you are looking for collaborators.
>>
>> Note that DARPA programs are once-in-a-lifetime opportunities, rather than NSF/NIH programs with recurring calls. So start reading and prepare your applications on time.
>>
>>
>> All the best and much luck -
>>
>> Hava Siegelmann
>>
>>
>