From ranzato at cs.toronto.edu Mon Feb 3 01:24:30 2014 From: ranzato at cs.toronto.edu (Marc'Aurelio Ranzato) Date: Mon, 3 Feb 2014 01:24:30 -0500 (EST) Subject: Connectionists: reminder IJCV: special issue on Deep Learning In-Reply-To: References: Message-ID: Dear Colleague, this is a reminder that the deadline for submitting papers and long abstracts to the IJCV special issue on Deep Learning is approaching soon: Sunday February 9th, 2014. You can find the call for papers at: http://www.cs.toronto.edu/~ranzato/publications/cfp_ijcv_si_deeplearning.pdf Best regards, Marc'Aurelio Ranzato Geoffrey Hinton Yann LeCUn From alessandro.torcini at cnr.it Sat Feb 1 12:18:43 2014 From: alessandro.torcini at cnr.it (Alessandro Torcini) Date: Sat, 1 Feb 2014 18:18:43 +0100 Subject: Connectionists: Fully funded Postdoc position (Experienced Researcher) in Computational Neuroscience Message-ID: Fully funded Marie Curie Postdoc position (Experienced Researcher) in Computational Neuroscience New approaches for the analysis of neuronal spike trains within the Marie Curie Initial Training Network - 'Neural Engineering Transformative Technologies' (NETT) at the Institute for Complex Systems (ISC), CNR, Florence, Italy. Gross Salary per year: 64,701 EURO (Living Allowance) plus 9,290 - 13,272 EURO (Mobility Allowance) depending on the family situation Required title: PhD in Physics, Engineering, Applied Mathematics, or Computational Neuroscience Applications: There will be some specific application procedure which we will announce as soon as possible. Closing date for the position: May 15, 2014 Applications are invited for the above post to work with Dr. Thomas Kreuz [http://wwwold.fi.isc.cnr.it/users/thomas.kreuz/] and Dr. Alessandro Torcini [http://wwwold.fi.isc.cnr.it/users/alessandro.torcini/] in the Computational Neuroscience group [http://neuro.fi.isc.cnr.it/] at ISC, Florence. This world leading group combines theoretical investigations (e.g., on non-trivial collective phenomena in neuronal populations) with practical applications (such as spike train analysis). The group is one of the main participants in the Center for the Study of Complex Dynamics (CSDC) created with the purpose of coordinating interdisciplinary training and research activities. CSDC researchers include physicists, control engineers, mathematicians, biologists and psychologists. This full-time post will begin on the 1st of September 2014 (at the latest, no possibility of further delay) and will be offered on a fixed-term contract for a period of 24 months. The activity will include a six-month stay with NETT partner Prof. Bert Kappen [http://www.snn.ru.nl/~bertk/] at the Radboud University Nijmegen, Netherlands. The Florence part of the project will mainly consist of two parts: Development of new approaches for (multivariate) data analysis and analysis of electrophysiological data (in particular neuronal spike trains). During the six-month stay in the Netherlands the project will be devoted to the applicability of methods derived from latent state models, sequential Monte Carlo sampling, and path integral control to analyze neural time-series data, such as spike trains. The candidate should have a strong background in at least one of the following fields: computational neuroscience, data analysis, and nonlinear dynamics as well as solid experience in scientific programming (e.g. in Matlab, C, Python, Fortran). 
Candidates must be in the first 5 years of their research careers (starting from the date of the MSc title) and already be in possession of a doctoral degree in physics, engineering, applied mathematics, or computational neuroscience. As part of our commitment to promoting diversity we encourage applications from women. To comply with the Marie Curie Actions rule for mobility applicants must not have resided, worked or studied in Italy for more than 12 months in the 3 years prior to May 2014. At a later stage a formal application will be required. Guidelines will be given as soon as possible on this webpage [http://neuro.fi.isc.cnr.it/index.php?page=marie-curie-itn-postdoc]. As of now informal inquiries should be addressed to Dr. Thomas Kreuz (thomas.kreuz at cnr.it) and/or Dr. Alessandro Torcini (alessandro.torcini at cnr.it). -- --------------------------------------------------------------------------------------- Alessandro Torcini - Istituto dei Sistemi Complessi - CNR via Madonna del Piano, 10 --- I-50019 Sesto Fiorentino Tel:+39-055-522-6670 Fax:+39-055-522-6683 SKyPE: torcini http://www.fi.isc.cnr.it/users/alessandro.torcini ----------------------------------------------------------------------------------------- From aburkitt at unimelb.edu.au Mon Feb 3 07:11:35 2014 From: aburkitt at unimelb.edu.au (Anthony Burkitt) Date: Mon, 3 Feb 2014 12:11:35 +0000 Subject: Connectionists: Two open postdoc positions in Neuro-engineering / Computational Neuroscience at The University of Melbourne Message-ID: Two postdoc positions in neuro-engineering / computational neuroscience are available at the University of Melbourne. The details are below and the closing date for both positions is 24th February. ---------- Position 1: Computational neural modelling of synaptic plasticity associated with reward reinforcement Link to position: http://go.unimelb.edu.au/up2n RESEARCH FELLOW Position no.: 0032727 Employment type: Full-time Fixed Term Campus: Parkville Department of Electrical and Electronic Engineering Melbourne School of Engineering The University of Melbourne Australia Salary: Level A $61,138* - $82,963 p.a. (*PhD entry Level A.6 $77,290 p.a.) or Level B $87,334 - $103,705 p.a., plus 9.25% superannuation. The level of appointment is subject to the appointee's research record, qualifications and experience. We are seeking a talented and dedicated candidate to join the Melbourne School of Engineering, one of Australia's leading engineering schools, as a postdoctoral research fellow. This position is part of a newly funded ARC Discovery project to develop models of synaptic plasticity in the brain through mathematical analysis and computational simulation. In this project, models of synaptic plasticity associated with reward reinforcement will be used to develop "plasticity targeted" techniques for improved brain-machine interfaces (also called brain-computer interfaces). To be successful in this position, you will have a PhD in a discipline relevant to neural engineering or computational neuroscience (including physics and mathematics). You will have a solid skill set in mathematical and computational modelling and algorithm development, preferably applied to neuroscience, and a strong publication track record. Excellent written and oral communication skills and the ability to work both independently and as part of a team are essential. This position will be based in the Neuroengineering Laboratory in the Department of Electrical and Electronic Engineering at the University of Melbourne. 
There is the opportunity for collaboration with a wide range of biomedical engineers, electrophysiologists, and clinicians who are associated with the research programs in the Neuroengineering Laboratory. You must be able to demonstrate clearly an ability to perform independent research. Excellent written and verbal communication skills are essential. Research expertise in the area of neural modelling will be advantageous but not essential. The position is open to both national and international applicants. The position will commence in 2014 for a period of up to 3 years. Close date: 24 February 2014 ---------- Position 2: Neural modelling of advanced retinal stimulation methods Link to position: http://go.unimelb.edu.au/op2n RESEARCH FELLOW Position no.: 0032823 Employment type: Full-time Fixed Term Campus: Parkville Department of Electrical and Electronic Engineering Melbourne School of Engineering The University of Melbourne Australia We are seeking a talented researcher to join the Melbourne School of Engineering, one of Australia's leading engineering schools, as a postdoctoral research fellow. This position is part of a newly funded ARC Discovery project to develop advanced stimulation methods for a retinal implant to restore a sense of vision to people with degenerative or inherited retinal disease. The project will build on substantial previous research undertaken by Bionic Vision Australia (BVA), which is a partnership of world-leading Australian researchers collaborating to develop an advanced retinal implant. To be successful in this position, you will have a PhD in a discipline relevant to neural engineering or computational neuroscience. You will have a solid skill set in mathematical and computational modelling and algorithm development, preferably applied to neuroscience, and a strong publication track record. Excellent written and oral communication skills and the ability to work both independently and as part of a team are essential. This position will be based in the Neuroengineering Laboratory in the Department of Electrical and Electronic Engineering at the University of Melbourne, working with a team of neural engineers and retinal electrophysiologists to develop algorithms for improving the spatial resolution offered by retinal implants. You must be able to demonstrate clearly an ability to perform independent research. Excellent written and verbal communication skills are essential. Research expertise in the area of neural modelling will be advantageous but not essential. The position is open to both national and international applicants. The position will commence in 2014 for a period of up to 3 years. Close date: 24 February 2014 Professor Anthony N. Burkitt Chair of Bio-Signals and Bio-Systems NeuroEngineering Laboratory Department of Electrical and Electronic Engineering Centre for Neural Engineering, Building 261, 203 Bouverie St Melbourne School of Engineering The University of Melbourne, Victoria 3010 Australia T: +61 3 9035 3552 ? M: +61 4 22 960 880 ? F: +61 3 9035 3002 ? E: aburkitt at unimelb.edu.au W: www.eng.unimelb.edu.au ? www.neuroeng.unimelb.edu.au From fpc at amazon.com Mon Feb 3 12:59:18 2014 From: fpc at amazon.com (Perez-Cruz, Fernando) Date: Mon, 3 Feb 2014 17:59:18 +0000 Subject: Connectionists: ML Research Positions at Amazon in NYC Message-ID: Hi to all, We are expanding the machine learning research team at Amazon in NYC for the profit system group. 
We are looking for someone who can rigorously use state-of-the-art supervised and unsupervised methods and who is deeply familiar with discriminative and generative modeling. The successful candidate will analyze large data sets to identify attributes that directly influence Amazon's profit; design models that capture drivers of realized profit; and formulate proposals that drive profit maximization. If you are a subject matter expert in machine learning who enjoys coding production-strength algorithms to mine data, uncover patterns, identify significant variables, and build predictive models over large-scale data sets, then we would like to chat with you! You can find more information about these posts at: http://www.amazon.com/gp/jobs/ref=j_sq_btn?jobSearchKeywords=&category=Machine+Learning+Science&location=US%2C+NY%2C+New+York&x=8&y=7 Please do not hesitate to contact me. Fernando -------------- next part -------------- An HTML attachment was scrubbed... URL: From tmh at eecs.qmul.ac.uk Mon Feb 3 13:46:11 2014 From: tmh at eecs.qmul.ac.uk (Timothy Hospedales) Date: Mon, 3 Feb 2014 18:46:11 +0000 Subject: Connectionists: PhD position: Intelligent Sensing at QMUL Message-ID: <421A9143-4AD6-41EF-87D3-AD596E028117@eecs.qmul.ac.uk> Fully funded PhD studentship in Resource Constrained Intelligent Sensing School of Electronic Engineering & Computer Science, Queen Mary University of London Conventional approaches to machine perception apply supervised machine learning techniques to predict semantic quantities of interest from a fixed set of low-level features. This project will address the meta-challenges of learning which features to extract, and which learning algorithms to apply, on a dynamic case-by-case basis to do the best possible job within a constrained amount of computing resource. Applications include real-time multimedia sensing and "big data" analysis. Nationality: Open to all. Deadline: Thu 20 Feb 2014. Interviews: Expected Wed 26 Feb 2014. Start: Sep 2014. For queries contact: Dr. Timothy Hospedales, t.hospedales at qmul.ac.uk More details: http://www.eecs.qmul.ac.uk/~tmh/job_CIS2014.pdf To Apply: http://www.qmul.ac.uk/postgraduate/pgrcoursefinder/computer-science/index.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From irodero at cac.rutgers.edu Mon Feb 3 16:11:50 2014 From: irodero at cac.rutgers.edu (Ivan Rodero) Date: Mon, 3 Feb 2014 16:11:50 -0500 Subject: Connectionists: TAAS - Call for Papers Message-ID: <88166D49-78BF-4351-A4C1-77894DE2039D@cac.rutgers.edu> -------------------------------------------------------------------------------------------------- Please accept our apologies if you receive multiple copies of this CFP! -------------------------------------------------------------------------------------------------- Call for Papers ACM Transactions on Autonomous and Adaptive Systems (TAAS) (http://taas.acm.org) Aim and Scope: The ACM Transactions on Autonomous and Adaptive Systems (TAAS) is a venue for high-quality research contributions addressing foundational, engineering, and technological aspects related to all those complex ICT systems that have to serve, in autonomy and with capabilities of autonomous adaptation, in highly dynamic socio-technico-physical environments. TAAS addresses research on autonomous and adaptive systems being undertaken by an increasingly interdisciplinary research community -- and provides a common platform under which this work can be published and disseminated.
TAAS encourages contributions aimed at supporting the understanding, development, and control of such systems and of their behaviors. Contributions are expected to be based on sound and innovative theoretical models, algorithms, engineering and programming techniques, infrastructures and systems, or technological and application experiences. Call for Papers: TAAS invites authors to submit original and unpublished articles that are written in English and on topics that are within the scope of the journal. Paper should have an introductory part that is comprehensible by a non-expert, and should reference up-to-date related literature. Papers can be up to 10000 words (20 printed pages) in length ? papers exceeding this limit will have to either be shortened, or have to move some of the material to an appendix that will only be published online. Additional information can be found at http://taas.acm.org/authors.html. Expected Turnaround Times: Currently, the average review turnaround time (the time from article submission to first notification) is approximately 2 months and the time from acceptance to publication is now typically less than 6 months for papers requiring revision. Additional statistics can be found at http://dl.acm.org/pub.cfm?id=J1010. Editorial Board: Editors-in-Chief Manish Parashar, Rutgers University, USA Franco Zambonelli, University of Modena e Reggio Emilia, Italy Associated Editors Tarek Abdelzaher, University of Illinois at Urbana Champaign, USA Ozalp Babaoglu, University of Bologna, Italy Luciano Baresi, Politecnico di Milano, Italy Jake Beal, BBN Technologies and MIT, USA Simon Dobson, St Andrews University, UK Marco Dorigo, Universite Libre de Bruxelles, Belgium Indy Gupta, University of Illinois, Urbana-Champaign, USA Salima Hassas, University of Lyon, France Anthony Karageorgos, University of Thessaly, Greece Michael Luck, University of Southampton, UK Julie Mc Cann, Imperial College London, UK Andrea Omicini, University of Bologna, Italy Jeremy Pitt, Imperial College London, UK Omer F. Rana, University of Cardiff, UK Onn Shehory, IBM Haifa Research Lab and Bar Ilan University, Israel Roy Sterritt, University of Ulster, UK H. Van Dyke Parunak, Jacobs Technology Inc., USA Dongyan Xu, Purdue University, USA ============================================================= Ivan Rodero, Ph.D. Rutgers Discovery Informatics Institute (RDI2) NSF Center for Cloud and Autonomic Computing (CAC) Department of Electrical and Computer Engineering Rutgers, The State University of New Jersey Office: CoRE Bldg, Rm 625 94 Brett Road, Piscataway, NJ 08854-8058 Phone: (732) 993-8837 Fax: (732) 445-0593 Email: irodero at rutgers dot edu WWW: http://nsfcac.rutgers.edu/people/irodero ============================================================= -------------- next part -------------- An HTML attachment was scrubbed... URL: From Muhammad.Iqbal at ecs.vuw.ac.nz Mon Feb 3 16:51:10 2014 From: Muhammad.Iqbal at ecs.vuw.ac.nz (Muhammad.Iqbal at ecs.vuw.ac.nz) Date: Tue, 4 Feb 2014 10:51:10 +1300 Subject: Connectionists: Call for Papers - IWLCS 2014 Message-ID: Dear colleague, The Seventeenth International Workshop on Learning Classifier Systems (IWLCS 2014) will be held in Vancouver, BC, Canada during the Genetic and Evolutionary Computation Conference (GECCO-2014), July 12-16, 2014. We invite submissions that discuss recent developments in all areas of research on, and applications of, Learning Classifier Systems. 
IWLCS is the event that brings together most of the core researchers in classifier systems. The workshop also provides an opportunity for researchers interested in LCSs to get an impression of the current research directions in the field as well as a guideline for the application of LCSs to their problem domain. For more details, please visit the IWLCS'14 URL: http://homepages.ecs.vuw.ac.nz/~iqbal/iwlcs2014/index.html Submission and Publication: Submissions will be short-papers up to 8 pages in ACM format. Please see the GECCO 2014 information for authors for further details. However, unlike GECCO, papers do not have to be submitted in anonymous format. All accepted papers will be presented at IWLCS 2014 and will appear in the GECCO workshop volume, which will be published by ACM (Association for Computing Machinery). Authors will be invited after the workshop to submit revised (full) papers that, after a thorough review process, are to be published in a special issue of the Evolutionary Intelligence journal. All papers should be submitted in PDF format and e-mailed to: iwlcssubmissions at gmail.com Important dates: March 28, 2014 - Paper submission deadline April 15, 2014 - Notification to authors April 29, 2014 - Submission of camera-ready material July 12-16, 2014 - GECCO 2014 Conference in Vancouver, BC, Canada Regards, IWLCS'14 Organizing Committee Muhammad Iqbal, Victoria University of Wellington, New Zealand. (muhammad.iqbal at ecs.vuw.ac.nz) Kamran Shafi, University of New South Wales, Australia. (k.shafi at adfa.edu.au) Ryan Urbanowicz, Dartmouth College, USA. (ryan.j.urbanowicz at dartmouth.edu) From ahu at cs.stir.ac.uk Mon Feb 3 19:29:13 2014 From: ahu at cs.stir.ac.uk (Dr Amir Hussain) Date: Tue, 4 Feb 2014 00:29:13 +0000 Subject: Connectionists: (Abstract Submission Deadline Extended to 21 Feb 2014) International Workshop on Autonomous Cognitive Robotics, Stirling, Scotland, UK, 27-28 March 2014 Message-ID: Dear friends **with advance apologies for any cross-postings** ***By popular demand, the deadline for submitting abstracts (300 words max.) has now been extended to: Fri, 21 Feb 2014 (Decisions due: 28 Feb 2014)*** The Call for Abstracts below may be of interest - we would very much appreciate if you could also kindly help circulate the Call to any interested colleagues and friends. Details of the Workshop and distinguished invited Speakers can also be found here: http://www.cs.stir.ac.uk/~ahu/AUTCOGROB2014 Prospective contributors are required to submit an abstract of no more than 300 words (by the extended deadline: 21 Feb 2014) to: eya at cs.stir.ac.uk PhD/research students will benefit from a 50% registration fee discount. We look forward to seeing you soon in Stirling! Kindest regards Prof Amir Hussain, University of Stirling, UK & Prof Kevin Gurney, University of Sheffield, UK (Workshop Organisers & Co-Chairs) Important Dates: Abstract submissions deadline (extended): 21 Feb 2014; Decisions Due: 28 Feb 2014 Workshop dates: Thurs 27- Fri 28 March 2014 ------- Call for Abstracts/Participation International IEEE/EPSRC Workshop on Autonomous Cognitive Robotics University of Stirling, Stirling, Scotland, UK, 27-28 March 2014 http://www.cs.stir.ac.uk/~ahu/AUTCOGROB2014 Autonomous Cognitive Robotics is an emerging discipline, fusing ideas across several traditional domains and seeks to further our understanding in two problem domains. 
First, by instantiating brain models into an embodied form, it supplies a strong test of those models, thereby furthering our understanding of neurobiology and cognitive psychology. Second, by harnessing the insights we have about cognition, it is a potentially fruitful source of engineering solutions to a range of problems in robotics, in particular in areas such as intelligent autonomous vehicles and assistive technology. It therefore promises next-generation solutions in the design of urban autonomous vehicles, planetary rovers, and artificial social (e)companions. The aim of this 2-day workshop is to bring together leading international and UK scientists, engineers and industry representatives, alongside European research network and EU funding unit leaders, to present the state-of-the-art in autonomous cognitive systems and robotics research, and discuss future R&D challenges and opportunities. We welcome contributions from people working in neurobiology, cognitive psychology, artificial intelligence, control engineering, and computer science who embrace the vision outlined above. If you wish to contribute, please email an abstract of not more than 300 words (by 21 Feb 2014) to: eya at cs.stir.ac.uk Both "works-in-progress" and fully developed ideas are welcome. Selected abstracts will be invited for oral presentation but there will also be poster sessions. We also welcome people at all stages of their career to submit. Authors of selected best presentations will be invited to submit extended papers for publication in a special issue of Springer's Cognitive Computation journal (http://www.springer.com/12559) Invited Speakers: Juha Heikkilä, Deputy Head of Unit: Robotics & Cognitive Systems, European Commission Prof Vincent Müller, Co-ordinator, EU-Cognition-III: European Network Dr Ingmar Posner, The Oxford Mobile Robotics Group, University of Oxford, UK Prof Tony Pipe, Bristol Robotics Laboratory, UK Prof David Robertson, University of Edinburgh, UK Prof Mike Grimble, Industrial Systems & Control Ltd., & University of Strathclyde, UK Dr Tony Dodd, Dept. of Automatic Control Systems Engineering, Sheffield University, UK Prof Derong Liu, University of Illinois, USA & Chinese Academy of Sciences, Beijing Workshop Organisers & Co-Chairs: Prof Amir Hussain, University of Stirling, UK & Prof Kevin Gurney, University of Sheffield, UK Important Dates: Abstract submissions deadline: 21 Feb 2014; Decisions Due: 28 Feb 2014 Workshop dates: Thurs 27 - Fri 28 March 2014 Registration: Registration fees will include lunches, refreshments and a copy of the Workshop Abstract Proceedings. Early Registration Fee: £100 Early Deadline: 1 Mar 2014 Late Registration Fee: £150 Final deadline: 10 Mar 2014 Registration payment details will be sent on acceptance of the Abstract, or can be obtained by emailing: eya at cs.stir.ac.uk They will also be available on-line: http://www.cs.stir.ac.uk/~ahu/AUTCOGROB2014 Research students are entitled to a 50% discount (proof of registration is required), and IEEE Members can benefit from a 15% discount. Venue, Travel & Accommodation: The Workshop will be held in the Cottrell Building, Division of Computing Science and Maths, School of Natural Sciences, at the University of Stirling.
Travel directions and maps can be found at: http://www.stir.ac.uk/about/getting-here/ Accommodation options include the on-Campus Stirling Management Centre (http://www.smc.stir.ac.uk/), as well as numerous local B&Bs, for examples, see: http://www.stirling.co.uk/accommodation/guesthouse.htm Local Organizing Team: Dr Erfu Yang, Mr. Zeeshan Malik & Ms Grace McArthur Division of Computing Science & Maths, School of Natural Sciences, University of Stirling, UK E-mail: eya at cs.stir.ac.uk ________________________________ The University of Stirling has been ranked in the top 12 of UK universities for graduate employment*. 94% of our 2012 graduates were in work and/or further study within six months of graduation. *The Telegraph The University of Stirling is a charity registered in Scotland, number SC 011159. -- The University of Stirling has been ranked in the top 12 of UK universities for graduate employment*. 94% of our 2012 graduates were in work and/or further study within six months of graduation. *The Telegraph The University of Stirling is a charity registered in Scotland, number SC 011159. From liam at stat.columbia.edu Mon Feb 3 21:59:13 2014 From: liam at stat.columbia.edu (Liam Paninski) Date: Mon, 3 Feb 2014 21:59:13 -0500 Subject: Connectionists: Columbia postdoc positions: statistical neuroscience and machine learning Message-ID: Colleagues, While we're on the topic of big data in neuroscience... will you please circulate this postdoc advertisement amongst your students or anyone else who might be interested: http://grossmancenter.columbia.edu/postdoc.html ? Thank you. L Postdoctoral positions in statistical neuroscience and machine learning Two full-time postdoctoral positions are available immediately in Columbia University's Grossman Center for the Statistics of the Mind, a new center focusing on the intersection of neuroscience, machine learning, and statistics. The Grossman Center bridges Columbia's Statisticsand Neuroscience departments, and is closely integrated with the Center for Theoretical Neuroscience. The Grossman Center is part of Columbia's larger, growing initiatives in neuroscience and data sciences, all located in New York City. The principal appointments will be in the Statistics department, in the research groups of Liam Paninski and/or John Cunningham , with ample opportunities for interaction with an exceptional group of experimental and theoretical collaborators, including M. Churchland, L. Abbott, R. Yuste, R. Bruno, T. Jessell, K. Miller, B. Pesaran, E.J. Chichilnisky, E. Simoncelli, K. Shenoy, M. Ahrens, and more. * Requirements:* Qualifications include primarily a strong research portfolio in computational neuroscience, statistical neuroscience, machine learning, or a related field. Backgrounds in analyzing large neural datasets, modeling complex neural systems, and cutting-edge machine learning will be particularly valuable. These positions are highly quantitative and highly interdisciplinary; applicants should have a PhD in Electrical Engineering, Statistics, Machine Learning, Physics, Applied Mathematics, or Computational Neuroscience. *Appointment:* The initial appointments will be for one year, and are renewable. Salaries will be set based on experience and skills. Applicants should send email to "grossman at stat dot columbia dot edu" providing: 1. a one-page description of past research experience 2. a one-page description of future research interests and goals 3. a resume of educational and research experience, including publications 4. 
names of at least two people who could provide letters of reference All materials should be in pdf or plain text. Applications will be reviewed as they arrive until the positions are filled; interested candidates are encouraged to express their interest early. Due to demand, we may not be able to reply to all applications. -------------- next part -------------- An HTML attachment was scrubbed... URL: From odobez at idiap.ch Tue Feb 4 11:18:44 2014 From: odobez at idiap.ch (Jean-Marc Odobez) Date: Tue, 04 Feb 2014 17:18:44 +0100 Subject: Connectionists: JOB: IDIAP (CH) : postdoctoral position in computer vision Message-ID: <52F112E4.6090905@idiap.ch> Postdoctoral position in computer vision for surveillance We are looking for a highly motivated candidate for a postdoctoral position in computer vision for surveillance applications. The work will take place in the context of a project recently funded by the Swiss Government, where the main goal is to design an intrusion detection system that can leverage incremental learning techniques and automatic scene parameter adaptation. The project involves a small company and a technological university. The ideal postdoctoral candidate is expected to have a PhD in computer science or electrical engineering with a strong background in one of the following topics: - computer vision and video processing (tracking, low-level feature extraction, motion analysis, 3d geometry...) - machine learning - surveillance The applicant should have strong programming skills and be familiar with C/C++ and the Linux environment. The position is for 18 months. The starting date is as soon as possible. Interested candidates should email a letter of motivation, a detailed CV, and the names of three references to: Jean-Marc Odobez (odobez at idiap.ch, tel: +41 (0)27 721 77 26) www.idiap.ch/~odobez/ About Idiap: Idiap (www.idiap.ch), EPFL's Lidiap laboratory (idiap.epfl.ch), is located in Martigny in Valais, a scenic region in French-speaking Switzerland surrounded by the highest mountains of Europe, which offers multiple recreational activities, including hiking, climbing, and skiing, as well as varied cultural activities, all within close proximity to Lausanne and Geneva. Idiap is an equal opportunity employer and offers a young, multicultural environment where English is the main working language. -- Jean-Marc Odobez, IDIAP & EPFL Senior Researcher (EPFL MER) IDIAP Research Institute (http://www.idiap.ch) Tel: +41 (0)27 721 77 26 Web: http://www.idiap.ch/~odobez From hermann.neuro at gmail.com Wed Feb 5 07:21:03 2014 From: hermann.neuro at gmail.com (Hermann Cuntz) Date: Wed, 5 Feb 2014 13:21:03 +0100 Subject: Connectionists: Prospective PhD/Postdoc Positions in Computational Neuroscience Message-ID: <007f01cf226c$bd6ca4e0$3845eea0$@gmail.com> Prospective PhD/Postdoc Positions in Computational Neuroscience Job openings as PhD student or postdoc will be available in my group at the Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society and the Frankfurt Institute for Advanced Studies (FIAS) in Frankfurt/Main, Germany. My group is primarily focused on models that describe neuroanatomy at the cellular level and works in tight collaboration with experimental labs (see my website: http://www.treestoolbox.org/hermann/). We will be located at the ESI (http://www.esi-frankfurt.de/esi-frankfurt/) with additional office space at the FIAS (http://fias.uni-frankfurt.de/).
The group will be embedded in the German Bernstein Network of Computational Neuroscience. I am particularly looking for enthusiastic candidates who are interested in the cellular mechanisms of computation in the brain. Good programming or analytical skills and a neuroscience background are an advantage. If you are interested, please contact me directly via email (hermann.neuro at gmail.com). Please note that the positions are still pending approval and a more formal job posting will follow. Best wishes, Hermann Cuntz -------------- next part -------------- An HTML attachment was scrubbed... URL: From benoit.frenay at uclouvain.be Wed Feb 5 14:07:46 2014 From: benoit.frenay at uclouvain.be (Benoît Frénay) Date: Wed, 05 Feb 2014 20:07:46 +0100 Subject: Connectionists: Deadline Extension: Neurocomputing Special Issue on Advances in Learning with Label Noise Message-ID: <52F28C02.8010905@uclouvain.be> An HTML attachment was scrubbed... URL: From juergen at idsia.ch Thu Feb 6 07:24:03 2014 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Thu, 6 Feb 2014 13:24:03 +0100 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory Message-ID: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. A popular Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. (A toy code sketch illustrating this generic pre-train-then-fine-tune recipe appears further below.) References: [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313, no. 5786, pp. 504-507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313, no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf .
Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf Juergen Schmidhuber http://www.idsia.ch/~juergen/whatsnew.html From zhong at maebashi-it.ac.jp Wed Feb 5 20:26:36 2014 From: zhong at maebashi-it.ac.jp (Ning Zhong) Date: Thu, 06 Feb 2014 10:26:36 +0900 Subject: Connectionists: Call for Papers: Special Issue on Brain Big Data in the Hyper World Message-ID: <52F2E4CC.8080800@maebashi-it.ac.jp> [Apologies for cross-postings] Call for Papers: Special Issue on Brain Big Data in the Hyper World Brain Informatics: Brain Data Computing and Health Studies (BRIN) An International Journal (Springer) Guest Editors: Stephen S. Yau, Arizona State University, USA Ning Zhong, Maebashi Institute of Technology, Japan The "hyper world" means a new world encompassing coupling and empowering humans in the social world, information/computers in the cyber world, and things in the physical world. Brain Informatics related technologies offer informatics-enabled brain studies and applications in the hyper world, which can be regarded as a brain big data cycle. This brain big data cycle is implemented by various processing, interpreting, and integrating multiple forms of brain big data obtained from atomic and molecular levels to the entire brain. The implementation involves using powerful new neuro-imaging technologies, including fMRI, PET, and MEG/EEG, as well as other sources like eye-tracking and wearable, portable, micro and nano devices. Such brain big data will not only help scientists improve their understanding of human thinking, learning, decision-making, emotion, memory, and social behavior, but also help cure disease, serve health-care, facilitate environmental control and sustainability, using human-centric information and computing technologies in the hyper world. This special issue will present some of the best work being done worldwide to deal with fundamental issues, new challenges and potential applications of brain big data in the hyper world. 
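A minimal toy sketch, in Python/NumPy, of the generic recipe described in the Deep Learning post above: unsupervised, stack-wise pre-training followed by a supervised stage. This is not the 1991 Neural History Compressor nor the 2006 Deep Belief Network algorithm; it uses plain tied-weight autoencoders and synthetic data purely as an illustration, all function and variable names are invented for this example, and for brevity the supervised stage only trains a logistic readout on the pre-trained codes rather than fine-tuning the whole stack.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_autoencoder(X, n_hidden, lr=0.1, epochs=200):
    # Unsupervised stage for one layer: learn W, b so that the hidden code
    # sigmoid(X @ W + b) can reconstruct X through a tied-weight decoder.
    n_in = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_in, n_hidden))
    b, c = np.zeros(n_hidden), np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W + b)                    # encode
        R = sigmoid(H @ W.T + c)                  # decode (tied weights)
        dR = (R - X) * R * (1.0 - R)              # squared-error gradient at the output
        dH = (dR @ W) * H * (1.0 - H)             # gradient at the hidden code
        W -= lr * (X.T @ dH + dR.T @ H) / len(X)  # encoder + decoder contributions
        b -= lr * dH.sum(axis=0) / len(X)
        c -= lr * dR.sum(axis=0) / len(X)
    return W, b

def pretrain_stack(X, layer_sizes):
    # Greedy stack-wise pre-training: each layer models the codes of the layer below.
    params, H = [], X
    for n_hidden in layer_sizes:
        W, b = pretrain_autoencoder(H, n_hidden)
        params.append((W, b))
        H = sigmoid(H @ W + b)
    return params

def encode(X, params):
    H = X
    for W, b in params:
        H = sigmoid(H @ W + b)
    return H

def train_readout(H, y, lr=0.5, epochs=500):
    # Supervised stage: logistic-regression readout on the pre-trained codes.
    w, b0 = np.zeros(H.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(H @ w + b0)
        g = p - y                                 # cross-entropy gradient
        w -= lr * (H.T @ g) / len(y)
        b0 -= lr * g.mean()
    return w, b0

if __name__ == "__main__":
    # Synthetic binary task: two noisy clusters in 20 dimensions, clipped to [0, 1].
    X = np.clip(np.vstack([rng.normal(0.3, 0.2, (100, 20)),
                           rng.normal(0.7, 0.2, (100, 20))]), 0.0, 1.0)
    y = np.repeat([0.0, 1.0], 100)
    params = pretrain_stack(X, layer_sizes=[16, 8])   # unsupervised pre-training
    H = encode(X, params)
    w, b0 = train_readout(H, y)                       # supervised stage
    acc = ((sigmoid(H @ w + b0) > 0.5) == y).mean()
    print(f"training accuracy after pre-training + supervised readout: {acc:.2f}")

In a fuller treatment the supervised gradients would also flow back into the pre-trained layers (fine-tuning), and for sequence data the feedforward layers would be replaced by recurrent networks, as in the History Compressor described in the post above.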
Although this special issue is based on a very successful panel at AMT-BHI 2013 in Maebashi, Japan (http://wi-consortium.org/conferences/amtbi13/), we extend the call for papers to all active researchers and practitioners who are working on this exciting topic. Topics of interests include, but are not limited to: - Hyper world and cyber individual model - Future of brain big data and big data on the brain - Assessing the brain's white matter with diffusion imaging - A big brain vs the main requirement for big data - Big data in neuroimaging and connectome - Heart to heart science - Multimodal data analysis for depression early stage prediction and intervention - Multi-granular computing for brain big data - Big data analytics and interactive knowledge discovery with respect to brain cognition and mental health - Big data policy for life and brain sciences and its implications to future research All manuscripts must be in English. Manuscripts submitted for publication are reviewed by at least three peer reviewers, according to the usual policies of the BRIN Journal. Important Dates: - Paper submission deadline: March 20, 2014 - First round notification: March 31, 2014 - Revised version due: April 20, 2014 - Final decision notification: April 30, 2014 - Publication: June, 2014 Contact information: Ning Zhong Please submit your paper to and CC to Dr. Jian Yang From volker.roth at unibas.ch Thu Feb 6 05:56:45 2014 From: volker.roth at unibas.ch (Volker Roth) Date: Thu, 06 Feb 2014 11:56:45 +0100 Subject: Connectionists: Two PhD positions in Biomedical Data Analysis at the University of Basel, Switzerland Message-ID: <52F36A6D.2060300@unibas.ch> February 06, 2014 Applications are invited for two PhD positions in the Biomedical Data Analysis group at the Department of Mathematics and Computer Science of the University of Basel. The focus of our group concerns the development of integrated solutions for large-scale data analysis problems, ranging from low-level data/image-processing, to feature and structure detection to classification, clustering and network inference. Successful applicants have a profound knowledge in mathematical modeling and in algorithmics, together with substantial programming skills and a genuine interest in biomedical applications. Candidates are expected to engage in interdisciplinary research groups and to foster collaborations with clinicians and biologists. A prerequisite is a Masters degree in Computer Science, Mathematics, Physics or related disciplines. Successful candidates will be awarded a fellowship with a competitive salary (approx. 47000 CHF/year). Applications with a full CV, list of publications, short statement of research interests and name(s) of at least one referee should be submitted (in electronic form) to volker.roth at unibas.ch before March 10, 2014. For further inquiries, please contact Volker Roth, Email: volker.roth at unibas.ch Phone: +41-61-2670549 -- ============================================================================ Prof. Dr. Volker Roth Department of Mathematics and Computer Science University of Basel Tel.: +41-(0)61-2670549 Bernoullistr. 
16, email: volker.roth at unibas.ch CH-4056 Basel, Switzerland http://bmda.cs.unibas.ch/ ============================================================================ From zhong at maebashi-it.ac.jp Wed Feb 5 20:31:37 2014 From: zhong at maebashi-it.ac.jp (Ning Zhong) Date: Thu, 06 Feb 2014 10:31:37 +0900 Subject: Connectionists: CFPs: Brain Informatics & Health (BIH 2014) Message-ID: <52F2E5F9.5080501@maebashi-it.ac.jp> [Apologies if you receive this more than once] ################################################################## The 2014 International Conference on Brain Informatics & Health (BIH'14) August 11-14, 2014, Warsaw, Poland 2ND CALL FOR PAPERS ################################################################## Homepage:http://wic2014.mimuw.edu.pl/bih/homepage ################################################################## BIH'14 is a part of the 2014 Web Intelligence Congress (WIC 2014). The series of Brain Informatics conferences was started in China, in 2006, with the International Workshop on Web Intelligence meets Brain Informatics (WImBI'06). The next events have been held in China, Canada and Japan. Since 2012 the conference topics have been extended with major elements of Health Informatics in order to investigate some common challenges in both areas. In 2014, this series of events will visit Europe for the first time. Important Dates: ################################################################## # Electronic submission of full papers: March 2, 2014 # Workshop paper submission: March 23, 2014 # Notification of paper acceptance: May 4-11, 2014 # Camera-ready of accepted papers: May 18, 2014 ################################################################## BIH'14 Keynote Speaker: - Karl Friston (University College London) Turing Keynote Speaker: - Andrew Chi-Chih Yao (2000 Turing Award Winner) Other WIC 2014 Speakers: - Stefan Decker (National University of Ireland) - Sadaaki Miyamoto (University of Tsukuba) - Yi Pan (Georgia State University) - Andrzej Szalas (Linkoping University & The University of Warsaw) BIH'14 Co-Organizers/Co-Sponsors: - Web Intelligence Consortium (WIC) - IEEE-CIS Task Force on Brain Informatics (IEEE TF-BI) - The University of Warsaw - Polish Mathematical Society (PTM) - Warsaw University of Technology - Polish Academy of Sciences (PAS) Committee on Informatics - Polish Artificial Intelligence Society BIH'14 Program Co-Chairs: - Xiaohua (Tony) Hu, USA - Lars Schwabe, Germany - Ah-Hwee Tan, Singapore On-Line Submissions & Publications: ################################################################## # Papers need to have up to 10 pages in LNCS format: #http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0 # Accepted papers will be published by Springer as a # volume of the series of LNCS/LNAI. ################################################################## CONFERENCE TOPICS INCLUDE, BUT ARE NOT LIMITED TO: 1. Foundations of Brain Understanding * Brain Organization & Reaction Modelling * Causal, Hierarchical & Granular Brain Modelling * Human Reasoning & Learning Mechanisms * Neural Basis for High-Level Human Concepts * Higher Cognitive Functions & Consciousness * Systematic Design of Cognitive Experiments * Visual, Auditory & Tactile Information Processes * Spatio-Temporality of Human Information Processes 2. 
Brain-Inspired Problem Solving * Brain & Cognition-Inspired Intelligent Systems * Foundations & Applications of Neurocomputing * Deep, Hierarchical & Energy-based Learning * Brain-Related Aspects of Natural Computing * Brain Informatics for Cyber-Individual Models * Human Factors in Computing Systems * Neuroeconomics & Neuromarketing * Neurolinguistics & Neurosemantics 3. Brain & Health Data Management * Digital, Data & Computational Brain * Big Brain Data Centers & Computational Grids * Brain Data/Information Flow Simulations * Brain & Health Data Repositories & Benchmarks * Brain & Health Data Cleaning / Quality Assurance * Brain & Health Data/Evidence Integration * Electronic Patient Record Management * Medical Knowledge Abstraction & Representation 4. Biomedical Decision Support * Neurological/Mental Disease Diagnosis & Assessment * Therapy Planning & Disease Prognostic Support * Risk Management for Diagnostic/Therapeutic Support * Computer Support for Surgical Intervention * Physiological, Clinical & Epidemiological Modelling * Operations Research for Biomedicine & Healthcare 5. Brain & Health Data Analytics * Pattern Recognition Methods for Brain & Health Data * Knowledge Discovery from Brain & Health Databases * Multimodal Brain Information Fusion * Neuroimaging & Electromagnetic Brain Signals * Discovery of Biomarkers & New Therapies * Healthcare Workflow Mining for Quality Assurance * Domain Knowledge in Medical Image/Signal Analysis * Mining Brain/Health Literature & Medical Records * Survival Analysis & Health Hazard Evaluations * Interactive & Visual Analytics for Biomedicine 6. Healthcare Systems * Healthcare Systems as Complex Systems * Risk Management for Healthcare Processes * IT Solutions for Healthcare Service Delivery * IT Solutions for Hospital Management * Organizational Impacts of Healthcare IT Solutions * Public Health Informatics & Healthcare Networks * Medical Compliance Support & Automation * Social Aspects of Healthcare Mechanisms 7. Biomedical Technologies * Brain-Computer Middleware * Biomedical Intelligent Devices * Biomedical Sensor Calibration * Assistive & Monitoring Technologies * Biomedical Software Engineering * Biomedical Robotics & Microrobotics 8. Applications of Brain & Health Informatics * Brain & Health Scientific Research Support Portals * Brain Signal Interfaces & Non-verbal Communication * Telemedicine, E-medicine & M-medicine * Clinical/Hospital Information Systems * Biomedical & Health Recommender Systems * Business Intelligence based on Brain & Health Data Tutorial Proposals: ################################################################## # Electronic submission of proposals: April 13, 2014 # Notification of proposal acceptance: April 20, 2014 ################################################################## *** Post-Conference Journal Publications *** - Web Intelligence and Agent Systems (IOS Press) - Brain Informatics (Springer) - Information Technology & Decision Making (World Scientific) - Computational Intelligence (Wiley) - Health Information Science and Systems (Springer) - Cognitive Systems Research (Elsevier) - Computational Cognitive Science (Springer) - Semantic Computing (World Scientific) *** About WIC 2014 *** The 2014 Web Intelligence Congress (WIC 2014) is a Special Event of Web25 (25 years of the Web). 
It includes four top-quality international conferences related to intelligent informatics: - IEEE/WIC/ACM Web Intelligence 2014 (WI'14) - IEEE/WIC/ACM Intelligent Agent Technology 2014 (IAT'14) - Active Media Technology 2014 (AMT'14) - Brain Informatics & Heath 2014 (BIH'14) They are co-located in order to bring together researchers and practitioners from diverse fields with the purpose of exploring the fundamental roles, interactions and practical impacts of Artificial Intelligence and Advanced Information Technology. *** About the Venue *** The conference will be held in August - the best Summer period to visit Warsaw and Poland. Lectures will take place in Central Campus of the University of Warsaw, in the Old Library building converted to a modern conference center. The campus is located in downtown Warsaw, close to the Old Town and Vistula River. *** Contact Information *** Dominik Slezak WIC 2014 Congress Program Chair From n.lepora at sheffield.ac.uk Thu Feb 6 08:14:44 2014 From: n.lepora at sheffield.ac.uk (Nathan F Lepora) Date: Thu, 6 Feb 2014 13:14:44 +0000 Subject: Connectionists: Living Machines III: Second Call for Papers, Satellite Events and Sponsors Message-ID: ______________________________________________________________ *Second Call for Papers, Satellite Events and Sponsors* *Living Machines III: The 3rd International Conference on Biomimetic and Biohybrid Systems **30th July to 1st August 2014* http://csnetwork.eu/livingmachines To be hosted at the Museo Nazionale Della Scienza E Della Tecnologia Leonardo Da Vinci (National Museum of Science and Technology Leonardo da Vinci) Milan, Italy In association with the Istituto Italiano di Technologia (IIT) Accepted papers will be published in *Springer* *Lecture Notes in Artificial Intelligence* Submission deadline March 14th, 2014. ______________________________________________________________ *ABOUT LIVING MACHINES 2014* The development of future real-world technologies will depend strongly on our understanding and harnessing of the principles underlying living systems and the flow of communication signals between living and artificial systems. *Biomimetics* is the development of novel technologies through the distillation of principles from the study of biological systems. The investigation of biomimetic systems can serve two complementary goals. First, a suitably designed and configured biomimetic artefact can be used to test theories about the natural system of interest. Second, biomimetic technologies can provide useful, elegant and efficient solutions to unsolved challenges in science and engineering. *Biohybrid* systems are formed by combining at least one biological component--an existing living system--and at least one artificial, newly-engineered component. By passing information in one or both directions, such a system forms a new hybrid bio-artificial entity. The following are some examples: * Biomimetic robots and their component technologies (sensors, actuators, processors) that can intelligently interact with their environments. * Active biomimetic materials and structures that self-organize and self-repair. * Biomimetic computers--neuromimetic emulations of the physiological basis for intelligent behaviour. * Biohybrid brain-machine interfaces and neural implants. * Artificial organs and body-parts including sensory organ-chip hybrids and intelligent prostheses. * Organism-level biohybrids such as robot-animal or robot-human systems. 
*ACTIVITIES* The main conference will take the form of a *three-day single-track oral and poster presentation programme*, 30th July to 1st August 2014, hosted at the Museo Nazionale Della Scienza E Della Tecnologia Leonardo Da Vinci in Milan (http://www.museoscienza.org). The conference programme will include *five plenary lectures* from leading international researchers in biomimetic and biohybrid systems, and demonstrations of state-of-the-art living machine technologies. Agreed speakers are: *Sarah Bergbreiter*, University of Maryland (Microfabrication and robotics) *Darwin Caldwell*, Italian Institute of Technology (Legged locomotion) *Andrew Schwartz*, University of Pittsburgh (Neural control of prosthetics) *Ricard Sole*, Universitat Pompeu Fabra, Barcelona (Self-organization and synthetic biology) *Srini Srinivasan*, Queensland Brain Institute (Insect-inspired cognition and vision) The full conference will be preceded by up to two days of *Satellite Events* hosted by the Istituto Italiano di Tecnologia in Milan. *SUBMITTING TO LIVING MACHINES 2014* We invite both *full papers* and *extended abstracts* in areas related to the conference themes. All contributions will be refereed and accepted papers will appear in the Living Machines 2014 *proceedings* published in the Springer-Verlag LNAI Series. Submissions should be made before the advertised deadline via the Springer submission site: http://senldogo0039.springer-sbm.com/ocs/en/home/LM2014 Full papers (up to 12 pages) are invited from researchers at any stage in their career but should present significant findings and advances in biomimetic or biohybrid research; more preliminary work would be better suited to extended abstract submission (3 pages). Further details of submission formats will be circulated in an updated CfP and will be posted on the conference web-site. Full papers will be accepted for either oral presentation (single track) or poster presentation. Extended abstracts will be accepted for poster presentation only. Authors of the best full papers will be invited to submit extended versions of their paper for publication in a special issue of *Bioinspiration and Biomimetics*. *Satellite events* Active researchers in biomimetic and biohybrid systems are invited to propose topics for *1-day or 2-day tutorials, symposia or workshops* on related themes to be held 28-29th July at the Italian Institute of Technology in Milan. Events can be scheduled on either the 28th or 29th or across both days. Attendance at satellite events will attract a small fee intended to cover the costs of the meeting. There is a lot of flexibility about the content, organisation, and budgeting for these events. Please contact us if you are interested in organising a satellite event! *EXPECTED DEADLINES* March 14th, 2014 Paper submission deadline April 29th, 2014 Notification of acceptance May 20th, 2014 Camera ready copy July 29-August 2nd 2014 Conference *SPONSORSHIP* Living Machines 2014 is sponsored by the *Convergent Science Network (CSN) for Biomimetics and Neurotechnology*.
CSN is an EU FP7 Future Emerging Technologies Co-ordination Activity that also organises many highly successful workshop series: the *Barcelona Summer School on Brain, Technology and Cognition*, the *Capo Caccia Neuromorphic Cognitive Engineering Workshop*, the *School on Neuro-techniques*, the *Okinawa School of Computational Neuroscience* and the *Telluride workshop of Cognitive Neuromorphic Engineering *(see http://csnetwork.eu/activities for details) The 2014 Living Machines conference will also be hosted and sponsored by *the Istituto Italiano di Technologia (http://www.iit.it ).* *Call for Sponsors.* Other organisations wishing to sponsor the conference in any way and gain the corresponding benefits by promoting themselves and their products to through conference publications, the conference web-site, and conference publicity are encouraged to contact the conference organisers to discuss the terms of sponsorship and necessary arrangements. We offer a number of attractive and good-value packages to potential sponsors. *ABOUT THE VENUE* Living Machines 2014 continues our practice of hosting our annual meeting in an inspirational venue related to the conference themes. The scientific and technological genius Leonardo da Vinci drew much of his inspiration from biology and invented many biomimetic artefacts. We are therefore delighted that this year's conference will be hosted at the Da Vinci museum of Science and Technology in Milan, one of the largest technology museums in Europe and host to a collection of working machines that realise many of Da Vinci's ideas. We look forward to seeing you in Milan. *Organising Committee:* Tony Prescott, University of Sheffield (Co-chair) Paul Verschure, Universitat Pompeu Fabra (Co-chair) Armin Duff, Universitat Pompeu Fabra (Program Chair) Giorgio Metta, Instituto Italiano di Technologia (Local Organizer) Barbara Mazzolai, Instituto Italiano di Technologia (Local Organiser) Anna Mura, Universitat Pompeu Fabra (Communications) Nathan Lepora, University of Bristol (Communications) *Program Committee:* Anders Lyhne Christensen Andy Adamatzky Andy Phillipides Arianna Menciassi Auke Ijspeert Barry Trimmer Ben Mitchinson Benoit Girard Cecilia Laschi Charles Fox Chrisantha Fernando Christophe Grand Danilo de Rossi Darwin Caldwell Dieter Braun Emre Neftci Enrico Pagello Eris Chinaletto Ferdinando Rodrigues y Baena Frank Grasso Fred Claeyssens Frederic Boyer Frederico Carpi Giacomo Indiveri Gregory Chirikjian Hillel Chiel Holger Krapp Holk Cruse Husosheng Hu Jess Krichmar Jira Okada John Hallam Jon Timmis Jonathan Rossiter Jose Halloy Joseph Ayers Julian Vincent Keisuke Morisima Lucia Beccai Marco Dorigo Mark Cutkosky Martin Pearson Mat Evans Mehdi Khamassi Michele Giogliano Nathan Lepora Noah Cowan Pablo Varona Paul Graham Paul Verschure Reiko Tanaka Robert Allen Roberto Cingolani Roderich Gross Roger Quinn Sean Anderson Serge Kernbach Simon Garnier Stephane Doncieux Stuart Wilson Thomas Schmickl Tim Pearce Tony Pipe Tony Prescott Volker Durr Wolfgang Eberle Yiannis Demiris Yoseph Bar-Cohen -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From erik at oist.jp Thu Feb 6 00:49:57 2014 From: erik at oist.jp (Erik De Schutter) Date: Thu, 6 Feb 2014 14:49:57 +0900 Subject: Connectionists: Okinawa Computational Neuroscience Course 2014: Application deadline this Sunday Message-ID: <31F82814-8C7D-4482-B6B1-C4AA21305E87@oist.jp> OKINAWA COMPUTATIONAL NEUROSCIENCE COURSE 2014 Methods, Neurons, Networks and Behaviors June 16 - July 3, 2014 Okinawa Institute of Science and Technology, Japan https://groups.oist.jp/ocnc The aim of the Okinawa Computational Neuroscience Course is to provide opportunities for young researchers with theoretical backgrounds to learn the latest advances in neuroscience, and for those with experimental backgrounds to have hands-on experience in computational modeling. We invite graduate students and postgraduate researchers to participate in the course, held from June 16th through July 3rd, 2014 at an oceanfront seminar house of the Okinawa Institute of Science and Technology Graduate University. Applications are through the course web page (https://groups.oist.jp/ocnc) only; they will close on the night of February 9th, 2014. Applicants are required to propose a project at the time of application. Applicants will receive confirmation of acceptance in March. As in preceding years, OCNC will be a comprehensive three-week course covering single neurons, networks, and behaviors with ample time for student projects. The first week will focus exclusively on methods, with hands-on tutorials during the afternoons, while the second and third weeks will have lectures by international experts. We invite those who are interested in integrating experimental and computational approaches at each level, as well as in bridging different levels of complexity. There is no tuition fee. The sponsor will provide lodging and meals during the course and may support travel for those without funding. We hope that this course will be a good opportunity for theoretical and experimental neuroscientists to meet each other and to explore the attractive nature and culture of Okinawa, the southernmost island prefecture of Japan. Invited faculty: • Upinder Bhalla (NCBS, India) • Erik De Schutter (OIST) • Kenji Doya (OIST) • Tomoki Fukai (RIKEN, Japan) • Bernd Kuhn (OIST) • Javier Medina (University of Pennsylvania, USA) • Abigail Morrison (Forschungszentrum Jülich, Germany) • Yael Niv (Princeton University, USA) • Tony Prescott (University of Sheffield, UK) • Bernardo Sabatini (Harvard University, USA) • Ivan Soltesz (UC Irvine, USA) • Greg Stephens (OIST) • Greg Stuart (Eccles Institute of Neuroscience, Australia) • Josh Tenenbaum (MIT, USA) • Jeff Wickens (OIST) • Yoko Yazaki-Sugiyama (OIST) From kkuehnbe at uos.de Thu Feb 6 08:45:54 2014 From: kkuehnbe at uos.de (Kai-Uwe Kuehnberger) Date: Thu, 06 Feb 2014 14:45:54 +0100 Subject: Connectionists: "Neural-Symbolic Networks for Cognitive Capacities" - Call for Papers for a special issue of Elsevier's "Biologically Inspired Cognitive Architectures" Message-ID: <52F39212.2070902@uos.de> Call for Papers: Journal Special Issue on == Neural-Symbolic Networks for Cognitive Capacities == Tarek R. Besold, Artur d'Avila Garcez, Kai-Uwe Kühnberger, Terrence C. 
Stewart Special issue of the Elsevier Journal on Biologically Inspired Cognitive Architectures (BICA) http://www.journals.elsevier.com/biologically-inspired-cognitive-architectures/ = SCOPE = Researchers in artificial intelligence and cognitive systems modelling continue to face foundational challenges in their quest to develop plausible models and implementations of cognitive capacities and intelligence. One of the methodological core issues is the question of the integration between sub-symbolic and symbolic approaches to knowledge representation, learning and reasoning in cognitively-inspired models. Network-based approaches very often enable flexible tools which can discover and process the internal structure of (possibly large) data sets. They promise to give rise to efficient signal-processing models which are biologically plausible and optimally suited for a wide range of applications, whilst possibly also offering an explanation of cognitive phenomena of the human brain. Still, the extraction of high-level explicit (i.e. symbolic) knowledge from distributed low-level representations thus far has to be considered a mostly unsolved problem. In recent years, network-based models have seen significant advancement in the wake of the development of the new "deep learning" family of approaches to machine learning. Due to the hierarchically structured nature of the underlying models, these developments have also reinvigorated efforts in overcoming the neural-symbolic divide. The aim of the special issue is to bring together recent work developed in the field of network-based information processing in a cognitive context, which bridges the gap between different levels of description and paradigms and which sheds light onto canonical solutions or principled approaches occurring in the context of neural-symbolic integration to modelling or implementing cognitive capacities. = TOPICS = We particularly encourage submissions related to the following non-exhaustive list of topics: - new learning paradigms of network-based models addressing different knowledge levels - biologically plausible methods and models - integration of network models and symbolic reasoning - cognitive systems using neural-symbolic paradigms - extraction of symbolic knowledge from network-based representations - challenging applications which have the potential to become benchmark problems - visionary papers concerning the future of network approaches to cognitive modelling = SUBMISSIONS = Deadline for submissions is *** April 16, 2014 ***. The suggested submission category for the special issue is Research Article - up to 20 journal pages or 20,000 words -, while shorter submissions in the category Letter - up to 6 journal pages or 5000 words - are equally welcome. Visionary papers dealing with the future of network approaches to cognitive modelling must belong to the category Research Article and are subject to prior acceptance by the editors. If you are planning on submitting to this category, please get in touch with Tarek R. Besold, tbesold at uni-osnabrueck.de. Submissions shall follow the guidelines laid out for the journal "Biologically Inspired Cognitive Architectures", which can be found under http://www.elsevier.com/journals/biologically-inspired-cognitive-architectures/2212-683X/guide-for-authors. Contributions shall be submitted via the journal's submission system which can be found under http://ees.elsevier.com/bica/ and in addition shall be sent by email as .pdf to Tarek R. 
Besold, tbesold at uni-osnabrueck.de. When submitting their papers online, authors are asked to select "Article Type" SI:Neural-Symbolic Net in order to assure identification of the submission as belonging to the special issue. Please also indicate in the cover letter that the article has been submitted to the special issue on "Neural-Symbolic Networks for Cognitive Capacities". = IMPORTANT DATES = Deadline for submissions: April 16, 2014 Feedback to authors*: May 9, 2014 Submission of revised versions: May 19, 2014 Final notification of acceptance: May 24, 2014 Publication of the special issue: July 2014 as Vol. 9 of "Biologically Inspired Cognitive Architectures" (*= Including rejection / minor revisions / acceptance.) = GUEST EDITORS = Tarek R. Besold, Institute of Cognitive Science, University of Osnabrück, Germany Artur D'Avila Garcez, Department of Computer Science, City University London, UK Kai-Uwe Kühnberger, Institute of Cognitive Science, University of Osnabrück, Germany Terrence C. Stewart, Centre for Theoretical Neuroscience, University of Waterloo, Canada From b.telenczuk at biologie.hu-berlin.de Thu Feb 6 10:55:28 2014 From: b.telenczuk at biologie.hu-berlin.de (Bartosz Telenczuk) Date: Thu, 06 Feb 2014 16:55:28 +0100 Subject: Connectionists: [ANN] Summer School "Advanced Scientific Programming in Python" in Split, Croatia Message-ID: <52F3B070.4030204@biologie.hu-berlin.de> Advanced Scientific Programming in Python ========================================= a Summer School by the G-Node and the Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture (FESB), University of Split Scientists spend more and more time writing, maintaining, and debugging software. While techniques for doing this efficiently have evolved, only few scientists have been trained to use them. As a result, instead of doing their research, they spend far too much time writing deficient code and reinventing the wheel. In this course we will present a selection of advanced programming techniques, incorporating theoretical lectures and practical exercises tailored to the needs of a programming scientist. New skills will be tested in a real programming project: we will team up to develop an entertaining scientific computer game. We use the Python programming language for the entire course. Python works as a simple programming language for beginners, but more importantly, it also works great in scientific simulations and data analysis. We show how clean language design, ease of extensibility, and the great wealth of open source libraries for scientific computing and data visualization are driving Python to become a standard tool for the programming scientist. This school is targeted at Master or PhD students and Post-docs from all areas of science. Competence in Python or in another language such as Java, C/C++, MATLAB, or Mathematica is absolutely required. Basic knowledge of Python is assumed. Participants without any prior experience with Python should work through the proposed introductory materials before the course. Date and Location ================= September 8–13, 2014. Split, Croatia Preliminary Program =================== Day 0 (Mon Sept 8) • Best Programming Practices • Best Practices for Scientific Computing • Version control with git and how to contribute to Open Source with github • Object-oriented programming & design patterns Day 1 (Tue Sept 9) • Software Carpentry • Test-driven development, unit testing & quality assurance • 
Debugging, profiling and benchmarking techniques • Advanced Python I: idioms, useful built-in data structures, generators Day 2 (Wed Sept 10) • Scientific Tools for Python • Advanced NumPy • The Quest for Speed (intro): Interfacing to C with Cython • Programming in teams Day 3 (Thu Sept 11) • The Quest for Speed • Writing parallel applications in Python • Python 3: why should I care • Programming project Day 4 (Fri Sept 12) • Efficient Memory Management • When parallelization does not help: the starving CPUs problem • Advanced Python II: decorators and context managers • Programming project Day 5 (Sat Sept 13) • Practical Software Development • Programming project • The Pelita Tournament Every evening we will have the tutors' consultation hour: Tutors will answer your questions and give suggestions for your own projects. Applications ============ You can apply on-line at http://python.g-node.org Applications must be submitted before 23:59 UTC, May 1, 2014. Notifications of acceptance will be sent by June 1, 2014. No fee is charged but participants should take care of travel, living, and accommodation expenses. Candidates will be selected on the basis of their profile. Places are limited: acceptance rate is usually around 20%. Prerequisites: You are supposed to know the basics of Python to participate in the lectures. You are encouraged to go through the introductory material available on the website. Faculty ======= • Francesc Alted, Continuum Analytics Inc., USA • Pietro Berkes, Enthought Inc., UK • Kathryn D. Huff, Department of Nuclear Engineering, University of California - Berkeley, USA • Zbigniew Jędrzejewski-Szmek, Krasnow Institute, George Mason University, USA • Eilif Muller, Blue Brain Project, École Polytechnique Fédérale de Lausanne, Switzerland • Rike-Benjamin Schuppner, Technologit GbR, Germany • Nelle Varoquaux, Centre for Computational Biology Mines ParisTech, Institut Curie, U900 INSERM, Paris, France • Stéfan van der Walt, Applied Mathematics, Stellenbosch University, South Africa • Niko Wilbert, TNG Technology Consulting GmbH, Germany • Tiziano Zito, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Germany Organized by Tiziano Zito (head) and Zbigniew Jędrzejewski-Szmek for the German Neuroinformatics Node of the INCF (Germany), Lana Periša for the Numerical and applied mathematics group, FESB, University of Split (Croatia), Ivana Kajić from the Bernstein Center for Computational Neuroscience Berlin (Germany), Ivana Balažević from the Technical University Berlin (Germany), and Filip Petkovski from IN2 Ltd. Skopje (Macedonia). Website: http://python.g-node.org Contact: python-info at g-node.org From friedhelm.schwenker at uni-ulm.de Thu Feb 6 15:13:58 2014 From: friedhelm.schwenker at uni-ulm.de (Dr. Schwenker) Date: Thu, 06 Feb 2014 21:13:58 +0100 Subject: Connectionists: CFP: Special Issue in Pattern Recognition Letters on "Pattern Recognition in Human-Computer-Interaction" Message-ID: <52F3ED06.8070300@uni-ulm.de> ---- PLEASE ACCEPT OUR APOLOGIES FOR MULTIPLE COPIES ---- Dear colleagues, due to several requests the deadline for the Special Issue in the Pattern Recognition Letters Journal on "Pattern Recognition in Human-Computer-Interaction" has been extended to FEBRUARY 20, 2014. 
Please find the CfP below and at https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.130/Mitarbeiter/schwenker/CfP.pdf With best regards, Friedhelm Schwenker (SI Guest Editor) --- *Call for Papers* Special Issue on *Pattern Recognition in Human-Computer-Interaction* to be published in the *Pattern Recognition Letters Journal* **** New submission deadline: February 20, 2014 **** Building intelligent artificial companions capable of interacting with humans in the same way humans interact with each other is a major challenge in affective computing. Such an interactive companion must be able to perceive and interpret multimodal information about the user in order to be able to produce an appropriate response. The proposed special issue mainly focuses on pattern recognition and machine learning methods for the perception of the user's affective states, activities and intentions. Topics of interest include (but are not limited to) the following. A. Algorithms to recognize emotions, behaviors, activities and intentions • Facial expression recognition • Recognition of gestures, head/body poses • Audiovisual emotion recognition • Analysis of bio-physiological data for emotion recognition • Multimodal information fusion architectures • Multi Classifier Systems and Multi View Classifiers • Temporal fusion B. Learning algorithms for social signal processing • Learning from unlabeled and partially labeled data • Learning with noisy/uncertain labels • Deep learning architectures • Learning of time series C. Applications • Companion Technologies • Robotics • Assistive systems D. Benchmark databases This special issue invites paper submissions on the most recent developments in human computer interaction research rooted in pattern recognition. The special issue will comprise (1) papers submitted in response to this call, and (2) extended versions of selected papers from the recent, successful MPRSS 2012 and MPRSS 2013 workshops sponsored by the International Association for Pattern Recognition. MPRSS 2012: November 11, 2012, Tsukuba, Japan http://neuro.informatik.uni-ulm.de/MPRSS2012/ MPRSS 2013: June 15, 2013, Lausanne, Switzerland, http://neuro.informatik.uni-ulm.de/MPRSS2013/ *Paper submission* The papers must be submitted online via the Pattern Recognition Letters Journal website (http://ees.elsevier.com/patrec/), selecting the choice that indicates this special issue (identifier: PR-HCI). Please prepare your paper following the Journal guidelines for Authors (http://www.elsevier.com/wps/find/journaldescription.cws_home/505619/authorinstructions), which include specifications for submissions aimed at Special Issues. Priority will be given to papers with high novelty and originality. *Submission templates* (for both LaTeX and MS Word users) are available and it is mandatory that submissions are prepared using these templates. Potential contributors will find the templates in the guidelines for Authors on the PRLetters webpage. Submissions to the SI can be *at most 10 pages long* (in the PRLetters layout). This is different from what has been done until a few weeks ago, where Word/Figure/Table counting was done at EES to check whether a paper had been prepared according to the rules or had to be sent back to Authors for editing. Now only page counting is done at the EES ASA department and papers longer than 10 pages will be sent back to Authors to shorten them. 
If you are not sure on whether your manuscripts matches the aims and scope of this special issue or not, do not hesitate to get in touch with the guest editors at any time. *Guest editors* /Friedhelm Schwenker (Managing Editor)/ /Institute of Neural Information Processing/ /Ulm University, Germany/ /friedhelm.schwenker at uni-ulm.de/ // // /Stefan Scherer/ /Multimodal Communication and Computation Laboratory/ /Institute for Creative Technologies/ /University of Southern California/ /scherer at ict.usc.edu/ // // /Louis-Philippe Morency/ /Multimodal Communication and Computation Laboratory/ /Institute for Creative Technologies/ /University of Southern California/ /morency at ict.usc.edu/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From lila.kari at uwo.ca Thu Feb 6 15:14:10 2014 From: lila.kari at uwo.ca (Lila Kari) Date: Thu, 06 Feb 2014 15:14:10 -0500 Subject: Connectionists: UCNC 2014 - Invited Talks, 3rd CFP, Twitter Message-ID: <31A94FF3-31EE-49F7-8662-8176B14FEDFD@uwo.ca> UCNC 2014 - 3RD CALL FOR PAPERS The 13th International Conference on Unconventional Computation & Natural Computation University of Western Ontario, London, Ontario, Canada July 14-18, 2014 http://www.csd.uwo.ca/ucnc2014 http://www.facebook.com/UCNC2014 https://twitter.com/UCNC2014 Submission deadline: March 7, 2014 OVERVIEW The International Conference on Unconventional Computation and Natural Computation has been a meeting where scientists with different backgrounds, yet sharing a common interest in novel forms of computation, human-designed computation inspired by nature, and the computational aspects of processes taking place in nature, present their latest results. Papers and poster presentations are sought in all areas, theoretical or experimental, that relate to unconventional computation and natural computation. Typical, but not exclusive, topics are: * Molecular (DNA) computing, Quantum computing, Optical computing, Hypercomputation - relativistic computation, Chaos computing, Physarum computing, Computation in hyperbolic spaces, Collision-based computing, Computations beyond the Turing model; * Cellular automata, Neural computation, Evolutionary computation, Swarm intelligence, Ant algorithms, Artificial immune systems, Artificial life, Membrane computing, Amorphous computing; * Computational Systems Biology, Genetic networks, Protein-protein networks, Transport networks, Synthetic biology, Cellular (in vivo) computing. INVITED PLENARY SPEAKERS Yaakov Benenson (ETH Zurich) - "Molecular computing meets synthetic biology" Charles Bennett (IBM Research) - "Thermodynamics of computation and self-organization" Hod Lipson (Cornell University) - "The robotic scientist" Nadrian Seeman (New York University) - "DNA: not merely the secret of life" INVITED TUTORIAL SPEAKERS Anne Condon (University of British Columbia) - "Programming with biomolecules" Ming Li (University of Waterloo) - "Approximating semantics" Tommaso Toffoli (Boston University) - "Do we compute to live, or live to compute?" 
WORKSHOPS Computational Neuroscience - Organizer: Mark Daley (University of Western Ontario) DNA Computing by Self-Assembly - Organizer: Matthew Patitz (University of Arkansas) Unconventional Computation in Europe - Organizers: Martyn Amos (Manchester Metropolitan University), Susan Stepney (University of York) IMPORTANT DATES Submission deadline: March 7, 2014 Notification of acceptance: April 7, 2014 Final versions due: April 27, 2014 Conference: July 14-18, 2014 INSTRUCTIONS FOR AUTHORS Authors are invited to submit original papers (at most 12 pages in LNCS format) or one-page poster abstracts using the link https://www.easychair.org/conferences/?conf=ucnc2014 Papers must be submitted in Portable Document Format (PDF). The revised version of the manuscripts, to appear in an LNCS volume by Springer available at the conference venue, must be prepared in LaTeX, see http://www.springer.com/computer/lncs/lncs+authors The papers must not have been submitted simultaneously to other conferences with published proceedings. All accepted papers must be presented at the conference. Selected papers will appear in a special issue of Natural Computing. PROGRAM COMMITTEE Andrew Adamatzky (University of the West of England, UK) Selim G. Akl (Queen's University, Canada) Eshel Ben-Jacob (Tel-Aviv University, Israel) Cristian S. Calude (University of Auckland, New Zealand) Jose Felix Costa (IST University of Lisbon, Portugal) Erzsebet Csuhaj-Varju (Eotvos Lorand University, Hungary) Alberto Dennunzio (Universita degli Studi di Milano-Bicocca, Italy) Marco Dorigo (Universite Libre de Bruxelles, Belgium) Jerome Durand-Lose (Universite d'Orleans, France) Masami Hagiya (University of Tokyo, Japan) Oscar H. Ibarra (University of California, Santa Barbara, USA, Co-Chair) Kazuo Iwama (Kyoto University, Japan) Jarkko Kari (University of Turku, Finland) Lila Kari (University of Western Ontario, Canada, Co-Chair) Viv Kendon (University of Leeds, UK) Kamala Krithivasan (IIT Madras, India) Giancarlo Mauri (Universita degli Studi di Milano-Bicocca, Italy) Yongli Mi (Hong Kong University of Science and Technology, China) Mario J. Perez-Jimenez (Universidad de Sevilla, Spain) Kai Salomaa (Queen's University, Canada) Hava Siegelmann (University of Massachusetts Amherst, USA) Susan Stepney (University of York, UK) Damien Woods (California Institute of Technology, USA) Byoung-Tak Zhang (Seoul National University, Korea) From weng at cse.msu.edu Thu Feb 6 15:29:25 2014 From: weng at cse.msu.edu (Juyang Weng) Date: Thu, 06 Feb 2014 15:29:25 -0500 Subject: Connectionists: How the brain works In-Reply-To: References: Message-ID: <52F3F0A5.5040907@cse.msu.edu> I agree with Thomas in terms of "theory must find its way into mainstream neuroscience, much more than it currently is". However, a radio, or a more sophisticated system like a space shuttle, is a bad example for us to consider how the brain works, because it lures us to miss the boat. Why? A radio or space shuttle does not learn autonomously, and neither does a multilayer perceptron in a typical context where it is taught in a university class. For the same reason, with respect, I think that Steven Pinker's book "How the Mind Works" also misses the boat. I did enjoy the rich published information in his book and I told him so when he and I met last year. But Pinker's book hardly ever discusses how the brain learns autonomously. 
His discussion of the brain's plasticity is superficial, mainly an account of experimental facts without linking them to mechanisms of how the mind works. As far as I understand from my (controversial?) brain theory, brain plasticity is a small window for us to peek into the brain's "first principle". The first principle for us to understand how the brain works, as I alluded to in my last email to this list, is: Principle 1: Development --- How the brain learns autonomously throughout its lifetime. A brain has sensory neurons (receptors) and muscle neurons firing all the time, before birth and after birth. Any theory about the brain must explain how this biological machine learns autonomously. Still remember the "cluttered environment" I wrote about earlier? A fetus brain or a baby brain must deal with a cluttered environment directly! I leave others to fill in other principles. Note: How the brain works cannot be the first principle, since how it learns determines how it works. We might fill in other brain principles as we continue this discussion. -John On 1/26/14 11:39 PM, Thomas Trappenberg wrote: > > Some of our discussion seems to be about 'How the brain works'. I am > of course not smart enough to answer this question. So let me try > another system. > > How does a radio work? I guess it uses an antenna to sense an > electromagnetic wave that is then amplified so that an electromagnet > can drive a membrane to produce an airwave that can be sensed by our > ear. Hope this captures some essential aspects. > > Now that you know, can you repair it when it doesn't work? > > I believe that there can be explanations on different levels, and I > think they can be useful in different circumstances. Maybe my above > explanation is good for generally curious people, but if you want to > build a super good sounding radio, you need to know much more about > electronics, even quantitatively. And of course, if you want to > explain how the electromagnetic force comes about you might need to > dig down into quantum theory. And to take my point into the other > direction, even knowing all the electronic components in a computer > does not tell you how a word processor works. > > A multilayer perceptron is not the brain, but it captures some > interesting insight into how mappings between different > representations can be learned from examples. Is this how the brain > works? It clearly does not explain everything, and I am not even sure > if it really captures much if at all of the brain. But if we want to > create smarter drugs then we have to know how ion channels and cell > metabolism work. And if we want to help stroke patients, we have to > understand how the brain can be reorganized. We need to work on > several levels. > > Terry Sejnowski told us that the new Obama initiative is like the moon > project. When this program was initiated we had no idea how to > accomplish this, but dreams (and money) can be very motivating. > > This is a nice point, but I don't understand what a connection plan > would give us. I think without knowing precisely where and how strong > connections are made, and how each connection would influence a > postsynaptic or glia etc cells, such information is useless. So why > not have the goal of finding a cure for epilepsy? > > I do strongly believe we need theory in neuroscience. Only being > descriptive is not enough. BTW, theoretical physics is physics. > Physics would not be at the level where it is without theory. And of > course, theory is meaningless without experiments. 
I think our point > on this list is that theory must find its way into mainstream > neuroscience, much more than it currently is. I have the feeling that > we are digging our own grave by infighting and some narrow 'I know it > all' mentality. Just try to publish something which is not mainstream > even so it has solid experimental backing. > > Cheers, Thomas > -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- From c.hilgetag at gmail.com Thu Feb 6 16:30:19 2014 From: c.hilgetag at gmail.com (Claus C. Hilgetag) Date: Thu, 6 Feb 2014 22:30:19 +0100 Subject: Connectionists: Brain Connectivity Workshop 2014 - registration now open! References: Message-ID: We cordially invite you to the Brain Connectivity Workshop 2014 which will take place in Hamburg, Germany, from 4th to 6th June 2014 (Welcome reception on the evening of 3rd June). See website (http://sfb936.net/index.php/events/brain-connectivity-workshop-2014) for details. The Brain Connectivity Workshop (http://www.brain-connectivity-workshop.org) is an annual meeting that is dedicated to discussing the latest approaches and findings in the field of brain connectivity studies within a small group of experts. Hence attendance is limited to 140 participants, and the workshop strongly aims to facilitate exchange and discussion of ideas by a number of unique features. Important dates at a glance: * Registration: Now open - early registration until March 31, 2014, http://www.sfb936.net/index.php/events/brain-connectivity-workshop-2014. Because of the limited number of participants, early registration is strongly advised; seats will be allocated on a first come, first served principle. Speakers do not have to register. * The HBM 2014 meeting will take place in Hamburg, from June 8th to 12th (http://www.humanbrainmapping.org/i4a/pages/index.cfm?pageid=3565), right after the Brain Connectivity Workshop. We look forward to seeing you. Prof. Claus C. Hilgetag, PhD (University Medical Center Hamburg-Eppendorf, Dept. of Computational Neuroscience) Prof. Dr. Klaas E. Stephan (University of Zurich (UZH) & Swiss Federal Institute of Technology (ETH) Zurich) Prof. Dr. Andreas K. Engel (University Medical Center Hamburg-Eppendorf, Dept. of Neurophysiology and Pathophysiology) Prof. Dr. Christian Gerloff (University Medical Center Hamburg-Eppendorf, Clinic and Policlinic of Neurology) Hilke Marina Petersen (University Medical Center Hamburg-Eppendorf, Dept. of Neurophysiology and Pathophysiology) ? Management The workshop is supported by DFG Sonderforschungsbereich 936 "Multi-Site Communication in the Brain" (SFB 936), www.sfb936.net. Apologies for cross-posting. -------------- next part -------------- An HTML attachment was scrubbed... URL: From weng at cse.msu.edu Thu Feb 6 17:43:11 2014 From: weng at cse.msu.edu (Juyang Weng) Date: Thu, 06 Feb 2014 17:43:11 -0500 Subject: Connectionists: How the brain works In-Reply-To: References: Message-ID: <52F40FFF.9080106@cse.msu.edu> You can use the Principle 1 to test. Dr. Karl Friston's Generalized Filtering model misses the boat too if his model is meant for the brain. 
The Bayesian framework fails totally: at any time, there are many more input components (like pixels) in the eyes that are irrelevant to actions (e.g., from the many objects in a cluttered scene) than those that are relevant (e.g., pixels from a particular object of interest). But which object is of interest changes on the fly! By the way, our DN model deals with Principle 1. Since this is useful for the list, I am CCing the list. -John On 2/6/14 4:31 PM, Jayanta Dutta wrote: > > Dear Dr. Weng, > > Regarding how the brain works I wanted to know your opinion about Dr. > Karl Friston's Generalized Filtering model. Your opinions will be very > important for me. > > > -- > *Regards* > *Jayanta Kumar Dutta* > *Graduate Research Assistant* > *Computational Intelligence Lab* > *EECE Department* > *University of Memphis* -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From weng at cse.msu.edu Thu Feb 6 18:54:55 2014 From: weng at cse.msu.edu (Juyang Weng) Date: Thu, 06 Feb 2014 18:54:55 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> Message-ID: <52F420CF.7060006@cse.msu.edu> Juergen: You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first learning network that used the deep learning idea for learning from cluttered scenes (Cresceptron ICCV 1992 and IJCV 1997), I gave up this static deep learning idea later after we considered Principle 1: Development. The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes as required by Principle 1. The brain is not a cascade of recurrent NN. I quote from Antonio Damasio's "Descartes' Error", p. 93: "But intermediate communication occurs also via large subcortical nuclei such as those in the thalamus and basal ganglia, and via small nuclei such as those in the brain stem." Of course, the cerebral pathways themselves are not a stack of recurrent NN either. There are many fundamental reasons for that. I give only one here, based on our DN brain model: Looking at a human, the brain must dynamically attend to the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attends to the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns against a clean background. This is still an overworked pattern recognition problem, not a vision problem. -John On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: > Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. > > A popular Deep Learning NN is the Deep Belief Network (2006) [1,2]. 
A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. > > Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. > > The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. > > Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. > > > References: > > [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf > > [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks > > [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html > > [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html > > [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html > > [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf > > [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf > > [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html > > [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. 
http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf > > > > Juergen Schmidhuber > http://www.idsia.ch/~juergen/whatsnew.html -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- From Colin.Wise at uts.edu.au Thu Feb 6 19:43:30 2014 From: Colin.Wise at uts.edu.au (Colin Wise) Date: Fri, 7 Feb 2014 11:43:30 +1100 Subject: Connectionists: AAI Short Course - 'Data Mining - an Introduction' - Thursday 27 February 2014 In-Reply-To: <8112393AA53A9B4A9BDDA6421F26C68A016E669720A7@MAILBOXCLUSTER.adsroot.uts.edu.au> References: <8112393AA53A9B4A9BDDA6421F26C68A016E669720A7@MAILBOXCLUSTER.adsroot.uts.edu.au> Message-ID: <8112393AA53A9B4A9BDDA6421F26C68A016E669720AA@MAILBOXCLUSTER.adsroot.uts.edu.au> Dear Colleague, AAI Short Course - 'Data Mining - an Introduction' - Thursday 27 February 2014 https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1550&EventID=1281 AAi's short course 'Data Mining - an Introduction' may be of interest to you or others in your organisation or network. The Data Mining short course is an introduction to the foundations of data mining and knowledge discovery methods and their application to practical problems. It brings together the state-of-the-art research and practical techniques in data mining. Short Course is particularly useful for All those involved in Data Mining for their organisation: * Industry Practitioners wanting to get into data mining * Managers wanting to know what data mining is about * Students, Researchers, Academic Short Course topics * Introduction to data mining concepts and the broader context * The CRISP-DM approach to data mining * Basics of data * Classification and evaluating classifiers * Several classification methods including decision trees, random forest and support vector machines * Clustering Short Course outcomes Upon completion of this course students will: * Understand how data mining fits into the business and society context * Understand key terms and concepts in data mining * Be familiar with an approach for structuring data mining projects * Understand the basics of working with data * Understand the scope and limitations of several state-of-the-art mining Please register here https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1550&EventID=1281 An important foundation short course in the AAI series of advanced data analytic short courses - please view this short course and others here http://analytics.uts.edu.au/shortcourses/schedule.html We are happy to discuss at your convenience. Thank you and regards. Colin Wise Operations Manager Faculty of Engineering & IT The Advanced Analytics Institute [cid:image001.png at 01CF23F2.E8BD4690] University of Technology, Sydney Blackfriars Campus Building 2, Level 1 Tel. +61 2 9514 9267 M. 
0448 916 589 Email: Colin.Wise at uts.edu.au AAI: www.analytics.uts.edu.au/ Reminder - AAI Short Course - Marketing Analytics - an Introduction - Thursday 6 March 2014 AAI Education and Training Short Courses Survey - you may be interested in completing our AAI Survey at http://analytics.uts.edu.au/shortcourses/survey.html AAI Email Policy - should you wish to not receive this periodic communication on Data Analytics Learning please reply to our email (to sender) with UNSUBSCRIBE in the Subject. We will delete you from our database. Thank you for your past and future support. UTS CRICOS Provider Code: 00099F DISCLAIMER: This email message and any accompanying attachments may contain confidential information. If you are not the intended recipient, do not read, use, disseminate, distribute or copy this message or attachments. If you have received this message in error, please notify the sender immediately and delete this message. Any views expressed in this message are those of the individual sender, except where the sender expressly, and with authority, states them to be the views of the University of Technology Sydney. Before opening any attachments, please check them for viruses and defects. Think. Green. Do. Please consider the environment before printing this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10489 bytes Desc: image001.png URL: From retienne at jhu.edu Fri Feb 7 11:51:50 2014 From: retienne at jhu.edu (retienne) Date: Fri, 07 Feb 2014 11:51:50 -0500 Subject: Connectionists: 2014 Telluride Neuromorphic Cognition Engineering Workshop; Call for Participation Message-ID: <52F50F26.7000505@jhu.edu> Telluride Neuromorphic Cognition Engineering Workshop 2014 Neuromorphic Cognition Engineering Workshop: The 20th Anniversary Edition Telluride, Colorado, June 29th - July 19th, 2014 CALL FOR APPLICATIONS: Deadline is April 2nd, 2014 NEUROMORPHIC COGNITION ENGINEERING WORKSHOP www.ine-web.org Sunday June 29th - Saturday July 19th, 2014, Telluride, Colorado We invite applications for a three-week summer workshop that will be held in Telluride, Colorado. Sunday June 30th - Saturday July 19th, 2014. The application deadline is Wednesday, April 2nd and application instructions are described at the bottom of this document.? *This is the 20th Anniversary of the Workshop, and ~25 years since the conception of the "Meadian" version of Neuromorphic Engineering. Hence, we plan a celebratory Workshop, where some of the originators and benefactors of the field will participate in discussions of the successes and challenges**over the past 25 years and prognosticate the potential for the next 25 years. * The 2014 Workshop and Summer School on Neuromorphic Engineering is sponsored by the National Science Foundation, Institute of Neuromorphic Engineering, Qualcomm Corporation, The EU-Collaborative Convergent Science Network (CNS-II), University of Maryland - College Park, Institute for Neuroinformatics -- University of Zurich and ETH Zurich, Georgia Institute of Technology, Johns Hopkins University, Boston University, University of Western Sydney and the Salk Institute. 
Directors: * Cornelia Fermuller, University of Maryland, College Park * Ralph Etienne-Cummings, Johns Hopkins University * Shih-Chii Liu, Institute of Neuroinformatics, UNI/ETH Zurich, Switzerland * Timothy Horiuchi, University of Maryland, College Park Workshop Advisory Board: * Andreas Andreou, Johns Hopkins University * Andre van Schaik, University Western Sydney, Australia * Avis Cohen, University of Maryland * Barbara Shinn-Cunningham, Boston University * Giacomo Indiveri, Institute of Neuroinformatics, Uni/Eth Zurich, Switzerland * Jonathan Tapson, University Western Sydney, Australia * Malcolm Slaney, Microsoft Research * Jennifer Hasler, Georgia Institute of Technology * Rodney Douglas, Institute of Neuroinformatics, Uni/Eth Zurich, Switzerland * Shihab Shamma, University of Maryland * Tobi Delbruck, Institute of Neuroinformatics, Uni/Eth Zurich, Switzerland Previous year workshop can be found at:http://ine-web.org/workshops/workshops-overview/index.htmland the workshop wiki is athttps://neuromorphs.net/ GOALS: Neuromorphic engineers design and fabricate artificial neural systems whose organizing principles are based on those of biological nervous systems. Over the past 18 years, this research community has focused on the understanding of low-level sensory processing and systems infrastructure; efforts are now expanding to apply this knowledge and infrastructure to addressing higher-level problems in perception, cognition, and learning. In this 3-week intensive workshop and through the Institute for Neuromorphic Engineering (INE), the mission is to promote interaction between senior and junior researchers; to educate new members of the community; to introduce new enabling fields and applications to the community; to promote on-going collaborative activities emerging from the Workshop, and to promote a self-sustaining research field. FORMAT: The three week summer workshop will include background lectures on systems and cognitive neuroscience (in particular sensory processing, learning and memory, motor systems and attention), practical tutorials on emerging hardware design, mobile robots, hands-on projects, and special interest groups. Participants are required to take part and possibly complete at least one of the projects proposed. They are furthermore encouraged to become involved in as many of the other activities proposed as interest and time allow. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, some of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. Projects and interest groups meet in the late afternoons, and after dinner. In the early afternoon there will be tutorials on a wide spectrum of topics, including analog VLSI, mobile robotics, vision and auditory systems, central-pattern-generators, selective attention mechanisms, cognitive systems, etc. *2014 TOPIC AREAS:* 1. *Human Cognition: Decoding Perceived, Attended, Imagined Acoustic Events and Human-Robot Interfaces*Project Leaders: Shihab Shamma (UM-College Park), Malcolm Slaney (Microsoft), Barbara Shinn-Cunningham (Boston U), Edward Lalor (Trinity College, Dublin) 2. *Motion and Action Processing on Wearable Devices*Project Leaders: Michael Pfeiffer (INI-UZH), Ryad Benosman (UPMC, Paris), Garrick Orchard (NUS, Singapore), and Cornelia Ferm?ller (UMCP) 3. 
*Planning with Dynamic Neural Fields: from Sensorimotor Dynamics to Large-Scale behavioral Search*Project Leaders: Yulia Sandamirskaya (RUB, Bochum) and Erik Billing (U. Skovde) 4. *Neuromorphic Olympics*Project Leaders: Jorg Conradt (TUM, Munich) and Terry Stewart (U. Waterloo) 5. *Embodied Neuromorphic Real-World Architectures of Perception, Cognition and Action*Project Leaders: Andreas Andreou (JHU) and Paul Verschure (UPF, Barcelona) 6. *Terry Sejnowski (Salk Institute) -- Computational Neuroscience (invitational mini-workshop)* LOCATION AND ARRANGEMENTS: The summer school will take place in the small town of Telluride, 9000 feet high in southwest Colorado, about 6 hours drive away from Denver (350 miles). Great Lakes Aviation and America West Express airlines provide daily flights directly into Telluride. All facilities within the beautifully renovated public school building are fully accessible to participants with disabilities. Participants will be housed in ski condominiums, within walking distance of the school. Participants are expected to share condominiums. The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to work on specific projects, talk about their own work or bring demonstrations to Telluride (e.g. robots, chips, software). Wireless internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of PCs running LINUX and Microsoft Windows for the workshop projects. We encourage participants to bring along their personal laptop. No cars are required. Given the small size of the town, we recommend that you do not rent a car. Bring hiking boots, warm clothes, rain gear, and a backpack, since Telluride is surrounded by beautiful mountains. Unless otherwise arranged with one of the organizers, we expect participants to stay for the entire duration of this three week workshop. FINANCIAL ARRANGEMENTS: Notification of acceptances will be mailed out around the April 15th, 2014. The Workshop covers all your accommodations and facilities costs for the 3 weeks duration. You are responsible for your own travel to the Workshop, however, sponsored fellowships will be available as described below to further subsidize your cost. Registration Fees: For expenses not covered by federal funds, a Workshop registration fee is required. The fee is TBD per participant for the 3-week Workshop. This is expected from all participants at the time of acceptance. Accommodations: The cost of a shared condominium, typically a bedroom in a shared condo for senior participants or a shared room for students, will be covered for all academic participants. Upgrades to a private rooms or condos will cost extra. Participants from National Laboratories and Industry are expected to pay for these condominiums. Fellowships: This year we will offer two Fellowships to subsidize your costs: 1. Qualcomm Corporation Fellowship: Three non-corporate participants will have their accommodation and registration fees ($2750) directly covered by Qualcomm, and will be reimbursed for travel costs up to $500. 
Additional generous funding from Qualcomm will provide $5000 to help organize and stage the Workshop. 2. EU-CSNII Fellowship (http://csnetwork.eu/) which is funded by the 7th Research Framework Program FP7-ICT-CSNII-601167: The top 8 EU applicants will be reimbursed for their registration fees ($1250), subsistence/travel subsidy (up to Euro 2000) and accommodations cost ($1500). The registration and accommodation costs will go directly to the INE (the INE will reimburse the participant's registration fees after receipt from CSNII), while the subsistence/travel reimbursement will be provided directly to the participants by the CSNII at the University of Pompeu Fabra, Barcelona, Spain. HOW TO APPLY: Applicants should be at the level of graduate students or above (i.e. postdoctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage women and minority candidates to apply. Anyone interested in proposing or discussing specific projects should contact the appropriate topic leaders directly. The application website is (after February 7th, 2014): ine-web.org/telluride-conference-2014/apply-info Application information needed: * Contact email address. * First name, Last name, Affiliation, valid e-mail address. * Curriculum Vitae (a short version, please). * One page summary of background and interests relevant to the workshop, including possible ideas for workshop projects. Please indicate which topic areas you would most likely join. * Two letters of recommendation (uploaded directly by references). *Applicants will be notified by e-mail.* 7th February, 2014 - Applications accepted on website 2nd April, 2014 - Applications Due 15th April, 2014 - Notification of Acceptance ------------------------- -- ------------------------------------------------- Ralph Etienne-Cummings Professor Department of Electrical and Computer Engineering The Johns Hopkins University 105 Barton Hall 3400 N. Charles Street Baltimore, MD 21218 Tel: (410) 516 3494 Fax: (410) 516 2939 Email: retienne at jhu.edu URL: http://etienne.ece.jhu.edu/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dubuf at ualg.pt Fri Feb 7 12:26:12 2014 From: dubuf at ualg.pt (Hans du Buf) Date: Fri, 07 Feb 2014 17:26:12 +0000 Subject: Connectionists: Postdoc position, 12 months, deep neural networks Message-ID: <52F51734.1020307@ualg.pt> The Vision Laboratory is a small group (2 postdocs plus 7 PhD students plus 3 MSc students) which develops models of visual perception. Our models of simple, complex and end-stopped cells run in real time on multi-core CPUs and on GPUs. The keypoints show state-of-the-art repeatability. Apart from naive Bayes nearest-neighbour classification, we are developing NN hierarchies for object detection and recognition in complex scenes on mobile robots. We need a postdoc who is specialized in NN architectures. Keywords: hierarchies, redundancy, sparse coding, Gripon-Bessou NNs, learning, Naive Bayes, CPU and GPU programming, and cognitive robotics. Start: preferably March or soon after. Duration: 12 months. Remuneration: 1495 euro/month - exempt from taxation. Ample money: for computers and conferences. Location: Faro, sunny Algarve, Portugal. Interested postdocs should apply BEFORE March 7, 2014! Pls send email with CV to Prof. Joao Rodrigues (jrodrig at ualg.pt) or to me (dubuf at ualg.pt). Regards, Hans -- ======================================================================= Prof.dr.ir. J.M.H. 
du Buf mailto:dubuf at ualg.pt Dept. of Electronics and Computer Science - FCT, University of Algarve, fax (+351) 289 818560 Campus de Gambelas, 8000 Faro, Portugal. tel (+351) 289 800900 ext 7761 ======================================================================= UALG Vision Laboratory: http://w3.ualg.pt/~dubuf/vision.html ======================================================================= From bernabe at imse-cnm.csic.es Fri Feb 7 04:00:19 2014 From: bernabe at imse-cnm.csic.es (bernabe) Date: Fri, 07 Feb 2014 10:00:19 +0100 Subject: Connectionists: One year Post-Doc position on Neuromorphic Engineering Message-ID: <52F4A0A3.1030209@imse-cnm.csic.es> Apologies for Cross-Posting --------------------------------------- The Neuromorphic group at the Sevilla Microelectronics Insitute has an opening for a one year Post-Doc position to work in a project that combines Nanotechnology principles with conventional digital electronics. The core of the project exploits Event-Based vision sensing and processing using AER (Address Event Representation) for object recognition and scene analysis, bridging with potential new nanotechnology devices for addressing compact learning schemes (such as STDP - spike timing dependent plasticity). The successful candidate would work with FPGAs, DVS (Dynamic Vision Sensor) retina chips, AER convolution modules and SpiNNaker systems. Gross salary is 32k euros per year. Taxes for this salary in Spain are around 24% (can be lower, depending on personal conditions). Although this position is for one year only, there are possibilities for continuation through other similar projects, depending on the candidate's interests. Interested candidates, please contact bernabe at imse-cnm.csic.es at your earliest convenience. Please distribute and post this announcement. -- -- ------------------------------------------------------------------------- Bernabe Linares-Barranco, PhD, IEEE Fellow Full Professor (Profesor de Investigacion) CSIC Instituto Microelectronica Sevilla (IMSE) Phone: 34-954-466643/66 National Microelectronics Center, CNM-CSIC Fax: 34-954-466600 Av. Americo Vespucio s/n E-mail: Bernabe.Linares(AT)imse-cnm.csic.es 41092 Sevilla, SPAIN URL: http://www.imse-cnm.csic.es/~bernabe ------------------------------------------------------------------------- From standage at queensu.ca Fri Feb 7 11:08:18 2014 From: standage at queensu.ca (Dominic Standage) Date: Fri, 7 Feb 2014 16:08:18 +0000 Subject: Connectionists: Paper on calcium dynamics, plasticity and learning Message-ID: <25A86FE23942874DA05B4C6980672A970D90F339@MP-DUP-MBX-01.AD.QUEENSU.CA> Dear colleagues - a new paper on synaptic plasticity and learning is available at the link below. We welcome any comments. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0086248 Standage D, Trappenberg T, Blohm G (2014) Calcium-Dependent Calcium Decay Explains STDP in a Dynamic Model of Hippocampal Synapses. PLoS ONE 9(1): e86248. doi:10.1371/journal.pone.0086248 Abstract It is widely accepted that the direction and magnitude of synaptic plasticity depends on post-synaptic calcium flux, where high levels of calcium lead to long-term potentiation and moderate levels lead to long-term depression. At synapses onto neurons in region CA1 of the hippocampus (and many other synapses), NMDA receptors provide the relevant source of calcium. In this regard, post-synaptic calcium captures the coincidence of pre- and post-synaptic activity, due to the blockage of these receptors at low voltage. 
Previous studies show that under spike timing dependent plasticity (STDP) protocols, potentiation at CA1 synapses requires post-synaptic bursting and an inter-pairing frequency in the range of the hippocampal theta rhythm. We hypothesize that these requirements reflect the saturation of the mechanisms of calcium extrusion from the post-synaptic spine. We test this hypothesis with a minimal model of NMDA receptor-dependent plasticity, simulating slow extrusion with a calcium-dependent calcium time constant. In simulations of STDP experiments, the model accounts for latency-dependent depression with either post-synaptic bursting or theta-frequency pairing (or neither) and accounts for latency-dependent potentiation when both of these requirements are met. The model makes testable predictions for STDP experiments and our simple implementation is tractable at the network level, demonstrating associative learning in a biophysical network model with realistic synaptic dynamics. Dominic Standage Postdoctoral Research Fellow Department of Biomedical and Molecular Sciences / Centre for Neuroscience Studies Queen's University, Botterell Hall, Room 230 Kingston, Ontario, Canada K7L 3N6 Tel: (613) 533-6000 (ext 77446) Email: standage at queensu.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From ale at sissa.it Fri Feb 7 12:55:43 2014 From: ale at sissa.it (Alessandro Treves) Date: Fri, 07 Feb 2014 18:55:43 +0100 Subject: Connectionists: Postdoc Sought to Venture beyond the Grid, at SISSA, Trieste, Italy Message-ID: <20140207185543.Horde.lRmTKB8V4mxS9R4fHZaAQjA@webmail.sissa.it> A postdoctoral position is available from April 1st, 2014, for 2 years, to study the dynamics of spatial and semantic memory in complex geometries. The research, to be carried out within the LIMBO group at SISSA, is part of a collaboration coordinated by Edvard Moser in Trondheim, including also Richard Morris in Edinburgh and Jorg Conradt in Munich and funded by the EU contract GRIDMAP. At SISSA current research, mainly by Eugenio Urdapilleta and Federico Stella, focuses on mathematical and computational analyses of the representations that can be established in environments more complex than standard 2D flat surfaces; opening the way for a new postdoc to link purely spatial with semantic structures and analyze the ensuing dynamics. The ideal candidate brings into the project a perspective different from ours, is a proficient programmer and a creative thinker. A lack of familiarity with grid cells, the hippocampus and spatial navigation may be advantageous, if combined with an open mind and plastic synapses. The SISSA campus overlooks the gulf, and is possibly the best place where to do research in Italy. 
Enquiries can be directed to me at ale at sissa.it, although eventually formal applications will have to be addressed to the SISSA Director at assegni.ricerca at sissa.it. Application deadline: March 10th, 2014 -- Alessandro Treves http://people.sissa.it/~ale/limbo.html SISSA - Cognitive Neuroscience, via Bonomea 265, 34136 Trieste, Italy and Master in Complex Actions http://www.mca.sissa.it/ From bernabe at imse-cnm.csic.es Fri Feb 7 07:29:56 2014 From: bernabe at imse-cnm.csic.es (bernabe) Date: Fri, 07 Feb 2014 13:29:56 +0100 Subject: Connectionists: four year young post-doc program Message-ID: <52F4D1C4.3010102@imse-cnm.csic.es> Apologies for cross-posting ------------------------------------- The neuromorphic group at the Sevilla Microelectronics Institute, Spain, welcomes applications for participating in a government-funded 4-year young post-doc program. The program is a two-step program, two years each. The present announcement is for the first two-year step. For the second two-year step, there will be a continuation call by the Spanish Government in due time. Applicants must have defended their PhD after 1-Sep-2009. Gross salary will be around 27,000€ (to be confirmed). Applications are submitted by the destination research group and the corresponding institution, and are evaluated taking into account the candidate's CV and the research trajectory of the hosting research group. Deadline for submitting the application is 24-February-2014. Applications will be evaluated nationwide. From previous years' experience, the evaluation process takes a minimum of 6 months. Successful candidates would work in one of the research lines of the hosting research group. The Neuromorphic Group at the Sevilla Microelectronics Institute works on AER (Address Event Representation) event-driven vision systems, developing chips, FPGA systems, as well as theoretical analyses for artificial vision, targeting applications like object recognition and scene analysis for robotics and high speed vision. Interested candidates, please contact bernabe at imse-cnm.csic.es immediately. Please distribute and post this announcement. -- -- ------------------------------------------------------------------------- Bernabe Linares-Barranco, PhD, IEEE Fellow Full Professor (Profesor de Investigacion) CSIC Instituto Microelectronica Sevilla (IMSE) Phone: 34-954-466643/66 National Microelectronics Center, CNM-CSIC Fax: 34-954-466600 Av.
Americo Vespucio s/n E-mail: Bernabe.Linares(AT)imse-cnm.csic.es 41092 Sevilla, SPAIN URL: http://www.imse-cnm.csic.es/~bernabe ------------------------------------------------------------------------- From irodero at cac.rutgers.edu Sat Feb 8 18:24:32 2014 From: irodero at cac.rutgers.edu (Ivan Rodero) Date: Sat, 8 Feb 2014 18:24:32 -0500 Subject: Connectionists: Bigsystem 2014 at HPDC - Call for Papers (Papers due Feb 15) In-Reply-To: <5D63B8C8-3FD3-4241-9021-CBD7F84DCAA7@rutgers.edu> References: <51EC7783-DCAD-4364-B1DC-576C726BAA31@rutgers.edu> <6F339279-23CD-4553-95DF-B1F906F948E3@rutgers.edu> <0957F75F-5AB9-4144-B62D-87D225B34E42@rutgers.edu> <22CF346C-98EC-4D5B-9500-D0B9FE60551A@rutgers.edu> <99D17D7E-34C0-47B7-B641-C756E67D169A@rutgers.edu> <87202061-93AC-4066-89E7-77976097AFAB@rutgers.edu> <79403393-1690-4DCB-855A-1EE231D5ED2B@rutgers.edu> <5D63B8C8-3FD3-4241-9021-CBD7F84DCAA7@rutgers.edu> Message-ID: <814D5AFC-14D5-47D8-99C1-29CC55E9A272@rutgers.edu> -------------------------------------------------------------------------------------------------- Please accept our apologies if you receive multiple copies of this CFP! -------------------------------------------------------------------------------------------------- =================== BigSystem 2014 =================== International Workshop on Software-Defined Ecosystems (BigSystem 2014) http://2014.bigsystem.org/ (co-located with ACM HPDC 2014, Vancouver, Canada, June 23-27, 2014) With the emerging technology breakthrough in computing, networking, storage, mobility, and analytics, the boundary of systems is undergoing fundamental change and is expected to logically disappear. It is the time to rethink system design and management without boundaries towards software-defined ecosystems, the Big System. The basic principles of software-defined mechanisms and policies have witnessed great success in clouds and networking. We are expecting broader, deeper, and greater evolution and confluence towards holistic software-defined ecosystems. BigSystem 2014 provides an open forum for researchers, practitioners, and system builders to exchange ideas, discuss, and shape roadmaps towards such big systems in the era of big data. Topics of Interest =================== * Architecture of software-defined ecosystems * Management of software-defined ecosystems * Software-defined principles * Software-defined computing * Software-defined networking * Software-defined storage * Software-defined security * Software-defined services * Software-defined mobile computing/cloud * Software-defined cyber-physical systems * Interaction and confluence of software-defined modalities * Virtualization * Hybrid systems, cross-layer design and management * Security, privacy, reliability, trustworthiness * Grand challenges in big systems * Big data infrastructure and engineering * HPC, big data, and computational science & engineering applications * Autonomic computing * Cloud computing and services * Emerging technologies Paper Submission Guidelines =================== Authors are invited to submit technical papers of at most 8 pages in PDF format, including figures and references. Short position papers (4 pages) are also encouraged. Papers should be formatted in the ACM Proceedings Style (double column text using single spaced 10 point size on 8.5 x 11 inch page, http://www.acm.org/sigs/publications/proceedings-templates) and submitted via EasyChair submission site. No changes to the margins, spacing, or font sizes as specified by the style file are allowed. 
Accepted papers will appear in the workshop proceedings, and will be incorporated into the ACM Digital Library. A limited number of papers will be accepted as posters. Selected distinguished papers, after further revisions, will be considered a special issue in a high quality journal. EasyChair submission site, https://www.easychair.org/conferences/?conf=bigsystem2014 Important Dates =================== * Papers Due Feb. 15th, 2014 * Notification Mar. 30th, 2014 * Camera-Ready April 15th, 2014 =================== Organization =================== Steering Committee =================== Rajkumar Buyya, University of Melbourne Jeff Chase, Duke Univeristy Jose Fortes, University of Florida Geoffrey Fox, Indiana University Hai Jin, Huazhong University of Science and Technology Chung-Sheng Li, IBM Research Xiaolin (Andy) Li, University of Florida Manish Parashar, Rutgers University General Chairs =================== Geoffrey Fox, Indiana University Manish Parashar, Rutgers University Program Chairs =================== Chung-Sheng Li, IBM Research Xiaolin (Andy) Li, University of Florida Publicity Chairs =================== Yong Chen, Texas Tech University Ivan Rodero, Rutgers University Web Chair =================== Ze Yu, University of Florida Technical Program Committee =================== Gagan Agrawal, Ohio State University Henri E. Bal, Vrije University Ilya Baldin, RENCI/UNC Chapel Hill Viraj Bhat, Yahoo Roger Barga, Microsoft Research Micah Beck, University of Tennessee Ali Butt, Virginia Tech Jiannong Cao, Hong Kong Polytechnic University Claris Castillo, RENCI Umit Catalyurek, Ohio State University Yong Chen, Texas Tech University Peter Dinda, Northwestern University Zhihui Du, Tsinghua University Renato Figueiredo, University of Florida Yashar Ganjali, University of Toronto William Gropp, UIUC Guofei Gu, Texas A&M University John Lange, University of Pittsburgh Junda Liu, Google David Meyer, Brocade Rajesh Narayanan, Dell Research Ioan Raicu, IIT Lavanya Ramakrishnan, Lawrence Berkeley National Lab Ivan Rodero, Rutgers University Ivan Seskar, Rutgers University Jian Tang, Syracuse University Tai Won Um, ETRI Jun Wang, University of Central Florida Kuang-Ching Wang, Clemson University Jon Weissman, University of Minnesota Dongyan Xu, Purdue University Vinod Yegneswaran, SRI Jianfeng Zhan, Chinese Academy of Sciences Han Zhao, Qualcomm Research ============================================================= Ivan Rodero, Ph.D. Rutgers Discovery Informatics Institute (RDI2) NSF Center for Cloud and Autonomic Computing (CAC) Department of Electrical and Computer Engineering Rutgers, The State University of New Jersey Office: CoRE Bldg, Rm 625 94 Brett Road, Piscataway, NJ 08854-8058 Phone: (732) 993-8837 Fax: (732) 445-0593 Email: irodero at rutgers dot edu WWW: http://nsfcac.rutgers.edu/people/irodero ============================================================= -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From erhard.wieser at tum.de Sun Feb 9 10:30:06 2014 From: erhard.wieser at tum.de (Wieser, Erhard) Date: Sun, 9 Feb 2014 15:30:06 +0000 Subject: Connectionists: Call for Papers: ICDL-EPIROB 2014 Message-ID: <2cedd199-664c-4045-a409-babba671aa92@BADWLRZ-SWHBT1.ads.mwn.de> ======================================================== Call for Papers & Call for Tutorials and Special Sessions IEEE ICDL-EPIROB 2014 The Fourth Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics Palazzo Ducale, Genoa, Italy October 13-16, 2014 http://www.icdl-epirob.org/ == Conference description The past decade has seen the emergence of a new scientific field that studies how intelligent biological and artificial systems develop sensorimotor, cognitive and social abilities, over extended periods of time, through dynamic interactions with their physical and social environments. This field lies at the intersection of a number of scientific and engineering disciplines including Neuroscience, Developmental Psychology, Developmental Linguistics, Cognitive Science, Computational Neuroscience, Artificial Intelligence, Machine Learning, and Robotics. Various terms have been associated with this new field such as Autonomous Mental Development, Epigenetic Robotics, Developmental Robotics, etc., and several scientific meetings have been established. The two most prominent conference series of this field, the International Conference on Development and Learning (ICDL) and the International Conference on Epigenetic Robotics (EpiRob), are now joining forces for the fourth time and invite submissions for a joint conference in 2014, to explore and extend the interdisciplinary boundaries of this field. == Keynote speakers TBA == Call for Submissions We invite submissions for this exciting window into the future of developmental sciences. Submissions which establish novel links between brain, behavior and computation are particularly encouraged. == Topics of interest include (but are not limited to): * the development of perceptual, motor, cognitive, emotional, social, and communication skills in biological systems and robots; * embodiment; * general principles of development and learning; * interaction of nature and nurture; * sensitive/critical periods; * developmental stages; * grounding of knowledge and development of representations; * architectures for cognitive development and open-ended learning; * neural plasticity; * statistical learning; * reward and value systems; * intrinsic motivations, exploration and play; * interaction of development and evolution; * use of robots in applied settings such as autism therapy; * epistemological foundations and philosophical issues. Any of the topics above can be simultaneously studied from the neuroscience, psychology or modeling/robotic point of view. == Submissions will be accepted in several formats: 1. Full six-page paper submissions: Accepted papers will be included in the conference proceedings and will be selected for either an oral presentation or a featured poster presentation. Featured posters will have a 1 minute "teaser" presentation as part of the main conference session and will be showcased in the poster sessions. 2. Two-page poster abstract submissions: To encourage discussion of late-breaking results or for work that is not sufficiently mature for a full paper, we will accept 2-page abstracts. These submissions will NOT be included in the conference proceedings. Accepted abstracts will be presented during poster sessions. 3. 
Tutorials and workshops: We invite experts in different areas to organize either a tutorial or a workshop to be held on the first day of the conference. Tutorials are meant to provide insights into specific topics as well as overviews that will inform the interdisciplinary audience about the state-of-the-art in child development, neuroscience, robotics, or any of the other disciplines represented at the conference. A workshop is an opportunity to present a topic cumulatively. Workshop can be half- or full-day in duration including oral presentations as well as posters. Submission format: two pages. == Call for Tutorials and Workshops We invite experts in different areas to organize a tutorial or workshop, which will be held on the first day of the conference. Participants in tutorials and workshops are asked to register for the main conference. Tutorials are meant to provide insights into specific topics as well as overviews that will inform the interdisciplinary audience about the state-of-the-art in child development, neuroscience, robotics, or any of the other disciplines represented at the conference. A workshop is an opportunity to present a topic cumulatively. Workshop can be half- or full-day in duration including oral presentations as well as posters. Submissions (max. two pages) should be sent no later than April 30, 2014 to: * Lorenzo Natale (lorenzo.natale at iit.it) * Erol Sahin (erol at metu.edu.tr) including: * Title of tutorial or workshop; * Tutorial/workshop speaker(s), including short CVs/affiliations and other relevant information; * Concept of the tutorial/workshop; target audience or prerequisites. All proposals submitted will be subjected to a peer review process. == Important dates April 30th, 2014, paper submission deadline July 15th, 2014, author notification August 31st, 2014, final version (camera ready) due October 13th-16th, 2014, conference == Program committee General Chairs: Giorgio Metta (IIT, Genoa) Mark Lee (Univ. of Aberystwyth) Ian Fasel (Univ. of Arizona) Bridge Chairs: Giulio Sandini (IIT, Genoa) Masako Myowa-Yamakoshi (Univ. Kyoto) Program Chairs: Lorenzo Natale (IIT, Genoa) Erol Sahin (METU, Ankara) Publications Chairs: Francesco Nori (IIT, Genoa) Publicity Chairs: Katrin Lohan (Heriot-Watt, Edinburgh) Gordon Cheng (TUM, Munich) Local chairs: Alessandra Sciutti (IIT, Genoa) Vadim Tikhanoff (IIT, Genoa) Finance chairs: Andrea Derito (IIT, Genoa) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwang at cse.ohio-state.edu Sun Feb 9 12:27:48 2014 From: dwang at cse.ohio-state.edu (DeLiang Wang) Date: Sun, 9 Feb 2014 12:27:48 -0500 Subject: Connectionists: NEURAL NETWORKS, February 2014 Message-ID: <52F7BA94.8080506@cse.ohio-state.edu> Neural Networks - Volume 50, February 2014 http://www.journals.elsevier.com/neural-networks LETTERS Existence and global exponential stability of periodic solution for high-order discrete-time BAM neural networks Ancai Zhang, Jianlong Qiu, Jinhua She Cellular computational networks?A scalable architecture for learning the dynamics of large networked systems Bipul Luitel, Ganesh Kumar Venayagamoorthy ARTICLES Supervised orthogonal discriminant subspace projects learning for face recognition Yu Chen, Xiao-Hong Xu Direct Kernel Perceptron (DKP): Ultra-fast kernel ELM-based classification with non-iterative closed-form weight calculation Manuel Fern?ndez-Delgado, Eva Cernadas, Sen?n Barro, Jorge Ribeiro, Jos? 
Neves Batch gradient method with smoothing regularization for training of feedforward neural networks Wei Wu, Qinwei Fan, Jacek M. Zurada, Jian Wang, Dakun Yang, Yan Liu Compressed classification learning with Markov chain samples Feilong Cao, Tenghui Dai, Yongquan Zhang, Yuanpeng Tan Semi-supervised learning of class balance under class-prior change by distribution matching Marthinus Christoffel du Plessis, Masashi Sugiyama Robust support vector machine-trained fuzzy system Yahya Forghani, Hadi Sadoghi Yazdi Large-scale linear nonparallel support vector machine solver Yingjie Tian, Yuan Ping Finite time convergent learning law for continuous neural networks Isaac Chairez A Bayesian inverse solution using independent component analysis Jouni Puuronen, Aapo Hyv?rinen A one-layer recurrent neural network for constrained nonsmooth invex optimization Guocheng Li, Zheng Yan, Jun Wang Pointwise probability reinforcements for robust statistical inference Beno?t Fr?nay, Michel Verleysen A linear recurrent kernel online learning algorithm with sparse updates Haijin Fan, Qing Song Correcting and combining time series forecasters Paulo Renato A. Firmino, Paulo S.G. de Mattos Neto, Tiago A.E. Ferreira Hybrid fault diagnosis of nonlinear systems using neural parameter estimators E. Sobhani-Tehrani, H.A. Talebi, K. Khorasani From grlmc at urv.cat Sat Feb 8 03:20:37 2014 From: grlmc at urv.cat (GRLMC) Date: Sat, 8 Feb 2014 09:20:37 +0100 Subject: Connectionists: SSTiC 2014: February 15, 3rd registration deadline Message-ID: *To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject line* ********************************************************************* 2014 TARRAGONA INTERNATIONAL SUMMER SCHOOL ON TRENDS IN COMPUTING SSTiC 2014 Tarragona, Spain July 7-11, 2014 Organized by Rovira i Virgili University http://grammars.grlmc.com/sstic2014/ ********************************************************************* --- February 15, 3rd registration deadline --- ********************************************************************* AIM: SSTiC 2014 is the second edition in a series started in 2013. For the previous event, see http://grammars.grlmc.com/SSTiC2013/ SSTiC 2014 will be a research training event mainly addressed to PhD students and PhD holders in the first steps of their academic career. It intends to update them about the most recent developments in the diverse branches of computer science and its neighbouring areas. To that purpose, renowned scholars will lecture and will be available for interaction with the audience. SSTiC 2014 will cover the whole spectrum of computer science through 6 keynote lectures and 24 six-hour courses dealing with some of the most lively topics in the field. The organizers share the idea that outstanding speakers will really attract the brightest students. ADDRESSED TO: Graduate students from around the world. There are no formal pre-requisites in terms of the academic degree the attendee must hold. However, since there will be several levels among the courses, reference may be made to specific knowledge background in the description of some of them. SSTiC 2014 is also appropriate for more senior people who want to keep themselves updated on developments in their own field or in other branches of computer science. They will surely find it fruitful to listen and discuss with scholars who are main references in computing nowadays. REGIME: In addition to keynotes, 3 parallel sessions will be held during the whole event. 
Participants will be able to freely choose the courses they will be willing to attend as well as to move from one to another. VENUE: SSTiC 2014 will take place in Tarragona, located 90 kms. to the south of Barcelona. The venue will be: Campus Catalunya Universitat Rovira i Virgili Av. Catalunya, 35 43002 Tarragona KEYNOTE SPEAKERS: Larry S. Davis (U Maryland, College Park), A Historical Perspective of Computer Vision Models for Object Recognition and Scene Analysis David S. Johnson (Columbia U, New York), Open and Closed Problems in NP-Completeness George Karypis (U Minnesota, Twin Cities), Recommender Systems Past, Present, & Future Steffen Staab (U Koblenz), Explicit and Implicit Semantics: Two Sides of One Coin Philip Wadler (U Edinburgh), You and Your Research and The Elements of Style Ronald R. Yager (Iona C, New Rochelle), Social Modeling COURSES AND PROFESSORS: Divyakant Agrawal (U California, Santa Barbara), [intermediate] Scalable Data Management in Enterprise and Cloud Computing Infrastructures Pierre Baldi (U California, Irvine), [intermediate] Big Data Informatics Challenges and Opportunities in the Life Sciences Rajkumar Buyya (U Melbourne), [intermediate] Cloud Computing John M. Carroll (Pennsylvania State U, University Park), [introductory] Usability Engineering and Scenario-based Design Kwang-Ting (Tim) Cheng (U California, Santa Barbara), [introductory/intermediate] Smartphones: Hardware Platform, Software Development, and Emerging Apps Amr El Abbadi (U California, Santa Barbara), [introductory] The Distributed Foundations of Data Management in the Cloud Richard M. Fujimoto (Georgia Tech, Atlanta), [introductory] Parallel and Distributed Simulation Mark Guzdial (Georgia Tech, Atlanta), [introductory] Computing Education Research: What We Know about Learning and Teaching Computer Science David S. Johnson (Columbia U, New York), [introductory] The Traveling Salesman Problem in Theory and Practice George Karypis (U Minnesota, Twin Cities), [intermediate] Programming Models/Frameworks for Parallel & Distributed Computing Aggelos K. Katsaggelos (Northwestern U, Evanston), [intermediate] Optimization Techniques for Sparse/Low-rank Recovery Problems in Image Processing and Machine Learning Arie E. Kaufman (U Stony Brook), [advanced] Visualization Carl Lagoze (U Michigan, Ann Arbor), [introductory] Curation of Big Data Dinesh Manocha (U North Carolina, Chapel Hill), [introductory/intermediate] Robot Motion Planning Bijan Parsia (U Manchester), [introductory] The Empirical Mindset in Computer Science Charles E. Perkins (FutureWei Technologies, Santa Clara), [intermediate] Beyond LTE: the Evolution of 4G Networks and the Need for Higher Performance Handover System Designs Sudhakar M. Reddy (U Iowa, Iowa City), [introductory] Test and Design for Test of Digital Logic Circuits Robert Sargent (Syracuse U), [introductory] Validation of Models Mubarak Shah (U Central Florida, Orlando), [intermediate] Visual Crowd Analysis Steffen Staab (U Koblenz), [intermediate] Programming the Semantic Web Mike Thelwall (U Wolverhampton), [introductory] Sentiment Strength Detection for Twitter and the Social Web Jeffrey D. 
Ullman (Stanford U), [introductory] MapReduce Algorithms Nitin Vaidya (U Illinois, Urbana-Champaign), [introductory/intermediate] Distributed Consensus: Theory and Applications Philip Wadler (U Edinburgh), [intermediate] Topics in Lambda Calculus and Life ORGANIZING COMMITTEE: Adrian Horia Dediu (Tarragona) Carlos Mart?n-Vide (Tarragona, chair) Florentina Lilica Voicu (Tarragona) REGISTRATION: It has to be done at http://grammars.grlmc.com/sstic2014/registration.php The selection of up to 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an approximation of the respective demand for each course. Since the capacity of the venue is limited, registration requests will be processed on a first come first served basis. The registration period will be closed when the capacity of the venue will be complete. It is very convenient to register prior to the event. FEES: As far as possible, participants are expected to attend for the whole (or most of the) week (full-time). Fees are a flat rate allowing one to participate to all courses. They vary depending on the registration deadline. ACCOMMODATION: Information about accommodation will be available on the website of the School in due time. CERTIFICATE: Participants will be delivered a certificate of attendance. QUESTIONS AND FURTHER INFORMATION: florentinalilica.voicu at urv.cat POSTAL ADDRESS: SSTiC 2014 Lilica Voicu Rovira i Virgili University Av. Catalunya, 35 43002 Tarragona, Spain Phone: +34 977 559 543 Fax: +34 977 558 386 ACKNOWLEDGEMENTS: Departament d?Economia i Coneixement, Generalitat de Catalunya Universitat Rovira i Virgili From terry at salk.edu Sun Feb 9 21:25:27 2014 From: terry at salk.edu (Terry Sejnowski) Date: Sun, 09 Feb 2014 18:25:27 -0800 Subject: Connectionists: NEURAL COMPUTATION - March, 2014 In-Reply-To: Message-ID: Neural Computation - Contents -- Volume 26, Number 3 - March 1, 2014 Available online for download now: http://www.mitpressjournals.org/toc/neco/26/3 ----- Note Dopamine Ramps Are a Consequence of Reward Prediction Errors Samuel Gershman Letters Approximate, Computationally Efficient Online Learning in Bayesian Spiking Neurons Levin Kuhlmann, Michael Hauser-Raspe, Jonathan H Manton, David B. 
Grayden, Jonathan Tapson, and Andre van Schaik Guaranteed Classification via Regularized Similarity Learning Zheng-Chu Guo, Yiming Ying Noise-robust Speech Recognition Through Auditory Feature Detection and Spike Sequence Decoding Phillip Schafer, Dezhe Jin Feature Selection for Ordinal Text Classification Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani A Parallel Dual Matrix Method for Blind Signal Separation TiaoJun Zeng, QuanYuan Feng Robust Subspace Discovery via Relaxed Rank Minimization Xinggang Wang, Zhengdong Zhang, Yi Ma, Xiang Bai, Wenyu Liu, and Zhuowen Tu ------------ ON-LINE -- http://www.mitpressjournals.org/neuralcomp SUBSCRIPTIONS - 2014 - VOLUME 26 - 12 ISSUES USA Others Electronic Only Student/Retired $70 $193 $65 Individual $124 $187 $115 Institution $1,035 $1,098 $926 Canada: Add 5% GST MIT Press Journals, 238 Main Street, Suite 500, Cambridge, MA 02142-9902 Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ------------ From juergen at idsia.ch Mon Feb 10 10:26:59 2014 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Mon, 10 Feb 2014 16:26:59 +0100 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: <52F420CF.7060006@cse.msu.edu> References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: John, perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. Juergen http://www.idsia.ch/~juergen/whatsnew.html On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: > Juergen: > > You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. > > Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first > learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), > I gave up this static deep learning idea later after we considered the Principle 1: Development. > > The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. > > I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." > > Of course, the cerebral pathways themselves are not a stack of recurrent NN either. > > There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! 
Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. > > -John > > On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >> >> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >> >> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. >> >> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >> >> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >> >> >> References: >> >> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >> >> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >> >> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >> >> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >> >> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >> >> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >> >> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. 
http://www.idsia.ch/~juergen/nips2009.pdf >> >> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >> >> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >> >> >> >> Juergen Schmidhuber >> http://www.idsia.ch/~juergen/whatsnew.html > > -- > -- > Juyang (John) Weng, Professor > Department of Computer Science and Engineering > MSU Cognitive Science Program and MSU Neuroscience Program > 428 S Shaw Ln Rm 3115 > Michigan State University > East Lansing, MI 48824 USA > Tel: 517-353-4388 > Fax: 517-432-1061 > Email: weng at cse.msu.edu > URL: http://www.cse.msu.edu/~weng/ > ---------------------------------------------- > From jkrichma at uci.edu Sun Feb 9 21:18:19 2014 From: jkrichma at uci.edu (Jeff Krichmar) Date: Sun, 9 Feb 2014 18:18:19 -0800 Subject: Connectionists: New release of the CARLsim Spiking Neural Network Simulator Message-ID: Dear colleagues, Many of you may be interested in our latest software release of the CARLsim simulator. CARLsim is a publicly available, efficient C/C++-based Spiking Neural Network (SNN) simulator that is optimized to run on both generic, x86 CPUs and standard off-the-shelf GPUs. The simulator provides a PYNN-like programming interface, which allows for details and parameters to be specified at the synapse, neuron, and network level. Software and documentation can be found at: http://www.socsci.uci.edu/~jkrichma/CARLsim/index.html This release is in conjunction with our latest publications, which highlight CARLsim?s latest features. Beyeler, M., Richert, M., Dutt, N.D., and Krichmar, J.L. (2014). Efficient Spiking Neural Network Model of Pattern Motion Selectivity in Visual Cortex. Neuroinformatics. Carlson, K.D., Nageswaran, J.M., Dutt, N., and Krichmar, J.L. (2014). An efficient automated parameter tuning framework for spiking neural networks. Frontiers in Neuroscience 8. Carlson, K.D., Richert, M., Dutt, N., and Krichmar, J.L. (2013). Biologically Plausible Models of Homeostasis and STDP: Stability and Learning in Spiking Neural Networks. Paper presented at: International Joint Conference on Neural Networks (Dallas, TX: IEEE Explore). CARLsim Release 2.2.0 Features ---------------------------------- 1. Improved and expanded real-time SNN vision models. 2. Included support for a parameter tuning interface library that uses evolutionary algorithms and GPUs for automated SNN parameter tuning. 3. Implemented a model for homeostatic synaptic scaling. 4. Added CUDA 5.0 support. Best regards from the CARLsim team, Michael Beyeler Kris Carlson Nikil Dutt Jeff Krichmar ----------------- Jeff Krichmar Department of Cognitive Sciences 2328 Social & Behavioral Sciences Gateway University of California, Irvine Irvine, CA 92697-5100 jkrichma at uci.edu http://www.socsci.uci.edu/~jkrichma From minaiaa at gmail.com Mon Feb 10 10:56:04 2014 From: minaiaa at gmail.com (Ali Minai) Date: Mon, 10 Feb 2014 10:56:04 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: I agree with both Juergen and John. 
On the one hand, most neural processing must - almost necessarily - emerge from the dynamics of many recurrent networks interacting at multiple scales. I that sense, deep learning with recurrent networks is a fruitful place to start in trying to understand this. On the other hand, I also think that the term "deep learning" has become unnecessarily constrained to refer to a particular style of layered architecture and certain types of learning algorithms. We need to move beyond these - broaden the definition to include networks with more complex architectures and learning processes that include development, and even evolution. And to extend the model beyond just "neural" networks to encompass the entire brain-body network, including its mechanical and autonomic components. One problem is that when engineers and computer scientists try to understand the brain, we keep getting distracted by all the sexy "applications" that arise as a side benefit of our models, go chasing after them, and eventually lose track of the original goal of understanding how the brain works. This results in a lot of very useful neural network models for vision, time-series prediction, data analysis, etc., but doesn't tell us much about the brain. Some of us need to take a vow of chastity and commit ourselves anew to the discipline of biology. Ali On Mon, Feb 10, 2014 at 10:26 AM, Juergen Schmidhuber wrote: > John, > > perhaps your view is a bit too pessimistic. Note that a single RNN already > is a general computer. In principle, dynamic RNNs can map arbitrary > observation sequences to arbitrary computable sequences of motoric actions > and internal attention-directing operations, e.g., to process cluttered > scenes, or to implement development (the examples you mentioned). From my > point of view, the main question is how to exploit this universal potential > through learning. A stack of dynamic RNN can sometimes facilitate this. > What it learns can later be collapsed into a single RNN [3]. > > Juergen > > http://www.idsia.ch/~juergen/whatsnew.html > > > > On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: > > > Juergen: > > > > You wrote: A stack of recurrent NN. But it is a wrong architecture as > far as the brain is concerned. > > > > Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC > was probably the first > > learning network that used the deep Learning idea for learning from > clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), > > I gave up this static deep learning idea later after we considered the > Principle 1: Development. > > > > The deep learning architecture is wrong for the brain. It is too > restricted, static in architecture, and cannot learn directly from > cluttered scenes required by Principle 1. The brain is not a cascade of > recurrent NN. > > > > I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate > communications occurs also via large subcortical nuclei such as those in > the thalamas and basal ganglia, and via small nulei such as those in the > brain stem." > > > > Of course, the cerebral pathways themselves are not a stack of recurrent > NN either. > > > > There are many fundamental reasons for that. I give only one here base > on our DN brain model: Looking at a human, the brain must dynamically > attend the tip of the nose, the entire nose, the face, or the entire human > body on the fly. For example, when the network attend the nose, the entire > human body becomes the background! 
Without a brain network that has both > shallow and deep connections (unlike your stack of recurrent NN), your > network is only for recognizing a set of static patterns in a clean > background. This is still an overworked pattern recognition problem, not a > vision problem. > > > > -John > > > > On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: > >> Deep Learning in Artificial Neural Networks (NN) is about credit > assignment across many subsequent computational stages, in deep or > recurrent NN. > >> > >> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A > stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This > can facilitate subsequent supervised learning. > >> > >> Let me re-advertise a much older, very similar, but more general, > working Deep Learner of 1991. It can deal with temporal sequences: the > Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A > stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This > can greatly facilitate subsequent supervised learning. > >> > >> The RNN stack is more general in the sense that it uses > sequence-processing RNN instead of FNN with unchanging inputs. In the early > 1990s, the system was able to learn many previously unlearnable Deep > Learning tasks, one of them requiring credit assignment across 1200 > successive computational stages [4]. > >> > >> Related developments: In the 1990s there was a trend from partially > unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent > years, there has been a similar trend from partially unsupervised to fully > supervised systems. For example, several recent competition-winning and > benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. > >> > >> > >> References: > >> > >> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of > data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, > 2006. http://www.cs.toronto.edu/~hinton/science.pdf > >> > >> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. > no. 5786, pp. 454-455, 2006. > http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks > >> > >> [3] J. Schmidhuber. Learning complex, extended sequences using the > principle of history compression, Neural Computation, 4(2):234-242, 1992. > (Based on TR FKI-148-91, 1991.) > ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: > http://www.idsia.ch/~juergen/firstdeeplearner.html > >> > >> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. > ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment > with credit assignment across 1200 subsequent computational stages for a > Neural Hierarchical Temporal Memory or History Compressor or RNN stack with > unsupervised pre-training [2] (try Google Translate in your mother tongue): > http://www.idsia.ch/~juergen/habilitation/node114.html > >> > >> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural > Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. > ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on > LSTM under http://www.idsia.ch/~juergen/rnn.html > >> > >> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in > structured domains with hierarchical recurrent neural networks. In Proc. > IJCAI'07, p. 774-779, Hyderabad, India, 2007. > ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf > >> > >> [7] A. Graves, J. Schmidhuber. 
Offline Handwriting Recognition with > Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, > MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf > >> > >> [8] 2009: First very deep (and recurrent) learner to win international > competitions with secret test sets: deep LSTM RNN (1995-) won three > connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), > performing simultaneous segmentation and recognition. > http://www.idsia.ch/~juergen/handwriting.html > >> > >> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep > Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. > http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf > >> > >> > >> > >> Juergen Schmidhuber > >> http://www.idsia.ch/~juergen/whatsnew.html > > > > -- > > -- > > Juyang (John) Weng, Professor > > Department of Computer Science and Engineering > > MSU Cognitive Science Program and MSU Neuroscience Program > > 428 S Shaw Ln Rm 3115 > > Michigan State University > > East Lansing, MI 48824 USA > > Tel: 517-353-4388 > > Fax: 517-432-1061 > > Email: weng at cse.msu.edu > > URL: http://www.cse.msu.edu/~weng/ > > ---------------------------------------------- > > > > > -- Ali A. Minai, Ph.D. Professor Complex Adaptive Systems Lab Department of Electrical Engineering & Computing Systems University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: http://www.ece.uc.edu/~aminai/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Mon Feb 10 11:38:12 2014 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 10 Feb 2014 11:38:12 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: Juergen and others, I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lie at the center of why many of the hardest problems in AI and neuroscience continue to lie outside of reach, despite in-principle proofs about computational universality. John's basic points, which I have also made before (e.g. in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts) are two: a. It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding. Socher et al's impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. b. It is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device. Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can't be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like.
Wolpert's No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via a general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon) but been no match for their symbolic competitors in other domains (chess) and worked only as an adjunct rather than a central ingredient in still others (parsing, question-answering a la Watson, etc); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all. My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the end product of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation. We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field's prospects. Best, Gary Marcus Professor of Psychology New York University Visiting Cognitive Scientist Allen Institute for Brain Science Allen Institute for Artificial Intelligence co-edited book coming late 2014: The Future of the Brain: Essays By The World's Leading Neuroscientists http://garymarcus.com/ On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber wrote: > John, > > perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. > > Juergen > > http://www.idsia.ch/~juergen/whatsnew.html > > > > On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: > >> Juergen: >> >> You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >> >> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >> learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >> I gave up this static deep learning idea later after we considered the Principle 1: Development. >> >> The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. >> >> I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem."
>> >> Of course, the cerebral pathways themselves are not a stack of recurrent NN either. >> >> There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. >> >> -John >> >> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >>> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >>> >>> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >>> >>> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. >>> >>> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >>> >>> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >>> >>> >>> References: >>> >>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >>> >>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >>> >>> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >>> >>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >>> >>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >>> >>> [6] S. Fernandez, A. Graves, J. 
Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >>> >>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >>> >>> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >>> >>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >>> >>> >>> >>> Juergen Schmidhuber >>> http://www.idsia.ch/~juergen/whatsnew.html >> >> -- >> -- >> Juyang (John) Weng, Professor >> Department of Computer Science and Engineering >> MSU Cognitive Science Program and MSU Neuroscience Program >> 428 S Shaw Ln Rm 3115 >> Michigan State University >> East Lansing, MI 48824 USA >> Tel: 517-353-4388 >> Fax: 517-432-1061 >> Email: weng at cse.msu.edu >> URL: http://www.cse.msu.edu/~weng/ >> ---------------------------------------------- >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Mon Feb 10 14:40:35 2014 From: bower at uthscsa.edu (james bower) Date: Mon, 10 Feb 2014 13:40:35 -0600 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: <23B2A5C3-3F74-4690-AAA3-D266B4978C05@uthscsa.edu> Nice to see this started again, even after the ?get me off the mailing list? email. :-) For those of you relatively new to the field - it was discussions like this, I believe, that were responsible for growing connectionists to begin with - 25 years ago. Anyway: Well put - although, there is a long history of engineers and others coming up with interesting new ideas after contemplating biological structures - that actually made a contribution to engineering. Lots of current examples. However, success in the engineering world does not at all necessarily mean that this is how the brain actually does it. One more point - it is almost certain that a great deal of the computational power of the nervous system comes from interactions in the dendrite - which almost certainly can not be boiled down to the traditional summation of synaptic inputs over time and space followed by some simple thresholding mechanism. Therefore, in addition to the vow of chastity for any of you who are really in this business for the love of neuroscience, I also suggest that you focus on the computational erogenous zone of the dendrites. The Internet is a remarkable and complex network, but without understanding how the information it delivers is rendered and influences the computers it is connected to, probably rather difficult to figure out the network itself. Jim On Feb 10, 2014, at 9:56 AM, Ali Minai wrote: > I agree with both Juergen and John. On the one hand, most neural processing must - almost necessarily - emerge from the dynamics of many recurrent networks interacting at multiple scales. 
I that sense, deep learning with recurrent networks is a fruitful place to start in trying to understand this. On the other hand, I also think that the term "deep learning" has become unnecessarily constrained to refer to a particular style of layered architecture and certain types of learning algorithms. We need to move beyond these - broaden the definition to include networks with more complex architectures and learning processes that include development, and even evolution. And to extend the model beyond just "neural" networks to encompass the entire brain-body network, including its mechanical and autonomic components. > > One problem is that when engineers and computer scientists try to understand the brain, we keep getting distracted by all the sexy "applications" that arise as a side benefit of our models, go chasing after them, and eventually lose track of the original goal of understanding how the brain works. This results in a lot of very useful neural network models for vision, time-series prediction, data analysis, etc., but doesn't tell us much about the brain. Some of us need to take a vow of chastity and commit ourselves anew to the discipline of biology. > > Ali > > > On Mon, Feb 10, 2014 at 10:26 AM, Juergen Schmidhuber wrote: > John, > > perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. > > Juergen > > http://www.idsia.ch/~juergen/whatsnew.html > > > > On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: > > > Juergen: > > > > You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. > > > > Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first > > learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), > > I gave up this static deep learning idea later after we considered the Principle 1: Development. > > > > The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. > > > > I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." > > > > Of course, the cerebral pathways themselves are not a stack of recurrent NN either. > > > > There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. 
This is still an overworked pattern recognition problem, not a vision problem. > > > > -John > > > > On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: > >> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. > >> > >> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. > >> > >> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. > >> > >> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. > >> > >> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. > >> > >> > >> References: > >> > >> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf > >> > >> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks > >> > >> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html > >> > >> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html > >> > >> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html > >> > >> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf > >> > >> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. 
http://www.idsia.ch/~juergen/nips2009.pdf > >> > >> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html > >> > >> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf > >> > >> > >> > >> Juergen Schmidhuber > >> http://www.idsia.ch/~juergen/whatsnew.html > > > > -- > > -- > > Juyang (John) Weng, Professor > > Department of Computer Science and Engineering > > MSU Cognitive Science Program and MSU Neuroscience Program > > 428 S Shaw Ln Rm 3115 > > Michigan State University > > East Lansing, MI 48824 USA > > Tel: 517-353-4388 > > Fax: 517-432-1061 > > Email: weng at cse.msu.edu > > URL: http://www.cse.msu.edu/~weng/ > > ---------------------------------------------- > > > > > > > > -- > Ali A. Minai, Ph.D. > Professor > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computing Systems > University of Cincinnati > Cincinnati, OH 45221-0030 > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: http://www.ece.uc.edu/~aminai/ Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Mon Feb 10 15:24:00 2014 From: bower at uthscsa.edu (james bower) Date: Mon, 10 Feb 2014 14:24:00 -0600 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: Excellent points - reminds me again of the conversation years ago about whether a general structure like a Hopfield Network would, by itself, solve a large number of problems. All evidence from the nervous system points in the direction of strong influence of the nature of the problem on the solution, which also seems consistent with what has happened in real world applications of NN to engineering over the last 25 years. 
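The "general structure" in that older debate can be written down in a few lines. Below is a minimal sketch assuming numpy, with the number of units, the number of stored patterns and the amount of corruption chosen arbitrarily: binary patterns are stored with a Hebbian outer-product rule and retrieved by repeated threshold updates, the same fixed machinery regardless of what the patterns mean.

import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5                               # units and stored patterns (arbitrary)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product storage: one weight matrix, no task-specific structure.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def recall(state, sweeps=10):
    """Asynchronous threshold updates; the state should settle into a stored pattern."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Probe with a corrupted copy of pattern 0 and let the dynamics clean it up.
probe = patterns[0].copy()
probe[rng.choice(N, size=15, replace=False)] *= -1
print("fraction of bits recovered:", (recall(probe) == patterns[0]).mean())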
For biology, however, the interesting (even fundamental) question becomes, what the following actually are: > endowed us with custom tools for learning in different domains > the contribution from evolution to neural wetware might be I have mentioned previously, that my guess (and surprise) based on our own work over the last 30 years in olfaction is that ?learning? may all together be over emphasized (we do love free will). Yes, in our laboratories we place animals in situations where they have to ?learn?, but my suspicion is that in the real world where brains actually operate and evolve, most of what we do is actually ?recognition? that involves matching external stimuli to internal ?models? of what we expect to be there. I think that it is quite likely that that ?deep knowledge? is how evolution has most patterned neural wetware. Seems to me a way to avoid NP problems and the pitfalls of dealing with ?big data? which as I have said, I suspect the nervous system avoids at all costs. I have mentioned that we have reason to believe (to my surprise) that, starting with the olfactory receptors, the olfactory system already ?knows? about the metabolic structure of the real world. Accordingly, we are predicting that its receptors aren?t organized to collect a lot of data about the chemical structure of the stimulus (the way a chemist would), but instead looks for chemical signatures of metabolic processes. e.g. , it may be that 1/3 or more of mouse olfactory receptors detect one of the three molecules that are produced by many different kinds of fruit when ripe. ?Learning? in olfaction, might be some small additional mechanism you put on top to change the ?hedonic? value of the stimulus - ie. you can ?learn? to like fermented fish paste. But it is very likely that recognizing the (usually deleterious) environmental signature of fermentation is "hard wired?, requiring ?learning? to change the natural category. I know that many cognitive types (and philosophers as well) have developed much more nuanced discussions of these questions - however, I have always been struck by how much of the effort in NNs is focused on ?learning? as if it is the primary attribute of the nervous system we are trying to figure out. It seems to me figuring out "what the nose already knows? is much more important. Jim On Feb 10, 2014, at 10:38 AM, Gary Marcus wrote: > Juergen and others, > > I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lay at the center of why many of the hardest problems in AI and neuroscience continue to lay outside of reach, despite in-principle proofs about computational universality. > > John?s basic points, which I have also made before (e.g. in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts) are two > > a. It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding. Socher et al?s impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. > > b. it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device. 
> > Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can?t be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like. Wolpert?s No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon) but been no match for their symbolic competitors in other domains (chess) and worked only as an adjunct rather than an central ingredient in still others (parsing, question-answering a la Watson, etc); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all. > > My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the endproduct of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation. We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field?s prospects. > > Best, > Gary Marcus > > Professor of Psychology > New York University > Visiting Cognitive Scientist > Allen Institute for Brain Science > Allen Institute for Artiificial Intelligence > co-edited book coming late 2014: > The Future of the Brain: Essays By The World?s Leading Neuroscientists > http://garymarcus.com/ > > On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber wrote: > >> John, >> >> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. >> >> Juergen >> >> http://www.idsia.ch/~juergen/whatsnew.html >> >> >> >> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >> >>> Juergen: >>> >>> You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >>> >>> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >>> learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >>> I gave up this static deep learning idea later after we considered the Principle 1: Development. >>> >>> The deep learning architecture is wrong for the brain. 
It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. >>> >>> I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." >>> >>> Of course, the cerebral pathways themselves are not a stack of recurrent NN either. >>> >>> There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. >>> >>> -John >>> >>> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >>>> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >>>> >>>> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >>>> >>>> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. >>>> >>>> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >>>> >>>> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >>>> >>>> >>>> References: >>>> >>>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >>>> >>>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >>>> >>>> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >>>> >>>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . 
Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >>>> >>>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >>>> >>>> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >>>> >>>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >>>> >>>> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >>>> >>>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >>>> >>>> >>>> >>>> Juergen Schmidhuber >>>> http://www.idsia.ch/~juergen/whatsnew.html >>> >>> -- >>> -- >>> Juyang (John) Weng, Professor >>> Department of Computer Science and Engineering >>> MSU Cognitive Science Program and MSU Neuroscience Program >>> 428 S Shaw Ln Rm 3115 >>> Michigan State University >>> East Lansing, MI 48824 USA >>> Tel: 517-353-4388 >>> Fax: 517-432-1061 >>> Email: weng at cse.msu.edu >>> URL: http://www.cse.msu.edu/~weng/ >>> ---------------------------------------------- >>> >> >> > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bower at uthscsa.edu Mon Feb 10 16:04:27 2014 From: bower at uthscsa.edu (james bower) Date: Mon, 10 Feb 2014 15:04:27 -0600 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: <4451FE2D-8521-46F0-A1CE-148F5CC83549@uthscsa.edu> One other point that some of you might find interesting. While most neurobiologists and text books describe the cerebellum as involved in motor control, I suspect that it is actually not a motor control device in the usual sense at all. We proposed 20+ years ago that the cerebellum is actually a sensory control device, using the motor system (although not only) to precisely position sensory surfaces to collect the data the nervous system actually needs and expects. In the context of the current discussion about big data - such a mechanism would also contribute to the nervous system?s working around a potential data problem. Leaping and jumping forward, as an extension of this idea, we have proposed that autism may actually be an adaptive response to cerebellar dysfunction - and therefore a response to uncontrolled big data flux (in your terms). So, if correct, the brain adapts to being confronted with badly controlled data acquisition by shutting it off. Just to think about. Again, papers available for anyone interested. Given how much we do know about cerebellar circuitry - this could actually be an interesting opportunity for some cross disciplinary thinking about how one would use an active sensory data acquisition controller to select the sensory data that is ideal given an internal model of the world. Almost all of the NN type cerebellar models to date have been built around either the idea that the cerebellum is a motor timing device, or involved in learning (yadda yadda). Perhaps most on this list interested in brain networks don?t know that far and away, the pathway with the largest total number of axons is the pathway from the (entire) cerebral cortex to the cerebellum. We have predicted that this pathway is the mechanism by which the cerebral cortex ?loads? the cerebellum with knowledge about what it expects and needs. Jim On Feb 10, 2014, at 2:24 PM, james bower wrote: > Excellent points - reminds me again of the conversation years ago about whether a general structure like a Hopfield Network would, by itself, solve a large number of problems. All evidence from the nervous system points in the direction of strong influence of the nature of the problem on the solution, which also seems consistent with what has happened in real world applications of NN to engineering over the last 25 years. > > For biology, however, the interesting (even fundamental) question becomes, what the following actually are: > >> endowed us with custom tools for learning in different domains > >> the contribution from evolution to neural wetware might be > > I have mentioned previously, that my guess (and surprise) based on our own work over the last 30 years in olfaction is that ?learning? may all together be over emphasized (we do love free will). Yes, in our laboratories we place animals in situations where they have to ?learn?, but my suspicion is that in the real world where brains actually operate and evolve, most of what we do is actually ?recognition? that involves matching external stimuli to internal ?models? of what we expect to be there. 
I think that it is quite likely that that ?deep knowledge? is how evolution has most patterned neural wetware. Seems to me a way to avoid NP problems and the pitfalls of dealing with ?big data? which as I have said, I suspect the nervous system avoids at all costs. > > I have mentioned that we have reason to believe (to my surprise) that, starting with the olfactory receptors, the olfactory system already ?knows? about the metabolic structure of the real world. Accordingly, we are predicting that its receptors aren?t organized to collect a lot of data about the chemical structure of the stimulus (the way a chemist would), but instead looks for chemical signatures of metabolic processes. e.g. , it may be that 1/3 or more of mouse olfactory receptors detect one of the three molecules that are produced by many different kinds of fruit when ripe. ?Learning? in olfaction, might be some small additional mechanism you put on top to change the ?hedonic? value of the stimulus - ie. you can ?learn? to like fermented fish paste. But it is very likely that recognizing the (usually deleterious) environmental signature of fermentation is "hard wired?, requiring ?learning? to change the natural category. > > I know that many cognitive types (and philosophers as well) have developed much more nuanced discussions of these questions - however, I have always been struck by how much of the effort in NNs is focused on ?learning? as if it is the primary attribute of the nervous system we are trying to figure out. It seems to me figuring out "what the nose already knows? is much more important. > > > Jim > > > > > > > On Feb 10, 2014, at 10:38 AM, Gary Marcus wrote: > >> Juergen and others, >> >> I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lay at the center of why many of the hardest problems in AI and neuroscience continue to lay outside of reach, despite in-principle proofs about computational universality. >> >> John?s basic points, which I have also made before (e.g. in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts) are two >> >> a. It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding. Socher et al?s impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. >> >> b. it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device. >> >> Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can?t be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like. Wolpert?s No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). 
For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon) but been no match for their symbolic competitors in other domains (chess) and worked only as an adjunct rather than an central ingredient in still others (parsing, question-answering a la Watson, etc); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all. >> >> My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the endproduct of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation. We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field?s prospects. >> >> Best, >> Gary Marcus >> >> Professor of Psychology >> New York University >> Visiting Cognitive Scientist >> Allen Institute for Brain Science >> Allen Institute for Artiificial Intelligence >> co-edited book coming late 2014: >> The Future of the Brain: Essays By The World?s Leading Neuroscientists >> http://garymarcus.com/ >> >> On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber wrote: >> >>> John, >>> >>> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. >>> >>> Juergen >>> >>> http://www.idsia.ch/~juergen/whatsnew.html >>> >>> >>> >>> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >>> >>>> Juergen: >>>> >>>> You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >>>> >>>> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >>>> learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >>>> I gave up this static deep learning idea later after we considered the Principle 1: Development. >>>> >>>> The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. >>>> >>>> I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." >>>> >>>> Of course, the cerebral pathways themselves are not a stack of recurrent NN either. 
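Since the thread keeps returning to what "a stack of recurrent NN pre-trained in unsupervised fashion" looks like in practice, here is a deliberately simplified structural sketch, assuming numpy. It is not the 1991 system and none of its choices come from that work: the recurrent weights are fixed random (echo-state style), the "unsupervised" stage is just a next-step-prediction readout fitted by ridge regression, the second level is driven by the first level's prediction errors, and the toy sequence and delayed-copy target are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def reservoir(n_in, n_units, seed, rho=0.9):
    """Fixed random recurrent layer (echo-state style); only readouts get trained."""
    r = np.random.default_rng(seed)
    W_in = r.normal(0, 0.5, (n_units, n_in))
    W = r.normal(0, 1.0, (n_units, n_units))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius
    return W_in, W

def run(W_in, W, inputs):
    """Collect the recurrent layer's state at every time step of a sequence."""
    states = np.zeros((len(inputs), W.shape[0]))
    h = np.zeros(W.shape[0])
    for t, x in enumerate(inputs):
        h = np.tanh(W_in @ x + W @ h)
        states[t] = h
    return states

def ridge(S, Y, lam=1e-2):
    """Linear readout fitted by ridge regression."""
    return np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ Y)

# Toy data: a random binary sequence; the supervised target is the input
# from 5 steps earlier (both choices are arbitrary, just to have a task).
T = 2000
u = rng.choice([0.0, 1.0], size=(T, 1))
y = np.roll(u, 5, axis=0)

# Level 1: recurrent layer plus a readout trained, without the task labels,
# to predict the next input (the stand-in for the unsupervised stage).
W_in1, W1 = reservoir(1, 100, seed=1)
S1 = run(W_in1, W1, u)
R_pred = ridge(S1[:-1], u[1:])
pred_err = np.vstack([np.zeros((1, 1)), u[1:] - S1[:-1] @ R_pred])

# Level 2: a second recurrent layer driven by what level 1 failed to predict.
W_in2, W2 = reservoir(1, 100, seed=2)
S2 = run(W_in2, W2, pred_err)

# Supervised stage: one readout over both levels' states for the actual task.
S = np.hstack([S1, S2])
R_task = ridge(S[5:], y[5:])
print("task MSE on the training sequence:", np.mean((S[5:] @ R_task - y[5:]) ** 2))

In the papers cited in this thread the levels are gradient-trained RNNs and the higher level operates only on the time steps the lower level fails to predict; the sketch above keeps only the coarse shape of that idea.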
>>>> >>>> There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. >>>> >>>> -John >>>> >>>> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >>>>> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >>>>> >>>>> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >>>>> >>>>> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. >>>>> >>>>> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >>>>> >>>>> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >>>>> >>>>> >>>>> References: >>>>> >>>>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >>>>> >>>>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >>>>> >>>>> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >>>>> >>>>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >>>>> >>>>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >>>>> >>>>> [6] S. Fernandez, A. Graves, J. Schmidhuber. 
Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >>>>> >>>>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >>>>> >>>>> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >>>>> >>>>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >>>>> >>>>> >>>>> >>>>> Juergen Schmidhuber >>>>> http://www.idsia.ch/~juergen/whatsnew.html >>>> >>>> -- >>>> -- >>>> Juyang (John) Weng, Professor >>>> Department of Computer Science and Engineering >>>> MSU Cognitive Science Program and MSU Neuroscience Program >>>> 428 S Shaw Ln Rm 3115 >>>> Michigan State University >>>> East Lansing, MI 48824 USA >>>> Tel: 517-353-4388 >>>> Fax: 517-432-1061 >>>> Email: weng at cse.msu.edu >>>> URL: http://www.cse.msu.edu/~weng/ >>>> ---------------------------------------------- >>>> >>> >>> >> > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > CONFIDENTIAL NOTICE: > > The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or > > any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be > > immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. > > > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. 
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: From minaiaa at gmail.com Mon Feb 10 16:37:10 2014 From: minaiaa at gmail.com (Ali Minai) Date: Mon, 10 Feb 2014 16:37:10 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: I think Gary's last paragraph is absolutely key. Unless we take both the evolutionary and the developmental processes into account, we will neither understand complex brains fully nor replicate their functionality too well in our robots etc. We build complex robots that know nothing and then ask them to learn complex things, setting up a hopelessly difficult learning problem. But that isn't how animals learn, or why animals have the brains and bodies they have. A purely abstract computational approach to neural models makes the same category error that connectionists criticized symbolists for making, just at a different level. Ali On Mon, Feb 10, 2014 at 11:38 AM, Gary Marcus wrote: > Juergen and others, > > I am with John on his two basic concerns, and think that your appeal to > computational universality is a red herring; I cc the entire group because > I think that these issues lay at the center of why many of the hardest > problems in AI and neuroscience continue to lay outside of reach, despite > in-principle proofs about computational universality. > > John's basic points, which I have also made before (e.g. in my books The > Algebraic Mind and The Birth of the Mind and in my periodic New Yorker > posts) are two > > a. It is unrealistic to expect that hierarchies of pattern recognizers > will suffice for the full range of cognitive problems that humans (and > strong AI systems) face. Deep learning, to take one example, excels at > classification, but has thus far had relatively little to contribute to > inference or natural language understanding. Socher et al's impressive CVG > work, for instance, is parasitic on a traditional (symbolic) parser, not a > soup-to-nuts neural net induced from input. > > b. it is unrealistic to expect that all the relevant information can be > extracted by any general purpose learning device. > > Yes, you can reliably map any arbitrary input-output relation onto a > multilayer perceptron or recurrent net, but *only* if you know the > complete input-output mapping in advance. Alas, you can't be guaranteed to > do that in general given arbitrary subsets of the complete space; in the > real world, learners see subsets of possible data and have to make guesses > about what the rest will be like. 
Wolpert's No Free Lunch work is > instructive here (and also in line with how cognitive scientists like > Chomsky, Pinker, and myself have thought about the problem). For any > problem, I presume that there exists an appropriately-configured net, > but there is no guarantee that in the real world you are going to be able > to correctly induce the right system via general-purpose learning algorithm > given a finite amount of data, with a finite amount of training. > Empirically, neural nets of roughly the form you are discussing have worked > fine for some problems (e.g. backgammon) but been no match for their > symbolic competitors in other domains (chess) and worked only as an adjunct > rather than an central ingredient in still others (parsing, > question-answering a la Watson, etc); in other domains, like planning and > common-sense reasoning, there has been essentially no serious work at all. > > My own take, informed by evolutionary and developmental biology, is that > no single general purpose architecture will ever be a match for the > endproduct of a billion years of evolution, which includes, I suspect, a > significant amount of customized architecture that need not be induced anew > in each generation. We learn as well as we do precisely because evolution > has preceded us, and endowed us with custom tools for learning in different > domains. Until the field of neural nets more seriously engages in > understanding what the contribution from evolution to neural wetware might > be, I will remain pessimistic about the field's prospects. > > Best, > Gary Marcus > > Professor of Psychology > New York University > Visiting Cognitive Scientist > Allen Institute for Brain Science > Allen Institute for Artiificial Intelligence > > co-edited book coming late 2014: > The Future of the Brain: Essays By The World's Leading Neuroscientists > http://garymarcus.com/ > > On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber > wrote: > > John, > > perhaps your view is a bit too pessimistic. Note that a single RNN already > is a general computer. In principle, dynamic RNNs can map arbitrary > observation sequences to arbitrary computable sequences of motoric actions > and internal attention-directing operations, e.g., to process cluttered > scenes, or to implement development (the examples you mentioned). From my > point of view, the main question is how to exploit this universal potential > through learning. A stack of dynamic RNN can sometimes facilitate this. > What it learns can later be collapsed into a single RNN [3]. > > Juergen > > http://www.idsia.ch/~juergen/whatsnew.html > > > > On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: > > Juergen: > > You wrote: A stack of recurrent NN. But it is a wrong architecture as far > as the brain is concerned. > > Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was > probably the first > learning network that used the deep Learning idea for learning from > clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), > I gave up this static deep learning idea later after we considered the > Principle 1: Development. > > The deep learning architecture is wrong for the brain. It is too > restricted, static in architecture, and cannot learn directly from > cluttered scenes required by Principle 1. The brain is not a cascade of > recurrent NN. > > I quote from Antonio Damasio "Decartes' Error": p. 
93: "But intermediate > communications occurs also via large subcortical nuclei such as those in > the thalamas and basal ganglia, and via small nulei such as those in the > brain stem." > > Of course, the cerebral pathways themselves are not a stack of recurrent > NN either. > > There are many fundamental reasons for that. I give only one here base on > our DN brain model: Looking at a human, the brain must dynamically attend > the tip of the nose, the entire nose, the face, or the entire human body on > the fly. For example, when the network attend the nose, the entire human > body becomes the background! Without a brain network that has both shallow > and deep connections (unlike your stack of recurrent NN), your network is > only for recognizing a set of static patterns in a clean background. This > is still an overworked pattern recognition problem, not a vision problem. > > -John > > On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: > > Deep Learning in Artificial Neural Networks (NN) is about credit > assignment across many subsequent computational stages, in deep or > recurrent NN. > > A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A > stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This > can facilitate subsequent supervised learning. > > Let me re-advertise a much older, very similar, but more general, working > Deep Learner of 1991. It can deal with temporal sequences: the Neural > Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of > recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly > facilitate subsequent supervised learning. > > The RNN stack is more general in the sense that it uses > sequence-processing RNN instead of FNN with unchanging inputs. In the early > 1990s, the system was able to learn many previously unlearnable Deep > Learning tasks, one of them requiring credit assignment across 1200 > successive computational stages [4]. > > Related developments: In the 1990s there was a trend from partially > unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent > years, there has been a similar trend from partially unsupervised to fully > supervised systems. For example, several recent competition-winning and > benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. > > > References: > > [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data > with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. > http://www.cs.toronto.edu/~hinton/science.pdf > > [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. > 5786, pp. 454-455, 2006. > http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks > > [3] J. Schmidhuber. Learning complex, extended sequences using the > principle of history compression, Neural Computation, 4(2):234-242, 1992. > (Based on TR FKI-148-91, 1991.) > ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: > http://www.idsia.ch/~juergen/firstdeeplearner.html > > [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. > ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment > with credit assignment across 1200 subsequent computational stages for a > Neural Hierarchical Temporal Memory or History Compressor or RNN stack with > unsupervised pre-training [2] (try Google Translate in your mother tongue): > http://www.idsia.ch/~juergen/habilitation/node114.html > > [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. 
Neural > Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. > ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on > LSTM under http://www.idsia.ch/~juergen/rnn.html > > [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in > structured domains with hierarchical recurrent neural networks. In Proc. > IJCAI'07, p. 774-779, Hyderabad, India, 2007. > ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf > > [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with > Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, > MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf > > [8] 2009: First very deep (and recurrent) learner to win international > competitions with secret test sets: deep LSTM RNN (1995-) won three > connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), > performing simultaneous segmentation and recognition. > http://www.idsia.ch/~juergen/handwriting.html > > [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep > Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. > http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf > > > > Juergen Schmidhuber > http://www.idsia.ch/~juergen/whatsnew.html > > > -- > -- > Juyang (John) Weng, Professor > Department of Computer Science and Engineering > MSU Cognitive Science Program and MSU Neuroscience Program > 428 S Shaw Ln Rm 3115 > Michigan State University > East Lansing, MI 48824 USA > Tel: 517-353-4388 > Fax: 517-432-1061 > Email: weng at cse.msu.edu > URL: http://www.cse.msu.edu/~weng/ > ---------------------------------------------- > > > > > -- Ali A. Minai, Ph.D. Professor Complex Adaptive Systems Lab Department of Electrical Engineering & Computing Systems University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: http://www.ece.uc.edu/~aminai/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From minaiaa at gmail.com Mon Feb 10 16:45:42 2014 From: minaiaa at gmail.com (Ali Minai) Date: Mon, 10 Feb 2014 16:45:42 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: <23B2A5C3-3F74-4690-AAA3-D266B4978C05@uthscsa.edu> References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <23B2A5C3-3F74-4690-AAA3-D266B4978C05@uthscsa.edu> Message-ID: I can just see a nascent war brewing between axonists and dedriticists :-), but you're absolutely right: The dendrite has been neglected too long, a victim to the insidious appeal of the point neuron. I still recall that beautiful chapter on dendritic Boolean computation in the first "Methods in Neuronal Modeling Book". Ali On Mon, Feb 10, 2014 at 2:40 PM, james bower wrote: > Nice to see this started again, even after the "get me off the mailing > list" email. :-) For those of you relatively new to the field - it was > discussions like this, I believe, that were responsible for growing > connectionists to begin with - 25 years ago. Anyway: > > > Well put - although, there is a long history of engineers and others > coming up with interesting new ideas after contemplating biological > structures - that actually made a contribution to engineering. Lots of > current examples. However, success in the engineering world does not at > all necessarily mean that this is how the brain actually does it. 
> > One more point - it is almost certain that a great deal of the > computational power of the nervous system comes from interactions in the > dendrite - which almost certainly can not be boiled down to the traditional > summation of synaptic inputs over time and space followed by some simple > thresholding mechanism. Therefore, in addition to the vow of chastity for > any of you who are really in this business for the love of neuroscience, I > also suggest that you focus on the computational erogenous zone of the > dendrites. The Internet is a remarkable and complex network, but without > understanding how the information it delivers is rendered and influences > the computers it is connected to, probably rather difficult to figure out > the network itself. > > Jim > > > > On Feb 10, 2014, at 9:56 AM, Ali Minai wrote: > > I agree with both Juergen and John. On the one hand, most neural > processing must - almost necessarily - emerge from the dynamics of many > recurrent networks interacting at multiple scales. I that sense, deep > learning with recurrent networks is a fruitful place to start in trying to > understand this. On the other hand, I also think that the term "deep > learning" has become unnecessarily constrained to refer to a particular > style of layered architecture and certain types of learning algorithms. We > need to move beyond these - broaden the definition to include networks with > more complex architectures and learning processes that include development, > and even evolution. And to extend the model beyond just "neural" networks > to encompass the entire brain-body network, including its mechanical and > autonomic components. > > One problem is that when engineers and computer scientists try to > understand the brain, we keep getting distracted by all the sexy > "applications" that arise as a side benefit of our models, go chasing after > them, and eventually lose track of the original goal of understanding how > the brain works. This results in a lot of very useful neural network models > for vision, time-series prediction, data analysis, etc., but doesn't tell > us much about the brain. Some of us need to take a vow of chastity and > commit ourselves anew to the discipline of biology. > > Ali > > > On Mon, Feb 10, 2014 at 10:26 AM, Juergen Schmidhuber wrote: > >> John, >> >> perhaps your view is a bit too pessimistic. Note that a single RNN >> already is a general computer. In principle, dynamic RNNs can map arbitrary >> observation sequences to arbitrary computable sequences of motoric actions >> and internal attention-directing operations, e.g., to process cluttered >> scenes, or to implement development (the examples you mentioned). From my >> point of view, the main question is how to exploit this universal potential >> through learning. A stack of dynamic RNN can sometimes facilitate this. >> What it learns can later be collapsed into a single RNN [3]. >> >> Juergen >> >> http://www.idsia.ch/~juergen/whatsnew.html >> >> >> >> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >> >> > Juergen: >> > >> > You wrote: A stack of recurrent NN. But it is a wrong architecture as >> far as the brain is concerned. >> > >> > Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC >> was probably the first >> > learning network that used the deep Learning idea for learning from >> clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >> > I gave up this static deep learning idea later after we considered the >> Principle 1: Development. 
>> > >> > The deep learning architecture is wrong for the brain. It is too >> restricted, static in architecture, and cannot learn directly from >> cluttered scenes required by Principle 1. The brain is not a cascade of >> recurrent NN. >> > >> > I quote from Antonio Damasio "Decartes' Error": p. 93: "But >> intermediate communications occurs also via large subcortical nuclei such >> as those in the thalamas and basal ganglia, and via small nulei such as >> those in the brain stem." >> > >> > Of course, the cerebral pathways themselves are not a stack of >> recurrent NN either. >> > >> > There are many fundamental reasons for that. I give only one here base >> on our DN brain model: Looking at a human, the brain must dynamically >> attend the tip of the nose, the entire nose, the face, or the entire human >> body on the fly. For example, when the network attend the nose, the entire >> human body becomes the background! Without a brain network that has both >> shallow and deep connections (unlike your stack of recurrent NN), your >> network is only for recognizing a set of static patterns in a clean >> background. This is still an overworked pattern recognition problem, not a >> vision problem. >> > >> > -John >> > >> > On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >> >> Deep Learning in Artificial Neural Networks (NN) is about credit >> assignment across many subsequent computational stages, in deep or >> recurrent NN. >> >> >> >> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A >> stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This >> can facilitate subsequent supervised learning. >> >> >> >> Let me re-advertise a much older, very similar, but more general, >> working Deep Learner of 1991. It can deal with temporal sequences: the >> Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A >> stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This >> can greatly facilitate subsequent supervised learning. >> >> >> >> The RNN stack is more general in the sense that it uses >> sequence-processing RNN instead of FNN with unchanging inputs. In the early >> 1990s, the system was able to learn many previously unlearnable Deep >> Learning tasks, one of them requiring credit assignment across 1200 >> successive computational stages [4]. >> >> >> >> Related developments: In the 1990s there was a trend from partially >> unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent >> years, there has been a similar trend from partially unsupervised to fully >> supervised systems. For example, several recent competition-winning and >> benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >> >> >> >> >> >> References: >> >> >> >> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of >> data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, >> 2006. http://www.cs.toronto.edu/~hinton/science.pdf >> >> >> >> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. >> no. 5786, pp. 454-455, 2006. >> http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >> >> >> >> [3] J. Schmidhuber. Learning complex, extended sequences using the >> principle of history compression, Neural Computation, 4(2):234-242, 1992. >> (Based on TR FKI-148-91, 1991.) >> ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: >> http://www.idsia.ch/~juergen/firstdeeplearner.html >> >> >> >> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. 
>> ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment >> with credit assignment across 1200 subsequent computational stages for a >> Neural Hierarchical Temporal Memory or History Compressor or RNN stack with >> unsupervised pre-training [2] (try Google Translate in your mother tongue): >> http://www.idsia.ch/~juergen/habilitation/node114.html >> >> >> >> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural >> Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. >> ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on >> LSTM under http://www.idsia.ch/~juergen/rnn.html >> >> >> >> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in >> structured domains with hierarchical recurrent neural networks. In Proc. >> IJCAI'07, p. 774-779, Hyderabad, India, 2007. >> ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >> >> >> >> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with >> Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, >> MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >> >> >> >> [8] 2009: First very deep (and recurrent) learner to win international >> competitions with secret test sets: deep LSTM RNN (1995-) won three >> connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), >> performing simultaneous segmentation and recognition. >> http://www.idsia.ch/~juergen/handwriting.html >> >> >> >> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep >> Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. >> http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >> >> >> >> >> >> >> >> Juergen Schmidhuber >> >> http://www.idsia.ch/~juergen/whatsnew.html >> > >> > -- >> > -- >> > Juyang (John) Weng, Professor >> > Department of Computer Science and Engineering >> > MSU Cognitive Science Program and MSU Neuroscience Program >> > 428 S Shaw Ln Rm 3115 >> > Michigan State University >> > East Lansing, MI 48824 USA >> > Tel: 517-353-4388 >> > Fax: 517-432-1061 >> > Email: weng at cse.msu.edu >> > URL: http://www.cse.msu.edu/~weng/ >> > ---------------------------------------------- >> > >> >> >> > > > -- > Ali A. Minai, Ph.D. > Professor > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computing Systems > University of Cincinnati > Cincinnati, OH 45221-0030 > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: http://www.ece.uc.edu/~aminai/ > > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > > *Phone: 210 382 0553 <210%20382%200553>* > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > > CONFIDENTIAL NOTICE: > > The contents of this email and any attachments to it may be privileged > or contain privileged and confidential information. This information is > only for the viewing or use of the intended recipient. 
If you have received > this e-mail in error or are not the intended recipient, you are hereby > notified that any disclosure, copying, distribution or use of, or the > taking of any action in reliance upon, any of the information contained in > this e-mail, or > > any of the attachments to this e-mail, is strictly prohibited and that > this e-mail and all of the attachments to this e-mail, if any, must be > > immediately returned to the sender or destroyed and, in either case, > this e-mail and all attachments to this e-mail must be immediately deleted > from your computer without making any copies hereof and any and all hard > copies made must be destroyed. If you have received this e-mail in error, > please notify the sender by e-mail immediately. > > > > -- Ali A. Minai, Ph.D. Professor Complex Adaptive Systems Lab Department of Electrical Engineering & Computing Systems University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: http://www.ece.uc.edu/~aminai/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.mingus at colorado.edu Mon Feb 10 16:51:08 2014 From: brian.mingus at colorado.edu (Brian J Mingus) Date: Mon, 10 Feb 2014 14:51:08 -0700 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: fyi, there is a field called Developmental Robotics which takes this perspective seriously. For example, an infant goes through the following developmental trajectory over the first several months of life: - Born with nice looking reaches but can't reach to target - Locks the elbow to limit the number of degrees of freedom and practices pointing to a target - Slowly starts to unlock the elbow, exposing more degrees of freedom, and practices reaching to a target The infant does not need to learn how to solve a fully unconstrained inverse kinematics problem. It is born with reaching affordances and a musculoskeletal system which constrain the space into something computationally feasible. Likewise, if you hold an infant's feet in warm water it will vigorously try to walk. etc. etc. etc. This general pattern of evolved affordances being used to bootstrap intelligence is extremely widespread in the brain. Anyone who doesn't take this into consideration when modeling the brain isn't creating a human being, but rather something else. That said, evolution is a blind designer. A human being can out-design billions of years of evolution in a few years with nice supercomputer and plenty of lab subjects. So, if your goal is to understand exactly what a human being is, you might study human development. But if your goal is to create something more sophisticated than a human without the annoyance of studying exactly how a human develops intelligence, you might use deep networks with pretraining that automatically extract features that evolution baked in. btw, this is all widely known.. no? Brian Mingus http://grey.colorado.edu/mingus On Mon, Feb 10, 2014 at 2:37 PM, Ali Minai wrote: > I think Gary's last paragraph is absolutely key. Unless we take both the > evolutionary and the developmental processes into account, we will neither > understand complex brains fully nor replicate their functionality too well > in our robots etc. 
We build complex robots that know nothing and then ask > them to learn complex things, setting up a hopelessly difficult learning > problem. But that isn't how animals learn, or why animals have the brains > and bodies they have. A purely abstract computational approach to neural > models makes the same category error that connectionists criticized > symbolists for making, just at a different level. > > Ali > > > On Mon, Feb 10, 2014 at 11:38 AM, Gary Marcus wrote: > >> Juergen and others, >> >> I am with John on his two basic concerns, and think that your appeal to >> computational universality is a red herring; I cc the entire group because >> I think that these issues lay at the center of why many of the hardest >> problems in AI and neuroscience continue to lay outside of reach, despite >> in-principle proofs about computational universality. >> >> John's basic points, which I have also made before (e.g. in my books The >> Algebraic Mind and The Birth of the Mind and in my periodic New Yorker >> posts) are two >> >> a. It is unrealistic to expect that hierarchies of pattern recognizers >> will suffice for the full range of cognitive problems that humans (and >> strong AI systems) face. Deep learning, to take one example, excels at >> classification, but has thus far had relatively little to contribute to >> inference or natural language understanding. Socher et al's impressive CVG >> work, for instance, is parasitic on a traditional (symbolic) parser, not a >> soup-to-nuts neural net induced from input. >> >> b. it is unrealistic to expect that all the relevant information can be >> extracted by any general purpose learning device. >> >> Yes, you can reliably map any arbitrary input-output relation onto a >> multilayer perceptron or recurrent net, but *only* if you know the >> complete input-output mapping in advance. Alas, you can't be guaranteed to >> do that in general given arbitrary subsets of the complete space; in the >> real world, learners see subsets of possible data and have to make guesses >> about what the rest will be like. Wolpert's No Free Lunch work is >> instructive here (and also in line with how cognitive scientists like >> Chomsky, Pinker, and myself have thought about the problem). For any >> problem, I presume that there exists an appropriately-configured net, >> but there is no guarantee that in the real world you are going to be able >> to correctly induce the right system via general-purpose learning algorithm >> given a finite amount of data, with a finite amount of training. >> Empirically, neural nets of roughly the form you are discussing have worked >> fine for some problems (e.g. backgammon) but been no match for their >> symbolic competitors in other domains (chess) and worked only as an adjunct >> rather than an central ingredient in still others (parsing, >> question-answering a la Watson, etc); in other domains, like planning and >> common-sense reasoning, there has been essentially no serious work at all. >> >> My own take, informed by evolutionary and developmental biology, is that >> no single general purpose architecture will ever be a match for the >> endproduct of a billion years of evolution, which includes, I suspect, a >> significant amount of customized architecture that need not be induced anew >> in each generation. We learn as well as we do precisely because evolution >> has preceded us, and endowed us with custom tools for learning in different >> domains. 
Until the field of neural nets more seriously engages in >> understanding what the contribution from evolution to neural wetware might >> be, I will remain pessimistic about the field's prospects. >> >> Best, >> Gary Marcus >> >> Professor of Psychology >> New York University >> Visiting Cognitive Scientist >> Allen Institute for Brain Science >> Allen Institute for Artiificial Intelligence >> >> co-edited book coming late 2014: >> The Future of the Brain: Essays By The World's Leading Neuroscientists >> http://garymarcus.com/ >> >> On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber >> wrote: >> >> John, >> >> perhaps your view is a bit too pessimistic. Note that a single RNN >> already is a general computer. In principle, dynamic RNNs can map arbitrary >> observation sequences to arbitrary computable sequences of motoric actions >> and internal attention-directing operations, e.g., to process cluttered >> scenes, or to implement development (the examples you mentioned). From my >> point of view, the main question is how to exploit this universal potential >> through learning. A stack of dynamic RNN can sometimes facilitate this. >> What it learns can later be collapsed into a single RNN [3]. >> >> Juergen >> >> http://www.idsia.ch/~juergen/whatsnew.html >> >> >> >> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >> >> Juergen: >> >> You wrote: A stack of recurrent NN. But it is a wrong architecture as >> far as the brain is concerned. >> >> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC >> was probably the first >> learning network that used the deep Learning idea for learning from >> clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >> I gave up this static deep learning idea later after we considered the >> Principle 1: Development. >> >> The deep learning architecture is wrong for the brain. It is too >> restricted, static in architecture, and cannot learn directly from >> cluttered scenes required by Principle 1. The brain is not a cascade of >> recurrent NN. >> >> I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate >> communications occurs also via large subcortical nuclei such as those in >> the thalamas and basal ganglia, and via small nulei such as those in the >> brain stem." >> >> Of course, the cerebral pathways themselves are not a stack of recurrent >> NN either. >> >> There are many fundamental reasons for that. I give only one here base >> on our DN brain model: Looking at a human, the brain must dynamically >> attend the tip of the nose, the entire nose, the face, or the entire human >> body on the fly. For example, when the network attend the nose, the entire >> human body becomes the background! Without a brain network that has both >> shallow and deep connections (unlike your stack of recurrent NN), your >> network is only for recognizing a set of static patterns in a clean >> background. This is still an overworked pattern recognition problem, not a >> vision problem. >> >> -John >> >> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >> >> Deep Learning in Artificial Neural Networks (NN) is about credit >> assignment across many subsequent computational stages, in deep or >> recurrent NN. >> >> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A >> stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This >> can facilitate subsequent supervised learning. >> >> Let me re-advertise a much older, very similar, but more general, working >> Deep Learner of 1991. 
It can deal with temporal sequences: the Neural >> Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of >> recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly >> facilitate subsequent supervised learning. >> >> The RNN stack is more general in the sense that it uses >> sequence-processing RNN instead of FNN with unchanging inputs. In the early >> 1990s, the system was able to learn many previously unlearnable Deep >> Learning tasks, one of them requiring credit assignment across 1200 >> successive computational stages [4]. >> >> Related developments: In the 1990s there was a trend from partially >> unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent >> years, there has been a similar trend from partially unsupervised to fully >> supervised systems. For example, several recent competition-winning and >> benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >> >> >> References: >> >> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of >> data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, >> 2006. http://www.cs.toronto.edu/~hinton/science.pdf >> >> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. >> 5786, pp. 454-455, 2006. >> http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >> >> [3] J. Schmidhuber. Learning complex, extended sequences using the >> principle of history compression, Neural Computation, 4(2):234-242, 1992. >> (Based on TR FKI-148-91, 1991.) >> ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: >> http://www.idsia.ch/~juergen/firstdeeplearner.html >> >> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. >> ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment >> with credit assignment across 1200 subsequent computational stages for a >> Neural Hierarchical Temporal Memory or History Compressor or RNN stack with >> unsupervised pre-training [2] (try Google Translate in your mother tongue): >> http://www.idsia.ch/~juergen/habilitation/node114.html >> >> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural >> Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. >> ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on >> LSTM under http://www.idsia.ch/~juergen/rnn.html >> >> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in >> structured domains with hierarchical recurrent neural networks. In Proc. >> IJCAI'07, p. 774-779, Hyderabad, India, 2007. >> ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >> >> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with >> Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, >> MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >> >> [8] 2009: First very deep (and recurrent) learner to win international >> competitions with secret test sets: deep LSTM RNN (1995-) won three >> connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), >> performing simultaneous segmentation and recognition. >> http://www.idsia.ch/~juergen/handwriting.html >> >> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep >> Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. 
>> http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >> >> >> >> Juergen Schmidhuber >> http://www.idsia.ch/~juergen/whatsnew.html >> >> >> -- >> -- >> Juyang (John) Weng, Professor >> Department of Computer Science and Engineering >> MSU Cognitive Science Program and MSU Neuroscience Program >> 428 S Shaw Ln Rm 3115 >> Michigan State University >> East Lansing, MI 48824 USA >> Tel: 517-353-4388 >> Fax: 517-432-1061 >> Email: weng at cse.msu.edu >> URL: http://www.cse.msu.edu/~weng/ >> ---------------------------------------------- >> >> >> >> >> > > > -- > Ali A. Minai, Ph.D. > Professor > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computing Systems > University of Cincinnati > Cincinnati, OH 45221-0030 > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: http://www.ece.uc.edu/~aminai/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Mon Feb 10 17:10:56 2014 From: bower at uthscsa.edu (james bower) Date: Mon, 10 Feb 2014 16:10:56 -0600 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <23B2A5C3-3F74-4690-AAA3-D266B4978C05@uthscsa.edu> Message-ID: <82AADAC5-11EB-46C3-B90A-32A064321702@uthscsa.edu> No war, no point in one without the other. However, the "expediency" issue you raise, that it is much easier to represent connections (and connection strengths) than dendrites, is worth making - however, there are very few neurons in the vertebrate brain that lack dendrites. One of the few I know of is in Nucleus Laminaris of the barn owl. There, likely the time delays in the dendrite interfere with the sub-millisecond timing necessary to localize sound in space by comparing time arrivals in the ears. Even there, however, the processing is apparently even more complex than originally thought. So, again, if you can make it work without worrying about dendritic-type processing - good for you - but likely irrelevant to the brain. Jim On Feb 10, 2014, at 3:45 PM, Ali Minai wrote: > I can just see a nascent war brewing between axonists and dendriticists :-), but you're absolutely right: The dendrite has been neglected too long, a victim to the insidious appeal of the point neuron. I still recall that beautiful chapter on dendritic Boolean computation in the first "Methods in Neuronal Modeling" book. > > Ali > > > On Mon, Feb 10, 2014 at 2:40 PM, james bower wrote: > Nice to see this started again, even after the "get me off the mailing list" email. :-) For those of you relatively new to the field - it was discussions like this, I believe, that were responsible for growing connectionists to begin with - 25 years ago. Anyway: > > > Well put - although, there is a long history of engineers and others coming up with interesting new ideas after contemplating biological structures - that actually made a contribution to engineering. Lots of current examples. However, success in the engineering world does not at all necessarily mean that this is how the brain actually does it. > > One more point - it is almost certain that a great deal of the computational power of the nervous system comes from interactions in the dendrite - which almost certainly can not be boiled down to the traditional summation of synaptic inputs over time and space followed by some simple thresholding mechanism.
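To make the contrast concrete, here is a minimal sketch in Python of the "point neuron" abstraction being criticized above: a single weighted sum of synaptic inputs followed by a hard threshold. The weights, inputs, and threshold are made-up illustrative values, not taken from any model discussed in this thread; the point is only that all dendritic structure (local nonlinearities, spatial layout, conduction delays) collapses into one dot product.

    import numpy as np

    def point_neuron(inputs, weights, threshold=1.0):
        # Classic point-neuron abstraction: lump every synapse into one
        # weighted sum, then apply a simple all-or-none threshold.
        drive = np.dot(weights, inputs)
        return 1 if drive >= threshold else 0

    # Illustrative values only: three synapses onto one model neuron.
    synaptic_inputs = np.array([0.9, 0.2, 0.7])    # presynaptic activity (assumed)
    synaptic_weights = np.array([0.5, 1.0, 0.3])   # connection strengths (assumed)
    print(point_neuron(synaptic_inputs, synaptic_weights))  # -> 0 with these numbers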
Therefore, in addition to the vow of chastity for any of you who are really in this business for the love of neuroscience, I also suggest that you focus on the computational erogenous zone of the dendrites. The Internet is a remarkable and complex network, but without understanding how the information it delivers is rendered and influences the computers it is connected to, probably rather difficult to figure out the network itself. > > Jim > > > > On Feb 10, 2014, at 9:56 AM, Ali Minai wrote: > >> I agree with both Juergen and John. On the one hand, most neural processing must - almost necessarily - emerge from the dynamics of many recurrent networks interacting at multiple scales. I that sense, deep learning with recurrent networks is a fruitful place to start in trying to understand this. On the other hand, I also think that the term "deep learning" has become unnecessarily constrained to refer to a particular style of layered architecture and certain types of learning algorithms. We need to move beyond these - broaden the definition to include networks with more complex architectures and learning processes that include development, and even evolution. And to extend the model beyond just "neural" networks to encompass the entire brain-body network, including its mechanical and autonomic components. >> >> One problem is that when engineers and computer scientists try to understand the brain, we keep getting distracted by all the sexy "applications" that arise as a side benefit of our models, go chasing after them, and eventually lose track of the original goal of understanding how the brain works. This results in a lot of very useful neural network models for vision, time-series prediction, data analysis, etc., but doesn't tell us much about the brain. Some of us need to take a vow of chastity and commit ourselves anew to the discipline of biology. >> >> Ali >> >> >> On Mon, Feb 10, 2014 at 10:26 AM, Juergen Schmidhuber wrote: >> John, >> >> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. >> >> Juergen >> >> http://www.idsia.ch/~juergen/whatsnew.html >> >> >> >> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >> >> > Juergen: >> > >> > You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >> > >> > Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >> > learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >> > I gave up this static deep learning idea later after we considered the Principle 1: Development. >> > >> > The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. >> > >> > I quote from Antonio Damasio "Decartes' Error": p. 
93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." >> > >> > Of course, the cerebral pathways themselves are not a stack of recurrent NN either. >> > >> > There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. >> > >> > -John >> > >> > On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >> >> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >> >> >> >> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >> >> >> >> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. >> >> >> >> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >> >> >> >> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >> >> >> >> >> >> References: >> >> >> >> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >> >> >> >> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >> >> >> >> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >> >> >> >> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >> >> >> >> [5] S. Hochreiter, J. Schmidhuber. 
Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >> >> >> >> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >> >> >> >> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >> >> >> >> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >> >> >> >> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >> >> >> >> >> >> >> >> Juergen Schmidhuber >> >> http://www.idsia.ch/~juergen/whatsnew.html >> > >> > -- >> > -- >> > Juyang (John) Weng, Professor >> > Department of Computer Science and Engineering >> > MSU Cognitive Science Program and MSU Neuroscience Program >> > 428 S Shaw Ln Rm 3115 >> > Michigan State University >> > East Lansing, MI 48824 USA >> > Tel: 517-353-4388 >> > Fax: 517-432-1061 >> > Email: weng at cse.msu.edu >> > URL: http://www.cse.msu.edu/~weng/ >> > ---------------------------------------------- >> > >> >> >> >> >> >> -- >> Ali A. Minai, Ph.D. >> Professor >> Complex Adaptive Systems Lab >> Department of Electrical Engineering & Computing Systems >> University of Cincinnati >> Cincinnati, OH 45221-0030 >> >> Phone: (513) 556-4783 >> Fax: (513) 556-7326 >> Email: Ali.Minai at uc.edu >> minaiaa at gmail.com >> >> WWW: http://www.ece.uc.edu/~aminai/ > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > CONFIDENTIAL NOTICE: > > The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or > > any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be > > immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. > > > > > > > -- > Ali A. Minai, Ph.D. 
> Professor > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computing Systems > University of Cincinnati > Cincinnati, OH 45221-0030 > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: http://www.ece.uc.edu/~aminai/ Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pblouw at uwaterloo.ca Mon Feb 10 15:54:22 2014 From: pblouw at uwaterloo.ca (Peter Blouw) Date: Mon, 10 Feb 2014 15:54:22 -0500 Subject: Connectionists: Feb 15th Application Deadline: Nengo Summer School on Large-Scale Brain Modelling Message-ID: Hi everyone, One final reminder to apply to the 2014 Nengo Summer School by Feb. 15th. The school will run from June 8th to June 21st in Waterloo, Ontario. Details below: The Centre for Theoretical Neuroscience at the University of Waterloo is inviting applications for an in-depth, two week summer school that will teach participants how to use the Neural Engineering Framework and the Nengo simulation package to build state-of-the-art cognitive and neural models. Nengo has been used to build what is currently the world's largest functional brain model, Spaun, and provides users with a versatile and powerful environment for simulating cognitive and neural systems. We welcome applications from all interested graduate students, research associates, postdocs, professors, and industry professionals. No specific training in the use of modelling software is required, but we encourage applications from active researchers with a relevant background in psychology, neuroscience, cognitive science, neuromorphic engineering, robotics, or a related field. More information about Nengo, the Neural Engineering Framework, and Spaun can be found at http://www.nengo.ca For more information about the summer school, please visit: http://nengo.ca/summerschool -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kstanley at eecs.ucf.edu Mon Feb 10 18:07:11 2014 From: kstanley at eecs.ucf.edu (Kenneth Stanley) Date: Mon, 10 Feb 2014 18:07:11 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: <014e01cf26b4$d4bd7ab0$7e387010$@eecs.ucf.edu> It's worth mentioning given the recent direction of the conversation that our group and others have been working for many years on the question of how brain-like structures can be evolved artificially. While this research area, called neuroevolution, is at a high level abstraction, it concretely begins to address some of the key questions being raised here about how the messy a priori constraints around the learner itself can practically be achieved. While early work in neuroevolution was mainly just simple extensions of conventional evolutionary algorithms, more recent work takes seriously deeper issues that are closer to the interface with neuroscientific concerns like the relationship of neural geometry to functionality. Just as an example, our 2010 Neural Computation publication, "Autonomous Evolution of Topographic Regularities in Artificial Neural Networks" (available at http://www.mitpressjournals.org/doi/abs/10.1162/neco.2010.06-09-1042 or http://eplex.cs.ucf.edu/papers/gauci_nc10.pdf for the manuscript), shows how topographic maps can emerge if neurons in an evolved network are allowed to exist at defined locations. (The particular algorithm is called HyperNEAT.) While this kind of work does not immediately converge with low-level neural models, it would be shortsighted to assume these areas will not eventually benefit from converging. Given the hunch that many share that much of learning in nature is contingent on ad hoc heuristics and tendencies implanted through eons of evolution, eventually pure learning models may need the support of sophisticated evolutionary infrastructure to most effectively (and realistically) take advantage of messy real world contexts. Best Regards, Ken Stanley Associate Professor of Computer Science University of Central Florida From: Connectionists [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Ali Minai Sent: Monday, February 10, 2014 4:37 PM To: Connectionists List Subject: Re: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory I think Gary's last paragraph is absolutely key. Unless we take both the evolutionary and the developmental processes into account, we will neither understand complex brains fully nor replicate their functionality too well in our robots etc. We build complex robots that know nothing and then ask them to learn complex things, setting up a hopelessly difficult learning problem. But that isn't how animals learn, or why animals have the brains and bodies they have. A purely abstract computational approach to neural models makes the same category error that connectionists criticized symbolists for making, just at a different level. 
Ali On Mon, Feb 10, 2014 at 11:38 AM, Gary Marcus wrote: Juergen and others, I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lay at the center of why many of the hardest problems in AI and neuroscience continue to lay outside of reach, despite in-principle proofs about computational universality. John's basic points, which I have also made before (e.g. in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts) are two a. It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding. Socher et al's impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. b. it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device. Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can't be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like. Wolpert's No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon) but been no match for their symbolic competitors in other domains (chess) and worked only as an adjunct rather than an central ingredient in still others (parsing, question-answering a la Watson, etc); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all. My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the endproduct of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation. We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field's prospects. Best, Gary Marcus Professor of Psychology New York University Visiting Cognitive Scientist Allen Institute for Brain Science Allen Institute for Artiificial Intelligence co-edited book coming late 2014: The Future of the Brain: Essays By The World's Leading Neuroscientists http://garymarcus.com/ On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber wrote: John, perhaps your view is a bit too pessimistic. 
Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. Juergen http://www.idsia.ch/~juergen/whatsnew.html On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: Juergen: You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), I gave up this static deep learning idea later after we considered the Principle 1: Development. The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." Of course, the cerebral pathways themselves are not a stack of recurrent NN either. There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. -John On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. 
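For readers who have not seen the pre-training pattern in code, a minimal sketch of layer-wise unsupervised pre-training of a two-RNN stack followed by supervised fine-tuning is below. It assumes PyTorch, random toy data, and arbitrary layer sizes; it illustrates the general "pre-train each layer to predict its own input, then fine-tune the stack on labels" idea, not the 1991 history compressor or any of the cited systems.

    import torch
    import torch.nn as nn

    seq_len, input_size, hidden1, hidden2, n_classes = 20, 8, 16, 16, 3
    x = torch.randn(64, seq_len, input_size)       # toy input sequences (assumed data)
    y = torch.randint(0, n_classes, (64,))         # toy sequence labels (assumed data)

    rnn1 = nn.RNN(input_size, hidden1, batch_first=True)
    rnn2 = nn.RNN(hidden1, hidden2, batch_first=True)
    head = nn.Linear(hidden2, n_classes)

    def pretrain(rnn, data, steps=200):
        # Unsupervised stage: the layer learns to predict the next time
        # step of its own input sequence.
        readout = nn.Linear(rnn.hidden_size, data.size(-1))
        opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)
        for _ in range(steps):
            h, _ = rnn(data)                       # (batch, seq, hidden)
            loss = ((readout(h[:, :-1]) - data[:, 1:]) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()

    pretrain(rnn1, x)                              # layer 1 predicts the raw input
    with torch.no_grad():
        h1, _ = rnn1(x)
    pretrain(rnn2, h1)                             # layer 2 predicts layer 1's states

    # Supervised stage: fine-tune the whole stack on the labels.
    params = list(rnn1.parameters()) + list(rnn2.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(200):
        h1, _ = rnn1(x)
        h2, _ = rnn2(h1)
        loss = nn.functional.cross_entropy(head(h2[:, -1]), y)
        opt.zero_grad(); loss.backward(); opt.step()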
For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. References: [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural _networks [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf Juergen Schmidhuber http://www.idsia.ch/~juergen/whatsnew.html -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -- Ali A. Minai, Ph.D. Professor Complex Adaptive Systems Lab Department of Electrical Engineering & Computing Systems University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: http://www.ece.uc.edu/~aminai/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From minaiaa at gmail.com Mon Feb 10 20:53:13 2014 From: minaiaa at gmail.com (Ali Minai) Date: Mon, 10 Feb 2014 20:53:13 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: This is widely known and frequently ignored! I am very familiar with both developmental and evolutionary robotics, but neural modelers often ignore their lessons. Evolution is blind but also full of surprises and far smarter than engineers in its blind way, so even human designers have much to learn from it. Sure, they can design systems far faster than evolution - and it would be folly to truly try to evolve complex robots for all applications - but the true value of thinking from an evolutionary viewpoint is that it opens up whole new mechanisms, which can then be incorporated, albeit in simplified form, in non-evolutionary engineering. A careful study of phylogenetic histories illuminates many new things about systems that we might think we already understand. Ironically, the person who taught this lesson best was Herb Simon, one of the founders of symbolic AI! One big problem with the symbolic approach to intelligence was that it assumed mechanisms (algorithms) and just sought to discover how brains instantiated them. Well, it turns out that brains have their own mechanisms which do not necessarily correspond to the abstractions of logic or automata theory. I think that many of us (including myself) make the same error when we assume we understand abstract "neural" mechanisms underlying mental functions, and just try to instantiate them with abstract neural building blocks like attractor networks, feature maps, neuron layers, etc. That is a fine enterprise as long as we acknowledge what we're doing and proceed with humility. In this, I always think of Emily Dickinson's wonderful lines (which I first heard from Lynn Margulis, who did discover one of evolution's big surprises): But Nature is a stranger yet; the ones who cite her most have never passed her haunted house, or simplified her ghost. To pity those who know her not is helped by the regret, that those who know her, know her less the nearer her they get. Apologies for errors - I am quoting from memory. Ali On Mon, Feb 10, 2014 at 4:51 PM, Brian J Mingus wrote: > fyi, there is a field called Developmental Robotics which takes this > perspective seriously. For example, an infant goes through the following > developmental trajectory over the first several months of life: > > - Born with nice looking reaches but can't reach to target > > - Locks the elbow to limit the number of degrees of freedom and practices > pointing to a target > > - Slowly starts to unlock the elbow, exposing more degrees of freedom, and > practices reaching to a target > > The infant does not need to learn how to solve a fully unconstrained > inverse kinematics problem. It is born with reaching affordances and a > musculoskeletal system which constrain the space into something > computationally feasible. > > Likewise, if you hold an infant's feet in warm water it will vigorously > try to walk. > > etc. etc. etc. This general pattern of evolved affordances being used to > bootstrap intelligence is extremely widespread in the brain. > > Anyone who doesn't take this into consideration when modeling the brain > isn't creating a human being, but rather something else. 
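The reaching trajectory described above has a simple computational reading: freezing a joint turns a two-degree-of-freedom search into a one-dimensional one, and the frozen stage then seeds a much smaller local search once the joint is released. The sketch below is a made-up toy with a planar two-link arm and random search, not a model from the developmental-robotics literature; the link lengths, target, and search ranges are invented for illustration.

import math, random

def hand(shoulder, elbow, l1=0.3, l2=0.25):
    # Planar two-link arm: (x, y) position of the hand for the given joint angles.
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

def error(angles, target):
    hx, hy = hand(*angles)
    return math.hypot(hx - target[0], hy - target[1])

random.seed(0)
target = (0.35, 0.25)

# Stage 1: elbow locked straight, so only the shoulder angle is searched (1 DOF).
stage1 = min(((random.uniform(-math.pi, math.pi), 0.0) for _ in range(200)),
             key=lambda a: error(a, target))

# Stage 2: unlock the elbow, but only search near the stage-1 shoulder angle.
stage2 = min(((stage1[0] + random.uniform(-0.5, 0.5), random.uniform(-2.0, 2.0))
              for _ in range(200)),
             key=lambda a: error(a, target))

print("elbow locked :", round(error(stage1, target), 3))
print("elbow freed  :", round(error(stage2, target), 3))

Nothing hinges on the particular numbers; the point is only that the staged search never has to explore the full joint space blindly, which is the computational content of the "lock the elbow first" story.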
> > That said, evolution is a blind designer. A human being can out-design > billions of years of evolution in a few years with nice supercomputer and > plenty of lab subjects. So, if your goal is to understand exactly what a > human being is, you might study human development. But if your goal is to > create something more sophisticated than a human without the annoyance of > studying exactly how a human develops intelligence, you might use deep > networks with pretraining that automatically extract features that > evolution baked in. > > btw, this is all widely known.. no? > > Brian Mingus > http://grey.colorado.edu/mingus > > > On Mon, Feb 10, 2014 at 2:37 PM, Ali Minai wrote: > >> I think Gary's last paragraph is absolutely key. Unless we take both the >> evolutionary and the developmental processes into account, we will neither >> understand complex brains fully nor replicate their functionality too well >> in our robots etc. We build complex robots that know nothing and then ask >> them to learn complex things, setting up a hopelessly difficult learning >> problem. But that isn't how animals learn, or why animals have the brains >> and bodies they have. A purely abstract computational approach to neural >> models makes the same category error that connectionists criticized >> symbolists for making, just at a different level. >> >> Ali >> >> >> On Mon, Feb 10, 2014 at 11:38 AM, Gary Marcus wrote: >> >>> Juergen and others, >>> >>> I am with John on his two basic concerns, and think that your appeal to >>> computational universality is a red herring; I cc the entire group because >>> I think that these issues lay at the center of why many of the hardest >>> problems in AI and neuroscience continue to lay outside of reach, despite >>> in-principle proofs about computational universality. >>> >>> John's basic points, which I have also made before (e.g. in my books The >>> Algebraic Mind and The Birth of the Mind and in my periodic New Yorker >>> posts) are two >>> >>> a. It is unrealistic to expect that hierarchies of pattern recognizers >>> will suffice for the full range of cognitive problems that humans (and >>> strong AI systems) face. Deep learning, to take one example, excels at >>> classification, but has thus far had relatively little to contribute to >>> inference or natural language understanding. Socher et al's impressive CVG >>> work, for instance, is parasitic on a traditional (symbolic) parser, not a >>> soup-to-nuts neural net induced from input. >>> >>> b. it is unrealistic to expect that all the relevant information can be >>> extracted by any general purpose learning device. >>> >>> Yes, you can reliably map any arbitrary input-output relation onto a >>> multilayer perceptron or recurrent net, but *only* if you know the >>> complete input-output mapping in advance. Alas, you can't be guaranteed to >>> do that in general given arbitrary subsets of the complete space; in the >>> real world, learners see subsets of possible data and have to make guesses >>> about what the rest will be like. Wolpert's No Free Lunch work is >>> instructive here (and also in line with how cognitive scientists like >>> Chomsky, Pinker, and myself have thought about the problem). For any >>> problem, I presume that there exists an appropriately-configured net, >>> but there is no guarantee that in the real world you are going to be able >>> to correctly induce the right system via general-purpose learning algorithm >>> given a finite amount of data, with a finite amount of training. 
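A tiny example (an illustrative toy, not taken from any of the references in this thread) makes the underdetermination point concrete: two rules can agree on every training point and still disagree everywhere in between, so no general-purpose learner can choose between them from the sample alone.

import math

def rule_a(x):
    return x                              # one candidate input-output rule

def rule_b(x):
    return x + math.sin(math.pi * x)      # another rule, identical on the integers

train = [0, 1, 2, 3, 4, 5]                # the finite sample the learner sees
print(all(abs(rule_a(x) - rule_b(x)) < 1e-9 for x in train))   # True: same labels
print(rule_a(2.5), rule_b(2.5))           # 2.5 vs 3.5: they diverge off the sample

Which extrapolation counts as "right" is exactly the kind of inductive bias that, on this argument, has to come from somewhere other than the data.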
>>> Empirically, neural nets of roughly the form you are discussing have worked >>> fine for some problems (e.g. backgammon) but been no match for their >>> symbolic competitors in other domains (chess) and worked only as an adjunct >>> rather than an central ingredient in still others (parsing, >>> question-answering a la Watson, etc); in other domains, like planning and >>> common-sense reasoning, there has been essentially no serious work at all. >>> >>> My own take, informed by evolutionary and developmental biology, is that >>> no single general purpose architecture will ever be a match for the >>> endproduct of a billion years of evolution, which includes, I suspect, a >>> significant amount of customized architecture that need not be induced anew >>> in each generation. We learn as well as we do precisely because evolution >>> has preceded us, and endowed us with custom tools for learning in different >>> domains. Until the field of neural nets more seriously engages in >>> understanding what the contribution from evolution to neural wetware might >>> be, I will remain pessimistic about the field's prospects. >>> >>> Best, >>> Gary Marcus >>> >>> Professor of Psychology >>> New York University >>> Visiting Cognitive Scientist >>> Allen Institute for Brain Science >>> Allen Institute for Artiificial Intelligence >>> >>> co-edited book coming late 2014: >>> The Future of the Brain: Essays By The World's Leading Neuroscientists >>> http://garymarcus.com/ >>> >>> On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber >>> wrote: >>> >>> John, >>> >>> perhaps your view is a bit too pessimistic. Note that a single RNN >>> already is a general computer. In principle, dynamic RNNs can map arbitrary >>> observation sequences to arbitrary computable sequences of motoric actions >>> and internal attention-directing operations, e.g., to process cluttered >>> scenes, or to implement development (the examples you mentioned). From my >>> point of view, the main question is how to exploit this universal potential >>> through learning. A stack of dynamic RNN can sometimes facilitate this. >>> What it learns can later be collapsed into a single RNN [3]. >>> >>> Juergen >>> >>> http://www.idsia.ch/~juergen/whatsnew.html >>> >>> >>> >>> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >>> >>> Juergen: >>> >>> You wrote: A stack of recurrent NN. But it is a wrong architecture as >>> far as the brain is concerned. >>> >>> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC >>> was probably the first >>> learning network that used the deep Learning idea for learning from >>> clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >>> I gave up this static deep learning idea later after we considered the >>> Principle 1: Development. >>> >>> The deep learning architecture is wrong for the brain. It is too >>> restricted, static in architecture, and cannot learn directly from >>> cluttered scenes required by Principle 1. The brain is not a cascade of >>> recurrent NN. >>> >>> I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate >>> communications occurs also via large subcortical nuclei such as those in >>> the thalamas and basal ganglia, and via small nulei such as those in the >>> brain stem." >>> >>> Of course, the cerebral pathways themselves are not a stack of recurrent >>> NN either. >>> >>> There are many fundamental reasons for that. 
I give only one here base >>> on our DN brain model: Looking at a human, the brain must dynamically >>> attend the tip of the nose, the entire nose, the face, or the entire human >>> body on the fly. For example, when the network attend the nose, the entire >>> human body becomes the background! Without a brain network that has both >>> shallow and deep connections (unlike your stack of recurrent NN), your >>> network is only for recognizing a set of static patterns in a clean >>> background. This is still an overworked pattern recognition problem, not a >>> vision problem. >>> >>> -John >>> >>> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >>> >>> Deep Learning in Artificial Neural Networks (NN) is about credit >>> assignment across many subsequent computational stages, in deep or >>> recurrent NN. >>> >>> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A >>> stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This >>> can facilitate subsequent supervised learning. >>> >>> Let me re-advertise a much older, very similar, but more general, >>> working Deep Learner of 1991. It can deal with temporal sequences: the >>> Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A >>> stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This >>> can greatly facilitate subsequent supervised learning. >>> >>> The RNN stack is more general in the sense that it uses >>> sequence-processing RNN instead of FNN with unchanging inputs. In the early >>> 1990s, the system was able to learn many previously unlearnable Deep >>> Learning tasks, one of them requiring credit assignment across 1200 >>> successive computational stages [4]. >>> >>> Related developments: In the 1990s there was a trend from partially >>> unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent >>> years, there has been a similar trend from partially unsupervised to fully >>> supervised systems. For example, several recent competition-winning and >>> benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >>> >>> >>> References: >>> >>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of >>> data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, >>> 2006. http://www.cs.toronto.edu/~hinton/science.pdf >>> >>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. >>> 5786, pp. 454-455, 2006. >>> http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >>> >>> [3] J. Schmidhuber. Learning complex, extended sequences using the >>> principle of history compression, Neural Computation, 4(2):234-242, 1992. >>> (Based on TR FKI-148-91, 1991.) >>> ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: >>> http://www.idsia.ch/~juergen/firstdeeplearner.html >>> >>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. >>> ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an >>> experiment with credit assignment across 1200 subsequent computational >>> stages for a Neural Hierarchical Temporal Memory or History Compressor or >>> RNN stack with unsupervised pre-training [2] (try Google Translate in your >>> mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >>> >>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural >>> Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. >>> ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on >>> LSTM under http://www.idsia.ch/~juergen/rnn.html >>> >>> [6] S. 
Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in >>> structured domains with hierarchical recurrent neural networks. In Proc. >>> IJCAI'07, p. 774-779, Hyderabad, India, 2007. >>> ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >>> >>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with >>> Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, >>> MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >>> >>> [8] 2009: First very deep (and recurrent) learner to win international >>> competitions with secret test sets: deep LSTM RNN (1995-) won three >>> connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), >>> performing simultaneous segmentation and recognition. >>> http://www.idsia.ch/~juergen/handwriting.html >>> >>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep >>> Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. >>> http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >>> >>> >>> >>> Juergen Schmidhuber >>> http://www.idsia.ch/~juergen/whatsnew.html >>> >>> >>> -- >>> -- >>> Juyang (John) Weng, Professor >>> Department of Computer Science and Engineering >>> MSU Cognitive Science Program and MSU Neuroscience Program >>> 428 S Shaw Ln Rm 3115 >>> Michigan State University >>> East Lansing, MI 48824 USA >>> Tel: 517-353-4388 >>> Fax: 517-432-1061 >>> Email: weng at cse.msu.edu >>> URL: http://www.cse.msu.edu/~weng/ >>> ---------------------------------------------- >>> >>> >>> >>> >>> >> >> >> -- >> Ali A. Minai, Ph.D. >> Professor >> Complex Adaptive Systems Lab >> Department of Electrical Engineering & Computing Systems >> University of Cincinnati >> Cincinnati, OH 45221-0030 >> >> Phone: (513) 556-4783 >> Fax: (513) 556-7326 >> Email: Ali.Minai at uc.edu >> minaiaa at gmail.com >> >> WWW: http://www.ece.uc.edu/~aminai/ >> > > -- Ali A. Minai, Ph.D. Professor Complex Adaptive Systems Lab Department of Electrical Engineering & Computing Systems University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: http://www.ece.uc.edu/~aminai/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Mon Feb 10 18:56:46 2014 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 10 Feb 2014 18:56:46 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: On Feb 10, 2014, at 4:51 PM, Brian J Mingus wrote: > That said, evolution is a blind designer. A human being can out-design billions of years of evolution in a few years with nice supercomputer and plenty of lab subjects. So, if your goal is to understand exactly what a human being is, you might study human development. But if your goal is to create something more sophisticated than a human without the annoyance of studying exactly how a human develops intelligence [a reasonable goal, depending on your research program] > , you might use deep networks with pretraining that automatically extract features that evolution baked in. > the key question is whether extracting features is, in itself, enough to replicate (or even better) the blind handiwork of evolution. my own guess is ?absolutely not?. Evolution has a done a fine job with evo-crafting features ? which deep networks might plausibly hope to match? 
but that there's a lot of highly-selected circuitry downstream that probably cannot readily be captured through the mere acquisition of hierarchies of features. On the left is a figure from Solari and Stoner's magnificent cognitive consilience diagram, which I encourage all students of cortical neuroscience to contemplate (click to zoom in). On the right is a figure representing Google's cat detector, a state-of-the-art unsupervised learner, yet still no match for humans when it comes to invariance or in the use of top-down visual information. Is the one on the right genuinely a useful approximation of the one on the left?

In my own view there is an impedance mismatch between most current models and the intricacy of biological reality.

Cheers,
Gary

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-1.png
Type: image/png
Size: 265368 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-5.tiff
Type: image/tiff
Size: 37416 bytes
Desc: not available
URL:

From tt at cs.dal.ca Mon Feb 10 23:40:12 2014
From: tt at cs.dal.ca (Thomas Trappenberg)
Date: Tue, 11 Feb 2014 00:40:12 -0400
Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory
In-Reply-To:
References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu>
Message-ID:

I enjoy these further discussions. Thanks so much for all the thoughts. Personally I am always really fascinated when I can learn about mechanisms that are not so obvious, like phase transitions, point attractor networks, or universal learning machines. The recent popularity of deep learning systems is really fun as it creates new interest in students, and learning machines, specifically the question of representational learning, are important and useful in their own right. And I even think that on an abstract level it has something to do with the brain. I also like to understand the brain, where some of these mechanisms are at work but which also has a lot of structure that I would like to understand. Evolution, development, dendritic computations, glia networks, neuromodulation, epigenetics, lots of fascinating anatomy, and, if I might add, probabilistic synapses are all important to understand and must play important roles. Still lots to do, and another reason not to bet on just one.

Personally I am rather critical of universal learning machines. Indeed, we (or at least I) are a good example of non-universal learning machines. I even start to appreciate the comments on recognition rather than learning machines. Even un-human big data seems mostly to solve very stereotyped problems (though recognizing traffic signs with better-than-human performance is also useful). This is why I now like learning with small data, which could be another useful machine learning domain. No free lunch, but evolution can evolve biased learners that can get some free snacks here and there.

Cheers, Thomas

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From andrewslavinross at gmail.com Mon Feb 10 22:50:13 2014 From: andrewslavinross at gmail.com (Andrew Ross) Date: Mon, 10 Feb 2014 22:50:13 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: I'm probably preaching to at least some of the choir here, I don't think we should be too beholden to the brain when we're developing software models to simulate some of the things it does. It honestly seems way too complicated. And I'd like to offer some software development-inspired intuitions about why I feel that way. In software engineering, you always want to be working at a high level of generality, and avoiding special cases whenever possible. They complicate the code unnecessarily, make it harder to maintain or reproduce in the future, and make it less clear for others to understand. Simplicity is crucial for the sustainability of software, and I would posit this is highly relevant for life, as well. The reason why I think simulating the brain is hard is because our brain gives us a general mechanism for solving problems, but at every point along the way evolution has sneaked in a bunch of special cases. If the input is a snake, respond in the following way. If the input is an attractive-ish person, respond in this other manner, etc etc. These extra logic branches complicate any efforts to model cognition in a useful manner. Human nature is filled with such an inestimable amount of baggage that figuring out something very general like the process by which we learn to abstract from examples seems impossible just from examining the brain, no matter how well we learn to subdivide its regions. We have all this specific machinery for recognizing and responding to specific things, and it seems extraordinarily difficult to fully isolate any of those systems from the rest. Any individual brain too might have its own peculiarities; you'd have to examine a whole bunch and figure out principle components to have any real basis for understanding what you see. But I think the brain has to be simple. We could not possibly survive and reproduce so reliably were it not. The biological underpinnings of cognition have to be super robust against the efforts of entropy to introduce randomness into our progeny, and that robustness can only come with a simplicity that I think must be (computationally) tractable. This is why I am sympathetic to efforts that use the brain for inspiration (figuratively), but don't try to actually emulate it. No, we will not be producing anything resembling a human intelligence this way, and there may be certain things we find it easy to recognize because of hardwiring that a simplified machine would have a more difficult time with -- but we know that even people with intense brain damage can find pretty astounding workarounds. The ability to abstract is the ultimate workaround, and it's what allows us to survive even if some of those crucial special cases we have baked in fail to get transmitted to the next generation. The goal is not to simulate a human being; I don't want a program that will love, worry as much as I do, or be at all influenced by the belief that it possesses an elbow. Instead I want a machine that is capable of recognizing cats, and possibly forming a more general concepts of "animals" which can be distinguished from pictures of trees or jars of mustard. 
And beyond pictures, I want a computer program that can begin to understand relations like "this thing is inside this other thing" and be able to identify even very general similarities between different inputs, divorced as much as possible from the specificities of the modality of input it receives. I recognize that's far off, but I feel like we should be focusing on generalization, and how we can simulate some of the most fundamental mechanisms of cognition, without getting hung up on, you know, our actual field of study. Andrew On Mon, Feb 10, 2014 at 6:56 PM, Gary Marcus wrote: > > On Feb 10, 2014, at 4:51 PM, Brian J Mingus > wrote: > > That said, evolution is a blind designer. A human being can out-design > billions of years of evolution in a few years with nice supercomputer and > plenty of lab subjects. So, if your goal is to understand exactly what a > human being is, you might study human development. But if your goal is to > create something more sophisticated than a human without the annoyance of > studying exactly how a human develops intelligence > > [a reasonable goal, depending on your research program] > > , you might use deep networks with pretraining that automatically extract > features that evolution baked in. > > > the key question is whether extracting features is, in itself, enough to > replicate (or even better) the blind handiwork of evolution. my own guess > is "absolutely not". Evolution has a done a fine job with evo-crafting > features -- which deep networks might plausibly hope to match-- but that > there's a lot of highly-selected circuitry *downstream* that probably > cannot readily be captured through the mere acquisition of hierarchies of > features. > > On the left is a figure from Solari and Stoner's > magnificent cognitive consilience diagram, > which I encourage all students of cortical neuroscience to contemplate > (click to zoom in). On the right is a figure representing Google's cat > detector, a state of the art unsupervised learner, yet still no match for > humans when it comes to invariance or in the use of top-down visual > information. Is the one on the right genuinely a useful approximation of > the one of the left? > > In my own view there is an impedance mismatch between most current models > and the intricacy of biological reality. > > Cheers, > Gary > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PastedGraphic-1.png Type: image/png Size: 265368 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PastedGraphic-5.tiff Type: image/tiff Size: 37416 bytes Desc: not available URL: From gary at eng.ucsd.edu Tue Feb 11 05:22:48 2014 From: gary at eng.ucsd.edu (Gary Cottrell) Date: Tue, 11 Feb 2014 11:22:48 +0100 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: <4451FE2D-8521-46F0-A1CE-148F5CC83549@uthscsa.edu> References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <4451FE2D-8521-46F0-A1CE-148F5CC83549@uthscsa.edu> Message-ID: <9E09D066-0F2E-42F7-AE55-4D2750570F77@eng.ucsd.edu> interesting points, jim! I wonder, though, why you worry so much about "big data"? I think it is more like "appropriate-sized data." we have never before been able to give our models anything like the kind of data we get in our first years of life. 
Let's do a little back-of-the-envelope on this. We saccade about 3 times a second, which, if you are awake 16 hours a day (make that 20 for Terry Sejnowski), comes out to about 172,800 fixations per day, or high-dimensional samples of the world, if you like. One year of that, not counting drunken blackouts, etc., is 63 million samples. After 10 years that's 630 million samples. This dwarfs imagenet, at least the 1.2 million images used by Krizhevsky et al. Of course, there is a lot of redundancy here (spatial and temporal), which I believe the brain uses to construct its models (e.g., by some sort of learning rule like Földiák's), so maybe 1.2 million isn't so bad.

On the other hand, you may argue, imagenet is nothing like the real world - it is, after all, pictures taken by humans, so objects tend to be centered. This leads to a comment about you worrying about filtering data to avoid the big data "problem." Well, I would suggest that there is a lot of work on attention (some of it completely compatible with connectionist models, e.g., Itti et al. 1998, Zhang et al., 2008) that would cause a system to focus on objects, just as photographers do. So, it isn't like we haven't worried about that as you do, it's just that we've done something about it! ;-)

Anyway, I like your ideas about the cerebellum - sounds like there are a bunch of Ph.D. theses in there?

cheers,
gary

Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20, 1254-1259.

Zhang, Lingyun, Tong, Matthew H., Marks, Tim K., Shan, Honghao, and Cottrell, Garrison W. (2008). SUN: A Bayesian Framework for Saliency Using Natural Statistics. Journal of Vision 8(7):32, 1-20. The code for SUN is here

On Feb 10, 2014, at 10:04 PM, james bower wrote:

> One other point that some of you might find interesting.
>
> While most neurobiologists and textbooks describe the cerebellum as involved in motor control, I suspect that it is actually not a motor control device in the usual sense at all. We proposed 20+ years ago that the cerebellum is actually a sensory control device, using the motor system (although not only) to precisely position sensory surfaces to collect the data the nervous system actually needs and expects. In the context of the current discussion about big data - such a mechanism would also contribute to the nervous system's working around a potential data problem.
>
> Leaping and jumping forward, as an extension of this idea, we have proposed that autism may actually be an adaptive response to cerebellar dysfunction - and therefore a response to uncontrolled big data flux (in your terms).
>
> So, if correct, the brain adapts to being confronted with badly controlled data acquisition by shutting it off.
>
> Just to think about. Again, papers available for anyone interested.
>
> Given how much we do know about cerebellar circuitry - this could actually be an interesting opportunity for some cross-disciplinary thinking about how one would use an active sensory data acquisition controller to select the sensory data that is ideal given an internal model of the world. Almost all of the NN-type cerebellar models to date have been built around either the idea that the cerebellum is a motor timing device, or involved in learning (yadda yadda).
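Cottrell's fixation estimate above is easy to check; the lines below just redo his arithmetic with the rates he assumes (3 fixations per second, 16 waking hours per day, 1.2 million ImageNet images).

fixations_per_day = 3 * 60 * 60 * 16      # 3 saccades per second, 16 waking hours
per_year = fixations_per_day * 365        # not counting drunken blackouts, etc.
imagenet = 1200000                        # images used by Krizhevsky et al.

print(fixations_per_day)                            # 172800 fixations per day
print(per_year // 10**6, "million samples / year")  # 63 million
print(10 * per_year // 10**6, "million / decade")   # 630 million
print(10 * per_year // imagenet, "times the size of ImageNet")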
> > Perhaps most on this list interested in brain networks don?t know that far and away, the pathway with the largest total number of axons is the pathway from the (entire) cerebral cortex to the cerebellum. We have predicted that this pathway is the mechanism by which the cerebral cortex ?loads? the cerebellum with knowledge about what it expects and needs. > > > > Jim > > > > > > On Feb 10, 2014, at 2:24 PM, james bower wrote: > >> Excellent points - reminds me again of the conversation years ago about whether a general structure like a Hopfield Network would, by itself, solve a large number of problems. All evidence from the nervous system points in the direction of strong influence of the nature of the problem on the solution, which also seems consistent with what has happened in real world applications of NN to engineering over the last 25 years. >> >> For biology, however, the interesting (even fundamental) question becomes, what the following actually are: >> >>> endowed us with custom tools for learning in different domains >> >>> the contribution from evolution to neural wetware might be >> >> I have mentioned previously, that my guess (and surprise) based on our own work over the last 30 years in olfaction is that ?learning? may all together be over emphasized (we do love free will). Yes, in our laboratories we place animals in situations where they have to ?learn?, but my suspicion is that in the real world where brains actually operate and evolve, most of what we do is actually ?recognition? that involves matching external stimuli to internal ?models? of what we expect to be there. I think that it is quite likely that that ?deep knowledge? is how evolution has most patterned neural wetware. Seems to me a way to avoid NP problems and the pitfalls of dealing with ?big data? which as I have said, I suspect the nervous system avoids at all costs. >> >> I have mentioned that we have reason to believe (to my surprise) that, starting with the olfactory receptors, the olfactory system already ?knows? about the metabolic structure of the real world. Accordingly, we are predicting that its receptors aren?t organized to collect a lot of data about the chemical structure of the stimulus (the way a chemist would), but instead looks for chemical signatures of metabolic processes. e.g. , it may be that 1/3 or more of mouse olfactory receptors detect one of the three molecules that are produced by many different kinds of fruit when ripe. ?Learning? in olfaction, might be some small additional mechanism you put on top to change the ?hedonic? value of the stimulus - ie. you can ?learn? to like fermented fish paste. But it is very likely that recognizing the (usually deleterious) environmental signature of fermentation is "hard wired?, requiring ?learning? to change the natural category. >> >> I know that many cognitive types (and philosophers as well) have developed much more nuanced discussions of these questions - however, I have always been struck by how much of the effort in NNs is focused on ?learning? as if it is the primary attribute of the nervous system we are trying to figure out. It seems to me figuring out "what the nose already knows? is much more important. 
>> >> >> Jim >> >> >> >> >> >> >> On Feb 10, 2014, at 10:38 AM, Gary Marcus wrote: >> >>> Juergen and others, >>> >>> I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lay at the center of why many of the hardest problems in AI and neuroscience continue to lay outside of reach, despite in-principle proofs about computational universality. >>> >>> John?s basic points, which I have also made before (e.g. in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts) are two >>> >>> a. It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding. Socher et al?s impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. >>> >>> b. it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device. >>> >>> Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can?t be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like. Wolpert?s No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon) but been no match for their symbolic competitors in other domains (chess) and worked only as an adjunct rather than an central ingredient in still others (parsing, question-answering a la Watson, etc); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all. >>> >>> My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the endproduct of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation. We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field?s prospects. 
>>> >>> Best, >>> Gary Marcus >>> >>> Professor of Psychology >>> New York University >>> Visiting Cognitive Scientist >>> Allen Institute for Brain Science >>> Allen Institute for Artiificial Intelligence >>> co-edited book coming late 2014: >>> The Future of the Brain: Essays By The World?s Leading Neuroscientists >>> http://garymarcus.com/ >>> >>> On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber wrote: >>> >>>> John, >>>> >>>> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. >>>> >>>> Juergen >>>> >>>> http://www.idsia.ch/~juergen/whatsnew.html >>>> >>>> >>>> >>>> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >>>> >>>>> Juergen: >>>>> >>>>> You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >>>>> >>>>> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >>>>> learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >>>>> I gave up this static deep learning idea later after we considered the Principle 1: Development. >>>>> >>>>> The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. >>>>> >>>>> I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." >>>>> >>>>> Of course, the cerebral pathways themselves are not a stack of recurrent NN either. >>>>> >>>>> There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. >>>>> >>>>> -John >>>>> >>>>> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >>>>>> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >>>>>> >>>>>> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >>>>>> >>>>>> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. 
This can greatly facilitate subsequent supervised learning. >>>>>> >>>>>> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >>>>>> >>>>>> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >>>>>> >>>>>> >>>>>> References: >>>>>> >>>>>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >>>>>> >>>>>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >>>>>> >>>>>> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >>>>>> >>>>>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >>>>>> >>>>>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >>>>>> >>>>>> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >>>>>> >>>>>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >>>>>> >>>>>> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >>>>>> >>>>>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. 
http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >>>>>> >>>>>> >>>>>> >>>>>> Juergen Schmidhuber >>>>>> http://www.idsia.ch/~juergen/whatsnew.html >>>>> >>>>> -- >>>>> -- >>>>> Juyang (John) Weng, Professor >>>>> Department of Computer Science and Engineering >>>>> MSU Cognitive Science Program and MSU Neuroscience Program >>>>> 428 S Shaw Ln Rm 3115 >>>>> Michigan State University >>>>> East Lansing, MI 48824 USA >>>>> Tel: 517-353-4388 >>>>> Fax: 517-432-1061 >>>>> Email: weng at cse.msu.edu >>>>> URL: http://www.cse.msu.edu/~weng/ >>>>> ---------------------------------------------- >>>>> >>>> >>>> >>> >> >> >> >> >> >> Dr. James M. Bower Ph.D. >> >> Professor of Computational Neurobiology >> >> Barshop Institute for Longevity and Aging Studies. >> >> 15355 Lambda Drive >> >> University of Texas Health Science Center >> >> San Antonio, Texas 78245 >> >> >> Phone: 210 382 0553 >> >> Email: bower at uthscsa.edu >> >> Web: http://www.bower-lab.org >> >> twitter: superid101 >> >> linkedin: Jim Bower >> >> >> CONFIDENTIAL NOTICE: >> >> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or >> >> any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be >> >> immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. >> >> >> > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > CONFIDENTIAL NOTICE: > > The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or > > any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be > > immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. > > > [I am in Dijon, France on sabbatical this year. 
To call me, Skype works best (gwcottrell), or dial +33 788319271]

Gary Cottrell 858-534-6640 FAX: 858-534-7029
My schedule is here: http://tinyurl.com/b7gxpwo
Computer Science and Engineering 0404
IF USING FED EX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130
University of California San Diego
9500 Gilman Drive # 0404
La Jolla, Ca. 92093-0404

Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln
"I'll have a café mocha vodka valium latte to go, please" -Anonymous
"Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama
"Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said.
"A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht.
"Physical reality is great, but it has a lousy search function." -Matt Tong
"Only connect!" -E.M. Forster
"You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton
"There is nothing objective about objective functions" - Jay McClelland
"I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin

Email: gary at ucsd.edu
Home page: http://www-cse.ucsd.edu/~gary/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gary at eng.ucsd.edu Tue Feb 11 05:34:58 2014
From: gary at eng.ucsd.edu (Gary Cottrell)
Date: Tue, 11 Feb 2014 11:34:58 +0100
Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory
In-Reply-To: <9E09D066-0F2E-42F7-AE55-4D2750570F77@eng.ucsd.edu>
References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <4451FE2D-8521-46F0-A1CE-148F5CC83549@uthscsa.edu> <9E09D066-0F2E-42F7-AE55-4D2750570F77@eng.ucsd.edu>
Message-ID:

Oh, and I forgot to mention, this is just visual information, obviously. Compare this to the 5-8 syllables per second we get (depending on the language, but the information rate seems to be about the same across languages, relative to Vietnamese; Pellegrino et al. 2011). So this is about double the samples of fixations per second, but we aren't always listening to speech. But for those who listen to rap, Eminem comes in at about 10 syllables per second, but he is topped by Outsider, at 21 syllables per second.

g.

François Pellegrino, Christophe Coupé, Egidio Marsico (2011). A Cross-Language Perspective on Speech Information Rate. Language, 87(3):539-558. doi:10.1353/lan.2011.0057

On Feb 11, 2014, at 11:22 AM, Gary Cottrell wrote:

> interesting points, jim!
>
> I wonder, though, why you worry so much about "big data"?
>
> I think it is more like "appropriate-sized data." we have never before been able to give our models anything like the kind of data we get in our first years of life. Let's do a little back-of-the-envelope on this.
We saccade about 3 times a second, which, if you are awake 16 hours a day (make that 20 for Terry Sejnowski), come out to about 172,800 fixations per day, or high-dimensional samples of the world, if you like. One year of that, not counting drunken blackouts, etc., is 63 million samples. After 10 years that's 630 million samples. This dwarfs imagenet, at least the 1.2 million images used by Krizhevsky et al. Of course, there is a lot of redundancy here (spatial and temporal), which I believe the brain uses to construct its models (e.g., by some sort of learning rule like F?ldi?k's), so maybe 1.2 million isn't so bad. > > On the other hand, you may argue, imagenet is nothing like the real world - it is, after all, pictures taken by humans, so objects tend to be centered. This leads to a comment about you worrying about filtering data to avoid the big data "problem." Well, I would suggest that there is a lot of work on attention (some of it completely compatible with connectionist models, e.g., Itti, et al. 1998, Zhang, et al., 2008) that would cause a system to focus on objects, just as photographers do. So, it isn't like we haven't worried about that as you do, it's just that we've done something about it! ;-) > > Anyway, I like your ideas about the cerebellum - sounds like there are a bunch of Ph.D. theses in there? > > cheers, > gary > > > Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20, 1254?1259. > > Zhang, Lingyun, Tong, Matthew H., Marks, Tim K., Shan, Honghao, and Cottrell, Garrison W. (2008). SUN: A Bayesian Framework for Saliency Using Natural Statistics. Journal of Vision 8(7):32, 1-20. > The code for SUN is here > On Feb 10, 2014, at 10:04 PM, james bower wrote: > >> One other point that some of you might find interesting. >> >> While most neurobiologists and text books describe the cerebellum as involved in motor control, I suspect that it is actually not a motor control device in the usual sense at all. We proposed 20+ years ago that the cerebellum is actually a sensory control device, using the motor system (although not only) to precisely position sensory surfaces to collect the data the nervous system actually needs and expects. In the context of the current discussion about big data - such a mechanism would also contribute to the nervous system?s working around a potential data problem. >> >> Leaping and jumping forward, as an extension of this idea, we have proposed that autism may actually be an adaptive response to cerebellar dysfunction - and therefore a response to uncontrolled big data flux (in your terms). >> >> So, if correct, the brain adapts to being confronted with badly controlled data acquisition by shutting it off. >> >> Just to think about. Again, papers available for anyone interested. >> >> Given how much we do know about cerebellar circuitry - this could actually be an interesting opportunity for some cross disciplinary thinking about how one would use an active sensory data acquisition controller to select the sensory data that is ideal given an internal model of the world. Almost all of the NN type cerebellar models to date have been built around either the idea that the cerebellum is a motor timing device, or involved in learning (yadda yadda). 
>> >> Perhaps most on this list interested in brain networks don?t know that far and away, the pathway with the largest total number of axons is the pathway from the (entire) cerebral cortex to the cerebellum. We have predicted that this pathway is the mechanism by which the cerebral cortex ?loads? the cerebellum with knowledge about what it expects and needs. >> >> >> >> Jim >> >> >> >> >> >> On Feb 10, 2014, at 2:24 PM, james bower wrote: >> >>> Excellent points - reminds me again of the conversation years ago about whether a general structure like a Hopfield Network would, by itself, solve a large number of problems. All evidence from the nervous system points in the direction of strong influence of the nature of the problem on the solution, which also seems consistent with what has happened in real world applications of NN to engineering over the last 25 years. >>> >>> For biology, however, the interesting (even fundamental) question becomes, what the following actually are: >>> >>>> endowed us with custom tools for learning in different domains >>> >>>> the contribution from evolution to neural wetware might be >>> >>> I have mentioned previously, that my guess (and surprise) based on our own work over the last 30 years in olfaction is that ?learning? may all together be over emphasized (we do love free will). Yes, in our laboratories we place animals in situations where they have to ?learn?, but my suspicion is that in the real world where brains actually operate and evolve, most of what we do is actually ?recognition? that involves matching external stimuli to internal ?models? of what we expect to be there. I think that it is quite likely that that ?deep knowledge? is how evolution has most patterned neural wetware. Seems to me a way to avoid NP problems and the pitfalls of dealing with ?big data? which as I have said, I suspect the nervous system avoids at all costs. >>> >>> I have mentioned that we have reason to believe (to my surprise) that, starting with the olfactory receptors, the olfactory system already ?knows? about the metabolic structure of the real world. Accordingly, we are predicting that its receptors aren?t organized to collect a lot of data about the chemical structure of the stimulus (the way a chemist would), but instead looks for chemical signatures of metabolic processes. e.g. , it may be that 1/3 or more of mouse olfactory receptors detect one of the three molecules that are produced by many different kinds of fruit when ripe. ?Learning? in olfaction, might be some small additional mechanism you put on top to change the ?hedonic? value of the stimulus - ie. you can ?learn? to like fermented fish paste. But it is very likely that recognizing the (usually deleterious) environmental signature of fermentation is "hard wired?, requiring ?learning? to change the natural category. >>> >>> I know that many cognitive types (and philosophers as well) have developed much more nuanced discussions of these questions - however, I have always been struck by how much of the effort in NNs is focused on ?learning? as if it is the primary attribute of the nervous system we are trying to figure out. It seems to me figuring out "what the nose already knows? is much more important. 
>>> >>> >>> Jim >>> >>> >>> >>> >>> >>> On Feb 10, 2014, at 10:38 AM, Gary Marcus wrote: >>> >>>> Juergen and others, >>>> >>>> I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lie at the center of why many of the hardest problems in AI and neuroscience continue to lie outside of reach, despite in-principle proofs about computational universality. >>>> >>>> John's basic points, which I have also made before (e.g. in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts) are two: >>>> >>>> a. It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding. Socher et al.'s impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. >>>> >>>> b. It is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device. >>>> >>>> Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can't be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like. Wolpert's No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via a general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon) but been no match for their symbolic competitors in other domains (chess) and worked only as an adjunct rather than a central ingredient in still others (parsing, question-answering a la Watson, etc); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all. >>>> >>>> My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the end product of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation. We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field's prospects.
>>>> >>>> Best, >>>> Gary Marcus >>>> >>>> Professor of Psychology >>>> New York University >>>> Visiting Cognitive Scientist >>>> Allen Institute for Brain Science >>>> Allen Institute for Artificial Intelligence >>>> co-edited book coming late 2014: >>>> The Future of the Brain: Essays By The World's Leading Neuroscientists >>>> http://garymarcus.com/ >>>> >>>> On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber wrote: >>>> >>>>> John, >>>>> >>>>> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. >>>>> >>>>> Juergen >>>>> >>>>> http://www.idsia.ch/~juergen/whatsnew.html >>>>> >>>>> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >>>>> >>>>>> Juergen: >>>>>> >>>>>> You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >>>>>> >>>>>> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >>>>>> learning network that used the deep learning idea for learning from cluttered scenes (Cresceptron ICCV 1992 and IJCV 1997), >>>>>> I gave up this static deep learning idea later after we considered Principle 1: Development. >>>>>> >>>>>> The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes as required by Principle 1. The brain is not a cascade of recurrent NN. >>>>>> >>>>>> I quote from Antonio Damasio's "Descartes' Error", p. 93: "But intermediate communication occurs also via large subcortical nuclei such as those in the thalamus and basal ganglia, and via small nuclei such as those in the brain stem." >>>>>> >>>>>> Of course, the cerebral pathways themselves are not a stack of recurrent NN either. >>>>>> >>>>>> There are many fundamental reasons for that. I give only one here, based on our DN brain model: Looking at a human, the brain must dynamically attend to the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attends to the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns against a clean background. This is still an overworked pattern recognition problem, not a vision problem. >>>>>> >>>>>> -John >>>>>> >>>>>> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >>>>>>> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >>>>>>> >>>>>>> A popular Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >>>>>>> >>>>>>> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3].
A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. >>>>>>> >>>>>>> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >>>>>>> >>>>>>> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >>>>>>> >>>>>>> >>>>>>> References: >>>>>>> >>>>>>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >>>>>>> >>>>>>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >>>>>>> >>>>>>> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >>>>>>> >>>>>>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >>>>>>> >>>>>>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >>>>>>> >>>>>>> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >>>>>>> >>>>>>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >>>>>>> >>>>>>> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >>>>>>> >>>>>>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. 
http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >>>>>>> >>>>>>> Juergen Schmidhuber >>>>>>> http://www.idsia.ch/~juergen/whatsnew.html >>>>>> >>>>>> -- >>>>>> Juyang (John) Weng, Professor >>>>>> Department of Computer Science and Engineering >>>>>> MSU Cognitive Science Program and MSU Neuroscience Program >>>>>> 428 S Shaw Ln Rm 3115 >>>>>> Michigan State University >>>>>> East Lansing, MI 48824 USA >>>>>> Tel: 517-353-4388 >>>>>> Fax: 517-432-1061 >>>>>> Email: weng at cse.msu.edu >>>>>> URL: http://www.cse.msu.edu/~weng/ >>>>>> ---------------------------------------------- >>> >>> Dr. James M. Bower Ph.D. >>> Professor of Computational Neurobiology >>> Barshop Institute for Longevity and Aging Studies >>> 15355 Lambda Drive >>> University of Texas Health Science Center >>> San Antonio, Texas 78245 >>> Phone: 210 382 0553 >>> Email: bower at uthscsa.edu >>> Web: http://www.bower-lab.org >>> twitter: superid101 >>> linkedin: Jim Bower
[I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] Gary Cottrell 858-534-6640 FAX: 858-534-7029 My schedule is here: http://tinyurl.com/b7gxpwo Computer Science and Engineering 0404 IF USING FED EX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln "I'll have a café mocha vodka valium latte to go, please" -Anonymous "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. "Physical reality is great, but it has a lousy search function." -Matt Tong "Only connect!" -E.M. Forster "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton "There is nothing objective about objective functions" - Jay McClelland "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gary at eng.ucsd.edu Tue Feb 11 05:42:23 2014 From: gary at eng.ucsd.edu (Gary Cottrell) Date: Tue, 11 Feb 2014 11:42:23 +0100 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <23B2A5C3-3F74-4690-AAA3-D266B4978C05@uthscsa.edu> Message-ID: <4069454B-0CD4-4B93-933F-F01A8B4365FE@eng.ucsd.edu> Well, since I haven't seen him pipe up here, there has been a lot of work on this for quite a while by one Bart Mel... most recently I saw him speak on dendritic computation at a CoSyNe workshop. g. On Feb 10, 2014, at 10:45 PM, Ali Minai wrote: > I can just see a nascent war brewing between axonists and dendriticists :-), but you're absolutely right: The dendrite has been neglected too long, a victim of the insidious appeal of the point neuron. I still recall that beautiful chapter on dendritic Boolean computation in the first "Methods in Neuronal Modeling" book. > > Ali > > > On Mon, Feb 10, 2014 at 2:40 PM, james bower wrote: > Nice to see this started again, even after the "get me off the mailing list" email. :-) For those of you relatively new to the field - it was discussions like this, I believe, that were responsible for growing connectionists to begin with - 25 years ago. Anyway: > > > Well put - although there is a long history of engineers and others coming up with interesting new ideas after contemplating biological structures - ideas that actually made a contribution to engineering. Lots of current examples. However, success in the engineering world does not at all necessarily mean that this is how the brain actually does it. > > One more point - it is almost certain that a great deal of the computational power of the nervous system comes from interactions in the dendrite - which almost certainly cannot be boiled down to the traditional summation of synaptic inputs over time and space followed by some simple thresholding mechanism. Therefore, in addition to the vow of chastity for any of you who are really in this business for the love of neuroscience, I also suggest that you focus on the computational erogenous zone of the dendrites. The Internet is a remarkable and complex network, but without understanding how the information it delivers is rendered and influences the computers it is connected to, it would probably be rather difficult to figure out the network itself. > > Jim > > > > On Feb 10, 2014, at 9:56 AM, Ali Minai wrote: > >> I agree with both Juergen and John. On the one hand, most neural processing must - almost necessarily - emerge from the dynamics of many recurrent networks interacting at multiple scales. In that sense, deep learning with recurrent networks is a fruitful place to start in trying to understand this. On the other hand, I also think that the term "deep learning" has become unnecessarily constrained to refer to a particular style of layered architecture and certain types of learning algorithms. We need to move beyond these - broaden the definition to include networks with more complex architectures and learning processes that include development, and even evolution. And to extend the model beyond just "neural" networks to encompass the entire brain-body network, including its mechanical and autonomic components. 
>> >> One problem is that when engineers and computer scientists try to understand the brain, we keep getting distracted by all the sexy "applications" that arise as a side benefit of our models, go chasing after them, and eventually lose track of the original goal of understanding how the brain works. This results in a lot of very useful neural network models for vision, time-series prediction, data analysis, etc., but doesn't tell us much about the brain. Some of us need to take a vow of chastity and commit ourselves anew to the discipline of biology. >> >> Ali >> >> >> On Mon, Feb 10, 2014 at 10:26 AM, Juergen Schmidhuber wrote: >> John, >> >> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. >> >> Juergen >> >> http://www.idsia.ch/~juergen/whatsnew.html >> >> >> >> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >> >> > Juergen: >> > >> > You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >> > >> > Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >> > learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >> > I gave up this static deep learning idea later after we considered the Principle 1: Development. >> > >> > The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. >> > >> > I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." >> > >> > Of course, the cerebral pathways themselves are not a stack of recurrent NN either. >> > >> > There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. >> > >> > -John >> > >> > On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >> >> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >> >> >> >> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >> >> >> >> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. 
It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. >> >> >> >> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >> >> >> >> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >> >> >> >> >> >> References: >> >> >> >> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >> >> >> >> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >> >> >> >> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >> >> >> >> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >> >> >> >> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >> >> >> >> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >> >> >> >> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >> >> >> >> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >> >> >> >> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. 
http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >> >> >> >> >> >> >> >> Juergen Schmidhuber >> >> http://www.idsia.ch/~juergen/whatsnew.html >> > >> > -- >> > -- >> > Juyang (John) Weng, Professor >> > Department of Computer Science and Engineering >> > MSU Cognitive Science Program and MSU Neuroscience Program >> > 428 S Shaw Ln Rm 3115 >> > Michigan State University >> > East Lansing, MI 48824 USA >> > Tel: 517-353-4388 >> > Fax: 517-432-1061 >> > Email: weng at cse.msu.edu >> > URL: http://www.cse.msu.edu/~weng/ >> > ---------------------------------------------- >> > >> >> >> >> >> >> -- >> Ali A. Minai, Ph.D. >> Professor >> Complex Adaptive Systems Lab >> Department of Electrical Engineering & Computing Systems >> University of Cincinnati >> Cincinnati, OH 45221-0030 >> >> Phone: (513) 556-4783 >> Fax: (513) 556-7326 >> Email: Ali.Minai at uc.edu >> minaiaa at gmail.com >> >> WWW: http://www.ece.uc.edu/~aminai/ > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > CONFIDENTIAL NOTICE: > > The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or > > any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be > > immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. > > > > > > > -- > Ali A. Minai, Ph.D. > Professor > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computing Systems > University of Cincinnati > Cincinnati, OH 45221-0030 > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: http://www.ece.uc.edu/~aminai/ [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] Gary Cottrell 858-534-6640 FAX: 858-534-7029 My schedule is here: http://tinyurl.com/b7gxpwo Computer Science and Engineering 0404 IF USING FED EX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln "I'll have a caf? mocha vodka valium latte to go, please" -Anonymous "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." 
-Barack Obama "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. "Physical reality is great, but it has a lousy search function." -Matt Tong "Only connect!" -E.M. Forster "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton "There is nothing objective about objective functions" - Jay McClelland "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary at eng.ucsd.edu Tue Feb 11 05:43:16 2014 From: gary at eng.ucsd.edu (Gary Cottrell) Date: Tue, 11 Feb 2014 11:43:16 +0100 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: <43720F06-4754-4044-9E7C-311CAF8A5EA4@eng.ucsd.edu> And this speaks to all of the work on developmental robotics, which works from the principle that robots will have to go through a developmental period like we do? g. On Feb 10, 2014, at 10:37 PM, Ali Minai wrote: > I think Gary's last paragraph is absolutely key. Unless we take both the evolutionary and the developmental processes into account, we will neither understand complex brains fully nor replicate their functionality too well in our robots etc. We build complex robots that know nothing and then ask them to learn complex things, setting up a hopelessly difficult learning problem. But that isn't how animals learn, or why animals have the brains and bodies they have. A purely abstract computational approach to neural models makes the same category error that connectionists criticized symbolists for making, just at a different level. > > Ali > > > On Mon, Feb 10, 2014 at 11:38 AM, Gary Marcus wrote: > Juergen and others, > > I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lay at the center of why many of the hardest problems in AI and neuroscience continue to lay outside of reach, despite in-principle proofs about computational universality. > > John?s basic points, which I have also made before (e.g. in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts) are two > > a. It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding. Socher et al?s impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. > > b. it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device. 
> > Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can?t be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like. Wolpert?s No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon) but been no match for their symbolic competitors in other domains (chess) and worked only as an adjunct rather than an central ingredient in still others (parsing, question-answering a la Watson, etc); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all. > > My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the endproduct of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation. We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field?s prospects. > > Best, > Gary Marcus > > Professor of Psychology > New York University > Visiting Cognitive Scientist > Allen Institute for Brain Science > Allen Institute for Artiificial Intelligence > co-edited book coming late 2014: > The Future of the Brain: Essays By The World?s Leading Neuroscientists > http://garymarcus.com/ > > On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber wrote: > >> John, >> >> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. >> >> Juergen >> >> http://www.idsia.ch/~juergen/whatsnew.html >> >> >> >> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >> >>> Juergen: >>> >>> You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >>> >>> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >>> learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >>> I gave up this static deep learning idea later after we considered the Principle 1: Development. >>> >>> The deep learning architecture is wrong for the brain. 
It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. >>> >>> I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." >>> >>> Of course, the cerebral pathways themselves are not a stack of recurrent NN either. >>> >>> There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. >>> >>> -John >>> >>> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >>>> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >>>> >>>> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >>>> >>>> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. >>>> >>>> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >>>> >>>> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >>>> >>>> >>>> References: >>>> >>>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >>>> >>>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >>>> >>>> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >>>> >>>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . 
Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >>>> >>>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >>>> >>>> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >>>> >>>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >>>> >>>> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >>>> >>>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >>>> >>>> >>>> >>>> Juergen Schmidhuber >>>> http://www.idsia.ch/~juergen/whatsnew.html >>> >>> -- >>> -- >>> Juyang (John) Weng, Professor >>> Department of Computer Science and Engineering >>> MSU Cognitive Science Program and MSU Neuroscience Program >>> 428 S Shaw Ln Rm 3115 >>> Michigan State University >>> East Lansing, MI 48824 USA >>> Tel: 517-353-4388 >>> Fax: 517-432-1061 >>> Email: weng at cse.msu.edu >>> URL: http://www.cse.msu.edu/~weng/ >>> ---------------------------------------------- >>> >> >> > > > > > -- > Ali A. Minai, Ph.D. > Professor > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computing Systems > University of Cincinnati > Cincinnati, OH 45221-0030 > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: http://www.ece.uc.edu/~aminai/ [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] Gary Cottrell 858-534-6640 FAX: 858-534-7029 My schedule is here: http://tinyurl.com/b7gxpwo Computer Science and Engineering 0404 IF USING FED EX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln "I'll have a caf? mocha vodka valium latte to go, please" -Anonymous "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. "Physical reality is great, but it has a lousy search function." 
-Matt Tong "Only connect!" -E.M. Forster "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton "There is nothing objective about objective functions" - Jay McClelland "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary at eng.ucsd.edu Tue Feb 11 05:45:45 2014 From: gary at eng.ucsd.edu (Gary Cottrell) Date: Tue, 11 Feb 2014 11:45:45 +0100 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: > Likewise, if you hold an infant's feet in warm water it will vigorously try to walk. And if you put it in water, it will swim?. [sorry for all the posts, but my wife is making me stay home from work today after I went to work yesterday a week after my heart attack. You can't hold a good connectionist down! - ;-)] [at least they aren't as long as Jim's!] On Feb 10, 2014, at 10:51 PM, Brian J Mingus wrote: [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] Gary Cottrell 858-534-6640 FAX: 858-534-7029 My schedule is here: http://tinyurl.com/b7gxpwo Computer Science and Engineering 0404 IF USING FED EX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln "I'll have a caf? mocha vodka valium latte to go, please" -Anonymous "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. "Physical reality is great, but it has a lousy search function." -Matt Tong "Only connect!" -E.M. Forster "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton "There is nothing objective about objective functions" - Jay McClelland "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From juergen at idsia.ch Tue Feb 11 06:21:31 2014 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Tue, 11 Feb 2014 12:21:31 +0100 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: <6A71A1D7-9E58-4AED-B44E-BE9AD1E567C1@idsia.ch> Gary (Marcus), you wrote: "it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device." You might be a bit too pessimistic about general purpose systems. Unbeknownst to many NN researchers, there are _universal_ problem solvers that are time-optimal in various theoretical senses [10-12] (not to be confused with universal incomputable AI [13]). For example, there is a meta-method [10] that solves any well-defined problem as quickly as the unknown fastest way of solving it, save for an additive constant overhead that becomes negligible as problem size grows. Note that most problems are large; only a few are small. (AI researchers are still in business because many are interested in problems so small that it is worth trying to reduce the overhead.) Several posts addressed the subject of evolution (Gary Marcus, Ken Stanley, Brian Mingus, Ali Minai, Thomas Trappenberg). Evolution is another form of learning, of searching the parameter space. Not provably optimal in the sense of the methods above, but often quite practical. It is used all the time for reinforcement learning without a teacher. For example, an RNN with over a million weights recently learned through evolution to drive a simulated car based on a high-dimensional video-like visual input stream [14,15]. The RNN learned both control and visual processing from scratch, without being aided by unsupervised techniques (which may speed up evolution by reducing the search space through compact sensory codes). Jim, you wrote: "this could actually be an interesting opportunity for some cross disciplinary thinking about how one would use an active sensory data acquisition controller to select the sensory data that is ideal given an internal model of the world." Well, that's what intrinsic reward-driven curiosity and attention direction is all about - reward the controller for selecting data that maximises learning/compression progress of the world model - lots of work on this since 1990 [16,17]. (See also posts on developmental robotics by Brian Mingus and Gary Cottrell.) [10] Marcus Hutter. The Fastest and Shortest Algorithm for All Well-Defined Problems. International Journal of Foundations of Computer Science, 13(3):431-443, 2002. (On J. Schmidhuber's SNF grant 20-61847.) [11] http://www.idsia.ch/~juergen/optimalsearch.html [12] http://www.idsia.ch/~juergen/goedelmachine.html [13] http://www.idsia.ch/~juergen/unilearn.html [14] J. Koutnik, G. Cuccu, J. Schmidhuber, F. Gomez. Evolving Large-Scale Neural Networks for Vision-Based Reinforcement Learning. Proc. GECCO'13, Amsterdam, July 2013. 
[15] http://www.idsia.ch/~juergen/compressednetworksearch.html [16] http://www.idsia.ch/~juergen/interest.html [17] http://www.idsia.ch/~juergen/creativity.html Juergen From jose at psychology.rutgers.edu Tue Feb 11 06:40:38 2014 From: jose at psychology.rutgers.edu (Stephen José Hanson) Date: Tue, 11 Feb 2014 06:40:38 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: <1392118838.2486.17.camel@sam> Yikes... your wife is correct... stay home! Sorry to hear about your heart attack! I wonder, though, if your and Jim's recent prolific posts are a form of "emergent connectionist retirement" rest! Best, Steve On Tue, 2014-02-11 at 11:45 +0100, Gary Cottrell wrote: > > > > Likewise, if you hold an infant's feet in warm water it will > > vigorously try to walk. > > > > And if you put it in water, it will swim... > > > > [sorry for all the posts, but my wife is making me stay home from work > today after I went to work yesterday a week after my heart attack. You > can't hold a good connectionist down! - ;-)] > > > [at least they aren't as long as Jim's!] > > > On Feb 10, 2014, at 10:51 PM, Brian J Mingus > wrote: > > > > > > > [I am in Dijon, France on sabbatical this year. To call me, Skype > works best (gwcottrell), or dial +33 788319271] > > > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > > > My schedule is here: http://tinyurl.com/b7gxpwo > > Computer Science and Engineering 0404 > IF USING FED EX INCLUDE THE FOLLOWING LINE: > CSE Building, Room 4130 > University of California San Diego > 9500 Gilman Drive # 0404 > La Jolla, Ca. 92093-0404 > > > Things may come to those who wait, but only the things left by those > who hustle. -- Abraham Lincoln > > > "I'll have a café mocha vodka valium latte to go, please" -Anonymous > > > "Of course, none of this will be easy. If it was, we would already > know everything there was about how the brain works, and presumably my > life would be simpler here. It could explain all kinds of things that > go on in Washington." -Barack Obama > > > "Probably once or twice a week we are sitting at dinner and Richard > says, 'The cortex is hopeless,' and I say, 'That's why I work on the > worm.'" Dr. Bargmann said. > > "A grapefruit is a lemon that saw an opportunity and took advantage of > it." - note written on a door in Amsterdam on Lijnbaansgracht. > > "Physical reality is great, but it has a lousy search function." -Matt > Tong > > "Only connect!" -E.M. Forster > > "You always have to believe that tomorrow you might write the matlab > program that solves everything - otherwise you never will." -Geoff > Hinton > > > "There is nothing objective about objective functions" - Jay > McClelland > > "I am awaiting the day when people remember the fact that discovery > does not work by deciding what you want and then discovering it." > -David Mermin > > Email: gary at ucsd.edu > Home page: http://www-cse.ucsd.edu/~gary/ > > > -- Stephen José Hanson Director RUBIC (Rutgers Brain Imaging Center) Professor of Psychology Member of Cognitive Science Center (NB) Member EE Graduate Program (NB) Member CS Graduate Program (NB) Rutgers University email: jose at psychology.rutgers.edu web: psychology.rutgers.edu/~jose lab: www.rumba.rutgers.edu fax: 866-434-7959 voice: 973-353-3313 (RUBIC) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gary.marcus at nyu.edu Tue Feb 11 09:01:56 2014 From: gary.marcus at nyu.edu (Gary Marcus) Date: Tue, 11 Feb 2014 09:01:56 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: <6A71A1D7-9E58-4AED-B44E-BE9AD1E567C1@idsia.ch> References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <6A71A1D7-9E58-4AED-B44E-BE9AD1E567C1@idsia.ch> Message-ID: Juergen: Nice papers - but the goalposts are definitely shifting here. We started with your claim that "a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes"; I acknowledged the truth of the claim, but suggested that the appeal to universality was a red herring. What you've offered in response are two rather different architectures: a lambda calculus-based learning system that makes no contact (at least in the paper I read) with RNNs at all, and an evolutionary system that uses a whole bunch of machinery besides RNNs to derive RNNs that can do the right mapping. My objection was to the notion that all you need is an RNN; by pointing to various external gadgets, you reinforce my belief that RNNs aren't enough by themselves. Of course, you are absolutely right that at some level of abstraction "evolution is another form of learning", but I think it behooves the field to recognize that that other form of learning is likely to have very different properties from, say, back-prop. Evolution shapes cascades of genes that build complex cumulative systems in a distributed but algorithmic fashion; currently popular learning algorithms tune individual weights based on training examples. To assimilate the two is to do a disservice to the evolutionary contribution. Best, Gary On Feb 11, 2014, at 6:21 AM, Juergen Schmidhuber wrote: > Gary (Marcus), you wrote: "it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device." You might be a bit too pessimistic about general purpose systems. Unbeknownst to many NN researchers, there are _universal_ problem solvers that are time-optimal in various theoretical senses [10-12] (not to be confused with universal incomputable AI [13]). For example, there is a meta-method [10] that solves any well-defined problem as quickly as the unknown fastest way of solving it, save for an additive constant overhead that becomes negligible as problem size grows. Note that most problems are large; only a few are small. (AI researchers are still in business because many are interested in problems so small that it is worth trying to reduce the overhead.) > > Several posts addressed the subject of evolution (Gary Marcus, Ken Stanley, Brian Mingus, Ali Minai, Thomas Trappenberg). Evolution is another form of learning, of searching the parameter space. Not provably optimal in the sense of the methods above, but often quite practical. It is used all the time for reinforcement learning without a teacher. For example, an RNN with over a million weights recently learned through evolution to drive a simulated car based on a high-dimensional video-like visual input stream [14,15]. The RNN learned both control and visual processing from scratch, without being aided by unsupervised techniques (which may speed up evolution by reducing the search space through compact sensory codes). 
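The contrast Gary draws can be made concrete: weight-tuning algorithms such as back-prop follow a gradient, whereas evolutionary search simply mutates and selects whole parameter vectors. Below is a minimal, purely illustrative sketch of the latter applied to a tiny RNN's flattened weights; the task, the network size, and the hyperparameters are invented for the example, and this is not the compressed network search of [14,15].

    import numpy as np

    # Toy "evolution as search of the parameter space": a simple population-based
    # search over a flattened RNN weight vector. Everything here is hypothetical.
    rng = np.random.default_rng(0)
    n_in, n_hidden = 3, 8
    n_weights = n_hidden * (n_in + n_hidden + 1)   # input, recurrent, and output weights

    def unpack(w):
        w_in = w[: n_hidden * n_in].reshape(n_hidden, n_in)
        w_rec = w[n_hidden * n_in : n_hidden * (n_in + n_hidden)].reshape(n_hidden, n_hidden)
        w_out = w[n_hidden * (n_in + n_hidden):]
        return w_in, w_rec, w_out

    def fitness(w, seq):
        # Run the RNN over a sequence; reward predicting the next input's first channel.
        w_in, w_rec, w_out = unpack(w)
        h = np.zeros(n_hidden)
        err = 0.0
        for t in range(len(seq) - 1):
            h = np.tanh(w_in @ seq[t] + w_rec @ h)
            err += (w_out @ h - seq[t + 1][0]) ** 2
        return -err                                 # higher is better

    seq = rng.standard_normal((30, n_in))
    pop = rng.standard_normal((40, n_weights)) * 0.1
    for gen in range(100):
        scores = np.array([fitness(ind, seq) for ind in pop])
        parents = pop[np.argsort(scores)[-10:]]     # keep the 10 best individuals
        children = parents[rng.integers(0, 10, 30)] + 0.05 * rng.standard_normal((30, n_weights))
        pop = np.vstack([parents, children])        # next generation

    print("best fitness:", max(fitness(ind, seq) for ind in pop))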
> > Jim, you wrote: "this could actually be an interesting opportunity for some cross disciplinary thinking about how one would use an active sensory data acquisition controller to select the sensory data that is ideal given an internal model of the world." Well, that's what intrinsic reward-driven curiosity and attention direction is all about - reward the controller for selecting data that maximises learning/compression progress of the world model - lots of work on this since 1990 [16,17]. (See also posts on developmental robotics by Brian Mingus and Gary Cottrell.) > > [10] Marcus Hutter. The Fastest and Shortest Algorithm for All Well-Defined Problems. International Journal of Foundations of Computer Science, 13(3):431-443, 2002. (On J. Schmidhuber's SNF grant 20-61847.) > [11] http://www.idsia.ch/~juergen/optimalsearch.html > [12] http://www.idsia.ch/~juergen/goedelmachine.html > [13] http://www.idsia.ch/~juergen/unilearn.html > [14] J. Koutnik, G. Cuccu, J. Schmidhuber, F. Gomez. Evolving Large-Scale Neural Networks for Vision-Based Reinforcement Learning. Proc. GECCO'13, Amsterdam, July 2013. > [15] http://www.idsia.ch/~juergen/compressednetworksearch.html > [16] http://www.idsia.ch/~juergen/interest.html > [17] http://www.idsia.ch/~juergen/creativity.html > > Juergen > > From pierre-yves.oudeyer at inria.fr Tue Feb 11 06:58:12 2014 From: pierre-yves.oudeyer at inria.fr (Pierre-Yves Oudeyer) Date: Tue, 11 Feb 2014 12:58:12 +0100 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: <2B3F3833-98F9-4ACB-A0A2-D5D5560E99FF@inria.fr> Hi, the view put forward by Gary strongly resonates with the approach that has been taken in the developmental robotics community in the last 10 years. I like to explain developmental robotics as the study of developmental constraints and architectures which guide learning mechanisms so as to allow actual lifelong acquisition and adaptation of skills in the large, high-dimensional real world with severely limited time and space resources. Such an approach takes the No Free Lunch idea as central, and indeed tries to identify and understand specific families of guiding mechanisms that allow corresponding families of learners to acquire families of skills in some families of environments. For pointers, see: http://en.wikipedia.org/wiki/Developmental_robotics Thus, in a way, while lifelong learning and adaptation is a key object of study, most work is not so much about elaborating new models of learning mechanisms, but about studying which (often changing) properties of the inner and outer environment help to canalise them. Examples of such mechanisms include body and neural maturation, active learning (selection of actions that provide informative and useful data), emotional and motivational systems, cognitive biases for inference, self-organisation or socio-cultural scaffolding, and their interactions. This body of work is unfortunately not yet well connected (with a few exceptions) with the connectionist community, but I am convinced more mutual exchange would be valuable.
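The "learning/compression progress" reward mentioned in Juergen's post above, and the "active learning" entry in the list of mechanisms just given, share the same core loop, which the following Python sketch caricatures. Everything in it is invented for illustration (three hand-built sampling "regions", a running-mean world model, a windowed progress estimate); it is not code from [16,17] or from the developmental robotics literature. The intrinsic reward is the recent drop in the model's prediction error, so the agent is drawn to data that are neither already predictable nor unlearnable noise.

import numpy as np

rng = np.random.default_rng(1)

# Three "regions" the agent can choose to sample from:
#   0: already predictable (constant 0)      -> nothing left to learn
#   1: learnable (constant 2 + small noise)  -> error falls steadily while learning
#   2: unpredictable noise                   -> prediction never improves, so no progress
def observe(region):
    if region == 0:
        return 0.0
    if region == 1:
        return 2.0 + 0.05 * rng.normal()
    return 0.1 * rng.normal()

pred = np.zeros(3)                    # "world model": one running-mean predictor per region
err_hist = {k: [] for k in range(3)}  # prediction-error history per region
visits = np.zeros(3)

def learning_progress(k, window=20):
    # Intrinsic reward: how much the recent prediction error has dropped.
    e = err_hist[k]
    if len(e) < 2 * window:
        return 1.0                    # optimistic start so every region gets sampled a bit
    return float(np.mean(e[-2 * window:-window]) - np.mean(e[-window:]))

for t in range(300):
    k = int(np.argmax([learning_progress(j) for j in range(3)]))  # pick the most "interesting" region
    y = observe(k)
    err_hist[k].append(abs(y - pred[k]))  # model error before the update
    pred[k] += 0.02 * (y - pred[k])       # slow update, so learning is visible over time
    visits[k] += 1

print("fraction of samples per region:", visits / visits.sum())
# In typical runs most samples go to region 1 while it is still being learned;
# the trivial region 0 and the pure-noise region 2 get comparatively little time.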
For those interested, we have a dedicated: - journal: IEEE TAMD https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=4563672 - conference: IEEE ICDL-Epirob - newsletter: IEEE CIS AMD, latest issue: http://www.cse.msu.edu/amdtc/amdnl/AMDNL-V10-N2.pdf Best regards, Pierre-Yves Oudeyer http://www.pyoudeyer.com https://flowers.inria.fr On 10 Feb 2014, at 17:38, Gary Marcus wrote: > Juergen and others, > > I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lay at the center of why many of the hardest problems in AI and neuroscience continue to lay outside of reach, despite in-principle proofs about computational universality. > > John?s basic points, which I have also made before (e.g. in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts) are two > > a. It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding. Socher et al?s impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. > > b. it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device. > > Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can?t be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like. Wolpert?s No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon) but been no match for their symbolic competitors in other domains (chess) and worked only as an adjunct rather than an central ingredient in still others (parsing, question-answering a la Watson, etc); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all. > > My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the endproduct of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation. We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field?s prospects. 
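The claim two paragraphs up, that fitting a net on a subset of the input-output space leaves the unseen part unconstrained, is easy to reproduce on a toy problem. The sketch below is only an illustration of that point, not an experiment from the cited books, and the choice of a small scikit-learn MLP is an arbitrary stand-in for "a generic learner": it is trained on the identity function restricted to 6-bit vectors whose first bit is 0, then tested on vectors whose first bit is 1.

import numpy as np
from itertools import product
from sklearn.neural_network import MLPRegressor

# All 6-bit binary vectors; the target mapping is the identity, y = x.
X_all = np.array(list(product([0, 1], repeat=6)), dtype=float)
train = X_all[X_all[:, 0] == 0]   # training region: first bit always 0
test = X_all[X_all[:, 0] == 1]    # held-out region: first bit always 1

net = MLPRegressor(hidden_layer_sizes=(16,), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(train, train)             # learn y = x, but only on the restricted region

def first_bit_accuracy(X):
    pred = (net.predict(X) > 0.5).astype(float)
    return float((pred[:, 0] == X[:, 0]).mean())

print("first-bit accuracy, seen region (bit = 0):  ", first_bit_accuracy(train))
print("first-bit accuracy, unseen region (bit = 1):", first_bit_accuracy(test))

Nothing in the training data constrains the first output unit away from 0, so the net typically reproduces the identity perfectly where it was trained and gets the first bit wrong on every held-out vector. A net that generalises correctly certainly exists; the point is that a generic learner given only this slice of the data has no reason to find it.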
> > Best, > Gary Marcus > > Professor of Psychology > New York University > Visiting Cognitive Scientist > Allen Institute for Brain Science > Allen Institute for Artiificial Intelligence > co-edited book coming late 2014: > The Future of the Brain: Essays By The World?s Leading Neuroscientists > http://garymarcus.com/ > > On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber wrote: > >> John, >> >> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. >> >> Juergen >> >> http://www.idsia.ch/~juergen/whatsnew.html >> >> >> >> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >> >>> Juergen: >>> >>> You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >>> >>> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >>> learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >>> I gave up this static deep learning idea later after we considered the Principle 1: Development. >>> >>> The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. >>> >>> I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." >>> >>> Of course, the cerebral pathways themselves are not a stack of recurrent NN either. >>> >>> There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. >>> >>> -John >>> >>> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >>>> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >>>> >>>> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >>>> >>>> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. 
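For readers who have not looked at [3]: the principle of history compression is that each level of the stack learns to predict its own next input, and only the inputs it fails to predict are handed to the level above, which therefore has to deal with a much shorter, less redundant sequence. The Python sketch below shows only that bookkeeping; to stay short it replaces the RNN at each level with a trivial conditional-count predictor, so it is a caricature of the principle rather than of the 1991 system itself.

from collections import defaultdict, Counter
import random

random.seed(0)

# Toy input stream: the motif "abc" repeats, with occasional surprise symbols.
seq = []
for _ in range(400):
    seq.extend("abc")
    if random.random() < 0.1:
        seq.append(random.choice("xyz"))

class Level:
    # One level of the stack: predicts the next symbol from the current one.
    def __init__(self):
        self.table = defaultdict(Counter)
        self.prev = None

    def step(self, sym):
        # Return True if sym was predicted; always update the predictor.
        predicted = False
        if self.prev is not None:
            counts = self.table[self.prev]
            if counts:
                predicted = counts.most_common(1)[0][0] == sym
            counts[sym] += 1
        self.prev = sym
        return predicted

level1, level2 = Level(), Level()
passed_up = []                 # the compressed stream that level 2 actually sees
for sym in seq:
    if not level1.step(sym):   # only symbols level 1 failed to predict go up
        level2.step(sym)
        passed_up.append(sym)

print("raw stream length:       ", len(seq))
print("symbols reaching level 2:", len(passed_up))

Most of the stream is predictable at the first level, so only a small fraction of symbols reaches the second one; that shrinking, surprise-only sequence is what makes credit assignment over long histories easier for the higher levels.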
>>>> >>>> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >>>> >>>> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >>>> >>>> >>>> References: >>>> >>>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >>>> >>>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >>>> >>>> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >>>> >>>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >>>> >>>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >>>> >>>> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >>>> >>>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >>>> >>>> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >>>> >>>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. 
http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >>>> >>>> >>>> >>>> Juergen Schmidhuber >>>> http://www.idsia.ch/~juergen/whatsnew.html >>> >>> -- >>> -- >>> Juyang (John) Weng, Professor >>> Department of Computer Science and Engineering >>> MSU Cognitive Science Program and MSU Neuroscience Program >>> 428 S Shaw Ln Rm 3115 >>> Michigan State University >>> East Lansing, MI 48824 USA >>> Tel: 517-353-4388 >>> Fax: 517-432-1061 >>> Email: weng at cse.msu.edu >>> URL: http://www.cse.msu.edu/~weng/ >>> ---------------------------------------------- >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From neurogirl at hotmail.com Tue Feb 11 07:45:14 2014 From: neurogirl at hotmail.com (neuro girl) Date: Tue, 11 Feb 2014 07:45:14 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: OMG, are these threads giving people heart attacks?! ;) Feel better! Sent from a bigger version of some ridiculous contraption that pretends to make my life easier On Feb 11, 2014, at 5:58 AM, "Gary Cottrell" wrote: > Likewise, if you hold an infant's feet in warm water it will vigorously try to walk. And if you put it in water, it will swim?. [sorry for all the posts, but my wife is making me stay home from work today after I went to work yesterday a week after my heart attack. You can't hold a good connectionist down! - ;-)] [at least they aren't as long as Jim's!] On Feb 10, 2014, at 10:51 PM, Brian J Mingus wrote: [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] Gary Cottrell 858-534-6640 FAX: 858-534-7029 My schedule is here: http://tinyurl.com/b7gxpwo Computer Science and Engineering 0404 IF USING FED EX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln "I'll have a caf? mocha vodka valium latte to go, please" -Anonymous "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. "Physical reality is great, but it has a lousy search function." -Matt Tong "Only connect!" -E.M. Forster "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton "There is nothing objective about objective functions" - Jay McClelland "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill at BillHowell.ca Tue Feb 11 01:13:13 2014 From: Bill at BillHowell.ca (Bill Howell. Retired from NRCan. 
now in Alberta Canada) Date: Mon, 10 Feb 2014 23:13:13 -0700 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> Message-ID: <52F9BF79.1030108@BillHowell.ca> An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Tue Feb 11 09:15:11 2014 From: gary.marcus at nyu.edu (Gary Marcus) Date: Tue, 11 Feb 2014 09:15:11 -0500 Subject: Connectionists: developmental robotics In-Reply-To: <2B3F3833-98F9-4ACB-A0A2-D5D5560E99FF@inria.fr> References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <2B3F3833-98F9-4ACB-A0A2-D5D5560E99FF@inria.fr> Message-ID: <5B3FD743-AFB1-4FD0-9F5C-246693E27852@nyu.edu> Pierre (and others) Thanks for all the links. I am a fan, in principle, but much of the developmental robotics work I have seen sticks rigidly to a fairly ?blank-slate? perspective, perhaps needlessly so. I?d especially welcome references to work in which researchers have given robots a significant head start, so that the learning that takes place has a strong starting point. Has anyone, for example, tried to build a robot that starts with the cognitive capacities of a two-year-old (rather than a newborn), and goes from there? Or taken seriously the nativist arguments of Chomsky, Pinker, and Spelke and tried to build a robot that is innately endowed with concepts like ?person?, ?object?, ?set?, and ?place?? Best, Gary On Feb 11, 2014, at 6:58 AM, Pierre-Yves Oudeyer wrote: > > Hi, > > the view put forward by Gart strongly resonates with the approach that has been taken in the developmental robotics community in the last 10 years. > I like to explain developmental robotics as the study of developmental constraints and architectures which guide learning mechanisms so as to allow actual lifelong acquisition and adaptation of skills in the large high-dimensional real world with severely limited time and space resources. > Such an approach considers centrally the No Free Lunch idea, and indeed tries to identifies and understand specific families of guiding mechanisms that allow corresponding families of learners to acquire families of skills in some families of environments. > For pointers, see: http://en.wikipedia.org/wiki/Developmental_robotics > > Thus, in a way, while lifelong learning and adaptation is a key object of study, most work is not so much about elaborating new models of learning mechanisms, but about studying what (often changing) properties of the inner and outer environment allow to canalise them. > Examples of such mechanisms include body and neural maturation, active learning (selection of action that provide informative and useful data), emotional and motivational systems, cognitive biases for inference, self-organisation or socio-cultural scaffolding, and their interactions. > > This body of work is unfortunately not yet well connected (except a few exceptions) with the connectionnist community, but I am convinced more mutual exchange would be valuable. 
> > For those interested, we have a dedicated: > - journal: IEEE TAMD https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=4563672 > - conference: IEEE ICDL-Epirob > - newsletter: IEEE CIS AMD, latest issue: http://www.cse.msu.edu/amdtc/amdnl/AMDNL-V10-N2.pdf > > Best regards, > Pierre-Yves Oudeyer > http://www.pyoudeyer.com > https://flowers.inria.fr > > On 10 Feb 2014, at 17:38, Gary Marcus wrote: > >> Juergen and others, >> >> I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lay at the center of why many of the hardest problems in AI and neuroscience continue to lay outside of reach, despite in-principle proofs about computational universality. >> >> John?s basic points, which I have also made before (e.g. in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts) are two >> >> a. It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding. Socher et al?s impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. >> >> b. it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device. >> >> Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can?t be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like. Wolpert?s No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon) but been no match for their symbolic competitors in other domains (chess) and worked only as an adjunct rather than an central ingredient in still others (parsing, question-answering a la Watson, etc); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all. >> >> My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the endproduct of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation. We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field?s prospects. 
>> >> Best, >> Gary Marcus >> >> Professor of Psychology >> New York University >> Visiting Cognitive Scientist >> Allen Institute for Brain Science >> Allen Institute for Artiificial Intelligence >> co-edited book coming late 2014: >> The Future of the Brain: Essays By The World?s Leading Neuroscientists >> http://garymarcus.com/ >> >> On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber wrote: >> >>> John, >>> >>> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. >>> >>> Juergen >>> >>> http://www.idsia.ch/~juergen/whatsnew.html >>> >>> >>> >>> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >>> >>>> Juergen: >>>> >>>> You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >>>> >>>> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >>>> learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >>>> I gave up this static deep learning idea later after we considered the Principle 1: Development. >>>> >>>> The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. >>>> >>>> I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." >>>> >>>> Of course, the cerebral pathways themselves are not a stack of recurrent NN either. >>>> >>>> There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. >>>> >>>> -John >>>> >>>> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >>>>> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >>>>> >>>>> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >>>>> >>>>> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. 
>>>>> >>>>> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >>>>> >>>>> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >>>>> >>>>> >>>>> References: >>>>> >>>>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >>>>> >>>>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >>>>> >>>>> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >>>>> >>>>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >>>>> >>>>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >>>>> >>>>> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >>>>> >>>>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >>>>> >>>>> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >>>>> >>>>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. 
http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >>>>> >>>>> >>>>> >>>>> Juergen Schmidhuber >>>>> http://www.idsia.ch/~juergen/whatsnew.html >>>> >>>> -- >>>> -- >>>> Juyang (John) Weng, Professor >>>> Department of Computer Science and Engineering >>>> MSU Cognitive Science Program and MSU Neuroscience Program >>>> 428 S Shaw Ln Rm 3115 >>>> Michigan State University >>>> East Lansing, MI 48824 USA >>>> Tel: 517-353-4388 >>>> Fax: 517-432-1061 >>>> Email: weng at cse.msu.edu >>>> URL: http://www.cse.msu.edu/~weng/ >>>> ---------------------------------------------- >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Tue Feb 11 10:13:32 2014 From: bower at uthscsa.edu (james bower) Date: Tue, 11 Feb 2014 09:13:32 -0600 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: <9E09D066-0F2E-42F7-AE55-4D2750570F77@eng.ucsd.edu> References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <4451FE2D-8521-46F0-A1CE-148F5CC83549@uthscsa.edu> <9E09D066-0F2E-42F7-AE55-4D2750570F77@eng.ucsd.edu> Message-ID: <9DDF31D5-D5AF-4459-A7EA-7E81001FB0C2@uthscsa.edu> Gary, Thanks - yes have been several PhD thesis already - perhaps more to come, although many of my students have started out skeptical about the ideas (best students). The ?sensory data acquisition? story is complicated - so, don?t want to bother most of the list with it here, but what we have proposed is that the cerebellum functions at a level much more precise than attention, adjusting the position of receptors at the resolution in space and time that is relevant to their actual coding scales. For example, we suggested a number of years ago that the cerebellum might be involved in influence micro-sacades (of which there are many many more) - in this way, in some sense perhaps doing visual texture. There is now some evidence for that. We ourselves have been looking at the auditory system recently, suggesting that the cerebellum is involved in the control of the fine tuning of the cochlea through modulatory control of the outer hair cells. This work has actually involved human imaging as well. in one of the first assaults on the motor control theory of the cerebellum, we also showed a number of years ago (again with human fMRI) that the cerebellum is far and away most active when the fingers are being used to perform a sensory discrimination with fine movement, rather than simple fine movement itself. I am currently writing a book that lays out this theory, starting with the evolutionary (and developmental) origins of the cerebellum, which are interesting. While many appear to believe that we now know enough about how the brain works to ?translate? to both engineering and medicine - in my view, the cerebellum stands as a testament to how ignorant we really are. We have known the details of its circuitry better and longer than any other structure, yet, we may have been fundamentally wrong about what it does for the last 150 years. :-) Jim On Feb 11, 2014, at 4:22 AM, Gary Cottrell wrote: > interesting points, jim! > > I wonder, though, why you worry so much about "big data"? > > I think it is more like "appropriate-sized data." we have never before been able to give our models anything like the kind of data we get in our first years of life. Let's do a little back-of-the-envelope on this. 
We saccade about 3 times a second, which, if you are awake 16 hours a day (make that 20 for Terry Sejnowski), come out to about 172,800 fixations per day, or high-dimensional samples of the world, if you like. One year of that, not counting drunken blackouts, etc., is 63 million samples. After 10 years that's 630 million samples. This dwarfs imagenet, at least the 1.2 million images used by Krizhevsky et al. Of course, there is a lot of redundancy here (spatial and temporal), which I believe the brain uses to construct its models (e.g., by some sort of learning rule like F?ldi?k's), so maybe 1.2 million isn't so bad. > > On the other hand, you may argue, imagenet is nothing like the real world - it is, after all, pictures taken by humans, so objects tend to be centered. This leads to a comment about you worrying about filtering data to avoid the big data "problem." Well, I would suggest that there is a lot of work on attention (some of it completely compatible with connectionist models, e.g., Itti, et al. 1998, Zhang, et al., 2008) that would cause a system to focus on objects, just as photographers do. So, it isn't like we haven't worried about that as you do, it's just that we've done something about it! ;-) > > Anyway, I like your ideas about the cerebellum - sounds like there are a bunch of Ph.D. theses in there? > > cheers, > gary > > > Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20, 1254?1259. > > Zhang, Lingyun, Tong, Matthew H., Marks, Tim K., Shan, Honghao, and Cottrell, Garrison W. (2008). SUN: A Bayesian Framework for Saliency Using Natural Statistics. Journal of Vision 8(7):32, 1-20. > The code for SUN is here > On Feb 10, 2014, at 10:04 PM, james bower wrote: > >> One other point that some of you might find interesting. >> >> While most neurobiologists and text books describe the cerebellum as involved in motor control, I suspect that it is actually not a motor control device in the usual sense at all. We proposed 20+ years ago that the cerebellum is actually a sensory control device, using the motor system (although not only) to precisely position sensory surfaces to collect the data the nervous system actually needs and expects. In the context of the current discussion about big data - such a mechanism would also contribute to the nervous system?s working around a potential data problem. >> >> Leaping and jumping forward, as an extension of this idea, we have proposed that autism may actually be an adaptive response to cerebellar dysfunction - and therefore a response to uncontrolled big data flux (in your terms). >> >> So, if correct, the brain adapts to being confronted with badly controlled data acquisition by shutting it off. >> >> Just to think about. Again, papers available for anyone interested. >> >> Given how much we do know about cerebellar circuitry - this could actually be an interesting opportunity for some cross disciplinary thinking about how one would use an active sensory data acquisition controller to select the sensory data that is ideal given an internal model of the world. Almost all of the NN type cerebellar models to date have been built around either the idea that the cerebellum is a motor timing device, or involved in learning (yadda yadda). 
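As an aside, the back-of-the-envelope fixation count quoted earlier in this message checks out; the few lines below simply restate the quoted assumptions (3 saccades per second, 16 waking hours, 365 days, 10 years) rather than adding any new ones.

saccades_per_second = 3
waking_hours_per_day = 16
fixations_per_day = saccades_per_second * 3600 * waking_hours_per_day
fixations_per_year = fixations_per_day * 365
fixations_per_decade = fixations_per_year * 10
print(f"{fixations_per_day:,} fixations per day")       # 172,800
print(f"{fixations_per_year:,} fixations per year")     # 63,072,000, i.e. roughly 63 million
print(f"{fixations_per_decade:,} fixations per decade") # 630,720,000, versus the ~1.2 million ImageNet training images mentioned above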
>> >> Perhaps most on this list interested in brain networks don?t know that far and away, the pathway with the largest total number of axons is the pathway from the (entire) cerebral cortex to the cerebellum. We have predicted that this pathway is the mechanism by which the cerebral cortex ?loads? the cerebellum with knowledge about what it expects and needs. >> >> >> >> Jim >> >> >> >> >> >> On Feb 10, 2014, at 2:24 PM, james bower wrote: >> >>> Excellent points - reminds me again of the conversation years ago about whether a general structure like a Hopfield Network would, by itself, solve a large number of problems. All evidence from the nervous system points in the direction of strong influence of the nature of the problem on the solution, which also seems consistent with what has happened in real world applications of NN to engineering over the last 25 years. >>> >>> For biology, however, the interesting (even fundamental) question becomes, what the following actually are: >>> >>>> endowed us with custom tools for learning in different domains >>> >>>> the contribution from evolution to neural wetware might be >>> >>> I have mentioned previously, that my guess (and surprise) based on our own work over the last 30 years in olfaction is that ?learning? may all together be over emphasized (we do love free will). Yes, in our laboratories we place animals in situations where they have to ?learn?, but my suspicion is that in the real world where brains actually operate and evolve, most of what we do is actually ?recognition? that involves matching external stimuli to internal ?models? of what we expect to be there. I think that it is quite likely that that ?deep knowledge? is how evolution has most patterned neural wetware. Seems to me a way to avoid NP problems and the pitfalls of dealing with ?big data? which as I have said, I suspect the nervous system avoids at all costs. >>> >>> I have mentioned that we have reason to believe (to my surprise) that, starting with the olfactory receptors, the olfactory system already ?knows? about the metabolic structure of the real world. Accordingly, we are predicting that its receptors aren?t organized to collect a lot of data about the chemical structure of the stimulus (the way a chemist would), but instead looks for chemical signatures of metabolic processes. e.g. , it may be that 1/3 or more of mouse olfactory receptors detect one of the three molecules that are produced by many different kinds of fruit when ripe. ?Learning? in olfaction, might be some small additional mechanism you put on top to change the ?hedonic? value of the stimulus - ie. you can ?learn? to like fermented fish paste. But it is very likely that recognizing the (usually deleterious) environmental signature of fermentation is "hard wired?, requiring ?learning? to change the natural category. >>> >>> I know that many cognitive types (and philosophers as well) have developed much more nuanced discussions of these questions - however, I have always been struck by how much of the effort in NNs is focused on ?learning? as if it is the primary attribute of the nervous system we are trying to figure out. It seems to me figuring out "what the nose already knows? is much more important. 
>>> >>> >>> Jim >>> >>> >>> >>> >>> >>> >>> On Feb 10, 2014, at 10:38 AM, Gary Marcus wrote: >>> >>>> Juergen and others, >>>> >>>> I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lay at the center of why many of the hardest problems in AI and neuroscience continue to lay outside of reach, despite in-principle proofs about computational universality. >>>> >>>> John?s basic points, which I have also made before (e.g. in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts) are two >>>> >>>> a. It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding. Socher et al?s impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. >>>> >>>> b. it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device. >>>> >>>> Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can?t be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like. Wolpert?s No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon) but been no match for their symbolic competitors in other domains (chess) and worked only as an adjunct rather than an central ingredient in still others (parsing, question-answering a la Watson, etc); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all. >>>> >>>> My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the endproduct of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation. We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field?s prospects. 
>>>> >>>> Best, >>>> Gary Marcus >>>> >>>> Professor of Psychology >>>> New York University >>>> Visiting Cognitive Scientist >>>> Allen Institute for Brain Science >>>> Allen Institute for Artiificial Intelligence >>>> co-edited book coming late 2014: >>>> The Future of the Brain: Essays By The World?s Leading Neuroscientists >>>> http://garymarcus.com/ >>>> >>>> On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber wrote: >>>> >>>>> John, >>>>> >>>>> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. >>>>> >>>>> Juergen >>>>> >>>>> http://www.idsia.ch/~juergen/whatsnew.html >>>>> >>>>> >>>>> >>>>> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >>>>> >>>>>> Juergen: >>>>>> >>>>>> You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >>>>>> >>>>>> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >>>>>> learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >>>>>> I gave up this static deep learning idea later after we considered the Principle 1: Development. >>>>>> >>>>>> The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. >>>>>> >>>>>> I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." >>>>>> >>>>>> Of course, the cerebral pathways themselves are not a stack of recurrent NN either. >>>>>> >>>>>> There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. >>>>>> >>>>>> -John >>>>>> >>>>>> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >>>>>>> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >>>>>>> >>>>>>> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >>>>>>> >>>>>>> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. 
A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. >>>>>>> >>>>>>> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >>>>>>> >>>>>>> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >>>>>>> >>>>>>> >>>>>>> References: >>>>>>> >>>>>>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >>>>>>> >>>>>>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >>>>>>> >>>>>>> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >>>>>>> >>>>>>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >>>>>>> >>>>>>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >>>>>>> >>>>>>> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >>>>>>> >>>>>>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >>>>>>> >>>>>>> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >>>>>>> >>>>>>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. 
http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >>>>>>> >>>>>>> >>>>>>> >>>>>>> Juergen Schmidhuber >>>>>>> http://www.idsia.ch/~juergen/whatsnew.html >>>>>> >>>>>> -- >>>>>> -- >>>>>> Juyang (John) Weng, Professor >>>>>> Department of Computer Science and Engineering >>>>>> MSU Cognitive Science Program and MSU Neuroscience Program >>>>>> 428 S Shaw Ln Rm 3115 >>>>>> Michigan State University >>>>>> East Lansing, MI 48824 USA >>>>>> Tel: 517-353-4388 >>>>>> Fax: 517-432-1061 >>>>>> Email: weng at cse.msu.edu >>>>>> URL: http://www.cse.msu.edu/~weng/ >>>>>> ---------------------------------------------- >>>>>> >>>>> >>>>> >>>> >>> >>> >>> >>> >>> >>> Dr. James M. Bower Ph.D. >>> >>> Professor of Computational Neurobiology >>> >>> Barshop Institute for Longevity and Aging Studies. >>> >>> 15355 Lambda Drive >>> >>> University of Texas Health Science Center >>> >>> San Antonio, Texas 78245 >>> >>> >>> Phone: 210 382 0553 >>> >>> Email: bower at uthscsa.edu >>> >>> Web: http://www.bower-lab.org >>> >>> twitter: superid101 >>> >>> linkedin: Jim Bower >>> >>> >>> CONFIDENTIAL NOTICE: >>> >>> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or >>> >>> any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be >>> >>> immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. >>> >>> >>> >> >> >> >> >> >> Dr. James M. Bower Ph.D. >> >> Professor of Computational Neurobiology >> >> Barshop Institute for Longevity and Aging Studies. >> >> 15355 Lambda Drive >> >> University of Texas Health Science Center >> >> San Antonio, Texas 78245 >> >> >> Phone: 210 382 0553 >> >> Email: bower at uthscsa.edu >> >> Web: http://www.bower-lab.org >> >> twitter: superid101 >> >> linkedin: Jim Bower >> >> >> CONFIDENTIAL NOTICE: >> >> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or >> >> any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be >> >> immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. >> >> >> > > [I am in Dijon, France on sabbatical this year. 
To call me, Skype works best (gwcottrell), or dial +33 788319271] > > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > > My schedule is here: http://tinyurl.com/b7gxpwo > > Computer Science and Engineering 0404 > IF USING FED EX INCLUDE THE FOLLOWING LINE: > CSE Building, Room 4130 > University of California San Diego > 9500 Gilman Drive # 0404 > La Jolla, Ca. 92093-0404 > > Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln > > "I'll have a caf? mocha vodka valium latte to go, please" -Anonymous > > "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama > > "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. > > "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. > > "Physical reality is great, but it has a lousy search function." -Matt Tong > > "Only connect!" -E.M. Forster > > "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton > > "There is nothing objective about objective functions" - Jay McClelland > > "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." > -David Mermin > > Email: gary at ucsd.edu > Home page: http://www-cse.ucsd.edu/~gary/ > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bower at uthscsa.edu Tue Feb 11 10:27:15 2014 From: bower at uthscsa.edu (james bower) Date: Tue, 11 Feb 2014 09:27:15 -0600 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <4451FE2D-8521-46F0-A1CE-148F5CC83549@uthscsa.edu> <9E09D066-0F2E-42F7-AE55-4D2750570F77@eng.ucsd.edu> Message-ID: <171A4C47-CE15-4B0F-8F55-56F5D5B456D0@uthscsa.edu> With respect to big data, attention and vision. Of course we collect a lot of data - however, it is precisely my point that we ?point? our receptors towards the data we want based on what we already think we know is out there. ?Attention? as I generally hear it discussed, I think, doesn't have enough of the sense that we are seeking data we expect. Of course, in the laboratory, our monkeys are often given tasks in some random presentation order, so that they can?t predict, making the data presentation better controlled and the results probably easier to interpret. In the real world, neural reaction to unexpected stimuli doesn?t seem to me to involve very high level processing at all - duck and cover, then run. The mogul runs in the freestyle last night were very precisely layed out - and it is remarkable the intensity with which, in mid flight, they stare at the ground. They aren?t calculating on the fly (literally in this case), they are collecting data from what they already know is there. Training in that case isn?t learning the way that my sense is most think about it - it?s more fine tuning the expectation system. At least that?s how I think of it. It was telling that in the downhill, the US skier who had been killing the course in practice, attributed his failure in the final run to the fact that the light had changed - he couldn?t get the data in the form he expected. Jim On Feb 11, 2014, at 4:34 AM, Gary Cottrell wrote: > Oh, and I forgot to mention, this is just visual information, obviously. Compare this to the 5-8 syllables per second we get (depending on language, but information rate seems to be about the same across languages - relative to Vietnamese (Pellegrino et al. 2011). So this is about double the samples of fixations, per second, but we aren't always listening to speech. But for those who listen to rap, Eminem comes in at about 10 syllables per second, but he is topped by Outsider, at 21 syllables per second. > > g. > > > Fran?ois Pellegrino, Christophe Coup?, Egidio Marsico (2011) Across-Language Perspective on Speech Information Rate. > Language, 87(3):539-558.K | 10.1353/lan.2011.0057 > > A cross-language perspective on speech information rate > > Fran?ois Pellegrino, Christophe Coup? and Egidio Marsico > > > On Feb 11, 2014, at 11:22 AM, Gary Cottrell wrote: > >> interesting points, jim! >> >> I wonder, though, why you worry so much about "big data"? >> >> I think it is more like "appropriate-sized data." we have never before been able to give our models anything like the kind of data we get in our first years of life. Let's do a little back-of-the-envelope on this. We saccade about 3 times a second, which, if you are awake 16 hours a day (make that 20 for Terry Sejnowski), come out to about 172,800 fixations per day, or high-dimensional samples of the world, if you like. One year of that, not counting drunken blackouts, etc., is 63 million samples. After 10 years that's 630 million samples. 
This dwarfs imagenet, at least the 1.2 million images used by Krizhevsky et al. Of course, there is a lot of redundancy here (spatial and temporal), which I believe the brain uses to construct its models (e.g., by some sort of learning rule like Földiák's), so maybe 1.2 million isn't so bad. >> >> On the other hand, you may argue, imagenet is nothing like the real world - it is, after all, pictures taken by humans, so objects tend to be centered. This leads to a comment about you worrying about filtering data to avoid the big data "problem." Well, I would suggest that there is a lot of work on attention (some of it completely compatible with connectionist models, e.g., Itti, et al. 1998, Zhang, et al., 2008) that would cause a system to focus on objects, just as photographers do. So, it isn't like we haven't worried about that as you do, it's just that we've done something about it! ;-) >> >> Anyway, I like your ideas about the cerebellum - sounds like there are a bunch of Ph.D. theses in there... >> >> cheers, >> gary >> >> >> Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20, 1254-1259. >> >> Zhang, Lingyun, Tong, Matthew H., Marks, Tim K., Shan, Honghao, and Cottrell, Garrison W. (2008). SUN: A Bayesian Framework for Saliency Using Natural Statistics. Journal of Vision 8(7):32, 1-20. >> The code for SUN is here >> On Feb 10, 2014, at 10:04 PM, james bower wrote: >> >>> One other point that some of you might find interesting. >>> >>> While most neurobiologists and text books describe the cerebellum as involved in motor control, I suspect that it is actually not a motor control device in the usual sense at all. We proposed 20+ years ago that the cerebellum is actually a sensory control device, using the motor system (although not only) to precisely position sensory surfaces to collect the data the nervous system actually needs and expects. In the context of the current discussion about big data - such a mechanism would also contribute to the nervous system's working around a potential data problem. >>> >>> Leaping and jumping forward, as an extension of this idea, we have proposed that autism may actually be an adaptive response to cerebellar dysfunction - and therefore a response to uncontrolled big data flux (in your terms). >>> >>> So, if correct, the brain adapts to being confronted with badly controlled data acquisition by shutting it off. >>> >>> Just to think about. Again, papers available for anyone interested. >>> >>> Given how much we do know about cerebellar circuitry - this could actually be an interesting opportunity for some cross disciplinary thinking about how one would use an active sensory data acquisition controller to select the sensory data that is ideal given an internal model of the world. Almost all of the NN type cerebellar models to date have been built around either the idea that the cerebellum is a motor timing device, or involved in learning (yadda yadda). >>> >>> Perhaps most on this list interested in brain networks don't know that far and away, the pathway with the largest total number of axons is the pathway from the (entire) cerebral cortex to the cerebellum. We have predicted that this pathway is the mechanism by which the cerebral cortex "loads" the cerebellum with knowledge about what it expects and needs.
>>> >>> >>> Jim >>> >>> >>> >>> >>> On Feb 10, 2014, at 2:24 PM, james bower wrote: >>> >>>> Excellent points - reminds me again of the conversation years ago about whether a general structure like a Hopfield Network would, by itself, solve a large number of problems. All evidence from the nervous system points in the direction of strong influence of the nature of the problem on the solution, which also seems consistent with what has happened in real world applications of NN to engineering over the last 25 years. >>>> >>>> For biology, however, the interesting (even fundamental) question becomes, what the following actually are: >>>> >>>>> endowed us with custom tools for learning in different domains >>>> >>>>> the contribution from evolution to neural wetware might be >>>> >>>> I have mentioned previously that my guess (and surprise), based on our own work over the last 30 years in olfaction, is that "learning" may altogether be overemphasized (we do love free will). Yes, in our laboratories we place animals in situations where they have to "learn", but my suspicion is that in the real world where brains actually operate and evolve, most of what we do is actually "recognition" that involves matching external stimuli to internal "models" of what we expect to be there. I think that it is quite likely that that "deep knowledge" is how evolution has most patterned neural wetware. Seems to me a way to avoid NP problems and the pitfalls of dealing with "big data", which, as I have said, I suspect the nervous system avoids at all costs. >>>> >>>> I have mentioned that we have reason to believe (to my surprise) that, starting with the olfactory receptors, the olfactory system already "knows" about the metabolic structure of the real world. Accordingly, we are predicting that its receptors aren't organized to collect a lot of data about the chemical structure of the stimulus (the way a chemist would), but instead look for chemical signatures of metabolic processes. E.g., it may be that 1/3 or more of mouse olfactory receptors detect one of the three molecules that are produced by many different kinds of fruit when ripe. "Learning" in olfaction might be some small additional mechanism you put on top to change the "hedonic" value of the stimulus - i.e. you can "learn" to like fermented fish paste. But it is very likely that recognizing the (usually deleterious) environmental signature of fermentation is "hard wired", requiring "learning" to change the natural category. >>>> >>>> I know that many cognitive types (and philosophers as well) have developed much more nuanced discussions of these questions - however, I have always been struck by how much of the effort in NNs is focused on "learning" as if it is the primary attribute of the nervous system we are trying to figure out. It seems to me figuring out "what the nose already knows" is much more important. >>>> >>>> >>>> Jim >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Feb 10, 2014, at 10:38 AM, Gary Marcus wrote: >>>>> >>>>> Juergen and others, >>>>> >>>>> I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lie at the center of why many of the hardest problems in AI and neuroscience continue to lie outside of reach, despite in-principle proofs about computational universality. >>>>> >>>>> John's basic points, which I have also made before (e.g.
in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts) are two >>>>> >>>>> a. It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding. Socher et al.'s impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. >>>>> >>>>> b. it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device. >>>>> >>>>> Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can't be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like. Wolpert's No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via a general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon) but been no match for their symbolic competitors in other domains (chess) and worked only as an adjunct rather than a central ingredient in still others (parsing, question-answering a la Watson, etc.); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all. >>>>> >>>>> My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the end product of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation. We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field's prospects. >>>>> >>>>> Best, >>>>> Gary Marcus >>>>> >>>>> Professor of Psychology >>>>> New York University >>>>> Visiting Cognitive Scientist >>>>> Allen Institute for Brain Science >>>>> Allen Institute for Artificial Intelligence >>>>> co-edited book coming late 2014: >>>>> The Future of the Brain: Essays By The World's Leading Neuroscientists >>>>> http://garymarcus.com/ >>>>> >>>>> On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber wrote: >>>>>> >>>>>> John, >>>>>> >>>>>> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned).
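Gary Marcus's point (b) above - that a learner shown only part of a finite input-output mapping cannot be guaranteed to recover the rest - is easy to make concrete with a toy enumeration. The few lines of Python below are purely illustrative; the 3-bit domain and the particular four training pairs are arbitrary choices, not anything from the papers cited in the thread. Every completion of the unseen inputs is equally consistent with the training data, so the data alone cannot single out the "right" function.

from itertools import product

# All Boolean functions on 3 bits: a function is a tuple of 8 output bits,
# so there are 2**8 = 256 candidates in total.
inputs = list(product([0, 1], repeat=3))
candidates = list(product([0, 1], repeat=len(inputs)))

# Four observed input-output pairs (an arbitrary toy "training set").
train = {(0, 0, 0): 0, (0, 1, 1): 0, (1, 0, 0): 1, (1, 1, 1): 1}

def consistent(f):
    # True if function f agrees with every training example.
    return all(f[inputs.index(x)] == y for x, y in train.items())

survivors = [f for f in candidates if consistent(f)]
print(len(candidates), "candidate functions,", len(survivors), "consistent with the data")
# 256 candidate functions, 16 consistent with the data (2**4, one per unseen input pattern)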
From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. >>>>>> >>>>>> Juergen >>>>>> >>>>>> http://www.idsia.ch/~juergen/whatsnew.html >>>>>> >>>>>> >>>>>> >>>>>> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >>>>>>> >>>>>>> Juergen: >>>>>>> >>>>>>> You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >>>>>>> >>>>>>> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >>>>>>> learning network that used the deep learning idea for learning from cluttered scenes (Cresceptron ICCV 1992 and IJCV 1997), >>>>>>> I gave up this static deep learning idea later after we considered Principle 1: Development. >>>>>>> >>>>>>> The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes as required by Principle 1. The brain is not a cascade of recurrent NN. >>>>>>> >>>>>>> I quote from Antonio Damasio's "Descartes' Error", p. 93: "But intermediate communication occurs also via large subcortical nuclei such as those in the thalamus and basal ganglia, and via small nuclei such as those in the brain stem." >>>>>>> >>>>>>> Of course, the cerebral pathways themselves are not a stack of recurrent NN either. >>>>>>> >>>>>>> There are many fundamental reasons for that. I give only one here, based on our DN brain model: Looking at a human, the brain must dynamically attend to the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attends to the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns against a clean background. This is still an overworked pattern recognition problem, not a vision problem. >>>>>>> >>>>>>> -John >>>>>>> >>>>>>> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >>>>>>>> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >>>>>>>> >>>>>>>> A popular Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >>>>>>>> >>>>>>>> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. >>>>>>>> >>>>>>>> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >>>>>>>> >>>>>>>> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems.
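The data flow behind the History Compressor of [3] is simple to sketch: a lower-level predictor tries to guess the next input, and only the inputs it fails to predict are passed to the level above, which therefore sees a much shorter sequence. The toy below illustrates only that flow and is not Schmidhuber's actual chunker - a bigram frequency table stands in for the level-1 RNN, and the example stream is made up.

from collections import defaultdict, Counter

def surprising_symbols(seq):
    """Pass upward only the symbols the level-1 predictor fails to guess."""
    counts = defaultdict(Counter)          # counts[prev][nxt] = frequency so far
    passed_up = []
    prev = None
    for sym in seq:
        guess = counts[prev].most_common(1)[0][0] if counts[prev] else None
        if sym != guess:                   # prediction failed: hand it to level 2
            passed_up.append(sym)
        counts[prev][sym] += 1             # keep adapting the level-1 predictor
        prev = sym
    return passed_up

# A highly predictable stream with a few novel events (X, Y).
stream = list("abcabcabcabcXabcabcabcY")
level2_input = surprising_symbols(stream)
print(len(stream), "symbols in,", len(level2_input), "passed up:", level2_input)
# 23 symbols in, 7 passed up: ['a', 'b', 'c', 'a', 'X', 'a', 'Y']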
For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >>>>>>>> >>>>>>>> >>>>>>>> References: >>>>>>>> >>>>>>>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >>>>>>>> >>>>>>>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >>>>>>>> >>>>>>>> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >>>>>>>> >>>>>>>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >>>>>>>> >>>>>>>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >>>>>>>> >>>>>>>> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >>>>>>>> >>>>>>>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >>>>>>>> >>>>>>>> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >>>>>>>> >>>>>>>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Juergen Schmidhuber >>>>>>>> http://www.idsia.ch/~juergen/whatsnew.html >>>>>>> >>>>>>> -- >>>>>>> -- >>>>>>> Juyang (John) Weng, Professor >>>>>>> Department of Computer Science and Engineering >>>>>>> MSU Cognitive Science Program and MSU Neuroscience Program >>>>>>> 428 S Shaw Ln Rm 3115 >>>>>>> Michigan State University >>>>>>> East Lansing, MI 48824 USA >>>>>>> Tel: 517-353-4388 >>>>>>> Fax: 517-432-1061 >>>>>>> Email: weng at cse.msu.edu >>>>>>> URL: http://www.cse.msu.edu/~weng/ >>>>>>> ---------------------------------------------- >>>>>>> >>>>>> >>>>>> >>>>> >>>> >>>> >>>> >>>> >>>> >>>> Dr. James M. Bower Ph.D. >>>> >>>> Professor of Computational Neurobiology >>>> >>>> Barshop Institute for Longevity and Aging Studies. 
>>>> >>>> 15355 Lambda Drive >>>> University of Texas Health Science Center >>>> San Antonio, Texas 78245 >>>> Phone: 210 382 0553 >>>> Email: bower at uthscsa.edu >>>> Web: http://www.bower-lab.org >>>> >>> >> > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient.
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: From qobi at purdue.edu Tue Feb 11 10:14:31 2014 From: qobi at purdue.edu (Jeffrey Mark Siskind) Date: Tue, 11 Feb 2014 10:14:31 -0500 Subject: Connectionists: developmental robotics In-Reply-To: <5B3FD743-AFB1-4FD0-9F5C-246693E27852@nyu.edu> (message from Gary Marcus on Tue, 11 Feb 2014 09:15:11 -0500) References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <2B3F3833-98F9-4ACB-A0A2-D5D5560E99FF@inria.fr> <5B3FD743-AFB1-4FD0-9F5C-246693E27852@nyu.edu> Message-ID: Has anyone, for example, tried to build a robot that starts with the cognitive capacities of a two-year-old (rather than a newborn), and goes from there? yes Or taken seriously the nativist arguments of Chomsky, Pinker, and Spelke and tried to build a robot that is innately endowed with concepts like "person", "object", "set", and "place"? I'd like to innately endow my robots with these concepts. Can you point me to a URL where I can download the code? Python, MATLAB, and Haskell preferred. Jeff (http://engineering.purdue.edu/~qobi) From bower at uthscsa.edu Tue Feb 11 11:11:48 2014 From: bower at uthscsa.edu (james bower) Date: Tue, 11 Feb 2014 10:11:48 -0600 Subject: Connectionists: developmental robotics In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <2B3F3833-98F9-4ACB-A0A2-D5D5560E99FF@inria.fr> <5B3FD743-AFB1-4FD0-9F5C-246693E27852@nyu.edu> Message-ID: <561EFEC3-0DF4-4287-90E6-8FE13E825172@uthscsa.edu> For those who haven't, you might find the book Vehicles (http://www.amazon.com/Vehicles-Experiments-Synthetic-Psychology-Bradford-ebook/dp/B002Z13PRM/ref=sr_1_1?s=books&ie=UTF8&qid=1392134985&sr=1-1&keywords=vehicles) by Valentino Braitenberg worth a look: it shows how deceptive one's perception of "cognition" can be with respect to the actual underlying neural mechanisms. Jim On Feb 11, 2014, at 9:14 AM, Jeffrey Mark Siskind wrote: > Has anyone, for example, tried to build a robot that starts with the > cognitive capacities of a two-year-old (rather than a newborn), and goes > from there? > > yes > > Or taken seriously the nativist arguments of Chomsky, Pinker, and Spelke > and tried to build a robot that is innately endowed with concepts like > "person", "object", "set", and "place"? > > I'd like to innately endow my robots with these concepts. Can you point me to > a URL where I can download the code? Python, MATLAB, and Haskell preferred. > > Jeff (http://engineering.purdue.edu/~qobi)
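For readers who have not seen Braitenberg's vehicles, the flavor is easy to reproduce in a few lines of Python. The sketch below is a simplified toy in the spirit of the book's "Vehicle 2" (crossed, excitatory sensor-to-wheel wiring), with made-up constants and the steering normalized so the toy behaves robustly; it is an illustration, not anything from the book verbatim. Nothing in the controller represents a goal, yet the trajectory reads as purposeful light-seeking.

import math

LIGHT = (5.0, 5.0)                        # position of a light source

def intensity(px, py):
    """Light intensity falling off with squared distance."""
    d2 = (px - LIGHT[0]) ** 2 + (py - LIGHT[1]) ** 2
    return 1.0 / (1.0 + d2)

x, y, heading = 0.0, 0.0, 0.0             # vehicle starts at the origin, facing +x
dt, base_speed, axle, gain = 0.1, 0.25, 0.3, 6.0
closest = math.hypot(x - LIGHT[0], y - LIGHT[1])

for step in range(500):
    # Two sensors mounted ahead of the body, angled to the left and right.
    left = intensity(x + 0.5 * math.cos(heading + 0.6), y + 0.5 * math.sin(heading + 0.6))
    right = intensity(x + 0.5 * math.cos(heading - 0.6), y + 0.5 * math.sin(heading - 0.6))
    share_left, share_right = left / (left + right), right / (left + right)

    # Crossed excitatory wiring: the left sensor drives the right wheel and
    # vice versa, so the vehicle turns toward the stronger stimulus.
    right_wheel = base_speed * (1.0 + gain * share_left)
    left_wheel = base_speed * (1.0 + gain * share_right)

    speed = (left_wheel + right_wheel) / 2.0
    heading += (right_wheel - left_wheel) / axle * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    closest = min(closest, math.hypot(x - LIGHT[0], y - LIGHT[1]))

print(f"started {math.hypot(*LIGHT):.2f} from the light; closest approach {closest:.2f}")
# The vehicle starts about 7.1 units away and ends up circling close to the light,
# with no internal representation of the light as a goal.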
-------------- next part -------------- An HTML attachment was scrubbed... URL: From angelo.arleo at snv.jussieu.fr Tue Feb 11 10:48:35 2014 From: angelo.arleo at snv.jussieu.fr (Angelo Arleo) Date: Tue, 11 Feb 2014 16:48:35 +0100 Subject: Connectionists: Two PhD positions in Visual Psychophysics and Computational Neuroscience at the Institute of Vision in Paris, France Message-ID: Two fully-funded PhD positions in Visual Psychophysics and Computational Neuroscience Job description Applications are invited for two PhD positions in the Aging in Vision and Action laboratory at the Institute of Vision (INSERM-CNRS-University Pierre & Marie Curie), Paris, France. The gradual impairment of vision-dependent functions in the elderly is at the core of our research. The main goal is to characterize how healthy aging shapes perceptual and cognitive aspects of vision. The first PhD project will focus on the impact of healthy aging on low-level visual perception and visual processing functions. The second PhD will pioneer original research on the impact of visual aging on spatial perception and spatial orientation functions. These goals will be tackled through the combination of experimental and theoretical methods. Requirements Applicants' background should be in physics/engineering or neuroscience/psychology, with a keen interest in combining quantitative and experimental approaches. Previous experience with visual psychophysics, eye-movement analysis, visual space perception, or aging is welcome. Computer programming skills are also a plus. Proficiency in oral and written English is required. Knowledge of French is not mandatory. Successful candidates are expected to work in an interdisciplinary environment, collaborating with biologists, theoreticians and clinicians. They will be awarded a 3-year PhD fellowship from the University Pierre and Marie Curie, Paris. Working environment and laboratory The Institute of Vision is one of the top international centers for integrated research on vision and eye diseases. It is located at the heart of Paris, on the campus of the National Hospital Center for Ophthalmology. The Institute of Vision includes multidisciplinary research groups, which share state-of-the-art platforms for human and animal experimentation. It also harbors a clinical investigation center, which fosters truly translational research activity.
More information can be found online (http://www.institut-vision.org). The Institute of Vision and Essilor International, world leader in ophthalmic optics, have recently supported the creation of the laboratory of Aging in Vision and Action. This new laboratory aims at evaluating and understanding the functional aspects of the degeneration mechanisms related to visual aging. This research has the potential to produce fundamental knowledge suited for opening to assistive technological developments and rehabilitation solutions. The faculty members of the Aging in Vision and Action laboratory, led by Angelo Arleo, are specialized in visual psychophysics, neurobiology of spatial orientation, neural coding, neurocomputational modeling, and preclinical evaluation. The group has access to a wide range of state-of-the-art platforms including eye trackers, motion capture rooms, virtual reality environments, artificial street labs and home labs (http://www.streetlab-vision.com/en/). How to apply Candidates should send a motivation letter, a full CV, and names and contact information of at least two referees, to angelo.arleo at upmc.fr. The application deadline is March 15, 2014. Short listed candidates will be contacted for an interview (either face-to-face or via videoconference). For further inquiries, please contact Angelo Arleo, angelo.arleo at upmc.fr, phone: +33 6 89 89 07 23. ---------------------------------------------------------- Angelo ARLEO Institut of Vision, Aging in Vision and Action Lab, Head CNRS - INSERM - University Pierre&Marie Curie, 13, rue Moreau F-75012 Paris, France Mobile: +33 (0)6 89 89 07 23 email: angelo.arleo at upmc.fr ------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From callforvideos at aaaivideos.org Tue Feb 11 05:35:57 2014 From: callforvideos at aaaivideos.org (AAAI Video Competition Call for Videos) Date: Tue, 11 Feb 2014 11:35:57 +0100 Subject: Connectionists: Call for Videos: AAAI-14 AI Video Competition Message-ID: AAAI-14 AI Video Competition Date: July 28, 2014 Place: Qu?bec City, Qu?bec, Canada Website: http://www.aaaivideos.org Video: http://youtu.be/uwD5qN-MF5M Submission Deadline: April 15, 2014 ------- Dear Colleagues, AAAI is pleased to announce the continuation of the AAAI Video Competition, now entering its eighth year. The video competition will be held in conjunction with the AAAI-14 conference in Qu?bec City, Qu?bec, Canada, July 27-31, 2014. At the award ceremony, authors of award-winning videos will be presented with "Shakeys", trophies named in honour of SRI's Shakey robot and its pioneering video. Award-winning videos will be screened at this ceremony. The goal of the competition is to show the world how much fun AI is by documenting exciting artificial intelligence advances in research, education, and application. View previous entries and award winners at http://www.aaaivideos.org/past_competitions. The rules are simple: Compose a short video about an exciting AI project, and narrate it so that it is accessible to a broad online audience. We strongly encourage student participation. To increase reach, select videos will be uploaded to Youtube and promoted through social media (twitter, facebook, g+) and major blogs in AI and robotics. VIDEO FORMAT AND CONTENT Either a 1 minute (max) short video or a 5 minute (max) long video, with English narration (or English subtitles). 
Consider combining screenshots, interviews, and videos of a system in action. Make the video self-contained, so that newcomers to AI can understand and learn from it. We encourage a good sense of humor, but will only accept submissions with serious AI content. For example, we welcome submissions of videos that: * Highlight a research topic - contemporary or historic, your own or from another group * Introduce viewers to an exciting new AI-related technology * Provide a window into the research activities of a laboratory and/or senior researcher * Attract prospective students to the field of AI * Explain AI concepts - your video could be used in the classroom Please note that this list is not exhaustive. Novel ideas for AI-based videos, including those not necessarily based on a "system in action", are encouraged. No matter what your choice is, creativity is encouraged! (Please note: The authors of previous, award-winning videos typically used humor, background music, and carefully selected movie clips to make their contributions come alive.) Please also note that videos should only contain material for which the authors own copyright. Clips from films or television and music for the soundtrack should only be used if copyright permission has been granted by the copyright holders, and this written permission accompanies the video submission. SUBMISSION INSTRUCTIONS Submit your video by making it available for download on a (preferably password-protected) dropbox, ftp or website. Once you have done so, please fill out the submission form (http://www.aaaivideos.org/submission_form.txt) and send it to us by email (submission at aaaivideos.org). All submissions are due no later than April 15, 2014. REVIEW AND AWARD PROCESS Submitted videos will be peer-reviewed by members of the programme committee according to the criteria below. Videos that receive positive reviews will be accepted for publication in the AAAI Video Competition proceedings. Select videos will also be uploaded to Youtube, promoted through social media, and featured on the dedicated website ( http://www.aaaivideos.org). The best videos will be nominated for awards. Winners will be revealed at the award ceremony during AAAI-14. All authors of accepted videos will be asked to sign a distribution license form. Review criteria: 1. Relevance to AI (research or application) 2. Excitement generated by the technology presented 3. Educational content 4. Entertainment value 5. Presentation (cinematography, narration, soundtrack, production values) AWARD CATEGORIES Best Video, Best Short Video, Best Student Video, Most Jaw-Dropping Technology, Most Educational, Most Entertaining and Best Presentation. (Categories may be changed at the discretion of the chairs.) AWARDS Trophies ("Shakeys"). KEY DATES * Submission Deadline: April 15, 2014 * Reviewing Decision Notifications & Award Nominations: May 15, 2014 * Final Version Due: June 15, 2014 * Screening and Award Presentations: July 28, 2014 FOR MORE INFORMATION Please contact us at info at aaaivideos.org We look forward to your participation in this exciting event! Mauro Birattari and Sabine Hauert AAAI Video Competition 2014 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From m.lengyel at eng.cam.ac.uk Tue Feb 11 09:46:27 2014 From: m.lengyel at eng.cam.ac.uk (=?windows-1252?Q?M=E1t=E9_Lengyel?=) Date: Tue, 11 Feb 2014 14:46:27 +0000 Subject: Connectionists: Advanced Course in Computational Neuroscience 2014 Message-ID: <000F5479-0825-4973-9A24-18D291461824@eng.cam.ac.uk> ADVANCED COURSE IN COMPUTATIONAL NEUROSCIENCE August 3 - 30, 2014, FIAS, Frankfurt, Germany http://fias.uni-frankfurt.de/accn/ Applications accepted: February 10, 2014 ? March 23, 2014 SCIENTIFIC DIRECTORS: * Ehud Ahissar (Weizmann Institute, Israel) * Dieter Jaeger (Emory University, USA) * M?t? Lengyel (University of Cambridge, UK) * Christian Machens (Champalimaud Neuroscience Programme, Portugal) LOCAL ORGANIZERS: * Jochen Triesch (FIAS, Frankfurt, Germany) * Hermann Cuntz (FIAS & ESI, Frankfurt, Germany) The ACCN is for advanced graduate students and postdoctoral fellows who are interested in learning the essentials of the field of computational neuroscience. The course has two complementary parts. Mornings are devoted to lectures given by distinguished international faculty on topics across the breadth of experimental and computational neuroscience. During the rest of the day, students pursue a project of their choosing under the close supervision of expert tutors. This gives them practical training in the art and practice of neural modeling. The first week of the course introduces students to essential neurobiological concepts and to the most important techniques in modeling single cells, synapses and circuits. Students learn how to solve their research problems using software such as MATLAB, NEST, NEURON, Python, XPP, etc. During the following three weeks the lectures cover networks and specific neural systems and functions. Topics range from modeling single cells and subcellular processes through the simulation of simple circuits, large neuronal networks and system level models of the brain. The course ends with project presentations by the students. The course is designed for students from a variety of disciplines, including neuroscience, physics, electrical engineering, computer science, mathematics and psychology. Students are expected to have a keen interest and basic background in neurobiology as well as some computer experience. Students of any nationality can apply. Essential details: * Course size: about 30 students. * Fee (which covers tuition, lodging, meals and excursions): EUR 750. * Scholarships and travel stipends are available. * Application start: February 10, 2014 * Application deadline: March 23, 2014 * Deadline for letters of recommendation: March 23, 2014 * Notification of results: May, 2014 Information and application http://fias.uni-frankfurt.de/accn/ Contact address: accn at fias.uni-frankfurt.de FACULTY Erik De Schutter (Okinawa Institute of Science and Technology, Japan), Dieter Jaeger (Emory University, USA), Astrid Prinz (Emory University, USA), Charles Wilson (University of Texas, San Antonio, USA), Michael Hausser (University College London, UK), Sophie Deneve (Ecole Normale Superieure, France), Christian Machens (Champalimaud Centre for the Unknown, Portugal), Jochen Triesch (FIAS, Germany), Misha Tsodyks (Weizmann Institute, Israel), Carl van Vreeswijk (CNRS Paris, France), Peter Dayan (University College London, UK), Joern Diedrichsen (University College London, UK), M?t? 
Lengyel (University of Cambridge, UK), Zhaoping Li (University College London, UK), Tatjana Tchumatchenko (MPI for Brain Research, Frankfurt, Germany), Ehud Ahissar (Weizmann Institute, Israel), Merav Ahissar (Hebrew University, Israel), Nava Rubin (Universitat Pompeu Fabra, Spain) General Interest Lectures: Hans-Joachim Pflueger (Freie Universitaet Berlin, Germany), Erin Schuman (MPI for Brain Research, Germany), Erik De Schutter (Okinawa Institute of Science and Technology, Japan), J. Kevin O'Regan (Paris Descartes University, France) Tutors: Daniel Miner (Frankfurt, Germany), Andreea Lazar (Frankfurt, Germany), Wieland Brendel (Lisbon / Tuebingen, Portugal / Germany), Sina Tootoonian (Cambridge, UK), Peter Jedlicka (Frankfurt, Germany) SECRETARY DURING THE COURSE Chris Ploegaert (University of Antwerp, Belgium) -- Mate Lengyel, PhD Computational and Biological Learning Lab Cambridge University Engineering Department Trumpington Street, Cambridge CB2 1PZ, UK tel: +44 (0)1223 748 532, fax: +44 (0)1223 332 662 email: m.lengyel at eng.cam.ac.uk web: www.eng.cam.ac.uk/~m.lengyel From bwyble at gmail.com Tue Feb 11 12:17:10 2014 From: bwyble at gmail.com (Brad Wyble) Date: Tue, 11 Feb 2014 12:17:10 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: <171A4C47-CE15-4B0F-8F55-56F5D5B456D0@uthscsa.edu> References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <4451FE2D-8521-46F0-A1CE-148F5CC83549@uthscsa.edu> <9E09D066-0F2E-42F7-AE55-4D2750570F77@eng.ucsd.edu> <171A4C47-CE15-4B0F-8F55-56F5D5B456D0@uthscsa.edu> Message-ID: On Tue, Feb 11, 2014 at 10:27 AM, james bower wrote: > With respect to big data, attention and vision. > > Of course we collect a lot of data - however, it is precisely my point > that we 'point' our receptors towards the data we want based on what we > already think we know is out there. "Attention" as I generally hear it > discussed, I think, doesn't have enough of the sense that we are seeking > data we expect. > > There is quite a lot of literature on "top down" attention, and it is well known that attention is focussed towards regions of a visual image that are likely to contain the object of interest. For example, people direct their eyes towards sidewalks when looking for people which is discussed in many intro cognitive psych textbooks This was known even farther back when Yarbus asked subjects different questions about an image and tracked their eyes: http://sstetson.files.wordpress.com/2010/07/702px-yarbus_the_visitor.jpg People don't discuss this quite as often as you'd like for perhaps two reasons: one being that it's so obviously true that it's hard to make it interesting, and the other being that it's always difficult to study natural images in the same rigorous manner that people study wholly artificial stimuli. Nevertheless, there is a (relatively)recent surge in study of natural images, so perhaps you'd be happier with the state of affairs in attention research than you think you'd be. However Jim, it has to be emphasized that the ability to attend selectively based on these expectations requires a lot of learning in the first place! Probably years of exposure to visual input are required, as Gary suggests. As an example, we have recently shown that attention can be rapidly captured by natural images that match one's task set (which you might fairly describe as an expectation). 
So for example if you are looking for a marine animal, a picture of a seahorse will grab your attention within about 200 ms. I think that it would be extremely hard to argue that the mapping between "marine animal" and the visual form of a seahorse is wired by the genes. Attention therefore provides a great example of a system that can be triggered both by hardwired patterns (e.g. luminance- and orientation-defined stimuli) and by acquired patterns (e.g. marine animals). I suspect that the same is true in every modality. -Brad
If you have received this > e-mail in error or are not the intended recipient, you are hereby notified > that any disclosure, copying, distribution or use of, or the taking of any > action in reliance upon, any of the information contained in this e-mail, or > > any of the attachments to this e-mail, is strictly prohibited and that > this e-mail and all of the attachments to this e-mail, if any, must be > > immediately returned to the sender or destroyed and, in either case, this > e-mail and all attachments to this e-mail must be immediately deleted from > your computer without making any copies hereof and any and all hard copies > made must be destroyed. If you have received this e-mail in error, please > notify the sender by e-mail immediately. > > > > > [I am in Dijon, France on sabbatical this year. To call me, Skype works > best (gwcottrell), or dial +33 788319271] > > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > > My schedule is here: http://tinyurl.com/b7gxpwo > > Computer Science and Engineering 0404 > IF USING FED EX INCLUDE THE FOLLOWING LINE: > CSE Building, Room 4130 > University of California San Diego > 9500 Gilman Drive # 0404 > La Jolla, Ca. 92093-0404 > > Things may come to those who wait, but only the things left by those who > hustle. -- Abraham Lincoln > > "I'll have a caf? mocha vodka valium latte to go, please" -Anonymous > > "Of course, none of this will be easy. If it was, we would already know > everything there was about how the brain works, and presumably my life > would be simpler here. It could explain all kinds of things that go on in > Washington." -Barack Obama > > "Probably once or twice a week we are sitting at dinner and Richard says, > 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. > Bargmann said. > > "A grapefruit is a lemon that saw an opportunity and took advantage of > it." - note written on a door in Amsterdam on Lijnbaansgracht. > > "Physical reality is great, but it has a lousy search function." -Matt Tong > > "Only connect!" -E.M. Forster > > "You always have to believe that tomorrow you might write the matlab > program that solves everything - otherwise you never will." -Geoff Hinton > > "There is nothing objective about objective functions" - Jay McClelland > > "I am awaiting the day when people remember the fact that discovery does > not work by deciding what you want and then discovering it." > -David Mermin > > Email: gary at ucsd.edu > Home page: http://www-cse.ucsd.edu/~gary/ > > > [I am in Dijon, France on sabbatical this year. To call me, Skype works > best (gwcottrell), or dial +33 788319271] > > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > > My schedule is here: http://tinyurl.com/b7gxpwo > > Computer Science and Engineering 0404 > IF USING FED EX INCLUDE THE FOLLOWING LINE: > CSE Building, Room 4130 > University of California San Diego > 9500 Gilman Drive # 0404 > La Jolla, Ca. 92093-0404 > > Things may come to those who wait, but only the things left by those who > hustle. -- Abraham Lincoln > > "I'll have a caf? mocha vodka valium latte to go, please" -Anonymous > > "Of course, none of this will be easy. If it was, we would already know > everything there was about how the brain works, and presumably my life > would be simpler here. It could explain all kinds of things that go on in > Washington." -Barack Obama > > "Probably once or twice a week we are sitting at dinner and Richard says, > 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. > Bargmann said. 
> > "A grapefruit is a lemon that saw an opportunity and took advantage of > it." - note written on a door in Amsterdam on Lijnbaansgracht. > > "Physical reality is great, but it has a lousy search function." -Matt Tong > > "Only connect!" -E.M. Forster > > "You always have to believe that tomorrow you might write the matlab > program that solves everything - otherwise you never will." -Geoff Hinton > > "There is nothing objective about objective functions" - Jay McClelland > > "I am awaiting the day when people remember the fact that discovery does > not work by deciding what you want and then discovering it." > -David Mermin > > Email: gary at ucsd.edu > Home page: http://www-cse.ucsd.edu/~gary/ > > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > > *Phone: 210 382 0553 <210%20382%200553>* > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > > CONFIDENTIAL NOTICE: > > The contents of this email and any attachments to it may be privileged > or contain privileged and confidential information. This information is > only for the viewing or use of the intended recipient. If you have received > this e-mail in error or are not the intended recipient, you are hereby > notified that any disclosure, copying, distribution or use of, or the > taking of any action in reliance upon, any of the information contained in > this e-mail, or > > any of the attachments to this e-mail, is strictly prohibited and that > this e-mail and all of the attachments to this e-mail, if any, must be > > immediately returned to the sender or destroyed and, in either case, > this e-mail and all attachments to this e-mail must be immediately deleted > from your computer without making any copies hereof and any and all hard > copies made must be destroyed. If you have received this e-mail in error, > please notify the sender by e-mail immediately. > > > > -- Brad Wyble Assistant Professor Psychology Department Penn State University http://wyblelab.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From H.Abbass at adfa.edu.au Tue Feb 11 12:34:02 2014 From: H.Abbass at adfa.edu.au (Hussein Abbass) Date: Tue, 11 Feb 2014 17:34:02 +0000 Subject: Connectionists: The Forgotten Role of Pareto Message-ID: There is an important issue that seems to be missing from the current discussions on Deep Belief Networks, Developmental Robotics, etc. This forgotten issue has received less attention in machine learning, developmental robotics, and neuroscience in general. It is that an individual "explicitly" has conflicting objectives. I will call this explicit conflicting objectives as "Pareto" to differentiate it from taking a weighted sum to combine these objectives into one. I argue we lose a big opportunity when doing this mistake. What a Pareto approach does? It allows many competing models/hypotheses to co-exist together as long as they explain different parts of the agent's utility. They co-exist because they "optimally" trade-off the conflicting objectives of the agent "differently". 
In a way, this idea of Pareto learning [1] provides a principled approach to combining models and forming ensembles [2], allowing the agent to dance (be adaptive) on this Pareto curve when faced with different environmental conditions [3] and swinging their decisions by selecting a model that is appropriate to the trade-off the agent requires in different environments. It also seems natural to me that this Pareto approach gives us the ability as individuals to be flexible and defines new measures for robustness [4]. In fact, the idea in [3] "on reflection after 5 years since it was published" offers a cognitive architecture. The non-dominance set is the smallest number of models that need to be maintained within the long term memory (LTM). When the agent senses the trade-off in the environment, a sensory-semantic process kicks off and select the most appropriate model from the LTM to match the sensed trade-off, which can then send a code into a short term memory. Also, on reflection, it seems a principled approach to form the society in Minsky's society of mind [5], because it does not require a hierarchy and it satisfies the non-compromise principle. By limiting learning and evolution to a single objective, we limit the flexibility in the system to a single point on the trade-off curve; we lose a precious opportunity. My hypothesis is, this conflict in the objectives that humans are faced with in their environment is a primary cause for many differentiation processes, be it biological, behavioural or cognitive. It is an essential component in the interaction between the agent and the environment, without which, we are stuck with one single model - even if it is made-up of many other models that "seem" on an arbitrarily defined measure to be different! P.S. I differentiate between different environments or different environmental conditions and the same environment with different utilities. The reason is, from a multi-agent perspective, the utility - I would argue - sits at the interfaces among (interaction of) the agents, while the environment is everything that is outside the multi-agent (exogenous to the multiple agents as a group). I argue that utilities sit at the interface rather than within the agents because I believe that utilities are negotiated, they are not pre-determined. 1. Abbass H.A. (2003) Speeding up Back-Propagation Using Multiobjective Evolutionary Algorithms, Neural Computation, MIT Press, vol. 15, No 11, 2705-2726. 2. Abbass H.A. (2003) "Pareto neuro-ensembles." AI 2003: Advances in Artificial Intelligence. LNCS 2903, Springer Berlin Heidelberg. 554-566. 3. Abbass H.A. and Bender A. (2009) The Pareto Operating Curve for Risk Minimization, Artificial Life and Robotics Journal, Springer, 14(4), 449-452. 4. Bui L., Abbass H.A., Barlow M., and Bender A. (2012) Robustness Against the Decision-Maker's Attitude to Risk in Problems with Conflicting Objectives, IEEE Transactions on Evolutionary Computation, 16(1), 1-19. 5. Leu G., Curtis N., and Abbass H.A. Society of Mind cognitive agent architecture applied to drivers adapting in a traffic context, Adaptive Behaviour, to appear, online first doi: 10.1177/1059712313509652. Prof. 
Hussein Abbass | School of Engineering and Information Technology | University of New South Wales - Canberra | Canberra, ACT 2600, Australia | Tel: +61-2-62688158 | Fax: +61-2-62688276| www: http://www.husseinabbass.net/ ________________________________ THE UNIVERSITY OF NEW SOUTH WALES UNSW CANBERRA AT THE AUSTRALIAN DEFENCE FORCE ACADEMY PO Box 7916, CANBERRA BC 2610, Australia Web: http://unsw.adfa.edu.au CRICOS Provider no. 00100G This message is intended for the addressee named and may contain confidential information. If you are not the intended recipient, please delete it and notify the sender. Views expressed in this message are those of the individual sender and are not necessarily the views of UNSW. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Tue Feb 11 13:40:07 2014 From: bower at uthscsa.edu (james bower) Date: Tue, 11 Feb 2014 12:40:07 -0600 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <4451FE2D-8521-46F0-A1CE-148F5CC83549@uthscsa.edu> <9E09D066-0F2E-42F7-AE55-4D2750570F77@eng.ucsd.edu> <171A4C47-CE15-4B0F-8F55-56F5D5B456D0@uthscsa.edu> Message-ID: <60F06006-DC81-4988-B000-D33F78D9036F@uthscsa.edu> > > > However Jim, it has to be emphasized that the ability to attend selectively based on these expectations requires a lot of learning in the first place! This is precisely the question I am trying to raise - how much ?learning? in the sense of individual learning is actually involved. In the olfactory system, we now suspect not much. The brain probably knows much more about what it is looking for than we think - de novo. No doubt at all, that you can set up circumstances where individual learning has to take place - but in the real world, how much is that really the way the system works - and how much does the artificial effort reflect the design structure of the system. We can teach animals to recognize all kinds of chemical odorants - however, is that forced artificial use of the olfactory system really the way to understand its structure. Have oriented bars in the visual system been a net plus for figuring out how the system works, or a massive distraction (leading to many many distracting theories). My bets have always been on the later. This is, I know, a very very old argument - comes down to nature / nurture in some sense. However, i am suggesting that the kinds of experiments we do, have all kinds of built in assumptions about learning in them to begin with, and that a great deal of machine learning, and NNs before that seems to assume fundamentally that these networks needed to learn from examples individually. Many many years ago (sorry to keep dating myself), I pointed out in one of the first snowbird meetings that one needed to consider learning on individual as well as evolutionary time scales and that they were related in almost certainly a complex way. > Probably years of exposure to visual input are required, as Gary suggests. As an example, we have recently shown that attention can be rapidly captured by natural images that match one's task set (which you might fairly describe as an expectation). yes - what I am saying is that this might be MOST of learning - and the ?task set? in the real world is given. > So for example if you are looking for a marine animal, a picture of a seahorse will grab your attention within about 200ms. 
I think that it would be extremely hard to hard argue that the mapping between "marine animal" and a visual form of a sea horse is wired by the genes. Ah - another subject what ?wired by genes? means, and another level of our relative lack of sophistication and understanding about how biology works - genotype to phenotype - BUT - although computational neuroscience started first, it is pretty clear that modeling of the realistic type is advancing faster than in neuroscience. To wit this out today: http://whis.caltech.edu/GeNeTool/GeNeTool.html In fact, this work by Erik Davidson is a direct result of an intervention by myself and Hamid Boulari http://research.fhcrc.org/bolouri/en.html, then a faculty member at the University of Hartforshire, who had been working on artificial life simulations, who had decided it was time to get more serious about real biology. Years ago, he convinced Erik that moving his ?models? out of his head and into mathematics was the right thing to do. An interesting story actually, as to how that eventually happened. But anyway. IMHO we know as little about what it means to be ?wired in genes? as we do about how cerebral cortex works - and for the same reasons, too many ?mind models? not enough real ones. > Attention therefore provides a great example of a system that can be triggered by both hardwired (e.g. luminance and orientation defined stimuli), and acquired patterns (e.g. marine animals). I suspect they are not ?acquired? in individual time, but rather in evolutionary time. That is what I am saying - what form they exist in internally is another (and important obviously) question. Many years ago, I was visiting CNRS in Toulouse France, and Simon Thorpe had just finished one of the first ?how fast can you recognize it? visual experiments. After my talk (on cortical oscillations) he asked me if I could guess how fast a human could recognize the presence of an animal in a visual scene - I said under 200 Msecs - he was quite surprised that I guessed the answer - I told him that it was one theta iteration. Point being, that humans can do this for animals they have never seen in the wild, or ever seen at all. call it a ?search image? - but I think, again, we attribute too much to learning, especially in the USA. > I suspect that the same is true in every modality. And I do too Jim > > > -Brad > > > > > > On Feb 11, 2014, at 4:34 AM, Gary Cottrell wrote: > >> Oh, and I forgot to mention, this is just visual information, obviously. Compare this to the 5-8 syllables per second we get (depending on language, but information rate seems to be about the same across languages - relative to Vietnamese (Pellegrino et al. 2011). So this is about double the samples of fixations, per second, but we aren't always listening to speech. But for those who listen to rap, Eminem comes in at about 10 syllables per second, but he is topped by Outsider, at 21 syllables per second. >> >> g. >> >> >> Fran?ois Pellegrino, Christophe Coup?, Egidio Marsico (2011) Across-Language Perspective on Speech Information Rate. >> Language, 87(3):539-558.K | 10.1353/lan.2011.0057 >> >> A cross-language perspective on speech information rate >> >> Fran?ois Pellegrino, Christophe Coup? and Egidio Marsico >> >> >> On Feb 11, 2014, at 11:22 AM, Gary Cottrell wrote: >> >>> interesting points, jim! >>> >>> I wonder, though, why you worry so much about "big data"? >>> >>> I think it is more like "appropriate-sized data." 
we have never before been able to give our models anything like the kind of data we get in our first years of life. Let's do a little back-of-the-envelope on this. We saccade about 3 times a second, which, if you are awake 16 hours a day (make that 20 for Terry Sejnowski), come out to about 172,800 fixations per day, or high-dimensional samples of the world, if you like. One year of that, not counting drunken blackouts, etc., is 63 million samples. After 10 years that's 630 million samples. This dwarfs imagenet, at least the 1.2 million images used by Krizhevsky et al. Of course, there is a lot of redundancy here (spatial and temporal), which I believe the brain uses to construct its models (e.g., by some sort of learning rule like F?ldi?k's), so maybe 1.2 million isn't so bad. >>> >>> On the other hand, you may argue, imagenet is nothing like the real world - it is, after all, pictures taken by humans, so objects tend to be centered. This leads to a comment about you worrying about filtering data to avoid the big data "problem." Well, I would suggest that there is a lot of work on attention (some of it completely compatible with connectionist models, e.g., Itti, et al. 1998, Zhang, et al., 2008) that would cause a system to focus on objects, just as photographers do. So, it isn't like we haven't worried about that as you do, it's just that we've done something about it! ;-) >>> >>> Anyway, I like your ideas about the cerebellum - sounds like there are a bunch of Ph.D. theses in there? >>> >>> cheers, >>> gary >>> >>> >>> Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20, 1254?1259. >>> >>> Zhang, Lingyun, Tong, Matthew H., Marks, Tim K., Shan, Honghao, and Cottrell, Garrison W. (2008). SUN: A Bayesian Framework for Saliency Using Natural Statistics. Journal of Vision 8(7):32, 1-20. >>> The code for SUN is here >>> On Feb 10, 2014, at 10:04 PM, james bower wrote: >>> >>>> One other point that some of you might find interesting. >>>> >>>> While most neurobiologists and text books describe the cerebellum as involved in motor control, I suspect that it is actually not a motor control device in the usual sense at all. We proposed 20+ years ago that the cerebellum is actually a sensory control device, using the motor system (although not only) to precisely position sensory surfaces to collect the data the nervous system actually needs and expects. In the context of the current discussion about big data - such a mechanism would also contribute to the nervous system?s working around a potential data problem. >>>> >>>> Leaping and jumping forward, as an extension of this idea, we have proposed that autism may actually be an adaptive response to cerebellar dysfunction - and therefore a response to uncontrolled big data flux (in your terms). >>>> >>>> So, if correct, the brain adapts to being confronted with badly controlled data acquisition by shutting it off. >>>> >>>> Just to think about. Again, papers available for anyone interested. >>>> >>>> Given how much we do know about cerebellar circuitry - this could actually be an interesting opportunity for some cross disciplinary thinking about how one would use an active sensory data acquisition controller to select the sensory data that is ideal given an internal model of the world. 
Almost all of the NN type cerebellar models to date have been built around either the idea that the cerebellum is a motor timing device, or involved in learning (yadda yadda). >>>> >>>> Perhaps most on this list interested in brain networks don?t know that far and away, the pathway with the largest total number of axons is the pathway from the (entire) cerebral cortex to the cerebellum. We have predicted that this pathway is the mechanism by which the cerebral cortex ?loads? the cerebellum with knowledge about what it expects and needs. >>>> >>>> >>>> >>>> Jim >>>> >>>> >>>> >>>> >>>> >>>> On Feb 10, 2014, at 2:24 PM, james bower wrote: >>>> >>>>> Excellent points - reminds me again of the conversation years ago about whether a general structure like a Hopfield Network would, by itself, solve a large number of problems. All evidence from the nervous system points in the direction of strong influence of the nature of the problem on the solution, which also seems consistent with what has happened in real world applications of NN to engineering over the last 25 years. >>>>> >>>>> For biology, however, the interesting (even fundamental) question becomes, what the following actually are: >>>>> >>>>>> endowed us with custom tools for learning in different domains >>>>> >>>>>> the contribution from evolution to neural wetware might be >>>>> >>>>> I have mentioned previously, that my guess (and surprise) based on our own work over the last 30 years in olfaction is that ?learning? may all together be over emphasized (we do love free will). Yes, in our laboratories we place animals in situations where they have to ?learn?, but my suspicion is that in the real world where brains actually operate and evolve, most of what we do is actually ?recognition? that involves matching external stimuli to internal ?models? of what we expect to be there. I think that it is quite likely that that ?deep knowledge? is how evolution has most patterned neural wetware. Seems to me a way to avoid NP problems and the pitfalls of dealing with ?big data? which as I have said, I suspect the nervous system avoids at all costs. >>>>> >>>>> I have mentioned that we have reason to believe (to my surprise) that, starting with the olfactory receptors, the olfactory system already ?knows? about the metabolic structure of the real world. Accordingly, we are predicting that its receptors aren?t organized to collect a lot of data about the chemical structure of the stimulus (the way a chemist would), but instead looks for chemical signatures of metabolic processes. e.g. , it may be that 1/3 or more of mouse olfactory receptors detect one of the three molecules that are produced by many different kinds of fruit when ripe. ?Learning? in olfaction, might be some small additional mechanism you put on top to change the ?hedonic? value of the stimulus - ie. you can ?learn? to like fermented fish paste. But it is very likely that recognizing the (usually deleterious) environmental signature of fermentation is "hard wired?, requiring ?learning? to change the natural category. >>>>> >>>>> I know that many cognitive types (and philosophers as well) have developed much more nuanced discussions of these questions - however, I have always been struck by how much of the effort in NNs is focused on ?learning? as if it is the primary attribute of the nervous system we are trying to figure out. It seems to me figuring out "what the nose already knows? is much more important. 
>>>>> >>>>> >>>>> Jim >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Feb 10, 2014, at 10:38 AM, Gary Marcus wrote: >>>>> >>>>>> Juergen and others, >>>>>> >>>>>> I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lay at the center of why many of the hardest problems in AI and neuroscience continue to lay outside of reach, despite in-principle proofs about computational universality. >>>>>> >>>>>> John?s basic points, which I have also made before (e.g. in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts) are two >>>>>> >>>>>> a. It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding. Socher et al?s impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. >>>>>> >>>>>> b. it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device. >>>>>> >>>>>> Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can?t be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like. Wolpert?s No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon) but been no match for their symbolic competitors in other domains (chess) and worked only as an adjunct rather than an central ingredient in still others (parsing, question-answering a la Watson, etc); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all. >>>>>> >>>>>> My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the endproduct of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation. We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field?s prospects. 
>>>>>> >>>>>> Best, >>>>>> Gary Marcus >>>>>> >>>>>> Professor of Psychology >>>>>> New York University >>>>>> Visiting Cognitive Scientist >>>>>> Allen Institute for Brain Science >>>>>> Allen Institute for Artiificial Intelligence >>>>>> co-edited book coming late 2014: >>>>>> The Future of the Brain: Essays By The World?s Leading Neuroscientists >>>>>> http://garymarcus.com/ >>>>>> >>>>>> On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber wrote: >>>>>> >>>>>>> John, >>>>>>> >>>>>>> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. >>>>>>> >>>>>>> Juergen >>>>>>> >>>>>>> http://www.idsia.ch/~juergen/whatsnew.html >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >>>>>>> >>>>>>>> Juergen: >>>>>>>> >>>>>>>> You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >>>>>>>> >>>>>>>> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >>>>>>>> learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >>>>>>>> I gave up this static deep learning idea later after we considered the Principle 1: Development. >>>>>>>> >>>>>>>> The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. >>>>>>>> >>>>>>>> I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." >>>>>>>> >>>>>>>> Of course, the cerebral pathways themselves are not a stack of recurrent NN either. >>>>>>>> >>>>>>>> There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. >>>>>>>> >>>>>>>> -John >>>>>>>> >>>>>>>> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >>>>>>>>> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >>>>>>>>> >>>>>>>>> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >>>>>>>>> >>>>>>>>> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. 
It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. >>>>>>>>> >>>>>>>>> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >>>>>>>>> >>>>>>>>> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >>>>>>>>> >>>>>>>>> >>>>>>>>> References: >>>>>>>>> >>>>>>>>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >>>>>>>>> >>>>>>>>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >>>>>>>>> >>>>>>>>> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >>>>>>>>> >>>>>>>>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >>>>>>>>> >>>>>>>>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >>>>>>>>> >>>>>>>>> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >>>>>>>>> >>>>>>>>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >>>>>>>>> >>>>>>>>> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >>>>>>>>> >>>>>>>>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. 
http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> Juergen Schmidhuber >>>>>>>>> http://www.idsia.ch/~juergen/whatsnew.html >>>>>>>> >>>>>>>> -- >>>>>>>> -- >>>>>>>> Juyang (John) Weng, Professor >>>>>>>> Department of Computer Science and Engineering >>>>>>>> MSU Cognitive Science Program and MSU Neuroscience Program >>>>>>>> 428 S Shaw Ln Rm 3115 >>>>>>>> Michigan State University >>>>>>>> East Lansing, MI 48824 USA >>>>>>>> Tel: 517-353-4388 >>>>>>>> Fax: 517-432-1061 >>>>>>>> Email: weng at cse.msu.edu >>>>>>>> URL: http://www.cse.msu.edu/~weng/ >>>>>>>> ---------------------------------------------- >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> Dr. James M. Bower Ph.D. >>>>> >>>>> Professor of Computational Neurobiology >>>>> >>>>> Barshop Institute for Longevity and Aging Studies. >>>>> >>>>> 15355 Lambda Drive >>>>> >>>>> University of Texas Health Science Center >>>>> >>>>> San Antonio, Texas 78245 >>>>> >>>>> >>>>> Phone: 210 382 0553 >>>>> >>>>> Email: bower at uthscsa.edu >>>>> >>>>> Web: http://www.bower-lab.org >>>>> >>>>> twitter: superid101 >>>>> >>>>> linkedin: Jim Bower >>>>> >>>>> >>>>> CONFIDENTIAL NOTICE: >>>>> >>>>> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or >>>>> >>>>> any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be >>>>> >>>>> immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. >>>>> >>>>> >>>>> >>>> >>>> >>>> >>>> >>>> >>>> Dr. James M. Bower Ph.D. >>>> >>>> Professor of Computational Neurobiology >>>> >>>> Barshop Institute for Longevity and Aging Studies. >>>> >>>> 15355 Lambda Drive >>>> >>>> University of Texas Health Science Center >>>> >>>> San Antonio, Texas 78245 >>>> >>>> >>>> Phone: 210 382 0553 >>>> >>>> Email: bower at uthscsa.edu >>>> >>>> Web: http://www.bower-lab.org >>>> >>>> twitter: superid101 >>>> >>>> linkedin: Jim Bower >>>> >>>> >>>> CONFIDENTIAL NOTICE: >>>> >>>> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or >>>> >>>> any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be >>>> >>>> immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. 
If you have received this e-mail in error, please notify the sender by e-mail immediately. >>>> >>>> >>>> >>> >>> [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] >>> >>> Gary Cottrell 858-534-6640 FAX: 858-534-7029 >>> >>> My schedule is here: http://tinyurl.com/b7gxpwo >>> >>> Computer Science and Engineering 0404 >>> IF USING FED EX INCLUDE THE FOLLOWING LINE: >>> CSE Building, Room 4130 >>> University of California San Diego >>> 9500 Gilman Drive # 0404 >>> La Jolla, Ca. 92093-0404 >>> >>> Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln >>> >>> "I'll have a caf? mocha vodka valium latte to go, please" -Anonymous >>> >>> "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama >>> >>> "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. >>> >>> "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. >>> >>> "Physical reality is great, but it has a lousy search function." -Matt Tong >>> >>> "Only connect!" -E.M. Forster >>> >>> "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton >>> >>> "There is nothing objective about objective functions" - Jay McClelland >>> >>> "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." >>> -David Mermin >>> >>> Email: gary at ucsd.edu >>> Home page: http://www-cse.ucsd.edu/~gary/ >>> >> >> [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] >> >> Gary Cottrell 858-534-6640 FAX: 858-534-7029 >> >> My schedule is here: http://tinyurl.com/b7gxpwo >> >> Computer Science and Engineering 0404 >> IF USING FED EX INCLUDE THE FOLLOWING LINE: >> CSE Building, Room 4130 >> University of California San Diego >> 9500 Gilman Drive # 0404 >> La Jolla, Ca. 92093-0404 >> >> Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln >> >> "I'll have a caf? mocha vodka valium latte to go, please" -Anonymous >> >> "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama >> >> "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. >> >> "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. >> >> "Physical reality is great, but it has a lousy search function." -Matt Tong >> >> "Only connect!" -E.M. Forster >> >> "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." 
-Geoff Hinton >> >> "There is nothing objective about objective functions" - Jay McClelland >> >> "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." >> -David Mermin >> >> Email: gary at ucsd.edu >> Home page: http://www-cse.ucsd.edu/~gary/ >> > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > CONFIDENTIAL NOTICE: > > The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or > > any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be > > immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. > > > > > > > -- > Brad Wyble > Assistant Professor > Psychology Department > Penn State University > > http://wyblelab.com Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bwyble at gmail.com Tue Feb 11 21:26:26 2014 From: bwyble at gmail.com (Brad Wyble) Date: Tue, 11 Feb 2014 21:26:26 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: <60F06006-DC81-4988-B000-D33F78D9036F@uthscsa.edu> References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <4451FE2D-8521-46F0-A1CE-148F5CC83549@uthscsa.edu> <9E09D066-0F2E-42F7-AE55-4D2750570F77@eng.ucsd.edu> <171A4C47-CE15-4B0F-8F55-56F5D5B456D0@uthscsa.edu> <60F06006-DC81-4988-B000-D33F78D9036F@uthscsa.edu> Message-ID: > > > > However, i am suggesting that the kinds of experiments we do, have all > kinds of built in assumptions about learning in them to begin with, and > that a great deal of machine learning, and NNs before that seems to assume > fundamentally that these networks needed to learn from examples > individually. Many many years ago (sorry to keep dating myself), I pointed > out in one of the first snowbird meetings that one needed to consider > learning on individual as well as evolutionary time scales and that they > were related in almost certainly a complex way. > > > Well I believe that you are certainly right that it's very complex, and also that we build a lot of our theories into our experimental data. But how else can one explain the ability to map concepts onto arbitrary patterns of photons but through learning? And how can this learned mapping not be a fundamental part of our ability to deal with the visual world? The ability to distinguish visual forms, very quickly, is what allows us to deal with saccadic vision. And what about reading? Surely that constitutes a "real world" situation in which learning is fundamental? Incidentally, you might be really interested in project Prakash, which restores sight to the congenitally blind, and thereby has the opportunity to observe how quickly a visual system, that has been largely deprived of input since birth, can adapt to the presence of vision. > > Attention therefore provides a great example of a system that can be > triggered by both hardwired (e.g. luminance and orientation defined > stimuli), and acquired patterns (e.g. marine animals). > > > I suspect they are not "acquired" in individual time, but rather in > evolutionary time. That is what I am saying - what form they exist in > internally is another (and important obviously) question. > > That's a very strong position, which is (if I understand you) essentially saying that primitive visual forms are encoded in the DNA and expressed through development in the visual system. But if we critically depended on such pre-existing forms, how could we ever learn to cope with technological developments? Or to learn a new language? I think that you are drastically underestimating the state space of vision, Many years ago, I was visiting CNRS in Toulouse France, and Simon Thorpe > had just finished one of the first "how fast can you recognize it" visual > experiments. After my talk (on cortical oscillations) he asked me if I > could guess how fast a human could recognize the presence of an animal in a > visual scene - I said under 200 Msecs - he was quite surprised that I > guessed the answer - I told him that it was one theta iteration. > > Honestly, I think you just got lucky on that one. There are plenty of visual discriminations that require varying amounts of time from 200-500ms or more. 
> Point being, that humans can do this for animals they have never seen in > the wild, or ever seen at all. > > A bit unfair to call that de-novo performance, since most of us have seen a sufficient variety of animals to allow us to accurately classify a novel animal. What is important to understand is that, with a few hours of solid training, the ability to perceive/classify novel visual forms increases dramatically. (e.g. http://www.pnas.org/content/early/2012/12/19/1218438110.full.pdf) If you still doubt, watch videos of world champion StarCraft players and see if your visual system can keep up. (e.g. http://www.youtube.com/watch?v=-yfMoIVTilo) > call it a 'search image' - but I think, again, we attribute too much to > learning, especially in the USA. > > > Well you may not be wrong there. But I think that your perspective of the NN field is a bit skewed by your experience. There are quite a lot of us who build networks that function "out of the box", and emphasize on-demand function over learning. -Brad -- Brad Wyble Assistant Professor Psychology Department Penn State University http://wyblelab.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From juergen at idsia.ch Wed Feb 12 02:37:37 2014 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Wed, 12 Feb 2014 08:37:37 +0100 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: <1392131211.2486.33.camel@sam> References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <6A71A1D7-9E58-4AED-B44E-BE9AD1E567C1@idsia.ch> <1392131211.2486.33.camel@sam> Message-ID: <7D54587B-E5A3-4158-BF8F-65EC7E28A671@idsia.ch> Gary, an RNN is a general computer. Its weights are its program. The main question is: how to find good weights? Of course, this requires additional code, namely, the program searcher or learning algorithm (a central theme of AI research) - what you call "a whole bunch of machinery". Elegant code is tiny in comparison to the set of weights that it learns. What's a good learning algorithm depends on the problem class. Summarising my posts: 1. Universal optimal program search applies to all well-defined problems and all general computers including RNN. 2. Artificial evolution is useful for reinforcement learning RNN (in partially observable environments) where plain supervised backprop is useless (last post). 3. Unsupervised deep RNN stacks (1991) can improve supervised RNN (original post). Cheers, Juergen On Feb 11, 2014, at 4:06 PM, Stephen Jos? Hanson wrote: > Gary, Gary Gary... why does this seem like a deja-vu..? > > In any case, this thread might be interested in a paper we published in > 1996 in Neural Computation > which used a General RNN to learn from scratch FSMs (other later were > able to learn FSM with 1000s of states--see Lee Giles's work ). > > In our case we wanted to learn the syntax independently of the lexicon > and then transfer between languagues, showing that the RNN was able to > generalize *across* languages, and could bootstrap a state space that > automatically accomodated the new Lexicon/Grammer simply through > learning and relearning the same grammar with different leixicons.. > > take a look: > On the Emergence of Rules in Neural Networks > Stephen Jos? Hanson, Michiro Negishi, Neural Computation, > September 2002, Vol. 14, No. 
9, Pages 2245-2268 > > http://www.mitpressjournals.org/doi/abs/10.1162/089976602320264079?journalCode=neco > > > > > In any case, this genetics/learning argument is the red herring.. as you > well know. > > Best > > Steve > > On Tue, 2014-02-11 at 09:01 -0500, Gary Marcus wrote: >> Juergen: >> >> Nice papers - but the goalposts are definitely shifting here. We started with your claim that "a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes"; I acknowledged the truth of the claim, but suggested that the appeal to universality was a red herring. >> >> What you've offered in response are two rather different architectures: a lambda calculus-based learning system that makes no contact (at least in the paper I read) with RNNs at all, and an evolutionary system that uses a whole bunch of machinery besides RNNs to derive RNNs that can do the right mapping. My objection was to the notion that all you need is an RNN; by pointing to various external gadgets, you reinforce my belief that RNNs aren't enough by themselves. >> >> Of course, you are absolutely right that at some level of abstraction "evolution is another form of learning", but I think it behooves the field to recognize that that other form of learning is likely to have very different properties from, say, back-prop. Evolution shapes cascades of genes that build complex cumulative systems in a distributed but algorithmic fashion; currently popular learning algorithms tune individual weights based on training examples. To assimilate the two is to do a disservice to the evolutionary contribution. >> >> Best, >> Gary >> >> >> On Feb 11, 2014, at 6:21 AM, Juergen Schmidhuber wrote: >> >>> Gary (Marcus), you wrote: "it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device." You might be a bit too pessimistic about general purpose systems. Unbeknownst to many NN researchers, there are _universal_ problem solvers that are time-optimal in various theoretical senses [10-12] (not to be confused with universal incomputable AI [13]). For example, there is a meta-method [10] that solves any well-defined problem as quickly as the unknown fastest way of solving it, save for an additive constant overhead that becomes negligible as problem size grows. Note that most problems are large; only a few are small. (AI researchers are still in business because many are interested in problems so small that it is worth trying to reduce the overhead.) >>> >>> Several posts addressed the subject of evolution (Gary Marcus, Ken Stanley, Brian Mingus, Ali Minai, Thomas Trappenberg). Evolution is another form of learning, of searching the parameter space. Not provably optimal in the sense of the methods above, but often quite practical. It is used all the time for reinforcement learning without a teacher. For example, an RNN with over a million weights recently learned through evolution to drive a simulated car based on a high-dimensional video-like visual input stream [14,15]. The RNN learned both control and visual processing from scratch, without being aided by unsupervised techniques (which may speed up evolution by reducing the search space through compact sensory codes).
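As a concrete, purely illustrative reading of the evolutionary weight search described in the paragraph above: the controller is a small RNN, its flat weight vector is the "program", and evolution is simply mutate-and-keep-if-better on that vector, scored by episode reward. The sketch below assumes a user-supplied environment step function (env_step) and toy network sizes; it is not the GECCO'13 system cited in [14], which used compressed network encodings.

import numpy as np

OBS, HID = 4, 8                                  # toy observation and hidden sizes (assumptions)
N_WEIGHTS = OBS * HID + HID * HID + HID          # flat "program" length

def rnn_policy(w):
    """Unpack a flat weight vector into a one-layer RNN controller."""
    W_in  = w[:OBS * HID].reshape(HID, OBS)
    W_rec = w[OBS * HID:OBS * HID + HID * HID].reshape(HID, HID)
    W_out = w[OBS * HID + HID * HID:].reshape(1, HID)
    def act(obs, h):
        h = np.tanh(W_in @ obs + W_rec @ h)      # recurrent state update
        return float(W_out @ h), h               # scalar action, new hidden state
    return act

def episode_return(w, env_step, steps=100):
    """Run one episode; env_step(obs, action) -> (next_obs, reward) is a hypothetical, user-supplied environment."""
    act = rnn_policy(w)
    h, obs, total = np.zeros(HID), np.zeros(OBS), 0.0
    for _ in range(steps):
        action, h = act(obs, h)
        obs, reward = env_step(obs, action)
        total += reward
    return total

def evolve(env_step, generations=200, children=32, sigma=0.1, seed=0):
    """(1 + children) hill-climbing evolution strategy over the flat weight vector."""
    rng = np.random.default_rng(seed)
    best = rng.normal(0.0, 0.5, N_WEIGHTS)
    best_fit = episode_return(best, env_step)
    for _ in range(generations):
        for _ in range(children):
            cand = best + rng.normal(0.0, sigma, N_WEIGHTS)   # mutate the "program"
            fit = episode_return(cand, env_step)
            if fit > best_fit:                                # keep only improvements
                best, best_fit = cand, fit
    return best, best_fit

No gradients and no teacher are involved: the only feedback is the summed episode reward, which is what makes this kind of search usable for reinforcement learning in partially observable settings.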
>>> Jim, you wrote: "this could actually be an interesting opportunity for some cross disciplinary thinking about how one would use an active sensory data acquisition controller to select the sensory data that is ideal given an internal model of the world." Well, that's what intrinsic reward-driven curiosity and attention direction is all about - reward the controller for selecting data that maximises learning/compression progress of the world model - lots of work on this since 1990 [16,17]. (See also posts on developmental robotics by Brian Mingus and Gary Cottrell.) >>> >>> [10] Marcus Hutter. The Fastest and Shortest Algorithm for All Well-Defined Problems. International Journal of Foundations of Computer Science, 13(3):431-443, 2002. (On J. Schmidhuber's SNF grant 20-61847.) >>> [11] http://www.idsia.ch/~juergen/optimalsearch.html >>> [12] http://www.idsia.ch/~juergen/goedelmachine.html >>> [13] http://www.idsia.ch/~juergen/unilearn.html >>> [14] J. Koutnik, G. Cuccu, J. Schmidhuber, F. Gomez. Evolving Large-Scale Neural Networks for Vision-Based Reinforcement Learning. Proc. GECCO'13, Amsterdam, July 2013. >>> [15] http://www.idsia.ch/~juergen/compressednetworksearch.html >>> [16] http://www.idsia.ch/~juergen/interest.html >>> [17] http://www.idsia.ch/~juergen/creativity.html >>> >>> Juergen >>> >>> >> >> > > -- > Stephen José Hanson > Director RUBIC (Rutgers Brain Imaging Center) > Professor of Psychology > Member of Cognitive Science Center (NB) > Member EE Graduate Program (NB) > Member CS Graduate Program (NB) > Rutgers University > > email: jose at psychology.rutgers.edu > web: psychology.rutgers.edu/~jose > lab: www.rumba.rutgers.edu > fax: 866-434-7959 > voice: 973-353-3313 (RUBIC) > From pierre-yves.oudeyer at inria.fr Wed Feb 12 05:33:00 2014 From: pierre-yves.oudeyer at inria.fr (Pierre-Yves Oudeyer) Date: Wed, 12 Feb 2014 11:33:00 +0100 Subject: Connectionists: developmental robotics In-Reply-To: <5B3FD743-AFB1-4FD0-9F5C-246693E27852@nyu.edu> References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <2B3F3833-98F9-4ACB-A0A2-D5D5560E99FF@inria.fr> <5B3FD743-AFB1-4FD0-9F5C-246693E27852@nyu.edu> Message-ID: <73438C5B-8062-495F-A25F-36817C59AF70@inria.fr> Dear Gary, for some historical reasons, the term "developmental approach" is indeed sometimes associated with "blank slate" approaches. But to me, this association is (or should be) rather an oxymoron. I would say many active contributors to the developmental robotics community today explore Piagetian and Vygotskian legacies instead, neither innatist nor empiricist, focusing on the processes of guided change of sensorimotor, cognitive and social structures. (You can certainly find papers targeting "tabula rasa" learning and using the term "developmental robotics", but I would say this is a minority of papers in the last 6-7 years in this field, and by the way it is useful that some try to study the extent and limits of tabula rasa approaches.) And so yes, there are references to papers where sophisticated developmental architectures are used as starting points, and at various stages of the developmental path. With regards to low-level sensorimotor development in high-dimensional bodies, a perspective is provided in [1], where the integration of particular active information seeking [2], maturational [3], morphological [4], goal-based [5] and social guidance [6] mechanisms is argued to be essential (we are here far away from the blank slate).
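The learning-progress idea mentioned above (reward the controller for data that improves the world model, the same principle behind the active information-seeking work cited in [2]) can be sketched in a few lines. This is a toy illustration under strong simplifying assumptions: each sensorimotor "source" gets a trivial running-mean predictor, and the intrinsic reward is the drop in its prediction error. All class and function names are hypothetical, not taken from the cited papers.

import numpy as np

class RegionModel:
    """Trivial per-source 'world model': predicts the next observation with a running mean."""
    def __init__(self, lr=0.1):
        self.prediction = 0.0
        self.prev_error = 0.0
        self.lr = lr

    def observe(self, obs):
        error = abs(obs - self.prediction)                    # prediction error before learning
        progress = self.prev_error - error                    # drop in error = learning progress
        self.prev_error = error
        self.prediction += self.lr * (obs - self.prediction)  # train the model on the new datum
        return max(progress, 0.0)                             # intrinsic reward (never negative)

def curious_exploration(sources, steps=500, seed=0):
    """Mostly sample the source whose model is currently improving fastest."""
    rng = np.random.default_rng(seed)
    models = [RegionModel() for _ in sources]
    recent = np.zeros(len(sources))                           # smoothed progress per source
    for _ in range(steps):
        i = int(np.argmax(recent)) if rng.random() > 0.1 else int(rng.integers(len(sources)))
        reward = models[i].observe(sources[i](rng))
        recent[i] = 0.9 * recent[i] + 0.1 * reward            # smooth the progress signal
    return recent

# Hypothetical usage: a learnable source (near-constant signal) and an unlearnable noisy one.
# learnable  = lambda rng: 1.0 + 0.01 * rng.normal()
# pure_noise = lambda rng: rng.normal()
# curious_exploration([learnable, pure_noise])

Because the reward is progress rather than raw error, an agent of this kind loses interest both in sources it has already mastered and in pure noise it can never learn, which is the point of the curiosity formulation.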
Other groups developped similar argumentations, for example concerning the role of body growth in fetal development [7], and the global organization of motor learning into organized stages [8]. Some of the ideas of Chomsky and other structuralists related to algebraic, compositional and recursive mechanisms for higher level cognition are now actually being adapted to motor and action control within a developmental view, such as for example in work identifying and using primitives and compositional structures for action [9, 10], and some of us even talk of a minimalist grammar for action [11]. As far as language development is concerned, I would even say that most work I know assumes advanced prior computational structures for things like unification and recursion, within hybrid neural architectures [12] or based on older more classical computational architectures now used in concert with statistical inference for studying the development of grammatical structures in artificial agents [13, 14, 15]. Not everyone though pre-supposes those high-level symbolic compositional capabilities (some study their very formation out of neural architectures [18]), but in that case they still often consider as the central object of study the role of constraints, such as for example the role of embodiment and social cues to guide a statistical learner to pick regularities in language acquisition [16, 17]. Best, Pierre-Yves [1] Intrinsically Motivated Learning of Real-World Sensorimotor Skills with Developmental Constraints Oudeyer P-Y., Baranes A., Kaplan F. (2013) in Intrinsically Motivated Learning in Natural and Artificial Systems, eds. Baldassarre G. and Mirolli M., Springer. http://www.pyoudeyer.com/OudeyerBaranesKaplan13.pdf [2] Information Seeking, Curiosity and Attention: Computational and Neural Mechanisms Gottlieb, J., Oudeyer, P-Y., Lopes, M., Baranes, A. (2013) Trends in Cognitive Science, 17(11), pp. 585-596. http://www.pyoudeyer.com/TICSCuriosity2013.pdf [3] Maturational constraints for motor learning in high-dimensions : the case of biped walking Lapeyre, M., Ly, O., Oudeyer, P-Y. (2011) Proceedings of IEEE-RAS International Conference on Humanoid Robots (HUMANOIDS 2011), Bled, Slovenia. http://hal.archives-ouvertes.fr/docs/00/64/93/33/PDF/Humanoid.pdf [4] Poppy Humanoid Platform: Experimental Evaluation of the Role of a Bio-inspired Thigh Shape, Matthieu Lapeyre, Pierre Rouanet, Pierre-Yves Oudeyer, Humanoids 2013. http://hal.inria.fr/docs/00/86/11/10/PDF/humanoid2013.pdf [5] Active Learning of Inverse Models with Intrinsically Motivated Goal Exploration in Robots Adrien Baranes ; Pierre-Yves Oudeyer Robotics and Autonomous Systems, Elsevier, 2013, 61 (1), pp. 69-73 http://hal.inria.fr/docs/00/78/84/40/PDF/RAS-SAGG-RIAC-2012.pdf [6] Active Choice of Teachers, Learning Strategies and Goals for a Socially Guided Intrinsic Motivation Learner Sao Mai Nguyen ; Pierre-Yves Oudeyer Paladyn Journal of Bejavioral Robotics, Springer, 2012, 3 (3), pp. 136-146 http://hal.inria.fr/docs/00/93/69/32/PDF/Nguyen_2012Paladyn.pdf [7] Yasunori Yamada, Yasuo Kuniyoshi: Embodiment guides motor and spinal circuit development in vertebrate embryo and fetus, International Conference on Development and Learning (ICDL2012)/ EpiRob2012, 2012. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6400578 [8] J. Law, P. Shaw, K. Earland, M. Sheldon, M. Lee (2014) A psychology based approach for longitudinal development in cognitive robotics. Frontiers in Neurorobotics 8 (1) pp. 1-19. 
http://www.frontiersin.org/Neurorobotics/10.3389/fnbot.2014.00001/abstract [9] A Simple Ontology of Manipulation Actions Based on Hand-Object Relations in: Autonomous Mental Development, IEEE Transactions on, Issue Date: June 2013, Written by: Worgotter, F.; Aksoy, E.E.; Kruger, N.; Piater, J.; Ude, A.; Tamosiunaite, M. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6493410 [10] F. Guerin, N. Kr?ger and D. Kraft ?A survey of the ontogeny of tool use: from sensorimotor experience to planning? IEEE Trans. Autonom. Mental Develop., vol. 5, no. 1, pp. 18-45, 2013 https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6248675 [11] Katerina Pastra and Yiannis Aloimonos (2012), ?The Minimalist Grammar of Action?, Philosophical Transactions of the Royal Society B, 367(1585):103. http://www.csri.gr/files/csri/docs/pastra/philoTrans-pastra.pdf [12] Hinaut X, Dominey PF (2013) Real-Time Parallel Processing of Grammatical Structure in the Fronto-Striatal System: A Recurrent Network Simulation Study Using Reservoir Computing Plos One 8(2): e52946 http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0052946 [13] Siskind https://engineering.purdue.edu/~qobi/ [14] Beuls K, Steels L (2013) Agent-Based Models of Strategies for the Emergence and Evolution of Grammatical Agreement. PLoS ONE 8(3): e58960. doi:10.1371/journal.pone.0058960 http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0058960 [15] Daniel Hewlett, Thomas Walsh, Paul Cohen. 2011. Teaching and Executing Verb Phrases .Development and Learning (ICDL), 2011 IEEE International Conference on 2, 1-6. http://w3.sista.arizona.edu/~cohen/Publications/papers/Hewlett-2011.pdf [16] Attentional constraints and statistics in toddlers' word learning SH Suanda, SB Foster, LB Smith, C Yu - Development and Learning and Epigenetic Robotics ( ?, 2013 http://www.indiana.edu/~dll/papers/SFSY_2013_ICDL.pdf [17] The role of embodied intention in early lexical acquisition C Yu, DH Ballard, RN Aslin - Cognitive science, 2005 [18] M. Maniadakisa, P. Trahaniasa, J. Tani: ?Self-organizing high-order cognitive functions in artificial agents: implications for possible prefrontal cortex mechanisms?, Neural Networks, Vol33, pp.76-87, 2012. http://neurorobot.kaist.ac.kr/pdf_files/Maniadakis-NN2012.pdf On 11 Feb 2014, at 15:15, Gary Marcus wrote: > Pierre (and others) > > Thanks for all the links. I am a fan, in principle, but much of the developmental robotics work I have seen sticks rigidly to a fairly ?blank-slate? perspective, perhaps needlessly so. I?d especially welcome references to work in which researchers have given robots a significant head start, so that the learning that takes place has a strong starting point. Has anyone, for example, tried to build a robot that starts with the cognitive capacities of a two-year-old (rather than a newborn), and goes from there? Or taken seriously the nativist arguments of Chomsky, Pinker, and Spelke and tried to build a robot that is innately endowed with concepts like ?person?, ?object?, ?set?, and ?place?? > > Best, > Gary > > On Feb 11, 2014, at 6:58 AM, Pierre-Yves Oudeyer wrote: > >> >> Hi, >> >> the view put forward by Gart strongly resonates with the approach that has been taken in the developmental robotics community in the last 10 years. 
>> I like to explain developmental robotics as the study of developmental constraints and architectures which guide learning mechanisms so as to allow actual lifelong acquisition and adaptation of skills in the large high-dimensional real world with severely limited time and space resources. >> Such an approach considers centrally the No Free Lunch idea, and indeed tries to identifies and understand specific families of guiding mechanisms that allow corresponding families of learners to acquire families of skills in some families of environments. >> For pointers, see: http://en.wikipedia.org/wiki/Developmental_robotics >> >> Thus, in a way, while lifelong learning and adaptation is a key object of study, most work is not so much about elaborating new models of learning mechanisms, but about studying what (often changing) properties of the inner and outer environment allow to canalise them. >> Examples of such mechanisms include body and neural maturation, active learning (selection of action that provide informative and useful data), emotional and motivational systems, cognitive biases for inference, self-organisation or socio-cultural scaffolding, and their interactions. >> >> This body of work is unfortunately not yet well connected (except a few exceptions) with the connectionnist community, but I am convinced more mutual exchange would be valuable. >> >> For those interested, we have a dedicated: >> - journal: IEEE TAMD https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=4563672 >> - conference: IEEE ICDL-Epirob >> - newsletter: IEEE CIS AMD, latest issue: http://www.cse.msu.edu/amdtc/amdnl/AMDNL-V10-N2.pdf >> >> Best regards, >> Pierre-Yves Oudeyer >> http://www.pyoudeyer.com >> https://flowers.inria.fr >> >> On 10 Feb 2014, at 17:38, Gary Marcus wrote: >> >>> Juergen and others, >>> >>> I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lay at the center of why many of the hardest problems in AI and neuroscience continue to lay outside of reach, despite in-principle proofs about computational universality. >>> >>> John?s basic points, which I have also made before (e.g. in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts) are two >>> >>> a. It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding. Socher et al?s impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. >>> >>> b. it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device. >>> >>> Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can?t be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like. Wolpert?s No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). 
For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon) but been no match for their symbolic competitors in other domains (chess) and worked only as an adjunct rather than an central ingredient in still others (parsing, question-answering a la Watson, etc); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all. >>> >>> My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the endproduct of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation. We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field?s prospects. >>> >>> Best, >>> Gary Marcus >>> >>> Professor of Psychology >>> New York University >>> Visiting Cognitive Scientist >>> Allen Institute for Brain Science >>> Allen Institute for Artiificial Intelligence >>> co-edited book coming late 2014: >>> The Future of the Brain: Essays By The World?s Leading Neuroscientists >>> http://garymarcus.com/ >>> >>> On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber wrote: >>> >>>> John, >>>> >>>> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3]. >>>> >>>> Juergen >>>> >>>> http://www.idsia.ch/~juergen/whatsnew.html >>>> >>>> >>>> >>>> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >>>> >>>>> Juergen: >>>>> >>>>> You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >>>>> >>>>> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >>>>> learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >>>>> I gave up this static deep learning idea later after we considered the Principle 1: Development. >>>>> >>>>> The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. >>>>> >>>>> I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." 
>>>>> >>>>> Of course, the cerebral pathways themselves are not a stack of recurrent NN either. >>>>> >>>>> There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. >>>>> >>>>> -John >>>>> >>>>> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >>>>>> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >>>>>> >>>>>> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >>>>>> >>>>>> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. >>>>>> >>>>>> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >>>>>> >>>>>> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >>>>>> >>>>>> >>>>>> References: >>>>>> >>>>>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >>>>>> >>>>>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >>>>>> >>>>>> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >>>>>> >>>>>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >>>>>> >>>>>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . 
Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >>>>>> >>>>>> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >>>>>> >>>>>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >>>>>> >>>>>> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >>>>>> >>>>>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >>>>>> >>>>>> >>>>>> >>>>>> Juergen Schmidhuber >>>>>> http://www.idsia.ch/~juergen/whatsnew.html >>>>> >>>>> -- >>>>> -- >>>>> Juyang (John) Weng, Professor >>>>> Department of Computer Science and Engineering >>>>> MSU Cognitive Science Program and MSU Neuroscience Program >>>>> 428 S Shaw Ln Rm 3115 >>>>> Michigan State University >>>>> East Lansing, MI 48824 USA >>>>> Tel: 517-353-4388 >>>>> Fax: 517-432-1061 >>>>> Email: weng at cse.msu.edu >>>>> URL: http://www.cse.msu.edu/~weng/ >>>>> ---------------------------------------------- >>>>> >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: logo_url.jpg Type: image/jpeg Size: 409 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: logo_url.jpg Type: image/jpeg Size: 409 bytes Desc: not available URL: From poirazi at imbb.forth.gr Wed Feb 12 05:44:27 2014 From: poirazi at imbb.forth.gr (Yiota Poirazi) Date: Wed, 12 Feb 2014 12:44:27 +0200 Subject: Connectionists: Faculty positions in Neuroscience and Systems Biology at IMBB-FORTH Message-ID: <52FB508B.3070507@imbb.forth.gr> Dear friends and colleagues, I'd like to call your attention to the opening of two Researcher positions at IMBB FORTH in the areas of *NEUROSCIENCE* and *SYSTEMS BIOLOGY.* I would like to kindly ask you to forward this email to *potential candidates. * Here is some information about FORTH and IMBB. ** The Foundation for Research and Technology-Hellas (FORTH) is one of the top European Research Centers. It has ranked 1^st in terms of high impact publications in Greece and 12^th in the number of FP7 grants in Europe. Importantly, FORTH researchers have been awarded 1/3 of the prestigious and highly competitive ERC grants given to Greece since the beginning of the program in 2007. The Institute of Molecular Biology and Biotechnology (IMBB) at FORTH is the top Biological Institute in Greece, in terms of high quality personnel, publications, infrastructure and competitive grants. IMBB-FORTH is located in Heraklion,one of the most ancient and historical Greek cities, on the picturesque island of Crete. 
Crete, the homeland of major artists, such as El Greco and Nikos Kazantzakis, impressively combines the outstanding geophysical variety (forests, mountains, gorges, and beaches) with its rich history of thousands of years, resulting in the well-known Cretan culture and cuisine. International Brain Research Organization (IBRO) I am grateful for your help! best wishes, Yiota -- Panayiota Poirazi, Ph.D. Director of Research Computational Biology Laboratory Institute of Molecular Biology and Biotechnology (IMBB) Foundation of Research and Technology-Hellas (FORTH) Vassilika Vouton P.O.Box 1385 GR 711 10 Heraklion, Crete GREECE Tel: +30 2810 391139 Fax: +30 2810 391101 Email: poirazi at imbb.forth.gr http://www.dendrites.gr http://www.imbb.forth.gr/personal_page/poirazi.html -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ????-Researchers_Position_en.pdf Type: application/pdf Size: 137904 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ????-Researchers_Position_gr.pdf Type: application/pdf Size: 156299 bytes Desc: not available URL: From gary.marcus at nyu.edu Wed Feb 12 06:52:14 2014 From: gary.marcus at nyu.edu (Gary Marcus) Date: Wed, 12 Feb 2014 06:52:14 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory In-Reply-To: <7D54587B-E5A3-4158-BF8F-65EC7E28A671@idsia.ch> References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <6A71A1D7-9E58-4AED-B44E-BE9AD1E567C1@idsia.ch> <1392131211.2486.33.camel@sam> <7D54587B-E5A3-4158-BF8F-65EC7E28A671@idsia.ch> Message-ID: <140BC2C8-62DF-487C-A9B4-95BA99B6455E@nyu.edu> A pair of lovely messages this morning! A tip of the hat to Pierre for his wonderful summary of recent work in developmental robotics that is pushing beyond pure empiricism. And, Juergen, I think we've finally reached common ground, though I would place the emphasis differently, inverting the framing of your point #2: it's too much to expect supervised backprop to be a universal solution. So it's great to see the field beginning to engage more seriously in understanding what cannot readily be handled in that way (something I have been counseling [1, 2, 3] since the late 1990s), and what other approaches we might need. Best, Gary 1. Marcus, G. F. Rethinking eliminative connectionism. Cogn Psychol 37, 243-282 (1998). 2. Marcus, G. F., Vijayan, S., Bandi Rao, S. & Vishton, P. M. Rule learning by seven-month-old infants. Science 283, 77-80 (1999). 3. Marcus, G. F. The algebraic mind: Integrating connectionism and cognitive science (The MIT Press, 2001). On Feb 12, 2014, at 2:37 AM, Juergen Schmidhuber wrote: > Gary, > an RNN is a general computer. Its weights are its program. The main question is: how to find good weights? Of course, this requires additional code, namely, the program searcher or learning algorithm (a central theme of AI research) - what you call "a whole bunch of machinery". Elegant code is tiny in comparison to the set of weights that it learns. What's a good learning algorithm depends on the problem class. Summarising my posts: 1. Universal optimal program search applies to all well-defined problems and all general computers including RNN. 2.
Artificial evolution is useful for reinforcement learning RNN (in partially observable environments) where plain supervised backprop is useless (last post). 3. Unsupervised deep RNN stacks (1991) can improve supervised RNN (original post). > Cheers, > Juergen > > > On Feb 11, 2014, at 4:06 PM, Stephen Jos? Hanson wrote: > >> Gary, Gary Gary... why does this seem like a deja-vu..? >> >> In any case, this thread might be interested in a paper we published in >> 1996 in Neural Computation >> which used a General RNN to learn from scratch FSMs (other later were >> able to learn FSM with 1000s of states--see Lee Giles's work ). >> >> In our case we wanted to learn the syntax independently of the lexicon >> and then transfer between languagues, showing that the RNN was able to >> generalize *across* languages, and could bootstrap a state space that >> automatically accomodated the new Lexicon/Grammer simply through >> learning and relearning the same grammar with different leixicons.. >> >> take a look: >> On the Emergence of Rules in Neural Networks >> Stephen Jos? Hanson, Michiro Negishi, Neural Computation, >> September 2002, Vol. 14, No. 9, Pages 2245-2268 >> >> http://www.mitpressjournals.org/doi/abs/10.1162/089976602320264079?journalCode=neco >> >> >> >> >> In any case, this genetics/learning argument is the Red Herring.. as you >> well know. >> >> Best >> >> Steve >> >> On Tue, 2014-02-11 at 09:01 -0500, Gary Marcus wrote: >>> Juergen: >>> >>> Nice papers - but the goalposts are definitely shifting here. We started with your claim that "a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes?; I acknowledged the truth of the claim, but suggested that the appeal to universality was a red herring. >>> >>> What you?ve offered in response are two rather different architectures: a lambda calculus-based learning system that makes no contact (at least in the paper I read) with RNNs at all, and an evolutionary system that uses a whole bunch of machinery besides RNNs to derive RNNs that can do the right mapping. My objection was to the notion that all you need is an RNN; by pointing to various external gadgets, you reinforce my belief that RNNs aren?t enough by themselves. >>> >>> Of course, you are absolutely right that at some level of abstraction ?evolution is another a form of learning?, but I think it behooves the field to recognize that that other form of learning is likely to have very different properties from, say, back-prop. Evolution shapes cascades of genes that build complex cumulative systems in a distributed but algorithmic fashion; currently popular learning algorithms tune individual weights based on training examples. To assimilate the two is to do a disservice of the evolutionary contribution. >>> >>> Best, >>> Gary >>> >>> >>> On Feb 11, 2014, at 6:21 AM, Juergen Schmidhuber wrote: >>> >>>> Gary (Marcus), you wrote: "it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device." You might be a bit too pessimistic about general purpose systems. Unbeknownst to many NN researchers, there are _universal_ problem solvers that are time-optimal in various theoretical senses [10-12] (not to be confused with universal incomputable AI [13]). 
For example, there is a meta-method [10] that solves any well-defined problem as quickly as the unknown fastest way of solving it, save for an additive constant overhead that becomes negligible as problem size grows. Note that most problems are large; only few are small. (AI researchers are still in business because many are interested in problems so small that it is worth trying to reduce the overhead.) >>>> >>>> Several posts addressed the subject of evolution (Gary Marcus, Ken Stanley, Brian Mingus, Ali Minai, Thomas Trappenberg). Evolution is another a form of learning, of searching the parameter space. Not provably optimal in the sense of the methods above, but often quite practical. It is used all the time for reinforcement learning without a teacher. For example, an RNN with over a million weights recently learned through evolution to drive a simulated car based on a high-dimensional video-like visual input stream [14,15]. The RNN learned both control and visual processing from scratch, without being aided by unsupervised techniques (which may speed up evolution by reducing the search space through compact sensory codes). >>>> >>>> Jim, you wrote: "this could actually be an interesting opportunity for some cross disciplinary thinking about how one would use an active sensory data acquisition controller to select the sensory data that is ideal given an internal model of the world." Well, that's what intrinsic reward-driven curiosity and attention direction is all about - reward the controller for selecting data that maximises learning/compression progress of the world model - lots of work on this since 1990 [16,17]. (See also posts on developmental robotics by Brian Mingus and Gary Cottrell.) >>>> >>>> [10] Marcus Hutter. The Fastest and Shortest Algorithm for All Well-Defined Problems. International Journal of Foundations of Computer Science, 13(3):431-443, 2002. (On J. Schmidhuber's SNF grant 20-61847.) >>>> [11] http://www.idsia.ch/~juergen/optimalsearch.html >>>> [12] http://www.idsia.ch/~juergen/goedelmachine.html >>>> [13] http://www.idsia.ch/~juergen/unilearn.html >>>> [14] J. Koutnik, G. Cuccu, J. Schmidhuber, F. Gomez. Evolving Large-Scale Neural Networks for Vision-Based Reinforcement Learning. Proc. GECCO'13, Amsterdam, July 2013. >>>> [15] http://www.idsia.ch/~juergen/compressednetworksearch.html >>>> [16] http://www.idsia.ch/~juergen/interest.html >>>> [17] http://www.idsia.ch/~juergen/creativity.html >>>> >>>> Juergen >>>> >>>> >>> >>> >> >> -- >> Stephen Jos? Hanson >> Director RUBIC (Rutgers Brain Imaging Center) >> Professor of Psychology >> Member of Cognitive Science Center (NB) >> Member EE Graduate Program (NB) >> Member CS Graduate Program (NB) >> Rutgers University >> >> email: jose at psychology.rutgers.edu >> web: psychology.rutgers.edu/~jose >> lab: www.rumba.rutgers.edu >> fax: 866-434-7959 >> voice: 973-353-3313 (RUBIC) >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
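The "weights are its program" point quoted in this exchange can be seen in a toy example: the very same recurrent update rule computes qualitatively different sequence functions depending only on three numbers. A purely illustrative sketch, not drawn from any of the cited systems:

def run_rnn(w_in, w_rec, bias, inputs):
    """Single threshold unit: h_t = step(w_in * x_t + w_rec * h_{t-1} + bias)."""
    h, outputs = 0.0, []
    for x in inputs:
        h = 1.0 if (w_in * x + w_rec * h + bias) > 0 else 0.0
        outputs.append(h)
    return outputs

x = [0, 1, 0, 0, 1, 0]
print(run_rnn(1.0, 1.0, -0.5, x))   # "latch" program:  [0, 1, 1, 1, 1, 1] - remembers any 1 ever seen
print(run_rnn(1.0, 0.0, -0.5, x))   # "copy" program:   [0, 1, 0, 0, 1, 0] - ignores its own history
print(run_rnn(-1.0, 0.0, 0.5, x))   # "invert" program: [1, 0, 1, 1, 0, 1] - negates each input

Changing the numbers changes what the network computes; the open question debated in the thread is only how hard it is to find the numbers you need, and by what kind of search.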
URL: From bower at uthscsa.edu Wed Feb 12 08:07:27 2014 From: bower at uthscsa.edu (james bower) Date: Wed, 12 Feb 2014 07:07:27 -0600 Subject: Connectionists: developmental robotics In-Reply-To: <73438C5B-8062-495F-A25F-36817C59AF70@inria.fr> References: <3FD4809C-5EF7-478D-8939-AE56E6A434BE@idsia.ch> <52F420CF.7060006@cse.msu.edu> <2B3F3833-98F9-4ACB-A0A2-D5D5560E99FF@inria.fr> <5B3FD743-AFB1-4FD0-9F5C-246693E27852@nyu.edu> <73438C5B-8062-495F-A25F-36817C59AF70@inria.fr> Message-ID: > > > As far as language development is concerned, I would even say that most work I know assumes advanced prior computational structures for things like unification and recursion, Sorry again to probably be revealing my ignorance on current practices in a field, but, isn?t the "prior computational structure", in this case, the actual structure of cerebral cortex itself. Just like the structure of music, the structure of language, it seems to me, is very likely to deeply reflect the core structure of cerebral cortical networks, first and foremost. A few years ago a friend of mine Jaron Lanier and I were talking about possible influences of the computational structure of the olfactory system on language. What was an off the cuff conversation ended up as a chapter in his book ?you are not a gadget? - How olfaction might have played a crucial role in the evolution of human language Jaron didn?t get my sense completely right, but the general idea is there. As I have mentioned before, I believe that a deeper understanding of the computational requirements for the olfactory system and how those requirements may be met by the core structure of cerebral cortical networks (which has nothing to do with columns), at least represents a different approach to understanding systems like language (and vision). Jim > within hybrid neural architectures [12] or based on older more classical computational architectures now used in concert with statistical inference for studying the development of grammatical structures in artificial agents [13, 14, 15]. Not everyone though pre-supposes those high-level symbolic compositional capabilities (some study their very formation out of neural architectures [18]), but in that case they still often consider as the central object of study the role of constraints, such as for example the role of embodiment and social cues to guide a statistical learner to pick regularities in language acquisition [16, 17]. > > Best, > Pierre-Yves > > > [1] Intrinsically Motivated Learning of Real-World Sensorimotor Skills with Developmental Constraints > Oudeyer P-Y., Baranes A., Kaplan F. (2013) > in Intrinsically Motivated Learning in Natural and Artificial Systems, eds. Baldassarre G. and Mirolli M., Springer. > http://www.pyoudeyer.com/OudeyerBaranesKaplan13.pdf > > [2] Information Seeking, Curiosity and Attention: Computational and Neural Mechanisms > Gottlieb, J., Oudeyer, P-Y., Lopes, M., Baranes, A. (2013) > Trends in Cognitive Science, 17(11), pp. 585-596. > http://www.pyoudeyer.com/TICSCuriosity2013.pdf > > [3] Maturational constraints for motor learning in high-dimensions : the case of biped walking > Lapeyre, M., Ly, O., Oudeyer, P-Y. (2011) > Proceedings of IEEE-RAS International Conference on Humanoid Robots (HUMANOIDS 2011), Bled, Slovenia. > http://hal.archives-ouvertes.fr/docs/00/64/93/33/PDF/Humanoid.pdf > > [4] Poppy Humanoid Platform: Experimental Evaluation of the Role of a Bio-inspired Thigh Shape, > Matthieu Lapeyre, Pierre Rouanet, Pierre-Yves Oudeyer, > Humanoids 2013. 
> http://hal.inria.fr/docs/00/86/11/10/PDF/humanoid2013.pdf > > [5] Active Learning of Inverse Models with Intrinsically Motivated Goal Exploration in Robots > Adrien Baranes ; Pierre-Yves Oudeyer > Robotics and Autonomous Systems, Elsevier, 2013, 61 (1), pp. 69-73 > http://hal.inria.fr/docs/00/78/84/40/PDF/RAS-SAGG-RIAC-2012.pdf > > [6] Active Choice of Teachers, Learning Strategies and Goals for a Socially Guided Intrinsic Motivation Learner > Sao Mai Nguyen ; Pierre-Yves Oudeyer > Paladyn Journal of Bejavioral Robotics, Springer, 2012, 3 (3), pp. 136-146 > http://hal.inria.fr/docs/00/93/69/32/PDF/Nguyen_2012Paladyn.pdf > > [7] Yasunori Yamada, Yasuo Kuniyoshi: > Embodiment guides motor and spinal circuit development in vertebrate embryo and fetus, > International Conference on Development and Learning (ICDL2012)/ EpiRob2012, 2012. > http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6400578 > > [8] J. Law, P. Shaw, K. Earland, M. Sheldon, M. Lee (2014) A psychology based approach for longitudinal development in cognitive robotics. Frontiers in Neurorobotics 8 (1) pp. 1-19. > http://www.frontiersin.org/Neurorobotics/10.3389/fnbot.2014.00001/abstract > > [9] A Simple Ontology of Manipulation Actions Based on Hand-Object Relations > in: Autonomous Mental Development, IEEE Transactions on, Issue Date: June 2013, Written by: Worgotter, F.; Aksoy, E.E.; Kruger, N.; Piater, J.; Ude, A.; Tamosiunaite, M. > https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6493410 > > [10] F. Guerin, N. Kr?ger and D. Kraft > ?A survey of the ontogeny of tool use: from sensorimotor experience to planning? > IEEE Trans. Autonom. Mental Develop., vol. 5, no. 1, pp. 18-45, 2013 > https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6248675 > > [11] Katerina Pastra and Yiannis Aloimonos (2012), ?The Minimalist Grammar of Action?, Philosophical Transactions of the Royal Society B, 367(1585):103. > http://www.csri.gr/files/csri/docs/pastra/philoTrans-pastra.pdf > > [12] Hinaut X, Dominey PF (2013) Real-Time Parallel Processing of Grammatical Structure in the Fronto-Striatal System: A Recurrent Network Simulation Study Using Reservoir Computing Plos One 8(2): e52946 > http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0052946 > > [13] Siskind https://engineering.purdue.edu/~qobi/ > > [14] Beuls K, Steels L (2013) Agent-Based Models of Strategies for the Emergence and Evolution of Grammatical Agreement. PLoS ONE 8(3): e58960. doi:10.1371/journal.pone.0058960 > http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0058960 > > [15] Daniel Hewlett, Thomas Walsh, Paul Cohen. 2011. Teaching and Executing Verb Phrases .Development and Learning (ICDL), 2011 IEEE International Conference on 2, 1-6. > http://w3.sista.arizona.edu/~cohen/Publications/papers/Hewlett-2011.pdf > > [16] Attentional constraints and statistics in toddlers' word learning > SH Suanda, SB Foster, LB Smith, C Yu - Development and Learning and Epigenetic Robotics ( ?, 2013 > http://www.indiana.edu/~dll/papers/SFSY_2013_ICDL.pdf > > [17] The role of embodied intention in early lexical acquisition > C Yu, DH Ballard, RN Aslin - Cognitive science, 2005 > > [18] M. Maniadakisa, P. Trahaniasa, J. Tani: ?Self-organizing high-order cognitive functions in artificial agents: implications for possible prefrontal cortex mechanisms?, Neural Networks, Vol33, pp.76-87, 2012. 
> http://neurorobot.kaist.ac.kr/pdf_files/Maniadakis-NN2012.pdf > > > On 11 Feb 2014, at 15:15, Gary Marcus wrote: > >> Pierre (and others) >> >> Thanks for all the links. I am a fan, in principle, but much of the developmental robotics work I have seen sticks rigidly to a fairly ?blank-slate? perspective, perhaps needlessly so. I?d especially welcome references to work in which researchers have given robots a significant head start, so that the learning that takes place has a strong starting point. Has anyone, for example, tried to build a robot that starts with the cognitive capacities of a two-year-old (rather than a newborn), and goes from there? Or taken seriously the nativist arguments of Chomsky, Pinker, and Spelke and tried to build a robot that is innately endowed with concepts like ?person?, ?object?, ?set?, and ?place?? >> >> Best, >> Gary >> >> On Feb 11, 2014, at 6:58 AM, Pierre-Yves Oudeyer wrote: >> >>> >>> Hi, >>> >>> the view put forward by Gart strongly resonates with the approach that has been taken in the developmental robotics community in the last 10 years. >>> I like to explain developmental robotics as the study of developmental constraints and architectures which guide learning mechanisms so as to allow actual lifelong acquisition and adaptation of skills in the large high-dimensional real world with severely limited time and space resources. >>> Such an approach considers centrally the No Free Lunch idea, and indeed tries to identifies and understand specific families of guiding mechanisms that allow corresponding families of learners to acquire families of skills in some families of environments. >>> For pointers, see: http://en.wikipedia.org/wiki/Developmental_robotics >>> >>> Thus, in a way, while lifelong learning and adaptation is a key object of study, most work is not so much about elaborating new models of learning mechanisms, but about studying what (often changing) properties of the inner and outer environment allow to canalise them. >>> Examples of such mechanisms include body and neural maturation, active learning (selection of action that provide informative and useful data), emotional and motivational systems, cognitive biases for inference, self-organisation or socio-cultural scaffolding, and their interactions. >>> >>> This body of work is unfortunately not yet well connected (except a few exceptions) with the connectionnist community, but I am convinced more mutual exchange would be valuable. >>> >>> For those interested, we have a dedicated: >>> - journal: IEEE TAMD https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=4563672 >>> - conference: IEEE ICDL-Epirob >>> - newsletter: IEEE CIS AMD, latest issue: http://www.cse.msu.edu/amdtc/amdnl/AMDNL-V10-N2.pdf >>> >>> Best regards, >>> Pierre-Yves Oudeyer >>> http://www.pyoudeyer.com >>> https://flowers.inria.fr >>> >>> On 10 Feb 2014, at 17:38, Gary Marcus wrote: >>> >>>> Juergen and others, >>>> >>>> I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lay at the center of why many of the hardest problems in AI and neuroscience continue to lay outside of reach, despite in-principle proofs about computational universality. >>>> >>>> John?s basic points, which I have also made before (e.g. in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts) are two >>>> >>>> a. 
It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding. Socher et al?s impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. >>>> >>>> b. it is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device. >>>> >>>> Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can?t be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like. Wolpert?s No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon) but been no match for their symbolic competitors in other domains (chess) and worked only as an adjunct rather than an central ingredient in still others (parsing, question-answering a la Watson, etc); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all. >>>> >>>> My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the endproduct of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation. We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field?s prospects. >>>> >>>> Best, >>>> Gary Marcus >>>> >>>> Professor of Psychology >>>> New York University >>>> Visiting Cognitive Scientist >>>> Allen Institute for Brain Science >>>> Allen Institute for Artiificial Intelligence >>>> co-edited book coming late 2014: >>>> The Future of the Brain: Essays By The World?s Leading Neuroscientists >>>> http://garymarcus.com/ >>>> >>>> On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber wrote: >>>> >>>>> John, >>>>> >>>>> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. 
What it learns can later be collapsed into a single RNN [3]. >>>>> >>>>> Juergen >>>>> >>>>> http://www.idsia.ch/~juergen/whatsnew.html >>>>> >>>>> >>>>> >>>>> On Feb 7, 2014, at 12:54 AM, Juyang Weng wrote: >>>>> >>>>>> Juergen: >>>>>> >>>>>> You wrote: A stack of recurrent NN. But it is a wrong architecture as far as the brain is concerned. >>>>>> >>>>>> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first >>>>>> learning network that used the deep Learning idea for learning from clutter scenes (Cresceptron ICCV 1992 and IJCV 1997), >>>>>> I gave up this static deep learning idea later after we considered the Principle 1: Development. >>>>>> >>>>>> The deep learning architecture is wrong for the brain. It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1. The brain is not a cascade of recurrent NN. >>>>>> >>>>>> I quote from Antonio Damasio "Decartes' Error": p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamas and basal ganglia, and via small nulei such as those in the brain stem." >>>>>> >>>>>> Of course, the cerebral pathways themselves are not a stack of recurrent NN either. >>>>>> >>>>>> There are many fundamental reasons for that. I give only one here base on our DN brain model: Looking at a human, the brain must dynamically attend the tip of the nose, the entire nose, the face, or the entire human body on the fly. For example, when the network attend the nose, the entire human body becomes the background! Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns in a clean background. This is still an overworked pattern recognition problem, not a vision problem. >>>>>> >>>>>> -John >>>>>> >>>>>> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote: >>>>>>> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN. >>>>>>> >>>>>>> A popluar Deep Learning NN is the Deep Belief Network (2006) [1,2]. A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning. >>>>>>> >>>>>>> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning. >>>>>>> >>>>>>> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4]. >>>>>>> >>>>>>> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9]. >>>>>>> >>>>>>> >>>>>>> References: >>>>>>> >>>>>>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 
504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf >>>>>>> >>>>>>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks >>>>>>> >>>>>>> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.) ftp://ftp.idsia.ch/pub/juergen/chunker.pdf Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html >>>>>>> >>>>>>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html >>>>>>> >>>>>>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995. ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html >>>>>>> >>>>>>> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007. ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf >>>>>>> >>>>>>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009. http://www.idsia.ch/~juergen/nips2009.pdf >>>>>>> >>>>>>> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition. http://www.idsia.ch/~juergen/handwriting.html >>>>>>> >>>>>>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013. http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf >>>>>>> >>>>>>> >>>>>>> >>>>>>> Juergen Schmidhuber >>>>>>> http://www.idsia.ch/~juergen/whatsnew.html >>>>>> >>>>>> -- >>>>>> -- >>>>>> Juyang (John) Weng, Professor >>>>>> Department of Computer Science and Engineering >>>>>> MSU Cognitive Science Program and MSU Neuroscience Program >>>>>> 428 S Shaw Ln Rm 3115 >>>>>> Michigan State University >>>>>> East Lansing, MI 48824 USA >>>>>> Tel: 517-353-4388 >>>>>> Fax: 517-432-1061 >>>>>> Email: weng at cse.msu.edu >>>>>> URL: http://www.cse.msu.edu/~weng/ >>>>>> ---------------------------------------------- >>>>>> >>>>> >>>>> >>>> >>> >> > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From troy.d.kelley6.civ at mail.mil Wed Feb 12 09:24:07 2014 From: troy.d.kelley6.civ at mail.mil (Kelley, Troy D CIV (US)) Date: Wed, 12 Feb 2014 14:24:07 +0000 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory (UNCLASSIFIED) Message-ID: Classification: UNCLASSIFIED Caveats: NONE Or taken seriously the nativist arguments of Chomsky, Pinker, and Spelke and tried to build a robot that is innately endowed with concepts like "person", "object", "set", and "place"? ===== I am working on this now. We had been using concepts from ConceptNet (from MIT), but I think it might be better to build concepts around a set of conceptual primitives. By the way, there is some great work by Jean Mandler on conceptual primitives. Her book "Foundations of Mind: Origins of Conceptual Thought" is fascinating. I have been actively pursuing the idea of building a mind using primitives. This is the same idea that Irving Biederman uses for object recognition - that objects are built from visual primitives used by the visual system (cubes, cylinders). This idea can be extended to language, both written (letters) and spoken (phonemes). The idea of using primitives can reduce the computational complexity of many problems. As Mandler puts it, many complex cognitive reasoning tasks are really built out of more basic conceptual primitives - especially ones based on movement. For example, "The Germans ousted the French during WWII" is a rather complex statement, but it is thought of as a movement-based concept - the Germans replaced the French in some space. There are many other examples of these movement-based primitives which inform thought - "he is moving up in the world" or "He is down and out" or "He is going down". All of these might be complex ideas but they are represented as movement-based conceptual primitives. Troy Kelley Cognitive Robotics Team Leader Human Research and Engineering Directorate Army Research Laboratory Aberdeen, MD, 21005 V: 410-278-5869 Classification: UNCLASSIFIED Caveats: NONE -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/x-pkcs7-signature Size: 5619 bytes Desc: not available URL: From rloosemore at susaro.com Wed Feb 12 10:17:15 2014 From: rloosemore at susaro.com (Richard Loosemore) Date: Wed, 12 Feb 2014 10:17:15 -0500 Subject: Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory (UNCLASSIFIED) In-Reply-To: References: Message-ID: <52FB907B.2020703@susaro.com> To say nothing of the conceptual primitives in Roger Schank's work.... ("Dynamic memory: A theory of reminding and learning in computers and people" 1982). On 2/12/14, 9:24 AM, Kelley, Troy D CIV (US) wrote: > Classification: UNCLASSIFIED > Caveats: NONE > > Or taken seriously the nativist arguments of Chomsky, Pinker, and Spelke > and tried to build a robot that is innately endowed with concepts like > "person", "object", "set", and "place"? > ===== > I am working on this now. We had been using concepts from ConceptNet (from > MIT), but I think it might be better to built concepts around a set of > conceptual primitives. > > By the way there is some great work by Jean Mandler on conceptual > primitives. Her book the "Foundations of Mind: Origins of Conceptual > Thought" is fascinating. I have been actively pursuing the idea of building > a mind using primitives. This is the same idea that Irving Beiderman uses > for object recognition - that objects are visual primitives used by the > visual system (cubes, cylinders). This idea can be extended to language, > both written (letters) and spoken (phonemes). The idea of using primitives > can reduce the computation complexity of many problems. As Mandler puts it, > many complex cognitive reasoning tasks are really built out of more > conceptual primitives - especially ones based on movement. For example, > "The Germans ousted the French during WWII" is a rather complex statement, > but is thought of as a movement based concept - the Germans replaced the > French in some space. There are many other examples of these movement based > primitives which inform thought - "he is moving up in the world" or "He is > down and out" or "He is going down". All of these might be complex ideas > but they are represented as movement based conceptual primitives. 
> > Troy Kelley > Cognitive Robotics Team Leader > Human Research and Engineering Directorate > Army Research Laboratory > Aberdeen, MD, 21005 > V: 410-278-5869 > > > > Troy Kelley > Cognitive Robotics Team Leader > Human Research and Engineering Directorate > Army Research Laboratory > Aberdeen, MD, 21005 > V: 410-278-5869 > > > Classification: UNCLASSIFIED > Caveats: NONE > > From irodero at cac.rutgers.edu Wed Feb 12 23:58:20 2014 From: irodero at cac.rutgers.edu (Ivan Rodero) Date: Wed, 12 Feb 2014 23:58:20 -0500 Subject: Connectionists: CFP : ExtremeGreen 2014 Workshop, with CCGrid2014 (due Feb 15) In-Reply-To: <6B61A541-C3A6-4B51-9600-A392C72202A6@rutgers.edu> References: <51EC7783-DCAD-4364-B1DC-576C726BAA31@rutgers.edu> <6F339279-23CD-4553-95DF-B1F906F948E3@rutgers.edu> <0957F75F-5AB9-4144-B62D-87D225B34E42@rutgers.edu> <22CF346C-98EC-4D5B-9500-D0B9FE60551A@rutgers.edu> <99D17D7E-34C0-47B7-B641-C756E67D169A@rutgers.edu> <87202061-93AC-4066-89E7-77976097AFAB@rutgers.edu> <79403393-1690-4DCB-855A-1EE231D5ED2B@rutgers.edu> <5D63B8C8-3FD3-4241-9021-CBD7F84DCAA7@rutgers.edu> <6B61A541-C3A6-4B51-9600-A392C72202A6@rutgers.edu> Message-ID: =========================== Call for Papers : ExtremeGreen 2014 Workshop : Extreme Green & Energy Efficiency in Large Scale Distributed Systems =========================== May 26-29, 2014 -- Chicago, IL, USA during CCGrid 2014: the 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing http://avalon.ens-lyon.fr/extremegreen Important dates : ? Papers due on : February 15th, 2014 ? Author Notification : March 1st, 2014 ? Final Papers Due : March 14th, 2014 ? ExtremeGreen Workshop: May 26, 2014 Workshop co-chairs: ? Laurent Lefevre, INRIA, Laboratoire LIP, Ecole Normale Superieure of Lyon, France ? Marcos Dias de Assuncao, IBM Research, Brazil ? Wu-Chun Feng, Virginia Tech, USA ? Anne-C?cile Orgerie, CNRS, France ? Manish Parashar, Rutgers University, USA ? Ivan Rodero, Rutgers University, USA Workshop description : Improving the energy efficiency of large-scale distributed systems (e.g. data centres and Clouds) is a key challenge for both academic and industrial organizations. Although the topic has gained lots of attention over the past years, some of the proposed solutions often seem conservative and not easily applicable to large-scale systems. Their impact at large-scale remains to be proved. Hence, this workshop aims to provide a venue for discussion of ideas that can demonstrate "more than small % solution" to energy efficiency and their applicability to "real world". After the success of ExtremGreen2013 workshop, the ExtremeGreen2014 workshop will focus on scientific and industrial approaches and solutions that could have a large impact in terms of energy savings and energy efficiency. Clean-slate approaches and innovative solutions breaking conventional approaches are welcome. The workshop also welcomes submissions of work-in-progress papers on ideas that can have a large impact on improving the energy efficiency of large-scale distributed systems. The papers must provide preliminary results that demonstrate the originality and possible impact of the proposed solutions. Submissions will be reviewed by an international group of experts in distributed systems and energy efficiency. Topics of interests : ? Green clouds ? Energy efficiency of data centers ? Green Grids ? Green networks for large scale distributed systems ? Energy efficiency of storage solutions ? Energy-aware design and programming ? 
Energy-efficient hardware and software architectures ? Sustainable solutions in large scale distributed systems ? Energy-efficient resource management tools ? Energy-efficient scalable approaches ? Experimental results of Green solutions Papers submission : Submitted papers must be 8 pages long maximum. Authors must submit their articles through the submission system : https://www.easychair.org/conferences/?conf=extremegreen2014 Program Committee (TBC): ? Cosimo Anglano, Universit?? del Piemonte Orientale, Italy ? Silvia Bianchi, IBM Research, Brazil ? George Bosilca, Innovative Computing Laboratory - University of Tennessee, USA ? Pascal Bouvry, University of Luxembourg, Luxembourg ? Georges Da Costa, IRIT/Toulouse III, France ? Bronis De Supinski, Lawrence Livermore National Laboratory, USA ? Jaafar M. H. Elmirghani, University of Leeds, UK ? Jean-Patrick Gelas, Universite de Lyon / INRIA, France ? Yiannis Georgiou, BULL, France ? Olivier Gluck, Universit?? de Lyon, France ? Zhiyi Huang, Univ of Otago, New Zealand ? Thomas Ludwig, University of Hamburg, Germany ? Hiroshi Nakamura, University of Tokyo, Japan ? Marco Netto, IBM Research, Brazil ? Manish Parashar, Rutgers University, USA ? Jean-Marc Pierson, IRIT, France ? Enrique S. Quintana-Orti, Universidad Jaume I, Spain ? Ivan Rodero, Rutgers University / CAC, USA ? Domenico Talia, University of Calabria, Italy ? Jordi Torres, Barcelona Supercomputing Center (BSC) - Technical University of Catalonia (UPC), Spain ? Xavier Vigouroux, BULL, France ? Albert Zomaya, University of Sydney, Australia ============================================================= Ivan Rodero, Ph.D. Rutgers Discovery Informatics Institute (RDI2) NSF Center for Cloud and Autonomic Computing (CAC) Department of Electrical and Computer Engineering Rutgers, The State University of New Jersey Office: CoRE Bldg, Rm 625 94 Brett Road, Piscataway, NJ 08854-8058 Phone: (732) 993-8837 Fax: (732) 445-0593 Email: irodero at rutgers dot edu WWW: http://nsfcac.rutgers.edu/people/irodero ============================================================= -------------- next part -------------- An HTML attachment was scrubbed... URL:
From zk240 at cam.ac.uk Wed Feb 12 16:50:55 2014 From: zk240 at cam.ac.uk (Zoe Kourtzi) Date: Wed, 12 Feb 2014 21:50:55 +0000 Subject: Connectionists: lab manager position, Univ of Cambridge Message-ID: <2CCC366C-3E41-418D-AD3C-34381121FAB0@cam.ac.uk> A lab manager position is available to work with Andrew Welchman and Zoe Kourtzi at the Department of Psychology, University of Cambridge, UK. The lab focuses on understanding the neural basis of human perception and adaptive behaviours using a combination of behavioral, computational and brain imaging techniques (fMRI, TMS, EEG). The team consists of an international and interdisciplinary mix of students and post-docs who use specialist equipment (display devices, eye trackers, brain imaging equipment). The successful candidate will provide hardware (i.e. configuring IT equipment and experimental hardware) and software support (i.e. developing software for stimulus generation and data analysis) as well as some administrative support (e.g. ordering equipment, organizing data bases and data storage, writing reports). The lab manager will be involved in all aspects of the lab life (i.e. conducting research projects in collaboration with lab members, organizing conferences and workshops, attending seminars and lab meetings). A bachelor's (or higher) degree in computer science, engineering, math, physics, neuroscience, psychology, or other related field is required. Strong computer programming skills (especially MATLAB, C++, OpenGL) and organizational skills are required. Research experience in cognitive and or computational neuroscience would be preferable, but not required. For informal inquiries, please send CV and a brief statement of background skills and interests to zk240 at cam.ac.uk Kind regards Zoe Kourtzi, PhD Professor of Experimental Psychology Department of Psychology University of Cambridge Downing Street Cambridge CB2 3EB -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomas.hromadka at gmail.com Wed Feb 12 17:22:33 2014 From: tomas.hromadka at gmail.com (Tomas Hromadka) Date: Wed, 12 Feb 2014 23:22:33 +0100 Subject: Connectionists: [COSYNE2014] Meeting program available online Message-ID: <52FBF429.4020505@gmail.com> ================================================================== Computational and Systems Neuroscience (Cosyne) MAIN MEETING WORKSHOPS Feb 27 - Mar 2, 2014 Mar 3 - Mar 4, 2014 Salt Lake City, Utah Snowbird Ski Resort, Utah http://www.cosyne.org ================================================================== MEETING PROGRAM The main meeting program and workshops program are now available online at www.cosyne.org REGISTRATION AND HOTELS: Online registration is currently open. Hotel booking is currently open.
For more detailed information, please visit www.cosyne.org INVITED SPEAKERS: Anne Churchland (CSHL) Rui Costa (Champalimaud) Joshua Gold (U Pennsylvania) Hopi Hoekstra (Harvard University) Thomas Jessell (Columbia) John Krakauer (Johns Hopkins) Jeffrey Magee (Janelia Farm) Thomas Mrsic-Flogel (Universitat Basel) Yael Niv (Princeton) Elad Schneidman (Weizmann Institute) Doris Tsao (Caltech) Nachum Ulanovsky (Weizmann Institute) THE MEETING: The annual Cosyne meeting provides an inclusive forum for the exchange of empirical and theoretical approaches to problems in systems neuroscience, in order to understand how neural systems function. The MAIN MEETING is single-track. A set of invited talks are selected by the Executive Committee, and additional talks and posters are selected by the Program Committee, based on submitted abstracts. The WORKSHOPS feature in-depth discussion of current topics of interest, in a small group setting. Cosyne topics include but are not limited to: neural coding, natural scene statistics, dendritic computation, neural basis of persistent activity, nonlinear receptive field mapping, representations of time and sequence, reward systems, decision-making, synaptic plasticity, map formation and plasticity, population coding, attention, and computation with spiking networks. WORKSHOP TITLES: Computational Psychiatry (2 days). Organizers: Quentin Huys, Tiago Maia Information Sampling in Behavioral Optimization (2 days). Organizers: Bruno Averbeck, Robert C. Wilson, Matthew R. Nassar Rogue States: Altered Dynamics of Neural Circuit Activity in Brain Disorders. Organizers: Cian O'Donnell, Terrence Sejnowski Scalable Models for High-Dimensional Neural Data. Organizers: Il Memming Park, Evan Archer, Jonathan Pillow Homeostasis and Self-Regulation of Developing Circuits: From Single Neurons to Networks. Organizers: Julijana Gjorgjieva, Matthias Hennig Theories of Mammalian Perception: Open and Closed Loop Modes of Brain-World Interactions. Organizers: Ehud Ahissar, Eldad Assa Noise Correlations in the Cortex: Quantification, Origins, and Functional Significance. Organizers: Jozsef Fiser, Mate Lengyel, Alex Pouget Excitatory and Inhibitory Synaptic Conductances: Functional Roles and Inference Methods. Organizers: Milad Lankarany, Taro Toyoizumi Discovering Structure in Neural Data. Organizers: Eric Jonas, Scott Linderman, Ryan Adams, Konrad Koerding Thalamocortical Network Mechanisms for Cortical Functioning. Organizers: Murray Sherman, W. Martin Usrey Canonical Circuits, Canonical Computations. Organizers: Anita Disney, Krishnan Padmanabhan From the Actome to the Ethome: Systems Neuroscience of Behavioral Ecology. Organizers: Aldo Faisal, Constantin Rothkopf Sequence Generation and Timing Signals in Neural Circuits. Organizers: Kanaka Rajan, Christopher D Harvey, David W Tank Multisensory Computations in the Cortex. 
Organizers: Joseph Makin, Philip Sabes ORGANIZING COMMITTEE: General Chairs: Marlene Cohen (U Pittsburgh) and Peter Latham (UCL) Program Chairs: Michael Long (NYU) and Stephanie Palmer (U Chicago) Workshop Chairs: Robert Froemke (NYU) and Tatyana Sharpee (Salk) Publicity Chair: Eugenia Chiappe (Champalimaud) EXECUTIVE COMMITTEE: Anne Churchland (CSHL) Zachary Mainen (Champalimaud) Alexandre Pouget (U Geneva) Anthony Zador (CSHL) From h.glotin at gmail.com Wed Feb 12 16:34:56 2014 From: h.glotin at gmail.com (Herve Glotin) Date: Wed, 12 Feb 2014 22:34:56 +0100 Subject: Connectionists: [CFP] uLearnBio-ICML 2014: Workshop on Unsupervised Learning from Bioacoustic Big Data Message-ID: [Please forward to whom it may concern] We are pleased to inform you that paper submission is now open for our ICML 2014 workshop uLearnBio: "Unsupervised Learning from Bioacoustic Big Data"organized within ICML - on 21-26 june, 2014 in Beijing, China http://sabiod.univ-tln.fr/ulearnbio/ We kindly invite interested people to submit their papers before the deadline for paper submission (21 march 2014). Two technical challenges are open in the framework of the workshop and we kindly invite interested people to participate. Submission of keynote/results for the technical challenge are possible up to 30 may 2014. We would appreciate if you forward this call to whom it may concern. Best Regards -- Faicel Chamroukhi, Associate Professor Information Sciences and Systems Laboratory (LSIS)- UMR 7296 CNRS Southern University of Toulon-Var - Francehttp://chamroukhi.univ-tln.fr faicel.chamroukhi at univ-tln.fr -- Herve' Glotin, Pr. Institut Univ. de France (IUF) & Univ. Toulon (UTLN) Head of information DYNamics & Integration (DYNI @ UMR CNRS LSIS) http://glotin.univ-tln.fr glotin at univ-tln.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From ecai2014 at guarant.cz Wed Feb 12 06:50:09 2014 From: ecai2014 at guarant.cz (=?utf-8?q?ECAI_2014?=) Date: Wed, 12 Feb 2014 12:50:09 +0100 Subject: Connectionists: =?utf-8?q?ECAI_2014_-_last_call_for_papers?= Message-ID: <20140212115009.613B4174255@gds25d.active24.cz> ECAI'14 Last Call for Papers The Twenty-first European Conference on Artificial Intelligence 18-22 August 2014, Prague, Czech Republic http://www.ecai2014.org The biennial European Conference on Artificial Intelligence (ECAI) is Europe's premier archival venue for presenting scientific results in AI. Organised by the European Coordinating Committee for AI (ECCAI), the ECAI conference provides an opportunity for researchers to present and hear about the very best research in contemporary AI. As well as a full programme of technical papers, ECAI'14 will include the Prestigious Applications of Intelligent Systems conference (PAIS), the Starting AI Researcher Symposium (STAIRS), the International Web Rule Symposium (RuleML) and an extensive programme of workshops, tutorials, and invited speakers. (Separate calls are issued for PAIS, STAIRS, tutorials, and workshops.) ECAI'14 will be held in the beautiful and historic city of Prague, the capital of the Czech Republic. With excellent opportunities for sightseeing and gastronomy, Prague promises to be a wonderful venue for a memorable conference. This call invites the submission of papers and posters for the technical programme of ECAI'14. High-quality original submissions are welcome from all areas of AI; the following list of topics is indicative only. 
Agent-based and Multi-agent Systems Constraints, Satisfiability, and Search Knowledge Representation, Reasoning, and Logic Machine Learning and Data Mining Natural Language Processing Planning and Scheduling Robotics, Sensing, and Vision Uncertainty in AI Web and Knowledge-based Information Systems Multidisciplinary Topics Both long (6-page) and short (2-page) papers can be submitted. Whereas long papers should report on substantial research results, short papers are intended for highly promising but possibly more preliminary work. Short papers will be presented in poster form. Rejected long papers will be considered for the short paper track. Submitted papers must be formatted according to ECAI'14 guidelines and submitted electronically through the ECAI'14 paper submission site. Full instructions including formatting guidelines and electronic templates are available on the ECAI'14 website. Paper submission: 1 March 2014 Author feedback: 14-18 April 2014 Notification of acceptance/rejection: 9 May 2014 Camera-ready copy due: 30 May 2014 The proceedings of ECAI'14 will be published by IOS Press. Best papers go AIJ The authors of the best papers (and runner ups) of ECAI'14 will be invited to submit an extended version of their paper to the Artificial Intelligence Journal. Conference Secretariat GUARANT International Na Pankr?ci 17 140 21 Prague 4 Tel: +420 284 001 444, Fax: +420 284 001 448 E-mail: ecai2014 at guarant.cz Web: www.ecai2014.org This email is not intended to be spam or to go to anyone who wishes not to receive it. If you do not wish to receive this letter and wish to remove your email address from our database please reply to this message with "Unsubscribe" in the subject line. From decision.making.big.data at gmail.com Tue Feb 11 16:23:45 2014 From: decision.making.big.data at gmail.com (Amir-massoud Farahmand Andre Barreto) Date: Tue, 11 Feb 2014 16:23:45 -0500 Subject: Connectionists: [CFP] AAAI-14 Workshop on Sequential Decision-Making with Big Data Message-ID: The AAAI-14 Workshop on Sequential Decision-Making with Big Data held at the AAAI Conference on Artificial Intelligence (AAAI-14), Quebec City, Canada (July 27-28, 2014) (Workshop URL: https://sites.google.com/site/decisionmakingbigdata ) In the 21st century, we live in a world where data is abundant. We would like to use this data to make better decisions in many areas of life, such as industry, health care, business, and government. This opportunity has encouraged many machine learning and data mining researchers to develop tools to benefit from big data. However, the methods developed so far have focused almost exclusively on the task of prediction. As a result, the question of how big data can leverage decision-making has remained largely untouched. This workshop is about decision-making in the era of big data. The main topic will be the complex decision-making problems, in particular the sequential ones, that arise in this context. Examples of these problems are high-dimensional large-scale reinforcement learning and their simplified version such as various types of bandit problems. These problems can be classified into three potentially overlapping categories: 1) Very large number of data-points. Examples: data coming from user clicks on the web and financial data. In this scenario, the most important issue is computational cost. Any algorithm that is super-linear will not be practical. 2) Very high-dimensional input space. Examples are found in robotic and computer vision problems. 
The only possible way to solve these problems is to benefit from their regularities. 3) Partially observable systems. Here the immediate observed variables do not have enough information for accurate decision-making, but one might extract sufficient information by considering the history of observations. If the time series is projected onto a high-dimensional representation, one ends up with problems similar to 2. Topics Some potential topics of interest are: - Reinforcement learning algorithms that deal with one of the aforementioned categories; - Bandit problems with high-dimensional action space - Challenging real-world applications of sequential decision-making problems that can benefit from big data. Example domains include robotics, adaptive treatment strategies for personalized health care, finance, recommendation systems, and advertising. Format The workshop will be a one-day meeting consisting of invited talks, oral and poster presentations from participants, and a final panel-driven discussion. Attendance We expect about 30-50 participants from invited speakers, contributed authors, and interested researchers. Submission We invite researchers from different fields of machine learning (e.g., reinforcement learning, online learning, active learning), optimization, systems (distributed and parallel computing), as well as application-domain experts (from e.g., robotics, recommendation systems, personalized medicine, etc.) to submit an extended abstract (maximum 4 pages in AAAI format) of their recent work to decision.making.big.data at gmail.com. Accepted papers will be presented as posters or contributed oral presentations. Important Dates Paper Submission: April 10, 2014 Notification of Acceptance: May 1, 2014 Camera-Ready Papers: May 15, 2014 Date of Workshop: July 27 or 28, 2014 Organizing Committee - Amir-massoud Farahmand (McGill University) - Andr? M.S. Barreto (Brazilian National Laboratory for Scientific Computing (LNCC)) - Mohammad Ghavamzadeh (Adobe Research and INRIA Lille - Team SequeL) - Joelle Pineau (McGill University) - Doina Precup (McGill University) -------------- next part -------------- An HTML attachment was scrubbed... URL: From h.glotin at gmail.com Wed Feb 12 16:24:31 2014 From: h.glotin at gmail.com (Herve Glotin) Date: Wed, 12 Feb 2014 22:24:31 +0100 Subject: Connectionists: [ 2 years postdoc ] on drone learning, available at UMR CNRS LSIS - Toulon (France). Message-ID: 2 years postdoctoral position are now available at UMR CNRS LSIS - University of Toulon (France). The candidate will be involved in SYCIE project : a drone co-planification and adaptive perception / interaction with PROLEXIA, IFREMER & DCNS as detailed at: http://www.polemermediterranee.com/Securite-et-surete-maritime/Surveillance-et-interventions-maritimes/SYCIE The SYCIE project objective is to develop a planning, simulation and supervision system for new generation, multi-mission autonomous vehicles. They will ensure mission effectiveness in harsh environmental conditions whilst minimizing the impact on the environment, executing complex high-level tasks, collaborating with other vehicles, planning / replanning their mission. 
We are seeking for a highly motivated postdoctoral candidate in one of the related topics: * Machine Learning / optimization * Semi-supervised learning / ranking * Multi-objective planification * Perception learning * Drone application Application: Please send (in a unique SYCIE_yourname.pdf) to = sycie.utln at gmail your detailed CV + biblio + your motivation letter + URLs to your two most relevant publications + URL to your Phd. You can apply for only one year if you prefer. Programming skill (Python, matlab, C...) is desirable. Place: The position is at UMR CNRS LSIS in Provence - Toulon campus, few minutes far from Prolexia, DCNS or IFREMER, yielding to an excellent academic-industrial collaborative framework. The DYNI team is a research environment which has been providing significant contributions in multimodal perception, tracking optimization and machine learning. The campus situation is detailed at http://www.univ-tln.fr. The salary is 2075 euros / month after taxes. -- Herve' Glotin, Pr. Institut Univ. de France (IUF) & Univ. Toulon (UTLN) Head of information DYNamics & Integration (DYNI @ UMR CNRS LSIS) http://glotin.univ-tln.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From y.demiris at imperial.ac.uk Fri Feb 14 10:03:00 2014 From: y.demiris at imperial.ac.uk (Demiris, Yiannis) Date: Fri, 14 Feb 2014 15:03:00 +0000 Subject: Connectionists: Two researcher positions, Personal Robotics Laboratory, Imperial College London Message-ID: Dear colleagues, two researcher positions are available at the Personal Robotics laboratory of Imperial College London; one is at the postdoctoral level and one at the doctoral level. The successful candidates will work under the supervision of Dr Yiannis Demiris (www.demiris.info) within the context of a new EU FP7 STREP project WYSIWYD (What you Say is What you Did), which started in January 2014 and will run for three years. The project aims at developing humanoid robot systems capable of open-ended learning of sensorimotor and cognitive skills during multimodal interaction with humans. The positions offer an exciting interdisciplinary topic and an excellent working environment in one of the world?s top research universities; the positions are available immediately, and will run until the end of the project in December 2016. Candidates with strong background in one or more of: computer vision, human action understanding, humanoid robot control, and machine learning are encouraged to apply. Strong computing and mathematical skills are a must, along with a strong desire to see your algorithms working on real robots. Details about the positions, key responsibilities, ideal person specifications, and the application procedure that needs to be followed can be found at the lab?s webpages: www.imperial.ac.uk/personalrobotics (click on Join us) Deadline for receipt of applications is the 27th of February 2014. With best wishes, Yiannis --- Dr Yiannis Demiris Reader (Associate Professor) in Personal Robotics, Department of Electrical and Electronic Engineering, Rm 1014, Imperial College London, South Kensington Campus, Exhibition Road, London, SW7 2BT, UK Tel: +44-(0)2075946300, Fax: +44-(0)2075946274 WWW: http://www.iis.ee.ic.ac.uk/yiannis -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mehdi.khamassi at isir.upmc.fr Fri Feb 14 09:34:22 2014 From: mehdi.khamassi at isir.upmc.fr (Mehdi Khamassi) Date: Fri, 14 Feb 2014 15:34:22 +0100 Subject: Connectionists: Fourth International Symposium on Biology of Decision-Making, 26-28 May @ Paris, France Message-ID: [Please accept our apologies if you get multiple copies of this message] Dear colleagues, It is our great pleasure to invite you to the Fourth Symposium on Biology of Decision Making which will take place in Paris, France, on May, 26-28th 2014. The deadline for poster submission is on April, 15th. The deadline for registration is on May, 1st. Registration fees (80 euros) include lunches, coffee breaks and access to social events. Please circulate widely and encourage your students and postdocs to attend. ------------------------------------------------------------------------------------------------ FOURTH SYMPOSIUM ON BIOLOGY OF DECISION MAKING (SBDM 2014) May 26-28, 2014, Paris, France Institut du Cerveau et de la Moelle, H?pital La Piti? Salp?tri?re, Paris, France. & Ecole Normale Sup?rieure, Paris, France. & Universit? Pierre et Marie Curie, Paris, France. http://sbdm2014.isir.upmc.fr ------------------------------------------------------------------------------------------------ PRESENTATION: The Fourth Symposium on Biology of Decision Making will take place on May 26-28, 2014 at the Institut du Cerveau et de la Moelle, Paris, France, with a satellite day at Ecole Normale Sup?rieure, Paris, France. The objective of this three day symposium is to gather people from different research fields with different approaches (economics, ethology, psychiatry, neural and computational approaches) to decision-making. The symposium will be a single-track, will last for 3 days and will include 6 sessions: (#1) Who is making decisions? Cortex or basal ganglia; (#2) New computational approaches to decision-making; (#3) A new player in decision-making: the hippocampus; (#4) Neuromodulation of decision-making; (#5) Maladaptive decisions in clinical conditions; (#6) Who is more rational? Decision making across species. CONFIRMED SPEAKERS: Bernard Balleine (Sydney University, Australia) Karim Benchenane (CNRS-ESPCI, France) Matthew Botvinick (Princeton University, USA) Anne Collins (Brown University, USA) Roshan Cools (Radboud Univ. Nijmegen, The Netherlands) Molly Crockett (UCL, UK) Jean Daunizeau (INSERM-ICM, France) Nathaniel Daw (NYU, USA) Kenji Doya (OIST, Japan) Philippe Faure (CNRS-UPMC, France) Lesley Fellows (McGill University, Canada) Algo Genovesio (Universita La Sapienza, Italy) Tobias Kalenscher (Universit?t D?sseldorf, Germany) Etienne Koechlin (CNRS-ENS, France) James Marshall (Sheffield University, UK) Genela Morris (Haifa University, Israel) Camilio Padoa-Schioppa (Washington Univ. 
St Louis, USA) Alex Pouget (Rochester University, USA) Pete Redgrave (Sheffield University, UK) Jonathan Roiser (UCL, UK) Masamichi Sakagami (Tamagawa University, Japan) Daphna Shohamy (Columbia University, USA) Klaas Stephan (ETH Zurich, Switzerland) IMPORTANT DATES: April 15, 2014 Deadline for Poster Submission May 1, 2014 Deadline for Registration May 26-28, 2014 Symposium Venue ORGANIZING COMMITTEE: Thomas Boraud (CNRS, Bordeaux, France) Sacha Bourgeois-Gironde (La Sorbonne, Paris, France) Kenji Doya (OIST, Okinawa, Japan) Mehdi Khamassi (CNRS - UPMC, Paris, France) Etienne Koechlin (CNRS - ENS, Paris, France) Mathias Pessiglione (ICM - INSERM, Paris, France) CONTACT INFORMATION : Website, registration, poster submission and detailed program: http://sbdm2014.isir.upmc.fr Contact: sbdm2014 [ at ] isir.upmc.fr -- Mehdi Khamassi, PhD Researcher (CNRS) Institut des Syst?mes Intelligents et de Robotique (UMR7222) CNRS - Universit? Pierre et Marie Curie Pyramide, Tour 55 - Bo?te courrier 173 4 place Jussieu, 75252 Paris Cedex 05, France tel: + 33 1 44 27 28 85 fax: +33 1 44 27 51 45 cell: +33 6 50 76 44 92 http://people.isir.upmc.fr/khamassi -- Mehdi Khamassi, PhD Researcher (CNRS) Institut des Syst?mes Intelligents et de Robotique (UMR7222) CNRS - Universit? Pierre et Marie Curie Pyramide, Tour 55 - Bo?te courrier 173 4 place Jussieu, 75252 Paris Cedex 05, France tel: + 33 1 44 27 28 85 fax: +33 1 44 27 51 45 cell: +33 6 50 76 44 92 http://people.isir.upmc.fr/khamassi From wermter at informatik.uni-hamburg.de Fri Feb 14 06:31:06 2014 From: wermter at informatik.uni-hamburg.de (Stefan Wermter) Date: Fri, 14 Feb 2014 12:31:06 +0100 Subject: Connectionists: [meetings] Intl Conf. on Artificial Neural Networks (ICANN 2014) - Deadline extended Message-ID: <52FDFE7A.1090208@informatik.uni-hamburg.de> Due to a substantial number of requests the deadline is extended to 27 February 2014. No further extension will be granted. =================================================================== ICANN 2014: 24th Annual Conference on Artificial Neural Networks 15 - 19 September 2014, University of Hamburg, Germany http://icann2014.org/ =================================================================== The International Conference on Artificial Neural Networks (ICANN) is the annual flagship conference of the European Neural Network Society (ENNS). In 2014 the University of Hamburg will organize the 24th ICANN Conference from 15th to 19th September 2014 in Hamburg, Germany. KEYNOTE SPEAKERS: Christopher M. Bishop (Microsoft Research, Cambridge, UK) Yann LeCun (New York University, NY, USA) Kevin Gurney (University of Sheffield, Sheffield, UK) Barbara Hammer (Bielefeld University, Bielefeld, Germany) Jun Tani (KAIST, Daejeon, Republic of Korea) Paul Verschure (Universitat Pompeu Fabra, Barcelona, Spain) ORGANIZATION: General Chair: Stefan Wermter (Hamburg, Germany) Program co-Chairs Alessandro E.P. Villa (Lausanne, Switzerland, ENNS President) Wlodzislaw Duch (Torun, Poland & Singapore, ENNS Past-President) Petia Koprinkova-Hristova (Sofia, Bulgaria) G?nther Palm (Ulm, Germany) Cornelius Weber (Hamburg, Germany) Timo Honkela (Helsinki, Finland) Local Organizing Committee Chairs: Sven Magg, Johannes Bauer, Jorge Chacon, Stefan Heinrich, Doreen Jirak, Katja Koesters, Erik Strahl VENUE: Hamburg is the second-largest city in Germany, home to over 1.8 million people. Situated at the river Elbe, the port of Hamburg is the second-largest port in Europe. 
The University of Hamburg is the largest institution for research and education in the north of Germany. The venue of the conference is the ESA building of the University of Hamburg, situated at Edmund-Siemers-Allee near the city centre and easily reachable from Dammtor Railway Station. Hamburg Airport can be reached easily via public transport. For the accomodation we arranged guaranteed rates for a couple of hotels in Hamburg for ICANN 2014. CONFERENCE TOPICS: ICANN 2014 will feature the main tracks Brain Inspired Computing and Machine Learning research, with strong cross-disciplinary interactions and applications. All research fields dealing with Neural Networks will be present at the conference. A non-exhaustive list of topics includes: Brain Inspired Computing: Cognitive models, Computational Neuroscience, Self-organization, Reinforcement Learning, Neural Control and Planning, Hybrid Neural-Symbolic Architectures, Neural Dynamics, Recurrent Networks, Deep Learning. Machine Learning: Neural Network Theory, Neural Network Models, Graphical Models, Bayesian Networks, Kernel Methods, Generative Models, Information Theoretic Learning, Reinforcement Learning, Relational Learning, Dynamical Models. Neural Applications for: Intelligent Robotics, Neurorobotics, Language Processing, Image Processing, Sensor Fusion, Pattern Recognition, Data Mining, Neural Agents, Brain-Computer Interaction, Neural Hardware, Evolutionary Neural Networks. PAPERS: Papers of maximum 8 pages length will be refereed to international standards by at least three referees. Accepted papers of contributing authors will be published in Springer-Verlag Lecture Notes in Compute Science (LNCS) series. Submission of papers will be online. More details are available on the conference web site. DEMONSTRATIONS: ICANN 2014 will host demonstrations to showcase research and applications of neural networks. Demonstrations are self-contained, i.e. independent of any presented talk or poster. For a demonstration proposal, we request a 1-page description of your demonstration and its features. Later, you will communicate which resources (space / duration / projector / internet / etc.) you require. Decisions about demonstrations will be made within two weeks after submission deadline. A full conference registration is required for the demonstration. We invite you to submit proposals for Demonstrations to: ICANN2014 at informatik.uni-hamburg.de TRAVEL AWARDS: As in previous years, the European Neural Network Society (ENNS) will offer at least five student travel awards of 400 Euro each for students presenting papers.In addition, the selected students will be able to register to the conference for free and will become ENNS members for the next year (2015). The deadline for sending the Travel Grant application (that includes a Letter of Interest to the PC chairs, Studentship Proof and detailed CV of the candidate) is the 14th of April, 2014. The award will be sent to the student by 28th April and paid during the conference. More details can be found on the website. DEADLINES: Submission of full papers: * 27 February 2014 * Notification of acceptance: 7 April 2014 Submission of Demonstration proposals: 21 April 2014 Camera-ready paper and registration: 5 May 2014 Conference dates: 15-19 September 2014 CONFERENCE WEBSITE: http://www.icann2014.org *********************************************** Professor Dr. Stefan Wermter Chair of Knowledge Technology Department of Computer Science University of Hamburg Vogt Koelln Str. 
30 22527 Hamburg, Germany http://www.informatik.uni-hamburg.de/~wermter/ http://www.informatik.uni-hamburg.de/WTM/ *********************************************** From kerstin at nld.ds.mpg.de Fri Feb 14 03:30:08 2014 From: kerstin at nld.ds.mpg.de (Kerstin Mosch) Date: Fri, 14 Feb 2014 09:30:08 +0100 Subject: Connectionists: Reminder: Bernstein Conference 2014 - Call for Workshop proposals Message-ID: <52FDD410.5020308@nld.ds.mpg.de> *The Bernstein Network invites proposals for the Workshops directly preceding the main Bernstein Conference 2014 in G?ttingen.* ************************************************************** Call for Workshop proposals: Workshops: September 2 & 3, 2014 (Main Bernstein Conference: September 3 - 5, 2014) Deadline of proposal submission: March 1, 2014 Notification of acceptance: March 15, 2014 ************************************************************** The Bernstein Conference has become the largest annual Computational Neuroscience conference in Europe and now regularly attracts more than 500 international participants. Since 2013, the Bernstein conference includes a series of pre-conference workshops. They provide an informal forum to discuss timely research questions and challenges in Computational Neuroscience and related fields. Workshops addressing controversial issues, open problems, and comparisons of competing approaches are encouraged. SCHEDULE: Sept 2, 2014, 13:30 - 17:30 & Sept 3, 9:00 - 12:30. You may apply for a half-day workshop, but preference will be given to full-day workshops (hence, which bridge across both days). Workshop costs: The Bernstein Conference does not provide financial support, but offers 5 free registrations for the main conference per workshop (assigned by organizers). For further information about the conference, please visit the website . DETAILS FOR WORKSHOP PROPOSALS: Submission form can be downloadedhere . Deadline for submission of WS-proposals: March 1, 2014 We are looking forward to seeing you in G?ttingen in September! WORKSHOP PROGRAM COMMITTEE Matthias Bethge (Bernstein Center T?bingen) Upinder Bhalla (NCBS, Bangalore) Carlos Brody (Princeton University) Gustavo Deco (University Pompeu Fabra, Barcelona) Alain Destexhe (CNRS, Gif-sur-Yvette) Gaute Einevoll(Norwegian University of Life Sciences, Aas) Wulfram Gerstner (EPFL, Lausanne) Andreas Herz (Bernstein Center Munich) Christian Machens (Champalimaud Neuroscience Programme, Lisbon) Eero Simoncelli (NYU, New York) Sara Solla (Northwestern University, Evanston) Misha Tsodyks (Weizmann Institute and Columbia University) Mark van Rossum (University of Edinburgh) Fred Wolf (Bernstein Center G?ttingen) Florentin W?rg?tter (General Conference Chair, Bernstein Focus Neurotechnology, G?ttingen) CONFERENCE ASSISTANTS Contact: Kerstin Mosch, Sabine Huhnold at contact at bccn-goettingen.de -- Dr. Kerstin Mosch Bernstein Center for Computational Neuroscience (BCCN) Goettingen Bernstein Focus Neurotechnology (BFNT) Goettingen Max Planck Institute for Dynamics and Self-Organization Am Fassberg 17 D-37077 Goettingen Germany T: +49 (0) 551 5176 - 405 E: kerstin at nld.ds.mpg.de I: www.bccn-goettingen.de I: www.bfnt-goettingen.de -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pierre-yves.oudeyer at inria.fr Fri Feb 14 11:27:35 2014 From: pierre-yves.oudeyer at inria.fr (Pierre-Yves Oudeyer) Date: Fri, 14 Feb 2014 17:27:35 +0100 Subject: Connectionists: Call for PhD application in the Flowers Lab at Inria/Ensta ParisTech, France: Machine Learning for Personalization of Online Tutoring Systems Message-ID: ============================================================================================================ Call for PhD application in the Flowers Lab at Inria/Ensta ParisTech, France: Machine Learning for Personalization of Online Tutoring Systems ============================================================================================================ The Flowers Lab (https://flowers.inria.fr) is searching for highly talented candidates for application to a PhD within the KidLearn Project (https://flowers.inria.fr/research/kidlearn/). This project aims at elaborating novel machine learning approaches for the personalization of online tutoring systems, e.g. in MOOCs, and based on recently developped models of active learning, curiosity, and algorithmic teaching (see here). Work will involve elaboration of algorithmic approaches based on these models, as well as real world experimentations in collaborations with pedagogy experts and industry leading companies in the domain of online educational software. Thus, we are searching candidates with very strong skills in statistical inference and machine learning, with interest in practical application and transfer to industry. This PhD will potentially take place through a direct collaboration and funding with/by one of the leading companies in educational technologies (through a CIFRE scheme). Please apply here, after contacting Manuel Lopes (manuel.lopes at inria.fr) and Pierre-Yves Oudeyer (pierre-yves.oudeyer at inria.fr). More details: ========================================================= KidLearn: Machine Learning for Personalization of Online Tutoring Systems Position type: PhD Student Functional area: Bordeaux (Talence) Research theme: Perception, cognition, interaction Project: FLOWERS Scientific advisors: manuel.lopes at inria.fr and pierre-yves.oudeyer at inria.fr HR Contact: laure.pottier_schupp at inria.fr ========================================================== About Inria and the job: http://www.inria.fr/en Established in 1967, Inria is the only French public research body fully dedicated to computational sciences. Combining computer sciences with mathematics, Inria?s 3,500 researchers strive to invent the digital technologies of the future. Educated at leading international universities, they creatively integrate basic research with applied research and dedicate themselves to solving real problems, collaborating with the main players in public and private research in France and abroad and transferring the fruits of their work to innovative companies. The researchers at Inria published over 4,450 articles in 2012. They are behind over 250 active patents and 112 start-ups. The 180 project teams are distributed in eight research centers located throughout France. Job offer description Algorithmic teaching (AT) formally studies the optimal teaching problem, that is, finding the smallest sequence of examples that uniquely identifies a target concept to a learner. AT can be seen as a complementary problem from the active learning but here it is the teacher that is choosing its examples in an intelligent way. 
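As a concrete illustration of the "smallest sequence of examples that uniquely identifies a target concept" notion just described, here is a minimal brute-force sketch for the simplest possible concept class, 1-D threshold concepts over the points 0..9 (concept t labels x positive iff x >= t). This toy example and all names in it are illustrative assumptions of this write-up, not part of the KidLearn algorithms; it only shows what an optimal teaching set means in practice.

# Minimal sketch of optimal teaching for 1-D threshold concepts (illustrative only).
from itertools import combinations

DOMAIN = range(10)
CONCEPTS = range(11)          # threshold t in 0..10

def label(t, x):
    return x >= t

def consistent(t, examples):
    return all(label(t, x) == y for x, y in examples)

def optimal_teaching_set(target):
    """Smallest labelled example set consistent with the target concept only."""
    pool = [(x, label(target, x)) for x in DOMAIN]
    for k in range(1, len(pool) + 1):
        for subset in combinations(pool, k):
            survivors = [t for t in CONCEPTS if consistent(t, subset)]
            if survivors == [target]:
                return list(subset)

print(optimal_teaching_set(4))    # the two points straddling the threshold

For threshold t = 4 the returned set is [(3, False), (4, True)], the two neighbouring points that straddle the boundary, which is far smaller than the full labelled domain.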
Algorithmic teaching gives insights into what constitutes informative examples for a learning agent [3,5]. The main approach used until the moment is to ask a pedagogical experts to provide a set of Knowledge Units (KU) and the respective teaching approaches (e.g. lectures, exercises or videos). The goal of the tutoring system is to select the KU that will improve the knowledge of the student. One limitation of the Knowledge Tracing model is that the system is agnostic to the specific problem being addressed. KU are considered as discrete entities and at most a pre-requisite structure is defined. In this work we want to explicitly model the structural properties of the problem (e.g. mathematical, geometrical or chemistry) and infer the knowledge from the observed actions. These approaches take advantage of the knowledge on how to correctly solve the problem and by measuring how the student solved, we can estimate what wrong assumptions were made. Such knowledge would allow creating dedicated demonstrations or questions that either repeat the instruction on the topics, or provide new exercises that clarify the differences of the different concepts. Skills and profile The first phase of this work will be to do studies on the different approaches for teaching and on pedagogical approaches to teaching mathematics. A second phase, in collaboration with teachers, will be to identify a set of problems of higher impact and define a set of suitable knowledge units. New machine learning algorithms need to be developed, beyond previous approaches such as [3,4,5], that are able to estimate the knowledge level of the students and that optimize the pedagogical value of each exercise. Special interest will be given to algorithms that take in to account explicitly the structural knowledge about the problem at hand. In this way the system will not only be able to select exercises from a pre-defined database but will also be able to synthetize new exercises and problems. The final phase will be to deploy a large-scale study in collaboration with pedagogical experts and an industry leading company in the domain, to validate and study the impact of the optimization algorithms in identifying the knowledge level of students, in the teaching objectives and in general improvement in the interest and motivation to engage in the teaching process. Excellent knowledge on machine learning. Good programming capabilities, especially on the design of interfaces Interest for multidisciplinary studies and experience in performing user studies. Benefits Participation for transportation and restauration Duration: 3 years Additional information References : ? 1. J.E. Beck. Difficulties in inferring student knowledge from observations (and why you should care). In Educational Data Mining: Supplementary Proceedings of the 13th International Conference of Artificial Intelligence in Education, 2007. ? 2. J.I. Lee and E. Brunskill. The impact on individualizing student models on necessary practice opportunities. In International Conference on Educational Data Mining (EDM), 2012. ? 3. A. Rafferty, E. Brunskill, T. Griffiths, and P. Shafto. Faster teaching by pomdp planning. In Artificial Intelligence in Education, 2011. ? 4. Manuel Lopes, Benjamin Clement, Didier Roy, Pierre-Yves Oudeyer. Multi-Armed Bandits for Intelligent Tutoring Systems, arXiv:1310.3174 [cs.AI], 2013. ? 5. Maya Cakmak and Manuel Lopes. Algorithmic and Human Teaching of Sequential Decision Tasks. 
AAAI Conference on Artificial Intelligence (AAAI), Toronto, Canada, 2012. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hava at cs.umass.edu Fri Feb 14 17:06:49 2014 From: hava at cs.umass.edu (Hava Siegelmann) Date: Fri, 14 Feb 2014 17:06:49 -0500 Subject: Connectionists: request for removal Message-ID: <52FE9379.2090001@cs.umass.edu> Hi coordinator: I request to remove the two postings I put for a postdoc on the Connectionists list. Can you please do it ASAP? Thanks Hava -- Hava T. Siegelmann, Ph.D. Professor Director, BINDS Lab (Biologically Inspired Neural Dynamical Systems) Dept. of Computer Science Program of Neuroscience and Behavior University of Massachusetts Amherst Amherst, MA, 01003 Phone: 413-545-2744 Fax: 413-545-1249 LAB WEBSITE: http://binds.cs.umass.edu/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.mingus at colorado.edu Fri Feb 14 18:20:39 2014 From: brian.mingus at colorado.edu (Brian J Mingus) Date: Fri, 14 Feb 2014 16:20:39 -0700 Subject: Connectionists: request for removal In-Reply-To: <52FE9379.2090001@cs.umass.edu> References: <52FE9379.2090001@cs.umass.edu> Message-ID: Hi Hava, This is an e-mail mailing list. Once sent, a message can never be unsent. However, you could have replied to your original posting in order to indicate that the announcement was cancelled. FYI, I have been giving people whose posts are very relevant to the list permission to post without moderation, in part to facilitate conversation and in part to reduce the moderation workload. However, if the SNR becomes too low I might have to back off from this strategy. Thanks all for helping to keep a sane SNR :) Also, please e-mail *connectionists-owner* to contact the moderator, not *connectionists*, which sends an e-mail to about 5,000 people. Brian Mingus http://grey.colorado.edu/mingus On Fri, Feb 14, 2014 at 3:06 PM, Hava Siegelmann wrote: > Hi coordinator: > > I request to remove the two posting I put for a postdoc on the > connectionist list. Can you please do it ASAP ? > > Thanks > > Hava > > > -- > Hava T. Siegelmann, Ph.D. > Professor > Director, BINDS Lab (Biologically Inspired Neural Dynamical Systems) > Dept. of Computer Science > Program of Neuroscience and Behavior > University of Massachusetts Amherst > Amherst, MA, 01003 > Phone: 413-545-2744 Fax: 413-545-1249 > LAB WEBSITE: http://binds.cs.umass.edu/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akira-i at brest-state-tech-univ.org Fri Feb 14 16:59:24 2014 From: akira-i at brest-state-tech-univ.org (Akira Imada) Date: Sat, 15 Feb 2014 00:59:24 +0300 Subject: Connectionists: ICNNAI-2014 - final reminder - 10 days to submission due, Message-ID: Dear Connectionists, The 8th International Conference on Neural Network and Artificial Intelligence (ICNNAI-2014) will be held on 3-6 June 2014 in Brest, Belarus. (See http://icnnai.bstu.by/2014/cfp.html). The submission deadline is 25 February; 10 days to go. Submission should be via Springer's OCS: http://senldogo0039.springer-sbm.com/ocs/conference/settings/ICNNAI2014/DATA . It will not be possible to extend the deadline, so we encourage you to submit a day or two early, just in case something unpredictable happens. Belarus is still a land of fairly well-kept secrets. In this lovely wonderland, with luck, you'll find something that could not be discovered elsewhere. Akira Imada, Professor email: akira-i at brest-state-tech-univ.org Dept.
Intelligent Information Technology Brest State Technical University Moskowskaja 267, Brest, 224017 Belarus -------------- next part -------------- An HTML attachment was scrubbed... URL: From smart at neuralcorrelate.com Fri Feb 14 18:37:00 2014 From: smart at neuralcorrelate.com (Susana Martinez-Conde) Date: Fri, 14 Feb 2014 16:37:00 -0700 Subject: Connectionists: Illusion submission EXTENSION: 10th anniversary edition of the Best Illusion of the Year Contest! In-Reply-To: <03a501cf29dd$568e95d0$03abc170$@neuralcorrelate.com> References: <03a501cf29dd$568e95d0$03abc170$@neuralcorrelate.com> Message-ID: <03c101cf29dd$a8e2a290$faa7e7b0$@neuralcorrelate.com> ***DUE TO POPULAR DEMAND*** -- The deadline for the 10th annual Best Illusion of the Year Contest has been extended. The FINAL (no exceptions) submission date is now ***March 1st***! http://illusionoftheyear.com *** We are happy to announce the 10th anniversary edition of the world's Best Illusion of the Year Contest!!*** Submissions are now welcome! The 2014 contest will be held in St. Petersburg, Florida, at the TradeWinds Island Resorts (headquarters of the Vision Sciences Society conference), on May 18th. Past contests have been highly successful in drawing public attention to perceptual research, with over ***FIVE MILLION*** website hits from viewers all over the world, as well as hundreds of international media stories. The First, Second and Third Prize winners from the 2013 contest were Jun Ono, Akiyasu Tomoeda and Kokichi Sugihara (Meiji University and CREST, Japan), Arthur Shapiro and Alex Rose-Henig (American University, USA), and Arash Afraz and Ken Nakayama (Massachusetts Institute of Technology and Harvard University, USA). To see the illusions, photo galleries and other highlights from the 2013 and previous contests, go to http://illusionoftheyear.com. Eligible submissions to compete in the 2014 contest are novel perceptual or cognitive illusions (unpublished, or published no earlier than 2013) of all sensory modalities (visual, auditory, etc.) in standard image, movie or html formats. Exciting new variants of classic or known illusions are admissible. An international panel of impartial judges will rate the submissions and narrow them to the TOP TEN. Then, at the Contest Gala in St. Petersburg, the TOP TEN illusionists will present their contributions and the attendees of the event (that means you!) will vote to pick the TOP THREE WINNERS! The 2014 Contest Gala will be hosted by world-renowned magician Mac King. Mac King is the premiere comedy magician in the world today, with his own family-friendly show, "The Mac King Comedy Magic Show," at the Harrah's Las Vegas. He was named "Magician of the Year" by the Magic Castle in Hollywood in 2003, and is a frequent guest and host of television specials. Illusions submitted to previous editions of the contest can be re-submitted to the 2014 contest, so long as they meet the above requirements and were not among the TOP THREE winners in previous years. Submissions will be held in strict confidence by the panel of judges and the authors/creators will retain full copyright. The TOP TEN illusions will be posted on the illusion contest's website *after* the Contest Gala. Illusions not chosen among the TOP TEN will not be disclosed. Participating in to the Best Illusion of the Year Contest does not preclude the illusion authors/creators from also submitting their work for publication elsewhere. Submissions can be made to Dr. 
Susana Martinez-Conde (Illusion Contest Executive Producer, Neural Correlate Society) via email (smart at neuralcorrelate.com) until March 1st, 2014. Illusion submissions should come with a (no more than) one-page description of the illusion and its theoretical underpinnings (if known). Women and underrepresented groups are especially encouraged to participate. The Neural Correlate Society reserves the right to disqualify illusion entries that may be offensive to some or all members of the public, or inappropriate for viewing by audiences of all ages. Illusions will be rated according to: . Significance to our understanding of the mind and brain . Simplicity of the description . Sheer beauty . Counterintuitive quality . Spectacularity Visit the illusion contest website for further information and to see last year's illusions: http://illusionoftheyear.com. Submit your ideas now and take home this prestigious award! On behalf of the Executive Board of the Neural Correlate Society: Jose-Manuel Alonso, Stephen Macknik, Susana Martinez-Conde, Luis Martinez, Xoana Troncoso, Peter Tse ---------------------------------------------------------------- Susana Martinez-Conde, PhD Executive Producer, Best Illusion of the Year Contest President, Neural Correlate Society Columnist, Scientific American Mind Author, Sleights of Mind Director, Laboratory of Visual Neuroscience Division of Neurobiology Barrow Neurological Institute 350 W. Thomas Rd Phoenix AZ 85013, USA Phone: +1 (602) 406-3484 Fax: +1 (602) 406-4172 Email: smart at neuralcorrelate.com http://smc.neuralcorrelate.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From grlmc at urv.cat Sat Feb 15 13:51:16 2014 From: grlmc at urv.cat (GRLMC) Date: Sat, 15 Feb 2014 19:51:16 +0100 Subject: Connectionists: LATA 2014: call for participation Message-ID: *To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject line* ************************************************************************* 8th INTERNATIONAL CONFERENCE ON LANGUAGE AND AUTOMATA THEORY AND APPLICATIONS LATA 2014 Madrid, Spain March 10-14, 2014 http://grammars.grlmc.com/lata2014/ ********************************************************************* PROGRAM Monday, March 10: 9:30 - 10:30 Registration 10:30 - 10:40 Opening 10:40 - 11:30 Helmut Seidl: Interprocedural Information Flow Analysis of XML Processors - Invited Lecture 11:30 - 11:45 Break 11:45 - 13:00 Alberto Dennunzio, Enrico Formenti, Luca Manzoni: Extremal Combinatorics of Reaction Systems Fernando Arroyo, Sandra G?mez Canaval, Victor Mitrana, ?tefan Popescu: Networks of Polarized Evolutionary Processors are Computationally Complete Liang Ding, Abdul Samad, Xingran Xue, Xiuzhen Huang, Russell L. Malmberg, Liming Cai: Stochastic k-Tree Grammar and its Application in Biomolecular Structure Modeling 13:00 - 14:30 Lunch 14:30 - 15:45 B?atrice B?rard, Olivier Carton: Channel Synthesis Revisited Daniel Pr??a: Weight-reducing Hennie Machines and Their Descriptional Complexity Manfred Droste, Stefan D?ck: Weighted Automata and Logics for Infinite Nested Words 15:45 - 16:00 Break 16:00 - 17:15 Friedrich Otto, Franti?ek Mr?z: Extended Two-Way Ordered Restarting Automata for Picture Languages Sang-Ki Ko, Yo-Sub Han, Kai Salomaa: Top-Down Tree Edit-Distance of Regular Tree Languages Bertram Felgenhauer, Ren? Thiemann: Reachability Analysis with State-Compatible Automata 17:15 - 17:45 Coffee Break 17:45 - 18:35 Leslie A. 
Goldberg: The Complexity of Approximate Counting - Invited Lecture Tuesday, March 11: 9:00 - 9:50 Sanjeev Khanna: Matchings, Random Walks, and Sampling - Invited Lecture 9:50 - 10:00 Break 10:00 - 11:15 Niko Beerenwinkel, Stefano Beretta, Paola Bonizzoni, Riccardo Dondi, Yuri Pirola: Covering Pairs in Directed Acyclic Graphs Eike Best, Raymond Devillers: Characterisation of the State Spaces of Live and Bounded Marked Graph Petri Nets Mar?a Martos-Salgado, Fernando Rosa-Velardo: Expressiveness of Dynamic Networks of Timed Petri Nets 11:15 - 11:45 Coffee Break 11:45 - 13:00 Bireswar Das, Patrick Scharpfenecker, Jacobo Tor?n: Succinct Encodings of Graph Isomorphism Matthias Gall?, Mat?as Tealdi: On Context-Diverse Repeats and their Incremental Computation Marcella Anselmo, Dora Giammarresi, Maria Madonia: Picture Codes with Finite Deciphering Delay 13:00 - 14:30 Lunch 14:30 - 15:45 Paul Tarau: Computing with Catalan Families Joan Boyar, Shahin Kamali, Kim S. Larsen, Alejandro L?pez-Ortiz: On the List Update Problem with Advice Rob Gysel: Minimal Triangulation Algorithms for Perfect Phylogeny Problems 15:45 - 16:00 Break 16:00 - 17:15 Gr?goire Laurence, Aur?lien Lemay, Joachim Niehren, S?awek Staworko, Marc Tommasi: Learning Sequential Tree-to-word Transducers Dariusz Kaloci?ski: On Computability and Learnability of the Pumping Lemma Function Slimane Bellaouar, Hadda Cherroun, Djelloul Ziadi: Efficient List-based Computation of the String Subsequence Kernel Wednesday, March 12: 9:00 - 9:50 Oscar H. Ibarra: On the Parikh Membership Problem for FAs, PDAs, and CMs - Invited Lecture 9:50 - 10:00 Break 10:00 - 11:15 Marius Konitzer, Hans Ulrich Simon: DFA with a Bounded Activity Level Shenggen Zheng, Jozef Gruska, Daowen Qiu: On the State Complexity of Semi-Quantum Finite Automata Vojt?ch Vorel: Complexity of a Problem Concerning Reset Words for Eulerian Binary Automata 11:15 - 11:45 Coffee Break 11:45 - 13:00 Pascal Caron, Marianne Flouret, Ludovic Mignot: (k,l)-Unambiguity and Quasi-Deterministic Structures: an Alternative for the Determinization Zuzana Bedn?rov?, Viliam Geffert: Two Double-Exponential Gaps for Automata with a Limited Pushdown Parosh Aziz Abdulla, Mohamed Faouzi Atig, Jari Stenman: Computing Optimal Reachability Costs in Priced Dense-Timed Pushdown Automata 13:00 - 14:30 Lunch 15:45 - 17:45 Sightseeing in Madrid by Bus Thursday, March 13: 9:00 - 9:50 Javier Esparza: A Brief History of Strahler Numbers (I) - Invited Tutorial 9:50 - 10:00 Break 10:00 - 11:15 Zeinab Mazadi, Ziyuan Gao, Sandra Zilles: Distinguishing Pattern Languages with Membership Examples Francine Blanchet-Sadri, Andrew Lohr, Sean Simmons, Brent Woodhouse: Computing Depths of Patterns Alexandre Blondin Mass?, S?bastien Gaboury, Sylvain Hall?, Micha?l Larouche: Solving Equations on Words with Morphisms and Antimorphisms 11:15 - 11:45 Coffee Break 11:45 - 13:00 Anton Cern?: Solutions to the Multi-Dimensional Equal Powers Problem Constructed by Composition of Rectangular Morphisms Enrico Formenti, Markus Holzer, Martin Kutrib, Julien Provillard: ?-rational Languages: High Complexity Classes vs. 
Borel Hierarchy Thomas Weidner: Probabilistic ?-Regular Expressions 13:00 - 14:30 Lunch 14:30 - 15:45 Haizhou Li, Fran?ois Pinet, Farouk Toumani: Probabilistic Simulation for Probabilistic Data-aware Business Processes Daniel Bundala, Jakub Z?vodn?: Optimal Sorting Networks Etienne Dubourg, David Janin: Algebraic Tools for the Overlapping Tile Product 15:45 - 16:00 Break 16:00 - 17:15 Luca Breveglieri, Stefano Crespi Reghizzi, Angelo Morzenti: Shift-Reduce Parsers for Transition Networks Gadi Aleksandrowicz, Andrei Asinowski, Gill Barequet, Ronnie Barequet: Formulae for Polyominoes on Twisted Cylinders Alexandre Blondin Mass?, Amadou Makhtar Tall, Hugo Tremblay: On the Arithmetics of Discrete Figures Friday, March 14: 9:00 - 9:50 Javier Esparza: A Brief History of Strahler Numbers (II) - Invited Tutorial 9:50 - 10:00 Break 10:00 - 11:15 Hanna Klaudel, Maciej Koutny, Zhenhua Duan: Interval Temporal Logic Semantics of Box Algebra Pierre Ganty, Ahmed Rezine: Ordered Counter-Abstraction: Refinable Subword Relations for Parameterized Verification Matthew Gwynne, Oliver Kullmann: On SAT Representations of XOR Constraints 11:15 - 11:45 Coffee Break 11:45 - 13:00 Bernd Finkbeiner, Hazem Torfah: Counting Models of Linear-time Temporal Logic Claudia Carapelle, Shiguang Feng, Oliver Fern?ndez Gil, Karin Quaas: Satisfiability for MTL and TPTL over Non-Monotonic Data Words Joachim Klein, David M?ller, Christel Baier, Sascha Kl?ppelholz: Are Good-for-games Automata Good for Probabilistic Model Checking? 13:00 Closing From irodero at cac.rutgers.edu Sun Feb 16 09:46:33 2014 From: irodero at cac.rutgers.edu (Ivan Rodero) Date: Sun, 16 Feb 2014 09:46:33 -0500 Subject: Connectionists: Bigsystem 2014 at HPDC - Call for Papers (Papers due Feb 28 -- extended) In-Reply-To: <7B19F956-BFBE-4CB4-9224-106DC80F83F0@rutgers.edu> References: <51EC7783-DCAD-4364-B1DC-576C726BAA31@rutgers.edu> <6F339279-23CD-4553-95DF-B1F906F948E3@rutgers.edu> <0957F75F-5AB9-4144-B62D-87D225B34E42@rutgers.edu> <22CF346C-98EC-4D5B-9500-D0B9FE60551A@rutgers.edu> <99D17D7E-34C0-47B7-B641-C756E67D169A@rutgers.edu> <87202061-93AC-4066-89E7-77976097AFAB@rutgers.edu> <79403393-1690-4DCB-855A-1EE231D5ED2B@rutgers.edu> <5D63B8C8-3FD3-4241-9021-CBD7F84DCAA7@rutgers.edu> <6B61A541-C3A6-4B51-9600-A392C72202A6@rutgers.edu> <7B19F956-BFBE-4CB4-9224-106DC80F83F0@rutgers.edu> Message-ID: <83E0FA91-68C8-49CF-95CF-B70E2519CD98@rutgers.edu> =================== BigSystem 2014 =================== International Workshop on Software-Defined Ecosystems (BigSystem 2014) http://2014.bigsystem.org/ (co-located with ACM HPDC 2014, Vancouver, Canada, June 23-27, 2014) With the emerging technology breakthrough in computing, networking, storage, mobility, and analytics, the boundary of systems is undergoing fundamental change and is expected to logically disappear. It is the time to rethink system design and management without boundaries towards software-defined ecosystems, the Big System. The basic principles of software-defined mechanisms and policies have witnessed great success in clouds and networks. We are expecting broader, deeper, and greater evolution and confluence towards holistic software-defined ecosystems. BigSystem 2014 provides an open forum for researchers, practitioners, and system builders to exchange ideas, discuss, and shape roadmaps towards such big systems in the era of big data. 
Topics of Interest =================== * Architecture of software-defined ecosystems * Management of software-defined ecosystems * Software-defined principles * Software-defined computing * Software-defined networking * Software-defined storage * Software-defined security * Software-defined services * Software-defined mobile computing/cloud * Software-defined cyber-physical systems * Interaction and confluence of software-defined modalities * Virtualization * Hybrid systems, cross-layer design and management * Security, privacy, reliability, trustworthiness * Grand challenges in big systems * Big data infrastructure and engineering * HPC, big data, and computational science & engineering applications * Autonomic computing * Cloud computing and services * Emerging technologies Paper Submission Guidelines =================== Authors are invited to submit technical papers of at most 8 pages in PDF format, including figures and references. Short position papers (4 pages) are also encouraged. Papers should be formatted in the ACM Proceedings Style (double column text using single spaced 10 point size on 8.5 x 11 inch page, http://www.acm.org/sigs/publications/proceedings-templates) and submitted via EasyChair submission site. No changes to the margins, spacing, or font sizes as specified by the style file are allowed. Accepted papers will appear in the workshop proceedings, and will be incorporated into the ACM Digital Library. A few papers will be accepted as posters. Selected distinguished papers, after further revisions, will be considered for a special issue in a high quality journal. EasyChair submission site, https://www.easychair.org/conferences/?conf=bigsystem2014 Important Dates =================== * Papers Due Feb. 28th, 2014 * Notification Mar. 31st, 2014 * Camera-Ready April 15th, 2014 =================== Organization =================== General Chairs =================== Geoffrey Fox, Indiana University Manish Parashar, Rutgers University Program Chairs =================== Chung-Sheng Li, IBM Research Xiaolin (Andy) Li, University of Florida Steering Committee =================== Rajkumar Buyya, University of Melbourne Jeff Chase, Duke University Jose Fortes, University of Florida Geoffrey Fox, Indiana University Hai Jin, Huazhong University of Science and Technology Chung-Sheng Li, IBM Research Xiaolin (Andy) Li, University of Florida Manish Parashar, Rutgers University Panel Chairs =================== David Meyer, Brocade Kuang-Ching Wang, Clemson University Publicity Chairs =================== Yong Chen, Texas Tech University Ivan Rodero, Rutgers University Web Chairs =================== Ze Yu, University of Florida Technical Program Committee =================== Gagan Agrawal, Ohio State University Henri E. Bal, Vrije University Ilya Baldin, RENCI/UNC Chapel Hill Viraj Bhat, Yahoo Roger Barga, Microsoft Research Micah Beck, University of Tennessee Ali Butt, Virginia Tech Jiannong Cao, Hong Kong Polytechnic U. 
Claris Castillo, RENCI Umit Catalyurek, Ohio State University Yong Chen, Texas Tech University Peter Dinda, Northwestern University Zhihui Du, Tsinghua University Renato Figueiredo, University of Florida Yashar Ganjali, University of Toronto William Gropp, UIUC Guofei Gu, Texas A&M University John Lange, University of Pittsburgh Junda Liu, Google David Meyer, Brocade Rajesh Narayanan, Dell Research Ioan Raicu, IIT Lavanya Ramakrishnan, LBNL Ivan Rodero, Rutgers University Ivan Seskar, Rutgers University Jian Tang, Syracuse University Tai Won Um, ETRI Edward Walker, Whitworth University Jun Wang, University of Central Florida Kuang-Ching Wang, Clemson University Jon Weissman, University of Minnesota Dongyan Xu, Purdue University Vinod Yegneswaran, SRI Jianfeng Zhan, Chinese Academy of Sciences Han Zhao, Qualcomm Research ============================================================= Ivan Rodero, Ph.D. Rutgers Discovery Informatics Institute (RDI2) NSF Center for Cloud and Autonomic Computing (CAC) Department of Electrical and Computer Engineering Rutgers, The State University of New Jersey Office: CoRE Bldg, Rm 625 94 Brett Road, Piscataway, NJ 08854-8058 Phone: (732) 993-8837 Fax: (732) 445-0593 Email: irodero at rutgers dot edu WWW: http://nsfcac.rutgers.edu/people/irodero ============================================================= From ecai2014 at guarant.cz Mon Feb 17 03:30:03 2014 From: ecai2014 at guarant.cz (=?utf-8?q?RuleML_2014?=) Date: Mon, 17 Feb 2014 09:30:03 +0100 Subject: Connectionists: =?utf-8?q?CFP_-_Learning_=28Business=29_Rules_fro?= =?utf-8?q?m_Data_=40RuleML/ECAI_2014?= Message-ID: <20140217083003.3D8A3174298@gds25d.active24.cz> ** apologies for cross-posting ** ==== 1st Call for Papers ==== http://2014.ruleml.org/learning-business-rules-from-data CFP: 1st RuleML special track on Learning (Business) Rules from Data @ RuleML/ECAI 2014 Venue: Prague, Czech Republic Abstract submission: 31 March 2014 Paper submission: 8 April 2014 Accepted papers will be published in the Springer Lecture Notes in Computer Science (LNCS) series with the RuleML main track proceedings. RuleML is colocated with ECAI'14 - the European Conference on Artificial Intelligence (http://www.ecai2014.org/). ================= TOPIC Papers submitted to the track could address (among others) extraction of business rules from sets of fuzzy, uncertain and possibly conflicting rules learned from data and bridging the gap between rules as "correlations" in the data and rules that can be used in business rule management systems. * Learning Action, Association, Decision and Constraint Rules from Data * Extracting business rules from decision trees and rule sets induced from data * Non-monotonic, uncertain and defeasible reasoning to resolve conflicting rules * Enhancing rule learning processes with domain knowledge * Fuzzy and probabilistic extensions to rule markup languages (SBVR, RuleML, PMML) * Rule interest/quality measures suitable for business rule learning * Learning disjunctive and negative rules in business rules context ================= PROGRAM COMMITTEE Each submitted paper will be reviewed by three PC members from the industry and academia. ==ORGANIZERS Tom?? Kliegr (University of Economics, Prague, Czech Republic) Davide Sottara (Arizona State University, USA) ===INDUSTRY Alex Guazzelli (Zementis) Jerome Boyer (IBM) Jacob Feldman (Open Rules) Mark Proctor (Drools/Red Hat) Petr Masa (freelance consultant) ===ACADEMIA Martin Atzmueller (University of Kassel, Germany) Bruno Cremilleux (Universit? 
de Caen Basse-Normandie, France) Agnieszka Dardzinska (Bialystok University of Technology, Poland) Evelina Lamma (Universit? degli Studi di Ferrara, Italy) Florian Lemmerich (University of W?rzburg, Germany) Johannes Fuernkranz (TU Darmstadt, Germany) Martin Holena (Academy of Sciences, Czech Republic) Zbigniew W. Ras (University of North Carolina, Charlotte, USA) Fabrizio Riguzzi (Universit? degli Studi di Ferrara, Italy) Milan ?im?nek (University of Economics Prague, Czech Republic) ================= SUBMISSION INFORMATION Papers are submitted via EasyChair: http://www.easychair.org/conferences/?conf=ruleml2014 Long papers: 15 pages Short papers: 8 pages From outreach at cnsorg.org Sat Feb 15 10:53:23 2014 From: outreach at cnsorg.org (Outreach) Date: Sat, 15 Feb 2014 08:53:23 -0700 Subject: Connectionists: CNS-2014: abstract submission deadline extended to February 23 Message-ID: Organization for Computational Neurosciences (OCNS) 23rd Annual Meeting Qu?bec City, Canada July 26-31, 2014 Deadline for abstract submission and author registration has now been extended. Please note that one of the authors has to register as sponsoring author for the main meeting before abstract submission is possible. In case the abstract is notaccepted for presentation, the registration fee will be refunded. NEW AND FINAL Deadlines: 23 Feb 2013 Abstract submission closes (11:00 pm Pacific time USA) Please visit: https://ocns.memberclicks.net/cns-2014-abstract-submission The main meeting (July 27 - 29) will be preceded by a day of tutorials (July 26) and followed by two days of workshops (July 30 -31). Invited Keynote Speakers: Chris Eliasmith, University of Waterloo, Canada Christof Koch, Allen Institute for Brain Science,USA Henry Markram, EPFL Lausanne, Switzerland Frances Skinner, TWRI/UHN, University of Toronto, Canada For up-to-date conference information, please visit http://www.cnsorg.org/cns-2014-quebec-city ------------------------------ ---------- OCNS is the international member-based society for computational neuroscientists. Become a member to be eligible for travel awards and more. Visit our website for more information: http://www.cnsorg.org From rsalakhu at cs.toronto.edu Sun Feb 16 12:35:49 2014 From: rsalakhu at cs.toronto.edu (Ruslan Salakhutdinov) Date: Sun, 16 Feb 2014 12:35:49 -0500 (EST) Subject: Connectionists: 2nd CFP: ICML 2014: Call for Tutorial Proposals Message-ID: ******************************************** ICML 2014: Call for Tutorial Proposals ******************************************** Important dates * Tutorial proposal deadline Feb 21, 2014 * Acceptance notification Mar 8, 2014 * Tutorials June 21, 2014 * Contact: rsalakhu at cs.toronto.edu The ICML 2014 Organizing Committee invites proposals for tutorials to be held at the 31th International Conference on Machine Learning, on June 21, 2014 in Beijing, China. We seek proposals for two-hour tutorials on core techniques and areas of knowledge of broad interest within the machine learning community, including established or emerging research topics within the field itself, as well as from related fields or application areas that are clearly relevant to machine learning. The ideal tutorial should attract a wide audience, and should be broad enough to provide a gentle introduction to the chosen research area, but should also cover the most important contributions in depth. Tutorial proceedings will not be provided in hardcopy, but will instead be made available by the presenters on their website prior to the conference. 
How to Propose a Tutorial: Proposals should provide sufficient information to evaluate the quality and importance of the topic, the likely quality of the presentation materials, and the speakers' teaching ability. The written proposal should be 2-3 pages long, and should use the following boldface text for section headings: * Topic overview: What will the tutorial be about? Why is this an interesting and significant subject for the machine learning community at large? * Target audience: From which areas do you expect potential participants to come? What prior knowledge, if any, do you expect from the audience? What will the participants learn? How many participants do you expect? * Content details: Provide a detailed outline of the topics to be presented, including estimates for the time that will be devoted to each subject. Aim for a total length of approximately two hours. If possible, provide samples of past tutorial slides or teaching materials. In case of multiple presenters, specify how you will distribute the work. * Format: How will you present the material? Will there be multimedia parts of the presentation? Do you plan software demonstrations? Specify any extraordinary technical equipment that you would need. * Organizers' and presenters' expertise: Please include the name, email address, and webpage of all presenters. In addition, outline the presenters' background and include a list of publications in the tutorial area. Tutorial proposals should be submitted via email in PDF format to rsalakhu at cs.toronto.edu. Soon after submission, proposers should expect to receive a verification of receipt. Important dates * Tutorial proposal deadline Feb 21, 2014 * Acceptance notification Mar 8, 2014 * Tutorials June 21, 2014 * Contact: rsalakhu at cs.toronto.edu Russ Salakhutdinov, tutorial chair ICML 2014 From torsello at dsi.unive.it Sun Feb 16 16:42:39 2014 From: torsello at dsi.unive.it (Andrea Torsello) Date: Sun, 16 Feb 2014 22:42:39 +0100 Subject: Connectionists: Second Call for Participation International Summer School on Complex Networks Message-ID: <1902477.A2dONCXeTj@giatrus.torsello.net> Call for Participation - Application now open ISSCN International Summer School on Complex Networks Bertinoro, Italy July 14-18 2014 http://www.dsi.unive.it/isscn/ Complex networks are an emerging and powerful computational tool in the physical, biological and social sciences. They aim is to capture the structural properties of data represented as graphs or networks, providing ways of characterising both the static and dynamic facets of network structure. The topic draws on ideas from graph theory, statistical physics and dynamic systems theory. Applications include communication networks, epidemiology, transportation, social networks and ecology. The aim in the Summer School is to provide an overview of both the foundations and state of the art in the field. Lectures will be presented by intellectual leaders in the field, and there will be an opportunity to interact closely with them during the school. The school will be held in the Bertinoro Residential Centre of the University of Bologna, which is situated in beautiful hills between Ravenna and Bologna. The summer school is aimed at PhD students, and younger postdocs or RA?s working in the complex networks area. It will run for 5 days with lectures in the mornings and afternoons, and the school fee includes residential accommodation and meals at the residential centre. 
List of Lecturers
Michele Benzi, Emory University, USA
Luciano Costa, University of São Paulo, Brazil
Ernesto Estrada, University of Strathclyde, UK
Jesus Gomez Gardenes, University of Zaragoza, Spain
Ferenc Jordan, The Microsoft Research - COSBI, Italy
Yamir Moreno, University of Zaragoza, Spain
Mirco Musolesi, University of Birmingham, UK
Simone Severini, University College London, UK

Organizers
Andrea Torsello, Università Ca' Foscari Venezia, Italy
Edwin Hancock, University of York, UK
Richard Wilson, University of York, UK
Ernesto Estrada, University of Strathclyde, UK

Registration Fees
Registration will include accommodation for 5 nights (13/7 to 17/7) and meals. Accommodation can be in single or double rooms (subject to availability).
              Single room   Double room
PhD student   EUR 700       EUR 650
Postdoc       EUR 800       EUR 750
Other         EUR 900       EUR 850
We hope to be able to provide scholarships to PhD students to reduce the registration fees. Student applicants will be automatically considered for scholarships. Application is now open through the school's website. The deadline for application is April 15th. After that time we will let all applicants know whether their application was successful, whether they are entitled to a scholarship and, in that case, the amount of financial aid we can offer. Applicants must send an expression of interest along with their curriculum vitae. PhD students must also send a letter from their supervisor in support of the request to be considered for a scholarship. Contact: Andrea Torsello -- Andrea Torsello PhD Dipartimento di Informatica, Universita' Ca' Foscari Venezia via Torino 155, 30172 Venezia Mestre, Italy Tel: +39 0412348468 Fax: +39 0412348419 http://www.dsi.unive.it/~atorsell From Colin.Wise at uts.edu.au Mon Feb 17 19:47:33 2014 From: Colin.Wise at uts.edu.au (Colin Wise) Date: Tue, 18 Feb 2014 11:47:33 +1100 Subject: Connectionists: AAi Short Course - 'Marketing Analytics - an Introduction' - Thursday 6 March 2014 Message-ID: <8112393AA53A9B4A9BDDA6421F26C68A016E67D7B1C1@MAILBOXCLUSTER.adsroot.uts.edu.au> Dear Colleague, AAi Short Course - 'Marketing Analytics - an Introduction' - Thursday 6 March 2014 https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1573 AAi's short course 'Marketing Analytics - an Introduction' may be of interest to you or others in your organisation or network. Marketing Analytics is the analysis of data that leads to improved marketing performance. Virtually all firms use marketing analytics of some sort, which can be as simple as spreadsheet operations or as sophisticated as advanced statistical software. With almost limitless sources and volumes of consumer data available, it is more imperative now than ever to use marketing analytics for a competitive edge. This course will utilize cutting-edge and traditional methods to analyse consumer data.
This program is particularly useful for all those involved or interested in marketing analytics for their organisation: * Industry Practitioners * Marketing Firms * Students, Researchers, Academics Program topics * 9:00am to 10:15am - Surveys and Regression Analysis * 10:30am to 12:00pm - Target Marketing * 1:00pm to 2:15pm - Consumer Choice Modeling * 2:30pm to 4:00pm - Choice Modeling and Logistic regression Course outcomes Upon completion of this course students will: * Understand how to analyse survey data and ensure analytical results are representative of the market * Understand consumer choice and how specific segments respond to changes in the marketing mix * Determine a customer's willingness to pay for a product or service attribute and how to rank the importance of attributes * Please register here https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1573 An important foundation short course in the AAI series of advanced data analytic short courses - please view this short course and others here http://analytics.uts.edu.au/shortcourses/schedule.html We are happy to discuss at your convenience. Thank you and regards. Colin Wise Operations Manager Faculty of Engineering & IT The Advanced Analytics Institute [cid:image001.png at 01CF23F2.E8BD4690] University of Technology, Sydney Blackfriars Campus Building 2, Level 1 Tel. +61 2 9514 9267 M. 0448 916 589 Email: Colin.Wise at uts.edu.au AAI: www.analytics.uts.edu.au/ Reminder - AAI Short Course - Data Mining - an Introduction - Thursday 27 February 2014 https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1550&EventID=1281 AAI Education and Training Short Courses Survey - you may be interested in completing our AAI Survey at http://analytics.uts.edu.au/shortcourses/survey.html AAI Email Policy - should you wish to not receive this periodic communication on Data Analytics Learning please reply to our email (to sender) with UNSUBSCRIBE in the Subject. We will delete you from our database. Thank you for your past and future support. UTS CRICOS Provider Code: 00099F DISCLAIMER: This email message and any accompanying attachments may contain confidential information. If you are not the intended recipient, do not read, use, disseminate, distribute or copy this message or attachments. If you have received this message in error, please notify the sender immediately and delete this message. Any views expressed in this message are those of the individual sender, except where the sender expressly, and with authority, states them to be the views of the University of Technology Sydney. Before opening any attachments, please check them for viruses and defects. Think. Green. Do. Please consider the environment before printing this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10489 bytes Desc: image001.png URL: From hava at cs.umass.edu Tue Feb 18 09:26:08 2014 From: hava at cs.umass.edu (Hava Siegelmann) Date: Tue, 18 Feb 2014 09:26:08 -0500 Subject: Connectionists: Fwd: There were 2 postings on the CMU site... In-Reply-To: <13DA6A6D-4FDC-4BF6-952C-FE94E5FE6196@cs.umass.edu> References: <13DA6A6D-4FDC-4BF6-952C-FE94E5FE6196@cs.umass.edu> Message-ID: <53036D80.7080002@cs.umass.edu> Due to recent University regulations regarding EOD I request that these two messages will be removed from the site at this stage. 
Please let me know when this is done. Thank you very much Hava -------- Original Message -------- Subject: There were 2 postings on the CMU site... Date: Tue, 18 Feb 2014 09:01:36 -0500 From: Michele Roberts To: Hava Siegelmann As of today, two postings appear on this listserv. I?ve attached a snapshot of the Jan. 12 posting, The second one in the list was posted on Jan. 16, I believe. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CMU_listserv-ads.pdf Type: application/pdf Size: 242761 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Tue Feb 18 10:28:22 2014 From: bower at uthscsa.edu (james bower) Date: Tue, 18 Feb 2014 09:28:22 -0600 Subject: Connectionists: Fwd: There were 2 postings on the CMU site... In-Reply-To: <53036D80.7080002@cs.umass.edu> References: <13DA6A6D-4FDC-4BF6-952C-FE94E5FE6196@cs.umass.edu> <53036D80.7080002@cs.umass.edu> Message-ID: ?EOD? ? ?Explosive Ordinance Disposal?? I thought that might apply to some of my recent postings, but can?t see how it applies here. :-) However, this is probably not the appropriate venue to discuss how the run away growth in expensive and over reaching middle management is hindering both the research and academic atmosphere in Universities world wide. Nope, probably not. Jim Bower On Feb 18, 2014, at 8:26 AM, Hava Siegelmann wrote: > Due to recent University regulations regarding EOD I request that these two messages will be removed from the site at this stage. Please let me know when this is done. > > Thank you very much > > Hava > > > > > -------- Original Message -------- > Subject: There were 2 postings on the CMU site... > Date: Tue, 18 Feb 2014 09:01:36 -0500 > From: Michele Roberts > To: Hava Siegelmann > > As of today, two postings appear on this listserv. I?ve attached a snapshot of the Jan. 12 posting, The second one in the list was posted on Jan. 16, I believe. > > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.gleeson at ucl.ac.uk Tue Feb 18 06:07:48 2014 From: p.gleeson at ucl.ac.uk (Padraig Gleeson) Date: Tue, 18 Feb 2014 11:07:48 +0000 Subject: Connectionists: Open Source Brain workshop 2014: Building and sharing models of the cortex Message-ID: <53033F04.203@ucl.ac.uk> (Apologies for cross postings) Announcing the 2014 Open Source Brain Workshop: Building and sharing models of the cortex. May 14-16th, Alghero, Sardinia. There are an increasing number of cortical cell and network models being developed and made publicly available, from networks of leaky integrate and fire neurons based on cortical connectivity and firing properties, to multicompartmental, conductance based cell models. Many of these are being reused, reimplemented and extended to address new scientific questions by labs around the world. In this meeting, we will look at the range of cortico-thalamic models out there which are of widespread interest and work towards getting these into public, open source repositories, in standardised formats such as NeuroML and PyNN . We will investigate the steps needed to ensure these models are well tested, annotated and ready for use as research tools by the attendees and the wider community. There will be presentations from experimentalists who are producing the data to constrain these models, as well as simulator and application developers who will create the infrastructure to build, simulate, analyse and share the models. *Confirmed speakers* Sharon Crook, Arizona State University, USA Markus Diesmann, Research Center J?lich, Germany Dirk Feldmeyer, Aachen University, Germany Rick Gerkin, Arizona State University, USA Alon Korngreen, Bar-Ilan University, Israel Dave Lester, SpiNNaker project, University of Manchester, UK Angus Silver, University College London Tim Vogels, University of Oxford, UK Daniel W?jcik, Nencki Institute, Poland Full details of the workshop can be found here: http://www.opensourcebrain.org/projects/osb/wiki/OSB2014. The meeting is free to attend, but registration is required (deadline April 15th). The Open Source Brain initiative aims to encourage collaborative, open source model development in computational neuroscience and is primarily supported by the Wellcome Trust. Regards, The OSB 2014 organising committee ----------------------------------------------------- Padraig Gleeson Room 321, Anatomy Building Department of Neuroscience, Physiology& Pharmacology University College London Gower Street London WC1E 6BT United Kingdom +44 207 679 3214 p.gleeson at ucl.ac.uk ----------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahu at cs.stir.ac.uk Wed Feb 19 13:00:36 2014 From: ahu at cs.stir.ac.uk (Dr Amir Hussain) Date: Wed, 19 Feb 2014 18:00:36 +0000 Subject: Connectionists: (Final Abstract Submission Deadline Extended to 25 Feb 2014) International Workshop on Autonomous Cognitive Robotics, Stirling, Scotland, UK, 27-28 March 2014 Message-ID: Dear friends **with advance apologies for any cross-postings** ***By popular demand, the FINAL deadline for submitting abstracts (300 words max.) has now been extended to: Tue, 25 Feb 2014 (Decisions due: 28 Feb 2014)*** The Call for Abstracts below may be of interest - we would very much appreciate if you could also kindly help circulate the Call to any interested colleagues and friends. 
Details of the Workshop and distinguished invited Speakers can also be found here: http://www.cs.stir.ac.uk/~ahu/AUTCOGROB2014 Prospective contributors are required to submit an abstract of no more than 300 words (by the extended deadline: 25 Feb 2014) to: eya at cs.stir.ac.uk PhD/research students will benefit from a 50% registration fee discount. We look forward to seeing you soon in Stirling! Kindest regards Prof Amir Hussain, University of Stirling, UK & Prof Kevin Gurney, University of Sheffield, UK (Workshop Organisers & Co-Chairs) Important Dates: Abstract submissions FINAL deadline (extended): 25 Feb 2014; Decisions Due: 28 Feb 2014 Workshop dates: Thurs 27- Fri 28 March 2014 ------- Call for Abstracts/Participation International IEEE/EPSRC Workshop on Autonomous Cognitive Robotics University of Stirling, Stirling, Scotland, UK, 27-28 March 2014 http://www.cs.stir.ac.uk/~ahu/AUTCOGROB2014 Autonomous Cognitive Robotics is an emerging discipline, fusing ideas across several traditional domains and seeks to further our understanding in two problem domains. First, by instantiating brain models into an embodied form, it supplies a strong test of those models, thereby furthering out understanding of neurobiology and cognitive psychology. Second, by harnessing the insights we have about cognition, it is a potentially fruitful source of engineering solutions to a range of problems in robotics, and in particular, in areas such as intelligent autonomous vehicles and assistive technology. It therefore promises next generation solutions in the design of urban autonomous vehicles, planetary rovers, and artificial social (e)companions. The aim of this 2-day workshop is to bring together leading international and UK scientists, engineers and industry representatives, alongside European research network and EU funding unit leaders, to present state-of-the-art in autonomous cognitive systems and robotics research, and discuss future R&D challenges and opportunities. We welcome contributions from people working in: neurobiology, cognitive psychology, artificial intelligence, control engineering, and computer science, who embrace the vision outlined above. If you wish to contribute, please email an abstract of not more than 300 words (by 25 Feb 2014) to: eya at cs.stir.ac.uk Both "works-in-progress" and fully-developed ideas are welcome. Selected abstracts will be invited for oral presentation but there will also be research poster sessions (prize for the best poster/presentation) and an Exhibition by Springer. Of particular interest to Doctoral and postdoctoral researchers will be an invited talk by the Senior Publishing Editor of Springer Neuroscience, Dr Martijn Roelandse, on "publishing interdisciplinary research in scientific journals". We welcome people at all stages of their career to submit. Authors of selected best (oral & poster) presentations will be invited to submit extended papers for publication in a special issue of Springer's Cognitive Computation journal (http://www.springer.com/12559) Invited Speakers: Juha Heikkil?, Deputy Head of Unit: Robotics & Cognitive Systems, European Commission Prof Vincent Muller, Co-ordinator, EU-Cognition-III: European Network Dr Ingmar Posner, The Oxford Mobile Robotics Group, University of Oxford, UK Prof Dongbing Gu, Human-Centred Robotics Group, University of Essex, UK Prof Tony Pipe, Bristol Robotics Laboratory, UK Prof David Robertson, University of Edinburgh, UK Prof Mike Grimble, Strathclyde University & Industrial Systems and Control Ltd. 
UK Dr Tony Dodd, Dept. of Automatic Control Systems Engineering, Sheffield University, UK Workshop Organisers & Co-Chairs: Prof Amir Hussain, University of Stirling, UK Prof Kevin Gurney, University of Sheffield, UK Important Dates: Abstract submissions FINAL deadline (extended): 25 February 2014; Decisions Due: 28 Feb 2014 Workshop dates: Thurs 27- Fri 28 March 2014 Registration: Registration fees will include lunches, refreshments and a copy of the Workshop Abstract Proceedings Early Registration Fee: ?100 Early Deadline: 3 Mar 2014 Late Registration Fee: ?150 Final deadline: 10 Mar 2014 Registration payment details will be sent on acceptance of Abstract, or can be obtained by emailing: eya at cs.stir.ac.uk They will also be available on-line: http://www.cs.stir.ac.uk/~ahu/AUTCOGROB2014 -- The University of Stirling has been ranked in the top 12 of UK universities for graduate employment*. 94% of our 2012 graduates were in work and/or further study within six months of graduation. *The Telegraph The University of Stirling is a charity registered in Scotland, number SC 011159. -- The University of Stirling has been ranked in the top 12 of UK universities for graduate employment*. 94% of our 2012 graduates were in work and/or further study within six months of graduation. *The Telegraph The University of Stirling is a charity registered in Scotland, number SC 011159. From A.Cangelosi at plymouth.ac.uk Thu Feb 20 07:56:34 2014 From: A.Cangelosi at plymouth.ac.uk (Angelo Cangelosi) Date: Thu, 20 Feb 2014 12:56:34 +0000 Subject: Connectionists: Two postdoc positions at Plymouth University (Neurorobotics + HRI NLP) Message-ID: Plymouth University ? Centre for Robotics and Neural Systems Two post-doctoral research fellows are required for a period of up to 20 months to work on the FP7 funded collaborative projects ?ROBOT-ERA: Implementation and integration of advanced Robotic systems and intelligent Environments in real scenarios for the ageing population? and ?POETICON++: Robots Need Language?. The two posts will focus on: POETICON++ Research Fellow: You will carry out research on computational neuroscience modelling of action and language learning. http://www.jobs.ac.uk/job/AIE133/postdoctoral-research-fellow-poeticon-project/ ROBOT-ERA Research Fellow: You will implement and evaluate the Human-Robot Interaction and Natural Language Interfaces (e.g. programming of automatic speech recognition system) for a service robot. Expertise on Human-Computer Interaction is also welcome. http://www.jobs.ac.uk/job/AIE465/postdoctoral-research-fellow/ The applicants must have a PhD in Robotics, Computer Science, Artificial Intelligence or related disciplines. Expertise in one or more of these fields is essential: Human-Robot Interaction, Natural language processing, Cognitive robotics, Computational Neuroscience. Excellent programming skills are essential. The start date will be 1 May 2014 (or as soon as possible after this date, subject to negotiation) and the contract will last until 31 December 2015. The appointees will be working collaboratively as part of a larger inter-disciplinary research team at the School of Computing and Mathematics and the Centre for Robotics and Neural Systems, under the supervision of Professor Angelo Cangelosi and Professor Tony Belpaeme. You will also collaborate with the other European and international partners of the ROBOT-ERA and POETICON++ consortium. 
Informal enquiries regarding the post, the project or the research details can be made to Prof Angelo Cangelosi (acangelosi at plymouth.ac.uk or +44 1752 586217) for Poeticon++ and to Prof Tony Belpaeme for ROBOT-ERA (tony.belpaeme at plymouth.ac.uk). APPLICATION CLOSING DATE: THURSDAY 14 MARCH 2014 INTERVIEWS WILL BE ON 1 APRIL 2014 ________________________________ [http://www.plymouth.ac.uk/images/email_footer.gif] This email and any files with it are confidential and intended solely for the use of the recipient to whom it is addressed. If you are not the intended recipient then copying, distribution or other use of the information contained is strictly prohibited and you should not rely on it. If you have received this email in error please let the sender know immediately and delete it from your system(s). Internet emails are not necessarily secure. While we take every care, Plymouth University accepts no responsibility for viruses and it is your responsibility to scan emails and their attachments. Plymouth University does not accept responsibility for any changes made after it was sent. Nothing in this email or its attachments constitutes an order for goods or services unless accompanied by an official order form. From h.ortmanns at jacobs-university.de Thu Feb 20 09:36:43 2014 From: h.ortmanns at jacobs-university.de (Meyer-Ortmanns, Hildegard) Date: Thu, 20 Feb 2014 14:36:43 +0000 Subject: Connectionists: International Workshop on The Versatile Action of Noise, June22-27, 2014 in Bremen Message-ID: <60DB187638D40542BCC94DE271AC982A2D712E30@SXCHMB01.jacobs.jacobs-university.de> Dear Colleagues, we are organizing an international workshop on The Versatile Action of Noise: Applications from Genetic to Neural Circuits at Jacobs University Bremen, Germany from June 22-27, 2014. http://noise-workshop.user.jacobs-university.de/ Stochastic mechanisms, often summarized under "noise", are ubiquitous in artificial and biological networks. In applications to genetic circuits the goal is to trace back the various manifestations of noise to a few sources on underlying levels like the stochastic fluctuations in biochemical reactions, to follow the propagation and modification of noise towards the cellular level and to pursue the interaction of different sources of noise with respect to their mutual repression or amplification. In applications to neural systems the focus is on microscopic noise, e.g. the thermal fluctuations on channel dynamics, the effect of synaptic noise on neuronal firing, but also on large-scale collective fluctuations of brain areas, information loss, effects of large-scale noise on attention and decision-making, as well as on bridges between the different levels. A comparison between both types of applications should reveal parallels in how natural systems transform or utilize noise. It is the tools of theoretical physics that provide a deeper understanding of its role, in particular when it acts counterintuitive. Advanced methods from statistical physics and nonlinear dynamics allow for predicting the action of noise on excitable media like neural systems, extinction events in various populations and noise-induced rare events. We invite applications from graduate students, PhD students and postdocs with a background in theoretical physics, applied mathematics with an interest in biological applications and neuroscience. Applicants from the experimental side should be interested in the mathematical modeling and analysis of experimental data. 
The fee is 150 Euro for early registration until April 30 and 250 Euro afterwards, including accommodation in the guesthouse on campus and all meals, for the period of the workshop. The workshop is generously supported by the WE-Heraeus foundation. For registration please visit our website. Selected abstracts will be invited for oral presentation, we shall also have poster sessions. Kind regards Hildegard Meyer-Ortmanns and Alberto Bernacchia Invited speakers: Michael Assaf * (Racah Institute of Physics, Hebrew University of Jerusalem);Jan Benda * (Eberhard Karls University, T?bingen);Michael Breakspear * (QIMR and UNSW Berghofer, Sidney, Australia);Thierry Emonet * (Yale University, New Haven, U.S.A.);Tobias Galla * (Manchester University, Manchester);Jordi Garcia-Ojalvo * (Universitat Pompeu Fabra, Barcelona );Benjamin Lindner * (Humboldt University, Berlin);Wolfgang Maass * (Graz Technical University, Graz);Ralf Metzler * (Potsdam University, Potsdam);Sidney R. Nagel (The University of Chicago, U.S.A.);Simone Pigolotti * (Universitat Politecnica de Catalunya, Barcelona);Joachim R?dler * (Ludwig Maximilians University, Munich);Jaime de la Rocha (*) (Institut D' Invest. Biom?diques August Pi i Sunyer, Barcelona);Lutz Schimansky-Geier*(Humboldt University, Berlin);Susanne Schreiber *(Humboldt University, Berlin);Pieter Rein Ten Wolde * (FOM Institute AMOLF, Amsterdam);Ra?l Toral * (IFISC, UIB-CSIC, Palma de Mallorca) *confirmed Topics include: ? The effects of extrinsic noise on cellular decision making; ? Correlated fluctuations in genetic networks; ? Propagation of noise of sequential gene regulation; ? Error rates in biological copying; ? Single cell response and decision making under noise; ? The generation of transcriptional noise in bacteria; ? Costs and benefits of biochemical noise; ? Noise in large-scale cortical rhythms; ? Noise-induced order in collective neural populations; ? Computations by noisy networks of spiking neurons ? Noise and irregular firing in cortical circuits; ? Cellular mechanisms of temperature-compensation in receptor neurons; ? Coherent noise, scale invariance and intermittency in large systems; ? Shaping noise for population success; ? Quasi-cycles induced by noise; ? Interplay between noise and delay. ? Scientific Organization Prof. Dr. Hildegard Meyer-Ortmanns / Statistical Physics Group / Jacobs University Bremen gGmbH / School of Engineering and Science / Campus Ring 8 / 28759 Bremen / Germany / Tel: ++49 (0) 421 / 200-3221 / Tel: ++49 (0) 421 / 200-3249 / Email: h.ortmanns at jacobs-university.de Prof. Dr. Alberto Bernacchia / Computational Neuroscience Group / Jacobs University Bremen gGmbH / School of Engineering and Science / Campus Ring 6 / 28759 Bremen / Germany / Tel: ++49 (0) 421 / 200-3542 / Tel: ++49 (0) 421 / 200-3249 / Email: a.bernacchia at jacobs-university.de __________________________________________________ Dr. Hildegard Meyer-Ortmanns, Professor of Physics School of Engineering and Science Jacobs University Bremen gGmbH P.O. Box 750561 | 28725 Bremen | Germany Phone: +49 (0) 421 200 3221 | Fax: +49 (0) 421 200 3229 Office: Campus Ring 8, room 64 | 28759 Bremen | Germany h.ortmanns at jacobs-university.de http://www.jacobs-university.de http://www.jacobs-university.de/ses/meyer-ortmanns/ ______________________________________________ Commercial registry: Amtsgericht Bremen, HRB 18117 CEO / Gesch?ftsf?hrerin: Prof. Dr.-Ing. Katja Windt Chair Board of Governors: Prof. Dr. 
Karin Lochte -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpezaris at gmail.com Thu Feb 20 12:25:43 2014 From: jpezaris at gmail.com (John Pezaris) Date: Thu, 20 Feb 2014 12:25:43 -0500 Subject: Connectionists: AREADNE 2014 Second Call for Abstracts Message-ID: AREADNE 2014 Research in Encoding and Decoding of Neural Ensembles Nomikos Conference Centre, Santorini, Greece 25-29 June 2014 http://areadne.org info at areadne.org * * * * SECOND CALL FOR ABSTRACTS * * * * Abstract submissions for poster presentation at AREADNE 2014 are now open. AREADNE 2014 will bring together worldwide scientific leaders to present theoretical and experimental results on the functioning of neuronal ensembles. This, our fifth meeting to be held at the Nomikos Conference Centre in Santorini, Greece, will create an informal yet spectacular setting for the discussion of cutting-edge ideas and discoveries in an intensive but relaxed forum that emphasizes interaction. For submission details, see http://areadne.org/call-for-abstracts. Submissions of abstracts for poster presentations are due by 7 March 2014; notifications will be provided by 28 March 2014. We strongly encourage potential attendees to submit an abstract, as presenters have registration priority. For information about the conference, please refer to the main web page http://areadne.org or send email to us at info at areadne.org. We hope to see you in Santorini! --- J. S. Pezaris and N. G. Hatsopoulos AREADNE 2014 Co-Chairs From icais at cuas.at Mon Feb 17 02:17:49 2014 From: icais at cuas.at (icais) Date: Mon, 17 Feb 2014 07:17:49 +0000 Subject: Connectionists: ICAIS'14 -- 1st CFP Message-ID: <825BADE2E9C8674B82CEBD8B5F91F6474F56842F@EXMBX01.technikum.local> [Apologies if you receive multiple copies of this CFP] -------------------------------------------------------------------------------------------------- * * * CALL FOR PAPERS * * * The 2014 International Conference on Adaptive & Intelligent Systems - ICAIS'14 September 08th - 10th, 2014 Bournemouth, UK http://computing.bournemouth.ac.uk/ICAIS/ icais at bournemouth.ac.uk Sponsored by - IEEE Computational Intelligence Society - The International Neural Network Society -------------------------------------------------------------------------------------------------- * * * PLENARY TALKS * * * Prof. Ludmila I Kuncheva, Bangor University, UK Prof. João Gama, University of Porto, Porto, Portugal * * * AIMS OF THE CONFERENCE * * * The ICAIS'14 conference aims at bringing together international researchers, developers and practitioners from different horizons to discuss the latest advances in system learning and adaptation. ICAIS'14 will serve as a space to present the current state of the art as well as future research avenues in this area. Topics of the conference cover three aspects: Algorithms & theories of adaptation and learning, Adaptation issues in Software & System Engineering, and Real-world Applications. ICAIS'14 will feature contributed papers as well as world-renowned guest speakers (see webpage), interactive breakout sessions, and instructional workshops. * * * IMPORTANT DATES * * * - Workshop & Special Session proposal: April 13, 2014 - Full paper submission: June 10, 2014 - Acceptance notification: July 01, 2014 - Final camera ready: July 11, 2014 * * * CONFERENCE PROCEEDINGS * * * Proceedings will be published by Springer in the Lecture Notes in Artificial Intelligence series.
* * * MAIN TOPICS (but not limited to) * * * - Track 1: Self-X Systems o Self-adaptation o Self-organization and behavior emergence o Self-managing o Self-healing o Self-monitoring o Multi-agent systems o Self-X software agents o Self-X robots o Self-organizing sensor networks o Evolving systems - Track 2: Incremental Learning o Online incremental learning o Self-growing neural networks o Adaptive and life-long learning o Plasticity and stability o Forgetting o Unlearning o Novelty detection o Perception and evolution o Drift handling o Adaptation in changing environments - Track 3: Online Processing o Adaptive rule-based systems o Adaptive identification systems o Adaptive decision systems o Adaptive preference learning o Time series prediction o Online and single-pass data mining o Online classification o Online clustering o Online regression o Online feature selection and reduction o Online information routing - Track 4: Dynamic and Evolving Models in Computational Intelligence o (Dynamic) Neural network architectures o (Dynamic) Evolutionary computation o (Dynamic) Swarm intelligence o (Dynamic) Immune and bacterial systems o Uncertainty and fuzziness modeling for adaptation o Approximate reasoning and adaptation o Chaotic systems - Track 5: Software & System Engineering o Autonomic computing o Organic computing o Evolution o Adaptive software architecture o Software change o Software agents o Engineering of complex systems o Adaptive software engineering processes o Component-based development - Track 6: Applications - Adaptivity and Learning o Smart systems o Ambient / ubiquitous environments o Distributed intelligence o Robotics o Industrial applications o Internet applications o Business applications o Supply chain management o etc. * * * SUBMISSION * * * Papers must be in PDF, not exceeding 10 pages and conforming to Springer-Verlag Lecture Notes guidelines. Author instructions and style files can be downloaded at http://www.springer.de/comp/lncs/authors.html. Papers must be submitted through the submission system ( http://computing.bournemouth.ac.uk/ICAIS ). Short papers describing novel research visions, work-in-progress or less mature results are also welcome. All submissions will be peer-reviewed by at least 3 qualified reviewers. Selection criteria will include: relevance, significance, impact, originality, technical soundness, and quality of presentation. Preference will be given to submissions that take strong or challenging positions on important emergent topics. At least one author has to attend the conference to present the paper. * * * ORGANIZATION COMMITTEE * * * General Chair: - Abdelhamid Bouchachia, Bournemouth University, UK International Advisory Committee: - Nikola Kasabov, Auckland University, New Zealand - Xin Yao, University of Birmingham, UK - Djamel Ziou, University of Sherbrooke, Canada - Plamen Angelov, University of Lancaster, UK - Witold Pedrycz, University of Edmonton, Canada - Janusz Kacprzyk, Polish Academy of Sciences, Poland Organization Committee: - Hammadi Nait-Charif, Bournemouth University, UK - Emili Balaguer-Ballester, Bournemouth University, UK - Damien Fay, Bournemouth University, UK - Jane McAlpine, Bournemouth University, UK Publicity Chair: - Markus Prossegger, Carinthia University of Applied Sciences, Austria -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jkrichma at uci.edu Fri Feb 21 01:24:20 2014 From: jkrichma at uci.edu (Jeff Krichmar) Date: Thu, 20 Feb 2014 22:24:20 -0800 Subject: Connectionists: ICRA 2014 Workshop on "Neurobiologically Inspired Robotics" Message-ID: <366FB9E4-DCC8-4975-8B02-3DD4656786CC@uci.edu> Dear colleagues, We are happy to announce the ICRA 2014 Workshop on "Neurobiologically Inspired Robotics", a full-day workshop held in conjunction with the IEEE International Conference on Robotics and Automation, at the Hong Kong Convention and Exhibition Centre on Thursday, June 5th, 2014. The "Neurobiologically Inspired Robotics" workshop will be a full day event comprised of leading neuroroboticists from around the world. Each speaker will discuss how leveraging neurobiology and brain processing can enhance robot autonomy and capability. *Rather than an abstract, theoretical session on the potential future of neurobiologically inspired robots, each speaker has embedded a brain inspired model, at some level, onto a physical robot system*. They will discuss the inspiration, implementation, and implications of their systems. The first talk will give an overview of the field and a brief primer on neuroscience. This will be followed by talks from leading neurorobotic experts (titles and authors given below). The final session of the day will be a panel discussion that focuses on the future of this field and how to transition these research systems into practical applications in the real world. The workshop format includes invited speakers, collaborative discussion sessions and a poster presentation. *The best submission will be awarded a GoPro Black HERO3+ camera worth $400 USD.* *Important Dates:* - Early registration: February 28, 2014 at www.icra2014.com - Abstract submission deadline: April 14, 2014 - Notification of acceptance: April 28, 2014 - Workshop: Thursday June 5th, 2014, 9 AM - 5 PM *Website*: http://www.socsci.uci.edu/~jkrichma/ICRA2014-NeuroRobot.html *Organizers:* - Professor Angelo Cangelosi, University of Plymouth - Dr Jeffrey L. Krichmar, University of California, Irvine - Dr Michael Milford, Queensland University of Technology - Professor Michele Rucci, Boston University - Professor Bert Shi, Hong Kong University of Science and Technology *Submissions:* We invite submissions of a one page abstract describing new research in Neurobiologically Inspired Robotics. Accepted submissions will be given a poster slot, with two submissions selected for 15 minute talks during the workshop. Submit your abstract in .pdf format to: jeff.krichmar AT uci.edu Each submission will be reviewed by the organizing committee. *Speaker Lineup:* - Minoru Asada (Osaka University) - "Affective Developmental Robotics Towards Artificial Empathy" - Angelo Cangelosi (University of Plymouth) - "Neurorobotics Modelling of Embodied Cognition" - Jorg Conradt (Technische Universitat Munchen) - "Computational Neuroscience in Robots: Event-based Sensors and Information Processing" - Jeffrey L. 
Krichmar (University of California, Irvine) - "A Tactile, Interactive NeuroRobot for Development Disorder Therapy" - Michael Milford (Queensland University of Technology) - "Rodent-inspired Place Recognition over Multiple Spatial Scales using Multiple Sensing Modalities" - Florian Roerbein (Technische Universitat Munchen) - "Neurorobotics and the Human Brain Project" - Michele Rucci (Boston University) - TBA - Bert Shi (Hong Kong University of Science and Technology) - "Intrinsically Motivated Joint Learning of Visual Perception and Eye Movements" - Jun Tani (Korea Advanced Institute of Science and Technology) - "Prediction and postdiction for tackling the problem of "free wills": From neuro-robotics experimental studies." *Workshop Support*: This workshop is generously supported by Microsoft Research: http://research.microsoft.com/en-us/ *Enquiries*: Please contact Michael Milford (michael DOT milford AT qut.edu.au) or Jeffrey Krichmar (jkrichma AT uci.edu) Best regards, Michael, Jeff, Angelo, Michele and Bert. Jeff Krichmar Department of Cognitive Sciences 2328 Social and Behavioral Sciences Gateway University of California, Irvine Irvine, CA 92697-5100 jkrichma at uci.edu http://www.socsci.uci.edu/~jkrichma From poirazi at imbb.forth.gr Fri Feb 21 05:26:26 2014 From: poirazi at imbb.forth.gr (Yiota Poirazi) Date: Fri, 21 Feb 2014 12:26:26 +0200 Subject: Connectionists: DENDRITES 2014, July 1-4, Crete: SECOND CALL FOR ABSTRACTS Message-ID: <530729D2.9080406@imbb.forth.gr> Please distribute to potentially interested parties and email lists. Many thanks! Looking forward to welcoming you in Crete! On behalf of the organizing committee, Yiota Poirazi -- Panayiota Poirazi, Ph.D. Research Director Institute of Molecular Biology and Biotechnology (IMBB) Foundation of Research and Technology-Hellas (FORTH) Vassilika Vouton, P.O.Box 1385 GR 711 10 Heraklion, Crete, GREECE Tel: +30 2810 391139 Fax: +30 2810 391101 Email: poirazi at imbb.forth.gr http://www.dendrites.gr ------------------------------------------------------- DENDRITES 2014 Heraklion, Crete, Greece. July 1-4, 2014 http://dendrites2014.gr/ info at dendrites2014.gr * * * * * SECOND CALL FOR ABSTRACTS * * * * * Abstract submission for poster and oral presentation is open until *March 10, 2014.* Dendrites 2014 aims to bring together scientists from around the world to present their research on dendrites, ranging from the molecular to the anatomical and biophysical levels. With the backdrop of an informal yet spectacular setting on Crete, the meeting has been carefully planned to not only satisfy our scientific curiosity but also foster discussion and encourage interaction between attendees well beyond traditional presentations. FORMAT AND SPEAKERS The meeting consists of the Main Event (July 1-3) and the Soft Skills Day (July 4). Invited speakers for the Main Event include: * Susumu Tonegawa (Nobel Laureate), * Angus Silver, * Tiago Branco, * Alcino J. Silva, * Michael Häusser, * Julietta U. Frey, * Stefan Remy, and * Kristen Harris. For the Soft Skills Day, Kerri Smith, Nature podcast editor, is going to present on communication and dissemination of scientific results. Alcino J. Silva (UCLA) will present his recent work on developing tools for integrating and planning research in Neuroscience. There will be a hands-on session on tools used to model neural cells/networks and a talk about the advantages of publishing under the Creative Commons License. 
CALL FOR ABSTRACTS Submissions of abstracts for poster and oral presentations are due by 10 March 2014; notifications will be provided by late March 2014. We welcome both experimental and theoretical contributions addressing novel findings related to dendrites. For details, please see http://dendrites2014.gr/call/. Electronic abstract submission is hosted on the Frontiers platform, where extended abstracts will be published (Frontiers in Cellular Neuroscience). One of the authors has to register for the main event as a presenting author. Frontiers publishing fees are included in the registration. In case an abstract is not accepted for presentation, the registration fee will be refunded. Dendrites 2014 will be hosted by FORTH, and is being organized as part of the Marie Curie Initial Training Network NAMASEN and the ERC Starting Grant dEMORY. FOR FURTHER INFORMATION Please see the conference web site (http://dendrites2014.gr/), subscribe to our twitter (https://twitter.com/dendrites2014) or RSS feeds (http://dendrites2014.gr/rss_feed/news.xml), or send an email to info at dendrites2014.org. ---- -------------- next part -------------- An HTML attachment was scrubbed... URL: From davrot at neuro.uni-bremen.de Fri Feb 21 04:39:17 2014 From: davrot at neuro.uni-bremen.de (David Rotermund) Date: Fri, 21 Feb 2014 10:39:17 +0100 Subject: Connectionists: Open PhD position: Electrophysiology -- Neuronal mechanisms of rapid functional configuration Message-ID: <53071EC5.1080602@neuro.uni-bremen.de> Open PhD position (salary E13/2): Electrophysiology -- Neuronal mechanisms of rapid functional configuration Visual information processing is highly flexible and rapidly adapted to the current behavioural task. In this project, you have the unique opportunity to investigate the neural mechanisms of selective information processing in multiple visual areas with massively parallel multielectrode recordings. You will be embedded into an interdisciplinary and international research group within the Bernstein National Network for Computational Neuroscience which unites computational approaches with experimental work on human subjects and primates. Our team is also part of a major neuroscientific and neurotechnological research focus at Bremen University, comprising the Center for Cognitive Sciences (ZKW) and the Creative Unit 'I-See', conducting research on the development of an artificial eye. Your task will be to conduct experiments on awake behaving macaque monkeys in collaboration with the group of Prof. Dr. Andreas Kreiter (http://www.brain.uni-bremen.de) and analyze the collected data. This includes familiarization with, and training of, the animals, preparation of the experimental setup and recordings, implantation of the electrode arrays, and recording of the data under different visual perception tasks. You should be familiar with standard methods of data analysis, and have a degree (master/diploma or equivalent) in natural sciences (e.g. Biology) with a focus on experimental work (preferably Animal Physiology). Basic programming skills and interest in concepts from Computational Neuroscience are required. We expect high motivation for communicating and collaborating with the other members in the group. The position is to be filled as soon as possible (i.e., on a first-come, first-served basis). 
Please send your application in German or English language, including your letter of motivation, CV, copies of school and university certificates (master/diploma or equivalent) by e-mail to Udo Ernst (udo at neuro.uni-bremen.de) or by postal mail to the address below: Dr. Udo Ernst Hochschulring 18 / Cognium Universität Bremen, 28359 Bremen, Germany From M.Loog at tudelft.nl Fri Feb 21 13:14:03 2014 From: M.Loog at tudelft.nl (Marco Loog - EWI) Date: Fri, 21 Feb 2014 18:14:03 +0000 Subject: Connectionists: S+SSPR 2014 (deadline extension to 14.3.) In-Reply-To: References: Message-ID: <1B7FB132C5F56E428E697512485E823244BFC055@SRV363.tudelft.net> ------------------------------------------------------------------- CALL FOR PAPERS ------------------------------------------------------------------- S+SSPR 2014 IAPR Joint International Workshops on 10th Statistical Techniques in Pattern Recognition (SPR) 15th Structural and Syntactic Pattern Recognition Workshop (SSPR) 20-22 August 2014 Joensuu, Finland http://cs.uef.fi/ssspr2014/ ------------------------------------------------------------------- IMPORTANT DATES: Paper submission: 14 March 2014 (extended) Notifications: 28 April 2014 Camera-ready: 17 May 2014 ------------------------------------------------------------------- CONFERENCE TOPICS: We invite original contributions within the following topics: SPR Topics: Domain adaptation Structural Matching Dissimilarity Representations Structural Complexity Ensemble Methods Syntactic Pattern Recognition Multiple Classifiers Image Understanding Gaussian Processes Shape Analysis Dimensionality Reduction Graph-theoretic Methods Clustering Algorithms Graphical Models Model Selection Structural Kernels Semi-Supervised Learning Spectral Methods Multiple Instance Learning Spatio-Temporal Pattern Recognition Active Learning Stochastic Structural Models Contextual Pattern Recognition Intelligent Sensing Systems Location-based Pattern Recognition Multimedia Analysis Partially Supervised Learning Structured Text Analysis Novelty Detection Image Document Analysis Comparative Studies In addition to the original contributions, we also invite authors to submit their recent papers (within 1 year) published in a related journal (e.g. PAMI, PR, PRL, JMLR). These papers will undergo a lighter review process, and if accepted, they will be included in the workshop program, with a short abstract in the proceedings. ------------------------------------------------------------------- PAPER SUBMISSION PROCEDURE: All manuscripts (max. 10 pages) must be in LaTeX format following Springer's LNCS style: http://www.springer.com/comp/lncs/authors.html All papers are submitted via EasyChair, and will be reviewed by at least two anonymous reviewers. Accepted papers (not exceeding 10 pages) will be published in Springer's Lecture Notes in Computer Science (LNCS) series, provided that at least one author registers for the workshop and presents the paper. ------------------------------------------------------------------- KEYNOTE SPEAKERS: Prof. Ali Shokoufandeh, Drexel University, Philadelphia, US Approximation of hard combinatorial problems via embedding to hierarchically separated trees https://www.cs.drexel.edu/~ashokouf/ Prof. 
David Hand, Imperial College, London, UK Evaluating supervised classification methods: error rate, ROC curves, and beyond http://www3.imperial.ac.uk/people/d.j.hand ------------------------------------------------------------------- ORGANIZATION: General Chair: Pasi Fränti, University of Eastern Finland SPR Chairs: Gavin Brown, University of Manchester, UK Marco Loog, Delft University of Technology, The Netherlands SSPR Chairs: Francisco Escolano, Universidad de Alicante, Spain Marcello Pelillo, University of Venice, Italy --------------------------------------------------------------- VENUE: The workshops are organized by the School of Computing at the University of Eastern Finland. Joensuu is a small town of 75,000 inhabitants in lakeside Finland - the capital of green. It is famous for its peaceful nature, excellent outdoor opportunities, as well as its many saunas. TRAVEL: Joensuu can be reached easiest from Helsinki (50 min transit), which can be reached by direct flights from most European cities, New York, Chicago and all the major hubs in Asia including Beijing, Shanghai, Tokyo, Osaka, Bangkok and Singapore. The workshops are organized one week prior to the ICPR conference. The schedule is planned to support a smooth transit to Stockholm either by flight or by a memorable sea cruise. --------------------------------------------------------------- LINKS: http://cs.uef.fi/ssspr2014/ https://www.facebook.com/pages/SSSPR-2014/171649539698188 --------------------------------------------------------------- From mohajerin.nima at gmail.com Fri Feb 21 15:57:08 2014 From: mohajerin.nima at gmail.com (Nima) Date: Fri, 21 Feb 2014 15:57:08 -0500 Subject: Connectionists: Good references for modeling dynamic systems with Recurrent Neural Networks needed Message-ID: Hello everyone, I am going to apply RNNs to the modeling of dynamic systems of order 3 and higher. I would appreciate it if you could guide me to some good references in the literature. Thanks, /Nima -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahmedhalimo at gmail.com Sun Feb 23 02:37:41 2014 From: ahmedhalimo at gmail.com (Ahmed Moustafa) Date: Sun, 23 Feb 2014 18:37:41 +1100 Subject: Connectionists: Open PhD positions in Computational modeling and experimental studies of motor and cognitive processes in health and disease. Message-ID: Funding is available for two PhD students. We seek graduate students to work on studies of Motor and Cognitive Processes in patient populations, including Parkinson's disease, schizophrenia, and drug addiction, among others. This is part of various thesis projects leading towards a PhD. Applicants with an interest in either experimental or computational modeling research are encouraged to apply. Computational modeling work involves building abstract mathematical models of learning, cognition, and decision making as well as neural network models that integrate neuroscience data. Experimental studies focus on testing cognitive function in various patient populations as well as behavioral neurogenetics and fMRI studies in collaboration with various labs around Sydney. Testing of subjects will take place at the University of New South Wales, Sydney University, and/or other hospitals in Sydney CBD. The lab is located in Bankstown, a suburb of Sydney. For any questions, feel free to contact Dr. Ahmed Moustafa at a.moustafa at uws.edu.au Dr. 
Ahmed Moustafa Lecturer in Cognitive and Behavioural Neuroscience, Marcs Institute for Brain and Behaviour & School of Social Sciences and Psychology University of Western Sydney Building 24, Bankstown Campus, Office: 24.1.6 Phone: +61 2 9772 6847; Fax: +61 2 9772 6757 Email: a.moustafa at uws.edu.au -------------- next part -------------- An HTML attachment was scrubbed... URL: From Michael_Frank at brown.edu Sun Feb 23 15:10:08 2014 From: Michael_Frank at brown.edu (Michael J Frank) Date: Sun, 23 Feb 2014 15:10:08 -0500 Subject: Connectionists: Behavioral Neuroscience journal: Computational papers welcome Message-ID: The new editorial board for *Behavioral Neuroscience* would like to increase the visibility of computational work in the journal, particularly that focused on understanding the neural basis of behavior and cognitive function, including movement and motor function, learning and memory, attention, decision making, sensation and perception, motivated behavior, and neuropsychiatric/neurodegenerative disorders. We are particularly interested in computational work that has a strong behavioral component in one of the areas described above. Our office is currently handling new submissions to the journal. We hope you will help us by sending us your best computational work that also matches our mission. *Behavioral Neuroscience* is published bi-monthly. There are no publication or page charges, and online color figures are free. The average time for a decision is about one month, and accepted papers are published online in about 2.5 weeks if proofs are returned promptly. If you have questions about whether a particular finding is what we have in mind, feel free to contact me. Michael Frank, Incoming Associate Editor, *Behavioral Neuroscience* on behalf of the other Associate Editors: Mark Baxter, Elizabeth Buffalo, Cynthia Moss, Geoffrey Schoenbaum, and Editor Rebecca Burwell Michael J Frank, PhD, Associate Professor Laboratory for Neural Computation and Cognition Brown University http://ski.clps.brown.edu (401)-863-6872 -------------- next part -------------- An HTML attachment was scrubbed... URL: From borisyuk at math.utah.edu Fri Feb 21 20:53:11 2014 From: borisyuk at math.utah.edu (Alla Borisyuk) Date: Fri, 21 Feb 2014 18:53:11 -0700 Subject: Connectionists: CNS-2014: workshops posted and deadline approaching Message-ID: Organization for Computational Neurosciences (OCNS) 23rd Annual Meeting Québec City, Canada July 26-31, 2014 Final deadline for abstract submission is February 23. Note that one of the authors has to register as sponsoring author for the main meeting before abstract submission is possible. In case the abstract is not accepted for presentation, the registration fee will be refunded. FINAL Deadline: 23 Feb 2014 Abstract submission closes (11:00 pm Pacific time USA) Please visit: https://ocns.memberclicks.net/cns-2014-abstract-submission The main meeting (July 27 - 29) will be preceded by a day of tutorials (July 26) and followed by two days of workshops (July 30-31). Invited Keynote Speakers: Chris Eliasmith, University of Waterloo, Canada Christof Koch, Allen Institute for Brain Science, USA Henry Markram, EPFL Lausanne, Switzerland Frances Skinner, TWRI/UHN, University of Toronto, Canada THE CURRENT LIST OF WORKSHOPS AND TUTORIALS IS NOW POSTED For up-to-date conference information, please visit http://www.cnsorg.org/cns-2014-quebec-city ---------------------------------------- OCNS is the international member-based society for computational neuroscientists. 
Become a member to be eligible for travel awards and more. Visit our website for more information: http://www.cnsorg.org From dong at wi-lab.com Sat Feb 22 03:29:54 2014 From: dong at wi-lab.com (Juzhen Dong) Date: Sat, 22 Feb 2014 17:29:54 +0900 Subject: Connectionists: CFP: Brain Informatics & Health (BIH'14) Message-ID: <53086002.6050707@wi-lab.com> [Apologies for cross-postings] ################################################################## The 2014 International Conference on Brain Informatics & Health (BIH'14) August 11-14, 2014, Warsaw, Poland 3RD CALL FOR PAPERS ################################################################## Homepage: http://wic2014.mimuw.edu.pl/bih/homepage ################################################################## BIH'14 is a part of the 2014 Web Intelligence Congress (WIC'14). The series of Brain Informatics conferences was started in China, in 2006, with the International Workshop on Web Intelligence meets Brain Informatics (WImBI'06). Subsequent events have been held in China, Canada and Japan. Since 2012 the conference topics have been extended with major elements of Health Informatics in order to investigate some common challenges in both areas. In 2014, this series of events will visit Europe for the first time. Important Dates: ################################################################## # Electronic submission of full papers: March 2, 2014 # Workshop paper submission: March 23, 2014 # Notification of paper acceptance: May 4-11, 2014 # Camera-ready of accepted papers: May 18, 2014 ################################################################## WIC'14 Keynote Speakers: - Stefan Decker (National University of Ireland) - Karl Friston (University College London) - Sadaaki Miyamoto (University of Tsukuba) - Yi Pan (Georgia State University) - Henryk Skarzynski (World Hearing Center in Kajetany) - John F. Sowa (VivoMind Research, LLC) - Andrzej Szalas (Linkoping University & The University of Warsaw) - Andrew Chi-Chih Yao (Tsinghua University; Turing Award Winner) BIH'14 Co-Organizers/Co-Sponsors: - Web Intelligence Consortium (WIC) - IEEE-CIS Task Force on Brain Informatics (IEEE TF-BI) - The University of Warsaw - Polish Mathematical Society (PTM) - Warsaw University of Technology - Polish Academy of Sciences (PAS) Committee on Informatics - Polish Artificial Intelligence Society BIH'14 Program Co-Chairs: - Xiaohua (Tony) Hu, USA - Lars Schwabe, Germany - Ah-Hwee Tan, Singapore On-Line Submissions & Publications: ################################################################## # Papers must not exceed 10 pages in LNCS format: # http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0 # Accepted papers will be published by Springer as a # volume of the series of LNCS/LNAI. ################################################################## CONFERENCE TOPICS INCLUDE, BUT ARE NOT LIMITED TO: 1. Foundations of Brain Understanding * Brain Organization & Reaction Modelling * Causal, Hierarchical & Granular Brain Modelling * Human Reasoning & Learning Mechanisms * Neural Basis for High-Level Human Concepts * Higher Cognitive Functions & Consciousness * Systematic Design of Cognitive Experiments * Visual, Auditory & Tactile Information Processes * Spatio-Temporality of Human Information Processes 2. 
Brain-Inspired Problem Solving * Brain & Cognition-Inspired Intelligent Systems * Foundations & Applications of Neurocomputing * Deep, Hierarchical & Energy-based Learning * Brain-Related Aspects of Natural Computing * Brain Informatics for Cyber-Individual Models * Human Factors in Computing Systems * Neuroeconomics & Neuromarketing * Neurolinguistics & Neurosemantics 3. Brain & Health Data Management * Digital, Data & Computational Brain * Big Brain Data Centers & Computational Grids * Brain Data/Information Flow Simulations * Brain & Health Data Repositories & Benchmarks * Brain & Health Data Cleaning / Quality Assurance * Brain & Health Data/Evidence Integration * Electronic Patient Record Management * Medical Knowledge Abstraction & Representation 4. Biomedical Decision Support * Neurological/Mental Disease Diagnosis & Assessment * Therapy Planning & Disease Prognostic Support * Risk Management for Diagnostic/Therapeutic Support * Computer Support for Surgical Intervention * Physiological, Clinical & Epidemiological Modelling * Operations Research for Biomedicine & Healthcare 5. Brain & Health Data Analytics * Pattern Recognition Methods for Brain & Health Data * Knowledge Discovery from Brain & Health Databases * Multimodal Brain Information Fusion * Neuroimaging & Electromagnetic Brain Signals * Discovery of Biomarkers & New Therapies * Healthcare Workflow Mining for Quality Assurance * Domain Knowledge in Medical Image/Signal Analysis * Mining Brain/Health Literature & Medical Records * Survival Analysis & Health Hazard Evaluations * Interactive & Visual Analytics for Biomedicine 6. Healthcare Systems * Healthcare Systems as Complex Systems * Risk Management for Healthcare Processes * IT Solutions for Healthcare Service Delivery * IT Solutions for Hospital Management * Organizational Impacts of Healthcare IT Solutions * Public Health Informatics & Healthcare Networks * Medical Compliance Support & Automation * Social Aspects of Healthcare Mechanisms 7. Biomedical Technologies * Brain-Computer Middleware * Biomedical Intelligent Devices * Biomedical Sensor Calibration * Assistive & Monitoring Technologies * Biomedical Software Engineering * Biomedical Robotics & Microrobotics 8. Applications of Brain & Health Informatics * Brain & Health Scientific Research Support Portals * Brain Signal Interfaces & Non-verbal Communication * Telemedicine, E-medicine & M-medicine * Clinical/Hospital Information Systems * Biomedical & Health Recommender Systems * Business Intelligence based on Brain & Health Data Tutorial Proposals: ################################################################## # Electronic submission of proposals: April 13, 2014 # Notification of proposal acceptance: April 20, 2014 ################################################################## *** Post-Conference Journal Publications *** - Web Intelligence and Agent Systems (IOS Press) - Brain Informatics (Springer) - Information Technology & Decision Making (World Scientific) - Computational Intelligence (Wiley) - Health Information Science and Systems (Springer) - Cognitive Systems Research (Elsevier) - Computational Cognitive Science (Springer) - Semantic Computing (World Scientific) *** About WIC'14 *** The 2014 Web Intelligence Congress (WIC'14) is a Special Event of Web25 (25 years of the Web). 
It includes four top-quality international conferences related to intelligent informatics: - IEEE/WIC/ACM Web Intelligence 2014 (WI'14) - IEEE/WIC/ACM Intelligent Agent Technology 2014 (IAT'14) - Active Media Technology 2014 (AMT'14) - Brain Informatics & Health 2014 (BIH'14) They are co-located in order to bring together researchers and practitioners from diverse fields with the purpose of exploring the fundamental roles, interactions and practical impacts of Artificial Intelligence and Advanced Information Technology. *** About the Venue *** The conference will be held in August - the best summer period to visit Warsaw and Poland. Lectures will take place on the Central Campus of the University of Warsaw, in the Old Library building converted into a modern conference center. The campus is located in downtown Warsaw, close to the Old Town and Vistula River. *** Contact Information *** Dominik Slezak WIC'14 Congress Program Chair From pascualm at key.uzh.ch Sun Feb 23 21:28:01 2014 From: pascualm at key.uzh.ch (Roberto D. Pascual-Marqui) Date: Mon, 24 Feb 2014 11:28:01 +0900 Subject: Connectionists: Isolated effective coherence (iCoh): causal information flow excluding indirect paths (pre-print) Message-ID: Dear Colleagues, The following pre-print on a method for assessing the "isolated effective coherence (iCoh)" might be of interest: http://arxiv.org/abs/1402.4887 RD Pascual-Marqui, RJ Biscay, J Bosch-Bayard, D Lehmann, K Kochi, N Yamada, T Kinoshita, N Sadato. Isolated effective coherence (iCoh): causal information flow excluding indirect paths Abstract: A problem of great interest in real world systems, where multiple time series measurements are available, is the estimation of the intra-system causal relations. For instance, electric cortical signals are used for studying functional connectivity between brain areas, their directionality, the direct or indirect nature of the connections, and the spectral characteristics (e.g. which oscillations are preferentially transmitted). The earliest spectral measure of causality was Akaike's (1968) seminal work on the noise contribution ratio (NCR), reflecting direct and indirect connections. Later, the partial directed coherence (PDC) of Baccala and Sameshima (2001) was proposed for direct connections. In this study the partial coherence is estimated under a multivariate auto-regressive model, followed by setting all irrelevant associations to zero, other than the particular directional association of interest. This is the isolated effective coherence (iCoh). It is shown that the NCR computed under these same conditions is identical to the iCoh, thus enriching its interpretability. In comparison with the iCoh, it is shown that the PDC is affected by irrelevant connections to such an extent that it can misrepresent the frequency response. Toy examples are included to demonstrate these properties. 
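For readers who want a concrete feel for what such a frequency-domain causality measure looks like, here is a minimal sketch (not taken from the preprint) of the standard partial directed coherence of Baccala and Sameshima mentioned in the abstract, computed from the coefficient matrices of a fitted multivariate autoregressive (MVAR) model. The iCoh proposed above additionally partializes the spectrum - all associations other than the single directed connection of interest are set to zero before the coherence is read off - and its exact expressions are given in the arXiv manuscript. The function name pdc, the NumPy-based implementation and the toy AR(1) coefficients below are illustrative assumptions, not the authors' code.

import numpy as np

def pdc(ar_coeffs, freqs, fs=1.0):
    # ar_coeffs: array of shape (p, n, n) with the MVAR matrices A_1..A_p of
    #            x_t = sum_r A_r x_{t-r} + noise.
    # freqs:     1-D array of frequencies (same units as fs).
    # Returns an array of shape (len(freqs), n, n); entry [k, i, j] is the
    # PDC from channel j to channel i at frequency freqs[k].
    p, n, _ = ar_coeffs.shape
    out = np.empty((len(freqs), n, n))
    for k, f in enumerate(freqs):
        # A_bar(f) = I - sum_r A_r * exp(-2*pi*i*f*r/fs)
        a_bar = np.eye(n, dtype=complex)
        for r in range(1, p + 1):
            a_bar -= ar_coeffs[r - 1] * np.exp(-2j * np.pi * f * r / fs)
        # Column-normalized magnitude (Baccala & Sameshima, 2001):
        # PDC_ij(f) = |A_bar_ij(f)| / sqrt(sum_k |A_bar_kj(f)|^2)
        out[k] = np.abs(a_bar) / np.sqrt((np.abs(a_bar) ** 2).sum(axis=0, keepdims=True))
    return out

# Toy example: a bivariate AR(1) system with a single directed influence 2 -> 1.
A1 = np.array([[0.5, 0.4],
               [0.0, 0.7]])
P = pdc(A1[np.newaxis], freqs=np.linspace(0.0, 0.5, 65))

In this toy system the only nonzero off-diagonal PDC is the one from channel 2 to channel 1 (entries P[:, 0, 1]), matching the single directed coupling in A1; the point of the preprint is that, in larger networks with indirect paths, the PDC can misrepresent the frequency response while the iCoh does not.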
Sincerely, Roberto ... Roberto D. Pascual-Marqui, PhD, PD The KEY Institute for Brain-Mind Research, University Hospital of Psychiatry Zurich (Switzerland) Department of Community Psychiatric Medicine, Shiga University of Medical Science (Japan) (pascualm at belle.shiga-med.ac.jp) [www.keyinst.uzh.ch/loreta] [www.researcherid.com/rid/A-2012-2008] From emmanuel.vincent at inria.fr Mon Feb 24 08:28:23 2014 From: emmanuel.vincent at inria.fr (Emmanuel Vincent) Date: Mon, 24 Feb 2014 14:28:23 +0100 Subject: Connectionists: PhD scholarship on deep neural networks for robust ASR Message-ID: <530B48F7.9090909@inria.fr> We are pleased to offer a PhD scholarship on deep neural networks for source separation and noise-robust ASR http://www.inria.fr/en/institute/recruitment/offers/phd/campaign-2014/%28view%29/details.html?nPostingTargetID=14062 Please forward to interested candidates. We are looking forward to receiving applications by April 18 (please do not wait until the later deadline indicated on the website). Best, -- Emmanuel Vincent PAROLE Project-Team Inria Nancy - Grand Est 615 rue du Jardin Botanique, 54600 Villers-lès-Nancy, France Phone: +33 3 8359 3083 - Fax: +33 3 8327 8319 Web: http://www.loria.fr/~evincent/ From viktor.jirsa at univ-amu.fr Mon Feb 24 17:49:01 2014 From: viktor.jirsa at univ-amu.fr (Viktor Jirsa) Date: Mon, 24 Feb 2014 22:49:01 +0000 Subject: Connectionists: Project Proposals for Google Summer of Code 2014 - The Virtual Brain Message-ID: Project Proposals for Google Summer of Code 2014 Google Summer of Code is a global program that offers post-secondary student developers ages 18 and older stipends, financed by Google, to write code for various open source software projects. Students can apply to take part in projects proposed by mentoring organizations. Accepted student applicants are paired with a mentor or mentors from the participating projects. The Virtual Brain project (http://www.thevirtualbrain.org) is one of the proposed projects and offers 5 sub-projects. The Virtual Brain: An open-source simulator for whole brain network modeling Project 1: Profiling the simulator Project 2: Numerical accuracy evaluation Project 3: Interactive Data Exploration Project 4: IO module Project 5: Packaging More details can be found here: http://incf.org/gsoc/2014/proposals -------------- next part -------------- An HTML attachment was scrubbed... URL: From viktor.jirsa at univ-amu.fr Mon Feb 24 17:38:45 2014 From: viktor.jirsa at univ-amu.fr (Viktor Jirsa) Date: Mon, 24 Feb 2014 22:38:45 +0000 Subject: Connectionists: The Virtual Brain training workshop - Hamburg June 7th, 2014 Message-ID: Dear colleagues, please find the training workshop announcement on TVB here below. Best wishes, Viktor Jirsa ---------------------- The Virtual Brain Node 1: 1st Training Workshop, Hamburg, Germany, June 7th 2014 In this workshop we will explain the fundamental principles of full brain network modelling using the open source neuroinformatics platform The Virtual Brain (TVB). This simulation environment enables the biologically realistic modelling of network dynamics using Connectome-based approaches across different brain scales. Configurable brain network models generate macroscopic neuroimaging signals including functional MRI (fMRI), intracranial and stereotactic EEG, surface EEG and MEG for single subjects. 
Researchers from different backgrounds can benefit from an integrative software platform including a supporting framework for data management (generation, organization, storage, integration and sharing) and a simulation core written in Python. The architecture of TVB supports interaction with MATLAB packages, for example, the well-known Brain Connectivity Toolbox. Workshop goals: to create a conceptual and technical understanding of the various network modelling approaches in TVB; familiarization with the TVB graphical user interface, the Python and Matlab programming environments, and data formats. Workshop format: lectures and hands-on tutorials For more details and registration, please visit http://www.thevirtualbrain.org/tvb/zwei/milestones?key=node1 Important dates: February 28th, 2014 - registration open June 1st, 2014 - registration closed (or until max. capacity is reached) June 7th, 2014 - TVB Training Workshop Location: Institute for Computational Neuroscience Martinistr. 52, Building No. W36, 20246 Hamburg, Germany Room No. 11. With our best regards, The TVB Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From Muhammad.Iqbal at ecs.vuw.ac.nz Mon Feb 24 18:44:05 2014 From: Muhammad.Iqbal at ecs.vuw.ac.nz (Muhammad.Iqbal at ecs.vuw.ac.nz) Date: Tue, 25 Feb 2014 12:44:05 +1300 Subject: Connectionists: Special Issue on Twenty Years of XCS Message-ID: <5163d2fc10aa048892ae7c71abf1b895.squirrel@mail.ecs.vuw.ac.nz> Dear Colleague, -------------------------- CALL FOR ARTICLES ----------------------- Evolutionary Intelligence Journal, Springer-Verlag Berlin Heidelberg Special Issue on Twenty Years of XCS Classifier System Manuscript submission: 1 August 2014 The introduction of XCS in 1995 by Stewart Wilson revived research in Learning Classifier Systems (LCS). Two decades later, XCS is still leading and inspiring research in evolutionary machine learning. Submissions are now invited for a special issue of Springer's Evolutionary Intelligence Journal marking 20 years of XCS. The special issue will consider original contributions as well as extended versions of papers submitted to the International Workshop on Learning Classifier Systems (IWLCS 2014) and the Evolutionary Machine Learning track of the Genetic and Evolutionary Computation Conference (GECCO 2014), co-held in Vancouver, Canada, on July 12-16, 2014. Key Dates: First submissions: 1 August 2014 First reviews to authors: 6 October 2014 Revised submissions: 3 November 2014 Revised reviews to authors: 1 December 2014 Camera ready: 15 December 2014 Guest Editorial Team: Tim Kovacs (kovacs at cs.bris.ac.uk) Kamran Shafi (k.shafi at adfa.edu.au) Ryan Urbanowicz (ryanurbanowicz at gmail.com) Muhammad Iqbal (Muhammad.Iqbal at ecs.vuw.ac.nz) -------------- next part -------------- A non-text attachment was scrubbed... Name: CFP_EI2014.pdf Type: application/pdf Size: 12796 bytes Desc: not available URL: From pelillo at dsi.unive.it Tue Feb 25 09:12:03 2014 From: pelillo at dsi.unive.it (Marcello Pelillo) Date: Tue, 25 Feb 2014 15:12:03 +0100 (CET) Subject: Connectionists: Philosophical Aspects of Pattern Recognition / Special Issue + Tutorial Message-ID: Dear colleagues, I'd like to draw your attention to the following two initiatives devoted to the "Philosophical aspects of pattern recognition" 1. Special Issue of Pattern Recognition Letters http://www.journals.elsevier.com/pattern-recognition-letters/call-for-papers/philosophical-aspects-of-pattern-recognition/ >>> Submission Deadline: July 1, 2014 2. 
ICPR 2014 Tutorial: http://www.icpr2014.org/tutorialpages/philosophicalaspects Best regards, -mp --- Marcello Pelillo, FIEEE, FIAPR Professor of Computer Science Computer Vision and Pattern Recognition Lab, Director Center for Knowledge, Interaction and Intelligent Systems (KIIS), Director DAIS Ca' Foscari University, Venice Via Torino 155, 30172 Venezia Mestre, Italy Tel: (39) 041 2348.440 Fax: (39) 041 2348.419 E-mail: marcello.pelillo at gmail.com URL: http://www.dsi.unive.it/~pelillo From nati at ttic.edu Mon Feb 24 23:40:35 2014 From: nati at ttic.edu (Nathan Srebro) Date: Tue, 25 Feb 2014 06:40:35 +0200 Subject: Connectionists: Post-Doc: Machine Learning Applied to the Social Sciences Message-ID: Post-Doc: Machine Learning Applied to the Social Sciences Applications are sought for NSF-funded post-doctoral positions in machine learning applied to the social sciences with James Evans from the University of Chicago and Nathan Srebro from TTIC and the Technion. Post-docs will be associated with Knowledge Lab (knowledgelab.org) in the Computation Institute (www.ci.uchicago.edu) at the University of Chicago (www.uchicago.edu), one of the leading research universities in the United States, and with the Toyota Technological Institute at Chicago (TTIC, www.ttic.edu), an elite computer science institute located on the University of Chicago campus, and supervised by Prof. James Evans of the University of Chicago and Prof. Nathan Srebro of TTIC and the Technion. Positions are for 2-3 years, contingent on annual reappointment. The aim is to develop and apply machine learning methods, including active learning, matrix learning and collaborative learning, to information gathering methods in the social sciences. Applicants are expected to have strong qualifications in machine learning or related fields, and will be introduced to social science research. They will have the opportunity to work with other post-docs, researchers and students in the social sciences, while being part of the vibrant machine learning group at TTIC. Minimum qualifications for this position are a PhD or expected PhD (by the end of 2014) in computer science, applied mathematics, statistics or a related field, with a background in machine learning. Women and members of underrepresented groups are encouraged to apply. Interested candidates must submit to knowledgelab at ci.uchicago.edu: 1) cover letter, describing your interest in and qualifications for pursuing interdisciplinary research; 2) curriculum vitae (including publications list); 3) contact information for three or more scholars who know your work and are willing to write letters of reference; 4) optionally, examples of working software you have written. Positions can begin immediately or anytime through Fall 2014. Compensation includes a competitive salary and benefits plan and assistance with relocation to Chicago. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fh at informatik.uni-freiburg.de Tue Feb 25 02:04:54 2014 From: fh at informatik.uni-freiburg.de (Frank Hutter) Date: Tue, 25 Feb 2014 08:04:54 +0100 Subject: Connectionists: CFP: AutoML Workshop @ ICML 2014 Message-ID: CALL FOR CONTRIBUTIONS The AutoML Workshop @ ICML 2014 Beijing, China, June 25/26, 2014 Web: http://icml2014.automl.org Email: icml2014 at automl.org ---------------------------------------------------------------- Important Dates: - Submission deadline: Friday 25 April, 2014 - Notification of acceptance: Friday 16 May, 2014 ---------------------------------------------------------------- Workshop Overview: Machine learning has achieved considerable success, but this success crucially relies on human machine learning experts to select appropriate features, workflows, ML paradigms, algorithms, and algorithm hyperparameters. Because the complexity of these tasks is often beyond non-experts, the rapid growth of machine learning applications has created a demand for machine learning methods that can be used easily and without expert knowledge. We call the resulting research area that targets progressive automation of machine learning AutoML. AutoML aims to automate many different stages of the machine learning process. Relevant topics include: - Model selection, hyper-parameter optimization, and model search - Representation learning and automatic feature extraction / construction - Reusable workflows and automatic generation of workflows - Meta learning and transfer learning - Automatic problem "ingestion" (from raw data and miscellaneous formats) - Feature coding/transformation to match requirements of different learning algorithms - Automatically detecting and handling skewed data and/or missing values - Automatic leakage detection - Matching problems to methods/algorithms (beyond regression and classification) - Automatic acquisition of new data (active learning, experimental design) - Automatic report writing (providing insight from the automatic data analysis) - User interfaces for AutoML (e.g., "Turbo Tax for Machine Learning") - Automatic inference and differentiation - Automatic selection of evaluation metrics - Automatic creation of appropriately sized and stratified train, validation, and test sets - Parameterless, robust algorithms - Automatic algorithm selection to satisfy time/space constraints at train- or run-time - Run-time wrappers to detect data shift and other causes of prediction failure We encourage contributions in any of these areas. We welcome 2-page short-form submissions and 6-page long-form submissions. Submissions should be formatted using JMLR Workshop and Proceedings format (an example LaTeX file is available on the workshop website icml2014.automl.org). We also encourage submissions of previously-published material that is closely related to the workshop topic (for presentation only). Confirmed invited speakers: - Dan Roth: Language designed for novice ML developers - Holger Hoos: Programming by Optimization - Yoshua Bengio: Representation learning - Jasper Snoek: Hyper-parameter optimization - Vikash Masingka: Probabilistic programming Advisory Committee: James Bergstra, Nando de Freitas, Roman Garnett, Matt Hoffman, Michael Osborne, Alice Zheng Organizers: Frank Hutter, Rich Caruana, Rémi Bardenet, Misha Bilenko, Isabelle Guyon, Balázs Kégl, and Hugo Larochelle -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From christos.dimitrakakis at gmail.com Wed Feb 26 14:27:26 2014 From: christos.dimitrakakis at gmail.com (Christos Dimitrakakis) Date: Wed, 26 Feb 2014 20:27:26 +0100 Subject: Connectionists: CFP: ICML 2014 Workshop on Learning, Security and Privacy Message-ID: <530E401E.1070106@gmail.com> (Apologies for crossposting.) CALL FOR PAPERS ICML 2014 Workshop on Learning, Security and Privacy Beijing, China, 25 or 26 June, 2014 (TBD) https://sites.google.com/site/learnsecprivacy2014/ ---------------------------------------------------------------- Important Dates: - Submission deadline: 28 March, 2014 - Notification of acceptance: 18 April, 2014 ---------------------------------------------------------------- Workshop overview: Many machine learning settings give rise to security and privacy requirements which are not well-addressed by traditional learning methods. Security concerns arise in intrusion detection, malware analysis, biometric authentication, spam filtering, and other applications where data may be manipulated - either at the training stage or during the system deployment - to reduce prediction accuracy. Privacy issues are common to the analysis of personal and corporate data ubiquitous in modern Internet services. Learning methods addressing security and privacy issues face an interplay of game theory, cryptography, optimization and differential privacy. Despite encouraging progress in recent years, many theoretical and practical challenges remain. Several emerging research areas, including stream mining, mobility data mining, and social network analysis, require new methodical approaches to ensure privacy and security. There is also an urgent need for methods that can quantify and enforce privacy and security guarantees for specific applications. The ever increasing abundance of data raises technical challenges to attain scalability of learning methods in security and privacy critical settings. These challenges can only be addressed in the interdisciplinary context, by pooling expertise from the traditionally disjoint fields of machine learning, security and privacy. To encourage scientific dialogue and foster cross-fertilization among these three fields, the workshop invites original submissions, ranging from ongoing research to mature work, in any of the following core subjects: - Statistical approaches for privacy preservation. - Private decision making and mechanism design. - Metrics and evaluation methods for privacy and security. - Robust learning in adversarial environments. - Learning in unknown / partially observable stochastic games. - Distributed inference and decision making for security. - Application-specific privacy preserving machine learning and decision theory. - Secure multiparty computation and cryptographic approaches for machine learning. - Cryptographic applications of machine learning and decision theory. - Security applications: Intrusion detection and response, biometric authentication, fraud detection, spam filtering, captchas. - Security analysis of learning algorithms - The economics of learning, security and privacy. Submission instructions: Submissions should be in the ICML 2014 format, with a maximum of 6 pages (including references). Work must be original. Accepted papers will be made available online at the workshop website. Submissions need not be anonymous. Submissions should be made through EasyChair: https://www.easychair.org/conferences/?conf=lps2014. For detailed submission instructions, please refer to the workshop website. 
Organizing committee: Christos Dimitrakakis (Chalmers University of Technology, Sweden). Pavel Laskov (University of Tuebingen, Germany). Daniel Lowd (University of Oregon, USA). Benjamin Rubinstein (University of Melbourne, Australia). Elaine Shi (University of Maryland, College Park, USA). Program committee (preliminary): Michael Brückner (Amazon, Germany) Battista Biggio (University of Cagliari, Italy) Alvaro Cardenas (University of Texas, Dallas, USA) Kamalika Chaudhuri (UCSD, USA) Alex Kantchelian (UC Berkeley, USA) Aikaterini Mitrokotsa (Chalmers University, Sweden) Blaine Nelson (University of Potsdam, Germany) Konrad Rieck (University of Goettingen, Germany) Nedim Srndic (University of Tuebingen) Aaron Roth (University of Pennsylvania, USA) Risto Vaarandi (NATO CCDCOE, Estonia) Shobha Venkataraman (AT&T Research, USA) Ting-Fang Yen (EMC, USA) -- Christos Dimitrakakis http://www.cse.chalmers.se/~chrdimi/ From m.reske at fz-juelich.de Tue Feb 25 11:08:55 2014 From: m.reske at fz-juelich.de (Martina Reske) Date: Tue, 25 Feb 2014 17:08:55 +0100 Subject: Connectionists: Postdoc position in computational neuroscience in Jülich, Germany, within the Helmholtz young investigator's group "Theory of multi-scale neuronal networks" Message-ID: <530CC017.3070703@fz-juelich.de> Dear colleagues, The Institute of Neuroscience and Medicine (INM) at the Research Center Jülich investigates the structure and function of the brain. The department INM-6 consists of 4 groups that conduct research in the field of computational and systems neuroscience (www.csn.fz-juelich.de). Within the INM-6 the Helmholtz young investigator's group "Theory of multi-scale neuronal networks" focuses on the mechanisms shaping the (correlated and oscillatory) activity in neuronal networks with structured connectivity on several spatial scales, from synaptic specificity within cortical layers to inter-area connections. We aim at a quantitative and mechanistic understanding of features of experimentally observed neuronal activity by developing and adapting analytical methods from statistical physics (e.g. Fokker-Planck equations, mean-field approaches, path integral methods) combined with direct simulations of large-scale neuronal networks at cellular resolution. The Helmholtz young investigator's group "Theory of multi-scale neuronal networks" is looking to recruit a Postdoctoral Researcher (f/m) Your Job: The candidate will coordinate, on the scientific and organizational level, research projects to investigate the relationships between neuronal connectivity on different scales and spatially and temporally structured activity, by developing a sequence of theoretical descriptions from the microscopic level of spiking neurons to effective equations of interacting areas. This involves the development and application of analytical and simulation tools. As a member of our team, the candidate will supervise PhD students and, in collaboration with our experimental partners, define relevant and realistic research projects in the context of the long term research program of the institute. 
Your Profile: * University degree (master's degree or equivalent) in physics or mathematics, and a PhD in a quantitative science * Appreciation for the work with PhD students, understanding of scientific progress as a collaborative achievement * Enthusiasm about the combination of analytical and numerical approaches that enable a quantitative understanding of the dynamics in neuronal networks * Good programming skills and basic knowledge of software development in either Python, C++, or Matlab * Experience in scientific writing in a related field (demonstrated by a corresponding publication record) Our Offer: * A position in a creative and international team, themes ranging from computational neuroscience to simulation technology * Excellent scientific equipment, located on a green campus, and near the cultural centers Köln, Düsseldorf and Aachen * Employment for a fixed term of initially 2 years We look forward to receiving your application, preferably online via our online recruitment system (see here: http://www.fz-juelich.de/SharedDocs/Stellenangebote/_common/dna/2014-028-EN-INM-6.html), quoting the reference number 2014-028. Contact: Barbara Kranen Fon: +49 2461 61-9700 www.fz-juelich.de Kind regards, Martina -- -- Dr. Martina Reske Scientific Coordinator Institute of Neuroscience and Medicine (INM-6) Computational and Systems Neuroscience & Institute for Advanced Simulation (IAS-6) Theoretical Neuroscience Jülich Research Centre and JARA Jülich, Germany Work +49.2461.611916 Work Cell +49.151.26156918 Fax +49.2461.619460 www.csn.fz-juelich.de ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ Forschungszentrum Juelich GmbH 52425 Juelich Sitz der Gesellschaft: Juelich Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498 Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender), Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, Prof. Dr. Sebastian M. Schmidt ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dayan at gatsby.ucl.ac.uk Wed Feb 26 17:57:06 2014 From: dayan at gatsby.ucl.ac.uk (Peter Dayan) Date: Wed, 26 Feb 2014 22:57:06 +0000 Subject: Connectionists: MQ fellowship for mental health research Message-ID: <20140226225706.GA17414@gatsby.ucl.ac.uk> MQ is a charity that funds research into mental health. They have a fellowship programme aimed at new faculty (or about to be faculty) working in any country and any area of mental health research. It provides up to £75,000 a year for three years. 
MQ has very eclectic interests in the field - including computational approaches - so if you're interested, please follow the links from: http://www.joinmq.org/research/pages/fellows-programme Peter From schwarzwaelder at bcos.uni-freiburg.de Thu Feb 27 04:58:30 2014 From: schwarzwaelder at bcos.uni-freiburg.de (Kerstin Schwarzwälder) Date: Thu, 27 Feb 2014 10:58:30 +0100 Subject: Connectionists: Call for applications: Brains for Brains Young Researchers' Computational Neuroscience Award 2014 Message-ID: <530F0C46.50802@bcos.uni-freiburg.de> Dear colleagues, for the fifth time, the Bernstein Association for Computational Neuroscience is announcing the "Brains for Brains Young Researchers' Computational Neuroscience Award". The call is open for researchers of any nationality who have contributed to a peer reviewed publication (as coauthor) or peer reviewed conference abstract (as first author) that was submitted before starting their doctoral studies, is written in English and was accepted or published in 2013 or 2014. The award comprises 500 € prize money, plus a travel grant of up to 2,000 € to cover a trip to Germany, including participation in the Bernstein Conference 2014 in Göttingen (www.bernstein-conference.de), and an individually planned visit to up to two German research institutions in Computational Neuroscience. Deadline for application is April 25, 2014. Detailed information about the application procedure can be found under: www.nncn.de/en/bernstein-association/brains-for-brains-2014 For inquiries please contact info at bcos.uni-freiburg.de Best regards, Kerstin Schwarzwälder -- Dr. Kerstin Schwarzwälder Bernstein Coordination Site of the National Bernstein Network Computational Neuroscience Albert Ludwigs University Freiburg Hansastr. 9A 79104 Freiburg Germany phone: +49 761 203 9594 fax: +49 761 203 9585 schwarzwaelder at bcos.uni-freiburg.de www.nncn.de Twitter: NNCN_Germany YouTube: Bernstein TV Facebook: Bernstein Network Computational Neuroscience, Germany LinkedIn: Bernstein Network Computational Neuroscience, Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: From benoit.frenay at uclouvain.be Fri Feb 28 02:45:34 2014 From: benoit.frenay at uclouvain.be (Benoit Frenay) Date: Fri, 28 Feb 2014 08:45:34 +0100 Subject: Connectionists: Last Deadline Extension: Neurocomputing Special Issue on Advances in Learning with Label Noise Message-ID: <53103E9E.60409@uclouvain.be> An HTML attachment was scrubbed... URL: From ecai2014 at guarant.cz Fri Feb 28 04:20:02 2014 From: ecai2014 at guarant.cz (ECAI 2014) Date: Fri, 28 Feb 2014 10:20:02 +0100 Subject: Connectionists: ECAI 2014 - Workshops Message-ID: <20140228092002.703DF17434C@gds25d.active24.cz> ** apologies for cross-posting ** ECAI 2014 Conference August 18-22, 2014 Prague, Czech Republic LIST OF ACCEPTED WORKSHOPS W1 - 3rd International Workshop on Computational Creativity, Concept Invention, and General Intelligence (C3GI) - 19.8. Tarek Richard Besold, Kai-Uwe Kuehnberger, Alan Smaill and Marco Schorlemmer W2 - ECAI 2014 Workshop on Computer Games - 18.8. Tristan Cazenave, Mark Winands and Yngvi Björnsson W3 - MetaSel - Meta-learning & Algorithm Selection - 19.8. Pavel Brazdil, Carlos Soares, Joaquin Vanschoren and Lars Kotthoff W4 - "2nd European Workshop on Chance Discovery and Data Synthesis (EWCDDS14)" - 18.-19.8. Akinori Abe and Yukio Ohsawa W5 - 9th Workshop on Agents Applied in Health Care - 18.8. Antonio Moreno, Ulises Cortés, Magí 
W5 - 9th Workshop on Agents Applied in Health Care - 18.8. Antonio Moreno, Ulises Cortés, Magí Lluch-Ariet, Helena Lindgren, Michael Ignaz Schumacher and David Isern
W6 - "International Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014)" - 19.8. Stefan Ellmauthaler and Jörg Pührer
W7 - COmbining COnstraint solving with MIning and LEarning (CoCoMiLe) - 19.8. Lars Kotthoff, Barry O'Sullivan and Georgiana Ifrim
W8 - IAT4SIS - 2014 Workshop on Intelligent Agents and Technologies for Socially Interconnected Systems - 18.8. Ana Paula Rocha, Virginia Dignum, Eugenio Oliveira, Laurent Vercouter and Huib Aldewereld
W9 - DARe-14: International Workshop on Defeasible and Ampliative Reasoning - 19.8. Richard Booth, Giovanni Casini, Szymon Klarman, Gilles Richard and Ivan Varzinczak
W10 - Workshop on Multi-Agent Coordination in Robotic Exploration - 18.8. Jan Faigl and Olivier Simonin
W11 - Artificial Intelligence meets Business Processes and Services (AIBPS2014 @ECAI) - 18.8. Stefania Montani, Grzegorz Nalepa and Daniele Theseider Dupre
W12 - CogRob 2014 - The 9th International Workshop on Cognitive Robotics - 18.-19.8. Esra Erdem and Fredrik Heintz
W13 - "What can FCA do for Artificial Intelligence?" (Third FCA4AI Workshop) - 19.8. Sergei O. Kuznetsov, Amedeo Napoli and Sebastian Rudolph
W14 - Industrial Applications of Holonic and Multi-Agent Systems (APL-MAS) - 18.8. Pavel Vrba, Vladimir Marik and Thomas Strasser
W15 - Third international workshop on AI Problems and Approaches for Intelligent Environments (AI4IE) - 18.8. Sebastian Bader, Anika Schumann and Stephan Sigg
W16 - Artificial Intelligence meets Web of Knowledge (AIWK) - 19.8. Papini Odile, Salem Benferhat, Laurent Garcia, Marie-Laure Mugnier
W17 - ERLARS 2014 - 7th International Workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems - 19.8. Nils Siebel
W18 - 15th International Workshop on Computational Logic in Multi-Agent Systems (CLIMA XV) - 18.-19.8. Nils Bulling, Leon van der Torre and Serena Villata
W19 - 3rd International Workshop on Artificial Intelligence and Assistive Medicine (A-AIM/NETMED 2014) - 18.8. Constantine D. Spyropoulos, Aldo Dragoni and Stavros Perantonis
W20 - 10th International Workshop on Knowledge Engineering and Software Engineering (KESE2014) - 19.8. Grzegorz J. Nalepa, Joachim Baumeister and Krzysztof Kaczor
W21 - Workshop on Artificial Intelligence for Ambient Assisted Living (AI4ALL) - 18.8. Francisco Florez-Revuelta, Dorothy N. Monekosso, Paolo Remagnino, Feng Gu and Alexandros A. Chaaraoui

More information may be found on the website http://www.ecai2014.org/workshops/

Marina De Vos and Karl Tuyls (ECAI 2014 Workshops Chairs)

From jerryzhu at cs.wisc.edu Fri Feb 28 12:30:27 2014 From: jerryzhu at cs.wisc.edu (Jerry Zhu) Date: Fri, 28 Feb 2014 11:30:27 -0600 (CST) Subject: Connectionists: CFP: ICML 2014 Workshop on Topological Methods for Machine Learning Message-ID:

Call for Papers
ICML Workshop on Topological Methods for Machine Learning
June 2014, Beijing, China
http://topology.cs.wisc.edu

This workshop aims to translate advances in computational topology (e.g., homology, cohomology, persistence, Hodge theory) into machine learning algorithms and applications. Topology has the potential to be a new mathematical tool for machine learning. We expect the workshop to bring topologists, statisticians and machine learning researchers closer to realize this potential. Computational topology saw three major developments in recent years: persistent homology, Euler calculus and Hodge theory.
Persistent homology extracts stable homology groups against noise; Euler calculus encodes integral geometry and is easier to compute than persistent homology or Betti numbers; Hodge theory connects geometry to topology via optimization and spectral methods. All three techniques are related to Morse theory, which is inspiring new computational tools or algorithms for data analysis. Computational topology has inspired a number of applications in the last few years, including game theory, graphics, image processing, multimedia, neuroscience, numerical PDE, peridynamics, ranking, robotics, voting theory, sensor networks, and natural language processing. Which promising directions in computational topology can mathematicians and machine learning researchers work on together, in order to develop new models, algorithms, and theory for machine learning? While all aspects of computational topology are appropriate for this workshop, our emphasis is on topology applied to machine learning -- concrete models, algorithms and real-world applications.

Topics

We seek papers in all areas where topology and machine learning interact, especially on translating computational topology into new machine learning algorithms and applications. Topics include, but are not limited to, the following:
- Models in machine learning where topology plays an important role;
- Applications of topology in all areas related to machine learning and human cognition;
- Statistical properties for topological inference;
- Algorithms based on computational topology;
- Feature extraction with topological methods.

Submissions

Papers should be 4-page extended abstracts (excluding references) on topics relevant to the workshop. Papers must be formatted in ICML style following this webpage: http://icml.cc/2014/14.html. Please email PDF submissions to topologyicml2014 at gmail.com.

Submission due date: 3/21/2014
Author notification: 4/18/2014

Organizers

Lek-Heng Lim, University of Chicago
Yuan Yao, Peking University
Jerry Zhu, University of Wisconsin-Madison
Jun Zhu, Tsinghua University

Questions and comments can be directed to topologyicml2014 at gmail.com.

From d.mandic at imperial.ac.uk Fri Feb 28 10:28:57 2014 From: d.mandic at imperial.ac.uk (Danilo Mandic) Date: Fri, 28 Feb 2014 15:28:57 +0000 Subject: Connectionists: Two faculty openings in Big Data Science at Imperial College London Message-ID: <5310AB39.4070108@imperial.ac.uk>

Two faculty positions in Big Data Science are available at the Assistant Professor / Associate Professor level at the Department of Electrical and Electronic Engineering, Imperial College London, UK. If you are interested, please find more details at http://www.jobs.ac.uk/job/AIF563/lecturer-senior-lecturer-reader/ The closing date is 26 March 2014.

Danilo

From dchau at cs.cmu.edu Thu Feb 27 02:40:44 2014 From: dchau at cs.cmu.edu (Polo Chau) Date: Thu, 27 Feb 2014 02:40:44 -0500 Subject: Connectionists: KDD'14 tutorial proposals due 3/15; workshop proposals due 3/7 Message-ID: <4A985105-C87B-40AE-B224-27466967BEDF@cs.cmu.edu>

Dear friends and colleagues,

The due dates for proposing tutorials and workshops for KDD'14 are fast approaching! Submit your proposals soon to get the chance to reach out to thousands of data scientists, researchers, practitioners, students and more. KDD'14, a top data science conference, will be held in New York City, August 24-27, 2014.
To submit a workshop proposal (due 3/7) or a tutorial proposal (due 3/15), visit: http://www.kdd.org/kdd2014/calls.html

Cheers,
Polo Chau, Kaitlin Atkinson, Ankur Teredesai
KDD'14 Publicity and Media Chairs

From M.M.vanZaanen at uvt.nl Fri Feb 28 04:13:03 2014 From: M.M.vanZaanen at uvt.nl (Menno van Zaanen) Date: Fri, 28 Feb 2014 10:13:03 +0100 Subject: Connectionists: CFP COLING Workshop on Computational Approaches to Compound Analysis (ComAComA) Message-ID: <20140228091303.GJ4455@pinball.uvt.nl>

Call for Papers
The First Workshop on Computational Approaches to Compound Analysis (ComAComA 2014)
at COLING 2014
Dublin, Ireland, 23/24 August, 2014

DESCRIPTION

The ComAComA workshop is an interdisciplinary platform for researchers to present recent and ongoing work on compound processing in different languages. Given the high productivity of compounding in a wide range of languages, compound processing is an interesting subject in linguistics, computational linguistics, and other applied disciplines. For example, for many language technology applications, compound processing remains a challenge (both morphologically and semantically), since novel compounds are created and interpreted on the fly. In order to deal with this productivity, systems that can analyse new compound forms and their meanings need to be developed. From an interdisciplinary perspective, we also need to better understand the process of compounding (for instance, as a cognitive process), in order to model its complexity.

The workshop has several related aims. Firstly, it will bring together researchers from different research fields (e.g., computational linguistics, linguistics, neurolinguistics, psycholinguistics, language technology) to discuss various aspects of compound processing. Secondly, the workshop will provide an overview of the current state-of-the-art research, as well as desired resources for future research in this area. Finally, we expect that the interdisciplinary nature of the workshop will result in methodologies to evaluate compound processing systems from different perspectives.

KEYNOTE SPEAKERS

Diarmuid Ó Séaghdha (University of Cambridge)
Andrea Krott (University of Birmingham)

TOPICS OF INTEREST

The ComAComA workshop solicits papers on original and unpublished research on the following topics, including, but not limited to:
** Annotation of compounds for computational purposes
** Categorisation of compounds (e.g. different taxonomies)
** Classification of compound semantics
** Compound splitting
** Automatic morphological analysis of compounds
** Compound processing in computational psycholinguistics
** Psycho- and/or neurolinguistic aspects of compound processing
** Theoretical and/or descriptive linguistic aspects of compound processing
** Compound paraphrase generation
** Applications of compound processing
** Resources for compound processing
** Evaluation methodologies for compound processing

PAPER REQUIREMENTS

** Papers should describe original work, with room for completed work, well-advanced ongoing research, or contemplative, novel ideas. Papers should indicate clearly the state of completion of the reported results. Wherever appropriate, concrete evaluation results should be included.
** Submissions will be judged on correctness, originality, technical strength, significance and relevance to the conference, and interest to the attendees.
** Submissions presented at the conference should mostly contain new material that has not been presented at any other meeting with publicly available proceedings. Papers that are being submitted in parallel to other conferences or workshops must indicate this on the title page, as must papers that contain significant overlap with previously published work.

REVIEWING

Reviewing will be double blind. It will be managed by the organisers of ComAComA, assisted by the workshop's Program Committee (see details below).

INSTRUCTIONS FOR AUTHORS

** All papers will be included in the COLING workshop proceedings, in electronic form only.
** The maximum submission length is 8 pages (A4), plus two extra pages for references.
** Authors of accepted papers will be given additional space in the camera-ready version to reflect space needed for changes stemming from reviewers' comments.
** The only mode of delivery will be oral; there will be no poster presentations.
** Papers shall be submitted in English, anonymised with regard to the authors and/or their institution (no author-identifying information on the title page nor anywhere in the paper), including referencing style as usual.
** Papers must conform to official COLING 2014 style guidelines, which are available on the COLING 2014 website (see also links below).
** The only accepted format for submitted papers is PDF.
** Submission and reviewing will be managed online by the START system (see link below). Submissions must be uploaded on the START system by the submission deadlines; submissions after that time will not be reviewed. To minimise network congestion, we request authors to upload their submissions as early as possible.
** In order to allow participants to be acquainted with the published papers ahead of time, which in turn should facilitate discussions at the workshop, the official publication date has been set two weeks before the conference, i.e., on August 11, 2014. On that day, the papers will be available online for all participants to download, print and read. If your employer is taking steps to protect intellectual property related to your paper, please inform them about this timing.
** While submissions are anonymous, we strongly encourage authors to plan for depositing language resources and other data, as well as tools used and/or developed for the experiments described in the papers, if the paper is accepted. In this respect, we then encourage authors to deposit resources and tools in available open-access repositories of language resources and/or repositories of tools (such as META-SHARE, Clarin, ELRA, LDC or AFNLP/COCOSDA for data, and github, sourceforge, CPAN and similar for software and tools), and refer to them instead of submitting them with the paper.
COLING 2014 STYLE FILES

Download a zip file with style files for LaTeX, MS Word and Libre Office here: http://www.coling-2014.org/doc/coling2014.zip

IMPORTANT DATES

May 2, 2014: Paper submission deadline
June 6, 2014: Author notification deadline
June 27, 2014: Camera-ready PDF deadline
August 11, 2014: Official paper publication date
August 23/24, 2014: ComAComA Workshop (exact date still unknown)
(August 25-29, 2014: Main COLING conference)

URLs

Main COLING conference: http://www.coling-2014.org/
Workshop: http://tinyurl.com/comacoma
Paper submission: https://www.softconf.com/coling2014/WS-17/
Style sheets: http://www.coling-2014.org/doc/coling2014.zip

ORGANISING COMMITTEE

Ben Verhoeven (University of Antwerp, Belgium) ben.verhoeven at uantwerpen.be
Walter Daelemans (University of Antwerp, Belgium) walter.daelemans at uantwerpen.be
Menno van Zaanen (Tilburg University, The Netherlands) mvzaanen at uvt.nl
Gerhard van Huyssteen (North-West University, South Africa) gerhard.vanhuyssteen at nwu.ac.za

PROGRAM COMMITTEE (preliminary)

** Preslav Nakov (University of California in Berkeley)
** Iris Hendrickx (Radboud University Nijmegen)
** Gary Libben (University of Alberta)
** Lonneke Van der Plas (University of Stuttgart)
** Helmut Schmid (Ludwig Maximilian University Munich)
** Fintan Costello (University College Dublin)
** Roald Eiselen (North-West University)
** Su Nam Kim (University of Melbourne)
** Pavol Štekauer (P.J. Safarik University)
** Arina Banga (University of Debrecen)
** Diarmuid Ó Séaghdha (University of Cambridge)
** Rochelle Lieber (University of New Hampshire)
** Vivi Nastase (Fondazione Bruno Kessler)
** Tony Veale (University College Dublin)
** Pius ten Hacken (University of Innsbruck)
** Anneke Neijt (Radboud University Nijmegen)
** Andrea Krott (University of Birmingham)
** Emmanuel Keuleers (Ghent University)
** Stan Szpakowicz (University of Ottawa)