From boris.gutkin at ens.fr Tue Apr 1 04:04:53 2014 From: boris.gutkin at ens.fr (Boris) Date: Tue, 1 Apr 2014 10:04:53 +0200 Subject: Connectionists: Call for applications to Master Program "Cognitive sciences and technologies: from neuron to cognition" in HSE References: Message-ID: <94ACCD16-F691-4114-BF5E-0E4387E75587@ens.fr> -------- CALL FOR APPLICATIONS TO MASTER PROGRAM "COGNITIVE SCIENCES AND TECHNOLOGIES: FROM NEURON TO COGNITION" IN HSE -------------------- The 2-year Master's Program "Cognitive sciences and technologies: from neuron to cognition" invites highly motivated students to apply. The master's program is designed for students interested in connecting cognition, neuroscience and computer science. The earliest applications have the best chances of obtaining a full tuition fee exemption. Scholarships are also available. Program's Web site: http://psy.hse.ru/cogito Application period for non-residents: March 1, 2014 - July 15, 2014. Early applications (before April 20) are encouraged. Location: The Higher School of Economics, Moscow, Russia. From arno.onken at iit.it Tue Apr 1 13:14:39 2014 From: arno.onken at iit.it (Arno Onken) Date: Tue, 01 Apr 2014 19:14:39 +0200 Subject: Connectionists: Bernstein Workshop "Characterizing Natural Scenes": Call for Abstracts/Contributed Talks Message-ID: <533AF3FF.30005@iit.it> Call for Abstracts/Contributed Talks You are invited to participate in the pre-conference Bernstein workshop "Characterizing Natural Scenes: Retinal Coding and Statistical Theory", taking place in Göttingen, Germany, on September 2-3, 2014. Please find the preliminary schedule at the workshop URL: https://sites.google.com/site/jiankliu/Meetings/2014bccn_workshop The workshop chairs will select a couple of abstracts for short oral presentation at the workshop. All abstracts will be presented as posters during the Bernstein conference on September 3-4. Depending on the number and quality of abstracts received, the workshop may also hold a separate poster session. To participate in the workshop, please submit your abstract (the extended version is preferred) as a PDF attachment via email to Jian Liu (jian.liu at med.uni-goettingen.de) or Arno Onken (arno.onken at iit.it). The deadline for submission is August 1. REGISTRATION: Please refer to the Bernstein Conference 2014 for registration and venue information. http://www.bernstein-conference.de/ WORKSHOP DESCRIPTION: How does the retina process natural visual inputs? What role do the many types of retinal ganglion cells (RGCs) play in this? Experiments with specific artificial stimuli have suggested that individual types of RGCs may have specific functional roles in visual processing, yet it is not clear how these simplified functional investigations relate to the processing of natural images and movies. An important ingredient for future analysis will be a better understanding of the complex statistical properties of natural scenes, as revealed by theoreticians. The relationship between natural visual statistics and retinal coding provides a promising direction for improving our understanding of the visual system. But we still lack a systematic framework for understanding the underlying mechanisms of how relevant features of natural scenes are encoded by the retina. In this workshop, we expect mutual benefits for both natural scene statistics and retinal coding.
We will bring together experimentalists and theoreticians in order to highlight recent progress, encourage exchange of insights, and stimulate new ideas for future work around the following core questions: 1) How can we develop useful descriptions of the statistics of natural scenes that are relevant for retinal coding? 2) How can we characterize the functional roles of different RGC types in processing natural scenes? 3) Which coding strategies are present at the level of RGC populations? 4) How can we unify the acquired knowledge of natural scenes and neural data to develop better tools for analysis? CONFIRMED SPEAKERS: * Vijay Balasubramanian (University of Pennsylvania) * Philipp Berens (BCCN, Tübingen) * Thomas Euler (University of Tübingen) * Felix Franke (ETH Zurich) * Olivier Marre (Vision Institute, Paris) * Aman Saleem (University College London) * Maneesh Sahani (University College London) * Rava A. da Silveira (ENS, Paris) We are looking forward to seeing you in Göttingen. Workshop organizers: Jian Liu (University Medical Center Göttingen and BCCN Göttingen) Arno Onken (Center for Neuroscience and Cognitive Systems, Istituto Italiano di Tecnologia Rovereto) From acriss at syr.edu Tue Apr 1 16:41:19 2014 From: acriss at syr.edu (Amy Criss) Date: Tue, 1 Apr 2014 16:41:19 -0400 Subject: Connectionists: postdoctoral positions in cognitive modeling at Syracuse University Message-ID: Two post-doctoral fellows are sought to conduct research under the direction of Drs. Amy Criss and Michael Kalish in the Department of Psychology at Syracuse University. The broad goals of the research program are to develop and evaluate models of human learning and memory using behavioral experiments, advanced statistical techniques (including hierarchical Bayesian analysis) and mathematical cognitive modeling. Dr Criss's primary work is in episodic memory, while Dr Kalish's focus is on category learning processes. We are looking for a candidate in each of these areas, and are especially interested in scholars who can bridge the two. Successful candidates will have a PhD in cognitive psychology, cognitive science or related fields (computer science, applied mathematics, neuroscience) and will have relevant research experience. The funding is for one year, with subsequent years possible based on positive evaluation and availability of funds. Salary will be commensurate with experience. Apply online at https://www.sujobopps.com/hr/postings/53456 (for Dr Criss) or https://www.sujobopps.com/postings/53726 (for Dr Kalish) and attach a curriculum vitae, a brief statement of research interests (including reference to the research of Criss/Kalish respectively), and contact information for 3 references. Review of applications will begin immediately, and applications will be accepted until the positions are filled. An appointment may begin as early as June 2014. SU is an EEO/AA employer.
From irodero at cac.rutgers.edu Tue Apr 1 22:43:02 2014 From: irodero at cac.rutgers.edu (Ivan Rodero) Date: Tue, 1 Apr 2014 22:43:02 -0400 Subject: Connectionists: CAC 2014 - Call for Papers (EXTENDED deadline - abstracts due April 7) In-Reply-To: <5F0F4666-E02C-4D89-BD56-596EE179E001@rutgers.edu> References: <51EC7783-DCAD-4364-B1DC-576C726BAA31@rutgers.edu> <6F339279-23CD-4553-95DF-B1F906F948E3@rutgers.edu> <0957F75F-5AB9-4144-B62D-87D225B34E42@rutgers.edu> <22CF346C-98EC-4D5B-9500-D0B9FE60551A@rutgers.edu> <99D17D7E-34C0-47B7-B641-C756E67D169A@rutgers.edu> <87202061-93AC-4066-89E7-77976097AFAB@rutgers.edu> <79403393-1690-4DCB-855A-1EE231D5ED2B@rutgers.edu> <5D63B8C8-3FD3-4241-9021-CBD7F84DCAA7@rutgers.edu> <8569604B-1ABF-4E84-80DF-02604DF42043@rutgers.edu> <5F0F4666-E02C-4D89-BD56-596EE179E001@rutgers.edu> Message-ID: <3A3F8015-8025-407C-B714-D4BD97BCA581@rutgers.edu> CAC 2014 Call for Papers ==================== The International Conference on Cloud and Autonomic Computing (CAC-2014) Part of FAS* - Foundation and Applications of Self* Computing Conferences Collocated with The 8th IEEE Self-Adaptive and Self-Organizing System Conference The 14th IEEE Peer-to-Peer Computing Conference Imperial College, London, September 8-12, 2014 http://www.autonomic-conference.org Join our LinkedIn group at: http://www.linkedin.com/groups?home=&gid=7446715&trk=anet_ug_hm Important Dates Abstract registration: April 7, 2014 (Extended) Paper submission due: April 14, 2014 (Extended) Notification to authors: June 12, 2014 Final paper due: July 12, 2014 Please find attached the complete CFP in PDF format. ============================================================= Ivan Rodero, Ph.D. Rutgers Discovery Informatics Institute (RDI2) NSF Center for Cloud and Autonomic Computing (CAC) Department of Electrical and Computer Engineering Rutgers, The State University of New Jersey Office: CoRE Bldg, Rm 624 94 Brett Road, Piscataway, NJ 08854-8058 Phone: (732) 993-8837 Fax: (732) 445-0593 Email: irodero at rutgers dot edu WWW: http://nsfcac.rutgers.edu/people/irodero =============================================================
From poirazi at imbb.forth.gr Thu Apr 3 06:02:18 2014 From: poirazi at imbb.forth.gr (Yiota Poirazi) Date: Thu, 03 Apr 2014 13:02:18 +0300 Subject: Connectionists: DENDRITES 2014, July 1-4, Crete: LAST CALL & LOWER REGISTRATION Message-ID: <533D31AA.4050301@imbb.forth.gr> Apologies for cross-posting! Please distribute to interested parties. many thanks! Yiota Poirazi ------------------------------------------------------- DENDRITES 2014 4^th NAMASEN Training Workshop Heraklion, Crete, Greece. July 1-4, 2014 http://dendrites2014.gr/ info at dendrites2014.gr * * * * * FINAL DEADLINE EXTENSION* * * * * Abstract submission for poster and oral presentation has been extended to *April 14th, 2014. This is the final submission date.* * * * * * *REGISTRATION WAIVERS FOR STUDENTS/POSTDOCS** * * * * Several registration waivers are still available!! Waivers are provided by IBRO PERC funding for Symposia, Workshops & Meetings. Students and postdocs presenting an abstract are eligible. To request a waiver, please send an email to info at dendrites2014.org including information about your submitted abstract.
Travel funds might also become available and will be announced on the web site. * * * * **LOWER REGISTRATION FEES if registering by April 30th, 2014** * * * * We are happy to announce that, given the availability of new sponsorships, we can provide lower registration fees (-40 Euro) for attendees registering by April 30th, 2014. People who have already registered will be reimbursed. * * * * * EMBO KEYNOTE LECTURE* * * * * We are happy to announce that Prof. Tonegawa's lecture will be supported by EMBO, as the "EMBO Keynote Lecture". This award, given to EMBO members presenting at major international scientific meetings, highlights the international standing of DENDRITES 2014. ---------------------------------------- Dendrites 2014 aims to bring together scientists from around the world to present their research on dendrites, ranging from the molecular to the anatomical and biophysical levels. The Workshop will take place on the beautiful island of Crete, just after the AREADNE meeting (areadne.org), which takes place on the nearby island of Santorini. Note that Santorini is just a 2-hour boat ride from Crete. One could combine two meetings in a single trip to Greece! FORMAT AND SPEAKERS The meeting consists of the Main Event (July 1-3) and the Soft Skills Day (July 4). Invited speakers for the Main Event include: * Susumu Tonegawa (Nobel Laureate), * Angus Silver, * Tiago Branco, * Alcino J. Silva, * Michael Häusser, * Julietta U. Frey, * Stefan Remy, and * Kristen Harris. For the Soft Skills Day, Kerri Smith, Nature podcast editor, is going to present on communication and dissemination of scientific results. Alcino J. Silva (UCLA) will present his recent work on developing tools for integrating and planning research in Neuroscience. There will be a hands-on session on tools used to model neural cells/networks and a talk about the advantages of publishing under the Creative Commons License. CALL FOR ABSTRACTS Submissions of abstracts for poster and oral presentations are due by *14 April 2014*. We welcome both experimental and theoretical contributions addressing novel findings related to dendrites. For details, please see http://dendrites2014.gr/call/. Electronic abstract submission is hosted on the Frontiers platform, where extended abstracts will be published as an online, open access special issue (in Frontiers in Cellular Neuroscience, Impact factor 4.5). One of the authors has to register for the main event as a presenting author. In case an abstract is not accepted for presentation, the registration fee will be refunded. FOR FURTHER INFORMATION Please see the conference web site (http://dendrites2014.gr/), subscribe to our twitter (https://twitter.com/dendrites2014) or RSS feeds (http://dendrites2014.gr/rss_feed/news.xml), or send an email to info at dendrites2014.org . -- Panayiota Poirazi, Ph.D. Research Director Institute of Molecular Biology and Biotechnology (IMBB) Foundation of Research and Technology-Hellas (FORTH) Vassilika Vouton, P.O.Box 1385 GR 711 10 Heraklion, Crete, GREECE Tel: +30 2810 391139 Fax: +30 2810 391101 Email: poirazi at imbb.forth.gr http://www.dendrites.gr From h.glotin at gmail.com Thu Apr 3 13:45:29 2014 From: h.glotin at gmail.com (Herve Glotin) Date: Thu, 3 Apr 2014 19:45:29 +0200 Subject: Connectionists: [CFP] ICML uLearnBio 2014: int.
wkp on Unsupervised Learning from Bioacoustic Big Data Message-ID: [ Apologies for cross-posting - new call for papers - uLearnBio at ICML 2014] - Int. Workshop on Unsupervised Learning from Bioacoustic Big Data - joint with ICML 2014 - Int. Conference on Machine Learning - 25/26 June, Beijing, China Prog. Com.: Chamroukhi, Glotin, Dugan, Clark, Artières, LeCun http://sabiod.univ-tln.fr/ulearnbio/ MAIN TOPICS: Unsupervised generative learning on big data, Latent data models, Model-based clustering, Bayesian non-parametric clustering, Bayesian sparse representation, Feature learning, Deep neural nets, Bioacoustics, Environmental scene analysis, Big bio-acoustic data structuration, Species clustering (birds, whales...) IMPORTANT DATES (Extended): 13th April for regular papers of 2 to 6 pages, or 30th May for keynote papers on one of the technical challenges. All submissions will be reviewed by program committee members, and will be assessed based on their novelty, technical quality, potential impact, and clarity of writing. All accepted papers will be published as part of the ICMLUlb workshop proceedings (with ISBN number), and will be available online from the workshop website. The organizers will discuss the opportunity of editing a special issue with a journal, and authors of the best submissions will be invited to submit extended versions of their papers. OBJECTIVES: The general topic of uLearnBio is machine learning from bioacoustic data: supervised methods, but also unsupervised feature learning and clustering. A special session will concern cluster analysis based on Bayesian Non-Parametrics (BNP), in particular the Infinite Gaussian Mixture Model (IGMM) formulation, Chinese Restaurant Process (CRP) mixtures and Dirichlet Process Mixtures (DPM). The non-parametric alternative avoids assuming restricted functional forms and thus allows the complexity and accuracy of the inferred model to grow as more data is observed. It also represents an alternative to the difficult problem of model selection in model-based clustering by inferring the number of clusters from the data as the learning proceeds (see the short sketch at the end of this message). uLearnBio offers an excellent framework to see how parametric and nonparametric probabilistic models for cluster analysis perform when learning from complex real bio-acoustic data. Data from bird songs and whale songs are provided in the framework of challenges, as in previous ICML and NIPS workshops on learning from bio-acoustic data (ICML4B and NIPS4B books are available at http://sabiod.org). uLearnBio will bring ideas on how to proceed in understanding bioacoustics in order to provide methods for biodiversity indexing. Bio-acoustic data science at scale is a novel challenge for AI. Large cabled submarine acoustic observatory deployments permit data to be acquired continuously, over long time periods. For example, the submarine Neptune observatory in Canada, the Antares and Nemo neutrino detectors, and PALAOA in Antarctica (cf. NIPS4B proc.) pose 'big data' challenges. Automated analysis, including clustering/segmentation and structuration of acoustic signals, event detection, data mining and machine learning to discover relationships among data streams, promises to aid scientists in making discoveries in an otherwise overwhelming quantity of acoustic data.
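[Editor's sketch] To give readers unfamiliar with BNP a concrete feel for the CRP prior mentioned above, here is a minimal toy sketch in Python. It is our own illustrative code, not part of the workshop or challenge material; the concentration parameter alpha, the function name and the seed are all illustrative assumptions. It shows the property that motivates these models: the number of clusters is not fixed in advance but grows with the amount of data.

import numpy as np

def crp_partition(n_points, alpha, seed=0):
    # Chinese Restaurant Process: point i joins an existing cluster with
    # probability proportional to that cluster's size, or starts a new
    # cluster with probability proportional to alpha.
    rng = np.random.default_rng(seed)
    sizes = []    # current cluster sizes ("customers per table")
    labels = []   # cluster assignment of each point
    for _ in range(n_points):
        weights = np.array(sizes + [alpha], dtype=float)
        k = rng.choice(len(weights), p=weights / weights.sum())
        if k == len(sizes):
            sizes.append(1)   # open a new cluster
        else:
            sizes[k] += 1     # join an existing one
        labels.append(k)
    return labels, sizes

for n in (100, 1000, 10000):
    labels, sizes = crp_partition(n, alpha=2.0)
    print(n, "points ->", len(sizes), "clusters")  # grows roughly as alpha*log(n)

In an IGMM/DPM, this prior over partitions is combined with a per-cluster Gaussian likelihood, and posterior inference (e.g. Gibbs sampling) decides how many clusters the actual data support; the sketch only illustrates why no preset number of clusters is needed.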
CHALLENGES: In addition to the two previously announced challenges (the Parisian bird and whale challenges), we open a 3rd challenge on 500 Amazonian bird species, linked to the LifeClef Bird challenge 2014 but in an unsupervised way, over 9K .wav files. Details on the challenges: http://sabiod.univ-tln.fr/ulearnbio/challenges.html INVITED SPEAKERS: Pr. G. McLachlan - Department of Mathematics - University of Queensland, AU, Dr. F. Chamroukhi - LSIS CNRS - Toulon Univ, FR, Dr. P. Dugan - Bioacoustics Research Program on Big Data - Cornell Univ, USA. More information and open challenges = http://sabiod.org Program Committee: Dr. F. Chamroukhi - LSIS CNRS - Toulon Univ, Pr. H. Glotin - LSIS CNRS - Institut Universitaire France - LSIS CNRS - Univ. Toulon, Dr. P. Dugan - Ornithology Bioacoustics Lab - Cornell Univ, NY, Pr. C. Clark - Ornithology Bioacoustics Lab - Cornell Univ, NY, Pr. T. Artières - LIP6 CNRS - Sorbonne Univ, Paris, Pr. Y. LeCun - Computational & Biological Learning Lab - NYU - Facebook Research Center, NY. From dwang at cse.ohio-state.edu Thu Apr 3 14:53:50 2014 From: dwang at cse.ohio-state.edu (DeLiang Wang) Date: Thu, 3 Apr 2014 14:53:50 -0400 Subject: Connectionists: NEURAL NETWORKS, April 2014 Message-ID: <533DAE3E.5070702@cse.ohio-state.edu> Neural Networks - Volume 52, April 2014 http://www.journals.elsevier.com/neural-networks Pairwise constrained concept factorization for data representation Yangcheng He, Hongtao Lu, Lei Huang, Saining Xie Hybrid extreme rotation forest Borja Ayerdi, Manuel Graña Policy oscillation is overshooting Paul Wagner NeuCube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data Nikola K. Kasabov Construction of a Boolean model of gene and protein regulatory network with memory Meng Yang, Rui Li, Tianguang Chu Nonsmooth finite-time stabilization of neural networks with discontinuous activations Xiaoyang Liu, Ju H. Park, Nan Jiang, Jinde Cao From Colin.Wise at uts.edu.au Thu Apr 3 22:27:57 2014 From: Colin.Wise at uts.edu.au (Colin Wise) Date: Fri, 4 Apr 2014 13:27:57 +1100 Subject: Connectionists: AAI Short Course - 'Behaviour Analytics - an Introduction' - Wednesday 23 April 2014 Message-ID: <8112393AA53A9B4A9BDDA6421F26C68A0173A2CFE4AF@MAILBOXCLUSTER.adsroot.uts.edu.au> Dear Colleague, AAI Short Course - 'Behaviour Analytics - an Introduction' - Wednesday 23 April 2014 https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1572&EventID=1294 Our AAI short course 'Behaviour Analytics - an Introduction' may be of interest to you and/or others in your organisation or network. Complex behaviours are widely seen on the internet, in business, social and online networks, and in multi-agent systems. In fact, behaviour is a concept with richer semantic meaning than the raw data used to record and represent business activities, impacts and dynamics. Therefore, an in-depth understanding of complex behaviours has been increasingly recognised as a crucial means of uncovering the driving forces, causes and impacts behind many challenging business issues. This need has led to the emergence of behaviour analytics, i.e. understanding behaviours from the computing perspective.
In this short course, we present an overview of behaviour analytics and discuss complex behaviour interactions and relationships, complex behaviour representation, behavioural feature construction, behaviour impact and utility analysis, behaviour pattern analysis, exceptional behaviour analysis, negative behaviour analysis, and behaviour interaction and evolution. Please register here https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1572&EventID=1294 An important foundation short course in the AAI series of advanced data analytic short courses - please view this short course and others here http://www.uts.edu.au/research-and-teaching/our-research/advanced-analytics-institute/short-courses/upcoming-courses We are happy to discuss at your convenience. Thank you and regards. Colin Wise Operations Manager Advanced Analytics Institute (AAI) Blackfriars Building 2, Level 1 University of Technology, Sydney (UTS) Email: Colin.Wise at uts.edu.au Tel. +61 2 9514 9267 M. 0448 916 589 AAI: www.analytics.uts.edu.au/ Reminder - AAI Short Course - Advanced Data Analytics - an Introduction - Wednesday 7 May 2014 https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1571 Future short courses on Data Analytics and Big Data may be viewed at LINK AAI Education and Training Short Courses Survey - you may be interested in completing our AAI Survey at LINK AAI Email Policy - should you wish not to receive this periodic communication on Data Analytics Learning, please reply to our email (to sender) with UNSUBSCRIBE in the Subject. We will delete you from our database. Thank you for your past and future support. UTS CRICOS Provider Code: 00099F DISCLAIMER: This email message and any accompanying attachments may contain confidential information. If you are not the intended recipient, do not read, use, disseminate, distribute or copy this message or attachments. If you have received this message in error, please notify the sender immediately and delete this message. Any views expressed in this message are those of the individual sender, except where the sender expressly, and with authority, states them to be the views of the University of Technology Sydney. Before opening any attachments, please check them for viruses and defects. Think. Green. Do. Please consider the environment before printing this email. From fmschleif at googlemail.com Fri Apr 4 01:24:17 2014 From: fmschleif at googlemail.com (Frank-Michael Schleif) Date: Fri, 4 Apr 2014 07:24:17 +0200 Subject: Connectionists: Call for papers Special Session on 'High dimensional data analysis - theoretical advances and applications' at CIDM 2014 / SSCI Message-ID: Call for Papers Special Session on 'High dimensional data analysis - theoretical advances and applications' 09-12 December 2014, Orlando, Florida, USA http://www.ieee-ssci.org/CIDM.html http://www.cs.bham.ac.uk/~schleify/CIDM_2014/ AIMS AND SCOPE Modern measurement technology, greatly enhanced storage capabilities and novel data formats have radically increased the amount and dimensionality of electronic data. Due to their high dimensionality and complexity, and to the curse of dimensionality, these data sets can often not be addressed by classical statistical methods.
Prominent examples can be found in the life sciences with microarrays, hyperspectral data in the geosciences, but also in fields like astrophysics, biomedical imaging, finance, or web and market basket analysis. Computational intelligence methods have the potential to pre-process, model and analyze such complex data, but new strategies are needed to obtain efficient and reliable models. Novel data encoding techniques and projection methods employing concepts from randomization algorithms have opened new ways to obtain compact descriptions of these complex data sets and to identify relevant information. However, the theoretical foundations and the practical potential of these methods and of alternative approaches still have to be explored and improved. New advances and research to address the curse of dimensionality, and to uncover and exploit the blessings of high dimensionality in data analysis, are of major interest in theory and application. TOPICS This workshop aims to promote new advances and research directions addressing the modeling, representation/encoding and reduction of high-dimensional data, as well as approaches and studies addressing challenging problems in the field of high dimensional data analysis. Topics of interest range from theoretical foundations, to algorithms and implementation, to applications and empirical studies of mining high dimensional data, including (but not limited to) the following: o Studies on how the curse of dimensionality affects computational intelligence methods o New computational intelligence techniques that exploit some properties of high dimensional data spaces o Theoretical findings addressing the imbalance between high dimensionality and small sample size o Stability and reliability analyses for data analysis in high dimensions o Adaptive and non-adaptive dimensionality reduction for noisy high dimensional data sets o Methods of random projections, compressed sensing, and random matrix theory applied to high dimensional data mining o Models of low intrinsic dimension, such as sparse representation, manifold models, latent structure models, and studies of their noise tolerance o Classification, regression, clustering of high dimensional complex data sets o Functional data mining o Data presentation and visualisation methods for very high dimensional data sets o Data mining applications to real problems in science, engineering or businesses where the data is high dimensional PAPER SUBMISSION High quality original submissions (up to 8 pages, IEEE style) should follow the guidelines outlined on the CIDM homepage and should be submitted using the provided IEEE paper submission system. We strongly encourage authors to use the LaTeX style sheet rather than the Word format to ensure high quality typesetting and paper presentation, also during the review process. Webpage of the special session: http://www.cs.bham.ac.uk/~schleify/CIDM_2014/ IMPORTANT DATES Paper submission deadline : 15 June 2014 Notification of acceptance : 05 September 2014 Deadline for final papers : 05 October 2014 The CIDM 2014 conference : 9-12 December 2014 SPECIAL SESSION ORGANIZERS: Ata Kaban, University of Birmingham, Birmingham, UK Frank-Michael Schleif, University of Birmingham, Birmingham, UK Thomas Villmann, University of Appl. Sc.
Mittweida, Germany From davrot at neuro.uni-bremen.de Fri Apr 4 07:09:22 2014 From: davrot at neuro.uni-bremen.de (David Rotermund) Date: Fri, 04 Apr 2014 13:09:22 +0200 Subject: Connectionists: Open position in Theoretical Neuroscience Message-ID: <533E92E2.5030109@neuro.uni-bremen.de> The Computational Neuroscience group of Klaus Pawelzik ( http://www.neuro.uni-bremen.de ) invites applications for an open Ph.D. student position in the project 'I-See The artificial eye: Chronic wireless interface to the visual cortex' (see http://www.isee.uni-bremen.de for details about the project). In this project we investigate the possibilities and foundations of injecting visual information into the visual cortex via electrical stimulation for neuro-prosthetic purposes. We are looking for a PhD student with a background in physics and/or computational neuroscience. She/he will work in close cooperation with experimentalists in the group of Andreas Kreiter (www.brain.uni-bremen.de). The student will analyse electrophysiological data with the goal of identifying the underlying network structure/model that generates a specific neural response and visual percept for a particular electrical stimulation. She/he will then invert this model in order to determine the spatio-temporal stimulation pattern required for creating a desired visual percept. Basic knowledge of programming and of formal methods/computational neuroscience is required. Interested candidates should send their application in German or English, including a letter of motivation, CV, and copies of school and university certificates (master/diploma or equivalent), by the 17th of April to: Agnes Janßen Cognium Hochschulring 18 Universität Bremen D-28359 Bremen Germany The documents can be sent in via email ( ajanssen at neuro.uni-bremen.de ) or in paper form. Severely disabled applicants and women with essentially identical technical and personal suitability will be preferentially selected. From G.W.M.Rauterberg at tue.nl Wed Apr 2 09:30:52 2014 From: G.W.M.Rauterberg at tue.nl (Rauterberg, G.W.M.) Date: Wed, 2 Apr 2014 13:30:52 +0000 Subject: Connectionists: online survey about human values Message-ID: <808380B9814A0143B66A787B7FB3439003B3BC60@XSERVER20A.campus.tue.nl> Dear friends and colleagues, I am writing to invite you to participate in a research study on human values http://probe.id.tue.nl/hvs/. I am a PhD candidate in the department of Industrial Design at Eindhoven University of Technology (TU/e). Related to my research on value-driven design, I am conducting an online survey to find similarities and differences in human values all over the world. For this, a variety of participants with different cultural backgrounds is needed. So you would help me very much if you could take this survey yourself and then send it to your friends and colleagues, especially in other countries. The survey will only take a couple of minutes of your time. Please find the survey here: http://probe.id.tue.nl/hvs/. Many thanks in advance, Shadi Kheirandish Doctoral Candidate (PhD) Designed Intelligence Group Department of Industrial Design Den Dolech 2, 5612 AZ Postbus 513, HG 2.93 5600 MB Eindhoven The Netherlands
From Vittorio.Murino at iit.it Wed Apr 2 07:42:03 2014 From: Vittorio.Murino at iit.it (Vittorio Murino) Date: Wed, 2 Apr 2014 13:42:03 +0200 Subject: Connectionists: Open postdoc position in Computer Vision, Machine Learning & Pattern Recognition at IIT-PAVIS Message-ID: <533BF78B.9050206@iit.it> BC: 68042 Fondazione Istituto Italiano di Tecnologia (IIT) was founded with the objective of promoting Italy's technological development and further education in science and technology. IIT's scientific program is therefore based on the combination of basic scientific research with the development of technical applications, a major inspirational principle. The research areas cover scientific topics of high innovative content, representing the most advanced frontiers of modern technology, with wide application opportunities in various fields ranging from medicine to industry, from computer science to robotics, life sciences and nanobiotechnology. The PAVIS department at IIT (Pattern Analysis and Computer Vision) (http://www.iit.it/pavis.html) is looking for a highly qualified researcher in Computer Vision, Pattern Recognition, Machine Learning and Image Analysis. PAVIS's main mission is the design and development of innovative computer vision systems, characterized by the use of highly-functional smart sensors and advanced video analytics features. PAVIS also has an active role in supporting the research facilities at IIT, which provide solutions to scientists in other disciplines, such as Neuroscience and Nanophysics. To this end, the group is involved in activities concerning computer vision and pattern recognition, machine learning, multimodal data analysis and sensor fusion, and embedded computer vision systems. This call aims at consolidating PAVIS's expertise in the following research areas: + Detection, tracking (individuals, groups, crowd) and re-identification; + Behavior Analysis & Activity Recognition (individuals, groups, crowd). The candidate is also expected to support supervision of PhDs and general lab-related activities. Candidates should have a Ph.D. in computer vision, machine learning, signal processing, image analysis or related areas. Evidence of top quality research in the above specified areas, in terms of published papers in top conferences/journals and/or patents, is mandatory. An internationally very competitive salary will be offered. The position is fixed-term, filled initially for 2 years, with an option for extension for another 2 years. Please note that for foreign researchers (or for returning Italian researchers) Italy offers very convenient tax exemptions (Legge Tremonti). Further details and informal enquiries can be sent by email to pavis at iit.it or alessandro.perina at iit.it. Complete applications, along with a curriculum listing all publications, a PDF of your most representative publications, and a research statement describing your previous research experience and outlining its relevance to the above topics, should be sent by email to pavis at iit.it, quoting PAVIS-PDS 68042 in the subject line. Please also indicate 2 independent references inside the CV or in the email text. The call will remain open and applications will be reviewed until the position is assigned, but for full consideration please apply before April 30, 2014. --------------------------------------------------------------- In order to comply with Italian law (art. 23 of Privacy Law of the Italian Legislative Decree n.
196/03), the candidate is kindly asked to give his/her consent to allow IIT to process his/her personal data. We inform you that the information you provide will be used solely for the purpose of assessing your professional profile to meet the requirements of Istituto Italiano di Tecnologia. Your data will be processed by Istituto Italiano di Tecnologia, with its headquarters in Genoa, Via Morego 30, acting as the Data Holder, using computer and paper-based means, observing the rules on the protection of personal data, including those relating to the security of data. Please also note that, pursuant to art. 7 of Legislative Decree 196/2003, you may exercise your rights at any time as a party concerned by contacting the Data Manager. Istituto Italiano di Tecnologia is an Equal Opportunity Employer that actively seeks diversity in the workforce. -- Vittorio Murino **************************** Prof. Vittorio Murino, Ph.D. PAVIS - Pattern Analysis & Computer Vision IIT Istituto Italiano di Tecnologia Via Morego 30 16163 Genova, Italy Phone: +39 010 71781 504 Mobile: +39 329 6508554 Fax: +39 010 71781 236 E-mail: vittorio.murino at iit.it
From torcini at gmail.com Wed Apr 2 10:42:21 2014 From: torcini at gmail.com (A. Torcini) Date: Wed, 2 Apr 2014 16:42:21 +0200 Subject: Connectionists: Fully funded Postdoc position in Computational Neuroscience Message-ID: Fully funded Postdoc position (Experienced Researcher) in Computational Neuroscience New approaches for the analysis of neuronal spike trains (ER3) within the Marie Curie Initial Training Network - 'Neural Engineering Transformative Technologies' (NETT) at the Institute of Complex Systems (ISC), CNR, Florence, Italy. Gross Salary per annum: 64,701 EURO (Living Allowance) plus 9,290 - 13,272 EURO (Mobility Allowance) depending on circumstances Required title: PhD in Physics, Engineering, Applied Mathematics, or Computational Neuroscience Applications: Please first send an informal enquiry to Dr. Thomas Kreuz (thomas.kreuz at cnr.it) and/or Dr. Alessandro Torcini (alessandro.torcini at cnr.it). Once you are preselected (you will receive a reply within a few days), a full application has to be submitted according to the rules stated here [http://neuro.fi.isc.cnr.it/index.php?page=how-to-apply]. Closing date for the position: May 15, 2014 Applications are invited for the above post to work with Dr. Thomas Kreuz [http://www.fi.isc.cnr.it/users/thomas.kreuz/] and Dr. Alessandro Torcini [http://www.fi.isc.cnr.it/users/alessandro.torcini/] in the Computational Neuroscience group [http://neuro.fi.isc.cnr.it/] at ISC, Florence. This world-leading group combines theoretical investigations (e.g., on non-trivial collective phenomena in neuronal populations) with practical applications (such as spike train analysis).
The group is one of the main participants in the Center for the Study of Complex Dynamics (CSDC), created with the purpose of coordinating interdisciplinary training and research activities. CSDC researchers include physicists, control engineers, mathematicians, biologists and psychologists. This full-time post will begin on the 1st of September 2014 (at the latest) and will be offered on a fixed-term contract for a period of 24 months. The fellowship will include a six-month stay (probably at the end) with NETT partner Prof. Bert Kappen [http://www.snn.ru.nl/~bertk/] at the Radboud University Nijmegen, Netherlands. The Florence part of the project will mainly consist of two parts: development of new approaches for (multivariate) data analysis, and analysis of electrophysiological data (in particular neuronal spike trains). During the six-month stay in the Netherlands, the project will investigate the applicability of methods derived from latent state models, sequential Monte Carlo sampling, and path integral control to analyze neural time-series data, such as spike trains. The candidate should have a strong background in at least one of the following fields: computational neuroscience, data analysis, or nonlinear dynamics, as well as solid experience in scientific programming (Matlab). Candidates must be in the first 5 years of their research careers and already be in possession of a doctoral degree in physics, engineering, applied mathematics, or computational neuroscience. Preference will be given to a candidate with experience in (non-linear) data analysis. As part of our commitment to promoting diversity, we encourage applications from women. To comply with the Marie Curie Actions rule for mobility, applicants must not have resided, worked or studied in Italy for more than 12 months in the 3 years prior to May 2014. --------------------------------------------------------------------------------------- Alessandro Torcini - Istituto dei Sistemi Complessi - CNR via Madonna del Piano, 10 --- I-50019 Sesto Fiorentino Tel:+39-055-522-6670 Fax:+39-055-522-6683 Skype: torcini http://www.fi.isc.cnr.it/users/alessandro.torcini ----------------------------------------------------------------------------------------- From G.Brostow at cs.ucl.ac.uk Thu Apr 3 10:09:29 2014 From: G.Brostow at cs.ucl.ac.uk (Gabriel J. Brostow) Date: Thu, 3 Apr 2014 15:09:29 +0100 (BST) Subject: Connectionists: recruiting awesome European PhD students to London Message-ID: Hello to all prospective European-resident PhD students! We have funding for an unprecedented *three* PhD studentships. These are described below, where the 2nd one, "Spatiotemporal Models of Retinal Images", is particularly relevant to Connectionists. Do get in touch if there are questions. -Gabe -- -Gabriel J. Brostow http://www.cs.ucl.ac.uk/staff/G.Brostow ================================================================= ================================================================= UCL/MSR-Cambridge/RVC: PhD in Vision-based Tracking of Quadrupeds If you love computer vision and coding, and want to have major impact on science (and you are considered an EU-resident), please read on! We have funding for an excellent student from the EU to complete a 3-year Computer Science PhD at University College London. The co-supervisors for this project are Jamie Shotton of Microsoft Research Cambridge, who leads the Kinect pose-estimation effort, and Thilo Pfau & Andrew Spence, animal locomotion experts at the Royal Veterinary College in north London.
This PhD combines computer vision, machine learning, graphics, and biomechanics. Applicants will develop skills and make contributions in all these areas, but should already be fairly strong in one or two of them. Under the interdisciplinary supervision of experts in computer vision, biology, and veterinary medicine, the student will both build computer vision systems and use these to investigate real, outstanding scientific and clinical questions. On the vision side, we believe recent advances in understanding human shape and motion can be extended to work for quadrupeds. Animals are substantially different to humans in interesting ways, and we have identified many hard technical issues here. On the biological side, we aim to investigate how body morphology and the ultimate constraints of stability, energetic cost, and dexterity shape animal gait. On the clinical side, this studentship will make essential contributions to applying the developed techniques across species, with the potential for large welfare and economic benefits. This project offers the student a rare opportunity to become a world expert, guided by top specialists in vision and biology, just when their respective strengths are ready to be exploited. At UCL Computer Science, the PhD student will be based in Gabriel Brostow's group in central London. The student will make visits to the RVC in North London to collect data and run experiments, and will have opportunities to spend periods of time at MSR-Cambridge. The student is expected to work with other students and postdocs in our teams and with the larger cohort of researchers at the three sites. Programming experience desired: high proficiency in one or more of Matlab / C++ / Python. The PhD is a time to learn new things, but the idealized candidate would have completed small projects with some combination of machine learning, GPU, Qt, OpenGL, and OpenCV-type libraries. As an example reference guide, see the topics covered in the Prince textbook, http://www.computervisionmodels.com/. Application Instructions: You'll need to submit an online application here as soon as possible: http://www.cs.ucl.ac.uk/admissions/phd_programme/applying/. (Deadlines listed there do not apply to funded studentships like this. Recruiting ends when we find the right person.) Please make sure to put Gabriel Brostow as the supervisor, but you should also email me *now* (g.brostow at ucl.ac.uk) so we know to look out for your application (and please use the text "kinect4legs" in the Subject line). ================================================================= ================================================================= UCL: PhD in Spatiotemporal Models of Retinal Images If you love applying machine learning and want to help save babies from going blind (and you are considered an EU-resident), please read on! We have funding for an excellent student from the EU to complete a 3-year Computer Science PhD at University College London. This PhD combines machine learning, computer vision, and ophthalmology. Applicants will develop skills and make contributions in all three areas, but should already be fairly strong and excited about one or two of them. The project aims to help clinicians screen for Retinopathy of Prematurity (ROP), an illness that causes blindness in premature babies when undetected. Read more about ROP further below. Our goal is to develop a model of how the retina looks over time: a) when an eye is healthy, b) as ROP progresses, and c) as a result of laser treatment.
The probabilistic generative model for these cases will be learned from image data of real patients. Subsequently, at test time, given image(s) of a premature baby's retina, we will be able to assess the probability that the baby is healthy or at-risk. This research will be a form of structured texture-synthesis, with algorithms and applications beyond "just" medical images. Retinopathy of Prematurity (ROP): ROP is one of the few ophthalmic conditions in which the decision to treat or not is urgent. Screening is essential, but difficult. A simple retinal camera is needed, but also expertise and experience. Even in middle-income countries where a camera is available, there are often not enough qualified ROP experts to examine all the at-risk babies. In low-income countries there is currently a large proportion of premature infants facing a lifetime of blindness from untreated ROP, due largely to a lack of screening facilities. Preliminary research has revealed further challenges. A fairly universal international grading system is used by experts to grade the severity of a case. However, research has shown that experts do not tend to agree on the grading for each eye because there is no agreed model of disease progression. At UCL Computer Science, the PhD student will be based in Gabriel Brostow's group. The co-supervisor is Dr. Clare Wilson, Paediatric Ophthalmologist at Great Ormond Street Hospital, and at UCL Institute of Ophthalmology. The research areas overlap and the student is expected to work with other students and postdocs in our team (http://web4.cs.ucl.ac.uk/staff/g.brostow/#Students) and the larger cohort of vision + machine learning researchers here. UCL is in central London, and is one of the top 3 groups for Vision/Learning/Graphics in Europe. For ophthalmology expertise, this is THE place to be. Programming experience desired: high proficiency in one or more of Matlab / C++ / Python. The PhD is a time to learn new things, but the idealized candidate would have completed small projects with some combination of machine learning, GPU, Qt, and OpenCV-type libraries. As an example reference guide, see the topics covered in the Prince textbook, http://www.computervisionmodels.com/. Application Instructions: You'll need to submit an online application here as soon as possible: http://www.cs.ucl.ac.uk/admissions/phd_programme/applying/. (Deadlines listed there do not apply to funded studentships like this.) Please make sure to put Gabriel Brostow as the supervisor, but you should also email me *now* (g.brostow at ucl.ac.uk) so we know to look out for your application (and please use the text "synthROP" in the Subject line). ================================================================= ================================================================= UCL: PhD in multiview video analysis & special-effect synthesis Our new project is co-funded by a large camera company and UCL's Impact Scholarship, and explores the research challenges of bringing Image/Video Based Rendering (IBR/VBR) into common use for content creators and action enthusiasts. Funding is limited to European Union applicants (sorry to everyone else). This is an amazing opportunity to do your PhD on fun, hard, and direct-impact research problems. Whether or not you're the type of person who plays/watches sports, you will enjoy being able to work on projects that push forward machine learning / vision / graphics.
The project and the studentships must start by September 2014, but applications are being considered now on a rolling basis (i.e., the student should be signed up by the summer). The existing funding is for a 3-year PhD (the typical length in the UK), but there is a chance of extending the funding to cover a 4th year of research, if things go well along the way. At UCL, the PhD student will be based in Gabriel Brostow's group. The research area overlaps and the student is expected to work with others in the larger cohort of vision + graphics researchers here. We have some specific topics where innovation is needed: - View interpolation - Semantic texture synthesis - Depth from intensity - Video-texture synthesis - Active Learning - Video matting - Video inpainting Programming experience desired: high proficiency in one or more of Matlab / C++ / Python Exposure to some of the following areas is a plus: - 3D Acquisition - Camera Calibration - Machine Learning - Texture Synthesis - Content Creation / Visual Effects - User Interface Development (e.g. Qt) As an example reference guide, see the topics covered in the Prince textbook, http://www.computervisionmodels.com/. About us: UCL is in central London, and is one of the top 3 groups for Vision/Graphics/Learning in Europe. Our people are fun and productive :) Application Instructions: You'll need to submit an online application here as soon as possible: http://www.cs.ucl.ac.uk/admissions/phd_programme/applying/ but you should also email me (Gabriel Brostow) *now* so I know to look out for your application (and please use the text "smartVinterp" in the Subject line). We only see the online applications after all your letters of recommendation etc. have arrived; we'll obviously prompt the CS Admission Panel to assess each batch of good applications that we're aware of. Good applicants in the past have had an MSc project in a related area, evidence of being productive in some detail-oriented domain, appetite for scholarly reading/writing, and enormous drive. If you're reading this, you're probably pretty smart/ambitious, but for a PhD, you also need to be enthusiastic, creative, and *very* persistent. Again, it pains me enormously that even great candidates from non-EU countries CAN NOT be considered for this funded studentship directly. -- -Gabriel J. Brostow http://www.cs.ucl.ac.uk/staff/G.Brostow From carla.piazza at uniud.it Wed Apr 2 08:31:21 2014 From: carla.piazza at uniud.it (Carla Piazza) Date: Wed, 2 Apr 2014 12:31:21 +0000 Subject: Connectionists: FMMB 2014 - second call for papers Message-ID: <4C9CC661-8F38-4657-9A14-C63EC3079760@uniud.it> [We apologize if you have received multiple copies of this message] FMMB 2014 - 1st INTL. CONFERENCE ON FORMAL METHODS IN MACRO-BIOLOGY Call for Papers September 22-24, 2014 Noumea, New Caledonia http://fmmb2014.sciencesconf.org/ * AIMS AND SCOPE The purpose of FMMB is to bring together researchers, developers, and students in theoretical computer science, applied mathematics, and mathematical and computational biology, interested in studying the application of formal methods to the construction and analysis of models describing biological processes at both micro and macro levels.
The topics of interest include (but are not limited to): - representation and analysis of biological systems in formal systems such as: o ordinary and partial differential equation systems, o discrete event systems, infinite state systems, o hybrid discrete-continuous systems, hybrid automata, o cellular automata, multi-agent systems, o stochastic processes, stochastic games, o statistical physics models, o process algebras, process calculi, o rewriting systems, graph grammars, - coupling models and data, inference of models from data, - computability and complexity issues, - modelling and analysis tools, case studies. Application areas particularly solicited include: - environmental biology, ecology, marine science, - agriculture and forestry, - developmental biology, population biology, - epidemiology, medicine, - systems biology, synthetic biology. * KEYNOTE SPEAKERS Pieter Collins, Maastricht University, Netherlands, Saso Dzeroski, Jozef Stefan Institute, Slovenia, Radu Grosu, Vienna Technical University, Austria, Steffen Klamt, Max Planck Institute Magdeburg, Germany, Pietro Lio', Cambridge University, UK, Hélène Morlon, Ecole Polytechnique and CNRS, France. * PC CO-CHAIRS François Fages, Inria Paris-Rocquencourt, France, Carla Piazza, University of Udine, Italy. * LOCAL CHAIR Teodor Knapik, University of New Caledonia. * IMPORTANT DATES Submission deadline: April 25 Author notification: June 4 Camera-ready copy due: June 25 * SUBMISSION AND PUBLICATION Regular submissions of 12-20 pages or short submissions of 2 pages in LNCS style should be submitted as PDF files via EasyChair https://www.easychair.org/conferences/?conf=fmmb2014 . All accepted papers will be published in a book in the LNBI series of Springer-Verlag. Authors of the most significant contributions will be invited to submit extended versions for a journal special issue. ----------------------------------------------------------------- Prof. Carla Piazza Dip. di Matematica e Informatica Universita' di Udine Via le Scienze 206, I-33100 Udine, Italy Phone: +39.0432.55.8497 Fax: +39.0432.55.8499 Email: carla.piazza at uniud.it Home: http://www.dimi.uniud.it/piazza ----------------------------------------------------------------- From M.deKamps at leeds.ac.uk Thu Apr 3 08:43:27 2014 From: M.deKamps at leeds.ac.uk (Marc de Kamps) Date: Thu, 3 Apr 2014 13:43:27 +0100 Subject: Connectionists: Research Fellow position University of Leeds/HBP Message-ID: <328C1883CC6BC643B02B589D66B5A16C02EAB07548BC@HERMES8.ds.leeds.ac.uk> A postdoctoral Research Fellow position is available for immediate start at the University of Leeds, UK. The position is funded as part of the Human Brain Project. This project focuses on modelling of neural populations using population density techniques. Population density methods are powerful statistical methods that capture and evolve dynamics at the population level directly, rather than through a large number of neuron point model instances (a toy illustration follows below). More details about this research can be found in recent publications (see below). In this project, you will investigate the theoretical foundations of population density techniques, and apply them in the creation of efficient numerical algorithms. The resulting software should be usable by non-experts in large-scale network models.
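[Editor's sketch] To convey the flavor of these methods, here is a small toy sketch in Python. It is our own illustration, not code, parameters or numerics from the Leeds project: it evolves a discretized membrane-potential density for a leaky integrate-and-fire population under the usual diffusion approximation, so a single array stands in for an arbitrarily large set of point-neuron instances, and the population firing rate is read off as the probability mass absorbed at threshold. All parameter values are assumptions chosen for illustration.

import numpy as np

# Illustrative, assumed parameters (not from the project)
tau = 0.02            # membrane time constant (s)
mu = 1.1              # suprathreshold mean input (dimensionless volts)
D = 0.04              # diffusion coefficient (V^2/s)
v_reset, v_th = 0.0, 1.0
v = np.linspace(-1.0, v_th, 401)
dv = v[1] - v[0]
a_max = np.max(np.abs((mu - v) / tau))   # fastest drift speed on the grid
dt = 0.2 * min(dv**2 / D, dv / a_max)    # crude explicit stability bound
rho = np.exp(-(v - v_reset)**2 / (2 * 0.05**2))
rho /= rho.sum() * dv                    # normalized density rho(v)
i_reset = np.argmin(np.abs(v - v_reset))

rate = 0.0
for _ in range(int(0.1 / dt)):           # simulate 100 ms
    flux = (mu - v) / tau * rho          # drift part of the probability flux
    drho = -np.gradient(flux, dv) + D * np.gradient(np.gradient(rho, dv), dv)
    rho += dt * drho                     # explicit Fokker-Planck update
    absorbed = max(rho[-1], 0.0) * dv    # mass crossing the threshold
    rho[-1] = 0.0                        # absorbing boundary at v_th
    rho[i_reset] += absorbed / dv        # re-inject absorbed mass at reset
    rate = absorbed / dt                 # instantaneous population rate (Hz)

print("population rate after 100 ms: %.1f Hz" % rate)

Real solvers use far more careful numerics than this crude explicit finite-difference step; the sketch only shows the structure of the approach: evolve one density per population, and read observables such as the firing rate off its boundary flux.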
We are looking for outstanding candidates with a PhD in Physics, Mathematics, Engineering, Computer Science, or related disciplines. They are expected to have good mathematical analytic skills, as well as substantial experience in scientific computation in languages such as C/C++ or Fortran. A background in computational neuroscience is highly desirable. An interest in other activities in the Human Brain Project is expected, and the application of the techniques in large-scale network models is actively encouraged. You will be expected to interact with other researchers at the European Institute of Theoretical Neuroscience in Paris through regular visits.

You will be based in the School of Computing at the University of Leeds. The School has as research themes 'Applied Computing in Biology, Medicine and Health' as well as 'Computational Science and Engineering'. Informal enquiries about this position can be made directly to Marc de Kamps (m.dekamps at leeds.ac.uk). The complete advert and application instructions can be found at: http://tinyurl.com/oxzfqho

The closing date for applications is 30 April 2014.

Relevant publications:
http://dx.doi.org/10.1162/089976603322297322
http://arxiv.org/abs/1309.1654
http://dx.doi.org/10.1016/j.neunet.2008.07.006

Dr Marc de Kamps
Institute of Artificial Intelligence and Biological Systems
School of Computing
University of Leeds
Leeds LS2 9JT, UK

From daavid.samu at gmail.com Fri Apr 4 11:27:26 2014
From: daavid.samu at gmail.com (Dávid Samu)
Date: Fri, 4 Apr 2014 17:27:26 +0200
Subject: Connectionists: Paper on the influence of wiring cost on cortical connectome
Message-ID:

Dear all,

[with apologies for any cross-posting]

The following paper may be of interest to the group:

D Samu, AK Seth, T Nowotny: Influence of Wiring Cost on the Large-Scale Architecture of Human Cortical Connectivity
http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003557

Best,
David Samu

From grlmc at urv.cat Sat Apr 5 09:37:26 2014
From: grlmc at urv.cat (GRLMC)
Date: Sat, 5 Apr 2014 15:37:26 +0200
Subject: Connectionists: TPNC 2014: 1st call for papers
Message-ID:

*To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject line*

******************************************************************************************

3rd INTERNATIONAL CONFERENCE ON THE THEORY AND PRACTICE OF NATURAL COMPUTING
TPNC 2014
Granada, Spain
December 9-11, 2014

Organized by:
Soft Computing and Intelligent Information Systems (SCI2S), University of Granada
Research Group on Mathematical Linguistics (GRLMC), Rovira i Virgili University

http://grammars.grlmc.com/tpnc2014/

******************************************************************************************

AIMS: TPNC is a conference series intended to cover the wide spectrum of computational principles, models, and techniques inspired by information processing in nature. TPNC 2014 will reserve significant room for young scholars at the beginning of their career. It aims at attracting contributions to nature-inspired models of computation, synthesizing nature by means of computation, nature-inspired materials, and information processing in nature.

VENUE: TPNC 2014 will take place in Granada, in the region of Andalucía in the south of Spain. The city is the seat of a rich Islamic historical legacy, including the Moorish citadel and palace called the Alhambra.
SCOPE: Topics of either theoretical, experimental, or applied interest include, but are not limited to:

* Nature-inspired models of computation:
- amorphous computing
- cellular automata
- chaos and dynamical systems based computing
- evolutionary computing
- membrane computing
- neural computing
- optical computing
- swarm intelligence

* Synthesizing nature by means of computation:
- artificial chemistry
- artificial immune systems
- artificial life

* Nature-inspired materials:
- computing with DNA
- nanocomputing
- physarum computing
- quantum computing and quantum information
- reaction-diffusion computing

* Information processing in nature:
- developmental systems
- fractal geometry
- gene assembly in unicellular organisms
- rough/fuzzy computing in nature
- synthetic biology
- systems biology

* Applications of natural computing to: algorithms, bioinformatics, control, cryptography, design, economics, graphics, hardware, learning, logistics, optimization, pattern recognition, programming, robotics, telecommunications, etc.

A flexible "theory to/from practice" approach would be the perfect focus for the expected contributions.

STRUCTURE: TPNC 2014 will consist of:
- invited talks
- invited tutorials
- peer-reviewed contributions

INVITED SPEAKERS: tba

PROGRAMME COMMITTEE:
Hussein A. Abbass (Canberra, AU)
Uwe Aickelin (Nottingham, UK)
Thomas Bäck (Leiden, NL)
Christian Blum (San Sebastián, ES)
Jinde Cao (Nanjing, CN)
Vladimir Cherkassky (Minneapolis, US)
Sung-Bae Cho (Seoul, KR)
Andries P. Engelbrecht (Pretoria, ZA)
Inman Harvey (Brighton, UK)
Francisco Herrera (Granada, ES)
Tzung-Pei Hong (Kaohsiung, TW)
Yaochu Jin (Guildford, UK)
Soo-Young Lee (Daejeon, KR)
Derong Liu (Chicago, US)
Manuel Lozano (Granada, ES)
Carlos Martín-Vide (Tarragona, ES, chair)
Risto Miikkulainen (Austin, US)
Frank Neumann (Adelaide, AU)
Leandro Nunes de Castro (São Paulo, BR)
Erkki Oja (Aalto, FI)
Marc Schoenauer (Orsay, FR)
Biplab Kumar Sikdar (Shibpur, IN)
Darko Stefanovic (Albuquerque, US)
Umberto Straccia (Pisa, IT)
Thomas Stützle (Brussels, BE)
Ponnuthurai N. Suganthan (Singapore, SG)
Johan Suykens (Leuven, BE)
El-Ghazali Talbi (Lille, FR)
Jon Timmis (York, UK)
Michael N. Vrahatis (Patras, GR)
Xin Yao (Birmingham, UK)

ORGANIZING COMMITTEE:
Adrian Horia Dediu (Tarragona)
Carlos García-Martínez (Córdoba)
Carlos Martín-Vide (Tarragona, co-chair)
Manuel Lozano (Granada, co-chair)
Francisco Javier Rodríguez (Granada)
Florentina Lilica Voicu (Tarragona)

SUBMISSIONS: Authors are invited to submit non-anonymized papers in English presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (including any appendices) and should be prepared according to the standard format of Springer-Verlag's LNCS series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0). Submissions have to be uploaded to: https://www.easychair.org/conferences/?conf=tpnc2014

PUBLICATIONS: A volume of proceedings published by Springer in the LNCS series will be available by the time of the conference. A special issue of a major journal will be published later, containing peer-reviewed extended versions of some of the papers contributed to the conference. Submissions to it will be by invitation.

REGISTRATION: The period for registration is open from April 5 to December 9, 2014.
The registration form can be found at: http://grammars.grlmc.com/tpnc2014/Registration.php

DEADLINES:
Paper submission: July 17, 2014 (23:59h, CET)
Notification of paper acceptance or rejection: August 24, 2014
Final version of the paper for the LNCS proceedings: September 7, 2014
Early registration: September 7, 2014
Late registration: November 25, 2014
Submission to the post-conference journal special issue: March 11, 2015

QUESTIONS AND FURTHER INFORMATION: florentinalilica.voicu at urv.cat

POSTAL ADDRESS:
TPNC 2014
Research Group on Mathematical Linguistics (GRLMC)
Rovira i Virgili University
Av. Catalunya, 35
43002 Tarragona, Spain
Phone: +34 977 559 543
Fax: +34 977 558 386

ACKNOWLEDGEMENTS:
Departament d'Economia i Coneixement, Generalitat de Catalunya
Universidad de Granada
Universitat Rovira i Virgili

From weng at cse.msu.edu Sat Apr 5 10:51:59 2014
From: weng at cse.msu.edu (Juyang Weng)
Date: Sat, 05 Apr 2014 10:51:59 -0400
Subject: Connectionists: how the brain works? (UNCLASSIFIED)
In-Reply-To:
References: <51FDF3A2-717D-4399-9076-331EEA0FC45A@lehigh.edu>
Message-ID: <5340188F.6090405@cse.msu.edu>

Dear Steve,

This is one of my long-time questions that I did not have a chance to ask you when I met you many times before, but it may be useful for some people on this list. Please accept my apology if my question implies any false impression that I did not intend.

(1) Your statement below seems to have confirmed my understanding: your top-down process in ART in the late 1990's is basically for finding an acceptable match between the input feature vector and the stored feature vectors represented by neurons (not meant for the nearest match). The currently active neuron is the one being examined by the top-down process in a sequential fashion: one neuron after another, until an acceptable neuron is found.

(2) The input to the ART in the late 1990's is a single feature vector as a monolithic input. By monolithic, I mean that all neurons take the entire input feature vector as input. I raise this point here because neurons in ART in the late 1990's do not have explicit local sensory receptive fields (SRFs), i.e., they are fully connected to all components of the input vector. A local SRF means that each neuron is only connected to a small region in an input image.

My apology again if my understanding above has errors, although I have examined the above two points carefully through several of your papers.

Best regards,

-John

On 3/22/14 10:04 PM, Stephen Grossberg wrote:
> Dear Tsvi,
>
> You stated that ART "requires complex signals". I noted that this statement is not correct.
>
> To illustrate what I meant, I noted that ART uses a simple measure of pattern mismatch. In particular, a top-down expectation selects consistent features and suppresses inconsistent features to focus attention upon expected features. This property of attention is well supported by lots of psychological and neurobiological data.
>
> You also contrasted models that "cycle", one being ART, with your own network which "does not cycle". I therefore mentioned that ART hypothesis testing and search, which involve "cycling" through operations of mismatch, arousal, and reset, are directly supported by a lot of data. For example, in an oddball paradigm, one can compare mismatch with properties of the P120 ERP, arousal with properties of the N200 ERP, and reset with properties of the P300 ERP.
> > I am not sure why this reply led you to write bitterly about an old > review of one of your articles, which I know nothing about, and which > is not relevant to my specific points of information. > > Best, > > Steve > > > On Mar 22, 2014, at 4:38 PM, Tsvi Achler wrote: > >> Dear Steve, >> Isn't Section 8.2 exactly about the cycling (and labeled as such) and >> figure 2 a depiction of the cycling? >> >> Your response is similar to feedback I received years ago in an >> evaluation of my algorithm, where the reviewer clearly didnt read the >> paper in detail but it seems was intent not to let it through because >> it seemed as if they had their own algorithm and agenda. I took out >> snippets of that review and placed it here because this is continuing >> today. >> >> From the review: "The network, in effect, implements a winner-takes >> all scheme, when only a single output neuron reacts to each input..." >> This is not true my network does not implement a Winner take all, in >> fact that is the point, there is no lateral inhibition. >> "... this network type can be traced back to the sixties, most >> notably to the work of Grossberg ... As a result, I do not see any >> novelty in this paper ... Overall recommendation: 1 (Reject) .. >> Reviewer's confidence: 5 (highest)." >> >> This is exactly what I mean when I stated that it seems academia >> would rather bury new ideas. Such a callous and strong dismissal is >> devastating to a young student and detrimental to the field. >> Someone as decorated and established as you has the opportunity to >> move the field forward. However instead the neural network aspect of >> feedback during recognition is being actively inhibited by >> unsubstantive and destructive efforts. >> >> I would be happy to work with you offline to write a joint statement >> on this with all of the technical details. >> >> Sincerely, >> -Tsvi >> >> >> On Mar 22, 2014 4:08 AM, "Stephen Grossberg" > > wrote: >> > >> > Dear Tsvi, >> > >> > You mention Adaptive Resonance below and suggest that it "requires >> complex signals indicating when to stop, compare, and cycle". That is >> not correct. >> > >> > ART uses a simple measure of pattern mismatch. Moreover, >> psychological, neurophysiological, anatomical, and ERP data support >> the operations that it models during hypothesis testing and memory >> search. ART predicted various of these data before they were collected. >> > >> > If you would like to pursue this further, see >> http://cns.bu.edu/~steve/ART.pdf >> for a recent heuristic review. >> > >> > Best, >> > >> > Steve >> > >> > >> > On Mar 21, 2014, at 10:29 PM, Tsvi Achler wrote: >> > >> > Sorry for the length of this response but I wanted to go into some >> > detail here. >> > >> > I see the habituation paradigm as somewhat analogous to surprise and >> > measurement of error during recognition. I can think of a few >> > mathematical Neural Network classifiers that can generate an internal >> > pattern for match during recognition to calculate this >> > habituation/surprise orientation. Feedforward networks definitely >> > will not work because they don't recall the internal stimulus very >> > well. One option is adaptive resonance (which I assume you use), but >> > it cycles through the patterns one at a time and requires complex >> > signals indicating when to stop, compare, and cycle. I assume >> > Juyang's DN can also do something similar but I suspect it also must >> > cycle since it also has lateral inhibition. 
Bidirectional Associative >> > Memories (BAM) may also be used. Others such as Bayes networks and >> > free-energy principle can used, although they are not as easily >> > translatable to neural networks. >> > >> > Another option is a network like mine which does not have lateral >> > connections but also generates internal patterns. The advantage is >> > that it can also generate mixtures of patterns at once, does not cycle >> > through individual patterns, does not require signals associated with >> > cycling, and can be shown mathematically to be analogous to >> > feedforward networks. The error signal it produces can be used for an >> > orientation reflex or what I rather call attention. It is essential >> > for recognition and planning. >> > >> > I would be happy to give a talk on this and collaborate on a rigorous >> > comparison. Indeed it is important to look at models other than those >> > using feedforward connections during recognition. >> > >> > Sincerely, >> > >> > -Tsvi >> > >> > >> > >> > >> > On Mar 21, 2014 5:25 AM, "Kelley, Troy D CIV (US)" >> > > >> wrote: >> > >> > >> > Classification: UNCLASSIFIED >> > >> > Caveats: NONE >> > >> > >> > Yes, Mark, I would argue that habituation is anticipatory >> prediction. The >> > >> > neuron creates a model of the incoming stimulus and the neuron is >> > >> > essentially predicting that the next stimuli will be comparatively >> similar >> > >> > to the previous stimulus. If this prediction is met, the neuron >> habituates. >> > >> > That is a simple, low level, predictive model. >> > >> > >> > -----Original Message----- >> > >> > From: Mark H. Bickhard [mailto:mhb0 at Lehigh.EDU >> ] >> > >> > Sent: Thursday, March 20, 2014 5:28 PM >> > >> > To: Kelley, Troy D CIV (US) >> > >> > Cc: Tsvi Achler; Andras Lorincz; bower at uthscsa.edu >> ; >> > >> > connectionists at mailman.srv.cs.cmu.edu >> >> > >> > Subject: Re: Connectionists: how the brain works? >> > >> > >> > I would agree with the importance of Sokolov habituation, but there >> is more >> > >> > than one way to understand and generalize from this phenomenon: >> > >> > >> > http://www.lehigh.edu/~mhb0/AnticipatoryBrain20Aug13.pdf >> >> > >> > >> > Mark H. Bickhard >> > >> > Lehigh University >> > >> > 17 Memorial Drive East >> > >> > Bethlehem, PA 18015 >> > >> > mark at bickhard.name >> > >> > http://bickhard.ws/ >> > >> > >> > On Mar 20, 2014, at 4:41 PM, Kelley, Troy D CIV (US) wrote: >> > >> > >> > We have found that the habituation algorithm that Sokolov >> discovered way >> > >> > back in 1963 provides an useful place to start if one is trying to >> determine >> > >> > how the brain works. The algorithm, at the cellular level, is >> capable of >> > >> > determining novelty and generating implicit predictions - which it then >> > >> > habituates to. Additionally, it is capable of regenerating the original >> > >> > response when re-exposed to the same stimuli. All of these behaviors >> > >> > provide an excellent framework at the cellular level for explain >> all sorts >> > >> > of high level behaviors at the functional level. And it fits the >> Ockham's >> > >> > razor principle of using a single algorithm to explain a wide >> variety of >> > >> > explicit behavior. >> > >> > >> > Troy D. Kelley >> > >> > RDRL-HRS-E >> > >> > Cognitive Robotics and Modeling Team Leader Human Research and >> Engineering >> > >> > Directorate U.S. 
Army Research Laboratory Aberdeen, MD 21005 Phone >> > >> > 410-278-5869 or 410-278-6748 Note my new email address: >> > >> > troy.d.kelley6.civ at mail.mil >> > >> > >> > >> > >> > >> > >> > On 3/20/14 10:41 AM, "Tsvi Achler" > > wrote: >> > >> > >> > I think an Ockham's razor principle can be used to find the most >> > >> > optimal algorithm if it is interpreted to mean the model with the >> > >> > least amount of free parameters that captures the most phenomena. >> > >> > http://reason.cs.uiuc.edu/tsvi/Evaluating_Flexibility_of_Recognition.p >> > >> > df >> > >> > -Tsvi >> > >> > >> > On Wed, Mar 19, 2014 at 10:37 PM, Andras Lorincz >> > >> > >> > wrote: >> > >> > Ockham works here via compressing both the algorithm and the structure. >> > >> > Compressing the structure to stem cells means that the algorithm >> > >> > should describe the development, the working, and the time dependent >> > >> > structure of the brain. Not compressing the description of the >> > >> > structure of the evolved brain is a different problem since it saves >> > >> > the need for the description of the development, but the working. >> > >> > Understanding the structure and the working of one part of the brain >> > >> > requires the description of its communication that increases the >> > >> > complexity of the description. By the way, this holds for the whole >> > >> > brain, so we might have to include the body at least; a structural >> > >> > minimist may wish to start from the genetic code, use that hint and >> > >> > unfold the already compressed description. There are (many and >> > >> > different) todos 'outside' ... >> > >> > >> > >> > Andras >> > >> > >> > >> > >> > >> > . >> > >> > >> > ________________________________ >> > >> > From: Connectionists > > >> > >> > on behalf of james bower > >> > >> > Sent: Thursday, March 20, 2014 3:33 AM >> > >> > >> > To: Geoffrey Goodhill >> > >> > Cc: connectionists at mailman.srv.cs.cmu.edu >> >> > >> > Subject: Re: Connectionists: how the brain works? >> > >> > >> > Geoffrey, >> > >> > >> > Nice addition to the discussion actually introducing an interesting >> > >> > angle on the question of brain organization (see below) As you note, >> > >> > reaction diffusion mechanisms and modeling have been quite successful >> > >> > in replicating patterns seen in biology - especially interesting I >> > >> > think is the modeling of patterns in slime molds, but also for very >> > >> > general pattern formation in embryology. However, more and more >> > >> > detailed analysis of what is diffusing, what is sensing what is >> > >> > diffusing, and what is reacting to substances once sensed -- all >> > >> > linked to complex patterns of gene regulation and expression have >> > >> > made it clear that actual embryological development is much much more >> > >> > complex, as Turing himself clearly anticipated, as the quote you >> cite pretty >> > >> > clearly indicates. Clearly a smart guy. But, I don't actually think >> > >> > that >> > >> > this is an application of Ochham's razor although it might appear to >> > >> > be after the fact. Just as Hodgkin and Huxley were not applying it >> > >> > either in >> > >> > their model of the action potential. Turing apparently guessed (based >> > >> > on a >> > >> > lot of work at the time on pattern formation with reaction diffusion) >> > >> > that such a mechanism might provide the natural basis for what >> > >> > embryos do. 
Thus, just like for Hodgkin and Huxley, his model >> > >> > resulted from a bio-physical insight, not an explicit attempt to >> > >> > build a stripped down model for its own sake. I seriously doubt >> > >> > that Turning would have claimed that he, or his models could more >> > >> > effectively do what biology actually does in forming an embrio, or >> > >> > substitute for the actual process. >> > >> > >> > However, I think there is another interesting connection here to the >> > >> > discussion on modeling the brain. Almost certainly communication and >> > >> > organizational systems in early living beings were reaction diffusion >> > >> > based. >> > >> > This is still a dominant effect for many 'sensing' in small organisms. >> > >> > Perhaps, therefore, one can look at nervous systems as structures >> > >> > specifically developed to supersede reaction diffusion mechanisms, >> > >> > thus superseding this very 'natural' but complexity limited type of >> > >> > communication and organization. What this means, I believe, is that >> > >> > a simplified or abstracted physical or mathematical model of the >> > >> > brain explicitly violates the evolutionary pressures responsible for >> > >> > its structure. Its where the wires go, what the wires do, and what >> > >> > the receiving neuron does with the information that forms the basis >> > >> > for neural computation, multiplied by a very large number. And that >> > >> > is dependent on the actual physical structure of those elements. >> > >> > >> > One more point about smart guys, as a young computational >> > >> > neurobiologist I questioned how insightful John von Neumann actually >> > >> > was because I was constantly hearing about a lecture he wrote (but >> > >> > didn't give) at Yale suggesting that dendrites and neurons might be >> > >> > digital ( John von Neumann's The Computer and the Brain. (New >> > >> > Haven/London: Yale Univesity Press, 1958.) Very clearly a not very >> > >> > insightful idea for a supposedly smart guy. It wasn't until a few >> > >> > years later, when I actually read the lecture - that I found out that >> > >> > he ends by stating that this idea is almost certainly wrong, given >> > >> > the likely nonlinearities in neuronal dendrites. So von Neumann >> > >> > didn't lack insight, the people who quoted him did. It is a >> > >> > remarkable fact that more than 60 years later, the majority of >> models of >> > >> > so called neurons built by engineers AND neurobiologists don't consider >> > >> > these nonlinearities. >> > >> > The point being the same point, to the Hopfield, Mead, Feynman list, >> > >> > we can now add Turing and von Neumann as suspecting that for >> > >> > understanding, biology and the nervous system must be dealt with in >> their >> > >> > full complexity. >> > >> > >> > But thanks for the example from Turing - always nice to consider actual >> > >> > examples. :-) >> > >> > >> > Jim >> > >> > >> > >> > >> > >> > >> > On Mar 19, 2014, at 8:30 PM, Geoffrey Goodhill >> > >> > >> > wrote: >> > >> > >> > Hi All, >> > >> > >> > A great example of successful Ockham-inspired biology is Alan >> > >> > Turing's model for pattern formation (spots, stripes etc) in >> > >> > embryology (The chemical basis of morphogenesis, Phil Trans Roy Soc, >> > >> > 1953). Turing introduced a physical mechanism for how inhomogeneous >> > >> > spatial patterns can arise in a biological system from a spatially >> > >> > homogeneous starting point, based on the diffusion of morphogens. 
The >> > >> > paper begins: >> > >> > >> > "In this section a mathematical model of the growing embryo will be >> > >> > described. This model will be a simplification and an idealization, >> > >> > and consequently a falsification. It is to be hoped that the features >> > >> > retained for discussion are those of greatest importance in the >> > >> > present state of knowledge." >> > >> > >> > The paper remained virtually uncited for its first 20 years following >> > >> > publication, but since then has amassed 8000 citations (Google >> > >> > Scholar). The subsequent discovery of huge quantities of molecular >> > >> > detail in biological pattern formation have only reinforced the >> > >> > importance of this relatively simple model, not because it explains >> > >> > every system, but because the overarching concepts it introduced have >> > >> > proved to be so fertile. >> > >> > >> > Cheers, >> > >> > >> > Geoff >> > >> > >> > >> > On Mar 20, 2014, at 6:27 AM, Michael Arbib wrote: >> > >> > >> > Ignoring the gross differences in circuitry between hippocampus and >> > >> > cerebellum, etc., is not erring on the side of simplicity, it is >> > >> > erring, period. Have you actually looked at a >> > >> > Cajal/Sxentagothai-style drawing of their circuitry? >> > >> > >> > At 01:07 PM 3/19/2014, Brian J Mingus wrote: >> > >> > >> > Hi Jim, >> > >> > >> > Focusing too much on the details is risky in and of itself. Optimal >> > >> > compression requires a balance, and we can't compute what that >> > >> > balance is (all models are wrong). One thing we can say for sure is >> > >> > that we should err on the side of simplicity, and adding detail to >> > >> > theories before simpler explanations have failed is not Ockham's >> > >> > heuristic. That said it's still in the space of a Big Data fuzzy >> > >> > science approach, where we throw as much data from as many levels of >> > >> > analysis as we can come up with into a big pot and then construct a >> > >> > theory. The thing to keep in mind is that when we start pruning this >> > >> > model most of the details are going to disappear, because almost all >> > >> > of them are irrelevant. Indeed, the size of the description that >> > >> > includes all the details is almost infinite, whereas the length of >> > >> > the description that explains almost all the variance is extremely >> > >> > short, especially in comparison. This is why Ockham's razor is a good >> > >> > heuristic. It helps prevent us from wasting time on unnecessary >> > >> > details by suggesting that we only inquire as to the details once our >> > >> > existing simpler theory has failed to work. >> > >> > >> > On 3/14/14 3:40 PM, Michael Arbib wrote: >> > >> > >> > At 11:17 AM 3/14/2014, Juyang Weng wrote: >> > >> > >> > The brain uses a single architecture to do all brain functions we are >> > >> > aware of! It uses the same architecture to do vision, audition, >> > >> > motor, reasoning, decision making, motivation (including pain >> > >> > avoidance and pleasure seeking, novelty seeking, higher emotion, etc.). >> > >> > >> > >> > Gosh -- and I thought cerebral cortex, hippocampus and cerebellum >> > >> > were very different from each other. >> > >> > >> > >> > >> > >> > Troy D. Kelley >> > >> > RDRL-HRS-E >> > >> > Cognitive Robotics and Modeling Team Leader Human Research and >> Engineering >> > >> > Directorate U.S. 
Army Research Laboratory Aberdeen, MD 21005 Phone
>> > 410-278-5869 or 410-278-6748 Note my new email address:
>> > troy.d.kelley6.civ at mail.mil
>> >
>> > Classification: UNCLASSIFIED
>> > Caveats: NONE
>> >
>> > Stephen Grossberg
>> > Wang Professor of Cognitive and Neural Systems
>> > Professor of Mathematics, Psychology, and Biomedical Engineering
>> > Director, Center for Adaptive Systems http://www.cns.bu.edu/about/cas.html
>> > http://cns.bu.edu/~steve
>> > steve at bu.edu
>
> Stephen Grossberg
> Wang Professor of Cognitive and Neural Systems
> Professor of Mathematics, Psychology, and Biomedical Engineering
> Director, Center for Adaptive Systems http://www.cns.bu.edu/about/cas.html
> http://cns.bu.edu/~steve
> steve at bu.edu

--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
428 S Shaw Ln Rm 3115
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/
----------------------------------------------

From weng at cse.msu.edu Sat Apr 5 12:30:55 2014
From: weng at cse.msu.edu (Juyang Weng)
Date: Sat, 05 Apr 2014 12:30:55 -0400
Subject: Connectionists: How the brain works? Symbolic vs. Connectionist
In-Reply-To: <5340278F.6000306@cse.msu.edu>
References: <5340278F.6000306@cse.msu.edu>
Message-ID: <53402FBF.8010004@cse.msu.edu>

On 4/18/08 3:38 PM, Asim Roy wrote:
> By the way, your algorithms I believe are structurally similar to the other connectionists algorithms I analyzed in my paper.

Asim, I respectfully disagree. If what you said were true, it would be absolutely impossible for me to bridge the wide gap between the two major schools in AI: the symbolic school and the connectionist school. Please see the (incomplete) 14-point list of novelties below from an anonymous reviewer. I first briefly introduce the wide gap:

Marvin Minsky 1991 in AI Magazine:
- Symbolic: Logic & Neat
- Connectionist: analogical & scruffy.

Michael Jordan at IJCNN 2011:
- Neural nets do not abstract well
- I will not talk about neural nets today

Michael Jordan, Joshua Tenenbaum, and many other respected researchers used graphical models to model higher brain computations or intelligent systems.
However, my humble understanding is:
(1) Graphical Models belong to the category of symbolic representation that Marvin Minsky pointed to.
(2) Graphical Models are GROSSLY wrong as models of the human brain.
(3) Graphical Models are not an acceptable model even for high brain functions such as abstraction and reasoning.

I am afraid that this issue is EXTREMELY difficult for a computational neuroscientist or an AI researcher to understand if he has not taken the course BMI 871: Computational Brain-Mind. All the courses offered elsewhere are fundamentally different from BMI 871 in nature, regardless of how similar the course titles may look. (By the way, the BMI 2014 course application deadline is tomorrow, Sunday. See http://www.brain-mind-institute.org/)

The following is quoted from my email to Michael Jordan while he and I were discussing this important issue yesterday via email:

"By the way, I do not agree with your `graphical models provide a better platform for abstraction' (which you did not seem to have said that day), because the abstraction is mainly in the mind of the human designer of the graphical models, and probably also of other humans who have heard the human designer's explanation. However, the meanings of each node in a graphical model are not learned from experience as they are in the brain of a human child. Thus, each such node does not have the required grounded invariance (e.g., location invariance for the type-concept abstraction). Without the necessary invariance (e.g., abstraction of type from concrete instances at different locations), there is no power of abstraction in any node of a graphical model."

"In summary, the abstraction of any node of a graphical model is mainly in the mind of a human designer, instead of being the abstraction power of each handcrafted node. The impression of abstraction is the human designer's illusion. That his brain can abstract from instances does not mean that the node he handcrafted also has a power of abstraction."

The following 14-point list is quoted from a respected anonymous reviewer of my IEEE TAMD paper that addresses the title of this email. He seems to have gone beyond looking for superficial similarities. The major differences lie in basic ideas, basic principles, architectures, representations, algorithms, and the ways to teach.

--- start of quote ---
Comments to the Author
The author has addressed all the concerns I raised in my previous review. In doing so, he has added a considerable amount of new material. As a result, this version of the paper is much improved and the underlying argument is conveyed more clearly. Overall, it is easier to read and more instructive. The many important issues raised are now more clearly conveyed and, in its current form, it will no doubt be a valuable addition to the literature in AMD. Without attempting to be in any way exhaustive, I would note the following key messages conveyed by the paper, focussing in particular on the clarifications that have been added to the current version:
- The external nature of symbolic representations.
- The need to deal incrementally with new external environments through real-time autonomously generated internal modal representations.
- The importance of modelling brain development.
- The now-clear distinction between emergent and symbolic representations.
- The differentiation between connectionist and emergent representations.
- The argument that symbolic representations are open to "hand-crafting" while emergent representations are not.
- The identification of the brain with the central nervous system.
- The external nature of the sensory and motor aspects of the brain.
- The consensual nature of the interpretation of sensations in human communication.
- The necessity for emergent representations to be based on self-generated states without imposing meaning or specifying boundaries on the meaning of these states.
- The transfer of temporal associations by the brain.
- The accumulation of spatiotemporal attention skills based on experience of the real physical world.
- The back-projection of signals, not errors, in descending connections.
- The importance of the developmental program and top-down attention for understanding human and artificial intelligence.
--- end of quote ---

John Weng

--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
428 S Shaw Ln Rm 3115
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/
----------------------------------------------

From matostmp at gmail.com Sat Apr 5 12:28:36 2014
From: matostmp at gmail.com (Thiago Matos Pinto)
Date: Sat, 5 Apr 2014 13:28:36 -0300
Subject: Connectionists: IWSYP'14: I International Workshop on Synaptic Plasticity, Brazil (call for abstracts)
Message-ID:

*I International Workshop on Synaptic Plasticity - IWSYP'14*
*September 8-9, 2014*
*Ribeirão Preto, São Paulo, Brazil*
http://iwsyp14.thiagomatospinto.com
University of São Paulo

The I International Workshop on Synaptic Plasticity - IWSYP'14 will bring together *experimentalists and theoreticians* who have contributed significantly to understanding the biological mechanisms underlying *synaptic plasticity*. The aim of this workshop is to facilitate discussions and interactions in the effort towards increasing our understanding of synaptic plasticity in the brain. The IWSYP'14 will take place on the Ribeirão Preto campus of the University of São Paulo in *Brazil* on September 8 and 9, 2014. You are welcome to join this workshop, which focuses on the discussion of fresh ideas, the presentation of work in progress, and the establishment of a scientific network between young researchers. The workshop will be held in English. Participation is *free of charge*!

This workshop is a satellite event of the XXXVIII Annual Meeting of the Brazilian Society for Neuroscience and Behavior (SBNeC), which will be held on September 10-13 in Búzios, Rio de Janeiro. The SBNeC meeting will also feature big names in Neuroscience, and will be a preparatory event for the 9th World Congress of the International Brain Research Organization (IBRO), which will take place in Rio de Janeiro in 2015.

Organizers:
*Thiago Matos Pinto* (University of São Paulo, Ribeirão Preto, Brazil)
*Antonio Roque* (University of São Paulo, Ribeirão Preto, Brazil)

Speakers:
*Chris De Zeeuw* (Erasmus Medical Center, Rotterdam, The Netherlands & Netherlands Institute for Neuroscience, Amsterdam, The Netherlands)
*Freek Hoebeek* (Erasmus Medical Center, Rotterdam, The Netherlands)
*Reinoud Maex* (École Normale Supérieure, Paris, France)
*Ricardo Leão* (University of São Paulo, Ribeirão Preto, Brazil)
*Thiago Matos Pinto* (University of São Paulo, Ribeirão Preto, Brazil)

Abstracts: *Submissions of abstracts for poster and oral presentations are due by May 16, 2014*.
We welcome both experimental and theoretical contributions addressing novel findings on topics related to synaptic plasticity. Please see http://iwsyp14.thiagomatospinto.com/abstracts for details.

Important dates:
*Abstract submission deadline: May 16, 2014*
Notification of acceptance: June 13, 2014
Notification of oral/poster selection: June 30, 2014
Workshop dates: September 8-9, 2014

Please visit the IWSYP'14 website for the program and further details: http://iwsyp14.thiagomatospinto.com

We look forward to seeing you in Ribeirão Preto.

With best wishes,
Thiago Matos Pinto, IWSYP'14 Organizer

From friedhelm.schwenker at uni-ulm.de Sun Apr 6 15:58:03 2014
From: friedhelm.schwenker at uni-ulm.de (Dr. Schwenker)
Date: Sun, 06 Apr 2014 21:58:03 +0200
Subject: Connectionists: ANNPR 2014: Call for papers
Message-ID: <5341B1CB.1060706@uni-ulm.de>

Please note the extended deadlines.

Sixth IAPR TC3 International Workshop on Artificial Neural Networks in Pattern Recognition
October 6-8, 2014
Concordia University, Montreal, Canada

ANNPR 2014 invites papers that present original work in the areas of neural networks, machine learning, and pattern recognition, focusing on both theoretical and applied aspects. Topics of interest include, but are not limited to:

Methodological Issues
- Supervised learning
- Unsupervised learning
- Combination of supervised and unsupervised learning
- Feed-forward, recurrent, and competitive neural nets
- Kernel machines
- Hierarchical modular architectures and hybrid systems
- Combination of neural networks and Hidden Markov models
- Multiple classifier systems and ensemble methods
- Probabilistic graphical models
- Kernel methods
- Deep architectures

Applications in Pattern Recognition
- Image processing and segmentation
- Handwriting recognition and document analysis
- Sensor fusion and multi-modal processing
- Feature extraction, dimension reduction
- Clustering and vector quantization
- Speech and speaker recognition
- Data, text, and web mining
- Bioinformatics/Cheminformatics

Note: A PDF of the Call for Papers may be downloaded at: http://users.encs.concordia.ca/~annpr14/ANNPR2014-CFP.pdf

Invited Speakers
ANNPR 2014 is proud to announce the following invited speakers:

Dr. Yoshua Bengio, Canada
Full Professor, Department of Computer Science and Operations Research, University of Montreal
Research Areas: Machine learning (ML), neural computation and adaptive perception, statistical learning algorithms

Dr. Zhi-Hua Zhou, China
Professor, Department of Computer Science & Technology, Nanjing University, China
Research Areas: Machine learning, data mining, pattern recognition, multimedia information retrieval

Dr. Michael Herrmann, U.K.
Lecturer, School of Informatics, The University of Edinburgh
Research Areas: Neurocomputation, neural nets, learning theory in autonomous agents & behavioral neuroscience, ML, dynamics of neural systems

More information can be found on our homepage (see below).

Special Sessions
Fellow scientists wishing to organize a special session should submit a proposal to the ANNPR chairs (annpr2014 at encs.concordia.ca). Proposals should include the session title, a list of topics covered by the session, and a list of 3-6 papers for possible presentation in the session. Proposals must be received by March 15th, 2014.
Important Dates Proposals of special sessions: March 15th, 2014 Paper submission: April 28th, 2014 Notification of acceptance: June 30th, 2014 Camera ready due: July 18, 2014 Workshop: October 6-8, 2014 General Chairs Ching Suen, Concordia University Canada Friedhelm Schwenker, Ulm University, Germany Neamat El Gayar, Cairo University, Egypt/Canada For more information, please visit or contact us: Homepage: http://www.annpr2014.com/ PDF CFP: http://users.encs.concordia.ca/~annpr14/ANNPR2014-CFP.pdf email: annpr2014 at encs.concordia.ca -- Sixth IAPR TC3 International Workshop on Artificial Neural Networks in Pattern Recognition ANNPR 2014 October 6-8, 2014 Concordia University, Montreal, Canada Web: http://www.annpr2014.com/ eMail: annpr14 at encs.concordia.ca From steve at cns.bu.edu Sun Apr 6 16:30:36 2014 From: steve at cns.bu.edu (Stephen Grossberg) Date: Sun, 6 Apr 2014 16:30:36 -0400 Subject: Connectionists: how the brain works? (UNCLASSIFIED) In-Reply-To: <5340188F.6090405@cse.msu.edu> References: <51FDF3A2-717D-4399-9076-331EEA0FC45A@lehigh.edu> <5340188F.6090405@cse.msu.edu> Message-ID: <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu> Dear John, Thanks for your questions. I reply below. On Apr 5, 2014, at 10:51 AM, Juyang Weng wrote: > Dear Steve, > > This is one of my long-time questions that I did not have a chance to ask you when I met you many times before. > But they may be useful for some people on this list. > Please accept my apology of my question implies any false impression that I did not intend. > > (1) Your statement below seems to have confirmed my understanding: > Your top-down process in ART in the late 1990's is basically for finding an acceptable match > between the input feature vector and the stored feature vectors represented by neurons (not meant for the nearest match). ART has developed a lot since the 1990s. A non-technical but fairly comprehensive review article was published in 2012 in Neural Networks and can be found at http://cns.bu.edu/~steve/ART.pdf. I do not think about the top-down process in ART in quite the way that you state above. My reason for this is summarized by the acronym CLEARS for the processes of Consciousness, Learning, Expectation, Attention, Resonance, and Synchrony. All the CLEARS processes come into this story, and ART top-down mechanisms contribute to all of them. For me, the most fundamental issues concern how ART dynamically self-stabilizes the memories that are learned within the model's bottom-up adaptive filters and top-down expectations. In particular, during learning, a big enough mismatch can lead to hypothesis testing and search for a new, or previously learned, category that leads to an acceptable match. The criterion for what is "big enough mismatch" or "acceptable match" is regulated by a vigilance parameter that can itself vary in a state-dependent way. After learning occurs, a bottom-up input pattern typically directly selects the best-matching category, without any hypothesis testing or search. And even if there is a reset due to a large initial mismatch with a previously active category, a single reset event may lead directly to a matching category that can directly resonate with the data. I should note that all of the foundational predictions of ART now have substantial bodies of psychological and neurobiological data to support them. See the review article if you would like to read about them. 
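In algorithmic terms, the match/mismatch-and-search cycle just described can be sketched roughly as follows. This is a minimal ART-1-style illustration under simplifying assumptions (binary inputs, winner-take-all choice, discrete steps rather than the model's real-time dynamics), not the full theory; variable names are illustrative:

    import numpy as np

    def art1_present(I, weights, rho=0.75, alpha=0.001):
        """Present one nonzero binary input I to an ART-1 style network.

        weights: list of binary category templates (learned critical feature
        patterns). rho: vigilance in [0, 1]; higher rho demands a closer match.
        Returns the index of the resonating category, recruiting one if needed.
        """
        I = np.asarray(I, dtype=float)
        # Bottom-up choice function: rank categories by filtered input strength.
        T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
        for j in np.argsort(T)[::-1]:                 # hypothesis testing: best first
            match = np.minimum(I, weights[j]).sum() / I.sum()
            if match >= rho:                          # vigilance test passed: resonance
                weights[j] = np.minimum(I, weights[j])    # fast learning on this trial
                return j
            # otherwise: mismatch reset, and the next-best category is searched
        weights.append(I.copy())                      # no acceptable match: new category
        return len(weights) - 1

Raising rho forces finer categories; and because the learning update reaches its new equilibrium within a single presentation, the sketch also illustrates the one-trial ("fast") learning mode discussed below.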
> The currently active neuron is the one being examined by the top down process

I'm not sure what you mean by "being examined", but perhaps my comment above may deal with it.

I should comment, though, about your use of the phrase "currently active neuron". I assume that you mean at the category level.

In this regard, there are two ART's. The first aspect of ART is as a cognitive and neural theory whose scope, which includes perceptual, cognitive, and adaptively timed cognitive-emotional dynamics, among other processes, is illustrated by the above referenced 2012 review article in Neural Networks. In the biological theory, there is no general commitment to just one "currently active neuron". One always considers the neuronal population, or populations, that represent a learned category. Sometimes, but not always, a winner-take-all category is chosen.

The 2012 review article illustrates some of the large data bases of psychological and neurobiological data that have been explained in a principled way, quantitatively simulated, and successfully predicted by ART over a period of decades. ART-like processing is, however, certainly not the only kind of computation that may be needed to understand how the brain works. The paradigm called Complementary Computing that I introduced awhile ago makes precise the sense in which ART may be just one kind of dynamics supported by advanced brains. This is also summarized in the review article.

The second aspect of ART is as a series of algorithms that mathematically characterize key ART design principles and mechanisms in a focused setting, and provide algorithms for large-scale applications in engineering and technology. ARTMAP, fuzzy ARTMAP, and distributed ARTMAP are among these, all of them developed with Gail Carpenter. Some of these algorithms use winner-take-all categories to enable the proof of mathematical theorems that characterize how underlying design principles work. In contrast, the distributed ARTMAP family of algorithms, developed by Gail Carpenter and her colleagues, allows for distributed category representations without losing the benefits of fast, incremental, self-stabilizing learning and prediction in response to large non-stationary databases that can include many unexpected events. See, e.g., http://techlab.bu.edu/members/gail/articles/115_dART_NN_1997_.pdf and http://techlab.bu.edu/members/gail/articles/155_Fusion2008_CarpenterRavindran.pdf.

I should note that FAST learning is a technical concept: it means that each adaptive weight can converge to its new equilibrium value on EACH learning trial. That is why ART algorithms can often successfully carry out one-trial incremental learning of a database. This is not true of many other algorithms, such as back propagation, simulated annealing, and the like, which all experience catastrophic forgetting if they try to do fast learning. Almost all other learning algorithms need to be run using slow learning, which allows only a small increment in the values of adaptive weights on each learning trial, to avoid massive memory instabilities, and work best in response to stationary data. Such algorithms often fail to detect important rare cases, among other limitations. ART can provably learn in either the fast or slow mode in response to non-stationary data.

> in a sequential fashion: one neuron after another, until an acceptable neuron is found.
>
> (2) The input to the ART in the late 1990's is for a single feature vector as a monolithic input.
> By monolithic, I mean that all neurons take the entire input feature vector as input.
> I raise this point here because neuron in ART in the late 1990's does not have an explicit local sensory receptive field (SRF),
> i.e., are fully connected from all components of the input vector. A local SRF means that each neuron is only connected to a small region
> in an input image.

Various ART algorithms for technology do use fully connected networks. They represent a single-channel case, which is often sufficient in applications and which simplifies mathematical proofs. However, the single-channel case is, as its name suggests, not a necessary constraint on ART design. In addition, many ART biological models do not restrict themselves to the single-channel case, and do have receptive fields. These include the LAMINART family of models that predict functional roles for many identified cell types in the laminar circuits of cerebral cortex. These models illustrate how variations of a shared laminar circuit design can carry out very different intelligent functions, such as 3D vision (e.g., 3D LAMINART), speech and language (e.g., cARTWORD), and cognitive information processing (e.g., LIST PARSE). They are all summarized in the 2012 review article, with the archival articles themselves on my web page http://cns.bu.edu/~steve.

The existence of these laminar variations-on-a-theme provides an existence proof for the exciting goal of designing a family of chips whose specializations can realize all aspects of higher intelligence, and which can be consistently connected because they all share a similar underlying design. Work on achieving this goal can productively occupy lots of creative modelers and technologists for many years to come.

I hope that the above replies provide some relevant information, as well as pointers for finding more.

Best,

Steve

> My apology again if my understanding above has errors, although I have examined the above two points carefully
> through several of your papers.
>
> Best regards,
>
> -John
>
> Juyang (John) Weng, Professor
> Department of Computer Science and Engineering
> MSU Cognitive Science Program and MSU Neuroscience Program
> 428 S Shaw Ln Rm 3115
> Michigan State University
> East Lansing, MI 48824 USA
> Tel: 517-353-4388
> Fax: 517-432-1061
> Email: weng at cse.msu.edu
> URL: http://www.cse.msu.edu/~weng/
> ----------------------------------------------

Stephen Grossberg
Wang Professor of Cognitive and Neural Systems
Professor of Mathematics, Psychology, and Biomedical Engineering
Director, Center for Adaptive Systems http://www.cns.bu.edu/about/cas.html
http://cns.bu.edu/~steve
steve at bu.edu

From weng at cse.msu.edu Sun Apr 6 19:24:17 2014
From: weng at cse.msu.edu (Juyang Weng)
Date: Sun, 06 Apr 2014 19:24:17 -0400
Subject: Connectionists: how the brain works?
In-Reply-To: <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu>
References: <51FDF3A2-717D-4399-9076-331EEA0FC45A@lehigh.edu> <5340188F.6090405@cse.msu.edu> <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu>
Message-ID: <5341E221.405@cse.msu.edu>

Dear Steve,

On 4/6/14 4:30 PM, Stephen Grossberg wrote:
> A non-technical but fairly comprehensive review article was published in 2012 in Neural Networks
> and can be found at http://cns.bu.edu/~steve/ART.pdf.

Please let us continue this conversation, as it is very difficult to have such in-depth academic conversations when we meet at a conference.
I hope that this conversation is not too boring for others on this list. I will try to be concise and clear here.

How do we better understand how the brain works? We know that there are many areas and subareas in each human brain. Are these areas more like individual organs in the human body (as Steve Pinker reasoned), or more like emergent statistical signal clusters (as I proposed)? Probably both views are somewhat true in a sense, but what is a better "first-order" approximation through the lifetime of each brain?

Approach A: Static brain areas. First, draw a figure of brain areas like your Fig. 4. Then study and model the roles of these individual brain areas, as in your above article. The term "static" does not mean that each brain area does not learn. On the contrary, each brain area does learn and adapt. The more areas one's model contains, the more complete the brain model is. In your Fig. 4, you have V2, V3, V4, ITa, ITb, SC, LIP, PFC, and PPC.

Approach B: Dynamic brain areas. The entire brain is a single, but highly dynamic, area Y. Regard brain areas as results of development from living experience. This development is regulated by a set of somewhat general-purpose developmental mechanisms in each brain cell (e.g., the laminar architecture). All receptors in the body are denoted as a set X, which contains a group of receptors X1, X2, X3, ..., Xm, where each Xi, i=1, 2, ..., m, is a sensory organ. For example, X1 and X2 are the left retina and right retina, respectively. X3 and X4 contain all hair cells in the left cochlea and right cochlea, respectively. Likewise, all muscles and glands in the body are denoted as a set Z, which contains a group of effectors Z1, Z2, Z3, ..., Zn, where each Zi, i=1, 2, ..., n, is an effector organ. The entire nervous system consists of many cells as a set Y. There are no statically modeled Brodmann areas inside Y, because Brodmann areas are only applicable to a normal human. In a congenitally blind person, e.g., the "visual" areas are mostly reassigned to audition and touch.

Our Cresceptron 1992, in a sense, followed Approach A. Developmental Networks (DN) since 2007, with their embodiments the Where-What Networks (WWN-1 2008 through WWN-8 2013), have followed Approach B. That is probably why Asim Roy was looking for architectural similarities between DN and other traditional neural networks. However, complex architectures in Y emerge autonomously. Hopefully, with all human receptors and effectors and human experience, a human-like brain would emerge in Y.

As far as I know, there were no prior published networks that allow signal projections from Z to everywhere in the brain Y. Why? This is probably because the analysis and understanding are both challenging. We reached a rigorous analysis of a general-purpose Y using finite automata theory.

For example, the largely one-way connections among areas V2, V3, V4, ITa, ITb, SC, LIP, PFC, and PPC in your Fig. 4 are inconsistent with ensemble knowledge in neural anatomy (e.g., Felleman and Van Essen 1991 [1]). Their connection tables show that almost all connections between two brain areas are two-way. You said to me that you have not included all the connections in your Fig. 4. However, if you include all the missing connections, the explanation in your papers no longer holds.

[1] D. J. Felleman and D. C. Van Essen, Distributed hierarchical processing in the primate cerebral cortex, Cerebral Cortex, 1, 1-47, 1991.
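To make Approach B concrete in code, here is a minimal sketch of one update of a single dynamic area Y driven by receptors X and effectors Z. It is a rough paraphrase of the DN idea under simplifying assumptions (top-k competition and Hebbian-like learning), not the published DN algorithm; all names and parameters are illustrative:

    import numpy as np

    def dn_step(x, z, Wx, Wz, lr=0.1, k=1):
        """One update of a single dynamic area Y with inputs from X and Z.

        Wx, Wz: each row holds one Y neuron's bottom-up (from X) and top-down
        (from Z) weights. Every Y neuron sees both X and Z, so no areas are
        statically assigned: specialization emerges from which neurons win
        for which (x, z) contexts.
        """
        def unit(v):
            n = np.linalg.norm(v)
            return v / n if n > 0 else v
        x, z = unit(np.asarray(x, float)), unit(np.asarray(z, float))
        r = Wx @ x + Wz @ z                    # match with current sensory/motor context
        y = np.zeros_like(r)
        winners = np.argsort(r)[-k:]           # top-k competition, no handcrafted areas
        y[winners] = 1.0
        for j in winners:                      # Hebbian-like update for winners only
            Wx[j] += lr * (x - Wx[j])
            Wz[j] += lr * (z - Wz[j])
        return y, Wx, Wz

Because Z projects into Y on every step, motor and context signals shape which Y neurons win, which is one way to read the claim above about signal projections from Z to everywhere in Y.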
-John

--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
428 S Shaw Ln Rm 3115
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/
----------------------------------------------

From zhong at maebashi-it.ac.jp Sun Apr 6 22:22:58 2014
From: zhong at maebashi-it.ac.jp (Ning Zhong)
Date: Mon, 07 Apr 2014 11:22:58 +0900
Subject: Connectionists: BRIN Special Issue on Brain Big Data in the Hyper World
Message-ID: <53420C02.1080203@maebashi-it.ac.jp>

Call for Papers: Special Issue on Brain Big Data in the Hyper World
Brain Informatics: Brain Data Computing and Health Studies (BRIN)
An International Journal (Springer)

Guest Editors:
Stephen S. Yau, Arizona State University, USA
Ning Zhong, Maebashi Institute of Technology, Japan

An emerging hyper world encompasses all human activities in a social-cyber-physical space, in which big data serve as a bridge connecting humans, computers, and things. There is a growing effort to integrate brain big data into the hyper world in order to realize a harmonious symbiosis of humans, computers, and things. Brain informatics provides the key techniques for this effort by offering informatics-enabled brain studies and applications in the hyper world, which can be regarded as a brain big data cycle. This cycle is implemented by processing, interpreting, and integrating multiple forms of brain big data, obtained from the atomic and molecular levels up to the entire brain. The implementation involves advanced neuroimaging technologies, including fMRI, PET, MEG, and EEG, as well as other sources such as eye-tracking and wearable, portable, micro, and nano devices. Such brain big data will not only help scientists improve their understanding of human thinking, learning, decision-making, emotion, memory, and social behavior, but will also help cure disease, serve health care, and facilitate environmental control and sustainability, using human-centric information and computing technologies in the hyper world.

This special issue will present some of the best work being done worldwide on the fundamental issues, new challenges, and potential applications of brain big data in the hyper world. Topics of interest include, but are not limited to:
- Hyper world and cyber individual model
- Future of brain big data and big data on the brain
- Assessing the brain's white matter with diffusion imaging
- A big brain vs. the main requirement for big data
- Big data in neuroimaging and connectome
- Heart to heart science
- Multimodal data analysis for depression early stage prediction and intervention
- Multi-granular computing for brain big data
- Big data analytics and interactive knowledge discovery with respect to brain cognition and mental health
- Big data policy for life and brain sciences and its implications for future research

All manuscripts must be in English. The page limit is 11 pages per paper (210x279mm, in Springer journal paper style). Manuscripts submitted for publication are reviewed by at least three peer reviewers, according to the usual policies of the BRIN Journal.
Important Dates:
Paper submission deadline: April 30, 2014
First round notification: May 15, 2014
Final decision notification: May 31, 2014
Publication: June 2014

Submission Instructions: For submission instructions, please visit the journal's online submission system at http://www.editorialmanager.com/brin, see the journal product page at http://www.springer.com/40708, or contact Prof. Ning Zhong at zhong at maebashi-it.ac.jp and CC Dr. Jian Yang at jianyang at bjut.edu.cn.

From achler at gmail.com Mon Apr 7 08:07:02 2014
From: achler at gmail.com (Tsvi Achler)
Date: Mon, 7 Apr 2014 05:07:02 -0700
Subject: Connectionists: how the brain works? (UNCLASSIFIED)
In-Reply-To: <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu>
References: <51FDF3A2-717D-4399-9076-331EEA0FC45A@lehigh.edu> <5340188F.6090405@cse.msu.edu> <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu>
Message-ID:

Dear Steve, John,

I think such discussions are great for sparking interest in feedback models (output back to input), which I feel should be given much more attention. In this vein, it may be better to discuss more of the details here than to point to a reference.

Basically, I see ART as a neural way to implement a k-nearest-neighbor algorithm. Clearly, the way ART overcomes the neural hurdles is immense, especially in figuring out how to coordinate neurons. However, it is also important to summarize such methods in algorithmic terms, which I attempt to do here (please comment/correct). Instar learning is used to find the best weights for quick feedforward recognition without too much resonance (otherwise more resonance will be needed). Outstar learning is used to find the expectation of the patterns. The resonance mechanism evaluates distances between the "neighbors," evaluating how close differing outputs are to the input pattern (using the expectation). By choosing one winner, the network is equivalent to a 1-nearest-neighbor model. If you open it up to more winners (e.g., k winners), as you suggest, then it becomes a k-nearest-neighbor mechanism.

Clearly, I have focused here on the main ART modules and did not discuss other additions. But I want to focus on the main idea at this point.

Sincerely,
-Tsvi

On Sun, Apr 6, 2014 at 1:30 PM, Stephen Grossberg wrote:

> Dear John,
>
> Thanks for your questions. I reply below.
>
> On Apr 5, 2014, at 10:51 AM, Juyang Weng wrote:
>
>> Dear Steve,
>>
>> This is one of my long-time questions that I did not have a chance to ask you when I met you many times before. But it may be useful for some people on this list. Please accept my apology if my question implies any false impression that I did not intend.
>>
>> (1) Your statement below seems to have confirmed my understanding: your top-down process in ART in the late 1990's is basically for finding an acceptable match between the input feature vector and the stored feature vectors represented by neurons (not meant for the nearest match).
>
> ART has developed a lot since the 1990s. A non-technical but fairly comprehensive review article was published in 2012 in *Neural Networks* and can be found at http://cns.bu.edu/~steve/ART.pdf.
>
> I do not think about the top-down process in ART in quite the way that you state above. My reason for this is summarized by the acronym CLEARS, for the processes of Consciousness, Learning, Expectation, Attention, Resonance, and Synchrony. All the CLEARS processes come into this story, and ART top-down mechanisms contribute to all of them.
> For me, the most fundamental issues concern how ART dynamically self-stabilizes the memories that are learned within the model's bottom-up adaptive filters and top-down expectations.
>
> In particular, during learning, a big enough mismatch can lead to hypothesis testing and search for a new, or previously learned, category that leads to an acceptable match. The criterion for what is a "big enough mismatch" or an "acceptable match" is regulated by a vigilance parameter that can itself vary in a state-dependent way.
>
> After learning occurs, a bottom-up input pattern typically directly selects the best-matching category, without any hypothesis testing or search. And even if there is a reset due to a large initial mismatch with a previously active category, a single reset event may lead directly to a matching category that can directly resonate with the data.
>
> I should note that all of the foundational predictions of ART now have substantial bodies of psychological and neurobiological data to support them. See the review article if you would like to read about them.
>
>> The currently active neuron is the one being examined by the top-down process
>
> I'm not sure what you mean by "being examined," but perhaps my comment above may deal with it.
>
> I should comment, though, on your use of the phrase "currently active neuron." I assume that you mean at the category level.
>
> In this regard, there are two ARTs. The first aspect of ART is as a cognitive and neural theory whose scope, which includes perceptual, cognitive, and adaptively timed cognitive-emotional dynamics, among other processes, is illustrated by the above-referenced 2012 review article in *Neural Networks*. In the biological theory, there is no general commitment to just one "currently active neuron." One always considers the neuronal population, or populations, that represent a learned category. Sometimes, but not always, a winner-take-all category is chosen.
>
> The 2012 review article illustrates some of the large databases of psychological and neurobiological data that have been explained in a principled way, quantitatively simulated, and successfully predicted by ART over a period of decades. ART-like processing is, however, certainly not the only kind of computation that may be needed to understand how the brain works. The paradigm called Complementary Computing, which I introduced a while ago, makes precise the sense in which ART may be just one kind of dynamics supported by advanced brains. This is also summarized in the review article.
>
> The second aspect of ART is as a series of algorithms that mathematically characterize key ART design principles and mechanisms in a focused setting, and that provide algorithms for large-scale applications in engineering and technology. ARTMAP, fuzzy ARTMAP, and distributed ARTMAP are among these, all of them developed with Gail Carpenter. Some of these algorithms use winner-take-all categories to enable the proof of mathematical theorems that characterize how the underlying design principles work. In contrast, the distributed ARTMAP family of algorithms, developed by Gail Carpenter and her colleagues, allows distributed category representations without losing the benefits of fast, incremental, self-stabilizing learning and prediction in response to large non-stationary databases that can include many unexpected events.
> See, e.g., http://techlab.bu.edu/members/gail/articles/115_dART_NN_1997_.pdf and http://techlab.bu.edu/members/gail/articles/155_Fusion2008_CarpenterRavindran.pdf.
>
> I should note that FAST learning is a technical concept: it means that each adaptive weight can converge to its new equilibrium value on EACH learning trial. That is why ART algorithms can often successfully carry out one-trial incremental learning of a database. This is not true of many other algorithms, such as back propagation, simulated annealing, and the like, which all experience catastrophic forgetting if they try to do fast learning. Almost all other learning algorithms need to be run using slow learning, which allows only a small increment in the values of adaptive weights on each learning trial, to avoid massive memory instabilities, and they work best in response to stationary data. Such algorithms often fail to detect important rare cases, among other limitations. ART can provably learn in either the fast or the slow mode in response to non-stationary data.
>
>> in a sequential fashion: one neuron after another, until an acceptable neuron is found.
>>
>> (2) The input to ART in the late 1990's is a single feature vector as a monolithic input. By monolithic, I mean that all neurons take the entire input feature vector as input. I raise this point here because a neuron in ART in the late 1990's does not have an explicit local sensory receptive field (SRF), i.e., neurons are fully connected to all components of the input vector. A local SRF means that each neuron is connected only to a small region of an input image.
>
> Various ART algorithms for technology do use fully connected networks. They represent a single-channel case, which is often sufficient in applications and which simplifies mathematical proofs. However, the single-channel case is, as its name suggests, not a necessary constraint on ART design.
>
> In addition, many ART biological models do not restrict themselves to the single-channel case, and they do have receptive fields. These include the LAMINART family of models that predict functional roles for many identified cell types in the laminar circuits of cerebral cortex.
> These models illustrate how variations of a shared laminar circuit design can carry out very different intelligent functions, such as 3D vision (e.g., 3D LAMINART), speech and language (e.g., cARTWORD), and cognitive information processing (e.g., LIST PARSE). They are all summarized in the 2012 review article, with the archival articles themselves on my web page http://cns.bu.edu/~steve.
>
> The existence of these laminar variations-on-a-theme provides an existence proof for the exciting goal of designing a family of chips whose specializations can realize all aspects of higher intelligence, and which can be consistently connected because they all share a similar underlying design. Work on achieving this goal can productively occupy lots of creative modelers and technologists for many years to come.
>
> I hope that the above replies provide some relevant information, as well as pointers for finding more.
>
> Best,
>
> Steve
>
>> My apology again if my understanding above has errors, although I have examined the above two points carefully through several of your papers.
>>
>> Best regards,
>>
>> -John
>>
>> Juyang (John) Weng, Professor
>> Department of Computer Science and Engineering
>> MSU Cognitive Science Program and MSU Neuroscience Program
>> 428 S Shaw Ln Rm 3115, Michigan State University
>> East Lansing, MI 48824 USA
>> Tel: 517-353-4388  Fax: 517-432-1061
>> Email: weng at cse.msu.edu
>> URL: http://www.cse.msu.edu/~weng/
>
> Stephen Grossberg
> Wang Professor of Cognitive and Neural Systems
> Professor of Mathematics, Psychology, and Biomedical Engineering
> Director, Center for Adaptive Systems
> http://www.cns.bu.edu/about/cas.html
> http://cns.bu.edu/~steve
> steve at bu.edu

From kerstin at nld.ds.mpg.de Mon Apr 7 09:20:45 2014
From: kerstin at nld.ds.mpg.de (Kerstin Mosch)
Date: Mon, 07 Apr 2014 15:20:45 +0200
Subject: Connectionists: Bernstein Conference 2014 - Registration is now open
Message-ID: <5342A62D.4050509@nld.ds.mpg.de>

BERNSTEIN CONFERENCE 2014 in Göttingen

Registration is now open! Early registration deadline: July 7, 2014

Satellite Workshops: September 02-03, 2014
Main Conference: September 03-05, 2014

The Bernstein Conference has become the largest annual computational neuroscience conference in Europe and now regularly attracts more than 500 international participants. This year, the conference is organized by the Bernstein Focus Neurotechnology Goettingen and will take place September 03-05, 2014. In addition, there will be a series of pre-conference satellite workshops on September 02-03, 2014.

The Bernstein Conference is a single-track conference covering all aspects of computational neuroscience and neurotechnology, and sessions for poster presentations are an integral part of the conference. The satellite workshops aim to provide an informal forum for the discussion of timely research questions and challenges.

For more information on the conference, please visit: http://www.bernstein-conference.de

CONFERENCE DATE AND VENUE:
Satellite Workshops: September 02-03, 2014
Main Conference: September 03-05, 2014
Venue: Central Lecture Hall (ZHG), Platz der Goettinger Sieben 5, 37073 Göttingen, Germany

PUBLIC PHD STUDENT EVENT: September 02, 2014

PROGRAM COMMITTEE: Ilka Diester, Daniel Durstewitz, Thomas Euler, Alexander Gail, Matthias Kaschube, Jason Kerr, Roland Schaette, Fred Wolf, Florentin Wörgötter

ORGANIZING COMMITTEE: Florentin Wörgötter, Kerstin Mosch, Yvonne Reimann

We look forward to seeing you in Goettingen in September!

--
Dr. Kerstin Mosch
Bernstein Center for Computational Neuroscience (BCCN) Goettingen
Bernstein Focus Neurotechnology (BFNT) Goettingen
Max Planck Institute for Dynamics and Self-Organization
Am Fassberg 17, D-37077 Goettingen, Germany
T: +49 (0) 551 5176 - 425
E: kerstin at nld.ds.mpg.de
I: www.bccn-goettingen.de
I: www.bfnt-goettingen.de
From torsello at dsi.unive.it Mon Apr 7 09:36:01 2014
From: torsello at dsi.unive.it (Andrea Torsello)
Date: Mon, 07 Apr 2014 15:36:01 +0200
Subject: Connectionists: Call for Participation International Summer School on Complex Networks
Message-ID: <5342A9C1.4080401@dsi.unive.it>

ISSCN International Summer School on Complex Networks
Bertinoro, Italy, July 14-18, 2014
http://www.dsi.unive.it/isscn/

Complex networks are an emerging and powerful computational tool in the physical, biological, and social sciences. Their aim is to capture the structural properties of data represented as graphs or networks, providing ways of characterizing both the static and dynamic facets of network structure. The topic draws on ideas from graph theory, statistical physics, and dynamical systems theory. Applications include communication networks, epidemiology, transportation, social networks, and ecology.

The aim of the summer school is to provide an overview of both the foundations and the state of the art in the field. Lectures will be presented by intellectual leaders in the field, and there will be an opportunity to interact closely with them during the school. The school will be held in the Bertinoro Residential Centre of the University of Bologna, which is situated in the beautiful hills between Ravenna and Bologna.

The summer school is aimed at PhD students and younger postdocs or RAs working in the complex networks area. It will run for 5 days with lectures in the mornings and afternoons, and the school fee includes residential accommodation and meals at the residential centre.

List of Lecturers:
Michele Benzi, Emory University, USA
Luciano Costa, University of São Paulo, Brazil
Ernesto Estrada, University of Strathclyde, UK
Jesus Gomez Gardenes, University of Zaragoza, Spain
Ferenc Jordan, The Microsoft Research - COSBI, Italy
Yamir Moreno, University of Zaragoza, Spain
Mirco Musolesi, University of Birmingham, UK
Simone Severini, University College London, UK

Organizers:
Andrea Torsello, Università Ca' Foscari Venezia, Italy
Edwin Hancock, University of York, UK
Richard Wilson, University of York, UK
Ernesto Estrada, University of Strathclyde, UK

Registration Fees:
Registration includes accommodation for 5 nights (13/7 to 17/7) and meals. Accommodation can be in single or double rooms (subject to availability).

              Single room   Double room
PhD student   EUR 700       EUR 650
Postdoc       EUR 800       EUR 750
Other         EUR 900       EUR 850

We hope to be able to provide scholarships to PhD students to reduce the registration fees. Student applicants will be automatically considered for scholarships.

Application is now open through the school's website. The deadline for application is April 15th. After that time we will let all applicants know whether their application was successful, whether they are entitled to a scholarship, and, in that case, the amount of financial aid we can offer. Applicants must send an expression of interest along with their curriculum vitae. PhD students must also send a letter from their supervisor in support of the request to be considered for a scholarship.

Contact: Andrea Torsello

--
Andrea Torsello, PhD
Dipartimento di Scienze Ambientali, Informatica, Statistica
Università Ca' Foscari di Venezia
via Torino 155, 30172 Venezia Mestre, Italy
Tel: +39 0412348468  Fax: +39 0412348419
http://www.dsi.unive.it/~atorsell
From matog at cab.cnea.gov.ar Mon Apr 7 08:08:30 2014
From: matog at cab.cnea.gov.ar (German Mato)
Date: Mon, 07 Apr 2014 09:08:30 -0300
Subject: Connectionists: School J. A. Balseiro 2014: Modeling in Neuroscience
Message-ID: <5342953E.1030600@cab.cnea.gov.ar>

We are pleased to announce the "School J. A. Balseiro 2014: Modeling in Neuroscience," which will take place in Bariloche, Argentina, from the 6th to the 31st of October. Additional information: http://fisica.cab.cnea.gov.ar/escuelaib2014-neurociencias

From rbcp at cin.ufpe.br Mon Apr 7 13:42:16 2014
From: rbcp at cin.ufpe.br (Ricardo Bastos C. Prudencio)
Date: Mon, 7 Apr 2014 09:42:16 -0800
Subject: Connectionists: Call for Papers - BRACIS/ENIAC 2014
Message-ID:

Deadline for paper submission: April 30th (firm deadline)
=========================================================
Call for Papers - BRACIS/ENIAC 2014
BRACIS 2014 - Brazilian Conference on Intelligent Systems
ENIAC 2014 - Encontro Nacional de Inteligência Artificial e Computacional
October 18-23, 2014 - São Carlos-SP, Brazil
http://jcris2014.icmc.usp.br/index.php/bracis-eniac
=========================================================

Submission:
BRACIS/ENIAC will have a unified submission process. Papers should be written in English or Portuguese. Submitted papers must not exceed 6 pages, including all tables, figures, and references (IEEE Computer Society Press style). At most two additional pages are permitted, with an overlength page charge. Formatting instructions, as well as templates for Word and LaTeX, can be found at http://www.computer.org/portal/web/cscps/formatting

The best papers in English will be included in the BRACIS proceedings published by IEEE. All accepted papers written in Portuguese will appear in the ENIAC proceedings, published electronically in BDBComp. The remaining accepted papers submitted in English, but not selected for BRACIS, will be recommended for publication in ENIAC. Authors of selected papers will be invited to submit extended versions of their work for consideration in special issues of several international journals (to be announced).

Papers must be submitted as PDF files through the EasyChair system: https://www.easychair.org/conferences/?conf=braciseniac2014. The deadline for paper registration/upload is April 30th, 23:55 BRST.

Submission Policy:
By submitting papers to BRACIS/ENIAC 2014, the authors agree that, in case of acceptance, at least one author will be fully registered for the conference **before** the deadline for sending the camera-ready paper. Accepted papers without the respective author's full registration **before** the deadline will not be included in the proceedings.

Important dates:
Submission deadline for regular papers: April 30, 2014 (firm deadline)
Acceptance notification: May 30, 2014
Final camera-ready papers due: June 30, 2014

Program Chairs: Ricardo Prudencio (UFPE) and Paulo E. Santos (FEI)
Local BRACIS Chairs: Estevam Rafael Hruschka Junior (UFSCar) and Heloisa de Arruda Camargo (UFSCar)

From weng at cse.msu.edu Mon Apr 7 13:58:56 2014
From: weng at cse.msu.edu (Juyang Weng)
Date: Mon, 07 Apr 2014 13:58:56 -0400
Subject: Connectionists: how the brain works? (UNCLASSIFIED)
In-Reply-To:
References: <51FDF3A2-717D-4399-9076-331EEA0FC45A@lehigh.edu> <5340188F.6090405@cse.msu.edu> <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu>
Message-ID: <5342E760.7080309@cse.msu.edu>

Tsvi,

Note that ART uses a vigilance value to pick the first "acceptable" match in its sequential bottom-up and top-down search. I believe that is what Steve meant when he mentioned vigilance. Why do you think "ART is a neural way to implement a K-nearest neighbor algorithm"? If not all the neighbors have sequentially participated, how can ART find the nearest neighbor, let alone the k nearest neighbors?

Our DN uses an explicit k-nearest mechanism to find the k nearest neighbors in every network update, to avoid the problems of slow resonance in existing models of spiking neuronal networks (a toy sketch follows below). The explicit k-nearest mechanism itself is not meant to be biologically plausible, but it gives a computational advantage for software simulation of large networks at speeds below 1000 network updates per second.

I guess that more detailed molecular simulations of individual neuronal spikes (such as those using the Hodgkin-Huxley model of a neuron, the NEURON software, or the Blue Brain project directed by the respected Dr. Henry Markram) are very useful for showing detailed molecular, synaptic, and neuronal properties. However, they miss necessary brain-system-level mechanisms so much that it is difficult for them to show major brain-scale functions (such as learning to recognize and detect natural objects directly from cluttered natural scenes).

According to my understanding, if one uses a detailed neuronal model for each of a variety of neuronal types and connects those simulated neurons of different types according to a diagram of Brodmann areas, the simulation is NOT going to lead to any major brain function. One still needs brain-system-level knowledge such as that taught in the BMI 871 course.

-John
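A minimal sketch of such an explicit k-nearest step, under simplifying assumptions of my own: stored synaptic vectors are compared with the input by Euclidean distance in one parallel pass, rather than by a sequential, vigilance-gated search. The names, the distance measure, and the data are illustrative only, not the published DN algorithm.

```python
import numpy as np

def k_nearest(stored, x, k=3):
    """Return the indices of the k stored vectors closest to input x.
    Every stored vector participates in a single parallel distance
    computation; no sequential hypothesis testing is involved."""
    d = np.linalg.norm(stored - x, axis=1)   # distance to every neuron at once
    return np.argsort(d)[:k]                 # indices of the k best matches

rng = np.random.default_rng(0)
stored = rng.random((500, 32))               # 500 learned synaptic vectors
winners = k_nearest(stored, rng.random(32))
print(winners)                               # the k firing neurons this update
```

This contrasts with a vigilance-gated search, which examines one candidate at a time and stops at the first acceptable match rather than guaranteeing the k globally nearest ones.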
--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
428 S Shaw Ln Rm 3115
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/
----------------------------------------------

From decision.making.big.data at gmail.com Mon Apr 7 20:55:27 2014
From: decision.making.big.data at gmail.com (Amir-massoud Farahmand, Andre Barreto)
Date: Mon, 7 Apr 2014 20:55:27 -0400
Subject: Connectionists: [CFP] AAAI Workshop on Sequential Decision-Making with Big Data (Deadline: April 10)
Message-ID:

This is just a friendly reminder that the paper submission deadline for this workshop is April 10, 2014. We also have an exciting list of confirmed invited speakers. Make sure that you participate!
-am

The AAAI-14 Workshop on Sequential Decision-Making with Big Data
Held at the AAAI Conference on Artificial Intelligence (AAAI-14), Quebec City, Canada, July 27-28, 2014
Workshop URL: https://sites.google.com/site/decisionmakingbigdata

In the 21st century, we live in a world where data is abundant. We would like to use these data to make better decisions in many areas of life, such as industry, health care, business, and government. This opportunity has encouraged many machine learning and data mining researchers to develop tools that benefit from big data. However, the methods developed so far have focused almost exclusively on the task of prediction. As a result, the question of how big data can leverage decision-making has remained largely untouched.
This workshop is about decision-making in the era of big data. The main topic will be the complex decision-making problems, in particular the sequential ones, that arise in this context. Examples of these problems are high-dimensional, large-scale reinforcement learning and its simplified versions, such as various types of bandit problems. These problems can be classified into three potentially overlapping categories:

1) Very large number of data points. Examples: data coming from user clicks on the web and financial data. In this scenario, the most important issue is computational cost. Any algorithm that is super-linear will not be practical.

2) Very high-dimensional input space. Examples are found in robotics and computer vision problems. The only practical way to solve these problems is to exploit their regularities.

3) Partially observable systems. Here the immediately observed variables do not carry enough information for accurate decision-making, but one might extract sufficient information by considering the history of observations. If the time series is projected onto a high-dimensional representation, one ends up with problems similar to category 2.

Topics: Some potential topics of interest are:
- Reinforcement learning algorithms that deal with one of the aforementioned categories
- Bandit problems with a high-dimensional action space
- Challenging real-world applications of sequential decision-making that can benefit from big data. Example domains include robotics, adaptive treatment strategies for personalized health care, finance, recommendation systems, and advertising.

Format: The workshop will be a one-day meeting consisting of invited talks, oral and poster presentations from participants, and a final panel-driven discussion.

Attendance: We expect about 30-50 participants, including invited speakers, contributing authors, and interested researchers.

Submission: We invite researchers from different fields of machine learning (e.g., reinforcement learning, online learning, active learning), optimization, and systems (distributed and parallel computing), as well as application-domain experts (e.g., robotics, recommendation systems, personalized medicine) to submit an extended abstract (maximum 4 pages in AAAI format) of their recent work to decision.making.big.data at gmail.com. Accepted papers will be presented as posters or contributed oral presentations.

Important Dates:
Paper submission: April 10, 2014
Notification of acceptance: May 1, 2014
Camera-ready papers: May 15, 2014
Workshop: July 27 or 28, 2014

Invited Speakers:
- Emma Brunskill (Carnegie Mellon University)
- Lihong Li (Microsoft Research)
- Richard Sutton (University of Alberta)
- And a few others to be confirmed later

Organizing Committee:
- Amir-massoud Farahmand (Carnegie Mellon University)
- André M.S. Barreto (Brazilian National Laboratory for Scientific Computing (LNCC))
- Mohammad Ghavamzadeh (Adobe Research and INRIA Lille - Team SequeL)
- Joelle Pineau (McGill University)
- Doina Precup (McGill University)
From gtesauro at us.ibm.com Mon Apr 7 20:28:25 2014
From: gtesauro at us.ibm.com (Gerry Tesauro)
Date: Mon, 7 Apr 2014 20:28:25 -0400
Subject: Connectionists: CfP: AAAI-14 Workshop on "Cognitive Computing for Augmented Human Intelligence"
In-Reply-To:
References:
Message-ID:

Dear Connectionists readers,

Please consider participating in and submitting papers to the upcoming workshop on "Cognitive Computing for Augmented Human Intelligence," to be held at AAAI-14 on July 28, 2014 in Quebec City, Canada.

Confirmed Invited Speakers: Yoshua Bengio (Univ. of Montreal); Milind Tambe (USC)

Organizing Committee: Biplav Srivastava, Aurelie Lozano, Janusz Marecki, Irina Rish and Gerald Tesauro (IBM); Ruslan Salakhutdinov (Univ. of Toronto); Manuela Veloso (CMU)

The full Call for Papers and workshop details may be found here: https://sites.google.com/site/cognitivecomputingahi/ (Please note the submission deadline will be extended to April 17, 2014.)

Regards,
--Gerry

From achler at gmail.com Tue Apr 8 01:30:48 2014
From: achler at gmail.com (Tsvi Achler)
Date: Mon, 7 Apr 2014 22:30:48 -0700
Subject: Connectionists: how the brain works? (UNCLASSIFIED)
In-Reply-To: <5342E760.7080309@cse.msu.edu>
References: <51FDF3A2-717D-4399-9076-331EEA0FC45A@lehigh.edu> <5340188F.6090405@cse.msu.edu> <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu> <5342E760.7080309@cse.msu.edu>
Message-ID:

Hi John,

ART evaluates the distance between the contending representation and the current input through vigilance. If they are too far apart, a vigilance reset will be triggered. The best resonance will be achieved when they have the least distance.

If your model uses k-nearest neighbors without a neural equivalent, then it is not quite in the spirit of a connectionist model. For example, Bayesian networks do a great job of emulating brain behavior by modeling the integration of priors, and they have been invaluable for modeling cognitive studies. However, they assume a statistical configuration of connections and distributions that we do not quite know how to emulate with neurons. Thus pure Bayesian models are also questionable in terms of connectionist modeling. But some connectionist models can emulate some statistical models; for example, see section 2.4 in Thomas & McClelland's chapter in Sun's 2008 book (http://www.psyc.bbk.ac.uk/people/academic/thomas_m/TM_Cambridge_sub.pdf). I am not suggesting Hodgkin-Huxley-level detailed neuron models; however, connectionist models should have their connections explicitly defined.

Sincerely,
-Tsvi
From troy.d.kelley6.civ at mail.mil Tue Apr 8 09:14:36 2014
From: troy.d.kelley6.civ at mail.mil (Kelley, Troy D CIV (US))
Date: Tue, 8 Apr 2014 13:14:36 +0000
Subject: Connectionists: how the brain works? (UNCLASSIFIED)
In-Reply-To: <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu>
References: <51FDF3A2-717D-4399-9076-331EEA0FC45A@lehigh.edu> <5340188F.6090405@cse.msu.edu> <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu>
Message-ID:

Classification: UNCLASSIFIED
Caveats: NONE

Steve,

-------
In particular, during learning, a big enough mismatch can lead to hypothesis testing and search for a new, or previously learned, category that leads to an acceptable match. The criterion for what is "big enough mismatch" or "acceptable match" is regulated by a vigilance parameter that can itself vary in a state-dependent way.
-------

In our work with ART, we have found vigilance to be an extremely sensitive parameter. I wonder if there is some way to change this calculation to keep vigilance from being so sensitive. There seemed to be a sweet spot where the parameter worked well with the learned material, but it was sometimes difficult to find.

Also, it seems to me, from my work in developmental cognition, that this parameter might benefit from being incrementally adaptive; in other words, it would change with repeated exposure to learned material. Early during learning, the criterion for a "big enough mismatch" would be low, and as learning develops the criterion would become higher, reflecting the fact that more information has been learned and there is less chance of a big mismatch.

Troy Kelley
Cognitive Robotics Team Leader
Human Research and Engineering Directorate
Army Research Laboratory
Aberdeen, MD 21005
V: 410-278-5869

Classification: UNCLASSIFIED
Caveats: NONE
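One way to picture an incrementally adaptive vigilance of the kind Troy suggests is sketched below. The match function is the standard ART-1-style ratio |input AND prototype| / |input|, but the rising schedule, its constants, and all names are illustrative assumptions for this sketch, not a tested recipe.

```python
import numpy as np

def match(x, w):
    """ART-1-style match ratio: how much of input x the prototype w covers."""
    x, w = np.asarray(x, bool), np.asarray(w, bool)
    return (x & w).sum() / max(x.sum(), 1)

def adaptive_rho(trial, rho_start=0.5, rho_max=0.9, rate=0.01):
    """Vigilance that tightens with exposure: permissive early in
    learning, stricter as more patterns have been seen."""
    return min(rho_max, rho_start + rate * trial)

w = [1, 1, 0, 0]                       # a learned prototype
x = [1, 1, 1, 0]                       # input: 2/3 of it is covered by w
for trial in (0, 10, 20, 30):
    rho = adaptive_rho(trial)
    accepted = match(x, w) >= rho      # accepted early, reset later
    print(trial, round(rho, 2), accepted)
```

Under this schedule, the same imperfect match (ratio about 0.67) passes vigilance on early trials (rho = 0.5, 0.6) but triggers a reset once rho exceeds 0.67, which is one concrete reading of "the criterion becomes higher as learning develops."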
From psajda at columbia.edu Tue Apr 8 15:55:09 2014
From: psajda at columbia.edu (Paul Sajda)
Date: Tue, 8 Apr 2014 15:55:09 -0400
Subject: Connectionists: Postdoctoral positions in neuroimaging and rapid decision making
Message-ID: <1FEBED08-36BA-4E3E-A6CE-062DD47FB53A@columbia.edu>

The Laboratory for Intelligent Imaging and Neural Computing (LIINC) at Columbia University has immediate openings for several Postdoctoral Research Scientists. These positions will contribute to our research program on understanding the neural basis of rapid decision-making in the human brain and on using this understanding to build brain-computer interfaces (BCIs) for next-generation human-machine interaction. Our approach leverages multi-modal neuroimaging, including a custom-designed simultaneous EEG/fMRI system, to identify the spatio-temporal cortical and subcortical networks underlying rapid decision-making. Our analysis methods are deeply rooted in machine learning, with a focus on using such techniques to enable single-trial analysis for correlating with decision variables.

Applicants should have a background in one, and preferably several, of the following areas: machine learning, neural signal processing, neuroimaging (EEG and/or fMRI), decision-making, cognitive neuroscience, and/or BCI. A Ph.D. in Engineering, Computer Science, or Neuroscience is preferred.

LIINC (liinc.bme.columbia.edu) is in the Department of Biomedical Engineering at Columbia University and interacts closely with other departments at Columbia, including Electrical Engineering, Biological Sciences, Computer Science, and Neuroscience. We are also a member lab of the newly formed Center for Neural Engineering and Computation (CNEC) at Columbia University (cnec.columbia.edu) as well as the Institute for Data Sciences and Engineering (idse.columbia.edu). Additionally, we are a member of the Cognition and Neuroergonomics Collaborative Technology Alliance (CaN CTA), which is a collaborative research effort between the Army Research Labs, academia, and industry. Postdoctoral Research Scientists will thus have the opportunity to collaborate broadly with other scientists in academia, industry, and government, as part of one of the largest collaborative programs in real-world neuroimaging and brain-computer interfaces.

Interested candidates should email their CV, three representative papers, the names of three references, and a cover letter to Prof. Paul Sajda (psajda at columbia.edu). Applications will be considered until the positions are filled. The position is for one year, with the option to renew for 2-3 years given satisfactory performance and available funding.

Paul Sajda, Ph.D.
Professor, Departments of Biomedical Engineering, Electrical Engineering and Radiology
Columbia University
351 Engineering Terrace Building, Mail Code 8904
1210 Amsterdam Avenue, New York, NY 10027
tel: (212) 854-5279  fax: (212) 854-8725
email: ps629 at columbia.edu
http://liinc.bme.columbia.edu

Editor-in-Chief, IEEE Transactions on Neural Systems and Rehabilitation Engineering
email: editortnsre at gmail.com
http://tnsre.bme.columbia.edu
URL:

From cie.conference.series at gmail.com Tue Apr 8 16:37:55 2014
From: cie.conference.series at gmail.com (CiE Conference Series)
Date: Tue, 8 Apr 2014 21:37:55 +0100 (BST)
Subject: Connectionists: CiE 2014: Language, Life, Limits - Call for Presentations, Registration
Message-ID:

-------------------------------------------------------------------
COMPUTABILITY IN EUROPE 2014: Language, Life, Limits
Budapest, Hungary
June 23 - 27, 2014
http://cie2014.inf.elte.hu
-------------------------------------------------------------------

------------------------------------------------------
EARLY REGISTRATION DEADLINE: April 23, 2014
DEADLINE FOR INFORMAL PRESENTATIONS: April 14, 2014
------------------------------------------------------

CALL FOR INFORMAL PRESENTATIONS

There is a remarkable difference in conference style between computer science and mathematics conferences. Mathematics conferences allow for informal presentations that are prepared very shortly before the conference and inform the participants about current research and work in progress. The format of computer science conferences, with pre-conference proceedings, cannot accommodate this form of scientific communication.

Continuing the tradition of past CiE conferences, this year's CiE conference also endeavours to get the best of both worlds. In addition to the formal presentations based on our LNCS proceedings volume, we invite researchers to present informal presentations. For this, please send us a brief description of your talk (between one paragraph and one page) by the DEADLINE: APRIL 14, 2014.

Please submit your abstract electronically, via EasyChair, selecting the category "Informal Presentation". You will be notified whether your talk has been accepted for informal presentation, usually within a week or two after your submission.

__________________________________________________________________________

ASSOCIATION COMPUTABILITY IN EUROPE http://www.computability.org.uk
CiE Conference Series http://www.illc.uva.nl/CiE
CiE 2014: Language, Life, Limits http://cie2014.inf.elte.hu
CiE Membership Application Form http://www.lix.polytechnique.fr/CIE
AssociationCiE on Twitter http://twitter.com/AssociationCiE

__________________________________________________________________________

From weng at cse.msu.edu Wed Apr 9 00:19:17 2014
From: weng at cse.msu.edu (Juyang Weng)
Date: Wed, 09 Apr 2014 00:19:17 -0400
Subject: Connectionists: how the brain works? (UNCLASSIFIED)
In-Reply-To: References: <51FDF3A2-717D-4399-9076-331EEA0FC45A@lehigh.edu> <5340188F.6090405@cse.msu.edu> <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu> <5342E760.7080309@cse.msu.edu>
Message-ID: <5344CA45.3060407@cse.msu.edu>

Tsvi:

Let me explain in a little more detail.

There are two large categories of biological neurons, excitatory and inhibitory. Both develop mainly through signal statistics and are not specified primarily by the genomes. Not everyone agrees with this point, but please bear with this view for now. I gave a more detailed discussion of this view in my NAI book.

The main effect of inhibitory connections, with the help of excitatory connections, is to reduce the number of firing neurons (David Field called this sparse coding). This sparse coding is important because the neurons that do not fire constitute the long-term memory of the area at this point in time. Here my view differs from David Field's: he wrote that sparse coding serves the current representations, whereas I think sparse coding is necessary for long-term memory. Not everyone agrees with this point either, but please bear with it for now.
However, this reduction requires very fast parallel neuronal updates to avoid uncontrollable large-magnitude oscillations. Even with the brain's fast parallel neuronal updates, we still see slow but small-magnitude oscillations, such as the well-known theta and alpha waves. My view is that such slow, small-magnitude oscillations are side effects of the many loops formed by excitatory and inhibitory connections, not something genuinely desirable for brain operation (sorry, Paul Werbos). Again, not everyone agrees, but please bear with this view for now.

Therefore, as far as I understand, existing computer simulations of spiking neurons do not show major brain functions, because they have to deal with slow oscillations that are very different from the brain's, e.g., as Dr. Henry Markram reported (40 Hz?).

The above discussion again shows the power and necessity of an overarching brain theory like the one in my NAI book. Those who only simulate biological neurons using superficial biological phenomena are not going to demonstrate any major brain functions. They can report signal statistics from their simulations, but signal statistics are far from brain functions.

-John

On 4/8/14 1:30 AM, Tsvi Achler wrote:
> Hi John,
> ART evaluates distance between the contending representation and the current input through vigilance. If they are too far apart, a poor vigilance signal will be triggered. The best resonance will be achieved when they have the least amount of distance.
> If, in your model, K-nearest neighbors is used without a neural equivalent, then your model is not quite in the spirit of a connectionist model.
> For example, Bayesian networks do a great job emulating brain behavior, modeling the integration of priors, and have been invaluable for modeling cognitive studies. However, they assume a statistical configuration of connections and distributions that it is not quite known how to emulate with neurons. Thus pure Bayesian models are also questionable in terms of connectionist modeling. But some connectionist models can emulate some statistical models; for example, see section 2.4 in Thomas & McClelland's chapter in Sun's 2008 book (http://www.psyc.bbk.ac.uk/people/academic/thomas_m/TM_Cambridge_sub.pdf).
> I am not suggesting Hodgkin-Huxley level detailed neuron models; however, connectionist models should have their connections explicitly defined.
> Sincerely,
> -Tsvi
>
> On Mon, Apr 7, 2014 at 10:58 AM, Juyang Weng <weng at cse.msu.edu> wrote:
>> Tsvi,
>>
>> Note that ART uses a vigilance value to pick up the first "acceptable" match in its sequential bottom-up and top-down search. I believe that is what Steve meant when he mentioned vigilance.
>>
>> Why do you think of ART as a neural way to implement a K-nearest neighbor algorithm? If not all the neighbors have sequentially participated, how can ART find the nearest neighbor, let alone the K nearest neighbors?
>>
>> Our DN uses an explicit k-nearest mechanism to find the k nearest neighbors in every network update, to avoid the problems of slow resonance in existing models of spiking neuronal networks. The explicit k-nearest mechanism itself is not meant to be biologically plausible, but it gives a computational advantage for software simulation of large networks at a speed slower than 1000 network updates per second.
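[The "explicit k-nearest mechanism" above is described only in words, so a minimal sketch of that style of top-k winner selection may help readers follow the exchange. Everything in the sketch, including the scoring rule, array shapes, and function name, is invented here for illustration; it is not DN code.]

import numpy as np

def top_k_update(x, W, k=2):
    # Score each neuron's match to the input with a normalized
    # inner product (one of many possible goodness-of-match rules).
    scores = (W @ x) / (np.linalg.norm(W, axis=1) * np.linalg.norm(x) + 1e-12)
    # Explicit sorting stands in for iterative inhibitory competition:
    # exactly the k best-matching neurons fire and all others stay
    # silent, so the silent neurons' weights are untouched long-term memory.
    winners = np.argsort(scores)[-k:]
    y = np.zeros(W.shape[0])
    y[winners] = scores[winners]
    return y, winners

# Tiny demonstration with made-up sizes: 10 neurons, 5-D input, 2 winners.
rng = np.random.default_rng(0)
W = rng.random((10, 5))
y, winners = top_k_update(rng.random(5), W, k=2)
print("firing neurons:", np.sort(winners))

[The sorting step is exactly the design choice under debate: it reaches in one pass the sparse firing pattern that a biologically plausible network would have to approach through fast recurrent inhibition.]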
> > I guess that more detailed molecular simulations of individual > neuronal spikes (such as using the Hodgkin-Huxley model of > a neuron, using the NEURON software, > or like the Blue Brain > project directed by respected Dr. > Henry Markram) > are very useful for showing some detailed molecular, synaptic, and > neuronal properties. > However, they miss necessary brain-system-level mechanisms so much > that it is difficult for them > to show major brain-scale functions > (such as learning to recognize objects and detection of natural > objects directly from natural cluttered scenes). > > According to my understanding, if one uses a detailed neuronal > model for each of a variety of neuronal types and > connects those simulated neurons of different types according to a > diagram of Brodmann areas, > his simulation is NOT going to lead to any major brain function. > He still needs brain-system-level knowledge such as that taught in > the BMI 871 course. > > -John > > On 4/7/14 8:07 AM, Tsvi Achler wrote: >> Dear Steve, John >> I think such discussions are great to spark interests in feedback >> (output back to input) such models which I feel should be given >> much more attention. >> In this vein it may be better to discuss more of the details here >> than to suggest to read a reference. >> >> Basically I see ART as a neural way to implement a K-nearest >> neighbor algorithm. Clearly the way ART overcomes the neural >> hurdles is immense especially in figuring out how to coordinate >> neurons. However it is also important to summarize such methods >> in algorithmic terms which I attempt to do here (and please >> comment/correct). >> Instar learning is used to find the best weights for quick >> feedforward recognition without too much resonance (otherwise >> more resonance will be needed). Outstar learning is used to find >> the expectation of the patterns. The resonance mechanism >> evaluates distances between the "neighbors" evaluating how close >> differing outputs are to the input pattern (using the >> expectation). By choosing one winner the network is equivalent >> to a 1-nearest neighbor model. If you open it up to more winners >> (eg k winners) as you suggest then it becomes a k-nearest >> neighbor mechanism. >> >> Clearly I focused here on the main ART modules and did not >> discuss other additions. But I want to just focus on the main >> idea at this point. >> Sincerely, >> -Tsvi >> >> >> On Sun, Apr 6, 2014 at 1:30 PM, Stephen Grossberg >> > wrote: >> >> Dear John, >> >> Thanks for your questions. I reply below. >> >> On Apr 5, 2014, at 10:51 AM, Juyang Weng wrote: >> >>> Dear Steve, >>> >>> This is one of my long-time questions that I did not have a >>> chance to ask you when I met you many times before. >>> But they may be useful for some people on this list. >>> Please accept my apology of my question implies any false >>> impression that I did not intend. >>> >>> (1) Your statement below seems to have confirmed my >>> understanding: >>> Your top-down process in ART in the late 1990's is basically >>> for finding an acceptable match >>> between the input feature vector and the stored feature >>> vectors represented by neurons (not meant for the nearest >>> match). >> >> ART has developed a lot since the 1990s. A non-technical but >> fairly comprehensive review article was published in 2012 in >> /Neural Networks/ and can be found at >> http://cns.bu.edu/~steve/ART.pdf >> . >> >> I do not think about the top-down process in ART in quite the >> way that you state above. 
My reason for this is summarized by >> the acronym CLEARS for the processes of Consciousness, >> Learning, Expectation, Attention, Resonance, and Synchrony. >> All the CLEARS processes come into this story, and ART >> top-down mechanisms contribute to all of them. For me, the >> most fundamental issues concern how ART dynamically >> self-stabilizes the memories that are learned within the >> model's bottom-up adaptive filters and top-down expectations. >> >> In particular, during learning, a big enough mismatch can >> lead to hypothesis testing and search for a new, or >> previously learned, category that leads to an acceptable >> match. The criterion for what is "big enough mismatch" or >> "acceptable match" is regulated by a vigilance parameter that >> can itself vary in a state-dependent way. >> >> After learning occurs, a bottom-up input pattern typically >> directly selects the best-matching category, without any >> hypothesis testing or search. And even if there is a reset >> due to a large initial mismatch with a previously active >> category, a single reset event may lead directly to a >> matching category that can directly resonate with the data. >> >> I should note that all of the foundational predictions of ART >> now have substantial bodies of psychological and >> neurobiological data to support them. See the review article >> if you would like to read about them. >> >>> The currently active neuron is the one being examined by the >>> top down process >> >> I'm not sure what you mean by "being examined", but perhaps >> my comment above may deal with it. >> >> I should comment, though, about your use of the word >> "currently active neuron". I assume that you mean at the >> category level. >> >> In this regard, there are two ART's. The first aspect of ART >> is as a cognitive and neural theory whose scope, which >> includes perceptual, cognitive, and adaptively timed >> cognitive-emotional dynamics, among other processes, is >> illustrated by the above referenced 2012 review article in >> /Neural Networks/. In the biological theory, there is no >> general commitment to just one "currently active neuron". One >> always considers the neuronal population, or populations, >> that represent a learned category. Sometimes, but not always, >> a winner-take-all category is chosen. >> >> The 2012 review article illustrates some of the large data >> bases of psychological and neurobiological data that have >> been explained in a principled way, quantitatively simulated, >> and successfully predicted by ART over a period of decades. >> ART-like processing is, however, certainly not the only kind >> of computation that may be needed to understand how the brain >> works. The paradigm called Complementary Computing that I >> introduced awhile ago makes precise the sense in which ART >> may be just one kind of dynamics supported by advanced >> brains. This is also summarized in the review article. >> >> The second aspect of ART is as a series of algorithms that >> mathematically characterize key ART design principles and >> mechanisms in a focused setting, and provide algorithms for >> large-scale applications in engineering and technology. >> ARTMAP, fuzzy ARTMAP, and distributed ARTMAP are among these, >> all of them developed with Gail Carpenter. Some of these >> algorithms use winner-take-all categories to enable the proof >> of mathematical theorems that characterize how underlying >> design principles work. 
In contrast, the distributed ARTMAP >> family of algorithms, developed by Gail Carpenter and her >> colleagues, allows for distributed category representations >> without losing the benefits of fast, incremental, >> self-stabilizing learning and prediction in response to a >> large non-stationary databases that can include many >> unexpected events. >> >> See, e.g., >> http://techlab.bu.edu/members/gail/articles/115_dART_NN_1997_.pdf >> and >> http://techlab.bu.edu/members/gail/articles/155_Fusion2008_CarpenterRavindran.pdf. >> >> I should note that FAST learning is a technical concept: it >> means that each adaptive weight can converge to its new >> equilibrium value on EACH learning trial. That is why ART >> algorithms can often successfully carry out one-trial >> incremental learning of a data base. This is not true of many >> other algorithms, such as back propagation, simulated >> annealing, and the like, which all experience catastrophic >> forgetting if they try to do fast learning. Almost all other >> learning algorithms need to be run using slow learning, that >> allows only a small increment in the values of adaptive >> weights on each learning trial, to avoid massive memory >> instabilities, and work best in response to stationary data. >> Such algorithms often fail to detect important rare cases, >> among other limitations. ART can provably learn in either the >> fast or slow mode in response to non-stationary data. >> >>> in a sequential fashion: one neuron after another, until an >>> acceptable neuron is found. >>> >>> (2) The input to the ART in the late 1990's is for a single >>> feature vector as a monolithic input. >>> By monolithic, I mean that all neurons take the entire input >>> feature vector as input. >>> I raise this point here because neuron in ART in the late >>> 1990's does not have an explicit local sensory receptive >>> field (SRF), >>> i.e., are fully connected from all components of the input >>> vector. A local SRF means that each neuron is only >>> connected to a small region >>> in an input image. >> >> Various ART algorithms for technology do use fully connected >> networks. They represent a single-channel case, which is >> often sufficient in applications and which simplifies >> mathematical proofs. However, the single-channel case is, as >> its name suggests, not a necessary constraint on ART design. >> >> In addition, many ART biological models do not restrict >> themselves to the single-channel case, and do have receptive >> fields. These include the LAMINART family of models that >> predict functional roles for many identified cell types in >> the laminar circuits of cerebral cortex. These models >> illustrate how variations of a shared laminar circuit design >> can carry out very different intelligent functions, such as >> 3D vision (e.g., 3D LAMINART), speech and language (e.g., >> cARTWORD), and cognitive information processing (e.g., LIST >> PARSE). They are all summarized in the 2012 review article, >> with the archival articles themselves on my web page >> http://cns.bu.edu/~steve . >> >> The existence of these laminar variations-on-a-theme provides >> an existence proof for the exciting goal of designing a >> family of chips whose specializations can realize all aspects >> of higher intelligence, and which can be consistently >> connected because they all share a similar underlying design. >> Work on achieving this goal can productively occupy lots of >> creative modelers and technologists for many years to come. 
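[Since the vigilance-regulated search cycle, i.e., choice, match test, reset, and resonance, keeps coming up in this exchange, a deliberately simplified sketch may help. This is a toy, fuzzy-ART-flavored reconstruction written for intuition only, not Carpenter and Grossberg's published algorithms; the constants, the omitted complement coding, and the rising-vigilance schedule at the end are all invented for illustration.]

import numpy as np

def art_step(I, categories, rho=0.75, beta=1.0, alpha=0.001):
    # Choice function: rank stored categories, best candidate first.
    T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in categories]
    for j in np.argsort(T)[::-1]:
        w = categories[j]
        # Vigilance test: how much of the input does category j match?
        match = np.minimum(I, w).sum() / I.sum()
        if match >= rho:
            # Resonance: learn toward the matched pattern and stop.
            categories[j] = beta * np.minimum(I, w) + (1 - beta) * w
            return j
        # Otherwise a mismatch reset occurs and the search continues.
    # No acceptable match anywhere: recruit a new category.
    categories.append(I.astype(float).copy())
    return len(categories) - 1

# Example run with a crudely rising vigilance, loosely in the spirit of
# Troy Kelley's incremental-adaptation suggestion earlier in the thread
# (the schedule itself is invented for illustration).
rng = np.random.default_rng(1)
categories = []
for t, I in enumerate(rng.random((20, 8))):
    art_step(I, categories, rho=min(0.9, 0.5 + 0.02 * t))
print("categories recruited:", len(categories))

[With beta = 1.0 the update mimics the "fast learning" mode Steve describes, in which each adaptive weight can reach its new equilibrium value on a single trial.]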
>> I hope that the above replies provide some relevant information, as well as pointers for finding more.
>>
>> Best,
>>
>> Steve
>>
>>> My apology again if my understanding above has errors although I have examined the above two points carefully through multiple your papers.
>>>
>>> Best regards,
>>>
>>> -John
>>>
>>> Juyang (John) Weng, Professor
>>> Department of Computer Science and Engineering
>>> MSU Cognitive Science Program and MSU Neuroscience Program
>>> 428 S Shaw Ln Rm 3115
>>> Michigan State University
>>> East Lansing, MI 48824 USA
>>> Tel: 517-353-4388
>>> Fax: 517-432-1061
>>> Email: weng at cse.msu.edu
>>> URL: http://www.cse.msu.edu/~weng/
>>> ----------------------------------------------
>>
>> Stephen Grossberg
>> Wang Professor of Cognitive and Neural Systems
>> Professor of Mathematics, Psychology, and Biomedical Engineering
>> Director, Center for Adaptive Systems
>> http://www.cns.bu.edu/about/cas.html
>> http://cns.bu.edu/~steve
>> steve at bu.edu
>
> --
> Juyang (John) Weng, Professor
> Department of Computer Science and Engineering
> MSU Cognitive Science Program and MSU Neuroscience Program
> 428 S Shaw Ln Rm 3115
> Michigan State University
> East Lansing, MI 48824 USA
> Tel: 517-353-4388
> Fax: 517-432-1061
> Email: weng at cse.msu.edu
> URL: http://www.cse.msu.edu/~weng/
> ----------------------------------------------

--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
428 S Shaw Ln Rm 3115
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/
----------------------------------------------

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From weng at cse.msu.edu Wed Apr 9 12:14:17 2014
From: weng at cse.msu.edu (Juyang Weng)
Date: Wed, 09 Apr 2014 12:14:17 -0400
Subject: Connectionists: how the brain works? (UNCLASSIFIED)
In-Reply-To: References: <51FDF3A2-717D-4399-9076-331EEA0FC45A@lehigh.edu> <5340188F.6090405@cse.msu.edu> <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu>
Message-ID: <534571D9.7000508@cse.msu.edu>

Troy,

As far as I am aware, what we call vigilance is determined fully automatically by the biological brain in a dynamic way; it is neither a single parameter nor static. There are at least five mechanisms for each neuron in DN that affect so-called vigilance:

(1) Goodness of match of the bottom-up source: how well does the bottom-up feature of the neuron match the current sensory input? ART has that.

(2) Goodness of match of the top-down source: how well does the top-down feature of the neuron match the motor input (does the neuron fit the current attention)? ART does not seem to have that. ART uses top-down connections for a sequential, one-by-one search for an acceptable match.

(3) Competition through inhibitory and excitatory connections with other neurons. This leads to sparse coding, i.e., a reduced number of concurrently firing neurons. ART does not seem to have such fast competition.

(4) Modulation effect 1: reinforcement (punishments and rewards). Do the bottom-up and top-down sensory inputs correspond to an important event? By important I mean, e.g., that it can be directly related to a past punishment (e.g., an electric shock) or a past reward (e.g., food). Vigilance to such a punishment or reward event will be higher. The serotonin and dopamine systems participate in such reinforcement.
ART does not seem to have that.

(5) Modulation effect 2: novelty. For example, how long have I attended to this event? This is what is called novelty, related to sensitization and habituation and to the sense of novelty in general. If I have attended to the event for long, its novelty goes down. The ACh and NE systems participate in such novelty evaluation. ART does not seem to have that.

-John

On 4/8/14 9:14 AM, Kelley, Troy D CIV (US) wrote:
> Steve,
> [...]
> In our work with ART, we have found the vigilance to be an extremely sensitive parameter. I wonder if there would be some way to change this calculation to keep the vigilance from being so sensitive? There seemed to be a sweet spot for the parameter to work well with the learned material but that was sometimes difficult to find.
>
> Also, it seems to me from my work in developmental cognition that this parameter might benefit from being incrementally adaptive. In other words, it changes from repeated exposure to learned material. So early during learning, the criterion for a "big enough mismatch" is low, and as learning develops the criterion becomes higher - reflecting the fact that more information is being learned and there is less likely a chance for a big mismatch.
> [...]

--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
428 S Shaw Ln Rm 3115
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/
----------------------------------------------

From finale at mit.edu Wed Apr 9 08:39:25 2014
From: finale at mit.edu (Finale P Doshi)
Date: Wed, 9 Apr 2014 08:39:25 -0400 (EDT)
Subject: Connectionists: AAAI Workshop on Modern AI for Health: Deadline Extended!
Message-ID:

AAAI Workshop on Modern Artificial Intelligence for Health Analytics

==== Call for Abstracts ====

When: July 27th or July 28th during AAAI Workshops
Where: Québec City, QC, Canada
Website: http://www.cebm.brown.edu/maiha
Email: maiha.aaai at gmail.com

Important Dates:
Submission - 4/24/2014 11:59 PM EST
Notification - 5/1/2014
Workshop - 7/27/14 or 7/28/14

The proliferation of health-related information presents unprecedented opportunities to improve patient care. However, medical experts are currently overwhelmed by information, and existing artificial intelligence (AI) technologies are often inadequate for the challenges associated with analyzing clinical data. Novel computational methods are needed to process, organize, and make sense of these data. The objective of this workshop is to discuss computational methods that transform healthcare data into knowledge that ultimately improves patient care. Moreover, this workshop will focus on community building, by bringing together AI researchers interested in health and physicians interested in AI.
The workshop will include a structured discussion around venues for this sort of emerging, interdisciplinary work. It will also include invited talks by leaders in the field.

We solicit papers in three tracks (further detailed below): (1) novel AI methods for healthcare, (2) clinical applications of AI, and (3) open AI challenges in health. We are particularly interested in work done in collaboration with clinicians and clinical researchers. Potential topics include:

* Machine Learning
  - Predicting individual patient outcomes from past data
  - Patient risk stratification and clustering
  - Handling core methodological challenges (e.g., data sparsity)
  - Exploiting resources to augment training data
* Computer Vision
  - Vitals monitoring
  - Medical imaging
  - Brain imaging (fMRI)
* Natural language processing
  - Extracting structured data from free-text (e.g., clinical notes)
  - Biomedical text classification
  - Parsing biomedical literature
* Information retrieval and organization
  - Identifying relevant literature
  - Clinical question answering
  - Ontology learning

We define healthcare information broadly, including heterogeneous data such as clinical trial results, patient health records, genomic data, wearable health monitor outputs, online physician reviews, and medical images.

Submissions may be up to 4 pages (plus references) and must be aligned to one of the following tracks:

(1) Methods. Should present a novel methodology for a health sciences problem. While the application need not be unsolved, the method(s) must be specifically relevant to healthcare data.
(2) Applications. Should describe a real-world application of AI methods to healthcare data. Methods need not be novel, but they should be state-of-the-art. These papers will be judged on the significance of the application.
(3) Open problems. Should describe open problems/datasets that might interest AI researchers. Preference will be given to submissions that describe problems for which data are readily available.

Submissions should be no longer than 4 pages plus references. They should be emailed directly to: maiha.aaai at gmail.com.

Organizers:
* Byron Wallace, Brown University (byron_wallace at brown.edu)
* Jenna Wiens, MIT (jwiens at csail.mit.edu)
* Finale Doshi-Velez, Harvard Medical School (finale at mit.edu)
* David Kale, USC/Children's Hospital Los Angeles (dkale at usc.edu)

From c.clopath at imperial.ac.uk Thu Apr 10 14:49:04 2014
From: c.clopath at imperial.ac.uk (Claudia Clopath)
Date: Thu, 10 Apr 2014 19:49:04 +0100
Subject: Connectionists: PhD opportunities at Imperial College London in the Clopath lab
Message-ID:

PhD opportunities in Computational Neuroscience.

Computational Neuroscience Laboratory
Headed by Dr. Claudia Clopath
Department of Bioengineering
Imperial College London

-----------------Requirements:-----------------

The Computational Neuroscience Laboratory, headed by Dr. Claudia Clopath, is looking for a talented PhD student interested in working in the field of computational neuroscience, specifically addressing questions of learning and memory. The PhD position is fully funded and has a flexible starting date. The perfect candidate has a strong mathematical, physical or engineering background (or equivalent), and a keen interest in biological and neural systems. Demonstrated programming skills are a plus. This position is restricted to EU citizens.
More information can be found at: http://www.bg.ic.ac.uk/research/c.clopath/

-----------------Research topic:-----------------

Learning and memory are among the most fascinating topics of neuroscience, yet our understanding of them is only beginning. Learning is thought to change the connections between the neurons in the brain, a process called synaptic plasticity. Using mathematical and computational tools, it is possible to model synaptic plasticity across different time scales, which helps us understand how different types of memory are formed. The PhD candidate will work on building such models of synaptic plasticity and on studying the functional role of synaptic plasticity in artificial neural networks. They will have the opportunity to collaborate with experimental laboratories, which study connectivity changes and behavioural learning.

-----------------The lab:-----------------

The Computational Neuroscience Laboratory is very young and dynamic, and publishes in prestigious journals, such as Nature and Science. It is part of the Department of Bioengineering, which conducts state-of-the-art multidisciplinary research in biomechanics, neuroscience and neurotechnology. The lab is at Imperial College London, the third-ranked university in Europe, and is located in the city centre of London.

-----------------How to apply:-----------------

Candidates should send a single PDF file, consisting of a 1-page motivation letter, a CV, and the names and contact information of two references, to clopathlab.imperial at gmail.com, with the subject containing 'PHD2014'. The position remains open until filled, but applications received first will be considered first.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From achler at gmail.com Thu Apr 10 20:42:15 2014
From: achler at gmail.com (Tsvi Achler)
Date: Thu, 10 Apr 2014 17:42:15 -0700
Subject: Connectionists: how the brain works? (UNCLASSIFIED)
In-Reply-To: <5344CA45.3060407@cse.msu.edu>
References: <51FDF3A2-717D-4399-9076-331EEA0FC45A@lehigh.edu> <5340188F.6090405@cse.msu.edu> <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu> <5342E760.7080309@cse.msu.edu> <5344CA45.3060407@cse.msu.edu>
Message-ID:

I can't comment on most of this, but I am not sure that all models of sparsity and sparse coding fall into the connectionist realm either, because some make statistical assumptions.
-Tsvi

On Tue, Apr 8, 2014 at 9:19 PM, Juyang Weng <weng at cse.msu.edu> wrote:
> Tsvi:
>
> Let me explain in a little more detail.
> [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Colin.Wise at uts.edu.au Fri Apr 11 00:59:33 2014
From: Colin.Wise at uts.edu.au (Colin Wise)
Date: Fri, 11 Apr 2014 14:59:33 +1000
Subject: Connectionists: REMINDER - AAI Short Course - 'Behaviour Analytics - an Introduction' - Wednesday 23 April 2014
Message-ID: <8112393AA53A9B4A9BDDA6421F26C68A0173A2CFE6D8@MAILBOXCLUSTER.adsroot.uts.edu.au>

Dear Colleague,

REMINDER - AAI Short Course - 'Behaviour Analytics - an Introduction' - Wednesday 23 April 2014
https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1572&EventID=1294

Our AAI short course 'Behaviour Analytics - an Introduction' may be of interest to you and/or others in your organisation or network. Complex behaviours are widely seen on the internet, business, social and online networks, and multi-agent systems. In fact, behaviour is a concept with stronger semantic meaning than the so-called data for recording and representing business activities, impacts and dynamics.
Therefore, an in-depth understanding of complex behaviours has been increasingly recognised as a crucial means for disclosing the interior driving forces, causes and impacts on businesses when handling many challenging issues. This has created the need for, and the emergence of, behaviour analytics, i.e., understanding behaviours from a computing perspective. In this short course, we present an overview of behaviour analytics and discuss complex behaviour interactions and relationships, complex behaviour representation, behavioural feature construction, behaviour impact and utility analysis, behaviour pattern analysis, exceptional behaviour analysis, negative behaviour analysis, and behaviour interaction and evolution.

Please register here: https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1572&EventID=1294

An important foundation short course in the AAI series of advanced data analytics short courses - please view this short course and others here: http://www.uts.edu.au/research-and-teaching/our-research/advanced-analytics-institute/short-courses/upcoming-courses

We are happy to discuss at your convenience. Thank you and regards.

Colin Wise
Operations Manager
Advanced Analytics Institute (AAI)
Blackfriars Building 2, Level 1
University of Technology, Sydney (UTS)
Email: Colin.Wise at uts.edu.au
Tel. +61 2 9514 9267
M. 0448 916 589
AAI: www.analytics.uts.edu.au/

Reminder - AAI Short Course - Advanced Data Analytics - an Introduction - Wednesday 7 May 2014
https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1571

Future short courses on Data Analytics and Big Data may be viewed at LINK

AAI Education and Training Short Courses Survey - you may be interested in completing our AAI Survey at LINK

AAI Email Policy - should you wish not to receive this periodic communication on Data Analytics Learning, please reply to our email (to sender) with UNSUBSCRIBE in the Subject. We will delete you from our database. Thank you for your past and future support.

UTS CRICOS Provider Code: 00099F

DISCLAIMER: This email message and any accompanying attachments may contain confidential information. If you are not the intended recipient, do not read, use, disseminate, distribute or copy this message or attachments. If you have received this message in error, please notify the sender immediately and delete this message. Any views expressed in this message are those of the individual sender, except where the sender expressly, and with authority, states them to be the views of the University of Technology Sydney. Before opening any attachments, please check them for viruses and defects.

Think. Green. Do. Please consider the environment before printing this email.

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From terry at salk.edu Fri Apr 11 01:25:29 2014
From: terry at salk.edu (Terry Sejnowski)
Date: Thu, 10 Apr 2014 22:25:29 -0700
Subject: Connectionists: NEURAL COMPUTATION - May, 2014
In-Reply-To: Message-ID:

Neural Computation - Contents -- Volume 26, Number 5 - May 1, 2014

Available online for download now: http://www.mitpressjournals.org/toc/neco/26/5

-----

Article

A Transition to Sharp Timing in Stochastic Leaky Integrate-and-Fire Neurons Driven by Frozen Noisy Input
Thibaud Taillefumier, Marcelo Magnasco

Letters

Sample Skewness as a Statistical Measurement of Neuronal Tuning Sharpness
Jason Michael Samonds, Brian Richard Potetz, and Tai Sing Lee

Prewhitening High Dimensional fMRI Data Sets Without Eigendecomposition
Abd-Krim Seghouane, Yousef Saad

Signal-tuned Gabor Functions as Models for Stimulus-dependent Cortical Receptive Fields
Jose R. A. Torreao, Silvia M.C. Victer, and Marcos S. Amaral

Energy Complexity of Recurrent Neural Networks
Jiri Sima

On Some Classes of Sequential Spiking Neural P Systems
Xingyi Zhang, Xiangxiang Zeng, Bin Luo, and Linqiang Pan

------------

ON-LINE -- http://www.mitpressjournals.org/neuralcomp

SUBSCRIPTIONS - 2014 - VOLUME 26 - 12 ISSUES

                  USA      Others   Electronic Only
Student/Retired   $70      $193     $65
Individual        $124     $187     $115
Institution       $1,035   $1,098   $926

Canada: Add 5% GST

MIT Press Journals, 238 Main Street, Suite 500, Cambridge, MA 02142-9902
Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu

------------

From christos.dimitrakakis at gmail.com Fri Apr 11 04:50:03 2014
From: christos.dimitrakakis at gmail.com (Christos Dimitrakakis)
Date: Fri, 11 Apr 2014 10:50:03 +0200
Subject: Connectionists: CFP: ICML 2014 Workshop on Learning, Security and Privacy (extended)
In-Reply-To: <533452B3.5030107@gmail.com>
References: <533452B3.5030107@gmail.com>
Message-ID: <5347ACBB.9010908@gmail.com>

ICML 2014 Workshop on Learning, Security and Privacy
Beijing, China, 25 or 26 June, 2014 (TBD)
https://sites.google.com/site/learnsecprivacy2014/

------------------------------------------------------------
Important information:

- Submission deadline: 14 April, 2014 (extended!)
- Notification of acceptance: 30 April, 2014
- Submissions: https://www.easychair.org/conferences/?conf=lps2014
- Format: 6 pages ICML
- Submission type: (1) open problems, (2) original research.
------------------------------------------------------------

Workshop overview

To encourage scientific dialogue and foster cross-fertilization in machine learning, decision theory, security and privacy, the workshop invites original submissions, ranging from open problems and ongoing research to mature work, in any of the following core subjects:

- Statistical approaches for privacy preservation.
- Private decision making and mechanism design.
- Metrics and evaluation methods for privacy and security.
- Robust learning in adversarial environments.
- Learning in unknown / partially observable stochastic games.
- Distributed inference and decision making for security.
- Application-specific privacy preserving machine learning and decision theory.
- Secure multiparty computation and cryptographic approaches for machine learning.
- Cryptographic applications of machine learning and decision theory.
- Security applications: intrusion detection and response, biometric authentication, fraud detection, spam filtering, captchas.
- Security analysis of learning algorithms.
- The economics of learning, security and privacy.
Submission instructions:

Submissions should be in the ICML 2014 format, with a maximum of 6 pages (including references). Work must be original, but we also encourage submission of open problems. Accepted papers will be made available online at the workshop website. Submissions need not be anonymous. Submissions should be made through EasyChair: https://www.easychair.org/conferences/?conf=lps2014. For detailed submission instructions, please refer to the workshop website.

Organizing committee:

Christos Dimitrakakis (Chalmers University of Technology, Sweden).
Pavel Laskov (University of Tuebingen, Germany).
Daniel Lowd (University of Oregon, USA).
Benjamin Rubinstein (University of Melbourne, Australia).
Elaine Shi (University of Maryland, College Park, USA).

Program committee:

Asli Bay (EPFL, Switzerland)
Battista Biggio (University of Cagliari, Italy)
Michael Brückner (Amazon, Germany)
Mike Burmester (Florida State University, USA)
Alvaro Cardenas (University of Texas, Dallas)
Kamalika Chaudhuri (UCSD, USA)
Craig B Gentry (IBM Research, USA)
Alex Kantchelian (UC Berkeley, USA)
Aikaterini Mitrokotsa (Chalmers University, Sweden)
Blaine Nelson (University of Potsdam, Germany)
Norman Poh (University of Surrey, UK)
Konrad Rieck (University of Göttingen)
Nedim Srndic (University of Tuebingen, Germany)
Aaron Roth (University of Pennsylvania, USA)
Risto Vaarandi (NATO CCDCOE, Estonia)
Sobha Venkataraman (AT&T Research, USA)
Ting-Fang Yen (EMC, USA)

From M.M.vanZaanen at uvt.nl Fri Apr 11 09:58:40 2014
From: M.M.vanZaanen at uvt.nl (Menno van Zaanen)
Date: Fri, 11 Apr 2014 15:58:40 +0200
Subject: Connectionists: 3rd CfP First Workshop on Computational Approaches to Compound Analysis (ComAComA 2014)
Message-ID: <20140411135840.GQ8893@pinball.uvt.nl>

Third Call for Papers

The First Workshop on Computational Approaches to Compound Analysis (ComAComA 2014) at COLING 2014
Dublin, Ireland, 23/24 August, 2014

DESCRIPTION

The ComAComA workshop is an interdisciplinary platform for researchers to present recent and ongoing work on compound processing in different languages. Given the high productivity of compounding in a wide range of languages, compound processing is an interesting subject in linguistics, computational linguistics, and other applied disciplines. For example, for many language technology applications, compound processing remains a challenge (both morphologically and semantically), since novel compounds are created and interpreted on the fly. In order to deal with this productivity, systems that can analyse new compound forms and their meanings need to be developed. From an interdisciplinary perspective, we also need to better understand the process of compounding (for instance, as a cognitive process), in order to model its complexity.

The workshop has several related aims. Firstly, it will bring together researchers from different research fields (e.g., computational linguistics, linguistics, neurolinguistics, psycholinguistics, language technology) to discuss various aspects of compound processing. Secondly, the workshop will provide an overview of the current state-of-the-art research, as well as desired resources for future research in this area. Finally, we expect that the interdisciplinary nature of the workshop will result in methodologies to evaluate compound processing systems from different perspectives.

KEYNOTE SPEAKERS
Séaghdha (University of Cambridge) Andrea Krott (University of Birmingham)

TOPICS OF INTEREST

The ComAComA workshop solicits papers on original and unpublished research on the following topics, including, but not limited to:
** Annotation of compounds for computational purposes
** Categorisation of compounds (e.g. different taxonomies)
** Classification of compound semantics
** Compound splitting
** Automatic morphological analysis of compounds
** Compound processing in computational psycholinguistics
** Psycho- and/or neurolinguistic aspects of compound processing
** Theoretical and/or descriptive linguistic aspects of compound processing
** Compound paraphrase generation
** Applications of compound processing
** Resources for compound processing
** Evaluation methodologies for compound processing

PAPER REQUIREMENTS

** Papers should describe original work, with room for completed work, well-advanced ongoing research, or contemplative, novel ideas. Papers should indicate clearly the state of completion of the reported results. Wherever appropriate, concrete evaluation results should be included.
** Submissions will be judged on correctness, originality, technical strength, significance and relevance to the conference, and interest to the attendees.
** Submissions presented at the conference should mostly contain new material that has not been presented at any other meeting with publicly available proceedings. Papers that are being submitted in parallel to other conferences or workshops must indicate this on the title page, as must papers that contain significant overlap with previously published work.

REVIEWING

Reviewing will be double blind. It will be managed by the organisers of ComAComA, assisted by the workshop's Program Committee (see details below).

INSTRUCTIONS FOR AUTHORS

** All papers will be included in the COLING workshop proceedings, in electronic form only.
** The maximum submission length is 8 pages (A4), plus two extra pages for references.
** Authors of accepted papers will be given additional space in the camera-ready version to reflect space needed for changes stemming from reviewers' comments.
** The only mode of delivery will be oral; there will be no poster presentations.
** Papers shall be submitted in English, anonymised with regard to the authors and/or their institution (no author-identifying information on the title page nor anywhere in the paper), including referencing style as usual.
** Papers must conform to official COLING 2014 style guidelines, which are available on the COLING 2014 website (see also links below).
** The only accepted format for submitted papers is PDF.
** Submission and reviewing will be managed online by the START system (see link below). Submissions must be uploaded on the START system by the submission deadlines; submissions after that time will not be reviewed. To minimise network congestion, we request authors to upload their submissions as early as possible.
** In order to allow participants to become acquainted with the published papers ahead of time, which in turn should facilitate discussions at the workshop, the official publication date has been set two weeks before the conference, i.e., on August 11, 2014. On that day, the papers will be available online for all participants to download, print and read. If your employer is taking steps to protect intellectual property related to your paper, please inform them about this timing.
** While submissions are anonymous, we strongly encourage authors to plan for depositing language resources and other data as well as tools used and/or developed for the experiments described in the papers, if the paper is accepted. In this respect, we encourage authors then to deposit resources and tools to available open-access repositories of language resources and/or repositories of tools (such as META-SHARE, Clarin, ELRA, LDC or AFNLP/COCOSDA for data, and github, sourceforge, CPAN and similar for software and tools), and refer to them instead of submitting them with the paper.

COLING 2014 STYLE FILES

Download a zip file with style files for LaTeX, MS Word and Libre Office here: http://www.coling-2014.org/doc/coling2014.zip

IMPORTANT DATES

May 2, 2014: Paper submission deadline
June 6, 2014: Author notification deadline
June 27, 2014: Camera-ready PDF deadline
August 11, 2014: Official paper publication date
August 23/24, 2014: ComAComA Workshop (exact date still unknown)
(August 25-29, 2014: Main COLING conference)

URLs

Workshop: http://tinyurl.com/comacoma
Main COLING conference: http://www.coling-2014.org/
Paper submission: https://www.softconf.com/coling2014/WS-17/
Style sheets: http://www.coling-2014.org/doc/coling2014.zip

ORGANISING COMMITTEE

Ben Verhoeven (University of Antwerp, Belgium) ben.verhoeven at uantwerpen.be
Walter Daelemans (University of Antwerp, Belgium) walter.daelemans at uantwerpen.be
Menno van Zaanen (Tilburg University, The Netherlands) mvzaanen at uvt.nl
Gerhard van Huyssteen (North-West University, South Africa) gerhard.vanhuyssteen at nwu.ac.za

PROGRAM COMMITTEE

** Preslav Nakov (University of California in Berkeley)
** Iris Hendrickx (Radboud University Nijmegen)
** Gary Libben (University of Alberta)
** Lonneke Van der Plas (University of Stuttgart)
** Helmut Schmid (Ludwig Maximilian University Munich)
** Fintan Costello (University College Dublin)
** Roald Eiselen (North-West University)
** Su Nam Kim (University of Melbourne)
** Pavol Štekauer (P.J. Safarik University)
** Arina Banga (University of Debrecen)
** Diarmuid Ó Séaghdha (University of Cambridge)
** Rochelle Lieber (University of New Hampshire)
** Vivi Nastase (Fondazione Bruno Kessler)
** Tony Veale (University College Dublin)
** Pius ten Hacken (University of Innsbruck)
** Anneke Neijt (Radboud University Nijmegen)
** Andrea Krott (University of Birmingham)
** Emmanuel Keuleers (Ghent University)
** Stan Szpakowicz (University of Ottawa)

From grlmc at urv.cat Sun Apr 13 10:55:48 2014 From: grlmc at urv.cat (GRLMC) Date: Sun, 13 Apr 2014 16:55:48 +0200 Subject: Connectionists: SLSP 2014: 3rd call for papers Message-ID: <3E6B1AF23AD64B10ABC6308AFD75D03C@Carlos1>

*To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject line*

**********************************************************************************

2nd INTERNATIONAL CONFERENCE ON STATISTICAL LANGUAGE AND SPEECH PROCESSING

SLSP 2014

Grenoble, France, October 14-16, 2014

Organised by:
Équipe GETALP, Laboratoire d'Informatique de Grenoble
Research Group on Mathematical Linguistics (GRLMC), Rovira i Virgili University

http://grammars.grlmc.com/slsp2014/

**********************************************************************************

AIMS:

SLSP is a yearly conference series aimed at promoting and displaying excellent research on the wide spectrum of statistical methods that are currently in use in computational language or speech processing.
It aims at attracting contributions from both fields. Though there exist large, well-known conferences and workshops hosting contributions to any of these areas, SLSP is a more focused meeting where synergies between subdomains and people will hopefully happen. In SLSP 2014, significant room will be reserved for young scholars at the beginning of their career and particular focus will be put on methodology.

VENUE: SLSP 2014 will take place in Grenoble, at the foot of the French Alps.

SCOPE: The conference invites submissions discussing the employment of statistical methods (including machine learning) within language and speech processing. The list below is indicative and not exhaustive:

- phonology, phonetics, prosody, morphology
- syntax, semantics
- discourse, dialogue, pragmatics
- statistical models for natural language processing
- supervised, unsupervised and semi-supervised machine learning methods applied to natural language, including speech
- statistical methods, including biologically-inspired methods
- similarity
- alignment
- language resources
- part-of-speech tagging
- parsing
- semantic role labelling
- natural language generation
- anaphora and coreference resolution
- speech recognition
- speaker identification/verification
- speech transcription
- speech synthesis
- machine translation
- translation technology
- text summarisation
- information retrieval
- text categorisation
- information extraction
- term extraction
- spelling correction
- text and web mining
- opinion mining and sentiment analysis
- spoken dialogue systems
- author identification, plagiarism and spam filtering

STRUCTURE: SLSP 2014 will consist of: invited talks, invited tutorials, and peer-reviewed contributions.

INVITED SPEAKERS:

Claire Gardent (LORIA, Nancy, FR), Grammar Based Sentence Generation and Statistical Error Mining
Roger K. Moore (Sheffield, UK), Spoken Language Processing: Time to Look Outside?
Martti Vainio (Helsinki, FI), Phonetics and Machine Learning: Hierarchical Modelling of Prosody in Statistical Speech Synthesis

PROGRAMME COMMITTEE:

Sophia Ananiadou (Manchester, UK) Srinivas Bangalore (Florham Park, US) Patrick Blackburn (Roskilde, DK) Hervé Bourlard (Martigny, CH) Bill Byrne (Cambridge, UK) Nick Campbell (Dublin, IE) David Chiang (Marina del Rey, US) Kenneth W. Church (Yorktown Heights, US) Walter Daelemans (Antwerpen, BE) Thierry Dutoit (Mons, BE) Alexander Gelbukh (Mexico City, MX) James Glass (Cambridge, US) Ralph Grishman (New York, US) Sanda Harabagiu (Dallas, US) Xiaodong He (Redmond, US) Hynek Hermansky (Baltimore, US) Hitoshi Isahara (Toyohashi, JP) Lori Lamel (Orsay, FR) Gary Geunbae Lee (Pohang, KR) Haizhou Li (Singapore, SG) Daniel Marcu (Los Angeles, US) Carlos Martín-Vide (Tarragona, ES, chair) Manuel Montes-y-Gómez (Puebla, MX) Satoshi Nakamura (Nara, JP) Shrikanth S. Narayanan (Los Angeles, US) Vincent Ng (Dallas, US) Joakim Nivre (Uppsala, SE) Elmar Nöth (Erlangen, DE) Maurizio Omologo (Trento, IT) Mari Ostendorf (Seattle, US) Barbara H. Partee (Amherst, US) Gerald Penn (Toronto, CA) Massimo Poesio (Colchester, UK) James Pustejovsky (Waltham, US) Gaël Richard (Paris, FR) German Rigau (San Sebastián, ES) Paolo Rosso (Valencia, ES) Yoshinori Sagisaka (Tokyo, JP) Björn W.
Schuller (London, UK) Satoshi Sekine (New York, US) Richard Sproat (New York, US) Mark Steedman (Edinburgh, UK) Jian Su (Singapore, SG) Marc Swerts (Tilburg, NL) Jun'ichi Tsujii (Beijing, CN) Gertjan van Noord (Groningen, NL) Renata Vieira (Porto Alegre, BR) Dekai Wu (Hong Kong, HK) Feiyu Xu (Berlin, DE) Roman Yangarber (Helsinki, FI) Geoffrey Zweig (Redmond, US)

ORGANISING COMMITTEE:

Laurent Besacier (Grenoble, co-chair) Adrian Horia Dediu (Tarragona) Benjamin Lecouteux (Grenoble) Carlos Martín-Vide (Tarragona, co-chair) Florentina Lilica Voicu (Tarragona)

SUBMISSIONS: Authors are invited to submit non-anonymized papers in English presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (including any appendices) and should be prepared according to the standard format for Springer Verlag's LNAI/LNCS series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0). Submissions have to be uploaded to: https://www.easychair.org/conferences/?conf=slsp2014

PUBLICATIONS: A volume of proceedings published by Springer in the LNAI/LNCS series will be available by the time of the conference. A special issue of a major journal will be published later, containing peer-reviewed extended versions of some of the papers contributed to the conference. Submissions to it will be by invitation.

REGISTRATION: The period for registration is open from January 16, 2014 to October 14, 2014. The registration form can be found at: http://grammars.grlmc.com/slsp2014/Registration.php

DEADLINES:

Paper submission: May 7, 2014 (23:59h, CET)
Notification of paper acceptance or rejection: June 18, 2014
Final version of the paper for the LNAI/LNCS proceedings: June 25, 2014
Early registration: July 2, 2014
Late registration: September 30, 2014
Submission to the post-conference journal special issue: January 16, 2015

QUESTIONS AND FURTHER INFORMATION: florentinalilica.voicu at urv.cat

POSTAL ADDRESS: SLSP 2014, Research Group on Mathematical Linguistics (GRLMC), Rovira i Virgili University, Av. Catalunya, 35, 43002 Tarragona, Spain Phone: +34 977 559 543 Fax: +34 977 558 386

ACKNOWLEDGEMENTS: Departament d'Economia i Coneixement, Generalitat de Catalunya; Laboratoire d'Informatique de Grenoble; Universitat Rovira i Virgili

From torsello at dsi.unive.it Sun Apr 13 17:03:37 2014 From: torsello at dsi.unive.it (Andrea Torsello) Date: Sun, 13 Apr 2014 23:03:37 +0200 Subject: Connectionists: Application Deadline extended: International Summer School on Complex Networks Message-ID: <9516956.eADKxJMOT6@giatrus.torsello.net>

ISSCN International Summer School on Complex Networks Bertinoro, Italy July 14-18 2014 http://www.dsi.unive.it/isscn/

Complex networks are an emerging and powerful computational tool in the physical, biological and social sciences. Their aim is to capture the structural properties of data represented as graphs or networks, providing ways of characterising both the static and dynamic facets of network structure. The topic draws on ideas from graph theory, statistical physics and dynamic systems theory. Applications include communication networks, epidemiology, transportation, social networks and ecology. The aim of the Summer School is to provide an overview of both the foundations and the state of the art in the field. Lectures will be presented by intellectual leaders in the field, and there will be an opportunity to interact closely with them during the school.
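[Editorial note: as a generic illustration of the static structure measures mentioned above, the short sketch below compares clustering and average path length on a random graph and a small-world graph using the open-source networkx library. It is not school material, and the graph sizes and probabilities are arbitrary choices for the demonstration.]

    # Illustrative only: two static structure measures, computed on an
    # Erdos-Renyi random graph and a Watts-Strogatz small-world graph.
    import networkx as nx

    er = nx.erdos_renyi_graph(n=500, p=0.02, seed=1)          # random baseline
    ws = nx.watts_strogatz_graph(n=500, k=10, p=0.1, seed=1)  # small-world graph

    for name, g in [("Erdos-Renyi", er), ("Watts-Strogatz", ws)]:
        print(name,
              "| clustering:", round(nx.average_clustering(g), 3),
              "| avg path length:", round(nx.average_shortest_path_length(g), 2))

The small-world graph typically shows far higher clustering at a comparable path length, one of the structural signatures such analyses look for.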
The school will be held in the Bertinoro Residential Centre of the University of Bologna, which is situated in beautiful hills between Ravenna and Bologna. The summer school is aimed at PhD students, and younger postdocs or RAs working in the complex networks area. It will run for 5 days with lectures in the mornings and afternoons, and the school fee includes residential accommodation and meals at the residential centre.

List of Lecturers

Michele Benzi, Emory University, USA
Luciano Costa, University of São Paulo, Brazil
Ernesto Estrada, University of Strathclyde, UK
Jesus Gomez Gardenes, University of Zaragoza, Spain
Ferenc Jordan, The Microsoft Research - COSBI, Italy
Yamir Moreno, University of Zaragoza, Spain
Mirco Musolesi, University of Birmingham, UK
Simone Severini, University College London, UK

Organizers

Andrea Torsello, Università Ca' Foscari Venezia, Italy
Edwin Hancock, University of York, UK
Richard Wilson, University of York, UK
Ernesto Estrada, University of Strathclyde, UK

Registration Fees

Registration will include accommodation for 5 nights (13/7 to 17/7), and meals. Accommodation can be in single or double rooms (subject to availability).

              Single room   Double room
PhD student   EUR 700       EUR 650
Postdoc       EUR 800       EUR 750
Other         EUR 900       EUR 850

We hope to be able to provide scholarships to PhD students to reduce the registration fees. Student applicants will be automatically considered for scholarships. Application is now open through the school's website. The deadline for application has been extended to April 30th. After that time we will let all applicants know whether their application was successful, whether they are entitled to a scholarship, and, in that case, the amount of financial aid we can offer. Applicants must send an expression of interest along with their curriculum vitae. PhD students must also send a letter from the supervisor in support of the request to be considered for the scholarship.

Contact: Andrea Torsello

-- Andrea Torsello PhD Dipartimento di Informatica, Universita' Ca' Foscari Venezia via Torino 155, 30172 Venezia Mestre, Italy Tel: +39 0412348468 Fax: +39 0412348419 http://www.dsi.unive.it/~atorsell

From ted.carnevale at yale.edu Sun Apr 13 20:34:43 2014 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Sun, 13 Apr 2014 20:34:43 -0400 Subject: Connectionists: early registration discounts for NEURON Summer Course Message-ID: <534B2D23.2010401@yale.edu>

Good news for all who are interested in the NEURON Summer Course! Applicants who sign up by Friday, May 9, are now eligible for early registration discounts that range from $100 to $200 depending on which course components they request.

NEURON Fundamentals June 21-24 This addresses all aspects of using NEURON to model individual neurons, and also introduces parallel simulation and the fundamentals of modeling networks.

Parallel Simulation with NEURON June 25-26 This is for users who are already familiar with NEURON and need to create models that will run on parallel hardware.

                        After
Course         Early    May 9
============   =====    =====
Fundamentals   $1050    $1200
Parallel         650      750
Both            1600     1800

Registration is limited, and the registration deadline is Friday, May 30, 2014.
For more information regarding NEURON courses see http://www.neuron.yale.edu/neuron/courses --Ted

From shimon.whiteson at gmail.com Mon Apr 14 01:47:51 2014 From: shimon.whiteson at gmail.com (Shimon Whiteson) Date: Mon, 14 Apr 2014 07:47:51 +0200 Subject: Connectionists: Postdoc position in reinforcement learning for telepresence robotics at the University of Amsterdam Message-ID:

The Informatics Institute at the University of Amsterdam invites applications for a fully funded position for a postdoctoral researcher in the area of reinforcement learning for telepresence robotics. The position is within the Intelligent Systems Lab Amsterdam and will be supervised by Dr. Shimon Whiteson and Dr. Maarten van Someren.

Application closing date: 15 May 2014
Starting date: 1 December 2014 (somewhat flexible)
Duration: 2 years

The research will focus on the development of new algorithms for discovering socially appropriate behavior for a semi-autonomous telepresence robot by integrating multiple sources of implicit feedback from the robot's environment. Doing so will require innovation in reinforcement learning, inverse reinforcement learning, semi-supervised learning, etc.

The research will be conducted as part of a European project called "Telepresence Reinforcement Learning Social Robot (TERESA)", which the University of Amsterdam coordinates and collaborates on with several other European universities and companies (see http://www.teresaproject.eu). The project aims to develop a socially intelligent semi-autonomous telepresence robot and successfully demonstrate its application in an elderly day center.

For further information, including instructions on submitting an application, see the official job ad at: http://www.uva.nl/en/about-the-uva/working-at-the-uva/vacancies/item/14-134.html Informal inquiries can be made by email to Shimon Whiteson (s.a.whiteson at uva.nl).

------------------------------------------------------------- Shimon Whiteson | Assistant Professor Intelligent Autonomous Systems Group Informatics Institute | University of Amsterdam ------------------------------------------------------------- Science Park 904 | 1098 XH Amsterdam +31 (0)20.525.8701 | +31 (0)6.3851.0110 http://staff.science.uva.nl/~whiteson -------------------------------------------------------------

From schwarzwaelder at bcos.uni-freiburg.de Mon Apr 14 04:49:20 2014 From: schwarzwaelder at bcos.uni-freiburg.de (Kerstin Schwarzwälder) Date: Mon, 14 Apr 2014 10:49:20 +0200 Subject: Connectionists: Reminder: Call for applications: Brains for Brains Young Researchers' Computational Neuroscience Award 2014 In-Reply-To: <530F0C46.50802@bcos.uni-freiburg.de> References: <530F0C46.50802@bcos.uni-freiburg.de> Message-ID: <534BA110.8030203@bcos.uni-freiburg.de>

Dear colleagues, Please let me remind you that, for the fifth time, the Bernstein Association for Computational Neuroscience is announcing the "Brains for Brains Young Researchers' Computational Neuroscience Award". The call is open for researchers of any nationality who have contributed to a peer reviewed publication (as coauthor) or peer reviewed conference abstract (as first author) that was submitted before starting their doctoral studies, is written in English and was accepted or published in 2013 or 2014. The award comprises 500 € prize money, plus a travel grant of up to 2,000 €
to cover a trip to Germany, including participation in the Bernstein Conference 2014 in Göttingen (www.bernstein-conference.de), and an individually planned visit to up to two German research institutions in Computational Neuroscience. Deadline for application is April 25, 2014. Detailed information about the application procedure can be found under: www.nncn.de/en/bernstein-association/brains-for-brains-2014 For inquiries please contact info at bcos.uni-freiburg.de Best regards, Kerstin Schwarzwälder

-- Dr. Kerstin Schwarzwälder Bernstein Coordination Site of the National Bernstein Network Computational Neuroscience Albert Ludwigs University Freiburg Hansastr. 9A 79104 Freiburg Germany phone: +49 761 203 9594 fax: +49 761 203 9585 schwarzwaelder at bcos.uni-freiburg.de www.nncn.de Twitter: NNCN_Germany YouTube: Bernstein TV Facebook: Bernstein Network Computational Neuroscience, Germany LinkedIn: Bernstein Network Computational Neuroscience, Germany

From danko.nikolic at googlemail.com Mon Apr 14 06:51:46 2014 From: danko.nikolic at googlemail.com (Danko Nikolic) Date: Mon, 14 Apr 2014 12:51:46 +0200 Subject: Connectionists: how the brain works? (UNCLASSIFIED) In-Reply-To: References: <51FDF3A2-717D-4399-9076-331EEA0FC45A@lehigh.edu> <5340188F.6090405@cse.msu.edu> <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu> <5342E760.7080309@cse.msu.edu> <5344CA45.3060407@cse.msu.edu> Message-ID: <534BBDC2.4090406@gmail.com>

Dear all, It has been very interesting to follow the discussion on the functioning of ART, the stability-plasticity dilemma, and related issues. In that context, I would like to point to an exciting property of the practopoietic theory, which enables us to understand what is needed for a general solution to problems similar to the stability-plasticity dilemma. The stability-plasticity dilemma can be described as the problem of deciding when a new category for a stimulus needs to be created, so that the system has to be adjusted, as opposed to deciding to treat the stimulus as old and familiar and thus not needing to adjust. Practopoietic theory helps us understand how a general solution can be implemented for deciding whether to use old types of behavior or to come up with new ones. This is possible in a so-called "T_3 system" in which a process called "anapoiesis" takes place. When a system is organized in such a T_3 way, every stimulus, old or new, is treated in the same fashion, i.e., as new. The system always adjusts--to everything(!)--even to stimuli that have been seen thousands of times. There is never a simple direct categorization (or pattern recognition) in which a mathematical mapping would take place from input vectors to output vectors, as traditionally implemented in multi-layer neural networks. Rather, the system readjusts itself continuously to prepare for interactions with the surrounding world. The only simple input-output mappings that take place are the sensory-motor loops that execute the actual behavior. The internal processes corresponding to perception, recognition, categorization etc. are implemented by the mechanisms of internal system adjustments (based on anapoiesis). These mechanisms create new sensory-motor loops, which are then most similar to the traditional mapping operations.
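[Editorial note: a minimal toy sketch of this "recognition as readjustment" idea — all names and parameter values are invented for illustration, and this is not code from the practopoiesis work. The system keeps internal traces and pulls the nearest one toward every incoming stimulus, so familiarity shows up only as a smaller number of adjustment steps, the signature described in the next paragraph.]

    # Toy illustration: recognition as readjustment rather than as a
    # fixed input->output mapping. Invented names and values throughout.
    import numpy as np

    rng = np.random.default_rng(0)

    class AdjustingSystem:
        def __init__(self, dim, rate=0.5, tol=0.1):
            self.traces = [rng.normal(size=dim)]  # internal state of the system
            self.rate, self.tol = rate, tol

        def adjust_to(self, stimulus):
            """Pull the closest trace toward the stimulus; return the step count."""
            trace = min(self.traces, key=lambda t: np.linalg.norm(t - stimulus))
            steps = 0
            while np.linalg.norm(trace - stimulus) > self.tol:
                trace += self.rate * (stimulus - trace)  # in-place adjustment
                steps += 1
            return steps

    system = AdjustingSystem(dim=5)
    s = rng.normal(size=5)
    print("first encounter :", system.adjust_to(s), "adjustment steps")
    print("second encounter:", system.adjust_to(s), "adjustment steps ('old' stimulus)")

In this toy version the very same adjustment routine handles every stimulus; "old" versus "new" is read off from how quickly the adjustment settles.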
The difference between old and new stimuli (i.e., familiar and unfamiliar) is detectable in the behavior of the system only because the system adjusts quicker to the older than to the newer stimuli. The claimed advantage of such a T_3 practopoietic system is that only such a system can become generally intelligent as a whole and behave adaptively and consciously, with understanding of what is going on around it. The system forms a general "adjustment machine" that can become aware of its surroundings and can be capable of interpreting the situation appropriately to decide on the next action. Thus, the perceptual dilemma of stability vs. plasticity is converted into a general understanding of the current situation and the needs of the system. If the current goals of the system require treating a slightly novel stimulus as new, it will be treated as "new". However, if a slight change in the stimulus features does not make a difference for the current goals and the situation, then the stimulus will be treated as "old". Importantly, practopoietic theory is not formulated in terms of neurons (inhibition, excitation, connections, changes of synaptic weights, etc.). Instead, the theory is formulated much more elegantly--in terms of interactions between cybernetic control mechanisms organized into a specific type of hierarchy (poietic hierarchy). This abstract formulation is extremely helpful for two reasons. First, it enables one to focus on the most important functional aspects and thus to understand the underlying principles of system operations much more easily. Second, it tells us what is needed to create intelligent behavior using any type of implementation, neuronal or non-neuronal. I hope this will be motivating enough to give practopoiesis a read. With best regards, Danko Link: http://www.danko-nikolic.com/practopoiesis/ On 4/11/14 2:42 AM, Tsvi Achler wrote: > I can't comment on most of this, but I am not sure if all models of > sparsity and sparse coding fall into the connectionist realm either, > because some make statistical assumptions. > -Tsvi > > On Tue, Apr 8, 2014 at 9:19 PM, Juyang Weng wrote: > > Tsvi: > > Let me explain in a little more detail: > > There are two large categories of biological neurons, excitatory > and inhibitory. Both are developed mainly through signal statistics, > not specified primarily by the genomes. Not all people agree > with me on this point, but please tolerate this view for now. > I gave a more detailed discussion on this view in my NAI book. > > The main effect of inhibitory connections is to reduce the number > of firing neurons (David Field called it sparse coding), with the > help of excitatory connections. This sparse coding is important because > those neurons that do not fire are the long-term memory of the area at this > point in time. > My view here is different from David Field's. He wrote that sparse > coding is for the current representations. I think sparse coding is > necessary for long-term memory. Not all people agree with me on this > point, but please tolerate this view for now. > > However, this reduction requires very fast parallel neuronal > updates to avoid uncontrollable large-magnitude oscillations. > Even with the fast biological parallel neuronal updates, we still > see slow but small-magnitude oscillations such as the > well-known theta waves and alpha waves.
My view is that such > slow but small-magnitude oscillations are side effects of > excitatory and inhibitory connections that form many loops, not > something really desirable for brain operation (sorry, > Paul Werbos). Not all people agree with me on this point, but please > tolerate this view for now. > > Therefore, as far as I understand, computer simulations of > spiking neurons are not showing major brain functions, > because they have to deal with slow oscillations that are very > different from the brain's, e.g., as Dr. Henry Markram reported > (40Hz?). > > The above discussion again shows the power and necessity of an > overarching brain theory like that in my NAI book. > Those who only simulate biological neurons using superficial > biological phenomena are not going to demonstrate > any major brain functions. They can talk about signal statistics > from their simulations, but signal statistics are far from brain > functions. > > -John > > On 4/8/14 1:30 AM, Tsvi Achler wrote: >> Hi John, >> ART evaluates the distance between the contending representation and >> the current input through vigilance. If they are too far apart, >> a poor vigilance signal will be triggered. >> The best resonance will be achieved when they have the least >> amount of distance. >> If in your model K-nearest neighbors is used without a neural >> equivalent, then your model is not quite in the spirit of a >> connectionist model. >> For example, Bayesian networks do a great job emulating brain >> behavior, modeling the integration of priors, and have been >> invaluable for modeling cognitive studies. However they assume a >> statistical configuration of connections and distributions which >> is not quite known how to emulate with neurons. Thus pure >> Bayesian models are also questionable in terms of connectionist >> modeling. But some connectionist models can emulate some >> statistical models; for example, see section 2.4 in Thomas & >> McClelland's chapter in Sun's 2008 book >> (http://www.psyc.bbk.ac.uk/people/academic/thomas_m/TM_Cambridge_sub.pdf). >> I am not suggesting Hodgkin-Huxley level detailed neuron models; >> however, connectionist models should have their connections >> explicitly defined. >> Sincerely, >> -Tsvi >> >> On Mon, Apr 7, 2014 at 10:58 AM, Juyang Weng wrote: >> >> Tsvi, >> >> Note that ART uses a vigilance value to pick up the first >> "acceptable" match in its sequential bottom-up and top-down >> search. >> I believe that is what Steve meant when he mentioned vigilance. >> >> Why do you think "ART is a neural way to implement a >> K-nearest neighbor algorithm"? >> If not all the neighbors have sequentially participated, >> how can ART find the nearest neighbor, let alone the K-nearest >> neighbors? >> >> Our DN uses an explicit k-nearest mechanism to find the >> k-nearest neighbors in every network update, >> to avoid the problems of slow resonance in existing models of >> spiking neuronal networks. >> The explicit k-nearest mechanism itself is not meant to be >> biologically plausible, >> but it gives a computational advantage for software >> simulation of large networks >> at a speed slower than 1000 network updates per second. >> >> I guess that more detailed molecular simulations of >> individual neuronal spikes (such as using the Hodgkin-Huxley >> model of a neuron, using the NEURON software, >> or like the Blue Brain project directed by respected Dr. >> Henry Markram) >> are very useful for showing some detailed molecular, >> synaptic, and neuronal properties.
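[Editorial note: for concreteness, here is a minimal sketch of an explicit top-k ("k-winners-take-all") step of the kind described above — the k best-matching neurons are kept firing in a single update instead of letting excitatory/inhibitory loops settle. All names and sizes are invented for illustration; this is not code from the DN papers.]

    # Explicit top-k response selection as a one-step stand-in for
    # lateral inhibition. Illustrative names and values only.
    import numpy as np

    def k_wta_response(weights, x, k):
        """weights: (n_neurons, dim) synaptic vectors; x: input vector."""
        w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
        pre = w @ (x / np.linalg.norm(x))        # cosine-like pre-responses
        response = np.zeros_like(pre)
        winners = np.argpartition(pre, -k)[-k:]  # indices of the k largest
        response[winners] = pre[winners]         # only the k winners fire
        return response, winners

    rng = np.random.default_rng(1)
    W = rng.normal(size=(20, 8))                 # 20 neurons, 8-dim input
    r, winners = k_wta_response(W, rng.normal(size=8), k=3)
    print("firing neurons:", sorted(winners.tolist()))

The neurons that stay silent keep their weights unchanged, which is one way to read the earlier remark that the non-firing neurons are the long-term memory of the area.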
>> However, they miss necessary brain-system-level mechanisms so >> much that it is difficult for them >> to show major brain-scale functions >> (such as learning to recognize objects and detecting >> natural objects directly from natural cluttered scenes). >> >> According to my understanding, if one uses a detailed >> neuronal model for each of a variety of neuronal types and >> connects those simulated neurons of different types according >> to a diagram of Brodmann areas, >> his simulation is NOT going to lead to any major brain function. >> He still needs brain-system-level knowledge such as that >> taught in the BMI 871 course. >> >> -John >> >> On 4/7/14 8:07 AM, Tsvi Achler wrote: >>> Dear Steve, John >>> I think such discussions are great to spark interest in >>> feedback (output back to input) models, which I feel >>> should be given much more attention. >>> In this vein it may be better to discuss more of the details >>> here than to suggest to read a reference. >>> >>> Basically I see ART as a neural way to implement a K-nearest >>> neighbor algorithm. Clearly the way ART overcomes the >>> neural hurdles is immense, especially in figuring out how to >>> coordinate neurons. However it is also important to >>> summarize such methods in algorithmic terms, which I attempt >>> to do here (and please comment/correct). >>> Instar learning is used to find the best weights for quick >>> feedforward recognition without too much resonance >>> (otherwise more resonance will be needed). Outstar learning >>> is used to find the expectation of the patterns. The >>> resonance mechanism evaluates distances between the >>> "neighbors", evaluating how close differing outputs are to >>> the input pattern (using the expectation). By choosing one >>> winner the network is equivalent to a 1-nearest neighbor >>> model. If you open it up to more winners (e.g. k winners) as >>> you suggest then it becomes a k-nearest neighbor mechanism. >>> >>> Clearly I focused here on the main ART modules and did not >>> discuss other additions. But I want to just focus on the >>> main idea at this point. >>> Sincerely, >>> -Tsvi >>> >>> On Sun, Apr 6, 2014 at 1:30 PM, Stephen Grossberg wrote: >>> >>> Dear John, >>> >>> Thanks for your questions. I reply below. >>> >>> On Apr 5, 2014, at 10:51 AM, Juyang Weng wrote: >>> >>>> Dear Steve, >>>> >>>> This is one of my long-time questions that I did not >>>> have a chance to ask you when I met you many times before. >>>> But they may be useful for some people on this list. >>>> Please accept my apology if my question implies any >>>> false impression that I did not intend. >>>> >>>> (1) Your statement below seems to have confirmed my >>>> understanding: >>>> Your top-down process in ART in the late 1990's is >>>> basically for finding an acceptable match >>>> between the input feature vector and the stored feature >>>> vectors represented by neurons (not meant for the >>>> nearest match). >>> >>> ART has developed a lot since the 1990s. A non-technical >>> but fairly comprehensive review article was published in >>> 2012 in /Neural Networks/ and can be found at >>> http://cns.bu.edu/~steve/ART.pdf >>> >>> I do not think about the top-down process in ART in >>> quite the way that you state above. My reason for this >>> is summarized by the acronym CLEARS for the processes of >>> Consciousness, Learning, Expectation, Attention, >>> Resonance, and Synchrony.
All the CLEARS processes come >>> into this story, and ART top-down mechanisms contribute >>> to all of them. For me, the most fundamental issues >>> concern how ART dynamically self-stabilizes the memories >>> that are learned within the model's bottom-up adaptive >>> filters and top-down expectations. >>> >>> In particular, during learning, a big enough mismatch >>> can lead to hypothesis testing and search for a new, or >>> previously learned, category that leads to an acceptable >>> match. The criterion for what is "big enough mismatch" >>> or "acceptable match" is regulated by a vigilance >>> parameter that can itself vary in a state-dependent way. >>> >>> After learning occurs, a bottom-up input pattern >>> typically directly selects the best-matching category, >>> without any hypothesis testing or search. And even if >>> there is a reset due to a large initial mismatch with a >>> previously active category, a single reset event may >>> lead directly to a matching category that can directly >>> resonate with the data. >>> >>> I should note that all of the foundational predictions >>> of ART now have substantial bodies of psychological and >>> neurobiological data to support them. See the review >>> article if you would like to read about them. >>> >>>> The currently active neuron is the one being examined >>>> by the top down process >>> >>> I'm not sure what you mean by "being examined", but >>> perhaps my comment above may deal with it. >>> >>> I should comment, though, about your use of the word >>> "currently active neuron". I assume that you mean at the >>> category level. >>> >>> In this regard, there are two ART's. The first aspect of >>> ART is as a cognitive and neural theory whose scope, >>> which includes perceptual, cognitive, and adaptively >>> timed cognitive-emotional dynamics, among other >>> processes, is illustrated by the above referenced 2012 >>> review article in /Neural Networks/. In the biological >>> theory, there is no general commitment to just one >>> "currently active neuron". One always considers the >>> neuronal population, or populations, that represent a >>> learned category. Sometimes, but not always, a >>> winner-take-all category is chosen. >>> >>> The 2012 review article illustrates some of the large >>> data bases of psychological and neurobiological data >>> that have been explained in a principled way, >>> quantitatively simulated, and successfully predicted by >>> ART over a period of decades. ART-like processing is, >>> however, certainly not the only kind of computation that >>> may be needed to understand how the brain works. The >>> paradigm called Complementary Computing that I >>> introduced awhile ago makes precise the sense in which >>> ART may be just one kind of dynamics supported by >>> advanced brains. This is also summarized in the review >>> article. >>> >>> The second aspect of ART is as a series of algorithms >>> that mathematically characterize key ART design >>> principles and mechanisms in a focused setting, and >>> provide algorithms for large-scale applications in >>> engineering and technology. ARTMAP, fuzzy ARTMAP, and >>> distributed ARTMAP are among these, all of them >>> developed with Gail Carpenter. Some of these algorithms >>> use winner-take-all categories to enable the proof of >>> mathematical theorems that characterize how underlying >>> design principles work. 
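[Editorial note: for readers who want the winner-take-all choice/vigilance/reset cycle in executable form, the sketch below follows the published fuzzy ART equations (Carpenter, Grossberg & Rosen, 1991), heavily simplified; parameter values are illustrative, and none of the biological machinery discussed in this thread is represented.]

    # Simplified fuzzy-ART-style category search: winner-take-all choice,
    # vigilance test, reset, and (fast) learning. Illustrative values only.
    import numpy as np

    def complement_code(a):
        return np.concatenate([a, 1.0 - a])

    def art_step(I, categories, rho=0.75, alpha=0.001, beta=1.0):
        """I: complement-coded input; returns index of the resonating category."""
        choice = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in categories]
        for j in np.argsort(choice)[::-1]:        # search in order of choice value
            w = categories[j]
            match = np.minimum(I, w).sum() / I.sum()
            if match >= rho:                      # vigilance test passed: resonance
                categories[j] = beta * np.minimum(I, w) + (1 - beta) * w
                return j
            # otherwise: reset, and try the next category
        categories.append(I.copy())               # no acceptable match: new category
        return len(categories) - 1

    cats = []
    for a in [np.array([0.10, 0.20]), np.array([0.12, 0.22]), np.array([0.90, 0.80])]:
        print("input", a, "-> category", art_step(complement_code(a), cats))

With beta = 1.0 this sketch runs in the fast-learning mode described below, where each adaptive weight reaches its new equilibrium on a single trial; beta much smaller than 1 gives the slow mode.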
In contrast, the distributed >>> ARTMAP family of algorithms, developed by Gail Carpenter >>> and her colleagues, allows for distributed category >>> representations without losing the benefits of fast, >>> incremental, self-stabilizing learning and prediction in >>> response to large non-stationary databases that can >>> include many unexpected events. >>> >>> See, e.g., >>> http://techlab.bu.edu/members/gail/articles/115_dART_NN_1997_.pdf >>> and >>> http://techlab.bu.edu/members/gail/articles/155_Fusion2008_CarpenterRavindran.pdf. >>> >>> I should note that FAST learning is a technical concept: >>> it means that each adaptive weight can converge to its >>> new equilibrium value on EACH learning trial. That is >>> why ART algorithms can often successfully carry out >>> one-trial incremental learning of a data base. This is >>> not true of many other algorithms, such as back >>> propagation, simulated annealing, and the like, which >>> all experience catastrophic forgetting if they try to do >>> fast learning. Almost all other learning algorithms need >>> to be run using slow learning, which allows only a small >>> increment in the values of adaptive weights on each >>> learning trial, to avoid massive memory instabilities, >>> and work best in response to stationary data. Such >>> algorithms often fail to detect important rare cases, >>> among other limitations. ART can provably learn in >>> either the fast or slow mode in response to >>> non-stationary data. >>> >>>> in a sequential fashion: one neuron after another, >>>> until an acceptable neuron is found. >>>> >>>> (2) The input to the ART in the late 1990's is for a >>>> single feature vector as a monolithic input. >>>> By monolithic, I mean that all neurons take the entire >>>> input feature vector as input. >>>> I raise this point here because a neuron in ART in the >>>> late 1990's does not have an explicit local sensory >>>> receptive field (SRF), >>>> i.e., neurons are fully connected from all components of the >>>> input vector. A local SRF means that each neuron is >>>> only connected to a small region >>>> in an input image. >>> >>> Various ART algorithms for technology do use fully >>> connected networks. They represent a single-channel >>> case, which is often sufficient in applications and >>> which simplifies mathematical proofs. However, the >>> single-channel case is, as its name suggests, not a >>> necessary constraint on ART design. >>> >>> In addition, many ART biological models do not restrict >>> themselves to the single-channel case, and do have >>> receptive fields. These include the LAMINART family of >>> models that predict functional roles for many identified >>> cell types in the laminar circuits of cerebral cortex. >>> These models illustrate how variations of a shared >>> laminar circuit design can carry out very different >>> intelligent functions, such as 3D vision (e.g., 3D >>> LAMINART), speech and language (e.g., cARTWORD), and >>> cognitive information processing (e.g., LIST PARSE). >>> They are all summarized in the 2012 review article, with >>> the archival articles themselves on my web page >>> http://cns.bu.edu/~steve . >>> >>> The existence of these laminar variations-on-a-theme >>> provides an existence proof for the exciting goal of >>> designing a family of chips whose specializations can >>> realize all aspects of higher intelligence, and which >>> can be consistently connected because they all share a >>> similar underlying design.
Work on achieving this goal >>> can productively occupy lots of creative modelers and >>> technologists for many years to come. >>> >>> I hope that the above replies provide some relevant >>> information, as well as pointers for finding more. >>> >>> Best, >>> >>> Steve >>> >>>> >>>> My apology again if my understanding above has errors, >>>> although I have examined the above two points carefully >>>> across several of your papers. >>>> >>>> Best regards, >>>> >>>> -John >>>> >>>> Juyang (John) Weng, Professor >>>> Department of Computer Science and Engineering >>>> MSU Cognitive Science Program and MSU Neuroscience Program >>>> 428 S Shaw Ln Rm 3115 >>>> Michigan State University >>>> East Lansing, MI 48824 USA >>>> Tel: 517-353-4388 >>>> Fax: 517-432-1061 >>>> Email: weng at cse.msu.edu >>>> URL: http://www.cse.msu.edu/~weng/ >>>> ---------------------------------------------- >>> >>> Stephen Grossberg >>> Wang Professor of Cognitive and Neural Systems >>> Professor of Mathematics, Psychology, and Biomedical Engineering >>> Director, Center for Adaptive Systems >>> http://www.cns.bu.edu/about/cas.html >>> http://cns.bu.edu/~steve >>> steve at bu.edu >> >> -- >> Juyang (John) Weng, Professor >> Department of Computer Science and Engineering >> MSU Cognitive Science Program and MSU Neuroscience Program >> 428 S Shaw Ln Rm 3115 >> Michigan State University >> East Lansing, MI 48824 USA >> Tel: 517-353-4388 >> Fax: 517-432-1061 Email: weng at cse.msu.edu >> URL: http://www.cse.msu.edu/~weng/ >> ---------------------------------------------- > > -- > Juyang (John) Weng, Professor > Department of Computer Science and Engineering > MSU Cognitive Science Program and MSU Neuroscience Program > 428 S Shaw Ln Rm 3115 > Michigan State University > East Lansing, MI 48824 USA > Tel: 517-353-4388 > Fax: 517-432-1061 > Email: weng at cse.msu.edu > URL: http://www.cse.msu.edu/~weng/ > ----------------------------------------------

-- Prof. Dr. Danko Nikolić Web: http://www.danko-nikolic.com Mail address 1: Department of Neurophysiology Max Planck Institut for Brain Research Deutschordenstr. 46 60528 Frankfurt am Main GERMANY Mail address 2: Frankfurt Institute for Advanced Studies Wolfgang Goethe University Ruth-Moufang-Str. 1 60433 Frankfurt am Main GERMANY ---------------------------- Office: (..49-69) 96769-736 Lab: (..49-69) 96769-209 Fax: (..49-69) 96769-327 danko.nikolic at gmail.com ----------------------------

From g.westermann at lancaster.ac.uk Mon Apr 14 09:27:48 2014 From: g.westermann at lancaster.ac.uk (Gert Westermann) Date: Mon, 14 Apr 2014 14:27:48 +0100 Subject: Connectionists: 2nd call for papers: NCPW14 - 14th Neural Computation and Psychology Workshop, Lancaster UK Message-ID:

2nd call for papers: NCPW14 - 14th Neural Computation and Psychology Workshop, Lancaster UK

* Apologies for cross-posting *

Dear colleague, We cordially invite you to participate in NCPW14, the 14th Neural Computation and Psychology Workshop, to be held at Lancaster University, UK, from August 21-23, 2014: http://www.psych.lancs.ac.uk/ncpw14 This well-established and lively workshop aims at bringing together researchers from different disciplines such as artificial intelligence, cognitive science, computer science, neurobiology, philosophy and psychology to discuss their work on models of cognitive processes.
Often this workshop has had a theme, and this time it is "Development across the lifespan", but submissions that do not fall under this theme are also welcome. Papers must be about emergent models - frequently, but not necessarily - of the connectionist/neural network kind, applied to cognition. The NCPW workshops have always been characterized by their limited size, high quality papers, the absence of parallel talk sessions, and a schedule that is explicitly designed to encourage interaction among the researchers present in an informal setting. NCPW14 is no exception. The scientific program will consist of keynote lectures, oral sessions and poster sessions. Furthermore, this workshop will feature a unique set of invited speakers:

James McClelland, Stanford University.
Bob McMurray, University of Iowa.
Michael Thomas, Birkbeck College, London.

Abstract submission

Please submit one-page abstracts (pdf) by email to ncpw14conf at gmail.com, stating whether you want to present as a talk, poster, or talk/poster. Please also indicate whether you would like to be considered for a Rumelhart Memorial Travel Award (see below).

Rumelhart Memorial Travel Awards

The Rumelhart Memorial Travel Awards, generously funded by Professor Jay McClelland, will provide funding to support travel costs for students/post docs presenting at the conference. Awards of US$250 are available to students or post docs from Western European countries, and US$750 for students or post docs from elsewhere. Decisions will be based on the quality of the submission. Eligibility criteria: The first author of the submission is a student or Post-doctoral fellow who will attend the meeting and will present the submission if chosen to receive a travel award.

Location

Lancaster is situated in the north west of England, approximately one hour from Manchester and Liverpool airports. Lancaster is surrounded by spectacular scenery, hiking and climbing country. The Yorkshire Dales national park is 10 miles away. The Lake District national park is 20 miles away, http://www.nationalparks.gov.uk. The stagecoach 555 bus (http://www.stagecoachbus.com) takes you directly from Lancaster through the heart of the Lakes. The Trough of Bowland Area of Outstanding Natural Beauty is 2 miles away.

Important dates to remember

Abstract deadline: 15 May
Notification of abstract acceptance: 7 June
Early registration deadline: tbc
Online registration deadline: tbc
Conference dates: 21-23 August, 2014

Looking forward to your participation!

Organizing Committee

Gert Westermann, Lancaster University
Padraic Monaghan, Lancaster University
Katherine E. Twomey, Liverpool University
Alastair C. Smith, MPI Nijmegen

-------------------------------------------------------------------------- Prof. Gert Westermann Department of Psychology Lancaster University Lancaster LA1 4YF Phone: +44 (0)1524 592 942 Fax: +44 (0)1524 593 744 g.westermann at lancaster.ac.uk http://www.psych.lancs.ac.uk/people/gert-westermann
From mail at mkaiser.de Mon Apr 14 07:28:18 2014 From: mail at mkaiser.de (Marcus Kaiser) Date: Mon, 14 Apr 2014 12:28:18 +0100 Subject: Connectionists: Wellcome Trust 4-year PhD programme 'Systems Neuroscience: From Networks to Behaviour' (second round) Message-ID:

Dear all, our Wellcome Trust 4-year PhD programme in systems neuroscience, aimed at applicants from the physical sciences (physics, engineering, mathematics, or computer science), is now accepting applications for studentships starting in September 2014 (see below). Research areas include Neuroinformatics, Computational Neuroscience, Neuroimaging (fMRI, DTI, EEG, ECoG in rodents, non-human primates, and humans), Brain Connectivity, Clinical Neuroscience, Behaviour and Evolution, and Brain Dynamics (simulations and time series analysis). Strong interactions between clinical, experimental, and computational researchers are a key component of this programme. Best, Marcus

Wellcome Trust 4-year PhD programme 'Systems Neuroscience: From Networks to Behaviour'

Programme Directors: Prof. Stuart Baker, Prof. Tim Griffiths, and Dr Marcus Kaiser

The Institute of Neuroscience at Newcastle University integrates more than 100 principal investigators across medicine, psychology, computer science, and engineering. Research in systems, cellular, computational, and behavioural neuroscience. Laboratory facilities include auditory and visual psychophysics; rodent, monkey, and human neuroimaging (EEG, fMRI, PET); TMS; optical recording, multi-electrode neurophysiology, confocal and fluorescence imaging, high-throughput computing and e-science, artificial sensory-motor devices, clinical testing, and the only brain bank for molecular changes in human brain development.

The Wellcome Trust's Four-year PhD Programmes are a flagship scheme aimed at supporting the most promising students to undertake in-depth postgraduate research training. The first year combines taught courses with three laboratory rotations to broaden students' knowledge of the subject area. At the end of the first year, students will make an informed choice of their three-year PhD research project. This programme is based at Newcastle University and aims to provide specialised training for physical and computational scientists (e.g. physics, chemistry, engineering, mathematics, and computer science) wishing to apply their skills to a research neuroscience career.

Eligibility/Person Specification: Applicants should have, or expect to obtain, a 1st or 2:1 degree, or equivalent, in physical sciences, engineering, mathematics or computing.

Value of the award: Support includes a stipend for 4 years (£20k/yr tax-free), PhD registration fees at UK/EU student rate, research expenses, general training funds and some travel costs.

Applications will be accepted and considered upon receipt until the deadline of 30 May 2014 or until a suitable candidate has been appointed. You must apply through the University's online postgraduate application form (http://www.ncl.ac.uk/postgraduate/funding/search/list/in065), inserting the reference number IN065 and selecting 'Master of Research/Doctor of Philosophy (Medical Sciences) - Neuroscience' as the programme of study. You should also send your covering letter and CV to Helen Stewart, Postgraduate Secretary, Institute of Neuroscience, Henry Wellcome Building, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, NE2 4HH, or by email to ion-postgrad-enq at ncl.ac.uk.
For more information, see http://www.ncl.ac.uk/ion/study/wellcome/

-- Marcus Kaiser, Ph.D. Associate Professor (Reader) in Neuroinformatics School of Computing Science Newcastle University Claremont Tower Newcastle upon Tyne NE1 7RU, UK Lab website: http://www.biological-networks.org/ Neuroinformatics at Newcastle: http://research.ncl.ac.uk/neuroinformatics/

From lorincz at inf.elte.hu Mon Apr 14 09:05:12 2014 From: lorincz at inf.elte.hu (Andras Lorincz) Date: Mon, 14 Apr 2014 13:05:12 +0000 Subject: Connectionists: how the brain works? (UNCLASSIFIED) In-Reply-To: <534BBDC2.4090406@gmail.com> References: <51FDF3A2-717D-4399-9076-331EEA0FC45A@lehigh.edu> <5340188F.6090405@cse.msu.edu> <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu> <5342E760.7080309@cse.msu.edu> <5344CA45.3060407@cse.msu.edu> <534BBDC2.4090406@gmail.com> Message-ID:

You are heading toward reinforcement learning and, within it, so-called temporal difference learning. This is a good direction, since many little details can be mapped to the cortico-basal ganglia-thalamocortical loops. Nonetheless, this can't explain everything. The separation of outliers from the generalizable subspace(s) is relevant, since the latter enables one to fill in missing information, whereas the former does not. This was the take-home message of the Netflix competition and the subsequent developments on exact matrix completion.

Andras

_________________________ Andras Lorincz ECCAI Fellow email: lorincz at inf.elte.hu home: http://people.inf.elte.hu/lorincz

________________________________ From: Connectionists on behalf of Danko Nikolic Sent: Monday, April 14, 2014 12:51 PM To: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: how the brain works? (UNCLASSIFIED)

Dear all, It has been very interesting to follow the discussion on the functioning of ART, the stability-plasticity dilemma, and related issues. In that context, I would like to point to an exciting property of the practopoietic theory, which enables us to understand what is needed for a general solution to problems similar to the stability-plasticity dilemma. The stability-plasticity dilemma can be described as the problem of deciding when a new category for a stimulus needs to be created, so that the system has to be adjusted, as opposed to deciding to treat the stimulus as old and familiar and thus not needing to adjust. Practopoietic theory helps us understand how a general solution can be implemented for deciding whether to use old types of behavior or to come up with new ones. This is possible in a so-called "T_3 system" in which a process called "anapoiesis" takes place. When a system is organized in such a T_3 way, every stimulus, old or new, is treated in the same fashion, i.e., as new. The system always adjusts--to everything(!)--even to stimuli that have been seen thousands of times. There is never a simple direct categorization (or pattern recognition) in which a mathematical mapping would take place from input vectors to output vectors, as traditionally implemented in multi-layer neural networks. Rather, the system readjusts itself continuously to prepare for interactions with the surrounding world. The only simple input-output mappings that take place are the sensory-motor loops that execute the actual behavior. The internal processes corresponding to perception, recognition, categorization etc.
are implemented by the mechanisms of internal system adjustments (based on anapoiesis). These mechanisms create new sensory-motor loops, which are then most similar to the traditional mapping operations. The difference between old and new stimuli (i.e., familiar and unfamiliar) is detectable in the behavior of the system only because the system adjusts quicker to the older than to the newer stimuli. The claimed advantage of such a T_3 practopoietic system is that only such a system can become generally intelligent as a whole and behave adaptively and consciously, with understanding of what is going on around it. The system forms a general "adjustment machine" that can become aware of its surroundings and can be capable of interpreting the situation appropriately to decide on the next action. Thus, the perceptual dilemma of stability vs. plasticity is converted into a general understanding of the current situation and the needs of the system. If the current goals of the system require treating a slightly novel stimulus as new, it will be treated as "new". However, if a slight change in the stimulus features does not make a difference for the current goals and the situation, then the stimulus will be treated as "old". Importantly, practopoietic theory is not formulated in terms of neurons (inhibition, excitation, connections, changes of synaptic weights, etc.). Instead, the theory is formulated much more elegantly--in terms of interactions between cybernetic control mechanisms organized into a specific type of hierarchy (poietic hierarchy). This abstract formulation is extremely helpful for two reasons. First, it enables one to focus on the most important functional aspects and thus to understand the underlying principles of system operations much more easily. Second, it tells us what is needed to create intelligent behavior using any type of implementation, neuronal or non-neuronal. I hope this will be motivating enough to give practopoiesis a read. With best regards, Danko Link: http://www.danko-nikolic.com/practopoiesis/ On 4/11/14 2:42 AM, Tsvi Achler wrote: I can't comment on most of this, but I am not sure if all models of sparsity and sparse coding fall into the connectionist realm either, because some make statistical assumptions. -Tsvi On Tue, Apr 8, 2014 at 9:19 PM, Juyang Weng wrote: Tsvi: Let me explain in a little more detail: There are two large categories of biological neurons, excitatory and inhibitory. Both are developed mainly through signal statistics, not specified primarily by the genomes. Not all people agree with me on this point, but please tolerate this view for now. I gave a more detailed discussion on this view in my NAI book. The main effect of inhibitory connections is to reduce the number of firing neurons (David Field called it sparse coding), with the help of excitatory connections. This sparse coding is important because those neurons that do not fire are the long-term memory of the area at this point in time. My view here is different from David Field's. He wrote that sparse coding is for the current representations. I think sparse coding is necessary for long-term memory. Not all people agree with me on this point, but please tolerate this view for now. However, this reduction requires very fast parallel neuronal updates to avoid uncontrollable large-magnitude oscillations. Even with the fast biological parallel neuronal updates, we still see slow but small-magnitude oscillations such as the well-known theta waves and alpha waves.
However, this reduction requires very fast parallel neuronal updates to avoid uncontrollable large-magnitude oscillations. Even with fast biological parallel neuronal updates, we still see slow but small-magnitude oscillations, such as the well-known theta waves and alpha waves. My view is that such slow but small-magnitude oscillations are side effects of the excitatory and inhibitory connections that form many loops, not something really desirable for brain operation (sorry, Paul Werbos). Not all people agree with this point of mine, but please tolerate my view for now.

Therefore, as far as I understand, computer simulations of spiking neurons are not showing major brain functions, because they have to deal with slow oscillations that are very different from the brain's, e.g., as Dr. Henry Markram reported (40Hz?).

The above discussion again shows the power and necessity of an overarching brain theory like that in my NAI book. Those who only simulate biological neurons using superficial biological phenomena are not going to demonstrate any major brain functions. They can talk about signal statistics from their simulations, but signal statistics are far from brain functions.

-John

On 4/8/14 1:30 AM, Tsvi Achler wrote: Hi John,

ART evaluates the distance between the contending representation and the current input through vigilance. If they are too far apart, a poor vigilance signal will be triggered. The best resonance will be achieved when they have the least amount of distance. If, in your model, K-nearest neighbors is used without a neural equivalent, then your model is not quite in the spirit of a connectionist model. For example, Bayesian networks do a great job of emulating brain behavior, modeling the integration of priors, and have been invaluable for modeling cognitive studies. However, they assume a statistical configuration of connections and distributions that we do not quite know how to emulate with neurons. Thus pure Bayesian models are also questionable in terms of connectionist modeling. But some connectionist models can emulate some statistical models; for example, see section 2.4 in Thomas & McClelland's chapter in Sun's 2008 book (http://www.psyc.bbk.ac.uk/people/academic/thomas_m/TM_Cambridge_sub.pdf). I am not suggesting Hodgkin-Huxley-level detailed neuron models; however, connectionist models should have their connections explicitly defined. Sincerely, -Tsvi

On Mon, Apr 7, 2014 at 10:58 AM, Juyang Weng > wrote: Tsvi,

Note that ART uses a vigilance value to pick the first "acceptable" match in its sequential bottom-up and top-down search. I believe that is what Steve meant when he mentioned vigilance.

Why do you see "ART as a neural way to implement a K-nearest neighbor algorithm"? If not all the neighbors have sequentially participated, how can ART find the nearest neighbor, let alone the K nearest neighbors?

Our DN uses an explicit k-nearest mechanism to find the k-nearest neighbors in every network update, to avoid the problems of slow resonance in existing models of spiking neuronal networks. The explicit k-nearest mechanism itself is not meant to be biologically plausible, but it gives a computational advantage for software simulation of large networks at a speed slower than 1000 network updates per second.

I guess that more detailed molecular simulations of individual neuronal spikes (such as using the Hodgkin-Huxley model of a neuron, using the NEURON software, or like the Blue Brain project directed by the respected Dr. Henry Markram) are very useful for showing some detailed molecular, synaptic, and neuronal properties.
However, they miss necessary brain-system-level mechanisms so much that it is difficult for them to show major brain-scale functions (such as learning to recognize and detect natural objects directly from cluttered natural scenes).

According to my understanding, if one uses a detailed neuronal model for each of a variety of neuronal types and connects those simulated neurons of different types according to a diagram of Brodmann areas, the simulation is NOT going to lead to any major brain function. One still needs brain-system-level knowledge such as that taught in the BMI 871 course.

-John

On 4/7/14 8:07 AM, Tsvi Achler wrote: Dear Steve and John,

I think such discussions are great for sparking interest in feedback (output back to input) models, which I feel should be given much more attention. In this vein, it may be better to discuss more of the details here than to suggest reading a reference.

Basically, I see ART as a neural way to implement a K-nearest neighbor algorithm. Clearly the way ART overcomes the neural hurdles is immense, especially in figuring out how to coordinate neurons. However, it is also important to summarize such methods in algorithmic terms, which I attempt to do here (and please comment/correct). Instar learning is used to find the best weights for quick feedforward recognition without too much resonance (otherwise more resonance will be needed). Outstar learning is used to find the expectation of the patterns. The resonance mechanism evaluates distances between the "neighbors", evaluating how close differing outputs are to the input pattern (using the expectation). By choosing one winner, the network is equivalent to a 1-nearest neighbor model. If you open it up to more winners (e.g., k winners), as you suggest, then it becomes a k-nearest neighbor mechanism.

Clearly I focused here on the main ART modules and did not discuss other additions. But I want to just focus on the main idea at this point. Sincerely, -Tsvi

On Sun, Apr 6, 2014 at 1:30 PM, Stephen Grossberg > wrote: Dear John,

Thanks for your questions. I reply below.

On Apr 5, 2014, at 10:51 AM, Juyang Weng wrote: Dear Steve,

This is one of my long-time questions that I did not have a chance to ask you when I met you many times before. But they may be useful for some people on this list. Please accept my apology if my question implies any false impression that I did not intend.

(1) Your statement below seems to have confirmed my understanding: Your top-down process in ART in the late 1990's is basically for finding an acceptable match between the input feature vector and the stored feature vectors represented by neurons (not meant for the nearest match).

ART has developed a lot since the 1990s. A non-technical but fairly comprehensive review article was published in 2012 in Neural Networks and can be found at http://cns.bu.edu/~steve/ART.pdf.

I do not think about the top-down process in ART in quite the way that you state above. My reason for this is summarized by the acronym CLEARS, for the processes of Consciousness, Learning, Expectation, Attention, Resonance, and Synchrony. All the CLEARS processes come into this story, and ART top-down mechanisms contribute to all of them. For me, the most fundamental issues concern how ART dynamically self-stabilizes the memories that are learned within the model's bottom-up adaptive filters and top-down expectations.
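For readers who want the algorithmic skeleton being debated here, the following is a minimal fuzzy-ART-style sketch of category choice, vigilance testing, and category creation, in the spirit of the engineering ARTMAP family (a simplified winner-take-all illustration with arbitrary parameter values, not the biological theory):

import numpy as np

class FuzzyARTSketch:
    """Minimal fuzzy-ART-style categorizer (winner-take-all)."""

    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho = rho        # vigilance: how good a match must be
        self.alpha = alpha    # choice parameter
        self.beta = beta      # learning rate (1.0 = fast learning)
        self.w = []           # one weight vector per learned category

    def present(self, x):
        i = np.concatenate([x, 1.0 - x])   # complement coding of input in [0, 1]
        # Rank categories by the choice function T_j = |i ^ w_j| / (alpha + |w_j|).
        order = sorted(range(len(self.w)),
                       key=lambda j: -(np.minimum(i, self.w[j]).sum()
                                       / (self.alpha + self.w[j].sum())))
        for j in order:
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:          # resonance: acceptable match -> learn
                self.w[j] = self.beta * np.minimum(i, self.w[j]) \
                            + (1 - self.beta) * self.w[j]
                return j
            # otherwise: reset this category and test the next (hypothesis testing)
        self.w.append(i.copy())            # no acceptable match -> new category
        return len(self.w) - 1

net = FuzzyARTSketch(rho=0.75)
for pattern in np.random.rand(20, 4):
    net.present(pattern)
print("categories learned:", len(net.w))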
In particular, during learning, a big enough mismatch can lead to hypothesis testing and search for a new, or previously learned, category that leads to an acceptable match. The criterion for what is a "big enough mismatch" or an "acceptable match" is regulated by a vigilance parameter that can itself vary in a state-dependent way.

After learning occurs, a bottom-up input pattern typically directly selects the best-matching category, without any hypothesis testing or search. And even if there is a reset due to a large initial mismatch with a previously active category, a single reset event may lead directly to a matching category that can directly resonate with the data.

I should note that all of the foundational predictions of ART now have substantial bodies of psychological and neurobiological data to support them. See the review article if you would like to read about them.

> The currently active neuron is the one being examined by the top-down process

I'm not sure what you mean by "being examined", but perhaps my comment above may deal with it.

I should comment, though, about your use of the phrase "currently active neuron". I assume that you mean at the category level.

In this regard, there are two ARTs. The first aspect of ART is as a cognitive and neural theory whose scope, which includes perceptual, cognitive, and adaptively timed cognitive-emotional dynamics, among other processes, is illustrated by the above-referenced 2012 review article in Neural Networks. In the biological theory, there is no general commitment to just one "currently active neuron". One always considers the neuronal population, or populations, that represent a learned category. Sometimes, but not always, a winner-take-all category is chosen.

The 2012 review article illustrates some of the large databases of psychological and neurobiological data that have been explained in a principled way, quantitatively simulated, and successfully predicted by ART over a period of decades. ART-like processing is, however, certainly not the only kind of computation that may be needed to understand how the brain works. The paradigm called Complementary Computing that I introduced a while ago makes precise the sense in which ART may be just one kind of dynamics supported by advanced brains. This is also summarized in the review article.

The second aspect of ART is as a series of algorithms that mathematically characterize key ART design principles and mechanisms in a focused setting, and provide algorithms for large-scale applications in engineering and technology. ARTMAP, fuzzy ARTMAP, and distributed ARTMAP are among these, all of them developed with Gail Carpenter. Some of these algorithms use winner-take-all categories to enable the proof of mathematical theorems that characterize how underlying design principles work. In contrast, the distributed ARTMAP family of algorithms, developed by Gail Carpenter and her colleagues, allows for distributed category representations without losing the benefits of fast, incremental, self-stabilizing learning and prediction in response to large non-stationary databases that can include many unexpected events. See, e.g., http://techlab.bu.edu/members/gail/articles/115_dART_NN_1997_.pdf and http://techlab.bu.edu/members/gail/articles/155_Fusion2008_CarpenterRavindran.pdf.

I should note that FAST learning is a technical concept: it means that each adaptive weight can converge to its new equilibrium value on EACH learning trial. That is why ART algorithms can often successfully carry out one-trial incremental learning of a database. This is not true of many other algorithms, such as back propagation, simulated annealing, and the like, which all experience catastrophic forgetting if they try to do fast learning. Almost all other learning algorithms need to be run using slow learning, which allows only a small increment in the values of adaptive weights on each learning trial, to avoid massive memory instabilities, and they work best in response to stationary data. Such algorithms often fail to detect important rare cases, among other limitations. ART can provably learn in either the fast or slow mode in response to non-stationary data.
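Concretely, in the sketch above, beta is exactly this fast/slow dial (again, an illustration of the concept, not Grossberg's own formulation):

# In the FuzzyARTSketch above, beta is the learning rate:
fast = FuzzyARTSketch(beta=1.0)    # fast learning: each weight reaches its
                                   # new equilibrium i ^ w on every trial
slow = FuzzyARTSketch(beta=0.05)   # slow learning: weights move only 5%
                                   # of the way toward i ^ w per trial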
> in a sequential fashion: one neuron after another, until an acceptable neuron is found.
>
> (2) The input to ART in the late 1990's is a single feature vector as a monolithic input. By monolithic, I mean that all neurons take the entire input feature vector as input. I raise this point here because neurons in ART in the late 1990's do not have an explicit local sensory receptive field (SRF), i.e., they are fully connected from all components of the input vector. A local SRF means that each neuron is only connected to a small region in an input image.

Various ART algorithms for technology do use fully connected networks. They represent a single-channel case, which is often sufficient in applications and which simplifies mathematical proofs. However, the single-channel case is, as its name suggests, not a necessary constraint on ART design.

In addition, many ART biological models do not restrict themselves to the single-channel case, and do have receptive fields. These include the LAMINART family of models that predict functional roles for many identified cell types in the laminar circuits of cerebral cortex. These models illustrate how variations of a shared laminar circuit design can carry out very different intelligent functions, such as 3D vision (e.g., 3D LAMINART), speech and language (e.g., cARTWORD), and cognitive information processing (e.g., LIST PARSE). They are all summarized in the 2012 review article, with the archival articles themselves on my web page http://cns.bu.edu/~steve.

The existence of these laminar variations-on-a-theme provides an existence proof for the exciting goal of designing a family of chips whose specializations can realize all aspects of higher intelligence, and which can be consistently connected because they all share a similar underlying design. Work on achieving this goal can productively occupy lots of creative modelers and technologists for many years to come.

I hope that the above replies provide some relevant information, as well as pointers for finding more.

Best,

Steve

> My apology again if my understanding above has errors, although I have examined the above two points carefully through multiple papers of yours.
> Best regards,
> -John
>
> Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ----------------------------------------------

Stephen Grossberg Wang Professor of Cognitive and Neural Systems Professor of Mathematics, Psychology, and Biomedical Engineering Director, Center for Adaptive Systems http://www.cns.bu.edu/about/cas.html http://cns.bu.edu/~steve steve at bu.edu

-- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ----------------------------------------------

-- Prof. Dr. Danko Nikolić Web: http://www.danko-nikolic.com Mail address 1: Department of Neurophysiology Max Planck Institut for Brain Research Deutschordenstr. 46 60528 Frankfurt am Main GERMANY Mail address 2: Frankfurt Institute for Advanced Studies Wolfgang Goethe University Ruth-Moufang-Str. 1 60433 Frankfurt am Main GERMANY ---------------------------- Office: (..49-69) 96769-736 Lab: (..49-69) 96769-209 Fax: (..49-69) 96769-327 danko.nikolic at gmail.com ----------------------------

From weng at cse.msu.edu Mon Apr 14 13:56:46 2014
From: weng at cse.msu.edu (Juyang Weng)
Date: Mon, 14 Apr 2014 13:56:46 -0400
Subject: Connectionists: how the brain works? (UNCLASSIFIED)
In-Reply-To: <534BBDC2.4090406@gmail.com>
References: <51FDF3A2-717D-4399-9076-331EEA0FC45A@lehigh.edu> <5340188F.6090405@cse.msu.edu> <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu> <5342E760.7080309@cse.msu.edu> <5344CA45.3060407@cse.msu.edu> <534BBDC2.4090406@gmail.com>
Message-ID: <534C215E.6040500@cse.msu.edu>

Dear Danko,

Many of your views are consistent with our DN model, an overarching model for developing brains.

> The system always adjusts--to everything(!)

Yes, since the system does not know which stimuli are new and which are old. However, the amount of adjustment is different. There is also a novelty system embedded in the basic brain circuits, realized by neurotransmitters such as ACh and NE.

> The only simple input-output mappings that take place are the sensory-motor loops that execute the actual behavior.

Sorry, I do not quite agree. The sensory-motor loops that execute the actual behavior are not simple input-output mappings. They affect all related brain representations, including perception, cognition, and motivation, as the DN system implies.

> If the current goals of the system require treating a slightly novel stimulus as new, it will be treated as "new". However, if a slight change in the stimulus features does not make a difference for the current goals and the situation, then the stimulus will be treated as "old".
The brain does not seem to have an if-then-else circuit like your statement above seems to suggest. Regardless of whether a stimulus is new or old, the brain uses basically the same set of mechanisms; only the outcomes differ.

> Importantly, practopoietic theory is not formulated in terms of neurons (inhibition, excitation, connections, changes of synaptic weights, etc.).

Then, does it fall into the trap of symbolic representations? How does the theory explain the development of various types of invariance? DN suggests that the various types of invariance arise from experience and are not specified in the human genes. Thus, convolution networks (including the Cresceptron that my co-authors and I used before) for locational invariance are GROSSLY wrong for the brain.
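To make the point about "same mechanisms, different amount of adjustment" concrete, here is a toy sketch (not the DN model and not practopoiesis; just an illustration) in which every input updates the nearest stored prototype, and familiarity is read out from the size of that update, with no explicit new/old branch:

import numpy as np

class AdjustmentLearner:
    """Toy learner: every input adjusts the nearest prototype.

    There is no if-then-else on 'new vs. old'; the same update runs for
    every stimulus, and the size of the adjustment doubles as a novelty
    signal (familiar inputs move their prototype only a little).
    """

    def __init__(self, dim, n_prototypes=8, rate=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.p = rng.random((n_prototypes, dim))   # random initial prototypes
        self.rate = rate

    def present(self, x):
        j = np.argmin(((self.p - x) ** 2).sum(axis=1))   # nearest prototype
        delta = self.rate * (x - self.p[j])              # same rule for every input
        self.p[j] += delta
        return np.linalg.norm(delta)                     # adjustment size = novelty

learner = AdjustmentLearner(dim=4)
x = np.random.rand(4)
print("first presentation :", learner.present(x))   # large adjustment: 'novel'
print("second presentation:", learner.present(x))   # much smaller: 'familiar'

One might read the returned scalar as the kind of quantity that a separate novelty system (e.g., the ACh/NE signaling mentioned above) could broadcast.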
-John
--
Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ----------------------------------------------

From danko.nikolic at googlemail.com Mon Apr 14 14:10:03 2014
From: danko.nikolic at googlemail.com (Danko Nikolic)
Date: Mon, 14 Apr 2014 20:10:03 +0200
Subject: Connectionists: how the brain works? (UNCLASSIFIED)
In-Reply-To:
References: <51FDF3A2-717D-4399-9076-331EEA0FC45A@lehigh.edu> <5340188F.6090405@cse.msu.edu> <6F08AE39-766F-45B9-97AD-C50714755724@cns.bu.edu> <5342E760.7080309@cse.msu.edu> <5344CA45.3060407@cse.msu.edu> , <534BBDC2.4090406@gmail.com>
Message-ID: <534C247B.6070809@gmail.com>

Dear Andras,

I see why it may seem at first glance that practopoiesis is somehow related to reinforcement learning. However, a closer look at the theory will reveal that this is not the case. Practopoiesis is related to reinforcement learning only a little, about as much as it is related to any other brain theory or algorithm. In the end, in fact, it is quite a novel and exciting way of looking at brain function.
Maybe the contribution of practopoiesis can be appreciated best if described in the language of learning. Imagine a system that does not have one or two general learning rules that are applied to all of the units of the system. Instead, imagine that the number of different learning rules equals the number of units in the system; a billion neurons means a billion different learning rules, whereby each learning rule maximizes a different function. Moreover, imagine that, in this system, all its long-term memories (explicit and implicit) are stored in those learning rules. Thus, long-term memories are not stored primarily in the weights of neuron connections (as widely presumed) but in the rules by which the system changes its network in response to sensory inputs.

Then, when we retrieve a piece of information, or recognize a stimulus, or make a decision, we use these learning mechanisms by quickly applying them to the network (e.g., every second or even faster) as a function of the incoming sensory inputs (or the sensory-motor contingencies). As a result, the network continuously changes its architecture at a very high rate, and it can quickly come back to its previous architecture if it is presented with a sequence of stimuli that it has encountered before. One of the key points is that this process of changing the network is how we think, perceive, recall, categorize, etc.

This process requires one more set of learning mechanisms that lie behind the mentioned rules containing our long-term memory. This latter set is responsible for acquiring our long-term memories, i.e., for determining for each unit which learning rules it needs to use. Thus, there is a process of learning how to learn. Practopoietic theory explains how this is possible, why it works, how such systems can be described in generalized cybernetics terms, and why this approach is sufficiently adaptive to produce intelligence on par with that of humans.
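As a crude two-timescale cartoon of this idea (an illustration invented here, not the theory itself): each unit carries its own slowly learned rule parameters, and the network's fast weights are rebuilt from the current input through those rules at every step:

import numpy as np

rng = np.random.default_rng(1)
n_units, dim = 16, 8

# Slow timescale: per-unit rule parameters (one rule per unit; in this
# cartoon they are just per-unit gains, time constants, and projections,
# which would be learned rarely).
gain = rng.random(n_units)
tau = rng.uniform(0.1, 0.9, n_units)
proj = rng.standard_normal((n_units, dim))   # what each unit's rule looks at

def step(fast_w, x):
    """Apply each unit's own rule to the current input; rebuild fast weights."""
    drive = gain * np.tanh(proj @ x)          # unit-specific response to input
    return (1 - tau) * fast_w + tau * drive   # unit-specific reconstruction speed

# Fast timescale: the working 'architecture' (here, one fast weight per unit)
# is re-derived from the stimulus stream at every step.
fast_w = np.zeros(n_units)
for x in rng.random((5, dim)):                # a short stimulus sequence
    fast_w = step(fast_w, x)
print(fast_w)

Re-presenting the same stimulus sequence re-creates the same fast weights, which is the cartoon version of quickly "coming back" to a previous architecture; the substantive content of the theory (anapoiesis, the poietic hierarchy) is of course not captured by such a sketch.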
In practopoietic theory, the learning rules that store our long-term memories are referred to not as "learning" but as "reconstruction of knowledge", or in Greek, "anapoiesis". The paper also reviews behavioral evidence indicating that our cognition is in fact anapoietic in nature.

I hope that this helps convey that practopoiesis is something genuinely new and cannot simply be described with existing machine-learning approaches and brain theories.

With best regards,

Danko
First, it enables one to focus on the most important > functional aspects and thus, to understand much easier the underlying > principles of system operations. Second, it tells us what is needed to > create intelligent behavior using any type of implementation, neuronal > or non-neuronal. > > I hope this will be motivating enough to give practopoiesis a read. > > With best regards, > > Danko > > > > Link: > http://www.danko-nikolic.com/practopoiesis/ > > > > On 4/11/14 2:42 AM, Tsvi Achler wrote: >> I can't comment on most of this, but I am not sure if all models of >> sparsity and sparse coding fall into the connectionist realm either >> because some make statistical assumptions. >> -Tsvi >> >> >> On Tue, Apr 8, 2014 at 9:19 PM, Juyang Weng > > wrote: >> >> Tavi: >> >> Let me explain a little more detail: >> >> There are two large categories of biological neurons, excitatory >> and inhibitory. Both are developed through mainly signal >> statistics, >> not specified primarily by the genomes. Not all people agree >> with my this point, but please tolerate my this view for now. >> I gave a more detailed discussion on this view in my NAI book. >> >> The main effect of inhibitory connections is to reduce the number >> of firing neurons (David Field called it sparse coding), with the >> help of >> excitatory connections. This sparse coding is important because >> those do not fire are long term memory of the area at this point >> of time. >> My this view is different from David Field. He wrote that sparse >> coding is for the current representations. I think sparse coding is >> necessary for long-term memory. Not all people agree with my this >> point, but please tolerate my this view for now. >> >> However, this reduction requires very fast parallel neuronal >> updates to avoid uncontrollable large-magnitude oscillations. >> Even with the fast biological parallel neuronal updates, we still >> see slow but small-magnitude oscillations such as the >> well-known theta waves and alpha waves. My view is that such >> slow but small-magnitude oscillations are side effects of >> excitatory and inhibitory connections that form many loops, not >> something really desirable for the brain operation (sorry, >> Paul Werbos). Not all people agree with my this point, but >> please tolerate my this view for now. >> >> Therefore, as far as I understand, all computer simulations for >> spiking neurons are not showing major brain functions >> because they have to deal with the slow oscillations that are >> very different from the brain's, e.g., as Dr. Henry Markram reported >> (40Hz?). >> >> The above discussion again shows the power and necessity of an >> overarching brain theory like that in my NAI book. >> Those who only simulate biological neurons using superficial >> biological phenomena are not going to demonstrate >> any major brain functions. They can talk about signal statistics >> from their simulations, but signal statistics are far from brain >> functions. >> >> -John >> >> >> On 4/8/14 1:30 AM, Tsvi Achler wrote: >>> Hi John, >>> ART evaluates distance between the contending representation and >>> the current input through vigilance. If they are too far apart, >>> a poor vigilance signal will be triggered. >>> The best resonance will be achieved when they have the least >>> amount of distance. >>> If in your model, K-nearest neighbors is used without a neural >>> equivalent, then your model is not quite in the spirit of a >>> connectionist model. 
>>> For example, Bayesian networks do a great job emulating brain >>> behavior, modeling the integration of priors. and has been >>> invaluable to model cognitive studies. However they assume a >>> statistical configuration of connections and distributions which >>> is not quite known how to emulate with neurons. Thus pure >>> Bayesian models are also questionable in terms of connectionist >>> modeling. But some connectionist models can emulate some >>> statistical models for example see section 2.4 in Thomas & >>> McClelland's chapter in Sun's 2008 book >>> (http://www.psyc.bbk.ac.uk/people/academic/thomas_m/TM_Cambridge_sub.pdf). >>> I am not suggesting Hodgkin-Huxley level detailed neuron models, >>> however connectionist models should have their connections >>> explicitly defined. >>> Sincerely, >>> -Tsvi >>> >>> >>> >>> On Mon, Apr 7, 2014 at 10:58 AM, Juyang Weng >> > wrote: >>> >>> Tsvi, >>> >>> Note that ART uses a vigilance value to pick up the first >>> "acceptable" match in its sequential bottom-up and top-down >>> search. >>> I believe that was Steve meant when he mentioned vigilance. >>> >>> Why do you think "ART as a neural way to implement a >>> K-nearest neighbor algorithm"? >>> If not all the neighbors have sequentially participated, >>> how can ART find the nearest neighbor, let alone K-nearest >>> neighbor? >>> >>> Our DN uses an explicit k-nearest mechanism to find the >>> k-nearest neighbors in every network update, >>> to avoid the problems of slow resonance in existing models >>> of spiking neuronal networks. >>> The explicit k-nearest mechanism itself is not meant to be >>> biologically plausible, >>> but it gives a computational advantage for software >>> simulation of large networks >>> at a speed slower than 1000 network updates per second. >>> >>> I guess that more detailed molecular simulations of >>> individual neuronal spikes (such as using the Hodgkin-Huxley >>> model of >>> a neuron, using the NEURON software, >>> or like the Blue Brain >>> project directed by respected >>> Dr. Henry Markram) >>> are very useful for showing some detailed molecular, >>> synaptic, and neuronal properties. >>> However, they miss necessary brain-system-level mechanisms >>> so much that it is difficult for them >>> to show major brain-scale functions >>> (such as learning to recognize objects and detection of >>> natural objects directly from natural cluttered scenes). >>> >>> According to my understanding, if one uses a detailed >>> neuronal model for each of a variety of neuronal types and >>> connects those simulated neurons of different types >>> according to a diagram of Brodmann areas, >>> his simulation is NOT going to lead to any major brain >>> function. >>> He still needs brain-system-level knowledge such as that >>> taught in the BMI 871 course. >>> >>> -John >>> >>> On 4/7/14 8:07 AM, Tsvi Achler wrote: >>>> Dear Steve, John >>>> I think such discussions are great to spark interests in >>>> feedback (output back to input) such models which I feel >>>> should be given much more attention. >>>> In this vein it may be better to discuss more of the >>>> details here than to suggest to read a reference. >>>> >>>> Basically I see ART as a neural way to implement a >>>> K-nearest neighbor algorithm. Clearly the way ART >>>> overcomes the neural hurdles is immense especially in >>>> figuring out how to coordinate neurons. However it is also >>>> important to summarize such methods in algorithmic terms >>>> which I attempt to do here (and please comment/correct). 
>>>> Instar learning is used to find the best weights for quick >>>> feedforward recognition without too much resonance >>>> (otherwise more resonance will be needed). Outstar >>>> learning is used to find the expectation of the patterns. >>>> The resonance mechanism evaluates distances between the >>>> "neighbors" evaluating how close differing outputs are to >>>> the input pattern (using the expectation). By choosing one >>>> winner the network is equivalent to a 1-nearest neighbor >>>> model. If you open it up to more winners (eg k winners) as >>>> you suggest then it becomes a k-nearest neighbor mechanism. >>>> >>>> Clearly I focused here on the main ART modules and did not >>>> discuss other additions. But I want to just focus on the >>>> main idea at this point. >>>> Sincerely, >>>> -Tsvi >>>> >>>> >>>> On Sun, Apr 6, 2014 at 1:30 PM, Stephen Grossberg >>>> > wrote: >>>> >>>> Dear John, >>>> >>>> Thanks for your questions. I reply below. >>>> >>>> On Apr 5, 2014, at 10:51 AM, Juyang Weng wrote: >>>> >>>>> Dear Steve, >>>>> >>>>> This is one of my long-time questions that I did not >>>>> have a chance to ask you when I met you many times >>>>> before. >>>>> But they may be useful for some people on this list. >>>>> Please accept my apology of my question implies any >>>>> false impression that I did not intend. >>>>> >>>>> (1) Your statement below seems to have confirmed my >>>>> understanding: >>>>> Your top-down process in ART in the late 1990's is >>>>> basically for finding an acceptable match >>>>> between the input feature vector and the stored >>>>> feature vectors represented by neurons (not meant for >>>>> the nearest match). >>>> >>>> ART has developed a lot since the 1990s. A >>>> non-technical but fairly comprehensive review article >>>> was published in 2012 in /Neural Networks/ and can be >>>> found at http://cns.bu.edu/~steve/ART.pdf >>>> . >>>> >>>> I do not think about the top-down process in ART in >>>> quite the way that you state above. My reason for this >>>> is summarized by the acronym CLEARS for the processes >>>> of Consciousness, Learning, Expectation, Attention, >>>> Resonance, and Synchrony. All the CLEARS processes come >>>> into this story, and ART top-down mechanisms contribute >>>> to all of them. For me, the most fundamental issues >>>> concern how ART dynamically self-stabilizes the >>>> memories that are learned within the model's bottom-up >>>> adaptive filters and top-down expectations. >>>> >>>> In particular, during learning, a big enough mismatch >>>> can lead to hypothesis testing and search for a new, or >>>> previously learned, category that leads to an >>>> acceptable match. The criterion for what is "big enough >>>> mismatch" or "acceptable match" is regulated by a >>>> vigilance parameter that can itself vary in a >>>> state-dependent way. >>>> >>>> After learning occurs, a bottom-up input pattern >>>> typically directly selects the best-matching category, >>>> without any hypothesis testing or search. And even if >>>> there is a reset due to a large initial mismatch with a >>>> previously active category, a single reset event may >>>> lead directly to a matching category that can directly >>>> resonate with the data. >>>> >>>> I should note that all of the foundational predictions >>>> of ART now have substantial bodies of psychological and >>>> neurobiological data to support them. See the review >>>> article if you would like to read about them. 
>>>>
>>>>> The currently active neuron is the one being examined by the top-down process
>>>>
>>>> I'm not sure what you mean by "being examined", but perhaps my comment above may deal with it.
>>>>
>>>> I should comment, though, about your use of the phrase "currently active neuron". I assume that you mean at the category level.
>>>>
>>>> In this regard, there are two ARTs. The first aspect of ART is as a cognitive and neural theory whose scope, which includes perceptual, cognitive, and adaptively timed cognitive-emotional dynamics, among other processes, is illustrated by the above-referenced 2012 review article in /Neural Networks/. In the biological theory, there is no general commitment to just one "currently active neuron". One always considers the neuronal population, or populations, that represent a learned category. Sometimes, but not always, a winner-take-all category is chosen.
>>>>
>>>> The 2012 review article illustrates some of the large databases of psychological and neurobiological data that have been explained in a principled way, quantitatively simulated, and successfully predicted by ART over a period of decades. ART-like processing is, however, certainly not the only kind of computation that may be needed to understand how the brain works. The paradigm called Complementary Computing that I introduced a while ago makes precise the sense in which ART may be just one kind of dynamics supported by advanced brains. This is also summarized in the review article.
>>>>
>>>> The second aspect of ART is as a series of algorithms that mathematically characterize key ART design principles and mechanisms in a focused setting, and provide algorithms for large-scale applications in engineering and technology. ARTMAP, fuzzy ARTMAP, and distributed ARTMAP are among these, all of them developed with Gail Carpenter. Some of these algorithms use winner-take-all categories to enable the proof of mathematical theorems that characterize how underlying design principles work. In contrast, the distributed ARTMAP family of algorithms, developed by Gail Carpenter and her colleagues, allows for distributed category representations without losing the benefits of fast, incremental, self-stabilizing learning and prediction in response to large non-stationary databases that can include many unexpected events.
>>>>
>>>> See, e.g., http://techlab.bu.edu/members/gail/articles/115_dART_NN_1997_.pdf and http://techlab.bu.edu/members/gail/articles/155_Fusion2008_CarpenterRavindran.pdf.
>>>>
>>>> I should note that FAST learning is a technical concept: it means that each adaptive weight can converge to its new equilibrium value on EACH learning trial. That is why ART algorithms can often successfully carry out one-trial incremental learning of a database. This is not true of many other algorithms, such as back propagation, simulated annealing, and the like, which all experience catastrophic forgetting if they try to do fast learning. Almost all other learning algorithms need to be run using slow learning, which allows only a small increment in the values of adaptive weights on each learning trial, to avoid massive memory instabilities, and work best in response to stationary data.
>>>> Such algorithms often fail to detect important rare cases, among other limitations. ART can provably learn in either the fast or slow mode in response to non-stationary data.
>>>>
>>>>> in a sequential fashion: one neuron after another, until an acceptable neuron is found.
>>>>>
>>>>> (2) The input to ART in the late 1990's is a single feature vector, taken as a monolithic input. By monolithic, I mean that all neurons take the entire input feature vector as input. I raise this point here because a neuron in ART in the late 1990's does not have an explicit local sensory receptive field (SRF), i.e., neurons are fully connected to all components of the input vector. A local SRF means that each neuron is only connected to a small region in an input image.
>>>>
>>>> Various ART algorithms for technology do use fully connected networks. They represent a single-channel case, which is often sufficient in applications and which simplifies mathematical proofs. However, the single-channel case is, as its name suggests, not a necessary constraint on ART design.
>>>>
>>>> In addition, many ART biological models do not restrict themselves to the single-channel case, and do have receptive fields. These include the LAMINART family of models that predict functional roles for many identified cell types in the laminar circuits of cerebral cortex. These models illustrate how variations of a shared laminar circuit design can carry out very different intelligent functions, such as 3D vision (e.g., 3D LAMINART), speech and language (e.g., cARTWORD), and cognitive information processing (e.g., LIST PARSE). They are all summarized in the 2012 review article, with the archival articles themselves on my web page http://cns.bu.edu/~steve.
>>>>
>>>> The existence of these laminar variations-on-a-theme provides an existence proof for the exciting goal of designing a family of chips whose specializations can realize all aspects of higher intelligence, and which can be consistently connected because they all share a similar underlying design. Work on achieving this goal can productively occupy lots of creative modelers and technologists for many years to come.
>>>>
>>>> I hope that the above replies provide some relevant information, as well as pointers for finding more.
>>>>
>>>> Best,
>>>>
>>>> Steve
>>>>
>>>>> My apology again if my understanding above has errors, although I have examined the above two points carefully through multiple of your papers.
>>>>> Best regards,
>>>>>
>>>>> -John
>>>>>
>>>>> Juyang (John) Weng, Professor
>>>>> Department of Computer Science and Engineering
>>>>> MSU Cognitive Science Program and MSU Neuroscience Program
>>>>> 428 S Shaw Ln Rm 3115, Michigan State University, East Lansing, MI 48824 USA
>>>>> Tel: 517-353-4388 Fax: 517-432-1061
>>>>> Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/
>>>>
>>>> Stephen Grossberg
>>>> Wang Professor of Cognitive and Neural Systems
>>>> Professor of Mathematics, Psychology, and Biomedical Engineering
>>>> Director, Center for Adaptive Systems
>>>> http://www.cns.bu.edu/about/cas.html
>>>> http://cns.bu.edu/~steve
>>>> steve at bu.edu

--
Prof. Dr. Danko Nikolic
Web: http://www.danko-nikolic.com
Department of Neurophysiology, Max Planck Institute for Brain Research, Deutschordenstr. 46, 60528 Frankfurt am Main, GERMANY
Frankfurt Institute for Advanced Studies, Wolfgang Goethe University, Ruth-Moufang-Str. 1, 60433 Frankfurt am Main, GERMANY
Office: (..49-69) 96769-736 Lab: (..49-69) 96769-209 Fax: (..49-69) 96769-327
danko.nikolic at gmail.com

From weng at cse.msu.edu Mon Apr 14 14:25:30 2014
From: weng at cse.msu.edu (Juyang Weng)
Date: Mon, 14 Apr 2014 14:25:30 -0400
Subject: Connectionists: how the brain works? (UNCLASSIFIED)
In-Reply-To: <534C247B.6070809@gmail.com>
Message-ID: <534C281A.6030609@cse.msu.edu>

> Thus, long-term memories are not stored primarily in the weights of neuron connections (as widely presumed) but in the rules by which the system changes its network in response to sensory inputs.
I still wish that you could explain it in terms of neurons and their connections, as I believe that any model of a brain should at least be explained at the granularity of neurons, their connections, and neurotransmitters.

-John

On 4/14/14 2:10 PM, Danko Nikolic wrote:
> Dear Andras,
>
> I see why it may seem at first look that practopoiesis is somehow related to reinforcement learning. However, any closer look at the theory will reveal that this is not the case. Practopoiesis is related to reinforcement learning only a little, and it is as much related to any other brain theory or algorithm. In the end, in fact, it is quite a novel and exciting way of looking at brain function.
>
> Maybe the contribution of practopoiesis can be appreciated best if described in the language of learning. Imagine a system that does not have one or two general learning rules that are applied to all of the units of the system. Instead, imagine that the number of different learning rules equals the number of units in the system; a billion neurons means a billion different learning rules, whereby each learning rule maximizes a different function.
>
> Moreover, imagine that, in this system, all its long-term memories (explicit and implicit) are stored in those learning rules. Thus, long-term memories are not stored primarily in the weights of neuron connections (as widely presumed) but in the rules by which the system changes its network in response to sensory inputs. Then, when we retrieve a piece of information, or recognize a stimulus, or make a decision, we use these learning mechanisms by quickly applying them to the network (e.g., every second or even faster) as a function of the incoming sensory inputs (or the sensory-motor contingencies). As a result, the network continuously changes its architecture at a very high rate, and can quickly come back to its previous architecture if it is presented with a sequence of stimuli that has been presented previously. One of the key points is that this process of changing the network is how we think, perceive, recall, categorize, etc.
>
> This process requires one more set of learning mechanisms that lie behind those mentioned rules containing our long-term memory. This latter set is responsible for acquiring our long-term memories, i.e., for determining for each unit which learning rules it needs to use. Thus, there is a process of learning how to learn.
>
> Practopoietic theory explains how this is possible, why it works, how such systems can be described in generalized cybernetics terms, and why this approach is sufficiently adaptive to produce intelligence on par with that of humans. In practopoietic theory, the learning rules that store our long-term memories are referred to, not as "learning", but as "reconstruction of knowledge" or, in Greek, "anapoiesis". The paper also reviews behavioral evidence indicating that our cognition is in fact anapoietic in its nature.
>
> I hope that this helps explain that practopoiesis is something totally new and cannot simply be described with existing machine-learning approaches and brain theories.
>
> With best regards,
>
> Danko
>
> On 4/14/14 3:05 PM, Andras Lorincz wrote:
>>
>> You are heading toward reinforcement learning and the so-called temporal difference learning in it. This is a good direction, since many little details can be mapped to the cortico-basal ganglia-thalamocortical loops.
>> Nonetheless, this can't explain everything. The separation of outliers from the generalizable subspace(s) is relevant, since the latter enables one to fill in missing information, whereas the former does not. This was the take-home message of the Netflix competition and the subsequent developments on exact matrix completion.
>>
>> Andras
>>
>> _________________________
>> Andras Lorincz
>> ECCAI Fellow
>> email: lorincz at inf.elte.hu
>> home: http://people.inf.elte.hu/lorincz
>>
>> *From:* Connectionists on behalf of Danko Nikolic
>> *Sent:* Monday, April 14, 2014 12:51 PM
>> *To:* connectionists at mailman.srv.cs.cmu.edu
>> *Subject:* Re: Connectionists: how the brain works? (UNCLASSIFIED)
>>
>> Dear all,
>>
>> It has been very interesting to follow the discussion on the functioning of ART, the stability-plasticity dilemma, and the related issues. In that context, I would like to point to an exciting property of practopoietic theory, which enables us to understand what is needed for a general solution to problems similar to the stability-plasticity dilemma.
>>
>> The stability-plasticity dilemma can be described as the problem of deciding when a new stimulus category needs to be created, and the system adjusted, as opposed to deciding to treat the stimulus as old and familiar and thus not needing to adjust. Practopoietic theory helps us understand how a general solution can be implemented for deciding whether to use old types of behavior or to come up with new ones. This is possible in a so-called "T_3 system" in which a process called "anapoiesis" takes place. When a system is organized in such a T_3 way, every stimulus, old or new, is treated in the same fashion, i.e., as new. The system always adjusts--to everything(!)--even to stimuli that have been seen thousands of times. There is never a simple direct categorization (or pattern recognition) in which a mathematical mapping would take place from input vectors to output vectors, as traditionally implemented in multi-layer neural networks.
>>
>> Rather, the system readjusts itself continuously to prepare for interactions with the surrounding world. The only simple input-output mappings that take place are the sensory-motor loops that execute the actual behavior. The internal processes corresponding to perception, recognition, categorization, etc. are implemented by the mechanisms of internal system adjustments (based on anapoiesis). These mechanisms create new sensory-motor loops, which are then most similar to the traditional mapping operations. The difference between old and new stimuli (i.e., familiar and unfamiliar) is detectable in the behavior of the system only because the system adjusts more quickly to the older than to the newer stimuli.
>>
>> The claimed advantage of such a T_3 practopoietic system is that only such a system can become generally intelligent as a whole and behave adaptively and consciously with an understanding of what is going on around it; the system forms a general "adjustment machine" that can become aware of its surroundings and can be capable of interpreting the situation appropriately to decide on the next action. Thus, the perceptual dilemma of stability vs. plasticity is converted into a general understanding of the current situation and the needs of the system.
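[Since practopoietic theory is deliberately not formulated in neural terms, any code can only caricature it. The toy below is entirely the editor's construction (all names invented) and merely illustrates the two-level picture sketched in this thread: long-term memory lives in per-unit rules that rebuild a transient fast network on each stimulus (an anapoiesis-like reconstruction), while a slower process adjusts the rules themselves (learning how to learn).]

    import numpy as np

    rng = np.random.default_rng(0)
    n_units, n_in = 8, 4

    # Slow level: one "rule" per unit. Here a rule is just a matrix mapping the
    # current stimulus to that unit's fast weights; the rules play the role of
    # long-term memory, while fast_w below is transient and constantly rewritten.
    rules = [0.1 * rng.standard_normal((n_in, n_in)) for _ in range(n_units)]
    fast_w = np.zeros((n_units, n_in))

    def reconstruct(x):
        """Rebuild the fast network from the stored rules for this stimulus,
        then return the transient unit responses."""
        for i, R in enumerate(rules):
            fast_w[i] = R @ x              # each unit applies its own, different rule
        return fast_w @ x

    def learn_to_learn(x, target, lr=0.01):
        """Slow adaptation: nudge each unit's rule so the network it reconstructs
        responds closer to the target (a toy gradient step)."""
        err = reconstruct(x) - target
        for i in range(n_units):
            rules[i] -= lr * err[i] * np.outer(x, x)   # d(x^T R x)/dR = x x^T

[Presenting a previously seen stimulus instantly restores the corresponding fast configuration, which is the flavor of "quickly coming back to its previous architecture" described above; whether this captures anapoiesis in Nikolic's sense is for the paper, not this sketch, to say.]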
>> If the current goals of the system require treating a slightly novel stimulus as new, it will be treated as "new". However, if a slight change in the stimulus features does not make a difference for the current goals and the situation, then the stimulus will be treated as "old".
>>
>> Importantly, practopoietic theory is not formulated in terms of neurons (inhibition, excitation, connections, changes of synaptic weights, etc.). Instead, the theory is formulated much more elegantly--in terms of interactions between cybernetic control mechanisms organized into a specific type of hierarchy (a poietic hierarchy). This abstract formulation is extremely helpful for two reasons. First, it enables one to focus on the most important functional aspects and thus to understand the underlying principles of system operations much more easily. Second, it tells us what is needed to create intelligent behavior using any type of implementation, neuronal or non-neuronal.
>>
>> I hope this will be motivating enough to give practopoiesis a read.
>>
>> With best regards,
>>
>> Danko
>>
>> Link: http://www.danko-nikolic.com/practopoiesis/
>>
>> On 4/11/14 2:42 AM, Tsvi Achler wrote:
>>> I can't comment on most of this, but I am not sure that all models of sparsity and sparse coding fall into the connectionist realm either, because some make statistical assumptions.
>>> -Tsvi
>>>
>>> On Tue, Apr 8, 2014 at 9:19 PM, Juyang Weng wrote:
>>>
>>> Tsvi:
>>>
>>> Let me explain in a little more detail:
>>>
>>> There are two large categories of biological neurons, excitatory and inhibitory. Both are developed mainly through signal statistics, not specified primarily by the genomes. Not all people agree with me on this point, but please tolerate this view for now. I gave a more detailed discussion of this view in my NAI book.
>>>
>>> The main effect of inhibitory connections is to reduce the number of firing neurons (David Field called it sparse coding), with the help of excitatory connections. This sparse coding is important because those neurons that do not fire constitute the long-term memory of the area at this point in time. My view here differs from David Field's: he wrote that sparse coding is for the current representations, whereas I think sparse coding is necessary for long-term memory. Not all people agree with me on this point, but please tolerate this view for now.
>>>
>>> However, this reduction requires very fast parallel neuronal updates to avoid uncontrollable large-magnitude oscillations. Even with the fast biological parallel neuronal updates, we still see slow but small-magnitude oscillations such as the well-known theta waves and alpha waves. My view is that such slow but small-magnitude oscillations are side effects of excitatory and inhibitory connections that form many loops, not something really desirable for brain operation (sorry, Paul Werbos). Not all people agree with me on this point, but please tolerate this view for now.
>>>
>>> Therefore, as far as I understand, all computer simulations of spiking neurons fail to show major brain functions because they have to deal with slow oscillations that are very different from the brain's, e.g., as Dr. Henry Markram reported (40Hz?).
>>>
>>> The above discussion again shows the power and necessity of an overarching brain theory like that in my NAI book.
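[For readers who want the competition mechanism John describes in concrete form: a one-step top-k selection can stand in for the slow settling that iterative lateral inhibition would otherwise require. This is the editor's minimal sketch, in the spirit of the explicit k-nearest mechanism described for DN earlier in this thread; the names are invented and this is not DN's actual code.]

    import numpy as np

    def k_winners(x, W, k=3):
        """Keep only the k best-matching neurons firing; all others are silenced,
        as lateral inhibition would eventually enforce, but in a single step."""
        r = W @ x                        # pre-competition responses
        out = np.zeros_like(r)
        top = np.argsort(r)[-k:]         # indices of the k largest responses
        out[top] = r[top]
        return out

    # Example: 50 neurons, 10-dimensional input; exactly 3 neurons remain active.
    rng = np.random.default_rng(1)
    W = rng.standard_normal((50, 10))
    print(np.count_nonzero(k_winners(rng.standard_normal(10), W)))   # -> 3

[The explicit top-k step sidesteps the oscillation problem John raises, at the cost of not being a biologically plausible circuit, which is exactly the trade-off he states.]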
>>> Those who only simulate biological neurons using superficial biological phenomena are not going to demonstrate any major brain functions. They can talk about signal statistics from their simulations, but signal statistics are far from brain functions.
>>>
>>> -John
>>>
>>> On 4/8/14 1:30 AM, Tsvi Achler wrote:
>>>> Hi John,
>>>> ART evaluates the distance between the contending representation and the current input through vigilance. If they are too far apart, a poor vigilance signal will be triggered. The best resonance will be achieved when they have the least amount of distance. If, in your model, k-nearest neighbors is used without a neural equivalent, then your model is not quite in the spirit of a connectionist model.
>>>> [...]

--
-- Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
428 S Shaw Ln Rm 3115, Michigan State University, East Lansing, MI 48824 USA
Tel: 517-353-4388 Fax: 517-432-1061
Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/
----------------------------------------------

From danko.nikolic at googlemail.com Mon Apr 14 15:04:09 2014
From: danko.nikolic at googlemail.com (Danko Nikolic)
Date: Mon, 14 Apr 2014 21:04:09 +0200
Subject: Connectionists: how the brain works? (UNCLASSIFIED)
In-Reply-To: <534C215E.6040500@cse.msu.edu>
Message-ID: <534C3129.1080901@gmail.com>

Dear John,

Please see my comments below.

> Much of your views are consistent with our DN model, an overarching model for developing brains.

I agree that there are some consistencies. Moreover, I agree that development is important and cannot be neglected in any theory. Development and learning cannot be dissociated, because development is just a way of learning.

> > The system always adjusts--to everything(!)
>
> Yes, since the system does not know which is new and which is old. However, the amount of adjustment is different. There is also a novelty system embedded in the basic brain circuits, realized by neurotransmitters such as ACh and NE.

I believe that the above is a description of your DN system, but it is not a description of an anapoietic system. In an anapoietic system, the amount of adjustment corresponds to the degree to which the mental contents (e.g., those of working memory) have changed. So, if in one moment we think about vacation and in the next moment we think about chess, the change is large because the mental contents have changed. The novelty of those contents does not matter so much. We may be recalling familiar information about chess and still the change is large. Novelty makes this process slower but largely does not determine the amount of change during anapoiesis.

> > The only simple input-output mappings that take place are the sensory-motor loops that execute the actual behavior.
>
> Sorry, I do not quite agree. The sensory-motor loops that execute the actual behavior are not simple input-output mappings. They affect all related brain representations, including perception, cognition and motivation, as the DN system implies.

Again, I agree that this is what the DN system does. However, I was describing the properties of a practopoietic system that implements anapoiesis. This system has different properties.

> > If the current goals of the system require treating a slightly novel stimulus as new, it will be treated as "new". However, if a slight change in the stimulus features does not make a difference for the current goals and the situation, then the stimulus will be treated as "old".
>
> The brain does not seem to have an if-then-else circuit like your above statement seems to suggest.
> Regardless of new or old, the brain uses basically the same set of mechanisms. Only the outcome is always different.

There is no if-then. It is more accurate to say that, with anapoiesis, these properties emerge.

> > Importantly, practopoietic theory is not formulated in terms of neurons (inhibition, excitation, connections, changes of synaptic weights, etc.).
>
> Then, does it fall into the trap of symbolic representations?

No, no symbolics. It is more like cybernetics organized into a creational (i.e., poietic) hierarchy.

> How does the theory explain the development of various types of invariance?

An anapoietic system can only learn in an invariant way. In fact, it is very hard for it to learn things literally, much as non-invariant learning is very hard for a human mind. The system must put in a lot of effort, a lot of learning, in order to reduce its invariance.

To understand why invariance is a natural property of anapoiesis, please see section 2.4, entitled "Practopoietic transcendence of knowledge: Generality-specificity hierarchy".

Best regards,

Danko

--
Prof. Dr. Danko Nikolić
Web: http://www.danko-nikolic.com
Department of Neurophysiology, Max Planck Institute for Brain Research, Deutschordenstr. 46, 60528 Frankfurt am Main, GERMANY
Frankfurt Institute for Advanced Studies, Wolfgang Goethe University, Ruth-Moufang-Str. 1, 60433 Frankfurt am Main, GERMANY
Office: (..49-69) 96769-736 Lab: (..49-69) 96769-209 Fax: (..49-69) 96769-327
danko.nikolic at gmail.com

From weng at cse.msu.edu Mon Apr 14 20:41:38 2014
From: weng at cse.msu.edu (Juyang Weng)
Date: Mon, 14 Apr 2014 20:41:38 -0400
Subject: Connectionists: how the brain works? (UNCLASSIFIED)
In-Reply-To: <534C3129.1080901@gmail.com>
Message-ID: <534C8042.5010406@cse.msu.edu>

Dear Danko,

Thank you for your pointer, which gave me an opportunity to browse your paper, especially section 2.4. If you would like feedback that you might take into account for your next-stage work, here are two major points:

(1) Your work is still qualitative, not quantitative. The gap from here to an experimentally verifiable theory is wide.

(2) Fig. 2, A: your feed-forward computational model is incorrect, even though you have a comparator.

For any neuroscientist who is interested in modeling the brain, I suggest first looking into computer vision systems, which are at least quantitative. There is a series of overwhelmingly challenging problems that computer vision systems face, but they have only touched on a small number of those problems. Unfortunately, the computer vision field has basically gotten lost on its way in the search for a general vision system. The field seems hopeless because it does not seriously look into the brain.

When one can claim that his computational model autonomously learns general-purpose vision by directly learning a wide variety of invariances from cluttered scenes without pre-segmentation of the scenes (I proposed this as the first principle), he is well on his way toward how the same vision system addresses many other brain problems.
Vision was the field on which the DN model was first verified, before the same model was used to address other brain functions using basically the same set of developmental principles.

-John

On 4/14/14 3:04 PM, Danko Nikolic wrote:
> [...]

--
-- Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
428 S Shaw Ln Rm 3115, Michigan State University, East Lansing, MI 48824 USA
Tel: 517-353-4388 Fax: 517-432-1061
Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/
----------------------------------------------

From h.glotin at gmail.com Tue Apr 15 03:41:58 2014
From: h.glotin at gmail.com (Herve Glotin)
Date: Tue, 15 Apr 2014 09:41:58 +0200
Subject: Connectionists: [lastCFP] ICML uLearnBio 2014: int. wkp on Unsupervised Learning from Bioacoustic Big Data
Message-ID:

- Int'l Workshop on Unsupervised Learning from Bioacoustic Big Data - joint to ICML 2014 - 25/26 June, Beijing
Chamroukhi, Glotin, Dugan, Clark, Artières, LeCun
http://sabiod.univ-tln.fr/ulearnbio/
Last deadline extension: 25th April for regular papers (2 to 6 pages).

MAIN TOPICS: Unsupervised generative learning on big data, Latent data models, Model-based clustering, Bayesian non-parametric clustering, Bayesian sparse representation, Feature learning, Deep neural nets, Bioacoustics, Environmental scene analysis, Big bio-acoustic data structuring, Species clustering (birds, whales...)

IMPORTANT DATES (last extension): 25th April for regular papers (2 to 6 pages), or 30th May for papers on one of the technical challenges. All submissions will be reviewed by the program committee and assessed based on their novelty, technical quality, potential impact, and clarity of writing. All accepted papers will be published as part of the ICMLUlb workshop online proceedings with an ISBN number. The organizers are discussing the opportunity of editing a special issue with a journal; authors of the best submissions will be invited to submit extended versions of their papers.

OBJECTIVES: The general topic of uLearnBio is machine learning from bioacoustic data: supervised methods, but also unsupervised feature learning and clustering from bioacoustic data. The non-parametric alternative avoids assuming restricted functional forms and thus allows the complexity and accuracy of the inferred model to grow as more data is observed. It also offers an alternative to the difficult problem of model selection in model-based clustering by inferring the number of clusters from the data as the learning proceeds. ICMLulb offers an excellent framework to see how parametric and nonparametric probabilistic models for cluster analysis perform when learning from complex real bio-acoustic data. Data from bird songs and whale songs are provided in the framework of challenges, as in previous ICML and NIPS workshops on learning from bio-acoustic data (the ICML4B and NIPS4B books are available at http://sabiod.org). ICMLuLearnBio will bring ideas on how to proceed in understanding bioacoustics to provide methods for biodiversity indexing. Scaled bio-acoustic data science is a novel challenge for AI. Large cabled submarine acoustic observatory deployments permit data to be acquired continuously, over long time periods. For example, the submarine Neptune observatory in Canada, the Antares and Nemo neutrino detectors, and PALAOA in the Antarctic (cf. NIPS4B proc.) are 'big data' challenges.
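[One concrete instance of the nonparametric idea mentioned in the objectives above, inferring the number of clusters as learning proceeds, is a DP-means-style assignment rule (the small-variance limit of a Dirichlet-process mixture). The single-pass sketch below is the editor's illustration, not workshop code; lam is a penalty radius beyond which a point opens a new cluster.]

    import numpy as np

    def dp_means_pass(X, lam):
        """One online pass: a point farther than lam from every existing
        centroid spawns a new cluster, so the cluster count grows with the
        data instead of being fixed in advance."""
        centroids = [X[0].astype(float)]
        labels = []
        for x in X:
            d = [np.linalg.norm(x - c) for c in centroids]
            j = int(np.argmin(d))
            if d[j] > lam:                     # no existing cluster is close enough
                centroids.append(x.astype(float))
                j = len(centroids) - 1
            labels.append(j)
        return np.array(centroids), np.array(labels)

    # Tiny demo: two well-separated blobs end up in two clusters without fixing K.
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
    print(len(dp_means_pass(X, lam=1.0)[0]))   # -> 2

[The full algorithm would re-estimate the means and iterate to convergence; the one-pass version above is only meant to show how model complexity can grow with the data.]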
Automated analysis, including clustering/segmentation and structuring of acoustic signals, event detection, data mining and machine learning to discover relationships among data streams, promises to aid scientists in making discoveries in an otherwise overwhelming quantity of acoustic data.

CHALLENGES: In addition to the two previously announced challenges (the Parisian bird and whale challenges), we open a 3rd challenge on 500 Amazonian bird species, linked to the LifeClef Bird challenge 2014 but in an unsupervised way, over 9K .wav files. Details on the challenges: http://sabiod.univ-tln.fr/ulearnbio/challenges.html
More information and open challenges: http://sabiod.org

INVITED SPEAKERS: Pr. G. McLachlan - Dept of Mathematics - Univ. of Queensland, AU; Dr. F. Chamroukhi - LSIS CNRS - Toulon Univ, FR; Dr. P. Dugan - Bioacoustics Research Program on Big Data - Cornell Univ, USA.

Program Committee: Dr. F. Chamroukhi - LSIS CNRS - Toulon Univ; Pr. H. Glotin - LSIS CNRS - Inst. Univ. de France - Toulon Univ; Dr. P. Dugan - Ornithology Bioacoustics Lab - Cornell Univ, NY; Pr. C. Clark - Ornithology Bioacoustics Lab - Cornell Univ, NY; Pr. T. Artières - LIP6 CNRS - Sorbonne Univ, Paris; Pr. Y. LeCun - Comp. Biological Learning Lab NYU & Facebook Research Center NY.

--
Hervé Glotin, Pr.
Institut Univ. de France (IUF) & Univ. Toulon (UTLN)
Head of information DYNamics & Integration (DYNI @ UMR CNRS LSIS)
http://glotin.univ-tln.fr glotin at univ-tln.fr

From mehdi.khamassi at isir.upmc.fr Tue Apr 15 13:23:06 2014
From: mehdi.khamassi at isir.upmc.fr (Mehdi Khamassi)
Date: Tue, 15 Apr 2014 19:23:06 +0200
Subject: Connectionists: Poster submission deadline extended to the Fourth International Symposium on Biology of Decision-Making, 26-28 May 2014 @ Paris, France
Message-ID: <8644d63e50610e9081e0fc711510eb0b@mailhost.isir.upmc.fr>

[Please accept our apologies if you get multiple copies of this message]

Dear colleagues,

The deadline for poster submission is extended to April 30 for the Fourth Symposium on Biology of Decision Making, which will take place in Paris, France, on May 26-28th, 2014. Poster submission and online registration can be done on the dedicated website: http://sbdm2014.isir.upmc.fr

Please circulate widely and encourage your colleagues to attend.

------------------------------------------------------------------------------------------------
FOURTH SYMPOSIUM ON BIOLOGY OF DECISION MAKING (SBDM 2014)
May 26-28, 2014, Paris, France
Institut du Cerveau et de la Moelle, Hôpital La Pitié Salpêtrière, Paris, France & Ecole Normale Supérieure, Paris, France & Université Pierre et Marie Curie, Paris, France
http://sbdm2014.isir.upmc.fr
------------------------------------------------------------------------------------------------

PRESENTATION: The Fourth Symposium on Biology of Decision Making will take place on May 26-28, 2014 at the Institut du Cerveau et de la Moelle, Paris, France, with a satellite day at Ecole Normale Supérieure, Paris, France. The objective of this three-day symposium is to gather people from different research fields with different approaches (economics, ethology, psychiatry, neural and computational approaches) to decision-making. The symposium will be single-track, will last for 3 days, and will include 6 sessions: (#1) Who is making decisions?
Cortex or basal ganglia; (#2) New computational approaches to decision-making; (#3) A new player in decision-making: the hippocampus; (#4) Neuromodulation of decision-making; (#5) Maladaptive decisions in clinical conditions; (#6) Who is more rational? Decision-making across species.

CONFIRMED SPEAKERS: Elsa Addessi (ISTC-CNR, Italy), Bernard Balleine (Sydney University, Australia), Karim Benchenane (CNRS-ESPCI, France), Roland Benoit (Harvard, USA), Matthew Botvinick (Princeton University, USA), Anne Collins (Brown University, USA), Roshan Cools (Radboud Univ. Nijmegen, The Netherlands), Molly Crockett (UCL, UK), Jean Daunizeau (INSERM-ICM, France), Nathaniel Daw (NYU, USA), Kenji Doya (OIST, Japan), Philippe Faure (CNRS-UPMC, France), Lesley Fellows (McGill University, Canada), Aldo Genovesio (Università La Sapienza, Italy), Ben Hayden (University of Rochester, USA), Tobias Kalenscher (Universität Düsseldorf, Germany), Etienne Koechlin (CNRS-ENS, France), James Marshall (Sheffield University, UK), Genela Morris (Haifa University, Israel), Camillo Padoa-Schioppa (Washington Univ. St Louis, USA), Alex Pouget (Rochester University, USA), Pete Redgrave (Sheffield University, UK), Jonathan Roiser (UCL, UK), Masamichi Sakagami (Tamagawa University, Japan), Daphna Shohamy (Columbia University, USA), Klaas Stephan (Univ. Zurich & ETH Zurich, Switzerland)

IMPORTANT DATES:
April 30, 2014 - Deadline for poster submission (extended deadline)
May 1, 2014 - Deadline for registration
May 26-28, 2014 - Symposium

ORGANIZING COMMITTEE: Thomas Boraud (CNRS, Bordeaux, France), Sacha Bourgeois-Gironde (La Sorbonne, Paris, France), Kenji Doya (OIST, Okinawa, Japan), Mehdi Khamassi (CNRS-UPMC, Paris, France), Etienne Koechlin (CNRS-ENS, Paris, France), Mathias Pessiglione (ICM-INSERM, Paris, France)

CONTACT INFORMATION: Website, registration, poster submission and detailed program: http://sbdm2014.isir.upmc.fr Contact: sbdm2014 [ at ] isir.upmc.fr Questions about registration: sbdm2014-registration [ at ] isir.upmc.fr

--
Mehdi Khamassi, PhD
Researcher (CNRS)
Institut des Systèmes Intelligents et de Robotique (UMR7222)
CNRS - Université Pierre et Marie Curie
Pyramide, Tour 55 - Boîte courrier 173, 4 place Jussieu, 75252 Paris Cedex 05, France
tel: +33 1 44 27 28 85 fax: +33 1 44 27 51 45 cell: +33 6 50 76 44 92
http://people.isir.upmc.fr/khamassi

From naotsu at gmail.com Tue Apr 15 23:12:01 2014
From: naotsu at gmail.com (Naotsugu Tsuchiya)
Date: Wed, 16 Apr 2014 13:12:01 +1000
Subject: Connectionists: Research Fellowship Opportunity in Cognitive Neuroscience
Message-ID:

Dear All,

The School of Psychological Sciences is pleased to announce the opening of a call for a *Research Fellow in Cognitive Neuroscience*. Expressions of interest from suitably qualified applicants are sought, and appointments may be made at the level of Research Fellow, Senior Research Fellow or Principal Research Fellow, depending upon experience. Salary and research support are available for 4 years. Domestic and international applicants are welcome to apply. Please see the attached document or follow the link below for further information regarding this outstanding career opportunity.
http://jobs.monash.edu.au/jobDetails.asp?sJobIDs=523427&lWorkTypeID=&lLocationID=&lCategoryID=641%2C+640%2C+636&lBrandID=&stp=AW&sLanguage=en Best wishes Mark Bellgrove and Kim Cornish -- Professor Mark Bellgrove Research Chair School of Psychological Sciences Faculty of Medicine, Nursing and Health Sciences Monash University, Clayton Campus Room 533, Building 17 Victoria, 3800 AUSTRALIA Tel +61 3 990 24200 Fax +61 3 99053948 Monash Provider No. 00008C www.med.monash.edu.au/psych/research -- ------------------------------------------------------------------------- Nao (Naotsugu) Tsuchiya, Ph.D. 1. Associate Professor School of Psychological Sciences Faculty of Medicine, Nursing and Health Sciences, Monash University 2. ARC Future Fellow homepage: http://users.monash.edu.au/~naotsugt/Tsuchiya_Labs_Homepage/Main.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From birgit.ahrens at bcf.uni-freiburg.de Wed Apr 16 05:37:46 2014 From: birgit.ahrens at bcf.uni-freiburg.de (Birgit Ahrens) Date: Wed, 16 Apr 2014 11:37:46 +0200 Subject: Connectionists: BCF/NWG course "Analysis and Models in Neurophysiology" 2014 at the Bernstein Center Freiburg, Germany Message-ID: <534E4F6A.7010609@bcf.uni-freiburg.de> BCF/NWG-Course "Analysis and Models in Neurophysiology" /Sunday, October 5 - Friday, October 10, 2014 / /Bernstein Center Freiburg, Hansastraße 9a, 79104 Freiburg, Germany/ *Aim of the course:* The course is intended to provide advanced Diploma/Masters and PhD students, as well as young researchers from the neurosciences, with approaches for the analysis of electrophysiological data and the theoretical concepts behind them. *The course includes various topics such as*: * Neuron models and spike train statistics * Point processes and correlation measures * Systems and signals * Local field potentials The course will consist of lectures in the morning and matching exercises using Matlab and Mathematica in the afternoon. Experience with these software packages will be helpful but is not required for registration. The participants should have a basic understanding of scientific programming. This course is designed especially for advanced diploma/master-students and PhD-students (preferentially in their first year). *Application:* Please apply by sending one pdf document containing your CV and a meaningful letter of motivation to nwg-course at bcf.uni-freiburg.de. The letter of motivation should refer to the following points: * Reasons for wanting to take this course * Background in mathematics * Experience using Matlab/Python/Mathematica * Background in neuroscience The course is limited to 20 participants. *Course fees:* NWG members - 50EUR, others - 125EUR *Application deadline: *June 30, 2014 *More information: *http://www.bcf.uni-freiburg.de/events/conferences-workshops/20141005-nwgcourse *-- Dr. Birgit Ahrens --* Teaching & Training Coordinator Bernstein Center Freiburg University of Freiburg Hansastr. 9a D - 79104 Freiburg Germany Phone: +49 (0) 761 203-9575 Fax: +49 (0) 761 203-9559 -------------- next part -------------- An HTML attachment was scrubbed... URL: From hasselmo at gmail.com Wed Apr 16 09:16:07 2014 From: hasselmo at gmail.com (Michael Hasselmo) Date: Wed, 16 Apr 2014 09:16:07 -0400 Subject: Connectionists: Post-doctoral position - Boston University Message-ID: Post-doctoral position -
Boston University The Hasselmo laboratory in the Center for Memory and Brain at Boston University is looking for a post-doctoral fellow to work on a multi-disciplinary university research initiative (MURI) project in collaboration with the Leonard and Roy labs at MIT and the Milford lab at QUT. Applicants should have experience in the area of robotics with an emphasis on spatial navigation and computational neuroscience. This background should include a degree in electrical engineering, computer science, computational neuroscience or mechanical engineering. The successful candidate will extend current research on goal-directed spatial behavior, including work on modeling navigation in robots guided by models of neurophysiological circuits. He/she will also implement previously developed computational models in 3D simulation environments and robotic hardware. The ideal candidate will have experience in path planning, autonomous navigation, computational modeling, neural networks, AI and/or computer vision. Excellent programming skills in MATLAB, Python and/or C++ are a must. Experience in implementing computational models on hardware platforms is a plus. Requirements: - A degree in one of the following areas: electrical engineering, computer science, computational neuroscience, mechanical engineering. - Experience in robotics, mathematical modeling, autonomous navigation, and/or neural networks. - Excellent coding skills in MATLAB, C++ and/or Python. -- Prof. Michael Hasselmo Center for Memory and Brain, Department of Psychological and Brain Sciences, Graduate Program for Neuroscience, Principal Investigator, ONR Multi-disciplinary University Research Initiative on Grid cells and Autonomous Systems, Boston University, 2 Cummington St., Boston, MA, 02215, USA Tel: (617) 353-1397, e-mail: hasselmo at bu.edu, http://www.bu.edu/hasselmo -------------- next part -------------- An HTML attachment was scrubbed... URL: From AdvancedAnalytics at uts.edu.au Wed Apr 16 19:21:00 2014 From: AdvancedAnalytics at uts.edu.au (Advanced Analytics) Date: Thu, 17 Apr 2014 09:21:00 +1000 Subject: Connectionists: AAI Short Course - 'Behaviour Analytics - an Introduction' - Wednesday 23 April 2014 Message-ID: <8112393AA53A9B4A9BDDA6421F26C68A0173A4934E1E@MAILBOXCLUSTER.adsroot.uts.edu.au> Dear Colleague, AAI Short Course - 'Behaviour Analytics - an Introduction' - Wednesday 23 April 2014 https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1572&EventID=1294 Our AAI short course 'Behaviour Analytics - an Introduction' may be of interest to you and/or others in your organisation or network. Complex behaviours are widely seen on the internet, in business, in social and online networks, and in multi-agent systems. In fact, behaviour carries stronger semantic meaning than the raw data used to record and represent business activities, impacts and dynamics. Therefore, an in-depth understanding of complex behaviours is increasingly recognised as a crucial means of disclosing the driving forces, causes and business impact behind many challenging issues. This motivates the emergence of behaviour analytics, i.e. understanding behaviours from a computing perspective.
In this short course, we present an overview of behaviour analytics and discuss complex behaviour interactions and relationships, complex behaviour representation, behavioural feature construction, behaviour impact and utility analysis, behaviour pattern analysis, exceptional behaviour analysis, negative behaviour analysis, and behaviour interaction and evolution. Please register here https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1572&EventID=1294 An important foundation short course in the AAI series of advanced data analytics short courses - please view this short course and others here http://www.uts.edu.au/research-and-teaching/our-research/advanced-analytics-institute/short-courses/upcoming-courses We are happy to discuss at your convenience. Thank you and regards. Colin Wise Operations Manager Advanced Analytics Institute (AAI) Blackfriars Building 2, Level 1 University of Technology, Sydney (UTS) Email: Colin.Wise at uts.edu.au Tel. +61 2 9514 9267 M. 0448 916 589 AAI: www.analytics.uts.edu.au/ Reminder - AAI Short Course - Advanced Data Analytics - an Introduction - Wednesday 7 May 2014 https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1571 Future short courses on Data Analytics and Big Data may be viewed at LINK AAI Education and Training Short Courses Survey - you may be interested in completing our AAI Survey at LINK AAI Email Policy - should you wish to not receive this periodic communication on Data Analytics Learning please reply to our email (to sender) with UNSUBSCRIBE in the Subject. We will delete you from our database. Thank you for your past and future support. UTS CRICOS Provider Code: 00099F DISCLAIMER: This email message and any accompanying attachments may contain confidential information. If you are not the intended recipient, do not read, use, disseminate, distribute or copy this message or attachments. If you have received this message in error, please notify the sender immediately and delete this message. Any views expressed in this message are those of the individual sender, except where the sender expressly, and with authority, states them to be the views of the University of Technology Sydney. Before opening any attachments, please check them for viruses and defects. Think. Green. Do. Please consider the environment before printing this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From callforvideos at aaaivideos.org Thu Apr 17 02:04:24 2014 From: callforvideos at aaaivideos.org (AAAI Video Competition Call for Videos) Date: Thu, 17 Apr 2014 08:04:24 +0200 Subject: Connectionists: Extension: AAAI Video Competition 2014 - Call For Videos Message-ID: DEADLINE EXTENSION -> May 4 AAAI-14 AI Video Competition Date: July 28, 2014 Place: Québec City, Québec, Canada Website: http://www.aaaivideos.org Video: http://youtu.be/uwD5qN-MF5M Submission Deadline: May 4, 2014 (extended deadline) ------- Dear Colleagues, AAAI is pleased to announce the continuation of the AAAI Video Competition, now entering its eighth year. The video competition will be held in conjunction with the AAAI-14 conference in Québec City, Québec, Canada, July 27-31, 2014. At the award ceremony, authors of award-winning videos will be presented with "Shakeys", trophies named in honour of SRI's Shakey robot and its pioneering video. Award-winning videos will be screened at this ceremony.
The goal of the competition is to show the world how much fun AI is by documenting exciting artificial intelligence advances in research, education, and application. View previous entries and award winners at http://www.aaaivideos.org/past_competitions. The rules are simple: compose a short video about an exciting AI project, and narrate it so that it is accessible to a broad online audience. We strongly encourage student participation. To increase reach, select videos will be uploaded to Youtube and promoted through social media (twitter, facebook, g+) and major blogs in AI and robotics. VIDEO FORMAT AND CONTENT Either a 1 minute (max) short video or a 5 minute (max) long video, with English narration (or English subtitles). Consider combining screenshots, interviews, and videos of a system in action. Make the video self-contained, so that newcomers to AI can understand and learn from it. We encourage a good sense of humor, but will only accept submissions with serious AI content. For example, we welcome submissions of videos that: * Highlight a research topic - contemporary or historic, your own or from another group * Introduce viewers to an exciting new AI-related technology * Provide a window into the research activities of a laboratory and/or senior researcher * Attract prospective students to the field of AI * Explain AI concepts - your video could be used in the classroom Please note that this list is not exhaustive. Novel ideas for AI-based videos, including those not necessarily based on a "system in action", are encouraged. No matter what your choice is, creativity is encouraged! (Please note: The authors of previous, award-winning videos typically used humor, background music, and carefully selected movie clips to make their contributions come alive.) Please also note that videos should only contain material for which the authors own copyright. Clips from films or television and music for the soundtrack should only be used if copyright permission has been granted by the copyright holders, and this written permission accompanies the video submission. SUBMISSION INSTRUCTIONS Submit your video by making it available for download on a (preferably password-protected) dropbox, ftp or website. Once you have done so, please fill out the submission form (http://www.aaaivideos.org/submission_form.txt) and send it to us by email (submission at aaaivideos.org). All submissions are due no later than May 4, 2014 (the extended deadline). REVIEW AND AWARD PROCESS Submitted videos will be peer-reviewed by members of the programme committee according to the criteria below. Videos that receive positive reviews will be accepted for publication in the AAAI Video Competition proceedings. Select videos will also be uploaded to Youtube, promoted through social media, and featured on the dedicated website (http://www.aaaivideos.org). The best videos will be nominated for awards. Winners will be revealed at the award ceremony during AAAI-14. All authors of accepted videos will be asked to sign a distribution license form. Review criteria: 1. Relevance to AI (research or application) 2. Excitement generated by the technology presented 3. Educational content 4. Entertainment value 5. Presentation (cinematography, narration, soundtrack, production values) AWARD CATEGORIES Best Video, Best Short Video, Best Student Video, Most Jaw-Dropping Technology, Most Educational, Most Entertaining and Best Presentation. (Categories may be changed at the discretion of the chairs.) AWARDS Trophies ("Shakeys").
KEY DATES * Submission Deadline: May 4, 2014 (extended deadline) * Reviewing Decision Notifications & Award Nominations: May 31, 2014 * Final Version Due: June 15, 2014 * Screening and Award Presentations: July 28, 2014 FOR MORE INFORMATION Please contact us at info at aaaivideos.org We look forward to your participation in this exciting event! Mauro Birattari and Sabine Hauert AAAI Video Competition 2014 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dglanzma at mail.nih.gov Thu Apr 17 11:43:03 2014 From: dglanzma at mail.nih.gov (Glanzman, Dennis (NIH/NIMH) [E]) Date: Thu, 17 Apr 2014 15:43:03 +0000 Subject: Connectionists: Major change in NIH Policy regarding Revision (Amended) Applications -- NOT-OD-14-074 Message-ID: I am writing to let the community know that effective immediately, NIH no longer limits applicants to a single Revision Application (A1). From today's Notice: "... following an unsuccessful resubmission (A1) application, applicants may submit the same idea as a new (A0) application for the next appropriate due date." See the full Notice online at: http://grants.nih.gov/grants/guide/notice-files/NOT-OD-14-074.html Dennis L. Glanzman, Ph.D. Chief, Theoretical and Computational Neuroscience Program National Institute of Mental Health, NIH, DHHS 6001 Executive Boulevard Room 7192; MSC 9637 Bethesda, MD 20892-9637 Telephone: (301) 443-6429 -------------- next part -------------- An HTML attachment was scrubbed... URL: From juergen at idsia.ch Thu Apr 17 11:40:31 2014 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Thu, 17 Apr 2014 17:40:31 +0200 Subject: Connectionists: Deep Learning Overview Draft Message-ID: Dear connectionists, here is the preliminary draft of an invited Deep Learning overview: http://www.idsia.ch/~juergen/DeepLearning17April2014.pdf Abstract. In recent years, deep neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects (a toy numerical sketch of such a chain appears at the end of this message). I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks. The draft mostly consists of references (about 600 entries so far). Many important citations are still missing though. As a machine learning researcher, I am obsessed with credit assignment. In case you know of references to add or correct, please send brief explanations and bibtex entries to juergen at idsia.ch (NOT to the entire list), preferably together with URL links to PDFs for verification. Please also do not hesitate to send me additional corrections / improvements / suggestions / Deep Learning success stories with feedforward and recurrent neural networks. I'll post a revised version later. Thanks a lot!
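To make "depth of credit assignment paths" concrete, here is a toy numerical sketch (an illustration in the spirit of the abstract, not code from the draft): a network with three learnable weight matrices trained by backpropagation. The forward pass is a chain of causal links from input to output, and the backward pass assigns credit to every link of that same chain; roughly, the length of this chain is the depth the survey's distinction refers to.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 1))   # input
t = np.array([[1.0]])         # target

# Three learnable links in the chain: x -> h1 -> h2 -> y
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(3, 3))
W3 = rng.normal(size=(1, 3))

for step in range(200):
    # Forward pass: each layer is one causal link.
    h1 = np.tanh(W1 @ x)
    h2 = np.tanh(W2 @ h1)
    y = W3 @ h2
    loss = 0.5 * float(((y - t) ** 2).sum())

    # Backward pass: credit flows back through the same chain,
    # link by link (the credit assignment path).
    dy = y - t
    dW3 = dy @ h2.T
    dh2 = (W3.T @ dy) * (1.0 - h2 ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW2 = dh2 @ h1.T
    dh1 = (W2.T @ dh2) * (1.0 - h1 ** 2)
    dW1 = dh1 @ x.T

    W1 -= 0.1 * dW1
    W2 -= 0.1 * dW2
    W3 -= 0.1 * dW3

print("final loss:", loss)   # shrinks as credit is assigned along the chain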
Juergen Schmidhuber http://www.idsia.ch/~juergen/ http://www.idsia.ch/~juergen/whatsnew.html From weng at cse.msu.edu Thu Apr 17 16:07:58 2014 From: weng at cse.msu.edu (Juyang Weng) Date: Thu, 17 Apr 2014 16:07:58 -0400 Subject: Connectionists: Deep Learning Overview Draft In-Reply-To: References: Message-ID: <5350349E.5040407@cse.msu.edu> Dear Juergen, Congratulations on the draft and 600+ references! Thank you very much for asking for references. Cresceptron generated heated debate then (e.g., Takeo Kanade's comments). Some people commented that Cresceptron started learning for computer vision from cluttered scenes. Of course, it had many problems then. To save you time, I have cut and pasted the major characterization of Cresceptron from my web page: 1991 (IJCNN 1992) - 1997 (IJCV): Cresceptron. - It appeared to be the first deep learning network that adapts its connection structure. - It appeared to be the first visual learning program for both detecting and recognizing general objects from cluttered complex natural backgrounds. - It also did segmentation, but in a separate top-down segmentation phase during which the network did not do recognition. - The number of neural planes dynamically and incrementally grew from interactive experience, but the number of layers (15 in the experiments) was determined by the image size. - All the internal network learning was fully automatic --- there was no need for manual intervention once the learning (development) had started. - It required pre-segmentation for teaching: a human outlined the object contours for supervised learning. This avoided learning the background. - Its internal features were automatically grouped through the last-layer motor supervision (class labels), but learning of internal features was entirely unsupervised. - It uses a local match-and-maximization paired-layer architecture which corresponds to logic-AND and logic-OR in multivalued logic (Tommy Poggio later used the term HMAX); a toy sketch of this pairing appears at the end of this message. - The intrinsic convolution mechanism of the network provided both shift invariance and distortion tolerance. (Later WWNs are better at learning location as one of the concepts.) - It is a cascade network: features in a layer are learned from features of the previous layer, but not earlier ones. (This cascade restriction was overcome by later WWNs.) - It was inspired by Neocognitron (K. Fukushima 1975), which was for recognition of individual characters against a uniform background. If you are so kind as to cite it, I guess that it probably belongs to your section 5.9 1991-: Deep Hierarchy of Recurrent NNs. If it does not fit your article, please accept my apology for wasting your time. Just my 2 cents' worth. :) Best regards, -John On 4/17/14 11:40 AM, Schmidhuber Juergen wrote: > Dear connectionists, > > here is the preliminary draft of an invited Deep Learning overview: > > http://www.idsia.ch/~juergen/DeepLearning17April2014.pdf > > Abstract. In recent years, deep neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.
> The draft mostly consists of references (about 600 entries so far). Many important citations are still missing though. As a machine learning researcher, I am obsessed with credit assignment. In case you know of references to add or correct, please send brief explanations and bibtex entries to juergen at idsia.ch (NOT to the entire list), preferably together with URL links to PDFs for verification. Please also do not hesitate to send me additional corrections / improvements / suggestions / Deep Learning success stories with feedforward and recurrent neural networks. I'll post a revised version later. > > Thanks a lot! > > Juergen Schmidhuber > http://www.idsia.ch/~juergen/ > http://www.idsia.ch/~juergen/whatsnew.html > > > -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL:
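As a toy sketch of the match-and-maximization pairing described above (an illustration of the general idea, not Cresceptron's actual code): the match layer scores a template at every image location and responds strongly only when all template pixels agree (a soft logic-AND), while the paired max layer keeps local maxima (a logic-OR over nearby positions), which is what buys shift and distortion tolerance.

import numpy as np

def match_layer(image, template):
    # Normalised correlation of a template at every location: the response
    # is high only when all template pixels agree (a soft logic-AND).
    th, tw = template.shape
    H, W = image.shape
    out = np.zeros((H - th + 1, W - tw + 1))
    tz = (template - template.mean()) / (template.std() + 1e-8)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            pz = (patch - patch.mean()) / (patch.std() + 1e-8)
            out[i, j] = (pz * tz).mean()
    return out

def max_layer(resp, k=2):
    # Local maximum over k-by-k neighbourhoods (a logic-OR): the feature
    # counts as present if it matched anywhere nearby (shift tolerance).
    H, W = resp.shape
    out = np.zeros((H // k, W // k))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = resp[i * k:(i + 1) * k, j * k:(j + 1) * k].max()
    return out

rng = np.random.default_rng(0)
img = rng.random((16, 16))
tmpl = img[5:8, 5:8].copy()               # plant a known 3x3 feature
resp = max_layer(match_layer(img, tmpl))
print("strongest response:", resp.max())  # close to 1.0 at the planted site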
From p.geurts at ulg.ac.be Thu Apr 17 17:20:01 2014 From: p.geurts at ulg.ac.be (Pierre Geurts) Date: Thu, 17 Apr 2014 23:20:01 +0200 Subject: Connectionists: CFP - MLSB14, the 8th Machine Learning in Systems Biology workshop Message-ID: Call for contributions MLSB14, the eighth International Workshop on Machine Learning in Systems Biology http://www.mlsb.cc Organized in conjunction with ECCB 2014 Strasbourg, France, September 6-7, 2014. Submission deadline: June 11, 2014 WORKSHOP DESCRIPTION Molecular biology and all the biomedical sciences are undergoing a true revolution as a result of the emergence and growing impact of a series of new disciplines/tools sharing the "-omics" suffix in their name. These include in particular genomics, transcriptomics, proteomics and metabolomics, devoted respectively to the examination of the entire systems of genes, transcripts, proteins and metabolites present in a given cell or tissue type. The availability of these new, highly effective tools for biological exploration is dramatically changing the way one performs research in at least two respects. First, the amount of available experimental data is not a limiting factor any more; on the contrary, there is a plethora of it. Given the research question, the challenge has shifted towards identifying the relevant pieces of information and making sense of them (a "data mining" issue). Second, rather than focus on components in isolation, we can now try to understand how biological systems behave as a result of the integration and interaction between the individual components that one can now monitor simultaneously (so-called "systems biology"). Taking advantage of this wealth of "omics" information has become a condition sine qua non for whoever aims to remain competitive in molecular biology and in the biomedical sciences in general. Machine learning naturally appears as one of the main drivers of progress in this context, where most of the targets of interest deal with complex structured objects: sequences, 2D and 3D structures or interaction networks. At the same time bioinformatics and systems biology have already induced significant new developments of general interest in machine learning, for example in the context of learning with structured data, graph inference, semi-supervised learning, system identification, and novel combinations of optimization and learning algorithms. MLSB14, the Eighth International Workshop on Machine Learning in Systems Biology, is a workshop of the ECCB 2014 conference. It aims to contribute to the cross-fertilization between research in machine learning methods and their applications to systems biology by bringing together method developers and experimentalists. We are soliciting submissions bringing forward methods for discovering complex structures (e.g. interaction networks, molecule structures) and methods supporting genome-wide data analysis. Please see the workshop website http://www.mlsb.cc for more details. SUBMISSION INSTRUCTIONS We invite you to submit an extended abstract of up to 4 pages in PDF format describing new or very recently published results. Submissions will be reviewed by the scientific programme committee. They will be selected for oral or poster presentation according to their originality and relevance to the workshop topics. KEY DATES Submission deadline: June 11, 2014 Author notification: July 15, 2014 Early registration deadline ECCB14: August 2, 2014 Workshop: September 6-7, 2014 CHAIRS Florence d'Alché-Buc (University of Evry, France) Pierre Geurts (University of Liege, Belgium) ORGANIZING COMMITTEE Florence d'Alché-Buc (University of Evry, France) Markus Heinonen (University of Evry, France) Pierre Geurts (University of Liege, Belgium) Vân Anh Huynh-Thu (University of Edinburgh, UK) Nizar Touleimat (Centre National de génotypage, CEA, Evry, France) CONTACT For further information, please contact chairsmlsb2014 at gmail.com From malin.sandstrom at incf.org Wed Apr 16 05:42:19 2014 From: malin.sandstrom at incf.org (Malin Sandström) Date: Wed, 16 Apr 2014 11:42:19 +0200 Subject: Connectionists: Applications open for INCF two-day course "Introduction to neuroinformatics" Message-ID: Dear all, Application is now open for our two-day course "Introduction to Neuroinformatics" in Leiden in August. The course aims to give researchers and advanced students an introduction to the field, with lectures given by carefully selected world experts. The content is suitable for both neuroscience and informatics backgrounds. Please help us spread the word to fellow researchers and students in your department! There is no course fee, and we have some travel support available. The application deadline is May 20th. More info: http://www.incf.org/community/events/incf-short-course-2014 Download course poster: http://www.incf.org/community/events/incf-short-course-2014/at_download/event_pdf In case of questions, please contact Mathew Abrams on course-admin at incf.org Best regards, Malin -- Malin Sandström, PhD Community Engagement Officer malin.sandstrom at incf.org International Neuroinformatics Coordinating Facility Karolinska Institutet Nobels väg 15 A SE-171 77 Stockholm Sweden http://www.incf.org -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mindscience at gmail.com Fri Apr 18 22:19:20 2014 From: mindscience at gmail.com (Arjun Bansal) Date: Fri, 18 Apr 2014 19:19:20 -0700 Subject: Connectionists: Job listing: Algorithms Research Engineers (San Diego, CA & Mountain View, CA) Message-ID: Nervana Systems (http://nervanasys.com) -- San Diego, CA & Mountain View, CA We are a startup company with a great team, and top investors & advisors (Bruno Olshausen and Jan Rabaey) looking to build out our Algorithms team. If you love deep learning and want to be part of speeding up the algorithms through custom hardware, there is no better opportunity and no better location(s) to do it in (the San Diego location is just 15 minutes from the hottest surf spots!). Here are the skills/experience we are looking for: * Master's (or PhD, preferred) in computer science, electrical engineering or related fields (statistics, applied math, computational neuroscience) * Background in deep learning, applied machine learning & big data * Familiarity with details of implementing algorithms on multi-core CPUs, clusters (MPI), GPUs, heterogeneous clusters, distributed frameworks (e.g. GraphLab, Spark, Hadoop) * Experience designing professional software using C++, Python Bonus: * Experience with participating in and winning Kaggle competitions * Experience with hardware design (FPGAs, ASIC, SystemC, EDA tools) We are hiring for San Diego and Bay Area locations. We are early stage so generous equity, benefits, and competitive salaries are available for the right candidates. We can also sponsor employment visas as needed. Our work culture is family friendly and flexible. Please email your resume to jobs at nervanasys.com. http://nervanasys.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From smart at neuralcorrelate.com Sat Apr 19 22:25:44 2014 From: smart at neuralcorrelate.com (Susana Martinez-Conde) Date: Sat, 19 Apr 2014 19:25:44 -0700 Subject: Connectionists: TOP 10 illusions of 2014 Message-ID: <000001cf5c3f$d5d27bb0$81777310$@neuralcorrelate.com> The Best Illusion of the Year Contest is happy to announce the TOP TEN illusions of 2014! The 2014 Contest Gala will be on Monday, May 18th, in St. Petersburg, Florida, at the TradeWinds Island Resorts (http://www.tradewindsresort.com; headquarters of the Vision Sciences Society conference). The TOP TEN illusions will be shown for the first time at the Gala, and posted on the Best Illusion of the Year Contest's website (http://illusionoftheyear.com) immediately afterwards. Who will the TOP THREE winners be? That's up to you! The live audience will choose them from the current TOP TEN list. 2014 TOP TEN ILLUSION CONTESTANTS (alphabetical order): The Disappearing Faces Illusion, by Stuart Anstis (University of California, San Diego) Dynamic Illusory Size Contrast, by Gideon P. Caplovitz, Christopher D. Blair, and Ryan E.B. Mruczek (University of Nevada Reno) Infinite Maze, by Sebastiaan Mathôt and Theo Danes (CNRS, Aix-Marseille Université) Pigeon-Neck Illusion, by Jun Ono, Akiyasu Tomoeda, and Kokichi Sugihara (Meiji University, JST, CREST) Pure False Pop Out, False Heterogeneity, and Hidden Singletons, by Kimberley D. Orsten and James R. Pomerantz (Rice University, Houston, TX) Integration of elemental motion signals, by Arthur G. Shapiro & Oliver Flynn (American University) Autokinetic Illusion, by Gianni A.
Sarcone (Archimedes' Laboratory (TM) Project) Age is all in your head, by Victoria Skye Rotating McThatcher: Face and Mouth Orientation Influence Audiovisual Speech Perception, by James W. Dias & Lawrence D. Rosenblum (University of California, Riverside) Flexible colors, by Mark Vergeer, Stuart Anstis, and Rob van Lier (University of Leuven, UC San Diego, Radboud University Nijmegen) On behalf of the Neural Correlate Society, Susana Martinez-Conde (Executive Producer, Best Illusion of the Year Contest) Neural Correlate Society Executive Committee: Jose-Manuel Alonso, Stephen Macknik, Susana Martinez-Conde, Luis Martinez, Xoana Troncoso, Peter Tse The Neural Correlate Society is a tax-exempt 501(c)3 non-profit organization, whose mission is to promote the public awareness of neuroscience research. ---------------------------------------------------------------------- Susana Martinez-Conde, PhD Director, Laboratory of Visual Neuroscience Barrow Neurological Institute 350 W. Thomas Rd. Phoenix AZ 85013 USA Phone: +1 602 406-3484 Fax: +1 602 406-4192 Email: smart at neuralcorrelate.com http://smc.neuralcorrelate.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From danko.nikolic at googlemail.com Sun Apr 20 01:47:53 2014 From: danko.nikolic at googlemail.com (Danko Nikolic) Date: Sun, 20 Apr 2014 07:47:53 +0200 Subject: Connectionists: practopoiesis In-Reply-To: <53529B29.4040502@cse.msu.edu> References: <5351970C.1060008@cse.msu.edu> <535222C7.2070401@gmail.com> <53529B29.4040502@cse.msu.edu> Message-ID: <53535F89.2070807@gmail.com> Dear John, Yes, this is correct, practopoiesis is about physical and mental development. However, it is equally about behavior and cognition. And, maybe surprisingly, it also includes evolution by natural selection. In fact, one of the most fundamental theoretical advantages behind practopoiesis is that the same organizational principles can be used to explain all of these different aspects of biological systems and, more importantly, the relationships between them. For example, when one consciously thinks and comes up with new decisions on what to do next, the process is in principle not different from development of the nervous system and coming up with new anatomical structures. It is just that one process takes place faster than the other, and that one is at a higher level of organization than the other. Apart from that, the underlying cybernetic principles are the same. Maturana and Varela would say something like: "Thinking is living, living is thinking". Regards, Danko On 4/19/14 5:50 PM, Juyang Weng wrote: > Dear Danko, > > I guess that practopoiesis is about physical and mental development > through a biological life. Am I wrong? > > Best, > > -John > -- Prof. Dr. Danko Nikolić Web: http://www.danko-nikolic.com Mail address 1: Department of Neurophysiology Max Planck Institut for Brain Research Deutschordenstr. 46 60528 Frankfurt am Main GERMANY Mail address 2: Frankfurt Institute for Advanced Studies Wolfgang Goethe University Ruth-Moufang-Str.
1 60433 Frankfurt am Main GERMANY ---------------------------- Office: (..49-69) 96769-736 Lab: (..49-69) 96769-209 Fax: (..49-69) 96769-327 danko.nikolic at gmail.com ---------------------------- From risi at uchicago.edu Mon Apr 21 02:40:57 2014 From: risi at uchicago.edu (Risi Kondor) Date: Mon, 21 Apr 2014 01:40:57 -0500 Subject: Connectionists: IMA Summer School on Modern Applications of Representation Theory Message-ID: <631BD053-704E-4C48-8446-800615FFA911@uchicago.edu> IMA GRADUATE SUMMER SCHOOL ON MODERN APPLICATIONS OF REPRESENTATION THEORY The University of Chicago, July 20 - August 6, 2014 Application deadline: April 30, 2014 http://www.stat.uchicago.edu/mart/index.html Organizers: Risi Kondor (UChicago), Lek-Heng Lim (UChicago), Jason Morton (Penn State) Confirmed speakers: Shamgar Gurevich (UW Madison), Ronny Hadani (UT Austin), Risi Kondor (UChicago), Joseph Landsberg (Texas A&M), Lek-Heng Lim (UChicago), Cristopher Moore (Santa Fe Institute), Jason Morton (Penn State), Amit Singer (Princeton), Chris Umans (Caltech). While traditionally considered a branch of pure mathematics, the representation theory of groups has recently proved itself to be a powerful tool in a variety of applied fields, such as machine learning, cryo-electron imaging, digital signal processing, holographic algorithms and quantum computing, and algebraic and geometric computational complexity theory. The goal of the summer school is to build an interdisciplinary research community across mathematics, computer science, statistics, and other related fields. The program will consist of a sequence of introductory lectures intended to bring all participants up to speed on the fundamentals of representation theory, followed by a wide range of topical lectures presented by leading researchers in their respective fields. The summer school is primarily intended for students from IMA Participating Institutions, but a limited number of places are available for graduate students and postdocs from other institutions. Accommodation and limited travel funding will be provided. Please note that the application deadline is April 30. From chiestand at salk.edu Mon Apr 21 21:15:58 2014 From: chiestand at salk.edu (Chris Hiestand) Date: Mon, 21 Apr 2014 18:15:58 -0700 Subject: Connectionists: NIPS 2014 Call for Papers Message-ID: <8EB93DB4-482C-4AF2-8D76-1163E4E78CD6@salk.edu> Neural Information Processing Systems Conference and Workshops December 8-13, 2014 Montreal Convention Center, Montreal, Canada http://nips.cc/Conferences/2014/ Deadline for Paper Submissions: Friday, June 6, 2014, 11 pm Universal Time (4 pm Pacific Daylight Time). Submit at: https://cmt.research.microsoft.com/NIPS2014/ Submissions are solicited for the Twenty-Eighth Annual Conference on Neural Information Processing Systems, an interdisciplinary conference that brings together researchers in all aspects of neural and statistical information processing and computation, and their applications. The conference is a highly selective, single-track meeting that includes oral and poster presentations of refereed papers as well as invited talks. The 2014 conference will be held on December 8-11 at Montreal Convention Center, Montreal, Canada. One day of tutorials (December 8) will precede the main conference, and two days of workshops (December 12-13) will follow it at the same location. Submission process: Electronic submissions will be accepted until Friday, June 6, 2014, 11 pm Universal Time (4 pm Pacific Daylight Time).
As was the case last year, final papers will be due in advance of the conference. However, minor changes such as typos and additional references will still be allowed for a certain period after the conference. Reviewing: As in previous years, reviewing will be double-blind: the reviewers will not know the identities of the authors. However, unlike in previous years, anonymous reviews and meta-reviews of accepted papers will be made public after the end of the review process. Evaluation Criteria: Submissions will be refereed on the basis of technical quality, novelty, potential impact, and clarity. Dual Submissions Policy: Submissions that are identical (or substantially similar) to versions that have been previously published, or accepted for publication, or that have been submitted in parallel to other conferences are not appropriate for NIPS and violate our dual submission policy. Exceptions to this rule are the following: Submission is permitted of a short version of a paper that has been submitted to a journal, but has not yet been published in that journal. Authors must declare such dual-submissions either through the CMT submission form, or via email to the program chairs at program-chairs at nips.cc. It is the authors' responsibility to make sure that the journal in question allows dual concurrent submissions to conferences. Submission is permitted for papers presented or to be presented at conferences or workshops without proceedings, or with only abstracts published. Previously published papers with substantial overlap written by the authors must be cited so as to preserve author anonymity (e.g. "the authors of [1] prove that ..."). Differences relative to these earlier papers must be explained in the text of the submission. It is acceptable to submit to NIPS 2014 work that has been made available as a technical report (or similar, e.g. in arXiv) without citing it. While this could compromise the authors' anonymity, reviewers will be asked to refrain from actively searching for the authors' identity, or to disclose to the area chairs if their identity is known to them. The dual-submission rules apply during the NIPS review period, which begins June 6 and ends September 10, 2014. Submission Instructions: All submissions will be made electronically, in PDF format. Papers are limited to eight pages, including figures and tables, in the NIPS style. An additional ninth page containing only cited references is allowed. Please refer to the complete submission and formatting instructions: http://nips.cc/Conferences/2014/PaperInformation/AuthorSubmissionInstructions and to the style files: http://nips.cc/Conferences/2014/PaperInformation/StyleFiles for further details. Supplementary Material: Authors can submit up to 10 MB of material, containing proofs, audio, images, video, data or source code. Note that the reviewers and the program committee reserve the right to judge the paper solely on the basis of the 9 pages of the paper; looking at any extra material is up to the discretion of the reviewers and is not required. Technical Areas: Papers are solicited in all areas of neural information processing and statistical learning, including, but not limited to: * Algorithms and Architectures: statistical learning algorithms, kernel methods, graphical models, Gaussian processes, Bayesian methods, neural networks, deep learning, dimensionality reduction and manifold learning, model selection, combinatorial optimization, relational and structured learning.
* Applications: innovative applications that use machine learning, including systems for time series prediction, bioinformatics, systems biology, text/web analysis, multimedia processing, and robotics. * Brain Imaging: neuroimaging, cognitive neuroscience, EEG (electroencephalogram), ERP (event related potentials), MEG (magnetoencephalogram), fMRI (functional magnetic resonance imaging), brain mapping, brain segmentation, brain computer interfaces. * Cognitive Science and Artificial Intelligence: theoretical, computational, or experimental studies of perception, psychophysics, human or animal learning, memory, reasoning, problem solving, natural language processing, and neuropsychology. * Control and Reinforcement Learning: decision and control, exploration, planning, navigation, Markov decision processes, game playing, multi-agent coordination, computational models of classical and operant conditioning. * Hardware Technologies: analog and digital VLSI, neuromorphic engineering, computational sensors and actuators, microrobotics, bioMEMS, neural prostheses, photonics, molecular and quantum computing. * Learning Theory: generalization, regularization and model selection, Bayesian learning, spaces of functions and kernels, statistical physics of learning, online learning and competitive analysis, hardness of learning and approximations, statistical theory, large deviations and asymptotic analysis, information theory. * Neuroscience: theoretical and experimental studies of processing and transmission of information in biological neurons and networks, including spike train generation, synaptic modulation, plasticity and adaptation. * Speech and Signal Processing: recognition, coding, synthesis, denoising, segmentation, source separation, auditory perception, psychoacoustics, dynamical systems, recurrent networks, language models, dynamic and temporal models. * Visual Processing: biological and machine vision, image processing and coding, segmentation, object detection and recognition, motion detection and tracking, visual psychophysics, visual scene analysis and interpretation. Demonstrations and Workshops: There is a separate Demonstration track at NIPS. Authors wishing to submit to the Demonstration track should consult the upcoming Call for Demonstrations. The workshops will be held at the Montreal Convention Center December 12-13. The upcoming call for workshop proposals will provide details. Web version: http://nips.cc/Conferences/2014/CallForPapers From naotsu at gmail.com Tue Apr 22 03:37:15 2014 From: naotsu at gmail.com (Naotsugu Tsuchiya) Date: Tue, 22 Apr 2014 17:37:15 +1000 Subject: Connectionists: Endeavour Scholarships and Fellowships Message-ID: Dear all, The Australian government is offering Endeavour Scholarships and Fellowships https://aei.gov.au/scholarships-and-fellowships/international-applicants/pages/international-applicants.aspx to competitive international students and postdocs who wish to visit and conduct short-term research in Australia. Please see the web site if you are interested. In my lab, we are looking for students/postdocs who are proficient in Matlab (and preferably signal processing and/or information theory as well) and interested in 1) application of the integrated information theory of consciousness to real neuronal data, 2) analysis of intracranial recording data (using decoding, cross-frequency coupling, etc.) to understand the neuronal basis of consciousness; a minimal sketch of one such coupling measure appears below. Please send me your CV if you are interested. Endeavour Awards will be closing 30 June 2014.
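For readers unfamiliar with the term, here is a minimal sketch of one common cross-frequency coupling measure: phase-amplitude coupling, summarised by a mean-vector-length modulation index (in the spirit of Canolty et al., 2006). The frequency bands, filter settings and synthetic test signal are illustrative assumptions, not a description of the lab's actual pipeline.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 10, 1 / fs)

# Synthetic signal: 60 Hz amplitude modulated by 6 Hz phase, plus noise.
theta = np.sin(2 * np.pi * 6 * t)
sig = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 60 * t)
sig = sig + 0.5 * np.random.default_rng(0).normal(size=t.size)

def bandpass(x, lo, hi):
    b, a = butter(4, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(sig, 4, 8)))   # low-frequency phase
amp = np.abs(hilbert(bandpass(sig, 50, 70)))     # high-frequency amplitude

# Mean vector length: large when amplitude depends systematically on phase.
mvl = np.abs(np.mean(amp * np.exp(1j * phase)))
print("modulation index:", mvl)

In practice one would assess the significance of such an index against phase-shuffled surrogate data.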
Regards Nao Tsuchiya -- ------------------------------------------------------------------------- Nao (Naotsugu) Tsuchiya, Ph.D. 1. Associate Professor School of Psychological Sciences Faculty of Medicine, Nursing and Health Sciences, Monash University 2. ARC Future Fellow homepage: http://users.monash.edu.au/~naotsugt/Tsuchiya_Labs_Homepage/Main.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From gunnar.blohm at gmail.com Tue Apr 22 11:35:17 2014 From: gunnar.blohm at gmail.com (Gunnar Blohm) Date: Tue, 22 Apr 2014 11:35:17 -0400 Subject: Connectionists: CoSMo 2014: application deadline approaching! Message-ID: <53568C35.5090103@queensu.ca> Just a quick reminder that the application deadline for the summer school on Computational Sensory-Motor Neuroscience (CoSMo 2014) is approaching fast: Apr 28, 2014!!! http://www.compneurosci.com/CoSMo/ Best Gunnar Blohm, Konrad Körding, Paul Schrater -- ------------------------------------------------------- Dr. Gunnar BLOHM Assistant Professor in Computational Neuroscience Centre for Neuroscience Studies, Departments of Biomedical and Molecular Sciences, Mathematics & Statistics, and Psychology, School of Computing, and Canadian Action and Perception Network (CAPnet) Queen's University 18, Stuart Street Kingston, Ontario, Canada, K7L 3N6 Tel: (613) 533-3385 Fax: (613) 533-6840 Email: Gunnar.Blohm at QueensU.ca Web: http://www.compneurosci.com/ From fh at cs.uni-freiburg.de Tue Apr 22 02:17:22 2014 From: fh at cs.uni-freiburg.de (Frank Hutter) Date: Tue, 22 Apr 2014 08:17:22 +0200 Subject: Connectionists: Final CFP: AutoML Workshop @ ICML 2014 (deadline April 25) Message-ID: FINAL CALL FOR CONTRIBUTIONS The AutoML Workshop @ ICML 2014 Beijing, China, June 25/26, 2014 Web: http://icml2014.automl.org Email: icml2014 at automl.org ---------------------------------------------------------------- Important Dates: - Submission deadline: Friday 25 April, 2014 - Notification of acceptance: Friday 16 May, 2014 ---------------------------------------------------------------- Workshop Overview: Machine learning has achieved considerable success, but this success crucially relies on human machine learning experts to select appropriate features, workflows, ML paradigms, algorithms, and algorithm hyperparameters. Because the complexity of these tasks is often beyond non-experts, the rapid growth of machine learning applications has created a demand for machine learning methods that can be used easily and without expert knowledge. We call the resulting research area that targets progressive automation of machine learning AutoML. AutoML aims to automate many different stages of the machine learning process. Relevant topics include: - Model selection, hyper-parameter optimization, and model search - Representation learning and automatic feature extraction / construction - Reusable workflows and automatic generation of workflows - Meta learning and transfer learning - Automatic problem "ingestion" (from raw data and miscellaneous formats) - Feature coding/transformation to match requirements of different learning algorithms - Automatically detecting and handling skewed data and/or missing values - Automatic leakage detection - Matching problems to methods/algorithms (beyond regression and classification) - Automatic acquisition of new data (active learning, experimental design) - Automatic report writing (providing insight from the automatic data analysis) - User interfaces for AutoML (e.g., "Turbo Tax for Machine Learning")
- Automatic inference and differentiation - Automatic selection of evaluation metrics - Automatic creation of appropriately sized and stratified train, validation, and test sets - Parameterless, robust algorithms - Automatic algorithm selection to satisfy time/space constraints at train- or run-time - Run-time wrappers to detect data shift and other causes of prediction failure We encourage contributions in any of these areas. We welcome 2-page short-form submissions and 6-page long-form submissions (in either case plus references). Submissions should be formatted using the JMLR Workshop and Proceedings format (an example LaTeX file is available on the workshop website icml2014.automl.org). We also encourage submissions of previously-published material that is closely related to the workshop topic (for presentation only). Confirmed invited speakers: - Dan Roth: Language designed for novice ML developers - Holger Hoos: Programming by Optimization - Yoshua Bengio: Representation learning - Jasper Snoek: Hyper-parameter optimization - Vikash Mansinghka: Probabilistic programming Advisory Committee: James Bergstra, Nando de Freitas, Roman Garnett, Matt Hoffman, Michael Osborne, Alice Zheng Organizers: Frank Hutter, Rich Caruana, Rémi Bardenet, Misha Bilenko, Isabelle Guyon, Balázs Kégl, and Hugo Larochelle -------------- next part -------------- An HTML attachment was scrubbed... URL: From likforman at telecom-paristech.fr Tue Apr 22 05:27:47 2014 From: likforman at telecom-paristech.fr (Laurence Likforman) Date: Tue, 22 Apr 2014 11:27:47 +0200 Subject: Connectionists: Reminder: Montreal confs: ANNPR 2014 (Oct) & C3S2E 2014 (Aug.) In-Reply-To: <53561D98.2040508@telecom-paristech.fr> References: <53561D98.2040508@telecom-paristech.fr> Message-ID: <53563613.3060908@telecom-paristech.fr> 2 international workshops/conferences in Montreal coming soon at Concordia University! ------- ANNPR 2014 Int. Workshop on Artificial Neural Networks and Pattern Recognition, deadline: April 28: http://www.annpr2014.com/ ------- C3S2E 2014: International C* Conference on Computer Science & Software Engineering Please note that the deadline has been extended to May 5, 2014. Welcome to submit papers for a special session on Pattern Recognition: http://confsys.encs.concordia.ca/c3s2e/c3s2e14/c3s2e14.php From fh at cs.uni-freiburg.de Tue Apr 22 02:44:21 2014 From: fh at cs.uni-freiburg.de (Frank Hutter) Date: Tue, 22 Apr 2014 08:44:21 +0200 Subject: Connectionists: PhD & postdoc positions in AutoML and automated algorithm design Message-ID: Several PhD & postdoc positions are available in the research group on machine learning, optimization, and automated algorithm design at the University of Freiburg, Germany. Research projects focus on two areas: - AutoML, which aims to automate many of the tasks that have traditionally been left to machine learning experts, such as hyperparameter optimization, model search, and architecture selection in deep learning (see automl.org); and - Automated algorithm design in other areas, in particular for solving NP-hard problems. The salary scale for full-time positions is TV-L E13 (with a monthly gross salary between 3370 EUR and 4860 EUR, depending on experience and previous position). Applicants should have an excellent first academic degree in artificial intelligence, machine learning, computer science, statistics or a related discipline.
Demonstrated general knowledge in machine learning is a requirement, and experience in one or more of the following areas is an advantage: - Algorithm configuration and selection - Bayesian optimization and stochastic optimization - Discrete optimization and NP-hard problem solving - Hyperparameter optimization - Meta learning and transfer learning - Active learning and experimental design - Deep learning - Gaussian processes Application deadline: June 11, 2014. Please see the full job posting at http://aad.informatik.uni-freiburg.de/positions.html for details. Thanks, Frank Hutter -------------- next part -------------- An HTML attachment was scrubbed... URL: From ted.carnevale at yale.edu Tue Apr 22 14:48:49 2014 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Tue, 22 Apr 2014 14:48:49 -0400 Subject: Connectionists: NEURON Summer Course deadline approaching Message-ID: <5356B991.5030306@yale.edu> The early registration deadline for the NEURON Summer Course is Friday, May 9, less than three weeks from today. Applicants who sign up by that date are eligible for reduced registration fees. NEURON Fundamentals June 21-24 This course shows how to model individual neurons and networks of neurons, and introduces parallel simulation. The early registration fee is $1050, but after May 9 registration goes up to $1200. Parallel Simulation with NEURON June 25-26 This is for users who are already familiar with NEURON and now need to parallelize an existing model or create a new model that will run on parallel hardware. The early registration fee is $650, but after May 9 it goes up to $750. Sign up for both courses by May 9 and pay only $1600, or wait until May 10 or later and pay $1800. Registration is limited, and the final registration deadline is Friday, May 30, 2014. For more information about these courses see http://www.neuron.yale.edu/neuron/courses or contact Ted Carnevale ted dot carnevale at yale edu 203-494-7381 From chicoisne.guillaume at uqam.ca Tue Apr 22 15:49:03 2014 From: chicoisne.guillaume at uqam.ca (Chicoisne, Guillaume) Date: Tue, 22 Apr 2014 19:49:03 +0000 Subject: Connectionists: Also: Web Science and the Mind - Re: Reminder: Montreal confs: ANNPR 2014 (Oct) & C3S2E 2014 (Aug.) Message-ID: In Montréal too (July), in another university (UQAM): Summer School in Cognitive Science 2014 - Web Science and the Mind. This summer school will present a comprehensive overview of the interactions between the web and cognitive sciences, with topics ranging from social network analysis to distributed cognition and the semantic web. http://www.summer14.isc.uqam.ca/ Also in Quebec City this summer: - The 36th annual meeting of the Cognitive Science Society http://cognitivesciencesociety.org/conference2014/index.html - The 28th AAAI Conference on Artificial Intelligence http://www.aaai.org/Conferences/AAAI/aaai14.php - The seventh Conference on Artificial General Intelligence http://www.agi-society.org/ Busy summer in Quebec! Guillaume Chicoisne, PhD Coordinator, Institut des Sciences Cognitives UQAM, Montreal, Qc., Canada On 14-04-22 05:27, "Laurence Likforman" wrote: >2 international workshops/conferences in Montreal coming soon >at Concordia University! > >------- >ANNPR 2014 > >Int. Workshop on Artificial Neural Networks and Pattern Recognition, >deadline: April 28: > >http://www.annpr2014.com/ > >------- > >C3S2E 2014: > >International C* Conference on Computer Science & Software Engineering > > Please note that the deadline has been extended to May 5, 2014.
>Welcome to submit papers for a special session on Pattern Recognition: > >http://confsys.encs.concordia.ca/c3s2e/c3s2e14/c3s2e14.php > > > > > > > > > > > From Colin.Wise at uts.edu.au Tue Apr 22 22:01:09 2014 From: Colin.Wise at uts.edu.au (Colin Wise) Date: Wed, 23 Apr 2014 12:01:09 +1000 Subject: Connectionists: AAI Short Course - 'Advanced Data Analytics - an Introduction' - Wednesday 7 May 2014 Message-ID: <8112393AA53A9B4A9BDDA6421F26C68A0173A4934F19@MAILBOXCLUSTER.adsroot.uts.edu.au> Dear Colleague, AAI Short Course - 'Advanced Data Analytics - an Introduction' - Wednesday 7 May 2014 https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1571 The AAI short course 'Advanced Data Analytics' may well be of interest to you, your organisation and your key personnel. This introductory short course will provide an early and rewarding understanding of the level of analytics your organisation and your people should be seeking. Course outcomes Upon completion of this course students will: * Understand why advanced data analytics is essential to your business success * Understand the key terms and concepts used in advanced data analytics * Understand the relations between big data, cloud computing and analytics * Be familiar with basic statistical skills for data analytics, including descriptive analysis, regression and multivariate data analysis * Learn the basics of data mining, data warehousing, visualization and reporting, such as supervised vs unsupervised methods, clustering, association rules and frequent pattern mining * Know key techniques in machine learning, such as parametric and non-parametric models, learning and inference, maximum-likelihood estimation, and Bayesian approaches * Be introduced to social media analytics, multimedia analytics, and real projects and case studies conducted at AAI Future short courses on Data Analytics and Big Data may be viewed at http://www.uts.edu.au/research-and-teaching/our-research/advanced-analytics-institute/short-courses/upcoming-courses Recommended first introductory public short course to attend in the series of advanced data analytics short courses - please register here https://shortcourses-bookings.uts.edu.au/ClientView/Schedules/ScheduleDetail.aspx?ScheduleID=1571 Happy to discuss at your convenience. Regards. Colin Wise Operations Manager Faculty of Engineering & IT The Advanced Analytics Institute University of Technology, Sydney Blackfriars Campus Building 2, Level 1 Tel. +61 2 9514 9267 M. 0448 916 589 Email: Colin.Wise at uts.edu.au AAI: www.analytics.uts.edu.au/ AAI Email Policy - should you wish to not receive these communications on Data Analytics Learning please reply to our email (sender) with UNSUBSCRIBE in the Subject. We will delete you from our database. Thank you for your patience and consideration. UTS CRICOS Provider Code: 00099F DISCLAIMER: This email message and any accompanying attachments may contain confidential information. If you are not the intended recipient, do not read, use, disseminate, distribute or copy this message or attachments. If you have received this message in error, please notify the sender immediately and delete this message. Any views expressed in this message are those of the individual sender, except where the sender expressly, and with authority, states them to be the views of the University of Technology Sydney.
Before opening any attachments, please check them for viruses and defects.

Think. Green. Do. Please consider the environment before printing this email.

From smthampi at ieee.org Sun Apr 20 23:53:15 2014
From: smthampi at ieee.org (Dr. Sabu M Thampi)
Date: Mon, 21 Apr 2014 09:23:15 +0530
Subject: Connectionists: Reminder - Third Intl. Symposium on Intelligent Informatics (ISI'14) - Submission Deadline - May 02, 2014
Message-ID:

-------------------------------------------------------------------------
*** Our apologies if you receive multiple copies of this CFP ***
-------------------------------------------------------------------------

Third International Symposium on Intelligent Informatics (ISI'14)
September 24-27, 2014, Delhi, India
http://icacci-conference.org/isi2014/
Submission Deadline: May 02, 2014
-------------------------------------------------------------------------

The Third International Symposium on Intelligent Informatics (ISI'14) aims to bring together researchers and practitioners from around the world to discuss the state of the art of the theory and applications of Intelligent Informatics. ISI'14 invites original and unpublished work from individuals active in the broad theme of the Symposium. ISI-2014 has TWO tracks: 'Advances in Intelligent Informatics' and a special track on 'Intelligent Distributed Computing'. Please refer to the submission page http://icacci-conference.org/isi2014/cfp.html for more details.

All accepted papers will be published as TWO special volumes (Volume 1: Advances in Intelligent Informatics & Volume 2: Intelligent Distributed Computing) in the prestigious Advances in Intelligent and Soft Computing (Springer) series, indexed by ISI Proceedings, DBLP, Ulrich's, EI-Compendex, SCOPUS, Zentralblatt Math, MetaPress, SpringerLink, etc.
Topics of interest include but not limited to: Main Track: Advances in Intelligent Informatics ----------------------------------------------- Artificial Immune Systems Autonomous Agents and Multi-Agent Systems Bayesian Networks and Probabilistic Reasoning Biologically Inspired Intelligence, Brain-Computer Interfacing Chaos, Fractals, Rough Sets Clustering and Data Analysis Complex Systems and Applications Computational Intelligence and Soft Computing Distributed Intelligent Systems Database Management and Information Retrieval Evolutionary Computation, Expert Systems Fusion of Neural Networks and Fuzzy Systems Green and Renewable Energy Systems Human Interface, Human Information Processing Hybrid and Distributed Algorithms High Performance Computing Image and Speech Signal Processing Knowledge Based Systems, Knowledge Networks Machine Learning, Reinforcement Learning Memetic Computing Multimedia and Applications Networked Control Systems Neural Networks and Applications Optimization and Decision Making Pattern Classification and Recognition Robotic Intelligence, Business Intelligence Swarm Intelligence, Ant Colonies Robustness Analysis, Wavelet Analysis Self-Organizing Systems, Stochastic systems Social Intelligence, Web Intelligence Soft computing in P2P, Grid, Cloud and Internet Computing Technologies Support Vector Machines Virtual Reality in Engineering Applications Special Track: Intelligent Distributed Computing ------------------------------------------------ Agent-Based Wireless Sensor Networks Autonomic and Adaptive Distributed Computing Context-Aware Intelligent Computing Data Mining and Knowledge Discovery in Distributed Environments Distributed Frameworks and Middleware for the Internet of Things Distributed Problem Solving and Decision Making Emerging Behaviors in Complex Distributed Systems Information Extraction and Retrieval in Distributed Environments Intelligence in Cooperative Information Systems and Social Networks Intelligence in Distributed Multimedia Systems Intelligence in Mobile, Ubiquitous and Pervasive Computing Intelligence in Peer-To-Peer Systems Intelligent Cloud Infrastructures Intelligent Distributed Applications in E-Commerce, E-Health, E-Government Intelligent Distributed Problem Solving and Decision Making Intelligent High-Performance Architectures Intelligent Integration of Data and Processes Intelligent Service Composition and Orchestration Intelligent Service-Oriented Distributed Systems Knowledge Integration and Fusion from Distributed Sources Modelling and Simulation of Intelligent Distributed Systems Multi-Agent Approaches to Distributed Computing Security Informatics Self-Organising and Adaptive Distributed Systems Semantic and Knowledge Grids Virtualization Infrastructures for Intelligent Computing Web Intelligence and Big Data KEY DATES --------- Full Paper Submission Ends: May 02, 2014 Acceptance Notification:June 06, 2014 Final Paper Deadline: June 30, 2014 Author Registration Closes: July 02, 2014 TECHNICAL PROGRAM COMMITTEE General Chair -------------- Kuan-Ching Li, Providence University, Taiwan Program Chairs ----------------- El-Sayed M. El-Alfy, King Fahd University of Petroleum and Minerals, Saudi Arabia Selwyn Piramuthu,University of Florida, USA Thomas Hanne, University of Applied Sciences, Switzerland TPC Members ----------- A B MMoniruzzaman, Daffodil International University, Bangladesh A. F. M. 
SajidulQadir, Samsung R&D Institute-Bangladesh, Bangladesh AbdelmajidKhelil, Huawei European Research Center, Germany Aboul Ella Hassanien, University of Cairo, Egypt Adel Alimi, University of Sfax, Tunisia AfshinShaabany, University of Fasa, Iran AgostinoBruzzone, University of Genoa, Italy Ajay Jangra, KUK University, Kurukshetra, Haryana, India Ajay Singh, Multimedia University, Malaysia Akhil Gupta, Jaypee University of Information Technology, India Akihiro Fujihara, KwanseiGakuin University, Japan Alex James, Nazarbayev University, Kazakhstan Ali Yavari, KTH Royal Institute of Technology, Sweden AmitGautam, SP College of Engineering, India Amudha J, Amrita VishwaVidyapeetham, India Anca Daniela Ionita, University Politehnica of Bucharest, Romania Angelo Trotta, University of Bologna, Italy AngelosMichalas, Technological Education Institute of Western Macedonia, Greece AnirbanKundu, Kuang-Chi Institute of Advanced Technology, P.R. China AnjanaGosain, Indraprastha University, India Antonio LaTorre, Universidad Polit?cnica de Madrid, Spain ArpanKar, Indian Institute of Management, Rohtak, India Ash Mohammad Abbas, Aligarh Muslim University, India AthanasiosPantelous, University of Liverpool, United Kingdom Atsushi Takeda, Tohoku Gakuin University, Japan AtulNegi, University of Hyderabad, India AzianAzamimi Abdullah, Universiti Malaysia Perlis, Malaysia B H Shekar, Mangalore University, India BelalAbuhaija, University of Tabuk, Saudi Arabia BhushanTrivedi, GLS Institute Of Computer Technology, India Bilal Gonen, University of West Florida, USA Bilal Khan, University of Sussex, United Kingdom Chia-Hung Lai, National Cheng Kung University, Taiwan Chia-Pang Chen, National Taiwan University, Taiwan Chien-Fu Cheng, Tamkang University, Taiwan Chiranjib Sur, ABV-Indian Institute of Information Technology & Management, India Chunming Liu, T-Mobile USA, USA CiprianDobre, University Politehnica of Bucharest, Romania ConstandinosMavromoustakis, University of Nicosia, Cyprus Daniela Castelluccia, University of Bari, Italy DeeptiMehrotra, Amity School of Computer Sciences, India Demetrios Sampson, University of Piraeus, Greece Dennis Kergl, Universit?t der BundeswehrMunchen, Germany DimitriosStratogiannis, National Technical University of Athens, Greece Emilio Jim?nez Mac?as, University of La Rioja, Spain EvgenyKhorov, IITP RAS, Russia Farrah Wong, Universiti Malaysia Sabah, Malaysia FikretSivrikaya, Technische Universitat Berlin, Germany G Thakur, MANIT Bhopal, India G. Rajchakit, Maejo University, Thailand GanchoVachkov, The University of the South Pacific (USP), Fiji Gregorio Romero, Universidad Politecnica de Madrid, Spain Gwo-JiunHorng, Fortune Institute of Technology, Taiwan HabibKammoun, University of Sfax, Tunisia HabibLouafi, Ecole de Technologies Superieure (ETS), Canada Haijun Zhang, Beijing University of Chemical Technology, P.R. China HajarMousannif, Cadi Ayyad University, Morocco HammadMushtaq, University of Management &Technology, Pakistan HanenIdoudi, ENSI- University of Manouba, Tunisia HemantaKalita, North Eastern Hill University, India Hideaki Iiduka, Kyushu Institute of Technology, Japan HossamZawbaa, Beni-Suef University, Egypt HosseinMalekmohamadi, University of Surrey, United Kingdom Huifang Chen, Zhejiang University, P.R. China Igor Salkov, Donetsk National University, Ukraine J. 
MailenKootsey, Simulation Resources, Inc., USA JaafarGaber, UTBM, France JanuszKacprzyk, Polish Academy of Sciences, Poland Javier Bajo, University of Salamanca, Spain Jia-Chin Lin, National Central University, Taiwan Jose Delgado, Technical University of Lisbon, Portugal Jose Luis Vazquez-Poletti, Universidad Complutense de Madrid, Spain JosipLorincz, University of Split, Croatia Jun He, University of New Brunswick, Canada JunyoungHeo, Hansung University, Korea KambizBadie, Iran Telecom Research Center, Iran Kenichi Kourai, Kyushu Institute of Technology, Japan Kenneth Nwizege, University of SWANSEA, United Kingdom Kuei-Ping Shih, Tamkang University, Taiwan Lorenzo Mossucca, IstitutoSuperiore Mario Boella, Italy LucioAgostinho, University of Campinas, Brazil Luis Teixeira, UniversidadeCatolica Portuguesa, Portugal Mahendra Dixit, SDMCET, India MalikaBourenane, University of Senia, Algeria ManjunathAradhya, Sri Jayachamarajendra College of Engineering, India Manu Sood, Himachal Pradesh University, India Marcelo Carvalho, University of Brasilia, Brazil Marco Rospocher, Fondazione Bruno Kessler, Italy MarenglenBiba, University of New York, Tirana, USA Martin Randles, Liverpool John Moores University, United Kingdom Martin Zsifkovits, University of Vienna, Austria Massimo Cafaro, University of Salento, Italy MelihKaraman, Bogazici University, Turkey MikulasAlexik, University of Zilina, Slovakia Mohamad Noh Ahmad, UniversitiTeknologi Malaysia, Malaysia Mohamed Dahmane, University of Montreal, Canada Mohamed Moussaoui, AbdelmalekEsaadiUniversity, Morocco Mohammad Monirujjaman Khan, University of Liberal Arts Bangladesh, Bangladesh Mohammed Mujahid U F, King Fahd University of Petroleum and Minerals (KFUPM), SA MohandLagha, SaadDahlab University of Blida - Blida - Algeria, Algeria Monica Chis, Frequentis AG, Romania MukeshTaneja, Cisco Systems, India Mustafa Khandwawala, University of North Carolina at Chapel Hill, USA Naveen Aggarwal, Panjab University, India Nestor Mora Nunez, Cadiz University, Spain NicoSaputro, Southern Illinois University Carbondale, USA Nora Cuppens-Boulahia, IT TELECOM Bretagne, France Olaf Maennel, Loughborough University, United Kingdom Omar ElTayeby, Clark Atlanta University, USA OskarsOzolins, Riga Technical University, Latvia Otavio Teixeira, Centro Universit?rio do Estado do Para (CESUPA), Brazil Pedro Gon?alves, Universidade de Aveiro, Portugal Peiyan Yuan, Beijing University of Posts and Telecommunications, P.R. 
China PetiaKoprinkova-Hristova, Bulgarian Academy of Sciences, Bulgaria Philip Moore, Birmingham City University, United Kingdom Praveen Srivastava, Indian Institute of Management (IIM), India Qin Lu, University of Technology, Sydney, Australia RabebMizouni, Khalifa University, UAE RachidAnane, Coventry University, United Kingdom Rafael Pasquini, Federal University of Uberlandia - UFU, Brazil Rajeev Shrivastava, MPSIDC, India RajibKar, National Institute of Technology, Durgapur, India Rashid Saeed, Telekom Malaysia, R&D Innovation Center, Malaysia Raveendranathan K C, LBS Institute of Technology for Women, India RivinduPerera, Informatics Institute of Technology, Sri Lanka RubitaSudirman, UniversitiTeknologi Malaysia, Malaysia Ryosuke Ando, Toyota Transportation Research Institute (TTRI), Japan Salvatore Venticinque, Second University of Naples, Italy Sami Habib, Kuwait University, Kuwait SasankoGantayat, GMR Institute of Technology, India Satish Chandra, Jaypee Institute of Information Technology, India SatyaGhrera, Jaypee University of Information Technology, India Scott Turner, University of Northampton, United Kingdom Selvamani K, Anna University, India Shanmugapriya D, Avinashilingam Institute, India Sheng-Shih Wang, Minghsin University of Science and Technology, Taiwan Shuping Liu, University of Southern California, USA Shyan Ming Yuan, National Chiao Tung University, Taiwan Sotiris Karachontzitis, University of Patras, Greece Sotiris Kotsiantis, University of Patras, Greece SowmyaKamath S, National Institute of Technology, Surathkal, India SriparnaSaha, IIT Patna, India Su Fong Chien, MIMOS Berhad, Malaysia SujitMandal, National Institute of Technology, Durgapur, India Suma V, DayanandaSagar College of Engineering, VTU, India Tae (Tom) Oh, Rochester Institute of Technology, USA Teruaki Ito, University of Tokushima, Japan TilokchanIrengbam, Manipur University, India TraianRebedea, University Politehnica of Bucharest, Romania Usha Banerjee, College of Engineering Roorkee, India VatsavayiValliKumari, Andhra University, India Veronica Moertini, Parahyangan Catholic University, Bandung, Indonesia Vikrant Bhateja, ShriRamswaroop College, Lucknow, India Visvasuresh Victor Govindaswamy, Concordia University, USA VivekSehgal, Jaypee University of Information Technology, India Wan Hussain Wan Ishak, Universiti Utara Malaysia, Malaysia Xiaoya Hu, Huazhong University of Science and Technology, P.R. China Xiao-Yang Liu, Shanghai Jiao Tong University, P.R. China Yingyuan Xiao, Tianjin University of Technology, P.R. China Yong Liao, NARUS Inc., USA Yoshitaka Kameya, Meijo University, Japan Yuming Zhou, Nanjing University, P.R. China Yu-N Cheah, UniversitiSains Malaysia, Malaysia Zhenzhen Ye, IBM, USA ZhijieShen, Hortonworks, Inc., USA Zhuo Lu, Intelligent Automation, Inc, USA CONTACT US ------------ Email: isi2014.icacci at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbcp at cin.ufpe.br Thu Apr 24 07:20:40 2014 From: rbcp at cin.ufpe.br (Ricardo Bastos C. Prudencio) Date: Thu, 24 Apr 2014 08:20:40 -0300 Subject: Connectionists: BRACIS/ENIAC 2014 - Deadline approaching Message-ID: Apologise for multiple postings. 
=========================================================
Deadline - Submission for Regular Papers: April 30, 2014
=========================================================
Call for Papers - BRACIS/ENIAC 2014
=========================================================
BRACIS 2014 - Brazilian Conference on Intelligent Systems
ENIAC 2014 - Encontro Nacional de Inteligência Artificial e Computacional
October 18-23, 2014 - São Carlos-SP, Brazil
http://jcris2014.icmc.usp.br/index.php/bracis-eniac
=========================================================

Submission

BRACIS/ENIAC will have a unified submission process. Papers should be written in English or Portuguese. Submitted papers must not exceed 6 pages, including all tables, figures, and references (style of the IEEE Computer Society Press). At most two additional pages are permitted, subject to an over-length page charge. Formatting instructions, as well as templates for Word and LaTeX, can be found at http://www.computer.org/portal/web/cscps/formatting

The best papers in English will be included in the BRACIS proceedings published by IEEE. All accepted papers written in Portuguese will appear in the ENIAC proceedings, published electronically in BDBComp. The remaining accepted papers submitted in English, but not selected for BRACIS, will be suggested for publication in ENIAC. Authors of selected papers will be invited to submit extended versions of their work to be considered for publication in special issues of several international journals (to be announced).

Papers must be submitted as PDF files through the EasyChair system: https://www.easychair.org/conferences/?conf=braciseniac2014 The deadline for paper registration/upload is April 30, 23:55 BRST.

Submission Policy

By submitting papers to BRACIS/ENIAC 2014, the authors agree that, in case of acceptance, at least one author must be fully registered for the conference **before** the deadline for sending the camera-ready paper. Accepted papers without the respective author's full registration **before** the deadline will not be included in the proceedings.

Important dates

Deadline - Submission for Regular Papers: April 30, 2014 (new deadline)
Acceptance notification: June 12, 2014
Final camera-ready papers due: July 7, 2014

Program Chairs: Ricardo Prudencio (UFPE) and Paulo E. Santos (FEI)
Local BRACIS Chairs: Estevam Rafael Hruschka Junior (UFSCar) and Heloisa de Arruda Camargo (UFSCar)

From malin.sandstrom at incf.org Thu Apr 24 09:37:01 2014
From: malin.sandstrom at incf.org (=?ISO-8859-1?Q?Malin_Sandstr=F6m?=)
Date: Thu, 24 Apr 2014 15:37:01 +0200
Subject: Connectionists: Reminder: Abstract submission closes April 27 for Neuroinformatics 2014
Message-ID:

*Welcome to the 7th INCF Congress of Neuroinformatics in Leiden, the Netherlands, August 25-27, 2014*

Neuroinformatics 2014 includes keynote lectures (Margarita Behrens, Mitya Chklovskii, Daniel Choquet, Ila Fiete, Michael Milham, and Felix Schürmann), workshops, and a special symposium on population-based neuroimaging, hosted by the INCF Netherlands Node.
The last day to submit your abstract is *April 27*; submit it here: http://www.neuroinformatics2014.org/abstracts

Registration is open at http://www.neuroinformatics2014.org/registration

Watch the NI2014 promo video: http://www.youtube.com/watch?v=aEudq3SOwK0

We kindly ask that you print and display the congress poster at your institution; it's available for download at http://www.neuroinformatics2014.org/documents/a0_poster_nov_2013

We're looking forward to seeing you in August!

--
Malin Sandström, PhD
Community Engagement Officer
malin.sandstrom at incf.org
International Neuroinformatics Coordinating Facility
Karolinska Institutet
Nobels väg 15 A
SE-171 77 Stockholm
Sweden
http://www.incf.org

From A.K.Seth at sussex.ac.uk Wed Apr 23 10:24:43 2014
From: A.K.Seth at sussex.ac.uk (Anil Seth)
Date: Wed, 23 Apr 2014 14:24:43 +0000
Subject: Connectionists: new MATLAB toolbox for Granger causal connectivity analysis
Message-ID: <5AA03083-8797-4715-887F-1835E57DD06E@sussex.ac.uk>

Dear all,

Lionel Barnett and I are pleased to announce the availability of our wholly new MATLAB toolbox for implementing Granger causality analysis. This new free software is a wholesale revision of my previous 'GCCA' toolbox, which has been available since 2010 (http://www.ncbi.nlm.nih.gov/pubmed/19961876) and has enjoyed wide uptake within the neuroscience community and beyond.

Granger causality is a powerful analysis method which can be used to detect directed functional connectivity in stationary time-series data, of the sort often (but not always) found in neuroscience datasets. The new 'multivariate Granger causality' (MVGC) toolbox offers a range of enhancements. It easily handles conditional spectral analyses. It requires only estimation of the 'full' regression model, avoiding biases inherent in estimation of both full and reduced models. It is accompanied by a fully functional help system integrated into the online MATLAB documentation. And much else besides.

The toolbox can be downloaded here: http://www.sussex.ac.uk/sackler/mvgc/

A paper describing the toolbox and the basic concepts and maths underlying Granger causality was recently published in the Journal of Neuroscience Methods: http://www.sciencedirect.com/science/article/pii/S0165027013003701 -- which is the appropriate citation for any published work utilizing the software.

Preparation of the toolbox was supported by the Sackler Centre for Consciousness Science, which is funded by the Dr. Mortimer and Dame Theresa Sackler Foundation.

We hope you find it useful!

Best regards
Anil Seth and Lionel Barnett

-------------------------------------------
Anil K. Seth, D.Phil.
Professor of Cognitive and Computational Neuroscience
Co-Director, Sackler Centre for Consciousness Science
University of Sussex
www.anilseth.com
a.k.seth at sussex.ac.uk
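For readers who would like to experiment with the basic idea before downloading the toolbox, the following is a minimal Python sketch of bivariate Granger causality using the grangercausalitytests function from the statsmodels package. It illustrates the underlying concept only, not the MVGC toolbox itself (which is MATLAB-based and covers multivariate, conditional and spectral analyses); the simulated processes and coupling coefficients are arbitrary choices made for the example.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Simulate two coupled AR(1) processes in which x drives y at a lag of one sample.
rng = np.random.default_rng(0)
n = 1000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.6 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

# grangercausalitytests asks whether the SECOND column Granger-causes the FIRST,
# so this tests: does the past of x help predict y beyond y's own past?
results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
p_value = results[1][0]['ssr_ftest'][1]  # F-test p-value at lag 1
print(p_value)  # should be very small here, since x genuinely drives y

Note that this classical test fits both a reduced model (y's own past) and a full model (the past of y and x); the MVGC approach described above is designed to avoid the biases of that two-model comparison by working from a single full VAR model.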
From karthikmaheshv at gmail.com Wed Apr 23 12:40:04 2014
From: karthikmaheshv at gmail.com (Karthik Mahesh Varadarajan)
Date: Wed, 23 Apr 2014 18:40:04 +0200
Subject: Connectionists: [RSS2014] CfP: First Workshop on Affordances: Affordances in "Vision for Cognitive Robotics" (in conjunction with RSS 2014)
Message-ID:

======================================================================
*Call for Papers - First Workshop on Affordances: Affordances in Vision for Cognitive Robotics* (in conjunction with RSS 2014), July 13, 2014, Berkeley, USA
http://affordances.info/workshops/RSS.html
======================================================================

Based on the Gibsonian principle of defining objects by their function, "affordances" have been studied extensively by psychologists and visual perception researchers, resulting in the creation of numerous cognitive models. These models are being increasingly revisited and adapted by computer vision and robotics researchers to build cognitive models of visual perception and behavioral algorithms in recent years. This workshop attempts to explore this nascent, yet rapidly emerging field of affordance-based cognitive robotics while integrating the efforts and language of affordance communities not just in computer vision and robotics, but also in psychophysics and neurobiology, by creating an open affordance research forum, feature framework and ontology called AfNet (theaffordances.net). In particular, the workshop will focus on emerging trends in affordances and other human-centered function/action features that can be used to build computer vision and robotic applications. The workshop also features contributions from researchers involved in traditional theories of affordances, especially from the point of view of psychophysics and neurobiology. Avenues for aiding research in these fields using techniques from computer vision and cognitive robotics will also be explored.

Primary topics addressed by the workshop include the following, among others:
- Affordances in visual perception models
- Affordances as visual primitives, common coding features and symbolic cognitive systems
- Affordances for object recognition, search, attention modulation, functional scene understanding/classification
- Object functionality analysis
- Affordances from appearance and touch-based cues
- Haptic adjectives
- Functional-visual categories for transfer learning
- Actions and functions in object perception
- Human-object interactions and modeling
- Motion-capture data analysis for object categorization
- Affordances in human and robot grasping
- Robot behavior for affordance learning
- Execution of affordances on robots
- Affordances to address cognitive and domestic robot applications
- Affordance ontologies
- Psychophysics of affordances
- Neurobiological and cognitive models for affordances

The workshop also seeks to address key challenges in robotics with regard to functional form descriptions. While affordances describe the function that each object or entity affords, these in turn define the manipulation schema and interaction modes that robots need to use to work with objects. These functional features, ascertained through vision, haptics and other sensory information, also help in categorizing objects, task planning, grasp planning, scene understanding and a number of other robotic tasks.
Understanding the various challenges in the field and building a common language and framework for communication across the varied communities involved are the key goals of the proposed workshop. Through the course of the workshop, we also envisage the establishment of a working group for AfNet. An initial version is available online at www.theaffordances.net. We hope the workshop will serve to foster greater collaboration between the affordance communities in various fields.

*Paper Submissions*

Paper contributions to the workshop are solicited in four different formats:
- *Conceptual papers* (1 page): Authors are invited to submit original ideas on approaches to address specific problems in the targeted areas of the workshop. While a clear presentation of the proposed approach and the expected results are essential, specifics of implementation and evaluations are outside the scope of this format. This format is intended for the exchange and evaluation of ideas prior to implementation/experimental work, as well as to open up collaboration avenues.
- *Design papers* (3 pages): Authors submitting design papers are required to address key issues regarding the problem considered, with detailed algorithms and preliminary or proof-of-concept results. Detailed evaluations and analyses are outside the scope of this format. This format is intended to address late-breaking and work-in-progress results, as well as to foster collaboration between research and engineering groups.
- *Experimental papers* (3 pages): Experimental papers are required to present results of experiments and evaluation of previously published algorithms or design frameworks. Details of implementation and exhaustive test case analyses are key to this format. These papers are geared at benchmarking and standardizing previously known approaches.
- *Full papers* (5 pages): Full papers must be self-inclusive contributions with a detailed treatment of the problem statement, related work, design methodology, algorithm, test-bed, evaluation, comparative analysis, results and future scope of work.

Submission of original and unpublished work is highly encouraged. Since the goal of this workshop is to bring together the various affordance communities, extended versions/summary reports of recent research published elsewhere, as adapted to the goals of the workshop, will also be accepted. These papers are required to clearly state the relevance to the workshop and the necessary adaptation.

The program will be composed of oral as well as Pecha-Kucha-style presentations. Each contribution will be reviewed by three reviewers through a single-blind review process. The paper formatting should follow the RSS formatting guidelines (templates: Word and LaTeX). All contributions are to be submitted via the Microsoft Conference Management Tool in PDF format. Appendices and supplementary text can be submitted as a second PDF, and other materials such as videos (though preferably as links on Vimeo or YouTube) as a zipped file. All papers are expected to be self-inclusive, and supplementary materials are not guaranteed to be reviewed.

Please adhere to the following strict deadlines. In addition to direct acceptance, early submissions may be conditionally accepted, in which case submission of a revised version of the paper based on reviewer comments, prior to the late submission deadline, is necessary. The final decision on acceptance of such conditionally accepted papers will be announced along with the decisions for the late submissions.
*Important Dates*
- Initial submissions (Early): 23:59:59 PDT May 5, 2014
- Notification of acceptance (Early submissions): May 15, 2014
- Initial submissions (Late): 23:59:59 PDT May 27, 2014
- Notification of acceptance (Late submissions): June 5, 2014
- Submission of publication-ready version: June 10, 2014
- Workshop date: July 13, 2014

*Organizers*
Karthik Mahesh Varadarajan (varadarajan(at)acin.tuwien.ac.at), TU Wien
Markus Vincze (vincze(at)tuwien.ac.at), TU Wien
Trevor Darrell (trevor(at)eecs.berkeley.edu), UC Berkeley
Juergen Gall (gall(at)iai.uni-bonn.de), Univ. Bonn

*Speakers and Participants* (To be updated)
Abhinav Gupta (Affordances in Computer Vision), Carnegie Mellon University
Ashutosh Saxena (Affordances in Cognitive Robotics), Cornell University
Lisa Oakes (Psychophysics of affordances)*, UC Davis
TBA (Neurobiology of affordances)*

*Program Committee*
Irving Biederman (USC)
Aude Oliva (MIT)
Fei-Fei Li (Stanford University)
Martha Teghtsoonian (Smith College)
Derek Hoiem (UIUC)
Barbara Caputo (Univ. of Rome, IDIAP)
Song-Chun Zhu (UCLA)
Antonis Argyros (FORTH)
Tamim Asfour (KIT)
Michael Beetz (TUM)
Norbert Krueger (Univ. of Southern Denmark)
Sven Dickinson (Univ. of Toronto)
Diane Pecher (Erasmus Univ. Rotterdam)
Aaron Bobick (Georgia Tech)
Jason Corso (UB New York)
Juan Carlos Niebles (Universidad del Norte)
Tamara Berg (UNC Chapel Hill)
Moritz Tenorth (Univ. Bremen)
Dejan Pangercic (Robert Bosch)
Roozbeh Mottaghi (Stanford)
Alireza Fathi (Stanford)
Xiaofeng Ren (Amazon)
David Fouhey (CMU)
Tucker Hermans (Georgia Tech)
Tian Lan (Stanford)
Amir Roshan Zamir (UCF)
Hamed Pirsiavash (MIT)
Walterio Mayol-Cuevas (Univ. of Bristol)

From saketn at andrew.cmu.edu Wed Apr 23 15:04:25 2014
From: saketn at andrew.cmu.edu (Saket)
Date: Wed, 23 Apr 2014 15:04:25 -0400
Subject: Connectionists: Biological Distributed Algorithms 2014
Message-ID:

=======================================================
The 2nd Workshop on Biological Distributed Algorithms (BDA 2014)
October 11-12, 2014 in Austin, Texas USA - in conjunction with DISC 2014
http://www.cs.cmu.edu/~saketn/BDA2014/
=======================================================

We are excited to announce the second workshop on Biological Distributed Algorithms (BDA). BDA is focused on the relationships between distributed computing and distributed biological systems and, in particular, on analysis and case studies that combine the two. Such research can lead to a better understanding of the behavior of biological systems while at the same time developing novel computational algorithms that can be used to solve basic distributed computing problems.

BDA 2014 will be collocated with DISC 2014 (the 28th International Symposium on Distributed Computing) in Austin, Texas. It will take place just before DISC, on Saturday and Sunday, October 11-12, 2014. BDA 2014 will include talks on distributed algorithms related to a variety of biological systems. However, this time we will devote special attention to *communication and coordination in insect colonies* (e.g. foraging, navigation, task allocation, construction) and *networks in the brain* (e.g. learning, decision-making, attention).

===========
SUBMISSIONS
===========

We solicit submissions describing recent results relevant to biological distributed computing.
We especially welcome papers describing new insights and/or case studies regarding the relationship between distributed computing and biological systems, even if these are not fully formed. Since a major goal of the workshop is to explore new directions and approaches, we especially encourage the submission of ongoing work. Selected contributors will be asked to present, discuss and defend their work at the workshop.

Submissions should be in PDF and include title, author information, and a 4-page extended abstract.

Please use the following EasyChair submission link: http://www.easychair.org/conferences/?conf=bda2014

Note: The workshop will not include published proceedings. In particular, we welcome submissions of papers describing work that has appeared or is expected to appear in other venues.

===============
IMPORTANT DATES
===============
July 18, 2014 - Paper submission deadline
August 18, 2014 - Decision notifications
October 11-12, 2014 - Workshop

================
INVITED SPEAKERS
================
Dmitri Chklovskii - HHMI Janelia Farm
Anna Dornhaus - University of Arizona
Ofer Feinerman - Weizmann Institute
Deborah Gordon - Stanford
Nancy Lynch - MIT
James Marshall - University of Sheffield
Saket Navlakha - CMU
Stephen Pratt - Arizona State University
Andrea Richa - Arizona State University

=================
PROGRAM COMMITTEE
=================
Ziv Bar-Joseph - CMU (co-chair)
Anna Dornhaus - University of Arizona
Yuval Emek - Technion (co-chair)
Amos Korman - CNRS and University of Paris Diderot (co-chair)
Nancy Lynch - MIT
Radhika Nagpal - Harvard
Saket Navlakha - CMU
Nir Shavit - MIT

From fmschleif at googlemail.com Fri Apr 25 04:00:30 2014
From: fmschleif at googlemail.com (Frank-Michael Schleif)
Date: Fri, 25 Apr 2014 10:00:30 +0200
Subject: Connectionists: Call for participation 10th Anniversary Workshop on Self Organizing Maps (WSOM) 2014 in Mittweida / Germany
Message-ID:

We apologize for possible duplicates of this message sent to distribution lists.

===================================================================
Call for Participation for the 10th Workshop on Self-Organizing Maps 2014 - WSOM 2014
Mittweida, Germany, 2-4 July 2014
http://www.wsom2014.de/
===================================================================

GENERAL INFORMATION

The Workshop on Self-Organizing Maps 2014 -- WSOM 2014 will be held in the beautiful small town of Mittweida, close to the Erzgebirge mountains in Saxony, Germany. It will bring together researchers and practitioners in the field of self-organizing systems for data analysis, with a particular emphasis on self-organizing maps and learning vector quantization. WSOM 2014 is the 10th conference in a series of biennial international conferences started with WSOM'97 in Helsinki. The workshop focuses on recent advances in self-organizing maps, learning vector quantization and prototype-based learning. In a broader scope, the workshop also covers practical applications of these techniques and other machine learning methods.

This year's sessions are:
* Self-Organizing Maps Theory and Visualization
* Prototype Based Classification
* Classification and non-standard metric approaches
* Applications

Invited Speakers:
* Prof. Dr. Michael Biehl, University of Groningen (NL), Johann Bernoulli Institute for Mathematics and Computer Science
* Prof. Dr. Erzsébet Merényi, Rice University Houston (USA), Department of Statistics and Department of Electrical and Computer Engineering
* Prof. Dr. Fabrice Rossi, Université Paris 1 Panthéon-Sorbonne, Département Statistique, Analyse, Modélisation Multidisciplinaire (SAMM) (F)

Preliminary program

The preliminary program of the WSOM 2014 conference is available on the Web: http://www.global.hs-mittweida.de/~wsom2014/wsom2014_program.htm

Further, the list of accepted papers can be found at http://www.global.hs-mittweida.de/~wsom2014/wsom2014_accepted_papers.htm

You can register (early bird until 31 May) at http://www.global.hs-mittweida.de/~wsom2014/wsom2014_registration.htm

Further details (registration, venue, etc.) at: www.wsom2014.de

You can also contact wsom2014 at hs-mittweida.de

From M.M.vanZaanen at uvt.nl Fri Apr 25 06:33:23 2014
From: M.M.vanZaanen at uvt.nl (Menno van Zaanen)
Date: Fri, 25 Apr 2014 12:33:23 +0200
Subject: Connectionists: Final CfP First Workshop on Computational Approaches to Compound Analysis (ComAComA 2014)
Message-ID: <20140425103323.GH3133@pinball.uvt.nl>

Final Call for Papers

The First Workshop on Computational Approaches to Compound Analysis (ComAComA 2014) at COLING 2014
Dublin, Ireland, 23/24 August, 2014

DESCRIPTION

The ComAComA workshop is an interdisciplinary platform for researchers to present recent and ongoing work on compound processing in different languages. Given the high productivity of compounding in a wide range of languages, compound processing is an interesting subject in linguistics, computational linguistics, and other applied disciplines. For example, for many language technology applications, compound processing remains a challenge (both morphologically and semantically), since novel compounds are created and interpreted on the fly. In order to deal with this productivity, systems that can analyse new compound forms and their meanings need to be developed. From an interdisciplinary perspective, we also need to better understand the process of compounding (for instance, as a cognitive process), in order to model its complexity.

The workshop has several related aims. Firstly, it will bring together researchers from different research fields (e.g., computational linguistics, linguistics, neurolinguistics, psycholinguistics, language technology) to discuss various aspects of compound processing. Secondly, the workshop will provide an overview of the current state-of-the-art research, as well as desired resources for future research in this area. Finally, we expect that the interdisciplinary nature of the workshop will result in methodologies to evaluate compound processing systems from different perspectives.

KEYNOTE SPEAKERS

Diarmuid Ó Séaghdha (University of Cambridge)
Andrea Krott (University of Birmingham)

TOPICS OF INTEREST

The ComAComA workshop solicits papers on original and unpublished research on the following topics, including, but not limited to:
** Annotation of compounds for computational purposes
** Categorisation of compounds (e.g. different taxonomies)
** Classification of compound semantics
** Compound splitting
** Automatic morphological analysis of compounds
** Compound processing in computational psycholinguistics
** Psycho- and/or neurolinguistic aspects of compound processing
** Theoretical and/or descriptive linguistic aspects of compound processing
** Compound paraphrase generation
** Applications of compound processing
** Resources for compound processing
** Evaluation methodologies for compound processing

PAPER REQUIREMENTS

** Papers should describe original work, with room for completed work, well-advanced ongoing research, or contemplative, novel ideas. Papers should indicate clearly the state of completion of the reported results. Wherever appropriate, concrete evaluation results should be included.
** Submissions will be judged on correctness, originality, technical strength, significance and relevance to the conference, and interest to the attendees.
** Submissions presented at the conference should mostly contain new material that has not been presented at any other meeting with publicly available proceedings. Papers that are being submitted in parallel to other conferences or workshops must indicate this on the title page, as must papers that contain significant overlap with previously published work.

REVIEWING

Reviewing will be double-blind. It will be managed by the organisers of ComAComA, assisted by the workshop's Program Committee (see details below).

INSTRUCTIONS FOR AUTHORS

** All papers will be included in the COLING workshop proceedings, in electronic form only.
** The maximum submission length is 8 pages (A4), plus two extra pages for references.
** Authors of accepted papers will be given additional space in the camera-ready version to reflect space needed for changes stemming from reviewers' comments.
** The only mode of delivery will be oral; there will be no poster presentations.
** Papers shall be submitted in English, anonymised with regard to the authors and/or their institution (no author-identifying information on the title page nor anywhere in the paper), including referencing style as usual.
** Papers must conform to official COLING 2014 style guidelines, which are available on the COLING 2014 website (see also links below).
** The only accepted format for submitted papers is PDF.
** Submission and reviewing will be managed online by the START system (see link below). Submissions must be uploaded on the START system by the submission deadlines; submissions after that time will not be reviewed. To minimise network congestion, we request authors to upload their submissions as early as possible.
** In order to allow participants to become acquainted with the published papers ahead of time, which in turn should facilitate discussions at the workshop, the official publication date has been set two weeks before the conference, i.e., on August 11, 2014. On that day, the papers will be available online for all participants to download, print and read. If your employer is taking steps to protect intellectual property related to your paper, please inform them about this timing.
** While submissions are anonymous, we strongly encourage authors to plan for depositing language resources and other data as well as tools used and/or developed for the experiments described in the papers, if the paper is accepted.
In this respect, we encourage authors to deposit resources and tools in available open-access repositories of language resources and/or repositories of tools (such as META-SHARE, Clarin, ELRA, LDC or AFNLP/COCOSDA for data, and GitHub, SourceForge, CPAN and similar for software and tools), and to refer to them instead of submitting them with the paper.

COLING 2014 STYLE FILES

Download a zip file with style files for LaTeX, MS Word and LibreOffice here: http://www.coling-2014.org/doc/coling2014.zip

IMPORTANT DATES

May 2, 2014: Paper submission deadline
June 6, 2014: Author notification deadline
June 27, 2014: Camera-ready PDF deadline
August 11, 2014: Official paper publication date
August 23/24, 2014: ComAComA Workshop (exact date still unknown)
(August 25-29, 2014: Main COLING conference)

URLs

Workshop: http://tinyurl.com/comacoma
Main COLING conference: http://www.coling-2014.org/
Paper submission: https://www.softconf.com/coling2014/WS-17/
Style sheets: http://www.coling-2014.org/doc/coling2014.zip

ORGANISING COMMITTEE

Ben Verhoeven (University of Antwerp, Belgium) ben.verhoeven at uantwerpen.be
Walter Daelemans (University of Antwerp, Belgium) walter.daelemans at uantwerpen.be
Menno van Zaanen (Tilburg University, The Netherlands) mvzaanen at uvt.nl
Gerhard van Huyssteen (North-West University, South Africa) gerhard.vanhuyssteen at nwu.ac.za

PROGRAM COMMITTEE

** Preslav Nakov (University of California in Berkeley)
** Iris Hendrickx (Radboud University Nijmegen)
** Gary Libben (University of Alberta)
** Lonneke Van der Plas (University of Stuttgart)
** Helmut Schmid (Ludwig Maximilian University Munich)
** Fintan Costello (University College Dublin)
** Roald Eiselen (North-West University)
** Su Nam Kim (University of Melbourne)
** Pavol Štekauer (P.J. Safarik University)
** Arina Banga (University of Debrecen)
** Diarmuid Ó Séaghdha (University of Cambridge)
** Rochelle Lieber (University of New Hampshire)
** Vivi Nastase (Fondazione Bruno Kessler)
** Tony Veale (University College Dublin)
** Pius ten Hacken (University of Innsbruck)
** Anneke Neijt (Radboud University Nijmegen)
** Andrea Krott (University of Birmingham)
** Emmanuel Keuleers (Ghent University)
** Stan Szpakowicz (University of Ottawa)

From p.roelfsema at nin.knaw.nl Fri Apr 25 08:26:48 2014
From: p.roelfsema at nin.knaw.nl (Pieter Roelfsema)
Date: Fri, 25 Apr 2014 12:26:48 +0000
Subject: Connectionists: Roelfsema lab: Postdoc or PhD on processing in cortical layers
Message-ID:

Post-doc (or PhD student) in Computational and Cognitive Neuroscience -- what is the role of the cortical layers?

A post-doc (or PhD student) position is available to study computational models of the roles of the different layers of the cerebral cortex in the Vision & Cognition group (head: Pieter Roelfsema) at the Netherlands Institute for Neuroscience in Amsterdam.

What roles do the different cortical layers play in visual processing? This project, funded by the Human Brain Project, aims to build computational models for the unique contributions of the different cortical layers to perceptual processing. We recently discovered that the cortical layers have different roles in visual perception (Self et al., Curr. Biol., 2013). We now wish to understand this division of labor by making models of the layers and the interactions between them that take place in the visual cortex during perception. The candidate will have close interactions with researchers addressing laminar processing with high-field fMRI.
The successful applicant will have a solid computational background and an interest in cognitive neuroscience, preferably with an affinity for neuroimaging, and good programming skills.

Appointment: The position involves a temporary appointment for 2-4 years.

To apply, please send an application letter, CV and two letters of recommendation to:

Tini Eikelboom
The Netherlands Institute for Neuroscience
Meibergdreef 47
1105 BA Amsterdam, The Netherlands
Telephone: +31-20-5664587
E-mail: t.eikelboom at nin.knaw.nl

From itialife at gmail.com Fri Apr 25 13:49:05 2014
From: itialife at gmail.com (Keyan Ghazi-Zahedi)
Date: Fri, 25 Apr 2014 19:49:05 +0200
Subject: Connectionists: ALIFE-14 Workshop on information theoretic incentives for artificial life
Message-ID: <3E75F140-AE6D-466F-A9A3-DB8E8E22BF41@gmail.com>

*** WORKSHOP ON INFORMATION THEORETIC INCENTIVES FOR ARTIFICIAL LIFE ***

to be held at the 14th International Conference on the Synthesis and Simulation of Living Systems (ALIFE-14)
New York City, 30 July - 2 August 2014

Workshop web page: http://www.mis.mpg.de/ay/workshops/alife14ws.html
Conference web page: http://alife14.org

Please forward this mail to other interested parties.

~~~ IMPORTANT DATES

Submission deadline: 2 May 2014
Notification of acceptance: 23 May 2014
Workshop date: 30 July 2014 (date now confirmed)

~~~ CHANGES SINCE LAST EMAIL

This is the second email you might receive regarding this workshop. There are two main changes: we are happy to announce that Chris Adami will be giving a keynote, and, due to space constraints at ALIFE, we have, like many other workshops, rescheduled to a half-day program.

~~~ ABOUT

Artificial Life aims to understand the basic and generic principles of life, and to demonstrate this understanding by producing life-like systems based on those principles. In recent years, with the advent of the information age and the widespread acceptance of information technology, our view of life has changed. Ideas such as "life is information processing" or "information holds the key to understanding life" have become more common. But what can information, or more formally Information Theory, offer to Artificial Life?

One relevant area is the motivation of behaviour for artificial agents, both virtual and real. Instead of learning to perform a specific task, informational measures can be used to define concepts such as boredom, empowerment or the ability to predict one's own future. Intrinsic motivations derived from these concepts allow us to generate behaviour, ideally from an embodied and enactive perspective, based on basic but generic principles. The key questions here are: "What are the important intrinsic motivations a living agent has, and what behaviour can be produced by them?"

Related to an agent's behaviour is also the question of how and where the necessary computation to realise this behaviour is performed. Can information be used to quantify the morphological computation of an embodied agent, and to what degree do the computational limitations of an agent influence its behaviour?

Another area of interest is the guidance of artificial evolution or adaptation. Assuming it is true that an agent wants to optimise its information processing, possibly obtaining as much relevant information as possible at the cheapest computational cost, then what behaviour would naturally follow from that?
Can the development of social interaction or collective phenomena be motivated by an informational gradient? Furthermore, evolution itself can be seen as a process in which an agent population obtains information from the environment, which raises the questions of how this can be quantified, and how systems would adapt to maximise this information. The common theme in these different scenarios is the identification and quantification of driving forces behind evolution, learning, behaviour and other crucial processes of life, in the hope that the implementation or optimisation of these measurements will allow us to construct life-like systems.

~~~ WORKSHOP FORMAT

The workshop is scheduled for half a day (this has changed due to space limitations). The workshop will consist of a keynote (by Chris Adami), presentations and discussions. To participate in the workshop by giving a presentation, we would ask you to submit an extended abstract. We are interested both in the existing approaches in the field and in possible future avenues of investigation. Student presentations are an option for younger researchers, new to the field, who would like to outline and discuss their research direction or early results. The general idea is to offer a forum for those interested in applying information theory and similar methods to the field of artificial life, and to have a focused discussion on both the possibilities and the technical challenges involved.

After the workshop, there will be a special issue of the journal "Entropy" on the topic of the workshop, and we encourage participants to submit an extended version of their work. Further details will be announced as soon as possible.

~~~ SUBMISSION AND PARTICIPATION

How to submit: If you want to participate in the workshop by giving a talk, we would invite you to send us an email (itialife at gmail.com) by 2 May 2014 with
- name
- contact details
- a 1-2 page extended abstract.

We are interested in previous work related to the subject and current work, including preliminary results. Specifically for students, we also offer the option to submit for a shorter student talk, to present some early results and discuss their approach to the field. In this case, please submit a 1-2 page extended abstract and indicate that you are interested in a student presentation.

If there are any questions, or if you just want to indicate interest in submitting or attending, please feel free to mail us at itialife at gmail.com.

~~~ CONTACTS

Web: http://www.mis.mpg.de/ay/workshops/alife14ws.html
Email: itialife at gmail.com

~~~

Organisers: Christoph Salge, Keyan Zahedi, Georg Martius and Daniel Polani

With best wishes,
Keyan Ghazi-Zahedi, on behalf of the organisers

--
Keyan Ghazi-Zahedi, Dr. rer. nat.
Information Theory of Cognitive Systems
Max Planck Institute for Mathematics in the Sciences
phone: (+49) 341 9959 545, fax: (+49) 341 9959 555, office A11
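As a toy illustration of one of the measures mentioned in the announcement above: empowerment is commonly formalised as the channel capacity from an agent's actions to its subsequent sensor states, which can be computed with the Blahut-Arimoto algorithm when the transition probabilities are known. The sketch below is a textbook construction, not code from the workshop organisers; the empowerment function and the hand-made transition table are illustrative assumptions.

import numpy as np

def empowerment(p_next, iters=300):
    # p_next[a, s] = probability of reaching state s after taking action a.
    n_actions = p_next.shape[0]
    p_a = np.full(n_actions, 1.0 / n_actions)  # start from a uniform action distribution
    for _ in range(iters):
        q = p_a @ p_next  # current marginal distribution over next states
        with np.errstate(divide='ignore', invalid='ignore'):
            log_ratio = np.where(p_next > 0, np.log(p_next / q), 0.0)
        p_a *= np.exp((p_next * log_ratio).sum(axis=1))  # Blahut-Arimoto update
        p_a /= p_a.sum()
    q = p_a @ p_next
    with np.errstate(divide='ignore', invalid='ignore'):
        log2_ratio = np.where(p_next > 0, np.log2(p_next / q), 0.0)
    return (p_a[:, None] * p_next * log2_ratio).sum()  # I(A; S') in bits

# Three actions that deterministically reach three distinct states:
# the channel capacity, and hence the empowerment, should be log2(3) ~ 1.585 bits.
p_next = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
print(empowerment(p_next))

Maximising a quantity like this over an agent's possible behaviours is one concrete way of turning an informational measure into an intrinsic motivation of the kind the workshop sets out to discuss.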
From grlmc at urv.cat Sat Apr 26 06:36:05 2014
From: grlmc at urv.cat (GRLMC)
Date: Sat, 26 Apr 2014 12:36:05 +0200
Subject: Connectionists: SSTiC 2014: May 10, 6th registration deadline
Message-ID: <56BC9C0416524621B20694503375EFFA@Carlos1>

*To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject line*

*********************************************************************
2014 TARRAGONA INTERNATIONAL SUMMER SCHOOL ON TRENDS IN COMPUTING
SSTiC 2014
Tarragona, Spain
July 7-11, 2014
Organized by Rovira i Virgili University
http://grammars.grlmc.com/sstic2014/
*********************************************************************

--- May 10, 6th registration deadline ---

*********************************************************************

AIM:

SSTiC 2014 is the second edition in a series started in 2013. For the previous event, see http://grammars.grlmc.com/SSTiC2013/

SSTiC 2014 will be a research training event mainly addressed to PhD students and PhD holders in the first steps of their academic career. It intends to update them on the most recent developments in the diverse branches of computer science and its neighbouring areas. To that purpose, renowned scholars will lecture and will be available for interaction with the audience. SSTiC 2014 will cover the whole spectrum of computer science through 6 keynote lectures and 23 six-hour courses dealing with some of the most lively topics in the field. The organizers share the idea that outstanding speakers will really attract the brightest students.

ADDRESSED TO:

Graduate students from around the world. There are no formal pre-requisites in terms of the academic degree the attendee must hold. However, since there will be several levels among the courses, reference may be made to specific knowledge background in the description of some of them. SSTiC 2014 is also appropriate for more senior people who want to keep themselves updated on developments in their own field or in other branches of computer science. They will surely find it fruitful to listen to, and discuss with, scholars who are main references in computing nowadays.

REGIME:

In addition to keynotes, 3 parallel sessions will be held during the whole event. Participants will be able to freely choose the courses they will be willing to attend as well as to move from one to another.

VENUE:

SSTiC 2014 will take place in Tarragona, located 90 km south of Barcelona. The venue will be:

Campus Catalunya
Universitat Rovira i Virgili
Av. Catalunya, 35
43002 Tarragona

KEYNOTE SPEAKERS:

Larry S. Davis (U Maryland, College Park), A Historical Perspective of Computer Vision Models for Object Recognition and Scene Analysis
David S. Johnson (Columbia U, New York), Open and Closed Problems in NP-Completeness
George Karypis (U Minnesota, Twin Cities), Top-N Recommender Systems: Revisiting Item Neighborhood Methods
Steffen Staab (U Koblenz), Explicit and Implicit Semantics: Two Sides of One Coin
Philip Wadler (U Edinburgh), You and Your Research and The Elements of Style
Ronald R. Yager (Iona C, New Rochelle), Social Modeling

COURSES AND PROFESSORS:

Divyakant Agrawal (U California, Santa Barbara), [intermediate] Scalable Data Management in Enterprise and Cloud Computing Infrastructures
Pierre Baldi (U California, Irvine), [intermediate] Big Data Informatics Challenges and Opportunities in the Life Sciences
Rajkumar Buyya (U Melbourne), [intermediate] Cloud Computing
John M. Carroll (Pennsylvania State U, University Park), [introductory] Usability Engineering and Scenario-based Design
Kwang-Ting (Tim) Cheng (U California, Santa Barbara), [introductory/intermediate] Smartphones: Hardware Platform, Software Development, and Emerging Apps
Amr El Abbadi (U California, Santa Barbara), [introductory] The Distributed Foundations of Data Management in the Cloud
Richard M. Fujimoto (Georgia Tech, Atlanta), [introductory] Parallel and Distributed Simulation
Mark Guzdial (Georgia Tech, Atlanta), [introductory] Computing Education Research: What We Know about Learning and Teaching Computer Science
David S. Johnson (Columbia U, New York), [introductory] The Traveling Salesman Problem in Theory and Practice
George Karypis (U Minnesota, Twin Cities), [intermediate] Programming Models/Frameworks for Parallel & Distributed Computing
Aggelos K. Katsaggelos (Northwestern U, Evanston), [intermediate] Optimization Techniques for Sparse/Low-rank Recovery Problems in Image Processing and Machine Learning
Arie E. Kaufman (U Stony Brook), [intermediate/advanced] Visualization
Carl Lagoze (U Michigan, Ann Arbor), [introductory] Curation of Big Data
Dinesh Manocha (U North Carolina, Chapel Hill), [introductory/intermediate] Robot Motion Planning
Bijan Parsia (U Manchester), [introductory] The Empirical Mindset in Computer Science
Charles E. Perkins (FutureWei Technologies, Santa Clara), [intermediate] Beyond LTE: the Evolution of 4G Networks and the Need for Higher Performance Handover System Designs
Robert Sargent (Syracuse U), [introductory] Validation of Models
Mubarak Shah (U Central Florida, Orlando), [intermediate] Visual Crowd Analysis
Steffen Staab (U Koblenz), [intermediate] Programming the Semantic Web
Mike Thelwall (U Wolverhampton), [introductory] Sentiment Strength Detection for Twitter and the Social Web
Jeffrey D. Ullman (Stanford U), [introductory] MapReduce Algorithms
Nitin Vaidya (U Illinois, Urbana-Champaign), [introductory/intermediate] Distributed Consensus: Theory and Applications
Philip Wadler (U Edinburgh), [intermediate] Topics in Lambda Calculus and Life

ORGANIZING COMMITTEE:

Adrian Horia Dediu (Tarragona)
Carlos Martín-Vide (Tarragona, chair)
Florentina Lilica Voicu (Tarragona)

REGISTRATION:

Registration must be done at http://grammars.grlmc.com/sstic2014/registration.php

The selection of up to 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an approximation of the respective demand for each course. Since the capacity of the venue is limited, registration requests will be processed on a first-come, first-served basis. The registration period will close once the venue's capacity is reached. Registering prior to the event is highly advisable.

FEES:

As far as possible, participants are expected to attend for the whole (or most of the) week (full-time). Fees are a flat rate allowing one to participate in all courses. They vary depending on the registration deadline.

ACCOMMODATION:

Information about accommodation is available on the website of the School.

CERTIFICATE:

Participants will receive a certificate of attendance.

QUESTIONS AND FURTHER INFORMATION:

florentinalilica.voicu at urv.cat

POSTAL ADDRESS:

SSTiC 2014
Lilica Voicu
Rovira i Virgili University
Av. Catalunya, 35
Catalunya, 35 43002 Tarragona, Spain Phone: +34 977 559 543 Fax: +34 977 558 386 ACKNOWLEDGEMENTS: Departament d?Economia i Coneixement, Generalitat de Catalunya Universitat Rovira i Virgili From daniel.margulies at gmail.com Sun Apr 27 17:00:06 2014 From: daniel.margulies at gmail.com (Daniel Margulies) Date: Sun, 27 Apr 2014 23:00:06 +0200 Subject: Connectionists: Register now for the 2014 Neuroimaging Hackathon in Berlin, June 5-7, 2014 Message-ID: The 2014 Neuroimaging Hackathon registration is open! This unique 3-day event (June 5-7) will be held prior to annual meeting of the Organization for Human Brain Mapping (OHBM)at the legendary c-base , a decked-out collaborative art and technology space at the heart of the Berlin hacker community. In keeping with the spirit of prior neuroimaging hackathons such as OHBM 2013and Brainhack 2012 and 2013 , participants will have the opportunity to propose collaborative projects and engage in a number of open challenges. For updated information on the developing Hackathon content, and to contribute your own ideas, check out www.brainhack.org. Current projects include the Sage Alzheimer?s Disease Challenge, cloud computing tutorials with NITRC-CEand Amazon Web Services, workshops on data-sharing and meta-analysis, and many others initiated by scientists from the brain imaging community (see http://brainhack.org/categories/hackathon2014/). Participants engaged in the Alzheimer?s Disease Challenge will have the unique opportunity to set an initial benchmark for a multi-phase effort to identify early accurate predictive biomarkers that can inform the scientific, industrial and regulatory communities about the disease. Join this vibrant community and help build open neuroscience initiatives in one of the most unique tech venues in Berlin. To keep the collaborations flowing, meals and refreshments are all included in the registration costs and will be provided on-site, as well as $100+ in Amazon Web Service credits for your computational needs. Participation is limited, so we encourage you to register now. The spirit of the Hackathon will also be continuing into the OHBM meeting in Hamburg from June 8-12, where a collaboration space will be available in the conference venue. This space will be open to all OHBM attendees to discuss, present, and continue working on hackathon projects. OHBM Hackathon website: https://www.humanbrainmapping.org/hackathon Registration: https://www.humanbrainmapping.org/i4a/ams/conference/conference.cfm?conferenceID=98 Projects: http://www.brainhack.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhong at maebashi-it.ac.jp Mon Apr 28 05:02:50 2014 From: zhong at maebashi-it.ac.jp (Ning Zhong) Date: Mon, 28 Apr 2014 18:02:50 +0900 Subject: Connectionists: CFPs: Brain Informatics (a Springer published international journal) Message-ID: <535E193A.8060606@maebashi-it.ac.jp> Call for Papers Brain Informatics: Brain Data Computing and Health Studies (a Springer published international journal) http://www.springer.com/40708 (ISSN Electronic: 2198-4026, ISSN Print 2198-4026). It is fully sponsored and no any article-processing fee charged for authors currently. Brain Informatics is an international, peer-reviewed, interdisciplinary, and open-access journal published under the brand SpringerOpen, which providing a unique platform for researchers and practitioners to disseminate original research on the latest applications of computing technology in the study of the human brain and cognition. 
The journal addresses the computational, cognitive, neurophysiological, biological, physical, ecological, and social perspectives of brain informatics, as well as topics relating to mental health and well-being. It features emerging advanced information technologies, such as the Internet/Web of Things (IoT/WoT), the Wisdom Web of Things (W2T), cloud computing, big data analytics, and interactive knowledge discovery with respect to brain cognition and mental health, and it explores how these technologies are applied to informatics-enabled brain studies and their applications.

Informatics-enabled brain studies are transforming the brain sciences, as new methodologies enhance human interpretive powers when dealing with big data sets increasingly derived from advanced neuroimaging technologies, including fMRI, PET, and MEG/EEG, as well as from other sources such as eye-tracking and wearable, portable, micro and nano devices. Such methodologies allow society to measure, collect, manage, model, mine, and interpret multiple forms of big brain data, adding to the understanding of human thought, memory, learning, decision-making, emotion, consciousness, and social behavior. They assist in building brain-inspired, human-level wisdom-computing paradigms and technologies, improving the treatment efficacy of mental health and brain disorders, and facilitating environmental control and sustainability.

Brain Informatics enables researchers and professionals to remain at the cutting edge of the latest developments in theory, technology, clinical practice, and applications in this interdisciplinary field. The journal will publish high-quality, original research papers, brief reports, and critical reviews in all theoretical, technological, clinical, and interdisciplinary studies that make up the field of brain informatics and its applications in brain-inspired intelligent systems, health studies, and beyond.

For more information, including online submission and instructions for authors, please visit the homepage: http://www.springer.com/40708

Editorial Board:

Editors-in-Chief:
Ning Zhong, Maebashi Institute of Technology, Japan
Jiming Liu, Hong Kong Baptist University, Hong Kong

Associate Editors:
Jan G. Bazan, University of Rzeszow, Poland
Lin Chen, Institute of Biophysics, Chinese Academy of Sciences, China
Kang Cheng, RIKEN Brain Science Institute, Japan
W. Art Chaovalitwongse, University of Washington, USA
Yiu-ming Cheung, Hong Kong Baptist University, Hong Kong
Karl Friston, University College London, UK
Vinod Goel, York University, Canada
Geoffrey Goodhill, University of Queensland, Australia
Yike Guo, Imperial College London, UK
Andreas Holzinger, Medical University Graz, Austria
Frank D. Hsu, Fordham University, USA
Yong He, Beijing Normal University, China
Bin Hu, Lanzhou University, China
Runhe Huang, Hosei University, Japan
Zhisheng Huang, Vrije Universiteit Amsterdam, The Netherlands
Kazuyuki Imamura, Maebashi Institute of Technology, Japan
Tianzi Jiang, Institute of Automation, Chinese Academy of Sciences, China
Marcel Just, Carnegie Mellon University, USA
Abbas Kouzani, Deakin University, Australia
Renaud Lambiotte, University of Namur, Belgium
Kuncheng Li, Xuanwu Hospital, Capital Medical University, China
Yan Li, University of Southern Queensland, Australia
Xiaohui Liu, Brunel University, UK
Kazuhiro Oiwa, National Institute of Information and Communications Technology, Japan
Marcello Pelillo, University of Venice, Italy
Tomaso Poggio, Massachusetts Institute of Technology, USA
David Powers, Flinders University, Australia
Lael Schooler, Max Planck Institute, Germany
Abdulkadir Sengur, Firat University, Turkey
Shinsuke Shimojo, California Institute of Technology, USA
Hava Siegelmann, University of Massachusetts Amherst, USA
Daniel L. Silver, Acadia University, Canada
Diego Sona, Italian Institute of Technology, Italy
Ron Sun, Rensselaer Polytechnic Institute, USA
Thomas Trappenberg, Dalhousie University, Canada
Frank Van Der Velde, University of Twente, The Netherlands
Leendert van Maanen, University of Amsterdam, The Netherlands
Juan D. Velasquez, University of Chile, Chile
Benjamin W. Wah, Chinese University of Hong Kong, Hong Kong
Gang Wang, Anding Hospital, Capital Medical University, China
Guoyin Wang, Chongqing University of Posts and Telecommunications, China
Jinglong Wu, Okayama University, Japan
Zongben Xu, Xi'an Jiaotong University, China
Yiyu Yao, University of Regina, Canada
Philip Yu, University of Illinois, Chicago, USA
Fabio Massimo Zanzotto, University of Rome "Tor Vergata", Italy
Yanqing Zhang, Georgia State University, USA
Xinian Zuo, Institute of Psychology, Chinese Academy of Sciences, China

Contact information:
Ning Zhong
Jian Yang

From ecai2014 at guarant.cz Mon Apr 28 07:40:03 2014
From: ecai2014 at guarant.cz (ECAI 2014)
Date: Mon, 28 Apr 2014 13:40:03 +0200
Subject: Connectionists: ECAI 2014 - STAIRS 2014
Message-ID: <20140428114003.17FF21742B3@gds25d.active24.cz>

===============================================================
Call for Papers

STAIRS 2014
The 7th Starting AI Researcher Symposium

Prague, Czech Republic
18-19 August 2014
(part of ECAI-2014)

http://www.ecai2014.org/stairs/

Submission Deadline: May 15
Proceedings published by IOS Press
===============================================================

The 7th European Starting AI Researcher Symposium (STAIRS-2014) will be held as a satellite event of the 21st European Conference on Artificial Intelligence (ECAI-2014) in Prague in August 2014.

STAIRS is aimed at young researchers in Europe and beyond, particularly PhD students, but also advanced Master's students and postdoctoral researchers holding a PhD for less than one year at the time of the paper submission deadline. STAIRS offers opportunities to gain experience with submitting to, and presenting at, international events with a broad scientific scope.

Accepted papers will be presented either orally or in a poster session. Both types of accepted papers will be collected in the symposium proceedings, published by IOS Press.
STAIRS will feature invited talks by:

* Francesca Rossi (University of Padova, Italy)
* Michael Wooldridge (University of Oxford, UK)

TOPICS OF INTEREST

We welcome submissions in all areas of AI, ranging from foundations to applications. Topics of interest include, amongst others:

* autonomous agents and multiagent systems
* constraints, satisfiability, and search
* knowledge representation, reasoning, and logic
* machine learning and data mining
* natural language processing
* planning and scheduling
* robotics, sensing, and vision
* uncertainty in AI
* web and knowledge-based information systems
* multidisciplinary topics

KEY DATES

* Submission deadline: 15 May 2014
* Notification of acceptance/rejection: 10 June 2014
* Camera-ready copy due: 15 June 2014
* STAIRS-2014: 18-19 August 2014
* ECAI-2014 main conference: 20-22 August 2014

SUBMISSION

All submissions should be prepared using the IOS Paper Kit, available from the STAIRS-2014 website, and must not exceed 10 (ten) pages in length. To submit, upload your paper to EasyChair:

http://www.easychair.org/conferences/?conf=stairs2014

The principal author of the paper must be a (PhD) student or have obtained their PhD less than one year before the submission deadline.

SYMPOSIUM CHAIRS

Ulle Endriss (ILLC, University of Amsterdam)
Joao Leite (Universidade NOVA de Lisboa)

PROGRAMME COMMITTEE (to be completed)

Natasha Alechina (University of Nottingham)
Jose Julio Alferes (Universidade NOVA de Lisboa)
Pietro Baroni (University of Brescia)
Ronen Brafman (Ben-Gurion University)
Gerhard Brewka (Leipzig University)
Hubie Chen (Universidad del País Vasco and Ikerbasque)
Eric De La Clergerie (INRIA)
Michael Fink (Vienna University of Technology)
Wiebe van der Hoek (University of Liverpool)
Joerg Hoffmann (Saarland University)
Paolo Liberatore (University of Rome "La Sapienza")
Weiru Liu (Queen's University Belfast)
Ines Lynce (INESC-ID/IST, University of Lisbon)
Pierre Marquis (CRIL-CNRS and Université d'Artois)
Nicolas Maudet (Université Paris 6)
Hector Palacios (Universitat Pompeu Fabra)
David Schlangen (Bielefeld University)
Steven Schockaert (Cardiff University)
Elizabeth Sklar (CUNY and University of Liverpool)
Ivan Titov (ILLC, University of Amsterdam)
Stefan Woltran (Vienna University of Technology)
Pinar Yolum (Bogazici University)
J. Marius Zoellner (Karlsruhe Institute of Technology)

===============================================================

From erhard.wieser at tum.de Mon Apr 28 08:45:08 2014
From: erhard.wieser at tum.de (Wieser, Erhard)
Date: Mon, 28 Apr 2014 12:45:08 +0000
Subject: Connectionists: Call for Papers: ICDL-EPIROB 2014
Message-ID: <65FB3D535850A24A89847846D8264700BF6E30@BADWLRZ-SWMBX11.ads.mwn.de>

========================================================
Call for Papers, Tutorials and Thematic Workshops

IEEE ICDL-EPIROB 2014
The Fourth Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics

Palazzo Ducale, Genoa, Italy
October 13-16, 2014
http://www.icdl-epirob.org/
========================================================

== Conference description

The past decade has seen the emergence of a new scientific field that studies how intelligent biological and artificial systems develop sensorimotor, cognitive and social abilities over extended periods of time, through dynamic interactions with their physical and social environments. This field lies at the intersection of a number of scientific and engineering disciplines, including Neuroscience, Developmental Psychology, Developmental Linguistics, Cognitive Science, Computational Neuroscience, Artificial Intelligence, Machine Learning, and Robotics. Various terms have been associated with this new field, such as Autonomous Mental Development, Epigenetic Robotics, and Developmental Robotics, and several scientific meetings have been established. The two most prominent conference series of this field, the International Conference on Development and Learning (ICDL) and the International Conference on Epigenetic Robotics (EpiRob), are now joining forces for the fourth time and invite submissions for a joint conference in 2014, to explore and extend the interdisciplinary boundaries of this field.

== Keynote speakers

Prof. Dana Ballard, Dept. of Computer Sciences, University of Texas at Austin, USA
Prof. Cristina Becchio, Dept. of Psychology, University of Turin, Italy
Prof. Tetsuro Matsuzawa, Primate Research Institute, Kyoto University, Japan
Prof. Harold Bekkering, Donders Institute, Radboud University, Nijmegen, The Netherlands

== Call for Submissions

We invite submissions for this exciting window into the future of developmental sciences. Submissions which establish novel links between brain, behavior and computation are particularly encouraged.

== Topics of interest include (but are not limited to):

* the development of perceptual, motor, cognitive, emotional, social, and communication skills in biological systems and robots;
* embodiment;
* general principles of development and learning;
* interaction of nature and nurture;
* sensitive/critical periods;
* developmental stages;
* grounding of knowledge and development of representations;
* architectures for cognitive development and open-ended learning;
* neural plasticity;
* statistical learning;
* reward and value systems;
* intrinsic motivations, exploration and play;
* interaction of development and evolution;
* use of robots in applied settings such as autism therapy;
* epistemological foundations and philosophical issues.

Any of the topics above can be studied simultaneously from the neuroscience, psychology or modeling/robotics point of view.

== Submissions will be accepted in several formats:

1. Full six-page paper submissions: Accepted papers will be included in the conference proceedings and will be selected for either an oral presentation or a featured poster presentation. Featured posters will have a 1-minute "teaser" presentation as part of the main conference session and will be showcased in the poster sessions. A maximum of two extra pages is acceptable, for a publication fee of 100 euros per page.

2. Two-page poster abstract submissions: To encourage discussion of late-breaking results, or for work that is not sufficiently mature for a full paper, we will accept 2-page abstracts. These submissions will NOT be included in the conference proceedings. Accepted abstracts will be presented during poster sessions.

3. Tutorials and workshops: We invite experts in different areas to organize either a tutorial or a workshop to be held on the first day of the conference. Tutorials are meant to provide insights into specific topics as well as overviews that will inform the interdisciplinary audience about the state of the art in child development, neuroscience, robotics, or any of the other disciplines represented at the conference. A workshop is an opportunity to present a topic cumulatively. Workshops can be half- or full-day in duration, including oral presentations as well as posters. Submission format: two pages, including title, list of speakers, concept and target audience.

All submissions will be peer reviewed. Submission website through PaperPlaza at: http://ras.papercept.net

== Important dates

May 15th, 2014: paper submission deadline (EXTENDED)
July 15th, 2014: author notification
August 31st, 2014: final version (camera-ready) due
October 13th-16th, 2014: conference

== Program committee

General Chairs: Giorgio Metta (IIT, Genoa), Mark Lee (Univ. of Aberystwyth), Ian Fasel (Univ. of Arizona)
Bridge Chairs: Giulio Sandini (IIT, Genoa), Masako Myowa-Yamakoshi (Univ. Kyoto)
Program Chairs: Lorenzo Natale (IIT, Genoa), Erol Sahin (METU, Ankara)
Publications Chair: Francesco Nori (IIT, Genoa)
Publicity Chairs: Katrin Lohan (Heriot-Watt, Edinburgh), Gordon Cheng (TUM, Munich)
Local Chairs: Alessandra Sciutti (IIT, Genoa), Vadim Tikhanoff (IIT, Genoa)
Finance Chair: Andrea Derito (IIT, Genoa)

From joerg.luecke at uni-oldenburg.de Mon Apr 28 05:54:47 2014
From: joerg.luecke at uni-oldenburg.de (Jörg Lücke)
Date: Mon, 28 Apr 2014 11:54:47 +0200
Subject: Connectionists: Two PhD Positions and one Postdoc Position in Machine Learning / Sensory Data Processing
Message-ID: <535E2567.2030804@uni-oldenburg.de>

The Machine Learning research group at the University of Oldenburg is seeking to fill

Two PhD Research Positions
and
One (postdoctoral) Research Associate Position

All positions are part of the Machine Learning group, which develops learning and inference technology for sensory data. We pursue basic research, develop new technology, and apply our approaches to different tasks. Our research combines modern probabilistic data descriptions, modern computer technology, and insights from the neurosciences. We contribute to the improvement of current methods for computer hearing, pattern recognition and computer vision, as well as to the understanding of biological and artificial intelligence. Research will be conducted in close collaboration with leading international and national research labs. Our research field can be considered part of the Data Sciences, Computational Sciences, or Big Data approaches (to name some terms recently used in the media).

Salary levels are based on the TV-L scale of the German public sector (öffentlicher Dienst). After the deduction of health insurance, pension contributions and other taxes, the salary amounts to approximately 1600 EUR per month for the PhD positions and to about 2000 EUR per month for the (postdoctoral) Research Associate position. Depending on the experience of the candidates, the salary can be higher. Please note, however, that the only definitive sources for all information on the positions, including salary, job description and application/selection procedure, are the central websites of the University of Oldenburg.
For the PhD positions see: http://www.uni-oldenburg.de/stellen/?stelle=63323
For the postdoc position see: http://www.uni-oldenburg.de/stellen/?stelle=63322

Depending on the skills and interests of the candidates, the research focus will be on the development of new probabilistic learning algorithms and/or their application to high-dimensional sensory data. Projects can emphasize basic research on general-purpose learning and pattern recognition, or applications of algorithms to specific tasks.

Candidates for the PhD positions must hold a Master's degree in Physics, Computer Science, Mathematics or a closely related subject (at the latest by the time the contract is signed). Analytical/mathematical skills and programming skills (e.g. Python, Matlab, C++) are required of all candidates. Experience with Machine Learning algorithms is desirable. Experience with acoustic data or other types of sensory data, and an interest in the neurosciences, are a plus but not strictly required.

Candidates for the (postdoctoral) Research Associate position must hold a Master's degree or (preferably) a PhD/Doctoral degree in Physics, Computer Science, Mathematics or a closely related subject (alternatively, a statement can be provided that a PhD/Doctoral degree will be issued when the position is taken up or shortly thereafter). Analytical/mathematical skills, programming skills, and experience with Machine Learning algorithms are required. Experience with acoustic data or other types of sensory data, and an interest in the neurosciences, are a plus but not strictly required.

The (postdoctoral) Research Associate position is suitable for part-time work. All positions can be filled immediately and are initially available for two years, with an option for extension.

The appointed researchers will be part of a brand-new working environment: the research group is currently being established through a strategic investment in Machine Learning technology at Oldenburg, and is located in a new building. The Cluster of Excellence Hearing4all is part of the German Excellence Initiative, which funds top-tier research in Germany. The cluster comprises many interdisciplinary research groups that are currently being set up or extended. As a consequence, many new PhD, postdoc and faculty positions allow for interesting collaboration opportunities and provide an attractive scientific and social environment. The city of Oldenburg is rated among the German cities with the highest quality of living; it is close to the North Sea and near the cities of Bremen and Hamburg.

For more information about the Machine Learning research group visit: http://www.uni-oldenburg.de/ml/
For more information about the Cluster of Excellence Hearing4all visit: http://hearing4all.eu/EN/

The University of Oldenburg is dedicated to increasing the percentage of women in science. Therefore, female candidates are particularly encouraged to apply. According to § 21 III NHG (legislation governing Higher Education in Lower Saxony), preference will be given to female candidates in cases of equal qualification. Handicapped applicants will be given preference if equally qualified.

Please send your application, including the usual documents (and contact details for reference letters), preferably electronically (PDF) to Jörg Lücke, or by mail to: Carl von Ossietzky Universität Oldenburg, Fakultät VI, Machine Learning, z.Hd. Frau Jennifer Köllner, 26111 Oldenburg, Germany. Please use "Research Position (PhD)" as the subject line for applications to the PhD positions, and "Research Associate Position (postdoc)" for applications to the (postdoctoral) Research Associate position. The application deadline is 15 June 2014. Applications received after this date are not guaranteed to enter the selection process.

--
Jörg Lücke (PhD)
Associate Professor
Machine Learning Lab and Cluster of Excellence Hearing4all
Department of Medical Physics and Acoustics
School of Medicine and Health Sciences
University of Oldenburg
26111 Oldenburg
Germany
www.uni-oldenburg.de/ml

From igel at diku.dk Mon Apr 28 09:29:32 2014
From: igel at diku.dk (Christian Igel)
Date: Mon, 28 Apr 2014 15:29:32 +0200
Subject: Connectionists: PhD course on Information Geometry in Learning and Optimization
Message-ID:

PhD course on Information Geometry in Learning and Optimization

We invite you to participate in an international PhD course on Information Geometry in Learning and Optimization at Copenhagen University, September 22-26, 2014.

The course will provide:

* A crash course on Riemannian geometry and numerical tools for applications of Riemannian geometry
* Introductions to Information Geometry and its role in Machine Learning and Stochastic Optimization

Distinguished speakers:

* Shun'ichi Amari, RIKEN Brain Science Institute
* Nihat Ay, Max Planck Institute for Mathematics in the Sciences and Universität Leipzig
* Nikolaus Hansen, Université Paris-Sud and Inria Saclay - Île-de-France
* Jan Peters, Technische Universität Darmstadt and Max Planck Institute for Intelligent Systems

For more information, visit the course webpage: http://image.diku.dk/MLLab/IG.php

To register, please email Susan Nasirumbi Ipsen at suntonn at di.ku.dk. The deadline for registration is July 31. Seats are assigned on a first-come, first-served basis, so please register early.

Best regards,
Aasa Feragen and Christian Igel

From juergen at idsia.ch Mon Apr 28 10:42:30 2014
From: juergen at idsia.ch (Schmidhuber Juergen)
Date: Mon, 28 Apr 2014 16:42:30 +0200
Subject: Connectionists: INNS Request for nominations: Hebb, Helmholtz, Gabor, and Young Investigator awards
Message-ID: <4B646239-7B31-4716-B670-D8D5635D635B@idsia.ch>

The International Neural Network Society's Awards Program is established to recognize individuals who have made outstanding contributions in the field of Neural Networks.

Up to three awards, at most one in each category, of up to $1,000 ($500 cash award plus up to $500 travel award), are presented annually to senior, highly accomplished researchers for outstanding contributions made in the field of Neural Networks.

The Hebb, Helmholtz and Gabor Awards:
- The Hebb Award recognizes achievement in biological learning.
- The Helmholtz Award recognizes achievement in sensation/perception.
- The Gabor Award recognizes achievement in engineering/application.

Young Investigator Awards: Up to two awards of $500 each are presented annually to individuals with no more than five years of postdoctoral experience and who are under forty years of age, for significant contributions in the field of Neural Networks.

Nominations:

1. The Awards Committee should receive nominations of no more than two pages in length, specifying:
- The award category (Hebb, Helmholtz, Gabor, or Young Investigator) for which the candidate is being nominated.
- The reasons for which the nominee should be considered for the award.
- A list of at least five of the nominee's important and published papers.
2. The curriculum vitae of both the nominee and the nominator must be included with the nomination, including the name, address, position/title, phone, fax, and e-mail address of both the nominee and the nominator.

3. The nominator must be an INNS member in good standing. Nominees do not have to be INNS members. If an award recipient is not an INNS member, they shall receive a free one-year INNS membership.

4. Nominators may not nominate themselves or their family members.

5. Individuals may not receive the same INNS Award more than once.

All nominations will be considered by the Awards Committee, and selected ones will be forwarded to the INNS Board of Governors, along with the Committee's recommendations for award recipients. Voting shall be performed by the entire BoG. The INNS Award Committee members must be INNS Governors in the year that they are appointed. The 2014 Committee consists of Prof. Jürgen Schmidhuber, Prof. Leonid Perlovsky, and Prof. Hava Siegelmann.

Please email the 2015 nominations along with their attachments directly to the chair of the Awards Committee at juergen at idsia.ch, with a copy to the Secretary of the Society at bill at billhowell.ca, by June 1, 2014. Please use the following subject line in the email: INNS award nomination.

Please find additional information here: http://www.inns.org/awards

Jürgen Schmidhuber
http://www.idsia.ch/~juergen/

From pbrazdil at inescporto.pt Mon Apr 28 11:21:41 2014
From: pbrazdil at inescporto.pt (Pavel Brazdil)
Date: Mon, 28 Apr 2014 16:21:41 +0100
Subject: Connectionists: ECAI-14 Workshop and Tutorial on Metalearning & Algorithm Selection
Message-ID:

MetaSel - Meta-learning & Algorithm Selection
*********************************************
ECAI-2014 Workshop, Prague, 19 August 2014 (date altered)
http://metasel2014.inescporto.pt/

2nd Announcement & Call for Papers

Objectives

This ECAI-2014 workshop will provide a platform for discussing the nature of algorithm selection, a problem which arises in many diverse domains, such as machine learning, data mining, optimization and satisfiability solving, among many others.

Algorithm selection and configuration are increasingly relevant today. Researchers and practitioners from all branches of science and technology face a large choice of parameterized machine learning algorithms, with little guidance as to which techniques to use. Moreover, data mining challenges frequently remind us that algorithm selection and configuration are crucial in order to achieve the best performance, and they drive industrial applications.

Meta-learning leverages knowledge of past algorithm applications to select the best techniques for future applications, and offers effective techniques that are superior to humans both in terms of the end result and, especially, in the time required to achieve it. In this workshop we will discuss different ways of exploiting meta-learning techniques to identify the potentially best algorithm(s) for a new task, based on meta-level information and prior experiments. We will also discuss the prerequisites for effective meta-learning systems, such as recent infrastructure like OpenML.org.

Many problems of today require that solutions be elaborated in the form of complex systems or workflows which include many different processes or operations. Constructing such complex systems or workflows requires extensive expertise, and could be greatly facilitated by leveraging planning, meta-learning and intelligent system design. This task is inherently interdisciplinary, as it builds on expertise in various areas of AI.

The workshop will include invited talks, presentations of peer-reviewed papers, and panels. The invited talks will be given by Lars Kotthoff and Frank Hutter (to be confirmed).

The target audience of this workshop includes researchers (Ph.D.'s) and research students interested in exchanging their knowledge about:
- problems and solutions of algorithm selection and algorithm configuration
- how to use software and platforms to select algorithms in practice
- how to provide advice to end users about which algorithms to select in diverse domains, including optimization, SAT, etc., and how to incorporate this knowledge in new platforms.

We specifically aim to attract researchers in the diverse areas that have encountered the problem of algorithm selection, and thus to promote the exchange of ideas and possible collaborations.

Topics

Algorithm selection & configuration
Planning to learn and construct workflows
Applications of workflow planning
Meta-learning and exploitation of meta-knowledge
Exploitation of ontologies of tasks and methods
Exploitation of benchmarks and experimentation
Representation of learning goals and states in learning
Control and coordination of learning processes
Meta-reasoning
Experimentation and evaluation of learning processes
Layered learning
Multi-task and transfer learning
Learning to learn
Intelligent design
Performance modeling
Process mining

Submissions and Review Process

Important dates:
Submission deadline: 25 May 2014
Notification: 23 June 2014

Full papers can consist of a maximum of 8 pages, extended abstracts of up to 2 pages, in the ECAI format. Each submission must be made online via the EasyChair submission interface. Submissions can be updated at will before the submission deadline. Electronic versions of accepted submissions will also be made publicly available on the conference web site. The only accepted format for submitted papers is PDF.

Submissions are possible either as a full paper or as an extended abstract. Full papers should present more advanced work, covering research or a case application. Extended abstracts may present current, recently published or future research, and can cover a wider scope. For instance, they may be position statements, offer a specific scientific or business problem to be solved by machine learning (ML) / data mining (DM), or describe an ML / DM demo or installation.

Each submission will be evaluated on the basis of relevance, significance of contribution, technical quality, scholarship, and quality of presentation, by at least two members of the program committee. All accepted submissions will be included in the workshop proceedings. At least one author of each accepted full paper or extended abstract is required to attend the workshop to present the contribution.

A selection will be made of the best paper and runners-up, and these will be presented in the plenary session. The remaining accepted submissions will be presented in the form of short talks and a poster session. All accepted papers, including those presented as posters, will be published in the workshop proceedings (possibly as CEUR Workshop Proceedings). The papers selected for plenary presentation will be identified in the proceedings.

Organizers:

Pavel Brazdil, FEP, Univ. of Porto / Inesc Tec, Portugal, pbrazdil at inescporto.pt
Carlos Soares, FEUP, Univ. of Porto / Inesc Tec, Portugal, csoares at fe.up.pt
Joaquin Vanschoren, Eindhoven University of Technology (TU/e), Eindhoven, The Netherlands, j.vanschoren at tue.nl
Lars Kotthoff, University College Cork, Cork, Ireland, larsko at 4c.ucc.ie

Program Committee:

Pavel Brazdil, LIAAD-INESC Porto L.A. / FEP, University of Porto, Portugal
André C. P. Carvalho, USP, Brasil
Claudia Diamantini, Università Politecnica delle Marche, Italy
Johannes Fuernkranz, TU Darmstadt, Germany
Christophe Giraud-Carrier, Brigham Young Univ., USA
Krzysztof Grabczewski, Nicolaus Copernicus University, Poland
Melanie Hilario, Switzerland
Frank Hutter, University of Freiburg, Germany
Christopher Jefferson, University of St Andrews, UK
Alexandros Kalousis, U Geneva, Switzerland
Jörg-Uwe Kietz, U. Zurich, Switzerland
Lars Kotthoff, University College Cork, Ireland
Yuri Malitsky, University College Cork, Ireland
Bernhard Pfahringer, U Waikato, New Zealand
Vid Podpecan, Jozef Stefan Institute, Slovenia
Ricardo Prudêncio, Univ. Federal de Pernambuco, Recife (PE), Brasil
Carlos Soares, FEP, University of Porto, Portugal
Guido Tack, Monash University, Australia
Joaquin Vanschoren, U. Leiden / KU Leuven
Ricardo Vilalta, University of Houston, USA
Filip Železný, CVUT, Prague, Czech Republic

Previous events

This workshop is closely related to PlanLearn-2012, which took place at ECAI-2012, and to other predecessor workshops in this series.

Tutorial on Metalearning and Algorithm Selection at ECAI-2014
*************************************************************
18 August 2014
http://metasel.inescporto.pt/

Algorithm selection and configuration are increasingly relevant today. Researchers and practitioners from all branches of science and technology face a large choice of parameterized algorithms, with little guidance as to which techniques to use. Moreover, data mining challenges frequently remind us that algorithm selection and configuration are crucial in order to achieve the best performance and drive industrial applications. Meta-learning leverages knowledge of past algorithm applications to select the best techniques for future applications, and offers effective techniques that are superior to humans both in terms of the end result and, especially, within a limited time budget.

In this tutorial, we elucidate the nature of algorithm selection and how it arises in many diverse domains, such as machine learning, data mining, optimization and SAT solving. We show that it is possible to use meta-learning techniques to identify the potentially best algorithm(s) for a new task, based on meta-level information and prior experiments. We also discuss the prerequisites for effective meta-learning systems, and how recent infrastructures, such as OpenML.org, allow us to build systems that effectively advise users on which algorithms to apply.

The intended audience includes researchers (Ph.D.'s), research students and practitioners interested in learning about, or consolidating their knowledge of, the state of the art in algorithm selection and algorithm configuration: how to use Data Mining software and platforms to select algorithms in practice, and how to provide advice to end users about which algorithms to select in diverse domains, including optimization, SAT, etc., and incorporate this knowledge in new platforms.

The participants should bring their own laptops.
From ted.carnevale at yale.edu Mon Apr 28 13:58:04 2014
From: ted.carnevale at yale.edu (Ted Carnevale)
Date: Mon, 28 Apr 2014 13:58:04 -0400
Subject: Connectionists: NEURON Summer Course openings
Message-ID: <535E96AC.2020609@yale.edu>

Space is still available in the NEURON Summer Course, but you'll have to act quickly to make sure you qualify for the early registration rates.

NEURON Fundamentals, June 21-24
This addresses all aspects of using NEURON to model individual neurons, and also introduces parallel simulation and the fundamentals of modeling networks.

Parallel Simulation with NEURON, June 25-26
This is for users who are already familiar with NEURON and need to create models that will run on parallel hardware.

Apply by Friday, May 9, and pay only
$1050 for the Fundamentals course
$650 for the Parallel Simulation course
$1600 for both

Registration fees for applications received after May 9 are
$1200 for Fundamentals
$750 for Parallel Simulation
$1800 for both

These fees cover handout materials plus food and housing on the campus of UC San Diego. Registration is limited, and the registration deadline is Friday, May 30, 2014.

For more information and the on-line registration form, see http://www.neuron.yale.edu/neuron/courses

--Ted

From dayan at gatsby.ucl.ac.uk Tue Apr 29 09:05:31 2014
From: dayan at gatsby.ucl.ac.uk (Peter Dayan)
Date: Tue, 29 Apr 2014 14:05:31 +0100
Subject: Connectionists: Gatsby Computational Neuroscience Unit, PhD Programme
Message-ID: <20140429130531.GE24277@gatsby.ucl.ac.uk>

Gatsby Computational Neuroscience Unit, University College London
4-year PhD Programme

The Gatsby Unit is a centre for theoretical neuroscience and machine learning, focusing on unsupervised, semi-supervised and reinforcement learning, neural dynamics, population coding, Bayesian and nonparametric statistics, kernel methods, and applications of these to the analysis of perceptual processing, neural data, machine vision and bioinformatics. It provides a unique opportunity for a critical mass of theoreticians to interact closely with each other, and with other world-class research groups in related departments at UCL, including the Sainsbury Wellcome Centre for Neural Circuits and Behaviour (with which we will shortly share a building), Biosciences, Computer Science, Functional Imaging, Physics, Physiology, Psychology, Neurology, Ophthalmology and Statistics, the cross-faculty Centre for Computational Statistics and Machine Learning, and also with other UK and overseas universities.

The Unit has openings for exceptional PhD candidates. Applicants should have a strong analytical background, a keen interest in neuroscience and/or machine learning, and a relevant first degree, for example in Computer Science, Engineering, Mathematics, Neuroscience, Physics, Psychology or Statistics.

The PhD programme lasts four years, including a first year of intensive instruction in techniques and research in theoretical neuroscience and machine learning. All students are fully funded, regardless of nationality. The Unit also welcomes applications from students with pre-secured funding or who are currently soliciting other scholarships/studentships.

We will review applications as soon as they are complete (including a CV, statement of research interests and letters from three referees) until the positions are filled. Early application is thus advised.

Full details of our programme, and how to apply, are available at: http://www.gatsby.ucl.ac.uk/teaching/phd

For further details of research interests, please see http://www.gatsby.ucl.ac.uk/research.html and the individual faculty webpages at http://www.gatsby.ucl.ac.uk/members.html

From marc.toussaint at informatik.uni-stuttgart.de Tue Apr 29 04:31:23 2014
From: marc.toussaint at informatik.uni-stuttgart.de (Marc Toussaint)
Date: Tue, 29 Apr 2014 10:31:23 +0200
Subject: Connectionists: Call for Participation: Autonomous Learning Summer School 2014, Leipzig, Germany
Message-ID: <535F635B.4050504@informatik.uni-stuttgart.de>

Summer School on Autonomous Learning
September 1-4, 2014
Max Planck Institute for Mathematics in the Sciences, Leipzig
supported by the DFG Priority Programme 1527

Autonomous Learning research aims at understanding how autonomous systems can learn efficiently from interaction with the environment, especially by taking an integrated approach to decision making and learning that allows systems to decide autonomously on actions, representations, hyperparameters and model structures for the purpose of efficient learning.

In this summer school, international and national experts will introduce the core concepts and related theory for autonomous learning in real-world environments. We hope to foster the enthusiasm of young researchers for this exciting research area, giving them the opportunity to meet leading experts in the field and similarly interested students. The tutorials are structured around three themes: 1. learning representations, 2. acting to learn (exploration), and 3. learning to act in real-world environments (robotics).

This course is free of charge, but participants have to cover their own travel, room and board. Registration is possible with the application form on this website until *May 31*.

More information at the Priority Programme website and at MPI MiS Leipzig:
http://autonomous-learning.org/summer-school-2014-on-autonomous-learning/
http://www.mis.mpg.de/calendar/conferences/2014/al.html

Speakers:
* Shun-ichi Amari (RIKEN, Japan)
* Satinder Singh (University of Michigan, USA)
* Tamim Asfour (KIT Karlsruhe)
* Michael Beetz (Bremen University)
* Matthias Bethge (University of Tübingen, MPI for Biological Cybernetics, Bernstein Center for Computational Neuroscience)
* Keyan Ghazi-Zahedi (MPI for Mathematics in the Sciences)
* Thomas Martinetz (University of Lübeck)
* Helge Ritter (Bielefeld University)
* Friedrich Sommer (Redwood Center for Theoretical Neuroscience, UC Berkeley)
* Marc Toussaint (Stuttgart University)

Scientific Organizers:
Nihat Ay (MPI for Mathematics in the Sciences, Leipzig)
Marc Toussaint (Stuttgart University)

Hope to see you in Leipzig!

From hudlicka at cs.umass.edu Tue Apr 29 08:13:52 2014
From: hudlicka at cs.umass.edu (Eva Hudlicka)
Date: Tue, 29 Apr 2014 08:13:52 -0400
Subject: Connectionists: CFP: "Computational Modeling of Cognition-Emotion Interactions" - Workshop at CogSci 2014
Message-ID: <3B2D9B9B-7E39-49FA-93AF-3028835FDF49@cs.umass.edu>

Workshop on "Computational Modeling of Cognition-Emotion Interactions: Relevance to Mechanisms of Affective Disorders and Psychotherapeutic Action"
https://people.cs.umass.edu/~hudlicka/cogsci2014-cog-em.html

--------------------------------------------------------------------------
In conjunction with CogSci 2014 (http://cognitivesciencesociety.org/conference2014/index.html)
July 23, 2014
Quebec City, Quebec
--------------------------------------------------------------------------

Objectives

Recent years have witnessed increasing interest in developing computational models of emotion and emotion-cognition interaction, within the emerging area of computational affective science. At the same time, emotion theorists and clinical psychologists have been recognizing the importance of moving beyond descriptive characterizations of affective disorders and identifying the underlying mechanisms that mediate both psychopathology and psychotherapeutic action. Computational models of cognition-emotion interactions have the potential to facilitate more accurate assessment and diagnosis of affective disorders, and to provide a basis for more efficient and targeted approaches to treatment, through an improved understanding of the mechanisms of therapeutic action.

In spite of the significant synergy that could result from a dialogue among researchers and practitioners in affective modeling, emotion research and clinical psychology, limited interaction exists among these communities. The goal of this workshop is to provide a forum for interdisciplinary dialogue among the members of these research communities, and to explore how computational models of emotion-cognition interaction can help elucidate the mechanisms mediating affective disorders, as well as the mechanisms of therapeutic action.

Workshop Format

To facilitate cross-disciplinary discussion and interaction, the workshop format will emphasize moderated panels, small working groups, and open discussion, in addition to the traditional paper sessions. To this end, we encourage submissions of proposals for discussion topics, panels and small working groups.

Keynote address: Keith Oatley, University of Toronto - "The cognitive bases of emotions, emotional disorders and psychotherapy."

Submission Guidelines

Interested participants should submit extended abstracts (1-3 pages), discussion and working group topics (1 page), or panel proposals (1-2 pages) to hudlicka at cs.umass.edu by June 1. Relevant topics include:

* Which processes mediating cognition-emotion interactions are sufficiently well understood to support computational modeling (e.g., affective biases on attention & perception; emotion regulation; cognitive appraisal)?
* How can models of these processes contribute to an understanding of the mechanisms of therapeutic action, across different types of psychotherapies (e.g., cognitive, psychodynamic, emotion-focused)?
* What are the relative benefits and drawbacks of the dominant theoretical perspectives on emotion with respect to computational models of emotion-cognition interaction and therapeutic action (e.g., discrete/categorical models, dimensional models (PAD), componential models)?
* What are the best representational and reasoning approaches for modeling cognitive-affective schemas and their transformation during therapy? Can we characterize the differences in these transformations across distinct therapeutic approaches (e.g., cognitive, metacognitive, emotion-focused, motivational interviewing, psychodynamic)?
* What are the most appropriate computational methods for modeling the distinct modalities of affective processes (e.g., physiological/somatic, expressive/behavioral, cognitive)?
* How can we model intermodal interactions across processes operating at different time scales?
* What types of data are necessary to develop these models, and how can these be obtained?
* For a given affective process and modality, what criteria determine the best level of model resolution (e.g., models of lower-level processes via connectionist methods vs. higher-level symbolic models)?
* How can we validate computational models of cognition-emotion interactions and therapeutic action, and what are the limits of this validation (e.g., validation of detailed symbolic models hypothesizing specific internal mental constructs, such as goals or plans, may not be possible with current technologies)?

Addressing these questions from a multi-disciplinary perspective will provide the context within which concrete gaps in both theoretical knowledge and methodologies can be identified, and research priorities established.

Submitters will receive notification of acceptance/rejection by June 15. For more information see: https://people.cs.umass.edu/~hudlicka/cogsci2014-cog-em.html

Post-Workshop Publication

The workshop papers will be published as a separate report, and both the papers and the presentations will be made available on the workshop web site. The workshop organizers will also pursue the possibility of a special issue of a relevant journal (TBD) and/or an edited volume in the Affective Science Series published by Oxford University Press.

Program Committee

Eva Hudlicka, Chair, Psychometrix Associates & University of Massachusetts-Amherst
Michael Arbib, University of Southern California
Jorge Armony, McGill University
Luc Beaudoin, Simon Fraser University
Jean-Marc Fellous, University of Arizona
Ian Horswill, Northwestern University
Richard Lane, University of Arizona
Matthias Scheutz, Tufts University

----------------
Eva Hudlicka, PhD
Psychometrix Associates, Inc.
School of Computer Science - U.Mass.-Amherst
psychometrixassociates.com

From gunnar.blohm at gmail.com Tue Apr 29 11:38:15 2014
From: gunnar.blohm at gmail.com (Gunnar Blohm)
Date: Tue, 29 Apr 2014 11:38:15 -0400
Subject: Connectionists: CoSMo 2014: extended deadline! - May 5, 2014
Message-ID: <535FC767.5040304@queensu.ca>

Due to continued interest, we have decided to extend the application deadline for CoSMo 2014! This is a tremendous opportunity for students and postdocs to get hands-on training in computational neuroscience.

*Extended deadline: May 5, 2014*

http://www.compneurosci.com/CoSMo/

Best,
Gunnar Blohm, Konrad Körding, Paul Schrater (organizers)

--
-------------------------------------------------------
Dr. Gunnar BLOHM
Assistant Professor in Computational Neuroscience
Centre for Neuroscience Studies, Departments of Biomedical and Molecular Sciences, Mathematics & Statistics, and Psychology, School of Computing, and Canadian Action and Perception Network (CAPnet)
Queen's University
18, Stuart Street
Kingston, Ontario, Canada, K7L 3N6
Tel: (613) 533-3385
Fax: (613) 533-6840
Email: Gunnar.Blohm at QueensU.ca
Web: http://www.compneurosci.com/
From kkuehnbe at uos.de Wed Apr 30 04:20:38 2014
From: kkuehnbe at uos.de (Kai-Uwe Kuehnberger)
Date: Wed, 30 Apr 2014 10:20:38 +0200
Subject: Connectionists: Call for Papers for the BICA 2014 Symposium on "Neural-Symbolic Networks for Cognitive Capacities"
Message-ID: <5360B256.7010905@uos.de>

++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++

Call for Papers for the

== BICA 2014 Symposium on Neural-Symbolic Networks for Cognitive Capacities ==

= WEBPAGE =

https://sites.google.com/site/bica2014nsncc/

= SCOPE =

Researchers in artificial intelligence and cognitive systems modelling continue to face foundational challenges in their quest to develop plausible models and implementations of cognitive capacities and intelligence. One of the methodological core issues is the question of the integration of sub-symbolic and symbolic approaches to knowledge representation, learning and reasoning in cognitively-inspired models.

Network-based approaches often enable flexible tools which can discover and process the internal structure of (possibly large) data sets. They promise to give rise to efficient signal-processing models which are biologically plausible and optimally suited for a wide range of applications, whilst possibly also offering an explanation of cognitive phenomena of the human brain. Still, the extraction of high-level explicit (i.e. symbolic) knowledge from distributed low-level representations must thus far be considered a mostly unsolved problem.

In recent years, network-based models have seen significant advancement in the wake of the development of the new "deep learning" family of approaches to machine learning. Due to the hierarchically structured nature of the underlying models, these developments have also reinvigorated efforts to overcome the neural-symbolic divide.

The aim of the symposium is to bring together recent work in the field of network-based information processing in a cognitive context which bridges the gap between different levels of description and paradigms, and which sheds light on canonical solutions or principled approaches occurring in the context of neural-symbolic integration for modelling or implementing cognitive capacities.

= TOPICS =

We particularly encourage submissions related to the following non-exhaustive list of topics:

- new learning paradigms of network-based models addressing different knowledge levels
- biologically plausible methods and models
- integration of network models and symbolic reasoning
- cognitive systems using neural-symbolic paradigms
- extraction of symbolic knowledge from network-based representations
- challenging applications which have the potential to become benchmark problems
- visionary papers concerning the future of network approaches to cognitive modelling

= DATES & SUBMISSIONS =

The deadlines for submissions, author feedback, etc. follow the normal BICA 2014 deadlines (and are thus subject to the same changes and extensions). Please also have a look at http://bicasociety.org/meetings/2014/cfp/ for possibly updated dates and deadlines. The current schedule is:

- Paper submission due: May 26, 2014
- Paper review feedback: June 14, 2014
- Final papers due: August 1, 2014

Submissions can be made either as 500-word abstracts or as papers of up to 6 pages. For details on the submission process, formats, etc., please refer to the BICA 2014 Call for Papers (http://bicasociety.org/meetings/2014/cfp/) and the BICA 2014 submission guidelines (http://bicasociety.org/meetings/2014/submission/).

= SYMPOSIUM CO-CHAIRS =

- Terrence C. Stewart, Centre for Theoretical Neuroscience, University of Waterloo, Canada
- Tarek R. Besold, Institute of Cognitive Science, University of Osnabrück, Germany

From pel at gatsby.ucl.ac.uk Wed Apr 30 03:22:31 2014
From: pel at gatsby.ucl.ac.uk (Peter Latham)
Date: Wed, 30 Apr 2014 08:22:31 +0100 (BST)
Subject: Connectionists: Fully funded combined theoretical and experimental neuroscience PhD at UCL
Message-ID:

We are looking for an exceptional candidate to carry out a combined theoretical and experimental PhD. The student will receive training from two labs: Professor Peter Latham (Gatsby Computational Neuroscience Unit; theory) and Professor Michael Hausser (Wolfson Institute for Biomedical Research; experiment). The candidate will initially be based at the Gatsby Unit with Professor Latham, taking courses in computational neuroscience and developing theories at the circuit level; the rest of the PhD will be spent with Professor Hausser, testing those theories using advanced experimental tools for linking neural circuit function with behavior (most notably targeted optogenetics and fast 2-photon imaging).

The applicant should have a strong analytical background and a keen interest in neuroscience. Experience in an experimental lab is desirable but not absolutely necessary. The applicant will be fully funded for four years, with no restrictions on nationality. The start date is flexible (but ideally by 1 October 2014). The closing date is May 15!

Applicants should apply directly to Professor Latham (pel at gatsby.ucl.ac.uk). Please send, in PDF or plain text format where possible, the following:
- CV
- statement of research interests
- transcript(s) for previous degrees
and arrange for three academic letters of recommendation to be sent, also to pel at gatsby.ucl.ac.uk.
For further details of research interests please visit our websites: Latham: ?http://www.gatsby.ucl.ac.uk/~pel/ Hausser:?http://www.dendrites.org/ From giacomo.cabri at unimore.it Wed Apr 30 11:23:19 2014 From: giacomo.cabri at unimore.it (Giacomo Cabri) Date: Wed, 30 Apr 2014 17:23:19 +0200 Subject: Connectionists: CfP: EXTENDED DEADLINE - The Eight IEEE International Conference on Self-Adaptive and Self-Organizing Systems, (SASO 2014) Message-ID: <53611567.3080501@unimore.it> ************************************************************************************************************ CALL FOR PAPERS - EXTENDED DEADLINE The Eight IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO 2014) Imperial College, London (UK); 8-12 September 2014 http://www.iis.ee.imperial.ac.uk/saso2014/ ************************************************************************************************************ Part of FAS* - Foundation and Applications of Self* Computing Conferences Collocated with: The International Conference on Cloud and Autonomic Computing (CAC 2014) The 14th IEEE Peer-to-Peer Computing Conference ------------------- Aims and Scope ------------------- The aim of the Self-Adaptive and Self-Organizing systems conference series (SASO) is to provide a forum for the foundations of a principled approach to engineering systems, networks and services based on self-adaptation and self-organization. The complexity of current and emerging networks, software and services, especially in dealing with dynamics in the environment and problem domain, has led the software engineering, distributed systems and management communities to look for inspiration in diverse fields (e.g., complex systems, control theory, artificial intelligence, sociology, and biology) to find new ways of designing and managing such computing systems. In this endeavor, self-organization and self-adaptation have emerged as two promising interrelated approaches. The eight edition of the SASO conference embraces the inter-disciplinarity and the scientific, empirical and application dimensions of self-* systems and welcomes novel results on both self-adaptive and self-organizing systems research. The topics of interest include, but are not limited to: - Self-* systems theory: theoretical frameworks and models; biologically- and socially-inspired paradigms; inter-operation of self-* mechanisms; - Self-* systems engineering: reusable mechanisms, design patterns, architectures, methodologies; software and middleware development frameworks and methods, platforms and toolkits; hardware; self-* materials; - Self-* system properties: robustness, resilience and stability; emergence; computational awareness and self-awareness; reflection; - Self-* cyber-physical and socio-technical systems: human factors and visualization; self-* social computers; crowdsourcing and collective awareness; - Applications and experiences of self-* systems: cyber security, transportation, computational sustainability, big data and creative commons, power systems. Contributions must present novel theoretical or experimental results; novel design patterns, mechanisms, system architectures, frameworks or tools; or practical approaches and experiences in building or deploying real-world systems and applications. Contributions contrasting different approaches for engineering a given family of systems, or demonstrating the applicability of a certain approach for different systems, are equally encouraged. 
Where relevant and appropriate, authors of accepted papers will also be
encouraged to submit accompanying papers for the Demo or Poster Sessions.

--------------------
Important Dates
--------------------

Abstract submission: May 9, 2014 (EXTENDED)
Paper submission: May 16, 2014 (EXTENDED)
Notification: June 21, 2014
Camera ready copy due: July 18, 2014
Early registration: August 22, 2014
Conference: September 8-12, 2014

----------------------------
Submission Instructions
----------------------------

All submissions should be 10 pages, formatted according to the IEEE
Computer Society Press proceedings style guide, and submitted electronically
in PDF format. Please register as authors and submit your papers using the
SASO 2014 conference management system, which is located at:
https://www.easychair.org/conferences/?conf=saso2014

The proceedings will be published by IEEE Computer Society Press and made
available as part of the IEEE digital library. Note that a separate call for
poster submissions has also been issued.

--------------------
Review Criteria
--------------------

Papers should present novel ideas in the cross-disciplinary research context
described in this call, clearly motivated by problems from current practice
or applied research. We expect both theoretical and empirical contributions
to be clearly stated, substantiated by formal analysis, simulation,
experimental evaluations, comparative studies, and so on. Appropriate
reference must be made to related work. Because SASO is a cross-disciplinary
conference, papers must be intelligible and relevant to researchers who are
not members of the same specialized sub-field.

Authors are also encouraged to submit papers describing applications.
Application papers are expected to provide an indication of the real-world
relevance of the problem that is solved, including a description of the
deployment domain, and some form of evaluation of performance, usability, or
comparison to alternative approaches. Experience papers are also welcome,
but they must clearly state the insight into any aspect of design,
implementation or management of self-* systems which is of benefit to
practitioners and the SASO community.

-------------------
Program Chairs
-------------------

Ada Diaconescu, Telecom ParisTech, France
Nagarajan Kandasamy, Drexel University, USA
Mirko Viroli, University of Bologna, Italy

---------------------
Contact Details
---------------------

Please send any inquiries to: saso2014 at easychair.org

-- 
|----------------------------------------------------|
| Prof. Giacomo Cabri - Ph.D., Associate Professor
| Dip. di Scienze Fisiche, Informatiche e Matematiche
| Universita' di Modena e Reggio Emilia - Italia
| e-mail giacomo.cabri at unimore.it
| tel. +39-059-2058320  fax +39-059-2055216
|----------------------------------------------------|

From mnick at mit.edu Tue Apr 29 15:31:35 2014
From: mnick at mit.edu (Maximilian Nickel)
Date: Tue, 29 Apr 2014 15:31:35 -0400
Subject: Connectionists: ML PhD Summer Course in Genova, Italy 30 June -
 4 July 2014
Message-ID: 

* Apologies for multiple postings *

ML PhD Summer Course in Genova, Italy, 30 June - 4 July 2014
Regularization Methods for Machine Learning (RegML)
========================================

Instructors:
* Francesca Odone (francesca.odone at unige.it)
* Lorenzo Rosasco (lorenzo.rosasco at unige.it)

A 20-hour advanced machine learning course including theory classes and
practical laboratory sessions.
The course covers foundations as well as recent advances in Machine
Learning, with emphasis on high-dimensional data and a core set of
techniques, namely regularization methods. In many respects, the course is a
compressed version of the 9.520 course at MIT [1]. The course, started in
2008, has seen an increasing national and international attendance over the
years, with a peak of 85 participants in 2013.

Registration required: send an e-mail to the instructors by *May 24th*. The
course will be activated if a minimum number of participants is reached.

The course will be held in Genova, in the heart of the Italian Riviera.
For more information see http://lcsl.mit.edu/courses/regml/

[1] http://www.mit.edu/~9.520/
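[Editorial aside, not part of the announcement: the prototypical
regularization method in this family is Tikhonov (ridge) regression, which
stabilizes least squares in the high-dimensional regime by penalizing the
norm of the weights. A minimal NumPy sketch, with made-up problem sizes and
regularization strength:]

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 50, 200                    # fewer samples than dimensions
    X = rng.standard_normal((n, d))
    w_true = np.zeros(d)
    w_true[:5] = 1.0                  # sparse ground-truth weights
    y = X @ w_true + 0.1 * rng.standard_normal(n)

    lam = 1.0                         # regularization strength (illustrative)
    # Ridge solves the regularized normal equations (X^T X + lam I) w = X^T y.
    w_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    print("training MSE:", np.mean((X @ w_hat - y) ** 2))

[The direct solve works here because lam > 0 makes X.T @ X + lam * I
positive definite even though d > n; the regularization parameter trades
data fit against the size of the solution.]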