From grlmc at urv.cat Sat Nov 1 07:50:32 2014
From: grlmc at urv.cat (GRLMC)
Date: Sat, 1 Nov 2014 12:50:32 +0100
Subject: Connectionists: TPNC 2014: call for participation
Message-ID: <861646ACED6A4496A8FC3516CFBE46CB@Carlos1>

*To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject line*

****************************************************************************
3rd International Conference on the Theory and Practice of Natural Computing
TPNC 2014
Granada, Spain
December 9-11, 2014

Organised by:
Soft Computing and Intelligent Information Systems (SCI2S), University of Granada
Research Group on Mathematical Linguistics (GRLMC), Rovira i Virgili University

http://grammars.grlmc.com/tpnc2014/
****************************************************************************

PROGRAM

Tuesday, December 9

9:00 - 10:00 Registration
10:00 - 10:15 Opening
10:15 - 11:05 Marco Dorigo: Swarm Intelligence - Invited Lecture
11:05 - 11:35 Coffee Break
11:35 - 13:15
Rūsiņš Freivalds: Ultrametric Vs. Quantum Query Algorithms
Peter Niebert and Mathieu Caralp: Cellular Programming
Kiyoharu Tagawa and Shoichi Harada: Multi-Noisy-objective Optimization Based on Prediction of Worst-Case Performance
Paulo Urbano, Enrique Naredo, and Leonardo Trujillo: Generalization in Maze Navigation using Grammatical Evolution and Novelty Search
13:15 - 14:45 Lunch
14:45 - 16:00
Simon Bin, Sebastian Volke, Gerik Scheuermann, and Martin Middendorf: Comparing the Optimization Behaviour of Heuristics with Topology Based Visualization
José Matías Cutillas Lozano and Domingo Giménez: Parameterized Message-Passing Metaheuristic Schemes on a Heterogeneous Computing System
Jonathan Gutierrez, Megan Sorenson, and Eva Strawbridge: Modeling Fluid Flow Induced by C.
elegans Swimming at Low Reynolds Number
16:00 - 18:00 Touristic visit

Wednesday, December 10

9:00 - 9:50 Kalyanmoy Deb: Multi-Criterion Problem Solving: A Niche for Natural Computing Methods - Invited Lecture
9:50 - 10:05 Break
10:05 - 11:20
Mohammad Ali Javaheri Javid, Mohammad Majid al-Rifaie, and Robert Zimmer: Detecting Symmetry in Cellular Automata Generated Patterns Using Swarm Intelligence
Edward Kent, Jason A. D. Atkin, and Rong Qu: Vehicle Routing in a Forestry Commissioning Operation Using Ant Colony Optimisation
Michel Boyer and Tal Mor: Extrapolated States, Void States, and a Huge Novel Class of Distillable Entangled States
11:20 - 11:50 Coffee Break and Group Photo
11:50 - 13:05
Vinay K. Gautam, Eugen Czeizler, Pauline C. Haddow, and Martin Kuiper: Design of a Minimal System for Self-replication of Rectangular Patterns of DNA Tiles
Naya Nagy and Marius Nagy: Unconditionally Secure Quantum Bit Commitment Protocol Based on Incomplete Information
Marcos Villagra and Tomoyuki Yamakami: Quantum and Reversible Verification of Proofs Using Constant Memory Space
13:05 - 14:35 Lunch
14:35 - 15:50
Henning Bordihn, Paolo Bottoni, Anna Labella, and Victor Mitrana: Solving 2D-Pattern Matching with Networks of Picture Processors
Clelia De Felice, Rocco Zaccagnino, and Rosalba Zizza: Unavoidable Sets and Regularity of Languages Generated by (1,3)-Circular Splicing Systems
Kaoru Fujioka: A Two-Dimensional Extension of Insertion Systems
15:50 - 16:05 Break
16:05 - 16:35 Special session

Thursday, December 11

9:00 - 9:50 Francisco Herrera: Bioinspired Real Parameter Optimization: Where We Are and What's Next - Invited Lecture
9:50 - 10:05 Break
10:05 - 11:20
Muhammad Marwan Muhammad Fuad: Differential Evolution-Based Weighted Combination of Distance Metrics for k-means Clustering
Sergio Santander-Jiménez and Miguel A.
Vega-Rodríguez: Inferring Multiobjective Phylogenetic Hypotheses by Using a Parallel Indicator-Based Evolutionary Algorithm
Jean-Philippe Bernard, Benjamin Gilles, and Christophe Godin: Combining Finite Element Method and L-Systems Using Natural Information Flow Propagation to Simulate Growing Dynamical Systems
11:20 - 11:50 Coffee Break
11:50 - 13:05
Abdoulaye Sarr, Alexandra Fronville, and Vincent Rodin: Morphogenesis Model for Systematic Simulation of Forms' Co-evolution with Constraints: Application to Mitosis
Jiří Šíma: The Power of Extra Analog Neuron
Zheng Yan, Xinyi Le, and Jun Wang: Model Predictive Control of Linear Parameter Varying Systems Based on a Recurrent Neural Network
13:05 - 13:15 Closing

From ASIM.ROY at asu.edu Sat Nov 1 03:59:51 2014
From: ASIM.ROY at asu.edu (Asim Roy)
Date: Sat, 1 Nov 2014 07:59:51 +0000
Subject: Connectionists: Call for Papers - Neural Networks special issue on Big Data
Message-ID: <4AD8F84F0AA4E1448BD8131BA7E55EB41E2E7589@exmbt02.asurite.ad.asu.edu>

Apologies for cross posting.

Neural Networks Special Issue: Neural Network Learning in Big Data

Big data is much more than storage of and access to data. Analytics plays an important role in making sense of that data and exploiting its value. But learning from big data has become a significant challenge and requires the development of new types of algorithms. Most machine learning algorithms encounter theoretical challenges in scaling up to big data, and all of them face the further challenges of high dimensionality, velocity and variety. The neural network field has historically focused on algorithms that learn in an online, incremental mode without requiring in-memory access to huge amounts of data.
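This online, incremental mode of learning can be sketched in a few lines. The following toy example is purely illustrative (the function names, the one-parameter linear model and the learning rate are our own choices, not drawn from any paper announced here): it fits a slope by stochastic gradient descent, consuming one example at a time from a simulated stream, so the full dataset is never held in memory.

```python
import random

def data_stream(n_examples, seed=0):
    """Simulated stream of (x, y) pairs with y = 3*x + Gaussian noise.
    Yields one example at a time; the dataset is never materialized."""
    rng = random.Random(seed)
    for _ in range(n_examples):
        x = rng.uniform(-1.0, 1.0)
        yield x, 3.0 * x + rng.gauss(0.0, 0.1)

def online_sgd(stream, lr=0.1):
    """Incrementally fit y ~= w * x, updating w after every example."""
    w = 0.0
    for x, y in stream:
        error = w * x - y
        w -= lr * error * x  # gradient of 0.5 * error**2 with respect to w
    return w

w = online_sgd(data_stream(10_000))  # w converges near the true slope of 3.0
```

The same per-example update applies whether the stream is a generator over stored data or a live feed, which is the point made above: memory cost is constant in the number of examples.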
The brain is arguably the best and most elegant big data processor, and it is the inspiration for neural network learning methods. Neural-network-style learning is not only well suited to streaming data (as in the Industrial Internet or the Internet of Things), but can also be applied to stored big data. For stored big data, neural network algorithms can learn from all of the data rather than from samples of it; the same holds for streaming data, where not all of the data is ever stored. In general, online, incremental learning algorithms are less sensitive to the size of the data. Neural network algorithms, in particular, can exploit massively parallel (brain-like) computation over very simple processors in a way that other machine learning technologies cannot. Specialized neuromorphic hardware, originally intended for large-scale brain simulations, is becoming available to run these algorithms in a massively parallel fashion. Neural network algorithms can therefore deliver very fast, efficient real-time learning in hardware, which could be particularly useful for streaming data in the Industrial Internet. Neural network technologies can thus become significant components of big data analytics platforms, and this special issue will begin that journey.

For this special issue of Neural Networks, we invite papers that address the challenges of learning from big data. In particular, we are interested in papers on efficient and innovative algorithmic approaches to analyzing big data (e.g. deep networks, nature-inspired and brain-inspired algorithms), implementations on different computing platforms (e.g. neuromorphic hardware, GPUs, clouds, clusters), and applications of online learning to real-world big data problems (e.g. health care, transportation, and electric power and energy management).

RECOMMENDED TOPICS:

Topics of interest include, but are not limited to:
1.
Autonomous, online, incremental learning - theory, algorithms and applications in big data
2. High dimensional data, feature selection, feature transformation - theory, algorithms and applications for big data
3. Scalable neural network algorithms for big data
4. Neural network learning algorithms for high-velocity streaming data
5. Deep neural network learning
6. Neuromorphic hardware for scalable neural network learning
7. Big data analytics using neural networks in healthcare/medical applications
8. Big data analytics using neural networks in electric power and energy systems
9. Big data analytics using neural networks in large sensor networks
10. Big data and neural network learning in computational biology and bioinformatics

SUBMISSION PROCEDURE:

Prospective authors should visit http://ees.elsevier.com/neunet/ for information on paper submission. During the submission process, there will be steps to designate the submission to this special issue. However, please indicate on the first page of the manuscript that it is intended for the Special Issue: Neural Network Learning in Big Data. Manuscripts will be peer reviewed according to Neural Networks guidelines.

Manuscript submission due: December 15, 2014
First review completed: March 1, 2015
Revised manuscript due: April 1, 2015
Second review completed, final decisions to authors: April 15, 2015
Final manuscript due: April 30, 2015

GUEST EDITORS:

Asim Roy, Arizona State University, USA (asim.roy at asu.edu) (lead guest editor)
Kumar Venayagamoorthy, Clemson University, USA (gkumar at ieee.org)
Nikola Kasabov, Auckland University of Technology, New Zealand (nkasabov at aut.ac.nz)
Irwin King, Chinese University of Hong Kong, China (irwinking at gmail.com)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg
Type: image/jpeg
Size: 5272 bytes
Desc: image001.jpg
URL:

From ahu at cs.stir.ac.uk Sat Nov 1 17:20:06 2014
From: ahu at cs.stir.ac.uk (Dr Amir Hussain)
Date: Sat, 1 Nov 2014 21:20:06 +0000
Subject: Connectionists: Increased Impact Factor and Table of Contents Alert: Cognitive Computation journal (Springer): Vol.6, No.3 / Sep 2014 Issue
Message-ID:

Dear Colleagues: (with advance apologies for any cross-postings)

We are delighted to announce the publication of Volume 6, No.3 / Sep 2014 Issue, of Springer's Cognitive Computation journal - www.springer.com/12559

==============================================================
Important News!: Increased Impact Factor for 2013!
==============================================================

As you will know, Cognitive Computation was selected for coverage in Thomson Reuters' products and services in 2011. Beginning with V.1 (1) 2009, this publication is now indexed and abstracted in:

- Science Citation Index Expanded (also known as SciSearch®)
- Journal Citation Reports/Science Edition
- Current Contents®/Engineering Computing and Technology
- Neuroscience Citation Index®

Cognitive Computation received its first Impact Factor (IF) of 1.0 in 2011. The IF for 2013 has increased to 1.1 (with a first 5-year IF of 1.387) (Thomson Reuters Journal Citation Reports® 2013). Many congratulations to the editors, reviewers and authors of this exciting young journal!

Want to be part of the growing success? Visit the journal homepage (http://springer.com/12559) for instructions on submitting your research.

==============================================================

The first six papers of the September 2014 Issue comprise a Special Issue on the International Conference on Neural Information Processing (ICONIP 2012), held in Qatar in 2012, Guest Edited by: Chuandong Li, Tingwen Huang, Zhigang Zeng, He Huang and Xing He.
The Special Issue papers are followed by 17 regular papers, including an invited paper titled: "An Insight into Extreme Learning Machines (ELMs): Random Neurons, Random Features and Kernels" by Guang-Bin Huang (http://link.springer.com/article/10.1007/s12559-014-9255-2)

The individual list of 24 published articles (Table of Contents) for this Issue can be viewed here (and also at the end of this message, followed by an overview of the previous Issues/Archive listings): http://link.springer.com/journal/12559/6/3/

You may also be interested in the journal's seminal Special Issue (Sep 2013 Issue): In Memory of John G Taylor: A Polymath Scholar, by Guest Editors: Vassilis Cutsuridis and Amir Hussain (the Guest Editorial is available here: http://link.springer.com/content/pdf/10.1007%2Fs12559-013-9226-z.pdf and the full listing of articles can be found at: http://link.springer.com/journal/12559/5/3/page/1)

A list of the journal's most downloaded articles (which can always be read for FREE) can be found here: http://www.springer.com/biomed/neuroscience/journal/12559?hideChart=1#realtime

Other 'Online First' published articles not yet in a print issue can be viewed here: http://www.springerlink.com/content/121361/?Content+Status=Accepted

All previous Volumes and Issues of the journal can be viewed here: http://link.springer.com/journal/volumesAndIssues/12559

============================================
Reminder: New Cognitive Computation "LinkedIn" Group:
============================================

To further strengthen the bonds amongst the interdisciplinary audience of Cognitive Computation, we have set up a "Cognitive Computation LinkedIn group", which already has over 700 members!
We warmly invite you to join us at: http://www.linkedin.com/groups?gid=3155048

For further information on the journal and to sign up for electronic "Table of Contents alerts", please visit the Cognitive Computation homepage: http://www.springer.com/12559 or follow us on Twitter at: http://twitter.com/CognComput for the latest Online First Issues.

For any questions with regard to LinkedIn and/or Twitter, please contact Springer's Publishing Editor, Dr. Martijn Roelandse: martijn.roelandse at springer.com

Finally, we would like to invite you to submit short or regular papers describing original research or timely reviews of important areas - our aim is to peer review all papers within approximately six to eight weeks of receipt. We also welcome relevant high-quality proposals for Special Issues - four are already planned for 2014-15 (for CFPs, see: http://www.springer.com/biomed/neuroscience/journal/12559?detailsPage=press)

With our very best wishes to all aspiring readers and authors of Cognitive Computation,

Professor Amir Hussain, PhD (Editor-in-Chief: Cognitive Computation)
E-mail: ahu at cs.stir.ac.uk (University of Stirling, Scotland, UK)
Professor Igor Aleksander, PhD (Honorary Editor-in-Chief: Cognitive Computation) (Imperial College, London, UK)
http://www.springer.com/12559

Also consider your work for related Book Series:
SpringerBriefs on Cognitive Computation: http://www.springer.com/series/10374
NEW: Springer Series on Socio-Affective Computing: http://www.springer.com/series/13199

---------------------------------------------------------------------------------------------------------------
Table of Contents Alert -- Cognitive Computation Vol 6 No 3, Sep 2014
---------------------------------------------------------------------------------------------------------------

Special Issue on ICONIP 2012
Issue Editors: Chuandong Li, Tingwen Huang, Zhigang Zeng, He Huang, Xing He

__________________________________________________________________

Special Issue on
ICONIP 2012
Chuandong Li, Tingwen Huang, Zhigang Zeng, He Huang & Xing He
http://link.springer.com/article/10.1007/s12559-014-9300-1

Image Fusion by Hierarchical Joint Sparse Representation
Yao Yao, Ping Guo, Xin Xin & Ziheng Jiang
http://link.springer.com/article/10.1007/s12559-013-9235-y

An Improved Fault-Tolerant Objective Function and Learning Algorithm for Training the Radial Basis Function Neural Network
Ruibin Feng, Yi Xiao, Chi Sing Leung, Peter W. M. Tsang & John Sum
http://link.springer.com/article/10.1007/s12559-013-9236-x

A Learner-Independent Knowledge Transfer Approach to Multi-task Learning
Shaoning Pang, Fan Liu, Youki Kadobayashi, Tao Ban & Daisuke Inoue
http://link.springer.com/article/10.1007/s12559-013-9238-8

Abductive Learning Ensembles for Hand Shape Identification
El-Sayed M. El-Alfy & Radwan E. Abdel-Aal
http://link.springer.com/article/10.1007/s12559-013-9241-0

Exploiting a Modified Gray Model in Back Propagation Neural Networks for Enhanced Forecasting
Xuejun Gao, Tingwen Huang, Zhenyou Wang & Mingqing Xiao
http://link.springer.com/article/10.1007/s12559-014-9247-2

A Consensus-Based Grouping Algorithm for Multi-agent Cooperative Task Allocation with Complex Requirements
Simon Hunt, Qinggang Meng, Chris Hinde & Tingwen Huang
http://link.springer.com/article/10.1007/s12559-014-9265-0

-----REGULAR PAPERS-----------------------------------------------

Development of Computational Models of Emotions for Autonomous Agents: A Review
Luis-Felipe Rodríguez & Félix Ramos
http://link.springer.com/article/10.1007/s12559-013-9244-x

An Insight into Extreme Learning Machines: Random Neurons, Random Features and Kernels
Guang-Bin Huang
http://link.springer.com/article/10.1007/s12559-014-9255-2

Multitask Extreme Learning Machine for Visual Tracking
Huaping Liu, Fuchun Sun & Yuanlong Yu
http://link.springer.com/article/10.1007/s12559-013-9242-z

Fast Image Recognition Based on Independent Component Analysis and Extreme Learning Machine
Shujing Zhang, Bo He, Rui Nian, Jing Wang, Bo Han, Amaury Lendasse & Guang Yuan
http://link.springer.com/article/10.1007/s12559-014-9245-4

A Class Incremental Extreme Learning Machine for Activity Recognition
Zhongtang Zhao, Zhenyu Chen, Yiqiang Chen, Shuangquan Wang & Hongan Wang
http://link.springer.com/article/10.1007/s12559-014-9259-y

A Two-Stage Methodology Using K-NN and False-Positive Minimizing ELM for Nominal Data Classification
Anton Akusok, Yoan Miche, Jozsef Hegedus, Rui Nian & Amaury Lendasse
http://link.springer.com/article/10.1007/s12559-014-9253-4

Feature Component-Based Extreme Learning Machines for Finger Vein Recognition
Shan Juan Xie, Sook Yoon, Jucheng Yang, Yu Lu, Dong Sun Park & Bin Zhou
http://link.springer.com/article/10.1007/s12559-014-9254-3

Counting Pedestrian with Mixed Features and Extreme Learning Machine
Yuanwei Li, En Zhu, Xinzhong Zhu, Jianping Yin & Jianmin Zhao
http://link.springer.com/article/10.1007/s12559-014-9248-1

A Voting Optimized Strategy Based on ELM for Improving Classification of Motor Imagery BCI Data
Lijuan Duan, Hongyan Zhong, Jun Miao, Zhen Yang, Wei Ma & Xuan Zhang
http://link.springer.com/article/10.1007/s12559-014-9264-1

A Gradient-Based Neural Network Method for Solving Strictly Convex Quadratic Programming Problems
Alireza Nazemi & Masoomeh Nazemi
http://link.springer.com/article/10.1007/s12559-014-9249-0

Conjugate Unscented FastSLAM for Autonomous Mobile Robots in Large-Scale Environments
Y. Song, Q. L. Li & Y. F. Kang
http://link.springer.com/article/10.1007/s12559-014-9258-z

Modular Composite Representation
Javier Snaider & Stan Franklin
http://link.springer.com/article/10.1007/s12559-013-9243-y

Brain Programming for the Evolution of an Artificial Dorsal Stream
León Dozal, Gustavo Olague, Eddie Clemente & Daniel E. Hernández
http://link.springer.com/article/10.1007/s12559-014-9251-6

Modelling Task-Dependent Eye Guidance to Objects in Pictures
Antonio Clavelli, Dimosthenis Karatzas, Josep Lladós, Mario Ferraro & Giuseppe Boccignone
http://link.springer.com/article/10.1007/s12559-014-9262-3

Scanpath Generated by Cue-Driven Activation and Spatial Strategy: A Comparative Study
KangWoo Lee & Yubu Lee
http://link.springer.com/article/10.1007/s12559-014-9246-3

Novel Biologically Inspired Approaches to Extracting Online Information from Temporal Data
Zeeshan Khawar Malik, Amir Hussain & Jonathan Wu
http://link.springer.com/article/10.1007/s12559-014-9257-0

Sparse-Representation-Based Classification with Structure-Preserving Dimension Reduction
Jin Xu, Guang Yang, Yafeng Yin, Hong Man & Haibo He
http://link.springer.com/article/10.1007/s12559-014-9252-5

---------------------------------------------------
Previous Issues/Archive: Overview:
---------------------------------------------------

All previous Volumes and Issues can be viewed here: http://link.springer.com/journal/volumesAndIssues/12559

Alternatively, the full listing of the Inaugural Vol. 1, No. 1 / March 2009, can be viewed here (which included invited authoritative reviews by leading researchers in their areas - including keynote papers from London University's John Taylor, Igor Aleksander and Stanford University's James McClelland, and invited papers from Ron Sun, Pentti Haikonen, Geoff Underwood, Kevin Gurney, Claudius Gross, Anil Seth and Tom Ziemke): http://www.springerlink.com/content/1866-9956/1/1/

The full listing of Vol. 1, No. 2 / June 2009, can be viewed here (which included invited reviews and original research contributions from leading researchers, including Rodney Douglas, Giacomo Indiveri, Jurgen Schmidhuber, Thomas Wennekers, Pentti Kanerva and Friedemann Pulvermuller): http://www.springerlink.com/content/1866-9956/1/2/

The full listing of Vol.1, No.
3 / Sep 2009, can be viewed here: http://www.springerlink.com/content/1866-9956/1/3/
The full listing of Vol. 1, No. 4 / Dec 2009, can be viewed here: http://www.springerlink.com/content/1866-9956/1/4/
The full listing of Vol. 2, No. 1 / March 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/1/
The full listing of Vol. 2, No. 2 / June 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/2/
The full listing of Vol. 2, No. 3 / Aug 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/3/
The full listing of Vol. 2, No. 4 / Dec 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/4/
The full listing of Vol. 3, No. 1 / Mar 2011 (Special Issue on: Saliency, Attention, Active Visual Search and Picture Scanning, edited by John Taylor and Vassilis Cutsuridis), can be viewed here: http://www.springerlink.com/content/1866-9956/3/1/ The Guest Editorial can be viewed here: http://www.springerlink.com/content/hu2245056415633l/
The full listing of Vol. 3, No. 2 / June 2011 can be viewed here: http://www.springerlink.com/content/1866-9956/3/2/
The full listing of Vol. 3, No. 3 / Sep 2011 (Special Issue on: Cognitive Behavioural Systems, Guest Edited by: Anna Esposito, Alessandro Vinciarelli, Simon Haykin, Amir Hussain and Marcos Faundez-Zanuy), can be viewed here: http://www.springerlink.com/content/1866-9956/3/3/ The Guest Editorial for the special issue can be viewed here: http://www.springerlink.com/content/h4718567520t2h84/
The full listing of Vol. 3, No. 4 / Dec 2011 can be viewed here: http://www.springerlink.com/content/1866-9956/3/4/
The full listing of Vol. 4, No. 1 / Mar 2012 can be viewed here: http://www.springerlink.com/content/1866-9956/4/1/
The full listing of Vol. 4, No. 2 / June 2012 can be viewed here: http://www.springerlink.com/content/1866-9956/4/2/
The full listing of Vol. 4, No. 3 / Sep 2012 (Special Issue on: Computational Creativity, Intelligence and Autonomy, Edited by: J. Mark Bishop and Yasemin J. Erden) can be viewed here: http://www.springerlink.com/content/1866-9956/4/3/
The full listing of Vol. 4, No. 4 / Dec 2012 (Special Issue titled: "Cognitive & Emotional Information Processing", Edited by: Stefano Squartini, Björn Schuller and Amir Hussain, which is followed by a number of regular papers), can be viewed here: http://link.springer.com/journal/12559/4/4/page/1
The full listing of Vol. 5, No. 1 / March 2013 (Special Issue titled: Computational Intelligence and Applications, Guest Editors: Zhigang Zeng & Haibo He, which is followed by a number of regular papers), can be viewed here: http://link.springer.com/journal/12559/5/1/page/1
The full listing of Vol. 5, No. 2 / June 2013 (Special Issue titled: Advances on Brain Inspired Computing, Guest Editors: Stefano Squartini, Sanqing Hu & Qingshan Liu, which is followed by a number of regular papers), can be viewed here: http://link.springer.com/journal/12559/5/2/page/1
The full listing of Vol. 5, No. 3 / Sep 2013 (Special Issue titled: In Memory of John G Taylor: A Polymath Scholar, Guest Editors: Vassilis Cutsuridis & Amir Hussain, which is followed by a number of regular papers), can be viewed here: http://link.springer.com/journal/12559/5/3/page/1
The full listing of Vol. 5, No. 4 / Dec 2013, which includes regular papers (including an invited paper by Professor Ron Sun, Rensselaer Polytechnic Institute, USA, titled: Moral Judgment, Human Motivation, and Neural Networks) and a Special Issue titled: Advanced Cognitive Systems Based on Nonlinear Analysis, Guest Editors: Carlos M. Travieso and Jesús B. Alonso, can be viewed here: http://link.springer.com/journal/12559/5/4/page/1
The full listing of Vol. 6, No. 1 / Mar 2014, can be viewed here: http://link.springer.com/journal/12559/6/1/page/1
The full listing of Vol.
6, No.2 / June 2014, can be viewed here: http://link.springer.com/journal/12559/6/2/page/1 -- The University of Stirling has been ranked in the top 12 of UK universities for graduate employment*. 94% of our 2012 graduates were in work and/or further study within six months of graduation. *The Telegraph The University of Stirling is a charity registered in Scotland, number SC 011159. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahu at cs.stir.ac.uk Sat Nov 1 17:19:58 2014 From: ahu at cs.stir.ac.uk (Dr Amir Hussain) Date: Sat, 1 Nov 2014 21:19:58 +0000 Subject: Connectionists: Increased Impact Factor and Table of Contents Alert: Cognitive Computation journal (Springer): Vol.6, No.3 / Sep 2014 Issue Message-ID: Dear Colleagues: (with advance apologies for any cross-postings) We are delighted to announce the publication of Volume 6, No.3 / Sep 2014 Issue, of Springer's Cognitive Computation journal - www.springer.com/12559 ============================================================== Important News!: Increased Impact Factor for 2013! ============================================================== As you will know, Cognitive Computation was selected for coverage in Thomson Reuter?s products and services in 2011. Beginning with V.1 (1) 2009, this publication is now indexed and abstracted in: ? Science Citation Index Expanded (also known as SciSearch?) ? Journal Citation Reports/Science Edition ? Current Contents?/Engineering Computing and Technology ? Neuroscience Citation Index? Cognitive Computation received its first Impact Factor (IF) of 1.0 in 2011. The IF for 2013 has increased to 1.1 (with a first 5 year IF of 1.387) (Thomson Reuters Journal Citation Reports? 2013) Many congratulations to the editors, reviewers and authors of this exciting young journal! Want to be part of the growing success? Visit the journal homepage ( http://springer.com/12559) for instructions on submitting your research. 
============================================================== The first six papers of the September 2014 Issue comprise a Special Issue on the International Conference on Neural Information Processing (ICONIP'2012) held at Qatar, 2012, Guest Edited by: Chuandong Li, Tingwen Huang, Zhigang Zeng and He Huang, Xing He. The Special Issue papers are followed by 17 regular papers, including an invited paper titled: "An Insight into Extreme Learning Machines (ELMs): Random Neurons, Random Features and Kernels" by Guang-Bin Huang (http://link.springer.com/article/10.1007/s12559-014-9255-2) The individual list of 24 published articles (Table of Contents) for this Issue can be viewed here (and also at the end of this message, followed by an overview of the previous Issues/Archive listings): http://link.springer.com/journal/12559/6/3/ You may also be interested in the journal's seminal Special Issue (Sep 2013 Issue): In Memory of John G Taylor: A Polymath Scholar, by Guest Editors: Vassilis Cutsuridis and Amir Hussain (the Guest Editorial is available here: http://link.springer.com/content/pdf/10.1007%2Fs12559-013-9226-z.pdf and full listing of articles can be found at: http://link.springer.com/journal/12559/5/3/page/1) A list of the journal's most downloaded articles (which can always be read for FREE) can be found here: http://www.springer.com/biomed/neuroscience/journal/12559?hideChart=1#realtime Other 'Online First' published articles not yet in a print issue can be viewed here: http://www.springerlink.com/content/121361/?Content+Status=Accepted All previous Volumes and Issues of the journal can be viewed here: http://link.springer.com/journal/volumesAndIssues/12559 ============================================ Reminder: New Cognitive Computation "LinkedIn" Group: ============================================ To further strengthen the bonds amongst the interdisciplinary audience of Cognitive Computation, we have set-up a "Cognitive Computation LinkedIn group", which has 
over 700 members already! We warmly invite you to join us at: http://www.linkedin.com/groups?gid=3155048 For further information on the journal and to sign up for electronic "Table of Contents alerts" please visit the Cognitive Computation homepage: http://www.springer.com/12559 or follow us on Twitter at: http://twitter.com/CognComput for the latest On-line First Issues. For any questions with regards to LinkedIn and/or Twitter, please contact Springer's Publishing Editor: Dr. Martijn Roelandse: martijn.roelandse at springer.com Finally, we would like to invite you to submit short or regular papers describing original research or timely review of important areas - our aim is to peer review all papers within approximately six-eight weeks of receipt. We also welcome relevant high quality proposals for Special Issues - four are already planned for 2014-15 (for CFPs, see: http://www.springer.com/biomed/neuroscience/journal/12559?detailsPage=press ) With our very best wishes to all aspiring readers and authors of Cognitive Computation, Professor Amir Hussain, PhD (Editor-in-Chief: Cognitive Computation) E-mail: ahu at cs.stir.ac.uk (University of Stirling, Scotland, UK) Professor Igor Aleksander, PhD (Honorary Editor-in-Chief: Cognitive Computation) (Imperial College, London, UK) http://www.springer.com/12559 Also consider your work for related Book Series: SpringerBriefs on Cognitive Computation: http://www.springer.com/series/10374 NEW: Springer Series on Socio-Affective Computing: http://www.springer.com/series/13199 --------------------------------------------------------------------------------------------------------------- Table of Contents Alert -- Cognitive Computation Vol 6 No 3, Sep 2014 --------------------------------------------------------------------------------------------------------------- Special Issue on ICONIP 2012 Issue Editors : Chuandong Li, Tingwen Huang, Zhigang Zeng, He Huang, Xing He 
__________________________________________________________________
Special Issue on ICONIP 2012
Chuandong Li, Tingwen Huang, Zhigang Zeng, He Huang & Xing He
http://link.springer.com/article/10.1007/s12559-014-9300-1

Image Fusion by Hierarchical Joint Sparse Representation
Yao Yao, Ping Guo, Xin Xin & Ziheng Jiang
http://link.springer.com/article/10.1007/s12559-013-9235-y

An Improved Fault-Tolerant Objective Function and Learning Algorithm for Training the Radial Basis Function Neural Network
Ruibin Feng, Yi Xiao, Chi Sing Leung, Peter W. M. Tsang & John Sum
http://link.springer.com/article/10.1007/s12559-013-9236-x

A Learner-Independent Knowledge Transfer Approach to Multi-task Learning
Shaoning Pang, Fan Liu, Youki Kadobayashi, Tao Ban & Daisuke Inoue
http://link.springer.com/article/10.1007/s12559-013-9238-8

Abductive Learning Ensembles for Hand Shape Identification
El-Sayed M. El-Alfy & Radwan E. Abdel-Aal
http://link.springer.com/article/10.1007/s12559-013-9241-0

Exploiting a Modified Gray Model in Back Propagation Neural Networks for Enhanced Forecasting
Xuejun Gao, Tingwen Huang, Zhenyou Wang & Mingqing Xiao
http://link.springer.com/article/10.1007/s12559-014-9247-2

A Consensus-Based Grouping Algorithm for Multi-agent Cooperative Task Allocation with Complex Requirements
Simon Hunt, Qinggang Meng, Chris Hinde & Tingwen Huang
http://link.springer.com/article/10.1007/s12559-014-9265-0

-----REGULAR PAPERS-----------------------------------------------

Development of Computational Models of Emotions for Autonomous Agents: A Review
Luis-Felipe Rodríguez & Félix Ramos
http://link.springer.com/article/10.1007/s12559-013-9244-x

An Insight into Extreme Learning Machines: Random Neurons, Random Features and Kernels
Guang-Bin Huang
http://link.springer.com/article/10.1007/s12559-014-9255-2

Multitask Extreme Learning Machine for Visual Tracking
Huaping Liu, Fuchun Sun & Yuanlong Yu
http://link.springer.com/article/10.1007/s12559-013-9242-z

Fast Image Recognition Based on Independent Component Analysis and Extreme Learning Machine
Shujing Zhang, Bo He, Rui Nian, Jing Wang, Bo Han, Amaury Lendasse & Guang Yuan
http://link.springer.com/article/10.1007/s12559-014-9245-4

A Class Incremental Extreme Learning Machine for Activity Recognition
Zhongtang Zhao, Zhenyu Chen, Yiqiang Chen, Shuangquan Wang & Hongan Wang
http://link.springer.com/article/10.1007/s12559-014-9259-y

A Two-Stage Methodology Using K-NN and False-Positive Minimizing ELM for Nominal Data Classification
Anton Akusok, Yoan Miche, Jozsef Hegedus, Rui Nian & Amaury Lendasse
http://link.springer.com/article/10.1007/s12559-014-9253-4

Feature Component-Based Extreme Learning Machines for Finger Vein Recognition
Shan Juan Xie, Sook Yoon, Jucheng Yang, Yu Lu, Dong Sun Park & Bin Zhou
http://link.springer.com/article/10.1007/s12559-014-9254-3

Counting Pedestrian with Mixed Features and Extreme Learning Machine
Yuanwei Li, En Zhu, Xinzhong Zhu, Jianping Yin & Jianmin Zhao
http://link.springer.com/article/10.1007/s12559-014-9248-1

A Voting Optimized Strategy Based on ELM for Improving Classification of Motor Imagery BCI Data
Lijuan Duan, Hongyan Zhong, Jun Miao, Zhen Yang, Wei Ma & Xuan Zhang
http://link.springer.com/article/10.1007/s12559-014-9264-1

A Gradient-Based Neural Network Method for Solving Strictly Convex Quadratic Programming Problems
Alireza Nazemi & Masoomeh Nazemi
http://link.springer.com/article/10.1007/s12559-014-9249-0

Conjugate Unscented FastSLAM for Autonomous Mobile Robots in Large-Scale Environments
Y. Song, Q. L. Li & Y. F. Kang
http://link.springer.com/article/10.1007/s12559-014-9258-z

Modular Composite Representation
Javier Snaider & Stan Franklin
http://link.springer.com/article/10.1007/s12559-013-9243-y

Brain Programming for the Evolution of an Artificial Dorsal Stream
León Dozal, Gustavo Olague, Eddie Clemente & Daniel E. Hernández
http://link.springer.com/article/10.1007/s12559-014-9251-6

Modelling Task-Dependent Eye Guidance to Objects in Pictures
Antonio Clavelli, Dimosthenis Karatzas, Josep Lladós, Mario Ferraro & Giuseppe Boccignone
http://link.springer.com/article/10.1007/s12559-014-9262-3

Scanpath Generated by Cue-Driven Activation and Spatial Strategy: A Comparative Study
KangWoo Lee & Yubu Lee
http://link.springer.com/article/10.1007/s12559-014-9246-3

Novel Biologically Inspired Approaches to Extracting Online Information from Temporal Data
Zeeshan Khawar Malik, Amir Hussain & Jonathan Wu
http://link.springer.com/article/10.1007/s12559-014-9257-0

Sparse-Representation-Based Classification with Structure-Preserving Dimension Reduction
Jin Xu, Guang Yang, Yafeng Yin, Hong Man & Haibo He
http://link.springer.com/article/10.1007/s12559-014-9252-5

---------------------------------------------------------
Previous Issues/Archive: Overview:
---------------------------------------------------------
All previous Volumes and Issues can be viewed here: http://link.springer.com/journal/volumesAndIssues/12559
Alternatively, the full listing of the Inaugural Vol. 1, No. 1 / March 2009, can be viewed here (which included invited authoritative reviews by leading researchers in their areas - including keynote papers from London University's John Taylor, Igor Aleksander and Stanford University's James McClelland, and invited papers from Ron Sun, Pentti Haikonen, Geoff Underwood, Kevin Gurney, Claudius Gross, Anil Seth and Tom Ziemke): http://www.springerlink.com/content/1866-9956/1/1/
The full listing of Vol. 1, No. 2 / June 2009, can be viewed here (which included invited reviews and original research contributions from leading researchers, including Rodney Douglas, Giacomo Indiveri, Jurgen Schmidhuber, Thomas Wennekers, Pentti Kanerva and Friedemann Pulvermuller): http://www.springerlink.com/content/1866-9956/1/2/
The full listing of Vol.1, No.
3 / Sep 2009, can be viewed here: http://www.springerlink.com/content/1866-9956/1/3/
The full listing of Vol. 1, No. 4 / Dec 2009, can be viewed here: http://www.springerlink.com/content/1866-9956/1/4/
The full listing of Vol. 2, No. 1 / March 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/1/
The full listing of Vol. 2, No. 2 / June 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/2/
The full listing of Vol. 2, No. 3 / Aug 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/3/
The full listing of Vol. 2, No. 4 / Dec 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/4/
The full listing of Vol. 3, No. 1 / Mar 2011 (Special Issue on: Saliency, Attention, Active Visual Search and Picture Scanning, edited by John Taylor and Vassilis Cutsuridis), can be viewed here: http://www.springerlink.com/content/1866-9956/3/1/
The Guest Editorial can be viewed here: http://www.springerlink.com/content/hu2245056415633l/
The full listing of Vol. 3, No. 2 / June 2011 can be viewed here: http://www.springerlink.com/content/1866-9956/3/2/
The full listing of Vol. 3, No. 3 / Sep 2011 (Special Issue on: Cognitive Behavioural Systems, Guest Edited by: Anna Esposito, Alessandro Vinciarelli, Simon Haykin, Amir Hussain and Marcos Faundez-Zanuy), can be viewed here: http://www.springerlink.com/content/1866-9956/3/3/
The Guest Editorial for the special issue can be viewed here: http://www.springerlink.com/content/h4718567520t2h84/
The full listing of Vol. 3, No. 4 / Dec 2011 can be viewed here: http://www.springerlink.com/content/1866-9956/3/4/
The full listing of Vol. 4, No. 1 / Mar 2012 can be viewed here: http://www.springerlink.com/content/1866-9956/4/1/
The full listing of Vol. 4, No. 2 / June 2012 can be viewed here: http://www.springerlink.com/content/1866-9956/4/2/
The full listing of Vol. 4, No. 3 / Sep 2012 (Special Issue on: Computational Creativity, Intelligence and Autonomy, edited by: J. Mark Bishop and Yasemin J. Erden) can be viewed here: http://www.springerlink.com/content/1866-9956/4/3/
The full listing of Vol. 4, No. 4 / Dec 2012 (Special Issue titled: "Cognitive & Emotional Information Processing", edited by: Stefano Squartini, Björn Schuller and Amir Hussain, which is followed by a number of regular papers) can be viewed here: http://link.springer.com/journal/12559/4/4/page/1
The full listing of Vol. 5, No. 1 / March 2013 (Special Issue titled: Computational Intelligence and Applications, Guest Editors: Zhigang Zeng & Haibo He, which is followed by a number of regular papers) can be viewed here: http://link.springer.com/journal/12559/5/1/page/1
The full listing of Vol. 5, No. 2 / June 2013 (Special Issue titled: Advances on Brain Inspired Computing, Guest Editors: Stefano Squartini, Sanqing Hu & Qingshan Liu, which is followed by a number of regular papers) can be viewed here: http://link.springer.com/journal/12559/5/2/page/1
The full listing of Vol. 5, No. 3 / Sep 2013 (Special Issue titled: In Memory of John G Taylor: A Polymath Scholar, Guest Editors: Vassilis Cutsuridis & Amir Hussain, which is followed by a number of regular papers) can be viewed here: http://link.springer.com/journal/12559/5/3/page/1
The full listing of Vol. 5, No. 4 / Dec 2013, which includes regular papers (including an invited paper by Professor Ron Sun, Rensselaer Polytechnic Institute, USA, titled: Moral Judgment, Human Motivation, and Neural Networks) and a Special Issue titled: Advanced Cognitive Systems Based on Nonlinear Analysis, Guest Editors: Carlos M. Travieso and Jesús B. Alonso, can be viewed here: http://link.springer.com/journal/12559/5/4/page/1
The full listing of Vol. 6, No. 1 / Mar 2014, can be viewed here: http://link.springer.com/journal/12559/6/1/page/1
The full listing of Vol.
6, No. 2 / June 2014, can be viewed here: http://link.springer.com/journal/12559/6/2/page/1 -- The University of Stirling has been ranked in the top 12 of UK universities for graduate employment*. 94% of our 2012 graduates were in work and/or further study within six months of graduation. *The Telegraph The University of Stirling is a charity registered in Scotland, number SC 011159. -------------- next part -------------- An HTML attachment was scrubbed... URL: From christos.dimitrakakis at gmail.com Sun Nov 2 06:31:27 2014 From: christos.dimitrakakis at gmail.com (Christos Dimitrakakis) Date: Sun, 02 Nov 2014 11:31:27 +0000 Subject: Connectionists: PhD studentship in Private and Secure Machine Learning and Decision Making at Chalmers Message-ID: PhD studentship in Private and Secure Machine Learning and Decision Making at Chalmers ------ General information ------- We are looking for an excellent, motivated, self-driven doctoral student to work in the area of machine learning and decision theory, with a focus on security and privacy. The position is for five years at the Department of Computer Science and Engineering, within the division of Computing Science and the group of Algorithms, Learning and Computational Biology (http://www.cse.chalmers.se/research/lab/), which does research in fields ranging from machine learning, statistics, algorithms, optimisation and reinforcement learning to computational biology, text and massive data analysis. We particularly welcome applications from students in Computer Science, Mathematics and Statistics. Experience in statistics, machine learning, information theory, optimisation, decision and game theory is especially valuable. Familiarity with topics in privacy, security and cryptography is also advantageous, but not essential.
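Since the studentship centres on privacy-preserving learning, a minimal sketch of the standard Laplace mechanism from differential privacy may help orient applicants. This is generic illustrative code (function names and parameter values are our own, not taken from the project):

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. Exponential(1) draws is Laplace(0, 1).
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_mean(values, lower, upper, epsilon):
    """Release the mean of bounded values with epsilon-differential privacy.

    After clipping to [lower, upper], changing one record changes the mean
    by at most (upper - lower) / n, so Laplace noise with scale
    sensitivity / epsilon suffices.  Smaller epsilon = more privacy,
    more noise; larger epsilon = better utility, weaker privacy.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon)

# With a generous privacy budget the released mean is close to the truth;
# shrinking epsilon degrades accuracy in exchange for stronger privacy.
data = [0.2] * 500 + [0.8] * 500
release = private_mean(data, 0.0, 1.0, epsilon=1.0)
```

The privacy-utility trade-off mentioned in the ad is exactly the choice of epsilon here.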
------ Research topic ------ The student will be expected to develop and analyse state-of-the-art algorithms for distributed machine learning and decision making that provide users with strong privacy guarantees. The research will be at the intersection of machine learning, differential privacy and mechanism design. The aim will be to develop machine learning algorithms for distributed systems that optimally trade off user privacy against system utility. The research will be mainly theoretical. The student will be supervised by Dr. Christos Dimitrakakis (machine learning, decision theory and differential privacy - see http://www.cse.chalmers.se/~chrdimi/) and co-supervised by Dr. Katerina Mitrokotsa (cryptography and security - see http://www.cse.chalmers.se/~aikmitr/). Employment will be in the scope of the Swiss Sense Synergy project, whose aim is to develop intelligent location-based networking protocols and crowdsourcing applications, in collaboration with three Swiss universities. Further information about the Swiss Sense Synergy project can be found here: http://goo.gl/tmQ9NJ ------ How to apply ----- The application review process is already under way. The final application deadline is November 21, 2014. The position is expected to begin in January 2015, or soon thereafter. Apply here: http://www.chalmers.se/en/about-chalmers/vacancies/Pages/default.aspx?rmpage=job&rmjob=2438 Contact Christos Dimitrakakis or Aikaterini Mitrokotsa if you have any questions about this position. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Sun Nov 2 16:57:23 2014 From: gary.marcus at nyu.edu (Gary Marcus) Date: Sun, 2 Nov 2014 16:57:23 -0500 Subject: Connectionists: The Atoms of Neural Computation Message-ID: <330594F9-9782-452A-B4CB-0C2E8F069DF5@nyu.edu> New piece in Science, reevaluating the "canonical cortical computation"
hypothesis: http://www.sciencemag.org/content/346/6209/551.short (First paragraph pasted in below) And a lot of further detail that didn't fit: http://biorxiv.org/content/early/2014/10/31/010983 I'd love a discussion here on the group, especially re: the table of possible computations and their neural realizations, in the Supplement. We plan to crowd-source a more detailed version of that table; please contact me if you are interested in contributing. Cheers, Gary The human cerebral cortex is central to a wide array of cognitive functions, from vision to language, reasoning, decision-making, and motor control. Yet, nearly a century after the neuroanatomical organization of the cortex was first defined, its basic logic remains unknown. One hypothesis is that cortical neurons form a single, massively repeated "canonical" circuit, characterized as a kind of a "nonlinear spatiotemporal filter with adaptive properties" (1). In this classic view, it was "assumed that these...properties are identical for all neocortical areas." Nearly four decades later, there is still no consensus about whether such a canonical circuit exists, either in terms of its anatomical basis or its function. Likewise, there is little evidence that such uniform architectures can capture the diversity of cortical function in simple mammals, let alone characteristically human processes such as language and abstract thinking (2). Analogous software implementations in artificial intelligence (e.g., deep learning networks) have proven effective in certain pattern classification tasks, such as speech and image recognition, but likewise have made little inroads in areas such as reasoning and natural language understanding. Is the search for a single canonical cortical circuit misguided?
Gary Marcus Professor of Psychology and Neural Science New York University Visiting Cognitive Scientist Allen Institute for Brain Science Editor, The Future of the Brain (2014) http://garymarcus.com/ New Yorker essays New York Times op-eds -------------- next part -------------- An HTML attachment was scrubbed... URL: From randy.oreilly at colorado.edu Mon Nov 3 00:49:16 2014 From: randy.oreilly at colorado.edu (Randall O'Reilly) Date: Sun, 2 Nov 2014 22:49:16 -0700 Subject: Connectionists: The Atoms of Neural Computation In-Reply-To: <330594F9-9782-452A-B4CB-0C2E8F069DF5@nyu.edu> References: <330594F9-9782-452A-B4CB-0C2E8F069DF5@nyu.edu> Message-ID: <2F8F3AF0-F2A4-4BA9-8064-FA3088630A2E@colorado.edu> Gary, you may be interested in our in-press chapter on the Leabra cognitive architecture, which enumerates 20 core principles instantiated in this architecture that have broad empirical and theoretical support: https://grey.colorado.edu/CompCogNeuro/index.php/OReillyHazyHerdIP (here is an important new update too: https://grey.colorado.edu/CompCogNeuro/index.php/OReillyWyatteRohrlichIP ) Also, we have a more in-depth treatment of the binding and related issues here: https://grey.colorado.edu/CompCogNeuro/index.php/OReillyPetrovCohenEtAl14 Cheers, - Randy From marc.toussaint at informatik.uni-stuttgart.de Mon Nov 3 11:11:24 2014 From: marc.toussaint at informatik.uni-stuttgart.de (Marc Toussaint) Date: Mon, 03 Nov 2014 17:11:24 +0100 Subject: Connectionists: 2 postdoc positions in Machine Learning, Robotics & AI @ Univ.
Stuttgart (and optionally MPI Tuebingen) Message-ID: <5457A92C.3010008@informatik.uni-stuttgart.de> 2 postdoc positions in Machine Learning, Robotics & AI at the Machine Learning & Robotics Lab, Univ. of Stuttgart Optional joint appointment at the Max-Planck-Institute for Intelligent Systems in Tuebingen The Machine Learning & Robotics Lab at the University of Stuttgart is recruiting two highly motivated postdoctoral researchers. The MLR lab strives to tackle problems that are both fundamental and real in the area of robotics and intelligent systems. This includes holistic approaches to learning, planning and inference on all levels, from robotic control to higher-level geometric and symbolic reasoning. Applicants should have a strong interest in this approach, especially in one of the following or related research topics:
* (Constrained) Optimization methods for robotics, robotic control or learning in robotics in general
* Combined task and motion planning in uncertain and probabilistic domains (bridging between symbolic/relational MDPs, belief planning and geometric planning)
* Learning, planning and inference in the case of multi-agent or concurrent actions (including multi-agent extensions of inverse RL)
* Active learning, experimental design and UCB/UCT-type methods for autonomous robot exploration in relational domains
Please relate your Research Statement clearly to these topics. Researchers from the broader areas of modern AI (probabilistic reasoning, learning & planning), robotics and machine learning are welcome to apply. The candidates are expected to conduct independent research and at the same time contribute to ongoing projects in the areas listed above. Successful candidates can furthermore be given the opportunity to work with undergraduate, M.Sc. and Ph.D. students. The positions may be jointly appointed at Stefan Schaal's Autonomous Motion Department at the MPI Tuebingen, in this case focusing on learning and optimization methods for robotic control.
See http://www-amd.is.tuebingen.mpg.de A successful postdoc applicant should have a strong track record of top-tier research publications in our community (e.g., at UAI, ICML, IJCAI, AAAI, NIPS, AISTATS or RSS, ICRA, IROS and respective journals). A Ph.D. in Computer Science, Physics or Maths (or another field clearly related to the above topics) as well as strong organizational and coordination skills are a must. The positions start with a 24-month contract and may be extendable up to 48 months. Payment will be according to the German TVL E-13 or E-14 payment scheme, depending on the candidate's experience and qualifications. All complete applications submitted through the online application system found at https://ipvs.informatik.uni-stuttgart.de/mlr/jobs/ will be considered. There is no fixed deadline: the positions will be filled as soon as possible. Applicants may contact Marc Toussaint at NIPS. Please also feel free to contact me informally by email on any issue. -- Marc Toussaint, Prof. Dr. Uni Stuttgart Universitätsstraße 38 70569 Stuttgart, Germany +49 711 685 88376 http://ipvs.informatik.uni-stuttgart.de/mlr/marc/index.html From steve at cns.bu.edu Mon Nov 3 14:07:59 2014 From: steve at cns.bu.edu (Stephen Grossberg) Date: Mon, 3 Nov 2014 14:07:59 -0500 Subject: Connectionists: The Atoms of Neural Computation: A reply to Gary Marcus In-Reply-To: <330594F9-9782-452A-B4CB-0C2E8F069DF5@nyu.edu> References: <330594F9-9782-452A-B4CB-0C2E8F069DF5@nyu.edu> Message-ID: Dear Gary, I just read your interesting Science article. I was intrigued by your comments about cerebral cortex that "its basic logic remains unknown" and that "there is little evidence that such uniform architectures can capture the diversity of cortical function in simple mammals".
I was also struck by your comments that cortical models "might include circuits for shifting the focus of attention, for encoding and manipulating sequences, and for normalizing the ratio between the activity of an individual neuron and a set of neurons...and for working memory storage, decision-making, storage and transformation of information via population coding", alongside "machinery for hierarchical pattern recognition". You also commented on such matters as "temporal synchrony among neural ensembles" and "precisely controlled recurrent interactions between the prefrontal cortex and basal ganglia". Actually, there is an emerging unified laminar cortical theory that embodies all of these properties, and that has been used to provide unified explanations and predictions about psychological, anatomical, neurophysiological, biophysical, and some biochemical data. This theory, whose various component models are often unified under the general heading of LAMINART theory, has been getting rapidly developed since the first article about it appeared in Trends in Neurosciences in 1997. The name LAMINART acknowledges the synthesis of concepts about the design of laminar cortical architectures with more long-standing principles and mechanisms of Adaptive Resonance Theory, or ART, which began as a cognitive and neural theory of how the brain autonomously learns to categorize, recognize, and predict objects and events in a changing world. As illustrated in the review article Grossberg (2012, http://cns.bu.edu/~steve/ART.pdf), ART is arguably the most highly developed cognitive and neural theory currently available, with the broadest explanatory and predictive range. It has been getting progressively developed since I introduced it in 1976 to propose a solution to the classical stability-plasticity dilemma. This proposed solution enables ART to carry out fast, incremental, and self-stabilizing unsupervised and supervised learning in response to a changing world.
ART specifies mechanistic links between processes of consciousness, learning, expectation, attention, resonance, and synchrony during both unsupervised and supervised learning. ART provides functional and mechanistic explanations of such diverse topics as laminar cortical circuitry; invariant object and scenic gist learning and recognition; prototype, surface, and boundary attention; gamma and beta oscillations; learning of entorhinal grid cells and hippocampal place cells; computation of homologous spatial and temporal mechanisms in the entorhinal-hippocampal system; vigilance breakdowns during autism and medial temporal amnesia; cognitive-emotional interactions that focus attention on valued objects in an adaptively timed way; item-order-rank working memories and learned list chunks for the planning and control of sequences of linguistic, spatial, and motor information; conscious speech percepts that are influenced by future context; auditory streaming in noise during source segregation; and speaker normalization. Brain regions that are functionally described include visual and auditory neocortex; specific and nonspecific thalamic nuclei; inferotemporal, parietal, prefrontal, entorhinal, hippocampal, parahippocampal, perirhinal, and motor cortices; frontal eye fields; supplementary eye fields; amygdala; basal ganglia; cerebellum; and superior colliculus. Due to the complementary organization of the brain, ART does not describe many spatial and motor behaviors whose matching and learning laws differ from those of ART. Given Randy O'Reilly's comments about Leabra, it is also of historical interest that I introduced the core equations used in Leabra in the 1960s and early 1970s, and they have proved to be of critical importance in all the developments of ART. To illustrate how LAMINART exemplifies the type of laminar cortical theory that your Science article discusses, let me refer interested readers to a few archival articles.
LAMINART proposes how all cortical areas combine bottom-up, horizontal, and top-down interactions, thereby beginning to functionally clarify why all granular neocortex has a characteristic architecture with six main cell layers, and how these laminar circuits may be specialized to carry out different types of biological intelligence. In particular, this unification shows how variations of a shared laminar cortical design can be used to explain and simulate psychological and neurobiological data about vision, speech, and cognition: Vision. The 3D LAMINART model integrates bottom-up and horizontal processes of 3D boundary formation and perceptual grouping, surface filling-in, and figure-ground separation with top-down attentional matching and oscillatory dynamics in cortical areas such as V1, V2, and V4 (Cao and Grossberg, 2005; Fang and Grossberg, 2009; Grossberg, 1999; Grossberg and Raizada, 2000; Grossberg and Swaminathan, 2004; Grossberg and Versace, 2008; Grossberg and Yazdanbakhsh, 2005; Raizada and Grossberg, 2001). It is arguably the currently most highly developed vision model with the broadest explanatory and predictive range, laminar or not. This model, as well as the other models listed below, also makes multiple predictions about the functional roles that are played by identified cortical cells in all of these visual processes. Speech. The cARTWORD model proposes how bottom-up, horizontal, and top-down interactions within a hierarchy of laminar cortical processing stages, modulated by the basal ganglia, can generate a conscious speech percept that is embodied by a resonant wave of activation that occurs between acoustic features, acoustic item chunks, and list chunks (Grossberg and Kazerounian, 2011). Chunk-mediated gating allows speech to be heard in the correct temporal order, even when what is consciously heard depends upon using future context to disambiguate noise-occluded sounds, as occurs during phonemic restoration. Cognition. 
The LIST PARSE model describes how bottom-up, horizontal, and top-down interactions within the laminar circuits of lateral prefrontal cortex may carry out working memory storage of event sequences within layers 6 and 4, how unitization of these event sequences through learning into list chunks may occur within layer 2/3, and how these stored sequences can be recalled at variable rates that are under volitional control by the basal ganglia (Grossberg and Pearson, 2008). In particular, the model uses variations of the same circuitry to quantitatively simulate human cognitive data about immediate serial recall and immediate free recall, delayed free recall, and continuous distracter free recall; and monkey neurophysiological data from the prefrontal cortex obtained during sequential sensory-motor imitation and planned performance. Prefrontal-basal ganglia interactions. In addition to the thalamocortical interactions embodied in the above models, neocortical interactions with other subcortical structures have been modeled as part of this emerging theory, notably cognitive-emotional interactions, reinforcement learning, and gating of plans and movements. These are also reviewed in Grossberg (2012). Here I will just mention one of these models that focuses on the kinds of prefrontal-basal ganglia interactions that you mentioned in your Science article. The lisTELOS model builds upon, and unifies, the working memory and basal ganglia circuits of the LIST PARSE and TELOS models. In particular, Silver et al. (2011) have incorporated an item-order-rank spatial working memory into a comprehensive model of how sequences of eye movements, which may include repetitions, may be planned and performed. Similar mechanisms may be expected to control other types of sequences as well, for reasons that are reviewed in Grossberg (2012).
The lisTELOS model's name derives from the fact that it unifies and further develops concepts from LIST PARSE about how item-order-rank working memories store lists of items, and TELOS model properties of how the basal ganglia (Brown et al., 1999, 2004) help to balance reactive vs. planned movements by selectively gating sequences of actions through time. Shunting dynamics and ratio processing. The kind of shunting dynamics that enables automatic computation of activity ratios has been a critical component of all models that my colleagues and I have developed since my foundational article (Grossberg, 1973) first mathematically proved how this works in both non-recurrent and recurrent networks. Indeed, ART models may be viewed as self-organizing production systems that carry out a novel kind of probabilistic hypothesis testing and decision-making that is designed to work in response to big non-stationary databases. New computational paradigms. These examples illustrate an emerging unified theory of how variations of a shared laminar neocortical design can carry out multiple types of biological intelligence. Semi-classical models, such as deep learning, have been very useful in technology, but have little to offer in explaining how our brains have evolved to control autonomous adaptive behaviors. This weakness of deep learning is partly explained by the fact that these laminar cortical models embody revolutionary new computational paradigms that I have called Laminar Computing and Complementary Computing, which underlie natural computational realizations for biological systems that have evolved to autonomously and stably adapt in real time to a rapidly changing and unpredictable world. Indeed, LAMINART embodies a new type of hybrid between feedforward and feedback computing, and also between digital and analog computing for processing distributed data. These properties go beyond the types of Bayesian models that are so popular today.
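Returning to the shunting dynamics mentioned above, the ratio-computing property can be seen directly at equilibrium. Here is a minimal sketch in plain Python of a non-recurrent shunting on-center off-surround network of the kind analyzed in Grossberg (1973); the parameter values A and B are illustrative choices, not taken from any particular paper:

```python
def shunting_equilibrium(inputs, A=1.0, B=2.0):
    """Equilibrium of the shunting on-center off-surround network
        dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i * sum_{k != i} I_k,
    obtained by setting dx_i/dt = 0:  x_i = B*I_i / (A + sum_k I_k)."""
    total = sum(inputs)
    return [B * I_i / (A + total) for I_i in inputs]

I = [1.0, 2.0, 5.0]
x = shunting_equilibrium(I)
ratios = [xi / sum(x) for xi in x]              # -> [0.125, 0.25, 0.625]

x_bright = shunting_equilibrium([10 * i for i in I])
ratios_bright = [xi / sum(x_bright) for xi in x_bright]
# Same relative pattern despite a tenfold brighter input, while total
# activity stays bounded by B: the network computes input ratios.
```

The divisive term sum_k I_k is what automatically normalizes activities, which is the property Grossberg links to "normalizing the ratio between the activity of an individual neuron and a set of neurons" in the discussion above.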
They underlie the fast but stable self-organization that is characteristic of cortical development and life-long learning. Their circuits "run as fast as they can": they behave like a real-time probabilistic decision circuit that operates as quickly as possible, given the evidence. There is thus a trade-off between certainty and speed. They operate in a fast feedforward mode when there is little uncertainty, and automatically switch to a slower feedback mode when there is uncertainty. Feedback selects a winning decision that enables the circuit to speed up again, since activation amplitude, synchronization, and processing speed all increase with certainty. LAMINART also embodies a novel kind of hybrid computing that simultaneously realizes the stability of digital computing and the sensitivity of analog computing. The coherence that is derived from synchronous storage in interlaminar and intercortical feedback loops provides the stability of digital computing...the feedback loop exhibits hysteresis that can preserve the stored pattern against external perturbations...while preserving the sensitivity of analog computation. I should add that the new models are also of interest in technology, and indeed have been embodied in the software and hardware applications of many companies during the past few decades. A great deal of additional exciting research remains to be done to develop unified software and hardware platforms for multiple types of autonomous adaptive intelligence. These promise to revolutionize computer science in general, and the design of autonomous adaptive mobile robots in particular. Best, Steve Stephen Grossberg Wang Professor of Cognitive and Neural Systems Professor of Mathematics, Psychology, and Biomedical Engineering Director, Center for Adaptive Systems Boston University http://cns.bu.edu/~steve References Brown, J., Bullock, D., and Grossberg, S. (1999).
How the basal ganglia use parallel excitatory and inhibitory learning pathways to selectively respond to unexpected rewarding cues. Journal of Neuroscience, 19, 10502-10511. Brown, J.W., Bullock, D., and Grossberg, S. (2004). How laminar frontal cortex and basal ganglia circuits interact to control planned and reactive saccades. Neural Networks, 17, 471-510. Cao, Y. and Grossberg, S. (2005). A laminar cortical model of stereopsis and 3D surface perception: Closure and da Vinci stereopsis. Spatial Vision, 18, 515-578. Fang, L. and Grossberg, S. (2009). From stereogram to surface: How the brain sees the world in depth. Spatial Vision, 22, 45-82. Grossberg, S. (1973). Contour enhancement, short-term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics, 52, 213-257. Grossberg, S., and Kazerounian, S. (2011). Laminar cortical dynamics of conscious speech perception: A neural model of phonemic restoration using subsequent context in noise. Journal of the Acoustical Society of America, 130, 440-460. Grossberg, S., and Pearson, L. (2008). Laminar cortical dynamics of cognitive and motor working memory, sequence learning and performance: Toward a unified theory of how the cerebral cortex works. Psychological Review, 115, 677-732 . Grossberg, S., and Raizada, R. (2000). Contrast-sensitive perceptual grouping and object-based attention in the laminar circuits of primary visual cortex. Vision Research, 40, 1413-1432. Grossberg, S., and Swaminathan, G. (2004). A laminar cortical model for 3D perception of slanted and curved surfaces and of 2D images: development, attention and bistability. Vision Research, 44, 1147-1187. Grossberg, S., and Versace, M. (2008). Spikes, synchrony, and attentive learning by laminar thalamocortical circuits. Brain Research, 1218, 278-312. Grossberg, S., and Yazdanbakhsh, A. (2005). Laminar cortical dynamics of 3D surface perception: Stratification, transparency, and neon color spreading. 
Vision Research, 45, 1725-1743. Raizada, R. and Grossberg, S. (2003). Towards a theory of the laminar architecture of cerebral cortex: Computational clues from the visual system. Cerebral Cortex, 13, 100-113. Silver, M.R., Grossberg, S., Bullock, D., Histed, M.H., and Miller, E.K. (2011). A neural model of sequential movement planning and control of eye movements: Item-order-rank working memory and saccade selection by the supplementary eye fields. Neural Networks, 26, 29-58. *********************************** On Nov 2, 2014, at 4:57 PM, Gary Marcus wrote: > New piece in Science, reevaluating the "canonical cortical computation" hypothesis: http://www.sciencemag.org/content/346/6209/551.short (First paragraph pasted in below) > > And a lot of further detail didn't fit: http://biorxiv.org/content/early/2014/10/31/010983 > > I'd love a discussion here on the group, especially re: the Table of possible computations and their neural realizations, in the Supplement. We plan to crowd-source a more detailed version of that table; please contact me if you are interested in contributing. > > Cheers, > Gary > > The human cerebral cortex is central to a wide array of cognitive functions, from vision to language, reasoning, decision-making, and motor control. Yet, nearly a century after the neuroanatomical organization of the cortex was first defined, its basic logic remains unknown. One hypothesis is that cortical neurons form a single, massively repeated "canonical" circuit, characterized as a kind of a "nonlinear spatiotemporal filter with adaptive properties" (1). In this classic view, it was "assumed that these...properties are identical for all neocortical areas." Nearly four decades later, there is still no consensus about whether such a canonical circuit exists, either in terms of its anatomical basis or its function.
Likewise, there is little evidence that such uniform architectures can capture the diversity of cortical function in simple mammals, let alone characteristically human processes such as language and abstract thinking (2). Analogous software implementations in artificial intelligence (e.g., deep learning networks) have proven effective in certain pattern classification tasks, such as speech and image recognition, but likewise have made few inroads in areas such as reasoning and natural language understanding. Is the search for a single canonical cortical circuit misguided? > > > Gary Marcus > Professor of Psychology and Neural Science > New York University > Visiting Cognitive Scientist > Allen Institute for Brain Science > Editor, The Future of the Brain (2014) > http://garymarcus.com/ > New Yorker essays > New York Times op-eds > > > > > > Stephen Grossberg Wang Professor of Cognitive and Neural Systems Professor of Mathematics, Psychology, and Biomedical Engineering Director, Center for Adaptive Systems http://www.cns.bu.edu/about/cas.html http://cns.bu.edu/~steve steve at bu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From friedhelm.schwenker at uni-ulm.de Mon Nov 3 15:58:13 2014 From: friedhelm.schwenker at uni-ulm.de (Dr.
Schwenker) Date: Mon, 03 Nov 2014 21:58:13 +0100 Subject: Connectionists: CfP : TWELFTH INTERNATIONAL CONFERENCE ON MULTIPLE CLASSIFIER SYSTEMS Message-ID: <5457EC65.4070805@uni-ulm.de> ---- PLEASE ACCEPT OUR APOLOGIES FOR MULTIPLE COPIES ---- ****************************************** ** MCS 2015 Preliminary Call for Papers ** ****************************************** *****Paper Submission: JANUARY 30, 2015***** ********************************************************************** TWELFTH INTERNATIONAL CONFERENCE ON MULTIPLE CLASSIFIER SYSTEMS Reisensburg Castle (Günzburg, Germany), Ulm University, June 29 - July 1, 2015 http://mcs.diee.unica.it ********************************************************************** MCS 2015 is the twelfth edition of the well-established series of meetings providing the leading international forum for the discussion of issues in multiple classifier systems and ensemble methods. The aim of the workshop is to bring together researchers from diverse communities concerned with this topic, including pattern recognition, machine learning, neural networks, data mining and statistics. MCS 2015 will be held on June 29 - July 1, 2015, at Reisensburg Castle (Günzburg, Germany), a research center of Ulm University. Updated information: http://mcs.diee.unica.it Best regards, MCS 2015 Co-Chairs Friedhelm Schwenker Joseph Kittler Fabio Roli -------------- next part -------------- An HTML attachment was scrubbed... URL: From terry at salk.edu Mon Nov 3 17:13:08 2014 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 03 Nov 2014 14:13:08 -0800 Subject: Connectionists: The Atoms of Neural Computation: A reply to Gary Marcus In-Reply-To: Message-ID: The debate between lumpers and splitters on cortical areas will not be settled until we have the right tools to probe them anatomically and functionally. We don't even know how many types of neurons there are in the cortex. Estimates range from 100 to 1000.
One of the goals of the BRAIN Initiative is to find out how many there are and how they vary between different parts of the cortex: http://www.braininitiative.nih.gov/2025/index.htm An important source of variability between neurons is differential patterns of gene methylation, which is uniquely different in neurons compared with other cell types in the body: Lister, R. Mukamel, et al. Global epigenomic reconfiguration during mammalian brain development, Science, 341, 629, 2013 http://directorsblog.nih.gov/2013/08/27/charting-the-chemical-choreography-of-brain-development/#more-1983 http://papers.cnl.salk.edu/PDFs/Global%20epigenomic%20reconfiguration%20during%20mammalian%20brain%20development%202013-4331.pdf We now have optical techniques to record from 1000 cortical neurons simultaneously and that will increase by a factor of 100-1000x over the next decade. This will create a big data problem for neuroscience that readers of this list could help solve: Sejnowski, T. J. Churchland, P.S. Movshon, J.A. 
Putting big data to good use in neuroscience, Nature Neuroscience, 17, 1440-1441, 2014 http://papers.cnl.salk.edu/PDFs/Putting%20big%20data%20to%20good%20use%20in%20neuroscience%202014-4397.pdf Terry ----- From hqyang at cse.cuhk.edu.hk Mon Nov 3 01:41:41 2014 From: hqyang at cse.cuhk.edu.hk (Haiqin) Date: Mon, 3 Nov 2014 14:41:41 +0800 Subject: Connectionists: CFP: SDATA Workshop on WSDM2015 Message-ID: ************************************************************************************************************ CFP: International Workshop On Scalable Data Analytics: Theory & Applications (SDATA) In conjunction with WSDM 2015, Shanghai, China, Feb 6, 2015 http://sdata-wsdm-shanghai2015.weebly.com/ Paper Submission Deadline: November 14, 2014 ************************************************************************************************************ With the fast evolving technology for data collection, data transmission, and data analysis, the scientific, biomedical, and engineering research communities are undergoing a profound transformation where discoveries and innovations increasingly rely on massive amounts of data. New prediction techniques, including novel statistical, mathematical, and modeling techniques, are enabling a paradigm shift in scientific and biomedical investigation. Data have become the fourth pillar of science and engineering, offering complementary insights in addition to theory, experiments, and computer simulation. Advances in machine learning, data mining, and visualization are enabling new ways of extracting useful information from massive data sets. The characteristics of volume, velocity, variety and veracity bring challenges to current data analytics techniques. It is desirable to scale up data analytics techniques for modeling and analyzing big data from various domains.
The workshop aims to provide professionals, researchers, and technologists with a single forum where they can discuss and share state-of-the-art theories and applications of scalable data analytics technologies.

Topics of Interest
=============
Topics of interest include, but are not limited to, the following:

* Distributed data analytics architectures
 - Data analytics algorithms for GPUs
 - Data analytics algorithms for clouds
 - Data analytics algorithms for clusters
* Theory and algorithms for scalable descriptive statistical modeling
 - Structured, semi-structured, and unstructured data preprocessing
 - Effective data sampling and feature engineering
 - Data calibration and transformation
 - Qualitative and quantitative data measurement and validation
* Theory and algorithms for scalable predictive statistical modeling
 - Association analysis
 - Data approximation, dimensionality reduction, clustering
 - Linear / non-linear models for classification, regression, and ranking
 - Multiview learning, multitask learning, transfer learning, semi-supervised learning, and active learning techniques for multimodal data
* Scalable analytics techniques for temporal and spatial data
 - Real-time analysis of data streams
 - Trend prediction in financial data
 - Topic detection in instant messaging systems
 - Real-time modeling of events in dynamic networks
 - Spatial modeling on maps
* Scalable data analytics algorithms for large graphs
 - Community discovery and analysis in social networks
 - Link prediction in networks
 - Anomaly detection in social networks
 - Authority identification and influence measurement in social networks
 - Fusion of information from multiple blogs, rating systems, and social networks
 - Integration of text, video, images, and sound in social media
 - Recommender systems
* Novel applications of scalable machine learning to big data
 - Decision making with big data
 - Counterfactual reasoning with big data
 - Medical / health informatics big data analysis
 - Security big data analysis
 -
Astronomy big data analysis
 - Biological big data analysis
 - Urban / smart city big data analysis
 - Education big data analysis

Paper Submission
=============
Submissions must represent new and original work. Concurrent submissions are not allowed. Submissions that have been previously presented in venues with no formal proceedings or as posters are allowed, but must be so indicated on the first page of the submission. Papers must be formatted for US Letter size according to ACM guidelines and style files, must fit within 10 pages (with a font size no smaller than 9pt), including references, diagrams, and appendices if any. A submitted paper must be self-contained and in English. Submit papers via the link https://www.easychair.org/conferences/?conf=sdata2015.

Important Dates
=============
* November 14, 2014: Due date for full workshop paper submissions
* December 5, 2014: Notification of paper acceptance to authors
* December 19, 2014: Camera-ready & registration of accepted papers
* February 6, 2015: Workshop day

Organizers
=============
Kaizhu HUANG, Xi'an Jiaotong-Liverpool University
Haiqin YANG, The Chinese University of Hong Kong
Irwin KING, The Chinese University of Hong Kong
Michael LYU, The Chinese University of Hong Kong
-------------- Best Regards, Haiqin -------------- next part -------------- An HTML attachment was scrubbed... URL: From dror.cohen07 at gmail.com Mon Nov 3 17:57:07 2014 From: dror.cohen07 at gmail.com (Dror Cohen) Date: Tue, 4 Nov 2014 09:57:07 +1100 Subject: Connectionists: The Atoms of Neural Computation: A reply to Gary Marcus In-Reply-To: References: Message-ID: "lumpers and splitters" I like that :). There is another aspect to this debate that I think is often overlooked. Biological systems exhibit high degeneracy. In this context, degeneracy refers to the ability of different mechanisms to produce the same function. This is to be contrasted with redundancy, which occurs when similar mechanisms produce the same function.
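Dror's distinction between degeneracy and redundancy can be made concrete with a toy example (my own construction, not taken from the Edelman and Gally paper): two two-layer linear "circuits" with structurally different weights that nevertheless implement exactly the same input-output function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mechanism A: a two-layer linear circuit y = W2 @ W1 @ x.
W1a = rng.normal(size=(5, 3))
W2a = rng.normal(size=(2, 5))

# Mechanism B: a structurally different circuit (its hidden layer is
# rotated by a random orthogonal matrix R) that realizes the *same*
# function -- degeneracy, not mere redundancy, since the parts differ.
R = np.linalg.qr(rng.normal(size=(5, 5)))[0]  # random orthogonal matrix
W1b = R @ W1a
W2b = W2a @ R.T                               # R.T is R's inverse

x = rng.normal(size=3)
ya = W2a @ (W1a @ x)
yb = W2b @ (W1b @ x)
# ya equals yb even though the hidden representations W1a @ x and W1b @ x differ
```

The point of the construction: measuring the "mechanism" (the hidden-layer responses) would show two quite different circuits, yet their function is identical, which is one way variability in cortical cell types could coexist with overlapping function.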
See below for an excellent overview. Degeneracy and Complexity in Biological Systems Gerald M. Edelman and Joseph A. Gally PNAS 2001 There may be large variability in the types of neurons in the cortex, but my bet is that their functions overlap - that is, this variability, in part, is an example of degeneracy. I'm interested in doing some simulations to develop this idea, if anyone is interested please feel free to email me. Thanks, Dror On Tue, Nov 4, 2014 at 9:13 AM, Terry Sejnowski wrote: > The debate between lumpers and splitters on cortical areas will not be > settled > until we have the right tools to probe them anatomically and functionally. > > We don't even know how many types of neurons there are in the cortex. > Estimates range from 100 to 1000. > > One of the goals of the BRAIN Initiative is to find out how many > there are and how they vary between different parts of the cortex: > > http://www.braininitiative.nih.gov/2025/index.htm > > An important source of variability between neurons is differential patterns > of gene methylation, which is uniquely different in neurons compared > with other cell types in the body: > > Lister, R. Mukamel, et al. Global epigenomic reconfiguration > during mammalian brain development, Science, 341, 629, 2013 > > > http://directorsblog.nih.gov/2013/08/27/charting-the-chemical-choreography-of-brain-development/#more-1983 > > > http://papers.cnl.salk.edu/PDFs/Global%20epigenomic%20reconfiguration%20during%20mammalian%20brain%20development%202013-4331.pdf > > We now have optical techniques to record from 1000 cortical neurons > simultaneously and that will increase by a factor of 100-1000x > over the next decade. > > This will create a big data problem for neuroscience that readers of > this list could help solve: > > Sejnowski, T. J. Churchland, P.S. Movshon, J.A. 
> Putting big data to good use in neuroscience, > Nature Neuroscience, 17, 1440-1441, 2014 > > > http://papers.cnl.salk.edu/PDFs/Putting%20big%20data%20to%20good%20use%20in%20neuroscience%202014-4397.pdf > > Terry > > ----- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From weng at cse.msu.edu Mon Nov 3 20:25:59 2014 From: weng at cse.msu.edu (Juyang Weng) Date: Mon, 03 Nov 2014 20:25:59 -0500 Subject: Connectionists: The Atoms of Neural Computation: A reply to Gary Marcus In-Reply-To: References: Message-ID: <54582B27.60906@cse.msu.edu> Terry, we should get more phenomena from experimental biologists and neuroscientists. However, we must also provide a computational model of how the brain wires itself and develops its functions from activities. You may be right in terms of "settled", but almost nothing can be settled in science. One example is Newtonian physics, which has been replaced by the more accurate theory of relativity. -John On 11/3/14 5:13 PM, Terry Sejnowski wrote: > The debate between lumpers and splitters on cortical areas will not be settled > until we have the right tools to probe them anatomically and functionally. > > We don't even know how many types of neurons there are in the cortex. > Estimates range from 100 to 1000. > > One of the goals of the BRAIN Initiative is to find out how many > there are and how they vary between different parts of the cortex: > > http://www.braininitiative.nih.gov/2025/index.htm > > An important source of variability between neurons is differential patterns > of gene methylation, which is uniquely different in neurons compared > with other cell types in the body: > > Lister, R. Mukamel, et al.
Global epigenomic reconfiguration > during mammalian brain development, Science, 341, 629, 2013 > > http://directorsblog.nih.gov/2013/08/27/charting-the-chemical-choreography-of-brain-development/#more-1983 > > http://papers.cnl.salk.edu/PDFs/Global%20epigenomic%20reconfiguration%20during%20mammalian%20brain%20development%202013-4331.pdf > > We now have optical techniques to record from 1000 cortical neurons > simultaneously and that will increase by a factor of 100-1000x > over the next decade. > > This will create a big data problem for neuroscience that readers of > this list could help solve: > > Sejnowski, T. J. Churchland, P.S. Movshon, J.A. > Putting big data to good use in neuroscience, > Nature Neuroscience, 17, 1440-1441, 2014 > > http://papers.cnl.salk.edu/PDFs/Putting%20big%20data%20to%20good%20use%20in%20neuroscience%202014-4397.pdf > > Terry > > ----- -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- From steve at cns.bu.edu Mon Nov 3 21:08:03 2014 From: steve at cns.bu.edu (Stephen Grossberg) Date: Mon, 3 Nov 2014 21:08:03 -0500 Subject: Connectionists: The Atoms of Neural Computation: A reply to Terry Sejnowski In-Reply-To: References: Message-ID: <7ACD4F72-B82C-442C-A929-D80AA9DE9292@cns.bu.edu> Dear Terry, I personally don't believe that a "debate between lumpers and splitters" is productive. Understanding the brain cannot be achieved in one grand step. It needs to be done by successive approximations, by unlumping coarser models into finer models. Such a method has been enormously successful throughout the history of theoretical physics.
In order to deeply understand how brains give rise to minds, I believe that we need to take seriously the basic fact that: Behavioral success drives brain evolution. This implies that, if we want to discover the evolutionary constraints that have shaped brain design, we need to theoretically represent the functional brain units that can compute indices of behavioral success. I have developed and practiced such a method, with many colleagues, over the past 50 years. It is summarized in http://www.cns.bu.edu/Profiles/Grossberg/GrossbergInterests.pdf. That is how, after more than 20 years of developing non-laminar neural models of various behavioral functions, my colleagues and I were forced into finer laminar cortical models with a much expanded explanatory and predictive range. As one result, in Table 1 in Raizada and Grossberg (2003, Cerebral Cortex, 13, 100-113, http://www.cns.bu.edu/Profiles/Grossberg/RaiGro03CerCor.pdf), the LAMINART model predicts distinct functional roles in visual perceptual grouping for multiple identified cell types and connections throughout the layers of V1 and V2. Advanced experimental methods are needed to test these predictions, as well as to report unknown circuit properties, but I doubt that having such data without a functionally meaningful theory would make much sense. As another example: the TELOS model of Brown, Bullock, and Grossberg (2004, Neural Networks, 17, 471-510, http://www.cns.bu.edu/Profiles/Grossberg/BroBulGro2003NN.pdf) learned five saccadic tasks that are reported in monkey experiments and then, using the learned model parameters, quantitatively simulated, and predicted the function of, the recorded dynamics of 17 identified cell types across multiple brain regions, as well as cell types that have not yet been reported. Again, more experiments are needed, but without a functionally meaningful theory, the data may well seem meaningless.
I could go on with many such examples, but I hope that my main point is clear: In order to develop this kind of evolving but principled theory, which embodies both "lumping" and "splitting", I believe that one needs to link brain to behavior at every step, and acknowledge that the theorist's task is to provide functionally meaningful explanations and predictions for every component in a model, at each stage of the unlumping process, and at multiple levels of organization. Best, Steve On Nov 3, 2014, at 5:13 PM, Terry Sejnowski wrote: > The debate between lumpers and splitters on cortical areas will not be settled > until we have the right tools to probe them anatomically and functionally. > > We don't even know how many types of neurons there are in the cortex. > Estimates range from 100 to 1000. > > One of the goals of the BRAIN Initiative is to find out how many > there are and how they vary between different parts of the cortex: > > http://www.braininitiative.nih.gov/2025/index.htm > > An important source of variability between neurons is differential patterns > of gene methylation, which is uniquely different in neurons compared > with other cell types in the body: > > Lister, R. Mukamel, et al. Global epigenomic reconfiguration > during mammalian brain development, Science, 341, 629, 2013 > > http://directorsblog.nih.gov/2013/08/27/charting-the-chemical-choreography-of-brain-development/#more-1983 > > http://papers.cnl.salk.edu/PDFs/Global%20epigenomic%20reconfiguration%20during%20mammalian%20brain%20development%202013-4331.pdf > > We now have optical techniques to record from 1000 cortical neurons > simultaneously and that will increase by a factor of 100-1000x > over the next decade. > > This will create a big data problem for neuroscience that readers of > this list could help solve: > > Sejnowski, T. J. Churchland, P.S. Movshon, J.A.
> Putting big data to good use in neuroscience, > Nature Neuroscience, 17, 1440-1441, 2014 > > http://papers.cnl.salk.edu/PDFs/Putting%20big%20data%20to%20good%20use%20in%20neuroscience%202014-4397.pdf > > Terry > > ----- Stephen Grossberg Wang Professor of Cognitive and Neural Systems Professor of Mathematics, Psychology, and Biomedical Engineering Director, Center for Adaptive Systems http://www.cns.bu.edu/about/cas.html http://cns.bu.edu/~steve steve at bu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Reply to Gary Marcus via connectionists 11-3-14.pdf Type: application/pdf Size: 102628 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From bwyble at gmail.com Tue Nov 4 00:49:40 2014 From: bwyble at gmail.com (Brad Wyble) Date: Tue, 4 Nov 2014 00:49:40 -0500 Subject: Connectionists: The Atoms of Neural Computation In-Reply-To: <330594F9-9782-452A-B4CB-0C2E8F069DF5@nyu.edu> References: <330594F9-9782-452A-B4CB-0C2E8F069DF5@nyu.edu> Message-ID: > > I'd love a discussion here on the group, especially re: the Table of > possible computations and their neural realizations, in the Supplement. We > plan to crowd-source a more detailed version of that table; please contact > me if you are interested in contributing. > > For this I'd like to reinforce Stephen Grossberg's point about paying close attention to behavioral data, which in my opinion should play at least as strong a role as the neural data in our search for suitable neural mechanisms to describe cognitive function. For example, if one is developing a model of working memory, a set of behavioral constraints provides strong restrictions on the classes of mechanisms that one should consider, which may be even more restrictive than highly detailed neural data.
As an example, we've recently published a computational model (Swan & Wyble 2014, linked below) of binding multi-feature objects into visual working memory that relies on satisfying several general constraints, as well as matching certain empirical benchmarks. These constraints are:
1. Memory can store repetitions of the same item
2. Memory is addressable by content and by temporal order
3. Storing multiple items produces interference and crosstalk
4. Items can be stored and (mostly) overwritten quite rapidly if the task requires it
Obviously this list is far from complete, but even so it is useful in terms of thinking about mechanisms. For example, the requirement to store repetitions poses difficulties for models in which memory storage occurs at the sensory level. I also suspect that repetitions pose difficulties for some models that use neural synchrony to store binding information. At the very least such behavioral constraints encourage one to think deeply about how mechanisms would operate in service of functional requirements. Best regards -Brad Swan, G., & Wyble, B. (2014). The binding pool: A model of shared neural resources for distinct items in visual working memory. Attention, Perception, & Psychophysics, 76(7), 2136-2157. http://wyblelab.com/research_repos/models/bindingpool/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bob at email.arizona.edu Tue Nov 4 09:54:35 2014 From: bob at email.arizona.edu (Robert Wilson) Date: Tue, 4 Nov 2014 09:54:35 -0500 Subject: Connectionists: Ph.D. and Postdoc positions in computational neuroscience and neuroimaging with Robert Wilson at the University of Arizona Message-ID: Join the recently established Neuroscience of Reinforcement Learning (NeuRL) lab led by Robert Wilson at the University of Arizona (http://www.u.arizona.edu/~bob/).
Research in the lab focuses on computational, behavioral, pupillometric and neuroimaging studies of human reinforcement learning and decision making. Potential projects include: probing the neural basis of exploration and foraging, the role of randomness in decision making, and the representation of state information in the brain. For more information about either position feel free to email me at: bob at email.arizona.edu Postdoc applicants: Candidates for the Postdoc position should have a Ph.D. in neuroscience, psychology, computer science, bioengineering or related field. A strong background in functional imaging and/or computational modeling is preferred. To apply, please send a cover letter stating background and research interests, a CV, one or two representative publications/working papers, and contact information of at least two references. Tucson: Situated between four mountain ranges, the highest nearly 10,000 feet, Tucson and its environs are both stunningly beautiful and remarkably affordable. A mecca for cyclists and hikers, Tucson and its year-round sunshine nurture a thriving outdoor community, as well as a vibrant downtown cultural scene with music venues, art galleries and fine dining. Low rents and affordable amenities make for comfortable living just blocks from the university. Tucson is among the most livable, enjoyable destinations in the United States! From Johan.Suykens at esat.kuleuven.be Tue Nov 4 05:40:07 2014 From: Johan.Suykens at esat.kuleuven.be (Johan Suykens) Date: Tue, 04 Nov 2014 11:40:07 +0100 Subject: Connectionists: book announcement Message-ID: <5458AD07.4090005@esat.kuleuven.be> Regularization, Optimization, Kernels, and Support Vector Machines Editors: Johan A.K. Suykens, Marco Signoretto, Andreas Argyriou Chapman and Hall/CRC, Machine Learning & Pattern Recognition series, Boca Raton, USA, Oct 2014 http://www.crcpress.com/product/isbn/9781482241396 Chapter contributions: 1. 
An Equivalence between the Lasso and Support Vector Machines; Martin Jaggi 2. Regularized Dictionary Learning; Annalisa Barla, Saverio Salzo, and Alessandro Verri 3. Hybrid Conditional Gradient-Smoothing Algorithms with Applications to Sparse and Low Rank Regularization; Andreas Argyriou, Marco Signoretto, and Johan A.K. Suykens 4. Nonconvex Proximal Splitting with Computational Errors; Suvrit Sra 5. Learning Constrained Task Similarities in Graph-Regularized Multi-Task Learning; Remi Flamary, Alain Rakotomamonjy, and Gilles Gasso 6. The Graph-Guided Group Lasso for Genome-Wide Association Studies; Zi Wang and Giovanni Montana 7. On the Convergence Rate of Stochastic Gradient Descent for Strongly Convex Functions; Cheng Tang and Claire Monteleoni 8. Detecting Ineffective Features for Nonparametric Regression; Kris De Brabanter, Paola Gloria Ferrario, and Laszlo Gyorfi 9. Quadratic Basis Pursuit; Henrik Ohlsson, Allen Y. Yang, Roy Dong, Michel Verhaegen, and S. Shankar Sastry 10. Robust Compressive Sensing; Esa Ollila, Hyon-Jung Kim, and Visa Koivunen 11. Regularized Robust Portfolio Estimation; Theodoros Evgeniou, Massimiliano Pontil, Diomidis Spinellis, Rafal Swiderski, and Nick Nassuphis 12. The Why and How of Nonnegative Matrix Factorization; Nicolas Gillis 13. Rank Constrained Optimization Problems in Computer Vision; Ivan Markovsky 14. Low-Rank Tensor Denoising and Recovery via Convex Optimization; Ryota Tomioka, Taiji Suzuki, Kohei Hayashi, and Hisashi Kashima 15. Learning Sets and Subspaces; Alessandro Rudi, Guillermo D. Canas, Ernesto De Vito, and Lorenzo Rosasco 16. Output Kernel Learning Methods; Francesco Dinuzzo, Cheng Soon Ong, and Kenji Fukumizu 17. Kernel Based Identification of Systems with Multiple Outputs Using Nuclear Norm Regularization; Tillmann Falck, Bart De Moor, and Johan A.K. Suykens 18. Kernel Methods for Image Denoising; Pantelis Bouboulis and Sergios Theodoridis 19. 
Single-Source Domain Adaptation with Target and Conditional Shift; Kun Zhang, Bernhard Scholkopf, Krikamol Muandet, Zhikun Wang, Zhi-Hua Zhou, and Claudio Persello 20. Multi-Layer Support Vector Machines; Marco A. Wiering and Lambert R.B. Schomaker 21. Online Regression with Kernels; Steven Van Vaerenbergh and Ignacio Santamaria From simone.seeger at zi-mannheim.de Tue Nov 4 03:07:24 2014 From: simone.seeger at zi-mannheim.de (Seeger, Simone) Date: Tue, 4 Nov 2014 08:07:24 +0000 Subject: Connectionists: PhD student position in theoretical/ computational neuroscience available in Mannheim, Germany In-Reply-To: <68B5BBF569FEF84891D3AAB4D3141E453D907F55@ZIMAIL1.Zi.local> References: <68B5BBF569FEF84891D3AAB4D3141E453D907F55@ZIMAIL1.Zi.local> Message-ID: <68B5BBF569FEF84891D3AAB4D3141E453D907F61@ZIMAIL1.Zi.local> A PhD student position in theoretical/ computational neuroscience (German payscale 0.67x E13 TV-L) is available for 3 years initially, starting January 2015 or later, at the Dept. of Theoretical Neuroscience (Head: Prof. Daniel Durstewitz), Bernstein Center for Computational Neuroscience Heidelberg-Mannheim, Central Institute of Mental Health Mannheim, Heidelberg University. The lab focuses on highly data-driven computational models of prefrontal cortical and hippocampal network dynamics underlying higher cognitive function in health and psychiatric disease. Our approach is to estimate (in statistically principled ways) single cell and network models from slice and in-vivo electrophysiological recordings generated in the lab, and then apply these for understanding network function and dysfunction in experimental data (multiple single-unit, behavior) in psychiatrically relevant genetic animal models. Applicants should have a strong mathematical background (dynamical systems and/or statistics / stochastic processes), good programming skills (Matlab, C++), and a keen interest in biological and neural systems. 
Please send informal inquiries and application documents (cover letter including a brief description of research interests, CV, and contact details of two personal references) to Dr. loreen.hertaeg at zi-mannheim.de.

***
Simone Seeger, M.A.
Administration
Bernstein Center for Computational Neuroscience
Zentralinstitut für Seelische Gesundheit
Postfach 12 21 20, 68072 Mannheim
J5, 68159 Mannheim
Telephone: 0621/1703-1326 or 06221/54-8310
Fax: 0621/1703-2915
E-Mail: Simone.Seeger at zi-mannheim.de
Internet: http://www.bccn-heidelberg-mannheim.de

From m.gomez.rodriguez at gmail.com Tue Nov 4 05:54:10 2014 From: m.gomez.rodriguez at gmail.com (Manuel Gomez Rodriguez) Date: Tue, 4 Nov 2014 11:54:10 +0100 Subject: Connectionists: Networks & Machine Learning Positions @ MPI-SWS Message-ID:

PhD positions in Networks & Machine Learning
====================================

The newly established Machine Learning group at the Max Planck Institute for Software Systems (MPI-SWS, http://www.mpi-sws.org/), led by Manuel Gomez-Rodriguez, is looking for up to three PhD students with a strong interest in networks & machine learning. In our group, we are interested in developing machine learning and large-scale data mining methods for the analysis and modeling of large real-world networks and the processes that take place over them. We are particularly interested in problems arising in the Web and social media. For further information on our research, please visit http://www.mpi-sws.org/~manuelgr/. Applicants should be highly motivated and creative, and have an exceptional background in machine learning, probability, optimization or graph theory, as well as excellent programming skills. A Master's degree in a relevant field (e.g., Computer Science, Engineering, Statistics & Optimization) is expected. Previous research experience in social networks or machine learning is a big plus.
MPI-SWS currently has 10 tenured and tenure-track faculty and about 50 doctoral and post-doctoral researchers, and is located in Kaiserslautern and Saarbruecken, in the tri-border area of Germany, France and Luxembourg. The institute maintains an open, international and diverse work environment, and we seek applications from outstanding candidates regardless of national origin or citizenship. The working language at MPI-SWS is English -- German is not required. The salary is highly competitive and you will have great support to do outstanding research. There will be ample opportunities for collaboration with the Empirical Inference Department, headed by Bernhard Schoelkopf, at the Max Planck Institute for Intelligent Systems (MPI-IS, http://www.is.tuebingen.mpg.de/research/dep/bs.html) in Tuebingen. One of the PhD positions will be a joint appointment between MPI-SWS and MPI-IS, and will be based in Tuebingen. To apply, please use the online application system at http://www.mpi-sws.org/index.php?n=graduate#graduate-school and send an email to manuelgr at mpi-sws.org to highlight your application. Review of applications will start immediately and continue until the positions are filled. If you attend NIPS '14, please let me know so we can meet informally in person. If you are giving a talk or poster, please tell me so I can take a look at it.

Best wishes,
Manuel

From jkrichma at uci.edu Tue Nov 4 11:41:01 2014 From: jkrichma at uci.edu (Jeff Krichmar) Date: Tue, 4 Nov 2014 08:41:01 -0800 Subject: Connectionists: CFP: Special Issue on Neurobiologically Inspired Robotics: Enhanced Autonomy Through Neuromorphic Cognition Message-ID:

Dear Connectionists, I hope some of you will consider submitting to this special issue of Neural Networks. Neurobiologically inspired robotics goes by many names: brain-based devices, cognitive robots, neurorobots, and neuromorphic robots, to name a few. The field has grown into an exciting area of research and engineering.
The common goal is twofold. First, developing a system that demonstrates some level of cognitive ability can lead to a better understanding of the neural machinery that realizes cognitive function. The often-used phrase "understanding through building" implies that one can gain a deep understanding of a system by constructing physical artifacts that operate in the real world. In building and studying neurobiologically inspired robots, scientists must address theories of neuroscience that couple brain, body, and behavior. Second, the deep theoretical understanding of cognition, neurobiology and behavior obtained by constructing physical systems could lead to systems that demonstrate capabilities commonly found in the animal kingdom but rarely found in artificial systems, most notably adaptive and flexible autonomous behavior. There have already been some successes that meet these goals. For example, navigation models based on the hippocampus are now deployed on robots that autonomously explore their environment. Machine image processing systems based on the visual cortex have been used in a number of unsupervised recognition and perception applications. Robots designed to address impairments due to disorders such as Alzheimer's disease, autism spectrum disorder, and attentional deficit disorders are being used as therapeutic and diagnostic tools without the need for constant caretaker supervision. Despite these successes, the field is still in its infancy and basic research is needed. In particular, we are interested in papers that describe: 1) How models of cognitive functions, such as attention, decision-making, learning and memory, perception, and social cognition, can be constructed on physical robots. 2) How neuromorphic devices, which are designed to run neural algorithms with low power, can advance the construction of autonomous robotics.
3) How the theoretical and engineering lessons learned from constructing neurobiologically inspired robots can transfer to autonomous robots carrying out practical applications. This Special Issue invites papers that address the three broad topics described above.

Topics of interest
- Adaptive behavior
- Active sensing
- Artificial empathy
- Cortical computing
- Developmental robotics
- Embodied cognition
- Neuromorphic engineering
- On-line learning and memory systems
- Prediction and planning
- Socially assistive robotics

Guest Editors
Jeffrey Krichmar, University of California, Irvine
Minoru Asada, Osaka University
Jörg Conradt, Technische Universität München

Important Dates
Submission due: 1 Feb 2015
Acceptance notification: 1 Aug 2015
Expected publication: 1 Nov 2015

Submission instructions
Each paper submitted should be formatted according to the style and length limit of Neural Networks. Please refer to the complete Author Guidelines at http://www.elsevier.com/journals/neural-networks/0893-6080/guide-for-authors. Note that papers that have already been published, or are currently under review by other journals or conferences, are not eligible. A separate cover letter should be submitted that includes the paper title, the list of all authors and their affiliations, and the contact information of the corresponding author. Each paper will be reviewed rigorously, possibly in two rounds; papers receiving minor/major revision decisions will undergo another round of review. Prospective authors are invited to submit their papers directly via the online submission system at http://ees.elsevier.com/neunet/. To ensure that all manuscripts are correctly included in the special issue, it is important that all authors select "SI: Neurobiological Robotics" when they reach the "Article Type" step in the submission process.
Jeff Krichmar
Department of Cognitive Sciences
2328 Social & Behavioral Sciences Gateway
University of California, Irvine
Irvine, CA 92697-5100
jkrichma at uci.edu
http://www.socsci.uci.edu/~jkrichma

From jose at psychology.rutgers.edu Tue Nov 4 13:19:49 2014 From: jose at psychology.rutgers.edu (Stephen José Hanson) Date: Tue, 04 Nov 2014 13:19:49 -0500 Subject: Connectionists: HANSONLAB-Graduate Student Opportunities Message-ID: <1415125189.5592.70.camel@edison>

Please see attached. Hansonlab is seeking graduate students. Please repost. Steve

-------------- next part -------------- A non-text attachment was scrubbed... Name: hansonlab-gradopps.pdf Type: application/pdf Size: 97398 bytes Desc: not available

From dwang at cse.ohio-state.edu Tue Nov 4 16:09:04 2014 From: dwang at cse.ohio-state.edu (DeLiang Wang) Date: Tue, 4 Nov 2014 16:09:04 -0500 Subject: Connectionists: NEURAL NETWORKS, November 2014 Message-ID: <54594070.3080302@cse.ohio-state.edu>

Neural Networks - Volume 59, November 2014
http://www.journals.elsevier.com/neural-networks

Learning Markov random walks for robust subspace clustering and estimation
Risheng Liu, Zhouchen Lin, Zhixun Su

Model-wise and point-wise random sample consensus for robust regression and outlier detection
Moumen T. El-Melegy

Feature selection for linear SVMs under uncertain data: Robust optimization based on difference of convex functions algorithms
Hoai An Le Thi, Xuan Thanh Vo, Tao Pham Dinh

Ordinal regression neural networks based on concentric hyperspheres
Pedro Antonio Gutierrez, Peter Tino, Cesar Hervas-Martinez

Practical emotional neural networks
Ehsan Lotfi, M.-R. Akbarzadeh-T.
On computational algorithms for real-valued continuous functions of several variables
David Sprecher

From m.a.wiering at rug.nl Wed Nov 5 08:53:09 2014 From: m.a.wiering at rug.nl (M.A.Wiering) Date: Wed, 05 Nov 2014 14:53:09 +0100 Subject: Connectionists: CFP: Special session on Combining Evolutionary Computation and Reinforcement Learning at CEC 2015 In-Reply-To: <7790a1204f7f70.545a2b7f@rug.nl> References: <75d0ac344f290f.545a2a51@rug.nl> <75d0f96b4f71d3.545a2aca@rug.nl> <77908fe84f3577.545a2b06@rug.nl> <7790c5f44f7930.545a2b43@rug.nl> <7790a1204f7f70.545a2b7f@rug.nl> Message-ID: <7790de6d4f5e10.545a39d5@rug.nl>

Special session at the 2015 IEEE Congress on Evolutionary Computation (CEC2015): Combining Evolutionary Computation and Reinforcement Learning
http://sites.ieee.org/cec2015/
Held during May 25-28, 2015, at Sendai International Center, Sendai, Japan.
Paper Submission Due: December 19, 2014

Evolutionary Computation (EC) and Reinforcement Learning (RL) are two research fields in the area of search, optimization, and control. RL addresses sequential decision making problems in initially unknown stochastic environments, involving stochastic policies and unknown temporal delays between actions and observable effects. EC studies algorithms that can optimize some fitness function by searching for the optimal set of parameter values. RL can quite easily cope with stochastic environments, which is more difficult for traditional EC methods. The main strengths of EC techniques are their general applicability to solving many different kinds of optimization problems, and their global search behavior, which prevents these methods from getting easily trapped in local optima.
There also exist EC methods that deal with adaptive control problems, such as classifier systems and evolutionary reinforcement learning. Such methods address basically the same problem as RL, i.e. the maximization of the agent's reward in a potentially unknown environment that is not always completely observable. Still, the approaches taken by these methods are different and complementary. RL learns the parameters of a single model using a fixed representation of the knowledge, and improves its value function from the reward given after every step taken in the environment. EC is usually a population-based optimizer that uses a fitness function to rank individuals based on their total performance in the environment, and uses different operators to guide the search. These two research fields can benefit from an exchange of ideas, resulting in a better theoretical understanding and/or empirical efficiency.

Aim and scope
The main goal of this special session is to solicit research on frontiers and potential synergies between evolutionary computation and reinforcement learning. We encourage submissions describing applications of EC for optimizing agents in difficult environments that are possibly dynamic, uncertain and partially observable, as in games, multi-agent applications such as scheduling, and other real-world applications. Ideally, this special session will gather research papers with a background in either RL or EC that propose new challenges and ideas as a result of synergies between RL and EC.

Topics of interest
We enthusiastically solicit papers on relevant topics such as:
- Novel frameworks including both evolutionary algorithms and RL
- Comparisons between RL and EC approaches to optimize the behavior of agents in specific environments
- Parameter optimization of EC methods using RL or vice versa
- Adaptive search operator selection using reinforcement learning
- Optimization algorithms, such as meta-heuristics and evolutionary algorithms, for dynamic and uncertain environments
- Theoretical results on learnability in dynamic and uncertain environments
- On-line self-adapting systems or automatic configuration systems
- Solving multi-objective sequential decision making problems with EC/RL
- Learning in multi-agent systems using hybrids between EC and RL
- Learning to play games using optimization techniques
- Real-world applications in engineering, business, computer science, biological sciences, scientific computation, etc. in dynamic and uncertain environments solved with evolutionary algorithms
- Solving dynamic scheduling and planning problems with EC and/or RL

Organizers
Madalina M. Drugan (mdrugan at vub.ac.be), Artificial Intelligence Lab, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
Bernard Manderick (Bernard.Manderick at vub.ac.be), Artificial Intelligence Lab, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
Marco A. Wiering (m.a.wiering at rug.nl), Institute of Artificial Intelligence and Cognitive Engineering, University of Groningen, Nijenborgh 9, 9700AK Groningen, The Netherlands

From mail at jan-peters.net Tue Nov 4 17:01:49 2014 From: mail at jan-peters.net (Jan Peters) Date: Tue, 4 Nov 2014 23:01:49 +0100 Subject: Connectionists: [jobs] Robotics & Machine Learning Positions @ Darmstadt In-Reply-To: References: Message-ID: <875F92E5-8E55-479D-A552-38A345B6E9C7@jan-peters.net>

*** PROSPECTIVE APPLICANTS: PLEASE MEET GERHARD NEUMANN, TUCKER HERMANS AND HERKE VAN HOOF AT NIPS OR GUILHERME MAEDA AND ELMAR RUECKERT AT HUMANOIDS (IF YOU ATTEND) ***

Robotics & Machine Learning Positions
===============================

The Autonomous Systems Labs (CLAS and IAS) at the Technical University of Darmstadt (TU Darmstadt) are seeking several highly qualified postdoctoral researchers as well as talented Ph.D.
students with strong interests in one or more of the following research topics:
* Machine Learning for Robotics (especially Reinforcement Learning, Imitation, and Model Learning)
* Robot Grasping and Manipulation
* Robot Control, Learning for Control
* Robot Table Tennis
Please relate clearly to these topics in your research statement. Outstanding students and researchers from robotics and robotics-related areas, including machine learning, control engineering or computer vision, are welcome to apply. The candidates are expected to conduct independent research and at the same time contribute to ongoing projects in the areas listed above. Successful candidates can furthermore be given the opportunity to work with undergraduate, M.Sc. and Ph.D. students. Due to our strong ties to the Max Planck Institutes for Intelligent Systems and Biological Cybernetics, the University of Southern California, as well as to the Honda Research Institute, there will be ample opportunities for collaboration with these institutes.

ABOUT THE APPLICANT
Ph.D. position applicants need to have a Master's degree in a relevant field (e.g., Robotics, Computer Science, Engineering, Statistics & Optimization, Math and Physics) and must have exhibited their ability to perform research in either robotics or machine learning. A successful post-doc applicant should have a strong robotics and/or machine learning background with a track record of top-tier research publications, including relevant conferences (e.g., RSS, ICRA, IROS or ICML, IJCAI, AAAI, NIPS, AISTATS) and journals (e.g., AURO, TRo, IJRR or JMLR, MLJ, Neural Computation). A Ph.D. in Computer Science, Electrical or Mechanical Engineering (or another field clearly related to robotics and/or machine learning) as well as strong organizational and coordination skills are a must. Expertise in working with real robot systems is a big plus for all applicants.
THE POSITIONS
The positions start with a 24-month contract and may be extended up to 48 months. Payment will be according to the German TVL E-13 or E-14 payment scheme, depending on the candidate's experience and qualifications.

HOW TO APPLY?
All complete applications submitted through our online application system, found at http://www.ias.tu-darmstadt.de/Jobs/Application, will be considered. There is no fixed deadline: the positions will be filled as soon as possible. Ph.D. applicants should provide at least a research statement, a PDF with their CV, degrees, and grade sheets, and two references who are willing to write a recommendation letter. Post-doc applicants should provide three references and, in addition, their top three publications. Please be sure to include a link to your research website as well as your date of availability. Applicants are encouraged to contact Gerhard Neumann, Jan Peters, Guilherme Maeda, Elmar Rueckert, Tucker Hermans or Serena Ivaldi during the upcoming Humanoids, NIPS or other conferences. Candidates giving a presentation at one of these conferences are invited to send a corresponding note to us.

ABOUT CLAS AND IAS
The Autonomous Systems Labs CLAS and IAS aim at endowing robots with the ability to learn new tasks and adapt their behavior to their environment. To accomplish this goal, IAS focuses on the intersection between machine learning, robotics and biomimetic systems. Resulting research topics range from algorithm development in machine learning over robot grasping/manipulation and robot table tennis to biomimetic motor control/learning and brain-robot interfaces. Members of CLAS and IAS have been highly successful, as exhibited by recent awards, which include a Daimler Benz Fellowship, several Best Cognitive Robotics Paper Awards, the Georges Giralt Best 2013 Robotics PhD Thesis Award, an IEEE RAS Early Career Award, etc. The labs collaborate with numerous universities in Germany, Europe, the USA and Japan.
CLAS and IAS are partners in several European projects with many top institutes in ML and Robotics. The CLAS and IAS labs are located in the Robert Piloty Building in the beautiful Herrngarten park, less than fifty meters from a beer garden frequently used for lab meetings and after successful paper submissions.

ABOUT TU DARMSTADT
The TU Darmstadt is one of the top technical universities in Germany and is well known for its research and teaching. It was one of the first universities in the world to introduce programs in electrical engineering. Several chemical elements were discovered at Darmstadt, most prominently the element darmstadtium, and it is Germany's first fully autonomous university. More information can be found at: http://en.wikipedia.org/wiki/Darmstadt_University_of_Technology

ABOUT DARMSTADT
Darmstadt is a well-known high-tech center with important activities in spacecraft operations (e.g., through the European Space Operations Centre and the European Organisation for the Exploitation of Meteorological Satellites), chemistry, pharmacy, information technology, biotechnology, telecommunications and mechatronics, and is consistently ranked among the top high-tech regions in Germany. Darmstadt's important centers for arts, music and theatre allow for versatile cultural activities, while the proximity of the Odenwald forest and the Rhine valley allows for many outdoor sports. The 33,547 students of Darmstadt's three universities constitute a major part of Darmstadt's 140,000 inhabitants. Darmstadt's immigrant population is among the most diverse in Germany, such that knowledge of the German language is rarely needed (and many IAS members do not speak any German). Darmstadt is located close to the center of Europe. With just 17 minutes' driving distance to Frankfurt airport (closer than Frankfurt itself), it is one of the best-connected cities in Europe. Most major European cities can be reached in less than 2.5 hours from Darmstadt.
From veronica.bolon at udc.es Wed Nov 5 09:16:17 2014 From: veronica.bolon at udc.es (Veronica Bolon Canedo) Date: Wed, 5 Nov 2014 14:16:17 +0000 Subject: Connectionists: CFP Special Session ESANN 2015 in Feature and Kernel Learning Message-ID:

*** Apologies for cross posting ***

ESANN 2015 Special Session "Feature and Kernel Learning" - CALL FOR PAPERS
European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2015). 22-24 April 2015, Bruges, Belgium. http://www.esann.org

Submissions are invited for next year's ESANN Special Session "Feature and Kernel Learning".

Organizers:
Veronica Bolon-Canedo (University of A Coruña, Spain)
Michel Donini, Fabio Aiolli (University of Padova, Italy)

ABSTRACT: Feature selection and weighting have been an active research area in recent decades, finding success in many different applications. With the advent of Big Data, adequately identifying the relevant features has made feature selection an even more indispensable step. In kernel methods in particular, features are implicitly represented by means of feature mappings and kernels. It has been shown that the correct selection of a kernel is a crucial task: an erroneous selection of a kernel can lead to poor performance, and manually searching for an optimal kernel is time-consuming and often sub-optimal. This special session is concerned with using data to learn features and kernels automatically. Furthermore, it also aims to offer a meeting opportunity for academics and industry-related researchers belonging to the various communities of Computational Intelligence, Machine Learning and Data Mining, to discuss new areas of feature selection/learning, their application to real-world problems, and new challenges that need to be faced with the emerging "Big Dimensionality". We invite papers on feature and kernel selection/learning.
In particular, topics of interest include, but are not limited to:
- Incremental or online feature learning
- Learning features in extremely high dimensional domains
- Learning features on real applications
- Learning features on small sample domains
- Feature ranking
- Feature weighting
- Multiple kernel learning
- Kernel complexity
- Scalability and efficiency of feature and kernel learning algorithms
- Evaluation of feature learning
- Multi-task and multi-view feature learning
- Semi-supervised and unsupervised feature learning

Submitted papers will be reviewed according to the ESANN reviewing process and will be evaluated on their scientific value: originality, correctness, and writing style.

SUBMISSION AND IMPORTANT DATES:
We kindly invite you to submit a paper to this special session. Each paper will undergo a peer-review process for its acceptance. Paper submission should be done exclusively through the ESANN portal, following the instructions provided at: http://www.elen.ucl.ac.be/esann/index.php?pg=submission
Paper submission deadline: 21 November 2014
Notification of acceptance: 31 January 2015
ESANN 2015 conference: 22-24 April 2015

More information about the conference program, accommodation facilities and registration fees is available on the ESANN website http://www.esann.org

Verónica Bolón Canedo, Ph.D.
LIDIA group
Department of Computer Science
Faculty of Informatics
Universidade da Coruña
Campus de Elviña, s/n
15071 - A Coruña, Spain
Phone: +34 981 167150 Ext. 1305
Fax: +34 981 167160
e-mail: veronica.bolon at udc.es
http://www.lidiagroup.org/veronica-bolon.html
From poramate at manoonpong.com Wed Nov 5 13:39:01 2014 From: poramate at manoonpong.com (poramate at manoonpong.com) Date: Wed, 05 Nov 2014 18:39:01 GMT Subject: Connectionists: [jobs] Post Doctoral Position in Embodied Neural Computation at the University of Southern Denmark in Odense Message-ID: <1415212741986.130120.4444@webmail6>

The Embodied AI and Neurorobotics Lab, part of the Centre for BioRobotics at the Maersk Mc-Kinney Moller Institute at the University of Southern Denmark, invites applications for a position as postdoc in embodied neural computation. The position is available starting from March 2015, or as soon as possible thereafter, for up to two years.

Description
The recently established Embodied AI and Neurorobotics Lab aims to 1) develop modular bio-inspired robots and their modular neural mechanisms towards embodied autonomous locomotion systems with adaptivity, energy efficiency, and versatility; and 2) understand complex dynamical interactions between physical and computational components in embodied neural closed-loop systems. Recent developments of our embodied (neuro)robotic systems include Locokit robots and AMOS hexapod walking robots. To further this vision, we are recruiting one highly motivated postdoctoral scientist in the fields of machine learning, computational neuroscience, embodied AI, and robotics, to work on neural predictive control for goal-directed learning and multi-scale adaptation. The predictive control will be used for autonomous acoustic navigation and communication of walking robots in complex environments.
The successful candidate will be expected to have 1) a PhD degree in electrical engineering and computer science, computational neuroscience, artificial intelligence, physics of complex systems, or another quantitative field; 2) articles published in international peer-reviewed journals documenting experience with embodied neural computation; 3) a strong background in reinforcement learning (such as continuous-time actor-critic methods), artificial neural networks (in particular reservoir computing), nonlinear dynamical systems, and robotics (e.g., walking robots); and 4) good programming skills (e.g., C, C++). Additionally, the candidate should have excellent writing skills and be able to work independently. The successful candidate will be affiliated with the Centre for BioRobotics at the Maersk Mc-Kinney Moller Institute, the University of Southern Denmark. Applicants should provide a covering letter explaining their approach to the problem of predictive control alluded to above, and at most three articles illustrating their publication record and research interests, in addition to standard items such as CV, full publication list, etc. (see below). Applications will be considered continuously until the position is filled.

Contact Information: Further information is available from Assoc. Prof. Poramate Manoonpong, The Maersk Mc-Kinney Moller Institute, Tel: +4565508698.

An application must include:
* Application
* Curriculum Vitae
* Certificates/Diplomas (Master's degree certificate and the latest certificate)
* Information on previous teaching experience (please attach as a teaching portfolio)
* List of publications, indicating the publications attached
* Examples of the most relevant publications. Please attach one pdf file for each publication; a possible co-author statement must be a part of this pdf file.

The University encourages all interested persons to apply, regardless of age, gender, religious affiliation or ethnic background.
The full posting can be accessed at Please apply online at:

From birgit.ahrens at bcf.uni-freiburg.de Thu Nov 6 06:42:26 2014 From: birgit.ahrens at bcf.uni-freiburg.de (Birgit Ahrens) Date: Thu, 06 Nov 2014 12:42:26 +0100 Subject: Connectionists: Postdoctoral Position at the Bernstein Center Freiburg, University of Freiburg, Germany Message-ID: <545B5EA2.8090808@bcf.uni-freiburg.de>

*PostDoc Position in Non-Clinical Epilepsy Research in the Biomicrotechnology lab of Prof. Ulrich Egert*

We are currently offering a PostDoc position (2 years) in the Laboratory for Biomicrotechnology (http://www.bcf.uni-freiburg.de/people/details/egert) at the University of Freiburg. The project investigates mechanisms underlying mesiotemporal lobe epilepsy from the perspective of dysfunctional interactions between subnetworks in the hippocampal formation (Froriep et al. 2012, Epilepsia). We aim to use targeted stimulation to reduce the circuit's susceptibility to seizures. For this project, we are searching for a neurophysiologist to implement a new stimulation paradigm in a mouse model of epilepsy. It is essential that you have a background in neuroscience, ideally in experimental neurophysiology in vivo, as well as a PhD degree. You should also be competent in data analysis and have an affinity for the network perspective. The project is part of the Cluster of Excellence "BrainLinks-BrainTools" (www.brainlinks.uni-freiburg.de), together with the Bernstein Center Freiburg (www.bcf.uni-freiburg.de), and will combine neurophysiology, computational neuroscience and neurotechnology.

*Please apply using our online form at https://yoda.bcf.uni-freiburg.de/*
Further details on: www.bcf.uni-freiburg.de/jobs

Contact:
Dr. Birgit Ahrens
Teaching & Training Coordinator
Bernstein Center Freiburg
Hansastr.
9a, 79104 Freiburg, Germany
birgit.ahrens at bcf.uni-freiburg.de

From terry at salk.edu Thu Nov 6 10:55:07 2014 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 06 Nov 2014 07:55:07 -0800 Subject: Connectionists: NEURAL COMPUTATION - December 1, 2014 Message-ID:

Neural Computation - Contents -- Volume 26, Number 12 - December 1, 2014
Available online for download now: http://www.mitpressjournals.org/toc/neco/26/12

-----

View
Population Coding and the Labeling Problem: Extrinsic Versus Intrinsic Representations
Sidney Lehky, Margaret E. Sereno, and Anne B. Sereno

Letters
Information Transfer Through Stochastic Transmission of a Linear Combination of Rates
Stelios Smirnakis, Ioannis Smyrnakis

Spike-Based Probabilistic Inference in Analog Graphical Models Using Interspike-Interval Coding
Andreas Steimer, Rodney J. Douglas

An Investigation of the Stochastic Hodgkin-Huxley Models Under Noisy Rate Functions
Marifi Guler

Dynamic Analysis of Naive Adaptive Brain-Machine Interfaces
Kevin Cai Kowalski, Bryan D. He, and Lakshminarayan Srinivasan

A Bio-inspired Computational Model Suggests Velocity Gradients of Optic Flow Locally Encode Ordinal Depth at Surface Borders and Globally They Encode Self-Motion
Florian Raudies, Stefan Ringbauer, and Heiko Neumann

Exploitation of Pairwise Class Distances for Ordinal Classification
Javier Sanchez-Monedero, Peter Tino, Pedro Antonio Gutierrez, and C. Hervas-Martinez

Spherical Mesh Adaptive Direct Search for Separating Quasi-uncorrelated Sources by Range-based Independent Component Analysis
S. Easter Selvan, Pierre B.
Borckmans, Amit Chattopadhyay, and Pierre-Antoine Absil

------------
ON-LINE -- http://www.mitpressjournals.org/neuralcomp

SUBSCRIPTIONS - 2014 - VOLUME 26 - 12 ISSUES
                    USA      Others   Electronic Only
Student/Retired     $70      $193     $65
Individual          $124     $187     $115
Institution         $1,035   $1,098   $926

From m.lengyel at eng.cam.ac.uk Fri Nov 7 06:26:03 2014 From: m.lengyel at eng.cam.ac.uk (Máté Lengyel) Date: Fri, 7 Nov 2014 12:26:03 +0100 Subject: Connectionists: Faculty position in Information Engineering and Medical Neuroscience, Cambridge Message-ID:

Faculty position in Information Engineering and Medical Neuroscience
Department of Engineering, University of Cambridge, UK

Applications are invited for a University Lectureship (~US Assistant Professor equivalent) in the Control Group in the Department of Engineering. Exceptional candidates are sought with the potential to develop a record of world-class research commensurate with the Department's international reputation. The current faculty members in the Cambridge Control Group are: Emeritus Professor Keith Glover, Professor Jan Maciejowski, Professor Rodolphe Sepulchre (Professor of Engineering), Professor Malcolm Smith (Head of Group) and Dr Glenn Vinnicombe (Reader). Candidates will be expected to contribute to undergraduate and graduate teaching in the general area of Control Engineering and possibly other related areas of Information Engineering. The closing date for applications is 9 January 2015. The interviewing panel will meet soon after the closing date. Short-listed candidates will be invited to visit the Department, give a short seminar/lecture and attend a formal interview. The field of this University Lectureship is at the interface between Information Engineering and Medical Neuroscience. Unprecedented challenges and opportunities exist in this area as a consequence of the fast-developing technology of actuating and sensing devices at the neuronal scale (e.g.
neural implants, multi-electrode recordings, and optogenetic techniques). The successful candidate will preferably be working in the field of Control Engineering with a research record in problems of medical neuroscience, and with appropriate theoretical expertise (nonlinear systems). The post could also be filled by someone whose primary expertise is in a different branch of Information Engineering, but the ability to contribute to teaching in one or more of the "core" areas of Information Engineering (Control, Signal processing, Information theory, etc.) will be expected. Medical neuroscience and brain science are a current focus of most national and international funding agencies. Funding opportunities will exist in conjunction with Cambridge Neuroscience. The lectureship will strengthen the Department's Bioengineering strategic theme. Informal enquiries can be made to Professor Rodolphe Sepulchre (r.sepulchre at eng.cam.ac.uk) or Professor Malcolm Smith (mcs at eng.cam.ac.uk). From boracchi at elet.polimi.it Fri Nov 7 04:05:48 2014 From: boracchi at elet.polimi.it (Giacomo Boracchi) Date: Fri, 7 Nov 2014 10:05:48 +0100 Subject: Connectionists: IJCNN 2015 Killarney, Ireland: Deadlines soon for Special Session and Competition proposals Message-ID: IJCNN 2015 - International Joint Conference on Neural Networks July 12-17, 2015, Killarney Convention Center, Killarney, Ireland http://www.ijcnn.org IJCNN is the premier international conference in the area of neural network theory, analysis, and applications. Co-sponsored by the International Neural Network Society and the IEEE Computational Intelligence Society, over the last three decades this conference and its predecessors have hosted past, present, and future leaders of neural network research.
In an era when neural networks are widely used and reported in many areas, scientists, engineers, educators, and students from all over the world can get the best overall view of neural networks, from neuroscience to advanced control systems to cognition, at the IJCNN. Please note that your proposals for Special Sessions and Competitions are due in THREE days!!! Call for Special Sessions - The IJCNN 2015 Program Committee solicits proposals for special sessions within the technical scope of the Congress. Special sessions, to be organized by internationally recognized experts, aim to bring together researchers focused on special, novel and challenging topics. Fast-developing themes such as Deep Learning, Big Data, or applications to challenging fields like chemistry, biology, computer games, robotics, etc. are examples. Papers submitted for special sessions are to be peer-reviewed with the same criteria used for the contributed papers. Researchers interested in organizing special sessions are invited to submit a formal proposal using the on-line form of the Special Sessions webpage. For further details please contact the Special Session Co-chairs: Mike Gashler (University of Arkansas, USA) and Jose Garcia-Rodriguez (University of Alicante, Spain). Deadline: November 10th, 2014. Call for Competitions - Competitions compare cutting-edge neural network technologies, and test them along with alternative, more traditional methods for solving difficult practical problems. As examples, past competitions have included time series prediction, microarray classification, neural connectomics, computer games, pedestrian trajectory avoidance, computer vision and pattern recognition. Competition organisers are kindly invited to submit their proposals to Abir Hussain, Competitions Chair, Liverpool John Moores University, UK by November 15th, 2014. The notifications of acceptance of the competition proposals will be provided by November 30, 2014.
Please note that each accepted competition will be posted on a separate web page of the IJCNN 2015 website. A winning certificate and free registration will be provided to the winner of each competition who attends IJCNN 2015.
* Deadlines for Submission:
- Special session and Competition proposals: November 10, 2014
- Tutorial & Workshop proposals: December 15, 2014
- Paper submission deadline: January 15, 2015
- Paper decision notification: March 15, 2015
- Camera-ready submission: April 15, 2015
More details about the charming Killarney setting for the conference can be found on our website www.ijcnn.org. We also have a Facebook page. Keep working on your paper submissions, and submit your proposals for Special Sessions and Competitions SOON! Email any of the Chairs if you wish to discuss your ideas for proposals. We are looking forward to seeing you in Ireland!! IJCNN General Chair De-Shuang Huang, Tongji University, China, Director - Machine Learning and Systems Biology Laboratory IJCNN Program Chair Yoonsuck Choe, Texas A&M University Director, Brain Networks Laboratory Many thanks to our Sponsors and Contributors: - International Neural Network Society - IEEE Computational Intelligence Society - Failte Ireland, National Tourism Development Authority -- Giacomo Boracchi, PhD DEIB - Dipartimento di Elettronica, Informazione e Bioingegneria Politecnico di Milano Via Ponzio, 34/5 20133 Milano, Italy. Tel. +39 02 2399 3467 http://home.dei.polimi.it/boracchi/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.plumbley at qmul.ac.uk Thu Nov 6 04:08:01 2014 From: mark.plumbley at qmul.ac.uk (Mark Plumbley) Date: Thu, 6 Nov 2014 09:08:01 +0000 Subject: Connectionists: Research positions in Musical Audio Repurposing using Source Separation, University of Surrey, UK Message-ID: Dear Connectionists, Please forward the following job information to anyone who may be interested. Apologies for cross-posting.
Post 1: Research Fellow in Source Separation for Musical Audio Repurposing (http://jobs.surrey.ac.uk/071314) Post 2: Research Software Developer in Musical Audio Repurposing using Source Separation (http://jobs.surrey.ac.uk/071214) Additional information below. Many thanks, Mark Plumbley --- To December 2014: Director, Centre for Digital Music, Queen Mary University of London, UK From January 2015: Professor of Signal Processing, Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, UK --- Post 1: Research Fellow in Source Separation for Musical Audio Repurposing http://jobs.surrey.ac.uk/071314 University of Surrey, Department of Electronic Engineering Salary: GBP 30,434 to GBP 37,394 Closing Date: Thursday 27 November 2014 Reference: 071314 Applications are invited for a Research Fellow to work full-time on an EPSRC funded project "Musical Audio Repurposing using Source Separation" from 5 January 2015 to 30th June 2017 (30 Months). This project will develop a new approach to the challenge of high quality musical audio repurposing, focussing on soloing, desoloing, remixing and upmixing. To tackle this, the project will investigate new methods for musical audio source separation, in parallel with investigating new perceptual evaluation measures for audio source separation. The candidate will be responsible for investigating and developing new and enhanced methods for high quality musical audio source separation. These may include methods based on score-informed musical source separation, sparse representations, time-frequency methods, non-negative matrix factorisation (NMF) & high-resolution NMF, and interactive methods employing user feedback. The candidate will be working as part of a team, with two other researchers focussing on perceptual evaluation methods and software development of open-source research tools.
The successful applicant is expected to have a PhD in electronic engineering, computer science or a related subject, and to have significant experience in audio signal processing research. Research experience in one or more of: audio source separation, audio upmixing, spatial audio coding, multichannel audio processing, musical audio analysis, automatic music transcription, sparse representations and/or machine learning is desirable. The project will be led by Prof Mark Plumbley in the Machine Audition Lab of CVSSP, and in collaboration with the Institute of Sound Recording (IoSR). CVSSP is one of the major research centres of Surrey's Department of Electronic Engineering (EE), the top-ranked UK EE department in both the RAE 2008 and the national league tables. CVSSP is one of the largest research centres in the UK focusing on signal processing, vision, graphics and machine learning, with 120+ members comprising academic and support staff, research fellows and PhD students. The IoSR is a leading centre for research in psychoacoustic engineering, as well as being home to the Tonmeister undergraduate degree programme. It has a focused team of 12 researchers, plus several industrial collaborators, and a range of professional facilities of the highest standards, including three recording studios and an ITU-R BS 1116 standard critical listening room. Informal enquiries are welcome, to: Prof Mark Plumbley (m.plumbley at surrey.ac.uk). For further details and to apply online visit http://jobs.surrey.ac.uk/071314 We acknowledge, understand and embrace diversity.
------ Post 2: Research Software Developer in Musical Audio Repurposing using Source Separation http://jobs.surrey.ac.uk/071214 University of Surrey, Department of Electronic Engineering Salary: GBP 30,434 to GBP 37,394 Closing Date: Thursday 27 November 2014 Reference: 071214 Applications are invited for a Research Fellow / Research Software Developer to work full-time on an EPSRC funded project "Musical Audio Repurposing using Source Separation" from 5 January 2015 to 30th June 2017 (30 Months). This project will develop a new approach to the challenge of high quality musical audio repurposing, focussing on soloing, desoloing, remixing and upmixing. To tackle this, the project will investigate new methods for musical audio source separation, in parallel with investigating new perceptual evaluation measures for audio source separation, and developing new open-source research software tools. The candidate will be responsible for developing an extensible open-source research software framework, in conjunction with other researchers in the project. The framework will include audio separation and repurposing algorithms, objective assessment tools and example datasets. They will also develop user software and demonstrators for audio repurposing, such as upmixers for MPEG Surround (SAC) and MPEG Spatial Audio Object Coding (SAOC), and will work with a specialist app developer to create a demonstrator remixing app. The candidate will be working as part of a team, with two other researchers focussing on audio source separation and perceptual evaluation methods. The successful applicant is expected to have excellent mathematical and programming skills, as well as either a Masters degree in electronic engineering, computer science or related subject, or equivalent professional experience. 
They will have at least 1 year's experience in software development relevant to audio and music signal processing, in topics such as digital signal processing, acoustics, binaural audio, multichannel audio, audio coding, speech processing, and/or music information retrieval. Significant experience of development in both Python and Matlab as well as C/C++ is desirable. Research experience in audio signal processing or experience of working closely with audio signal processing researchers is also desirable. The project will be led by Prof Mark Plumbley in the Machine Audition Lab of CVSSP, and in collaboration with the Institute of Sound Recording (IoSR). CVSSP is one of the major research centres of Surrey's Department of Electronic Engineering (EE), the top-ranked UK EE department in both the RAE 2008 and the national league tables. CVSSP is one of the largest research centres in the UK focusing on signal processing, vision, graphics and machine learning, with 120+ members comprising academic and support staff, research fellows and PhD students. The IoSR is a leading centre for research in psychoacoustic engineering, as well as being home to the Tonmeister undergraduate degree programme. It has a focused team of 12 researchers, plus several industrial collaborators, and a range of professional facilities of the highest standards, including three recording studios and an ITU-R BS 1116 standard critical listening room. Informal enquiries are welcome, to: Prof Mark Plumbley (m.plumbley at surrey.ac.uk). For more information and to apply online, visit http://jobs.surrey.ac.uk/071214 We acknowledge, understand and embrace diversity.
------- -- Prof Mark D Plumbley Director, Centre for Digital Music School of Electronic Engineering & Computer Science Queen Mary University of London Mile End Road, London E1 4NS, UK Tel: +44 (0)20 7882 7518 Email: mark.plumbley at qmul.ac.uk Twitter: @markplumbley @c4dm http://www.eecs.qmul.ac.uk/~markp/ From January 2015: Professor of Signal Processing Centre for Vision, Speech and Signal Processing (CVSSP) University of Surrey Guildford, Surrey, GU2 7XH, UK From marios.philiastides at gmail.com Thu Nov 6 09:30:22 2014 From: marios.philiastides at gmail.com (Marios Philiastides) Date: Thu, 6 Nov 2014 14:30:22 +0000 Subject: Connectionists: Postdoctoral position in neuroeconomics and decision making Message-ID: Postdoc position at the Institute of Neuroscience and Psychology, University of Glasgow Applications are invited for a full-time postdoctoral position to make a contribution to the ESRC funded project on the neurobiology of human decision making using multimodal neuroimaging (PI: Dr. Marios Philiastides). The post will be based at the Institute of Neuroscience and Psychology (INP) at the University of Glasgow, which benefits from on-site access to the Centre for Cognitive Neuroimaging (CCNi). The CCNi is a research-dedicated facility within the INP and it is equipped with state-of-the-art brain imaging facilities comprising a 3T fMRI scanner (Siemens Trio), an MEG system, and several TMS and EEG systems, including MR-compatible recording options. Our group uses multimodal neuroimaging coupled with mathematical modelling to characterise the spatiotemporal dynamics and the computational principles of the brain networks underlying human decision making.
Our analysis methods are heavily inspired by machine learning and statistical pattern recognition and are designed to exploit trial-to-trial variability in electrophysiologically-derived measures that can be used in conjunction with simultaneously acquired fMRI to tease apart the cascade of constituent cortical and subcortical processes involved in decision making. The primary focus of the project will be to unravel the neural correlates of learning and confidence during decision making. Candidates must have (or be nearing completion of) a PhD degree in neuroscience, psychology, cognitive science, or a related discipline. Candidates must have previous practical experience and working knowledge of human neuroimaging (M/EEG and/or fMRI). The post holder must also have working knowledge of experimental statistics and signal processing, and excellent programming skills in Matlab. Previous experience in simultaneous EEG/fMRI experiments, advanced multivariate data analysis and computational modelling is desirable but not required. This post will be available from 5th January 2015 or as soon as possible thereafter, for three years. Salary commensurate with experience and qualifications: Grade 6/7: £27,057 - £30,434 / £33,242 - £37,394 per annum. Informal enquiries may be addressed to Dr. Marios Philiastides at marios.philiastides at glasgow.ac.uk. Apply online at: www.gla.ac.uk/jobs (Ref: M00589) Closing date: 7 December 2014 -- Marios G. Philiastides, Ph.D. Associate Professor Institute of Neuroscience and Psychology Centre for Cognitive Neuroimaging University of Glasgow Glasgow, G12 8QB Web: http://decision.ccni.gla.ac.uk/ Email: marios.philiastides at glasgow.ac.uk -------------- next part -------------- An HTML attachment was scrubbed...
URL: From huajin.tang at gmail.com Fri Nov 7 11:27:08 2014 From: huajin.tang at gmail.com (Huajin Tang) Date: Sat, 8 Nov 2014 00:27:08 +0800 Subject: Connectionists: CFP: Invited Session on Cognitive Computing and Neuro-Cognitive Robots Message-ID: 7th IEEE International Conference on Cybernetics and Intelligent Systems (CIS 2015) *Invited Session* Cognitive Computing and Neuro-Cognitive Robots *Call For Papers* It has been a challenge in neuroscience and computer science for many years to implement brain-style intelligence in an artificial neural system. Complementing traditional machine learning algorithms and robotics techniques, cognitive computing based on neuro-inspired models and algorithms with robotic simulation has become a popular stream targeting the core problem of brain-style intelligence. Based on neuro-anatomic and physiological features of various sensory and brain regions, synthetic neural systems are a fundamental approach to understanding cognitive computing mechanisms, including neural coding, spike-timing-based learning, recognition and perception. By embodying the cognitive functions and computations undertaken by the brain in robotic systems, neuro-cognitive robots provide a systematic framework for emulating brain-style cognition and autonomy in a physical environment. The 7th IEEE International Conference on Cybernetics and Intelligent Systems (CIS 2015) Invited Session on "Cognitive Computing and Neuro-Cognitive Robots" aims to reflect the efforts and achievements of this research stream, by inviting scientists and researchers to report their state-of-the-art technologies and theories on computational modelling, theory, experiments and applications. TOPICS OF INTEREST The special session welcomes all papers related to cognitive computing, brain-inspired systems, and neuro-cognitive robots. Topics of interest include but are not limited to: - Cognitive computing and brain-computer integration; -
Cyborg intelligence; - Neural circuits modelling and theory; - Neural information coding; - Neuromorphic computing algorithms (STDP, Hebbian, recognition, perception, etc.); - Embodied cognition, neuro-robotics, etc. IMPORTANT DATES 31 January 2015: Paper submission deadline. 31 March 2015: Notification of paper acceptance. 15-17 July 2015: Conference dates. SUBMISSION GUIDELINE The guidelines are on the conference website http://www.cis-ram.org/2015/. Please send a copy of the submitted paper to the Session organizers and contact them if there is any inquiry. ORGANIZERS Dr. Huajin Tang, Institute for Infocomm Research, A*STAR, Singapore Email: htang at i2r.a-star.edu.sg webpage: http://www1.i2r.a-star.edu.sg/~htang/ Prof. Gang Pan, Zhejiang University Email: gpan at zju.edu.cn webpage: http://www.cs.zju.edu.cn/~gpan -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at cns.bu.edu Fri Nov 7 12:03:59 2014 From: steve at cns.bu.edu (Stephen Grossberg) Date: Fri, 7 Nov 2014 12:03:59 -0500 Subject: Connectionists: The Atoms of Neural Computation: A reply to Randy O'Reilly In-Reply-To: <4AB08192-1C70-4A7D-80E6-F1228F6332E0@colorado.edu> References: <330594F9-9782-452A-B4CB-0C2E8F069DF5@nyu.edu> <4AB08192-1C70-4A7D-80E6-F1228F6332E0@colorado.edu> Message-ID: <537CF027-A270-4923-B303-47D0393E28A3@cns.bu.edu> Dear Randy, Thanks for your comments below in response to my remark that I introduced the core equations used in Leabra in the 1960s and early 1970s. I am personally passionate about trying to provide accurate citations of prior work, and welcome new information about it. This is especially true given that proper citation is not easy in a rapidly developing and highly interdisciplinary field such as ours. Given your comments and the information at my disposal, however, I stand by my remark, and will say why below. If you have additional relevant information, I will welcome it.
It is particularly difficult to provide proper citation when the same model name is used even after the model equations are changed. Your comment suggests that the name Leabra is used for all such variations. However, a change of a core model equation is, in fact, a change of model. To deal with the need to develop and refine models, my colleagues and I provide distinct model names for different stages of model development; e.g., ART 1, ART 2, ARTMAP, ARTSCAN, ARTSCENE, etc. My comment was based on your published claims about earlier versions of Leabra. If Leabra is now so changed that these comments are no longer relevant, then perhaps a new model name would help readers to understand this. Using the same name for many different versions of a model makes it hard to ever disconfirm it. Indeed, some authors just correct old mistakes with new equations under the same model name, and never admit that a mistake was made. I will refer mostly to two publications about Leabra: the O'Reilly and Munakata (2000) book (abbreviated O&M below) on Computational Explorations in Cognitive Neuroscience, and the O'Reilly and Frank (2006) article (abbreviated O&F) on Making working memory work: A computational model of learning in the prefrontal cortex and basal ganglia (https://grey.colorado.edu/mediawiki/sites/CompCogNeuro/images/3/30/OReillyFrank06.pdf). The preface of O&M says that the goal of the book, and of Leabra, is a highly worthy one: "to consolidate and integrate advances...into one coherent package...we have found that the process of putting all of these ideas together...led to an emergent phenomenon in which the whole is greater than the sum of its parts..." I was therefore dismayed to see that the core equations of this presumably new synthesis were already pioneered and developed by my colleagues and me long before 2000, and used in a coherent way in many of our previous articles. O&M leaves readers thinking that their process of "putting all of these ideas together"
represented a novel synthesis for a unified cognitive architecture. For example, the O&M book review http://srsc.ulb.ac.be/axcWWW/papers/pdf/03-EJCP.pdf writes that the book's first five chapters are "dedicated to developing a novel, biologically motivated learning algorithm called Leabra". The review then lists standard hypotheses in the neural modeling literature as the basic properties of Leabra. The purported advance of Leabra "in contrast to the now almost passé back propagation algorithm, takes as a starting point that networks of real neurons exhibit several properties that are incompatible with the assumptions of vanilla back propagation"; notably that cells can send signals reciprocally to each other; they experience competition; their adaptive weights never change sign during learning; and their connections are never used to back-propagate error information during learning. These claims were not new in 2000. My more specific examples will be drawn from O&F, for definiteness. I will compare the claims of this article with previously published results from our own work, although similar concerns could be expressed using examples from other authors. The devil is in the details. Let me now get specific about the core equations of Leabra. I will break my comments into six parts, one part for each core model equation: 1. STM: NETWORK SHUNTING DYNAMICS Randy, you wrote in your email below that Leabra is "based on the standard equivalent circuit equations for the neuron" and mention Hodgkin and Huxley in this regard. It is not clear how "based on" translates into a mathematical model equation. In particular, the Hodgkin-Huxley equations are empirical fits to the dynamics of a squid giant axon. They were not equations for neural networks. It was a big step conceptually to go from individual neurons to neural networks.
When I started publishing the Additive and Shunting models for neural networks in 1967-68, they were not considered "standard", as illustrated by the fact that several of these articles were published in the Proceedings of the National Academy of Sciences. As to the idea that moving away from back propagation was novel in 2000, consider the extensive critique of back propagation in the oft-cited 1988 Grossberg article entitled Nonlinear neural networks: Principles, mechanisms, and architectures (Neural Networks, 1, 17-61). http://www.cns.bu.edu/Profiles/Grossberg/Gro1988NN.pdf. See the comparison of back propagation and adaptive resonance theory in Section 17. The main point of this article was not to criticize back propagation, however. It was to review efforts to develop neurally-based cognitive models that had been ongoing already in 1988 for at least 20 years. As to the shunting dynamics used in O&F, on p. 316 of their article, consider equations (A.1) and (A.2), which define shunting cooperative and competitive dynamics. Compare equation (9) on p. 23 and equations (100)-(101) on p. 35 in Grossberg (1988), or equations (A16) and (A18) on pp. 47-48 in the oft-cited 1980 Grossberg article on How does a brain build a cognitive code? (Psychological Review, 87, 1-51 http://cns.bu.edu/Profiles/Grossberg/Gro1980PsychRev.pdf). This article reviewed aspects of the paradigm that I introduced in the 1960s to unify aspects of brain and cognition. Or see equations (1)-(7) in the even earlier, also oft-cited, 1973 Grossberg article on Contour enhancement, short-term memory, and constancies in reverberating neural networks (Studies in Applied Mathematics, 52, 213-257 http://cns.bu.edu/~steve/Gro1973StudiesAppliedMath.pdf).
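Such shunting cooperative-competitive dynamics can be sketched as a small numerical experiment. This is a minimal illustration with hypothetical parameters (A = B = 1 and a faster-than-linear signal f(x) = 10x^2, chosen so that the stored pattern is winner-take-all), not the exact system of any article cited here:

```python
import numpy as np

def shunting_wta(x0, A=1.0, B=1.0, gain=10.0, dt=0.005, steps=4000):
    """Euler-integrate a recurrent shunting on-center off-surround network,
        dx_i/dt = -A*x_i + (B - x_i)*f(x_i) - x_i * sum_{k != i} f(x_k),
    with a faster-than-linear signal f(x) = gain * x^2, which
    contrast-enhances the initial pattern into a winner-take-all memory."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        f = gain * np.maximum(x, 0.0) ** 2      # signal function f(x_k)
        others = f.sum() - f                     # off-surround inhibition
        x += dt * (-A * x + (B - x) * f - x * others)
    return x

# The largest initial activity is stored in STM; the smaller ones are quenched.
x = shunting_wta([0.6, 0.3, 0.15, 0.05])
```

With a sigmoid signal function instead, activities above a quenching threshold are contrast-enhanced and stored while smaller ones are suppressed, giving partial contrast rather than strict winner-take-all.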
This breakthrough article showed how to design recurrent shunting cooperative and competitive networks and their signal functions to exhibit key properties of contrast enhancement, noise suppression, activity normalization, and short-term memory storage. These three articles illustrate scores of our articles that have developed such concepts before you began to write on this subject. 2. SIGMOID SIGNALS O&F introduce a sigmoidal signal function in their equation (A.3). Grossberg (1973) was the first article to mathematically characterize how sigmoidal signal functions transform inputs before storing them in a short-term memory that is defined by a recurrent shunting on-center off-surround network. These results have been reviewed in many places; e.g., Grossberg (1980, pp. 46-49, Appendices C and D) and Grossberg (1988, p. 37). 3. COMPETITION, PARTIAL CONTRAST, AND k-WINNERS-TAKE-ALL O&F introduce k-Winners-Take-All Inhibition in their equations (A.5)-(A.6). Grossberg (1973) mathematically proved how to realize partial contrast enhancement (i.e., k-Winners-Take-All Inhibition) in a shunting recurrent on-center off-surround network. This result is also reviewed in Grossberg (1980) and Grossberg (1988), and happens automatically when a sigmoid signal function is used in a recurrent shunting on-center off-surround network. It does not require a separate hypothesis. 4. MTM: HABITUATIVE TRANSMITTER GATING AND SYNAPTIC DEPRESSION O&F describe synaptic depression in their equation (A.18). The term synaptic depression was introduced by Abbott et al. (1997), who derived an equation for it from their visual cortical data. Tsodyks and Markram (1997) derived a similar equation with somatosensory cortical data in mind. I introduced equations for synaptic depression in PNAS in 1968 (e.g., equations (18)-(24) in http://cns.bu.edu/~steve/Gro1968PNAS60.pdf). 
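Synaptic depression, or habituative transmitter gating, in this sense has the general form dz/dt = A(B - z) - C*S(t)*z. The following minimal sketch, with made-up parameter values rather than those of any cited article, shows depletion under a sustained signal and slow recovery after offset:

```python
import numpy as np

def habituative_gate(S, A=0.1, B=1.0, C=1.0, dt=0.01):
    """Euler-integrate a habituative transmitter gate of the general form
        dz/dt = A*(B - z) - C*S(t)*z.
    The transmitter z recovers toward its maximum B at rate A and is
    inactivated at a rate proportional to the gated signal S(t)*z."""
    z = B
    trace = []
    for s in S:
        z += dt * (A * (B - z) - C * s * z)
        trace.append(z)
    return np.array(trace)

# A sustained input depletes (habituates) the gate; after the input is
# removed, the transmitter slowly recovers toward B.
signal = np.concatenate([np.ones(2000), np.zeros(3000)])
z = habituative_gate(signal)
depleted, recovered = z[1999], z[-1]
```

During the on phase z falls toward its depleted equilibrium A*B/(A + C), so the gated output S(t)*z overshoots at signal onset and then habituates, the property exploited in the gated dipole rebounds discussed next.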
I called it medium-term memory (MTM), or activity-dependent habituation, or habituative transmitter gates; e.g., see the review in http://www.scholarpedia.org/article/Recurrent_neural_networks. MTM has multiple functional roles that were all used in our models in the 1960s-1980s, and thereafter to the present. One role is to carry out intracellular adaptation that divides the response to a current input with a time-average of recent input intensity. A related role is to prevent recurrent activation from persistently choosing the same neuron, by reducing the net input to this neuron. MTM traces also enable reset events to occur. For example, in a gated dipole opponent processing network, used a great deal in our modeling of reinforcement learning, they enable an antagonistic rebound in activation to occur in the network's OFF channel in response to either a rapidly decreasing input to the ON channel, or to an arousal burst to both channels that is triggered by an unexpected event (e.g., Grossberg, 1972, http://cns.bu.edu/~steve/Gro1972MathBioSci_II.pdf; Grossberg, 1980, Appendix E, http://cns.bu.edu/Profiles/Grossberg/Gro1980PsychRev.pdf). This property enables a resonance that reads out a predictive error to be quickly reset, thereby triggering a memory search, or hypothesis testing, to discover a recognition category capable of better representing an attended object or event, as in adaptive resonance theory, or ART. MTM reset dynamics also help to explain data about the dynamics of visual perception, cognitive-emotional interactions, decision-making under risk, and sensory-motor control. 5. LTM: HEBBIAN VS. GATED STEEPEST DESCENT LEARNING O&F then introduce what they call a Hebbian learning equation (A.7). This equation was introduced in several of my 1969 articles and has been used in many articles since then. It is reviewed in Grossberg (1980, p. 43, equation (A2)) and Grossberg (1988, p. 23, equation (11)).
This equation describes gated steepest descent learning, with variants called outstar learning for the learning of spatial patterns (that was described in the Journal of Statistical Physics in 1969; http://cns.bu.edu/~steve/Gro1969JourStatPhy.pdf) and instar learning for the tuning of adaptive filters (that was described in 1976 in Biological Cybernetics; http://cns.bu.edu/~steve/Gro1976BiolCyb_I.pdf), where I first used it to develop competitive learning and self-organizing map models. It is sometimes called Kohonen learning after Kohonen's first use of it in self-organizing maps after 1984. Significantly, this learning law seems to be the first example of a process that gates learning, a concept that O&F emphasize, and which I first discovered in 1958 and published in 1969 and thereafter as parts of a mathematical analysis of associative learning in recurrent neural networks. I introduced ART in the second part of the 1976 Biological Cybernetics article (http://cns.bu.edu/~steve/Gro1976BiolCyb_II.pdf) in order to show how this kind of learning could dynamically self-stabilize in response to large non-stationary databases using attentional matching and memory search, or hypothesis testing. MTM plays an important role in this self-regulating search process. It should also be noted that equation (A.7) is not Hebbian. It mixes Hebbian and anti-Hebbian properties. Such an adaptive weight, or long-term memory (LTM) trace, can either increase or decrease to track the signals in its pathway. When an LTM trace increases, it can properly be said to undergo Hebbian learning, after the famous law of Hebb (1949), which said that associative traces always increase during learning. When such an LTM trace decreases, it is said to undergo anti-Hebbian learning. Gated steepest descent was the first learning law to incorporate both Hebbian and anti-Hebbian properties in a single synapse.
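The gated steepest descent (instar) law, with its mixed Hebbian/anti-Hebbian behavior, can be sketched as follows. The numbers are arbitrary illustrations, not the values of any cited model:

```python
import numpy as np

def instar_step(w, x, y, rate=0.5):
    """One step of gated steepest descent (instar) learning:
        dw_i/dt = y * (x_i - w_i).
    The postsynaptic activity y gates learning on and off; each weight
    tracks its input signal, increasing (Hebbian) or decreasing
    (anti-Hebbian) as needed."""
    return w + rate * y * (x - w)

w = np.array([0.2, 0.8, 0.5])   # initial LTM traces
x = np.array([1.0, 0.0, 0.5])   # input (STM) pattern to be learned

w_closed = instar_step(w, x, y=0.0)  # gate closed: no learning occurs

for _ in range(50):                  # gate open: w converges to the pattern x
    w = instar_step(w, x, y=1.0)
```

Note that w[1] decreases toward 0 while w[0] increases toward 1: the same law produces both Hebbian and anti-Hebbian weight changes, and nothing changes while the gate y is zero.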
Since that time, such a law has been used to model neurophysiological data about learning in the hippocampus and cerebellum (also called Long Term Potentiation and Long Term Depression) and about adaptive tuning of cortical feature detectors during early visual development, among many other topics. The Hebb (1949) learning postulate says that: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased". This postulate only allows LTM traces to increase. Thus, after sufficient learning took place, Hebbian traces would saturate at their maximum values, and could not subsequently respond adaptively to changing environmental demands. The Hebb postulate assumed the wrong processing unit: It assumed that the strength of an individual connection is the unit of learning. My mathematical work in the 1960s showed, instead, that the unit of LTM is a pattern of LTM traces that is distributed across a network. When one needs to match an LTM pattern to an STM pattern, as occurs during category learning, then both increases and decreases of LTM strength are needed. 6. ERROR-DRIVEN LEARNING Finally, O&F introduce a form of error-driven learning in equation (A.10). Several biological variants of error-driven learning have been well known for many years. Indeed, I have proposed a fundamental reason why the brain needs both gated steepest descent and error-driven learning, which I will only mention briefly here: The brain's global organization seems to embody Complementary Computing. For further discussion of this theme, see Figure 1 and related text in the Grossberg (2012, Neural Networks, 37, 1-47) review article (http://cns.bu.edu/~steve/ART.pdf) or Grossberg (2000, Trends in Cognitive Sciences, 4, 233-246; http://www.cns.bu.edu/Profiles/Grossberg/Gro2000TICS.pdf).
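For concreteness, error-driven learning in this generic sense can be sketched as a delta rule that moves weights to cancel an explicit prediction error. This is a hypothetical minimal example, not equation (A.10) itself nor any specific biological variant:

```python
import numpy as np

def delta_rule_step(w, x, target, rate=0.1):
    """One step of generic error-driven (delta rule) learning:
        w <- w + rate * (target - w.x) * x.
    Unlike gated steepest descent, the weights move to cancel an explicit
    prediction error rather than to track the input pattern itself."""
    error = target - w @ x
    return w + rate * error * x

w_true = np.array([0.5, -0.3, 0.8])  # arbitrary "teacher" mapping
w = np.zeros(3)

# Cycle over three fixed inputs; the error-driven updates converge on w_true.
for _ in range(200):
    for x in np.eye(3):
        w = delta_rule_step(w, x, target=w_true @ x)
```

Here learning is driven by the sign and size of the error: once the error reaches zero the weights stop changing, even though inputs continue to arrive.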
One error-driven learning equation, for opponent learning of adaptive movement gains, was already reviewed in Grossberg (1988, p. 51, equations (118)-(121)). However, the main use of the O&F error-driven learning is in reinforcement learning. Even here, there are error-based reinforcement learning laws that explain more neural data than the O&F equation can, and that did so before O&F published theirs. I will come back to error-driven reinforcement learning in a moment. First, let me summarize: The evidence supplied above shows that there is precious little that was new in the original Leabra formalism. It is disappointing, given the fact that my work is well-known by many neural modelers to have pioneered these concepts and mechanisms, that none of these articles was cited as a source for Leabra in O&F. This is all the more regrettable since I exchanged detailed collegial emails with an O'Reilly collaborator more than 10 years ago to try to make this historical background known to him. Every model has its weaknesses, which provide opportunities for further development. Such weaknesses are hard to accept, however, when they have already been overcome in prior published work, and are not noted in articles that espouse a later, weaker, model. In order to keep this comment from becoming unduly long, I discuss only one aspect of the O&F work on error-driven reinforcement learning, to complete my comments about Leabra. In O&F (2006, p. 284), it was written that "to date, no model has attempted to address the more difficult question of how the BG [basal ganglia] 'knows' what information is task relevant (which was hard-wired in prior models). The present model learns this dynamic gating functionality in an adaptive manner via reinforcement learning mechanisms thought to depend on the dopaminergic system and associated areas". This claim is not correct. 
For example, two earlier articles by Brown, Bullock, and Grossberg use dopaminergic gating, among other mechanisms, to show how the brain can learn what is task relevant (1999, Journal of Neuroscience, 19, 10502-10511, http://www.cns.bu.edu/Profiles/Grossberg/BroBulGro99.pdf; 2004, Neural Networks, 17, 471-510, http://www.cns.bu.edu/Profiles/Grossberg/BroBulGro2003NN.pdf). The former article simulates neurophysiological data about basal ganglia and related brain regions that temporal difference models and O&F (2006) could not explain. The latter article shows how the TELOS model can incrementally learn five different tasks that monkeys have been trained to learn. After learning, the model quantitatively simulates the recorded neurophysiological dynamics of 17 established cell types in frontal cortex, basal ganglia, and related brain regions, and predicts explicit functional roles for all of these cells in the learning and performance of these tasks. The O&F (2006) model did not achieve this level of understanding. Instead, the model simulated some relatively simple cognitive tasks and seems to show no quantitative fits to any data. The model also seemed to make what I consider unnecessary errors. For example, O&F (2006) wrote on p. 294 that "...When a conditioned stimulus is activated in advance of a primary reward, the PV system is actually trained to not expect reward at this time, because it is always trained by the current primary reward value. Therefore, we need an additional mechanism to account for the anticipatory DA bursting at CS onset, which in turn is critical for training up the BG gating system... This is the learned value (LV) system, which is trained only when primary rewards are either present or expected by the PV and is free to fire at other times without adapting its weights. Therefore, the LV is protected from having to learn that no primary reward is actually present at CS onset, because it is not trained at that time". 
No such convoluted assumptions were needed in Brown et al (1999) to explain and quantitatively simulate a broad range of anatomical and neurophysiological conditioning data from monkeys that were recorded in the ventral striatum, striosomes, pedunculopontine tegmental nucleus, and the lateral hypothalamus. I believe that a core problem in their model is a lack of understanding of an issue that is basic in these data; namely, how adaptively timed conditioning occurs. Their error-driven conditioning laws, based on delta-rule learning, are, to my mind, simply inadequate; see their equations (3.1)-(3.3) on p. 294 and their equations in Section A.5, pp. 318+. In contrast, the Bullock and Grossberg family of articles has traced adaptively timed error-driven learning to detailed dynamics of the metabotropic glutamate receptor system, as simulated in Brown et al (1999), and also used to quantitatively simulate adaptively timed cerebellar data. In this regard, O'Reilly and Frank (2006) mention in passing the cerebellum as a source of "timing signals" on p. 294, line 7. However, the timing that goes on in the cerebellum and the timing that goes on in the basal ganglia have different functional roles. A detailed modeling synthesis and simulations of biochemical, biophysical, neurophysiological, anatomical, and behavioral data about adaptively timed conditioning in the cerebellum is provided in the article by Fiala, Grossberg, and Bullock (Journal of Neuroscience, 1996, 16, 3760-3774, http://cns.bu.edu/~steve/FiaGroBul1996JouNeuroscience.pdf). There are many related problems. Not the least of them is the assumption "that time is discretized into steps that correspond to environmental events (e.g., the presentation of a CS or US)" (O&F, 2006, p. 318). One cannot understand adaptively timed learning, or working memory for that matter, if such an assumption is made. Such unrealistic technical assumptions often lead one to unrealistic conceptual assumptions. 
A framework that uses real-time dynamics is needed to deeply understand how these brain processes work. Best, Steve On Nov 5, 2014, at 3:14 AM, Randall O'Reilly wrote: > >> Given Randy O'Reilly's comments about Leabra, it is also of historical interest that I introduced the core equations used in Leabra in the 1960s and early 1970s, and they have proved to be of critical importance in all the developments of ART. > > For future reference, Leabra is based on the standard equivalent circuit equations for the neuron which I believe date at least to the time of Hodgkin and Huxley. Specifically, we use the "AdEx" model of Gerstner and colleagues, and a rate-code equivalent thereof that we derived. For learning we use a biologically-plausible version of backpropagation that I analyzed in 1996 and has no provenance in any of your prior work. Our more recent version of this learning model shares a number of features in common with the BCM algorithm from 1982, while retaining the core error-driven learning component. I just put all the equations in one place here in case you're interested: https://grey.colorado.edu/emergent/index.php/Leabra > > None of this is to say that your pioneering work was not important in shaping the field -- of course it was, but I hope you agree that it is also important to get one's facts straight on these things. > > Best, > - Randy > Stephen Grossberg Wang Professor of Cognitive and Neural Systems Professor of Mathematics, Psychology, and Biomedical Engineering Director, Center for Adaptive Systems http://www.cns.bu.edu/about/cas.html http://cns.bu.edu/~steve steve at bu.edu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brian.mingus at colorado.edu Fri Nov 7 13:05:23 2014 From: brian.mingus at colorado.edu (Brian J Mingus) Date: Fri, 7 Nov 2014 11:05:23 -0700 Subject: Connectionists: The Atoms of Neural Computation: A reply to Randy O'Reilly In-Reply-To: <537CF027-A270-4923-B303-47D0393E28A3@cns.bu.edu> References: <330594F9-9782-452A-B4CB-0C2E8F069DF5@nyu.edu> <4AB08192-1C70-4A7D-80E6-F1228F6332E0@colorado.edu> <537CF027-A270-4923-B303-47D0393E28A3@cns.bu.edu> Message-ID: There is a rather fundamental levels-of-analysis difference here. As Sejnowski mentioned, there are 100 to 1000 types of pyramidal neurons, but Dror Cohen appropriately pointed out Edelman's notion of "degeneracy," in which neurons that have become differentiated nevertheless perform the same function. Ockham's razor suggests that we assume a strong form of the degeneracy hypothesis, using simple, abstract models, and then add biological fidelity as these models prove insufficient. Sufficiency is, of course, dependent on your goal. If your goal is to create a model that solves the problems the brain solves, in the way the brain solves them, while retaining some but not too much fidelity, then you want biologically plausible backpropagation (for now, at least, as it is far from proven that the brain does this). If you just want to do what the brain does in some way, you abstract away and use Bayes. If you want to do what the brain does in all the excruciating detail that the brain does it in, because that is just what you are personally interested in, or for whatever reason, you concern yourself with the almost literally infinitely detailed differences among neurons. And there is plenty of room in between these three basic levels. Even studying Purkinje neurons in detail for decades, outside the context of the rest of the brain, helps inform everything else in the brain. There are more general issues with provenance conversations like this one. 
They ignore the principle of "he who says it best says it last" - J. Schmidhuber (2008). *The last inventor of the telephone*. Science. This point stresses that communication is essential, and it pushes against mountains of papers that are hard to comprehend and would require a legion of programmers to independently replicate, and thus really understand, assuming that such replication is even possible. But perhaps the most fundamental problem with such provenance conversations is that they ignore the fact that we are all working together here, even though we may *think* we are working at cross-purposes, or with different goals in mind (including just trying to be more awesome than everyone else). Everything everyone does informs everything that everyone else does. This reminds me of multi-objective optimization, which also seems to summarize how we - embodied brains - came to be in the first place. This same kind of process, at the memetic instead of the genetic level, will lead to a theory of the brain. Interacting levels of analysis. Researchers working apart, working together. That said, I think all would agree that Schmidhuber's recent review is a very useful endeavor; provenance is ultimately of historical importance, and credit assignment is motivationally important. Regarding Leabra, Grossberg's claim that he invented the core Leabra equations, which embody the intuition that backpropagation can be implemented in a biologically plausible way if neurons keep track of their activation in two different phases and then compute their weight deltas purely locally based on this difference, seems wrong. Where is the prior actual implementation and description that could be implemented and would work as well as backpropagation, as Leabra does? 
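For readers unfamiliar with that two-phase intuition, here is a minimal sketch of locally computed, phase-based learning in a single-layer rate-coded network; the dataset, learning rate, and variable names are my own toy assumptions, not code from any Leabra or GeneRec release.

```python
import numpy as np

# Toy two-phase ("contrastive Hebbian"-style) learning sketch.
# Each synapse sees only its own pre- and postsynaptic activities in a
# free-running "minus" phase and a target-clamped "plus" phase, and
# updates on the difference -- no explicitly backpropagated error signals.

def act(z):
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid rate code

X = np.array([[1.0, 0.0], [0.0, 1.0]])  # two input patterns
T = np.array([[1.0], [0.0]])            # desired outputs
w = np.zeros((2, 1))                    # input -> output weights
rate = 0.5

for epoch in range(2000):
    for x, t in zip(X, T):
        x = x.reshape(-1, 1)
        y_minus = act(w.T @ x)      # minus phase: network's own answer
        y_plus = t.reshape(-1, 1)   # plus phase: output clamped to target
        w += rate * x @ (y_plus - y_minus).T  # purely local update

preds = act(X @ w).ravel()
print(np.round(preds, 2))  # first pattern -> near 1, second -> near 0
```

In this one-layer case the local two-phase difference reduces to the delta rule, which is the sense in which such updates implement error-driven learning while each synapse uses only locally available activities.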
Regarding the Leabra cognitive architecture, it borrows heavily from pretty much every other researcher, including Grossberg (his postdocs have worked in O'Reilly's lab, and we have tried to read some of Grossberg's papers), but also ACT-R, which has an independent implementation in *emergent*, and has been partially integrated with Leabra in a model called SAL (Synthesis of ACT-R and Leabra). And this architecture is inspired by *so many* others. As many as possible. Assigning credit properly here is *very* hard, and there is no point fighting about it. The O'Reilly lab bibliography contains tens of thousands of papers aggregated by scores of researchers over decades. Ultimately, these papers are distilled into concepts and memes that are incorporated into models. These new models stand on the shoulders of giants. For a giant to come back and say that the new work is in fact his is counterproductive. Instead, we should look for something like SALART, or just let the process unfold. Provenance is best left to historians, and it will ultimately be flawed, as the information for accurate record-keeping is just not around, nor easily expressible in such a complex domain. And arguably, almost all of the *real* communication in science happens not via papers but via researchers migrating from lab to lab and sharing concepts whose origin they may well be unaware of, which pushes all of our knowledge forward. Researchers working in isolation and aggressively assigning credit to themselves is hypercompetitive and borders on narcissism. It is also rather ironic, considering that we study systems that use not just competition but also cooperation. This is all explained by game theory at some level, after all. It's a mix! Speaking of how mixed up all of our contributions are, please take the time to help keep my list of neural simulators up-to-date, which I intended to be a community resource and project. 
https://grey.colorado.edu/emergent/index.php/Comparison_of_Neural_Network_Simulators Sincerely, Brian http://linkedin.com/in/brianmingus On Fri, Nov 7, 2014 at 10:03 AM, Stephen Grossberg wrote: > > Dear Randy, > > Thanks for your comments below in response to my remark that I introduced > the core equations used in Leabra in the 1960s and early 1970s. > > I am personally passionate about trying to provide accurate citations of > prior work, and welcome new information about it. This is especially true > given that proper citation is not easy in a rapidly developing and highly > interdisciplinary field such as ours. > > Given your comments and the information at my disposal, however, I stand > by my remark, and will say why below. If you have additional relevant > information, I will welcome it. > > It is particularly difficult to provide proper citation when the same > model name is used even after the model equations are changed. Your comment > suggests that the name Leabra is used for all such variations. However, a > change of a core model equation is, in fact, a change of model. > > To deal with the need to develop and refine models, my colleagues and I > provide distinct model names for different stages of model development; > e.g., ART 1, ART 2, ARTMAP, ARTSCAN, ARTSCENE, etc. > > My comment was based on your published claims about earlier versions of > Leabra. If Leabra is now so changed that these comments are no longer > relevant, then perhaps a new model name would help readers to understand > this. Using the same name for many different versions of a model makes it > hard to ever disconfirm it. Indeed, some authors just correct old mistakes > with new equations under the same model name, and never admit that a > mistake was made. 
> > I will refer mostly to two publications about Leabra: The O'Reilly and > Munakata (2000) book (abbreviated O&M below) on *Computational > Explorations in Cognitive Neuroscience*, and the O'Reilly and Frank > (2006) article (abbreviated O&F) on *Making working memory work: A > computational model of learning in the prefrontal cortex and basal ganglia* > ; > https://grey.colorado.edu/mediawiki/sites/CompCogNeuro/images/3/30/OReillyFrank06.pdf). > > > The preface of O&M says that the goal of the book, and of Leabra, is a > highly worthy one: "to consolidate and integrate advances...into one coherent > package...we have found that the process of putting all of these ideas > together...led to an emergent phenomenon in which the whole is greater than > the sum of its parts..." I was therefore dismayed to see that the core > equations of this presumably new synthesis were already pioneered and > developed by my colleagues and me long before 2000, and used in a coherent > way in many of our previous articles. > > O&M leaves readers thinking that their process of "putting all of these > ideas together" represented a novel synthesis for a unified cognitive > architecture. For example, the O&M book review > http://srsc.ulb.ac.be/axcWWW/papers/pdf/03-EJCP.pdf writes that the > book's first five chapters are "dedicated to developing a novel, > biologically motivated learning algorithm called Leabra". The review then > lists standard hypotheses in the neural modeling literature as the basic > properties of Leabra. The purported advance of Leabra "in contrast to the > now almost passé 
back propagation algorithm, takes as a starting point that > networks of real neurons exhibit several properties that are incompatible > with the assumptions of vanilla back propagation"; notably that cells can > send signals reciprocally to each other; they experience competition; their > adaptive weights never change sign during learning; and their connections > are never used to back-propagate error information during learning. These > claims were not new in 2000. > > My more specific examples will be drawn from O&F, for definiteness. I will > compare the claims of this article with previously published results from > our own work, although similar concerns could be expressed using examples > from other authors. > > The devil is in the details. Let me now get specific about the core > equations of Leabra. I will break my comments into six parts, one part for > each core model equation: > > *1. STM: NETWORK SHUNTING DYNAMICS* > Randy, you wrote in your email below that Leabra is "based on the standard > equivalent circuit equations for the neuron" and mention Hodgkin and Huxley > in this regard. > > It is not clear how "based on" translates into a mathematical model > equation. In particular, the Hodgkin-Huxley equations are empirical fits to > the dynamics of a squid giant axon. They were not equations for neural > networks. > > It was a big step conceptually to go from individual neurons to neural > networks. When I started publishing the Additive and Shunting models for > neural networks in 1967-68, they were not considered "standard", as > illustrated by the fact that several of these articles were published in > the *Proceedings of the National Academy of Sciences.* > > As to the idea that moving away from back propagation was novel in 2000, > consider the extensive critique of back propagation in the oft-cited 1988 > Grossberg article entitled *Nonlinear neural networks: Principles, > mechanisms, and architectures* (*Neural Networks*, 1, 17-61). 
> http://www.cns.bu.edu/Profiles/Grossberg/Gro1988NN.pdf. See the > comparison of back propagation and adaptive resonance theory in Section 17. > The main point of this article was not to criticize back propagation, > however. It was to review efforts to develop neurally-based cognitive > models that had been ongoing already in 1988 for at least 20 years. > > As to the shunting dynamics used in O&F, on pp. 316 of their article, > consider equations (A.1) and (A.2), which define shunting cooperative and > competitive dynamics. Compare equation (9) on p. 23 and equations > (100)-(101) on p. 35 in Grossberg (1988), or equations (A16) and (A18) on > pp. 47-48 in the oft-cited 1980 Grossberg article on *How does a brain > build a cognitive code?* (*Psychological Review*, 87, 1-51, > http://cns.bu.edu/Profiles/Grossberg/Gro1980PsychRev.pdf). This article > reviewed aspects of the paradigm that I introduced in the 1960s to unify > aspects of brain and cognition. Or see equations (1)-(7) in the even > earlier, also oft-cited, 1973 Grossberg article on *Contour enhancement, > short-term memory, and constancies in reverberating neural networks* (Studies > in Applied Mathematics, 52, 213-257, > http://cns.bu.edu/Profiles/Grossberg/Gro1973StudiesAppliedMath.pdf). This > breakthrough article showed how to design recurrent shunting cooperative > and competitive networks and their signal functions to exhibit key > properties of contrast enhancement, noise suppression, activity > normalization, and short-term memory storage. These three articles > illustrate scores of our articles that have developed such concepts before > you began to write on this subject. > > *2. SIGMOID SIGNALS* > O&F introduce a sigmoidal signal function in their equation (A.3). 
> Grossberg (1973) was the first article to mathematically characterize how > sigmoidal signal functions transform inputs before storing them in a > short-term memory that is defined by a recurrent shunting on-center > off-surround network. These results have been reviewed in many places; > e.g., Grossberg (1980, pp. 46-49, Appendices C and D) and Grossberg (1988, > p. 37). > > *3. COMPETITION, PARTIAL CONTRAST, AND k-WINNERS-TAKE-ALL* > O&F introduce k-Winners-Take-All Inhibition in their equations > (A.5)-(A.6). Grossberg (1973) mathematically proved how to realize partial > contrast enhancement (i.e., k-Winners-Take-All Inhibition) in a shunting > recurrent on-center off-surround network. This result is also reviewed in > Grossberg (1980) and Grossberg (1988), and happens automatically when a > sigmoid signal function is used in a recurrent shunting on-center > off-surround network. It does not require a separate hypothesis. > > *4. MTM: HABITUATIVE TRANSMITTER GATING AND SYNAPTIC DEPRESSION* > O&F describe synaptic depression in their equation (A.18). The term > synaptic depression was introduced by Abbott et al. (1997), who derived an > equation for it from their visual cortical data. Tsodyks and Markram (1997) > derived a similar equation with somatosensory cortical data in mind. I > introduced equations for synaptic depression in *PNAS* in 1968 (e.g., > equations (18)-(24) in http://cns.bu.edu/~steve/Gro1968PNAS60.pdf). I > called it medium-term memory (MTM), or activity-dependent habituation, or > habituative transmitter gates; e.g., see the review in > http://www.scholarpedia.org/article/Recurrent_neural_networks. > > MTM has multiple functional roles that were all used in our models in the > 1960s-1980s, and thereafter to the present. One role is to carry out > intracellular adaptation that divides the response to a current input with > a time-average of recent input intensity. 
A related role is to prevent > recurrent activation from persistently choosing the same neuron, by > reducing the net input to this neuron. MTM traces also enable reset events > to occur. For example, in a gated dipole opponent processing network, used > a great deal in our modeling of reinforcement learning, they enable an > antagonistic rebound in activation to occur in the network's OFF channel in > response to either a rapidly decreasing input to the ON channel, or to an > arousal burst to both > channels that is triggered by an unexpected event (e.g., Grossberg, 1972, > http://cns.bu.edu/~steve/Gro1972MathBioSci_II.pdf; Grossberg, 1980, > Appendix E, http://cns.bu.edu/Profiles/Grossberg/Gro1980PsychRev.pdf). This > property enables a resonance > that reads out a > predictive error to be quickly reset, thereby triggering a memory search, > or hypothesis testing, to discover a recognition category capable of better > representing an attended object or event, as in adaptive resonance theory, > or ART. MTM reset dynamics also help to explain data about the dynamics of > visual perception, cognitive-emotional interactions, decision-making under > risk, and sensory-motor control. > > *5. LTM: HEBBIAN VS. GATED STEEPEST DESCENT LEARNING* > O&F then introduce what they call a Hebbian learning equation (A.7). This > equation was introduced in several of my 1969 articles and has been used in > many articles since then. It is reviewed in Grossberg (1980, p. 43, > equation (A2)) and Grossberg (1988, p. 23, equation (11)). 
This equation > describes *gated steepest descent learning*, with variants called *outstar > learning* for the learning of spatial patterns (that was described in the *Journal > of Statistical Physics* in 1969; > http://cns.bu.edu/~steve/Gro1969JourStatPhy.pdf) and *instar learning* > for the tuning of adaptive filters (that was described in 1976 in *Biological > Cybernetics; http://cns.bu.edu/~steve/Gro1976BiolCyb_I.pdf > *), where I first used it > to develop competitive learning and self-organizing map models. It is > sometimes called Kohonen learning after Kohonen?s first use of it in > self-organizing maps after 1984. Significantly, this learning law seems to > be the first example of a process that *gates* learning, a concept that > O&R emphasize, and which I first discovered in 1958 and published in 1969 > and thereafter as parts of a mathematical analysis of associative learning > in recurrent neural networks. > > I introduced ART in the second part of the 1976 *Biological Cybernetics* > article (http://cns.bu.edu/~steve/Gro1976BiolCyb_II.pdf) in order to show > how this kind of learning could dynamically self-stabilize in response to > large non-stationary data bases using attentional matching and memory > search, or hypothesis testing. MTM plays an important role in this > self-regulating search process. > > It should also be noted that equation (A.7) is *not* Hebbian. It mixes > Hebbian and anti-Hebbian properties. Such an adaptive weight, or > long-term memory (LTM) trace, can either increase or decrease to track the > signals in its pathway. When an LTM trace increases, it can properly be > said to undergo Hebbian learning, after the famous law of Hebb (1949) which > said that associative traces always increase during learning. When such an > LTM trace decreases, it is said to undergo anti-Hebbian learning. Gated > steepest descent was the first learning law to incorporate both Hebbian and > anti-Hebbian properties in a single synapse. 
Since that time, such a law > has been used to model neurophysiological data about learning in the > hippocampus and cerebellum (also called Long Term Potentiation and Long > Term Depression) and about adaptive tuning of cortical feature detectors > during early visual development, among many other topics. > > The Hebb (1949) learning postulate says that: "When an axon of cell A is > near enough to excite a cell B and repeatedly or persistently takes part in > firing it, some grown process or metabolic change takes place in one or > both cells such that A's efficiency, as one of the cells firing B, is > increased". This postulate only allows LTM traces to increase. Thus, after > sufficient learning took place, Hebbian traces would saturate at their > maximum values, and could not subsequently respond adaptively to changing > environmental demands. The Hebb postulate assumed the wrong processing > unit: It assumed that the strength of an individual connection is the unit > of learning. My mathematical work in the 1960s showed, instead, that the > unit of LTM is a pattern of LTM traces that is distributed across a > network. When one needs to match an LTM pattern to an STM pattern, as > occurs during category learning, then both increases and decreases of LTM > strength are needed. > > *6. ERROR-DRIVEN LEARNING* > Finally, O&F introduce a form of error-driven learning in equation (A.10). > Several biological variants of error-driven learning have been well-known > for many years. Indeed, I have proposed a fundamental reason why the brain > needs both gated steepest descent and error-driven learning, which I will > only mention briefly here: The brain?s global organization seems to embody > Complementary Computing. 
For further discussion of this theme, see Figure 1 > and related text in the Grossberg (2012 *Neural Networks*, 37, 1-47) > review article (http://cns.bu.edu/~steve/ART.pdf) or Grossberg (2000, *Trends > in Cognitive Sciences*, 4, 233-246; > http://www.cns.bu.edu/Profiles/Grossberg/Gro2000TICS.pdf). > > One error-driven learning equation, for opponent learning of adaptive > movement gains, was already reviewed in Grossberg (1988, p. 51, equations > (118)-(121)). However, the main use of the O&F error-driven learning is in > reinforcement learning. Even here, there are error-based reinforcement > learning laws that explain more neural data than the O&F equation can, and > did it before they did. I will come back to error-driven reinforcement > learning in a moment. > > First, let me summarize: The evidence supplied above shows that there is > precious little that was new in the original Leabra formalism. It is > disappointing, given the fact that my work is well-known by many neural > modelers to have pioneered these concepts and mechanisms, that none of > these articles was cited as a source for Leabra in O&F. This is all the > more regrettable since I exchanged detailed collegial emails with an > O?Reilly collaborator more than 10 years ago to try to make this historical > background known to him. > > Every model has its weaknesses, which provide opportunities for further > development. Such weaknesses are hard to accept, however, when they have > already been overcome in prior published work, and are not noted in > articles that espouse a later, weaker, model. > > In order to prevent this comment from becoming unduly long, I discuss only > one aspect of the O&F work on error-driven reinforcement learning, to > complete my comments about Leabra. > > In O&F (2006, p. 284), it was written that ?to date, no model has > attempted to address the more difficult question of how the BG [basal > ganglia] ?knows? 
what information is task relevant (which was hard-wired in > prior models). The present model learns this dynamic gating functionality > in an adaptive manner via reinforcement learning mechanisms thought to > depend on the dopaminergic system and associated areas?. This claim is not > correct. For example, two earlier articles by Brown, Bullock, and Grossberg > use dopaminergic gating, among other mechanisms, to show how the brain can > learn what is task relevant (1999, *Journal of Neuroscience*, 19, > 10502-10511 http://www.cns.bu.edu/Profiles/Grossberg/BroBulGro99.pdf; > 2004, *Neural Networks*, 271-510 > http://www.cns.bu.edu/Profiles/Grossberg/BroBulGro2003NN.pdf). The former > article simulates neurophysiological data about basal ganglia and related > brain regions that temporal difference models and that O&F (2006) could not > explain. The latter article shows how the TELOS model can incrementally > learn five different tasks that monkeys have been trained to learn. After > learning, the model quantitatively simulates the recorded > neurophysiological dynamics of 17 established cell types in frontal cortex, > basal ganglia, and related brain regions, and predicts explicit functional > roles for all of these cells in the learning and performance of these tasks. > > The O&F (2006) model did not achieve this level of understanding. Instead, > the model simulated some relatively simple cognitive tasks and seems to > show no quantitative fits to any data. The model also seemed to make what I > consider unnecessary errors. For example, O&F (2006) wrote on p. 294 that > ??When a conditioned stimulus is activated in advance of a primary reward, > the PV system is actually trained to not expect reward at this time, > because it is always trained by the current primary reward value. 
> Therefore, we need an additional mechanism to account for the anticipatory > DA bursting at CS onset, which in turn is critical for training up the BG > gating system?This is the learned value (LV) system, which is trained only > when primary rewards are either present or expected by the PV and is free > to fire at other times without adapting its weights. Therefore, the LV is > protected from having to learn that no primary reward is actually present > at CS onset, because it is not trained at that time?. No such convoluted > assumptions were needed in Brown et al (1999) to explain and quantitatively > simulate a broad range of anatomical and neurophysiological conditioning > data from monkeys that were recorded in ventral striatum, striosomes, > pedunculo-pntine tegmental nucleus, and the lateral hypothalamus. > > I believe that a core problem in their model is a lack of understanding of > an issue that is basic in these data; namely, how *adaptively timed* > conditioning occurs. Their error-driven conditioning laws, based on > delta-rule learning, are, to my mind, simply inadequate; see their > equations (3.1)-(3.3) on p. 294 and their equations in Section A.5, pp. > 318+. In contrast, the Bullock and Grossberg family of articles have traced > adaptively timed error-driven learning to detailed dynamics of the > metabotropic glutamate receptor system, as simulated in Brown et al (1999), > and also used to quantitatively simulate adaptively timed cerebellar data. > In this regard, O?Reilly and Frank (2006) mention in passing the cerebellum > as a source of ?timing signals? on p. 294, line 7. However, the timing that > goes on in the cerebellum and the timing that goes on in the basal ganglia > have different functional roles. 
A detailed modeling synthesis and simulations of biochemical, biophysical, neurophysiological, anatomical, and behavioral data about adaptively timed conditioning in the cerebellum are provided in the article by Fiala, Grossberg, and Bullock (*Journal of Neuroscience*, 1996, 16, 3760-3774, http://cns.bu.edu/~steve/FiaGroBul1996JouNeuroscience.pdf).

There are many related problems. Not the least of them is the assumption "that time is discretized into steps that correspond to environmental events (e.g., the presentation of a CS or US)" (O&F, 2006, p. 318). One cannot understand adaptively timed learning, or working memory for that matter, if such an assumption is made. Such unrealistic technical assumptions often lead one to unrealistic conceptual assumptions. A framework that uses real-time dynamics is needed to deeply understand how these brain processes work.

Best,

Steve

On Nov 5, 2014, at 3:14 AM, Randall O'Reilly wrote:

> > Given Randy O'Reilly's comments about Leabra, it is also of historical interest that I introduced the core equations used in Leabra in the 1960s and early 1970s, and they have proved to be of critical importance in all the developments of ART.
>
> For future reference, Leabra is based on the standard equivalent circuit equations for the neuron, which I believe date at least to the time of Hodgkin and Huxley. Specifically, we use the "AdEx" model of Gerstner and colleagues, and a rate-code equivalent thereof that we derived. For learning we use a biologically plausible version of backpropagation that I analyzed in 1996 and has no provenance in any of your prior work. Our more recent version of this learning model shares a number of features in common with the BCM algorithm from 1982, while retaining the core error-driven learning component.
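[Editor's note: the "AdEx" model referenced above is the adaptive exponential integrate-and-fire neuron of Brette and Gerstner (2005). A minimal forward-Euler sketch, using the standard published parameter values rather than anything from the Leabra/emergent codebase, looks like this.]

```python
# Minimal Euler-integration sketch of the AdEx (adaptive exponential
# integrate-and-fire) neuron of Brette & Gerstner (2005). Parameter values
# are the standard published ones, NOT taken from Leabra/emergent.
import math

def adex_spike_count(I_pA=800.0, T_ms=500.0, dt=0.1):
    """Integrate the AdEx equations with forward Euler; return the spike count."""
    C, g_L, E_L = 281.0, 30.0, -70.6      # capacitance (pF), leak (nS), rest (mV)
    V_T, Delta_T = -50.4, 2.0             # threshold and slope factor (mV)
    tau_w, a, b = 144.0, 4.0, 80.5        # adaptation: tau (ms), a (nS), b (pA)
    V_reset, V_cut = -70.6, 0.0           # reset and numerical cutoff (mV)
    V, w, spikes = E_L, 0.0, 0
    for _ in range(int(T_ms / dt)):
        # C dV/dt = -g_L (V - E_L) + g_L Delta_T exp((V - V_T)/Delta_T) - w + I
        dV = (-g_L * (V - E_L)
              + g_L * Delta_T * math.exp((V - V_T) / Delta_T)
              - w + I_pA) / C
        # tau_w dw/dt = a (V - E_L) - w
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V > V_cut:        # spike: reset voltage, bump the adaptation current
            V = V_reset
            w += b
            spikes += 1
    return spikes

n = adex_spike_count()  # a suprathreshold 800 pA step elicits spiking
```

The exponential term gives the spike upswing, and the adaptation variable `w` is what distinguishes AdEx from a plain exponential integrate-and-fire neuron.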
> I just put all the equations in one place here in case you're interested: https://grey.colorado.edu/emergent/index.php/Leabra
>
> None of this is to say that your pioneering work was not important in shaping the field; of course it was, but I hope you agree that it is also important to get one's facts straight on these things.
>
> Best,
> - Randy

Stephen Grossberg
Wang Professor of Cognitive and Neural Systems
Professor of Mathematics, Psychology, and Biomedical Engineering
Director, Center for Adaptive Systems http://www.cns.bu.edu/about/cas.html
http://cns.bu.edu/~steve
steve at bu.edu

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From eero at cns.nyu.edu Fri Nov 7 16:05:29 2014 From: eero at cns.nyu.edu (Eero Simoncelli) Date: Fri, 7 Nov 2014 16:05:29 -0500 (EST) Subject: Connectionists: Doctoral studies in Computational/Theoretical Neuroscience at NYU Message-ID: <201411072105.sA7L5Tm28176@calaf.cns.nyu.edu>

New York University is home to a thriving interdisciplinary community of researchers using computational and theoretical approaches in neuroscience. We are interested in exceptional PhD candidates with strong quantitative training (e.g., physics, mathematics, engineering) coupled with a clear interest in brain sciences. A listing of faculty, sorted by their primary departmental affiliation, is given below. Doctoral programs are flexible, allowing students to pursue research across departmental boundaries. Nevertheless, admissions are handled separately by each department, and students interested in pursuing graduate studies should submit an application to the program that best fits their goals and interests.

** Center for Neural Science (CNS) (deadline: 1 December) [http://www.cns.nyu.edu/doctoral/] [Graduate studies in Neuroscience across NYU: http://www.neuroscience.nyu.edu/] * André A. Fenton - Molecular, neural, behavioral, and computational aspects of memory. * Paul W.
Glimcher - Decision-making in humans and animals. Neuroeconomics. * Roozbeh Kiani - Vision and decision-making. * Wei Ji Ma (also in Psychology) - Perception, working memory, and decision making. * Tony Movshon - Vision and visual development. * Bijan Pesaran - Neuronal dynamics and decision making. * Alex Reyes - Functional interactions of neurons in a network. * John Rinzel (also in Mathematics) - Biophysical mechanisms and theory of neural computation. * Nava Rubin - Visual perception and the neural basis of vision. * Robert Shapley - Visual physiology and perception. * Eero Simoncelli - Computational vision and audition. * Xiao-Jing Wang - Computational neuroscience, decision-making and working memory, neural circuits. ** Neuroscience and Physiology program, School of Medicine (deadline: 1 December) [http://neuroscience.med.nyu.edu/training-programs/graduate-program-neuroscience-physiology-school-medicine] [Graduate studies in Neuroscience across NYU: http://www.neuroscience.nyu.edu/] * Gyorgy Buzsaki - Rhythms in neural networks. * Dmitry Rinberg - Sensory information processing in the behaving animal. * Mario Svirsky - Auditory neural prostheses; experimental/computational studies of speech production/perception. ** Psychology, Cognition & Perception program (deadline: 12 December) [http://www.psych.nyu.edu/programs/cp/] * Nathaniel Daw (also in CNS) - Models of decision-making and neuromodulation. * Todd Gureckis - Memory, learning, and decision processes. * David Heeger (also in CNS) - fMRI, computational neuroscience, vision, attention. * Michael Landy - Computational approaches to vision. * Laurence Maloney - Mathematical approaches to psychology and neuroscience. * Gary Marcus - Origins of the human mind. * Denis Pelli - Visual object recognition. * Jonathan Winawer - Visual perception and memory. ** Mathematics (deadline: 18 December ) [http://math.nyu.edu/degree/phd/] * David Cai - Nonlinear stochastic behavior in physical and biological systems. 
* David McLaughlin - Nonlinear wave equations, computational visual neuroscience. * Aaditya Rangan - computational neurobiology, numerical analysis. * Charles Peskin - Mathematical biology. * Michael Shelley - Modeling and large-scale computation, computational visual neuroscience. * Daniel Tranchina - Information processing in the retina. ** Computer Science (deadline: 12 December) [http://www.cs.nyu.edu/web/Research/Areas/graphicsvisionui.html] * Davi Geiger - Computational vision and learning. * Yann LeCun - machine learning, hierarchical visual processing, robotics. ** Electrical and Computer Engineering, Poly campus, Brooklyn (deadline: 15 December) [http://www.poly.edu/academics/programs/electrical-engineering-phd] * Jonathan Viventi - Brain-computer interfaces and brain recording technologies. ** Economics (deadline: 18 December) [http://econ.as.nyu.edu/page/phd] * Andrew Caplin - Economic theory, neurobiology of decision. * Andrew Schotter - Experimental economics, game theory, neurobiology of decision. 
From grlmc at urv.cat Sat Nov 8 15:54:28 2014 From: grlmc at urv.cat (GRLMC) Date: Sat, 8 Nov 2014 21:54:28 +0100 Subject: Connectionists: AlCoB 2015: 1st call for papers Message-ID: <0B2C8403ADBA46B19006BADC866B071B@Carlos1>

*To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject line*

**************************************************************************** ******

2nd INTERNATIONAL CONFERENCE ON ALGORITHMS FOR COMPUTATIONAL BIOLOGY

AlCoB 2015

Mexico City, Mexico

August 4-6, 2015

Organized by: Centre for Complexity Sciences (C3) School of Sciences Institute for Research in Applied Mathematics and Systems (IIMAS) Graduate Program in Computing Science and Engineering National Autonomous University of Mexico Research Group on Mathematical Linguistics (GRLMC) Rovira i Virgili University

http://grammars.grlmc.com/alcob2015/

**************************************************************************** ******

AIMS: AlCoB aims at promoting and displaying excellent research using string and graph algorithms and combinatorial optimization to deal with problems in biological sequence analysis, genome rearrangement, evolutionary trees, and structure prediction. The conference will address several of the current challenges in computational biology by investigating algorithms aimed at: 1) assembling sequence reads into a complete genome, 2) identifying gene structures in the genome, 3) recognizing regulatory motifs, 4) aligning nucleotides and comparing genomes, 5) reconstructing regulatory networks of genes, and 6) inferring the evolutionary phylogeny of species. Particular focus will be put on methodology, and significant room will be reserved for young scholars at the beginning of their careers.

VENUE: AlCoB 2015 will take place in Mexico City, the oldest capital city in the Americas and the largest Spanish-speaking city in the world. The venue will be the main campus of the National Autonomous University of Mexico.
SCOPE: Topics of either theoretical or applied interest include, but are not limited to:
- Exact sequence analysis
- Approximate sequence analysis
- Pairwise sequence alignment
- Multiple sequence alignment
- Sequence assembly
- Genome rearrangement
- Regulatory motif finding
- Phylogeny reconstruction
- Phylogeny comparison
- Structure prediction
- Compressive genomics
- Proteomics: molecular pathways, interaction networks ...
- Transcriptomics: splicing variants, isoform inference and quantification, differential analysis
- Next-generation sequencing: population genomics, metagenomics, metatranscriptomics ...
- Microbiome analysis
- Systems biology

STRUCTURE: AlCoB 2015 will consist of: invited talks, invited tutorials, and peer-reviewed contributions.

INVITED SPEAKERS: to be announced

PROGRAMME COMMITTEE: Stephen Altschul (National Center for Biotechnology Information, Bethesda, USA) Yurii Aulchenko (Russian Academy of Sciences, Novosibirsk, Russia) Pierre Baldi (University of California, Irvine, USA) Daniel G. Brown (University of Waterloo, Canada) Yuehui Chen (University of Jinan, China) Keith A. Crandall (George Washington University, Washington, USA) Joseph Felsenstein (University of Washington, Seattle, USA) Michael Galperin (National Center for Biotechnology Information, Bethesda, USA) Susumu Goto (Kyoto University, Japan) Igor Grigoriev (DOE Joint Genome Institute, Walnut Creek, USA) Yike Guo (Imperial College, London, UK) Javier Herrero (University College London, UK) Karsten Hokamp (Trinity College Dublin, Ireland) Hsuan-Cheng Huang (National Yang-Ming University, Taipei, Taiwan) Ian Korf (University of California, Davis, USA) Nikos Kyrpides (DOE Joint Genome Institute, Walnut Creek, USA) Yun Li (University of North Carolina, Chapel Hill, USA) Jun Liu (Harvard University, Cambridge, USA) Mingyao Li (University of Pennsylvania, Philadelphia, USA) Rodrigo López (European Bioinformatics Institute, Hinxton, UK) Andrei N.
Lupas (Max Planck Institute for Developmental Biology, Tübingen, Germany) B.S. Manjunath (University of California, Santa Barbara, USA) Carlos Martín-Vide (chair, Rovira i Virgili University, Tarragona, Spain) Tarjei Mikkelsen (Broad Institute, Cambridge, USA) Henrik Nielsen (Technical University of Denmark, Lyngby, Denmark) Christine Orengo (University College London, UK) Modesto Orozco (Institute for Research in Biomedicine, Barcelona, Spain) Christos A. Ouzounis (Centre for Research & Technology Hellas, Thessaloniki, Greece) Manuel Peitsch (Philip Morris International R&D, Neuchâtel, Switzerland) David A. Rosenblueth (National Autonomous University of Mexico, Mexico City, Mexico) Julio Rozas (University of Barcelona, Spain) Alessandro Sette (La Jolla Institute for Allergy and Immunology, USA) Peter F. Stadler (University of Leipzig, Germany) Guy Theraulaz (Paul Sabatier University, Toulouse, France) Alfonso Valencia (Spanish National Cancer Research Centre, Madrid, Spain) Kai Wang (University of Southern California, Los Angeles, USA) Lusheng Wang (City University of Hong Kong, Hong Kong) Zidong Wang (Brunel University, Uxbridge, UK) Harel Weinstein (Cornell University, New York, USA) Jennifer Wortman (Broad Institute, Cambridge, USA) Jun Yu (Chinese Academy of Sciences, Beijing, China) Mohammed J. Zaki (Rensselaer Polytechnic Institute, Troy, USA) Louxin Zhang (National University of Singapore, Singapore) Hongyu Zhao (Yale University, New Haven, USA)

ORGANIZING COMMITTEE: Adrian Horia Dediu (Tarragona) Francisco Hernández-Quiroz (Mexico City) Carlos Martín-Vide (Tarragona, co-chair) David A. Rosenblueth (Mexico City, co-chair) Florentina Lilica Voicu (Tarragona)

SUBMISSIONS: Authors are invited to submit non-anonymized papers in English presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (including any appendices, references, proofs, etc.)
and should be prepared according to the standard format for Springer Verlag's LNCS series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0). Submissions have to be uploaded to: https://www.easychair.org/conferences/?conf=alcob2015

PUBLICATIONS: A volume of proceedings published by Springer in the LNCS/LNBI series will be available by the time of the conference. A special issue of a major journal will be published later, containing peer-reviewed, substantially extended versions of some of the papers contributed to the conference. Submissions to it will be by invitation.

REGISTRATION: The registration form can be found at: http://grammars.grlmc.com/alcob2015/Registration.php

DEADLINES:
Paper submission: March 2, 2015 (23:59 CET)
Notification of paper acceptance or rejection: April 10, 2015
Final version of the paper for the LNCS/LNBI proceedings: April 19, 2015
Early registration: April 19, 2015
Late registration: July 21, 2015
Submission to the journal special issue: November 6, 2015

QUESTIONS AND FURTHER INFORMATION: florentinalilica.voicu at urv.cat

POSTAL ADDRESS: AlCoB 2015 Research Group on Mathematical Linguistics (GRLMC) Rovira i Virgili University Av. Catalunya, 35 43002 Tarragona, Spain Phone: +34 977 559 543 Fax: +34 977 558 386

ACKNOWLEDGEMENTS: National Autonomous University of Mexico Rovira i Virgili University

--- This message contains no viruses or malware because avast! Antivirus protection is active.
http://www.avast.com

From wachtler at biologie.uni-muenchen.de Sun Nov 9 11:42:36 2014 From: wachtler at biologie.uni-muenchen.de (Thomas Wachtler) Date: Sun, 9 Nov 2014 17:42:36 +0100 (CET) Subject: Connectionists: G-Node Winter Course on Neural Data Analysis 2015 Message-ID:

7th G-Node Winter Course on Neural Data Analysis
February 23 - 27, 2015 in Munich, Germany

The German Neuroinformatics Node (G-Node) organizes its seventh international training course to promote state-of-the-art methods of neural data analysis among PhD students and postdocs. The course offers hands-on experience with model-driven analysis of data from intra- and extracellular electrophysiology. We encourage applications from students/postdocs with an experimental background who want to widen their repertoire of analysis methods, as well as from students with a theoretical background who have an interest in analyzing physiological data. Participants must have ample programming skills in Matlab or Python.

Faculty:
- Jan Grewe - Eberhard Karls Universität Tübingen
- Alex Loebel - Ludwig-Maximilians-Universität München and BCCN München
- Moritz Grosse-Wentrup - Max Planck Institute for Intelligent Systems Tübingen
- Thomas Wachtler - Ludwig-Maximilians-Universität München and BCCN München

Topics:
- Short-term plasticity
- Spectral analysis
- Mutual information
- Machine learning
- Neural tuning and decoding
- ...
Deadline for application: December 19, 2014 For more information visit http://www.g-node.org/dataanalysis-course-2015 With best regards, Jan Grewe (Organizer, T?bingen) and Thomas Wachtler (Local Organizer, Munich) From boracchi at elet.polimi.it Sun Nov 9 17:48:06 2014 From: boracchi at elet.polimi.it (Giacomo Boracchi) Date: Sun, 9 Nov 2014 23:48:06 +0100 Subject: Connectionists: "Concept Drift, Domain Adaptation & Learning in Dynamic Environments" Special Session @IJCNN 2015 Message-ID: CALL FOR PAPERS IEEE IJCNN 2015 Special Session on *"Concept Drift, Domain Adaptation & Learning in Dynamic Environments"* July 12 - 17, 2015, Killarney, Ireland. http://home.deib.polimi.it/boracchi/events/ijcnn2015_SS/index.html ********************************************************** IMPORTANT DATES Paper submission: January 15th, 2015 Paper Decision notification: March 15th, 2015 Camera-ready submission: April 15th, 2015 Conference Dates: July 12 - 17th, 2015 *********************************************************** One of the fundamental goals in computational intelligence is to achieve brain-like intelligence, a remarkable property of which is the ability to incrementally learn from noisy and incomplete data and to adapt to changing environments. The special session aims at presenting novel approaches to incremental learning and adaptation to dynamic environments both from the theoretical perspective of machine learning and from the application-oriented view of computational intelligence techniques. 
**Topics** Papers must present original work or review the state-of-the-art in the following non-exhaustive list of topics:
* Architectures, techniques and algorithms for learning in non-stationary/dynamic environments
* Domain adaptation, dataset shift, covariate shift
* Incremental learning, lifelong learning, cumulative learning
* Change-detection tests and anomaly-detection algorithms
* Mining from streams of data
* Applications that call for incremental learning or learning in non-stationary/dynamic environments, such as:
o Adaptive classifiers for concept drift and recurring concepts
o Intelligent systems operating in non-stationary/dynamic environments
o Intelligent embedded and cyber-physical systems
* Applications that call for change and anomaly detection, such as:
o fault detection
o fraud detection
o network intrusion and security
o intelligent sensor networks
* Cognitive-inspired approaches to adaptation and learning
* Development of test-set benchmarks for evaluating algorithms learning in non-stationary/dynamic environments
* Issues relevant to the above-mentioned or related fields

**Keywords** Concept drift, nonstationary environment, change/anomaly detection, domain adaptation, incremental learning, data streams.

**Webpage** Further information can be found on the Special Session webpage http://home.deib.polimi.it/boracchi/events/ijcnn2015_SS/index.html

**Paper Submission** THE DEADLINE FOR THE PAPER SUBMISSION TO THE SPECIAL SESSION IS THE SAME AS FOR IJCNN 2015, January 15th 2015. All the submissions will be peer-reviewed with the same criteria used for other contributed papers. Prospective authors will submit their papers through the IJCNN 2015 conference submission system at http://www.ijcnn.org/ Please make sure to select the Special Session "Concept Drift, Domain Adaptation & Learning in Dynamic Environments" from the "S.
SPECIAL SESSION TOPICS" name in the "Main Research topic" dropdown list. Templates and instructions for authors will be provided on the IJCNN webpage http://www.ijcnn.org/ All papers submitted to the special sessions will be subject to the same peer-review procedure as regular papers; accepted papers will be published in the conference proceedings. Further information about IJCNN 2015 can be found at http://www.ijcnn.org/ For any question you may have about the Special Session or paper submission, feel free to contact Giacomo Boracchi, giacomo.boracchi at polimi.it

***********************************************************

Special Session on "Concept Drift, Domain Adaptation & Learning in Dynamic Environments" @ IJCNN 2015

*Organizers*
. Giacomo Boracchi (Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria, Italy) giacomo.boracchi at polimi.it
. Robi Polikar (Rowan University, Glassboro, NJ) polikar at rowan.edu
. Manuel Roveri (Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria, Italy) manuel.roveri at polimi.it

*Technical Program Committee*
. Cesare Alippi, Politecnico di Milano, Italy
. Albert Bifet, University of Waikato, New Zealand
. Gianluca Bontempi, Université Libre de Bruxelles, Belgium
. Gregory Ditzler, Drexel University, PA, USA
. Yaochu Jin, University of Surrey, England, UK
. Georg Krempl, University Magdeburg, Germany
. Ludmilla Kuncheva, University of Bangor, Wales, UK
. Leandro L. Minku, University of Birmingham, UK
. Harris Papadopoulos, Frederick University, Cyprus
. Leszek Rutkowski, Czestochowa University of Technology, Poland
. Marley Vellasco, Pontifícia Universidade Católica do Rio de Janeiro, Brasil
. Shengxiang Yang, Brunel University, England, UK

***********************************************************

Giacomo Boracchi, PhD
DEIB - Dipartimento di Elettronica, Informazione e Bioingegneria
Politecnico di Milano
Via Ponzio, 34/5 20133 Milano, Italy. Tel.
+39 02 2399 3467 http://home.dei.polimi.it/boracchi/

From marcus.pearce at qmul.ac.uk Sun Nov 9 19:19:06 2014 From: marcus.pearce at qmul.ac.uk (Marcus Pearce) Date: Mon, 10 Nov 2014 00:19:06 +0000 Subject: Connectionists: Postdoctoral research position: modelling musical preference decisions In-Reply-To: References: Message-ID: <5460047A.8040706@qmul.ac.uk>

Dear All (with apologies for cross-posting), A postdoc position is available at Queen Mary, University of London as part of an EPSRC-funded project on predicting musical choices using computational models of cognitive and neural processing. The goal of the project is to understand the cognitive and neural processes involved in musical preference decisions using computational modelling that incorporates the structure of the music, the listener and the listening context. The project also involves behavioural and EEG data collection and analysis using machine learning.

Salary: £31,735 - £35,319 per annum
Closing date: 14th November 2014
Start date: January 2015 or soon thereafter
Duration of post: 14 months

For further information and to apply see: http://music-cognition.eecs.qmul.ac.uk/postdoc.html

Marcus

-- Lecturer in Sound and Music Processing Queen Mary, University of London Mile End Road, London E1 4NS, UK Tel: +44 (0)20 7882 6207 Web: http://webprojects.eecs.qmul.ac.uk/marcusp Lab: http://music-cognition.eecs.qmul.ac.uk/

From pblouw at uwaterloo.ca Sun Nov 9 22:55:35 2014 From: pblouw at uwaterloo.ca (Peter Blouw) Date: Sun, 9 Nov 2014 22:55:35 -0500 Subject: Connectionists: 2015 Summer School on Large-Scale Brain Modelling Message-ID:

Hello!
[All details about this school can be found online at http://www.nengo.ca/summerschool]

The Centre for Theoretical Neuroscience at the University of Waterloo is inviting applications for our 2nd annual summer school on large-scale brain modelling. This two-week school will teach participants how to use the Nengo simulation package to build state-of-the-art cognitive and neural models. Nengo has been used to build what is currently the world's largest functional brain model, Spaun [1], and provides users with a versatile and powerful environment for simulating cognitive and neural systems. We welcome applications from all interested graduate students, research associates, postdocs, professors, and industry professionals. No specific training in the use of modelling software is required, but we encourage applications from active researchers with a relevant background in psychology, neuroscience, cognitive science, engineering, computer science, or a related field. For a look at last year's summer school, please see this short video: http://goo.gl/BXJ3x5

[1] Eliasmith, C., Stewart T. C., Choo X., Bekolay T., DeWolf T., Tang Y., Rasmussen, D. (2012). A large-scale model of the functioning brain. Science. Vol. 338 no. 6111 pp. 1202-1205. DOI: 10.1126/science.1225266. [http://nengo.ca/publications/spaunsciencepaper]

***Application Deadline: February 15, 2015***

Format: Participants are encouraged to bring their own ideas for projects, which may focus on testing hypotheses, modelling neural or cognitive data, implementing specific behavioural functions with neurons, expanding past models, or providing a proof-of-concept of various neural mechanisms.
More generally, participants will have the opportunity to:
- build perceptual, motor, and cognitive models with spiking neurons
- model anatomical, electrophysiological, cognitive, and behavioural data
- use a variety of single cell models within a large-scale model
- integrate machine learning methods into biologically oriented models
- use Nengo with your favorite simulator, e.g. Brian, NEST, Neuron, etc.
- interface Nengo with various kinds of neuromorphic hardware
- interface Nengo with cameras and robotic systems
- implement modern nonlinear control methods in neural models
- and much more...

Hands-on tutorials, work on individual or group projects, and talks from invited faculty members will make up the bulk of day-to-day activities. There will be a weekend break on June 13-14, and fun activities scheduled for evenings throughout. A project demonstration event will be held on the last day of the school, with prizes for strong projects!

Date and Location: June 7th to June 19th, 2015 at the University of Waterloo, Ontario, Canada.

Applications: Please visit http://www.nengo.ca/summerschool, where you can find more information regarding costs, travel, and lodging, along with an application form listing required materials. If you have any questions about the school or the application process, please contact Peter Blouw (pblouw at uwaterloo.ca)

We look forward to hearing from you!

From stephane.canu at insa-rouen.fr Mon Nov 10 03:53:40 2014 From: stephane.canu at insa-rouen.fr (Stéphane Canu) Date: Mon, 10 Nov 2014 09:53:40 +0100 Subject: Connectionists: Postdoc: Gesture Recognition in Normandy (France) Message-ID: <54607D14.4050007@insa-rouen.fr>

GESTURE RECOGNITION POSTDOCTORAL RESEARCHER

The following postdoctoral position is available to work on sequence learning for multimodal gesture recognition with Stéphane Canu (INSA Rouen).

Location: Normandy - Rouen (France) -
ITEKUBE Caen (France)
Starting date: ASAP
Duration: 2 years (1+1)
Net salary: ranges between ~1,800 Euros and 2,400 Euros per month, commensurate with experience.

PROJECT SUMMARY: This is a joint project between Professor Stéphane Canu of INSA Rouen and David Ulrich of ITEKUBE. ITEKUBE has considerable experience building large (46-inch) multi-touch tables and wants to improve its HMI (human-machine interface). By integrating machine learning it will be possible to expand gesture recognition from the simplest gestures to the most complex ones in a multimodal environment (for instance by using multi-touch inputs and Kinect at the same time), leading to a more intuitive and keyboard/mouse-free use of a computer. The successful candidate will be undertaking novel research in multimodal signal learning methods, real-time gesture representation and recognition, data fusion and intention prediction to be included in the next generation of multi-touch table HMI, among others.

EXPERIENCE: Postdoctoral researcher (new Ph.D. or more experienced), extensive knowledge of support vector machines and time series modeling. Prior work with gesture recognition and/or signal recognition would be most appropriate.

SOFTWARE DEVELOPMENT ENVIRONMENTS: Should be familiar with Matlab; the ability to program in C++ on PCs with Windows 7/8 would also be desirable.
Interested individuals should send a CV, representative publications, a statement of research interests, and three letters of reference to scanu at insa-rouen.fr and david at itekube.com

-- Stephane Canu LITIS - INSA Rouen - Dep ASI asi.insa-rouen.fr/~scanu +33 2 32 95 98 44

From evomusart at gmail.com Mon Nov 10 11:33:04 2014 From: evomusart at gmail.com (Colin Johnson) Date: Mon, 10 Nov 2014 16:33:04 +0000 Subject: Connectionists: Evomusart 2015 Deadline extension Message-ID:

---------------------------------------------------------------------------- Please distribute (Apologies for cross posting) ----------------------------------------------------------------------------

CALL FOR PAPERS - DEADLINE EXTENSION - 25 November

EvoMUSART 2015 http://www.evostar.org/2015/cfp_evomusart.php

4th International Conference on Evolutionary and Biologically Inspired Music, Sound, Art and Design

April 2015, Copenhagen, Denmark

Part of evo* 2015 evo*: http://www.evostar.org

---------------------------------------------------------------------------- NEW THIS YEAR: LEONARDO Gallery ----------------------------------------------------------------------------

The journal LEONARDO will be publishing a Gallery Section (online and in the print edition) associated with the conference. This will consist of a number of visual artworks based on ideas and techniques presented at the conference. A separate call for this will be issued after papers have been selected for the conference. http://www.leonardo.info/gallery/

----------------------------------------------------------------------------

Following the success of previous events and the importance of the field of evolutionary and biologically inspired (artificial neural network, swarm, alife) music, sound, art and design, evomusart has become an evo* conference with independent proceedings since 2012. Thus, evomusart 2015 is the fourth International Conference on Evolutionary and Biologically Inspired Music, Sound, Art and Design.
The use of biologically inspired techniques for the development of artistic systems is a recent, exciting and significant area of research. There is a growing interest in the application of these techniques in fields such as: visual art and music generation, analysis, and interpretation; sound synthesis; architecture; video; poetry; design; and other creative tasks. The main goal of evomusart 2015 is to bring together researchers who are using biologically inspired computer techniques for artistic tasks, providing the opportunity to promote, present and discuss ongoing work in the area. The event will be held in April, 2015 in Copenhagen, Denmark, as part of the Evo* event. ---------------------------------------------------------------------------- Publication Details ---------------------------------------------------------------------------- Submissions will be rigorously reviewed for scientific and artistic merit. Accepted papers will be presented orally or as posters at the event and included in the evomusart proceedings, published by Springer Verlag in a dedicated volume of the Lecture Notes in Computer Science series. The acceptance rate at evomusart 2014 was 26.7% for papers accepted for oral presentation, or 36.7% for oral and poster presentation combined. Submitters are strongly encouraged to provide in all papers a link for download of media demonstrating their results, whether music, images, video, or other media types. Links should be anonymised for double-blind review, e.g. using a URL shortening service. ---------------------------------------------------------------------------- Topics of interest ---------------------------------------------------------------------------- Submissions should concern the use of biologically inspired computer techniques -- e.g. 
Evolutionary Computation, Artificial Life, Artificial Neural Networks, Swarm Intelligence, other artificial intelligence techniques -- in the generation, analysis and interpretation of art, music, design, architecture and other artistic fields. Topics of interest include, but are not limited to: -- Generation - Biologically Inspired Design and Art -- Systems that create drawings, images, animations, sculptures, poetry, text, designs, webpages, buildings, etc.; - Biologically Inspired Sound and Music -- Systems that create musical pieces, sounds, instruments, voices, sound effects, sound analysis, etc.; - Robotic-Based Evolutionary Art and Music; - Other related artificial intelligence or generative techniques in the fields of Computer Music, Computer Art, etc.; -- Theory - Computational Aesthetics, Experimental Aesthetics; Emotional Response, Surprise, Novelty; - Representation techniques; - Surveys of the current state-of-the-art in the area; identification of weaknesses and strengths; comparative analysis and classification; - Validation methodologies; - Studies on the applicability of these techniques to related areas; - New models designed to promote the creative potential of biologically inspired computation; -- Computer Aided Creativity and computational creativity - Systems in which biologically inspired computation is used to promote the creativity of a human user; - New ways of integrating the user in the evolutionary cycle; - Analysis and evaluation of: the artistic potential of biologically inspired art and music; the artistic processes inherent to these approaches; the resulting artefacts; - Collaborative distributed artificial art environments; -- Automation - Techniques for automatic fitness assignment; - Systems in which an analysis or interpretation of the artworks is used in conjunction with biologically inspired techniques to produce novel objects; - Systems that resort to biologically inspired computation to perform the analysis of image, music, 
sound, sculpture, or some other types of artistic object.
----------------------------------------------------------------------------
Important Dates (to be confirmed)
----------------------------------------------------------------------------
Submission (UPDATED): 25 November 2014
Notification to authors: 07 January 2015
Camera-ready deadline: 21 January 2015
Evo*: 8-10 April 2015
----------------------------------------------------------------------------
Additional information and submission details
----------------------------------------------------------------------------
Submit your manuscript, at most 12 A4 pages long, in Springer LNCS format (instructions downloadable from http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0) no later than November 25th, 2014. Page limit: 12 pages. The reviewing process will be double-blind; please omit information about the authors in the submitted paper. Submission page: http://myreview.csregistry.org/evomusart15/
----------------------------------------------------------------------------
Programme committee
----------------------------------------------------------------------------
Adrián Carballal, University of A Coruña, Spain
Alain Lioret, Paris 8 University, France
Alan Dorin, Monash University, Australia
Alejandro Pazos, University of A Coruña, Spain
Amilcar Cardoso, University of Coimbra, Portugal
Amy K. Hoover, University of Central Florida, USA
Andrew Brown, Griffith University, Australia
Andrew Gildfind, Google, Inc., Australia
Andrew Horner, University of Science & Technology, Hong Kong
Anna Ursyn, University of Northern Colorado, USA
Antonino Santos, University of A Coruña, Spain
Antonios Liapis, IT University of Copenhagen, Denmark
Arne Eigenfeldt, Simon Fraser University, Canada
Benjamin Schroeder, Ohio State University, USA
Benjamin Smith, Indiana University-Purdue University Indianapolis, USA
Bill Manaris, College of Charleston, USA
Brian Ross, Brock University, Canada
Carlos Grilo, Instituto Politécnico de Leiria, Portugal
Christian Jacob, University of Calgary, Canada
Colin Johnson, University of Kent, UK
Dan Ashlock, University of Guelph, Canada
Dan Ventura, Brigham Young University, USA
Daniel Jones, Goldsmiths College, University of London, UK
Daniel Silva, University of Coimbra, Portugal
Douglas Repetto, Columbia University, USA
Eduardo Miranda, University of Plymouth, UK
Eelco den Heijer, Vrije Universiteit Amsterdam, Netherlands
Eleonora Bilotta, University of Calabria, Italy
Gary Greenfield, University of Richmond, USA
Hans Dehlinger, Independent Artist, Germany
Jonathan E. Rowe, University of Birmingham, UK
Jane Prophet, City University of Hong Kong, China
Jon McCormack, Monash University, Australia
Jonathan Byrne, University College Dublin, Ireland
Jonathan Eisenmann, Ohio State University, USA
José Fornari, NICS/Unicamp, Brazil
Juan Romero, University of A Coruña, Spain
Kate Reed, Imperial College, UK
Marcelo Freitas Caetano, IRCAM, France
Marcos Nadal, University of Vienna, Austria
Matthew Lewis, Ohio State University, USA
Mauro Annunziato, Plancton Art Studio, Italy
Maximos Kaliakatsos-Papakostas, University of Patras, Greece
Michael O'Neill, University College Dublin, Ireland
Nicolas Monmarché, University of Tours, France
Pablo Gervás, Universidad Complutense de Madrid, Spain
Palle Dahlstedt, Göteborg University, Sweden
Patrick Janssen, National University of Singapore, Singapore
Paulo Urbano, Universidade de Lisboa, Portugal
Pedro Abreu, University of Coimbra, Portugal
Pedro Cruz, University of Coimbra, Portugal
Penousal Machado, University of Coimbra, Portugal
Peter Bentley, University College London, UK
Peter Cariani, University of Binghamton, USA
Philip Galanter, Texas A&M College of Architecture, USA
Philippe Pasquier, Simon Fraser University, Canada
Rafael Ramirez, Pompeu Fabra University, Spain
Roger Malina, International Society for the Arts, Sciences and Technology, USA
Roisin Loughran, University College Dublin, Ireland
Ruli Manurung, University of Indonesia, Indonesia
Scott Draves, Independent Artist, USA
Somnuk Phon-Amnuaisuk, Brunei Institute of Technology, Malaysia
Stephen Todd, IBM, UK
Takashi Ikegami, Tokyo Institute of Technology, Japan
Tim Blackwell, Goldsmiths College, University of London, UK
Vic Ciesielski, RMIT, Australia
Yang Li, University of Science and Technology Beijing, China
----------------------------------------------------------------------------
Conference chairs
----------------------------------------------------------------------------
Colin Johnson, University of Kent, UK, c.g.johnson(at)kent.ac.uk
Adrián Carballal, University of A Coruña, Spain, adriancarballal(at)gmail.com

Publication chair
João Correia, University of Coimbra, jncor(at)dei.uc.pt

From Julien.Mayor at
unige.ch (Julien Mayor) Date: Tue, 11 Nov 2014 00:49:47 +0000 Subject: Connectionists: Research Assistant position in Cognitive/Developmental Psychology at the University of Nottingham - Malaysia Campus Message-ID: Dear colleagues, please circulate the following advertisement for a position of Research Assistant in Cognitive/Developmental Psychology at the University of Nottingham - Malaysia Campus. Applications are invited for a full-time Research Assistant position based within the School of Psychology of the University of Nottingham Malaysia Campus. The position is for three years and the individual will be required to undertake an MPhil/PhD degree for which the fees will be waived. The position will also include three years of stipend (at RM 1700 per month). Applicants must possess a 1st or upper 2nd class degree or Masters in a relevant subject, an appropriate English Language qualification and evidence of an aptitude for research. The applicant will choose to work on one of the following projects:

1. Development of a computational model of phoneme and lexical acquisition. The aim of this project is to develop a model that will capture language development at the phonemic and lexical levels, with special foci on native vs. non-native phoneme discrimination patterns and on bilingualism. Fluency in Matlab or a strong commitment to learn is expected from the successful candidate.

2. Experimental research into learning mechanisms involved in early word learning. The aim of this project is to describe the learning mechanisms infants and young children use when learning new words. The project will be carried out using tablet-based experiments in order to test infants and young children outside of the laboratory. Questions will focus on: selectivity of word-object associations, domain-specificity of lexical biases, and interaction between phoneme and lexical processing. The successful candidate will need to be eager to work with infants and young children.
Applications should be sent to Julien.mayor at nottingham.edu.my and should include a CV, a cover letter mentioning which project you are applying for, and contact details (email) of one or two references. Review of applications will start on November 30th. About the University: The University of Nottingham Malaysia Campus is situated 35km south of Kuala Lumpur. It is one of three campuses that form the University of Nottingham. Degrees awarded at the UNMC are indistinguishable from degrees awarded at the Nottingham Campus, and the School of Psychology at UNMC was the first department outside of the UK to receive accreditation by the British Psychological Society. The small size of the School of Psychology allows for a very high degree of interaction between members of staff and postgraduate students. For more information: http://www.nottingham.edu.my/Psychology/index.aspx. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yael at Princeton.EDU Mon Nov 10 23:09:07 2014 From: yael at Princeton.EDU (Yael Niv) Date: Tue, 11 Nov 2014 04:09:07 +0000 Subject: Connectionists: Doctoral studies at the Princeton Neuroscience Institute Message-ID: <57C9F1F3-E1D7-4FB4-8144-668BA6A4064D@exchange.Princeton.EDU> The Graduate Program in Neuroscience at Princeton University offers a unique and intensive program of study spanning molecular, cellular, systems and cognitive neuroscience, followed by advanced research in a world-class Princeton laboratory. We seek highly motivated and creative students in our efforts to understand the brain. A listing of faculty affiliated with the program can be found online at www.princeton.edu/neuroscience, and below. Our doctoral program is flexible and individually tailored, and we encourage students to pursue research with more than one faculty member and across departmental boundaries. Applications for entry in the Fall of 2015 are now being accepted, with a deadline of December 1.
For details, including contact information, please visit www.princeton.edu/neuroscience.

Michael Berry - Neural computation in the retina
William Bialek - Interface between physics and biology
Matthew Botvinick - Neural foundations of human behavior
Lisa Boulanger - Neuronal functions of immune molecules
Carlos Brody - Quantitative and behavioral neurophysiology
Tim Buschman - Neural dynamics of cognitive control
Jonathan Cohen - Neural bases of cognitive control
Lynn Enquist - Neurovirology
Liz Gavis - Neural development and mRNA localization in Drosophila
Alan Gelperin - Learning, memory and olfaction
Asif Ghazanfar - Neurobiology of primate social agents
Elizabeth Gould - Neurogenesis and hippocampal function
Michael Graziano - Sensorimotor integration
Charles Gross - Functions of the cerebral cortex in behavior
Uri Hasson - Temporal scales of neural processing
Philip Holmes - Mathematical modeling
John Hopfield - Computational neurobiology/biophysics
Barry Jacobs - Brain monoamine neurotransmitters
Sabine Kastner - Neural mechanisms for visual perception
Carolyn McBride - Molecular and neural basis of behavioral evolution
Mala Murthy - Neurophysiology of olfactory and auditory perception in Drosophila
Coleen Murphy - Molecular mechanisms of aging
Yael Niv - Reinforcement learning and decision making
Ken Norman - Neural bases of episodic memory
Jonathan Pillow - Neural information processing, machine learning, and statistical modeling of neural data
Sebastian Seung - Structure and function of neural circuits
Joshua Shaevits - Neural and behavioral dynamics in simple organisms
David Tank - Neural circuit dynamics
Jordan Taylor - Motor control and learning
Alexander Todorov - Cognitive neuroscience of social cognition and behavior
Nicholas Turk-Browne - Cognitive neuroscience of attention, perception and memory
Samuel Wang - Dynamics and learning in neural circuits
Ilana Witten - Neural circuits underlying reward and motivation

From rrosenb1 at nd.edu Mon Nov 10
20:11:34 2014 From: rrosenb1 at nd.edu (Robert Rosenbaum) Date: Mon, 10 Nov 2014 20:11:34 -0500 Subject: Connectionists: Faculty positions in applied statistics Message-ID: Dear colleagues, The ACMS department at Notre Dame is hiring two statistics professors, one senior and one tenure-track. The department strongly supports interdisciplinary and collaborative research, so researchers who apply statistics in neuroscience are encouraged to apply. Applications should be received by Dec 1. See the full advertisement below. The University of Notre Dame has committed four new faculty positions to the Department of Applied and Computational Mathematics and Statistics (ACMS) to be filled over the next two years. Positions at both the junior and senior level are available. This year we seek to hire a statistician at the tenured level, and a statistician at the tenure-track level, in any areas of research that build on our existing activities. Preference will be given to applicants whose statistical research includes multi-disciplinary collaborations. ACMS includes research groups in applied mathematics, statistics and computational science. The current fourteen faculty members have research interests in Bayesian statistics, statistical bioinformatics and high-dimensional data, finance, statistics in networks and Big Data, multiscale modeling of blood clotting and biofilms, mathematical modeling in cell biology and tumor growth, MEMS, and numerical and computational algorithms and computational neuroscience. ACMS offers a Bachelor of Science, a doctoral degree, a research master's degree and a professional master's degree. ACMS is a member of the College of Science. The successful applicant must have a doctorate in statistics, biostatistics or a closely related field, and a record of success in both research and teaching. The teaching load in ACMS is 3 courses per year, and the position begins in August 2015.
Applications received by December 1, 2014 will be given full consideration. Applications, including a cover letter, curriculum vitae and research and teaching statements, should be filed through MathJobs (www.MathJobs.org). Applicants should also arrange for at least three letters of recommendation to be submitted through the MathJobs system. These letters should address the applicant's research accomplishments and supply evidence that the applicant has the ability to communicate articulately and teach effectively. Senior faculty are invited to contact the Department Chair, Steven Buechler, at buechler.1 at nd.edu, at any time. Notre Dame is an equal opportunity employer, and we particularly welcome applications from women and minority candidates. -- Robert Rosenbaum Assistant Professor Department of Applied and Computational Mathematics and Statistics University of Notre Dame From rhaefner at bcs.rochester.edu Mon Nov 10 21:46:38 2014 From: rhaefner at bcs.rochester.edu (Ralf Haefner) Date: Mon, 10 Nov 2014 21:46:38 -0500 Subject: Connectionists: Postdoctoral positions in computational systems neuroscience (sensory processing & decision-making) Message-ID: <6BE7FD60-8734-4E2B-8C45-F13D9DE265CF@bcs.rochester.edu> Postdoctoral position in computational systems neuroscience (sensory processing & decision-making) We are looking for two postdoctoral researchers to join the group of Ralf Haefner in the Department of Brain & Cognitive Sciences at the University of Rochester (NY). The focus of our group is on understanding the neural basis of perceptual inference. Based on the assumption that the brain performs probabilistic inference using an internal model of the world, we ask questions such as: What is the nature of the internal model? How is it learnt? How are posterior beliefs represented by spiking responses of populations of neurons and what algorithm does the brain use to compute its beliefs? The position requires strong mathematical and good programming skills.
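As a purely illustrative sketch of the kind of MCMC-sampling-based probabilistic inference referred to above (a toy 1-D Gaussian "perception" problem, not the group's actual model), a minimal Metropolis sampler looks like this:

```python
import math
import random

def metropolis(log_post, x0, n_steps=20000, step=0.5, seed=0):
    """Generic 1-D Metropolis sampler: the long-run histogram of the
    returned samples approximates the target density exp(log_post)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)          # random-walk proposal
        if math.log(rng.random()) < log_post(prop) - log_post(x):
            x = prop                             # accept; else keep x
        samples.append(x)
    return samples

# Toy perceptual-inference problem: infer a stimulus s from one noisy
# observation. Prior: s ~ N(0, 1); likelihood: obs ~ N(s, 1). The exact
# posterior is N(obs/2, 1/2), so the sample mean should approach obs/2.
obs = 1.0
log_post = lambda s: -0.5 * s**2 - 0.5 * (obs - s)**2
samples = metropolis(log_post, x0=0.0)
est = sum(samples) / len(samples)  # should be close to 0.5
```

The same scheme extends to high-dimensional internal models; only the proposal and the log-posterior change.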
The ideal candidate would also be familiar with machine learning techniques, especially relating to generative models and MCMC-sampling-based probabilistic inference. Possible research projects range from purely theoretical to data-driven. Our collaborations with electrophysiology labs allow for testing of theoretical models using state-of-the-art cortical population recordings in vivo and, if desired, the opportunity to record your own data. More information about past work is available here: http://tinyurl.com/rmhaefner and a preprint of a manuscript representative of future work here: http://arxiv.org/abs/1409.0257 For details on current and possible future projects, and general information, please contact us directly. The successful candidate will join a world-class department with a wide range of research interests (https://www.bcs.rochester.edu/research/index.html) and a strong tradition in probabilistic modeling. Start date and duration are flexible. Please send applications including a CV and names of 2-3 references by email. Please contact us as soon as possible if you'd like to meet at the upcoming SfN or NIPS conferences. Ralf Haefner Assistant Professor Brain & Cognitive Sciences University of Rochester (NY) rhaefner at bcs.rochester.edu From schwarzwaelder at bcos.uni-freiburg.de Tue Nov 11 02:40:40 2014 From: schwarzwaelder at bcos.uni-freiburg.de (=?UTF-8?B?S2Vyc3RpbiBTY2h3YXJ6d8OkbGRlcg==?=) Date: Tue, 11 Nov 2014 08:40:40 +0100 Subject: Connectionists: Call for Applications: Bernstein Award for Computational Neuroscience 2015 Message-ID: <5461BD78.3080609@bcos.uni-freiburg.de> Dear colleagues, I would like to bring to your attention that for the tenth time, the German Federal Ministry of Education and Research (BMBF) has announced an open call for applications for the "Bernstein Award". The "Bernstein Award for Computational Neuroscience" is endowed with up to 1.25 million euros
for a period of five years and allows young scientists of all nationalities to establish an independent research group at a German university or research institution. The BMBF announcement can be found under the following links: German version English version Posters to announce the Bernstein Award locally can be downloaded from here: German version English version Application deadline is April 15, 2015. Kind regards, Kerstin Schwarzwälder -- Dr. Kerstin Schwarzwälder Bernstein Coordination Site of the National Bernstein Network Computational Neuroscience Albert Ludwigs University Freiburg Hansastr. 9A 79104 Freiburg Germany phone: +49 761 203 9594 fax: +49 761 203 9585 schwarzwaelder at bcos.uni-freiburg.de www.nncn.de Twitter: NNCN_Germany YouTube: Bernstein TV Facebook: Bernstein Network Computational Neuroscience, Germany LinkedIn: Bernstein Network Computational Neuroscience, Germany From jason at cs.jhu.edu Tue Nov 11 19:33:38 2014 From: jason at cs.jhu.edu (Jason Eisner) Date: Tue, 11 Nov 2014 19:33:38 -0500 Subject: Connectionists: Johns Hopkins University Jobs: Research Scientists and Postdocs Message-ID: The Human Language Technology Center of Excellence (HLTCOE) at Johns Hopkins University is hiring research scientists and postdocs. The HLTCOE already has a very strong group of researchers and is growing rapidly: http://hltcoe.jhu.edu/people/ Candidates interested in NLP, speech and applications of machine learning to language processing should apply. http://hltcoe.jhu.edu The full advertisement is below. Best, Jason ------------------------------------------------------------ The Human Language Technology Center of Excellence (HLTCOE) at Johns Hopkins University seeks to hire outstanding junior and senior researchers in all areas of speech and language processing. Positions include research scientist and post-doc. The HLTCOE, located near Johns Hopkins'
beautiful Homewood campus in Baltimore, Maryland, conducts long-term research on fundamental challenges that are critical for real-world problems. Its researchers publish widely. Applicants must hold a PhD in computer science, linguistics, electrical engineering, or a closely related field. Candidates should have a strong background in one or more of these areas:

* *Natural Language Processing and Understanding:* Information extraction, knowledge distillation, semantics, sentiment, parsing, morphology, including low-resource languages
* *Machine Translation:* Low-resource languages, large-scale training, phrase-based and syntax-based approaches
* *Speech Processing:* Robust speech recognition and speaker identification (multiple languages, genres, and channels, limited resources), speech retrieval, language identification
* *Machine Learning:* Large-scale learning, transfer learning, semi-supervised learning, data mining

*Research Scientists* Research scientists are charged with setting the agenda for a program, working with other members of the HLTCOE research team to pursue the organization's cutting-edge research goals, publishing results in academic conferences, and (optionally) working with or directly advising students and teaching University courses. *Senior applicants* should be experienced researchers with a track record of high-quality publications, and should also have significant experience in project management and a demonstrated ability to build HLT systems. Senior applicants should have experience equivalent to the level of associate professor, or 6+ years in industrial research. Applications will be considered on a rolling basis. *Junior applicants* should have a strong record of publication and demonstrated research experience. Applicants should have experience equivalent to those applying to assistant professor positions, or 1-5 years in industry.
Submit applications by January 3, 2015 for full consideration; however, applications will be accepted until the positions have been filled. *Post-Docs* Recent PhD graduates may be considered for postdoctoral positions, which last 1-2 years. Post-docs will be mentored by research scientists. Applications should be submitted here: https://academicjobsonline.org/ajo/jobs/4656 Email: hltcoe-hiring at jhu.edu WWW: http://hltcoe.jhu.edu *Note: U.S. Citizenship and security clearance are required for most positions; the HLTCOE will seek a clearance for those who do not already have one.* From schwarzwaelder at bcos.uni-freiburg.de Thu Nov 13 03:22:38 2014 From: schwarzwaelder at bcos.uni-freiburg.de (=?UTF-8?B?S2Vyc3RpbiBTY2h3YXJ6d8OkbGRlcg==?=) Date: Thu, 13 Nov 2014 09:22:38 +0100 Subject: Connectionists: Open positions in Computational Neuroscience - visit the Bernstein Network at booth #3235 at SfN 2014! In-Reply-To: <546469B5.3070507@bcos.uni-freiburg.de> References: <546469B5.3070507@bcos.uni-freiburg.de> Message-ID: <54646A4E.7030006@bcos.uni-freiburg.de> Dear all, as in past years, the Bernstein Network Computational Neuroscience will exhibit at the SfN Meeting 2014. Please find us at the "Neuroscience in Germany" booth #3235. The exhibition presents the network's research areas, study and training opportunities and job announcements. We look forward to welcoming you at booth #3235! Best regards, Kerstin Schwarzwälder -- Dr. Kerstin Schwarzwälder Bernstein Coordination Site of the National Bernstein Network Computational Neuroscience Albert Ludwigs University Freiburg Hansastr.
9A 79104 Freiburg Germany phone: +49 761 203 9594 fax: +49 761 203 9585 schwarzwaelder at bcos.uni-freiburg.de www.nncn.de Twitter: NNCN_Germany YouTube: Bernstein TV Facebook: Bernstein Network Computational Neuroscience, Germany LinkedIn: Bernstein Network Computational Neuroscience, Germany From michel.verleysen at uclouvain.be Thu Nov 13 04:15:17 2014 From: michel.verleysen at uclouvain.be (Michel Verleysen) Date: Thu, 13 Nov 2014 09:15:17 +0000 Subject: Connectionists: ESANN 2015 deadline extension Message-ID: ====================================================== ESANN 2015 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning Bruges (Belgium) - April 22-23-24, 2015 http://www.esann.org/ Submission deadline extension ====================================================== Due to numerous requests, the deadline to submit papers to the ESANN 2015 conference has been extended to November 28, 2014. Please note that no further extension will be given. Looking forward to seeing you at ESANN 2015, The organizing committee. Michel Verleysen Professor ICTEAM institute Place du Levant, 3 box L5.03.02 B-1348-Louvain-la-Neuve michel.verleysen at uclouvain.be Tél. +32 10 47 25 51 - Fax +32 10 47 25 98 perso.uclouvain.be/michel.verleysen From xose.pardo at usc.es Thu Nov 13 05:19:03 2014 From: xose.pardo at usc.es (Xose M.
Pardo) Date: Thu, 13 Nov 2014 11:19:03 +0100 Subject: Connectionists: (Extended Deadline: 5 Dec 2014) IbPRIA 2015 - Santiago de Compostela (Spain) Message-ID: <54648597.9000209@usc.es> The submission deadline for the 7th Iberian Conference on Pattern Recognition and Image Analysis, in Santiago de Compostela (Spain) on 17-19 June 2015, has been extended to 5 December 2014. Further information: http://www.ibpria.org/2015/ IbPRIA 2015, Santiago de Compostela. We look forward to your contributions! -- Xose M. Pardo, Local Chair http://persoal.citius.usc.es/xose.pardo voice: +34 8818 16438 Centro de Investigación en Tecnoloxías da Información (CITIUS) Univ. Santiago de Compostela 15782 Santiago de Compostela GALIZA (Spain) From Jill.Douglas at ed.ac.uk Thu Nov 13 09:08:36 2014 From: Jill.Douglas at ed.ac.uk (Jill Douglas) Date: Thu, 13 Nov 2014 14:08:36 -0000 Subject: Connectionists: Postdoc position: Whittaker Research Fellow in Mathematics for Data Science - University of Edinburgh Message-ID: <012a01cfff4b$51314fd0$f393ef70$@ed.ac.uk> The School of Mathematics, University of Edinburgh wishes to appoint a WHITTAKER RESEARCH FELLOW in MATHEMATICS FOR DATA SCIENCE. The fellowship is a fixed-term contract for two years. We are seeking outstanding candidates with a track record of research developing new mathematical or statistical methods for the analysis and processing of large-scale datasets. The candidates' research, which can be foundational or applied, will relate to themes that include (but are not limited to) structure identification, dimensionality reduction and machine learning. Successful candidates will connect with existing areas of expertise in the School of Mathematics and, more broadly, with relevant research within the University. They will demonstrate a strong independent research programme and will be committed to teaching.
The successful candidates will join Edinburgh Data Science, a 1000-researcher network of excellence that coordinates the University's activities in data science; they will have the opportunity to participate in the Centres for Doctoral Training 'Data Science' and 'Maxwell Institute Graduate School in Analysis & its Applications.'

Please include with your application:
- a CV with publication list
- an outline of your proposed research programme

In addition, please arrange for three referees to send letters of recommendation directly to hr at maths.ed.ac.uk by the closing date of 8th December 2014. Interviews will be held in early January 2015 and you must be willing to take up the position on or before 1 September 2015. All applicants should apply online by clicking the apply link via our vacancy website and submitting a CV and research statement. The University of Edinburgh promotes equality and diversity. The School of Mathematics holds a Bronze Athena SWAN award in recognition of our commitment to advance the representation of women in science, mathematics, engineering and technology, and is a supporter of the London Mathematical Society Good Practice Scheme. From Kai.Puolamaki at ttl.fi Thu Nov 13 16:08:57 2014 From: Kai.Puolamaki at ttl.fi (=?utf-8?B?UHVvbGFtw6RraSBLYWk=?=) Date: Thu, 13 Nov 2014 21:08:57 +0000 Subject: Connectionists: Senior Research Scientist (tenured) in computational data analysis (FIOH, Helsinki, DL 30 Nov) Message-ID: The Brain Work Research Centre at the Finnish Institute of Occupational Health is seeking a SENIOR RESEARCH SCIENTIST Your task is to lead research on computational data analysis and to devise new computational methods and build expertise that benefit research and services for the needs of future working life.
You will advance the use of the heterogeneous data sets that have been and are being collected at the institute, including data sets related to physiology, health, cognition, and the work environment, spanning measurements from laboratories to workplaces. You will be responsible for national and international research collaboration in the topic of the position. You will apply for funding, lead research projects, and supervise students and other more junior staff members. You may participate in education and teaching within the area of your expertise. We require a doctoral degree in computer science or another suitable discipline, as well as a thorough understanding of computational data analysis. The senior research scientist should have a post-doctoral track record of successful collaboration in an international multidisciplinary environment, as well as of applying for research funding, managing projects, and collaborating with companies. We require excellent communication skills in written and spoken English. We may decide to hire a specialist research scientist (a more junior position, doctoral degree required) for the said position, if the candidate is suitable but does not yet qualify for a senior research scientist position. The position is tenured, with a four-month probation period. The Senior Research Scientist is an academic-track position, roughly equivalent to associate professor in a typical university career system. The location of work is in Meilahti, Helsinki. Interested? Please fill in and submit an electronic application form no later than 30 November 2014 at https://rekry.oikotie.fi/recruitment/jobs/tyoterveyslaitos/o/4537/en_US/en_ US/details Please attach to your application, as a PDF file, a curriculum vitae (including a list of publications), the names and contact information of references, and a motivation letter that includes a brief description of your research interests.
Additional information: Kai Puolamäki, Docent, PhD, Director, Brain Work Research Centre, kai.puolamaki at ttl.fi, +358 30 474 2445 Minna Huotilainen, Docent, PhD, Research Professor, minna.huotilainen at ttl.fi, +358 30 474 2517 Matti Gröhn, PhD, Team Leader, matti.grohn at ttl.fi, +358 30 474 2687 The Finnish Institute of Occupational Health has a web site at www.ttl.fi You can find an online version of this call text at https://rekry.oikotie.fi/recruitment/jobs/tyoterveyslaitos/o/4537/en_US/en_ US/details The Finnish Institute of Occupational Health (FIOH) is a research and specialist organization that promotes occupational health and safety and the well-being of workers. It is an independent institution under public law, functioning under the administrative sector of the Ministry of Social Affairs and Health. FIOH has six regional offices, and its headquarters are in Helsinki. The Brain Work Research Centre (BWRC) at FIOH studies how the brain and senses function and how this is influenced by human and work-related factors. One of the focus areas of the BWRC is the study of human information processing in modern and future ICT environments. Finnish Institute of Occupational Health, Topeliuksenkatu 41 a A, FI-00250 Helsinki, Finland -- Kai Puolamäki, Docent, PhD. Director, Brain Work Research Centre, Finnish Institute of Occupational Health. Topeliuksenkatu 41 a A, 00250 Helsinki, Finland.
Phone: +358 43 8250726 Email: kai.puolamaki at ttl.fi FIOH web site: http://www.ttl.fi/ Research homepage: http://www.iki.fi/kaip/ From pierre-yves.oudeyer at inria.fr Sat Nov 15 10:48:19 2014 From: pierre-yves.oudeyer at inria.fr (Pierre-Yves Oudeyer) Date: Sat, 15 Nov 2014 16:48:19 +0100 Subject: Connectionists: [Call for workshops] IJCNN 2015, Killarney, Ireland, 12-17 July 2015 Message-ID: <7C3CF09A-3386-4466-BF30-B5B1C292F7EF@inria.fr> Call for Workshops, IJCNN 2015 Deadline: December 15th, 2014 Location: Killarney, Ireland Date of events: 12-17 July 2015 Web: http://www.ijcnn.org/call-for-workshops Post-conference workshops at IJCNN offer a unique opportunity for in-depth discussions of specific topics in neural networks and computational intelligence. The workshops should be moderated by scientists or professionals who have significant expertise and/or whose recent work has had a significant impact within their field. IJCNN 2015 will emphasize emerging and growing areas of computational intelligence. Each workshop has a duration of 3 or 6 hours. The format of each workshop will be up to the moderator, and can include interactive presentations as well as panel discussions among participants. These interactions should highlight exciting new developments and current research trends to facilitate a discussion of ideas that will drive the field forward in the coming years. Workshop organizers can prepare various materials, including handouts or electronic resources, that can be made available for distribution before or after the meeting. Researchers interested in organizing workshops are invited to submit a formal proposal. A form for submitting proposals is provided on the web page http://www.ijcnn.org/call-for-workshops Any questions regarding proposals may be directed to the Workshop Chair: Pierre-Yves Oudeyer, INRIA, France.
email: pierre-yves.oudeyer at inria.fr Pierre-Yves Oudeyer Inria, France http://www.pyoudeyer.com From grlmc at urv.cat Sat Nov 15 11:18:28 2014 From: grlmc at urv.cat (GRLMC) Date: Sat, 15 Nov 2014 17:18:28 +0100 Subject: Connectionists: BigDat 2015: registration deadline 23 November Message-ID: *To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject line* ***************************************************** INTERNATIONAL WINTER SCHOOL ON BIG DATA BigDat 2015 Tarragona, Spain January 26-30, 2015 Organized by Rovira i Virgili University http://grammars.grlmc.com/bigdat2015/ ***************************************************** --- 6th registration deadline: November 23, 2014 --- ***************************************************** AIM: BigDat 2015 is a research training event for graduates and postgraduates in the first steps of their academic career. It aims to update them on the most recent developments in the fast-developing area of big data, which covers a large spectrum of current exciting research, development and innovation with an extraordinary potential for a huge impact on scientific discoveries, medicine, engineering, business models, and society itself. Renowned academics and industry pioneers will lecture and share their views with the audience. All big data subareas will be covered, namely: foundations, infrastructure, management, search and mining, security and privacy, and applications. The main challenges of analytics, management and storage of big data will be identified through 4 keynote lectures and 23 six-hour courses, which will tackle the most lively and promising topics. The organizers believe outstanding speakers will attract the brightest and most motivated students. Interaction will be a main component of the event. ADDRESSED TO: Graduates and postgraduates from around the world.
There are no formal prerequisites in terms of academic degrees. However, since the courses will differ in level, specific background knowledge may be required for some of them. BigDat 2015 is also appropriate for more senior people who want to keep themselves updated on recent developments and future trends. They will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators. REGIME: In addition to the keynotes, 3 courses will run in parallel during the whole event. Participants will be free to choose the courses they wish to attend and to move from one to another. VENUE: BigDat 2015 will take place in Tarragona, located 90 km south of Barcelona. The venue will be: Campus Catalunya Universitat Rovira i Virgili Av. Catalunya, 35 43002 Tarragona KEYNOTE SPEAKERS: Ian Foster (Argonne National Laboratory), Taming Big Data: Accelerating Discovery via Outsourcing and Automation Geoffrey C. Fox (Indiana University, Bloomington), Mapping Big Data Applications to Clouds and HPC C. Lee Giles (Pennsylvania State University, University Park), Scholarly Big Data: Information Extraction and Data Mining William D. Gropp (University of Illinois, Urbana-Champaign), tba COURSES AND PROFESSORS: Hendrik Blockeel (KU Leuven), [intermediate] Decision Trees for Big Data Analytics Diego Calvanese (Free University of Bozen-Bolzano), [introductory/intermediate] End-User Access to Big Data Using Ontologies Jiannong Cao (Hong Kong Polytechnic University), [introductory/intermediate] Programming with Big Data Edward Y.
Chang (HTC Corporation, New Taipei City), [introductory/advanced] Big Data Analytics: Architectures, Algorithms, and Applications Ernesto Damiani (University of Milan), [introductory/intermediate] Process Discovery and Predictive Decision Making from Big Data Sets and Streams Gautam Das (University of Texas, Arlington), [intermediate/advanced] Mining Deep Web Repositories Maarten de Rijke (University of Amsterdam), tba Geoffrey C. Fox (Indiana University, Bloomington), [intermediate] Using Software Defined Systems to Address Big Data Problems Minos Garofalakis (Technical University of Crete, Chania), [intermediate/advanced] Querying Continuous Data Streams Vasant G. Honavar (Pennsylvania State University, University Park), [introductory/intermediate] Learning Predictive Models from Big Data Mounia Lalmas (Yahoo! Research Labs, London), [introductory] Measuring User Engagement Tao Li (Florida International University, Miami), [introductory/intermediate] Data Mining Techniques to Understand Textual Data Kwan-Liu Ma (University of California, Davis), [intermediate] Big Data Visualization Christoph Meinel (Hasso Plattner Institute, Potsdam), [introductory/intermediate] New Computing Power by In-Memory and Multicore to Tackle Big Data Manish Parashar (Rutgers University, Piscataway), [intermediate] Big Data Challenges in Simulation-based Science Srinivasan Parthasarathy (Ohio State University, Columbus), [intermediate] Scalable Data Analysis Evaggelia Pitoura (University of Ioannina), [introductory/intermediate] Online Social Networks Vijay V.
Raghavan (University of Louisiana, Lafayette), [introductory/intermediate] Visual Analytics of Time-evolving Large-scale Graphs Pierangela Samarati (University of Milan), [intermediate] Data Security and Privacy in the Cloud Peter Sanders (Karlsruhe Institute of Technology), [introductory/intermediate] Algorithm Engineering for Large Data Sets Johan Suykens (KU Leuven), [introductory/intermediate] Fixed-size Kernel Models for Big Data Domenico Talia (University of Calabria, Rende), [intermediate] Scalable Data Mining on Parallel, Distributed and Cloud Computing Systems Jieping Ye (Arizona State University, Tempe), [introductory/advanced] Large-Scale Sparse Learning and Low Rank Modeling ORGANIZING COMMITTEE: Adrian Horia Dediu (Tarragona) Carlos Martín-Vide (Tarragona, chair) Florentina Lilica Voicu (Tarragona) REGISTRATION: Registration must be done at http://grammars.grlmc.com/bigdat2015/registration.php The selection of up to 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an approximation of the respective demand for each course. Since the capacity of the venue is limited, registration requests will be processed on a first-come, first-served basis. The registration period will close, and the online registration facility will be disabled, once the venue's capacity is reached. Registering well before the event is strongly recommended. FEES: As far as possible, participants are expected to stay full-time. Fees are a flat rate covering attendance at all courses during the week. There are several early registration deadlines. Fees depend on the registration deadline. ACCOMMODATION: Accommodation suggestions are available on the webpage. CERTIFICATE: Participants will receive a certificate of attendance. QUESTIONS AND FURTHER INFORMATION: florentinalilica.voicu at urv.cat POSTAL ADDRESS: BigDat 2015 Lilica Voicu Rovira i Virgili University Av.
Catalunya, 35 43002 Tarragona, Spain Phone: +34 977 559 543 Fax: +34 977 558 386 ACKNOWLEDGEMENTS: Universitat Rovira i Virgili From sethu.vijayakumar at ed.ac.uk Fri Nov 14 09:07:46 2014 From: sethu.vijayakumar at ed.ac.uk (Sethu Vijayakumar) Date: Fri, 14 Nov 2014 14:07:46 +0000 Subject: Connectionists: Robotics: Science and Systems (RSS) 2015, Rome, Italy: Second CFP In-Reply-To: <542D5F0F.1070307@ed.ac.uk> References: <542D5F0F.1070307@ed.ac.uk> Message-ID: <54660CB2.6050701@ed.ac.uk> ======================================== Robotics: Science and Systems (RSS) 2015 http://www.roboticsconference.org Sapienza University of Rome, Rome, Italy July 13-17, 2015 CALL FOR PAPERS AND WORKSHOP PROPOSALS ======================================== Robotics: Science and Systems (RSS) brings together researchers working on the algorithmic and mathematical foundations of robotics, robotics applications, and analysis of robotic systems. The conference is single-track, and the final program will be the result of a thorough review process, to give attendees an opportunity to see the best research in all areas of robotics. The program includes invited talks, as well as oral and poster presentations of refereed papers. The main conference is followed by two days of workshops and tutorials. All papers presented at the main conference will be published in online proceedings. Selected papers will be invited for submission to a special issue of the International Journal of Robotics Research and a special issue of Autonomous Robots.
Important Dates ======================================== Papers: - Final Paper Submission Deadline: January 22, 2015 - Acceptance Notification: April 30, 2015 Workshops: - Preliminary Submission Deadline: December 12, 2014 - Final Submission Deadline: February 6, 2015 - Acceptance Notification: March 6, 2015 RSS 2015 Conference: July 13-15, 2015 RSS 2015 Workshops: July 16-17, 2015 Topic Areas ======================================== Papers containing original and unpublished work are solicited in all areas of robotics, including (but not limited to) the following: kinematics, dynamics, control, planning, manipulation, human-robot interaction, human-centered systems, field robotics, distributed systems, medical robotics, biological robotics, mechanisms, robot perception, mobile systems, mobility, estimation, and learning. Invited Speakers (Tentative) ======================================== Vanessa Evers, University of Twente Leonidas Guibas, Stanford University Tomaso Poggio, Massachusetts Institute of Technology Moshe Shoham, Technion Israel Institute of Technology Paper Format ======================================== Submissions may be up to 8 pages in length, excluding references. Reviewing for RSS 2015 is double-blind, so authors should not be listed on the title page, and reasonable anonymity should be maintained throughout the paper. Submissions are refereed on the basis of technical quality, novelty, significance, and clarity. More details are available on the conference web site: http://www.roboticsconference.org We look forward to seeing you in Rome! Lydia E.
Kavraki, RSS 2015 General Chair David Hsu, RSS 2015 Program Chair Sethu Vijayakumar, RSS 2015 Publicity Chair -- ------------------------------------------------------------------ Professor Sethu Vijayakumar FRSE Personal Chair in Robotics Director, Edinburgh Centre for Robotics [edinburgh-robotics.org] Director, IPAB, School of Informatics, The University of Edinburgh 1.28 Informatics Forum, 10 Crichton Street, Edinburgh EH8 9AB, UK URL: http://homepages.inf.ed.ac.uk/svijayak Ph: +44(0)131 651 3444 SLMC Research Group URL: http://www.ipab.informatics.ed.ac.uk/slmc ------------------------------------------------------------------ Adjunct Faculty, Department of Computer Science University of Southern California, Los Angeles, CA, USA 90089-0781 ------------------------------------------------------------------ Microsoft Research & Royal Academy of Engineering Senior Research Fellow ------------------------------------------------------------------ The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From sinankalkan at gmail.com Fri Nov 14 01:46:33 2014 From: sinankalkan at gmail.com (Sinan KALKAN) Date: Fri, 14 Nov 2014 08:46:33 +0200 Subject: Connectionists: 2nd CfP - ICAR 2015 - 17th International Conference on Advanced Robotics Message-ID: * Please accept our apologies if you receive multiple copies of this call * 17th International Conference on Advanced Robotics, ICAR 2015 27-31 July, 2015, Istanbul, Turkey http://www.icar2015.org/ *2nd Call for Papers* The 17th International Conference on Advanced Robotics (ICAR 2015) is organized by Middle East Technical University in collaboration with Kadir Has University. The conference will take place on the Kadir Has University campus in Istanbul, Turkey, on July 27-31, 2015.
Keeping up with the same spirit of innovation, ICAR aims to bring high-quality papers, workshops and tutorials to geographical areas where the larger robotics conferences have not yet been organized. After last year's successful conference in Montevideo, Uruguay (www.icar2013.org), the 17th ICAR will be held in Istanbul, Turkey, where "the east meets the west". The conference is organized by Middle East Technical University (METU) in collaboration with Kadir Has University. The venue is the Kadir Has campus, situated on the historic peninsula along Haliç Bay (www.icar2015.org). ICAR 2015 will be technically co-sponsored by the IEEE Robotics and Automation Society. The technical program of ICAR 2015 will consist of plenary talks, workshops and oral presentations. Submitted papers should describe original work in the form of theoretical modelling, design, experimental validation, or case studies from all areas of robotics, focusing on emerging paradigms and application areas including but not limited to: Robotics Vision Adversarial Planning Cognitive Robotics Robot Operating Systems Robotics Architectures Simulation and Visualization Mobile Robots Robot Swarms Humanoid Robots Biologically-Inspired Robots Self-Localization and Navigation Embedded and Mobile Hardware Spatial Cognition Robotic Entertainment Human-Robot Interaction Robot Competitions Multi-Robot Systems Unmanned Aerial Robots Search and Rescue Robots Underwater Robotic Systems Learning and Adaptation Educational Robotics Cooperation and Competition Rehabilitation Robotics Dynamics and Control Immersive Robotics *Keynote speakers* Danica Kragic Royal Institute of Technology (KTH), Sweden http://www.csc.kth.se/~danik/ Oussama Khatib Stanford University, USA http://cs.stanford.edu/groups/manips/ok.html Todd P. Coleman University of California, San Diego, USA http://coleman.ucsd.edu/ Noah J.
Cowan Johns Hopkins University, USA http://limbs.lcsr.jhu.edu/people/cowan/ *Important Dates* Paper submission February 1, 2015 Workshop and tutorial proposals February 1, 2015 Notification of paper acceptance April 15, 2015 Camera-ready papers May 15, 2015 ICAR 2015 Conference July 27-31, 2015 *Paper Submission* Original technical paper contributions are solicited for presentation at ICAR 2015. Accepted papers will be published in IEEE Xplore conference proceedings. Submissions should be 6-8 pages following the IEEE Xplore format available at: http://www.ieee.org/conferences_events/conferences/publishing/templates.html Papers will be submitted online via EasyChair: https://www.easychair.org/conferences/?conf=icar2015 *For more information* http://www.icar2015.org/ Sinan Kalkan, Erol Sahin Publicity and Publication Co-chairs -- Sinan KALKAN, Asst. Prof. Dept. of Computer Engineering Middle East Technical University Ankara TURKEY Web: http://kovan.ceng.metu.edu.tr/~sinan Tel: +90 - 312 - 210 5547 / 210 7372 Fax: +90 - 312 - 210 5544 From THTeng at ntu.edu.sg Sun Nov 16 03:14:57 2014 From: THTeng at ntu.edu.sg (Teng Teck Hou (Dr)) Date: Sun, 16 Nov 2014 08:14:57 +0000 Subject: Connectionists: Special Session on Autonomous Learning from Big Data in IJCNN 2015: Call for Papers Message-ID: <9237A50F72F4DF4A8CA542FF9E5A5D40F3C2FF8A@EXCHMBOX34.staff.main.ntu.edu.sg> [Apologies for cross-postings] Call for Papers for a Special Session at the International Joint Conference on Neural Networks (IJCNN) 2015 Title: Autonomous Learning from Big Data Organizers: P. Angelov (p.angelov at lancaster.ac.uk) and A. Roy (asim.roy at asu.edu) The aim of the special session is to present the latest results in this fast-expanding area of Autonomous Learning Systems and Big Data Analytics and to provide a forum for discussing the challenges for the future.
It is organised by the new Special Interest Group on Autonomous Learning Systems and the Section on Big Data Analytics within INNS, and by the Technical Committee on Evolving Intelligent Systems of the IEEE SMC Society, and aims to be a focal point of the latest research in this emerging area. One of the important research challenges today is to cope effectively and efficiently with the ever-growing amount of data being produced, at an exponential rate, by sensors, Internet activity, nature and society. Dealing with this ocean of zettabytes of data and data streams, and navigating to the small islands of human-interpretable knowledge and information, requires new types of analytics approaches and autonomous learning systems and processes. Traditionally, machine learning, AI and cognitive science were developed for decades, or even centuries, under the assumption that the data available to test and validate hypotheses form a small, finite volume that can be processed iteratively and offline. The realities of dynamically evolving big data streams and big data sets (e.g. petabytes of data from the retail industry, high-frequency trading, genomics and other areas) have become prominent only during the last decade or so. This poses new challenges and requires new, revolutionary approaches.
Topics of interest (including but not limited to): Methodology * Autonomous, online, incremental learning - theory, algorithms and applications in big data * High dimensional data, feature selection, feature transformation - theory, algorithms and applications for big data * Scalable algorithms for big data * Learning algorithms for high-velocity streaming data * Kernel methods and statistical learning theory * Big data streams analytics * Deep neural network learning * Machine vision and big data * Brain-machine interfaces and big data * Cognitive modeling and big data * Embodied robotics and big data * Fuzzy systems and big data * Evolutionary systems and big data * Evolving systems for big data analytics * Neuromorphic hardware for scalable machine learning * Parallel and distributed computing for big data analytics (cloud, map-reduce, etc.) * New Adaptive and Evolving Learning Methods * Autonomous Learning Systems * Stability, Robustness, Unlearning Effects * Structure Flexibility and Robustness in Evolving Systems * Evolving in Dynamic Environments * Drift and Shift in Data Streams * Self-monitoring Evolving Systems * Evolving Decision Systems * Evolving Perceptions * Self-organising Systems * Neural Networks with Evolving Structure * Non-stationary Time Series Prediction with Evolving Systems * Automatic Novelty Detection in Evolving Systems * On-Line Identification of Fuzzy Systems * Evolving Neuro-fuzzy Systems * Evolving Clustering Methods * Evolving Fuzzy Rule-based Classifiers * Evolving Regression-based Classifiers * Evolving Intelligent Systems for Time Series Prediction * Evolving Intelligent System State Monitoring and Prognostics Methods * Evolving Intelligent Controllers * Evolving Fuzzy Decision Support Systems * Evolving Probabilistic Models * Big data and collective intelligence/collaborative learning * Big data and hybrid systems * Big data and self-aware systems * Big data and infrastructure * Big data analytics and healthcare/medical
applications * Big data analytics and energy systems/smart grids * Big data analytics and transportation systems * Big data analytics in large sensor networks * Big data and machine learning in computational biology, bioinformatics * Recommendation systems/collaborative filtering for big data * Big data visualization * Online multimedia/stream/text analytics * Link and graph mining * Big data and cloud computing, large scale stream processing on the cloud Real-life applications * Robotics * Defence * Intelligent Transport * Bio-Informatics * Industrial Applications * Data Mining and Knowledge Discovery * Control Systems * Evolving Consumer Behaviour * Evolving Activities Recognition * Evolving Self-localisation Systems Dates: * Send Title & Abstract to P. Angelov or Asim Roy as soon as possible * Deadline for Paper Submission 15 January, 2015 * Notification of Acceptance 15 March, 2015 * Final Paper Submission 15 April, 2015 Selected authors will be invited to submit extended papers for a special issue of the Springer journal Evolving Systems From cdiekman at gmail.com Sat Nov 15 10:48:00 2014 From: cdiekman at gmail.com (Casey Diekman) Date: Sat, 15 Nov 2014 10:48:00 -0500 Subject: Connectionists: Faculty Position, Mathematical Sciences, New Jersey Institute of Technology Message-ID: The Department of Mathematical Sciences (DMS) at the New Jersey Institute of Technology seeks candidates to fill a tenure-track/tenured position at the Assistant/Associate/Full Professor level in the general area of Applied Mathematics.
Candidates are sought from all fields of Applied Mathematics. The Department is particularly interested in candidates whose research interests are consistent with the existing research strengths in scientific computation/numerical analysis, modeling/asymptotic analysis, PDEs and dynamical systems, with focused research groups in applications to fluid dynamics, mathematical neuroscience, and wave propagation. DMS has experienced tremendous growth in research over the past two decades, and is now recognized as having a leading national program in applied mathematics. The department offers B.S., M.S., and Ph.D. degrees, with Ph.D. program tracks in Applied Mathematics as well as in Applied Probability & Statistics. For more information about DMS faculty and programs, visit http://math.njit.edu. Candidates should have a Ph.D. in Applied Mathematics or a related field, along with postdoctoral experience and strong research and teaching potential for consideration at the Assistant Professor level, and an appropriate record of accomplishment in classroom teaching, mentoring of doctoral students, and research publication and funding for consideration at the Associate or Full Professor level. At the university's discretion, the prerequisites may be excepted where the candidate can demonstrate to the satisfaction of the university an equivalent combination of education and experience that prepares the candidate for success in the position. Please visit https://njit.jobs, posting number 0602414, to apply. Submit a cover letter, resume/CV, research and teaching statements, a summary of teaching evaluations (if available), and names and contact information of three references. Review of applications will begin on November 15, 2014 and will continue until the position is filled. To build a diverse workforce, NJIT encourages applications from individuals with disabilities, minorities, veterans and women. EEO employer.
Casey Diekman Assistant Professor Department of Mathematical Sciences New Jersey Institute of Technology 323 Dr. Martin Luther King, Jr. Blvd. Newark, NJ 07102 From randy.oreilly at colorado.edu Mon Nov 17 04:31:25 2014 From: randy.oreilly at colorado.edu (Randall O'Reilly) Date: Mon, 17 Nov 2014 02:31:25 -0700 Subject: Connectionists: Postdoc at CU Boulder, Computational Cognitive Neuroscience of Bidirectional Vision Message-ID: Postdoctoral Position Computational Cognitive Neuroscience Lab, University of Colorado Boulder Postdoctoral research position available in the CCN lab at the Department of Psychology & Neuroscience, University of Colorado Boulder (Randall O'Reilly, director), to work on an ONR-funded grant focusing on top-down influences on visual object recognition. This position will focus on developing bidirectional vision models based on the structure and function of the relevant visual areas, including V3, LIP, and FEF interacting with the ventral object recognition pathways in V4 and IT. This is a collaborative project with Tim Curran at CU Boulder (human EEG studies), David Sheinberg at Brown (primate electrophysiology), and Tor Wager (CU Boulder, fMRI); there is an opportunity to conduct tests of the computational models with these collaborators. Primary qualifications include a PhD in a relevant field, and extensive prior computational modeling experience, preferably in relevant visual domains. The position is available immediately, but the start date is flexible. The University of Colorado is an Equal Opportunity Employer committed to building a diverse workforce. We encourage applications from women, racial and ethnic minorities, individuals with disabilities and veterans. Alternative formats of this ad can be provided upon request for individuals with disabilities by contacting the ADA Coordinator at hr-ada at colorado.edu.
The University of Colorado Boulder conducts background checks on all final applicants being considered for employment. For further information about the lab, see http://grey.colorado.edu/CompCogNeuro/index.php/CCNLab, and our recent publications in this area: http://grey.colorado.edu/CompCogNeuro/index.php/CCNLab/publications. Feel free to contact randy.oreilly at colorado.edu for inquiries. Applicants must submit a CV, a statement of research experience/interests, and a cover letter including email addresses for at least three referees, and one recommendation letter. The other referees will be contacted for letters if needed. Applications will be accepted electronically at https://www.jobsatcu.com/postings/86996. The job posting # is RF01716 - Randy ---- Dr. Randall C. O'Reilly Professor, Department of Psychology and Neuroscience University of Colorado Boulder 345 UCB, Boulder, CO 80309-0345 303-492-0054 Fax: 303-492-2967 http://psych.colorado.edu/~oreilly From mark.humphries at manchester.ac.uk Mon Nov 17 06:30:39 2014 From: mark.humphries at manchester.ac.uk (Mark Humphries) Date: Mon, 17 Nov 2014 11:30:39 +0000 Subject: Connectionists: 4 year postdoctoral position at the University of Manchester, Faculty of Life Sciences Message-ID: <7E954275ED82B9468C2C731FB72522F5B66F1C84@MBXP09.ds.man.ac.uk> Dear connectionists (with apologies for cross-posting), We seek a skilled and enthusiastic individual to develop cutting-edge analysis and data-mining techniques for multi-neuron recording data, and apply those techniques to the analysis of extensive data-sets. The successful candidate will join a newly established team that tackles the challenging problems of how to make sense of the deluge of circuit-wide neural activity data, from cortical cell assemblies to invertebrate central pattern generators (http://www.systemsneurophysiologylab.ls.manchester.ac.uk/) The ideal candidate will hold a PhD in a related discipline (e.g.
computational neuroscience, computer science, statistics, physics, engineering), have good programming skills, and have experience with either neurophysiological data or computational modelling of neural circuits. The position is available from January 2015 for up to 48 months. For more details and to apply, please visit: https://www.jobs.manchester.ac.uk/displayjob.aspx?jobid=8837 Application deadline: 11th December 2014 For all informal inquiries: mark.humphries at manchester.ac.uk Dr Mark Humphries MRC Senior non-Clinical Research Fellow AV Hill Building Faculty of Life Sciences University of Manchester http://www.systemsneurophysiologylab.ls.manchester.ac.uk/ From tbesold at uni-osnabrueck.de Sun Nov 16 15:51:23 2014 From: tbesold at uni-osnabrueck.de (Tarek R. Besold) Date: Sun, 16 Nov 2014 21:51:23 +0100 Subject: Connectionists: CFP for IJCNN 2015 Special Session on Neural-Symbolic Networks and Cognitive Capacities Message-ID: <4015F933-B4EA-4657-97F4-82C01C2661FA@uni-osnabrueck.de> Call for Papers for the == IJCNN 2015 Special Session on Neural-Symbolic Networks and Cognitive Capacities == = WEBPAGE = https://sites.google.com/site/ijcnn2015nsncc/ = SCOPE = Researchers in artificial intelligence and cognitive systems modelling continue to face foundational challenges in their quest to develop plausible models and implementations of cognitive capacities and intelligence in artificial systems. One of the methodological core issues is the question of the integration between sub-symbolic and symbolic approaches to knowledge representation, learning and reasoning in cognitively-inspired models. Network-based approaches very often enable flexible tools which can discover and process the internal structure of (possibly large) data sets.
They promise to give rise to efficient signal-processing models that are biologically plausible and well suited to a wide range of applications, while possibly also offering an explanation of cognitive phenomena in the human brain. Still, the extraction of high-level explicit (i.e. symbolic) knowledge from distributed low-level representations has thus far remained a largely unsolved problem. In recent years, network-based models have seen significant advancement in the wake of the new "deep learning" family of approaches to machine learning. Due to the hierarchically structured nature of the underlying models, these developments have also reinvigorated efforts to overcome the neural-symbolic divide. The aim of the special session is to bring together recent work on network-based information processing in a cognition-related context: work that bridges the gap between different levels of description and paradigms, and that sheds light on canonical solutions or principled approaches arising from neural-symbolic integration when modelling or implementing cognitively-inspired capacities in artificial systems. Besides classical research applying computational modelling methods to problems from cognitive psychology, computational neuroscience, artificial intelligence, and cognitive science, this session also explicitly addresses cognitively-inspired neural-symbolic approaches in more application-driven research, such as technical cognitive systems, cognitive robotics, and large knowledge bases and big data.
= TOPICS = We particularly encourage submissions related to the following non-exhaustive list of topics: - new learning paradigms of network-based models addressing different knowledge levels - biologically plausible methods and models - integration of network models and symbolic reasoning - cognitive systems using neural-symbolic paradigms - extraction of symbolic knowledge from network-based representations - applications and implementations of cognitively-inspired neural-symbolic approaches in technical systems and industry - cognitively-inspired neural-symbolic techniques for large knowledge bases and big data - challenging applications which have the potential to become benchmark problems - visionary papers concerning the future of network approaches to cognitive modelling or the future role of neural-symbolic systems in applications = DATES & SUBMISSIONS = The deadlines for submissions, author feedback, etc. are bound to the normal IJCNN 2015 deadlines (and, thus, are also subject to the same changes and extensions). The current schedule is: - Paper submission due: January 15, 2015 - Paper review feedback: March 15, 2015 - Final papers due: April 15, 2015 For details on the submission process, formats, etc., please refer to the IJCNN 2015 Call for Papers ( http://www.ijcnn.org/call-for-papers ) and the IJCNN 2015 submission guidelines ( http://www.ijcnn.org/paper-submission ). When submitting to the special session, please make sure to select the corresponding session topic during the submission process. = SESSION CO-CHAIRS = - Tarek R. Besold, Institute of Cognitive Science, University of Osnabrück, Germany - Artur D'Avila Garcez, Department of Computer Science, City University London, UK - Kai-Uwe Kühnberger, Institute of Cognitive Science, University of Osnabrück, Germany - Terrence C. Stewart, Centre for Theoretical Neuroscience, University of Waterloo, Canada
From dyyeung at cse.ust.hk Tue Nov 18 08:10:53 2014 From: dyyeung at cse.ust.hk (Dit-Yan Yeung) Date: Tue, 18 Nov 2014 21:10:53 +0800 Subject: Connectionists: Tenure-Track Faculty Openings in Hong Kong University of Science and Technology (HKUST) Message-ID: <546B455D.5010802@cse.ust.hk> The Department of Computer Science and Engineering of the Hong Kong University of Science and Technology has at least two tenure-track faculty openings in different areas, including machine learning. More information can be found here: http://www.cse.ust.hk/admin/recruitment/faculty/ Application deadline: 15 December 2014 From kai.x.wang at nyu.edu Mon Nov 17 13:25:08 2014 From: kai.x.wang at nyu.edu (Echo Wang) Date: Mon, 17 Nov 2014 13:25:08 -0500 Subject: Connectionists: 2015 NYU-Duke Neuroeconomics Summer Institute: Call for Application Message-ID: 2015 NYU-Duke Neuroeconomics Summer Institute July 12-July 25, 2015 Location: NYU Shanghai Pudong Campus http://www.shanghai-neuroeconomics.org/summer-school-about/ Call for applications: Deadline February 14 2015 New York University, Duke University and the Shanghai Neuroeconomics Collective are delighted to present the first biennial Duke-NYU Cooperative International Summer Institute for Neuroeconomics, directed by Nathaniel Daw (NYU), Paul Glimcher (NYU), Ben Hayden (University of Rochester), Hilke Plassmann (INSEAD), Michael Platt (Duke University) and Xiao-Jing Wang (NYU and NYU-Shanghai). The goal of the Neuroeconomics Summer Institute is to bring together post-docs and advanced graduate students in neuroscience, psychology, economics and related disciplines for intensive and advanced study of the rapidly growing interdisciplinary field of Neuroeconomics.
The course will feature daily lectures, morning and afternoon, by leading international faculty in Neuroeconomics. Workshops and experimental projects will take place in the evenings. Modeled after the Cold Spring Harbor Banbury meetings and the International Behavioral Economics Summer School, the course aims to be the preeminent training venue for young neuroeconomists. The courses will be taught by a faculty of internationally prominent experts in neuroeconomics, including Xinying Cai (NYU Shanghai), Molly Crockett (Oxford), Michael Dorris (Chinese Academy of Sciences), Jeff Erlich (NYU Shanghai), Ernst Fehr (University of Zurich), Eric Johnson (Columbia), Uma Karmarkar (Harvard Business School), Liz Phelps (NYU), Drazen Prelec (MIT), Antonio Rangel (Caltech), Matthew Rushworth (Oxford), Agnieszka Tymula (University of Sydney), Naoshige Uchida (Harvard), Jeff Stevens (University of Nebraska), Daphna Shohamy (Columbia), Elke Weber (Columbia), and Shihwei Wu (National Yang-Ming University). Interested in applying? For details, please visit our website at http://www.shanghai-neuroeconomics.org/summer-school-about/. Application deadline is February 14, 2015.
From apbraga at ufmg.br Mon Nov 17 11:24:27 2014 From: apbraga at ufmg.br (Prof. Antônio de Pádua Braga) Date: Mon, 17 Nov 2014 14:24:27 -0200 Subject: Connectionists: [CFP: IJCNN 2015 Special Session on Big Data in Smart Industry] Message-ID: [Apologies for cross-posting]

IJCNN 2015 Special Session: Big Data in Smart Industry

CALL FOR PAPERS

International Joint Conference on Neural Networks (IJCNN 2015). July 12-17, 2015, Killarney, Ireland. Submissions are invited for the IJCNN 2015 Special Session on "Big Data in Smart Industry". Webpage: http://www.cpdee.ufmg.br/~apbraga/Paginas2014/SSIJCNN2015.html

ABSTRACT

Industry worldwide is experiencing the beginning of a new era of innovation and change. Sensors, actuators, supervision and control elements are increasingly endowed with autonomy, flexibility, communication capability and interoperability. The new generation of devices, capable of data collection and processing, has been gradually incorporated into several levels of the industrial production chain. The synergy of these physical and computational elements forms the basis for a profound transformation of global industry, with the prospect of a dramatic increase in productivity and reliability, and significant benefits for society. Within this new scenario, the availability of large amounts of high-dimensional data presents a dilemma for the induction of data models. On the one hand, there is an expectation that a greater ability to sample might improve performance and reliability. In reality, however, most current methods and models cannot deal with problems of such high dimension and volume.
Many current problems involve terabytes of data with hundreds of variables and dimensions, and these numbers will continue to rise if the expected growth rate of the sector is maintained. Forecasts point to exponential growth of data storage capacity in the worldwide network of devices over the coming years. In addition to the increased internal connectivity of industry, improved integration and enhanced synergy with consumer markets and inputs through networking appears to be an inevitable path. Current trends suggest that industry worldwide will strongly demand data models and processing capabilities that can handle time-varying, massive, high-dimensional data. This is where Big Data in Smart Industry problems become relevant, and it is crucial that academia and industry are prepared, from both scientific and technological points of view, to face the new challenges.

TOPICS:

We would like to encourage the submission of papers within the general scope of the Special Session (Big Data in Smart Industry) in the following topics:

- Modeling of large datasets
- Dimensionality reduction of very large datasets
- Learning from large industrial datasets
- Data analysis and visualization
- Online modeling, optimization, and autonomous control of industrial processes
- Embedded intelligence in cyber-physical systems
- Computational intelligence for smart energy management
- Data stream processing for water, transportation, agriculture, and sustainability
- Internet of things and smart resources management
- Data-driven optimization and control of dynamical systems
- Cyber-physical system units (CPSU) with embedded autonomy
- Data acquisition and storage in distributed industrial environments

ORGANIZERS:

- Antônio Pádua Braga, Federal University of Minas Gerais, Brazil, apbraga at ufmg.br, http://www.ppgee.ufmg.br/~apbraga
- Fernando Gomide, University of Campinas, Brazil, gomide at dca.fee.unicamp.br, http://www.dca.fee.unicamp.br/~gomide

SUBMISSION & IMPORTANT DATES
Special session papers will undergo the same review process as regular papers and follow the same schedule as the conference. Paper submission should be done directly through the IJCNN submission page at http://ijcnn.org. Be sure to set the "Main research topic" to your special session; the special sessions are found at the bottom of the list.

- Paper submission deadline: January 15, 2015
- Paper decision notification: March 15, 2015
- Camera-ready submission: April 15, 2015

-- Google: Prof. Antônio de Pádua Braga From tomas.hromadka at gmail.com Wed Nov 19 03:48:25 2014 From: tomas.hromadka at gmail.com (Tomas Hromadka) Date: Wed, 19 Nov 2014 09:48:25 +0100 Subject: Connectionists: [COSYNE2015] Abstract submission closes soon; Travel grants Message-ID: <546C5959.9020607@gmail.com> ================================================= Computational and Systems Neuroscience (Cosyne) MAIN MEETING Mar 5 - Mar 8, 2015 Salt Lake City, Utah WORKSHOPS Mar 9 - Mar 10, 2015 Snowbird Ski Resort, Utah http://www.cosyne.org ================================================= IMPORTANT COSYNE NEWS (see below for details): Abstract submission deadline is fast approaching. Meeting registration is now open. Travel grant submission will open on 08 Dec 2014. ABSTRACT SUBMISSION DEADLINE: Wed 26 Nov 2014 (11:59pm PST) WORKSHOP PROPOSAL DEADLINE: Fri 21 Nov 2014 (11:59pm PST) The annual Cosyne meeting provides an inclusive forum for the exchange of empirical and theoretical approaches to problems in systems neuroscience, in order to understand how neural systems function. The MAIN MEETING is single-track. A set of invited talks is selected by the Executive Committee, and additional talks and posters are selected by the Program Committee based on submitted abstracts. The WORKSHOPS feature in-depth discussion of current topics of interest in a small-group setting.
Cosyne topics include but are not limited to: neural coding, natural scene statistics, dendritic computation, neural basis of persistent activity, nonlinear receptive field mapping, representations of time and sequence, reward systems, decision-making, synaptic plasticity, map formation and plasticity, population coding, attention, and computation with spiking networks. This year we would like to foster increased participation from experimental groups as well as computational ones. Please circulate widely and encourage your students and postdocs to apply. When preparing an abstract, authors should be aware that not all abstracts can be accepted for the meeting, due to space constraints. Abstracts will be selected based on the clarity with which they convey the substance, significance, and originality of the work to be presented. For details on submitting abstracts and workshop proposals please visit www.cosyne.org. Please note that the main Cosyne 2015 meeting will take place in a different venue (Hilton Salt Lake City Center) one week later than usual, see www.cosyne.org for details. TRAVEL GRANTS: Applications will open on 08 Dec 2014 for travel grants to attend the conference. Each awardee will receive at least $500 to help offset the costs of travel, registration, and accommodations. Larger grants may be available to those traveling from outside North America. Special consideration is given to scientists who have not previously attended the meeting, under-represented minorities, students who are attending the meeting together with a mentor, and authors of submitted Cosyne abstracts. 
For details on applying, see http://www.cosyne.org/c/index.php?title=Travel_Grants CONFIRMED SPEAKERS: Amy Bastian (Johns Hopkins), Matteo Carandini (UCL), Sophie Deneve (ENS), Florian Engert (Harvard), Marla Feller (UC Berkeley), Wulfram Gerstner (EPFL), Shawn Lockery (U Oregon), Liam Paninski (Columbia), Nicole Rust (U Penn), Tatyana Sharpee (Salk), Mariano Sigman (UBA), Emo Todorov (U Washington) ORGANIZING COMMITTEE: General Chairs: Michael Long (NYU) and Stephanie Palmer (U Chicago); Program Chairs: Maria Geffen (U Penn) and Konrad Kording (Northwestern); Workshop Chairs: Robert Froemke (NYU) and Claudia Clopath (Imperial College); Publicity Chair: Xaq Pitkow (Rice) EXECUTIVE COMMITTEE: Anne Churchland (CSHL), Zachary Mainen (Champalimaud), Alexandre Pouget (U Geneva), Anthony Zador (CSHL) From georg.dorffner at meduniwien.ac.at Wed Nov 19 17:54:47 2014 From: georg.dorffner at meduniwien.ac.at (Georg Dorffner) Date: Wed, 19 Nov 2014 23:54:47 +0100 Subject: Connectionists: Post-Doc position: Data Science for Molecular Data Analysis, Medical University of Vienna Message-ID: <546D1FB7.6070906@meduniwien.ac.at> The Medical University of Vienna has an opening for a half-time postdoc position at the Center for Medical Statistics, Informatics and Intelligent Systems. The position is limited to 5 years, with the opportunity to be converted to a tenure-track position. The remaining 50% of a full-time position can be funded by acquiring research grants. A doctorate in Medical Computer Science, Biomedical Engineering or a similar field is required. Candidates should have a background in Data Science (e.g. Machine Learning and Pattern Recognition) and be familiar with the analysis of molecular data (e.g. from genomics or proteomics).
The official job announcement (in German) can be found at http://www.meduniwien.ac.at/homepage/content/organisation/dienstleistungseinrichtungen-und-stabstellen/personalabteilung/bewerbung-stellenangebote/aktive-ausschreibungen/nr45-medizinische-statistik-informatik-und-intelligente-systeme-kennzahl-1896014/ If you are interested, please apply directly or send your CV to me (georg.dorffner at meduniwien.ac.at) by November 27, 2014, and I can guide you through the application procedure. Georg From jenhicks at stanford.edu Wed Nov 19 14:33:08 2014 From: jenhicks at stanford.edu (Jennifer Hicks) Date: Wed, 19 Nov 2014 11:33:08 -0800 Subject: Connectionists: Postdoctoral Position in Data Mining of Human Activity at Stanford University Message-ID: The Mobilize Center at Stanford University (http://mobilize.stanford.edu), a newly established National Institutes of Health (NIH) Big Data to Knowledge (BD2K) Center for Excellence, has openings for several Distinguished Postdoctoral Fellows. The proliferation of devices monitoring human activity, including mobile phones and an ever-growing array of wearable sensors, is generating unprecedented quantities of data describing human movement, behaviors, and health. Modeling and gaining insight from these massive and complex datasets will require novel algorithms for large-scale data processing and machine learning. The Mobilize Center is bringing together leading data science and biomedical researchers to integrate and understand these data using innovative machine learning and data mining techniques, combined with state-of-the-art biomechanical modeling. The Center is led by Scott Delp, along with co-investigators Trevor Hastie, Jure Leskovec, Christopher Re, Stephen Boyd, Jennifer Widom, Abby King, Russ Altman, and Margot Gerritsen. We are searching for outstanding creative individuals to develop novel data mining and machine learning tools to study human mobility and health.
The ideal candidate will have strong research skills in data mining, machine learning, and biomechanics, as well as experience developing computational methods. Prior experience with wearable sensors, statistical learning, optimization, game theory, software development, medical informatics, and other data science methods is desirable. Interested applicants should: (1) Send a letter indicating their interest and experience, a CV, and copies of two representative publications via e-mail to mobilize-center at stanford.edu. (2) Complete the short online form: https://docs.google.com/forms/d/1Rrc7JurC4HfA-AiD7iJqXwdmWHvm3jzKHrVeO9BjLcU/viewform (3) Arrange for two letters of reference to be sent to mobilize-center at stanford.edu within two weeks of submitting (1) and (2). We encourage applicants to also send links to software that they have developed. The review of applications will begin immediately and continue until the positions are filled. Stanford University is an affirmative action and equal opportunity employer, committed to increasing the diversity of its workforce. It welcomes applications from women, members of minority groups, veterans, persons with disabilities, and others who would bring additional dimensions to the university's research and teaching mission. -- Jennifer Hicks, Ph.D. Director of Data Science | Mobilize Center Associate Director | NCSRR R&D Manager | OpenSim Stanford University From M.Montemurro at manchester.ac.uk Thu Nov 20 08:02:13 2014 From: M.Montemurro at manchester.ac.uk (Marcelo Montemurro) Date: Thu, 20 Nov 2014 13:02:13 +0000 Subject: Connectionists: PhD in "Characterising motor impairments in autism using computational techniques" at the University of Manchester Message-ID: sent on behalf of Dr Emma Gowen. PhD in "Characterising motor impairments in autism using computational techniques".
Applications are invited for this fully-funded PhD position (fees and salary) in the lab of Dr Emma Gowen in the Faculty of Life Sciences, University of Manchester. Deadline: 26th Nov 2014. Start date: September 2015. Duration: 4 years. Eligibility: Candidates must be nationals of the UK or another EU country. This project will apply computational and statistical methods to motion-tracking data in order to determine the nature of motor impairments in autistic individuals. Autism is a life-long developmental condition that affects how a person communicates and interacts with people. In addition to these social symptoms, autistic individuals have altered motor control, such as less accurate eye-hand coordination and abnormal gait patterns, causing considerable problems with daily living and social skills. Despite this impact, motor problems are poorly characterised. In this project, computational and statistical methods (e.g. feature extraction, machine learning) will be applied to movement data recorded from autistic and non-autistic people in order to address the following questions: Can motor ability differentiate between autistic and non-autistic groups? Can motor ability be used as a diagnostic marker for autism? Are there different subgroups of motor impairment present in the autistic population? For more details of the project please visit: http://www.ls.manchester.ac.uk/phdprogrammes/projectsavailable/project/?id=1891 **************************************************** Dr Emma Gowen Senior Lecturer Body, Eyes and Movement (BEAM) lab http://beamlab.lab.ls.manchester.ac.uk/ http://www.dremmagowen.webspace.virginmedia.com/EmmaSite/Home.html http://www.autism.manchester.ac.uk/ From gdetor at gmail.com Thu Nov 20 07:52:26 2014 From: gdetor at gmail.com (Georgios Is.
Detorakis) Date: Thu, 20 Nov 2014 13:52:26 +0100 Subject: Connectionists: Workshop on Neural Population Dynamics Message-ID: Dear all, We are pleased to announce the Workshop on Neural Population Dynamics, to be held at Supélec in Gif sur Yvette on February 4th, 2015: http://neural-pops.sciencesconf.org This workshop aims at gathering neuroscientists and control theoreticians around the dynamics of neural populations. Its topics cover: modeling; identification of parameters based on experimental data; links between models; mathematical analysis; and feedback control. The list of confirmed speakers is: Bruno Cessac, INRIA; Alain Destexhe, UNIC; David Hansel, Univ. Paris 5; Axel Hutt, INRIA; Dimitris Pinotsis, UCL; Peter Wellstead, Hamilton Institute. Registration is free but mandatory. Posters are welcome. See the website for details. The workshop is organized in the framework of the Research Initiative "Control and Neuroscience" of the iCODE institute of Paris-Saclay and the ANR project SynchNeuro. We look forward to seeing you there! The organizers, Georgios Is. Detorakis and Antoine Chaillet http://neural-pops.sciencesconf.org -- -gid. From moritz.deger at epfl.ch Thu Nov 20 05:10:38 2014 From: moritz.deger at epfl.ch (Moritz Deger) Date: Thu, 20 Nov 2014 11:10:38 +0100 Subject: Connectionists: faculty position machine learning at EPFL (Switzerland) Message-ID: <1416478238.30895.3.camel@moritz-mob> On behalf of Wulfram Gerstner: Dear colleagues, we have an open faculty position in machine learning at the Computer Science Department of EPFL in Lausanne, Switzerland. http://professeurs.epfl.ch/page-113108-en.html The position can be at the level of tenure-track assistant professor or senior tenured professor. The committee starts to look at applications on Dec. 1st, but later applications will also be considered.
The EPFL school of computer and communication sciences offers an attractive international environment with world-class research in different areas of computer science and communication sciences, and strong links to other departments on campus. Applications welcome. Wulfram Gerstner Professor EPFL http://professeurs.epfl.ch/page-113108-en.html From ralph.etiennecummings at gmail.com Fri Nov 21 10:03:48 2014 From: ralph.etiennecummings at gmail.com (Ralph Etienne-Cummings) Date: Fri, 21 Nov 2014 10:03:48 -0500 Subject: Connectionists: Telluride Neuromorphic Cognition Workshop 2015: Call for Topics Proposals Message-ID: Call for Topic Area Proposals: 2015 Neuromorphic Cognition Engineering Workshop, Telluride, Colorado, June 28 - July 18, 2015. We are now accepting proposals for Topic Areas in the 2015 Telluride Neuromorphic Cognition Engineering Workshop. We support topics and projects in neuromorphic cognition, particularly those that involve solving challenging "everyday" tasks that incorporate domain-specific knowledge, exploration, prediction, and problem solving. In particular, we are interested in projects that hold promise for addressing Grand Challenge types of problems that do not have strong solutions of any form, neuromorphic or not. These Challenge problems should feature long-duration sensorimotor problems that involve autonomous cognitive decision making. Examples might include tasks such as learning a new language, navigating through an unknown environment to locate an object or reach a desired location, adaptively manipulating unknown or complex objects in the service of a task, playing a game requiring inference of hidden information or long-term planning and learning, etc. Proposals related to hardware technologies that aim to bring these capabilities to reality are also encouraged.
Topic proposals that aim to solve a particular problem using the multidisciplinary experience of participants will be favored over topics that simply gather a large number of people working within a discipline, or using a single technology, or approach. Topic areas for this summer's Telluride Neuromorphic Cognition Engineering Workshop will be chosen from proposals submitted to the organizers. We will have 4 topic areas and a "Future hardware technologies" tutorial/projects group. Topic areas can span a large field; we are looking for leadership in planning activities and inviting good people in a field. Although past topic areas have tended to be very broad and discipline-oriented (e.g., cognition, audition, vision, robotics, neural interfacing, neuromorphic VLSI, etc.), application-oriented topic areas (e.g., sensor fusion, game-playing robot, object recognition, auditory scene analysis, human robot interaction, mobile electronic implementations of neuromorphic sensing, real time deep network implementations) are especially desirable. Topic area leaders will receive housing for themselves and their invitees, and limited travel funds. Topic area leaders will help to define the field of neuromorphic cognition engineering through the projects they pursue and the people they invite. They shape their topic by inviting speakers and project leaders (the invitees) and by initiating topic area project discussions prior to the workshop. Teams of two organizers are required. One of the organizers should be an attendee of a previous Telluride Workshop (in any capacity) who has stayed at the Workshop for at least one week. The second organizer should be a person who comes ideally from a field outside traditional neuromorphic engineering. Pre-workshop topic area choices and study assignments.
Before the workshop begins, each topic area will be required to prepare and distribute study materials that constitute: 1) an introductory presentation (e.g., pptx, video, review paper) of the fundamental knowledge associated with the topic area that everyone at the workshop should be exposed to, and 2) a few critical papers that the participants in the topic area should read before the workshop. The topic area should 3) begin a serious group discussion of the projects (e.g., via Google+, Skype, email, etc). The maximum 2-page proposals should include: 1. Title of topic area. 2. Names of the two topic leaders, their affiliations, and contact information (email addresses!). 3. A paragraph explaining the focus and goals of the topic area. 4. A list of possible specific topic area projects. 5. A list of example invitees (up to six names and institutions). Commitments from your invitees should already be in place such that these invitees can come to the workshop if your proposal is accepted. 6. Any other material that fits within the two-page limit that will help us make a smart choice. Send your topic area proposal in pdf or text format to organizers14 at neuromorphs.net with subject line containing "topic area proposal". Proposals must be received by December 20th, 2014; proposals received after the deadline may still be considered if space is available. Resources limit the workshop to 4 topic areas, each with 5 invitees. If your proposal for the topic area is not accepted, we will work with you to see if there is a natural way to include your ideas (and you) into the accepted topic areas. We hope to have significant turn-over each year in the topic areas and leaders to ensure fresh new ideas and participants. See the Institute of Neuromorphic Engineering (www.ine-web.org) for background information on the workshop and neuromorphs.net for the 2014 workshop wiki. We look forward to your topic proposals! 
Deadline: December 20th, 2014 The Workshop Directors: Cornelia Fermuller (University of Maryland), Ralph Etienne-Cummings (Johns Hopkins Univ.), Shih-Chii Liu (University of Zurich and ETH Zurich), Timmer Horiuchi (University of Maryland) Former 2007-2012 Workshop Director: Tobi Delbruck (University of Zurich and ETH Zurich) From mail at mkaiser.de Mon Nov 24 07:57:28 2014 From: mail at mkaiser.de (Marcus Kaiser) Date: Mon, 24 Nov 2014 12:57:28 +0000 Subject: Connectionists: Wellcome Trust 4-year PhD programme 'Systems Neuroscience: From Networks to Behaviour' Message-ID: Dear all, our Wellcome Trust 4-year PhD programme in systems neuroscience, aimed at applicants from the physical sciences (physics, engineering, mathematics, or computer science), is now accepting applications for studentships starting in September 2015 (see below). Research areas include Neuroinformatics, Computational Neuroscience, Neuroimaging (fMRI, DTI, EEG, ECoG in rodents, non-human primates, and humans), Brain Connectivity, Clinical Neuroscience, Behaviour and Evolution, and Brain Dynamics (simulations and time series analysis). Strong interactions between clinical, experimental, and computational researchers are a key component of this programme (see, for example, http://www.cando.ac.uk/ for a current research project). On a separate note, we also offer a one-year master programme in Neuroinformatics (http://www.ncl.ac.uk/computing/study/postgrad/taught/5199/ ) which is now accepting applications. See http://neuroinformatics.ncl.ac.uk/ for more information about the research environment. Best, Marcus Wellcome Trust 4-year PhD programme 'Systems Neuroscience: From Networks to Behaviour' Programme Directors: Prof. Stuart Baker, Prof.
Tim Griffiths, and Dr Marcus Kaiser The Institute of Neuroscience at Newcastle University integrates more than 100 principal investigators across medicine, psychology, computer science, and engineering. Research spans systems, cellular, computational, and behavioural neuroscience. Laboratory facilities include auditory and visual psychophysics; rodent, monkey, and human neuroimaging (EEG, fMRI, PET); TMS; optical recording, multi-electrode neurophysiology, confocal and fluorescence imaging, optogenetics, high-throughput computing and e-science, artificial sensory-motor devices, clinical testing, and the only brain bank for molecular changes in human brain development. The Wellcome Trust's Four-year PhD Programmes are a flagship scheme aimed at supporting the most promising students to undertake in-depth postgraduate research training. The first year combines taught courses with three laboratory rotations to broaden students' knowledge of the subject area. At the end of the first year, students will make an informed choice of their three-year PhD research project. This programme is based at Newcastle University and aims to provide specialised training for physical and computational scientists (e.g. physics, chemistry, engineering, mathematics, and computer science) wishing to apply their skills to a research neuroscience career. Eligibility/Person Specification: Applicants should have, or expect to obtain, a 1st or 2:1 degree, or equivalent, in physical sciences, engineering, mathematics, or computing. Value of the award: Support includes a stipend for 4 years (£20k/yr tax-free), PhD registration fees at the UK/EU student rate, research expenses, general training funds and travel costs.
How to apply: You must apply through the University's online postgraduate application form ( http://www.ncl.ac.uk/postgraduate/funding/search/list/IN076 ) inserting the reference number IN076 and selecting 'Master of Research/Doctor of Philosophy (Medical Sciences) - Neuroscience' as the programme of study. Only mandatory fields need to be completed (no personal statement required) and a covering letter, CV and (if English is not your first language) a copy of your English language qualifications must be attached. The covering letter must state the title of the studentship, quote the reference number IN076 and state how your interests and experience relate to the programme. The deadline for receiving applications is 15 January 2015. You should also send your covering letter and CV to Beckie Hedley, Postgraduate Secretary, Institute of Neuroscience, Henry Wellcome Building, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, NE2 4HH, or by email to ion-postgrad-enq at ncl.ac.uk . For more information, see http://www.ncl.ac.uk/ion/study/wellcome/ Best, Marcus -- Marcus Kaiser, Ph.D. Associate Professor (Reader) in Neuroinformatics School of Computing Science Newcastle University Claremont Tower Newcastle upon Tyne NE1 7RU, UK Lab website: http://www.dynamic-connectome.org Neuroinformatics at Newcastle: http://neuroinformatics.ncl.ac.uk/
From pfeiffer at ini.phys.ethz.ch Mon Nov 24 08:17:55 2014 From: pfeiffer at ini.phys.ethz.ch (Michael Pfeiffer) Date: Mon, 24 Nov 2014 14:17:55 +0100 Subject: Connectionists: Professor of Systems and Circuits Neuroinformatics, ETH Zurich and University of Zurich Message-ID: <54733003.8090700@ini.phys.ethz.ch> The Department of Information Technology and Electrical Engineering (www.ee.ethz.ch ) at ETH Zurich and the Faculty of Science (www.mnf.uzh.ch ) of the University of Zurich invite applications for the above-mentioned position in the Institute of Neuroinformatics (INI, www.ini.uzh.ch) to complement and extend its vigorous research activities. The research of the candidate should be directed towards the theory and practice of computation in neural systems and behavior, with a strong interest in theory driven experimental neuroscience and information processing in neural circuits. The successful candidate will combine cutting-edge theories and experiments in neural information processing and computation to explore the causal links between neuronal circuits and behaviour in vertebrates. An ability to develop requisite neurotechnologies will be an asset. The successful candidate will contribute to a highly collaborative multi-disciplinary environment. We encourage internationally recognized candidates with strong research records to apply. We seek to fill either a tenured or tenure-track professorship position. The new professor will be expected to teach undergraduate level courses (German or English) and graduate level courses (English). The INI is a joint institute of the Faculty of Science of the University of Zurich and the Department of Information Technology and Electrical Engineering of ETH Zurich. The INI fosters research at the interface between neuroscience, computing and engineering through its research, teaching and graduate training, and specialist international workshop programs.
Its members conduct a coordinated research program by means of multidisciplinary teams composed of about 70 biologists, physicists, psychologists, engineers and computer scientists. Through many levels of experiment and theory, INI scientists explore how the circuits of the brain process information to generate intelligent behavior. They exploit new developments in silicon technology and computers as a means of developing models and hardware implementations of information processing and storage in the brain. In order to strengthen interdisciplinary research, the new professor should have a strong overlap with existing interests in the INI, and also forge links with other institutes of the University of Zurich and ETH Zurich (for example: Biology, Brain Research, Pharmacology, Computer Science, Information Technology and Electrical Engineering, Mathematics, and Physics). To apply online, please go to http://www.facultyaffairs.ethz.ch/facultypositions/prof_neuroinformatics and follow the link there. Applications should include a curriculum vitae, a list of publications and statements of future research and teaching activities. The letter of application should be addressed to the President of ETH Zurich. The closing date for applications is 15 January 2015. ETH Zurich is an equal opportunity and family-friendly employer and is further responsive to the needs of dual career couples. We specifically encourage women to apply. -- ========================================= Dr. Michael Pfeiffer Group leader, Program coordinator NSC Institute of Neuroinformatics University of Zurich and ETH Zurich Winterthurerstrasse 190 CH-8057 Zurich, Switzerland Tel. +41 44 635 30 45 Fax +41 44 635 30 53 pfeiffer (at) ini.phys.ethz.ch http://www.ini.uzh.ch/~pfeiffer/ ========================================= 
From fjaekel at uos.de Tue Nov 25 09:56:13 2014 From: fjaekel at uos.de (Frank Jäkel) Date: Tue, 25 Nov 2014 15:56:13 +0100 Subject: Connectionists: Vision Research Special Issue: Quantitative Approaches in Gestalt Perception Message-ID: <1416927373.28743.34.camel@pappel.ikw.uni-osnabrueck.de> Dear all, the deadline for the Vision Research Special Issue on Quantitative Approaches in Gestalt Perception has been extended by two months. The new deadline is now Jan 31, 2015. Regards, Frank -------- Submissions are invited for a special issue of Vision Research on Quantitative Approaches in Gestalt Perception. Gestalt Perception has been the topic of research for more than 100 years since Wertheimer's seminal publication in 1912. Recently, quantitative approaches to studying Gestalt phenomena have helped to specify and clarify some of the early Gestalt notions, generating testable quantitative predictions that were lacking in much of the original Gestalt writings. This special issue aims to bring together the many diverse quantitative approaches to the study of Gestalt Perception. Contributions are sought from visual psychophysics, computer vision, cognitive psychology, the cognitive neurosciences, computational neuroscience as well as machine learning and theory. Papers are invited on all aspects of Gestalt Perception; given the aim of this Special Issue we have a preference for quantitative approaches, but may exceptionally consider purely experimental work as well as historical treatments and reviews if they specifically provide groundwork for future formal developments. 
Examples of specific topics include (but are not limited to): - Attention and Top-Down Effects on Gestalt Perception - Configural Superiority - Environmental and Image Statistics and Gestalts - Figure-Ground Segmentation - History and Review of Gestalt Perception - Learning and Development of Gestalt Perception - Models of Gestalt Phenomena - Neuronal Basis of Gestalt Effects - Object Formation - Object Recognition and Gestalt - Perceptual Grouping - Perceptual Organization of Motion, Form or Scenes - Shape Perception - Structural Representations The new deadline for submissions is the 31st of January 2015. http://www.journals.elsevier.com/vision-research/call-for-papers/special-issue-on-quantitative-approaches-in-gestalt-percepti/ From weng at cse.msu.edu Tue Nov 25 11:42:52 2014 From: weng at cse.msu.edu (Juyang Weng) Date: Tue, 25 Nov 2014 11:42:52 -0500 Subject: Connectionists: Special Issue on Brain Mind in the International Journal of Intelligence Science (IJIS) In-Reply-To: <01bc01d007c3$2897d280$79c77780$@com> References: <01bc01d007c3$2897d280$79c77780$@com> Message-ID: <5474B18C.7070708@cse.msu.edu> From Scientific Research Publishing: I am glad to tell you that we have launched *Special Issue on Brain Mind (BM)*: The purpose of this special issue is to promote research on Brain Mind, which is one of the central topics in intelligence science. Scientists have made great strides toward understanding how the human brain works. But as they unlock the secrets of the brain's cellular machinery, they face a question that has occupied science and philosophy for centuries: What is the relationship between the brain and that complex set of experiences and behaviors we call the "mind"? This special issue solicits recent cutting-edge and promising progress in brain mind research. 
Topics of interest include, but are not limited to: * Brain area: neural networks, brain-mind architecture, multisensory integration, and neural modulation * Behaviors: actions, motor development, concept learning, abstraction, languages, decision making, reasoning and creativity, granular computing * Memories: working memory, episodic memory, semantic memory, procedural memory * Consciousness: consciousness theory, motivation, meta-cognition, inter-modal attention * Learning: deep learning, neural computing, introspective learning * Mind computation: informational mind, computational model of mind * Brain-like computing: pattern recognition, robotics, computer architecture, image analysis, computer vision, speech recognition Timetable: Submission Deadline: December 31st, 2014; Publication Date: January 2015. Ms Spring Zhou Scientific Research Publishing Online Paper Submission 
From n.lepora at sheffield.ac.uk Tue Nov 25 12:35:38 2014 From: n.lepora at sheffield.ac.uk (Nathan F Lepora) Date: Tue, 25 Nov 2014 17:35:38 +0000 Subject: Connectionists: [meetings] Living Machines IV: First Call for Papers, Satellite Events and Sponsors Message-ID: ______________________________________________________________ First Call for Papers, Satellite Events and Sponsors Living Machines IV: The 4th International Conference on Biomimetic and Biohybrid Systems 27th to 31st July 2015 http://csnetwork.eu/livingmachines To be hosted at La Pedrera, Barcelona, Spain In association with the Universitat Pompeu Fabra Accepted papers will be published in Springer Lecture Notes in Artificial Intelligence Submission deadline March 16th, 2015 ______________________________________________________________ ABOUT LIVING MACHINES 2015 The development of future real-world technologies will depend strongly on our understanding and harnessing of the principles underlying living systems and the flow of communication signals between living and artificial systems. Biomimetics is the development of novel technologies through the distillation of principles from the study of biological systems. The investigation of biomimetic systems can serve two complementary goals. First, a suitably designed and configured biomimetic artefact can be used to test theories about the natural system of interest. Second, biomimetic technologies can provide useful, elegant and efficient solutions to unsolved challenges in science and engineering. Biohybrid systems are formed by combining at least one biological component (an existing living system) and at least one artificial, newly engineered component. By passing information in one or both directions, such a system forms a new hybrid bio-artificial entity. The following are some examples: 
- Biomimetic robots and their component technologies (sensors, actuators, processors) that can intelligently interact with their environments. - Active biomimetic materials and structures that self-organize and self-repair. - Biomimetic computers: neuromimetic emulations of the physiological basis for intelligent behaviour. - Biohybrid brain-machine interfaces and neural implants. - Artificial organs and body-parts including sensory organ-chip hybrids and intelligent prostheses. - Organism-level biohybrids such as robot-animal or robot-human systems. ACTIVITIES The main conference will take the form of a three-day single-track oral and poster presentation programme, 29th to 31st July 2015, hosted at La Pedrera, Barcelona, Spain. The conference programme will include five plenary lectures from leading international researchers in biomimetic and biohybrid systems, and demonstrations of state-of-the-art living machine technologies. The full conference will be preceded by up to two days of Satellite Events hosted by the Universitat Pompeu Fabra in Barcelona. SUBMITTING TO LIVING MACHINES 2015 We invite both full papers and extended abstracts in areas related to the conference themes. All contributions will be refereed and accepted papers will appear in the Living Machines 2015 proceedings, which we expect to be published in the Springer-Verlag LNAI Series. Full papers (up to 12 pages) are invited from researchers at any stage in their career but should present significant findings and advances in biomimetic or biohybrid research; more preliminary work would be better suited to extended abstract submission (minimum 4 pages). Further details of submission formats will be circulated in an updated CfP and will be posted on the conference web-site. Full papers will be accepted for either oral presentation (single track) or poster presentation. Extended abstracts will be accepted for poster presentation only. 
Authors of the best full papers will be invited to submit extended versions of their paper for publication in a special issue of Bioinspiration and Biomimetics. Satellite events Active researchers in biomimetic and biohybrid systems are invited to propose topics for 1-day or 2-day tutorials, symposia or workshops on related themes to be held 27-28th July at Universitat Pompeu Fabra. Events can be scheduled on either the 27th or the 28th or across both days. Attendance at satellite events will attract a small fee intended to cover the costs of the meeting. There is a lot of flexibility about the content, organisation, and budgeting for these events. Please contact us if you are interested in organising a satellite event. EXPECTED DEADLINES March 16th, 2015 Paper submission deadline May 1st, 2015 Notification of acceptance May 22nd, 2015 Camera-ready copy July 27-31 2015 Conference SPONSORSHIP Living Machines 2015 is sponsored by the Convergent Science Network (CSN) for Biomimetics and Neurotechnology. CSN is an EU FP7 Future Emerging Technologies Co-ordination Activity that also organises two highly successful workshop series: the Barcelona Summer School on Brain, Technology and Cognition (http://bcbt.upf.edu/bcbt12/) and the Capo Caccia Neuromorphic Cognitive Engineering Workshop. The 2015 Living Machines conference will also be hosted and sponsored by the Universitat Pompeu Fabra. Call for Sponsors. Other organisations wishing to sponsor the conference in any way and gain the corresponding benefits by promoting themselves and their products through conference publications, the conference web-site, and conference publicity are encouraged to contact the conference organisers to discuss the terms of sponsorship and necessary arrangements. We offer a number of attractive and good-value packages to potential sponsors. 
ABOUT THE VENUE Living Machines 2015 returns to the venue of our first conference at the beautiful biomimetic building La Pedrera (Casa Mila) (https://www.lapedrera.com/en/home) designed by the renowned modernist architect Antoni Gaudi. Attendees at the conference will get a free ticket to visit this historic building in the centre of Barcelona. Workshops will be held at the Poblenou Campus of Universitat Pompeu Fabra, close to the hotel and restaurant area of the Diagonal and Ramblas de Poblenou and a short walk from Barcelona's famous beaches. Organising Committee: Paul Verschure, Universitat Pompeu Fabra (Co-chair) Tony Prescott, University of Sheffield (Co-chair) Stuart Wilson, University of Sheffield (Program Chair) Anna Mura, Universitat Pompeu Fabra (Communications, Local Organiser) Nathan Lepora, University of Bristol (Communications) Carme Buisan, Universitat Pompeu Fabra (Treasurer) -- Dr N Lepora Phone: 07859 024565 Website: www.lepora.com Honorary Research Fellow, Sheffield Centre for Robotics (SCentRo) University of Sheffield S10 2TP Lecturer in Robotics, Department of Engineering Mathematics and Bristol Robotics Laboratory University of Bristol BS8 1UB From dst at cs.cmu.edu Tue Nov 25 13:23:42 2014 From: dst at cs.cmu.edu (Dave Touretzky) Date: Tue, 25 Nov 2014 13:23:42 -0500 Subject: Connectionists: Special Issue on Brain Mind in the International Journal of Intelligence Science (IJIS) In-Reply-To: <5474B18C.7070708@cse.msu.edu> References: <01bc01d007c3$2897d280$79c77780$@com> <5474B18C.7070708@cse.msu.edu> Message-ID: <21115.1416939822@cs.cmu.edu> Juyang Weng forwarded an announcement from Scientific Research Publishing about a "special issue on Brain Mind". 
Readers should be warned that Scientific Research Publishing (SCIRP) is included in Jeffrey Beall's list of predatory open-access publishers: http://scholarlyoa.com/publishers/ Beall, an academic librarian and tenured associate professor at the University of Colorado in Denver, has become famous for his work exposing the sleazy side of open access publishing. You can read more about him here: http://en.wikipedia.org/wiki/Jeffrey_Beall A 2010 Nature News piece on Scientific Research Publishing's unethical practices can be viewed here: http://www.nature.com/news/2010/100113/full/463148a.html Caveat author. -- Dr. David S. Touretzky Research Professor Computer Science Department & Center for the Neural Basis of Cognition Carnegie Mellon University Pittsburgh, PA 15213-3891 tel. 412-268-7561 From apbraga at ufmg.br Mon Nov 24 11:05:42 2014 From: apbraga at ufmg.br (Prof. Antônio de Pádua Braga) Date: Mon, 24 Nov 2014 14:05:42 -0200 Subject: Connectionists: [CFP: IJCNN 2015 Special Session on Big Data in Smart Industry] Message-ID: [Apologies for cross-posting] IJCNN 2015 Special Session on "Big Data in Smart Industry" CALL FOR PAPERS International Joint Conference on Neural Networks (IJCNN 2015). July 12-17, 2015, Killarney, Ireland. Submissions are invited for the IJCNN 2015 Special Session on "Big Data in Smart Industry" Webpage: http://www.cpdee.ufmg.br/~apbraga/Paginas2014/SSIJCNN2015.html ABSTRACT Industry worldwide is experiencing the beginning of a new era of innovation and change. Sensors, actuators, supervision and control elements are increasingly endowed with autonomy, flexibility, communication capability and interoperability. The new generation of devices, which is capable of data collection and processing, has been gradually incorporated into several levels of the industrial production chain. 
The synergy of these physical and computational elements forms the base for a profound transformation of the global industry, with the prospect of a dramatic increase in productivity and reliability and significant benefits for society. Within this new scenario, the availability of large amounts of high-dimensional data presents a dilemma for the induction of data models. On the one hand, there is an expectation that a greater sampling capability might improve performance and reliability. On the other hand, the reality is that most current methods and models are not able to deal with problems of such high dimension and volume. Many current problems involve terabytes of data with hundreds of variables and dimensions that tend to rise continuously if the expected industrial growth rate in the sector is maintained. Forecasts point to exponential growth of the data storage capacity in the worldwide network of devices during the next years. In addition to the increased internal connectivity of the industry, improved integration and enhanced synergy with consumer markets and inputs through networking appears to be an inevitable path. Current trends suggest that worldwide industry will place high demands on data models and processing capabilities to handle time-varying, massive and high-dimensional data. This is where Big Data in Smart Industry problems become relevant, and it is crucial that academia and industry are prepared from scientific and technological points of view to face the new challenges. 
TOPICS: We would like to encourage the submission of papers within the general scope of the Special Session (Big Data in Smart Industry) in the following topics: - Modeling of large datasets - Dimensionality reduction of very large datasets - Learning from large industrial datasets - Data analysis and visualization - Online modeling, optimization, and autonomous control of industrial processes - Embedded intelligence in cyber-physical systems - Computational intelligence for smart energy management - Data stream processing for water, transportation, agriculture, and sustainability - Internet of things and smart resources management - Data-driven optimization and control of dynamical systems - Cyber-physical system units (CPSU) with embedded autonomy - Data acquisition and storage in distributed industrial environments ORGANIZERS: Antônio Pádua Braga Federal University of Minas Gerais, Brazil apbraga at ufmg.br http://www.ppgee.ufmg.br/~apbraga Fernando Gomide University of Campinas, Brazil gomide at dca.fee.unicamp.br http://www.dca.fee.unicamp.br/~gomide SUBMISSION & IMPORTANT DATES Special session papers will undergo the same review process as regular papers and follow the same schedule as the conference. Paper submission should be done directly through the IJCNN submissions page found at http://ijcnn.org. Be sure to set the "Main research topic" to your special session. The special sessions are found at the bottom of the list. Paper submission deadline: January 15, 2015 Paper decision notification: March 15, 2015 Camera-ready submission: April 15, 2015 -- Google: Prof. Antônio de Pádua Braga 
From smyth at ics.uci.edu Mon Nov 24 14:29:21 2014 From: smyth at ics.uci.edu (Padhraic Smyth) Date: Mon, 24 Nov 2014 11:29:21 -0800 Subject: Connectionists: Faculty Position at UC Irvine in AI and Machine Learning Message-ID: FACULTY POSITION IN ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING AT UC IRVINE The Department of Computer Science at the University of California, Irvine invites applications for a faculty position in the general area of artificial intelligence and machine learning. UC Irvine has international visibility in these areas with faculty such as Rina Dechter, Anima Anandkumar, Pierre Baldi, Alex Ihler, and Padhraic Smyth, as well as strong research groups in closely related areas such as computer vision, bioinformatics, and databases. The computer science department is colocated in the same school and building with the Department of Statistics and the Department of Informatics, offering a rich collaborative research environment. AI and machine learning have high visibility on campus through a number of different centers and institutes, including the Center for Machine Learning and Intelligent Systems (http://cml.ics.uci.edu), the newly formed UCI Data Science Initiative (http://datascience.uci.edu), and the Institute for Genomics and Bioinformatics (http://www.igb.uci.edu), providing many excellent opportunities for collaboration with the Schools of Biological Sciences, Engineering, Medicine, Physical Sciences, Social Sciences, and Education. About UC Irvine: One of the youngest University of California campuses, UC Irvine is ranked first in the United States and fifth in the world among universities less than 50 years old, according to The Times Higher Education survey. Compensation is competitive, and includes priority access to purchase on-campus faculty housing at below-market prices. UC Irvine is located 4 miles from the Pacific Ocean and 45 miles south of Los Angeles. 
The area offers an excellent year-round Mediterranean climate, numerous recreational and cultural opportunities, and one of the highest-ranked public school systems in the nation. Full application details can be found at: https://recruit.ap.uci.edu/apply/JPF02666 (tenure-track assistant level) https://recruit.ap.uci.edu/apply/JPF02668 (tenured associate level) The appointment may be made at either the assistant or associate level. The University of California, Irvine is an Equal Opportunity/Affirmative Action Employer advancing inclusive excellence. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, age, protected veteran status, or other protected categories covered by the UC nondiscrimination policy. A recipient of an NSF ADVANCE award for gender equity, UCI is responsive to the needs of dual career couples, supports work-life balance through an array of family-friendly policies, and is dedicated to broadening participation in higher education. -- --------------------------- Padhraic Smyth Professor, Department of Computer Science Director, Center for Machine Learning and Intelligent Systems University of California, Irvine, CA 92697-3435 www.ics.uci.edu/~smyth tel: 949 824 2558 email: smyth at ics.uci.edu From jesus.m.cortes at gmail.com Thu Nov 27 11:14:44 2014 From: jesus.m.cortes at gmail.com (Jesus Cortes) Date: Thu, 27 Nov 2014 17:14:44 +0100 Subject: Connectionists: BCAM-Severo Ochoa Workshop: Quantitative Biomedicine in Health and Disease Message-ID: Dear scientists, We are organizing the BCAM-Severo Ochoa Workshop: Quantitative Biomedicine in Health and Disease, which will be held in Bilbao on Feb 17-18, 2015. Confirmed speakers are: Adam Barrett (Sussex Univ.) Cesar Caballero (BCBL) Mathieu Desroches (INRIA) Luca Faes (Trento Univ.) 
Andrea Fuster (TU Eindhoven) Estibaliz Garrote (Tecnalia) Albert Granados (INRIA) Pablo Lamata (King's College) Bert Kappen (Radboud Univ. Nijmegen) Daniele Marinazzo (Gent Univ.) Pierrick Coupé (CNRS) Ernesto Sanz-Arigita (UMC, Amsterdam) Jordi Soriano (Barcelona Univ.) Ruedi Stoop (ETH Zurich) Joanna Tyrcha (Stockholm Univ.) Ed Vigmond (Bordeaux 1 Univ., LiRYC) Michael Wibral (Goethe Univ.) Abstract submission: one single pdf page (free format) including authors, institutions and abstract. Deadline: Jan 15, 2015. Send to: qbio2015 at bcamath.org. Free registration. Max 40 people. Best regards, Luca Gerardo-Giorda, Sebastiano Stramaglia and Jesus M Cortes From Johan.Suykens at esat.kuleuven.be Thu Nov 27 10:54:40 2014 From: Johan.Suykens at esat.kuleuven.be (Johan Suykens) Date: Thu, 27 Nov 2014 16:54:40 +0100 Subject: Connectionists: Postdoc positions ERC Advanced Grant A-DATADRIVE-B Message-ID: <54774940.30804@esat.kuleuven.be> The research group KU Leuven ESAT-STADIUS is currently offering 2 Postdoc positions (1-year, extendable) within the framework of the ERC Advanced Grant A-DATADRIVE-B (PI: Johan Suykens) http://www.esat.kuleuven.be/stadius/ADB on Advanced Data-Driven Black-box modelling. The research positions relate to the following possible topics: -1- Prior knowledge incorporation -2- Kernels and tensors -3- Modelling structured dynamical systems -4- Sparsity -5- Optimization algorithms -6- Core models and mathematical foundations -7- Next generation software tool The research group ESAT-STADIUS http://www.esat.kuleuven.be/stadius at KU Leuven, Belgium provides an excellent research environment, being active in the broad area of mathematical engineering, including systems and control theory, neural networks and machine learning, nonlinear systems and complex networks, optimization, signal processing, bioinformatics and biomedicine. 
The research will be conducted under the supervision of Prof. Johan Suykens. Interested candidates with a solid mathematical background and a PhD degree can apply online at the website https://icts.kuleuven.be/apps/jobsite/vacatures/53177117?lang=en by including a CV and motivation letter. For further information on these positions you may contact johan.suykens at esat.kuleuven.be. From ozawasei at kobe-u.ac.jp Thu Nov 27 06:15:57 2014 From: ozawasei at kobe-u.ac.jp (OZAWA, Seiichi) Date: Thu, 27 Nov 2014 20:15:57 +0900 Subject: Connectionists: CFP: IJCNN 2015 Special Session on Autonomous Machine Learning for Cyber-Physical Systems Message-ID: <547707ED.2020007@kobe-u.ac.jp> CALL FOR PAPERS IJCNN2015 Special Session on Autonomous Machine Learning for Cyber-Physical Systems https://lipn.univ-paris13.fr/~grozavu/IML-IJCNN2015/default.html --------------------- Scope and Motivation --------------------- Recent developments in ICT and sensor devices bring us a new form of intelligent systems called Cyber-Physical Systems (CPS). In CPS, physical entities such as humans, robots, cars, factories and houses interact and communicate with other entities in both the physical and cyber worlds. The information processed in cyber-physical worlds includes video images, voice/sounds, texts (e.g. documents, tweets, e-mails), control signals, sensor data, etc., and such data are continuously generated as "big stream data". In general, such data are composed not only of explicit information on physical entities (e.g. location, translation, acceleration), but also of implicit information such as health conditions, emotions, and behaviors, which should be extracted from the original sensor data. To acquire knowledge from the latter type of implicit information, autonomous machine learning and data mining methods that can learn from high-dimensional stream data are solicited for CPS. 
The purpose of this special session is to share new ideas for developing autonomous machine learning and data mining methods for big stream data that are generated not only by connecting the cyber and physical worlds but also within either the cyber or physical world. ------- Topics ------- A wide range of autonomous machine learning/data mining methods and applications for cyber-physical systems is covered, including but not limited to the following: Theoretical approaches to machine learning/data mining methods for cyber-physical systems - Supervised/Unsupervised Learning - Online/Incremental Learning - Online Feature Selection/Extraction - Online Clustering - Active Learning - Stream Data Mining - Text Mining - Time-Series Analysis Applications of cyber-physical systems such as - Human-Robot Interactions - Smart Life Technologies (e.g. smart grids, smart city, smart home, smart car, smart agriculture) - Social Network Analysis (e.g. sentiment analysis, user profiling) - Cybersecurity - Opinion Mining - Emotion/Behavior Mining - Person Attitude Mining - Reality Mining ---------------- Important Dates ---------------- - January 15, 2015: Paper submission deadline - March 15, 2015: Notification of paper acceptance - April 15, 2015: Camera-ready deadline - July 12-17, 2015: Conference days ----------- Submission ----------- Manuscripts for special sessions should be submitted through the paper submission website of IJCNN 2015 as regular submissions. Please follow the instructions below: 1- Go to http://ieee-cis.org/conferences/ijcnn2015/upload.php to submit a paper to IJCNN 2015. 2- Be sure to set the "Main research topic" to "SS32: Autonomous Machine Learning for Cyber-Physical Systems." (The special sessions are found at the bottom of the list.) All papers submitted to special sessions will be subject to the same peer-review procedure as the regular papers. Accepted papers will be part of the regular conference proceedings. 
For more information, please contact the Special Session organizers: - Seiichi Ozawa (ozawasei at kobe-u.ac.jp), Kobe University, Japan - Nistor Grozavu (nistor at lipn.univ-paris13.fr), LIPN, Paris 13 University, Villetaneuse, France - Nicoleta Rogovschi (nicoleta.rogovschi at parisdescartes.fr), LIPADE, Paris Descartes University, Paris, France - Shogo Okada (okada at ntt.dis.titech.ac.jp), Tokyo Institute of Technology, Japan From d.goodman at imperial.ac.uk Wed Nov 26 13:30:42 2014 From: d.goodman at imperial.ac.uk (Dan Goodman) Date: Wed, 26 Nov 2014 18:30:42 +0000 Subject: Connectionists: PhD positions in computational neuroscience Message-ID: <54761C52.2040801@imperial.ac.uk> One or more PhD positions in computational neuroscience are available in the Neural Reckoning Group of Dan Goodman at the Department of Electrical and Electronic Engineering, Imperial College London. I am interested in supervising students with a strong mathematical, computational or neuroscience background. Projects could be carried out in several possible areas, including neuroinformatics (i.e. simulation and analysis of neural data) or sensory neuroscience (e.g. spiking neural models in the auditory system). For more information about the group, including some suggestions for research topics, see http://neural-reckoning.org/openings.html. In addition to working within the group, studying at Imperial College provides excellent opportunities for interacting with other theoretical and experimental researchers, both at Imperial (recently ranked 2nd in the world in the QS world university rankings) and in the many neuroscience groups in London. Applicants should initially send me a brief CV and cover letter with a description of research interests or a proposed project, and will eventually have to formally apply through the standard Imperial College mechanism (for more information, see http://www3.imperial.ac.uk/electricalengineering/courses/phd). 
Funding is available for at least one student, and there are several competitive funding schemes at Imperial which could provide funding for additional students. Also, please forward this message to any of your students or colleagues with students who you think may be interested. Dan Goodman From Vittorio.Murino at iit.it Fri Nov 28 11:11:00 2014 From: Vittorio.Murino at iit.it (Vittorio Murino) Date: Fri, 28 Nov 2014 17:11:00 +0100 Subject: Connectionists: NIPS 2014 Workshop -- Riemannian Geometry in Machine Learning, Statistics, and Computer Vision Message-ID: <54789E94.7030907@iit.it> Apologies for multiple postings ++++++++++++++++++++++++++++++++++++++++++++++++++++++ RIEMANNIAN GEOMETRY @ NIPS2014 -- Call for Workshop Participation RIEMANNIAN GEOMETRY IN MACHINE LEARNING, STATISTICS, AND COMPUTER VISION Saturday December 13, 2014 Palais des Congrès, Montreal, Canada This workshop will be held in conjunction with Neural Information Processing Systems NIPS 2014 http://www.riemanniangeometry2014.eu/ *** See the workshop program at http://riemanniangeometry2014.eu/index.php/schedule OVERVIEW Traditional machine learning and data analysis methods often assume that the input data can be represented by vectors in Euclidean space. While this assumption has worked well for many applications, researchers have increasingly realized that if the data is intrinsically non-Euclidean, ignoring this geometrical structure can lead to suboptimal results. In the existing literature, there are two common approaches for exploiting data geometry when the data is assumed to lie on a Riemannian manifold. In the first direction, often referred to as manifold learning, the data is assumed to lie on an unknown Riemannian manifold and the structure of this manifold is exploited through the training data, either labeled or unlabeled. Examples of manifold learning techniques include Manifold Regularization via the graph Laplacian, Locally Linear Embedding, and Isometric Mapping. 
In the second direction, which is gaining increasing importance and success, the Riemannian manifold representing the input data is assumed to be known explicitly. Manifolds that have been widely used for data representation include the manifold of symmetric positive definite matrices, the Grassmannian manifold of subspaces of a vector space, and the Kendall manifold of shapes. When the manifold is known, the full power of the mathematical theory of Riemannian geometry can be exploited both in the formulation of algorithms and in their theoretical analysis. Successful applications of these approaches are numerous, ranging from brain imaging and low-rank matrix completion to computer vision tasks such as object detection and tracking.

This workshop focuses on the latter direction. We aim to bring together researchers in statistics, machine learning, computer vision, and other areas to discuss and exchange current state-of-the-art results, both theoretical and computational, and to identify potential future research directions.

INVITED SPEAKERS

* Thomas Fletcher, University of Utah
* Mark Girolami/Michael Betancourt, University of Warwick
* Richard Hartley, Australian National University/NICTA
* Anuj Srivastava, Florida State University
* Bart Vandereycken, Princeton University

FORMAT

The workshop will consist of invited talks only, followed by a panel discussion with the speakers. Specifically, we plan to have three invited oral presentations in the morning and two in the afternoon, each lasting 45 minutes plus questions, about one hour in total. At the end of the talks in the afternoon, a panel and open discussion forum will be held, lasting about one hour. All invited speakers will participate in the panel discussion, which will be moderated by the workshop organizers. Two half-hour periods, one in the morning and one in the afternoon, will be allocated for breaks and informal discussions.
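To make the "known manifold" direction concrete, here is a small sketch of the affine-invariant geodesic distance on the manifold of symmetric positive definite (SPD) matrices, one of the manifolds mentioned in the overview. This is an illustrative sketch assuming NumPy and SciPy are available, not code from the workshop.

```python
# Sketch: affine-invariant Riemannian distance on the SPD manifold,
#   d(A, B) = || log( A^{-1/2} B A^{-1/2} ) ||_F
# When the manifold is known, such closed-form geometric tools can be
# used directly in algorithm design (means, kernels, classifiers, ...).
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def spd_distance(A, B):
    """Geodesic distance between SPD matrices A and B."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return np.linalg.norm(logm(M), 'fro')

A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.eye(2)
# For commuting matrices this reduces to ||log A - log B||_F,
# here sqrt(2) * log(2) ~ 0.9803
print(spd_distance(A, B))
```

Unlike the Euclidean distance ||A - B||_F, this metric is invariant under congruence transformations A -> G A G^T, which is why it is favored in applications such as covariance-based brain imaging.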
ORGANIZING COMMITTEE

Minh Ha Quang - Pattern Analysis & Computer Vision (PAVIS), Istituto Italiano di Tecnologia, Genova, Italy
Vikas Sindhwani - IBM T.J. Watson Research Center, New York
Vittorio Murino - Pattern Analysis & Computer Vision (PAVIS), Istituto Italiano di Tecnologia, Genova, and University of Verona, Italy

++++++++++++++++++++++++++++++++++++++++++++++++++++++

--
Vittorio Murino

*******************************************
Prof. Vittorio Murino, Ph.D.
PAVIS - Pattern Analysis & Computer Vision
IIT Istituto Italiano di Tecnologia
Via Morego 30, 16163 Genova, Italy
Phone: +39 010 71781 504 | Mobile: +39 329 6508554 | Fax: +39 010 71781 236
E-mail: vittorio.murino at iit.it
Secretary: Sara Curreli, sara.curreli at iit.it, Phone: +39 010 71781 917
http://www.iit.it/pavis
********************************************

From k.hobson at imperial.ac.uk Fri Nov 28 11:31:37 2014
From: k.hobson at imperial.ac.uk (Hobson, Kate)
Date: Fri, 28 Nov 2014 16:31:37 +0000
Subject: Connectionists: PhD Studentships in Neurotechnology at Imperial College London
Message-ID: <2402E86631C41242875786E5D3CD98B47318FF39@icexch-m6.ic.ac.uk>

4-year studentships available in the Imperial College EPSRC Centre for Doctoral Training in Neurotechnology for Life and Health

Ten fully funded studentships are now available for a start in October 2015. Neurotechnology is the use of insights and tools from engineering, mathematics, physics, chemistry, and biology to investigate neural function and treat dysfunction. Brain-related illnesses affect more than two billion people worldwide, and the numbers are growing. Reducing this burden is a major challenge for society. The Centre for Doctoral Training in Neurotechnology for Life and Health will train a new generation of multidisciplinary researchers at the interface of neuroscience and engineering to address this challenge.
The Centre spans the Faculties of Engineering, Natural Sciences, and Medicine at Imperial, with investigators from the Departments of Bioengineering, Mechanical Engineering, Electrical and Electronic Engineering, Computing, Chemistry, Physics, Life Sciences, and the Division of Brain Sciences. Directed by Dr Simon Schultz, Prof Bill Wisden, and Prof Paul Matthews, it intends to admit approximately 14 students per year. All research projects will involve a team of supervisors, each of whom will bring complementary expertise to the project. In addition to researchers from across Imperial College, the Centre involves twenty industry and charity partners, as well as satellite research groups at the Crick Institute and the University of Oxford.

Studentships begin with a one-year MRes in Neurotechnology, which forms an integral part of the four-year training programme. During this year, students will take three months of taught courses specially developed for the CDT, followed by laboratory rotations as part of a single research training project. After the first year, students enter the PhD phase having developed the interdisciplinary and technical skills to thrive in a cutting-edge research environment and make the most impact with their PhD.

Who should apply

Applicants should be seeking to undertake a multidisciplinary 4-year research training programme at the interface between neuroscience and engineering. Candidates should have, or expect to obtain, a first or upper-second-class degree, or non-UK equivalent, in an engineering or physical sciences discipline. Students with a biological or medical sciences background will be considered in exceptional circumstances, provided they can demonstrate substantial quantitative skills. All studentships are open to UK or EU applicants who meet the EPSRC eligibility criteria (see www.epsrc.ac.uk/skills/students/help/eligibility). A limited number of places are also available to UK or EU applicants with no residency criteria.
International (i.e. non-UK/EU) candidates may be considered for the CDT programme if they can provide their own full funding for the four years.

Funding

Studentships cover tuition fees and a tax-free stipend of approximately £16,000 per year. A generous annual allowance is provided for research consumables and for conference attendance.

How to Apply

Visit www.imperial.ac.uk/neurotechnology/cdt for more information on the CDT, as well as details of available projects and how to apply. Application deadline: 30th January 2015.

--
Kate Hobson | Administrator
Centre for Neurotechnology | Imperial College London
+44 (0)20 7594 5101 | www.imperial.ac.uk/neurotechnology

From erik at oist.jp Sat Nov 29 20:33:09 2014
From: erik at oist.jp (Erik De Schutter)
Date: Sun, 30 Nov 2014 10:33:09 +0900
Subject: Connectionists: Postdoctoral position in cerebellar network modeling in Okinawa
Message-ID:

A postdoctoral researcher position to model cerebellar function is available in the Computational Neuroscience Unit (https://groups.oist.jp/cnu) of Prof. Erik De Schutter at the Okinawa Institute of Science and Technology Graduate University. Depending on previous experience, the research emphasis can be on analyzing population activity in network models of the olivocerebellum, on further development of XML-based methods for network model description and simulation, or on a combination of both. Candidates should have experience in network modeling, preferably using the parallel NEURON environment. Prior knowledge of cerebellar anatomy and physiology is a plus but not required. The postdoc will interact with other researchers and students in the lab who are working on cerebellar modeling projects. We offer attractive financial and working conditions in an English-language graduate university that emphasizes interdisciplinary research, located on a beautiful subtropical island.
Candidates should mail a curriculum vitae, a summary of research interests and experience, and the names of three referees to Prof. Erik De Schutter at erik at oist.jp