From juergen at idsia.ch Fri May 2 10:13:50 2014
From: juergen at idsia.ch (Juergen Schmidhuber)
Date: Fri, 2 May 2014 16:13:50 +0200
Subject: Connectionists: Deep Learning Overview Draft

Dear Connectionists,

thanks a lot for numerous helpful comments! This was great fun; I learned a lot. A revised version (now with 750+ references) is here: http://arxiv.org/abs/1404.7828

PDF with better formatting (for A4 paper): http://www.idsia.ch/~juergen/DeepLearning30April2014.pdf
LATEX source: http://www.idsia.ch/~juergen/DeepLearning30April2014.tex
The complete BIBTEX file is also public: http://www.idsia.ch/~juergen/bib.bib

Don't hesitate to send additional suggestions to juergen at idsia.ch (NOT to the entire list).

Kind regards,

Juergen Schmidhuber
http://www.idsia.ch/~juergen/deeplearning.html

Revised Table of Contents

1 Introduction to Deep Learning (DL) in Neural Networks (NNs)
2 Event-Oriented Notation for Activation Spreading in Feedforward NNs (FNNs) and Recurrent NNs (RNNs)
3 Depth of Credit Assignment Paths (CAPs) and of Problems
4 Recurring Themes of Deep Learning
4.1 Dynamic Programming (DP) for DL
4.2 Unsupervised Learning (UL) Facilitating Supervised Learning (SL) and RL
4.3 Occam's Razor: Compression and Minimum Description Length (MDL)
4.4 Learning Hierarchical Representations Through Deep SL, UL, RL
4.5 Fast Graphics Processing Units (GPUs) for DL in NNs
5 Supervised NNs, Some Helped by Unsupervised NNs (With DL Timeline)
5.1 1940s and Earlier
5.2 Around 1960: More Neurobiological Inspiration for DL
5.3 1965: Deep Networks Based on the Group Method of Data Handling (GMDH)
5.4 1979: Convolution + Weight Replication + Winner-Take-All (WTA)
5.5 1960-1981 and Beyond: Development of Backpropagation (BP) for NNs
5.5.1 BP for Weight-Sharing FNNs and RNNs
5.6 1989: BP for Convolutional NNs (CNNs)
5.7 Late 1980s-2000: Improvements of NNs
5.7.1 Ideas for Dealing with Long Time Lags and Deep CAPs
5.7.2 Better BP Through Advanced Gradient Descent
5.7.3 Discovering Low-Complexity, Problem-Solving NNs
5.7.4 Potential Benefits of UL for SL
5.8 1987: UL Through Autoencoder (AE) Hierarchies
5.9 1991: Fundamental Deep Learning Problem of Gradient Descent
5.10 1991: UL-Based History Compression Through a Deep Hierarchy of RNNs
5.11 1994: Contest-Winning Not So Deep NNs
5.12 1995: Supervised Very Deep Recurrent Learner (LSTM RNN)
5.13 1999: Max-Pooling (MP)
5.14 2003: More Contest-Winning/Record-Setting, Often Not So Deep NNs
5.15 2006: Deep Belief Networks (DBNs) / Improved CNNs / GPU-CNNs
5.16 2009: First Official Competitions Won by RNNs, and with MPCNNs
5.17 2010: Plain Backprop (+ Distortions) on GPU Yields Excellent Results
5.18 2011: MPCNNs on GPU Achieve Superhuman Vision Performance
5.19 2011: Hessian-Free Optimization for RNNs
5.20 2012: First Contests Won on ImageNet & Object Detection & Segmentation
5.21 2013: More Contests and Benchmark Records
5.21.1 Currently Successful Supervised Techniques: LSTM RNNs / GPU-MPCNNs
5.22 Recent Tricks for Improving SL Deep NNs (Compare Sec. 5.7.2, 5.7.3)
5.23 Consequences for Neuroscience
5.24 DL with Spiking Neurons?
6 DL in FNNs and RNNs for Reinforcement Learning (RL)
6.1 RL Through NN World Models Yields RNNs With Deep CAPs
6.2 Deep FNNs for Traditional RL and Markov Decision Processes (MDPs)
6.3 Deep RL RNNs for Partially Observable MDPs (POMDPs)
6.4 RL Facilitated by Deep UL in FNNs and RNNs
6.5 Deep Hierarchical RL (HRL) and Subgoal Learning with FNNs and RNNs
6.6 Deep RL by Direct NN Search / Policy Gradients / Evolution
6.7 Deep RL by Indirect Policy Search / Compressed NN Search
6.8 Universal RL
7 Conclusion

On Apr 17, 2014, at 5:40 PM, Schmidhuber Juergen wrote:

> Dear connectionists,
>
> here the preliminary draft of an invited Deep Learning overview:
>
> http://www.idsia.ch/~juergen/DeepLearning17April2014.pdf
>
> Abstract.
> In recent years, deep neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.
>
> The draft mostly consists of references (about 600 entries so far). Many important citations are still missing though. As a machine learning researcher, I am obsessed with credit assignment. In case you know of references to add or correct, please send brief explanations and bibtex entries to juergen at idsia.ch (NOT to the entire list), preferably together with URL links to PDFs for verification. Please also do not hesitate to send me additional corrections / improvements / suggestions / Deep Learning success stories with feedforward and recurrent neural networks. I'll post a revised version later.
>
> Thanks a lot!
>
> Juergen Schmidhuber
> http://www.idsia.ch/~juergen/
> http://www.idsia.ch/~juergen/whatsnew.html

From ted.carnevale at yale.edu Fri May 2 11:42:05 2014
From: ted.carnevale at yale.edu (Ted Carnevale)
Date: Fri, 02 May 2014 11:42:05 -0400
Subject: Connectionists: NEURON course early registration deadline approaches!
Message-ID: <5363BCCD.4000107@yale.edu>

Early registration discounts for this year's NEURON summer courses are available for just one more week.

NEURON Fundamentals, June 21-24
This addresses all aspects of using NEURON to model individual neurons, and also introduces parallel simulation and the fundamentals of modeling networks.
Parallel Simulation with NEURON, June 25-26
This is for users who are already familiar with NEURON and need to create models that will run on parallel hardware.

Apply by Friday, May 9, and pay only
  $1050 for the Fundamentals course
  $650 for the Parallel Simulation course
  $1600 for both
Registration fees for applications received after May 9 are
  $1200 for Fundamentals
  $750 for Parallel Simulation
  $1800 for both
These fees cover handout materials plus food and housing on the campus of UC San Diego.

Registration is limited, and the registration deadline is Friday, May 30, 2014. For more information and the on-line registration form, see http://www.neuron.yale.edu/neuron/courses

--Ted

From grlmc at urv.cat Sat May 3 05:59:28 2014
From: grlmc at urv.cat (GRLMC)
Date: Sat, 3 May 2014 11:59:28 +0200
Subject: Connectionists: SLSP 2014: extended submission deadline 14 May
Message-ID: <6C68B21990634284B499DEFB784A0178@Carlos1>

*To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject line*

***** SUBMISSION DEADLINE EXTENDED: May 14 *****

2nd INTERNATIONAL CONFERENCE ON STATISTICAL LANGUAGE AND SPEECH PROCESSING

SLSP 2014

Grenoble, France
October 14-16, 2014

Organised by:
Équipe GETALP, Laboratoire d'Informatique de Grenoble
Research Group on Mathematical Linguistics (GRLMC), Rovira i Virgili University

http://grammars.grlmc.com/slsp2014/

AIMS:

SLSP is a yearly conference series aimed at promoting and displaying excellent research on the wide spectrum of statistical methods that are currently in use in computational language or speech processing. It aims at attracting contributions from both fields.
Though there exist large, well-known conferences and workshops hosting contributions to any of these areas, SLSP is a more focused meeting where synergies between subdomains and people will hopefully happen. In SLSP 2014, significant room will be reserved for young scholars at the beginning of their career, and particular focus will be put on methodology.

VENUE:

SLSP 2014 will take place in Grenoble, at the foot of the French Alps.

SCOPE:

The conference invites submissions discussing the employment of statistical methods (including machine learning) within language and speech processing. The list below is indicative and not exhaustive:

phonology, phonetics, prosody, morphology
syntax, semantics
discourse, dialogue, pragmatics
statistical models for natural language processing
supervised, unsupervised and semi-supervised machine learning methods applied to natural language, including speech
statistical methods, including biologically-inspired methods
similarity
alignment
language resources
part-of-speech tagging
parsing
semantic role labelling
natural language generation
anaphora and coreference resolution
speech recognition
speaker identification/verification
speech transcription
speech synthesis
machine translation
translation technology
text summarisation
information retrieval
text categorisation
information extraction
term extraction
spelling correction
text and web mining
opinion mining and sentiment analysis
spoken dialogue systems
author identification, plagiarism and spam filtering

STRUCTURE:

SLSP 2014 will consist of:

invited talks
peer-reviewed contributions

INVITED SPEAKERS:

Claire Gardent (LORIA, Nancy, FR), Grammar Based Sentence Generation and Statistical Error Mining

Roger K. Moore (Sheffield, UK), Spoken Language Processing: Time to Look Outside?
Martti Vainio (Helsinki, FI), Phonetics and Machine Learning: Hierarchical Modelling of Prosody in Statistical Speech Synthesis

PROGRAMME COMMITTEE:

Sophia Ananiadou (Manchester, UK)
Srinivas Bangalore (Florham Park, US)
Patrick Blackburn (Roskilde, DK)
Hervé Bourlard (Martigny, CH)
Bill Byrne (Cambridge, UK)
Nick Campbell (Dublin, IE)
David Chiang (Marina del Rey, US)
Kenneth W. Church (Yorktown Heights, US)
Walter Daelemans (Antwerpen, BE)
Thierry Dutoit (Mons, BE)
Alexander Gelbukh (Mexico City, MX)
James Glass (Cambridge, US)
Ralph Grishman (New York, US)
Sanda Harabagiu (Dallas, US)
Xiaodong He (Redmond, US)
Hynek Hermansky (Baltimore, US)
Hitoshi Isahara (Toyohashi, JP)
Lori Lamel (Orsay, FR)
Gary Geunbae Lee (Pohang, KR)
Haizhou Li (Singapore, SG)
Daniel Marcu (Los Angeles, US)
Carlos Martín-Vide (Tarragona, ES, chair)
Manuel Montes-y-Gómez (Puebla, MX)
Satoshi Nakamura (Nara, JP)
Shrikanth S. Narayanan (Los Angeles, US)
Vincent Ng (Dallas, US)
Joakim Nivre (Uppsala, SE)
Elmar Nöth (Erlangen, DE)
Maurizio Omologo (Trento, IT)
Mari Ostendorf (Seattle, US)
Barbara H. Partee (Amherst, US)
Gerald Penn (Toronto, CA)
Massimo Poesio (Colchester, UK)
James Pustejovsky (Waltham, US)
Gaël Richard (Paris, FR)
German Rigau (San Sebastián, ES)
Paolo Rosso (Valencia, ES)
Yoshinori Sagisaka (Tokyo, JP)
Björn W. Schuller (London, UK)
Satoshi Sekine (New York, US)
Richard Sproat (New York, US)
Mark Steedman (Edinburgh, UK)
Jian Su (Singapore, SG)
Marc Swerts (Tilburg, NL)
Jun'ichi Tsujii (Beijing, CN)
Gertjan van Noord (Groningen, NL)
Renata Vieira (Porto Alegre, BR)
Dekai Wu (Hong Kong, HK)
Feiyu Xu (Berlin, DE)
Roman Yangarber (Helsinki, FI)
Geoffrey Zweig (Redmond, US)

ORGANISING COMMITTEE:

Laurent Besacier (Grenoble, co-chair)
Adrian Horia Dediu (Tarragona)
Benjamin Lecouteux (Grenoble)
Carlos Martín-Vide (Tarragona, co-chair)
Florentina Lilica Voicu (Tarragona)

SUBMISSIONS:

Authors are invited to submit non-anonymized papers in English presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (including possible appendices, references, etc.) and should be prepared according to the standard format for Springer Verlag's LNAI/LNCS series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0).

Submissions have to be uploaded to:

https://www.easychair.org/conferences/?conf=slsp2014

PUBLICATIONS:

A volume of proceedings published by Springer in the LNAI/LNCS series will be available by the time of the conference. A special issue of a major journal will be published later, containing peer-reviewed extended versions of some of the papers contributed to the conference. Submissions to it will be by invitation.

REGISTRATION:

The period for registration is open from January 16, 2014 to October 14, 2014. The registration form can be found at:

http://grammars.grlmc.com/slsp2014/Registration.php

DEADLINES:

Paper submission: May 14, 2014 (23:59h, CET) - EXTENDED -
Notification of paper acceptance or rejection: June 18, 2014
Final version of the paper for the LNAI/LNCS proceedings: June 25, 2014
Early registration: July 2, 2014
Late registration: September 30, 2014
Submission to the post-conference journal special issue: January 16, 2015

QUESTIONS AND FURTHER INFORMATION:

florentinalilica.voicu at urv.cat

POSTAL ADDRESS:

SLSP 2014
Research Group on Mathematical Linguistics (GRLMC)
Rovira i Virgili University
Av. Catalunya, 35
43002 Tarragona, Spain
Phone: +34 977 559 543
Fax: +34 977 558 386

ACKNOWLEDGEMENTS:

Laboratoire d'Informatique de Grenoble
Universitat Rovira i Virgili

From maass at igi.tugraz.at Thu May 1 01:59:03 2014
From: maass at igi.tugraz.at (Wolfgang Maass)
Date: Thu, 01 May 2014 07:59:03 +0200
Subject: Connectionists: PhD position in the Human Brain Project: Principles of Brain Computation and Learning
Message-ID: <5361E2A7.8070905@igi.tugraz.at>

We are inviting applications for a fully funded PhD position at the Graz University of Technology (Faculty of Computer Science) for research on Principles of Brain Computation and Learning in the Human Brain Project: https://www.humanbrainproject.eu/

It is planned to combine this PhD position with a part-time position as University Assistant, thereby providing the first step towards a possible university career.

Our lab in Graz currently focuses on the development of new paradigms for probabilistic inference and learning in networks of spiking neurons (either in brain networks or in neuromorphic hardware). Preceding results on which we will build can be found at http://www.igi.tugraz.at/maass/publications.html

Excellent research skills, a genuine interest in answering fundamental open questions about computation and learning in networks of spiking neurons, and the capability to work in an interdisciplinary research team are expected.
Applications are especially invited from students who have a strong background in a theoretical area related to machine learning, computational theory or probabilistic models, and who also have substantial experience in programming and computer simulations. Our doctoral program will lead to a PhD in Computer Science.

Please send your CV, information about your grades, and a letter describing your scientific interests and goals by May 21 to my assistant Regina Heidinger: regina.heidinger at igi.tugraz.at

It would be helpful if you could include names and email addresses of referees, and pdf files of your master thesis and/or other publications.

-- 
Prof. Dr. Wolfgang Maass
Institut fuer Grundlagen der Informationsverarbeitung
Technische Universitaet Graz
Inffeldgasse 16b, A-8010 Graz, Austria
Tel.: ++43/316/873-5822
Fax: ++43/316/873-5805
http://www.igi.tugraz.at/maass/Welcome.html

From fmschleif at googlemail.com Mon May 5 05:00:47 2014
From: fmschleif at googlemail.com (Frank-Michael Schleif)
Date: Mon, 5 May 2014 11:00:47 +0200
Subject: Connectionists: Call for papers: 2nd International Workshop on High Dimensional Data Mining (HDM'14) at ICDM 2014

+++ APOLOGIES FOR MULTIPLE COPIES +++

===================================================================
Call for Papers

The 2nd International Workshop on High Dimensional Data Mining (HDM'14)
http://www.cs.bham.ac.uk/~axk/HDM14.htm
http://hdataskforce.wordpress.com/

In conjunction with the IEEE International Conference on Data Mining (IEEE ICDM 2014)
http://icdm2014.sfu.ca/home.html
===================================================================

Description of Workshop

Stanford statistician David Donoho predicted that the 21st century will be the century of data.
"We can say with complete confidence that in the coming century, high-dimensional data analysis will be a very significant activity, and completely new methods of high-dimensional data analysis will be developed; we just don't know what they are yet." -- D. Donoho, 2000.

Beyond any doubt, unprecedented technological advances lead to increasingly high dimensional data sets in all areas of science, engineering and businesses. These include genomics and proteomics, biomedical imaging, signal processing, astrophysics, finance, web and market basket analysis, among many others. The number of features in such data is often of the order of thousands or millions - much larger than the available sample size. A number of issues make classical data analysis methods inadequate, questionable, or inefficient at best when faced with high dimensional data spaces:

1. High dimensional geometry defeats our intuition rooted in low dimensional experiences, and this makes data presentation and visualisation particularly challenging.

2. Phenomena that occur in high dimensional probability spaces, such as the concentration of measure, are counter-intuitive for the data mining practitioner. For instance, distance concentration is the phenomenon that the contrast between pair-wise distances may vanish as the dimensionality increases. This makes the notion of nearest neighbour meaningless, together with a number of methods that rely on a notion of distance.

3. Bogus correlations and misleading estimates may result when trying to fit complex models for which the effective dimensionality is too large compared to the number of data points available.

4. The accumulation of noise may confound our ability to find low dimensional intrinsic structure hidden in the high dimensional data.

5. The computational cost of processing high dimensional data or carrying out optimisation over high dimensional parameter spaces is often prohibitive.
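The distance-concentration phenomenon in point 2 is easy to observe empirically. The sketch below (illustrative only, not part of the call; all names and the 200-point sample size are my own choices) draws uniform random points in the unit hypercube and reports the relative contrast between the farthest and nearest neighbour of a query point; the contrast collapses as the dimensionality grows:

```python
import math
import random

def distance_contrast(dim, n_points=200, seed=0):
    """Relative contrast (d_max - d_min) / d_min between a random query
    point and a cloud of uniform random points in [0, 1]^dim."""
    rng = random.Random(seed)
    query = [rng.random() for _ in range(dim)]
    dists = [
        math.dist(query, [rng.random() for _ in range(dim)])
        for _ in range(n_points)
    ]
    return (max(dists) - min(dists)) / min(dists)

# Contrast is large in 2-D but nearly vanishes in 1000-D:
for dim in (2, 10, 100, 1000):
    print(dim, round(distance_contrast(dim), 3))
```

In low dimensions the nearest neighbour is much closer than the farthest; in high dimensions all pairwise distances cluster around the same value, which is exactly why distance-based methods degrade.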
Topics

This workshop aims to promote new advances and research directions to address the curses and uncover and exploit the blessings of high dimensionality in data mining. Topics of interest include (but are not limited to):

- Systematic studies of how the curse of dimensionality affects data mining methods
- New data mining techniques that exploit some properties of high dimensional data spaces
- Theoretical underpinning of mining data whose dimensionality is larger than the sample size
- Stability and reliability analyses for data mining in high dimensions
- Adaptive and non-adaptive dimensionality reduction for noisy high dimensional data sets
- Methods of random projections, compressed sensing, and random matrix theory applied to high dimensional data mining and high dimensional optimisation
- Models of low intrinsic dimension, such as sparse representation, manifold models, latent structure models, and studies of their noise tolerance
- Classification of high dimensional complex data sets
- Functional data mining
- Data presentation and visualisation methods for very high dimensional data sets
- Data mining applications to real problems in science, engineering or businesses where the data is high dimensional

Paper submission

High quality original submissions are solicited for oral and poster presentation at the workshop. Papers should not exceed a maximum of 8 pages, and must follow the IEEE ICDM format requirements of the main conference. All submissions will be peer-reviewed, and all accepted workshop papers will be published in the proceedings by the IEEE Computer Society Press. Submit your paper here.
Important dates

Submission deadline: August 1, 2014
Notifications to authors: September 26, 2014
Workshop date: December 14, 2014

We are looking forward to welcoming you in Shenzhen.

Best regards,
Ata Kaban
Frank-Michael Schleif
Thomas Villmann
(Workshop Organizers)

From jeanpascal.pfister at gmail.com Fri May 2 09:42:27 2014
From: jeanpascal.pfister at gmail.com (Jean-Pascal Pfister)
Date: Fri, 2 May 2014 15:42:27 +0200
Subject: Connectionists: 2 PhD positions at the Institute of Neuroinformatics, Zurich, Switzerland

Applications are invited for two PhD student positions at the Institute of Neuroinformatics (ETHZ, UZH) in Zurich. The positions are funded by the Swiss National Science Foundation grant entitled "Inference and Learning with Spiking Neurons". The PhD students will be supervised by Jean-Pascal Pfister in the Theoretical Neuroscience Group.

There are several lines of evidence showing that the brain combines unreliable sensory information in a Bayesian optimal way. The aim of the project is to determine how Bayesian inference and learning can be implemented at the level of neurons and synapses.

The Institute of Neuroinformatics in Zurich (a joint institute of the Swiss Federal Institute of Technology (ETHZ) and the University of Zurich) offers an ideal environment for studying Computational Neuroscience due to the presence of multiple theoretical and experimental labs.

Candidates should hold a Diplom/Master degree in Physics, Mathematics, Machine Learning, Computational Neuroscience or a related field. They should have a strong analytical background and demonstrate interest in theoretical neuroscience. Preference will be given to candidates with good programming skills.

The position is offered for a period of three years, starting on the 1st of September 2014 at the earliest. Salary scale is provided by the Swiss National Science Foundation (www.snf.ch).
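For readers unfamiliar with what "Bayesian optimal" combination of unreliable cues means concretely: in the simplest Gaussian case it reduces to a precision-weighted average, where each cue is weighted by its reliability (inverse variance). A minimal illustrative sketch (my own, not from the project description):

```python
def combine_gaussian_cues(mu1, var1, mu2, var2):
    """Fuse two independent Gaussian estimates of the same quantity.
    Each cue is weighted by its precision (1 / variance), so the more
    reliable cue dominates, and the fused variance shrinks below both."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var

# A reliable cue (variance 1) at 0.0 dominates an unreliable one (variance 4) at 5.0:
mu, var = combine_gaussian_cues(0.0, 1.0, 5.0, 4.0)
# mu == 1.0, var == 0.8
```

Note the fused variance (0.8) is smaller than either cue's variance: combining cues always reduces uncertainty under this model. The project asks how such computations could be carried out by actual neurons and synapses.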
Applicants should submit (by email) a CV, a statement of research interests, 2 reference letters, marks obtained for the Diploma/Master, and the abstract of the Diploma/Master thesis to Jean-Pascal Pfister (pfister at pyl.unibe.ch). Deadline for application is the 30th of May 2014, or until the positions are filled.

---------------------------------
Jean-Pascal Pfister, PhD
Theoretical Neuroscience Group
Physiology Department, University of Bern
5 Bühlplatz, CH-3012 Bern, CH
e-mail: pfister at pyl.unibe.ch
URL: http://www.physio.unibe.ch/~pfister/
tel: ++41 31 631 35 09

From the 1st of September 2014:
Institute of Neuroinformatics
University of Zurich and ETH Zurich
Winterthurerstrasse 190
8057 Zurich, Switzerland

From fjaekel at uos.de Mon May 5 09:47:27 2014
From: fjaekel at uos.de (Frank Jäkel)
Date: Mon, 05 May 2014 15:47:27 +0200
Subject: Connectionists: Vision Research Special Issue on Quantitative Approaches in Gestalt Perception
Message-ID: <1399297647.29036.24.camel@birke.ikw.uni-osnabrueck.de>

Vision Research Special Issue on Quantitative Approaches in Gestalt Perception

Call for papers: Quantitative Approaches in Gestalt Perception

Submissions are invited for a special issue of Vision Research on Quantitative Approaches in Gestalt Perception. Gestalt Perception has been the topic of research for more than 100 years since Wertheimer's seminal publication in 1912. Recently, quantitative approaches to studying Gestalt phenomena have helped to specify and clarify some of the early Gestalt notions, generating testable quantitative predictions lacking from much of the original Gestalt writings. This special issue aims to bring together the many diverse quantitative approaches to the study of Gestalt Perception.
Contributions are sought from visual psychophysics, computer vision, cognitive psychology, the cognitive neurosciences, computational neuroscience, as well as machine learning and theory. Papers are invited on all aspects of Gestalt Perception; given the aim of this Special Issue we have a preference for quantitative approaches, but may exceptionally consider purely experimental work as well as historical treatments and reviews if they specifically provide groundwork for future formal developments.

Examples of specific topics include (but are not limited to):

- Attention and Top-Down Effects on Gestalt Perception
- Configural Superiority
- Environmental and Image Statistics and Gestalts
- Figure-Ground Segmentation
- History and Review of Gestalt Perception
- Learning and Development of Gestalt Perception
- Models of Gestalt Phenomena
- Neuronal Basis of Gestalt Effects
- Object Formation
- Object Recognition and Gestalt
- Perceptual Grouping
- Perceptual Organization of Motion, Form or Scenes
- Shape Perception
- Structural Representations

The call for this Special Issue will appear on the Vision Research WWW-site by the end of May 2014. The deadline for submissions is the 30th of September 2014, and the publication of the Special Issue is planned for the end of May 2015. The Special Issue will be edited by Michael Herzog, Frank Jäkel, Manish Singh and Felix Wichmann. Should you have any questions, feel free to contact the managing guest editor Frank Jäkel (fjaekel at uos.de) or one of the other guest editors.
From terry at salk.edu Mon May 5 22:15:16 2014
From: terry at salk.edu (Terry Sejnowski)
Date: Mon, 05 May 2014 19:15:16 -0700
Subject: Connectionists: NEURAL COMPUTATION - June, 2014

Neural Computation - Contents -- Volume 26, Number 6 - June 1, 2014

Available online for download now: http://www.mitpressjournals.org/toc/neco/26/6

-----

Article

A Causal Perspective on the Analysis of Signal and Noise Correlations and Their Role in Population Coding
Daniel Chicharro

Letters

Towards Unified Hybrid Simulation Techniques for Spiking Neural Networks
Michiel D'Haene, Michiel Hermans, and Benjamin Schrauwen

Neural Decoding With Kernel-based Metric Learning
Austin J. Brockmeier, John S. Choi, Evan G. Kriminger, Joseph T. Francis, and Jose C. Principe

Adaptive Multi Class Classification for Brain Computer Interfaces
Alberto Llera, Vicenc Gomez, and Hilbert J. Kappen

On Non-negative Matrix Factorization Algorithms for Signal-dependent Noise With Application to Electromyography Data
Karthik Devarajan, Vincent C.K. Cheung

Direct Learning of Sparse Changes in Markov Networks by Density Ratio Estimation
Song Liu, John A. Quinn, Michael U. Gutmann, Taiji Suzuki, and Masashi Sugiyama

Short Term Memory Capacity in Networks via the Restricted Isometry Property
Adam Charles, Han Lun Yap, and Chris Rozell

ON-LINE -- http://www.mitpressjournals.org/neuralcomp

SUBSCRIPTIONS - 2014 - VOLUME 26 - 12 ISSUES

                    USA      Others   Electronic Only
Student/Retired     $70      $193     $65
Individual          $124     $187     $115
Institution         $1,035   $1,098   $926

Canada: Add 5% GST

MIT Press Journals, 238 Main Street, Suite 500, Cambridge, MA 02142-9902
Tel: (617) 253-2889
FAX: (617) 577-1545
journals-orders at mit.edu

------------

From karthikmaheshv at gmail.com Tue May 6 08:26:59 2014
From: karthikmaheshv at gmail.com (Karthik Mahesh Varadarajan)
Date: Tue, 6 May 2014 14:26:59 +0200
Subject: Connectionists: [meetings] CfP: First Workshop on Affordances: Affordances in "Vision for Cognitive Robotics" (in conjunction with RSS 2014)

======================================================================
Call for Papers - First Workshop on Affordances: Affordances in Vision for Cognitive Robotics
(in conjunction with RSS 2014), July 13, 2014, Berkeley, USA
http://affordances.info/workshops/RSS.html
======================================================================

Based on the Gibsonian principle of defining objects by their function, "affordances" have been studied extensively by psychologists and visual perception researchers, resulting in the creation of numerous cognitive models. These models are being increasingly revisited and adapted by computer vision and robotics researchers to build cognitive models of visual perception and behavioral algorithms in recent years.
This workshop attempts to explore this nascent, yet rapidly emerging field of affordance based cognitive robotics while integrating the efforts and language of affordance communities not just in computer vision and robotics, but also psychophysics and neurobiology, by creating an open affordance research forum, feature framework and ontology called AfNet (theaffordances.net). In particular, the workshop will focus on emerging trends in affordances and other human-centered function/action features that can be used to build computer vision and robotic applications. The workshop also features contributions from researchers involved in traditional theories of affordances, especially from the point of view of psychophysics and neurobiology. Avenues to aiding research in these fields using techniques from computer vision and cognitive robotics will also be explored.

Primary topics addressed by the workshop include the following, among others:

- Affordances in visual perception models
- Affordances as visual primitives, common coding features and symbolic cognitive systems
- Affordances for object recognition, search, attention modulation, functional scene understanding/classification
- Object functionality analysis
- Affordances from appearance and touch based cues
- Haptic adjectives
- Functional-visual categories for transfer learning
- Actions and functions in object perception
- Human-object interactions and modeling
- Motion-capture data analysis for object categorization
- Affordances in human and robot grasping
- Robot behavior for affordance learning
- Execution of affordances on robots
- Affordances to address cognitive and domestic robot applications
- Affordance ontologies
- Psychophysics of affordances
- Neurobiological and cognitive models for affordances

The workshop also seeks to address key challenges in robotics with regard to functional form descriptions.
While affordances describe the function that each object or entity affords, these in turn define the manipulation schema and interaction modes that robots need to use to work with objects. These functional features, ascertained through vision, haptics and other sensory information, also help in categorizing objects, task planning, grasp planning, scene understanding and a number of other robotic tasks. Understanding the various challenges in the field and building a common language and framework for communication across the varied communities involved are the key goals of the proposed workshop. Through the course of the workshop, we also envisage the establishment of a working group for AfNet. An initial version is available online at www.theaffordances.net. We hope the workshop will serve to foster greater collaboration between the affordance communities in various fields.

Paper Submissions

Paper contributions to the workshop are solicited in four different formats:

- Conceptual papers (1 page): Authors are invited to submit original ideas on approaches to address specific problems in the targeted areas of the workshop. While a clear presentation of the proposed approach and the expected results are essential, specifics of implementation and evaluations are outside the scope of this format. This format is intended for the exchange and evaluation of ideas prior to implementation/experimental work, as well as to open up collaboration avenues.

- Design papers (3 pages): Authors submitting design papers are required to address key issues regarding the problem considered, with detailed algorithms and preliminary or proof-of-concept results. Detailed evaluations and analyses are outside the scope of this format. This format is intended for late-breaking and work-in-progress results, as well as fostering collaboration between research and engineering groups.
- Experimental papers (3 pages): Experimental papers are required to present results of experiments and evaluation of previously published algorithms or design frameworks. Details of implementation and exhaustive test case analyses are key to this format. These papers are geared at benchmarking and standardizing previously known approaches.

- Full papers (5 pages): Full papers must be self-inclusive contributions with a detailed treatment of the problem statement, related work, design methodology, algorithm, test-bed, evaluation, comparative analysis, results and future scope of work. Submission of original and unpublished work is highly encouraged. Since the goal of this workshop is to bring together the various affordance communities, extended versions/summary reports of recent research published elsewhere, as adapted to the goals of the workshop, will also be accepted. These papers are required to clearly state the relevance to the workshop and the necessary adaptation.

The program will be composed of oral as well as Pecha-Kucha style presentations. Each contribution will be reviewed by three reviewers through a single-blind review process. The paper formatting should follow the RSS formatting guidelines (Templates: Word and LaTeX). All contributions are to be submitted via the Microsoft Conference Management Tool in PDF format. Appendices and supplementary text can be submitted as a second PDF, and other materials such as videos (though preferably as links on Vimeo or YouTube) as a zipped file. All papers are expected to be self-inclusive, and supplementary materials are not guaranteed to be reviewed.

Please adhere to the following strict deadlines. In addition to direct acceptance, early submissions may be conditionally accepted, in which case submission of a revised version of the paper based on reviewer comments, prior to the late submission deadline, is necessary.
The final decision on acceptance of such conditionally accepted papers will be announced along with the decisions for the late submissions.

*Important Dates*
- Initial submissions (late): 23:59:59 PDT, May 27, 2014
- Notification of acceptance (early submissions): May 15, 2014
- Notification of acceptance (late submissions): June 5, 2014
- Submission of publication-ready version: June 10, 2014
- Workshop date: July 13, 2014

*Organizers*
Karthik Mahesh Varadarajan (varadarajan(at)acin.tuwien.ac.at), TU Wien
Markus Vincze (vincze(at)tuwien.ac.at), TU Wien
Trevor Darrell (trevor(at)eecs.berkeley.edu), UC Berkeley
Juergen Gall (gall(at)iai.uni-bonn.de), Univ. Bonn

*Speakers and Participants* (to be updated)
Abhinav Gupta (Affordances in Computer Vision), Carnegie Mellon University
Ashutosh Saxena (Affordances in Cognitive Robotics), Cornell University
Lisa Oakes (Psychophysics of Affordances)*, UC Davis
TBA (Neurobiology of Affordances)*

*Program Committee*
Irving Biederman (USC), Aude Oliva (MIT), Fei-Fei Li (Stanford University), Martha Teghtsoonian (Smith College), Derek Hoiem (UIUC), Barbara Caputo (Univ. of Rome, IDIAP), Song-Chun Zhu (UCLA), Antonis Argyros (FORTH), Tamim Asfour (KIT), Michael Beetz (TUM), Norbert Krueger (Univ. of Southern Denmark), Sven Dickinson (Univ. of Toronto), Diane Pecher (Erasmus Univ. Rotterdam), Aaron Bobick (Georgia Tech), Jason Corso (UB New York), Juan Carlos Niebles (Universidad del Norte), Tamara Berg (UNC Chapel Hill), Moritz Tenorth (Univ. Bremen), Dejan Pangercic (Robert Bosch), Roozbeh Mottaghi (Stanford), Alireza Fathi (Stanford), Xiaofeng Ren (Amazon), David Fouhey (CMU), Tucker Hermans (Georgia Tech), Tian Lan (Stanford), Amir Roshan Zamir (UCF), Hamed Pirsiavash (MIT), Walterio Mayol-Cuevas (Univ. of Bristol)
From ckloo.um at gmail.com Thu May 1 02:57:40 2014 From: ckloo.um at gmail.com (CK Loo) Date: Thu, 1 May 2014 14:57:40 +0800 Subject: Connectionists: ICONIP 2014: call for paper Message-ID: ================================================================================ We apologize if you receive multiple copies of this message. Please disseminate this CFP to your colleagues & contacts. ================================================================================

*The 21st INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING (ICONIP 2014)*
*November 3 - 6, 2014, Kuching, MALAYSIA*
Official conference site: *http://www.iconip2014.org*
Pre-Conference Workshop site: *http://www.csmining.org/cdmc2014/*

*ORGANIZED BY:* University of Malaya FSKTM
*TECHNICAL SUPPORTER:* Asia Pacific Neural Network Assembly (APNNA)
*SUPPORTED BY:* International Neural Network Society, Japanese Neural Network Society, IEEE Computational Intelligence Society, Ministry of Tourism and Culture, Malaysia, Malaysia Convention & Exhibition Bureau, Sarawak Convention Bureau, Universiti Malaysia Sarawak (UNIMAS)

*IMPORTANT DATES*
- Submission deadline for Special Session/Workshop proposals: 4 April 2014 (extended: 11 April 2014)
- Notification of acceptance (Special Session/Workshop proposals): 18 April 2014 (extended: 25 April 2014)
- Submission deadline for regular and Special Session papers: 2 May 2014 (extended: 30 May 2014)
- Notification of acceptance: 3 August 2014
- Camera-ready final paper submission: 10 August 2014
- Early registration: 10 August 2014
- Conference: 3 - 6 November 2014
- Pre-Conference Workshop (Kuala Lumpur): 31 October 2014

*Welcome* to the land of the hornbills! *Sarawak* is located on the island of Borneo, the third largest island in the world, just north of the Equator. With its beautiful blue skies and tropical breezes, you can bask on the sandy, palm-fringed beaches of Sarawak. With a pleasant tropical rainforest climate, *Kuching*, the capital of the state of Sarawak, is Borneo's most stylish and sophisticated city.
Its many museums and impressive planetarium will enthral even the tiniest travellers. Visitors can also grab a snack from the many stalls on the bustling waterfront, or visit the absolutely massive Kubah Ria market, which is an electric experience for shoppers, vendors, and observers. Within Kuching itself, the bustling streets (some very modern, others with a colonial vibe) amply reward visitors with a penchant for aimless ambling. Click here for the video link: (http://youtu.be/6YZFHF8q4cQ) *Download Apps* on your gadgets and start navigating and exploring what, where, when and how you can get around the country and enjoy the endless wonders, from food, festivals and events to shopping and much more. [http://www.vmy2014.com/plan-your-trip/travel-kit/phone-and-tablet-apps]

*The 21st International Conference on Neural Information Processing (ICONIP 2014)*, an annual conference of the Asia Pacific Neural Network Assembly (APNNA), will be held in the beautiful city of *Kuching, Sarawak, Malaysia from 3 - 6 November 2014*. ICONIP 2014 will provide a high-level international forum for scientists, engineers, educators, and students to address new challenges, share solutions, and discuss future research directions in neural information processing and real-world applications. The scope of the conference includes, *but is not limited to*:

* *Learning Theory, Algorithms, and Architectures*: Stability and convergence analysis, statistical learning algorithms, neural network models, supervised learning, unsupervised learning, reinforcement learning, kernel methods, graphical models, dimensionality reduction and manifold learning, statistical and information-theoretic methods, generalization, regularization and model selection, Gaussian processes and mixture models, matrix/tensor analysis, statistical physics of learning, and evolutionary algorithms.

* *Computational Neuroscience and Brain Imaging*: Theoretical and experimental studies of processing and transmission of information in biological neurons and networks, spiking neurons, visual and auditory cortex, neural encoding and decoding, plasticity and adaptation, brain imaging, neuroimaging, brain mapping, and brain segmentation.

* *Cognitive Science and Artificial Intelligence*: Theoretical, computational, or experimental studies of perception, psychophysics, learning and memory, inference and reasoning, problem solving, natural language processing, and neuropsychology.

* *Vision, Speech, and Signal Processing*: Biological and machine vision, image processing and coding, visual perception and modelling, visual selective attention, visual coding and representation, object detection and recognition, motion detection and tracking, natural scene analysis, image processing, auditory perception and modelling, source separation, speech recognition and speech synthesis, speaker identification, and audio & speech retrieval.

* *Control, Robotics, and Hardware Technologies*: Decision and control, exploration, planning, navigation, Markov decision processes, game playing, multi-agent coordination, neuro-fuzzy systems, cognitive robotics, developmental robotics, analog and digital VLSI, neuromorphic engineering, computational sensors and actuators, microrobotics, bioMEMS, neural prostheses, photonics, and molecular & quantum computing.

* *Novel Approaches and Applications*: Innovative applications that use machine learning, including systems for time series prediction, bioinformatics, systems biology, text/web analysis, multimedia processing, adaptive intelligent systems, brain-computer interfaces, granular computing, hybrid intelligent systems, neuroinformatics and neuroengineering, information retrieval, data mining, and knowledge discovery.

*KEYNOTE/PLENARY/INVITED SPEAKERS*
Keynote Speaker: Prof. Dr. Shun-ichi Amari
Plenary Speakers: Prof. Dr.
Jürgen Schmidhuber, Prof. Dr. Jacek Zurada
Invited Speakers: Prof. Dr. Tan Kay Chen, Prof. Dr. Liu Derong, Prof. Dr. Zhi-Hua Zhou, Prof. Dr. Jun Wang, Prof. Dr. Akira Hirose, Prof. Dr. Soo-Young Lee, Prof. Dr. Nikola Kasabov

*SUBMISSION GUIDELINES*
Authors are invited to submit *full-length papers* (*8 pages maximum*) by the submission deadline through the *online submission system*. The submission of a paper implies that the paper is original, is not under review or copyright protection elsewhere, and will be presented by an author if accepted. All submitted papers will be refereed by experts in the field based on the criteria of originality, significance, quality, and clarity. The authors of accepted papers will have an opportunity to revise their papers and take into consideration the referees' comments and suggestions. *The proceedings of ICONIP 2014* will be published by *Springer in its series of Lecture Notes in Computer Science*. Authors of *selected papers will be invited to extend their work* and to submit their manuscripts for review and possible publication in any of the following journals/book volume:
1. *Neurocomputing*
2. *Neural Computing and Applications* (Springer)
3. *Neural Processing Letters*
4. *International Journal of Knowledge Engineering and Soft Data Paradigms (IJKESDP)*
5. *International Journal of Advanced Intelligence Paradigms*
6. *International Journal of Computational Intelligence Studies*
7. *International Journal on Intelligent Decision Technologies (IDT)*
8. *Numerical Algebra, Control, and Optimization*
9. *Advances in Neural Network Optimisation and Simulations* (book volume in "Advances in Mechanics and Mathematics" by Springer)

*All enquiries should be directed to:* *ICONIP 2014 Secretariat*, Faculty of Computer Science and Information Technology, University of Malaya, 50603 Lembah Pantai, Kuala Lumpur, MALAYSIA. Email: enquiries at iconip2014.org
================================================================================ -- Best regards, Yunli Lee From itialife at gmail.com Mon May 5 03:59:41 2014 From: itialife at gmail.com (Keyan Ghazi-Zahedi) Date: Mon, 5 May 2014 09:59:41 +0200 Subject: Connectionists: EXTENDED DEADLINE: ALIFE-14 Workshop on information theoretic incentives for artificial life Message-ID: *** WORKSHOP ON INFORMATION THEORETIC INCENTIVES FOR ARTIFICIAL LIFE *** to be held at the 14th International Conference on the Synthesis and Simulation of Living Systems (ALIFE-14) New York City, 30 July-2 August 2014 Workshop web page: http://www.mis.mpg.de/ay/workshops/alife14ws.html Conference web page: http://alife14.org Please forward this mail to other interested parties. ~~~ IMPORTANT DATES Submission deadline: Monday, 12 May 2014 Notification of acceptance: Monday, 26 May 2014 Workshop date: 30 July 2014 ~~~ ABOUT Artificial Life aims to understand the basic and generic principles of life, and demonstrate this understanding by producing life-like systems based on those principles. In recent years, with the advent of the information age, and the widespread acceptance of information technology, our view of life has changed.
Ideas such as "life is information processing" or "information holds the key to understanding life" have become more common. But what can information, or more formally Information Theory, offer to Artificial Life? One relevant area is the motivation of behaviour for artificial agents, both virtual and real. Instead of training an agent to perform a specific task, informational measures can be used to define concepts such as boredom, empowerment or the ability to predict one's own future. Intrinsic motivations derived from these concepts allow us to generate behaviour that is based on basic but generic principles, ideally from an embodied and enactive perspective. The key questions here are: "What are the important intrinsic motivations a living agent has, and what behaviour can be produced by them?" Related to an agent's behaviour is also the question of how and where the necessary computation to realise this behaviour is performed. Can information be used to quantify the morphological computation of an embodied agent, and to what degree do the computational limitations of an agent influence its behaviour? Another area of interest is the guidance of artificial evolution or adaptation. Assuming it is true that an agent wants to optimise its information processing, obtaining as much relevant information as possible at the lowest computational cost, then what behaviour would naturally follow from that? Can the development of social interaction or collective phenomena be motivated by an informational gradient? Furthermore, evolution itself can be seen as a process in which an agent population obtains information from the environment, which raises the questions of how this can be quantified and how systems would adapt to maximise this information.
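To make one of these informational measures concrete: empowerment is usually defined as the channel capacity from an agent's actions to its subsequent sensor states. The sketch below is purely illustrative (not from the workshop organisers); it assumes a deterministic 1-D gridworld, in which case the one-step channel capacity reduces to log2 of the number of distinct reachable successor states, and all names are made up for the example.

```python
from math import log2

def one_step_empowerment(state, n_states, actions=(-1, 0, +1)):
    """One-step empowerment of `state` in a deterministic 1-D gridworld.

    For deterministic dynamics, the channel capacity max_p(a) I(A; S')
    equals log2 of the number of distinct successor states reachable
    in one step (the agent's position is clipped at the walls)."""
    successors = {min(max(state + a, 0), n_states - 1) for a in actions}
    return log2(len(successors))

# An agent in the interior can reach 3 distinct cells; an agent at a
# wall only 2, so the interior state is more "empowered".
interior = one_step_empowerment(5, 11)   # log2(3) bits
at_wall = one_step_empowerment(0, 11)    # log2(2) = 1 bit
```

Maximising such a quantity over states yields a task-independent preference for positions with many options, which is the flavour of intrinsic motivation the paragraph above describes.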
The common theme in these different scenarios is the identification and quantification of the driving forces behind evolution, learning, behaviour and other crucial processes of life, in the hope that implementing or optimising these measures will allow us to construct life-like systems. ~~~ WORKSHOP FORMAT The workshop is scheduled for half a day (this has changed due to space limitations). The workshop will consist of a keynote (by Chris Adami), presentations and discussions. To participate in the workshop by giving a presentation, we ask you to submit an extended abstract. We are interested both in the existing approaches in the field and in possible future avenues of investigation. Student presentations are an option for younger researchers, new to the field, who would like to outline and discuss their research direction or early results. The general idea is to offer a forum for those interested in applying information theory and similar methods to the field of artificial life, with a focused discussion on both the possibilities and the technical challenges involved. After the workshop, there will be a special issue of the journal "Entropy" on the topic of the workshop, and we encourage participants to submit an extended version of their work. Further details will be announced as soon as possible. ~~~ SUBMISSION AND PARTICIPATION How to submit: If you want to participate in the workshop by giving a talk, please send us an email (itialife at gmail.com) by 12 May 2014 with your name, contact details, and a 1-2 page extended abstract. We are interested in previous work related to the subject as well as current work, including preliminary results. Specifically for students, we also offer the option to submit for a shorter student talk, to present some early results and discuss their approach to the field. In this case, please submit a 1-2 page extended abstract and indicate that you are interested in a student presentation.
If there are any questions, or if you just want to indicate interest in submitting or attending, please feel free to mail us at itialife at gmail.com. ~~~ CONTACTS Web: http://www.mis.mpg.de/ay/workshops/alife14ws.html Email: itialife at gmail.com ~~~ Organisers: Christoph Salge, Keyan Zahedi, Georg Martius and Daniel Polani With best wishes, Keyan Ghazi-Zahedi, on behalf of the organisers -- Keyan Ghazi-Zahedi, Dr. rer. nat. Information Theory of Cognitive Systems Max Planck Institute for Mathematics in the Sciences phone: (+49) 341 9959 545, fax: (+49) 341 9959 555, office A11 From Vittorio.Murino at iit.it Wed May 7 03:20:33 2014 From: Vittorio.Murino at iit.it (Vittorio Murino) Date: Wed, 7 May 2014 09:20:33 +0200 Subject: Connectionists: 5th PAVIS School on Computer Vision, Pattern Recognition, and Image Processing Message-ID: <5369DEC1.8030802@iit.it> Apologies for multiple postings ==================================================================== Call for Participation 5th PAVIS School on Computer Vision, Pattern Recognition, and Image Processing September 16-18, 2014 - Sestri Levante (GE), Italy SCENE UNDERSTANDING AND OBJECT RECOGNITION IN CONTEXT ------------------------------------------------------------------ Invited speakers * Antonio TORRALBA, MIT (USA) * Agata LAPEDRIZA, Universitat Oberta de Catalunya (Spain) & MIT (USA) -------------------------------------- REGISTRATION DEADLINE JULY 1, 2014 <<<<< (see instructions below) -------------------------------------- The goal of this school is to introduce recent advances in scene recognition, multiclass object detection and object recognition in context.
The class will cover global features for scene recognition (gist, deep features, etc.), databases for scene understanding (crowdsourcing, image annotation, etc.), methods for multiclass object detection (a short summary of object detection approaches, with emphasis on multiclass techniques) and current approaches for object recognition in context and scene understanding. The theoretical sessions will be complemented with guided experiments in MATLAB. The detailed program of the school will be published later on the school website. ********************************************************************* * * REGISTRATION DEADLINE: July 1, 2014 * * Interested applicants are invited to send an expression of interest * to pavisschool2014 at iit.it asking for participation. * PhD candidates should attach a curriculum vitae and a letter * from their supervisor in support of the request. * * Accepted candidates will receive an email containing the * instructions for the actual registration and payment. * ********************************************************************* Notice that, due to the limited number of places, applications are subject to acceptance; for this reason, early registrations or expressions of interest are encouraged. Attendees are expected to bring a laptop with a working version of MATLAB, since practical experiments will be performed during the school using open-source libraries such as VLFeat. --------------------------------------------------------------- Registration Fees - 150 euro for Ph.D. and undergraduate students. - 250 euro for postdocs, researchers, and other people working in a university or a research institute. - 300 euro for everybody else.
--------------------------------------------------------------- Director: Vittorio Murino Local Organizers: Matteo Bustreo, Carlos Beltran-Gonzalez, Alessio Del Bue School webpage: http://www.iit.it/en/pavis-schools/schoolpavis2014/pavis-school-2013-home.html Please check the website regularly for updated information. ---------------------------------------------------------------- This school follows a series of intensive courses targeting PhD students and researchers in the areas of Computer Vision, Image Processing, and Pattern Recognition. The course is residential, spanning 3 days, so that attendees can establish a more productive interaction with the lecturers. It is organized and sponsored by PAVIS (Pattern Analysis & Computer Vision), a department of the Istituto Italiano di Tecnologia, Genova (Italy). The course will take place in the beautiful Baia del Silenzio in Sestri Levante (http://g.co/maps/xqnyr), located between the city of Genova and the border of Tuscany. The school is endorsed by GIRPR (Gruppo Italiano Ricercatori in Pattern Recognition). ===================================================================== -- Vittorio Murino ******************************************* Prof. Vittorio Murino, Ph.D. PAVIS - Pattern Analysis & Computer Vision IIT Istituto Italiano di Tecnologia Via Morego 30 16163 Genova, Italy Phone: +39 010 71781 504 Mobile: +39 329 6508554 Fax: +39 010 71781 236 E-mail: vittorio.murino at iit.it Secretary: Sara Curreli email: sara.curreli at iit.it Phone: +39 010 71781 917 http://www.iit.it/pavis ******************************************** From kerstin at nld.ds.mpg.de Wed May 7 11:01:10 2014 From: kerstin at nld.ds.mpg.de (Kerstin Mosch) Date: Wed, 7 May 2014 17:01:10 +0200 Subject: Connectionists: 10 days left for abstract submission - Bernstein Conference 2014 - Call for Abstracts In-Reply-To: <536A231F.5060103@bcos.uni-freiburg.de> References: <536A231F.5060103@bcos.uni-freiburg.de> Message-ID: <536A4714.4060806@nld.ds.mpg.de> From wermter at informatik.uni-hamburg.de Wed May 7 12:54:34 2014 From: wermter at informatik.uni-hamburg.de (Stefan Wermter) Date: Wed, 07 May 2014 18:54:34 +0200 Subject: Connectionists: [meetings] Intl Conf. on Artificial Neural Networks (ICANN 2014) - Call for Participation Message-ID: <536A654A.3070706@informatik.uni-hamburg.de> Call for Participation. Early Registration is now open. ICANN 2014: 24th Annual Conference on Artificial Neural Networks 15 - 19 September 2014, University of Hamburg, Germany http://icann2014.org/ =================================================================== The International Conference on Artificial Neural Networks (ICANN) is the annual flagship conference of the European Neural Network Society (ENNS). In 2014 the University of Hamburg will organize the 24th ICANN Conference from 15th to 19th September 2014 in Hamburg, Germany. CHAIRS and ORGANISATION: General Chair: Stefan Wermter (Hamburg, Germany) Program co-Chairs: Alessandro E.P.
Villa (Lausanne, Switzerland, ENNS President), Wlodzislaw Duch (Torun, Poland & Singapore, ENNS Past-President), Petia Koprinkova-Hristova (Sofia, Bulgaria), Günther Palm (Ulm, Germany), Cornelius Weber (Hamburg, Germany), Timo Honkela (Helsinki, Finland) Local Organizing Committee Chairs: Sven Magg, Johannes Bauer, Jorge Chacon, Stefan Heinrich, Doreen Jirak, Katja Koesters, Erik Strahl KEYNOTE SPEAKERS: Christopher M. Bishop (Microsoft Research, Cambridge, UK), Yann LeCun (New York University, NY, USA), Kevin Gurney (University of Sheffield, Sheffield, UK), Barbara Hammer (Bielefeld University, Bielefeld, Germany), Jun Tani (KAIST, Daejeon, Republic of Korea), Paul Verschure (Universitat Pompeu Fabra, Barcelona, Spain) VENUE: Hamburg is the second-largest city in Germany, home to over 1.8 million people. Situated on the river Elbe, the port of Hamburg is the second-largest port in Europe. The University of Hamburg is the largest institution for research and education in the north of Germany. The venue of the conference is the ESA building of the University of Hamburg, situated at Edmund-Siemers-Allee near the city centre and easily reachable from Dammtor Railway Station. Hamburg Airport can be reached easily via public transport. For accommodation, we have arranged guaranteed rates at a number of hotels in Hamburg for ICANN 2014. CONFERENCE TOPICS: ICANN 2014 will feature the main tracks Brain Inspired Computing and Machine Learning research, with strong cross-disciplinary interactions and applications. All research fields dealing with Neural Networks will be present at the conference. A non-exhaustive list of topics includes: Brain Inspired Computing: Cognitive models, Computational Neuroscience, Self-organization, Reinforcement Learning, Neural Control and Planning, Hybrid Neural-Symbolic Architectures, Neural Dynamics, Recurrent Networks, Deep Learning.
Machine Learning: Neural Network Theory, Neural Network Models, Graphical Models, Bayesian Networks, Kernel Methods, Generative Models, Information Theoretic Learning, Reinforcement Learning, Relational Learning, Dynamical Models. Neural Applications for: Intelligent Robotics, Neurorobotics, Language Processing, Image Processing, Sensor Fusion, Pattern Recognition, Data Mining, Neural Agents, Brain-Computer Interaction, Neural Hardware, Evolutionary Neural Networks. CONFERENCE WEBSITE: http://www.icann2014.org *********************************************** Professor Dr. Stefan Wermter Chair of Knowledge Technology Department of Computer Science University of Hamburg Vogt Koelln Str. 30 22527 Hamburg, Germany http://www.informatik.uni-hamburg.de/~wermter/ http://www.informatik.uni-hamburg.de/WTM/ *********************************************** From matostmp at gmail.com Wed May 7 22:06:21 2014 From: matostmp at gmail.com (Thiago Matos Pinto) Date: Wed, 7 May 2014 23:06:21 -0300 Subject: Connectionists: Reminder: Abstract submission closes May 16 for IWSYP'14 (I International Workshop on Synaptic Plasticity) in Brazil Message-ID: *I International Workshop on Synaptic Plasticity - IWSYP'14* *September 8-9, 2014* *Ribeirão Preto, São Paulo, Brazil* iwsyp14.thiagomatospinto.com University of São Paulo The I International Workshop on Synaptic Plasticity - IWSYP'14 will bring together *experimentalists and theoreticians* who have contributed significantly to understanding the biological mechanisms underlying *synaptic plasticity*. The aim of this workshop is to facilitate discussions and interactions in the effort towards increasing our understanding of synaptic plasticity in the brain. The IWSYP'14 will take place on the *Ribeirão Preto* campus of the University of São Paulo in *Brazil* on September 8 and 9, 2014.
You are welcome to join this workshop, which focuses on the discussion of fresh ideas, the presentation of work in progress, and the establishment of a scientific network between young researchers. The workshop will be held in English. Participation is *free of charge*! This workshop is a satellite event of the XXXVIII Annual Meeting of the Brazilian Society for Neuroscience and Behavior (SBNeC), which will be held on September 10-13 in Búzios, Rio de Janeiro. The SBNeC meeting will also feature big names in Neuroscience, and will be a preparatory event for the 9th World Congress of the International Brain Research Organization (IBRO), which will take place in Rio de Janeiro in 2015. Organizers: *Thiago Matos Pinto* (University of São Paulo, Ribeirão Preto, Brazil), *Antonio Roque* (University of São Paulo, Ribeirão Preto, Brazil) Speakers: *Chris De Zeeuw* (Erasmus Medical Center, Rotterdam, The Netherlands & Netherlands Institute for Neuroscience, Amsterdam, The Netherlands), *Freek Hoebeek* (Erasmus Medical Center, Rotterdam, The Netherlands), *Reinoud Maex* (École Normale Supérieure, Paris, France), *Ricardo Leão* (University of São Paulo, Ribeirão Preto, Brazil), *Thiago Matos Pinto* (University of São Paulo, Ribeirão Preto, Brazil) Abstracts: *Submissions of abstracts for poster and oral presentations are due by May 16, 2014*. We welcome both experimental and theoretical contributions addressing novel findings on topics related to synaptic plasticity. Please see iwsyp14.thiagomatospinto.com/abstracts for details. Important dates: *Abstract submission deadline: May 16, 2014* Notification of acceptance: June 13, 2014 Notification of oral/poster selection: June 30, 2014 Workshop dates: September 8-9, 2014 Please visit the IWSYP'14 website for the program and further details: iwsyp14.thiagomatospinto.com You will also find us on Facebook at facebook.com/iwsyp14 We look forward to seeing you in Ribeirão Preto.
With best wishes, Thiago Matos Pinto, IWSYP'14 Organizer From olivier.faugeras at inria.fr Wed May 7 03:46:04 2014 From: olivier.faugeras at inria.fr (Olivier Faugeras) Date: Wed, 07 May 2014 09:46:04 +0200 Subject: Connectionists: Special Issue of the Journal of Mathematical Neuroscience on Stochastic Network Models in Neuroscience Message-ID: <5369E4BC.9010903@inria.fr> Dear all, The Journal of Mathematical Neuroscience is pleased to announce a special issue on *Stochastic Network Models in Neuroscience*. This is in conjunction with the workshop Stochastic Network Models of Neocortex (a Festschrift for Jack Cowan), to be held at the Banff International Research Station, July 13-18, 2014: https://www.birs.ca/events/2014/5-day-workshops/14w5138 Research and review articles are solicited and should be submitted to the Journal before September 30. More information on the call can be found on the JMN webpage: http://www.mathematical-neuroscience.com/ or directly at: http://www.mathematical-neuroscience.com/sites/10207/pdf/Cowan_JMN_v3.pdf Olivier Faugeras --------------------------------------------------------------------------------------- : Olivier Faugeras : Professor : Equipe INRIA NeuroMathComp Team : co-editor in chief, Journal of Mathematical Neuroscience : http://www-sop.inria.fr/members/Olivier.Faugeras/index.en.html : Email: olivier.faugeras at inria.fr : Tel: +334 92 38 78 31 : Sec: +334 92 38 78 30 : ---------------------------------------------------------------------------------------
From sandayci at rub.de Wed May 7 09:30:21 2014 From: sandayci at rub.de (Yulia Sandamirskaya) Date: Wed, 7 May 2014 15:30:21 +0200 Subject: Connectionists: Summer school on: NEURONAL DYNAMICS FOR COGNITIVE ROBOTICS Message-ID: <2368E64C-9E17-450A-BE5A-8E266A54F826@rub.de> Summer School on NEURONAL DYNAMICS FOR COGNITIVE ROBOTICS, held August 25-30, 2014 at the Institute for Neural Computation, Ruhr-University Bochum, Germany, coordinated by Prof. Dr. Gregor Schöner. Neuronal dynamics provide a powerful theoretical language for the design and modeling of embodied and situated cognitive systems. This school provides a hands-on and down-to-earth introduction to neuronal dynamics ideas and enables participants to become productive within this framework. The school is aimed at advanced undergraduate or graduate students, postdocs and faculty members in embodied cognition, cognitive science and robotics. A limited number of scholarships may become available through sponsoring by the EU COG III network. The school combines tutorial lectures in the mornings with hands-on projects working with robotic systems. Participants will develop their own modeling project, which will connect to their ongoing doctoral or postdoctoral research. Topics addressed include: neural dynamics, attractor dynamics and instabilities, Dynamic Field Theory, neuronal representations, artificial perception, simple forms of cognition including detection and selection decisions, memory formation, learning, and grounding relational concepts. This year's special emphasis is on learning and behavioral organization. For more information see: www.robotics-school.org To apply, please send a CV and a short cover letter with background and motivation to mathis.richter at rub.de Selection of participants will begin by June 15, 2014.
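For readers unfamiliar with the framework this school teaches, the core object of Dynamic Field Theory is a one-dimensional neural field whose activation forms a self-stabilized peak under localized input. The sketch below is an illustrative toy (not school material): a simple Euler integration of an Amari-type field with local excitation and broader lateral inhibition, where all parameter values are arbitrary choices for the example.

```python
import numpy as np

def simulate_field(n=101, steps=400, dt=1.0, tau=20.0, h=-5.0):
    """Euler integration of a 1-D Amari neural field:
    tau * du/dt = -u + h + input + sum_j w(x - x_j) * f(u_j).

    w: local excitation minus broader lateral inhibition.
    f: sigmoid output nonlinearity. h: negative resting level."""
    x = np.arange(n)
    d = np.subtract.outer(x, x)  # pairwise distances between field sites
    w = 15.0 * np.exp(-d**2 / (2 * 3.0**2)) - 5.0 * np.exp(-d**2 / (2 * 10.0**2))
    s = 8.0 * np.exp(-(x - n // 2) ** 2 / (2 * 3.0**2))  # localized input
    u = np.full(n, float(h))  # field starts at the resting level
    f = lambda u: 1.0 / (1.0 + np.exp(-u))
    for _ in range(steps):
        u = u + (dt / tau) * (-u + h + s + (w @ f(u)) / n)
    return u

u = simulate_field()
# The field relaxes to an activation peak centered on the input,
# while sites far from the input stay below threshold.
```

The "detection decision" mentioned in the topics list corresponds to the instability at which such a peak forms once the input exceeds what the resting level can suppress.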
From v.steuber at herts.ac.uk Wed May 7 06:57:45 2014 From: v.steuber at herts.ac.uk (Steuber, Volker) Date: Wed, 7 May 2014 11:57:45 +0100 Subject: Connectionists: PhD studentships in computational neuroscience and machine learning Message-ID: <18EF08266D889C41A14D1099C7102CE2BDFC6EC5F0@UH-MAILSTOR.herts.ac.uk> Applications are invited for PhD studentships in the Biocomputation Research Group at the Science and Technology Research Institute at the University of Hertfordshire. PhD projects involve the development of computer simulations of neurons and neuronal networks to study information processing in the brain, and/or the application of machine learning techniques to analyse electrophysiological data. More details about our research and a list of potential projects can be found on our homepage (http://homepages.herts.ac.uk/~comqvs/research.html). Applicants should have good computational and numerical skills and an excellent first degree in computer science, biology, maths, physics, neuroscience, or a related discipline. Previous experience in neuroscience is not required but would be an advantage. Successful candidates are eligible for a research studentship award from the University (approximately GBP 13,800 per annum bursary plus the payment of the standard UK student fees). Applicants from outside the UK or EU are eligible, but will have to pay half of the overseas fees out of their bursary. Research in Computer Science at the University of Hertfordshire has been recognized as excellent by the latest Research Assessment Exercise, with 55% of the research submitted being rated as world leading or internationally excellent. The Science and Technology Research Institute provides a very stimulating environment, offering a large number of specialized and interdisciplinary seminars as well as general training opportunities.
The University of Hertfordshire is situated in Hatfield, in the green belt just north of London. Please contact Dr Volker Steuber (v.steuber @ herts.ac.uk) for informal enquiries. Application forms can be obtained from Mrs Lorraine Nicholls, Research Student Administrator, STRI, University of Hertfordshire, College Lane, Hatfield, Herts, AL10 9AB, Tel: 01707 286083, l.nicholls @ herts.ac.uk. The short-listing process will begin on 9 June 2014. From alejandro.1138 at gmail.com Wed May 7 05:40:20 2014 From: alejandro.1138 at gmail.com (Alejandro Chinea Manrique de Lara) Date: Wed, 7 May 2014 10:40:20 +0100 Subject: Connectionists: Neocortex of cetaceans: Neuronal density and total neuron numbers statistics Message-ID: Dear all, For a long time I have been trying, unsuccessfully, to find the following information for research purposes: (a) The number of cortical neurons in killer whales (Orcinus orca) (b) The number of cortical neurons in humpback whales (Megaptera novaeangliae) (c) The cortical neuronal density and neuron number in the sperm whale (Physeter macrocephalus). I would therefore be grateful if somebody could kindly provide any of the above information, or a pointer to a person, mailing list, journal or publication where I could find it. Sincerely, Alejandro Chinea From sahidullahmd at gmail.com Fri May 9 03:54:33 2014 From: sahidullahmd at gmail.com (Md Sahidullah) Date: Fri, 9 May 2014 10:54:33 +0300 Subject: Connectionists: International Summer Term Course on Neuroscience at Indian Institute of Technology Kharagpur Message-ID: Registration for the International Summer Term course "*Methods and Techniques of Cognitive and Clinical Neuroscience*" has started. The course will be conducted at the Dept. of Electronics & ECE of the IIT Kharagpur campus, during July 02 - 12, 2014, with a Sunday break in between. The course brochure can be downloaded from here (Link).
*Learning Outcomes:* By the end of the course, a student will be able to demonstrate an understanding of: (a) the neurophysiological basis of various techniques (correlational and causal) to measure brain activity patterns; (b) state-of-the-art data analysis methods for complex brain responses (single unit, EEG/MEG and fMRI); (c) the pros and cons and appropriateness of each method and the relationships between different methods. *International Faculty:* The principal faculty, Dr. Joydeep Bhattacharya, Professor, Goldsmiths, University of London, is a well-known name in this field for his long list of contributions (Link). We at IITKGP have worked together with him on quite a few neurosignal processing problems, the research output of which was published in high-impact-factor journals (Link1, Link2, Link3). Dr. Caroline Di Bernardi Luft, a Senior Research Fellow working at Goldsmiths, University of London, will assist with her extensive knowledge of EEG, fMRI and MATLAB programming for neuro data analysis. *National Faculty:* Among the national faculty, we have Dr. Abhijit Das, MD, DM, the Director of Neurorehabilitation and a Consultant Neurologist at the Institute of Neurosciences, Kolkata, India (Link). His expertise includes cognitive neurorehabilitation, cortical plasticity, non-invasive brain stimulation (TMS/tDCS), neuroimaging, etc. We at IITKGP got acquainted with him through a collaborative research initiative (Link). My departmental colleague, Prof. Sudipta Mukhopadhyaya (Link), and yours truly will also pitch in from time to time and supplement! *No. of seats and Course Fee:* The number of participants in this course is capped at fifty. We intend to reserve ten seats for participants from industry and R & D organizations. The initial application fee is Rs. 500 only. Upon confirmation, one has to pay the following course fee.
Course Code: IST 0118. Course Name: Methods & Techniques in Cognitive and Clinical Neuroscience. Course Fee: Rs. 20,000 for students; Rs. 15,000 for participants from industries & R & D organisations; Rs. 15,000 for participants from academic institutions. *Course Fee waiver for faculty / students:* There is a grant available with us which will allow us to reimburse the full course fee, on successful completion of the course, to most of the participating faculty of recognized institutes of India. Students from Indian institutions interested in this area are also eligible for a fee waiver. We are asking participants to pay initially to avoid cases where one blocks a seat and then does not turn up, wasting the opportunity for another, or is casual in his / her approach to completing the course. There will be an exam to clear at the end of the course. IIT Kharagpur students will be able to earn two credits from this course. *How to Register:* Please follow the guidelines available at the following webpage of IIT Kharagpur: http://www.mymail.iitkgp.ernet.in/iswt/registration.php The last date of application is June 02, 2014. *Priority Rule:* Since there is a limited number of seats and the course is virtually free for most of the participants from academic institutions, the Course Coordinator reserves the right to give priority to faculty, then research and postgraduate students, and finally UG students working or intending to work in this area. We shall try to confirm as early as possible whether one gets selected. We are trying hard to arrange full reimbursement of the course fee to all participants from recognized academic institutions of India. If it is not possible to waive the fee for all, the priority rule will follow a first-come-first-served policy among selected, confirmed candidates who complete the registration process and successfully complete the course.
*Travel Booking and Confirmation Schedule:* We would like to confirm the candidature of the 1st set of applicants of this course by 30th April so that there is enough time to get reservations done for outstation candidates. Train reservation now starts 2 months ahead. Those who apply before 30th April will get confirmation on 30th April from the pool applied till then. After that, we shall release a list every 10 days till the last day. *Accommodation & Food:* While we are trying to arrange reimbursement of the course fee for most of the candidates from academic institutions, accommodation, food etc. are chargeable. Accommodation will be provided from whatever is available on campus at that time. We shall try to provide relatively inexpensive accommodation for student participants. *Other information:* It is expected to be monsoon time here in the first fortnight of July. Last year, during the course period, the temperature varied between 25 and 35 degrees centigrade. For any query about this course, please email me at gsaha.iitkgp at gmail.com or gsaha at ece.iitkgp.ernet.in From rb2568 at columbia.edu Thu May 8 16:31:36 2014 From: rb2568 at columbia.edu (Rajendra Bose) Date: Thu, 8 May 2014 20:31:36 +0000 Subject: Connectionists: Open Position: Scientific Computing Specialist at Columbia University Zuckerman Institute Message-ID: <8443446E-C58F-4165-BE4F-67B8FF74D1FE@mail.columbia.edu> Announcing an open position for "Scientific Computing Specialist"
in the Research Computing group at the new Mortimer B. Zuckerman Mind Brain Behavior Institute at Columbia. Please pass along the link below to anybody who may be interested. http://www.columbia.edu/~rb2568/zmbbi_scs.pdf Thank you-- Raj Rajendra Bose, Ph.D. 2014 Secretary, Coalition for Academic Scientific Computation http://casc.org Director, Research Computing Mortimer B. Zuckerman Mind Brain Behavior Institute Columbia University Tel: 212-851-2918 http://zuckermaninstitute.columbia.edu From ted.carnevale at yale.edu Fri May 9 14:06:13 2014 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Fri, 09 May 2014 14:06:13 -0400 Subject: Connectionists: NEURON course discounts continue! Message-ID: <536D1915.1020005@yale.edu> The early registration discounts for this year's NEURON summer courses were originally supposed to end today, but we have been able to arrange to continue the discounts for as long as seats remain open, or until the Friday May 30 registration deadline. NEURON Fundamentals June 21-24 This addresses all aspects of using NEURON to model individual neurons, and also introduces parallel simulation and the fundamentals of modeling networks. Includes a healthy serving of Python. Parallel Simulation with NEURON June 25-26 This is for users who are already familiar with NEURON and need to create models that will run on parallel hardware. Special attention will be paid to critical topics such as powerful strategies and idioms for implementing and managing models, measuring and improving performance, and debugging. $1050 for the Fundamentals course $650 for the Parallel Simulation course $1600 for both These fees cover handout materials plus food and housing on the campus of UC San Diego. 
For more information and the on-line registration form, see http://www.neuron.yale.edu/neuron/courses --Ted From p.geurts at ulg.ac.be Fri May 9 19:54:48 2014 From: p.geurts at ulg.ac.be (Pierre Geurts) Date: Sat, 10 May 2014 01:54:48 +0200 Subject: Connectionists: Call for papers, Nectar track at ECML/PKDD 2014, June 14 Message-ID: CALL FOR NECTAR TRACK CONTRIBUTIONS AT ECML/PKDD 2014 http://ecmlpkdd2014.loria.fr/submission/call-for-nectar-track-contributions/ The European Conference on "Machine Learning" and "Principles and Practice of Knowledge Discovery in Databases" (ECML-PKDD) provides an international forum for the discussion of the latest high-quality research results in all areas related to machine learning and knowledge discovery in databases and related application domains. The goal of the Nectar Track is to offer conference attendees a compact overview of recent scientific advances at the frontier of machine learning and data mining, as published in related conferences and journals. We invite senior and junior researchers interested in Machine Learning and/or Knowledge Discovery in Databases to submit summaries of their own work published in neighboring fields, e.g. artificial intelligence, data analytics, bioinformatics, games, computational linguistics, computer vision, geoinformatics, health informatics, database theory, human computer interaction, information and knowledge management, robotics, pattern recognition, statistics, social network analysis, theoretical computer science, uncertainty in AI, and more. Papers summarizing original and/or influential advances in machine learning and data mining are welcome; descriptions of new applications are welcome as well. Note that papers focusing on software implementations should rather be submitted to the demo track.
Papers must be 4 pages long and should be formatted according to the author instructions and style files that can be found at http://www.springer.de/comp/lncs/authors.html Papers should be submitted via the CMT submission system (select the Nectar track from the menu): https://cmt2.research.microsoft.com/ECMLPKDD2014/ Submissions must clearly indicate which corresponding original publication(s) are presented, and must clearly motivate the relevance of the work in the context of machine learning and data mining. Important dates: (all deadlines are at 23:59 of the mentioned date, Pacific Time) - Submission deadline: Saturday, June 14 - Notifications: Friday, June 28 - Submission of camera-ready copies: July 8 From grlmc at urv.cat Sat May 10 03:05:17 2014 From: grlmc at urv.cat (GRLMC) Date: Sat, 10 May 2014 09:05:17 +0200 Subject: Connectionists: TPNC 2014: 2nd call for papers Message-ID: *To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject line* **************************************************************************** ************** 3rd INTERNATIONAL CONFERENCE ON THE THEORY AND PRACTICE OF NATURAL COMPUTING TPNC 2014 Granada, Spain December 9-11, 2014 Organized by: Soft Computing and Intelligent Information Systems (SCI2S) University of Granada Research Group on Mathematical Linguistics (GRLMC) Rovira i Virgili University http://grammars.grlmc.com/tpnc2014/ **************************************************************************** ************** AIMS: TPNC is a conference series intending to cover the wide spectrum of computational principles, models and techniques inspired by information processing in nature. TPNC 2014 will reserve significant room for young scholars at the beginning of their career. It aims at attracting contributions to nature-inspired models of computation, synthesizing nature by means of computation, nature-inspired materials, and information processing in nature.
VENUE: TPNC 2014 will take place in Granada, in the region of Andalucía, to the south of Spain. The city is the seat of a rich Islamic historical legacy, including the Moorish citadel and palace called the Alhambra. SCOPE: Topics of either theoretical, experimental, or applied interest include, but are not limited to: * Nature-inspired models of computation: - amorphous computing - cellular automata - chaos and dynamical systems based computing - evolutionary computing - membrane computing - neural computing - optical computing - swarm intelligence * Synthesizing nature by means of computation: - artificial chemistry - artificial immune systems - artificial life * Nature-inspired materials: - computing with DNA - nanocomputing - physarum computing - quantum computing and quantum information - reaction-diffusion computing * Information processing in nature: - developmental systems - fractal geometry - gene assembly in unicellular organisms - rough/fuzzy computing in nature - synthetic biology - systems biology * Applications of natural computing to: algorithms, bioinformatics, control, cryptography, design, economics, graphics, hardware, learning, logistics, optimization, pattern recognition, programming, robotics, telecommunications, etc. A flexible "theory to/from practice" approach would be the perfect focus for the expected contributions. STRUCTURE: TPNC 2014 will consist of: - invited talks - peer-reviewed contributions INVITED SPEAKERS: Kalyanmoy Deb (East Lansing, US), Multi-Criterion Problem Solving: A Niche for Natural Computing Methods Marco Dorigo (Brussels, BE), Swarm Intelligence Francisco Herrera (Granada, ES), Bioinspired Real Parameter Optimization: Where We Are and What's Next PROGRAMME COMMITTEE: Hussein A. Abbass (Canberra, AU) Uwe Aickelin (Nottingham, UK) Thomas Bäck (Leiden, NL) Christian Blum (San Sebastián, ES) Jinde Cao (Nanjing, CN) Vladimir Cherkassky (Minneapolis, US) Sung-Bae Cho (Seoul, KR) Andries P. Engelbrecht (Pretoria, ZA) Terence C.
Fogarty (London, UK) Fernando Gomide (Campinas, BR) Inman Harvey (Brighton, UK) Francisco Herrera (Granada, ES) Tzung-Pei Hong (Kaohsiung, TW) Thomas Jansen (Aberystwyth, UK) Yaochu Jin (Guildford, UK) Okyay Kaynak (Istanbul, TR) Satoshi Kobayashi (Tokyo, JP) Soo-Young Lee (Daejeon, KR) Derong Liu (Chicago, US) Manuel Lozano (Granada, ES) Carlos Martín-Vide (Tarragona, ES, chair) Ujjwal Maulik (Kolkata, IN) Risto Miikkulainen (Austin, US) Frank Neumann (Adelaide, AU) Leandro Nunes de Castro (São Paulo, BR) Erkki Oja (Aalto, FI) Lech Polkowski (Warsaw, PL) Brian J. Ross (St. Catharines, CA) Marc Schoenauer (Orsay, FR) Biplab Kumar Sikdar (Shibpur, IN) Dipti Srinivasan (Singapore, SG) Darko Stefanovic (Albuquerque, US) Umberto Straccia (Pisa, IT) Thomas Stützle (Brussels, BE) Ponnuthurai N. Suganthan (Singapore, SG) Johan Suykens (Leuven, BE) El-Ghazali Talbi (Lille, FR) Jon Timmis (York, UK) Fernando J. Von Zuben (Campinas, BR) Michael N. Vrahatis (Patras, GR) Xin Yao (Birmingham, UK) ORGANIZING COMMITTEE: Adrian Horia Dediu (Tarragona) Carlos García-Martínez (Córdoba) Carlos Martín-Vide (Tarragona, co-chair) Manuel Lozano (Granada, co-chair) Francisco Javier Rodríguez (Granada) Florentina Lilica Voicu (Tarragona) SUBMISSIONS: Authors are invited to submit non-anonymized papers in English presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (including eventual appendices, references, etc.) and should be prepared according to the standard format for the Springer Verlag's LNCS series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0). Submissions have to be uploaded to: https://www.easychair.org/conferences/?conf=tpnc2014 PUBLICATIONS: A volume of proceedings published by Springer in the LNCS series will be available by the time of the conference. A special issue of a major journal will be later published containing peer-reviewed extended versions of some of the papers contributed to the conference.
Submissions to it will be by invitation. REGISTRATION: The period for registration is open from April 5 to December 9, 2014. The registration form can be found at: http://grammars.grlmc.com/tpnc2014/Registration.php DEADLINES: Paper submission: July 17, 2014 (23:59h, CET) Notification of paper acceptance or rejection: August 24, 2014 Final version of the paper for the LNCS proceedings: September 7, 2014 Early registration: September 7, 2014 Late registration: November 25, 2014 Submission to the post-conference journal special issue: March 11, 2015 QUESTIONS AND FURTHER INFORMATION: florentinalilica.voicu at urv.cat POSTAL ADDRESS: TPNC 2014 Research Group on Mathematical Linguistics (GRLMC) Rovira i Virgili University Av. Catalunya, 35 43002 Tarragona, Spain Phone: +34 977 559 543 Fax: +34 977 558 386 ACKNOWLEDGEMENTS: Universidad de Granada Universitat Rovira i Virgili From gunnar.blohm at gmail.com Mon May 12 11:55:42 2014 From: gunnar.blohm at gmail.com (Gunnar Blohm) Date: Mon, 12 May 2014 11:55:42 -0400 Subject: Connectionists: CAN satellite workshop Message-ID: <5370EEFE.9050004@queensu.ca> *Invitation to the CAN 2014 satellite workshop on "Neural circuits of health and disease: from computation to experiment"* We would like to invite you to join us at the CAN 2014 satellite workshop on "Neural circuits of health and disease: from computation to experiment" to be held on *Monday, May 26 between 5pm and 10pm* at the Montreal Hilton Bonaventure. This workshop is organized by the Canadian Association for Neuroinformatics and Computational Neuroscience (CNCN) and jointly sponsored by CIHR and NeuroDevNet. The *goals* of this workshop are to (1) showcase the significance of CNCN-experimental collaborations, (2) enable new breakthroughs on major health questions through break-out sessions matching CNCN expertise with experimentalists, and (3) prepare a joint vision on how CNCN will optimally benefit health research in Canada.
We hope to build new collaborations, exchange ideas and demonstrate the usefulness of CNCN. In addition, we aim at providing specific suggestions to funding agencies on how CNCN could further boost Canadian health research. We are also very honored to have *Dr. Eve Marder* (Brandeis University; former SfN president) as a keynote lecturer. Please see below for a detailed program. If you are interested in participating, please email Dr. Gunnar Blohm (gunnar.blohm at queensu.ca) with a short statement of interest and the benefit of participation to you. Although this workshop is mainly intended for PIs, we also encourage senior graduate students and postdocs to apply. Participation is by invitation only. The CNCN steering committee will select 60 participants (10 students/postdocs) out of all applications. Thanks to CIHR and NeuroDevNet, workshop participation is free and includes dinner. We hope to see you at the workshop! Sincerely Gunnar Blohm & Paul Pavlidis (co-organizers, co-directors of CNCN) _Preliminary Program_: 5pm -- 5:15pm -- Welcome by Paul Pavlidis & Gunnar Blohm Showcases of CNCN-experimental collaborations 5:15pm -- 5:45pm: Neurodevelopmental disorders (TBD) 5:45pm -- 6:15pm: Stroke (TBD) 6:15pm -- 6:45pm: Changes in the dynamics of network oscillations in hippocampus as markers of neurodegeneration in Alzheimer disease mice models (Frances Skinner, U Toronto; Sylvain Williams, McGill) 6:45pm -- 7:15pm: Genomics and neuropsychiatric disorders (Gustavo Turecki, McGill; Paul Pavlidis, UBC) 7:15pm -- 8:30pm: 6 chaired round table discussions & dinner 8:30pm -- 9:30pm: keynote lecture (Dr. Eve Marder) 9:30pm -- open end: open discussion on vision of CNCN -- ------------------------------------------------------- Dr.
Gunnar BLOHM Assistant Professor in Computational Neuroscience Centre for Neuroscience Studies, Departments of Biomedical and Molecular Sciences, Mathematics & Statistics, and Psychology, School of Computing, and Canadian Action and Perception Network (CAPnet) Queen's University 18, Stuart Street Kingston, Ontario, Canada, K7L 3N6 Tel: (613) 533-3385 Fax: (613) 533-6840 Email: Gunnar.Blohm at QueensU.ca Web: http://www.compneurosci.com/ From lila.kari at uwo.ca Mon May 12 15:06:41 2014 From: lila.kari at uwo.ca (Lila Kari) Date: Mon, 12 May 2014 15:06:41 -0400 Subject: Connectionists: UCNC 2014 PROGRAM Message-ID: UCNC 2014 PROGRAM ANNOUNCED The 13th International Conference on Unconventional Computation & Natural Computation (UCNC) University of Western Ontario, London, Ontario, Canada July 14-18, 2014 http://www.csd.uwo.ca/ucnc2014 http://www.facebook.com/UCNC2014 https://twitter.com/UCNC2014 The tentative program of UCNC 2014 is now available at http://conferences.csd.uwo.ca/ucnc2014/timetable.php PROGRAM HIGHLIGHTS INVITED PLENARY SPEAKERS Yaakov Benenson (ETH Zurich) - "Molecular Computing Meets Synthetic Biology" Charles Bennett (IBM Research) - "From Quantum Dynamics to Physical Complexity" Hod Lipson (Cornell University) - "The Robotic Scientist" Nadrian Seeman (New York University) - "DNA: Not Merely the Secret of Life" INVITED TUTORIAL SPEAKERS Anne Condon (University of British Columbia) - "Programming with Biomolecules" Ming Li (University of Waterloo) - "Approximating Semantics" Tommaso Toffoli (Boston University) - "Do We Compute to Live, or Live to Compute?"
WORKSHOP ON DNA COMPUTING BY SELF-ASSEMBLY - MAIN SPEAKERS Scott Summers (University of Wisconsin) - "Two Hands Are Better than One (in Self-Assembly)" Damien Woods (Caltech) - "Intrinsic Universality and the Computational Power of Self-Assembly" WORKSHOP ON COMPUTATIONAL NEUROSCIENCE - MAIN SPEAKERS William Cunningham (University of Toronto) - "Computation in Affective Dynamics" Randy McIntosh (Rotman Research Institute) - "Building and Interacting with the Virtual Brain" WORKSHOP ON UNCONVENTIONAL COMPUTATION IN EUROPE - MAIN SPEAKER Ricard Sole (Univ. Pompeu Fabra) - "Computation and the Major Synthetic Transitions in Artificial Evolution" OVERVIEW The International Conference on Unconventional Computation and Natural Computation has been a meeting where scientists with different backgrounds, yet sharing a common interest in novel forms of computation, human-designed computation inspired by nature, and the computational aspects of processes taking place in nature, present their latest theoretical or experimental results. Typical, but not exclusive, topics are: * Molecular (DNA) computing, Quantum computing, Optical computing, Chaos computing, Physarum computing, Hyperbolic space computation, Collision-based computing, Super-Turing Computation; * Cellular automata, Neural computation, Evolutionary computation, Swarm intelligence, Ant algorithms, Artificial immune systems, Artificial life, Membrane computing, Amorphous computing; * Computational Systems Biology, Computational neuroscience, Synthetic biology, Cellular (in vivo) computing. From ecai2014 at guarant.cz Wed May 14 02:50:03 2014 From: ecai2014 at guarant.cz (ecai2014 at guarant.cz) Date: Wed, 14 May 2014 08:50:03 +0200 Subject: Connectionists: ECAI 2014 - REGISTRATION and SOCIAL EVENTS Message-ID: <20140514065004.00472174227@gds25d.active24.cz> ECAI 2014 -
REGISTRATION and SOCIAL EVENTS Dear colleagues, We would like to invite you to the ECAI 2014 Conference in Prague. The pre-conference sessions (Workshops, Tutorials, STAIRS and RuleML) will be organized in the Faculty of Electrical Engineering, the main conference in the Clarion Congress Hotel Prague. You may participate in two social events (Welcome Cocktail and Conference Dinner). REGISTRATION For registration please use the online registration system available at: www.ecai2014.org/registration/ You may register for: - Main conference (August 20-22) - Workshops (August 18-19) - Tutorials (August 18-19) - PAIS (August 20-21) - STAIRS (August 18-19) - RuleML (August 18-20) - Angry Birds Competition (August 18-22) and also the Conference Dinner (August 21). Detailed information can be found on www.ecai2014.org SOCIAL EVENTS - Welcome Cocktail: Bethlehem Chapel, August 19 - Conference Dinner: Monastery Restaurant, August 21 For more information please click HERE. We look forward to meeting you in Prague. Conference Secretariat GUARANT International Na Pankráci 17 140 21 Prague 4 Tel: +420 284 001 444, Fax: +420 284 001 448 E-mail: ecai2014 at guarant.cz Web: www.ecai2014.org This email is not intended to be spam or to go to anyone who wishes not to receive it. If you do not wish to receive this letter and wish to remove your email address from our database, please reply to this message with "Unsubscribe" in the subject line.
From j.m.mooij at uva.nl Mon May 12 08:25:25 2014 From: j.m.mooij at uva.nl (Joris Mooij) Date: Mon, 12 May 2014 14:25:25 +0200 Subject: Connectionists: 2nd CFP: UAI 2014 Workshop Causal Learning: Inference and Prediction Message-ID: <20140512122525.GL8264@zaphod.jorismooij.nl> 2nd Call for Papers Workshop "Causal Inference: Learning and Prediction" Uncertainty in Artificial Intelligence 2014 (UAI 2014) Sunday, July 27, 2014 Quebec City, Quebec, Canada Causality is central to how we view and react to the world around us, to our decision making, and to the advancement of science. Causal inference in statistics and machine learning has advanced rapidly in the last 20 years, leading to a plethora of new methods, both for causal structure learning and for making causal predictions (i.e., predicting what happens under interventions). However, a side-effect of the increased sophistication of these approaches is that they have grown apart, rather than together. The aim of this workshop is to bring together researchers interested in the challenges of causal inference from observational and interventional data, especially when latent (confounding) variables or feedback loops may be present. Contributions describing practical applications of causal methods are especially encouraged. This one-day workshop will explore these topics through a set of invited talks, presentations and a poster session. We encourage co-submission of (full) papers that have been submitted to the main UAI 2014 conference. See our website for updates: http://staff.science.uva.nl/~jmooij1/uai2014-causality-workshop/ [This workshop takes place directly after the 30th Conference on Uncertainty in Artificial Intelligence (UAI), 23-26 July, 2014.]
Example Topics: ============== * Addressing the challenge of practical causal inference in the context of real applications; * Developing measures and methods for evaluating the quality of causal predictions; * Feasible prediction of post-interventional distributions by reconstructing latent confounders; * Considering the relative robustness of assumptions and algorithms to model misspecification; * Methods for causal inference from high-dimensional data; * Methods for combining different datasets; * Experimental design for causal inference; * Real-world validation of causal inference methods; * Discussions on the possibility of making causal predictions in a highly confounded and cyclic world; * Occam's Razor in causal inference (methodological justifications for oversimplified models). Submission ========= There are two possible submission formats. The authors can either submit: - a one-page abstract (including references) describing recently published work, or - a full-length paper, limited to 9 pages (including figures and text, excluding references). If a contribution consists of material that has been published elsewhere earlier on (except possibly at UAI 2014), the authors must choose the one-page abstract format and cite the original work. Our submission deadline comes a few days after the UAI author notification deadline. We encourage co-submission of (full) papers that have been submitted to the main UAI 2014 conference. Please indicate if your paper was also submitted to UAI. If accepted for UAI, the paper would be published in the UAI proceedings, but we may also invite the authors to give an (oral or poster) presentation at the workshop. Style files for full papers can be found on the UAI website: http://auai.org/uai2014/ Abstracts and papers must be submitted via e-mail before the deadline (June 6) to: uai2014.causality.workshop at gmail.com Contributions will be peer reviewed by at least two reviewers.
Accepted papers will be presented either as an oral presentation or in a poster session. Proceedings ========== After the workshop we will publish proceedings on CEUR-WS and via the web-page: http://staff.science.uva.nl/~jmooij1/uai2014-causality-workshop/ Authors of accepted papers can choose to contribute the submitted manuscript (i.e., the full paper or the abstract). They can also choose not to contribute to the proceedings. Oral presentation slides will also be disseminated via the workshop web-page. Important Dates ============== * June 6, 2014: Submission deadline for abstracts and full papers * June 27, 2014: Author notification * July 27, 2014: Workshop (following the UAI 2014 main conference, July 24-26) Organizers ========= Joris Mooij (Chair), University of Amsterdam Dominik Janzing, Max Planck Institute of Intelligent Systems Jonas Peters, ETH Zürich Tom Claassen, Radboud University Nijmegen Antti Hyttinen, California Institute of Technology From rli at cs.odu.edu Wed May 14 21:39:17 2014 From: rli at cs.odu.edu (rli) Date: Thu, 15 May 2014 01:39:17 +0000 Subject: Connectionists: KDD workshop: BrainKDD Call for Papers Message-ID: <77A78D2D4678A2428E4D244F82B9FEF40123EA713B@relatum.cs.odu.edu> [Apology for cross-postings] BrainKDD Call for Papers BrainKDD: International Workshop on Data Mining for Brain Science in conjunction with the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD'14) August 24, 2014, New York City https://sites.google.com/site/brainkdd/ Understanding brain function is one of the greatest challenges facing science. Today, brain science is experiencing rapid changes and is expected to achieve major advances in the near future. In April 2013, U.S. President Barack Obama formally announced the Brain Research Through Advancing Innovative Neurotechnologies Initiative, the BRAIN Initiative. In Europe, the European Commission has recently launched the European Human Brain Project (HBP).
In the private sector, the Allen Institute for Brain Science is embarking on a new 10-year plan to generate comprehensive, large-scale data in the mammalian cerebral cortex under the MindScope project. These ongoing and emerging projects are expected to generate a deluge of data that capture brain activities at different levels of organization. There is thus a compelling need to develop the next generation of data mining and knowledge discovery tools that allow one to make sense of this raw data and to understand how neurological activity encodes information. This workshop will focus on exploring the forefront between computer science and brain science and inspiring fundamentally new ways of mining and knowledge discovery from a variety of brain data. We encourage submissions in, but not limited to, the following areas: * Mining of in situ hybridization and microarray gene expression data * Mining of brain connectivity and circuitry data * Mining of structural and functional MRI data * Mining of EEG and related data * Mining of temporal developing brain data * Mining of spatial neuroimaging data * Integrative mining of multi-modality brain data * Mining of diseased brain data, such as Alzheimer's disease, Parkinson's disease, and schizophrenia * Segmentation and registration of neuroimaging data Important Dates: June 16th, 2014: Paper Submission Due July 8th, 2014: Notification of Acceptance August 24th, 2014: Workshop Presentation Submission Instructions: Papers should be prepared using the ACM Proceedings Format http://www.acm.org/sigs/publications/proceedings-templates#aL2 and should be at most 9 pages long. Papers should be submitted in PDF format through the following link: https://www.easychair.org/conferences/?conf=brainkdd2014 Publication: Accepted workshop papers will be invited to a journal to be determined (subject to peer review).
Invited Speakers: * Partha Mitra, Cold Spring Harbor Laboratory * Hanchuan Peng, Allen Institute for Brain Science Program Committee Members: * Gal Chechik, Bar Ilan University * Leon French, Rotman Research Institute at Baycrest * Junzhou Huang, University of Texas at Arlington * Shuai Huang, University of South Florida * Dean Krusienski, Old Dominion University * Jiang Li, Old Dominion University * Jing Li, Arizona State University * Yonggang Shi, University of Southern California * Lin Yang, University of Kentucky * Pew-Thian Yap, University of North Carolina at Chapel Hill * Jiayu Zhou, Arizona State University Program Committee Chairs: Michael Hawrylycz Allen Institute for Brain Science Shuiwang Ji Old Dominion University Tianming Liu University of Georgia Dinggang Shen University of North Carolina at Chapel Hill Publicity Chair: Rongjian Li Old Dominion University From anartaghosh at gmail.com Wed May 14 13:38:20 2014 From: anartaghosh at gmail.com (Anarta Ghosh) Date: Wed, 14 May 2014 17:38:20 +0000 Subject: Connectionists: Senior Research Scientist Position at UTRCI Message-ID: Dear Connectionists, United Technologies Research Center Ireland Ltd. (UTRCI) is inviting qualified candidates, with excellent track records in the areas of machine learning, data analytics, statistical pattern recognition and decision support systems, to apply for senior research scientist positions. UTRC delivers advanced technologies to the businesses of United Technologies Corporation (UTC). UTC (NYSE:UTX) is a diversified company that provides a broad range of high-technology products and services to the global aerospace and building systems industries (www.utc.com). UTRC employees enjoy a diverse and collaborative working environment. The following links provide information about the job postings and can also be used to apply for the positions: 
*Senior Research Scientist, Decision Support and Data Mining:* http://careers.utc.com/text/jobs/descriptions/utrci-research-scientist-decision-support-job-cork-cork-4103307 http://careers.utc.com/text/jobs/descriptions/utrci-research-scientist-large-scale-data-mining-inference-and-recommenda-job-cork-cork-4228962 In addition to the aforementioned permanent positions, our company also offers Internship opportunities in these areas for students and research associates. A short description of our company is presented below. Please do not hesitate to contact us at any time should you have a question or wish to provide recommendations on qualified individuals. Yours sincerely, Anarta Ghosh ======================================= *UTRC Profile* United Technologies Research Center (UTRC) delivers advanced technologies to the businesses of United Technologies Corporation (UTC). UTC (NYSE:UTX) is a diversified company that provides a broad range of high-technology products and services to the global aerospace and building systems industries (www.utc.com). UTC's commercial businesses are Otis elevators and escalators and UTC Climate, Controls & Security, a leading provider of heating, ventilation, air conditioning, fire and security systems, building automation and controls. UTC's aerospace businesses are Sikorsky Aircraft Corporation and the new UTC Propulsion & Aerospace Systems, which includes Pratt & Whitney aircraft engines and UTC Aerospace Systems aerospace products. UTRC partners with UTC business units and external research organizations to expand the boundaries of science and technology through research and innovation, delivering technology options that meet and anticipate the needs of the marketplace. Founded in 1929, UTRC is located in East Hartford, Connecticut (U.S.), with an office in Berkeley, California, and research and development centers in Shanghai, China, and Cork, Ireland. United Technologies Research Centre Ireland, Ltd. 
(UTRCI), established in 2009, operates as the European hub of UTRC and is part of UTRC's mission to expand its collaborative activities while leveraging a global network of innovation. Located in Cork, Ireland, *UTRCI* undertakes R&D for the next generation of *Energy* and *Security Systems* for High Performance Buildings. Recently, UTRCI has expanded its research capability to include *Aerospace Systems* to support the large UTC industrial presence in Europe (Pratt & Whitney, Sikorsky, and Goodrich) by leveraging existing relationships with independent aerospace-active research labs in Europe. The UTRCI team members are world-class engineers and researchers with expertise in Control Systems and Optimization, Building Energy Modeling, Power Electronics, Formal Methods, Access Control, Decision Support, Credentials, Communications, and Localization. UTRCI is a diverse and collaborative working environment with a focus on healthy work-life balance. ======================================= *Contact Details:* Dr. Anarta Ghosh United Technologies Research Center, Ireland ghosha at utrc.utc.com 
From kerstin at nld.ds.mpg.de Thu May 15 11:26:53 2014 From: kerstin at nld.ds.mpg.de (Kerstin Mosch) Date: Thu, 15 May 2014 17:26:53 +0200 Subject: Connectionists: 3 days left for abstract submission - Bernstein Conference 2014 - Call for Abstracts In-Reply-To: <536A4714.4060806@nld.ds.mpg.de> References: <536A231F.5060103@bcos.uni-freiburg.de> <536A4714.4060806@nld.ds.mpg.de> Message-ID: <5374DCBD.1090105@nld.ds.mpg.de> REMINDER: Call for Abstracts: Bernstein Conference 2014 Deadline of Abstract Submission: May 18, 2014 ************************************************************** Satellite Workshops September 02-03, 2014 Main Conference September 03-05, 2014 ************************************************************** The Bernstein Conference has become the largest annual Computational Neuroscience Conference in Europe and now regularly attracts more than 500 international participants. This year, the Conference is organized by the Bernstein Focus Neurotechnology Goettingen and will take place September 03-05, 2014. In addition, there will be a series of pre-conference satellite workshops on September 02-03, 2014. The Bernstein Conference is a single-track conference, covering all aspects of Computational Neuroscience and Neurotechnology, and sessions for poster presentations are an integral part of the conference. We now invite the submission of abstracts for poster presentations from all relevant areas. Accepted abstracts will be published online and will be citable via Digital Object Identifiers (DOI). Additionally, a small number of abstracts will be selected for contributed talks. 
DETAILS FOR ABSTRACT SUBMISSION: For abstract submission visit: http://www.bernstein-conference.de/abstracts Deadline: May 18, 2014 CONFERENCE DETAILS: Venue: Satellite Workshops and Main Conference, Central Lecture Hall (ZHG), Platz der Goettinger Sieben 5, 37073 Göttingen, Germany Registration: Conference Registration starts in April, 2014 Early registration deadline: July 7, 2014 For more information on the conference, please visit the website: http://www.bernstein-conference.de PUBLIC PHD STUDENT EVENT: September 02, 2014 PUBLIC LECTURE: September 04, 2014 PROGRAM COMMITTEE: Ilka Diester, Daniel Durstewitz, Thomas Euler, Alexander Gail, Matthias Kaschube, Jason Kerr, Roland Schaette, Fred Wolf, Florentin Wörgötter ORGANIZING COMMITTEE: Florentin Wörgötter, Kerstin Mosch, Yvonne Reimann Bernstein Coordination Site (BCOS): Andrea Huber Broesamle, Mareike Kardinal, Kerstin Schwarzwaelder We look forward to seeing you in Goettingen in September! -- Dr. Kerstin Mosch Bernstein Center for Computational Neuroscience (BCCN) Goettingen Bernstein Focus Neurotechnology (BFNT) Goettingen Max Planck Institute for Dynamics and Self-Organization Am Fassberg 17 D-37077 Goettingen Germany T: +49 (0) 551 5176 - 425 E: kerstin at nld.ds.mpg.de I: www.bccn-goettingen.de I: www.bfnt-goettingen.de From dwang at cse.ohio-state.edu Thu May 15 15:38:53 2014 From: dwang at cse.ohio-state.edu (DeLiang Wang) Date: Thu, 15 May 2014 15:38:53 -0400 Subject: Connectionists: NEURAL NETWORKS, May 2014 Message-ID: <537517CD.3040903@cse.ohio-state.edu> Neural Networks - Volume 53, May 2014 http://www.journals.elsevier.com/neural-networks Letters: Further results on robustness analysis of global exponential stability of recurrent neural networks with time delays and random disturbances Weiwei Luo, Kai Zhong, Song Zhu, Yi Shen Fastest strategy to achieve given number of neuronal firing in theta model Jiaoyan Wang, Qingyun Wang, Guanrong Chen Matrix measure strategies for stability and 
synchronization of inertial BAM neural network with time delays Jinde Cao, Ying Wan Articles: Cross-person activity recognition using reduced kernel extreme learning machine Wan-Yu Deng, Qing-Hua Zheng, Zhong-Min Wang Robust head pose estimation via supervised manifold learning Chao Wang, Xubo Song Assist-as-needed robotic trainer based on reinforcement learning and its application to dart-throwing Chihiro Obayashi, Tomoya Tamei, Tomohiro Shibata Kernel learning at the first level of inference Gavin C. Cawley, Nicola L.C. Talbot Similarity preserving low-rank representation for enhanced data representation and effective subspace learning Zhao Zhang, Shuicheng Yan, Mingbo Zhao Learning using privileged information: SVM+ and weighted SVM Maksim Lapin, Matthias Hein, Bernt Schiele Safe semi-supervised learning based on weighted likelihood Masanori Kawakita, Junichi Takeuchi Synchronization control of memristor-based recurrent neural networks with perturbations Weiping Wang, Lixiang Li, Haipeng Peng, Jinghua Xiao, Yixian Yang Effects of asymmetric coupling and self-coupling on metastable dynamical transient rotating waves in a ring of sigmoidal neurons Yo Horikawa Generalization performance of Gaussian kernels SVMC based on Markov sampling Jie Xu, Yuan Yan Tang, Bin Zou, Zongben Xu, Luoqing Li, Yang Lu Convergence behavior of delayed discrete cellular neural network without periodic coefficients Jinling Wang, Haijun Jiang, Cheng Hu, Tianlong Ma Multiple image-stability of neural networks with unbounded time-varying delays Lili Wang, Tianping Chen Extreme learning machine for ranking: Generalization analysis and applications Hong Chen, Jiangtao Peng, Yicong Zhou, Luoqing Li, Zhibin Pan . 
From giacomo.cabri at unimore.it Fri May 16 04:57:37 2014 From: giacomo.cabri at unimore.it (Giacomo Cabri) Date: Fri, 16 May 2014 10:57:37 +0200 Subject: Connectionists: SASO2014: Joint Call for Workshop Papers Message-ID: <5375D301.9010500@unimore.it> IEEE SASO Workshops: Call for Papers ******************************************************************** *** http://www.iis.ee.imperial.ac.uk/saso2014/workshops.php *** ******************************************************************** *** Important Dates *** Paper Submission Deadline: July 11, 2014 Paper Acceptance Notification: August 1, 2014 Camera-Ready Deadline: August 13, 2014 Workshop Dates: September 8 and 12, 2014 *** Workshops *** 2nd Workshop on Fundamentals of Collective Adaptive Systems (FoCAS 2014) Workshop on Quality Assurance for Self-adaptive, Self-organising Systems (QA4SASO) 2nd Workshop on Self-Adaptive and Self-Organising Socio-Technical Systems (SASO^ST 2014) Workshop on Self-Adaptive Self-Organising Manufacturing Systems (SASOMS 2014) Workshop on Self-Improving System Integration (SISSY 2014) ******************************************************************** *** 2nd Workshop on Fundamentals of Collective Adaptive Systems *** (FoCAS 2014) ******************************************************************** Monday, September 8th, 2014 http://focas.eu/focas-workshop-saso-2014 Organizing Committee: Emma Hart, Edinburgh Napier University, UK Giacomo Cabri, Università di Modena e Reggio Emilia, Italy Collective Adaptive Systems (CAS) is a broad term that describes large-scale systems that comprise many units/nodes, each of which may have its own individual properties, objectives and actions. Decision-making in such a system is distributed and possibly highly dispersed, and interaction between the units may lead to the emergence of unexpected phenomena. CASs are open, in that nodes may enter or leave the collective at any time, and boundaries between CASs are fluid. 
The units can be highly heterogeneous (computers, robots, agents, devices, biological entities, etc.), each operating at different temporal and spatial scales, and having different (potentially conflicting) objectives and goals, even if often the system has a global goal that is pursued by means of collective actions. Our society increasingly depends on such systems, in which collections of heterogeneous technological nodes are tightly entangled with human and social structures to form artificial societies. Yet, to properly exploit them, we need to develop a deeper scientific understanding of the principles by which they operate, in order to better design them. This workshop solicits papers that address new methodologies, theories and principles that can be used in order to develop a better understanding of the fundamental factors underpinning the operation of such systems, so that we can better design, build, and analyse such systems. We welcome inter-disciplinary approaches. Invited contributions from the workshop will be published in a Special Issue of the Journal of Scalable Computing: Practice and Experience (http://scpe.org/) ********************************************************************************* *** Workshop on Quality Assurance for Self-adaptive, Self-organising Systems *** (QA4SASO) ********************************************************************************* Friday, September 12th, 2014 http://qa4saso.isse.de Organizing Committee: Wolfgang Reif Augsburg University, Germany Institute for Software & Systems Engineering reif at informatik.uni-augsburg.de Franz Wotawa Technical University of Graz, Austria Institute for Software Technology wotawa at ist.tugraz.at Tom Holvoet Catholic University of Leuven, Belgium Department of Computer Science tom.holvoet at cs.kuleuven.be For all enquiries about the workshop, please contact: Benedikt Eberhardinger Augsburg University, Germany Institute for Software & Systems Engineering benedikt.eberhardinger at 
informatik.uni-augsburg.de Developing self-adaptive, self-organising systems (SASO) that fulfil the requirements of different stakeholders is no simple matter. Quality assurance is required at each phase of the entire development process, starting from requirements elicitation, system architecture design, agent design, and finally in the implementation of the system. The quality of the artefacts from each development phase affects the rest of the system, since all parts are closely related to each other. Furthermore, the shift of adaptation decisions from design-time to run-time - necessitated by the need of the systems to adapt to changing circumstances - makes it difficult, but even more essential, to assure high quality standards in these kinds of systems. Accordingly, the analysis and evaluation of these self-* systems has to take into account the specific operational context to achieve high quality standards. ********************************************************************************* *** 2nd Workshop on Self-Adaptive and Self-Organising Socio-Technical Systems *** (SASO^ST 2014) ********************************************************************************* Friday, September 12th, 2014 http://sasost.isse.de Organizing Committee: Gerrit Anders, University of Augsburg, Germany, anders at informatik.uni-augsburg.de Jean Botev, University of Luxembourg, Luxembourg, jean.botev at uni.lu Markus Esch, Fraunhofer FKIE, Germany, markus.esch at fkie.fraunhofer.de The design and operation of computer systems have traditionally been driven by technical aspects and considerations. However, the usage characteristics of information and communication systems are both implicitly and explicitly determined by social interaction and the social graph of users. This aspect is becoming more and more evident with the increasing popularity of social network applications on the internet. 
This workshop will address all aspects of self-adaptive and self-organising mechanisms in socio-technical systems, covering different perspectives of this exciting research area ranging from normative and trust management systems to socio-inspired design strategies for distributed algorithms, collaboration platforms and communication protocols. SASO^ST 2014 has a call for papers and a call for talks. *********************************************************************** *** Workshop on Self-Adaptive Self-Organising Manufacturing Systems *** (SASOMS 2014) *********************************************************************** Monday, September 8th, 2014 http://TBA Organizing Committee: Svetan Ratchev, University of Nottingham David Sanderson, University of Nottingham Derek McAuley, Connected Digital Economy Catapult SASOMS at nottingham.ac.uk Economic prosperity increasingly depends on maintaining and further expanding a resilient and sustainable manufacturing sector based on sophisticated technologies, relevant knowledge and skill bases, and a manufacturing infrastructure that has the ability to produce a high variety of complex products faster, better, and cheaper. Manufacturing competitiveness depends on maximising the utilisation of all available resources, empowering human intelligence and creativity, and capturing and capitalising on available information and knowledge for the whole product life cycle. It requires an infrastructure that can quickly respond to consumer and producer requirements, and minimise energy, transport, materials, and resource usage while maximising sustainability, safety, and economic competitiveness. The manufacture and distribution of products in sectors such as the automotive, aerospace, pharmaceutical, and medical industries are key production processes in high-labour-cost areas. 
To respond to the current challenges, manufacturers need to transform current capital-intensive assembly lines into smart systems that can react to external and internal changes and can self-heal, self-adapt, self-organise, and reconfigure. ***************************************************** *** Workshop on Self-Improving System Integration *** (SISSY 2014) ***************************************************** Monday, September 8th, 2014 http://www.informatik.uni-augsburg.de/lehrstuehle/oc/Veranstaltungen/SISSY14/ Organizing Committee: Kirstie Bellman, The Aerospace Corporation, Kirstie.L.Bellman at aero.org Sven Tomforde, Universität Augsburg, Organic Computing Group, sven.tomforde at informatik.uni-augsburg.de Rolf P. Würtz, Ruhr-Universität Bochum, Institute for Neural Computation, rolf.wuertz at ini.rub.de Please contact Sven Tomforde for all enquiries. This workshop intends to focus on the important work of applying self-X principles to the integration of "Interwoven Systems" (where an "Interwoven System" is a system cutting across several technical domains, combining traditionally engineered systems, systems making use of self-X properties and methods, and human systems). The goal of the workshop is to identify key challenges involved in creating self-integrating systems and consider methods to achieve continuous self-improvement for this integration process. The workshop specifically targets an interdisciplinary community of researchers (i.e. from systems engineering, complex adaptive systems, socio-technical systems, and the OC/AC domains) in the hope that collective expertise from a range of domains can be leveraged to drive forward research in the area. -- |----------------------------------------------------| | Prof. Giacomo Cabri - Ph.D., Associate Professor | Dip. di Scienze Fisiche, Informatiche e Matematiche | Universita' di Modena e Reggio Emilia - Italia | e-mail giacomo.cabri at unimore.it | tel. 
+39-059-2058320 fax +39-059-2055216 |----------------------------------------------------| From matostmp at gmail.com Fri May 16 14:25:26 2014 From: matostmp at gmail.com (Thiago Matos Pinto) Date: Fri, 16 May 2014 15:25:26 -0300 Subject: Connectionists: IWSYP'14: deadline extension to May 30 Message-ID: <7A6AC30E-DCDE-4B42-8FC8-0B2E8CA941FF@gmail.com> I International Workshop on Synaptic Plasticity - IWSYP'14 September 8-9, 2014 Ribeirão Preto, São Paulo, Brazil http://iwsyp14.thiagomatospinto.com University of São Paulo Deadline for abstract submission has now been extended to May 30. Participation is free of charge! Speakers: Chris De Zeeuw (Erasmus Medical Center, Rotterdam, The Netherlands & Netherlands Institute for Neuroscience, Amsterdam, The Netherlands) Freek Hoebeek (Erasmus Medical Center, Rotterdam, The Netherlands) Reinoud Maex (École Normale Supérieure, Paris, France) Ricardo Leão (University of São Paulo, Ribeirão Preto, Brazil) Thiago Matos Pinto (University of São Paulo, Ribeirão Preto, Brazil) Organizers: Thiago Matos Pinto (University of São Paulo, Ribeirão Preto, Brazil) Antonio Roque (University of São Paulo, Ribeirão Preto, Brazil) Abstracts: Submissions of abstracts for poster and oral presentations are due by May 30, 2014. We welcome both experimental and theoretical contributions addressing novel findings on topics related to synaptic plasticity. Please see http://iwsyp14.thiagomatospinto.com/abstracts for details. Please visit the IWSYP'14 website for program and further details: http://iwsyp14.thiagomatospinto.com You will also find us on Facebook at facebook.com/iwsyp14 We look forward to seeing you in Ribeirão Preto. With best wishes, Thiago Matos Pinto, IWSYP'14 Organizer 
From dchau at cs.cmu.edu Fri May 16 15:08:12 2014 From: dchau at cs.cmu.edu (Polo Chau) Date: Fri, 16 May 2014 15:08:12 -0400 Subject: Connectionists: IDEA Workshop @ KDD 2014 - Papers due June 20, 2014 Message-ID: Dear colleagues, I'm co-organizing the IDEA workshop at this year's KDD (Christos too!) in New York City. We welcome many kinds of papers as long as they fit the IDEA theme - interactive data exploration and analysis (e.g., demos, short papers, white papers, etc.). Submissions are due on June 20. I've attached the "call for papers" below. Visit our website for more details: http://poloclub.gatech.edu/idea2014/ I'm also attaching a small version of our poster. Full version available on our website. Thanks! Polo * * * Call for Papers - IDEA @ KDD 2014 KDD 2014 Workshop on Interactive Data Exploration and Analysis. Sunday, August 24. New York City, NY, USA IDEA is a full-day workshop organized in conjunction with the ACM SIGKDD Conference on Knowledge Discovery and Data Mining in New York City, USA, on 24 August 2014. Last year's IDEA at KDD 2013 in Chicago was a great success. Join us this year in New York City! Paper submission deadline: June 20, 2014 http://poloclub.gatech.edu/idea2014/ * Workshop Goals We aim to address the development of data mining techniques that allow users to interactively explore their data, receiving near-instant updates to every requested refinement. Our focus and emphasis is on interactivity and effective integration of techniques from data mining, visualization and human-computer interaction. In other words, we explore how the best of these different, related domains can be combined such that the sum is greater than the parts. 
* Topics of Interest include, but are not limited to - interactive data mining algorithms - visualizations for interactive data mining - demonstrations of interactive data mining - quick, high-level data analysis methods - any-time data mining algorithms - visual analytics - methods that allow meaningful intermediate results - data surrogates - on-line algorithms - adaptive stream mining algorithms - theoretical/complexity analysis of instant data mining - learning from user input for action replication/prediction - active learning / mining * Submission Information We welcome novel research papers, demo papers, work-in-progress papers, and visionary papers. All papers will be peer reviewed. Submissions should be in PDF, written in English, with a maximum of 10 pages. Shorter papers are welcome. Format your paper using the standard double-column ACM Proceedings Style http://www.acm.org/sigs/publications/proceedings-templates Submit at EasyChair: http://www.easychair.org/conferences/?conf=idea14 At least one author of an accepted paper must attend the workshop to present the work. * Important Dates Submission : June 20, 2014, 23:59 EST Acceptance : July 7, 2014 Camera-ready : July 21, 2014 Workshop : August 24, 2014 * Further information and Contact Website: http://poloclub.gatech.edu/idea2014/ Email: idea14kdd (at) gmail.com Best regards, Polo Chau, Matthijs van Leeuwen, Jilles Vreeken, and Christos Faloutsos 
From grlmc at urv.cat Sat May 17 06:27:27 2014 From: grlmc at urv.cat (GRLMC) Date: Sat, 17 May 2014 12:27:27 +0200 Subject: Connectionists: AlCoB 2014: call for participation Message-ID: *To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject line* ********************************************************************* 1st INTERNATIONAL CONFERENCE ON ALGORITHMS FOR COMPUTATIONAL BIOLOGY AlCoB 2014 Tarragona, Spain July 1-3, 2014 http://grammars.grlmc.com/alcob2014/ ********************************************************************* PROGRAM Tuesday, July 1: 9:15 - 10:15 Registration 10:15 - 10:25 Opening 10:25 - 11:15 Michael Galperin: Comparative Genomics Approaches to Identifying Functionally Related Genes - Invited Lecture 11:15 - 11:45 Coffee Break 11:45 - 13:00 Liana Amaya Moreno, Ozlem Defterli, Armin Fügenschuh, Gerhard-Wilhelm Weber: Vester's Sensitivity Model for Genetic Networks with Time-Discrete Dynamics Sebastian Wandelt, Ulf Leser: RRCA: Ultra-fast Multiple In-Species Genome Alignments David A. Rosenblueth, Stalin Muñoz, Miguel Carrillo, Eugenio Azpeitia: Inference of Boolean Networks from Gene Interaction Graphs using a SAT Solver 13:00 - 14:30 Lunch 14:30 - 15:45 Laurent Lemarchand, Reinhardt Euler, Congping Lin, Imogen Sparkes: Modeling the Geometry of the Endoplasmic Reticulum Network Sean Maxwell, Mark R. Chance, Mehmet Koyutürk: Efficiently Enumerating All Connected Induced Subgraphs of a Large Molecular Network Bogdan Iancu, Diana-Elena Gratie, Sepinoud Azimi, Ion Petre: On the Implementation of Quantitative Model Refinement 15:45 - 16:00 Break 16:00 - 16:50 Jason Papin: Network Analysis of Microbial Pathogens - Invited Lecture Wednesday, July 2: 9:00 - 9:50 Uwe Ohler: Decoding Non-coding Regulatory Regions in DNA and RNA (I) - 
Invited Tutorial 9:50 - 10:00 Break 10:00 - 11:15 Dimitris Polychronopoulos, Anastasia Krithara, Christoforos Nikolaou, Giorgos Paliouras, Yannis Almirantis, George Giannakopoulos: Analysis and Classification of Constrained DNA Elements with N-gram Graphs and Genomic Signatures Inken Wohlers, Mathilde Le Boudic-Jamin, Hristo Djidjev, Gunnar W. Klau, Rumen Andonov: Exact Protein Structure Classification Using the Maximum Contact Map Overlap Metric Tomohiko Ohtsuki, Naoki Nariai, Kaname Kojima, Takahiro Mimori, Yukuto Sato, Yosuke Kawai, Yumi Yamaguchi-Kabata, Tetsuo Shibuya, Masao Nagasaki: SVEM: A Structural Variant Estimation Method using Multi-Mapped Reads on Breakpoints 11:15 - 11:45 Coffee Break 11:45 - 13:00 Claire Lemaitre, Liviu Ciortuz, Pierre Peterlongo: Mapping-free and Assembly-free Discovery of Inversion Breakpoints from Raw NGS Reads Giuseppe Narzisi, Bud Mishra, Michael C. Schatz: On Algorithmic Complexity of Biomolecular Sequence Assembly Problem Ivo Hedtke, Ioana Lemnian, Matthias Müller-Hannemann, Ivo Grosse: On Optimal Read Trimming in Next Generation Sequencing and Its Complexity 13:00 - 14:30 Lunch 14:30 - 15:20 Annie Chateau, Rodolphe Giroudeau: Complexity and Polynomial-Time Approximation Algorithms around the Scaffolding Problem Ernst Althaus, Andreas Hildebrandt, Anna Katharina Hildebrandt: A Greedy Algorithm for Hierarchical Complete Linkage Clustering 15:30 Visit of the City Thursday, July 3: 9:00 - 9:50 Uwe Ohler: Decoding Non-coding Regulatory Regions in DNA and RNA (II) - 
Invited Tutorial 9:50 - 10:00 Break 10:00 - 11:15 Carla Negri Lintzmayer, Zanoni Dias: On Sorting of Signed Permutations by Prefix and Suffix Reversals and Transpositions Thiago da Silva Arruda, Ulisses Dias, Zanoni Dias: Heuristics for the Sorting by Length-Weighted Inversions Problem on Signed Permutations Alexander Grigoriev, Steven Kelk, Nela Lekić: On Low Treewidth Graphs and Supertrees 11:15 - 11:45 Coffee Break 11:45 - 13:00 Carla Negri Lintzmayer, Zanoni Dias: On the Diameter of Rearrangement Problems Amina Noor, Aitzaz Ahmad, Bilal Wajid, Erchin Serpedin, Mohamed Nounou, Hazem Nounou: A Closed-Form Solution for Transcription Factor Activity Estimation using Network Component Analysis Kaname Kojima, Naoki Nariai, Takahiro Mimori, Yumi Yamaguchi-Kabata, Yukuto Sato, Yosuke Kawai, Masao Nagasaki: HapMonster: A Statistically Unified Approach for Variant Calling and Haplotyping Based on Phase-Informative Reads 13:00 Closing From m.a.wiering at rug.nl Sun May 18 11:39:35 2014 From: m.a.wiering at rug.nl (M.A.Wiering) Date: Sun, 18 May 2014 17:39:35 +0200 Subject: Connectionists: Deep Learning Overview Draft In-Reply-To: <77209f184693a1.5378d434@rug.nl> References: <75e0dcfc46a366.5378d300@rug.nl> <75e0b9f646f9ad.5378d33d@rug.nl> <7790cf6546ae50.5378d37a@rug.nl> <7790fc1e46921e.5378d3b9@rug.nl> <7720b7cd46d802.5378d3f7@rug.nl> <77209f184693a1.5378d434@rug.nl> Message-ID: <7650fe46468566.5378f057@rug.nl> Dear Connectionists, Another recent approach to deep learning is to use deep support vector machines. We recently published a paper on multi-layer support vector machines and show excellent results in a large set of classification, regression, and dimensionality reduction experiments. 
You can find our paper Multi-Layer Support Vector Machines using this link: https://www.researchgate.net/publication/260719624_Multi-Layer_Support_Vector_Machines?ev=prf_pub With kind regards, Marco Wiering ================ On 17-04-14, Schmidhuber Juergen wrote: > Dear connectionists, > > here the preliminary draft of an invited Deep Learning overview: > > http://www.idsia.ch/~juergen/DeepLearning17April2014.pdf > > Abstract. In recent years, deep neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks. > > The draft mostly consists of references (about 600 entries so far). Many important citations are still missing though. As a machine learning researcher, I am obsessed with credit assignment. In case you know of references to add or correct, please send brief explanations and bibtex entries to juergen at idsia.ch (NOT to the entire list), preferably together with URL links to PDFs for verification. Please also do not hesitate to send me additional corrections / improvements / suggestions / Deep Learning success stories with feedforward and recurrent neural networks. I'll post a revised version later. > > Thanks a lot! > > Juergen Schmidhuber > http://www.idsia.ch/~juergen/ > http://www.idsia.ch/~juergen/whatsnew.html 
URL: From ted.carnevale at yale.edu Sun May 18 23:50:14 2014 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Sun, 18 May 2014 23:50:14 -0400 Subject: Connectionists: NEURON course registration deadline approaching Message-ID: <53797F76.4090701@yale.edu> The registration deadline for this year's NEURON Summer Course is Friday, May 30--just 10 working days from today. Sign up now to make sure you'll have a seat in this course, which will feature the latest developments in cell and network modeling with NEURON. This year's course has two components: NEURON Fundamentals June 21-24 This addresses all aspects of using NEURON to model individual neurons and networks of neurons on parallel hardware, and covers the principles of parallel simulations with NEURON on hardware that ranges from multicore desktop PCs and Macs to massively parallel supercomputers. Parallel Simulation with NEURON June 25-26 This is for users who are already familiar with NEURON and either have models that they want to parallelize, or need to create models that will run on parallel hardware. It addresses key topics such as measuring and maximizing performance, and strategies for debugging. Registration fees are $1050 for NEURON Fundamentals $650 for Parallel Simulation $1600 for both These fees cover handout materials plus food and housing on the campus of UC San Diego. Registration is limited, and the registration deadline is Friday, May 30, 2014. 
For more information and the on-line registration form, see http://www.neuron.yale.edu/neuron/courses --Ted From fmschleif at googlemail.com Mon May 19 01:50:16 2014 From: fmschleif at googlemail.com (Frank-Michael Schleif) Date: Mon, 19 May 2014 07:50:16 +0200 Subject: Connectionists: Call for papers (deadline approaching) -- Special Session -- on 'High dimensional data analysis - theoretical advances and applications' at CIDM 2014 / SSCI Message-ID: +++ PLEASE ACCEPT OUR APOLOGIES FOR MULTIPLE COPIES +++ Call for Papers Special Session on 'High dimensional data analysis - theoretical advances and applications' 09-12 December 2014, Orlando, Florida, USA http://www.ieee-ssci.org/CIDM.html http://www.cs.bham.ac.uk/~schleify/CIDM_2014/ AIMS AND SCOPE Modern measurement technology, greatly enhanced storage capabilities and novel data formats have radically increased the amount and dimensionality of electronic data. Due to their high dimensionality and complexity, and the curse of dimensionality, these data sets often cannot be addressed by classical statistical methods. Prominent examples can be found in the life sciences with microarrays, hyperspectral data in the geosciences, but also in fields like astrophysics, biomedical imaging, finance, or web and market basket analysis. Computational intelligence methods have the potential to pre-process, model and analyze such complex data, but new strategies are needed to obtain efficient and reliable models. Novel data encoding techniques and projection methods, employing concepts of randomization algorithms, have opened new ways to obtain compact descriptions of these complex data sets or to identify relevant information. However, the theoretical foundations and the practical potential of these methods and alternative approaches still have to be explored and improved.
New advances and research to address the curse of dimensionality, and to uncover and exploit the blessings of high dimensionality in data analysis, are of major interest in theory and application. TOPICS This workshop aims to promote new advances and research directions to address the modeling, representation/encoding and reduction of high-dimensional data, as well as approaches and studies addressing challenging problems in the field of high dimensional data analysis. Topics of interest range from theoretical foundations, to algorithms and implementation, to applications and empirical studies of mining high dimensional data, including (but not limited to) the following:
o Studies on how the curse of dimensionality affects computational intelligence methods
o New computational intelligence techniques that exploit some properties of high dimensional data spaces
o Theoretical findings addressing the imbalance between high dimensionality and small sample size
o Stability and reliability analyses for data analysis in high dimensions
o Adaptive and non-adaptive dimensionality reduction for noisy high dimensional data sets
o Methods of random projections, compressed sensing, and random matrix theory applied to high dimensional data mining
o Models of low intrinsic dimension, such as sparse representation, manifold models, latent structure models, and studies of their noise tolerance
o Classification, regression, clustering of high dimensional complex data sets
o Functional data mining
o Data presentation and visualisation methods for very high dimensional data sets
o Data mining applications to real problems in science, engineering or businesses where the data is high dimensional
PAPER SUBMISSION High quality original submissions (up to 8 pages, IEEE style) should follow the guidelines as outlined at the CIDM homepage and should be submitted using the provided IEEE paper submission system.
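One of the topics listed above, random projections, can be illustrated in a few lines: project synthetic high-dimensional points with a Gaussian random matrix and check that pairwise distances are roughly preserved (the Johnson-Lindenstrauss effect). This is only a sketch; the dimensions, sample size, and data below are invented.

```python
import math
import random

# Random projection of n points from dimension d down to k.
# Entries of R are N(0, 1/k), so squared norms are preserved in expectation.
random.seed(1)
d, k, n = 200, 40, 6
points = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
R = [[random.gauss(0, 1) / math.sqrt(k) for _ in range(d)] for _ in range(k)]

def project(p):
    return [sum(r_i * x_i for r_i, x_i in zip(row, p)) for row in R]

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

low = [project(p) for p in points]
# Ratio of projected to original distance for every pair; close to 1 means
# the geometry survived the 5x dimensionality reduction.
ratios = [dist(low[i], low[j]) / dist(points[i], points[j])
          for i in range(n) for j in range(i + 1, n)]
print(min(ratios), max(ratios))
```

With k = 40 the typical distortion per pair is on the order of 1/sqrt(k), i.e. a few tens of percent at worst for this small sample.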
We strongly encourage authors to use the LaTeX style files rather than the Word format, to ensure high-quality typesetting and paper presentation, also during the review process. Webpage of the special session: http://www.cs.bham.ac.uk/~schleify/CIDM_2014/
IMPORTANT DATES
Paper submission deadline: 15 June 2014
Notification of acceptance: 05 September 2014
Deadline for final papers: 05 October 2014
The CIDM 2014 conference: 9-12 December 2014
SPECIAL SESSION ORGANIZERS:
Ata Kaban, University of Birmingham, Birmingham, UK
Frank-Michael Schleif, University of Birmingham, Birmingham, UK
Thomas Villmann, University of Appl. Sc. Mittweida, Germany
-- ------------------------------------------------------- Dr. rer. nat. habil. Frank-Michael Schleif School of Computer Science The University of Birmingham Edgbaston Birmingham B15 2TT United Kingdom - email: fschleif at techfak.uni-bielefeld.de http://promos-science.blogspot.de/ ------------------------------------------------------- From sinankalkan at gmail.com Mon May 19 05:04:06 2014 From: sinankalkan at gmail.com (Sinan KALKAN) Date: Mon, 19 May 2014 12:04:06 +0300 Subject: Connectionists: CfP - ICAR 2015 - 17th International Conference on Advanced Robotics Message-ID: * Please accept our apologies if you receive multiple copies of this call * 17th International Conference on Advanced Robotics, ICAR 2015 27-31 July, 2015, Istanbul, Turkey http://www.icar2015.org/ *Call for Papers* The 17th International Conference on Advanced Robotics, ICAR 2015, is organized by Middle East Technical University in collaboration with Kadir Has University. The conference will take place on the Kadir Has University campus in Istanbul, Turkey, on July 27-31, 2015. Keeping up with the same spirit of innovation, ICAR wants to bring high-quality papers, workshops and tutorials to the geographical areas where the larger robotics conferences have not been organized yet.
After the successful conference last year in Montevideo, Uruguay (www.icar2013.org), next year the 17th ICAR will be held in Istanbul, Turkey, where "the east meets the west". The conference is organized by Middle East Technical University (METU) in collaboration with Kadir Has University. The venue is the Kadir Has Campus situated on the historic peninsula along Halic bay (www.icar2015.org). ICAR 2015 will be technically co-sponsored by the IEEE Robotics and Automation Society. The technical program of ICAR 2015 will consist of plenary talks, workshops and oral presentations. Submitted papers should describe original work in the form of theoretical modelling, design, experimental validation, or case studies from all areas of robotics, focusing on emerging paradigms and application areas including but not limited to: Robotics Vision Adversarial Planning Cognitive Robotics Robot Operating Systems Robotics Architectures Simulation and Visualization Mobile Robots Robot Swarms Humanoid Robots Biologically-Inspired Robots Self-Localization and Navigation Embedded and Mobile Hardware Spatial Cognition Robotic Entertainment Human-Robot Interaction Robot Competitions Multi-Robot Systems Unmanned Aerial Robots Search and Rescue Robots Underwater Robotic Systems Learning and Adaptation Educational Robotics Cooperation and Competition Rehabilitation Robotics Dynamics and Control Immersive Robotics
*Important Dates*
Paper submission: February 1, 2015
Workshop and tutorial proposals: February 1, 2015
Notification of paper acceptance: April 15, 2015
Camera-ready papers: May 15, 2015
ICAR 2015 Conference: July 27-31, 2015
*Paper Submission* Original technical paper contributions are solicited for presentation at ICAR 2015. Accepted papers will be published in IEEE Xplore conference proceedings.
Submissions should be 6-8 pages following the IEEE Xplore format available at: http://www.ieee.org/conferences_events/conferences/publishing/templates.html Papers will be submitted online via EasyChair: https://www.easychair.org/conferences/?conf=icar2015 *For more information* http://www.icar2015.org/ Sinan Kalkan, Erol Sahin Publicity and Publication Co-chairs -- Sinan KALKAN, Asst. Prof. Dept. of Computer Engineering Middle East Technical University Ankara TURKEY Web: http://kovan.ceng.metu.edu.tr/~sinan Tel: + 90 - 312 - 210 5547 / 210 7372 Fax: +90 - 312 - 210 5544 -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.huber at bcos.uni-freiburg.de Fri May 16 01:51:19 2014 From: andrea.huber at bcos.uni-freiburg.de (Andrea Huber Brösamle) Date: Fri, 16 May 2014 07:51:19 +0200 Subject: Connectionists: Reminder: Call for Abstracts Bernstein Conference 2014 Message-ID: <5375A757.9000208@bcos.uni-freiburg.de> REMINDER: Call for Abstracts: Bernstein Conference 2014 Deadline of Abstract Submission: May 18, 2014 ************************************************************** Satellite Workshops September 02-03, 2014 Main Conference September 03-05, 2014 ************************************************************** The Bernstein Conference has become the largest annual Computational Neuroscience Conference in Europe and now regularly attracts more than 500 international participants. This year, the Conference is organized by the Bernstein Focus Neurotechnology Goettingen and will take place September 03-05, 2014. In addition, there will be a series of pre-conference satellite workshops on September 02-03, 2014. The Bernstein Conference is a single-track conference, covering all aspects of Computational Neuroscience and Neurotechnology, and sessions for poster presentations are an integral part of the conference. We now invite the submission of abstracts for poster presentations from all relevant areas.
Accepted abstracts will be published online and will be citable via Digital Object Identifiers (DOI). Additionally, a small number of abstracts will be selected for contributed talks. DETAILS FOR ABSTRACT SUBMISSION: For abstract submission visit: http://www.bernstein-conference.de/abstracts Deadline: May 18, 2014 CONFERENCE DETAILS: Venue: Satellite Workshops and Main Conference, Central Lecture Hall (ZHG), Platz der Goettinger Sieben 5, 37073 Göttingen, Germany Registration: Conference Registration starts in April, 2014 Early registration deadline: July 7, 2014 For more information on the conference, please visit the website: http://www.bernstein-conference.de PUBLIC PHD STUDENT EVENT: September 02, 2014 PUBLIC LECTURE: September 04, 2014 PROGRAM COMMITTEE: Ilka Diester, Daniel Durstewitz, Thomas Euler, Alexander Gail, Matthias Kaschube, Jason Kerr, Roland Schaette, Fred Wolf, Florentin Wörgötter ORGANIZING COMMITTEE: Florentin Wörgötter, Kerstin Mosch, Yvonne Reimann Bernstein Coordination Site (BCOS): Andrea Huber Broesamle, Mareike Kardinal, Kerstin Schwarzwaelder We look forward to seeing you in Goettingen in September!
From AdvancedAnalytics at uts.edu.au Mon May 19 02:29:50 2014 From: AdvancedAnalytics at uts.edu.au (Advanced Analytics) Date: Mon, 19 May 2014 16:29:50 +1000 Subject: Connectionists: AAi Seminar: Contrast Mining Aided Problem Solving and Data Analytics: Successes and Potential References: <8112393AA53A9B4A9BDDA6421F26C68A0173A49354D3@MAILBOXCLUSTER.adsroot.uts.edu.au> Message-ID: <8112393AA53A9B4A9BDDA6421F26C68A0173A49354E0@MAILBOXCLUSTER.adsroot.uts.edu.au> Dear Colleague, AAi Seminar: Contrast Mining Aided Problem Solving and Data Analytics: Successes and Potential Speaker: Professor Guozhu Dong, Knoesis Center and Department of CSE, Wright State University, Dayton, Ohio, USA Date: Monday 26 May 2014 Time: 1.30pm to 2.30pm Location: Blackfriars Campus, Room CC05.GD.03 Seminar Chairman: Associate Professor Jinyan Li, Seminar Coordinator, AAI - Jinyan.Li at uts.edu.au Abstract: Contrast data mining is about (a) the mining of patterns and models that characterize the differences between contrasting data sets of different classes/conditions, and (b) the use of the mined results to solve challenging problems (in the area of contrast mining assisted problem solving and data analytics). Since the pioneering work on emerging pattern mining in 1999 by Dong and Li, a significant body of results on contrast mining has been published, including a book in 2012. In this talk Professor Dong will focus on the successes and potential of contrast mining assisted problem solving. He will provide an overview of results on contrast pattern based classification, contrast pattern based outlier detection, and contrast pattern based gene ranking for complex diseases, including example successes in cancer analysis, molecular compound selection, toxicity analysis, blog analysis, and city environment analysis for crime prevention.
He will also give detailed discussion on recent advances on contrast pattern based clustering and contrast pattern aided regression, together with discussion on their exciting performance in experiments. Professor Dong believes that contrast mining assisted problem solving and data analytics have big potential, since they offer powerful concepts and tools to effectively handle challenges associated with multiple complex attribute/variable relationships in high dimensional data. Bio: Guozhu Dong is a full professor at Wright State University. He earned a PhD in Computer Science from the University of Southern California. His main research interests are data mining, data science, bioinformatics, and databases. He has published over 150 articles and two books entitled "Sequence Data Mining" and "Contrast Data Mining," and he holds 4 US patents. He is widely known for his pioneering work on contrast/emerging pattern mining and applications, and for his work on first-order maintenance of recursive and transitive closure views. His papers have received 5400+ citations (scholar.google.com) and his h-index is 35. He is a recipient of the Best Paper Awards from the 2005 IEEE ICDM and the 2014 PAKDD, and a recipient of the Research Excellence Award at College of CECS of WSU. He is a senior member of both IEEE and ACM. Overview to AAI seminar series The Advanced Analytics Seminar Series presents the latest theoretical advancement and empirical experience in a broad range of interdisciplinary and business-oriented analytics fields. It covers topics related to data mining, machine learning, statistics, bioinformatics, behavior informatics, marketing analytics and multimedia analytics. It also provides a platform for the showcase of commercial products in ubiquitous advanced analytics. Speakers are invited from both academia and industry. It opens regularly on a week day at the garden-like UTS Blackfriars Campus. You are warmly welcome to attend this seminar series. 
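The emerging-pattern idea described in the abstract above — itemsets whose support grows sharply from one class to the other — can be sketched in a few lines. The transactions and the growth-rate threshold below are invented toy data, not from the speaker's work.

```python
from itertools import combinations

# Two classes of toy transactions (sets of items).
pos = [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "d"}, {"a", "c"}]
neg = [{"b", "c"}, {"c", "d"}, {"b", "d"}, {"a", "d"}]

def support(itemset, transactions):
    # Fraction of transactions containing the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

items = sorted(set().union(*pos, *neg))
emerging = []
for size in (1, 2):
    for combo in combinations(items, size):
        s = set(combo)
        sp, sn = support(s, pos), support(s, neg)
        # Growth rate = support ratio between the two classes.
        growth = sp / sn if sn else float("inf")
        if growth >= 3:
            emerging.append((tuple(sorted(s)), growth))

print(emerging)
```

On this toy data the itemset {a} and the pairs {a, b} and {a, c} come out as emerging patterns: they are frequent in the positive class and rare or absent in the negative one, which is exactly the kind of class-contrasting pattern the talk is about.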
Thank you for considering attending this seminar by our visitor, Professor Guozhu Dong, from the Knoesis Center and Department of CSE, Wright State University, Dayton, Ohio, USA. Regards. Colin Wise Operations Manager Faculty of Engineering & IT The Advanced Analytics Institute University of Technology, Sydney Blackfriars Campus Building 2, Level 1 Tel. +61 2 9514 9267 M. 0448 916 589 Email: Colin.Wise at uts.edu.au AAI: www.analytics.uts.edu.au/ UTS CRICOS Provider Code: 00099F DISCLAIMER: This email message and any accompanying attachments may contain confidential information. If you are not the intended recipient, do not read, use, disseminate, distribute or copy this message or attachments. If you have received this message in error, please notify the sender immediately and delete this message. Any views expressed in this message are those of the individual sender, except where the sender expressly, and with authority, states them to be the views of the University of Technology Sydney. Before opening any attachments, please check them for viruses and defects. Think. Green. Do. Please consider the environment before printing this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
From Pierre.Bessiere at imag.fr Sun May 18 07:15:29 2014 From: Pierre.Bessiere at imag.fr (Pierre Bessière) Date: Sun, 18 May 2014 13:15:29 +0200 Subject: Connectionists: post-doc position: "The informational structure of audio-visuo-motor speech" Message-ID: <1280795C-F11A-4C4D-8380-78BA3AA279AA@imag.fr> Speech Unit(e)s The multisensory-motor unity of speech Understanding how speech unites the sensory and motor streams, And how speech units emerge from perceptuo-motor interactions ERC Advanced Grant, Jean-Luc Schwartz, GIPSA-Lab, Grenoble, France Proposal for a post-doc position beginning in September 2014: "The informational structure of audio-visuo-motor speech" Context The Speech Unit(e)s project is focused on the speech unification process associating the auditory, visual and motor streams in the human brain, in an interdisciplinary approach combining cognitive psychology, neurosciences, phonetics (both descriptive and developmental) and computational models. The framework is provided by the "Perception-for-Action-Control Theory" (PACT) developed by the PI (Schwartz et al., 2012). PACT is a perceptuo-motor theory of speech communication connecting, in a principled way, perceptual shaping and motor procedural knowledge in speech multisensory processing. The communication unit in PACT is neither a sound nor a gesture but a perceptually shaped gesture, that is, a perceptuo-motor unit. It is characterized by both articulatory coherence (provided by its gestural nature) and perceptual value (necessary for being functional). PACT considers two roles for the perceptuo-motor link in speech perception: online unification of the sensory and motor streams through audio-visuo-motor binding, and offline joint emergence of the perceptual and motor repertoires in speech development.
Objectives of the post-doc position In a multisensory-motor context such as the present one, a major requirement concerns a better knowledge of the structure of the information. Speech scientists have acquired a very good knowledge of the structure of speech acoustics, capitalizing on large audio corpora and up-to-date statistical techniques (mostly based on sophisticated implementations of Hidden Markov Models, e.g. Schutz & Waibel, 2001, Yu & Deng, 2011). Data on speech rhythms, in relation to the syllabic structure, have been analyzed clearly in a number of works (Greenberg, 1999; Grant & Greenberg, 2003). In spite of strong efforts in the field of audiovisual automatic speech recognition (Potamianos et al., 2003), characterization of the structure of audiovisual information is scarce. While an increasing number of papers on audiovisual speech perception that present cognitive and neurophysiological data quote the "advance of the visual stream on the audio stream", few papers provide quantitative evidence (see Chandrasekaran et al., 2009), and when they do, these are sometimes mistaken or oversimplified. Actually, the temporal relationship between visual and auditory information is far from constant from one situation to another (Troille et al., 2010). Concerning the perceptuo-motor link, the situation is even worse. Few systematic quantitative studies are available because of the difficulty of acquiring articulatory data, and in these studies the focus is generally set on the so-called "inversion problem" (e.g. Ananthakrishnan & Engwall, 2011, Hueber et al., 2012) rather than a systematic characterization of the structure of the perceptuo-motor relationship. Finally, there is a strong lack of systematic investigations of the relationship between orofacial and bracchio-manual gestures in face-to-face communication. We shall gather a large corpus of audio-visuo-articulatory-gestural speech.
Articulatory data will be acquired through ultrasound, electromagnetic articulography (EMA) and video imaging. Labial configurations and estimates of jaw movements will be automatically extracted and processed thanks to a large range of video facial processing. We additionally plan to record information about accompanying coverbal gestures by the hand and arm, thanks to an optotrack system that tracks bracchio-manual gestures. A complete setup for audio-video-ultrasound-EMA-optotrack acquisition and automatic processing, named ultraspeech-tools, is available in Grenoble (more information on www.ultraspeech.com). The corpus will consist of isolated syllables, chained syllables, simple sentences and read material. Elicitation paradigms will associate reading tasks (with material presented on a computer screen) and dialogic situations between the subject and an experimenter to evoke coverbal gestures as ecologically as possible. The corpus will be analyzed, by extensive use and possibly development of original techniques, based on data-driven statistical models and machine learning algorithms, in the search for three major types of characteristics:
1) Quantification of auditory, visual and motor rhythms, from kinematic data, e.g. acoustic envelope, lip height and/or width, variations in time of the principal components of the tongue deformations, analysis of the arm/hand/finger dynamics;
2) Quantification of delays between sound, lips, tongue and hand in various kinds of configurations associated with coarticulation processes (e.g. vowel to vowel anticipatory and perseverative phenomena, consonant-vowel coproduction, vocal tract preparatory movement in silence) and oral/gestural coordination;
3) Quantification of the amount of predictability between lips, tongue, hand and sound, through various techniques allowing the quantitative estimate of joint information (e.g.
mutual information, entropy, co-inertia) and performing statistical inference between modalities (e.g. graphical models such as dynamic Bayesian frameworks, multi-stream HMMs, etc.). The work will be performed within a multidisciplinary group in GIPSA-Lab Grenoble, associating specialists in speech and gesture communication, cognitive processes, signal processing and machine learning (partners of the project: Jean-Luc Schwartz, Marion Dohen from the "Speech, Brain, Multimodality Development" team; Thomas Hueber, Laurent Girin from the "Talking Machines and Face to Face Interaction" team; Pierre-Olivier Amblard and Olivier Michel from the "Information in Complex Systems" team). Practical information The post-doc position is open for a two-year period, with a possible third-year prolongation. The position is open from September 2014, or slightly later if necessary. Candidates should have a background in speech and signal processing, face-to-face communication, and machine learning, or at least two of these three domains. Candidates should send as soon as possible a short email to Jean-Luc Schwartz (Jean-Luc.Schwartz at gipsa-lab.grenoble-inp.fr) to declare their intention to submit a full proposal. Then they must send a full application file in the next weeks. This application file will include an extended CV and a list of publications, together with a letter explaining why they are interested in the project, what their specific interests could be, possibly suggesting other experiments related to the general question of the informational structure of audio-visuo-motor speech, and also how this position would fit into their future plans for the development of their own career. They should also provide two names (with email addresses) for recommendations about their applications. Preselected candidates will be interviewed. Final selection should occur before mid-June. -------------- next part -------------- An HTML attachment was scrubbed...
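One of the joint-information measures named in the post-doc description above, mutual information between two discretized modalities, can be sketched on synthetic signals. The "audio" and "lip" streams below are invented stand-ins for real corpus features; real data would replace them.

```python
import math
import random
from collections import Counter

# Two synthetic discretized "modalities": a lip stream that partially
# copies the audio stream, so their mutual information is clearly positive.
random.seed(2)
n = 5000
audio = [random.randint(0, 3) for _ in range(n)]
lips = [a if random.random() < 0.7 else random.randint(0, 3) for a in audio]

def entropy(seq):
    # Plug-in (empirical) Shannon entropy in bits.
    counts = Counter(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# MI(A; L) = H(A) + H(L) - H(A, L)
h_a, h_l = entropy(audio), entropy(lips)
h_joint = entropy(list(zip(audio, lips)))
mi = h_a + h_l - h_joint
print(round(mi, 3))
```

For this 70%-copy channel over 4 symbols the true mutual information is a bit under 0.9 bits; the plug-in estimate lands close to that for 5000 samples.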
URL: From d.polani at herts.ac.uk Thu May 15 18:30:32 2014 From: d.polani at herts.ac.uk (d.polani at herts.ac.uk) Date: Thu, 15 May 2014 23:30:32 +0100 Subject: Connectionists: PhD Studentships in Information Processing and Self-Organization in Adaptive Biological and Artificial Systems at the University of Hertfordshire Message-ID: ///////////////////////////////////////////////////////////////////// PhD Studentships Available on INFORMATION PROCESSING AND SELF-ORGANIZATION IN ADAPTIVE BIOLOGICAL AND ARTIFICIAL SYSTEMS Adaptive Systems Research Group School of Computer Science University of Hertfordshire, UK ///////////////////////////////////////////////////////////////////// PhD studentships are available in the Adaptive Systems Research Group at the University of Hertfordshire in the topics of Artificial Life, especially for the study of principles behind information processing in adaptive, complex and self-organizing systems, a research area which has witnessed a dramatic growth in the last years. We use mathematical methods, with particular emphasis on an arsenal of recently developed techniques based on Shannon's information theory, to describe, understand or construct such systems in the context of AI/robotics and biology. Questions of interest and possible research directions include, but are not limited to: - information-theoretic approaches towards a mathematically founded understanding of information processing and the perception-action loop in agents; fundamental quantitative constraints governing the interaction between an agent and its environment - theoretically grounded pathways towards a systematic way to generate self-organization in complex systems - biologically plausible, information-theoretic methods for creating Artificial Intelligence systems from first principles - generic models for intrinsic motivation generation and artificial creativity based on such principles - fundamental principles underlying biological (e.g. 
neural) computation (with opportunities to collaborate with the Biocomputation Research Group) The prospective candidates should have a strong first degree, a keen interest in contributing to a new, highly dynamic and quickly growing research area and a very strong background in Computer Science, Physics, Mathematics, Statistics or another relevant computational/quantitative discipline. In particular, they should demonstrate excellent programming skills in one or more major computer languages. A mathematical/numerical background would be desirable, knowledge in at least one of the following fields would be a plus: probability theory, information theory, differential geometry, control, or data modelling/neural network techniques. The envisaged research will take place in the vibrant and enthusiastic research environment of the Adaptive Systems Research Group in the School of Computer Science at the University of Hertfordshire which offers a large number of specialized and interdisciplinary seminars as well as general training opportunities. Research in Computer Science at the University of Hertfordshire has been recognized as excellent by the latest Research Assessment Exercise, with 55% of the research submitted being rated as world leading or internationally excellent. The University of Hertfordshire is located in Hatfield, Hertfordshire UK which is considered the "northern green belt" of London. Hatfield is close to London (less than 25 minutes by train to Kings Cross), has convenient access to Stansted, Luton and Heathrow airports and is not far from the historic town of St. Albans. Successful candidates are eligible for a research studentship award from the University (approximately GBP 13,800 per annum bursary plus the payment of the standard UK student fees). Applicants from outside the UK or EU are eligible, but will have to pay half of the overseas fees out of their bursary. For informal inquiries on the research topic please contact Dr. 
Daniel Polani (Email: d.polani at herts.ac.uk). Your application form (http://www.herts.ac.uk/__data/assets/pdf_file/0010/31105/uh-application-form.pdf) should be returned to: Mrs Lorraine Nicholls Research Student Administrator, STRI University of Hertfordshire College Lane Hatfield, Herts, AL10 9AB, UK Tel: 0044 1707 286083 Email: l.nicholls at herts.ac.uk Applications should also include two references and transcripts of previous academic degrees. We accept applications for self-funded places throughout the year. The next short-listing process for studentship applications will begin on 9 June 2014. From Pierre.Bessiere at imag.fr Sun May 18 07:14:06 2014 From: Pierre.Bessiere at imag.fr (Pierre Bessière) Date: Sun, 18 May 2014 13:14:06 +0200 Subject: Connectionists: PhD position: "Bayesian model of the joint development of perception, action and phonology" Message-ID: <3723BF68-AC94-47B4-AA67-9914D3F4B60D@imag.fr> Proposal for a PhD position beginning in September 2014: "Bayesian model of the joint development of perception, action and phonology" Context The Speech Unit(e)s project is focused on the speech unification process associating the auditory, visual and motor streams in the human brain, in an interdisciplinary approach combining cognitive psychology, neurosciences, phonetics (both descriptive and developmental) and computational models. The framework is provided by the "Perception-for-Action-Control Theory" (PACT) developed by the PI (Schwartz et al., 2012). PACT is a perceptuo-motor theory of speech communication, which connects in a principled way perceptual shaping and motor procedural knowledge in speech multisensory processing. The communication unit in PACT is neither a sound nor a gesture but a perceptually shaped gesture, that is, a perceptuo-motor unit. It is characterised by both articulatory coherence, provided by its gestural nature, and perceptual value,
necessary for being functional. PACT considers two roles for the perceptuo-motor link in speech perception: online unification of the sensory and motor streams through audio-visuo-motor binding, and offline joint emergence of the perceptual and motor repertoires in speech development. Objectives of the PhD position In the debates between auditory and motor theories of speech perception, and in their modern revival concerning the role of the dorsal route (Hickok & Poeppel, 2004, 2007), there is no real reflection about what could be the functional role of a perceptuo-motor coupling for speech perception. The "dorsal route" is supposed to be useful in "adverse conditions", e.g. in noise or with a foreign language (Callan et al., 2004; Zekveld et al., 2006). But no theoretical explanation is actually proposed for this potential efficiency of motor processes in adverse conditions. We have recently developed a computational framework that enables comparing the predictions of auditory, motor and perceptuo-motor theories in various kinds of situations (Moulin-Frier et al., 2012). Casting these theories into a single, unified mathematical framework is an efficient way to compare the theories and their properties in a systematic manner. Bayesian modeling is a mathematical framework that precisely allows such comparisons. The trick is that the same tool, namely probabilities, can be used both for defining the models and for comparing them (see e.g. Myung & Pitt, 2009). The generic model we developed is called COSMO, which stands for "Communicating about Objects using Sensory-Motor Operations". The COSMO acronym also represents the five variables around which the basic structure of the model is built. In COSMO, communication (C) is a success when an object OS in the speaker's mind is transferred, via sensory and motor means S and M, to the listener's mind where it is correctly recovered as OL.
COSMO assumes that a communicating agent, which is both a speaker and a listener, internalizes the communication situation inside an internal model. The PhD project aims at developing COSMO in two major directions. (1) Joint acquisition of perceptual and motor repertoires in a syllabic framework. Experiments in COSMO have mainly concerned simple stimuli, e.g. in abstract one-dimensional sensory-motor spaces, or with restricted vowel samples. We will explore strategies for automatically learning to produce and perceive complex sequences such as plosive-vowel CV sequences, which display systematic coarticulation phenomena. Various kinds of exploration and learning mechanisms are available from cognitive and developmental robotics (Moulin-Frier & Oudeyer, 2012). Validation tests will be inspired by real data, e.g. on locus equations for plosive acoustics (Sussman et al., 1998), robustness to perturbations in production (Lindblom et al., 1979; Savariaux et al., 1995), or coupling of perceptual and motor idiosyncrasies. (2) Comparison of auditory, motor and perceptuo-motor theories for speech processing in various conditions. Once these perception and production components are settled in COSMO, we will compare auditory, motor and perceptuo-motor speech perception theories in challenging conditions, such as noise, speaker normalization, or foreign accent. We will test the ability to develop a perceptuo-motor phonology from auditory and motor experience, e.g. to acquire a category such as "plosive place of articulation" through the discovery of perceptuo-motor links in learning. We will also test COSMO on natural CV stimuli, exploiting natural multi-speaker corpora of CV sequences for learning and perceptual tests.
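[Editor's sketch] For readers unfamiliar with this style of modeling, the communication chain described above (speaker's object OS, motor gesture M, sensory signal S, listener's recovered object OL, with C marking communication success) can be illustrated as a toy discrete Bayesian model. This is not the authors' COSMO implementation: every size, distribution, and decoding rule below is invented purely for illustration.

```python
# Toy COSMO-style communication chain over discrete variables.
# All distributions here are random illustrative quantities, not real data.
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_motor, n_sensory = 3, 4, 5

# Speaker's production model P(M | OS) and articulatory-to-acoustic map P(S | M)
P_M_given_O = rng.dirichlet(np.ones(n_motor), size=n_objects)    # shape (O, M)
P_S_given_M = rng.dirichlet(np.ones(n_sensory), size=n_motor)    # shape (M, S)

def listener_posterior(s):
    """P(OL | S=s), marginalizing over M, with a uniform prior on objects."""
    P_S_given_O = P_M_given_O @ P_S_given_M                      # shape (O, S)
    post = P_S_given_O[:, s]
    return post / post.sum()

def communication_success_rate(n_trials=2000):
    """Empirical P(C = success): how often the listener recovers OL == OS."""
    hits = 0
    for _ in range(n_trials):
        o_s = rng.integers(n_objects)                  # speaker picks an object
        m = rng.choice(n_motor, p=P_M_given_O[o_s])    # motor gesture
        s = rng.choice(n_sensory, p=P_S_given_M[m])    # sensory signal
        hits += int(np.argmax(listener_posterior(s))) == o_s
    return hits / n_trials

print("success rate:", communication_success_rate())
```

Since the listener here uses the same generative model as the speaker, MAP decoding of OL from S is Bayes-optimal for this toy channel; comparing listeners with mismatched (purely auditory or purely motor) models is the kind of systematic comparison the Bayesian framing makes possible.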
The work will be realized within a multidisciplinary team gathering knowledge in speech communication, cognitive theories and Bayesian modeling (Jean-Luc Schwartz in GIPSA-Lab Grenoble, Julien Diard in LPNC Grenoble, Pierre Bessière in ISIR Paris), in collaboration with Pierre-Yves Oudeyer in INRIA Bordeaux. Practical information The PhD position is open from September 2014, or slightly later if necessary. Candidates should have a master's degree, some knowledge of speech and cognitive modeling, and the ability to program and to develop computational models. They must send a CV, together with a letter explaining why they are interested in the project. They should also provide two names (with email addresses) for recommendations about their applications. This should be sent before June 15th to Jean-Luc Schwartz (Jean-Luc.Schwartz at gipsa-lab.grenoble-inp.fr). Interviews will be held with preselected candidates. Decisions will follow in the subsequent weeks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kerstin at nld.ds.mpg.de Mon May 19 09:33:05 2014 From: kerstin at nld.ds.mpg.de (Kerstin Mosch) Date: Mon, 19 May 2014 15:33:05 +0200 Subject: Connectionists: Bernstein Conference 2014: Abstract Submission Extended In-Reply-To: <537A0141.2020702@bcos.uni-freiburg.de> References: <537A0141.2020702@bcos.uni-freiburg.de> Message-ID: <537A0811.3020001@nld.ds.mpg.de> Abstract submission deadline EXTENDED: With more than 200 submissions for posters, we are already getting close to the numbers of the last Bernstein Conference. As there is additional poster space available in Göttingen and as there is still sufficient time for preparations, we have decided to extend the deadline for abstract submissions until *May 26, 2014.* Remaining slots will be filled on a first-come, first-served basis.
Best regards, Andrea Huber Brösamle REMINDER: Call for Abstracts: Bernstein Conference 2014 *Deadline of Abstract Submission: May 26, 2014* ************************************************************** Satellite Workshops September 02-03, 2014 Main Conference September 03-05, 2014 ************************************************************** The Bernstein Conference has become the largest annual Computational Neuroscience Conference in Europe and now regularly attracts more than 500 international participants. This year, the Conference is organized by the Bernstein Focus Neurotechnology Goettingen and will take place September 03-05, 2014. In addition, there will be a series of pre-conference satellite workshops on September 02-03, 2014. The Bernstein Conference is a single-track conference, covering all aspects of Computational Neuroscience and Neurotechnology, and sessions for poster presentations are an integral part of the conference. We now invite the submission of abstracts for poster presentations from all relevant areas. Accepted abstracts will be published online and will be citable via Digital Object Identifiers (DOI). Additionally, a small number of abstracts will be selected for contributed talks.
DETAILS FOR ABSTRACT SUBMISSION: For abstract submission visit: http://www.bernstein-conference.de/abstracts Deadline: May 26, 2014 CONFERENCE DETAILS: Venue: Satellite Workshops and Main Conference, Central Lecture Hall (ZHG), Platz der Goettinger Sieben 5, 37073 Göttingen, Germany Registration: Conference Registration starts in April, 2014 Early registration deadline: July 7, 2014 For more information on the conference, please visit the website: http://www.bernstein-conference.de PUBLIC PHD STUDENT EVENT: September 02, 2014 PUBLIC LECTURE: September 04, 2014 PROGRAM COMMITTEE: Ilka Diester, Daniel Durstewitz, Thomas Euler, Alexander Gail, Matthias Kaschube, Jason Kerr, Roland Schaette, Fred Wolf, Florentin Wörgötter ORGANIZING COMMITTEE: Florentin Wörgötter, Kerstin Mosch, Yvonne Reimann Bernstein Coordination Site (BCOS): Andrea Huber Broesamle, Mareike Kardinal, Kerstin Schwarzwaelder We look forward to seeing you in Goettingen in September! -- Dr. Andrea Huber Brösamle Head of the Bernstein Coordination Site (BCOS) Bernstein Network Computational Neuroscience Albert Ludwigs University Freiburg Hansastr. 9A 79104 Freiburg, Germany phone: +49 761 203 9583 fax: +49 761 203 9585 andrea.huber at bcos.uni-freiburg.de www.nncn.de Twitter: NNCN_Germany YouTube: Bernstein TV Facebook: Bernstein Network Computational Neuroscience, Germany LinkedIn: Bernstein Network Computational Neuroscience, Germany
From rli at cs.odu.edu Thu May 15 09:42:46 2014 From: rli at cs.odu.edu (rli) Date: Thu, 15 May 2014 13:42:46 +0000 Subject: Connectionists: KDD workshop: BrainKDD Call for Papers Message-ID: <77A78D2D4678A2428E4D244F82B9FEF40123EA7226@relatum.cs.odu.edu> KDD workshop: BrainKDD Call for Papers [Apologies for cross-posting] BrainKDD Call for Papers BrainKDD: International Workshop on Data Mining for Brain Science in conjunction with ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD'14) August 24, 2014, New York City https://sites.google.com/site/brainkdd/ Understanding brain function is one of the greatest challenges facing science. Today, brain science is experiencing rapid changes and is expected to achieve major advances in the near future. In April 2013, U.S. President Barack Obama formally announced the Brain Research Through Advancing Innovative Neurotechnologies Initiative, the BRAIN Initiative. In Europe, the European Commission has recently launched the European Human Brain Project (HBP). In the private sector, the Allen Institute for Brain Science is embarking on a new 10-year plan to generate comprehensive, large-scale data in the mammalian cerebral cortex under the MindScope project. These ongoing and emerging projects are expected to generate a deluge of data that capture brain activity at different levels of organization. There is thus a compelling need to develop the next generation of data mining and knowledge discovery tools that allow one to make sense of this raw data and to understand how neurological activity encodes information. This workshop will focus on exploring the frontier between computer science and brain science and inspiring fundamentally new ways of mining and knowledge discovery from a variety of brain data.
We encourage submissions in, but not limited to, the following areas: * Mining of in situ hybridization and microarray gene expression data * Mining of brain connectivity and circuitry data * Mining of structural and functional MRI data * Mining of EEG and related data * Mining of temporal developing brain data * Mining of spatial neuroimaging data * Integrative mining of multi-modality brain data * Mining of diseased brain data, such as Alzheimer's disease, Parkinson's disease, and schizophrenia * Segmentation and registration of neuroimaging data Important Dates: June 16th, 2014: Paper Submission Due July 8th, 2014: Notification of Acceptance August 24th, 2014: Workshop Presentation Submission Instructions: Papers should be prepared using the ACM Proceedings Format http://www.acm.org/sigs/publications/proceedings-templates#aL2 and should be at most 9 pages long. Papers should be submitted in PDF format through the following link: https://www.easychair.org/conferences/?conf=brainkdd2014 Publication: Accepted workshop papers will be invited to a journal to be determined (subject to peer review).
Invited Speakers: * Partha Mitra, Cold Spring Harbor Laboratory * Hanchuan Peng, Allen Institute for Brain Science Program Committee Members: * Gal Chechik, Bar Ilan University * Leon French, Rotman Research Institute at Baycrest * Junzhou Huang, University of Texas at Arlington * Shuai Huang, University of South Florida * Dean Krusienski, Old Dominion University * Jiang Li, Old Dominion University * Jing Li, Arizona State University * Yonggang Shi, University of Southern California * Lin Yang, University of Kentucky * Pew-Thian Yap, University of North Carolina at Chapel Hill * Jiayu Zhou, Arizona State University Program Committee Chairs: Michael Hawrylycz Allen Institute for Brain Science Shuiwang Ji Old Dominion University Tianming Liu University of Georgia Dinggang Shen University of North Carolina at Chapel Hill Publicity Chair: Rongjian Li Old Dominion University From kenneth.harris at ucl.ac.uk Mon May 19 12:22:27 2014 From: kenneth.harris at ucl.ac.uk (Harris, Kenneth) Date: Mon, 19 May 2014 16:22:27 +0000 Subject: Connectionists: Postdoctoral vacancies in London Message-ID: <3da8c7452631421c8768554ac133ad10@DB3PR01MB266.eurprd01.prod.exchangelabs.com> Dear Connectionists, We are seeking applicants for 3 postdoctoral jobs at the interface of computational and experimental neuroscience, working with Profs. Kenneth Harris and Matteo Carandini. The closing date for all positions is June 14 2014. For details on how to apply, please see https://www.ucl.ac.uk/cortexlab/positions. Position 1: Large-scale analysis of neocortical activity Modern experimental methods allow for simultaneous recording of thousands of neurons. These methods provide an unprecedented opportunity to study how neuronal populations process information. However, turning this data into concrete conclusions about brain function raises a twofold challenge.
First, statistical methods required to characterize, visualize, and test hypotheses about neural data must be developed; second, the data sets involved are so large that they require advanced computational techniques for their efficient processing. We are seeking a postdoctoral scientist to study population coding and circuit function in cortex, by working with data from large-scale electrophysiological and optical recordings. This position would suit an individual with strong quantitative skills together with good knowledge of neurobiology. Position 2: Large-scale simulation of cortical circuits Recent years have seen the development of techniques for rapid simulation of neural circuits, making it at last possible to model how cellular and synaptic properties determine cortical activity in circuits of realistic size. We are seeking a postdoctoral scientist to build large-scale models of recurrent spiking cortical networks, using hardware systems such as large-scale GPGPU clusters or specialized chips such as SpiNNaker. Close interaction with experimental neuroscience will be a key part of the project, and the models will be constrained by their ability both to reproduce patterns of population activity measured in vivo and to perform real-world visual classification tasks. This project involves the use of specialized hardware, and would suit a candidate with strong programming ability as well as neuroscience knowledge. Position 3: Experimental study of cortical population activity Developments in electrophysiology, microscopy, and genetics now make it possible to measure the activity of thousands of neurons simultaneously, while identifying and controlling specific cell classes in real time with light. Combining these experimental technologies with large-scale informatics and computational analysis provides a tremendous tool to understand the function of cortical circuits.
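[Editor's sketch] The recurrent spiking models mentioned under Position 2 can be illustrated with a minimal leaky integrate-and-fire (LIF) network. This is a toy NumPy stand-in with arbitrary illustrative constants, not the lab's code; production-scale versions would target GPGPU clusters or SpiNNaker as the ad describes.

```python
# Minimal recurrent LIF network: Euler integration of tau*dv/dt = (v_rest - v) + drive,
# with threshold/reset spiking and random recurrent coupling. Toy constants throughout.
import numpy as np

rng = np.random.default_rng(1)
n, steps, dt = 100, 1000, 1e-3                  # neurons, time steps, step size (s)
tau, v_rest, v_thresh, v_reset = 20e-3, -70e-3, -50e-3, -65e-3   # LIF constants (s, V)
W = 0.5e-3 * rng.standard_normal((n, n))        # recurrent weights (V per presynaptic spike)
np.fill_diagonal(W, 0.0)                        # no self-connections

v = np.full(n, v_rest)
spike_counts = np.zeros(n, dtype=int)
for _ in range(steps):
    spikes = v >= v_thresh                      # which neurons fire this step
    v[spikes] = v_reset
    spike_counts += spikes
    drive = 50e-3 * rng.random(n)               # noisy suprathreshold external drive (V)
    # leak toward v_rest plus external drive, plus recurrent spike input
    v += dt * (v_rest - v + drive) / tau + W @ spikes

print("mean firing rate (Hz):", spike_counts.mean() / (steps * dt))
```

With a mean drive of 25 mV against a 20 mV rest-to-threshold gap, the network fires in a mean-driven regime; constraining such a model against in-vivo population statistics is the harder scientific step the position is about.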
We are seeking a postdoctoral scientist to apply these tools to study network activity during a visual discrimination task in behaving mice. This project involves integrating multiple techniques currently in use in our lab including large-scale electrophysiological recordings, optogenetics, two-photon microscopy, and mouse behaviour. This position would suit a candidate with a background in experimental neurobiology, engineering, or experimental physics. --------------------- Kenneth D. Harris Professor of Quantitative Neuroscience Institute of Neurology, Department of Neuroscience, Physiology, and Pharmacology University College London 21 University Street London WC1E 6DE Phone: +44 (0)20 3108 2410 From dubuf at ualg.pt Tue May 20 07:06:53 2014 From: dubuf at ualg.pt (Hans du Buf) Date: Tue, 20 May 2014 12:06:53 +0100 Subject: Connectionists: Two postdoc positions: computational vision, sparse coding Message-ID: <537B374D.1030000@ualg.pt> The Vision Laboratory develops models of visual perception. Our models of simple, complex and end-stopped cells run in real time on multi-core CPUs and on GPUs. See the V1 keypoints at http://w3.ualg.pt/~kterzic/videos.html Our keypoints show state-of-the-art repeatability when compared to SIFT etc. We are now also developing binary descriptors to outperform BRISK etc. The keypoints and descriptors will be embedded into a deep hierarchy for object recognition in complex scenes. Basic descriptors already work on mobile robots. We are at the edge between computer vision and computational neuroscience. We can hire two postdocs who are proficient in C++ and OpenCV. Since guaranteed funding is available until March 31, 2015, we seek adventurous candidates who can jump on the plane before the 1st of July.
Adventurous because we are on the brink of a real breakthrough with top-class papers, and follow-up funding is being prepared via both Portuguese and European projects. Duration: July 1st - March 31, possibly continued. Remuneration: 1495 euro/month - exempt from taxation. Location: Faro, sunny Algarve, Portugal. VISA: European citizen or valid working permit in Schengen area. Interested postdocs must apply as soon as possible, but BEFORE June 13, 2014! One postdoc MUST have the PhD degree by then. Please send email with ONLY CV to Prof. Joao Rodrigues (jrodrig at ualg.pt) or to me (dubuf at ualg.pt). Please use Subject: KPpostdocs Regards, Hans -- ======================================================================= Prof.dr.ir. J.M.H. du Buf mailto:dubuf at ualg.pt Dept. of Electronics and Computer Science - FCT, University of Algarve, fax (+351) 289 818560 Campus de Gambelas, 8000 Faro, Portugal. tel (+351) 289 800900 ext 7761 ======================================================================= UALG Vision Laboratory: http://w3.ualg.pt/~dubuf/vision.html ======================================================================= From bcbt at upf.edu Mon May 19 11:20:46 2014 From: bcbt at upf.edu (BCBT, Bústia Compartida) Date: Mon, 19 May 2014 17:20:46 +0200 Subject: Connectionists: CfP BCBT2014 Summer School, Barcelona, Spain Message-ID: *Barcelona Cognition, Brain and Technology Summer School (BCBT2014)* University Pompeu Fabra, Barcelona, Spain. September 1-12, 2014 http://bcbt.upf.edu/bcbt14/ Join us for the annual *"Barcelona Cognition, Brain and Technology summer school - BCBT2014"* (see http://bcbt.upf.edu/bcbt14) to be held from September 1-12, 2014 in Barcelona, Spain.
The BCBT summer school is part of a series of events organized by the Convergent Science Network of Biomimetics and Neurotechnology (http://csnetwork.eu/) and is positioned at the interface between biology and future and emerging technologies, with an emphasis on the principles underlying brains and their translation to avant-garde technologies. BCBT promotes a shared systems-level understanding of the functional architecture of the brain and its possible emulation in artificial systems. This is the 7th year of BCBT, which has run with great success in previous years thanks to an excellent line-up of outstanding speakers (see bcbt.upf.edu for previous editions of the school). BCBT lectures are available online (see http://csnetwork.eu/), as are the CSN *Podcast interviews* with many of the BCBT speakers. BCBT caters to students and researchers involved in projects that are in the ambit of "BIO-ICT convergence", "Brain Inspired ICT", and "Cognitive systems-robotics". BCBT will offer up to 30 student slots for the practical workshops, while any number of interested researchers can register to attend the presentations and discussion sessions. The atmosphere of the BCBT summer school is informal, with the goal of stimulating in-depth discussion. There are presentations in the morning, usually 2, with the afternoons reserved for tutorials and projects for student participants and road-mapping workshops or further discussion for senior scientists. For the 2014 edition we plan a *1st week* of the school addressing *Evolutionary and Developmental aspects of nervous systems and behavior* (*Session 1*, 1-5 Sept), while the *2nd week* will include presentations aimed at the investigation of *Core Brain Systems* with emphasis on the basal ganglia (*Session 2*, 8-9 Sept) followed by roadmapping sessions (*Session 3*, 10-11 Sept). Friday 12 is reserved for the presentation of the student projects, to which everybody is invited.
*Application deadline: August 2nd, 2014* For more information, and to apply, please go to http://bcbt.upf.edu/bcbt14/home BCBT2014 is supported by the Convergent Science Network of Biomimetics and Neurotechnology (http://csnetwork.eu/) and is co-organized by Paul Verschure (BCBT Chair) SPECS, University Pompeu Fabra Barcelona, ES Tony Prescott (BCBT Co-Chair) University of Sheffield, UK Leah Krubitzer (BCBT Co-Chair) UC Davis, CA, USA Anna Mura (BCBT Co-Chair) SPECS, University Pompeu Fabra Barcelona, ES From schaul at gmail.com Tue May 20 13:28:35 2014 From: schaul at gmail.com (Tom Schaul) Date: Tue, 20 May 2014 18:28:35 +0100 Subject: Connectionists: Competition: General Video Game AI Message-ID: tl;dr: Build a general-purpose learning/planning/search agent that can play well on many games, such as Space Invaders or Zelda -- and compete! Start here: http://www.gvgai.net/ ------------- The first edition of the General Video Game AI Competition (GVG-AI) will be held this year in association with the Computational Intelligence in Games conference (CIG-2014): http://www.gvgai.net The objective is to develop controllers that can play not just a single video game, but any number of unseen games (within the class of 2D arcade-style games). The starter kit includes 10 games (with 5 levels each), for example: Boulder Dash, Missile Command, Sokoban, Space Invaders, and Zelda. But the general agent will end up being evaluated on 10 unseen games! Deadline: August 1st 2014. A starter package includes some simple algorithms to get off the ground quickly, including a random agent, an MCTS-based agent, and a genetic-algorithm based one: http://www.gvgai.net/sampleControllers.php The CIG track for competition papers is still open too, here: http://cig2014.de/doku.php/call_for_papers Good luck!
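[Editor's sketch] To give a flavor of what the sample controllers do, here is a flat Monte Carlo controller: pick the action whose random rollouts score best on average against a forward model. The GVG-AI framework itself is Java and exposes its own state/forward-model API; the tiny GridGame class below is invented here purely for illustration.

```python
# Flat Monte Carlo action selection against a toy forward model.
import random

random.seed(7)

class GridGame:
    """Toy 1-D 'walk to the goal' game standing in for a real forward model."""
    ACTIONS = (-1, 0, +1)

    def __init__(self, pos=0, goal=7, t=0):
        self.pos, self.goal, self.t = pos, goal, t

    def copy(self):
        return GridGame(self.pos, self.goal, self.t)

    def step(self, action):
        self.pos += action
        self.t += 1

    def score(self):
        return -abs(self.goal - self.pos)       # closer to the goal is better

    def is_over(self):
        return self.pos == self.goal or self.t >= 20

def flat_monte_carlo(state, n_rollouts=30, horizon=10):
    """Pick the action whose random rollouts score best on average."""
    best_action, best_value = None, float("-inf")
    for a in GridGame.ACTIONS:
        total = 0.0
        for _ in range(n_rollouts):
            sim = state.copy()                  # simulate on a copy, never the live game
            sim.step(a)
            for _ in range(horizon):
                if sim.is_over():
                    break
                sim.step(random.choice(GridGame.ACTIONS))
            total += sim.score()
        if total / n_rollouts > best_value:
            best_action, best_value = a, total / n_rollouts
    return best_action

game = GridGame()
while not game.is_over():
    game.step(flat_monte_carlo(game))
print("final position:", game.pos, "after", game.t, "steps")
```

The MCTS sample agent in the starter package refines exactly this idea by growing a search tree and balancing exploration against exploitation, rather than averaging uniform rollouts per action.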
Diego Perez, Spyridon Samothrakis, Julian Togelius, Tom Schaul and Simon Lucas -- http://schaul.site44.com From gpipa at uos.de Mon May 19 07:57:34 2014 From: gpipa at uos.de (Gordon Pipa) Date: Mon, 19 May 2014 13:57:34 +0200 Subject: Connectionists: Research Assistant Position in the Neuroinformatics lab / Institute of Cognitive Science, University Osnabrück Message-ID: <01fe01cf7359$8617bd20$92473760$@uos.de> The Neuroinformatics Research Group (Prof. Dr. Gordon Pipa) of the Institute of Cognitive Science invites applications for a Research Assistant (Postdoc level), (Salary level E 13 TV-L) to be filled by July 1, 2014 for a period of three years. The position allows for further scientific qualification. Description of Responsibilities: The position involves research on complex neuronal networks and the analysis of data generated by such systems. The research will develop tools and apply them to the estimation of complex and causal interactions between neurons and larger areas of the brain, based on various measurements, such as spiking activity, LFP and ECoG. The successful candidate will be involved in several research projects that range from BSc to PhD projects. The position also involves teaching Cognitive Science courses at B.Sc. and M.Sc. level (4 hours/week). Required Qualifications: Candidates are expected to have an excellent academic degree (PhD and Master/diploma). Applicants should have excellent knowledge of machine learning, e.g. generalized linear and state-space models, kernel methods, reservoir computing and deep learning. Experience or a strong interest in the fields of complex systems, computational neurosciences, dynamical systems theory and the concepts used to study these, e.g.
bifurcation and stability analysis, delay-coupled differential equations, and measures of complexity, is encouraged. The position may be held full or part-time. As a certified family-friendly institution, Osnabrück University is committed to furthering the compatibility between work/studies and family life. Furthermore, qualified applicants with disabilities will be favored. Applications with the usual documentation, including two letters of recommendation (hardcopies which will not be returned) should be submitted no later than June 4, 2014 to the Director of the Institute of Cognitive Science, University of Osnabrueck, Albrechtstraße 28, 49076 Osnabrueck. An electronic copy of your application should be sent to gpipa at uni-osnabrueck.de. Further information can be provided by Prof. Dr. Gordon Pipa (gpipa at uni-osnabrueck.de). ---------------------------------------------------------------------------- --- Professor and Chair of the Neuroinformatics Department Dr. rer. nat. Gordon Pipa Institute of Cognitive Science, Room 31/404 University of Osnabrueck Albrechtstr. 28, 49069 Osnabrueck, Germany tel. +49 (0) 541-969-2277 fax (private). +49 (0) 5405- 500 80 98 home office. +49 (0) 5405- 500 90 95 e-mail: gpipa at uos.de webpage: http://www.ni.uos.de Projects: http://www.pipa.biz/Projects_and_Lectures/projects_and_lectures.html research gate: https://www.researchgate.net/profile/Gordon_Pipa/?ev=prf_act google scholar: http://scholar.google.de/citations?user=joR6mgEAAAAJ Personal Assistant and Secretary of the Neuroinformatics lab: Anna Jungeilges Tel. +49 (0)541 969-2390 Fax +49 (0)541 969-2246 Email: anna.jungeilges at uni-osnabrueck.de visit us on http://www.facebook.com/CognitiveScienceOsnabruck https://twitter.com/#!/CogSciUOS
From connectionists at mspacek.mm.st Wed May 21 18:21:59 2014 From: connectionists at mspacek.mm.st (Martin Spacek) Date: Wed, 21 May 2014 15:21:59 -0700 Subject: Connectionists: How the brain works In-Reply-To: References: Message-ID: <537D2707.5070709@mspacek.mm.st> Somewhat off-topic, Stephen Colbert interviewed Steven Pinker on the Colbert Report in 2007, and asked him to describe, on the spot, how the brain works in 5 words or less. His reply: "Brain cells fire in patterns." http://thecolbertreport.cc.com/videos/n36pgb/steven-pinker (I think that's the link, but I'm outside the US so I can't view it.) > Terry Sejnowski told us that the new Obama initiative is like the moon > project. When this program was initiated we had no idea how to accomplish > this, but dreams (and money) can be very motivating. > > This is a nice point, but I don't understand what a connection plan would > give us. I think without knowing precisely where and how strong connections > are made, and how each connection would influence a postsynaptic or glia etc > cells, such information is useless. So why not having the goal of finding a > cure for epilepsy? Why not have the goal of finding a cure for epilepsy? I propose that neuroscience today is mostly a study of how the brain breaks. Unfortunately, for those of us who aren't so interested in a specific disease, or disease at all, grant proposals often still need to be couched in those terms. Studying how a thing breaks can only get you so far. At some point, to really make progress, you need to figure out how the darn thing works when it ain't broke. That's what makes the Human Brain Project and the Brain Initiative worthwhile ventures, even if they aren't hypothesis driven. Martin Spacek PhD candidate, Graduate Program in Neuroscience Swindale Lab Dept.
of Ophthalmology and Visual Sciences University of British Columbia, Vancouver, BC, Canada http://mspacek.github.io From ale at sissa.it Wed May 21 18:31:34 2014 From: ale at sissa.it (Alessandro Treves) Date: Thu, 22 May 2014 00:31:34 +0200 Subject: Connectionists: ICTP Winter School on Systems Neuroscience in Trieste, Dec 2014 Message-ID: <20140522003134.Horde.2YPwNR8V4mxTfSlGod_CEiA@webmail.sissa.it> The Abdus Salam International Centre for Theoretical Physics is organizing a Winter School on Quantitative Systems Biology with a focus on System Neuroscience, from 1 to 12 December 2014, in Trieste, Italy. New experimental techniques are opening windows on biological mechanisms inside the cell and in the brain, making these systems accessible to quantitative investigation and calling for an understanding of brain functions at the systemic level. The school is targeted at young researchers, especially at the Ph.D. and postdoc level, who either work in this area or hope to do so. Lecturers include: Winfried Denk (Max Planck Inst., Germany) Stefano Fusi (Columbia, USA) Kate Jeffery (UCL, UK) Etienne Koechlin (ENS Paris, France) Venkatesh Murthy (Harvard, USA) Israel Nelken (Hebrew Univ., Israel) Botond Roska (FMI, Switzerland) Wolfram Schultz (Univ. Cambridge, UK) Tatyana Sharpee (Salk Institute, USA) Misha Tsodyks (Weizmann Institute, Israel) Fred Wolf (Univ. Göttingen, Germany) There is no participation fee for selected participants.
DEADLINE for applications June 1st, 2014 Details on http://cdsagenda5.ictp.trieste.it/full_display.php?ida=a13235 -- Alessandro Treves http://people.sissa.it/~ale/limbo.html SISSA - Cognitive Neuroscience, via Bonomea 265, 34136 Trieste, Italy and Master in Complex Actions http://www.mca.sissa.it/ From weng at cse.msu.edu Wed May 21 23:33:58 2014 From: weng at cse.msu.edu (Juyang Weng) Date: Thu, 22 May 2014 11:33:58 +0800 Subject: Connectionists: How the brain works In-Reply-To: <537D2707.5070709@mspacek.mm.st> References: <537D2707.5070709@mspacek.mm.st> Message-ID: <537D7026.3080609@cse.msu.edu> From what I understand preliminarily about how the brain works, a cure for epilepsy can come only after we know how the brain works, not before. With great respect to Terry, I guess that Obama's BRAIN project should not be like the moon project. The moon project is an engineering problem. An engineering problem can be solved mainly by government funds. But most goals of the BRAIN project (e.g., curing brain diseases) cannot be reached without theoretically understanding how the brain works. In this regard, sorry, Obama was wrong. However, theoretically understanding how the brain works is also a cultural problem. The most fundamental problem of the BRAIN project is the research environment in the U.S., i.e., the division of walls between disciplines. It is a problem of culture. Why? Suppose that one gave everyone on this connectionists list a largely correct model of how the brain works; few on this list would be able to understand it, let alone agree with it! Still remember the problem with Charles Darwin's evolutionary theory? The problem of how the brain works is like that and more, because it is further highly cross-disciplinary. Charles Darwin's evolutionary theory was largely of a single discipline, if I understand it correctly. Such cultural problems cannot be solved only by government funds. They require advances of culture in our research community.
Unfortunately, advances of culture take many decades. How long did it take for human culture to accept Charles Darwin's evolutionary theory? -John On 5/22/14 6:21 AM, Martin Spacek wrote: > Somewhat off-topic, Stephen Colbert interviewed Steven Pinker on the > Colbert Report in 2007, and asked him to describe, on the spot, how > the brain works in 5 words or less. His reply: > > "Brain cells fire in patterns." > > http://thecolbertreport.cc.com/videos/n36pgb/steven-pinker > (I think that's the link, but I'm outside the US so I can't view it.) > >> Terry Sejnowski told us that the new Obama initiative is like the moon >> project. When this program was initiated we had no idea how to >> accomplish >> this, but dreams (and money) can be very motivating. >> >> This is a nice point, but I don't understand what a connection plan >> would >> give us. I think without knowing precisely where and how strong >> connections >> are made, and how each connection would influence a postsynaptic or >> glia etc >> cells, such information is useless. So why not having the goal of >> finding a >> cure for epilepsy? > > Why not have the goal of finding a cure for epilepsy? I propose that > neuroscience today is mostly a study of how the brain breaks. > Unfortunately, for those of us that aren't so interested in a specific > disease, or disease at all, grant proposals often still need to be > couched in those terms. > > Studying how a thing breaks can only get you so far. At some point, to > really make progress, you need to figure out how the darn thing works > when it ain't broke. That's what makes the Human Brain Project and the > Brain Initiative worthwhile ventures, even if they aren't hypothesis > driven. > > Martin Spacek > PhD candidate, Graduate Program in Neuroscience > Swindale Lab > Dept.
of Ophthalmology and Visual Sciences > University of British Columbia, Vancouver, BC, Canada > http://mspacek.github.io > -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- From levine at uta.edu Thu May 22 00:21:39 2014 From: levine at uta.edu (Levine, Daniel S) Date: Wed, 21 May 2014 23:21:39 -0500 Subject: Connectionists: How the brain works In-Reply-To: <537D7026.3080609@cse.msu.edu> References: <537D2707.5070709@mspacek.mm.st>,<537D7026.3080609@cse.msu.edu> Message-ID: <581625BB6C84AB4BBA1C969C69E269EC025258C1E817@MAVMAIL2.uta.edu> John, I agree about the need for cultural change. But I'm glad you said ONLY by government funds. Government funds can contribute to the necessary cultural change IF they are used in the right way. And the right way is to encourage risk taking and cross-disciplinary thinking, not merely to entrench a few groups who are recognized experts. The size of the overall funding pie also needs to be sufficient that both recognized experts and imaginative novices can get more freedom to pursue their ideas. Dan ________________________________________ From: Connectionists [connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Juyang Weng [weng at cse.msu.edu] Sent: Wednesday, May 21, 2014 10:33 PM To: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: How the brain works From what I understand preliminarily about how the brain works, a cure for epilepsy can come only after we know how the brain works, not before. With great respect to Terry, I guess that the Obama's BRAIN project should not be like the moon project. The moon project is an engineering problem. 
An engineering problem can be solved mainly by government funds. But most goals of the BRAIN project (e.g., cure brain diseases) cannot be reached without theoretically understanding how the brain works. In this regard, sorry, Obama was wrong. However, theoretically understanding how the brain works is also a cultural problem. The most fundamental problem of the BRAIN project is the research environment in the U.S., i.e., the walls between disciplines. It is a problem of culture. Why? Suppose that one gave all in this connectionists list a largely correct model about how the brain works, few on this list would be able to understand it let alone agree with it! Still remember the problem with Charles Darwin's evolutionary theory? The problem of how the brain works is like that and more, because it is further highly cross-disciplinary. Charles Darwin's evolutionary theory was largely within a single discipline, if I understand it correctly. Such cultural problems cannot be solved only by government funds. They require advances in the culture of our research community.

-John

----------------------------------------------

From marc.toussaint at informatik.uni-stuttgart.de Thu May 22 07:07:54 2014 From: marc.toussaint at informatik.uni-stuttgart.de (Marc Toussaint) Date: Thu, 22 May 2014 13:07:54 +0200 Subject: Connectionists: Call for Participation: Autonomous Learning Summer School 2014, Leipzig, Germany Message-ID: <537DDA8A.40103@informatik.uni-stuttgart.de>

Summer School on Autonomous Learning
September 1-4, 2014
Max-Planck-Institute for Mathematics in the Sciences, Leipzig
supported by the DFG Priority Programme 1527

Autonomous Learning research aims at understanding
how autonomous systems can efficiently learn from interaction with the environment, especially by taking an integrated approach to decision making and learning that allows systems to autonomously decide on actions, representations, hyperparameters and model structures for the purpose of efficient learning. In this summer school, international and national experts will introduce the core concepts and related theory for autonomous learning in real-world environments. We hope to foster the enthusiasm of young researchers for this exciting research area, giving them the opportunity to meet leading experts in the field and similarly interested students. The tutorials are structured around three themes: 1. learning representations, 2. acting to learn (exploration), and 3. learning to act in real-world environments (robotics). This course is free of charge, but participants have to cover their own travel, room and board. Registration is possible with the application form on this website until *May 31*. More information at the Priority Programme website and at MPI MiS Leipzig: http://autonomous-learning.org/summer-school-2014-on-autonomous-learning/ http://www.mis.mpg.de/calendar/conferences/2014/al.html

Speakers:
* Shun-ichi Amari (RIKEN, Japan)
* Satinder Singh (University of Michigan, USA)
* Tamim Asfour (KIT Karlsruhe)
* Michael Beetz (Bremen University)
* Matthias Bethge (University of Tübingen, MPI for Biological Cybernetics, Bernstein Center for Computational Neuroscience)
* Keyan Ghazi-Zahedi (MPI for Mathematics in the Sciences)
* Thomas Martinetz (University of Lübeck)
* Helge Ritter (Bielefeld University)
* Friedrich Sommer (Redwood Center for Theoretical Neuroscience, UC Berkeley)
* Marc Toussaint (Stuttgart University)

Scientific Organizers:
Nihat Ay (MPI for Mathematics in the Sciences Leipzig)
Marc Toussaint (Stuttgart University)

Hope to see you in Leipzig!
Marc

From Vittorio.Murino at iit.it Thu May 22 10:27:25 2014 From: Vittorio.Murino at iit.it (Vittorio Murino) Date: Thu, 22 May 2014 16:27:25 +0200 Subject: Connectionists: 5th PAVIS School on CVPR: Scene Understanding & Object Recognition in Context - A. Torralba, A. Lapedriza Message-ID: <537E094D.7020502@iit.it>

!!! PLEASE NOTE THE CHANGE OF THE DATE !!!
(due to overlap with other events during the previously indicated period)

Apologies for multiple postings.

====================================================================
Call for Participation
5th PAVIS School on Computer Vision, Pattern Recognition, and Image Processing
September 29 - October 1, 2014 <<<<< NEW DATE !!!
Sestri Levante (GE), Italy
-------------------------------------------------------------------
SCENE UNDERSTANDING AND OBJECT RECOGNITION IN CONTEXT
-------------------------------------------------------------------
Invited speakers
* Antonio TORRALBA, MIT (USA)
* Agata LAPEDRIZA, Universitat Oberta de Catalunya (Spain) & MIT (USA)
--------------------------------------
REGISTRATION DEADLINE JULY 1, 2014 <<<<<
(see instructions below)
--------------------------------------

The goal of this school is to introduce recent advances in scene recognition, multiclass object detection, and object recognition in context. The class will cover global features for scene recognition (gist, deep features, etc.), databases for scene understanding (e.g., crowdsourcing, image annotation), methods for multiclass object detection (a short summary of object detection approaches with emphasis on multiclass techniques), and current approaches for object recognition in context and scene understanding. The theoretical sessions will be complemented with guided experiments in MATLAB. The detailed program of the school will be published later on the school website.
*********************************************************************
*
* REGISTRATION DEADLINE: July 1, 2014
*
* Interested applicants are invited to send an expression of interest
* to pavisschool2014 at iit.it asking for participation.
* PhD candidates should attach a curriculum vitae and, if possible,
* a letter from their supervisor in support of the request.
*
* Accepted candidates will receive an email containing the
* instructions for the actual registration and payment.
*
*********************************************************************

Notice that, due to the limited number of places, applications are subject to acceptance; for this reason, early registration or expressions of interest are encouraged. Attendees are expected to bring a laptop with a working version of MATLAB, since practical experiments will be performed during the school using open-source libraries such as VLFeat.

---------------------------------------------------------------
Registration Fees
- 150 euro for Ph.D. and undergraduate students.
- 250 euro for postdocs, researchers, and other people working in a university or a research institute.
- 300 euro for everybody else.
---------------------------------------------------------------

Director: Vittorio Murino
Local Organizers: Matteo Bustreo, Carlos Beltran-Gonzalez, Alessio Del Bue

School webpage: http://www.iit.it/en/pavis-schools/schoolpavis2014.html
Please check the website regularly for updated information.

----------------------------------------------------------------
This school follows a series of intensive courses targeting PhD students and researchers in the areas of Computer Vision, Image Processing, and Pattern Recognition. The course is residential, spanning 3 days, so that attendees can establish a more productive interaction with the lecturers. It is organized and sponsored by the PAVIS (Pattern Analysis and Computer Vision) department of the Istituto Italiano di Tecnologia, Genova (Italy).
The course will take place in the beautiful Baia del Silenzio in Sestri Levante (http://g.co/maps/xqnyr), located between the city of Genova and the border with Tuscany. The school is endorsed by GIRPR (Gruppo Italiano Ricercatori in Pattern Recognition).
=====================================================================

--
Vittorio Murino

*******************************************
Prof. Vittorio Murino, Ph.D.
PAVIS - Pattern Analysis & Computer Vision
IIT Istituto Italiano di Tecnologia
Via Morego 30, 16163 Genova, Italy
Phone: +39 010 71781 504 Mobile: +39 329 6508554 Fax: +39 010 71781 236
E-mail: vittorio.murino at iit.it
Secretary: Sara Curreli, email: sara.curreli at iit.it, Phone: +39 010 71781 917
http://www.iit.it/pavis
********************************************

From echeveste at th.physik.uni-frankfurt.de Thu May 22 04:11:41 2014 From: echeveste at th.physik.uni-frankfurt.de (Rodrigo Echeveste) Date: Thu, 22 May 2014 10:11:41 +0200 Subject: Connectionists: New Frontiers journal section "Computational Intelligence" Message-ID: <537DB13D.9000101@th.physik.uni-frankfurt.de>

This is to call your attention to a new Frontiers journal section "Computational Intelligence" http://www.frontiersin.org/Computational_Intelligence within Robotics and AI.
The first articles have been published:

---------------------
Rodrigo Echeveste and Claudius Gros
Generating functionals for computational intelligence: The Fisher information as an objective function for self-limiting Hebbian learning rules
http://journal.frontiersin.org/Journal/10.3389/frobt.2014.00001/abstract
---------------------
---------------------
Mikhail Prokopenko
Grand challenges for Computational Intelligence
http://journal.frontiersin.org/Journal/10.3389/frobt.2014.00002/full
---------------------

----------------------------------
Rodrigo Echeveste
echeveste at itp.uni-frankfurt.de
Institut für Theoretische Physik
Goethe Universität Frankfurt
----------------------------------

From yushan.mail at gmail.com Thu May 22 17:36:48 2014 From: yushan.mail at gmail.com (Yu Shan) Date: Thu, 22 May 2014 17:36:48 -0400 Subject: Connectionists: How the brain works In-Reply-To: <537D7026.3080609@cse.msu.edu> References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> Message-ID:

> Suppose that one gave all in this connectionists list a largely correct
> model about how the brain works, few on this list would be able to
> understand it let alone agree with it!

Let's look at a recent example. Nikolic proposed his theory (http://www.danko-nikolic.com/practopoiesis/) about how the brain works a few weeks ago to the Connectionists. Upon finishing reading this paper, I was quite excited. The theory is elegantly simple and yet has great explanatory power. It is also consistent with what we know about evolution as well as the brain's organization and development. Of course, we don't know yet if it is a "largely correct model about how the brain works". But, in my opinion, it has great potential. Actually, I am thinking about how to implement those ideas in my own future research. However, the author's efforts to introduce this work to the Connectionists received little attention.
Connectionists reaches 5000+ people, who are probably the most interested and capable audience for such a topic. This makes the silence particularly intriguing. Of course, one possible reason is that lots of people here have already studied this theory and deemed it irrelevant. But a more likely reason, I think, is that most people did not give it much thought. If that is the case, it raises an interesting question: what is the barrier that a theory of how the brain works needs to overcome in order to be treated seriously? In other words, what do we really want to know?

Shan Yu, Ph.D
Brainnetome Center and National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences
Beijing 100190, P. R. China
http://www.brainnetome.org/en/shanyu

From achler at gmail.com Thu May 22 18:58:23 2014 From: achler at gmail.com (Tsvi Achler) Date: Thu, 22 May 2014 15:58:23 -0700 Subject: Connectionists: How the brain works In-Reply-To: References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> Message-ID:

Well said! If I may add: the more a theory differs from what is in the mainstream, the less people seem to want to know. This means that something significantly "new" (which I will define as significantly different from what is currently popular) is less likely to be looked at. In my experience, the more different it is, the less likely it is to be looked at, stifling research.

-Tsvi

From janetw at itee.uq.edu.au Thu May 22 19:00:46 2014 From: janetw at itee.uq.edu.au (Janet Wiles) Date: Thu, 22 May 2014 23:00:46 +0000 Subject: Connectionists: How the brain works In-Reply-To: References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> Message-ID: <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au>

When does a model escape from a research lab? Or in other words, when do researchers beyond the in-group investigate, test, or extend a model? I have asked many colleagues this question over the years. Well-written papers help, open source code helps, tutorials help. But the most critical feature seems to be that it can be communicated in a single equation.
Think about backprop, reinforcement learning, Bayes theorem.

Janet Wiles
Professor of Complex and Intelligent Systems, School of Information Technology and Electrical Engineering
The University of Queensland

From harnad at ecs.soton.ac.uk Thu May 22 07:15:24 2014 From: harnad at ecs.soton.ac.uk (Stevan Harnad) Date: Thu, 22 May 2014 07:15:24 -0400 Subject: Connectionists: Web Science and the Mind: Montreal July 7-18 References: <8F4A1D9A-5853-47BE-A706-6250B919E1E1@ecs.soton.ac.uk> Message-ID: <058E8361-59FA-44CC-AD66-ABD5134B8AB9@ecs.soton.ac.uk>

WEB SCIENCE AND THE MIND
JULY 7 - 18, 2014
Université du Québec à Montréal, Montreal, Canada
Registration: http://www.summer14.isc.uqam.ca/page/inscription.php?lang_id=2

Cognitive Science and Web Science have been converging in the study of cognition: (i) distributed within the brain, (ii) distributed between multiple minds, (iii) distributed between minds and media.

SPEAKERS AND TOPICS
Katy BORNER (Indiana U): Humanexus: Envisioning Communication and Collaboration
Les CARR (U Southampton): Web Impact on Society
Simon DeDEO (Indiana U): Collective Memory in Wikipedia
Sergey DOROGOVTSEV (U Aveiro): Explosive Percolation
Alan EVANS (Montreal Neurological Institute): Mapping the Brain Connectome
Jean-Daniel FEKETE (INRIA): Visualizing Dynamic Interactions
Benjamin FUNG (McGill U): Applying Data Mining to Real-Life Crime Investigation
Fabien GANDON (INRIA): Social and Semantic Web: Adding the Missing Links
Lee GILES (Pennsylvania State U): Scholarly Big Data: Information Extraction and Data Mining
Peter GLOOR (MIT Center for Collective Intelligence): Collaborative Innovation Networks
Jennifer GOLBECK (U Maryland): You Can't Hide: Predicting Personal Traits in Social Media
Robert GOLDSTONE (Indiana U): Learning Along with Others
Stephen GRIFFIN (U Pittsburgh): New Models of Scholarly Communication for Digital Scholarship
Wendy HALL (U Southampton): It's All In the Mind
Harry HALPIN (U Edinburgh): Does the Web Extend the Mind - and Semantics?
Jiawei HAN (U Illinois/Urbana): Knowledge Mining in Heterogeneous Information Networks
Stevan HARNAD (UQAM): Memetrics: Monitoring, Measuring and Mapping Memes
Jim HENDLER (Rensselaer Polytechnic Institute): The Data Web
Tony HEY (Microsoft Research): Connections, Open Science and the Web
Francis HEYLIGHEN (Vrije U Brussel): Global Brain: Web as Self-organizing Distributed Intelligence
Bryce HUEBNER (Georgetown U): Macrocognition: Situated versus Distributed
Charles-Antoine JULIEN (McGill U): Visual Tools for Interacting with Large Networks
Kayvan KOUSHA (U Wolverhampton): Web Impact Metrics for Research Assessment
Guy LAPALME (U Montreal): Natural Language Processing on the Web
Vincent LARIVIERE (U Montreal): Scientific Interaction Before and Since the Web
Yang-Yu LIU (Northeastern U): Controllability and Observability of Complex Systems
Richard MENARY (U Macquarie): Enculturated Cognition
Thomas MALONE (MIT): Collective Intelligence: What is it? How to measure it? Increase it?
Adilson MOTTER (Northwestern U): Bursts, Cascades and Time Allocation
Cameron NEYLON (PLOS): Network Ready Research: The Role of Open Source and Open Thinking
Takashi NISHIKAWA (Northwestern U): Visual Analytics: Network Structure Beyond Communities
Filippo RADICCHI (Indiana U): Analogies between Interconnected and Clustered Networks
Mark ROWLANDS (Miami U): Extended Mentality: What It Is and Why It Matters
Robert RUPERT (U Colorado): What is Cognition and How Could it be Extended?
Derek RUTHS (McGill U): Social Informatics
Judith SIMON (ITAS): Socio-Technical Epistemology
John SUTTON (Macquarie U): Transactive Memory and Distributed Cognitive Ecologies
Georg THEINER (Villanova U): Domains and Dimensions of Group Cognition
Peter TODD (Indiana U): Foraging in the World Mind and Online
From levine at uta.edu Thu May 22 19:45:38 2014 From: levine at uta.edu (Levine, Daniel S) Date: Thu, 22 May 2014 18:45:38 -0500 Subject: Connectionists: How the brain works In-Reply-To: <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> Message-ID: <581625BB6C84AB4BBA1C969C69E269EC025258BDCEA0@MAVMAIL2.uta.edu>

I would disagree about the single equation. The brain needs to do a lot of different things to deal with the cognitive requirements of a changing and complex world, so that different functions (sensory pattern processing, motor control, etc.) may call for different structures and therefore different equations. Models of the brain become most useful when they can explain cognitive and behavioral functions that we take for granted in day-to-day life.

From bwyble at gmail.com Thu May 22 21:01:15 2014 From: bwyble at gmail.com (Brad Wyble) Date: Thu, 22 May 2014 21:01:15 -0400 Subject: Connectionists: How the brain works In-Reply-To: <581625BB6C84AB4BBA1C969C69E269EC025258BDCEA0@MAVMAIL2.uta.edu> References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> <581625BB6C84AB4BBA1C969C69E269EC025258BDCEA0@MAVMAIL2.uta.edu> Message-ID:

I concur with Dan. I'd caution anyone against the idea that a good theory of brain function should also subscribe to standards of mathematical aesthetics (i.e., that it can be reduced to a single function). Nature is functionally elegant, but that does not mean that it can be reduced to an elegant mathematical formalism. That said, I really like the question posed by Janet, which is to wonder when the research community at large should start to take a new model seriously. I don't think that there is a clear answer to this question, but there do seem to be two major factors: one is the degree to which the model compresses data into a simpler form (i.e., how good the theory is), and the other is the degree to which the model fills a perceived explanatory void in the field. Models that are rapidly adopted hit both of these marks.

-Brad

--
Brad Wyble
Assistant Professor
Psychology Department
Penn State University
http://wyblelab.com

From H.Abbass at adfa.edu.au Thu May 22 22:07:55 2014 From: H.Abbass at adfa.edu.au (Hussein Abbass) Date: Fri, 23 May 2014 02:07:55 +0000 Subject: Connectionists: How the brain works In-Reply-To: <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> Message-ID:

Janet is asking an important question. What a single equation means to me is that the model is connected and I can fit it in my working memory to understand it. I would suggest the following three principles as a minimum for a piece of work to leave the lab: 1.
The Minimum Knowledge Principle, 2. The Scalability vs. Minimalist Structure Principle, and 3. The Fit-for-Purpose or Validity Principle. Let me explain them in more detail.

1. The Minimum Knowledge Principle

This principle could also be called the high-school principle (for scientists) or the 7-year-old principle (for journalists). If we would like people to accept a model, they need to understand it. If they don't, it does not really matter how fascinating it is, or even how right it is (whatever "right" means); they will always resist it. BP is successful because a high school student with high school math can understand it. A journalist's message is successful when a 7-year-old can understand it. The higher the bar of minimum knowledge required to understand a model, the less likely the model will be widely accepted or adopted. In summary, the bar for the minimum knowledge needed to understand and use the model should be at a level that allows multiple groups in different sub-disciplines to understand it.

2. The Scalability vs. Minimalist Structure Principle

The classical feed-forward NN is successful because we can teach it with 2 neurons and implement it with hundreds or more nodes without a problem (obviously I am not downgrading numerical problems here). We can reduce the structure to a very small size to explain it, so that a single human can comprehend it properly, and we can scale it up to a very large dimension for real-world problems while maintaining the principles of the minimalist structure. This is evident in many cases around us. Let us ask: what are the most popular data mining models in industry? Feed-forward NNs, decision trees, k-means clustering, etc. They all share the same features: you can explain each of them on one page, and you can have larger versions while the model is still the same.

3. The Fit-for-Purpose or Validity Principle

How does the brain work?
It is indeed a great question to ask, but it seems to me there is an equally important question: what is the purpose of asking it? If the purpose is simply to advance our knowledge of how the brain works, then we need to ask: how do we know that we have truly advanced this knowledge? Instead of continuing my argument with a series of questions, I will cut it short. We need to define the purpose(s) and be in a position to make a sensible judgment that the advances we make fulfil this purpose. The only way we can have a system for an ant that is identical to an ant is to create a biological ant. If we do not, there is always a distance, a gap, between what we have (call it a model, knowledge, etc.) and what the biological system is. A purpose needs to be defined to tell us whether we accept the gap or not. A feed-forward NN is acceptable for basic regression purposes, but not acceptable as a biologically sound explanation of how the brain works. In the former, the gap is minimal, while in the latter, the gap and distance from the purpose are huge. More principles can be added, but I see the above as the most critical. Kind regards, Hussein -----Original Message----- From: Connectionists [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Janet Wiles Sent: Friday, 23 May 2014 7:01 AM To: Yu Shan Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: How the brain works When does a model escape from a research lab? Or in other words, when do researchers beyond the in-group investigate, test, or extend a model? I have asked many colleagues this question over the years. Well-written papers help, open source code helps, tutorials help. But the most critical feature seems to be that it can be communicated in a single equation. Think about backprop, reinforcement learning, Bayes' theorem.
Janet Wiles Professor of Complex and Intelligent Systems, School of Information Technology and Electrical Engineering The University of Queensland -----Original Message----- From: Connectionists [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Yu Shan Sent: Friday, 23 May 2014 7:37 AM To: Juyang Weng Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: How the brain works > Suppose that one gave all in this connectionists list a largely > correct model about how the brain works, few on this list would be > able to understand it let alone agree with it! > Let's look at a recent example. Nikolic proposed his theory (http://www.danko-nikolic.com/practopoiesis/) about how the brain works a few weeks ago to the Connectionists. Upon finishing this paper, I was quite excited. The theory is elegantly simple and yet has great explanatory power. It is also consistent with what we know about evolution as well as the brain's organization and development. Of course, we don't know yet if it is a "largely correct model about how the brain works". But, in my opinion, it has great potential. Actually, I am thinking about how to implement those ideas in my own future research. However, the author's effort to introduce this work to the Connectionists received little attention. Connectionists reach 5000+ people, who are probably the most interested and capable audience for such a topic. This makes the silence particularly intriguing. Of course, one possible reason is that lots of people here already studied this theory and deemed it irrelevant. But a more likely reason, I think, is that most people did not give it much thought. If that is the case, it raises an interesting question: what is the barrier that a theory of how the brain works needs to overcome in order to be treated seriously? In other words, what do we really want to know?
Shan Yu, Ph.D Brainnetome Center and National Laboratory of Pattern Recognition Institute of Automation Chinese Academy of Sciences Beijing 100190, P. R. China http://www.brainnetome.org/en/shanyu From janetw at itee.uq.edu.au Thu May 22 22:27:42 2014 From: janetw at itee.uq.edu.au (Janet Wiles) Date: Fri, 23 May 2014 02:27:42 +0000 Subject: Connectionists: How the brain works In-Reply-To: <581625BB6C84AB4BBA1C969C69E269EC025258BDCEA0@MAVMAIL2.uta.edu> References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> <581625BB6C84AB4BBA1C969C69E269EC025258BDCEA0@MAVMAIL2.uta.edu> Message-ID: <7F1713CE870C2941A370868EACF230894EDE1E@UQEXMDA6.soe.uq.edu.au> Will a single equation be a good model of the brain as a whole? Unlikely! Will a set of equation-sized chunks of knowledge suffice? It works for physics, but remains an empirical question for neuroscience. The point is not about what's the best way to model the brain, but rather about which models are adopted widely while others remain the province of a single lab. A model that can be expressed as a single equation seems to be a particularly effective meme for computational researchers.
Janet -----Original Message----- From: Levine, Daniel S [mailto:levine at uta.edu] Sent: Friday, 23 May 2014 9:46 AM To: Janet Wiles; Yu Shan Cc: connectionists at mailman.srv.cs.cmu.edu Subject: RE: Connectionists: How the brain works I would disagree about the single equation. The brain needs to do a lot of different things to deal with the cognitive requirements of a changing and complex world, so that different functions (sensory pattern processing, motor control, etc.) may call for different structures and therefore different equations. Models of the brain become most useful when they can explain cognitive and behavioral functions that we take for granted in day-to-day life. From achler at gmail.com Thu May 22 22:37:36 2014 From: achler at gmail.com (Tsvi Achler) Date: Thu, 22 May 2014 19:37:36 -0700 Subject: Connectionists: How the brain works In-Reply-To: References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> <581625BB6C84AB4BBA1C969C69E269EC025258BDCEA0@MAVMAIL2.uta.edu> Message-ID: I think it is also interesting to ask the opposite question: what type of models would be especially difficult to introduce to the community? Here is my short list: 1. Models that go against years of dogma (e.g., arguing against the idea that neural networks learn through a gradient descent mechanism) 2. Models reinterpreting existing data in a different way but overall getting similar results. -Tsvi On Thu, May 22, 2014 at 6:01 PM, Brad Wyble wrote: > I concur with Dan. I'd caution anyone against the idea that a good theory of > brain function should also adhere to standards of mathematical aesthetics > (i.e. that it can be reduced to a single function). Nature is functionally > elegant, but that does not also mean that it can be reduced to an elegant > mathematical formalism. > > That said, I really like the question posed by Janet, which is to wonder > when the research community at large should start to take a new model > seriously. I don't think that there is a clear answer to this question but > there do seem to be two major factors: one is the degree to which the model > compresses data into a simpler form (i.e. how good is the theory), and the > other is the degree to which the model fills a perceived explanatory void in > the field. Models that are rapidly adopted hit both of these marks. > > -Brad From bower at uthscsa.edu Fri May 23 10:51:41 2014 From: bower at uthscsa.edu (james bower) Date: Fri, 23 May 2014 09:51:41 -0500 Subject: Connectionists: How the brain works In-Reply-To: <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> Message-ID: perhaps instead of speculating, you might be interested in an article actually describing how a REAL brain model (read: a model made of the actual brain itself, rather than some abstracted, imagined idea about how brains work) has propagated between labs. https://www.dropbox.com/s/erw705h4yyh3l9k/272602_1_En_5%20copy.pdf I am more than aware that most people on this mailing list have no interest in these kinds of models - and, as most are aware, I personally believe that most of the models that people are interested in on this mailing list are fundamentally Ptolemaic in intent and effect. However, I have no interest in continuing that argument. I am also, of course, very aware that building models that transcend individual laboratories is hard, and that boiling everything down to as few equations as possible would make that process much easier (as it did in physics) - sadly, biological systems are explicitly more complex by nature - and their success depends on the details. That is absolutely clear. I should also say that in the GENESIS project, we have been working for a long time on how you build and propagate community models - which requires a new form of publication. But that is a much longer and more complex conversation than would probably be tolerated here.
Accordingly, I post this link to the paper, just in case someone new on the list is interested in actually understanding how actual physical models of the nervous system are very slowly being developed - more slowly than they should be, given the interest and hope of the majority that a system that evolved over many hundreds of millions of years to do something VERY HARD might somehow be captured in some mathematical structure simpler than itself. Jim Bower From rloosemore at susaro.com Fri May 23 11:08:32 2014 From: rloosemore at susaro.com (Richard Loosemore) Date: Fri, 23 May 2014 11:08:32 -0400 Subject: Connectionists: How the brain works In-Reply-To: References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> Message-ID: <537F6470.5070304@susaro.com> On 5/23/14, 10:51 AM, james bower wrote: > perhaps instead of speculating, you might be interested in an article actually describing how a REAL brain model (read: a model made of the actual brain itself, rather than some abstracted, imagined idea about how brains work) has propagated between labs. > > https://www.dropbox.com/s/erw705h4yyh3l9k/272602_1_En_5%20copy.pdf > James: that link seems to be unavailable.
Richard Loosemore From dubuf at ualg.pt Fri May 23 10:43:32 2014 From: dubuf at ualg.pt (Hans du Buf) Date: Fri, 23 May 2014 15:43:32 +0100 Subject: Connectionists: How the brain works In-Reply-To: <7F1713CE870C2941A370868EACF230894EDE1E@UQEXMDA6.soe.uq.edu.au> References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> <581625BB6C84AB4BBA1C969C69E269EC025258BDCEA0@MAVMAIL2.uta.edu> <7F1713CE870C2941A370868EACF230894EDE1E@UQEXMDA6.soe.uq.edu.au> Message-ID: <537F5E94.8030605@ualg.pt> I was out for some time (mailbox overflowing) and now saw again discussions about vision and motor control and single equations and and and... Why don't you start with the archaic part of our brain? (NOT the frontal lobe and rich club - the massive communication hubs; white matter - only these distinguish us and great apes from rodents.) I've been reading a few recent reviews, and the idea comes up that the enormous complexity is the astonishing result of merely replicating very few structures over and over (single equation :-) We know the laminar structure of the neocortex and the connections and processing (FF input from a lower level, horizontal processing, FB input from a higher level), the hierarchies V1 V2 etc. and M1 M2 etc. and A1 A2 etc. Oops, FF = feedforward, FB = feedback. These hierarchies are reciprocally connected: V2 groups features from V1, V4 groups features from V2, up to IT cortex with population coding of (parts of) meaningful objects, but each step up comes with less localisation (IT knows what the handle and shank of a screwdriver are, and roughly where, and that they belong together; but don't ask IT to put the tip into the slot in the head of a screw - for that you need V1, but V1 has absolutely no clue what a screwdriver is). FF+FB is likely predictive coding with a generative grouping model. If you have V1 and V2, you also have V4 and IT.
You can also assume that the processing in the V and M and A hierarchies is the same: one equation. All neocortical areas are reciprocally connected to pulvinar (LGN in case of vision) and higher-order thalamic areas in a laminar way, and then to basal ganglia via layers for arms, face and legs. The BG take decisions, most important keyword: DISinhibition. One equation. All visual areas are still connected to motor areas (archaic brain, rodents, screwdriver). All motor areas are connected to sensory areas: corollary discharge signals were first introduced because of saccadic eye movements, but they are ubiquitous for distinguishing external from self-induced percepts, and at all levels: from reflex inhibition, sensory filtration, stability analysis up to sensorimotor learning and planning. I boldly assume: one equation. Once you understand this, you could assume the same principles for other cortices, like the anterior and posterior cingulate: ACC for arousal and attention, error, conflict, reward, learning; PCC for more internal attention and salience. The PCC is connected to thalamus and striatum (basal ganglia, decisions!). ACC+PCC balance internal and external attention, both between narrow and broad attention. Internal: daydreaming, freewheeling, autism. Fronto-parietal network: short-term flexible allocation of selective attention. Cingulo-opercular network: longer-term, maintain task-related goals. They interact via cerebellar and thalamic nodes, work in parallel. They are part of the default mode network: external goal-oriented action vs. self-regulation. Homeostasis. Small imbalance between endogenous and exogenous processes: ADHD. Anterior insula and dorso-lateral PCC: balance between excessive control and lack of control. Imbalance: obsessive-compulsive disorder or schizophrenia. Keep in mind that our brain is always testing hypotheses and predicting errors, always at the brink of failure. 
Metastability: shift through multiple, short-lived yet stable states. You tweak a parameter and the brain freaks out. It is amazing that (in my view) a very few principles could be applied to understand how our brain works - and that most brains seem to work quite well. Finally (I need to get some work done), the brain is not a bunch of artificial neural networks which are trained once. It constantly re-trains itself like a babbling baby, although this is rarely noticed. Am I too bold? Hans ======================================================================= Prof.dr.ir. J.M.H. du Buf mailto:dubuf at ualg.pt Dept. of Electronics and Computer Science - FCT, University of Algarve, fax (+351) 289 818560 Campus de Gambelas, 8000 Faro, Portugal. tel (+351) 289 800900 ext 7761 ======================================================================= UALG Vision Laboratory: http://w3.ualg.pt/~dubuf/vision.html ======================================================================= From frothga at sandia.gov Fri May 23 13:02:08 2014 From: frothga at sandia.gov (Fred Rothganger) Date: Fri, 23 May 2014 11:02:08 -0600 Subject: Connectionists: How the brain works In-Reply-To: References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> Message-ID: <537F7F10.1060002@sandia.gov> This thread is intriguing.
Here are my thoughts on some of the questions that have come up: * Why create a model of how the brain works? For me, it is mainly curiosity. If we can describe how the human mind works, we will make a great leap towards understanding ourselves. This is much the same motivation as Psychology, but in a more generative sense. I believe that the human soul is a physical phenomenon, and that it can be modeled and computed. I believe in strong AI, that is, that it is possible to build a functioning human mind in a substrate other than living neurons. These are merely beliefs. They are difficult, if not impossible, to prove. It is not sufficient to write out a model. We must build one and see what it does. In practical terms, the end-point of brain modeling will be a sentient machine. * Equations My current expectation is that a set of coupled differential equations will be sufficient to express a complete model of the brain. This will likely not be a small set. The equations need not be at the scale of membrane dynamics and the like. We may eventually understand higher-level processes and express those directly. (For example, the famous ART algorithm may be expressed as a set of differential equations.) * Simplicity A large set of equations is impossible for a single human mind to understand, certainly not in a single instant. Humans are notoriously bad at predicting a dynamical system with even 2 or 3 variables. (Example: A faucet is pouring into a tub. The tub drains at a rate proportional to the water level. What happens to the water level in the tub?) You may examine and understand any small part of the equation set you wish. To understand the system as a whole, we need the aid of computers. Computers help us keep track of all the moving parts and analyze how they fit together. This includes simulation, but also other forms of analysis. * Community sharing Jim, I admire what you are doing with GENESIS.
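The faucet-and-tub example above boils down to a single one-variable equation, dV/dt = F - kV, and a minimal simulation makes the answer concrete: the level does not rise forever but settles at the equilibrium F/k. This is only an illustrative sketch; the inflow F, drain constant k, and time step are made-up values, not anything from the thread:

```python
# Euler integration of the faucet-and-tub system: dV/dt = F - k*V.
# F and k are made-up illustrative values, not from the discussion.
F = 2.0    # inflow rate from the faucet
k = 0.5    # drain rate per unit of water level
V = 0.0    # initial water level (tub starts empty)
dt = 0.01  # integration time step

for _ in range(int(100 / dt)):  # simulate 100 time units
    V += (F - k * V) * dt

# The level settles where outflow k*V balances inflow F,
# i.e. at the equilibrium F/k.
print(round(V, 3))  # -> 4.0
```

From any starting level the same loop converges to F/k, which is exactly the part that human intuition tends to get wrong.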
In general, Neuroinformatics is crucial to our success, because it provides a scaffolding on which all our disparate work can be combined into a single model of the brain. To succeed, we must go beyond simple model sharing, to models that connect with each other and some larger structure. NIF is helping in this regard by organizing the work around an ontology. My small effort in this area is called N2A (http://journal.frontiersin.org/Journal/10.3389/fncir.2014.00001/abstract). I wish it were better developed, so you could see the potential scope we envision. From brian.mingus at colorado.edu Fri May 23 12:45:38 2014 From: brian.mingus at colorado.edu (Brian J Mingus) Date: Fri, 23 May 2014 10:45:38 -0600 Subject: Connectionists: How the brain works In-Reply-To: <537F5E94.8030605@ualg.pt> References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> <581625BB6C84AB4BBA1C969C69E269EC025258BDCEA0@MAVMAIL2.uta.edu> <7F1713CE870C2941A370868EACF230894EDE1E@UQEXMDA6.soe.uq.edu.au> <537F5E94.8030605@ualg.pt> Message-ID: Is there anything that can't be represented as a single equation or a really long run-on sentence, a.k.a. a model? With regard to whether new models such as *poesis are accepted by the field, I think this really boils down to identity politics. Most researchers are doing a mix of wanting to understand how their brain works and wanting to help humanity by grokking the brain to solve problems like epilepsy, etc. The part of them that just wants to understand how their brain works will obviously tend to prefer their own personal rotation of the space. The part that wants to help humanity sees the need to integrate new theories, but this conflicts with the ego, and these new theories are more likely to be changed beyond recognition to fit into a given researcher's existing framework than to be supported and encouraged.
Every once in a while a researcher will stumble upon a description so short and so elegant that it easily transcends the usefulness of all existing theories. However, the brain isn't like previous objects of study. It's essentially the most sophisticated thing that science has ever turned its gaze on. Whether it lends itself to simple "single equation" descriptions is an open question, but I personally doubt it. All models are wrong, and this applies to every model of the brain. Some models are useful, and this also applies to every model of the brain. For the purposes of understanding how your own brain works, an arbitrary rotation of a sophisticated theory seems quite sufficient. For solving actual problems, like epilepsy, some models will be more useful than others. That said, a model isn't always even needed to solve a problem: the latest epilepsy drugs are the most effective, and they are found by shotgun approaches which result in drugs that work for reasons that nobody understands. Zooming out, I like to ask myself whether there's a reason things are the way they are. This is obviously an unanswerable question, but it does shine a light on the fact that we have egos, and that this process of ego-scaffolding leads to many researchers focusing on different perspectives at different levels of analysis. The academic publishing system then broadcasts these perspectives, and whether we give truly fair credit or implicitly mash others' theories into our own preferred frameworks, everything does in the end get all mixed up together, resulting in a better set of theories overall. One wonders if this apparently fortuitous coincidence isn't a coincidence after all. I personally suspect the usefulness of what we're doing is that we are all contributing to the building of something great.
While this somewhat justifies the existing system, given that it actually appears to be working (in a way that none of us understands), I would also advocate a more egalitarian approach where we open our minds to as many theories as possible and cheer on perspectives different from our own. And from this angle, I really like the promise of more informal mailing list conversations for the spread of ideas and hope that you guys keep it up, because I love reading your ideas. And I wish more folks would jump in too and help us all out, rather than just hiding out in the darkness of the literature! Brian https://www.linkedin.com/profile/view?id=74878589
of Electronics and Computer Science - FCT, > University of Algarve, fax (+351) 289 818560 > Campus de Gambelas, 8000 Faro, Portugal. tel (+351) 289 800900 ext 7761 > ======================================================================= > UALG Vision Laboratory: http://w3.ualg.pt/~dubuf/vision.html > ======================================================================= From bower at uthscsa.edu Fri May 23 13:25:21 2014 From: bower at uthscsa.edu (Uthscsa) Date: Fri, 23 May 2014 12:25:21 -0500 Subject: Connectionists: How the brain works In-Reply-To: References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> Message-ID: <7444E33F-C47A-47F8-93D1-FB7A0C6D50A4@uthscsa.edu> Sorry all. Apparently the Dropbox link broke. Happens sometimes. I will fix and repost for those interested. Thanks for the interest. Just for the record, I have believed for many years that success requires the sharing and acceptance of community models. This has been the basis for the GENESIS project since the mid-1980s. I also understand that complex realistic models are more difficult to understand and share. Of course, I also know well the success of physics, for hundreds of years, in building a common basis for advancing understanding by sharing common models. All that said, just because there is light under that particular lamppost does not mean that that is where the keys are. If one knows otherwise, that also doesn't mean that is necessarily where you should look first. I will repost the link. For those of you willing to read the paper, I believe you will find the kinds of fundamental questions about brain function that will be critical, I believe, to figuring out what kind of machine this is. And I am certain that we still fundamentally have little idea about that. Jim.
(Still tilting after all these years) :-) > On May 23, 2014, at 9:51 AM, james bower wrote: > > perhaps instead of speculating, you might be interested in an article actually describing how a REAL brain model - read: a model made of the actual brain itself, rather than some abstracted, imagined idea about how brains work - has propagated between labs. > > https://www.dropbox.com/s/erw705h4yyh3l9k/272602_1_En_5%20copy.pdf > > I am more than aware that most people on this mailing list have no interest in these kinds of models - and, as most are aware, I personally believe that most of the models that people are interested in on this mailing list are fundamentally Ptolemaic in intent and effect. > > However, I have no interest in continuing that argument. > > I am also, of course, very aware that building models that transcend individual laboratories is hard, and that boiling everything down to as few equations as possible would make that process much easier (as it did in physics) - sadly, biological systems are explicitly more complex by nature, and their success depends on the details. That is absolutely clear. I should also say that in the GENESIS project, we have been working for a long time on how you build and propagate community models - which requires a new form of publication. But that is a much longer and more complex conversation than would probably be tolerated here. > > Accordingly, I post this link to the paper, just in case someone new on the list is interested in actually understanding how actual physical models of the nervous system are very slowly being developed - more slowly than they should be, given the interest and hope of the majority that somehow a system that evolved over many hundreds of millions of years to do something VERY HARD might somehow be captured in some mathematical structure simpler than itself. > > Jim Bower > > > > >> On May 22, 2014, at 6:00 PM, Janet Wiles wrote: >> >> When does a model escape from a research lab?
Or in other words, when do researchers beyond the in-group investigate, test, or extend a model? >> >> I have asked many colleagues this question over the years. Well-written papers help, open source code helps, tutorials help. But the most critical feature seems to be that it can be communicated in a single equation. Think about backprop, reinforcement learning, Bayes theorem. >> >> Janet Wiles >> Professor of Complex and Intelligent Systems, >> School of Information Technology and Electrical Engineering >> The University of Queensland >> >> >> -----Original Message----- >> From: Connectionists [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Yu Shan >> Sent: Friday, 23 May 2014 7:37 AM >> To: Juyang Weng >> Cc: connectionists at mailman.srv.cs.cmu.edu >> Subject: Re: Connectionists: How the brain works >> >>> Suppose that one gave all in this connectionists list a largely >>> correct model about how the brain works, few on this list would be >>> able to understand it let alone agree with it! >>> From troy.d.kelley6.civ at mail.mil Fri May 23 13:59:27 2014 From: troy.d.kelley6.civ at mail.mil (Kelley, Troy D CIV (US)) Date: Fri, 23 May 2014 17:59:27 +0000 Subject: Connectionists: How the brain works (UNCLASSIFIED) In-Reply-To: <537F5E94.8030605@ualg.pt> References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> <581625BB6C84AB4BBA1C969C69E269EC025258BDCEA0@MAVMAIL2.uta.edu> <7F1713CE870C2941A370868EACF230894EDE1E@UQEXMDA6.soe.uq.edu.au> <537F5E94.8030605@ualg.pt> Message-ID: Classification: UNCLASSIFIED Caveats: NONE Hans, ------- Hans wrote: If you have V1 and V2, you also have V4 and IT. You can also assume that the processing in the V and M and A hierarchies is the same: one equation. 
All neocortical areas are reciprocally connected to pulvinar (LGN in case of vision) and higher-order thalamic areas in a laminar way, and then to basal ganglia via layers for arms, face and legs. The BG take decisions, most important keyword: DISinhibition. One equation. All visual areas are still connected to motor areas (archaic brain, rodents, screwdriver). All motor areas are connected to sensory areas: corollary discharge signals were first introduced because of saccadic eye movements, but they are ubiquitous for distinguishing external from self-induced percepts, and at all levels: from reflex inhibition, sensory filtration, stability analysis up to sensorimotor learning and planning. I boldly assume: one equation. They are part of the default mode network: external goal-oriented action vs. self-regulation. Homeostasis. Small imbalance between endogenous and exogenous processes: ADHD. Anterior insula and dorso-lateral PCC: balance between excessive control and lack of control. Imbalance: obsessive-compulsive disorder or schizophrenia. ------- I would be interested in what you think of the very old habituation equation first described in 1966 by Thompson and Spencer. We have been able to reproduce this balance/imbalance you speak of above, between excessive control and lack of control, on our robotics systems using a habituation equation. I would be interested in knowing what you think the habituation equation DOESN'T capture. My sense is that it doesn't capture reinforcement learning very well. Thompson, R. F., & Spencer, W. A. (1966). Habituation: a model phenomenon for the study of neuronal substrates of behavior. Psychological Review, 73(1), 16. Note especially the nine characteristics of habituation they outline nicely beginning on page 19 of the article.
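For readers unfamiliar with habituation dynamics, the qualitative behavior such an equation produces can be sketched in a few lines of Python. This is an illustrative toy only, not the model actually used on the robotic systems mentioned above; the update rule and the rate constants are assumptions chosen for illustration:

```python
# Toy habituation model (illustrative sketch; NOT the implementation used
# on the robotic systems described above). The update rule and the rate
# constants tau_decay / tau_recover are assumptions for illustration.

def habituate(y, stimulus, tau_decay=0.5, tau_recover=0.05):
    """One update of response strength y in [0, 1].

    stimulus is 1.0 when the stimulus is present, 0.0 during rest.
    """
    if stimulus > 0.0:
        # Repeated stimulation decrements the response (habituation);
        # the multiplicative form gives a roughly exponential decay.
        y -= tau_decay * y * stimulus
    else:
        # Spontaneous recovery toward baseline during rest.
        y += tau_recover * (1.0 - y)
    return y

# Ten stimulus presentations: recorded responses fall off exponentially.
y = 1.0
responses = []
for _ in range(10):
    responses.append(y)   # response evoked by this presentation
    y = habituate(y, 1.0)

# Twenty rest steps: partial spontaneous recovery of the response.
for _ in range(20):
    y = habituate(y, 0.0)
recovered = y
```

The two behaviors exercised here, exponential response decrement under repeated stimulation and spontaneous recovery after rest, correspond to the first two of the nine characteristics Thompson and Spencer list.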
Troy Kelley Cognitive Robotics Team Leader Human Research and Engineering Directorate Army Research Laboratory Aberdeen, MD, 21005 V: 410-278-5869 Classification: UNCLASSIFIED Caveats: NONE From mhb0 at lehigh.edu Fri May 23 14:16:49 2014 From: mhb0 at lehigh.edu (Mark H. Bickhard) Date: Fri, 23 May 2014 14:16:49 -0400 Subject: Connectionists: How the brain works In-Reply-To: References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> <581625BB6C84AB4BBA1C969C69E269EC025258BDCEA0@MAVMAIL2.uta.edu> <7F1713CE870C2941A370868EACF230894EDE1E@UQEXMDA6.soe.uq.edu.au> <537F5E94.8030605@ualg.pt> Message-ID: I offer some thoughts on this, extracted from a recent paper:

"In fact, we find that actual CNS neurons are endogenously active, with baseline rates of oscillation, and with multiple modulatory relationships across a wide range of temporal and spatial scales:

- silent neurons that rarely or never fire, but that do carry slow potential waves (Bullock, 1981; Fuxe & Agnati, 1991; Haag & Borst, 1998; Roberts & Bush, 1981);
- volume transmitters, released into intercellular regions and diffused throughout populations of neurons rather than being constrained to a synaptic cleft (Agnati, Bjelke & Fuxe, 1992; Agnati, Fuxe, Nicholson & Sykova, 2000); such neuromodulators can reconfigure the functional properties of 'circuits' and even reconfigure functional connectivity (Marder & Thirumalai, 2002; Marder, 2012);
- gaseous transmitter substances, such as NO, that diffuse without constraint from synapses and cell walls (e.g., Brann, Ganapathy, Lamar & Mahesh, 1997);
- gap junctions, that function extremely fast and without any transmitter substance (Dowling, 1992; Hall, 1992; Nauta & Feirtag, 1986);
- neurons, and neural circuits, that have resonance frequencies, and, thus, can selectively respond to modulatory influences with the 'right' carrier frequencies (Izhikevich, 2001, 2002, 2007);
- astrocytes that[1]:
  - have neurotransmitter receptors,
  - secrete neurotransmitters,
  - modulate synaptogenesis,
  - modulate synapses with respect to the degree to which they function as volume transmission synapses,
  - create enclosed 'bubbles' within which they control the local environment in which neurons interact with each other,
  - carry calcium waves across populations of astrocytes via gap junctions.

These aspects of CNS processes make little sense in standard neural information processing models. In these, the central nervous system is considered to consist of passive threshold-switch or input-transforming neurons functioning in complex micro- and macro-circuits. Enough is known about alternative functional processes in the CNS, however, to know that this cannot be correct. The multifarious tool box of short- through long-temporal-scale forms of modulation -- many realized in ways that contradict orthodoxy concerning standard integrate-and-fire models of neurons communicating via classical synapses -- is at best a wildly extravagant and unnecessary range of evolutionary implementations of simple circuits of neural threshold switches. This range, however, is precisely what is to be expected in a functional architecture composed of multiple-scale modulatory influences among oscillatory processes (Bickhard, in preparation-a; Bickhard & Campbell, 1996; Bickhard & Terveen, 1995).
[1] The literature on astrocytes has expanded dramatically in recent years: e.g., Bushong, Martone, Ellisman, 2004; Chvatal & Sykova, 2000; Hertz & Zielker, 2004; Nedergaard, Ransom & Goldman, 2003; Newman, 2003; Perea & Araque, 2007; Ransom, Behar & Nedergaard, 2003; Slezak & Pfreiger, 2003; Verkhratsky & Butt, 2007; Viggiano, Ibrahim, & Celio, 2000." This is from: Bickhard, M. H. Toward a Model of Functional Brain Processes: Central Nervous System Functional Architecture. The basic critique of information processing models applies just as strongly to Predictive Brain models. It is elaborated, and an alternative is outlined, in: Bickhard, M. H. (forthcoming, 2014). The Anticipatory Brain: Two Approaches. In V. C. Mueller (Ed.), Fundamental Issues of Artificial Intelligence. Berlin: Springer (Synthese Library). I have attempted some first steps toward both a micro-functional and a macro-functional model that can account for and accommodate such facts in the functional brain processes paper. Mark Mark H. Bickhard Lehigh University 17 Memorial Drive East Bethlehem, PA 18015 mark at bickhard.name http://bickhard.ws/ On May 23, 2014, at 12:45 PM, Brian J Mingus wrote: Is there anything that can't be represented as a single equation or a really long run-on sentence aka model? With regards to whether new models such as *poesis are accepted by the field, I think this really boils down to identity politics. Most researchers are doing a mix of wanting to understand how their brain works and wanting to help humanity by grokking the brain to solve problems like epilepsy etc. The part of them that just wants to understand how their brain works will obviously tend to prefer their own personal rotation of the space. The part that wants to help humanity sees the need to integrate new theories, but this conflicts with the ego, and these new theories are more likely to be changed beyond recognition to fit into a given researcher's existing framework than to be supported and encouraged. [...]
From tt at cs.dal.ca Fri May 23 15:45:17 2014 From: tt at cs.dal.ca (Thomas Trappenberg) Date: Fri, 23 May 2014 16:45:17 -0300 Subject: Connectionists: How the brain works In-Reply-To: References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> <581625BB6C84AB4BBA1C969C69E269EC025258BDCEA0@MAVMAIL2.uta.edu> <7F1713CE870C2941A370868EACF230894EDE1E@UQEXMDA6.soe.uq.edu.au> <537F5E94.8030605@ualg.pt> Message-ID: .... when I started to take apart radios, they had elements that emitted a strange light (tubes), had metal gears with knobs attached to select channels, had lots of wood around them, and big magnets with which I still play. All these details are important on some level, in particular if you want to repair the system (I was only able to take them apart). And you need lots of equations when you go into the specifics ... ... but I still like the beauty of the Maxwell equations, Glashow-Salam-Weinberg, or a Boltzmann machine for understanding principles. I do enjoy our discussion. Cheers, Thomas On Fri, May 23, 2014 at 3:16 PM, Mark H. Bickhard wrote: > [...]
It constantly re-trains itself like a >> babbling baby, >> although this is rarely noticed. >> Am I too bold? >> Hans >> >> >> >> >> >> On 05/23/2014 03:27 AM, Janet Wiles wrote: >> >>> Will a single equation be a good model of the brain as a whole? Unlikely! >>> Will a set of equation-sized chunks of knowledge suffice? It works for >>> physics, but remains an empirical question for neuroscience. >>> >>> The point is not about what's the best way to model the brain, but >>> rather, what models are adopted widely, while others remain the province of >>> a single lab. A model that can be expressed as a single equation seems to >>> be a particularly effective meme for computational researchers. >>> >>> Janet >>> >>> >>> >> ======================================================================= >> Prof.dr.ir. J.M.H. du Buf >> mailto:dubuf at ualg.pt >> Dept. of Electronics and Computer Science - FCT, >> University of Algarve, fax (+351) 289 818560 >> Campus de Gambelas, 8000 Faro, Portugal. tel (+351) 289 800900 ext 7761 >> ======================================================================= >> UALG Vision Laboratory: http://w3.ualg.pt/~dubuf/vision.html >> ======================================================================= >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From weng at cse.msu.edu Sat May 24 05:33:54 2014 From: weng at cse.msu.edu (Juyang Weng) Date: Sat, 24 May 2014 17:33:54 +0800 Subject: Connectionists: How the brain works In-Reply-To: References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> <581625BB6C84AB4BBA1C969C69E269EC025258BDCEA0@MAVMAIL2.uta.edu> Message-ID: <53806782.4000001@cse.msu.edu> Tsvi, I understand your point, but there are too many model designers who are like blind men who touch the elephant at a reachable skin patch only. 
They do not have the culture to motivate them to move to the elephant's other body parts. -John On 5/23/14 10:37 AM, Tsvi Achler wrote: > I think it is also interesting to ask the opposite question: what type > of models would be especially difficult to introduce to the community? > Here is my short list: > 1. Models that go against years of dogma (e.g. arguing against the idea > that neural networks learn through a gradient descent mechanism) > 2. Models reinterpreting existing data in a different way but overall > getting similar results. > -Tsvi > > > On Thu, May 22, 2014 at 6:01 PM, Brad Wyble wrote: >> I concur with Dan. I'd caution anyone against the idea that a good theory of >> brain function should also subscribe to standards of mathematical aesthetics >> (i.e. that it can be reduced to a single function). Nature is functionally >> elegant, but that does not also mean that it can be reduced to an elegant >> mathematical formalism. >> >> That said, I really like the question posed by Janet, which is to wonder >> when the research community at large should start to take a new model >> seriously. I don't think that there is a clear answer to this question, but >> there do seem to be two major factors: one is the degree to which the model >> compresses data into a simpler form (i.e. how good the theory is), and the >> other is the degree to which the model fills a perceived explanatory void in >> the field. Models that are rapidly adopted hit both of these marks. >> >> -Brad >> >> >> >> On Thu, May 22, 2014 at 7:45 PM, Levine, Daniel S wrote: >>> I would disagree about the single equation. The brain needs to do a lot >>> of different things to deal with the cognitive requirements of a changing >>> and complex world, so that different functions (sensory pattern processing, >>> motor control, etc.) may call for different structures and therefore >>> different equations. 
Models of the brain become most useful when they can >>> explain cognitive and behavioral functions that we take for granted in >>> day-to-day life. >>> >>> -----Original Message----- >>> From: Connectionists >>> [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Janet >>> Wiles >>> Sent: Thursday, May 22, 2014 6:01 PM >>> To: Yu Shan >>> Cc: connectionists at mailman.srv.cs.cmu.edu >>> Subject: Re: Connectionists: How the brain works >>> >>> When does a model escape from a research lab? Or in other words, when do >>> researchers beyond the in-group investigate, test, or extend a model? >>> >>> I have asked many colleagues this question over the years. Well-written >>> papers help, open source code helps, tutorials help. But the most critical >>> feature seems to be that it can be communicated in a single equation. Think >>> about backprop, reinforcement learning, Bayes' theorem. >>> >>> Janet Wiles >>> Professor of Complex and Intelligent Systems, School of Information >>> Technology and Electrical Engineering The University of Queensland >>> >>> >>> -----Original Message----- >>> From: Connectionists >>> [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Yu Shan >>> Sent: Friday, 23 May 2014 7:37 AM >>> To: Juyang Weng >>> Cc: connectionists at mailman.srv.cs.cmu.edu >>> Subject: Re: Connectionists: How the brain works >>> >>>> Suppose that one gave everyone on this Connectionists list a largely >>>> correct model of how the brain works; few on this list would be >>>> able to understand it, let alone agree with it! >>>> >>> Let's look at a recent example. Nikolic proposed his theory >>> (http://www.danko-nikolic.com/practopoiesis/) about how the brain works a >>> few weeks ago to the Connectionists. Upon finishing this paper, I >>> was quite excited. The theory is elegantly simple and yet has great >>> explanatory power. 
It is also consistent with what we know about evolution >>> as well as the brain's organization and development. >>> Of course, we don't know yet if it is a "largely correct model about how >>> the brain works". But, in my opinion, it has great potential. >>> Actually, I am thinking about how to implement those ideas in my own future >>> research. >>> >>> However, the author's efforts to introduce this work to the >>> Connectionists received little attention. Connectionists reaches 5000+ people, >>> who are probably the most interested and capable audience for such a topic. >>> This makes the silence particularly intriguing. Of course, one possible >>> reason is that lots of people here already studied this theory and deemed it >>> irrelevant. >>> >>> But a more likely reason, I think, is that most people did not give it much >>> thought. If that is the case, it raises an interesting question: what is the >>> barrier that a theory of how the brain works needs to overcome in order to be >>> treated seriously? In other words, what do we really want to know? >>> >>> Shan Yu, Ph.D. >>> Brainnetome Center and National Laboratory of Pattern Recognition >>> Institute of Automation Chinese Academy of Sciences Beijing 100190, P. R. 
>>> China http://www.brainnetome.org/en/shanyu >>> >>> >> >> >> -- >> Brad Wyble >> Assistant Professor >> Psychology Department >> Penn State University >> >> http://wyblelab.com -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- From ted.carnevale at yale.edu Mon May 26 11:35:11 2014 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Mon, 26 May 2014 11:35:11 -0400 Subject: Connectionists: Final opportunity to sign up for NEURON 2014 Summer Course Message-ID: <53835F2F.50006@yale.edu> The last day to register for the NEURON 2014 Summer Course is Friday, May 30, a short four days from now. Space is still available in this year's course, but you should act soon because room is limited. The course has two components, and you can sign up for either or both: "NEURON Fundamentals" which provides a thorough introduction to building and using models of individual cells and networks, including what you will (sooner than you might imagine) need to know in order to run simulations on parallel computers, and "Parallel Simulation with NEURON" which addresses advanced topics that include strategies for achieving best performance, incorporating random variation and stochastic fluctuation in parallelized models, and how to debug parallelized models. 
For more information and the online registration form see http://www.neuron.yale.edu/neuron/static/courses/nscsd2014/nscsd2014.html --Ted From dubuf at ualg.pt Mon May 26 11:30:52 2014 From: dubuf at ualg.pt (Hans du Buf) Date: Mon, 26 May 2014 16:30:52 +0100 Subject: Connectionists: How the brain works (UNCLASSIFIED) In-Reply-To: References: <537D2707.5070709@mspacek.mm.st> <537D7026.3080609@cse.msu.edu> <7F1713CE870C2941A370868EACF230894EDC0A@UQEXMDA6.soe.uq.edu.au> <581625BB6C84AB4BBA1C969C69E269EC025258BDCEA0@MAVMAIL2.uta.edu> <7F1713CE870C2941A370868EACF230894EDE1E@UQEXMDA6.soe.uq.edu.au> <537F5E94.8030605@ualg.pt> Message-ID: <53835E2C.5090308@ualg.pt> On 05/23/2014 06:59 PM, Kelley, Troy D CIV (US) wrote: > Classification: UNCLASSIFIED > Caveats: NONE > > Hans, > ------- > Hans wrote: > > ... (deleted) ... > Anterior insula and dorso-lateral PCC: balance between excessive control and > lack of control. Imbalance: obsessive-compulsive disorder or schizophrenia. > > ------- > I would be interested in what you think of the very old habituation equation > first described in 1966 by Thompson and Spencer. > > We have been able to reproduce this balance/imbalance you speak of above > between excessive control and lack of control on our robotics systems using > a habituation equation. ------------------------ Troy, Please send me (not to the list) a PDF about your simulations. I cannot find the original T&S paper from 1966 with that magic formula, probably a few exponentials. Habituation: no idea how many theories there are by now; back in 1966 the knowledge was not very complete. Even the Groves and Thompson "dual process" theory is, well, a theory. Thompson himself, in his "history" paper, wrote that "both short- and long-term habituation are far more complex than earlier believed." I bet they are. In my earlier email I wrote about corollary discharges for warning sensory areas about upcoming (ego)motor actions. Habituation is related to external events. 
A system that is continuously predicting reward and attracted by surprise (entropy) must shield itself from distractions which do not yield reward in a broad sense - both sensory and motor processes cost energy. In contrast to snapping at a baby in a cradle, Pavlov's dogs were rewarded! But temporal conditioning is also habituation. Reinforcement learning may be related, but the crucial difference is reward vs. distraction: learning useful things and (learning to) suppress useless things. In a famous learning task (vernier acuity) subjects were trained for days, and the effect then lasted for months or even half a year. So what is the reward here? Subjects won't be able to read better because this is line acuity. And if you don't tell them that they will be asked to repeat the experiments half a year later, why remember? Hornet's nests are fascinating... Hans ----------------------- > > > I would be interested in knowing what you think the habituation equation > DOESN'T capture. My sense is that it doesn't capture reinforcement learning > very well. > > Thompson, R. F., & Spencer, W. A. (1966). Habituation: a model phenomenon > for the study of neuronal substrates of behavior. Psychological Review, > 73(1), 16. > > Note especially the nine characteristics of the equation they outline nicely > beginning on page 19 of the article. > > Troy Kelley > Cognitive Robotics Team Leader > Human Research and Engineering Directorate > Army Research Laboratory > Aberdeen, MD, 21005 > V: 410-278-5869 > > > -- ======================================================================= Prof.dr.ir. J.M.H. du Buf mailto:dubuf at ualg.pt Dept. of Electronics and Computer Science - FCT, University of Algarve, fax (+351) 289 818560 Campus de Gambelas, 8000 Faro, Portugal. 
tel (+351) 289 800900 ext 7761 ======================================================================= UALG Vision Laboratory: http://w3.ualg.pt/~dubuf/vision.html ======================================================================= From pbrazdil at inescporto.pt Mon May 26 11:53:18 2014 From: pbrazdil at inescporto.pt (Pavel Brazdil) Date: Mon, 26 May 2014 16:53:18 +0100 Subject: Connectionists: Deadline Extension of MetaSel Workshop at ECAI-2014 Message-ID: MetaSel - Meta-learning & Algorithm Selection ********************************************* ECAI-2014 Workshop, Prague 19 August 2014 http://metasel2014.inescporto.pt/ Deadline extension: 8 June 2014. Please spread the news. You can either submit a paper (8 pages, ECAI format) or an extended abstract (2 pages, ECAI format), which may present current work, recently published work, or a future research problem. Many thanks to those who have submitted a paper already! Topics:
- Algorithm Selection & Configuration
- Planning to learn and construct workflows
- Applications of workflow planning
- Meta-learning and exploitation of meta-knowledge
- Exploitation of ontologies of tasks and methods
- Exploitation of benchmarks and experimentation
- Representation of learning goals and states in learning
- Control and coordination of learning processes
- Meta-reasoning
- Experimentation and evaluation of learning processes
- Layered learning
- Multi-task and transfer learning
- Learning to learn
- Intelligent design
- Performance modeling
- Process mining
From conradt at tum.de Sun May 25 02:53:00 2014 From: conradt at tum.de (Jörg Conradt) Date: Sun, 25 May 2014 06:53:00 +0000 Subject: Connectionists: Open doctoral position: sound processing, Institute of Neuroinformatics, Zurich, CH Message-ID: Applications are invited for a doctoral student position to implement, in software and hardware, computational methods for sound processing inspired by biological principles. 
The goal is to develop a robust real-time sound processing system which recognizes sounds and uses novel front-end sensor pre-processors such as the silicon cochlea. We are seeking a motivated student with a strong background in physics and/or computer science or electrical engineering. The position is available immediately for individuals with a degree in related fields. Other requirements: programming knowledge of Matlab/Java, experience with DSPs/FPGAs, and comfort with hardware. Salary is according to the guidelines of the Swiss National Science Foundation. Interested applicants, please send your CV, grades, statement of interest, and the names of at least one reference to shih-at-ini-dot-uzh-dot-ch. Deadline for application is May 30, 2014, or until the position is filled. This work will be done in conjunction with the groups of Shih-Chii Liu and Richard Hahnloser at the Institute of Neuroinformatics in Zurich. On behalf of Dr. Shih-Chii Liu, INI, http://sensors.ini.uzh.ch/ From Hugo.Larochelle at USherbrooke.ca Tue May 27 11:34:28 2014 From: Hugo.Larochelle at USherbrooke.ca (Hugo Larochelle) Date: Tue, 27 May 2014 15:34:28 +0000 Subject: Connectionists: Call for Demonstrations, NIPS 2014 Message-ID: The Neural Information Processing Systems Conference 2014 http://nips.cc/Conferences/2014/ has a Demonstration Track running in parallel with some of the evening Poster Sessions, December 8-11, 2014, in Montreal, Quebec, Canada. Demonstration Proposal Deadline: Monday September 15, 2014, 11pm Universal Time (4pm Pacific Daylight Time). http://nips.cc/Conferences/2014/CallForDemonstrations Demonstrations offer a unique opportunity to showcase:
- Hardware technology
- Software systems
- Neuromorphic and biologically-inspired systems
- Robotics or other systems
all of which are relevant to the technical areas covered by NIPS (see Call for Papers http://nips.cc/Conferences/2014/CallForPapers). 
Demonstrations must show novel technology and must be run live, preferably with some interactive parts. Unlike poster presentations or slide shows, live action and interaction with the audience are critical elements. Submissions: Submit demo proposals at the following URL: https://nips.cc/Demonstrators/ You will be asked to fill in a questionnaire and describe clearly:
- the technology demonstrated
- the elements of novelty
- the live action part
- the interactive part
- the equipment brought by the demonstrator
- the equipment required at the place of the demo
Evaluation Criteria: Submissions will be refereed on the basis of technical quality, novelty, live action, and potential for interaction. Demonstration chair: Hugo Larochelle > http://nips.cc/Conferences/2014/CallForDemonstrations -------------- next part -------------- An HTML attachment was scrubbed... URL: From chiestand at salk.edu Tue May 27 15:02:10 2014 From: chiestand at salk.edu (Chris Hiestand) Date: Tue, 27 May 2014 12:02:10 -0700 Subject: Connectionists: NIPS 2014 Call for Workshops Message-ID: NIPS*2014 Post-Conference Workshops Friday December 12 and Saturday December 13, 2014 Palais des Congrès de Montréal/Convention and Exhibition Center, Montreal, Quebec, Canada Following the NIPS*2014 main conference, workshops on a variety of current topics will be held on Friday December 12 and Saturday December 13, 2014, in Montreal, Canada. We invite researchers interested in chairing one of these workshops to submit workshop proposals. The goal of the workshops is to provide an informal forum for researchers to discuss important research questions and challenges. Controversial issues, open problems, and comparisons of competing approaches are encouraged as workshop topics. There will be seven hours of workshop meetings per day, split into morning and afternoon sessions, with free time between the sessions for individual exchange or outdoor activities. 
Note that this year, there is no skiing at NIPS; the workshop schedule will thus differ from past NIPS workshops (the morning session will be 8:30am-12pm, and the afternoon session will be 3pm-6:30pm). Potential workshop topics range from Neuroscience to Bayesian Methods to Representation Learning to Kernels to Clustering, and include Application Areas such as Computational Biology, Speech, Vision or Social Networks. Detailed descriptions of previous workshops may be found at: http://nips.cc/Conferences/2013/Program/schedule.php?Session=Workshops Workshop organizers have several responsibilities, including: coordinating workshop participation and content as well as providing the program for the workshop in a timely manner.
Submission Instructions
A nips.cc account is required to submit the Workshops application. Please follow the URL below and check the required format for the application well before the proposal deadline. You can edit your application online right up until this deadline. We have funding to video record a limited number of workshops for later online viewing. Workshop proposals should state if they wish their workshop to be recorded. Interested parties must submit a proposal by **23:59 UTC on Friday August 2nd, 2014**. Proposals should be submitted electronically at the following URL: https://nips.cc/Workshops/ Preference will be given to one-day workshops that reserve a significant portion of time for open discussion or panel discussion and to workshops with a greater fraction of confirmed speakers. We suggest that organizers allocate sufficient time for questions, discussion, and breaks. Past experience suggests that workshops otherwise degrade into mini-conferences as talks begin to run over. Organizers should explicitly state the expected fraction of time for discussion & questions and the expected number of talks per day at the end of the proposal. We strongly recommend that each workshop include no more than 12 talks per day. 
We would like to partially unify the important workshop dates across all of the workshops. Therefore, please consider using the following date guidelines for your workshop in order to provide program information in time for publication. We suggest that workshop organizers adopt the following schedule:
* Workshop acceptance notification will be on August 14th, 2014.
* Your workshop should be publicized on or before August 21st, 2014.
* Internal submission deadline should be on or before October 9th, 2014.
* Internal acceptance decisions should be mailed out on or before October 23rd, 2014.
* Submit finalized workshop organizers, abstract, and URL on or before October 30th, 2014.
NIPS does not provide travel funding for workshop speakers. In the past, some workshops have sought and received funding from external sources to bring in outside speakers. The organizers of each accepted workshop can name four individuals per day of workshop to receive free workshop registration. Please feel free to contact us if you have any questions: Francis Bach and Amir Globerson NIPS*2014 Workshops Chairs Web URL: http://nips.cc/Conferences/2014/CallForWorkshops From schwarzwaelder at bcos.uni-freiburg.de Wed May 28 05:53:34 2014 From: schwarzwaelder at bcos.uni-freiburg.de (Kerstin Schwarzwälder) Date: Wed, 28 May 2014 11:53:34 +0200 Subject: Connectionists: Valentino Braitenberg Award for Computational Neuroscience - Call for Nominations In-Reply-To: <5385B12B.4010007@bcos.uni-freiburg.de> References: <5385B12B.4010007@bcos.uni-freiburg.de> Message-ID: <5385B21E.7030104@bcos.uni-freiburg.de> Dear Colleagues, The Bernstein Association for Computational Neuroscience invites nominations for the *Valentino Braitenberg Award for Computational Neuroscience*. The award is presented biennially by the Bernstein Association to a scientist in recognition of outstanding research that contributes to our understanding of the functioning of the brain. 
The major criterion for the award is the impact or potential impact of the recipient's research on the field of brain science. In the spirit of Valentino Braitenberg's research, special emphasis is given to theoretical studies elucidating the functional implications of brain structures and their neuronal network dynamics. The awardee receives a €5,000 prize donated by the Autonome Provinz Bozen Südtirol as well as complimentary participation (registration, travel, and hotel accommodation) in the Bernstein Conference 2014. Here, the prize is awarded together with a Golden Neuron pin badge in a special ceremony that includes the Valentino Braitenberg lecture given by the awardee. Nominations may be submitted by scientists working in the field of Computational Neuroscience and should include the following documents:
- One-page laudation, in which the scientific work of the candidate is honored with regard to the award's criteria
- CV and list of publications
*Deadline for nominations is June 16, 2014*, by e-mail to info at bcos.uni-freiburg.de The call for nominations can be found at the following URL: www.nncn.de/en/bernstein-association/valentino-braitenberg-award-for-computational-neuroscience For inquiries please contact info at bcos.uni-freiburg.de Best regards, Andrea Huber Brösamle -- Dr. Andrea Huber Brösamle Head of the Bernstein Coordination Site (BCOS) Bernstein Network Computational Neuroscience Albert Ludwigs University Freiburg Hansastr. 9A 79104 Freiburg, Germany phone: +49 761 203 9583 fax: +49 761 203 9585 andrea.huber at bcos.uni-freiburg.de www.nncn.de Twitter: NNCN_Germany YouTube: Bernstein TV Facebook: Bernstein Network Computational Neuroscience, Germany LinkedIn: Bernstein Network Computational Neuroscience, Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Braitenberg_Call_2014_small.pdf Type: x-unknown/pdf Size: 1138790 bytes Desc: not available URL: From danko.nikolic at googlemail.com Wed May 28 06:58:06 2014 From: danko.nikolic at googlemail.com (Danko Nikolic) Date: Wed, 28 May 2014 12:58:06 +0200 Subject: Connectionists: practopoiesis Message-ID: <5385C13E.9060001@gmail.com> Dear all, I made an effort to make practopoiesis more approachable to a wider audience: One: I wrote a short popular article on the implications of practopoiesis for artificial intelligence: http://www.singularityweblog.com/practopoiesis/ Two: I made a list of the key concepts with a brief explanation of each: http://www.danko-nikolic.com/practopoiesis#Concepts I hope that this will be helpful. With warm greetings from Germany, Danko Nikolic -- Prof. Dr. Danko Nikolić Web: http://www.danko-nikolic.com Mail address 1: Department of Neurophysiology Max Planck Institut for Brain Research Deutschordenstr. 46 60528 Frankfurt am Main GERMANY Mail address 2: Frankfurt Institute for Advanced Studies Wolfgang Goethe University Ruth-Moufang-Str. 1 60433 Frankfurt am Main GERMANY ---------------------------- Office: (..49-69) 96769-736 Lab: (..49-69) 96769-209 Fax: (..49-69) 96769-327 danko.nikolic at gmail.com ---------------------------- From arno.onken at iit.it Wed May 28 07:04:59 2014 From: arno.onken at iit.it (Arno Onken) Date: Wed, 28 May 2014 13:04:59 +0200 Subject: Connectionists: Bernstein Workshop "Characterizing Natural Scenes": Second Call for Abstracts/Contributed Talks Message-ID: <5385C2DB.1040808@iit.it> Second Call for Abstracts/Contributed Talks You are invited to participate in the pre-conference Bernstein workshop "Characterizing Natural Scenes: Retinal Coding and Statistical Theory", taking place in Göttingen, Germany, on September 2-3, 2014. 
Please find the preliminary schedule at workshop URL: https://sites.google.com/site/jiankliu/Meetings/2014bccn_workshop The workshop chairs will select a couple of abstracts for short oral presentation at the workshop. All the abstracts will be presented as posters during the Bernstein conference on September 3-4. However, the workshop could hold a separate poster session based on the number and quality of abstracts that we receive. To participate in the workshop please submit the abstract (an extended version is preferred) as a PDF attachment via email to Jian Liu (jian.liu at med.uni-goettingen.de) or Arno Onken (arno.onken at iit.it). The deadline for submission is August 1. Please note that abstract submission for the main conference is already closed while abstract submission for this workshop is still open. REGISTRATION: Please refer to the Bernstein Conference 2014 for registration and venue information. http://www.bernstein-conference.de/ WORKSHOP DESCRIPTION: How does the retina process natural visual inputs? What role do the many types of retinal ganglion cells (RGCs) play in this? Experiments with specific artificial stimuli have suggested that individual types of RGCs may have specific functional roles in visual processing, yet it is not clear how these simplified functional investigations relate to the processing of natural images and movies. An important ingredient for future analysis will be a better understanding of the complex statistical properties of natural scenes, as revealed by theoreticians. The relationship between natural visual statistics and retinal coding provides a promising direction for improving our understanding of the visual system. But we still lack a systematic framework for understanding the underlying mechanisms of how relevant features of natural scenes are encoded by the retina. In this workshop, we expect mutual benefits for both natural scene statistics and retinal coding. 
We will bring together experimentalists and theoreticians in order to highlight recent progress, encourage exchange of insights, and stimulate new ideas for future work with the following core questions:
1) How can we develop useful descriptions of the statistics of natural scenes that are relevant for retinal coding?
2) How can we characterize the functional roles of different RGC types in processing natural scenes?
3) Which coding strategies are present at the level of RGC populations?
4) How can we unify the acquired knowledge of natural scenes and neural data to develop better tools for analysis?
CONFIRMED SPEAKERS:
* Vijay Balasubramanian (University of Pennsylvania)
* Philipp Berens (BCCN, Tübingen)
* Thomas Euler (University of Tübingen)
* Felix Franke (ETH Zurich)
* Olivier Marre (Vision Institute, Paris)
* Aman Saleem (University College London)
* Maneesh Sahani (University College London)
* Rava A. da Silveira (ENS, Paris)
We are looking forward to seeing you in Göttingen. Workshop organizers: Jian Liu (University Medical Center Göttingen and BCCN Göttingen) Arno Onken (Center for Neuroscience and Cognitive Systems) From weng at cse.msu.edu Wed May 28 08:27:10 2014 From: weng at cse.msu.edu (Juyang Weng) Date: Wed, 28 May 2014 20:27:10 +0800 Subject: Connectionists: How the brain works In-Reply-To: <5385C13E.9060001@gmail.com> References: <5385C13E.9060001@gmail.com> Message-ID: <5385D61E.4080708@cse.msu.edu> Danko: I browsed both of your pages. Thank you for providing them to this list. Communication on how the brain works is hard. I quote your traverse below, on which you claimed that an AI system has two traverses and the brain has three: 
We refer to this process as adaptive traverse, or for short just a traverse. *A book*: dumbest; zero traverses. *A computer*: somewhat smarter; one traverse. *An AI system*: much smarter; two traverses. *A human*: rules them all; three traverses. What about a primate, an animal, and an insect? How many traverses does each have? I have proposed a Self-Aware Self-Effecting mental architecture. You might want to take a look to compare it with yours. -John On 5/28/14 6:58 PM, Danko Nikolic wrote: > Dear all, > > I made an effort to make practopoiesis more approachable to a wider > audience: > > > One: I wrote a short popular article on the implications of > practopoiesis for artificial intelligence: > > http://www.singularityweblog.com/practopoiesis/ > > > > Two: I made a list of the key concepts with a brief explanation of each: > > http://www.danko-nikolic.com/practopoiesis#Concepts > > > > I hope that this will be helpful. > > With warm greetings from Germany, > > Danko Nikolic > > -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From katrien.vanlook at epfl.ch Tue May 27 09:58:27 2014 From: katrien.vanlook at epfl.ch (Van Look Katrien Jo Warda) Date: Tue, 27 May 2014 13:58:27 +0000 Subject: Connectionists: Position available - Software Engineer, Data Mining - Human Brain Project, Switzerland In-Reply-To: References: Message-ID: Dear all, We are looking for a Software Engineer to join the Data Mining Team in Neuroinformatics at the Human Brain Project in Switzerland. 
The Software Engineer will be primarily responsible for developing data mining workflows to generate additional information used to develop and refine a model of the brain. The full job description can be found here: http://emploi.epfl.ch/page-108900-en.html Please note that written and spoken French is not a requirement. The application deadline is 1 August 2014. Thank you, Katrien Dr Katrien Van Look Scientific Project Coordinator - Brain Simulation and Neuroinformatics, Human Brain Project EPFL, Lausanne, Switzerland E katrien.vanlook at epfl.ch T +41 (0)21 69 31406 -------------- next part -------------- An HTML attachment was scrubbed... URL: From danko.nikolic at googlemail.com Wed May 28 11:09:22 2014 From: danko.nikolic at googlemail.com (Danko Nikolic) Date: Wed, 28 May 2014 17:09:22 +0200 Subject: Connectionists: How the brain works In-Reply-To: <5385D61E.4080708@cse.msu.edu> References: <5385C13E.9060001@gmail.com> <5385D61E.4080708@cse.msu.edu> Message-ID: <5385FC22.8060802@gmail.com> On 28/05/14 14:27, Juyang Weng wrote: > > What about a primate, an animal, and an insect? How many traverses does > each have? > They all have three traverses. They are all biological behaving systems. Let us put it this way: - A small computer is more adaptive than a gigantic book. - A small AI system is more adaptive than a gigantic (non-AI) computer. - A small animal is more adaptive than a gigantic AI system (of today). A human is different from an insect by the amount of cybernetic knowledge, not by the number of traverses. The total amount of cybernetic knowledge is called *variety* after Ashby, and is a different property of the system than is a traverse. Our total intelligence is a combination of the two: variety + traverses. We humans are at leading positions at both: we are more adaptive than any other high-variety system. And we possess more variety than any other high-traverse system. That is how we rule the world. For now. Danko -- Prof. Dr. 
Danko Nikolić Web: http://www.danko-nikolic.com Mail address 1: Department of Neurophysiology Max Planck Institute for Brain Research Deutschordenstr. 46 60528 Frankfurt am Main GERMANY Mail address 2: Frankfurt Institute for Advanced Studies Wolfgang Goethe University Ruth-Moufang-Str. 1 60433 Frankfurt am Main GERMANY ---------------------------- Office: (..49-69) 96769-736 Lab: (..49-69) 96769-209 Fax: (..49-69) 96769-327 danko.nikolic at gmail.com ---------------------------- From dubuf at ualg.pt Wed May 28 11:43:58 2014 From: dubuf at ualg.pt (Hans du Buf) Date: Wed, 28 May 2014 16:43:58 +0100 Subject: Connectionists: How the brain works In-Reply-To: <5385D61E.4080708@cse.msu.edu> References: <5385C13E.9060001@gmail.com> <5385D61E.4080708@cse.msu.edu> Message-ID: <5386043E.8030603@ualg.pt> On 05/28/2014 01:27 PM, Juyang Weng wrote: > Danko: > > What about a primate, an animal, and an insect? How many traverses > does each have? > John, Danko, all, That's a good question but difficult to answer (at least for me). How can certain ravens solve 3-step puzzles for getting at some food? They have never seen the puzzle, they look at the available pieces from different points, and they have the solution. Are they consciously reasoning? Or is it a low-level process in which the rewards of different combinations are predicted and summed in order to estimate the total reward of going for the food or not? Like chess players using a lot of knowledge, experience and intuition, or a chess program that simply goes through all possible combinations until a certain depth? Like us, mens agitat molem, they are tool users. I would put them at our level because, apart from using tools and solving puzzles, they are very social: they can collaborate, they are inventive in hiding food while observing the others, they can fool others, and they can detect intentions from others. Three traverses? So is an ant intelligent? Perhaps a little bit. But the colony is much more intelligent. 
Is a neuron intelligent? Without any doubt, because the many chemical reactions are very complicated and intricate, and it has a function, like an ant. You don't need to answer the question of whether the neuron's colony is intelligent. One of the questions was: is there one equation? I have been reading a lot about simple reaction-diffusion equations. Murray's two-tome book Mathematical Biology explains why some zebras are striped horizontally rather than vertically, and even covers group interactions. Turing instabilities, spatial coherence resonance near pattern-forming instabilities, mostly systems composed of only two components. I have been experimenting with Swift-Hohenberg, where heat conduction in a fluid suddenly jumps to convection loops (if you are interested, please see the very nice demos of Michael Cross at Caltech). I managed to simulate the Brusselator, and believe me, even using Cross' notes it was not easy to get a stable simulation because there is always one critical value where the system jumps from one attractor to another. Only TWO components! Nevertheless, we can use dynamic neural field (DNF) theory and attractors to track multiple objects in complex scenes (vision) and to control limb joints for e.g. walking. This is low-level. One level higher, DNFs can be used to predict rewards and to generate visuomotor sequences. A very complex system of massively parallel DNFs. But it is still reactive. The question, which I touched upon in a previous email, is whether the same principles can be used to balance narrow and broad attention, both endogenous and exogenous. To balance between excessive control and lack of control. I think this is possible. But the ultimate question is whether it could be applied at the highest level. This involves so many things. When I read a paper saying "this filter is implemented by the following Gaussian" my brain expects to see two times sigma squared. This is still experience and different memories. 
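The two-component Brusselator mentioned above is small enough to sketch in a few lines. The following is a generic illustration (the parameter values, initial state, and forward-Euler integrator are assumptions for this sketch, not the simulation described in the email); it shows the critical value b = 1 + a^2 at which the fixed point (a, b/a) gives way to a limit cycle.

```python
# Generic Brusselator sketch (illustrative assumptions, not the
# simulation from the email above):
#   dx/dt = a + x^2*y - (b+1)*x
#   dy/dt = b*x - x^2*y
# The fixed point (a, b/a) is stable for b < 1 + a^2 and is replaced
# by a limit cycle once b crosses the critical value 1 + a^2.

def brusselator_step(x, y, a, b, dt):
    """Advance the Brusselator by one forward-Euler step of size dt."""
    dx = a + x * x * y - (b + 1.0) * x
    dy = b * x - x * x * y
    return x + dt * dx, y + dt * dy

def simulate(a=1.0, b=2.5, x0=1.1, y0=2.4, steps=100_000, dt=1e-3):
    """Integrate for steps*dt time units and return the final state."""
    x, y = x0, y0
    for _ in range(steps):
        x, y = brusselator_step(x, y, a, b, dt)
    return x, y

# With a = 1 the critical value is b = 2:
#   b = 1.5 -> the trajectory settles onto the fixed point (1.0, 1.5);
#   b = 2.5 -> the same equations instead orbit a limit cycle.
settled = simulate(b=1.5)
cycling = simulate(b=2.5)
```

The small time step is not incidental: past the critical value the oscillation develops fast excursions, and an explicit Euler scheme with a coarse dt can blow up, consistent with the stability difficulties described in the email.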
If you want to build an intelligent machine which can solve a 3-step puzzle which it has never seen before, only with the goal of satisfying a reward, you need a lot of experience and memories etc. before you can even start to think about it. But, at the very end, the circuits which solve the problem, and those which we think (!) are responsible for taking conscious decisions, are all the same. Do not expect a God in our machine, but: Deus ex machina. Hmmm, reminds me of the fact that our brain still has a religious region, which evolved in our ancestors to explain some natural phenomena and to establish a few basic rules for the survival of the tribe. Hans -------------- next part -------------- An HTML attachment was scrubbed... URL: From ecai2014 at guarant.cz Thu May 29 06:30:02 2014 From: ecai2014 at guarant.cz (ECAI 2014) Date: Thu, 29 May 2014 12:30:02 +0200 Subject: Connectionists: ECAI 2014 - reminder Message-ID: <20140529103002.827521742AC@gds25d.active24.cz> =============================================================== ECAI 2014 Prague, Czech Republic 18-22 August 2014 http://www.ecai2014.org/ =============================================================== PRELIMINARY PROGRAMME Monday, August 18 Tutorials, Workshops, STAIRS, RuleML, Angry Birds Tuesday, August 19 Tutorials, Workshops, STAIRS, RuleML, Angry Birds Welcome Drink Wednesday, August 20 Opening Session, Keynote speaker, Parallel Sessions, PAIS, RuleML, Angry Birds Thursday, August 21 Keynote speaker, Parallel Sessions, PAIS, Angry Birds Conference Dinner Friday, August 22 Keynote speaker, Parallel Sessions, Angry Birds PROGRAMME COMMITTEE Conference Chair: Hector Geffner ECAI Programme Chair: Torsten Schaub Organizing Committee Chair: Vladimír Mařík Organizing Committee Co-chairs: Olga Štěpánková and Filip Železný 
ECCAI Chair: Patrick Doherty Workshop Chairs: Marina de Vos and Karl Tuyls STAIRS Programme Chairs: Ulle Endriss and Joao Leite PAIS Programme Chairs: Gerhard Friedrich and Barry O'Sullivan Tutorials Chairs: Agostino Dovier and Paolo Torroni Senior Programme Committee: José Júlio Alferes (Portugal) Sophia Ananiadou (United Kingdom) Elisabeth André (Germany) Grigoris Antoniou (United Kingdom) Christian Bessiere (France) Yngvi Björnsson (Iceland) Gerhard Brewka (Germany) Diego Calvanese (Italy) Amedeo Cesta (Italy) Andrew Coles (United Kingdom) Luc de Raedt (Belgium) Stefan Decker (Ireland) Esra Erdem (Turkey) Paolo Frasconi (Italy) Johannes Fürnkranz (Germany) Enrico Giunchiglia (Italy) Lluís Godo Lacasa (Spain) Marc Hanheide (United Kingdom) Joachim Hertzberg (Germany) Joerg Hoffmann (Germany) Eyke Hüllermeier (Germany) Anthony Hunter (United Kingdom) Luca Iocchi (Italy) Manfred Jaeger (Denmark) Tomi Janhunen (Finland) Souhila Kaci (France) Kristian Kersting (Germany) Philipp Koehn (United Kingdom) Sarit Kraus (Israel) =============================================================== This email is not intended to be spam or to go to anyone who wishes not to receive it. If you do not wish to receive this letter and wish to remove your email address from our database please reply to this message with "Unsubscribe" in the subject line From akira-i at brest-state-tech-univ.org Thu May 29 15:35:50 2014 From: akira-i at brest-state-tech-univ.org (Akira Imada) Date: Thu, 29 May 2014 22:35:50 +0300 Subject: Connectionists: ICNNAI-2014 at Belarus - Program, Message-ID: Dear Connectionists, We'd appreciate it if you'd take the time to visit and take a brief look at our program. - http://icnnai.bstu.by/2014/program.html Call for participation is at - http://icnnai.bstu.by/2014/call-for-participation.html Thank you, ICNNAI-2014 Organizers, -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From danko.nikolic at googlemail.com Fri May 30 05:01:26 2014 From: danko.nikolic at googlemail.com (Danko Nikolic) Date: Fri, 30 May 2014 11:01:26 +0200 Subject: Connectionists: practopoiesis In-Reply-To: <5385D6EC.2040408@cse.msu.edu> References: <5385C13E.9060001@gmail.com> <5385D6EC.2040408@cse.msu.edu> Message-ID: <9B96C174-C36D-4792-B15E-C0898C2DD382@gmail.com> Dear John, I just read your SASE paper to make a comparison to practopoiesis, as you asked. Interesting paper. Nice work. I like your succinct style, formal and accurate. Your theorems make it clear what you mean by awareness, etc. Here is what I can conclude about "A theory of developmental architecture" by Juyang Weng: A Markov decision process is a T2-system. It does not make a difference whether you have sensors that collect only outside information or you also have sensors collecting internal information. It remains a T2 system. The same holds for actions. Internal actions do not change a T2 into a T3 either. Therefore, according to practopoietic theory, your theory falls into the category of T2-systems. As you know, I laid out arguments that this is not enough to produce intelligent adaptive behavior that matches biological behaving systems. My suggestion would be to find ways to expand it into a T3. With best regards, Danko On May 28, 2014, at 2:30 PM, Juyang Weng wrote: > Danko, thank you for the links. You might want to take a look at my Self-Aware and Self-Effecting (SASE) > architecture. Your three-traverse idea has some similarity to it, but your theory is not (yet?) supported by fully computational detail > as SASE. 
> > -John > > On 5/28/14 6:58 PM, Danko Nikolic wrote: >> Dear all, >> >> I made an effort to make practopoiesis more approachable to a wider audience: >> >> >> One: I wrote a short popular article on the implications of practopoiesis for artificial intelligence: >> >> http://www.singularityweblog.com/practopoiesis/ >> >> >> >> Two: I made a list of the key concepts with a brief explanation of each: >> >> http://www.danko-nikolic.com/practopoiesis#Concepts >> >> >> >> I hope that this will be helpful. >> >> With warm greetings from Germany, >> >> Danko Nikolic >> >> > > -- > -- > Juyang (John) Weng, Professor > Department of Computer Science and Engineering > MSU Cognitive Science Program and MSU Neuroscience Program > 428 S Shaw Ln Rm 3115 > Michigan State University > East Lansing, MI 48824 USA > Tel: 517-353-4388 > Fax: 517-432-1061 > Email: weng at cse.msu.edu > URL: http://www.cse.msu.edu/~weng/ > ---------------------------------------------- > -- Prof. Dr. Danko Nikolić Web: http://www.danko-nikolic.com Mail address 1: Group Leader Department of Neurophysiology Max Planck Institute for Brain Research Deutschordenstr. 46 60528 Frankfurt am Main GERMANY Mail address 2: Research Fellow Frankfurt Institute for Advanced Studies Wolfgang Goethe University Ruth-Moufang-Str. 1 60433 Frankfurt am Main GERMANY ---------------------------- Office: (..49-69) 96769-736 Lab: (..49-69) 96769-209 Fax: (..49-69) 96769-327 danko.nikolic at gmail.com ---------------------------- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From h.glotin at gmail.com Fri May 30 08:40:49 2014 From: h.glotin at gmail.com (Herve Glotin) Date: Fri, 30 May 2014 14:40:49 +0200 Subject: Connectionists: "From Cognition to Information Retrieval" Advanced Summer School - 23, 24, 25 sept - Cote Azur FR Message-ID: Dear colleagues, We would like to remind you that the early registration period for the 9th Advanced Multimodal Information Retrieval summer school ERMITES "From Cognition to Information Retrieval" will finish in a few weeks. The school will be held in Porquerolles - Côte d'Azur - France from Sept. 23rd to Sept. 25th; details: http://glotin.univ-tln.fr/ERMITES14 Please send this information to whoever you think may find it interesting. Here is a brief overview of the content: Grainger J. Research Director - CNRS LPC Orthographic Processing in Human, Monkey & Machine Graves A. Senior Researcher - Deep Mind Technologies - Google London Teaching Neural Net how to Write Kermorvant C. R&D Manager - A2IA SA Deep Neural Net for Industrial Written Text Recognition Touzet C. Pr - CNRS LNIA Cognition Neural Theory & Reading Hannagan T. Dr - European RC Brain & Language Research Institute Spherical Reader with Convolutional Neural Net Oudeyer P.-Y. Research Director - INRIA Curiosity-Driven Automatic Learning with Robots De Boer B. Pr - Artificial Intelligence Lab - Brussels Univ. Evolution of Language Learning Li H. Dr - INRA Multimedia Maximal Marginal Relevance for Video Summarization Bellot P. Pr - CNRS LSIS Information Retrieval in Big Text Data - Communication (poster, talk...) proposals are welcome and can be selected for the proceedings. - ERMITES14 is supported by TPM, INRIA, UTLN Axe Information, FRIIAM, ARIA, LSIS, Inst. Univ. de France. Best regards, Herve Glotin, Pierre-Hugues Joalland, et al. -- Hervé Glotin, Pr. Institut Univ. de France (IUF) & Univ. 
Toulon (UTLN) Head of information DYNamics & Integration (DYNI @ UMR CNRS LSIS) http://glotin.univ-tln.fr glotin at univ-tln.fr From grlmc at urv.cat Fri May 30 16:08:16 2014 From: grlmc at urv.cat (GRLMC) Date: Fri, 30 May 2014 22:08:16 +0200 Subject: Connectionists: SSTiC 2014: June 7, early registration deadline Message-ID: <5835787C688A43A2A8CFFF04621760A6@Carlos1> *To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject line* ********************************************************************* 2014 TARRAGONA INTERNATIONAL SUMMER SCHOOL ON TRENDS IN COMPUTING SSTiC 2014 Tarragona, Spain July 7-11, 2014 Organized by Rovira i Virgili University http://grammars.grlmc.com/sstic2014/ ********************************************************************* --- Early registration deadline: June 7 --- ********************************************************************* AIM: SSTiC 2014 is the second edition in a series started in 2013. For the previous event, see http://grammars.grlmc.com/SSTiC2013/ SSTiC 2014 will be a research training event mainly addressed to PhD students and PhD holders in the first steps of their academic career. It intends to update them about the most recent developments in the diverse branches of computer science and its neighbouring areas. To that purpose, renowned scholars will lecture and will be available for interaction with the audience. SSTiC 2014 will cover the whole spectrum of computer science through 6 keynote lectures and 22 six-hour courses dealing with some of the most lively topics in the field. The organizers share the idea that outstanding speakers will really attract the brightest students. ADDRESSED TO: Graduate students from around the world. There are no formal pre-requisites in terms of the academic degree the attendee must hold. However, since there will be several levels among the courses, reference may be made to specific knowledge background in the description of some of them. 
SSTiC 2014 is also appropriate for more senior people who want to keep themselves updated on developments in their own field or in other branches of computer science. They will surely find it fruitful to listen and discuss with scholars who are main references in computing nowadays. REGIME: In addition to keynotes, 3 parallel sessions will be held during the whole event. Participants will be able to freely choose the courses they will be willing to attend as well as to move from one to another. VENUE: SSTiC 2014 will take place in Tarragona, located 90 kms. to the south of Barcelona. The venue will be: Campus Catalunya Universitat Rovira i Virgili Av. Catalunya, 35 43002 Tarragona KEYNOTE SPEAKERS: Larry S. Davis (U Maryland, College Park), A Historical Perspective of Computer Vision Models for Object Recognition and Scene Analysis David S. Johnson (Columbia U, New York), Open and Closed Problems in NP-Completeness George Karypis (U Minnesota, Twin Cities), Top-N Recommender Systems: Revisiting Item Neighborhood Methods Steffen Staab (U Koblenz), Explicit and Implicit Semantics: Two Sides of One Coin Philip Wadler (U Edinburgh), You and Your Research and The Elements of Style Ronald R. Yager (Iona C, New Rochelle), Social Modeling COURSES AND PROFESSORS: Divyakant Agrawal (Qatar Computing Research Institute, Doha), [intermediate] Scalable Data Management in Enterprise and Cloud Computing Infrastructures Pierre Baldi (U California, Irvine), [intermediate] Big Data Informatics Challenges and Opportunities in the Life Sciences Rajkumar Buyya (U Melbourne), [intermediate] Cloud Computing John M. 
Carroll (Pennsylvania State U, University Park), [introductory] Usability Engineering and Scenario-based Design Kwang-Ting (Tim) Cheng (U California, Santa Barbara), [introductory/intermediate] Smartphones: Hardware Platform, Software Development, and Emerging Apps Amr El Abbadi (U California, Santa Barbara), [introductory] The Distributed Foundations of Data Management in the Cloud Richard M. Fujimoto (Georgia Tech, Atlanta), [introductory] Parallel and Distributed Simulation Mark Guzdial (Georgia Tech, Atlanta), [introductory] Computing Education Research: What We Know about Learning and Teaching Computer Science David S. Johnson (Columbia U, New York), [introductory] The Traveling Salesman Problem in Theory and Practice George Karypis (U Minnesota, Twin Cities), [intermediate] Programming Models/Frameworks for Parallel & Distributed Computing Aggelos K. Katsaggelos (Northwestern U, Evanston), [intermediate] Optimization Techniques for Sparse/Low-rank Recovery Problems in Image Processing and Machine Learning Arie E. Kaufman (U Stony Brook), [intermediate/advanced] Visualization Carl Lagoze (U Michigan, Ann Arbor), [introductory] Curation of Big Data Dinesh Manocha (U North Carolina, Chapel Hill), [introductory/intermediate] Robot Motion Planning Bijan Parsia (U Manchester), [introductory] The Empirical Mindset in Computer Science Charles E. Perkins (FutureWei Technologies, Santa Clara), [intermediate] Beyond LTE: the Evolution of 4G Networks and the Need for Higher Performance Handover System Designs Robert Sargent (Syracuse U), [introductory] Validation of Models Steffen Staab (U Koblenz), [intermediate] Programming the Semantic Web Mike Thelwall (U Wolverhampton), [introductory] Sentiment Strength Detection for Twitter and the Social Web Jeffrey D. 
Ullman (Stanford U), [introductory] MapReduce Algorithms Nitin Vaidya (U Illinois, Urbana-Champaign), [introductory/intermediate] Distributed Consensus: Theory and Applications Philip Wadler (U Edinburgh), [intermediate] Topics in Lambda Calculus and Life ORGANIZING COMMITTEE: Adrian Horia Dediu (Tarragona) Carlos Martín-Vide (Tarragona, chair) Florentina Lilica Voicu (Tarragona) REGISTRATION: Registration must be done at http://grammars.grlmc.com/sstic2014/registration.php The selection of up to 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an approximation of the respective demand for each course. Since the capacity of the venue is limited, registration requests will be processed on a first-come, first-served basis. The registration period will close when the venue reaches capacity. Registering well before the event is therefore advisable. DEADLINES AND FEES: As far as possible, participants are expected to attend for the whole (or most of the) week (full-time). Fees are a flat rate allowing one to participate in all courses. They vary depending on the registration deadline: Early registration: June 7, 2014 --- 470 Euro Regular registration: July 5, 2014 --- 500 Euro On-site registration: July 11, 2014 --- 530 Euro ACCOMMODATION: Information about accommodation is available on the website of the School. CERTIFICATE: Participants will receive a certificate of attendance. QUESTIONS AND FURTHER INFORMATION: florentinalilica.voicu at urv.cat POSTAL ADDRESS: SSTiC 2014 Lilica Voicu Rovira i Virgili University Av. 
Catalunya, 35 43002 Tarragona, Spain Phone: +34 977 559 543 Fax: +34 977 558 386 ACKNOWLEDGEMENTS: Universitat Rovira i Virgili From weng at cse.msu.edu Sat May 31 09:41:05 2014 From: weng at cse.msu.edu (Juyang Weng) Date: Sat, 31 May 2014 21:41:05 +0800 Subject: Connectionists: practopoiesis In-Reply-To: <9B96C174-C36D-4792-B15E-C0898C2DD382@gmail.com> References: <5385C13E.9060001@gmail.com> <5385D6EC.2040408@cse.msu.edu> <9B96C174-C36D-4792-B15E-C0898C2DD382@gmail.com> Message-ID: <5389DBF1.4040409@cse.msu.edu> On 5/30/14 5:01 PM, Danko Nikolic wrote: > Dear John, > > I just read your SASE paper to make a comparison to practopoiesis, > as you asked. Interesting paper. Nice work. > > I like your succinct style, formal and accurate. Your theorems make > it clear what you mean by awareness, etc. > > Here is what I can conclude about "A theory of developmental > architecture" by Juyang Weng: > > Markov decision process is a T2-system. It does not make a difference > whether you have sensors that collect only outside information or you > also have sensors collecting internal information. It remains a T2 > system. The same holds for actions. Internal actions do not change a > T2 into T3 either. Then you must have a rigorous and precise definition, so that you are not the only God who can say what is T2 or T3. If your work is scientific, your detailed, rigorous and precise definition must be able to be verified as true or false by other researchers. Otherwise, your work is religion-like. For example, in your writing above you do not say "why" SASE is T2; this is necessary for a scientific work. > > Therefore, according to practopoietic theory, your theory falls into > the category of T2-systems. You do not seem to have a verifiable theory yet. It appears religion-like to me. > > As you know, I laid out arguments that this is not enough to produce > intelligent adaptive behavior that matches biological behaving > systems. 
My suggestion would be to find ways to expand it into a T3. Your arguments are too vague to qualify as scientific work. In religion, the God says "if you believe it, it is there. Otherwise, it is not." -John > With best regards, > > > Danko > > > > > On May 28, 2014, at 2:30 PM, Juyang Weng wrote: > >> Danko, thank you for the links. You might want to take a look at my >> Self-Aware and Self-Effecting (SASE) >> architecture. Your three-traverse idea has some similarity to it, >> but your theory is not (yet?) supported by fully computational detail >> as SASE. >> >> -John >> >> On 5/28/14 6:58 PM, Danko Nikolic wrote: >>> Dear all, >>> >>> I made an effort to make practopoiesis more approachable to a wider >>> audience: >>> >>> >>> One: I wrote a short popular article on the implications of >>> practopoiesis for artificial intelligence: >>> >>> http://www.singularityweblog.com/practopoiesis/ >>> >>> >>> >>> Two: I made a list of the key concepts with a brief explanation of each: >>> >>> http://www.danko-nikolic.com/practopoiesis#Concepts >>> >>> >>> >>> I hope that this will be helpful. >>> >>> With warm greetings from Germany, >>> >>> Danko Nikolic >>> >>> >> >> -- >> -- >> Juyang (John) Weng, Professor >> Department of Computer Science and Engineering >> MSU Cognitive Science Program and MSU Neuroscience Program >> 428 S Shaw Ln Rm 3115 >> Michigan State University >> East Lansing, MI 48824 USA >> Tel: 517-353-4388 >> Fax: 517-432-1061 >> Email: weng at cse.msu.edu >> URL: http://www.cse.msu.edu/~weng/ >> ---------------------------------------------- >> > > -- > > Prof. Dr. Danko Nikolić > > > Web: http://www.danko-nikolic.com > > Mail address 1: > Group Leader > Department of Neurophysiology > Max Planck Institute for Brain Research > Deutschordenstr. 46 > 60528 Frankfurt am Main > GERMANY > > Mail address 2: > Research Fellow > Frankfurt Institute for Advanced Studies > Wolfgang Goethe University > Ruth-Moufang-Str. 
1 > 60433 Frankfurt am Main > GERMANY > > ---------------------------- > Office: (..49-69) 96769-736 > Lab: (..49-69) 96769-209 > Fax: (..49-69) 96769-327 > danko.nikolic at gmail.com > ---------------------------- > > > > > -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From frank.ritter at psu.edu Thu May 29 16:25:06 2014 From: frank.ritter at psu.edu (Frank Ritter) Date: Thu, 29 May 2014 16:25:06 -0400 Subject: Connectionists: CogModel notes: ICCM15/BRIMS14/books/prize/new society/Job Message-ID: [could you forward this to Lapsus or your list related to GDR I3?] This is the second emailing for ICCM 2015. Please forward if appropriate to your members, and put appropriate links onto your web site. Normally, the ICCM 2015 announcement would drive this email (it will be in Groningen, around April 2015, on its regular (15/18 month) schedule; this should be settled next month). But there are several timely announcements that indicate new publication outlets, resources, and jobs in Cog Sci and in cognitive modeling. I have also included several unusual items, including an interesting conference and a Kickstarter campaign! Now in order: meetings, resources, jobs. If you would like to be removed, please just let me know. I maintain it by hand to keep it small. [Hypertext version available at http://acs.ist.psu.edu/iccm2015/iccm-mailing-may2014.html] cheers, Frank Ritter frank.e.ritter at gmail.com http://www.frankritter.com **************************************************************** 1. Intnl. Conf. on Cognitive Modeling, April 2015 in Groningen, NL 2. 
WASET International Conference on Cognitive Modeling (!) [link suppressed] :::: Other conferences, workshops 3. BRIMS 2014 proceedings online http://cc.ist.psu.edu/BRIMS/archives/2014/ 4. Cognitive Science Society annual meeting, 27 june early reg. deadline http://cognitivesciencesociety.org/conference_future.html 5. Cognitive Science Tutorial Program, 23 July 2014 http://cognitivesciencesociety.org/conference2014/tutorials.html 6. ACT-R Workshop at Cognitive Science 2014 http://cognitivesciencesociety.org/conference2014/workshops.html [from act-r mailing list] 7. Soar Workshop, week of 16 June https://web.eecs.umich.edu/~soar/workshop/ 8. 3rd Conference on Advances in Cognitive Systems, May 2015 http://www.cogsys.org/2015 9. BICA 2014 call for papers, due 26 May 2014, but other dates http://bicasociety.org/meetings/2014/ 10. Nengo Summer School, 8-21 June http://nengo.ca/summerschool 11. Summer School - Web Science & the Mind, 7-18 July, reg. still open http://www.summer14.isc.uqam.ca/ 12. CFP for 2015 iConference in Newport Beach, CA, 24-27 March 2015 http://ischools.org/the-iconference/ papers due 5 sept 2014 13. Intl. Summer School in Cog. Sci., 30 Jun-11 July, Bulgaria http://nbu.bg/cogs/events/ss2014.html 14. 14th Neural Computation and Psychology Workshop, 21-23 Aug http://www.psych.lancs.ac.uk/ncpw14 15. AI for Human-Robot Interaction (AI-HRI), papers due 13 June 14 http://ai-hri.github.io/ :::: Other resources 16. OpenWorm Kickstarter Campaign http://www.openworm.org 17. Numerous books to review 18. Premodeling / HCI book FDUCs: What designers need to know about people http://www.frankritter.com/fducs/ 19. New Society: The Society for Affective Science and Conference http://www.society-for-affective-science.org/conference 20. $10k prize for best paper in Human Factors journal, due 2 June 14 http://www.hfes.org/web/pubpages/hfprize.html :::: Jobs 21. 
Postdoctoral Scholar in Systems Neuroscience and Connectivity Modeling http://www.la.psu.edu/facultysearch/ **************************************************************** 1. International Conf. on Cognitive Modeling, April 2015 in Groningen, NL The International Conference on Cognitive Modeling will take place in April 2015 (approx. date) at RU/Groningen, in the Netherlands. The submission deadline will be in the Fall of 2014. Further announcements will provide more details. **************************************************************** 2. WASET International Conference on Cognitive Modeling (!) [link suppressed] Cognitive modeling has grown, and we don't always know everyone doing it any more. But it has also now grown enough that WASET, the (so-called) World Academy of Science, Engineering and Technology, has created a "WASET International Conference on Cognitive Modeling". WASET has NO RELATION to the ICCM conference folks. If you have any useful suggestions, please get in touch. I've emailed a few of their program committee members, where I could find their addresses, to start a discussion. The program committee appears to be made up of scholars from other areas, and I have confirmed that with two of them, who don't know how to be removed. References https://en.wikipedia.org/wiki/World_Academy_of_Science,_Engineering_and_Technology Google has plenty more information on WASET. Google: WASET International Conference on Cognitive Modeling 2014 will get you to the interesting conference. [I now suggest that we put copyright on our web pages, and realise that cognitive modeling is now important enough to be copied (that is a neutral term) by WASET.] **************************************************************** 3. BRIMS 2014 proceedings online http://cc.ist.psu.edu/BRIMS/archives/2014/ The Behavior Representation in Modeling and Simulation (BRIMS) conference (http://cc.ist.psu.edu/BRIMS2014/) was held 1-4 April. 
It was held with the Social Computing, Behavioral-Cultural Modeling, and Prediction (SBP) conference (http://sbp-conference.org/) in DC. Its proceedings are online at http://cc.ist.psu.edu/BRIMS/archives/2014/ The place and time for next year have not yet been announced.

****************************************************************
4. Cognitive Science Society annual meeting, 27 June early reg. deadline http://cognitivesciencesociety.org/conference_future.html

CogSci 2014 - Cognitive Science Meets Artificial Intelligence: Human and Artificial Agents in Interactive Contexts
Quebec City, Canada, 23-26 July 2014
Website URL for future conferences: http://cognitivesciencesociety.org/conference_future.html

Highlights include:
Plenary Speakers: Dedre Gentner, Steven Harnad, & Minoru Asada
13th Rumelhart Prize Recipient: Ray Jackendoff
Symposia: "Foundations of Social Cognition", "Moral Cognition and Computation", "The Future of Human-Agent Interaction"

Cognitive scientists from around the world are invited to attend CogSci 2014, the world's premier annual conference on cognitive science. The conference represents a broad spectrum of disciplines, topics, and methodologies from across the cognitive sciences. In addition to the invited presentations, the program will be filled with reviewed submissions from the following categories: papers, symposia, presentation-based talks, member abstracts, tutorials, and workshops. Submissions must be completed electronically through the conference submissions web site. Submissions may be in any area of the cognitive sciences, including, but not limited to, anthropology, artificial intelligence, computational cognitive systems, cognitive development, cognitive neuroscience, cognitive psychology, education, linguistics, logic, machine learning, neural networks, philosophy, robotics and social network studies. Information regarding the submission process, including opening dates for the submission website, will be posted shortly.
http://cognitivesciencesociety.org/conference2014/submissions.html

We look forward to seeing you in Quebec City.

Conference Co-Chairs: Paul Bello, Marcello Guarini, Marjorie McShane, and Brian Scassellati

***********************************************************
5. Cognitive Science Tutorial Program, 23 July 2014 http://cognitivesciencesociety.org/conference2014/tutorials.html

Growth Curve Analysis: A Hands-On Tutorial on Using Multilevel Regression to Analyze Time Course Data
  Daniel Mirman
Full Day Tutorial on Quantum Models of Cognition and Decision
  Zheng Wang, Jerome Busemeyer, Jennifer Trueblood
Probability, programs, and the mind: Building structured Bayesian models of cognition
  Noah Goodman, Josh Tenenbaum
Online Experiments using jsPsych, psiTurk, and Amazon Mechanical Turk
  Josh de Leeuw, Anna Coenen, Doug Markant, Jay B. Martin, John McDonnell, Alexander Rich, Todd Gureckis
Types and states: Mixture and hidden Markov models for cognitive science
  Ingmar Visser, Maarten Speekenbrink
Practical Advice on How to Run Human Behavioral Studies
  Frank Ritter, Jong Kim

***********************************************************
6. ACT-R Workshop at Cognitive Science 2014 http://cognitivesciencesociety.org/conference2014/workshops.html [from act-r mailing list]

The ACT-R Workshop 2014 will be held as part of the Cognitive Science annual meeting: http://cognitivesciencesociety.org/conference2014/workshops.html The workshop this year will be organized around four themes: Neuroscience, Metacognition, Applications, and Architecture. We expect to discuss a wide range of issues that relate to cognitive architectures and cognitive modeling more generally; anyone interested in these issues (regardless of familiarity with ACT-R) is invited and encouraged to attend. If you have an idea for a short presentation at the workshop, please email me (salvucci at drexel.edu) a possible title and description.
Registration information should be available soon on the Cognitive Science web site.

****************************************************************
7. Soar Workshop, week of 16 June https://web.eecs.umich.edu/~soar/workshop/

The Center for Cognitive Architecture at the University of Michigan (http://sitemaker.umich.edu/soarweb/home) and Soar Technology (http://www.soartech.com/) are pleased to announce that the 34th Soar Workshop will be held the week of June 16, 2014 at the Computer Science and Engineering building at the University of Michigan in Ann Arbor.

For those of you not familiar with the Soar Workshop (https://web.eecs.umich.edu/~soar/workshop/): each year, members of the Soar community -- faculty, scientists, graduate students, technical staff and developers -- gather together for several days of intensive interaction and exchange on Soar. The Soar community is widely distributed geographically, so these workshops (http://sitemaker.umich.edu/soar/soar_workshop_proceedings) provide an opportunity to have face-to-face conversations, learn about the current status of other participants' research, and get previews of what will happen in the future.

We will try to give as many members of the community as possible the opportunity to describe their research or discuss the Soar issues that are of concern to them. This means the time available per talk is quite short (typically either 5 or 15 minutes). Since workshop attendees are already entrenched in the Soar world, brief talks that concentrate on only the essentials work very well. The workshop format is presentations only -- no formal papers. [but slides are often collected]

There is no charge to attend the 34th Soar Workshop, although registration at https://web.eecs.umich.edu/~soar/workshop/manage.php is required. You may sign up to give talks of various lengths as soon as you're registered.
Then please upload a copy of your presentation(s) by June 4th, 2014 via the Soar Workshop registration page (https://web.eecs.umich.edu/~soar/workshop/manage.php). Please make sure to register as soon as possible so that we can plan the workshop accordingly!

Questions and information regarding the workshop should be directed to John Laird via email (laird at umich.edu), mail, or phone:

Soar Workshop 34
c/o John Laird
2260 Hayward
Ann Arbor, MI 48109-2121
(734) 647-1761

****************************************************************
8. 3rd Conference on Advances in Cognitive Systems, May 2015 http://www.cogsys.org/2015

We are currently in the process of organizing the 3rd Conference on Advances in Cognitive Systems (ACS). We wanted to give you a heads-up that ACS will be moving to a spring conference. The next ACS will be held at Georgia Tech, Atlanta, Georgia, USA on May 29-31, 2015. The paper submission deadline will be in February 2015.

As you may have heard, AAAI has moved to a winter conference to avoid conflict with IJCAI (which has become an annual event). The ACS organizing committee felt it best to avoid a collision with AAAI's paper submission deadlines. We hope that the ACS move to spring will allow the diverse and dynamic cognitive systems community to grow. Please be on the lookout for the call for papers in the coming months.

Best,
Ashok Goel and Mark Riedl
Co-Chairs, 3rd Conference on Advances in Cognitive Systems

[there is also a mailing list that they have added people to: advances-cognitive-systems at lists.gatech.edu]

****************************************************************
9. BICA 2014 call for papers, due 26 May 2014, but other dates http://bicasociety.org/meetings/2014/

2014 Annual Intl Conf on Biologically Inspired Cognitive Architectures (BICA 2014)
Fifth Annual Meeting of the BICA Society
November 7-9 (Friday-Sunday): MIT, Cambridge, MA (http://bicasociety.org/meetings/2014)
Sponsors: The BICA Society; MIT; Elsevier B.V.
Points of contact: Paul Robertson (paulr at dollabs.com) and Alexei Samsonovich (alexei at bicasociety.org)

Call for Papers

Biologically Inspired Cognitive Architectures (BICA) are computational frameworks for building intelligent agents that are inspired by biological intelligence. Intelligent biological systems, notably animals such as humans, have many qualities that are often lacking in artificially designed systems, including robustness, flexibility and adaptability to environments. At a time when visibility into naturally intelligent systems is exploding, thanks to modern brain imaging and recording techniques that allow us to map brain structures and functions, our ability to learn lessons from nature and to build biologically inspired intelligent systems has never been greater. At the same time, the growth in computer science and technology has unleashed enough computational power at sufficiently low prices that an explosion of intelligent applications, from driverless vehicles to augmented reality to ubiquitous robots, is now almost certain. The growth in these fields renews the challenge of computationally replicating all essential aspects of the human mind (the BICA Challenge), an endeavor which is interdisciplinary in nature and promises to yield a bi-directional flow of understanding between all involved disciplines.

Scope

With the scope of BICA 2014 covering all areas of BICA-related research listed below, the major thrusts will be Perception, Attention, and Language. Here the key questions are:

- What can we learn from biological systems about how perception, attention, decision making, and action work together to produce intelligent behavior that is robust in natural environments?
- What have we learned about information flow in biological systems that can aid us in building better artificial systems that combine perception, action, language, learning, and decision making in robots and intelligent agents?
- What have we learned recently about information flow in the brain that can lead to better cognitive models that combine perception, attention, decision making, action, language, and learning?
- What role is played by emotions in perception, attention, decision making, language, and learning?
- How have we incorporated, or how can we incorporate, into cognitive architectures new and evolving understandings of the flow of information in biological cognitive systems?
- What mathematical foundations are emerging today that can support perception and learning?

In addition to these focus topic areas of BICA 2014, we encourage submission of papers in all areas of BICA research, especially the following:

Neuroscience: ....
Social, Economic and Educational Sciences: ...
Cognitive Science: ....
Artificial Intelligence: ....
General: ....

Format and Agenda

The format of the conference is a 2.5-day meeting including paper presentations, panel discussions, invited talks, and demonstration showcases. Symposia and other mini-events (special sessions, breakout groups, brainstorms, think-tanks, socials, contests, and more) will be added to the conference as needed (proposals are solicited). In addition, BICA 2014 will host a special track, a "Doctoral Consortium", for which a "best student paper" award will be presented. We will also host technology demonstrations, for which we solicit proposals, and a poster session. We additionally solicit proposals for panel topics. The working language is English.

As part of our rich social and cultural program included in the registration, we are planning a Welcome Reception and a boat trip into the Atlantic on Saturday night, with a banquet on the boat. The detailed program is not yet available. Please see our separate pages for confirmed symposia planned as part of BICA 2014 (we are asking for additional proposals).
Submission and Publication Venues

Publication venues include: (1) the Elsevier journal BICA - papers may be distributed among several journal issues; and (2) a volume of Procedia Computer Science, indexed by Web of Science and Scopus. One EasyChair submission site will be used for all categories of submissions, including (1), (2) and also (3): stand-alone abstracts that will be included in the conference program brochure without being published. All submissions will undergo one round of peer review. The category may be changed based on reviews. All conference materials (including papers and abstracts) will be made available locally to conference participants (included in the registration package) via USB and/or the Internet. Invited speakers are not required, but are encouraged, to submit papers or abstracts. Other participants can have presentations accepted based on an abstract only, without a paper submission.

Core Organizing Committee
Paul Robertson (DOLL, Inc.): General Chair
Patrick H. Winston (CSAIL/MIT): Co-Chair
Howard Shrobe (CSAIL/MIT): Co-Chair
Alexei Samsonovich (GMU): Co-Chair, PC Chair
Antonio Chella: OC Member
Christian Lebiere: OC Member
Kamilla R. Johannsdottir: OC Member

Important Dates

Proposals for Sub-Events: Symposia, Workshops and Socials - March 1
Doctoral Consortium submissions - May 05
Technology Demos - June 13
Panel Session Interest - June 27

Paper and Abstract Submission:
Paper and abstract submission opens - March 1st
Abstract and Paper Submission Due - May 26*
Paper Review Feedback - June 14*
Final Papers Due - August 01
Late-breaking stand-alone abstracts due - August 31

Registration and Conference:
Early-bird registration (opens soon) due - June 17*
Regular author-presenter registration due - June 27*
Guest and non-presenting attendee online registration due - October 1
Sessions of the conference BICA 2014: November 7-9

*Deadlines extended as of April 28

****************************************************************
10.
Nengo Summer School, 8-21 June http://nengo.ca/summerschool

[From Chris Eliasmith; the registration is long past, but it's worth noting that this is another cognitive architecture summer school, which might be repeated.]

I thought you might have some acquaintances or students who would be interested to know that we're hosting a summer school this year. It will be on all things Nengo & SPA. We're encouraging people to bring projects and/or data they'd like to turn into a detailed model. We'll have a very good instructor/student ratio (1:2). It's more fully described here: http://nengo.ca/summerschool The application deadline was Feb 15.

****************************************************************
11. Summer School - Web Science & the Mind, 7-18 July, reg. still open http://www.summer14.isc.uqam.ca/

[This appears to be a standing event that might be useful to modellers]

The Institut des Sciences Cognitives at UQAM (Montreal) is organising its fifth Summer School, this year on the topic of "Web Science and the Mind".

* Topics include social network analysis, semantic web, distributed cognition, ...
* Scholarships (500$ + registration)
* Call for posters
* Can be worth 3 credits

Please contact us with any questions or comments you may have: summer14.isc at uqam.ca

Guillaume Chicoisne
Institut des sciences cognitives, Montreal

--------

The Fifth Summer School in Cognitive Sciences: Web Science and the Mind. Organized by the UQAM Cognitive Science Institute in Montreal (Canada), from July 7 to 18. This summer school will present a comprehensive overview of the interactions between the web and the cognitive sciences, with topics ranging from social network analysis to distributed cognition and the semantic web. The Summer School will feature a poster session.
Information about this poster session is available at: http://www.summer14.isc.uqam.ca/page/affiche.php Deadline: April 11th, 2014.

Registration for the Summer School is open ("Early Bird" registration fees until May 9th). Note that the lowest fee is for students who will attend the Summer School as a credited activity (worth 3 university credits). Details: http://www.summer14.isc.uqam.ca/page/inscription.php

Scholarships for travel, accommodation and/or registration will be available for students registered in a Quebec university (CREPUQ). http://www.summer14.isc.uqam.ca/page/bourses.php

Want to stay in touch? Follow us on Twitter! @iscUQAM

We hope to see you there in July.

****************************************************************
12. CFP for 2015 iConference in Newport Beach, CA, 24-27 March 2015 http://ischools.org/the-iconference/ papers due 5 Sept 2014

[This is a potential outlet, particularly for models applied to information systems]
[from CHI-announcements]

Call for Participation, iConference 2015 http://ischools.org/the-iconference/

The iConference is an international gathering of scholars and researchers concerned with critical information issues in contemporary society. iConference 2015 (http://ischools.org/the-iconference/) takes place March 24-27 in Newport Beach, California.
The following submissions are invited:

Papers (http://ischools.org/the-iconference/program/papers/)
  Deadline: Friday, September 5, 2014, midnight PDT; notification: mid-November
Posters (http://ischools.org/the-iconference/program/posters/)
  Deadline: Friday, October 10, 2014, midnight PDT; notification: mid-November
Workshops (http://ischools.org/the-iconference/program/workshops/)
  Deadline: Friday, September 26, 2014, midnight PDT; notification: Monday, October 27, 2014
Interactive Sessions (http://ischools.org/the-iconference/program/sessions-for-interaction-and-engagement/)
  Deadline: Friday, October 10, 2014, midnight PDT; notification: mid-November
Doctoral Colloquium (http://ischools.org/the-iconference/program/doctoral-colloquium/)
  Deadline: Friday, September 12, 2014, midnight PDT; notification: Friday, October 24, 2014
Social Media Expo (http://ischools.org/the-iconference/program/social-media-expo/)
  Participation commitment letter due October 14, 2014; submissions due December 15, 2014; notification: Thursday, January 15, 2015
Dissertation Award (http://ischools.org/the-iconference/program/dissertation-award/)
  Deadline: Wednesday, October 15, 2014, midnight PDT; notification: Thursday, January 15, 2015

iConference 2015 is presented by the iSchools organization (http://ischools.org) and hosted by The Donald Bren School of Information and Computer Sciences at the University of California, Irvine. All information researchers and scholars are welcome.

Sample topics of past iConferences include the following:
- social computing
- human-computer interaction
- digital youth
- information retrieval
- network science
- digital humanities
- data science
- information economics
- information systems
- information policy
- knowledge infrastructures
- computational social science
- information work and workers
- user experience and design
- data, text and knowledge mining
- digital curation and preservation
- computer-supported cooperative work
- bibliometrics and scholarly communication
- social, cultural, health and community informatics
- information and communication technology for development

--
Gary M. Olson
Department of Informatics
Bren School of Information and Computer Sciences
University of California, Irvine
Irvine, CA 92697-3440
gary.olson at uci.edu
(949) 824-0077

****************************************************************
13. Intl. Summer School in Cog. Sci., 30 Jun-11 July, Bulgaria http://nbu.bg/cogs/events/ss2014.html

The 21st edition of the well-known International Summer School in Cognitive Science will take place during the two weeks from June 30th until July 11th at the New Bulgarian University, Sofia, Bulgaria. The summer school features advanced courses for graduate students and young researchers in a variety of areas, including embodied cognition and dynamic field theory, behavioral experimentation from reaction times to model testing, computational approaches to development, the cognitive science of social dilemmas, the epistemology of cognitive science and neuroscience, and a practical course on working with E-Prime as a software tool for experiment design and implementation. The summer school will also feature a participant symposium (submit your brief papers to the school email below by June 9). Registration is now open.

For more information, visit the school's website: http://nbu.bg/cogs/events/ss2014.html You can contact us at the following email address: school at cogs.nbu.bg

****************************************************************
14.
14th Neural Computation and Psychology Workshop, 21-23 Aug http://www.psych.lancs.ac.uk/ncpw14

NCPW14 Call for Papers - 14th Neural Computation and Psychology Workshop

We cordially invite you to participate in the 14th Neural Computation and Psychology Workshop (NCPW14), to be held at Lancaster University, UK, from August 21-23, 2014: http://www.psych.lancs.ac.uk/ncpw14

This well-established and lively workshop aims to bring together researchers from different disciplines such as artificial intelligence, cognitive science, computer science, neurobiology, philosophy and psychology to discuss their work on models of cognitive processes. This workshop has often had a theme, and this time it is 'Development across the lifespan', but submissions that do not fall under this theme are also welcome. Papers must be about emergent models - frequently, but not necessarily, of the connectionist/neural network kind - applied to cognition.

The NCPW workshops have always been characterized by their limited size, high-quality papers, the absence of parallel talk sessions, and a schedule that is explicitly designed to encourage interaction among the researchers present in an informal setting. NCPW14 is no exception. The scientific program will consist of keynote lectures, oral sessions and poster sessions. Furthermore, this workshop will feature a unique set of invited speakers:

James McClelland, Stanford University
Bob McMurray, University of Iowa
Michael Thomas, Birkbeck College, London

Rumelhart Memorial Travel Awards

The Rumelhart Memorial Travel Awards, generously funded by Professor Jay McClelland, will provide funding to support travel costs for students/post-docs presenting at the conference. Awards of US$250 are available to students or post-docs from Western European countries, and US$750 for students or post-docs from elsewhere. Decisions will be based on the quality of the submission.
Eligibility criteria: The first author of the submission is a PhD student or post-doctoral fellow who will attend the meeting and will present the submission if chosen to receive a travel award.

Location

Lancaster is situated in the north west of England, approximately one hour from Manchester and Liverpool airports. Lancaster is surrounded by spectacular scenery, hiking and climbing country. The Yorkshire Dales national park is 10 miles away. The Lake District national park is 20 miles away (http://www.nationalparks.gov.uk). The Stagecoach 555 bus (http://www.stagecoachbus.com) takes you directly from Lancaster through the heart of the Lakes. The Trough of Bowland Area of Outstanding Natural Beauty is 2 miles away.

Important dates to remember
Abstract deadline: 15 May
Notification of abstract acceptance: 7 June
Early registration deadline: tbc
Online registration deadline: tbc
Conference dates: 21-23 August, 2014

Looking forward to your participation!

Organizing Committee
Gert Westermann, Lancaster University
Padraic Monaghan, Lancaster University
Katherine E. Twomey, Liverpool University
Alastair C. Smith, MPI Nijmegen

--------------------------------------------------------------------------
Prof. Gert Westermann
Department of Psychology
Lancaster University
Lancaster LA1 4YF
+44 (0)1524 592-942
g.westermann at lancaster.ac.uk
http://www.psych.lancs.ac.uk/people/gert-westermann

****************************************************************
15. AI for Human-Robot Interaction (AI-HRI), papers due 13 June 14 http://ai-hri.github.io/

AAAI Fall Symposium: AI for Human-Robot Interaction (AI-HRI)
Nov 13-15, 2014 -- Arlington, VA
http://ai-hri.github.io/
2-page abstracts due June 13
Email as a pdf to: ai-hri-symposium-submissions at googlegroups.com

This symposium will bring together and strengthen the community of researchers working on the AI challenges inherent to Human-Robot Interaction (HRI).
Humans and human environments bring with them inherent uncertainty in dynamics, structure, and interaction. HRI aims to develop robots that are intelligent, autonomous, and capable of interacting with, modeling, and learning from humans. These goals are at the core of AI.

The field of HRI is a broad community encompassing robotics, AI, HCI, psychology and social science. In this meeting we aim to bring together specifically the subset of this community that is focused on the AI problems of HRI. Currently this type of HRI work is seen across such a variety of venues (HRI, RSS, ICRA, IROS, Ro-Man, RoboCup, and more) that we lack a cohesive core community. Building this community is the central purpose of this symposium.

Planned activities:
- Keynote talks, "How is HRI an AI problem?": We will have keynotes giving eight different perspectives on how AI research is going to bring us closer to the reality of humans interacting with robots on everyday tasks.
- Breakout groups + panel discussions: these discussions will be focused on (1) defining a road map of grand challenges for this research area, and (2) what the core venue for this community is.
- Poster session: This session will highlight state-of-the-art work and approaches to AI-HRI.
- Team building: Given the diverse set of venues at which this type of research is presented, it is very rare that members of the AI-HRI community get together in the same room. As such, a large part of this effort is to bring together a community of researchers, strengthen old connections and build new ones. Ample time will be provided for networking and informal discussions.

Confirmed speakers:
- Rodney Brooks
- Cynthia Breazeal
- Henrik Christensen
- Maja Mataric
- Manuela Veloso

Important dates:
- To have your work featured in the poster session, submit a two-page abstract by 13 June 14.
Email your submission as a pdf to ai-hri-symposium-submissions at googlegroups.com
- Decisions will be returned by July 11, 2014
- The symposium will be held Nov 13-15, 2014 in Arlington, VA.

Organizing committee:
- Andrea L. Thomaz, Georgia Institute of Technology (chair)
- Kris Hauser, Indiana University
- Chad Jenkins, Brown University
- Maja J. Mataric, University of Southern California
- Manuela Veloso, Carnegie Mellon University

****************************************************************
16. OpenWorm Kickstarter Campaign http://www.openworm.org

[from Comp-neuro at neuroinf.org]
[this is cool and interesting on several levels and ways]

From: Shreejoy Tripathy
Date: Mon, 12 May 2014 11:31:07 -0700
To: comp-neuro at neuroinf.org, connectionists at mailman.srv.cs.cmu.edu
Subject: [Comp-neuro] OpenWorm Kickstarter Campaign

I'm writing to let you know about the OpenWorm Kickstarter Campaign (https://www.kickstarter.com/projects/openworm/openworm-a-digital-organism-in-your-browser). As you may know, the goal of the OpenWorm project is to digitally simulate a C. elegans organism. By design, the project is organized in a geographically distributed fashion, and all of the work is being done in the open (on GitHub and YouTube) using entirely open source code and public data.

Though I'm not a worm researcher, I'm supporting OpenWorm's Kickstarter because I strongly believe in their overall mission. As a computational neuroscientist, I see the project as a necessary first step in the simulation of more complex organisms, including mice and ultimately humans.
Moreover, I see my Kickstarter contribution as a future investment in the open source tools and methodologies that OpenWorm develops, which I hope to eventually use in my own research.

More philosophically, I think that lately the field of neuroscience has suffered from some amount of "overhype", especially given the initially promised outcomes of recent large-scale brain initiatives (e.g., recording from all neurons in a mammalian brain, simulating an entire human brain). While these projects have been hugely successful in catalyzing widespread public support of basic neuroscience research, many neuroscientists see these promises as perhaps a bridge too far given the current state of research.

Though OpenWorm's goals are themselves highly ambitious, the relative simplicity of the worm (named neurons, a stereotyped connectome and musculature, the ability to image neuronal activity in vivo) lends itself well to this attempt. By conducting the project in the open and basing success on quantifiable goals (i.e., how similar the simulated worm's movements and neural activity are to experimental measurements), the project will serve as a roadmap for future endeavours in multi-scale data integration and organismal simulation.

Lastly, the geographically distributed and highly interdisciplinary nature of the project makes it challenging to receive funding through traditional NIH-style mechanisms. For this reason, I'm asking you to join me in supporting the project's Kickstarter campaign.
More information:
OpenWorm Website: http://www.openworm.org
Kickstarter campaign link: https://www.kickstarter.com/projects/openworm/openworm-a-digital-organism-in-your-browser
Project overview by OpenWorm member and neuroscientist Jim Hokanson: http://jimandscience.blogspot.ca/2014/05/modeling-worm-thoughts-on-openworm.html
OpenWorm Q&A on Reddit: http://www.reddit.com/r/science/comments/246dlw/science_ama_series_im_stephen_larson_project/

Sincerely,
Shreejoy Tripathy
Post-doc and Developer of http://neuroelectro.org
University of British Columbia
Centre for High-Throughput Biology

****************************************************************
17. Numerous books to review

If you would like to review a book, several recently published books are available whose publishers will give you a copy to facilitate a review in a magazine or journal. If you are interested, email me and I'll forward you to the publisher.

Foundations for Designing User-Centered Systems: What system designers need to know about people. Frank Ritter, Gordon Baxter, & Elizabeth Churchill (2014). Springer. http://www.frankritter.com/fducs/

Game Analytics: Maximizing the Value of Player Data. Magy Seif El-Nasr, Anders Drachen, Alessandro Canossa (Eds). http://www.amazon.com/Game-Analytics-Maximizing-Value-Player/dp/1447147685 http://www.springer.com/computer/hci/book/978-1-4471-4768-8

How to Build a Brain: A Neural Architecture for Biological Cognition. Chris Eliasmith, OUP. http://nengo.ca/build-a-brain

Minding Norms: Mechanisms and dynamics of social order in agent societies. Rosaria Conte, Giulia Andrighetto, and Marco Campenni (Eds). http://global.oup.com/academic/product/minding-norms-9780199812677?

Social Emotions in Nature and Artifact. Jonathan Gratch and Stacy Marsella (Eds).
http://global.oup.com/academic/product/social-emotions-in-nature-and-artifact-978019538764314

Running Behavioral Studies with Human Participants. Ritter, Kim, Morgan, & Carlson. Sage. http://www.sagepub.com/textbooks/Book237263

****************************************************************
18. Premodeling / HCI book FDUCs: What designers need to know about people http://www.frankritter.com/fducs/

We have published a book that is in many ways an HCI book; its first use is as a human-computer interaction textbook (it has been used in at least 4 universities), but it is also a modeling book -- it is designed to create models of users in designers' heads.

Foundations for Designing User-Centered Systems: What system designers need to know about people. Frank Ritter, Gordon Baxter, & Elizabeth Churchill (2014). Springer.

If your library subscribes to SpringerLink, you can get a pdf from there. If you want a desk copy, there are links from the site above to order one. If all else fails, contact me. I use it to teach HCI, as have several instructors at other universities and at several Penn State campuses.

****************************************************************
19. New Society: The Society for Affective Science and Conference http://www.society-for-affective-science.org/conference

A new society has been formed -- The Society for Affective Science. Its mission is to foster basic and applied research in the variety of fields that study affect, broadly defined. Our inaugural conference will be April 24-26, 2014 in Washington, DC. This conference will be student-friendly, open, and theoretically and methodologically diverse. It will provide a forum for cross-cutting work in emotion, stress, and many other topics that fall under the broad umbrella of affective science.
For more information, see the society website: http://www.society-for-affective-science.org/
Program highlights for the first meeting: http://www.society-for-affective-science.org/conference/highlights/

As you will see, the first meeting features an invited program of talks in addition to a call for posters. We expect the program to shift to submissions as the society develops. Currently, the meeting is largely single-tracked, and in addition to the invited addresses and presidential symposium, the program includes methodology lunches, salons with well-known affective scientists, and a debate to keep things lively.

Please feel free to send this email to your students, postdocs, and colleagues. If you have questions, please email us at: info at society-for-affective-science.org

****************************************************************

20. $10k prize for best paper in Human Factors journal, due 2 June 14
http://www.hfes.org/web/pubpages/hfprize.html
[This appears to be becoming an annual prize; modeling papers get published in HF, and there are modelers among the editors and reviewers.]

Submit your research for the 2014 Human Factors Prize -- you could win $10,000!

The Human Factors and Ergonomics Society (HFES) welcomes your submission for the 2014 Human Factors Prize: Recognizing Excellence in Human Factors/Ergonomics Research. The best paper will receive:
* a $10,000 cash award
* publication in the Society's flagship journal, Human Factors

The 2014 topic is human-automation interaction/autonomy. We seek articles that describe human factors/ergonomics (HF/E) research that pertains to effective and satisfying interaction between humans and automation. Plan to submit your work between April 1 and June 2, 2014.

Eligibility Requirements:
Any researcher is eligible to submit relevant work; membership in HFES is not required.
Submissions must cover original (unpublished) research in the topical area and comply with the requirements in the Human Factors instructions for authors. Review articles and brief reports are not eligible. Submissions must not be received prior to April 1 or after June 2, 2014.

To see examples of articles on subject matter similar to the 2014 Prize topic, visit the Human Factors Prize Web page.

The award will be formally conferred at a special session at the HFES International Annual Meeting, where the recipient will present his or her work.

Human Factors: The Journal of the Human Factors and Ergonomics Society (HFS) is a peer-reviewed journal presenting original works of scientific merit that contribute to the understanding and advancement of the systematic consideration of people in relation to machines, systems, tools, and environments. HFS highlights fundamental human capabilities, limitations, and tendencies, as well as the basics of human performance.

Human Factors Impact Factor: 1.182
Ranked: 7/16 in Ergonomics | 21/44 in Engineering, Industrial | 39/72 in Psychology, Applied | 44/49 in Behavioral Sciences | 51/75 in Psychology
Source: 2012 Journal Citation Reports(R) (Thomson Reuters, 2013)

Questions? Contact HFES Communications Director Lois Smith at lois at hfes.org. We look forward to receiving your submission.

****************************************************************

21.
Postdoctoral Scholar in Systems Neuroscience and Connectivity Modeling
http://www.la.psu.edu/facultysearch/
[This was announced in February, but is still up on the web site.]

Postdoctoral Scholar in Systems Neuroscience and Connectivity Modeling
Job Number: 41202
Affirmative Action Search No.: 021-320
Department: Psychology
Rank: Post Doc

Announcement: We are seeking a highly motivated Postdoctoral Scholar in the area of clinical/cognitive neuroscience, brain imaging, and network modeling. The successful candidate will work in collaboration with investigators at The Pennsylvania State University, including Dr. Hillary in Psychology, Dr. Reka Albert in Physics, and Dr. Peter Molenaar in Human Development and Family Studies. The goals of the research focus on time series analysis, graph theory, and connectivity modeling of human brain imaging data (high-density EEG, BOLD fMRI). The primary responsibility of this position is to facilitate ongoing research examining neural plasticity after severe traumatic brain injury in humans. There is also keen interest for this position to support the development of novel methods for understanding plasticity from a systems neuroscience perspective. This includes prospective data collection as well as analysis of existing data sets. Current lab goals focus on: 1) signal processing (i.e., non-stationarity in time series data; cross-frequency coupling), 2) large-scale connectivity analysis (e.g., graph theory), 3) machine learning, and 4) novel methods for isolating regions of interest.

A doctorate (M.D. and/or Ph.D.) is required by the appointment date. Excellent verbal and written communication skills and a background in computational modeling (broadly defined) are required; programming experience is preferred. Review of applications will begin in February and continue until the position is filled. The appointment is for two years, with a flexible tentative start date.
To apply, submit a cover letter, curriculum vitae, and the names and contact information for three references through http://www.la.psu.edu/facultysearch/

For additional information, contact Frank G. Hillary, fhillary at psu.edu.

Employment will require successful completion of background check(s) in accordance with University policies. Penn State is committed to affirmative action, equal opportunity, and the diversity of its workforce.

****************************************************************

-30-

From lila.kari at uwo.ca Sat May 31 16:58:37 2014
From: lila.kari at uwo.ca (Lila Kari)
Date: Sat, 31 May 2014 16:58:37 -0400
Subject: Connectionists: UCNC 2014 REGULAR REGISTRATION DEADLINE
Message-ID: <3C1BBAA3-C09C-4BF2-BE94-C44B2670C126@uwo.ca>

Dear Colleagues,

This is a gentle reminder that the UCNC 2014 regular registration deadline is Friday, June 6, 2014.

The 13th International Conference on Unconventional Computation & Natural Computation (UCNC)
University of Western Ontario, London, Ontario, Canada
July 14-18, 2014
http://www.csd.uwo.ca/ucnc2014
http://www.facebook.com/UCNC2014
https://twitter.com/UCNC2014

PROGRAM HIGHLIGHTS

INVITED PLENARY SPEAKERS
Yaakov Benenson (ETH Zurich) - "Molecular Computing Meets Synthetic Biology"
Charles Bennett (IBM Research) - "From Quantum Dynamics to Physical Complexity"
Hod Lipson (Cornell University) - "The Robotic Scientist"
Nadrian Seeman (New York University) - "DNA: Not Merely the Secret of Life"

INVITED TUTORIAL SPEAKERS
Anne Condon (University of British Columbia) - "Programming with Biomolecules"
Ming Li (University of Waterloo) - "Approximating Semantics"
Tommaso Toffoli (Boston University) - "Do We Compute to Live, or Live to Compute?"
WORKSHOP ON DNA COMPUTING BY SELF-ASSEMBLY - MAIN SPEAKERS
Scott Summers (University of Wisconsin) - "Two Hands Are Better than One (in Self-Assembly)"
Damien Woods (Caltech) - "Intrinsic Universality and the Computational Power of Self-Assembly"

WORKSHOP ON COMPUTATIONAL NEUROSCIENCE - MAIN SPEAKERS
William Cunningham (University of Toronto) - "Attitudes and Dynamic Evaluations"
Randy McIntosh (Rotman Research Institute) - "Building and Interacting with the Virtual Brain"

WORKSHOP ON UNCONVENTIONAL COMPUTATION IN EUROPE - MAIN SPEAKER
Ricard Sole (Univ. Pompeu Fabra) - "Computation and the Major Synthetic Transitions in Artificial Evolution"

OVERVIEW

The International Conference on Unconventional Computation and Natural Computation has been a meeting where scientists with different backgrounds, yet sharing a common interest in novel forms of computation, human-designed computation inspired by nature, and the computational aspects of processes taking place in nature, present their latest theoretical or experimental results. Typical, but not exclusive, topics are:

* Molecular (DNA) computing, quantum computing, optical computing, chaos computing, Physarum computing, hyperbolic space computation, collision-based computing, super-Turing computation;
* Cellular automata, neural computation, evolutionary computation, swarm intelligence, ant algorithms, artificial immune systems, artificial life, membrane computing, amorphous computing;
* Computational systems biology, computational neuroscience, synthetic biology, cellular (in vivo) computing.