From terry at salk.edu Mon Aug 4 17:02:29 2008
From: terry at salk.edu (Terry Sejnowski)
Date: Mon, 04 Aug 2008 14:02:29 -0700
Subject: Connectionists: NEURAL COMPUTATION - September, 2008
In-Reply-To: 
Message-ID: 

Neural Computation - Contents - Volume 20, Number 9 - September 1, 2008

Articles

Dependence of Neuronal Correlations on Filter Characteristics and Marginal Spike-Train Statistics
   Tom Tetzlaff, Stefan Rotter, Eran Stark, Moshe Abeles, Ad Aertsen, and Markus Diesmann
Correlations and Population Dynamics in Cortical Networks
   Birgit Kriener, Tom Tetzlaff, Ad Aertsen, Markus Diesmann, and Stefan Rotter

Note

On Exponential Convergence Conditions of An Extended Projection Neural Network
   Youshen Xia

Letters

Contrastive Divergence in Gaussian Diffusions
   Javier Movellan
Temporal Dynamics of Rate-Based Synaptic Plasticity Rules in a Stochastic Model of Spike Timing Dependent Plasticity
   Terry Elliott
Random Neural Networks with Synchronised Interactions
   Erol Gelenbe and Stelios Timotheou
Encoding and Decoding Spikes for Dynamic Stimuli
   Rama Natarajan, Quentin Huys, Peter Dayan, and Richard Zemel
A (Somewhat) New Solution to the Variable Binding Problem
   Leon Barrett, Jerome Feldman, and Liam Mac Dermed

-----

ON-LINE - http://neco.mitpress.org/

SUBSCRIPTIONS - 2008 - VOLUME 20 - 12 ISSUES

                                                Electronic only
                  USA      Canada*    Others    USA      Canada*
Student/Retired   $60      $63.60     $123      $54      $57.24
Individual        $110     $116.60    $173      $99      $104.94
Institution       $849     $899.94    $912      $756     $801.36

* includes 6% GST

MIT Press Journals, 238 Main Street, Suite 500, Cambridge, MA 02142-9902.
Tel: (617) 253-2889  FAX: (617) 577-1545
journals-orders at mit.edu
http://mitpressjournals.org/neuralcomp

-----

From gros07 at itp.uni-frankfurt.de Tue Aug 5 04:43:03 2008
From: gros07 at itp.uni-frankfurt.de (gros07@itp.uni-frankfurt.de)
Date: Tue, 5 Aug 2008 10:43:03 +0200 (CEST)
Subject: Connectionists: Book announcement: Complex and Adaptive Dynamical Systems: A Primer
Message-ID: 

Dear Colleagues,

It is a pleasure to announce the availability of the textbook

   Complex and Adaptive Dynamical Systems: A Primer
   (Springer Complexity, Softcover)
   ISBN: 978-3540718734
   by Claudius Gros

Complex and adaptive dynamical systems are ubiquitous, and the brain constitutes a prominent example. The goal of this book is to provide a general entry point to complex and dynamical systems based on network architectures.

-------------------------------------------------------
cover-text:
----------
This primer has been developed with the aim of conveying a wide range of common-sense knowledge in the field of quantitative complex system science at an introductory level. The approach is modular and phenomenology driven. Examples of emerging phenomena of generic importance treated in this book are:

- The small-world phenomenon in social and scale-free networks;
- Phase transitions and self-organized criticality in adaptive systems;
- Life at the edge of chaos and coevolutionary avalanches resulting from the unfolding of all living;
- The concept of living dynamical systems and emotional diffusive control within cognitive system theory.

Technical course prerequisites are a basic knowledge of ordinary and partial differential equations and of statistics. Each chapter comes with exercises and suggestions for further reading; solutions to the exercises are also provided.
-------------------------------------------------------
content:
--------
* Chapter 1: Graph Theory and Small-World Networks
* Chapter 2: Chaos, Bifurcations and Diffusion
* Chapter 3: Random Boolean Networks
* Chapter 4: Cellular Automata and Self-Organized Criticality
* Chapter 5: Statistical Modeling of Darwinian Evolution
* Chapter 6: Synchronization Phenomena
* Chapter 7: Elements of Cognitive System Theory
-------------------------------------------------------

Sincerely,
Claudius Gros

=======================================================================
Prof. Dr. Claudius Gros           | email: gros07 at itp.uni-frankfurt.de
Institute for Theoretical Physics | http://itp.uni-frankfurt.de/~gros
J.W. Goethe University Frankfurt  | office: +49 (069) 798-47818
Max-von-Laue-Strasse 1            | sectr.: +49 (069) 798-47817
60438 Frankfurt a.M., Germany     | FAX   : +49 (069) 798-47832
=======================================================================

From minai_ali at yahoo.com Sat Aug 2 17:07:19 2008
From: minai_ali at yahoo.com (Ali Minai)
Date: Sat, 2 Aug 2008 14:07:19 -0700 (PDT)
Subject: Connectionists: Call for Proposals: Tutorials at IJCNN 2009
Message-ID: <737468.23642.qm@web65512.mail.ac4.yahoo.com>

Tutorials at the 2009 International Joint Conference on Neural Networks
June 2009, Atlanta, USA

The IJCNN 2009 Organizing Committee invites proposals for tutorials to be held during IJCNN 2009 in Atlanta. The tutorials will be scheduled for June 14, 2009, immediately preceding the main program of the conference. Space has been reserved for up to 16 two-hour tutorials, with the possibility of four-hour tutorials if the schedule allows. Each tutorial is expected to have 20-40 attendees. In addition to the core areas of neural networks, tutorials in areas such as cognitive modeling, computational neuroscience, bioinformatics, robotics, sensor networks, etc., are strongly encouraged.
Honoraria will be offered to tutorial organizers based on the number of registered attendees for their tutorial.

Proposals for tutorials should be submitted in electronic form (Word or PDF) by October 15, 2008 to:

Ali A. Minai, Tutorials Chair
University of Cincinnati
Department of Electrical & Computer Engineering
Cincinnati, OH 45221-0030, U.S.A.
phone: +1-513-556-4783
E-mail: 

Proposals should be 1-2 pages (single-spaced) and must not exceed 4 pages. Detailed information is available at: http://www.ijcnn2009.com/tutorials.html

---------------------------------------------------------------------
Ali A. Minai
Complex Adaptive Systems Lab
Associate Professor
Department of Electrical & Computer Engineering
University of Cincinnati
Cincinnati, OH 45221-0030
Phone: (513) 556-4783
Fax: (513) 556-7326
Email: aminai at ece.uc.edu / minai_ali at yahoo.com
WWW: http://www.ece.uc.edu/~aminai/
----------------------------------------------------------------------

From wjma at cpu.bcm.edu Wed Aug 6 10:40:23 2008
From: wjma at cpu.bcm.edu (Wei Ji Ma)
Date: Wed, 6 Aug 2008 09:40:23 -0500
Subject: Connectionists: Postdoc Position in Computational Neuroscience and Psychophysics
Message-ID: <48071F74CBAD5B46973B3D2B90179BCB5B70@stan.hou-ad.hnl.bcm.tmc.edu>

Postdoctoral position available
Computational Neuroscience and Human Psychophysics

Applications are invited for a postdoctoral position (minimum 2 years) in the laboratory of Dr. Wei Ji Ma (http://neuro.bcm.edu/malab) in the Department of Neuroscience at Baylor College of Medicine, Houston, Texas. The long-range goal of our research is to understand the representation and processing of uncertainty in the human brain, both at the behavioral and at the neural level.
The lab uses a combination of theoretical analysis, computational modeling, and theory-driven human experiments. Current areas of study include multisensory perception, decision-making, and visual search. The position provides an opportunity to take part in highly collaborative research programs within the Computational Psychiatry Unit (http://cpu.bcm.edu) and the Department of Neuroscience as a whole.

Applicants should have a Ph.D. in computational neuroscience, physics, mathematics, computer science, or a related field, and a commitment to a research career in neuroscience. Experience in human psychophysics and programming experience with Matlab or C++ are very desirable.

To apply, please send a CV, a statement of interests, and the names and contact information of two references to Wei Ji Ma at wjma at bcm.edu. Consideration of applications will begin immediately and will end when the position is filled. Salary is competitive and will be commensurate with experience and qualifications.

Baylor College of Medicine is an Affirmative-Action/Equal-Opportunity employer and is committed to cultural diversity and compliance with the Americans with Disabilities Act.

From behnke at cs.uni-bonn.de Sun Aug 3 15:01:52 2008
From: behnke at cs.uni-bonn.de (Sven Behnke)
Date: Sun, 03 Aug 2008 21:01:52 +0200
Subject: Connectionists: PhD or PostDoc position in Computer Vision at University of Bonn, Germany
Message-ID: <489600A0.7020103@cs.uni-bonn.de>

Dear Connectionists,

The Autonomous Intelligent Systems group at the Computer Science Institute of the University of Bonn, Germany, has an immediate opening for a PhD candidate or PostDoc in the area of bio-inspired computer vision systems. The group, headed by Prof.
Sven Behnke, conducts research in the areas of computational intelligence and cognitive robotics. We developed a hierarchical recurrent neural approach to computer vision (Neural Abstraction Pyramid, see http://www.ais.uni-bonn.de/books/LNCS2766.pdf). Our humanoid soccer robots (team NimbRo, see http://www.NimbRo.net) recently defended their title in the RoboCup soccer tournament.

We seek an excellent candidate with a strong interest in computer vision, machine learning, or probabilistic inference. The goal of the project is to create a general-purpose trainable vision system using hierarchical recurrent neural networks and/or hierarchical graphical models with loops. Various deep, locally connected network architectures and learning algorithms will be investigated for their ability to capture the hierarchical structure of the visual world. The system will be implemented on GPUs using CUDA and tested on real-world data sets, for example from robotics. The position involves teaching duties of four hours per week during the semester.

Education: Masters (or Diploma) degree or PhD in Computer Science, Mathematics, or Physics. A strong mathematical background is highly desirable, as are experience in computer vision, machine learning, or applied statistics, and excellent programming skills (Matlab or C).

Salary: According to NRW TVL-13, annual gross starting from 37,700 Euros (approx. USD 58,570), depending on experience. The initial contract will be for up to three years.

Applying: Please send applications via email, including a regular CV, your research interests, and a list of publications, preferably as a single PDF. Links for downloading publications and theses are also welcome.
Applications should be sent
To: sekretariat _at_ ais.uni-bonn.de
Cc: behnke _at_ cs.uni-bonn.de
Subject: Application GPCV

Best regards,
Sven Behnke
http://www.ais.uni-bonn.de/behnke

From t.heskes at science.ru.nl Thu Aug 7 09:05:11 2008
From: t.heskes at science.ru.nl (Tom Heskes)
Date: Thu, 07 Aug 2008 15:05:11 +0200
Subject: Connectionists: Neurocomputing volume 71 (issues 10-12)
Message-ID: <489AF307.2080805@science.ru.nl>

Neurocomputing volume 71 (issues 10-12)

-------
SPECIAL PAPERS (Neurocomputing for Vision Research)

Neurocomputing for vision research (editorial)
   Dacheng Tao, Xuelong Li
A comprehensive review of current local features for computer vision
   Jing Li, Nigel M. Allinson
Separating corneal reflections for illumination estimation
   Huiqiong Wang, Stephen Lin, Xiuqing Ye, Weikang Gu
Computational analysis and learning for a biologically motivated model of boundary detection
   Iasonas Kokkinos, Rachid Deriche, Olivier Faugeras, Petros Maragos
Spatial relationship representation for visual object searching
   Jun Miao, Lijuan Duan, Laiyun Qing, Wen Gao, Xilin Chen, Yuan Yuan
Total variation norm-based nonnegative matrix factorization for identifying discriminant representation of image patterns
   Taiping Zhang, Bin Fang, Weining Liu, Yuan Yan Tang, Guanghui He, Jing Wen
Writer identification using global wavelet-based features
   Zhenyu He, Xinge You, Yuan Yan Tang
Locality sensitive semi-supervised feature selection
   Jidong Zhao, Ke Lu, Xiaofei He
A new extension of kernel feature and its application for visual recognition
   Qingshan Liu, Hongliang Jin, Xiaoou Tang, Hanqing Lu, Songde Ma
An approach for directly extracting features from matrix data and its application in face recognition
   Yong Xu, David Zhang, Jian Yang, Jing-Yu Yang
Tensor Rank One Discriminant Analysis - A convergent method for discriminative multilinear subspace selection
   Dacheng Tao, Xuelong Li, Xindong Wu, Steve Maybank
A highly scalable incremental facial feature extraction method
   Fengxi Song, Hang Liu, David Zhang, Jingyu Yang
Affective interaction based on person-independent facial expression space
   Hao Wang, Kongqiao Wang
An evaluation of identity representability of facial expressions using feature distributions
   Qi Li, Chandra Kambhamettu, Jieping Ye
A robust multimodal approach for emotion recognition
   Mingli Song, Mingyu You, Na Li, Chun Chen
Local face sketch synthesis learning
   Xinbo Gao, Juanjuan Zhong, Dacheng Tao, Xuelong Li
Fusing gait and face cues for human gender recognition
   Caifeng Shan, Shaogang Gong, Peter W. McOwan
A fingerprint verification algorithm using tessellated invariant moment features
   Ju Cheng Yang, Dong Sun Park
Feature alignment approach for hand posture recognition based on curvature scale space
   Chin-Chen Chang, Cheng-Yi Liu, Wen-Kai Tai
A segmentation concept for positron emission tomography imaging using multiresolution analysis
   Abbes Amira, Shrutisagar Chandrasekaran, David W.G. Montgomery, Isa Servan Uzun
The HCM for perceptual image segmentation
   Jonathan Randall, Ling Guan, Wanqing Li, Xing Zhang
Automatic design of pulse coupled neurons for image segmentation
   Henrik Berg, Roland Olsson, Thomas Lindblad, José Chilo
Level set image segmentation with Bayesian analysis
   Huiyu Zhou, Yuan Yuan, Faquan Lin, Tangwei Liu
Object tracking in videos using adaptive mixture models and active contours
   Mohand Saïd Allili, Djemel Ziou
Automatic medical image annotation and retrieval
   Jian Yao, Zhongfei (Mark) Zhang, Sameer Antani, Rodney Long, George Thoma
Visual music and musical vision
   Xuelong Li, Dacheng Tao, Stephen J. Maybank, Yuan Yuan
Isotree: Tree clustering via metric embedding
   Bai Xiao, Andrea Torsello, Edwin R. Hancock
Approximation to the Fisher-Rao metric for the focus of expansion
   S.J. Maybank
A new sub-pixel mapping algorithm based on a BP neural network with an observation model
   Liangpei Zhang, Ke Wu, Yanfei Zhong, Pingxiang Li
Processing visual stimuli using hierarchical spiking neural networks
   Q.X. Wu, T.M. McGinnity, L.P. Maguire, A. Belatreche, B. Glackin

-------
SPECIAL PAPERS (Advances in Blind Signal Processing)

Advances in blind signal processing (editorial)
   Deniz Erdogmus, Danilo Mandic, Toshihisa Tanaka
Blind partial separation of underdetermined convolutive mixtures of complex sources based on differential normalized kurtosis
   Frédéric Abrard, Yannick Deville, Johan Thomas
An adaptive stereo basis method for convolutive blind audio source separation
   Maria G. Jafari, Emmanuel Vincent, Samer A. Abdallah, Mark D. Plumbley, Mike E. Davies
Partial separation method for solving permutation problem in frequency domain blind source separation of speech signals
   V.G. Reju, Soo Ngee Koh, Ing Yann Soon
Elimination of filtering indeterminacy in blind source separation
   Kiyotoshi Matsuoka
A combined blind source separation and adaptive noise cancellation scheme with potential application in blind acoustic parameter extraction
   Yonggang Zhang, Jonathon A. Chambers, Paul Kendrick, Trevor J. Cox, Francis F. Li
Independent component analysis of optical flow for robot navigation
   Naoya Ohnishi, Atsushi Imiya
Blind separation of convolutive image mixtures
   Sarit Shwartz, Yoav Y. Schechner, Michael Zibulevsky
Improvements on ICA mixture models for image pre-processing and segmentation
   Patrícia R. Oliveira, Roseli A.F. Romero
On the decomposition of Mars hyperspectral data by ICA and Bayesian positive source separation
   S. Moussaoui, H. Hauksdóttir, F. Schmidt, C. Jutten, J. Chanussot, D. Brie, S. Douté, J.A. Benediktsson
A robust model for spatiotemporal dependencies
   Fabian J. Theis, Peter Gruber, Ingo R. Keck, Elmar W. Lang
Overcomplete topographic independent component analysis
   Libo Ma, Liqing Zhang
Leap-frog-type learning algorithms over the Lie group of unitary matrices
   Simone Fiori
Generalized splitting functions for blind separation of complex signals
   Michele Scarpiniti, Daniele Vigliano, Raffaele Parisi, Aurelio Uncini
Blind separation with unknown number of sources based on auto-trimmed neural network
   Tsung-Ying Sun, Chan-Cheng Liu, Sheng-Ta Hsieh, Shang-Jeng Tsai
Structure learning by pruning in independent component analysis
   Andreas Brinch Nielsen, Lars Kai Hansen
Factorisation and denoising of 0-1 data: A variational approach
   Ata Kabán, Ella Bingham
Nonnegative matrix factorization with quadratic programming
   Rafal Zdunek, Andrzej Cichocki
Sparse blind identification and separation by using adaptive K-orthodrome clustering
   Yoshikazu Washizawa, Andrzej Cichocki
Estimating the mixing matrix in Sparse Component Analysis (SCA) based on partial k-dimensional subspace clustering
   Farid Movahedi Naini, G. Hosein Mohimani, Massoud Babaie-Zadeh, Christian Jutten
Blind source extraction: Standard approaches and extensions to noisy and post-nonlinear mixing
   Wai Yie Leong, Wei Liu, Danilo P. Mandic
Hybridizing sparse component analysis with genetic algorithms for microarray analysis
   K. Stadlthanner, F.J. Theis, E.W. Lang, A.M. Tomé, C.G. Puntonet, J.M. Górriz
Independent arrays or independent time courses for gene expression time series data analysis
   Sookjeong Kim, Jong Kyoung Kim, Seungjin Choi
EM-based semi-blind channel estimation method for MIMO-OFDM communication systems
   D. Obradovic, C. Na, R. Lupas Scheiterer, A. Szabo

-------
JOURNAL SITE: http://www.elsevier.com/locate/neucom
SCIENCE DIRECT: http://www.sciencedirect.com/science/issue/5660-2008-999289989-691600

From emmanuel.vincent at irisa.fr Tue Aug 5 12:16:21 2008
From: emmanuel.vincent at irisa.fr (Emmanuel Vincent)
Date: Tue, 05 Aug 2008 18:16:21 +0200
Subject: Connectionists: SiSEC 2008: try your source separation algorithm now
Message-ID: <48987CD5.6000203@irisa.fr>

1st Community-based Signal Separation Evaluation Campaign (SiSEC 2008)
http://sisec.wiki.irisa.fr/
Submission deadline: October 31, 2008

SiSEC is the first community-based endeavor for the evaluation of source separation algorithms. Following a first call, four datasets have been proposed by researchers in the field:
- Under-determined speech and music mixtures
- Determined and over-determined speech mixtures
- Head-geometry mixtures of two speech sources in real environments
- Professionally produced music recordings

Help make this evaluation the most complete yet by trying your source separation algorithms on one or more datasets and submitting the results. To download the datasets and view submission guidelines and evaluation procedures, see http://sisec.wiki.irisa.fr/.

Best regards,
Emmanuel Vincent, Shoko Araki and Pau Bofill

From terry at salk.edu Sat Aug 9 18:33:00 2008
From: terry at salk.edu (Terry Sejnowski)
Date: Sat, 09 Aug 2008 15:33:00 -0700
Subject: Connectionists: NSF Workshop Report on the Science and Engineering of Learning
In-Reply-To: 
Message-ID: 

http://www.cnl.salk.edu/Media/NSFWorkshopReport.v4.pdf

Final Workshop Report:
Future Challenges for the Science and Engineering of Learning
July 23-25, 2007
National Science Foundation

Organizers:
Rodney Douglas - ETH
and University of Zurich
Terry Sejnowski - Salk Institute and University of California at San Diego

Executive Summary:
This document reports on the workshop "Future Challenges for the Science and Engineering of Learning", held at the National Science Foundation Headquarters in Arlington, Virginia, on July 23-25, 2007. The goal of the workshop was to explore research opportunities in the broad domain of the Science and Engineering of Learning, and to provide NSF with this Report identifying important open questions. It is anticipated that this Report will be used to encourage new research directions, particularly in the context of the NSF Science of Learning Centers (SLCs), and also to spur new technological developments. The workshop was attended by 20 leading international researchers: half were from SLCs, and the other half were experts in neuromorphic engineering and machine learning. The format of the meeting was designed to encourage open discussion, with only relatively brief formal presentations. The most important outcome was a detailed set of open questions in the domains of both biological learning and machine learning. We also identified a set of common issues indicating a growing convergence between these two previously separate domains, so that work invested there will benefit our understanding of learning in both humans and machines. In this summary we outline a few of these important questions.

-----

From s.crone at lancaster.ac.uk Fri Aug 8 13:00:56 2008
From: s.crone at lancaster.ac.uk (Lancaster Forecasting Centre)
Date: Fri, 8 Aug 2008 18:00:56 +0100
Subject: Connectionists: 2nd CfP - Special Issue on Forecasting with Computational Intelligence
Message-ID: <20080808.RZNGYSPVRURHOUDV@lancaster.ac.uk>

Call for Papers
Special Issue of the International Journal of Forecasting on "Forecasting with artificial neural networks and computational intelligence"
Motivation & Context

The last 20 years of research have produced more than 5000 publications on artificial neural networks (NN) for predictive modelling across various disciplines. However, while NN and other methods of computational intelligence (CI) are firmly established in automatic control and classification problems, they have not received the same level of attention in time series forecasting (regression). Many of the optimistic publications indicating competitive or even superior performance of NNs have focussed on the theoretical development of novel paradigms, or on extensions to existing methods, architectures, and training algorithms, but have lacked a valid and reliable evaluation of the empirical evidence of their performance. Similarly, only a few publications have attempted to develop a thorough methodology on how to model NNs under specific conditions, limiting the modelling process of NNs to a heuristic and ad-hoc "art" of hand-tuning individual models, rather than a scientific approach using a replicable methodology and modelling process. As a consequence, NNs have not yet been empirically validated as a forecasting method in many areas of forecasting, despite theoretical advances. To explore this gap between academic attention, theoretical prowess and empirical performance, we invite contributions to a special issue of the International Journal of Forecasting (IJF) dedicated to evaluating the evidence on forecasting with NN and CI methods.

Topics

Papers for this special issue should focus on novel techniques, methods, methodologies and applications from the computational intelligence domain, with particular emphasis on neural networks, within all aspects of forecasting. Particular emphasis will be placed on applied or applicable work that provides valid and reliable evidence on the performance of the methods, and on the development of robust methodologies based upon rigorous evaluation, rather than purely theoretical contributions.
Contributions from contenders in one of the recent forecasting competitions dedicated to NN and CI methods (ESTSP'07, ESTSP'08, NN3 and NN5) are particularly encouraged. Due to the single-time-origin design of these competitions, authors are encouraged to obtain the complete datasets and rerun the experiments for their papers, in order to obtain representative out-of-sample results across multiple origins and error measures in comparison to established statistical benchmark methods, adhering to the best practices set out in discussions in the IJF (see e.g. Tashman (2000), Out-of-sample tests of forecasting accuracy: an analysis and review, International Journal of Forecasting, 16, 437-450; and Adya and Collopy (1998), How effective are neural networks at forecasting and prediction? A review and evaluation, Journal of Forecasting, 17, 481-495).

About the Journal

The International Journal of Forecasting (IJF, www.forecasters.org/ijf), published by Elsevier, is the leading journal in its field and is indexed by all major citation indexing services (including ISI Thomson Scientific). It is the official publication of the International Institute of Forecasters (IIF) and shares its aims and scope. The IJF publishes high-quality refereed papers covering all aspects of forecasting. Its objective (and that of the IIF) is to unify the field, and to bridge the gap between theory and practice. The intention is to make forecasting useful and relevant for decision and policy makers who need forecasts. The journal places particular emphasis on empirical studies, evaluation activities, implementation research and ways of improving the practice of forecasting. It is open to many points of view and encourages debate to find solutions for problems facing the field. Regular features of the IJF include research papers, research notes, discussion articles, book reviews, and software reviews. The IJF has an impact factor of 1.409 (Journal Citation Reports
2008, published by Thomson Scientific).

Review Process

Each submitted paper will be peer-reviewed in the same manner as other submissions to the IJF. Provided papers fit the theme of the special issue, quality and originality of the contribution will be the major criteria for each submission. Due to the tight deadlines, any paper for which the outcome of the refereeing process is "major revision" will not be included in the special issue, but may be revised and resubmitted through the journal's regular process. It may also be considered for a forthcoming special volume on Advances in Forecasting with Computational Intelligence by Springer (which is circulated separately).

Important Dates

Deadline for manuscripts: 15 September, 2008
Preliminary decision to authors: 24 November, 2008
Revision due: 12 January, 2009
Final manuscript due: 16 March, 2009

Submission Instructions

Authors are encouraged to contact one of the editors with an extended abstract of three pages to discuss any questions of suitability. Only email submissions will be accepted. Please submit your manuscript to IJF_Special_Issue at neural-forecasting.com. The submission must be in PDF format. Final manuscripts must be submitted in either MS-Word or LaTeX format for typesetting by the publisher. Manuscripts must be in English and double-spaced throughout. Papers should in general not exceed 6,000 words. All submissions will be peer reviewed. Detailed instructions for authors are at: http://www.forecasters.org/ijf.

Guest Editors:

Prof. Fred Collopy
Information Systems Department
Weatherhead School of Management
Case Western Reserve University
Cleveland, Ohio 44106-7235, USA
collopy at case.edu

Dr. Sven F. Crone
Lancaster University Management School
Research Centre for Forecasting
Lancaster, LA1 4YX, United Kingdom
s.crone at lancaster.ac.uk

Dr. Amaury Lendasse
Helsinki University of Technology
Laboratory of Computer and Information Science
P.O.
Box 5400, FIN-02015 HUT
Finland
lendasse at hut.fi

General Enquiries:
For general enquiries please contact us via email at: IJF_Special_Issue at Neural-Forecasting.com

From carnevalet at sbcglobal.net Tue Aug 12 13:36:00 2008
From: carnevalet at sbcglobal.net (Ted Carnevale)
Date: Tue, 12 Aug 2008 13:36:00 -0400
Subject: Connectionists: NEURON v. 6.2 available
Message-ID: <48A1CA00.1010606@sbcglobal.net>

The newest standard distribution of NEURON is version 6.2, which is available from
http://www.neuron.yale.edu/neuron/install/install.html

This is principally a "bug fix" release, but there have also been some improvements to features and functionality. Of the latter, the most noteworthy have to do with Python.

--------------------------
Changes that affect Python
--------------------------

All communication with hoc from Python is now accomplished uniformly via the neuron.h object. In other words, do this first:

  import neuron
  h = neuron.h

Then, for example:

  h.Section()   returns a new section
  h.cas()       returns the currently accessed section
  h.allsec()    is an iterator over all sections

Python allows use of a nrn.Segment object as the argument to a PointProcess constructor or loc function. That is, IClamp(section(x)) is an alternative to IClamp(x, sec=section). Also, section(0.5).sec is the section, and section(0.5).x is the arc location value 0.5.

The following new Vector functions are much (> 50 times!) faster than a Python loop over the elements:

--Vector.from_python(source) fills the Vector with doubles from a Python list or NumPy 1-d array. The Vector is resized and returned.
--Vector.to_python() returns a Python list of doubles.
--Vector.to_python(target) fills the target with doubles from the hoc Vector and returns the target object. Note that if the target is a NumPy 1-d array, it must already be sized the same as the Vector.
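The Vector.play continuous-mode behavior noted under "Other changes" below (linear extrapolation through the last two time points) can be sketched in a few lines of plain Python. This is a hedged illustration of the rule only, not NEURON's actual implementation; the function name is hypothetical:

```python
def extrapolate(times, values, t):
    """Linearly extrapolate a sampled signal past its time domain,
    using the line through the last two sample points."""
    t0, t1 = times[-2], times[-1]
    y0, y1 = values[-2], values[-1]
    # Extending this line keeps the first derivative continuous at the
    # boundary, which is what lets a variable-step integrator (cvode)
    # probe slightly past the next discontinuity without trouble.
    return y1 + (y1 - y0) * (t - t1) / (t1 - t0)

# Samples end at t=1 with slope 2, so the value requested at t=2 is 4.
print(extrapolate([0.0, 1.0], [0.0, 2.0], 2.0))  # -> 4.0
```

The same idea applied backwards (a request before the first time point) would use the first two samples instead.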
---------------------------------- Other changes that deserve mention ---------------------------------- CellBuild.cexport() is public and can be used from hoc with an argument of 1 (or 0) to force (or prevent) writing the cell info to the simulation instance. Vector.play in continuous mode uses linear extrapolation of the last two time points when a value is requested outside the time domain of the vector. This allows more efficient variable time step approach to a discontinuity as it keeps the first derivative continuous when cvode asks for a value past the next discontinuity (the discontinuity event will cause a retreat to the proper time). From t.heskes at science.ru.nl Thu Aug 14 08:34:06 2008 From: t.heskes at science.ru.nl (Tom Heskes) Date: Thu, 14 Aug 2008 14:34:06 +0200 Subject: Connectionists: Neurocomputing volume 71 (issues 13-15) Message-ID: <48A4263E.4070202@science.ru.nl> Neurocomputing volume 71 (issues 13-15) ------- SPECIAL PAPERS (Artificial Neural Networks, ICANN 2006) International Conference on Artificial Neural Networks (editorial) Stefanos Kollias, Andreas Stafylopatis, W?odzis?aw Duch Exploring cognitive machines?Neural models of reasoning, illustrated through the two-sticks paradigm John G. Taylor, Matthew Hartley Connectionist model generation: A first-order approach Sebastian Bader, Pascal Hitzler, Steffen H?lldobler Interpolating support information granules Bruno Apolloni, Simone Bassis, Dario Malchiodi, Witold Pedrycz Toward a descriptive cognitive model of human learning Toshihiko Matsuka, Yasuaki Sakamoto, Arieta Chouchourelou, Jeffrey V. 
Nickerson Connectionist weighted fuzzy logic programs Alexandros Chortaras, Giorgos Stamou, Andreas Stafylopatis Variable step search algorithm for feedforward networks Miros?aw Kordos, W?odzis?aw Duch Learning long-term dependencies with recurrent neural networks Anton Maximilian Schaefer, Steffen Udluft, Hans-Georg Zimmermann Semi-supervised and active learning with the probabilistic RBF classifier Constantinos Constantinopoulos, Aristidis Likas Natural conjugate gradient training of multilayer perceptrons Ana Gonz?lez, Jos? R. Dorronsoro Tuning continual exploration in reinforcement learning: An optimality property of the Boltzmann strategy Youssef Achbany, Fran?ois Fouss, Luh Yen, Alain Pirotte, Marco Saerens Class-switching neural network ensembles Gonzalo Mart?nez-Mu?oz, Aitor S?nchez-Mart?nez, Daniel Hern?ndez-Lobato, Alberto Su?rez Learning radial basis neural networks in a lazy way: A comparative study Jos? M. Valls, In?s M. Galv?n, Pedro Isasi An extended class of multilayer perceptron R. Lopez, E. O?ate Kernel discriminant analysis based feature selection Tsuneyoshi Ishii, Masamichi Ashihara, Shigeo Abe User and context adaptive neural networks for emotion recognition George Caridakis, Kostas Karpouzis, Stefanos Kollias Fast and adaptive network of spiking neurons for multi-view visual pattern recognition Simei Gomes Wysoski, Lubica Benuskova, Nikola Kasabov Inferring semantics from textual information in multimedia retrieval Mats Sj?berg, Jorma Laaksonen, Timo Honkela, Matti P?ll? Semantic content ranking through collaborative and content clustering Marios C. Angelides, Anastasis A. Sofokleous Dimensionality reduction based on ICA for regression problems Nojun Kwak, Chunghoon Kim, Hwangnam Kim Sequential input selection algorithm for long-term prediction of time series Jarkko Tikka, Jaakko Hollm?n ------- SPECIAL PAPERS (Engineering of Intelligent Systems, ICEIS 2008) Engineering of intelligent systems (editorial) Amir Hussain, Simone Fiori, I.M. 
Qureshi, T.S. Durrani, M.M. Ahmed, K. Fukushima On linking human and machine brains Kevin Warwick, Virginie Ruiz A multi-stage neural network aided system for detection of microcalcifications in digitized mammograms Nikhil R. Pal, Brojeshwar Bhowmick, Sanjaya K. Patel, Srimanta Pal, J. Das A real-time kepstrum approach to speech enhancement and noise cancellation J. Jeong, T.J. Moir Estimation and decision fusion: A survey Abhijit Sinha, Huimin Chen, D.G. Danu, Thia Kirubarajan, M. Farooq A modular classification model for received signal strength based location systems Uzair Ahmad, Andrey V. Gavrilov, Sungyoung Lee, Young-Koo Lee FCANN: A new approach for extraction and representation of knowledge from ANN trained via Formal Concept Analysis Luis E. Zárate, S. Mariano Dias, M.A. Junho Song Complex stochastic systems modelling and control via iterative machine learning Aiping Wang, Puya Afshar, Hong Wang A fuzzy learning-sliding mode controller for direct field-oriented induction machines Habib-ur Rehman, Rached Dhaouadi Adaptive multi-model sliding mode control of robotic manipulators using soft computing Nasser Sadati, Rasoul Ghadami Polynomial models of gene dynamics Saadia Faisal, Gerwald Lichtenberg, Herbert Werner A genetic algorithm for product disassembly sequence planning Wang Hui, Xiang Dong, Duan Guanghong Autonomous intelligent cruise control using a novel multiple-controller framework incorporating fuzzy-logic-based switching and tuning Rudwan Abdullah, Amir Hussain, Kevin Warwick, Ali Zayed Statistical models of KSE100 index using hybrid financial systems Samreen Fatima, Ghulam Hussain ------- REGULAR PAPERS Game theoretical analysis of the simple one-vs.-all classifier Yuichi Shiraishi Adaptive quasiconformal kernel discriminant analysis Jeng-Shyang Pan, Jun-Bao Li, Zhe-Ming Lu Chaotic synchronization in 2D lattice for scene segmentation Liang Zhao, Fabricio Aparecido Breve Selecting valuable training samples for SVMs via data structure analysis
Defeng Wang, Lin Shi Differences in prefrontal and motor structures learning dynamics depend on task complexity: A neural network model S.E. Lew, H.G. Rey, D.A. Gutnisky, B.S. Zanutto Associative memory with a controlled chaotic neural network Guoguang He, Luonan Chen, Kazuyuki Aihara Visual search light model for mental problem solving Andreas Wichert, João Dias Pereira, Paulo Carreira Exponential stability of recurrent neural networks with both time-varying delays and general activation functions via LMI approach Qiankun Song An integrated growing-pruning method for feedforward network training Pramod L. Narasimha, Walter H. Delashmit, Michael T. Manry, Jiang Li, Francisco Maldonado Asymptotic stability analysis of neural networks with successive time delay components Yu Zhao, Huijun Gao, Shaoshuai Mou An LMI approach to delay-dependent state estimation for delayed neural networks He Huang, Gang Feng, Jinde Cao Incremental GRLVQ: Learning relevant features for 3D object recognition Tim C. Kietzmann, Sascha Lange, Martin Riedmiller Temporal self-organizing maps for telecommunications market segmentation Pierpaolo D'Urso, Livia De Giovanni Continuous genetic algorithm-based fuzzy neural network for learning fuzzy IF-THEN rules R.J. Kuo, S.M. Hong, Y. Lin, Y.C. Huang Metric horseshoes in discrete-time RTD-based cellular neural networks Chen Feng-Juan, Li Ji-Bin An efficient algorithm for parallel distributed unsupervised learning Giuseppe Campobello, Giuseppe Patanè, Marco Russo Exponential stability and periodic solutions of FCNNs with variable coefficients and time-varying delays Shuyun Niu, Haijun Jiang, Zhidong Teng Moving object recognition by a shape-based neural fuzzy network Chia-Feng Juang, Liang-Tso Chen Global exponential estimates of stochastic interval neural networks with discrete and distributed delays Zhan Shu, James Lam Integrated structure selection and parameter optimisation for eng-genes neural models Patrick Connally, Kang Li, George W.
Irwin Spectral decay vs. correlation dimension of EEG Anna Krakovská, Svorad Štolc Jr. A mixed noise image filtering method using weighted-linking PCNNs Luping Ji, Zhang Yi ------- BRIEF PAPERS Adaptive local hyperplane classification Tao Yang, Vojislav Kecman Exponential synchronization of chaotic neural networks with mixed delays Tao Li, Shu-min Fei, Qing Zhu, Shen Cong Training robust support vector machine with smooth Ramp loss in the primal space Lei Wang, Huading Jia, Jie Li Multimodal face and fingerprint biometrics authentication on space-limited tokens Muhammad Khurram Khan, Jiashu Zhang Palmprint recognition using Gabor feature-based (2D)2PCA Xin Pan, Qiu-Qi Ruan Synaptic plasticity model of a spiking neural network for reinforcement learning Kyoobin Lee, Dong-Soo Kwon A novel face recognition approach based on kernel discriminative common vectors (KDCV) feature extraction and RBF neural network Xiao-Yuan Jing, Yong-Fang Yao, Jing-Yu Yang, David Zhang ------- CORRIGENDUM Corrigendum to "Further result on asymptotic stability criterion of neural networks with time-varying delays" [Neurocomputing 71 (2007) 439-447] Tao Li, Lei Guo, Changyin Sun ------- JOURNAL SITE: http://www.elsevier.com/locate/neucom SCIENCE DIRECT: http://www.sciencedirect.com/science/issue/5660-2008-999289986-695724 From pooyapakarian at ipm.ir Wed Aug 13 23:06:32 2008 From: pooyapakarian at ipm.ir (Pooya Pakarian) Date: Thu, 14 Aug 2008 06:36:32 +0330 Subject: Connectionists: time dependant non-linearity in motion opponency, and the rivalry of LRAM and SRAM Message-ID: <20080814030400.M55110@ipm.ir> Hi All Please let me ask two questions. I very much appreciate your feedback. 1- The last stage of both the elaborated Reichardt detector and the Adelson-Bergen model for motion detection is the motion opponency stage, which is a simple subtraction without any non-linearity, time-dependency, mutual inhibition, etc.
Is there any article elaborating this last stage by, for example, adding some time-dependent non-linearity to it? 2- What are the major articles in which I can learn about the computational models of rivalry between the short-range and the long-range apparent motion systems? This rivalry is hypothesized to occur in visual stimuli like the missing fundamental grating, the Ternus effect, or the reversed phi, if the viewer's distance is adjusted properly. This rivalry is interesting because the two rivals are not at the same level of processing. Best thanks for your attention Pooya -- -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From steve at cns.bu.edu Thu Aug 14 09:03:51 2008 From: steve at cns.bu.edu (Stephen Grossberg) Date: Thu, 14 Aug 2008 09:03:51 -0400 Subject: Connectionists: conditioning, dopamine, amygdala, hypothalamus, basal ganglia, orbitofrontal cortex Message-ID: The following articles about reinforcement learning are now available at http://www.cns.bu.edu/~steve Mark R. Dranias, Stephen Grossberg, and Daniel Bullock Dopaminergic and Non-Dopaminergic Value Systems in Conditioning and Outcome-Specific Revaluation Brain Research, in press. ABSTRACT Animals are motivated to choose environmental options that can best satisfy current needs. To explain such choices, this paper introduces the MOTIVATOR (Matching Objects To Internal VAlues Triggers Option Revaluations) neural model. MOTIVATOR describes cognitive-emotional interactions between higher-order sensory cortices and an evaluative neuraxis composed of the hypothalamus, amygdala, and orbitofrontal cortex. Given a conditioned stimulus (CS), the model amygdala and lateral hypothalamus interact to calculate the expected current value of the subjective outcome that the CS predicts, constrained by the current state of deprivation or satiation.
The amygdala relays the expected value information to orbitofrontal cells that receive inputs from anterior inferotemporal cells, and medial orbitofrontal cells that receive inputs from rhinal cortex. The activations of these orbitofrontal cells code the subjective values of objects. These values guide behavioral choices. The model basal ganglia detect errors in CS-specific predictions of the value and timing of rewards. Excitatory inputs from the pedunculopontine nucleus interact with timed inhibitory inputs from model striosomes in the ventral striatum to regulate dopamine burst and dip responses from cells in the substantia nigra pars compacta and ventral tegmental area. Learning in cortical and striatal regions is strongly modulated by dopamine. The model is used to address tasks that examine food-specific satiety, Pavlovian conditioning, reinforcer devaluation, and simultaneous visual discrimination. Model simulations successfully reproduce discharge dynamics of known cell types, including signals that predict saccadic reaction times and CS-dependent changes in systolic blood pressure. Keywords: amygdala, orbitofrontal cortex, rhinal cortex, lateral hypothalamus, inferotemporal cortex, basal ganglia, conditioning, motivation, devaluation, food-specific satiety, dopamine, cognitive-emotional interactions, decision-making, discrimination learning ****************** Stephen Grossberg, Daniel Bullock, and Mark R. Dranias Neural Dynamics Underlying Impaired Autonomic and Conditioned Responses Following Amygdala and Orbitofrontal Lesions Behavioral Neuroscience, in press. ABSTRACT A neural model is presented that explains how outcome-specific learning modulates affect, decision-making and Pavlovian conditioned approach responses. The model addresses how brain regions responsible for affective learning and habit learning interact, and answers a central question: What are the relative contributions of the amygdala and orbitofrontal cortex to emotion and behavior? 
In the model, the amygdala calculates outcome value while the orbitofrontal cortex influences attention and conditioned responding by assigning value information to stimuli. Model simulations replicate autonomic, electrophysiological, and behavioral data associated with three tasks commonly used to assay these phenomena: Food consumption, Pavlovian conditioning, and visual discrimination. Interactions of the basal ganglia and amygdala with sensory and orbitofrontal cortices enable the model to replicate the complex pattern of spared and impaired behavioral and emotional capacities seen following lesions of the amygdala and orbitofrontal cortex. Keywords: Pavlovian conditioning, inferotemporal and rhinal cortex, amygdala, basal ganglia, orbitofrontal cortex From eero at cns.nyu.edu Sun Aug 17 04:23:28 2008 From: eero at cns.nyu.edu (Eero Simoncelli) Date: Sun, 17 Aug 2008 11:23:28 +0300 Subject: Connectionists: time dependant non-linearity in motion opponency, and the rivalry of LRAM and SRAM In-Reply-To: <20080814030400.M55110@ipm.ir> References: <20080814030400.M55110@ipm.ir> Message-ID: <89C91023-14DA-4D61-859A-7805A60A8DBE@cns.nyu.edu> Dear Pooya, The main nonlinearity in the Adelson/Bergen energy model is the squaring (i.e., the "energy" computation), which comes before the subtraction. They also described a "normalized" version, in a 1986 conference paper, in which the opponent energy signal is divided by the static energy, pointing out the relationship to least-squares computations of motion from gradient measurements (Lucas & Kanade, 1981). The Extraction of Spatio-Temporal Energy in Human and Machine Vision Adelson, E. H., and Bergen, J. R., Proc. Workshop on Motion: Representation and Analysis (pp. 151-155), Charleston, SC; May 7-9 (1986). Related "divisive normalization" descriptions of the physiology are presented in A Model of Neuronal Responses in Visual Area MT E P Simoncelli and D J Heeger, Vision Research, vol.38(5), pp. 743--761, Mar 1998. 
The normalization is idealized as an instantaneous computation in this paper, but the real system presumably accomplishes it using feedback. Best, Eero Simoncelli Investigator, Howard Hughes Medical Institute Professor, Center for Neural Science, and Courant Institute for Mathematical Sciences New York University On 14 Aug 2008, at 6:06 AM, Pooya Pakarian wrote: > Hi All > Please let me ask two questions. I very much appreciate your > feedbacks. > > 1- the last stage of both the elaborated Reichardt Detector and also > the > Adelson-Bergen model for motion detection is the motion opponency > stage that > is a simple subtraction without any non-linearity or time-dependency > or > mutual inhibition, etc. Is there any article elaborating this last > stage by > for example adding some time-dependant non-linearity to it? > > 2- what are the major articles in which I can learn about the > computational > models of rivalry between the short-range and the long-range > apparent motion > system. this rivalry is hypothesized to occur in visual stimuli like > the > missing fundamental grating or the Ternus effect, or the reversed > Phi, if > the viewers' distance is adjusted properly. this rivalry is > interesting > because the two rivals are not at the same level of processing. > > Best thanks for your attention > Pooya
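[Editor's note: since the exchange above concerns where the nonlinearity sits in the opponent-energy computation, a small numerical sketch may help. This is nobody's published code — just a minimal, illustrative 1D motion-energy model with quadrature pairs, squaring, opponent subtraction, and the divisive normalization Eero mentions; all grid and filter parameters are arbitrary choices for illustration.]

```python
import numpy as np

# 1D space x time "movie" grids
x = np.linspace(-8.0, 8.0, 128)
t = np.linspace(0.0, 8.0, 128)
X, T = np.meshgrid(x, t, indexing="ij")

k, w = 2.0, 2.0                                  # filter/stimulus frequencies (arbitrary)
gx = np.exp(-x**2 / (2 * 3.0**2))                # spatial Gaussian envelope
gt = np.exp(-(t - 4.0)**2 / (2 * 1.5**2))        # temporal Gaussian envelope

fe, fo = gx * np.cos(k * x), gx * np.sin(k * x)  # even/odd spatial quadrature pair
te, to = gt * np.cos(w * t), gt * np.sin(w * t)  # temporal quadrature pair

def energies(S):
    """Directional energies: linear filtering, then squaring (the nonlinearity)."""
    resp = lambda fs, ft: np.sum(np.outer(fs, ft) * S)  # one RF covering the movie
    r1 = resp(fe, te) + resp(fo, to)   # rightward-selective quadrature pair
    r2 = resp(fo, te) - resp(fe, to)
    l1 = resp(fe, te) - resp(fo, to)   # leftward-selective quadrature pair
    l2 = resp(fo, te) + resp(fe, to)
    return r1**2 + r2**2, l1**2 + l2**2

def opponent(S, eps=1e-9):
    """Opponent subtraction, divisively normalized by total energy."""
    Er, El = energies(S)
    return (Er - El) / (Er + El + eps)

right   = np.cos(k * X - w * T)          # grating drifting rightward
left    = np.cos(k * X + w * T)          # grating drifting leftward
flicker = np.cos(k * X) * np.cos(w * T)  # counterphase flicker: no net motion

# positive for rightward, negative for leftward, near zero for flicker
print(opponent(right), opponent(left), opponent(flicker))
```

The squaring happens before the subtraction, exactly as in the discussion above; the opponent stage itself remains a plain difference, and the normalization only rescales it.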
From auke.ijspeert at epfl.ch Mon Aug 18 12:07:00 2008 From: auke.ijspeert at epfl.ch (Auke Ijspeert) Date: Mon, 18 Aug 2008 18:07:00 +0200 Subject: Connectionists: PhD studentship in computational neuroscience (gait generation and gait transition in the salamander) at EPFL, Lausanne Switzerland Message-ID: <48A99E24.20006@epfl.ch> *One funded PhD student position in computational neuroscience: gait generation and gait transition in the salamander* The Biologically Inspired Robotics Group (BIRG, http://birg.epfl.ch/) in the School of Computer and Communication Sciences at EPFL (Lausanne, Switzerland) has one open PhD studentship in computational neuroscience. The position is part of a project funded by the Swiss SystemsX initiative in systems biology (see http://www.systemsx.ch/) in collaboration with Dr Thierry Wannier (Univ. Fribourg) and Prof. Jean-Marie Cabelguen (Univ. of Bordeaux). *Background* The goal of this project is to use an interdisciplinary approach to decode the mechanisms of gait generation and gait transition in the salamander. The focus is on the locomotor circuits in the brain stem and the spinal cord, in particular on decoding the interplay of descending control and spinal rhythm generation in locomotor activities. Using an interdisciplinary approach that combines neurophysiology, mathematical theory of coupled oscillators, and numerical simulations, we will address various questions concerning the mechanisms of gait transition between swimming and walking in salamander. The goal of the PhD thesis will be to develop models of the locomotor neural networks based on systems of coupled nonlinear oscillators representing the central pattern generator circuits of the salamander spinal cord.
In order to investigate the feedback loops between the central nervous system, the body and the environment, these neural network models will be bidirectionally coupled with a representation of the salamander body, namely a 2D biomechanical simulation and a salamander-like amphibious robot. The expected impact of this project is a better understanding of the functioning of the spinal cord and of the descending pathways during locomotion in vertebrates. In the long term, such knowledge is fundamental to help designing therapies for patients with spinal cord injuries (SCIs). In the short term, this study will significantly enhance our understanding of locomotor circuits in salamander. Furthermore, since salamanders have capabilities of spinal regeneration and locomotor recovery after SCI that are quite unique among vertebrates, understanding the mechanisms of intact locomotion is essential to be able to properly characterize how locomotor function is recovered. *Requirements:* Candidates need to have a Master degree. The ideal candidate for this position should have a strong mathematical background (e.g., in computational biology, mathematics, or physics), good programming skills, and be interested in using mathematical models and robots as tools to understand biology. *How to apply:* The application to the positions should consist of a motivation letter (explaining why you are interested in the project, and why you feel qualified for it), a CV, and a list of grades. Two (or more) letters of reference should be sent directly by the referees (e.g. professors who have supervised a research project) to Prof. Auke Ijspeert (emails are preferred). Applicants will also need to apply to (and be accepted by) one of the EPFL doctoral programs (see http://phd.epfl.ch/), the most relevant being "Computer, communication and information sciences" and "Neuroscience". Informal inquiries about the relevance of an application can be sent to auke.ijspeert at epfl.ch (e.g. 
before submitting an application to the doctoral school), but responses can be slow because of a heavy schedule and a filled mail box. *Deadline and starting date:* Applications are invited from today, and will be considered continuously until the position is filled. The ideal starting date is the 1st of October 2008 (or as soon as possible after that date). *Contact:* Information concerning the type of research carried out by the group can be found at http://birg.epfl.ch/. You should send your application and any inquiry by email to: Prof. Auke Jan Ijspeert , auke.ijspeert at epfl.ch School of Computer and Communication Sciences EPFL, Swiss Federal Institute of Technology EPFL-IC-ISIM-GRIJ INN 237 Station 14 CH-1015 Lausanne, Switzerland WWW: http://birg.epfl.ch/ -- ----------------------------------------------------------------- Prof Auke Jan Ijspeert SNF (Swiss National Science Foundation) Assistant Professor School of Computer and Communication Sciences, EPFL EPFL-IC-ISIM-GRIJ EPFL, Swiss Federal Institute of Technology, Lausanne Station 14 CH 1015 Lausanne, Switzerland Office: INN 237 Tel: +41 21 693 2658, Fax: +41 21 693 3705 www: http://birg.epfl.ch Email: Auke.Ijspeert at epfl.ch Adjunct Assistant Professor, Department of Computer Science, University of Southern California -----------------------------------------------------------------
From doya at oist.jp Wed Aug 20 02:02:05 2008 From: doya at oist.jp (Kenji Doya) Date: Wed, 20 Aug 2008 15:02:05 +0900 Subject: Connectionists: Workshop on Open Problems in Neuroscience of Decision Making In-Reply-To: References: <91D06137-7972-4C74-AD5B-63EF5528F6CD@oist.jp> <750DBD6D-287E-472B-A076-E8B7139E086C@oist.jp> Message-ID: <4F1EE37A-1F86-49C9-869C-233418716B48@oist.jp> Workshop on Open Problems in the Neuroscience of Decision Making http://www.irp.oist.jp/nc/odm/ October 15 to 19, 2008 OIST Seaside House, Okinawa, Japan http://www.oist.jp/seasidehouse.html During the last decade, the study of "decision making" has grown beyond the domains of philosophy and economics to become a major subject of neuroscience and machine learning. This workshop focuses on four outstanding issues in the neural mechanisms of decision making. We invite a small number of young scientists (graduate students or postdoctoral researchers) as the discussants of the workshop. If you would like to be considered as a discussant, please send your C.V. and a PDF file of your publication most relevant to the topic of this workshop to odm08 at oist.jp by September 1st, 2008. Invitations will be sent by September 8th. Food and lodging during the workshop will be provided for the invited discussants. Travel support will also be available to those who require it. Sessions and Confirmed Speakers: 1: Valuation -- What affects the evaluation of risk and the timing of reward? Peter Bossaerts, EPFL John O'Doherty, U Dublin and Caltech Kenji Doya, OIST 2: Decision -- Where in the brain are decisions made? Geoffrey Schoenbaum, U Maryland Thomas Boraud, CNRS Paul Cisek, U Montreal 3: Learning -- How do dopamine neurons compute reward prediction errors?
Brian Hyland, U Otago Yasushi Kobayashi, Osaka U Masamichi Sakagami, Tamagawa U Hiroyuki Nakahara, RIKEN BSI 4: Multiplicity -- Why do we use different strategies and which brain parts are involved? Bernard Balleine, UCLA Nathaniel Daw, New York U Jon Horvitz, Boston College Co-organizers: Bernard Balleine, UCLA Kenji Doya, OIST Hiroyuki Nakahara, RIKEN John O'Doherty, U Dublin and Caltech Masamichi Sakagami, Tamagawa U Sponsors: Okinawa Institute of Science and Technology RIKEN Brain Science Institute Tamagawa University University of California, Los Angeles California Institute of Technology ---- Kenji Doya Neural Computation Unit, Okinawa Institute of Science and Technology 12-22 Suzaki, Uruma, Okinawa 904-2234, Japan Phone: +81-98-921-3843; Fax: +81-98-921-3873 http://www.nc.irp.oist.jp/ From elli.chatzopoulou at incf.org Fri Aug 22 11:17:22 2008 From: elli.chatzopoulou at incf.org (INCF - Elli Chatzopoulou) Date: Fri, 22 Aug 2008 17:17:22 +0200 Subject: Connectionists: Neuroinformatics 2008 Congress: Publication of abstracts Message-ID: <48AED882.9020302@incf.org> Dear all, All the abstracts of lectures, posters and demos are now published online through Frontiers in Neuroinformatics. Please view them through the following link: http://frontiersin.org/conferences/individual_conference_listing.php?confid=2&ind=1 Please note that the deadline for registering for the congress Social Events (City Hall reception, Congress Banquet, and Archipelago Dinner) is August 29 (12:00 noon CET). Registration for the Congress itself will be open until the start of the congress. The complete program of Neuroinformatics2008 can be found here: http://www.neuroinformatics2008.org/program-new For questions regarding registration, please contact our congress arranger, MCI Stockholm, through the following email-address: confirmation-sweden at mci-group.com. -- Elli Chatzopoulou, Ph.D. 
Scientific Information and Public Relations Officer International Neuroinformatics Coordinating Facility Secretariat Karolinska Institutet Nobels väg 15A SE-171 77 Stockholm Sweden Email: elli.chatzopoulou at incf.org Phone: +46 8 524 87491 Mobile: +46 7 614 87491 Fax: +46 8 524 87150 web: www.incf.org From pascal.fua at epfl.ch Wed Aug 20 12:29:51 2008 From: pascal.fua at epfl.ch (Pascal Fua) Date: Wed, 20 Aug 2008 18:29:51 +0200 Subject: Connectionists: Post-doctoral position in Vision/Robotics at EPFL Message-ID: <48AC467F.6080100@epfl.ch> EPFL's Computer Vision Laboratory (http://cvlab.epfl.ch/) has an opening for a post-doctoral fellow in the field of perception. The position is initially offered for 12 months and can be extended. Description: The work will be carried out within the context of a European Union project, whose overall aim is to investigate interactions between morphology and behavior by designing and building a prototype of a reconfigurable anguilliform swimming robot. More specifically, our goal is to give this robot an electric sense comparable to that of real electric fish, which it will be able to use to detect and recognize objects and obstacles. Position: The Computer Vision laboratory offers a creative international environment, a possibility to conduct competitive research on a global scale, and involvement in teaching. Within the project, there will be ample opportunities to cooperate with some of the best groups in Europe and elsewhere. EPFL is located next to Lake Geneva in a beautiful setting 60 kilometers away from the city of Geneva. Salaries are in the range CHF 70000 to 80000 per year, the precise amount to be determined by EPFL's department of human resources. Education: Applicants are expected to have finished, or be about to finish, their Ph.D. degrees, to have a strong mathematical background, and to have a track record of publications in top conferences and journals. Strong programming skills (C or C++) are a plus.
French language skills are not required; English is mandatory. Application: Applications must be sent by email to Prof. P. Fua (pascal.fua at epfl.ch). They must contain a statement of interest, a CV, a list of publications, and the names of three references. -- -------------------------------------------------------------------- Prof. P. Fua (Pascal.Fua at epfl.ch) Tel: 41/21-693-7519 FAX: 41/21-693-7520 Url: http://cvlab.epfl.ch/~fua/ -------------------------------------------------------------------- From sbasu at media.mit.edu Sat Aug 23 00:02:22 2008 From: sbasu at media.mit.edu (Sumit Basu) Date: Fri, 22 Aug 2008 21:02:22 -0700 Subject: Connectionists: Final CFP for SysML'08 (deadline: 9/26): Workshop on Tackling Computer Systems Problems with Machine Learning Techniques Message-ID: <1219464142.12144.1270093293@webmail.messagingengine.com> [This is a final reminder for the SysML'08 deadline, coming up very soon! (9/26). Please forward as appropriate.] If you're doing any research that combines systems work with machine learning/AI, we strongly encourage you to submit it to SysML'08 (Workshop on Tackling Computer Systems Problems with Machine Learning Techniques), to be held jointly with OSDI'08 in San Diego. Topics of interest include using machine learning for reliability, performance, power management, security, fault diagnosis, manageability, programming, debugging, etc., as well as meta-aspects of SysML like tools and building blocks. We're particularly interested in work that has been deployed and tested in real systems, but welcome submissions from all stages of development. Important Dates Submissions due: September 26, 2008, 5:00 p.m.
PDT Notification of acceptance: October 31, 2008 Final papers due: November 21, 2008 Workshop date: December 11, 2008 Please visit the SysML'08 site at http://www.usenix.org/events/sysml08 (there will be a link for uploading submissions by early September) for more details and the complete Call for Papers (at http://www.usenix.org/events/sysml08/cfp). -Sumit Basu and Armando Fox, Co-Chairs of SysML'08 From carnevalet at sbcglobal.net Mon Aug 25 23:08:32 2008 From: carnevalet at sbcglobal.net (Ted Carnevale) Date: Mon, 25 Aug 2008 23:08:32 -0400 Subject: Connectionists: NEURON Course at 2008 SFN Meeting Message-ID: <48B373B0.9010903@sbcglobal.net> Short Course Announcement Using NEURON to Model Cells and Networks Satellite Symposium, Society for Neuroscience Meeting 9 AM - 5 PM on Friday, Nov. 14, 2008 Speakers to include M.L. Hines and N.T. Carnevale The emphasis of this course is on practical issues that are key to the most productive use of NEURON, an advanced simulation environment for realistic modeling of biological neurons and neural circuits. Through lectures and live computer demonstrations, we will present the principles and procedures that are involved in creating and using models of cells and networks with NEURON. We will also show how to create distributed models that achieve nearly linear speedup with the number of processors by taking advantage of parallel hardware ranging from multiprocessor personal computers and workstation clusters to massively parallel supercomputers. Partial list of topics to be covered: * Efficient design and implementation of models of neurons and networks. + Constructing and managing models with the Cell Builder, Channel Builder, Linear Circuit Builder, Network Builder, and Model Viewer. + Using the built-in variable-order variable-timestep integrator for improved speed and accuracy. + Parallelizing models of cells and networks. + Optimizing models with high-dimensional datasets.
* Expanding NEURON's repertoire of biophysical mechanisms. * Databases for empirically-based modeling. Each registrant will receive a comprehensive set of notes. Registration is limited to 40 individuals on a first-come, first-served basis. Registration deadline is Monday, October 13, 2008. No on-site registration will be accepted. For more information see http://www.neuron.yale.edu/dc2008.html --Ted Carnevale From Dave_Touretzky at cs.cmu.edu Sun Aug 24 02:31:39 2008 From: Dave_Touretzky at cs.cmu.edu (Dave_Touretzky@cs.cmu.edu) Date: Sun, 24 Aug 2008 02:31:39 -0400 Subject: Connectionists: announcing HHsim 3.1 Message-ID: <29926.1219559499@ammon.boltz.cs.cmu.edu> HHsim is a graphical simulation of a section of excitable neuronal membrane using the Hodgkin-Huxley equations. It provides full access to the Hodgkin-Huxley parameters, membrane parameters, stimulus parameters, and ion concentrations. In contrast with NEURON or GENESIS, which are vastly more sophisticated research tools, HHsim is simple educational software designed specifically for graduate or undergraduate neurophysiology courses. The user interface can be mastered in a couple of minutes and provides many ways for the student to experiment. HHsim is free software distributed under the GNU General Public License. The official HHsim web site is at http://www.cs.cmu.edu/~dst/HHsim. The online documentation, including sample screen shots, is available on this web site, and also included with the program when you download it. The web site also offers sample exercises that use the simulator. Compiled, stand-alone versions of HHsim are available for Windows XP, Linux, and Mac OS X (both Intel and PowerPC architectures). HHsim is written in Matlab, so if you have Matlab on your machine, you can just download the source code and run that. Version 3.1 of HHsim includes several bug fixes and user interface upgrades, such as improved zoom and pan modes, and expanded parameter ranges in voltage clamp mode.
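[Editor's note: for readers who want a feel for the kind of dynamics HHsim animates, here is a minimal Hodgkin-Huxley membrane patch in plain Python with forward-Euler integration and the standard textbook parameters (modern -65 mV resting convention). This is purely illustrative and is not HHsim's own code, which is written in Matlab.]

```python
import math

# Standard HH rate functions (V in mV); guards handle the removable singularities
def a_m(V): return 1.0 if abs(V + 40) < 1e-7 else 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * math.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + math.exp(-(V + 35) / 10))
def a_n(V): return 0.1 if abs(V + 55) < 1e-7 else 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * math.exp(-(V + 65) / 80)

def simulate(I_stim=10.0, t_stop=50.0, dt=0.01):
    """Forward-Euler HH patch; I_stim in uA/cm^2, times in ms. Returns voltage trace."""
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.4                 # mV
    V = -65.0
    m = a_m(V) / (a_m(V) + b_m(V))                   # gates start at steady state
    h = a_h(V) / (a_h(V) + b_h(V))
    n = a_n(V) / (a_n(V) + b_n(V))
    trace = []
    for _ in range(int(t_stop / dt)):
        INa = gNa * m**3 * h * (V - ENa)
        IK  = gK * n**4 * (V - EK)
        IL  = gL * (V - EL)
        V += dt * (I_stim - INa - IK - IL) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace.append(V)
    return trace

v = simulate()        # 10 uA/cm^2 is above rheobase, so the patch spikes
print(max(v))         # peak voltage rises above 0 mV
```

With the stimulus set to zero the patch simply sits near its -65 mV resting potential, which is the sort of comparison the HHsim exercises walk students through interactively.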
Most of the work on version 3.1 was completed in February 2008, but the Mac PowerPC version was just completed this week. -- Dave Touretzky From bisant at umbc.edu Sun Aug 24 17:48:28 2008 From: bisant at umbc.edu (David Bisant) Date: Sun, 24 Aug 2008 17:48:28 -0400 Subject: Connectionists: FLAIRS Special Track on Neural Networks and Data Mining, Call for Papers, Sanibel Island, Florida Message-ID: <48B1D72C.407@umbc.edu> Call for Papers FLAIRS-2009, Special Track on Neural Networks and Data Mining The 22nd International FLAIRS Conference (FLAIRS-2009) Sanibel Island, Florida May 19-21, 2009 http://www.flairs-22.info/ Important Dates Paper submissions due November 23, 2008 Notification letters sent January 2009 Camera-ready copy due February 2009 Papers are being solicited for a special track on Neural Networks and Data Mining at the 22nd International FLAIRS Conference (FLAIRS-2009). The special track will be devoted to data mining with the aim of presenting new and important contributions in this area. The areas include, but are not limited to, the following: applications such as intelligence analysis, medical and health applications, text, video, and multi-media mining, E-commerce and web data, financial data analysis, intrusion detection, remote sensing, earth sciences, and astronomy; modeling algorithms such as hidden Markov models, decision trees, neural networks, statistical methods, or probabilistic methods; case studies in areas of application, or over different algorithms and approaches; feature extraction and selection; post-processing techniques such as visualization, summarization, or trending; preprocessing and data reduction; data engineering or warehousing; or other data mining research which is related to artificial intelligence or neural modeling. For those interested in computational neurobiology, see the track on Computational Biology and Bioinformatics.
FLAIRS 2009 Invited Speakers Eugene Freuder, University College Cork, Ireland Arthur Graesser, University of Memphis, USA Jan Wiebe, University of Pittsburgh, USA Special Track Committee William Eberle (Co-Chair) Tennessee Technological University, USA David Bisant (Co-Chair) The Laboratory for Physical Sciences, USA Jesus Gonzalez INAOE, Mexico Nitesh Chawla University of Notre Dame, USA Diane Cook Washington State University, USA Olac Fuentes University of Texas, El Paso, USA Slawomir Zadrozny Systems Research Institute, Polish Academy of Sciences, Poland Hyoil Han Drexel University, USA Geof Barrows Centeye Corporation, USA Lawrence Holder Washington State University, USA Jorge Ramirez Apple Computer, USA SeungJin Lim Utah State University, USA Eduardo Morales INAOE, Mexico Fred Petry Tulane University, USA Douglas Talbert Tennessee Technological University, USA Taghi Khoshgoftaar Florida Atlantic University, USA Martin Atzmueller University of Wuerzburg From soeren.lorenz at uni-bielefeld.de Mon Aug 25 05:41:36 2008 From: soeren.lorenz at uni-bielefeld.de (Soeren Lorenz) Date: Mon, 25 Aug 2008 11:41:36 +0200 Subject: Connectionists: Special Issue 'Interactive Educational Media for the Neural and Cognitive Sciences' published in Brains, Minds, and Media Message-ID: <48B27E50.2030605@uni-bielefeld.de> (Apologies for cross-posting - please forward to colleagues) ________________________ Special Issue on 'Interactive Educational Media for the Neural and Cognitive Sciences' http://www.brains-minds-media.org/current now freely available in Brains, Minds & Media, the open access eJournal of New Media in Neural and Cognitive Science and Education: Summary: This special issue of Brains, Minds & Media covers a broad range of interactive research media and tools with a strong emphasis on their use in neural and cognitive sciences education.
The focus lies on the question of how these media and tools can significantly enhance learning and teaching, and how curricular integration can be achieved. Tools and projects covered are: SNNAP, Topographica, SimBrain, PDP++, SPatch, BrainMaps, ModelDB, Neurons in Action, CELEST, and Monist. Written by researchers, tool developers and experienced academic teachers, this collection provides an orientation guide not only for teaching researchers but also for interested teachers and students. ----------------------------- CONTINUOUS CALL FOR SUBMISSIONS 'Brains, Minds & Media' is an open access journal for media in research and education in the neural and cognitive sciences (see http://www.brains-minds-media.org/aims). Information about manuscript submissions can be found at http://www.brains-minds-media.org/guidelines. Please send your submission to editors at brains-minds-media.org. ----------------------------- If you have any questions, please contact info at brains-minds-media.org. Regards, Soeren Lorenz (editorial co-ordinator) soeren.lorenz at uni-bielefeld.de From liam at stat.columbia.edu Tue Aug 26 16:27:44 2008 From: liam at stat.columbia.edu (liam@stat.columbia.edu) Date: Tue, 26 Aug 2008 16:27:44 -0400 (EDT) Subject: Connectionists: Postdoc: statistical modeling of neural data, Columbia Message-ID: Hi all - apologies for the cross-posting. A full-time postdoctoral position is available immediately in the lab of Liam Paninski, working on statistical analysis and modeling of neural data, in close cooperation with our experimental collaborators. Full information (including related publications) available here: http://www.stat.columbia.edu/~liam/postdoc.html Current projects include the analysis of large-scale multineuronal coding in retina and cortex, optimal stimulus design for sensory neuroscience experiments, optimal decoding of neural spike train data, and analysis of biophysical dynamics given calcium- and voltage-sensitive imaging data.
Requirements: The work is highly interdisciplinary, and applicants must have strong mathematical and computational skills. Preferred educational background is a PhD in Electrical Engineering, Statistics, Machine Learning, Physics, Applied Mathematics, or Computational Neuroscience. Previous experience with signal processing (including sequential Monte Carlo / particle filtering methods), statistical modeling, and/or computational neuroscience is required. Environment: The Paninski group is at Columbia University, based in the Statistics department and closely integrated with the Center for Theoretical Neuroscience, in the great city of New York. Applicants should send email to "liam at stat columbia edu" providing: 1. a one-page description of past research experience 2. a one-page description of future research interests and goals 3. a resume of educational and research experience, including publications 4. names of at least two people who could provide letters of reference All materials should be in Acrobat (pdf) or plain text (no microsoft, please), and may be included as a URL, or as an email attachment. L From Maureen.Clerc at sophia.inria.fr Tue Aug 26 04:14:17 2008 From: Maureen.Clerc at sophia.inria.fr (Maureen Clerc) Date: Tue, 26 Aug 2008 10:14:17 +0200 Subject: Connectionists: Postdoc position: neural mass modeling of epileptic discharges Message-ID: <48B3BB59.2060703@sophia.inria.fr> A new postdoc position is available at INRIA (Odyssée project-team, Sophia Antipolis) and INSERM (U751, Marseille). Topic: Investigation of the fluctuations of neuronal activity during epileptic discharges and comparison with the states of a neural mass model. Framework: The project takes place within a collaboration between INRIA and INSERM on models for the interpretation of multimodal datasets (MEG, EEG, fMRI).
The goal of this project is to build computational models for the generation of multimodal data, in order to (i) gain a better understanding of the impact of the different parameters of neuronal activity on simulated non-invasive measurements, and (ii) use neural-mass-type models in an inverse problem of source estimation from MEG/EEG/fMRI. For more details, see http://www-sop.inria.fr/odyssee/en/INRIA-INSERM-Postdocposition Contact: christian.benar at univmed.fr or Maureen.Clerc at sophia.inria.fr Deadline: fall 2008 From alexwade at gmail.com Fri Aug 29 01:38:35 2008 From: alexwade at gmail.com (Alex Wade) Date: Thu, 28 Aug 2008 22:38:35 -0700 Subject: Connectionists: Cosyne 2009 - workshop deadline reminder Message-ID: <76eaaa9a0808282238u4beb4935y4fb2db96adf4095a@mail.gmail.com> --------------------------------------------------- Cosyne09 - CALL FOR WORKSHOP PROPOSALS March 2-3, 2009 Snowbird, Utah http://cosyne.org/wiki/Cosyne_09_workshops --------------------------------------------------------------------------------- PROPOSAL DEADLINE: 15 Sept 2008 A series of workshops will be held after the main Cosyne meeting (http://cosyne.org/). The goal is to provide an informal forum for the discussion of important research questions and challenges. Controversial issues, open problems, comparisons of competing approaches, and alternative viewpoints are encouraged. The overarching goal of all workshops should be the integration of empirical and theoretical approaches, in an environment that fosters collegial discussion and debate. Preference will be given to proposals that differ in content, scope, and/or approach from workshops of recent years (examples available at cosyne.org).
Relevant topics include, but are not limited to: sensory processing; motor planning and control; multisensory integration; motivation, reward and decision making; learning and memory; adaptation and plasticity; neural coding; neural circuitry and network models; dendritic processing; and methods in computational or systems neuroscience. ________________________________________________________________ WORKSHOP DETAILS: -- There will be 4-8 workshops/day, running in parallel. -- Each workshop is expected to draw between 15 and 80 people. -- The workshops will be split into morning (8:00-11:00 AM) and afternoon (4:30-7:30 PM) sessions. -- Workshops will be held at Snowbird, a ski resort located 30 miles (typically less than an hour) from the Salt Lake City airport. -- Buses from the main conference will be provided. -- Descriptions of previous workshops may be found at http://cosyne.org/wiki/Cosyne_09_workshops ________________________________________________________________ SUBMISSION INSTRUCTIONS: Deadline: September 15th, 2008 Format: plain text only -- please no attachments email to: cosyne09workshops at gmail.com (Alex Huk & Adam Kohn) Proposals should include: -- Name(s) and email address(es) of the organizers (no more than 2 organizers per session, please). A primary contact should be designated. -- A title. -- A description of: what the workshop is to address and accomplish, why the topic is of interest, who the targeted group of participants is. -- Names of potential invitees, with indication of which speakers are confirmed. Preference will be given to workshops with the most confirmed speakers. -- Proposed workshop length (1 or 2 days). Most workshops will be limited to a single day. If you think your workshop needs 2 days, please explain why. -- A *brief* resume of the workshop organizer along with a *brief* list of publications (about half a page total).
________________________________________________________________ WORKSHOP ORGANIZERS' RESPONSIBILITIES: -- Coordinate workshop participation and content. -- Moderate the discussion. ________________________________________________________________ SUGGESTIONS: Experience has shown that the best discussions during a workshop are those that arise spontaneously. A good way to foster these is to have short talks and long question periods (e.g. 30 + 15 minutes), and have plenty of breaks. Also, when it comes to the number of talks, in the words of Jerry Brown, less is more. We recommend fewer than 10 talks. ________________________________________________________________ WORKSHOP COSTS: Detailed registration costs, etc., will be available at http://cosyne.org/ Please note: Cosyne does NOT provide travel funding for workshop speakers. All workshop participants are expected to pay for workshop registration fees. Participants are encouraged to register early, in order to qualify for discounted registration rates. Cosyne does provide free workshop registration for workshop organizers. ________________________________________________________________ COSYNE 2009 WORKSHOP CHAIRS: Alex Huk (UT Austin), Adam Kohn (Yeshiva U.) QUESTIONS: email: cosyne09workshops at gmail.com (Alex Huk & Adam Kohn)
From aag at soi.city.ac.uk Fri Aug 29 09:29:04 2008 From: aag at soi.city.ac.uk (Artur d'Avila Garcez) Date: Fri, 29 Aug 2008 14:29:04 +0100 (BST) Subject: Connectionists: Journal of Algorithms thematic issue on Algorithmic Reinforcement Learning (final call for papers) Message-ID: Call for Papers Journal of Algorithms, Elsevier http://www.cs.rhul.ac.uk/~kostas/arl/cfp.html Thematic Issue on "Algorithmic Reinforcement Learning" http://www.elsevier.com/locate/jalgor Aim & Scope Reinforcement learning is an area of machine learning seeking to provide a computational approach to understanding and automating goal-directed learning and decision-making. It addresses the question of how an autonomous agent that senses and acts in its environment can learn to choose optimal actions to achieve its goals. The approach originates from previous work in psychology (particularly animal learning) and computer science (particularly dynamic programming), with ongoing work in artificial intelligence (particularly stochastic, symbolic and connectionist learning). More recently, reinforcement learning has been used to provide cognitive models that simulate human performance during problem solving or skill acquisition. This thematic issue of the Journal of Algorithms seeks to celebrate the increasingly multidisciplinary nature of reinforcement learning and, in line with the Journal's manifesto, it proposes to study and present the subject from an algorithmic perspective that we refer to as Algorithmic Reinforcement Learning (ARL). It is hoped in this way that the issue will serve as a reference in the area, and will help organise and promote the research across sub-areas. We welcome the submission of innovative and mature results in specifying, developing and experimenting with ARL.
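The goal-directed learning described above is usually formalized through a temporal-difference value update, which can be sketched in a few lines. The toy "chain" environment, the function name, and all parameter values below are illustrative assumptions for readers new to the area, not material from this call:

```python
# Minimal sketch of tabular Q-learning, the textbook reinforcement-learning
# algorithm. The chain environment and parameters are illustrative only:
# action 1 moves right, action 0 moves left; reaching the last state
# yields reward 1 and ends the episode.
import random

def q_learning(n_states=5, n_actions=2, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: explore occasionally, otherwise act greedily
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # the Q-learning temporal-difference update
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

random.seed(0)
Q = q_learning()
# In every non-terminal state, "move right" should now have the higher value.
```

The agent learns purely from the scalar reward signal, with no model of the environment; many of the topics below (relational, Bayesian, multi-agent RL) can be read as variations on this basic update.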
Approaches that relate, compare and contrast, combine or integrate different areas of reinforcement learning are particularly encouraged. Papers describing innovative developments in the area are also encouraged. Areas of interest include, but are not limited to, the following topics: Multi-agent reinforcement learning Relational reinforcement learning Neuro-symbolic reinforcement learning Bayesian reinforcement learning Reinforcement learning and logic/ILP Reinforcement learning with background knowledge Robust reinforcement learning Reinforcement learning in game theory and bounded rationality Applications Important Dates Submission Deadline: 1st October 2008 Acceptance Notice: 20th January 2009 Final Manuscript: 1st March 2009 Publication Date: 2nd Quarter, 2009 (tentative) Submission Guidelines The work submitted must be in the form of high-quality, original papers that are not simultaneously submitted for publication elsewhere. Papers should be formatted according to the journal style, and not exceed 25 pages including figures, references, etc. The papers must be submitted by sending a PDF version of the complete manuscript to: arl-guest-eds at cs.rhul.ac.uk Submitted papers will be peer reviewed according to their originality, quality and relevance to the thematic issue. Guest Editors Dr. Kostas Stathis Computer Science Department, Royal Holloway, University of London, UK URL: http://www.cs.rhul.ac.uk/~kostas Dr. Artur d'Avila Garcez Computing Department, City University London, UK URL: http://www.soi.city.ac.uk/~aag Dr. Robert Givan Department of Electrical and Computer Engineering, Purdue University, USA URL: http://cobweb.ecn.purdue.edu/~givan/ ----------------------------------------------------------------------- Dr.
Artur d'Avila Garcez Reader in Computing Department of Computing, School of Informatics City University London, EC1V 0HB, UK Tel: +44 (0)20 7040 8344 Fax: +44 (0)20 7040 0244 Email: aag at soi.city.ac.uk URL: http://www.soi.city.ac.uk/~aag ----------------------------------------------------------------------- From dglanzma at mail.nih.gov Wed Aug 27 13:43:40 2008 From: dglanzma at mail.nih.gov (Glanzman, Dennis (NIH/NIMH) [E]) Date: Wed, 27 Aug 2008 13:43:40 -0400 Subject: Connectionists: Neuronal Variability and Its Functional Significance Message-ID: <0EE5F9DA83318D47B16FB45A5CBA4A4A02C187CB@nihcesmlbx2.nih.gov> CALL FOR POSTERS 16th Annual Dynamical Neuroscience Satellite Symposium "Neuronal Variability and Its Functional Significance" Preceding the 38th Annual Meeting of the Society for Neuroscience Thursday and Friday, November 13-14, 2008 The Capital Ballroom of the JW Marriott Hotel Washington, DC The brain is restless. Physiological data recorded from the brain often have random-appearing components. Repeated stimuli evoke responses that are not identical from trial to trial. Not too long ago this variability was dismissed as noise and, through techniques such as signal averaging, removed from further consideration. More recent work has begun to examine the rich content of this variability and shed light on its functional consequences. Neural variability and noise have become an active field of research, generating a wealth of new knowledge and information. This symposium will assess the current status of four related areas: Characterizing Neuronal Variability; The Dynamics of Neural Ensembles; Neural Variability and Cognition; Neural Variability and Brain Disorders.
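The point about signal averaging can be made concrete with a short simulation. The sinusoidal "evoked response", the noise level, and the trial count below are illustrative assumptions, not data from the symposium:

```python
# Illustrative simulation: averaging across trials shrinks trial-to-trial
# variability by roughly sqrt(n_trials), which is why averaged responses
# look clean even when single trials do not -- and why averaging discards
# exactly the variability this symposium is about.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 400, 200
t = np.linspace(0.0, 1.0, n_samples)
evoked = np.sin(2 * np.pi * 3.0 * t)                 # repeatable evoked response
trials = evoked + rng.normal(0.0, 1.0, (n_trials, n_samples))  # noisy trials

single_trial_noise = np.std(trials[0] - evoked)          # about 1.0
averaged_noise = np.std(trials.mean(axis=0) - evoked)    # about 1.0/sqrt(400)
print(single_trial_noise, averaged_noise)
```

The residual that averaging suppresses by a factor of about sqrt(n) is precisely the trial-to-trial structure that the work described above treats as signal rather than nuisance.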
Invited Speakers: Henry Abarbanel, Larry Abbott, Emery Brown, Richard Coppola, Charles Gray, Terran Lane, Daeyeol Lee, Stephen Lisberger, Helen Mayberg, Anna Roe, Nicholas Schiff, Charles Schroeder, Richard Stein, and Akaysha Tang Keynote Address Presented by the inaugural recipient of the Swartz Prize for Theoretical and Computational Neuroscience Symposium Organizers: Mingzhou Ding, University of Florida Dennis Glanzman, NIMH/NIH For programmatic information, please contact: D. Glanzman National Institute of Mental Health 6001 Executive Boulevard, Rockville, MD 20857 Telephone: (301) 443-1576 Register for the meeting, and submit a poster abstract (October 17 deadline for abstracts), here: http://neuro.dgimeetingsupport.com or contact: Nakia Wilson Telephone: (877) 772-9111