From bengio at idiap.ch Mon Jul 2 11:34:55 2001 From: bengio at idiap.ch (Samy Bengio) Date: Mon, 2 Jul 2001 17:34:55 +0200 (MEST) Subject: Machine Learning positions for PhDs, Postdocs and Seniors at IDIAP, Switzerland Message-ID: SEVERAL OPEN POSITIONS IN SPEECH, COMPUTER VISION, MACHINE LEARNING, NEURAL NETWORKS, AND MULTIMODAL INTERFACES (see the full proposal at http://www.idiap.ch/open-positions/open-positions4.html) The IDIAP Institute (http://www.idiap.ch) is a not-for-profit research institute affiliated with the Swiss Federal Institute of Technology at Lausanne (EPFL) and the University of Geneva. Located in Martigny (Valais, CH), IDIAP is partly funded by the Swiss Federal Government, the State of Valais, and the City of Martigny, and is involved in numerous national and international (European) projects. IDIAP is particularly active in the fields of speech and speaker recognition, computer vision, and machine learning, where it aims at the highest level of research. The institute currently numbers around 35-40 scientists, including permanent senior scientists, postdocs, and PhD students. Recently, IDIAP (in close collaboration with the EPFL Signal Processing Laboratory of Prof. Murat Kunt, http://ltswww.epfl.ch) has been awarded a major (10-year) research grant as the "Leading House" of a large National Centre of Competence in Research on "Interactive Multimodal Information Management". In view of the resulting (present and future) growth of the Institute, IDIAP currently welcomes applications from talented candidates at all levels with expertise or strong interest in the fields of speech processing, computer vision, machine learning, and multimodal interaction. The open positions include: management and senior positions (including one scientific deputy director and one speech processing group leader), project leaders, postdocs, and PhD students. Two EPFL tenure-track positions at the Assistant Professor level (with most of the research responsibilities located at IDIAP) are also available. Preference will be given to candidates with experience in one or several of the following areas: signal processing, statistical pattern recognition (typically applied to speech and scene analysis), neural networks, hidden Markov models, speech and speaker recognition, computer vision, human/computer interaction (dialog). Senior and postdoc candidates should also have a proven record of high-quality research and publications. All applicants should be experienced in C/C++ programming and familiar with the Unix environment; they should also be able to speak and write in English (and be willing to learn French). LOCATION: IDIAP is located in the town of Martigny (http://www.martigny.ch) in Valais, a scenic region in the South of Switzerland, surrounded by the highest mountains of Europe, and offering exciting recreational activities (including hiking, climbing and skiing), as well as varied cultural activities. It is also within close proximity to Montreux, Lausanne (EPFL) and Lake Geneva, and centrally located for travel to other parts of Europe. PROSPECTIVE CANDIDATES should send their detailed CV, together with a letter of motivation and 3 reference letters, to: IDIAP Att: Secretariat/jobs P.O. Box 592, Simplon, 4 CH-1920 Martigny Switzerland Email: jobs at idiap.ch Phone: +41-27-721.77.11 Fax: +41-27-721.77.12 ----- Samy Bengio Research Director. Machine Learning Group Leader. IDIAP, CP 592, rue du Simplon 4, 1920 Martigny, Switzerland. tel: +41 27 721 77 39, fax: +41 27 721 77 12.
mailto:bengio at idiap.ch, http://www.idiap.ch/~bengio From serracri at sissa.it Fri Jul 6 05:27:09 2001 From: serracri at sissa.it (Cristina Serra) Date: Fri, 06 Jul 2001 11:27:09 +0200 Subject: Ph.D. in Neuroscience at SISSA Message-ID: <5.1.0.14.0.20010706112459.00a40440@shannon.sissa.it> The International School for Advanced Studies (SISSA) of Trieste, Italy, seeks candidates for 6+ PhD fellowships for training and research in Molecular, Cellular, Systems and Cognitive Neuroscience. SISSA is a leading center of higher learning in Physics, Mathematics, Biology and Neuroscience. Its mission is to foster research and the training of young scientists at the graduate and post-graduate level. All its activities are conducted in English. It features: - courses that cover a broad spectrum from molecular neurobiology to cognitive functions - small research groups with close supervisor interactions - excellent research facilities - concentrated 3-4 year PhD program - international staff and students - excellent track record for job placement in top-quality institutions Applications for the October 15-16 admission exams must arrive by October 1st. Those selected start their Ph.D. in November 2001. More information at http://www.sissa.it/cns/testphd/neuro.html Current research groups are led by: Laura Ballerini, Antonino Cattaneo, Enrico Cherubini, Mathew Diamond, Luciano Domenici, Jacques Mehler, Anna Menini, John Nicholls, Andrea Nistri, Raffaella Rumiati, Tim Shallice, Vincent Torre and Alessandro Treves. From wsenn at cns.unibe.ch Fri Jul 6 09:05:42 2001 From: wsenn at cns.unibe.ch (Walter Senn) Date: Fri, 06 Jul 2001 15:05:42 +0200 Subject: Paper on synaptic delay learning Message-ID: <3B45B7A6.4A9166EC@cns.unibe.ch> Dear Connectionists, The following paper (to appear in Neural Computation) is available at: http://www.cns.unibe.ch/publications/ftp/paper_Delay.pdf "Activity-dependent selection of axonal and dendritic delays or, why synaptic transmission should be unreliable" Walter Senn, Martin Schneider and Berthold Ruf Abstract: Systematic temporal relations between single neuronal activities or population activities are ubiquitous in the brain. No experimental evidence, however, exists for a direct modification of neuronal delays during Hebbian-type stimulation protocols. We show that, in fact, an explicit delay adaptation is not required if one assumes that the synaptic strengths are modified according to the recently observed temporally asymmetric learning rule with the downregulating branch dominating the upregulating branch. During development, slow unbiased fluctuations in the transmission time together with temporally correlated network activity may control neural growth and implicitly induce drifts in the axonal delays and dendritic latencies. These delays and latencies become optimally tuned in the sense that the synaptic response tends to peak in the soma of the postsynaptic cell if this is most likely to fire. The nature of the selection process, however, requires unreliable synapses in order to give `successful' synapses an evolutionary advantage over the others. Without unreliable transmission, the learning rule would equally modify all synapses with the same local time difference between the pre- and postsynaptic signal, irrespective of whether the corresponding total axonal and dendritic delay supports the postsynaptic firing or not.
Stochastic transmission may resolve this ambiguity by restricting the modification process to the active synapses only, giving a higher chance of being strengthened to those synapses which contribute to the postsynaptic activity. The width of the learning window also implicitly determines the preferred dendritic delay and the preferred width of the postsynaptic response. Hence, the learning rule may implicitly determine whether a synaptic connection provides precisely timed information or rather `contextual' information. Download from homepage: http://www.cns.unibe.ch/~wsenn/#pub ------------------------------------------------------------- Walter Senn Phone office: +41 31 631 87 21 Physiological Institute Phone home: +41 31 332 38 31 University of Bern Fax: +41 31 631 46 11 Buehlplatz 5 email: wsenn at cns.unibe.ch CH-3012 Bern SWITZERLAND http://www.cns.unibe.ch/~wsenn/ ------------------------------------------------------------- From nnsp01 at neuro.kuleuven.ac.be Fri Jul 6 10:22:20 2001 From: nnsp01 at neuro.kuleuven.ac.be (Neural Networks for Signal Processing 2001) Date: Fri, 06 Jul 2001 16:22:20 +0200 Subject: NNSP2001 Workshop: Focus on data mining and signal separation Message-ID: <3B45C99C.D7F7B803@neuro.kuleuven.ac.be> ----------------------------------------------------------------- 2001 IEEE Workshop on Neural Networks for Signal Processing September 10-12, 2001 Falmouth, Massachusetts, USA --------------------------------http://eivind.imm.dtu.dk/nnsp2001 The eleventh in a series of IEEE NNSP workshops will be held at the Sea Crest Oceanfront Resort and Conference Center (http://www.seacrest-resort.com/), the largest oceanfront conference resort on Cape Cod, with a 684-foot private white sandy beach. Contemporary research on neural networks for signal processing combines many ideas from adaptive signal/image processing, machine learning, and advanced statistics in order to solve complex real-world signal processing problems. This year, the workshop will focus on two key application areas: * data mining * blind source separation. The strong technical program will be complemented by a series of exciting keynote addresses: ``Information Geometry of Multilayer Neural Networks'' by Shun-ichi Amari ``Semi Blind Signal Separation and Extraction and Their Application in Biomedical Signal Processing'' by Andrzej Cichocki ``Learning Metrics for Exploratory Data Analysis'' by Samuel Kaski ``From Bits to Information: Theory and Applications of Learning Machines'' by Tomaso Poggio ``Beyond Stochastic Chaos: Implications for Dynamic Reconstruction'' by Simon Haykin ``A Novel Associative Memory Approach to Blind SIMO/MIMO Channel Equalization and Signal Recovery'' by S.Y. Kung Please refer to the workshop web site for further information. From lunga at ifi.unizh.ch Sat Jul 7 11:19:30 2001 From: lunga at ifi.unizh.ch (Max Lungarella) Date: 7 Jul 2001 17:19:30 +0200 Subject: DEVELOPMENTAL EMBODIED COGNITION - CALL FOR PARTICIPATION Message-ID: <3B472882.82A9941C@ifi.unizh.ch> DEVELOPMENTAL EMBODIED COGNITION - DECO 2001 Workshop in Edinburgh, Scotland, 31 July 2001 *********** CALL FOR PARTICIPATION *********** http://www.cogsci.ed.ac.uk/~deco/ The objective of this workshop is to bring together researchers from cognitive science, psychology, robotics, artificial intelligence, philosophy, and related fields to discuss the role of developmental and embodied views of cognition, and in particular, their mutual relationships.
The ultimate goal of this approach is to understand the emergence of high-level cognition in organisms based on their interactions with their environment over extended periods of time. The workshop will be held at the University of Edinburgh on July 31st 2001, one day before the 23rd Annual Meeting of the Cognitive Science Society. The workshop will consist of invited talks, followed by a poster session with contributed papers. Invited speakers: Mark Johnson (Centre for Brain and Cognitive Development, Birkbeck College, London, UK) Max Lungarella and Rolf Pfeifer (AI-Laboratory, University of Zuerich, Switzerland) Lorenzo Natale (Lira-Lab, University of Genoa, Italy) Linda Smith (Department of Psychology, Indiana University, Bloomington, IN, USA) Michael Thomas (Neurocognitive Development Unit, University College London, UK) Tom Ziemke (Department of Computer Science, University of Skoevde, Sweden) Participation in the workshop is free, but registration is required. Please send email to deco at cogsci.ed.ac.uk to register. Please visit the workshop website at http://www.cogsci.ed.ac.uk/~deco/ for further information. Rolf Pfeifer Gert Westermann Workshop Co-Chair Workshop Co-Chair Artificial Intelligence Laboratory Sony Computer Science Laboratory University of Zurich, Switzerland Paris, France deco at cogsci.ed.ac.uk DECO-2001 is kindly sponsored by James (R) http://www.personaljames.com/ From didier at isr.umd.edu Thu Jul 12 10:25:52 2001 From: didier at isr.umd.edu (Didier A. Depireux) Date: Thu, 12 Jul 2001 10:25:52 -0400 (EDT) Subject: Job announcement for MEG lab Message-ID: Technical Coordinator, MEG Laboratory Cognitive Neuroscience of Language Laboratory, Department of Linguistics University of Maryland College Park The Cognitive Neuroscience of Language Laboratory at the University of Maryland is developing a new state-of-the-art magnetoencephalography (MEG) facility. The MEG facility will perform non-invasive recordings with millisecond resolution in time, and high spatial resolution, afforded by a dense sensor array (160+ channels). The laboratory will be used for research in a number of disciplines, including linguistics, neuroscience, electrical engineering, computer science, and physics. The lab is seeking a full-time Technical Coordinator, starting September 1st, 2001, or as soon as possible thereafter. The Technical Coordinator's responsibilities include: maintaining the MEG laboratory, including stimulus delivery equipment and a number of PC workstations for data acquisition/analysis; supervising and running experiments; developing protocols and manuals, and training new lab users; assisting in preparing experiments and analyzing data. The position provides an exciting opportunity to gain expertise in a cutting-edge cognitive neuroscience laboratory. The position demands energy and initiative, technical aptitude, ability to work with a wide variety of people, and a serious interest in brain function. Experience in any of the following areas is desirable, though not necessary: cognitive science, neuroscience, computer programming, electrophysiology, radiology, electrical engineering, or physics. The position comes with a competitive salary and full benefits. For more information on the Cognitive Neuroscience of Language Laboratory, see http://www.ling.umd.edu/cnl. Inquiries should be directed to Prof. David Poeppel, dpoeppel at deans.umd.edu (301)-405-1016; Department of Linguistics, 1401 Marie Mount Hall, University of Maryland, College Park, MD 20742.
The University of Maryland is an Affirmative Action/Equal Opportunities Title IX employer. Women and minority candidates are especially encouraged to apply. From gbaura at cardiodynamics.com Thu Jul 12 22:15:43 2001 From: gbaura at cardiodynamics.com (Gail Baura) Date: Thu, 12 Jul 2001 19:15:43 -0700 Subject: Research Engineer position: nonlinear models in medical instrumentation Message-ID: <001001c10b41$b864d440$2a00a8c0@CARDIODYNAMICS.COM> Please send out this job posting for a Research Engineer. This researcher will use linear and nonlinear models as a means toward understanding the physiologic mechanisms underlying data acquired from our medical instrumentation. Thank you. CardioDynamics, based in San Diego, is a rapidly growing medical technology and information solutions company committed to fundamentally changing the way cardiac patient monitoring is performed in healthcare. We have the following growth opportunity: Research Engineer The role of this position is to conduct the research and systems engineering behind new product concepts and devices for both CDIC internal products and external partnership agreements. The engineer will conduct all activities for assigned digital signal processing research projects, including analysis of data to determine the physiologic mechanisms underlying hemodynamic parameters. The engineer will also translate research algorithms into efficient realizations by writing specifications for a DSP processor. This position requires a BS degree in an engineering discipline and an MS degree in electrical, biomedical, or software engineering, with specialization in digital signal processing. Demonstrated system identification expertise, such as ARMAX or artificial neural networks, in the MS thesis. Experience in algorithm development using either LabVIEW or Matlab. Experience in obtaining physiologic data for analysis. The ideal candidate would also have 1-5 years of progressive hands-on research and clinical evaluation experience with medical devices or electronic medical instruments. PhDs would be overqualified for this position. In addition to a dynamic environment, we offer a competitive compensation package, including stock options, bonuses, medical, dental, 401(k) with match, 3 wks paid time off, paid holidays and other benefits. Visit our website at www.cdic.com. For consideration, please send resume and cover letter with salary requirements to e-mail address: reseng at cdic.com, or fax to 858/587-8616, or mail to HR, 6175 Nancy Ridge Dr., Ste. 300, San Diego, CA 92121. Equal Opportunity Employer. From b9pafr at uni-jena.de Fri Jul 13 06:03:31 2001 From: b9pafr at uni-jena.de (Frank Pasemann) Date: Fri, 13 Jul 2001 12:03:31 +0200 Subject: Job opening PhD Studentship Message-ID: <3B4EC773.7B047E6A@rz.uni-jena.de> The TheoLab - Research Unit for Structure Dynamics and System Evolution - at the Friedrich-Schiller-University of Jena, Germany - invites PhD candidates with pronounced interests in the field of "Evolved Neurocontrollers for Autonomous Agents" to apply for a PhD Studentship (BAT IIa/2). We are looking for a PhD student to work on the project "Real-time Learning Procedures for Co-operating Robots" which is funded by the DFG (German Research Council) for two years. Research is on the development and implementation of behavior-relevant autonomous learning rules for robots. They will be combined with existing evolutionary strategies for the generation of multi-functional neurocontrollers. The position is to be taken as soon as possible.
Applicants should have experience in computer simulations (C, C++, graphics, Linux). A background in the fields of Dynamical Systems Theory, Neural Networks, Robotics and/or Embodied Cognitive Science is favourable. Experiments will be performed with Khepera robots. Successful candidates may also benefit from the TheoLab's cooperation with the Max-Planck-Institute for Mathematics in the Sciences, Leipzig, the Department of Computer Science, University Leipzig, and the Institute of Theoretical Physics, University Bremen. Applications (CV, two academic referees) should be sent to Prof. Dr. Frank Pasemann TheorieLabor Friedrich-Schiller-Universität Jena Ernst-Abbe-Platz 4, D-07740 Jena, Germany Tel: x49 - 36 41 - 94 95 30 (Sekr.) b9pafr at rz.uni-jena.de, http://www.theorielabor.de ************************************************************** Prof. Dr. Frank Pasemann Tel: x49-3641-949531 TheorieLabor x49-3641-949530 (Sekr.) Friedrich-Schiller-Universität Fax: x49-3641-949532 Ernst-Abbe-Platz 4 frank.pasemann at rz.uni-jena.de D-07740 Jena, Germany http://www.theorielabor.de From wahba at stat.wisc.edu Fri Jul 13 17:21:05 2001 From: wahba at stat.wisc.edu (Grace Wahba) Date: Fri, 13 Jul 2001 16:21:05 -0500 (CDT) Subject: Multicategory Support Vector Machines Message-ID: <200107132121.QAA28132@hera.stat.wisc.edu> The following short paper is available at http://www.stat.wisc.edu/~wahba/trindex.html Multicategory Support Vector Machines (Preliminary Long Abstract) Yoonkyung Lee, Yi Lin and Grace Wahba University of Wisconsin-Madison Statistics Dept, TR 1040 Abstract Support Vector Machines (SVMs) have recently shown great performance in practice as a classification methodology. Even though the SVM implements the optimal classification rule asymptotically in the binary case, the one-versus-rest approach to solving the multicategory case using an SVM is not optimal. We have proposed Multicategory SVMs, which extend the binary SVM to the multicategory case, and encompass the binary SVM as a special case. The Multicategory SVM implements the optimal classification rule as the sample size gets large, overcoming the suboptimality of the conventional one-versus-rest approach. The proposed method deals with the equal misclassification cost and the unequal cost case in a unified way. From mbartlet at san.rr.com Mon Jul 16 19:42:59 2001 From: mbartlet at san.rr.com (Marian Stewart Bartlett) Date: Mon, 16 Jul 2001 16:42:59 -0700 Subject: Face Image Analysis by Unsupervised Learning Message-ID: <3B537C03.D1B0B36A@san.rr.com> I am pleased to announce the following new book: Face Image Analysis by Unsupervised Learning, by Marian Stewart Bartlett. Foreword by Terrence J. Sejnowski. Kluwer International Series on Engineering and Computer Science, V. 612. Boston: Kluwer Academic Publishers, 2001. Please see http://inc.ucsd.edu/~marni for more information. The book can be ordered at http://www.wkap.nl/book.htm/0-7923-7348-0. Book Jacket: Face Image Analysis by Unsupervised Learning explores adaptive approaches to face image analysis. It draws upon principles of unsupervised learning and information theory to adapt processing to the immediate task environment. In contrast to more traditional approaches to image analysis in which relevant structure is determined in advance and extracted using hand-engineered techniques, [this book] explores methods that have roots in biological vision and/or learn about the image structure directly from the image ensemble.
Particular attention is paid to unsupervised learning techniques for encoding the statistical dependencies in the image ensemble. The first part of this volume reviews unsupervised learning, information theory, independent component analysis, and their relation to biological vision. Next, a face image representation using independent component analysis (ICA) is developed, which is an unsupervised learning technique based on optimal information transfer between neurons. The ICA representation is compared to a number of other face representations including eigenfaces and Gabor wavelets on tasks of identity recognition and expression analysis. Finally, methods for learning features that are robust to changes in viewpoint and lighting are presented. These studies provide evidence that encoding input dependencies through unsupervised learning is an effective strategy for face recognition. Face Image Analysis by Unsupervised Learning is suitable as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry. "Marian Bartlett's comparison of ICA with other algorithms on the recognition of facial expressions is perhaps the most thorough analysis we have of the strengths and limits of ICA as a preprocessing stage for pattern recognition." - T.J. Sejnowski, The Salk Institute Table of Contents: http://www.cnl.salk.edu/~marni/contents.html 1. SUMMARY ---------------------------------------------------------------- 2. INTRODUCTION 1. Unsupervised learning in object representations 1. Generative models 2. Redundancy reduction as an organizational principle 3. Information theory 4. Redundancy reduction in the visual system 5. Principal component analysis 6. Hebbian learning 7. Explicit discovery of statistical dependencies 2. Independent component analysis 1. Decorrelation versus independence 2. Information maximization learning rule 3. Relation of sparse coding to independence 3. Unsupervised learning in visual development 1. Learning input dependencies: Biological evidence 2. Models of receptive field development based on correlation sensitive learning mechanisms 4. Learning invariances from temporal dependencies 1. Computational models 2. Temporal association in psychophysics and biology 5. Computational Algorithms for Recognizing Faces in Images ---------------------------------------------------------------- 3. INDEPENDENT COMPONENT REPRESENTATIONS FOR FACE RECOGNITION 1. Introduction 1. Independent component analysis (ICA) 2. Image data 2. Statistically independent basis images 1. Image representation: Architecture 1 2. Implementation: Architecture 1 3. Results: Architecture 1 3. A factorial face code 1. Independence in face space versus pixel space 2. Image representation: Architecture 2 3. Implementation: Architecture 2 4. Results: Architecture 2 4. Examination of the ICA Representations 1. Mutual information 2. Sparseness 5. Combined ICA recognition system 6. Discussion ---------------------------------------------------------------- 4. AUTOMATED FACIAL EXPRESSION ANALYSIS 1. Review of other systems 1. Motion-based approaches 2. Feature-based approaches 3. Model-based techniques 4. Holistic analysis 2. What is needed 3. The Facial Action Coding System (FACS) 4. Detection of deceit 5. Overview of approach ---------------------------------------------------------------- 5. IMAGE REPRESENTATIONS FOR FACIAL EXPRESSION ANALYSIS: COMPARATIVE STUDY I 1. Image database 2. Image analysis methods 1. Holistic spatial analysis 2. Feature measurement 3.
Optic flow 4. Human subjects 3. Results 1. Hybrid system 2. Error analysis 4. Discussion ---------------------------------------------------------------- 6. IMAGE REPRESENTATIONS FOR FACIAL EXPRESSION ANALYSIS: COMPARATIVE STUDY II 1. Introduction 2. Image database 3. Optic flow analysis 1. Local velocity extraction 2. Local smoothing 3. Classification procedure 4. Holistic analysis 1. Principal component analysis: ``EigenActions'' 2. Local feature analysis (LFA) 3. ``FisherActions'' 4. Independent component analysis 5. Local representations 1. Local PCA 2. Gabor wavelet representation 3. PCA jets 6. Human subjects 7. Discussion 8. Conclusions ---------------------------------------------------------------- 7. LEARNING VIEWPOINT INVARIANT REPRESENTATIONS OF FACES 1. Introduction 2. Simulation 1. Model architecture 2. Competitive Hebbian learning of temporal relations 3. Temporal association in an attractor network 4. Simulation results 3. Discussion ---------------------------------------------------------------- 8. CONCLUSIONS AND FUTURE DIRECTIONS References Index ---------------------------------------------------------------- Foreword by Terrence J. Sejnowski Computers are good at many things that we are not good at, like sorting a long list of numbers and calculating the trajectory of a rocket, but they are not at all good at things that we do easily and without much thought, like seeing and hearing. In the early days of computers, it was not obvious that vision was a difficult problem. Today, despite great advances in speed, computers are still limited in what they can pick out from a complex scene and recognize. Some progress has been made, particularly in the area of face processing, which is the subject of this monograph. Faces are dynamic objects that change shape rapidly, on the time scale of seconds during changes of expression, and more slowly over time as we age. We use faces to identify individuals, and we rely on facial expressions to assess feelings and get feedback on how well we are communicating. It is disconcerting to talk with someone whose face is a mask. If we want computers to communicate with us, they will have to learn how to make and assess facial expressions. A method for automating the analysis of facial expressions would be useful in many psychological and psychiatric studies as well as have great practical benefit in business and forensics. The research in this monograph arose through a collaboration with Paul Ekman, which began 10 years ago. Dr. Beatrice Golomb, then a postdoctoral fellow in my laboratory, had developed a neural network called Sexnet, which could distinguish the sex of a person from a photograph of their face (Golomb et al. 1991). This is a difficult problem since no single feature can be used to reliably make this judgment, but humans are quite good at it. This project was the starting point for a major research effort, funded by the National Science Foundation, to automate the Facial Action Coding System (FACS), developed by Ekman and Friesen (1978). Joseph Hager made a major contribution in the early stages of this research by obtaining a high-quality set of videos of experts who could produce each facial action. Without such a large dataset of labeled images of each action it would not have been possible to use neural network learning algorithms. In this monograph, Dr. Marian Stewart Bartlett presents the results of her doctoral research into automating the analysis of facial expressions.
When she began her research, one of the methods that she used to study the FACS dataset, a new algorithm for Independent Component Analysis (ICA), had recently been developed, so she was pioneering not only facial analysis of expressions, but also the initial exploration of ICA. Her comparison of ICA with other algorithms on the recognition of facial expressions is perhaps the most thorough analysis we have of the strengths and limits ICA. Much of human learning is unsupervised; that is, without the benefit of an explicit teacher. The goal of unsupervised learning is to discover the underlying probability distributions of sensory inputs (Hinton & Sejnowski, 1999). Or as Yogi Berra once said, "You can observe a lot just by watchin'." The identification of an object in an image nearly always depends on the physical causes of the image rather than the pixel intensities. Unsupervised learning can be used to solve the difficult problem of extracting the underlying causes, and decisions about responses can be left to a supervised learning algorithm that takes the underlying causes rather than the raw sensory data as its inputs. Several types of input representation are compared here on the problem of discriminating between facial actions. Perhaps the most intriguing result is that two different input representations, Gabor filters and a version of ICA, both gave excellent results that were roughly comparable with trained humans. The responses of simple cells in the first stage of processing in the visual cortex of primates are similar to those of Gabor filters, which form a roughly statistically independent set of basis vectors over a wide range of natural images (Bell & Sejnowski, 1997). The disadvantage of Gabor filters from an image processing perspective is that they are computationally intensive. The ICA filters, in contrast, are much more computationally efficient, since they were optimized for faces. The disadvantage is that they are too specialized a basis set and could not be used for other problems in visual pattern discrimination. One of the reasons why facial analysis is such a difficult problem in visual pattern recognition is the great variability in the images of faces. Lighting conditions may vary greatly and the size and orientation of the face make the problem even more challenging. The differences between the same face under these different conditions are much greater than the differences between the faces of different individuals. Dr. Bartlett takes up this challenge in Chapter 7 and shows that learning algorithms may also be used to help overcome some of these difficulties. The results reported here form the foundation for future studies on face analysis, and the same methodology can be applied toward other problems in visual recognition. Although there may be something special about faces, we may have learned a more general lesson about the problem of discriminating between similar complex shapes: A few good filters are all you need, but each class of object may need a quite different set for optimal discrimination. -- Marian Stewart Bartlett, Ph.D. 
marni at salk.edu Institute for Neural Computation, 0523 http://inc.ucsd.edu/~marni University of California, San Diego phone: (858) 534-7368 La Jolla, CA 92093-0523 fax: (858) 534-2014 From edamiani at crema.unimi.it Sat Jul 14 13:51:17 2001 From: edamiani at crema.unimi.it (ernesto damiani) Date: Sat, 14 Jul 2001 19:51:17 +0200 Subject: Neuro-Fuzzy Applications Track - 17th ACM Symposium on Applied Computing (SAC 2002) Message-ID: <009801c10c8d$95d2d970$79e91d97@PIACENTI8Y2LPE> Call for Papers Neuro-Fuzzy Applications Track 17th ACM Symposium on Applied Computing (SAC 2002) March 10-14, 2002 Madrid, Spain SAC 2002 For the past fifteen years, the ACM Symposium on Applied Computing has been a primary forum for applied computer scientists, computer engineers, software engineers, and application developers from around the world to interact and present their work. SAC 2002 is sponsored by the ACM Special Interest Group on Applied Computing (SIGAPP). SAC 2002 is presented in cooperation with other special interest groups. SAC 2002 will be hosted by the Universidad Carlos III De Madrid, Spain, from March 10 - 14, 2002. For more info, see http://www.acm.org/conferences/sac/sac2002 Neuro-Fuzzy Application Track Recently, a tide of new applications is being fostered by the necessity of dealing with imprecision and vagueness in the context of a new generation of complex systems, such as telecommunication networks, software systems, data processing systems and the like. A common feature of these new systems is the fact that traditional fuzzy techniques (e.g. rule-based systems) are becoming fully integrated with neural network processing in the general framework of a soft computing approach to give approximate solutions to complex problems that proved too difficult to attack with other techniques. The Neuro-Fuzzy Applications Track, without neglecting traditional fuzzy applications, will focus on this new generation of neuro-fuzzy systems, both from the point of view of the computer scientist and (perhaps more importantly) from the point of view of the expert in the application field involved. Topics we intend to cover include (but are not limited to): IP/ATM, Mobile, and Active Networks Neural Hardware Systems Neural Control Neuro-Fuzzy Processing of Multimedia Data Neuro-Fuzzy Systems in Molecular Computing Soft-Computing Techniques for Systems Design Flexible Query and Information Retrieval Systems Data Mining Computer Vision Fuzzy Hardware Systems Fuzzy Control Paper submission Authors are invited to contribute original papers in all areas of soft computing and fuzzy applications development for the technical sessions, as well as demos of new, innovative systems. Papers must be submitted to one of the Track Chairs in 3 copies. In order to facilitate blind review, submitted papers should carry the authors' names and affiliations on a separate sheet. Authors must follow the Symposium's general Submission Guidelines. For electronic submissions, please contact the Track Chair in advance. All papers will be blindly reviewed for originality and accuracy. Conference Proceedings and Journal Publication Accepted papers in all categories will be published in SAC 2002 Proceedings. Expanded versions of selected papers will be considered for publication in the ACM SIGAPP Applied Computing Review (SIGAPP ACR). A special section with the best papers from SAC 2002 Fuzzy Track is planned to appear in Springer's Soft Computing international journal.
Neuro-Fuzzy Applications Track Chairs Ernesto Damiani Università di Milano - Polo di Crema Via Bramante 65 26013 Crema, Italy e-mail: edamiani at crema.unimi.it Phone: +39-0373-898240 FAX: +39-0373-898253 Athanasios Vasilakos Institute of Computer Science (ICS) Foundation for Research and Technology-Hellas (FORTH) P.O. Box 1385 Heraklion, Crete, Greece e-mail: vasilako at ath.forthnet.gr Phone: +3-081-394400 FAX: +3-081-394408 IMPORTANT DATES Paper Submission September 1st, 2001 Notification of Acceptance/Rejection November 1st, 2001 Camera-Ready Copy December 1st, 2001 From Ajith.Abraham at infotech.monash.edu.au Mon Jul 16 05:19:58 2001 From: Ajith.Abraham at infotech.monash.edu.au (Ajith Abraham) Date: Mon, 16 Jul 2001 19:19:58 +1000 Subject: HIS'01 - Call for papers Message-ID: <5.0.2.1.2.20010716191847.00a52d10@mail1.monash.edu.au> **************************************************************************** Your help with circulating this announcement locally would be very much appreciated. We apologise if you receive multiple copies of this message. **************************************************************************** Dear Colleagues, We have organised an exciting event: HIS'2001: International Workshop on Hybrid Intelligent Systems in conjunction with The 14th Australian Joint Conference on Artificial Intelligence (AI'01). Venue: Adelaide, South Australia Date: 11-12 December 2001 Workshop URL: http://his.hybridsystem.com (Technically co-sponsored by The World Federation of Soft Computing) HIS'01 is an International Workshop that brings together researchers, developers, practitioners, and users of neural networks, fuzzy inference systems, evolutionary algorithms and conventional techniques. The aim of HIS'01 is to serve as a forum to present current and future work as well as to exchange research ideas in this field. HIS'01 invites authors to submit their original and unpublished work that demonstrates current research using hybrid computing techniques and their applications in science, technology, business and commerce. Topics of interest include, but are not limited to, applications/techniques using the following: * Machine learning techniques (supervised/unsupervised/ reinforcement learning) * Artificial neural network and evolutionary algorithms * Artificial neural network optimization using global optimization techniques * Neural networks and fuzzy inference systems * Fuzzy clustering algorithms optimized using evolutionary algorithms * Evolutionary computation (genetic algorithms, genetic programming, evolution strategies, grammatical evolution etc) * Hybrid optimization techniques (simulated annealing, tabu search, GRASP etc.) * Hybrid computing using neural networks-fuzzy systems- evolutionary algorithms * Hybrid of soft computing and hard computing techniques * Models using inductive logic programming, decomposition methods, grammatical inference, case-based reasoning etc. * Other intelligent techniques (support vector machines, rough sets, Bayesian networks, probabilistic reasoning, minimum message length etc) ************************************************************* Paper Submission ************************************************************* We invite you to submit a full paper of 20 pages (maximum limit) for the workshop presentation. Please follow the IOS Press guidelines for more information on submission. Submission implies the willingness of at least one of the authors to register and present the paper.
All full papers are to be submitted in PDF, PostScript or MS Word format electronically to: hybrid at softcomputing.net Hard copies should be sent only if electronic submission is not possible. All papers will be peer reviewed by two independent referees of the international program committee of HIS'01. All accepted papers will be published in the proceedings of the Workshop by IOS Press, Netherlands. *********************************************************** Important Dates *********************************************************** Submission deadline: September 07, 2001 Notification of acceptance: October 01, 2001 Camera-ready papers and pre-registration due: 15 October '01 ************************************************************ Workshop Chairs ************************************************************ Ajith Abraham, School of Computing and Information Technology Monash University, Australia Phone: +61 3 990 26778, Fax: +61 3 990 26879 Email: ajith.abraham at ieee.org Mario Köppen Department of Pattern Recognition Fraunhofer IPK-Berlin, Pascalstr. 8-9, 10587 Berlin, Germany Phone: +49 (0)30 39 006-200, Fax: +49 (0)30 39 175-17 Email: mario.koeppen at ipk.fhg.de ******************************************************************** International Technical Committee Members Honorary Chair: Lakhmi Jain, University of South Australia, Australia ******************************************************************** Baikunth Nath, Monash University, Australia Shunichi Amari, Riken Brain Science Institute, Japan Frank Hoffmann, Royal Institute of Technology, Sweden Saratchandran P, Nanyang Technological University, Singapore José Mira, Universidad Nacional de Educ. a Distancia, Spain Sami Khuri, San Jose University, USA Dan Steinberg, Salford Systems Inc, USA Janusz Kacprzyk, Polish Academy of Sciences, Poland Venkatesan Muthukumar, University of Nevada, USA Evgenia Dimitriadou, Technische Universität Wien, Austria Kaori Yoshida, Kyushu Institute of Technology, Japan Mario Köppen, Fraunhofer IPK-Berlin, Germany Janos Abonyi, University of Veszprem, Hungary Ajith Abraham, Monash University, Australia José Manuel Benítez, University of Granada, Spain Vijayan Asari, Old Dominion University, USA Xin Yao, University of Birmingham, UK Joshua Singer, Stanford University, USA Morshed Chowdhury, Deakin University, Australia Dharmendra Sharma, University of Canberra, Australia Eugene Kerckhoffs, Delft University of Tech., Netherlands Bret Lapin, SAIC Inc, San Diego, USA Rajan Alex, Western Texas A & M University, USA Sankar K Pal, Indian Statistical Institute, India Javier Ruiz-del-Solar, Universidad de Chile, Chile Aureli Soria-Frisch, Fraunhofer IPK-Berlin, Germany Pavel Osmera, Brno University of Tech., Czech Republic Alberto Ochoa, ICIMAF, Cuba Xiao Zhi Gao, Helsinki University of Technology, Finland. Maumita Bhattacharya, Monash University, Australia P J Costa Branco, Instituto Superior Technico, Portugal Vasant Honavar, Iowa State University, USA ********************************************************************** From harnad at coglit.ecs.soton.ac.uk Tue Jul 17 12:00:41 2001 From: harnad at coglit.ecs.soton.ac.uk (Stevan Harnad) Date: Tue, 17 Jul 2001 17:00:41 +0100 (BST) Subject: Psycoloquy 1-30 2001: Calls for Commentators Message-ID: Below are the half-year contents of Psycoloquy for 2001. Please note that the full articles themselves will no longer be posted to subscribers and lists, just the summary contents, with the URLs where they can be retrieved.
Note that below there are a number of target articles on which Open Peer Commentary is now invited: (1) 6 related target articles on Nicotine Addiction: Balfour, Le Houezec, Oscarson, Sivilotti, Smith& Sachse, Wonnacott http://www.cogsci.soton.ac.uk/cgi/psyc/ptopic?topic=nicotine-addiction (2) 4 independent target articles, all inviting commentary: Navon on Mirror Reversal http://www.cogsci.soton.ac.uk/psyc-bin/newpsy?12.017 Kramer & Moore on Family Therapy http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.024 Sherman on Bipolar Disorder http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.028 Overgaard on Consciousness http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.029 (3) 5 book Precis, all inviting Multiple Book Review: Miller on the Mating Mind http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.008 Ben-Ze'ev on Emotion http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.007 Bolton & Hill on Mental Disorder http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.018 Zachar on Biological Psychiatry http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.023 Praetoriuus on Cognition/Action http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.027 (4) 15 ongoing commentaries and responses on current Psycoloquy target articles: Social-Bias, Reduced-Wason-Task, Self-Consciousness, Electronic-Journals, Brain-Intelligence, Autonomous Brain, Stroop-Differences, Lashley-Hebb, Bell-Curve. ----------------------------------------------------------------------- TABLE OF CONTENTS: PSYCOLOQUY 2001 January - July: Balfour, D. (2001), The Role of Mesolimbic Dopamine in Nicotine Dependence. Psycoloquy 12(001) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.001 Le Houezec, J. (2001), Non-Dopaminergic Pathways in Nicotine Dependence. Psycoloquy 12 (002) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.002 Oscarson, M. (2001), Nicotine Metabolism by the Polymorphic Cytochrome P450 2A6 (CYP2A6) Enzyme: Implications for Interindividual Differences in Smoking Behaviour. Psycoloquy 12 (003) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.003 Sivilotti, L. (2001), Nicotinic Receptors: Molecular Issues. Psycoloquy 12 (004) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.004 Smith, G. & Sachse, C. (2001), A Role for CYP2D6 in Nicotine Metabolism? Psycoloquy 12 (005) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.005 Wonnacott, S. (2001), Nicotinic Receptors in Relation to Nicotine Addiction. Psycoloquy 12 (006) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.006 Ben-Ze'ev, A. (2001), The Subtlety of Emotions. Psycoloquy 12 (007) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.007 Miller, G. F. (2001), The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature. Psycoloquy 12 (008) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.008 Krueger, J. (2001), Social Bias Engulfs the Field Psycoloquy 12 (009) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.009 Margolis, H. (2001), More On Modus Tollens and the Wason Task Psycoloquy 12 (010) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.010 Newen, A. (2001), Kinds of Self-Consciousness Psycoloquy 12 (011) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.011 Turner, R. (2001), An End to Great Publishing Myths Psycoloquy 12 (012) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.012 Storfer, M. D. (2001), The Parallel Increase in Brain Size, Intelligence, and Myopia. Psycoloquy 12 (013) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.013 Storfer, M. D. (2001), Interrelating Population Trends on Brain Size, Intelligence and Myopia. 
Psycoloquy 12 (014) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.014 Storfer, M. D. (2001), Brain and Eye Size, Myopia, and IQ. Psycoloquy 12 (015) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.015 Milner, P.M. (2001), Stimulus Equivalence, Attention and the Self: A Punless Reply to Gellatly's Smell Assemblies. Psycoloquy 12 (016) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.016 Navon, D. (2001), The Puzzle of Mirror Reversal: A View From Clockland. Psycoloquy 12 (017) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.017 Bolton, D. & Hill, J. (2001), Mind, Meaning & Mental Disorder: The Nature of Causal Explanation in Psychology & Psychiatry. Psycoloquy 12 (018) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.018 Meyer, J. (2001), Scientific Journals by and for Scientists. Psycoloquy 12 (019) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.019 Hutto, D. D. (2001), Syntax Before Semantics. Structure Before Content. Psycoloquy 12 (020) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.020 Mills, M. E. (2001), Authors of the World Unite: Liberating Academic Content From PublisherS' Restrictions. Psycoloquy 12 (021) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.021 Oullier, O. (2001), Does Scientific Publication Need A Peer Consensus? Psycoloquy 12 (022) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.022 Zachar, P. (2001), Psychological Concepts and Biological Psychiatry: A Philosophical Analysis. Psycoloquy 12 (023) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.023 Kramer, D. & Moore, M. (2001), Gender Roles, Romantic Fiction and Family Therapy. Psycoloquy 12 (024) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.024 Koch, C. (2001) Stroop Interference and Working Memory. Psycoloquy 12 (025) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.025 Abeles, M. (2001), Founders of Neuropsychology - Who is Ignored?. Psycoloquy 12 (026) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.026 Praetorius, N. (2001), Principles of Cognition, Language and Action: Essays on the Foundations of a Science of Psychology. Psycoloquy 12 (027) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.027 Sherman, J. A. (2001), Evolutionary Origin of Bipolar Disorder (EOBD). Psycoloquy 12 (028) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.028 Overgaard, M. (2001), The Role of Phenomenological Reports in Experiments on Consciousness. Psycoloquy 12 (029) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.029 Reifman, A. (2001), Heritability, Economic Inequality, and the Time Course of the "Bell Curve" Debate. Psycoloquy 12 (030) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.030 From andre at ee.usyd.edu.au Tue Jul 17 18:53:05 2001 From: andre at ee.usyd.edu.au (Andre van Schaik) Date: Tue, 17 Jul 2001 16:53:05 -0600 Subject: Research assistant/ PhD studentship at the University of Sydney Message-ID: <4.3.2.7.2.20010717164959.0176af08@cassius.ee.usyd.edu.au> Computer Engineering Laboratory School of Electrical and Information Engineering Research Assistant Analogue Integrated Circuit Design The appointee will join a team working on a project aiming at the development of biologically inspired analogue VLSI circuits for the development of smart sensors. Applicants with a bachelor or higher degree in electrical engineering or computer science with experience in circuit design, preferrably analogue circuit design in the area of neuromorphic engineering or neural networks, are invited. 
The appointment will be for an initial period of one year and renewable for up to three years, subject to satisfactory progress and funding. If applicable, the appointee can enrol for a higher degree in an area of the project. Salary: (HEO level 5) 37,000-40,000 AU$ per year depending on experience. Duty statement: The research assistant will: - design, integrate and test several analogue VLSI circuits - develop test set-ups for these circuits - simulate in MATLAB algorithms and high level models for the circuits - work independently REPORTING: Reports to Dr. A. van Schaik QUALIFICATIONS: B.E. or B. Computer Science or higher. SKILLS: - design, simulation, layout and testing of analogue VLSI circuits - MATLAB programming - knowledge of Unix and Windows - communication with others - writing reports and technical papers - work independently From clinton at compneuro.umn.edu Tue Jul 17 16:34:51 2001 From: clinton at compneuro.umn.edu (Kathleen Clinton) Date: Tue, 17 Jul 2001 15:34:51 -0500 Subject: NEURON Workshop Announcement Message-ID: <3B54A16B.4DD0BDA2@compneuro.umn.edu> ****************************** NEURON Workshop Announcement ****************************** Michael Hines and Ted Carnevale of Yale University will conduct a three to five day workshop on NEURON, a computer code that simulates neural systems. The workshop will be held from August 20-24, 2001 at the University of Minnesota Supercomputing Institute in Minneapolis, Minnesota. Registration is open to students and researchers from academic, corporate, and industrial organizations. Space is still available, and registrations will be accepted on a first-come, first-serve basis. **Topics and Format** Participants may attend the workshop for three or five days. The first three days cover material necessary for the most common applications in neuroscience research and education. The fourth and fifth days deal with advanced topics of users whose projects may require problem-specific customizations. IBM will provide computers with Windows and Linux platforms. Days 1 - 3 "Fundamentals of Using the NEURON Simulation Environment" The first three days will cover the material that is required for informed use of the NEURON simulation environment. The emphasis will be on applying the graphical interface, which enables maximum productivity and conceptual control over models while at the same time reducing or eliminating the need to write code. Participants will be building their own models from the start of the course. By the end of the third day they will be well prepared to use NEURON on their own to explore a wide range of neural phenomena. Topics will include: Integration methods --accuracy, stability, and computational efficiency --fixed order, fixed timestep integration --global and local variable order, variable timestep integration Strategies for increasing computational efficiency. Using NEURON's graphical interface to --construct models of individual neurons with architectures that range from the simplest spherical cell to detailed models based on quantitative morphometric data (the CellBuilder). --construct models that combine neurons with electronic instrumentation (i.e. capacitors, resistors, amplifiers, current sources and voltage sources) (the Linear Circuit Builder). --construct network models that include artificial neurons, model cells with anatomical and biophysical properties, and hybrid nets with both kinds of cells (the Network Builder). --control simulations. 
--display simulation results as functions of time and space. --analyze simulation results. --analyze the electrotonic properties of neurons. Adding new biophysical mechanisms. Uses of the Vector class such as --synthesizing custom stimuli --analyzing experimental data --recording and analyzing simulation results Managing modeling projects. Days 4 and 5 "Beyond the GUI" The fourth and fifth days deal with advanced topics for users whose projects may require problem-specific customizations. Topics will include: Advanced use of the CellBuilder, Network Builder, and Linear Circuit Builder. When and how to modify model specification, initialization, and NEURON's main computational loop. Exploiting special features of the Network Connection class for efficient implementation of use-dependent synaptic plasticity. Using NEURON's tools for optimizing models. Parallelizing computations. Using new features of the extracellular mechanism for --extracellular stimulation and recording --implementation of gap junctions and ephaptic interactions Developing new GUI tools. **Registration** For academic or government employees the registration fee is $155 for the first three days and $245 for the full five days. These fees are $310 and $490, respectively, for corporate or industrial participants. Registration forms can be obtained at http://www.compneuro.umn.edu/NEURONregistration.html or from the workshop coordinator, Kathleen Clinton, at clinton at compneuro.umn.edu or (612) 625-8424. **Lodging** Out-of-town participants may stay at the Holiday Inn Metrodome in Minneapolis. It is within walking distance of the Supercomputing Institute. Participants are responsible for making their own hotel reservations. When making reservations, participants should state that they are attending the NEURON Workshop. A small block of rooms is available until July 28, 2001. Reservations can be arranged by calling (800) 448-3663 or (612) 333-4646. From sandro at northwestern.edu Tue Jul 17 19:11:33 2001 From: sandro at northwestern.edu (Sandro Mussa-Ivaldi) Date: Tue, 17 Jul 2001 18:11:33 -0500 Subject: Postdoctoral position in Neural Engineering Message-ID: <3B54C625.BE9B3B3B@northwestern.edu> Identification, learning and control of tracking behaviors in a neuro-robotic system A postdoctoral fellow position is available to investigate information processing within a hybrid neuro-robotic system. The system is composed of a small mobile robot and the brain of a sea lamprey. The biological and the artificial elements exchange electrical signals through a computer/electronic interface. Applications are encouraged from people with a strong background in Engineering and/or Physics and some research experience in Neural Computation. The research has both an experimental and a theoretical component. Some experience in techniques of electrophysiology is desirable. While a background in experimental neurophysiology is not a prerequisite, the candidate should be willing to become acquainted with these techniques and to carry out experimental work in conjunction with theoretical modeling. The Laboratory in which this research is carried out is affiliated with the Departments of Physiology and of Biomedical Engineering of Northwestern University and with the Rehabilitation Institute of Chicago.
Additional information about the laboratory and its research can be found at http://manip.smpp.nwu.edu If you are interested, send a CV and the names of three references via email to Sandro Mussa-Ivaldi (sandro at northwestern.edu) Northwestern is an equal opportunity, affirmative action educator and employer. From glanzman at helix.nih.gov Wed Jul 18 13:32:09 2001 From: glanzman at helix.nih.gov (Dennis Glanzman) Date: Wed, 18 Jul 2001 13:32:09 -0400 Subject: DYNAMICAL NEUROSCIENCE IX: Timing, Persistence and Feedback Control Message-ID: <4.3.2.7.2.20010718132246.00af3100@helix.nih.gov> Satellite Symposium at the Society for Neuroscience Annual Meeting DYNAMICAL NEUROSCIENCE IX: Timing, Persistence and Feedback Control San Diego Convention Center San Diego, California Friday and Saturday, November 9-10, 2001 FEEDBACK is an inherent feature of all systems that adapt to their internal and external environments. In this year's meeting we will explore the convergence of theoretical work and experimental data on neuronal computations that highlight the feedback requirement for the systematic operation of the nervous system. This will be covered at the level of both transient and steady-state phenomena. Invited speakers will discuss how feedback (along with other control mechanisms) regulates the temporal processing of auditory and somatosensory information, plasticity, learning, and the balance of feedback controls that underlie the formation of receptive fields. Further topics will include the control of neuronal dynamics involved with sensorimotor tasks, such as the stabilization of eye and head position, and the temporal pattern of exploratory whisking in the rat. The work presented at this year's meeting will center on the theme of how abstract analytical models can be used in focusing the direction of new experiments. Organizers: Dennis Glanzman, NIMH, NIH; David Kleinfeld, UCSD; Sebastian Seung, MIT; and Misha Tsodyks, Weizmann Institute. Invited Speakers: Ehud Ahissar, Margaret Livingstone, Cynthia Moss, Israel Nelken, Alexa Riehle, Robert Shapley, Patricia Sharp, Haim Sompolinsky, David Tank, Ofer Tchernichovski, and Kechen Zhang. Keynote Address: Bard Ermentrout Register for the Symposium https://secure.laser.net/cmpinc_net/neuro/register.html Submit a Poster http://www.cmpinc.net/dynamical/poster.html Meeting Agenda (pdf format) http://www.nimh.nih.gov/diva/sn2001/agenda.pdf For Further Information: about registration and other logistics, please contact Matt Burdetsky, Capital Meeting Planning, phone 703-536-4993, fax 703-536-4991, E-mail: matt at cmpinc.net. For information about the technical content of the meeting, please contact Dr. Dennis L. Glanzman, National Institute of Mental Health, NIH. Telephone 301-443-1576, Fax 301-443-4822, E-mail: glanzman at helix.nih.gov. From yann at research.att.com Wed Jul 18 22:33:00 2001 From: yann at research.att.com (Yann LeCun) Date: Wed, 18 Jul 2001 22:33:00 -0400 Subject: NIPS volume 0-13 available at "NIPS Online" Message-ID: <200107190232.WAA28568@surfcity.research.att.com> Dear Colleagues: Volume 0 and Volume 13 of the NIPS proceedings have just been added to the NIPS Online collection. The NIPS Online web site at http://nips.djvuzone.org offers free access to the full collection of NIPS Proceedings with full-text search capability. Our thanks go to Barak Pearlmutter whose skilled negotiations allowed us to obtain the rights to publish volume 0.
-- Yann LeCun [apologies if you receive multiple copies of this message] ____________________________________________________________________ Yann LeCun Head, Image Processing Research Dept. AT&T Labs - Research tel:+1(732)420-9210 fax:(732)368-9454 200 Laurel Avenue, Room A5-4E34 yann at research.att.com Middletown, NJ 07748, USA. http://www.research.att.com/~yann From steve at cns.bu.edu Thu Jul 19 22:28:45 2001 From: steve at cns.bu.edu (Stephen Grossberg) Date: Thu, 19 Jul 2001 22:28:45 -0400 Subject: motion integration and segmentation within and across apertures Message-ID: The following article is now available at http://www.cns.bu.edu/Profiles/Grossberg in HTML, PDF, and Gzipped Postscript. Grossberg, S., Mingolla, E., and Viswanathan, L. Neural Dynamics of Motion Integration and Segmentation Within and Across Apertures Vision Research, in press. Abstract A neural model is developed of how motion integration and segmentation processes, both within and across apertures, compute global motion percepts. Figure-ground properties, such as occlusion, influence which motion signals determine the percept. For visible apertures, a line's terminators do not specify true line motion. For invisible apertures, a line's intrinsic terminators create veridical feature tracking signals. Sparse feature tracking signals can be amplified before they propagate across position and are integrated with ambiguous motion signals within line interiors. This integration process determines the global percept. It is the result of several processing stages: Directional transient cells respond to image transients and input to a directional short-range filter that selectively boosts feature tracking signals with the help of competitive signals. Then a long-range filter inputs to directional cells that pool signals over multiple orientations, opposite contrast polarities, and depths. This all happens no later than cortical area MT. The directional cells activate a directional grouping network, proposed to occur within cortical area MST, within which directions compete to determine a local winner. Enhanced feature tracking signals typically win over ambiguous motion signals. Model MST cells which encode the winning direction feed back to model MT cells, where they boost directionally consistent cell activities and suppress inconsistent activities over the spatial region to which they project. This feedback accomplishes directional and depthful motion capture within that region. Model simulations include the barberpole illusion, motion capture, the spotted barberpole, the triple barberpole, the occluded translating square illusion, motion transparency and the chopsticks illusion. Qualitative explanations of illusory contours from translating terminators and plaid adaptation are also given. 
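As a rough illustration of the grouping-and-capture idea described in the abstract above (this is not the model from the paper), the following toy Python/NumPy sketch pools motion signals over a region in an "MST-like" grouping stage, lets directions compete for a winner, and feeds the winner back multiplicatively to the "MT-like" signals; all array names, sizes, and gain values are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)
n_pos, n_dir = 20, 8                                   # spatial positions x candidate directions
mt = 0.2 * rng.random((n_pos, n_dir))                  # ambiguous "MT-like" signals in line interiors
mt[0, 3] += 1.0                                        # one sparse feature-tracking signal (e.g. a line end)

for _ in range(10):
    grouping = mt.sum(axis=0)                          # pool over the region ("MST-like" grouping stage)
    winner = int(np.argmax(grouping))                  # directions compete; a local winner emerges
    feedback = np.full(n_dir, 0.8)                     # feedback attenuates inconsistent directions...
    feedback[winner] = 1.5                             # ...and boosts the winning direction
    mt = mt * feedback                                 # feedback acts on the "MT-like" stage
    mt = mt / (mt.sum(axis=1, keepdims=True) + 1e-12)  # keep activities bounded (normalization)

print("captured direction:", int(np.argmax(mt.sum(axis=0))))

With the seeded feature-tracking signal in direction 3, the pooled activity converges to that direction across the whole region, a caricature of the directional and depthful "motion capture" behavior the abstract describes.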
From rsun at cecs.missouri.edu Thu Jul 19 15:35:17 2001 From: rsun at cecs.missouri.edu (rsun@cecs.missouri.edu) Date: Thu, 19 Jul 2001 14:35:17 -0500 Subject: Two recent issues of Cognitive Systems Research Message-ID: <200107191935.f6JJZHo19154@ari1.cecs.missouri.edu> The TOC of the two recent issues of Cognitive Systems Research: --------------------------------------------------------- Table of Contents for Cognitive Systems Research Volume 2, Issue 1, April 2001 Ron Sun Individual action and collective function: From sociology to multi-agent learning 1-3 [Abstract] [Full text] (PDF 42.3 Kb) Cristiano Castelfranchi The theory of social functions: challenges for computational social science and multi-agent learning 5-38 [Abstract] [Full text] (PDF 425.2 Kb) Tom R. Burns and Anna Gomolińska Socio-cognitive mechanisms of belief change - Applications of generalized game theory to belief revision, social fabrication, and self-fulfilling prophesy 39-54 [Abstract] [Full text] (PDF 139.5 Kb) Michael L. Littman Value-function reinforcement learning in Markov games [Abstract] [Full text] (PDF 108 Kb) 55-66 Junling Hu and Michael P. Wellman Learning about other agents in a dynamic multiagent system [Abstract] [Full text] (PDF 515.3 Kb) 67-79 Maja J. Mataric Learning in behavior-based multi-robot systems: policies, models, and other agents 81-93 [Abstract] [Full text] (PDF 295.6 Kb) Table of Contents for Cognitive Systems Research Volume 2, Issue 2, May 2001 Rosaria Conte Emergent (info)institutions [Abstract] [Full text] (PDF 103.1 Kb) 97-110 L. Andrew Coward The recommendation architecture: lessons from large-scale electronic systems applied to cognition 111-156 [Abstract] [Full text] (PDF 858 Kb) Agnès Guillot and Jean-Arcady Meyer The animat contribution to cognitive systems research [Abstract] [Full text] (PDF 67.2 Kb) 157-165 Sheila Garfield Review of Speech and language processing [Abstract] [Full text] (PDF 55.2 Kb) 167-172 * Full text files can be viewed and printed using the Adobe Acrobat Reader. Download from the Web site: http://www.cecs.missouri.edu/~rsun/journal.html http://www.elsevier.nl/locate/cogsys http://www.elsevier.com/locate/cogsys Copyright 2001, Elsevier Science, All rights reserved. =========================================================================== Prof. Ron Sun, CECS Department, University of Missouri-Columbia, 201 Engineering Building West, Columbia, MO 65211-2060. phone: (573) 884-7662, fax: (573) 882 8318, email: rsun at cecs.missouri.edu http://www.cecs.missouri.edu/~rsun http://www.cecs.missouri.edu/~rsun/journal.html http://www.elsevier.com/locate/cogsys =========================================================================== From zemel at cs.toronto.edu Fri Jul 20 16:13:05 2001 From: zemel at cs.toronto.edu (Richard Zemel) Date: Fri, 20 Jul 2001 16:13:05 -0400 Subject: NIPS*2001 web-site back online Message-ID: <01Jul20.161311edt.453165-19931@jane.cs.toronto.edu> The NIPS*2001 web site (http://www.cs.cmu.edu/Web/Groups/NIPS) has been down for a few days but the problem has now been fixed. Apologies and thanks to the people who notified us about this. On the positive side: we've had nearly 30 percent more submissions than last year, so it should be an even better conference than ever. ========================================== Neural Information Processing Systems Natural and Synthetic Monday, Dec. 3 -- Saturday, Dec.
8, 2001 Vancouver, British Columbia, Canada Whistler Ski Resort ========================================== Invited speakers: Barbara Finlay -- How brains evolve, and the consequences for computation Alison Gopnik -- Babies and Bayes-nets: Causal inference and theory-formation in children, chimps, scientists and computers Jon M. Kleinberg -- Decentralized network algorithms: Small-world phenomena and the dynamics of information Tom Knight -- TBA Judea Pearl -- Causal inference as an exercise in computational learning Shihab Shamma -- Common principles in auditory and visual processing Tutorials: Luc Devroye -- Nonparametric density estimation: VC to the rescue Daphne Koller & Nir Friedman -- Learning Bayesian networks from data Shawn Lockery -- Chemotaxis: Gradient ascent by simple living organisms and their neural networks. Christopher Manning -- Probabilistic linguistics and probabilistic models of natural language processing Bernhard Schölkopf -- SVM and Kernel methods Sebastian Thrun -- Probabilistic robotics From dominey at isc.cnrs.fr Sat Jul 21 18:53:19 2001 From: dominey at isc.cnrs.fr (Peter FORD DOMINEY) Date: Sun, 22 Jul 2001 00:53:19 +0200 Subject: postdoc position, computational neuroscience and language Message-ID: <3.0.5.32.20010722005319.00810c00@nimbus.isc.cnrs.fr> Please Post: Post-Doctoral Fellowship Announcement: Multiple-Cue Integration in Language Acquisition: A Simulation Study Starting in September/October 2001, a post-doctoral fellowship will be available for a period of 12-36 months in the Sequential Cognition and Language group, at the Institute of Cognitive Science (Institut des Sciences Cognitives) in Lyon, France. The selected researcher will participate in an HFSP funded project addressing aspects of language acquisition through simulation, behavioral and brain imagery (ERP) studies. The position will involve: 1. analysis of natural language corpora 2. Neural network simulation of language acquisition processes based on the preceding analysis. An example of this type of approach can be found in: Dominey PF, Ramus F (2000) Neural network processing of natural language: I. Sensitivity to serial, temporal and abstract structure of language in the infant. Language and Cognitive Processes, 15(1) 87-127 Qualifications for the candidate: 1. A PhD in a related discipline (computer science, computational neuroscience, cognitive science) and a strong computational neuroscience background, with experience in the Linux/Unix C environment, and in cognitive neuroscience simulation. 2. Familiarity with the CHILDES language database and associated analysis tools, and/or experience/interest in computational aspects of language acquisition. 3. Fluency in French and English. Interested candidates should send a letter of intention, a CV and three letters of recommendation to Peter F. Dominey at the address below. Applications will continue to be accepted until the position is filled. Peter F. Dominey, Ph.D.
Institut des Sciences Cognitives CNRS UPR 9075 67 boulevard Pinel 69675 BRON Cedex Tel Standard: 33(0)4.37.91.12.12 Tel Direct: 33(0)4.37.91.12.66 Fax : 33(0)4.37.91.12.10 dominey at isc.cnrs.fr WEB: http://www.isc.cnrs.fr From erol at starlab.net Sun Jul 22 16:45:48 2001 From: erol at starlab.net (erol@starlab.net) Date: Sun, 22 Jul 2001 22:45:48 +0200 (CEST) Subject: PhD and PostDoc research positions for the SWARM-BOTS project Message-ID: <995834748.3b5b3b7c2d763@127.0.0.1> Please post: SWARM-BOTS project: PhD and PostDoc research positions IRIDIA - Université Libre de Bruxelles, Belgium We are currently seeking a PhD student and a PostDoc to join a research team in swarm intelligence and distributed autonomous robotics at IRIDIA, the artificial intelligence lab of the Université Libre de Bruxelles, Belgium. The candidate will work on the SWARM-BOTS project. The main scientific objective of the SWARM-BOTS project is to study a novel approach to the design and implementation of self-organising and self-assembling artefacts. This novel approach finds its theoretical roots in recent studies in swarm intelligence and in ant algorithms, that is, in studies of the self-organising and self-assembling capabilities shown by social insects and other animal societies. The area of competence of candidates should be in at least one of the following disciplines: Computer Science, Computational Intelligence, Autonomous Robotics, Self-organizing Systems, Complex Systems. The researchers we are looking for should be experienced programmers in procedural or object oriented programming languages and should have knowledge of modern operating systems. The PhD student should possess a degree that allows him or her to embark on a doctoral program. Female researchers are explicitly encouraged to apply for the offered positions. We guarantee that the selection process, based solely on the research records, will give equal opportunities to female and male researchers. The appointments will be for 3 years from October 1, 2001. The positions will be filled as adequate candidates become available. Therefore, there is no submission deadline. For further information see http://iridia.ulb.ac.be/~mdorigo/IRIDIA-Swarmbots-Positions.html. Please send a CV to Dr. Marco Dorigo (mdorigo at ulb.ac.be). ------------------------------------ Marco Dorigo, Ph.D. Maître de Recherches du FNRS IRIDIA CP 194/6 Université Libre de Bruxelles Avenue Franklin Roosevelt 50 1050 Bruxelles Belgium mdorigo at ulb.ac.be http://iridia.ulb.ac.be/~mdorigo/ Tel +32-2-6503169 GSM +32-478-301233 Fax +32-2-6502715 Secretary +32-2-6502729 From j-patton at northwestern.edu Wed Jul 25 14:37:43 2001 From: j-patton at northwestern.edu (Jim Patton) Date: Wed, 25 Jul 2001 13:37:43 -0500 Subject: Postdoc Position Available Message-ID: <4.2.0.58.20010725133035.00aa4b80@merle.acns.nwu.edu> Postdoctoral Research Associate in Motor Control/Neuromechanics/Robotics Sensory Motor Performance Program, Northwestern University and The Rehabilitation Institute of Chicago [Posted 25-Jul-2001] _________________________________________ SMPP INFO: The Sensory Motor Performance Program (SMPP) at the Rehabilitation Institute of Chicago (RIC) is devoted to the study of musculoskeletal, neuromuscular and sensory disorders that are associated with abnormal control of posture and movement. Faculty members have appointments in the Northwestern University Medical School and the Northwestern University Engineering School.
Approximately thirty-five research staff -- including post-doctoral research associates, graduate students, and support staff -- make up a unique team of physicians, engineers, mathematicians, physiologists, and occupational & physical therapists for the study of motor and sensory dysfunctions. Our studies on healthy individuals, patients and mathematical models are internationally renowned in the fields of biomechanics, neurophysiology, and rehabilitation research. See: SMPP is a part of the Rehabilitation Institute Research Corporation, the research arm of the Rehabilitation Institute of Chicago. It is academically affiliated with the Department of Physical Medicine and Rehabilitation at Northwestern University Medical School. Many members of our staff are affiliated with the departments of Physical Medicine and Rehabilitation, Biomedical Engineering, Mechanical Engineering, Physiology, Physical Therapy, the Institute for Neuroscience and other departments at Northwestern University and in the Chicago area. JOB DUTIES: We are currently seeking a Postdoctoral Research Associate to join our motor group under a training grant provided by the National Center on Medical Rehabilitation Research. Research may involve, but is not restricted to, quantitative electromyography, neural signal processing, computational analysis of neuromechanics, and neural engineering, robotics and/or the human machine interface. The applicant will benefit from the mentorship of a distinguished group of senior faculty. JOB EXPERIENCE: Applicants should have a doctoral degree and will be expected to have a record of research in one or more of the following areas: motor control, biomechanics, neurophysiology, biomedical engineering, or neuroscience. Some mathematical background and/or programming ability preferred. APPLICATION REQUIREMENTS: Applicants should be US nationals or permanent residents. EMAIL a cover letter, vita, and list of three references to: James Patton, Ph.D. 345 E. Superior St., Suite 1406 Chicago, Illinois 60611 USA 312-238-1277, 312-238-2208 Fax SALARY RANGE: Contingent on educational background & experience. _________________________________________ RIC is an Affirmative Action/Equal Opportunity Employer. Women and minority applicants are encouraged to apply. Hiring is contingent on eligibility to work in the United States. ______________________________________________________________________ J A M E S P A T T O N , P H . D . Research Scientist Sensory Motor Performance Prog., Rehabilitation Institute of Chicago Physical Med & Rehabilitation, Northwestern University Med School 345 East Superior, Room 1406, Chicago, IL 60611 312-238-1277 (OFFICE) -2208 (FAX) -1232 (LAB) -3381 (SECRETARY) CELL PHONE MESSAGING (<150 char.): 8473341056 at msg.myvzw.com ______________________________________________________________________ From swatanab at pi.titech.ac.jp Wed Jul 25 22:07:38 2001 From: swatanab at pi.titech.ac.jp (Sumio Watanabe) Date: Thu, 26 Jul 2001 11:07:38 +0900 Subject: Papers: NN Learning Theory and Algebraic Geometry Message-ID: <00a001c11577$be993e10$988a7083@titech42lg8r0u> Dear Connectionists, The following papers are available. http://watanabe-www.pi.titech.ac.jp/~swatanab/index.html I would like to announce that the reason why the hierarchical structure is important in practical learning machines is now being clarified. Also please visit the page of our special session, http://watanabe-www.pi.titech.ac.jp/~swatanab/kes2001.html Comments and remarks are welcome. Thank you.
Sumio Watanabe P&I Lab. Tokyo Institute of Technology swatanab at pi.titech.ac.jp ***** (1) S. Watanabe "Learning efficiency of redundant neural networks in Bayesian estimation," to appear in IEEE Trans. on NN. The generalization error of a three-layer neural network in a redundant state is clarified. The method in this paper is not algebraic but completely analytic. It is shown that the stochastic complexity of the three-layer perceptron can be calculated by expanding the determinant of the singular information matrix. It is shown that, if the learner becomes more redundant compared with the true distribution, then the increase of the stochastic complexity becomes smaller. Non-identifiable models are compared with the regular statistical models from the statistical model selection point of view, and it is shown that Bayesian estimation is appropriate for layered learning machines in almost redundant states. (2) S. Watanabe, "Algebraic geometrical methods for hierarchical learning machines," to appear in Neural Networks. This paper establishes the algebraic geometrical methods in neural network learning theory. The learning curve of a non-identifiable model is determined by the pole of the Zeta function of the Kullback information, and its pole can be found by resolution of singularities. The blowing-up technology in algebraic geometry is applied to the multi-layer perceptron, and its learning efficiency is obtained systematically. Even when the true distribution is not contained in parametric models, singularities in the parameter space make the learning curve smaller than the learning curves of all the smaller models contained in the machine. *** Please compare these two papers. *** End From jzhu at stanford.edu Thu Jul 26 17:55:36 2001 From: jzhu at stanford.edu (Ji Zhu) Date: Thu, 26 Jul 2001 14:55:36 -0700 (PDT) Subject: kernel logistic regression and the import vector machine Message-ID: The following short paper is available at http://www.stanford.edu/~jzhu/research/nips01.ps Kernel Logistic Regression and the Import Vector Machine Ji Zhu, Trevor Hastie Dept. of Statistics, Stanford University Abstract The support vector machine (SVM) is known for its good performance in binary classification, but its extension to multi-class classification is still an on-going research issue. In this paper, we propose a new approach for classification, called the import vector machine (IVM), which is built on kernel logistic regression (KLR). We show that the IVM not only performs as well as the SVM in binary classification, but also can naturally be generalized to the multi-class case. Furthermore, the IVM provides an estimate of the underlying probability. Similar to the ``support points'' of the SVM, the IVM model uses only a fraction of the training data to index kernel basis functions, typically a much smaller fraction than the SVM. This gives the IVM a computational advantage over the SVM, especially when the size of the training data set is large. From terry at salk.edu Fri Jul 27 15:35:26 2001 From: terry at salk.edu (Terry Sejnowski) Date: Fri, 27 Jul 2001 12:35:26 -0700 (PDT) Subject: NEURAL COMPUTATION 13:9 In-Reply-To: <200103072248.f27MmVH58010@kepler.salk.edu> Message-ID: <200107271935.f6RJZQL14567@purkinje.salk.edu> Neural Computation - Contents - Volume 13, Number 9 - September 1, 2001 ARTICLE Modeling Neuronal Assemblies: Theory and Implementation J. Eggert and J. L. van Hemmen NOTES On a Class of Support Vector Kernels Based on Frames in Function Hilbert Spaces J. B. Gao, C. J. Harris and S.
R. Gunn Extraction of Specific Signals with Temporal Structure Allan Kardec Barros and Andrzej Cichocki LETTERS Correlation Between Uncoupled Conductance-Based Integrate-and-Fire Neurons Due to Common and Synchronous Presynaptic Firing Sybert Stroeve and Stan Gielen Attention Modulation of Neural Tuning Through Peak and Base Rate Hiroyuki Nakahara, Si Wu, and Shun-ichi Amari Democratic Integration: Self-Organized Integration of Adaptive Cues Jochen Triesch and Christoph von der Malsburg An Auto-Associative Neural Network Model of Paired-Associate Learning Daniel S. Rizzuto and Michael J. Kahana Simple Recurrent Networks Learn Context-Free and Context-Sensitive Languages by Counting Paul Rodriguez Training ν-Support Vector Classifiers: Theory and Algorithms Chih-Chung Chang and Chih-Jen Lin A Tighter Bound for Graphical Models M. A. R. Leisink and H. J. Kappen ----- ON-LINE - http://neco.mitpress.org/
SUBSCRIPTIONS - 2001 - VOLUME 13 - 12 ISSUES
                  USA     Canada*   Other Countries
Student/Retired   $60     $64.20    $108
Individual        $88     $94.16    $136
Institution       $460    $492.20   $508
* includes 7% GST
MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ----- From X.Yao at cs.bham.ac.uk Fri Jul 27 15:00:31 2001 From: X.Yao at cs.bham.ac.uk (Xin Yao) Date: Fri, 27 Jul 2001 20:00:31 +0100 (BST) Subject: Job available: Research Fellow in Evolutionary Computation Message-ID: ---------------------------------------------------- VACANCY: Research Fellow in Evolutionary Computation (Ref. No. S35726/01) ---------------------------------------------------- http://www.bham.ac.uk/personnel/s35726.htm Applications are invited for a research fellowship in evolutionary computation (available for up to two years, full-time) in the School of Computer Science, the University of Birmingham, England. We are particularly interested in candidates with a background in co-evolution, evolvable hardware or a closely related area. However, outstanding applicants from any areas of evolutionary computation will be considered seriously. Applicants should have or be about to complete a PhD in Computer Science, Computer Engineering, Electrical Engineering, or a closely related field. The successful candidate is expected to have research experience and record of outstanding quality in evolutionary computation or a closely related area, as evidenced by publications in leading international journals or conference proceedings. The research potential of a new PhD (or a nearly completed PhD) may also be judged from his/her PhD thesis. Strong background and experience in computational studies and excellent analytical and programming skills will be highly valued. The successful applicant, who is to work with Prof. Xin Yao, must be able to work effectively in a team environment and is required to contribute to the School's teaching and admin activities. The starting salary for the post is on Research Grade 1A in the range GBP17,278 - GBP19,293 per annum (depending on experience and qualifications). The School of Computer Science has a very strong group in evolutionary and neural computation with an international reputation. Staff members in this area include Dr. John Bullinaria (Neural Networks, Evolutionary Computation, Cog.Sci.) Dr. Jun He (Evolutionary Computation) Dr. Julian Miller (Evolutionary Computation, Machine Learning) Dr. Riccardo Poli (Evolutionary Computation, GP, Computer Vision, NNs, AI) Dr. Jon Rowe (Evolutionary Computation, AI) Dr.
Thorsten Schnier (Evolutionary Computation, Engineering Design) Prof. Xin Yao (Evolutionary Computation, NNs, Machine Learning, Optimisation) Other staff members also working in these areas include Prof. Aaron Sloman (evolvable architectures of mind, co-evolution, interacting niches) and Dr. Jeremy Wyatt (evolutionary robotics, classifier systems). For further particulars, please visit http://www.bham.ac.uk/personnel/s35726.htm For informal enquiries, please contact Prof. Xin Yao, phone (+44) 121 414 3747, email: X.Yao at cs.bham.ac.uk. His research interests can be found at http://www.cs.bham.ac.uk/~xin/research/. CLOSING DATE FOR RECEIPT OF APPLICATIONS: 21 August 2001 (late applications may be considered) APPLICATION FORMS RETURNABLE TO The Director of Personnel Services The University of Birmingham Edgbaston, Birmingham, B15 2TT England RECRUITMENT OFFICE FAX NUMBER +44 121 414 4802 RECRUITMENT OFFICE TELEPHONE NUMBER +44 121 414 6486 RECRUITMENT OFFICE E-MAIL ADDRESS h.h.luong at bham.ac.uk From bis at prip.tuwien.ac.at Fri Jul 27 09:42:32 2001 From: bis at prip.tuwien.ac.at (Horst Bischof) Date: Fri, 27 Jul 2001 15:42:32 +0200 Subject: CfP Pattern Recognition, Special Issue on Kernel and Subspace Methods for Computer Vision Message-ID: <3B616FC8.8050205@prip.tuwien.ac.at> Pattern Recognition The Journal of the Pattern Recognition Society Special Issue on Kernel and Subspace Methods for Computer Vision Guest Editors: Ales Leonardis, Faculty of Computer and Information Science, University of Ljubljana, Trzaska 25, 1001 Ljubljana, Slovenia, alesl at fri.uni-lj.si; and Horst Bischof, Pattern Recognition and Image Processing Group, Vienna University of Technology, Favoritenstr. 9/1832, A-1040 Vienna, Austria, bis at prip.tuwien.ac.at This Pattern Recognition Special Issue will address new developments in the area of kernel and subspace methods related to computer vision. High-quality original journal paper submissions are invited. The topics of interest include (but are not limited to): Support Vector Machines, Independent Component Analysis, Principal Component Analysis, Mixture Modeling, Canonical Correlation Analysis, etc. applied to computer vision problems such as: Object Recognition, Navigation and Robotics, Medical Imaging, 3D Vision, etc. All submitted papers will be peer reviewed. Only high-quality, original submissions will be accepted for publication in the Special Issue---in accordance with the Pattern Recognition guidelines (http://www.elsevier.nl/inca/publications/store/3/2/8/index.htt).
From goldfarb at unb.ca Sat Jul 28 11:59:20 2001 From: goldfarb at unb.ca (Lev Goldfarb) Date: Sat, 28 Jul 2001 12:59:20 -0300 (ADT) Subject: What is a structural representation? Message-ID: (Our apologies if you receive multiple copies of this announcement) Dear colleagues, The following paper, titled "What is a structural representation?", ( http://www.cs.unb.ca/profs/goldfarb/struct.ps ) which we believe to be, in a sense, the first one formally addressing the issue of structural representation and proposing the formal ETS model, should be of particular interest to researchers in pattern recognition and machine learning. It implies, in particular, that the properly understood (non-trivial) "structural" representations cannot be "replaced" by the classical numeric, e.g. vector-space-based, representations. Moreover, the concept of "structural" representation emerging from the ETS model is not the one familiar to all of you. (The abstract of the paper is appended below; for a change, the default paper size is A4. Unfortunately for some, the language of the paper is of necessity quite formal, since the main concepts do not have any analogues and therefore must be treated carefully.) Although the proposed model was motivated by, and will be applied to, the "real" problems coming from such areas as pattern recognition, machine learning, data mining, cheminformatics, bioinformatics, and many others, in view of the required radical rethinking that must now go into its implementations, at this time, we can only offer a very preliminary discussion, in the following companion paper, addressing the model's potential applications in chemistry http://www.cs.unb.ca/profs/goldfarb/cadd.ps (please keep in mind that the last paper was written on the basis of an earlier draft of the paper we are announcing now and it will be updated accordingly next month). We intend to discuss the paper shortly on the INDUCTIVE mailing list. (To subscribe, send to INDUCTIVE-SERVER at UNB.CA the following text SUBSCRIBE INDUCTIVE FIRSTNAME LASTNAME) We would greatly appreciate any comments regarding both of the above papers. Best regards, Lev Goldfarb, Faculty of Computer Science, University of New Brunswick, P.O. Box 4400, Fredericton, N.B., E3B 5A3, Canada. Tel: 506-458-7271, Tel (secret.): 453-4566, Fax: 506-453-3566, Home tel: 506-455-4323. E-mail: goldfarb at unb.ca http://www.cs.unb.ca/profs/goldfarb/goldfarb.htm ***************************************************************************** WHAT IS A STRUCTURAL REPRESENTATION? Lev Goldfarb, Oleg Golubitsky, Dmitry Korkin Faculty of Computer Science University of New Brunswick Fredericton, NB, Canada We outline a formal foundation for a "structural" (or "symbolic") object/event representation, the necessity of which is acutely felt in all sciences, including mathematics and computer science. The proposed foundation incorporates two hypotheses: 1) the object's formative history must be an integral part of the object representation and 2) the process of object construction is irreversible, i.e. the "trajectory" of the object's formative evolution does not intersect itself. The last hypothesis is equivalent to the generalized axiom of (structural) induction. Some of the main difficulties associated with the transition from the classical numeric to the structural representations appear to be related precisely to the development of a formal framework satisfying these two hypotheses.
The concept of (inductive) class--which has inspired the development of this approach to structural representation--differs fundamentally from the known concepts of class. In the proposed evolving transformations system (ETS) model, the class is defined by the transformation system---a finite set of weighted transformations acting on the class progenitor--and the generation of the class elements is associated with the corresponding generative process which also induces the class typicality measure. Moreover, in the ETS model, a fundamental role of the object's class in the object's representation is clarified: the representation of an object must include the class. From the point of view of the ETS model, the classical discrete representations, e.g. strings and graphs, appear now as incomplete special cases, the proper completion of which should incorporate the corresponding formative histories, i.e. those of the corresponding strings or graphs. From Zoubin at gatsby.ucl.ac.uk Tue Jul 31 11:57:13 2001 From: Zoubin at gatsby.ucl.ac.uk (Zoubin Ghahramani) Date: Tue, 31 Jul 2001 16:57:13 +0100 (BST) Subject: Director of Gatsby Unit, University College London, UK Message-ID: <200107311557.QAA11539@cajal.gatsby.ucl.ac.uk> The Gatsby Unit is seeking a new Director. The advertisement is enclosed below: note the broad scope of research at the Unit, currently directed by Geoff Hinton. I would also be happy to answer informal confidential enquiries by email. Zoubin Ghahramani zoubin at gatsby.ucl.ac.uk .......................................................................... UNIVERSITY COLLEGE LONDON Gatsby Computational Neuroscience Unit Director University College London and the Gatsby Charitable Foundation are seeking a new Director for the Gatsby Computational Neuroscience Unit. The successful candidate would also be considered for appointment to the Chair of Computational Neuroscience at UCL. The Unit is one of the leading computational neuroscience centres in the world with staff having minimal teaching responsibilities. It is based in Queen Square, London, in very close proximity to the Institute of Cognitive Neuroscience, the Functional Imaging Lab and the Institute of Neurology. Currently four senior staff, five postdoctoral researchers and ten PhD students receive substantial long-term funding from the Foundation. The Director will be expected to provide scientific direction and intellectual leadership. His/her breadth of interests will be more critical than the specific research area. Applications are invited from scientists who can lead a group on computationally and theoretically sophisticated research within the broad area bounded by neuroscience, cognition and machine learning. Salary will be at a level appropriate to the international standing of the successful applicant. Applications including a CV should be sent to Janice Hankes, Gatsby Computational Neuroscience Unit, UCL, Alexandra House, 17 Queen Square, London WC1N 3AR, UK or by email to Janice at gatsby.ucl.ac.uk Enquiries or expressions of interest to t.shallice at ucl.ac.uk For further information on the Unit see http://www.gatsby.ucl.ac.uk The closing date for applications is Monday 3rd September 2001. Working Toward Equal Opportunity ..........................................................................
From nnsp01 at neuro.kuleuven.ac.be Fri Jul 6 10:22:20 2001 From: nnsp01 at neuro.kuleuven.ac.be (Neural Networks for Signal Processing 2001) Date: Fri, 06 Jul 2001 16:22:20 +0200 Subject: NNSP2001 Workshop: Focus on data mining and signal separation Message-ID: <3B45C99C.D7F7B803@neuro.kuleuven.ac.be> ----------------------------------------------------------------- 2001 IEEE Workshop on Neural Networks for Signal Processing September 10-12, 2001 Falmouth, Massachusetts, USA --------------------------------http://eivind.imm.dtu.dk/nnsp2001 The eleventh in a series of IEEE NNSP workshops will be held at the Sea Crest Oceanfront Resort and Conference Center (http://www.seacrest-resort.com/), the largest oceanfront conference resort on Cape Cod, with a 684 foot private white sandy beach. Contemporary neural networks for signal processing research combines many ideas from adaptive signal/image processing, machine learning, and advanced statistics in order to solve complex real-world signal processing problems. This year, the workshop will focus on two key application areas: * data mining * blind source separation.
The strong technical program will be complemented by a series of exciting keynote addresses: ``Information Geometry of Multilayer Neural Networks'' by Shun-ichi Amari ``Semi Blind Signal Separation and Extraction and Their Application in Biomedical Signal Processing'' by Andrzej Cichocki ``Learning Metrics for Exploratory Data Analysis'' by Samuel Kaski ``From Bits to Information: Theory and Applications of Learning Machines'' by Tomaso Poggio ``Beyond Stochastic Chaos: Implications for Dynamic Reconstruction'' by Simon Haykin ``A Novel Associative Memory Approach to Blind SIMO/MIMO Channel Equalization and Signal Recovery'' by S.Y. Kung Please refer to the workshop web site for further information. From lunga at ifi.unizh.ch Sat Jul 7 11:19:30 2001 From: lunga at ifi.unizh.ch (Max Lungarella) Date: 7 Jul 2001 17:19:30 +0200 Subject: DEVELOPMENTAL EMBODIED COGNITION - CALL FOR PARTICIPATION Message-ID: <3B472882.82A9941C@ifi.unizh.ch> DEVELOPMENTAL EMBODIED COGNITION - DECO 2001 Workshop in Edinburgh, Scotland, 31 July 2001 *********** CALL FOR PARTICIPATION *********** http://www.cogsci.ed.ac.uk/~deco/ The objective of this workshop is to bring together researchers from cognitive science, psychology, robotics, artificial intelligence, philosophy, and related fields to discuss the role of developmental and embodied views of cognition, and in particular, their mutual relationships. The ultimate goal of this approach is to understand the emergence of high-level cognition in organisms based on their interactions with their environment over extended periods of time. The workshop will be held at the University of Edinburgh on July 31st 2001, one day before the 23rd Annual Meeting of the Cognitive Science Society. The workshop will consist of invited talks, followed by a poster session with contributed papers. Invited speakers: Mark Johnson (Centre for Brain and Cognitive Development, Birkbeck College, London, UK) Max Lungarella and Rolf Pfeifer (AI-Laboratory, University of Zuerich, Switzerland) Lorenzo Natale (Lira-Lab, University of Genoa, Italy) Linda Smith (Department of Psychology, Indiana University, Bloomington, IN, USA) Michael Thomas (Neurocognitive Development Unit, University College London, UK) Tom Ziemke (Department of Computer Science, University of Skoevde, Sweden) Participation in the workshop is free, but registration is required. Please send email to deco at cogsci.ed.ac.uk to register. Please visit the workshop website at http://www.cogsci.ed.ac.uk/~deco/ for further information. Rolf Pfeifer, Workshop Co-Chair, Artificial Intelligence Laboratory, University of Zurich, Switzerland; Gert Westermann, Workshop Co-Chair, Sony Computer Science Laboratory, Paris, France. deco at cogsci.ed.ac.uk DECO-2001 is kindly sponsored by James (R) http://www.personaljames.com/ From didier at isr.umd.edu Thu Jul 12 10:25:52 2001 From: didier at isr.umd.edu (Didier A. Depireux) Date: Thu, 12 Jul 2001 10:25:52 -0400 (EDT) Subject: Job announcement for MEG lab Message-ID: Technical Coordinator, MEG Laboratory Cognitive Neuroscience of Language Laboratory, Department of Linguistics University of Maryland College Park The Cognitive Neuroscience of Language Laboratory at the University of Maryland is developing a new state-of-the-art magnetoencephalography (MEG) facility. The MEG facility will perform non-invasive recordings with millisecond resolution in time, and high spatial resolution, afforded by a dense sensor array (160+ channels).
The laboratory will be used for research in a number of disciplines, including linguistics, neuroscience, electrical engineering, computer science, and physics. The lab is seeking a full-time Technical Coordinator, starting September 1st, 2001, or as soon as possible thereafter. The Technical Coordinator's responsibilities include: maintaining the MEG laboratory, including stimulus delivery equipment and a number of PC workstations for data acquisition/analysis; supervising and running experiments; developing protocols and manuals, and training new lab users; assisting in preparing experiments and analyzing data. The position provides an exciting opportunity to gain expertise in a cutting edge cognitive neuroscience laboratory. The position demands energy and initiative, technical aptitude, ability to work with a wide variety of people, and a serious interest in brain function. Experience in any of the following areas is desirable, though not necessary: cognitive science, neuroscience, computer programming, electrophysiology, radiology, electrical engineering, or physics. The position comes with a competitive salary and full benefits. For more information on the Cognitive Neuroscience of Language Laboratory, see http://www.ling.umd.edu/cnl. Inquiries should be directed to Prof. David Poeppel, dpoeppel at deans.umd.edu (301)-405-1016; Department of Linguistics, 1401 Marie Mount Hall, University of Maryland, College Park, MD 20742. The University of Maryland is an Affirmative Action/Equal Opportunities Title IX employer. Women and minority candidates are especially encouraged to apply. From gbaura at cardiodynamics.com Thu Jul 12 22:15:43 2001 From: gbaura at cardiodynamics.com (Gail Baura) Date: Thu, 12 Jul 2001 19:15:43 -0700 Subject: Research Engineer position: nonlinear models in medical instrumentation Message-ID: <001001c10b41$b864d440$2a00a8c0@CARDIODYNAMICS.COM> Please send out this job posting for a Research Engineer. This researcher will use linear and nonlinear models as a means toward understanding the physiologic mechanisms underlying data acquired from our medical instrumentation. Thank you. CardioDynamics, based in San Diego, is a rapidly growing medical technology and information solutions company committed to fundamentally changing the way cardiac patient monitoring is performed in healthcare. We have the following growth opportunity: Research Engineer The role of this position is to conduct the research and systems engineering behind new product concepts and devices for both CDIC internal products and external partnership agreements. The engineer will conduct all activities for assigned digital signal processing research projects, including analysis of data to determine the physiologic mechanisms underlying hemodynamic parameters. The engineer will also translate research algorithms into efficient realizations by writing specifications for a DSP processor. This position requires a BS degree in an engineering discipline and an MS degree in electrical, biomedical, or software engineering, with specialization in digital signal processing. Demonstrated system identification expertise, such as ARMAX or artificial neural networks, in the MS thesis. Experience in algorithm development using either LabVIEW or Matlab. Experience in obtaining physiologic data for analysis. The ideal candidate would also have 1-5 years of progressive hands-on research and clinical evaluation experience with medical devices or electronic medical instruments. PhDs would be overqualified for this position.
In addition to a dynamic environment, we offer a competitive compensation package, including stock options, bonuses, medical, dental, 401(k) with match, 3 weeks paid time off, paid holidays and other benefits. Visit our website at www.cdic.com. For consideration, please send resume and cover letter with salary requirements to e-mail address: reseng at cdic.com, or fax to 858/587-8616, or mail to HR, 6175 Nancy Ridge Dr., Ste. 300, San Diego, CA 92121. Equal Opportunity Employer. From b9pafr at uni-jena.de Fri Jul 13 06:03:31 2001 From: b9pafr at uni-jena.de (Frank Pasemann) Date: Fri, 13 Jul 2001 12:03:31 +0200 Subject: Job opening PhD Studentship Message-ID: <3B4EC773.7B047E6A@rz.uni-jena.de> The TheoLab - Research Unit for Structure Dynamics and System Evolution - at the Friedrich-Schiller-University of Jena, Germany - invites PhD candidates with pronounced interests in the field of "Evolved Neurocontrollers for Autonomous Agents" to apply for a PhD Studentship (BAT IIa/2) We are looking for a PhD student to work on the project "Real-time Learning Procedures for Co-operating Robots" which is funded by the DFG (German Research Council) for two years. Research is on the development and implementation of behavior-relevant autonomous learning rules for robots. They will be combined with existing evolutionary strategies for the generation of multi-functional neurocontrollers. The position is to be taken as soon as possible. Applicants should have experience in computer simulations (C, C++, graphics, Linux). A background in the fields of Dynamical Systems Theory, Neural Networks, Robotics and/or Embodied Cognitive Science is favourable. Experiments will be performed with Khepera robots. Successful candidates may also benefit from TheoLab's cooperation with the Max-Planck-Institute for Mathematics in the Sciences, Leipzig, the Department of Computer Science, University Leipzig, and the Institute of Theoretical Physics, University Bremen. Applications (CV, two academic referees) should be sent to Prof. Dr. Frank Pasemann, TheorieLabor, Friedrich-Schiller-Universität Jena, Ernst-Abbe-Platz 4, D-07740 Jena, Germany. Tel: x49 - 36 41 - 94 95 30 (Sekr.) b9pafr at rz.uni-jena.de, http://www.theorielabor.de ************************************************************** Prof. Dr. Frank Pasemann, TheorieLabor, Friedrich-Schiller-Universität, Ernst-Abbe-Platz 4, D-07740 Jena, Germany. Tel: x49-3641-949531, x49-3641-949530 (Sekr.), Fax: x49-3641-949532. frank.pasemann at rz.uni-jena.de http://www.theorielabor.de From wahba at stat.wisc.edu Fri Jul 13 17:21:05 2001 From: wahba at stat.wisc.edu (Grace Wahba) Date: Fri, 13 Jul 2001 16:21:05 -0500 (CDT) Subject: Multicategory Support Vector Machines Message-ID: <200107132121.QAA28132@hera.stat.wisc.edu> The following short paper is available at http://www.stat.wisc.edu/~wahba/trindex.html Multicategory Support Vector Machines (Preliminary Long Abstract) Yoonkyung Lee, Yi Lin and Grace Wahba University of Wisconsin-Madison Statistics Dept, TR 1040 Abstract Support Vector Machines (SVMs) have recently shown great performance in practice as a classification methodology. Even though the SVM implements the optimal classification rule asymptotically in the binary case, the one-versus-rest approach to solve the multicategory case using an SVM is not optimal. We have proposed Multicategory SVMs, which extend the binary SVM to the multicategory case, and encompass the binary SVM as a special case.
The Multicategory SVM implements the optimal classification rule as the sample size gets large, overcoming the suboptimality of conventional one-versus-rest approach. The proposed method deals with the equal misclassification cost and the unequal cost case in unified way. From mbartlet at san.rr.com Mon Jul 16 19:42:59 2001 From: mbartlet at san.rr.com (Marian Stewart Bartlett) Date: Mon, 16 Jul 2001 16:42:59 -0700 Subject: Face Image Analysis by Unsupervised Learning Message-ID: <3B537C03.D1B0B36A@san.rr.com> I am pleased to announce the following new book: Face Image Analysis by Unsupervised Learning, by Marian Stewart Bartlett. Foreword by Terrence J. Sejnowski. Kluwer International Series on Engineering and Computer Science, V. 612. Boston: Kluwer Academic Publishers, 2001. Please see http://inc.ucsd.edu/~marni for more information. The book can be ordered at http://www.wkap.nl/book.htm/0-7923-7348-0. Book Jacket: Face Image Analysis by Unsupervised Learning explores adaptive approaches to face image analysis. It draws upon principles of unsupervised learning and information theory to adapt processing to the immediate task environment. In contrast to more traditional approaches to image analysis in which relevant structure is determined in advance and extracted using hand-engineered techniques, [this book] explores methods that have roots in biological vision and/or learn about the image structure directly from the image ensemble. Particular attention is paid to unsupervised learning techniques for encoding the statistical dependencies in the image ensemble. The first part of this volume reviews unsupervised learning, information theory, independent component analysis, and their relation to biological vision. Next, a face image representation using independent component analysis (ICA) is developed, which is an unsupervised learning technique based on optimal information transfer between neurons. The ICA representation is compared to a number of other face representations including eigenfaces and Gabor wavelets on tasks of identity recognition and expression analysis. Finally, methods for learning features that are robust to changes in viewpoint and lighting are presented. These studies provide evidence that encoding input dependencies through unsupervised learning is an effective strategy for face recognition. Face Image Analysis by Unsupervised Learning is suitable as a secondary text for a graduate level course, and as a reference for researchers and practioners in industry. "Marian Bartlett's comparison of ICA with other algorithms on the recognition of facial expressions is perhaps the most thorough analysis we have of the strengths and limits of ICA as a preprocessing stage for pattern recognition." - T.J. Sejnowski, The Salk Institute Table of Contents: http://www.cnl.salk.edu/~marni/contents.html 1. SUMMARY ---------------------------------------------------------------- 2. INTRODUCTION 1. Unsupervised learning in object representations 1. Generative models 2. Redundancy reduction as an organizational principle 3. Information theory 4. Redundancy reduction in the visual system 5. Principal component analysis 6. Hebbian learning 7. Explicit discovery of statistical dependencies 2. Independent component analysis 1. Decorrelation versus independence 2. Information maximization learning rule 3. Relation of sparse coding to independence 3. Unsupervised learning in visual development 1. Learning input dependencies: Biological evidence 2. 
Models of receptive field development based on correlation sensitive learning mechanisms 4. Learning invariances from temporal dependencies 1. Computational models 2. Temporal association in psychophysics and biology 5. Computational Algorithms for Recognizing Faces in Images ---------------------------------------------------------------- 3. INDEPENDENT COMPONENT REPRESENTATIONS FOR FACE RECOGNITION 1. Introduction 1. Independent component analysis (ICA) 2. Image data 2. Statistically independent basis images 1. Image representation: Architecture 1 2. Implementation: Architecture 1 3. Results: Architecture 1 3. A factorial face code 1. Independence in face space versuspixel space 2. Image representation: Architecture 2 3. Implementation: Architecture 2 4. Results: Architecture 2 4. Examination of the ICA Representations 1. Mutual information 2. Sparseness 5. Combined ICA recognition system 6. Discussion ---------------------------------------------------------------- 4. AUTOMATED FACIAL EXPRESSION ANALYSIS 1. Review of other systems 1. Motion-based approaches 2. Feature-based approaches 3. Model-based techniques 4. Holistic analysis 2. What is needed 3. The Facial Action Coding System (FACS) 4. Detection of deceit 5. Overview of approach ---------------------------------------------------------------- 5. IMAGE REPRESENTATIONS FOR FACIAL EXPRESSION ANALYSIS: COMPARITIVE STUDY I 1. Image database 2. Image analysis methods 1. Holistic spatial analysis 2. Feature measurement 3. Optic flow 4. Human subjects 3. Results 1. Hybrid system 2. Error analysis 4. Discussion ---------------------------------------------------------------- 6. IMAGE REPRESENTATIONS FOR FACIAL EXPRESSION ANALYSIS: COMPARITIVE STUDY II 1. Introduction 2. Image database 3. Optic flow analysis 1. Local velocity extraction 2. Local smoothing 3. Classification procedure 4. Holistic analysis 1. Principal component analysis: ``EigenActions'' 2. Local feature analysis (LFA) 3. ``FisherActions'' 4. Independent component analysis 5. Local representations 1. Local PCA 2. Gabor wavelet representation 3. PCA jets 6. Human subjects 7. Discussion 8. Conclusions ---------------------------------------------------------------- 7. LEARNING VIEWPOINT INVARIANT REPRESENTATIONS OF FACES 1. Introduction 2. Simulation 1. Model architecture 2. Competitive Hebbian learning of temporal relations 3. Temporal association in an attractor network 4. Simulation results 3. Discussion ---------------------------------------------------------------- 8. CONCLUSIONS AND FUTURE DIRECTIONS References Index ---------------------------------------------------------------- Foreword by Terrence J. Sejnowski Computers are good at many things that we are not good at, like sorting a long list of numbers and calculating the trajectory of a rocket, but they are not at all good at things that we do easily and without much thought, like seeing and hearing. In the early days of computers, it was not obvious that vision was a difficult problem. Today, despite great advances in speed, computers are still limited in what they can pick out from a complex scene and recognize. Some progress has been made, particularly in the area of face processing, which is the subject of this monograph. Faces are dynamic objects that change shape rapidly, on the time scale of seconds during changes of expression, and more slowly over time as we age. 
We use faces to identify individuals, and we rely on facial expressions to assess feelings and get feedback on how well we are communicating. It is disconcerting to talk with someone whose face is a mask. If we want computers to communicate with us, they will have to learn how to make and assess facial expressions. A method for automating the analysis of facial expressions would be useful in many psychological and psychiatric studies as well as have great practical benefit in business and forensics. The research in this monograph arose through a collaboration with Paul Ekman, which began 10 years ago. Dr. Beatrice Golomb, then a postdoctoral fellow in my laboratory, had developed a neural network called Sexnet, which could distinguish the sex of a person from a photograph of their face (Golomb et al. 1991). This is a difficult problem since no single feature can be used to reliably make this judgment, but humans are quite good at it. This project was the starting point for a major research effort, funded by the National Science Foundation, to automate the Facial Action Coding System (FACS), developed by Ekman and Friesen (1978). Joseph Hager made a major contribution in the early stages of this research by obtaining a high-quality set of videos of experts who could produce each facial action. Without such a large dataset of labeled images of each action, it would not have been possible to use neural network learning algorithms. In this monograph, Dr. Marian Stewart Bartlett presents the results of her doctoral research into automating the analysis of facial expressions. When she began her research, one of the methods that she used to study the FACS dataset, a new algorithm for Independent Component Analysis (ICA), had recently been developed, so she was pioneering not only the automated analysis of facial expressions, but also the initial exploration of ICA. Her comparison of ICA with other algorithms on the recognition of facial expressions is perhaps the most thorough analysis we have of the strengths and limits of ICA. Much of human learning is unsupervised; that is, without the benefit of an explicit teacher. The goal of unsupervised learning is to discover the underlying probability distributions of sensory inputs (Hinton & Sejnowski, 1999). Or as Yogi Berra once said, "You can observe a lot just by watchin'." The identification of an object in an image nearly always depends on the physical causes of the image rather than the pixel intensities. Unsupervised learning can be used to solve the difficult problem of extracting the underlying causes, and decisions about responses can be left to a supervised learning algorithm that takes the underlying causes rather than the raw sensory data as its inputs. Several types of input representation are compared here on the problem of discriminating between facial actions. Perhaps the most intriguing result is that two different input representations, Gabor filters and a version of ICA, both gave excellent results that were roughly comparable with trained humans. The responses of simple cells in the first stage of processing in the visual cortex of primates are similar to those of Gabor filters, which form a roughly statistically independent set of basis vectors over a wide range of natural images (Bell & Sejnowski, 1997). The disadvantage of Gabor filters from an image processing perspective is that they are computationally intensive. The ICA filters, in contrast, are much more computationally efficient, since they were optimized for faces.
The disadvantage is that they are too specialized a basis set and could not be used for other problems in visual pattern discrimination. One of the reasons why facial analysis is such a difficult problem in visual pattern recognition is the great variability in the images of faces. Lighting conditions may vary greatly and the size and orientation of the face make the problem even more challenging. The differences between the same face under these different conditions are much greater than the differences between the faces of different individuals. Dr. Bartlett takes up this challenge in Chapter 7 and shows that learning algorithms may also be used to help overcome some of these difficulties. The results reported here form the foundation for future studies on face analysis, and the same methodology can be applied toward other problems in visual recognition. Although there may be something special about faces, we may have learned a more general lesson about the problem of discriminating between similar complex shapes: A few good filters are all you need, but each class of object may need a quite different set for optimal discrimination. -- Marian Stewart Bartlett, Ph.D. marni at salk.edu Institute for Neural Computation, 0523 http://inc.ucsd.edu/~marni University of California, San Diego phone: (858) 534-7368 La Jolla, CA 92093-0523 fax: (858) 534-2014 From edamiani at crema.unimi.it Sat Jul 14 13:51:17 2001 From: edamiani at crema.unimi.it (ernesto damiani) Date: Sat, 14 Jul 2001 19:51:17 +0200 Subject: Neuro-Fuzzy Applications Track - 17th ACM Symposium on Applied Computing (SAC 2002) Message-ID: <009801c10c8d$95d2d970$79e91d97@PIACENTI8Y2LPE> Call for Papers Neuro-Fuzzy Applications Track 17th ACM Symposium on Applied Computing (SAC 2002) March 10-14, 2002 Madrid, Spain SAC 2002 For the past fifteen years, the ACM Symposium on Applied Computing has been a primary forum for applied computer scientists, computer engineers, software engineers, and application developers from around the world to interact and present their work. SAC 2002 is sponsored by the ACM Special Interest Group on Applied Computing (SIGAPP). SAC 2002 is presented in cooperation with other special interest groups. SAC 2002 will be hosted by the Universidad Carlos III De Madrid, Spain, from March 10 - 14, 2002. For more info, see http://www.acm.org/conferences/sac/sac2002 Neuro-Fuzzy Application Track Recently, however, a tide of new applications is being fostered by the necessity of dealing with imprecision and vagueness in the context of a new generation of complex systems, such as telecommunication networks, software systems, data processing systems and the like. A common feature of these new systems is the fact that traditional fuzzy techniques (e.g. rule-based systems) are becoming fully integrated with neural network processing in the general framework of a soft computing approach to give approximate solutions to complex problems that proved too difficult to attack with other techniques. The Neuro-Fuzzy Applications Track, without neglecting traditional fuzzy applications, will focus on this new generation of neuro-fuzzy systems, both from the point of view of the computer scientist and (perhaps more importantly) from the point of view of the expert of the involved application field. 
Topics we intend to cover include (but are not limited to): IP/ATM, Mobile, Active Networks; Neural Hardware Systems; Neural Control; Neuro-Fuzzy Processing of Multimedia Data; Neuro-Fuzzy Systems in Molecular Computing; Soft-Computing Techniques for Systems Design; Flexible Query and Information Retrieval Systems; Data Mining; Computer Vision; Fuzzy Hardware Systems; Fuzzy Control. Paper submission: Authors are invited to contribute original papers in all areas of soft computing and fuzzy applications development for the technical sessions, as well as demos of new innovative systems. Papers must be submitted to one of the Track Chairs in 3 copies. In order to facilitate blind review, submitted papers should carry the authors' names and affiliations on a separate sheet. Authors must follow the Symposium's general Submission Guidelines. For electronic submissions, please contact the Track Chair in advance. All papers will be blindly reviewed for originality and accuracy. Conference Proceedings and Journal Publication: Accepted papers in all categories will be published in the SAC 2002 Proceedings. Expanded versions of selected papers will be considered for publication in the ACM SIGAPP Applied Computing Review (SIGAPP ACR). A special section with the best papers from the SAC 2002 Fuzzy Track is planned to appear in Springer's Soft Computing international journal. Neuro-Fuzzy Applications Track Chairs: Ernesto Damiani, Università di Milano - Polo di Crema, Via Bramante 65, 26013 Crema, Italy, e-mail: edamiani at crema.unimi.it, Phone: +39-0373-898240, FAX: +39-0373-898253; Athanasios Vasilakos, Institute of Computer Science (ICS), Foundation for Research and Technology-Hellas (FORTH), P.O. Box 1385, Heraklion, Crete, Greece, e-mail: vasilako at ath.forthnet.gr, Phone: +3-081-394400, FAX: +3-081-394408. IMPORTANT DATES: Paper Submission September 1st, 2001; Notification of Acceptance/Rejection November 1st, 2001; Camera-Ready Copy December 1st, 2001.

From Ajith.Abraham at infotech.monash.edu.au Mon Jul 16 05:19:58 2001 From: Ajith.Abraham at infotech.monash.edu.au (Ajith Abraham) Date: Mon, 16 Jul 2001 19:19:58 +1000 Subject: HIS'01 - Call for papers Message-ID: <5.0.2.1.2.20010716191847.00a52d10@mail1.monash.edu.au>

**************************************************************************** Your help with circulating this announcement locally would be very much appreciated. We apologise if you receive multiple copies of this message. **************************************************************************** Dear Colleagues, We have organised an exciting event: HIS'2001: International Workshop on Hybrid Intelligent Systems, in conjunction with The 14th Australian Joint Conference on Artificial Intelligence (AI'01). Venue: Adelaide, South Australia. Date: 11-12 December 2001. Workshop URL: http://his.hybridsystem.com (Technically co-sponsored by The World Federation of Soft Computing). HIS'01 is an International Workshop that brings together researchers, developers, practitioners, and users of neural networks, fuzzy inference systems, evolutionary algorithms and conventional techniques. The aim of HIS'01 is to serve as a forum to present current and future work as well as to exchange research ideas in this field. HIS'01 invites authors to submit their original and unpublished work that demonstrates current research using hybrid computing techniques and their applications in science, technology, business and commerce. Topics of interest include, but are not limited to, applications/techniques using the following: * Machine learning techniques (supervised/unsupervised/reinforcement learning) * Artificial neural networks and evolutionary algorithms * Artificial neural network optimization using global optimization techniques * Neural networks and fuzzy inference systems * Fuzzy clustering algorithms optimized using evolutionary algorithms * Evolutionary computation (genetic algorithms, genetic programming, evolution strategies, grammatical evolution, etc.) * Hybrid optimization techniques (simulated annealing, tabu search, GRASP, etc.) * Hybrid computing using neural networks, fuzzy systems and evolutionary algorithms * Hybrids of soft computing and hard computing techniques * Models using inductive logic programming, decomposition methods, grammatical inference, case-based reasoning, etc. * Other intelligent techniques (support vector machines, rough sets, Bayesian networks, probabilistic reasoning, minimum message length, etc.)

************************************************************* Paper Submission ************************************************************* We invite you to submit a full paper of 20 pages (maximum limit) for the workshop presentation. Please follow the IOS Press guidelines for more information on submission. Submission implies the willingness of at least one of the authors to register and present the paper. All full papers are to be submitted in PDF, PostScript or MS Word format electronically to: hybrid at softcomputing.net. Hard copies should be sent only if electronic submission is not possible. All papers will be peer reviewed by two independent referees of the international program committee of HIS'01. All accepted papers will be published in the proceedings of the Workshop by IOS Press, Netherlands.

*********************************************************** Important Dates *********************************************************** Submission deadline: September 07, 2001. Notification of acceptance: October 01, 2001. Camera-ready papers and pre-registration due: 15 October 2001.

************************************************************ Workshop Chairs ************************************************************ Ajith Abraham, School of Computing and Information Technology, Monash University, Australia. Phone: +61 3 990 26778, Fax: +61 3 990 26879, Email: ajith.abraham at ieee.org. Mario Köppen, Department of Pattern Recognition, Fraunhofer IPK-Berlin, Pascalstr. 8-9, 10587 Berlin, Germany. Phone: +49 (0)30 39 006-200, Fax: +49 (0)30 39 175-17, Email: mario.koeppen at ipk.fhg.de.

******************************************************************** International Technical Committee Members Honorary Chair: Lakhmi Jain, University of South Australia, Australia ******************************************************************** Baikunth Nath, Monash University, Australia; Shunichi Amari, RIKEN Brain Science Institute, Japan; Frank Hoffmann, Royal Institute of Technology, Sweden; Saratchandran P, Nanyang Technological University, Singapore; José Mira, Universidad Nacional de Educación a Distancia, Spain; Sami Khuri, San Jose University, USA; Dan Steinberg, Salford Systems Inc, USA; Janusz Kacprzyk, Polish Academy of Sciences, Poland; Venkatesan Muthukumar, University of Nevada, USA; Evgenia Dimitriadou, Technische Universität Wien, Austria; Kaori Yoshida, Kyushu Institute of Technology, Japan; Mario Köppen, Fraunhofer IPK-Berlin, Germany; Janos Abonyi, University of Veszprem, Hungary; Ajith Abraham, Monash University, Australia; José Manuel Benítez, University of Granada, Spain; Vijayan Asari, Old Dominion University, USA; Xin Yao, University of Birmingham, UK; Joshua Singer, Stanford University, USA; Morshed Chowdhury, Deakin University, Australia; Dharmendra Sharma, University of Canberra, Australia; Eugene Kerckhoffs, Delft University of Tech., Netherlands; Bret Lapin, SAIC Inc, San Diego, USA; Rajan Alex, Western Texas A & M University, USA; Sankar K Pal, Indian Statistical Institute, India; Javier Ruiz-del-Solar, Universidad de Chile, Chile; Aureli Soria-Frisch, Fraunhofer IPK-Berlin, Germany; Pavel Osmera, Brno University of Tech., Czech Republic; Alberto Ochoa, ICIMAF, Cuba; Xiao Zhi Gao, Helsinki University of Technology, Finland; Maumita Bhattacharya, Monash University, Australia; P J Costa Branco, Instituto Superior Técnico, Portugal; Vasant Honavar, Iowa State University, USA. **********************************************************************

From harnad at coglit.ecs.soton.ac.uk Tue Jul 17 12:00:41 2001 From: harnad at coglit.ecs.soton.ac.uk (Stevan Harnad) Date: Tue, 17 Jul 2001 17:00:41 +0100 (BST) Subject: Psycoloquy 1-30 2001: Calls for Commentators Message-ID:

Below are the half-year contents of Psycoloquy for 2001. Please note that the full articles themselves will no longer be posted to subscribers and lists, just the summary contents, with the URLs where they can be retrieved. Note that below there are a number of target articles on which Open Peer Commentary is now invited: (1) 6 related target articles on Nicotine Addiction: Balfour, Le Houezec, Oscarson, Sivilotti, Smith & Sachse, Wonnacott http://www.cogsci.soton.ac.uk/cgi/psyc/ptopic?topic=nicotine-addiction (2) 4 independent target articles, all inviting commentary: Navon on Mirror Reversal http://www.cogsci.soton.ac.uk/psyc-bin/newpsy?12.017 Kramer & Moore on Family Therapy http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.024 Sherman on Bipolar Disorder http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.028 Overgaard on Consciousness http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.029 (3) 5 book Precis, all inviting Multiple Book Review: Miller on the Mating Mind http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.008 Ben-Ze'ev on Emotion http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.007 Bolton & Hill on Mental Disorder http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.018 Zachar on Biological Psychiatry http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.023 Praetorius on Cognition/Action http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.027 (4) 15 ongoing commentaries and responses on current Psycoloquy target articles: Social-Bias, Reduced-Wason-Task, Self-Consciousness, Electronic-Journals, Brain-Intelligence, Autonomous Brain, Stroop-Differences, Lashley-Hebb, Bell-Curve. ----------------------------------------------------------------------- TABLE OF CONTENTS: PSYCOLOQUY 2001 January - July: Balfour, D. (2001), The Role of Mesolimbic Dopamine in Nicotine Dependence. Psycoloquy 12(001) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.001 Le Houezec, J. (2001), Non-Dopaminergic Pathways in Nicotine Dependence.
Psycoloquy 12 (002) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.002 Oscarson, M. (2001), Nicotine Metabolism by the Polymorphic Cytochrome P450 2A6 (CYP2A6) Enzyme: Implications for Interindividual Differences in Smoking Behaviour. Psycoloquy 12 (003) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.003 Sivilotti, L. (2001), Nicotinic Receptors: Molecular Issues. Psycoloquy 12 (004) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.004 Smith, G. & Sachse, C. (2001), A Role for CYP2D6 in Nicotine Metabolism? Psycoloquy 12 (005) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.005 Wonnacott, S. (2001), Nicotinic Receptors in Relation to Nicotine Addiction. Psycoloquy 12 (006) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.006 Ben-Ze'ev, A. (2001), The Subtlety of Emotions. Psycoloquy 12 (007) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.007 Miller, G. F. (2001), The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature. Psycoloquy 12 (008) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.008 Krueger, J. (2001), Social Bias Engulfs the Field Psycoloquy 12 (009) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.009 Margolis, H. (2001), More On Modus Tollens and the Wason Task Psycoloquy 12 (010) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.010 Newen, A. (2001), Kinds of Self-Consciousness Psycoloquy 12 (011) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.011 Turner, R. (2001), An End to Great Publishing Myths Psycoloquy 12 (012) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.012 Storfer, M. D. (2001), The Parallel Increase in Brain Size, Intelligence, and Myopia. Psycoloquy 12 (013) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.013 Storfer, M. D. (2001), Interrelating Population Trends on Brain Size, Intelligence and Myopia. Psycoloquy 12 (014) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.014 Storfer, M. D. (2001), Brain and Eye Size, Myopia, and IQ. Psycoloquy 12 (015) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.015 Milner, P.M. (2001), Stimulus Equivalence, Attention and the Self: A Punless Reply to Gellatly's Smell Assemblies. Psycoloquy 12 (016) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.016 Navon, D. (2001), The Puzzle of Mirror Reversal: A View From Clockland. Psycoloquy 12 (017) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.017 Bolton, D. & Hill, J. (2001), Mind, Meaning & Mental Disorder: The Nature of Causal Explanation in Psychology & Psychiatry. Psycoloquy 12 (018) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.018 Meyer, J. (2001), Scientific Journals by and for Scientists. Psycoloquy 12 (019) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.019 Hutto, D. D. (2001), Syntax Before Semantics. Structure Before Content. Psycoloquy 12 (020) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.020 Mills, M. E. (2001), Authors of the World Unite: Liberating Academic Content From PublisherS' Restrictions. Psycoloquy 12 (021) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.021 Oullier, O. (2001), Does Scientific Publication Need A Peer Consensus? Psycoloquy 12 (022) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.022 Zachar, P. (2001), Psychological Concepts and Biological Psychiatry: A Philosophical Analysis. Psycoloquy 12 (023) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.023 Kramer, D. & Moore, M. (2001), Gender Roles, Romantic Fiction and Family Therapy. Psycoloquy 12 (024) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.024 Koch, C. (2001) Stroop Interference and Working Memory. 
Psycoloquy 12 (025) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.025 Abeles, M. (2001), Founders of Neuropsychology - Who is Ignored?. Psycoloquy 12 (026) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.026 Praetorius, N. (2001), Principles of Cognition, Language and Action: Essays on the Foundations of a Science of Psychology. Psycoloquy 12 (027) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.027 Sherman, J. A. (2001), Evolutionary Origin of Bipolar Disorder (EOBD). Psycoloquy 12 (028) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.028 Overgaard, M. (2001), The Role of Phenomenological Reports in Experiments on Consciousness. Psycoloquy 12 (029) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.029 Reifman, A. (2001), Heritability, Economic Inequality, and the Time Course of the "Bell Curve" Debate. Psycoloquy 12 (030) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.030 From andre at ee.usyd.edu.au Tue Jul 17 18:53:05 2001 From: andre at ee.usyd.edu.au (Andre van Schaik) Date: Tue, 17 Jul 2001 16:53:05 -0600 Subject: Research assistant/ PhD studentship at the University of Sydney Message-ID: <4.3.2.7.2.20010717164959.0176af08@cassius.ee.usyd.edu.au> Computer Engineering Laboratory School of Electrical and Information Engineering Research Assistant Analogue Integrated Circuit Design The appointee will join a team working on a project aiming at the development of biologically inspired analogue VLSI circuits for the development of smart sensors. Applicants with a bachelor or higher degree in electrical engineering or computer science with experience in circuit design, preferrably analogue circuit design in the area of neuromorphic engineering or neural networks, are invited. The appointment will be for an initial period of one year and renewable for up to three years, subject to satisfactory progress and funding. If applicable, the appointee can enrol for a higher degree in an area of the project. Salary: (HEO level 5) 37,000-40,000 AU$ per year depending on experience. Duty statement: The research assistant will: - design, integrate and test several analogue VLSI circuits - develop test set-ups for these circuits - simulate in MATLAB algorithms and high level models for the circuits - work independently REPORTING: Reports to Dr. A. van Schaik QUALIFICATIONS: B.E. or B. Computer Science or higher. SKILLS: - design, simulation, layout and testing of analogue VLSI circuits - MATLAB programming - knowledge of Unix and Windows - communication with others - writing reports and technical papers - work independently From clinton at compneuro.umn.edu Tue Jul 17 16:34:51 2001 From: clinton at compneuro.umn.edu (Kathleen Clinton) Date: Tue, 17 Jul 2001 15:34:51 -0500 Subject: NEURON Workshop Announcement Message-ID: <3B54A16B.4DD0BDA2@compneuro.umn.edu> ****************************** NEURON Workshop Announcement ****************************** Michael Hines and Ted Carnevale of Yale University will conduct a three to five day workshop on NEURON, a computer code that simulates neural systems. The workshop will be held from August 20-24, 2001 at the University of Minnesota Supercomputing Institute in Minneapolis, Minnesota. Registration is open to students and researchers from academic, corporate, and industrial organizations. Space is still available, and registrations will be accepted on a first-come, first-serve basis. **Topics and Format** Participants may attend the workshop for three or five days. 
The first three days cover material necessary for the most common applications in neuroscience research and education. The fourth and fifth days deal with advanced topics of users whose projects may require problem-specific customizations. IBM will provide computers with Windows and Linux platforms. Days 1 - 3 "Fundamentals of Using the NEURON Simulation Environment" The first three days will cover the material that is required for informed use of the NEURON simulation environment. The emphasis will be on applying the graphical interface, which enables maximum productivity and conceptual control over models while at the same time reducing or eliminating the need to write code. Participants will be building their own models from the start of the course. By the end of the third day they will be well prepared to use NEURON on their own to explore a wide range of neural phenomena. Topics will include: Integration methods --accuracy, stability, and computational efficiency --fixed order, fixed timestep integration --global and local variable order, variable timestep integration Strategies for increasing computational efficiency. Using NEURON's graphical interface to --construct models of individual neurons with architectures that range from the simplest spherical cell to detailed models based on quantitative morphometric data (the CellBuilder). --construct models that combine neurons with electronic instrumentation (i.e. capacitors, resistors, amplifiers, current sources and voltage sources) (the Linear Circuit Builder). --construct network models that include artificial neurons, model cells with anatomical and biophysical properties, and hybrid nets with both kinds of cells (the Network Builder). --control simulations. --display simulation results as functions of time and space. --analyze simulation results. --analyze the electrotonic properties of neurons. Adding new biophysical mechanisms. Uses of the Vector class such as --synthesizing custom stimuli --analyzing experimental data --recording and analyzing simulation results Managing modeling projects. Days 4 and 5 "Beyond the GUI" The fourth and fifth days deal with advanced topics for users whose projects may require problem-specific customizations. Topics will include: Advanced use of the CellBuilder, Network Builder, and Linear Circuit Builder. When and how to modify model specification, initialization, and NEURON's main computational loop. Exploiting special features of the Network Connection class for efficient implementation of use-dependent synaptic plasticity. Using NEURON's tools for optimizing models. Parallelizing computations. Using new features of the extracellular mechanism for --extracellular stimulation and recording --implementation of gap junctions and ephaptic interactions Developing new GUI tools. **Registration** For academic or government employees the registration fee is $155 for the first three days and $245 for the full five days. These fees are $310 and $490, respectively, for corporate or industrial participants. Registration forms can be obtained at http://www.compneuro.umn.edu/NEURONregistration.html or from the workshop coordinator, Kathleen Clinton, at clinton at compneuro.umn.edu or (612) 625-8424. **Lodging** Out-of-town participants may stay at the Holiday Inn Metrodome in Minneapolis. It is within walking distance of the Supercomputing Institute. Participants are responsible for making their own hotel reservations. 
When making reservations, participants should state that they are attending the NEURON Workshop. A small block of rooms is available until July 28, 2001. Reservations can be arranged by calling (800) 448-3663 or (612) 333-4646. From sandro at northwestern.edu Tue Jul 17 19:11:33 2001 From: sandro at northwestern.edu (Sandro Mussa-Ivaldi) Date: Tue, 17 Jul 2001 18:11:33 -0500 Subject: Postdoctoral position in Neural Engineering Message-ID: <3B54C625.BE9B3B3B@northwestern.edu> Identification, learning and control of tracking behaviors in a neuro-robotic system A position of postdoctoral fellow is available to investigate information processing within a hybrid neuro-robotic system. The system is composed of a small mobile-robot and of the brain of a sea lamprey. The biological and the artificial elements exchange electrical signals through a computer/electronic interface. Applications are encouraged from people with a strong background in Engineering and/or Physics and some research experience in Neural Computation. The research has both an experimental and a theoretical component. Some experience in techniques of electrophysiology is desirable. While a background on experimental neurophysiology is not a requisite, the candidate should be willing to become acquainted with these techniques and to carry out experimental work in conjunction with theoretical modeling. The Laboratory in which this research is carried out is affiliated with the Departments of Physiology and of Biomedical Engineering of Nortwestern University and with the Rehabilitation Institute of Chicago. Additional information about the laboratory and its research can be found at http://manip.smpp.nwu.edu If you are interested, send a CV and the names of three references via email to Sandro Mussa-Ivaldi (sandro at northwestern.edu) Northwestern is an equal opportunity, affirmative action educator, and employer. From glanzman at helix.nih.gov Wed Jul 18 13:32:09 2001 From: glanzman at helix.nih.gov (Dennis Glanzman) Date: Wed, 18 Jul 2001 13:32:09 -0400 Subject: DYNAMICAL NEUROSCIENCE IX: Timing, Persistence and Feedback Control Message-ID: <4.3.2.7.2.20010718132246.00af3100@helix.nih.gov> Satellite Symposium at the Society for Neuroscience Annual Meeting DYNAMICAL NEUROSCIENCE IX: Timing, Persistence and Feedback Control San Diego Convention Center San Diego, California Friday and Saturday, November 9-10, 2001 FEEDBACK is an inherent feature of all systems that adapt to their internal and external environments. In this year's meeting we will explore the convergence of theoretical work and experimental data on neuronal computations that highlight the feedback requirement for the systematic operation of the nervous system. This will be covered at the level of both transient and steady-state phenomena. Invited speakers will discuss how feedback (along with other control mechanisms) regulates the temporal processing of auditory and somatosensory information, plasticity, learning, and the balance of feedback controls that underlie the formation of receptive fields. Further topics will include the control of neuronal dynamics involved with sensorimotor tasks, such as the stabilization of eye and head position, and the temporal pattern of exploratory whisking in rat. The work presented at this year's meeting will center on the theme of how abstract analytical models can be used in focusing the direction of new experiments. 
Organizers: Dennis Glanzman, NIMH, NIH; David Kleinfeld, UCSD; Sebastian Seung, MIT; and Misha Tsodyks, Weizmann Institute. Invited Speakers: Ehud Ahissar, Margaret Livingstone, Cynthia Moss, Israel Nelken, Alexa Riehle, Robert Shapley, Patricia Sharp, Haim Sompolinsky, David Tank, Ofer Tchernichovski, and Kechen Zhang. Keynote Address: Bard Ermentrout Register for the Symposium https://secure.laser.net/cmpinc_net/neuro/register.html Submit a Poster http://www.cmpinc.net/dynamical/poster.html Meeting Agenda (pdf format) http://www.nimh.nih.gov/diva/sn2001/agenda.pdf For Further Information: about registration and other logistics, please contact Matt Burdetsky, Capital Meeting Planning, phone 703-536-4993, fax 703-536-4991, E-mail: matt at cmpinc.net. For information about the technical content of the meeting, please contact Dr. Dennis L. Glanzman, National Institute of Mental Health, NIH. Telephone 301-443-1576, Fax 301-443-4822, E-mail:glanzman at helix.nih.gov. From yann at research.att.com Wed Jul 18 22:33:00 2001 From: yann at research.att.com (Yann LeCun) Date: Wed, 18 Jul 2001 22:33:00 -0400 Subject: NIPS volume 0-13 available at "NIPS Online" Message-ID: <200107190232.WAA28568@surfcity.research.att.com> Dear Colleagues: Volume 0 and Volume 13 of the NIPS proceedings have just been added to the NIPS Online collection. The NIPS Online web site at http://nips.djvuzone.org offers free access to the full collection of NIPS Proceedings with full-text search capability. Our thanks go to Barak Pearlmutter whose skilled negociations allowed us to obtain the rights to publish volume 0. -- Yann LeCun [apologies if you receive multiple copies of this message] ____________________________________________________________________ Yann LeCun Head, Image Processing Research Dept. AT&T Labs - Research tel:+1(732)420-9210 fax:(732)368-9454 200 Laurel Avenue, Room A5-4E34 yann at research.att.com Middletown, NJ 07748, USA. http://www.research.att.com/~yann From steve at cns.bu.edu Thu Jul 19 22:28:45 2001 From: steve at cns.bu.edu (Stephen Grossberg) Date: Thu, 19 Jul 2001 22:28:45 -0400 Subject: motion integration and segmentation within and across apertures Message-ID: The following article is now available at http://www.cns.bu.edu/Profiles/Grossberg in HTML, PDF, and Gzipped Postscript. Grossberg, S., Mingolla, E., and Viswanathan, L. Neural Dynamics of Motion Integration and Segmentation Within and Across Apertures Vision Research, in press. Abstract A neural model is developed of how motion integration and segmentation processes, both within and across apertures, compute global motion percepts. Figure-ground properties, such as occlusion, influence which motion signals determine the percept. For visible apertures, a line's terminators do not specify true line motion. For invisible apertures, a line's intrinsic terminators create veridical feature tracking signals. Sparse feature tracking signals can be amplified before they propagate across position and are integrated with ambiguous motion signals within line interiors. This integration process determines the global percept. It is the result of several processing stages: Directional transient cells respond to image transients and input to a directional short-range filter that selectively boosts feature tracking signals with the help of competitive signals. Then a long-range filter inputs to directional cells that pool signals over multiple orientations, opposite contrast polarities, and depths. This all happens no later than cortical area MT. 
The directional cells activate a directional grouping network, proposed to occur within cortical area MST, within which directions compete to determine a local winner. Enhanced feature tracking signals typically win over ambiguous motion signals. Model MST cells which encode the winning direction feed back to model MT cells, where they boost directionally consistent cell activities and suppress inconsistent activities over the spatial region to which they project. This feedback accomplishes directional and depthful motion capture within that region. Model simulations include the barberpole illusion, motion capture, the spotted barberpole, the triple barberpole, the occluded translating square illusion, motion transparency and the chopsticks illusion. Qualitative explanations of illusory contours from translating terminators and plaid adaptation are also given. From rsun at cecs.missouri.edu Thu Jul 19 15:35:17 2001 From: rsun at cecs.missouri.edu (rsun@cecs.missouri.edu) Date: Thu, 19 Jul 2001 14:35:17 -0500 Subject: Two recent issues of Cognitive Systems Research Message-ID: <200107191935.f6JJZHo19154@ari1.cecs.missouri.edu> The TOC of the two recent issues of Cognitive Systems Research: --------------------------------------------------------- Table of Contents for Cognitive Systems Research Volume 2, Issue 1, April 2001 Ron Sun Individual action and collective function: From sociology to multi-agent learning 1-3 [Abstract] [Full text] (PDF 42.3 Kb) Cristiano Castelfranchi The theory of social functions: challenges for computational social science and multi-agent learning 5-38 [Abstract] [Full text] (PDF 425.2 Kb) Tom R. Burns and Anna Gomoliska Socio-cognitive mechanisms of belief change - Applications of generalized game theory to belief revision, social fabrication, and self-fulfilling prophesy 39-54 [Abstract] [Full text] (PDF 139.5 Kb) Michael L. Littman Value-function reinforcement learning in Markov games [Abstract] [Full text] (PDF 108 Kb) 55-66 Junling Hu and Michael P. Wellman Learning about other agents in a dynamic multiagent system [Abstract] [Full text] (PDF 515.3 Kb) 67-79 Maja J. Mataric Learning in behavior-based multi-robot systems: policies, models, and other agents 81-93 [Abstract] [Full text] (PDF 295.6 Kb) Table of Contents for Cognitive Systems Research Volume 2, Issue 2, May 2001 Rosaria Conte Emergent (info)institutions [Abstract] [Full text] (PDF 103.1 Kb) 97-110 L. Andrew Coward The recommendation architecture: lessons from large-scale electronic systems applied to cognition 111-156 [Abstract] [Full text] (PDF 858 Kb) Agns Guillot and Jean-Arcady Meyer The animat contribution to cognitive systems research [Abstract] [Full text] (PDF 67.2 Kb) 157-165 Sheila Garfield Review of Speech and language processing [Abstract] [Full text] (PDF 55.2 Kb) 167-172 * Full text files can be viewed and printed using the Adobe Acrobat Reader. Download from the Web site: http://www.cecs.missouri.edu/~rsun/journal.html http://www.elsevier.nl/locate/cogsys http://www.elsevier.com/locate/cogsys Copyright 2001, Elsevier Science, All rights reserved. =========================================================================== Prof. 
Ron Sun http://www.cecs.missouri.edu/~rsun CECS Department phone: (573) 884-7662 University of Missouri-Columbia fax: (573) 882 8318 201 Engineering Building West Columbia, MO 65211-2060 email: rsun at cecs.missouri.edu http://www.cecs.missouri.edu/~rsun http://www.cecs.missouri.edu/~rsun/journal.html http://www.elsevier.com/locate/cogsys =========================================================================== From zemel at cs.toronto.edu Fri Jul 20 16:13:05 2001 From: zemel at cs.toronto.edu (Richard Zemel) Date: Fri, 20 Jul 2001 16:13:05 -0400 Subject: NIPS*2001 web-site back online Message-ID: <01Jul20.161311edt.453165-19931@jane.cs.toronto.edu> The NIPS*2001 web site (http://www.cs.cmu.edu/Web/Groups/NIPS) has been down for a few days but the problem has now been fixed. Apologies and thanks to the people who notified us about this. On the positive side: we've had nearly 30 percent more submissions than last year, so it should be an even better conference than ever. ========================================== Neural Information Processing Systems Natural and Synthetic Monday, Dec. 3 -- Saturday, Dec. 8, 2001 Vancouver, British Columbia, Canada Whistler Ski Resort ========================================== Invited speakers: Barbara Finlay -- How brains evolve, and the consequences for computation Alison Gopnik -- Babies and Bayes-nets: Causal inference and theory-formation in children, chimps, scientists and computers Jon M. Kleinberg -- Decentralized network algorithms: Small-world phenomena and the dynamics of information Tom Knight -- TBA Judea Pearl -- Causal inference as an exercise in computational learning Shihab Shamma -- Common principles in auditory and visual processing Tutorials: Luc Devroye -- Nonparametric density estimation: VC to the rescue Daphne Koller & Nir Friedman -- Learning Bayesian networks from data Shawn Lockery -- Chemotaxis: Gradient ascent by simple living organisms and their neural networks. Christopher Manning -- Probabilistic linguistics and probabilistic models of natural language processing Bernhard Scholkopf -- SVM and Kernel methods Sebastian Thrun -- Probabilistic robotics From dominey at isc.cnrs.fr Sat Jul 21 18:53:19 2001 From: dominey at isc.cnrs.fr (Peter FORD DOMINEY) Date: Sun, 22 Jul 2001 00:53:19 +0200 Subject: postdoc position, computational neuroscience and language Message-ID: <3.0.5.32.20010722005319.00810c00@nimbus.isc.cnrs.fr> Please Post: Post-Doctoral Fellowship Announcement: Multiple-Cue Integration in Language Acquisition: A Simulation Study Starting in September/October 2001, a post-doctoral fellowship will be available for a period of 12-36 months in the Sequential Cognition and Language group, at the Institute of Cognitive Science (Institut des Sciences Cognitive) in Lyon France. The selected researcher will participate in an HFSP funded project addressing aspects of language acquisition through simulation, behavioral and brain imagery (ERP) studies. The position will involve: 1. analysis of natural language corpora 2. Neural network simulation of language acquisition processes based on the preceding analysis. An example of a this type of approach can be found in: Dominey PF, Ramus F (2000) Neural network processing of natural language: I. Sensitivity to serial, temporal and abstract structure of language in the infant. Language and Cognitive Processes, 15(1) 87-127 Qualifications for the candidate: 1. 
A PhD in a related discipline (computer science, computational neuroscience, cognitive science) and a strong computational neuroscience background, with experience in the Linux/Unix C environment and in cognitive neuroscience simulation. 2. Familiarity with the Childes language database and associated analysis tools, and/or experience/interest in computational aspects of language acquisition. 3. Fluency in French and English. Interested candidates should send a letter of intention, a CV and three letters of recommendation to Peter F. Dominey at the address below. Applications will continue to be accepted until the position is filled. Peter F. Dominey, Ph.D., Institut des Sciences Cognitives, CNRS UPR 9075, 67 boulevard Pinel, 69675 BRON Cedex. Tel Standard: 33(0)4.37.91.12.12, Tel Direct: 33(0)4.37.91.12.66, Fax: 33(0)4.37.91.12.10, dominey at isc.cnrs.fr, WEB: http://www.isc.cnrs.fr

From erol at starlab.net Sun Jul 22 16:45:48 2001 From: erol at starlab.net (erol at starlab.net) Date: Sun, 22 Jul 2001 22:45:48 +0200 (CEST) Subject: PhD and PostDoc research positions for the SWARM-BOTS project Message-ID: <995834748.3b5b3b7c2d763@127.0.0.1>

Please post: SWARM-BOTS project: PhD and PostDoc research positions, IRIDIA - Université Libre de Bruxelles, Belgium. We are currently seeking a PhD student and a PostDoc to join a research team in swarm intelligence and distributed autonomous robotics at IRIDIA, the artificial intelligence lab of the Université Libre de Bruxelles, Belgium. The candidate will work on the SWARM-BOTS project. The main scientific objective of the SWARM-BOTS project is to study a novel approach to the design and implementation of self-organising and self-assembling artefacts. This novel approach finds its theoretical roots in recent studies in swarm intelligence and in ant algorithms, that is, in studies of the self-organising and self-assembling capabilities shown by social insects and other animal societies. The area of competence of candidates should be in at least one of the following disciplines: Computer Science, Computational Intelligence, Autonomous Robotics, Self-organizing Systems, Complex Systems. The researchers we are looking for should be experienced programmers in procedural or object-oriented programming languages and should have knowledge of modern operating systems. The PhD student should possess a degree that allows him or her to embark on a doctoral program. Female researchers are explicitly encouraged to apply for the offered positions. We guarantee that the selection process, based solely on the research records, will give equal opportunities to female and male researchers. The appointments will be for 3 years from October 1, 2001. The positions will be filled as adequate candidates become available; therefore, there is no submission deadline. For further information see http://iridia.ulb.ac.be/~mdorigo/IRIDIA-Swarmbots-Positions.html. Please send a CV to Dr. Marco Dorigo (mdorigo at ulb.ac.be). ------------------------------------ Marco Dorigo, Ph.D., Maître de Recherches du FNRS, IRIDIA CP 194/6, Université Libre de Bruxelles, Avenue Franklin Roosevelt 50, 1050 Bruxelles, Belgium. mdorigo at ulb.ac.be http://iridia.ulb.ac.be/~mdorigo/ Tel +32-2-6503169 GSM +32-478-301233 Fax +32-2-6502715 Secretary +32-2-6502729

From j-patton at northwestern.edu Wed Jul 25 14:37:43 2001 From: j-patton at northwestern.edu (Jim Patton) Date: Wed, 25 Jul 2001 13:37:43 -0500 Subject: Postdoc Position Available Message-ID: <4.2.0.58.20010725133035.00aa4b80@merle.acns.nwu.edu>

Postdoctoral Research Associate in Motor Control/Neuromechanics/Robotics, Sensory Motor Performance Program, Northwestern University and The Rehabilitation Institute of Chicago [Posted 25-Jul-2001] _________________________________________ SMPP INFO: The Sensory Motor Performance Program (SMPP) at the Rehabilitation Institute of Chicago (RIC) is devoted to the study of musculoskeletal, neuromuscular and sensory disorders that are associated with abnormal control of posture and movement. Faculty members have appointments in the Northwestern University Medical School and the Northwestern University Engineering School. Approximately thirty-five research staff -- including post-doctoral research associates, graduate students, and support staff -- make up a unique team of physicians, engineers, mathematicians, physiologists, and occupational & physical therapists for the study of motor and sensory dysfunctions. Our studies on healthy individuals, patients and mathematical models are internationally renowned in the fields of biomechanics, neurophysiology, and rehabilitation research. See: SMPP is a part of the Rehabilitation Institute Research Corporation, the research arm of the Rehabilitation Institute of Chicago. It is academically affiliated with the Department of Physical Medicine and Rehabilitation at Northwestern University Medical School. Many members of our staff are affiliated with the departments of Physical Medicine and Rehabilitation, Biomedical Engineering, Mechanical Engineering, Physiology, Physical Therapy, the Institute for Neuroscience and other departments at Northwestern University and in the Chicago area. JOB DUTIES: We are currently seeking a Postdoctoral Research Associate to join our motor group under a training grant provided by the National Center on Medical Rehabilitation Research. Research may involve, but is not restricted to, quantitative electromyography, neural signal processing, computational analysis of neuromechanics, and neural engineering, robotics and/or the human-machine interface. The applicant will benefit from the mentorship of a distinguished group of senior faculty. JOB EXPERIENCE: Applicants should have a doctoral degree and will be expected to have a record of research in one or more of the following areas: motor control, biomechanics, neurophysiology, biomedical engineering, or neuroscience. Some mathematical background and/or programming ability preferred. APPLICATION REQUIREMENTS: Applicants should be US nationals or permanent residents. EMAIL a cover letter, vita, and list of three references to: James Patton, Ph.D., 345 E. Superior St., Suite 1406, Chicago, Illinois 60611, USA. 312-238-1277, 312-238-2208 Fax. SALARY RANGE: Contingent on educational background & experience. _________________________________________ RIC is an Affirmative Action/Equal Opportunity Employer. Women and minority applicants are encouraged to apply. Hiring is contingent on eligibility to work in the United States. ______________________________________________________________________ J A M E S P A T T O N , P H . D . Research Scientist, Sensory Motor Performance Prog., Rehabilitation Institute of Chicago; Physical Med & Rehabilitation, Northwestern University Med School. 345 East Superior, Room 1406, Chicago, IL 60611. 312-238-1277 (OFFICE) -2208 (FAX) -1232 (LAB) -3381 (SECRETARY). CELL PHONE MESSAGING (<150 char.): 8473341056 at msg.myvzw.com ______________________________________________________________________

From swatanab at pi.titech.ac.jp Wed Jul 25 22:07:38 2001 From: swatanab at pi.titech.ac.jp (Sumio Watanabe) Date: Thu, 26 Jul 2001 11:07:38 +0900 Subject: Papers: NN Learning Theory and Algebraic Geometry Message-ID: <00a001c11577$be993e10$988a7083@titech42lg8r0u>

Dear Connectionists, The following papers are available at http://watanabe-www.pi.titech.ac.jp/~swatanab/index.html. I would like to announce that the reason why the hierarchical structure is important in practical learning machines is now being clarified. Also please visit the page of our special session, http://watanabe-www.pi.titech.ac.jp/~swatanab/kes2001.html. Comments and remarks are welcome. Thank you. Sumio Watanabe, P&I Lab., Tokyo Institute of Technology, swatanab at pi.titech.ac.jp ***** (1) S. Watanabe, "Learning efficiency of redundant neural networks in Bayesian estimation," to appear in IEEE Trans. on NN. The generalization error of a three-layer neural network in a redundant state is clarified. The method in this paper is not algebraic but completely analytic. It is shown that the stochastic complexity of the three-layer perceptron can be calculated by expanding the determinant of the singular information matrix. It is shown that, if the learner becomes more redundant compared with the true distribution, then the increase of the stochastic complexity becomes smaller. Non-identifiable models are compared with the regular statistical models from the statistical model selection point of view, and it is shown that Bayesian estimation is appropriate for layered learning machines in almost redundant states. (2) S. Watanabe, "Algebraic geometrical methods for hierarchical learning machines," to appear in Neural Networks. This paper establishes the algebraic geometrical methods in neural network learning theory. The learning curve of a non-identifiable model is determined by the pole of the zeta function of the Kullback information, and this pole can be found by resolution of singularities. The blowing-up technique in algebraic geometry is applied to the multi-layer perceptron, and its learning efficiency is obtained systematically. Even when the true distribution is not contained in the parametric models, singularities in the parameter space make the learning curve smaller than all the curves of the smaller models contained in the machine. *** Please compare these two papers. *** End

From jzhu at stanford.edu Thu Jul 26 17:55:36 2001 From: jzhu at stanford.edu (Ji Zhu) Date: Thu, 26 Jul 2001 14:55:36 -0700 (PDT) Subject: kernel logistic regression and the import vector machine Message-ID:

The following short paper is available at http://www.stanford.edu/~jzhu/research/nips01.ps Kernel Logistic Regression and the Import Vector Machine, Ji Zhu, Trevor Hastie, Dept. of Statistics, Stanford University. Abstract: The support vector machine (SVM) is known for its good performance in binary classification, but its extension to multi-class classification is still an on-going research issue.
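As background for the kernel logistic regression that the import vector machine builds on, here is a minimal, hedged sketch of binary KLR fit by gradient ascent on the penalized log-likelihood (NumPy only; the Gaussian kernel, step size, and function names are illustrative choices, not taken from the paper):

import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gram matrix of the Gaussian (RBF) kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_klr(X, y, gamma=0.5, lam=1e-2, lr=0.5, n_iter=2000):
    # Binary kernel logistic regression: f(x) = sum_i alpha_i K(x, x_i) + b,
    # fit by gradient ascent on (1/n)*log-likelihood - (lam/2)*alpha' K alpha.
    # y is expected to be coded 0/1.
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    alpha, b = np.zeros(n), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(K @ alpha + b)))   # P(y = 1 | x_i)
        alpha += lr * (K @ (y - p) / n - lam * (K @ alpha))
        b += lr * (y - p).mean()
    return alpha, b

def predict_proba(X_train, alpha, b, X_new, gamma=0.5):
    # Class-1 probabilities for new points.
    return 1.0 / (1.0 + np.exp(-(rbf_kernel(X_new, X_train, gamma) @ alpha + b)))

Note that this plain KLR expands a kernel basis function at every training point; the import vector machine described next keeps only a small subset of "import" points, which is where its computational advantage comes from.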
From terry at salk.edu Fri Jul 27 15:35:26 2001 From: terry at salk.edu (Terry Sejnowski) Date: Fri, 27 Jul 2001 12:35:26 -0700 (PDT) Subject: NEURAL COMPUTATION 13:9 In-Reply-To: <200103072248.f27MmVH58010@kepler.salk.edu> Message-ID: <200107271935.f6RJZQL14567@purkinje.salk.edu>

Neural Computation - Contents - Volume 13, Number 9 - September 1, 2001

ARTICLE
Modeling Neuronal Assemblies: Theory and Implementation
  J. Eggert and J. L. van Hemmen

NOTES
On a Class of Support Vector Kernels Based on Frames in Function Hilbert Spaces
  J. B. Gao, C. J. Harris and S. R. Gunn
Extraction of Specific Signals with Temporal Structure
  Allan Kardec Barros and Andrzej Cichocki

LETTERS
Correlation Between Uncoupled Conductance-Based Integrate-and-Fire Neurons Due to Common and Synchronous Presynaptic Firing
  Sybert Stroeve and Stan Gielen
Attention Modulation of Neural Tuning Through Peak and Base Rate
  Hiroyuki Nakahara, Si Wu, and Shun-ichi Amari
Democratic Integration: Self-Organized Integration of Adaptive Cues
  Jochen Triesch and Christoph von der Malsburg
An Auto-Associative Neural Network Model of Paired-Associate Learning
  Daniel S. Rizzuto and Michael J. Kahana
Simple Recurrent Networks Learn Context-Free and Context-Sensitive Languages by Counting
  Paul Rodriguez
Training v-Support Vector Classifiers: Theory and Algorithms
  Chih-Chung Chang and Chih-Jen Lin
A Tighter Bound for Graphical Models
  M. A. R. Leisink and H. J. Kappen

-----
ON-LINE - http://neco.mitpress.org/

SUBSCRIPTIONS - 2001 - VOLUME 13 - 12 ISSUES
                  USA     Canada*   Other Countries
Student/Retired   $60     $64.20    $108
Individual        $88     $94.16    $136
Institution       $460    $492.20   $508
* includes 7% GST

MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902.
Tel: (617) 253-2889  FAX: (617) 577-1545  journals-orders at mit.edu
-----

From X.Yao at cs.bham.ac.uk Fri Jul 27 15:00:31 2001 From: X.Yao at cs.bham.ac.uk (Xin Yao) Date: Fri, 27 Jul 2001 20:00:31 +0100 (BST) Subject: Job available: Research Fellow in Evolutionary Computation Message-ID:

----------------------------------------------------
VACANCY: Research Fellow in Evolutionary Computation (Ref. No. S35726/01)
----------------------------------------------------
http://www.bham.ac.uk/personnel/s35726.htm

Applications are invited for a research fellowship in evolutionary computation (available for up to two years, full-time) in the School of Computer Science, the University of Birmingham, England. We are particularly interested in candidates with a background in co-evolution, evolvable hardware or a closely related area. However, outstanding applicants from any areas of evolutionary computation will be considered seriously. Applicants should have or be about to complete a PhD in Computer Science, Computer Engineering, Electrical Engineering, or a closely related field.
The successful candidate is expected to have research experience and a record of outstanding quality in evolutionary computation or a closely related area, as evidenced by publications in leading international journals or conference proceedings. The research potential of a new PhD (or a nearly completed PhD) may also be judged from his/her PhD thesis. A strong background and experience in computational studies and excellent analytical and programming skills will be highly valued. The successful applicant, who is to work with Prof. Xin Yao, must be able to work effectively in a team environment and is required to contribute to the School's teaching and administrative activities.

The starting salary for the post is on Research Grade 1A in the range GBP17,278 - GBP19,293 per annum (depending on experience and qualifications).

The School of Computer Science has a very strong group in evolutionary and neural computation with an international reputation. Staff members in this area include:
Dr. John Bullinaria (Neural Networks, Evolutionary Computation, Cog. Sci.)
Dr. Jun He (Evolutionary Computation)
Dr. Julian Miller (Evolutionary Computation, Machine Learning)
Dr. Riccardo Poli (Evolutionary Computation, GP, Computer Vision, NNs, AI)
Dr. Jon Rowe (Evolutionary Computation, AI)
Dr. Thorsten Schnier (Evolutionary Computation, Engineering Design)
Prof. Xin Yao (Evolutionary Computation, NNs, Machine Learning, Optimisation)
Other staff members also working in these areas include Prof. Aaron Sloman (evolvable architectures of mind, co-evolution, interacting niches) and Dr. Jeremy Wyatt (evolutionary robotics, classifier systems).

For further particulars, please visit http://www.bham.ac.uk/personnel/s35726.htm For informal enquiries, please contact Prof. Xin Yao, phone (+44) 121 414 3747, email: X.Yao at cs.bham.ac.uk. His research interests can be found at http://www.cs.bham.ac.uk/~xin/research/.

CLOSING DATE FOR RECEIPT OF APPLICATIONS: 21 August 2001 (late applications may be considered)

APPLICATION FORMS RETURNABLE TO: The Director of Personnel Services, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, England
RECRUITMENT OFFICE FAX NUMBER: +44 121 414 4802
RECRUITMENT OFFICE TELEPHONE NUMBER: +44 121 414 6486
RECRUITMENT OFFICE E-MAIL ADDRESS: h.h.luong at bham.ac.uk

From bis at prip.tuwien.ac.at Fri Jul 27 09:42:32 2001 From: bis at prip.tuwien.ac.at (Horst Bischof) Date: Fri, 27 Jul 2001 15:42:32 +0200 Subject: CfP Pattern Recognition, Special Issue on Kernel and Subspace Methods for Computer Vision Message-ID: <3B616FC8.8050205@prip.tuwien.ac.at>

Pattern Recognition
The Journal of the Pattern Recognition Society
Special Issue on Kernel and Subspace Methods for Computer Vision

Guest Editors:
Ales Leonardis, Faculty of Computer and Information Science, University of Ljubljana, Trzaska 25, 1001 Ljubljana, Slovenia, alesl at fri.uni-lj.si
Horst Bischof, Pattern Recognition and Image Processing Group, Vienna University of Technology, Favoritenstr. 9/1832, A-1040 Vienna, Austria, bis at prip.tuwien.ac.at

This Pattern Recognition Special Issue will address new developments in the area of kernel and subspace methods related to computer vision. High-quality original journal paper submissions are invited. The topics of interest include (but are not limited to): Support Vector Machines, Independent Component Analysis, Principal Component Analysis, Mixture Modeling, Canonical Correlation Analysis, etc.,
applied to computer vision problems such as: Object Recognition, Navigation and Robotics, Medical Imaging, 3D Vision, etc. All submitted papers will be peer reviewed. Only high-quality, original submissions will be accepted for publication in the Special Issue---in accordance with the Pattern Recognition guidelines (http://www.elsevier.nl/inca/publications/store/3/2/8/index.htt).

Submission Timetable
Submission of full manuscript: November 30, 2001
Notification of Acceptance: March 29, 2002
Submission of revised manuscript: End of June 2002
Final Decision: August 2002
Final papers: September 2002

Submission Procedure
All submissions should follow the Pattern Recognition Guidelines and should be submitted electronically via anonymous ftp in either postscript or pdf format (compressed with zip or gzip). Files should be named by the surname of the first author, i.e., surname.ps.gz; for multiple submissions surname1, surname2, ... should be used. Papers should be uploaded to the following ftp site by the deadline of 30th November 2001.
ftp ftp.prip.tuwien.ac.at [anonymous ftp, i.e.: Name: ftp  Password: < your email address > ]
cd sipr
binary
put .ext
quit
After uploading the paper, authors should email the guest editor Ales Leonardis giving full details of the paper title and authors.

From goldfarb at unb.ca Sat Jul 28 11:59:20 2001 From: goldfarb at unb.ca (Lev Goldfarb) Date: Sat, 28 Jul 2001 12:59:20 -0300 (ADT) Subject: What is a structural representation? Message-ID:

(Our apologies if you receive multiple copies of this announcement)

Dear colleagues,

The following paper, titled "What is a structural representation?" ( http://www.cs.unb.ca/profs/goldfarb/struct.ps ), which we believe to be, in a sense, the first one formally addressing the issue of structural representation and proposing the formal ETS model, should be of particular interest to researchers in pattern recognition and machine learning. It implies, in particular, that the properly understood (non-trivial) "structural" representations cannot be "replaced" by the classical numeric, e.g. vector-space-based, representations. Moreover, the concept of "structural" representation emerging from the ETS model is not the one familiar to all of you. (The abstract of the paper is appended below; for a change, the default paper size is A4. Unfortunately for some, the language of the paper is of necessity quite formal, since the main concepts do not have any analogues and therefore must be treated carefully.)

Although the proposed model was motivated by, and will be applied to, "real" problems coming from such areas as pattern recognition, machine learning, data mining, cheminformatics, bioinformatics, and many others, in view of the required radical rethinking that must now go into its implementations, at this time we can only offer a very preliminary discussion of the model's potential applications in chemistry, in the following companion paper: http://www.cs.unb.ca/profs/goldfarb/cadd.ps (please keep in mind that the last paper was written on the basis of an earlier draft of the paper we are announcing now and will be updated accordingly next month).

We intend to discuss the paper shortly on the INDUCTIVE mailing list. (To subscribe, send to INDUCTIVE-SERVER at UNB.CA the following text: SUBSCRIBE INDUCTIVE FIRSTNAME LASTNAME)

We would greatly appreciate any comments regarding both of the above papers.
Best regards,

Lev Goldfarb                        Tel: 506-458-7271
Faculty of Computer Science         Tel (secret.): 453-4566
University of New Brunswick         Fax: 506-453-3566
P.O. Box 4400                       E-mail: goldfarb at unb.ca
Fredericton, N.B., E3B 5A3          Home tel: 506-455-4323
Canada                              http://www.cs.unb.ca/profs/goldfarb/goldfarb.htm

*****************************************************************************

WHAT IS A STRUCTURAL REPRESENTATION?

Lev Goldfarb, Oleg Golubitsky, Dmitry Korkin
Faculty of Computer Science
University of New Brunswick
Fredericton, NB, Canada

We outline a formal foundation for a "structural" (or "symbolic") object/event representation, the necessity of which is acutely felt in all sciences, including mathematics and computer science. The proposed foundation incorporates two hypotheses: 1) the object's formative history must be an integral part of the object representation, and 2) the process of object construction is irreversible, i.e. the "trajectory" of the object's formative evolution does not intersect itself. The last hypothesis is equivalent to the generalized axiom of (structural) induction. Some of the main difficulties associated with the transition from the classical numeric to the structural representations appear to be related precisely to the development of a formal framework satisfying these two hypotheses. The concept of (inductive) class--which has inspired the development of this approach to structural representation--differs fundamentally from the known concepts of class. In the proposed evolving transformations system (ETS) model, the class is defined by the transformation system--a finite set of weighted transformations acting on the class progenitor--and the generation of the class elements is associated with the corresponding generative process, which also induces the class typicality measure. Moreover, in the ETS model, the fundamental role of the object's class in the object's representation is clarified: the representation of an object must include the class. From the point of view of the ETS model, the classical discrete representations, e.g. strings and graphs, now appear as incomplete special cases, the proper completion of which should incorporate the corresponding formative histories, i.e. those of the corresponding strings or graphs.

From Zoubin at gatsby.ucl.ac.uk Tue Jul 31 11:57:13 2001 From: Zoubin at gatsby.ucl.ac.uk (Zoubin Ghahramani) Date: Tue, 31 Jul 2001 16:57:13 +0100 (BST) Subject: Director of Gatsby Unit, University College London, UK Message-ID: <200107311557.QAA11539@cajal.gatsby.ucl.ac.uk>

The Gatsby Unit is seeking a new Director. The advertisement is enclosed below; note the broad scope of research at the Unit, currently directed by Geoff Hinton. I would also be happy to answer informal confidential enquiries by email.

Zoubin Ghahramani
zoubin at gatsby.ucl.ac.uk

..........................................................................

UNIVERSITY COLLEGE LONDON
Gatsby Computational Neuroscience Unit
Director

University College London and the Gatsby Charitable Foundation are seeking a new Director for the Gatsby Computational Neuroscience Unit. The successful candidate would also be considered for appointment to the Chair of Computational Neuroscience at UCL. The Unit is one of the leading computational neuroscience centres in the world, with staff having minimal teaching responsibilities. It is based in Queen Square, London, in very close proximity to the Institute of Cognitive Neuroscience, the Functional Imaging Lab and the Institute of Neurology.
Currently four senior staff, five postdoctoral researchers and ten PhD students receive substantial long-term funding from the Foundation. The Director will be expected to provide scientific direction and intellectual leadership. His/her breadth of interests will be more critical than the specific research area. Applications are invited from scientists who can lead a group on computationally and theoretically sophisticated research within the broad area bounded by neuroscience, cognition and machine learning. Salary will be at a level appropriate to the international standing of the successful applicant.

Applications including a CV should be sent to Janice Hankes, Gatsby Computational Neuroscience Unit, UCL, Alexandra House, 17 Queen Square, London WC1N 3AR, UK or by email to Janice at gatsby.ucl.ac.uk Enquiries or expressions of interest to t.shallice at ucl.ac.uk For further information on the Unit see http://www.gatsby.ucl.ac.uk

The closing date for applications is Monday 3rd September 2001.

Working Toward Equal Opportunity

..........................................................................