From dvprokhorov at gmail.com Thu Sep 1 21:51:30 2005 From: dvprokhorov at gmail.com (Danil Prokhorov) Date: Thu, 1 Sep 2005 21:51:30 -0400 Subject: Connectionists: Senior Research Scientist Position at TTC, CI/AI/machine learning Message-ID: TOYOTA TECHNICAL CENTER, U.S.A., INC. Toyota Technical Center (TTC) is Toyota's largest engineering and research organization in North America, located in Ann Arbor, MI. TTC is seeking an exceptional individual for the full-time position of Senior Research Scientist in the intersection of Computational Intelligence and Robotics research activities, to become a member of the Technical Research Department (TRD). TTC prefers a researcher with experience in sensor fusion for automotive and robotic systems. This position will offer opportunities for collaboration with leading North American and global research institutions. The candidate should also have experience in mentoring junior researchers and have some research project management experience. This research is intended to break new ground and advance the state of the art. Job Duties and Responsibilities: Apply special knowledge and talents to develop and execute new, independent research projects for automotive and robotic applications Provide guidance to on-site researchers and research assistants Interact with world renowned and leading researchers in applicable areas Host visiting Toyota engineers and scientists Provide deliverables such as written and oral reports, as well as publications for peer-reviewed journals and conferences. Qualifications: Experience in Artificial Intelligence, intelligent signal processing and sensor-fusion research Experience in automotive safety systems is preferred Experience in robotic research and testing Experience in mentoring junior researchers Experience in research project management Familiarity with computational intelligence is preferred (e.g., neural networks, fuzzy logic, evolutionary algorithms, data mining) Ph.D. or Sc.D. 
in a related field of study Good written and oral communication skills Ability to work well with others in a team environment A willingness to travel Position is located in Ann Arbor, MI. The position provides a competitive salary and excellent benefits and all of the amenities of our campus and surrounding community. Please apply online to Toyota using the following URL (preferred way to apply): http://tmm.recruitsoft.com/servlets/CareerSection?art_ip_action=FlowDispatcher&flowTypeNo=13&pageSeq=2&reqNo=25222&art_servlet_language=en&csNo=10103 or via e-mail to Debra Adams, dadams at ttc-usa.com From arjen.van.ooyen at falw.vu.nl Fri Sep 2 06:09:05 2005 From: arjen.van.ooyen at falw.vu.nl (Arjen van Ooyen) Date: Fri, 02 Sep 2005 12:09:05 +0200 Subject: Connectionists: New Paper Message-ID: <431824C1.3020406@falw.vu.nl> Attention-Gated Reinforcement Learning of Internal Representations for Classification Pieter R. Roelfsema & Arjen van Ooyen, Neural Computation (2005) 17: 2176-2214. Abstract Animal learning is associated with changes in the efficacy of connections between neurons. The rules that govern this plasticity can be tested in neural networks. Rules that train neural networks to map stimuli onto outputs are given by supervised learning and reinforcement learning theories. Supervised learning is efficient but biologically implausible. In contrast, reinforcement learning is biologically plausible but comparatively inefficient. It lacks a mechanism that can identify units at early processing levels that play a decisive role in the stimulus-response mapping. Here we show that this so-called credit assignment problem can be solved by a new role for attention in learning. 
There are two factors in our new learning scheme that determine synaptic plasticity: (1) a reinforcement signal that is homogeneous across the network and depends on the amount of reward obtained after a trial, and (2) an attentional feedback signal from the output layer that limits plasticity to those units at earlier processing levels that are crucial for the stimulus-response mapping. The new scheme is called attention-gated reinforcement learning (AGREL). We show that it is as efficient as supervised learning in classification tasks. AGREL is biologically realistic and integrates the role of feedback connections, attention effects, synaptic plasticity, and reinforcement learning signals into a coherent framework. For full text, go to http://www.bio.vu.nl/enf/vanooyen/papers/agrel2005_abstract.html -- Dr. Arjen van Ooyen Center for Neurogenomics and Cognitive Research (CNCR) Department of Experimental Neurophysiology Vrije Universiteit De Boelelaan 1085 1081 HV Amsterdam The Netherlands E-mail: arjen.van.ooyen at falw.vu.nl Phone: +31.20.5987090 Fax: +31.20.5987112 Room: B329 Web: http://www.bio.vu.nl/enf/vanooyen From krista at james.hut.fi Sun Sep 4 10:17:54 2005 From: krista at james.hut.fi (Krista Lagus) Date: Sun, 4 Sep 2005 17:17:54 +0300 Subject: Connectionists: Unsupervised segmentation of words into morphemes -- Challenge 2005 Message-ID: Unsupervised segmentation of words into morphemes -- Challenge 2005 http://www.cis.hut.fi/morphochallenge2005/ Part of the EU Network of Excellence PASCAL Challenge Program. Participation is open to all. The objective of the Challenge is to design a statistical machine learning algorithm that segments words into the smallest meaning-bearing units of language, morphemes. Ideally, these are basic vocabulary units suitable for different tasks, such as text understanding, machine translation, information retrieval, and statistical language modeling. 
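To make the segmentation task concrete, here is a toy dynamic-programming sketch. It assumes a small hand-written morpheme lexicon with costs (in a real challenge entry the lexicon itself must be learned from raw text, unsupervised); the lexicon, costs, and word below are purely illustrative.

```python
# Toy morpheme segmentation: given a (hypothetical) morpheme lexicon with
# costs, find the lowest-cost split of a word by dynamic programming.
# The challenge asks participants to *learn* such a lexicon without
# supervision; this sketch only shows the segmentation step.
import math

def segment(word, lexicon):
    """Return the minimum-cost segmentation of `word` into lexicon morphemes."""
    n = len(word)
    best = [math.inf] * (n + 1)   # best[i] = cost of segmenting word[:i]
    back = [0] * (n + 1)          # start index of the last morpheme in word[:i]
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(i):
            piece = word[j:i]
            if piece in lexicon and best[j] + lexicon[piece] < best[i]:
                best[i] = best[j] + lexicon[piece]
                back[i] = j
    if math.isinf(best[n]):
        return [word]             # no segmentation found: keep the word whole
    morphs, i = [], n
    while i > 0:
        morphs.append(word[back[i]:i])
        i = back[i]
    return morphs[::-1]

# Hypothetical mini-lexicon (cost = -log probability, lower is cheaper).
lexicon = {"un": 2.0, "break": 3.0, "able": 2.5, "breakable": 7.0, "s": 1.0}
print(segment("unbreakable", lexicon))   # -> ['un', 'break', 'able']
```

Splitting into "un" + "break" + "able" costs 7.5, cheaper than "un" + "breakable" at 9.0, so the finer segmentation wins.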
The scientific goals are: * To learn of the phenomena underlying word construction in natural languages * To discover approaches suitable for a wide range of languages * To advance machine learning methodology The results will be presented in a workshop arranged in connection with other PASCAL challenges on machine learning. Program Committee (the list is still growing): Levent Arslan, Boğaziçi University Samy Bengio, IDIAP Tolga Cilogu, Middle-East Technical University John Goldsmith, University of Chicago Kadri Hacioglu, Colorado University Chun Yu Kit, City University of Hong Kong Dietrich Klakow, Saarland University Jan Nouza, Technical University of Liberec Erkki Oja, Helsinki University of Technology Please read the rules and see the schedule. The datasets are available for download at http://www.cis.hut.fi/morphochallenge2005/ We are looking forward to an interesting competition! Mikko Kurimo, Mathias Creutz and Krista Lagus Neural Networks Research Centre, Helsinki University of Technology The organizers -------------------------------------------------------------------- Dr. Krista Lagus Krista.Lagus at hut.fi www.cis.hut.fi/krista/ Neural Networks Research Centre, Helsinki University of Technology P.O.Box 5400 (Konemiehentie 2, Espoo), FIN-02015 HUT, Finland Tel.+358-9-451 4459 Fax +358-9-451 3277 From munakata at psych.colorado.edu Sun Sep 4 23:06:55 2005 From: munakata at psych.colorado.edu (Yuko Munakata) Date: Sun, 4 Sep 2005 21:06:55 -0600 Subject: Connectionists: two cognitive faculty positions at CU Boulder Message-ID: <200509042106.56869.munakata@psych.colorado.edu> The University of Colorado Boulder has two cognitive faculty searches this year -- one in Psychology and one in the Institute of Cognitive Science. Both searches have interest in candidates using computational approaches.
Yuko Munakata ******************************************* The Department of Psychology, University of Colorado, Boulder, invites applications for a tenure-track position in cognitive psychology beginning August 2006. The department anticipates hiring at the assistant professor level. The University of Colorado, Boulder, is committed to diversity and equality in education and employment. In that spirit, applications at all levels will be considered from those who would strengthen the department's diversity. Candidates in any area of cognitive psychology will be considered. Special consideration will be given to candidates whose research interests include cognitive neuroscience, development, language, higher-order cognition, or object perception. The successful candidate will be expected to teach at the undergraduate and graduate levels, to supervise undergraduate and graduate students in research, and to maintain an active research program. Salary is competitive and dependent upon experience. All applicants should send a curriculum vitae, a statement of research interests, a statement of undergraduate and graduate teaching interests, representative research papers, and at least three letters of recommendation to: Yuko Munakata, Chair, Cognitive Search Committee, Department of Psychology, University of Colorado, 345 UCB, Boulder, CO 80309-0345. We will begin reviewing applications November 15, 2005 and will continue to review applications until the position is filled. ******************************************* Cognitive Scientist/Psychologist, tenure track position The Institute of Cognitive Science at the University of Colorado invites applications for a full-time tenure-track position at the assistant professor level, with a starting date of Fall 2006. 
The Institute is a multidisciplinary unit with representation from the departments of Psychology; Computer Science; Education; Linguistics; Speech, Language & Hearing Sciences; Philosophy; and Architecture & Planning. Because the individual hired for this Institute position will have an academic appointment within the Department of Psychology, we seek applicants with a strong record of research that integrates Cognitive Science with cognitive processes including, but not limited to, judgment and decision-making, language and discourse processes, learning and memory, object processing, or higher-order cognition. Candidates taking a developmental, neuroscience, computational, or experimental approach are all welcome. We will give strongest consideration to applicants whose work demonstrates an ability and commitment to interdisciplinary research. Duties include graduate and undergraduate teaching, research, research supervision, and service. Applicants should send curriculum vitae, copies of representative publications, a teaching statement, a research summary, and letters from three referees to: Dr. Donna Caccamise Associate Director Institute of Cognitive Science 344 UCB University of Colorado Boulder, CO 80309 For fullest consideration, please apply by November 15, 2005. Applications will continue to be accepted after this date until the position is filled. Email inquiries may be sent to donnac at psych.colorado.edu. The University of Colorado is an Equal Opportunity/Affirmative Action Employer. From osporns at indiana.edu Mon Sep 5 12:48:26 2005 From: osporns at indiana.edu (Olaf Sporns) Date: Mon, 05 Sep 2005 11:48:26 -0500 Subject: Connectionists: ICDL 2006 Call for Papers Message-ID: <431C76DA.3090902@indiana.edu> ICDL 2006 International Conference on Development and Learning - Dynamics of Development and Learning - http://www.icdl06.org Indiana University Bloomington, May 31- June 3, 2006 CALL FOR PAPERS Paper Submission Deadline: Feb.
6, 2006 CALL FOR INVITED SESSIONS PROPOSALS Proposal Submission Deadline: Dec. 1, 2005 Recent years have seen a convergence of research in artificial intelligence, developmental psychology, cognitive science, neuroscience and robotics, aimed at identifying common computational principles of development and learning in artificial and natural systems. The theme of this year's conference centers on development as a process of dynamic change that occurs within a complex and embodied system. The dynamics of development extend across multiple levels, from neural circuits, to changes in body morphology, sensors, movement, behavior, and inter-personal and social patterns. The goal of the conference is to present state-of-the-art research on autonomous development in humans, animals and robots, and to continue to identify new interdisciplinary research directions for the future of the field. The 5th International Conference on Development and Learning 2006 (ICDL06) will be held on the campus of Indiana University Bloomington, May 31- June 3, 2006. The conference is organized with the technical co-sponsorship of the IEEE Computational Intelligence Society. The conference will feature plenary talks by invited keynote speakers, invited sessions (workshops) organized around a central topic, a panel discussion and poster sessions. Paper submissions (for details regarding format and submission/review process see our website at http://www.icdl06.org) are invited in these areas: * General Principles of Development and Learning in Humans and Robots * Neural, Behavioral and Computational Plasticity * Embodied Cognition: Foundations and Applications * Social Development in Humans and Robots * Language Development and Learning * Dynamic Systems Approaches * Emergence of Structures through Development * Development of Perceptual and Motor Systems * Models of Developmental Disorders Authors may specify preferences for oral or poster presentations.
All submissions will be peer-reviewed and accepted papers will be published in a conference proceedings volume. Selected conference presenters will be invited to update and expand their papers for publication in a special issue on "Dynamics of Development and Learning" of the journal Adaptive Behavior (http://adb.sagepub.com/). ICDL precedes the conference "Artificial Life X", June 3-7, 2006, also held on the campus of Indiana University Bloomington (http://alifex.org). ICDL and ALIFE will share one day of overlapping workshops and tutorials on June 3. Organizing Committee: Linda Smith (Chair), Olaf Sporns, Chen Yu, Mike Gasser, Cynthia Breazeal, Gideon Deak, John Weng. From shivani at MIT.EDU Wed Sep 7 01:27:56 2005 From: shivani at MIT.EDU (Shivani Agarwal) Date: Wed, 7 Sep 2005 01:27:56 -0400 (EDT) Subject: Connectionists: CFP: NIPS 2005 Workshop - Learning to Rank Message-ID: ************************************************************************ CALL FOR PAPERS ---- Learning to Rank ---- Workshop at the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005) http://web.mit.edu/shivani/www/Ranking-NIPS-05/ -- Submission Deadline: October 21, 2005 -- ************************************************************************ [ Apologies for multiple postings ] OVERVIEW -------- The problem of ranking, in which the goal is to learn an ordering or ranking over objects, has recently gained much attention in machine learning. Progress has been made in formulating different forms of the ranking problem, proposing and analyzing algorithms for these forms, and developing theory for them. However, a multitude of basic questions remain unanswered: * Ranking problems may differ in many ways: in the form of the training examples, in the form of the desired output, and in the performance measure used to evaluate success. What are the consequences of each of these factors on the design of ranking algorithms and on their theoretical guarantees?
* The relationships between ranking and other classical learning problems such as classification and regression are still under-explored. Is any of these problems inherently harder or easier than another? * Although ranking is studied mainly as a supervised learning problem, it can have important consequences for other forms of learning; for example, in semi-supervised learning, one often ranks unlabeled examples so as to assign labels to the ones ranked at the top, and in reinforcement learning, one often learns a policy that ranks actions for each state. To what extent can these connections be explored and exploited? * There is a large variety of applications in which ranking is required, ranging from information retrieval to collaborative filtering to computational biology. What forms of ranking are most suited to different applications? What are novel applications that can benefit from ranking, and what other forms of ranking do these applications point us to? This workshop aims to provide a forum for discussion and debate among researchers interested in the topic of ranking, with a focus on the basic questions above. The goal is not to find immediate answers, but rather to discuss possible methods and applications, develop intuition, brainstorm on possible directions and, in the process, encourage dialogue and collaboration among researchers with complementary ideas. FORMAT ------ This is a one-day workshop that will follow the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005). The workshop will consist of two 3-hour sessions. There will be two invited talks and 5-6 contributed talks, with time for questions and discussion after each talk. We would particularly like to encourage, after each talk, a discussion of underlying assumptions, alternative approaches, and possible applications or theoretical analyses, as appropriate. 
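One concrete instance of the questions raised in the overview above: in the bipartite (relevant/irrelevant) setting, a scoring function induces a ranking, and a natural performance measure is the fraction of relevant/irrelevant pairs it misorders, which equals one minus the AUC. A minimal sketch with illustrative scores and labels:

```python
# Pairwise view of bipartite ranking: a scoring function f induces an
# ordering, and the empirical ranking loss counts misordered pairs
# (ties counted as half an error).  Data below are illustrative only.

def ranking_loss(scores, labels):
    """Fraction of pairs (i, j) with labels[i] > labels[j] that are misordered."""
    errors, pairs = 0.0, 0
    for i in range(len(labels)):
        for j in range(len(labels)):
            if labels[i] > labels[j]:          # item i should outrank item j
                pairs += 1
                if scores[i] < scores[j]:
                    errors += 1.0
                elif scores[i] == scores[j]:
                    errors += 0.5
    return errors / pairs if pairs else 0.0

labels = [1, 0, 1, 0]          # relevant vs. irrelevant items
scores = [2.0, 1.0, 0.5, 1.5]  # scores assigned by some learned f
print(ranking_loss(scores, labels))  # -> 0.5 (2 of the 4 relevant/irrelevant pairs misordered)
```

The quadratic pair loop is for clarity only; practical ranking algorithms optimize convex surrogates of this pairwise loss rather than enumerating pairs.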
The last 30 minutes of the workshop will be reserved for a concluding discussion which will be used to put into perspective insights gained from the workshop and to highlight open challenges. Invited Talks ------------- * Thorsten Joachims, Cornell University * Yoram Singer, The Hebrew University Contributed Talks ----------------- These will be based on papers submitted for review. See below for details. CALL FOR PAPERS --------------- We invite submissions of papers addressing all aspects of ranking in machine learning, including: * algorithmic approaches for ranking * theoretical analyses of ranking algorithms * comparisons of different forms of ranking * formulations of new forms of ranking * relationships between ranking and other learning problems * novel applications of ranking * challenges in applying or analyzing ranking methods We welcome papers on ranking that do not fit into one of the above categories, as well as papers that describe work in progress. We are particularly interested in papers that point to new questions/debate in ranking and/or shed new light on existing issues. Please note that papers that have previously appeared (or have been accepted for publication) in a journal or at a conference or workshop, or that are being submitted to another workshop, are not appropriate for this workshop. Submission Instructions ----------------------- Submissions should be at most 6 pages in length using NIPS style files (available at http://web.mit.edu/shivani/www/Ranking-NIPS-05/StyleFiles/), and should include the title, authors' names, postal and email addresses, and an abstract not to exceed 150 words. Email submissions (in pdf or ps format only) to shivani at mit.edu with subject line "Workshop Paper Submission". The deadline for submissions is Friday October 21, 11:59 pm EDT. Submissions will be reviewed by the program committee and authors will be notified of acceptance/rejection decisions by Friday November 11. 
Final versions of all accepted papers will be due on Friday November 18. Please note that one author of each accepted paper must be available to present the paper at the workshop. IMPORTANT DATES --------------- First call for papers -- September 6, 2005 Paper submission deadline -- October 21, 2005 (11:59 pm EDT) Notification of decisions -- November 11, 2005 Final papers due -- November 18, 2005 Workshop -- December 9 or 10, 2005 ORGANIZERS ---------- * Shivani Agarwal, MIT * Corinna Cortes, Google Research * Ralf Herbrich, Microsoft Research CONTACT ------- Please direct any questions to shivani at mit.edu. ************************************************************************ From leonb at nec-labs.com Tue Sep 6 14:56:36 2005 From: leonb at nec-labs.com (Leon Bottou) Date: Tue, 6 Sep 2005 14:56:36 -0400 Subject: Connectionists: CFP: NIPS 2005 Workshop: Large Scale Kernel Machines Message-ID: <200509061456.36446.leonb@nec-labs.com> ########################################################### NIPS 2005 Workshop LARGE SCALE KERNEL MACHINES ########################################################### Datasets with millions of observations can be gathered by crawling the web, mining business databases, or connecting a cheap video tuner to a laptop. Vastly more ambitious learning systems are theoretically possible. The literature shows no shortage of ideas for sophisticated statistical models. The computational cost of learning algorithms is now the bottleneck. During the last decade, dataset size has outgrown processor speed. Meanwhile, machine learning algorithms became more principled, and also more computationally expensive. The workshop investigates computationally efficient ways to exploit such large datasets using kernel machines. It will show how adequately designed kernel machines can efficiently process millions of examples. It will also debate whether kernel machines are the best way to achieve such objectives. 
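As a minimal sketch of the online, memory-bounded direction the workshop raises: a kernel perceptron whose support set is capped at a fixed budget, evicting the oldest support vector when the cap is exceeded. The eviction rule, data, and budget size here are illustrative, not a specific published algorithm.

```python
# Online kernel perceptron with a fixed "budget" on the support set:
# mistake-driven updates, and when the support set outgrows the budget
# the oldest support vector is discarded.  One crude way to contain the
# growth in support vectors that the workshop asks about.
import math

def rbf(x, z, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

class BudgetKernelPerceptron:
    def __init__(self, budget=3, kernel=rbf):
        self.budget, self.kernel = budget, kernel
        self.sv = []                      # list of (x, y) support vectors

    def predict(self, x):
        s = sum(y * self.kernel(xi, x) for xi, y in self.sv)
        return 1 if s >= 0 else -1

    def fit_one(self, x, y):
        if self.predict(x) != y:          # update only on mistakes
            self.sv.append((x, y))
            if len(self.sv) > self.budget:
                self.sv.pop(0)            # crude eviction: drop the oldest

model = BudgetKernelPerceptron(budget=3)
stream = [((0.0, 0.0), -1), ((1.0, 1.0), 1), ((0.1, 0.0), -1), ((0.9, 1.0), 1)]
for x, y in stream:
    model.fit_one(x, y)
print(model.predict((1.1, 0.9)), len(model.sv))  # -> 1 2
```

On this toy stream only two mistakes occur, so the budget never binds; on real million-example streams the cap is what keeps prediction cost constant per example, at some cost in accuracy.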
TOPICS: * Fast implementation of ordinary Support Vector Machines. How to improve the optimization algorithms and to distribute them on several computers? * Kernel algorithms specifically designed for large scale datasets. For instance, online kernel algorithms are less hungry for memory. Does this improvement come for free or does it increase the error rates? * Methods for containing the growth of the number of support vectors. Does the number of Support Vectors always grow linearly with the number of examples, as in ordinary Support Vector Machines? * Comparing the relative strengths of kernel and non-kernel methods on large scale datasets. Are kernel methods the best tools for such datasets? CALL FOR PARTICIPATION: If you wish to make a presentation, send a plain text email to with title, authors, and a brief abstract (less than one page). Please send us this information before November 1st. ORGANIZERS: Leon Bottou (NEC Labs, Princeton) Olivier Chapelle (MPI, Tuebingen) Dennis Decoste (Yahoo!, Sunnyvale) Jason Weston (NEC Labs, Princeton) ------------------------------------------------------- From pfbaldi at ics.uci.edu Tue Sep 6 15:41:10 2005 From: pfbaldi at ics.uci.edu (Pierre Baldi) Date: Tue, 6 Sep 2005 12:41:10 -0700 Subject: Connectionists: Faculty Positions in Machine Learning and Computational Biology at UCI Message-ID: <008d01c5b31a$eec22670$cd04c380@ics.uci.edu> Tenure-Track Faculty Positions Biomedical Informatics, Computational Biology, and Systems Biology University of California, Irvine Two junior tenure-track positions are available at the University of California, Irvine in all areas of research at the intersection of life and computational sciences. These appointments will be made in the Donald Bren School of Information and Computer Sciences with possible joint appointments in the School of Biological Sciences, the School of Physical Sciences, or the School of Medicine.
Exceptionally qualified senior candidates also will be considered for tenured positions. These positions will be coordinated with the interdisciplinary research programs of the UCI Institute for Genomics and Bioinformatics. Examples of general areas of interest include: chemical informatics, bioinformatics, computational biology, systems biology, synthetic biology, and medical informatics. Examples of specific areas of interest include: protein structure and function prediction; molecular simulations and docking; computational drug screening and design; comparative genomics; analysis of high-throughput data; mathematical modeling of biological systems. Research methods should encompass computational, statistical, or machine-learning approaches. UCI is targeted as a growth campus for the University of California. It is one of the youngest UC campuses, yet ranked 10th among the nation's best public universities by US News & World Report. Salary and other compensation (including priority access to on-campus faculty housing) are competitive with the nation's finest public universities. For an overview of UCI, see http://www.uci.edu. The Bren School of ICS is one of nine academic units at UCI and was recently elevated to an independent school by the UC Regents. ICS' mission is to lead the innovation of new information and computing technology and study its economic and social significance while producing an educated workforce to further advance technology and fuel the economic engine. The Bren School of ICS has excellent faculty, innovative programs, high quality students and outstanding graduates as well as strong relationships with high tech industry. With approximately 2000 undergraduates, 300 graduate students, and 63 faculty members, ICS is the largest computing program within the UC system. For a perspective on ICS, see http://www.ics.uci.edu. Screening will begin immediately upon receipt of a completed application. 
Applications will be accepted until positions are filled, although maximum consideration will be given to applications received by January 15, 2006. Completed applications containing a cover letter, curriculum vitae, sample research publications, and three to five letters of recommendation should be uploaded electronically. Please refer to the following web site for instructions: http://www.ics.uci.edu/employment/employ_faculty.php. The University of California, Irvine is an equal opportunity employer committed to excellence through diversity, has a National Science Foundation Advance Gender Equity Program, and is responsive to the needs of dual career couples. Pierre Baldi School of Information and Computer Sciences University of California, Irvine Irvine, CA 92697-3435 +1(949) 824-5809 +1(949) 824-9813 (FAX) www.ics.uci.edu/~pfbaldi www.igb.uci.edu From reza at bme.jhu.edu Wed Sep 7 07:36:35 2005 From: reza at bme.jhu.edu (Reza Shadmehr) Date: Wed, 07 Sep 2005 07:36:35 -0400 Subject: Connectionists: Computational Motor Control Message-ID: <0IMG0071P2P530@jhuml1.jhmi.edu> Emo Todorov and I would like to invite you to the fourth computational motor control symposium at the Society for Neuroscience conference. The symposium will take place on Friday, Nov. 11 2005 at the Washington DC convention center. The purpose of the meeting is to highlight computational modeling and theories in motor control. This is an opportunity to meet and hear from some of the bright minds in the field. The program consists of two distinguished speakers and 12 contributed talks, selected from the submitted abstracts. The speakers this year are: Daniel Wolpert, Cambridge University "Probabilistic mechanisms in human sensorimotor control" Andrew Schwartz, University of Pittsburgh "Useful signals from the motor cortex" We encourage you to consider submitting an abstract. The abstracts will be reviewed by a panel and ranked. The top 12 abstracts will be selected for oral presentation.
We encourage oral presentation by students who have had a major role in the work described in the abstracts. More information is available here: www.bme.jhu.edu/acmc The deadline for abstract submission is September 30. Abstracts should be no more than two pages in length, including figures and references. With our best wishes, Reza Shadmehr and Emo Todorov From hoya at brain.riken.jp Thu Sep 8 23:21:53 2005 From: hoya at brain.riken.jp (Tetsuya Hoya) Date: Fri, 09 Sep 2005 12:21:53 +0900 Subject: Connectionists: Artificial Mind System -- Kernel Memory Approach Message-ID: <4320FFD1.1000803@brain.riken.jp> I am pleased to announce the publication of my recent monograph: ``Artificial Mind System -- Kernel Memory Approach'' by Tetsuya Hoya in the series: Studies in Computational Intelligence (SCI), Vol. 1 (270p), Heidelberg: Springer-Verlag ISBN 3540260722 http://www.springeronline.com The book is written from an engineer's perspective on the mind. It exposes the reader to a broad spectrum of interesting areas in brain science and mind-oriented studies. In the first part of the monograph, I focused upon a new connectionist model, called the `kernel memory', which can be seen as a generalisation of probabilistic / generalised regression neural networks. Then, the second part proposes a holistic model of an artificial mind system and its behaviour, as concretely as possible, on the basis of the kernel memory concept developed in the first part, within a unified context, which could eventually lead to practical realisation in terms of hardware or software. With a view that ``the mind is an input-output system always evolving'', ideas inspired by many branches of studies related to brain science are integrated within the text, i.e. artificial intelligence, cognitive science/psychology, connectionism, consciousness studies, general neuroscience, linguistics, pattern recognition/data clustering, robotics, and signal processing.
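For readers unfamiliar with the models the kernel memory is said to generalise: a generalised regression neural network (GRNN) is essentially Nadaraya-Watson kernel regression, in which stored exemplars vote on the output, weighted by an RBF kernel. A minimal one-dimensional sketch with illustrative data (background only, not the book's kernel-memory model itself):

```python
# Classic GRNN / Nadaraya-Watson regression: the prediction is a
# kernel-weighted average of stored training targets.  The exemplars,
# targets, and bandwidth below are illustrative.
import math

def grnn_predict(x, exemplars, targets, sigma=0.5):
    """Kernel-weighted average of the stored targets at query point x."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in exemplars]
    return sum(w * t for w, t in zip(weights, targets)) / sum(weights)

exemplars = [0.0, 1.0, 2.0]     # stored training inputs (the "memory")
targets   = [0.0, 1.0, 0.0]     # associated outputs
print(round(grnn_predict(1.0, exemplars, targets), 3))  # -> 0.787
```

At the query x = 1.0 the middle exemplar dominates but the two neighbours pull the estimate below its target of 1.0, which is the characteristic smoothing behaviour of this family of models.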
Key words: artificial intelligence, mind, neural networks, creating the brain. Regards, Tetsuya Hoya Lab. for Advanced Brain Signal Processing BSI-RIKEN 2-1, Hirosawa, Wako-City, Saitama 3511-0198 JAPAN e-mail: hoya at brain.riken.jp From ted.carnevale at yale.edu Tue Sep 6 10:37:55 2005 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Tue, 06 Sep 2005 10:37:55 -0400 Subject: Connectionists: NEURON course at SFN 2005 meeting Message-ID: <431DA9C3.80504@yale.edu> This year's 1-day NEURON course at the annual SFN meeting will include presentations of new features such as: <> using the Import3D tool to convert detailed morphometric data (Neurolucida, swc, and Eutectic) into models <> using the new CellBuilder to set up spatially nonuniform biophysical properties <> using the ModelView tool to quickly discover what's really in a model (very helpful for deciphering your own old models, not to mention those you get from ModelDB and other sources) <> using the ChannelBuilder to create new voltage- and ligand-gated channels--including stochastic ion channels--without having to write any program code <> speeding up network models by distributing them over multiple processors Only a few seats remain available, and these may go quickly now that the fall semester has started.
For on-line registration forms, see http://www.neuron.yale.edu/neuron/sfn2005/dc2005.html --Ted From cjs at ecs.soton.ac.uk Wed Sep 14 06:44:52 2005 From: cjs at ecs.soton.ac.uk (Craig Saunders) Date: Wed, 14 Sep 2005 11:44:52 +0100 Subject: Connectionists: NIPS Workshop on Kernel Methods and Structured Domains Message-ID: <4327FF24.4070301@ecs.soton.ac.uk> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Call for Papers Workshop on Kernel Methods and Structured Domains http://nips2005.kyb.tuebingen.mpg.de/ NIPS 2005 Submission deadline: 21 October Accept/Reject notification: 05 November %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Workshop Description %%%%%%%%%%%%%%%%%%%% Substantial recent work in machine learning has focused on the problem of dealing with inputs and outputs on more complex domains than are provided for in the classical regression/classification setting. Structured representations can give a more informative view of input domains, which is crucial for the development of successful learning algorithms: application areas include determining protein structure and protein-protein interaction; part-of-speech tagging; the organisation of web documents into hierarchies; and image segmentation. Likewise, a major research direction is in the use of structured output representations, which have been applied in a broad range of areas including several of the foregoing examples (for instance, the output required of the learning algorithm may be a probabilistic model, a graph, or a ranking). In particular, kernel methods have been especially fertile in giving rise to efficient and powerful algorithms for both structured inputs and outputs, since (as with SVMs) use of the "kernel trick" can make the required optimisations tractable: examples include large margin Markov networks, graph kernels, and kernels on automata. 
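A minimal concrete instance of a kernel on structured inputs, in the spirit of the string and graph kernels mentioned above (not any specific paper's formulation), is the k-spectrum string kernel: an explicit inner product between k-mer count vectors, and therefore positive definite by construction. The strings below are illustrative.

```python
# k-spectrum string kernel: compare two strings by counting shared
# substrings of length k.  Equivalent to an inner product in the
# (explicit) feature space of k-mer counts.
from collections import Counter

def spectrum_kernel(s, t, k=2):
    """Inner product of the k-mer count vectors of s and t."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[m] * ct[m] for m in cs)

# "ababa" has 2-mers {ab: 2, ba: 2}; "abab" has {ab: 2, ba: 1}.
print(spectrum_kernel("ababa", "abab", k=2))  # -> 2*2 + 2*1 = 6
```

Because the feature map is explicit, the "kernel trick" here buys nothing for small k, but the same counting idea extends to gappy and weighted substring kernels where the implicit feature space becomes enormous.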
More generally, kernels between probability measures have been proposed (with no a priori assumptions as to the dependence structure), which have motivated particular kernels between images and strings. In NIPS 2004, two workshops took place addressing learning approaches on structured domains: Learning on Structured Outputs (Bakir, Gretton, Hoffman, Schoelkopf) and Graphical Models and Kernels (Smola, Taskar, Vishwanathan). In view of significant and continued advances in the field, the present workshop addresses the same area as these earlier workshops: to provide an overview of recent theoretical and algorithmic foundations for kernels on structured domains, to investigate applications that build on these fundamentals, and to explore new research directions for future work. The workshop is also intended as one element of the Pascal thematic program on learning with complex and structured outputs. Workshop format %%%%%%%%%%%%%%% The workshop will last one day, and will include invited talks (of 30 minutes' duration), submitted talks (15 minutes), and periods of moderated and open discussion (20 minutes). The final discussion will provide a wrap-up session which will summarise the issues raised, so that all participants leave the workshop with a clear view of the future challenges and open questions which need to be addressed. Invited speakers %%%%%%%%%%%%%%%% Ben Taskar Jean-Philippe Vert Matthias Hein Call for papers %%%%%%%%%%%%%%% We invite submissions for 15-minute contributed talks (of which a maximum of eight will be accepted). The intended emphasis is on recent innovations, work in progress, and promising directions or problems for new research. Proposed topics include: * Learning when the inputs/outputs are structures * Learning from data embedded in structure * Graphical models and information geometry * Kernels on probability measures Our focus will be on using kernel methods to deal efficiently with structured data.
We will also consider work falling outside these specific topics, but within the workshop subject area. If you would like to submit to this session, please send an abstract to Arthur Gretton (arthur at tuebingen dot mpg dot de) before October 21. Please do not send posters or long documents. Decisions as to which proposals are accepted will be sent out on November 05. Workshop Chairs %%%%%%%%%%%%%%% Arthur Gretton (MPI for Biological Cybernetics) Gert Lanckriet (UC San Diego) Juho Rousu (University of Helsinki) Craig Saunders (University of Southampton) From irezek at robots.ox.ac.uk Wed Sep 14 11:53:27 2005 From: irezek at robots.ox.ac.uk (Iead Rezek) Date: Wed, 14 Sep 2005 16:53:27 +0100 Subject: Connectionists: CFP: NIPS 2005 Workshop - Game Theory, Machine Learning and Reasoning under Uncertainty Message-ID: <43284777.9040708@robots> ###################################################################### CALL FOR PAPERS Game Theory, Machine Learning and Reasoning under Uncertainty Workshop at the Neural Information Processing Systems (NIPS) 2005 http://www.robots.ox.ac.uk/~gmr05.html ###################################################################### OVERVIEW Game theory is concerned with understanding the decision processes and strategic actions of competing individuals. Having initially found applications in economics and diplomacy, game theory is increasingly being used to understand the interactions that occur within large multi-agent systems, and has been applied in areas as diverse as allocating resources within grid computing and coordinating the behaviour of multiple autonomous vehicles. Recent research has highlighted the benefits that may result from examining the interface between machine learning and game theory. 
While classical game theory makes limited provision for dealing with uncertainty and noise, research in machine learning, and particularly probabilistic inference, has resulted in a remarkable array of powerful algorithms for performing statistical inference from difficult real-world data. Machine learning holds the promise of generalising game theory to deal with the uncertain, data-driven domain that we encounter in real-world applications (for example, the timely and decentralised flight coordination for Europe's future Open Skies initiative, or the dynamic and responsive data fusion of multi-modal physiological time series in intensive care). In addition, whilst techniques from graphical models have suggested computationally tractable algorithms for analysing games that consist of a large number of players, insights from game theory are also inspiring new work on strategic learning behaviour in probabilistic inference and are suggesting new algorithms to perform intelligent sampling in Markov chain Monte Carlo (MCMC) methods. CALL FOR PAPERS To investigate these issues and reflect the range of questions that the workshop addresses, we invite submissions from researchers who have made contributions to this nascent field and also practitioners from a wide range of backgrounds. Of particular interest is research concerning (but not limited to) the following issues: 1. Considered separately, what are the current limitations of game theory and probabilistic inference, and what can be achieved by integrating the two? 2. What are the specific points of correspondence between game theory and probabilistic inference that can be used to incorporate new developments in probabilistic inference into game theory applications and vice versa?
Examples of such correspondences that could be addressed include: - dynamic behaviour of probabilistic inference mechanisms through game theoretic interaction models, - structured belief models in computational game theory algorithms, - quantitative bounds on Nash equilibria, - MCMC sampling with intelligent particles. 3. What are the applications which are most likely to benefit from an explicitly inferential game theory? Workshop papers should be no more than 8 pages in length and written in standard NIPS format. Please also indicate whether you are interested in an oral or poster presentation. We also welcome position papers and work by researchers from industrial/application backgrounds, to reflect the future practical needs of the research area. Please submit an article only if at least one of the authors is certain to attend. IMPORTANT DATES 14 September 2005 Call for papers 23 October 2005 Deadline for paper submissions 05 November 2005 Notification of paper acceptance 9/10 December 2005 Workshop ORGANISERS The ARGUS project (www.robots.ox.ac.uk/~argus), and in particular Iead Rezek, University of Oxford (irezek at robots.ox.ac.uk) Alex Rogers, University of Southampton (a.rogers at ecs.soton.ac.uk) and David Wolpert, NASA Ames Research Center INQUIRIES Please direct any inquiries regarding the workshop to Iead Rezek (irezek at robots.ox.ac.uk) or Alex Rogers (a.rogers at ecs.soton.ac.uk) From cindy at bu.edu Mon Sep 12 10:54:10 2005 From: cindy at bu.edu (Cynthia Bradford) Date: Mon, 12 Sep 2005 10:54:10 -0400 Subject: Connectionists: Neural Networks 18(8) 2005: Special Issue on "Neural Networks and Kernel Methods for Structured Domains" Message-ID: <200509121454.j8CEsBBp014366@kenmore.bu.edu> NEURAL NETWORKS 18(8) Contents - Volume 18, Number 8 - 2005 Special Issue on "Neural Networks and Kernel Methods for Structured Domains" Barbara Hammer, Craig Saunders, and Alessandro Sperduti (editors)
------------------------------------------------------------------ Introduction to the Special Issue Barbara Hammer, Craig Saunders, and Alessandro Sperduti A novel approach to extracting features from motif content and protein composition for protein sequence classification Xing-Ming Zhao, Yiu-Ming Cheung, and De-Shuang Huang Learning protein secondary structure from sequential and relational data Alessio Ceroni, Paolo Frasconi, and Gianluca Pollastri Recursive neural networks for processing graphs with labeled edges: Theory and applications M. Bianchini, M. Maggini, L. Sarti, and F. Scarselli Recursive principal components analysis Thomas Voegtlin The loading problem for recursive neural networks Marco Gori and Alessandro Sperduti On the relationship between deterministic and probabilistic directed graphical models: From Bayesian networks to recursive neural networks Pierre Baldi and Michal Rosen-Zvi An incremental regression method for graph structured data Menita Carozza and Salvatore Rampone Graph kernels for chemical informatics Liva Ralaivola, Sanjay J. Swamidass, Hiroto Saigo, and Pierre Baldi The context-tree kernel for strings Marco Cuturi and Jean-Philippe Vert ------------------------------------------------------------------ Electronic access: www.elsevier.com/locate/neunet/. Individuals can look up instructions, aims & scope, see news, tables of contents, etc. Those who are at institutions which subscribe to Neural Networks get access to full article text as part of the institutional subscription.
Sample copies can be requested for free and back issues can be ordered through the Elsevier customer support offices: nlinfo-f at elsevier.nl usinfo-f at elsevier.com or info at elsevier.co.jp ------------------------------ INNS/ENNS/JNNS Membership includes a subscription to Neural Networks: The International (INNS), European (ENNS), and Japanese (JNNS) Neural Network Societies are associations of scientists, engineers, students, and others seeking to learn about and advance the understanding of the modeling of behavioral and brain processes, and the application of neural modeling concepts to technological problems. Membership in any of the societies includes a subscription to Neural Networks, the official journal of the societies. Application forms should be sent to all the societies you want to apply to (for example, one as a member with subscription and the other one or two as a member without subscription). The JNNS does not accept credit cards or checks; to apply to the JNNS, send in the application form and wait for instructions about remitting payment. The ENNS accepts bank orders in Swedish Crowns (SEK) or credit cards. The INNS does not invoice for payment. 
----------------------------------------------------------------------------
Membership Type      INNS           ENNS     JNNS
----------------------------------------------------------------------------
membership with      $80 (regular)  SEK 660  Y 13,000 (plus Y 2,000
Neural Networks                              enrollment fee)
                     $20 (student)  SEK 460  Y 11,000 (plus Y 2,000
                                             enrollment fee)
----------------------------------------------------------------------------
membership without   $30            SEK 200  not available to non-students
Neural Networks                              (subscribe through another
                                             society); Y 5,000 student
                                             (plus Y 2,000 enrollment fee)
----------------------------------------------------------------------------
Name: ______________________________________________________ Title: ______________________________________________________ Address: ______________________________________________________ Phone: ______________________________________________________ Fax: ______________________________________________________ Email: ______________________________________________________ Payment: [ ] Check or money order enclosed, payable to INNS or ENNS OR [ ] Charge my VISA or MasterCard card number _______________________________ expiration date _____________________________ INNS Membership 2810 Crossroads Drive, Suite 3800 Madison WI 53718 USA 608 443 2461, ext. 138 (phone) 608 443 2474 (fax) srees at reesgroupinc.com http://www.inns.org ENNS Membership University of Skovde P.O.
Box 408 531 28 Skovde Sweden 46 500 44 83 37 (phone) 46 500 44 83 99 (fax) enns at ida.his.se http://www.his.se/ida/enns JNNS Membership JNNS Secretariat c/o Fuzzy Logic Systems Institute 680-41 Kawazu, Iizuka Fukuoka 820-0067 Japan 81 948 24 2771 (phone) 81 948 24 3002 (fax) jnns at flsi.cird.or.jp http://www.jnns.org/ ---------------------------------------------------------------------------- From emipar at tsc.uc3m.es Tue Sep 13 09:33:08 2005 From: emipar at tsc.uc3m.es (Emilio Parrado-Hernandez) Date: Tue, 13 Sep 2005 15:33:08 +0200 Subject: Connectionists: Deadline extended: JMLR special topic on Machine Learning and Large Scale Optimization Message-ID: <4326D514.1070901@tsc.uc3m.es> The deadline for submission to the JMLR special topic on Machine Learning and Large Scale Optimisation has been extended until October 5. You can find more information on the journal's web site: http://jmlr.csail.mit.edu/ If you have any related questions or enquiries, please do not hesitate to contact us. Best regards, Emilio and Kristin -- ==================================================== Emilio Parrado-Hernandez Visiting Lecturer Department of Signal Processing and Communications, Universidad Carlos III de Madrid Avenida de la Universidad 30, 28911 Leganes, Spain Phone: +34 91 6248738 Fax: +34 91 6248749 ==================================================== From g.goodhill at imb.uq.edu.au Sun Sep 11 22:39:57 2005 From: g.goodhill at imb.uq.edu.au (Geoffrey Goodhill) Date: Mon, 12 Sep 2005 12:39:57 +1000 Subject: Connectionists: Faculty position in Applied Statistics Message-ID: <4324EA7D.7030100@imb.uq.edu.au> Dear Connectionists, I would like to draw your attention to the following faculty position available in the Maths dept at the University of Queensland.
There are several people in the dept who are interested in neural networks / mathematical biology / statistical analysis of biological data, including Geoff McLachlan (www.maths.uq.edu.au/~gjm), Kevin Burrage (www.maths.uq.edu.au/~kb), and myself. Please direct enquiries to Geoff McLachlan, gjm at maths.uq.edu.au. Thanks, Geoff Geoffrey J Goodhill, PhD Associate Professor Queensland Brain Institute, Department of Mathematics & Institute for Molecular Bioscience University of Queensland St Lucia, QLD 4072, Australia Phone: +61 7 3346 2612 Fax: +61 7 3346 8836 Email: g.goodhill at uq.edu.au http://cns.qbi.uq.edu.au -------------- LECTURESHIP IN STATISTICS School of Physical Sciences - St Lucia Campus - UQ Opportunity to work in a leading centre of statistical research in Australia. The discipline of Mathematics (statistics/applied probability group) in the School of Physical Sciences is the major provider of undergraduate and postgraduate statistical education in Queensland, and is a leading centre of statistical research in Australia. We are seeking an appointment at Lecturer Level B, preferably in the area of applied statistics. In the role of Lecturer, the successful applicant will undertake teaching, postgraduate supervision, and further development of the School's Mathematics program, as well as perform research, administrative and other activities associated with the School and its Statistics Research Activities. Applicants must possess postgraduate qualifications (PhD level or equivalent) in statistics and will have established a strong reputation for research in an area of applied statistics. This is a full-time, fixed-term appointment for 3 years at Academic Level B. The remuneration package will be in the range of $71,293 - $84,660 per annum, which includes employer superannuation contributions of 17%. Obtain the position description and selection criteria online or contact Mr Graham Beal on (07) 3365 7923 or by email to hr at epsa.uq.edu.au.
Contact Professor Geoff McLachlan on (07) 3365 2150 or email gjm at maths.uq.edu.au to discuss the role. Send applications to Mr Graham Beal, Human Resource Officer, Faculty of Engineering, Physical Sciences and Architecture at the address below, or by email applications at epsa.uq.edu.au. Closing date for applications: 7 October 2005 Reference Number: 3000860 From michael at chaos.gwdg.de Mon Sep 12 06:57:11 2005 From: michael at chaos.gwdg.de (Michael Herrmann) Date: Mon, 12 Sep 2005 12:57:11 +0200 (CEST) Subject: Connectionists: Call for participation Message-ID: A mini-symposium on "SELF-ORGANIZATION OF BEHAVIOR in robotic and living systems" will take place at the Bernstein Center for Computational Neuroscience Goettingen on Sept 15/16, 2005. For more information please goto http://www.chaos.gwdg.de/~michael/SOB2005.html ********************************************************************* * Dr. J. Michael Herrmann Georg August University Goettingen * * Tel. : +49 (0)551 5176424 Institute for Nonlinear Dynamics * * Fax : +49 (0)551 5176439 Bunsenstrasse 10 * * mobil: 0176 2800 4268 D-37073 Goettingen, Germany * * EMail: michael at chaos.gwdg.de http://www.chaos.gwdg.de/~michael * ********************************************************************* From sml at essex.ac.uk Mon Sep 12 09:48:48 2005 From: sml at essex.ac.uk (Lucas, Simon M) Date: Mon, 12 Sep 2005 14:48:48 +0100 Subject: Connectionists: IEEE CIG 2005 Proceedings On-Line Message-ID: The proceedings for the 2005 IEEE Symposium on Computational Intelligence and Games are now freely available on-line. http://cigames.org These include many papers on neural network game playing agents etc, and so will be of interest to many members of this list. Please also note the CIG 2006 CFP link. 
Best regards, Simon Lucas From t.heskes at science.ru.nl Sun Sep 11 12:58:32 2005 From: t.heskes at science.ru.nl (Tom Heskes) Date: Sun, 11 Sep 2005 18:58:32 +0200 Subject: Connectionists: Neurocomputing volume 67 Message-ID: <43246238.9050901@science.ru.nl> Neurocomputing Volume 68 (October 2005) FULL LENGTH PAPERS Absolutely exponential stability of Cohen-Grossberg neural networks with unbounded delays Wenjun Xiong and Jinde Cao Speaker authentication system using soft computing approaches Abdul Wahab, Goek See Ng and Romy Dickiyanto Output partitioning of neural networks Sheng-Uei Guan, Qi Yinan, Syn Kiat Tan and Shanchun Li A unified SWSI-KAMs framework and performance evaluation on face recognition Songcan Chen, Lei Chen and Zhi-Hua Zhou A framework for simulating axon guidance Ning Feng, Gangmin Ning and Xiaoxiang Zheng Internal simulation of perception: a minimal neuro-robotic model Tom Ziemke, Dan-Anders Jirenhed and Germund Hesslow Asymptotic convergence properties of the EM algorithm with respect to the overlap in the mixture Jinwen Ma and Lei Xu Unsupervised learning with stochastic gradient Harold Szu and Ivica Kopriva Global asymptotic stability analysis of bidirectional associative memory neural networks with constant time delays Sabri Arik and Vedat Tavsanoglu A parallel growing architecture for self-organizing maps with unsupervised learning Iren Valova, Daniel Szer, Natacha Gueorguieva and Alexandre Buer ------- LETTERS Existence and exponential stability of almost periodic solutions for Hopfield neural networks with delays An efficient fingerprint verification system using integrated gabor filters and Parzen Window Classifier Dario Maio and Loris Nanni Ensemble of Parzen window classifiers for on-line signature verification Loris Nanni and Alessandra Lumini A genetic algorithm for solving the inverse problem of support vector machines Xi-Zhao Wang, Qiang He, De-Gang Chen and Daniel Yeung Variance change point detection via artificial neural
networks for data separation Kyong Joo Oh, Myung Sang Moon and Tae Yoon Kim EEG pattern discrimination between salty and sweet taste using adaptive Gabor transform Juliana Cristina Hashida, Ana Carolina de Sousa Silva, Sérgio Souto and Ernane José Xavier Costa Fast image compression using matrix K-L transform Daoqiang Zhang and Songcan Chen An alternative switching criterion for independent component analysis (ICA) Dengpan Gao, Jinwen Ma and Qiansheng Cheng A neural network to solve the hybrid N-parity: Learning with generalization issues M. Al-Rawi Support vector machines for candidate nodules classification Paola Campadelli, Elena Casiraghi and Giorgio Valentini Fusion of classifiers for predicting protein-protein interactions Loris Nanni Lagrangian object relaxation neural network for combinatorial optimization problems Hiroki Tamura, Zongmei Zhang, Xinshun Xu, Masahiro Ishii and Zheng Tang Fully complex extreme learning machine Ming-Bin Li, Guang-Bin Huang, P. Saratchandran and N. Sundararajan Fusion of classifiers for protein fold recognition Loris Nanni ------- JOURNAL SITE: http://www.elsevier.com/locate/neucom SCIENCE DIRECT: http://www.sciencedirect.com/science/issue/5660-2005-999319999-605301 From moodylab at icsi.berkeley.edu Thu Sep 15 03:21:07 2005 From: moodylab at icsi.berkeley.edu (John Moody) Date: Thu, 15 Sep 2005 00:21:07 -0700 Subject: Connectionists: CFP: NIPS*2005 Workshop on Computational Finance Message-ID: <10b7069b05091500215dd3b7d6@mail.gmail.com> CALL FOR PARTICIPATION -- NIPS*2005 WORKSHOP COMPUTATIONAL FINANCE & MACHINE LEARNING Friday, December 9, 2005 Westin Resort, Whistler, British Columbia http://www.icsi.berkeley.edu/~moody/nips2005compfin.htm WORKSHOP DESCRIPTION This interdisciplinary workshop will provide a forum for discussion of research at the intersection of Computational Finance and Machine Learning.
Finance-related papers have occasionally appeared at NIPS over the years, but this will be the first finance-related workshop during NIPS's entire 19 year history! The workshop will bring together a diverse set of researchers from machine learning, academic finance and the financial industry. Emphasis will be on machine learning applications to finance and problems from finance that are data-driven or require parameter estimation from data. The workshop will include invited presentations, a financial industry panel, contributed talks, a poster session and plenty of time for lively discussion. A wide range of topics are of interest, for example: Financial Applications: * Financial Time Series & Volatility * Trading & Arbitrage Strategies * Optimal Execution, Market Making & Market Microstructure * Portfolio Management & Asset Allocation * Stock Selection & Security Analysis * Risk & Extreme Events * Behavioral Finance * Credit Analysis & Credit Risk * Option Hedging, Exercise & Pricing * Calibration, e.g. of Term Structure Models * Multi-Agent Market Simulations Machine Learning Methods: * Reinforcement Learning & Dynamic Programming * Non-Parametric Statistics & Bayesian Learning * Latent Variable & Hidden State Models * Evolutionary Algorithms * Data Mining & Visualization * Independent Components, Self-Organizing Maps * Monte Carlo & Resampling * Ensembles and Boosting SUBMISSIONS We anticipate accepting six to eight 20-minute contributed talks and a number of posters. If you would like to present your work, please submit a 100 to 500 word abstract as soon as possible (no later than October 14) to my assistant Su'ad Hall at: MoodyLab at ICSI.Berkeley.Edu If you wish to submit a full manuscript in addition, that's great. Our goal is to put together a coherent and informative program that has broad appeal to the NIPS and Computational Finance communities. Abstracts based upon previously published work are welcome. Please submit early! 
DETAILS Important Dates: Friday, October 14 - Submission Deadline Monday, October 31 - Acceptance Notification Friday, December 9 - The Workshop NIPS Workshop Registration & Hotel Info: http://www.nips.cc/Conferences/2005/ Workshop Inquiries: MoodyLab at ICSI.Berkeley.Edu ORGANIZERS John Moody, Algorithms Group International Computer Science Institute Berkeley & Portland, USA Ramo Gencay, Dept. of Economics Simon Fraser University Vancouver, Canada Neil Burgess, Ph.D. Morgan Stanley New York, USA From fink at cs.huji.ac.il Mon Sep 19 12:06:44 2005 From: fink at cs.huji.ac.il (Michael Fink) Date: Mon, 19 Sep 2005 18:06:44 +0200 Subject: Connectionists: CFP NIPS*05 Interclass Transfer Workshop: why learning to recognize many objects might be easier than learning to recognize just one? Message-ID: <1b2297b105091909065c3071fb@mail.gmail.com> ============================== ================================= Call for Papers NIPS*05 Workshop on interclass transfer: Why learning to recognize many object classes might be easier than learning to recognize just one www.cs.huji.ac.il/~fink/nips2005/ NIPS 2005 Submission deadline: 21 October Accept/Reject notification: 05 November =============================================================== Organizers: ========== Andras Ferencz, University of California at Berkeley Michael Fink, The Hebrew University of Jerusalem Shimon Ullman, Weizmann Institute of Science Workshop Description ==================== The human perceptual system has the remarkable capacity to recognize numerous object classes, often learning to reliably classify a novel category from just a short exposure to a single example. These skills are beyond the reach of current multi-class recognition systems. The workshop will focus on the proposal that a key factor for achieving such capabilities is the use of interclass transfer during learning. 
According to this view, a recognition system may benefit from interclass transfer if the multiple target classification tasks share common underlying structures that can be utilized to facilitate training or detection. Several challenges follow from this observation. First, can a theoretical foundation of interclass transfer be formulated? Second, what are promising algorithmic approaches for utilizing interclass transfer? Finally, can the computational approaches for multiple object recognition contribute insights to research on human recognition processes? In the coming workshop we propose to address the following topics: * Explore the human capabilities for multi-class object recognition and examine how these capacities motivate our algorithmic approaches. * Attempt to formalize the interclass transfer framework and define what can be generalized between classes (for example, learning by analogy from the "closest" known category vs. finding useful subspaces from all categories). * Analyze state-of-the-art solutions aimed at recognizing many objects or at learning to recognize novel objects from very few examples (e.g. contrasting parametric vs. non-parametric approaches). * Characterize the problems in which we expect to observe high transfer between classes. * Delineate future challenges and suggest benchmarks for assessing progress The workshop is aimed at bringing together experimental and theoretical researchers interested in multi-class object recognition in humans and machines. Confirmed participants: ======================= William T.
Freeman Fei Fei Li Erik Learned Miller Kevin Murphy Jitendra Malik Antonio Torralba Daphna Weinshall From esann at dice.ucl.ac.be Tue Sep 20 14:02:25 2005 From: esann at dice.ucl.ac.be (esann) Date: Tue, 20 Sep 2005 20:02:25 +0200 Subject: Connectionists: CFP: ESANN'2006 European Symposium on Artificial Neural Networks Message-ID: <20050920180224.C387C1F05C@smtp2.elec.ucl.ac.be> ESANN'2006 14th European Symposium on Artificial Neural Networks Advances in Computational Intelligence and Learning Bruges (Belgium) - April 26-27-28, 2006 Announcement and call for papers ===================================================== Technically co-sponsored by the International Neural Networks Society, the European Neural Networks Society, the IEEE Computational Intelligence Society (to be confirmed), the IEEE Region 8, the IEEE Benelux Section. The call for papers for the ESANN'2006 conference is now available on the Web: http://www.dice.ucl.ac.be/esann For those of you who maintain WWW pages including lists of related ANN sites: we would appreciate it if you could add the above URL to your list; thank you very much! We make all possible efforts to avoid sending multiple copies of this call for papers; however we apologize if you receive this e-mail twice, despite our precautions. You will find below a short version of this call for papers, without the instructions to authors (available on the Web). ESANN'2006 is organized in collaboration with the UCL (Universite catholique de Louvain, Louvain-la-Neuve) and the KULeuven (Katholieke Universiteit Leuven). Scope and topics ---------------- Since it first took place in 1993, the European Symposium on Artificial Neural Networks has become the reference for researchers on fundamentals and theoretical aspects of artificial neural networks, computational intelligence, learning and related topics.
Each year, around 100 specialists attend ESANN, in order to present their latest results and comprehensive surveys, and to discuss the future developments in this field. The ESANN'2006 conference will follow this tradition, while adapting its scope to the new developments in the field. Artificial neural networks are viewed as a branch, or subdomain, of machine learning, statistical information processing and computational intelligence. Mathematical foundations, algorithms and tools, and applications are covered. The following is a non-exhaustive list of machine learning, computational intelligence and artificial neural networks topics covered during the ESANN conferences: THEORY and MODELS Statistical and mathematical aspects of learning Feedforward models Kernel machines Graphical models, EM and Bayesian learning Vector quantization and self-organizing maps Recurrent networks and dynamical systems Blind signal processing Ensemble learning Nonlinear projection and data visualization Fuzzy neural networks Evolutionary computation Bio-inspired systems INFORMATION PROCESSING and APPLICATIONS Data mining Signal processing and modeling Approximation and identification Classification and clustering Feature extraction and dimension reduction Time series forecasting Multimodal interfaces and multichannel processing Adaptive control Vision and sensory systems Biometry Bioinformatics Brain-computer interfaces Neuroinformatics Papers will be presented orally (single track) and in poster sessions; all posters will be complemented by a short oral presentation during a plenary session. It is important to mention that the topics of a paper decide if it better fits into an oral or a poster session, not its quality. The selection of posters will be identical to oral presentations, and both will be printed in the same way in the proceedings. Nevertheless, authors must indicate their preference for oral or poster presentation when submitting their paper. 
Special sessions ---------------- Special sessions will be organised by renowned scientists in their respective fields. Papers submitted to these sessions are reviewed according to the same rules as submissions to regular sessions. They must also follow the same format, instructions, deadlines and submission procedure. The special sessions organised during ESANN'2006 are: 1) Semi-blind approaches for source separation and independent component analysis M. Babaie-Zadeh, Sharif Univ. Tech. (Iran), C. Jutten, CNRS - Univ. J. Fourier - INPG (France) 2) Visualization methods for data mining F. Rossi, INRIA Rocquencourt (France) 3) Neural Networks and Machine Learning in Bioinformatics - Theory and Applications B. Hammer, Clausthal Univ. Tech. (Germany), S. Kaski, Helsinki Univ. Tech. (Finland), U. Seiffert, IPK Gatersleben (Germany), T. Villmann, Univ. Leipzig (Germany) 4) Online Learning in Cognitive Robotics J.J. Steil, Univ. Bielefeld, H. Wersing, Honda Research Institute Europe (Germany) 5) Man-Machine-Interfaces - Processing of nervous signals M. Bogdan, Univ. Tübingen (Germany) 6) Nonlinear dynamics N. Crook, T. olde Scheper, Oxford Brookes University (UK) Location -------- The conference will be held in Bruges (also called "Venice of the North"), one of the most beautiful medieval towns in Europe. Bruges can be reached by train from Brussels in less than one hour (frequent trains). The town of Bruges is known world-wide, and famous for its architectural style, its canals, and its pleasant atmosphere. The conference will be organized in a hotel located near the centre (walking distance) of the town. There is no obligation for the participants to stay in this hotel. Hotels of all levels of comfort and price are available in Bruges; there is a possibility to book a room in the hotel of the conference at a preferential rate through the conference secretariat. A list of other smaller hotels is also available.
The conference will be held at the Novotel hotel, Katelijnestraat 65B, 8000 Brugge, Belgium. Proceedings and journal special issue ------------------------------------- The proceedings will include all communications presented to the conference (tutorials, oral and posters), and will be available on-site. Extended versions of selected papers will be published in the Neurocomputing journal (Elsevier). Call for contributions ---------------------- Prospective authors are invited to submit their contributions before November 28, 2005. The electronic submission procedure is described on the ESANN portal http://www.dice.ucl.ac.be/esann/. Authors must also commit themselves to register for the conference and present the paper in case of acceptance of their submission (one paper per registrant). Authors of accepted papers will have to register before February 28, 2006; they will benefit from the advance registration fee. The ESANN conference applies a strict policy about the presentation of accepted papers during the conference: authors of accepted papers who do not show up at the conference will be blacklisted for future ESANN conferences, and the lists will be communicated to other conference organizers. Deadlines --------- Submission of papers November 28, 2005 Notification of acceptance January 27, 2006 Symposium April 26-28, 2006 Conference secretariat ---------------------- ESANN'2006 d-side conference services phone: + 32 2 730 06 11 24 av. L. Mommaerts Fax: + 32 2 730 06 00 B - 1140 Evere (Belgium) E-mail: esann at dice.ucl.ac.be http://www.dice.ucl.ac.be/esann Steering and local committee ---------------------------- François Blayo Préfigure (F) Gianluca Bontempi Univ. Libre Bruxelles (B) Marie Cottrell Univ. Paris I (F) Jeanny Hérault INPG Grenoble (F) Bernard Manderick Vrije Univ. Brussel (B) Eric Noldus Univ.
Gent (B)
Jean-Pierre Peters, FUNDP Namur (B)
Joos Vandewalle, KUL Leuven (B)
Michel Verleysen, UCL Louvain-la-Neuve (B)

Scientific committee (to be confirmed)
--------------------------------------
Cecilio Angulo, Univ. Polit. de Catalunya (E)
Miguel Atencia, Univ. Malaga (E)
Peter Bartlett, Univ. California, Berkeley (USA)
Pierre Bessière, CNRS (F)
Hervé Bourlard, IDIAP Martigny (CH)
Joan Cabestany, Univ. Polit. de Catalunya (E)
Stéphane Canu, Inst. Nat. Sciences App. (F)
Valentina Colla, Scuola Sup. Sant'Anna Pisa (I)
Holk Cruse, Universität Bielefeld (D)
Eric de Bodt, Univ. Lille II (F) & UCL Louvain-la-Neuve (B)
Dante Del Corso, Politecnico di Torino (I)
Georg Dorffner, University of Vienna (A)
Wlodek Duch, Nicholas Copernicus Univ. (PL)
Marc Duranton, Philips Semiconductors (USA)
Richard Duro, Univ. Coruna (E)
Anibal Figueiras-Vidal, Univ. Carlos III Madrid (E)
Simone Fiori, Univ. Perugia (I)
Jean-Claude Fort, Université Nancy I (F)
Leonardo Franco, Univ. Malaga (E)
Colin Fyfe, Univ. Paisley (UK)
Stan Gielen, Univ. of Nijmegen (NL)
Mirta Gordon, IMAG Grenoble (F)
Marco Gori, Univ. Siena (I)
Bernard Gosselin, Fac. Polytech. Mons (B)
Manuel Grana, UPV San Sebastian (E)
Anne Guérin-Dugué, INPG Grenoble (F)
Barbara Hammer, Univ. of Osnabrück (D)
Martin Hasler, EPFL Lausanne (CH)
Tom Heskes, Univ. Nijmegen (NL)
Christian Igel, Ruhr-Univ. Bochum (D)
Jose Jerez, Univ. Malaga (E)
Gonzalo Joya, Univ. Malaga (E)
Christian Jutten, INPG Grenoble (F)
Stefanos Kollias, National Tech. Univ. Athens (GR)
Jouko Lampinen, Helsinki Univ. of Tech. (FIN)
Petr Lansky, Acad. of Science of the Czech Rep. (CZ)
Beatrice Lazzerini, Univ. Pisa (I)
Mia Loccufier, Univ. Gent (B)
Erzsebet Merenyi, Rice Univ. (USA)
José Mira, UNED (E)
Jean-Pierre Nadal, Ecole Normale Supérieure Paris (F)
Erkki Oja, Helsinki Univ. of Technology (FIN)
Arlindo Oliveira, INESC-ID (P)
Gilles Pagès, Univ. Paris 6 (F)
Thomas Parisini, Univ. Trieste (I)
Hélène Paugam-Moisy, Université Lumière Lyon 2 (F)
Alberto Prieto, Universidad de Granada (E)
Didier Puzenat, Univ.
Antilles-Guyane (F)
Leonardo Reyneri, Politecnico di Torino (I)
Jean-Pierre Rospars, INRA Versailles (F)
Fabrice Rossi, INRIA (F)
David Saad, Aston Univ. (UK)
Francisco Sandoval, Univ. Malaga (E)
Jose Santos Reyes, Univ. Coruna (E)
Craig Saunders, Univ. Southampton (UK)
Udo Seiffert, IPK Gatersleben (D)
Bernard Sendhoff, Honda Research Institute Europe (D)
Peter Sollich, King's College (UK)
Jochen Steil, Univ. Bielefeld (D)
John Stonham, Brunel University (UK)
Johan Suykens, K. U. Leuven (B)
John Taylor, King's College London (UK)
Michael Tipping, Microsoft Research (Cambridge) (UK)
Claude Touzet, Univ. Provence (F)
Marc Van Hulle, KUL Leuven (B)
Thomas Villmann, Univ. Leipzig (D)
Axel Wismüller, Ludwig-Maximilians-Univ. München (D)
Michalis Zervakis, Technical Univ. Crete (GR)

========================================================
ESANN - European Symposium on Artificial Neural Networks
http://www.dice.ucl.ac.be/esann
* For submissions of papers, reviews, ...
  Michel Verleysen
  Univ. Cath. de Louvain - Machine Learning Group
  3, pl. du Levant - B-1348 Louvain-la-Neuve - Belgium
  tel: +32 10 47 25 51 - fax: +32 10 47 25 98
  mailto:esann at dice.ucl.ac.be
* Conference secretariat
  d-side conference services
  24 av. L. Mommaerts - B-1140 Evere - Belgium
  tel: +32 2 730 06 11 - fax: +32 2 730 06 00
  mailto:esann at dice.ucl.ac.be
========================================================

From Martin.Riedmiller at uos.de Tue Sep 20 03:30:58 2005
From: Martin.Riedmiller at uos.de (Martin Riedmiller)
Date: Tue, 20 Sep 2005 09:30:58 +0200
Subject: Connectionists: CLSquare - free software available
Message-ID: <432FBAB2.3050203@uos.de>

A new release of CLSquare (closed loop simulation system) is ready for free download at our website http://amy.informatik.uni-osnabrueck.de/clsquare

CLSquare simulates a control loop for closed loop control. Although originally designed for training and testing Reinforcement Learning controllers, it also applies to other learning and non-learning controller concepts.
Currently available plants: Acrobot, bicycle, cart pole, cart double pole, pole, mountain car and maze.

Currently available controllers: linear controller, Reinforcement Learning Q table, neural network based Q controller.

It comes with many useful features, e.g. graphical display and statistics output, documentation, and many demos for a quick start.

Enjoy,
Martin Riedmiller, Neuroinformatics Group, University of Osnabrueck

From silvia at sa.infn.it Tue Sep 20 08:48:13 2005
From: silvia at sa.infn.it (Silvia Scarpetta)
Date: Tue, 20 Sep 2005 14:48:13 +0200
Subject: Connectionists: Post-doc position in neural computation in Salerno
Message-ID: <005401c5bde1$90bc06a0$7746cdc1@sa.infn.it>

SECOND ANNOUNCEMENT
Please reply before 1 October 2005

OPENING
A two-year postdoctoral position (Assegno di Ricerca) in Neural Networks / NeuroPhysics starting this Fall 2005 (around November 2005) is available at the Dept. of Physics "E.R. Caianiello" of the Università degli Studi di Salerno, Italy, in the group headed by Prof. Maria Marinaro (http://www.sa.infn.it/NeuralGroup/). The monthly net amount of the fellowship is about 1200 Euro.

RESEARCH ACTIVITY
Areas of particular interest include:
1) Neurobiological applications of theoretical physics tools, computational and mathematical modeling of neural dynamics, cortical dynamics and oscillations, learning and memory and modelling of STDP, rhythmic locomotion and related topics.
2) Neural networks applied to signal and image processing and to speech recognition.
Descriptions of current projects in our group can be found at: http://www.sa.infn.it/NeuralGroup/

MINIMUM REQUIREMENTS
A master's degree in a scientific discipline (physics, neuroscience, mathematics, computer science, engineering, etc.) and a PhD degree or 3 years' research experience in fields related to the position topic. Sufficient knowledge of the English language and of a programming language (C or Matlab) will be required for successful project work.
*** Please provide before 1 October a CV with publication list and a description of research interests, and arrange for at least two letters of reference to be sent to:

Prof. Maria Marinaro
Dipartimento di Fisica "E.R. Caianiello"
Università degli Studi di Salerno
Via S. Allende
I-84081 Baronissi (SA)
Italy

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prof. Maria Marinaro
Dipartimento di Fisica "E.R. Caianiello"
Università degli Studi di Salerno
Via S. Allende, 84081 Baronissi (SA), Italy
Tel. +39 089 965318
Web page: http://www.sa.infn.it/NeuralGroup/

Dr. Silvia Scarpetta
Dipartimento di Fisica "E.R. Caianiello"
Università degli Studi di Salerno
Via S. Allende, 84081 Baronissi (SA), Italy
Tel. +39 089 965318
Web page: http://www.sa.infn.it/silvia.scarpetta

From crammer at cis.upenn.edu Wed Sep 21 15:43:00 2005
From: crammer at cis.upenn.edu (Koby Crammer)
Date: Wed, 21 Sep 2005 15:43:00 -0400 (EDT)
Subject: Connectionists: NIPS Workshop CFP - Advances in Structured Learning for Text and Speech Processing
Message-ID:

################################################################
CALL FOR PARTICIPATION

Advances in Structured Learning for Text and Speech Processing
a workshop at the 2005 Neural Information Processing Systems (NIPS) Conference

Submission deadline: Tuesday, November 1st, 2005
http://www.cis.upenn.edu/~crammer/workshop-index.html
################################################################

Organizers:
-----------
Fernando Pereira, CIS, University of Pennsylvania
Michael Collins, CSAIL, MIT
Jeff Bilmes, EE, University of Washington
Koby Crammer, CIS, University of Pennsylvania

Overview: This workshop is intended for researchers and students interested in developing and applying structured classification methods to text and speech processing problems. Recent advances in structured classification provide promising alternatives to the probabilistic generative models that have been the mainstay of speech recognition and statistical language processing.
However, powerful features of probabilistic generative models, such as hidden variables and compositional combination of several kinds of evidence, do not transfer cleanly to all structured classification methods. Starting with surveys of the state of the art in structured classification for text and speech, the workshop will focus on successes, failures, and directions for improvement of structured classification methods for text and speech, and on possible syntheses between the new structured classification methods and traditional generative models. Comparisons will also be made between "generative" and "discriminative" training procedures in structured classification problems. A successful workshop will identify critical questions that current methods are not yet capable of solving, and promising directions for solution. For instance, we hope to achieve a better understanding of how discriminative models may work with missing information, such as under-specified alignments or syntactic analyses; more generally, we plan to answer questions such as why, when, and where to use a generative model. Such problems arise in speech, language, and text processing, and will serve as unifying themes for the workshop. Among the questions to be discussed, we expect:

* Discriminative vs. generative models and algorithms
* Max margin, perceptron, and other criteria
* Incorporating prior knowledge
* Using data from multiple domains
* Adaptation of structured classifiers to new conditions
* Using unlabeled data
* Combining text and speech
* Integrated inference for complex language processing tasks

Program:
--------
This one-day workshop will have survey/tutorial talks in the morning followed by shorter contributed talks, posters, and discussion sessions later in the day. The survey talks will present central themes and questions that will guide the discussion during the workshop (see below).
We are well aware of the tendency of workshops to turn into mini-conferences, so we will keep tight control of the schedule to ensure that there is sufficient time for discussion during and after talks. One way we intend to accomplish this is to assign each presenter to serve as discussant for another presenter's talk. Discussion will be moderated by the organizers. The survey/tutorial talks are intended to provide a thorough background and overview of the field from a number of different perspectives (machine learning, statistics, mathematics, and applications such as speech, text, and language). To better tailor the workshop to the interested audience, the survey/tutorial talks will be tuned to a set of issues and questions raised on a NIPS workshop discussion web page. Interested participants are encouraged to post any nagging questions or general ideas to this discussion board; these questions will then form the basis for the central theme of the workshop. For this to succeed, people need to pose questions to the board, so questions may be posted either under one's own identity or anonymously. See below for further details.

Potential participants are encouraged to submit (extended) abstracts of two to four (2-4) pages in length outlining their research as it relates to the above theme. Papers may present novel ideas or applications related to structured classification. Encouraged topics include novel theoretical results, practical application results, novel insights regarding the above, and/or tips and tricks that work well empirically on a broad range of data. Papers should be formatted using the standard NIPS formatting guidelines.

Schedule and Dates:
-------------------
- Nov 1st, 2005: Paper submission deadline.
  Email all submissions to: with a subject starting with 'STRUCTLEARN'; submissions must be a .pdf file.
- Nov 8th, 2005: Acceptance (talk and poster) decisions announced.
- Dec 9th, 2005: NIPS Workshop date.

Relevant Web pages:
- NIPS workshop web page: http://www.nips.cc/Conferences/2005/Workshops/
- Discussion board for Advances in Structured Learning for Text and Speech Processing: http://fling-l.seas.upenn.edu/~cse1xx/structlearn/index.php

Please visit this web page and use it to post questions, problems, or ideas about open problems in the structured prediction area that you would like to see discussed, both during the survey/tutorial talks and throughout the rest of the workshop.

From ahu at cs.stir.ac.uk Wed Sep 21 05:35:12 2005
From: ahu at cs.stir.ac.uk (Dr. Amir Hussain)
Date: Wed, 21 Sep 2005 10:35:12 +0100
Subject: Connectionists: Call for Papers: BICS 06
Message-ID: <000801c5be8f$c3df28b0$dae80954@hec.gov>

Please circulate the CFP below/attached to friends and colleagues who may be interested. Thank you in advance, and apologies for any cross-postings.

Amir Hussain
Co-Chair, BICS'2006

2nd International Conference on: Brain Inspired Cognitive Systems (BICS 06)
Island of Lesvos, Greece
Hotel Delfinia
October 10 - 14, 2006
http://www.icsc-naiso.org/conferences/bics2006/bics06-cfp.html

General Chair: Igor Aleksander, Imperial College London, U.K.

First International ICSC Symposium on Machine Models of Consciousness (MoC 2006)
Discussions of this new burgeoning paradigm
Chair: Ron Chrisley, University of Sussex, U.K.

Third International ICSC Symposium on Biologically Inspired Systems (BIS 2006)
Broader issues in biological inspiration and neuromorphic systems
Chair: Leslie Smith, University of Stirling, U.K.
Second International ICSC Symposium on Cognitive Neuro Science (CNS 2006)
From computationally inspired models to brain-inspired computation
Chair: Igor Aleksander, Imperial College London, U.K.

Fourth International ICSC Symposium on Neural Computation (NC'2006)
Progress in neural systems
Chair: Amir Hussain, University of Stirling, U.K.

Why this conference, and who should attend:
Brain Inspired Cognitive Systems 2006 aims to bring together leading scientists and engineers who use analytic, syntactic and computational methods both to understand the prodigious processing properties of biological systems and, specifically, of the brain, and to exploit such knowledge to advance computational methods towards ever higher levels of cognitive competence. The four major symposia are organized in patterns that encourage cross-fertilization across the symposia topics. This emphasizes that BICS 2006 will be a major point of contact for researchers and practitioners who can benefit not only from the major advances in their specialist fields but also from the diversity of each other's views. Each of the four mornings is devoted to papers selected for their clear novelty and proven scientific impact, while the afternoons will provide scope for researchers to present their current work and discuss their aims and ambitions. Debates across disciplines will unite researchers with differing perspectives.
SUB-THEMES (including, but not limited to):

Models of Consciousness (MoC):
Global Workspace Theory
Imagination/synthetic phenomenology
Virtual Machine Approaches
Axiomatic Models
Control Theory/Methodology
Developmental/Infant Models
Will/volition/emotion/affect
Philosophical implications
Grounding in neurophysiology
Enactive approaches
Heterophenomenology

Cognitive Neuroscience (CNS):
Attentional Mechanisms
Cognitive Neuroscience of vision
CN of non-vision sensory modalities
CN of volition
Affective Systems
Language
Cortical Models
Sub-Cortical Models
Cerebellar Models
Event location in the brain
Others

Biologically Inspired Systems (BIS):
Brain Inspired (BI) Vision
BI Audition and sound processing
BI Other sensory modalities
BI Motion processing
BI Robotics
BI Evolutionary systems
BI Oscillatory systems
BI Signal processing
BI Learning
Neuromorphic systems
Others

Neural Computation (NC):
NeuroComputational (NC) Hybrid Systems
NC Learning
NC Control Systems
NC Signal Processing
Architectures
Devices
Pattern Classifiers
Support Vector Machines
Fuzzy or Neuro-Fuzzy Systems
Evolutionary Neural Networks
Biological Neural Network Models
Applications
Others

INVITED SPEAKERS:
Christof Koch, Koch Laboratory, CALTECH, USA
Shun-ichi Amari, RIKEN Brain Science Institute, Japan
Holk Cruse, University of Bielefeld, Germany
Pentti Haikonen, Nokia Research Center, Finland
Timothy K Horiuchi, University of Maryland, USA
John Taylor, Kings College, London, U.K.
Steve Potter, Gatech, USA
Jacek M Zurada, University of Louisville, USA
Marios Polycarpou, University of Cyprus, Cyprus
Others TBA

CONFERENCE VENUE:
Hotel Delphinia (http://www.molyvoshotel.com/) at the ancient village of Molivos (http://www.molivos.net/index.htm).
ORGANIZED BY:
ICSC Interdisciplinary Research, Planning Division, Canada (http://www.icsc-naiso.org/html/)
NAISO Natural and Artificial Intelligence Systems Organization, Canada
---------------------------------------------------
Email: planning at icsc.ab.ca
Website: www.icsc-naiso.org
Tel: +1-780-387 3546
Fax: +1-780-387 4329

From mlittman at cs.rutgers.edu Wed Sep 21 09:50:34 2005
From: mlittman at cs.rutgers.edu (Michael L. Littman)
Date: Wed, 21 Sep 2005 09:50:34 -0400 (EDT)
Subject: Connectionists: RL Benchmark announcement
Message-ID: <200509211350.j8LDoYm28629@porthos.rutgers.edu>

The organizers of the NIPS-05 workshop "Reinforcement Learning Benchmarks and Bake-offs II" would like to announce the first RL benchmarking event. We are soliciting participation from researchers interested in implementing RL algorithms for learning to maximize reward in a set of simulation-based tasks.

Our benchmarking set will include:
* continuous-state MDPs
* discrete factored-state MDPs

It will not include:
* partially observable MDPs

Comparisons will be performed through a standardized interface (details to be announced) and we *highly* encourage a wide variety of approaches to be included in the comparisons. We hope to see participants executing algorithms based on temporal difference learning, evolutionary search, policy gradient, TD/policy search combinations, and others. We do not intend to declare a "winner", but we do hope to foster a culture of controlled comparison within the extended community interested in learning for control and decision making. If you are interested in participating, please contact Michael Littman to be added to our mailing list. Additional information will be available at our website: http://www.cs.rutgers.edu/~mlittman/topics/nips05-mdp/
Sincerely, The RL Benchmarking Event Organizers From alan at cns.nyu.edu Thu Sep 22 16:11:50 2005 From: alan at cns.nyu.edu (alan stocker) Date: Thu, 22 Sep 2005 16:11:50 -0400 Subject: Connectionists: NIPS Demo Deadline - September 25 Message-ID: <43331006.9000908@cns.nyu.edu> Reminder: The deadline for NIPS Demonstration Proposals is Sunday, September 25, 2005. Demonstrators will have a chance to show their live and interactive demos in the areas of hardware technology, neuromorphic and biologically-inspired systems, robotics, and software systems. For further information see: http://www.nips.cc./Conferences/current/CFP/CallForDemos.php -- ________________________________________ alan stocker, ph.d. +1 212 992 8752 http://www.cns.nyu.edu/~alan/ ________________________________________ From terry at salk.edu Fri Sep 23 01:06:04 2005 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 22 Sep 2005 22:06:04 -0700 Subject: Connectionists: NEURAL COMPUTATION 17:11 In-Reply-To: Message-ID: Neural Computation - Contents - Volume 17, Number 11 - November 1, 2005 NOTE An Extended Analytic Expression for the Membrane Potential Distribution of Conductance-Based Synaptic Noise M. Rudolph and A. Destexhe LETTERS Synaptic and Temporal Ensemble Interpretation of Spike-Timing Dependent Plasticity Peter A. Appleby and Terry Elliott What Can a Neuron Learn with Spike-Timing-Dependent Plasticity? Robert Legenstein, Christian Naeger and Wolfgang Maass How Membrane Properties Shape the Discharge of Motoneurons: A Detailed Analytical Study Claude Meunier and Karol Borejsza Stimulus Competition by Inhibitory Interference Paul H. E. Tiesinga Optimization via Intermittency with a Self-Organizing Neural Network Terence Kwok and Kate A. Smith Mixture Modeling with Pairwise, Instance-Level Class Constraints Qi Zhao and David J. 
Miller

Geometrical Properties of Nu Support Vector Machines with Different Norms
Kazushi Ikeda and Noboru Murata

-----

ON-LINE - http://neco.mitpress.org/

SUBSCRIPTIONS - 2005 - VOLUME 17 - 12 ISSUES

                                            Electronic only
                 USA     Canada*   Others   USA     Canada*
Student/Retired  $60     $64.20    $114     $54     $57.78
Individual       $100    $107.00   $143     $90     $96.30
Institution      $680    $727.60   $734     $612    $654.84
* includes 7% GST

MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902.
Tel: (617) 253-2889  FAX: (617) 577-1545
journals-orders at mit.edu

-----

From nati at mit.edu Fri Sep 23 13:23:43 2005
From: nati at mit.edu (Nathan Srebro)
Date: Fri, 23 Sep 2005 13:23:43 -0400
Subject: Connectionists: NIPS'05 Workshop on The Accuracy-Regularization Frontier
In-Reply-To: <1540849205092310226c41249b@mail.gmail.com>
References: <1540849205092310226c41249b@mail.gmail.com>
Message-ID: <15408492050923102339c701d2@mail.gmail.com>

NIPS Workshop on The Accuracy-Regularization Frontier
Friday, December 9th, 2005
Westin Resort and SPA, Whistler, BC, Canada
http://www.cs.toronto.edu/~nati/Front/

CALL FOR CONTRIBUTIONS

A prevalent approach in machine learning for achieving good generalization performance is to seek a predictor that, on one hand, attains low empirical error, and on the other hand, is "simple", as measured by some regularizer, and so is guaranteed to generalize well. Consider, for example, support vector machines, where one seeks a linear classifier with low empirical error and low L2-norm (corresponding to a large geometrical margin). The precise trade-off between the empirical error and the regularizer (e.g. L2-norm) is not known. But since we would like to minimize both, we can limit our attention to extreme solutions, i.e. classifiers such that one cannot reduce both the empirical error and the regularizer (norm). Considering the set of attainable (error, norm) combinations, we are interested only in the extreme "frontier" (or "regularization path") of this set.
The typical approach is to evaluate classifiers along the frontier on held-out validation data (or cross validate) and choose the classifier minimizing the validation error. Classifiers along the frontier are typically found by minimizing some parametric combination of the empirical error and the regularizer, e.g. norm^2+C*err, for varying C, in the case of SVMs. Different values of C yield different classifiers along the frontier and C can be thought of as parameterizing the frontier. This particular parametric function of the empirical error and the regularizer is chosen because it leads to a convenient optimization problem, but minimizing any other monotone function of the empirical error and regularizer (in this case, the L2-norm) would also lead to classifiers on the frontier. Recently, methods have been proposed for obtaining the entire frontier in computation time that is comparable to obtaining a single classifier along the frontier. The proposed workshop is concerned with optimization and statistical issues related to viewing the entire frontier, rather than a single predictor along it, as an object of interest in machine learning. Specific issues to be addressed include: 1. Characterizing the "frontier" in a way independent of a specific trade-off, and its properties as such, e.g. convexity, smoothness, piecewise linearity/polynomial behavior. 2. What parametric trade-offs capture the entire frontier? Minimizing any monotone trade-off leads to a predictor on the frontier, but what conditions must be met to ensure all predictors along the frontier are obtained when the regularization parameter is varied? Study of this question is motivated by scenarios in which minimizing a non-standard parametric trade-off leads to a more convenient optimization problem. 3. Methods for obtaining the frontier: 3a. Direct methods relying on a characterization, e.g. Hastie et al's (2004) work on the entire regularization path of Support vector Machines. 3b. 
Warm-restart continuation methods (slightly changing the regularization parameter and initializing the optimizer to the solution of the previous value of the parameter). How should one vary the regularization parameter in order to guarantee never to be too far away from the true frontier? In a standard optimization problem, one ensures a solution within some desired distance from the optimal solution. Analogously, when recovering the entire frontier, it would be desirable to seek a frontier which is always within some desired distance in the (error,regularizer) space from the true frontier. 3c. Predictor-corrector methods: when the frontier is a differentiable manifold, warm-restart methods can be improved by using a first order approximation of the manifold to predict where the frontier should be for an updated value of the frontier parameter. 4. Interesting generalization or uses of the frontier, e.g.: - The frontier across different kernels - Higher dimensional frontiers when more than two parameters are considered 5. Formalizing and providing guarantees for the standard practice of picking a classifier along the frontier using a hold-out set (this is especially important for more than two objectives). In some regression cases there are detailed inferences that can be done on the frontier --- for Ridge it is well established whereas for Lasso, Efron et al (2004), and more recently Zou et al (2004), establish degrees of freedom along the frontier, yielding generalization error estimates. The main goal of the workshop is to open up research in these directions, establishing the important questions and issues to be addressed, and introducing to the NIPS community relevant approaches for multi-objective optimization. CONTRIBUTIONS We invite presentations addressing any of the above issues, or other related issues. 
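[Editorial aside, not part of the original call: the warm-restart continuation idea in point 3b above can be sketched concretely. In this hypothetical ridge-regression setting (function name and toy data are invented for illustration), each value of the regularization parameter is solved by gradient descent initialized at the solution for the previous value, and the resulting (error, regularizer) pairs trace out the frontier.]

```python
import numpy as np

def ridge_frontier(X, y, lambdas, lr=0.01, steps=3000):
    """Trace the (empirical error, regularizer) frontier of ridge
    regression by warm restarts: the solver for each new lambda is
    initialized at the solution found for the previous lambda."""
    n, d = X.shape
    w = np.zeros(d)                      # cold start only for the first lambda
    frontier = []
    for lam in lambdas:
        for _ in range(steps):           # plain gradient descent on the objective
            grad = X.T @ (X @ w - y) / n + lam * w
            w = w - lr * grad
        err = float(np.mean((X @ w - y) ** 2))   # empirical error
        reg = float(w @ w)                       # squared L2 norm (regularizer)
        frontier.append((lam, err, reg))
    return frontier

# Hypothetical toy data: larger lambda trades higher error for a smaller norm.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

frontier = ridge_frontier(X, y, lambdas=[0.01, 0.1, 1.0, 10.0])
for lam, err, reg in frontier:
    print(f"lambda={lam:g}  err={err:.3f}  ||w||^2={reg:.3f}")
```

Along the returned list the error is nondecreasing and the squared norm nonincreasing, which is exactly the frontier described above; predictor-corrector methods (point 3c) refine the same loop by extrapolating w before the inner iterations.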
We welcome presentations of completed work or work-in-progress, as well as position statements, papers discussing potential research directions, and surveys of recent developments.

SUBMISSION INSTRUCTIONS

If you would like to present in the workshop, please send an abstract in plain text (preferred), postscript or PDF (Microsoft Word documents will not be opened) to frontier at cs.toronto.edu as soon as possible, and no later than October 23rd, 2005. The final program will be posted in early November.

Workshop organizing committee:
Nathan Srebro, University of Toronto
Alexandre d'Aspremont, Princeton University
Francis Bach, Ecole des Mines de Paris
Massimiliano Pontil, University College London
Saharon Rosset, IBM T.J. Watson Research Center
Katya Scheinberg, IBM T.J. Watson Research Center

For further information, please email frontier at cs.toronto.edu or visit http://www.cs.toronto.edu/~nati/Front

From marc at memory.syr.edu Fri Sep 23 00:19:42 2005
From: marc at memory.syr.edu (Marc Howard)
Date: Fri, 23 Sep 2005 00:19:42 -0400 (EDT)
Subject: Connectionists: Postdoctoral position at Syracuse University
Message-ID:

Postdoctoral Position Available
Syracuse University

Applications are invited for a Postdoctoral position in the lab of Dr. Marc Howard at Syracuse University (http://memory.syr.edu). The successful applicant will participate in research of mutual interest related to the modeling of episodic and/or semantic memory. Possible projects include, but are not limited to, learning semantic spaces with contextual models of episodic memory, cognitive modeling of human episodic memory data, and modeling of a detailed neural implementation of a distributed memory model. Papers describing work performed in the lab can be found at http://memory.syr.edu/publications.html.
The lab is affiliated with both the Department of Psychology and the Department of Biomedical and Chemical Engineering, as well as the Syracuse Neuroscience Organization (http://sno.syr.edu), leading to a diverse and collaborative intellectual environment. The lab has access to extensive computing resources, including a Beowulf cluster housed in the Center for Policy Research. Applicants should have, or be about to receive, a Ph.D. in a relevant discipline with substantial mathematical/computational experience. Some degree of comfort working in Linux is essential. Familiarity with human memory is helpful; a strong interest in learning is essential. To apply, interested individuals should email a curriculum vitae (dvi or pdf files only), a brief statement of research interests, and the names of three references to Dr. Marc Howard at marc AT memory DOT syr DOT edu. Informal inquiries welcome. From louis.atallah at buid.ac.ae Mon Sep 26 08:55:42 2005 From: louis.atallah at buid.ac.ae (Louis Atallah) Date: Mon, 26 Sep 2005 16:55:42 +0400 Subject: Connectionists: Lecturer Position available in Machine Learning- The British University in Dubai Message-ID: <0INF003NGD2OF9@dicisp003b.dic.sys> Dear all, Applications are invited for the post of Lecturer in Machine Learning at the British University in Dubai. The Informatics Institute is in collaboration with the University of Edinburgh. The British University in Dubai (BUiD) is the result of an exciting vision shared by UAE leaders, UAE industry, UAE education and British interests in the region including the British Council. It is a research-led University in Dubai that draws on top-ranking British teaching and research to create a beacon for knowledge-led innovation in the Gulf region. For job particulars see http://www.buid.ac.ae/buid/html/article.asp?cid=303 For informal inquiries telephone Dr Habib Talhami 04 367 1962 or email him at habib.talhami at buid.ac.ae. Please do not reply to this email. 
Best regards,
Louis

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dr. Louis Atallah, Honorary Fellow
Institute of Informatics, British University in Dubai, P.O. Box 502216, Dubai, UAE
Institute of Informatics, University of Edinburgh, United Kingdom
Tel: +971-4-3671957 | email: louis.atallah at buid.ac.ae
web: http://homepages.inf.ed.ac.uk/latallah/
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From dalche at lami.univ-evry.fr Tue Sep 27 00:58:18 2005
From: dalche at lami.univ-evry.fr (dalche@lami.univ-evry.fr)
Date: Tue, 27 Sep 2005 06:58:18 +0200 (CEST)
Subject: Connectionists: 2 years postdoctoral position at Genopole (Evry, FRANCE)
Message-ID: <1230.82.230.68.163.1127797098.squirrel@www-ssl.lami.univ-evry.fr>

A two-year postdoctoral position in statistical machine learning for postgenomics is available at GENOPOLE (Evry, France).

The postdoc will join the Machine Learning for Biology team to work on a project on reverse modeling of macromolecular networks from various experimental data and background knowledge. The candidate must be a high-level computer scientist or mathematician with expertise in statistical machine learning (graphical models, neural networks, kernels). A background in bioinformatics (microarray data analysis, systems biology) is desirable but not necessary. He/she will be involved in collaborations with biologists and must show a strong motivation for interdisciplinary work.

Conditions of eligibility: the candidate must be working outside of France at the time of application. As the grant is intended as a return stipend, French candidates currently in foreign countries, and foreign candidates who have attended part of their courses in France, are welcome.

Duration: Year 2005-07 (starts in December 2005 or sooner if possible)
Deadline for application: Dec 1st, 2005.
Salary: 2000 euros/month.
Modalities: send a CV, 2 letters of reference, and 1 motivation letter.
Web page of the project: www.lami.univ-evry.fr/~dalche/recherche/postdoc2.html

Contact:
Prof. Florence d'Alché-Buc
Epigenomics project & LAMI UMR 8042 CNRS
GENOPOLE, 91 Evry, FRANCE
+33 1 60 87 40 73 / 39 08
Email: florence.dalche at lami.univ-evry.fr

From mr287 at georgetown.edu Mon Sep 26 14:43:35 2005
From: mr287 at georgetown.edu (Maximilian Riesenhuber)
Date: Mon, 26 Sep 2005 14:43:35 -0400
Subject: Connectionists: postdoctoral position in computational neuroscience, neural data analysis @ Georgetown University
Message-ID: <43384157.3060103@georgetown.edu>

Postdoctoral Position in computational neuroscience, neural data analysis
Riesenhuber Lab
Department of Neuroscience
Georgetown University

We have an opening for a postdoctoral fellow, starting ASAP or later, to participate in a research project studying the neural bases of fast visual target detection in complex images using a combination of psychophysics, EEG & NIRS imaging, and computational modeling. The candidate is expected to take on a main role in the analysis of the neural data and their computational modeling (with the goal of developing a real-time neurally-based target detection system). Thus, a strong quantitative background and experience in machine learning and data classification are required. Experience with EEG and psychophysics is a plus, as is a background in biological and/or machine vision. This position is also of interest to PhDs in computer science who wish to move into computational neuroscience. The position is for an initial period of one year with a possibility of extension depending on progress. Salary is competitive.

Our lab investigates the computational mechanisms underlying human perception as a gateway to understanding information processing and learning in cortex.
In our work, we combine computational modeling with psychophysical and fMRI data from our own lab and collaborators, as well as with single-unit data obtained in collaboration with physiology labs. For more information, see http://maxlab.neuro.georgetown.edu. The project is a collaboration with Dr. Tom Zeffiro's lab at the Center for Functional and Molecular Imaging at Georgetown University (http://cfmi.georgetown.edu/). Georgetown has a vibrant neuroscience community with over forty labs participating in the Interdisciplinary Program in Neuroscience. Its scenic campus is located at the edge of Washington, DC, one of the most intellectually and culturally rich cities in the country. Interested candidates should send a CV, a brief (1 page) statement of research interests, representative reprints, and the names and contact information of three references by email to Maximilian Riesenhuber (mr287 at georgetown.edu). Review of applications will begin immediately and will continue until the position is filled. ********************************************************************** Maximilian Riesenhuber, Department of Neuroscience, Georgetown University Medical Center, Research Building Room WP-12, 3970 Reservoir Rd., NW, Washington, DC 20007; phone: 202-687-9198; fax: 202-784-3562; email: mr287 at georgetown.edu; http://maxlab.neuro.georgetown.edu ********************************************************************** From d.mareschal at bbk.ac.uk Tue Sep 27 13:05:40 2005 From: d.mareschal at bbk.ac.uk (Denis Mareschal) Date: Tue, 27 Sep 2005 18:05:40 +0100 Subject: Connectionists: Postdoctoral Position in France Message-ID: Dear all, Please circulate to interested parties. Please DO NOT RESPOND DIRECTLY TO ME. Send replies and queries to Robert French at the address below.
Best regards, Denis Mareschal ================== Two-year post-doctoral position available: Neural Network and Genetic Algorithm Models of Category Learning We have obtained funding from the European Commission for a two-year post-doctoral position to study the mechanisms underlying the emergence of rule-based category learning in humans. The project is a highly interdisciplinary effort by researchers from Birkbeck College of the University of London, the University of Amsterdam, the University of Burgundy in Dijon, and Exeter and Cardiff Universities in the UK. The research will include ERP studies, experimental work with animals, experimental work with infants, children, and adults, as well as computational modelling. At the heart of this project is the need to develop connectionist (neural network) models of category learning that capture the developmental transitions observed both in infants across developmental time and in different species across evolutionary time. The post-doctoral fellow will work primarily with Professor Robert French, a specialist in the area of neural network research, at the Learning and Development Laboratory (LEAD-CNRS) at the University of Burgundy in Dijon, France. There will be opportunities for close collaborations with the Centre for Brain and Cognitive Development, Birkbeck University of London. Interested candidates should contact Professor French at robert.french at u-bourgogne.fr. Professor Robert M. French: French is currently a research director for the French National Scientific Research Center (CNRS). He has worked closely with the co-ordinator of the FAR project, Denis Mareschal at Birkbeck College in London, for the past decade. He is a highly interdisciplinary computer scientist who specialises in connectionist modelling of behaviour.
In addition to having a PhD in computer science from the University of Michigan under Douglas Hofstadter and John Holland, he has formal training in mathematics, psychology and philosophy. He has published work ranging over foundational issues in cognitive modelling, models of bilingual memory, catastrophic interference in neural networks, and artificial life. He has published in many of the areas directly related to the goals of this grant - namely, evolution, computational evolution, artificial neural networks, and infant categorisation. Computational skills: The simulations will be written in Matlab, and, while it is not necessary to know Matlab from the outset, excellent programming skills in some common programming language are necessary (e.g., C++, Java, Pascal, Lisp, etc.). Knowledge of, and preferably practical experience with, genetic algorithms and neural networks is important. A familiarity with some of the basic techniques of experimental psychology (especially category learning) and basic statistics (e.g., ANOVA, t-test, non-parametric tests, regression and correlation) will also be a plus. Language skills: Must have excellent standards of academic writing in English, and good oral communication skills. French is not required. Dijon: Dijon is an hour and a half by train southwest of Paris, located in the heart of France's famous Burgundy wine region, and is one of the gastronomic centers of France. It is a beautiful city with a long history as the capital of Burgundy. The old town has been beautifully preserved. It has a very active cultural life, boasting arguably the finest music auditorium in France. The gently rolling hills of the region are ideal for hiking and biking. Dijon is home to the University of Burgundy, with approximately 20,000 students. The relative proximity of Paris (1:39 by train, one train an hour) makes for easy day-trips there for concerts, expositions, or tourism.
LEAD: The successful candidate will be housed within LEAD (Experimental Laboratory for Learning and Development). This is one of the leading experimental psychology labs in France, carrying the prestigious CNRS label, awarded to a select few labs in France on the basis of the publication record of the lab members and their international impact. The lab is especially strong in the areas of implicit learning, music cognition and modeling. To find out more about this lab, see: http://www.u-bourgogne.fr/LEAD Salary: The before-tax salary will be between 24,000 and 30,000 euros, depending on the past experience of the candidate. (A typical pre-tax salary of 26,800 euros would mean an after-tax yearly salary of 21,960 euros.) Additional funding will be provided for computer equipment and travel to conferences and workshops. Standard social benefits available to employees of the University of Burgundy are provided. Responsibilities: The emphasis will be on research, publication and presentation of the FAR work at international venues. The successful candidate will be expected to develop (in collaboration with Professor French and other members of the project), implement and test connectionist models of category learning consistent with the objectives of the project. Duration of contract: The contract is to begin no later than January 1st, 2006 and is of a fixed-term two-year duration. Please send a CV, including references who may be contacted, to: robert.french at u-bourgogne.fr The position will be kept open until a suitable candidate is appointed. We anticipate having a first round of interviews at the end of October. The European Commission encourages women and minority candidates to apply for the positions it funds.
Further Details of Overall Project From Associations to Rules (FAR): Project summary Human adults appear to differ from other animals in their ability to use language to communicate, their use of logic and mathematics to reason, and their ability to abstract relations that go beyond perceptual similarity. These aspects of human cognition have one important thing in common: they are all thought to be based on rules. This apparent uniqueness of human adult cognition leads to an immediate puzzle: WHEN and HOW does this rule-based system come into being? Perhaps there is, in fact, continuity between the cognitive processes of non-linguistic species and pre-linguistic children on the one hand, and human adults on the other. Perhaps the apparent transition is simply a mirage that arises from the fact that Language and Formal Reasoning are usually described by reference to systems based on rules (e.g., grammar or syllogisms). To overcome this problem, we propose to study the transition from associative to rule-based cognition within the domain of concept learning. Concepts are the primary cognitive means by which we organise things in the world. Any species that lacked this ability would quickly become extinct (Ashby & Lee, 1993). Conversely, differences in the way that concepts are formed may go a long way in explaining the greater evolutionary success that some species have had over others. To address these issues, this project brings together 5 teams of leading international researchers from 4 different countries, with combined and convergent experience in Animal Cognition and Evolutionary Theory, Infant and Child Development, Adult Concept Learning, Neuroimaging, Social Psychology, Neural Network Modelling, and Statistical Modelling. Project objectives This project has six key objectives, designed to understand how learning and development interact in the emergence of rule-based concept learning: 1.
To develop a computational (mechanistic) model of the emergence of rule-based concept learning, both within the individual and across evolution. 2. To establish statistical tools for discriminating rigorously between rule-based and similarity-based classification behaviours. 3. To establish the conditions under which human adults show rule-based or similarity-based concept learning. 4. To chart the emergence across species of similarity- vs. rule-based concept learning. 5. To chart the emergence of rule-based concept learning in human infants and adults. 6. To chart the emerging neural basis of rule-based concept learning in human adults, children, and infants. -- ================================================= Dr. Denis Mareschal Centre for Brain and Cognitive Development School of Psychology Birkbeck College University of London Malet St., London WC1E 7HX, UK tel +44 (0)20 7631-6582/6226 reception: 6207 fax +44 (0)20 7631-6312 http://www.psyc.bbk.ac.uk/people/academic/mareschal_d/ ================================================= From fhamker at uni-muenster.de Thu Sep 29 11:50:22 2005 From: fhamker at uni-muenster.de (Fred Hamker) Date: Thu, 29 Sep 2005 17:50:22 +0200 Subject: Connectionists: Postdoctoral and PhD position in Computational modelling at West. Wilhelms-University Muenster (Germany) Message-ID: <9603f60c469cd4b1e9e9317fe200bae6@uni-muenster.de> The junior research group of Dr. Fred Hamker in Psychology at the Westfaelische-Wilhelms Universitaet Muenster invites applications for a Postdoctoral and a PhD position. Our group pursues a theoretical and model-driven approach to experimental psychology/neuroscience in the field of visual perception and its cognitive control. It is part of the laboratory of Prof. Dr. Markus Lappe. Together, we form an interdisciplinary research community with members coming from psychology, biology, computer science, electrical engineering, and physics.
The positions are within a project funded by the German Research Council (DFG). They aim at developing a neurocomputational systems approach to modeling the cognitive guidance of attention and object/category recognition. The focus is on building functional models of cortical and subcortical areas in the primate brain based on physiological and anatomical findings. The function of the prefrontal cortex and basal ganglia will be an integral part of these models. The validity of the models should also be demonstrated by testing their performance on real-world category/object recognition tasks. A degree in psychology, computer science, electrical engineering, physics, or biology is a prerequisite. Experience in programming (C++, Matlab), applied mathematics, and neural modeling is a significant advantage. Salary is according to the German research scale (BAT IIa for the Postdoctoral and BAT IIa/2 for the PhD position). The positions are initially for two years, starting in January 2006 (or soon thereafter). Please send applications preferably by October 15th, and no later than November 1st, 2005, per email (PDF preferred) to fhamker at uni-muenster.de. More information about the junior research group can be found at http://wwwpsy.uni-muenster.de/inst2/lappe/Fred/FredHamker.html. The university is an equal opportunity employer. Women are encouraged to apply. Disabled applicants will receive priority in case they have equal qualifications. From mseeger at gmail.com Thu Sep 29 06:40:05 2005 From: mseeger at gmail.com (Matthias Seeger) Date: Thu, 29 Sep 2005 12:40:05 +0200 Subject: Connectionists: Software available: Kernel multiple logistic regression. Incomplete Cholesky decomp.
Updating Cholesky factor In-Reply-To: <43c7cd3f05092902031c4c3c@mail.gmail.com> References: <43c7cd3f05092902031c4c3c@mail.gmail.com> Message-ID: <43c7cd3f050929034027b3cdd8@mail.gmail.com> Dear colleagues, I have made available some software at http://www.kyb.tuebingen.mpg.de/bs/people/seeger [follow "Software"] which I hope will be of use to Machine Learning and Statistics practitioners. The code is for Matlab (MEX) and is made available under the GNU General Public License. It makes use of the BLAS, LAPACK (contained in Matlab), and LINPACK whenever possible. I wrote it under Linux and did not test it on any other system. 1) Kernel Multiple Logistic Regression: an efficient implementation of the MAP approximation to the multi-class Gaussian process (aka kernel) model. It runs in O(n^2 C) (n datapoints, C classes) with full kernel matrices, or in O(n d C), d << n, with low-rank (incomplete Cholesky) approximations of the kernel matrices.

Positions for PhD studentships are available in B. Schölkopf's Empirical Inference department at the Max Planck Institute in Tuebingen, Germany (see http://www.kyb.tuebingen.mpg.de/bs), in the following areas: - application of kernel methods such as SVMs and Gaussian Processes to problems in robotics and computer vision (*) - learning theory and learning algorithms, in particular kernel machines and methods for dealing with structured data - machine learning for computer graphics - machine learning for brain computer interfacing - modelling and learning from multi-electrode neural recordings [The position (*) is funded by an EU research training network and is only available for EU nationals outside of Germany.] We invite applications from candidates with an outstanding academic record, including a strong mathematical or analytical background. Max Planck Institutes are publicly funded research labs with an emphasis on excellence in basic research. Tuebingen is a university town in southern Germany; see http://www.tuebingen.de/kultur/english/index.html for some pictures.
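An aside on the "Updating Cholesky factor" routine in Matthias Seeger's software note above: a rank-one update recomputes the factor of A + x x^T from the factor of A in O(n^2) rather than the O(n^3) of refactorizing. The sketch below is the textbook algorithm in NumPy, not Seeger's MEX implementation, and the function name is my own.

```python
import numpy as np

def chol_update(L, x):
    """Rank-one update of a Cholesky factor.

    Given lower-triangular L with A = L @ L.T, return the factor of
    A + x x^T in O(n^2). (Textbook Givens-rotation algorithm;
    illustrative, not Seeger's implementation.)
    """
    L, x = L.copy(), x.copy()
    n = x.size
    for k in range(n):
        r = np.hypot(L[k, k], x[k])          # rotated diagonal entry
        c, s = r / L[k, k], x[k] / L[k, k]   # rotation coefficients
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
            x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L
```

LINPACK, which the package draws on, provides this same operation as dchud.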
Inquiries and applications, including a CV (with complete lists of marks, copies of transcripts, etc.) and a short statement of research interests (matching some of the above areas), should be sent to sabrina.nielebock at tuebingen.mpg.de or Sabrina Nielebock Max Planck Institute for Biological Cybernetics Spemannstr. 38 72076 Tuebingen Germany Tel. +49 7071 601 551 Fax +49 7071 601 552 In addition, please arrange for two letters of reference to be sent directly to the address above. Applications will be considered immediately and until the positions are filled. From danny.silver at acadiau.ca Fri Sep 30 21:12:57 2005 From: danny.silver at acadiau.ca (Daniel L. Silver) Date: Fri, 30 Sep 2005 22:12:57 -0300 Subject: Connectionists: CFP - NIPS 2005 Workshop - Inductive Transfer : 10 Years Later Message-ID: <20051001011300.MVVB29614.simmts6-srv.bellnexxia.net@DSILVERNB1> NIPS 2005 Workshop - Inductive Transfer : 10 Years Later --------------------------------------------------------- Friday, Dec 9, Westin Resort and Spa in Whistler, British Columbia, Canada Overview: --------- Inductive transfer refers to the problem of applying the knowledge learned in one or more tasks to learning in a new task. While all learning involves generalization across problem instances, transfer learning emphasizes the transfer of knowledge across domains, tasks, and distributions that are related but not the same. For example, learning to recognize chairs might help to recognize tables; or learning to play checkers might improve the learning of chess. While people are adept at inductive transfer, even across widely disparate domains, we currently have little learning theory to explain this phenomenon, and few systems exhibit knowledge transfer. At NIPS95, two of the current co-chairs led a successful two-day workshop on "Learning to Learn" that focused on lifelong machine learning methods that retain and reuse learned knowledge.
(The co-organizers of the NIPS95 workshop were Rich Caruana, Danny Silver, Jon Baxter, Tom Mitchell, Lorien Pratt, and Sebastian Thrun.) The fundamental motivation for that meeting was the belief that machine learning systems would benefit from re-using knowledge learned from related and/or prior experience, and that this would enable them to move beyond task-specific tabula rasa systems. The workshop resulted in a series of articles published in a special issue of Connection Science [CS 1996], a special issue of Machine Learning [vol. 28, 1997], and a book entitled "Learning to Learn" [Pratt and Thrun 1998]. Research in inductive transfer has continued since 1995 under a variety of names: learning to learn, life-long learning, knowledge transfer, transfer learning, multitask learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, meta-learning, and incremental/cumulative learning. The recent burst of activity in this area is illustrated by the research in multi-task learning within the kernel and Bayesian contexts that has established new frameworks for capturing task relatedness to improve learning [Ando and Zhang 04, Bakker and Heskes 03, Jebara 04, Evgeniou and Pontil 04, Evgeniou, Micchelli and Pontil 05, Chapelle and Harchaoui 05]. This NIPS 2005 workshop will examine the progress that has been made in ten years, the questions and challenges that remain, and the opportunities for new applications of inductive transfer systems.
In particular, the workshop organizers have identified three major goals: (1) To summarize the work thus far in inductive transfer, develop a taxonomy of research, and highlight open questions; (2) To share new theories, approaches, and algorithms regarding the accumulation and re-use of learned knowledge to make learning more effective and more efficient; (3) To discuss the formation of an inductive transfer special interest group that might offer a website, benchmarking data, shared software, and links to various research programs and related web resources. Call for Papers: ---------------- We invite submission of workshop papers that discuss ongoing or completed work dealing with Inductive Transfer (see below for a list of appropriate topics). Papers should be no more than four pages in the standard NIPS format. Authorship should not be blind. Please submit a paper by emailing it in Postscript or PDF format to danny.silver at acadiau.ca with the subject line "ITWS Submission". We anticipate accepting as many as 8 papers for 15-minute presentation slots, and up to 20 poster papers. Please only submit an article if at least one of the authors will attend the workshop to present the work. The successful papers will be made available on the Web. A special journal issue or an edited book of selected papers is also being planned. The 1995 workshop identified the most important areas for future research to be: * The relationship between computational learning theory and selective inductive bias; * The tradeoffs between storing or transferring knowledge in representational and functional form; * Methods of turning concurrent parallel learning into sequential lifelong learning methods; * Measuring relatedness between learning tasks for the purpose of knowledge transfer; * Long-term memory methods and cumulative learning; and * The practical applications of inductive transfer and lifelong learning systems.
The workshop is interested in the progress that has been made in these areas over the last ten years. These remain key topics for discussion at the proposed workshop. More forward-looking and important questions include: * Under what conditions is inductive transfer difficult? When is it easy? * What are the fundamental requirements for continual learning and transfer? * What new mathematical models/frameworks capture/demonstrate transfer learning? * What are some of the latest and most advanced demonstrations of transfer learning in machines (Bayesian, kernel methods, reinforcement learning)? * What can be learned from transfer learning in humans and animals? * What are the latest psychological/neurological/computational theories of knowledge transfer in learning? Important Dates: ---------------- 19 Sep 05 - Call for participation 21 Oct 05 - Paper submission deadline 04 Nov 05 - Notification of paper acceptance 09 Dec 05 - Workshop in Whistler Organizers: -------------- Danny Silver, Jodrey School of Computer Science, Acadia University, Canada Rich Caruana, Department of Computer Science, Cornell University, USA Stuart Russell, Computer Science Division, University of California, Berkeley, USA Prasad Tadepalli, School of Electrical Eng. and Computer Science, Oregon State University, USA Goekhan Bakir, Max Planck Institute for Biological Cybernetics, Germany Kristin Bennett, Department of Mathematical Sciences, Rensselaer Polytechnic Institute, USA Massimiliano Pontil, Dept. of Computer Science, University College London, UK For further information: ------------------------ Please see the workshop webpage at http://iitrl.acadiau.ca/itws05/ Email danny.silver at acadiau.ca =============================================== Daniel L. Silver, Ph.D.
danny.silver at acadiau.ca Associate Professor p:902-585-1105 f:902-585-1067 Intelligent Information Technology Research Laboratory Jodrey School of Computer Science, Office 315 Acadia University, Wolfville, NS B4P 2R6
There are two factors in our new learning scheme that determine synaptic plasticity: (1) a reinforcement signal that is homogeneous across the network and depends on the amount of reward obtained after a trial, and (2) an attentional feedback signal from the output layer that limits plasticity to those units at earlier processing levels that are crucial for the stimulus-response mapping. The new scheme is called attention-gated reinforcement learning (AGREL). We show that it is as efficient as supervised learning in classification tasks. AGREL is biologically realistic and integrates the role of feedback connections, attention effects, synaptic plasticity, and reinforcement learning signals into a coherent framework. For full text, go to http://www.bio.vu.nl/enf/vanooyen/papers/agrel2005_abstract.html -- Dr. Arjen van Ooyen Center for Neurogenomics and Cognitive Research (CNCR) Department of Experimental Neurophysiology Vrije Universiteit De Boelelaan 1085 1081 HV Amsterdam The Netherlands E-mail: arjen.van.ooyen at falw.vu.nl Phone: +31.20.5987090 Fax: +31.20.5987112 Room: B329 Web: http://www.bio.vu.nl/enf/vanooyen From krista at james.hut.fi Sun Sep 4 10:17:54 2005 From: krista at james.hut.fi (Krista Lagus) Date: Sun, 4 Sep 2005 17:17:54 +0300 Subject: Connectionists: Unsupervised segmentation of words into morphemes -- Challenge 2005 Message-ID: Unsupervised segmentation of words into morphemes -- Challenge 2005 http://www.cis.hut.fi/morphochallenge2005/ Part of the EU Network of Excellence PASCAL Challenge Program. Participation is open to all. The objective of the Challenge is to design a statistical machine learning algorithm that segments words into the smallest meaning-bearing units of language, morphemes.
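As a toy illustration of the segmentation task just stated, here is a greedy longest-match against a known morpheme inventory. Note that this sidesteps the hard part of the challenge, which is learning the inventory without supervision; the function name and example inventory are my own.

```python
def segment(word, morphemes):
    """Greedy longest-match segmentation against a known inventory.

    Illustrative only: the Morpho Challenge asks for the inventory
    itself to be learned in an unsupervised way.
    """
    parts, i = [], 0
    while i < len(word):
        # Try the longest remaining substring first; fall back to a
        # single character if nothing in the inventory matches.
        for j in range(len(word), i, -1):
            if word[i:j] in morphemes or j == i + 1:
                parts.append(word[i:j])
                i = j
                break
    return parts

inventory = {"un", "break", "able", "walk", "ed", "ing", "s"}
print(segment("unbreakable", inventory))  # ['un', 'break', 'able']
```

Real systems instead optimize a criterion such as description length over a corpus, so that frequent substrings like "un" and "able" emerge as morphemes on their own.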
Ideally, these are basic vocabulary units suitable for different tasks, such as text understanding, machine translation, information retrieval, and statistical language modeling. The scientific goals are: * To learn about the phenomena underlying word construction in natural languages * To discover approaches suitable for a wide range of languages * To advance machine learning methodology The results will be presented in a workshop arranged in connection with other PASCAL challenges on machine learning. Program Committee (the list is growing): Levent Arslan, Boğaziçi University Samy Bengio, IDIAP Tolga Ciloglu, Middle East Technical University John Goldsmith, University of Chicago Kadri Hacioglu, Colorado University Chun Yu Kit, City University of Hong Kong Dietrich Klakow, Saarland University Jan Nouza, Technical University of Liberec Erkki Oja, Helsinki University of Technology Please read the rules and see the schedule. The datasets are available for download at http://www.cis.hut.fi/morphochallenge2005/ We are looking forward to an interesting competition! Mikko Kurimo, Mathias Creutz and Krista Lagus Neural Networks Research Centre, Helsinki University of Technology The organizers -------------------------------------------------------------------- Dr. Krista Lagus Krista.Lagus at hut.fi www.cis.hut.fi/krista/ Neural Networks Research Centre, Helsinki University of Technology P.O.Box 5400 (Konemiehentie 2, Espoo), FIN-02015 HUT, Finland Tel. +358-9-451 4459 Fax +358-9-451 3277 From munakata at psych.colorado.edu Sun Sep 4 23:06:55 2005 From: munakata at psych.colorado.edu (Yuko Munakata) Date: Sun, 4 Sep 2005 21:06:55 -0600 Subject: Connectionists: two cognitive faculty positions at CU Boulder Message-ID: <200509042106.56869.munakata@psych.colorado.edu> The University of Colorado Boulder has two cognitive faculty searches this year -- one in Psychology and one in the Institute of Cognitive Science.
Both searches have interest in candidates using computational approaches. Yuko Munakata ******************************************* The Department of Psychology, University of Colorado, Boulder, invites applications for a tenure-track position in cognitive psychology beginning August 2006. The department anticipates hiring at the assistant professor level. The University of Colorado, Boulder, is committed to diversity and equality in education and employment. In that spirit, applications at all levels will be considered from those who would strengthen the department's diversity. Candidates in any area of cognitive psychology will be considered. Special consideration will be given to candidates whose research interests include cognitive neuroscience, development, language, higher-order cognition, or object perception. The successful candidate will be expected to teach at the undergraduate and graduate levels, to supervise undergraduate and graduate students in research, and to maintain an active research program. Salary is competitive and dependent upon experience. All applicants should send a curriculum vitae, a statement of research interests, a statement of undergraduate and graduate teaching interests, representative research papers, and at least three letters of recommendation to: Yuko Munakata, Chair, Cognitive Search Committee, Department of Psychology, University of Colorado, 345 UCB, Boulder, CO 80309-0345. We will begin reviewing applications November 15, 2005 and will continue to review applications until the position is filled. ******************************************* Cognitive Scientist/Psychologist, tenure track position The Institute of Cognitive Science at the University of Colorado invites applications for a full-time tenure-track position at the assistant professor level, with a starting date of Fall 2006. 
The Institute is a multidisciplinary unit with representation from the departments of Psychology; Computer Science; Education; Linguistics; Speech, Language & Hearing Sciences; Philosophy; and Architecture & Planning. Because the individual hired for this Institute position will have an academic appointment within the Department of Psychology, we seek applicants with a strong record of research that integrates Cognitive Science with cognitive processes including, but not limited to, judgment and decision-making, language and discourse processes, learning and memory, object processing, or higher-order cognition. Candidates taking a developmental, neuroscience, computational, or experimental approach are all welcome. We will give strongest consideration to applicants whose work demonstrates an ability and commitment to interdisciplinary research. Duties include graduate and undergraduate teaching, research, research supervision, and service. Applicants should send a curriculum vitae, copies of representative publications, a teaching statement, a research summary, and letters from three referees to: Dr. Donna Caccamise Associate Director Institute of Cognitive Science 344 UCB University of Colorado Boulder, CO 80309 For fullest consideration, please apply by November 15, 2005. Applications will continue to be accepted after this date until the position is filled. Email inquiries may be sent to donnac at psych.colorado.edu. The University of Colorado is an Equal Opportunity/Affirmative Action Employer. From osporns at indiana.edu Mon Sep 5 12:48:26 2005 From: osporns at indiana.edu (Olaf Sporns) Date: Mon, 05 Sep 2005 11:48:26 -0500 Subject: Connectionists: ICDL 2006 Call for Papers Message-ID: <431C76DA.3090902@indiana.edu> ICDL 2006 International Conference on Development and Learning - Dynamics of Development and Learning - http://www.icdl06.org Indiana University Bloomington, May 31 - June 3, 2006 CALL FOR PAPERS Paper Submission Deadline: Feb.
6, 2006 CALL FOR INVITED SESSIONS PROPOSALS Proposal Submission Deadline: Dec. 1, 2005 Recent years have seen a convergence of research in artificial intelligence, developmental psychology, cognitive science, neuroscience and robotics, aimed at identifying common computational principles of development and learning in artificial and natural systems. The theme of this year's conference centers on development as a process of dynamic change that occurs within a complex and embodied system. The dynamics of development extend across multiple levels, from neural circuits, to changes in body morphology, sensors, movement, behavior, and inter-personal and social patterns. The goal of the conference is to present state-of-the-art research on autonomous development in humans, animals and robots, and to continue to identify new interdisciplinary research directions for the future of the field. The 5th International Conference on Development and Learning 2006 (ICDL06) will be held on the campus of Indiana University Bloomington, May 31- June 3, 2006. The conference is organized with the technical co-sponsorship of the IEEE Computational Intelligence Society. The conference will feature plenary talks by invited keynote speakers, invited sessions (workshops) organized around a central topic, a panel discussion and poster sessions. Paper submissions (for details regarding format and submission/review process see our website at http://www.icdl06.org) are invited in these areas:
* General Principles of Development and Learning in Humans and Robots
* Neural, Behavioral and Computational Plasticity
* Embodied Cognition: Foundations and Applications
* Social Development in Humans and Robots
* Language Development and Learning
* Dynamic Systems Approaches
* Emergence of Structures through Development
* Development of Perceptual and Motor Systems
* Models of Developmental Disorders
Authors may specify preferences for oral or poster presentations.
All submissions will be peer-reviewed and accepted papers will be published in a conference proceedings volume. Selected conference presenters will be invited to update and expand their papers for publication in a special issue on "Dynamics of Development and Learning" of the journal Adaptive Behavior (http://adb.sagepub.com/). ICDL precedes the conference "Artificial Life X", June 3-7, 2006, also held on the campus of Indiana University Bloomington (http://alifex.org). ICDL and ALIFE will share one day of overlapping workshops and tutorials on June 3. Organizing Committee: Linda Smith (Chair), Olaf Sporns, Chen Yu, Mike Gasser, Cynthia Breazeal, Gideon Deak, John Weng. From shivani at MIT.EDU Wed Sep 7 01:27:56 2005 From: shivani at MIT.EDU (Shivani Agarwal) Date: Wed, 7 Sep 2005 01:27:56 -0400 (EDT) Subject: Connectionists: CFP: NIPS 2005 Workshop - Learning to Rank Message-ID: ************************************************************************ CALL FOR PAPERS ---- Learning to Rank ---- Workshop at the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005) http://web.mit.edu/shivani/www/Ranking-NIPS-05/ -- Submission Deadline: October 21, 2005 -- ************************************************************************ [ Apologies for multiple postings ] OVERVIEW -------- The problem of ranking, in which the goal is to learn an ordering or ranking over objects, has recently gained much attention in machine learning. Progress has been made in formulating different forms of the ranking problem, proposing and analyzing algorithms for these forms, and developing theory for them. However, a multitude of basic questions remain unanswered: * Ranking problems may differ in many ways: in the form of the training examples, in the form of the desired output, and in the performance measure used to evaluate success. What are the consequences of each of these factors on the design of ranking algorithms and on their theoretical guarantees?
* The relationships between ranking and other classical learning problems such as classification and regression are still under-explored. Is any of these problems inherently harder or easier than another? * Although ranking is studied mainly as a supervised learning problem, it can have important consequences for other forms of learning; for example, in semi-supervised learning, one often ranks unlabeled examples so as to assign labels to the ones ranked at the top, and in reinforcement learning, one often learns a policy that ranks actions for each state. To what extent can these connections be explored and exploited? * There is a large variety of applications in which ranking is required, ranging from information retrieval to collaborative filtering to computational biology. What forms of ranking are most suited to different applications? What are novel applications that can benefit from ranking, and what other forms of ranking do these applications point us to? This workshop aims to provide a forum for discussion and debate among researchers interested in the topic of ranking, with a focus on the basic questions above. The goal is not to find immediate answers, but rather to discuss possible methods and applications, develop intuition, brainstorm on possible directions and, in the process, encourage dialogue and collaboration among researchers with complementary ideas. FORMAT ------ This is a one-day workshop that will follow the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005). The workshop will consist of two 3-hour sessions. There will be two invited talks and 5-6 contributed talks, with time for questions and discussion after each talk. We would particularly like to encourage, after each talk, a discussion of underlying assumptions, alternative approaches, and possible applications or theoretical analyses, as appropriate. 
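The pairwise view of ranking touched on in the overview above can be made concrete in a few lines. The sketch below is purely illustrative (the synthetic data, the linear scorer, and the hinge-style updates are assumptions of this example, not any workshop submission's method): it learns a scoring function from preference pairs and measures the fraction of correctly ordered pairs.

```python
import numpy as np

# Hypothetical toy data: items with feature vectors and noisy
# real-valued relevance scores (all numbers are made up).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=100)

# One common formulation: learn a linear scorer f(x) = w.x so that
# preferred items score higher, via stochastic updates on a
# pairwise hinge loss (a perceptron-like bridge from classification
# to ranking).
w = np.zeros(5)
lr = 0.01
for _ in range(2000):
    i, j = rng.integers(0, 100, size=2)
    if y[i] < y[j]:
        i, j = j, i                      # make i the preferred item
    if (X[i] - X[j]) @ w < 1.0:          # update only on violated pairs
        w += lr * (X[i] - X[j])

# Evaluate by the fraction of correctly ordered pairs.
scores = X @ w
ii, jj = np.triu_indices(100, k=1)
frac_correct = np.mean((scores[ii] - scores[jj]) * (y[ii] - y[jj]) > 0)
```

Note how classification-style machinery is reused on *differences* of examples, which is one standard point of contact between ranking and classification of the kind the overview asks about.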
The last 30 minutes of the workshop will be reserved for a concluding discussion which will be used to put into perspective insights gained from the workshop and to highlight open challenges. Invited Talks ------------- * Thorsten Joachims, Cornell University * Yoram Singer, The Hebrew University Contributed Talks ----------------- These will be based on papers submitted for review. See below for details. CALL FOR PAPERS --------------- We invite submissions of papers addressing all aspects of ranking in machine learning, including: * algorithmic approaches for ranking * theoretical analyses of ranking algorithms * comparisons of different forms of ranking * formulations of new forms of ranking * relationships between ranking and other learning problems * novel applications of ranking * challenges in applying or analyzing ranking methods We welcome papers on ranking that do not fit into one of the above categories, as well as papers that describe work in progress. We are particularly interested in papers that point to new questions/debate in ranking and/or shed new light on existing issues. Please note that papers that have previously appeared (or have been accepted for publication) in a journal or at a conference or workshop, or that are being submitted to another workshop, are not appropriate for this workshop. Submission Instructions ----------------------- Submissions should be at most 6 pages in length using NIPS style files (available at http://web.mit.edu/shivani/www/Ranking-NIPS-05/StyleFiles/), and should include the title, authors' names, postal and email addresses, and an abstract not to exceed 150 words. Email submissions (in pdf or ps format only) to shivani at mit.edu with subject line "Workshop Paper Submission". The deadline for submissions is Friday October 21, 11:59 pm EDT. Submissions will be reviewed by the program committee and authors will be notified of acceptance/rejection decisions by Friday November 11. 
Final versions of all accepted papers will be due on Friday November 18. Please note that one author of each accepted paper must be available to present the paper at the workshop. IMPORTANT DATES --------------- First call for papers -- September 6, 2005 Paper submission deadline -- October 21, 2005 (11:59 pm EDT) Notification of decisions -- November 11, 2005 Final papers due -- November 18, 2005 Workshop -- December 9 or 10, 2005 ORGANIZERS ---------- * Shivani Agarwal, MIT * Corinna Cortes, Google Research * Ralf Herbrich, Microsoft Research CONTACT ------- Please direct any questions to shivani at mit.edu. ************************************************************************ From leonb at nec-labs.com Tue Sep 6 14:56:36 2005 From: leonb at nec-labs.com (Leon Bottou) Date: Tue, 6 Sep 2005 14:56:36 -0400 Subject: Connectionists: CFP: NIPS 2005 Workshop: Large Scale Kernel Machines Message-ID: <200509061456.36446.leonb@nec-labs.com> ########################################################### NIPS 2005 Workshop LARGE SCALE KERNEL MACHINES ########################################################### Datasets with millions of observations can be gathered by crawling the web, mining business databases, or connecting a cheap video tuner to a laptop. Vastly more ambitious learning systems are theoretically possible. The literature shows no shortage of ideas for sophisticated statistical models. The computational cost of learning algorithms is now the bottleneck. During the last decade, dataset size has outgrown processor speed. Meanwhile, machine learning algorithms became more principled, and also more computationally expensive. The workshop investigates computationally efficient ways to exploit such large datasets using kernel machines. It will show how adequately designed kernel machines can efficiently process millions of examples. It will also debate whether kernel machines are the best way to achieve such objectives. 
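To make the scaling concern concrete: a classical kernelised perceptron stores one coefficient per mistake (its "support set"), so its memory and prediction cost can grow with the length of the data stream. The sketch below is a toy illustration under assumed data (a made-up 2-D circle problem) and an assumed RBF kernel width; it is not any workshop contribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, gamma=0.5):
    # Gaussian (RBF) kernel between two points.
    return np.exp(-gamma * np.sum((a - b) ** 2))

# Stream of 2-D points labelled by a nonlinear rule (inside/outside
# a circle); the data are synthetic and illustrative only.
X = rng.uniform(-1, 1, size=(300, 2))
y = np.where(np.sum(X**2, axis=1) < 0.5, 1.0, -1.0)

# Kernel perceptron: every mistake adds one stored example, so the
# support set -- and hence memory use -- grows with the mistakes.
support, alpha = [], []
mistakes = 0
for x_t, y_t in zip(X, y):
    f = sum(a * rbf(s, x_t) for s, a in zip(support, alpha))
    if y_t * f <= 0:                 # mistake: store x_t as a support vector
        support.append(x_t)
        alpha.append(y_t)
        mistakes += 1
```

On an easy stream the support set typically stays far smaller than the stream itself, but nothing in the algorithm bounds it, which is exactly the growth question the workshop topics raise.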
TOPICS:
* Fast implementation of ordinary Support Vector Machines. How can we improve the optimization algorithms and distribute them across several computers?
* Kernel algorithms specifically designed for large scale datasets. For instance, online kernel algorithms are less hungry for memory. Does this improvement come for free, or does it increase the error rates?
* Methods for containing the growth of the number of support vectors. Does the number of Support Vectors always grow linearly with the number of examples, as in ordinary Support Vector Machines?
* Comparing the relative strengths of kernel and non-kernel methods on large scale datasets. Are kernel methods the best tools for such datasets?
CALL FOR PARTICIPATION: If you wish to make a presentation, send a plain text email to with title, authors, and a brief abstract (less than one page.) Please send us this information before November 1st. ORGANIZERS: Leon Bottou (NEC Labs, Princeton) Olivier Chapelle (MPI, Tuebingen) Dennis Decoste (Yahoo!, Sunnyvale) Jason Weston (NEC Labs, Princeton) ------------------------------------------------------- From pfbaldi at ics.uci.edu Tue Sep 6 15:41:10 2005 From: pfbaldi at ics.uci.edu (Pierre Baldi) Date: Tue, 6 Sep 2005 12:41:10 -0700 Subject: Connectionists: Faculty Positions in Machine Learning and Computational Biology at UCI Message-ID: <008d01c5b31a$eec22670$cd04c380@ics.uci.edu> Tenure-Track Faculty Positions Biomedical Informatics, Computational Biology, and Systems Biology University of California, Irvine Two junior tenure-track positions are available at the University of California, Irvine in all areas of research at the intersection of life and computational sciences. These appointments will be made in the Donald Bren School of Information and Computer Sciences with possible joint appointments in the School of Biological Sciences, the School of Physical Sciences, or the School of Medicine.
Exceptionally qualified senior candidates also will be considered for tenured positions. These positions will be coordinated with the interdisciplinary research programs of the UCI Institute for Genomics and Bioinformatics. Examples of general areas of interest include: chemical informatics, bioinformatics, computational biology, systems biology, synthetic biology, and medical informatics. Examples of specific areas of interest include: protein structure and function prediction; molecular simulations and docking; computational drug screening and design; comparative genomics; analysis of high-throughput data; mathematical modeling of biological systems. Research methods should encompass computational, statistical, or machine-learning approaches. UCI is targeted as a growth campus for the University of California. It is one of the youngest UC campuses, yet ranked 10th among the nation's best public universities by US News & World Report. Salary and other compensation (including priority access to on-campus faculty housing) are competitive with the nation's finest public universities. For an overview of UCI, see http://www.uci.edu. The Bren School of ICS is one of nine academic units at UCI and was recently elevated to an independent school by the UC Regents. ICS' mission is to lead the innovation of new information and computing technology and study its economic and social significance while producing an educated workforce to further advance technology and fuel the economic engine. The Bren School of ICS has excellent faculty, innovative programs, high quality students and outstanding graduates as well as strong relationships with high tech industry. With approximately 2000 undergraduates, 300 graduate students, and 63 faculty members, ICS is the largest computing program within the UC system. For a perspective on ICS, see http://www.ics.uci.edu. Screening will begin immediately upon receipt of a completed application. 
Applications will be accepted until positions are filled, although maximum consideration will be given to applications received by January 15, 2006. Completed applications containing a cover letter, curriculum vitae, sample research publications, and three to five letters of recommendation should be uploaded electronically. Please refer to the following web site for instructions: http://www.ics.uci.edu/employment/employ_faculty.php. The University of California, Irvine is an equal opportunity employer committed to excellence through diversity, has a National Science Foundation Advance Gender Equity Program, and is responsive to the needs of dual career couples. Pierre Baldi School of Information and Computer Sciences University of California, Irvine Irvine, CA 92697-3435 +1(949) 824-5809 +1(949) 824-9813 (FAX) www.ics.uci.edu/~pfbaldi www.igb.uci.edu From reza at bme.jhu.edu Wed Sep 7 07:36:35 2005 From: reza at bme.jhu.edu (Reza Shadmehr) Date: Wed, 07 Sep 2005 07:36:35 -0400 Subject: Connectionists: Computational Motor Control Message-ID: <0IMG0071P2P530@jhuml1.jhmi.edu> Emo Todorov and I would like to invite you to the fourth computational motor control symposium at the Society for Neuroscience conference. The symposium will take place on Friday, Nov. 11 2005 at the Washington DC convention center. The purpose of the meeting is to highlight computational modeling and theories in motor control. This is an opportunity to meet and hear from some of the bright minds in the field. The program consists of two distinguished speakers and 12 contributed talks, selected from the submitted abstracts. The speakers this year are: Daniel Wolpert, Cambridge University "Probabilistic mechanisms in human sensorimotor control" Andrew Schwartz, University of Pittsburgh "Useful signals from the motor cortex" We encourage you to consider submitting an abstract. The abstracts will be reviewed by a panel and ranked. The top 12 abstracts will be selected for oral presentation.
We encourage oral presentation by students who have had a major role in the work described in the abstracts. More information is available here: www.bme.jhu.edu/acmc The deadline for abstract submission is September 30. Abstracts should be no more than two pages in length, including figures and references. With our best wishes, Reza Shadmehr and Emo Todorov From hoya at brain.riken.jp Thu Sep 8 23:21:53 2005 From: hoya at brain.riken.jp (Tetsuya Hoya) Date: Fri, 09 Sep 2005 12:21:53 +0900 Subject: Connectionists: Artificial Mind System -- Kernel Memory Approach Message-ID: <4320FFD1.1000803@brain.riken.jp> I am pleased to announce the publication of my recent monograph: ``Artificial Mind System -- Kernel Memory Approach'' by Tetsuya Hoya in the series: Studies in Computational Intelligence (SCI), Vol. 1 (270p), Heidelberg: Springer-Verlag ISBN 3540260722 http://www.springeronline.com The book is written from an engineer's perspective on the mind. It exposes the reader to a broad spectrum of interesting areas in brain science and mind-oriented studies. In the first part of the monograph, I focused upon a new connectionist model, called the `kernel memory', which can be seen as a generalisation of probabilistic / generalised regression neural networks. Then, the second part proposes a holistic model of an artificial mind system and its behaviour, as concretely as possible, on the basis of the kernel memory concept developed in the first part, within a unified context, which could eventually lead to practical realisation in terms of hardware or software. With a view that ``the mind is an input-output system always evolving'', ideas inspired by many branches of studies related to brain science are integrated within the text, i.e. artificial intelligence, cognitive science/psychology, connectionism, consciousness studies, general neuroscience, linguistics, pattern recognition/data clustering, robotics, and signal processing.
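For readers unfamiliar with the generalised regression neural network (GRNN) family that the kernel memory is said to generalise: in its simplest form a GRNN computes a Nadaraya-Watson kernel-weighted average of stored training targets. The following minimal sketch uses illustrative data and an assumed bandwidth; it shows the family, not the book's model.

```python
import numpy as np

# Stored "memory": noisy samples of a sine curve (made-up data).
rng = np.random.default_rng(2)
X_train = rng.uniform(0.0, 2.0 * np.pi, size=50)
y_train = np.sin(X_train) + 0.05 * rng.normal(size=50)

def grnn_predict(x, sigma=0.3):
    # Kernel activations of all stored patterns, then a normalised
    # weighted average of their targets (Nadaraya-Watson form).
    w = np.exp(-((X_train - x) ** 2) / (2.0 * sigma**2))
    return float(np.sum(w * y_train) / np.sum(w))

pred = grnn_predict(np.pi / 2.0)   # query near the peak of the sine
```

The entire "network" is just the stored examples plus a kernel, which is why such models are natural candidates for memory-based accounts of learning.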
Key words: artificial intelligence, mind, neural networks, creating the brain. Regards, Tetsuya Hoya Lab. for Advanced Brain Signal Processing BSI-RIKEN 2-1, Hirosawa, Wako-City, Saitama 351-0198 JAPAN e-mail: hoya at brain.riken.jp From ted.carnevale at yale.edu Tue Sep 6 10:37:55 2005 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Tue, 06 Sep 2005 10:37:55 -0400 Subject: Connectionists: NEURON course at SFN 2005 meeting Message-ID: <431DA9C3.80504@yale.edu> This year's 1-day NEURON course at the annual SFN meeting will include presentations of new features such as:
<> using the Import3D tool to convert detailed morphometric data (Neurolucida, swc, and Eutectic) into models
<> using the new CellBuilder to set up spatially nonuniform biophysical properties
<> using the ModelView tool to quickly discover what's really in a model (very helpful for deciphering your own old models, not to mention those you get from ModelDB and other sources)
<> using the ChannelBuilder to create new voltage- and ligand-gated channels--including stochastic ion channels--without having to write any program code
<> speeding up network models by distributing them over multiple processors
Only a few seats remain available, and these may go quickly now that the fall semester has started.
For on-line registration forms, see http://www.neuron.yale.edu/neuron/sfn2005/dc2005.html --Ted From cjs at ecs.soton.ac.uk Wed Sep 14 06:44:52 2005 From: cjs at ecs.soton.ac.uk (Craig Saunders) Date: Wed, 14 Sep 2005 11:44:52 +0100 Subject: Connectionists: NIPS Workshop on Kernel Methods and Structured Domains Message-ID: <4327FF24.4070301@ecs.soton.ac.uk> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Call for Papers Workshop on Kernel Methods and Structured Domains http://nips2005.kyb.tuebingen.mpg.de/ NIPS 2005 Submission deadline: 21 October Accept/Reject notification: 05 November %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Workshop Description %%%%%%%%%%%%%%%%%%%% Substantial recent work in machine learning has focused on the problem of dealing with inputs and outputs on more complex domains than are provided for in the classical regression/classification setting. Structured representations can give a more informative view of input domains, which is crucial for the development of successful learning algorithms: application areas include determining protein structure and protein-protein interaction; part-of-speech tagging; the organisation of web documents into hierarchies; and image segmentation. Likewise, a major research direction is in the use of structured output representations, which have been applied in a broad range of areas including several of the foregoing examples (for instance, the output required of the learning algorithm may be a probabilistic model, a graph, or a ranking). In particular, kernel methods have been especially fertile in giving rise to efficient and powerful algorithms for both structured inputs and outputs, since (as with SVMs) use of the "kernel trick" can make the required optimisations tractable: examples include large margin Markov networks, graph kernels, and kernels on automata. 
More generally, kernels between probability measures have been proposed (with no a-priori assumptions as to the dependence structure), which have motivated particular kernels between images and strings. In NIPS 2004, two workshops took place addressing learning approaches on structured domains: Learning on Structured Outputs (Bakir,Gretton,Hoffman,Schoelkopf) and Graphical Models and Kernels (Smola, Taskar, Vishwanathan). In view of significant and continued advances in the field, the present workshop addresses the same area as these earlier workshops: to provide an overview of recent theoretical and algorithmic foundations for kernels on structured domains, to investigate applications that build on these fundamentals, and to explore new research directions for future work. The workshop is also intended as one element of the Pascal thematic program on learning with complex and structured outputs. Workshop format %%%%%%%%%%%%%%% The workshop will last one day, and will include invited talks (of 30 minutes' duration), submitted talks (15 minutes), and periods of moderated and open discussion (20 minutes). The final discussion will provide a wrap-up session which will summarise the issues raised, so that all participants leave the workshop with a clear view of the future challenges and open questions which need to be addressed. Invited speakers %%%%%%%%%%%%%%%% Ben Taskar Jean-Philippe Vert Matthias Hein Call for papers %%%%%%%%%%%%%%% We invite submissions for 15 minute contributed talks (of which a maximum of eight will be accepted). The intended emphasis is on recent innovations, work in progress, and promising directions or problems for new research. Proposed topics include: * Learning when the inputs/outputs are structures * Learning from data embedded in structure * Graphical models and information geometry * Kernels on probability measures Our focus will be on using kernel methods to deal efficiently with structured data. 
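A small, self-contained example of a kernel on structured (string) inputs of the kind discussed above is the p-spectrum kernel, which counts shared length-p substrings between two sequences. The sequences below are made up for illustration; this is a generic textbook construction, not any workshop paper's kernel.

```python
from collections import Counter

def spectrum_features(s, p=3):
    # Map a string to its bag of length-p substrings (its "p-spectrum").
    return Counter(s[i:i + p] for i in range(len(s) - p + 1))

def spectrum_kernel(s, t, p=3):
    # Inner product of the two p-spectra: counts co-occurring p-mers.
    fs, ft = spectrum_features(s, p), spectrum_features(t, p)
    return sum(c * ft[sub] for sub, c in fs.items())

k_self = spectrum_kernel("GATTACA", "GATTACA")    # 5 distinct 3-mers
k_cross = spectrum_kernel("GATTACA", "CATGATTA")  # shared: GAT, ATT, TTA
```

Because the feature map is explicit and finite, the kernel trick is easy to see here: the same inner product could feed any kernel machine (e.g. an SVM) without ever materialising features for long alphabets.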
We will also consider work falling outside these specific topics, but within the workshop subject area. If you would like to submit to this session, please send an abstract to Arthur Gretton (arthur at tuebingen dot mpg dot de) before October 21. Please do not send posters or long documents. Decisions as to which proposals are accepted will be sent out on November 05. Workshop Chairs %%%%%%%%%%%%%%% Arthur Gretton (MPI for Biological Cybernetics) Gert Lanckriet (UC San Diego) Juho Rousu (University of Helsinki) Craig Saunders (University of Southampton) From irezek at robots.ox.ac.uk Wed Sep 14 11:53:27 2005 From: irezek at robots.ox.ac.uk (Iead Rezek) Date: Wed, 14 Sep 2005 16:53:27 +0100 Subject: Connectionists: CFP: NIPS 2005 Workshop - Game Theory, Machine Learning and Reasoning under Uncertainty Message-ID: <43284777.9040708@robots> ###################################################################### CALL FOR PAPERS Game Theory, Machine Learning and Reasoning under Uncertainty Workshop at the Neural Information Processing Systems (NIPS) 2005 http://www.robots.ox.ac.uk/~gmr05.html ###################################################################### OVERVIEW Game theory is concerned with understanding the decision processes and strategic actions of competing individuals. Having initially found applications in economics and diplomacy, game theory is increasingly being used to understand the interactions that occur within large multi-agent systems, and has been applied in areas as diverse as allocating resources within grid computing and coordinating the behaviour of multiple autonomous vehicles. Recent research has highlighted the benefits that may result from examining the interface between machine learning and game theory. 
While classical game theory makes limited provision for dealing with uncertainty and noise, research in machine learning, and particularly probabilistic inference, has resulted in a remarkable array of powerful algorithms for performing statistical inference from difficult real world data. Machine learning holds the promise of generalising game theory to deal with the uncertain, data-driven domain that we encounter in real-world applications (for example, the timely and decentralised flight coordination for Europe's future Open Skies initiative or the dynamic and responsive data fusion of multi-modal physiological time series in intensive care.) In addition, whilst techniques from graphical models have suggested computationally tractable algorithms for analysing games that consist of a large number of players, insights from game theory are also inspiring new work on strategic learning behaviour in probabilistic inference and are suggesting new algorithms to perform intelligent sampling in Markov chain Monte Carlo methods. CALL FOR PAPERS To investigate these issues and reflect the range of questions that the workshop addresses, we invite submissions from researchers who have made contributions to this nascent field and also practitioners from a wide range of backgrounds. Of particular interest is research concerning (but not limited to) the following issues: 1. Considered separately, what are the current limitations of game theory and probabilistic inference and what can be achieved by integrating the two? 2. What are the specific points of correspondence between game theory and probabilistic inference that can be used to incorporate new developments in probabilistic inference into game theory applications and vice versa?
Some examples of such correspondences that could be addressed are:
- dynamic behaviour of probabilistic inference mechanisms through game theoretic interaction models,
- structured belief models in computational game theory algorithms,
- quantitative bounds on Nash equilibria,
- MCMC sampling with intelligent particles.
3. What are the applications which are most likely to benefit from an explicitly inferential game theory? Workshop papers should be no more than 8 pages in length and written in standard NIPS format. Please also indicate whether you are interested in an oral or poster presentation. We also welcome position papers and work by researchers from industrial/application backgrounds, reflecting the future practical needs of the research area. Please submit an article only if at least one of the authors is certain to attend. IMPORTANT DATES
14 September 2005 Call for papers
23 October 2005 Deadline for paper submissions
05 November 2005 Notification of paper acceptance
9/10 December 2005 Workshop
ORGANISERS The ARGUS project (www.robots.ox.ac.uk/~argus), and in particular Iead Rezek, University of Oxford (irezek at robots.ox.ac.uk) Alex Rogers, University of Southampton (a.rogers at ecs.soton.ac.uk) and David Wolpert, NASA Ames Research Center INQUIRIES Please direct any inquiries regarding the workshop to Iead Rezek, (irezek at robots.ox.ac.uk) or Alex Rogers (a.rogers at ecs.soton.ac.uk) From cindy at bu.edu Mon Sep 12 10:54:10 2005 From: cindy at bu.edu (Cynthia Bradford) Date: Mon, 12 Sep 2005 10:54:10 -0400 Subject: Connectionists: Neural Networks 18(8) 2005: Special Issue on "Neural Networks and Kernel Methods for Structured Domains" Message-ID: <200509121454.j8CEsBBp014366@kenmore.bu.edu> NEURAL NETWORKS 18(8) Contents - Volume 18, Number 8 - 2005 Special Issue on "Neural Networks and Kernel Methods for Structured Domains" Barbara Hammer, Craig Saunders, and Alessandro Sperduti (editors)
------------------------------------------------------------------
Introduction to the Special Issue
Barbara Hammer, Craig Saunders, and Alessandro Sperduti

A novel approach to extracting features from motif content and protein composition for protein sequence classification
Xing-Ming Zhao, Yiu-Ming Cheung, and De-Shuang Huang

Learning protein secondary structure from sequential and relational data
Alessio Ceroni, Paolo Frasconi, and Gianluca Pollastri

Recursive neural networks for processing graphs with labeled edges: Theory and applications
M. Bianchini, M. Maggini, L. Sarti, and F. Scarselli

Recursive principal components analysis
Thomas Voegtlin

The loading problem for recursive neural networks
Marco Gori and Alessandro Sperduti

On the relationship between deterministic and probabilistic directed graphical models: From Bayesian networks to recursive neural networks
Pierre Baldi and Michal Rosen-Zvi

An incremental regression method for graph structured data
Menita Carozza and Salvatore Rampone

Graph kernels for chemical informatics
Liva Ralaivola, Sanjay J. Swamidass, Hiroto Saigo, and Pierre Baldi

The context-tree kernel for strings
Marco Cuturi and Jean-Philippe Vert
------------------------------------------------------------------
Electronic access: www.elsevier.com/locate/neunet/. Individuals can look up instructions, aims & scope, see news, tables of contents, etc. Those who are at institutions which subscribe to Neural Networks get access to full article text as part of the institutional subscription.
Sample copies can be requested for free and back issues can be ordered through the Elsevier customer support offices: nlinfo-f at elsevier.nl usinfo-f at elsevier.com or info at elsevier.co.jp ------------------------------ INNS/ENNS/JNNS Membership includes a subscription to Neural Networks: The International (INNS), European (ENNS), and Japanese (JNNS) Neural Network Societies are associations of scientists, engineers, students, and others seeking to learn about and advance the understanding of the modeling of behavioral and brain processes, and the application of neural modeling concepts to technological problems. Membership in any of the societies includes a subscription to Neural Networks, the official journal of the societies. Application forms should be sent to all the societies you want to apply to (for example, one as a member with subscription and the other one or two as a member without subscription). The JNNS does not accept credit cards or checks; to apply to the JNNS, send in the application form and wait for instructions about remitting payment. The ENNS accepts bank orders in Swedish Crowns (SEK) or credit cards. The INNS does not invoice for payment. 
----------------------------------------------------------------------------
Membership Type       INNS            ENNS      JNNS
----------------------------------------------------------------------------
membership with       $80 (regular)   SEK 660   Y 13,000 (plus Y 2,000
Neural Networks                                 enrollment fee)
                      $20 (student)   SEK 460   Y 11,000 (plus Y 2,000
                                                enrollment fee)
----------------------------------------------------------------------------
membership without    $30             SEK 200   not available to non-student
Neural Networks                                 (subscribe through another
                                                society); Y 5,000 student
                                                (plus Y 2,000 enrollment fee)
----------------------------------------------------------------------------
Name:    ______________________________________________________
Title:   ______________________________________________________
Address: ______________________________________________________
Phone:   ______________________________________________________
Fax:     ______________________________________________________
Email:   ______________________________________________________
Payment:
[ ] Check or money order enclosed, payable to INNS or ENNS
OR
[ ] Charge my VISA or MasterCard
    card number _______________________________
    expiration date _____________________________

INNS Membership
2810 Crossroads Drive, Suite 3800
Madison WI 53718 USA
608 443 2461, ext. 138 (phone)
608 443 2474 (fax)
srees at reesgroupinc.com
http://www.inns.org

ENNS Membership
University of Skovde
P.O.
Box 408 531 28 Skovde Sweden 46 500 44 83 37 (phone) 46 500 44 83 99 (fax) enns at ida.his.se http://www.his.se/ida/enns JNNS Membership JNNS Secretariat c/o Fuzzy Logic Systems Institute 680-41 Kawazu, Iizuka Fukuoka 820-0067 Japan 81 948 24 2771 (phone) 81 948 24 3002 (fax) jnns at flsi.cird.or.jp http://www.jnns.org/ ---------------------------------------------------------------------------- From emipar at tsc.uc3m.es Tue Sep 13 09:33:08 2005 From: emipar at tsc.uc3m.es (Emilio Parrado-Hernandez) Date: Tue, 13 Sep 2005 15:33:08 +0200 Subject: Connectionists: Deadline extended: JMLR special topic on Machine Learning and Large Scale Optimization Message-ID: <4326D514.1070901@tsc.uc3m.es> The deadline for the submission to the JMLR special topic on Machine Learning and Large Scale Optimisation has been extended until October 5. You can find more information in the web site of the journal: http://jmlr.csail.mit.edu/ If you have any related question or enquire, please do not hesitate to contact us. Best regards, Emilio and Kristin -- ==================================================== Emilio Parrado-Hernandez Visiting Lecturer Department of Signal Processing and Communications, Universidad Carlos III de Madrid Avenida de la Universidad 30, 28911 Leganes, Spain Phone: +34 91 6248738 Fax: +34 91 6248749 ==================================================== From g.goodhill at imb.uq.edu.au Sun Sep 11 22:39:57 2005 From: g.goodhill at imb.uq.edu.au (Geoffrey Goodhill) Date: Mon, 12 Sep 2005 12:39:57 +1000 Subject: Connectionists: Faculty position in Applied Statistics Message-ID: <4324EA7D.7030100@imb.uq.edu.au> Dear Connectionists, I would like to draw your attention to the following faculty position available in the Maths dept at the University of Queensland. 
There are several people in the dept who are interested in neural networks / mathematical biology / statistical analysis of biological data, including Geoff McLachlan (www.maths.uq.edu.au/~gjm), Kevin Burrage (www.maths.uq.edu.au/~kb), and myself. Please direct enquiries to Geoff McLachlan, gjm at maths.uq.edu.au. Thanks, Geoff

Geoffrey J Goodhill, PhD
Associate Professor
Queensland Brain Institute, Department of Mathematics & Institute for Molecular Bioscience
University of Queensland
St Lucia, QLD 4072, Australia
Phone: +61 7 3346 2612
Fax: +61 7 3346 8836
Email: g.goodhill at uq.edu.au
http://cns.qbi.uq.edu.au

--------------

LECTURESHIP IN STATISTICS
School of Physical Sciences - St Lucia Campus - UQ

Opportunity to work in a leading centre of statistical research in Australia. The discipline of Mathematics (statistics/applied probability group) in the School of Physical Sciences is the major provider of undergraduate and postgraduate statistical education in Queensland, and is a leading centre of statistical research in Australia. We are seeking to make an appointment at Lecturer Level B, preferably in the area of applied statistics. In the role of Lecturer, the successful applicant will undertake teaching, postgraduate supervision, and further development of the School's Mathematics program, as well as perform research, administrative and other activities associated with the School and its Statistics Research Activities. Applicants must possess postgraduate qualifications (PhD level or equivalent) in statistics and will have established a strong reputation for research in an area of applied statistics. This is a full-time, fixed-term appointment for 3 years at Academic Level B. The remuneration package will be in the range of $71,293 - $84,660 per annum, which includes employer superannuation contributions of 17%. Obtain the position description and selection criteria online or contact Mr Graham Beal on (07) 3365 7923 or by email to hr at epsa.uq.edu.au.
Contact Professor Geoff McLachlan on (07) 3365 2150 or email gjm at maths.uq.edu.au to discuss the role. Send applications to Mr Graham Beal, Human Resource Officer, Faculty of Engineering, Physical Sciences and Architecture at the address below, or by email to applications at epsa.uq.edu.au. Closing date for applications: 7 October 2005. Reference Number: 3000860

From michael at chaos.gwdg.de Mon Sep 12 06:57:11 2005 From: michael at chaos.gwdg.de (Michael Herrmann) Date: Mon, 12 Sep 2005 12:57:11 +0200 (CEST) Subject: Connectionists: Call for participation Message-ID:

A mini-symposium on "SELF-ORGANIZATION OF BEHAVIOR in robotic and living systems" will take place at the Bernstein Center for Computational Neuroscience Goettingen on Sept 15/16, 2005. For more information please go to http://www.chaos.gwdg.de/~michael/SOB2005.html

*********************************************************************
* Dr. J. Michael Herrmann        Georg August University Goettingen *
* Tel. : +49 (0)551 5176424      Institute for Nonlinear Dynamics   *
* Fax  : +49 (0)551 5176439      Bunsenstrasse 10                   *
* mobile: 0176 2800 4268         D-37073 Goettingen, Germany        *
* EMail: michael at chaos.gwdg.de  http://www.chaos.gwdg.de/~michael *
*********************************************************************

From sml at essex.ac.uk Mon Sep 12 09:48:48 2005 From: sml at essex.ac.uk (Lucas, Simon M) Date: Mon, 12 Sep 2005 14:48:48 +0100 Subject: Connectionists: IEEE CIG 2005 Proceedings On-Line Message-ID:

The proceedings for the 2005 IEEE Symposium on Computational Intelligence and Games are now freely available on-line. http://cigames.org These include many papers on neural network game-playing agents, etc., and so will be of interest to many members of this list. Please also note the CIG 2006 CFP link.
Best regards, Simon Lucas

From t.heskes at science.ru.nl Sun Sep 11 12:58:32 2005 From: t.heskes at science.ru.nl (Tom Heskes) Date: Sun, 11 Sep 2005 18:58:32 +0200 Subject: Connectionists: Neurocomputing volume 67 Message-ID: <43246238.9050901@science.ru.nl>

Neurocomputing Volume 68 (October 2005)

FULL LENGTH PAPERS

Absolutely exponential stability of Cohen-Grossberg neural networks with unbounded delays
  Wenjun Xiong and Jinde Cao
Speaker authentication system using soft computing approaches
  Abdul Wahab, Goek See Ng and Romy Dickiyanto
Output partitioning of neural networks
  Sheng-Uei Guan, Qi Yinan, Syn Kiat Tan and Shanchun Li
A unified SWSI-KAMs framework and performance evaluation on face recognition
  Songcan Chen, Lei Chen and Zhi-Hua Zhou
A framework for simulating axon guidance
  Ning Feng, Gangmin Ning and Xiaoxiang Zheng
Internal simulation of perception: a minimal neuro-robotic model
  Tom Ziemke, Dan-Anders Jirenhed and Germund Hesslow
Asymptotic convergence properties of the EM algorithm with respect to the overlap in the mixture
  Jinwen Ma and Lei Xu
Unsupervised learning with stochastic gradient
  Harold Szu and Ivica Kopriva
Global asymptotic stability analysis of bidirectional associative memory neural networks with constant time delays
  Sabri Arik and Vedat Tavsanoglu
A parallel growing architecture for self-organizing maps with unsupervised learning
  Iren Valova, Daniel Szer, Natacha Gueorguieva and Alexandre Buer

-------

LETTERS

Existence and exponential stability of almost periodic solutions for Hopfield neural networks with delays
An efficient fingerprint verification system using integrated Gabor filters and Parzen Window Classifier
  Dario Maio and Loris Nanni
Ensemble of Parzen window classifiers for on-line signature verification
  Loris Nanni and Alessandra Lumini
A genetic algorithm for solving the inverse problem of support vector machines
  Xi-Zhao Wang, Qiang He, De-Gang Chen and Daniel Yeung
Variance change point detection via artificial neural networks for data separation
  Kyong Joo Oh, Myung Sang Moon and Tae Yoon Kim
EEG pattern discrimination between salty and sweet taste using adaptive Gabor transform
  Juliana Cristina Hashida, Ana Carolina de Sousa Silva, Sérgio Souto and Ernane José Xavier Costa
Fast image compression using matrix K-L transform
  Daoqiang Zhang and Songcan Chen
An alternative switching criterion for independent component analysis (ICA)
  Dengpan Gao, Jinwen Ma and Qiansheng Cheng
A neural network to solve the hybrid N-parity: Learning with generalization issues
  M. Al-Rawi
Support vector machines for candidate nodules classification
  Paola Campadelli, Elena Casiraghi and Giorgio Valentini
Fusion of classifiers for predicting protein-protein interactions
  Loris Nanni
Lagrangian object relaxation neural network for combinatorial optimization problems
  Hiroki Tamura, Zongmei Zhang, Xinshun Xu, Masahiro Ishii and Zheng Tang
Fully complex extreme learning machine
  Ming-Bin Li, Guang-Bin Huang, P. Saratchandran and N. Sundararajan
Fusion of classifiers for protein fold recognition
  Loris Nanni

-------

JOURNAL SITE: http://www.elsevier.com/locate/neucom
SCIENCE DIRECT: http://www.sciencedirect.com/science/issue/5660-2005-999319999-605301

From moodylab at icsi.berkeley.edu Thu Sep 15 03:21:07 2005 From: moodylab at icsi.berkeley.edu (John Moody) Date: Thu, 15 Sep 2005 00:21:07 -0700 Subject: Connectionists: CFP: NIPS*2005 Workshop on Computational Finance Message-ID: <10b7069b05091500215dd3b7d6@mail.gmail.com>

CALL FOR PARTICIPATION -- NIPS*2005 WORKSHOP
COMPUTATIONAL FINANCE & MACHINE LEARNING
Friday, December 9, 2005
Westin Resort, Whistler, British Columbia
http://www.icsi.berkeley.edu/~moody/nips2005compfin.htm

WORKSHOP DESCRIPTION This interdisciplinary workshop will provide a forum for discussion of research at the intersection of Computational Finance and Machine Learning.
Finance-related papers have occasionally appeared at NIPS over the years, but this will be the first finance-related workshop during NIPS's entire 19-year history! The workshop will bring together a diverse set of researchers from machine learning, academic finance and the financial industry. Emphasis will be on machine learning applications to finance and problems from finance that are data-driven or require parameter estimation from data. The workshop will include invited presentations, a financial industry panel, contributed talks, a poster session and plenty of time for lively discussion. A wide range of topics is of interest, for example:

Financial Applications:
* Financial Time Series & Volatility
* Trading & Arbitrage Strategies
* Optimal Execution, Market Making & Market Microstructure
* Portfolio Management & Asset Allocation
* Stock Selection & Security Analysis
* Risk & Extreme Events
* Behavioral Finance
* Credit Analysis & Credit Risk
* Option Hedging, Exercise & Pricing
* Calibration, e.g. of Term Structure Models
* Multi-Agent Market Simulations

Machine Learning Methods:
* Reinforcement Learning & Dynamic Programming
* Non-Parametric Statistics & Bayesian Learning
* Latent Variable & Hidden State Models
* Evolutionary Algorithms
* Data Mining & Visualization
* Independent Components, Self-Organizing Maps
* Monte Carlo & Resampling
* Ensembles and Boosting

SUBMISSIONS

We anticipate accepting six to eight 20-minute contributed talks and a number of posters. If you would like to present your work, please submit a 100 to 500 word abstract as soon as possible (no later than October 14) to my assistant Su'ad Hall at: MoodyLab at ICSI.Berkeley.Edu If you wish to submit a full manuscript in addition, that's great. Our goal is to put together a coherent and informative program that has broad appeal to the NIPS and Computational Finance communities. Abstracts based upon previously published work are welcome. Please submit early!
DETAILS

Important Dates:
Friday, October 14 - Submission Deadline
Monday, October 31 - Acceptance Notification
Friday, December 9 - The Workshop

NIPS Workshop Registration & Hotel Info: http://www.nips.cc/Conferences/2005/
Workshop Inquiries: MoodyLab at ICSI.Berkeley.Edu

ORGANIZERS
John Moody, Algorithms Group, International Computer Science Institute, Berkeley & Portland, USA
Ramo Gencay, Dept. of Economics, Simon Fraser University, Vancouver, Canada
Neil Burgess, Ph.D., Morgan Stanley, New York, USA

From fink at cs.huji.ac.il Mon Sep 19 12:06:44 2005 From: fink at cs.huji.ac.il (Michael Fink) Date: Mon, 19 Sep 2005 18:06:44 +0200 Subject: Connectionists: CFP NIPS*05 Interclass Transfer Workshop: why learning to recognize many objects might be easier than learning to recognize just one? Message-ID: <1b2297b105091909065c3071fb@mail.gmail.com>

===============================================================
Call for Papers
NIPS*05 Workshop on interclass transfer: Why learning to recognize many object classes might be easier than learning to recognize just one
www.cs.huji.ac.il/~fink/nips2005/
NIPS 2005
Submission deadline: 21 October
Accept/Reject notification: 05 November
===============================================================

Organizers:
==========
Andras Ferencz, University of California at Berkeley
Michael Fink, The Hebrew University of Jerusalem
Shimon Ullman, Weizmann Institute of Science

Workshop Description
====================
The human perceptual system has the remarkable capacity to recognize numerous object classes, often learning to reliably classify a novel category from just a short exposure to a single example. These skills are beyond the reach of current multi-class recognition systems. The workshop will focus on the proposal that a key factor for achieving such capabilities is the use of interclass transfer during learning.
According to this view, a recognition system may benefit from interclass transfer if the multiple target classification tasks share common underlying structures that can be utilized to facilitate training or detection. Several challenges follow from this observation. First, can a theoretical foundation of interclass transfer be formulated? Second, what are promising algorithmic approaches for utilizing interclass transfer? Finally, can the computational approaches for multiple object recognition contribute insights to the research of human recognition processes? In the coming workshop we propose to address the following topics:

* Explore the human capabilities for multi-class object recognition and examine how these capacities motivate our algorithmic approaches.
* Attempt to formalize the interclass transfer framework and define what can be generalized between classes (for example, learning by analogy from the "closest" known category vs. finding useful subspaces from all categories).
* Analyze state-of-the-art solutions aimed at recognizing many objects or at learning to recognize novel objects from very few examples (e.g. contrasting parametric vs. non-parametric approaches).
* Characterize the problems in which we expect to observe high transfer between classes.
* Delineate future challenges and suggest benchmarks for assessing progress.

The workshop is aimed at bringing together experimental and theoretical researchers interested in multi-class object recognition in humans and machines.

Confirmed participants:
=======================
William T.
Freeman
Fei-Fei Li
Erik Learned-Miller
Kevin Murphy
Jitendra Malik
Antonio Torralba
Daphna Weinshall

From esann at dice.ucl.ac.be Tue Sep 20 14:02:25 2005 From: esann at dice.ucl.ac.be (esann) Date: Tue, 20 Sep 2005 20:02:25 +0200 Subject: Connectionists: CFP: ESANN'2006 European Symposium on Artificial Neural Networks Message-ID: <20050920180224.C387C1F05C@smtp2.elec.ucl.ac.be>

ESANN'2006
14th European Symposium on Artificial Neural Networks
Advances in Computational Intelligence and Learning
Bruges (Belgium) - April 26-27-28, 2006
Announcement and call for papers
=====================================================

Technically co-sponsored by the International Neural Networks Society, the European Neural Networks Society, the IEEE Computational Intelligence Society (to be confirmed), the IEEE Region 8, and the IEEE Benelux Section. The call for papers for the ESANN'2006 conference is now available on the Web: http://www.dice.ucl.ac.be/esann For those of you who maintain WWW pages including lists of related ANN sites: we would appreciate it if you could add the above URL to your list; thank you very much! We make all possible efforts to avoid sending multiple copies of this call for papers; however, we apologize if you receive this e-mail twice, despite our precautions. You will find below a short version of this call for papers, without the instructions to authors (available on the Web). ESANN'2006 is organized in collaboration with the UCL (Universite catholique de Louvain, Louvain-la-Neuve) and the KULeuven (Katholieke Universiteit Leuven).

Scope and topics
----------------
Since its first edition in 1993, the European Symposium on Artificial Neural Networks has become the reference for researchers on fundamentals and theoretical aspects of artificial neural networks, computational intelligence, learning and related topics.
Each year, around 100 specialists attend ESANN, in order to present their latest results and comprehensive surveys, and to discuss the future developments in this field. The ESANN'2006 conference will follow this tradition, while adapting its scope to the new developments in the field. Artificial neural networks are viewed as a branch, or subdomain, of machine learning, statistical information processing and computational intelligence. Mathematical foundations, algorithms and tools, and applications are covered. The following is a non-exhaustive list of machine learning, computational intelligence and artificial neural networks topics covered during the ESANN conferences:

THEORY and MODELS
Statistical and mathematical aspects of learning
Feedforward models
Kernel machines
Graphical models, EM and Bayesian learning
Vector quantization and self-organizing maps
Recurrent networks and dynamical systems
Blind signal processing
Ensemble learning
Nonlinear projection and data visualization
Fuzzy neural networks
Evolutionary computation
Bio-inspired systems

INFORMATION PROCESSING and APPLICATIONS
Data mining
Signal processing and modeling
Approximation and identification
Classification and clustering
Feature extraction and dimension reduction
Time series forecasting
Multimodal interfaces and multichannel processing
Adaptive control
Vision and sensory systems
Biometry
Bioinformatics
Brain-computer interfaces
Neuroinformatics

Papers will be presented orally (single track) and in poster sessions; all posters will be complemented by a short oral presentation during a plenary session. It is important to mention that whether a paper is assigned to an oral or a poster session is decided by its topic, not its quality. The selection process for posters will be identical to that for oral presentations, and both will be printed in the same way in the proceedings. Nevertheless, authors must indicate their preference for oral or poster presentation when submitting their paper.
Special sessions
----------------
Special sessions will be organised by renowned scientists in their respective fields. Papers submitted to these sessions are reviewed according to the same rules as submissions to regular sessions. They must also follow the same format, instructions, deadlines and submission procedure. The special sessions organised during ESANN'2006 are:

1) Semi-blind approaches for source separation and independent component analysis
   M. Babaie-Zadeh, Sharif Univ. Tech. (Iran), C. Jutten, CNRS - Univ. J. Fourier - INPG (France)
2) Visualization methods for data mining
   F. Rossi, INRIA Rocquencourt (France)
3) Neural Networks and Machine Learning in Bioinformatics - Theory and Applications
   B. Hammer, Clausthal Univ. Tech. (Germany), S. Kaski, Helsinki Univ. Tech. (Finland), U. Seiffert, IPK Gatersleben (Germany), T. Villmann, Univ. Leipzig (Germany)
4) Online Learning in Cognitive Robotics
   J.J. Steil, Univ. Bielefeld, H. Wersing, Honda Research Institute Europe (Germany)
5) Man-Machine-Interfaces - Processing of nervous signals
   M. Bogdan, Univ. Tübingen (Germany)
6) Nonlinear dynamics
   N. Crook, T. olde Scheper, Oxford Brookes University (UK)

Location
--------
The conference will be held in Bruges (also called "Venice of the North"), one of the most beautiful medieval towns in Europe. Bruges can be reached by train from Brussels in less than one hour (frequent trains). The town of Bruges is known world-wide, and famous for its architectural style, its canals, and its pleasant atmosphere. The conference will be organized in a hotel located near the centre (walking distance) of the town. There is no obligation for the participants to stay in this hotel. Hotels of all levels of comfort and price are available in Bruges; there is a possibility to book a room in the hotel of the conference at a preferential rate through the conference secretariat. A list of other smaller hotels is also available.
The conference will be held at the Novotel hotel, Katelijnestraat 65B, 8000 Brugge, Belgium.

Proceedings and journal special issue
-------------------------------------
The proceedings will include all communications presented at the conference (tutorials, oral and posters), and will be available on-site. Extended versions of selected papers will be published in the Neurocomputing journal (Elsevier).

Call for contributions
----------------------
Prospective authors are invited to submit their contributions before November 28, 2005. The electronic submission procedure is described on the ESANN portal http://www.dice.ucl.ac.be/esann/. Authors must also commit themselves to register for the conference and present the paper in case of acceptance of their submission (one paper per registrant). Authors of accepted papers will have to register before February 28, 2006; they will benefit from the advance registration fee. The ESANN conference applies a strict policy about the presentation of accepted papers during the conference: authors of accepted papers who do not show up at the conference will be blacklisted for future ESANN conferences, and the lists will be communicated to other conference organizers.

Deadlines
---------
Submission of papers: November 28, 2005
Notification of acceptance: January 27, 2006
Symposium: April 26-28, 2006

Conference secretariat
----------------------
ESANN'2006
d-side conference services
24 av. L. Mommaerts
B-1140 Evere (Belgium)
phone: + 32 2 730 06 11
fax: + 32 2 730 06 00
e-mail: esann at dice.ucl.ac.be
http://www.dice.ucl.ac.be/esann

Steering and local committee
----------------------------
François Blayo Préfigure (F) Gianluca Bontempi Univ. Libre Bruxelles (B) Marie Cottrell Univ. Paris I (F) Jeanny Hérault INPG Grenoble (F) Bernard Manderick Vrije Univ. Brussel (B) Eric Noldus Univ.
Gent (B) Jean-Pierre Peters FUNDP Namur (B) Joos Vandewalle KUL Leuven (B) Michel Verleysen UCL Louvain-la-Neuve (B) Scientific committee (to be confirmed) -------------------- Cecilio Angulo Univ. Polit. de Catalunya (E) Miguel Atencia Univ. Malaga (E) Peter Bartlett Univ. California, Berkeley (USA) Pierre Bessière CNRS (F) Hervé Bourlard IDIAP Martigny (CH) Joan Cabestany Univ. Polit. de Catalunya (E) Stéphane Canu Inst. Nat. Sciences App. (F) Valentina Colla Scuola Sup. Sant'Anna Pisa (I) Holk Cruse Universität Bielefeld (D) Eric de Bodt Univ. Lille II (F) & UCL Louvain-la-Neuve (B) Dante Del Corso Politecnico di Torino (I) Georg Dorffner University of Vienna (A) Wlodek Duch Nicolaus Copernicus Univ. (PL) Marc Duranton Philips Semiconductors (USA) Richard Duro Univ. Coruna (E) Anibal Figueiras-Vidal Univ. Carlos III Madrid (E) Simone Fiori Univ. Perugia (I) Jean-Claude Fort Université Nancy I (F) Leonardo Franco Univ. Malaga (E) Colin Fyfe Univ. Paisley (UK) Stan Gielen Univ. of Nijmegen (NL) Mirta Gordon IMAG Grenoble (F) Marco Gori Univ. Siena (I) Bernard Gosselin Fac. Polytech. Mons (B) Manuel Grana UPV San Sebastian (E) Anne Guérin-Dugué INPG Grenoble (F) Barbara Hammer Univ. of Osnabrück (D) Martin Hasler EPFL Lausanne (CH) Tom Heskes Univ. Nijmegen (NL) Christian Igel Ruhr-Univ. Bochum (D) Jose Jerez Univ. Malaga (E) Gonzalo Joya Univ. Malaga (E) Christian Jutten INPG Grenoble (F) Stefanos Kollias National Tech. Univ. Athens (GR) Jouko Lampinen Helsinki Univ. of Tech. (FIN) Petr Lansky Acad. of Science of the Czech Rep. (CZ) Beatrice Lazzerini Univ. Pisa (I) Mia Loccufier Univ. Gent (B) Erzsebet Merenyi Rice Univ. (USA) José Mira UNED (E) Jean-Pierre Nadal Ecole Normale Supérieure Paris (F) Erkki Oja Helsinki Univ. of Technology (FIN) Arlindo Oliveira INESC-ID (P) Gilles Pagès Univ. Paris 6 (F) Thomas Parisini Univ. Trieste (I) Hélène Paugam-Moisy Université Lumière Lyon 2 (F) Alberto Prieto Universidad de Granada (E) Didier Puzenat Univ.
Antilles-Guyane (F) Leonardo Reyneri Politecnico di Torino (I) Jean-Pierre Rospars INRA Versailles (F) Fabrice Rossi INRIA (F) David Saad Aston Univ. (UK) Francisco Sandoval Univ. Malaga (E) Jose Santos Reyes Univ. Coruna (E) Craig Saunders Univ. Southampton (UK) Udo Seiffert IPK Gatersleben (D) Bernard Sendhoff Honda Research Institute Europe (D) Peter Sollich King's College (UK) Jochen Steil Univ. Bielefeld (D) John Stonham Brunel University (UK) Johan Suykens K. U. Leuven (B) John Taylor King's College London (UK) Michael Tipping Microsoft Research (Cambridge) (UK) Claude Touzet Univ. Provence (F) Marc Van Hulle KUL Leuven (B) Thomas Villmann Univ. Leipzig (D) Axel Wismüller Ludwig-Maximilians-Univ. München (D) Michalis Zervakis Technical Univ. Crete (GR)

========================================================
ESANN - European Symposium on Artificial Neural Networks
http://www.dice.ucl.ac.be/esann

* For submissions of papers, reviews,...
Michel Verleysen
Univ. Cath. de Louvain - Machine Learning Group
3, pl. du Levant - B-1348 Louvain-la-Neuve - Belgium
tel: +32 10 47 25 51 - fax: + 32 10 47 25 98
mailto:esann at dice.ucl.ac.be

* Conference secretariat
d-side conference services
24 av. L. Mommaerts - B-1140 Evere - Belgium
tel: + 32 2 730 06 11 - fax: + 32 2 730 06 00
mailto:esann at dice.ucl.ac.be
========================================================

From Martin.Riedmiller at uos.de Tue Sep 20 03:30:58 2005 From: Martin.Riedmiller at uos.de (Martin Riedmiller) Date: Tue, 20 Sep 2005 09:30:58 +0200 Subject: Connectionists: CLSquare - free software available Message-ID: <432FBAB2.3050203@uos.de>

A new release of CLSquare (closed loop simulation system) is ready for free download at our website http://amy.informatik.uni-osnabrueck.de/clsquare CLSquare simulates a control loop for closed-loop control. Although originally designed for training and testing Reinforcement Learning controllers, it also applies to other learning and non-learning controller concepts.
Currently available plants: Acrobot, bicycle, cart pole, cart double pole, pole, mountain car and maze. Currently available controllers: linear controller, Reinforcement Learning Q table, neural-network-based Q controller. It comes with many useful features, e.g. graphical display and statistics output, documentation, and many demos for quick starting. Enjoy, Martin Riedmiller, Neuroinformatics Group, University of Osnabrueck

From silvia at sa.infn.it Tue Sep 20 08:48:13 2005 From: silvia at sa.infn.it (Silvia Scarpetta) Date: Tue, 20 Sep 2005 14:48:13 +0200 Subject: Connectionists: Post-doc position in neural computation in Salerno Message-ID: <005401c5bde1$90bc06a0$7746cdc1@sa.infn.it>

SECOND ANNOUNCEMENT - Please reply before 1 October 2005

OPENING A two-year postdoctoral position (Assegno di Ricerca) in Neural Networks / NeuroPhysics starting this Fall 2005 (around November 2005) is available at the Dept. of Physics "E.R. Caianiello" of the Università degli Studi di Salerno, Italy, in the group headed by Prof. Maria Marinaro (http://www.sa.infn.it/NeuralGroup/). The monthly net amount of the fellowship is about 1200 Euro.

RESEARCH ACTIVITY Areas of particular interest include: 1) Neurobiological applications of theoretical physics tools, computational and mathematical modeling of neural dynamics, cortical dynamics and oscillations, learning, memory and modelling of STDP, rhythmic locomotion and related topics. 2) Neural networks applied to signal and image processing and to speech recognition. A description of current projects in our group can be found at: http://www.sa.infn.it/NeuralGroup/

MINIMUM REQUIREMENTS A master's degree in a scientific discipline (physics, neuroscience, mathematics, computer science, engineering, etc.) and a PhD degree or 3 years of research experience in fields related to the position topic. Sufficient knowledge of the English language and of a programming language (C or Matlab) will be required for successful project work.
*** Please provide before 1 October a CV with publication list and a description of research interests, and arrange for at least two letters of reference to be sent to:

Prof. Maria Marinaro
Dipartimento di Fisica "E.R. Caianiello"
Università degli Studi di Salerno
Via S. Allende
I-84081 Baronissi (SA), Italy

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prof. Maria Marinaro
Dipartimento di Fisica "E.R. Caianiello"
Università degli Studi di Salerno
Via S. Allende, 84081 Baronissi (SA), Italy
Tel. +39 089 965318
Web page: http://www.sa.infn.it/NeuralGroup/

Dr. Silvia Scarpetta
Dipartimento di Fisica "E.R. Caianiello"
Università degli Studi di Salerno
Via S. Allende, 84081 Baronissi (SA), Italy
Tel. +39 089 965318
Web page: http://www.sa.infn.it/silvia.scarpetta

From crammer at cis.upenn.edu Wed Sep 21 15:43:00 2005 From: crammer at cis.upenn.edu (Koby Crammer) Date: Wed, 21 Sep 2005 15:43:00 -0400 (EDT) Subject: Connectionists: NIPS Workshop CFP - Advances in Structured Learning for Text and Speech Processing Message-ID:

################################################################
CALL FOR PARTICIPATION
Advances in Structured Learning for Text and Speech Processing
a workshop at the 2005 Neural Information Processing Systems (NIPS) Conference
Submission deadline: Tuesday, November 1st, 2005
http://www.cis.upenn.edu/~crammer/workshop-index.html
################################################################

Organizers:
-----------
Fernando Pereira, CIS, University of Pennsylvania
Michael Collins, CSAIL, MIT
Jeff Bilmes, EE, University of Washington
Koby Crammer, CIS, University of Pennsylvania

Overview: This workshop is intended for researchers and students interested in developing and applying structured classification methods to text and speech processing problems. Recent advances in structured classification provide promising alternatives to the probabilistic generative models that have been the mainstay of speech recognition and statistical language processing.
However, powerful features of probabilistic generative models, such as hidden variables and compositional combination of several kinds of evidence, do not transfer cleanly to all structured classification methods. Starting with surveys of the state of the art in structured classification for text and speech, the workshop will focus on successes, failures, and directions for improvement of structured classification methods for text and speech, and on possible syntheses between the new structured classification methods and traditional generative models. Comparison will also be made between "generative" and "discriminative" training procedures in structured classification problems. A successful workshop will identify critical questions that current methods are not yet capable of solving, and promising directions for solution. For instance, we hope to achieve a better understanding of how discriminative models may work with missing information, such as under-specified alignments or syntactic analyses --- we plan, more generally, to answer questions such as why, when, and where to use a generative model. Such problems arise in speech, language, and text processing, and will serve as unifying themes for the workshop. Questions to be discussed include:

* Discriminative vs. generative models and algorithms
* Max margin, perceptron, and other criteria
* Incorporating prior knowledge
* Using data from multiple domains
* Adaptation of structured classifiers to new conditions
* Using unlabeled data
* Combining text and speech
* Integrated inference for complex language processing tasks

Program:
--------
This one-day workshop will have survey/tutorial talks in the morning followed by shorter contributed talks, posters, and discussion sessions later in the day. The survey talks will present central themes and questions that will guide the discussion during the workshop (see below).
We are well aware of the tendency for workshops to turn into mini-conferences, so we will keep tight control of the schedule to ensure there is sufficient time for discussion during and after talks. One way we intend to accomplish this is to assign each presenter in the workshop to serve as discussant for someone else's presentation. Discussion will be moderated by the organizers. The survey/tutorial talks are intended to provide a thorough background and overview of the field from a number of different perspectives (machine learning, statistics, mathematics, and applications such as speech, text, and language). In order to better customize the workshop to the interested audience, the survey/tutorial talks will be tuned to a set of issues and questions raised on a NIPS workshop discussion web page. The goal is for interested participants to post any nagging questions or general ideas that they have to this discussion board. These questions will then become a basis for the central theme of the workshop. Of course, for this to be a success it is necessary for people to pose questions to the discussion board. Therefore, it will be possible for people to post questions either under their own name or anonymously. See below for further details. Potential participants are encouraged to submit (extended) abstracts of two to four (2-4) pages in length outlining their research as it relates to the above theme. Papers may show novel ideas or applications related to structured classification. Encouraged topics include novel theoretical results, practical application results, novel insights regarding the above, and/or tips and tricks that work well empirically on a broad range of data. Papers should be formatted using the standard NIPS formatting guidelines.
Schedule and Dates:
--------------------
- Nov 1st, 2005: Paper submission deadline.
Submissions must be emailed as a .pdf file with a subject line starting with 'STRUCTLEARN'.
- Nov 8th, 2005: Acceptance (talk and poster) decisions announced.
- Dec 9th, 2005: NIPS Workshop date.
Relevant Web pages:
- NIPS workshop web page: http://www.nips.cc/Conferences/2005/Workshops/
- Discussion Board for Advances in Structured Learning for Text and Speech Processing: http://fling-l.seas.upenn.edu/~cse1xx/structlearn/index.php
Please visit this web page and use it to post questions, problems, or ideas about open problems in the structured prediction area that you would like to see discussed, both during the survey/tutorial talks and throughout the rest of the workshop.
From ahu at cs.stir.ac.uk Wed Sep 21 05:35:12 2005 From: ahu at cs.stir.ac.uk (Dr. Amir Hussain) Date: Wed, 21 Sep 2005 10:35:12 +0100 Subject: Connectionists: Call for Papers: BICS 06 Message-ID: <000801c5be8f$c3df28b0$dae80954@hec.gov> Please circulate the CFP below/attached to friends and colleagues who may be interested. Thank you in advance, and apologies for any cross-postings. Amir Hussain, Co-Chair, BICS'2006
2nd International Conference on: Brain Inspired Cognitive Systems (BICS 06) Island of Lesvos, Greece Hotel Delfinia October 10 - 14, 2006 http://www.icsc-naiso.org/conferences/bics2006/bics06-cfp.html
General Chair: Igor Aleksander, Imperial College London, U.K.
First International ICSC Symposium on Machine Models of Consciousness (MoC 2006) Discussions of this new, burgeoning paradigm Chair: Ron Chrisley, University of Sussex, U.K.
Third International ICSC Symposium on Biologically Inspired Systems (BIS 2006) Broader issues in biological inspiration and neuromorphic systems Chair: Leslie Smith, University of Stirling, U.K.
Second International ICSC Symposium on Cognitive Neuroscience (CNS 2006) From computationally inspired models to brain-inspired computation Chair: Igor Aleksander, Imperial College London, U.K.
Fourth International ICSC Symposium on Neural Computation (NC'2006) Progress in neural systems Chair: Amir Hussain, University of Stirling, U.K.
Why this conference, and who should attend: Brain Inspired Cognitive Systems 2006 aims to bring together leading scientists and engineers who use analytic, syntactic and computational methods both to understand the prodigious processing properties of biological systems and, specifically, of the brain, and to exploit such knowledge to advance computational methods towards ever higher levels of cognitive competence. The four major symposia are organized in patterns that encourage cross-fertilization across the symposia topics. This emphasizes that BICS 2006 will be a major point of contact for researchers and practitioners who can benefit not only from the major advances in their specialist fields but also from the diversity of each other's views. Each of the four mornings is devoted to papers that will be selected for their clear novelty and proven scientific impact, while the afternoons will provide scope for researchers to present their current work and discuss their aims and ambitions. Debates across disciplines will unite researchers with differing perspectives.
SUB-THEMES (including, but not limited to):

Models of Consciousness (MoC): Global Workspace Theory; Imagination/synthetic phenomenology; Virtual Machine Approaches; Axiomatic Models; Control Theory/Methodology; Developmental/Infant Models; Will/volition/emotion/affect; Philosophical implications; Grounding in neurophysiology; Enactive approaches; Heterophenomenology

Cognitive Neuroscience (CNS): Attentional Mechanisms; Cognitive Neuroscience of vision; CN of non-vision sensory modalities; CN of volition; Affective Systems; Language; Cortical Models; Sub-Cortical Models; Cerebellar Models; Event location in the brain; Others

Biologically Inspired Systems (BIS): Brain Inspired (BI) Vision; BI Audition and sound processing; BI Other sensory modalities; BI Motion processing; BI Robotics; BI Evolutionary systems; BI Oscillatory systems; BI Signal processing; BI Learning; Neuromorphic systems; Others

Neural Computation (NC): NeuroComputational (NC) Hybrid Systems; NC Learning; NC Control Systems; NC Signal Processing; Architectures; Devices; Pattern Classifiers; Support Vector Machines; Fuzzy or Neuro-Fuzzy Systems; Evolutionary Neural Networks; Biological Neural Network Models; Applications; Others

INVITED SPEAKERS:
Christof Koch, Koch Laboratory, Caltech, USA
Shun-ichi Amari, RIKEN Brain Science Institute, Japan
Holk Cruse, University of Bielefeld, Germany
Pentti Haikonen, Nokia Research Center, Finland
Timothy K. Horiuchi, University of Maryland, USA
John Taylor, King's College London, U.K.
Steve Potter, Georgia Tech, USA
Jacek M. Zurada, University of Louisville, USA
Marios Polycarpou, University of Cyprus, Cyprus
Others TBA

CONFERENCE VENUE: Hotel Delphinia (http://www.molyvoshotel.com/) in the ancient village of Molivos (http://www.molivos.net/index.htm).
ORGANIZED BY:
ICSC Interdisciplinary Research, Planning Division, Canada (http://www.icsc-naiso.org/html/)
NAISO Natural and Artificial Intelligence Systems Organization, Canada
---------------------------------------------------
Email: planning at icsc.ab.ca Website: www.icsc-naiso.org Tel: +1-780-387-3546 Fax: +1-780-387-4329
From mlittman at cs.rutgers.edu Wed Sep 21 09:50:34 2005 From: mlittman at cs.rutgers.edu (Michael L. Littman) Date: Wed, 21 Sep 2005 09:50:34 -0400 (EDT) Subject: Connectionists: RL Benchmark announcement Message-ID: <200509211350.j8LDoYm28629@porthos.rutgers.edu> The organizers of the NIPS-05 workshop "Reinforcement Learning Benchmarks and Bake-offs II" would like to announce the first RL benchmarking event. We are soliciting participation from researchers interested in implementing RL algorithms for learning to maximize reward in a set of simulation-based tasks.
Our benchmarking set will include:
* continuous-state MDPs
* discrete factored-state MDPs
It will not include:
* partially observable MDPs
Comparisons will be performed through a standardized interface (details to be announced) and we *highly* encourage a wide variety of approaches to be included in the comparisons. We hope to see participants executing algorithms based on temporal difference learning, evolutionary search, policy gradient, TD/policy search combinations, and others. We do not intend to declare a "winner", but we do hope to foster a culture of controlled comparison within the extended community interested in learning for control and decision making. If you are interested in participating, please contact Michael Littman to be added to our mailing list. Additional information will be available at our website at: http://www.cs.rutgers.edu/~mlittman/topics/nips05-mdp/ .
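The standardized interface was still to be announced at the time, so the following is only a hypothetical sketch of the kind of reset/step loop such a benchmark might standardize (the `ChainEnv` environment and the tabular Q-learning agent are invented for illustration and are not the organizers' API):

```python
import random

class ChainEnv:
    """Hypothetical 5-state chain MDP with a reset/step interface
    (invented for illustration; the benchmark's real interface was TBA)."""
    def __init__(self, n=5):
        self.n = n
        self.s = 0
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):                      # a: 0 = left, 1 = right
        self.s = max(0, self.s - 1) if a == 0 else min(self.n - 1, self.s + 1)
        done = self.s == self.n - 1         # +1 reward on reaching the right end
        return self.s, (1.0 if done else 0.0), done

random.seed(0)
env = ChainEnv()
Q = [[0.0, 0.0] for _ in range(env.n)]
alpha, gamma, eps = 0.5, 0.9, 0.5           # heavy exploration for this toy
for _ in range(300):                        # tabular Q-learning episodes
    s, done = env.reset(), False
    while not done:
        a = random.randrange(2) if random.random() < eps else int(Q[s][1] > Q[s][0])
        s2, r, done = env.step(a)
        target = r + gamma * max(Q[s2]) * (not done)
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2
```

Any agent coded against such a reset/step contract can be swapped in for comparison, which is the point of a shared interface.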
Sincerely, The RL Benchmarking Event Organizers
From alan at cns.nyu.edu Thu Sep 22 16:11:50 2005 From: alan at cns.nyu.edu (alan stocker) Date: Thu, 22 Sep 2005 16:11:50 -0400 Subject: Connectionists: NIPS Demo Deadline - September 25 Message-ID: <43331006.9000908@cns.nyu.edu> Reminder: The deadline for NIPS Demonstration Proposals is Sunday, September 25, 2005. Demonstrators will have a chance to show their live and interactive demos in the areas of hardware technology, neuromorphic and biologically-inspired systems, robotics, and software systems. For further information see: http://www.nips.cc/Conferences/current/CFP/CallForDemos.php
--
________________________________________
alan stocker, ph.d. +1 212 992 8752 http://www.cns.nyu.edu/~alan/
________________________________________
From terry at salk.edu Fri Sep 23 01:06:04 2005 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 22 Sep 2005 22:06:04 -0700 Subject: Connectionists: NEURAL COMPUTATION 17:11 In-Reply-To: Message-ID: Neural Computation - Contents - Volume 17, Number 11 - November 1, 2005

NOTE
An Extended Analytic Expression for the Membrane Potential Distribution of Conductance-Based Synaptic Noise
M. Rudolph and A. Destexhe

LETTERS
Synaptic and Temporal Ensemble Interpretation of Spike-Timing Dependent Plasticity
Peter A. Appleby and Terry Elliott

What Can a Neuron Learn with Spike-Timing-Dependent Plasticity?
Robert Legenstein, Christian Naeger and Wolfgang Maass

How Membrane Properties Shape the Discharge of Motoneurons: A Detailed Analytical Study
Claude Meunier and Karol Borejsza

Stimulus Competition by Inhibitory Interference
Paul H. E. Tiesinga

Optimization via Intermittency with a Self-Organizing Neural Network
Terence Kwok and Kate A. Smith

Mixture Modeling with Pairwise, Instance-Level Class Constraints
Qi Zhao and David J. Miller

Geometrical Properties of Nu Support Vector Machines with Different Norms
Kazushi Ikeda and Noboru Murata

-----
ON-LINE - http://neco.mitpress.org/

SUBSCRIPTIONS - 2005 - VOLUME 17 - 12 ISSUES

                                             Electronic only
                 USA     Canada*   Others    USA     Canada*
Student/Retired  $60     $64.20    $114      $54     $57.78
Individual       $100    $107.00   $143      $90     $96.30
Institution      $680    $727.60   $734      $612    $654.84
* includes 7% GST

MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu
-----
From nati at mit.edu Fri Sep 23 13:23:43 2005 From: nati at mit.edu (Nathan Srebro) Date: Fri, 23 Sep 2005 13:23:43 -0400 Subject: Connectionists: NIPS'05 Workshop on The Accuracy-Regularization Frontier In-Reply-To: <1540849205092310226c41249b@mail.gmail.com> References: <1540849205092310226c41249b@mail.gmail.com> Message-ID: <15408492050923102339c701d2@mail.gmail.com> NIPS Workshop on The Accuracy-Regularization Frontier Friday, December 9th, 2005 Westin Resort and SPA, Whistler, BC, Canada http://www.cs.toronto.edu/~nati/Front/ CALL FOR CONTRIBUTIONS A prevalent approach in machine learning for achieving good generalization performance is to seek a predictor that, on one hand, attains low empirical error, and on the other hand, is "simple", as measured by some regularizer, and so guaranteed to generalize well. Consider, for example, support vector machines, where one seeks a linear classifier with low empirical error and low L2-norm (corresponding to a large geometrical margin). The precise trade-off between the empirical error and the regularizer (e.g. L2-norm) is not known. But since we would like to minimize both, we can limit our attention to extreme solutions, i.e. classifiers for which one cannot reduce both the empirical error and the regularizer (norm) at once. Considering the set of attainable (error, norm) combinations, we are interested only in the extreme "frontier" (or "regularization path") of this set.
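A minimal numerical sketch of such a frontier (illustrative only, not part of the announcement; the toy data, step sizes, and subgradient solver below are all assumptions): sweep the trade-off parameter C, minimize ||w||^2 + C * (mean hinge loss), record the resulting (error, norm) pairs, and warm-start each solve from the previous solution.

```python
import numpy as np

# Toy two-class data (an assumption for illustration, not workshop material).
rng = np.random.default_rng(1)
n = 60
X = np.vstack([rng.normal(-2.0, 1.0, size=(n // 2, 2)),
               rng.normal(+2.0, 1.0, size=(n // 2, 2))])
y = np.array([-1.0] * (n // 2) + [+1.0] * (n // 2))

# Sweep the trade-off parameter C, minimizing ||w||^2 + C * mean hinge loss
# by subgradient descent; warm-start each solve from the previous solution.
Cs = np.logspace(-2, 2, 9)
w = np.zeros(2)
errors, norms = [], []
for C in Cs:
    for _ in range(2000):
        viol = y * (X @ w) < 1                      # margin violations
        sub = (y[viol][:, None] * X[viol]).sum(axis=0)
        w = w - 0.005 * (2.0 * w - C * sub / n)     # subgradient step
    errors.append(float((np.sign(X @ w) != y).mean()))
    norms.append(float(w @ w))

# The (error, norm) pairs trace out the trade-off; keeping only the
# non-dominated pairs gives the "frontier" described above.
```

Small C favors a small norm at the cost of error; large C does the opposite, so the recorded pairs march along the frontier as C grows.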
The typical approach is to evaluate classifiers along the frontier on held-out validation data (or cross-validate) and choose the classifier minimizing the validation error. Classifiers along the frontier are typically found by minimizing some parametric combination of the empirical error and the regularizer, e.g. norm^2 + C*err for varying C, in the case of SVMs. Different values of C yield different classifiers along the frontier, and C can be thought of as parameterizing the frontier. This particular parametric function of the empirical error and the regularizer is chosen because it leads to a convenient optimization problem, but minimizing any other monotone function of the empirical error and regularizer (in this case, the L2-norm) would also lead to classifiers on the frontier. Recently, methods have been proposed for obtaining the entire frontier in computation time comparable to obtaining a single classifier along the frontier. The proposed workshop is concerned with optimization and statistical issues related to viewing the entire frontier, rather than a single predictor along it, as an object of interest in machine learning. Specific issues to be addressed include:
1. Characterizing the "frontier" in a way independent of a specific trade-off, and its properties as such, e.g. convexity, smoothness, piecewise linearity/polynomial behavior.
2. What parametric trade-offs capture the entire frontier? Minimizing any monotone trade-off leads to a predictor on the frontier, but what conditions must be met to ensure all predictors along the frontier are obtained when the regularization parameter is varied? Study of this question is motivated by scenarios in which minimizing a non-standard parametric trade-off leads to a more convenient optimization problem.
3. Methods for obtaining the frontier:
3a. Direct methods relying on a characterization, e.g. Hastie et al.'s (2004) work on the entire regularization path of Support Vector Machines.
3b.
Warm-restart continuation methods (slightly changing the regularization parameter and initializing the optimizer to the solution for the previous value of the parameter). How should one vary the regularization parameter in order to guarantee never being too far from the true frontier? In a standard optimization problem, one ensures a solution within some desired distance of the optimal solution. Analogously, when recovering the entire frontier, it would be desirable to seek a frontier that is always within some desired distance, in (error, regularizer) space, from the true frontier.
3c. Predictor-corrector methods: when the frontier is a differentiable manifold, warm-restart methods can be improved by using a first-order approximation of the manifold to predict where the frontier should be for an updated value of the frontier parameter.
4. Interesting generalizations or uses of the frontier, e.g.:
- The frontier across different kernels
- Higher-dimensional frontiers when more than two parameters are considered
5. Formalizing and providing guarantees for the standard practice of picking a classifier along the frontier using a hold-out set (this is especially important for more than two objectives). In some regression cases, detailed inferences can be done on the frontier --- for ridge regression this is well established, whereas for the lasso, Efron et al. (2004) and, more recently, Zou et al. (2004) establish degrees of freedom along the frontier, yielding generalization error estimates.
The main goal of the workshop is to open up research in these directions, establishing the important questions and issues to be addressed, and introducing to the NIPS community relevant approaches for multi-objective optimization.
CONTRIBUTIONS
We invite presentations addressing any of the above issues, or other related issues.
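For the ridge case mentioned above, the entire frontier has a closed form, so a direct method needs no continuation at all (a sketch on assumed synthetic data; the variable names are illustrative): solving (X'X + lam*I) w = X'y over a grid of lam values traces out the (residual, norm) frontier, and the singular values of X give the effective degrees of freedom df(lam) = sum_i s_i^2 / (s_i^2 + lam).

```python
import numpy as np

# Synthetic regression data (an illustrative assumption).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=50)

s = np.linalg.svd(X, compute_uv=False)    # singular values of X
lambdas = np.logspace(-3, 3, 25)
residuals, norms, dof = [], [], []
for lam in lambdas:
    # Closed-form ridge solution: w = (X'X + lam*I)^{-1} X'y
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    residuals.append(float(((X @ w - y) ** 2).sum()))
    norms.append(float(w @ w))
    # Effective degrees of freedom along the path.
    dof.append(float((s ** 2 / (s ** 2 + lam)).sum()))
```

As lam grows, the norm shrinks and the residual grows monotonically, so the grid walks the frontier directly; the degrees of freedom decay from (nearly) the number of features toward zero.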
We welcome presentations of completed work or work-in-progress, as well as position statements, papers discussing potential research directions, and surveys of recent developments.
SUBMISSION INSTRUCTIONS
If you would like to present in the workshop, please send an abstract in plain text (preferred), postscript or PDF (Microsoft Word documents will not be opened) to frontier at cs.toronto.edu as soon as possible, and no later than October 23rd, 2005. The final program will be posted in early November.
Workshop organizing committee:
Nathan Srebro, University of Toronto
Alexandre d'Aspremont, Princeton University
Francis Bach, Ecole des Mines de Paris
Massimiliano Pontil, University College London
Saharon Rosset, IBM T.J. Watson Research Center
Katya Scheinberg, IBM T.J. Watson Research Center
For further information, please email frontier at cs.toronto.edu or visit http://www.cs.toronto.edu/~nati/Front
From marc at memory.syr.edu Fri Sep 23 00:19:42 2005 From: marc at memory.syr.edu (Marc Howard) Date: Fri, 23 Sep 2005 00:19:42 -0400 (EDT) Subject: Connectionists: Postdoctoral position at Syracuse University Message-ID: Postdoctoral Position Available Syracuse University Applications are invited for a Postdoctoral position in the lab of Dr. Marc Howard at Syracuse University (http://memory.syr.edu). The successful applicant will participate in research of mutual interest related to the modeling of episodic and/or semantic memory. Possible projects include, but are not limited to, learning semantic spaces with contextual models of episodic memory, cognitive modeling of human episodic memory data, and modeling of a detailed neural implementation of a distributed memory model. Papers describing work performed in the lab can be found at http://memory.syr.edu/publications.html.
The lab is affiliated with the Department of Psychology and the Department of Biomedical and Chemical Engineering, as well as the Syracuse Neuroscience Organization (http://sno.syr.edu), leading to a diverse and collaborative intellectual environment. The lab has access to extensive computing resources, including a Beowulf cluster housed in the Center for Policy Research. Applicants should have, or be about to receive, a Ph.D. in a relevant discipline with substantial mathematical/computational experience. Some degree of comfort working in Linux is essential. Familiarity with human memory is helpful; a strong interest in learning is essential. To apply, interested individuals should email a curriculum vitae (dvi or pdf files only), a brief statement of research interests, and the names of three references to Dr. Marc Howard at marc AT memory DOT syr DOT edu. Informal inquiries welcome.
From louis.atallah at buid.ac.ae Mon Sep 26 08:55:42 2005 From: louis.atallah at buid.ac.ae (Louis Atallah) Date: Mon, 26 Sep 2005 16:55:42 +0400 Subject: Connectionists: Lecturer Position available in Machine Learning- The British University in Dubai Message-ID: <0INF003NGD2OF9@dicisp003b.dic.sys> Dear all, Applications are invited for the post of Lecturer in Machine Learning at the British University in Dubai. The Informatics Institute works in collaboration with the University of Edinburgh. The British University in Dubai (BUiD) is the result of an exciting vision shared by UAE leaders, UAE industry, UAE education and British interests in the region, including the British Council. It is a research-led University in Dubai that draws on top-ranking British teaching and research to create a beacon for knowledge-led innovation in the Gulf region. For job particulars see http://www.buid.ac.ae/buid/html/article.asp?cid=303 For informal inquiries, telephone Dr Habib Talhami on 04 367 1962 or email him at habib.talhami at buid.ac.ae. Please do not reply to this email.
Best regards, Louis
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dr. Louis Atallah, Honorary Fellow
Institute of Informatics, British University in Dubai, P.O.Box 502216, Dubai, UAE
Institute of Informatics, University of Edinburgh, United Kingdom
Tel: +971-4-3671957 | email: louis.atallah at buid.ac.ae
web: http://homepages.inf.ed.ac.uk/latallah/
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From dalche at lami.univ-evry.fr Tue Sep 27 00:58:18 2005 From: dalche at lami.univ-evry.fr (dalche@lami.univ-evry.fr) Date: Tue, 27 Sep 2005 06:58:18 +0200 (CEST) Subject: Connectionists: 2 years postdoctoral position at Genopole (Evry, FRANCE) Message-ID: <1230.82.230.68.163.1127797098.squirrel@www-ssl.lami.univ-evry.fr> A two-year postdoctoral position in statistical machine learning for postgenomics is available at GENOPOLE (Evry, France). The postdoc will join the team Machine Learning for Biology to work on a project about reverse modeling of macromolecular networks from various experimental data and background knowledge. The candidate must be a high-level computer scientist or mathematician with expertise in statistical machine learning (graphical models, neural networks, kernels). A background in bioinformatics (microarray data analysis, systems biology) is desirable but not necessary. He/she will be involved in collaborations with biologists and must show a high motivation for interdisciplinary work. Conditions of eligibility: the candidate must be working outside of France at the time of application. As the grant is intended as a return stipend, French candidates currently in foreign countries, and foreign candidates who have attended part of their courses in France, are welcome. Duration: 2005-07 (starts in December 2005 or sooner if possible). Deadline for candidature: Dec 1st, 2005. Salary: 2000 euros/month.
Modalities: send a CV + 2 letters of reference + 1 motivation letter. Web page of the project: www.lami.univ-evry.fr/~dalche/recherche/postdoc2.html Contact: Prof. Florence d'Alché-Buc Epigenomics project & LAMI UMR 8042 CNRS GENOPOLE 91 Evry FRANCE +33 1 60 87 40 73 / 39 08 Email: florence.dalche at lami.univ-evry.fr
From mr287 at georgetown.edu Mon Sep 26 14:43:35 2005 From: mr287 at georgetown.edu (Maximilian Riesenhuber) Date: Mon, 26 Sep 2005 14:43:35 -0400 Subject: Connectionists: postdoctoral position in computational neuroscience, neural data analysis @ Georgetown University Message-ID: <43384157.3060103@georgetown.edu> Postdoctoral Position in computational neuroscience, neural data analysis Riesenhuber Lab Department of Neuroscience Georgetown University We have an opening for a postdoctoral fellow, starting ASAP or later, to participate in a research project studying the neural bases of fast visual target detection in complex images using a combination of psychophysics, EEG & NIRS imaging, and computational modeling. The candidate is expected to take on a main role in the analysis of the neural data and their computational modeling (with the goal of developing a real-time neurally-based target detection system). Thus, a strong quantitative background and experience in machine learning and data classification are required. Experience with EEG and psychophysics is a plus, as is a background in biological and/or machine vision. This position is also of interest to PhDs in computer science with an interest in moving into computational neuroscience. The position is for an initial period of one year, with a possibility of extension depending on progress. Salary is competitive. Our lab investigates the computational mechanisms underlying human perception as a gateway to understanding information processing and learning in cortex.
In our work, we combine computational modeling with psychophysical and fMRI data from our own lab and collaborators, as well as with single unit data obtained in collaboration with physiology labs. For more information, see http://maxlab.neuro.georgetown.edu. The project is a collaboration with Dr. Tom Zeffiro's lab at the Center for Functional and Molecular Imaging at Georgetown University (http://cfmi.georgetown.edu/). Georgetown has a vibrant neuroscience community with over forty labs participating in the Interdisciplinary Program in Neuroscience. Its scenic campus is located at the edge of Washington, DC, one of the most intellectually and culturally rich cities in the country. Interested candidates should send a CV, a brief (1 page) statement of research interests, representative reprints, and the names and contact information of three references by email to Maximilian Riesenhuber (mr287 at georgetown.edu). Review of applications will begin immediately, and will continue until the position is filled.
**********************************************************************
Maximilian Riesenhuber                 phone: 202-687-9198
Department of Neuroscience             fax: 202-784-3562
Georgetown University Medical Center   email: mr287 at georgetown.edu
Research Building Room WP-12
3970 Reservoir Rd., NW
Washington, DC 20007
http://maxlab.neuro.georgetown.edu
**********************************************************************
From d.mareschal at bbk.ac.uk Tue Sep 27 13:05:40 2005 From: d.mareschal at bbk.ac.uk (Denis Mareschal) Date: Tue, 27 Sep 2005 18:05:40 +0100 Subject: Connectionists: Postdoctoral Position in France Message-ID: Dear all, Please circulate to interested parties. Please DO NOT RESPOND DIRECTLY TO ME. Send replies and queries to Robert French at the address below.
Best regards, Denis Mareschal ================== Two year Post-doctoral position available: Neural Network and Genetic Algorithm Models of Category Learning We have obtained funding from the European Commission for a two-year post-doctoral position to study the mechanisms underlying the emergence of rule-based category learning in humans. The project is a highly interdisciplinary effort by researchers from Birkbeck College of the University of London, the University of Amsterdam, the University of Burgundy in Dijon, and Exeter and Cardiff Universities in the UK. The research will include ERP studies, experimental work with animals, experimental work with infants, children, and adults, as well as computational modelling. At the heart of this project is the need to develop connectionist (neural network) models of category learning that capture the developmental transitions observed both in infants across developmental time, as well as in different species across evolutionary time. The post-doctoral fellow will work primarily with Professor Robert French, a specialist in the area of neural network research, at the Learning and Development Laboratory (LEAD-CNRS) at the University of Burgundy in Dijon, France. There will be opportunities for close collaborations with the Centre for Brain and Cognitive Development, Birkbeck University of London. Interested candidates should contact Professor French at robert.french at u-bourgogne.fr. Professor Robert M. French: French is currently a research director for the French National Scientific Research Center (CNRS). He has worked closely with the co-ordinator of the FAR project, Denis Mareschal at Birkbeck College in London for the past decade. He is a highly interdisciplinary computer scientist who specialises in connectionist modelling of behaviour. 
In addition to having a PhD in computer science from the University of Michigan under Douglas Hofstadter and John Holland, he has formal training in mathematics, psychology and philosophy. He has published work ranging from foundational issues in cognitive modelling to models of bilingual memory, catastrophic interference in neural networks, and artificial life. He has published in many of the areas directly related to the goals of this grant - namely, evolution, computational evolution, artificial neural networks, and infant categorisation. Computational skills: The simulations will be written in Matlab, and, while it is not necessary to know Matlab from the outset, excellent programming skills in some common programming language are necessary (e.g., C++, Java, Pascal, Lisp, etc.). Knowledge, and preferably practical experience, of genetic algorithms and neural networks is important. A familiarity with some of the basic techniques of experimental psychology (especially category learning) and basic statistics (e.g., ANOVA, t-test, non-parametric tests, regression and correlation) will also be a plus. Language skills: Must have excellent standards of academic writing in English, and good oral communication skills. French is not required. Dijon: Dijon is an hour and a half by train southwest of Paris, located in the heart of France's famous Burgundy wine region, and is one of the gastronomic centers of France. It is a beautiful city with a long history as the capital of Burgundy. The old town has been beautifully preserved. It has a very active cultural life, boasting arguably the finest music auditorium in France. The gently rolling hills of the region are ideal for hiking and biking. Dijon is home to the University of Burgundy, with approximately 20,000 students. The relative proximity of Paris (1:39 by train, one train an hour) makes for easy day-trips there for concerts, expositions, or tourism.
LEAD: The successful candidate will be housed within LEAD (Experimental Laboratory for Learning and Development). This is one of the leading experimental psychology labs in France, carrying the prestigious CNRS label given to a select few labs in France based on the publication record of the lab members and their international impact. They are especially strong in the areas of implicit learning, music cognition and modeling. To find out more about this lab, see: http://www.u-bourgogne.fr/LEAD Salary: The before-tax salary will be between 24,000 and 30,000 euros depending on the past experience of the candidate. (A typical pre-tax salary of 26,800 euros would mean an after-tax yearly salary of 21,960 euros.) Additional funding will be provided for computer equipment and travel to conferences and workshops. Standard social benefits available to employees of the University of Burgundy are provided. Responsibilities: The emphasis will be on research, publication and presentation of the FAR work at international venues. The successful candidate will be expected to develop (in collaboration with Professor French and other members of the project), implement and test connectionist models of category learning consistent with the objectives of the project. Duration of contract: The contract is to begin no later than January 1st, 2006 and is of a fixed-term 2-year duration. Please send a CV, including references who may be contacted, to: robert.french at u-bourgogne.fr The position will be kept open until a suitable candidate is appointed. We anticipate having a first round of interviews at the end of October. The European Commission encourages women and minority candidates to apply for positions it funds.
Further Details of Overall Project From Associations to Rules (FAR): Project summary Human adults appear to differ from other animals in their ability to use language to communicate, their use of logic and mathematics to reason, and their ability to abstract relations that go beyond perceptual similarity. These aspects of human cognition have one important thing in common: they are all thought to be based on rules. This apparent uniqueness of human adult cognition leads to an immediate puzzle: WHEN and HOW does this rule-based system come into being? Perhaps there is, in fact, continuity between the cognitive processes of non-linguistic species and pre-linguistic children on the one hand, and human adults on the other hand. Perhaps this transition is simply a mirage that arises from the fact that Language and Formal Reasoning are usually described by reference to systems based on rules (e.g., grammar or syllogisms). To overcome this problem, we propose to study the transition from associative to rule-based cognition within the domain of concept learning. Concepts are the primary cognitive means by which we organise things in the world. Any species that lacked this ability would quickly become extinct (Ashby & Lee, 1993). Conversely, differences in the way that concepts are formed may go a long way in explaining the greater evolutionary success that some species have had over others. To address these issues, this project brings together 5 teams of leading international researchers from 4 different countries, with combined and convergent experience in Animal Cognition and Evolutionary Theory, Infant and Child Development, Adult Concept Learning, Neuroimaging, Social Psychology, Neural Network Modelling, and Statistical Modelling. Project objectives: This project has six key objectives designed to understand how learning and development interact in the emergence of rule-based concept learning. To this end, we have identified 6 specific objectives: 1.
To develop a computational (mechanistic) model of the emergence of rule-based concept learning, both within the individual and across evolution. 2. To establish statistical tools for discriminating rigorously between rule-based and similarity-based classification behaviours. 3. To establish the conditions under which human adults show rule-based or similarity-based concept learning. 4. To chart the emergence across species of similarity- vs. rule-based concept learning. 5. To chart the emergence of rule-based concept learning in human infants and adults. 6. To chart the emerging neural basis of rule-based concept learning in human adults, children, and infants.

--
=================================================
Dr. Denis Mareschal
Centre for Brain and Cognitive Development
School of Psychology
Birkbeck College, University of London
Malet St., London WC1E 7HX, UK
tel +44 (0)20 7631-6582/6226 reception: 6207
fax +44 (0)20 7631-6312
http://www.psyc.bbk.ac.uk/people/academic/mareschal_d/
=================================================

From fhamker at uni-muenster.de Thu Sep 29 11:50:22 2005 From: fhamker at uni-muenster.de (Fred Hamker) Date: Thu, 29 Sep 2005 17:50:22 +0200 Subject: Connectionists: Postdoctoral and PhD position in Computational modelling at West. Wilhelms-University Muenster (Germany) Message-ID: <9603f60c469cd4b1e9e9317fe200bae6@uni-muenster.de>

The junior research group of Dr. Fred Hamker in Psychology at the Westfaelische Wilhelms-Universitaet Muenster invites applications for a Postdoctoral and a PhD position. Our group pursues a theoretical and model-driven approach to experimental psychology/neuroscience in the field of visual perception and its cognitive control. It is part of the laboratory of Prof. Dr. Markus Lappe. Together, we form an interdisciplinary research community with members coming from psychology, biology, computer science, electrical engineering, and physics.
The positions are within a project funded by the German Research Council (DFG). They aim at developing a neurocomputational systems approach to modeling the cognitive guidance of attention and object/category recognition. The focus is on building functional models of cortical and subcortical areas in the primate brain based on physiological and anatomical findings. The function of the prefrontal cortex and the basal ganglia will be an integral part of these models. The validity of the models should also be demonstrated by testing their performance on real-world category/object recognition tasks. A degree in psychology, computer science, electrical engineering, physics, or biology is a prerequisite. Experience in programming (C++, Matlab), applied mathematics, and neural modeling is a significant advantage. Salary is according to the German research scale (BAT IIa for the Postdoctoral position and BAT IIa/2 for the PhD position). The positions are initially for two years, starting in January 2006 (or soon thereafter). Please send applications by October 15th (and no later than November 1st, 2005) per email (PDF preferred) to fhamker at uni-muenster.de. More information about the junior research group can be found at http://wwwpsy.uni-muenster.de/inst2/lappe/Fred/FredHamker.html. The university is an equal opportunity employer. Women are encouraged to apply. Disabled applicants will receive priority if they have equal qualifications.

From mseeger at gmail.com Thu Sep 29 06:40:05 2005 From: mseeger at gmail.com (Matthias Seeger) Date: Thu, 29 Sep 2005 12:40:05 +0200 Subject: Connectionists: Software available: Kernel multiple logistic regression. Incomplete Cholesky decomp.
Updating Cholesky factor In-Reply-To: <43c7cd3f05092902031c4c3c@mail.gmail.com> References: <43c7cd3f05092902031c4c3c@mail.gmail.com> Message-ID: <43c7cd3f050929034027b3cdd8@mail.gmail.com>

Dear colleagues, I have made available some software at http://www.kyb.tuebingen.mpg.de/bs/people/seeger [follow "Software"] which I hope will be of use to Machine Learning and Statistics practitioners. The code is for Matlab (MEX) and is made available under the GNU General Public License. It makes use of the BLAS, LAPACK (contained in Matlab), and LINPACK whenever possible. I wrote it under Linux and have not tested it on any other system.

1) Kernel Multiple Logistic Regression: an efficient implementation of the MAP approximation to the multi-class Gaussian process (a.k.a. kernel) model. Runs in O(n^2 C) time (n datapoints, C classes) with full kernel matrices, or in O(n d C) time, d << n.

Positions for PhD studentships are available in B. Schölkopf's Empirical Inference department at the Max Planck Institute in Tuebingen, Germany (see http://www.kyb.tuebingen.mpg.de/bs), in the following areas:

- application of kernel methods such as SVMs and Gaussian Processes to problems in robotics and computer vision (*)
- learning theory and learning algorithms, in particular kernel machines and methods for dealing with structured data
- machine learning for computer graphics
- machine learning for brain-computer interfacing
- modelling and learning from multi-electrode neural recordings

[The position (*) is funded by an EU research training network and is only available to EU nationals outside of Germany.]

We invite applications from candidates with an outstanding academic record, including a strong mathematical or analytical background. Max Planck Institutes are publicly funded research labs with an emphasis on excellence in basic research. Tuebingen is a university town in southern Germany; see http://www.tuebingen.de/kultur/english/index.html for some pictures.
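The incomplete Cholesky decomposition mentioned in the software announcement above admits a short illustrative sketch. What follows is our own minimal NumPy version of the standard pivoted algorithm, not Seeger's Matlab/MEX code; the function name and tolerance defaults here are ours:

```python
import numpy as np

def incomplete_cholesky(K, tol=1e-8, max_rank=None):
    """Pivoted incomplete Cholesky factorization of a PSD kernel matrix.

    Returns L of shape (n, d) with K ~= L @ L.T, where d is chosen
    adaptively: pivoting stops once the largest residual diagonal
    entry drops below `tol`, so typically d << n.
    """
    n = K.shape[0]
    max_rank = n if max_rank is None else max_rank
    diag = np.diag(K).astype(float).copy()   # residual diagonal of K - L L^T
    L = np.zeros((n, max_rank))
    for j in range(max_rank):
        i = int(np.argmax(diag))             # greedy pivot: largest residual
        if diag[i] <= tol:
            return L[:, :j]
        L[:, j] = (K[:, i] - L[:, :j] @ L[i, :j]) / np.sqrt(diag[i])
        diag -= L[:, j] ** 2
        diag = np.maximum(diag, 0.0)         # guard against round-off
    return L

# Kernel matrix of exact rank 4, so pivoting stops after 4 columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
K = X @ X.T
L = incomplete_cholesky(K)
print(L.shape, np.linalg.norm(K - L @ L.T))
```

The greedy pivot rule selects the largest residual diagonal entry at each step, which is what makes the method effective for kernel matrices with fast-decaying spectra: the cost is O(n d^2) instead of the O(n^3) of a full factorization.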
Inquiries and applications, including a CV (with complete lists of marks, copies of transcripts etc.) and a short statement of research interests (matching some of the above areas), should be sent to sabrina.nielebock at tuebingen.mpg.de or to Sabrina Nielebock, Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tuebingen, Germany, Tel. +49 7071 601 551, Fax +49 7071 601 552. In addition, please arrange for two letters of reference to be sent directly to the address above. Applications will be considered immediately and until the positions are filled.

From danny.silver at acadiau.ca Fri Sep 30 21:12:57 2005 From: danny.silver at acadiau.ca (Daniel L. Silver) Date: Fri, 30 Sep 2005 22:12:57 -0300 Subject: Connectionists: CFP - NIPS 2005 Workshop - Inductive Transfer : 10 Years Later Message-ID: <20051001011300.MVVB29614.simmts6-srv.bellnexxia.net@DSILVERNB1>

NIPS 2005 Workshop - Inductive Transfer : 10 Years Later
---------------------------------------------------------
Friday, Dec 9, Westin Resort and Spa in Whistler, British Columbia, Canada

Overview:
---------
Inductive transfer refers to the problem of applying the knowledge learned in one or more tasks to learning for a new task. While all learning involves generalization across problem instances, transfer learning emphasizes the transfer of knowledge across domains, tasks, and distributions that are related, but not the same. For example, learning to recognize chairs might help to recognize tables; or learning to play checkers might improve learning of chess. While people are adept at inductive transfer, even across widely disparate domains, currently we have little learning theory to explain this phenomenon and few systems exhibit knowledge transfer. At NIPS95 two of the current co-chairs led a successful two-day workshop on "Learning to Learn" that focused on lifelong machine learning methods that retain and reuse learned knowledge.
(The co-organizers of the NIPS95 workshop were Rich Caruana, Danny Silver, Jon Baxter, Tom Mitchell, Lorien Pratt, and Sebastian Thrun.) The fundamental motivation for that meeting was the belief that machine learning systems would benefit from re-using knowledge learned from related and/or prior experience, and that this would enable them to move beyond task-specific tabula rasa systems. The workshop resulted in a series of articles published in special issues of Connection Science [CS 1996] and Machine Learning [vol. 28, 1997], and a book entitled "Learning to Learn" [Pratt and Thrun 1998]. Research in inductive transfer has continued since 1995 under a variety of names: learning to learn, life-long learning, knowledge transfer, transfer learning, multitask learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, meta-learning, and incremental/cumulative learning. The recent burst of activity in this area is illustrated by the research in multi-task learning within the kernel and Bayesian contexts that has established new frameworks for capturing task relatedness to improve learning [Ando and Zhang 04, Bakker and Heskes 03, Jebara 04, Evgeniou and Pontil 04, Evgeniou, Micchelli and Pontil 05, Chapelle and Harchaoui 05]. This NIPS 2005 workshop will examine the progress that has been made in ten years, the questions and challenges that remain, and the opportunities for new applications of inductive transfer systems.
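The idea of capturing task relatedness through regularization, central to the kernel and Bayesian multi-task frameworks cited above, can be illustrated with a small sketch. This is a hypothetical toy example in the spirit of that work, not an implementation of any cited paper: the data are synthetic, and the names (w0, V, lam0, lam_v) and the particular model, a shared weight vector plus penalized task-specific offsets, are ours.

```python
import numpy as np

# Each task t has weights w0 + V[t]: a shared component w0 plus a small
# task-specific offset. Penalizing the offsets lets related tasks inform
# one another, which is the essence of inductive transfer.
rng = np.random.default_rng(0)
d, n, T = 5, 40, 3
w_star = rng.normal(size=d)                   # common underlying rule
tasks = []
for _ in range(T):
    X = rng.normal(size=(n, d))
    w_t = w_star + 0.1 * rng.normal(size=d)   # tasks related, not identical
    tasks.append((X, X @ w_t + 0.01 * rng.normal(size=n)))

w0 = np.zeros(d)              # shared component
V = np.zeros((T, d))          # task-specific offsets
lam0, lam_v, lr = 0.1, 1.0, 0.02
for _ in range(5000):         # gradient descent on the joint objective
    g0 = 2 * lam0 * w0
    for t, (X, y) in enumerate(tasks):
        resid = X @ (w0 + V[t]) - y
        g = 2 * X.T @ resid / n               # per-task data gradient
        V[t] -= lr * (g + 2 * lam_v * V[t])   # offsets shrink toward zero
        g0 += g
    w0 -= lr * g0
```

Because the offsets V[t] are shrunk toward zero, the shared component w0 absorbs what the related tasks have in common; a new related task could then start from w0 rather than from scratch, which is the basic promise of inductive transfer.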
In particular, the workshop organizers have identified three major goals: (1) To summarize the work thus far in inductive transfer to develop a taxonomy of research and highlight open questions, (2) To share new theories, approaches, and algorithms regarding the accumulation and re-use of learned knowledge to make learning more effective and more efficient, (3) To discuss the formation of an inductive transfer special interest group that might offer a website, benchmarking data, shared software, and links to various research programs and related web resources.

Call for Papers:
----------------
We invite submission of workshop papers that discuss ongoing or completed work dealing with Inductive Transfer (see below for a list of appropriate topics). Papers should be no more than four pages in the standard NIPS format. Authorship should not be blind. Please submit a paper by emailing it in Postscript or PDF format to danny.silver at acadiau.ca with the subject line "ITWS Submission". We anticipate accepting as many as 8 papers for 15-minute presentation slots and up to 20 poster papers. Please only submit an article if at least one of the authors will attend the workshop to present the work. Accepted papers will be made available on the Web. A special journal issue or an edited book of selected papers is also being planned.

The 1995 workshop identified the most important areas for future research to be:
* The relationship between computational learning theory and selective inductive bias;
* The tradeoffs between storing or transferring knowledge in representational and functional form;
* Methods of turning concurrent parallel learning into sequential lifelong learning methods;
* Measuring relatedness between learning tasks for the purpose of knowledge transfer;
* Long-term memory methods and cumulative learning; and
* The practical applications of inductive transfer and lifelong learning systems.
The workshop is interested in the progress that has been made in these areas over the last ten years; they remain key topics for discussion at the proposed workshop. More forward-looking and important questions include:
* Under what conditions is inductive transfer difficult? When is it easy?
* What are the fundamental requirements for continual learning and transfer?
* What new mathematical models/frameworks capture/demonstrate transfer learning?
* What are some of the latest and most advanced demonstrations of transfer learning in machines (Bayesian, kernel methods, reinforcement learning)?
* What can be learned from transfer learning in humans and animals?
* What are the latest psychological/neurological/computational theories of knowledge transfer in learning?

Important Dates:
----------------
19 Sep 05 - Call for participation
21 Oct 05 - Paper submission deadline
04 Nov 05 - Notification of paper acceptance
09 Dec 05 - Workshop in Whistler

Organizers:
--------------
Danny Silver, Jodrey School of Computer Science, Acadia University, Canada
Rich Caruana, Department of Computer Science, Cornell University, USA
Stuart Russell, Computer Science Division, University of California, Berkeley, USA
Prasad Tadepalli, School of Electrical Eng. and Computer Science, Oregon State University, USA
Goekhan Bakir, Max Planck Institute for Biological Cybernetics, Germany
Kristin Bennett, Department of Mathematical Sciences, Rensselaer Polytechnic Institute, USA
Massimiliano Pontil, Dept. of Computer Science, University College London, UK

For further information:
------------------------
Please see the workshop webpage at http://iitrl.acadiau.ca/itws05/ Email danny.silver at acadiau.ca

===============================================
Daniel L. Silver, Ph.D. danny.silver at acadiau.ca
Associate Professor p:902-585-1105 f:902-585-1067
Intelligent Information Technology Research Laboratory
Jodrey School of Computer Science, Office 315
Acadia University, Wolfville, NS B4P 2R6