From janetw at itee.uq.edu.au Sun Oct 2 00:40:04 2005 From: janetw at itee.uq.edu.au (Janet Wiles) Date: Sun, 2 Oct 2005 14:40:04 +1000 (EST) Subject: Connectionists: Research position in genetic regulatory network modeling (Brisbane, Australia) Message-ID: This position would suit someone with a good network modeling background who has an interest in biological modeling at the level of genes and gene regulation. -- Janet ---------------------------------------------------------------- Research Officer / Senior Research Officer / Research Fellow The University of Queensland, St Lucia Campus, Brisbane, Australia The ARC Centre for Complex Systems (ACCS) and ARC Centre for Bioinformatics (ACB) have a jointly-funded position available for a researcher to participate in a project in the area of computational modelling of genetic regulatory networks (GRN). The project seeks to characterise the computational nature of genetic regulatory networks within a control paradigm in order to generalise insights from the biological systems to other complex systems. The project is supervised by A/Prof Janet Wiles. The role of the Research Officer / Senior Research Officer / Research Fellow is to design, implement and analyse genetic regulatory networks in collaboration with ACCS and ACB researchers, and to develop appropriate tools. The Centres seek a motivated and innovative person with good modelling skills to provide support to the project. Applicants should have a background in software engineering or a related field. Obtain the position description and selection criteria online: http://www.accs.uq.edu.au/ Applications addressing the selection criteria and quoting the reference number should be sent to: The Director ARC Centre for Complex Systems School of Information Technology & Electrical Engineering The University of Queensland St Lucia QLD 4072, or email admin at accs.uq.edu.au Closing date for applications: 30 October 2005 Reference Number: 3010353 ------------------------------------------------------------- A/Prof Janet Wiles Division of Complex and Intelligent Systems School of Information Technology & Electrical Engineering The University of Queensland QLD 4072 AUSTRALIA ------------------------------------------------------------- From kbp at imm.dtu.dk Mon Oct 3 11:12:28 2005 From: kbp at imm.dtu.dk (Kaare Brandt Petersen) Date: Mon, 3 Oct 2005 17:12:28 +0200 (METDST) Subject: Connectionists: The Matrix Cookbook - Updated Version Message-ID: Dear Colleagues (Apologies for multiple postings) A new and updated version of The Matrix Cookbook is now available at http://www.imm.dtu.dk/pubdb/views/edoc_download.php/3274/pdf/imm3274.pdf The Matrix Cookbook is a desktop reference on identities, relations and approximations regarding matrices: for instance, differentiation of determinants, results for multivariate Gaussians, expectations of general multivariate distributions, etc. The new material in this version includes differentiation of Toeplitz matrices, solutions to encapsulating linear systems, and co-skewness/co-kurtosis matrices. Best regards, Kaare -- Kaare Brandt Petersen Intelligent Signal Processing Group Technical University of Denmark Building 321, lok 120 DK-2800 Kgs. Lyngby, Europe Tel. +45 45253896 Fax. +45 45872599 http://www.imm.dtu.dk/~kbp
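As a flavour of the kind of identities the Cookbook collects (the formulas below are standard results quoted from memory, not taken from the new version), two of the most frequently used ones are

\[ \frac{\partial}{\partial X}\det(X) = \det(X)\,(X^{-1})^{\top}, \qquad \frac{\partial}{\partial X}\ln\det(X) = (X^{-1})^{\top}, \]

and, for a multivariate Gaussian \( x \sim \mathcal{N}(\mu, \Sigma) \), the second-moment identity

\[ E[x x^{\top}] = \Sigma + \mu\mu^{\top}. \]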
From M.Denham at plymouth.ac.uk Mon Oct 3 11:04:11 2005 From: M.Denham at plymouth.ac.uk (Mike Denham) Date: Mon, 3 Oct 2005 16:04:11 +0100 Subject: Connectionists: Five-Year Postdoctoral Fellowship in Computational Neuroscience Message-ID: <52A8091888A23F47A013223014B6E9FE067118D8@03-CSEXCH.uopnet.plymouth.ac.uk> University of Plymouth, UK Centre for Theoretical and Computational Neuroscience Postdoctoral Research Fellow Five Year Fixed Term Appointment (salary range £23,643 - £26,671 pa) Applications are invited for the position of Postdoctoral Research Fellow in the Centre for Theoretical and Computational Neuroscience at the University of Plymouth, UK. The position has been made available through the award of a major new £1.8M five-year research grant from the UK Engineering and Physical Sciences Research Council for a project entitled: "A Novel Computing Architecture for Cognitive Systems based on the Laminar Microcircuitry of the Neocortex". Collaborators on the project include Manchester University, University College London, Edinburgh University, Oxford University, and London University School of Pharmacy. Applicants for the post must have a PhD in a relevant subject area and possess a sound knowledge of the methods and tools of theoretical and computational neuroscience. They must be able to provide evidence of an existing strong publication record. The work of the Research Fellow will be specifically concerned with the investigation and integrated development of a cortical microcircuit model on a large-scale Linux cluster simulation facility, in close collaboration with the other partners in the project, and with other partners on a related EU-funded project "FACETS". The position will require a postdoctoral researcher with extensive research experience in neurobiological modelling of neurons and neural circuitry, and sufficient breadth of knowledge in the field to be able to integrate the research results from all areas of the project into the model. The position is available immediately and an appointment will be made as soon as possible. The appointment will be for a fixed term of up to five years, and will be subject to a probationary period of twelve months.
Informal enquiries should be made to Professor Mike Denham, Centre for Theoretical and Computational Neuroscience, University of Plymouth, Drake Circus, Plymouth, PL4 8AA, UK; tel: +44 (0)1752 232547; email: mdenham at plym.ac.uk From ulrike.luxburg at ipsi.fraunhofer.de Tue Oct 4 10:34:38 2005 From: ulrike.luxburg at ipsi.fraunhofer.de (Ulrike von Luxburg) Date: Tue, 04 Oct 2005 16:34:38 +0200 Subject: Connectionists: NIPS workshop "Theoretical Foundations of Clustering" Message-ID: <434292FE.6020407@ipsi.fraunhofer.de> ################################################################ CALL FOR CONTRIBUTIONS NIPS workshop on THEORETICAL FOUNDATIONS OF CLUSTERING Saturday, December 10, 2005 Westin Resort and SPA, Whistler, BC, Canada http://www.ipsi.fraunhofer.de/~ule/clustering_workshop_nips05/clustering_workshop_nips05.htm Submission deadline: October 30, 2005 ################################################################ Organizers: ------------ * Shai Ben-David, University of Waterloo, Canada, http://www.cs.uwaterloo.ca/~shai * Ulrike von Luxburg, Fraunhofer IPSI, Darmstadt, Germany http://www.ipsi.fraunhofer.de/~ule * John Shawe-Taylor, University of Southampton, UK http://www.ecs.soton.ac.uk/people/jst * Naftali Tishby, Hebrew University, Jerusalem, Israel http://www.cs.huji.ac.il/~tishby Clustering is one of the most widely used techniques for exploratory data analysis. Across all disciplines, from social sciences over biology to computer science, people try to get a first intuition about their data by identifying meaningful groups among the data points. In the past five decades, a wide variety of clustering algorithms have been developed and applied to a wide range of practical problems. Despite this large number of algorithms and applications, the goal of clustering and its proper interpretation remains fuzzy and vague. There are in fact many different problems that are clustered together under this single term, from quantization with low distortion for compression, through various techniques for graph partitioning whose goals are not fully specified, to methods for revealing hidden structure and unobserved features in complex data. We are clearly not talking about a single well defined problem. Moreover, the theoretical foundations of clustering seem to be distressingly meager, covering only some sub-domains and failing to address some of the most basic general aspects of the area. There is not even an agreement among the researchers on the correct questions to pose, let alone which tools and analysis techniques should be used to answer those questions. In our opinion there is an urgent need to initiate a concerted discussion on these issues, in order to move towards a consolidation of the theoretical basis for - at least some of the aspects of - clustering. One prospective benefit of building a theoretical framework for clustering may come from enabling the transfer of tools developed in other related domains, such as machine learning and information theory, where the usefulness of having a general mathematical framework have been impressively demonstrated. We have the impression that recently many researchers have become aware of this need and agree on the importance of these issues. Questions we wish to address: ----------------------------- 1. What is clustering? How can it be defined and how can we sort the different types of clustering and their goals? In particular: * Is the main purpose to use the partition to discover new features in the data? 
* Or the other way around, is the main purpose to simplify our data by building groups, thus getting rid of unimportant information? * Is clustering just data compression? * Is clustering just estimating modes of a density? * Is clustering related to human perception? * Can one come up with a meaningful taxonomy of clustering tasks? * Can we formulate the intuitive notion of "revealing hidden structure and properties"? 2. How should prior knowledge be encoded? As a pair-wise similarity/distance function over domain points? As a set of relevant features? Should data be embedded in some richer structure (Hilbert space, topology) ? 3. Is there a principled way to measure the quality of a clustering on particular data set? * Can every clustering task be expressed as an optimization of some explicit readily-computable associated objective cost function? * Can stability be considered a first principle for meaningful clustering? 4. Is there a principled way to measure the quality of a clustering algorithm? * Necessary conditions * Can we come up with sufficient conditions for reasonable clustering? * Stability conditions * Richness conditions * What type of performance guarantees can one hope to provide? 5. What are principled and meaningful ways of measuring the similarity (or degree of agreement) between different clusterings? 6. Can one distinguish "clusterable" data from "structureless" data? 7. What are the tools we should try to import from other areas such as classification prediction, density estimation, data compression, computational geometry, other relevant areas? Contributions: -------------- We invite presentations addressing one or several of the questions raised above. To keep the workshop lively we intend to keep the individual presentations short, at most 15 minutes. We welcome presentations about work in all "stages of completion", ranging from completed work over work in progress to discussing potential directions of future research. In particular we encourage position papers. We would like to stress that the focus of this workshop is on *foundations* of clustering. We are not interested in contributions about "yet another ad-hoc clustering algorithm". Please submit an extended abstract (at most two pages) summarizing your potential contribution to clustering_workshop at ipsi.fraunhofer.de. *** The deadline is the 30th October 2005. *** The organizers will review all submissions. You will be notified by November 11 whether your contribution is accepted. The final program of the workshop will be posted on the workshop webpage. -- ------------------------------------------------------ Dr. 
Ulrike von Luxburg Data Mining Group, Fraunhofer IPSI Dolivostrasse 15, 64293 Darmstadt, Germany Phone: +49 6151 869-844 and +49-170-9669432 Fax: +49 6151 869-989 http://www.ipsi.fraunhofer.de/~ule ------------------------------------------------------ From gtesauro at us.ibm.com Tue Oct 4 11:08:11 2005 From: gtesauro at us.ibm.com (Gerry Tesauro) Date: Tue, 4 Oct 2005 11:08:11 -0400 Subject: Connectionists: Fw: NIPS Workshop CFP - Value of Information in Inference, Learning and Decision-Making Message-ID: *********************************************************** Call For Papers Value of Information in Inference, Learning and Decision-Making Workshop held at the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005) Whistler, CANADA: December 10, 2005 *********************************************************** Overview and Goals =================== A common fundamental problem of value of information (VOI) analysis arises in inference, learning and sequential decision-making when one is allowed to actively select, rather than passively observe, the input information. VOI provides a principled methodology that enables acquiring information in a way that optimally trades off the cost of information gathering with the expected benefit in some overall objective (e.g., classification accuracy or cumulative reward). For example, in Bayesian problem diagnosis VOI analysis aims at selecting observations (e.g., medical tests) that are most informative about the unknown variables (e.g., diseases we are trying to diagnose) while minimizing the cost of collecting the information. In sequential decision-making problems, VOI can provide a principled solution to the well-known "exploration versus exploitation" dilemma, so that one can optimally trade off the immediate cost of exploratory actions with expected improvement in future decisions and future reward. Yet another example is active learning, where the goal is to minimize the cost of observations (e.g., the number of labeled samples) while maximizing the learner's objective function. Finally, selecting the most relevant subset of features in supervised learning is another example where VOI analysis can provide a principled solution. Clearly, these areas differ in their choices of a particular objective function and the approaches to active exploration, but have a common goal of selecting explorative actions that maximize the VOI. In this workshop, we plan to bring together researchers from several fields concerned with VOI analysis and hope to ignite cross-fertilization between the areas. This could lead to major theoretical progress as well as practical impact in applications such as medical diagnosis, quality control in product design, IT systems management and troubleshooting, and DNA library screening, just to name a few. Suggested Topics ================= The list of possible topics includes (but is not limited to) the following: * VOI analysis in probabilistic inference and decision theory * feature selection and attribute-efficient learning * active learning (query learning, selective sampling) * exploration-exploitation trade-off in reinforcement learning * adaptive versus non-adaptive testing designs * comparison of different action selection criteria and objective functions * applications of VOI in diagnosis, systems control and management, coding theory, computational biology, neural coding, etc. 
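To make the cost/benefit trade-off described in the overview concrete, here is a minimal, self-contained sketch of a VOI computation for a single binary diagnostic test (this example is illustrative only and is not taken from the announcement; the prior, losses and test characteristics are invented):

def expected_loss(p_disease, loss_treat_healthy=1.0, loss_miss_disease=5.0):
    # Expected loss of the best action (treat vs. don't treat) at belief p_disease.
    return min(p_disease * loss_miss_disease,           # don't treat: lose if diseased
               (1.0 - p_disease) * loss_treat_healthy)  # treat: lose if healthy

def value_of_information(prior, sensitivity, specificity, test_cost):
    # Expected reduction in loss from observing the test result, minus the test cost.
    p_pos = sensitivity * prior + (1.0 - specificity) * (1.0 - prior)
    p_neg = 1.0 - p_pos
    post_pos = sensitivity * prior / p_pos          # posterior after a positive result
    post_neg = (1.0 - sensitivity) * prior / p_neg  # posterior after a negative result
    loss_without_test = expected_loss(prior)
    loss_with_test = p_pos * expected_loss(post_pos) + p_neg * expected_loss(post_neg)
    return (loss_without_test - loss_with_test) - test_cost

print(value_of_information(prior=0.2, sensitivity=0.9, specificity=0.8, test_cost=0.1))

A positive value means the test is worth acquiring; the same reasoning, applied sequentially, underlies the exploration/exploitation and active-learning settings listed above.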
Format ======= This is a one-day workshop following the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005). There will be several invited talks and tutorials (roughly 30-40 minutes each) and shorter contributed talks from researchers in industry and academia, as well as a panel discussion. We will hold a poster session if we receive a sufficient number of good submissions. The workshop is intended to be accessible to the broader NIPS community and to encourage communication between different fields. Submission Instructions ======================== We invite submissions of extended abstracts (up to 2 pages, not including bibliography) for the short contributed talks and/or posters. The submission should present a high-level description of recent or ongoing work related to the topics above. We will explore the possibility of publishing papers based on invited and submitted talks in a special issue of an appropriate journal. Email submissions to nips05workshop at watson.ibm.com as attachments in Postscript or PDF, no later than October 24, 2005. Information ============ Workshop URL: www.research.ibm.com/nips05workshop/ Submission: nips05workshop at watson.ibm.com NIPS: http://www.nips.cc Dates & Deadlines ================== October 24: Abstract Submission October 31: Acceptance Notification Organizing Committee ===================== Dr. Alina Beygelzimer IBM T. J. Watson Research Lab, USA Dr. Rajarshi Das IBM T. J. Watson Research Lab, USA Dr. Irina Rish (primary contact) IBM T. J. Watson Research Lab, USA Dr. Gerry Tesauro IBM T. J. Watson Research Lab, USA Invited Speakers ================= Prof. Craig Boutilier University of Toronto Prof. Sanjoy Dasgupta University of California, San Diego Prof. Carlos Guestrin Carnegie Mellon University Prof. Michael Littman Rutgers University Prof. Dale Schuurmans University of Alberta ***************************************************************** From joshi at igi.tugraz.at Tue Oct 4 04:27:47 2005 From: joshi at igi.tugraz.at (Prashant Joshi) Date: Tue, 04 Oct 2005 10:27:47 +0200 Subject: Connectionists: Opening for a scientific programmer Message-ID: <43423D03.7020000@igi.tugraz.at> Hello Everyone! Position for a Scientific Programmer ______________________________ We have at our Institute in Graz, Austria an opening for a scientific programmer, who will develop efficient software for the parallel simulation of large neural circuits, and for carrying out learning experiments with such circuits. This software will replace and extend earlier software described on http://www.lsm.tugraz.at/ This work will be carried out in collaboration with Dr. Thomas Natschlaeger, in the framework of the 4-year EU Research project FACETS http://www.kip.uni-heidelberg.de/facets/public/ The goal of this fairly large research project is the development of detailed large scale models of neural circuits and areas, whose properties will be explored through simulations on a Blue Gene Supercomputer and on new special-purpose hardware. The salary for this position will be competitive. We are looking for a programmer who has the abstraction capability to design interfaces between different software components, and the skill and dedication needed for writing software that runs efficiently. We also expect experience in software development in C++ and Linux, as well as experience in writing parallel software (multithreading and interprocess communication). 
Interest or knowledge in computational neuroscience and/or machine learning would be helpful (in the case of scientific interest in these areas, simultaneous participation in our PhD program is possible). Send your application by October 10 to maass at igi.tugraz.at Thanks and regards, Prashant Joshi -- QOTD: "It wouldn't have been anything, even if it were gonna be a thing." ****************************************************** * Prashant Joshi * Institute for Theoretical Computer Science (IGI) * Technische Universitaet Graz * Inffeldgasse 16b, A-8010 Graz, Austria -------------------------------------------- * joshi at igi.tu-graz.ac.at * http://www.igi.tugraz.at/joshi * Tel: + 43-316-873-5849 ****************************************************** From juergen at idsia.ch Tue Oct 4 09:20:17 2005 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Tue, 4 Oct 2005 15:20:17 +0200 Subject: Connectionists: Feedback nets for control, prediction, classification Message-ID: <4ddaedb15d48b996368aa8c962b1b3f0@idsia.ch> 8 new papers on evolution-based & gradient-based recurrent neural nets, with Alex Graves, Daan Wierstra, Matteo Gagliolo, Santiago Fernandez, Nicole Beringer, Faustino Gomez, in Neural Networks, IJCAI 2005, ICANN 2005, GECCO 2005 (with a best paper award): EVolution of recurrent systems with Optimal LINear Output (EVOLINO). Basic idea: evolve an RNN population. To get an RNN's fitness, feed the training sequences into the RNN; this yields sequences of hidden unit activations. Compute an optimal linear mapping from hidden to target trajectories. The fitness of the recurrent hidden units is the RNN's performance on a validation set, given this mapping. Evolino-based LSTM nets learn to solve several previously unlearnable time series prediction tasks, and form a basis for the first recurrent support vector machines: http://www.idsia.ch/~juergen/evolino.html COEVOLVING RECURRENT NEURONS LEARN TO CONTROL FAST WEIGHTS. For example, 3 co-evolving RNNs compute quickly changing weight values for 3 fast weight networks steering the 3 wheels of a mobile robot in a confined space in a realistic 3D physics simulation. Without a teacher it learns to balance two poles with a joint: http://www.idsia.ch/~juergen/rnnevolution.html EVOLUTION MAIN PAGE (links to work since 1987): http://www.idsia.ch/~juergen/evolution.html NEW RESULTS on bidirectional gradient-based RNNs for phoneme recognition etc: http://www.idsia.ch/~juergen/rnn.html Juergen Schmidhuber TUM & IDSIA
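The fitness evaluation described above can be sketched roughly as follows (an illustrative sketch only; the method name hidden_activations, the choice of ridge regression for the linear readout, and the mean-squared-error fitness are assumptions for this example and are not taken from the papers):

import numpy as np

def evolino_fitness(rnn, train_seqs, train_targets, val_seqs, val_targets, ridge=1e-6):
    # 1. Run the (fixed) recurrent net over the training sequences and
    #    collect its hidden-unit activations.
    H_train = np.vstack([rnn.hidden_activations(s) for s in train_seqs])
    Y_train = np.vstack(train_targets)
    # 2. Fit an optimal linear mapping from hidden activations to targets
    #    (ridge regression used here as one simple choice).
    d = H_train.shape[1]
    W = np.linalg.solve(H_train.T @ H_train + ridge * np.eye(d), H_train.T @ Y_train)
    # 3. The fitness of the recurrent units is the performance of
    #    (RNN + linear readout) on a validation set.
    H_val = np.vstack([rnn.hidden_activations(s) for s in val_seqs])
    Y_val = np.vstack(val_targets)
    return -np.mean((H_val @ W - Y_val) ** 2)  # higher fitness = lower validation error

An outer evolutionary loop would then repeatedly mutate and recombine the RNN population and keep the individuals with the highest fitness.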
From mjhealy at ece.unm.edu Mon Oct 3 21:19:43 2005 From: mjhealy at ece.unm.edu (mjhealy@ece.unm.edu) Date: Mon, 3 Oct 2005 19:19:43 -0600 (MDT) Subject: Connectionists: Experimental application of categorical semantic theory for neural networks Message-ID: Tom Caudell and I have a paper in the Proceedings of the International Joint Conference on Neural Networks (IJCNN05), Montreal, 2005, published by the IEEE Press. The subject is an experiment demonstrating improved performance with a modification to a standard artificial neural architecture based on our category-theoretic semantic model. This was joint research with Sandia National Laboratories on the generation of multispectral images from satellite data. We believe this is the first time category theory has been applied directly in an engineering application (while at Boeing, another colleague and I had demonstrated its application to the synthesis of engineering software). Another feature of the work is that it relates an abstract, categorical structure to a neural network parameter; this is not dealt with in detail in the Proceedings paper but will be in a full paper to be submitted. Tom Caudell's web page has a link to the Proceedings paper at http://www.eece.unm.edu/faculty/tpc/ The semantic theory is described in our Technical Report EECE-TR-04-020 on the UNM Dspace Repository, at https://repository.unm.edu/handle/1928/33 Regards, Mike Healy mjhealy at ece.unm.edu From mark at paskin.org Thu Oct 6 12:43:32 2005 From: mark at paskin.org (Mark A. Paskin) Date: Thu, 6 Oct 2005 09:43:32 -0700 Subject: Connectionists: CFP: NIPS 2005 Workshop: Intelligence Beyond the Desktop Message-ID: (Apologies for multiple postings.) ################################################################ CALL FOR PARTICIPATION Intelligence Beyond the Desktop a workshop at the 2005 Neural Information Processing Systems (NIPS) Conference Submission deadline: Friday, October 14, 2005 http://ai.stanford.edu/~paskin/ibd05/ ################################################################ OVERVIEW We are now well past the era of the desktop computer. Trends towards miniaturization, wireless communication, and increased sensing and control capabilities have led to a variety of systems that distribute computation, sensing, and control across multiple devices. Examples include wireless sensor networks, multi-robot systems, networks of smartphones, and large area networks. Machine learning problems in these non-traditional settings cannot faithfully be viewed in terms of a data set and an objective function to optimize; physical aspects of the system impose challenging new constraints. Resources for computation and actuation may be limited and distributed across many nodes, requiring significant coordination; limited communication resources can make this coordination expensive. The scale and complexity of these systems often lead to large amounts of structured data that make state estimation challenging. In addition, these systems often have other constraints, such as limited power or under-actuation, requiring reasoning about the system itself during learning and control. Furthermore, large-scale distributed systems are often unreliable, requiring algorithms that are robust to failures and lossy communication. New learning, inference, and control algorithms that address these challenges are required. This workshop aims to bring together researchers to discuss new applications of machine learning in these systems, the challenges that arise, and emerging solutions. FORMAT This one-day workshop will consist of invited talks and talks based upon submitted abstracts, with some time set aside for discussion. Our (tentative) invited speakers are: * Dieter Fox (University of Washington) * Leonidas Guibas (Stanford University) * Sebastian Thrun (Stanford University), who will speak about the machine learning algorithms used in Stanley, Stanford's entry into the DARPA Grand Challenge. CALL FOR PARTICIPATION Researchers working at the interface between machine learning and non-traditional computer architectures are invited to submit descriptions of their research for presentation at the workshop. Of particular relevance is research on the following topics: * distributed sensing, computation, and/or control * coordination * robustness * learning/inference/control under resource constraints (power, computation, time, etc.)
* introspective machine learning (reasoning about the system architecture in the context of learning/inference/control) We especially encourage submissions that address unique challenges posed by non-traditional architectures for computation, such as * wireless sensor networks * multi-robot systems * large-area networks Submissions should be extended abstracts in PDF format, no longer than three (3) pages in 10pt or larger font. Submissions may be e-mailed to ibd-2005 at cs.cmu.edu with the subject "IBD SUBMISSION". We plan to accept four to six submissions for 25-minute presentation slots. In your submission please indicate if you would present a poster of your work (in case there are more qualified submissions than speaking slots). Call for participation: Wednesday, August 31, 2005 Submission deadline: Friday, October 14, 2005 11:59 PM PST Acceptance notification: Tuesday, November 1, 2005 Workshop: Friday, December 9, 2005 Organizers * Carlos Guestrin (http://www.cs.cmu.edu/~guestrin/) * Mark Paskin (http://paskin.org) Please direct any inquiries regarding the workshop to ibd-2005 at cs.cmu.edu. From esann at dice.ucl.ac.be Wed Oct 5 06:34:21 2005 From: esann at dice.ucl.ac.be (esann) Date: Wed, 5 Oct 2005 12:34:21 +0200 Subject: Connectionists: special sessions at ESANN'2006 European Symposium on Artificial Neural Networks Message-ID: <007f01c5c998$594986d0$43ed6882@maxwell.local> ESANN'2006 14th European Symposium on Artificial Neural Networks Advances in Computational Intelligence and Learning Bruges (Belgium) - April 26-27-28, 2006 Special sessions ===================================================== The following message contains a summary of all special sessions that will be organized during the ESANN'2006 conference. Authors are invited to submit their contributions to one of these sessions or to a regular session, according to the guidelines found on the web pages of the conference http://www.dice.ucl.ac.be/esann/. In line with our policy of reducing the number of unsolicited e-mails, we gathered all special session descriptions in a single message, and try to avoid sending it to overlapping distribution lists. We apologize if you receive multiple copies of this e-mail despite our precautions. Special sessions that will be organized during the ESANN'2006 conference ======================================================================== 1. Semi-blind approaches for Blind Source Separation (BSS) and Independent Component Analysis (ICA) M. Babaie-Zadeh, Sharif Univ. Tech. (Iran), C. Jutten, CNRS - Univ. J. Fourier - INPG (France) 2. Visualization methods for data mining F. Rossi, INRIA Rocquencourt (France) 3. Neural Networks and Machine Learning in Bioinformatics - Theory and Applications B. Hammer, Clausthal Univ. Tech. (Germany), S. Kaski, Helsinki Univ. Tech. (Finland), U. Seiffert, IPK Gatersleben (Germany), T. Villmann, Univ. Leipzig (Germany) 4. Online Learning in Cognitive Robotics J.J. Steil, Univ. Bielefeld, H. Wersing, Honda Research Institute Europe (Germany) 5. Man-Machine-Interfaces - Processing of nervous signals M. Bogdan, Univ. Tübingen (Germany) 6. Nonlinear dynamics N. Crook, T. olde Scheper, Oxford Brookes University (UK) Short descriptions ================== Semi-blind approaches for Blind Source Separation (BSS) and Independent Component Analysis (ICA) ----------------------------------------------------------------------- Organized by: - M. Babaie-Zadeh, Sharif Univ. Tech. (Iran) - C.
Jutten, CNRS - Univ. J. Fourier - INPG (France) In the original formulation of the Blind Source Separation (BSS) problem, it is usually assumed that there is no prior information about the source signals other than their statistical independence. The methods then try to separate the sources by transforming the observations into outputs that are as statistically independent as possible (ICA). A well-known result states that decorrelation (second-order independence) of the outputs is not sufficient for separating the sources. Another well-known result states that separating Gaussian sources using this approach is not possible. However, simple prior information about the source signals can lead to new methods whose simplicity and separation quality may be significantly improved (in terms of sample size, algorithm simplicity and speed, ability to separate a larger class of signals, etc.). For example, if we already know that the source signals are temporally correlated, it is possible to separate them by using second-order approaches: the algorithm based on second-order statistics is then simpler, and Gaussian sources can be separated. We call such an approach a "Semi-Blind" approach, because although it is not completely blind, the available prior information about the sources is very weak and remains true for a large class of sources. Some of the most famous priors for designing Semi-Blind approaches for BSS are: - Sparsity of the source signals: such a prior enables us to separate more sources than sensors, and even to drop the independence assumption. Hence, these approaches are usually called Sparse Component Analysis (SCA). - Temporal correlation of the source signals (colored sources) enables separation of Gaussian sources, using second-order approaches. - Non-stationarity of the source signals enables separation of Gaussian sources, using second-order approaches. - Bounded sources enable, for example, simple geometric approaches to be used. - Models of the source distribution (Markovian, etc.) can reduce the solution indeterminacies and improve separation performance. - Bayesian methods provide a general framework for handling priors. Of course, mixtures of priors, currently not very usual, could also be exploited and provide new algorithms. In this special session, we invite authors to submit papers illustrating the use of the above priors in BSS and ICA contexts. Visualization methods for data mining ------------------------------------- Organized by: - F. Rossi, INRIA Rocquencourt (France) In many situations, manual data exploration remains a mandatory preliminary step that provides insights on the studied problem and helps to solve it. It is also very important for reporting results of data mining tools in an exploitable way. While statistical summaries and simple linear methods give a rough analysis of a data set, sophisticated visualization methods allow human experts to discover information in an easier and more intuitive way. A very successful example of a neural-based visualization tool is given by Kohonen's Self-Organizing Maps used together with component planes, the U-matrix, the P-matrix, etc. This session aims at bringing together researchers interested in visualization methods used both as exploratory tools (before other data mining methods) and as explanatory tools (after other data mining methods).
Submissions are encouraged within (but not restricted to) the following areas: - non-linear projection - graph-based visualization - cluster visualization - visualization methods for supervised problems - visualization of non-vector data Neural Networks and Machine Learning in Bioinformatics - Theory and Applications ------------------------------------------------------------------- Organized by: - B. Hammer, Clausthal Univ. Tech. (Germany) - S. Kaski, Helsinki Univ. Tech. (Finland) - U. Seiffert, IPK Gatersleben (Germany) - T. Villmann, Univ. Leipzig (Germany) Bioinformatics is a promising and innovative research field. Despite a high number of techniques specifically dedicated to bioinformatic problems as well as successful applications, we are at the beginning of a process of massively integrating the aspects and experiences of the different core subjects such as biology, medicine, computer science, engineering, chemistry, physics and mathematics. Within this rather wide area we focus on neural network and machine learning related approaches in bioinformatics, with particular emphasis on integrative research against the background of the above-mentioned scope. In keeping with the high level and the aims of the hosting ESANN conference, we encourage authors to submit papers containing - New theoretical aspects - New methodologies - Innovative applications in the field of bioinformatics. A prospective but nonexclusive list of topics is - Genomic Profiling - Pathways - Sequence analysis - Structured data - Time series analysis - Context related metrics in modelling and analysis - Visualization - Pattern recognition - Image processing - Clustering and Classification - ... Online Learning in Cognitive Robotics ------------------------------------- Organized by: - J.J. Steil, Univ. Bielefeld - H. Wersing, Honda Research Institute Europe (Germany) In hardware and software we currently observe technological breakthroughs towards cognitive agents, which will soon incorporate a mixture of miniaturized sensors, cameras, multi-DOF robots, and large data storage, together with sophisticated artificial cognitive functions. Such technologies might culminate in the widespread application of humanoid robots for entertainment and house-care, in health-care assistant systems, or advanced human-computer interfaces for multi-modal navigation in high-dimensional data spaces. Making such technologies easily accessible for everyday use is essential for their acceptance by users and customers. At all levels of such systems, learning will be an essential ingredient in meeting the challenges in engineering, system development, and system integration, and neural network methods are of crucial importance in this arena. Cognitive robots are meant to behave in the real world and to interact smoothly with their users and the environment. While off-line learning is well established for implementing basic modules of such systems, and many learning methods work well in toy domains, in concrete scenarios on-line adaptivity is necessary in many respects: in order to cope with the inevitable uncertainties of the real world and the limited predictability of the interaction structure, and to acquire new behavior and enhance preprogrammed behavior. Online learning is also the main methodological ingredient in the developmental approach to intelligent robotics, which aims at progressing incrementally from simple to more and more complex behavior. The current session will focus exclusively on the more difficult field of online learning in real systems with real data.
Provided their systems meet these constraints, authors are invited to submit contributions on all kinds of cognitive robotics, for instance - cognitive vision (e.g. visual object learning, acquisition of visual memory, adaptive scene analysis) - localization and map building in mobile robots - online trajectory learning and acquisition - adaptive control of multi-DOF robots - learning in behavioral architectures - learning by demonstration and imitation Man-Machine-Interfaces - Processing of nervous signals ------------------------------------------------------ Organized by: - M. Bogdan, Univ. Tübingen (Germany) Recently, man-machine interfaces that contact the nervous system in order to extract information, or to introduce information into it, have gained more and more importance. In order to establish systems such as neural prostheses or Brain-Computer-Interfaces, powerful (real-time) algorithms for processing nerve signals or their field potentials are required. Another important point is the introduction of information into nervous systems by means such as functional electrical stimulation (FES). Topics of this session include, but are not limited to, NN-based algorithms and applications for - Neural Prostheses - Brain-Computer-Interfaces - Multi Neuron Recordings - Multi Electrode Arrays - Functional Electrical Stimulation - Population Coding - Spike Sorting - ... Nonlinear dynamics ------------------ Organized by: - N. Crook, T. olde Scheper, Oxford Brookes University (UK) The field of nonlinear dynamics has been a useful ally in the study of artificial neural networks (ANNs) in recent years. Investigations into the stability of recurrent networks, for example, have helped to define the characteristics of weight matrices which guarantee stable solutions. Similar studies have assessed the stability of Hopfield networks with distributed delays. However, some have suggested that nonlinear dynamics should play a more central role in models of neural information processing. Observations of the presence of chaotic dynamics in the firing patterns of natural neuronal systems have added some support to this suggestion. A range of models have been proposed in the literature that place nonlinear dynamics at the heart of neural information processing. Some of these use chaos as a basis for neural itinerancy, a process involving deterministic search through memory states. Others use the bifurcating properties of specific chaotic systems as a means of switching between states. This session will open with a tutorial paper outlining these different approaches. The session will also include paper contributions by some leading authors in the field. ======================================================== ESANN - European Symposium on Artificial Neural Networks http://www.dice.ucl.ac.be/esann * For submissions of papers, reviews,... Michel Verleysen Univ. Cath. de Louvain - Microelectronics Laboratory 3, pl. du Levant - B-1348 Louvain-la-Neuve - Belgium tel: +32 10 47 25 51 - fax: + 32 10 47 25 98 mailto:esann at dice.ucl.ac.be * Conference secretariat d-side conference services 24 av. L.
Mommaerts - B-1140 Evere - Belgium tel: + 32 2 730 06 11 - fax: + 32 2 730 06 00 mailto:esann at dice.ucl.ac.be ======================================================== From juergen at idsia.ch Thu Oct 6 04:46:46 2005 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Thu, 6 Oct 2005 10:46:46 +0200 Subject: Connectionists: Recurrent Support Vector Machines Message-ID: <078ef8ff985c9636db783be9596a124b@idsia.ch> Several people pointed out that the pdf of one of the recently announced papers on sequence learning with adaptive internal states was missing. Apologies! The TR on recurrent SVMs (with Gagliolo & Wierstra & Gomez) is now available at http://www.idsia.ch/~juergen/evolino.html The other 8 new publications can be found at http://www.idsia.ch/~juergen/rnn.html -JS From d.polani at herts.ac.uk Wed Oct 5 07:19:30 2005 From: d.polani at herts.ac.uk (Daniel Polani) Date: Wed, 5 Oct 2005 13:19:30 +0200 Subject: Connectionists: Research (PhD) studentships available (Artificial Life/Bio-Inspired Robotics) Message-ID: <17219.46786.843674.755004@perm.feis.herts.ac.uk> ######################################################################## RESEARCH (PhD) STUDENTSHIPS AVAILABLE Adaptive Systems Research Group http://adapsys.feis.herts.ac.uk/ University of Hertfordshire http://perseus.herts.ac.uk/ Contact: Dr. Daniel Polani, Principal Lecturer d.polani at herts.ac.uk ######################################################################## PhD Studentships are available at the Adaptive Systems Research Group in the areas (among others) of Evolution of Sensors (Theory and Robotics), Principles in Perception-Action Loops of Artificial and Biological Systems, Bio-inspired Learning Methods for Complex Agent Systems. All these fields can be considered subareas of Artificial Life which is a central interest of our group. We use robotics and mathematical models to create analytical, predictive and constructive models of biologically relevant scenarios. The idea is to understand how biological systems manage to ``climb'' the enormously intransparent complexity and intelligence obstacle to achieve the impressive variety of capabilities that is found in living systems. For this purpose, we develop mathematical, simulation and robotic models of these systems. The goal is, on the one hand, to understand biology, but, on the other hand, also to use this understanding to discover novel ``out-of-the-box'' principles for building artificial systems. _________________________________________________________________ EVOLUTION OF SENSORS (ROBOTICS AND THEORY) In recent years, the study of the evolution of sensors in living beings and in artificial systems has led to surprising insights into some of the driving forces of evolution. There is increasing evidence that the ``discovery'' of novel sensors by evolution contributes very significantly to selective pressure acting on living beings and may be one of the main sources of complexification and diversification during evolution. On the other hand, there are indications that sensor evolution itself is driven by the selection pressures resulting from given embodiments and informatory-ecological niches. The research of the last years allowed to identify the sources for this pressure in a quantitative way. 
A project in this direction will further develop theory and/or robotic models of sensor evolution, examining how environmental information can be tapped; it will thus contribute to unraveling one of the central mysteries of how the significant selection pressures produced by evolution can emerge and possibly be used for artificial systems. _________________________________________________________________ PRINCIPLES IN PERCEPTION-ACTION LOOPS OF ARTIFICIAL AND BIOLOGICAL SYSTEMS Living or artificial agents all share the property of acquiring environmental information (perception), processing this information and then acting upon it. Based on this minimal agent model, our group's recent research has found quantitative ways to derive a significant number of mechanisms from information-theoretic principles whereby agents can achieve increasingly sophisticated models of their environment. Unlike learning models from classical Artificial Intelligence, the resulting agent controls are not easily human-readable; they also differ from artificial neural networks, where significant aspects of the architecture are pre-designed by humans. In fact, it turns out that the encodings discovered by the agents have qualitative similarities to encodings applied in biological neurons. Similar simple principles can be used in robots to recover a wider variety of phenomena observed in biological agents from common ground. This is a powerful indication that biology and evolution may reuse the same (or similar) principles to create the wide variety of capabilities that we find in nature -- and it provides a methodology to recreate these capabilities in artificial systems. A project in this direction will apply quantitative, principled methods to discover algorithms that enable robots and robotic models to exhibit sensorimotor phenomena similar to those of living beings. _________________________________________________________________ BIO-INSPIRED LEARNING METHODS FOR COMPLEX AGENT SYSTEMS A further research area is the use of biologically inspired learning principles to allow complex agent systems to learn. Complex agent systems can be individual complex agents or larger agent groups (swarms) that, thanks to the size of their group, display complex emergent and/or self-organizing behaviour or require intricate coordination because of their task. Examples of this are ant colonies or RoboCup scenarios (robotic soccer). Understanding and managing the learning problem for such agent teams or complex agents is a very active research field, and our information-theoretic methods (as mentioned in the previous sections) provide a novel, powerful and principled approach to it. A PhD project in this direction will develop principled and generalizable approaches to constructing learning and adaptation models whereby complex agent systems can incrementally learn to master their environment and identify and solve tasks. _________________________________________________________________ FURTHER INFORMATION ON THE RESEARCH AREA For information on the research, please also consult the web page of Dr. Daniel Polani (http://homepages.feis.herts.ac.uk/~comqdp1/) and the respective publications (http://homepages.feis.herts.ac.uk/~comqdp1/publications.html). If you are interested in the above areas, please make sure you mention the keywords (``Evolution of Sensors'', ``Perception-Action Loop'' etc.) in your application. If you have questions, please contact d.polani at herts.ac.uk.
_________________________________________________________________ APPLICANTS Interested candidates will have a very strong programming background, very strong mathematical/analytical skills (e.g. due to a Computer Science/Mathematics/Physics degree) and a keen interest in interdisciplinary research, combining biological evidence with theoretical models and/or implementing them on simulated or real robotic systems. Applicants are urged to apply as soon as possible. _________________________________________________________________ ABOUT THE GROUP The PhD studentships offer the opportunity to work within the Adaptive Systems Research Group, a proactive and dynamic research team with an excellent international research profile at the University of Hertfordshire, located not far from London, between Cambridge and the capital. The group was founded and is co-organized by Prof. Kerstin Dautenhahn and Prof. Chrystopher Nehaniv. Other core faculty members of the group include Dr. Daniel Polani, Dr. Rene te Boekhorst, and Dr. Lola Canamero. Current projects within the Adaptive Systems Research Group are funded by FP6-IST, EPSRC, the Wellcome Trust and the British Academy. The group is currently involved in the following FP6 European projects: Euron-II, Humaine (both European Networks of Excellence), Cogniron and RobotCub (Integrated Projects). We hosted the AISB'05 convention Social Intelligence and Interaction in Biological and Artificial Systems, which attracted an international audience of 300 participants. Research in the group is highly interdisciplinary and strongly biologically inspired, but also has a strong theoretical foundation. Adaptive systems are computational, software, robotic, or biological systems that are able to deal with and 'survive' in a dynamically changing environment. We pursue a bottom-up approach to Artificial Intelligence that emphasizes the embodied and situated nature of biological or artificial systems that have evolved and are adapted to a particular environmental context. The Adaptive Systems Research Group has excellent research facilities for research staff, including numerous robotic platforms, covering the spectrum from miniature Khepera robots and dog-like AIBO robots to human-sized robots (PeopleBots), as well as humanoid robots developed in our group. We are dedicated to excellence in research, and, while providing a collaborative and supportive working environment, expect PhD students to show great enthusiasm and determination for their work. Candidates need to provide evidence of excellent research potential that can lead to significant contributions to knowledge as part of a PhD thesis. Successful candidates may be eligible for a research studentship award from the University in some of these areas (equivalent to a bursary of about £9,000 per annum plus the payment of the standard UK student fees). Self-funded students might also consider pursuing other research topics that senior academic members of the Adaptive Systems Research Group are active in; please consult http://adapsys.feis.herts.ac.uk/ for more information. Other areas in our research group are advertised at: http://homepages.feis.herts.ac.uk/~comqkd/AS-Studentshipadvert.html For an application form, please contact: Mrs Lorraine Nicholls, Research Student Administrator, STRI, Faculty of Engineering and Information Sciences, University of Hertfordshire, College Lane, Hatfield, Herts, AL10 9AB, United Kingdom Tel: +44 (0) 1707 286083 Fax: +44 (0) 1707 284185 or email: L.Nicholls at herts.ac.uk.
######################################################################## Pdf and HTML versions of this document can be found on http://homepages.feis.herts.ac.uk/~comqdp1/Studentships/SE_2005.pdf http://homepages.feis.herts.ac.uk/~comqdp1/Studentships/SE_2005/SE_2005.html ######################################################################## From sutton at cs.ualberta.ca Wed Oct 5 21:06:09 2005 From: sutton at cs.ualberta.ca (Rich Sutton) Date: Wed, 5 Oct 2005 19:06:09 -0600 Subject: Connectionists: academic job opening, University of Alberta Message-ID: The Department of Computing Science at the University of Alberta is seeking a qualified individual to fill a position at the level of assistant or associate professor in the area of artificial intelligence (www.cs.ualberta.ca). This is a soft-funded tenure track position. The initial appointment will be for three years, and continuation is subject to availability of funding. The successful candidate will be working with the Alberta Ingenuity Centre for Machine Learning (www.aicml.ca). Candidates should have a Ph.D. in Computing Science or equivalent, with specialization in artificial intelligence. Preference will be given to applicants with knowledge and experience in machine learning, with an emphasis on reinforcement learning. The candidate is expected to establish their own research program, supervise graduate students, and teach at both the graduate and undergraduate level. The Department highly values curiosity-driven research. Strong communication skills, project management, inter-personal skills, and team leadership are important qualities. The Department is well known for its collegial atmosphere, dynamic and well-funded research environment, and superb teaching infrastructure. Its faculty are internationally recognized in many areas of computing science, and enjoy collaborative research partnerships with local, national, and international industries. The University of Alberta, located in the provincial capital of Edmonton, is one of Canada's largest and finest teaching and research institutions, with a strong commitment to undergraduate teaching, community involvement, and research excellence. As a population center of 1,000,000, Edmonton offers a high-quality, affordable lifestyle that includes a wide range of cultural events and activities, in a natural setting close to the Canadian Rockies. Alberta's innovative funding initiatives for supporting and sustaining leading-edge IT research have attracted world-class researchers and outstanding graduate students to our Department and to the campus. Further information about the Department and University can be found at www.cs.ualberta.ca. The competition will remain open until a suitable candidate is found. Candidates should submit a curriculum vitae, a one-page summary of research plans, a statement of teaching interests and reprints of their three most significant publications electronically to everitt at cs.ualberta.ca or by mail to: Iris Everitt, Administrative Assistant Department of Computing Science University of Alberta Edmonton, Alberta, Canada T6G 2E8 All qualified candidates are encouraged to apply, however, Canadian and permanent residents will be given priority. The University of Alberta hires on the basis of merit. We are committed to the principle of equity of employment. We welcome diversity and encourage applications from all qualified women and men, including persons with disabilities, members of visible minorities, and Aboriginal persons. 
From tino at jp.honda-ri.com Thu Oct 6 01:56:29 2005 From: tino at jp.honda-ri.com (Tino Lourens) Date: Thu, 06 Oct 2005 14:56:29 +0900 Subject: Connectionists: TiViPE 2.0.0 now available on Linux, Mac, and Windows: Spend more time on research and less on coding Message-ID: <4344BC8D.1000804@jp.honda-ri.com> Dear all, Visual programming environment TiViPE version 2.0.0 (Sep 2005) is available on all major platforms (Linux, Mac OSX, and Windows). If you would like to program faster, need a rapid prototyping tool, would like to modify a program while it is active, or would like to integrate code (C, C++, Fortran, or Java) with minimal additional effort, then TiViPE might be the appropriate solution. Actually, I wanted a program that could wrap any routine into a graphical environment without programming and with little additional effort, since such a migration process is essential for users to move from textual programming to graphical programming. In this respect TiViPE differs substantially from, for instance, AVS or Khoros. * TiViPE already includes a considerable number of useful icons for image processing and includes most of my research on early vision (center-surround, simple, complex, and endstopped cells, etc.) and graph matching. * TiViPE can be downloaded from: http://www.dei.brain.riken.jp/~emilia/Collaboration/Tino/TiViPE/index.html On that website, you will find more information about TiViPE, as well as an optional library that extends the environment with 1. networking support through socket communication 2. merging support to compile a graphical program into a single icon and executable. TiViPE modules available online from other users: * Relaxation Phase Labeling http://home.arcor.de/winni9/rpl.html Winfried Fellenz The next release of TiViPE will have * Language support: in addition to English, Japanese ("Nihongo") language support will be provided. * Automatic inclusion of a tree of modules from other users Publication (a citation in your publication is appreciated): T. Lourens. TiViPE -- Tino's Visual Programming Environment. The 28th Annual International Computer Software & Applications Conference, IEEE COMPSAC 2004, pages 10-15, 2004. With kind regards, Tino Lourens -- Tino Lourens, Ph.D. Honda Research Institute Japan Co., Ltd. 8-1 Honcho, Wako-shi, Saitama, 351-0114, Japan Tel: +81-48-462-2121 (Ext.) 6806 mailto: tino at jp.honda-ri.com Fax: +81-48-462-5221 From B.Kappen at science.ru.nl Fri Oct 7 03:32:02 2005 From: B.Kappen at science.ru.nl (Bert Kappen) Date: Fri, 7 Oct 2005 09:32:02 +0200 (CEST) Subject: Connectionists: papers on stochastic optimal control available Message-ID: I would like to announce the following papers that have been accepted for publication. A linear theory for control of non-linear stochastic systems H.J. Kappen We address the role of noise and the issue of efficient computation in stochastic optimal control problems. We consider a class of non-linear control problems that can be formulated as a path integral and where the noise plays the role of temperature. The path integral displays symmetry breaking, and there exists a critical noise value that separates regimes where optimal control yields qualitatively different solutions. The path integral can be computed efficiently by Monte Carlo integration or by Laplace approximation, and can therefore be used to solve high-dimensional stochastic control problems. To appear in Physical Review Letters. download: www.arxiv.org/physics/0411119
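For readers outside this area, the class of problems the abstract refers to can be sketched as follows (the notation is generic and not quoted from the paper): the state follows controlled diffusion dynamics

\[ dx = f(x,t)\,dt + u(x,t)\,dt + d\xi, \qquad \langle d\xi\, d\xi^{\top} \rangle = \nu\, dt, \]

with expected cost

\[ C = \Big\langle \phi(x(T)) + \int_t^T \Big( V(x(s),s) + \tfrac{1}{2}\, u^{\top} R\, u \Big)\, ds \Big\rangle . \]

When the noise covariance and the quadratic control cost are linked by \( \nu = \lambda R^{-1} \) for a single scalar \( \lambda \), the log transformation \( J = -\lambda \log \psi \) turns the Hamilton-Jacobi-Bellman equation into a linear equation for \( \psi \), and

\[ \psi(x,t) = \Big\langle \exp\Big( -\tfrac{1}{\lambda} \Big( \phi(x(T)) + \int_t^T V(x(s),s)\, ds \Big) \Big) \Big\rangle_{\text{uncontrolled dynamics}}, \]

an expectation over uncontrolled diffusion paths: this is the path integral mentioned in the abstract, which can be estimated by Monte Carlo sampling, with \( \lambda \) playing the role of a temperature.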
A longer version of the paper is Path integrals and symmetry breaking for optimal control theory H.J. Kappen To appear in Journal of Statistical Mechanics. download: www.arxiv.org/physics/0505066 Bert Kappen SNN Radboud University Nijmegen URL: www.snn.kun.nl/~bert The Netherlands tel: +31 24 3614241 fax: +31 24 3541435 B.Kappen at science.ru.nl From wahba at stat.wisc.edu Thu Oct 6 23:51:58 2005 From: wahba at stat.wisc.edu (Grace Wahba) Date: Thu, 6 Oct 2005 22:51:58 -0500 Subject: Connectionists: Robust Manifold Unfolding with Kernel Regularization Message-ID: <200510070351.j973pwlc006579@juno.stat.wisc.edu> Esteemed colleagues: Robust Manifold Unfolding with Kernel Regularization Fan Lu, Yi Lin and Grace Wahba TR1108 October, 2005 University of Wisconsin Madison Statistics Dept TR 1108. http://www.stat.wisc.edu/~wahba -> TRLIST or http://www.stat.wisc.edu/~wahba/ftp1/tr1108.pdf Abstract We describe a robust method to unfold a low-dimensional manifold embedded in high-dimensional Euclidean space based on only pairwise distance information (possibly noisy) from the sampled data on the manifold. Our method is derived as a special extension of the recently developed framework called Kernel Regularization ( http://www.pnas.org/cgi/content/full/102/35/12332 ), which was originally designed to extract information in the form of a positive definite kernel matrix from possibly crude, noisy, incomplete, or inconsistent dissimilarity information between pairs of objects. The special formulation is transformed into an optimization problem that can be solved globally and efficiently using modern convex cone programming techniques. The geometric interpretation of our method will be discussed. Relationships to other methods for this problem are noted. From deneve at isc.cnrs.fr Fri Oct 7 12:11:58 2005 From: deneve at isc.cnrs.fr (Sophie Deneve) Date: Fri, 07 Oct 2005 18:11:58 +0200 Subject: Connectionists: Postdoctoral position available - Theoretical Neuroscience Group in Paris. Message-ID: <5.1.1.6.0.20051007181107.0305a010@pop.isc.cnrs.fr> Two postdoctoral research positions are available in the newly created Theoretical Neuroscience Group at Ecole Normale Supérieure, Paris, for a project funded by a Marie Curie Team of Excellence grant. The overall theme of the project is "Bayesian inference and neural dynamics", and the research will involve building and analyzing probabilistic treatments of representation, inference and learning in biophysical models of cortical neurons and circuits. To do so we will integrate complementary computational neuroscience approaches. The first studies neurons and neural networks as biophysical entities. The second reinterprets cognitive and neural processes as Bayesian computations. The faculty of this group includes Misha Tsodyks, Boris Gutkin, Sophie Deneve and Rava Da Silvera. It is part of the Department of Cognitive Science at Ecole Normale Supérieure, a unique institution bringing together major scientists in computational Neuroscience, Brain imaging, Psychology, Philosophy, and Mathematics. We are situated in central Paris, within walking distance of top scientific research and educational institutions. We have numerous international collaborations with experimental groups, with the goal of understanding the neural basis of cognition. The positions are for two years' duration, with attractive salaries, including a mobility allowance if applicable. Generous travel support will be provided.
Candidates should have 1- A strong mathematical/biophysical background and a strong interest in neuroscience, or 2- A strong neuroscience background and a good basis in math and/or biophysics. 3- Demonstrable interest in experimental collaborations. 4- Good communication skills. Candidates should send a CV, a 1-page research project and the addresses of two referees, to npg.lab at college-de-france.fr , before the 1st of November, 2005. For further information please contact Sophie Deneve (deneve at isc.cnrs.fr) or Boris Gutkin (bgutkin at pasteur.fr). From stefan.wermter at sunderland.ac.uk Fri Oct 7 13:08:33 2005 From: stefan.wermter at sunderland.ac.uk (Stefan Wermter) Date: Fri, 07 Oct 2005 18:08:33 +0100 Subject: Connectionists: book on biomimetic neural learning for intelligent robots Message-ID: <4346AB91.80502@sunderland.ac.uk> We are pleased to announce the new book Biomimetic Neural Learning for Intelligent Robots Stefan Wermter, Günther Palm, Mark Elshaw (Eds) 2005, Springer This book presents research performed as part of the EU project on biomimetic multimodal learning in a mirror neuron-based robot (MirrorBot) and contributions presented at the International AI-Workshop in NeuroBotics. The overall aim of the book is to present a broad spectrum of current research into biomimetic neural learning for intelligent autonomous robots. There seems to be a need for a new type of robot, one that is inspired by nature and so performs in a more flexible, learned manner than current robots. This new type of robot is motivated by recent theories and experiments in neuroscience indicating that a biological and neuroscience-oriented approach could lead to new life-like robotic systems. The book focuses on some of the research progress made in the MirrorBot project, which uses concepts from mirror neurons as a basis for the integration of vision, language and action. In this book we show the development of new techniques using cell assemblies, associative neural networks, and Hebbian-type learning in order to associate vision, language and motor concepts. We have developed biomimetic multimodal learning and language instruction in a robot to investigate the task of searching for objects. As well as the research performed in this area for the MirrorBot project, the second part of this book incorporates significant contributions from essential research in the field of biomimetic robotics. This second part of the book concentrates on the progress made in neuroscience-inspired robotic learning approaches (in short: Neuro-Botics). We hope that this book stimulates and encourages new research in this area. Further details can be found at http://www.his.sunderland.ac.uk/mirrorbot/mirrorbook.html and http://www.springeronline.com/sgw/cda/frontpage/0,11855,3-40109-22-55007983-0,00.html Chapters Towards Biomimetic Neural Learning for Intelligent Robots Stefan Wermter, Günther Palm, Cornelius Weber and Mark Elshaw The Intentional Attunement Hypothesis.
The Mirror Neuron System and its Role in Interpersonal Relations Vittorio Gallese Sequence Detector Networks and Associative Learning of Grammatical Categories Andreas Knoblauch and Friedemann Pulvermüller A Distributed Model of Spatial Visual Attention Julien Vitay, Nicolas Rougier and Frédéric Alexandre A Hybrid Architecture using Cross-Correlation and Recurrent Neural Networks for Acoustic Tracking in Robots John Murray, Harry Erwin and Stefan Wermter Image Invariant Robot Navigation Based on Self Organising Neural Place Codes Kaustubh Chokshi, Stefan Wermter, Christo Panchev and Kevin Burn Detecting Sequences and Understanding Language with Neural Associative Memories and Cell Assemblies Heiner Markert, Andreas Knoblauch and Günther Palm Combining Visual Attention, Object Recognition and Associative Information Processing in a NeuroBotic System Rebecca Fay, Ulrich Kaufmann, Andreas Knoblauch, Heiner Markert and Günther Palm Towards Word Semantics from Multi-modal Acoustico-Motor Integration: Application of the Bijama Model to the Setting of Action-Dependant Phonetic Representations Olivier Ménard, Frédéric Alexandre and Hervé Frezza-Buet Grounding Neural Robot Language in Action Stefan Wermter, Cornelius Weber, Mark Elshaw, Vittorio Gallese and Friedemann Pulvermüller A Spiking Neural Network Model of Multi-Modal Language Processing of Robot Instructions Christo Panchev A Virtual Reality Platform for Modeling Cognitive Development Hector Jasso and Jochen Triesch Learning to Interpret Pointing Gestures: Experiments with Four-Legged Autonomous Robots Verena Hafner and Frédéric Kaplan Reinforcement Learning Using a Grid Based Function Approximator Alexander Sung, Artur Merke and Martin Riedmiller Spatial Representation and Navigation in a Bio-inspired Robot Denis Sheynikhovich, Ricardo Chavarriaga, Thomas Strosslin and Wulfram Gerstner Representations for a Complex World: Combining Distributed and Localist Representations for Learning and Planning Joscha Bach MaximumOne: an Anthropomorphic Arm with Bio-Inspired Control System Michele Folgheraiter and Giuseppina Gini LARP, Biped Robotics Conceived as Human Modelling Umberto Scarfogliero, Michele Folgheraiter and Giuseppina Gini Novelty and Habituation: The Driving Force in Early Stage Learning for Developmental Robotics Qinggang Meng and Mark Lee Modular Learning Schemes for Visual Robot Control Gilles Hermann, Patrice Wira and Jean-Philippe Urban Neural Robot Detection in RoboCup Gerd Mayer, Ulrich Kaufmann, Gerhard Kraetzschmar and Günther Palm A Scale Invariant Local Image Descriptor for Visual Homing Andrew Vardy and Franz Oppacher *************************************** Professor Stefan Wermter Chair for Intelligent Systems Centre for Hybrid Intelligent Systems School of Computing and Technology University of Sunderland St Peters Way Sunderland SR6 0DD United Kingdom email: stefan.wermter **AT** sunderland.ac.uk http://www.his.sunderland.ac.uk/~cs0stw/ http://www.his.sunderland.ac.uk/ **************************************** From mbeal at cse.Buffalo.EDU Fri Oct 7 22:01:26 2005 From: mbeal at cse.Buffalo.EDU (Matthew Beal) Date: Fri, 7 Oct 2005 22:01:26 -0400 (EDT) Subject: Connectionists: CFP: NIPS 2005 Workshop: Nonparametric Bayesian methods Message-ID: ################################################################ CALL FOR PARTICIPATION Open Problems and Challenges for Nonparametric Bayesian methods in Machine Learning a workshop at the 2005 Neural Information Processing Systems (NIPS) Conference Friday, December 9, 2005, Whistler BC, Canada
http://aluminum.cse.buffalo.edu:8080/npbayes/nipsws05 Deadline for Submissions: Monday, October 31, 2005 Notification of Decision: Friday, November 4, 2005 ################################################################ Organizers: ----------- Matthew J. Beal, University at Buffalo, State University of New York Yee Whye Teh, National University of Singapore Overview: --------- Two years ago the NIPS workshops hosted its first forum for discussion of Nonparametric Bayesian (NPB) methods and Infinite Models as used in machine learning. It brought to bear techniques from the statistical disciplines on perennial problems in the NIPS community, such as model size and structure selection from a Bayesian viewpoint. NPB methods include some topics that are heavily studied in the NIPS community such as Gaussian Processes and more recently Dirichlet Process Mixture models, and other topics that are only now starting to make an impact. NPB methods are attractive to machine learning practitioners because they are both powerful and flexible: the key ingredient in an NPB formalism is to side-step the traditional practice of parameter fitting by instead integrating out the possibly complex parameters of the model, which allows interesting situations to exist in the model such as (countably) infinite components in a mixture model, infinite topics in a topic model, or infinite dimensions in a hidden state space; also, the flexibility of an NPB model is often very useful in domains where it is difficult to articulate priors or likelihood functions, such as in text and language modeling, spatially-dependent process modeling, etc. Since the last workshop, models such as Sparse Factor Analysis, Latent Dirichlet Allocation, Hidden Markov Models, and even robot mapping tasks have all benefitted from the flexibility of a nonparametric Bayesian approach, and these nonparametric alternatives have been shown to give superior generalization performance, as compared to finite model selection techniques. It is time now after two years to collect together these various research directions, and use them to define and delineate the challenges facing the nonparametric Bayesian community, and with this the set of open problems that stand a reasonable chance of being solved with focussed research plans. The workshop will focus less on well-studied topics like Gaussian Processes and more on potential new ideas from the statistics community. To this end, we will have a number of experts on nonparametric Bayesian methods from statistics to share their experiences and expertise with the general NIPS community, in an effort to transfer and build upon key methodologies developed there, since the time is ripe for the two groups to coalesce. In particular, we would like the workshop to address: * New techniques: What techniques and methodologies are currently being used in the statistics communities, and which of these can be transferred to be used in machine learning applications? Conversely, are the techniques developed withing the NIPS community of interest to the general statistics communities? * New problems: There are still a wide variety of problems in the NIPS community that cannot be elegantly solved by nonparametric Bayesian models, for example, problems needing smoothly time-varying or spatially-varying processes. It would be useful to identify these problems and the necessary characteristics of any nonparametric solutions. This can serve as concrete goals for further research. 
* Computational/Inferential issues: Inference in nonparametric models is for the most part carried out using expensive MCMC sampling, but recently variational and Expectation Propagation methods have been applied to isolated cases only. For more popular use of these models we need more efficient and reliable inference schemes. Can these methods scale to high dimensional data, and to large databases such as email repositories, news reports, and the world-wide-web? Could it be that NPB methods are just too much work for too little benefit? Format: ------- This is a one-day workshop, designed to be highly interactive, consisting of 3 or 4 themed sessions with short talks and lengthy moderated discussion periods. We anticipate a strong response and will likely have a poster session in between the morning and afternoon sessions. We have attracted several statisticians from outside the NIPS beaten track and as such we will also have a Distinguished Panel of statisticians/machine learners to discuss the points arising during the workshop's discussions. Call for Contributions: ----------------------- Potential speakers/discussants/attendees are encouraged to submit (extended) abstracts of 2-4 pages in length outlining their research as it relates to the themes above, before *Monday, October 31*. We are looking for position papers, extensions of theory and applications, as well as case studies of nonparametric methods. If there is overwhelming response we will accommodate a poster session in the afternoon break in between morning and evening sessions. All chosen abstracts will be posted on the workshop website beforehand to stimulate discussion. Deadline for Submissions: Monday, October 31, 2005 Notification of Decision: Friday, November 4, 2005 Date of workshop: Friday, December 9, 2005 Please email your submissions to mbeal at cse.buffalo.edu and/or yeewhye at gmail.com We encourage you to log in, post questions, and contribute to the pages of the workshop website, at http://aluminum.cse.buffalo.edu:8080/npbayes/nipsws05 which is in Wiki format, either in person or anonymously. Once you have a login if you wish to modify the pages please contact us for permissions. The website is intended to serve as a resource for you as nonparametric Bayesians, both before and after the workshop, and already contains plenty of links to literature on NPB from within and outside of the NIPS community. With your input we can tailor the workshop according to your suggestions. Also, for you to give comprehensive feedback on the topics to be covered, we have provided a survey form at http://aluminum.cse.buffalo.edu:8080/npbayes/nipsws05/survey which you are free to fill out only partially, and anonymously if desired. Thank you -Matt Beal & Yee Whye Teh From ted.carnevale at yale.edu Sat Oct 8 10:41:49 2005 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Sat, 08 Oct 2005 10:41:49 -0400 Subject: Connectionists: NEURON 5.8 simulates large nets on parallel hardware Message-ID: <4347DAAD.7080907@yale.edu> NEURON version 5.8 is now available from http://www.neuron.yale.edu/neuron/install/install.html This includes many bug fixes for features that were new in the previous official release of NEURON. However, for many users the most important changes may be the improvements that facilitate its use in a parallel computing environment. These are: 1. ParallelContext now works on top of MPI as well as PVM (see "Top 10 reasons to prefer MPI over PVM" http://www.lam-mpi.org/mpi/mpi_top10.php). 
This makes it much easier for users to take advantage of NEURON on parallel hardware that ranges from workstation clusters to massively parallel supercomputers. This is potentially helpful for all users who are faced with optimization, parameter space exploration, and "large model, long runtime" problems. 2. NEURON 5.8 also introduces something for users with very large network models that just won't fit on a single-CPU computer: the ParallelNetworkManager, which manages an enhanced ParallelContext to support neural network simulations on parallel machines. That is, the ParallelNetworkManager lets you run simulations in which each processor--be it a node in a supercomputer, or a single- or multiple-CPU workstation in a cluster--handles a different part of the network. Tests on large scale problems indicate a nearly linear speedup with increasing number N of processors, until N is so large that the CPUs don't have enough work to keep themselves busy. In making this announcement of NEURON's new capabilities for parallel computation, I would like to acknowledge the special contributions of Michele Migliore and Bill Lytton, who collaborated with me on the design of these features, and Henry Markram, with whom I have been collaborating on large scale simulation problems and whose IBM Blue Gene supercomputer has been a most helpful testbed for these new features. Michael Hines From pavlovd at ics.uci.edu Fri Oct 7 15:12:01 2005 From: pavlovd at ics.uci.edu (Dmitry Pavlov) Date: Fri, 7 Oct 2005 12:12:01 -0700 (PDT) Subject: Connectionists: Job opening @ Yahoo!: Research Scientist, Yahoo! Search Content Analysis Team Message-ID: Research Scientist, Yahoo! Search Content Analysis Team Location: Sunnyvale, CA Essential Responsibilities We are looking for a highly motivated research scientist experienced in machine learning to help develop algorithms and software systems for analyzing web content. You will be part of Yahoo's content analysis team, an advanced development group responsible for web scale document classification, clustering, attribute extraction and matching applications that push the frontiers of text mining. The content analysis team is focused on lexical, syntactic and semantic analysis of text documents. As part of the team you will have access to Yahoo's enormous amount of data including query logs, product descriptions, and the web page index. Qualified candidates should be proficient in C/C++ development and have a solid knowledge of machine learning, data mining and information retrieval, including hands-on experience with data clustering, classification, regression, recommender systems, optimization, text mining, natural language processing, analysis, pattern recognition, etc. A PhD degree in computer science or related areas is also required. Required Qualifications/Education Ph.D. in Computer Science, Mathematics, Computational Linguistics, or related field. Experience in: C/C++, Unix operating system and development tools, scripting languages. Knowledge of text mining, natural language processing, information retrieval, statistical analysis, or machine learning. Understanding of algorithm efficiency issues. Excellent communication and problem solving skills. Keywords: data mining, machine learning, text mining, document classification, document clustering, natural language processing, regression analysis, support vector machine, neural network, decision tree, collaborative filter, statistics, information retrieval Please send your resumes to Dr.
Dmitry Pavlov at dpavlov at yahoo-inc.com and reference the Research Scientist position. From ianfasel at mplab.ucsd.edu Tue Oct 11 00:24:16 2005 From: ianfasel at mplab.ucsd.edu (Ian Fasel) Date: Mon, 10 Oct 2005 21:24:16 -0700 Subject: Connectionists: CFP: NIPS*2005 Workshop on automatic discovery of object categories Message-ID: CALL FOR PARTICIPATION -- NIPS*2005 WORKSHOP AUTOMATIC DISCOVERY OF OBJECT CATEGORIES Friday, December 9, 2005 Westin Resort, Whistler, British Columbia http://objectdiscovery.cc Recent years have seen explosive progress in the development of reliable, real-time object detection systems. Unfortunately, these systems typically require large numbers of carefully hand-segmented training images, making it extremely costly to develop more than a few special-purpose applications (e.g. face or car detectors). A system capable of effortlessly learning to identify and localize thousands of different object categories has thus become a new "grand challenge" in computer vision and learning. The goal of this workshop is to help establish and accelerate progress in the recently emerging field of learning about objects from images or video containing little or no training information. The emphasis will be on learning to identify both the presence and location of objects in arbitrary scenes -- which is a somewhat different problem from, e.g., scene categorization, or discrimination between a collection of already-segmented objects (although these may indeed be complementary methods). More than just a technological challenge, this topic brings up fundamental new issues that will require the development of new concepts and new methods in learning and classification, and clearly crosses disciplinary boundaries into diverse areas such as neuroscience, robotics, developmental psychology, and others. This workshop will bring together pioneering figures in this area to assess the state of the art, establish future research goals, and agree on methods for assessing progress. The workshop will include invited presentations, contributed talks, a poster session, and plenty of time for hopefully lively discussion. Although we do wish to hear about specific systems, we are equally interested in "where are we" and big-picture discussions to help bring focus to the topic. Important questions to focus on include: * Feature-representations * Fusion of multi-modal information * Integration of bottom-up "saliency" information and top-down models * Use of partially and weakly labeled data, and reinforcement signals * Discriminative vs. generative vs. hybrid approaches * Datasets for training and evaluating progress in artificial systems * The developmental progression of object learning in humans and animals SUBMISSIONS We anticipate accepting six to eight 20-minute contributed talks and a number of posters. If you would like to present your work, please submit anything from a one-page abstract to a complete manuscript as soon as possible to: ianfasel at mplab.ucsd.edu Abstracts based upon previously published work are welcome. Please submit early!
DETAILS Website: http://objectdiscovery.cc Important Dates: Monday, October 24 - Submission Deadline Monday, October 31 - Acceptance Notification Friday, December 9 - The Workshop NIPS Workshop Registration & Hotel Info: http://www.nips.cc/Conferences/2005/ Workshop Inquiries: ianfasel at mplab.ucsd.edu movellan at mplab.ucsd.edu ORGANIZERS Ian Fasel and Javier Movellan UCSD Machine Perception Laboratory http://mplab.ucsd.edu From terry at salk.edu Mon Oct 10 14:21:20 2005 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 10 Oct 2005 11:21:20 -0700 Subject: Connectionists: NEURAL COMPUTATION 17:12 In-Reply-To: Message-ID: Neural Computation - Contents - Volume 17, Number 12 - December 1, 2005 LETTERS The Effect of NMDA Receptors on Gain Modulation Michiel Berends, Reinoud Maex, and Erik De Schutter Oscillatory Synchronization Requires Precise and Balanced Feedback Inhibition in a Model of the Insect Antennal Lobe Dominique Martinez Response Properties of an Integrate-and-Fire Model That Receives Subthreshold Inputs Xuedong Zhang and Laurel H. Carney Incremental Online Learning in High Dimensions Sethu Vijayakumar, Aaron D'Souza and Stefan Schaal On the Nonlearnability of a Single Spiking Neuron Jiri Sima and Jiri Sgall A Novel Model-Based Hearing Compensation Design Using a Gradient-Free Optimization Method Zhe Chen, Suzanna Becker, Jeff Bondy, Ian C. Bruce, and Simon Haykin A Robust Information Clustering Algorithm Qing Song Learning Curves for Stochastic Gradient Descent in Linear Feedforward Networks Justin Werfel, Xiaohui Xie and H. Sebastian Seung Information Geometry of Interspike Intervals in Spiking Neurons Kazushi Ikeda ERRATUM Edgeworth-Expanded Gaussian Mixture Density Modeling Marc Van Hulle ----- ON-LINE - http://neco.mitpress.org/ SUBSCRIPTIONS - 2005 - VOLUME 17 - 12 ISSUES Electronic only USA Canada* Others USA Canada* Student/Retired$60 $64.20 $114 $54 $57.78 Individual $100 $107.00 $143 $90 $96.30 Institution $680 $727.60 $734 $612 $654.84 * includes 7% GST MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ----- From thomas.villmann at medizin.uni-leipzig.de Mon Oct 10 15:37:43 2005 From: thomas.villmann at medizin.uni-leipzig.de (thomas.villmann@medizin.uni-leipzig.de) Date: Mon, 10 Oct 2005 21:37:43 +0200 Subject: Connectionists: CfP: FLINS 2006 - Special Session on Data Analysis for Mass Spectrometric Problems Message-ID: <434ADF27.5463.2DB2C5C@localhost> Dear connectionists, I want to draw your attention to the following announcement which could be of interest for you: #################################################### Special Session on Data Analysis for Mass Spectrometric Problems at the The 7th International FLINS Conference on Applied Artificial Intelligence (FLINS 2006) Session Organizers Frank-Michael Schleif University of Leipzig, Dept. of Computational Intelligence & Bruker Daltonik GmbH Karl-Tauchnitz-Str. 25, D-04107 Leipzig, Germany Tel: +49-341-24 31-480 Fax: +49-341-96252-15 E-Mail: fms at bdal.de Thomas Villmann University of Leipzig, Dept. of Computational Intelligence Karl-Tauchnitz-Str. 25, D-04107 Leipzig, Germany E-Mail: villmann at informatik.uni-leipzig.de Jens Decker Bruker Daltonik GmbH Research & Development Fahrenheitstr. 
4, D-28359 Bremen, Germany E-Mail: jde at bdal.de This session is organized as a part of The 7th International FLINS Conference on Applied Artificial Intelligence (FLINS 2006) http://www.fuzzy.ugent.be/flins2006/ August 29-31, 2006, Genova (Italy) In several areas of bioinformatics, such as mass spectrometry (ms), genome expression, and biosignal analysis, applied artificial intelligence methods play an important role in data analysis and data processing. These methods include all kinds of machine learning approaches as well as neural networks, modern statistics, genetic algorithms etc. In this session we focus on data processing in mass spectrometry. The most relevant problems arising in this domain are due to high dimensional but sparse data, processing of structures (functional data) and fuzziness. These topics are of more general interest also in the machine learning community. Moreover, ms plays an increasing role in the field of clinical proteomics and chemometrics. In the announced special session we focus on all kinds of data analysis occurring during the processing of ms data, such as peak detection, feature extraction, pattern recognition, classification etc. The quality of these data analysis tools crucially influences the medical investigation results. This is especially true for the analysis of high-dimensional functional MALDI-TOF or SELDI-TOF spectra of body fluids from clinical proteomics studies. To improve the state of the art in this field, both the processing of the spectra as well as new algorithms for the supervised and unsupervised analysis of the extracted spectral features should be reconsidered in the light of new research results. The special session Data Analysis for Mass Spectrometric Problems at FLINS 2006 aims at collecting the state-of-the-art activities in these fields and at bringing together researchers working on these important topics. Therefore, we encourage the submission of contributions which aim at improvements of all kinds of data processing for MS. Recommended topics include but are not limited to the following: * Machine Learning approaches for ms data analysis * Fuzzy data analysis * Rule extraction * Automatic reasoning * Denoising * Baseline correction and noise estimation * Recalibration * Automatic evaluation of spectra quality * Feature extraction and selection * Classification of ms data (e.g. within clinical proteomics) * Statistical methods for data analysis of ms data * Applications in Clinical Proteomics, Metabolic Profiling Submission of papers Authors are invited to submit a paper of up to 8 pages by December 15, 2005. You can submit your paper to the session organizers by email to: fms at bdal.de or {Schleif, villmann}@informatik.uni-leipzig.de All papers submitted in this session will be peer-reviewed. Accepted papers will be published in the conference proceedings as the book "Applied Artificial Intelligence" by World Scientific (to be EI indexed). Final papers should be prepared according to the publisher's instructions: http://www.worldscientific.com/style/proceedings_style.shtml Please select the trim size: 9" x 6". Papers that are not prepared according to these guidelines will not be published. Important dates * Paper submissions: December 15, 2005 * Acceptance letter: February 15, 2006 * Final paper submissions: April 15, 2006 ################################################# Best regards Thomas Villmann ______________________________________ Dr. rer. nat.
Thomas Villmann University Leipzig Clinic for Psychotherapy Karl-Tauchnitz-Str. 25 phone / fax +49 (0)341 9718868 / 49 email: thomas.villmann at medizin.uni-leipzig.de From shivani at csail.mit.edu Tue Oct 11 21:20:57 2005 From: shivani at csail.mit.edu (Shivani Agarwal) Date: Tue, 11 Oct 2005 21:20:57 -0400 (EDT) Subject: Connectionists: CFP: NIPS 2005 Workshop - Learning to Rank Message-ID: ************************************************************************ FINAL CALL FOR PAPERS ---- Learning to Rank ---- Workshop at the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005) Whistler, Canada, Friday December 9, 2005 http://web.mit.edu/shivani/www/Ranking-NIPS-05/ -- Submission Deadline: October 21, 2005 -- ************************************************************************ [ Apologies for multiple postings ] OVERVIEW -------- The problem of ranking, in which the goal is to learn an ordering or ranking over objects, has recently gained much attention in machine learning. Progress has been made in formulating different forms of the ranking problem, proposing and analyzing algorithms for these forms, and developing theory for them. However, a multitude of basic questions remain unanswered: * Ranking problems may differ in many ways: in the form of the training examples, in the form of the desired output, and in the performance measure used to evaluate success. What are the consequences of each of these factors on the design of ranking algorithms and on their theoretical guarantees? * The relationships between ranking and other classical learning problems such as classification and regression are still under-explored. Is any of these problems inherently harder or easier than another? * Although ranking is studied mainly as a supervised learning problem, it can have important consequences for other forms of learning; for example, in semi-supervised learning, one often ranks unlabeled examples so as to assign labels to the ones ranked at the top, and in reinforcement learning, one often learns a policy that ranks actions for each state. To what extent can these connections be explored and exploited? * There is a large variety of applications in which ranking is required, ranging from information retrieval to collaborative filtering to computational biology. What forms of ranking are most suited to different applications? What are novel applications that can benefit from ranking, and what other forms of ranking do these applications point us to? This workshop aims to provide a forum for discussion and debate among researchers interested in the topic of ranking, with a focus on the basic questions above. The goal is not to find immediate answers, but rather to discuss possible methods and applications, develop intuition, brainstorm on possible directions and, in the process, encourage dialogue and collaboration among researchers with complementary ideas. FORMAT ------ This is a one-day workshop that will follow the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005). The workshop will consist of two 3-hour sessions. There will be two invited talks and 5-6 contributed talks, with time for questions and discussion after each talk. We would particularly like to encourage, after each talk, a discussion of underlying assumptions, alternative approaches, and possible applications or theoretical analyses, as appropriate. 
The last 30 minutes of the workshop will be reserved for a concluding discussion which will be used to put into perspective insights gained from the workshop and to highlight open challenges. Invited Talks ------------- * Thorsten Joachims, Cornell University * Yoram Singer, The Hebrew University Contributed Talks ----------------- These will be based on papers submitted for review. See below for details. CALL FOR PAPERS --------------- We invite submissions of papers addressing all aspects of ranking in machine learning, including: * algorithmic approaches for ranking * theoretical analyses of ranking algorithms * comparisons of different forms of ranking * formulations of new forms of ranking * relationships between ranking and other learning problems * novel applications of ranking * challenges in applying or analyzing ranking methods We welcome papers on ranking that do not fit into one of the above categories, as well as papers that describe work in progress. We are particularly interested in papers that point to new questions/debate in ranking and/or shed new light on existing issues. Please note that papers that have previously appeared (or have been accepted for publication) in a journal or at a conference or workshop, or that are being submitted to another workshop, are not appropriate for this workshop. Submission Instructions ----------------------- Submissions should be at most 6 pages in length using NIPS style files (available at http://web.mit.edu/shivani/www/Ranking-NIPS-05/StyleFiles/), and should include the title, authors' names, postal and email addresses, and an abstract not to exceed 150 words. Email submissions (in pdf or ps format only) to shivani at mit.edu with subject line "Workshop Paper Submission". The deadline for submissions is Friday October 21, 11:59 pm EDT. Submissions will be reviewed by the program committee and authors will be notified of acceptance/rejection decisions by Friday November 11. Final versions of all accepted papers will be due on Friday November 18. Please note that one author of each accepted paper must be available to present the paper at the workshop. IMPORTANT DATES --------------- First call for papers -- September 6, 2005 Paper submission deadline -- October 21, 2005 (11:59 pm EDT) Notification of decisions -- November 11, 2005 Final papers due -- November 18, 2005 Workshop -- December 9, 2005 ORGANIZERS ---------- * Shivani Agarwal, MIT * Corinna Cortes, Google Research * Ralf Herbrich, Microsoft Research CONTACT ------- Please direct any questions to shivani at mit.edu. ************************************************************************ From bengio at idiap.ch Thu Oct 13 02:24:31 2005 From: bengio at idiap.ch (Samy Bengio) Date: Thu, 13 Oct 2005 08:24:31 +0200 (CEST) Subject: Connectionists: two open PhD positions in machine learning Message-ID: Two Open Positions in Machine Learning -------------------------------------- The IDIAP Research Institute seeks two qualified candidates for PhD positions in machine learning. The objective of the first project is to develop novel kernel-based algorithms for the analysis of sequences of high-level events, such as automatic speech recognition (ASR). State-of-the-art ASR systems are based on generative Hidden Markov Models (HMMs). On the other hand, recent machine learning research has shown promising results in kernel-based large-margin discriminant models such as Support Vector Machines (SVMs), which maximize the margin between positive and negative examples.
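(For reference, one standard form of the large-margin objective alluded to above, in generic notation not specific to this project:

  min_{w,b,\xi}  (1/2) ||w||^2 + C \sum_i \xi_i
  subject to  y_i ( w^T \phi(x_i) + b ) >= 1 - \xi_i,  \xi_i >= 0,

where \phi is the feature map induced by a kernel k(x,x') = \phi(x)^T \phi(x'), so that the learned decision function depends on the data only through kernel evaluations.)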
More recently, new kernels were proposed for various time-series tasks. The objective of this project is to study how these kernels could be modified in the context of more complex temporal tasks such as speech and video understanding. The objective of the second project is to develop novel machine learning algorithms for multi-channel sequence processing. Jointly modeling several sources of information (recorded from several cameras, microphones, etc.) has several practical applications, including audio-visual speech recognition, multimodal person tracking, and complex scene analysis. Several machine learning models have already been proposed for such a task, mainly for the case of two channels. The goal of this project is to propose theoretical and applied solutions for the case of more than two (potentially asynchronous) channels. Generative (Markovian-based) models as well as margin-based models will be considered for the task. The ideal candidates will hold a degree in computer science, statistics, or related fields. She or he should have a strong background in statistics, linear algebra, signal processing, C++, Perl and/or Python scripting languages, and the Linux environment. Knowledge of statistical machine learning and speech processing is an asset. Appointment for a PhD position is for a maximum of 4 years, provided successful progress, and should lead to a dissertation. Annual gross salary ranges from 36,000 Swiss Francs (first year) to 40,000 Swiss Francs (last year). Starting date is immediate. IDIAP is an equal opportunity employer and is actively involved in the European initiative involving the Advancement of Women in Science. IDIAP seeks to maintain a principle of open competition (on the basis of merit) to appoint the best candidate, provides equal opportunity for all candidates, and equally encourages both females and males to consider employment with IDIAP. Although IDIAP is located in the French part of Switzerland, English is the main working language. Free English and French lessons are provided. IDIAP is located in the town of Martigny in Valais, a scenic region in the south of Switzerland, surrounded by the highest mountains of Europe, and offering exciting recreational activities, including hiking, climbing and skiing, as well as varied cultural activities. It is within close proximity to Montreux (Jazz Festival) and Lausanne. Interested candidates should send a letter of motivation, along with their detailed CV and names of 3 references, to jobs at idiap.ch. More information can also be obtained by contacting Samy Bengio. ---- Samy Bengio Senior Researcher in Machine Learning. IDIAP, CP 592, rue du Simplon 4, 1920 Martigny, Switzerland. tel: +41 27 721 77 39, fax: +41 27 721 77 12. mailto:bengio at idiap.ch, http://www.idiap.ch/~bengio From rsun at rpi.edu Thu Oct 13 17:17:02 2005 From: rsun at rpi.edu (Professor Ron Sun) Date: Thu, 13 Oct 2005 17:17:02 -0400 Subject: Connectionists: Tenure-track Position in Cognitive Science at RPI Message-ID: <5BEF17E0-97B2-47D7-81F8-7E91643E2B70@rpi.edu> Tenure-track Position in Cognitive Science The Cognitive Science Department at Rensselaer Polytechnic Institute invites applications for an anticipated tenure-track position at the rank of Assistant Professor beginning in Fall 2006 or possibly (for the right candidate) Spring 2007.
We are seeking candidates who combine computational, mathematical, and/or logic-based modeling informed by experimental research in the areas of perception and action (e.g., motor control, vision, attention), interactive behavior (e.g., integrated models of cognitive systems), or high-level cognition (e.g., skill acquisition, decision making, reasoning). The candidate's interest can be in basic and/or applied theory-based research. Interests in areas such as robotics or high-level computational neuroscience will be considered a strength. However, all disciplines within cognitive science are potential sources of candidates. All candidates are expected to have a strong potential for external funding. The Cognitive Science Department at Rensselaer is among the world's newest dedicated cognitive science departments, specializing in computational cognitive modeling, perception/action, learning and reasoning (human and machine), and cognitive engineering. The department's primary mission is to carry out seminal basic research and to develop engineering applications within cognitive science. This effort requires the continued growth of its new, research-oriented doctoral program in cognitive science. Department faculty have excellent ties with faculty in Computer Science, Engineering, and Decision Sciences. Women and minorities are especially encouraged to apply. Send curriculum vitae, reprints and preprints of publications, a 1-to-2 page statement of research, a 1-page statement of teaching interests, and three letters of reference to: Search Committee, c/o Heather Hewitt, Cognitive Science Department, Carnegie Building, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180. (Direct queries via email to Prof. Wayne D. Gray, grayw at rpi.edu, Chair of the Search Committee.) Applications will be reviewed beginning December 1st and continuing until the position is filled. ======================================================== Professor Ron Sun Cognitive Science Department Rensselaer Polytechnic Institute 110 Eighth Street, Carnegie 302A Troy, NY 12180, USA phone: 518-276-3409 fax: 518-276-3017 email: rsun at rpi.edu web: http://www.cogsci.rpi.edu/~rsun ======================================================= From bower at uthscsa.edu Wed Oct 12 12:10:50 2005 From: bower at uthscsa.edu (james Bower) Date: Wed, 12 Oct 2005 11:10:50 -0500 Subject: Connectionists: Two positions in computational biology at UTSA Message-ID: Hello all, We are happy to announce two new faculty positions in computational biology at the University of Texas, San Antonio. These positions represent the continued growth in computational biology at UTSA, complementing the already established computational research efforts of a number of senior and junior faculty (http://www.bio.utsa.edu/faculty.html). UTSA has also recently established a major new multi-user facility for computational biology and bioinformatics, which will include a state-of-the-art computer cluster with full software support for modeling, simulation, and data analysis. This facility is articulated with several other core facilities including a fully equipped imaging core. San Antonio itself is regularly included in the top ten safe, healthy, affordable, and interesting places to live in the United States. Please consider joining us in our efforts to make UTSA one of the premier universities for computational studies in the world. James M.
Bower Charles Wilson Official Job Announcement The University of Texas at San Antonio Assistant Professor - Computational Biology The Department of Biology at the University of Texas San Antonio invites applications for two tenure-track positions at the rank of Assistant Professor pending budget approval. Required Qualifications: Candidates must have an M.D. or Ph.D., or the equivalent, in biology or a related discipline, and at least 2 years of postdoctoral experience. Applicant's research interests should include experimental and computational techniques but can apply to any area of biological research. Preference will be given to candidates with a record of accomplishment in both experimental and model-based studies within the field of Neuroscience. We are seeking faculty with interests in subcellular (e.g. channels, cell signaling, etc.), neuronal (e.g. synaptic integration or intrinsic electrical properties), or systems-level research, or a combination of all three. Responsibilities: The successful applicant is expected to establish and maintain an extramurally funded research program, and contribute to undergraduate and graduate teaching in courses offered either at the UTSA Downtown Campus or the 1604 campus, and occasionally at night. The successful candidate will join an interactive group of researchers in Neuroscience at UTSA and the nearby UT Health Science Center. For more information on this faculty group and the department, please visit bio.utsa.edu. Attractive startup packages, including new laboratory space and access to a variety of shared facilities, are available pending budget approval. Candidates please forward via email (biofacultyad at utsa.edu) or U.S. Post (Ion Channel Search Committee, Department of Biology, UTSA, 6900 N. Loop 1604 W., San Antonio, TX 78249-0662) a current curriculum vita, two or three representative publications, and a brief summary of future research interests and teaching philosophy. Include contact information (including email addresses) of three references. Applications will not be reviewed until all materials have arrived. Applicants who are not U. S. citizens must state their current visa and residency status. UTSA is an Affirmative Action/Equal Opportunity employer. Women, minorities, veterans, and individuals with disabilities are encouraged to apply. This position is security-sensitive as defined by the Texas Education Code §51.215(c) and Texas Government Code §411.094(a)(2). For full consideration, applications should be received by December 15, 2005, but will be accepted until the position is filled. For further information contact the search committee at biofacultyad at utsa.edu. -- James M. Bower Ph.D. Research Imaging Center University of Texas Health Science Center at San Antonio 7703 Floyd Curl Drive San Antonio, TX 78284-6240 Cajal Neuroscience Center University of Texas San Antonio Phone: 210 567 8080 Fax: 210 567 8152 From isabelle at clopinet.com Wed Oct 12 17:35:16 2005 From: isabelle at clopinet.com (Isabelle Guyon) Date: Wed, 12 Oct 2005 23:35:16 +0200 Subject: Connectionists: New Pattern Recognition Competition Message-ID: <434D8194.2060906@clopinet.com> Dear colleagues, We are organizing a new machine learning challenge entitled: Performance Prediction Challenge see: http://www.modelselect.inf.ethz.ch/ How good are you at predicting how good you are? Find out: compete to predict accurately your generalization performance. This problem, which is of great practical importance e.g.
in pilot studies, poses theoretical and computational challenges. Is cross-validation the best solution? What should k be in k-fold? Can one use theoretical performance bounds to better assess generalization? You will have opportunities to publish at WCCI 2006 (Vancouver, July 2006) and in JMLR under the new "model selection" special topic. Check the web site! Isabelle Guyon From derdogmus at ieee.org Fri Oct 14 01:30:30 2005 From: derdogmus at ieee.org (Deniz Erdogmus) Date: Thu, 13 Oct 2005 22:30:30 -0700 Subject: Connectionists: ICA 2006 Paper Submission Deadline Extended to October 28th Message-ID: <434F4276.8060708@ieee.org> Dear Colleagues, As the ICA 2006 organization committee, we have received numerous requests for a deadline extension due to its proximity to the ICASSP 2006 submission deadline. Clearly, both conferences are relevant to many researchers working in the ICA/BSS fields. Therefore, the paper submission deadline for ICA 2006 has been postponed to October 28th. Please visit the conference website at http://www.cnel.ufl.edu/ica2006/ for more information. Looking forward to your latest research contributions and to seeing you in Charleston, South Carolina, USA. Regards, Deniz Erdogmus on behalf of the ICA 2006 Organization Committee From marcus at idsia.ch Thu Oct 13 05:26:06 2005 From: marcus at idsia.ch (Marcus Hutter) Date: Thu, 13 Oct 2005 11:26:06 +0200 Subject: Connectionists: Useful Expressions for Mutual Information Message-ID: <2a1d01c5cfd8$23c5ac50$6bbfb0c3@NB1119> Dear Colleagues, We would like to announce 3 papers which appeared recently that derive various general and simple expressions for mutual information for unknown chances and missing data. M. Hutter and M. Zaffalon. Distribution of Mutual Information from Complete and Incomplete Data Computational Statistics & Data Analysis 48:3 (2005) 633-657 http://arxiv.org/abs/cs.LG/0403025 M. Zaffalon and M. Hutter. Robust Inference of Trees Annals of Mathematics and Artificial Intelligence (2005) to appear http://www.idsia.ch/~zaffalon/papers/treesj.pdf M. Hutter. Robust Estimators under the Imprecise Dirichlet Model Proc. 3rd International Symposium on Imprecise Probabilities and Their Applications (ISIPTA-2003) 274-289 http://www.carleton-scientific.com/isipta/PDF/021.pdf ----------------------------------- Marcus Hutter and Marco Zaffalon, Senior Researchers, IDSIA Istituto Dalle Molle di Studi sull'Intelligenza Artificiale Galleria 2 CH-6928 Manno(Lugano) - Switzerland Phone: +41-58-666 6668 Fax: +41-58-666 6661 E-mail marcus at idsia.ch http://www.idsia.ch/~marcus/idsia/ From timo.honkela at hut.fi Fri Oct 14 06:32:15 2005 From: timo.honkela at hut.fi (timo.honkela@hut.fi) Date: Fri, 14 Oct 2005 13:32:15 +0300 Subject: Connectionists: Proceedings on Adaptive Knowledge Representation and Reasoning In-Reply-To: <4346AB91.80502@sunderland.ac.uk> References: <4346AB91.80502@sunderland.ac.uk> Message-ID: We are pleased to announce that the proceedings of AKRR'05, International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning are available online at http://www.cis.hut.fi/AKRR05/papers/ In their keynote paper, Hyvarinen, Hoyer, Hurri and Gutmann present statistical models of images and early vision. They refer to the widely-spread assumption that biological visual systems are adapted to process the particular kind of information they receive. Hyvarinen et al. 
review work on modelling statistical regularities in ecologically valid visual input (natural images) and the obtained functional explanation of the properties of visual neurons. They refer to linear sparse coding as a seminal statistical model for natural images, which is also equivalent to the independent component analysis (ICA) model. The authors describe models that lead to the emergence of further properties of visual neurons: the topographic organization and complex cell receptive fields. Gabriella Vigliocco's keynote talk highlighted the fruitful connection between empirical psychological research on the one hand and computational analysis and modeling on the other. In their paper, Andrews, Vigliocco and Vinson consider the integration of attributional and distributional information in a probabilistic model of meaning representation. The authors present models of how meaning is represented in the brain/mind, based upon the assumption that children develop meaning representations for words using two main sources of information: information derived from their concrete experience with objects and events in the world and information implicitly derived from exposure to language. They first present a model developed using self-organising maps starting from speaker-generated features. They also present a probabilistic model that integrates the attributional information with distributional information derived from text corpora. The methodologically oriented papers in the proceedings consider adaptive systems, knowledge representation and reasoning from various points of view. Adaptive approaches and emergent representations based on contextual information are strongly present in the papers related to language and cognition. There were two special symposia in the conference that provide a focused view on their topics: ``Adaptive Models of Knowledge, Language and Cognition'' (AMKLC'05) and ``Knowledge Representation for Bioinformatics'' (KRBIO'05). The proceedings of these symposia are also included at the web site http://www.cis.hut.fi/AKRR05/papers/ Best regards, On behalf of the editors, Timo Honkela -- Timo Honkela, Chief Research Scientist, PhD, Docent Neural Networks Research Center Laboratory of Computer and Information Science Helsinki University of Technology P.O.Box 5400, FI-02015 TKK timo.honkela at tkk.fi, http://www.cis.hut.fi/tho/ From mjhealy at ece.unm.edu Sat Oct 15 17:01:46 2005 From: mjhealy at ece.unm.edu (mjhealy@ece.unm.edu) Date: Sat, 15 Oct 2005 15:01:46 -0600 (MDT) Subject: Connectionists: Experimental application of category theory to neural networks Message-ID: Tom Caudell and I have a paper in the Proceedings of the International Joint Conference on Neural Networks (IJCNN05), Montreal, 2005, published by the IEEE Press. The subject is an experiment demonstrating improved performance with a modification to a standard artificial neural architecture based on our category-theoretic semantic model. This was joint research with Sandia National Laboratories, and the experimental application concerns the generation of multispectral images from satellite data. We believe this is the first application of category theory directly in an engineering application (while at Boeing, another colleague and I had demonstrated its application to the synthesis of engineering software).
Another feature of it is that it relates an abstract, categorical structure to a neural network parameter; this is not dealt with in detail in the Proceedings paper but will be in a full paper to be submitted. Tom Caudell's web page has a link to the Proceedings paper at http://www.eece.unm.edu/faculty/tpc/ The semantic theory is described in our Technical Report EECE-TR-04-020 on the UNM Dspace Repository, at https://repository.unm.edu/handle/1928/33 Regards, Mike Healy mjhealy at ece.unm.edu From bisant at umbc.edu Mon Oct 17 16:28:50 2005 From: bisant at umbc.edu (D Bisant) Date: Mon, 17 Oct 2005 16:28:50 -0400 Subject: Connectionists: Call for Papers, FLAIRS06 Nnet Special Track, Melbourne Beach, FL, May 11-13 Message-ID: <43540982.80509@umbc.edu> Neural Networks Special Track at the 19th International FLAIRS Conference In cooperation with the American Association for Artificial Intelligence Holiday Inn - Melbourne Oceanfront, Melbourne Beach, Florida May 11-13, 2006 Call for Papers Papers are being solicited for a special track on Neural Network Applications at the 19th International Florida Artificial Intelligence Society Conference (FLAIRS-2006). http://www.indiana.edu/~flairs06/ The special track will be devoted to Neural Networks with the aim of presenting new and important contributions in this area. The areas include, but are not limited to, the following: applications such as Pattern Recognition, Control and Process Monitoring, Biomedical Applications, Robotics, Text Mining, Diagnostic Problems, Telecommunications, Power Systems, Signal Processing; algorithms such as new developments in Back Propagation, RBF, SVM, Ensemble Methods, Kernel Approaches; hybrid approaches such as Neural Networks/Genetic Algorithms, Neural Network/Expert Systems, Causal Nets trained with Backpropagation, and Neural Network/Fuzzy Logic; or any other area of Neural Network research related to artificial intelligence. FLAIRS is a respectable multidisciplinary conference in artificial intelligence. This special track will feature a double-blind review. Submission Guidelines Interested authors must submit completed manuscripts by November 30, 2005. Submissions should be no more than 6 pages (4000 words) in length, including figures, and contain no identifying reference to self or organization. Papers should be formatted according to AAAI Guidelines. Submission instructions can be found at FLAIRS-06 website at http:// www.indiana.edu/~flairs06. Notification of acceptance will be mailed around January 20, 2006. Authors of accepted papers will be expected to submit the final camera-ready copies of their full papers by February 13, 2006 for publication in the conference proceedings which will be published by AAAI Press. Authors may be invited to submit a revised copy of their paper to a special issue of the International Journal on Artificial Intelligence Tools (IJAIT). Questions regarding the track should be addressed to: David Bisant at bisant at umbc.edu. 
FLAIRS 2006 Invited Speakers * Alan Bundy, University of Edinburgh, Scotland * Bob Morris, NASA Ames Research Center, USA * Mehran Sahami, Stanford University and Google, USA * Barry Smyth, University College Dublin, Ireland Important Dates * Paper submissions due: November 30, 2005 * Notification letters sent: January 20, 2006 * Camera ready copy due: February 13, 2006 Special Track Committee Ingrid Russell (Co-Chair), University of Hartford, USA David Bisant (Co-Chair), The Laboratory for Physical Sciences, USA Georgios Anagnostopoulos, Florida Institute of Technology, USA Jim Austin, University of York, UK Geof Barrows, Centeye Corporation, USA Serge Dolenko, Moscow State University, Russia Erol Gelenbe, Imperial College London, UK Michael Georgiopoulos, University of Central Florida, USA Gary Kuhn, Department of Defense, USA Luis Martí, Univ. Carlos III de Madrid, Spain Costas Neocleous, University of Cyprus, Cyprus Sergio Roa Ovalle, National University of Colombia, Colombia Roberto Santana, University of the Basque Country, Spain C. N. Schizas, University of Cyprus, Cyprus Chellu Chandra Sekhar, Indian Institute of Technology, India From mark at paskin.org Mon Oct 17 14:22:13 2005 From: mark at paskin.org (Mark A. Paskin) Date: Mon, 17 Oct 2005 11:22:13 -0700 Subject: Connectionists: Deadline extended for NIPS 2005 Workshop: Intelligence Beyond the Desktop Message-ID: <3EA54D45-DD1F-4323-A2DD-39CCCF938A6B@paskin.org> The deadline has been extended to this Friday, October 21: ################################################################ CALL FOR PARTICIPATION Intelligence Beyond the Desktop a workshop at the 2005 Neural Information Processing Systems (NIPS) Conference Submission deadline: Friday, October 21, 2005 http://ai.stanford.edu/~paskin/ibd05/ ################################################################ OVERVIEW We are now well past the era of the desktop computer. Trends towards miniaturization, wireless communication, and increased sensing and control capabilities have led to a variety of systems that distribute computation, sensing, and controls across multiple devices.
Our (tentative) invited speakers are: * Dieter Fox (University of Washington) * Leonidas Guibas (Stanford University) * Sebastian Thrun (Stanford University), who will speak about the machine learning algorithms used in Stanley, Stanford's winning entry into the DARPA Grand Challenge. CALL FOR PARTICIPATION Researchers working at the interface between machine learning and non-traditional computer architectures are invited to submit descriptions of their research for presentation at the workshop. Of particular relevance is research on the following topics: * distributed sensing, computation, and/or control * coordination * robustness * learning/inference/control under resource constraints (power, computation, time, etc.) * introspective machine learning (reasoning about the system architecture in the context of learning/inference/control) We especially encourage submissions that address unique challenges posed by non-traditional architectures for computation, such as * wireless sensor networks * multi-robot systems * large-area networks Submissions should be extended abstracts in PDF format, no longer than three (3) pages in a 10pt or larger font. Submissions may be e-mailed to ibd-2005 at cs.cmu.edu with the subject "IBD SUBMISSION". We plan to accept four to six submissions for 25-minute presentation slots. In your submission please indicate if you would present a poster of your work (in case there are more qualified submissions than speaking slots). Call for participation: Wednesday, August 31, 2005 Submission deadline: Friday, October 21, 2005 11:59 PM PST Acceptance notification: Tuesday, November 1, 2005 Workshop: Friday, December 9, 2005 Organizers * Carlos Guestrin (http://www.cs.cmu.edu/~guestrin/) * Mark Paskin (http://paskin.org) Please direct any inquiries regarding the workshop to ibd-2005 at cs.cmu.edu. From sfr at unipg.it Wed Oct 19 07:54:34 2005 From: sfr at unipg.it (Simone G.O. FIORI) Date: Wed, 19 Oct 2005 13:54:34 +0200 Subject: Connectionists: Two new papers on learning theory. Message-ID: <1.5.4.32.20051019115434.01a82930@unipg.it> Dear colleagues, I take the liberty of announcing the availability of two new papers on learning theory. They are related to unsupervised adaptation of inherently stable IIR filters implemented in state-space form, with application to blind system deconvolution, and to variate generation by look-up-table type adaptive-activation-function-neuron learning. 1] "Blind Adaptation of Stable Discrete-Time IIR Filters in State-Space Form", by S. Fiori, University of Perugia, Italy. Accepted for publication in the IEEE Transactions on Signal Processing *Abstract: Blind deconvolution consists of extracting a source sequence and the impulse response of a linear system from their convolution. In the presence of system zeros close to the unit circle, which give rise to very long impulse responses, adaptive IIR structures are of use, whose adaptation should be carefully designed in order to guarantee stability. In this paper, we propose a blind-type discrete-time IIR adaptive filter structure realized in state-space form that, with a suitable parameterization of its coefficients, remains stable. The theory is first developed for a two-pole filter, whose numerical behavior is investigated via computer-based experiments. The proposed structure/adaptation theory is then extended to a multi-pole structure realized as a cascade of two-pole filters.
Computer-based experiments are proposed and discussed, which aim at illustrating the behavior of the filter cascade on several case studies. The numerical results obtained show that the proposed filters remain stable during adaptation and provide satisfactory deconvolution results. Draft available at: http://www.unipg.it/sfr/publications/tsp05.pdf 2] "Neural Systems with Numerically-Matched Input-Output Statistic: Variate Generation", by S. Fiori, University of Perugia, Italy. Accepted for publication in Neural Processing Letters *Abstract: The aim of this paper is to present a neural system trained to exhibit a matched input-output statistic for random sample generation. The learning procedure is based on a cardinal equation from statistics that suggests how to warp an available sample set with a known probability density function into a sample set with the desired probability distribution. The warping structure is realized by a fully-tunable neural system implemented as a "look-up table". Learnability theorems are proven and discussed, and the numerical features of the proposed methods are illustrated through computer-based experiments. Draft available at: http://www.unipg.it/sfr/publications/rng_nepl.pdf Best regards. ================================================= | Simone FIORI (Elec.Eng., Ph.D.) | | * Faculty of Engineering - Perugia University * | | * Polo Didattico e Scientifico del Ternano * | | Loc. Pentima bassa, 21 - I-05100 TERNI (Italy) | | eMail: fiori at unipg.it - Fax: +39 0744 492925 | | Web: http://www.unipg.it/sfr/ | ================================================= From Pierre.Bessiere at imag.fr Wed Oct 19 02:51:23 2005 From: Pierre.Bessiere at imag.fr (=?ISO-8859-1?Q?Pierre_Bessi=E8re?=) Date: Wed, 19 Oct 2005 08:51:23 +0200 Subject: Connectionists: BAYESIAN COGNITION Workshop, Paris, January 16 - 18, 2006 Message-ID: <2F1462BD-B11E-43C9-98C7-7D0F8FB0AFA4@imag.fr> --------------------------------- BAYESIAN COGNITION --------------------------------- International workshop on probabilistic models of perception, inference, reasoning, decision, action, learning and neural processing Paris, France, January 16 - 18, 2006 http://www.bayesian-cognition.org/ Scope and goal: --------------------- Animals and artificial systems alike are faced with the problem of making inferences about their environments and choosing appropriate responses based on incomplete, uncertain and noisy data. Probabilistic models and algorithms are flourishing in both the life sciences and the information sciences as ways of understanding the behavior of subjects and the neural processing underlying this behavior, and of building robots and artificial agents that can function effectively in such circumstances.
This workshop will gather life and information scientists to discuss the latest advances in this subject, specifically addressing the following topics: - Probability theory as an alternative to logic - Probabilistic models of neurons and assemblies of neurons - Probabilistic models of CNS functionality - Stochastic synchronisation of neuronal assemblies - Probabilistic interpretation of psychological and psychophysical data - Probabilistic inference and learning algorithms - Probabilistic robotics Lecturers (confirmed list): ---------------------------------- - Alain Berthoz, Collège de France - Pierre Bessière, CNRS Grenoble University - Heinrich Bülthoff, Max Planck Institute - Peter Dayan, UCL - Sophie Deneve, ISC - Jacques Droulez, Collège de France - Ian Hacking, Collège de France - Ben Kuipers, University of Texas - David MacKay, Cambridge University - Pascal Mamassian, Paris V University - Jose del R. Millan, IDIAP Research Institute - Kevin Murphy, University of British Columbia - Alexandre Pouget, University of Rochester - Rajesh Rao, University of Washington - Michael Shadlen, University of Washington - Roland Siegwart, EPFL - Eero Simoncelli, New York University - Jean-Jacques Slotine, MIT - Josh Tenenbaum, MIT - Sebastian Thrun, Stanford University - Daniel Wolpert, Cambridge University Registration and complementary information: ------------------------------------------------------------- http://www.bayesian-cognition.org/ - Talks + coffee breaks and welcome cocktail on January 16th: 90 euros - Talks + coffee breaks, welcome cocktail on January 16th and lunches: 140 euros The number of participants is limited; registrations are accepted in order of arrival. _______________________________ Dr Pierre Bessière - CNRS ***************************** GRAVIR Lab INRIA 655 avenue de l'Europe 38334 Montbonnot FRANCE Mail: Pierre.Bessiere at imag.fr Http: www-laplace.imag.fr Tel: +33 4 76 61 55 09 _______________________________ From asamsono at gmu.edu Wed Oct 19 15:15:51 2005 From: asamsono at gmu.edu (Alexei V. Samsonovich) Date: Wed, 19 Oct 2005 15:15:51 -0400 Subject: Connectionists: GRA positions available - please forward Message-ID: <43569B67.6040904@gmu.edu> Dear Colleague: As a part of a research team at KIAS (GMU, Fairfax, VA), I am searching for graduate students who are interested in working for one year, starting immediately, on a very ambitious project supported by our recently funded grant. The title is "An Integrated Self-Aware Cognitive Architecture". The grant may be extended for the following years. The objective is to create a self-aware, conscious entity in a computer. This entity is expected to be capable of autonomous cognitive growth, basic human-like behavior, and the key human abilities including learning, imagery, social interactions and emotions. The agent should be able to learn autonomously in a broad range of real-world paradigms. During the first year, the official goal is to design the architecture, but we are planning implementation experiments as well. We are currently looking for several students. The available positions must be filled as soon as possible, but no later than the beginning of the Spring 2006 semester. Specifically, we are looking for a student to work on the symbolic part of the project and a student to work on the neuromorphic part, as explained below. A candidate for the symbolic position must have a strong background in computer science, plus a strong interest and an ambition toward creating a model of the human mind.
The task will be to design and implement the core architecture, while testing its conceptual framework on selected practically interesting paradigms, and to integrate it with the neuromorphic component. Specific background and experience in one of the following areas is desirable: (1) cognitive architectures / intelligent agent design; (2) computational linguistics / natural language understanding; (3) hacking / phishing / network intrusion detection; (4) advanced robotics / computer-human interface. A neuromorphic candidate is expected to have a minimal background in one of the following three fields. (1) Modern cognitive neuropsychology, including, in particular, episodic and semantic memory, theory-of-mind, the self and emotion studies, familiarity with functional neuroanatomy, functional brain imaging data, and cognitive-psychological models of memory and attention. (2) Behavioral / system-level / computational neuroscience. (3) Attractor neural network theory and computational modeling. With a background in one of the fields, the student must be willing to learn the other two fields, as the task will be to put them together in a neuromorphic hybrid architecture design (that will also include the symbolic core) and to map the result onto the human brain. In addition, all candidates are expected to be interested in the modern problem of consciousness, willing to learn new paradigms of research, and committed to the success of the team. Given the circumstances, however, we do not expect all conditions listed above to be met. Our minimal criterion is the excitement and the desire of an applicant to build an artificial mind. I should add that this bold and seemingly risky project provides a unique opportunity to engage with emergent, revolutionary activity that may change our lives. Cordially, Alexei Samsonovich -- Alexei V. Samsonovich, Ph.D., Research Assistant Professor George Mason University, Krasnow Institute for Advanced Study 4400 University Drive MS 2A1, Fairfax, VA 22030-4444, U.S.A. Office: 703-993-4385, fax: 703-993-4325, cell: 703-447-8032 From nip-lr at neuron.kaist.ac.kr Thu Oct 20 06:03:24 2005 From: nip-lr at neuron.kaist.ac.kr (nip-lr@neuron.kaist.ac.kr) Date: Thu, 20 Oct 2005 19:03:24 +0900 Subject: Connectionists: New Volume of Neural Information Processing - Letters and Reviews Message-ID: <1129802565502976.13927@webmail> A new volume of NIP-LR (Neural Information Processing - Letters and Reviews), Vol.8, Nos.1-3, July-September 2005, is now available both online and in printed copy. The NIP-LR is a relatively new journal aiming at high-quality, timely publication with double-blind reviews. For the online version, simply visit the website at www.nip-lr.info. For the printed copy please send an e-mail request to nip-lr at neuron.kaist.ac.kr.
Soo-Young Lee ---------------------------------------------------------------------------- ---------------------------------------------- Table of Contents Letters An Modified Error Function for the Complex-value Backpropagation Neural Networks Xiaoming Chen, Zheng Tang, Songsong Li Generative and Filtering Approaches for Overcomplete Representations Kaare Brandt Petersen, Jiucang Hao, Te-Won Lee Segmentation Method of MRI Using Fuzzy Gaussian Basis Neural Network Wei Sun, Yaonan Wang Fast Computation of Moore-Penrose Inverse Matrices Pierre Courrieu Batch Learning of the Self-Organizing Relationship (SOR) Network Takeshi Yamakawa, Keiichi Horio and Satoshi Sonoh Time Series Prediction Using an Interval Arithmetic FIR Network Ho Joon Kim, Tae-Wan Ryu Modeling with Recurrent Neural Networks using Generalized Mean Neuron Model Gunjan Gupta, R. N. Yadav, Prem K. Kalra and J. John Novelty Scene Detection Using Scan Path Topology and Energy Signature in Scaled Saliency Map Sang-Woo Ban, Woong-Jae Won, and Minho Lee From ckello at gmu.edu Thu Oct 20 09:53:03 2005 From: ckello at gmu.edu (ckello@gmu.edu) Date: Thu, 20 Oct 2005 09:53:03 -0400 Subject: Connectionists: NSF is seeking a program director in large-scale computing Message-ID: Dear Connectionists, The Behavioral and Cognitive Sciences division at the National Science Foundation is seeking to fill a rotator (temporary) program director position in large-scale (e.g., parallel) computing and "cyberinfrastructure" broadly construed (see below). The applicant must have a current position at a US institution. For those of you with computing expertise in the behavioral and cognitive sciences, this is an opportunity to help shape the future of NSF's investments in your science. Christopher Kello Program Director Perception, Action, and Cognition Program Department of Psychology George Mason University ANNOUNCEMENT NO: E20060001-IPA OPEN: 10/06/2005 CLOSE: 11/07/2005 POSITIONS WILL BE FILLED ON A ONE OR TWO YEAR INTERGOVERNMENTAL PERSONNEL ACT (IPA) ASSIGNMENT BASIS. The National Science Foundation is seeking a qualified candidate for a position as Program Director in the Division of Behavioral and Cognitive Sciences (BCS), Directorate for Social and Behavioral Sciences, Arlington, VA. The desired starting date for this appointment is January 2006. BCS supports research to develop and advance scientific knowledge focusing on human cognition, language, social behavior and culture, as well as research on the interactions between human societies and the physical environment. More information about BCS and their programs can be found on their website at http://www.nsf.gov. The Division of Behavioral and Cognitive Sciences at the NSF is seeking a Program Director who has expertise in cyber-infrastructure and one or more of its disciplinary areas. Social and behavioral research is able to address increasingly complex questions by making use of increases in computer power, processing speed, networking capacities, and data storage capacities, and expertise in the emerging cyber-infrastructure issues is sought. The Program Director would serve as a liaison for cyber-infrastructure issues within NSF and participate in activities within and across programs in the division, directorate, and foundation. 
The Program Director may also assist in the administration of programs in one or more of these disciplines: Archaeology; Cognitive Neuroscience; Cultural Anthropology; Development and Learning Sciences; Geography and Regional Science; Linguistics; Perception, Action and Cognition; Physical Anthropology; or Social Psychology. For IPA assignments, the individual remains an employee on the payroll of his or her home institution and the institution continues to administer pay and benefits. NSF reimburses the institution for NSF's negotiated share of the costs. Individuals eligible for an IPA assignment include employees of State and local government agencies, institutions of higher education, Indian tribal governments, federally funded research and development centers and qualified nonprofit organizations. For more information regarding IPA assignments, visit our website at www.nsf.gov/about/career_opps. Qualifications required: Applicants must possess a Ph.D. or equivalent experience in a discipline related to the social and behavioral sciences and have an active research program in a related area. In addition, six or more years of successful research, research administration, and/or managerial experience pertinent to the program are required. DUTIES AND RESPONSIBILITIES: The Program Director directs the implementation, review, funding, post-award management, and evaluation of the program and contributes to its intellectual integration with other programs supported by the Division. Designs and implements the proposal review and evaluation process for relevant proposals. Selects well-qualified individuals to provide objective reviews on proposals either as individuals or as members of a panel. Conducts final review of proposals and evaluations, and recommends acceptance or declination. Manages and monitors on-going grants, contracts, interagency and cooperative agreements to ensure fulfillment of commitments to NSF. Evaluates progress of awards through review and evaluation of reports and publications submitted by awardees and/or meetings at NSF and during site visits. Contributes to the responsibility for establishing goals and objectives, initiating new program thrusts and phasing out old projects. Recommends new or revised policies and plans in scientific, fiscal, and administrative matters to improve the activities and management of the Program. QUALIFICATIONS DESIRED: In addition to the qualifications outlined above, further qualifications desired include: * Knowledge and understanding of scientific principles and theories that underlie the study of each Program. * Research, analytical and technical writing skills, which evidence the ability to perform extensive inquiries into a wide variety of significant issues and to make recommendations and decisions based on findings. * Ability to organize, implement and manage large, multi-disciplinary, broadly based, proposal-driven grant programs allocating resources to meet a broad spectrum of program goals. * Ability to meet and deal with members of the scientific community, other funding agencies and peers to effectively present and advocate program policies and plans. * Ability to work with individuals within the Program, both technical and support staff. HOW TO APPLY: Applications may be transmitted electronically to rotator at nsf.gov. Individuals may also submit a resume or an application of their choice to the National Science Foundation, Division of Human Resource Management, 4201 Wilson Blvd., Arlington, VA 22230, Attn: E20060001-IPA.
In addition, you are encouraged to submit a narrative statement that addresses your background and/or experience related to the Program you are applying for. You are asked to complete and submit the attached Applicant Survey form. Submission of this form is voluntary and will not affect your application for employment (the information is used for statistical purposes). Telephone inquiries may be referred to the Executive and Visiting Personnel Branch, at (703) 292-8755. For technical information, contact Dr. Peg (Marguerite) Barratt, Division Director, at (703) 292-8740 or mbarratt at nsf.gov. Hearing impaired individuals may call TDD (703) 292-8044. The National Science Foundation provides reasonable accommodations to applicants with disabilities on a case-by-case basis. If you need a reasonable accommodation for any part of the application and hiring process, please notify the point of contact listed on this vacancy announcement. NSF IS AN EQUAL OPPORTUNITY EMPLOYER COMMITTED TO EMPLOYING A HIGHLY QUALIFIED STAFF THAT REFLECTS THE DIVERSITY OF OUR NATION. From rsun at rpi.edu Thu Oct 20 16:20:25 2005 From: rsun at rpi.edu (Professor Ron Sun) Date: Thu, 20 Oct 2005 16:20:25 -0400 Subject: Connectionists: Cognitive Systems Research, Vol. 6, Iss. 3 and 4, 2005 Message-ID: <1BA809C5-BE7C-4B7B-98F4-94FFFC12B8CA@rpi.edu> New Issues are now available on ScienceDirect: NOTE: If the URLs in this email are not active hyperlinks, copy and paste the URL into the address/location box in your browser. ----------------------------------------------------------------------- * Cognitive Systems Research Volume 6, Issue 3, Pages 189-262 (September 2005) Epigenetic Robotics Edited by Luc Berthouze and Giorgio Metta http://www.sciencedirect.com/science/issue/ 6595-2005-999939996-596644 TABLE OF CONTENTS Epigenetic robotics: modelling cognitive development in robotic systems Pages 189-192 Luc Berthouze and Giorgio Metta http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4F1J8FY-1&md5=a6e730dcc6602ce2770e8cc701643a 81 A model of attentional impairments in autism: first steps toward a computational theory Pages 193-204 Petra Bj?rne and Christian Balkenius http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4F1J8FY-2&md5=afe39e751a2f591c7804593ab8c193 be Synching models with infants: a perceptual-level model of infant audio-visual synchrony detection Pages 205-228 Christopher G. Prince and George J. Hollich http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4F1J8FY-4&md5=c850deee36cbec77da5c4f74a1f74d e8 The evolution of imitation and mirror neurons in adaptive agents Pages 229-242 Elhanan Borenstein and Eytan Ruppin http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4F1J8FY-5&md5=de1cc98524ab5133400dd5517f61d4 ec Developmental stages of perception and language acquisition in a perceptually grounded robot Pages 243-259 Peter Ford Dominey and Jean-David Boucher http://www.sciencedirect.com/science? 
_ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4F1J8FY-3&md5=34377dba381a57278364f088e0196e 14 ----------------------------------------------------- * Cognitive Systems Research Volume 6, Issue 4, Pages 263-414 (December 2005) http://www.sciencedirect.com/science/issue/ 6595-2005-999939995-606565 TABLE OF CONTENTS When do differences matter? On-line feature extraction through cognitive economy Pages 263-281 David J. Finton http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4D75J5W-1&md5=f4b1ede099fa59af927f3889a2440a dc Experience-grounded semantics: a theory for intelligent systems Pages 282-302 Pei Wang http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4DS9026-1&md5=ff9bf1e90857ffc23559da7b6551e5 b4 A computational model of sequential movement learning with a signal mimicking dopaminergic neuron activities Pages 303-311 Wei Li, Jinghong Li and Jeffrey D. Johnson http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4F6MD7G-1&md5=5a3931fac34ae635e9b20351099db8 6e Understanding dynamic and static displays: using images to reason dynamically Pages 312-319 Sally Bogacz and J. Gregory Trafton http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4F6MD7G-2&md5=4c972633b54440549b2f68e1364eeb c1 An artificial intelligent counter Pages 320-332 Qi Zhang http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4FC3RM2-1&md5=253bfa0ee88d958aa530f8c21a1c56 e1 A cognitive model in which representations are images Pages 333-363 Janet Aisbett and Greg Gibbon http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4G7DY66-1&md5=161bdebeb758393bff6c205669790d d1 Agent communication pragmatics: the cognitive coherence approach Pages 364-395 Philippe Pasquier and Brahim Chaib-draa http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4GFVMN2-1&md5=89fa97ee6fe4ca2be7218d541fc557 93 Book reviews Review of Reductionism and the Development of Knowledge, T. Brown & L. Smith (Eds.); Mahwah, NJ: Lawrence Erlbaum, 2002. Pages 396-401 Geert Jan Boudewijnse http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4DN9DVP-3&md5=1af463f1abb48bb5c9a52887ad8d3c 29 Review of the Evolution and Function of Cognition, Lawrence Erlbaum (2003). Pages 402-404 D.M. Bernad http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4GP1VTX-1&md5=3581f5241a551708269892dfbd351b a8 Andy Clark, Natural-born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, Oxford University Press (2003). Pages 405-409 Leslie Marsh http://www.sciencedirect.com/science? 
_ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4GJKTYX-1&md5=979c0fe014ed250a68b682a37e9e5c b8 ------------------------------------------------------------------------ ---------- See http://www.elsevier.com/locate/cogsys for further information regarding accessing these articles. See the following Web page for submission, subscription, and other information regarding Cognitive Systems Research: http://www.cogsci.rpi.edu/~rsun/journal.html If you have questions about features of ScienceDirect, please access the ScienceDirect Info Site at http://www.info.sciencedirect.com ======================================================== Professor Ron Sun Cognitive Science Department Rensselaer Polytechnic Institute 110 Eighth Street, Carnegie 302A Troy, NY 12180, USA phone: 518-276-3409 fax: 518-276-3017 email: rsun at rpi.edu web: http://www.cogsci.rpi.edu/~rsun ======================================================= From verleysen at dice.ucl.ac.be Thu Oct 20 12:55:45 2005 From: verleysen at dice.ucl.ac.be (Michel Verleysen) Date: Thu, 20 Oct 2005 18:55:45 +0200 Subject: Connectionists: position at UCL Message-ID: <009701c5d597$1d4c3a00$43ed6882@maxwell.local> [with apologies for cross-posting] ********************************************************************* Research position on biomedical statistical machine learning Université catholique de Louvain (Louvain-la-Neuve, Belgium) Machine Learning Group, Faculty of Engineering, and Faculty of Medicine ********************************************************************* The Université catholique de Louvain seeks a qualified candidate for a research position in machine learning and signal processing for biomedical applications. The position is open to a candidate pursuing a PhD degree. The goal of the project, funded by local public funds, is to develop prediction systems for epileptic seizures, based on neural recordings. Machine learning and signal processing techniques will be used on recorded signals to extract patterns that are relevant in a prediction strategy. The project will be carried out in two groups: the Machine Learning Group (http://www.ucl.ac.be/mlg/) and the Neural Rehabilitation Laboratory (http://www.md.ucl.ac.be/gren/), under the supervision of Prof. Michel Verleysen and Dr. Jean Delbeke respectively. The ideal candidate will hold a degree in computer science, statistics, signal processing or related fields. She or he should have a strong background in statistics, linear algebra, and signal processing. Knowledge of statistical machine learning is an asset. She or he should have sufficient knowledge of Matlab and C or C++ programming. A strong interest in biomedical applications is required, but no prior knowledge is necessary. Direct involvement in physiological experimentation is expected. The candidate is expected to share his/her activities between the two groups, situated in Louvain-la-Neuve and Brussels respectively (about 20 km apart). The appointment for this PhD position is for a maximum of 3 years, subject to successful progress, and should lead to a dissertation. The annual gross salary is around 30,000 euros. The starting date is immediate. Applications, including a motivation letter and a detailed curriculum vitae, should be sent to: Michel Verleysen, Université catholique de Louvain, DICE-Machine Learning group, 3 place du Levant, B-1348 Louvain-la-Neuve, Belgium. E-mail: verleysen at dice.ucl.ac.be.
Applications should be sent as soon as possible, and in no case received later than November 30, 2005. ================================================= Michel Verleysen Research Director FNRS - Lecturer UCL Université catholique de Louvain DICE - Machine Learning Group 3 place du Levant B-1348 Louvain-la-Neuve Belgium Tel: +32 10 47 25 51 Fax: +32 10 47 25 98 E-mail: verleysen at dice.ucl.ac.be Homepage: http://www.dice.ucl.ac.be/~verleyse ================================================= From Gunnar.Raetsch at tuebingen.mpg.de Fri Oct 21 07:26:16 2005 From: Gunnar.Raetsch at tuebingen.mpg.de (=?ISO-8859-1?Q?Gunnar_R=E4tsch?=) Date: Fri, 21 Oct 2005 13:26:16 +0200 Subject: Connectionists: NIPS Workshop on New Problems and Methods in Computational Biology Message-ID: <35F77F0F-26C5-48E6-8CB6-97BD912E66B0@tuebingen.mpg.de> Dear colleagues, I would like to invite you to participate in the workshop on New Problems and Methods in Computational Biology on the 9th of December at NIPS 2005 in Whistler, B.C. If you would like to contribute, please send an extended abstract by *November 1, 9am EST* to nips-compbio at tuebingen.mpg.de. We still have a few slots for talks available (details below). I look forward to meeting you there! Gunnar Raetsch NIPS*05 Workshop New Problems and Methods in Computational Biology Workshop email: nips-compbio at tuebingen.mpg.de Workshop web address: http://www.fml.tuebingen.mpg.de/nipscompbio Organizers: * Gal Chechik, Department of Computer Science, Stanford University * Christina Leslie, Center for Comp. Learning Systems, Columbia University * Gunnar Raetsch, Friedrich Miescher Laboratory of the Max Planck Society * Koji Tsuda, AIST Computational Biology Research Center Workshop Description: The field of computational biology has seen dramatic growth over the past few years, in terms of newly available data, new scientific questions, and new challenges for learning and inference. In particular, biological data are often relationally structured and highly diverse, and thus require combining multiple weak pieces of evidence from heterogeneous sources. These could include sequenced genomes of a variety of organisms, gene expression data from multiple technologies, protein sequence and 3D structural data, protein interactions, gene ontology and pathway databases, genetic variation data, and an enormous amount of textual data in the biological and medical literature. These new types of scientific and clinical problems require the development of new supervised and unsupervised learning approaches that can use these growing resources. The goal of this workshop is to present emerging problems and machine learning techniques in computational biology. Speakers from the biology/bioinformatics community will present current research problems in bioinformatics, and we invite contributed talks on novel learning approaches in computational biology. We encourage contributions describing either progress on new bioinformatics problems or work on established problems using methods that are substantially different from standard approaches. Kernel methods, graphical models, feature selection and other techniques applied to relevant bioinformatics problems would all be appropriate for the workshop. Submission instructions: Researchers interested in contributing should send an extended abstract of up to 4 pages (postscript or pdf format) to nips-compbio at tuebingen.mpg.de by *November 1, 9am EST*.
The workshop organizers intend to invite submissions of full-length versions of accepted workshop contributions for publication in a special issue of BMC Bioinformatics (for information on last year's special issue see http://www.fml.tuebingen.mpg.de/nipscompbio/bmc). Program Committee: * Michael I. Jordan, UC Berkeley * William Stafford Noble, University of Washington * Kristin Bennett, Rensselaer Polytechnic Institute * Nello Cristianini, UC Davis * Alexander Hartemink, Duke University * Eran Segal, Stanford University * Michal Linial, The Hebrew University of Jerusalem * Klaus-Robert Mueller, Fraunhofer FIRST * Bernhard Schoelkopf, Max Planck Institute for Biol. Cybernetics * Pierre Baldi, UC Irvine (2004) * Nir Friedman, Hebrew University and Harvard (2004) * Eleazar Eskin, UC San Diego (2004) * Dan Geiger, Technion (2004) * Alexander Schliep, Max Planck Institute for Molecular Genetics (2004) * Jean-Philippe Vert, Ecole des Mines de Paris (2004) The workshop is supported by the EU PASCAL network. +-------------------------------------------------------------------+ Gunnar Rätsch http://www.fml.tuebingen.mpg.de/raetsch Friedrich Miescher Laboratory Gunnar.Raetsch at tuebingen.mpg.de Max Planck Society Tel: (+49) 7071 601 820 Spemannstraße 37, 72076 Tübingen, Germany Fax: (+49) 7071 601 455 From jqc at tuebingen.mpg.de Fri Oct 21 05:17:06 2005 From: jqc at tuebingen.mpg.de (Joaquin Quinonero Candela) Date: Fri, 21 Oct 2005 11:17:06 +0200 Subject: Connectionists: Gaussian Process Workshop at NIPS*05 Message-ID: <4358B212.7060506@tuebingen.mpg.de> Dear all, We are pleased to announce the NIPS*05 Gaussian Process Workshop: "Open Issues in Gaussian Processes for Machine Learning" http://gp.kyb.tuebingen.mpg.de which will take place in Whistler, Canada, on Saturday, December 10th, 2005, as part of the workshops of the 2005 Neural Information Processing Systems conference. With sponsorship from the PASCAL research network we have been able to invite the following two senior speakers from statistics to the workshop: - Tony O'Hagan, University of Sheffield - Michael Stein, University of Chicago This is not a call for papers: there will be no call for papers for this workshop! We plan to have more of a guided discussion, with long selected invited talks that can be interrupted with questions at any point, and a closing round-table discussion with a panel of invited experts. We encourage all to join the workshop, and to write on the wiki of the workshop's webpage any open questions you may have on GPs which you wish to see addressed at the workshop. Best wishes, The organizers Joaquin Quiñonero-Candela, Max Planck Institute for Biol. Cybernetics Carl Edward Rasmussen, Max Planck Institute for Biol. Cybernetics Zoubin Ghahramani, University College London and Cambridge University -- +-------------------------------------------------------------------+ Dr. Joaquin Quiñonero-Candela http://www.tuebingen.mpg.de/~jqc Max Planck Institute for jqc at tuebingen.mpg.de Biological Cybernetics Tel: (+49) 7071 601 553 Spemannstraße 38, 72076 Tübingen, Germany Fax: (+49) 7071 601 552 From erik at tnb.ua.ac.be Fri Oct 21 06:23:20 2005 From: erik at tnb.ua.ac.be (Erik De Schutter) Date: Fri, 21 Oct 2005 12:23:20 +0200 Subject: Connectionists: Neuro-IT Cerebellar Modeling Workshop Message-ID: <74385561-EB7A-4147-AC28-5C8114808736@tnb.ua.ac.be> A two-day cerebellar modeling workshop will take place at the University of Antwerp, Belgium on December 5-6, 2005. Invited speakers include N. Brunel (Paris, France), G.
Chauvet (Angers, France), E. d'Angelo (Pavia, Italy), C. Darlot (Paris, France), P. Dean (Sheffield, UK), E. De Schutter (Antwerp, Belgium), C. De Zeeuw (Rotterdam, Netherlands), M. Hausser (London, UK), R. Maex (Antwerp, Belgium), A. Silver (London, UK), P. Verschure (Zurich, Switzerland), Y. Yarom (Jerusalem, Israel). The full program and (free) registration can be found at http://www.neuroinf.org/workshop/neuroit05/ From caruana at cs.cornell.edu Sat Oct 22 13:44:15 2005 From: caruana at cs.cornell.edu (Richard Caruana) Date: Sat, 22 Oct 2005 13:44:15 -0400 Subject: Connectionists: CFP: NIPS 2005 Workshop: Inductive Transfer (Learning-to-Learn) Message-ID: <200510221744.j9MHiGI16152@zinger.cs.cornell.edu> NIPS 2005 Workshop - Inductive Transfer: 10 Years Later --------------------------------------------------------- Friday Dec 9, Westin Resort & Spa, Whistler, B.C., Canada Overview: --------- Inductive transfer refers to the problem of applying the knowledge learned in one or more tasks to learning for a new task. While all learning involves generalization across problem instances, transfer learning emphasizes the transfer of knowledge across domains, tasks, and distributions that are related, but not the same. For example, learning to recognize chairs might help to recognize tables; or learning to play checkers might improve learning of chess. While people are adept at inductive transfer, even across widely disparate domains, currently we have little learning theory to explain this phenomenon and few systems exhibit knowledge transfer. At NIPS95 two of the current co-chairs led a successful two-day workshop on "Learning to Learn" that focused on lifelong machine learning methods that retain and reuse learned knowledge. (The co-organizers of the NIPS95 workshop were Jon Baxter, Rich Caruana, Tom Mitchell, Lorien Pratt, Danny Silver, and Sebastian Thrun.) The fundamental motivation for that meeting was the belief that machine learning systems would benefit from re-using knowledge learned from related and/or prior experience and that this would enable them to move beyond task-specific tabula rasa systems. The workshop resulted in a series of articles published in a special issue of Connection Science [CS 1996], Machine Learning [vol. 28, 1997] and a book entitled "Learning to Learn" [Pratt and Thrun 1998]. Research in inductive transfer has continued since 1995 under a variety of names: learning to learn, life-long learning, knowledge transfer, transfer learning, multitask learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, meta-learning, and incremental/cumulative learning. The recent burst of activity in this area is illustrated by the research in multi-task learning within the kernel and Bayesian contexts that has established new frameworks for capturing task relatedness to improve learning [Ando and Zhang 04, Bakker and Heskes 03, Jebara 04, Evgeniou and Pontil 04, Evgeniou, Micchelli and Pontil 05, Chapelle and Harchaoui 05]. This NIPS 2005 workshop will examine the progress that has been made in ten years, the questions and challenges that remain, and the opportunities for new applications of inductive transfer systems.
In particular, the workshop organizers have identified three major goals: (1) To summarize the work thus far in inductive transfer to develop a taxonomy of research and highlight open questions, (2) To share new theories, approaches, and algorithms regarding the accumulation and re-use of learned knowledge to make learning more effective and more efficient, (3) To discuss the formation of an inductive transfer special interest group that might offer a website, benchmarking data, shared software, and links to various research programs and related web resources. Call for Papers: ---------------- We invite submission of workshop papers that discuss ongoing or completed work dealing with Inductive Transfer (see below for a list of appropriate topics). Papers should be no more than four pages in the standard NIPS format. Authorship should not be blind. Please submit a paper by emailing it in Postscript or PDF format to danny.silver at acadiau.ca with the subject line "ITWS Submission". We anticipate accepting as many as 8 papers for 15 minute presentation slots and up to 20 poster papers. Please only submit an article if at least one of the authors will attend the workshop to present the work. The successful papers will be made available on the Web. A special journal issue or an edited book of selected papers also is being planned. The 1995 workshop identified the most important areas for future research to be: * The relationship between computational learning theory and selective inductive bias; * The tradeoffs between storing or transferring knowledge in representational and functional form; * Methods of turning concurrent parallel learning into sequential lifelong learning methods; * Measuring relatedness between learning tasks for the purpose of knowledge transfer; * Long-term memory methods and cumulative learning; and * The practical applications of inductive transfer and lifelong learning systems. The workshop is interested in the progress that has been made in these areas over the last ten years. These remain key topics for discussion at the proposed workshop. More forward looking and important questions include: * Under what conditions is inductive transfer difficult? When is it easy? * What are the fundamental requirements for continual learning and transfer? * What new mathematical models/frameworks capture/demonstrate transfer learning? * What are some of latest and most advanced demonstrations of transfer learning in machines * What can be learned from transfer learning in humans and animals? * What are the latest psychological/neurological/computational theories of knowledge transfer in learning? Important Dates: ---------------- 19 Sep 05 - Call for participation 28 Oct 05 - Paper submission deadline 11 Nov 05 - Notification of paper acceptance 09 Dec 05 - Workshop in Whistler Organizers: -------------- Goekhan Bakir, Max Planck Institute for Biological Cybernetics, Germany Kristin Bennett, Department of Mathematical Sciences, Rensselaer Polytechnic Institute, USA Rich Caruana, Department of Computer Science, Cornell University, USA Massimiliano Pontil, Dept. of Computer Science, University College London, UK Stuart Russell, Computer Science Division, University of California, Berkeley, USA Danny Silver, Jodrey School of Computer Science, Acadia University, Canada Prasad Tadepalli, School of Electrical Eng. 
and Computer Science, Oregon State University, USA For further Information: ------------------------ See the workshop webpage at http://iitrl.acadiau.ca/itws05/ or email danny.silver at acadiau.ca From Ke.Chen at manchester.ac.uk Mon Oct 24 22:39:53 2005 From: Ke.Chen at manchester.ac.uk (Ke CHEN) Date: Tue, 25 Oct 2005 03:39:53 +0100 Subject: Connectionists: call-for-paper: WCCI'06-IJCNN'06 Special Session on Natural Computation for Temporal Data Processing Message-ID: <435D9AF9.506@manchester.ac.uk> Dear Coordinators, I would appreciate it if you could help us solicit our call-for-paper by including it in your maillist. Best regards, Ke -------------------------------------------------------------------------------------------------------------------------------- CALL for Papers WCCI'06- IJCNN'06 Special Session on Natural Computation for Temporal Data Processing Ke Chen, Kar-Ann Toh and Peter Tino Aims and scope -------------- Temporal/sequential data are ubiquitous in the real world. Proper treatment of data with temporal dependencies is essential in many application areas ranging from multimedia data processing to bioinformatics.The field has attracted a substantial amount of interest from a wide range of disciplines. This special session attempts to explore/exploit the natural computation approaches, e.g. neural computation, statistical learning, evolutionary computation, fuzzy systems and various hybrid methods, for coping with diverse forms of time dependency and data types as well as their applications in real world problems, e.g. multimedia data processing, biometrics, bioinformatics and financial data processing. The purpose of this session is to bring together researchers and practitioners working in the area from various disciplines of natural computation. The session serves as a forum enabling experience exchange between academia and industry, as well as between researchers working in different research branches. Topics -------- Topics of interest include but are not limited to: * biologically plausible/natural methods for temporal data processing * clustering/classification techniques for temporal data * feature extraction and representations of temporal data * time-series modeling/forecasting * temporal information processing and fusion * representation/analysis/recognition of actions and events from temporal data * temporal information indexing and retrieval * temporal data mining and knowledge discovery Paper submission ---------------- The paper format should be the same as that required for a regular paper submission in WCCI'06. For details, see http://www.wcci2006.org/WCCI-Web_ifa.html. Papers should be submitted via the conference on-line web submission system under the title of this session (please follow the above link to find instructions). All submissions will go through a peer-review process based on similar criteria used for contributed papers in WCCI'06. 
Important dates --------------- Paper Submission: January 31, 2006 Decision Notification: March 15, 2006 Camera-Ready Submission: April 15, 2006 Session organizers ------------------ Ke Chen School of Informatics The University of Manchester Manchester M60 1QD United Kingdom Email: Ke.Chen at manchester.ac.uk Tel: +44 161 306 4565 Fax: +44 161 306 1281 Kar-Ann Toh Biometrics Engineering Research Center School of Electrical & Electronic Engineering Yonsei University, Seoul, Korea Email: katoh at yonsei.ac.kr Tel: +82-2-2123-5864 Fax: +82-2-312-4584 Peter Tino School of Computer Science The University of Birmingham Edgbaston, Birmingham B15 2TT United Kingdom Email: P.Tino at cs.bham.ac.uk Tel: +44 121 414 8558 Fax: +44 121 414 4281 From bisant at umbc.edu Mon Oct 24 17:40:52 2005 From: bisant at umbc.edu (D Bisant) Date: Mon, 24 Oct 2005 17:40:52 -0400 Subject: Connectionists: FLAIRS06, Nnet Special Track, Schedule Correction Message-ID: <435D54E4.2030801@umbc.edu> A posting was made last week for the FLAIRS06 Neural Network Special Track in Melbourne Beach, Florida. The paper submission date was incorrect. The date should be Nov 21. The correct dates and other information can be found at http://www.indiana.edu/~flairs06/. My apologies, David Bisant, PhD From oby at cs.tu-berlin.de Mon Oct 24 05:33:15 2005 From: oby at cs.tu-berlin.de (Klaus Obermayer) Date: Mon, 24 Oct 2005 11:33:15 +0200 (MEST) Subject: Connectionists: tenured faculty position Message-ID: Dear All, below please find the advertisement for the tenured faculty position Modeling of Cognitive Processes within the Department of Computer Science and Electrical Engineering of the Berlin University of Technology and the recently established Bernstein Center for Computational Neuroscience Berlin. The Berlin Bernstein Center, which is funded by the German federal government, integrates interdisciplinary research initiatives in the brain sciences and in AI across the city's three major universities and across several research institutes in Berlin, covering neuroscience, medicine, physics, mathematics, computer science and engineering. The Center plans to launch an international Master/PhD program in Computational Neuroscience by fall 2006. The Department of Computer Science and Engineering is currently creating a focus area in AI and machine learning with at least four core faculty positions (Artificial Intelligence, Machine Learning, Modelling of Cognitive Processes, Neural Information Processing) and several other labs with AI-related research. The Department will introduce "intelligent systems" as an area of specialization in its new Master program in Computer Science by fall 2006. More information about the Center and TU Berlin can be found via http://www.tu-berlin.de/eng/index.html and http://www.bccn-berlin.de/, but I am also happy to answer any questions related to the position and our Berlin research environment. Note that proficiency in German is *not* a requirement, as all the relevant courses - at least during the first years - will be taught in English. All the best Klaus ------------------------------------------------------------------------- The Department of Electrical Engineering and Computer Science at the Technische Universität Berlin invites applications for a tenured faculty position Modeling of Cognitive Processes (W2) The position is associated with the recently established Bernstein Center for Computational Neuroscience Berlin (http://www.bccn-berlin.de).
The Professorship is devoted to the development of quantitative models of higher brain functions (as inferred, for example, from non-invasive methods like EEG or fMRI) in order to better understand the neural basis of cognitive processes. Modeling work should be complemented by application-oriented research in machine intelligence and artificial cognitive systems (e.g. autonomous intelligent agents, man-machine systems, etc.). The successful candidate is expected to establish a cooperative, innovative research program and have a strong commitment to excellence in undergraduate and graduate teaching at the TU department as well as within the Bernstein Center. The Technische Universität Berlin is an equal opportunity employer, committed to the advancement of individuals without regard to ethnicity, religion, sex, age, disability, or any other protected status. Applications should include a CV, a summary of teaching and research experience, a list of publications and funding, a statement of research interests, and up to five selected publications. For legal details (BerlHG, Par. 100) also see http://www.bccn-berlin.de/positions/berlhg-p-100. Applications should be sent by Nov. 21st, 2005, to the Dekanat, Fakultät IV, TU Berlin, Franklinstrasse 28/29, 10587 Berlin, Germany, and by email to Prof. Dr. Klaus Obermayer (oby at cs.tu-berlin.de) to speed up the search process. ============================================================================= Prof. Dr. Klaus Obermayer phone: 49-30-314-73442 FR2-1, NI, Fakultaet IV 49-30-314-73120 Technische Universitaet Berlin fax: 49-30-314-73121 Franklinstrasse 28/29 e-mail: oby at cs.tu-berlin.de 10587 Berlin, Germany http://ni.cs.tu-berlin.de/ From rb60 at st-andrews.ac.uk Mon Oct 24 18:26:42 2005 From: rb60 at st-andrews.ac.uk (rb60@st-andrews.ac.uk) Date: Mon, 24 Oct 2005 23:26:42 +0100 Subject: Connectionists: Lectureship and Postdoctoral Fellowship at St Andrews, UK Message-ID: <1130192802.435d5fa28e85a@webmail.st-andrews.ac.uk> Lectureship and Postdoctoral Fellowship School of Computer Science University of St Andrews Scotland (UK) We are seeking candidates for a Lectureship (permanent) and a Postdoctoral Fellowship (2 years) to support the recently formed Cognitive Systems research group led by Professor Rens Bod. Possible areas of expertise include but are not limited to data-oriented parsing, statistical natural language processing, unsupervised language learning, computational musical analysis and case-based reasoning. Closing Date: 24th November 2005 Further details about these vacancies are given at http://www.dcs.st-andrews.ac.uk/news/vacancies/2005-11-24.php Further details on the Cognitive Systems group are given at http://cogsys.dcs.st-and.ac.uk/ ------------------------------------------------------------------ University of St Andrews Webmail: https://webmail.st-andrews.ac.uk From tgd at eecs.oregonstate.edu Mon Oct 24 11:09:21 2005 From: tgd at eecs.oregonstate.edu (Thomas G. Dietterich) Date: Mon, 24 Oct 2005 08:09:21 -0700 Subject: Connectionists: Postdoc positions at Oregon State University Message-ID: <8053-Mon24Oct2005080921-0700-tgd@cs.orst.edu> We have several post-doc positions available. --Tom Dietterich ---------------------------------------------------------------------- Oregon State University School of Electrical Engineering and Computer Science One or more Research Associate positions in the Machine Learning, Computer Graphics, and Computer Vision groups starting January 2006. Required qualifications include a Ph.D.
in computer science or a related field; a strong mathematical background; experience with at least 3 of the following: (a) knowledge representation frameworks (logical and probabilistic), (b) reasoning methods (logical and probabilistic), (c) experimental machine learning research, (d) planning and reasoning algorithms, (e) virtual environments for training, (f) computer vision for object recognition and tracking, (g) augmented reality; excellent written and spoken communication skills; excellent programming and software engineering skills; excitement about computer science research; and the ability to manage graduate and undergraduate students working on research projects. The position is full-time, 12-month, fixed term, with reappointment at the discretion of the hiring official. For full consideration, applications must be received by 11/15/05. Send resume, letter of interest, evidence of 2 relevant publications and 3 professional references w/address, phone # to: Research Associate Search, 1148 Kelly Engineering Center, Corvallis, OR 97331-5501. For the full position announcement see http://oregonstate.edu/jobs; for other inquiries contact Thomas G. Dietterich (tgd at cs.orst.edu). OSU is an AA/EOE. From Martin.Riedmiller at uos.de Tue Oct 25 13:28:32 2005 From: Martin.Riedmiller at uos.de (Martin Riedmiller) Date: Tue, 25 Oct 2005 19:28:32 +0200 Subject: Connectionists: CFP: NIPS Workshop on RL - Benchmarks and Bake-offs II Message-ID: <435E6B40.8080903@uos.de> ************************************************************************ FINAL CALL FOR PAPERS ---- Reinforcement Learning Benchmarks and Bake-offs II ---- Workshop at the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005) Whistler, Canada, Friday December 9, 2005 http://www.ni.uos.de/rl_workshop05 -- Submission Deadline: November 4th, 2005 -- ************************************************************************ [ Apologies for multiple postings ] OVERVIEW -------- It is widely agreed that the field of reinforcement learning would benefit from the establishment of standard benchmark problems and perhaps regular competitive events (bake-offs). Competitions can greatly increase the interest and focus in an area by clarifying its objectives and challenges, publicly acknowledging the best algorithms, and generally making the area more exciting and enjoyable. Standard benchmarks can make it much easier to apply new algorithms to existing problems and thus provide clear first steps toward their evaluation. The workshop will be organized around two main themes. Theme 1 will be the 1st RL benchmarking event (see http://www.cs.rutgers.edu/~mlittman/topics/nips05-mdp) Theme 2 will be an in-depth discussion of issues related to RL benchmarking. Topics will include concrete proposals on how to organize an RL benchmark competition, proposals for benchmark domains and appropriate performance measures, and proposals for standardized software frameworks. Potential participants are encouraged to submit short papers (one page in length) summarizing their views and outlining their proposal. FORMAT ------ This is a one-day workshop that will follow the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005). The workshop will consist of two 3-hour sessions. RL Benchmarking Event ------------- see http://www.cs.rutgers.edu/~mlittman/topics/nips05-mdp Contributed Talks ----------------- These will be based on papers submitted for review. See below for details.
CALL FOR PAPERS --------------- We invite submissions of extended abstracts addressing all aspects of benchmarking in RL, e.g. proposals for benchmarks, performance measures, software, ... An important criterion for acceptance is the public availability of proposed benchmark problems. We are particularly interested in papers that point to open questions and stimulate discussions. Submission Instructions ----------------------- Submissions should be an extended abstract of at most 1 page in length. Email submissions (in pdf or ps format only) to martin.riedmiller at uos.de with the subject line "RL Workshop". The deadline for submissions is Friday, November 4th. Submissions will be reviewed by the program committee and authors will be notified of acceptance/rejection decisions by Friday, November 18th. Please note that one author of each accepted paper must be available to present the paper at the workshop. IMPORTANT DATES --------------- Paper submission deadline -- November 4, 2005 Notification of decisions -- November 18, 2005 Workshop -- December 9, 2005 ORGANIZERS ---------- Martin Riedmiller (contact person), Univ. of Osnabrueck Michael L. Littman (benchmarking event), Rutgers University Nikos Vlassis, University of Amsterdam Shimon Whiteson, UT Austin Adam White, U Alberta Michail G. Lagoudakis, Technical University of Crete CONTACT ------- Please direct any questions to martin.riedmiller at uos.de ************************************************************************ From anderson at cog.brown.edu Tue Oct 25 15:12:11 2005 From: anderson at cog.brown.edu (Jim Anderson) Date: Tue, 25 Oct 2005 15:12:11 -0400 (EDT) Subject: Connectionists: Position at Brown Message-ID: COMPUTATIONAL MODELING, BROWN UNIVERSITY: The Department of Cognitive and Linguistic Sciences invites applications for a position as Assistant Professor in the computational modeling of human cognitive systems, beginning July 1, 2006. Applicants must have a strong computational or theoretical research program in an area such as modeling of cognitive or language processing, computational neuroscience, computational linguistics, computational vision, dynamical systems, learning, or motor control. Integrated experimental research or previous collaboration with experimentalists is highly desirable. Candidates should also have a broad teaching ability in the cognitive sciences at both the undergraduate and graduate levels and an interest in contributing to interdisciplinary research and education. Brown benefits from an interactive environment with exceptional students and faculty pursuing multidisciplinary research in the brain sciences. Criteria for each rank are available on request; all Ph.D. requirements must be completed before July 1, 2006. Women and minorities are especially encouraged to apply. Send curriculum vitae, reprints and preprints of publications, a one-page statement of research and teaching interests, and three letters of reference to: Computational Search Committee, Dept. of Cognitive and Linguistic Sciences, Brown University, Providence, R.I. 02912 USA by December 15, 2005.
Brown University is an Affirmative Action Employer From shivani at csail.mit.edu Wed Oct 26 15:17:27 2005 From: shivani at csail.mit.edu (Shivani Agarwal) Date: Wed, 26 Oct 2005 15:17:27 -0400 (EDT) Subject: Connectionists: NIPS 2005 Workshop - Learning to Rank - Abstracts invited Message-ID: ** Abstract submission deadline: November 1, 2005 ** LEARNING TO RANK Workshop at NIPS 2005 http://web.mit.edu/shivani/www/Ranking-NIPS-05/ In response to a huge demand, we are opening some slots for short presentations, which will consist of either short talks or poster presentations. These will be based on submissions of extended abstracts, 2-4 pages in length in NIPS format, which will be due November 1, 2005. See the workshop webpage for detailed submission instructions. Regular papers that are not accepted for a regular presentation will also automatically be considered for these short presentations. Please direct any questions to shivani at mit.edu. Shivani Agarwal, Corinna Cortes, Ralf Herbrich Workshop Organizers From oza at email.arc.nasa.gov Wed Oct 26 15:44:14 2005 From: oza at email.arc.nasa.gov (Nikunj Oza) Date: Wed, 26 Oct 2005 12:44:14 -0700 Subject: Connectionists: CFP: Information Fusion Journal - Special Issue on Applications of Ensemble Methods (second notice) Message-ID: <435FDC8E.3040400@email.arc.nasa.gov> APOLOGIES FOR MULTIPLE COPIES (This is a second announcement sent as a reminder. First announcement sent August 15, 2005) Call for papers for a special issue of Information Fusion An International Journal on Multi-Sensor, Multi-Source Information Fusion An Elsevier Publication On APPLICATIONS OF ENSEMBLE METHODS Editor-in-Chief: Dr. Belur V. Dasarathy, FIEEE d.belur at elsevier.com http://belur.no-ip.com Guest Editors: Nikunj C. Oza, Kagan Tumer The Information Fusion Journal is planning a special issue devoted to Applications of Ensemble Methods in Machine Learning and Pattern Recognition. Ensembles, also known as Multiple Classifier Systems (MCSs) and Committee Classifiers, were originally motivated by the desire to avoid relying on just one learned model when only a small amount of training data is available. Because of this, most studies on ensembles have evaluated their new algorithms on relatively small datasets; most notably, datasets from the University of California, Irvine (UCI) Machine Learning Repository. However, modern data mining problems raise a variety of issues very different from the ones ensembles have traditionally addressed. These new problems include too much data; data that are distributed, are noisy, and represent changing environments; and performance measures different from the standard accuracy measurements; among others. The aim of this issue is to examine the different applications that raise these modern data mining problems, and how current and novel ensemble methods aid in solving these problems. Manuscripts (which should be original and not previously published or presented even in a more or less similar form under any other forum) covering new applications as well as the theories and algorithms of ensemble learning algorithms developed to address these applications are invited. Contributions should be described in sufficient detail to be reproducible on the basis of the material presented in the paper. Topics appropriate for this special issue include, but are not limited to: - Innovative applications of ensemble methods.
- Novel algorithms that address unique requirements (for example, different performance measures or running time constraints) of an application or a class of applications. - Novel theories developed under assumptions unique to an application or a class of applications. - Novel approaches to distributed model fusion. Manuscripts should be submitted electronically online at http://ees.elsevier.com/inffus (The corresponding author will have to create a user profile if one has not been established before at Elsevier.) Please also send without fail an electronic copy to oza at email.arc.nasa.gov (PDF format preferred), Guest Editors Nikunj C. Oza and Kagan Tumer NASA Ames Research Center Mail Stop 269-3 Moffett Field, CA 94035-1000, USA Deadline for Submission: November 30, 2005 -- -------------------------------------- Nikunj C. Oza, Ph.D. Tel: (650)604-2978 Research Scientist Fax: (650)604-4036 NASA Ames Research Center E-mail: oza at email.arc.nasa.gov Mail Stop 269-3 Web: http://ic.arc.nasa.gov/people/oza Moffett Field, CA 94035-1000 USA From doug.aberdeen at anu.edu.au Thu Oct 27 09:03:47 2005 From: doug.aberdeen at anu.edu.au (Douglas Aberdeen) Date: Thu, 27 Oct 2005 23:03:47 +1000 Subject: Connectionists: Invitation to the Machine Learning Summer School, Canberra, 2006 Message-ID: <18D459DA-F5EC-469D-A8B7-900B8A5148D5@anu.edu.au> Call for Students: Machine Learning Summer School, Canberra, 2006 Quick Link: http://canberra06.mlss.cc Since 2002, the Machine Learning Summer Schools have given an opportunity for leaders in the field of ML to lecture to the best students in ML. The first 2006 school will cover a broad range of subjects from foundations of learning theory, to state of the art applications. Dates: Monday 6 February to Friday 17 February 2006 (2 weeks) Location: Australian National University, Canberra, Australia Early Registration Deadline: December 31, 2005 Past Schools: http://www.mlss.cc Poster (please help advertise the MLSS series in your school): http://rsise.anu.edu.au/~daa/mlss06/flyer.pdf Details: -------- The Machine Learning Summer School is a series of master classes in Machine Learning, conducted by experts in the field. There are typically 5 or 6 key speakers who present for up to 8 hours, plus sundry other speakers. Over two weeks the lectures go from the foundations of statistical learning theory up to state of the art applications such as brain/computer interfaces and information retrieval. The target audience is anyone starting out in the field of Machine Learning. This includes o post graduates and post docs in Machine Learning or statistics; o researchers entering the field for the first time; o industry researchers. This year we have a particularly diverse set of topics including o learning theory, o kernel methods, o nonparametric methods, o reinforcement learning, o planning and learning; and a number of application areas including o information retrieval, o image processing, o brain/computer interfaces. A limited number of travel support scholarships and registration fee waivers may be available for students. For more information, including the preliminary speaker list, go to http://canberra06.mlss.cc, or contact doug.aberdeen at anu.edu.au. Come and enjoy an Australian summer with plenty of BBQs. I hope to see you there!
-- Dr Douglas Aberdeen for the Machine Learning Summer School National ICT Australia From m.pontil at cs.ucl.ac.uk Sat Oct 29 17:32:21 2005 From: m.pontil at cs.ucl.ac.uk (Massimiliano Pontil) Date: Sat, 29 Oct 2005 22:32:21 +0100 Subject: Connectionists: Machine Learning Faculty Positions @ UCL Message-ID: <27E26778-1F8B-42F8-91C8-CCCC2A1BC709@cs.ucl.ac.uk> Job Announcement: --------------------------- UCL Department of Computer Science Director UCL Centre for Computational Statistics and Machine Learning Reader/Senior Lecturer Post We are looking for world-class research talent to join us. We are specifically recruiting to a new senior faculty position in the areas of machine learning and computational statistics. The appointee will assume leadership of a new interdisciplinary UCL Centre for Computational Statistics and Machine Learning. The Gatsby Computational Neuroscience Unit is concurrently recruiting to both senior and junior faculty positions in the same areas, reflecting a strategic UCL commitment to research on machine learning. We share a strong commitment to experimental research and to UCL's tradition of interdisciplinary research. There is also active involvement in this area from the Departments of Statistical Science and Physics. Candidates for a post in Computer Science should be interested in innovative and challenging teaching at both the core and edges of computer science. In the case of a suitably qualified candidate, an appointment at Professorial level will be considered. You can find out more about the Department of Computer Science at http://www.cs.ucl.ac.uk and about the Gatsby Computational Neuroscience Unit at http://www.gatsby.ucl.ac.uk/ Further details of the posts and the application procedure can be found at http://www.cs.ucl.ac.uk/vacancies Unless otherwise requested, applicants will also be considered by the Gatsby Computational Neuroscience Unit. For informal enquiries please contact Anthony Finkelstein at a.finkelstein at cs.ucl.ac.uk The closing date for applications is 5th January 2006 From hugh.chipman at acadiau.ca Fri Oct 28 08:20:14 2005 From: hugh.chipman at acadiau.ca (Hugh Chipman) Date: Fri, 28 Oct 2005 09:20:14 -0300 Subject: Connectionists: Postdoctoral Fellowship - Statistical Learning with Graph-Structured Data Message-ID: Acadia University, Wolfville, NS, Canada Postdoctoral Fellowship Statistical Learning with Graph-Structured Data The Department of Mathematics and Statistics invites applications for a Postdoctoral Fellowship in Statistical Learning with Graph-Structured Data. Recent or expected Ph.D., to start January 2006. One-year position, with possible renewal for a second year. Analysis of network data, social network modelling, and data visualization. Desired skills/background: statistical computation, modelling with large data sets, and familiarity with supervised/unsupervised statistical learning methods. See http://ace.acadiau.ca/math/postdoc.htm for details. Email a CV, statement of research interests, and names of three potential referees to statpostdoc at acadiau.ca. Review of applicants will commence November 1, 2005, and will continue until the position is filled.
From rb60 at st-andrews.ac.uk Fri Oct 28 11:15:06 2005 From: rb60 at st-andrews.ac.uk (rb60@st-andrews.ac.uk) Date: Fri, 28 Oct 2005 16:15:06 +0100 Subject: Connectionists: Two PhD Studentships in Data-Oriented Parsing at St Andrews (UK) Message-ID: <1130512506.4362407aa1501@webmail.st-andrews.ac.uk> Two PhD Studentships in Data-Oriented Parsing School of Computer Science University of St Andrews Scotland (UK) We are seeking candidates for two fully funded PhD Studentships (3.5 years each) to reinforce the recently formed Cognitive Systems research group led by Professor Rens Bod. Candidates are expected to carry out research related to Data-Oriented Parsing and its applications, including but not limited to Data-Oriented Translation, Data-Oriented Language Learning, Data-Oriented Musical Analysis, Data-Oriented Problem-Solving and Data-Oriented Reasoning. Please send a CV and a one-page statement of research interests to rb at dcs.st-and.ac.uk by 24 December 2005. Further details on the Cognitive Systems group are given at http://cogsys.dcs.st-and.ac.uk/ Further details on Data-Oriented Parsing are given at http://cogsys.dcs.st-and.ac.uk/dop/dop.html ------------------------------------------------------------------ University of St Andrews Webmail: https://webmail.st-andrews.ac.uk From h.abbass at adfa.edu.au Fri Oct 28 05:23:37 2005 From: h.abbass at adfa.edu.au (Hussein A. Abbass) Date: Fri, 28 Oct 2005 19:23:37 +1000 Subject: Connectionists: A PhD Scholarship at UNSW@ADFA - Evolutionary methods in the design of robust control systems Message-ID: <200510280925.j9S9P8e0012021@seal.cs.adfa.edu.au> Call for a PhD Scholarship ($20,037 pa tax-free for 3 years) Location: School of Information Technology and Electrical Engineering, University of NSW @ Australian Defence Force Academy, Canberra, Australia. The School of Information Technology and Electrical Engineering (ITEE) at the Australian Defence Force Academy campus of the University of New South Wales is seeking expressions of interest from highly qualified students to join the PhD program. The PhD scholarship provides living allowances of $20,037 per annum tax-free. Topic: Evolutionary methods in the design of robust control systems The proposed thesis topic will involve the student looking at the application of evolutionary computation and related methods to optimization problems arising in the design of robust feedback control systems. It is expected that this research will lead to new general methods for robust and nonlinear control system design. In addition, specific applications will be considered, including missile auto-pilot systems, UAV flight control systems and vibration control systems. Application Process: The successful applicant is expected to have first-class honours or equivalent in Electrical Engineering or other relevant areas. All applicants are expected to possess good programming skills and excellent communication and research skills. An ideal applicant for the project would have knowledge of control systems, neural networks, and evolutionary computation. Applications should include a detailed CV, a certified copy of academic transcripts and a cover letter detailing the applicant's research interests and their relevance to the project. The applicant should satisfy UNSW admission requirements for the PhD program. Potential applicants should discuss the application and send the paperwork to Prof. Ian Petersen i.petersen at adfa.edu.au or Dr. Hussein Abbass abbass at itee.adfa.edu.au.
Deadline for applications is 15th of November 2005 or until the position is filled, for a possible starting date of March 2006. Applications should be sent by email or fax to +61-2-62688581.
From ulrike.luxburg at ipsi.fraunhofer.de Tue Oct 4 10:34:38 2005 From: ulrike.luxburg at ipsi.fraunhofer.de (Ulrike von Luxburg) Date: Tue, 04 Oct 2005 16:34:38 +0200 Subject: Connectionists: NIPS workshop "Theoretical Foundations of Clustering" Message-ID: <434292FE.6020407@ipsi.fraunhofer.de> ################################################################ CALL FOR CONTRIBUTIONS NIPS workshop on THEORETICAL FOUNDATIONS OF CLUSTERING Saturday, December 10, 2005 Westin Resort and SPA, Whistler, BC, Canada http://www.ipsi.fraunhofer.de/~ule/clustering_workshop_nips05/clustering_workshop_nips05.htm Submission deadline: October 30, 2005 ################################################################ Organizers: ------------ * Shai Ben-David, University of Waterloo, Canada, http://www.cs.uwaterloo.ca/~shai * Ulrike von Luxburg, Fraunhofer IPSI, Darmstadt, Germany http://www.ipsi.fraunhofer.de/~ule * John Shawe-Taylor, University of Southampton, UK http://www.ecs.soton.ac.uk/people/jst * Naftali Tishby, Hebrew University, Jerusalem, Israel http://www.cs.huji.ac.il/~tishby Clustering is one of the most widely used techniques for exploratory data analysis. Across all disciplines, from the social sciences through biology to computer science, people try to get a first intuition about their data by identifying meaningful groups among the data points. In the past five decades, a wide variety of clustering algorithms have been developed and applied to a wide range of practical problems. Despite this large number of algorithms and applications, the goal of clustering and its proper interpretation remains fuzzy and vague. There are in fact many different problems that are clustered together under this single term, from quantization with low distortion for compression, through various techniques for graph partitioning whose goals are not fully specified, to methods for revealing hidden structure and unobserved features in complex data. We are clearly not talking about a single well defined problem. Moreover, the theoretical foundations of clustering seem to be distressingly meager, covering only some sub-domains and failing to address some of the most basic general aspects of the area. There is not even agreement among researchers on the correct questions to pose, let alone which tools and analysis techniques should be used to answer those questions. In our opinion there is an urgent need to initiate a concerted discussion on these issues, in order to move towards a consolidation of the theoretical basis for - at least some of the aspects of - clustering. One prospective benefit of building a theoretical framework for clustering may come from enabling the transfer of tools developed in other related domains, such as machine learning and information theory, where the usefulness of having a general mathematical framework has been impressively demonstrated. We have the impression that recently many researchers have become aware of this need and agree on the importance of these issues. Questions we wish to address: ----------------------------- 1. What is clustering? How can it be defined and how can we sort the different types of clustering and their goals? In particular: * Is the main purpose to use the partition to discover new features in the data?
* Or the other way around, is the main purpose to simplify our data by building groups, thus getting rid of unimportant information? * Is clustering just data compression? * Is clustering just estimating modes of a density? * Is clustering related to human perception? * Can one come up with a meaningful taxonomy of clustering tasks? * Can we formulate the intuitive notion of "revealing hidden structure and properties"? 2. How should prior knowledge be encoded? As a pair-wise similarity/distance function over domain points? As a set of relevant features? Should data be embedded in some richer structure (Hilbert space, topology) ? 3. Is there a principled way to measure the quality of a clustering on particular data set? * Can every clustering task be expressed as an optimization of some explicit readily-computable associated objective cost function? * Can stability be considered a first principle for meaningful clustering? 4. Is there a principled way to measure the quality of a clustering algorithm? * Necessary conditions * Can we come up with sufficient conditions for reasonable clustering? * Stability conditions * Richness conditions * What type of performance guarantees can one hope to provide? 5. What are principled and meaningful ways of measuring the similarity (or degree of agreement) between different clusterings? 6. Can one distinguish "clusterable" data from "structureless" data? 7. What are the tools we should try to import from other areas such as classification prediction, density estimation, data compression, computational geometry, other relevant areas? Contributions: -------------- We invite presentations addressing one or several of the questions raised above. To keep the workshop lively we intend to keep the individual presentations short, at most 15 minutes. We welcome presentations about work in all "stages of completion", ranging from completed work over work in progress to discussing potential directions of future research. In particular we encourage position papers. We would like to stress that the focus of this workshop is on *foundations* of clustering. We are not interested in contributions about "yet another ad-hoc clustering algorithm". Please submit an extended abstract (at most two pages) summarizing your potential contribution to clustering_workshop at ipsi.fraunhofer.de. *** The deadline is the 30th October 2005. *** The organizers will review all submissions. You will be notified by November 11 whether your contribution is accepted. The final program of the workshop will be posted on the workshop webpage. -- ------------------------------------------------------ Dr. 
Ulrike von Luxburg Data Mining Group, Fraunhofer IPSI Dolivostrasse 15, 64293 Darmstadt, Germany Phone: +49 6151 869-844 and +49-170-9669432 Fax: +49 6151 869-989 http://www.ipsi.fraunhofer.de/~ule ------------------------------------------------------ From gtesauro at us.ibm.com Tue Oct 4 11:08:11 2005 From: gtesauro at us.ibm.com (Gerry Tesauro) Date: Tue, 4 Oct 2005 11:08:11 -0400 Subject: Connectionists: Fw: NIPS Workshop CFP - Value of Information in Inference, Learning and Decision-Making Message-ID: *********************************************************** Call For Papers Value of Information in Inference, Learning and Decision-Making Workshop held at the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005) Whistler, CANADA: December 10, 2005 *********************************************************** Overview and Goals =================== A common fundamental problem of value of information (VOI) analysis arises in inference, learning and sequential decision-making when one is allowed to actively select, rather than passively observe, the input information. VOI provides a principled methodology that enables acquiring information in a way that optimally trades off the cost of information gathering with the expected benefit in some overall objective (e.g., classification accuracy or cumulative reward). For example, in Bayesian problem diagnosis VOI analysis aims at selecting observations (e.g., medical tests) that are most informative about the unknown variables (e.g., diseases we are trying to diagnose) while minimizing the cost of collecting the information. In sequential decision-making problems, VOI can provide a principled solution to the well-known "exploration versus exploitation" dilemma, so that one can optimally trade off the immediate cost of exploratory actions with expected improvement in future decisions and future reward. Yet another example is active learning, where the goal is to minimize the cost of observations (e.g., the number of labeled samples) while maximizing the learner's objective function. Finally, selecting the most relevant subset of features in supervised learning is another example where VOI analysis can provide a principled solution. Clearly, these areas differ in their choices of a particular objective function and the approaches to active exploration, but have a common goal of selecting explorative actions that maximize the VOI. In this workshop, we plan to bring together researchers from several fields concerned with VOI analysis and hope to ignite cross-fertilization between the areas. This could lead to major theoretical progress as well as practical impact in applications such as medical diagnosis, quality control in product design, IT systems management and troubleshooting, and DNA library screening, just to name a few. Suggested Topics ================= The list of possible topics includes (but is not limited to) the following: * VOI analysis in probabilistic inference and decision theory * feature selection and attribute-efficient learning * active learning (query learning, selective sampling) * exploration-exploitation trade-off in reinforcement learning * adaptive versus non-adaptive testing designs * comparison of different action selection criteria and objective functions * applications of VOI in diagnosis, systems control and management, coding theory, computational biology, neural coding, etc. 
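As a concrete illustration of the trade-off described in the overview above, the following minimal sketch computes the myopic value of information of a single noisy test in a two-hypothesis, two-action decision problem. The prior, test characteristics, utilities and cost are made-up numbers chosen only for illustration; nothing here is prescribed by the workshop.

# Myopic VOI for one noisy test: compare the expected utility of acting now
# with the expected utility of first observing the test and then acting on
# the posterior. All numbers are illustrative assumptions.
p_disease = 0.3                       # prior P(disease)
utility = {                           # utility[(action, disease_present)]
    ("treat", True): 90, ("treat", False): 60,
    ("wait", True): 0, ("wait", False): 100,
}
sensitivity, specificity = 0.9, 0.8   # P(+ | disease), P(- | no disease)
test_cost = 2.0

def expected_utility(action, p):
    return p * utility[(action, True)] + (1 - p) * utility[(action, False)]

def best_expected_utility(p):
    return max(expected_utility(a, p) for a in ("treat", "wait"))

eu_act_now = best_expected_utility(p_disease)

# Expected utility when the test outcome is observed before acting.
p_pos = sensitivity * p_disease + (1 - specificity) * (1 - p_disease)
posterior_pos = sensitivity * p_disease / p_pos
posterior_neg = (1 - sensitivity) * p_disease / (1 - p_pos)
eu_after_test = (p_pos * best_expected_utility(posterior_pos)
                 + (1 - p_pos) * best_expected_utility(posterior_neg))

voi = eu_after_test - eu_act_now
print("VOI = %.2f; observe the test only if VOI exceeds its cost (%.2f)" % (voi, test_cost))

The same greedy computation, repeated for whichever unobserved variable currently offers the largest VOI minus cost, is one common baseline against which richer approaches to the topics above can be compared.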
Format ======= This is a one-day workshop following the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005). There will be several invited talks and tutorials (roughly 30-40 minutes each) and shorter contributed talks from researchers in industry and academia, as well as a panel discussion. We will hold a poster session if we receive a sufficient number of good submissions. The workshop is intended to be accessible to the broader NIPS community and to encourage communication between different fields. Submission Instructions ======================== We invite submissions of extended abstracts (up to 2 pages, not including bibliography) for the short contributed talks and/or posters. The submission should present a high-level description of recent or ongoing work related to the topics above. We will explore the possibility of publishing papers based on invited and submitted talks in a special issue of an appropriate journal. Email submissions to nips05workshop at watson.ibm.com as attachments in Postscript or PDF, no later than October 24, 2005. Information ============ Workshop URL: www.research.ibm.com/nips05workshop/ Submission: nips05workshop at watson.ibm.com NIPS: http://www.nips.cc Dates & Deadlines ================== October 24: Abstract Submission October 31: Acceptance Notification Organizing Committee ===================== Dr. Alina Beygelzimer IBM T. J. Watson Research Lab, USA Dr. Rajarshi Das IBM T. J. Watson Research Lab, USA Dr. Irina Rish (primary contact) IBM T. J. Watson Research Lab, USA Dr. Gerry Tesauro IBM T. J. Watson Research Lab, USA Invited Speakers ================= Prof. Craig Boutilier University of Toronto Prof. Sanjoy Dasgupta University of California, San Diego Prof. Carlos Guestrin Carnegie Mellon University Prof. Michael Littman Rutgers University Prof. Dale Schuurmans University of Alberta ***************************************************************** From joshi at igi.tugraz.at Tue Oct 4 04:27:47 2005 From: joshi at igi.tugraz.at (Prashant Joshi) Date: Tue, 04 Oct 2005 10:27:47 +0200 Subject: Connectionists: Opening for a scientific programmer Message-ID: <43423D03.7020000@igi.tugraz.at> Hello Everyone! Position for a Scientific Programmer ______________________________ We have at our Institute in Graz, Austria an opening for a scientific programmer, who will develop efficient software for the parallel simulation of large neural circuits, and for carrying out learning experiments with such circuits. This software will replace and extend earlier software described on http://www.lsm.tugraz.at/ This work will be carried out in collaboration with Dr. Thomas Natschlaeger, in the framework of the 4-year EU Research project FACETS http://www.kip.uni-heidelberg.de/facets/public/ The goal of this fairly large research project is the development of detailed large scale models of neural circuits and areas, whose properties will be explored through simulations on a Blue Gene Supercomputer and on new special-purpose hardware. The salary for this position will be competitive. We are looking for a programmer who has the abstraction capability to design interfaces between different software components, and the skill and dedication needed for writing software that runs efficiently. We also expect experience in software development in C++ and Linux, as well as experience in writing parallel software (multithreading and interprocess communication). 
Interest or knowledge in computational neuroscience and/or machine learning would be helpful (in the case of scientific interest in these areas, a simultaneous participation in our Phd-Program is possible). Send your application by October 10 to maass at igi.tugraz.at Thanks and regards, Prashant Joshi -- QOTD: "It wouldn't have been anything, even if it were gonna be a thing." ****************************************************** * Prashant Joshi * Institute for Theoretical Computer Science (IGI) * Technische Universitaet Graz * Inffeldgasse 16b, A-8010 Graz, Austria -------------------------------------------- * joshi at igi.tu-graz.ac.at * http://www.igi.tugraz.at/joshi * Tel: + 43-316-873-5849 ****************************************************** From juergen at idsia.ch Tue Oct 4 09:20:17 2005 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Tue, 4 Oct 2005 15:20:17 +0200 Subject: Connectionists: Feedback nets for control, prediction, classification Message-ID: <4ddaedb15d48b996368aa8c962b1b3f0@idsia.ch> 8 new papers on evolution-based & gradient-based recurrent neural nets, with Alex Graves, Daan Wierstra, Matteo Gagliolo, Santiago Fernandez, Nicole Beringer, Faustino Gomez, in Neural Networks, IJCAI 2005, ICANN 2005, GECCO 2005 (with a best paper award): EVolution of recurrent systems with Optimal LINear Output (EVOLINO). Basic idea: Evolve an RNN population; to get some RNN's fitness DO: Feed the training sequences into the RNN. This yields sequences of hidden unit activations. Compute an optimal linear mapping from hidden to target trajectories. The fitness of the recurrent hidden units is the RNN performance on a validation set, given this mapping. Evolino-based LSTM nets learn to solve several previously unlearnable time series prediction tasks, and form a basis for the first recurrent support vector machines: http://www.idsia.ch/~juergen/evolino.html COEVOLVING RECURRENT NEURONS LEARN TO CONTROL FAST WEIGHTS. For example, 3 co-evolving RNNs compute quickly changing weight values for 3 fast weight networks steering the 3 wheels of a mobile robot in a confined space in a realistic 3D physics simulation. Without a teacher it learns to balance two poles with a joint: http://www.idsia.ch/~juergen/rnnevolution.html EVOLUTION MAIN PAGE (links to work since 1987): http://www.idsia.ch/~juergen/evolution.html NEW RESULTS on bidirectional gradient-based RNNs for phoneme recognition etc: http://www.idsia.ch/~juergen/rnn.html Juergen Schmidhuber TUM & IDSIA From mjhealy at ece.unm.edu Mon Oct 3 21:19:43 2005 From: mjhealy at ece.unm.edu (mjhealy@ece.unm.edu) Date: Mon, 3 Oct 2005 19:19:43 -0600 (MDT) Subject: Connectionists: Experimental application of categorical semantic theory for neural networks Message-ID: Tom Caudell and I have a paper in the Proceedings of the International Joint Conference on Neural Networks (IJCNN05), Montreal, 2005, published by the IEEE Press. The subject is an experiment demonstrating improved performance with a modification to a standard artificial neural architecture based on our category-theoretic semantic model. This was joint research with Sandia National Laboratories on the generation of multispectral images from satellite data. We believe this is the first application of category theory directly in an engineering application (while at Boeing, another colleague and I had demonstrated its application to the synthesis of engineering software). 
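As a rough illustration of the Evolino recipe described in the announcement above (evolve the recurrent weights, fit the linear readout by least squares, score the frozen readout on validation data), here is a minimal sketch; the rnn.run interface, the function name and the mean-squared-error score are placeholder assumptions for illustration, not the authors' code.

import numpy as np

def evolino_fitness(rnn, train_seqs, train_targets, val_seqs, val_targets):
    # 1. Feed the training sequences into the candidate RNN and collect the
    #    hidden-unit activations; rnn.run(seq) is assumed to return a
    #    (timesteps x n_hidden) array for one input sequence.
    H = np.vstack([rnn.run(seq) for seq in train_seqs])
    Y = np.vstack(train_targets)

    # 2. Optimal linear mapping from hidden activations to target
    #    trajectories, computed in closed form by least squares.
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)

    # 3. Fitness of the recurrent units = performance of the fixed readout
    #    on a held-out validation set (negated error, so higher is better).
    H_val = np.vstack([rnn.run(seq) for seq in val_seqs])
    Y_val = np.vstack(val_targets)
    return -float(np.mean((H_val @ W - Y_val) ** 2))

An outer evolutionary loop would rank a population of candidate RNNs by this fitness and breed the best of them; because the output weights are obtained analytically, evolution only has to search the space of recurrent weights.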
Another feature of this work is that it relates an abstract, categorical structure to a neural network parameter; this is not dealt with in detail in the Proceedings paper but will be in a full paper to be submitted. Tom Caudell's web page has a link to the Proceedings paper at http://www.eece.unm.edu/faculty/tpc/ The semantic theory is described in our Technical Report EECE-TR-04-020 on the UNM Dspace Repository, at https://repository.unm.edu/handle/1928/33 Regards, Mike Healy mjhealy at ece.unm.edu From mark at paskin.org Thu Oct 6 12:43:32 2005 From: mark at paskin.org (Mark A. Paskin) Date: Thu, 6 Oct 2005 09:43:32 -0700 Subject: Connectionists: CFP: NIPS 2005 Workshop: Intelligence Beyond the Desktop Message-ID: (Apologies for multiple postings.) ################################################################ CALL FOR PARTICIPATION Intelligence Beyond the Desktop a workshop at the 2005 Neural Information Processing Systems (NIPS) Conference Submission deadline: Friday, October 14, 2005 http://ai.stanford.edu/~paskin/ibd05/ ################################################################ OVERVIEW We are now well past the era of the desktop computer. Trends towards miniaturization, wireless communication, and increased sensing and control capabilities have led to a variety of systems that distribute computation, sensing, and controls across multiple devices. Examples include wireless sensor networks, multi-robot systems, networks of smartphones, and large area networks. Machine learning problems in these non-traditional settings cannot faithfully be viewed in terms of a data set and an objective function to optimize; physical aspects of the system impose challenging new constraints. Resources for computation and actuation may be limited and distributed across many nodes, requiring significant coordination; limited communication resources can make this coordination expensive. The scale and complexity of these systems often leads to large amounts of structured data that make state estimation challenging. In addition, these systems often have other constraints, such as limited power, or under-actuation, requiring reasoning about the system itself during learning and control. Furthermore, large-scale distributed systems are often unreliable, requiring algorithms that are robust to failures and lossy communication. New learning, inference, and control algorithms that address these challenges are required. This workshop aims to bring together researchers to discuss new applications of machine learning in these systems, the challenges that arise, and emerging solutions. FORMAT This one-day workshop will consist of invited talks and talks based upon submitted abstracts, with some time set aside for discussion. Our (tentative) invited speakers are: * Dieter Fox (University of Washington) * Leonidas Guibas (Stanford University) * Sebastian Thrun (Stanford University), who will speak about the machine learning algorithms used in Stanley, Stanford's entry into the DARPA Grand Challenge. CALL FOR PARTICIPATION Researchers working at the interface between machine learning and non-traditional computer architectures are invited to submit descriptions of their research for presentation at the workshop. Of particular relevance is research on the following topics: * distributed sensing, computation, and/or control * coordination * robustness * learning/inference/control under resource constraints (power, computation, time, etc.)
* introspective machine learning (reasoning about the system architecture in the context of learning/inference/control) We especially encourage submissions that address unique challenges posed by non-traditional architectures for computation, such as * wireless sensor networks * multi-robot systems * large-area networks Submissions should be extended abstracts in PDF format which are no longer than three (3) pages in 10pt or larger font. Submissions may be e-mailed to ibd-2005 at cs.cmu.edu with the subject "IBD SUBMISSION". We plan to accept four to six submissions for 25-minute presentation slots. In your submission please indicate if you would present a poster of your work (in case there are more qualified submissions than speaking slots). Call for participation: Wednesday, August 31, 2005 Submission deadline: Friday, October 14, 2005 11:59 PM PST Acceptance notification: Tuesday, November 1, 2005 Workshop: Friday, December 9, 2005 Organizers * Carlos Guestrin (http://www.cs.cmu.edu/~guestrin/) * Mark Paskin (http://paskin.org) Please direct any inquiries regarding the workshop to ibd-2005 at cs.cmu.edu. From esann at dice.ucl.ac.be Wed Oct 5 06:34:21 2005 From: esann at dice.ucl.ac.be (esann) Date: Wed, 5 Oct 2005 12:34:21 +0200 Subject: Connectionists: special sessions at ESANN'2006 European Symposium on Artificial Neural Networks Message-ID: <007f01c5c998$594986d0$43ed6882@maxwell.local> ESANN'2006 14th European Symposium on Artificial Neural Networks Advances in Computational Intelligence and Learning Bruges (Belgium) - April 26-27-28, 2006 Special sessions ===================================================== The following message contains a summary of all special sessions that will be organized during the ESANN'2006 conference. Authors are invited to submit their contributions to one of these sessions or to a regular session, according to the guidelines found on the web pages of the conference http://www.dice.ucl.ac.be/esann/. According to our policy to reduce the number of unsolicited e-mails, we gathered all special session descriptions in a single message, and try to avoid sending it to overlapping distribution lists. We apologize if you receive multiple copies of this e-mail despite our precautions. Special sessions that will be organized during the ESANN'2006 conference ======================================================================== 1. Semi-blind approaches for Blind Source Separation (BSS) and Independent Component Analysis (ICA) M. Babaie-Zadeh, Sharif Univ. Tech. (Iran), C. Jutten, CNRS - Univ. J. Fourier - INPG (France) 2. Visualization methods for data mining F. Rossi, INRIA Rocquencourt (France) 3. Neural Networks and Machine Learning in Bioinformatics - Theory and Applications B. Hammer, Clausthal Univ. Tech. (Germany), S. Kaski, Helsinki Univ. Tech. (Finland), U. Seiffert, IPK Gatersleben (Germany), T. Villmann, Univ. Leipzig (Germany) 4. Online Learning in Cognitive Robotics J.J. Steil, Univ. Bielefeld, H. Wersing, Honda Research Institute Europe (Germany) 5. Man-Machine-Interfaces - Processing of nervous signals M. Bogdan, Univ. Tübingen (Germany) 6. Nonlinear dynamics N. Crook, T. olde Scheper, Oxford Brookes University (UK) Short descriptions ================== Semi-blind approaches for Blind Source Separation (BSS) and Independent Component Analysis (ICA) ----------------------------------------------------------------------- Organized by: - M. Babaie-Zadeh, Sharif Univ. Tech. (Iran) - C.
Jutten, CNRS - Univ. J. Fourier - INPG (France) In the original formulation of the Blind Source Separation (BSS) problem, it is usually assumed that there is no prior information about the source signals other than their statistical independence. The methods then try to separate the sources by transforming the observations into outputs that are as statistically independent as possible (ICA). A well-known result states that decorrelation (second-order independence) of the outputs is not sufficient for separating the sources. Another well-known result states that separating Gaussian sources using this approach is not possible. However, simple prior information about the source signals can lead to new methods whose simplicity and separation quality may be significantly improved (in terms of sample size, algorithm simplicity and speed, ability to separate a larger class of signals, etc.). For example, if we already know that the source signals are temporally correlated, it is possible to separate them by using second-order approaches: the algorithm based on second-order statistics is then simpler, and Gaussian sources can be separated. We call such an approach a "Semi-Blind" approach, because although it is not completely blind, the available prior information about the sources is very weak and remains true for a large class of sources. Some of the most famous priors for designing Semi-Blind approaches for BSS are: - Sparsity of the source signals: such a prior enables us to separate more sources than sensors, and even to drop the independence assumption. Hence, these approaches are usually called Sparse Component Analysis (SCA). - Temporal correlation of the source signals (colored sources) enables separation of Gaussian sources, using second-order approaches. - Non-stationarity of the source signals enables separation of Gaussian sources, using second-order approaches. - Bounded sources enable, for example, simple geometric approaches to be used. - Models for the source distribution (Markovian, etc.) can reduce the solution indeterminacies and improve separation performance. - Bayesian methods provide a general framework for handling priors. Of course, mixtures of priors, currently not very usual, could also be exploited and provide new algorithms. In this special session, we invite authors to submit papers illustrating the use of the above priors in BSS and ICA contexts. Visualization methods for data mining ------------------------------------- Organized by: - F. Rossi, INRIA Rocquencourt (France) In many situations, manual data exploration remains a mandatory preliminary step that provides insights into the studied problem and helps to solve it. It is also very important for reporting results of data mining tools in an exploitable way. While statistical summaries and simple linear methods give a rough analysis of a data set, sophisticated visualization methods allow human experts to discover information in an easier and more intuitive way. A very successful example of a neural-based visualization tool is given by Kohonen's Self-Organizing Maps used together with component planes, U-matrix, P-matrix, etc. This session aims at bringing together researchers interested in visualization methods used both as exploratory tools (before other data mining methods) and as explanatory tools (after other data mining methods).
Submissions are encouraged within (but not restricted to) the following areas: - non-linear projection - graph-based visualization - cluster visualization - visualization methods for supervised problems - visualization of non-vector data Neural Networks and Machine Learning in Bioinformatics - Theory and Applications ------------------------------------------------------------------- Organized by: - B. Hammer, Clausthal Univ. Tech. (Germany) - S. Kaski, Helsinki Univ. Tech. (Finland) - U. Seiffert, IPK Gatersleben (Germany) - T. Villmann, Univ. Leipzig (Germany) Bioinformatics is a promising and innovative research field. Despite a high number of techniques specifically dedicated to bioinformatic problems as well as successful applications, we are at the beginning of a process of massively integrating the aspects and experiences of the different core subjects such as biology, medicine, computer science, engineering, chemistry, physics and mathematics. Within this rather wide area we focus on neural networks and machine learning related approaches in bioinformatics, with particular emphasis on integrative research against the background of the above-mentioned scope. In keeping with the high level and the aims of the hosting ESANN conference, we encourage authors to submit papers containing - New theoretical aspects - New methodologies - Innovative applications in the field of bioinformatics. A prospective but nonexclusive list of topics is: - Genomic Profiling - Pathways - Sequence analysis - Structured data - Time series analysis - Context related metrics in modelling and analysis - Visualization - Pattern recognition - Image processing - Clustering and Classification - ... Online Learning in Cognitive Robotics ------------------------------------- Organized by: - J.J. Steil, Univ. Bielefeld - H. Wersing, Honda Research Institute Europe (Germany) In hardware and software we currently observe technological breakthroughs towards cognitive agents, which will soon incorporate a mixture of miniaturized sensors, cameras, multi-DOF robots, and large data storage, together with sophisticated artificial cognitive functions. Such technologies might culminate in the widespread application of humanoid robots for entertainment and house-care, in health-care assistant systems, or advanced human-computer interfaces for multi-modal navigation in high-dimensional data spaces. Making such technologies easily accessible for everyday use is essential for their acceptance by users and customers. At all levels of such systems, learning will be an essential ingredient for meeting the challenges in engineering, system development, and system integration, and neural network methods are of crucial importance in this arena. Cognitive robots are meant to behave in the real world and to interact smoothly with their users and the environment. While off-line learning is well established for implementing basic modules of such systems, and many learning methods work well in toy domains, in concrete scenarios on-line adaptivity is necessary in many respects: in order to cope with the inevitable uncertainties of the real world and the limited predictability of the interaction structure, and to acquire new behavior and enhance preprogrammed behavior. Online learning is also the main methodological ingredient in the developmental approach to intelligent robotics, which aims at incrementally progressing from simple to more and more complex behavior. The current session will focus exclusively on the more difficult field of online learning in real systems with real data.
Provided their systems meet these constraints, authors are invited to submit contributions for all kinds of cognitive robotics, for instance - cognitive vision (e.g. visual object learning, acquisition of visual memory, adaptive scene analysis) - localization and map building in mobile robots - online trajectory learning and acquisition - adaptive control of multi-DOF robots - learning in behavioral architectures - learning by demonstration and imitation Man-Machine-Interfaces - Processing of nervous signals ------------------------------------------------------ Organized by: - M. Bogdan, Univ. Tübingen (Germany) Recently, Man-Machine-Interfaces that contact the nervous system in order to extract information from it, or to introduce information into it, have been gaining more and more importance. In order to establish systems like neural prostheses or Brain-Computer-Interfaces, powerful (real-time) algorithms for processing nerve signals or their field potentials are required. Another important point is the introduction of information into nervous systems by means such as functional electrical stimulation (FES). Topics of this session include, but are not limited to, NN-based algorithms and applications for - Neural Prostheses - Brain-Computer-Interfaces - Multi Neuron Recordings - Multi Electrode Arrays - Functional Electrical Stimulation - Population Coding - Spike Sorting - ... Nonlinear dynamics ------------------ Organized by: - N. Crook, T. olde Scheper, Oxford Brookes University (UK) The field of nonlinear dynamics has been a useful ally in the study of artificial neural networks (ANNs) in recent years. Investigations into the stability of recurrent networks, for example, have helped to define the characteristics of weight matrices which guarantee stable solutions. Similar studies have assessed the stability of Hopfield networks with distributed delays. However, some have suggested that nonlinear dynamics should play a more central role in models of neural information processing. Observations of the presence of chaotic dynamics in the firing patterns of natural neuronal systems have added some support to this suggestion. A range of models have been proposed in the literature that place nonlinear dynamics at the heart of neural information processing. Some of these use chaos as a basis for neural itinerancy, a process involving deterministic search through memory states. Others use the bifurcating properties of specific chaotic systems as a means of switching between states. This session will open with a tutorial paper outlining these different approaches. The session will also include paper contributions by some leading authors in the field.
Mommaerts - B-1140 Evere - Belgium tel: + 32 2 730 06 11 - fax: + 32 2 730 06 00 mailto:esann at dice.ucl.ac.be ======================================================== From juergen at idsia.ch Thu Oct 6 04:46:46 2005 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Thu, 6 Oct 2005 10:46:46 +0200 Subject: Connectionists: Recurrent Support Vector Machines Message-ID: <078ef8ff985c9636db783be9596a124b@idsia.ch> Several people pointed out that the pdf of one of the recently announced papers on sequence learning with adaptive internal states was missing. Apologies! The TR on recurrent SVMs (with Gagliolo & Wierstra & Gomez) is now available at http://www.idsia.ch/~juergen/evolino.html The other 8 new publications can be found at http://www.idsia.ch/~juergen/rnn.html -JS From d.polani at herts.ac.uk Wed Oct 5 07:19:30 2005 From: d.polani at herts.ac.uk (Daniel Polani) Date: Wed, 5 Oct 2005 13:19:30 +0200 Subject: Connectionists: Research (PhD) studentships available (Artificial Life/Bio-Inspired Robotics) Message-ID: <17219.46786.843674.755004@perm.feis.herts.ac.uk> ######################################################################## RESEARCH (PhD) STUDENTSHIPS AVAILABLE Adaptive Systems Research Group http://adapsys.feis.herts.ac.uk/ University of Hertfordshire http://perseus.herts.ac.uk/ Contact: Dr. Daniel Polani, Principal Lecturer d.polani at herts.ac.uk ######################################################################## PhD Studentships are available at the Adaptive Systems Research Group in the areas (among others) of Evolution of Sensors (Theory and Robotics), Principles in Perception-Action Loops of Artificial and Biological Systems, Bio-inspired Learning Methods for Complex Agent Systems. All these fields can be considered subareas of Artificial Life which is a central interest of our group. We use robotics and mathematical models to create analytical, predictive and constructive models of biologically relevant scenarios. The idea is to understand how biological systems manage to ``climb'' the enormously intransparent complexity and intelligence obstacle to achieve the impressive variety of capabilities that is found in living systems. For this purpose, we develop mathematical, simulation and robotic models of these systems. The goal is, on the one hand, to understand biology, but, on the other hand, also to use this understanding to discover novel ``out-of-the-box'' principles for building artificial systems. _________________________________________________________________ EVOLUTION OF SENSORS (ROBOTICS AND THEORY) In recent years, the study of the evolution of sensors in living beings and in artificial systems has led to surprising insights into some of the driving forces of evolution. There is increasing evidence that the ``discovery'' of novel sensors by evolution contributes very significantly to selective pressure acting on living beings and may be one of the main sources of complexification and diversification during evolution. On the other hand, there are indications that sensor evolution itself is driven by the selection pressures resulting from given embodiments and informatory-ecological niches. The research of the last years allowed to identify the sources for this pressure in a quantitative way. 
A project in this direction will further develop theory and/or robotic models of sensor evolution and of how environmental information can be tapped; thus it will contribute to unraveling one of the central mysteries of how the significant selection pressures produced by evolution can emerge and possibly be used for artificial systems. _________________________________________________________________ PRINCIPLES IN PERCEPTION-ACTION LOOPS OF ARTIFICIAL AND BIOLOGICAL SYSTEMS Living or artificial agents all share the property of acquiring environmental information (perception), processing this information and then acting upon it. Based on this minimal agent model, our group's recent research has found quantitative ways to derive a significant number of mechanisms from information-theoretic principles whereby agents can achieve increasingly more sophisticated models of their environment. Unlike learning models from classical Artificial Intelligence, the resulting agent controls are not easily human-readable; they also differ from artificial neural networks, where significant aspects of the architecture are pre-designed by humans. In fact, it turns out that the encodings discovered by the agents have qualitative similarities to encodings applied in biological neurons. Similar simple principles can be used in robots to recover a wider variety of phenomena observed in biological agents from common ground. This is a powerful indication that biology and evolution may reuse the same (or similar) principles to create the wide variety of capabilities that we find in nature -- and it provides a methodology to recreate these capabilities in artificial systems. A project in this direction will apply quantitative, principled methods to discover algorithms that will enable robots and robotic models to exhibit sensorimotor phenomena similar to those of living beings. _________________________________________________________________ BIO-INSPIRED LEARNING METHODS FOR COMPLEX AGENT SYSTEMS A further research area is the use of biologically inspired learning principles to allow complex agent systems to learn. Complex agent systems can be individual complex agents or larger agent groups (swarms) that, thanks to the size of their group, display complex emergent and/or self-organizing behaviour or require intricate coordination because of their task. Examples of this are ant colonies or RoboCup scenarios (robotic soccer). Understanding and managing the learning problem for such agent teams or complex agents is a very active research field, and our information-theoretic methods (as mentioned in the previous sections) provide a novel, powerful and principled approach to it.
_________________________________________________________________ APPLICANTS Interested candidates will have a very strong programming background, very strong mathematical/analytical skills (e.g. due to a Computer Science/Mathematics/Physics degree) and a keen interest in interdisciplinary research, combining biological evidence with theoretical models and/or implementing them on simulated or real robotic systems. Applicants are urged to apply as soon as possible. _________________________________________________________________ ABOUT THE GROUP The PhD studentships offer the opportunity to work within the Adaptive Systems Research Group, a proactive and dynamic research team with an excellent international research profile at the University of Hertfordshire, located not far from London, between London and Cambridge. The group was founded and is co-organized by Prof. Kerstin Dautenhahn and Prof. Chrystopher Nehaniv. Other core faculty members of the group include Dr. Daniel Polani, Dr. Rene te Boekhorst, and Dr. Lola Canamero. Current projects within the Adaptive Systems Research Group are funded by FP6-IST, EPSRC, the Wellcome Trust and the British Academy. The group is currently involved in the following FP6 European projects: Euron-II, Humaine (both European Networks of Excellence), Cogniron and RobotCub (Integrated Projects). We hosted the AISB'05 convention Social Intelligence and Interaction in Biological and Artificial Systems, which attracted an international audience of 300 participants. Research in the group is highly interdisciplinary and strongly biologically inspired, but also has a strong theoretical foundation. Adaptive Systems are computational, software, robotic, or biological systems that are able to deal with and 'survive' in a dynamically changing environment. We pursue a bottom-up approach to Artificial Intelligence that emphasizes the embodied and situated nature of biological or artificial systems that have evolved and are adapted to a particular environmental context. The Adaptive Systems Research Group has excellent research facilities for research staff, including numerous robotic platforms, covering the spectrum from miniature Khepera robots and dog-like AIBO robots to human-sized robots (PeopleBots), as well as humanoid robots developed in our group. We are dedicated to excellence in research, and, while providing a collaborative and supportive working environment, expect PhD students to show great enthusiasm and determination for their work. Candidates need to provide evidence of excellent research potential that can lead to significant contributions to knowledge as part of a PhD thesis. Successful candidates may be eligible for a research studentship award from the University in some of these areas (equivalent to about a £9000 per annum bursary plus the payment of the standard UK student fees). Self-funded students might also consider pursuing other research topics that senior academic members of the Adaptive Systems Research Group are active in; please consult http://adapsys.feis.herts.ac.uk/ for more information. Other areas in our research group are advertised at: http://homepages.feis.herts.ac.uk/~comqkd/AS-Studentshipadvert.html For an application form, please contact: Mrs Lorraine Nicholls, Research Student Administrator, STRI, Faculty of Engineering and Information Sciences, University of Hertfordshire, College Lane, Hatfield, Herts, AL10 9AB United Kingdom Tel: +44 (0) 1707 286083 Fax: +44 (0) 1707 284185 or email: L.Nicholls at herts.ac.uk.
######################################################################## Pdf and HTML versions of this document can be found on http://homepages.feis.herts.ac.uk/~comqdp1/Studentships/SE_2005.pdf http://homepages.feis.herts.ac.uk/~comqdp1/Studentships/SE_2005/SE_2005.html ######################################################################## From sutton at cs.ualberta.ca Wed Oct 5 21:06:09 2005 From: sutton at cs.ualberta.ca (Rich Sutton) Date: Wed, 5 Oct 2005 19:06:09 -0600 Subject: Connectionists: academic job opening, University of Alberta Message-ID: The Department of Computing Science at the University of Alberta is seeking a qualified individual to fill a position at the level of assistant or associate professor in the area of artificial intelligence (www.cs.ualberta.ca). This is a soft-funded tenure track position. The initial appointment will be for three years, and continuation is subject to availability of funding. The successful candidate will be working with the Alberta Ingenuity Centre for Machine Learning (www.aicml.ca). Candidates should have a Ph.D. in Computing Science or equivalent, with specialization in artificial intelligence. Preference will be given to applicants with knowledge and experience in machine learning, with an emphasis on reinforcement learning. The candidate is expected to establish their own research program, supervise graduate students, and teach at both the graduate and undergraduate level. The Department highly values curiosity-driven research. Strong communication skills, project management, inter-personal skills, and team leadership are important qualities. The Department is well known for its collegial atmosphere, dynamic and well-funded research environment, and superb teaching infrastructure. Its faculty are internationally recognized in many areas of computing science, and enjoy collaborative research partnerships with local, national, and international industries. The University of Alberta, located in the provincial capital of Edmonton, is one of Canada's largest and finest teaching and research institutions, with a strong commitment to undergraduate teaching, community involvement, and research excellence. As a population center of 1,000,000, Edmonton offers a high-quality, affordable lifestyle that includes a wide range of cultural events and activities, in a natural setting close to the Canadian Rockies. Alberta's innovative funding initiatives for supporting and sustaining leading-edge IT research have attracted world-class researchers and outstanding graduate students to our Department and to the campus. Further information about the Department and University can be found at www.cs.ualberta.ca. The competition will remain open until a suitable candidate is found. Candidates should submit a curriculum vitae, a one-page summary of research plans, a statement of teaching interests and reprints of their three most significant publications electronically to everitt at cs.ualberta.ca or by mail to: Iris Everitt, Administrative Assistant Department of Computing Science University of Alberta Edmonton, Alberta, Canada T6G 2E8 All qualified candidates are encouraged to apply, however, Canadian and permanent residents will be given priority. The University of Alberta hires on the basis of merit. We are committed to the principle of equity of employment. We welcome diversity and encourage applications from all qualified women and men, including persons with disabilities, members of visible minorities, and Aboriginal persons. 
From tino at jp.honda-ri.com Thu Oct 6 01:56:29 2005 From: tino at jp.honda-ri.com (Tino Lourens) Date: Thu, 06 Oct 2005 14:56:29 +0900 Subject: Connectionists: TiViPE 2.0.0 now available on Linux, Mac, and Windows: Spend more time on research and less on coding Message-ID: <4344BC8D.1000804@jp.honda-ri.com> Dear all, Visual programming environment TiViPE version 2.0.0 (Sep 2005) is available on all major platforms (Linux, Mac OSX, and Windows). If you would like to program faster, need a rapid prototyping tool, would like to modify a program while it is active, or would like to integrate code (C, C++, Fortran, or Java) with minimal additional effort, then TiViPE might be the appropriate solution. Actually, I wanted a program that could wrap any routine into a graphical environment without programming and with little additional effort, since such a migration process is essential for users moving from textual programming to graphical programming. In this respect TiViPE differs substantially from, for instance, AVS or Khoros. * TiViPE already includes a considerable number of useful icons for image processing and includes most of my research on early vision (center-surround, simple, complex, and end-stopped cells, etc.) and graph matching. * TiViPE can be downloaded from: http://www.dei.brain.riken.jp/~emilia/Collaboration/Tino/TiViPE/index.html On that website, you will find more information about TiViPE, as well as an optional library that extends the environment with 1. networking support through socket communication 2. merging support to compile a graphical program to a single icon and executable. TiViPE modules provided by other users, available online: * Relaxation Phase Labeling http://home.arcor.de/winni9/rpl.html Winfried Fellenz The next release of TiViPE will have * Language support: in addition to English, Japanese ("Nihongo") language support will be provided. * Automatic inclusion of a tree of modules from other users Publication (a citation in your publication is appreciated): T. Lourens. TiViPE --Tino's Visual Programming Environment. The 28th Annual International Computer Software & Applications Conference, IEEE COMPSAC 2004, pages 10-15, 2004. With kind regards, Tino Lourens -- Tino Lourens, Ph.D. Honda Research Institute Japan Co., Ltd. 8-1 Honcho, Wako-shi, Saitama, 351-0114, Japan Tel: +81-48-462-2121 (Ext.) 6806 mailto: tino at jp.honda-ri.com Fax: +81-48-462-5221 From B.Kappen at science.ru.nl Fri Oct 7 03:32:02 2005 From: B.Kappen at science.ru.nl (Bert Kappen) Date: Fri, 7 Oct 2005 09:32:02 +0200 (CEST) Subject: Connectionists: papers on stochastic optimal control available Message-ID: I would like to announce the following papers that have been accepted for publication. A linear theory for control of non-linear stochastic systems H.J. Kappen We address the role of noise and the issue of efficient computation in stochastic optimal control problems. We consider a class of non-linear control problems that can be formulated as a path integral and where the noise plays the role of temperature. The path integral displays symmetry breaking, and there exists a critical noise value that separates regimes where optimal control yields qualitatively different solutions. The path integral can be computed efficiently by Monte Carlo integration or by Laplace approximation, and can therefore be used to solve high dimensional stochastic control problems. To appear in Physical Review Letters.
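For readers less familiar with this class of problems, here is a compact sketch of the standard path-integral control setting that the abstract refers to, written in generic notation chosen for this summary rather than copied from the paper; the assumptions below (control-affine dynamics, quadratic control cost, and noise covariance tied to the control cost) are the usual ones for this class and may differ in detail from the paper itself:

    dx = f(x,t)\,dt + u\,dt + d\xi, \qquad \langle d\xi\, d\xi^\top \rangle = \nu\, dt,
    C(x,t) = \Big\langle \phi(x(T)) + \int_t^T \big( \tfrac{1}{2}\, u^\top R\, u + V(x(\tau),\tau) \big)\, d\tau \Big\rangle .

If \nu = \lambda R^{-1}, the log transform J(x,t) = -\lambda \log \psi(x,t) of the optimal cost-to-go J makes the Hamilton-Jacobi-Bellman equation linear in \psi, with Feynman-Kac solution

    \psi(x,t) = \Big\langle \exp\Big( -\tfrac{1}{\lambda}\, \phi(x(T)) - \tfrac{1}{\lambda} \int_t^T V(x(\tau),\tau)\, d\tau \Big) \Big\rangle_{u=0},
    \qquad u^*(x,t) = -R^{-1} \nabla_x J = \nu\, \nabla_x \log \psi(x,t).

The expectation is over uncontrolled (u = 0) trajectories and can be estimated by Monte Carlo sampling, with \lambda playing the role of a temperature -- which is the sense in which the noise acts as a temperature in the abstract above.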
download: www.arxiv.org/physics/0411119 A longer version of the paper is Path integrals and symmetry breaking for optimal control theory H.J. Kappen To appear in Journal of Statistical Mechanics. download: www.arxiv.org/physics/0505066 Bert Kappen SNN Radboud University Nijmegen URL: www.snn.kun.nl/~bert The Netherlands tel: +31 24 3614241 fax: +31 24 3541435 B.Kappen at science.ru.nl From wahba at stat.wisc.edu Thu Oct 6 23:51:58 2005 From: wahba at stat.wisc.edu (Grace Wahba) Date: Thu, 6 Oct 2005 22:51:58 -0500 Subject: Connectionists: Robust Manifold Unfolding with Kernel Regularization Message-ID: <200510070351.j973pwlc006579@juno.stat.wisc.edu> Esteemed colleagues: Robust Manifold Unfolding with Kernel Regularization Fan Lu, Yi Lin and Grace Wahba TR1108 October, 2005 University of Wisconsin Madison Statistics Dept TR 1108. http://www.stat.wisc.edu/~wahba -> TRLIST or http://www.stat.wisc.edu/~wahba/ftp1/tr1108.pdf Abstract We describe a robust method to unfold a low-dimensional manifold embedded in high-dimensional Euclidean space based on only pairwise distance information (possibly noisy) from the sampled data on the manifold. Our method is derived as a special extension of the recently developed framework called Kernel Regularization ( http://www.pnas.org/cgi/content/full/102/35/12332 ), which was originally designed to extract information in the form of a positive definite kernel matrix from possibly crude, noisy, incomplete, or inconsistent dissimilarity information between pairs of objects. The special formulation is transformed into an optimization problem that can be solved globally and efficiently using modern convex cone programming techniques. The geometric interpretation of our method will be discussed. Relationships to other methods for this problem are noted. From deneve at isc.cnrs.fr Fri Oct 7 12:11:58 2005 From: deneve at isc.cnrs.fr (Sophie Deneve) Date: Fri, 07 Oct 2005 18:11:58 +0200 Subject: Connectionists: Postdoctoral position available - Theoretical Neuroscience Group in Paris. Message-ID: <5.1.1.6.0.20051007181107.0305a010@pop.isc.cnrs.fr> Two postdoctoral research positions are available in the newly created Theoretical Neuroscience Group at the Ecole Normale Supérieure, Paris, for a project funded by a Marie Curie Team of Excellence grant. The overall theme of the project is "Bayesian inference and neural dynamics", and the research will involve building and analyzing probabilistic treatments of representation, inference and learning in biophysical models of cortical neurons and circuits. To do so we will integrate two complementary computational neuroscience approaches. The first studies neurons and neural networks as biophysical entities. The second reinterprets cognitive and neural processes as Bayesian computations. The faculty of this group includes Misha Tsodyks, Boris Gutkin, Sophie Deneve and Rava Da Silvera. It is part of the Department of Cognitive Science at the Ecole Normale Supérieure, a unique institution bringing together major scientists in computational neuroscience, brain imaging, psychology, philosophy, and mathematics. We are situated in central Paris, within walking distance of top scientific research and educational institutions. We have numerous international collaborations with experimental groups, with the goal of understanding the neural basis of cognition. The positions are for two years' duration, with attractive salaries, including a mobility allowance if applicable. Generous travel support will be provided.
Candidates should have 1- A strong mathematical/biophysical background and a strong interest in neuroscience, or 2- A strong neuroscience background and a good basis in math and/or biophysics. 3- A demonstrable interest in experimental collaborations. 4- Good communication skills. Candidates should send a CV, a 1-page research project and the addresses of two referees to npg.lab at college-de-france.fr, before the 1st of November, 2005. For further information please contact Sophie Deneve (deneve at isc.cnrs.fr) or Boris Gutkin (bgutkin at pasteur.fr). From stefan.wermter at sunderland.ac.uk Fri Oct 7 13:08:33 2005 From: stefan.wermter at sunderland.ac.uk (Stefan Wermter) Date: Fri, 07 Oct 2005 18:08:33 +0100 Subject: Connectionists: book on biomimetic neural learning for intelligent robots Message-ID: <4346AB91.80502@sunderland.ac.uk> We are pleased to announce the new book Biomimetic Neural Learning for Intelligent Robots Stefan Wermter, Günther Palm, Mark Elshaw (Eds) 2005, Springer This book presents research performed as part of the EU project on biomimetic multimodal learning in a mirror neuron-based robot (MirrorBot) and contributions presented at the International AI-Workshop on NeuroBotics. The overall aim of the book is to present a broad spectrum of current research into biomimetic neural learning for intelligent autonomous robots. There seems to be a need for a new type of robot which is inspired by nature and so performs in a more flexible, learned manner than current robots. This new type of robot is motivated by recent theories and experiments in neuroscience indicating that a biological and neuroscience-oriented approach could lead to new life-like robotic systems. The book focuses on some of the research progress made in the MirrorBot project, which uses concepts from mirror neurons as a basis for the integration of vision, language and action. In this book we show the development of new techniques using cell assemblies, associative neural networks, and Hebbian-type learning in order to associate vision, language and motor concepts. We have developed biomimetic multimodal learning and language instruction in a robot to investigate the task of searching for objects. As well as the research performed in this area for the MirrorBot project, the second part of this book incorporates significant contributions from essential research in the field of biomimetic robotics. This second part of the book concentrates on the progress made in neuroscience-inspired robotic learning approaches (in short: Neuro-Botics). We hope that this book stimulates and encourages new research in this area. Further details can be found at http://www.his.sunderland.ac.uk/mirrorbot/mirrorbook.html and http://www.springeronline.com/sgw/cda/frontpage/0,11855,3-40109-22-55007983-0,00.html
Chapters
Towards Biomimetic Neural Learning for Intelligent Robots
Stefan Wermter, Günther Palm, Cornelius Weber and Mark Elshaw
The Intentional Attunement Hypothesis. The Mirror Neuron System and its Role in Interpersonal Relations
Vittorio Gallese
Sequence Detector Networks and Associative Learning of Grammatical Categories
Andreas Knoblauch and Friedemann Pulvermüller
A Distributed Model of Spatial Visual Attention
Julien Vitay, Nicolas Rougier and Frédéric Alexandre
A Hybrid Architecture using Cross-Correlation and Recurrent Neural Networks for Acoustic Tracking in Robots
John Murray, Harry Erwin and Stefan Wermter
Image Invariant Robot Navigation Based on Self Organising Neural Place Codes
Kaustubh Chokshi, Stefan Wermter, Christo Panchev and Kevin Burn
Detecting Sequences and Understanding Language with Neural Associative Memories and Cell Assemblies
Heiner Markert, Andreas Knoblauch and Günther Palm
Combining Visual Attention, Object Recognition and Associative Information Processing in a NeuroBotic System
Rebecca Fay, Ulrich Kaufmann, Andreas Knoblauch, Heiner Markert and Günther Palm
Towards Word Semantics from Multi-modal Acoustico-Motor Integration: Application of the Bijama Model to the Setting of Action-Dependant Phonetic Representations
Olivier Ménard, Frédéric Alexandre and Hervé Frezza-Buet
Grounding Neural Robot Language in Action
Stefan Wermter, Cornelius Weber, Mark Elshaw, Vittorio Gallese and Friedemann Pulvermüller
A Spiking Neural Network Model of Multi-Modal Language Processing of Robot Instructions
Christo Panchev
A Virtual Reality Platform for Modeling Cognitive Development
Hector Jasso and Jochen Triesch
Learning to Interpret Pointing Gestures: Experiments with Four-Legged Autonomous Robots
Verena Hafner and Frédéric Kaplan
Reinforcement Learning Using a Grid Based Function Approximator
Alexander Sung, Artur Merke and Martin Riedmiller
Spatial Representation and Navigation in a Bio-inspired Robot
Denis Sheynikhovich, Ricardo Chavarriaga, Thomas Strösslin and Wulfram Gerstner
Representations for a Complex World: Combining Distributed and Localist Representations for Learning and Planning
Joscha Bach
MaximumOne: an Anthropomorphic Arm with Bio-Inspired Control System
Michele Folgheraiter and Giuseppina Gini
LARP, Biped Robotics Conceived as Human Modelling
Umberto Scarfogliero, Michele Folgheraiter and Giuseppina Gini
Novelty and Habituation: The Driving Force in Early Stage Learning for Developmental Robotics
Qinggang Meng and Mark Lee
Modular Learning Schemes for Visual Robot Control
Gilles Hermann, Patrice Wira and Jean-Philippe Urban
Neural Robot Detection in RoboCup
Gerd Mayer, Ulrich Kaufmann, Gerhard Kraetzschmar and Günther Palm
A Scale Invariant Local Image Descriptor for Visual Homing
Andrew Vardy and Franz Oppacher
*************************************** Professor Stefan Wermter Chair for Intelligent Systems Centre for Hybrid Intelligent Systems School of Computing and Technology University of Sunderland St Peters Way Sunderland SR6 0DD United Kingdom email: stefan.wermter **AT** sunderland.ac.uk http://www.his.sunderland.ac.uk/~cs0stw/ http://www.his.sunderland.ac.uk/ **************************************** From mbeal at cse.Buffalo.EDU Fri Oct 7 22:01:26 2005 From: mbeal at cse.Buffalo.EDU (Matthew Beal) Date: Fri, 7 Oct 2005 22:01:26 -0400 (EDT) Subject: Connectionists: CFP: NIPS 2005 Workshop: Nonparametric Bayesian methods Message-ID: ################################################################ CALL FOR PARTICIPATION Open Problems and Challenges for Nonparametric Bayesian methods in Machine Learning a workshop at the 2005 Neural Information Processing Systems (NIPS) Conference Friday, December 9, 2005, Whistler BC, Canada
http://aluminum.cse.buffalo.edu:8080/npbayes/nipsws05 Deadline for Submissions: Monday, October 31, 2005 Notification of Decision: Friday, November 4, 2005 ################################################################ Organizers: ----------- Matthew J. Beal, University at Buffalo, State University of New York Yee Whye Teh, National University of Singapore Overview: --------- Two years ago the NIPS workshops hosted its first forum for discussion of Nonparametric Bayesian (NPB) methods and Infinite Models as used in machine learning. It brought to bear techniques from the statistical disciplines on perennial problems in the NIPS community, such as model size and structure selection from a Bayesian viewpoint. NPB methods include some topics that are heavily studied in the NIPS community such as Gaussian Processes and more recently Dirichlet Process Mixture models, and other topics that are only now starting to make an impact. NPB methods are attractive to machine learning practitioners because they are both powerful and flexible: the key ingredient in an NPB formalism is to side-step the traditional practice of parameter fitting by instead integrating out the possibly complex parameters of the model, which allows interesting situations to exist in the model such as (countably) infinite components in a mixture model, infinite topics in a topic model, or infinite dimensions in a hidden state space; also, the flexibility of an NPB model is often very useful in domains where it is difficult to articulate priors or likelihood functions, such as in text and language modeling, spatially-dependent process modeling, etc. Since the last workshop, models such as Sparse Factor Analysis, Latent Dirichlet Allocation, Hidden Markov Models, and even robot mapping tasks have all benefitted from the flexibility of a nonparametric Bayesian approach, and these nonparametric alternatives have been shown to give superior generalization performance, as compared to finite model selection techniques. It is time now after two years to collect together these various research directions, and use them to define and delineate the challenges facing the nonparametric Bayesian community, and with this the set of open problems that stand a reasonable chance of being solved with focussed research plans. The workshop will focus less on well-studied topics like Gaussian Processes and more on potential new ideas from the statistics community. To this end, we will have a number of experts on nonparametric Bayesian methods from statistics to share their experiences and expertise with the general NIPS community, in an effort to transfer and build upon key methodologies developed there, since the time is ripe for the two groups to coalesce. In particular, we would like the workshop to address: * New techniques: What techniques and methodologies are currently being used in the statistics communities, and which of these can be transferred to be used in machine learning applications? Conversely, are the techniques developed withing the NIPS community of interest to the general statistics communities? * New problems: There are still a wide variety of problems in the NIPS community that cannot be elegantly solved by nonparametric Bayesian models, for example, problems needing smoothly time-varying or spatially-varying processes. It would be useful to identify these problems and the necessary characteristics of any nonparametric solutions. This can serve as concrete goals for further research. 
* Computational/Inferential issues: Inference in nonparametric models is for the most part carried out using expensive MCMC sampling, but recently variational and Expectation Propagation methods have been applied to isolated cases only. For more popular use of these models we need more efficient and reliable inference schemes. Can these methods scale to high dimensional data, and to large databases such as email repositories, news reports, and the world-wide-web? Could it be that NPB methods are just too much work for too little benefit? Format: ------- This is a one-day workshop, designed to be highly interactive, consisting of 3 or 4 themed sessions with short talks and lengthy moderated discussion periods. We anticipate a strong response and will likely have a poster session in between the morning and afternoon sessions. We have attracted several statisticians from outside the NIPS beaten track and as such we will also have a Distinguished Panel of statisticians/machine learners to discuss the points arising during the workshop's discussions. Call for Contributions: ----------------------- Potential speakers/discussants/attendees are encouraged to submit (extended) abstracts of 2-4 pages in length outlining their research as it relates to the themes above, before *Monday, October 31*. We are looking for position papers, extensions of theory and applications, as well as case studies of nonparametric methods. If there is overwhelming response we will accommodate a poster session in the afternoon break in between morning and evening sessions. All chosen abstracts will be posted on the workshop website beforehand to stimulate discussion. Deadline for Submissions: Monday, October 31, 2005 Notification of Decision: Friday, November 4, 2005 Date of workshop: Friday, December 9, 2005 Please email your submissions to mbeal at cse.buffalo.edu and/or yeewhye at gmail.com We encourage you to log in, post questions, and contribute to the pages of the workshop website, at http://aluminum.cse.buffalo.edu:8080/npbayes/nipsws05 which is in Wiki format, either in person or anonymously. Once you have a login if you wish to modify the pages please contact us for permissions. The website is intended to serve as a resource for you as nonparametric Bayesians, both before and after the workshop, and already contains plenty of links to literature on NPB from within and outside of the NIPS community. With your input we can tailor the workshop according to your suggestions. Also, for you to give comprehensive feedback on the topics to be covered, we have provided a survey form at http://aluminum.cse.buffalo.edu:8080/npbayes/nipsws05/survey which you are free to fill out only partially, and anonymously if desired. Thank you -Matt Beal & Yee Whye Teh From ted.carnevale at yale.edu Sat Oct 8 10:41:49 2005 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Sat, 08 Oct 2005 10:41:49 -0400 Subject: Connectionists: NEURON 5.8 simulates large nets on parallel hardware Message-ID: <4347DAAD.7080907@yale.edu> NEURON version 5.8 is now available from http://www.neuron.yale.edu/neuron/install/install.html This includes many bug fixes for features that were new in the previous official release of NEURON. However, for many users the most important changes may be the improvements that facilitate its use in a parallel computing environment. These are: 1. ParallelContext now works on top of MPI as well as PVM (see "Top 10 reasons to prefer MPI over PVM" http://www.lam-mpi.org/mpi/mpi_top10.php). 
This makes it much easier for users to take advantage of NEURON on parallel hardware that ranges from workstation clusters to massively parallel supercomputers. This is potentially helpful for all users who are faced with optimization, parameter space exploration, and "large model, long runtime" problems. 2. NEURON 5.8 also introduces something for users with very large network models that just won't fit on a single-CPU computer: the ParallelNetworkManager, which manages an enhanced ParallelContext to support neural network simulations on parallel machines. That is, the ParallelNetworkManager lets you run simulations in which each processor--be it a node in a supercomputer, or a single- or multiple-CPU workstation in a cluster--handles a different part of the network. Tests on large scale problems indicate a nearly linear speedup with increasing number N of processors, until N is so large that the CPUs don't have enough work to keep themselves busy. In making this announcement of NEURON's new capabilities for parallel computation, I would like to acknowledge the special contributions of Michele Migliore and Bill Lytton, who collaborated with me on the design of these features, and Henry Markram, with whom I have been collaborating on large scale simulation problems and whose IBM Blue Gene supercomputer has been a most helpful testbed for these new features. Michael Hines From pavlovd at ics.uci.edu Fri Oct 7 15:12:01 2005 From: pavlovd at ics.uci.edu (Dmitry Pavlov) Date: Fri, 7 Oct 2005 12:12:01 -0700 (PDT) Subject: Connectionists: Job opening @ Yahoo!: Research Scientist, Yahoo! Search Content Analysis Team Message-ID: Research Scientist, Yahoo! Search Content Analysis Team Location: Sunnyvale, CA Essential Responsibilities We are looking for a highly motivated research scientist experienced in machine learning to help develop algorithms and software systems for analyzing web content. You will be part of Yahoo's content analysis team, an advanced development group responsible for web scale document classification, clustering, attribute extraction and matching applications that push the frontiers of text mining. The content analysis team is focused on lexical, syntactic and semantic analysis of text documents. As part of the team you will have access to Yahoo's enormous amount of data, including query logs, product descriptions, and the web page index. Qualified candidates should be proficient in C/C++ development and have a solid knowledge of machine learning, data mining and information retrieval, including hands-on experience with data clustering, classification, regression, recommender systems, optimization, text mining, natural language processing, analysis, pattern recognition, etc. A PhD degree in computer science or related areas is also required. Required Qualifications/Education Ph.D. in Computer Science, Mathematics, Computational Linguistics, or related field. Experience in: C/C++, Unix operating system and development tools, scripting languages. Knowledge of text mining, natural language processing, information retrieval, statistical analysis, or machine learning. Understanding of algorithm efficiency issues. Excellent communication and problem solving skills. Keywords: data mining, machine learning, text mining, document classification, document clustering, natural language processing, regression analysis, support vector machine, neural network, decision tree, collaborative filter, statistics, information retrieval Please send your resumes to Dr.
Dmitry Pavlov at dpavlov at yahoo-inc.com and reference the Research Scientist position. From ianfasel at mplab.ucsd.edu Tue Oct 11 00:24:16 2005 From: ianfasel at mplab.ucsd.edu (Ian Fasel) Date: Mon, 10 Oct 2005 21:24:16 -0700 Subject: Connectionists: CFP: NIPS*2005 Workshop on automatic discovery of object categories Message-ID: CALL FOR PARTICIPATION -- NIPS*2005 WORKSHOP AUTOMATIC DISCOVERY OF OBJECT CATEGORIES Friday, December 9, 2005 Westin Resort, Whistler, British Columbia http://objectdiscovery.cc Recent years have seen explosive progress in the development of reliable, real-time object detection systems. Unfortunately, these systems typically require large numbers of carefully hand-segmented training images, making it extremely costly to develop more than a few special-purpose applications (e.g. face or car detectors). A system capable of effortlessly learning to identify and localize thousands of different object categories has thus become a new "grand challenge" in computer vision and learning. The goal of this workshop is to help establish and accelerate progress in the recently emerging field of learning about objects from images or video containing little or no training information. The emphasis will be on learning to identify both the presence and location of objects in arbitrary scenes -- which is a somewhat different problem from, e.g., scene categorization, or discrimination between a collection of already-segmented objects (although these may indeed be complementary methods). More than just a technological challenge, this topic brings up fundamental new issues that will require the development of new concepts and new methods in learning and classification, and clearly crosses disciplinary boundaries into diverse areas such as neuroscience, robotics, developmental psychology, and others. This workshop will bring together pioneering figures in this area to assess the state of the art, establish future research goals, and agree on methods for assessing progress. The workshop will include invited presentations, contributed talks, a poster session, and plenty of time for hopefully lively discussion. Although we do wish to hear about specific systems, we are equally interested in "where are we" and big-picture discussions to help bring focus to the topic. Important questions to focus on include:
* Feature representations
* Fusion of multi-modal information
* Integration of bottom-up "saliency" information and top-down models
* Use of partially and weakly labeled data, and reinforcement signals
* Discriminative vs. generative vs. hybrid approaches
* Datasets for training and evaluating progress in artificial systems
* The developmental progression of object learning in humans and animals
SUBMISSIONS We anticipate accepting six to eight 20-minute contributed talks and a number of posters. If you would like to present your work, please submit anything from a one-page abstract to a complete manuscript as soon as possible to: ianfasel at mplab.ucsd.edu Abstracts based upon previously published work are welcome. Please submit early!
DETAILS Website: http://objectdiscovery.cc Important Dates: Monday, October 24 - Submission Deadline Monday, October 31 - Acceptance Notification Friday, December 9 - The Workshop NIPS Workshop Registration & Hotel Info: http://www.nips.cc/Conferences/2005/ Workshop Inquiries: ianfasel at mplab.ucsd.edu movellan at mplab.ucsd.edu ORGANIZERS Ian Fasel and Javier Movellan UCSD Machine Perception Laboratory http://mplab.ucsd.edu From terry at salk.edu Mon Oct 10 14:21:20 2005 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 10 Oct 2005 11:21:20 -0700 Subject: Connectionists: NEURAL COMPUTATION 17:12 In-Reply-To: Message-ID: Neural Computation - Contents - Volume 17, Number 12 - December 1, 2005
LETTERS
The Effect of NMDA Receptors on Gain Modulation
Michiel Berends, Reinoud Maex, and Erik De Schutter
Oscillatory Synchronization Requires Precise and Balanced Feedback Inhibition in a Model of the Insect Antennal Lobe
Dominique Martinez
Response Properties of an Integrate-and-Fire Model That Receives Subthreshold Inputs
Xuedong Zhang and Laurel H. Carney
Incremental Online Learning in High Dimensions
Sethu Vijayakumar, Aaron D'Souza and Stefan Schaal
On the Nonlearnability of a Single Spiking Neuron
Jiri Sima and Jiri Sgall
A Novel Model-Based Hearing Compensation Design Using a Gradient-Free Optimization Method
Zhe Chen, Suzanna Becker, Jeff Bondy, Ian C. Bruce, and Simon Haykin
A Robust Information Clustering Algorithm
Qing Song
Learning Curves for Stochastic Gradient Descent in Linear Feedforward Networks
Justin Werfel, Xiaohui Xie and H. Sebastian Seung
Information Geometry of Interspike Intervals in Spiking Neurons
Kazushi Ikeda
ERRATUM
Edgeworth-Expanded Gaussian Mixture Density Modeling
Marc Van Hulle
----- ON-LINE - http://neco.mitpress.org/
SUBSCRIPTIONS - 2005 - VOLUME 17 - 12 ISSUES
                                                Electronic only
                  USA      Canada*   Others     USA      Canada*
Student/Retired   $60      $64.20    $114       $54      $57.78
Individual        $100     $107.00   $143       $90      $96.30
Institution       $680     $727.60   $734       $612     $654.84
* includes 7% GST
MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ----- From thomas.villmann at medizin.uni-leipzig.de Mon Oct 10 15:37:43 2005 From: thomas.villmann at medizin.uni-leipzig.de (thomas.villmann@medizin.uni-leipzig.de) Date: Mon, 10 Oct 2005 21:37:43 +0200 Subject: Connectionists: CfP: FLINS 2006 - Special Session on Data Analysis for Mass Spectrometric Problems Message-ID: <434ADF27.5463.2DB2C5C@localhost> Dear connectionists, I want to draw your attention to the following announcement, which could be of interest to you: #################################################### Special Session on Data Analysis for Mass Spectrometric Problems at the The 7th International FLINS Conference on Applied Artificial Intelligence (FLINS 2006) Session Organizers Frank-Michael Schleif University of Leipzig, Dept. of Computational Intelligence & Bruker Daltonik GmbH Karl-Tauchnitz-Str. 25, D-04107 Leipzig, Germany Tel: +49-341-24 31-480 Fax: +49-341-96252-15 E-Mail: fms at bdal.de Thomas Villmann University of Leipzig, Dept. of Computational Intelligence Karl-Tauchnitz-Str. 25, D-04107 Leipzig, Germany E-Mail: villmann at informatik.uni-leipzig.de Jens Decker Bruker Daltonik GmbH Research & Development Fahrenheitstr.
4, D-28359 Bremen, Germany E-Mail: jde at bdal.de This session is organized as a part of The 7th International FLINS Conference on Applied Artificial Intelligence (FLINS 2006) http://www.fuzzy.ugent.be/flins2006/ August 29-31, 2006, Genova (Italy) In several areas of bioinformatics, such as mass spectrometry (MS), genome expression, and biosignal analysis, applied artificial intelligence methods play an important role in data analysis and data processing. These methods include all kinds of machine learning approaches as well as neural networks, modern statistics, genetic algorithms etc. In this session we focus on data processing in mass spectrometry. The most relevant problems arising in this domain are due to high dimensional but sparse data, processing of structures (functional data) and fuzziness. These topics are of more general interest also in the machine learning community. Moreover, MS plays an increasing role in the field of clinical proteomics and chemometrics. In the announced special session we focus on all kinds of data analysis occurring during the processing of MS data, such as peak detection, feature extraction, pattern recognition, classification etc. The quality of these data analysis tools crucially influences the medical investigation results. This is especially true for the analysis of high-dimensional functional MALDI-TOF or SELDI-TOF spectra of body fluids from clinical proteomics studies. To improve the state of the art in this field, both the processing of the spectra and new algorithms for the supervised and unsupervised analysis of the extracted spectral features should be reconsidered in the light of new research results. The special session Data Analysis for Mass Spectrometric Problems at FLINS 2006 aims at collecting the state-of-the-art activities in these fields and at bringing together researchers working on these important topics. Therefore, we encourage the submission of contributions which aim at improvements of all kinds of data processing for MS. Recommended topics include but are not limited to the following: * Machine Learning approaches for MS data analysis * Fuzzy data analysis * Rule extraction * Automatic reasoning * Denoising * Baseline correction and noise estimation * Recalibration * Automatic evaluation of spectra quality * Feature extraction and selection * Classification of MS data (e.g. within clinical proteomics) * Statistical methods for data analysis of MS data * Applications in Clinical Proteomics, Metabolic Profiling Submission of papers Authors are invited to submit a paper of up to 8 pages by December 15, 2005. You can submit your paper to the session organizers by email to: fms at bdal.de or {Schleif, villmann}@informatik.uni-leipzig.de All papers submitted in this session will be peer-reviewed. Accepted papers will be published in the conference proceedings as the book "Applied Artificial Intelligence" by World Scientific (to be EI indexed). Final papers should be prepared according to the publisher's instructions: http://www.worldscientific.com/style/proceedings_style.shtml Please select the trim size: 9" x 6". Papers that are not prepared according to these guidelines will not be published. Important dates * Paper submissions: December 15, 2005 * Acceptance letter: February 15, 2006 * Final paper submissions: April 15, 2006 ################################################# Best regards Thomas Villmann ______________________________________ Dr. rer. nat.
Thomas Villmann University Leipzig Clinic for Psychotherapy Karl-Tauchnitz-Str. 25 phone / fax +49 (0)341 9718868 / 49 email: thomas.villmann at medizin.uni-leipzig.de From shivani at csail.mit.edu Tue Oct 11 21:20:57 2005 From: shivani at csail.mit.edu (Shivani Agarwal) Date: Tue, 11 Oct 2005 21:20:57 -0400 (EDT) Subject: Connectionists: CFP: NIPS 2005 Workshop - Learning to Rank Message-ID: ************************************************************************ FINAL CALL FOR PAPERS ---- Learning to Rank ---- Workshop at the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005) Whistler, Canada, Friday December 9, 2005 http://web.mit.edu/shivani/www/Ranking-NIPS-05/ -- Submission Deadline: October 21, 2005 -- ************************************************************************ [ Apologies for multiple postings ] OVERVIEW -------- The problem of ranking, in which the goal is to learn an ordering or ranking over objects, has recently gained much attention in machine learning. Progress has been made in formulating different forms of the ranking problem, proposing and analyzing algorithms for these forms, and developing theory for them. However, a multitude of basic questions remain unanswered: * Ranking problems may differ in many ways: in the form of the training examples, in the form of the desired output, and in the performance measure used to evaluate success. What are the consequences of each of these factors on the design of ranking algorithms and on their theoretical guarantees? * The relationships between ranking and other classical learning problems such as classification and regression are still under-explored. Is any of these problems inherently harder or easier than another? * Although ranking is studied mainly as a supervised learning problem, it can have important consequences for other forms of learning; for example, in semi-supervised learning, one often ranks unlabeled examples so as to assign labels to the ones ranked at the top, and in reinforcement learning, one often learns a policy that ranks actions for each state. To what extent can these connections be explored and exploited? * There is a large variety of applications in which ranking is required, ranging from information retrieval to collaborative filtering to computational biology. What forms of ranking are most suited to different applications? What are novel applications that can benefit from ranking, and what other forms of ranking do these applications point us to? This workshop aims to provide a forum for discussion and debate among researchers interested in the topic of ranking, with a focus on the basic questions above. The goal is not to find immediate answers, but rather to discuss possible methods and applications, develop intuition, brainstorm on possible directions and, in the process, encourage dialogue and collaboration among researchers with complementary ideas. FORMAT ------ This is a one-day workshop that will follow the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005). The workshop will consist of two 3-hour sessions. There will be two invited talks and 5-6 contributed talks, with time for questions and discussion after each talk. We would particularly like to encourage, after each talk, a discussion of underlying assumptions, alternative approaches, and possible applications or theoretical analyses, as appropriate. 
The last 30 minutes of the workshop will be reserved for a concluding discussion which will be used to put into perspective insights gained from the workshop and to highlight open challenges. Invited Talks ------------- * Thorsten Joachims, Cornell University * Yoram Singer, The Hebrew University Contributed Talks ----------------- These will be based on papers submitted for review. See below for details. CALL FOR PAPERS --------------- We invite submissions of papers addressing all aspects of ranking in machine learning, including: * algorithmic approaches for ranking * theoretical analyses of ranking algorithms * comparisons of different forms of ranking * formulations of new forms of ranking * relationships between ranking and other learning problems * novel applications of ranking * challenges in applying or analyzing ranking methods We welcome papers on ranking that do not fit into one of the above categories, as well as papers that describe work in progress. We are particularly interested in papers that point to new questions/debate in ranking and/or shed new light on existing issues. Please note that papers that have previously appeared (or have been accepted for publication) in a journal or at a conference or workshop, or that are being submitted to another workshop, are not appropriate for this workshop. Submission Instructions ----------------------- Submissions should be at most 6 pages in length using NIPS style files (available at http://web.mit.edu/shivani/www/Ranking-NIPS-05/StyleFiles/), and should include the title, authors' names, postal and email addresses, and an abstract not to exceed 150 words. Email submissions (in pdf or ps format only) to shivani at mit.edu with subject line "Workshop Paper Submission". The deadline for submissions is Friday October 21, 11:59 pm EDT. Submissions will be reviewed by the program committee and authors will be notified of acceptance/rejection decisions by Friday November 11. Final versions of all accepted papers will be due on Friday November 18. Please note that one author of each accepted paper must be available to present the paper at the workshop. IMPORTANT DATES --------------- First call for papers -- September 6, 2005 Paper submission deadline -- October 21, 2005 (11:59 pm EDT) Notification of decisions -- November 11, 2005 Final papers due -- November 18, 2005 Workshop -- December 9, 2005 ORGANIZERS ---------- * Shivani Agarwal, MIT * Corinna Cortes, Google Research * Ralf Herbrich, Microsoft Research CONTACT ------- Please direct any questions to shivani at mit.edu. ************************************************************************ From bengio at idiap.ch Thu Oct 13 02:24:31 2005 From: bengio at idiap.ch (Samy Bengio) Date: Thu, 13 Oct 2005 08:24:31 +0200 (CEST) Subject: Connectionists: two open PhD positions in machine learning Message-ID: Two Open Positions in Machine Learning -------------------------------------- The IDIAP Research Institute seeks two qualified candidates for PhD positions in machine learning. The objective of the first project is to develop novel kernel based algorithms for the analysis of sequences of high level events, such as automatic speech recognition (ASR). State-of-the-art ASR systems are based on generative Hidden Markov Models (HMMs). On the other hand, recent machine learning research have shown promising results in kernel based large margin discriminant models such as Support Vector Machines (SVMs) which maximize the margin between positive and negative examples. 
More recently, new kernels were proposed for various time-series tasks. The objective of this project is to study how these kernels could be modified in the context of more complex temporal tasks such as speech and video understanding. The objective of the second project is to develop novel machine learning algorithms for multi-channel sequence processing. Modeling jointly several sources of information (recorded from several cameras, microphones, etc) has several practical applications, including audio-visual speech recognition, multimodal person tracking, or complex scene analysis. Several machine learning models have already been proposed for such task, mainly for the case of two channels. The goal of this project is to propose theoretical and applied solutions for the case of more than two (potentially asynchronous) channels. Generative (Markovian based) models as well as margin-based models will be considered for the task. The ideal candidates will hold a degree in computer science, statistics, or related fields. She or he should have strong background in statistics, linear algebra, signal processing, C++, Perl and/or Python scripting languages, and the Linux environment. Knowledge in statistical machine learning and speech processing is an asset. Appointment for a PhD position is for a maximum of 4 years, provided successful progress, and should lead to a dissertation. Annual gross salary ranges from 36,000 Swiss Francs (first year) to 40,000 Swiss Francs (last year). Starting date is immediate. IDIAP is an equal opportunity employer and is actively involved in the European initiative involving the Advancement of Women in Science. IDIAP seeks to maintain a principle of open competition (on the basis of merit) to appoint the best candidate, provides equal opportunity for all candidates, and equally encourages both females and males to consider employment with IDIAP. Although IDIAP is located in the French part of Switzerland, English is the main working language. Free English and French lessons are provided. IDIAP is located in the town of Martigny in Valais, a scenic region in the south of Switzerland, surrounded by the highest mountains of Europe, and offering exciting recreational activities, including hiking, climbing and skiing, as well as varied cultural activities. It is within close proximity to Montreux (Jazz Festival) and Lausanne. Interested candidates should send a letter of motivation, along with their detailed CV and names of 3 references to jobs at idiap.ch. More information can also be obtained by contacting Samy Bengio. ---- Samy Bengio Senior Researcher in Machine Learning. IDIAP, CP 592, rue du Simplon 4, 1920 Martigny, Switzerland. tel: +41 27 721 77 39, fax: +41 27 721 77 12. mailto:bengio at idiap.ch, http://www.idiap.ch/~bengio From rsun at rpi.edu Thu Oct 13 17:17:02 2005 From: rsun at rpi.edu (Professor Ron Sun) Date: Thu, 13 Oct 2005 17:17:02 -0400 Subject: Connectionists: Tenure-track Position in Cognitive Science at RPI Message-ID: <5BEF17E0-97B2-47D7-81F8-7E91643E2B70@rpi.edu> Tenure-track Position in Cognitive Science The Cognitive Science Department at Rensselaer Polytechnic Institute invites applications for an anticipated tenure-track position at the rank of Assistant Professor beginning in Fall 2006 or possibly (for the right candidate) Spring 2007. 
We are seeking candidates who combine computational, mathematical, and/or logic-based modeling informed by experimental research in the areas of perception and action (e.g., motor control, vision, attention), interactive behavior (e.g., integrated models of cognitive systems), or high-level cognition (e.g., skill acquisition, decision making, reasoning). The candidate's interest can be in basic and/or applied theory-based research. Interests in areas such as robotics or high-level computational neuroscience will be considered a strength. However, all disciplines within cognitive science are potential sources of candidates. All candidates are expected to have a strong potential for external funding. The Cognitive Science Department at Rensselaer is among the world's newest dedicated cognitive science departments, specializing in computational cognitive modeling, perception/action, learning and reasoning (human and machine), and cognitive engineering. The department's primary mission is to carry out seminal basic research and to develop engineering applications within cognitive science. This effort requires the continued growth of its new, research-oriented doctoral program in cognitive science. Department faculty have excellent ties with faculty in Computer Science, Engineering, and Decision Sciences. Women and minorities are especially encouraged to apply. Send curriculum vitae, reprints and preprints of publications, a 1-to-2 page statement of research, a 1-page statement of teaching interests, and three letters of reference to: Search Committee, c/o Heather Hewitt, Cognitive Science Department, Carnegie Building, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180. (Direct queries via email to Prof. Wayne D. Gray, grayw at rpi.edu, Chair of the Search Committee.) Applications will be reviewed beginning December 1st and continuing until the position is filled. ======================================================== Professor Ron Sun Cognitive Science Department Rensselaer Polytechnic Institute 110 Eighth Street, Carnegie 302A Troy, NY 12180, USA phone: 518-276-3409 fax: 518-276-3017 email: rsun at rpi.edu web: http://www.cogsci.rpi.edu/~rsun ======================================================= From bower at uthscsa.edu Wed Oct 12 12:10:50 2005 From: bower at uthscsa.edu (james Bower) Date: Wed, 12 Oct 2005 11:10:50 -0500 Subject: Connectionists: Two positions in computational biology at UTSA Message-ID: Hello all, We are happy to announce two new faculty positions in computational biology at the University of Texas, San Antonio. These positions represent the continued growth in computational biology at UTSA, complementing the already established computational research efforts of a number of senior and junior faculty (http://www.bio.utsa.edu/faculty.html). UTSA has also recently established a major new multi-user facility for computational biology and bioinformatics, which will include a state-of-the-art computer cluster with full software support for modeling, simulation, and data analysis. This facility is articulated with several other core facilities including a fully equipped imaging core. San Antonio itself is regularly included in the top ten safe, healthy, affordable, and interesting places to live in the United States. Please consider joining us in our efforts to make UTSA one of the premier universities for computational studies in the world. James M.
Bower Charles Wilson Official Job Announcement The University of Texas at San Antonio Assistant Professor - Computational Biology The Department of Biology at the University of Texas San Antonio invites applications for two tenure-track positions at the rank of Assistant Professor pending budget approval. Required Qualifications: Candidates must have an M.D. or Ph.D., or the equivalent, in biology or a related discipline, and at least 2 years of postdoctoral experience. Applicants' research interests should include experimental and computational techniques but can apply to any area of biological research. Preference will be given to candidates with a record of accomplishment in both experimental and model-based studies within the field of Neuroscience. We are seeking faculty with interests in subcellular (e.g. channels, cell signaling, etc.), neuronal (e.g. synaptic integration, or intrinsic electrical properties), or systems-level research, or a combination of all three. Responsibilities: The successful applicant is expected to establish and maintain an extramurally funded research program, and contribute to undergraduate and graduate teaching in courses offered either at the UTSA Downtown Campus or the 1604 campus, and occasionally at night. The successful candidate will join an interactive group of researchers in Neuroscience at UTSA and the nearby UT Health Science Center. For more information on this faculty group and the department, please visit bio.utsa.edu. Attractive startup packages, including new laboratory space and access to a variety of shared facilities, are available pending budget approval. Candidates please forward via email (biofacultyad at utsa.edu) or U.S. Post (Ion Channel Search Committee, Department of Biology, UTSA, 6900 N. Loop 1604 W., San Antonio, TX 78249-0662) a current curriculum vita, two or three representative publications, and a brief summary of future research interests and teaching philosophy. Include contact information (including email addresses) of three references. Applications will not be reviewed until all materials have arrived. Applicants who are not U. S. citizens must state their current visa and residency status. UTSA is an Affirmative Action/Equal Opportunity employer. Women, minorities, veterans, and individuals with disabilities are encouraged to apply. This position is security-sensitive as defined by the Texas Education Code §51.215(c) and Texas Government Code §411.094(a)(2). For full consideration, applications should be received by December 15, 2005, but will be accepted until the position is filled. For further information contact the search committee at biofacultyad at utsa.edu. -- James M. Bower Ph.D. Research Imaging Center University of Texas Health Science Center at San Antonio 7703 Floyd Curl Drive San Antonio, TX 78284-6240 Cajal Neuroscience Center University of Texas San Antonio Phone: 210 567 8080 Fax: 210 567 8152 From isabelle at clopinet.com Wed Oct 12 17:35:16 2005 From: isabelle at clopinet.com (Isabelle Guyon) Date: Wed, 12 Oct 2005 23:35:16 +0200 Subject: Connectionists: New Pattern Recognition Competition Message-ID: <434D8194.2060906@clopinet.com> Dear colleagues, We are organizing a new machine learning challenge entitled: Performance Prediction Challenge see: http://www.modelselect.inf.ethz.ch/ How good are you at predicting how good you are? Find out: compete to predict accurately your generalization performance. This problem, which is of great practical importance e.g.
in pilot studies, poses theoretical and computational challenges. Is cross-validation the best solution? What should k be in k-fold? Can one use theoretical performance bounds to better assess generalization? You will have opportunities to publish at WCCI 2006 (Vancouver, July 2006) and in JMLR under the new "model selection" special topic. Check the web site! Isabelle Guyon From derdogmus at ieee.org Fri Oct 14 01:30:30 2005 From: derdogmus at ieee.org (Deniz Erdogmus) Date: Thu, 13 Oct 2005 22:30:30 -0700 Subject: Connectionists: ICA 2006 Paper Submission Deadline Extended to October 28th Message-ID: <434F4276.8060708@ieee.org> Dear Colleagues, As the ICA 2006 organization committee, we have received numerous requests for a deadline extension due to its proximity to the ICASSP 2006 submission deadline. Clearly, both conferences are relevant to many researchers working in the ICA/BSS fields. Therefore, the paper submission deadline for ICA 2006 has been postponed to October 28th. Please visit the conference website at http://www.cnel.ufl.edu/ica2006/ for more information. Looking forward to your latest research contributions and to seeing you in Charleston, South Carolina, USA. Regards, Deniz Erdogmus on behalf of the ICA 2006 Organization Committee From marcus at idsia.ch Thu Oct 13 05:26:06 2005 From: marcus at idsia.ch (Marcus Hutter) Date: Thu, 13 Oct 2005 11:26:06 +0200 Subject: Connectionists: Useful Expressions for Mutual Information Message-ID: <2a1d01c5cfd8$23c5ac50$6bbfb0c3@NB1119> Dear Colleagues, We would like to announce 3 papers which appeared recently that derive various general and simple expressions for mutual information for unknown chances and missing data. M. Hutter and M. Zaffalon. Distribution of Mutual Information from Complete and Incomplete Data Computational Statistics & Data Analysis 48:3 (2005) 633-657 http://arxiv.org/abs/cs.LG/0403025 M. Zaffalon and M. Hutter. Robust Inference of Trees Annals of Mathematics and Artificial Intelligence (2005) to appear http://www.idsia.ch/~zaffalon/papers/treesj.pdf M. Hutter. Robust Estimators under the Imprecise Dirichlet Model Proc. 3rd International Symposium on Imprecise Probabilities and Their Applications (ISIPTA-2003) 274-289 http://www.carleton-scientific.com/isipta/PDF/021.pdf ----------------------------------- Marcus Hutter and Marco Zaffalon, Senior Researchers, IDSIA Istituto Dalle Molle di Studi sull'Intelligenza Artificiale Galleria 2 CH-6928 Manno(Lugano) - Switzerland Phone: +41-58-666 6668 Fax: +41-58-666 6661 E-mail marcus at idsia.ch http://www.idsia.ch/~marcus/idsia/ From timo.honkela at hut.fi Fri Oct 14 06:32:15 2005 From: timo.honkela at hut.fi (timo.honkela@hut.fi) Date: Fri, 14 Oct 2005 13:32:15 +0300 Subject: Connectionists: Proceedings on Adaptive Knowledge Representation and Reasoning In-Reply-To: <4346AB91.80502@sunderland.ac.uk> References: <4346AB91.80502@sunderland.ac.uk> Message-ID: We are pleased to announce that the proceedings of AKRR'05, International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning are available online at http://www.cis.hut.fi/AKRR05/papers/ In their keynote paper, Hyvarinen, Hoyer, Hurri and Gutmann present statistical models of images and early vision. They refer to the widely-spread assumption that biological visual systems are adapted to process the particular kind of information they receive. Hyvarinen et al. 
review work on modelling statistical regularities in ecologically valid visual input (natural images) and the resulting functional explanation of the properties of visual neurons. They refer to linear sparse coding as a seminal statistical model for natural images, which is also equivalent to the independent component analysis (ICA) model. The authors describe models that lead to the emergence of further properties of visual neurons: the topographic organization and complex cell receptive fields. Gabriella Vigliocco's keynote talk highlighted the fruitful connection between psychological empirical research on the one hand and computational analysis and modeling on the other. In their paper, Andrews, Vigliocco and Vinson consider the integration of attributional and distributional information in a probabilistic model of meaning representation. The authors present models of how meaning is represented in the brain/mind, based upon the assumption that children develop meaning representations for words using two main sources of information: information derived from their concrete experience with objects and events in the world and information implicitly derived from exposure to language. They first present a model developed using self-organising maps starting from speaker-generated features. They also present a probabilistic model that integrates the attributional information with distributional information derived from text corpora. The methodologically oriented papers in the proceedings consider adaptive systems, knowledge representation and reasoning from various points of view. Adaptive approaches and emergent representations based on contextual information are strongly present in the papers related to language and cognition. There were two special symposia in the conference that provided a focused view on their topics: ``Adaptive Models of Knowledge, Language and Cognition'' (AMKLC'05) and ``Knowledge Representation for Bioinformatics'' (KRBIO'05). The proceedings of these symposia are also included at the web site http://www.cis.hut.fi/AKRR05/papers/ Best regards, On behalf of the editors, Timo Honkela -- Timo Honkela, Chief Research Scientist, PhD, Docent Neural Networks Research Center Laboratory of Computer and Information Science Helsinki University of Technology P.O.Box 5400, FI-02015 TKK timo.honkela at tkk.fi, http://www.cis.hut.fi/tho/ From mjhealy at ece.unm.edu Sat Oct 15 17:01:46 2005 From: mjhealy at ece.unm.edu (mjhealy@ece.unm.edu) Date: Sat, 15 Oct 2005 15:01:46 -0600 (MDT) Subject: Connectionists: Experimental application of category theory to neural networks Message-ID: Tom Caudell and I have a paper in the Proceedings of the International Joint Conference on Neural Networks (IJCNN05), Montreal, 2005, published by the IEEE Press. The subject is an experiment demonstrating improved performance with a modification to a standard artificial neural architecture based on our category-theoretic semantic model. This was joint research with Sandia National Laboratories, and the experimental application concerns the generation of multispectral images from satellite data. We believe this is the first direct application of category theory in an engineering setting (while at Boeing, another colleague and I had demonstrated its application to the synthesis of engineering software).
Another feature of it is that it relates an abstract, categorical structure to a neural network parameter; this is not dealt with in detail in the Proceedings paper but will be in a full paper to be submitted. Tom Caudell's web page has a link to the Proceedings paper at http://www.eece.unm.edu/faculty/tpc/ The semantic theory is described in our Technical Report EECE-TR-04-020 on the UNM Dspace Repository, at https://repository.unm.edu/handle/1928/33 Regards, Mike Healy mjhealy at ece.unm.edu From bisant at umbc.edu Mon Oct 17 16:28:50 2005 From: bisant at umbc.edu (D Bisant) Date: Mon, 17 Oct 2005 16:28:50 -0400 Subject: Connectionists: Call for Papers, FLAIRS06 Nnet Special Track, Melbourne Beach, FL, May 11-13 Message-ID: <43540982.80509@umbc.edu> Neural Networks Special Track at the 19th International FLAIRS Conference In cooperation with the American Association for Artificial Intelligence Holiday Inn - Melbourne Oceanfront, Melbourne Beach, Florida May 11-13, 2006 Call for Papers Papers are being solicited for a special track on Neural Network Applications at the 19th International Florida Artificial Intelligence Society Conference (FLAIRS-2006). http://www.indiana.edu/~flairs06/ The special track will be devoted to Neural Networks with the aim of presenting new and important contributions in this area. The areas include, but are not limited to, the following: applications such as Pattern Recognition, Control and Process Monitoring, Biomedical Applications, Robotics, Text Mining, Diagnostic Problems, Telecommunications, Power Systems, Signal Processing; algorithms such as new developments in Back Propagation, RBF, SVM, Ensemble Methods, Kernel Approaches; hybrid approaches such as Neural Networks/Genetic Algorithms, Neural Network/Expert Systems, Causal Nets trained with Backpropagation, and Neural Network/Fuzzy Logic; or any other area of Neural Network research related to artificial intelligence. FLAIRS is a respectable multidisciplinary conference in artificial intelligence. This special track will feature a double-blind review. Submission Guidelines Interested authors must submit completed manuscripts by November 30, 2005. Submissions should be no more than 6 pages (4000 words) in length, including figures, and contain no identifying reference to self or organization. Papers should be formatted according to AAAI Guidelines. Submission instructions can be found at FLAIRS-06 website at http:// www.indiana.edu/~flairs06. Notification of acceptance will be mailed around January 20, 2006. Authors of accepted papers will be expected to submit the final camera-ready copies of their full papers by February 13, 2006 for publication in the conference proceedings which will be published by AAAI Press. Authors may be invited to submit a revised copy of their paper to a special issue of the International Journal on Artificial Intelligence Tools (IJAIT). Questions regarding the track should be addressed to: David Bisant at bisant at umbc.edu. 
FLAIRS 2006 Invited Speakers * Alan Bundy, University of Edinburgh, Scotland * Bob Morris, NASA Ames Research Center, USA * Mehran Sahami, Stanford University and Google, USA * Barry Smyth, University College Dublin, Ireland Important Dates * Paper submissions due: November 30, 2005 * Notification letters sent: January 20, 2006 * Camera ready copy due: February 13, 2006 Special Track Committee Ingrid Russell (Co-Chair), University of Hartford, USA David Bisant (Co-Chair), The Laboratory for Physical Sciences, USA Georgios Anagnostopoulos, Florida Institute of Technology, USA Jim Austin, University of York, UK Geof Barrows, Centeye Corporation, USA Serge Dolenko, Moscow State University, Russia Erol Gelenbe, Imperial College London, UK Michael Georgiopoulos, University of Central Florida, USA Gary Kuhn, Department of Defense, USA Luis Martí, Univ. Carlos III de Madrid, Spain Costas Neocleous, University of Cyprus, Cyprus Sergio Roa Ovalle, National University of Colombia, Colombia Roberto Santana, University of the Basque Country, Spain C. N. Schizas, University of Cyprus, Cyprus Chellu Chandra Sekhar, Indian Institute of Technology, India From mark at paskin.org Mon Oct 17 14:22:13 2005 From: mark at paskin.org (Mark A. Paskin) Date: Mon, 17 Oct 2005 11:22:13 -0700 Subject: Connectionists: Deadline extended for NIPS 2005 Workshop: Intelligence Beyond the Desktop Message-ID: <3EA54D45-DD1F-4323-A2DD-39CCCF938A6B@paskin.org> The deadline has been extended to this Friday, October 21: ################################################################ CALL FOR PARTICIPATION Intelligence Beyond the Desktop a workshop at the 2005 Neural Information Processing Systems (NIPS) Conference Submission deadline: Friday, October 21, 2005 http://ai.stanford.edu/~paskin/ibd05/ ################################################################ OVERVIEW We are now well past the era of the desktop computer. Trends towards miniaturization, wireless communication, and increased sensing and control capabilities have led to a variety of systems that distribute computation, sensing, and controls across multiple devices. Examples include wireless sensor networks, multi-robot systems, networks of smartphones, and large area networks. Machine learning problems in these non-traditional settings cannot faithfully be viewed in terms of a data set and an objective function to optimize; physical aspects of the system impose challenging new constraints. Resources for computation and actuation may be limited and distributed across many nodes, requiring significant coordination; limited communication resources can make this coordination expensive. The scale and complexity of these systems often leads to large amounts of structured data that make state estimation challenging. In addition, these systems often have other constraints, such as limited power, or under-actuation, requiring reasoning about the system itself during learning and control. Furthermore, large-scale distributed systems are often unreliable, requiring algorithms that are robust to failures and lossy communication. New learning, inference, and control algorithms that address these challenges are required. This workshop aims to bring together researchers to discuss new applications of machine learning in these systems, the challenges that arise, and emerging solutions. FORMAT This one-day workshop will consist of invited talks and talks based upon submitted abstracts, with some time set aside for discussion.
Our (tentative) invited speakers are: * Dieter Fox (University of Washington) * Leonidas Guibas (Stanford University) * Sebastian Thrun, (Stanford University) will speak about the machine learning algorithms used in Stanley, Stanford's winning entry into the DARPA Grand Challenge. CALL FOR PARTICIPATION Researchers working at the interface between machine learning and non- traditional computer architectures are invited to submit descriptions of their research for presentation at the workshop. Of particular relevance is research on the following topics: * distributed sensing, computation, and/or control * coordination * robustness * learning/inference/control under resource constraints (power, computation, time, etc.) * introspective machine learning (reasoning about the system architecture in the context of learning/inference/control) We especially encourage submissions that address unique challenges posed by non-traditional architectures for computation, such as * wireless sensor networks * multi-robot systems * large-area networks Submissions should be extended abstracts in PDF format which are no longer than three (3) pages long in 10pt or larger font. Submissions may be e-mailed to ibd-2005 at cs.cmu.edu with the subject "IBD SUBMISSION". We plan to accept four to six submissions for 25 minute presentation slots. In your submission please indicate if you would present a poster of your work (in case there are more qualified submissions than speaking slots). Call for participation: Wednesday, August 31, 2005 Submission deadline: Friday, October 21, 2005 11:59 PM PST Acceptance notification: Tuesday, November 1, 2005 Workshop: Friday, December 9, 2005 Organizers * Carlos Guestrin (http://www.cs.cmu.edu/~guestrin/) * Mark Paskin (http://paskin.org) Please direct any inquiries regarding the workshop to ibd-2005 at cs.cmu.edu. From sfr at unipg.it Wed Oct 19 07:54:34 2005 From: sfr at unipg.it (Simone G.O. FIORI) Date: Wed, 19 Oct 2005 13:54:34 +0200 Subject: Connectionists: Two new papers on learning theory. Message-ID: <1.5.4.32.20051019115434.01a82930@unipg.it> Dear colleagues, I take the liberty to announce the availability of two new papers on learning theory. They are related to unsupervised adapting of inherently-stable IIR filters implemented in state-space form, with application to blind system deconvolution, and to variate generation by look-up-table type adaptive-activation-function-neuron learning. 1] "Blind Adaptation of Stable Discrete-Time IIR Filters in State-Space Form", by S. Fiori, University of Perugia, Italy. Accepted on the IEEE Transactions on Signal Processing *Abstract: Blind deconvolution consists of extracting a source sequence and impulse response of a linear system from their convolution. In presence of system zeros close to the unit circle, which give rise to very long impulse responses, IIR adaptive structures are of use, whose adaptation should be carefully designed in order to guarantee stability. In this paper, we propose a blind-type discrete-time IIR adaptive filter structure realized in state-space form that, with a suitable parameterization of its coefficients, remains stable. The theory is first developed for a two-pole filter, whose numerical behavior is investigated via computer-based experiments. The proposed structure/adaptation theory is then extended to a multi-pole structure realized as a cascade of two-pole filters. 
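For readers less familiar with the state-space setting, the following is a minimal sketch of the general idea of a stability-guaranteed two-pole section; the parameter names rho and theta and the fixed input/output vectors are illustrative assumptions, not the parameterization proposed in the paper:

# Minimal illustrative sketch (not the paper's algorithm): a two-pole IIR
# filter in state-space form whose poles are constrained inside the unit
# circle, so every setting of the free parameters gives a stable filter.
import numpy as np

def stable_two_pole(rho, theta):
    # Poles are r*exp(+/- i*theta) with r = sigmoid(rho) < 1, hence stability.
    r = 1.0 / (1.0 + np.exp(-rho))
    A = r * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
    B = np.array([1.0, 0.0])   # input and output vectors fixed for the sketch
    C = np.array([1.0, 1.0])
    return A, B, C

def run_filter(A, B, C, u):
    # State-space recursion: x[t+1] = A x[t] + B u[t], y[t] = C x[t].
    x = np.zeros(2)
    y = np.empty(len(u))
    for t, ut in enumerate(u):
        y[t] = C @ x
        x = A @ x + B * ut
    return y

A, B, C = stable_two_pole(rho=2.0, theta=0.3)
y = run_filter(A, B, C, np.random.randn(1000))
print(np.abs(np.linalg.eigvals(A)))   # both pole magnitudes are below 1

Cascading several such sections yields a higher-order filter that remains stable for any parameter values the adaptation visits.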
Computer-based experiments are proposed and discussed, which aim at illustrating the behavior of the filter cascade in several case studies. The numerical results obtained show that the proposed filters remain stable during adaptation and provide satisfactory deconvolution results. Draft available at: http://www.unipg.it/sfr/publications/tsp05.pdf 2] "Neural Systems with Numerically-Matched Input-Output Statistic: Variate Generation", by S. Fiori, University of Perugia, Italy. Accepted on Neural Processing Letters *Abstract: The aim of this paper is to present a neural system trained to exhibit matched input-output statistic for random samples generation. The learning procedure is based on a cardinal equation from statistics that suggests how to warp an available samples set of known probability density function into a samples set with desired probability distribution. The warping structure is realized by a fully-tunable neural system implemented as a "look-up table". Learnability theorems are proven and discussed and the numerical features of the proposed methods are illustrated through computer-based experiments. Draft available at: http://www.unipg.it/sfr/publications/rng_nepl.pdf Best regards. ================================================= | Simone FIORI (Elec.Eng., Ph.D.) | | * Faculty of Engineering - Perugia University * | | * Polo Didattico e Scientifico del Ternano * | | Loc. Pentima bassa, 21 - I-05100 TERNI (Italy) | | eMail: fiori at unipg.it - Fax: +39 0744 492925 | | Web: http://www.unipg.it/sfr/ | ================================================= From Pierre.Bessiere at imag.fr Wed Oct 19 02:51:23 2005 From: Pierre.Bessiere at imag.fr (=?ISO-8859-1?Q?Pierre_Bessi=E8re?=) Date: Wed, 19 Oct 2005 08:51:23 +0200 Subject: Connectionists: BAYESIAN COGNITION Workshop, Paris, January 16 - 18, 2006 Message-ID: <2F1462BD-B11E-43C9-98C7-7D0F8FB0AFA4@imag.fr> --------------------------------- BAYESIAN COGNITION --------------------------------- International workshop on probabilistic models of perception, inference, reasoning, decision, action, learning and neural processing Paris, France, January 16 - 18, 2006 http://www.bayesian-cognition.org/ Scope and goal: --------------------- Animals and artificial systems alike are faced with the problem of making inferences about their environments and choosing appropriate responses based on incomplete, uncertain and noisy data. Probabilistic models and algorithms are flourishing in both life sciences and information sciences as ways of understanding the behavior of subjects and the neural processing underlying this behavior, and building robots and artificial agents that can function effectively in such circumstances.
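To give a concrete flavour of the kind of computation meant here, the following toy sketch (with invented numbers, not material from the workshop) performs a Bayesian update of a belief about a binary hidden state from noisy, incomplete sensor readings:

# Toy example only: Bayesian update of a belief over a binary hidden state
# ("stimulus absent" vs. "stimulus present") from noisy sensor readings.
import numpy as np

prior = np.array([0.5, 0.5])            # P(state) for [absent, present]
likelihood = np.array([[0.9, 0.1],      # P(obs | absent): cues are rare false alarms
                       [0.3, 0.7]])     # P(obs | present): cues are common but unreliable

def update(belief, obs):
    # One Bayes step: posterior is proportional to likelihood times prior.
    posterior = likelihood[:, obs] * belief
    return posterior / posterior.sum()

belief = prior
for obs in [1, 1, 0, 1]:                # an incomplete, noisy observation stream
    belief = update(belief, obs)
    print("obs=%d  P(present)=%.3f" % (obs, belief[1]))

The same recursive structure, scaled up to richer state spaces and likelihood models, is at the core of the probabilistic treatments of perception, inference and action listed in the topics below.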
This workshop will gather life and information scientists to discuss the latest advances in this subject, specifically addressing the following topics: - Probability theory as an alternative to logic - Probabilistic models of neurons and assemblies of neurons - Probabilistic models of CNS functionality - Stochastic synchronisation of neuronal assemblies - Probabilistic interpretation of psychological and psychophysical data - Probabilistic inference and learning algorithms - Probabilistic robotics Lecturers (confirmed list): ---------------------------------- - Alain Berthoz, Collège de France - Pierre Bessière, CNRS Grenoble University - Heinrich Bülthoff, Max Planck Institute - Peter Dayan, UCL - Sophie Deneve, ISC - Jacques Droulez, Collège de France - Ian Hacking, Collège de France - Ben Kuipers, University of Texas - David MacKay, Cambridge University - Pascal Mamassian, Paris V University - Jose del R. Millan, IDIAP Research Institute - Kevin Murphy, University of British Columbia - Alexandre Pouget, University of Rochester - Rajesh Rao, University of Washington - Michael Shadlen, University of Washington - Roland Siegwart, EPFL - Eero Simoncelli, New York University - Jean-Jacques Slotine, MIT - Josh Tenenbaum, MIT - Sebastian Thrun, Stanford University - Daniel Wolpert, Cambridge University Registration and complementary information: ------------------------------------------------------------- http://www.bayesian-cognition.org/ - Talks + coffee breaks and welcome cocktail on January 16th: 90 euros - Talks + coffee breaks, welcome cocktail on January 16th and lunches: 140 euros The number of participants is limited; registrations are accepted in order of arrival. _______________________________ Dr Pierre Bessière - CNRS ***************************** GRAVIR Lab INRIA 655 avenue de l'Europe 38334 Montbonnot FRANCE Mail: Pierre.Bessiere at imag.fr Http: www-laplace.imag.fr Tel: +33 4 76 61 55 09 _______________________________ From asamsono at gmu.edu Wed Oct 19 15:15:51 2005 From: asamsono at gmu.edu (Alexei V. Samsonovich) Date: Wed, 19 Oct 2005 15:15:51 -0400 Subject: Connectionists: GRA positions available - please forward Message-ID: <43569B67.6040904@gmu.edu> Dear Colleague: As a part of a research team at KIAS (GMU, Fairfax, VA), I am searching for graduate students who are interested in working for one year, starting immediately, on a very ambitious project supported by our recently funded grant. The title is "An Integrated Self-Aware Cognitive Architecture". The grant may be extended for the following years. The objective is to create a self-aware, conscious entity in a computer. This entity is expected to be capable of autonomous cognitive growth, basic human-like behavior, and the key human abilities including learning, imagery, social interactions and emotions. The agent should be able to learn autonomously in a broad range of real-world paradigms. During the first year, the official goal is to design the architecture, but we are planning implementation experiments as well. We are currently looking for several students. The available positions must be filled as soon as possible, but no later than by the beginning of the Spring 2006 semester. Specifically, we are looking for a student to work on the symbolic part of the project and a student to work on the neuromorphic part, as explained below. A symbolic student must have a strong background in computer science, plus a strong interest and an ambition toward creating a model of the human mind.
The task will be to design and to implement the core architecture, while testing its conceptual framework on selected practically interesting paradigms, and to integrate it with the neuromorphic component. Specific background and experience in one of the following areas is desirable: (1) cognitive architectures / intelligent agent design; (2) computational linguistics / natural language understanding; (3) hacking / phishing / network intrusion detection; (4) advanced robotics / computer-human interface. A neuromorphic candidate is expected to have a minimal background in one of the following three fields. (1) Modern cognitive neuropsychology, including, in particular, episodic and semantic memory, theory-of-mind, the self and emotion studies, familiarity with functional neuroanatomy, functional brain imaging data, cognitive-psychological models of memory and attention. (2) Behavioral / system-level / computational neuroscience. (3) Attractor neural network theory and computational modeling. With a background in one of the fields, the student must be willing to learn the other two fields, as the task will be to put them together in a neuromorphic hybrid architecture design (that will also include the symbolic core) and to map the result onto the human brain. Not to mention that all candidates are expected to be interested in the modern problem of consciousness, willing to learn new paradigms of research, and committed to success of the team. Given the circumstances, however, we do not expect all conditions listed above to be met. Our minimal criterion is the excitement and the desire of an applicant to build an artificial mind. I should add that this bold and seemingly risky project provides a unique in the world opportunity to engage with emergent, revolutionary activity that may change our lives. Cordially, Alexei Samsonovich -- Alexei V. Samsonovich, Ph.D., Research Assistant Professor George Mason University, Krasnow Institute for Advanced Study 4400 University Drive MS 2A1, Fairfax, VA 22030-4444, U.S.A. Office: 703-993-4385, fax: 703-993-4325, cell: 703-447-8032 From nip-lr at neuron.kaist.ac.kr Thu Oct 20 06:03:24 2005 From: nip-lr at neuron.kaist.ac.kr (nip-lr@neuron.kaist.ac.kr) Date: Thu, 20 Oct 2005 19:03:24 +0900 Subject: Connectionists: New Volume of Neural Information Processing - Letters and Reviews Message-ID: <1129802565502976.13927@webmail> A new volume of NIP-LR (Neural Information Processing - Letters and Reviews), Vol.8, Nos.1-3, July-September 2005, is now available both online and printed copy. The NIP-LR is a relatively new journal aiming high-quality timely-publication with double-blind reviews. For the online version simple visit the website at www.nip-lr.info. For the printed copy please send an e-mail request to nip-lr at neuron.kaist.ac.kr. 
Soo-Young Lee ---------------------------------------------------------------------------- ---------------------------------------------- Table of Contents Letters An Modified Error Function for the Complex-value Backpropagation Neural Networks Xiaoming Chen, Zheng Tang, Songsong Li Generative and Filtering Approaches for Overcomplete Representations Kaare Brandt Petersen, Jiucang Hao, Te-Won Lee Segmentation Method of MRI Using Fuzzy Gaussian Basis Neural Network Wei Sun, Yaonan Wang Fast Computation of Moore-Penrose Inverse Matrices Pierre Courrieu Batch Learning of the Self-Organizing Relationship (SOR) Network Takeshi Yamakawa, Keiichi Horio and Satoshi Sonoh Time Series Prediction Using an Interval Arithmetic FIR Network Ho Joon Kim, Tae-Wan Ryu Modeling with Recurrent Neural Networks using Generalized Mean Neuron Model Gunjan Gupta, R. N. Yadav, Prem K. Kalra and J. John Novelty Scene Detection Using Scan Path Topology and Energy Signature in Scaled Saliency Map Sang-Woo Ban, Woong-Jae Won, and Minho Lee From ckello at gmu.edu Thu Oct 20 09:53:03 2005 From: ckello at gmu.edu (ckello@gmu.edu) Date: Thu, 20 Oct 2005 09:53:03 -0400 Subject: Connectionists: NSF is seeking a program director in large-scale computing Message-ID: Dear Connectionists, The Behavioral and Cognitive Sciences division at the National Science Foundation is seeking to fill a rotator (temporary) program director position in large-scale (e.g., parallel) computing and "cyberinfrastructure" broadly construed (see below). The applicant must have a current position at a US institution. For those of you with computing expertise in the behavioral and cognitive sciences, this is an opportunity to help shape the future of NSF's investments in your science. Christopher Kello Program Director Perception, Action, and Cognition Program Department of Psychology George Mason University ANNOUNCEMENT NO: E20060001-IPA OPEN: 10/06/2005 CLOSE: 11/07/2005 POSITIONS WILL BE FILLED ON A ONE OR TWO YEAR INTERGOVERNMENTAL PERSONNEL ACT (IPA) ASSIGNMENT BASIS. The National Science Foundation is seeking a qualified candidate for a position as Program Director in the Division of Behavioral and Cognitive Sciences (BCS), Directorate for Social and Behavioral Sciences, Arlington, VA. The desired starting date for this appointment is January 2006. BCS supports research to develop and advance scientific knowledge focusing on human cognition, language, social behavior and culture, as well as research on the interactions between human societies and the physical environment. More information about BCS and their programs can be found on their website at http://www.nsf.gov. The Division of Behavioral and Cognitive Sciences at the NSF is seeking a Program Director who has expertise in cyber-infrastructure and one or more of its disciplinary areas. Social and behavioral research is able to address increasingly complex questions by making use of increases in computer power, processing speed, networking capacities, and data storage capacities, and expertise in the emerging cyber-infrastructure issues is sought. The Program Director would serve as a liaison for cyber-infrastructure issues within NSF and participate in activities within and across programs in the division, directorate, and foundation. 
The Program Director may also assist in administration of programs in one or more of these disciplines: Archaeology; Cognitive Neuroscience; Cultural Anthropology; Development and Learning Sciences; Geography and Regional Science; Linguistics; Perception, Action and Cognition; Physical Anthropology; or Social Psychology. For IPA assignments, the individual remains an employee on the payroll of his or her home institution and the institution continues to administer pay and benefits. NSF reimburses the institution for NSF's negotiated share of the costs. Individuals eligible for an IPA assignment include employees of State and local government agencies, institutions of higher education, Indian tribal governments, federally funded research and development centers and qualified nonprofit organizations. For more information regarding IPA assignments, visit our website at www.nsf.gov/about/career_opps. Qualifications required: Applicants must possess a Ph.D. or equivalent experience in a discipline related to social and behavioral sciences and have an active research program in a related area. In addition, six or more years of successful research, research administration, and/or managerial experience pertinent to the program are required. DUTIES AND RESPONSIBILITIES: As Program Director, directs the implementation, review, funding, post-award management, and evaluation of the program and contributes to the intellectual integration with other programs supported by the Division. Designs and implements the proposal review and evaluation process for relevant proposals. Selects well qualified individuals to provide objective reviews on proposals either as individuals or as members of a panel. Conducts final review of proposals and evaluations, and recommends acceptance or declination. Manages and monitors on-going grants, contracts, interagency and cooperative agreements to ensure fulfillment of commitments to NSF. Evaluates progress of awards through review and evaluation of reports and publications submitted by awardees and/or meetings at NSF and during site visits. Contributes to the responsibility for establishing goals and objectives, initiating new program thrusts and phasing out old projects. Recommends new or revised policies and plans in scientific, fiscal, and administrative matters to improve the activities and management of the Program. QUALIFICATIONS DESIRED: In addition to the qualifications outlined above, further qualifications desired include: * Knowledge and understanding of scientific principles and theories that underlie the study of each Program. * Research, analytical and technical writing skills, which evidence the ability to perform extensive inquiries into a wide variety of significant issues and to make recommendations and decisions based on findings. * Ability to organize, implement and manage large, multi-disciplinary, broadly based, proposal driven grant programs allocating resources to meet a broad spectrum of program goals. * Ability to meet and deal with members of the scientific community, other funding agencies and peers to effectively present and advocate program policies and plans. * Ability to work with individuals within the Program, both technical and support staff. HOW TO APPLY: Applications may be transmitted electronically to rotator at nsf.gov. Individuals may also submit a resume or any application of your choice to the National Science Foundation, Division of Human Resource Management, 4201 Wilson Blvd., Arlington, VA 22230, Attn: E20060001-IPA.
In addition, you are encouraged to submit a narrative statement that addresses your background and/or experience related to the Program you are applying for. You are asked to complete and submit the attached Applicant Survey form. Submission of this form is voluntary and will not affect your application for employment (the information is used for statistical purposes). Telephone inquiries may be referred to the Executive and Visiting Personnel Branch, at (703) 292-8755. For technical information, contact Dr. Peg (Marguerite) Barratt, Division Director, at (703) 292-8740 or mbarratt at nsf.gov. Hearing impaired individuals may call TDD (703) 292-8044. The National Science Foundation provides reasonable accommodations to applicants with disabilities on a case-by-case basis. If you need a reasonable accommodation for any part of the application and hiring process, please notify the point of contact listed on this vacancy announcement. NSF IS AN EQUAL OPPORTUNITY EMPLOYER COMMITTED TO EMPLOYING A HIGHLY QUALIFIED STAFF THAT REFLECTS THE DIVERSITY OF OUR NATION. From rsun at rpi.edu Thu Oct 20 16:20:25 2005 From: rsun at rpi.edu (Professor Ron Sun) Date: Thu, 20 Oct 2005 16:20:25 -0400 Subject: Connectionists: Cognitive Systems Research, Vol. 6, Iss. 3 and 4, 2005 Message-ID: <1BA809C5-BE7C-4B7B-98F4-94FFFC12B8CA@rpi.edu> New Issues are now available on ScienceDirect: NOTE: If the URLs in this email are not active hyperlinks, copy and paste the URL into the address/location box in your browser. ----------------------------------------------------------------------- * Cognitive Systems Research Volume 6, Issue 3, Pages 189-262 (September 2005) Epigenetic Robotics Edited by Luc Berthouze and Giorgio Metta http://www.sciencedirect.com/science/issue/ 6595-2005-999939996-596644 TABLE OF CONTENTS Epigenetic robotics: modelling cognitive development in robotic systems Pages 189-192 Luc Berthouze and Giorgio Metta http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4F1J8FY-1&md5=a6e730dcc6602ce2770e8cc701643a 81 A model of attentional impairments in autism: first steps toward a computational theory Pages 193-204 Petra Bj?rne and Christian Balkenius http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4F1J8FY-2&md5=afe39e751a2f591c7804593ab8c193 be Synching models with infants: a perceptual-level model of infant audio-visual synchrony detection Pages 205-228 Christopher G. Prince and George J. Hollich http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4F1J8FY-4&md5=c850deee36cbec77da5c4f74a1f74d e8 The evolution of imitation and mirror neurons in adaptive agents Pages 229-242 Elhanan Borenstein and Eytan Ruppin http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4F1J8FY-5&md5=de1cc98524ab5133400dd5517f61d4 ec Developmental stages of perception and language acquisition in a perceptually grounded robot Pages 243-259 Peter Ford Dominey and Jean-David Boucher http://www.sciencedirect.com/science? 
_ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4F1J8FY-3&md5=34377dba381a57278364f088e0196e 14 ----------------------------------------------------- * Cognitive Systems Research Volume 6, Issue 4, Pages 263-414 (December 2005) http://www.sciencedirect.com/science/issue/ 6595-2005-999939995-606565 TABLE OF CONTENTS When do differences matter? On-line feature extraction through cognitive economy Pages 263-281 David J. Finton http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4D75J5W-1&md5=f4b1ede099fa59af927f3889a2440a dc Experience-grounded semantics: a theory for intelligent systems Pages 282-302 Pei Wang http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4DS9026-1&md5=ff9bf1e90857ffc23559da7b6551e5 b4 A computational model of sequential movement learning with a signal mimicking dopaminergic neuron activities Pages 303-311 Wei Li, Jinghong Li and Jeffrey D. Johnson http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4F6MD7G-1&md5=5a3931fac34ae635e9b20351099db8 6e Understanding dynamic and static displays: using images to reason dynamically Pages 312-319 Sally Bogacz and J. Gregory Trafton http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4F6MD7G-2&md5=4c972633b54440549b2f68e1364eeb c1 An artificial intelligent counter Pages 320-332 Qi Zhang http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4FC3RM2-1&md5=253bfa0ee88d958aa530f8c21a1c56 e1 A cognitive model in which representations are images Pages 333-363 Janet Aisbett and Greg Gibbon http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4G7DY66-1&md5=161bdebeb758393bff6c205669790d d1 Agent communication pragmatics: the cognitive coherence approach Pages 364-395 Philippe Pasquier and Brahim Chaib-draa http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4GFVMN2-1&md5=89fa97ee6fe4ca2be7218d541fc557 93 Book reviews Review of Reductionism and the Development of Knowledge, T. Brown & L. Smith (Eds.); Mahwah, NJ: Lawrence Erlbaum, 2002. Pages 396-401 Geert Jan Boudewijnse http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4DN9DVP-3&md5=1af463f1abb48bb5c9a52887ad8d3c 29 Review of the Evolution and Function of Cognition, Lawrence Erlbaum (2003). Pages 402-404 D.M. Bernad http://www.sciencedirect.com/science? _ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4GP1VTX-1&md5=3581f5241a551708269892dfbd351b a8 Andy Clark, Natural-born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, Oxford University Press (2003). Pages 405-409 Leslie Marsh http://www.sciencedirect.com/science? 
_ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASC II&_version=1&_uoikey=B6W6C-4GJKTYX-1&md5=979c0fe014ed250a68b682a37e9e5c b8 ------------------------------------------------------------------------ ---------- See http://www.elsevier.com/locate/cogsys for further information regarding accessing these articles See the following Web page for submission, subscription, and other information regarding Cognitive Systems Research: http://www.cogsci.rpi.edu/~rsun/journal.html If you have questions about features of ScienceDirect, please access the ScienceDirect Info Site at http://www.info.sciencedirect.com ======================================================== Professor Ron Sun Cognitive Science Department Rensselaer Polytechnic Institute 110 Eighth Street, Carnegie 302A Troy, NY 12180, USA phone: 518-276-3409 fax: 518-276-3017 email: rsun at rpi.edu web: http://www.cogsci.rpi.edu/~rsun ======================================================= From verleysen at dice.ucl.ac.be Thu Oct 20 12:55:45 2005 From: verleysen at dice.ucl.ac.be (Michel Verleysen) Date: Thu, 20 Oct 2005 18:55:45 +0200 Subject: Connectionists: position at UCL Message-ID: <009701c5d597$1d4c3a00$43ed6882@maxwell.local> [with apologies for cross-posting] ********************************************************************* Research position on biomedical statistical machine learning Université catholique de Louvain (Louvain-la-Neuve, Belgium) Machine Learning Group, Faculty of Engineering, and Faculty of Medicine ********************************************************************* The Université catholique de Louvain seeks a qualified candidate for a research position in machine learning and signal processing in biomedical applications. The position is open to a candidate pursuing the PhD degree. The goal of the project, funded by local public funds, is to develop prediction systems for epileptic seizures, based on neural recordings. Machine learning and signal processing techniques will be used on recorded signals to extract patterns that are relevant in a prediction strategy. The project will be carried out in two groups: the Machine Learning Group (http://www.ucl.ac.be/mlg/) and the Neural Rehabilitation Laboratory (http://www.md.ucl.ac.be/gren/), under the supervision of Prof. Michel Verleysen and Dr. Jean Delbeke respectively. The ideal candidate will hold a degree in computer science, statistics, signal processing or related fields. She or he should have a strong background in statistics, linear algebra, and signal processing. Knowledge in statistical machine learning is an asset. She or he should have a sufficient knowledge of Matlab and C or C++ programming. A strong interest in biomedical applications is required, but no prior knowledge is necessary. Direct involvement in physiological experimentation is expected. The candidate is expected to share his/her activities between the two groups, situated respectively in Louvain-la-Neuve and Brussels (about 20 km from each other). Appointment for this PhD position is for a maximum of 3 years, provided successful progress, and should lead to a dissertation. Annual gross salary is around 30,000 euros. Starting date is immediate. Applications, including a motivation letter and detailed curriculum, should be sent to: Michel Verleysen, Université catholique de Louvain, DICE-Machine Learning group, 3 place du Levant, B-1348 Louvain-la-Neuve, Belgium. E-mail: verleysen at dice.ucl.ac.be.
Applications should be sent as soon as possible, and in no case received later than November 30, 2005. ================================================= Michel Verleysen Research Director FNRS - Lecturer UCL Universit? catholique de Louvain DICE - Machine Learning Group 3 place du Levant B-1348 Louvain-la-Neuve Belgium Tel: +32 10 47 25 51 Fax: +32 10 47 25 98 E-mail: verleysen at dice.ucl.ac.be Homepage: http://www.dice.ucl.ac.be/~verleyse ================================================= From Gunnar.Raetsch at tuebingen.mpg.de Fri Oct 21 07:26:16 2005 From: Gunnar.Raetsch at tuebingen.mpg.de (=?ISO-8859-1?Q?Gunnar_R=E4tsch?=) Date: Fri, 21 Oct 2005 13:26:16 +0200 Subject: Connectionists: NIPS Workshop on New Problems and Methods in Computational Biology Message-ID: <35F77F0F-26C5-48E6-8CB6-97BD912E66B0@tuebingen.mpg.de> Dear colleagues, I would like to invite you to participate in the workshop on New Problems and Methods in Computational Biology on the 9th of December at NIPS 2005 in Whistler, B.C. If you would like to contribute then please send an extended abstract by *November 1, 9am EST* to nips-compbio at tuebingen.mpg.de. We still have a few slots for talks available (details below). I am looking forward to meet you there! Gunnar Raetsch NIPS*05 Workshop New Problems and Methods in Computational Biology Workshop email: nips-compbio at tuebingen.mpg.de Workshop web address: http://www.fml.tuebingen.mpg.de/nipscompbio Organizers: * Gal Chechik, Department of Computer Science, Stanford University * Christina Leslie, Center for Comp. Learning Systems, Columbia University * Gunnar Raetsch, Friedrich Miescher Laboratory of the Max Planck Society * Koji Tsuda, AIST Computational Biology Research Center Workshop Description: The field of computational biology has seen a dramatic growth over the past few years, both in terms of new available data, new scientific questions and new challenges and for learning and inference. In particular, biological data is often relationally structured and highly diverse, thus requires to combine multiple weak evidence from heterogeneous sources. These could include sequenced genomes of a variety of organisms, gene expression data from multiple technologies, protein sequence and 3D structural data, protein interactions, gene ontology and pathway databases, genetic variation data, and an enormous amount of textual data in the biological and medical literature. The new types of scientific and clinical problems, require to develop new supervised and unsupervised learning approaches that can use these growing resources. The goal of this workshop is to present emerging problems and machine learning techniques in computational biology. Speakers from the biology/bioinformatics community will present current research problems in bioinformatics, and we invite contributed talks on novel learning approaches in computational biology. We encourage contributions describing either progress on new bioinformatics problems or work on established problems using methods that are substantially different from standard approaches. Kernel methods, graphical models, feature selection and other techniques applied to relevant bioinformatics problems would all be appropriate for the workshop. Submission instructions: Researchers interested in contributing should send an extended abstract of up to 4 pages (postscript or pdf format) to nips-compbio at tuebingen.mpg.de by *November 1, 9am EST*. 
The workshop organizers intend to invite submissions of full-length versions of accepted workshop contributions for publication in a special issue of BMC Bioinformatics (for information on last year's special issue cf. http://www.fml.tuebingen.mpg.de/nipscompbio/bmc). Program Committee: * Michael I. Jordan, UC Berkeley * William Stafford Noble, University of Washington * Kristin Bennett, Rensselaer Polytechnic Institute * Nello Cristianini, UC Davis * Alexander Hartemink, Duke University * Eran Segal, Stanford University * Michal Linial, The Hebrew University of Jerusalem * Klaus-Robert Mueller, Fraunhofer FIRST * Bernhard Schoelkopf, Max Planck Institute for Biol. Cybernetics * Pierre Baldi, UC Irvine (2004) * Nir Friedman, Hebrew University and Harvard (2004) * Eleazar Eskin, UC San Diego (2004) * Dan Geiger, Technion (2004) * Alexander Schliep, Max Planck Institute for Molecular Genetics (2004) * Jean-Philippe Vert, Ecole des Mines de Paris (2004) The workshop is supported by the EU PASCAL network. +-------------------------------------------------------------------+ Gunnar Rätsch http://www.fml.tuebingen.mpg.de/raetsch Friedrich Miescher Laboratory Gunnar.Raetsch at tuebingen.mpg.de Max Planck Society Tel: (+49) 7071 601 820 Spemannstraße 37, 72076 Tübingen, Germany Fax: (+49) 7071 601 455 From jqc at tuebingen.mpg.de Fri Oct 21 05:17:06 2005 From: jqc at tuebingen.mpg.de (Joaquin Quinonero Candela) Date: Fri, 21 Oct 2005 11:17:06 +0200 Subject: Connectionists: Gaussian Process Workshop at NIPS*05 Message-ID: <4358B212.7060506@tuebingen.mpg.de> Dear all, We are pleased to announce the NIPS*05 Gaussian Process Workshop: "Open Issues in Gaussian Processes for Machine Learning" http://gp.kyb.tuebingen.mpg.de which will take place in Whistler, Canada, on Saturday December 10th 2005, as part of the workshops of the 2005 Neural Information Processing Systems conference. With sponsorship from the PASCAL research network we have been able to invite the following two senior speakers from statistics to the workshop: - Tony O'Hagan, University of Sheffield - Michael Stein, University of Chicago This is not a call for papers: there will be no call for papers for this workshop! We plan to have more of a guided discussion with long selected invited talks that can be interrupted with questions at any point, and a closing round table discussion with a panel of invited experts. We encourage all to join the workshop, and to write on the wiki of the workshop's webpage any open questions you may have on GPs which you wish to see addressed at the workshop. Best wishes, The organizers Joaquin Quiñonero-Candela, Max Planck Institute for Biol. Cybernetics Carl Edward Rasmussen, Max Planck Institute for Biol. Cybernetics Zoubin Ghahramani, University College London and Cambridge University -- +-------------------------------------------------------------------+ Dr. Joaquin Quiñonero-Candela http://www.tuebingen.mpg.de/~jqc Max Planck Institute for jqc at tuebingen.mpg.de Biological Cybernetics Tel: (+49) 7071 601 553 Spemannstraße 38, 72076 Tübingen, Germany Fax: (+49) 7071 601 552 From erik at tnb.ua.ac.be Fri Oct 21 06:23:20 2005 From: erik at tnb.ua.ac.be (Erik De Schutter) Date: Fri, 21 Oct 2005 12:23:20 +0200 Subject: Connectionists: Neuro-IT Cerebellar Modeling Workshop Message-ID: <74385561-EB7A-4147-AC28-5C8114808736@tnb.ua.ac.be> A two-day cerebellar modeling workshop will take place at the University of Antwerp, Belgium on December 5-6, 2005. Invited speakers include N. Brunel (Paris, France), G.
Chauvet (Angers, France), E. d'Angelo (Pavia, Italy), C. Darlot (Paris, France), P. Dean (Sheffield, UK), E. De Schutter (Antwerp, Belgium), C. De Zeeuw (Rotterdam, Netherlands), M. Hausser (London, UK), R. Maex (Antwerp, Belgium), A. Silver (London, UK), P. Verschure (Zurich, Switzerland), Y. Yarom (Jerusalem, Israel). Full program and (free) registration can be found at http://www.neuroinf.org/workshop/neuroit05/ From caruana at cs.cornell.edu Sat Oct 22 13:44:15 2005 From: caruana at cs.cornell.edu (Richard Caruana) Date: Sat, 22 Oct 2005 13:44:15 -0400 Subject: Connectionists: CFP: NIPS 2005 Workshop: Inductive Transfer (Learning-to-Learn) Message-ID: <200510221744.j9MHiGI16152@zinger.cs.cornell.edu> NIPS 2005 Workshop - Inductive Transfer: 10 Years Later --------------------------------------------------------- Friday Dec 9, Westin Resort & Spa, Whistler, B.C., Canada Overview: --------- Inductive transfer refers to the problem of applying the knowledge learned in one or more tasks to learning for a new task. While all learning involves generalization across problem instances, transfer learning emphasizes the transfer of knowledge across domains, tasks, and distributions that are related, but not the same. For example, learning to recognize chairs might help to recognize tables; or learning to play checkers might improve learning of chess. While people are adept at inductive transfer, even across widely disparate domains, currently we have little learning theory to explain this phenomenon and few systems exhibit knowledge transfer. At NIPS95 two of the current co-chairs led a successful two-day workshop on "Learning to Learn" that focused on lifelong machine learning methods that retain and reuse learned knowledge. (The co-organizers of the NIPS95 workshop were Jon Baxter, Rich Caruana, Tom Mitchell, Lorien Pratt, Danny Silver, and Sebastian Thrun.) The fundamental motivation for that meeting was the belief that machine learning systems would benefit from re-using knowledge learned from related and/or prior experience and that this would enable them to move beyond task-specific tabula rasa systems. The workshop resulted in a series of articles published in a special issue of Connection Science [CS 1996], Machine Learning [vol. 28, 1997] and a book entitled "Learning to Learn" [Pratt and Thrun 1998]. Research in inductive transfer has continued since 1995 under a variety of names: learning to learn, life-long learning, knowledge transfer, transfer learning, multitask learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, meta-learning, and incremental/cumulative learning. The recent burst of activity in this area is illustrated by the research in multi-task learning within the kernel and Bayesian contexts that has established new frameworks for capturing task relatedness to improve learning [Ando and Zhang 04, Bakker and Heskes 03, Jebara 04, Evgeniou and Pontil 04, Evgeniou, Micchelli and Pontil 05, Chapelle and Harchaoui 05]. This NIPS 2005 workshop will examine the progress that has been made in ten years, the questions and challenges that remain, and the opportunities for new applications of inductive transfer systems.
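To give a concrete, deliberately simplified flavour of the multi-task setting mentioned above, the following sketch assumes synthetic data and a shared-plus-deviation parameterization; it is not an algorithm from the papers cited, but illustrates how related regression tasks can share statistical strength through a jointly trained common weight vector:

# Deliberately simple multi-task sketch: each task's weights are w0 + V[t],
# with the task-specific deviations V[t] penalized more heavily than the
# shared part w0, so tasks with little data borrow strength from each other.
import numpy as np

rng = np.random.default_rng(0)
d, n_tasks, n_per_task = 5, 4, 20
w_true = rng.normal(size=d)                      # common underlying signal
tasks = []
for _ in range(n_tasks):
    X = rng.normal(size=(n_per_task, d))
    y = X @ (w_true + 0.1 * rng.normal(size=d)) + 0.1 * rng.normal(size=n_per_task)
    tasks.append((X, y))

lam_shared, lam_task, lr = 0.1, 10.0, 0.001
w0 = np.zeros(d)                                 # shared weights
V = np.zeros((n_tasks, d))                       # per-task deviations

for _ in range(5000):                            # gradient descent on the joint loss
    g0 = 2.0 * lam_shared * w0
    gV = 2.0 * lam_task * V
    for t, (X, y) in enumerate(tasks):
        g = 2.0 * X.T @ (X @ (w0 + V[t]) - y)
        g0 += g
        gV[t] += g
    w0 -= lr * g0
    V -= lr * gV

print("deviation of shared weights from the generating signal:")
print(np.round(w0 - w_true, 2))

The heavier penalty on the task-specific deviations is what produces transfer; setting lam_task to zero recovers independent single-task regression.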
In particular, the workshop organizers have identified three major goals: (1) To summarize the work thus far in inductive transfer to develop a taxonomy of research and highlight open questions, (2) To share new theories, approaches, and algorithms regarding the accumulation and re-use of learned knowledge to make learning more effective and more efficient, (3) To discuss the formation of an inductive transfer special interest group that might offer a website, benchmarking data, shared software, and links to various research programs and related web resources. Call for Papers: ---------------- We invite submission of workshop papers that discuss ongoing or completed work dealing with Inductive Transfer (see below for a list of appropriate topics). Papers should be no more than four pages in the standard NIPS format. Authorship should not be blind. Please submit a paper by emailing it in Postscript or PDF format to danny.silver at acadiau.ca with the subject line "ITWS Submission". We anticipate accepting as many as 8 papers for 15 minute presentation slots and up to 20 poster papers. Please only submit an article if at least one of the authors will attend the workshop to present the work. The successful papers will be made available on the Web. A special journal issue or an edited book of selected papers also is being planned. The 1995 workshop identified the most important areas for future research to be: * The relationship between computational learning theory and selective inductive bias; * The tradeoffs between storing or transferring knowledge in representational and functional form; * Methods of turning concurrent parallel learning into sequential lifelong learning methods; * Measuring relatedness between learning tasks for the purpose of knowledge transfer; * Long-term memory methods and cumulative learning; and * The practical applications of inductive transfer and lifelong learning systems. The workshop is interested in the progress that has been made in these areas over the last ten years. These remain key topics for discussion at the proposed workshop. More forward looking and important questions include: * Under what conditions is inductive transfer difficult? When is it easy? * What are the fundamental requirements for continual learning and transfer? * What new mathematical models/frameworks capture/demonstrate transfer learning? * What are some of latest and most advanced demonstrations of transfer learning in machines * What can be learned from transfer learning in humans and animals? * What are the latest psychological/neurological/computational theories of knowledge transfer in learning? Important Dates: ---------------- 19 Sep 05 - Call for participation 28 Oct 05 - Paper submission deadline 11 Nov 05 - Notification of paper acceptance 09 Dec 05 - Workshop in Whistler Organizers: -------------- Goekhan Bakir, Max Planck Institute for Biological Cybernetics, Germany Kristin Bennett, Department of Mathematical Sciences, Rensselaer Polytechnic Institute, USA Rich Caruana, Department of Computer Science, Cornell University, USA Massimiliano Pontil, Dept. of Computer Science, University College London, UK Stuart Russell, Computer Science Division, University of California, Berkeley, USA Danny Silver, Jodrey School of Computer Science, Acadia University, Canada Prasad Tadepalli, School of Electrical Eng. 
and Computer Science, Oregon State University, USA For further Information: ------------------------ See the workshop webpage at http://iitrl.acadiau.ca/itws05/ or email danny.silver at acadiau.ca From Ke.Chen at manchester.ac.uk Mon Oct 24 22:39:53 2005 From: Ke.Chen at manchester.ac.uk (Ke CHEN) Date: Tue, 25 Oct 2005 03:39:53 +0100 Subject: Connectionists: call-for-paper: WCCI'06-IJCNN'06 Special Session on Natural Computation for Temporal Data Processing Message-ID: <435D9AF9.506@manchester.ac.uk> Dear Coordinators, I would appreciate it if you could help us solicit our call-for-paper by including it in your maillist. Best regards, Ke -------------------------------------------------------------------------------------------------------------------------------- CALL for Papers WCCI'06- IJCNN'06 Special Session on Natural Computation for Temporal Data Processing Ke Chen, Kar-Ann Toh and Peter Tino Aims and scope -------------- Temporal/sequential data are ubiquitous in the real world. Proper treatment of data with temporal dependencies is essential in many application areas ranging from multimedia data processing to bioinformatics.The field has attracted a substantial amount of interest from a wide range of disciplines. This special session attempts to explore/exploit the natural computation approaches, e.g. neural computation, statistical learning, evolutionary computation, fuzzy systems and various hybrid methods, for coping with diverse forms of time dependency and data types as well as their applications in real world problems, e.g. multimedia data processing, biometrics, bioinformatics and financial data processing. The purpose of this session is to bring together researchers and practitioners working in the area from various disciplines of natural computation. The session serves as a forum enabling experience exchange between academia and industry, as well as between researchers working in different research branches. Topics -------- Topics of interest include but are not limited to: * biologically plausible/natural methods for temporal data processing * clustering/classification techniques for temporal data * feature extraction and representations of temporal data * time-series modeling/forecasting * temporal information processing and fusion * representation/analysis/recognition of actions and events from temporal data * temporal information indexing and retrieval * temporal data mining and knowledge discovery Paper submission ---------------- The paper format should be the same as that required for a regular paper submission in WCCI'06. For details, see http://www.wcci2006.org/WCCI-Web_ifa.html. Papers should be submitted via the conference on-line web submission system under the title of this session (please follow the above link to find instructions). All submissions will go through a peer-review process based on similar criteria used for contributed papers in WCCI'06. 
Important dates --------------- Paper Submission: January 31, 2006 Decision Notification: March 15, 2006 Camera-Ready Submission: April 15, 2006 Session organizers ------------------ Ke Chen School of Informatics The University of Manchester Manchester M60 1QD United Kingdom Email: Ke.Chen at manchester.ac.uk Tel: +44 161 306 4565 Fax: +44 161 306 1281 Kar-Ann Toh Biometrics Engineering Research Center School of Electrical & Electronic Engineering Yonsei University, Seoul, Korea Email: katoh at yonsei.ac.kr Tel: +82-2-2123-5864 Fax: +82-2-312-4584 Peter Tino School of Computer Science The University of Birmingham Edgbaston, Birmingham B15 2TT United Kingdom Email: P.Tino at cs.bham.ac.uk Tel: +44 121 414 8558 Fax: +44 121 414 4281

From bisant at umbc.edu Mon Oct 24 17:40:52 2005 From: bisant at umbc.edu (D Bisant) Date: Mon, 24 Oct 2005 17:40:52 -0400 Subject: Connectionists: FLAIRS06, Nnet Special Track, Schedule Correction Message-ID: <435D54E4.2030801@umbc.edu> A posting was made last week for the FLAIRS06 Neural Network Special Track in Melbourne Beach, Florida. The paper submission deadline given there was incorrect; the date should be Nov 21. The correct dates and other information can be found at http://www.indiana.edu/~flairs06/. My apologies, David Bisant, PhD

From oby at cs.tu-berlin.de Mon Oct 24 05:33:15 2005 From: oby at cs.tu-berlin.de (Klaus Obermayer) Date: Mon, 24 Oct 2005 11:33:15 +0200 (MEST) Subject: Connectionists: tenured faculty position Message-ID: Dear All, below please find the advertisement for the tenured faculty position Modeling of Cognitive Processes within the Department of Computer Science and Electrical Engineering of the Berlin University of Technology and the recently established Bernstein Center for Computational Neuroscience Berlin. The Berlin Bernstein Center, which is funded by the German federal government, integrates interdisciplinary research initiatives in the brain sciences and in AI across the city's three major universities and across several research institutes in Berlin, covering neuroscience, medicine, physics, mathematics, computer science and engineering. The Center plans to launch an international Master/PhD program in Computational Neuroscience by fall 2006. The Department of Computer Science and Engineering is currently creating a focus area in AI and machine learning with at least four core faculty positions (Artificial Intelligence, Machine Learning, Modelling of Cognitive Processes, Neural Information Processing) and several other labs with AI-related research. The Department will introduce "intelligent systems" as an area of specialization in its new Master program in Computer Science by fall 2006. More information about the Center and TU Berlin can be found via http://www.tu-berlin.de/eng/index.html and http://www.bccn-berlin.de/, but I am also happy to answer any questions related to the position and our Berlin research environment. Note that proficiency in German is *not* a requirement, as all the relevant courses - at least during the first years - will be taught in English. All the best Klaus ------------------------------------------------------------------------- The Department of Electrical Engineering and Computer Science at the Technische Universität Berlin invites applications for a tenured faculty position Modeling of Cognitive Processes (W2) The position is associated with the recently established Bernstein Center for Computational Neuroscience Berlin (http://www.bccn-berlin.de).
The Professorship is devoted to the development of quantitative models of higher brain functions (as inferred, for example, from non-invasive methods like EEG or fMRI) in order to better understand the neural basis of cognitive processes. Modeling work should be complemented by application-oriented research in machine intelligence and artificial cognitive systems (e.g. autonomous intelligent agents, man-machine systems, etc.). The successful candidate is expected to establish a cooperative, innovative research program and have a strong commitment to excellence in undergraduate and graduate teaching at the TU department as well as within the Bernstein Center. The Technische Universität Berlin is an equal opportunity employer, committed to the advancement of individuals without regard to ethnicity, religion, sex, age, disability, or any other protected status. Applications should include a CV, a summary of teaching and research experience, a list of publications and funding, a statement of research interests, and up to five selected publications. For legal details also see (BerlHG, Par. 100) http://www.bccn-berlin.de/positions/berlhg-p-100. Applications should be sent by Nov. 21st, 2005, to the Dekanat, Fakultät IV, TU Berlin, Franklinstrasse 28/29, 10587 Berlin, Germany, and by email to Prof. Dr. Klaus Obermayer (oby at cs.tu-berlin.de) to speed up the search process. ============================================================================= Prof. Dr. Klaus Obermayer phone: 49-30-314-73442 FR2-1, NI, Fakultaet IV 49-30-314-73120 Technische Universitaet Berlin fax: 49-30-314-73121 Franklinstrasse 28/29 e-mail: oby at cs.tu-berlin.de 10587 Berlin, Germany http://ni.cs.tu-berlin.de/

From rb60 at st-andrews.ac.uk Mon Oct 24 18:26:42 2005 From: rb60 at st-andrews.ac.uk (rb60@st-andrews.ac.uk) Date: Mon, 24 Oct 2005 23:26:42 +0100 Subject: Connectionists: Lectureship and Postdoctoral Fellowship at St Andrews, UK Message-ID: <1130192802.435d5fa28e85a@webmail.st-andrews.ac.uk> Lectureship and Postdoctoral Fellowship School of Computer Science University of St Andrews Scotland (UK) We are seeking candidates for a Lectureship (permanent) and a Postdoctoral Fellowship (2 years) to support the recently formed Cognitive Systems research group led by Professor Rens Bod. Possible areas of expertise include but are not limited to data-oriented parsing, statistical natural language processing, unsupervised language learning, computational musical analysis and case-based reasoning. Closing Date: 24th November 2005 Further details about these vacancies are given at http://www.dcs.st-andrews.ac.uk/news/vacancies/2005-11-24.php Further details on the Cognitive Systems group are given at http://cogsys.dcs.st-and.ac.uk/ ------------------------------------------------------------------ University of St Andrews Webmail: https://webmail.st-andrews.ac.uk

From tgd at eecs.oregonstate.edu Mon Oct 24 11:09:21 2005 From: tgd at eecs.oregonstate.edu (Thomas G. Dietterich) Date: Mon, 24 Oct 2005 08:09:21 -0700 Subject: Connectionists: Postdoc positions at Oregon State University Message-ID: <8053-Mon24Oct2005080921-0700-tgd@cs.orst.edu> We have several post-doc positions available. --Tom Dietterich ---------------------------------------------------------------------- Oregon State University School of Electrical Engineering and Computer Science One or more Research Associate positions in the Machine Learning, Computer Graphics, and Computer Vision groups starting January 2006. Required qualifications include a Ph.D.
in computer science or a related field; strong mathematical background; experience with at least 3 of the following: (a) knowledge representation frameworks (logical and probabilistic), (b) reasoning methods (logical and probabilistic), (c) experimental machine learning research, (d) planning and reasoning algorithms, (e) virtual environments for training, (f) computer vision for object recognition and tracking, (g) augmented reality; excellent written and spoken communication skills; excellent programming and software engineering skills; excitement about computer science research; and the ability to manage graduate and undergraduate students working on research projects. The position is full-time, 12-month, fixed-term, with reappointment at the discretion of the hiring official. For full consideration, applications must be received by 11/15/05. Send resume, letter of interest, evidence of 2 relevant publications and 3 professional references with address and phone number to: Research Associate Search, 1148 Kelly Engineering Center, Corvallis, OR 97331-5501. For the full position announcement see: http://oregonstate.edu/jobs; for other inquiries contact Thomas G. Dietterich (tgd at cs.orst.edu). OSU is an AA/EOE.

From Martin.Riedmiller at uos.de Tue Oct 25 13:28:32 2005 From: Martin.Riedmiller at uos.de (Martin Riedmiller) Date: Tue, 25 Oct 2005 19:28:32 +0200 Subject: Connectionists: CFP: NIPS Workshop on RL - Benchmarks and Bake-offs II Message-ID: <435E6B40.8080903@uos.de> ************************************************************************ FINAL CALL FOR PAPERS ---- Reinforcement Learning Benchmarks and Bake-offs II ---- Workshop at the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005) Whistler, Canada, Friday December 9, 2005 http://www.ni.uos.de/rl_workshop05 -- Submission Deadline: November 4th, 2005 -- ************************************************************************ [ Apologies for multiple postings ] OVERVIEW -------- It is widely agreed that the field of reinforcement learning would benefit from the establishment of standard benchmark problems and perhaps regular competitive events (bake-offs). Competitions can greatly increase the interest and focus in an area by clarifying its objectives and challenges, publicly acknowledging the best algorithms, and generally making the area more exciting and enjoyable. Standard benchmarks can make it much easier to apply new algorithms to existing problems and thus provide clear first steps toward their evaluation. The workshop will be organized around two main themes. Theme 1 will be the 1st RL benchmarking event (see http://www.cs.rutgers.edu/~mlittman/topics/nips05-mdp). Theme 2 will be an in-depth discussion of issues related to RL benchmarking. Topics will include concrete proposals on how to organize an RL benchmark competition, proposals for benchmark domains and appropriate performance measures, and proposals for standardized software frameworks. Potential participants are encouraged to submit short papers (one page in length) summarizing their views and outlining their proposals. FORMAT ------ This is a one-day workshop that will follow the 19th Annual Conference on Neural Information Processing Systems (NIPS 2005). The workshop will consist of two 3-hour sessions. RL Benchmarking Event ------------- See http://www.cs.rutgers.edu/~mlittman/topics/nips05-mdp Contributed Talks ----------------- These will be based on papers submitted for review. See below for details.
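To make the notion of a "standardized software framework" concrete, the sketch below shows one minimal form such a framework could take: an episodic environment interface plus an evaluation loop computing average return. It is a purely hypothetical illustration; the names Environment, RandomWalk and evaluate, and the (observation, reward, terminal) step convention, are assumptions for this sketch and do not describe any existing benchmark suite.

# Illustrative sketch only: a minimal episodic-environment interface of the
# kind a standardized RL benchmark framework might define.  All names here
# (Environment, RandomWalk, evaluate) are hypothetical.
import random


class Environment:
    """Abstract benchmark task: episodic interaction via reset() and step()."""

    def reset(self):
        """Start a new episode and return the initial observation."""
        raise NotImplementedError

    def step(self, action):
        """Apply an action; return (observation, reward, terminal)."""
        raise NotImplementedError


class RandomWalk(Environment):
    """Toy task: a 1-D walk with +1 reward for reaching the right end."""

    def __init__(self, size=7):
        self.size = size

    def reset(self):
        self.state = self.size // 2
        return self.state

    def step(self, action):          # action is -1 (left) or +1 (right)
        self.state += action
        if self.state <= 0:
            return self.state, 0.0, True
        if self.state >= self.size - 1:
            return self.state, 1.0, True
        return self.state, 0.0, False


def evaluate(env, policy, episodes=1000):
    """One possible performance measure: average undiscounted return."""
    total = 0.0
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            obs, reward, done = env.step(policy(obs))
            total += reward
    return total / episodes


if __name__ == "__main__":
    random_policy = lambda obs: random.choice([-1, +1])
    print("average return:", evaluate(RandomWalk(), random_policy))

Fixing an interface like this is what would let the same agent code run unchanged across all benchmark domains, and let a performance measure such as average return be computed identically for every submission.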
CALL FOR PAPERS --------------- We invite submissions of extended abstracts addressing all aspects of benchmarking in RL, e.g. proposals for benchmarks, performance measures, software, ... An important criterion for acceptance is the public availability of proposed benchmark problems. We are particularly interested in papers that point to open questions and stimulate discussion. Submission Instructions ----------------------- Submissions should be an extended abstract of at most 1 page in length. Email submissions (in pdf or ps format only) to martin.riedmiller at uos.de with the subject line "RL Workshop". The deadline for submissions is Friday, November 4th. Submissions will be reviewed by the program committee and authors will be notified of acceptance/rejection decisions by Friday, November 18th. Please note that one author of each accepted paper must be available to present the paper at the workshop. IMPORTANT DATES --------------- Paper submission deadline -- November 4, 2005 Notification of decisions -- November 18, 2005 Workshop -- December 9, 2005 ORGANIZERS ---------- Martin Riedmiller (contact person), Univ. of Osnabrueck Michael L. Littman (benchmarking event), Rutgers University Nikos Vlassis, University of Amsterdam Shimon Whiteson, UT Austin Adam White, U Alberta Michail G. Lagoudakis, Technical University of Crete CONTACT ------- Please direct any questions to martin.riedmiller at uos.de ************************************************************************

From anderson at cog.brown.edu Tue Oct 25 15:12:11 2005 From: anderson at cog.brown.edu (Jim Anderson) Date: Tue, 25 Oct 2005 15:12:11 -0400 (EDT) Subject: Connectionists: Position at Brown Message-ID: COMPUTATIONAL MODELING, BROWN UNIVERSITY: The Department of Cognitive and Linguistic Sciences invites applications for a position as Assistant Professor in the computational modeling of human cognitive systems, beginning July 1, 2006. Applicants must have a strong computational or theoretical research program in an area such as modeling of cognitive or language processing, computational neuroscience, computational linguistics, computational vision, dynamical systems, learning, or motor control. Integrated experimental research or previous collaboration with experimentalists is highly desirable. Candidates should also have a broad teaching ability in the cognitive sciences at both the undergraduate and graduate levels and an interest in contributing to interdisciplinary research and education. Brown benefits from an interactive environment with exceptional students and faculty pursuing multidisciplinary research in the brain sciences. Criteria for each rank are available on request; all Ph.D. requirements must be completed before July 1, 2006. Women and minorities are especially encouraged to apply. Send curriculum vitae, reprints and preprints of publications, a one-page statement of research and teaching interests, and three letters of reference to: Computational Search Committee, Dept. of Cognitive and Linguistic Sciences, Brown University, Providence, R.I. 02912 USA by December 15, 2005.
Brown University is an Affirmative Action Employer.

From shivani at csail.mit.edu Wed Oct 26 15:17:27 2005 From: shivani at csail.mit.edu (Shivani Agarwal) Date: Wed, 26 Oct 2005 15:17:27 -0400 (EDT) Subject: Connectionists: NIPS 2005 Workshop - Learning to Rank - Abstracts invited Message-ID: ** Abstract submission deadline: November 1, 2005 ** LEARNING TO RANK Workshop at NIPS 2005 http://web.mit.edu/shivani/www/Ranking-NIPS-05/ In response to strong demand, we are opening some slots for short presentations, which will consist of either short talks or poster presentations. These will be based on submissions of extended abstracts, 2-4 pages in length in NIPS format, due November 1, 2005. See the workshop webpage for detailed submission instructions. Regular papers that are not accepted for a regular presentation will also automatically be considered for these short presentations. Please direct any questions to shivani at mit.edu. Shivani Agarwal, Corinna Cortes, Ralf Herbrich Workshop Organizers

From oza at email.arc.nasa.gov Wed Oct 26 15:44:14 2005 From: oza at email.arc.nasa.gov (Nikunj Oza) Date: Wed, 26 Oct 2005 12:44:14 -0700 Subject: Connectionists: CFP: Information Fusion Journal - Special Issue on Applications of Ensemble Methods (second notice) Message-ID: <435FDC8E.3040400@email.arc.nasa.gov> APOLOGIES FOR MULTIPLE COPIES (This is a second announcement sent as a reminder. The first announcement was sent August 15, 2005.) Call for papers for a special issue of Information Fusion, An International Journal on Multi-Sensor, Multi-Source Information Fusion, An Elsevier Publication, on APPLICATIONS OF ENSEMBLE METHODS Editor-in-Chief: Dr. Belur V. Dasarathy, FIEEE d.belur at elsevier.com http://belur.no-ip.com Guest Editors: Nikunj C. Oza, Kagan Tumer The Information Fusion Journal is planning a special issue devoted to Applications of Ensemble Methods in Machine Learning and Pattern Recognition. Ensembles, also known as Multiple Classifier Systems (MCSs) and Committee Classifiers, were originally motivated by the desire to avoid relying on just one learned model when only a small amount of training data is available. Because of this, most studies on ensembles have evaluated their new algorithms on relatively small datasets, most notably datasets from the University of California, Irvine (UCI) Machine Learning Repository. However, modern data mining problems raise a variety of issues very different from the ones ensembles have traditionally addressed. These new problems include too much data; data that are distributed, noisy, and drawn from changing environments; and performance measures different from the standard accuracy measure, among others. The aim of this issue is to examine the different applications that raise these modern data mining problems, and how current and novel ensemble methods aid in solving them. Manuscripts (which should be original and not previously published or presented, even in a more or less similar form, in any other forum) covering new applications as well as the theories and algorithms of ensemble learning developed to address these applications are invited. Contributions should be described in sufficient detail to be reproducible on the basis of the material presented in the paper.
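The basic recipe behind ensembles is simple: train several base models, often on resampled versions of the training data, and combine their predictions, for example by majority vote, so that no single learned model has to be relied on alone. The sketch below is a purely illustrative toy example in plain Python; the DecisionStump and VotingEnsemble classes are hypothetical and are not the algorithms or software solicited by this special issue.

# Illustrative sketch of a majority-vote ensemble (committee classifier).
# Base classifiers are assumed to expose fit(X, y) and predict(X); the
# DecisionStump below is a deliberately simple, hypothetical base learner.
from collections import Counter
import random


class DecisionStump:
    """Threshold on a single randomly chosen feature (toy base learner)."""

    def fit(self, X, y):
        self.feature = random.randrange(len(X[0]))
        values = sorted(x[self.feature] for x in X)
        self.threshold = values[len(values) // 2]
        above = [lab for x, lab in zip(X, y) if x[self.feature] > self.threshold]
        below = [lab for x, lab in zip(X, y) if x[self.feature] <= self.threshold]
        self.above_label = Counter(above or y).most_common(1)[0][0]
        self.below_label = Counter(below or y).most_common(1)[0][0]
        return self

    def predict(self, X):
        return [self.above_label if x[self.feature] > self.threshold
                else self.below_label for x in X]


class VotingEnsemble:
    """Train each member on a bootstrap sample; predict by majority vote."""

    def __init__(self, make_learner, n_members=11):
        self.members = [make_learner() for _ in range(n_members)]

    def fit(self, X, y):
        n = len(X)
        for m in self.members:
            idx = [random.randrange(n) for _ in range(n)]   # bootstrap resample
            m.fit([X[i] for i in idx], [y[i] for i in idx])
        return self

    def predict(self, X):
        votes = [m.predict(X) for m in self.members]        # members x examples
        return [Counter(col).most_common(1)[0][0] for col in zip(*votes)]


if __name__ == "__main__":
    # Tiny synthetic problem: label is 1 when the sum of the features is positive.
    X = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(200)]
    y = [1 if sum(x) > 0 else 0 for x in X]
    model = VotingEnsemble(DecisionStump).fit(X, y)
    acc = sum(p == t for p, t in zip(model.predict(X), y)) / len(y)
    print("training accuracy:", acc)

Much of the ensemble literature varies exactly these two ingredients: how the members are trained (bagging, boosting, feature subsets) and how their outputs are fused (voting, averaging, stacking); the applications sought here stress those ingredients with large, distributed, noisy or non-stationary data.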
Topics appropriate for this special issue include, but are not limited to: * Innovative applications of ensemble methods. * Novel algorithms that address unique requirements (for example, different performance measures or running time constraints) of an application or a class of applications. * Novel theories developed under assumptions unique to an application or a class of applications. * Novel approaches to distributed model fusion. Manuscripts should be submitted electronically online at http://ees.elsevier.com/inffus (the corresponding author will have to create a user profile if one has not been established at Elsevier before). Please also send without fail an electronic copy to oza at email.arc.nasa.gov (PDF format preferred). Guest Editors: Nikunj C. Oza and Kagan Tumer, NASA Ames Research Center, Mail Stop 269-3, Moffett Field, CA 94035-1000, USA. Deadline for Submission: November 30, 2005 -- -------------------------------------- Nikunj C. Oza, Ph.D. Tel: (650)604-2978 Research Scientist Fax: (650)604-4036 NASA Ames Research Center E-mail: oza at email.arc.nasa.gov Mail Stop 269-3 Web: http://ic.arc.nasa.gov/people/oza Moffett Field, CA 94035-1000 USA

From doug.aberdeen at anu.edu.au Thu Oct 27 09:03:47 2005 From: doug.aberdeen at anu.edu.au (Douglas Aberdeen) Date: Thu, 27 Oct 2005 23:03:47 +1000 Subject: Connectionists: Invitation to the Machine Learning Summer School, Canberra, 2006 Message-ID: <18D459DA-F5EC-469D-A8B7-900B8A5148D5@anu.edu.au> Call for Students: Machine Learning Summer School, Canberra, 2006 Quick Link: http://canberra06.mlss.cc Since 2002, the Machine Learning Summer Schools have given leaders in the field of ML an opportunity to lecture to the best students in ML. The first 2006 school will cover a broad range of subjects, from the foundations of learning theory to state-of-the-art applications. Dates: Monday 6 February to Friday 17 February 2006 (2 weeks) Location: Australian National University, Canberra, Australia Early Registration Deadline: December 31, 2005 Past Schools: http://www.mlss.cc Poster (please help advertise the MLSS series in your school): http://rsise.anu.edu.au/~daa/mlss06/flyer.pdf Details: -------- The Machine Learning Summer School is a series of master classes in Machine Learning, conducted by experts in the field. There are typically 5 or 6 key speakers who present for up to 8 hours each, plus sundry other speakers. Over two weeks the lectures go from the foundations of statistical learning theory up to state-of-the-art applications such as brain/computer interfaces and information retrieval. The target audience is anyone starting out in the field of Machine Learning. This includes o postgraduates and postdocs in Machine Learning or statistics; o researchers entering the field for the first time; o industry researchers. This year we have a particularly diverse set of topics, including o learning theory, o kernel methods, o nonparametric methods, o reinforcement learning, o planning and learning; and a number of application areas, including o information retrieval, o image processing, o brain/computer interfaces. A limited number of travel support scholarships and registration fee waivers may be available for students. For more information, including the preliminary speaker list, go to http://canberra06.mlss.cc, or contact doug.aberdeen at anu.edu.au. Come and enjoy an Australian summer with plenty of BBQs. I hope to see you there!
-- Dr Douglas Aberdeen for the Machine Learning Summer School National ICT Australia

From m.pontil at cs.ucl.ac.uk Sat Oct 29 17:32:21 2005 From: m.pontil at cs.ucl.ac.uk (Massimiliano Pontil) Date: Sat, 29 Oct 2005 22:32:21 +0100 Subject: Connectionists: Machine Learning Faculty Positions @ UCL Message-ID: <27E26778-1F8B-42F8-91C8-CCCC2A1BC709@cs.ucl.ac.uk> Job Announcement: --------------------------- UCL Department of Computer Science Director, UCL Centre for Computational Statistics and Machine Learning Reader/Senior Lecturer Post We are looking for world-class research talent to join us. We are specifically recruiting to a new senior faculty position in the areas of machine learning and computational statistics. The appointee will assume leadership of a new interdisciplinary UCL Centre for Computational Statistics and Machine Learning. The Gatsby Computational Neuroscience Unit is concurrently recruiting to both senior and junior faculty positions in the same areas, reflecting a strategic UCL commitment to research on machine learning. We share a strong commitment to experimental research and to UCL's tradition of interdisciplinary research. There is also active involvement in this area from the Departments of Statistical Science and Physics. Candidates for a post in Computer Science should be interested in innovative and challenging teaching at both the core and edges of computer science. For a suitably qualified candidate, an appointment at Professorial level will be considered. You can find out more about the Department of Computer Science at http://www.cs.ucl.ac.uk and about the Gatsby Computational Neuroscience Unit at http://www.gatsby.ucl.ac.uk/ Further details of the posts and the application procedure can be found at http://www.cs.ucl.ac.uk/vacancies Unless otherwise requested, applicants will also be considered by the Gatsby Computational Neuroscience Unit. For informal enquiries please contact Anthony Finkelstein at a.finkelstein at cs.ucl.ac.uk. The closing date for applications is 5th January 2006.

From hugh.chipman at acadiau.ca Fri Oct 28 08:20:14 2005 From: hugh.chipman at acadiau.ca (Hugh Chipman) Date: Fri, 28 Oct 2005 09:20:14 -0300 Subject: Connectionists: Postdoctoral Fellowship - Statistical Learning with Graph-Structured Data Message-ID: Acadia University, Wolfville, NS, Canada Postdoctoral Fellowship Statistical Learning with Graph-Structured Data The Department of Mathematics and Statistics invites applications for a Postdoctoral Fellowship in Statistical Learning with Graph-Structured Data. Recent or expected Ph.D., to start January 2006. One-year position, with possible renewal for a second year. Analysis of network data, social network modelling, and data visualization. Desired skills/background: statistical computation, modelling with large data sets, and familiarity with supervised/unsupervised statistical learning methods. See http://ace.acadiau.ca/math/postdoc.htm for details. Email a CV, statement of research interests, and names of three potential referees to statpostdoc at acadiau.ca. Review of applicants will commence November 1, 2005, and will continue until the position is filled.
From rb60 at st-andrews.ac.uk Fri Oct 28 11:15:06 2005 From: rb60 at st-andrews.ac.uk (rb60@st-andrews.ac.uk) Date: Fri, 28 Oct 2005 16:15:06 +0100 Subject: Connectionists: Two PhD Studentships in Data-Oriented Parsing at St Andrews (UK) Message-ID: <1130512506.4362407aa1501@webmail.st-andrews.ac.uk> Two PhD Studentships in Data-Oriented Parsing School of Computer Science University of St Andrews Scotland (UK) We are seeking candidates for two fully funded PhD Studentships (3.5 years each) to reinforce the recently formed Cognitive Systems research group led by Professor Rens Bod. Candidates are expected to carry out research related to Data-Oriented Parsing and its applications, including but not limited to Data-Oriented Translation, Data-Oriented Language Learning, Data-Oriented Musical Analysis, Data-Oriented Problem-Solving and Data-Oriented Reasoning. Please send a CV and a one-page statement of research interests to rb at dcs.st-and.ac.uk by 24 December 2005. Further details on the Cognitive Systems group are given at http://cogsys.dcs.st-and.ac.uk/ Further details on Data-Oriented Parsing are given at http://cogsys.dcs.st-and.ac.uk/dop/dop.html ------------------------------------------------------------------ University of St Andrews Webmail: https://webmail.st-andrews.ac.uk

From h.abbass at adfa.edu.au Fri Oct 28 05:23:37 2005 From: h.abbass at adfa.edu.au (Hussein A. Abbass) Date: Fri, 28 Oct 2005 19:23:37 +1000 Subject: Connectionists: A PhD Scholarship at UNSW@ADFA - Evolutionary methods in the design of robust control systems Message-ID: <200510280925.j9S9P8e0012021@seal.cs.adfa.edu.au> Call for a PhD Scholarship ($20,037 pa tax-free for 3 years) Location: School of Information Technology and Electrical Engineering, University of NSW @ Australian Defence Force Academy, Canberra, Australia. The School of Information Technology and Electrical Engineering (ITEE) at the Australian Defence Force Academy campus of the University of New South Wales is seeking expressions of interest from highly qualified students to join the PhD program. The PhD scholarship provides a living allowance of $20,037 per annum tax-free. Topic: Evolutionary methods in the design of robust control systems The proposed thesis topic will involve the student looking at the application of evolutionary computation and related methods to optimization problems arising in the design of robust feedback control systems. It is expected that this research will lead to new general methods for robust and nonlinear control system design. In addition, specific applications will be considered, including missile autopilot systems, UAV flight control systems and vibration control systems. Application Process: The successful applicant is expected to have a first-class honours degree or equivalent in Electrical Engineering or other relevant areas. All applicants are expected to possess good programming skills and excellent communication and research skills. An ideal applicant for the project would have knowledge of control systems, neural networks, and evolutionary computation. Applications should include a detailed CV, a certified copy of academic transcripts and a cover letter detailing the applicant's research interests and their relevance to the project. The applicant should satisfy UNSW admission requirements for the PhD program. Potential applicants should discuss the application and send the paperwork to Prof. Ian Petersen i.petersen at adfa.edu.au or Dr. Hussein Abbass abbass at itee.adfa.edu.au.
The deadline for applications is the 15th of November 2005, or until the position is filled, for a possible starting date of March 2006. Applications should be sent by email or by fax to +61-2-62688581.