From uai03-pchairs at hugin.com Wed Jun 4 02:57:39 2003
From: uai03-pchairs at hugin.com (uai03-pchairs@hugin.com)
Date: 4 Jun 2003 06:57:39 -0000
Subject: UAI-2003: Call For Participation
Message-ID: <20030604065739.10430.qmail@hugin.dk>

NOTE: Early registration deadline has been extended to Monday June 9, 2003.

**********************************************************************
        19th Conference on Uncertainty in AI (UAI-2003)
                  CALL FOR PARTICIPATION
                   August 7-10, 2003
             Hyatt Hotel, Acapulco, Mexico
          http://research.microsoft.com/uai2003/
**********************************************************************

Since 1985, the Conference on Uncertainty in Artificial Intelligence (UAI) has been the primary international forum for presenting new results on the use of principled methods for reasoning under uncertainty within intelligent systems. The scope of UAI is wide, including, but not limited to, representation, automated reasoning, learning, decision making and knowledge acquisition under uncertainty. We have encouraged submissions to UAI-2003 that report on theoretical or methodological advances in these core areas, as well as submissions that report on systems that utilize techniques from them.

The main technical session will be on August 8-10, and will be preceded by an advanced tutorial program on August 7. UAI-2003 is collocated with and immediately precedes the International Joint Conference on Artificial Intelligence (IJCAI), which will be held August 9-15.

For detailed information about the technical program, schedule, online registration and accommodations, please go to the conference web site at http://research.microsoft.com/uai2003/.

Conference Program
******************

The main technical program at UAI-2003 will include 77 technical papers that were selected after a peer-review process.
25 of these will be given as plenary presentations, and 52 as poster presentations. The list of accepted papers is attached below.

Invited Talks
*************

The following invited speakers will be giving talks at UAI-2003:

* Banquet talk
  Adrian F.M. Smith, University of London
* Inferring 3D People from 2D Images
  Michael J. Black, Brown University
* Strategic Reasoning and Graphical Models
  Michael Kearns, University of Pennsylvania
* What's New in Statistical Machine Translation
  Kevin Knight, USC Information Sciences Institute
* Some Measures of Incoherence: How not to gamble if you must
  Teddy Seidenfeld, Carnegie Mellon University

Tutorials
*********

The conference will be preceded by a day of advanced tutorials on Thursday August 7. This year we have four tutorials:

* Graphical Model Research in Speech and Language Processing
  Jeff A. Bilmes, University of Washington
* Probabilistic Models for Relational Domains
  Daphne Koller, Stanford University
* Bayesian Networks for Forensic Identification Problems
  Steffen L. Lauritzen, Aalborg University
* Uncertainty and Computational Markets
  Mike Wellman, University of Michigan

Registration
************

The early registration deadline is June 9, 2003. To register online, please go to http://research.microsoft.com/uai2003/ and select the "Registration" option. At the conference web site you can find additional information on the conference location and accommodations.

Conference Organization
***********************

Please direct general inquiries to the General Conference Chair at darwiche at cs.ucla.edu. Inquiries about the conference program should be directed to the Program Co-Chairs at uai03-pchairs at hugin.com.

General Program Chair:
* Adnan Darwiche, University of California, Los Angeles.

Program Co-Chairs:
* Uffe Kjaerulff, Aalborg University.
* Chris Meek, Microsoft Research.
List of Accepted Papers
***********************

A Linear Belief Function Approach to Portfolio Evaluation -- Liping Liu, Catherine Shenoy, Prakash Shenoy
Policy-contingent abstraction for robust robot control -- Joelle Pineau, Geoff Gordon, Sebastian Thrun
The Revisiting Problem in Mobile Robot Map Building: A Hierarchical Bayesian Approach -- Benjamin Stewart, Jonathan Ko, Dieter Fox, Kurt Konolige
Implementation and Comparison of Solution Methods for Decision Processes with Non-Markovian Rewards -- Charles Gretton, David Price, Sylvie Thiebaux
The Information Bottleneck EM Algorithm -- Gal Elidan, Nir Friedman
On revising fuzzy belief bases -- Richard Booth, Eva Richter
Learning Module Networks -- Eran Segal, Dana Pe'er, Aviv Regev, Daphne Koller, Nir Friedman
Learning Continuous Time Bayesian Networks -- Uri Nodelman, Christian Shelton, Daphne Koller
1 Billion Pages = 1 Million Dollars? Mining the Web to Play ``Who Wants to be a Millionaire?'' -- Shyong Lam, David Pennock, Dan Cosley, Steve Lawrence
Collaborative Ensemble Learning: Combining Collaborative and Content-Based Information Filtering -- Kai Yu, Anton Schwaighofer, Volker Tresp, Wei-Ying Ma, HongJian Zhang
Marginalizing Out Future Passengers in Group Elevator Control -- Daniel Nikovski, Matthew Brand
Cooperative Negotiation in Autonomic Systems using Incremental Utility Elicitation -- Craig Boutilier, Rajarshi Das, Jeffrey Kephart, Gerry Tesauro, William Walsh
Renewal Strings for cleaning astronomical databases -- Amos Storkey, Nigel Hambly, Christopher Williams, Bob Mann
Approximate Decomposition: A Method for Bounding and Estimating Probabilistic and Deterministic Queries -- David Larkin
Loopy Belief Propagation as a Basis for Communication in Sensor Networks -- Christopher Crick, Avi Pfeffer
Efficient Gradient Estimation for Motor Control Learning -- Gregory Lawrence, Noah Cowan, Stuart Russell
Large-Sample Learning of Bayesian Networks is Hard -- Max Chickering, David Heckerman, Christopher Meek
Efficiently Inducing Features of Conditional
Random Fields -- Andrew McCallum
A generalized mean field algorithm for variational inference in exponential families -- Eric Xing, Michael I. Jordan, Stuart Russell
A Logic for Reasoning about Evidence -- Joe Halpern, Riccardo Pucella
On Information Regularization -- Adrian Corduneanu, Tommi Jaakkola
An Empirical Study of w-Cutset Sampling for Bayesian Networks -- Bozhena Bidyuk, Rina Dechter
Approximate inference and constrained optimization -- Tom Heskes, Kees Albers, Bert Kappen
Upgrading Ambiguous Signs in QPNs -- Janneke Bolt, Silja Renooij, Linda van der Gaag
A Tractable Probabilistic Model for Projection Pursuit -- Max Welling, Richard Zemel, Geoffrey Hinton
A New Algorithm for Maximum Likelihood Estimation in Gaussian Graphical Models for Marginal Independence -- Mathias Drton, Thomas Richardson
Stochastic complexity of Bayesian networks -- Keisuke Yamazaki, Sumio Watanabe
On Local Optima in Learning Bayesian Networks -- Jens Dalgaard Nielsen, Tomas Kocka, Jose Manuel Peña
A Distance-Based Branch and Bound Feature Selection Algorithm -- Ari Frank, Dan Geiger, Zohar Yakhini
Decision Making with Partially Consonant Belief Functions -- Phan H.
Giang, Prakash Shenoy
Robust Independence Testing for Constraint-Based Learning of Causal Structure -- Denver Dash, Marek Druzdzel
CLP(BN): Constraint Logic Programming for Probabilistic Knowledge -- Vitor Santos Costa, David Page, James Cussens, Maleeha Qazi
Efficient Inference in Large Discrete Domains -- Rita Sharma, David Poole
Solving MAP Exactly using Systematic Search -- James Park, Adnan Darwiche
Dealing with uncertainty in fuzzy inductive reasoning methodology -- Francisco Mugica, Angela Nebot, Pilar Gomez
LAYERWIDTH: Analysis of a New Metric for Directed Acyclic Graphs -- Mark Hopkins
Strong Faithfulness and Uniform Consistency in Causal Inference -- Jiji Zhang, Peter Spirtes
Locally Weighted Naive Bayes -- Eibe Frank, Mark Hall, Bernhard Pfahringer
Phase Transition of Tractability in Constraint Satisfaction and Bayesian Network Inference -- Yong Gao
On the Convergence of Bound Optimization Algorithms -- Ruslan Salakhutdinov, Sam Roweis, Zoubin Ghahramani
Structure-Based Causes and Explanations in the Independent Choice Logic -- Alberto Finzi, Thomas Lukasiewicz
Automated Analytic Asymptotic Evaluation of the Marginal Likelihood for Latent Models -- Dmitry Rusakov, Dan Geiger
An Axiomatic Approach to Robustness in Search Problems with Multiple Scenarios -- Patrice Perny, Olivier Spanjaard
Learning Riemannian Metrics -- Guy Lebanon
Exploiting Locality in Searching the Web -- Joel Young, Thomas Dean
Preference-based Graphic Models for Collaborative Filtering -- Rong Jin, Luo Si, Chengxiang Zhai
Monte Carlo Matrix Inversion Policy Evaluation -- Fletcher Lu, Dale Schuurmans
Bayesian Hierarchical Mixtures of Experts -- Markus Svensen, Christopher Bishop
Toward a possibilistic handling of partially ordered information -- Sylvain Lagrue, Salem Benferhat, Odile Papini
Incremental Compilation of Bayesian networks -- Julia Flores, Jose Gamez, Kristian G.
Olesen
Decentralized Sensor Fusion With Distributed Particle Filters -- Matthew Rosencrantz, Geoff Gordon, Sebastian Thrun
Probabilistic Reasoning about Actions in Nonmonotonic Causal Theories -- Thomas Eiter, Thomas Lukasiewicz
Parametric Dependability Analysis through Probabilistic Horn Abduction -- Luigi Portinale, Andrea Bobbio, Stefania Montani
New Advances in Inference by Recursive Conditioning -- David Allen, Adnan Darwiche
Updating with incomplete observations -- Gert De Cooman, Marco Zaffalon
An Importance Sampling Algorithm Based on Evidence Pre-propagation -- Changhe Yuan, Marek Druzdzel
Boltzmann Machine Learning with the Latent Maximum Entropy Principle -- Shaojun Wang, Dale Schuurmans, Fuchun Peng, Yunxin Zhao
Inference in Polytrees with Sets of Probabilities -- Jose Carlos Rocha, Fabio Cozman
Reasoning about Bayesian Network Classifiers -- Hei Chan, Adnan Darwiche
Using the structure of d-connecting paths as a qualitative measure of the strength of dependence -- Sanjay Chaudhuri, Thomas Richardson
Active Collaborative Filtering -- Craig Boutilier, Richard Zemel, Benjamin Marlin
Learning Generative Models of Similarity Matrices -- Romer Rosales, Brendan Frey
Symbolic Generalization for On-line Planning -- Zhengzhu Feng, Eric Hansen, Shlomo Zilberstein
Factor Graphs: A Unification of Directed and Undirected Graphical Models -- Brendan Frey
Sufficient Dimensionality Reduction with Side Information -- Amir Globerson, Gal Chechik, Naftali Tishby
Markov Random Walk Representations with Continuous Distributions -- Chen-Hsiang Yeang, Martin Szummer
Monte-Carlo optimizations for resource allocation problems in stochastic networks -- Milos Hauskrecht, Tomas Singliar
Systematic vs.
Non-systematic Algorithms for Solving the MPE Task -- Radu Marinescu, Kalev Kask, Rina Dechter
Probabilistic models for joint clustering and time-warping of multidimensional curves -- Darya Chudova, Scott Gaffney, Padhraic Smyth
Practically Perfect -- Christopher Meek, Max Chickering
Optimal Limited Contingency Planning -- Nicolas Meuleau, David Smith
Learning Measurement Models for Unobserved Variables -- Ricardo Silva, Richard Scheines, Clark Glymour, Peter Spirtes
Budgeted Learning, Part II: The Naive-Bayes Case -- Daniel Lizotte, Omid Madani, Russell Greiner
Value Elimination: Bayesian Inference via Backtracking Search -- Fahiem Bacchus, Shannon Dalmao, Toniann Pitassi
A Simple Insight into Properties of Iterative Belief Propagation -- Rina Dechter, Robert Mateescu
A Decision Making Perspective on Web Question Answering -- David Azari, Eric Horvitz, Susan Dumais, Eric Brill
On Triangulating Dynamic Graphical Models -- Jeff Bilmes, Chris Bartels

**********************************************************************

From David.Cohn at acm.org Wed Jun 4 12:38:10 2003
From: David.Cohn at acm.org (David 'Pablo' Cohn)
Date: 04 Jun 2003 09:38:10 -0700
Subject: new JMLR paper: Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds
Message-ID: <1054744690.29261.37.camel@bitbox.corp.google.com>

[posted to connectionists at the request of the authors]

The Journal of Machine Learning Research (www.jmlr.org) is pleased to announce publication of the seventh paper in Volume 4:

--------------------------
Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds
Lawrence K. Saul and Sam T. Roweis
JMLR 4(Jun):119-155, 2003

Abstract

The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation.
Here we describe locally linear embedding (LLE), an unsupervised learning algorithm that computes low dimensional, neighborhood preserving embeddings of high dimensional data. The data, assumed to be sampled from an underlying manifold, are mapped into a single global coordinate system of lower dimensionality. The mapping is derived from the symmetries of locally linear reconstructions, and the actual computation of the embedding reduces to a sparse eigenvalue problem. Notably, the optimizations in LLE---though capable of generating highly nonlinear embeddings---are simple to implement, and they do not involve local minima. In this paper, we describe the implementation of the algorithm in detail and discuss several extensions that enhance its performance. We present results of the algorithm applied to data sampled from known manifolds, as well as to collections of images of faces, lips, and handwritten digits. These examples are used to provide extensive illustrations of the algorithm's performance---both successes and failures---and to relate the algorithm to previous and ongoing work in nonlinear dimensionality reduction.

----------------------------------------------------------------------------
This paper is available electronically at http://www.jmlr.org in PostScript and PDF formats. The papers of Volumes 1, 2 and 3 are also available electronically from the JMLR website, and in hardcopy from the MIT Press; please see http://mitpress.mit.edu/JMLR for details.
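The three steps sketched in the abstract (find neighbors, solve for locally linear reconstruction weights, recover the embedding from a sparse eigenproblem) can be illustrated numerically. The following is a minimal NumPy sketch, not the authors' implementation; the neighbor count, regularization constant, and toy data set are illustrative choices only:

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Minimal locally linear embedding of the rows of X."""
    n = X.shape[0]
    # Step 1: k nearest neighbors of each point (brute-force distances).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)            # exclude self
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]
    # Step 2: weights that best reconstruct each point from its neighbors
    # (constrained least squares; weights sum to one).
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]               # neighbors centered on x_i
        C = Z @ Z.T                         # local Gram matrix
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize if singular
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()
    # Step 3: embedding = bottom eigenvectors of M = (I - W)^T (I - W),
    # discarding the constant eigenvector -- the (sparse) eigenvalue
    # problem mentioned in the abstract.
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]

# Toy usage: points near a 1-D curve embedded in 3-D.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3, 80))
X = np.c_[np.cos(t), np.sin(t), t] + 0.01 * rng.normal(size=(80, 3))
Y = lle(X, n_neighbors=8, n_components=1)
print(Y.shape)  # (80, 1)
```

A dense eigensolver is used here for brevity; as the abstract notes, M is sparse, so large problems would instead use a sparse eigensolver restricted to the bottom of the spectrum.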
-David Cohn

From clinton at compneuro.umn.edu Wed Jun 4 15:39:29 2003
From: clinton at compneuro.umn.edu (Kathleen Clinton)
Date: Wed, 04 Jun 2003 14:39:29 -0500
Subject: NEURON Workshop August 2003
Message-ID: <3EDE4AF1.30307@compneuro.umn.edu>

*************************************
NEURON Workshop Space Still Available
*************************************

Michael Hines and Ted Carnevale of Yale University will conduct a three- to five-day workshop on NEURON, a computer code that simulates neural systems. The workshop will be held from Monday to Friday, August 25-29, 2003 in Elliott Hall 121 on the University of Minnesota East Bank campus in Minneapolis, Minnesota.

Registration is open to students and researchers from academic, government, and commercial organizations. Space is limited, and registrations will be accepted on a first-come, first-served basis.

The workshop is sponsored by the University of Minnesota Computational Neuroscience Program, which is supported by a National Science Foundation-Integrative Graduate Education and Research Trainee grant and the University of Minnesota Graduate School, Institute of Technology, Medical School, and Supercomputing Institute for Digital Simulation and Advanced Computation.

**Topics and Format**

Participants may attend the workshop for three or five days. The first three days cover material necessary for the most common applications in neuroscience research and education. The fourth and fifth days deal with advanced topics for users whose projects may require problem-specific customizations. The Windows platform will be used.

Days 1 - 3: "Fundamentals of Using the NEURON Simulation Environment"

The first three days will cover the material that is required for informed use of the NEURON simulation environment. The emphasis will be on applying the graphical interface, which enables maximum productivity and conceptual control over models while at the same time reducing or eliminating the need to write code.
Participants will be building their own models from the start of the course. By the end of the third day they will be well prepared to use NEURON on their own to explore a wide range of neural phenomena. Additional information about the topics can be found at www.compneuro.umn.edu.

Days 4 and 5: "Beyond the GUI"

The fourth and fifth days deal with advanced topics for users whose projects may require problem-specific customizations. Topics will include:

- Advanced use of the CellBuilder, Network Builder, and Linear Circuit Builder.
- When and how to modify model specification, initialization, and NEURON's main computational loop.
- Exploiting special features of the Network Connection class for efficient implementation of use-dependent synaptic plasticity.
- Using NEURON's tools for optimizing models.
- Parallelizing computations.
- Using new features of the extracellular mechanism for
  --extracellular stimulation and recording
  --implementation of gap junctions and ephaptic interactions
- Developing new GUI tools.

**Registration**

For academic or government employees the registration fee is $175 for the first three days and $270 for the full five days. These fees are $350 and $540, respectively, for commercial participants. Registration forms can be obtained at www.compneuro.umn.edu/NEURONregistration.html or from the workshop coordinator, Kathleen Clinton, at clinton at compneuro.umn.edu or (612) 625-8424.

**Lodging**

Out-of-town participants may stay at the Days Inn, 2407 University Avenue SE in Minneapolis. It is within walking distance of Elliott Hall, but a shuttle bus is also available. Participants are responsible for making their own hotel reservations. The phone number is 612-623-9303. When making reservations, participants should state that they are attending the NEURON Workshop. A small block of rooms is available until July 24, 2003.
From h.bowman at kent.ac.uk Wed Jun 4 12:07:10 2003
From: h.bowman at kent.ac.uk (hb5)
Date: Wed, 04 Jun 2003 17:07:10 +0100
Subject: Neural Computation and Psychology Workshop
Message-ID: <3EDE192E.66C4B079@ukc.ac.uk>

The abstract submission deadline for the following event is almost upon us. I would appreciate it if you could bring this call to the attention of anybody who might be interested.

Thanks,
Howard Bowman

====================================================
Eighth Neural Computation and Psychology Workshop (NCPW 8)
Connectionist Models of Cognition, Perception and Emotion
28-30 August 2003 at the University of Kent at Canterbury, UK

The Eighth Neural Computation and Psychology Workshop (NCPW8) will be held in Canterbury, England from 28-30th August 2003. The NCPW series is now a well established and lively forum that brings together researchers from such diverse disciplines as artificial intelligence, cognitive science, computer science, neuroscience, philosophy and psychology. Between 25 and 30 papers will be accepted as oral presentations. In addition to the high quality of the papers presented, this Workshop takes place in an informal setting, in order to encourage interaction among the researchers present.

Publication
-----------
Proceedings of the workshop will appear in the series Progress in Neural Processing, which is published by World Scientific.
Speakers
--------
The list of speakers who have already agreed to attend includes:

John Bullinaria (Birmingham)
Gary Cottrell (San Diego)
Bob French (Liege)
Peter Hancock (Stirling)
Richard Shillcock (Edinburgh)
Chris Solomon (Kent at Canterbury)
John Taylor (Kings College)
Marius Usher (Birkbeck College)

Important Dates
---------------
Deadline for submission of abstracts: June 13th 2003
Notification of acceptance/rejection: June 29th 2003

Website
-------
More details can be found on the conference website, http://www.cs.ukc.ac.uk/events/conf/2003/ncpw/

Conference Chair
----------------
Howard Bowman, University of Kent, UK

Conference Organisers
---------------------
Howard Bowman, UKC
Colin G. Johnson, UKC
Miguel Mendao, UKC
Vikki Roberts, UKC

Proceedings Editors
-------------------
Howard Bowman, UKC
Christophe Labiouse, Liege

From terry at salk.edu Thu Jun 5 14:21:43 2003
From: terry at salk.edu (Terry Sejnowski)
Date: Thu, 5 Jun 2003 11:21:43 -0700 (PDT)
Subject: NEURAL COMPUTATION 15:6
Message-ID: <200306051821.h55ILhU53238@purkinje.salk.edu>

Neural Computation - Contents - Volume 15, Number 6 - June 1, 2003

ARTICLE
Estimation of Entropy and Mutual Information -- Liam Paninski

REVIEW
A Taxonomy for Spatiotemporal Connectionist Networks Revisited: The Unsupervised Case -- Guilherme de A. Barreto, Aluizio F. R. Araujo and Stefan C. Kremer

LETTERS
On Embedding Synfire Chains In A Balanced Network -- Y. Aviel, C. Mehring, M. Abeles, and D. Horn
Ergodicity of Spike Trains: When Does Trial Averaging Make Sense?
-- Naoki Masuda and Kazuyuki Aihara
Laplacian Eigenmaps for Dimensionality Reduction and Data Representation -- Mikhail Belkin and Partha Niyogi
Leave-One-Out Bounds for Kernel Methods -- Tong Zhang

-----
ON-LINE - http://neco.mitpress.org/

SUBSCRIPTIONS - 2003 - VOLUME 15 - 12 ISSUES

                  USA     Canada*   Other Countries
Student/Retired   $60     $64.20    $108
Individual        $95     $101.65   $143
Institution       $590    $631.30   $638
* includes 7% GST

MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902.
Tel: (617) 253-2889  FAX: (617) 577-1545
journals-orders at mit.edu
-----

From Johan.Suykens at esat.kuleuven.ac.be Thu Jun 5 05:29:39 2003
From: Johan.Suykens at esat.kuleuven.ac.be (Johan Suykens)
Date: Thu, 05 Jun 2003 11:29:39 +0200
Subject: New Book on Learning Theory
Message-ID: <3EDF0D83.2000102@esat.kuleuven.ac.be>

-Announcement New Book on Learning Theory-

J.A.K. Suykens, G. Horvath, S. Basu, C. Micchelli, J. Vandewalle (Eds.)
Advances in Learning Theory: Methods, Models and Applications,
NATO Science Series III: Computer & Systems Sciences, Volume 190,
IOS Press Amsterdam, 2003, 436pp. (ISBN: 1 58603 341 7)
http://www.esat.kuleuven.ac.be/sista/natoasi/book.html
http://www.iospress.nl/site/html/boek-1722819779.html

Book edited at the occasion of the NATO-ASI (Advanced Study Institute) on Learning Theory and Practice (Leuven, July 2002)
http://www.esat.kuleuven.ac.be/sista/natoasi/ltp2002.html

-Contents-

* Preface
* Organizing committee
* List of chapter contributors
* Table of contents
* An Overview of Statistical Learning Theory -- V. Vapnik
* Best Choices for Regularization Parameters in Learning Theory: on the Bias-Variance Problem -- F. Cucker, S. Smale
* Cucker Smale Learning Theory in Besov Spaces -- C.A. Micchelli, Y. Xu, P. Ye
* High-dimensional Approximation by Neural Networks -- V. Kurkova
* Functional Learning through Kernels -- S. Canu, X. Mary, A. Rakotomamonjy
* Leave-one-out Error and Stability of Learning Algorithms with Applications -- A. Elisseeff, M.
Pontil
* Regularized Least-Squares Classification -- R. Rifkin, G. Yeo, T. Poggio
* Support Vector Machines: Least Squares Approaches and Extensions -- J.A.K. Suykens, T. Van Gestel, J. De Brabanter, B. De Moor, J. Vandewalle
* Extension of the nu-SVM Range for Classification -- F. Perez-Cruz, J. Weston, D.J.L. Herrmann, B. Schoelkopf
* Kernel Methods for Text Processing -- N. Cristianini, J. Kandola, A. Vinokourov, J. Shawe-Taylor
* An Optimization Perspective on Kernel Partial Least Squares Regression -- K.P. Bennett, M.J. Embrechts
* Multiclass Learning with Output Codes -- Y. Singer
* Bayesian Regression and Classification -- C.M. Bishop, M.E. Tipping
* Bayesian Field Theory: from Likelihood Fields to Hyperfields -- J. Lemm
* Bayesian Smoothing and Information Geometry -- R. Kulhavy
* Nonparametric Prediction -- L. Gyorfi, D. Schafer
* Recent Advances in Statistical Learning Theory -- M. Vidyasagar
* Neural Networks in Measurement Systems (an engineering view) -- G. Horvath
* List of participants
* Subject index
* Author index

-Order information-
IOS Press via website http://www.iospress.nl/site/html/boek-1722819779.html

From thomas.j.palmeri at vanderbilt.edu Thu Jun 5 17:04:57 2003
From: thomas.j.palmeri at vanderbilt.edu (Thomas Palmeri)
Date: Thu, 5 Jun 2003 16:04:57 -0500
Subject: Postdoctoral Fellowship at Vanderbilt University
Message-ID:

POSTDOCTORAL FELLOWSHIP LINKING COMPUTATIONAL MODELS AND SINGLE-CELL NEUROPHYSIOLOGY

Members of the Psychology Department and the Center for Integrative and Cognitive Neuroscience at Vanderbilt University seek a highly qualified postdoctoral fellow to join an NSF-funded collaborative research project linking computational models of human cognition with single-cell neurophysiology. The aim is to elucidate how control over attention, categorization, and response selection is instantiated in neural processes underlying adaptive behavior.
The project integrates separate programs of research in computational models of human cognition (Logan and Palmeri) and in single-cell neurophysiology (Schall). We are particularly interested in applicants with training in computational modeling (experience in mathematical modeling, neural network modeling, or dynamic systems modeling is equally desirable). Knowledge of theoretical and empirical research in attention, categorization, response selection, or related areas of cognition would be preferable, but is not necessary.

The fellowship will pay according to the standard NIH scale, and will be for one or two years beginning July 1, 2003 or later. Fellows will be expected to apply for individual funding within the first year.

Applicants should send a current vita, relevant reprints and preprints, a personal letter describing their research interests, background, goals, and career plans, and reference letters from two individuals. Applications will be reviewed as they are received. The fellowship can begin any time within the next six months. Individuals who have recently completed their dissertation or who expect to defend their dissertation this summer are encouraged to apply. We will also consider individuals currently in postdoctoral positions.

Send Materials to:
Thomas Palmeri, Gordon Logan, or Jeffrey Schall
Department of Psychology
301 Wilson Hall
111 21st Avenue South
Nashville, TN 37203

For more information on Vanderbilt, the Psychology Department, and the Center for Integrative and Cognitive Neuroscience, see the following web pages:

Vanderbilt University: http://www.vanderbilt.edu/
Psychology Department: http://sitemason.vanderbilt.edu/psychology
Center for Integrative and Cognitive Neuroscience: http://cicn.vanderbilt.edu

Vanderbilt University is an Affirmative Action / Equal Opportunity employer.

Thomas J.
Palmeri
Associate Professor
507 Wilson Hall
Department of Psychology
Vanderbilt University
Nashville, TN 37240
tel: 615-343-7900
fax: 615-343-8449
email: thomas.j.palmeri at vanderbilt.edu
www: www.psy.vanderbilt.edu/faculty/palmeri/home.html

From bio-adit2004-NOSPAM at listes.epfl.ch Sat Jun 7 07:59:32 2003
From: bio-adit2004-NOSPAM at listes.epfl.ch (Bio-ADIT2004)
Date: Sat, 7 Jun 2003 13:59:32 +0200
Subject: [Bio-ADIT2004] - First Call for Papers
Message-ID:

================================================================
We apologize if you receive multiple copies of this email. Please distribute this announcement to all interested parties. For removal, go to http://lslwww.epfl.ch/bio-adit2004/del.shtml
================================================================

Bio-ADIT 2004 CALL FOR PAPERS

The First International Workshop on Biologically Inspired Approaches to Advanced Information Technology
January 29 - 30, 2004
Swiss Federal Institute of Technology, Lausanne, Switzerland
Website: http://lslwww.epfl.ch/bio-adit2004/

Sponsored by
- Osaka University Forum,
- Swiss Federal Institute of Technology, Lausanne, and
- The 21st Century Center of Excellence Program of The Ministry of Education, Culture, Sports, Science and Technology, Japan

Biologically inspired approaches have already proved successful in achieving major breakthroughs in a wide variety of problems in information technology (IT).
A more recent trend is to explore the applicability of bio-inspired approaches to the development of self-organizing, evolving, adaptive and autonomous information technologies, which will meet the requirements of next-generation information systems, such as diversity, scalability, robustness, and resilience. These new technologies will become a base on which to build a networked symbiotic environment for a pleasant, symbiotic society of human beings in the 21st century.

Bio-ADIT 2004 will be the first international workshop to present original research results in the field of bio-inspired approaches to advanced information technologies. It will also serve to foster the connection between biological paradigms and solutions for building next-generation information systems.

SCOPE:

The primary focus of the workshop is on new and original research results in the areas of information systems inspired by biology. We invite you to submit papers that present novel, challenging, and innovative results. The topics include all aspects of bio-inspired information technologies in networks, distributed/parallel systems, hardware (including robotics) and software. We also encourage you to submit papers dealing with:

- Self-organizing, self-repairing, self-replicating and self-stabilizing systems
- Evolving and adapting systems
- Autonomous and evolutionary software and robotic systems
- Scalable, robust and resilient systems
- Complex biosystems
- Gene, protein and metabolic networks
- Symbiosis networks

SUBMISSION OF PAPERS:

Authors are invited to submit complete and original papers. Papers submitted should not have been previously published in any forum, nor be under review for any journal or other conference. All submitted papers will be refereed for quality, correctness, originality and relevance. All accepted papers will be published in the conference proceedings. It is also planned to publish accepted papers as a book.
Manuscripts should include an abstract and be limited to 16 pages in single-spaced, single-column format. Submissions should include the title, author(s), author's affiliation, e-mail address, fax number and postal address. In the case of multiple authors, an indication of which author is responsible for correspondence and for preparing the camera-ready paper for the proceedings should also be included. Electronic submission is strongly encouraged. Preferred file formats are PDF (.pdf) or Postscript (.ps). Visit our Web site at http://lslwww.epfl.ch/bio-adit2004/ for more information. Please contact Dr. Murata if you need to submit hard copies. Manuscripts should be submitted by September 5, 2003 through the Bio-ADIT Website.

Please contact the technical program co-chairs with any questions:

Professor Auke Jan Ijspeert
School of Computer and Communication Sciences
Swiss Federal Institute of Technology (EPFL) Lausanne
CH 1015 Lausanne, Switzerland
Tel: +41-21-693-2658
Fax: +41-21-693-3705
Email: Auke.Ijspeert at epfl.ch

Professor Masayuki Murata
Cybermedia Center, Osaka University
Toyonaka, Osaka 560-0043, Japan
Tel: +81-6-6850-6860
Fax: +81-6-6850-6868
E-mail: murata at cmc.osaka-u.ac.jp

IMPORTANT DATES:
Paper submission deadline : September 5, 2003
Notification of acceptance: November 3, 2003
Camera-ready papers due   : December 1, 2003

WEBSITE: The electronic paper submission system will be open from July 1, 2003 to accept papers for Bio-ADIT. Please visit our Web site at http://lslwww.epfl.ch/bio-adit2004/ for up-to-date information.
EXECUTIVE COMMITTEE: General Co-Chairs: - Daniel Mange (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Shojiro Nishio (Osaka University, Japan) Technical Program Committee Co-Chairs: - Auke Jan Ijspeert (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Masayuki Murata (Osaka University, Japan) Finance Chair: - Marlyse Taric (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Toshimitsu Masuzawa (Osaka University, Japan) Publicity Chair: - Christof Teuscher (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Takao Onoye (Osaka University, Japan) Publications Chair: - Naoki Wakamiya (Osaka University, Japan) Local Arrangements Chair: - Carlos Andres Pena-Reyes (Swiss Federal Institute of Technology, Lausanne, Switzerland) Internet Chair: - Jonas Buchli (Swiss Federal Institute of Technology, Lausanne, Switzerland) TECHNICAL PROGRAM COMMITTEE: Co-Chairs: - Auke Jan Ijspeert (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Masayuki Murata (Osaka University, Japan) Members: - Michael A. Arbib (University of Southern California, Los Angeles, USA) - Aude Billard (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Takeshi Fukuda (IBM Tokyo Research Laboratory, Japan) - Katsuo Inoue (Osaka University, Japan) - Wolfgang Maass (Graz University of Technology, Austria) - Ian W. 
Marshall (BTexact Technologies, UK) - Toshimitsu Masuzawa (Osaka University, Japan) - Alberto Montresor (University of Bologna, Italy) - Stefano Nolfi (Institute of Cognitive Sciences and Technology,CNR, Rome, Italy) - Takao Onoye (Osaka University, Japan) - Rolf Pfeifer (University of Zurich, Switzerland) - Eduardo Sanchez (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Hiroshi Shimizu (Osaka University, Japan) - Moshe Sipper (Ben-Gurion University, Israel) - Gregory Stephanopoulos (Massachusetts Institute of Technology, USA) - Adrian Stoica (Jet Propulsion Laboratory, USA) - Gianluca Tempesti (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Naoki Wakamiya (Osaka University, Japan) - Xin Yao (University of Birmingham, UK) From stefan.wermter at sunderland.ac.uk Tue Jun 10 03:13:17 2003 From: stefan.wermter at sunderland.ac.uk (Stefan Wermter) Date: Tue, 10 Jun 2003 08:13:17 +0100 Subject: CFP: FLAIRS 2004 special track on neural network application Message-ID: <3EE5850D.AB0B47DB@sunderland.ac.uk> Call for Papers Neural Network Applications Special Track at the 17th International FLAIRS Conference In cooperation with the American Association for Artificial Intelligence Palms South Beach Hotel Miami Beach, FL May 17-19, 2004 Papers are being solicited for a special track on Neural Network Applications at the 17th International FLAIRS Conference (FLAIRS-2004). The special track will be devoted to the applications of Neural Networks with the aim of presenting new and important contributions in this area. 
For details see http://uhaweb.hartford.edu/irussell/ST04.html

Stefan Wermter

***************************************
Stefan Wermter
Professor for Intelligent Systems
Director of Centre for Hybrid Intelligent Systems
School of Computing and Technology
University of Sunderland
St Peters Way
Sunderland SR6 0DD
United Kingdom
phone: +44 191 515 3279
fax: +44 191 515 3553
email: stefan.wermter at sunderland.ac.uk
http://www.his.sunderland.ac.uk/~cs0stw/
http://www.his.sunderland.ac.uk/
****************************************

From terry at salk.edu Tue Jun 10 20:52:59 2003
From: terry at salk.edu (Terry Sejnowski)
Date: Tue, 10 Jun 2003 17:52:59 -0700 (PDT)
Subject: NEURAL COMPUTATION 15:7
Message-ID: <200306110052.h5B0qxH66793@purkinje.salk.edu>

Neural Computation - Contents - Volume 15, Number 7 - July 1, 2003

ARTICLE
Background Synaptic Activity as a Switch Between Dynamical States in a Network
Emilio Salinas

NOTE
Note on "Comparison of Model Selection for Regression" by Vladimir Cherkassky and Yunqian Ma
Trevor Hastie, Rob Tibshirani and Jerome Friedman

LETTERS
Spike-Timing Dependent Plasticity and Relevant Mutual Information Maximization
Gal Chechik

Relating STDP to BCM
Eugene M. Izhikevich and Niraj S. Desai

Learning Innate Face Preferences
James A. Bednar and Risto Miikkulainen

Learning Optimized Features for Hierarchical Models of Invariant Object Recognition
Heiko Wersing and Edgar Koerner

Soft Learning Vector Quantization
Sambu Seo and Klaus Obermayer

An Efficient Approximation Algorithm for Finding a Maximum Clique Using Hopfield Network Learning
Rong Long Wang, Zheng Tang and Qi Ping Cao

The Effect of Noise on a Class of Energy-Based Learning Rules
A. Bazzani, D. Remondini, N. Intrator, and G. C. Castellani

Approximation by Fully-Complex Multilayer Perceptrons
Taehwan Kim and Tulay Adali

Asymptotic Behaviors of Support Vector Machines with Gaussian Kernel
S.
Sathiya Keerthi and Chih-Jen Lin

Comparison of Model Selection for Regression
Vladimir Cherkassky and Yunqian Ma

-----
ON-LINE - http://neco.mitpress.org/

SUBSCRIPTIONS - 2003 - VOLUME 15 - 12 ISSUES

                 USA    Canada*   Other Countries
Student/Retired  $60    $64.20    $108
Individual       $95    $101.65   $143
Institution      $590   $631.30   $638
* includes 7% GST

MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902.
Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu
-----

From a.silver at ucl.ac.uk Tue Jun 10 07:35:15 2003
From: a.silver at ucl.ac.uk (Angus Silver)
Date: Tue, 10 Jun 2003 12:35:15 +0100
Subject: neuroinformatics software engineering position at UC London
Message-ID:

Software Development: Tools for Grid-Based Computational Neurobiology

A research-based software engineering position is available in the Department of Physiology, University College London, UK, as part of the MRC-funded "Grid-enabled modelling tools and databases for neuroinformatics", a collaborative project with the Institute for Adaptive and Neural Computation, Division of Informatics, University of Edinburgh. The aim of the project is to develop software tools to aid construction, visualization, and analysis both for realistic neural network models of cerebellar cortex constructed in the Neuron and Genesis simulation environments and for lower-level 3D diffusion-reaction models of synaptic mechanisms. The candidate will have a strong quantitative background with a degree in neuroscience, computer science, physics or engineering, and will be expert in programming in Java and C. A higher degree (MSc/PhD) would be advantageous, and experience with the Neuron simulator would be useful but not essential. Development of software tools will be closely linked to existing modelling projects (e.g. see abstract below) and to the electrophysiological and optical experiments carried out in Dr Silver's lab. The post is funded for 2 years at £33,025 p.a.
Further information is available from Angus Silver (a.silver at ucl.ac.uk). To apply, please send a CV before July 10th 2003.

****************************************************************************

Shunting Inhibition Modulates Neuronal Gain during Synaptic Excitation. Neuron. 2003 May 8;38(3):433-45.
Mitchell SJ, Silver RA.
Department of Physiology, University College London, Gower Street, WC1E 6BT, London, United Kingdom

Neuronal gain control is important for processing information in the brain. Shunting inhibition is not thought to control gain since it shifts input-output relationships during tonic excitation rather than changing their slope. Here we show that tonic inhibition reduces the gain and shifts the offset of cerebellar granule cell input-output relationships during frequency-dependent excitation with synaptic conductance waveforms. Shunting inhibition scales subthreshold voltage, increasing the excitation frequency required to attain a particular firing rate. This reduces gain because frequency-dependent increases in input variability, which couple mean subthreshold voltage to firing rate, boost voltage fluctuations during inhibition. Moreover, synaptic time course and the number of inputs also influence gain changes by setting excitation variability. Our results suggest that shunting inhibition can multiplicatively scale rate-coded information in neurons with high-variability synaptic inputs.

From cristina.versino at jrc.it Wed Jun 11 09:25:16 2003
From: cristina.versino at jrc.it (Cristina Versino)
Date: Wed, 11 Jun 2003 15:25:16 +0200
Subject: Analysis and Intelligence for Anti-fraud: post-doc position.
Message-ID:

The Joint Research Center has recently announced some job opportunities at the post-doctoral level on the web site of the Institute for the Protection and Security of the Citizen (see the following site, and click on "opportunities"): http://ipsc.jrc.cec.eu.int/ The deadline for those wishing to apply is 27 June 2003.
The specific project, entitled "Analysis and Intelligence for Anti-fraud", is offering a post-doc position with the following description:

Project Description
The ideal candidate has a Ph.D. in computer science with a specialization in one of the following fields: Large Databases, Data Warehousing, Data Mining/KDD, Data Visualization, or Statistics. Tasks will include both research and software development activities in support of the EU's anti-fraud work programme. The successful candidate will develop further hands-on data mining experience on large databases and will contribute to new developments in the techniques used at JRC. Some experience in the practical use of the Oracle RDBMS and the PL/SQL language in a Microsoft Windows environment is very useful. Working experience in data mining and/or data quality issues is highly desirable. Knowledge of the MATLAB software, the Business Objects or SAS systems, and the Java programming language is an advantage.

Duration: 24 months

For specific information on the above project description in the anti-fraud domain, please write to thomas.barbas at jrc.it

From cindy at bu.edu Wed Jun 11 11:20:54 2003
From: cindy at bu.edu (Cynthia Bradford)
Date: Wed, 11 Jun 2003 11:20:54 -0400
Subject: Neural Networks 16(5/6): Special Issue on "Advances in Neural Networks Research: IJCNN'03"
Message-ID: <200306111520.h5BFKs402506@cns-pc75.bu.edu>

NEURAL NETWORKS 16(5/6)
Contents - Volume 16, Numbers 5 and 6 - 2003

2003 Special Issue: "Advances in Neural Networks Research: IJCNN'03"
Donald C. Wunsch II, Mike Hasselmo, DeLiang Wang, and Ganesh Kumar Venayagamoorthy, co-editors

------------------------------------------------------------------

INTRODUCTION:
Welcome to the Special Issue: The Best of the Best
Donald C.
Wunsch II, Mike Hasselmo, DeLiang Wang, and Ganesh Kumar Venayagamoorthy PERCEPTUAL AND MOTOR FUNCTION: Adaptive force generation for precision-grip lifting by a spectral timing model of the cerebellum Antonio Ulloa, Daniel Bullock, and Brad Rhodes Radial basis function neural networks for nonlinear Fisher discrimination and Neyman-Pearson classification David Casasent and Xue-wen Chen Intrinsic generalization analysis of low dimensional representations Xiuwen Liu, Anuj Srivastava, and DeLiang Wang Application of four-layer neural network on information extraction Min Han, Lei Cheng, and Hua Meng Subject independent facial expression recognition with robust face detection using a convolutional neural network Masakazu Matsugu, Katsuhiko Mori, Yusuke Mitari, and Yuji Kaneda A generalized feedforward neural network architecture for classification and regression Ganesh Arulampalam and Abdesselam Bouzerdoum COGNITIVE FUNCTION AND COMPUTATIONAL NEUROSCIENCE: Hierarchical cognitive maps Horatiu Voicu Modeling goal-directed spatial navigation in the rat based on physiological data from the hippocampal formation Randal A. Koene, Anatoli Gorchetchnikov, Robert C. Cannon, and Michael E. Hasselmo An efficient training algorithm for dynamic synapse neural networks using trust region methods Hassan H. Namarvar and Theodore W. Berger Temporal binding as an inducer for connectionist recruitment learning over delayed lines Cengiz Gunay and Anthony S. Maida Developments in understanding neuronal spike trains and functional specializations in brain regions Roberto A. Santiago, James McNames, Kim Burchiel, and George G. Lendaris Shaping up simple cell's receptive field of animal visual by ICA and its application in navigation system Liming Zhang and Jianfeng Mei eLoom and Flatland: Specification, simulation and visualization engines for the study of arbitrary hierarchical neural architectures Thomas P. Caudell, Yunhai Xiao, and Michael J. 
Healy Associative morphological memories based on variations of the kernel and dual kernel methods Peter Sussner INFORMATICS: Adaptive double self-organizing maps for clustering gene expression profiles H. Ressom, D. Wang, and P. Natarajan An accelerated procedure for recursive feature ranking on microarray data C. Furlanello, M. Serafini, S. Merler, and G. Jurman DYNAMICS: Pattern completion through phase coding in population neurodynamics A. Gutierrez-Galvez and R. Gutierrez-Osuna Passive dendritic integration heavily affects spiking dynamics of recurrent networks Giorgio A. Ascoli Abductive reasoning with recurrent neural networks Ashraf M. Abdelbar, Emad A.M. Andrews, and Donald C. Wunsch II Neural networks with chaotic recursive nodes: Techniques for the design of associative memories, contrast with Hopfield architectures, and extensions for time-dependent inputs Emilio Del Moral Hernandez Simple and conditioned adaptive behavior from Kalman filter trained recurrent networks Lee A. Feldkamp, Daniel V. Prokhorov, and Timothy M. Feldkamp REINFORCEMENT LEARNING AND CONTROL: Learning robot actions based on self-organizing language memory Stefan Wermter and Mark Elshaw Autonomous mental development in high dimensional context and action spaces Ameet Joshi and Juyang Weng Chaos control and synchronization, with input saturation, via recurrent neural networks Edgar N. Sanchez and Luis J. Ricalde Proper orthogonal decomposition based optimal neurocontrol synthesis of a chemical reactor process using approximate dynamic programming Radhakant Padhi and S.N. Balakrishnan Numerical solution of elliptic partial differential equation using radial basis function neural networks Li Jianyu, Luo Siwei, Qi Yingjian, and Huang Yaping THEORY: Statistical efficiency of adaptive algorithms Bernard Widrow and Max Kamenetsky On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning Eiji Mizutani and James W. 
Demmel Stochastic resonance in noisy threshold neurons Bart Kosko and Sanya Mitaim Quantum optimization for training support vector machines Davide Anguita, Sandro Ridella, Fabio Rivieccio, and Rodolfo Zunino On the quality of ART1 text clustering Louis Massey Extension neural network and its applications M.H. Wang and C.P. Hung Fuzzy least squares support vector machines for multiclass problems Daisuke Tsujinishi and Shigeo Abe Evolving efficient learning algorithms for binary mappings John A. Bullinaria A network for recursive extraction of canonical coordinates Ali Pezeshki, Mahmood R. Azimi-Sadjadi, and Louis L. Scharf Automatic basis selection techniques for RBF networks Ali Ghodsi and Dale Schuurmans Data smoothing regularization, multi-sets-learning, and problem solving strategies Lei Xu Million city traveling salesman problem solution by divide and conquer clustering with adaptive resonance neural networks Samuel A. Mulder and Donald C. Wunsch II APPLICATIONS: A practical sub-space adaptive filter A. Zaknich Pharmacodynamic population analysis in chronic renal failure using artificial neural networks: A comparative study Adam E. Gaweda, Alfred A. Jacobs, Michael E. Brier, and Jacek M. Zurada Electronic nose based tea quality standardization Ritaban Dutta, E.L. Hines, J.W. Gardner, K.R. Kashwan, and M. Bhuyan A novel neural network-based survival analysis method Antonio Eleuteri, Roberto Tagliaferri, Leopoldo Milano, Sabino De Placido, and Michele De Laurentiis Divide-and-conquer approach for brain machine interfaces: Nonlinear mixture of competitive linear models Sung-Phil Kim, Justin C. Sanchez, Deniz Erdogmus, Yadunandana N. Rao, Johan Wessberg, Jose C. Principe, and Miguel Nicolelis Stochastic error whitening algorithm for linear filter estimation with noisy data Yadunandana N. Rao, Deniz Erdogmus, Geetha Y. Rao, and Jose C. Principe New internal optimal neurocontrol for a series FACTS device in a power transmission line Jung-Wook Park, Ronald G. 
Harley, and Ganesh K. Venayagamoorthy Design of an adaptive neural network based power system stabilizer Wenxin Liu, Ganesh K. Venayagamoorthy, and Donald C. Wunsch II On neural network techniques in the secure management of communication systems through improving and quality assessing pseudorandom stream generators D.A. Karras and V. Zorkadis Multimedia authenticity protection with ICA watermarking and digital bacteria vaccination Harold Szu, Steven Noel, Seong-Bin Yim, Jeff Willey, and Joe Landa VISUAL CORTEX: HOW ILLUSIONS REPRESENT REALITY: Interpolation processes in the visual perception of objects P.J. Kellman Laminar cortical dynamics of visual form perception Stephen Grossberg Moving objects appear to slow down at low contrasts Stuart Anstis Neural models of motion integration and segmentation Ennio Mingolla ------------------------------------------------------------------ Electronic access: www.elsevier.com/locate/neunet/. Individuals can look up instructions, aims & scope, see news, tables of contents, etc. Those who are at institutions which subscribe to Neural Networks get access to full article text as part of the institutional subscription. Sample copies can be requested for free and back issues can be ordered through the Elsevier customer support offices: nlinfo-f at elsevier.nl usinfo-f at elsevier.com or info at elsevier.co.jp ------------------------------ INNS/ENNS/JNNS Membership includes a subscription to Neural Networks: The International (INNS), European (ENNS), and Japanese (JNNS) Neural Network Societies are associations of scientists, engineers, students, and others seeking to learn about and advance the understanding of the modeling of behavioral and brain processes, and the application of neural modeling concepts to technological problems. Membership in any of the societies includes a subscription to Neural Networks, the official journal of the societies. 
Application forms should be sent to all the societies you want to apply to (for example, one as a member with subscription and the other one or two as a member without subscription). The JNNS does not accept credit cards or checks; to apply to the JNNS, send in the application form and wait for instructions about remitting payment. The ENNS accepts bank orders in Swedish Crowns (SEK) or credit cards. The INNS does not invoice for payment.

----------------------------------------------------------------------------
Membership Type     INNS             ENNS                JNNS
----------------------------------------------------------------------------
membership with     $80 (regular)    SEK 660 (regular)   Y 13,000 (regular)
Neural Networks                                          (plus 2,000
                    $20 (student)    SEK 460 (student)   enrollment fee)
                                                         Y 11,000 (student)
                                                         (plus 2,000
                                                         enrollment fee)
-----------------------------------------------------------------------------
membership without  $30              SEK 200             not available to
Neural Networks                                          non-students
                                                         (subscribe through
                                                         another society)
                                                         Y 5,000 (student)
                                                         (plus 2,000
                                                         enrollment fee)
-----------------------------------------------------------------------------

Name:    _____________________________________
Title:   _____________________________________
Address: _____________________________________
         _____________________________________
         _____________________________________
Phone:   _____________________________________
Fax:     _____________________________________
Email:   _____________________________________

Payment: [ ] Check or money order enclosed, payable to INNS or ENNS
         OR
         [ ] Charge my VISA or MasterCard
             card number ____________________________
             expiration date ________________________

INNS Membership
19 Mantua Road
Mount Royal NJ 08061 USA
856 423 0162 (phone)
856 423 3420 (fax)
innshq at talley.com
http://www.inns.org

ENNS Membership
University of Skovde
P.O.
Box 408
531 28 Skovde
Sweden
46 500 44 83 37 (phone)
46 500 44 83 99 (fax)
enns at ida.his.se
http://www.his.se/ida/enns

JNNS Membership
c/o Professor Shozo Yasui
Kyushu Institute of Technology
Graduate School of Life Science and Engineering
2-4 Hibikino, Wakamatsu-ku
Kitakyushu 808-0196 Japan
81 93 695 6108 (phone and fax)
jnns at brain.kyutech.ac.jp
http://www.jnns.org/

-----------------------------------------------------------------

From tcp1 at leicester.ac.uk Wed Jun 11 13:28:23 2003
From: tcp1 at leicester.ac.uk (Tim Pearce)
Date: Wed, 11 Jun 2003 18:28:23 +0100
Subject: Faculty Position in the area of Neuroengineering
In-Reply-To: <3E48BDC6.9010502@ini.phys.ethz.ch>
Message-ID: <00e601c3303e$dc61ac00$216bd28f@neuro2>

Apologies for cross-postings.

The Centre for Bioengineering has a Lectureship position available (effectively equivalent to an Assistant Professor position in the US). We would particularly welcome applications from those with an interest in neuromorphic engineering, neuroengineering, neuronal modelling and/or computational neuroscience. The University is well placed for establishing links with neuroscience researchers in Leicester (particularly clinically based) and its nearest cities (London, Birmingham, Cambridge and Nottingham).

Informal Enquiries
Informal enquiries may be made to Tim Pearce (tcp1 at le.ac.uk - +44 116 223 1307), to the Head of Department, Ian Postlethwaite (ixp at le.ac.uk), or to Fernando Schlindwein (fss1 at le.ac.uk).

Applications
Applications should be forwarded to reach the Personnel Office (Appointments) not later than 27 June 2003.

=====

Applications are invited for a Lectureship in Bioengineering, which is broadly interpreted to include any area at the intersection of technology, mathematical modelling and life and/or clinical sciences.
The successful applicant will join the expanding Centre for Bioengineering (http://www.le.ac.uk/eg/research/groups/control/bio/bio.htm), which has research interests in real-time monitoring of patients, modelling of neural systems, ultrasound in medicine, and neuroengineering. Successful candidates should have a PhD in a related area, an established record of journal publications, and research interests that overlap or complement our existing activities. We particularly welcome applicants who can demonstrate an ability or strong potential to secure external research funding and develop a vigorous programme of research. The successful candidate will also be expected to contribute to our undergraduate and postgraduate teaching programmes. The University The University of Leicester is one of the UK's leading research and teaching universities. The University was founded as a University College in 1921 and granted a Royal Charter in 1957. It has an estate of approximately 94 hectares that includes a six-hectare Botanic Garden, an arboretum and a range of residences in the suburbs that are set in attractive gardens. The University has 18,949 students including 9,491 at postgraduate level. There are 42 academic departments and 35 special divisions and centres located in six faculties: Arts, Education and Continuing Studies, Law, Medicine and Biological Sciences, Science and Social Sciences. There is a University-wide Graduate School and an Institute of Lifelong Learning. The University employs approximately 3,000 staff. The University has been ranked in the UK's top twenty universities in three consecutive years since 2001 by the Financial Times and by the Sunday Times. It was placed in the top 20 UK universities for research grant and contract income. The University had 25 ratings of 5*, 5 or 4 in the 2001 Research Assessment Exercise when 84% of the staff were in units of assessment of national and international excellence. 
In the Teaching Quality Assessment four units achieved a grade of excellent before 1995 and since then 15 units have received a score of 22 or more out of 24. The University has been awarded the Queen's Anniversary Prize in Higher and Further Education in 2002 for its submission in Genetics. The University is committed to producing research and teaching of the highest quality, to promoting undergraduate and postgraduate studies through campus-based and distance-learning programmes and to developing close collaboration with the local and regional community. The Department of Engineering The Department has 30 academic staff (including 11 Professors) supported by 7 academically-related staff, about 20 research staff and 30 technical and clerical staff. Engineering is one of the largest Departments at Leicester. The Department is renowned for its research in the areas of Control and Instrumentation, Electrical and Electronic Power, Radio Systems, Mechanics of Materials and Thermofluids and Environmental Engineering. In the 2001 Research Assessment Exercise it received a rating of 5A. Several research-led appointments have been made in recent years, including a number of Chairs, and this has resulted in research groups of international standing with strong leadership and a research base of highly talented staff. The successful candidate will join the Control and Instrumentation Research Group and be part of the Centre for Bioengineering. For additional information see http://jobs.ac.uk/jobfiles/YK396.html From lshams at caltech.edu Thu Jun 12 00:35:40 2003 From: lshams at caltech.edu (Ladan Shams) Date: Wed, 11 Jun 2003 21:35:40 -0700 Subject: Postdoctoral Position at UCLA+Caltech Message-ID: <5282C980-9C8F-11D7-9B5B-0003934F6770@caltech.edu> Postdoctoral Position in Multisensory Perception UCLA and Caltech Applications are invited for a postdoctoral position to study issues in multisensory perception. 
The research involves psychophysics methodology possibly combined with fMRI and/or statistical modeling. The projects will aim to unravel the interactions between visual, auditory, and tactile perceptual processes at various levels of inquiry ranging from phenomenology, to underlying brain mechanisms, to the governing computational principles. The successful candidate will have a Ph.D. in Psychology, Neuroscience, Computer Science, Engineering or a related field. Some experience in Psychophysics is required, and any experience in fMRI or modeling will be a strong advantage. Expertise in Matlab and/or C in a Mac or UNIX environment is highly desirable. The research will be performed primarily in the laboratory of Ladan Shams at UCLA (http://vmpl.psych.ucla.edu), and partly in the laboratory of Shinsuke Shimojo at Caltech (http://neuro.caltech.edu). Thus, the successful candidate will be affiliated with both UCLA and Caltech. Salary is according to the NIH scale. The initial appointment will be for one year and may be extended for a second or third year. The starting date is September 2003 (but somewhat flexible). Please send inquiries or CVs plus the names of 3 references to: Ladan Shams (ladan at caltech.edu) California Institute of Technology and University of California are Equal Opportunity Employers. ---------------------------- Ladan Shams, Ph.D. Assistant Professor UCLA Psychology Department 7545B Franz Hall Los Angeles, CA 90095-1563 URL: http://vmpl.psych.ucla.edu Tel: (310) 428-5296 From cristina.versino at jrc.it Wed Jun 11 09:39:15 2003 From: cristina.versino at jrc.it (Cristina Versino) Date: Wed, 11 Jun 2003 15:39:15 +0200 Subject: Surveillance Review Station: post-doc position. Message-ID: The Joint Research Center has recently announced some job opportunities at the post-doctoral level in the web site of the Institute for the Protection and Security of the Citizen. 
(see the following site, and click on "opportunities"): http://ipsc.jrc.cec.eu.int/ The deadline for those wishing to apply is 27 June 2003. Applications should follow the rules stated on the web site.

The specific project (G07 1), entitled "Surveillance Review Station", is offering a post-doc position with the following description:

Project Description
The ideal candidate has a Ph.D. in pattern recognition and/or image processing. He/she will participate in the development of a tool for the analysis of multisensory surveillance data together with staff of the "Surveillance and Information Retrieval" sector. The work consists of the analysis of surveillance data from nuclear installations (e.g., images, radiation measures, etc.), including finding effective representations of sensor data with a view to detecting and classifying Safeguards-relevant events. The candidate will therefore develop and test different types of data representation and classification techniques. Moreover, the candidate will explore the possibility of integrating simulation and visualisation tools into the image review station, to increase scene understanding through knowledge of the context in which surveillance data are acquired.

Duration: 36 months

For specific information on the above project description in the surveillance domain, please write to cristina.versino at jrc.it
Administrative contact person: Anne-Marie Morrissey, Tel: +39 0332 789322, Fax: +39 0332 785232, E-mail: anne-marie.morrissey at jrc.it

From oreilly at grey.colorado.edu Fri Jun 13 00:40:09 2003
From: oreilly at grey.colorado.edu (Randall C.
O'Reilly) Date: Thu, 12 Jun 2003 22:40:09 -0600 Subject: TR on Working Memory in PFC and BG Message-ID: <200306130440.h5D4e9D28330@grey.colorado.edu> The following technical report is now available for downloading from: http://psych.colorado.edu/~oreilly/pubs-abstr.html#03_pbwm - Randy Making Working Memory Work: A Computational Model of Learning in the Prefrontal Cortex and Basal Ganglia Randall C. O'Reilly Department of Psychology University of Colorado Boulder, CO 80309 ICS Technical Report 03-03 Abstract: The prefrontal cortex has long been thought to subserve both working memory (the holding of information online for processing) and ``executive'' functions (deciding how to manipulate working memory and perform processing). Although many computational models of working memory have been developed, the mechanistic basis of executive function remains elusive. In effect, the executive amounts to a homunculus. This paper presents an attempt to deconstruct this homunculus through powerful learning mechanisms that allow a computational model of the prefrontal cortex to control both itself and other brain areas in a strategic, task-appropriate manner. These learning mechanisms are based on structures in the basal ganglia (NAc, VTA, striosomes of the dorsal striatum, SNc) that can modulate learning in other basal ganglia structures (matrisomes of the dorsal striatum, GP, thalamus), which in turn provide a dynamic gating mechanism for controlling prefrontal working memory updating. Computationally, the learning mechanism is designed to simultaneously solve the temporal and structural credit assignment problems. The model's performance compares favorably with standard backpropagation-based temporal learning mechanisms on the challenging 1-2-AX working memory task, and other benchmark working memory tasks. +----------------------------------------------------------------+ | Dr. Randall C. 
O'Reilly | | | Associate Professor | Phone: (303) 492-0054 | | Department of Psychology | Fax: (303) 492-2967 | | Univ. of Colorado Boulder | | | 345 UCB | email: oreilly at psych.colorado.edu | | Boulder, CO 80309-0345 | www: psych.colorado.edu/~oreilly | +----------------------------------------------------------------+ From golden at utdallas.edu Fri Jun 13 09:42:50 2003 From: golden at utdallas.edu (Richard Golden) Date: Fri, 13 Jun 2003 08:42:50 -0500 (CDT) Subject: Symposium Workshop "Bayesian Methods for Cognitive Modeling" Message-ID: Symposium Workshop: Bayesian Methods for Cognitive Modeling Tentative Schedule Monday, July 28, 2003, Weber State University Ogden, Utah (Following the 2003 Annual Meeting of the Society for Mathematical Psychology) 8:15am - 8:30am Introduction to the Symposium Workshop. Richard Golden (University of Texas at Dallas) and Richard Shiffrin (Indiana University) 8:30am -10:00am Bayesian Methods for Unsupervised Learning Zoubin Ghahramani (University College London, Gatsby Computational Neuroscience Unit) 10:00am - 10:30am Coffee Break 10:30am -12:00pm Bayesian Models of Human Learning and Inference Josh Tenenbaum (MIT, Brain and Cognitive Sciences) 12:00pm - 1:30pm Lunch Break 1:30pm -3:00pm The Bayesian Approach to Vision Alan Yuille (UCLA, Departments of Statistics and Psychology) 3:00pm - 3:30pm Coffee Break 3:30pm - 5:00pm Probabilistic Approaches to Language Learning and Processing Christopher Manning (Stanford University, Computer Science) ------------------------------------------------------------------------------ * Each talk will be approximately 80 minutes in length with a 10-minute question period. * A $20 registration fee is required for participation in the workshop.
ABSTRACTS 8:30am -10:00am Bayesian Methods for Unsupervised Learning Zoubin Ghahramani (University College London, Gatsby Computational Neuroscience Unit) Many models used in machine learning and neural computing can be understood within the unified framework of probabilistic graphical models. These include clustering models (k-means, mixtures of Gaussians), dimensionality reduction models (PCA, factor analysis), time series models (hidden Markov models, linear dynamical systems), independent components analysis (ICA), hierarchical neural network models, etc. I will review the link between all these models, and the framework for learning them using the EM algorithm for maximum likelihood. I will then describe limitations of the maximum likelihood framework and how Bayesian methods overcome these limitations, allowing learning without overfitting, principled model selection, and the coherent handling of uncertainty. Time permitting, I will describe the computational challenges of Bayesian learning and approximate methods for overcoming those challenges, such as variational methods. 10:30am -12:00pm Bayesian Models of Human Learning and Inference Josh Tenenbaum (MIT, Brain and Cognitive Sciences) How can people learn the meaning of a new word from just a few examples? What makes a set of examples more or less representative of a concept? What makes two objects seem more or less similar? Why are some generalizations apparently based on all-or-none rules while others appear to be based on gradients of similarity? How do we infer the existence of hidden causal properties or novel causal laws? I will describe an approach to explaining these aspects of everyday induction in terms of rational statistical inference. In our Bayesian models, learning and reasoning are explained in terms of probability computations over a hypothesis space of possible concepts, word meanings, or generalizations.
The structure of the learner's hypothesis spaces reflects their domain-specific prior knowledge, while the nature of the probability computations depends on domain-general statistical principles. The hypotheses can be thought of as either potential rules for abstraction or potential features for similarity, with the shape of the learner's posterior probability distribution determining whether generalization appears more rule-based or similarity-based. Bayesian models thus offer an alternative to classical accounts of learning and reasoning that rest on a single route to knowledge -- e.g., domain-general statistics or domain-specific constraints -- or a single representational paradigm -- e.g., abstract rules or exemplar similarity. This talk will illustrate the Bayesian approach to modeling learning and reasoning on a range of behavioral case studies, and contrast its explanations with those of more traditional process models. 1:30pm -3:00pm The Bayesian Approach to Vision Alan Yuille (UCLA, Departments of Statistics and Psychology) Bayesian statistical decision theory formulates vision as perceptual inference where the goal is to infer the structure of the viewed scene from input images. The approach can be used not only to model perceptual phenomena but also to design computer vision systems that perform useful tasks on natural images. This ensures that the models can be extended from the artificial stimuli used in most psychophysical, or neuroscientific, experiments to more natural and realistic stimuli. The approach requires specifying likelihood functions for how the viewed scene generates the observed image data and prior probabilities for the state of the scene. We show how this relates to Signal Detection Theory and Machine Learning. Next we describe how the probability models (i.e. likelihood functions and priors) can be represented by graphs which makes explicit the statistical dependencies between variables. 
This representation enables us to account for perceptual phenomena such as discounting, cue integration, and explaining away. We illustrate the techniques involved in the Bayesian approach by two worked examples. The first is the perception of motion where we describe Bayesian theories (Weiss & Adelson, Yuille & Grzywacz) which show that many phenomena can be explained as a trade-off between the likelihood function and the prior of a single model. The second is image parsing where the goal is to segment natural images and to detect and recognize objects. This involves models competing and cooperating to explain the image by combining bottom-up and top-down processing. 3:30pm - 5:00pm Probabilistic Approaches to Language Learning and Processing Christopher Manning (Stanford University, Computer Science) At the engineering end of speech and natural language understanding research, the field has been transformed by the adoption of Bayesian probabilistic approaches, with generative models such as Markov models, hidden Markov models, and probabilistic context-free grammars being standard tools of the trade, and people increasingly using more sophisticated models. More recently, there has also started to be use of these models as cognitive models, to explore issues in psycholinguistic processing, and how humans approach the resolution problem, of combining evidence from numerous sources during the course of processing. Much of this work has been in a supervised learning paradigm, where models are built from hand-annotated data, but probabilistic approaches also open interesting new perspectives on formal problems of language learning. 
After surveying the broader field of probabilistic approaches in natural language processing, I'd like to focus on unsupervised approaches to learning language structure, show why it's a difficult problem, and present some recent work that I and others have been doing using probabilistic models, which shows considerable progress on tasks such as word class and syntactic structure learning. From bogus@does.not.exist.com Mon Jun 16 03:26:02 2003 From: bogus@does.not.exist.com () Date: Mon, 16 Jun 2003 17:26:02 +1000 Subject: Paper on neuronal gain in the leaky integrate-and-fire neuron with conductance synapses Message-ID: From j.a.bullinaria at cs.bham.ac.uk Tue Jun 17 12:52:54 2003 From: j.a.bullinaria at cs.bham.ac.uk (John Bullinaria) Date: Tue, 17 Jun 2003 17:52:54 +0100 Subject: Lectureship (i.e., Assistant Professorship) - Birmingham, UK Message-ID: <2421795A-A0E4-11D7-95F8-000A956C4D0A@cs.bham.ac.uk> Members of this list are encouraged to apply for the following post. Excellent opportunities exist to collaborate with members of CERCIA and the Natural Computation Group, as well as others in the School. ------------------------------------------------------------------------ VACANCY: Lectureship in Computer Science, Software Engineering or Artificial Intelligence (Ref. No. S36578) ------------------------------------------------------------------------ Applications are invited for one limited-term Lectureship in Computer Science, Software Engineering or Artificial Intelligence in the School of Computer Science at the University of Birmingham, UK. The post is available immediately until 31 March 2007 in the first instance (further extension or conversion to an open-ended post is possible). We are looking for an outstanding researcher and teacher in any area of CS, SE or AI, although candidates in applied areas are preferred.
The post has become available because an existing staff member has moved temporarily to our new Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA). Applications that strengthen our research in computational intelligence as well as in other areas are equally welcome. The successful applicant must have an internationally excellent research record (or research potential, in the case of a recent PhD graduate) in CS, SE or AI, as evidenced by publications in leading international journals or conference proceedings in these fields. They must be willing and able to teach core CS, SE and AI modules that may or may not be in their research areas. They should have, or expect soon to have, a PhD in CS, SE, AI or an appropriate, closely related discipline. Very high quality research publications or industrial research achievements are acceptable as an alternative to a PhD. All our staff contribute to the School administration. The School encourages industrial outreach and consultancy. The School has been steadily growing in its research achievements and is widely recognized as an international centre for research and teaching, as confirmed, for instance, by external research reviews, by our publications record, our record in attracting research grants, our ability to attract high quality staff and students from many countries, and the provision of a new special-purpose building for the School in 2001. Further information about our research is accessible through our research and news web sites http://www.cs.bham.ac.uk/research/ http://www.cs.bham.ac.uk/news/ and in personal web pages of staff and research students: http://www.cs.bham.ac.uk/people/ Further information about our teaching can be found here: http://www.cs.bham.ac.uk/study The starting salary for Lecturers is on the scale GBP22,191 - GBP33,679 a year, depending on experience and qualifications.
Further particulars, including instructions on how to apply and pointers to online application forms, can be found at: http://www.punit.bham.ac.uk/vacancies/vacancyDisplay.htm?org_unit_code=DNPHYS&vacancy_class_id=2 Please quote reference S36578. Informal enquiries to: Prof. X. Yao (x.yao at cs.bham.ac.uk) *** Closing date for applications is: 24/6/2003. *** However, late applications may be considered. Working towards equal opportunities. From W.El-Deredy at livjm.ac.uk Tue Jun 17 11:56:15 2003 From: W.El-Deredy at livjm.ac.uk (El-Deredy, Wael) Date: Tue, 17 Jun 2003 16:56:15 +0100 Subject: PhD Studentship: Dynamics of pain perception Message-ID: PhD studentship funded by the Arthritis Research Campaign £800,000 programme grant on brain mechanisms of pain perception. The post is based at Manchester University working with Professor Anthony Jones' team on functional imaging of pain representation in the brain. http://www.ipn.at/ipn.asp?BGI. The department received a 5 rating in the last RAE. The aims of the signal processing component of the programme are: 1. To develop new analysis methodologies for separating the sensory discrimination components of pain from the affective (subjective) components. 2. To integrate data from different neuroimaging modalities (mainly EEG and PET and EEG and fMRI). We are looking for an enthusiastic signal processing expert with an interest in biomedical applications and willingness to work within a team. We expect a good foundation in mathematics, statistics and optimisation techniques (especially Bayesian statistics and MCMC) and programming skills (Matlab). The candidate may also be required to support colleagues carrying out standard statistical analysis of functional neuroimaging data. For enquiries and applications please write or send a CV to Anthony Jones (Ajones1 at fs1.ho.man.ac.uk) including the names of two referees by 25 July, 2003. Dr.
Anthony Jones Pain Research Group Manchester University - Rheumatic Diseases Centre Clinical Sciences Building Hope Hospital Salford M6 8HD Tel: 0161 - 206 4265 Fax: 0161 - 206 4687 From a-parlos at tamu.edu Sat Jun 21 08:56:20 2003 From: a-parlos at tamu.edu (Alexander G. Parlos) Date: Sat, 21 Jun 2003 07:56:20 -0500 Subject: IEEE Special Issue Message-ID: <5.1.0.14.0.20030621075535.03c57300@mail.ad.mengr.tamu.edu> Call for Papers IEEE Transactions on Neural Networks Special Issue on Adaptive Learning Systems in Communication Networks Communication networks and internetworks, and in particular the Internet, have been characterized as the ultimate data-rich environments, dynamically evolving and expanding practically without any centralized control. Such data-rich, unstructured environments present a particular challenge for traditional methods of analysis and design. Adaptive learning methods, in general, including adaptive signal processing, neural networks, fuzzy logic and other data-driven methods and algorithms are in the unique position to offer credible alternatives. The goal of the proposed special issue is two-fold: (1) to highlight the on-going research in the field of adaptive learning systems, and in particular adaptive signal processing and neural networks, as it is applicable to computer and communication networks, and, (2) to present to the neural networks community and to others interested in adaptive learning systems, in general, a variety of new and challenging problems and their proposed solutions, originating from the rapidly expanding universe of computer and communication networks. As the use of these technologies spreads, numerous modeling, estimation, control, classification, clustering and signal processing problems are emerging. Many of these problems currently have no satisfactory solutions and some have been addressed with ad-hoc solutions. 
A common underlying theme of these problems is that they are data-rich, represent dynamically changing environments where the lack of valid mathematical models is predominant, and are representative of systems with no centralized control. These problems appear amenable to data-driven methods and algorithms, such as adaptive learning methods, including neural networks and other non-parametric or semi-parametric approaches. This special issue will welcome contributions with proposed approaches to existing problems, either with currently known or new solutions, and to new problems in the subject areas of computer and communication networks. The focus of the proposed solutions will be on data-driven or so-called measurement-based methods and algorithms, rooted in the general areas of adaptive learning methods. Papers are solicited from, but not limited to, the following topics: Network Management Topics: (i) Methods and algorithms for network traffic analysis, modeling and characterization; (ii) Network performance measurement and analysis techniques; (iii) Network fault monitoring and diagnosis methods; (iv) Network security and privacy, including intrusion detection methods; (v) Approaches and methods for Quality of Service in IP networks; (vi) Scalable routing algorithms and decentralized congestion control algorithms; (vii) Novel admission control algorithms; (viii) Control algorithms for high-speed network access technologies; (ix) Application of "new approaches" in adaptive learning systems to data-intensive tasks in complex networks.
Content Management Topics: (i) Approaches for scalable Web caching and related optimization methods; (ii) Novel solutions to operational problems in content delivery and distribution networks; (iii) Web data mining and knowledge discovery - scalability and comparison of methods; (iv) Web personalization methods; (v) Information hiding techniques and digital rights management; (vi) Novel solutions to information access and retrieval for dynamic Web content; (vii) Efficient compression algorithms and coding for continuous digital media - multimedia content; (viii) Architectures for Quality of Service guarantees in real-time distributed applications; (ix) Uncertainty management in real-time distributed applications; (x) Concepts in real-time distributed applications enabled by new communication network technologies. Guest Editors: Alexander G. Parlos, Texas A&M University, College Station, Texas, USA (Coordinator) Chuanyi Ji, Georgia Institute of Technology, Atlanta, Georgia, USA K. Claffy, San Diego Supercomputer Center, University of California, San Diego, California, USA Thomas Parisini, University of Trieste, Trieste, Italy Marco Baglietto, University of Genoa, Genoa, Italy Manuscripts will be screened for topical relevance, and those that pass the screening process will undergo the standard review process of the IEEE Transactions on Neural Networks. Paper submission deadline is November 1, 2003. Prospective authors are encouraged to submit an abstract by September 1, 2003. This will help in the planning and review process. The final Special Issue will be published in the Fall of 2004. Electronic manuscript submission is mandatory and only papers in pdf format will be considered for review. All manuscripts should be sent to the Coordinator of the guest editorial team at a-parlos at tamu.edu. 
From michael at jupiter.chaos.gwdg.de Mon Jun 23 10:40:32 2003 From: michael at jupiter.chaos.gwdg.de (Michael Herrmann) Date: Mon, 23 Jun 2003 16:40:32 +0200 (CEST) Subject: Course in Computational Neuroscience Message-ID: Applications are invited for a tutorial course on COMPUTATIONAL NEUROSCIENCE at Goettingen, Germany September 24 - 28, 2003 presented by the German Neuroscience Society organized by J. M. Herrmann, M. Diesmann, and T. Geisel The course is intended to provide graduate students and young researchers from all parts of neuroscience with working knowledge of theoretical and computational methods in neuroscience and to acquaint them with recent developments in this field. The course includes topics such as * Mechanisms and models of visual attention * Models of synaptic background activity * Theory of neural coding * Structure and function of large-scale cortical networks * Theory of sensorimotor learning * Dynamics in local neural networks. Tutorials and lectures will be given by: Prof. Dr. Stefan Treue (Goettingen) Dr. Nicolas Brunel (Paris) Dr. Michael Rudolph (Paris) Prof. Dr. Klaus Pawelzik (Bremen) PD Dr. Markus Lappe (Muenster) PD Dr. Rolf Koetter (Duesseldorf) Dr. Christian Eurich (Bremen), and by the organizers. The course takes place at the Department of Nonlinear Dynamics of the Max-Planck Institute for Fluid Dynamics, Bunsenstr. 10, D-37073 Goettingen. The course is free for members of the German Neuroscience Society, while non-members are charged a fee of 100 EUR. Course language is English. To apply, please fill in the application form at: www.chaos.gwdg.de/nwg-course by July 1, 2003.
For further information please contact: nwg-course at chaos.gwdg.de From desa at Cogsci.ucsd.edu Mon Jun 23 17:33:57 2003 From: desa at Cogsci.ucsd.edu (Virginia de Sa) Date: Mon, 23 Jun 2003 14:33:57 -0700 (PDT) Subject: machine learning database for problems in biology Message-ID: Scientists from the San Diego Supercomputer Center (SDSC) in collaboration with scientists from the cognitive science department are putting together a grant proposal to construct a new database of genomic problems for machine learning. The idea is to be much more comprehensive and up to date than the UCI database and more user-friendly than GENBANK (where you have to be an expert to remove "wrong" entries). The scientists from the SDSC are bioinformaticians interested in making the latest biological data available to the machine learning community and in exploiting the latest machine learning tools to answer complex biological problems. There will also be different "views" of the same data, which should be useful for multi-view learning (co-training, minimizing-disagreement, IMAX, ...). If you are interested in this kind of database, please let us know (nair at sdsc.edu, desa at ucsd.edu, gribskov at sdsc.edu) as soon as possible. Also let us know if there are particular features you would like (or features you don't like about current databases). -- ------------------------------------------------------------------ Virginia de Sa desa at ucsd.edu Department of Cognitive Science ph: 858-822-5095 9500 Gilman Dr.
858-822-2402 La Jolla, CA 92093-0515 fax: 858-534-1128 ------------------------------------------------------------------ From nello at wald.ucdavis.edu Mon Jun 23 14:01:47 2003 From: nello at wald.ucdavis.edu (Nello Cristianini) Date: Mon, 23 Jun 2003 11:01:47 -0700 (PDT) Subject: Impromptu Posters at COLT/KM 2003 Message-ID: <20030623105544.R13244-100000@anson.ucdavis.edu> Dear Colleague, you are invited to present your late-breaking work on kernel methods at the 'impromptu' poster session which will be held during the kernel day at the joint COLT/KM meeting in Washington DC, on 25 August, 2003. Incomplete, unusual and controversial ideas are welcome. Please submit an abstract IN TEXT FORMAT by July 25, 2003 to: sabrina.nielebock at tuebingen.mpg.de with the subject line: "KM workshop poster submission." You will receive a final acceptance/rejection decision by July 31st. Please DO NOT SEND POSTERS or other formats. Regards, Nello Cristianini From j.a.bullinaria at cs.bham.ac.uk Mon Jun 30 05:19:30 2003 From: j.a.bullinaria at cs.bham.ac.uk (John Bullinaria) Date: Mon, 30 Jun 2003 10:19:30 +0100 Subject: Research Position in Birmingham, UK Message-ID: ------------------------------------------------------------------------ VACANCY: The Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA) at Birmingham, UK. (http://www.cercia.ac.uk) Research Fellow/Associate in Computational Intelligence (Ref. No. S36573/03) ------------------------------------------------------------------------ The School of Computer Science has recently set up a Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA), with substantial funding from Advantage West Midlands (the regional development agency), to capitalise on and exploit the world-class research in the School for the benefit of industry and businesses (especially those in the West Midlands region).
Applications are now invited for the post of research fellow/associate. The post is available immediately until 31 March 2007 in the first instance. The successful applicant for the research fellow/associate post must have excellent analytical and problem-solving skills in computational intelligence and excellent programming and software development skills. He/she should have a PhD degree in computer science/engineering or a closely related field, or at least a very good honours degree with significant research and development experience in computational intelligence. He/she should demonstrate willingness and interest in tackling real-world problems and applying computational intelligence techniques to industry and businesses. He/she should be a good team player. Excellent written and oral communication skills are required. The School of Computer Science has a world-leading group in natural computation and computational intelligence (http://www.cs.bham.ac.uk/research/NC). It also runs an EPSRC supported MSc programme in Natural Computation (http://www.cs.bham.ac.uk/study/postgraduate-taught/msc-nc/). The group includes more than 25 researchers (including permanent and visiting staff and PhD students), working on a wide range of topics in natural computation and computational intelligence. The starting salary for the research fellow/associate is on the research scale in the range GBP18,265 - GBP30,660 per annum, depending on experience and qualifications. For further particulars, please visit http://www.cs.bham.ac.uk/news/jobs/cercia-rf.03/ For informal enquiries, please contact Prof Xin Yao (X.Yao at cs.bham.ac.uk). Formal applications should be sent to the Personnel Services (address below).
CLOSING DATE FOR RECEIPT OF APPLICATIONS: 8 July 2003 (late applications may be considered) APPLICATION FORMS RETURNABLE TO The Director of Personnel Services The University of Birmingham Edgbaston, Birmingham, B15 2TT England RECRUITMENT OFFICE FAX NUMBER +44 121 414 4802 RECRUITMENT OFFICE TELEPHONE NUMBER +44 121 414 6486 RECRUITMENT OFFICE E-MAIL ADDRESS j.a.gerald at bham.ac.uk From radford at cs.toronto.edu Mon Jun 30 11:38:50 2003 From: radford at cs.toronto.edu (Radford Neal) Date: Mon, 30 Jun 2003 11:38:50 -0400 Subject: New software release / Dirichlet diffusion trees Message-ID: <03Jun30.113856edt.453139-25226@jane.cs.toronto.edu> Announcing a new release of my SOFTWARE FOR FLEXIBLE BAYESIAN MODELING Features include: * Regression and classification models based on neural networks and Gaussian processes * Density modeling and clustering methods based on finite and infinite (Dirichlet process) mixtures and on Dirichlet diffusion trees * Inference for a variety of simple Bayesian models specified using BUGS-like formulas * A variety of Markov chain Monte Carlo methods, for use with the above models, and for evaluation of MCMC methodologies Dirichlet diffusion tree models are a new feature in this release. These models utilize a new family of prior distributions over distributions that is more flexible and realistic than Dirichlet process, Dirichlet process mixture, and Polya tree priors. These models are suitable for general density modeling tasks, and also provide a Bayesian method for hierarchical clustering. See the following references: Neal, R. M. (2003) "Density modeling and clustering using Dirichlet diffusion trees", to appear in Bayesian Statistics 7. Neal, R. M. (2001) "Defining priors for distributions using Dirichlet diffusion trees", Technical Report No. 0104, Dept. of Statistics, University of Toronto, 25 pages. Available at http://www.cs.utoronto.ca/~radford/dft-paper1.abstract.html The software is written in C for Unix and Linux systems.
It is free, and may be downloaded from http://www.cs.utoronto.ca/~radford/fbm.software.html ---------------------------------------------------------------------------- Radford M. Neal radford at cs.utoronto.ca Dept. of Statistics and Dept. of Computer Science radford at utstat.utoronto.ca University of Toronto http://www.cs.utoronto.ca/~radford ---------------------------------------------------------------------------- List of Accepted Papers *********************** A Linear Belief Function Approach to Portfolio Evaluation Liping Liu, Catherine Shenoy, Prakash Shenoy Policy-contingent abstraction for robust robot control Joelle Pineau, Geoff Gordon, Sebastian Thrun The Revisiting Problem in Mobile Robot Map Building: A Hierarchical Bayesian Approach Benjamin Stewart, Jonathan Ko, Dieter Fox, Kurt Konolige Implementation and Comparison of Solution Methods for Decision Processes with Non-Markovian Rewards Charles Gretton, David Price, Sylvie Thiebaux The Information Bottleneck EM (IB-EM) Algorithm Gal Elidan, Nir Friedman On revising fuzzy belief bases Richard Booth, Eva Richter Learning Module Networks Eran Segal, Dana Pe'er, Aviv Regev, Daphne Koller, Nir Friedman Learning Continuous Time Bayesian Networks Uri Nodelman, Christian Shelton, Daphne Koller 1 Billion Pages = 1 Million Dollars?
Mining the Web to Play "Who Wants to be a Millionaire?" Shyong Lam, David Pennock, Dan Cosley, Steve Lawrence Collaborative Ensemble Learning: Combining Collaborative and Content-Based Information Filtering Kai Yu, Anton Schwaighofer, Volker Tresp, Wei-Ying Ma, HongJiang Zhang Marginalizing Out Future Passengers in Group Elevator Control Daniel Nikovski, Matthew Brand Cooperative Negotiation in Autonomic Systems using Incremental Utility Elicitation Craig Boutilier, Rajarshi Das, Jeffrey Kephart, Gerry Tesauro, William Walsh Renewal Strings for cleaning astronomical databases Amos Storkey, Nigel Hambly, Christopher Williams, Bob Mann Approximate Decomposition: A Method for Bounding and Estimating Probabilistic and Deterministic Queries David Larkin Loopy Belief Propagation as a Basis for Communication in Sensor Networks Christopher Crick, Avi Pfeffer Efficient Gradient Estimation for Motor Control Learning Gregory Lawrence, Noah Cowan, Stuart Russell Large-Sample Learning of Bayesian Networks is Hard Max Chickering, David Heckerman, Christopher Meek Efficiently Inducing Features of Conditional Random Fields Andrew McCallum A generalized mean field algorithm for variational inference in exponential families Eric Xing, Michael I. 
Jordan, Stuart Russell A Logic for Reasoning about Evidence Joe Halpern, Riccardo Pucella On Information Regularization Adrian Corduneanu, Tommi Jaakkola An Empirical Study of w-Cutset Sampling for Bayesian Networks Bozhena Bidyuk, Rina Dechter Approximate inference and constrained optimization Tom Heskes, Kees Albers, Bert Kappen Upgrading Ambiguous Signs in QPNs Janneke Bolt, Silja Renooij, Linda van der Gaag A Tractable Probabilistic Model for Projection Pursuit Max Welling, Richard Zemel, Geoffrey Hinton A New Algorithm for Maximum Likelihood Estimation in Gaussian Graphical Models for Marginal Independence Mathias Drton, Thomas Richardson Stochastic complexity of Bayesian networks Keisuke Yamazaki, Sumio Watanabe On Local Optima in Learning Bayesian Networks Jens Dalgaard Nielsen, Tomas Kocka, Jose Manuel Peña A Distance-Based Branch and Bound Feature Selection Algorithm Ari Frank, Dan Geiger, Zohar Yakhini Decision Making with Partially Consonant Belief Functions Phan H. Giang, Prakash Shenoy Robust Independence Testing for Constraint-Based Learning of Causal Structure Denver Dash, Marek Druzdzel CLP(BN): Constraint Logic Programming for Probabilistic Knowledge Vitor Santos Costa, David Page, James Cussens, Maleeha Qazi Efficient Inference in Large Discrete Domains Rita Sharma, David Poole Solving MAP Exactly using Systematic Search James Park, Adnan Darwiche Dealing with uncertainty in fuzzy inductive reasoning methodology Francisco Mugica, Angela Nebot, Pilar Gomez LAYERWIDTH: Analysis of a New Metric for Directed Acyclic Graphs Mark Hopkins Strong Faithfulness and Uniform Consistency in Causal Inference Jiji Zhang, Peter Spirtes Locally Weighted Naive Bayes Eibe Frank, Mark Hall, Bernhard Pfahringer Phase Transition of Tractability in Constraint Satisfaction and Bayesian Network Inference Yong Gao On the Convergence of Bound Optimization Algorithms Ruslan Salakhutdinov, Sam Roweis, Zoubin Ghahramani Structure-Based Causes and Explanations in the Independent 
Choice Logic Alberto Finzi, Thomas Lukasiewicz Automated Analytic Asymptotic Evaluation of the Marginal Likelihood for Latent Models Dmitry Rusakov, Dan Geiger An Axiomatic Approach to Robustness in Search Problems with Multiple Scenarios Patrice Perny, Olivier Spanjaard Learning Riemannian Metrics Guy Lebanon Exploiting Locality in Searching the Web Joel Young, Thomas Dean Preference-based Graphic Models for Collaborative Filtering Rong Jin, Luo Si, Chengxiang Zhai Monte Carlo Matrix Inversion Policy Evaluation Fletcher Lu, Dale Schuurmans Bayesian Hierarchical Mixtures of Experts Markus Svensen, Christopher Bishop Toward a possibilistic handling of partially ordered information Sylvain Lagrue, Salem Benferhat, Odile Papini Incremental Compilation of Bayesian networks Julia Flores, Jose Gamez, Kristian G. Olesen Decentralized Sensor Fusion With Distributed Particle Filters Matthew Rosencrantz, Geoff Gordon, Sebastian Thrun Probabilistic Reasoning about Actions in Nonmonotonic Causal Theories Thomas Eiter, Thomas Lukasiewicz Parametric Dependability Analysis through Probabilistic Horn Abduction Luigi Portinale, Andrea Bobbio, Stefania Montani New Advances in Inference by Recursive Conditioning David Allen, Adnan Darwiche Updating with incomplete observations Gert De Cooman, Marco Zaffalon An Importance Sampling Algorithm Based on Evidence Pre-propagation Changhe Yuan, Marek Druzdzel Boltzmann Machine Learning with the Latent Maximum Entropy Principle Shaojun Wang, Dale Schuurmans, Fuchun Peng, Yunxin Zhao Inference in Polytrees with Sets of Probabilities José Carlos Rocha, Fabio Cozman Reasoning about Bayesian Network Classifiers Hei Chan, Adnan Darwiche Using the structure of d-connecting paths as a qualitative measure of the strength of dependence Sanjay Chaudhuri, Thomas Richardson Active Collaborative Filtering Craig Boutilier, Richard Zemel, Benjamin Marlin Learning Generative Models of Similarity Matrices Romer Rosales, Brendan Frey Symbolic Generalization for 
On-line Planning Zhengzhu Feng, Eric Hansen, Shlomo Zilberstein Factor Graphs: A Unification of Directed and Undirected Graphical Models Brendan Frey Sufficient Dimensionality Reduction with Side Information Amir Globerson, Gal Chechik, Naftali Tishby Markov Random Walk Representations with Continuous Distributions Chen-Hsiang Yeang, Martin Szummer Monte-Carlo optimizations for resource allocation problems in stochastic networks Milos Hauskrecht, Tomas Singliar Systematic vs. Non-systematic Algorithms for Solving the MPE Task Radu Marinescu, Kalev Kask, Rina Dechter Probabilistic models for joint clustering and time-warping of multidimensional curves Darya Chudova, Scott Gaffney, Padhraic Smyth Practically Perfect Christopher Meek, Max Chickering Optimal Limited Contingency Planning Nicolas Meuleau, David Smith Learning Measurement Models for Unobserved Variables Ricardo Silva, Richard Scheines, Clark Glymour, Peter Spirtes Budgeted Learning, Part II: The Naive-Bayes Case Daniel Lizotte, Omid Madani, Russell Greiner Value Elimination: Bayesian Inference via Backtracking Search Fahiem Bacchus, Shannon Dalmao, Toniann Pitassi A Simple Insight into Properties of Iterative Belief Propagation Rina Dechter, Robert Mateescu A Decision Making Perspective on Web Question Answering David Azari, Eric Horvitz, Susan Dumais, Eric Brill On Triangulating Dynamic Graphical Models Jeff Bilmes, Chris Bartels ********************************************************************** From David.Cohn at acm.org Wed Jun 4 12:38:10 2003 From: David.Cohn at acm.org (David 'Pablo' Cohn) Date: 04 Jun 2003 09:38:10 -0700 Subject: new JMLR paper: Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds Message-ID: <1054744690.29261.37.camel@bitbox.corp.google.com> [posted to connectionists at the request of the authors] The Journal of Machine Learning Research (www.jmlr.org) is pleased to announce publication of the seventh paper in Volume 4: -------------------------- 
Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds Lawrence K. Saul and Sam T. Roweis JMLR 4(Jun):119-155, 2003 Abstract The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation. Here we describe locally linear embedding (LLE), an unsupervised learning algorithm that computes low dimensional, neighborhood preserving embeddings of high dimensional data. The data, assumed to be sampled from an underlying manifold, are mapped into a single global coordinate system of lower dimensionality. The mapping is derived from the symmetries of locally linear reconstructions, and the actual computation of the embedding reduces to a sparse eigenvalue problem. Notably, the optimizations in LLE---though capable of generating highly nonlinear embeddings---are simple to implement, and they do not involve local minima. In this paper, we describe the implementation of the algorithm in detail and discuss several extensions that enhance its performance. We present results of the algorithm applied to data sampled from known manifolds, as well as to collections of images of faces, lips, and handwritten digits. These examples are used to provide extensive illustrations of the algorithm's performance---both successes and failures---and to relate the algorithm to previous and ongoing work in nonlinear dimensionality reduction. ---------------------------------------------------------------------------- This paper is available electronically at http://www.jmlr.org in PostScript and PDF formats. The papers of Volumes 1, 2 and 3 are also available electronically from the JMLR website, and in hardcopy from the MIT Press; please see http://mitpress.mit.edu/JMLR for details. 
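The abstract summarizes LLE as three steps: find each point's nearest neighbors, compute the weights that best reconstruct each point from its neighbors, and then solve a sparse eigenvalue problem on (I-W)^T(I-W) for the embedding. As a rough illustration of those steps, here is a minimal NumPy sketch on toy data; the function name, parameter choices, and data are illustrative assumptions, not the authors' implementation (which the paper itself details):

```python
import numpy as np

def lle(X, n_neighbors=5, n_components=2, reg=1e-3):
    """Minimal locally linear embedding sketch.

    1) k nearest neighbors, 2) local reconstruction weights,
    3) bottom eigenvectors of M = (I - W)^T (I - W)."""
    n = X.shape[0]
    # pairwise squared distances; exclude self via +inf diagonal
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]

    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                 # neighbors centered on x_i
        C = Z @ Z.T                           # local Gram matrix
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize (C may be singular)
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()           # weights sum to one

    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)            # ascending eigenvalues
    # discard the constant eigenvector (eigenvalue ~ 0)
    return vecs[:, 1:n_components + 1]

# toy data: noisy helix in 3-D, embedded into 1-D
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3 * np.pi, 80))
X = np.c_[np.cos(t), np.sin(t), t] + 0.01 * rng.standard_normal((80, 3))
Y = lle(X, n_neighbors=6, n_components=1)
print(Y.shape)  # (80, 1)
```

Because the weight matrix W has only k nonzero entries per row, M is sparse in practice, which is what makes the eigenproblem tractable for large datasets; the dense `eigh` call here is only for the small toy example.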
-David Cohn, From clinton at compneuro.umn.edu Wed Jun 4 15:39:29 2003 From: clinton at compneuro.umn.edu (Kathleen Clinton) Date: Wed, 04 Jun 2003 14:39:29 -0500 Subject: NEURON Workshop August 2003 Message-ID: <3EDE4AF1.30307@compneuro.umn.edu> ************************************* NEURON Workshop Space Still Available ************************************* Michael Hines and Ted Carnevale of Yale University will conduct a three- to five-day workshop on NEURON, a computer code that simulates neural systems. The workshop will be held from Monday to Friday, August 25-29, 2003 in Elliott Hall 121 on the University of Minnesota East Bank campus in Minneapolis, Minnesota. Registration is open to students and researchers from academic, government, and commercial organizations. Space is limited, and registrations will be accepted on a first-come, first-served basis. The workshop is sponsored by the University of Minnesota Computational Neuroscience Program, which is supported by a National Science Foundation Integrative Graduate Education and Research Traineeship grant and the University of Minnesota Graduate School, Institute of Technology, Medical School, and Supercomputing Institute for Digital Simulation and Advanced Computation. **Topics and Format** Participants may attend the workshop for three or five days. The first three days cover material necessary for the most common applications in neuroscience research and education. The fourth and fifth days deal with advanced topics for users whose projects may require problem-specific customizations. The Windows platform will be used. Days 1 - 3 "Fundamentals of Using the NEURON Simulation Environment" The first three days will cover the material that is required for informed use of the NEURON simulation environment. The emphasis will be on applying the graphical interface, which enables maximum productivity and conceptual control over models while at the same time reducing or eliminating the need to write code. 
Participants will be building their own models from the start of the course. By the end of the third day they will be well prepared to use NEURON on their own to explore a wide range of neural phenomena. Additional information about the topics can be found at www.compneuro.umn.edu . Days 4 and 5 "Beyond the GUI" The fourth and fifth days deal with advanced topics for users whose projects may require problem-specific customizations. Topics will include: Advanced use of the CellBuilder, Network Builder, and Linear Circuit Builder. When and how to modify model specification, initialization, and NEURON's main computational loop. Exploiting special features of the Network Connection class for efficient implementation of use-dependent synaptic plasticity. Using NEURON's tools for optimizing models. Parallelizing computations. Using new features of the extracellular mechanism for --extracellular stimulation and recording --implementation of gap junctions and ephaptic interactions Developing new GUI tools. **Registration** For academic or government employees the registration fee is $175 for the first three days and $270 for the full five days. These fees are $350 and $540, respectively, for commercial participants. Registration forms can be obtained at www.compneuro.umn.edu/NEURONregistration.html or from the workshop coordinator, Kathleen Clinton, at clinton at compneuro.umn.edu or (612) 625-8424. **Lodging** Out-of-town participants may stay at the Days Inn, 2407 University Avenue SE in Minneapolis. It is within walking distance of Elliott Hall, but a shuttle bus is also available. Participants are responsible for making their own hotel reservations. The phone number is 612-623-9303. When making reservations, participants should state that they are attending the NEURON Workshop. A small block of rooms is available until July 24, 2003. 
From h.bowman at kent.ac.uk Wed Jun 4 12:07:10 2003 From: h.bowman at kent.ac.uk (hb5) Date: Wed, 04 Jun 2003 17:07:10 +0100 Subject: Neural Computation and Psychology Workshop Message-ID: <3EDE192E.66C4B079@ukc.ac.uk> The abstract submission deadline for the following event is almost upon us. I would appreciate it if you could bring this call to the attention of anybody who might be interested. Thanks, Howard Bowman ==================================================== Eighth Neural Computation and Psychology Workshop (NCPW 8) Connectionist Models of Cognition, Perception and Emotion 28-30 August 2003 at the University of Kent at Canterbury, UK The Eighth Neural Computation and Psychology Workshop (NCPW8) will be held in Canterbury, England from 28-30 August 2003. The NCPW series is now a well-established and lively forum that brings together researchers from such diverse disciplines as artificial intelligence, cognitive science, computer science, neuroscience, philosophy and psychology. Between 25 and 30 papers will be accepted as oral presentations. In addition to the high quality of the papers presented, this Workshop takes place in an informal setting, in order to encourage interaction among the researchers present. Publication ----------- Proceedings of the workshop will appear in the series Progress in Neural Processing, which is published by World Scientific. 
Speakers -------- The list of speakers who have already agreed to attend includes: John Bullinaria (Birmingham) Gary Cottrell (San Diego) Bob French (Liege) Peter Hancock (Stirling) Richard Shillcock (Edinburgh) Chris Solomon (Kent at Canterbury) John Taylor (Kings College) Marius Usher (Birkbeck College) Important Dates --------------- Deadline for submission of abstracts: June 13th 2003 Notification of acceptance/rejection: June 29th 2003 Website ------- More details can be found on the conference website, http://www.cs.ukc.ac.uk/events/conf/2003/ncpw/ Conference Chair ---------------- Howard Bowman, University of Kent, UK Conference Organisers --------------------- Howard Bowman, UKC Colin G. Johnson, UKC Miguel Mendao, UKC Vikki Roberts, UKC Proceedings Editors -------------------- Howard Bowman, UKC Christophe Labiouse, Liege From terry at salk.edu Thu Jun 5 14:21:43 2003 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 5 Jun 2003 11:21:43 -0700 (PDT) Subject: NEURAL COMPUTATION 15:6 Message-ID: <200306051821.h55ILhU53238@purkinje.salk.edu> Neural Computation - Contents - Volume 15, Number 6 - June 1, 2003 ARTICLE Estimation of Entropy and Mutual Information Liam Paninski REVIEW A Taxonomy for Spatiotemporal Connectionist Networks Revisited: The Unsupervised Case Guilherme de A. Barreto, Aluizio F. R. Araujo and Stefan C. Kremer LETTERS On Embedding Synfire Chains In A Balanced Network Y. Aviel, C. Mehring, M. Abeles, and D. Horn Ergodicity of Spike Trains: When Does Trial Averaging Make Sense? 
Naoki Masuda and Kazuyuki Aihara Laplacian Eigenmaps for Dimensionality Reduction and Data Representation Mikhail Belkin and Partha Niyogi Leave-One-Out Bounds for Kernel Methods Tong Zhang ----- ON-LINE - http://neco.mitpress.org/ SUBSCRIPTIONS - 2003 - VOLUME 15 - 12 ISSUES
                 USA     Canada*   Other Countries
Student/Retired  $60     $64.20    $108
Individual       $95     $101.65   $143
Institution      $590    $631.30   $638
* includes 7% GST
MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ----- From Johan.Suykens at esat.kuleuven.ac.be Thu Jun 5 05:29:39 2003 From: Johan.Suykens at esat.kuleuven.ac.be (Johan Suykens) Date: Thu, 05 Jun 2003 11:29:39 +0200 Subject: New Book on Learning Theory Message-ID: <3EDF0D83.2000102@esat.kuleuven.ac.be> -Announcement New Book on Learning Theory- J.A.K. Suykens, G. Horvath, S. Basu, C. Micchelli, J. Vandewalle (Eds.) Advances in Learning Theory: Methods, Models and Applications, NATO Science Series III: Computer & Systems Sciences, Volume 190, IOS Press Amsterdam, 2003, 436pp. (ISBN: 1 58603 341 7) http://www.esat.kuleuven.ac.be/sista/natoasi/book.html http://www.iospress.nl/site/html/boek-1722819779.html Book edited on the occasion of the NATO-ASI (Advanced Study Institute) on Learning Theory and Practice (Leuven, July 2002) http://www.esat.kuleuven.ac.be/sista/natoasi/ltp2002.html -Contents- * Preface * Organizing committee * List of chapter contributors * Table of contents * An Overview of Statistical Learning Theory V. Vapnik * Best Choices for Regularization Parameters in Learning Theory: on the Bias-Variance Problem F. Cucker, S. Smale * Cucker-Smale Learning Theory in Besov Spaces C.A. Micchelli, Y. Xu, P. Ye * High-dimensional Approximation by Neural Networks V. Kurkova * Functional Learning through Kernels S. Canu, X. Mary, A. Rakotomamonjy * Leave-one-out Error and Stability of Learning Algorithms with Applications A. Elisseeff, M. 
Pontil * Regularized Least-Squares Classification R. Rifkin, G. Yeo, T. Poggio * Support Vector Machines: Least Squares Approaches and Extensions J.A.K. Suykens, T. Van Gestel, J. De Brabanter, B. De Moor, J. Vandewalle * Extension of the nu-SVM Range for Classification F. Perez-Cruz, J. Weston, D.J.L. Herrmann, B. Schoelkopf * Kernel Methods for Text Processing N. Cristianini, J. Kandola, A. Vinokourov, J. Shawe-Taylor * An Optimization Perspective on Kernel Partial Least Squares Regression K.P. Bennett, M.J. Embrechts * Multiclass Learning with Output Codes Y. Singer * Bayesian Regression and Classification C.M. Bishop, M.E. Tipping * Bayesian Field Theory: from Likelihood Fields to Hyperfields J. Lemm * Bayesian Smoothing and Information Geometry R. Kulhavy * Nonparametric Prediction L. Gyorfi, D. Schafer * Recent Advances in Statistical Learning Theory M. Vidyasagar * Neural Networks in Measurement Systems (an engineering view) G. Horvath * List of participants * Subject index * Author index -Order information- IOS Press via website http://www.iospress.nl/site/html/boek-1722819779.html From thomas.j.palmeri at vanderbilt.edu Thu Jun 5 17:04:57 2003 From: thomas.j.palmeri at vanderbilt.edu (Thomas Palmeri) Date: Thu, 5 Jun 2003 16:04:57 -0500 Subject: Postdoctoral Fellowship at Vanderbilt University Message-ID: POSTDOCTORAL FELLOWSHIP LINKING COMPUTATIONAL MODELS AND SINGLE-CELL NEUROPHYSIOLOGY Members of the Psychology Department and the Center for Integrative and Cognitive Neuroscience at Vanderbilt University seek a highly qualified postdoctoral fellow to join an NSF-funded collaborative research project linking computational models of human cognition with single-cell neurophysiology. The aim is to elucidate how control over attention, categorization, and response selection is instantiated in neural processes underlying adaptive behavior. 
The project integrates separate programs of research in computational models of human cognition (Logan and Palmeri) and in single-cell neurophysiology (Schall). We are particularly interested in applicants with training in computational modeling (experience in mathematical modeling, neural network modeling, or dynamic systems modeling is equally desirable). Knowledge of theoretical and empirical research in attention, categorization, response selection, or related areas of cognition would be preferable, but is not necessary. The fellowship will pay according to the standard NIH scale, and will be for one or two years beginning July 1, 2003 or later. Fellows will be expected to apply for individual funding within the first year. Applicants should send a current vita, relevant reprints and preprints, a personal letter describing their research interests, background, goals, and career plans, and reference letters from two individuals. Applications will be reviewed as they are received. The fellowship can begin any time within the next six months. Individuals who have recently completed their dissertation or who expect to defend their dissertation this summer are encouraged to apply. We will also consider individuals currently in postdoctoral positions. Send Materials to: Thomas Palmeri, Gordon Logan, or Jeffrey Schall Department of Psychology 301 Wilson Hall 111 21st Avenue South Nashville, TN 37203 For more information on Vanderbilt, the Psychology Department, and the Center for Integrative and Cognitive Neuroscience, see the following web pages: Vanderbilt University http://www.vanderbilt.edu/ Psychology Department http://sitemason.vanderbilt.edu/psychology Center for Integrative and Cognitive Neuroscience http://cicn.vanderbilt.edu Vanderbilt University is an Affirmative Action / Equal Opportunity employer. Thomas J. 
Palmeri Associate Professor 507 Wilson Hall Department of Psychology Vanderbilt University Nashville, TN 37240 tel: 615-343-7900 fax: 615-343-8449 email: thomas.j.palmeri at vanderbilt.edu www: www.psy.vanderbilt.edu/faculty/palmeri/home.html 
From bio-adit2004-NOSPAM at listes.epfl.ch Sat Jun 7 07:59:32 2003 From: bio-adit2004-NOSPAM at listes.epfl.ch (Bio-ADIT2004) Date: Sat, 7 Jun 2003 13:59:32 +0200 Subject: [Bio-ADIT2004] - First Call for Papers Message-ID: ================================================================ We apologize if you receive multiple copies of this email. Please distribute this announcement to all interested parties. For removal, go to http://lslwww.epfl.ch/bio-adit2004/del.shtml ================================================================ Bio-ADIT 2004 CALL FOR PAPERS The First International Workshop on Biologically Inspired Approaches to Advanced Information Technology January 29 - 30, 2004 Swiss Federal Institute of Technology, Lausanne, Switzerland Website: http://lslwww.epfl.ch/bio-adit2004/ Sponsored by - Osaka University Forum, - Swiss Federal Institute of Technology, Lausanne, and - The 21st Century Center of Excellence Program of The Ministry of Education, Culture, Sports, Science and Technology, Japan Biologically inspired approaches have already proved successful in achieving major breakthroughs in a wide variety of problems in information technology (IT). 
A more recent trend is to explore the applicability of bio-inspired approaches to the development of self-organizing, evolving, adaptive and autonomous information technologies, which will meet the requirements of next-generation information systems, such as diversity, scalability, robustness, and resilience. These new technologies will become a base on which to build a networked symbiotic environment for a pleasant, symbiotic society of human beings in the 21st century. Bio-ADIT 2004 will be the first international workshop to present original research results in the field of bio-inspired approaches to advanced information technologies. It will also serve to foster the connection between biological paradigms and solutions for building next-generation information systems. SCOPE: The primary focus of the workshop is on new and original research results in the areas of information systems inspired by biology. We invite you to submit papers that present novel, challenging, and innovative results. The topics include all aspects of bio-inspired information technologies in networks, distributed/parallel systems, hardware (including robotics) and software. We also encourage you to submit papers dealing with: - Self-organizing, self-repairing, self-replicating and self-stabilizing systems - Evolving and adapting systems - Autonomous and evolutionary software and robotic systems - Scalable, robust and resilient systems - Complex biosystems - Gene, protein and metabolic networks - Symbiosis networks SUBMISSION OF PAPERS: Authors are invited to submit complete and original papers. Papers submitted should not have been previously published in any forum, nor be under review for any journal or other conference. All submitted papers will be refereed for quality, correctness, originality and relevance. All accepted papers will be published in the conference proceedings. It is also planned to publish accepted papers as a book. 
Manuscripts should include an abstract and be limited to 16 pages in single-spaced, single-column format. Submissions should include the title, author(s), author's affiliation, e-mail address, fax number and postal address. In the case of multiple authors, an indication of which author is responsible for correspondence and preparing the camera-ready paper for the proceedings should also be included. Electronic submission is strongly encouraged. Preferred file formats are PDF (.pdf) or Postscript (.ps). Visit our Web site at http://lslwww.epfl.ch/bio-adit2004/ for more information. Please contact Dr. Murata if you have to submit hard copies. Manuscripts should be submitted by September 5, 2003 through the Bio-ADIT Website. Please contact the technical program co-chairs with any questions: Professor Auke Jan Ijspeert School of Computer and Communication Sciences Swiss Federal Institute of Technology (EPFL) Lausanne CH 1015 Lausanne, Switzerland Tel: +41-21-693-2658 Fax: +41-21-693-3705 Email: Auke.Ijspeert at epfl.ch Professor Masayuki Murata Cybermedia Center Osaka University Toyonaka, Osaka 560-0043, Japan Tel: +81-6-6850-6860 Fax: +81-6-6850-6868 E-mail: murata at cmc.osaka-u.ac.jp IMPORTANT DATES: Paper submission deadline : September 5, 2003 Notification of acceptance: November 3, 2003 Camera-ready papers due : December 1, 2003 WEBSITE: An electronic paper submission system will be available from July 1, 2003 to accept papers for Bio-ADIT. Please visit our Web site at http://lslwww.epfl.ch/bio-adit2004/ for more up-to-date information. 
EXECUTIVE COMMITTEE: General Co-Chairs: - Daniel Mange (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Shojiro Nishio (Osaka University, Japan) Technical Program Committee Co-Chairs: - Auke Jan Ijspeert (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Masayuki Murata (Osaka University, Japan) Finance Chair: - Marlyse Taric (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Toshimitsu Masuzawa (Osaka University, Japan) Publicity Chair: - Christof Teuscher (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Takao Onoye (Osaka University, Japan) Publications Chair: - Naoki Wakamiya (Osaka University, Japan) Local Arrangements Chair: - Carlos Andres Pena-Reyes (Swiss Federal Institute of Technology, Lausanne, Switzerland) Internet Chair: - Jonas Buchli (Swiss Federal Institute of Technology, Lausanne, Switzerland) TECHNICAL PROGRAM COMMITTEE: Co-Chairs: - Auke Jan Ijspeert (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Masayuki Murata (Osaka University, Japan) Members: - Michael A. Arbib (University of Southern California, Los Angeles, USA) - Aude Billard (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Takeshi Fukuda (IBM Tokyo Research Laboratory, Japan) - Katsuo Inoue (Osaka University, Japan) - Wolfgang Maass (Graz University of Technology, Austria) - Ian W. 
Marshall (BTexact Technologies, UK) - Toshimitsu Masuzawa (Osaka University, Japan) - Alberto Montresor (University of Bologna, Italy) - Stefano Nolfi (Institute of Cognitive Sciences and Technology,CNR, Rome, Italy) - Takao Onoye (Osaka University, Japan) - Rolf Pfeifer (University of Zurich, Switzerland) - Eduardo Sanchez (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Hiroshi Shimizu (Osaka University, Japan) - Moshe Sipper (Ben-Gurion University, Israel) - Gregory Stephanopoulos (Massachusetts Institute of Technology, USA) - Adrian Stoica (Jet Propulsion Laboratory, USA) - Gianluca Tempesti (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Naoki Wakamiya (Osaka University, Japan) - Xin Yao (University of Birmingham, UK) From stefan.wermter at sunderland.ac.uk Tue Jun 10 03:13:17 2003 From: stefan.wermter at sunderland.ac.uk (Stefan Wermter) Date: Tue, 10 Jun 2003 08:13:17 +0100 Subject: CFP: FLAIRS 2004 special track on neural network application Message-ID: <3EE5850D.AB0B47DB@sunderland.ac.uk> Call for Papers Neural Network Applications Special Track at the 17th International FLAIRS Conference In cooperation with the American Association for Artificial Intelligence Palms South Beach Hotel Miami Beach, FL May 17-19, 2004 Papers are being solicited for a special track on Neural Network Applications at the 17th International FLAIRS Conference (FLAIRS-2004). The special track will be devoted to the applications of Neural Networks with the aim of presenting new and important contributions in this area. 
For details see http://uhaweb.hartford.edu/irussell/ST04.html Stefan Wermter *************************************** Stefan Wermter Professor for Intelligent Systems Director of Centre for Hybrid Intelligent Systems School of Computing and Technology University of Sunderland St Peters Way Sunderland SR6 0DD United Kingdom phone: +44 191 515 3279 fax: +44 191 515 3553 email: stefan.wermter at sunderland.ac.uk http://www.his.sunderland.ac.uk/~cs0stw/ http://www.his.sunderland.ac.uk/ **************************************** From terry at salk.edu Tue Jun 10 20:52:59 2003 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 10 Jun 2003 17:52:59 -0700 (PDT) Subject: NEURAL COMPUTATION 15:7 Message-ID: <200306110052.h5B0qxH66793@purkinje.salk.edu> Neural Computation - Contents - Volume 15, Number 7 - July 1, 2003 ARTICLE Background Synaptic Activity as a Switch Between Dynamical States in a Network Emilio Salinas NOTE Note on "Comparison of Model Selection for Regression" by Vladimir Cherkassky and Yungqian Ma Trevor Hastie, Rob Tibshirani and Jerome Friedman LETTERS Spike-Timing Dependent Plasticity and Relevant Mutual Information Maximization Gal Chechik Relating STDP to BCM Eugene M. Izhikevich and Niraj S. Desai Learning Innate Face Preferences James A. Bednar and Risto Miikkulainen Learning Optimized Features for Hierarchical Models of Invariant Object Recognition Heiko Wersing and Edgar Koerner Soft Learning Vector Quantization Sambu Seo and Klaus Obermayer An Efficient Approximation Algorithm for Finding a Maximum Clique Using Hopfield Network Learning Rong Long Wang, Zheng Tang and Qi Ping Cao The Effect of Noise on a Class of Energy-Based Learning Rules A. Bazzani, D. Remondini, N. Intrator, and G. C. Castellani Approximation by Fully-Complex Multilayer Perceptrons Taehwan Kim and Tulay Adali Asymptotic Behaviors of Support Vector Machines with Gaussian Kernel S. 
Sathiya Keerthi and Chih-Jen Lin Comparison of Model Selection for Regression Vladimir Cherkassky and Yunqian Ma ----- ON-LINE - http://neco.mitpress.org/ SUBSCRIPTIONS - 2003 - VOLUME 15 - 12 ISSUES
                 USA     Canada*    Other Countries
Student/Retired  $60     $64.20     $108
Individual       $95     $101.65    $143
Institution      $590    $631.30    $638
* includes 7% GST
MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ----- From a.silver at ucl.ac.uk Tue Jun 10 07:35:15 2003 From: a.silver at ucl.ac.uk (Angus Silver) Date: Tue, 10 Jun 2003 12:35:15 +0100 Subject: neuroinformatics software engineering position at UC London Message-ID: Software Development: Tools for Grid-Based Computational Neurobiology A research-based software engineering position is available in the Department of Physiology, University College London, UK, as part of the MRC-funded "Grid-enabled modelling tools and databases for neuroinformatics", a collaborative project with the Institute for Adaptive and Neural Computation, Division of Informatics, University of Edinburgh. The aim of the project is to develop software tools to aid construction, visualization, and analysis both for realistic neural network models of cerebellar cortex constructed in the Neuron and Genesis simulation environments and for lower-level 3D-diffusion-reaction models of synaptic mechanisms. The candidate will have a strong quantitative background with a degree in neuroscience, computer science, physics or engineering and will be expert in programming in JAVA and C. A higher degree (MSc/PhD) would be advantageous and experience with the Neuron simulator would be useful but not essential. Development of software tools will be closely linked to existing modelling projects (e.g. see abstract below) and to the electrophysiological and optical experiments carried out in Dr Silver's lab. The post is funded for 2 years at £33,025 p.a. 
Further information is available from Angus Silver (a.silver at ucl.ac.uk). To apply, please send a CV before July 10th 2003. **************************************************************************** Shunting Inhibition Modulates Neuronal Gain during Synaptic Excitation. Neuron. 2003 May 8;38(3):433-45. Mitchell SJ, Silver RA. Department of Physiology, University College London, Gower Street, WC1E 6BT, London, United Kingdom Neuronal gain control is important for processing information in the brain. Shunting inhibition is not thought to control gain since it shifts input-output relationships during tonic excitation rather than changing their slope. Here we show that tonic inhibition reduces the gain and shifts the offset of cerebellar granule cell input-output relationships during frequency-dependent excitation with synaptic conductance waveforms. Shunting inhibition scales subthreshold voltage, increasing the excitation frequency required to attain a particular firing rate. This reduces gain because frequency-dependent increases in input variability, which couple mean subthreshold voltage to firing rate, boost voltage fluctuations during inhibition. Moreover, synaptic time course and the number of inputs also influence gain changes by setting excitation variability. Our results suggest that shunting inhibition can multiplicatively scale rate-coded information in neurons with high-variability synaptic inputs. From cristina.versino at jrc.it Wed Jun 11 09:25:16 2003 From: cristina.versino at jrc.it (Cristina Versino) Date: Wed, 11 Jun 2003 15:25:16 +0200 Subject: Analysis and Intelligence for Anti-fraud: post-doc position. Message-ID: The Joint Research Center has recently announced some job opportunities at the post-doctoral level on the web site of the Institute for the Protection and Security of the Citizen (see the following site and click on "opportunities"): http://ipsc.jrc.cec.eu.int/ The deadline for those wishing to apply is 27 June 2003. 
The specific project, entitled "Analysis and Intelligence for Anti-fraud", is offering a post-doc position with the following description: Project Description The ideal candidate has a Ph.D. in computer science with a specialization in one of the following fields: Large Databases, Data Warehousing, Data Mining/KDD, Data Visualization, or Statistics. Tasks will include both research and software development activities in support of the EU's anti-fraud work programme. The successful candidate will develop further hands-on data mining experience on large databases and will contribute to new developments in the techniques used at JRC. Some experience in the practical use of the Oracle RDBMS and the PL/SQL language in a Microsoft Windows environment is very useful. Working experience in data mining and/or data quality issues is highly desirable. Knowledge of the MATLAB software, the Business Objects or SAS Systems, and the JAVA programming language is an advantage. Duration: 24 months For specific information on the above project description in the anti-fraud domain, please write to thomas.barbas at jrc.it From cindy at bu.edu Wed Jun 11 11:20:54 2003 From: cindy at bu.edu (Cynthia Bradford) Date: Wed, 11 Jun 2003 11:20:54 -0400 Subject: Neural Networks 16(5/6): Special Issue on "Advances in Neural Networks Research: IJCNN'03" Message-ID: <200306111520.h5BFKs402506@cns-pc75.bu.edu> NEURAL NETWORKS 16(5/6) Contents - Volume 16, Numbers 5 and 6 - 2003 2003 Special Issue: "Advances in Neural Networks Research: IJCNN'03" Donald C. Wunsch II, Mike Hasselmo, DeLiang Wang, and Ganesh Kumar Venayagamoorthy, co-editors ------------------------------------------------------------------ INTRODUCTION: Welcome to the Special Issue: The Best of the Best Donald C. 
Wunsch II, Mike Hasselmo, DeLiang Wang, and Ganesh Kumar Venayagamoorthy PERCEPTUAL AND MOTOR FUNCTION: Adaptive force generation for precision-grip lifting by a spectral timing model of the cerebellum Antonio Ulloa, Daniel Bullock, and Brad Rhodes Radial basis function neural networks for nonlinear Fisher discrimination and Neyman-Pearson classification David Casasent and Xue-wen Chen Intrinsic generalization analysis of low dimensional representations Xiuwen Liu, Anuj Srivastava, and DeLiang Wang Application of four-layer neural network on information extraction Min Han, Lei Cheng, and Hua Meng Subject independent facial expression recognition with robust face detection using a convolutional neural network Masakazu Matsugu, Katsuhiko Mori, Yusuke Mitari, and Yuji Kaneda A generalized feedforward neural network architecture for classification and regression Ganesh Arulampalam and Abdesselam Bouzerdoum COGNITIVE FUNCTION AND COMPUTATIONAL NEUROSCIENCE: Hierarchical cognitive maps Horatiu Voicu Modeling goal-directed spatial navigation in the rat based on physiological data from the hippocampal formation Randal A. Koene, Anatoli Gorchetchnikov, Robert C. Cannon, and Michael E. Hasselmo An efficient training algorithm for dynamic synapse neural networks using trust region methods Hassan H. Namarvar and Theodore W. Berger Temporal binding as an inducer for connectionist recruitment learning over delayed lines Cengiz Gunay and Anthony S. Maida Developments in understanding neuronal spike trains and functional specializations in brain regions Roberto A. Santiago, James McNames, Kim Burchiel, and George G. Lendaris Shaping up simple cell's receptive field of animal visual by ICA and its application in navigation system Liming Zhang and Jianfeng Mei eLoom and Flatland: Specification, simulation and visualization engines for the study of arbitrary hierarchical neural architectures Thomas P. Caudell, Yunhai Xiao, and Michael J. 
Healy Associative morphological memories based on variations of the kernel and dual kernel methods Peter Sussner INFORMATICS: Adaptive double self-organizing maps for clustering gene expression profiles H. Ressom, D. Wang, and P. Natarajan An accelerated procedure for recursive feature ranking on microarray data C. Furlanello, M. Serafini, S. Merler, and G. Jurman DYNAMICS: Pattern completion through phase coding in population neurodynamics A. Gutierrez-Galvez and R. Gutierrez-Osuna Passive dendritic integration heavily affects spiking dynamics of recurrent networks Giorgio A. Ascoli Abductive reasoning with recurrent neural networks Ashraf M. Abdelbar, Emad A.M. Andrews, and Donald C. Wunsch II Neural networks with chaotic recursive nodes: Techniques for the design of associative memories, contrast with Hopfield architectures, and extensions for time-dependent inputs Emilio Del Moral Hernandez Simple and conditioned adaptive behavior from Kalman filter trained recurrent networks Lee A. Feldkamp, Daniel V. Prokhorov, and Timothy M. Feldkamp REINFORCEMENT LEARNING AND CONTROL: Learning robot actions based on self-organizing language memory Stefan Wermter and Mark Elshaw Autonomous mental development in high dimensional context and action spaces Ameet Joshi and Juyang Weng Chaos control and synchronization, with input saturation, via recurrent neural networks Edgar N. Sanchez and Luis J. Ricalde Proper orthogonal decomposition based optimal neurocontrol synthesis of a chemical reactor process using approximate dynamic programming Radhakant Padhi and S.N. Balakrishnan Numerical solution of elliptic partial differential equation using radial basis function neural networks Li Jianyu, Luo Siwei, Qi Yingjian, and Huang Yaping THEORY: Statistical efficiency of adaptive algorithms Bernard Widrow and Max Kamenetsky On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning Eiji Mizutani and James W. 
Demmel Stochastic resonance in noisy threshold neurons Bart Kosko and Sanya Mitaim Quantum optimization for training support vector machines Davide Anguita, Sandro Ridella, Fabio Rivieccio, and Rodolfo Zunino On the quality of ART1 text clustering Louis Massey Extension neural network and its applications M.H. Wang and C.P. Hung Fuzzy least squares support vector machines for multiclass problems Daisuke Tsujinishi and Shigeo Abe Evolving efficient learning algorithms for binary mappings John A. Bullinaria A network for recursive extraction of canonical coordinates Ali Pezeshki, Mahmood R. Azimi-Sadjadi, and Louis L. Scharf Automatic basis selection techniques for RBF networks Ali Ghodsi and Dale Schuurmans Data smoothing regularization, multi-sets-learning, and problem solving strategies Lei Xu Million city traveling salesman problem solution by divide and conquer clustering with adaptive resonance neural networks Samuel A. Mulder and Donald C. Wunsch II APPLICATIONS: A practical sub-space adaptive filter A. Zaknich Pharmacodynamic population analysis in chronic renal failure using artificial neural networks: A comparative study Adam E. Gaweda, Alfred A. Jacobs, Michael E. Brier, and Jacek M. Zurada Electronic nose based tea quality standardization Ritaban Dutta, E.L. Hines, J.W. Gardner, K.R. Kashwan, and M. Bhuyan A novel neural network-based survival analysis method Antonio Eleuteri, Roberto Tagliaferri, Leopoldo Milano, Sabino De Placido, and Michele De Laurentiis Divide-and-conquer approach for brain machine interfaces: Nonlinear mixture of competitive linear models Sung-Phil Kim, Justin C. Sanchez, Deniz Erdogmus, Yadunandana N. Rao, Johan Wessberg, Jose C. Principe, and Miguel Nicolelis Stochastic error whitening algorithm for linear filter estimation with noisy data Yadunandana N. Rao, Deniz Erdogmus, Geetha Y. Rao, and Jose C. Principe New internal optimal neurocontrol for a series FACTS device in a power transmission line Jung-Wook Park, Ronald G. 
Harley, and Ganesh K. Venayagamoorthy Design of an adaptive neural network based power system stabilizer Wenxin Liu, Ganesh K. Venayagamoorthy, and Donald C. Wunsch II On neural network techniques in the secure management of communication systems through improving and quality assessing pseudorandom stream generators D.A. Karras and V. Zorkadis Multimedia authenticity protection with ICA watermarking and digital bacteria vaccination Harold Szu, Steven Noel, Seong-Bin Yim, Jeff Willey, and Joe Landa VISUAL CORTEX: HOW ILLUSIONS REPRESENT REALITY: Interpolation processes in the visual perception of objects P.J. Kellman Laminar cortical dynamics of visual form perception Stephen Grossberg Moving objects appear to slow down at low contrasts Stuart Anstis Neural models of motion integration and segmentation Ennio Mingolla ------------------------------------------------------------------ Electronic access: www.elsevier.com/locate/neunet/. Individuals can look up instructions, aims & scope, see news, tables of contents, etc. Those who are at institutions which subscribe to Neural Networks get access to full article text as part of the institutional subscription. Sample copies can be requested for free and back issues can be ordered through the Elsevier customer support offices: nlinfo-f at elsevier.nl usinfo-f at elsevier.com or info at elsevier.co.jp ------------------------------ INNS/ENNS/JNNS Membership includes a subscription to Neural Networks: The International (INNS), European (ENNS), and Japanese (JNNS) Neural Network Societies are associations of scientists, engineers, students, and others seeking to learn about and advance the understanding of the modeling of behavioral and brain processes, and the application of neural modeling concepts to technological problems. Membership in any of the societies includes a subscription to Neural Networks, the official journal of the societies. 
Application forms should be sent to all the societies you want to apply to (for example, one as a member with subscription and the other one or two as a member without subscription). The JNNS does not accept credit cards or checks; to apply to the JNNS, send in the application form and wait for instructions about remitting payment. The ENNS accepts bank orders in Swedish Crowns (SEK) or credit cards. The INNS does not invoice for payment.
----------------------------------------------------------------------------
Membership Type      INNS               ENNS                 JNNS
----------------------------------------------------------------------------
membership with      $80 (regular)      SEK 660 (regular)    Y 13,000 (regular)
Neural Networks                                              (plus 2,000 enrollment fee)
                     $20 (student)      SEK 460 (student)    Y 11,000 (student)
                                                             (plus 2,000 enrollment fee)
-----------------------------------------------------------------------------
membership without   $30                SEK 200              not available to non-students
Neural Networks                                              (subscribe through another society)
                                                             Y 5,000 (student)
                                                             (plus 2,000 enrollment fee)
-----------------------------------------------------------------------------
Name:    _____________________________________
Title:   _____________________________________
Address: _____________________________________
         _____________________________________
         _____________________________________
Phone:   _____________________________________
Fax:     _____________________________________
Email:   _____________________________________
Payment: [ ] Check or money order enclosed, payable to INNS or ENNS
     OR  [ ] Charge my VISA or MasterCard
             card number ____________________________
             expiration date ________________________
INNS Membership
19 Mantua Road
Mount Royal NJ 08061 USA
856 423 0162 (phone)
856 423 3420 (fax)
innshq at talley.com
http://www.inns.org
ENNS Membership University of Skovde P.O. 
Box 408 531 28 Skovde Sweden 46 500 44 83 37 (phone) 46 500 44 83 99 (fax) enns at ida.his.se http://www.his.se/ida/enns JNNS Membership c/o Professor Shozo Yasui Kyushu Institute of Technology Graduate School of Life Science and Engineering 2-4 Hibikino, Wakamatsu-ku Kitakyushu 808-0196 Japan 81 93 695 6108 (phone and fax) jnns at brain.kyutech.ac.jp http://www.jnns.org/ ----------------------------------------------------------------- From tcp1 at leicester.ac.uk Wed Jun 11 13:28:23 2003 From: tcp1 at leicester.ac.uk (Tim Pearce) Date: Wed, 11 Jun 2003 18:28:23 +0100 Subject: Faculty Position in the area of Neuroengineering In-Reply-To: <3E48BDC6.9010502@ini.phys.ethz.ch> Message-ID: <00e601c3303e$dc61ac00$216bd28f@neuro2> Apologies for cross-postings. The Centre for Bioengineering has a Lectureship position available (effectively equivalent to an Assistant Professor position in the US). We would particularly welcome applications from those with an interest in neuromorphic engineering, neuroengineering, neuronal modelling and/or computational neuroscience. The University is well placed for establishing links with neuroscience researchers in Leicester (particularly clinical-based) and its nearest cities (London, Birmingham, Cambridge and Nottingham). Informal Enquiries Informal enquiries may be made to Tim Pearce (tcp1 at le.ac.uk - +44 116 223 1307) or to the Head of Department, Ian Postlethwaite (ixp at le.ac.uk) or to Fernando Schlindwein (fss1 at le.ac.uk). Applications Applications should be forwarded to reach the Personnel Office (Appointments) not later than 27 June 2003. ===== Applications are invited for a Lectureship in Bioengineering, which is broadly interpreted to include any area at the intersection of technology, mathematical modelling and life and/or clinical sciences. 
The successful applicant will join the expanding Centre for Bioengineering (http://www.le.ac.uk/eg/research/groups/control/bio/bio.htm), which has research interests in real-time monitoring of patients, modelling of neural systems, ultrasound in medicine, and neuroengineering. Successful candidates should have a PhD in a related area, an established record of journal publications, and research interests that overlap or complement our existing activities. We particularly welcome applicants who can demonstrate an ability or strong potential to secure external research funding and develop a vigorous programme of research. The successful candidate will also be expected to contribute to our undergraduate and postgraduate teaching programmes. The University The University of Leicester is one of the UK's leading research and teaching universities. The University was founded as a University College in 1921 and granted a Royal Charter in 1957. It has an estate of approximately 94 hectares that includes a six-hectare Botanic Garden, an arboretum and a range of residences in the suburbs that are set in attractive gardens. The University has 18,949 students including 9,491 at postgraduate level. There are 42 academic departments and 35 special divisions and centres located in six faculties: Arts, Education and Continuing Studies, Law, Medicine and Biological Sciences, Science and Social Sciences. There is a University-wide Graduate School and an Institute of Lifelong Learning. The University employs approximately 3,000 staff. The University has been ranked in the UK's top twenty universities in three consecutive years since 2001 by the Financial Times and by the Sunday Times. It was placed in the top 20 UK universities for research grant and contract income. The University had 25 ratings of 5*, 5 or 4 in the 2001 Research Assessment Exercise when 84% of the staff were in units of assessment of national and international excellence. 
In the Teaching Quality Assessment four units achieved a grade of excellent before 1995 and since then 15 units have received a score of 22 or more out of 24. The University has been awarded the Queen's Anniversary Prize in Higher and Further Education in 2002 for its submission in Genetics. The University is committed to producing research and teaching of the highest quality, to promoting undergraduate and postgraduate studies through campus-based and distance-learning programmes and to developing close collaboration with the local and regional community. The Department of Engineering The Department has 30 academic staff (including 11 Professors) supported by 7 academically-related staff, about 20 research staff and 30 technical and clerical staff. Engineering is one of the largest Departments at Leicester. The Department is renowned for its research in the areas of Control and Instrumentation, Electrical and Electronic Power, Radio Systems, Mechanics of Materials and Thermofluids and Environmental Engineering. In the 2001 Research Assessment Exercise it received a rating of 5A. Several research-led appointments have been made in recent years, including a number of Chairs, and this has resulted in research groups of international standing with strong leadership and a research base of highly talented staff. The successful candidate will join the Control and Instrumentation Research Group and be part of the Centre for Bioengineering. For additional information see http://jobs.ac.uk/jobfiles/YK396.html From lshams at caltech.edu Thu Jun 12 00:35:40 2003 From: lshams at caltech.edu (Ladan Shams) Date: Wed, 11 Jun 2003 21:35:40 -0700 Subject: Postdoctoral Position at UCLA+Caltech Message-ID: <5282C980-9C8F-11D7-9B5B-0003934F6770@caltech.edu> Postdoctoral Position in Multisensory Perception UCLA and Caltech Applications are invited for a postdoctoral position to study issues in multisensory perception. 
The research involves psychophysics methodology possibly combined with fMRI and/or statistical modeling. The projects will aim to unravel the interactions between visual, auditory, and tactile perceptual processes at various levels of inquiry ranging from phenomenology, to underlying brain mechanisms, to the governing computational principles. The successful candidate will have a Ph.D. in Psychology, Neuroscience, Computer Science, Engineering or a related field. Some experience in Psychophysics is required, and any experience in fMRI or modeling will be a strong advantage. Expertise in Matlab and/or C in a Mac or UNIX environment is highly desirable. The research will be performed primarily in the laboratory of Ladan Shams at UCLA (http://vmpl.psych.ucla.edu), and partly in the laboratory of Shinsuke Shimojo at Caltech (http://neuro.caltech.edu). Thus, the successful candidate will be affiliated with both UCLA and Caltech. Salary is according to the NIH scale. The initial appointment will be for one year and may be extended for a second or third year. The starting date is September 2003 (but somewhat flexible). Please send inquiries or CVs plus the names of 3 references to: Ladan Shams (ladan at caltech.edu) California Institute of Technology and University of California are Equal Opportunity Employers. ---------------------------- Ladan Shams, Ph.D. Assistant Professor UCLA Psychology Department 7545B Franz Hall Los Angeles, CA 90095-1563 URL: http://vmpl.psych.ucla.edu Tel: (310) 428-5296 From cristina.versino at jrc.it Wed Jun 11 09:39:15 2003 From: cristina.versino at jrc.it (Cristina Versino) Date: Wed, 11 Jun 2003 15:39:15 +0200 Subject: Surveillance Review Station: post-doc position. Message-ID: The Joint Research Center has recently announced some job opportunities at the post-doctoral level on the web site of the Institute for the Protection and Security of the Citizen. 
(see the following site and click on "opportunities") http://ipsc.jrc.cec.eu.int/ The deadline for those wishing to apply is 27 June 2003. Applications should follow the rules mentioned on the web site. The specific project (G07 1), entitled "Surveillance Review Station", is offering a post-doc position with the following description: Project Description The ideal candidate has a Ph.D. in pattern recognition and/or image processing. He/she will participate in the development of a tool for the analysis of multisensory surveillance data together with staff of the "Surveillance and Information Retrieval" sector. The work consists of the analysis of surveillance data from nuclear installations (e.g., images, radiation measures, etc.), including finding effective representations of sensor data in view of detecting and classifying Safeguards-relevant events. The candidate will therefore develop and test different types of data representation and classification techniques. Moreover, the candidate will explore the possibility of integrating simulation and visualisation tools into the image review station, to increase scene understanding through knowledge of the context in which surveillance data are acquired. Duration: 36 months For specific information on the above project description in the surveillance domain, please write to cristina.versino at jrc.it Administrative contact person: Anne-Marie Morrissey, Tel: +39 0332 789322, Fax: +39 0332 785232 E-mail: anne-marie.morrissey at jrc.it From oreilly at grey.colorado.edu Fri Jun 13 00:40:09 2003 From: oreilly at grey.colorado.edu (Randall C. 
O'Reilly) Date: Thu, 12 Jun 2003 22:40:09 -0600 Subject: TR on Working Memory in PFC and BG Message-ID: <200306130440.h5D4e9D28330@grey.colorado.edu> The following technical report is now available for downloading from: http://psych.colorado.edu/~oreilly/pubs-abstr.html#03_pbwm - Randy Making Working Memory Work: A Computational Model of Learning in the Prefrontal Cortex and Basal Ganglia Randall C. O'Reilly Department of Psychology University of Colorado Boulder, CO 80309 ICS Technical Report 03-03 Abstract: The prefrontal cortex has long been thought to subserve both working memory (the holding of information online for processing) and ``executive'' functions (deciding how to manipulate working memory and perform processing). Although many computational models of working memory have been developed, the mechanistic basis of executive function remains elusive. In effect, the executive amounts to a homunculus. This paper presents an attempt to deconstruct this homunculus through powerful learning mechanisms that allow a computational model of the prefrontal cortex to control both itself and other brain areas in a strategic, task-appropriate manner. These learning mechanisms are based on structures in the basal ganglia (NAc, VTA, striosomes of the dorsal striatum, SNc) that can modulate learning in other basal ganglia structures (matrisomes of the dorsal striatum, GP, thalamus), which in turn provide a dynamic gating mechanism for controlling prefrontal working memory updating. Computationally, the learning mechanism is designed to simultaneously solve the temporal and structural credit assignment problems. The model's performance compares favorably with standard backpropagation-based temporal learning mechanisms on the challenging 1-2-AX working memory task, and other benchmark working memory tasks. +----------------------------------------------------------------+ | Dr. Randall C. 
O'Reilly | | | Associate Professor | Phone: (303) 492-0054 | | Department of Psychology | Fax: (303) 492-2967 | | Univ. of Colorado Boulder | | | 345 UCB | email: oreilly at psych.colorado.edu | | Boulder, CO 80309-0345 | www: psych.colorado.edu/~oreilly | +----------------------------------------------------------------+ From golden at utdallas.edu Fri Jun 13 09:42:50 2003 From: golden at utdallas.edu (Richard Golden) Date: Fri, 13 Jun 2003 08:42:50 -0500 (CDT) Subject: Symposium Workshop "Bayesian Methods for Cognitive Modeling" Message-ID: Symposium Workshop: Bayesian Methods for Cognitive Modeling Tentative Schedule Monday, July 28, 2003, Weber State University Ogden, Utah (Following the 2003 Annual Meeting of the Society for Mathematical Psychology) 8:15am - 8:30am Introduction to the Symposium Workshop. Richard Golden (University Texas Dallas) and Richard Shiffrin (Indiana University) 8:30am -10:00am Bayesian Methods for Unsupervised Learning Zoubin Ghahramani (University College London, Gatsby Computational Neuroscience Unit) 10:00am - 10:30am Coffee Break 10:30am -12:00pm Bayesian Models of Human Learning and Inference Josh Tenenbaum (MIT, Brain and Cognitive Sciences) 12:00pm - 1:30pm Lunch Break 1:30pm -3:00pm The Bayesian Approach to Vision Alan Yuille (UCLA, Departments of Statistics and Psychology) 3:00pm - 3:30pm Coffee Break 3:30pm - 5:00pm Probabilistic Approaches to Language Learning and Processing Christopher Manning (Stanford University, Computer Science) ------------------------------------------------------------------------------ ----------------------------------------- * Each talk will be approximately 80 minutes in length with a 10 minute question time period. * A $20 Registration Fee is required for participation in the workshop. 
ABSTRACTS 8:30am -10:00am Bayesian Methods for Unsupervised Learning Zoubin Ghahramani (University College London, Gatsby Computational Neuroscience Unit) Many models used in machine learning and neural computing can be understood within the unified framework of probabilistic graphical models. These include clustering models (k-means, mixtures of Gaussians), dimensionality reduction models (PCA, factor analysis), time series models (hidden Markov models, linear dynamical systems), independent components analysis (ICA), hierarchical neural network models, etc. I will review the link between all these models, and the framework for learning them using the EM algorithm for maximum likelihood. I will then describe limitations of the maximum likelihood framework and how Bayesian methods overcome these limitations, allowing learning without overfitting, principled model selection, and the coherent handling of uncertainty. Time permitting, I will describe the computational challenges of Bayesian learning and approximate methods for overcoming those challenges, such as variational methods. 10:30am -12:00pm Bayesian Models of Human Learning and Inference Josh Tenenbaum (MIT, Brain and Cognitive Sciences) How can people learn the meaning of a new word from just a few examples? What makes a set of examples more or less representative of a concept? What makes two objects seem more or less similar? Why are some generalizations apparently based on all-or-none rules while others appear to be based on gradients of similarity? How do we infer the existence of hidden causal properties or novel causal laws? I will describe an approach to explaining these aspects of everyday induction in terms of rational statistical inference. In our Bayesian models, learning and reasoning are explained in terms of probability computations over a hypothesis space of possible concepts, word meanings, or generalizations. 
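[Editor's sketch, not part of the original announcement: the posterior computation over a hypothesis space described above can be illustrated with a minimal toy example in the spirit of this line of work. The hypothesis space, priors, and example numbers below are invented for illustration only.]

```python
# Toy Bayesian generalization over a small, invented hypothesis space of
# number concepts. Each hypothesis is a candidate extension of a concept,
# paired with a made-up prior probability.

HYPOTHESES = {
    # name: (set of numbers the concept covers, prior probability)
    "even":           (set(range(2, 101, 2)), 0.5),
    "powers of 2":    ({2, 4, 8, 16, 32, 64}, 0.3),
    "multiples of 4": (set(range(4, 101, 4)), 0.2),
}

def posterior(examples):
    """P(h | examples) via Bayes' rule with a 'size principle' likelihood:
    each example is assumed drawn uniformly from the concept's extension,
    so smaller hypotheses that still fit the data score higher."""
    scores = {}
    for name, (extension, prior) in HYPOTHESES.items():
        if all(x in extension for x in examples):
            scores[name] = prior * (1.0 / len(extension)) ** len(examples)
        else:
            scores[name] = 0.0  # hypothesis ruled out by an inconsistent example
    z = sum(scores.values())  # normalizing constant
    return {name: s / z for name, s in scores.items()}

post = posterior([16, 8, 2])
# The small "powers of 2" hypothesis dominates the broader "even" hypothesis,
# even though both are consistent with all three examples.
```

With one example the broad "even" hypothesis still competes; with several, the likelihood term concentrates the posterior on the tightest consistent concept, which is the rule-like vs. similarity-like gradient the abstract alludes to.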
The structure of the learner's hypothesis spaces reflects their domain-specific prior knowledge, while the nature of the probability computations depends on domain-general statistical principles. The hypotheses can be thought of as either potential rules for abstraction or potential features for similarity, with the shape of the learner's posterior probability distribution determining whether generalization appears more rule-based or similarity-based. Bayesian models thus offer an alternative to classical accounts of learning and reasoning that rest on a single route to knowledge -- e.g., domain-general statistics or domain-specific constraints -- or a single representational paradigm -- e.g., abstract rules or exemplar similarity. This talk will illustrate the Bayesian approach to modeling learning and reasoning on a range of behavioral case studies, and contrast its explanations with those of more traditional process models. 1:30pm -3:00pm The Bayesian Approach to Vision Alan Yuille (UCLA, Departments of Statistics and Psychology) Bayesian statistical decision theory formulates vision as perceptual inference where the goal is to infer the structure of the viewed scene from input images. The approach can be used not only to model perceptual phenomena but also to design computer vision systems that perform useful tasks on natural images. This ensures that the models can be extended from the artificial stimuli used in most psychophysical, or neuroscientific, experiments to more natural and realistic stimuli. The approach requires specifying likelihood functions for how the viewed scene generates the observed image data and prior probabilities for the state of the scene. We show how this relates to Signal Detection Theory and Machine Learning. Next we describe how the probability models (i.e. likelihood functions and priors) can be represented by graphs which makes explicit the statistical dependencies between variables. 
This representation enables us to account for perceptual phenomena such as discounting, cue integration, and explaining away. We illustrate the techniques involved in the Bayesian approach by two worked examples. The first is the perception of motion, where we describe Bayesian theories (Weiss & Adelson, Yuille & Grzywacz) which show that many phenomena can be explained as a trade-off between the likelihood function and the prior of a single model. The second is image parsing, where the goal is to segment natural images and to detect and recognize objects. This involves models competing and cooperating to explain the image by combining bottom-up and top-down processing.

3:30pm - 5:00pm
Probabilistic Approaches to Language Learning and Processing
Christopher Manning (Stanford University, Computer Science)

At the engineering end of speech and natural language understanding research, the field has been transformed by the adoption of Bayesian probabilistic approaches, with generative models such as Markov models, hidden Markov models, and probabilistic context-free grammars being standard tools of the trade, and people increasingly using more sophisticated models. More recently, these models have also begun to be used as cognitive models, to explore issues in psycholinguistic processing and how humans approach the resolution problem of combining evidence from numerous sources during the course of processing. Much of this work has been in a supervised learning paradigm, where models are built from hand-annotated data, but probabilistic approaches also open interesting new perspectives on formal problems of language learning.
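A minimal sketch of one of those standard tools: a hidden Markov model tagger decoded with the Viterbi algorithm. The tagset, vocabulary, and all probabilities below are made up for illustration; they are not from the talk.

```python
# Toy hidden Markov model part-of-speech tagger using the Viterbi
# algorithm. All probabilities are invented for illustration.

states = ["DET", "NOUN", "VERB"]
start = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}
trans = {  # P(next tag | current tag)
    "DET":  {"DET": 0.05, "NOUN": 0.90, "VERB": 0.05},
    "NOUN": {"DET": 0.10, "NOUN": 0.30, "VERB": 0.60},
    "VERB": {"DET": 0.50, "NOUN": 0.30, "VERB": 0.20},
}
emit = {  # P(word | tag)
    "DET":  {"the": 0.9, "dog": 0.0, "barks": 0.0},
    "NOUN": {"the": 0.0, "dog": 0.8, "barks": 0.2},
    "VERB": {"the": 0.0, "dog": 0.1, "barks": 0.9},
}

def viterbi(words):
    # best[s] = (probability of the best path ending in state s, that path)
    best = {s: (start[s] * emit[s][words[0]], [s]) for s in states}
    for w in words[1:]:
        new_best = {}
        for s in states:
            p, path = max(
                (best[prev][0] * trans[prev][s] * emit[s][w], best[prev][1])
                for prev in states
            )
            new_best[s] = (p, path + [s])
        best = new_best
    return max(best.values())[1]

print(viterbi(["the", "dog", "barks"]))  # ['DET', 'NOUN', 'VERB']
```

Real taggers work in log space with smoothed probabilities estimated from annotated corpora; the generative structure is the same.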
After surveying the broader field of probabilistic approaches in natural language processing, I'd like to focus on unsupervised approaches to learning language structure, show why it's a difficult problem, and present some recent work that I and others have been doing using probabilistic models, which shows considerable progress on tasks such as word class and syntactic structure learning.

From bogus@does.not.exist.com Mon Jun 16 03:26:02 2003 From: bogus@does.not.exist.com () Date: Mon, 16 Jun 2003 17:26:02 +1000 Subject: Paper on neuronal gain in the leaky integrate-and-fire neuron with conductance synapses Message-ID:

From j.a.bullinaria at cs.bham.ac.uk Tue Jun 17 12:52:54 2003 From: j.a.bullinaria at cs.bham.ac.uk (John Bullinaria) Date: Tue, 17 Jun 2003 17:52:54 +0100 Subject: Lectureship (i.e., Assistant Professorship) - Birmingham, UK Message-ID: <2421795A-A0E4-11D7-95F8-000A956C4D0A@cs.bham.ac.uk>

Members of this list are encouraged to apply for the following post. Excellent opportunities exist to collaborate with members of CERCIA and the Natural Computation Group, as well as others in the School.

------------------------------------------------------------------------
VACANCY: Lectureship in Computer Science, Software Engineering or Artificial Intelligence (Ref. No. S36578)
------------------------------------------------------------------------

Applications are invited for one limited-term Lectureship in Computer Science, Software Engineering or Artificial Intelligence in the School of Computer Science, the University of Birmingham, UK. The post is available immediately until 31 March 2007 in the first instance (further extension or conversion to an open-ended post is possible). We are looking for an outstanding researcher and teacher in any area of CS, SE or AI, although candidates in applied areas are preferred.
The post has become available because an existing staff member has moved temporarily to our new Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA). Applications that strengthen our research in computational intelligence as well as in other areas are equally welcome. The successful applicant must have an internationally excellent research record (or research potential, in the case of a recent PhD graduate) in CS, SE or AI, as evidenced by publications in leading international journals or conference proceedings in these fields. They must be willing and able to teach core CS, SE and AI modules that may or may not be in their research areas. They should have, or expect soon to have, a PhD in CS, SE, AI or an appropriate, closely related discipline. Very high quality research publications or industrial research achievements are acceptable as an alternative to a PhD. All our staff contribute to the School administration. The School encourages industrial outreach and consultancy.

The School has been steadily growing in its research achievements and is widely recognized as an international centre for research and teaching, as confirmed, for instance, by external research reviews, by our publications record, our record in attracting research grants, our ability to attract high quality staff and students from many countries, and the provision of a new special-purpose building for the School in 2001. Further information about our research is accessible through our research and news web sites http://www.cs.bham.ac.uk/research/ http://www.cs.bham.ac.uk/news/ and in personal web pages of staff and research students: http://www.cs.bham.ac.uk/people/ Further information about our teaching can be found here: http://www.cs.bham.ac.uk/study

The starting salary for Lecturers is on the scale GBP22,191 - GBP33,679 a year, depending on experience and qualifications.
Further particulars, including instructions on how to apply and pointers to online application forms, can be found at: http://www.punit.bham.ac.uk/vacancies/vacancyDisplay.htm?org_unit_code=DNPHYS&vacancy_class_id=2 Please quote reference S36578. Informal enquiries to: Prof. X. Yao (x.yao at cs.bham.ac.uk) *** Closing date for applications is: 24/6/2003. *** However, late applicants may be considered. Working towards equal opportunities.

From W.El-Deredy at livjm.ac.uk Tue Jun 17 11:56:15 2003 From: W.El-Deredy at livjm.ac.uk (El-Deredy, Wael) Date: Tue, 17 Jun 2003 16:56:15 +0100 Subject: PhD Studentship: Dynamics of pain perception Message-ID:

PhD studentship funded by the Arthritis Research Campaign GBP800,000 programme grant on brain mechanisms of pain perception. The post is based at Manchester University, working with Professor Anthony Jones' team on functional imaging of pain representation in the brain. http://www.ipn.at/ipn.asp?BGI. The department received a 5 rating in the last RAE.

The aims of the signal processing component of the programme are:
1. To develop new analysis methodologies for separating the sensory discrimination components of pain from the affective (subjective) components.
2. To integrate data from different neuroimaging modalities (mainly EEG and PET, and EEG and fMRI).

We are looking for an enthusiastic signal processing expert with an interest in biomedical applications and willingness to work within a team. We expect a good foundation in mathematics, statistics and optimisation techniques (especially Bayesian statistics and MCMC) and programming skills (Matlab). The candidate may also be required to support colleagues carrying out standard statistical analysis of functional neuroimaging data. For enquiries and application please write or send a CV to Anthony Jones (Ajones1 at fs1.ho.man.ac.uk), including the names of two referees, by 25 July, 2003. Dr.
Anthony Jones Pain Research Group Manchester University - Rheumatic Diseases Centre Clinical Sciences Building Hope Hospital Salford M6 8HD Tel: 0161 - 206 4265 Fax: 0161 - 206 4687 From a-parlos at tamu.edu Sat Jun 21 08:56:20 2003 From: a-parlos at tamu.edu (Alexander G. Parlos) Date: Sat, 21 Jun 2003 07:56:20 -0500 Subject: IEEE Special Issue Message-ID: <5.1.0.14.0.20030621075535.03c57300@mail.ad.mengr.tamu.edu> Call for Papers IEEE Transactions on Neural Networks Special Issue on Adaptive Learning Systems in Communication Networks Communication networks and internetworks, and in particular the Internet, have been characterized as the ultimate data-rich environments, dynamically evolving and expanding practically without any centralized control. Such data-rich, unstructured environments present a particular challenge for traditional methods of analysis and design. Adaptive learning methods, in general, including adaptive signal processing, neural networks, fuzzy logic and other data-driven methods and algorithms are in the unique position to offer credible alternatives. The goal of the proposed special issue is two-fold: (1) to highlight the on-going research in the field of adaptive learning systems, and in particular adaptive signal processing and neural networks, as it is applicable to computer and communication networks, and, (2) to present to the neural networks community and to others interested in adaptive learning systems, in general, a variety of new and challenging problems and their proposed solutions, originating from the rapidly expanding universe of computer and communication networks. As the use of these technologies spreads, numerous modeling, estimation, control, classification, clustering and signal processing problems are emerging. Many of these problems currently have no satisfactory solutions and some have been addressed with ad-hoc solutions. 
A common underlying theme of these problems is that they are data-rich, represent dynamically changing environments where the lack of valid mathematical models is predominant, and are representative of systems with no centralized control. These problems appear amenable to data-driven methods and algorithms, such as adaptive learning methods, including neural networks and other non-parametric or semi-parametric approaches. This special issue will welcome contributions with proposed approaches to existing problems, either with currently known or new solutions, and to new problems in the subject areas of computer and communication networks. The focus of the proposed solutions will be on data-driven or so-called measurement-based methods and algorithms, rooted in the general areas of adaptive learning methods. Papers are solicited from, but not limited to, the following topics:

Network Management Topics: (i) Methods and algorithms for network traffic analysis, modeling and characterization; (ii) Network performance measurement and analysis techniques; network fault monitoring and diagnosis methods; (iii) Network security and privacy, including intrusion detection methods; (iv) Approaches and methods for Quality of Service in IP networks; (v) Scalable routing algorithms and decentralized congestion control algorithms; (vi) Novel admission control algorithms; (vii) Control algorithms for high-speed network access technologies; (viii) Application of "new approaches" in adaptive learning systems to data-intensive tasks in complex networks.
Content Management Topics: (i) Approaches for scalable Web caching and related optimization methods; (ii) Novel solutions to operational problems in content delivery and distribution networks; (iii) Web data mining and knowledge discovery - scalability and comparison of methods; (iv) Web personalization methods; (v) Information hiding techniques and digital rights management; (vi) Novel solutions to information access and retrieval for dynamic Web content; (vii) Efficient compression algorithms and coding for continuous digital media - multimedia content; (viii) Architectures for Quality of Service guarantees in real-time distributed applications; (ix) Uncertainty management in real-time distributed applications; (x) Concepts in real-time distributed applications enabled by new communication network technologies. Guest Editors: Alexander G. Parlos, Texas A&M University, College Station, Texas, USA (Coordinator) Chuanyi Ji, Georgia Institute of Technology, Atlanta, Georgia, USA K. Claffy, San Diego Supercomputer Center, University of California, San Diego, California, USA Thomas Parisini, University of Trieste, Trieste, Italy Marco Baglietto, University of Genoa, Genoa, Italy Manuscripts will be screened for topical relevance, and those that pass the screening process will undergo the standard review process of the IEEE Transactions on Neural Networks. Paper submission deadline is November 1, 2003. Prospective authors are encouraged to submit an abstract by September 1, 2003. This will help in the planning and review process. The final Special Issue will be published in the Fall of 2004. Electronic manuscript submission is mandatory and only papers in pdf format will be considered for review. All manuscripts should be sent to the Coordinator of the guest editorial team at a-parlos at tamu.edu. 
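To make the "data-driven, measurement-based" flavour of the call concrete, here is a hypothetical sketch (not from the call itself) of an adaptive traffic monitor: it tracks a running mean and variance of a measurement stream with exponential forgetting and flags samples far outside the learned band, without any explicit traffic model.

```python
# Toy adaptive anomaly detector for a stream of traffic measurements
# (e.g. packets per second). Hypothetical illustration: the function
# name, forgetting factor, and threshold are invented for this sketch.

def detect_anomalies(samples, alpha=0.1, threshold=3.0):
    # Initialize the model from the first sample; var starts at a
    # nominal value so the band is not degenerate.
    mean, var = samples[0], 1.0
    flagged = []
    for i, x in enumerate(samples[1:], start=1):
        # Flag samples more than `threshold` standard deviations away.
        if abs(x - mean) > threshold * var ** 0.5:
            flagged.append(i)
        # Exponential forgetting: recent traffic dominates the model,
        # so the detector adapts as the normal load drifts.
        mean = (1 - alpha) * mean + alpha * x
        var = (1 - alpha) * var + alpha * (x - mean) ** 2
    return flagged

traffic = [100, 102, 99, 101, 103, 500, 100, 98, 101]
print(detect_anomalies(traffic))  # flags the burst at index 5
```

One honest caveat of this simple scheme: the burst inflates the learned variance, temporarily desensitizing the detector; practical systems use robust or windowed estimates for exactly that reason.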
From michael at jupiter.chaos.gwdg.de Mon Jun 23 10:40:32 2003 From: michael at jupiter.chaos.gwdg.de (Michael Herrmann) Date: Mon, 23 Jun 2003 16:40:32 +0200 (CEST) Subject: Course in Computational Neuroscience Message-ID:

Applications are invited for a tutorial course on COMPUTATIONAL NEUROSCIENCE at Goettingen, Germany, September 24 - 28, 2003, presented by the German Neuroscience Society, organized by J. M. Herrmann, M. Diesmann, and T. Geisel. The course is intended to provide graduate students and young researchers from all parts of neuroscience with working knowledge of theoretical and computational methods in neuroscience and to acquaint them with recent developments in this field. The course includes topics such as:
* Mechanisms and models of visual attention
* Models of synaptic background activity
* Theory of neural coding
* Structure and function of large-scale cortical networks
* Theory of sensor-motor learning
* Dynamics in local neural networks

Tutorials and lectures will be given by: Prof. Dr. Stefan Treue (Goettingen), Dr. Nicolas Brunel (Paris), Dr. Michael Rudolph (Paris), Prof. Dr. Klaus Pawelzik (Bremen), PD Dr. Markus Lappe (Muenster), PD Dr. Rolf Koetter (Duesseldorf), Dr. Christian Eurich (Bremen), and by the organizers. The course takes place at the Department of Nonlinear Dynamics of the Max-Planck Institute for Fluid Dynamics, Bunsenstr. 10, D-37073 Goettingen. The course is free for members of the German Neuroscience Society, while non-members are charged a fee of 100 EUR. Course language is English. To apply, please fill in the application form at: www.chaos.gwdg.de/nwg-course by July 1, 2003.
For further information please contact: nwg-course at chaos.gwdg.de

From desa at Cogsci.ucsd.edu Mon Jun 23 17:33:57 2003 From: desa at Cogsci.ucsd.edu (Virginia de Sa) Date: Mon, 23 Jun 2003 14:33:57 -0700 (PDT) Subject: machine learning database for problems in biology Message-ID:

Scientists from the San Diego Supercomputer Center (SDSC), in collaboration with scientists from the cognitive science department, are putting together a grant proposal to construct a new database of genomic problems for machine learning. The idea is to be much more comprehensive and up to date than the UCI database and more user-friendly than GENBANK (where you have to be an expert to remove "wrong" entries). The scientists from the SDSC are bioinformaticians interested in making the latest biological data available to the machine learning community and in exploiting the latest machine learning tools to answer complex biological problems. There will also be different "views" of the same data, which should be useful for multi-view learning (Multi-view learning, Co-training, Minimizing-Disagreement, IMAX, ...). If you are interested in this kind of database, please let us know (nair at sdsc.edu, desa at ucsd.edu, gribskov at sdsc.edu) as soon as possible. Also let us know if there are particular features you would like (or features you don't like about current databases). -- ------------------------------------------------------------------ Virginia de Sa desa at ucsd.edu Department of Cognitive Science ph: 858-822-5095 9500 Gilman Dr.
858-822-2402 La Jolla, CA 92093-0515 fax: 858-534-1128 ------------------------------------------------------------------

From nello at wald.ucdavis.edu Mon Jun 23 14:01:47 2003 From: nello at wald.ucdavis.edu (Nello Cristianini) Date: Mon, 23 Jun 2003 11:01:47 -0700 (PDT) Subject: Impromptu Posters at COLT/KM 2003 Message-ID: <20030623105544.R13244-100000@anson.ucdavis.edu>

Dear Colleague, you are invited to present your late-breaking work on kernel methods at the 'impromptu' poster session which will be held during the kernel day at the joint COLT/KM meeting in Washington DC, on 25 August, 2003. Incomplete, unusual and controversial ideas are welcome. Please submit an abstract IN TEXT FORMAT by July 25, 2003 to: sabrina.nielebock at tuebingen.mpg.de with the subject line: "KM workshop poster submission." You will receive a final acceptance/rejection decision by July 31st. Please DO NOT SEND POSTERS or other formats. Regards, Nello Cristianini

From j.a.bullinaria at cs.bham.ac.uk Mon Jun 30 05:19:30 2003 From: j.a.bullinaria at cs.bham.ac.uk (John Bullinaria) Date: Mon, 30 Jun 2003 10:19:30 +0100 Subject: Research Position in Birmingham, UK Message-ID:

------------------------------------------------------------------------
VACANCY: The Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA) at Birmingham, UK. (http://www.cercia.ac.uk) Research Fellow/Associate in Computational Intelligence (Ref. No. S36573/03)
------------------------------------------------------------------------

The School of Computer Science has recently set up a Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA), with substantial funding from Advantage West Midlands (the regional development agency), to capitalise on and exploit the world-class research in the School for the benefit of industry and businesses (especially those in the West Midlands region).
Applications are now invited for the post of a research fellow/associate. The post is available immediately until 31 March 2007 in the first instance. The successful applicant must have excellent analytical and problem-solving skills in computational intelligence and excellent programming and software development skills. He/she should have a PhD degree in computer science/engineering or a closely related field, or at least a very good honours degree with significant research and development experience in computational intelligence. He/she should demonstrate willingness and interest in tackling real-world problems and applying computational intelligence techniques to industry and businesses. He/she should be a good team player. Excellent written and oral communication skills are required.

The School of Computer Science has a world-leading group in natural computation and computational intelligence (http://www.cs.bham.ac.uk/research/NC). It also runs an EPSRC-supported MSc programme in Natural Computation (http://www.cs.bham.ac.uk/study/postgraduate-taught/msc-nc/). The group includes more than 25 researchers (including permanent and visiting staff and PhD students), working on a wide range of topics in natural computation and computational intelligence.

The starting salary for the research fellow/associate is on the research scale in the range GBP18,265 - GBP30,660 per annum, depending on experience and qualifications. For further particulars, please visit http://www.cs.bham.ac.uk/news/jobs/cercia-rf.03/ For informal enquiries, please contact Prof Xin Yao (X.Yao at cs.bham.ac.uk). Formal applications should be sent to the Personnel Services (address below).
CLOSING DATE FOR RECEIPT OF APPLICATIONS: 8 July 2003 (late applications may be considered)

APPLICATION FORMS RETURNABLE TO: The Director of Personnel Services, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, England
RECRUITMENT OFFICE FAX NUMBER: +44 121 414 4802
RECRUITMENT OFFICE TELEPHONE NUMBER: +44 121 414 6486
RECRUITMENT OFFICE E-MAIL ADDRESS: j.a.gerald at bham.ac.uk

From radford at cs.toronto.edu Mon Jun 30 11:38:50 2003 From: radford at cs.toronto.edu (Radford Neal) Date: Mon, 30 Jun 2003 11:38:50 -0400 Subject: New software release / Dirichlet diffusion trees Message-ID: <03Jun30.113856edt.453139-25226@jane.cs.toronto.edu>

Announcing a new release of my SOFTWARE FOR FLEXIBLE BAYESIAN MODELING

Features include:
* Regression and classification models based on neural networks and Gaussian processes
* Density modeling and clustering methods based on finite and infinite (Dirichlet process) mixtures and on Dirichlet diffusion trees
* Inference for a variety of simple Bayesian models specified using BUGS-like formulas
* A variety of Markov chain Monte Carlo methods, for use with the above models, and for evaluation of MCMC methodologies

Dirichlet diffusion tree models are a new feature in this release. These models utilize a new family of prior distributions over distributions that is more flexible and realistic than Dirichlet process, Dirichlet process mixture, and Polya tree priors. These models are suitable for general density modeling tasks, and also provide a Bayesian method for hierarchical clustering. See the following references:

Neal, R. M. (2003) "Density modeling and clustering using Dirichlet diffusion trees", to appear in Bayesian Statistics 7.

Neal, R. M. (2001) "Defining priors for distributions using Dirichlet diffusion trees", Technical Report No. 0104, Dept. of Statistics, University of Toronto, 25 pages. Available at http://www.cs.utoronto.ca/~radford/dft-paper1.abstract.html

The software is written in C for Unix and Linux systems.
It is free, and may be downloaded from http://www.cs.utoronto.ca/~radford/fbm.software.html ---------------------------------------------------------------------------- Radford M. Neal radford at cs.utoronto.ca Dept. of Statistics and Dept. of Computer Science radford at utstat.utoronto.ca University of Toronto http://www.cs.utoronto.ca/~radford ----------------------------------------------------------------------------
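The FBM package above is driven from the command line; as a generic, hypothetical sketch of the Markov chain Monte Carlo machinery such software is built on (not the package's own interface), here is a random-walk Metropolis sampler targeting an unnormalized density:

```python
import math
import random

# Generic random-walk Metropolis sampler. Illustration only: the
# function name and parameters are invented for this sketch; the
# target here is an unnormalized standard normal log-density.

def metropolis(log_density, x0, n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal) / p(x)),
        # computed in log space for numerical stability.
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)  # on rejection, the current state repeats
    return samples

draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
print(sum(draws) / len(draws))  # sample mean, close to 0 for this target
```

Because only the ratio of densities is used, the normalizing constant never has to be computed, which is what makes such samplers practical for the Bayesian models listed in the announcement.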