From ckiw at dai.ed.ac.uk Fri Aug 2 08:02:46 2002
From: ckiw at dai.ed.ac.uk (Chris Williams)
Date: Fri, 2 Aug 2002 13:02:46 +0100 (BST)
Subject: JOB: Research Fellow Position at University of Edinburgh
Message-ID:

3 year Research Fellowship in Machine Learning and Probabilistic Modelling

The University of Edinburgh seeks a Research Fellow in the field of Machine Learning and Probabilistic Modelling. The successful candidate will work closely with Dr Chris Williams and be based in the Institute for Adaptive and Neural Computation (ANC) in the School of Informatics.

The fellowship offers considerable flexibility regarding the areas of research to be pursued within the general field of Machine Learning and Probabilistic Modelling. Examples of possible research areas include (but are not limited to) supervised learning, unsupervised learning, probabilistic graphical models, and learning in computer vision. Current areas of research activity include supervised learning using Gaussian process predictors, learning objects from image data, latent variable models (e.g. applied to galaxy spectra) and advanced factorial-type Hidden Markov models for condition monitoring in premature babies. Research in Machine Learning/Probabilistic Modelling in ANC is led by Dr. Chris Williams, Dr. David Barber and Prof. Chris Bishop.

The candidate should have postgraduate experience in the mathematical, physical or computing sciences. Previous experience with probabilistic modelling together with good software skills would be an advantage. The candidate should be highly motivated and keen to participate in a lively interdisciplinary environment.

The fellowship is available for a period of three years and is supported by a grant from MSR (Europe) to support basic research.

Application Procedure: Please see details at
http://www.informatics.ed.ac.uk/events/vacancies/311653.html

The closing date for applications is 23 August 2002. For informal enquiries please contact Dr Chris Williams, c.k.i.williams at ed.ac.uk, tel +44 131 651 1212.

From Mikael.Boden at ide.hh.se Sun Aug 4 09:57:11 2002
From: Mikael.Boden at ide.hh.se (Mikael Bodén)
Date: Sun, 4 Aug 2002 15:57:11 +0200
Subject: Systematicity & Fallacies: Boden & Niklasson
Message-ID: <01C23BCF.982D2640.mibo@ide.hh.se>

Dear connectionists,

In a posting on this list Hadley criticized a paper we published in Connection Science (Boden and Niklasson, 2000, 12(2), 111--142). The paper discusses how systematicity of inference and representation can be achieved in neural networks using distributed representations. In light of Hadley's posting, a couple of comments and clarifications are clearly warranted.

First, we do NOT claim that our results fulfill the requirements of Hadley's "strong semantic systematicity" (made clear on p. 138 in the paper). Nevertheless, we deemed it useful to use the names of levels of generalization introduced by Hadley (1994) to qualify what we call "context-dependent" semantic systematicity (which may to some extent explain the misunderstanding we read into Hadley's note).

Regarding the notion of novelty, some details are required. The architecture we report on contains in essence two neural network modules. Each module corresponds to a type of context in which an object may occur. In fact, we may think of the system as having two separate training sets. Due to error feedback between modules, training one module affects the other. Simply put, the modules share representations.
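To make the shared-representation idea concrete, here is a minimal, hypothetical Python sketch (our actual model is a different, trained connectionist architecture; the embedding size, learning rate, and names below are illustrative only, not the paper's implementation):

import numpy as np

# Hypothetical sketch: two linear "modules" read from one SHARED embedding
# table. A gradient step in either module also updates the shared embedding
# (the "error feedback between modules" mentioned above), so training one
# module changes what the other module infers about a word.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in ("Tweety", "Jack", "Bird")}
W1 = rng.normal(size=(4, 8))  # module 1: word -> lexical properties
W2 = rng.normal(size=(4, 8))  # module 2: word -> asserted facts

def train_step(W, word, target, lr=0.1):
    """One squared-error gradient step that also moves the shared embedding."""
    e = emb[word]
    err = W @ e - target           # prediction error of this module
    grad_e = W.T @ err             # gradient w.r.t. the shared embedding
    W -= lr * np.outer(err, e)     # update module-specific weights in place
    emb[word] = e - lr * grad_e    # error feedback into the shared code
    return W

# Training module 2 on a "fact" about 'Jack' moves Jack's shared embedding,
# so module 1's output for 'Jack' changes although module 1 never saw it.
before = W1 @ emb["Jack"]
for _ in range(100):
    W2 = train_step(W2, "Jack", np.ones(4))
print(np.allclose(before, W1 @ emb["Jack"]))  # False: the change transferred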
We show that certain inferences (by generalization) on an object can be made in one module if the same object appears as a training sample in the other module. More specifically, one module encodes information for the words used in the sentences (e.g., the representation for 'Tweety' encodes information that it is a 'Bird'). The second module is used for asserting facts involving words (like 'Birds fly'). Among other things, we test the ability of the network to assign meaning to words (in the first module) based on facts (presented as training examples in the second module). For example, we test for properties assigned to 'Jack' given that 'Jack can fly' is true -- and find that 'Jack' is a 'Bird' even if 'Jack' never appears in the training set of that module. Moreover, inference in the opposite direction can be made. Given that 'Tweety' is a 'Bird', the other module responds with 'Tweety flies' even if 'Tweety' never appears in the training set specific for that module. Hence, in contrast to what Hadley hints in his note, the tested inference never appears in the training set for the tested module.

For those interested in further details we would like to refer to the paper. Offprints of the paper are available on request. You will also find a draft version on our web pages.

Regards,
Mikael Boden (http://www.hh.se/staff/mibo)
Lars Niklasson (http://www.his.se/ida/~lars)

-----Original Message-----
From: Bob Hadley [SMTP:hadley at cs.sfu.ca]
Sent: Thursday, July 25, 2002 9:49 PM
To: Connectionists at cs.cmu.edu
Cc: Bob HADLEY
Subject: Systematicity & Fallacies: Boden & Niklasson

The Fallacy of Equivocation: Boden and Niklasson.

In a fairly recent paper (Connection Science, Vol. 12, 2000), Boden and Niklasson purport to demonstrate that a collection of connectionist networks (call them c-nets) can display an important type of Strong Semantic Systematicity. They make frequent references to my 1994 definitions of semantic systematicity and to my papers on this important topic. They also acknowledge that in 1994 I published definite reservations about claims by Niklasson and van Gelder to have produced a connectionist system that displays strong systematicity. In their recent (2000) paper, Boden and Niklasson purport to have answered my reservations by producing a case where a "novel test sentence" is assigned an appropriate meaning representation by previously trained c-nets.

Readers may recall that my 1994 definition of strong semantic systematicity required that the "previously trained c-net" must assign an appropriate (and correct) meaning representation to a novel test sentence which contains PREVIOUSLY KNOWN words in at least one novel position. In contrast to this requirement, the putative novel test sentence that Boden and Niklasson employ does not present any previously known words in a novel position. Rather, it presents a purportedly novel word in a known position.

However, there is a much more serious problem with their "novel test sentence" (call this sentence S). Here's the problem: the supposed novel sentence S does not produce a correct response when it is first presented to the trained c-net. So, Boden and Niklasson proceed to TRAIN the c-net on the sentence S for an additional 1000 epochs (over and above the earlier training phase). In this latter training phase, only S is presented as input, and backpropagation is employed.
Once this further training is complete, Boden and Niklasson contend that a "novel" word in S has now been assigned a meaning representation which they believe to be correct. But, of course, S is no longer a "novel test sentence" at this stage. The c-net has been subjected to intensive training upon S, and only after this further training is complete are Boden and Niklasson able to claim success. Given this, for Boden and Niklasson to describe S as a novel test sentence is (to express the matter diplomatically) to commit a serious instance of the fallacy of equivocation.

Indeed, I find it difficult to believe that Boden and Niklasson could be unaware that, as most connectionists use the phrase "test data" (or "novel test sentence"), sentence S is NOT a novel test sentence at all. For this reason, it astonishes me that Boden and Niklasson claim that they have NOW produced an experimental result that satisfactorily answers my 1994 reservations about the results published by Niklasson and van Gelder. My 1994 reservations involved my 1994 definition of strong systematicity, and that definition employed "novel test sentence" in the sense that connectionists commonly employ. At best, Boden and Niklasson are assigning some new and surprising sense to that phrase -- hence the fallacy of equivocation.

I believe there are other serious problems with Boden and Niklasson's (2000) paper, and I am presently writing a detailed critique of that paper. I'll make my new paper available on the internet within a few weeks. Look for a notice of my new critique on "Connectionist List" or send me an email request for the pdf file.

In astonishment,

Bob Hadley

Reference: Boden, M. and Niklasson, L. (2000) "Semantic Systematicity and Context in Connectionist Networks", Connection Science, Vol. 12(2), pp. 111-142.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Robert F. Hadley (Bob)                 Phone: 604-291-4488
Professor                              email: hadley at cs.sfu.ca
School of Computing Science and
Cognitive Science Program
Simon Fraser University
Burnaby, B.C. V5A 1S6
Canada
Web page: www.cs.sfu.ca/~hadley/
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From juergen at idsia.ch Mon Aug 5 09:23:18 2002
From: juergen at idsia.ch (Juergen Schmidhuber)
Date: Mon, 05 Aug 2002 15:23:18 +0200
Subject: Optimal Ordered Problem Solver
Message-ID: <3D4E7C46.EDDC43CF@idsia.ch>

Optimal Ordered Problem Solver
Juergen Schmidhuber, IDSIA
TR IDSIA-12-02; arXiv:cs.AI/0207097 v1; 31 July 2002
36 pages; submitted to JMLR; condensed 8-page version to NIPS
http://www.idsia.ch/~juergen/oops.html
ftp://ftp.idsia.ch/pub/juergen/oops.ps.gz

We extend principles of non-incremental universal search to build a novel, optimally fast, incremental learner that is able to improve itself through experience. The Optimal Ordered Problem Solver (OOPS) searches for a universal algorithm that solves each task in a sequence of tasks. It continually organizes and exploits previously found solutions to earlier tasks, efficiently searching not only the space of domain-specific algorithms, but also the space of search algorithms. The initial bias is embodied by a task-dependent probability distribution on possible program prefixes (pieces of code that may continue). Prefixes are self-delimiting and executed in online fashion while being generated. They compute the probabilities of their own possible continuations. Let p^n denote a found prefix solving the first n tasks.
It may exploit previous solutions p^i (i < n).

From jfeldman at ICSI.Berkeley.EDU Mon Aug 5 12:18:00 2002
From: jfeldman at ICSI.Berkeley.EDU (Jerome Feldman)
Date: Mon, 05 Aug 2002 09:18:00 -0700
Subject: neural binding
Message-ID: <3D4EA544.7ABB45F9@icsi.berkeley.edu>

In his new book, Foundations of Language, Ray Jackendoff poses four challenges to "cognitive neuroscience", all related to the binding problem. He starts with the simple sentence:

The big star's beside a little star.

and asks how neural computation could model the two different stars without cross talk. He then goes on to more complex problems like variables, memory levels and learning.

Various groups, including ours, have been working on these issues for decades and Ray's challenges provide a nice focus for assessing the current state of play. I will be happy to collect responses and post a summary back to the group. The relevant section of the book is pages 58-67 and is self-contained.

--
Jerome Feldman
ICSI & EECS UC Berkeley     "O wad some Pow'r the giftie gie us
1947 Center St.              To see oursels as other see us!"
Berkeley CA 94704                     Robert Burns - To a Louse

From tj at cs.cornell.edu Mon Aug 5 15:38:29 2002
From: tj at cs.cornell.edu (Thorsten Joachims)
Date: Mon, 5 Aug 2002 15:38:29 -0400
Subject: SVM-light: new version and book
Message-ID: <706871B20764CD449DB0E8E3D81C4D4302C19638@opus.cs.cornell.edu>

Dear Colleague,

a new version of SVM-Light (V5.00) is available, as well as my dissertation "Learning to Classify Text using Support Vector Machines", which recently appeared with Kluwer. The new version can be downloaded from

http://svmlight.joachims.org/

SVM-Light is an implementation of Support Vector Machines (SVMs) for large-scale problems. The new features of this version are the following:
- Learning of ranking functions (e.g. for search engines), in addition to classification and regression.
- Bug fixes and improved numerical stability.

The dissertation describes the algorithms and methods implemented in SVM-light. In particular, it shows how these methods can be used for text classification. Links are on my homepage http://www.joachims.org/

Cheers
Thorsten

---
Thorsten Joachims
Assistant Professor
Department of Computer Science
Cornell University
http://www.joachims.org/

From cateau at brain.inf.eng.tamagawa.ac.jp Mon Aug 5 21:08:31 2002
From: cateau at brain.inf.eng.tamagawa.ac.jp (Hide Cateau)
Date: Tue, 06 Aug 2002 10:08:31 +0900
Subject: New paper on the spike-timing-dependent plasticity
Message-ID: <20020806095500.2449.CATEAU@brain.inf.eng.tamagawa.ac.jp>

Dear All,

I would like to announce the availability of the following paper at my site: http://brain.inf.eng.tamagawa.ac.jp/cateau/hide.index.html

Hideyuki Cateau & Tomoki Fukai, A stochastic method to predict the consequence of arbitrary forms of spike-timing-dependent plasticity, Neural Computation (2002) in press.

This paper enables us to predict the consequence of arbitrary forms of spike-timing-dependent plasticity without running any simulations.

...........................................................

A stochastic method to predict the consequence of arbitrary forms of spike-timing-dependent plasticity

Hideyuki Cateau* and Tomoki Fukai*#
*Core Research for the Evolutional Science and Technology Program (CREST), JST, Tokyo 1948610, Japan
# Department of Engineering, Tamagawa University, Tokyo 1948610, Japan

Abstract

Synapses in various neural preparations exhibit spike-timing-dependent plasticity (STDP) with a variety of learning window functions. The window functions determine the magnitude and the polarity of synaptic change according to the time difference of pre- and postsynaptic spikes.
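(As an aside for readers who have not met learning windows before: the single-exponential window discussed next has roughly the following shape. This is an illustrative Python sketch with made-up amplitudes and time constants, not the paper's model.)

import numpy as np

# Illustrative single-exponential STDP window, dt = t_post - t_pre (ms).
# Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses.
# The amplitudes and the 20 ms time constant are made-up example values.
def stdp_window(dt, a_plus=0.005, a_minus=0.00525, tau=20.0):
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

print(stdp_window([-10.0, 10.0]))  # depression, then potentiation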
Numerical experiments revealed that STDP learning with a single-exponential window function resulted in a bimodal distribution of synaptic conductances as a consequence of competition between synapses. A slightly modified window function, however, resulted in a unimodal distribution, rather than a bimodal distribution. Since various window functions have been observed in neural preparations, we develop an unambiguous mathematical method to calculate the conductance distribution for any given window function. Our method is based on the Fokker-Planck equation to determine the conductance distribution and on the Ornstein-Uhlenbeck process to characterize the membrane potential fluctuations. Demonstrating that our method reproduces the known quantitative results of STDP learning, we apply the method to the type of STDP learning found recently in the CA1 region of the rat hippocampus. We find that this learning can result in nearly optimized competition between synapses. Meanwhile, we find that the type of STDP learning found in the cerebellum-like structure of electric fish can result in all-or-none synapses, i.e., either all the synaptic conductances are maximized or none of them become significantly large. Our method also determines the window function that optimizes synaptic competition.

____________________________________________________________
Hideyuki Cateau
Core Research for the Evolutional Science and Technology Program (CREST), JST
Lab. for mathematical information engineering, Dept. Info-Communication Engineering, Tamagawa Univ.
6-1-1 Tamagawa-Gakuen, Machida-shi, Tokyo 1948610, Japan
cateau at brain.inf.eng.tamagawa.ac.jp
http://brain.inf.eng.tamagawa.ac.jp/members.html
phone: +81-42-739-8434, fax: +81-42-739-7135
____________________________________________________________

From jose at psychology.rutgers.edu Tue Aug 6 16:02:39 2002
From: jose at psychology.rutgers.edu (stephen j. hanson)
Date: 06 Aug 2002 16:02:39 -0400
Subject: neural binding
In-Reply-To: <3D4EA544.7ABB45F9@icsi.berkeley.edu>
References: <3D4EA544.7ABB45F9@icsi.berkeley.edu>
Message-ID: <1028664159.2731.56.camel@vaio>

We recently published a paper in Neural Computation on a related kind of binding problem: In our case we were able to show transfer to NOVEL vocabularies (fixing DFAs) after training on other vocabularies using the same grammar. In effect the RNN learns to factor the vocabulary from the grammar and then quickly recruit new vocabularies that merely have syntactic similarity to already learned vocabularies. Examination of the hidden space indicates the RNN learns a type of "spatial metaphor" to recruit the unseen vocabulary in nearby parts of state (DFA) space. The network creates a hierarchical state structure that allows arbitrary and unique binding of new instances. Similarly, for decades Pinker and others have maintained that this sort of transfer was impossible for "associationist" networks.

Cheers,
Steve

----------
"On the Emergence of Rules in Neural Networks"
Stephen Jose Hanson and Michiro Negishi
Neural Computation - Contents - Volume 14, Number 9 - September 1, 2002

On Mon, 2002-08-05 at 12:18, Jerome Feldman wrote:
>
> In his new book, Foundations of Language, Ray Jackendoff poses four challenges to "cognitive neuroscience", all related to the binding problem. He starts with the simple sentence:
>
> The big star's beside a little star.
>
> and asks how neural computation could model the two different stars without cross talk.
> He then goes on to more complex problems like variables, memory levels and learning.
>
> Various groups, including ours, have been working on these issues for decades and Ray's challenges provide a nice focus for assessing the current state of play. I will be happy to collect responses and post a summary back to the group. The relevant section of the book is pages 58-67 and is self-contained.
>
> --
> Jerome Feldman
> ICSI & EECS UC Berkeley     "O wad some Pow'r the giftie gie us
> 1947 Center St.              To see oursels as other see us!"
> Berkeley CA 94704                     Robert Burns - To a Louse

From dwang at cis.ohio-state.edu Wed Aug 7 10:01:04 2002
From: dwang at cis.ohio-state.edu (DeLiang Wang)
Date: Wed, 07 Aug 2002 10:01:04 -0400
Subject: neural binding
References: <3D4EA544.7ABB45F9@icsi.berkeley.edu>
Message-ID: <3D512821.9986EF0A@cis.ohio-state.edu>

Jerome Feldman wrote:

> He starts with the simple sentence:
>
> The big star's beside a little star.
>
> and asks how neural computation could model the two different stars without cross talk.

A lot of work has been done in recent years using oscillator networks and temporal coding. The problem as stated above doesn't exist anymore for neural computation, I think, though how the brain solves the problem is another matter (the journal Neuron has a special issue on the neural binding problem in vol. 24, No. 1, 1999).

For a short story on this see Wang: "On connectedness: a solution based on oscillatory correlation," Neural Computation, vol. 12, 131-139, 2000.

Regards,

DeLiang Wang

--
------------------------------------------------------------
Prof. DeLiang Wang
Department of Computer and Information Science
The Ohio State University
2015 Neil Ave.
Columbus, OH 43210-1277, U.S.A.

Email: dwang at cis.ohio-state.edu
Phone: 614-292-6827 (OFFICE); 614-292-7402 (LAB)
Fax: 614-292-2911
URL: http://www.cis.ohio-state.edu/~dwang

From mhb0 at Lehigh.EDU Thu Aug 8 10:07:45 2002
From: mhb0 at Lehigh.EDU (Mark H. Bickhard)
Date: Thu, 08 Aug 2002 10:07:45 -0400
Subject: Call for Papers/Call for Participation
Message-ID: <3D527B30.D7B8AD0C@lehigh.edu>

Interactivist Summer Institute 2003

July 22-26, 2003
Botanical Auditorium
Copenhagen, Denmark

Join us in exploring the frontiers of understanding of life, mind, and cognition. There is a growing recognition - across many disciplines - that phenomena of life and mind, including cognition and representation, are emergents of far-from-equilibrium, interactive, autonomous systems. Mind and biology, mind and agent, are being re-united. The classical treatment of cognition and representation within a formalist framework of encodingist assumptions is widely recognized as a fruitless maze of blind alleys. From neurobiology to robotics, from cognitive science to philosophy of mind and language, dynamic and interactive alternatives are being explored. Dynamic systems approaches and autonomous agent research join in the effort. The interactivist model offers a theoretical approach to matters of life and mind, ranging from evolutionary- and neuro-biology - including the emergence of biological function - through representation, perception, motivation, memory, learning and development, emotions, consciousness, language, rationality, sociality, personality and psychopathology. This work has developed interfaces with studies of central nervous system functioning, the ontology of process, autonomous agents, philosophy of science, and all areas of psychology, philosophy, and cognitive science that address the person.
The conference will involve both tutorials addressing central parts and aspects of the interactive model, and papers addressing current work of relevance to this general approach. This will be our second Summer Institute; the first was in 2001 at Lehigh University, Bethlehem, PA, USA. The intention is for this Summer Institute to become a traditional biennial meeting where those sharing the core ideas of interactivism will meet and discuss their work, try to reconstruct its historical roots, put forward current research in different fields that fits the interactivist framework, and define research topics for prospective graduate students. People working in philosophy of mind, linguistics, social sciences, artificial intelligence, cognitive robotics, theoretical biology, and other fields related to the sciences of mind are invited to send their paper submission or statement of interest for participation to the organizers.

http://www.lehigh.edu/~interact/isi2003/isi2003.html

Mark

--
Mark H. Bickhard
Cognitive Science
17 Memorial Drive East
Lehigh University
Bethlehem, PA 18015
610-758-3633
mhb0 at lehigh.edu
mark.bickhard at lehigh.edu
http://www.lehigh.edu/~mhb0/mhb0.html

From hastie at stanford.edu Wed Aug 7 13:02:15 2002
From: hastie at stanford.edu (Trevor Hastie)
Date: Wed, 7 Aug 2002 10:02:15 -0700
Subject: Two new papers: ICA and Boosting
Message-ID: <004901c23e34$2e79c930$d3bb42ab@yacht>

The following two papers have been posted to my web page at
http://www-stat.stanford.edu/~hastie/Papers/

a) Trevor Hastie and Robert Tibshirani. Independent Component Analysis through Product Density Estimation (ps file). A direct statistical approach to ICA, using an attractive spline representation to model each of the marginal densities.

b) Saharon Rosset, Ji Zhu and Trevor Hastie. Boosting as a Regularized Path to a Maximum Margin Classifier (ps file). We show that a version of boosting fits a model by optimizing an L1-penalized loss function. This in turn shows that the corresponding versions of Adaboost and Logitboost converge to an "L1" optimal separating hyperplane.

--------------------------------------------------------------------
Trevor Hastie                        hastie at stanford.edu
Professor, Department of Statistics, Stanford University
Phone: (650) 725-2231 (Statistics)    Fax: (650) 725-8977
       (650) 498-5233 (Biostatistics) Fax: (650) 725-6951
URL: http://www-stat.stanford.edu/~hastie
address: room 104, Department of Statistics, Sequoia Hall
390 Serra Mall, Stanford University, CA 94305-4065
--------------------------------------------------------------------

From Dave_Touretzky at cs.cmu.edu Sun Aug 11 04:40:19 2002
From: Dave_Touretzky at cs.cmu.edu (Dave_Touretzky@cs.cmu.edu)
Date: Sun, 11 Aug 2002 04:40:19 -0400
Subject: announcing HHsim: a graphical Hodgkin-Huxley simulator in MATLAB
Message-ID: <3979.1029055219@ammon.boltz.cs.cmu.edu>

HHsim is a graphical simulation of a section of excitable neuronal membrane using the Hodgkin-Huxley equations. It provides full access to the Hodgkin-Huxley parameters, membrane parameters, stimulus parameters, and ion concentrations. HHsim requires MATLAB version 6.

In contrast with NEURON or GENESIS, which are vastly more sophisticated research tools, HHsim is simple educational software designed specifically for graduate or undergraduate neurophysiology courses. The user interface can be mastered in a couple of minutes and provides many ways for the student to experiment.

HHsim is free software distributed under the GNU General Public License.
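(For the curious, the model that such a simulator integrates fits in a few lines. Below is a generic textbook Hodgkin-Huxley patch in Python with the classic squid-axon constants, integrated by forward Euler. This is an illustration only, not HHsim's code, whose parameters and interface differ.)

import numpy as np

# Generic textbook Hodgkin-Huxley membrane patch (classic squid-axon
# constants), forward-Euler integration. Units: mV, ms, uA/cm^2,
# mS/cm^2, uF/cm^2. Illustrative only -- not HHsim's implementation.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.387

a_m = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
b_m = lambda V: 4.0 * np.exp(-(V + 65) / 18)
a_h = lambda V: 0.07 * np.exp(-(V + 65) / 20)
b_h = lambda V: 1.0 / (1 + np.exp(-(V + 35) / 10))
a_n = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
b_n = lambda V: 0.125 * np.exp(-(V + 65) / 80)

dt, V = 0.01, -65.0
m, h, n = 0.053, 0.596, 0.317            # gating steady states near rest
for step in range(int(50 / dt)):         # 50 ms; 10 uA/cm^2 from t = 5 ms
    I = 10.0 if step * dt > 5.0 else 0.0
    V += dt * (I - gNa * m**3 * h * (V - ENa)
                 - gK * n**4 * (V - EK) - gL * (V - EL)) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
print(round(V, 1))  # membrane potential (mV) after 50 ms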
To download the software, or view screen captures illustrating its features, visit the HHsim web page:

http://www.cs.cmu.edu/~dst/HHsim

HHsim was written by Dave Touretzky, Mark Albert, and Nathaniel Daw in the Computer Science Department and the Center for the Neural Basis of Cognition at Carnegie Mellon University. Its development was supported in part by National Science Foundation grant DGE-9987588.

-- Dave Touretzky

From juergen at idsia.ch Mon Aug 12 10:15:10 2002
From: juergen at idsia.ch (Juergen Schmidhuber)
Date: Mon, 12 Aug 2002 16:15:10 +0200
Subject: optimal universal reinforcement learner
Message-ID: <3D57C2EE.5B7B8946@idsia.ch>

I'd like to draw your attention to the first optimal, universal reinforcement learner. While traditional RL requires unrealistic Markovian assumptions, the recent AIXI model of Marcus Hutter just needs an environment whose reactions to control actions are sampled from an unknown but computable distribution mu. This includes basically every environment we can reasonably talk about:

M. Hutter. Towards a Universal Theory of Artificial Intelligence based on Algorithmic Probability and Sequential Decisions. Proc. ECML-2001, p. 226-238.
http://www.idsia.ch/~marcus/ai/paixi.pdf
ftp://ftp.idsia.ch/pub/techrep/IDSIA-14-00.ps.gz

Self-Optimizing and Pareto-Optimal Policies in General Environments based on Bayes-Mixtures. Proc. COLT-2002, 364-379.
http://www.idsia.ch/~marcus/ai/selfopt.pdf
ftp://ftp.idsia.ch/pub/techrep/IDSIA-04-02.ps.gz

How does AIXI work? An optimal predictor using a universal Bayesmix XI predicts future events including reward. Here XI is just a weighted sum of all distributions nu in a set M. AIXI now simply selects those action sequences that maximize predicted reward. It turns out that this method really is self-optimizing in the following sense: for all nu in the mix, the average value of actions, given the history, asymptotically converges to the optimal value achieved by the unknown policy which knows the true mu in advance! The necessary condition is that M does admit self-optimizing policies. This is also sufficient! And there is no other policy yielding higher or equal value in all environments nu and a strictly higher value in at least one.

Interestingly, the right way of treating the temporal horizon is not to discount it exponentially, as done in most traditional RL work, but to let the future horizon grow in proportion to the learner's lifetime so far.

To quote some AIXI referees: "[...] Clearly fundamental and potentially interesting research direction with practical applications. [...] Great theory. Extends a major theoretical direction that led to practical MDL and MML. This approach may do the same thing (similar thing) wrt to decision theory and reinforcement learning, to name a few." "[...] this could be the foundation of a theory which might inspire AI and MC for years (decades?)."

Juergen Schmidhuber, IDSIA
http://www.idsia.ch/~juergen/unilearn.html

From markman at psyvax.psy.utexas.edu Mon Aug 12 14:30:16 2002
From: markman at psyvax.psy.utexas.edu (Art Markman)
Date: Mon, 12 Aug 2002 13:30:16 -0500
Subject: Rumelhart Prize goes to Joshi
Message-ID:

For immediate release:

Aravind Joshi is the winner of the third annual David E. Rumelhart Prize. This announcement was made at the Cognitive Science Society meeting at George Mason University on August 9, 2002. The Rumelhart Prize is given each year to a cognitive scientist to honor his or her outstanding contributions to the formal analysis of human cognition.
This prize was created by the Glushko-Samuelson Foundation to honor David E. Rumelhart, a Cognitive Scientist who exploited a wide range of formal methods to address issues and topics in Cognitive Science. Dr. Joshi will receive his award and give a talk at the 25th annual meeting of the Cognitive Science Society in Boston, MA in August of 2003.

For a more detailed discussion of the prize and of Dr. Joshi's work, please see http://www.cnbc.cmu.edu/derprize/announce2003.html.

From Padraig.Cunningham at cs.tcd.ie Wed Aug 14 07:46:05 2002
From: Padraig.Cunningham at cs.tcd.ie (Padraig Cunningham)
Date: Wed, 14 Aug 2002 12:46:05 +0100
Subject: Post Doc position Trinity College Dublin
Message-ID: <0f3001c24388$2be8de50$2d2fe286@citeog>

Post-Doctoral Researchers
Computer Science
Trinity College Dublin

Applications from researchers with experience in machine learning or data mining are invited for two post-doctoral positions within the Machine Learning Group in the Department of Computer Science at Trinity College Dublin. The positions are funded by Science Foundation Ireland as part of a research project on knowledge discovery and explanation in bioinformatics, medical informatics and finance.

The project will tackle six problems from machine learning and data mining. These are: Local Feature Weighting in Lazy-Learning Systems, Explanation in Case-Based Reasoning Systems, Explanation in Ensembles of Regression Systems, Overfitting in Feature Weighting / Feature Selection, Validation of Clustering Results and Explanation of Clustering Results.

Salary in the range Euro25,000-Euro33,000 per annum dependent upon publication record and experience.

Candidates should submit a cover letter, full CV, list of publications and names of two referees to: Padraig.Cunningham at cs.tcd.ie

Prof. Pádraig Cunningham
Machine Learning Group, Dept. of Computer Science
Trinity College, Dublin 2, Ireland.
Tel: +353 1 608 1765 / Fax: +353 1 677 2204

Closing date for receipt of applications: Friday, 13th September, 2002.

TRINITY COLLEGE IS AN EQUAL OPPORTUNITIES EMPLOYER

From krichmar at nsi.edu Thu Aug 15 10:47:31 2002
From: krichmar at nsi.edu (Jeff Krichmar)
Date: Thu, 15 Aug 2002 07:47:31 -0700
Subject: Machine Psychology and Brain-Based Devices
Message-ID: <000501c2446a$af242940$c3b985c6@nsi.edu>

Dear Connectionists,

I thought our website would be of interest to many of you: http://www.nsi.edu/nomad. The NOMAD (Neurally Organized Mobile Adaptive Device) project was established in order to study the principles of brain function by using real-world devices controlled by nervous systems simulated according to biological principles. These devices learn through experience about features in their environment, in much the same way as living creatures do. The website contains descriptions, pictures, movies and recent publications. The most recent publication, J.L. Krichmar, G.M. Edelman, (2002) Machine Psychology: Autonomous Behavior, Perceptual Categorization and Conditioning in a Brain-Based Device, Cerebral Cortex 12:818-830, is available in pdf format (http://www.nsi.edu/nomad/pubs.htm).

Best regards,
Jeff Krichmar
The Neurosciences Institute
10640 John J. Hopkins Dr.
San Diego, CA 92121
krichmar at nsi.edu

From Henry.Tirri at cs.helsinki.fi Thu Aug 15 08:20:38 2002
From: Henry.Tirri at cs.helsinki.fi (Henry Tirri)
Date: Thu, 15 Aug 2002 15:20:38 +0300
Subject: Minimum Description Length on the Web - a new www site
Message-ID: <3D5B9C96.8070901@cs.helsinki.fi>

I am pleased to announce a recently created web-resource for Minimum Description Length (MDL) research at

http://www.mdl-research.org/

The site is maintained by the Complex Systems Computation Group (CoSCo) at the Helsinki Institute for Information Technology. We will be gradually building the resource to be more comprehensive, and all feedback and comments are appreciated.

Henry Tirri

Brief description of the site:
==============================

What is MDL? The purpose of statistical modeling is to discover regularities in observed data. The success in finding such regularities can be measured by the length with which the data can be described. This is the rationale behind the Minimum Description Length (MDL) Principle introduced by Jorma Rissanen (Rissanen, 1978). "The MDL Principle is a relatively recent method for inductive inference. The fundamental idea behind the MDL Principle is that any regularity in a given set of data can be used to compress the data, i.e. to describe it using fewer symbols than needed to describe the data literally." (Grünwald, 1998)

What is mdl-research.org? Minimum Description Length on the Web is intended as a source of information for everyone who wants to know more about MDL. The site contains links and references to suggested reading, tutorials, lecture notes, etc. on MDL as well as links to people who are working on MDL and related topics.

The Reading section contains references to selected articles, books, and lecture material, and links to journals and conferences that publish MDL related material. The Demonstrations section will illustrate MDL through on-line demonstrations. The section is under construction but you can already find a demonstration on Markov chain order selection. The People section has links to researchers who are working on MDL and related fields. You can find loads of related material in their homepages. The Related Topics section is a short collection of links to MDL related topics, such as information theory, Bayesian statistics, etc. It can help you locate useful background knowledge.

References:

J. Rissanen, Modeling by shortest data description. Automatica, vol. 14 (1978), pp. 465-471.

Peter Grünwald, The Minimum Description Length Principle and Reasoning under Uncertainty, Ph.D. Thesis, ILLC Dissertation Series DS 1998-03, CWI, the Netherlands, 1998.

-----------------------------------------------------------
Henry Tirri, PhD.
Research Director, Prof. of Computer Science
Complex Systems Computation Group
Helsinki Institute for Information Technology (HIIT)
http://www.hiit.fi/henry.tirri; email: henry.tirri at hiit.fi
-----------------------------------------------------------

From shastri at ICSI.Berkeley.EDU Fri Aug 16 02:51:58 2002
From: shastri at ICSI.Berkeley.EDU (Lokendra Shastri)
Date: Thu, 15 Aug 2002 23:51:58 PDT
Subject: neural binding
Message-ID: <200208160651.XAA03985@dill.ICSI.Berkeley.EDU>

Dear Connectionists,

The following is a list of papers by my collaborators and me that address the four problems for Cognitive Neuroscience posed by Ray Jackendoff. Most of these papers and other related papers are accessible at "http://www.icsi.berkeley.edu/~shastri".
1. The dynamic binding problem

``Rules and variables in neural nets'', V. Ajjanagadde and L. Shastri, Neural Computation, 3, 121--134, 1991.

``From simple associations to systematic reasoning: a connectionist encoding of rules, variables, and dynamic bindings using temporal synchrony'', L. Shastri and V. Ajjanagadde. Behavioral and Brain Sciences, Vol. 16, No. 3, 417--494, 1993. (The response to commentators section of the above paper also addresses the problem of encoding embedded structures using dynamic bindings.)

``Temporal Synchrony, Dynamic Bindings, and SHRUTI: a representational but non-classical model of reflexive reasoning'', L. Shastri. Behavioral and Brain Sciences, Vol. 19, No. 2, 331-337, 1996.

``Advances in SHRUTI --- A neurally motivated model of relational knowledge representation and rapid inference using temporal synchrony'', L. Shastri. Applied Intelligence, 11, 79--108, 1999.

``Seeking coherent explanations --- a fusion of structured connectionism, temporal synchrony, and evidential reasoning,'' L. Shastri and C. Wendelken. In Proc. 22nd Annual Conference of the Cognitive Science Society, pp. 453--458, Philadelphia, PA, August 2000.

Also relevant here is the work of James Henderson on parsing. He shows how a parse structure can be built incrementally using Shruti-like representations and synchronous binding.

``Connectionist Syntactic Parsing Using Temporal Variable Binding'', Henderson, J. (1994). Journal of Psycholinguistic Research, 23(5), pp. 353--379.

2. The problem of 2 (aka the multiple instantiation problem)

L. Shastri and V. Ajjanagadde, BBS 1993, listed above.

``Reflexive Reasoning with Multiple-Instantiation in a Connectionist Reasoning System with a Typed Hierarchy'', D.R. Mani and L. Shastri. Connection Science, Vol. 5, No. 3 & 4, 205--242, 1993.

3. (typed) variables

Preliminary solutions were proposed in Shastri and Ajjanagadde, BBS 1993, and Mani and Shastri, Connection Science 1993. A fully developed solution appears in:

``Types and Quantifiers in Shruti --- a connectionist model of rapid reasoning and relational processing,'' L. Shastri. In Hybrid Neural Symbolic Integration, S. Wermter and R. Sun (Eds.), Springer-Verlag, Berlin, pp. 28-45, 2000.

4. One-shot learning of relational structures in long-term memory

``Episodic memory and cortico-hippocampal interactions,'' L. Shastri, Trends in Cognitive Sciences, 6(4):162-168, 2002.

``From transient patterns to persistent structures: a computational model of episodic memory formation via cortico-hippocampal interactions,'' L. Shastri, Behavioral and Brain Sciences, 62 pages (in revision).

``Biological Grounding of Recruitment Learning and Vicinal Algorithms in Long-term Potentiation'', L. Shastri. In Emergent neural computational architectures based on neuroscience, J. Austin, S. Wermter, and D. Willshaw (Eds.), Springer-Verlag, Berlin, pp. 348-367, 2001.

From nnk at his.atr.co.jp Wed Aug 21 03:00:21 2002
From: nnk at his.atr.co.jp (Neural Networks Japan Office)
Date: Wed, 21 Aug 2002 16:00:21 +0900
Subject: Neural Networks 15(4/5/6): 2002 Special Issue
Message-ID:

NEURAL NETWORKS 15(4/5/6)
Contents - Volume 15, Numbers 4/5/6 - 2002

2002 Special Issue "Computational Models of Neuromodulation"
Kenji Doya, Peter Dayan, and Michael E. Hasselmo, co-editors
------------------------------------------------------------------

***** Introduction *****

Cellular, synaptic and network effects of neuromodulation
Eve Marder and Vatsala Thirumalai

Metalearning and neuromodulation
Kenji Doya

***** Dopamine *****

Dopamine-dependent plasticity of cortico-striatal synapses
John N.J. Reynolds and Jeffery R. Wickens

TD models of reward predictive responses in dopamine neurons
Roland E. Suri

Actor-critic models of the basal ganglia: new anatomical and computational perspectives
Daphna Joel, Yael Niv, Eytan Ruppin

Dopamine: generalization and bonuses
Sham Kakade and Peter Dayan

The computational role of dopamine D1 receptors in working memory
Daniel Durstewitz and Jeremy Seamans

Dopamine controls fundamental cognitive operations of multi-target spatial working memory
Shoji Tanaka

An integrative theory of the phasic and tonic modes of dopamine modulation in the prefrontal cortex
Jean-Claude Dreher, Yves Burnod

***** Serotonin *****

Opponent interactions between serotonin and dopamine
Nathaniel D. Daw, Sham Kakade, Peter Dayan

Local analysis of behaviour in the adjusting-delay task assessing choice of delayed reinforcement
Rudolf N. Cardinal, Nathaniel Daw, Trevor W. Robbins and Barry J. Everitt

***** Norepinephrine *****

Neuromodulation of decision and response selection
Marius Usher, Eddy Davelaar

Simplified dynamics in a model of noradrenergic modulation of cognitive performance
M. S. Gilzenrat, B. D. Holmes, J. Rajkowski, G. Aston-Jones, and J. D. Cohen

Control of exploitation-exploration meta-parameter in reinforcement learning
Shin Ishii, Wako Yoshida, Junichiro Yoshimoto

***** Acetylcholine *****

Neuromodulation, theta rhythm and rat spatial navigation
Michael Hasselmo, Jonathan Hay, Maxim Ilyn, and Anatoli Gorchetchinikov

Cholinergic modulation of sensory representations in the olfactory bulb
Christiane Linster and Thomas A. Cleland

Acetylcholine in cortical inference
Angela J. Yu, Peter Dayan

Sensory-motor gating and cognitive control by the brainstem cholinergic systems
Yasushi Kobayashi and Tadashi Isa

***** On-line Adaptation *****

On-line learning in changing environments with applications in supervised and unsupervised learning
Noboru Murata, Motoaki Kawanabe, Andreas Ziehe, Klaus-Robert Müller, Shun-ichi Amari

Neuromodulation and plasticity in an autonomous robot
Olaf Sporns, William H. Alexander

------------------------------------------------------------------

Electronic access: www.elsevier.com/locate/neunet/. Individuals can look up instructions, aims & scope, see news, tables of contents, etc. Those who are at institutions which subscribe to Neural Networks get access to full article text as part of the institutional subscription. Sample copies can be requested for free and back issues can be ordered through the Elsevier customer support offices: nlinfo-f at elsevier.nl, usinfo-f at elsevier.com, or info at elsevier.co.jp

------------------------------

INNS/ENNS/JNNS Membership includes a subscription to Neural Networks:

The International (INNS), European (ENNS), and Japanese (JNNS) Neural Network Societies are associations of scientists, engineers, students, and others seeking to learn about and advance the understanding of the modeling of behavioral and brain processes, and the application of neural modeling concepts to technological problems. Membership in any of the societies includes a subscription to Neural Networks, the official journal of the societies.
Application forms should be sent to all the societies you want to apply to (for example, one as a member with subscription and the other one or two as a member without subscription). The JNNS does not accept credit cards or checks; to apply to the JNNS, send in the application form and wait for instructions about remitting payment. The ENNS accepts bank orders in Swedish Crowns (SEK) or credit cards. The INNS does not invoice for payment.

-----------------------------------------------------------------------------
Membership Type       INNS             ENNS                JNNS
-----------------------------------------------------------------------------
membership with       $80              660 SEK             Y 15,000 [including
Neural Networks                                            2,000 entrance fee]
                      $55 (student)    460 SEK (student)   Y 13,000 (student)
                                                           [including 2,000
                                                           entrance fee]
-----------------------------------------------------------------------------
membership without    $30              200 SEK             not available to
Neural Networks                                            non-students
                                                           (subscribe through
                                                           another society)
                                                           Y 5,000 (student)
                                                           [including 2,000
                                                           entrance fee]
-----------------------------------------------------------------------------
Institutional rates   $1132            2230 NLG            Y 149,524
-----------------------------------------------------------------------------

Name:    _____________________________________
Title:   _____________________________________
Address: _____________________________________
         _____________________________________
         _____________________________________
Phone:   _____________________________________
Fax:     _____________________________________
Email:   _____________________________________

Payment: [ ] Check or money order enclosed, payable to INNS or ENNS
OR
[ ] Charge my VISA or MasterCard
    card number ____________________________
    expiration date ________________________

INNS Membership
19 Mantua Road
Mount Royal NJ 08061
USA
856 423 0162 (phone)
856 423 3420 (fax)
innshq at talley.com
http://www.inns.org

ENNS Membership
University of Skovde
P.O. Box 408
531 28 Skovde
Sweden
46 500 44 83 37 (phone)
46 500 44 83 99 (fax)
enns at ida.his.se
http://www.his.se/ida/enns

JNNS Membership
c/o Professor Tsukada
Faculty of Engineering
Tamagawa University
6-1-1, Tamagawa Gakuen, Machida-city
Tokyo 113-8656 Japan
81 42 739 8431 (phone)
81 42 739 8858 (fax)
jnns at jnns.inf.eng.tamagawa.ac.jp
http://jnns.inf.eng.tamagawa.ac.jp/home-j.html

-----------------------------------------------------------------
end

From cateau at brain.inf.eng.tamagawa.ac.jp Wed Aug 21 03:33:15 2002
From: cateau at brain.inf.eng.tamagawa.ac.jp (Hide Cateau)
Date: Wed, 21 Aug 2002 16:33:15 +0900
Subject: Paper: Analytic study of the multiple phosphorylation process
Message-ID: <20020821161945.970F.CATEAU@brain.inf.eng.tamagawa.ac.jp>

Dear All,

I would like to announce the availability of the following paper at my web site: http://brain.inf.eng.tamagawa.ac.jp/cateau/hide.index.html

Hideyuki Cateau and Shigeru Tanaka, Kinetic analysis of multisite phosphorylation using analytic solutions to Michaelis-Menten equations, J. Theor. Biol. (2002) 217:1-14.

This paper provides temporal progress curves of multiple phosphorylation that occurs frequently in the intracellular signal transduction pathway. The temporal progress curves are given analytically, so that they enable us to analyze the signal transduction pathway quantitatively.
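(As a rough numerical reference point, the following Python sketch integrates the single-site Michaelis-Menten system that the paper's multisite analytic solutions generalize. The rate constants and concentrations are arbitrary illustrative values, not taken from the paper.)

# Illustrative single-site Michaelis-Menten kinetics, E + S <-> ES -> E + P,
# integrated by forward Euler. k1, k_1, k2 and the initial concentrations
# are made-up values; the paper derives analytic progress curves for the
# multisite analogue, which numerical integration like this approximates.
k1, k_1, k2 = 1.0, 0.5, 0.3
E, S, ES, P = 1.0, 10.0, 0.0, 0.0
dt = 1e-3
for _ in range(int(50 / dt)):       # integrate for 50 time units
    v_bind = k1 * E * S - k_1 * ES  # net formation of the ES complex
    v_cat = k2 * ES                 # catalytic step releasing product
    E += dt * (-v_bind + v_cat)
    S += dt * (-v_bind)
    ES += dt * (v_bind - v_cat)
    P += dt * v_cat
print(round(P, 2))  # most substrate has been converted to product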
.....................................................................

Kinetic Analysis of Multisite Phosphorylation Using Analytic Solutions to Michaelis-Menten Equations

Hideyuki Cateau and Shigeru Tanaka
Laboratory for Visual Neurocomputing, RIKEN Brain Science Institute, Hirosawa 2-1, Wako, Saitama 351-0198, Japan

Phosphorylation-induced expression or modulation of a functional protein is a common signal in living cells. Many functional proteins are phosphorylated at multiple sites and it is frequently observed that phosphorylation at one site enhances or suppresses phosphorylation at another site. Therefore, characterizing such cooperative phosphorylation is important. In this study, we determine a temporal progress curve of multisite phosphorylation by analytically integrating the Michaelis-Menten equations in time. Using this theoretical progress curve, we derive the useful criterion that an intersection of two progress curves implies the presence of cooperativity. Experiments generally yield noisy progress curves. We fit the theoretical progress curves to noisy progress curves containing 4% Gaussian noise in order to determine the kinetics of the phosphorylation. This fitting correctly identifies the sites involved in cooperative phosphorylation. (c) 2002 Elsevier Science Ltd. All rights reserved.

____________________________________________________________
Hideyuki Cateau
Core Research for the Evolutional Science and Technology Program (CREST), JST
Lab. for mathematical information engineering, Dept. Info-Communication Engineering, Tamagawa Univ.
6-1-1 Tamagawa-Gakuen, Machida-shi, Tokyo 1948610, Japan
cateau at brain.inf.eng.tamagawa.ac.jp
http://brain.inf.eng.tamagawa.ac.jp/members.html
phone: +81-42-739-8434, fax: +81-42-739-7135
____________________________________________________________

From jfeldman at ICSI.Berkeley.EDU Wed Aug 21 17:17:26 2002
From: jfeldman at ICSI.Berkeley.EDU (Jerome Feldman)
Date: Wed, 21 Aug 2002 14:17:26 -0700
Subject: Binding - interim report
Message-ID: <3D640366.F50611C5@icsi.berkeley.edu>

A while back, I posted a query about Jackendoff's four challenges in neural binding. In addition to the responses that were posted to the whole group, the three other replies are included in this message. Only one response showed any evidence of having looked at Jackendoff's problems and of formulating a response. It isn't obvious (at least to me) how to use any of the standard techniques to specify a model that meets Jackendoff's criteria. I privately encouraged the respondents to outline how they would do this and hope that they and others will respond.

Jerry F.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Jerry,

Connectionist models that learn to process sentences have had to solve the binding problem and have done so for many years. Among the models that solve this problem I can list St. John and McClelland (Artificial Intelligence, 1990), the Miikkulainen and Dyer work, Elman's simple recurrent networks, and a recent CMU-CS dissertation by Doug Rohde. The latter is available at Doug's home page: http://tedlab.mit.edu/~dr/Thesis

Best wishes,

- Jay McClelland

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Here are some papers describing a mostly localist approach toward the variable binding problem:

A. Browne and R. Sun, Connectionist inference models. {\it Neural Networks}, Vol. 14, No. 10, pp. 1331-1355, December 2001.

A. Browne and R. Sun, Connectionist variable binding. {\it Expert Systems}, Vol. 16, No. 3, pp. 189-207, 1999.
R. Sun, Robust reasoning: integrating rule-based and similarity-based reasoning. {\it Artificial Intelligence} (AIJ), Vol. 75, No. 2, pp. 241-296, June 1995.

R. Sun, On schemas, logics, and neural assemblies. {\it Applied Intelligence}, Vol. 5, No. 2, pp. 83-102, 1995.

R. Sun, Beyond associative memories: logics and variables in connectionist networks. {\it Information Sciences}, Vol. 70, No. 1-2, pp. 49-74, 1993.

R. Sun, On variable binding in connectionist networks. {\it Connection Science}, Vol. 4, No. 2, pp. 93-124, 1992.

Most of these papers can be downloaded from my web page.

Professor Ron Sun, Ph.D

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Dear Jerome,

like DeLiang Wang, who recently posted a response to your request on the connectionists, I would say that there are now powerful neural binding architectures available. I have been working on a spatial model of neural binding, which employs spatial coactivation of neurons for the representation of feature bindings. It is called the competitive layer model and has been shown to perform well on perceptual grouping tasks like contour grouping, texture and greyscale segmentation:

H. Wersing, J. J. Steil, and H. Ritter. A competitive layer model for feature binding and sensory segmentation. Neural Computation 13(2):357-387 (2001).
http://www.techfak.uni-bielefeld.de/ags/ni/publications/papers/WersingSteilRitter2001-ACL.ps.gz

H. Wersing. Spatial Feature Binding and Learning in Competitive Neural Layer Architectures. PhD Thesis, Faculty of Technology, University of Bielefeld, March 2000. Published by Cuvillier, Goettingen.
http://www.techfak.uni-bielefeld.de/~hwersing/dissertation.ps.gz

One particular feature of the model is that it can be trained in a very efficient way, solving a simple quadratic optimization problem. Our applications so far concentrated on segmentation problems, but the framework could be easily applied to other problem domains:

H. Wersing. Learning Lateral Interactions for Feature Binding and Sensory Segmentation. Advances in Neural Information Processing Systems NIPS 2001, Vancouver.
http://www.techfak.uni-bielefeld.de/ags/ni/publications/papers/Wersing2001-LLI.ps.gz

With kindest regards,

Heiko Wersing

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From E.Koning at elsevier.nl Thu Aug 22 04:07:28 2002
From: E.Koning at elsevier.nl (Koning, Esther (ELS))
Date: Thu, 22 Aug 2002 09:07:28 +0100
Subject: CFP Neurocomputing - Special Issue on Bioinformatics
Message-ID: <4D56BD81F62EFD49A74B1057ECD75C0603A9A210@elsamsvexch01.elsevier.nl>

CALL FOR PAPERS

NEUROCOMPUTING
An International Journal published by Elsevier Science B.V., vol. 49-55, 24 issues, in 2003
ISSN 0925-2312, URL:

Special Issue on Bioinformatics

Paper Submission Deadline: October 31st, 2002

Bioinformatics applies -- simply stated -- computational methods to the solution of biological problems. Bioinformatics, genomics, molecular biology, molecular evolution, computational biology, and affine fields are at the intersection between two axes: data sequences/physiology and information technology. Sequences include DNA sequences (gene, genome, organization), molecular evolution, protein structure, folding, function, and interaction, metabolic pathways, regulation signaling networks, physiology and cell biology (interspecies, interaction), as well as ecology and environment. Information technology in this context includes hardware and instrumentation, computation, as well as mathematical and physical models.
The intersection between two subfields, one in each axis, generates areas including those known as genome sequencing, proteomics, functional genomics (microarrays, 2D-PAGE, ...), high-tech field ecology, genomic data analysis, statistical genomics, protein structure, prediction, protein dynamics, protein folding and design, data standards, data representations, analytical tools for complex biological data, dynamical systems modeling, as well as computational ecology. Research in these fields comprises property abstraction from the biological system, design and development of data analysis algorithms, as well as of databases and data access web-tools. Genome sequencing and related projects generate vast amounts of data that need to be analyzed, thus emphasizing the relevance of efficient methods of data analysis and of the whole discipline.

The Neurocomputing journal invites original contributions for the forthcoming special issue on Bioinformatics from a broad scope of areas. Some topics relevant to this special issue include, but are not restricted to:

-- Theoretical foundations, algorithms, implementations, and complete systems
-- Sequence analysis (single, multiple), alignment, annotation, etc.
-- Improvements in databases and web-tools for bioinformatics
-- Novel metrics and biological data preprocessing for posterior analysis
-- Systems biology models and data modeling techniques including statistical inference, stochastic processes, random walks, Markov chains, hidden Markov models, motifs, profiles, dynamic programming, pattern recognition techniques, neural networks, support vector machines, evolutionary models, tree estimation, etc.
-- Pathway inference, e.g. to determine where to target a drug using gene expression data and address side effects by providing information on where else a target metabolite appears.
-- Key applications in diverse fields including bioinformatics, genomics, molecular biology, molecular evolution, computational biology, drug design, etc.

Please send two hardcopies of the manuscript before October 31st, 2002, to:

V. David Sanchez A., Neurocomputing - Editor in Chief -
Advanced Computational Intelligent Systems
P.O. Box 1424
La Cañada, CA 91012, U.S.A.

Street address:
1149 Wotkyns Drive
Pasadena, CA 91103, U.S.A.

Fax: +1-626-793-5120
Email: vdavidsanchez at earthlink.net

including abstract, keywords, a cover page containing the title and author names, corresponding author name's complete address including telephone, fax, and email address, and clear indication to be a submission to the Special Issue on Bioinformatics.

Guest Editors

Harvey J. Greenberg
Center for Computational Biology
University of Colorado at Denver
P.O. Box 173364
Denver, CO 80217-3364
Phone: (303) 556-8464
Fax: (303) 556-8550
Email: Harvey.Greenberg at cudenver.edu

Lawrence Hunter
Center for Computational Pharmacology
University of Colorado Health Science Center
4200 E. Ninth Ave.
Denver, CO 80262
Phone: (303) 315-1094
Fax: (303) 315-1098
Email: Larry.Hunter at uchsc.edu

Satoru Miyano
Human Genome Center
Institute of Medical Science
University of Tokyo
4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639, Japan
Phone: +81-3-5449-5615
Fax: +81-3-5449-5442
Email: miyano at ims.u-tokyo.ac.jp

Ralf Zimmer
Praktische Informatik und Bioinformatik
Institut für Informatik, LMU München
Theresienstrasse 39
D-80333 München
Phone: +49-89-2180-4447
Fax: +49-89-2180-4054
Email: zimmer at bio.informatik.uni-muenchen.de
V. David Sanchez A., Neurocomputing - Editor in Chief -
Advanced Computational Intelligent Systems
P.O. Box 1424
La Cañada, CA 91012, U.S.A.
Fax: +1-626-793-5120
Email: vdavidsanchez at earthlink.net

From alpaydin at boun.edu.tr Thu Aug 22 08:17:12 2002
From: alpaydin at boun.edu.tr (Ethem Alpaydin)
Date: Thu, 22 Aug 2002 15:17:12 +0300
Subject: ICANN/ICONIP 2003 (Istanbul, Turkey)
Message-ID: <3D64D648.D9F73CA@boun.edu.tr>

---
JOINT
13th International Conference on Artificial Neural Networks and
10th International Conference on Neural Information Processing

ICANN/ICONIP 2003

June 26-29, 2003, Istanbul, TURKEY
http://www.nn2003.org

Both the International Conference on Artificial Neural Networks and the International Conference on Neural Information Processing are very well established conferences, the first being the main annual conference of the European Neural Network Society and the latter of the Asia Pacific Neural Networks Assembly. In 2003, the two conferences will, for the first time, be held jointly and what better place can there be for such an event than Istanbul, where East meets West.

Important Dates:            Full papers         Short papers
Submission Deadline         December 1, 2002    January 7, 2003
Acceptance Notice           February 3, 2003    March 7, 2003
Camera Ready Papers         April 1, 2003       May 9, 2003
---

From christof at teuscher.ch Thu Aug 22 11:43:33 2002
From: christof at teuscher.ch (Christof Teuscher)
Date: Thu, 22 Aug 2002 17:43:33 +0200
Subject: [IPCAT2003] - First Call for Papers
Message-ID: <3D6506A5.21B9630F@teuscher.ch>

================================================================
We apologize if you receive multiple copies of this email.
Please distribute this announcement to all interested parties.
For removal specify address to remove_ipcat2003 at lslsun.epfl.ch
================================================================

****************************************************************
                    FIRST CALL FOR PAPERS
****************************************************************

                        ** IPCAT2003 **

Fifth International Workshop on Information Processing in Cells and Tissues

September 8 - 11, 2003
Swiss Federal Institute of Technology Lausanne (EPFL)
Lausanne, Switzerland

http://lslwww.epfl.ch/ipcat2003

****************************************************************

Description:
------------

The aim of the series of IPCAT workshops is to bring together a multidisciplinary core of scientists who are working in the general area of modeling information processing in biosystems. A general theme is the nature of biological information and the ways in which it is processed in biological and artificial cells and tissues. The key motivation is to provide a common ground for dialogue and interaction, without emphasis on any particular research constituency, or way of modeling, or single issue in the relationship between biology and information. IPCAT2003 will highlight recent research and seek to further the dialogue, exchange of ideas, and development of interactive viewpoints between biologists, physicists, computer scientists, technologists and mathematicians that have been progressively expanded throughout the IPCAT series of meetings (since 1995). The workshop will feature sessions of selected original research papers grouped around emergent themes of common interest, and a number of discussions and talks focusing on wider themes. IPCAT2003 will give particular attention to morphogenetic and ontogenetic processes and systems.
IPCAT2003 encourages experimental, computational, and theoretical articles that link biology and the information processing sciences and that encompass the fundamental nature of biological information processing, the computational modeling of complex biological systems, evolutionary models of computation, the application of biological principles to the design of novel computing systems, and the use of biomolecular materials to synthesize artificial systems that capture essential principles of natural biological information processing.

Topics of Interest:
-------------------
Topics to be covered will include, but are not limited to, the following list:

o Self-organizing, self-repairing, and self-replicating systems
o Evolutionary algorithms
o Machine learning
o Evolving, adapting, and neural hardware
o Automata and cellular automata
o Information processing in neural and non-neural biosystems
o Parallel distributed processing biosystem models
o Information processing in bio-developmental systems
o Novel bio-information processing systems
o Autonomous and evolutionary robotics
o Bionics, neural implants, and bio-robotics
o Molecular evolution and theoretical biology
o Enzyme and gene networks
o Modeling of metabolic pathways and responses
o Simulation of genetic and ecological systems
o Single neuron and sub-neuron information processing
o Microelectronic simulation of bio-information systemics
o Artificial bio-sensor and vision implementations
o Artificial tissue and organ implementations
o Applications of nanotechnology
o Quantum informational biology
o Quantum computation in cells and tissues
o DNA computing

Special Session:
----------------
Morphomechanics of the Embryo and Genome + Artificial Life -> Embryonics

Artificial intelligence started with imitation of the adult brain, and artificial life has dealt mostly with the adult organism and its evolution, in that the span from genome to organism has been short or nonexistent. Embryonics is the attempt to grow artificial life in a way analogous to real embryonic development. This session will include speakers grappling with both ends of the problem. Papers for this special session should be submitted through the regular procedure. Organizers: R. Gordon, Lev V. Beloussov

Paper Submission:
-----------------
Papers will be published in a special issue of the BioSystems journal (Elsevier Science). They should be no longer than 15 pages (including figures and bibliography). Papers will either (1) be accepted for presentation at the workshop and for publication in the special issue of BioSystems, or (2) be rejected.

Important Dates:
----------------
Paper submission: February 28, 2003
Notification of acceptance: May 28, 2003
Camera-ready copy: July 11, 2003

For up-to-date information, consult the IPCAT2003 web-site: http://lslwww.epfl.ch/ipcat2003

We are looking forward to seeing you in beautiful Lausanne!
Sincerely,
Christof Teuscher
IPCAT2003 Program Chair
----------------------------------------------------------------
Christof Teuscher
Swiss Federal Institute of Technology Lausanne (EPFL)
christof at teuscher.ch
http://www.teuscher.ch/christof
----------------------------------------------------------------
IPCAT2003: http://lslwww.epfl.ch/ipcat2003
----------------------------------------------------------------

From bower at uthscsa.edu Thu Aug 22 18:46:56 2002 From: bower at uthscsa.edu (James Bower) Date: Thu, 22 Aug 2002 16:46:56 -0600 Subject: Registration for GUM*02 Message-ID:

REGISTRATION IS NOW OPEN FOR GUM*02
November 8, 9, 10
San Antonio, Texas

Registration is now open for the first annual GENESIS USERS MEETING this fall in beautiful San Antonio, Texas, the weekend immediately following the Society for Neuroscience Annual Meeting in Orlando, Florida. Meeting information and registration are available at: http://genesis-users-meeting.org/

The meeting has been designed as a working meeting devoted to research and education using anatomically and physiologically realistic biological models. While established as the GENESIS USERS meeting, all devotees of realistic modeling are encouraged to attend. Unique in its structure, GUM*02 will combine introductory, intermediate, and advanced tutorials with a full agenda of scientific presentations focused on the study of biological systems using realistic modeling techniques. As a working meeting with an educational objective, attendees are encouraged to present not only finished research, but also work in progress.

TUTORIALS

Tutorial sessions will be held on Friday, November 8th. We are pleased to announce that the introductory tutorial on realistic modeling will be conducted by Dr. Michael Hines, the creator of the modeling system NEURON, and Dr. David Beeman, manager of the GENESIS users group. Both presenters have extensive teaching experience in the Computational Neuroscience course at the Marine Biological Laboratory as well as the European Course in Computational Neuroscience. Other tutorials will be offered by leaders in the field in subjects from subcellular to network modeling, and parameter searching techniques to the use of parallel computers. Registration for tutorials is on a first-come, first-served basis, so it will be important to register in advance at: http://genesis-users-meeting.org/

SCIENTIFIC PROGRAM

The contributed scientific program will take place on Saturday and Sunday, November 9th and 10th. All attendees are encouraged to present scientific and technical results, and all submissions will be accepted. Appropriate presentations include scientific results from realistic modeling efforts, presentations on technical aspects of simulator use and development, or presentations describing biological systems felt to be ripe for simulation and modeling. Unique to this meeting, we do not necessarily expect completed studies; works in progress are also welcomed. In this way students, especially, can benefit from the expertise of other attendees. Each scientific presentation will consist of a 15 minute oral overview in the morning followed by more detailed discussion of results in a poster/demonstration format in the afternoon. Presenters are encouraged to bring computers.

COMMUNITY BUILDING

In addition to the educational and scientific aspects of the meeting, opportunities will also be provided for more relaxed interactions between participants.
San Antonio, Texas, is famous for its River Walk (http://www.thesanantonioriverwalk.com/index.asp). The San Antonio/Austin area of South Texas is also famous for its indigenous music scene - recently added to by Ramon and the K-Halls, who, it is rumored, are already putting pressure on meeting organizers for an exclusive contract. In other words, a good time will be had by all!

MEETING COSTS

Every effort has been made to keep costs to a minimum to encourage attendance by students. Room rates in the conference hotel have been established at $69 a night, unlimited occupancy. Advanced registration for students including postdoctoral fellows is $99 ($159 for faculty). The rates increase at the conference itself, so you are encouraged to register in advance. Again, additional meeting information as well as advanced registration is available at: http://genesis-users-meeting.org/

We hope to see you in ol' San Antone this Fall

--
James M. Bower Ph.D.
Research Imaging Center
University of Texas Health Science Center at San Antonio
7703 Floyd Curl Drive
San Antonio, TX 78284-6240
Cajal Neuroscience Center
University of Texas San Antonio
Phone: 210 567 8080
Fax: 210 567 8152

From cia at bsp.brain.riken.go.jp Thu Aug 22 22:52:38 2002 From: cia at bsp.brain.riken.go.jp (Andrzej CICHOCKI) Date: Fri, 23 Aug 2002 11:52:38 +0900 Subject: New software for ICA and BSS Message-ID: <3D65A376.70104@bsp.brain.riken.go.jp>

[Our sincere apologies if you receive multiple copies of this email]

We would like to announce the availability of software packages called ICALAB for ICA (Independent Component Analysis), BSS (Blind Source Separation) and BSE (Blind Signal Extraction). ICALAB for Signal Processing and ICALAB for Image Processing are two independent packages for MATLAB that implement a number of efficient algorithms for ICA employing HOS (higher order statistics), BSS employing SOS (second order statistics) and LTP (linear temporal prediction), and BSE employing various SOS and HOS methods. After some data preprocessing, these packages can also be used for MICA (multidimensional independent component analysis) and NIBSS (non-independent blind source separation). The main features of both packages are an easy-to-use graphical user interface, and implementation of computationally powerful and efficient algorithms. Some implemented algorithms are robust with respect to additive white noise. The packages are available on our web pages: http://www.bsp.brain.riken.go.jp/ICALAB/

Any critical comments and suggestions are welcomed.

Best regards,
Andrzej Cichocki
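A minimal illustration of the blind source separation task these packages address, sketched in Python with scikit-learn's FastICA as an assumed stand-in (ICALAB itself is MATLAB, and FastICA is not necessarily among its algorithms):

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.RandomState(0)
    t = np.linspace(0, 8, 2000)
    # Two independent sources: a sinusoid and a square wave
    s1 = np.sin(2 * t)
    s2 = np.sign(np.sin(3 * t))
    S = np.c_[s1, s2]
    # Mix them with an (in practice unknown) mixing matrix
    A = np.array([[1.0, 0.5], [0.5, 2.0]])
    X = S @ A.T
    # Recover the sources blindly, from the mixtures alone
    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X)
    # Recovery is only up to permutation and scaling, so check correlations
    corr = np.corrcoef(S.T, S_hat.T)[:2, 2:]
    print(np.round(np.abs(corr), 2))

Each estimated component should correlate almost perfectly with one of the true sources, which is the defining property of a successful blind separation.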
From hadley at cs.sfu.ca Sat Aug 24 11:51:40 2002 From: hadley at cs.sfu.ca (Bob Hadley) Date: Sat, 24 Aug 2002 08:51:40 -0700 (PDT) Subject: Modularity vs. Wholistic Connectivity Message-ID: <200208241551.g7OFpeK12167@css.css.sfu.ca>

A pdf file for the following new paper is now available at: www.cs.sfu.ca/~hadley/modular.pdf

~~~~~~~~~~~~~~~~~~~~~~
A DEFENSE OF FUNCTIONAL MODULARITY
by Robert F. Hadley
School of Computing Science and Cognitive Science Program
Simon Fraser University

Abstract

Although belief in the existence of mental modules of some form is widespread among cognitive researchers, neurally sophisticated researchers commonly resist the view that cognitive processing involves modules which are functionally independent of one another. Moreover, within the past few years, at least three noted researchers (Fodor, Kosslyn, and Uttal) have called into serious question the existence of distinct modules in broad areas of human cognition. The present paper offers a defense of the existence of functionally independent modules, which, though spatially distributed, communicate via traditionally conceived input/output channels. This defense proceeds (i) by showing that the anti-modularity arguments of Fodor, Kosslyn, and Uttal are not compelling; (ii) by presenting theoretically-grounded reasons why any connectionist is committed, via the most basic tenets of connectionism, to accepting the existence of functionally independent modules; (iii) by presenting wholistically inclined connectionists with a novel challenge, namely, to demonstrate that a single, wholistic network could display strong levels of generalization as a side-effect of multiple, previously acquired skills. In the course of these arguments, I examine a recent generalization challenge posed by Phillips (2000) to eliminative connectionists.

32 pages, with 1.2 spacing

From hamilton at may.ie Mon Aug 26 13:13:45 2002 From: hamilton at may.ie (Hamilton Institute) Date: Mon, 26 Aug 2002 18:13:45 +0100 Subject: Faculty positions in statistical machine learning Message-ID: <009a01c24d23$f0244c60$12c09d95@hamilton.local>

Applications are invited from well qualified candidates for a small number of Senior Research positions at the Hamilton Institute. The successful candidates will be outstanding researchers who can demonstrate an exceptional research track record, or significant research potential at international level, in modern statistical and machine learning methods, particularly in the context of time series analysis and probabilistic reasoning, human-computer interaction, and hybrid systems. We are looking for leaders who will be a vital part of the future growth and development of the Institute. A strong commitment to research excellence, developing research partnerships, and the ability to establish a dynamic and world class research programme are essential. This is a unique opportunity to join a vibrant research group which is committed to research excellence and is currently undergoing substantial expansion. Where appropriate, the possibility exists to fund Ph.D/postdoctoral positions in direct support of senior posts.

Salary Scale: Associate Professor Scale (EUR 64,000-87,000) or Full Professor Scale (EUR 86,000-110,000)

For further details visit www.hamilton.may.ie

Applications with CV and two significant papers to hamilton at may.ie. For informal enquiries please contact Prof. D.J. Leith at doug.leith at may.ie

From norbert at cn.stir.ac.uk Tue Aug 27 12:10:34 2002 From: norbert at cn.stir.ac.uk (norbert@cn.stir.ac.uk) Date: Tue, 27 Aug 2002 17:10:34 +0100 Subject: Research Position `Human and Artificial Vision' Message-ID: <1030464634.3d6ba47a9eea5@www.cn.stir.ac.uk>

To whom it may concern,

I would like to submit the following job announcement to connectionists at cs.cmu.edu.

Sincerely,
Norbert Krueger
--
Dr. Norbert Krueger
University of Stirling
Centre for Cognitive and Computational Neuroscience (CCCN)
Stirling FK9 4LA, Scotland, UK
Tel: ++44 (0) 1786 466379
Fax: ++44 (0) 1786 467641
Email: norbert at cn.stir.ac.uk
http://www.cn.stir.ac.uk/~norbert

------------- Job Announcement ----------------------------------

RESEARCH ASSISTANT
Centre for Cognitive and Computational Neuroscience (CCCN)
ECOVISION Project
£17,626 - £21,503

You will be associated with the group of Prof.
Wörgötter (http://www.cn.stir.ac.uk/) and involved in computer vision studies of real-world scene analysis problems in the context of the ECOVISION project (Early Cognitive Vision, http://www.pspc.dibe.unige.it/ecovision/). The goal of these studies is to design a machine vision system of superior performance. To this end, principles of distributed cognitive reasoning, which are now better understood in the brain, shall be implemented in software and tested with artificial and real visual scenes. You shall develop this software in cooperation with other members of the group. Good software knowledge of C++ is required. It would also be helpful if you have a background in computer- and camera-equipment hardware.

This project is funded by the European Commission and takes place in an international cooperation between seven partners from five different countries. It also offers good access to industrially relevant machine vision problems through the direct involvement of HELLA Hueck KG, a large German company developing driver assistance systems, which are the core application for the results of the ECOVISION project.

This appointment will be on a fixed-term basis for 22 months in the first instance, with an extension thereafter. The position is intended to form the basis for a PhD.

Informal enquiries may be made to Dr. Norbert Krüger (tel: 01786 466379, email: norbert at cn.stir.ac.uk) or Professor Florentin Wörgötter (tel: 01786 466369, email: worgott at cn.stir.ac.uk). Application forms are available from the Personnel Office, University of Stirling, Stirling FK9 4LA, telephone 01786 467028, fax 01786 466155 or email personnel at stir.ac.uk, quoting ref no: 1874/1207, or http://www.personnel.stir.ac.uk/recruitment/opportunities_research.html.

Closing date for applications: Monday, 16 September 2002.

-------------------------------------------------
This mail sent through IMP: http://horde.org/imp/

From dodd at acse.shef.ac.uk Fri Aug 30 09:33:54 2002 From: dodd at acse.shef.ac.uk (Tony Dodd) Date: Fri, 30 Aug 2002 14:33:54 +0100 (BST) Subject: Special Session on Support Vector / Kernel Machines at ICONS 2003 Message-ID: <200208301333.g7UDXsW03114@vulture.shef.ac.uk>

The following call for papers may be of interest to readers of connectionists. Ignore the requirement to indicate interest by 28 August - asap will be ok.

------------- Begin Forwarded Message -------------

From r.f.harrison at sheffield.ac.uk Thu Aug 22 08:41:47 2002 From: r.f.harrison at sheffield.ac.uk (Robert F Harrison) Date: Thu, 22 Aug 2002 13:41:47 +0100 Subject: Special Session on Support Vector / Kernel Machines at ICONS 2003 Message-ID:

Hi

I am trying to put together a special session on Support Vector / Kernel Machines for the IFAC Conference on Intelligent Control Systems and Signal Processing (ICONS 2003) to be held in Faro, Portugal in April 2003. http://conferences.ptrede.com/icons03/main.py/index

Contributions should be in any of the following areas: dynamical modelling / identification; non-linear filtering / equalisation; feedback control; but NOT in the general theory of (kernel) machine learning. If you are interested, please let me have a title/short description by 28 Aug. so I can judge level of interest. The bad news is that the deadline for full papers is 20 September, but I hope to push that back by ~3 weeks.
Sorry about the short deadline, and thanks for your time.

Rob
----------------------------------------------------------------------------
Robert F Harrison BSc PhD CEng FIEE
The University of Sheffield
Department of Automatic Control & Systems Engineering
Mappin Street, Sheffield S1 3JD, UK

------------- End Forwarded Message -------------

From giro-ci0 at wpmail.paisley.ac.uk Fri Aug 30 12:32:09 2002 From: giro-ci0 at wpmail.paisley.ac.uk (Mark Girolami) Date: Fri, 30 Aug 2002 17:32:09 +0100 Subject: Technical Report Available Message-ID:

The following new technical report is available at the website below. Included on the website are Matlab demos along with all code and data required to allow easy replication of the experimental results reported. http://cis.paisley.ac.uk/giro-ci0/reddens/

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Probability Density Estimation from Optimally Condensed Data Samples.
Mark Girolami & Chao He
Computing & Information Systems Technical Reports, ISSN-1461-6122.

Abstract

The requirement to reduce the computational cost of evaluating a point probability density estimate when employing a Parzen window estimator is a well known problem. This paper presents the Reduced Set Density Estimator, which provides a kernel based density estimator that employs a small percentage of the available data sample and is optimal in the L2 sense. Whilst only requiring O(N^2) optimisation routines to estimate the required weighting coefficients, the proposed method provides similar levels of performance accuracy and sparseness of representation as Support Vector Machine density estimation, which requires O(N^3) optimisation routines, and which has previously been shown to consistently outperform Gaussian Mixture Models. It is also demonstrated that the proposed density estimator consistently provides superior density estimates for similar levels of data reduction to that provided by the recently proposed Density Based Multiscale Data Condensation algorithm, and in addition has comparable computational scaling. The additional advantage of the proposed method is that no extra free parameters are introduced, such as regularisation, bin width or condensation ratios, making this method a very simple and straightforward approach to providing a reduced set density estimator with comparable accuracy to that of the full sample Parzen density estimator.

Professor M.A. Girolami PhD
Associate Head of School and Chair of Applied Computational Intelligence
School of Information and Communication Technologies
University of Paisley
High Street
Paisley, PA1 2BE
Tel: +44 (0)141 848 3317
Fax: +44 (0)141 848 3542
http://cis.paisley.ac.uk/giro-ci0

Legal disclaimer
--------------------------
The information transmitted is the property of the University of Paisley and is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Statements and opinions expressed in this e-mail may not represent those of the company. Any review, retransmission, dissemination and other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender immediately and delete the material from any computer.
--------------------------
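For context, here is a minimal Python sketch of the full-sample Parzen window estimator that the report above takes as its baseline (the Reduced Set Density Estimator itself additionally optimises a sparse set of weighting coefficients, which is not reproduced here):

    import numpy as np

    def parzen_pdf(x, sample, h):
        """Parzen window estimate with Gaussian kernels of width h:
        p(x) = (1/N) * sum_i N(x; sample_i, h^2)."""
        diffs = (x[:, None] - sample[None, :]) / h
        kernels = np.exp(-0.5 * diffs ** 2) / (h * np.sqrt(2 * np.pi))
        return kernels.mean(axis=1)

    rng = np.random.RandomState(1)
    # A bimodal sample of N = 1000 points
    sample = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(1, 1.0, 500)])
    grid = np.linspace(-5, 5, 11)
    print(np.round(parzen_pdf(grid, sample, h=0.3), 3))

Note that every query point touches all N sample points, which is exactly the per-evaluation cost that a reduced set estimator aims to cut down.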
From juergen at idsia.ch Mon Aug 5 09:23:18 2002 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Mon, 05 Aug 2002 15:23:18 +0200 Subject: Optimal Ordered Problem Solver Message-ID: <3D4E7C46.EDDC43CF@idsia.ch>

Optimal Ordered Problem Solver
Juergen Schmidhuber, IDSIA
TR IDSIA-12-02; arXiv:cs.AI/0207097 v1; 31 July 2002
36 pages, submitted to JMLR, condensed 8 page version to NIPS
http://www.idsia.ch/~juergen/oops.html
ftp://ftp.idsia.ch/pub/juergen/oops.ps.gz

We extend principles of non-incremental universal search to build a novel, optimally fast, incremental learner that is able to improve itself through experience. The Optimal Ordered Problem Solver (OOPS) searches for a universal algorithm that solves each task in a sequence of tasks. It continually organizes and exploits previously found solutions to earlier tasks, efficiently searching not only the space of domain-specific algorithms, but also the space of search algorithms. The initial bias is embodied by a task-dependent probability distribution on possible program prefixes (pieces of code that may continue). Prefixes are self-delimiting and executed in online fashion while being generated. They compute the probabilities of their own possible continuations. Let p^n denote a found prefix solving the first n tasks.
It may exploit previous solutions p^i (i < n).

From jfeldman at icsi.berkeley.edu Mon Aug 5 2002 From: jfeldman at icsi.berkeley.edu (Jerome Feldman) Date: Mon, 5 Aug 2002 Subject: neural binding Message-ID: <3D4EA544.7ABB45F9@icsi.berkeley.edu>

In his new book, Foundations of Language, Ray Jackendoff poses four challenges to "cognitive neuroscience", all related to the binding problem. He starts with the simple sentence:

    The big star's beside a little star.

and asks how neural computation could model the two different stars without cross talk. He then goes on to more complex problems like variables, memory levels and learning.

Various groups, including ours, have been working on these issues for decades and Ray's challenges provide a nice focus for assessing the current state of play. I will be happy to collect responses and post a summary back to the group. The relevant section of the book is pages 58-67 and is self contained.

--
Jerome Feldman
ICSI & EECS UC Berkeley        "O wad some Pow'r the giftie gie us
1947 Center St.                 To see oursels as other see us!"
Berkeley CA 94704                      Robert Burns - To a Louse

From tj at cs.cornell.edu Mon Aug 5 15:38:29 2002 From: tj at cs.cornell.edu (Thorsten Joachims) Date: Mon, 5 Aug 2002 15:38:29 -0400 Subject: SVM-light: new version and book Message-ID: <706871B20764CD449DB0E8E3D81C4D4302C19638@opus.cs.cornell.edu>

Dear Colleague,

a new version of SVM-Light (V5.00) is available, as well as my dissertation "Learning to Classify Text using Support Vector Machines", which recently appeared with Kluwer. The new version can be downloaded from http://svmlight.joachims.org/

SVM-Light is an implementation of Support Vector Machines (SVMs) for large-scale problems. The new features of this version are the following:
- Learning of ranking functions (e.g. for search engines), in addition to classification and regression.
- Bug fixes and improved numerical stability.

The dissertation describes the algorithms and methods implemented in SVM-light. In particular, it shows how these methods can be used for text classification. Links are on my homepage http://www.joachims.org/

Cheers
Thorsten
---
Thorsten Joachims
Assistant Professor
Department of Computer Science
Cornell University
http://www.joachims.org/
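The ranking mode announced above learns from pairwise preferences. As a rough illustration of that pairwise idea (not SVM-light's own code or data format; scikit-learn's LinearSVC is an assumed stand-in), a linear ranking function can be trained on difference vectors:

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.RandomState(0)
    w_true = np.array([2.0, -1.0, 0.5])     # hidden scoring function
    X = rng.randn(200, 3)
    scores = X @ w_true
    # Pairwise training data: x_i - x_j labeled +1 iff item i outranks item j
    pairs, labels = [], []
    for _ in range(500):
        i, j = rng.randint(0, 200, size=2)
        if scores[i] == scores[j]:
            continue
        pairs.append(X[i] - X[j])
        labels.append(1 if scores[i] > scores[j] else -1)
    clf = LinearSVC(C=1.0, max_iter=10000).fit(np.array(pairs), np.array(labels))
    # The learned weight vector induces a ranking; compare its direction
    # with the hidden scoring function
    w = clf.coef_.ravel()
    print(np.round(w / np.linalg.norm(w), 2),
          np.round(w_true / np.linalg.norm(w_true), 2))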
From cateau at brain.inf.eng.tamagawa.ac.jp Mon Aug 5 21:08:31 2002 From: cateau at brain.inf.eng.tamagawa.ac.jp (Hide Cateau) Date: Tue, 06 Aug 2002 10:08:31 +0900 Subject: New paper on the spike-timing-dependent plasticity Message-ID: <20020806095500.2449.CATEAU@brain.inf.eng.tamagawa.ac.jp>

Dear All,

I would like to announce the availability of the following paper at my site: http://brain.inf.eng.tamagawa.ac.jp/cateau/hide.index.html

Hideyuki Cateau & Tomoki Fukai, A stochastic method to predict the consequence of arbitrary forms of spike-timing-dependent plasticity, Neural Computation (2002) in press.

This paper enables us to predict the consequence of arbitrary forms of spike-timing-dependent plasticity without running any simulations.

...........................................................
A stochastic method to predict the consequence of arbitrary forms of spike-timing-dependent plasticity

Hideyuki Cateau* and Tomoki Fukai*#
* Core Research for the Evolutional Science and Technology Program (CREST), JST, Tokyo 194-8610, Japan
# Department of Engineering, Tamagawa University, Tokyo 194-8610, Japan

Abstract

Synapses in various neural preparations exhibit spike-timing-dependent plasticity (STDP) with a variety of learning window functions. The window functions determine the magnitude and the polarity of synaptic change according to the time difference of pre- and postsynaptic spikes. Numerical experiments revealed that STDP learning with a single-exponential window function resulted in a bimodal distribution of synaptic conductances as a consequence of competition between synapses. A slightly modified window function, however, resulted in a unimodal distribution, rather than a bimodal distribution. Since various window functions have been observed in neural preparations, we develop an unambiguous mathematical method to calculate the conductance distribution for any given window function. Our method is based on the Fokker-Planck equation to determine the conductance distribution and on the Ornstein-Uhlenbeck process to characterize the membrane potential fluctuations. Demonstrating that our method reproduces the known quantitative results of STDP learning, we apply the method to the type of STDP learning found recently in the CA1 region of the rat hippocampus. We find that this learning can result in nearly optimized competition between synapses. Meanwhile, we find that the type of STDP learning found in the cerebellum-like structure of electric fish can result in all-or-none synapses, i.e., either all the synaptic conductances are maximized or none of them become significantly large. Our method also determines the window function that optimizes synaptic competition.

____________________________________________________________
Hideyuki Cateau
Core Research for the Evolutional Science and Technology Program (CREST), JST
Lab. for mathematical information engineering, Dept. Info-Communication Engineering, Tamagawa Univ.
6-1-1 Tamagawa-Gakuen, Machida-shi, Tokyo 194-8610, Japan
cateau at brain.inf.eng.tamagawa.ac.jp
http://brain.inf.eng.tamagawa.ac.jp/members.html
phone: +81-42-739-8434, fax: +81-42-739-7135
____________________________________________________________
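The window-function formalism invites quick numerical experiments. Below is a toy Python caricature (not the paper's Fokker-Planck method; the parameter values and the weight-dependent pre/post timing bias are illustrative assumptions) in which additive STDP with a single-exponential window drives weights toward a bimodal distribution:

    import numpy as np

    rng = np.random.RandomState(0)
    N, steps = 1000, 20000
    w = rng.uniform(0, 1, N)                # synaptic weights in [0, 1]
    A_plus, A_minus, tau = 0.005, 0.009, 20.0   # ms; depression stronger

    def stdp_window(dt):
        """Single-exponential window: potentiate when pre precedes post
        (dt > 0), depress otherwise."""
        return np.where(dt > 0, A_plus * np.exp(-dt / tau),
                        -A_minus * np.exp(dt / tau))

    for _ in range(steps):
        idx = rng.randint(0, N, size=50)
        # Stronger synapses are more likely to precede (and help cause)
        # the postsynaptic spike, so bias dt positive in proportion to w
        sign = np.where(rng.rand(50) < 0.5 + 0.3 * w[idx], 1.0, -1.0)
        dt = sign * rng.exponential(tau, size=50)
        w[idx] = np.clip(w[idx] + stdp_window(dt), 0.0, 1.0)

    hist, _ = np.histogram(w, bins=10, range=(0, 1))
    print(hist)   # under these assumptions, weights pile up near 0 and 1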
From jose at psychology.rutgers.edu Tue Aug 6 16:02:39 2002 From: jose at psychology.rutgers.edu (stephen j. hanson) Date: 06 Aug 2002 16:02:39 -0400 Subject: neural binding In-Reply-To: <3D4EA544.7ABB45F9@icsi.berkeley.edu> References: <3D4EA544.7ABB45F9@icsi.berkeley.edu> Message-ID: <1028664159.2731.56.camel@vaio>

We recently published a paper in Neural Computation on a related kind of binding problem: In our case we were able to show transfer to NOVEL vocabularies (fixing DFAs) after training on other vocabularies using the same grammar. In effect the RNN learns to factor the vocabulary from the grammar and then quickly recruit new vocabularies that merely have syntactic similarity to already learned vocabularies. Examination of the hidden space indicates the RNN learns a type of "spatial metaphor" to recruit the unseen vocabulary in nearby parts of state (DFA) space. The network creates a hierarchical state structure that allows arbitrary and unique binding of new instances. Similarly, for decades Pinker and others have maintained that this sort of transfer was impossible for "associationist" networks.

Cheers,
Steve

----------
"On the Emergence of Rules in Neural Networks"
Stephen Jose Hanson and Michiro Negishi
Neural Computation - Contents - Volume 14, Number 9 - September 1, 2002

On Mon, 2002-08-05 at 12:18, Jerome Feldman wrote:
>
> In his new book, Foundations of Language, Ray Jackendoff poses four challenges
> to "cognitive neuroscience", all related to the binding problem. He starts with
> the simple sentence:
>
>     The big star's beside a little star.
>
> and asks how neural computation could model the two different stars without
> cross talk. He then goes on to more complex problems like variables, memory
> levels and learning.
>
> Various groups, including ours, have been working on these issues for decades and
> Ray's challenges provide a nice focus for assessing the current state of play. I
> will be happy to collect responses and post a summary back to the group. The
> relevant section of the book is pages 58-67 and is self contained.
>
> --
> Jerome Feldman
> ICSI & EECS UC Berkeley        "O wad some Pow'r the giftie gie us
> 1947 Center St.                 To see oursels as other see us!"
> Berkeley CA 94704                      Robert Burns - To a Louse

From dwang at cis.ohio-state.edu Wed Aug 7 10:01:04 2002 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Wed, 07 Aug 2002 10:01:04 -0400 Subject: neural binding References: <3D4EA544.7ABB45F9@icsi.berkeley.edu> Message-ID: <3D512821.9986EF0A@cis.ohio-state.edu>

Jerome Feldman wrote:
> He starts with the simple sentence:
>
>     The big star's beside a little star.
>
> and asks how neural computation could model the two different stars without
> cross talk.

A lot of work has been done in recent years using oscillator networks and temporal coding. The problem as stated above doesn't exist anymore for neural computation, I think, though how the brain solves the problem is another matter (the journal Neuron has a special issue on the neural binding problem in vol. 24, No. 1, 1999). For a short story on this see Wang: "On connectedness: a solution based on oscillatory correlation," Neural Computation, vol. 12, 131-139, 2000.

Regards,
DeLiang Wang
--
------------------------------------------------------------
Prof. DeLiang Wang
Department of Computer and Information Science
The Ohio State University
2015 Neil Ave.
Columbus, OH 43210-1277, U.S.A.
Email: dwang at cis.ohio-state.edu
Phone: 614-292-6827 (OFFICE); 614-292-7402 (LAB)
Fax: 614-292-2911
URL: http://www.cis.ohio-state.edu/~dwang

From mhb0 at Lehigh.EDU Thu Aug 8 10:07:45 2002 From: mhb0 at Lehigh.EDU (Mark H. Bickhard) Date: Thu, 08 Aug 2002 10:07:45 -0400 Subject: Call for Papers/Call for Participation Message-ID: <3D527B30.D7B8AD0C@lehigh.edu>

Interactivist Summer Institute 2003
July 22-26, 2003
Botanical Auditorium
Copenhagen, Denmark

Join us in exploring the frontiers of understanding of life, mind, and cognition. There is a growing recognition - across many disciplines - that phenomena of life and mind, including cognition and representation, are emergents of far-from-equilibrium, interactive, autonomous systems. Mind and biology, mind and agent, are being re-united. The classical treatment of cognition and representation within a formalist framework of encodingist assumptions is widely recognized as a fruitless maze of blind alleys. From neurobiology to robotics, from cognitive science to philosophy of mind and language, dynamic and interactive alternatives are being explored. Dynamic systems approaches and autonomous agent research join in the effort. The interactivist model offers a theoretical approach to matters of life and mind, ranging from evolutionary- and neuro-biology - including the emergence of biological function - through representation, perception, motivation, memory, learning and development, emotions, consciousness, language, rationality, sociality, personality and psychopathology. This work has developed interfaces with studies of central nervous system functioning, the ontology of process, autonomous agents, philosophy of science, and all areas of psychology, philosophy, and cognitive science that address the person.
The conference will involve both tutorials addressing central parts and aspects of the interactive model, and papers addressing current work of relevance to this general approach. This will be our second Summer Institute; the first was in 2001 at Lehigh University, Bethlehem, PA, USA. The intention is for this Summer Institute to become a traditional biennial meeting where those sharing the core ideas of interactivism will meet and discuss their work, try to reconstruct its historical roots, put forward current research in different fields that fits the interactivist framework, and define research topics for prospective graduate students. People working in philosophy of mind, linguistics, social sciences, artificial intelligence, cognitive robotics, theoretical biology, and other fields related to the sciences of mind are invited to send their paper submission or statement of interest for participation to the organizers. http://www.lehigh.edu/~interact/isi2003/isi2003.html

Mark
--
Mark H. Bickhard
Cognitive Science
17 Memorial Drive East
Lehigh University
Bethlehem, PA 18015
610-758-3633
mhb0 at lehigh.edu
mark.bickhard at lehigh.edu
http://www.lehigh.edu/~mhb0/mhb0.html

From hastie at stanford.edu Wed Aug 7 13:02:15 2002 From: hastie at stanford.edu (Trevor Hastie) Date: Wed, 7 Aug 2002 10:02:15 -0700 Subject: Two new papers: ICA and Boosting Message-ID: <004901c23e34$2e79c930$d3bb42ab@yacht>

The following two papers have been posted to my web page at http://www-stat.stanford.edu/~hastie/Papers/

a. Trevor Hastie and Robert Tibshirani. Independent Component Analysis through Product Density Estimation (ps file). A direct statistical approach to ICA, using an attractive spline representation to model each of the marginal densities.

b. Saharon Rosset, Ji Zhu and Trevor Hastie. Boosting as a Regularized Path to a Maximum Margin Classifier (ps file). We show that a version of boosting fits a model by optimizing an L1-penalized loss function. This in turn shows that the corresponding versions of Adaboost and Logitboost converge to an "L1" optimal separating hyperplane.

--------------------------------------------------------------------
Trevor Hastie
hastie at stanford.edu
Professor, Department of Statistics, Stanford University
Phone: (650) 725-2231 (Statistics)    Fax: (650) 725-8977
       (650) 498-5233 (Biostatistics) Fax: (650) 725-6951
URL: http://www-stat.stanford.edu/~hastie
address: room 104, Department of Statistics, Sequoia Hall, 390 Serra Mall, Stanford University, CA 94305-4065
--------------------------------------------------------------------

From Dave_Touretzky at cs.cmu.edu Sun Aug 11 04:40:19 2002 From: Dave_Touretzky at cs.cmu.edu (Dave_Touretzky@cs.cmu.edu) Date: Sun, 11 Aug 2002 04:40:19 -0400 Subject: announcing HHsim: a graphical Hodgkin-Huxley simulator in MATLAB Message-ID: <3979.1029055219@ammon.boltz.cs.cmu.edu>

HHsim is a graphical simulation of a section of excitable neuronal membrane using the Hodgkin-Huxley equations. It provides full access to the Hodgkin-Huxley parameters, membrane parameters, stimulus parameters, and ion concentrations. HHsim requires MATLAB version 6.

In contrast with NEURON or GENESIS, which are vastly more sophisticated research tools, HHsim is simple educational software designed specifically for graduate or undergraduate neurophysiology courses. The user interface can be mastered in a couple of minutes and provides many ways for the student to experiment. HHsim is free software distributed under the GNU General Public License. To download the software, or view screen captures illustrating its features, visit the HHsim web page: http://www.cs.cmu.edu/~dst/HHsim

HHsim was written by Dave Touretzky, Mark Albert, and Nathaniel Daw in the Computer Science Department and the Center for the Neural Basis of Cognition at Carnegie Mellon University. Its development was supported in part by National Science Foundation grant DGE-9987588.

-- Dave Touretzky
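For readers who want the equations behind such a simulator, here is a bare-bones forward-Euler integration of the standard Hodgkin-Huxley squid-axon model in Python (a pedagogical sketch under textbook parameters, not HHsim code; HHsim itself is MATLAB):

    import numpy as np

    # Classic squid-axon parameters (Hodgkin & Huxley, 1952); mV, ms, mS/cm^2
    g_na, g_k, g_l = 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.4
    c_m = 1.0

    # Voltage-dependent opening/closing rates for gates m, h, n
    def a_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
    def b_m(v): return 4.0 * np.exp(-(v + 65) / 18)
    def a_h(v): return 0.07 * np.exp(-(v + 65) / 20)
    def b_h(v): return 1.0 / (1 + np.exp(-(v + 35) / 10))
    def a_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
    def b_n(v): return 0.125 * np.exp(-(v + 65) / 80)

    dt, t_end = 0.01, 50.0                     # ms
    v, m, h, n = -65.0, 0.05, 0.6, 0.32        # resting state
    spikes = 0
    for step in range(int(t_end / dt)):
        t = step * dt
        i_ext = 10.0 if 5.0 <= t <= 45.0 else 0.0   # uA/cm^2 current pulse
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v_new = v + dt * (i_ext - i_ion) / c_m
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        if v < 0.0 <= v_new:                   # upward zero crossing = spike
            spikes += 1
        v = v_new
    print("spikes during pulse:", spikes)

The current pulse elicits repetitive firing; a GUI simulator essentially wraps an integration loop like this with interactive parameter control and plotting.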
From juergen at idsia.ch Mon Aug 12 10:15:10 2002 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Mon, 12 Aug 2002 16:15:10 +0200 Subject: optimal universal reinforcement learner Message-ID: <3D57C2EE.5B7B8946@idsia.ch>

I'd like to draw your attention to the first optimal, universal reinforcement learner. While traditional RL requires unrealistic Markovian assumptions, the recent AIXI model of Marcus Hutter just needs an environment whose reactions to control actions are sampled from an unknown but computable distribution mu. This includes basically every environment we can reasonably talk about:

M. Hutter. Towards a Universal Theory of Artificial Intelligence based on Algorithmic Probability and Sequential Decisions. Proc. ECML-2001, pp. 226-238.
http://www.idsia.ch/~marcus/ai/paixi.pdf
ftp://ftp.idsia.ch/pub/techrep/IDSIA-14-00.ps.gz

Self-Optimizing and Pareto-Optimal Policies in General Environments based on Bayes-Mixtures. Proc. COLT-2002, 364-379.
http://www.idsia.ch/~marcus/ai/selfopt.pdf
ftp://ftp.idsia.ch/pub/techrep/IDSIA-04-02.ps.gz

How does AIXI work? An optimal predictor using a universal Bayesmix XI predicts future events including reward. Here XI is just a weighted sum of all distributions nu in a set M. AIXI now simply selects those action sequences that maximize predicted reward. It turns out that this method really is self-optimizing in the following sense: for all nu in the mix, the average value of actions, given the history, asymptotically converges to the optimal value achieved by the unknown policy which knows the true mu in advance! The necessary condition is that M does admit self-optimizing policies. This is also sufficient! And there is no other policy yielding higher or equal value in all environments nu and a strictly higher value in at least one. Interestingly, the right way of treating the temporal horizon is not to discount it exponentially, as done in most traditional RL work, but to let the future horizon grow in proportion to the learner's lifetime so far.

To quote some AIXI referees: "[...] Clearly fundamental and potentially interesting research direction with practical applications. [...] Great theory. Extends a major theoretical direction that led to practical MDL and MML. This approach may do the same thing (similar thing) wrt to decision theory and reinforcement learning, to name a few." "[...] this could be the foundation of a theory which might inspire AI and MC for years (decades?)."

Juergen Schmidhuber, IDSIA
http://www.idsia.ch/~juergen/unilearn.html
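The predictive core of this construction, stripped of universality, is ordinary Bayes-mixture prediction. The Python sketch below uses a finite Bernoulli model class as a stand-in for the set M (which in AIXI is vastly richer and incomputable) and shows the mixture xi concentrating on the true environment:

    import numpy as np

    # Finite model class: Bernoulli(theta) coins as stand-ins for the
    # environments nu in M; the true environment mu has theta = 0.8
    thetas = np.array([0.1, 0.3, 0.5, 0.7, 0.8, 0.9])
    w = np.ones_like(thetas) / len(thetas)    # prior weights w_nu

    rng = np.random.RandomState(0)
    seq = (rng.rand(200) < 0.8).astype(int)   # data sampled from mu

    code_len = 0.0
    for x in seq:
        # Mixture prediction: xi(x=1 | history) = sum_nu w_nu * nu(1)
        p1 = float(w @ thetas)
        code_len += -np.log2(p1 if x == 1 else 1 - p1)
        # Bayes update of the posterior weights after observing x
        lik = thetas if x == 1 else 1 - thetas
        w = w * lik
        w /= w.sum()

    print("posterior over model class:", np.round(w, 3))
    print("next-bit prediction xi(1):", round(float(w @ thetas), 3))
    print("bits paid by the mixture:", round(code_len, 1))

The posterior mass piles onto theta = 0.8, and the mixture's cumulative code length exceeds the best model's by at most log2 of the prior weight, which is the finite analogue of the optimality arguments above.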
From markman at psyvax.psy.utexas.edu Mon Aug 12 14:30:16 2002 From: markman at psyvax.psy.utexas.edu (Art Markman) Date: Mon, 12 Aug 2002 13:30:16 -0500 Subject: Rumelhart Prize goes to Joshi Message-ID:

For immediate release: Aravind Joshi is the winner of the third annual David E. Rumelhart Prize. This announcement was made at the Cognitive Science Society meeting at George Mason University on August 9, 2002. The Rumelhart prize is given each year to a cognitive scientist to honor his or her outstanding contributions to the formal analysis of human cognition. This prize was created by the Glushko-Samuelson Foundation to honor David E. Rumelhart, a cognitive scientist who exploited a wide range of formal methods to address issues and topics in Cognitive Science. Dr. Joshi will receive his award and give a talk at the 25th annual meeting of the Cognitive Science Society in Boston, MA in August of 2003. For a more detailed discussion of the prize and of Dr. Joshi's work, please see http://www.cnbc.cmu.edu/derprize/announce2003.html.

From Padraig.Cunningham at cs.tcd.ie Wed Aug 14 07:46:05 2002 From: Padraig.Cunningham at cs.tcd.ie (Padraig Cunningham) Date: Wed, 14 Aug 2002 12:46:05 +0100 Subject: Post Doc position Trinity College Dublin Message-ID: <0f3001c24388$2be8de50$2d2fe286@citeog>

Post-Doctoral Researchers
Computer Science
Trinity College Dublin

Applications from researchers with experience in machine learning or data mining are invited for two post-doctoral positions within the Machine Learning Group in the Department of Computer Science at Trinity College Dublin. The positions are funded by Science Foundation Ireland as part of a research project on knowledge discovery and explanation in bioinformatics, medical informatics and finance. The project will tackle six problems from machine learning and data mining. These are: Local Feature Weighting in Lazy-Learning Systems, Explanation in Case-Based Reasoning Systems, Explanation in Ensembles of Regression Systems, Overfitting in Feature Weighting / Feature Selection, Validation of Clustering Results and Explanation of Clustering Results.

Salary in the range Euro 25,000 - Euro 33,000 per annum, dependent upon publication record and experience.

Candidates should submit a cover letter, full CV, list of publications and names of two referees to: Padraig.Cunningham at cs.tcd.ie

Prof. Pádraig Cunningham, Machine Learning Group, Dept. of Computer Science, Trinity College, Dublin 2, Ireland. Tel: +353 1 608 1765 / Fax: +353 1 677 2204

Closing date for receipt of applications: Friday, 13th September, 2002.

TRINITY COLLEGE IS AN EQUAL OPPORTUNITIES EMPLOYER

From krichmar at nsi.edu Thu Aug 15 10:47:31 2002 From: krichmar at nsi.edu (Jeff Krichmar) Date: Thu, 15 Aug 2002 07:47:31 -0700 Subject: Machine Psychology and Brain-Based Devices Message-ID: <000501c2446a$af242940$c3b985c6@nsi.edu>

Dear Connectionists,

I thought our website would be of interest to many of you: http://www.nsi.edu/nomad. The NOMAD (Neurally Organized Mobile Adaptive Device) project was established in order to study the principles of brain function by using real-world devices controlled by nervous systems simulated according to biological principles. These devices learn through their experience about features in their environment, in much the same way as living creatures do. The website contains descriptions, pictures, movies and recent publications. The most recent publication, J.L. Krichmar and G.M. Edelman (2002), Machine Psychology: Autonomous Behavior, Perceptual Categorization and Conditioning in a Brain-Based Device, Cerebral Cortex 12:818-830, is available in pdf format (http://www.nsi.edu/nomad/pubs.htm).

Best regards,
Jeff Krichmar
The Neurosciences Institute
10640 John J. Hopkins Dr.
San Diego, CA 92121
krichmar at nsi.edu

From Henry.Tirri at cs.helsinki.fi Thu Aug 15 08:20:38 2002 From: Henry.Tirri at cs.helsinki.fi (Henry Tirri) Date: Thu, 15 Aug 2002 15:20:38 +0300 Subject: Minimum Description Length on the Web - a new www site Message-ID: <3D5B9C96.8070901@cs.helsinki.fi>

I am pleased to announce a recently created web-resource for Minimum Description Length (MDL) research at http://www.mdl-research.org/

The site is maintained by the Complex Systems Computation Group (CoSCo) at the Helsinki Institute for Information Technology. We will be gradually building the resource to be more comprehensive, and all feedback and comments are appreciated.

Henry Tirri

Brief description of the site:
==============================

What is MDL? The purpose of statistical modeling is to discover regularities in observed data. The success in finding such regularities can be measured by the length with which the data can be described. This is the rationale behind the Minimum Description Length (MDL) Principle introduced by Jorma Rissanen (Rissanen, 1978). "The MDL Principle is a relatively recent method for inductive inference. The fundamental idea behind the MDL Principle is that any regularity in a given set of data can be used to compress the data, i.e. to describe it using fewer symbols than needed to describe the data literally." (Grünwald, 1998)

What is mdl-research.org? Minimum Description Length on the Web is intended as a source of information for everyone who wants to know more about MDL. The site contains links and references to suggested reading, tutorials, lecture notes, etc. on MDL as well as links to people who are working on MDL and related topics. The Reading section contains references to selected articles, books, and lecture material, and links to journals and conferences that publish MDL related material. The Demonstrations section will illustrate MDL through on-line demonstrations. The section is under construction but you can already find a demonstration on Markov chain order selection. The People section has links to researchers who are working on MDL and related fields. You can find loads of related material in their homepages. The Related Topics section is a short collection of links to MDL related topics, such as information theory, Bayesian statistics, etc. It can help you locate useful background knowledge.

References:
J. Rissanen, Modeling by shortest data description. Automatica, vol. 14 (1978), pp. 465-471.
Peter Grünwald, The Minimum Description Length Principle and Reasoning under Uncertainty, Ph.D. Thesis, ILLC Dissertation Series DS 1998-03, CWI, the Netherlands, 1998.

-----------------------------------------------------------
Henry Tirri, PhD.
Research Director, Prof. of Computer Science
Complex Systems Computation Group
Helsinki Institute for Information Technology (HIIT)
http://www.hiit.fi/henry.tirri; email: henry.tirri at hiit.fi
-----------------------------------------------------------
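As a concrete taste of the principle, here is a rough Python sketch in the spirit of the site's Markov chain order selection demo (the crude two-part code length used here, -log2 likelihood plus (k/2) log2 n bits for k free parameters, is an illustrative assumption rather than Rissanen's refined criteria):

    import numpy as np

    def mdl_score(seq, order):
        """Two-part code length for a binary Markov chain of given order:
        data cost = -log2 P(seq | ML parameters), model cost = (k/2) log2 n."""
        n = len(seq)
        counts = {}
        for i in range(order, n):
            ctx = tuple(seq[i - order:i])
            c = counts.setdefault(ctx, [0, 0])
            c[seq[i]] += 1
        data_bits = 0.0
        for c0, c1 in counts.values():
            for c in (c0, c1):
                if c > 0:
                    data_bits -= c * np.log2(c / (c0 + c1))
        k = 2 ** order              # one free probability per context
        return data_bits + 0.5 * k * np.log2(n)

    rng = np.random.RandomState(0)
    # Order-2 binary chain: the next bit usually repeats the bit two back
    seq = [0, 1]
    for _ in range(2000):
        seq.append(seq[-2] if rng.rand() < 0.9 else 1 - seq[-2])
    for order in range(4):
        print(order, round(mdl_score(seq, order), 1))   # minimum at order 2

Higher orders keep shrinking the data cost slightly, but the parameter penalty grows exponentially with the order, so the total description length bottoms out at the true order: compression and model selection coincide.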
From shastri at ICSI.Berkeley.EDU Fri Aug 16 02:51:58 2002 From: shastri at ICSI.Berkeley.EDU (Lokendra Shastri) Date: Thu, 15 Aug 2002 23:51:58 PDT Subject: neural binding Message-ID: <200208160651.XAA03985@dill.ICSI.Berkeley.EDU>

Dear Connectionists,

The following is a list of papers by my collaborators and me that address the four problems for Cognitive Neuroscience posed by Ray Jackendoff. Most of these papers and other related papers are accessible at http://www.icsi.berkeley.edu/~shastri.

1. The dynamic binding problem

``Rules and variables in neural nets'', V. Ajjanagadde and L. Shastri, Neural Computation, 3, 121--134, 1991.

``From simple associations to systematic reasoning: a connectionist encoding of rules, variables, and dynamic bindings using temporal synchrony'', L. Shastri and V. Ajjanagadde. Behavioral and Brain Sciences, Vol. 16, No. 3, 417--494, 1993. (The response-to-commentators section of this paper also addresses the problem of encoding embedded structures using dynamic bindings.)

``Temporal Synchrony, Dynamic Bindings, and SHRUTI: a representational but non-classical model of reflexive reasoning'', L. Shastri. Behavioral and Brain Sciences, Vol. 19, No. 2, 331-337, 1996.

``Advances in SHRUTI --- A neurally motivated model of relational knowledge representation and rapid inference using temporal synchrony'', L. Shastri. Applied Intelligence, 11, 79--108, 1999.

``Seeking coherent explanations --- a fusion of structured connectionism, temporal synchrony, and evidential reasoning,'' L. Shastri and C. Wendelken. In Proc. 22nd Annual Conference of the Cognitive Science Society, pp. 453--458, Philadelphia, PA, August 2000.

Also relevant here is the work of Jaime Henderson on parsing. He shows how a parse structure can be built incrementally using Shruti-like representations and synchronous binding: ``Connectionist Syntactic Parsing Using Temporal Variable Binding'', Henderson, J. (1994). Journal of Psycholinguistic Research, 23(5), 353--379.

2. The problem of 2 (aka the multiple instantiation problem)

L. Shastri and V. Ajjanagadde, BBS 1993, listed above.

``Reflexive Reasoning with Multiple-Instantiation in a Connectionist Reasoning System with a Typed Hierarchy'', D.R. Mani and L. Shastri. Connection Science, Vol. 5, No. 3 & 4, 205--242, 1993.

3. (Typed) variables

Preliminary solutions were proposed in Shastri and Ajjanagadde, BBS 1993, and Mani and Shastri, Connection Science 1993. A fully developed solution appears in: ``Types and Quantifiers in Shruti --- a connectionist model of rapid reasoning and relational processing,'' L. Shastri. In Hybrid Neural Symbolic Integration, S. Wermter and R. Sun (Eds.), Springer-Verlag, Berlin, pp. 28-45, 2000.

4. One-shot learning of relational structures in long-term memory

``Episodic memory and cortico-hippocampal interactions,'' L. Shastri. Trends in Cognitive Sciences, 6(4):162-168, 2002.

``From transient patterns to persistent structures: a computational model of episodic memory formation via cortico-hippocampal interactions,'' L. Shastri. Behavioral and Brain Sciences, 62 pages (in revision).

``Biological Grounding of Recruitment Learning and Vicinal Algorithms in Long-term Potentiation'', L. Shastri. In Emergent neural computational architectures based on neuroscience, J. Austin, S. Wermter, and D. Wilshaw (Eds.), Springer-Verlag, Berlin, pp. 348-367, 2001.

From nnk at his.atr.co.jp Wed Aug 21 03:00:21 2002 From: nnk at his.atr.co.jp (Neural Networks Japan Office) Date: Wed, 21 Aug 2002 16:00:21 +0900 Subject: Neural Networks 15(4/5/6): 2002 Special Issue Message-ID:

NEURAL NETWORKS 15(4/5/6)
Contents - Volume 15, Numbers 4/5/6 - 2002
2002 Special Issue "Computational Models of Neuromodulation"
Hasselmo, co-editors
------------------------------------------------------------------
***** Introduction *****
Cellular, synaptic and network effects of neuromodulation
Eve Marder and Vatsala Thirumalai
Metalearning and neuromodulation
Kenji Doya
***** Dopamine *****
Dopamine-dependent plasticity of cortico-striatal synapses
John N.J. Reynolds and Jeffery R. Wickens
TD models of reward predictive responses in dopamine neurons
Roland E. Suri
Actor-critic models of the basal ganglia: new anatomical and computational perspectives
Daphna Joel, Yael Niv, Eytan Ruppin
Dopamine: generalization and bonuses
Sham Kakade and Peter Dayan
The computational role of dopamine D1 receptors in working memory
Daniel Durstewitz and Jeremy Seamans
Dopamine controls fundamental cognitive operations of multi-target spatial working memory
Shoji Tanaka
An integrative theory of the phasic and tonic modes of dopamine modulation in the prefrontal cortex
Jean-Claude Dreher, Yves Burnod
***** Serotonin *****
Opponent interactions between serotonin and dopamine
Nathaniel D. Daw, Sham Kakade, Peter Dayan
Local analysis of behaviour in the adjusting-delay task assessing choice of delayed reinforcement
Rudolf N. Cardinal, Nathaniel Daw, Trevor W. Robbins and Barry J. Everitt
***** Norepinephrine *****
Neuromodulation of decision and response selection
Marius Usher, Eddy Davelaar
Simplified dynamics in a model of noradrenergic modulation of cognitive performance
M. S. Gilzenrat, B. D. Holmes, J. Rajkowski, G. Aston-Jones, and J. D. Cohen
Control of exploitation-exploration meta-parameter in reinforcement learning
Shin Ishii, Wako Yoshida, Junichiro Yoshimoto
***** Acetylcholine *****
Neuromodulation, theta rhythm and rat spatial navigation
Michael Hasselmo, Jonathan Hay, Maxim Ilyn, and Anatoli Gorchetchnikov
Cholinergic modulation of sensory representations in the olfactory bulb
Christiane Linster and Thomas A. Cleland
Acetylcholine in cortical inference
Angela J. Yu, Peter Dayan
Sensory-motor gating and cognitive control by the brainstem cholinergic systems
Yasushi Kobayashi and Tadashi Isa
***** On-line Adaptation *****
On-line learning in changing environments with applications in supervised and unsupervised learning
Noboru Murata, Motoaki Kawanabe, Andreas Ziehe, Klaus-Robert Müller, Shun-ichi Amari
Neuromodulation and plasticity in an autonomous robot
Olaf Sporns, William H. Alexander
------------------------------------------------------------------
Electronic access: www.elsevier.com/locate/neunet/. Individuals can look up instructions, aims & scope, see news, tables of contents, etc. Those who are at institutions which subscribe to Neural Networks get access to full article text as part of the institutional subscription. Sample copies can be requested for free and back issues can be ordered through the Elsevier customer support offices: nlinfo-f at elsevier.nl, usinfo-f at elsevier.com, or info at elsevier.co.jp ------------------------------ INNS/ENNS/JNNS Membership includes a subscription to Neural Networks: The International (INNS), European (ENNS), and Japanese (JNNS) Neural Network Societies are associations of scientists, engineers, students, and others seeking to learn about and advance the understanding of the modeling of behavioral and brain processes, and the application of neural modeling concepts to technological problems. Membership in any of the societies includes a subscription to Neural Networks, the official journal of the societies.
Application forms should be sent to all the societies you want to apply to (for example, one as a member with subscription and the other one or two as a member without subscription). The JNNS does not accept credit cards or checks; to apply to the JNNS, send in the application form and wait for instructions about remitting payment. The ENNS accepts bank orders in Swedish Crowns (SEK) or credit cards. The INNS does not invoice for payment.
----------------------------------------------------------------------------
Membership Type           INNS           ENNS           JNNS
----------------------------------------------------------------------------
Membership with           $80            660 SEK        Y 15,000 [including
Neural Networks                                         Y 2,000 entrance fee]
  (student rate)          $55            460 SEK        Y 13,000 [including
                                                        Y 2,000 entrance fee]
----------------------------------------------------------------------------
Membership without        $30            200 SEK        not available to
Neural Networks                                         non-students;
(subscribe through                                      Y 5,000 (student)
another society)                                        [including Y 2,000
                                                        entrance fee]
----------------------------------------------------------------------------
Institutional rates       $1132          2230 NLG       Y 149,524
----------------------------------------------------------------------------
Name: _____________________________________
Title: _____________________________________
Address: _____________________________________
_____________________________________
_____________________________________
Phone: _____________________________________
Fax: _____________________________________
Email: _____________________________________
Payment: [ ] Check or money order enclosed, payable to INNS or ENNS
OR [ ] Charge my VISA or MasterCard
card number ____________________________
expiration date ________________________
INNS Membership: 19 Mantua Road, Mount Royal, NJ 08061, USA; 856 423 0162 (phone); 856 423 3420 (fax); innshq at talley.com; http://www.inns.org
ENNS Membership: University of Skovde, P.O. Box 408, 531 28 Skovde, Sweden; 46 500 44 83 37 (phone); 46 500 44 83 99 (fax); enns at ida.his.se; http://www.his.se/ida/enns
JNNS Membership: c/o Professor Tsukada, Faculty of Engineering, Tamagawa University, 6-1-1, Tamagawa Gakuen, Machida-city, Tokyo 113-8656, Japan; 81 42 739 8431 (phone); 81 42 739 8858 (fax); jnns at jnns.inf.eng.tamagawa.ac.jp; http://jnns.inf.eng.tamagawa.ac.jp/home-j.html
----------------------------------------------------------------- end From cateau at brain.inf.eng.tamagawa.ac.jp Wed Aug 21 03:33:15 2002 From: cateau at brain.inf.eng.tamagawa.ac.jp (Hide Cateau) Date: Wed, 21 Aug 2002 16:33:15 +0900 Subject: Paper: Analytic study of the multiple phosphorylation process Message-ID: <20020821161945.970F.CATEAU@brain.inf.eng.tamagawa.ac.jp> Dear All, I would like to announce the availability of the following paper at my web site: http://brain.inf.eng.tamagawa.ac.jp/cateau/hide.index.html Hideyuki Cateau and Shigeru Tanaka, Kinetic analysis of multisite phosphorylation using analytic solutions to Michaelis-Menten equations, J. Theor. Biol. (2002) 217:1-14 This paper provides temporal progress curves of multiple phosphorylation, which occurs frequently in intracellular signal transduction pathways. The progress curves are given analytically, enabling quantitative analysis of the signal transduction pathway. .....................................................................
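To make the kinetics concrete: for the classical single-substrate Michaelis-Menten rate law dS/dt = -Vmax*S/(Km + S) -- the building block of the multisite system above, not the authors' full multisite derivation -- the progress curve has a well-known closed form via the Lambert W function, S(t) = Km*W((S0/Km)*exp((S0 - Vmax*t)/Km)). A minimal Python sketch of this single-site case, with made-up parameter values:

import numpy as np
from scipy.special import lambertw

def substrate(t, s0, vmax, km):
    # Closed-form progress curve for dS/dt = -vmax*S/(km + S):
    # S(t) = km * W((s0/km) * exp((s0 - vmax*t)/km)), W = Lambert W.
    arg = (s0 / km) * np.exp((s0 - vmax * t) / km)
    return km * lambertw(arg).real

t = np.linspace(0.0, 10.0, 101)             # time grid (arbitrary units)
s = substrate(t, s0=1.0, vmax=0.5, km=0.3)  # illustrative parameters only
print(s[0], s[-1])                          # substrate decays from s0 toward 0

In the spirit of the paper's criterion, one would fit such curves for each phosphorylation site and check whether two fitted progress curves intersect, an intersection indicating cooperativity.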
Kinetic Analysis of Multisite Phosphorylation Using Analytic Solutions to Michaelis-Menten Equations Hideyuki Cateau and Shigeru Tanaka Laboratory for Visual Neurocomputing, RIKEN Brain Science Institute, Hirosawa 2-1, Wako, Saitama 351-0198, Japan Phosphorylation-induced expression or modulation of a functional protein is a common signal in living cells. Many functional proteins are phosphorylated at multiple sites, and it is frequently observed that phosphorylation at one site enhances or suppresses phosphorylation at another site. Therefore, characterizing such cooperative phosphorylation is important. In this study, we determine a temporal progress curve of multisite phosphorylation by analytically integrating the Michaelis-Menten equations in time. Using this theoretical progress curve, we derive the useful criterion that an intersection of two progress curves implies the presence of cooperativity. Experiments generally yield noisy progress curves. We fit the theoretical progress curves to noisy progress curves containing 4% Gaussian noise in order to determine the kinetics of the phosphorylation. This fitting correctly identifies the sites involved in cooperative phosphorylation. (c) 2002 Elsevier Science Ltd. All rights reserved. ____________________________________________________________ Hideyuki Cateau Core Research for the Evolutional Science and Technology Program (CREST), JST Lab. for mathematical information engineering, Dept. Info-Communication Engineering, Tamagawa Univ. 6-1-1 Tamagawa-Gakuen, Machida-shi, Tokyo 194-8610, Japan cateau at brain.inf.eng.tamagawa.ac.jp http://brain.inf.eng.tamagawa.ac.jp/members.html phone: +81-42-739-8434, fax: +81-42-739-7135 ____________________________________________________________ From jfeldman at ICSI.Berkeley.EDU Wed Aug 21 17:17:26 2002 From: jfeldman at ICSI.Berkeley.EDU (Jerome Feldman) Date: Wed, 21 Aug 2002 14:17:26 -0700 Subject: Binding - interim report Message-ID: <3D640366.F50611C5@icsi.berkeley.edu> A while back, I posted a query about Jackendoff's four challenges in neural binding. In addition to the responses that were posted to the whole group, the three other replies are included in this message. Only one response showed any evidence of having looked at Jackendoff's problems and formulated a response. It isn't obvious (at least to me) how to use any of the standard techniques to specify a model that meets Jackendoff's criteria. I privately encouraged the respondents to outline how they would do this and hope that they and others will respond. Jerry F. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Jerry, Connectionist models that learn to process sentences have had to solve the binding problem and have done so for many years. Among the models that solve this problem I can list St. John and McClelland (Artificial Intelligence, 1990), the Miikkulainen and Dyer work, Elman's simple recurrent networks, and a recent CMU-CS dissertation by Doug Rohde. The latter is available at Doug's home page: http://tedlab.mit.edu/~dr/Thesis Best wishes, - Jay McClelland ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Here are some papers describing a mostly localist approach toward the variable binding problem: A. Browne and R. Sun, Connectionist inference models. Neural Networks, Vol.14, No.10, pp.1331-1355, December 2001. A. Browne and R. Sun, Connectionist variable binding. Expert Systems, Vol.16, No.3, pp.189-207, 1999. R.
Sun, Robust reasoning: integrating rule-based and similarity-based reasoning. Artificial Intelligence (AIJ), Vol.75, No.2, pp.241-296, June 1995. R. Sun, On schemas, logics, and neural assemblies. Applied Intelligence, Vol.5, No.2, pp.83-102, 1995. R. Sun, Beyond associative memories: logics and variables in connectionist networks. Information Sciences, Vol.70, No.1-2, pp.49-74, 1993. R. Sun, On variable binding in connectionist networks. Connection Science, Vol.4, No.2, pp.93-124, 1992. Most of these papers can be downloaded from my web page. Professor Ron Sun, Ph.D ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Dear Jerome, Like DeLiang Wang, who recently posted a response to your request on the connectionists list, I would say that there are now powerful neural binding architectures available. I have been working on a spatial model of neural binding, which employs spatial coactivation of neurons for the representation of feature bindings. It is called the competitive layer model, and has been shown to perform well on perceptual grouping tasks like contour grouping, texture and greyscale segmentation: H. Wersing, J. J. Steil, and H. Ritter. A competitive layer model for feature binding and sensory segmentation. Neural Computation 13(2):357-387 (2001). http://www.techfak.uni-bielefeld.de/ags/ni/publications/papers/WersingSteilRitter2001-ACL.ps.gz H. Wersing. Spatial Feature Binding and Learning in Competitive Neural Layer Architectures. PhD Thesis, Faculty of Technology, University of Bielefeld, March 2000. Published by Cuvillier, Goettingen. http://www.techfak.uni-bielefeld.de/~hwersing/dissertation.ps.gz One particular feature of the model is that it can be trained in a very efficient way, by solving a simple quadratic optimization problem. Our applications so far have concentrated on segmentation problems, but the framework could easily be applied to other problem domains: H. Wersing. Learning Lateral Interactions for Feature Binding and Sensory Segmentation. Advances in Neural Information Processing Systems NIPS 2001, Vancouver. http://www.techfak.uni-bielefeld.de/ags/ni/publications/papers/Wersing2001-LLI.ps.gz With kindest regards, Heiko Wersing ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From E.Koning at elsevier.nl Thu Aug 22 04:07:28 2002 From: E.Koning at elsevier.nl (Koning, Esther (ELS)) Date: Thu, 22 Aug 2002 09:07:28 +0100 Subject: CFP Neurocomputing - Special Issue on Bioinformatics Message-ID: <4D56BD81F62EFD49A74B1057ECD75C0603A9A210@elsamsvexch01.elsevier.nl> CALL FOR PAPERS NEUROCOMPUTING An International Journal published by Elsevier Science B.V., vol. 49-55, 24 issues, in 2003 ISSN 0925-2312, URL: Special Issue on Bioinformatics Paper Submission Deadline: October 31st, 2002 Bioinformatics applies -- simply stated -- computational methods to the solution of biological problems. Bioinformatics, genomics, molecular biology, molecular evolution, computational biology, and related fields are at the intersection between two axes: data sequences/physiology and information technology. Sequences include DNA sequences (gene, genome, organization), molecular evolution, protein structure, folding, function, and interaction, metabolic pathways, regulation signaling networks, physiology and cell biology (interspecies, interaction), as well as ecology and environment. Information technology in this context includes hardware and instrumentation, computation, as well as mathematical and physical models.
The intersection between two subfields, one in each axis, generates areas including those known as genome sequencing, proteomics, functional genomics (microarrays, 2D-PAGE, ...), high-tech field ecology, genomic data analysis, statistical genomics, protein structure, prediction, protein dynamics, protein folding and design, data standards, data representations, analytical tools for complex biological data, dynamical systems modeling, as well as computational ecology. Research in these fields comprises property abstraction from the biological system, design and development of data analysis algorithms, as well as of databases and data access web-tools. Genome sequencing and related projects generate vast amounts of data that need to be analyzed, thus emphasizing the relevance of efficient methods of data analysis and of the whole discipline. The Neurocomputing journal invites original contributions for the forthcoming special issue on Bioinformatics from a broad scope of areas. Some topics relevant to this special issue include, but are not restricted to:
-- Theoretical foundations, algorithms, implementations, and complete systems
-- Sequence analysis (single, multiple), alignment, annotation, etc.
-- Improvements in databases and web-tools for bioinformatics
-- Novel metrics and biological data preprocessing for posterior analysis
-- Systems biology models and data modeling techniques including statistical inference, stochastic processes, random walks, Markov chains, hidden Markov models, motifs, profiles, dynamic programming, pattern recognition techniques, neural networks, support vector machines, evolutionary models, tree estimation, etc.
-- Pathway inference, e.g. to determine where to target a drug using gene expression data and address side effects by providing information on where else a target metabolite appears.
-- Key applications in diverse fields including bioinformatics, genomics, molecular biology, molecular evolution, computational biology, drug design, etc.
Please send two hardcopies of the manuscript before October 31st, 2002, to: V. David Sanchez A., Neurocomputing - Editor in Chief - Advanced Computational Intelligent Systems, P.O. Box 1424, La Cañada, CA 91012, U.S.A. Street address: 1149 Wotkyns Drive, Pasadena, CA 91103, U.S.A. Fax: +1-626-793-5120 Email: vdavidsanchez at earthlink.net including abstract, keywords, a cover page containing the title and author names, the corresponding author's complete address including telephone, fax, and email address, and a clear indication that it is a submission to the Special Issue on Bioinformatics. Guest Editors: Harvey J. Greenberg, Center for Computational Biology, University of Colorado at Denver, P.O. Box 173364, Denver, CO 80217-3364. Phone: (303) 556-8464 Fax: (303) 556-8550 Email: Harvey.Greenberg at cudenver.edu Lawrence Hunter, Center for Computational Pharmacology, University of Colorado Health Science Center, 4200 E. Ninth Ave., Denver, CO 80262. Phone: (303) 315-1094 Fax: (303) 315-1098 Email: Larry.Hunter at uchsc.edu Satoru Miyano, Human Genome Center, Institute of Medical Science, University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639, Japan. Phone: +81-3-5449-5615 Fax: +81-3-5449-5442 Email: miyano at ims.u-tokyo.ac.jp Ralf Zimmer, Praktische Informatik und Bioinformatik, Institut für Informatik, LMU München, Theresienstrasse 39, D-80333 München. Phone: +49-89-2180-4447 Fax: +49-89-2180-4054 Email: zimmer at bio.informatik.uni-muenchen.de V.
David Sanchez A., Neurocomputing - Editor in Chief - Advanced Computational Intelligent Systems, P.O. Box 1424, La Cañada, CA 91012, U.S.A. Fax: +1-626-793-5120 Email: vdavidsanchez at earthlink.net From alpaydin at boun.edu.tr Thu Aug 22 08:17:12 2002 From: alpaydin at boun.edu.tr (Ethem Alpaydin) Date: Thu, 22 Aug 2002 15:17:12 +0300 Subject: ICANN/ICONIP 2003 (Istanbul, Turkey) Message-ID: <3D64D648.D9F73CA@boun.edu.tr> --- JOINT 13th International Conference on Artificial Neural Networks and 10th International Conference on Neural Information Processing ICANN/ICONIP 2003 June 26-29, 2003, Istanbul, TURKEY http://www.nn2003.org Both the International Conference on Artificial Neural Networks and the International Conference on Neural Information Processing are very well established conferences, the former being the main annual conference of the European Neural Network Society and the latter of the Asia Pacific Neural Networks Assembly. In 2003, the two conferences will, for the first time, be held jointly, and what better place can there be for such an event than Istanbul, where East meets West?
Important Dates:
                      Full papers        Short papers
Submission Deadline   December 1, 2002   January 7, 2003
Acceptance Notice     February 3, 2003   March 7, 2003
Camera Ready Papers   April 1, 2003      May 9, 2003
--- From christof at teuscher.ch Thu Aug 22 11:43:33 2002 From: christof at teuscher.ch (Christof Teuscher) Date: Thu, 22 Aug 2002 17:43:33 +0200 Subject: [IPCAT2003] - First Call for Papers Message-ID: <3D6506A5.21B9630F@teuscher.ch> ================================================================ We apologize if you receive multiple copies of this email. Please distribute this announcement to all interested parties. For removal specify address to remove_ipcat2003 at lslsun.epfl.ch ================================================================ **************************************************************** FIRST CALL FOR PAPERS **************************************************************** ** IPCAT2003 ** Fifth International Workshop on Information Processing in Cells and Tissues September 8 - 11, 2003 Swiss Federal Institute of Technology Lausanne (EPFL) Lausanne, Switzerland http://lslwww.epfl.ch/ipcat2003 **************************************************************** Description: ------------ The aim of the series of IPCAT workshops is to bring together a multidisciplinary core of scientists who are working in the general area of modeling information processing in biosystems. A general theme is the nature of biological information and the ways in which it is processed in biological and artificial cells and tissues. The key motivation is to provide a common ground for dialogue and interaction, without emphasis on any particular research constituency, or way of modeling, or single issue in the relationship between biology and information. IPCAT2003 will highlight recent research and seek to further the dialogue, exchange of ideas, and development of interactive viewpoints between biologists, physicists, computer scientists, technologists and mathematicians that have been progressively expanded throughout the IPCAT series of meetings (since 1995). The workshop will feature sessions of selected original research papers grouped around emergent themes of common interest, and a number of discussions and talks focusing on wider themes. IPCAT2003 will give particular attention to morphogenetic and ontogenetic processes and systems.
IPCAT2003 encourages experimental, computational, and theoretical articles that link biology and the information processing sciences and that encompass the fundamental nature of biological information processing, the computational modeling of complex biological systems, evolutionary models of computation, the application of biological principles to the design of novel computing systems, and the use of biomolecular materials to synthesize artificial systems that capture essential principles of natural biological information processing. Topics of Interest: ------------------- Topics to be covered include, but are not limited to, the following:
o Self-organizing, self-repairing, and self-replicating systems
o Evolutionary algorithms
o Machine learning
o Evolving, adapting, and neural hardware
o Automata and cellular automata
o Information processing in neural and non-neural biosystems
o Parallel distributed processing biosystem models
o Information processing in bio-developmental systems
o Novel bio-information processing systems
o Autonomous and evolutionary robotics
o Bionics, neural implants, and bio-robotics
o Molecular evolution and theoretical biology
o Enzyme and gene networks
o Modeling of metabolic pathways and responses
o Simulation of genetic and ecological systems
o Single neuron and sub-neuron information processing
o Microelectronic simulation of bio-information systemics
o Artificial bio-sensor and vision implementations
o Artificial tissue and organ implementations
o Applications of nanotechnology
o Quantum informational biology
o Quantum computation in cells and tissues
o DNA computing
Special Session: ---------------- Morphomechanics of the Embryo and Genome + Artificial Life -> Embryonics Artificial intelligence started with imitation of the adult brain, and artificial life has dealt mostly with the adult organism and its evolution, in that the span from genome to organism has been short or nonexistent. Embryonics is the attempt to grow artificial life in a way analogous to real embryonic development. This session will include speakers grappling with both ends of the problem. Papers for this special session should be submitted through the regular procedure. Organizers: R. Gordon, Lev V. Beloussov Paper Submission: ----------------- Papers will be published in a special issue of the BioSystems journal (Elsevier Science). They should be no longer than 15 pages (including figures and bibliography). Papers will either (1) be accepted for presentation at the workshop and for publication in the special issue of BioSystems, or (2) be rejected. Important Dates: ----------------
Paper submission: February 28, 2003
Notification of acceptance: May 28, 2003
Camera-ready copy: July 11, 2003
For up-to-date information, consult the IPCAT2003 web-site: http://lslwww.epfl.ch/ipcat2003 We are looking forward to seeing you in beautiful Lausanne!
Sincerely, Christof Teuscher IPCAT2003 Program Chair ---------------------------------------------------------------- Christof Teuscher Swiss Federal Institute of Technology Lausanne (EPFL) christof at teuscher.ch http://www.teuscher.ch/christof ---------------------------------------------------------------- IPCAT2003: http://lslwww.epfl.ch/ipcat2003 ---------------------------------------------------------------- From bower at uthscsa.edu Thu Aug 22 18:46:56 2002 From: bower at uthscsa.edu (James Bower) Date: Thu, 22 Aug 2002 16:46:56 -0600 Subject: Registration for GUM*02 Message-ID: REGISTRATION IS NOW OPEN FOR GUM*02 November 8, 9, 10 San Antonio, Texas Registration is now open for the first annual GENESIS USERS MEETING this fall in beautiful San Antonio, Texas, the weekend immediately following the Society for Neuroscience Annual Meeting in Orlando, Florida. Meeting information and registration are available at: http://genesis-users-meeting.org/ The meeting has been designed as a working meeting devoted to research and education using anatomically and physiologically realistic biological models. While the meeting is established as the GENESIS USERS meeting, all devotees of realistic modeling are encouraged to attend. Unique in its structure, GUM*02 will combine introductory, intermediate, and advanced tutorials with a full agenda of scientific presentations focused on the study of biological systems using realistic modeling techniques. As a working meeting with an educational objective, attendees are encouraged to present not only finished research, but also work in progress. TUTORIALS Tutorial sessions will be held on Friday, November 8th. We are pleased to announce that the introductory tutorial on realistic modeling will be conducted by Dr. Michael Hines, the creator of the modeling system NEURON, and Dr. David Beeman, manager of the GENESIS users group. Both presenters have extensive teaching experience in the Computational Neuroscience course at the Marine Biological Laboratory as well as the European Course in Computational Neuroscience. Other tutorials will be offered by leaders in the field in subjects from subcellular to network modeling, and parameter searching techniques to the use of parallel computers. Registration for tutorials is on a first-come, first-served basis, so it will be important to register in advance at: http://genesis-users-meeting.org/ SCIENTIFIC PROGRAM The contributed scientific program will take place on Saturday and Sunday, November 9th and 10th. All attendees are encouraged to present scientific and technical results, and all submissions will be accepted. Appropriate presentations include scientific results from realistic modeling efforts, presentations on technical aspects of simulator use and development, or presentations describing biological systems felt to be ripe for simulation and modeling. Unique to this meeting, we do not necessarily expect completed studies; works in progress are also welcomed. In this way students, especially, can benefit from the expertise of other attendees. Each scientific presentation will consist of a 15 minute oral overview in the morning followed by more detailed discussion of results in a poster/demonstration format in the afternoon. Presenters are encouraged to bring computers. COMMUNITY BUILDING In addition to the educational and scientific aspects of the meeting, opportunities will also be provided for more relaxed interactions between participants.
San Antonio, Texas, is famous for its River Walk (http://www.thesanantonioriverwalk.com/index.asp). The San Antonio/Austin area of South Texas is also famous for its indigenous music scene - recently added to by Ramon and the K-Halls, who, it is rumored, are already putting pressure on meeting organizers for an exclusive contract. In other words, a good time will be had by all! MEETING COSTS Every effort has been made to keep costs to a minimum to encourage attendance by students. Room rates in the conference hotel have been established at $69 a night, unlimited occupancy. Advanced registration for students including postdoctoral fellows is $99 ($159 for faculty). The rates increase at the conference itself, so you are encouraged to register in advance. Again, additional meeting information as well as advanced registration is available at: http://genesis-users-meeting.org/ We hope to see you in ol' San Antone this Fall -- James M. Bower Ph.D. Research Imaging Center University of Texas Health Science Center at San Antonio 7703 Floyd Curl Drive San Antonio, TX 78284-6240 Cajal Neuroscience Center University of Texas San Antonio Phone: 210 567 8080 Fax: 210 567 8152 From cia at bsp.brain.riken.go.jp Thu Aug 22 22:52:38 2002 From: cia at bsp.brain.riken.go.jp (Andrzej CICHOCKI) Date: Fri, 23 Aug 2002 11:52:38 +0900 Subject: New software for ICA and BSS Message-ID: <3D65A376.70104@bsp.brain.riken.go.jp> [Our sincere apologies if you receive multiple copies of this email] We would like to announce the availability of software packages called ICALAB for ICA (Independent Component Analysis), BSS (Blind Source Separation) and BSE (Blind Signal Extraction). ICALAB for Signal Processing and ICALAB for Image Processing are two independent packages for MATLAB that implement a number of efficient algorithms for ICA employing HOS (higher order statistics), BSS employing SOS (second order statistics) and LTP (linear temporal prediction), and BSE employing various SOS and HOS methods. After some data preprocessing, these packages can also be used for MICA (multidimensional independent component analysis) and NIBSS (non-independent blind source separation). The main features of both packages are an easy-to-use graphical user interface, and implementation of computationally powerful and efficient algorithms. Some implemented algorithms are robust with respect to additive white noise. The packages are available on our web pages: http://www.bsp.brain.riken.go.jp/ICALAB/ Any critical comments and suggestions are welcomed. Best regards, Andrzej Cichocki From hadley at cs.sfu.ca Sat Aug 24 11:51:40 2002 From: hadley at cs.sfu.ca (Bob Hadley) Date: Sat, 24 Aug 2002 08:51:40 -0700 (PDT) Subject: Modularity vs. Wholistic Connectivity Message-ID: <200208241551.g7OFpeK12167@css.css.sfu.ca> A pdf file for the following new paper is now available at: www.cs.sfu.ca/~hadley/modular.pdf ~~~~~~~~~~~~~~~~~~~~~~ A DEFENSE OF FUNCTIONAL MODULARITY by Robert F. Hadley School of Computing Science and Cognitive Science Program Simon Fraser University Abstract Although belief in the existence of mental modules of some form is widespread among cognitive researchers, neurally sophisticated researchers commonly resist the view that cognitive processing involves modules which are functionally independent of one another. Moreover, within the past few years, at least three noted researchers (Fodor, Kosslyn, and Uttal) have called into serious question the existence of distinct modules in broad areas of human cognition.
The present paper offers a defense of the existence of functionally independent modules, which, though spatially distributed, communicate via traditionally conceived input/output channels. This defense proceeds (i) by showing that the anti-modularity arguments of Fodor, Kosslyn, and Uttal are not compelling; (ii) by presenting theoretically-grounded reasons why any connectionist is committed, via the most basic tenets of connectionism, to accepting the existence of functionally independent modules; (iii) by presenting wholistically inclined connectionists with a novel challenge, namely, to demonstrate that a single, wholistic network could display strong levels of generalization as a side-effect of multiple, previously acquired skills. In the course of these arguments, I examine a recent generalization challenge posed by Phillips (2000) to eliminative connectionists. 32 pages, with 1.2 spacing From hamilton at may.ie Mon Aug 26 13:13:45 2002 From: hamilton at may.ie (Hamilton Institute) Date: Mon, 26 Aug 2002 18:13:45 +0100 Subject: Faculty positions in statistical machine learning Message-ID: <009a01c24d23$f0244c60$12c09d95@hamilton.local> Applications are invited from well qualified candidates for a small number of Senior Research positions at the Hamilton Institute. The successful candidates will be outstanding researchers who can demonstrate an exceptional research track record, or significant research potential at international level, in modern statistical and machine learning methods, particularly in the context of time series analysis and probabilistic reasoning, human-computer interaction, and hybrid systems. We are looking for leaders who will be a vital part of the future growth and development of the Institute. A strong commitment to research excellence, developing research partnerships, and the ability to establish a dynamic and world class research programme are essential. This is a unique opportunity to join a vibrant research group which is committed to research excellence and is currently undergoing substantial expansion. Where appropriate, the possibility exists to fund PhD/postdoctoral positions in direct support of senior posts. Salary Scale: Associate Professor Scale (EUR 64000-87000) or Full Professor Scale (EUR 86000-110000) For further details visit www.hamilton.may.ie Applications with CV and two significant papers to hamilton at may.ie. For informal enquiries please contact Prof. D.J. Leith at doug.leith at may.ie From norbert at cn.stir.ac.uk Tue Aug 27 12:10:34 2002 From: norbert at cn.stir.ac.uk (norbert@cn.stir.ac.uk) Date: Tue, 27 Aug 2002 17:10:34 +0100 Subject: Research Position `Human and Artificial Vision' Message-ID: <1030464634.3d6ba47a9eea5@www.cn.stir.ac.uk> To whom it may concern I would like to submit the following job announcement to connectionists at cs.cmu.edu. Sincerely Norbert Krueger -- Dr. Norbert Krueger University of Stirling Centre for Cognitive and Computational Neuroscience (CCCN) Stirling FK9 4LA Scotland, UK Tel: ++44 (0) 1786 466379 Fax: ++44 (0) 1786 467641 Email: norbert at cn.stir.ac.uk http://www.cn.stir.ac.uk/~norbert ------------- Job Announcement ---------------------------------- RESEARCH ASSISTANT Centre for Cognitive and Computational Neuroscience (CCCN) ECOVISION Project 17,626 - 21,503 You will be associated with the group of Prof.
Wörgötter (http://www.cn.stir.ac.uk/) and involved in computer vision studies of real-world scene analysis problems in the context of the ECOVISION project (Early Cognitive Vision, http://www.pspc.dibe.unige.it/ecovision/). The goal of these studies is to design a machine vision system of superior performance. To this end, principles of distributed cognitive reasoning, which are now better understood in the brain, shall be implemented in software and tested with artificial and real visual scenes. You shall develop this software in cooperation with other members of the group. Good software knowledge of C++ is required. It would also be helpful if you have a background in computer- and camera-equipment hardware. This project is funded by the European Commission and takes place in an international cooperation between seven partners from five different countries. It also offers good access to industrially relevant machine vision problems through the direct involvement of HELLA Hueck KG, a large German company developing driver assistance systems, which are the core application for the results of the ECOVISION project. This appointment will be on a fixed term basis for 22 months in the first instance, with a possible extension thereafter. The position is intended to form the basis for a PhD. Informal enquiries may be made to Dr. Norbert Krüger (tel: 01786 466379, email: norbert at cn.stir.ac.uk) or Professor Florentin Wörgötter (tel: 01786 466369, email: worgott at cn.stir.ac.uk). Application forms are available from the Personnel Office, University of Stirling, Stirling FK9 4LA, telephone 01786 467028, fax 01786 466155 or email personnel at stir.ac.uk quoting ref no: 1874/1207 or http://www.personnel.stir.ac.uk/recruitment/opportunities_research.html. Closing date for applications: Monday, 16 September 2002. ------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From dodd at acse.shef.ac.uk Fri Aug 30 09:33:54 2002 From: dodd at acse.shef.ac.uk (Tony Dodd) Date: Fri, 30 Aug 2002 14:33:54 +0100 (BST) Subject: Special Session on Support Vector / Kernel Machines at ICONS 2003 Message-ID: <200208301333.g7UDXsW03114@vulture.shef.ac.uk> The following call for papers may be of interest to readers of connectionists. Ignore the requirement to indicate interest by 28 August - asap will be ok. ------------- Begin Forwarded Message ------------- From r.f.harrison at sheffield.ac.uk Thu Aug 22 08:41:47 2002 From: r.f.harrison at sheffield.ac.uk (Robert F Harrison) Date: Thu, 22 Aug 2002 13:41:47 +0100 Subject: Special Session on Support Vector / Kernel Machines at ICONS 2003 Message-ID: Hi I am trying to put together a special session on Support Vector / Kernel Machines for the IFAC Conference on Intelligent Control Systems and Signal Processing (ICONS 2003) to be held in Faro, Portugal in April 2003. http://conferences.ptrede.com/icons03/main.py/index Contributions should be in any of the following areas: dynamical modelling / identification; non-linear filtering / equalisation; feedback control, but NOT in the general theory of (kernel) machine learning. If you are interested, please let me have a title/short description by 28 Aug. so I can judge the level of interest. The bad news is that the deadline for full papers is 20 September but I hope to push that back by ~3 weeks.
Sorry about the short deadline, thanks for your time Rob ---------------------------------------------------------------------------- Robert F Harrison BSc PhD CEng FIEE The University of Sheffield Department of Automatic Control & Systems Engineering Mappin Street Sheffield S1 3JD UK ------------- End Forwarded Message ------------- From giro-ci0 at wpmail.paisley.ac.uk Fri Aug 30 12:32:09 2002 From: giro-ci0 at wpmail.paisley.ac.uk (Mark Girolami) Date: Fri, 30 Aug 2002 17:32:09 +0100 Subject: Technical Report Available Message-ID: The following new technical report is available at the website below. Included on the website are Matlab demos along with all code and data required to allow easy replication of the experimental results reported. http://cis.paisley.ac.uk/giro-ci0/reddens/ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Probability Density Estimation from Optimally Condensed Data Samples. Mark Girolami & Chao He Computing & Information Systems Technical Reports, ISSN-1461-6122. Abstract The requirement to reduce the computational cost of evaluating a point probability density estimate when employing a Parzen window estimator is a well known problem. This paper presents the Reduced Set Density Estimator, which provides a kernel-based density estimator that employs a small percentage of the available data sample and is optimal in the L2 sense. Whilst only requiring O(N^2) optimisation routines to estimate the required weighting coefficients, the proposed method provides similar levels of performance accuracy and sparseness of representation as Support Vector Machine density estimation, which requires O(N^3) optimisation routines, and which has previously been shown to consistently outperform Gaussian Mixture Models. It is also demonstrated that the proposed density estimator consistently provides superior density estimates for similar levels of data reduction to that provided by the recently proposed Density Based Multiscale Data Condensation algorithm and in addition has comparable computational scaling. The additional advantage of the proposed method is that no extra free parameters are introduced, such as regularisation, bin width, or condensation ratios, making this method a very simple and straightforward approach to providing a reduced set density estimator with comparable accuracy to that of the full-sample Parzen density estimator. Professor M. A. Girolami PhD Associate Head of School and Chair of Applied Computational Intelligence School of Information and Communication Technologies University of Paisley High Street Paisley, PA1 2BE Tel: +44 (0)141 848 3317 Fax: +44 (0)141 848 3542 http://cis.paisley.ac.uk/giro-ci0
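To ground the terminology of the abstract above: a Parzen window estimate places one kernel on every sample point, so each density evaluation costs N kernel evaluations -- the cost the Reduced Set Density Estimator attacks by attaching nonzero weights to only a small fraction of the sample. A minimal sketch of the plain (full-sample) estimator in Python, with a Gaussian kernel and an arbitrary illustrative bandwidth h -- not the reduced-set method itself:

import numpy as np

def parzen_density(x, sample, h):
    # p(x) = (1/(N*h)) * sum_i K((x - x_i)/h), K = standard Gaussian kernel;
    # x and sample are 1-D arrays of query points and data points.
    z = (x[:, None] - sample[None, :]) / h
    kernels = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=1) / h

rng = np.random.default_rng(0)
sample = rng.normal(size=200)         # toy data drawn from N(0, 1)
grid = np.linspace(-3.0, 3.0, 7)
print(parzen_density(grid, sample, h=0.3))

The reduced set estimator described in the report keeps this functional form but replaces the uniform 1/N weights with a sparse, optimised weight vector, which is where the reported data reduction comes from.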