From oreilly at grey.colorado.edu Sat May 3 01:58:38 2003
From: oreilly at grey.colorado.edu (Randall C. O'Reilly)
Date: Fri, 2 May 2003 23:58:38 -0600
Subject: PDP++ 3.0 Released
Message-ID: <200305030558.h435wcc08888@grey.colorado.edu>

Version 3.0 of the PDP++ neural network simulation software is now available for downloading. See either of the websites below for details:

http://www.cnbc.cmu.edu/Resources/PDP++/PDP++.html
http://psych.colorado.edu/~oreilly/PDP++/PDP++.html

New features for this release are listed below.

- Randy

+----------------------------------------------------------------+
| Dr. Randall C. O'Reilly                                         |
| Associate Professor            Phone: (303) 492-0054            |
| Department of Psychology       Fax: (303) 492-2967              |
| Univ. of Colorado Boulder                                       |
| 345 UCB                        email: oreilly at psych.colorado.edu |
| Boulder, CO 80309-0345         www: psych.colorado.edu/~oreilly |
+----------------------------------------------------------------+

This file contains a summary of important changes from the previous release of PDP++. See the ChangeLog file for a comprehensive, though less comprehensible, list.

===========================================
Release 3.0
===========================================

IMPORTANT NOTE FOR EXISTING PROJECTS:

- If you have a stopping criterion set on a MonitorStat (e.g., stopping when MAX activation exceeds some threshold in the output layer), then this stopping criterion will be lost when you load the project. If you look in the spew of messages generated during the load process, you'll see a message like this:

*** Member: mon not found in type: MonitorStat (this is likely just harmless version skew)
<>{name="max_act_Output": disp_opts=" ": is_string=false: vec_n=0: val=0: str_val="": stopcrit={flag=true: rel=GREATERTHAN: val=.5: cnt=1: n_met=0: }: };<<- err_skp>>

Just get the stopcrit value (val=.5 in this example) and relationship (GREATERTHAN) from this message and enter them into the first element in the mon_vals list in the appropriate MonitorStat in the opened project to restore function (or replace this stat entirely with a new ActThreshRTStat).

- GraphLog FIXED ranges will be lost upon loading: there are now fix_min and fix_max flags inline with the ranges that will need to be clicked as appropriate (the range values are preserved, just the flag to fix them is missing). Look for FIXED messages in the spew as per above.

- SelectEdit (.edits) labels will be lost upon loading, and can be recovered in the same manner as above.

- NetView Layer fonts will be smaller than before -- use View:Actions/Set Layer Font to set a larger font. The new default for new views is 18 point.

- For your additional C++ code: the El() function on Lists/Groups/Arrays has been renamed to SafeEl() to better reflect its function (index range checking), and the [] operator now calls FastEl() (no index range checking) instead of SafeEl(), to better reflect typical usage (see the short C++ sketch below).

- Error functions in BP: if you've written your own, see the comments in the ChangeLog (search for date 2002-09-13).

NEW FEATURES:

- Wizard that automates the construction of simulation objects: creates commonly-used configurations of networks, environments, processes, stats and logs.

- Distributed-memory parallel processing via MPI (instead of pthread): DMEM can be much more efficient than pthread, and is much more flexible in that it works in both shared and distributed memory architectures. Support for distributing computation across both connections and events is provided by setting the number of processors in the Network and the EpochProcess.
In both cases, each processor runs everything redundantly except for a subset of events or connections -- this makes for relatively little extra code required to support dmem -- connections/events are divided across processes, and results are synchronized to keep everything consistent (a minimal MPI sketch of the event-wise scheme appears below).

- Additional analysis functions: PCA (principal components analysis) and MDS (multidimensional scaling), and Cluster Plot efficiency vastly improved to handle large data sets. Also added processes for automatically computing these functions over hidden layers, etc. Other new analysis routines include automatic generation of statistics on environment frequencies -- useful for validating environments.

- Much improved GraphLog, focusing on discriminating overlapping line traces (repeated passes through the same set of X axis values -- e.g., multiple settles of a network, multiple training runs, etc.). Traces can be color-coded (line_type = TRACE_COLORS), incremented (producing a 3D-like effect) via trace_incr, and stacked (vertical = STACK_TRACES). A spike-raster-like plot, or even a continuous color-coded version, can be achieved by not displaying any vertical axis values at all (vertical = NO_VERTICAL) and using either VALUE_COLORS (continuous color-coded) or THRESH_POINTS (thresholded spike raster). Also, columns of data can be plotted row-wise instead (e.g., for monitored activations, or COPY aggregators), and the view_bufsz value is now actually respected, so you can scroll through large amounts of data instead of seeing it all.

- Log displays are now much more robust when you add or remove data to be logged -- you should now spend much less time reconfiguring the log views.

- Color-coded edit dialogs and view window backdrops based on an enhanced (and customizable) Project view color scheme.

- Ability to save view displays to a JPEG or TIFF file (in addition to existing Postscript), including automatic saving at each update for constructing animations.

- Incremental reading of events from a file during processing (FromFileEnv).

- Automatic SimLog creation whenever the project is saved -- helps you keep track of what each project is.

- Added two new algorithms: Long Short Term Memory (Hochreiter, Schmidhuber et al.) implemented as lstm++ (works really well on sequential learning problems in general), and RNS++, written by Josh Brown: http://iac.wustl.edu/~jwbrown/rns++/index.html

- Support for g++ 3.x compilers (default for CYGWIN and DARWIN; use config/Makefile.LINUX.3 for LINUX instead of LINUX.2, and see README.linux for important details on compiling under LINUX, including an sstream include file fix for gcc 2.9x (e.g., RedHat 7.x)). zlib is now used instead of forking to call gzip for loading/saving compressed files: this should be much faster and more reliable under CYGWIN (MS Windows). Check your Makefile.in for $(X11_LIB) instead of -lX11 if you get link errors involving zlib or jpeg lib calls. Also, the SIM_NONSHARED makefile variables have been eliminated -- these were included by default in mk_new_pdp Makefile.in files, and need to just be cut out of the makefile.

- Numerous small processes and statistics to facilitate automation of common tasks. Also, a better interface for interactive environments where subsequent events depend on current network outputs (see demo/leabra/nav.proj.gz for an example).
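To illustrate the accessor rename noted above for existing C++ code, here is a minimal sketch; the FloatArray class is an illustrative stand-in, not the actual PDP++ declarations:

  // Illustrative container mirroring the renamed accessors:
  // SafeEl() range-checks (the old El()); FastEl() and operator[] do not.
  #include <cstdio>
  #include <vector>

  class FloatArray {
    std::vector<float> el;
  public:
    explicit FloatArray(int n) : el(n, 0.0f) {}
    // SafeEl: range-checked access (the old El()); returns 0 for a bad index.
    float SafeEl(int idx) const {
      return (idx >= 0 && idx < (int)el.size()) ? el[idx] : 0.0f;
    }
    // FastEl: unchecked access; the caller guarantees idx is valid.
    float& FastEl(int idx) { return el[idx]; }
    // operator[] now forwards to FastEl() rather than SafeEl().
    float& operator[](int idx) { return FastEl(idx); }
  };

  int main() {
    FloatArray a(3);
    a[1] = 0.2f;                                       // unchecked write
    std::printf("%g %g\n", a.SafeEl(1), a.SafeEl(5));  // 0.2, then 0 (checked)
    return 0;
  }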
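And a minimal MPI sketch of the event-wise dmem scheme described above; the event loop and weight-change buffer are hypothetical stand-ins for the corresponding PDP++ structures:

  // Each process trains on every n_procs-th event, then weight changes
  // are summed across processes so all copies of the network stay in sync.
  #include <mpi.h>
  #include <vector>

  int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, n_procs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &n_procs);

    const int n_events = 100, n_wts = 1000;
    std::vector<double> dwt(n_wts, 0.0);  // accumulated weight changes

    for (int ev = rank; ev < n_events; ev += n_procs) {
      // ... present event ev to this process's copy of the network,
      //     accumulating its weight changes into dwt ...
    }
    // Synchronize: sum every process's dwt so each applies the same update.
    MPI_Allreduce(MPI_IN_PLACE, &dwt[0], n_wts, MPI_DOUBLE,
                  MPI_SUM, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
  }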
- Mersenne Twister random number generator now used for all random number calls (by Makoto Matsumoto and Takuji Nishimura, http://www.math.keio.ac.jp/~matumoto/emt.html)

- Easier access to view configuration variables through view menu functions; support for the window manager close button; SelectEdit can include Methods (Functions) in addition to member data; Net log view usability considerably improved; added buttons for graphing activation functions, etc. on relevant specs.

From S.I.Reynolds at cs.bham.ac.uk Sat May 3 11:27:08 2003
From: S.I.Reynolds at cs.bham.ac.uk (Stuart I Reynolds)
Date: Sat, 3 May 2003 16:27:08 +0100 (BST)
Subject: PhD thesis available: Reinforcement Learning with Exploration
In-Reply-To: <000001c309a0$c4605210$80f4a8c0@cwiware>
Message-ID:

Dear Connectionists,

My PhD thesis,

  Reinforcement Learning with Exploration
  School of Computer Science, The University of Birmingham

is available for download at:

http://www.cs.bham.ac.uk/~sir/pub/thesis.abstract.html

Regards

Stuart Reynolds

Abstract: Reinforcement Learning (RL) techniques may be used to find optimal controllers for multistep decision problems where the task is to maximise some reward signal. Successful applications include backgammon, network routing and scheduling problems. In many situations it is useful or necessary to have methods that learn about one behaviour while actually following another (i.e. `off-policy' methods). Most commonly, the learner may be required to follow an exploring behaviour, while its goal is to learn about the optimal behaviour. Existing methods for learning in this way (namely, Q-learning and Watkins' Q(lambda)) are notoriously inefficient in their use of real experience. More efficient methods exist but are either unsound (in that they are provably non-convergent to optimal solutions in standard formalisms), or are not easy to apply online. Online learning is an important factor in effective exploration. Being able to quickly assign credit to the actions that lead to rewards means that more informed choices between actions can be made sooner. A new algorithm is introduced to overcome these problems. It works online, without `eligibility traces', and has a naturally efficient implementation. Experiments and analysis characterise when it is likely to outperform existing related methods. New insights into the use of optimism for encouraging exploration are also discovered. It is found that standard practices can have a strongly negative effect on the performance of a large class of RL methods for control optimisation. Also examined are large and non-discrete state-space problems where `function approximation' is needed, but where many RL methods are known to be unstable. In particular, these are control optimisation methods, and cases where experience is gathered from `off-policy' distributions (e.g. while exploring). By a new choice of error measure to minimise, the well-studied linear gradient descent methods are shown to be `stable' when used with any `discounted return' estimating RL method. The notion of stability is weak (very large, but finite, error bounds are shown), but the result is significant insofar as it covers new cases such as off-policy and multi-step methods for control optimisation. New ways of viewing the goal of function approximation in RL are also examined. Rather than a process of error minimisation between the learned and observed reward signal, the objective is viewed to be that of finding representations that make it possible to identify the best action for given states.
A new `decision boundary partitioning' algorithm is presented with this goal in mind. The method recursively refines the value-function representation, increasing it in areas where it is expected that this will result in better decision policies.

From brain_mind at epfl.ch Mon May 5 09:40:28 2003
From: brain_mind at epfl.ch (Brain & Mind)
Date: Mon, 5 May 2003 15:40:28 +0200
Subject: Faculty positions in Lausanne - Switzerland
Message-ID: <008b01c3130b$e3992b60$a8bfb280@sv.intranet.epfl.ch>

Two tenure-track positions are available to join the faculty of the Brain Mind Institute at the EPFL/ETH Lausanne. The positions are well funded, with startup funds, an annual budget, ample lab space and multiple core facilities. They offer young scientists (ideally less than 36 years old) an opportunity to develop a vision.

Laboratory of Perceptual Theory and Simulation
The emergence of a coherent perception involves the integration of multiple modalities. A tenure-track position is open for a computational neuroscientist interested in theory and simulations at the systems level.

Laboratory of NeuRobotics
A tenure-track position is open for a computational neuroscientist interested in applying neuronal principles to robotics.

More information at the Brain Mind Institute Faculty positions page.

From sylee at ee.kaist.ac.kr Tue May 6 01:02:30 2003
From: sylee at ee.kaist.ac.kr (Soo-Young Lee)
Date: Tue, 6 May 2003 14:02:30 +0900
Subject: CFPs and Announcement of a new journal devoted to Letters and Reviews
Message-ID: <002401c3138c$b2a20e60$329ef88f@kaistsylee2>

(Sorry if you receive multiple copies.)

CFPs and Announcement of a new rapid-publication journal with double-blind reviews

"Neural Information Processing - Letters and Reviews"

The first issue is scheduled for September 2003.

1. Motivation

Although there exist many journals on neural networks, publication usually requires one to two years from the date of submission, and the published material may not necessarily be the latest at the time of publication. In many other scientific disciplines, by contrast, the publication time is usually 6 months. Also, in some scientific and engineering disciplines there exist "Letters" journals for rapid (timely) communications, such as Electronics Letters and Optics Letters. These Letters journals enjoy high citation impact factors and excellent reputations. Rapid publication is even more critical for multidisciplinary research, where researchers come from many different academic backgrounds and may not know what the others are doing. Many researchers also believe that double-blind review procedures should be implemented. Another motivation comes from the need for good review papers on new and important topics. Review papers are extremely helpful to young researchers who would like to get into a field, especially in multidisciplinary research, but not many journals accept review papers. It is also very important to connect system-level neuroscience and artificial neural networks. Although both communities could benefit from each other, there exists a big communication gap between them. Therefore, it is very important to have at least one publication devoted to timely communications and review papers, with double-blind review procedures, for both the neuroscience and neural engineering communities.
2. Goals
(a) Timely Publication
- 3 to 4 months to publication for Letters
- up to 6 months to publication for Reviews
(b) Connecting Neuroscience and Engineering
- serving the system-level neuroscience and artificial neural network communities
(c) Low Cost
- free for online only
- US$30 per year for hardcopy
(d) High Quality
- unbiased double-blind reviews
- short papers (up to 6 single-column, single-space published pages) for Letters (Letters may include preliminary results of excellent ideas, and a full paper may be published later in another journal.)
- in-depth reviews of new and important topics for Reviews

3. Topics
- Cognitive neuroscience
- Computational neuroscience
- Neuroinformatics database and analysis tools
- Brain signal measurements and functional brain mapping
- Neural modeling and simulators
- Neural network architecture and learning algorithms
- Data representations in neural systems
- Information theory for neural systems
- Software implementations of neural networks
- Neuromorphic hardware implementations
- Biologically-motivated speech signal processing
- Biologically-motivated image processing
- Human-like inference systems and intelligent agents
- Human-like behavior and intelligent systems
- Artificial life
- Other applications of neural information processing mechanisms

4. Publications
- monthly online publications
- yearly paper publications

5. Copyright
Authors retain all rights to their papers, and may publish extended versions of their Letters in other journals.

6. Subscription Fee
- On-line version: FREE
- Hardcopy version: Personal: US$30/year (surface mail); Institution: US$50/year (surface mail)

7. Paper Reviews and Acceptance Decision
- electronic review process based on Adobe PDF, Postscript, or MS Word files
- rapid and unbiased (double-blind) reviews
- binary ("Accept" or "Reject") decisions without revision requirements for Letters (mandatory English editing services may be recommended)
- minor revisions may be requested for Review papers

8. Editors and Publisher
Publisher: KAIST Press
Home Page: http://www.nip-lr.info and http://neuron.kaist.ac.kr/nip-lr/ (from May 15th, 2003)
Editor-in-Chief: Soo-Young Lee
Director, Brain Science Research Center
Korea Advanced Institute of Science and Technology
373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Korea (South)
Tel: +82-42-869-3431 Fax: +82-42-869-8490
E-mail: nip-lr at neuron.kaist.ac.kr

9. Paper Submission
All papers should be submitted to the Editor-in-Chief by e-mail at nip-lr at neuron.kaist.ac.kr. Detailed guidelines and paper formats will be available at the journal homepages (http://www.nip-lr.info and http://neuron.kaist.ac.kr/nip-lr/), which will open on May 15th, 2003.

10. Others
Although the journal has received support from many APNNA (Asia-Pacific Neural Network Assembly) Governing Board members, the official relationship between the journal and APNNA will be discussed later. The journal is also expected to satisfy the requirements for inclusion in the SCI/SCIE in the near future.
----------------------------------------------------------------------
Neural Information Processing - Letters and Reviews

Editor-in-Chief
Soo-Young Lee
Director, Brain Science Research Center
Korea Advanced Institute of Science and Technology
373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Korea (South)
Tel: +82-42-869-3431 / Fax: +82-42-869-8490
E-mail: nip-lr at neuron.kaist.ac.kr

Advisory Board
Shun-ichi Amari, RIKEN Brain Science Institute, Japan
Rodney Douglas, University/ETH Zurich, Switzerland
Kunihiko Fukushima, Tokyo University of Technology, Japan
Terry Sejnowski, Salk Institute, USA
Harold Szu, Office of Naval Research, USA

Editorial Board
Alan Barros, Universidade Federal do Maranhao, Brazil
James A. Bednar, University of Texas at Austin, USA
Yoonsuck Choe, Texas A&M University, USA
Seungjin Choi, Pohang University of Science and Technology, Korea
Andrzej Cichocki, RIKEN Brain Science Institute, Japan
Wlodzislaw Duch, Nicolaus Copernicus University, Poland
Tom Gedeon, Murdoch University, Australia
Saman Halgamuge, University of Melbourne, Australia
Shigeru Ikeda, Institute of Statistical Mathematics, Japan
Masumi Ishikawa, Kyushu Institute of Technology, Japan
Marwan Jabri, Oregon Health and Science University, USA
Janusz Kacprzyk, Polish Academy of Sciences, Poland
Nikola Kasabov, Auckland University of Technology, New Zealand
Okyay Kaynak, Bogazici University, Turkey
Dae-Shik Kim, University of Minnesota, USA
Seunghwan Kim, Pohang University of Science and Technology, Korea
Irwin King, Chinese University of Hong Kong, Hong Kong
Elmar Lang, University of Regensburg, Germany
Chong Ho Lee, Inha University, Korea
Daniel Lee, University of Pennsylvania, USA
Minho Lee, Kyungpook National University, Korea
Seong-Whan Lee, Korea University, Korea
Te-Won Lee, University of California, San Diego, USA
Chin-Teng Lin, National Chiao-Tung University, Taiwan
Sigeru Omatu, Osaka Prefecture University, Japan
Nikhil R. Pal, Indian Statistical Institute, India
Carlos G. Puntonet, University of Granada, Spain
Jagath C. Rajapakse, Nanyang Technological University, Singapore
Asim Roy, Arizona State University, USA
Christine Servière, Institut National Polytechnique de Grenoble, France
Jude Shavlik, University of Wisconsin, USA
Alessandro Sperduti, Università degli Studi di Padova, Italy
Ron Sun, University of Missouri-Columbia, USA
Shigeru Tanaka, RIKEN Brain Science Institute, Japan
Lipo Wang, Nanyang Technological University, Singapore
Takeshi Yamakawa, Kyushu Institute of Technology, Japan
Mingsheng Zhao, Tsinghua University, China
Yixin Zhong, University of Posts & Telecommunications, China
Michael Zibulevsky, Technion, Israel
-----------------------------------------------------------------------

Paper Format for Neural Information Processing - Letters and Reviews

The text area of the paper should be 16.0 cm x 23.7 cm. For the recommended A4 paper size, the top and bottom margins are 3.0 cm and the left and right margins are 2.5 cm. Letters should not exceed 6 pages, while no page limit is set for Reviews. The abstract should have a 1.5 cm indent on both sides, and should not exceed 200 words. The paper should be written in a single column with single spacing, in Times New Roman font. Bold characters should be used for the paper title and section headings. The recommended font size is 10 points, while 14 points and 12 points are used for the paper title and section headings, respectively.
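The stated geometry is self-consistent: A4 paper is 21.0 cm x 29.7 cm, so 2.5 cm left/right margins and 3.0 cm top/bottom margins leave exactly the required 16.0 cm x 23.7 cm text area. For authors who happen to typeset in LaTeX (the journal itself asks for PDF, Postscript, or MS Word files), a minimal preamble sketch follows; the package choices are illustrative assumptions, not journal-supplied style files:

  % Approximation of the stated NIP-LR layout (illustrative only).
  \documentclass[10pt,a4paper]{article}
  % 2.5 cm side and 3.0 cm top/bottom margins give 16.0 cm x 23.7 cm of text.
  \usepackage[top=3.0cm,bottom=3.0cm,left=2.5cm,right=2.5cm]{geometry}
  \usepackage{times}            % Times-like body font at 10 pt
  \usepackage{titlesec}
  % Centered, bold, 12 pt section headings with Arabic numbers,
  % and one line of space before and after each heading.
  \titleformat{\section}
    {\filcenter\bfseries\fontsize{12}{14}\selectfont}{\thesection.}{0.5em}{}
  \titlespacing*{\section}{0pt}{\baselineskip}{\baselineskip}
  \setlength{\parindent}{1cm}   % 1 cm first-line paragraph indent
  \begin{document}
  \begin{center}
    {\bfseries\fontsize{14}{17}\selectfont Paper Title in 14 pt Bold}
  \end{center}
  % ... abstract (1.5 cm side indents, max 200 words), keywords, main text ...
  \end{document}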
The paper should be organized in the order of paper title, authors' information, abstract, keywords, main text, acknowledgment, references, and authors' bio-sketches. The authors' information consists of the author name(s), department and organization, and physical and e-mail addresses. Two versions of the paper should be submitted, one with and one without the authors' information and bio-sketches; the latter will be used during the double-blind review process. The paper title, author information, and section headings should be center-justified, while all other text should be justified on both sides. The first line of each paragraph should have an indent of 1 cm, and one line of space is used before and after section headings. Each section heading should have an Arabic number. All figures and tables should be placed at their proper locations. Figure and table captions should start with "Figure 1." or "Table 1.", and should be center-justified. Table captions should be located just above the table itself, while figure captions should be located just below the figure. All references should be cited with numbers in brackets, i.e., [1], and listed in the order of citation. The acknowledgment should be located between the main text and the references. The use of footnotes is not recommended.

References
[1] A.B. Crown, "Neural-based Intelligent Systems," Neural Information Processing - Letters and Reviews, Vol. 1, No. 1, pp. 1-5, 2003.
[2] D. Evans, Neural Signal Processing, KAIST Press, 2003, pp. 111-124.

James B. Author graduated from the University of Free Academics in 1999, and is currently a professor of neural systems. His research interests include computational models of the human auditory pathway and neural information coding. (Home page: http://dns.ufa.ac.kr/~jauthor)

From sham at gatsby.ucl.ac.uk Tue May 6 22:42:22 2003
From: sham at gatsby.ucl.ac.uk (Sham Kakade)
Date: Wed, 7 May 2003 03:42:22 +0100 (BST)
Subject: PhD thesis available on reinforcement learning
Message-ID:

Hi all,

My thesis, "On the Sample Complexity of Reinforcement Learning", is now available at:

http://www.gatsby.ucl.ac.uk/~sham/publications.html

Below is the abstract and table of contents.

cheers
-Sham

=================================================================

Abstract: This thesis is a detailed investigation into the following question: how much data must an agent collect in order to perform "reinforcement learning" successfully? This question is analogous to the classical issue of the sample complexity in supervised learning, but is harder because of the increased realism of the reinforcement learning setting. This thesis summarizes recent sample complexity results in the reinforcement learning literature and builds on these results to provide novel algorithms with strong performance guarantees. We focus on a variety of reasonable performance criteria and sampling models by which agents may access the environment. For instance, in a policy search setting, we consider the problem of how much simulated experience is required to reliably choose a "good" policy among a restricted class of policies \Pi (as in Kearns, Mansour, and Ng [2000]). In a more online setting, we consider the case in which an agent is placed in an environment and must follow one unbroken chain of experience with no access to "offline" simulation (as in Kearns and Singh [1998]). We build on the sample-based algorithms suggested by Kearns, Mansour, and Ng [2000].
Their sample complexity bounds have no dependence on the size of the state space, an exponential dependence on the planning horizon time, and a linear dependence on the complexity of \Pi. We suggest novel algorithms with more restricted guarantees whose sample complexities are again independent of the size of the state space and depend linearly on the complexity of the policy class \Pi, but have only a polynomial dependence on the horizon time. We pay particular attention to the tradeoffs made by such algorithms.

=================================================================

Table of Contents:

Chapter 1 Introduction
1.1 Studying the Sample Complexity
1.2 Why do we care about the sample complexity?
1.3 Overview
1.4 Agnostic Reinforcement Learning

Chapter 2 Fundamentals of Markov Decision Processes
2.1 MDP Formulation
2.2 Optimality Criteria
2.3 Exact Methods
2.4 Sampling Models and Sample Complexity
2.5 Near-Optimal, Sample Based Planning

Chapter 3 Greedy Value Function Methods
3.1 Approximating the Optimal Value Function
3.2 Discounted Approximate Iterative Methods
3.3 Approximate Linear Programming

Chapter 4 Policy Gradient Methods
4.1 Introduction
4.2 Sample Complexity of Estimation
4.3 The Variance Trap

Chapter 5 The Mismeasure of Reinforcement Learning
5.1 Advantages and the Bellman Error
5.2 Performance Differences
5.3 Non-stationary Approximate Policy Iteration
5.4 Remarks

Chapter 6 \mu-Learnability
6.1 The Trajectory Tree Method
6.2 Using a Measure \mu
6.3 \mu-PolicySearch
6.4 Remarks

Chapter 7 Conservative Policy Iteration
7.1 Preliminaries
7.2 A Conservative Update Rule
7.3 Conservative Policy Iteration
7.4 Remarks

Chapter 8 On the Sample Complexity of Exploration
8.1 Preliminaries
8.2 Optimality Criteria
8.3 Main Theorems
8.4 The Modified R_{max} Algorithm
8.5 The Analysis
8.6 Lower Bounds

Chapter 9 Model Building and Exploration
9.1 The Parallel Sampler
9.2 Revisiting Exploration

Chapter 10 Discussion
10.1 N, A, and T
10.2 From Supervised to Reinforcement Learning
10.3 POMDPs
10.4 The Complexity of Reinforcement Learning

From bapics at uohyd.ernet.in Thu May 8 06:56:23 2003
From: bapics at uohyd.ernet.in (Dr. Raju Bapi)
Date: Thu, 8 May 2003 18:56:23 +0800 (SGT)
Subject: Call for papers - Neural and Cognitive Modeling (IICAI03)
Message-ID:

*****Apologies for cross posting******
*****Please forward to interested people*******

Call for Papers for the Technical Session on Neural and Cognitive Modeling

First Indian International Conference on Artificial Intelligence
Hyderabad, INDIA
December 18-20, 2003

------------------------------------------------------------------

About the Session

This session focuses on issues in computational / theoretical neuroscience and cognitive modeling. The ideas presented could be proposals of conceptual frameworks or concrete models of brain function and dysfunction. Models could be pitched at the sub-neuronal, neuronal, population, or brain-systems level. The processes could be related to language, speech, planning, decision making, reasoning, learning, memory, cognition, emotion, attention, awareness, vision, audition, other sensory and motor domains, pattern and object recognition, neuromodulation, etc. The data for constraining models could originate from various experimental methods such as EEG, ERP, PET, fMRI, MEG, electrophysiology, psychophysics, and behavioral, ethological, developmental and clinical studies.
The modeling methods may be mathematical, statistical, neural networks, symbolic artificial intelligence, or computer simulations. The above description is indicative of potential topics but not restrictive. For any clarifications, please contact the Session Chair.

Instructions for Authors

Prospective authors are invited to submit their papers electronically to the Session Chair by the due date. Authors should use the style files or MS-Word templates provided by Springer Lecture Notes to format their papers. The length of a submitted paper should not exceed 14 pages. Short papers and work currently in progress are also welcome. The papers must be in PDF or PS format. The first page of the draft paper should contain: the title of the paper; the name, affiliation, postal address, and e-mail address of each author; and the name of the author who will be presenting the paper (if accepted). The first page should also contain a maximum of 5 keywords most appropriate to the content.

Important Dates
Last date for submission of papers: Tuesday, July 1, 2003
Notification of acceptance: Friday, August 1, 2003
Camera-ready copies of accepted papers due: Friday, August 29, 2003

Please visit the conference website for registration details: http://www.iiconference.org

---------------------------------------------------------------

Submissions for the Neural and Cognitive Modeling session should be sent electronically (PS or PDF or WORD attachments) to the Session Chair:

Dr. Raju S. Bapi
Reader, Dept of Comp and Info Sci.
University of Hyderabad, Gachibowli
Hyderabad, India 500 046
Phone: +91 40-23010500 - 512 Ext: 4017 / 4025
Fax: +91 40-23010145
email: bapics at uohyd.ernet.in (and) ksbapi at yahoo.com
(Please send email to both addresses with the subject line "IICAI-03")

---------------------------------------------------------------

From steve at hss.caltech.edu Fri May 9 19:19:22 2003
From: steve at hss.caltech.edu (Steven Quartz)
Date: Fri, 9 May 2003 16:19:22 -0700
Subject: Caltech Postdoctoral Positions
Message-ID: <016201c31681$6cbe29e0$3c17d783@caltech.edu>

Postdoctoral Positions at the California Institute of Technology

Applications are invited for multiple postdoctoral fellowships to study the neural basis of reward, valuation, and decision-making utilizing functional brain imaging at the California Institute of Technology. These fellowships are funded by the David and Lucile Packard Foundation and the Moore Foundation, and are part of a new interdisciplinary project at Caltech that brings together experimental economists, cognitive neuroscientists, behavioral biologists, and others to investigate the neural basis of social cognition and decision-making, with particular emphasis on economic and moral behavior. Research will take place in a new imaging center at Caltech that houses a Siemens 3T whole body scanner, a vertical monkey scanner, and a small animal high field scanner. There is also the opportunity for interaction with computational neuroscience research on decision-making. Applicants with a Ph.D. in neuroscience, cognitive science, computer science/engineering, or social science are encouraged to apply. A background in functional brain imaging is desirable, but other backgrounds will also be considered. Review of applications will start immediately and continue until the positions are filled. Fellows will be supervised by Drs. Steven Quartz, John Allman, David Grether, and Colin Camerer.
Interested individuals should send a statement of research interests and background, a CV, and names for 3 letters of recommendation, either via email to steve at hss.caltech.edu or via snail mail to: Dr. Steven Quartz, MC 228-77, California Institute of Technology, Pasadena, CA 91125.

From laura.bonzano at dibe.unige.it Fri May 9 05:47:44 2003
From: laura.bonzano at dibe.unige.it (Laura Bonzano)
Date: Fri, 9 May 2003 11:47:44 +0200
Subject: Summer school on Neuroengineering
Message-ID: <00fe01c31610$0aaa8120$3c59fb82@ranma>

Dear list members,

I'm glad to announce the first edition of the "European Summer School on Neuroengineering", dedicated to the memory of Prof. Massimo Grattarola and organized by:
- Prof. Sergio Martinoia, Neuroengineering and Bio-nanoTechnologies Group, Department of Biophysical and Electronic Engineering (DIBE), University of Genova, Italy
- Prof. Pietro Morasso, Department of Communications, Computer and System Sciences (DIST), University of Genova, Italy
- Ing. Fabrizio Davide, Telecom Italia Learning Services (TILS)

It will take place on June 16-20, 2003 in Venice. A presentation of this event is attached. For more information and application forms, please visit:

http://www.tils.com/neurobit/html/n_1.asp
http://www.bio.dibe.unige.it/news_and_events/news_and_events_frames.htm

Please forward this to anyone potentially interested.

Best Regards,
Laura Bonzano

Apologies if you receive this more than once.

----------------------------------------------------------------
Ing. Laura Bonzano, Ph.D. Student
Neuroengineering and Bio-nanoTechnologies - NBT
Department of Biophysical and Electronic Engineering - DIBE
Via Opera Pia 11A, 16145, GENOA, ITALY
Phone: +39-010-3532765
Fax: +39-010-3532133
URL: http://www.bio.dibe.unige.it/
E-mail: laura.bonzano at dibe.unige.it

*************************************************************************

First European School on Neuroengineering "Massimo Grattarola"
http://www.tils.com/neurobit/html/n_1.asp
Venice, 16-20 June 2003

Telecom Italia Learning Services (TILS) and the University of Genoa (DIBE, DIST, Bioengineering course) are currently organizing the first edition of a European Summer School on Neuroengineering. The school is dedicated to the memory of Massimo Grattarola. The first edition, which will last for five days, will be held from June 16 to June 20, 2003 at Telecom Italia's Future Center in Venice.

The School will cover the following main themes:

1. Neural code and plasticity
o Development and implementation of methods to identify, represent and analyze hierarchical and self-organizing systems
o Cortical computational paradigms for perception and action

2. Brain-like adaptive information processing systems
o Models of learning, representation and adaptability based on knowledge of the nervous system.
o Exploration of the capabilities of natural neurobiological systems as flexible computational devices.
o Use of information from nervous systems to engineer new control techniques and new artificial systems.
o Development of highly innovative Artificial Neural Networks capable of reproducing the functioning of vertebrate nervous systems.
3. Bio-artificial systems
o Development of novel brain-computer interfaces
o Development of new techniques for neuro-rehabilitation and neuro-prostheses
o Hybrid silicon/biological systems

In response to current reforms in training at the Italian and European levels, the Summer School on Neuroengineering is certified to grant credits. These include:
- ECM (Educazione Continua in Medicina) credits, recognized by the Italian Ministry of Health
- ECTS (European Credit Transfer System) credits, recognized by all European universities

*************************************************************************

From wsom at brain.kyutech.ac.jp Mon May 12 05:09:59 2003
From: wsom at brain.kyutech.ac.jp (WSOM'03 Secretariat)
Date: Mon, 12 May 2003 18:09:59 +0900
Subject: WSOM'03 Paper Submission Deadline Extended
Message-ID: <00bc01c31866$438227c0$0c4111ac@yamac.brain.kyutech.ac.jp>

Apologies if you have received multiple copies.

Paper Submission Deadline extended to 10 June, 2003.

=============================================
Workshop on Self-Organizing Maps (WSOM'03)
11-14 September 2003
Hibikino, Kitakyushu, Fukuoka, Japan
http://www.brain.kyutech.ac.jp/~wsom/
=============================================

CALL FOR PAPERS

Workshop Objectives:
==================
The Self-Organizing Map (SOM), with its related extensions, is the most popular artificial neural algorithm for use in unsupervised learning and data visualization. Over 5,000 publications have been reported in the open literature, and many commercial projects employ the SOM as the tool for solving hard real-world problems. WSOM'03 is the discussion forum where your ideas and techniques are polished, and it aims to unveil the results of hot research and popularize the use of the SOM for the technical public. Following the highly successful meetings held in 1997 (WSOM'97), 1999 (WSOM'99), and 2001 (WSOM'01), this further workshop in the established series will bring together researchers and users of the SOM and related techniques.

Topics:
======
Technical areas include, but are not limited to:
* Self-organization
* Unsupervised learning
* Theory and extensions
* Optimization
* Hardware and architecture
* Signal processing, image processing and vision
* Medical engineering
* Time-series analysis
* Text and document analysis
* Financial analysis
* Data visualization and mining
* Bioinformatics
* Robotics

Important Dates:
==============
Paper Submission: 10 June, 2003
Notification of Acceptance: 30 June, 2003
Final Paper Submission: 25 July, 2003

Organizing Committee:
===================
Honorary Conference Chair: Teuvo Kohonen, Finland
Organizing Chair: Takeshi Yamakawa, Japan
Organizing Committee Members: Erkki Oja, Finland; Heizo Tokutaka, Japan
Program Chair: Masumi Ishikawa, Japan
Program Committee Members: Marie Cottrell, France; Guido Deboeck, USA; Shinto Eguchi, Japan; Kikuo Fujimura, Japan; Colin Fyfe, UK; Masafumi Hagiwara, Japan; Jaakko Hollmen, Finland; Keiichi Horio, Japan;
Marc M. Van Hulle, Belgium; Toshimichi Ikemura, Japan; Samuel Kaski, Finland; Gerhard Kranner, Austria; Thomas Martinetz, Germany; Kiyotoshi Matsuoka, Japan; Dieter Merkl, Austria; Risto Miikkulainen, USA; Yoshikazu Miyanaga, Japan; Tsutomu Miyoshi, Japan; Takashi Morie, Japan; Junichi Murata, Japan; Ikuko Nishikawa, Japan; Klaus Obermayer, Germany; Aiko Shibata, Japan; Wataru Shiraki, Japan; Olli Simula, Finland; Eiji Uchino, Japan; Alfred Ultsch, Germany; Michel Verleysen, Belgium; Thomas Villmann, Germany; Lei Xu, China; Shozo Yasui, Japan; Hujun Yin, UK

Paper Submissions:
=================
Authors are invited to submit full papers before 10 June, 2003, by email to wsom at brain.kyutech.ac.jp. Detailed information will be available at the WSOM'03 webpage: http://brain.kyutech.ac.jp/~wsom/

-----------------------------------------------
WSOM'03 Secretariat
Keiichi Horio, Assistant Professor
Graduate School of Life Science and Systems Engineering
Kyushu Institute of Technology
2-4 Hibikino, Wakamatsu, Kitakyushu 808-0196 Japan
Tel: +81-93-695-6127
E-mail: horio at brain.kyutech.ac.jp

From Klaus at first.fhg.de Mon May 12 12:25:58 2003
From: Klaus at first.fhg.de (Klaus-R. Mueller)
Date: Mon, 12 May 2003 18:25:58 +0200
Subject: EU summer school on ICA
Message-ID: <3EBFCB16.9070600@first.fhg.de>

Please POST to interested students and researchers:

PLEASE REGISTER NOW - PLEASE POST - PLEASE REGISTER NOW

Dear colleagues,

It is a pleasure to announce the *European summer school on ICA - from theory to applications* in Berlin, Germany, on June 16-17, 2003:

http://ida.first.gmd.de/~harmeli/summer_school/

organized by the BLISS project (http://www.bliss-project.org).

FLYER TO REGISTER: http://ida.first.gmd.de/~harmeli/summer_school/flyer.pdf

Confirmed speakers include:
Luis Almeida, INESC ID, Lisbon, Portugal
Francis Bach, UC Berkeley, Berkeley, USA
Jean-Francois Cardoso, ENST, Paris, France
Gabriel Curio, UKBF, Berlin, Germany
Lars-Kai Hansen, Technical University of Denmark, Lyngby, Denmark
Stefan Harmeling, Fraunhofer FIRST, Berlin, Germany
Simon Haykin, McMaster University, Hamilton, Canada
Christian Jutten, INPG, Grenoble, France
Juha Karhunen, HUT, Helsinki, Finland
Te-Won Lee, Salk Institute, San Diego, USA
Klaus-Robert Muller, Fraunhofer FIRST, Berlin, Germany
Klaus Obermayer, TU Berlin, Berlin, Germany
Erkki Oja, HUT, Helsinki, Finland
Dinh-Tuan Pham, LMC-IMAG, Grenoble, France
Laurenz Wiskott, Humboldt-University, Berlin, Germany
Michael Zibulevsky, Technion, Haifa, Israel
Andreas Ziehe, Fraunhofer FIRST, Berlin, Germany

Local organizing committee:
* Klaus-Robert Muller, Fraunhofer FIRST, Berlin, Germany
* Stefan Harmeling, Fraunhofer FIRST, Berlin, Germany
* Andreas Ziehe, Fraunhofer FIRST, Berlin, Germany

NOTE: THERE WILL BE A POSTER SESSION WHERE STUDENTS CAN PRESENT THEIR RESEARCH.

Please POST to interested students and researchers.

Best regards,
klaus

--
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
Prof. Dr. Klaus-Robert M\"uller
University of Potsdam and Fraunhofer Institut FIRST
Intelligent Data Analysis Group (IDA)
Kekulestr. 7, 12489 Berlin
e-mail: Klaus-Robert.Mueller at first.fraunhofer.de and klaus at first.gmd.de
Tel: +49 30 6392 1860
Tel: +49 30 6392 1800 (secretary)
FAX: +49 30 6392 1805
http://www.first.fhg.de/persons/Mueller.Klaus-Robert.html
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&

From ASIM.ROY at asu.edu Mon May 12 19:40:28 2003
From: ASIM.ROY at asu.edu (Asim Roy)
Date: Mon, 12 May 2003 16:40:28 -0700
Subject: Summary of panel discussion at IJCNN'2002 and ICONIP'02 on the question: "Oh sure, my method is connectionist too. Who said it's not?"
Message-ID:

This note summarizes the panel discussions that took place at IJCNN'2002 (International Joint Conference on Neural Networks) in Honolulu, Hawaii in May 2002 and at ICONIP'02-SEAL'02-FSKD'02 (the 9th International Conference on Neural Information Processing, the 4th Asia-Pacific Conference on Simulated Evolution And Learning, and the 2002 International Conference on Fuzzy Systems and Knowledge Discovery) in November 2002 in Singapore. IJCNN'2002 was organized jointly by INNS (International Neural Network Society) and the IEEE Neural Network Council. This was the fifth panel discussion at these neural network conferences on the fundamental ideas of connectionism. The discussion topic at both conferences was: "Oh sure, my method is connectionist too. Who said it's not?" The abstract below summarizes the issues/questions that were addressed by this panel. The following persons were on these panels, and their bio-sketches are included at the end:

At ICONIP'02 in Singapore:
1. Shun-Ichi Amari
2. Wlodzislaw Duch
3. Kunihiko Fukushima
4. Nik Kasabov
5. Soo-Young Lee
6. Erkki Oja
7. Xin Yao
8. Lotfi Zadeh
9. Asim Roy

At IJCNN'2002 in Honolulu:
1. Bruno Apolloni
2. Robert Hecht-Nielsen
3. Robert Kozma
4. Steve Rogers
5. Ron Sun

Thanks to Lipo Wang, General Chair of ICONIP'02, and Donald C. Wunsch, Program Co-Chair of IJCNN'02, for allowing these discussions to take place. For those interested, summaries of prior debates on the basic ideas of connectionism are available at the CompNeuro archive site. Here is a partial list of the prior debate summaries available there:

http://www.neuroinf.org/lists/comp-neuro/Archive/1999/0079.html - Some more questions in the search for sources of control in the brain
http://www.neuroinf.org/lists/comp-neuro/Archive/1998/0084.html - BRAINS INTERNAL MECHANISMS - THE NEED FOR A NEW PARADIGM
http://www.neuroinf.org/lists/comp-neuro/Archive/1997/0069.html - COULD THERE BE REAL-TIME, INSTANTANEOUS LEARNING IN THE BRAIN?
http://www.neuroinf.org/lists/comp-neuro/Archive/1997/0057.html - CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS?
http://www.neuroinf.org/lists/comp-neuro/Archive/1997/0012.html - DOES PLASTICITY IMPLY LOCAL LEARNING? AND OTHER QUESTIONS
http://www.neuroinf.org/lists/comp-neuro/Archive/1996/0047.html - Connectionist Learning - Some New Ideas/Questions

Asim Roy
Arizona State University

Panel Question: "Oh sure, my method is connectionist too. Who said it's not?"

Description: Some claim that the notion of connectionism is an evolving one. Since the publication of the PDP book (which enumerated the then accepted principles of connectionism), many new ideas have been proposed and many new developments have occurred. So, according to these claims, the connectionism of today is different from the connectionism of yesterday.
Examples of such new developments in connectionism include hybrid connectionist-symbolic models (Sun 1995, 1997), neuro-fuzzy models (Keller 1993, Bezdek 1992), reinforcement learning models (Kaelbling et al. 1994, Sutton and Barto 1998), genetic/evolutionary algorithms (Mitchell 1994), support vector machines (references), and so on. In these newer connectionist models, there are many violations of the "older" connectionist principles. One of the simplest violations is the reading and setting of connection weights in a network by an external agent in the system. The means and mechanisms of external setting and reading of weights were not envisioned in early connectionism. Why do we need local learning laws if an external source can set the weights of a network? So this and other features of these newer methods are obviously in direct conflict with early connectionism. In the context of these algorithmic developments, it has been said that maybe nobody at this stage has a clear definition of connectionism, and that everyone makes things up (in terms of basic principles) as they go along. Is this the case? If so, does this pose a problem for the field? To defend this situation, some argue that connectionism is not just one principle, but many. Is that the case? If not, should we redefine connectionism given the needs of these new types of learning methods and on the basis of our current knowledge of how the brain works? This panel intends to closely examine this issue in a focused and intensive way. Debates are expected. We hope to at least clarify some fundamental notions and issues concerning connectionism, and hopefully also make some progress on understanding where it needs to go in the near future.

BRIEF SUMMARY OF INDIVIDUAL REMARKS

Shun-ichi Amari -- RIKEN Brain Science Institute, Japan

Connectionism -- Theory or Philosophy

A ghost haunted the world in the mid-eighties. It was named connectionism. It was welcomed enthusiastically by many people, but was hated just as much by the traditional AI people. Now there is a question: What is connectionism, and where is it going? Is it too old now? If it is a collection of theories, there have been many new developments. However, even at the time of the rise of connectionism, its theories relied mostly on those developed in the seventies. Most were found to be rediscoveries. However, as a philosophy, it declared strongly that information in the brain is distributed and processed in parallel by dynamic interactions. Novel engineering systems should learn from this fact. This philosophy has been accepted with enthusiasm, and has generated many new theories and findings. Its message is still valid.

Bruno Apolloni -- University of Milano, Italy

Connectionism or not connectionism. The true revolution of connectionism has been to credit heuristics, i.e. subsymbolic computations, as scientific matter. But the Aristotelian thought in our genomic inheritance puts a sharp divide between the noble brain activity represented by symbolic reasoning and the low-level attitudes, such as intuition, fantasy, and all that in general cannot be gauged by a theorem but is explicable just by the electro-chemistry of our neuron activities. A second revolution is currently demolishing the barrier between the two categories, recognizing that neurons are also the seat of abstract thought and that no sharp difference occurs between the two mental attitudes.
Like a photon, which is both a particle and a wave, a thought is materially a well-proven neural network: so well proven that it acts as a fixed point under adaptation to the environment, and so simple that it can be stored in a limited amount of memory. Learning algorithms also reveal themselves to be similar at the two levels. The search for reward is almost random at the subsymbolic level, and is pursued by hypothetical, possibly absurd, reasoning at the symbolic one. The reaction to punishment is possibly unconscious at the former level, while at the symbolic level it is a source of intellectual pain and a search for ways to avoid it. The key point is to maintain an efficient set of feedbacks between the levels. In such a framework we discover ourselves to be material automatons, or rather physical matter sharing the nature of a god.

Wlodzislaw Duch -- Nicolaus Copernicus University, Torun, Poland

Everybody has some view of what connectionism is, and all these points of view are in some way limited. Perhaps we should not worry about definitions. New branches of science, as Allen Newell once said, are not defined, but emerge from the common interests of people that meet at conferences, discuss problems they find interesting, and establish new journals. When people ask me what I am working on, what do I usually say? Computational intelligence: trying to solve problems that are not effectively algorithmizable. Is this connectionism? Unless I work in neurobiology, where methods should be biologically plausible, I do not care. It may be more statistical, or more evolutionary; as long as it solves interesting problems it is something worth working on. Unfortunately it is not quite so simple, and since the matter has not been thoroughly discussed, connectionist approaches have spontaneously developed in many different directions, and we face all kinds of problems now. Some neural network conferences do not accept papers on statistical learning methods, because they are not neural, but accept SVM papers, although they have equally little to do with connectionism. Because recent developments in SVMs were somehow connected with perceptrons, and papers on this subject appear mostly in neural journals, it is perceived as a branch of neural computing. Many articles in the IEEE Transactions on Neural Networks, and other neural network journals, have little to do with connectionist ideas. Although the IEEE formed the Neural Network Society, conferences organized by this society cover a much broader range of topics. Not only SVMs are welcome, but also Bayesian belief networks and their outgrowth, graphical methods, statistical mean field theories, sampling techniques, chaos, dynamical systems methods, fuzzy, evolutionary, swarm, immune system, and many other techniques are accepted. Fields are overlapping, boundaries are fuzzy, and we do not know how to define connectionism any more. Many people work on the same problems using different approaches that originate from the fields they have been trained in. Classification Society, or Numerical Taxonomy experts, sometimes know about neural networks, but do neural network experts know what the Classification Society or the Pattern Recognition society is doing, and what kind of problems and methods are of interest to them? For example, committees of models are investigated by the neural network, evolutionary, machine learning, numerical taxonomy and pattern recognition communities. The same problems are solved over and over by people that do not know about the existence of other fields.
How to bring together experts working on the same problems from different perspectives? If anything should come out of this discussion, it should not be a definition of connectionism, but rather an understanding that a lot of research effort is duplicated many times over. The point is that we are too focused on the methods, forgetting about the challenges and problems that wait to be solved. It is too easy to modify one of the neural methods, add another term to the error function, or modify the network architecture. Infinitely many variants of clusterization, or unsupervised learning methods, may be devised. Are the classical branches of science defined by their methods? Biology, physics, chemistry and other classical branches of science were always problem-oriented. Why do we keep on thinking in the method-oriented way? Connectionist or not, does it solve the problem? Defining our field of interest as looking for solutions to problems that are non-algorithmic, problems for which effective algorithms do not exist, makes it problem-oriented. Solutions to such problems require intelligence. Since we solve them with computational means, the field may be appropriately called Computational Intelligence (CI). Connectionist methods are an important part of this field, but there is no reason to restrict oneself to one group of methods. A good example is the contrast between the symbolic, rule-based methods used by Artificial Intelligence (AI) and the subsymbolic, neural methods. Contrasting neural networks with rule-based AI must ultimately fail. How will we solve problems requiring systematic thinking without rules? Rules must emerge somehow from networks. Some CI problems require knowledge and symbolic reasoning, and this is where traditional AI has focused. These problems are related to higher cognitive functions, such as thinking, reasoning, planning, problem solving and understanding natural language. On the other hand, neural computing has tried to solve problems requiring sensorimotor functions, perception, control, development of feature detectors -- problems concerned with low-level cognition. Computational intelligence, being problem-oriented, is interested in algorithms coming from all sources. Search and logical rules may solve problems in theorem proving or sentence parsing that connectionist techniques are not able to solve. Learning and adaptation are just one side of intelligence. Although our brains use neurons to solve problems requiring systematic reasoning, there must be a way to approximate this neural activity with symbolic search-based processes. As with all approximations it may sometimes break down, but in most cases AI expert systems are solving interesting problems. Instead of stressing the differences it may be better to join forces, since low- and high-level cognitive functions are both needed for true intelligence. Solving problems for which effective algorithms do not exist, by connectionist or other methods, provides a clear definition for Computational Intelligence. A clear definition of neural computing, or soft computing -- a definition that covers all that experts work on in these fields -- is very difficult to agree upon, because of the method, rather than problem, orientation. Early connectionism was naive: psychologists were writing papers showing that MLPs are not almighty. Everybody knows that now. For some tasks modular networks are necessary. The brain is not just one big network.
External sources -- other parts of the brain -- control learning; for example, the limbic structures involved in emotions decide what is interesting and worth learning. Weights are not constant, but are a function of inputs, with not just long-term but also short-term dynamics. But neurons, networks and brain functions are only one source of inspiration for us. Many methods were inspired by the Parallel Distributed Processing (PDP) idea. The name PDP did not become popular, since "neural networks" sounded much better. Almost all algorithms may be represented in some graphical way, with nodes representing functions or local processors. Graphical computations are going to be popular, but this is again just a broad group of algorithms, not a candidate for a branch of science. Modeling neurobiological systems at a very detailed level leads to computational neuroscience. Simpler approximations are still useful to model various brain functions and processes. Very rough approximations, leading to modular neural networks in which single neurons do not matter, but the distributed nature of processing is important, lead to connectionist systems useful for psychology. These fields are appropriately based on neural processing, although they have strong overlap with many other branches of science: neuroscience with neurochemistry, molecular biology and genetics, and connectionist approaches in psychology with cognitive science. Engineering applications of neural computing should be less method-oriented and more problem-oriented. If we do not make an effort in this direction, many journals and conferences will present solutions to the same problems, repeating the same work many times over and preventing comparison of results that may be obtained using different methods. Time will pass, but we shall not grow wiser ...

Kunihiko Fukushima -- Tokyo University of Technology, Japan

Find Out Other Principles that Govern the Brain

The final goal of connectionism is to understand the mechanism of information processing in the biological brain. In the history of connectionism, we have experienced two booms of research, from 1960 and from 1985. One of the triggers for the first boom was the proposal of the neural network model "perceptron" by Rosenblatt, and one of the triggers for the second was the introduction of the idea of cost minimization. In both cases, the difficult problem of understanding information processing in the brain was reduced to simpler problems: in the first case, to the analysis of a model called the perceptron; and in the second case, to a pure mathematical problem of cost minimization. This replacement with simple problems allowed nonprofessionals to join brain research easily, without extensive knowledge of neurophysiology or psychology. The brain is a system that works under several constraints. Since one of the constraints was expressed as a hypothesis of cost minimization, the analysis of a system that works under that constraint became very easy. In other words, the process of understanding the brain was divided into two steps: biological experiments and solving mathematical problems. This division of labor allowed nonprofessionals to enter brain research very easily. It is true that the technique of cost minimization was very powerful. It can not only explain brain mechanisms but is also useful for other problems, such as forecasting the weather and even the stock market.
Although this approach has produced great advances in brain research, it involves a risk at the same time. Researchers engaged in this work have a strong tendency to harbor the illusion that they are doing research on the biological brain. They often forget that they are simply analyzing the behavior of a system that works under a certain constraint. This is similar to the situation we had in the 1960s. Everyone forgot the fact that they were analyzing a system called the perceptron, and erroneously believed that they were doing research on the brain itself. Once the mathematical limitations of the ability of the perceptron became clear, they moved away, not only from research on the perceptron, but from research on the brain itself. Their illusory belief caused the winter era of research in the 1970s. The mathematical limitations of the principle of cost minimization are now becoming clear. Cost minimization is not the only rule that controls the biological brain. It is now time to find out the other constraints that govern the biological brain. Otherwise, we will have another winter era of research. Robert Hecht-Nielsen -- University of California, San Diego Robert Hecht-Nielsen's current views are described in Chapter 4 of the new book: Hecht-Nielsen, R. & McKenna, T. [Eds] (2003) Computational Models for Neuroscience: Human Cortical Information Processing [London, Springer-Verlag]. Nik Kasabov -- Auckland University of Technology, New Zealand Yes, indeed, very often researchers claim that their method is connectionist, too. We talk about a method being connectionist if the method utilizes artificial neurons (basic processing units) and connections between them, and if two main functions are performed in this connectionist environment - learning and generalization [1,2]. Without the above characteristics, it is hard to classify a method as connectionist. A method can be hybrid connectionist, too, if the connectionist principles above are integrated with other principles of information processing, such as rule-based systems, fuzzy systems [3], evolutionary computation [4], etc. There are additional characteristics that reinforce the connectionist principles in a method: for example, adaptive, on-line learning in an evolving connectionist structure; learning and capturing abstract information in the form of rules; modular connectionist organization; different types of learning available in one system (e.g. active, passive, supervised, unsupervised, reinforcement); relating neurons to the genes contained in them, regarded as parameters of the learning and development process [5]. As connectionism is inspired by the organization and functioning of the brain, we can assume that the more brain-like a method is, the more connectionist it is. This is true. On the other hand, a connectionist method, as defined above, can be more engineering- (application-) or mathematics-oriented rather than brain-oriented. For brain research and for the modeling of brain functions it is important to have adequate brain-like models [6], but it is irrelevant to ask how connectionist an engineering method is if it serves its purpose well. In the end, all possible directions for the development of new scientific methods for information processing should be encouraged, if these methods contribute to progress in science and technology, regardless of how connectionist they really are. And the more a method can gain from the principles of connectionism, the better, as information processing methods are constantly "moving" towards being more human-oriented and human-like, to serve humanity. [1] McClelland, J., Rumelhart, D., et al. (1986) Parallel Distributed Processing, vol. II, MIT Press. [2] Arbib, M. (1995, 2003) The Handbook of Brain Theory and Neural Networks, The MIT Press. [3] N. Kasabov (1996) Foundations of Neural Networks, Fuzzy Systems and Knowledge Engineering, The MIT Press. [4] X. Yao (1993) Evolutionary artificial neural networks, Int. Journal of Neural Systems, vol. 4, No. 3, 203-222. [5] N. Kasabov (2002) Evolving Connectionist Systems - Methods and Applications in Bio-informatics, Brain Study and Intelligent Machines, Springer Verlag. [6] Amari, S. and N. Kasabov (1998) Brain-like Computing and Intelligent Information Systems, Springer Verlag.
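[To make the two defining functions named above -- learning and generalization -- concrete, here is a minimal sketch of a single artificial neuron that adapts its connection weights from examples and is then queried on inputs it never saw. It is an illustration only; the toy data, learning rate and threshold are arbitrary assumptions, not part of the statement above.]

# A single artificial neuron: weighted connections plus a threshold output.
def predict(w, b, x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0

# Learning: adapt the connection weights from examples (perceptron rule).
def train(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            err = target - predict(w, b, x)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy task (assumed data): label is 1 exactly when x + y > 1.
train_set = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.2, 0.9), 1), ((0.3, 0.1), 0)]
w, b = train(train_set)

# Generalization: the trained neuron is queried on points it never saw.
for x in [(0.8, 0.9), (0.05, 0.1)]:
    print(x, predict(w, b, x))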
Chaotic neurodynamics - A new frontier in connectionism Robert Kozma, University of Memphis, Memphis, TN 38152 Summary of my viewpoint presented at the Panel Session "My method is connectionist, too!" at IJCNN'02 / WCCI'02, Honolulu, May 10-15, 2002 All nontrivial problems we face in practical applications of pattern recognition and intelligent information processing systems require a nonlinear approach. In addition, adaptivity of the models is very often a key requirement, one which allows robust solutions to real-life problems. Connectionist models, and neural networks in particular, offer exactly these qualities. It is not surprising, therefore, that connectionism has gained wide popularity in the literature. Connectionist methods can be considered a family of nonlinear statistical tools for pattern recognition with a large number of parameters, which are adapted using powerful learning algorithms. In most cases, the parameterization and the learning algorithm guarantee that the trained network operates in a convergent regime; in other words, the activation levels of the network's nodes approach a steady-state value in the autonomous case or when the inputs to the network are constant. There is an emergent field of research using dynamical neural networks that operate in oscillatory limit cycles or in a chaotic regime; see, e.g., Aihara et al. (1990). Although the first nonconvergent neural network models were proposed about four decades ago (Freeman, 1975), only recently has the time become ripe to embrace these ideas and include them in the mainstream of connectionist science. These new developments are facilitated by advances both inside and outside connectionism. In the past decades, research into convergent NNs laid down solid theoretical foundations, which can now be extended to the chaotic domain. In addition, the mathematical theory of dynamical systems and chaos has by now reached maturity, i.e., it can address the very complex issues raised by high-dimensional chaotic models like neural systems (Kaneko, 1990; Tsuda, 2001).
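[For readers unfamiliar with nonconvergent dynamics, here is a minimal sketch of a single unit in the spirit of the chaotic neuron of Aihara et al. (1990). The update rule shown is a common textbook form, and the parameter values are illustrative assumptions, chosen only to keep the unit away from a steady state; no claim is made that these are the published settings.]

import math

def f(y, eps=0.02):
    """Steep sigmoid output function."""
    return 1.0 / (1.0 + math.exp(-y / eps))

def chaotic_neuron(steps=50, k=0.7, alpha=1.0, a=0.3, y0=0.1):
    """Iterate the internal state y(t+1) = k*y(t) - alpha*f(y(t)) + a.

    Unlike a convergent unit, the output x(t) = f(y(t)) need not settle
    to a steady state even though the external input 'a' is constant.
    """
    y = y0
    trace = []
    for _ in range(steps):
        y = k * y - alpha * f(y) + a
        trace.append(f(y))
    return trace

# Irregular, non-repeating activity for many parameter settings.
print(chaotic_neuron()[:10])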
Spatio-temporal neurodynamics is a key focus area of the research into nonconvergent neural systems. Within this field, we emphasize the role of intermediate-range, or mesoscopic, effects in describing population dynamics (Kozma & Freeman, 2001). There are two major factors contributing to the emergence of the mesoscopic paradigm of neuroscience: 1. Biological systems exhibit a mesoscopic level of organization unifying 10^4 to 10^6 neurons, while the overall system size is 10^10 to 10^12 neurons. The mesoscopic approach provides an intermediate level between the local (microscopic) and global (macroscopic) levels. Mesoscopic levels are biologically plausible. Artificial neural systems, however, do not need to imitate all the details of biological neural systems; it is therefore arguable whether we should follow nature's path in this case. 2. The introduction of the mesoscopic level is very practical from a computational perspective as well. With present technology it is not feasible to create computational devices with 10^10 to 10^12 processing units, the complexity level dictated by studies of the scaling properties of complex networks. We probably need to wait at least 10-15 years for nanotechnology to become mature enough to produce systems of that complexity (Govindan, 2002). Until the technology for creating such an immense concentration of computational power arrives, software and hardware implementations of neural networks representing a mesoscopic level of granulation can provide a practically usable tool (Principe et al., 2001) for building models of space-time neurodynamics. References: Aihara, K., Takabe, T., Toyoda, M. (1990) Chaotic neural networks, Phys. Lett. A, 144(6-7), 333-340. Freeman, W.J. (1975) Mass Action in the Nervous System, Academic Press, New York. Govindan, T.R. (2002) NASA/USRA Workshop on Biology-Information Science-Nanotechnology Fusion (BIN), Ames, CA, Oct. 7-9, 2002. Kaneko, K. (1990) Clustering, coding, switching, hierarchical ordering, and control in a network of chaotic elements, Physica D, 41, 137-172. Kozma, R. and W.J. Freeman (2001) Chaotic resonance - methods and applications for robust classification of noisy and variable patterns, Int. J. Bifurc. & Chaos, 11(6), 2307-2322. Principe, J.C., Tavares, V.G., Harris, J.G., Freeman, W.J. (2001) Design and implementation of a biologically realistic olfactory cortex in analog circuitry, Proc. IEEE, 89(7): 1030-1051. Tsuda, I. (2001) Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems, Behav. Brain Sci., 24, pp. 793-847. Soo-Young Lee -- Korea Advanced Institute of Science and Technology In my mind, connectionism is a philosophy for building artificial systems based on biological neural systems. It is not necessarily limited to adaptive systems with layered architectures, such as the multilayer perceptron and radial basis function networks. Biological neural systems also utilize fixed interconnections, which have evolved through generations. For example, many biological neural systems incorporate winner-take-all networks based on lateral inhibition. My favorite connectionist model comes from the human auditory pathway, from the cochlea to the auditory cortex. It consists of several modules, which mainly have layered architectures with both feedforward and feedback connections. By understanding the functions of each module and their connections, we are able to build mathematical models of speech processing in the auditory pathway. The object path includes nonlinear noise-robust feature extraction, from simple frequency selectivity to more complex time-frequency characteristics. By combining signals from both ears, the spatial path performs sound localization and speech enhancement. The backward path is responsible for top-down attention, which filters out irrelevant or unfamiliar signals. Although the majority of the networks have fixed interconnections, their combined network exhibits complicated dynamic functions. I believe this is a very important class of connectionist models.
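[A minimal sketch of the fixed-interconnection motif mentioned above -- winner-take-all selection through lateral inhibition. The weights and update rule here are illustrative assumptions; the point is that nothing is learned, yet the fixed network computes a max-like selection.]

def winner_take_all(inputs, inhibition=0.2, self_excite=1.1, steps=30):
    """Fixed lateral-inhibition network: each unit excites itself and
    inhibits all the others. Activations are clipped to [0, 1]."""
    a = list(inputs)
    for _ in range(steps):
        total = sum(a)
        # Each unit: self-excitation minus inhibition from all other units.
        a = [min(1.0, max(0.0, self_excite * ai - inhibition * (total - ai)))
             for ai in a]
    return a

# Only the unit with the largest input survives the competition.
print(winner_take_all([0.50, 0.52, 0.30, 0.10]))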
Erkki Oja -- Helsinki University of Technology, Finland In traditional cognitive science, the basic paradigm for natural and artificial intelligence is symbol manipulation: the processing of well-defined concepts by rules. With the introduction of parallel distributed processing, or connectionism, in the 1980s, there was a paradigm shift. The new models are motivated by real neural networks, but they do not have to be faithful to biology in every detail. The representation of data is a pattern of activity, a numerical vector, instead of a logical entity. Learning means changing some numerical parameters, like connection weights, instead of updating rules. This is connectionism. Connectionist methods offer new hopes of solving many highly challenging problems like data mining, bioinformatics, novel user interfaces, robotics, etc. Two examples on which I have done research are Independent Component Analysis (ICA) and Kohonen's Self-Organizing Maps (SOM). Both ideas are motivated by neural models, but they can also be taken as data analysis tools in their own right. They are connectionist methods based on unsupervised learning, a very powerful way to infer empirical models from large data sets.
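[As a concrete instance of "learning means changing some numerical parameters like connection weights", here is a minimal sketch of unsupervised Hebbian learning with the well-known single-unit Oja rule, which drives the weight vector toward the first principal component of the data. The synthetic data set and step size are illustrative assumptions.]

import random

def oja_train(data, lr=0.02, epochs=100):
    """Single linear unit trained with Oja's rule:
       w <- w + lr * y * (x - y * w),  where y = w . x
    The decay term keeps |w| bounded, and for a suitable step size
    w converges toward the first principal component of the data."""
    dim = len(data[0])
    w = [random.uniform(-0.1, 0.1) for _ in range(dim)]
    for _ in range(epochs):
        for x in data:
            y = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

# Synthetic two-dimensional data stretched along the (1, 1) direction.
random.seed(0)
ts = [random.gauss(0, 1) for _ in range(200)]
data = [(t + random.gauss(0, 0.1), t + random.gauss(0, 0.1)) for t in ts]
print(oja_train(data))  # roughly proportional to (0.71, 0.71)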
Steven Rogers and Matthew Kabrisky -- Qualia Computing, Inc. (QCI) and CADx Systems The only reason to restrict the connectionist label to a subset of computational intelligence techniques is the arrogance associated with the false impression that we understand in any significant detail how animals process information. What little is known will be modified dramatically by the many things we have yet to discover. All current connectionist techniques make big assumptions about what is included and what is relevant. There does not exist a unifying theory of the fundamental processing methods used in physiological information processing that includes all potentially relevant electro-chemical elements (neurons, glial cells, etc.). Thus, in our opinion, to rule out any technique based on our current assumptions is premature. In the end, being engineers, what we care most about is how we can couple our learning algorithms in efficient, productive ways with humans to achieve improved performance on useful tasks: intelligence amplification. Even if the techniques used cannot currently be mapped to similar processing strategies employed in physiological information processing systems, the fact that they are useful in interacting with the wetware of real connectionist systems makes them relevant. These quale-modifying systems, whether composed of rule-based or local learning methods, or even external setting of constraints, are the only real connectionist systems that we can consider at the present time. Ron Sun -- University of Missouri-Columbia There have been a number of panel discussions on this and related issues. For example, a panel discussion on the question "does connectionism permit reading of rules from a network?" took place at IJCNN'2000 in Como, Italy. This previous debate pointed out the limitations of strong connectionism. I noted then that clearly the death knell of strong connectionism had been sounded. Many early connectionist models have significant shortcomings. For example, the limitations due to the regularity of their structures led to difficulty in representing and interpreting symbolic structures (despite some limited successes that we have seen). Other limitations are due to the learning algorithms used by such models, which led, for example, to lengthy training (requiring many repeated trials), to requiring complete I/O mappings to be known a priori, and so on. These models may bear only a remote resemblance to biological processes; they are far less complex than biological neural networks. In coping with these difficulties, two forms of connectionism emerged. Strong connectionism adheres strictly to the precepts of connectionism, which may be unnecessarily restrictive and lead to huge costs for certain kinds of symbolic processing. On the other hand, weak connectionism (or hybrid connectionism) encourages the incorporation of both symbolic and subsymbolic processes, reaping the benefits of connectionism while avoiding its shortcomings. There have been many theoretical and practical arguments for hybrid connectionism; see, for example, Sun (1994) and Sun (2002). I shall reiterate the point I made before: to remove the strait-jacket of strong connectionism, we should advocate some forms of hybrid connectionism, encouraging the incorporation of non-NN representations and processes. It is time for a more open-minded framework in which to conduct our research. See http://www.cecs.missouri.edu/~rsun for details of work along this line. References: R. Sun (2002). Duality of the Mind. Lawrence Erlbaum Associates, Mahwah, NJ. R. Sun (1994). Integrating Rules and Connectionism for Robust Commonsense Reasoning. John Wiley and Sons, New York, NY. BIO-SKETCHES Shun-ichi Amari Professor Shun-ichi Amari is currently the Vice Director of the RIKEN Brain Science Institute and Group Director of the Brain-Style Intelligence and Brain-Style Information Systems research groups. Professor Amari received his Ph.D. degree in Mathematical Engineering in 1963 from the University of Tokyo, Tokyo, Japan. Since 1981 he has held a professorship at the Department of Mathematical Engineering and Information Physics, University of Tokyo. In 1994 he joined RIKEN's Frontier Research Program, then moved to the RIKEN Brain Science Institute when it was established in 1997. He is a Fellow of the IEEE and has received the IEEE Neural Networks Pioneer Award, the Japan Academy Award and the IEEE Emanuel R. Piore Award. Professor Amari has served on numerous editorial boards and organizing committees and has published around 300 papers, as well as several books, in the areas of information theory and neural networks. Bruno Apolloni Professor of Cybernetics and Information Theory at the Dipartimento di Scienze dell'Informazione (Department of Information Science), University of Milano, Italy. Director, Neural Networks Research Laboratory (LAREN), University of Milano. President, Neural Network Society of Italy. Author of over 100 papers in the frontier area between probability and statistics on the one hand and theoretical computer science on the other, with special regard to computational learning, pattern recognition, optimization, control theory, probabilistic analysis of algorithms, and the epistemological aspects of probability and fuzziness. His current research interests are in the statistical bases of learning and in hybrid subsymbolic-symbolic learning architectures. Wlodzislaw Duch Wlodzislaw Duch is a professor of theoretical physics and applied computational sciences, since 1990 heading the Department of Informatics (formerly the Department of Computer Methods) at Nicholas Copernicus University, Torun, Poland. His degrees include habilitation (D.Sc.
1987) in many-body physics, a Ph.D. in quantum chemistry (1980), and a Master of Science diploma in physics (1977), all from the Nicholas Copernicus University, Poland. He has held a number of academic positions at universities and scientific institutions all over the world. These include longer appointments at the University of Southern California in Los Angeles and the Max-Planck-Institute of Astrophysics in Germany (every year since 1984), and shorter (up to 3-month) visits to the University of Florida in Gainesville; the University of Alberta in Edmonton, Canada; Meiji University, Kyushu Institute of Technology and Rikkyo University in Japan; Louis Pasteur University in Strasbourg, France; and King's College London in the UK, to name only a few. He has been an editor of a number of professional journals, including IEEE Transactions on Neural Networks, Computer Physics Communications, and the Int. Journal of Transpersonal Studies, and the head scientific editor of the "Kognitywistyka" (Cognitive Science) journal. He has worked as an expert for the European Union science programs and for other international bodies. He has published 4 books and over 250 scientific and popular articles in many journals. He has been awarded a number of grants by Polish state agencies, foreign committees and European Union institutions. Kunihiko Fukushima Kunihiko FUKUSHIMA is a Full Professor at the Katayanagi Advanced Research Laboratories, Tokyo University of Technology, Tokyo, Japan. He was a full professor at Osaka University from 1989 to 1999 and at the University of Electro-Communications from 1999 to 2001. Prior to his professorship, he was a Senior Research Scientist at the NHK Science and Technical Research Laboratories. He is one of the pioneers in the field of neural networks and has been engaged in modeling neural networks of the brain since 1965. His special interests lie in modeling neural networks of the higher brain functions, especially the mechanisms of the visual system. He is the inventor of the Neocognitron for deformation-invariant visual pattern recognition, and of the Selective Attention Model for the recognition and segmentation of connected characters and natural images. One of his recent research interests is in modeling neural networks for active vision in the brain. He is the author of many books on neural networks, including "Information Processing in the Visual and Auditory Systems", "Neural Networks and Information Processing", "Neural Networks and Self-Organization", and "Physiology and Bionics of the Visual System". He has received the Achievement Award and Excellent Paper Awards from the IEICE, among others. He serves as an editor for many international journals. He was the founding President of JNNS and is a founding member of the Board of Governors of the INNS. Robert Hecht-Nielsen Beginning in 1968 with neural network computer experiments, and continuing later with the foundation and management of neural network research and development programs at Motorola (1979-1983) and TRW (1983-1986), Hecht-Nielsen was a pioneer in the development of neural networks. He has been a member of the University of California, San Diego faculty since 1986 and was the author of the first textbook on neural networks (Neurocomputing (1989), Reading, MA: Addison-Wesley). He teaches a popular year-long graduate course on the subject (ECE-270 Neurocomputing). A Fellow of the IEEE and recipient of its Neural Networks Pioneer Award, Hecht-Nielsen centers his research on the elaboration of his recently completed theory of the function of thalamocortex.
Hecht-Nielsen, R. and McKenna, T. [Eds.] (2003) Computational Models for Neuroscience: Human Cortical Information Processing, London: Springer-Verlag. Sagi, B., et al. (2001) A biologically motivated solution to the Cocktail Party Problem, Neural Computation 13: 1575-1602. Nikola K. Kasabov Fellow of the Royal Society of New Zealand, Senior Member of the IEEE Affiliation: Director, Knowledge Engineering and Discovery Research Institute; Professor and Chair of Knowledge Engineering, School of Information Technologies, Auckland University of Technology Brief Biographical History: 1971 - MSc in Computer Science and Engineering, Technical University of Sofia 1972 - MSc in Applied Mathematics, Technical University of Sofia 1975 - PhD in Mathematical Sciences, Technical University of Sofia 1976-89 Lecturer and Associate Professor, Technical University of Sofia 1989-91 Research Fellow and Senior Lecturer, University of Essex, UK 1992-1998 Senior Lecturer and Associate Professor, University of Otago, New Zealand 1999-2002 Professor and Personal Chair, Director of the Knowledge Engineering Lab, University of Otago, New Zealand Honours: Past President of the Asia Pacific Neural Network Assembly (1997-98); The Royal Society of New Zealand Silver Medal for Contribution to Science and Technology, 2001 Recent books: N. Kasabov, Evolving Connectionist Systems: Methods and Applications in Bioinformatics, Brain Study and Intelligent Machines, Springer Verlag, London, New York, Heidelberg (2002), 450pp. N. Kasabov, ed., Future Directions for Intelligent Systems and Information Sciences, Heidelberg, Physica-Verlag (Springer Verlag) (2000), 420pp. Kasabov, N. and Kozma, R., eds., Neuro-Fuzzy Techniques for Intelligent Information Systems, Heidelberg, Physica-Verlag (Springer Verlag) (1999), 450pp. Amari, S. and Kasabov, N., eds., Brain-like Computing and Intelligent Information Systems, Singapore, Springer Verlag (1998), 533pp. N. Kasabov, Foundations of Neural Networks, Fuzzy Systems and Knowledge Engineering, The MIT Press, Cambridge, MA (1996), 550pp. Associate Editor of Journals: Information Science; Intelligent Systems; Soft Computing Robert Kozma Robert Kozma holds a Ph.D. in applied physics from Delft University of Technology (1992). Presently he is Associate Professor at the Department of Mathematical Sciences and Director of the Computational Neurodynamics Lab, University of Memphis. Previously, he was on the faculty of Tohoku University, Sendai, Japan (1993-1996); Otago University, Dunedin, New Zealand (1996-1998); and the Division of Neuroscience and Department of EECS at UC Berkeley (1998-2000). His expertise includes autonomous adaptive brain systems, mathematical and computational modeling of the spatio-temporal dynamics of cognitive processes, neuro-fuzzy systems and computational intelligence. He is a Senior Member of the IEEE, a member of the Neural Networks Technical Committee of the IEEE Neural Networks Society, and a member of other professional organizations. He has been on the program committees of about 20 international conferences in the field of intelligent computation and soft computing. Soo-Young Lee Soo-Young Lee received B.S., M.S., and Ph.D. degrees from Seoul National University in 1975, the Korea Advanced Institute of Science in 1977, and the Polytechnic Institute of New York in 1984, respectively. From 1977 to 1980 he worked for the Taihan Engineering Co., Seoul, Korea. From 1982 to 1985 he worked for General Physics Corporation in Columbia, MD, USA.
In early 1986 he joined the Department of Electrical Engineering, Korea Advanced Institute of Science and Technology, as an Assistant Professor, and he is now a Full Professor. In 1997 he established the Brain Science Research Center, which is the main research organization for the Korean Brain Neuroinformatics Research Program. The research program is one of the Korean Brain Research Promotion Initiatives, sponsored by the Korean Ministry of Science and Technology from 1998 to 2008, and currently about 70 Ph.D. researchers from many Korean universities and research institutes have joined it. He was President of the Asia-Pacific Neural Network Assembly, and is on the editorial boards of two international journals, Neural Processing Letters and Neurocomputing. He received the Leadership Award and the Presidential Award from the International Neural Network Society in 1994 and 2001, respectively. His research interests lie in artificial auditory systems based on the biological information processing mechanisms of the brain. Erkki Oja Erkki Oja is Director of the Neural Networks Research Centre and Professor of Computer Science at the Laboratory of Computer and Information Science, Helsinki University of Technology, Finland. He received his Dr.Sc. degree in 1977. He has been a research associate at Brown University, Providence, RI, and a visiting professor at the Tokyo Institute of Technology. Dr. Oja is the author or coauthor of more than 250 articles and book chapters on pattern recognition, computer vision, and neural computing, as well as three books: "Subspace Methods of Pattern Recognition" (RSP and J. Wiley, 1983), which has been translated into Chinese and Japanese, "Kohonen Maps" (Elsevier, 1999), and "Independent Component Analysis" (J. Wiley, 2001). His research interests are in the study of principal components, independent components, self-organization, statistical pattern recognition, and the application of artificial neural networks to computer vision and signal processing. Dr. Oja is a member of the editorial boards of several journals and has been on the program committees of several recent conferences, including ICANN, IJCNN, and ICONIP. He is a member of the Finnish Academy of Sciences, a Fellow of the IEEE, a Founding Fellow of the International Association for Pattern Recognition (IAPR), and President of the European Neural Network Society (ENNS). Steven K. Rogers and Matthew Kabrisky Steven K. Rogers, PhD, is the President/CEO of Qualia Computing, Inc. (QCI) and CADx Systems. He founded the company in May 1997 to commercialize the Qualia Insight(tm) (QI) platform. The goal of QCI is to systematically apply QI to achieve Intelligence Amplification across market sectors. Dr. Rogers spent 20 years in the U.S. Air Force designing smart weapons. He has published more than 200 papers in neural networks, pattern recognition and optical information processing, as well as several books. He is a Fellow of the Institute of Electrical and Electronics Engineers for the design, implementation and fielding of neural solutions to Automatic Target Recognition. Dr. Rogers is also a Fellow of the International Optical Engineering Society for contributions to the science of optical neural computing, and a charter member of the International Neural Network Society. He was a plenary speaker at the 2002 World Congress on Computational Intelligence. Matthew Kabrisky, PhD, is currently a Professor Emeritus of Electrical Engineering, School of Engineering, Air Force Institute of Technology (AFIT).
He advises the faculty on courses and research in autonomous pattern recognition, mathematical models of the central nervous system, and human factors engineering. His research interests include computational intelligence and self-awareness. Dr. Kabrisky is the Chief Scientist Emeritus of CADx Systems. Ron Sun Dr. Ron Sun is James C. Dowell Professor of computer science and computer engineering at the University of Missouri-Columbia. He received his Ph.D. in computer science from Brandeis University in 1991. Dr. Sun's research interests center around the study of intelligence and cognition, especially in the areas of commonsense reasoning, human and machine learning, and hybrid connectionist models. He is the author of over 120 papers, and has written, edited or contributed to 20 books. For his paper on integrating rule-based reasoning and connectionist models, he received the 1991 David Marr Award from the Cognitive Science Society. He has also been on the program committees of the National Conference on Artificial Intelligence (AAAI-93, AAAI-97, AAAI-99), the International Joint Conference on Neural Networks (IJCNN-99, IJCNN-00, IJCNN-02), the International Conference on Neural Information Processing (1997, 1999, 2001), the International Two-Stream Conference on Expert Systems and Neural Networks, and other conferences, and has been an invited/plenary speaker at some of them. Dr. Sun is the founding co-editor-in-chief of the journal Cognitive Systems Research (Elsevier). He serves on the editorial boards of Connection Science, Applied Intelligence, and Neural Computing Surveys. He was a guest editor of the special issue of the journal Connection Science on architectures for integrating neural and symbolic processes, and of the special issue of IEEE Transactions on Neural Networks on hybrid intelligent models. He is a member of AAAI and the Cognitive Science Society, and a senior member of the IEEE. From James.Henderson at cui.unige.ch Tue May 13 09:42:43 2003 From: James.Henderson at cui.unige.ch (James Henderson) Date: Tue, 13 May 2003 15:42:43 +0200 Subject: PhD studentship in Neural Networks for Natural Language Processing Message-ID: <3EC0F653.6090502@cui.unige.ch> PhD Studentship Department of Computer Science University of Geneva Applications are invited for a PhD studentship in machine learning applied to natural language processing, available immediately. The successful candidate will join the Artificial Intelligence Research Group, Computer Science Department, University of Geneva. They will pursue research in connection with a project funded by the Swiss National Science Foundation developing neural network and semi-supervised machine learning techniques for application to very broad coverage natural language parsing. Candidates should have the following qualifications, or a substantial subset thereof (comprising at the very least those marked as mandatory): - an outstanding academic record in computer science as well as strong mathematical and programming skills (mandatory) - a solid background in machine learning (mandatory); knowledge of neural networks will be an asset - knowledge of natural language processing or computational linguistics, or at least a strong interest in language technology - a clear aptitude for independent and creative research, as evidenced by an excellent Master's thesis - good communication skills in English and French (or at least a clear indication of willingness to learn the latter) The salary for a PhD student is around 47500 Swiss Francs per annum.
Please send your curriculum vitae, academic transcript, and contact information for three references to James.Henderson at cui.unige.ch. Consideration of applications will begin immediately and continue until the position is filled. Further information about the Artificial Intelligence Research Group can be found at http://cui.unige.ch/AI-group/home.html, and about the project at http://cui.unige.ch/~henderson/nn_parsing.html. -- Dr James HENDERSON Tel: +41 22 705 76 42 CUI - University of Geneva Fax: +41 22 705 77 80 24 rue du General-Dufour Email: James.Henderson at cui.unige.ch 1211 GENEVE 4, Switzerland http://cui.unige.ch/~henderson/ From dwang at cis.ohio-state.edu Tue May 13 11:58:32 2003 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Tue, 13 May 2003 11:58:32 -0400 Subject: Last Reminder: TNN Special issue on temporal coding Message-ID: <3EC11628.52DA2B30@cis.ohio-state.edu> LAST REMINDER: Submission Deadline is May 30, 2003 IEEE Transactions on Neural Networks Call for Papers Special Issue on "Temporal Coding for Neural Information Processing" Please check the following webpage for more information on the issue and submission: http://www.cis.ohio-state.edu/~dwang/tnn.html Thanks, DeLiang Wang -- ------------------------------------------------------------ Prof. DeLiang Wang Department of Computer and Information Science The Ohio State University 2015 Neil Ave. Columbus, OH 43210-1277, U.S.A. Email: dwang at cis.ohio-state.edu Phone: 614-292-6827 (OFFICE); 614-292-7402 (LAB) Fax: 614-292-2911 URL: http://www.cis.ohio-state.edu/~dwang From poznan at iub-psych.psych.indiana.edu Tue May 13 15:44:01 2003 From: poznan at iub-psych.psych.indiana.edu (poznan@iub-psych.psych.indiana.edu) Date: Tue, 13 May 2003 14:44:01 -0500 Subject: CALL FOR PAPERS Message-ID: <3EC14B01.3040901@iub-psych.psych.indiana.edu> NON-SYNAPTIC COMMUNICATION IN BRAINS: HOW UNCONSCIOUS INTEGRATION IS MANIFESTED IN ANTICIPATORY BEHAVIOR The synaptic model of neurocommunication in the brain has dominated connectionism for more than half a century. Generally, little consideration is given to other modes of neurotransmission in animal and human brains, even though there is indirect evidence that less than half of the communication between cells occurs at synapses. Non-synaptic diffusion neurotransmission may be the primary information transmission mechanism in certain normal and abnormal functions. Non-synaptic diffusion is vastly more economical than synaptic transmission with regard to space and energy expenditure in the brain. The task of integrating a collection of databases (i.e., neuroinformatics) becomes inconceivable when faced with the challenge of explaining how unconscious integration leads to anticipatory behavior, if the meaning of data, rather than the individual data, is represented non-locally in the brain. Address Submissions and Correspondence to: Dr. Roman R. Poznanski Associate Editor, Journal of Integrative Neuroscience c/o Department of Psychology Indiana University 1101 E. 10th St. Bloomington, IN 47405-7007 email: poznan at iub-psych.psych.indiana.edu phone (Office): (812) 856-0838 http://www.worldscinet.com/jin/mkt/editorial.shtml From jose at psychology.rutgers.edu Tue May 13 17:08:40 2003 From: jose at psychology.rutgers.edu (stephen j.
hanson) Date: 13 May 2003 17:08:40 -0400 Subject: Graduate Research Assistantships at RUTGERS UNIVERSITY -- RUMBA LABORATORIES ADVANCED IMAGING CENTER (UMDNJ/RUTGERS) Message-ID: <1052860030.1809.78.camel@localhost.localdomain> RUTGERS UNIVERSITY -- RUMBA LABORATORIES ADVANCED IMAGING CENTER (UMDNJ/RUTGERS) -- Research Assistants/Graduate Fellowships. Immediate openings. Research in cognitive neuroscience, category learning, and event perception, using magnetic resonance imaging and electrophysiological techniques. Background in experimental psychology or cognitive science (BA/BS required); background in neuroscience and statistics would be helpful. Strong computer skills are a plus. An excellent opportunity for someone bound for graduate school in psychology, cognitive science, cognitive neuroscience or medicine. Send by email a CV with a description of research experience and the names of three references to: rumbalabs at psychology.rutgers.edu (see www.rumba.rutgers.edu for more information) -- Stephen J. Hanson Professor & Chair, Psychology Department, Rutgers University RUMBA Laboratories Co-Director, Advanced Imaging Center (UMDNJ/Rutgers) From smyth at ics.uci.edu Wed May 14 13:24:40 2003 From: smyth at ics.uci.edu (Padhraic Smyth) Date: Wed, 14 May 2003 10:24:40 -0700 Subject: Postdoctoral position in Machine Learning at UC Irvine Message-ID: <3EC27BD8.1020706@ics.uci.edu> Please forward to recent (or soon-to-be) PhD graduates who may be interested in this position; thanks. Padhraic Smyth Information and Computer Science University of California, Irvine Postdoctoral Research Position in Machine Learning School of Information and Computer Science University of California, Irvine A full-time post-doctoral position is available in the area of machine learning at UC Irvine, focusing on research in probabilistic modeling of large data sets such as text corpora and Web-related data. Applicants must have earned a Ph.D. in Computer Science, Electrical Engineering, Mathematics, Statistics, or a closely related discipline, with an emphasis on machine learning or applied statistics. Knowledge of probabilistic learning methods (such as the EM algorithm or Bayesian learning), as well as programming experience in languages such as C/C++ or MATLAB, is also desirable. The salary range for this position is $45k to $60k annually, commensurate with training and experience. The appointment will be for a 1- or 2-year period beginning in September 2003. Interested applicants should send a curriculum vitae, a statement of research interests and achievements, and the names and email addresses of three or more references to Professor Padhraic Smyth at smyth at ics.uci.edu. Please put "Application for postdoc" in the subject line of the email application. Applications received by July 15th, 2003 will receive maximum consideration. UC Irvine is located roughly halfway between Los Angeles and San Diego, about 3 miles from the Pacific Ocean. For general information about the university see www.ics.uci.edu/about/jobs/ and www.uci.edu. For further information on machine learning research at UC Irvine see (for example) www.datalab.uci.edu. The University of California is an Equal Opportunity Employer committed to excellence through diversity.
From rudesai at cs.indiana.edu Thu May 15 19:25:23 2003 From: rudesai at cs.indiana.edu (Rutvik Desai) Date: Thu, 15 May 2003 18:25:23 -0500 (EST) Subject: Ph.D. thesis on language acquisition available Message-ID: The readers of this list might be interested in my recently completed thesis: Modeling Interaction of Syntax and Semantics in Language Acquisition http://www.cs.indiana.edu/~rudesai/thesis.html Advisor: Prof. Michael Gasser, Indiana University. Abstract: How language is acquired by children is one of the major questions of cognitive science and is linked intimately to the larger question of how the brain and mind work. I describe a connectionist model of language comprehension that shows how some behaviors and strategies in language learning can emerge from general learning mechanisms. A connectionist network attempts to produce the meanings of input sentences generated by a small English-like grammar. I study three interesting behaviors, related to the interaction of syntax and semantics, that emerge as the network attempts to perform the task. First, the network can use syntactic cues to predict aspects of the meaning of a novel word (syntactic bootstrapping), and its learning of new syntax is aided by knowledge of word meanings (semantic bootstrapping). Second, when a familiar verb is encountered in an incorrect syntactic context, the network tends to follow the context to arrive at an interpretation of the utterance in the early stages of training, and follows the verb in later stages; similar behavior, known as frame and verb compliance, is observed in children. Third, there is considerable evidence that children's early language is item-based, i.e., organized around specific linguistic expressions and items they hear. The network's representations are also found to be highly item-based and context-specific in early stages, and become categorical in later stages of learning, similar to those of adults. The connectionist simulations provide a concrete and parsimonious account of these three phenomena in language development. They also support the idea that domain-specific behaviors and learning strategies can emerge from relatively general mechanisms and constraints, and that it is not always necessary to propose apparatus specifically designed for particular tasks. --- Rutvik Desai Postdoctoral Fellow Language Imaging Laboratory Medical College of Wisconsin http://www.neuro.mcw.edu/ From David.Cohn at acm.org Sat May 3 09:32:33 2003 From: David.Cohn at acm.org (David 'Pablo' Cohn) Date: Sat, 03 May 2003 06:32:33 -0700 Subject: jmlr-announce: Designing Committees of Models through Deliberate Weighting of Data Points Message-ID: <5.2.0.9.0.20030503062926.03462d48@ux7.sp.cs.cmu.edu> The Journal of Machine Learning Research (www.jmlr.org) is pleased to announce publication of the third paper in Volume 4: -------------------------- Designing Committees of Models through Deliberate Weighting of Data Points Stefan W. Christensen, Ian Sinclair and Philippa A. S. Reed JMLR 4(Apr):39-66, 2003. Abstract In the adaptive derivation of mathematical models from data, each data point should contribute with a weight reflecting the amount of confidence one has in it. When no additional information about data confidence is available, all the data points should be considered equal, and they are also generally given the same weight. In the formation of committees of models, however, this is often not the case, and the data points may exercise unequal, even random, influence over the committee formation. In this paper, a principled approach to committee design is presented. The construction of a committee design matrix is detailed, through which each data point contributes to the committee formation with a fixed weight, while contributing with different individual weights to the derivation of the different constituent models, thus encouraging model diversity whilst not inadvertently biasing the committee towards any particular data points. It is not distinctly an algorithm, but rather a framework within which several different committee approaches may be realised. Whereas the focus of the paper lies entirely on regression, the principles discussed extend readily to classification.
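[The following rough sketch illustrates one reading of the abstract's central idea -- every data point carries the same fixed total weight across the committee, while weighting each constituent model differently. It is not the authors' actual construction; the data, the random row weighting and all parameters are assumptions made for illustration.]

import random

def fit_weighted_line(xs, ys, ws):
    """Weighted least-squares fit of y = a*x + b."""
    W = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / W
    my = sum(w * y for w, y in zip(ws, ys)) / W
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    a = cov / var
    return a, my - a * mx

random.seed(1)
xs = [i / 10 for i in range(30)]
ys = [2 * x + 1 + random.gauss(0, 0.3) for x in xs]

K = 5
# Design matrix: row n gives data point n's weight in each of the K models.
# Rows are random but normalized to sum to 1, so every point contributes
# the same total weight to the committee, yet differently to each member.
design = []
for _ in xs:
    row = [random.random() for _ in range(K)]
    s = sum(row)
    design.append([r / s for r in row])

models = [fit_weighted_line(xs, ys, [row[k] for row in design])
          for k in range(K)]

def committee(x):
    # Simple average of the member predictions.
    return sum(a * x + b for a, b in models) / K

print(committee(1.5))  # close to the underlying 2*x + 1 = 4.0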
---------------------------------------------------------------------------- This paper, and all previous papers, are available electronically at http://www.jmlr.org in PostScript and PDF formats. The papers of Volumes 1, 2 and 3 are also available electronically from the JMLR website, and in hardcopy from the MIT Press; please see http://mitpress.mit.edu/JMLR for details. -David Cohn From dhwang at cs.latrobe.edu.au Fri May 16 02:08:54 2003 From: dhwang at cs.latrobe.edu.au (Dianhui Wang) Date: Fri, 16 May 2003 16:08:54 +1000 Subject: CALL FOR BOOK CHAPTERS Message-ID: <3EC48075.E8BBA612@cs.latrobe.edu.au> Dear Colleagues, CALL FOR BOOK CHAPTERS --------------------------------------------------------------------------------- Neural Networks Applications in Information Technology and Web Engineering --------------------------------------------------------------------------------- This email solicits your submission of book chapters. The edited book highlights successful applications of neural networks in the IT domain and in Web engineering. It will be published by a publisher of University Malaysia Sarawak in 2004. Details of the "Call for Chapters" can be found at http://homepage.cs.latrobe.edu.au/dhwang/call4chapter.html We look forward to receiving your submissions shortly. Kind regards, The Book Editors: ---------------------------------------------------------------------- Dr Dianhui Wang Department of Computer Science and Computer Engineering La Trobe University, Melbourne, VIC 3086, Australia Tel: +61 3 9479 3034 Email: dhwang at cs.latrobe.edu.au Mr Nung Kion Lee Faculty of Cognitive Sciences and Human Development University Malaysia Sarawak Kota Samarahan, Sarawak, Malaysia Tel: +60 82 679 276 Email: nklee at fcs.unimas.my ----------------------------------------------------------------------- From David.Cohn at acm.org Fri May 16 12:55:10 2003 From: David.Cohn at acm.org (David 'Pablo' Cohn) Date: 16 May 2003 09:55:10 -0700 Subject: new paper from JMLR: Task Clustering and Gating for Bayesian Multitask Learning Message-ID: <1053104110.1667.189.camel@bitbox.corp.google.com> The Journal of Machine Learning Research (www.jmlr.org) is pleased to announce publication of the fifth paper in Volume 4: -------------------------- Task Clustering and Gating for Bayesian Multitask Learning Bart Bakker and Tom Heskes JMLR 4(May):83-99, 2003 Abstract Modeling a collection of similar regression or classification tasks can be improved by making the tasks 'learn from each other'. In machine learning, this subject is approached through 'multitask learning', where parallel tasks are modeled as multiple outputs of the same network. In multilevel analysis this is generally implemented through the mixed-effects linear model, where a distinction is made between 'fixed effects', which are the same for all tasks, and 'random effects', which may vary between tasks. In the present article we adopt a Bayesian approach in which some of the model parameters are shared (the same for all tasks) and others are more loosely connected through a joint prior distribution that can be learned from the data. We seek in this way to combine the best parts of both the statistical multilevel approach and the neural network machinery. The standard assumption expressed in both approaches is that each task can learn equally well from any other task. In this article we extend the model by allowing more differentiation in the similarities between tasks. One such extension is to make the prior mean depend on higher-level task characteristics. More unsupervised clustering of tasks is obtained if we go from a single Gaussian prior to a mixture of Gaussians. This can be further generalized to a mixture-of-experts architecture with the gates depending on task characteristics. All three extensions are demonstrated through application both to an artificial data set and to two real-world problems, one a school problem and the other involving single-copy newspaper sales.
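[As a schematic illustration of one extension described above -- replacing a single Gaussian prior over task parameters with a mixture of Gaussians so that similar tasks cluster -- here is a toy sketch. It is not the authors' model; the one-dimensional setting, the data and the plain EM updates are illustrative assumptions.]

import random, math

random.seed(2)
# Toy setting: each task t has a scalar parameter theta_t; tasks come in
# two latent groups (centers -2 and 3), and each task yields 5 noisy samples.
thetas = [random.gauss(c, 0.3) for c in [-2, -2, -2, 3, 3, 3]]
data = [[random.gauss(th, 1.0) for _ in range(5)] for th in thetas]
est = [sum(d) / len(d) for d in data]  # per-task maximum-likelihood estimates

# EM for a 2-component Gaussian mixture prior over the task parameters.
mu, sigma, pi = [-1.0, 1.0], [1.0, 1.0], [0.5, 0.5]
for _ in range(30):
    # E-step: responsibility of each cluster for each task estimate.
    resp = []
    for e in est:
        ps = [p * math.exp(-(e - m) ** 2 / (2 * s ** 2)) / s
              for p, m, s in zip(pi, mu, sigma)]
        z = sum(ps)
        resp.append([p / z for p in ps])
    # M-step: update cluster weights, means and spreads.
    for k in range(2):
        rk = [r[k] for r in resp]
        nk = sum(rk)
        pi[k] = nk / len(est)
        mu[k] = sum(r * e for r, e in zip(rk, est)) / nk
        sigma[k] = max(0.1, math.sqrt(
            sum(r * (e - mu[k]) ** 2 for r, e in zip(rk, est)) / nk))

print(mu)  # cluster centers recovered near -2 and 3: the tasks have been grouped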
---------------------------------------------------------------------------- This paper is available electronically at http://www.jmlr.org in PostScript and PDF formats. The papers of Volumes 1, 2 and 3 are also available electronically from the JMLR website, and in hardcopy from the MIT Press; please see http://mitpress.mit.edu/JMLR for details. -David Cohn From steve at cns.bu.edu Sat May 17 04:57:45 2003 From: steve at cns.bu.edu (Stephen Grossberg) Date: Sat, 17 May 2003 04:57:45 -0400 Subject: cortical mechanisms of development, learning, attention, and 3D vision Message-ID: The following article is now available at http://www.cns.bu.edu/Profiles/Grossberg in PDF: Grossberg, S. (2003). How does the cerebral cortex work? Development, learning, attention, and 3D vision by laminar circuits of visual cortex. Behavioral and Cognitive Neuroscience Reviews, in press. ABSTRACT: A key goal of behavioral and cognitive neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how the visual cortex sees. Visual cortex, like many parts of perceptual and cognitive neocortex, is organized into six main layers of cells, as well as characteristic sub-laminae. Here it is proposed how these layered circuits help to realize processes of development, learning, perceptual grouping, attention, and 3D vision through a combination of bottom-up, horizontal, and top-down interactions. A key theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. These results thus begin to unify three fields: infant cortical development, adult cortical neurophysiology and anatomy, and adult visual perception. The identified cortical mechanisms promise to generalize to explain how other perceptual and cognitive processes work.
From Sebastian_Thrun at heaven.learning.cs.cmu.edu Mon May 19 10:47:53 2003 From: Sebastian_Thrun at heaven.learning.cs.cmu.edu (Sebastian Thrun) Date: Mon, 19 May 2003 10:47:53 -0400 Subject: NIPS Site open for electronic submissions Message-ID: The NIPS Web site is now accepting electronic submissions at nips.cc Please note that the deadline for submissions is June 6, 2003. Detailed submission instructions can be found at nips.cc. Sebastian Thrun Lawrence Saul NIPS*2003 General Chair NIPS*2003 Program Chair From rsun at ari1.cecs.missouri.edu Tue May 20 14:24:00 2003 From: rsun at ari1.cecs.missouri.edu (Ron Sun) Date: Tue, 20 May 2003 13:24:00 -0500 Subject: a new book: Duality of the Mind Message-ID: <200305201824.h4KIO0iW021725@ari1.cecs.missouri.edu> Announcing a new book published by Lawrence Erlbaum Associates, Inc. http://www.erlbaum.com/ D U A L I T Y O F T H E M I N D A Bottom-up Approach toward Cognition by Ron Sun Synthesizing situated cognition, reinforcement learning, and hybrid connectionist models, this book develops a cognitive architecture focused on situated involvement in, and interaction with, the world. The architecture notably incorporates the distinction between implicit and explicit processes. The work described in the book demonstrates the cognitive validity of the architecture by capturing a wide range of human learning data. The computational properties of the architecture are explored with experiments that manipulate implicit and explicit processes to optimize performance in a range of domains. The philosophical implications of the approach, for situated cognition, intentionality, symbol grounding, and consciousness, are also explored in detail. In a nutshell, this book motivates and develops a framework for studying human cognition, based on an approach characterized by its focus on the dichotomy between, and the interaction of, implicit and explicit cognition. -------------------------------------------------------------------- For more details, go to http://www.cecs.missouri.edu/~rsun/book6-ann.html To order the book, go to https://www.erlbaum.com/shop/tek9.asp?pg=products&specific=0-8058-3880-5 =================================================================== Professor Ron Sun, Ph.D James C. Dowell Professor CECS Department, 201 EBW phone: (573) 884-7662 University of Missouri-Columbia fax: (573) 882-8318 Columbia, MO 65211-2060 email: rsun at cecs.missouri.edu http://www.cecs.missouri.edu/~rsun =================================================================== From i.tetko at gsf.de Wed May 21 06:43:17 2003 From: i.tetko at gsf.de (Igor Tetko) Date: Wed, 21 May 2003 12:43:17 +0200 Subject: MIPS Postdoctoral Position Message-ID: Postdoctoral Position at MIPS Applications are invited for a postdoctoral fellowship to develop new methods for the automatic classification of protein sequences using machine learning methods. The input data will include BLAST/FASTA similarity scores, protein domains and motifs, as well as secondary and tertiary structures predicted by standard bioinformatics methods. The primary classification target will be the functional categories (FunCat) developed by MIPS. Applicants with a Ph.D. in bioinformatics or computer science/engineering are encouraged to apply. A background in molecular biology, as well as demonstrated computer skills (programming in Perl, SQL, C/C++; familiarity with UNIX and the Web), is preferred. The position will be supervised by Prof. H.W. Mewes and Dr. I.V. Tetko. The salary will be according to BAT IIa.
The appointment will be for two years. The position is available immediately. Please send a curriculum vitae and names for letters of recommendation to i.tetko at gsf.de, http://mips.gsf.de. Informal inquiries are also welcome. -- Dr. Igor V. Tetko Senior Research Scientist Institute for Bioinformatics GSF - Forschungszentrum fuer Umwelt und Gesundheit, GmbH Ingolstaedter Landstrasse 1, D-85764 Neuherberg, Germany Telephone: +49-89-3187-3575 Fax: +49-89-3187-3585 http://www.vcclab.org/~itetko e-mail: itetko at vcclab.org, i.tetko at gsf.de From robtag at unisa.it Thu May 22 11:46:50 2003 From: robtag at unisa.it (Roberto Tagliaferri) Date: Thu, 22 May 2003 17:46:50 +0200 Subject: WIRN 2003 Message-ID: <3ECCF0EA.1050505@unisa.it> Dear colleague, please find attached the preliminary program of WIRN 2003, XIV ITALIAN WORKSHOP ON NEURAL NETS IIASS "Eduardo R. Caianiello", Vietri sul Mare (SA) ITALY June 5 - 7, 2003 * Pre-WIRN workshop on Bioinformatics and Biostatistics * Wednesday, June 4 * WIRN regular sessions * Thursday, June 5 * Models (9.30-11.30) * Architectures & Algorithms (11.50-12.50) * Poster session (16.00-17.30) * Friday, June 6 * Applications (9.30-11.30) * Applications (11.50-12.30) * Architectures & Algorithms (15.00-16.00) * Caianiello Prize (16.00-17.00) * SIREN Society meeting (17.00) * Saturday, June 7 * Special session - Formats of knowledge: words, images, narratives (9.30-11.20) * Special session - Formats of knowledge: words, images, narratives (11.20-11.40) * Panel session * Post-WIRN Workshop on Formats of knowledge (Saturday, June 7, 14.30-18.30) Further information can be found on the SIREN web site, in the pages of the workshop: http://grid004.usr.dsi.unimi.it/indice2.html Best regards Bruno Apolloni & Roberto Tagliaferri Detailed program Pre-WIRN workshop on Bioinformatics and Biostatistics Wednesday, June 4 9.30 - 10.30 Piero Fariselli, Pier Luigi Martelli and Rita Casadio, University of Bologna Machine Learning Approaches and Structural Genomics 10.30 - 11.30 Alessio Ceroni, Paolo Frasconi, Andrea Passerini, Università di Firenze Algorithms for Protein Structure Prediction: Kernel Machines and Recursive Neural Networks Coffee break 11.30 - 11.50 11.50 - 12.50 Giovanni Cuda, Barbara Quaresima, Francesco Baudi, Rita Casadonte, Maria Concetta Faniello, Pierosandro Tagliaferri, Francesco Costanzo and Salvatore Venuta, University "Magna Graecia" of Catanzaro Proteomic profiling of inherited breast cancer: identification of molecular targets for early detection, prognosis and treatment 15.00 - 15.20 Antonio Eleuteri, DMA Università di Napoli "Federico II" and INFN Sez. Napoli Roberto Tagliaferri, DMI Università di Salerno and INFM Unità di Salerno Leopoldo Milano, Dipartimento di Scienze Fisiche, Università di Napoli "Federico II" and INFN Sez. Napoli I-divergence projections for MLP networks 15.20 - 15.40 Francesco Masulli, Computer Science Department, University of Pisa (Italy) Stefano Rovetta, Computer and Information Science Department, University of Genova (Italy) Gene selection using Random Voronoi Ensembles 15.40 - 16.00 Giorgio Valentini, DSI, Dipartimento di Scienze dell'Informazione, Univ.
degli Studi di Milano An application of Low Bias Bagged SVMs to the classification of heterogeneous malignant tissues 16.00 - 16.20 Giulio Antoniol, RCOST-University of Sannio Michele Ceccarelli, University of Sannio Wanda Longo, Marina Ciullo, Enza Colonna, IGB-CNR Teresa Nutile, IGN-CNR Browsing Large Pedigrees to Study the Isolated Populations in the "Parco Nazionale del Cilento e Vallo di Diano" 16.20 - 16.40 Alessio Micheli, Università di Pisa Filippo Portera, Alessandro Sperduti, Università di Padova QSAR/QSPR Studies by Kernel Machines, Recursive Neural Networks and Their Integration 16.40 - 17.00 Chakra Chennubhotla, Computer Science Dept., University of Toronto Alberto Paccanaro, Bioinformatics Unit, Queen Mary University of London Markov Analysis of Protein Sequence Similarities 17.00 - 17.20 Coffee break 17.20 - 17.40 Francesco Marangoni, Master in Bioinformatics, University of Turin, Italy Matteo Barberis, Master in Bioinformatics, University of Turin, Italy Marco Botta, Department of Informatics, University of Turin, Italy Large scale prediction of protein interactions by an SVM-based machine learning approach 17.40 - 18.00 Luigi Agnati, Departm. of BioMedical Science and CIGS, Univ. of Modena, Italy Ferré S., Preclinical Pharmacology Section, NIDA NIH, Baltimore MD, USA Canela E.I., Departm. Biochemistry and Molecular Biology, Barcelona, Spain Watson S., Mental Health Research Institute and Departm. of Psychiatry, University of Michigan, USA Morpurgo Anna, Departm. Computer Science, Milano, Italy Fuxe K., Departm. of NeuroScience, KI, Stockholm, Sweden Computer-assisted image analysis of biological preparations carried out at different levels of resolution opened up a new understanding of brain function 18.00 - 18.20 Bruno Apolloni, Simone Bassis, Andrea Brega, Sabrina Gaito, Dario Malchiodi, Anna Maria Zanaboni, Università degli Studi di Milano, Dipartimento di Scienze dell'Informazione Monitoring of car driving awareness from biosignals WIRN regular sessions Thursday, June 5 Models (9.30-11.30) 9.30 - 10.30 Leon O. Chua, NOEL (Non Linear Electronics Laboratory), Univ. of California, Berkeley From Brainlike Computing to Artificial Life Invited talk 10.30 - 10.50 Roberto Serra, CRA Montecatini Marco Villani, CRA Montecatini On the dynamics of scale-free boolean networks 10.50 - 11.10 Silvio P. Sabatini, DIBE - University of Genoa Fabio Solari, DIBE - University of Genoa Giacomo M. Bisio, DIBE - University of Genoa Lattice Models for Context-driven Regularization in Motion Perception 11.10 - 11.30 Bruno Apolloni, Dip. di Scienze dell'Informazione, Università degli Studi di Milano Simone Bassis, Dip. di Scienze dell'Informazione, Università degli Studi di Milano Sabrina Gaito, Dip. di Scienze dell'Informazione, Università degli Studi di Milano Dario Malchiodi, Università
degli Studi di Milano: Cooperative games in a stochastic environment
11.30 - 11.50 Coffee break

Architectures & Algorithms (11.50-12.50)
11.50 - 12.10 Cristiano Cervellera, Istituto di Studi sui Sistemi Intelligenti per l'Automazione - CNR; Marco Muselli, Istituto di Elettronica e di Ingegneria dell'Informazione e delle Telecomunicazioni - CNR: A Deterministic Learning Approach Based on Discrepancy
12.10 - 12.30 Massimo Panella, INFO-COM Dpt., University of Rome "La Sapienza"; Fabio Massimo Frattale Mascioli, INFO-COM Dpt., University of Rome "La Sapienza"; Antonello Rizzi, INFO-COM Dpt., University of Rome "La Sapienza"; Giuseppe Martinelli, INFO-COM Dpt., University of Rome "La Sapienza": ANFIS Synthesis by Hyperplane Clustering for Time Series Prediction
12.30 - 12.50 Mario Costa, Politecnico di Torino - Dept. of Electronics; Edmondo Minisci, Politecnico di Torino - Dept. of Aerospace Engineering; Eros Pasero, Politecnico di Torino - Dept. of Electronics: A Hybrid Neural/Genetic Approach to Continuous Multi-Objective Optimization Problems
12.50 - 15.00 Lunch

Poster session (16.00 - 17.30)
15.00 - 16.00 Poster highlight spotting
16.00 - 16.10 Coffee break
16.00 - 17.30 Poster session

Friday, June 6
Applications (9.30 - 11.30)
9.30 - 10.30 Eraldo Paulesu, Università di Milano Bicocca, Dipartimento di Psicologia: Experimental designs in cognitive neuroscience using functional imaging (invited talk)
10.30 - 10.50 N. Alberto Borghese, DSI - University of Milano; Andrea Calvi, Department of Bioengineering - Politecnico of Milano: Learning to maintain upright posture: what can be learnt using adaptive neural networks models?
10.50 - 11.10 Silvio Giove, University of Venice; Claudia Basta, Regional and urban planning department, City of Venice: Environmental risk and territorial compatibility: a soft computing approach
11.10 - 11.30 Lara Giordano, DMI, Università di Salerno; Claude Albore Livadie, Istituto Universitario Suor Orsola Benincasa; Giovanni Paternoster, Dipartimento di Scienze Fisiche, Università di Napoli "Federico II"; Raffaele Rinzivillo, Dipartimento di Scienze Fisiche, Università di Napoli "Federico II"; Roberto Tagliaferri, DMI, Università di Salerno: Soft Computing Techniques for Classification of Bronze Age Axes
11.30 - 11.50 Coffee break

Applications (11.50 - 12.30)
11.50 - 12.10 Bruno Azzerboni, Università di Messina; Mario Carpentieri, Università di Messina; Maurizio Ipsale, Università di Messina; Fabio La Foresta, Università di Messina; Francesco Carlo Morabito, Università Mediterranea di Reggio Calabria: Intracranial Pressure Signal Processing by Adaptive Fuzzy Network
12.10 - 12.30 Giovanni Pilato, Salvatore Vitabile, Istituto di Calcolo e Reti ad alte prestazioni (CNR) - Sezione di Palermo; Giorgio Vassallo, CRES - Centro per la Ricerca Elettronica in Sicilia, Monreale (PA); Vincenzo Conti, Filippo Sorbello, Dipartimento di Ingegneria Informatica - Università di Palermo: A Concurrent Neural Classifier for HTML Documents Retrieval

Architectures & Algorithms (15.00-15.40)
15.00 - 15.20 Francesca Vitagliano, Raffaele Parisi, Aurelio Uncini, Dip. INFOCOM - Università di Roma "La Sapienza": Generalized Splitting 2D Flexible Complex Domain Activation Functions
15.20 - 15.40 Francesco Masulli, Dip. Informatica - Università di Pisa, INFM; Stefano Rovetta, Dip. Informatica e Scienze dell'Informazione - Università
di Genova, INFM: An Algorithm to Model Paradigm Shifting in Fuzzy Clustering (oral)

Caianiello Prize (16.00 - 17.00)
SIREN Society meeting (17.00)
Social dinner (20.00)

Saturday, June 7
Special session - Formats of knowledge: words, images, narratives (9.30-11.00)
9.30 - 9.50 M. Rita Ciceri, Facoltà di Psicologia, Università Cattolica Milano: Formats and languages of knowledge: models of links for learning processes
9.50 - 10.20 Anne McKeough, Division of Applied Psychology and chair of the Human Learning and Development program, University of Calgary: Narrative as a format of thinking: hierarchical models in story comprehension
10.20 - 10.40 Alessandro Antonietti, Claudio Maretti, Facoltà di Scienze della Formazione, U.C. Milano: Analogical models
10.40 - 11.00 Paola Colombo, Centro di Psicologia della Comunicazione, Università Cattolica di Milano: Pictures and words: analogies and specificity in knowledge organization
11.00 - 11.20 Coffee break

Special session - Formats of knowledge: words, images, narratives (11.20-11.40)
11.20 - 11.40 Ilaria Grazzani, Scienze della Formazione, Università Statale di Milano: Emotional and metaemotional competence

Panel session (11.40 - 12.40), Chair B. Apolloni: M. Rita Ciceri, Anne McKeough, Paola Colombo, Ilaria Grazzani, L. O. Chua, E. Paulesu, L. Agnati, E. Pasero, C. Cervellera: Cognitive systems: from models to their implementation
End of the regular sessions 12.00

Post-WIRN Workshop on Formats of knowledge (14.30 - 18.30)
14.30 - 14.50 Introduction by G. Milani, MIUR
14.50 - 15.10 Coffee break
14.50 - 18.30 Creation of small groups practising with special school software in different domains [edited by Rita Ciceri], involving declarative thought, narrative thought and so on.

Papers in the poster section (Thursday, June 5, 15.00)
N. Alberto Borghese, DSI - University of Milano; Stefano Ferrari, Vincenzo Piuri, DTI - Polo di Crema - Università di Milano: Real-time Surface Reconstruction through HRBF Networks
Antonio Chella, DINFO - Università di Palermo; Umberto Maniscalco, ICAR - CNR Sezione di Palermo; Roberto Pirrone, DINFO - Università di Palermo: A Neural Architecture for 3D Segmentation
Andreas Hadjiprocopis, Institute of Neurology, UCL; Paul Tofts, Institute of Neurology, UCL: Towards an Automatic Lesion Segmentation Method for Dual Echo Magnetic Resonance Images using an Ensemble of Neural Networks
Stefano D'Urso, Dip. INFOCOM - Università di Roma "La Sapienza": Automatic Polyphonic Piano Music Transcription by a Multi-Classification Discriminative-Learning
Roberto Tagliaferri, DMI, Università di Salerno; Giuseppe Longo, Dip. Scienze Fisiche, Università di Napoli "Federico II"; Stefano Andreon, Osservatorio Astronomico di Brera; Salvatore Capozziello, Dip. Fisica "E.R. Caianiello", Università di Salerno; Ciro Donalek, Dip. Scienze Fisiche, Università di Napoli "Federico II"; Gerardo Giordano, IIASS: Neural networks for photometric redshifts evaluation
Antonino Greco, DIMET; Francesco Carlo Morabito, DIMET; Mario Versaci, DIMET: Neural Network Approach for Estimation and Prediction of Time to Disruption in Tokamak Reactors
Cinzia Avossa, Dept. of Physics, University of Salerno; Flora Giudicepietro, Osservatorio Vesuviano, INGV, Napoli; Maria Marinaro, Silvia Scarpetta, Dept. of Physics, University of Salerno: Supervised and Unsupervised Analysis applied to Strombolian Explosion Quakes
Giancarlo Mauri, Università di Milano-Bicocca; Italo Zoppis, Università di Milano-Bicocca,
Dipartimento di Informatica Sistemistica e Comunicazioni: A probabilistic neural networks system to recognize 3D faces of people
Bianchini Monica, Gori Marco, Sarti Lorenzo, Dipartimento di Ingegneria dell'Informazione, Università di Siena: Face Localization with Recursive Neural Networks
Alessandra Budillon, Francesco Palmieri, Dipartimento di Ingegneria dell'Informazione, Seconda Università di Napoli: Multi-Class Image Coding via EM-KLT algorithm
Pierluigi Salvo Rossi, Gianmarco Romano, Francesco Palmieri, Dipartimento di Ingegneria dell'Informazione, Seconda Università di Napoli; Giulio Iannello, Dipartimento di Informatica e Sistemistica, Università di Napoli "Federico II": Bayesian Modelling for Packet Channels
Elena Casiraghi, Raffaella Lanzarotti, Giuseppe Lipori, Università degli Studi di Milano, Dipartimento di Scienze dell'Informazione: A face detection system based on color and support vector machine
Claudio Sistopaoli, Raffaele Parisi, INFOCOM Dept. - University of Rome "La Sapienza": Dereverberation of acoustic signals by Independent Component Analysis

From bressler at fau.edu Thu May 22 16:11:19 2003 From: bressler at fau.edu (Steven Bressler) Date: Thu, 22 May 2003 16:11:19 -0400 Subject: POSTDOCTORAL POSITION AVAILABLE Message-ID:

Postdoctoral Position Available Network for the Study of Brain Systems and Dynamics (http://dnl.ucsf.edu/NBSD/) A postdoctoral research fellow position is available immediately to work in the laboratory of Dr. Steven Bressler (http://www.ccs.fau.edu/~bressler/) as part of a consortium of Cognitive Neuroscience/Computational Science laboratories that study the dynamics of large-scale brain networks with EEG, MEG and fMRI. We are seeking an individual to adapt, refine and apply cutting-edge tools -- ranging from brain source imaging to functional connectivity and other modeling methods -- to the analysis of high-density EEG, MEG and fMRI data. The cognitive neuroscience aspect of the research emphasizes attention and working memory. Candidates should have a Ph.D. in computational or cognitive neuroscience (or in an area that provides similar background) with substantial mathematical/computational experience (especially in time series, dynamical systems analyses, neurophysiological signal processing, multivariate or Bayesian statistical analyses). Fellows will have the opportunity for a multidisciplinary training experience through interactions with all laboratories. Please contact Dr. Steven Bressler at bressler at ccs.fau.edu

From Neural.Plasticity at snv.jussieu.fr Sun May 25 10:42:13 2003 From: Neural.Plasticity at snv.jussieu.fr (Susan Sara) Date: Sun, 25 May 2003 16:42:13 +0200 Subject: Neural Plasticity on-line submission Message-ID: <3.0.6.32.20030525164213.00b2a1e0@mail.snv.jussieu.fr>

Dear Colleagues, Neural Plasticity is announcing on-line submission starting immediately. You can send your manuscript directly to the editor at this e-mail address. I will ensure prompt and fair review and an editorial decision within four weeks. Neural Plasticity publishes full research papers, short communications, commentary and review articles concerning all aspects of neural plasticity, with special attention to its functional significance as reflected in behaviour. In vitro models, in vivo studies in anesthetized and behaving animals, as well as clinical studies in humans are included.
The journal aims to be an inspiring forum for neuroscientists studying the development of the nervous system, learning and memory processes, and reorganisation and recovery after brain injury. Neural Plasticity, in its current format and under my editorship, has been in existence for less than five years, climbing from the bottom of the list of 200 neuroscience journals to 125th, and then to an impressive rank of 75, in just three years. Our citation index was 2.33 last year. We are hoping that this new electronic submission option will encourage you to submit your papers to Neural Plasticity, so that we can continue to improve the journal and make it a lively and original forum for the Neuroscience community. Yours sincerely, Susan J. Sara Susan J. Sara, Editor Neural Plasticity Institut des Neurosciences Univ Pierre & Marie Curie 9 quai St. Bernard 75005 Paris France Tel 33 1 44 27 34 60 Fax 32 52

From bp1 at cn.stir.ac.uk Tue May 27 04:42:04 2003 From: bp1 at cn.stir.ac.uk (Bernd Porr) Date: Tue, 27 May 2003 09:42:04 +0100 Subject: PhD thesis: closed loop sequence learning Message-ID: <3ED324DC.7060308@cn.stir.ac.uk>

I'm pleased to announce my PhD thesis: "Sequence-Learning in a Self-Referential Closed-Loop Behavioural System" Available here: http://www.cn.stir.ac.uk/~bp1/diss55pdf.pdf

Abstract:
---------
This thesis focuses on the problem of ``autonomous agents''. It is assumed that such agents want to be in a desired state which can be assessed by the agent itself when it observes the consequences of its own actions. The _feedback_ from the motor output via the environment to the sensor input is therefore an essential component of such a system. Accordingly, an agent is defined in this thesis as a self-referential system which operates within a closed sensor-motor-sensor feedback loop. The generic situation is that the agent is always prone to unpredictable disturbances which arrive from the outside, i.e. from its environment. These disturbances cause a deviation from the desired state (for example the organism is attacked unexpectedly or the temperature in the environment changes, ...). The simplest mechanism for managing such disturbances in an organism is to employ a reflex loop which essentially establishes reactive behaviour. Reflex loops are directly related to closed loop feedback controllers. Thus, they are robust and they do not need a built-in model of the control situation. However, reflexes have one main disadvantage, namely that they always occur ``too late''; i.e., only _after_ a (for example, unpleasant) reflex-eliciting sensor event has occurred. This defines an objective problem for the organism. This thesis provides a solution to this problem which is called Isotropic Sequence Order (ISO-) learning. The problem is solved by correlating the primary _reflex_ and a predictive sensor _input_: the result is that the system learns the temporal relation between the primary reflex and the earlier sensor input and creates a new predictive reflex. This (new) predictive reflex does not have the disadvantage of the primary reflex, namely of always being too late. As a consequence the agent is able to maintain its desired input-state all the time. In terms of engineering this means that ISO learning solves the inverse controller problem for the reflex, which is mathematically proven in this thesis. Summarising, this means that the organism starts as a reactive system and learning turns the system into a pro-active system.
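To make the learning mechanism concrete, here is a minimal numerical sketch of the correlation-based (differential Hebbian) update described in the abstract: the weight of the predictive input grows in proportion to the product of that input and the temporal derivative of the motor output. The exponential traces, the parameter values, and the open-loop setting (the learned reaction does not yet suppress the reflex) are simplifying assumptions for illustration, not the exact band-pass-filtered ISO formulation of the thesis.

    import numpy as np

    T, lag, tau = 200, 8, 10.0     # trial length, predictor lead time, trace decay
    mu = 0.01                      # learning rate
    w_reflex, w_pred = 1.0, 0.0    # fixed reflex weight; plastic predictive weight

    def trace(onset):
        # Low-pass filtered impulse: a crude stand-in for the thesis'
        # band-pass filtered inputs, so that the two signals overlap in time.
        t = np.arange(T)
        return np.where(t >= onset, np.exp(-(t - onset) / tau), 0.0)

    rng = np.random.default_rng(0)
    for trial in range(100):
        onset = int(rng.integers(50, 150))   # unpredictable disturbance time
        u_pred = trace(onset - lag)          # early-warning sensor signal
        u_reflex = trace(onset)              # reflex-eliciting signal
        v_prev = 0.0
        for t in range(T):
            v = w_reflex * u_reflex[t] + w_pred * u_pred[t]  # motor output
            w_pred += mu * u_pred[t] * (v - v_prev)          # input times dv/dt
            v_prev = v

    print(f"predictive weight after learning: {w_pred:.2f}")

Because the predictive trace is still active when the reflex fires, the product of input and output derivative is positive on average, so the predictive weight grows across trials and the system comes to react to the early signal instead of waiting for the reflex.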
It will be demonstrated by a real robot experiment that ISO learning can successfully learn to solve the classical obstacle avoidance task without external intervention (like rewards). In this experiment the robot has to correlate a reflex (retraction _after_ collision) with signals of range finders (turn _before_ the collision). After successful learning the robot generates a turning reaction before it bumps into an obstacle. Additionally it will be shown that the learning goal of ``reflex avoidance'' can also, paradoxically, be used to solve an attraction task. -- http://www.cn.stir.ac.uk/~bp1/ mailto:bp1 at cn.stir.ac.uk

From espaa at exeter.ac.uk Tue May 27 12:06:24 2003 From: espaa at exeter.ac.uk (Phil Weir) Date: Tue, 27 May 2003 17:06:24 +0100 (GMT Daylight Time) Subject: Pattern Analysis and Applications Journal Message-ID:

Dear All, The latest issue of Pattern Analysis and Applications, while not yet available in printed form, is now available for viewing in PDF on the Springer website at: http://link.springer.de/link/service/journals/10044/tocs/t3006001.htm

This issue contains the following papers:
J. Chen, M. Yeasin, R. Sharma: Visual modelling and evaluation of surgical skill
M. Mirmehdi, P. Clark, J. Lam: A non-contact method of capturing low-resolution text for OCR
L.I. Kuncheva, C.J. Whitaker, C.A. Shipp, R.P.W. Duin: Limits on the majority vote accuracy in classifier fusion
D. Frosyniotis, A. Stafylopatis, A. Likas: A divide-and-conquer method for multi-net classifiers
M.R. Ahmadzadeh, M. Petrou: Use of Dempster-Shafer theory to combine classifiers which use different class boundaries
J. Yang, D. Zhang, J.-Y. Yang: A generalised K-L expansion method which can deal with small sample size and high-dimensional problems
Q. Li, Y. Xie: Randomised Hough transform with error propagation for line and circle detection
Kagan Tumer, Nikunj C. Oza: Input decimated ensembles
A. Mitiche, S. Hadjres: MDL estimation of a dense map of relative depth and 3D motion from a temporal sequence of images
C. Thornton: Book Reviews. Truth from Trash: How Learning Makes Sense

Best regards, Phil Weir. __________________________________________ Phil Weir Pattern Analysis and Applications Journal Department of Computer Science University of Exeter Exeter EX4 4PT UK Tel: +44-1392-264066 Fax: +44-1392-264067 E-mail: espaa at ex.ac.uk Web: http://www.dcs.ex.ac.uk/paa ____________________________________________
From S.I.Reynolds at cs.bham.ac.uk Sat May 3 11:27:08 2003 From: S.I.Reynolds at cs.bham.ac.uk (Stuart I Reynolds) Date: Sat, 3 May 2003 16:27:08 +0100 (BST) Subject: PhD thesis available: Reinforcement Learning with Exploration In-Reply-To: <000001c309a0$c4605210$80f4a8c0@cwiware> Message-ID:

Dear Connectionists, My PhD thesis, "Reinforcement Learning with Exploration" (School of Computer Science, The University of Birmingham), is available for download at: http://www.cs.bham.ac.uk/~sir/pub/thesis.abstract.html Regards Stuart Reynolds

Abstract: Reinforcement Learning (RL) techniques may be used to find optimal controllers for multistep decision problems where the task is to maximise some reward signal.
Successful applications include backgammon, network routing and scheduling problems. In many situations it is useful or necessary to have methods that learn about one behaviour while actually following another (i.e. `off-policy' methods). Most commonly, the learner may be required to follow an exploring behaviour, while its goal is to learn about the optimal behaviour. Existing methods for learning in this way (namely, Q-learning and Watkins' Q(lambda)) are notoriously inefficient with their use of real experience. More efficient methods exist but are either unsound (in that they are provably non-convergent to optimal solutions in standard formalisms), or are not easy to apply online. Online learning is an important factor in effective exploration. Being able to quickly assign credit to the actions that lead to rewards means that more informed choices between actions can be made sooner. A new algorithm is introduced to overcome these problems. It works online, without `eligibility traces', and has a naturally efficient implementation. Experiments and analysis characterise when it is likely to outperform existing related methods. New insights into the use of optimism for encouraging exploration are also discovered. It is found that standard practices can have a strongly negative effect on the performance of a large class of RL methods for control optimisation. Also examined are large and non-discrete state-space problems where `function approximation' is needed, but where many RL methods are known to be unstable. In particular, these are control optimisation methods, and cases where experience is gathered from `off-policy' distributions (e.g. while exploring). By a new choice of error measure to minimise, the well-studied linear gradient descent methods are shown to be `stable' when used with any `discounted return' estimating RL method. The notion of stability is weak (very large, but finite, error bounds are shown), but the result is significant insofar as it covers new cases such as off-policy and multi-step methods for control optimisation. New ways of viewing the goal of function approximation in RL are also examined. Rather than a process of error minimisation between the learned and observed reward signal, the objective is viewed as that of finding representations that make it possible to identify the best action for given states. A new `decision boundary partitioning' algorithm is presented with this goal in mind. The method recursively refines the value-function representation, increasing its resolution in areas where this is expected to result in better decision policies.

From brain_mind at epfl.ch Mon May 5 09:40:28 2003 From: brain_mind at epfl.ch (Brain & Mind) Date: Mon, 5 May 2003 15:40:28 +0200 Subject: Faculty positions in Lausanne - Switzerland Message-ID: <008b01c3130b$e3992b60$a8bfb280@sv.intranet.epfl.ch>

Two tenure-track positions are available on the faculty of the Brain Mind Institute at the EPFL/ETH Lausanne. The positions are well funded, with a startup package, an annual budget, ample lab space and multiple core facilities. They offer young scientists (ideally less than 36 years old) an opportunity to develop a vision.

Laboratory of Perceptual Theory and Simulation: The emergence of a coherent perception involves the integration of multiple modalities. A tenure-track position is open for a computational neuroscientist interested in theory and simulations at the systems level.
Laboratory of NeuRobotics: A tenure-track position is open for a computational neuroscientist interested in applying neuronal principles to robotics. More information at the Brain Mind Institute Faculty positions

From sylee at ee.kaist.ac.kr Tue May 6 01:02:30 2003 From: sylee at ee.kaist.ac.kr (Soo-Young Lee) Date: Tue, 6 May 2003 14:02:30 +0900 Subject: CFPs and Announcement of a new journal devoted to Letters and Reviews Message-ID: <002401c3138c$b2a20e60$329ef88f@kaistsylee2>

(Sorry if you receive multiple copies.)

CFPs and Announcement of a new rapid-publication journal with double-blind reviews: "Neural Information Processing - Letters and Reviews" The first issue is scheduled for September 2003.

1. Motivation
Although there exist many journals on neural networks, publication usually requires one to two years from the date of submission, and the published material may not necessarily be the latest by the time of publication. In many other scientific disciplines, the publication time is usually 6 months. Also, in some scientific and engineering disciplines there exist "Letters" journals for rapid (timely) communications, such as Electronics Letters and Optics Letters. These Letters enjoy high citation impact factors and an excellent reputation. Rapid publication is even more critical for multidisciplinary research, where researchers come from many different academic backgrounds and may not know what the others are doing. Many researchers also believe that double-blind review procedures should be implemented. Another motivation comes from the need for good review papers on new and important topics. Review papers are extremely helpful to young researchers who would like to get into the field, especially in multidisciplinary research, but not many journals accept review papers. It is also very important to connect system-level neuroscience and artificial neural networks. Although both communities can benefit from each other, there exists a big communication gap between them. Therefore, it is very important to have at least one publication devoted to timely communication and review papers, with double-blind review procedures, for both the neuroscience and neural engineering communities.

2. Goals
(a) Timely Publication - 3 to 4 months to publication for Letters - up to 6 months to publication for Reviews
(b) Connecting Neuroscience and Engineering - serving the system-level neuroscience and artificial neural network communities
(c) Low Cost - free for online only - US$30 per year for hardcopy
(d) High Quality - unbiased double-blind reviews - short papers (up to 6 single-column single-space published pages) for Letters (The Letters may include preliminary results of excellent ideas, and full papers may be published later in other journals.) - in-depth reviews of new and important topics for Reviews

3. Topics
- Cognitive neuroscience
- Computational neuroscience
- Neuroinformatics database and analysis tools
- Brain signal measurements and functional brain mapping
- Neural modeling and simulators
- Neural network architecture and learning algorithms
- Data representations in neural systems
- Information theory for neural systems
- Software implementations of neural networks
- Neuromorphic hardware implementations
- Biologically-motivated speech signal processing
- Biologically-motivated image processing
- Human-like inference systems and intelligent agents
- Human-like behavior and intelligent systems
- Artificial life
- Other applications of neural information processing mechanisms
4. Publications - monthly online publications - yearly paper publications

5. Copyright
Authors retain all rights to their papers, and may publish extended versions of their Letters in other journals.

6. Subscription Fee - On-line version: FREE - Hardcopy version: Personal: US$30/year (surface mail); Institution: US$50/year (surface mail)

7. Paper Reviews and Acceptance Decision - electronic review process based on Adobe PDF, Postscript, or MS Word files - rapid and unbiased (double-blind) reviews - binary ("Accept" or "Reject") decisions without revision requirements for Letters (mandatory English editing services may be recommended) - minor revision may be requested for Review papers

8. Editors and Publisher
Publisher: KAIST Press Home Page: http://www.nip-lr.info and http://neuron.kaist.ac.kr/nip-lr/ (from May 15th, 2003)
Editor-in-Chief: Soo-Young Lee Director, Brain Science Research Center Korea Advanced Institute of Science and Technology 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701 Korea (South) Tel: +82-42-869-3431 Fax: +82-42-869-8490 E-mail: nip-lr at neuron.kaist.ac.kr

9. Paper Submission
All papers should be submitted to the Editor-in-Chief by e-mail at nip-lr at neuron.kaist.ac.kr. Detailed guidelines and paper formats will be shown at the journal homepages (http://www.nip-lr.info and http://neuron.kaist.ac.kr/nip-lr/), which will open on May 15th, 2003.

10. Others
Although the journal has the support of many APNNA (Asia-Pacific Neural Network Assembly) Governing Board members, the official relationship between the journal and APNNA will be discussed later. The journal is also expected to satisfy the requirements for inclusion in the SCI/SCIE in the near future.
----------------------------------------------------------------------
Neural Information Processing - Letters and Reviews

Editor-in-Chief: Soo-Young Lee Director, Brain Science Research Center Korea Advanced Institute of Science and Technology 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701 Korea (South) Tel: +82-42-869-3431 / Fax: +82-42-869-8490 E-mail: nip-lr at neuron.kaist.ac.kr

Advisory Board
Shun-ichi Amari, RIKEN Brain Science Institute, Japan
Rodney Douglas, University/ETH Zurich, Switzerland
Kunihiko Fukushima, Tokyo University of Technology, Japan
Terry Sejnowski, Salk Institute, USA
Harold Szu, Office of Naval Research, USA

Editorial Board
Alan Barros, Universidade Federal do Maranhao, Brazil
James A. Bednar, University of Texas at Austin, USA
Yoonsuck Choe, Texas A&M University, USA
Seungjin Choi, Pohang University of Science and Technology, Korea
Andrzej Cichocki, RIKEN Brain Science Institute, Japan
Wlodzislaw Duch, Nicolaus Copernicus University, Poland
Tom Gedeon, Murdoch University, Australia
Saman Halgamuge, University of Melbourne, Australia
Shigeru Ikeda, Institute of Statistical Mathematics, Japan
Masumi Ishikawa, Kyushu Institute of Technology, Japan
Marwan Jabri, Oregon Health and Sciences University, USA
Janusz Kacprzyk, Polish Academy of Sciences, Poland
Nikolas Kasabov, Auckland University of Technology, New Zealand
Okyay Kaynak, Bogazici University, Turkey
Dae-Shik Kim, University of Minnesota, USA
Seunghwan Kim, Pohang University of Science and Technology, Korea
Irwin King, Chinese University of Hong Kong, Hong Kong
Elmar Lang, University of Regensburg, Germany
Chong Ho Lee, Inha University, Korea
Daniel Lee, University of Pennsylvania, USA
Minho Lee, Kyungpook National University, Korea
Seong-Whan Lee, Korea University, Korea
Te-Won Lee, University of California, San Diego, USA
Chin-Teng Lin, National Chiao-Tung University, Taiwan
Sigeru Omatu, Osaka Prefecture University, Japan
Nikhil R. Pal, Indian Statistical Institute, India
Carlos G. Puntonet, University of Granada, Spain
Jagath C. Rajapakse, Nanyang Technological University, Singapore
Asim Roy, Arizona State University, USA
Christine Servire, Institut National Polytechnique de Grenoble, France
Jude Shavlik, University of Wisconsin, USA
Alessandro Sperduti, Università degli Studi di Padova, Italy
Ron Sun, University of Missouri-Columbia, USA
Shigeru Tanaka, RIKEN Brain Science Institute, Japan
Lipo Wang, Nanyang Technological University, Singapore
Takeshi Yamakawa, Kyushu Institute of Technology, Japan
Mingsheng Zhao, Tsinghua University, China
Yixin Zhong, University of Posts & Telecommunications, China
Michael Zibulevsky, Technion, Israel
-----------------------------------------------------------------------
Paper Format for the Neural Information Processing - Letters and Reviews

The text of the paper should fit in an area of 16.0 cm x 23.7 cm. For the recommended A4 paper size the top and bottom margins are 3.0 cm, and the left and right margins are 2.5 cm. The Letters should not exceed 6 pages, while no page limit is set for the Reviews. The abstract should have a 1.5 cm indent on both sides, and should not exceed 200 words. The paper should be written in a single column, single-spaced, in Times New Roman font. Bold characters should be used for the paper title and section headings. The recommended font size is 10 points, while 14 points and 12 points are used for the paper title and section headings, respectively. The paper should be organized in the order of paper title, authors' information, abstract, keywords, main text, acknowledgment, references, and authors' bio-sketches. The authors' information consists of author name(s), department and organization, and physical and e-mail addresses. Two versions of the paper should be submitted, with and without the authors' information and bio-sketches; the latter will be used during the double-blind review process. The paper title, author information, and section headings should be center-justified, while all else should be justified on both sides. The first line of each paragraph should have an indent of 1 cm, and one line space is used before and after section headings. Each section heading should have an Arabic number. All figures and tables should be placed at their proper locations in the text.
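As a quick sanity check on the numbers above (assuming the standard A4 page size of 21.0 cm x 29.7 cm, which the instructions do not state explicitly), the quoted 16.0 cm x 23.7 cm text area follows directly from the stated margins:

    # Hypothetical helper, not part of the journal's instructions.
    def text_area(page_w=21.0, page_h=29.7, lr_margin=2.5, tb_margin=3.0):
        # usable width/height = page size minus the margin on each side
        return page_w - 2 * lr_margin, page_h - 2 * tb_margin

    print(text_area())  # (16.0, 23.7) -- matches the stated text area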
Figure and table captions should start with "Figure 1." or "Table 1.", and should be center-justified. Table captions should be located just above the table itself, while figure captions should be located just below the figure. All references should be cited with numbers in brackets, e.g., [1], and listed in the order of citation. The acknowledgment should be located between the main text and the references. It is not recommended to use footnotes.

References
[1] A.B. Crown, "Neural-based Intelligent Systems," Neural Information Processing - Letters and Reviews, Vol. 1, No. 1, pp. 1-5, 2003.
[2] D. Evans, Neural Signal Processing, KAIST Press, 2003, pp. 111-124.

James B. Author graduated from the University of Free Academics in 1999, and is currently a professor of neural systems. His research interests include computational models of the human auditory pathway and neural information coding. (Home page: http://dns.ufa.ac.kr/~jauthor)

From sham at gatsby.ucl.ac.uk Tue May 6 22:42:22 2003 From: sham at gatsby.ucl.ac.uk (Sham Kakade) Date: Wed, 7 May 2003 03:42:22 +0100 (BST) Subject: PhD thesis available on reinforcement learning Message-ID:

Hi all, My thesis, "On the Sample Complexity of Reinforcement Learning", is now available at: http://www.gatsby.ucl.ac.uk/~sham/publications.html Below is the abstract and table of contents. cheers -Sham
=================================================================
Abstract: This thesis is a detailed investigation into the following question: how much data must an agent collect in order to perform "reinforcement learning" successfully? This question is analogous to the classical issue of the sample complexity in supervised learning, but is harder because of the increased realism of the reinforcement learning setting. This thesis summarizes recent sample complexity results in the reinforcement learning literature and builds on these results to provide novel algorithms with strong performance guarantees. We focus on a variety of reasonable performance criteria and sampling models by which agents may access the environment. For instance, in a policy search setting, we consider the problem of how much simulated experience is required to reliably choose a "good" policy among a restricted class of policies \Pi (as in Kearns, Mansour, and Ng [2000]). In a more online setting, we consider the case in which an agent is placed in an environment and must follow one unbroken chain of experience with no access to "offline" simulation (as in Kearns and Singh [1998]). We build on the sample based algorithms suggested by Kearns, Mansour, and Ng [2000]. Their sample complexity bounds have no dependence on the size of the state space, an exponential dependence on the planning horizon time, and linear dependence on the complexity of \Pi. We suggest novel algorithms with more restricted guarantees whose sample complexities are again independent of the size of the state space and depend linearly on the complexity of the policy class \Pi, but have only a polynomial dependence on the horizon time. We pay particular attention to the tradeoffs made by such algorithms.
=================================================================
Table of Contents:
Chapter 1 Introduction: 1.1 Studying the Sample Complexity 1.2 Why do we care about the sample complexity?
1.3 Overview 1.4 Agnostic Reinforcement Learning
Chapter 2 Fundamentals of Markov Decision Processes: 2.1 MDP Formulation 2.2 Optimality Criteria 2.3 Exact Methods 2.4 Sampling Models and Sample Complexity 2.5 Near-Optimal, Sample Based Planning
Chapter 3 Greedy Value Function Methods: 3.1 Approximating the Optimal Value Function 3.2 Discounted Approximate Iterative Methods 3.3 Approximate Linear Programming
Chapter 4 Policy Gradient Methods: 4.1 Introduction 4.2 Sample Complexity of Estimation 4.3 The Variance Trap
Chapter 5 The Mismeasure of Reinforcement Learning: 5.1 Advantages and the Bellman Error 5.2 Performance Differences 5.3 Non-stationary Approximate Policy Iteration 5.4 Remarks
Chapter 6 \mu-Learnability: 6.1 The Trajectory Tree Method 6.2 Using a Measure \mu 6.3 \mu-PolicySearch 6.4 Remarks
Chapter 7 Conservative Policy Iteration: 7.1 Preliminaries 7.2 A Conservative Update Rule 7.3 Conservative Policy Iteration 7.4 Remarks
Chapter 8 On the Sample Complexity of Exploration: 8.1 Preliminaries 8.2 Optimality Criteria 8.3 Main Theorems 8.4 The Modified R_{max} Algorithm 8.5 The Analysis 8.6 Lower Bounds
Chapter 9 Model Building and Exploration: 9.1 The Parallel Sampler 9.2 Revisiting Exploration
Chapter 10 Discussion: 10.1 N, A, and T 10.2 From Supervised to Reinforcement Learning 10.3 POMDPs 10.4 The Complexity of Reinforcement Learning

From bapics at uohyd.ernet.in Thu May 8 06:56:23 2003 From: bapics at uohyd.ernet.in (Dr. Raju Bapi) Date: Thu, 8 May 2003 18:56:23 +0800 (SGT) Subject: Call for papers - Neural and Cognitive Modeling (IICAI03) Message-ID:

*****Apologies for cross posting****** *****Please forward to interested people*******

Call for Papers for the Technical Session on Neural and Cognitive Modeling First Indian International Conference on Artificial Intelligence Hyderabad, INDIA December 18-20, 2003
------------------------------------------------------------------
About the Session
This session focuses on issues in computational/theoretical neuroscience and cognitive modeling. The ideas presented could be proposals of conceptual frameworks or concrete models of brain function and dysfunction. Models could be pitched at the sub-neuronal, neuronal, population, or brain-systems level. The processes could be related to language, speech, planning, decision making, reasoning, learning, memory, cognition, emotion, attention, awareness, vision, audition, other sensory and motor domains, pattern and object recognition, neuromodulation, etc. The data for constraining models could originate from various experimental methods such as EEG, ERP, PET, fMRI, MEG, electrophysiology, psychophysics, and behavioral, ethological, developmental and clinical studies. The modeling methods may be mathematical, statistical, neural networks, symbolic artificial intelligence and computer simulations. The above description is indicative of potential topics but not restrictive. For any clarifications, please contact the Session Chair.

Instructions for the authors
Prospective authors are invited to submit their papers electronically to the Session Chair by the due date. Authors should use the style files or MS-Word templates provided by the Springer Lecture Notes to format their papers. The length of a submitted paper should not exceed 14 pages. Short papers and work currently in progress are also welcome. The papers must be in PDF or PS format.
The first page of the draft paper should contain: title of the paper; name, affiliation, postal address, and e-mail address for each author, including the name of the author who will be presenting the paper (if accepted). The first page should contain a maximum of 5 keywords most appropriate to the content.

Important Dates
Last date for submission of papers: Tuesday, July 1, 2003
Notification of acceptance: Friday, August 1, 2003
Camera-ready copies of accepted papers due: Friday, August 29, 2003

Please visit the conference website for registration details: http://www.iiconference.org
---------------------------------------------------------------
Submissions for the Neural and Cognitive Modeling session should be sent electronically (PS, PDF or Word attachments) to the Session Chair: Dr. Raju S. Bapi, Reader, Dept of Comp and Info Sci., University of Hyderabad, Gachibowli, Hyderabad, India 500 046 Phone: +91 40-23010500 - 512 Ext: 4017 / 4025 Fax: +91 40-23010145 email: bapics at uohyd.ernet.in (and) ksbapi at yahoo.com (Please send email to both addresses with the subject line "IICAI-03")
---------------------------------------------------------------

From steve at hss.caltech.edu Fri May 9 19:19:22 2003 From: steve at hss.caltech.edu (Steven Quartz) Date: Fri, 9 May 2003 16:19:22 -0700 Subject: Caltech Postdoctoral Positions Message-ID: <016201c31681$6cbe29e0$3c17d783@caltech.edu>

Postdoctoral Positions at California Institute of Technology

Applications are invited for multiple postdoctoral fellowships to study the neural basis of reward, valuation, and decision-making utilizing functional brain imaging at the California Institute of Technology. These fellowships are funded by the David and Lucile Packard Foundation and the Moore Foundation and are part of a new interdisciplinary project at Caltech that brings together experimental economists, cognitive neuroscientists, behavioral biologists, and others to investigate the neural basis of social cognition and decision-making, with particular emphasis on economic and moral behavior. Research will take place in a new imaging center at Caltech that houses a Siemens 3T whole body scanner, a vertical monkey scanner, and a small animal high field scanner. There is also the opportunity for interaction with computational neuroscience research on decision-making. Applicants with a Ph.D. in neuroscience, cognitive science, computer science/engineering, or social science are encouraged to apply. A background in functional brain imaging is desirable, but other backgrounds will also be considered. Review of applications will start immediately and continue until the positions are filled. The fellows will be supervised by Drs. Steven Quartz, John Allman, David Grether, and Colin Camerer. Interested individuals should send a statement of research interests and background, a CV, and names for 3 letters of recommendation either via email to steve at hss.caltech.edu or via snail mail to: Dr. Steven Quartz, MC 228-77, California Institute of Technology, Pasadena, CA 91125.

From laura.bonzano at dibe.unige.it Fri May 9 05:47:44 2003 From: laura.bonzano at dibe.unige.it (Laura Bonzano) Date: Fri, 9 May 2003 11:47:44 +0200 Subject: Summer school on Neuroengineering Message-ID: <00fe01c31610$0aaa8120$3c59fb82@ranma>

Dear list members, I'm glad to announce the first edition of the "European Summer School on Neuroengineering", dedicated to the memory of Prof. Massimo Grattarola, organized by:
- Prof.
Sergio Martinoia, Neuroengineering and Bio-nanoTechnologies Group, Department of Biophysical and Electronic Engineering (DIBE), University of Genova, Italy
- Prof. Pietro Morasso, Department of Communications, Computer and System Sciences (DIST), University of Genova, Italy
- Ing. Fabrizio Davide, Telecom Italia Learning Services (TILS)

It will take place June 16-20, 2003, in Venice. A presentation of this event is attached. For more information and application forms, please visit: http://www.tils.com/neurobit/html/n_1.asp http://www.bio.dibe.unige.it/news_and_events/news_and_events_frames.htm Please forward it to anyone potentially interested. Best Regards, Laura Bonzano Apologies if you receive this more than once.
----------------------------------------------------------------
Ing. Laura Bonzano, Ph.D. Student Neuroengineering and Bio-nanoTechnologies - NBT Department of Biophysical and Electronic Engineering - DIBE Via Opera Pia 11A, 16145, GENOA, ITALY Phone: +39-010-3532765 Fax: +39-010-3532133 URL: http://www.bio.dibe.unige.it/ E-mail: laura.bonzano at dibe.unige.it
*************************************************************************
First European School on Neuroengineering Massimo Grattarola http://www.tils.com/neurobit/html/n_1.asp Venice, 16-20 June 2003

Telecom Italia Learning Services (TILS) and the University of Genoa (DIBE, DIST, Bioengineering course) are currently organizing the first edition of a European Summer School on Neuroengineering. The school will be dedicated to the memory of Massimo Grattarola. The first edition, which will last for five days, will be held from June 16 to June 20, 2003 at Telecom Italia's Future Center in Venice. The School will cover the following main themes:

1. Neural code and plasticity
- Development and implementation of methods to identify, represent and analyze hierarchical and self-organizing systems
- Cortical computational paradigms for perception and action

2. Brain-like adaptive information processing systems
- Development and implementation of methods to identify, represent and analyze hierarchical and self-organizing systems
- Models of learning, representation and adaptability based on knowledge of the nervous system
- Exploration of the capabilities of natural neurobiological systems as flexible computational devices
- Use of information from nervous systems to engineer new control techniques and new artificial systems
- Development of highly innovative Artificial Neural Networks capable of reproducing the functioning of vertebrate nervous systems

3. Bio-artificial systems
- Development of novel brain-computer interfaces
- Development of new techniques for neuro-rehabilitation and neuro-prostheses
- Hybrid silicon/biological systems

In response to current reforms in training at the Italian and European levels, the Summer School on Neuroengineering is certified to grant credits. These include ECM (Educazione Continua in Medicina) credits, recognized by the Italian Ministry of Health, and ECTS (European Credit Transfer System) credits, recognized by all European Universities.
*************************************************************************

From wsom at brain.kyutech.ac.jp Mon May 12 05:09:59 2003 From: wsom at brain.kyutech.ac.jp (WSOM'03 Secretariat) Date: Mon, 12 May 2003 18:09:59 +0900 Subject: WSOM'03 Paper Submission Deadline Extended Message-ID: <00bc01c31866$438227c0$0c4111ac@yamac.brain.kyutech.ac.jp>

Apologies if you have received multiple copies.
Paper Submission Deadline extended to 10 June, 2003.
=============================================
Workshop on Self-Organizing Maps (WSOM'03) 11-14 September 2003 Hibikino, Kitakyushu, Fukuoka, Japan http://www.brain.kyutech.ac.jp/~wsom/
=============================================

CALL FOR PAPERS

Workshop Objectives:
==================
The Self-Organizing Map (SOM), with its related extensions, is the most popular artificial neural algorithm for use in unsupervised learning and data visualization. Over 5,000 publications have been reported in the open literature, and many commercial projects employ the SOM as the tool for solving hard real-world problems. WSOM'03 is the discussion forum where your ideas and techniques are polished, and it aims to unveil the results of hot research and popularize the use of the SOM to the technical public. Following the highly successful meetings held in 1997 (WSOM'97), 1999 (WSOM'99), and 2001 (WSOM'01), a further workshop in this established series will bring together researchers and users of the SOM and related techniques.

Topics:
======
Technical areas include, but are not limited to:
* Self-organization
* Unsupervised learning
* Theory and extensions
* Optimization
* Hardware and architecture
* Signal processing, image processing and vision
* Medical engineering
* Time-series analysis
* Text and document analysis
* Financial analysis
* Data visualization and mining
* Bioinformatics
* Robotics

Important Dates:
==============
Paper Submission: 10 June, 2003
Notification of Acceptance: 30 June, 2003
Final Paper Submission: 25 July, 2003

Organizing Committee:
===================
Honorary Conference Chair: Teuvo Kohonen, Finland
Organizing Chair: Takeshi Yamakawa, Japan
Organizing Committee Members: Erkki Oja, Finland; Heizo Tokutaka, Japan
Program Chair: Masumi Ishikawa, Japan
Program Committee Members: Marie Cottrell, France; Guido Deboeck, USA; Shinto Eguchi, Japan; Kikuo Fujimura, Japan; Colin Fyfe, UK; Masafumi Hagiwara, Japan; Jaakko Hollmen, Finland; Keiichi Horio, Japan; Marc M. Van Hulle, Belgium; Toshimichi Ikemura, Japan; Samuel Kaski, Finland; Gerhard Kranner, Austria; Thomas Martinetz, Germany; Kiyotoshi Matsuoka, Japan; Dieter Merkl, Austria; Risto Miikkulainen, USA; Yoshikazu Miyanaga, Japan; Tsutomu Miyoshi, Japan; Takashi Morie, Japan; Junichi Murata, Japan; Ikuko Nishikawa, Japan; Klaus Obermayer, Germany; Aiko Shibata, Japan; Wataru Shiraki, Japan; Olli Simula, Finland; Eiji Uchino, Japan; Alfred Ultsch, Germany; Michel Verleysen, Belgium; Thomas Villmann, Germany; Lei Xu, China; Shozo Yasui, Japan; Hujun Yin, UK

Paper Submissions:
=================
Authors are invited to submit full papers before 10 June, 2003, by email to wsom at brain.kyutech.ac.jp. Detailed information will be available at the WSOM'03 webpage: http://brain.kyutech.ac.jp/~wsom/
-----------------------------------------------
WSOM'03 Secretariat: Keiichi Horio, Assistant Professor, Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu, Kitakyushu 808-0196 Japan Tel: +81-93-695-6127 E-mail: horio at brain.kyutech.ac.jp
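For readers new to the workshop's subject, here is a minimal sketch of the SOM training loop itself -- a generic textbook formulation with illustrative data and parameters, not code from any WSOM contribution: each input is assigned to its best-matching unit, and that unit and its grid neighbours are pulled toward the input, with the learning rate and neighbourhood width shrinking over time.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.random((500, 3))        # toy 3-dimensional inputs
    side = 10                          # a 10 x 10 map
    grid = np.stack(np.meshgrid(np.arange(side), np.arange(side)),
                    axis=-1).reshape(-1, 2)
    W = rng.random((side * side, 3))   # one prototype vector per map node

    eta, sigma = 0.5, 3.0              # learning rate, neighbourhood width
    for x in data:
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)    # squared grid distances
        h = np.exp(-d2 / (2 * sigma ** 2))            # neighbourhood kernel
        W += eta * h[:, None] * (x - W)               # pull neighbours toward x
        eta *= 0.995                                  # decay the learning rate
        sigma *= 0.995                                # and the neighbourhood

After training, nearby nodes of the grid hold similar prototypes, which is what makes the map useful for the visualization and data-mining topics listed above.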
From Klaus at first.fhg.de Mon May 12 12:25:58 2003 From: Klaus at first.fhg.de (Klaus-R. Mueller) Date: Mon, 12 May 2003 18:25:58 +0200 Subject: EU summer school on ICA Message-ID: <3EBFCB16.9070600@first.fhg.de>

Please POST to interested students and researchers: PLEASE REGISTER NOW - PLEASE POST - PLEASE REGISTER NOW

Dear colleagues, It is a pleasure to announce the *European summer school on ICA - from theory to applications* in Berlin, Germany, on June 16-17, 2003 http://ida.first.gmd.de/~harmeli/summer_school/ organized by the BLISS project (http://www.bliss-project.org). FLYER TO REGISTER: http://ida.first.gmd.de/~harmeli/summer_school/flyer.pdf

Confirmed speakers include:
Luis Almeida, INESC ID, Lisbon, Portugal
Francis Bach, UC Berkeley, Berkeley, USA
Jean-Francois Cardoso, ENST, Paris, France
Gabriel Curio, UKBF, Berlin, Germany
Lars-Kai Hansen, Technical University of Denmark, Lyngby, Denmark
Stefan Harmeling, Fraunhofer FIRST, Berlin, Germany
Simon Haykin, McMaster University, Hamilton, Canada
Christian Jutten, INPG, Grenoble, France
Juha Karhunen, HUT, Helsinki, Finland
Te-Won Lee, Salk Institute, San Diego, USA
Klaus-Robert Müller, Fraunhofer FIRST, Berlin, Germany
Klaus Obermayer, TU Berlin, Berlin, Germany
Erkki Oja, HUT, Helsinki, Finland
Dinh-Tuan Pham, LMC-IMAG, Grenoble, France
Laurenz Wiskott, Humboldt-University, Berlin, Germany
Michael Zibulevsky, Technion, Haifa, Israel
Andreas Ziehe, Fraunhofer FIRST, Berlin, Germany

Local organizing committee:
* Klaus-Robert Müller, Fraunhofer FIRST, Berlin, Germany
* Stefan Harmeling, Fraunhofer FIRST, Berlin, Germany
* Andreas Ziehe, Fraunhofer FIRST, Berlin, Germany

NOTE, THERE WILL BE A POSTER SESSION WHERE STUDENTS CAN PRESENT THEIR RESEARCH. Please POST to interested students and researchers. Best regards, klaus
--
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
Prof. Dr. Klaus-Robert Müller University of Potsdam and Fraunhofer Institut FIRST Intelligent Data Analysis Group (IDA) Kekulestr. 7, 12489 Berlin e-mail: Klaus-Robert.Mueller at first.fraunhofer.de and klaus at first.gmd.de Tel: +49 30 6392 1860 Tel: +49 30 6392 1800 (secretary) FAX: +49 30 6392 1805 http://www.first.fhg.de/persons/Mueller.Klaus-Robert.html
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&

From ASIM.ROY at asu.edu Mon May 12 19:40:28 2003 From: ASIM.ROY at asu.edu (Asim Roy) Date: Mon, 12 May 2003 16:40:28 -0700 Subject: Summary of panel discussion at IJCNN'2002 and ICONIP'02 on the question: "Oh sure, my method is connectionist too. Who said it's not?" Message-ID:

This note summarizes the panel discussions that took place at IJCNN'2002 (International Joint Conference on Neural Networks) in Honolulu, Hawaii in May 2002 and at ICONIP'02-SEAL'02-FSKD'02 (the 9th International Conference on Neural Information Processing, the 4th Asia-Pacific Conference on Simulated Evolution And Learning, and the 2002 International Conference on Fuzzy Systems and Knowledge Discovery) in November 2002 in Singapore. IJCNN'2002 was organized jointly by INNS (International Neural Network Society) and the IEEE Neural Network Council. This was the fifth panel discussion at these neural network conferences on the fundamental ideas of connectionism. The discussion topic at both of these conferences was: "Oh sure, my method is connectionist too. Who said it's not?" The abstract below summarizes the issues/questions that were addressed by this panel. The following persons were on these panels; their bio-sketches are included at the end.

At ICONIP'02 in Singapore:
1. Shun-Ichi Amari
2. Wlodzislaw Duch
3. Kunihiko Fukushima
4. Nik Kasabov
5. Soo-Young Lee
6. Erkki Oja
7. Xin Yao
8. Lotfi Zadeh 9. Asim Roy

At IJCNN'2002 in Honolulu: 1. Bruno Apolloni 2. Robert Hecht-Nielsen 3. Robert Kozma 4. Steve Rogers 5. Ron Sun

Thanks to Lipo Wang, General Chair of ICONIP'02, and Donald C. Wunsch, Program Co-Chair of IJCNN'02, for allowing these discussions to take place. For those interested, summaries of prior debates on the basic ideas of connectionism are available at the CompNeuro archive site. Here is a partial list of the prior debate summaries available there:

http://www.neuroinf.org/lists/comp-neuro/Archive/1999/0079.html - Some more questions in the search for sources of control in the brain
http://www.neuroinf.org/lists/comp-neuro/Archive/1998/0084.html - BRAINS INTERNAL MECHANISMS - THE NEED FOR A NEW PARADIGM
http://www.neuroinf.org/lists/comp-neuro/Archive/1997/0069.html - COULD THERE BE REAL-TIME, INSTANTANEOUS LEARNING IN THE BRAIN?
http://www.neuroinf.org/lists/comp-neuro/Archive/1997/0057.html - CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS?
http://www.neuroinf.org/lists/comp-neuro/Archive/1997/0012.html - DOES PLASTICITY IMPLY LOCAL LEARNING? AND OTHER QUESTIONS
http://www.neuroinf.org/lists/comp-neuro/Archive/1996/0047.html - Connectionist Learning - Some New Ideas/Questions

Asim Roy, Arizona State University

Panel Question: "Oh sure, my method is connectionist too. Who said it's not?"

Description: Some claim that the notion of connectionism is an evolving one. Since the publication of the PDP book (which enumerated the then-accepted principles of connectionism), many new ideas have been proposed and many new developments have occurred. So, according to these claims, the connectionism of today is different from the connectionism of yesterday. Examples of such new developments in connectionism include hybrid connectionist-symbolic models (Sun 1995, 1997), neuro-fuzzy models (Keller 1993, Bezdek 1992), reinforcement learning models (Kaelbling et al. 1994, Sutton and Barto 1998), genetic/evolutionary algorithms (Mitchell 1994), support vector machines (references), and so on. In these newer connectionist models, there are many violations of the "older" connectionist principles. One of the simplest violations is the reading and setting of connection weights in a network by an external agent in the system. The means and mechanisms of external setting and reading of weights were not envisioned in early connectionism. Why do we need local learning laws if an external source can set the weights of a network? So this and other features of these newer methods are obviously in direct conflict with early connectionism. In the context of these algorithmic developments, it has been said that maybe nobody at this stage has a clear definition of connectionism, and that everyone makes things up (in terms of basic principles) as they go along. Is this the case? If so, does this pose a problem for the field? To defend this situation, some argue that connectionism is not just one principle, but many. Is that the case? If not, should we redefine connectionism, given the needs of these new types of learning methods and on the basis of our current knowledge of how the brain works? This panel intends to closely examine this issue in a focused and intensive way. Debates are expected. We hope to at least clarify some fundamental notions and issues concerning connectionism, and hopefully also make some progress on understanding where it needs to go in the near future.
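To make the contrast at issue concrete, here is a minimal Python sketch (the function names are hypothetical illustrations, not taken from any panelist or paper): a local learning law changes each weight using only information available at that connection, whereas an external agent reads or sets the weights wholesale, outside any such law.

    import numpy as np

    def hebbian_step(w, pre, post, lr=0.01):
        # Local learning law: the change to w[i, j] depends only on
        # the activities of the two units that the connection joins.
        return w + lr * np.outer(post, pre)

    def external_overwrite(w, w_new):
        # External setting: an outside agent replaces the weights
        # directly, bypassing any local learning law.
        return np.array(w_new, dtype=float)

    w = np.zeros((3, 4))
    w = hebbian_step(w, pre=np.array([1.0, 0.0, 1.0, 0.0]),
                     post=np.array([1.0, 1.0, 0.0]))
    w = external_overwrite(w, np.ones((3, 4)))  # the "violation"

Nothing in the local rule anticipates the overwrite; that is precisely why such external access sits uneasily with early connectionist principles.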
BRIEF SUMMARY OF INDIVIDUAL REMARKS

Shun-ichi Amari -- RIKEN Brain Science Institute, Japan
Connectionism -- Theory or Philosophy

A ghost haunted the world in the mid-eighties. It was named connectionism. It was welcomed enthusiastically by many people, but was hated intensely by the traditional AI people. Now there is a question: What is connectionism, and where is it going? Is it too old now? If it is a collection of theories, there have been many new developments. However, even at the time of connectionism's rise, its theories relied mostly on those developed in the seventies; most were found to be rediscoveries. As a philosophy, however, it declared forcefully that information in the brain is distributed and processed in parallel through dynamic interactions, and that novel engineering systems should learn from this fact. This philosophy has been accepted with enthusiasm, and has generated many new theories and findings. It is still valid.

Bruno Apolloni -- University of Milano, Italy
Connectionism or not connectionism.

The true revolution of connectionism has been to establish heuristics, that is, subsymbolic computations, as scientific matter. But the Aristotelian thought in our genomic inheritance puts a sharp divide between the noble brain activity represented by symbolic reasoning and the low-level faculties, such as intuition, fantasy, and everything that in general cannot be gauged by a theorem but is explicable only through the electro-chemistry of our neural activity. A second revolution is currently demolishing the barrier between the two categories, recognizing that neurons are also the seat of abstract thought and that no sharp difference separates the two mental attitudes. Like a photon that is both a particle and a wave, a thought is materially a neural network: one so well proven that it exhibits fixed points under adaptation to the environment, and so simple that it can be stored in a limited amount of memory. Learning algorithms also reveal themselves to be similar at the two levels. The search for reward is almost random at the subsymbolic level, and is pursued through hypothetical, possibly absurd, reasoning at the symbolic one. The reaction to punishment is largely unconscious at the former level, while at the symbolic level it is a source of intellectual pain and a search for ways to avoid it. The key point is to maintain an efficient set of feedbacks between the levels. In such a framework we discover ourselves to be material automatons, or rather physical matter sharing the nature of a god.

Wlodzislaw Duch -- Nicholas Copernicus University, Torun, Poland

Everybody has some view of what connectionism is, and all these points of view are in some way limited. Perhaps we should not worry about definitions. New branches of science, as Allen Newell once said, are not defined, but emerge from the common interests of people who meet at conferences, discuss problems they find interesting, and establish new journals. When people ask me what I am working on, what do I usually say? Computational intelligence: trying to solve problems that are not effectively algorithmizable. Is this connectionism? Unless I work in neurobiology, where methods should be biologically plausible, I do not care. It may be more statistical, or more evolutionary; as long as it solves interesting problems, it is something worth working on. Unfortunately, it is not quite so simple. Since the matter has not been thoroughly discussed, connectionist approaches have spontaneously developed in many different directions, and we face all kinds of problems now.
Some neural network conferences do not accept papers on statistical learning methods because they are not neural, but accept SVM papers, although these have equally little to do with connectionism. Because recent developments in SVMs were somehow connected with perceptrons, and papers on this subject appear mostly in neural journals, SVMs are perceived as a branch of neural computing. Many articles in the IEEE Transactions on Neural Networks, and in other neural network journals, have little to do with connectionist ideas. Although the IEEE formed the Neural Network Society, conferences organized by this society cover a much broader range of topics. Not only are SVMs welcome, but so are Bayesian belief networks and their outgrowth, graphical methods, statistical mean-field theories, sampling techniques, chaos, dynamical systems methods, fuzzy, evolutionary, swarm, immune-system, and many other techniques. Fields overlap, boundaries are fuzzy, and we do not know how to define connectionism any more. Many people work on the same problems using different approaches that originate from the fields they were trained in. Classification Society or numerical taxonomy experts sometimes know about neural networks, but do neural network experts know what the Classification Society or the Pattern Recognition society is doing, and what kinds of problems and methods are of interest to them? For example, committees of models are investigated by the neural network, evolutionary, machine learning, numerical taxonomy, and pattern recognition communities. The same problems are solved over and over by people who do not know of the other fields' existence. How can we bring together experts working on the same problems from different perspectives? If anything should come out of this discussion, it should not be a definition of connectionism, but rather the understanding that a great deal of research effort is duplicated many times over. The point is that we are too focused on the methods, forgetting about the challenges and problems that wait to be solved. It is too easy to modify one of the neural methods, add another term to the error function, or modify the network architecture. Infinitely many variants of clustering, or unsupervised learning methods, may be devised. Are the classical branches of science defined by their methods? Biology, physics, chemistry, and the other classical branches of science were always problem-oriented. Why do we keep thinking in a method-oriented way? Connectionist or not, does it solve the problem? Defining our field of interest as the search for solutions to problems that are non-algorithmic, problems for which effective algorithms do not exist, makes it problem-oriented. Solutions to such problems require intelligence. Since we solve them with computational means, the field may appropriately be called Computational Intelligence (CI). Connectionist methods are an important part of this field, but there is no reason to restrict oneself to one group of methods. A good example is the contrast between the symbolic, rule-based methods used by Artificial Intelligence (AI) and subsymbolic, neural methods. Contrasting neural networks with rule-based AI must ultimately fail. How will we solve problems requiring systematic thinking without rules? Rules must somehow emerge from networks. Some CI problems require knowledge and symbolic reasoning, and this is where traditional AI has focused.
These problems are related to higher cognitive functions, such as thinking, reasoning, planning, problem solving, and understanding natural language. Neural computing, on the other hand, has tried to solve problems requiring sensorimotor functions, perception, control, and the development of feature detectors: problems concerned with low-level cognition. Computational intelligence, being problem-oriented, is interested in algorithms coming from all sources. Search and logical rules may solve problems in theorem proving or sentence parsing that connectionist techniques are not able to solve. Learning and adaptation are just one side of intelligence. Although our brains use neurons to solve problems requiring systematic reasoning, there must be a way to approximate this neural activity with symbolic, search-based processes. As with all approximations it may sometimes break down, but in most cases AI expert systems do solve interesting problems. Instead of stressing the differences it may be better to join forces, since low and high cognitive functions are both needed for true intelligence. Solving problems for which effective algorithms do not exist, by connectionist or other methods, provides a clear definition of Computational Intelligence. A clear definition of neural computing, or of soft computing, one that covers everything that experts in these fields work on, is very difficult to agree upon, because of their method, rather than problem, orientation. Early connectionism was naive: psychologists were writing papers showing that MLPs are not almighty. Everybody knows that now. For some tasks modular networks are necessary; the brain is not just one big network. External sources - other parts of the brain - control learning; for example, the limbic structures involved in emotions decide what is interesting and worth learning. Weights are not constant, but are a function of inputs, not just in long-term but also in short-term dynamics. But neurons, networks, and brain functions are only one source of inspiration for us. Many methods were inspired by the Parallel Distributed Processing (PDP) idea. The name PDP did not become popular, since "neural networks" sounded much better. Almost all algorithms may be represented in some graphical way, with nodes representing functions or local processors. Graphical computations are going to be popular, but this is again just a broad group of algorithms, not a candidate for a branch of science. Modeling neurobiological systems at a very detailed level leads to computational neuroscience. Simpler approximations are still useful for modeling various brain functions and processes. Very rough approximations, leading to modular neural networks in which single neurons do not matter but the distributed nature of processing is important, lead to connectionist systems useful for psychology. These fields are appropriately based on neural processing, although they overlap strongly with many other branches of science: neuroscience with neurochemistry, molecular biology, and genetics, and connectionist approaches in psychology with cognitive science. Engineering applications of neural computing should be less method-oriented and more problem-oriented. If we do not make an effort in this direction, many journals and conferences will present solutions to the same problems, repeating the same work many times over and preventing comparison of results that may be obtained using different methods. Time will pass, but we shall not grow wiser ...
Kunihiko Fukushima -- Tokyo University of Technology, Japan
Find Out Other Principles that Govern the Brain

The final goal of connectionism is to understand the mechanism of information processing in the biological brain. In the history of connectionism, we have experienced two research booms, one starting around 1960 and another around 1985. One of the triggers for the first boom was the proposal of the "perceptron" neural network model by Rosenblatt; the trigger for the second boom was the introduction of the idea of cost minimization. In both cases, the difficult problem of understanding information processing in the brain was reduced to simpler problems: in the first case, to the analysis of a model called the perceptron; in the second case, to a purely mathematical problem of cost minimization. This replacement with simpler problems allowed nonprofessionals to join brain research easily, without extensive knowledge of neurophysiology or psychology. The brain is a system that works under several constraints. Once one of these constraints was framed as a hypothesis of cost minimization, the analysis of a system working under that constraint became very easy. In other words, the process of understanding the brain was divided into two steps: biological experiments and the solving of mathematical problems. This division of labor allowed nonprofessionals to enter brain research very easily. It is true that the technique of cost minimization was very powerful. Not only can it explain brain mechanisms; it is also useful for other problems, such as forecasting the weather or even the stock market. Although this approach has produced great advances in brain research, it involves a risk at the same time. Researchers engaged in such work have a strong tendency toward the illusion that they are doing research on the biological brain. They often forget that they are simply analyzing the behavior of a system that works under a certain constraint. This is similar to the situation in the 1960s: everyone forgot the fact that they were analyzing a system called the perceptron, and erroneously believed that they were doing research on the brain itself. Once the mathematical limitations of the perceptron's abilities became clear, they moved away, not only from research on the perceptron, but from research on the brain itself. Their illusory belief caused the research winter of the 1970s. The mathematical limitations of the principle of cost minimization are now becoming clear. Cost minimization is not the only rule that controls the biological brain. It is now time to find the other constraints that govern the biological brain; otherwise, we will have another research winter.

Robert Hecht-Nielsen -- University of California, San Diego

Robert Hecht-Nielsen's current views are described in Chapter 4 of the new book: Hecht-Nielsen, R. & McKenna, T. [Eds] (2003) Computational Models for Neuroscience: Human Cortical Information Processing [London, Springer-Verlag].

Nik Kasabov -- Auckland University of Technology, New Zealand

Yes, indeed, very often researchers claim that their method is connectionist, too. We talk about a method being connectionist if the method utilizes artificial neurons (basic processing units) and connections between them, and if two main functions are performed in this connectionist environment - learning and generalization [1,2].
Without the above characteristics, it is hard to classify a method as connectionist. A method can be hybrid connectionist, too, if the connectionist principles above are integrated with other principles of information processing, such as rule-based systems, fuzzy systems [3], evolutionary computation [4], etc. There are additional characteristics that reinforce the connectionist principles in a method: for example, adaptive, on-line learning in an evolving connectionist structure; learning and capturing abstract information, rules; modular connectionist organization; different types of learning available in one system (e.g., active, passive, supervised, unsupervised, reinforcement); relating neurons to the genes contained in them, regarded as parameters of the learning and development process [5]. As connectionism is inspired by the organization and functioning of the brain, we can assume that the more brain-like a method is, the more connectionist it is. This is true. On the other hand, a connectionist method, as defined above, can be more engineering- (application-) or mathematics-oriented rather than brain-oriented. For brain-study research and for the modeling of brain functions it is important to have adequate brain-like models [6], but for an engineering method it is irrelevant to ask how connectionist it is if it serves its purpose well. In the end, all possible directions for the development of new scientific methods for information processing should be encouraged if these methods contribute to progress in science and technology, regardless of how connectionist they really are. And the more a method can gain from the principles of connectionism, the better, as information processing methods are constantly "moving" towards being more human-oriented and human-like, to serve humanity.

[1] McClelland, J., Rumelhart, D., et al. (1986) Parallel Distributed Processing, vol. II, MIT Press.
[2] Arbib, M. (1995, 2003) The Handbook of Brain Theory and Neural Networks. The MIT Press.
[3] N. Kasabov (1996) Foundations of Neural Networks, Fuzzy Systems and Knowledge Engineering, The MIT Press.
[4] X. Yao (1993) Evolutionary artificial neural networks, Int. Journal of Neural Systems, vol. 4, No. 3, 203-222.
[5] N. Kasabov (2002) Evolving Connectionist Systems - Methods and Applications in Bio-informatics, Brain Study and Intelligent Machines, Springer Verlag.
[6] Amari, S. and N. Kasabov (1998) Brain-like Computing and Intelligent Information Systems, Springer Verlag.

Chaotic neurodynamics - A new frontier in connectionism
Robert Kozma, University of Memphis, Memphis, TN 38152
Summary of my viewpoint presented at the Panel Session "My method is connectionist, too!" at IJCNN'02 / WCCI'02, Honolulu, May 10-15, 2002

All nontrivial problems we face in practical applications of pattern recognition and intelligent information processing systems require a nonlinear approach. In addition, adaptivity of the models is very often a key requirement, one which allows robust solutions to real-life problems to be produced. Connectionist models, and neural networks in particular, offer exactly these qualities. It is not surprising, therefore, that connectionism enjoys wide popularity in the literature. Connectionist methods can be considered a family of nonlinear statistical tools for pattern recognition with a large number of parameters, which are adapted using powerful learning algorithms.
In most cases, the parameterization and learning algorithm guarantee that the trained network operates in a convergent regime. In other words, the activation levels of the network's nodes approach a steady-state value in the autonomous case or when the inputs to the network are constant. There is an emerging field of research using dynamical neural networks that operate in oscillatory limit cycles or in a chaotic regime; see, e.g., Aihara et al. (1990). Although the first nonconvergent neural network models were proposed about four decades ago (Freeman, 1975), only recently has the time become ripe to embrace these ideas and include them in the mainstream of connectionist science. These new developments are facilitated by advances both inside and outside connectionism. In the past decades, research into convergent NNs laid down solid theoretical foundations, which can now be extended to the chaotic domain. In addition, the mathematical theory of dynamical systems and chaos has by now reached maturity, i.e., it can address the very complex issues raised by high-dimensional chaotic models such as neural systems (Kaneko, 1990; Tsuda, 2001). Spatio-temporal neurodynamics is a key focus area of the research into nonconvergent neural systems. Within this field, we emphasize the role of intermediate-range, or mesoscopic, effects in describing population dynamics (Kozma & Freeman, 2001). There are two major factors contributing to the emergence of the mesoscopic paradigm of neuroscience: 1. Biological systems exhibit a mesoscopic level of organization unifying 10^4 to 10^6 neurons, while the overall system size is 10^10 to 10^12 neurons. The mesoscopic approach provides an intermediate level between the local (microscopic) and global (macroscopic) levels. Mesoscopic levels are biologically plausible. Artificial neural systems, however, do not need to imitate all the details of biological neural systems, so it is arguable whether we should follow nature's path in this case. 2. The introduction of a mesoscopic level is also very practical from a computational perspective. With present technology, it is not feasible to create computational devices with 10^10 to 10^12 processing units, the complexity level dictated by studies of the scaling properties of complex networks. We probably need to wait at least 10-15 years for nanotechnology to become mature enough to produce systems of that complexity (Govindan, 2002). Until the technology for creating such an immense concentration of computational power arrives, software and hardware implementations of neural networks representing a mesoscopic level of granulation can provide a practically usable tool (Principe et al., 2001) for building models of space-time neurodynamics.

References:
Aihara, K., Takabe, T., Toyoda, M. (1990) Chaotic neural networks, Phys. Lett. A, 144(6-7), 333-340.
Freeman, W.J. (1975) Mass Action in the Nervous System, Academic Press, New York.
Govindan, T.R. (2002) NASA/USRA Workshop on Biology-Information Science-Nanotechnology Fusion (BIN), Ames, CA, Oct. 7-9, 2002.
Kaneko, K. (1990) Clustering, coding, switching, hierarchical ordering, and control in a network of chaotic elements, Physica D, 41, 137-172.
Kozma, R. and W.J. Freeman (2001) Chaotic Resonance - Methods and applications for robust classification of noisy and variable patterns, Int. J. Bifurc. & Chaos, 11(6), 2307-2322.
Principe, J.C., Tavares, V.G., Harris, J.G., Freeman, W.J. (2001) Design and Implementation of a Biologically Realistic Olfactory Cortex in Analog Circuitry, Proc. IEEE, 89(7): 1030-1051.
Tsuda, I. (2001) Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems, Behav. Brain Sci., 24, pp. 793-847.

Soo-Young Lee -- Korea Advanced Institute of Science and Technology

In my mind connectionism is a philosophy for building artificial systems based on biological neural systems. It is not necessarily limited to adaptive systems with layered architectures such as the multilayer perceptron and radial basis function networks. Biological neural systems also utilize fixed interconnections, which have evolved through generations. For example, many biological neural systems incorporate winner-take-all networks based on lateral inhibition. My favorite connectionist model comes from the human auditory pathway from the cochlea to the auditory cortex. It consists of several modules, which mainly have layered architectures with both feedforward and feedback connections. By understanding the functions of each module and their connections, we are able to build mathematical models for speech processing in the auditory pathway. The object path includes nonlinear noise-robust feature extraction, from simple frequency selectivity to more complex time-frequency characteristics. By combining signals from both ears, the spatial path performs sound localization and speech enhancement. The backward path is responsible for top-down attention, which filters out irrelevant or unfamiliar signals. Although the majority of the networks have fixed interconnections, their combination results in complicated dynamic functions. I believe this is a very important class of connectionist models.

Erkki Oja -- Helsinki University of Technology, Finland

In traditional cognitive science, the basic paradigm for natural and artificial intelligence is symbol manipulation: the processing of well-defined concepts by rules. With the introduction of parallel distributed processing, or connectionism, in the 1980's, there was a paradigm shift. The new models are motivated by real neural networks, but they do not have to be faithful to biology in every detail. The representation of data is a pattern of activity, a numerical vector, instead of a logical entity. Learning means changing some numerical parameters, such as connection weights, instead of updating rules. This is connectionism. Connectionist methods offer new hopes of solving many highly challenging problems like data mining, bioinformatics, novel user interfaces, robotics, etc. Two examples on which I have done research are Independent Component Analysis (ICA) and Kohonen's Self-Organizing Maps (SOM). Both ideas are motivated by neural models, but they can also be taken simply as data analysis tools. They are connectionist methods based on unsupervised learning, a very powerful way to infer empirical models from large data sets.

Steven Rogers and Matthew Kabrisky -- Qualia Computing, Inc. (QCI) and CADx Systems

The only reason to restrict the connectionist label to a subset of computational intelligence techniques is the arrogance associated with the false impression that we understand in any significant detail how animals process information. What little is known will be modified dramatically by the many things we have yet to discover. All current connectionist techniques make big assumptions about what is included and what is relevant. There does not exist a unifying theory of the fundamental processing methods used in physiological information processing that includes all potentially relevant electro-chemical elements (neurons, glial cells, etc.).
Thus, in our opinion, to rule out any technique based on our current assumptions is premature. In the end, being engineers, what we care most about is how we can couple our learning algorithms in efficient, productive ways with humans to achieve improved performance on useful tasks: intelligence amplification. Even if the techniques used cannot currently be mapped to similar processing strategies employed in physiological information processing systems, the fact that they are useful in interacting with the wetware of real connectionist systems makes them relevant. These qualia-modifying systems, whether composed of rule-based or local learning methods or even external setting of constraints, are the only real connectionist systems that we can consider at the present time.

Ron Sun -- University of Missouri-Columbia

There have been a number of panel discussions on this and related issues. For example, a panel discussion on the question "does connectionism permit reading of rules from a network?" took place at IJCNN'2000 in Como, Italy. This previous debate pointed out the limitations of strong connectionism. I noted then that clearly the death knell of strong connectionism had been sounded. Many early connectionist models have significant shortcomings. For example, the limitations due to the regularity of their structures led to difficulty in representing and interpreting symbolic structures (despite some limited successes that we have seen). Other limitations are due to the learning algorithms used by such models, which led to, for example, lengthy training (requiring many repeated trials), the need for complete I/O mappings to be known a priori, and so on. These models may bear only a remote resemblance to biological processes; they are far less complex than biological neural networks. In coping with these difficulties, two forms of connectionism emerged. Strong connectionism adheres strictly to the precepts of connectionism, which may be unnecessarily restrictive and lead to huge costs for certain kinds of symbolic processing. Weak connectionism (or hybrid connectionism), on the other hand, encourages the incorporation of both symbolic and subsymbolic processes, reaping the benefits of connectionism while avoiding its shortcomings. There have been many theoretical and practical arguments for hybrid connectionism; see, for example, Sun (1994) and Sun (2002). I shall re-iterate the point I made before: to remove the strait-jacket of strong connectionism, we should advocate some form of hybrid connectionism, encouraging the incorporation of non-NN representations and processes. It is time for a more open-minded framework in which to conduct our research. See http://www.cecs.missouri.edu/~rsun for details of work along this line.

References:
R. Sun (2002). Duality of the Mind. Lawrence Erlbaum Associates, Mahwah, NJ.
R. Sun (1994). Integrating Rules and Connectionism for Robust Commonsense Reasoning. John Wiley and Sons, New York, NY.

BIO-SKETCHES

Shun-ichi Amari

Professor Shun-ichi Amari is currently the Vice Director of the RIKEN Brain Science Institute and Group Director of the Brain-Style Intelligence and Brain-Style Information Research Systems research groups. Professor Amari received his Ph.D. degree in Mathematical Engineering in 1963 from the University of Tokyo, Tokyo, Japan. Since 1981 he has held a professorship at the Department of Mathematical Engineering and Information Physics, University of Tokyo.
In 1994, he joined RIKEN's Frontier Research Program, then moved to the RIKEN Brain Science Institute when it was established in 1997. He is a Fellow of the IEEE and received the IEEE Neural Network Pioneer Award, the Japan Academy Award, and the IEEE Emanuel Piore Award. Professor Amari has served on numerous editorial boards and organizing committees and has published around 300 papers, including several books, in the areas of information theory and neural nets.

Bruno Apolloni

Professor of Cybernetics and Information Theory at the Dipartimento di Scienze dell'Informazione (Department of Information Science), University of Milano, Italy. Director, Neural Networks Research Laboratory (LAREN), University of Milano. President, Neural Network Society of Italy. Author of over 100 papers in the frontier area between probability and statistics on the one hand and theoretical computer science on the other, with special regard to computational learning, pattern recognition, optimization, control theory, probabilistic analysis of algorithms, and the epistemological aspects of probability and fuzziness. His current research interests are in the statistical bases of learning and in hybrid subsymbolic-symbolic learning architectures.

Wlodzislaw Duch

Wlodzislaw Duch is a professor of theoretical physics and applied computational sciences, since 1990 heading the Department of Informatics (formerly called the Department of Computer Methods) at Nicholas Copernicus University, Torun, Poland. His degrees include a habilitation (D.Sc. 1987) in many-body physics, a Ph.D. in quantum chemistry (1980), and a Master of Science diploma in physics (1977), all at the Nicholas Copernicus University, Poland. He has held a number of academic positions at universities and scientific institutions all over the world. These include longer appointments at the University of Southern California in Los Angeles and the Max-Planck-Institute of Astrophysics in Germany (every year since 1984), and shorter (up to 3-month) visits to the University of Florida in Gainesville; the University of Alberta in Edmonton, Canada; Meiji University, Kyushu Institute of Technology, and Rikkyo University in Japan; Louis Pasteur Universite in Strasbourg, France; and King's College London in the UK, to name only a few. He has been an editor of a number of professional journals, including IEEE Transactions on Neural Networks, Computer Physics Communications, and the Int. Journal of Transpersonal Studies, and head scientific editor of the "Kognitywistyka" (Cognitive Science) journal. He has worked as an expert for the European Union science programs and for other international bodies. He has published 4 books and over 250 scientific and popular articles in many journals. He has been awarded a number of grants by Polish state agencies and foreign committees, as well as by European Union institutions.

Kunihiko Fukushima

Kunihiko Fukushima is a Full Professor at the Katayanagi Advanced Research Laboratories, Tokyo University of Technology, Tokyo, Japan. He was a full professor at Osaka University from 1989 to 1999 and at the University of Electro-Communications from 1999 to 2001. Prior to his professorship, he was a Senior Research Scientist at the NHK Science and Technical Research Laboratories. He is one of the pioneers in the field of neural networks and has been engaged in modeling neural networks of the brain since 1965. His special interests lie in modeling neural networks of the higher brain functions, especially the mechanisms of the visual system.
He is the inventor of the Neocognitron for deformation-invariant visual pattern recognition, and of the Selective Attention Model for recognition and segmentation of connected characters and natural images. One of his recent research interests is in modeling neural networks for active vision in the brain. He is the author of many books on neural networks, including "Information Processing in the Visual and Auditory Systems", "Neural Networks and Information Processing", "Neural Networks and Self-Organization", and "Physiology and Bionics of the Visual System". He received the Achievement Award and Excellent Paper Awards, among others, from the IEICE. He serves as an editor for many international journals. He was the founding President of JNNS and is a founding member of the Board of Governors of INNS.

Robert Hecht-Nielsen

Beginning in 1968 with neural network computer experiments, and continuing later with the foundation and management of neural network research and development programs at Motorola (1979-1983) and TRW (1983-1986), Hecht-Nielsen was a pioneer in the development of neural networks. He has been a member of the University of California, San Diego faculty since 1986 and was the author of the first textbook on neural networks (Neurocomputing (1989) Reading, MA: Addison-Wesley). He teaches a popular year-long graduate course on the subject (ECE-270 Neurocomputing). A Fellow of the IEEE and recipient of its Neural Networks Pioneer Award, Hecht-Nielsen's research is centered on the elaboration of his recently completed theory of the function of thalamocortex. Hecht-Nielsen, R. and McKenna, T. [Eds.] (2003) Computational Models for Neuroscience: Human Cortical Information Processing, London: Springer-Verlag. Sagi, B., et al. (2001) A biologically motivated solution to the Cocktail Party Problem, Neural Computation 13: 1575-1602.

Nikola K. Kasabov

Fellow of the Royal Society of New Zealand, Senior Member of the IEEE.
Affiliation: Director, Knowledge Engineering and Discovery Research Institute; Professor and Chair of Knowledge Engineering, School of Information Technologies, Auckland University of Technology.
Brief Biographical History:
1971 - MSc in Computer Science and Engineering, Technical University of Sofia
1972 - MSc in Applied Mathematics, Technical University of Sofia
1975 - PhD in Mathematical Sciences, Technical University of Sofia
1976-89 Lecturer and Associate Professor, Technical University of Sofia
1989-91 Research Fellow and Senior Lecturer, University of Essex, UK
1992-1998 Senior Lecturer and Associate Professor, University of Otago, New Zealand
1999-2002 Professor and Personal Chair, Director of the Knowledge Engineering Lab, University of Otago, New Zealand
Honours: Past President of the Asia Pacific Neural Network Assembly (1997-98). The Royal Society of New Zealand Silver Medal for Contribution to Science and Technology, 2001.
Recent books:
N. Kasabov, Evolving Connectionist Systems: Methods and Applications in Bioinformatics, Brain Study and Intelligent Machines, Springer Verlag, London, New York, Heidelberg (2002), 450pp
N. Kasabov, ed., Future Directions for Intelligent Systems and Information Sciences, Heidelberg, Physica-Verlag (Springer Verlag) (2000), 420pp
Kasabov, N. and Kozma, R., eds., Neuro-Fuzzy Techniques for Intelligent Information Systems, Heidelberg, Physica-Verlag (Springer Verlag) (1999), 450pp
Amari, S. and Kasabov, N., eds., Brain-like Computing and Intelligent Information Systems, Singapore, Springer Verlag (1998), 533pp
N. Kasabov, Foundations of Neural Networks, Fuzzy Systems and Knowledge Engineering, The MIT Press, Cambridge, MA (1996), 550pp
Associate Editor of Journals: Information Science; Intelligent Systems; Soft Computing

Robert Kozma

Robert Kozma holds a Ph.D. in applied physics from Delft University of Technology (1992). Presently he is Associate Professor at the Department of Mathematical Sciences and Director of the Computational Neurodynamics Lab, University of Memphis. Previously, he was on the faculty of Tohoku University, Sendai, Japan (1993-1996); Otago University, Dunedin, New Zealand (1996-1998); and the Division of Neuroscience and Department of EECS at UC Berkeley (1998-2000). His expertise includes autonomous adaptive brain systems, mathematical and computational modeling of the spatio-temporal dynamics of cognitive processes, neuro-fuzzy systems, and computational intelligence. He is a Senior Member of the IEEE, a member of the Neural Network Technical Committee of the IEEE Neural Network Society, and a member of other professional organizations. He has been on the program committees of about 20 international conferences in the field of intelligent computation and soft computing.

Soo-Young Lee

Soo-Young Lee received B.S., M.S., and Ph.D. degrees from Seoul National University in 1975, the Korea Advanced Institute of Science in 1977, and the Polytechnic Institute of New York in 1984, respectively. From 1977 to 1980 he worked for the Taihan Engineering Co., Seoul, Korea. From 1982 to 1985 he worked for General Physics Corporation in Columbia, MD, USA. In early 1986 he joined the Department of Electrical Engineering, Korea Advanced Institute of Science and Technology, as an Assistant Professor, and he is now a Full Professor. In 1997 he established the Brain Science Research Center, which is the main research organization for the Korean Brain Neuroinformatics Research Program. The research program is one of the Korean Brain Research Promotion Initiatives sponsored by the Korean Ministry of Science and Technology from 1998 to 2008, and currently about 70 Ph.D. researchers from many Korean universities and research institutes have joined it. He was President of the Asia-Pacific Neural Network Assembly, and is on the editorial boards of two international journals, Neural Processing Letters and Neurocomputing. He received the Leadership Award and the Presidential Award from the International Neural Network Society in 1994 and 2001, respectively. His research interests lie in artificial auditory systems based on the biological information processing mechanisms of the brain.

Erkki Oja

Erkki Oja is Director of the Neural Networks Research Centre and Professor of Computer Science at the Laboratory of Computer and Information Science, Helsinki University of Technology, Finland. He received his Dr.Sc. degree in 1977. He has been a research associate at Brown University, Providence, RI, and a visiting professor at the Tokyo Institute of Technology. Dr. Oja is the author or coauthor of more than 250 articles and book chapters on pattern recognition, computer vision, and neural computing, as well as three books: "Subspace Methods of Pattern Recognition" (RSP and J. Wiley, 1983), which has been translated into Chinese and Japanese, "Kohonen Maps" (Elsevier, 1999), and "Independent Component Analysis" (J. Wiley, 2001).
His research interests are in the study of principal components, independent components, self-organization, and statistical pattern recognition, and in applying artificial neural networks to computer vision and signal processing. Dr. Oja is a member of the editorial boards of several journals and has been on the program committees of several recent conferences, including ICANN, IJCNN, and ICONIP. He is a member of the Finnish Academy of Sciences, a Fellow of the IEEE, a Founding Fellow of the International Association for Pattern Recognition (IAPR), and President of the European Neural Network Society (ENNS).

Steven K. Rogers and Matthew Kabrisky

Steven K. Rogers, PhD, is the President/CEO of Qualia Computing, Inc. (QCI) and CADx Systems. He founded the company in May 1997 to commercialize the Qualia Insight(tm) (QI) platform. The goal of QCI is to systematically apply QI to achieve Intelligence Amplification across market sectors. Dr. Rogers spent 20 years in the U.S. Air Force designing smart weapons. He has published more than 200 papers in neural networks, pattern recognition, and optical information processing, as well as several books. He is a Fellow of the Institute of Electrical and Electronics Engineers for the design, implementation, and fielding of neural solutions to Automatic Target Recognition. Dr. Rogers is also a Fellow of the International Optical Engineering Society for contributions to the science of optical neural computing, and a charter member of the International Neural Network Society. He was a plenary speaker at the 2002 World Congress on Computational Intelligence. Matthew Kabrisky, PhD, is currently a Professor Emeritus of Electrical Engineering, School of Engineering, Air Force Institute of Technology (AFIT). He advises the faculty on courses and research in autonomous pattern recognition, mathematical models of the central nervous system, and human factors engineering. His research interests include computational intelligence and self-awareness. Dr. Kabrisky is the Chief Scientist Emeritus of CADx Systems.

Ron Sun

Dr. Ron Sun is James C. Dowell Professor of computer science and computer engineering at the University of Missouri-Columbia. He received his Ph.D. in computer science in 1991 from Brandeis University. Dr. Sun's research interests center around the study of intelligence and cognition, especially in the areas of commonsense reasoning, human and machine learning, and hybrid connectionist models. He is the author of over 120 papers, and has written, edited, or contributed to 20 books. For his paper on integrating rule-based reasoning and connectionist models, he received the 1991 David Marr Award from the Cognitive Science Society. He has also been on the program committees of the National Conference on Artificial Intelligence (AAAI-93, AAAI-97, AAAI-99), the International Joint Conference on Neural Networks (IJCNN-99, IJCNN-00, IJCNN-02), the International Conference on Neural Information Processing (1997, 1999, 2001), the International Two-Stream Conference on Expert Systems and Neural Networks, and other conferences, and has been an invited/plenary speaker at some of them. Dr. Sun is the founding co-editor-in-chief of the journal Cognitive Systems Research (Elsevier). He serves on the editorial boards of Connection Science, Applied Intelligence, and Neural Computing Surveys.
He was a guest editor of the special issue of the journal Connection Science on architectures for integrating neural and symbolic processes, and of the special issue of IEEE Transactions on Neural Networks on hybrid intelligent models. He is a member of AAAI and the Cognitive Science Society, and a senior member of the IEEE.

From James.Henderson at cui.unige.ch Tue May 13 09:42:43 2003
From: James.Henderson at cui.unige.ch (James Henderson)
Date: Tue, 13 May 2003 15:42:43 +0200
Subject: PhD studentship in Neural Networks for Natural Language Processing
Message-ID: <3EC0F653.6090502@cui.unige.ch>

PhD Studentship
Department of Computer Science
University of Geneva

Applications are invited for a PhD studentship in machine learning applied to natural language processing, available immediately. The successful candidate will join the Artificial Intelligence Research Group, Computer Science Department, University of Geneva. They will pursue research in connection with a project funded by the Swiss National Science Foundation, developing neural network and semi-supervised machine learning techniques for application to very broad-coverage natural language parsing. Candidates should have the following qualifications, or a substantial subset thereof (comprising at the very least those marked as mandatory):
- an outstanding academic record in computer science as well as strong mathematical and programming skills (mandatory)
- a solid background in machine learning (mandatory); knowledge of neural networks will be an asset
- knowledge of natural language processing or computational linguistics, or at least a strong interest in language technology
- a clear aptitude for independent and creative research, as evidenced by an excellent Master's thesis
- good communication skills in English and French (or at least a clear indication of willingness to learn the latter)

The salary for a PhD student is around 47500 Swiss Francs per annum. Please send your curriculum vitae, academic transcript, and contact information for three references to James.Henderson at cui.unige.ch. Consideration of applications will begin immediately and continue until the position is filled. Further information about the Artificial Intelligence Research Group can be found at http://cui.unige.ch/AI-group/home.html, and about the project at http://cui.unige.ch/~henderson/nn_parsing.html.

-- Dr James HENDERSON, CUI - University of Geneva, 24 rue du General-Dufour, 1211 GENEVE 4, Switzerland
Tel: +41 22 705 76 42 Fax: +41 22 705 77 80
Email: James.Henderson at cui.unige.ch http://cui.unige.ch/~henderson/

From dwang at cis.ohio-state.edu Tue May 13 11:58:32 2003
From: dwang at cis.ohio-state.edu (DeLiang Wang)
Date: Tue, 13 May 2003 11:58:32 -0400
Subject: Last Reminder: TNN Special issue on temporal coding
Message-ID: <3EC11628.52DA2B30@cis.ohio-state.edu>

LAST REMINDER: Submission Deadline is May 30, 2003

IEEE Transactions on Neural Networks
Call for Papers: Special Issue on "Temporal Coding for Neural Information Processing"

Please check the following webpage for more information on the issue and submission: http://www.cis.ohio-state.edu/~dwang/tnn.html

Thanks, DeLiang Wang
-- ------------------------------------------------------------
Prof. DeLiang Wang
Department of Computer and Information Science
The Ohio State University
2015 Neil Ave.
Columbus, OH 43210-1277, U.S.A.
Email: dwang at cis.ohio-state.edu
Phone: 614-292-6827 (OFFICE); 614-292-7402 (LAB)
Fax: 614-292-2911
URL: http://www.cis.ohio-state.edu/~dwang

From poznan at iub-psych.psych.indiana.edu Tue May 13 15:44:01 2003
From: poznan at iub-psych.psych.indiana.edu (poznan@iub-psych.psych.indiana.edu)
Date: Tue, 13 May 2003 14:44:01 -0500
Subject: CALL FOR PAPERS
Message-ID: <3EC14B01.3040901@iub-psych.psych.indiana.edu>

NON-SYNAPTIC COMMUNICATION IN BRAINS: HOW UNCONSCIOUS INTEGRATION IS MANIFESTED IN ANTICIPATORY BEHAVIOR

The synaptic model of neurocommunication in the brain has dominated connectionism for more than half a century. Generally, little consideration is given to other modes of neurotransmission in animal and human brains, even though there is indirect evidence that less than half of the communication between cells occurs via synapses. Non-synaptic diffusion neurotransmission may be the primary information transmission mechanism in certain normal and abnormal functions. Non-synaptic diffusion is vastly more economical than synaptic transmission with regard to space and energy expenditure in the brain. The task of integrating a collection of databases (i.e., neuroinformatics) becomes inconceivable, when faced with the challenge of explaining how unconscious integration leads to anticipatory behavior, if the meaning of data, rather than the individual data, is represented non-locally in the brain.

Address Submissions and Correspondence to:
Dr. Roman R. Poznanski
Associate Editor, Journal of Integrative Neuroscience
c/o Department of Psychology, Indiana University
1101 E. 10th St., Bloomington, IN 47405-7007
email: poznan at iub-psych.psych.indiana.edu
phone (Office): (812) 856-0838
http://www.worldscinet.com/jin/mkt/editorial.shtml

From jose at psychology.rutgers.edu Tue May 13 17:08:40 2003
From: jose at psychology.rutgers.edu (stephen j. hanson)
Date: 13 May 2003 17:08:40 -0400
Subject: Graduate Research Assistantships at RUTGERS UNIVERSITY -- RUMBA LABORATORIES ADVANCED IMAGING CENTER (UMDNJ/RUTGERS)
Message-ID: <1052860030.1809.78.camel@localhost.localdomain>

RUTGERS UNIVERSITY -- RUMBA LABORATORIES ADVANCED IMAGING CENTER (UMDNJ/RUTGERS) -- Research Assistants/Graduate Fellowships. Immediate openings. Research in cognitive neuroscience, category learning, and event perception, using magnetic resonance imaging and electrophysiological techniques. Background in experimental psychology or cognitive science (BA/BS required); neuroscience and statistics would be helpful. Strong computer skills are a plus. An excellent opportunity for someone bound for graduate school in psychology, cognitive science, cognitive neuroscience, or medicine. Send by email a CV with a description of research experience and the names of three references to: rumbalabs at psychology.rutgers.edu (see www.rumba.rutgers.edu for more information)
-- Stephen J. Hanson, Professor & Chair, Psychology Department, Rutgers University; Co-Director, RUMBA Laboratories, Advanced Imaging Center (UMDNJ/Rutgers)

From bogus@does.not.exist.com Tue May 13 06:31:24 2003
From: bogus@does.not.exist.com ()
Date: Tue, 13 May 2003 12:31:24 +0200
Subject: Deadline approaching - Erice School on Cortical Dynamics 31 Oct - 6 Nov 2003
Message-ID:

From smyth at ics.uci.edu Wed May 14 13:24:40 2003
From: smyth at ics.uci.edu (Padhraic Smyth)
Date: Wed, 14 May 2003 10:24:40 -0700
Subject: Postdoctoral position in Machine Learning at UC Irvine
Message-ID: <3EC27BD8.1020706@ics.uci.edu>

Please forward to recent (or soon-to-be) PhD graduates who may be interested in this position, thanks.
Padhraic Smyth
Information and Computer Science
University of California, Irvine

Postdoctoral Research Position in Machine Learning
School of Information and Computer Science
University of California, Irvine

A full-time post-doctoral position is available in the area of machine learning at UC Irvine, focusing on research in probabilistic modeling of large data sets such as text corpora and Web-related data. Applicants must have earned a Ph.D. in Computer Science, Electrical Engineering, Mathematics, Statistics, or a closely related discipline, with an emphasis on machine learning or applied statistics. Knowledge of probabilistic learning methods (such as the EM algorithm or Bayesian learning), as well as programming experience in languages such as C/C++ or MATLAB, is also desirable. The salary range for this position is $45k to $60k annually, commensurate with training and experience. The appointment will be for a 1- or 2-year period beginning in September 2003. Interested applicants should send a curriculum vitae, a statement of research interests and achievements, and the names and email addresses of three or more references to Professor Padhraic Smyth at smyth at ics.uci.edu. Please put "Application for postdoc" in the subject line of the email application. Applications received by July 15th, 2003 will receive maximum consideration. UC Irvine is located roughly halfway between Los Angeles and San Diego, about 3 miles from the Pacific Ocean. For general information about the university see www.ics.uci.edu/about/jobs/ and www.uci.edu. For further information on machine learning research at UC Irvine see (for example) www.datalab.uci.edu. The University of California is an Equal Opportunity Employer committed to excellence through diversity.

From rudesai at cs.indiana.edu Thu May 15 19:25:23 2003
From: rudesai at cs.indiana.edu (Rutvik Desai)
Date: Thu, 15 May 2003 18:25:23 -0500 (EST)
Subject: Ph.D. thesis on language acquisition available
Message-ID:

The readers of this list might be interested in my recently completed thesis:

Modeling Interaction of Syntax and Semantics in Language Acquisition
http://www.cs.indiana.edu/~rudesai/thesis.html
Advisor: Prof. Michael Gasser, Indiana University.

Abstract: How language is acquired by children is one of the major questions of cognitive science and is linked intimately to the larger question of how the brain and mind work. I describe a connectionist model of language comprehension that shows how some behaviors and strategies in language learning can emerge from general learning mechanisms. A connectionist network attempts to produce the meanings of input sentences generated by a small English-like grammar. I study three interesting behaviors, related to the interaction of syntax and semantics, that emerge as the network attempts to perform the task. First, the network can use syntactic cues to predict aspects of the meaning of a novel word (syntactic bootstrapping), and its learning of new syntax is aided by knowledge of word meanings (semantic bootstrapping). Second, when a familiar verb is encountered in an incorrect syntactic context, the network tends to follow the context to arrive at an interpretation of the utterance in the early stages of training, and follows the verb in later stages; similar behavior, known as frame and verb compliance, is observed in children. Last, there is considerable evidence that children's early language is item-based, i.e., organized around specific linguistic expressions and items they hear.
The network's representations are also found to be highly item-based and context-specific in early stages, becoming categorical, like those of adults, in later stages of learning. The connectionist simulations provide a concrete and parsimonious account of these three phenomena in language development. They also support the idea that domain-specific behaviors and learning strategies can emerge from relatively general mechanisms and constraints, and that it is not always necessary to propose apparatus specifically designed for particular tasks.
--- Rutvik Desai, Postdoctoral Fellow, Language Imaging Laboratory, Medical College of Wisconsin, http://www.neuro.mcw.edu/

From David.Cohn at acm.org Sat May 3 09:32:33 2003
From: David.Cohn at acm.org (David 'Pablo' Cohn)
Date: Sat, 03 May 2003 06:32:33 -0700
Subject: jmlr-announce: Designing Committees of Models through Deliberate Weighting of Data Points
Message-ID: <5.2.0.9.0.20030503062926.03462d48@ux7.sp.cs.cmu.edu>

The Journal of Machine Learning Research (www.jmlr.org) is pleased to announce publication of the third paper in Volume 4:
--------------------------
Designing Committees of Models through Deliberate Weighting of Data Points
Stefan W. Christensen, Ian Sinclair and Philippa A. S. Reed
JMLR 4(Apr):39-66, 2003.

Abstract: In the adaptive derivation of mathematical models from data, each data point should contribute with a weight reflecting the amount of confidence one has in it. When no additional information on data confidence is available, all the data points should be considered equal, and they are also generally given the same weight. In the formation of committees of models, however, this is often not the case, and the data points may exercise unequal, even random, influence over the committee formation. In this paper, a principled approach to committee design is presented. The construction of a committee design matrix is detailed, through which each data point contributes to the committee formation with a fixed weight, while contributing with different individual weights to the derivation of the different constituent models, thus encouraging model diversity while not inadvertently biasing the committee towards any particular data points. Not strictly an algorithm, it is instead a framework within which several different committee approaches may be realised. Whereas the focus of the paper lies entirely on regression, the principles discussed extend readily to classification.
----------------------------------------------------------------------------
This paper, and all previous papers, are available electronically at http://www.jmlr.org in PostScript and PDF formats. The papers of Volumes 1, 2 and 3 are also available electronically from the JMLR website, and in hardcopy from the MIT Press; please see http://mitpress.mit.edu/JMLR for details.

-David Cohn

From dhwang at cs.latrobe.edu.au Fri May 16 02:08:54 2003
From: dhwang at cs.latrobe.edu.au (Dianhui Wang)
Date: Fri, 16 May 2003 16:08:54 +1000
Subject: CALL FOR BOOK CHAPTERS
Message-ID: <3EC48075.E8BBA612@cs.latrobe.edu.au>

Dear Colleagues,

CALL FOR BOOK CHAPTERS
---------------------------------------------------------------------------------
Neural Networks Applications in Information Technology and Web Engineering
---------------------------------------------------------------------------------
This email solicits your submissions of book chapters. The edited book highlights successful applications of neural networks in the IT domain and in Web engineering.
---------------------------------------------------------------------------- This paper, and all previous papers, are available electronically at http://www.jmlr.org in PostScript and PDF formats. The papers of Volumes 1, 2 and 3 are also available electronically from the JMLR website, and in hardcopy from the MIT Press; please see http://mitpress.mit.edu/JMLR for details. -David Cohn From dhwang at cs.latrobe.edu.au Fri May 16 02:08:54 2003 From: dhwang at cs.latrobe.edu.au (Dianhui Wang) Date: Fri, 16 May 2003 16:08:54 +1000 Subject: CALL FOR BOOK CHAPTERS Message-ID: <3EC48075.E8BBA612@cs.latrobe.edu.au> Dear Colleagues, CALL FOR BOOK CHAPTERS --------------------------------------------------------------------------------- Neural Networks Applications in Information Technology and Web Engineering --------------------------------------------------------------------------------- This email solicits your submissions of book chapters. The edited book highlights successful applications of neural networks in the IT domain and in Web engineering. It will be published by a University Malaysia Sarawak publisher in 2004. Details of the "Call for Chapters" can be found at http://homepage.cs.latrobe.edu.au/dhwang/call4chapter.html We look forward to receiving your submissions shortly. Kind regards, The Book Editors: ---------------------------------------------------------------------- Dr Dianhui Wang Department of Computer Science and Computer Engineering La Trobe University, Melbourne, VIC 3086, Australia Tel: +61 3 9479 3034 Email: dhwang at cs.latrobe.edu.au Mr Nung Kion Lee Faculty of Cognitive Sciences and Human Development University Malaysia Sarawak Kota Samarahan, Sarawak, Malaysia Tel: +60 82 679 276 Email: nklee at fcs.unimas.my ----------------------------------------------------------------------- From David.Cohn at acm.org Fri May 16 12:55:10 2003 From: David.Cohn at acm.org (David 'Pablo' Cohn) Date: 16 May 2003 09:55:10 -0700 Subject: new paper from JMLR: Task Clustering and Gating for Bayesian Multitask Learning Message-ID: <1053104110.1667.189.camel@bitbox.corp.google.com> The Journal of Machine Learning Research (www.jmlr.org) is pleased to announce publication of the fifth paper in Volume 4: -------------------------- Task Clustering and Gating for Bayesian Multitask Learning Bart Bakker and Tom Heskes JMLR 4(May):83-99, 2003 Abstract Modeling a collection of similar regression or classification tasks can be improved by making the tasks 'learn from each other'. In machine learning, this subject is approached through 'multitask learning', where parallel tasks are modeled as multiple outputs of the same network. In multilevel analysis it is generally implemented through the mixed-effects linear model, where a distinction is made between 'fixed effects', which are the same for all tasks, and 'random effects', which may vary between tasks. In the present article we adopt a Bayesian approach in which some of the model parameters are shared (the same for all tasks) and others are more loosely connected through a joint prior distribution that can be learned from the data. We seek in this way to combine the best parts of both the statistical multilevel approach and the neural network machinery. The standard assumption expressed in both approaches is that each task can learn equally well from any other task. In this article we extend the model by allowing more differentiation in the similarities between tasks. One such extension is to make the prior mean depend on higher-level task characteristics. More unsupervised clustering of tasks is obtained if we go from a single Gaussian prior to a mixture of Gaussians. This can be further generalized to a mixture-of-experts architecture with the gates depending on task characteristics. All three extensions are demonstrated through application both to an artificial data set and to two real-world problems, one a school problem and the other involving single-copy newspaper sales.
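A toy sketch of the simplest version of this idea may help: per-task regression weights shrunk toward a shared Gaussian prior mean that is itself re-estimated from the tasks, EM-style. Replacing the single Gaussian with a mixture of Gaussians is what yields the task clustering of the title; all sizes, noise levels, and the fixed variance ratio below are invented for illustration.

import numpy as np

# Sketch: multitask regression with a shared Gaussian prior over
# per-task weight vectors, fitted with a crude EM-style loop.
# E-step-like: solve each task's MAP weights given the prior mean m;
# M-step-like: re-estimate m from the task weights. `lam` is an
# assumed, fixed noise-variance / prior-variance ratio.

rng = np.random.default_rng(2)
n_tasks, n_per_task, d = 8, 20, 3
true_mean = rng.normal(size=d)

# Generate related tasks: weights scattered around a common mean.
tasks = []
for _ in range(n_tasks):
    w = true_mean + 0.3 * rng.normal(size=d)
    X = rng.normal(size=(n_per_task, d))
    y = X @ w + 0.1 * rng.normal(size=n_per_task)
    tasks.append((X, y))

lam = 1.0                      # variance ratio (assumed known)
m = np.zeros(d)                # shared prior mean, to be learned
for _ in range(20):
    W = []
    for X, y in tasks:         # MAP estimate per task, shrunk toward m
        A = X.T @ X + lam * np.eye(d)
        W.append(np.linalg.solve(A, X.T @ y + lam * m))
    m = np.mean(W, axis=0)     # update the shared prior mean

print("learned prior mean:", np.round(m, 2))
print("true mean:         ", np.round(true_mean, 2))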
---------------------------------------------------------------------------- This paper is available electronically at http://www.jmlr.org in PostScript and PDF formats. The papers of Volumes 1, 2 and 3 are also available electronically from the JMLR website, and in hardcopy from the MIT Press; please see http://mitpress.mit.edu/JMLR for details. -David Cohn From steve at cns.bu.edu Sat May 17 04:57:45 2003 From: steve at cns.bu.edu (Stephen Grossberg) Date: Sat, 17 May 2003 04:57:45 -0400 Subject: cortical mechanisms of development, learning, attention, and 3D vision Message-ID: The following article is now available at http://www.cns.bu.edu/Profiles/Grossberg in PDF: Grossberg, S. (2003). How does the cerebral cortex work? Development, learning, attention, and 3D vision by laminar circuits of visual cortex. Behavioral and Cognitive Neuroscience Reviews, in press. ABSTRACT: A key goal of behavioral and cognitive neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how the visual cortex sees. Visual cortex, like many parts of perceptual and cognitive neocortex, is organized into six main layers of cells, as well as characteristic sub-laminae. Here it is proposed how these layered circuits help to realize processes of development, learning, perceptual grouping, attention, and 3D vision through a combination of bottom-up, horizontal, and top-down interactions. A key theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. These results thus begin to unify three fields: infant cortical development, adult cortical neurophysiology and anatomy, and adult visual perception. The identified cortical mechanisms promise to generalize to explain how other perceptual and cognitive processes work. From Sebastian_Thrun at heaven.learning.cs.cmu.edu Mon May 19 10:47:53 2003 From: Sebastian_Thrun at heaven.learning.cs.cmu.edu (Sebastian Thrun) Date: Mon, 19 May 2003 10:47:53 -0400 Subject: NIPS Site open for electronic submissions Message-ID: The NIPS Web site is now accepting electronic submissions at nips.cc Please note that the deadline for submissions is June 6, 2003. Detailed submission instructions can be found at nips.cc. Sebastian Thrun Lawrence Saul NIPS*2003 General Chair NIPS*2003 Program Chair From rsun at ari1.cecs.missouri.edu Tue May 20 14:24:00 2003 From: rsun at ari1.cecs.missouri.edu (Ron Sun) Date: Tue, 20 May 2003 13:24:00 -0500 Subject: a new book: Duality of the Mind Message-ID: <200305201824.h4KIO0iW021725@ari1.cecs.missouri.edu> Announcing a new book published by Lawrence Erlbaum Associates, Inc. http://www.erlbaum.com/ D U A L I T Y O F T H E M I N D A Bottom-up Approach toward Cognition by Ron Sun Synthesizing situated cognition, reinforcement learning, and hybrid connectionist models, this book develops a cognitive architecture focused on situated involvement and interaction with the world. The architecture notably incorporates the distinction between implicit and explicit processes. The work described in the book demonstrates the cognitive validity of the architecture by capturing a wide range of human learning data. Computational properties of the architecture are explored with experiments that manipulate implicit and explicit processes to optimize performance in a range of domains. Philosophical implications of the approach for situated cognition, intentionality, symbol grounding, and consciousness are also explored in detail. In a nutshell, this book motivates and develops a framework for studying human cognition, based on an approach that is characterized by its focus on the dichotomy of, and the interaction between, implicit and explicit cognition.
-------------------------------------------------------------------- For more details, go to http://www.cecs.missouri.edu/~rsun/book6-ann.html To order the book, go to https://www.erlbaum.com/shop/tek9.asp?pg=products&specific=0-8058-3880-5 =================================================================== Professor Ron Sun, Ph.D. James C. Dowell Professor CECS Department, 201 EBW phone: (573) 884-7662 University of Missouri-Columbia fax: (573) 882-8318 Columbia, MO 65211-2060 email: rsun at cecs.missouri.edu http://www.cecs.missouri.edu/~rsun =================================================================== From i.tetko at gsf.de Wed May 21 06:43:17 2003 From: i.tetko at gsf.de (Igor Tetko) Date: Wed, 21 May 2003 12:43:17 +0200 Subject: MIPS Postdoctoral Position Message-ID: Postdoctoral Position at MIPS Applications are invited for a postdoctoral fellowship to develop new methods for automatic classification of protein sequences using machine learning methods. The input data will include BLAST/FASTA similarity scores, protein domains and motifs, as well as secondary and tertiary structures predicted by standard bioinformatics methods. The primary classification target will be the functional categories (FunCat) developed by MIPS. Applicants with a Ph.D. in bioinformatics or computer science/engineering are encouraged to apply. A background in molecular biology as well as demonstrated computer skills (programming in Perl, SQL, C/C++; familiarity with UNIX and the Web) are preferred. The position will be supervised by Prof. H.W. Mewes and Dr. I.V. Tetko. The salary will be according to BAT IIa. The appointment will be for two years. The position is available immediately. Please send a curriculum vitae and the names of referees for letters of recommendation to i.tetko at gsf.de; see also http://mips.gsf.de. Informal inquiries are also welcome. -- Dr. Igor V. Tetko Senior Research Scientist Institute for Bioinformatics GSF - Forschungszentrum fuer Umwelt und Gesundheit, GmbH Ingolstaedter Landstrasse 1, D-85764 Neuherberg, Germany Telephone: +49-89-3187-3575 Fax: +49-89-3187-3585 http://www.vcclab.org/~itetko e-mail: itetko at vcclab.org, i.tetko at gsf.de From robtag at unisa.it Thu May 22 11:46:50 2003 From: robtag at unisa.it (Roberto Tagliaferri) Date: Thu, 22 May 2003 17:46:50 +0200 Subject: WIRN 2003 Message-ID: <3ECCF0EA.1050505@unisa.it> Dear colleague, please find attached the preliminary program of WIRN 2003, XIV ITALIAN WORKSHOP ON NEURAL NETS IIASS "Eduardo R.
Caianiello", Vietri sul Mare (SA) ITALY June 5 - 7, 2003 * Pre-WIRN workshop on Bioinformatics and Biostatistics * Wednesday, June 4 * WIRN regular sessions * Thursday, June 5 * Models (9.30-11.30) * Architectures & Algorithms (11.50-12.50) * Poster session (16.00-17.30) * Friday, June 6 * Applications (9.30-11.30) * Applications (11.50-12.30) * Architectures & Algorithms (15.00-16.00) * Caianiello Prize (16.00-17.00) * SIREN Society meeting (17.00) * Saturday, June 7 * Special session - Formats of knowledge: words, images, narratives (9.30-11.20) * Special session - Formats of knowledge: words, images, narratives (11.20-11.40) * Panel session * Post-WIRN Workshop on Formats of knowledge (Saturday, June 7, 14.30-18.30) Further information can be found on the web site of SIREN in the pages of the workshop: http://grid004.usr.dsi.unimi.it/indice2.html Best regards Bruno Apolloni & Roberto Tagliaferri Detailed program Pre-WIRN workshop on Bioinformatics and Biostatistics Wednesday, June 4 9.30 - 10.30 Piero Fariselli, Pier Luigi Martelli and Rita Casadio, University of Bologna Machine Learning-Approaches and Structural Genomics 10.30 - 11.30 Alessio Ceroni, Paolo Fiasconi, Andrea Passerini, Universit? di Firenze Algorithms for Protein Structure Prediction Kernel Machines and Recursive Neural Networks Coffe break 11.30 - 11.50 11.50 - 12.50 Giovanni Cuda, Barbara Quaresima, Francesco Baudi, Rita Casadonte, Maria Concetta Faniello, Pierosandro Tagliaferri, Francesco Costanzo and Salvatore Venuta, University "Magna Graecia" of Catanzaro Proteomic profiling of inherited breast cancer: identification of molecular targets for early detection, prognosis and treatment. 15.00 - 15.20 Antonio Eleuteri, DMA Universit? di Napoli "Federico II" and INFN Sez. Napoli Roberto Tagliaferri, DMI Universit? di Salerno and INFM Unit? di Salerno Leopoldo Milano, Dipartimento di Scienze Fisiche, Universit? di Napoli "Federico II" and INFN Sez. Napoli I-divergence projections for MLP networks 15.20 - 15.40 Francesco Masulli, Computer Science Department, Univerity of Pisa (ITALY) Stefano Rovetta, Computer and Information Science Department, Univerity of Genova (ITALY) Gene selection using Random Voronoi Ensembles 15.40 - 16.00 Giorgio Valentini, DSI, Dipartimento di Scienze dell' Informazione, Univ. degli Studi di Milano An application of Low Bias Bagged SVMs to the classification of heterogeneous malignant tissues 16.00 - 16.20 Giulio Antoniol, RCOST-University of Sannio Michele Ceccarelli, University of Sannio Wanda Longo, Marina Ciullo, Enza Colonna, IGB-CNR Teresa Nutile, IGN-CNR Browsing Large Pedigrees to Study of the Isolated Populations in the"Parco Nazionale del Cilento e Vallo di Diano 16.20 - 16.40 Alessio Micheli, Universit? di Pisa Filippo Portera, Alessandro Sperduti, Universit? di Padova QSAR/QSPR Studies by Kernel Machines, Recursive Neural Networks and Their Integration 16.40 - 17.00 Chakra Chennubhotla, Computer Science Dept. University of Toronto Alberto Paccanaro, Bioinformatics Unit, Queen Mary University of London Markov Analysis of Protein Sequence Similarities 17.00 - 17.20 Coffee break 17.20 - 17.40 Francesco Marangoni, Master in Bioinformatics, University of Turin, Italy Matteo Barberis, Master in Bioinformatics, University of Turin, Italy Marco Botta, Department of Informatics, University of Turin, Italy Large scale prediction of protein interactions by an SVM-based machine learning approach 17.40 - 18.00 Luigi Agnati, Departm. of BioMedical Science and CIGS, Univ. of Modena, Italy Ferr? 
S, Preclinical Pharmacology Section, NIDA, NIH, Baltimore MD, USA Canela EI., Departm. Biochemistry and Molecular Biology, Barcelona, Spain Watson S., Mental Health Research Institute and Departm. of Psychiatry, University of Michigan, USA Morpurgo Anna, Departm. Computer Science, Milano, Italy Fuxe K., Departm. of Neuroscience, KI, Stockholm, Sweden Computer-assisted image analysis of biological preparations carried out at different levels of resolution opened up a new understanding of brain function 18.00 - 18.20 Bruno Apolloni, Simone Bassis, Andrea Brega, Sabrina Gaito, Dario Malchiodi, Anna Maria Zanaboni, Università degli Studi di Milano, Dipartimento di Scienze dell'Informazione Monitoring of car driving awareness from biosignals WIRN regular sessions Thursday, June 5 Models (9.30-11.30) 9.30 - 10.30 Leon O. Chua, NOEL (Nonlinear Electronics Laboratory), Univ. of California, Berkeley From Brainlike Computing to Artificial Life Invited talk 10.30 - 10.50 Roberto Serra, CRA Montecatini Marco Villani, CRA Montecatini On the dynamics of scale-free Boolean networks 10.50 - 11.10 Silvio P. Sabatini, DIBE - University of Genoa Fabio Solari, DIBE - University of Genoa Giacomo M. Bisio, DIBE - University of Genoa Lattice Models for Context-driven Regularization in Motion Perception 11.10 - 11.30 Bruno Apolloni, Dip. di Scienze dell'Informazione, Università degli Studi di Milano Simone Bassis, Dip. di Scienze dell'Informazione, Università degli Studi di Milano Sabrina Gaito, Dip. di Scienze dell'Informazione, Università degli Studi di Milano Dario Malchiodi, Dip. di Scienze dell'Informazione, Università degli Studi di Milano Cooperative games in a stochastic environment 11.30 - 11.50 Coffee break Architectures & Algorithms (11.50-12.50) 11.50 - 12.10 Cristiano Cervellera, Istituto di Studi sui Sistemi Intelligenti per l'Automazione - CNR Marco Muselli, Istituto di Elettronica e di Ingegneria dell'Informazione e delle Telecomunicazioni - CNR A Deterministic Learning Approach Based on Discrepancy 12.10 - 12.30 Massimo Panella, INFO-COM Dpt., University of Rome "La Sapienza" Fabio Massimo Frattale Mascioli, INFO-COM Dpt., University of Rome "La Sapienza" Antonello Rizzi, INFO-COM Dpt., University of Rome "La Sapienza" Giuseppe Martinelli, INFO-COM Dpt., University of Rome "La Sapienza" ANFIS Synthesis by Hyperplane Clustering for Time Series Prediction 12.30 - 12.50 Mario Costa, Politecnico di Torino - Dept. of Electronics Edmondo Minisci, Politecnico di Torino - Dept. of Aerospace Engineering Eros Pasero, Politecnico di Torino - Dept. of Electronics A Hybrid Neural/Genetic Approach to Continuous Multi-Objective Optimization Problems Lunch 12.50 - 15.00 Poster session (16.00 - 17.30) 15.00 - 16.00 Poster highlight spotting 16.00 - 16.10 Coffee break 16.10 - 17.30 Poster session Friday, June 6 Applications (9.30 - 11.30) 9.30 - 10.30 Eraldo Paulesu, Università di Milano Bicocca, Dipartimento di Psicologia Experimental designs in cognitive neuroscience using functional imaging. Invited talk 10.30 - 10.50 N. Alberto Borghese, DSI - University of Milano Andrea Calvi, Department of Bioengineering - Politecnico of Milano Learning to maintain upright posture: what can be learnt using adaptive neural network models? 10.50 - 11.10 Silvio Giove, University of Venice Claudia Basta, Regional and urban planning department, City of Venice Environmental risk and territorial compatibility: a soft computing approach 11.10 - 11.30 Lara Giordano, DMI, Università
di Salerno Claude Albore Livadie, Istituto Universitario Suor Orsola Benincasa Giovanni Paternoster, Dipartimento di Scienze Fisiche, Università di Napoli "Federico II" Raffaele Rinzivillo, Dipartimento di Scienze Fisiche, Università di Napoli "Federico II" Roberto Tagliaferri, DMI, Università di Salerno Soft Computing Techniques for Classification of Bronze Age Axes 11.30 - 11.50 Coffee break Applications (11.50 - 12.30) 11.50 - 12.10 Bruno Azzerboni, Università di Messina Mario Carpentieri, Università di Messina Maurizio Ipsale, Università di Messina Fabio La Foresta, Università di Messina Francesco Carlo Morabito, Università Mediterranea di Reggio Calabria Intracranial Pressure Signal Processing by Adaptive Fuzzy Network 12.10 - 12.30 Giovanni Pilato, Salvatore Vitabile, Istituto di Calcolo e Reti ad alte prestazioni (CNR) - Sezione di Palermo Giorgio Vassallo, CRES - Centro per la Ricerca Elettronica in Sicilia, Monreale (PA) Vincenzo Conti, Filippo Sorbello, Dipartimento di Ingegneria Informatica - Università di Palermo A Concurrent Neural Classifier for HTML Documents Retrieval Architectures & Algorithms (15.00-15.40) 15.00 - 15.20 Francesca Vitagliano, Raffaele Parisi, Aurelio Uncini, Dip. INFOCOM - Università di Roma "La Sapienza" Generalized Splitting 2D Flexible Complex Domain Activation Functions 15.20 - 15.40 Francesco Masulli, Dip. Informatica - Università di Pisa; INFM Stefano Rovetta, Dip. Informatica e Scienze dell'Informazione - Università di Genova; INFM An Algorithm to Model Paradigm Shifting in Fuzzy Clustering Caianiello Prize (16.00 - 17.00) SIREN Society meeting (17.00) Social dinner (20.00) Saturday, June 7 Special session Formats of knowledge: words, images, narratives (9.30-11.00) 9.30 - 9.50 M. Rita Ciceri, Facoltà di Psicologia, Università Cattolica Milano Formats and languages of knowledge: models of links for learning processes 9.50 - 10.20 Anne McKeough, Division of Applied Psychology and chair of the Human Learning and Development program, University of Calgary. Narrative as a format of thinking: hierarchical models in story comprehension 10.20 - 10.40 Alessandro Antonietti, Claudio Maretti, Facoltà di Scienze della Formazione, U.C. Milano Analogical models 10.40 - 11.00 Paola Colombo, Centro di Psicologia della Comunicazione, Università Cattolica di Milano Pictures and words: analogies and specificity in knowledge organization 11.00 - 11.20 Coffee break Special session Formats of knowledge: words, images, narratives (11.20-11.40) 11.20 - 11.40 Ilaria Grazzani, Scienze della Formazione, Università Statale di Milano Emotional and metaemotional competence Panel session (11.40 - 12.40) Chair B. Apolloni M. Rita Ciceri, Anne McKeough, Paola Colombo, Ilaria Grazzani, L. O. Chua, E. Paulesu, L. Agnati, E. Pasero, C. Cervellera, Cognitive systems: from models to their implementation End of the regular sessions 12.00 Post-WIRN Workshop on Formats of knowledge (14.30 - 18.30) 14.30 - 14.50 Introduction by G. Milani, MIUR 14.50 - 15.10 Coffee break 15.10 - 18.30 Creation of small groups practising with special school software in different domains [edited by Rita Ciceri], involving declarative thought, narrative thought and so on. Papers in the poster section (Thursday, June 5, 15.00) N. Alberto Borghese, DSI - University of Milano Stefano Ferrari, Vincenzo Piuri, DTI - Polo di Crema - Università di Milano Real-time Surface Reconstruction through HRBF Networks Antonio Chella, DINFO - Università
di Palermo Umberto Maniscalco, ICAR - CNR Sezione di Palermo Roberto Pirrone, DINFO - Università di Palermo A Neural Architecture for 3D Segmentation Andreas Hadjiprocopis, Institute of Neurology, UCL Paul Tofts, Institute of Neurology, UCL Towards an Automatic Lesion Segmentation Method for Dual Echo Magnetic Resonance Images using an Ensemble of Neural Networks Stefano D'Urso, Dip. INFOCOM - Università di Roma "La Sapienza" Automatic Polyphonic Piano Music Transcription by a Multi-Classification Discriminative-Learning Roberto Tagliaferri, DMI, Università di Salerno Giuseppe Longo, Dip. Scienze Fisiche, Università di Napoli "Federico II" Stefano Andreon, Osservatorio Astronomico di Brera Salvatore Capozziello, Dip. Fisica "E.R. Caianiello", Università di Salerno Ciro Donalek, Dip. Scienze Fisiche, Università di Napoli "Federico II" Gerardo Giordano, IIASS Neural networks for photometric redshifts evaluation Antonino Greco, DIMET Francesco Carlo Morabito, DIMET Mario Versaci, DIMET Neural Network Approach for Estimation and Prediction of Time to Disruption in Tokamak Reactors Cinzia Avossa, Dept. of Physics, University of Salerno Flora Giudicepietro, Osservatorio Vesuviano, INGV, Napoli Maria Marinaro, Silvia Scarpetta, Dept. of Physics, University of Salerno Supervised and Unsupervised Analysis applied to Strombolian Explosion Quakes Giancarlo Mauri, Università di Milano-Bicocca. Italo Zoppis, Università di Milano-Bicocca, Dipartimento di Informatica Sistemistica e Comunicazioni A probabilistic neural networks system to recognize 3D faces of people Bianchini Monica, Gori Marco, Sarti Lorenzo, Dipartimento di Ingegneria dell'Informazione, Università di Siena Face Localization with Recursive Neural Networks Alessandra Budillon, Francesco Palmieri, Dipartimento di Ingegneria dell'Informazione, Seconda Università di Napoli Multi-Class Image Coding via EM-KLT algorithm Pierluigi Salvo Rossi, Gianmarco Romano, Francesco Palmieri, Dipartimento di Ingegneria dell'Informazione, Seconda Università di Napoli Giulio Iannello, Dipartimento di Informatica e Sistemistica, Università di Napoli "Federico II" Bayesian Modelling for Packet Channels Elena Casiraghi, Raffaella Lanzarotti, Giuseppe Lipori, Università degli Studi di Milano, Dipartimento di Scienze dell'Informazione A face detection system based on color and support vector machine Claudio Sistopaoli, Raffaele Parisi, INFOCOM Dept. - University of Rome "La Sapienza" Dereverberation of acoustic signals by Independent Component Analysis From bressler at fau.edu Thu May 22 16:11:19 2003 From: bressler at fau.edu (Steven Bressler) Date: Thu, 22 May 2003 16:11:19 -0400 Subject: POSTDOCTORAL POSITION AVAILABLE Message-ID: Postdoctoral Position Available Network for the Study of Brain Systems and Dynamics (http://dnl.ucsf.edu/NBSD/) A postdoctoral research fellow position is available immediately to work in the laboratory of Dr. Steven Bressler (http://www.ccs.fau.edu/~bressler/) as part of a consortium of Cognitive Neuroscience/Computational Science laboratories which study the dynamical systems of large-scale brain networks with EEG, MEG and fMRI. We are seeking an individual to adapt, refine, and apply cutting-edge tools to analyses of high-density EEG, MEG and fMRI data, ranging from brain source imaging to functional connectivity and other modeling methods. The cognitive neuroscience aspect of the research emphasizes attention and working memory. Candidates should have a Ph.D.
in computational or cognitive neuroscience (or in an area that provides a similar background) with substantial mathematical/computational experience (especially in time series, dynamical systems analyses, neurophysiological signal processing, and multivariate or Bayesian statistical analyses). Fellows will have the opportunity for a multidisciplinary training experience through interactions with all laboratories. Please contact Dr. Steven Bressler at bressler at ccs.fau.edu From Neural.Plasticity at snv.jussieu.fr Sun May 25 10:42:13 2003 From: Neural.Plasticity at snv.jussieu.fr (Susan Sara) Date: Sun, 25 May 2003 16:42:13 +0200 Subject: Neural Plasticity on-line submission Message-ID: <3.0.6.32.20030525164213.00b2a1e0@mail.snv.jussieu.fr> Dear Colleagues, Neural Plasticity is announcing on-line submission starting immediately. You can send your manuscript directly to the editor at this e-mail address. I will ensure prompt and fair review and an editorial decision within four weeks. Neural Plasticity publishes full research papers, short communications, commentary and review articles concerning all aspects of neural plasticity, with special attention to its functional significance as reflected in behaviour. In vitro models, in vivo studies in anesthetized and behaving animals, as well as clinical studies in humans are included. The journal aims to be an inspiring forum for neuroscientists studying the development of the nervous system, learning and memory processes, and reorganisation and recovery after brain injury. Neural Plasticity, in its current format and under my editorship, has been in existence for less than five years, jumping from the bottom of the list of 200 neuroscience journals to 125, and then to an amazing rank of 75, in just three years. Our citation index was 2.33 last year. We hope that this new electronic submission option will encourage you to submit your papers to Neural Plasticity, so that we can continue to improve the journal and make it a lively and original forum for the Neuroscience community. Yours sincerely, Susan J. Sara Susan J. Sara, Editor Neural Plasticity Institut des Neurosciences Univ Pierre & Marie Curie 9 quai St. Bernard 75005 Paris France Tel 33 1 44 27 34 60 Fax 32 52 From bp1 at cn.stir.ac.uk Tue May 27 04:42:04 2003 From: bp1 at cn.stir.ac.uk (Bernd Porr) Date: Tue, 27 May 2003 09:42:04 +0100 Subject: PhD thesis: closed loop sequence learning Message-ID: <3ED324DC.7060308@cn.stir.ac.uk> I'm pleased to announce my PhD thesis: "Sequence-Learning in a Self-Referential Closed-Loop Behavioural System" Available here: http://www.cn.stir.ac.uk/~bp1/diss55pdf.pdf Abstract: --------- This thesis focuses on the problem of ``autonomous agents''. It is assumed that such agents want to be in a desired state which can be assessed by the agent itself when it observes the consequences of its own actions. The _feedback_ from the motor output via the environment to the sensor input is therefore an essential component of such a system. Accordingly, an agent is defined in this thesis as a self-referential system which operates within a closed sensor-motor-sensor feedback loop. The generic situation is that the agent is always prone to unpredictable disturbances which arrive from the outside, i.e. from its environment. These disturbances cause a deviation from the desired state (for example, the organism is attacked unexpectedly or the temperature in the environment changes, ...).
The simplest mechanism for managing such disturbances in an organism is to employ a reflex loop which essentially establishes reactive behaviour. Reflex loops are directly related to closed-loop feedback controllers. Thus, they are robust and they do not need a built-in model of the control situation. However, reflexes have one main disadvantage, namely that they always occur ``too late''; i.e., only _after_ a (for example, unpleasant) reflex-eliciting sensor event has occurred. This defines an objective problem for the organism. This thesis provides a solution to this problem which is called Isotropic Sequence Order (ISO-) learning. The problem is solved by correlating the primary _reflex_ and a predictive sensor _input_: the result is that the system learns the temporal relation between the primary reflex and the earlier sensor input and creates a new predictive reflex. This (new) predictive reflex does not have the disadvantage of the primary reflex, namely of always being too late. As a consequence, the agent is able to maintain its desired input-state all the time. In terms of engineering this means that ISO learning solves the inverse controller problem for the reflex, which is mathematically proven in this thesis. Summarising, this means that the organism starts as a reactive system and learning turns the system into a pro-active system. It will be demonstrated by a real robot experiment that ISO learning can successfully learn to solve the classical obstacle avoidance task without external intervention (like rewards). In this experiment the robot has to correlate a reflex (retraction _after_ collision) with signals of range finders (turn _before_ the collision). After successful learning the robot generates a turning reaction before it bumps into an obstacle. Additionally it will be shown that the learning goal of ``reflex avoidance'' can also, paradoxically, be used to solve an attraction task.
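To give a flavour of how such a predictive reflex can be learned, here is a minimal sketch of an ISO-style differential Hebbian update, in which the weight of the early (predictive) input grows in proportion to that input times the temporal derivative of the output. The pulse-shaped signals, the fixed reflex gain, and the learning rate are invented, and the thesis's actual rule operates on band-pass filtered signals in a closed loop; consult the thesis for the exact formulation.

import numpy as np

# Sketch of an ISO-style differential Hebbian rule. The plastic weight
# w1 of a predictive input u1 changes as mu * u1 * dv/dt, where v is
# the neuron's output. Because u1 reliably precedes the reflex input
# u0 (and its trace overlaps it), w1 becomes positive and the output
# starts to react before the reflex event. All shapes and constants
# here are invented; real ISO learning uses filtered signals.

T, dt, mu = 200, 1.0, 0.01
w0, w1 = 1.0, 0.0              # fixed reflex gain, plastic predictive weight

def pulse(t_on, width, n=T):
    """Toy rectangular pulse standing in for a filtered sensor trace."""
    s = np.zeros(n)
    s[t_on:t_on + width] = 1.0
    return s

for trial in range(50):
    u1 = pulse(t_on=60, width=30)   # predictive signal (e.g. range finder),
                                    # long enough to overlap the reflex
    u0 = pulse(t_on=80, width=10)   # reflex signal arrives 20 steps later
    v_prev = 0.0
    for t in range(T):
        v = w0 * u0[t] + w1 * u1[t] # linear output neuron
        dv = (v - v_prev) / dt      # temporal derivative of the output
        w1 += mu * u1[t] * dv       # differential Hebbian update
        v_prev = v

print("learned predictive weight w1:", round(w1, 3))

In the closed-loop setting of the thesis the reflex input disappears once the early reaction succeeds, which is what halts further weight growth; this open-loop toy, lacking an environment, keeps strengthening w1 instead.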
-- http://www.cn.stir.ac.uk/~bp1/ mailto:bp1 at cn.stir.ac.uk From espaa at exeter.ac.uk Tue May 27 12:06:24 2003 From: espaa at exeter.ac.uk (Phil Weir) Date: Tue, 27 May 2003 17:06:24 +0100 (GMT Daylight Time) Subject: Pattern Analysis and Applications Journal Message-ID: Dear All, The latest issue of Pattern Analysis and Applications, while not yet available in printed form, is now available for viewing in PDF on the Springer website at: http://link.springer.de/link/service/journals/10044/tocs/t3006001.htm This issue contains the following papers: J. Chen, M. Yeasin, R. Sharma: Visual modelling and evaluation of surgical skill M. Mirmehdi, P. Clark, J. Lam: A non-contact method of capturing low-resolution text for OCR L.I. Kuncheva, C.J. Whitaker, C.A. Shipp, R.P.W. Duin: Limits on the majority vote accuracy in classifier fusion D. Frosyniotis, A. Stafylopatis, A. Likas: A divide-and-conquer method for multi-net classifiers M.R. Ahmadzadeh, M. Petrou: Use of Dempster-Shafer theory to combine classifiers which use different class boundaries J. Yang, D. Zhang, J.-Y. Yang: A generalised K-L expansion method which can deal with small sample size and high-dimensional problems Q. Li, Y. Xie: Randomised Hough transform with error propagation for line and circle detection Kagan Tumer, Nikunj C. Oza: Input decimated ensembles A. Mitiche, S. Hadjres: MDL estimation of a dense map of relative depth and 3D motion from a temporal sequence of images C. Thornton: Book Reviews. Truth from Trash: How Learning Makes Sense Best regards, Phil Weir. __________________________________________ Phil Weir Pattern Analysis and Applications Journal Department of Computer Science University of Exeter Exeter EX4 4PT UK Tel: +44-1392-264066 Fax: +44-1392-264067 E-mail: espaa at ex.ac.uk Web: http://www.dcs.ex.ac.uk/paa ____________________________________________