From austin at minster.cs.york.ac.uk Thu Mar 4 09:03:18 1999
From: austin at minster.cs.york.ac.uk (Jim Austin)
Date: Thu, 4 Mar 1999 14:03:18 +0000
Subject: 3 year PhD studentship from April 1999.
Message-ID: <9903041403.ZM14635@minster.cs.york.ac.uk>

Available from 1st April 1999.

CASE PhD Studentship in Computer Science and Chemistry.
University of York, York, UK.
Supported by the BBSRC and Glaxo-Wellcome Ltd.

Chemical structure matching using neural networks.

Applications are urgently invited for a three-year BBSRC PhD studentship supported by Glaxo-Wellcome under the industrial CASE scheme. The project will investigate the use of neural-network-based search systems for chemical structure matching in large chemical databases. The focus of the studentship will be on identifying the benefits of the methods in chemistry, and on the optimisation of the methods for small-molecule and protein structure matching in very large molecular database systems.

The project will be undertaken in the Departments of Chemistry (Protein Chemistry Group) and Computer Science (Neural Networks Research Group) at the University of York, and at the Glaxo-Wellcome research centre in Peterborough within the Computational Chemistry Group. All groups are internationally known for their work in neural networks or computational chemistry. The work will build on collaborative research over the last three years between the three groups, details of which can be found on our web page (http://www.cs.york.ac.uk/arch/neural/research/adam/molecules).

Candidates will have a first degree in chemistry and will have programming ability (preferably in C++, under UNIX). Additional experience in computer science and neural networks would be an advantage but is not essential.

The post must be filled on 1st April 1999, so candidates must be available to take up the post on that date. The deadline for applications is 15th March 1999, with interviews to be held as soon as possible after that date. Applications can be made by email, fax or post. Informal enquiries can be made to Prof. Jim Austin at the address below.

Prof. Jim Austin
Department of Computer Science
University of York
York YO10 5DD, UK
Tel: 01904 432734
Fax: 01904 432767
email: austin at cs.york.ac.uk

--
Jim Austin, Professor of Neural Computation
Advanced Computer Architecture Group,
Department of Computer Science,
University of York, York, YO10 5DD, UK.
Tel: 01904 43 2734  Fax: 01904 43 2767
web pages: http://www.cs.york.ac.uk/arch/

From smola at first.gmd.de Thu Mar 4 21:30:38 1999
From: smola at first.gmd.de (Alex Smola)
Date: Fri, 05 Mar 1999 03:30:38 +0100
Subject: PhD Thesis on "Learning with Kernels"
Message-ID: <36DF41CE.306494D8@first.gmd.de>

Dear Connectionists,

the PhD thesis "Learning with Kernels" was unavailable for download for the past two months, since I discovered a couple of wrong constants in sections 6-8. These are fixed now, and the thesis can be downloaded again at

http://svm.first.gmd.de/papers/Smola98.ps.gz

I apologize for any inconvenience this may have caused.

Alex J. Smola

[ Moderator's note: here is the abstract from the previous announcement. -- DST]

Support Vector (SV) Machines combine several techniques from statistics, machine learning and neural networks. One of the most important ingredients is the kernel, i.e. the concept of transforming linear algorithms into nonlinear ones via a map into feature spaces. The present work focuses on the following issues:

- Extensions of Support Vector Machines.
- Extensions of kernel methods to other algorithms such as unsupervised learning.
- Capacity bounds which are particularly well suited for kernel methods.

After a brief introduction to SV regression, it is shown how the classical \epsilon-insensitive loss function can be replaced by other cost functions while keeping the original advantages or adding other features such as automatic parameter adaptation. Moreover, the connection between kernels and regularization is pointed out. A theoretical analysis of several common kernels follows, and criteria to check Mercer's condition more easily are presented. Further modifications lead to semiparametric models and greedy approximation schemes. Next, three different types of optimization algorithms, namely interior point codes, subset selection algorithms, and sequential minimal optimization (including pseudocode), are presented. The primal-dual framework is used as an analytic tool in this context.

Unsupervised learning is an extension of kernel methods to new problems. Besides Kernel PCA, one can use regularization to obtain more general feature extractors. A second approach leads to regularized quantization functionals which allow a smooth transition between the Generative Topographic Map and Principal Curves.

The second part of the thesis deals with uniform convergence bounds for the algorithms and concepts presented so far. It starts with a brief self-contained overview of existing techniques and an introduction to functional analytic tools which play a crucial role in this problem. By viewing the class of kernel expansions as the image of a linear operator, it is possible to give bounds on the generalization ability of kernel expansions even when standard concepts like the VC dimension fail or give overly conservative estimates. In particular, it is shown that it is possible to compute the covering numbers of the given hypothesis classes directly, instead of taking the detour via the VC dimension. Applications of the new tools to SV machines, convex combinations of hypotheses (i.e. boosting and sparse coding), greedy approximation schemes, and principal curves conclude the presentation.

--
/ Address until 3/17/99 /
/ Alex J. Smola, Department of Engineering /
/ Australian National University, Canberra 0200, Australia /
/ Tel: (+61) 2 6279 8536   smola at first.gmd.de /
/ Fax: (+61) 2 6249 0506   http://www.first.gmd.de/~smola /
/ Private Address /
/ University House, GPO Box 1535 /
/ Australian National University, Canberra 2601, Australia /
/ Tel: (+61) 2 6249 5378 /

From mblsspw2 at fs2.mt.umist.ac.uk Fri Mar 5 05:51:27 1999
From: mblsspw2 at fs2.mt.umist.ac.uk (Philip Withers)
Date: Fri, 5 Mar 1999 11:51:27 +0100
Subject: postdoc position: Neural Network Modelling of Al Rolling Process/Property Relationships
Message-ID:

Hello!

I am interested in Gaussian process and recurrent neural network models. I have a post-doctoral position available here in Manchester, UK, to work on applied materials problems. The successful applicant should have a mathematical, physics or engineering background. Details are attached. If you know of anyone interested, please feel free to pass this information on.

Thank you,
Prof Phil Withers

THE UNIVERSITY OF MANCHESTER
MANCHESTER MATERIALS SCIENCE CENTRE
RESEARCH ASSOCIATE

Neural Network Modelling of Al Rolling Process/Property Relationships.
Applications are invited for a physicist, mathematician, materials scientist or engineer to take up a two-year post-doctoral research fellowship in collaboration with ALCAN International, Banbury, Oxfordshire. Experience of mathematical modelling and/or materials science and engineering, and a keenness to work on applied problems, would be an advantage. The Materials Science Centre in Manchester received the highest research assessment grade and has world-class expertise in light alloys.

A second post, focusing on the finite element modelling of friction welding of aerospace components and the measurement of stresses by neutron diffraction, is also expected to become available shortly. Modelling and/or materials science and engineering experience advantageous.

Salaries for the above posts will be in the range of £17,570 to £21,815 p.a. according to qualifications and experience.

For informal enquiries please contact Professor P J Withers at the Manchester Materials Science Centre, Grosvenor Street, Manchester, M1 7HS, UK, or philip.withers at man.ac.uk

For further details and an application form please contact: Office of the Director of Personnel, The University of Manchester, Oxford Road, Manchester, M13 9PL. Tel: 0161 275 2028, Fax: 0161 275 2221/2471, Minicom: 0161 275 7889, email: personnel at man.ac.uk, Web Site: http://www.man.ac.uk. Please quote ref 030/99.

***********************************************************************
Prof. P.J. Withers
Manchester Materials Science Centre,
Grosvenor St, Manchester, M1 7HS
Tel. (0)161 200 8872  Fax (0)161 200 3636 or 3586
philip.withers at man.ac.uk

From chiru at csa.iisc.ernet.in Sun Mar 7 06:44:07 1999
From: chiru at csa.iisc.ernet.in (Chiranjib Bhattacharya)
Date: Sun, 7 Mar 1999 17:14:07 +0530 (IST)
Subject: TR announcement: A fast algorithm for SVM classifier design
Message-ID:

We have recently developed a fast and simple algorithm for Support Vector Machine classifier design, using geometric ideas involving convex polytopes. Details are given in the Technical Report mentioned below. For soft copies please email: ssk at csa.iisc.ernet.in

Comments are welcome. A Fortran code implementation will be made available on request.

----------------------------------------------------------------------------

A Fast Iterative Nearest Point Algorithm for Support Vector Machine Classifier Design

S.S. Keerthi, S.K. Shevade, C. Bhattacharya and K.R.K. Murthy
Intelligent Systems Lab
Dept. of Computer Science and Automation
Indian Institute of Science
Bangalore - 560 012
India

Technical Report TR-ISL-99-03

Abstract

In this paper we give a new, fast iterative algorithm for support vector machine (SVM) classifier design. The basic problem treated is one that does not allow classification violations. The problem is converted to a problem of computing the nearest point between two convex polytopes. The suitability of two classical nearest point algorithms, due to Gilbert, and to Mitchell, Dem'yanov and Malozemov, is studied. Ideas from both these algorithms are combined and modified to derive our fast algorithm. For problems which require classification violations to be allowed, the violations are quadratically penalized, and an idea due to Friess is used to convert the task to a problem in which there are no classification violations. Comparative computational evaluation of our algorithm against powerful SVM methods such as Platt's Sequential Minimal Optimization shows that our algorithm is very competitive.
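To make the geometric reduction concrete, here is a minimal sketch of a Gilbert-style nearest-point iteration, written in Python/NumPy. This illustrates only the classical Gilbert algorithm that the abstract mentions; it is not the authors' combined algorithm, nor their Fortran code, and the brute-force difference set and all names and tolerances below are invented for the example.

import numpy as np

def gilbert_min_norm(V, iters=1000, tol=1e-8):
    """Approximate the minimum-norm point of conv(V); V has shape (n_points, dim)."""
    w = V[0].copy()                      # start at an arbitrary vertex
    for _ in range(iters):
        s = V[np.argmin(V @ w)]          # support point: vertex minimizing <v, w>
        if w @ s >= w @ w - tol:         # optimality: no vertex improves on w
            break
        d = s - w
        t = np.clip(-(w @ d) / (d @ d), 0.0, 1.0)   # exact line search on segment [w, s]
        w = w + t * d
    return w

# The two-polytope problem reduces to one polytope: the nearest point between
# conv(X_plus) and conv(X_minus) is the minimum-norm point of their difference set.
rng = np.random.default_rng(0)
X_plus = rng.normal(size=(50, 2)) + 3.0
X_minus = rng.normal(size=(40, 2)) - 3.0
D = (X_plus[:, None, :] - X_minus[None, :, :]).reshape(-1, 2)   # all pairwise differences
print("separating direction:", gilbert_min_norm(D))

Note that forming all pairwise differences, as done above for simplicity, would not scale; the report's contribution is a fast iteration combining Gilbert-style steps with ideas from the Mitchell-Dem'yanov-Malozemov algorithm.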
From harnad at coglit.soton.ac.uk Mon Mar 8 15:03:14 1999
From: harnad at coglit.soton.ac.uk (Stevan Harnad)
Date: Mon, 8 Mar 1999 20:03:14 +0000 (GMT)
Subject: The Neurology of Syntax: BBS Call for Commentators
Message-ID:

Below is the abstract of a forthcoming BBS target article. *** Please see also the 5 important announcements about new BBS policies and an address change at the bottom of this message. ***

THE NEUROLOGY OF SYNTAX: LANGUAGE USE WITHOUT BROCA'S AREA
by Yosef Grodzinsky

This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL by April 8th to:

bbs at cogsci.soton.ac.uk

or write to [PLEASE NOTE SLIGHTLY CHANGED ADDRESS]:

Behavioral and Brain Sciences
ECS: New Zepler Building
University of Southampton
Highfield, Southampton SO17 1BJ
UNITED KINGDOM

http://www.princeton.edu/~harnad/bbs/
http://www.cogsci.soton.ac.uk/bbs/
ftp://ftp.princeton.edu/pub/harnad/BBS/
ftp://ftp.cogsci.soton.ac.uk/pub/bbs/
gopher://gopher.princeton.edu:70/11/.libraries/.pujournals

If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract.

_____________________________________________________________

THE NEUROLOGY OF SYNTAX: LANGUAGE USE WITHOUT BROCA'S AREA

Yosef Grodzinsky
Department of Psychology
Tel Aviv University
Tel Aviv 69978 ISRAEL
and
Aphasia Research Center
Department of Neurology
Boston University School of Medicine
yosef1 at ccsg.tau.ac.il

ABSTRACT: A new view of the functional role of the left anterior cortex in language use is proposed. The experimental record indicates that most human linguistic abilities are not localized in this region. In particular, most of syntax (long thought to be there) is not located in Broca's area and its vicinity (operculum, insula and subjacent white matter). This cerebral region, implicated in Broca's aphasia, does have a role in syntactic processing, but a highly specific one: it is the neural home of receptive mechanisms involved in the computation of the relation between transformationally moved phrasal constituents and their extraction sites (in line with the Trace-Deletion Hypothesis). It is also involved in the construction of higher parts of the syntactic tree in speech production. By contrast, basic combinatorial capacities necessary for language processing - e.g., structure-building operations, lexical insertion - are not supported by the neural tissue of this cerebral region, nor is lexical or combinatorial semantics. The dense body of empirical evidence supporting this restrictive view comes mainly from several angles on lesion studies of syntax in agrammatic Broca's aphasia.
Five empirical arguments are presented: experiments in sentence comprehension; cross-linguistic considerations (where aphasia findings from several language types are pooled together and scrutinized comparatively); grammaticality and plausibility judgments; real-time processing of complex sentences; and rehabilitation. Also discussed are recent results from functional neuroimaging, and from structured observations on the speech production of Broca's aphasics. Syntactic abilities, nonetheless, are distinct from other cognitive skills, and are represented entirely and exclusively in the left cerebral hemisphere. Although more widespread in the left hemisphere than previously thought, they are clearly distinct from other human combinatorial and intellectual abilities. The neurological record (based on functional imaging, split-brain and right-hemisphere damaged patients, as well as patients suffering from a breakdown of mathematical skills) indicates that language is a distinct, modularly organized neurological entity. Combinatorial aspects of the language faculty reside in the human left cerebral hemisphere, but only the transformational component (or algorithms that implement it in use) is located in and around Broca's area.

KEYWORDS: agrammatism, aphasia, Broca's area, cerebral localization, dyscalculia, functional neuroanatomy, grammatical transformation, modularity, neuroimaging, syntax, trace-deletion.

____________________________________________________________

To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.

The URLs you can use to get to the BBS Archive:

http://www.princeton.edu/~harnad/bbs/
http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.grodzinsky.html
ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.grodzinsky
ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.grodzinsky

To retrieve a file by ftp from an Internet site, type either:
  ftp ftp.princeton.edu
or
  ftp 128.112.128.1
When you are asked for your login, type:
  anonymous
Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@"), then type:
  cd /pub/harnad/BBS
To show the available files, type:
  ls
Next, retrieve the file you want with (for example):
  get bbs.grodzinsky
When you have the file(s) you want, type:
  quit

____________________________________________________________

*** FIVE IMPORTANT ANNOUNCEMENTS ***

------------------------------------------------------------------

(1) There have been some very important developments in the area of Web archiving of scientific papers very recently.
Please see:

Science: http://www.cogsci.soton.ac.uk/~harnad/science.html
Nature: http://www.cogsci.soton.ac.uk/~harnad/nature.html
American Scientist: http://www.cogsci.soton.ac.uk/~harnad/amlet.html
Chronicle of Higher Education: http://www.chronicle.com/free/v45/i04/04a02901.htm

---------------------------------------------------------------------

(2) All authors in the biobehavioral and cognitive sciences are strongly encouraged to archive all their papers on CogPrints (as well as on their own home servers):

http://cogprints.soton.ac.uk/

It is extremely simple to do so and will make all of our papers available to all of us everywhere, at no cost to anyone.

---------------------------------------------------------------------

(3) BBS has a new policy of accepting submissions electronically. Authors can specify whether they would like their submissions archived publicly during refereeing in the BBS under-refereeing Archive, or in a referees-only, non-public archive. Upon acceptance, preprints of final drafts are moved to the public BBS Archive:

ftp://ftp.princeton.edu/pub/harnad/BBS/.WWW/index.html
http://www.cogsci.soton.ac.uk/bbs/Archive/

--------------------------------------------------------------------

(4) BBS has expanded its annual page quota and is now appearing bimonthly, so the service of Open Peer Commentary can now be offered to more target articles. The BBS refereeing procedure is also going to be considerably faster with the new electronic submission and processing procedures. Authors are invited to submit papers to:

Email: bbs at cogsci.soton.ac.uk
Web: http://cogprints.soton.ac.uk
     http://bbs.cogsci.soton.ac.uk/

INSTRUCTIONS FOR AUTHORS:
http://www.princeton.edu/~harnad/bbs/instructions.for.authors.html
http://www.cogsci.soton.ac.uk/bbs/instructions.for.authors.html

---------------------------------------------------------------------

(5) Call for Book Nominations for BBS Multiple Book Review

In the past, the Behavioral and Brain Sciences (BBS) journal was only able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!).

From Dunja.Mladenic at ijs.si Mon Mar 8 07:08:38 1999
From: Dunja.Mladenic at ijs.si (Dunja Mladenic)
Date: Mon, 08 Mar 1999 13:08:38 +0100
Subject: thesis announcement
References: <25598.920595604@skinner.boltz.cs.cmu.edu>
Message-ID: <36E3BDC6.E8FA29D4@ijs.si>

I'm glad to announce the availability of my PhD thesis "Machine Learning on non-homogeneous, distributed text data". Advisors: Prof. Ivan Bratko and Prof. Tom M. Mitchell.
The thesis is available at

http://www.cs.cmu.edu/~TextLearning/pww/PhD.html

as well as at

http://www-ai.ijs.si/DunjaMladenic/PhD.html

Best regards,
Dunja Mladenic

================

ABSTRACT

This dissertation proposes new machine learning methods where the corresponding learning problem is characterized by a high number of features, unbalanced class distribution, and asymmetric misclassification costs. The input is given as a set of text documents or their Web addresses (URLs). The induced target concept is appropriate for the classification of new documents, including shortened documents describing individual hyperlinks.

The proposed methods are based on several new solutions. A new, enriched document representation is proposed that extends the bag-of-words representation by adding word sequences and document topic categories. Features that represent word sequences are generated using a new, efficient procedure. Features giving topic categories are obtained from background knowledge constructed using the new machine learning method for learning from class hierarchies. When learning from a class hierarchy, the high number of class values, examples and features is handled by (1) dividing the problem into subproblems based on the hierarchical structure of class values and examples, (2) applying feature subset selection, and (3) pruning unpromising class values during classification. Several new feature scoring measures are proposed as a result of a comparison and analysis of different feature scoring measures used in feature subset selection on text data. The new measures are appropriate for text domains with several tens or hundreds of thousands of features, and can handle unbalanced class distributions and asymmetric misclassification costs.

The developed methods are suitable for the classification of documents, including shortened documents. We build descriptions of hyperlinks, and treat these as shortened documents. Since each hyperlink on the Web points to some document, the classification of hyperlinks (the corresponding shortened documents) could potentially be improved by using this information. We give the results of preliminary experiments for learning in domains with mutually dependent class attributes. Training examples are used for learning `a next state function on the Web', where document content (class attributes) is predicted from the hyperlink (feature-vector) that points to the document. The document content we are predicting is represented as a feature-vector, each feature being one of the mutually dependent class attributes.

The proposed methods and solutions are implemented and experimentally evaluated on real-world data collected from the Web in three independent projects. It is shown that document classification, categorization and prediction using the proposed methods perform well on large, real-world domains. The experimental findings further indicate that the developed methods can efficiently be used to support the analysis of large amounts of text data, automatic document categorization and abstraction, document content prediction based on hyperlink content, classification of shortened documents, development of user-customized text-based systems, and user-customized Web browsing. As such, the proposed machine learning methods contribute to machine learning and to the related fields of text-learning, data mining, intelligent data analysis, information retrieval, intelligent user interfaces, and intelligent agents.
Within machine learning, this thesis contributes an approach to learning on large, distributed text data, learning on hypertext, and learning from class hierarchies. Within computer science, it contributes to the better design of Web browsers and software assistants for people using the Web.

From Kaspar.Althoefer at kcl.ac.uk Tue Mar 9 12:04:04 1999
From: Kaspar.Althoefer at kcl.ac.uk (Althoefer, Kaspar)
Date: Tue, 09 Mar 1999 17:04:04 +0000
Subject: Research Studentship at King's College London, UK
Message-ID: <36E55484.AA3CF242@kcl.ac.uk>

Connectionists at CS.cmu.edu

Dear Colleagues,

I have a Research Studentship available here in London, UK, to work on waste pipe inspection, involving research on sensors and neural networks. Details are attached. If you know of anyone interested, please feel free to pass this information on.

Best regards,
Kaspar Althoefer.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

|_/ I N G'S
|  \ COLLEGE  L O N D O N
Department of Mechanical Engineering
Founded 1829

Research Studentship

A MULTI-SENSOR SYSTEM FOR SEWER INSPECTION

A Collaborative Project with North West Water, Water Research Centre and Telespec Ltd., sponsored by the EPSRC

A research studentship is available in the Department of Mechanical Engineering as a result of a recently awarded grant by the EPSRC. The Department was awarded the top rating, 5*, in the 1996 Research Assessment Exercise, and excellent facilities for both computational and experimental research are available. The project is co-sponsored by North West Water, the Water Research Centre (WRc) and Telespec Ltd.

The three-year project involves the development of a novel sensor device for the inspection of waste pipes and the classification of the acquired sensor data. Applications are welcome from candidates with an interest in pursuing research in the areas of sensors, digital signal processing and neural networks. Applicants will preferably hold, or expect to obtain, a good first or higher degree in mechanical engineering, electronic engineering or a related subject, and will have programming ability. Applicants will be expected to register for research studies leading to the MPhil/PhD degree.

The studentship covers tuition fees at the U.K./E.U. student rate and an annual maintenance allowance of £8,482 for three years.

Applications in the form of a curriculum vitae with the names of two academic referees must be sent to: Ms Nicola Nayler, Ref: EPSRC/KAA, Division of Engineering, King's College London, Strand, London WC2R 2LS, e-mail: Nicola.Nayler at kcl.ac.uk. All applications must be received by 15 April 1999.

Informal enquiries can be made to Dr Kaspar Althoefer at the address below:

Dr Kaspar Althoefer, Department of Mechanical Engineering, King's College, Strand, London WC2R 2LS, UK, TEL: +44 (0)171 873 2431, e-mail: Kaspar.Althoefer at kcl.ac.uk, http://www.eee.kcl.ac.uk/~kaspar.
Promoting excellence in teaching, learning & research. Equality of opportunity is College policy.

From d.husmeier at ic.ac.uk Wed Mar 10 09:32:34 1999
From: d.husmeier at ic.ac.uk (Dirk Husmeier)
Date: Wed, 10 Mar 1999 14:32:34 GMT
Subject: New Book
Message-ID: <24743.199903101432@picard.ee.ic.ac.uk>

The following book is now available:

Dirk Husmeier
NEURAL NETWORKS FOR CONDITIONAL PROBABILITY ESTIMATION
Forecasting Beyond Point Predictions
Perspectives in Neural Computing
Springer Verlag
ISBN 1-85233-095-3
275 pages
http://www.springer.co.uk

--------------------------------------------------
SYNOPSIS
--------------------------------------------------

Neural networks have been extensively applied to regression, forecasting, and system modelling. However, most of the conventional approaches predict only a single value as a function of the network inputs, which is inappropriate when the underlying conditional probability density is skewed or multimodal. The objective of this book is to study the application of neural networks to predicting the entire conditional probability distribution of an unknown data-generating process.

In the first part, the structure of a universal approximator architecture is discussed, and a backpropagation-like training scheme is derived from a maximum likelihood approach. More advanced chapters address the problems of training speed and generalisation performance. Several recent learning and regularisation methods are reviewed and adapted to the problem of predicting conditional probabilities: a combination of the random vector functional link net approach with the expectation maximisation algorithm, a generalisation of the Bayesian evidence scheme to mixture models, the derivation of an appropriate weighting scheme in network ensembles, and a discussion of why the over-fitting of individual networks may lead to an improved prediction performance of a network committee. All techniques and algorithms are applied to a set of various synthetic and real-world benchmark problems, and numerous graphs and diagrams provide a deeper insight into the nature of the learning and regularisation processes. Presupposing only a basic knowledge of probability and calculus, this book should be of interest to graduate students, researchers and practitioners in statistics, econometrics and artificial intelligence.

--------------------------------------------------
OVERVIEW
--------------------------------------------------

Conventional applications of neural networks usually predict a single value as a function of given inputs. In forecasting, for example, a standard objective is to predict the future value of some entity of interest on the basis of a time series of past measurements or observations. Typical training schemes aim to minimise the sum of squared deviations between predicted and actual values (the `targets'), by which, ideally, the network learns the conditional mean of the target given the input. If the underlying conditional distribution is Gaussian, or at least unimodal, this may be a satisfactory approach. However, for a multimodal distribution, the conditional mean does not capture the relevant features of the system, and the prediction performance will, in general, be very poor. This calls for a more powerful and sophisticated model, which can learn the whole conditional probability distribution.
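As a concrete illustration of what "learning the whole conditional probability distribution" can mean, here is a minimal sketch of a mixture-style density network and the maximum likelihood loss it is trained under, in Python/NumPy. This is a generic example in the spirit of the book's first part, not code from the book; the one-hidden-layer architecture, names and sizes are invented for the sketch (the book shows that a universal approximator for this task in fact needs two hidden layers).

import numpy as np

rng = np.random.default_rng(0)

D_in, H, K = 1, 16, 3        # input size, hidden units, mixture components
params = {
    "W1": rng.normal(0, 0.5, (H, D_in)), "b1": np.zeros(H),
    "W2": rng.normal(0, 0.5, (3 * K, H)), "b2": np.zeros(3 * K),
}

def mdn_forward(x, p):
    """Map inputs x, shape (N, D_in), to mixture parameters of p(y | x)."""
    h = np.tanh(x @ p["W1"].T + p["b1"])
    out = h @ p["W2"].T + p["b2"]
    logits, mu, log_sig = out[:, :K], out[:, K:2 * K], out[:, 2 * K:]
    pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)     # mixing weights via softmax
    return pi, mu, np.exp(log_sig)          # sigma kept positive via exp

def neg_log_lik(y, x, p):
    """Maximum likelihood loss: -sum_n log sum_k pi_k N(y_n | mu_k, sig_k)."""
    pi, mu, sig = mdn_forward(x, p)         # y has shape (N, 1) and broadcasts
    comp = pi * np.exp(-0.5 * ((y - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    return -np.log(comp.sum(axis=1) + 1e-12).sum()

# Toy multimodal problem: for x = y^2 + noise, each x has two likely y values,
# so the conditional mean is a poor predictor but a mixture can model both modes.
y = rng.uniform(-1, 1, (200, 1))
x = y ** 2 + rng.normal(0, 0.05, (200, 1))
print(neg_log_lik(y, x, params))

Training amounts to gradient descent on neg_log_lik; differentiating it through the network gives exactly the kind of backpropagation-like maximum likelihood scheme the synopsis describes.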
Chapter 1 demonstrates that even for a deterministic system and `benign' Gaussian observational noise, the conditional distribution of a future observation, conditional on a set of past observations, can become strongly skewed and multimodal. In Chapter 2, a general neural network structure for modelling conditional probability densities is derived, and it is shown that a universal approximator for this extended task requires at least two hidden layers. A training scheme is developed from a maximum likelihood approach in Chapter 3, and the performance of this method is demonstrated on three stochastic time series in Chapters 4 and 5.

Several extensions of this basic paradigm are studied in the following chapters, aiming at both an increased training speed and a better generalisation performance. Chapter 7 shows that a straightforward application of the Expectation Maximisation (EM) algorithm does not lead to any improvement in the training scheme, but that in combination with the random vector functional link (RVFL) net approach, reviewed in Chapter 6, the training process can be accelerated by about two orders of magnitude. An empirical corroboration of this `speed-up' can be found in Chapter 8.

Chapter 9 discusses a simple Bayesian approach to network training, where a conjugate prior distribution on the network parameters naturally results in a penalty term for regularisation. However, the hyperparameters still need to be set by intuition or cross-validation, so a natural extension is presented in Chapters 10 and 11, where the Bayesian evidence scheme, introduced to the neural network community by MacKay for regularisation and model selection in the simple case of Gaussian homoscedastic noise, is generalised to arbitrary conditional probability densities. The Hessian matrix of the error function is calculated with an extended version of the EM algorithm. The resulting update equations for the hyperparameters and the expression for the model evidence are found to reduce to MacKay's results in the above limit of Gaussian noise, and thus provide a consistent generalisation of these earlier results. An empirical test of the evidence-based regularisation scheme, presented in Chapter 12, confirms that the problem of overfitting can be considerably reduced, and that the training process is stabilised with respect to changes in the length of training time.

A further improvement of the generalisation performance can be achieved by employing network committees, for which two weighting schemes -- based on either the evidence or the cross-validation performance -- are derived in Chapter 13. Chapters 14 and 16 report the results of extensive simulations on a synthetic and a real-world problem, where the intriguing observation is made that in network committees, overfitting of the individual models can be useful and may lead to better prediction results than obtained with an ensemble of properly regularised networks. An explanation for this curiosity can be given in terms of a modified bias-variance dilemma, as expounded in Chapter 13.

The subject of Chapter 15 is the problem of feature selection and the identification of irrelevant inputs. To this end, the automatic relevance determination (ARD) scheme of MacKay and Neal is adapted to learning in committees of probability-predicting RVFL networks.
This method is applied in Chapter 16 to a real-world benchmark problem, where the objective is the prediction of housing prices in the Boston metropolitan area on the basis of various socio-economic explanatory variables. The book concludes in Chapter 17 with a brief summary.

From stefan.wermter at sunderland.ac.uk Wed Mar 10 11:46:38 1999
From: stefan.wermter at sunderland.ac.uk (Stefan Wermter)
Date: Wed, 10 Mar 1999 16:46:38 +0000
Subject: PhD Studentship in hybrid intelligent systems
Message-ID: <36E6A1EE.831F29AD@sunderland.ac.uk>

PhD Studentship in Hybrid Intelligent Systems

Applications are invited for a three-year PhD studentship in the area of Hybrid Intelligent Systems. Areas of interest include:

Artificial Neural Networks
Natural Language Processing
Hybrid Neural/Symbolic Architectures
Cognitive Neuroscience
Learning Agents and Softbots

More examples of possible titles for research topics of interest can be found at:
http://osiris.sunderland.ac.uk/~cs0stw/Projects/suggested_topics_titles.html

Applicants should have a good honours degree in a relevant subject. The studentship includes fees and a maintenance allowance (around 6,500 GBP, under review). There may be possibilities for earning additional sums in the Centre of Informatics. If interested, please e-mail Stefan.Wermter at sunderland.ac.uk

******************************************
Professor Stefan Wermter
Research Chair in Intelligent Systems
University of Sunderland
Centre of Informatics
School of Computing, Engineering and Technology
St. Peters Way
Sunderland SR6 0DD
United Kingdom
Phone: +44 191 515 3279
Fax: +44 191 515 2781
Email: stefan.wermter at sunderland.ac.uk
http://osiris.sunderland.ac.uk/~cs0stw/
http://www.his.sunderland.ac.uk/
******************************************

From jose at tractatus.rutgers.edu Thu Mar 11 06:38:37 1999
From: jose at tractatus.rutgers.edu (Stephen Jose Hanson)
Date: Thu, 11 Mar 1999 07:38:37 -0400
Subject: COMPUTER MANAGER/RESEARCH STAFF
Message-ID: <36E7AB3C.AB5D24E4@tractatus.rutgers.edu>

COMPUTER MANAGER/RESEARCH STAFF

Reporting to the chair, responsible for administering the computing resources of the department. A major component of this position involves research in Cognitive Science, especially related to connectionist networks (or neural networks and computational neuroscience). Will plan, direct, and implement research approaches and concepts with faculty, including writing and organizing research experiments. Must be able to write program specifications designed for specific research control situations. Other responsibilities consist of installing and debugging software, and routine system maintenance and administration. Will participate in planning and design for network growth and computing facilities as they relate to RU-NET 2000.

Requires a bachelor's degree, or an MS in Computer Science, Cognitive Science, Cognitive Neuroscience, AI or other related fields, or equivalent experience. Requires familiarity with C programming, UNIX system internals (BSD, System V, Solaris, Linux) and Windows (95, NT), as well as local area networks running TCP/IP. Image processing or graphics programming experience a plus.
EMAIL enquiries: jose at psychology.rutgers.edu (please include in the Subject heading: SYS ADM)
Salary Range 27; Retirement System ABP

Send resumes to: COMPUTER MANAGER SEARCH, Department of PSYCHOLOGY, RUTGERS-NEWARK, 101 Warren Street, SMITH HALL, Newark NJ 07102

From d.husmeier at ic.ac.uk Thu Mar 11 15:38:32 1999
From: d.husmeier at ic.ac.uk (Dirk Husmeier)
Date: Thu, 11 Mar 1999 20:38:32 GMT
Subject: Correction of URL
Message-ID: <25662.199903112038@picard.ee.ic.ac.uk>

On announcing my new book yesterday (Neural Networks for Conditional Probability Estimation, Springer), I erroneously stated the general URL of Springer Verlag (http://www.springer.co.uk) rather than the specific web address

http://www.springer.co.uk/comp/books/perspectives.html

from where further information about publications in the series "Perspectives in Neural Computing" can be obtained. I am sorry for any confusion or inconvenience caused by this.

Best wishes,
Dirk Husmeier

From dummy at ultra3.ing.unisi.it Thu Mar 11 15:57:15 1999
From: dummy at ultra3.ing.unisi.it (Paolo Frasconi)
Date: Thu, 11 Mar 1999 21:57:15 +0100 (MET)
Subject: CFP: Special Issue on Learning in Structured Domains
Message-ID:

CALL FOR PAPERS

Special issue on Connectionist Models for Learning in Structured Domains
IEEE Transactions on Knowledge and Data Engineering
Submission deadline: July 30, 1999

BACKGROUND

Structured representations are ubiquitous in different fields such as knowledge representation, language modeling, and pattern recognition. Although many of the most successful connectionist models are designed for "flat" (vector-based) or sequential representations, recursive or nested representations should be preferred in several situations. One obvious setting is concept learning when objects in the instance space are graphs or can be conveniently represented as graphs. Terms in first-order logic, blocks in document processing, patterns in structural and syntactic pattern recognition, chemical compounds, proteins in molecular biology, and even world wide web sites are all entities which are best represented as graphical structures, and they cannot easily be handled by vector-based architectures. In other cases (e.g., language processing) the process underlying the data has a (hidden) recursive nature, but only a flat representation is left as an observation. Still, the architecture should be able to deal with recursive representations in order to model correctly the mechanism that generated the observations.

The interest in developing connectionist architectures capable of dealing with these rich representations can be traced back to the end of the 80's. Early approaches include Touretzky's BoltzCONS, Pollack's RAAM model, and Hinton's recursive distributed representations. More recent techniques include labeled RAAMs, holographic reduced representations, and recursive neural networks. Today, after more than ten years since the explosion of interest in connectionism, research in architectures and algorithms for learning structured representations still has a lot to explore, and no definitive answers have emerged. It seems that the major difficulty with connectionist models is not just representing symbols, but rather devising proper ways of learning when examples are data structures, i.e. labeled graphs that can be used for describing relationships among symbols (or, more generally, combinations of symbols and continuously-valued attributes).
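To fix ideas before the topic list, here is a minimal sketch (Python/NumPy) of the kind of computation a recursive neural network performs on a labeled data structure: a fixed-size representation of a tree is built bottom-up by encoding each node's label together with the codes of its children. The fragment follows the general recursive-network idea rather than any specific model cited above, and all names and dimensions are invented for the illustration.

import numpy as np

rng = np.random.default_rng(1)

D_LABEL, D_STATE, MAX_CH = 4, 8, 2       # label size, state size, max out-degree
W_label = rng.normal(0, 0.5, (D_STATE, D_LABEL))
W_child = [rng.normal(0, 0.5, (D_STATE, D_STATE)) for _ in range(MAX_CH)]
b = np.zeros(D_STATE)

def encode(tree):
    """Recursively encode a labeled tree given as (label_vector, [subtrees])."""
    label, children = tree
    h = W_label @ label + b
    for i, child in enumerate(children[:MAX_CH]):
        h = h + W_child[i] @ encode(child)   # position-dependent child weights
    return np.tanh(h)

# A tiny labeled binary tree: a root with two leaves.
leaf = lambda v: (np.array(v), [])
tree = (np.array([1.0, 0, 0, 0]),
        [leaf([0, 1.0, 0, 0]), leaf([0, 0, 1.0, 0])])
print(encode(tree))     # fixed-size root vector, usable by any standard classifier

A classifier applied to the root vector, with the whole system trained by backpropagation through structure, then yields the "classification of data structures" that opens the topic list below.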
TOPICS

The aim of this special issue is to solicit and publish valuable papers that bring a clear picture of the state of the art in this area. We encourage submissions of papers addressing, in addition to other relevant issues, the following topics:

* Algorithms and architectures for classification of data structures.
* Unsupervised learning in structured domains.
* Belief networks for learning structured patterns.
* Compositional distributed representations.
* Recursive autoassociative memories.
* Learning structured rules and structured rule refinement.
* Connectionist learning of syntactic parsing from text corpora.
* Stochastic grammars and their relationships to neural and belief networks.
* Links between connectionism and syntactic and structural pattern recognition.
* Analogical reasoning.
* Applications, including:
  - Medical and technical diagnosis: discovery and manipulation of structured dependencies, constraints, explanations.
  - Molecular biology and chemistry: prediction of molecular structure folding, classification of chemical structures.
  - Automated reasoning: robust matching, manipulation of logical terms, proof plans, search space reduction.
  - Software engineering: quality testing, modularization of software.
  - Geometrical and spatial reasoning: robotics, structured representation of objects in space, figure animation, layout of objects.

INSTRUCTIONS

We encourage e-mail submissions (Postscript, RTF, and PDF are the only acceptable formats). For hard copy submission please send 6 copies of the manuscript to Prof. Marco Gori. Manuscripts should not exceed 30 pages double-spaced (excluding Figures and Tables). The title and the abstract should be sent separately in ASCII format, even before the final submission, so that reviewers can be contacted in good time.

IMPORTANT DATES

Submission of title and abstract (e-mail): July 15, 1999
Submission deadline: July 30, 1999
Notification of acceptance: December 31, 1999
Expected publication date: Mid-to-late 2000.

GUEST EDITORS

Prof. Paolo Frasconi
DIEE, University of Cagliari
Piazza d'Armi, 09123 Cagliari (ITALY)
Phone: +39 070 675 5849
E-mail: paolo at diee.unica.it

Prof. Marco Gori
DII, University of Siena
Via Roma 56, 53100 Siena (ITALY)
Phone: +39 0577 263 610
E-mail: marco at ing.unisi.it

Prof. Alessandro Sperduti
DI, University of Pisa
Corso Italia 40, 56125 Pisa (ITALY)
Phone: +39 050 887 213
E-mail: perso at di.unipi.it

From ehartman at pav.com Thu Mar 11 16:34:00 1999
From: ehartman at pav.com (Eric Hartman)
Date: Thu, 11 Mar 99 15:34:00 CST
Subject: job announcement
Message-ID: <36E8371F@pav.com>

Pavilion is the leader in the development and application of software for modeling, optimization and advanced control in the process industries (chemicals, polymers, refining, and pulp & paper). We offer a dynamic, creative environment where team members contribute to the success of the company as well as develop and grow their state-of-the-art knowledge and skills. Pavilion is located in Austin, TX and has approximately 100 employees. For more information on Pavilion, refer to our web site www.pavtech.com.

Researcher

A position in the research group at Pavilion is currently available. The successful candidate will be expected to take a leadership role in the design and development of new modeling and control software, and is therefore required to have a research background in modeling and control. Familiarity with neural network algorithms is also required.
Research experience in continuous optimization or model predictive control is a plus. A Master's or Ph.D. degree in engineering, computer science, physics, mathematics or operations research is required. Because the candidate will be designing new products, strong software design skills with experience in C++ are needed. Experience with Visual Basic, COM, and Windows programming is also a plus. The successful candidate must be self-motivated, a team player, and have strong communication skills.

Pavilion provides excellent benefits and compensation plans, including incentive stock options. Pavilion is an equal opportunity employer.

Send resume in confidence to Staffing:
Pavilion Technologies, Inc.
Dale Smith, Director of Human Relations
11100 Metric Blvd., #700
Austin, TX 78758-4018
Or e-mail: spiche at pav.com
fax: 512-438-1401

From nigeduff at cse.ucsc.edu Fri Mar 12 13:38:57 1999
From: nigeduff at cse.ucsc.edu (Nigel Duffy)
Date: Fri, 12 Mar 1999 10:38:57 -0800
Subject: Paper available Re: Gradient Descent and Boosting
Message-ID: <199903121838.KAA11899@alpha.cse.ucsc.edu>

David Helmbold and I have the following paper relating boosting to gradient descent. This relationship is used to derive an algorithm and to prove performance bounds on this new algorithm.

A Geometric Approach to Leveraging Weak Learners

Nigel Duffy and David Helmbold
University of California Santa Cruz

ABSTRACT

AdaBoost is a popular and effective leveraging procedure for improving the hypotheses generated by weak learning algorithms. AdaBoost and many other leveraging algorithms can be viewed as performing a constrained gradient descent over a potential function. At each iteration the distribution over the sample given to the weak learner is the direction of steepest descent. We introduce a new leveraging algorithm based on a natural potential function. For this potential function, the direction of steepest descent can have negative components. Therefore we provide two transformations for obtaining suitable distributions from these directions of steepest descent. The resulting algorithms have bounds that are incomparable to AdaBoost's, and their empirical performance is similar to AdaBoost's.

To appear in EuroColt 99, to be published by Springer Verlag.

Available from:
http://www.cse.ucsc.edu/research/ml/papers/GeometricLeveraging.ps

From carlos at sonnabend.ifisiol.unam.mx Fri Mar 12 21:38:19 1999
From: carlos at sonnabend.ifisiol.unam.mx (Carlos Brody)
Date: Fri, 12 Mar 1999 20:38:19 -0600
Subject: Cross-correlations DO NOT imply synchrony
Message-ID: <199903130238.UAA08617@sonnabend.ifisiol.unam.mx>

Cross-correlations DO NOT imply synchrony: announcing 3 papers
--------------------------------------------------------------

Suppose that you record from two stimulus-driven cells simultaneously, over many trials. Interested in whether they are synchronized, you compute the average cross-correlogram of their spike trains. (For the initiated, you compute their shuffle-corrected cross-correlogram, so as to get rid of direct stimulus influences.) You find, in the resulting correlogram, that there is a narrow peak, centered at zero, with a width of say 15 ms. "Ah! The cells are synchronized on a 15-ms timescale!" you conclude. In concluding this you will be doing what most people do, and what most papers in the literature do.

THIS CONCLUSION DOES NOT NECESSARILY FOLLOW. How and why?
If the PSTHs of the cells have narrow peaks, by which I mean as narrow as the peak in the xcorrelogram itself, then even if the mechanism synchronizing the cells has a very, very slow timescale (e.g. tens of seconds), the xcorrelogram will have a narrow peak. Such a peak would NOT be an artifact. It arises ONLY if there *IS* an interaction -- synchrony, if you will -- between the two cells. What is wrong is the conclusion regarding the timescale of the interaction. A narrow peak (tens of ms) does NOT necessarily mean a fast interaction or a fast timescale of synchronization.

Wrong interpretations of this sort can make nonsense of the arguments one is making with respect to the data. A case in point is Sillito et al., "Feature-linked synchronization of thalamic relay cell firing induced by feedback from the visual cortex", Nature 369: 479-482 (1994). A paper recently published in J. Neurophysiol (see pointer below) uses a simple biophysical model to go through that example in detail. It shows how one can get exactly the same xcorrelograms Sillito et al. got, but without any binding-related (i.e. fast) synchrony at all. Instead, in the model the only interaction between the cells is that their resting potential slowly covaries over the trials of the experiment. That slow (tens of seconds) covariation reproduces Sillito et al.'s data in remarkable detail.

Two other papers, in press in Neural Computation, go through these kinds of issues in a more abstract manner. The first describes the problem, and tries to provide rules of thumb for being alert to when interpretation problems may arise. The second paper suggests a couple of methods to try to disambiguate interpretations.

Comments welcome.

Carlos Brody
carlos at sonnabend.ifisiol.unam.mx
http://www.cns.caltech.edu/~carlos

-----------------------------------------------------------

SLOW COVARIATIONS IN NEURONAL RESTING POTENTIALS CAN LEAD TO ARTEFACTUALLY FAST CROSS-CORRELATIONS IN THEIR SPIKE TRAINS

by C.D. Brody
J. Neurophysiol., 80: 3345-3351 (Dec 1998)
Reprint also at http://www.cns.caltech.edu/~carlos/papers/slowcovs.pdf

A model of two lateral geniculate nucleus (LGN) cells, that interact only through slow (tens of seconds) covariations in their resting membrane potentials, is used here to investigate the effect of such slow covariations on cross-correlograms taken during stimulus-driven conditions. Despite the slow timescale of the interactions, the model generates cross-correlograms with peak widths in the range of 25 -- 200 milliseconds. These bear a striking resemblance to those reported in studies of LGN cells by Sillito et al. (1994), which were taken at the time as evidence of a fast spike timing synchronization interaction; the model highlights the possibility that those correlogram peaks may have been caused by a mechanism other than spike synchronization. Slow resting potential covariations are suggested instead as the dominant generating mechanism. How can a slow interaction generate covariogram peaks with a width 100 to 1000 times thinner than its timescale? Broad peaks caused by slow interactions are modulated by the cells' PSTHs. When the PSTHs have thin peaks (e.g., tens of milliseconds), the cross-correlogram peaks generated by slow interactions will also be thin; such peaks are easily misinterpretable as being caused by fast interactions. Though this point is explored here in the context of LGN recordings, it is a general point and applies elsewhere.
When cross-correlogram peak widths are of the same order of magnitude as PSTH peak widths, experiments designed to reveal short-timescale interactions must be interpreted with the issue of possible contributions from slower interactions in mind.

--------------------------------------------------------------

http://www.cns.caltech.edu/~carlos/papers/nosynch.ps.Z  nosynch.pdf

CORRELATIONS WITHOUT SYNCHRONY

by C.D. Brody
In press, Neural Computation

Peaks in spike train correlograms are usually taken as indicative of spike timing synchronization between neurons. Strictly speaking, however, a peak merely indicates that the two spike trains were not independent. Two biologically plausible ways of departing from independence which are capable of generating peaks very similar to spike timing peaks are described here: covariations over trials in response latency, and covariations over trials in neuronal excitability. Since peaks due to these interactions can be similar to spike timing peaks, interpreting a correlogram may be a problem with ambiguous solutions. What peak shapes do latency or excitability interactions generate? When are they similar to spike timing peaks? When can they be ruled out from having caused an observed correlogram peak? These are the questions addressed here. A companion paper (Brody 1999b, "Disambiguating different covariation types") proposes quantitative methods to tell cases apart when latency or excitability covariations cannot be ruled out.

--------------------------------------------------------------

http://www.cns.caltech.edu/~carlos/papers/disambiguating.ps.Z  disambiguating.pdf

DISAMBIGUATING DIFFERENT COVARIATION TYPES

by C.D. Brody
In press, Neural Computation

Covariations in neuronal latency or excitability can lead to peaks in spike train covariograms which may be very similar to those caused by spike timing synchronization (Brody 1999a, "Correlations without synchrony"). Two quantitative methods are described here: (1) a method to estimate the excitability component of a covariogram, based on trial-by-trial estimates of excitability; once estimated, this component may be subtracted from the covariogram, leaving only other types of contributions. (2) A method to determine whether the covariogram could potentially have been caused by latency covariations.

--------------------------------------------------------------

From jose at tractatus.rutgers.edu Sat Mar 13 09:23:13 1999
From: jose at tractatus.rutgers.edu (Stephen Jose Hanson)
Date: Sat, 13 Mar 1999 10:23:13 -0400
Subject: RUMBA- POSTDOC POSITIONS and GRADUATE FELLOWSHIPS in Cog Sci/Cog Neuro
Message-ID: <36EA74D1.59155BAD@tractatus.rutgers.edu>

RUMBA at RUTGERS UNIVERSITY, Newark Campus

The Rutgers Mind/Brain Analysis (RUMBA) Project anticipates making several POSTDOCTORAL appointments, available in Fall 99. These positions run for a minimum of 2 years and will be in the areas of cognitive neuroscience and connectionist modeling, with applications to recurrent networks, image processing and functional brain imaging. The Rutgers Psychology Department has made several appointments in this area in the last few years, has access to two 1.5T magnets, and is in the process of acquiring a head-only 3T magnet for cognitive neuroscience research. Review of applications will begin on June 10th, 1999, but applications will continue to be accepted until all positions are filled. Rutgers University is an equal opportunity/affirmative action employer. Qualified women and minority candidates are especially encouraged to apply.
Send a CV, three letters of recommendation and 2 reprints to: Professor S. J. Hanson, Chair, Department of Psychology, Post Doc Search, Rutgers University, Newark, NJ 07102. Email enquiries can be made to rumba at tractatus.rutgers.edu; also see http://www.psych.rutgers.edu/RUMBA

PSYCHOLOGY GRADUATE PROGRAM - Newark Campus
GRADUATE RESEARCH FELLOWSHIPS, Fall 99

The graduate program in COGNITIVE SCIENCE and COGNITIVE NEUROSCIENCE seeks students for FALL 99. Interested applicants from Psychology, Computer Science or Cognitive Science undergraduate programs are encouraged to apply. These fellowships are competitive and provide comprehensive training in computation, neuro-imaging and cognitive science/perception research. Please send enquiries and applications to Professor S. J. Hanson, Chair, Department of Psychology, Rutgers University, Newark, NJ 07102. Email enquiries can be made to gradpgm at tractatus.rutgers.edu; please also see our web page for more information on the graduate faculty and program: http://www.psych.rutgers.edu

From steve at cns.bu.edu Sat Mar 13 05:56:47 1999
From: steve at cns.bu.edu (Stephen Grossberg)
Date: Sat, 13 Mar 1999 05:56:47 -0500
Subject: two views of consciousness
Message-ID:

CONSCIOUSNESS AND COMPLEXITY OR CONSCIOUSNESS AND RESONANCE?

Stephen Grossberg and Rajeev D.S. Raizada
Department of Cognitive and Neural Systems
Boston University
677 Beacon Street
Boston, MA 02215
Phone: 617-353-7858 or -7857
Fax: 617-353-7755
Email: steve at cns.bu.edu, rajeev at cns.bu.edu

In their recent article in Science, Tononi and Edelman (1) suggest that "conscious experience is integrated ... and at the same time it is highly differentiated", that "integration [occurs] ... through reentrant interactions", and that "attention may increase ... conscious salience". They also note that "cortical regions ... for controlling action ... may not contribute significantly to conscious experience".

An alternative theory unifies these several hypotheses into a single hypothesis: "All conscious states are resonant states" (2), and suggests how resonant states enable brains to learn about a changing world throughout life (3). Resonance arises when bottom-up and top-down, or "reentrant", processes reach an attentive consensus between what is expected and what is in the world. Because resonance dynamically regulates the learning of sensory and cognitive representations, this theory is called adaptive resonance theory, or ART.

ART implies all the properties noted by Tononi and Edelman, but also clarifies their critical link to learning, and explains why only a certain type of excitatory top-down matching can stabilize learning (4): when top-down attentional signals match bottom-up sensory input, their mutual excitation strengthens and maintains existing neural activity long enough for synaptic changes to occur. Thus, attentionally relevant stimuli are learned, while irrelevant stimuli are suppressed and hence prevented from destabilizing existing memories. Recent experiments support these predictions during vision (5), audition (6), and learning (7).

Why dorsal cortical circuits that control action do not support consciousness now follows easily: such circuits use inhibitory matching. For example, after moving your arm to an expected position, movement stops (viz., is inhibited) because "where you want to move" matches "where you are" (8). Inhibitory matches do not resonate, hence are not conscious.
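The contrast between the two matching types can be stated compactly. The toy Python sketch below is only an illustration of the idea, not the authors' published model: the fuzzy-AND match rule and the vigilance constant are borrowed loosely from ART-style matching, and all numbers are invented.

import numpy as np

def excitatory_match(bottom_up, expectation, vigilance=0.8):
    """ART-style matching: keep the features shared by input and expectation."""
    matched = np.minimum(bottom_up, expectation)        # fuzzy-AND match
    score = matched.sum() / (bottom_up.sum() + 1e-12)   # fraction of input confirmed
    resonant = score >= vigilance                       # resonance enables learning
    return matched, resonant

def inhibitory_match(target, current):
    """Motor-style matching: a difference vector that drives movement."""
    return target - current                             # zero once the goal is reached

x = np.array([1.0, 1.0, 0.0, 1.0])       # bottom-up input
e = np.array([1.0, 0.9, 0.0, 0.8])       # learned top-down expectation
print(excitatory_match(x, e))             # strong overlap -> resonance persists
print(inhibitory_match(np.ones(4), np.ones(4)))   # perfect match -> activity vanishes

In the excitatory case a good match sustains activity, which can resonate and support learning; in the inhibitory case a good match extinguishes activity, so there is nothing left to resonate.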
A detailed model of how the laminar circuits of neocortex use resonance to control cortical development, learning, attention, and grouping of information has recently been proposed (9), and suggests new experiments to test the predicted linkages between learning, attention, and consciousness.

REFERENCES

1. G. Tononi and G.M. Edelman, Science 282, 1846 (1998).
2. S. Grossberg, Psychol. Rev. 87, 1 (1980); S. Grossberg, Studies of Mind and Brain (Kluwer/Reidel, Amsterdam, 1982); S. Grossberg, The Adaptive Brain, Vol. I (Elsevier/North-Holland, Amsterdam, 1987); S. Grossberg, Amer. Scientist 83, 438 (1995); S. Grossberg, Consciousness and Cognition 8, 1 (1999).
3. C. D. Gilbert, Physiol. Rev. 78, 467 (1998); D. V. Buonomano and M. M. Merzenich, Ann. Rev. Neurosci. 21, 149 (1998).
4. G. A. Carpenter and S. Grossberg, Computer Vis., Graphics, and Image Proc. 37, 54 (1987).
5. A. M. Sillito, H. E. Jones, G. L. Gerstein, D. C. West, Nature 369, 479 (1994); J. Bullier, J. M. Hupe, A. C. James, P. Girard, J. Physiol. (Paris) 90, 217 (1996); V. A. F. Lamme, K. Zipser, H. Spekreijse, Soc. Neurosci. Abstr. 23, 603.1 (1997).
6. Y. Zhang, N. Suga, J. Yan, Nature 387, 900 (1997).
7. E. Gao and N. Suga, Proc. Natl. Acad. Sci. USA 95, 12663 (1998); E. R. Ergenzinger, M. M. Glasier, J. O. Hahm, T. P. Pons, Nature Neurosci. 1, 226 (1998); J. P. Rauschecker, Nature Neurosci. 1, 179 (1998); M. Ahissar and S. Hochstein, Proc. Natl. Acad. Sci. USA 90, 5718 (1993).
8. D. Bullock, P. Cisek, S. Grossberg, Cereb. Cortex 8, 48 (1998).
9. S. Grossberg, Spatial Vision 12, 163 (1999); S. Grossberg and J. R. Williamson, Soc. Neurosci. Abstr. 23, 227.9 (1997); R. D. S. Raizada and S. Grossberg, Soc. Neurosci. Abstr. 24, 105.10 (1998).

From nicolang at yugiri.brain.riken.go.jp Mon Mar 15 01:26:40 1999 From: nicolang at yugiri.brain.riken.go.jp (Nicolangelo Iannella) Date: Mon, 15 Mar 1999 15:26:40 +0900 Subject: Call for Invited Speakers Message-ID: <36ECA81F.852A8B92@yugiri.brain.riken.go.jp>

******CALL FOR PARTICIPATION/INVITATION******

On behalf of Professor Tom Gedeon, chairman of ICONIP'99, we are looking for INVITED SPEAKERS to participate in a special session (or a series of special sessions) discussing "Neural Information Coding" at this year's International Conference on Neural Information Processing, ICONIP'99, to be held in Perth, Australia, November 16-20.

************** NOTE ***************
TRAVEL REIMBURSEMENTS CANNOT BE PROVIDED
************** NOTE ***************

Neural information coding, or the neural code, is still today a subject full of mystery and controversy. To date, there are several mainstream contender neural codes (such as temporal and population coding), each with its own supporting experimental evidence. This has inevitably led to some confusion and differences of opinion as to how information is encoded and processed within a "biological brain". Even though some progress has been made in solving the neural code, it is still not a popular discussion issue, and it definitely needs further debate and wider recognition, especially in the neural network community.

A further benefit, especially for those living in the northern hemisphere (Europe, US and Canada), is that since the conference is held near the end of spring, it will be a great opportunity to escape the cold winter and do some sight-seeing in Perth or around Australia, where one can witness truly unique natural wilderness.
There are TWO CHOICES of presentation format: you, the invitee, can give either A) a normal paper presentation of about 20 minutes' duration, or B) a panel discussion style presentation (and can still have a paper in the proceedings). It would be greatly appreciated if you would confirm your attendance at ICONIP'99 ASAP and, furthermore, indicate which style of presentation you prefer. Please send all correspondence to nicolang at yugiri.riken.go.jp or, if this fails, use angelo at postman.riken.go.jp

Yours sincerely, Nicolangelo Iannella

-- Nicolangelo Iannella RIKEN, Brain Science Institute Laboratory for Neural Modelling 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan Email: angelo at postman.riken.go.jp Tel: +81 48 462 1111 ex. 7154 Fax: +81 48 467 9684

From cindy at cns.bu.edu Mon Mar 15 16:03:56 1999 From: cindy at cns.bu.edu (Cynthia Bradford) Date: Mon, 15 Mar 1999 16:03:56 -0500 Subject: Call for Registration Message-ID: <199903152103.QAA12910@retina.bu.edu>

***** CALL FOR REGISTRATION ***** and ***** COMPLETE PROGRAM *****

THIRD INTERNATIONAL CONFERENCE ON COGNITIVE AND NEURAL SYSTEMS

Tutorials: May 26, 1999 Meeting: May 27-29, 1999

Boston University 677 Beacon Street Boston, Massachusetts 02215 http://cns-web.bu.edu/meetings/

Sponsored by Boston University's Center for Adaptive Systems and Department of Cognitive and Neural Systems with financial support from DARPA and ONR

How Does the Brain Control Behavior? How Can Technology Emulate Biological Intelligence?

The conference will include invited tutorials and lectures, and contributed lectures and posters by experts on the biology and technology of how the brain and other intelligent systems adapt to a changing world. The conference is aimed at researchers and students of computational neuroscience, connectionist cognitive science, artificial neural networks, neuromorphic engineering, and artificial intelligence. A single oral or poster session enables all presented work to be highly visible. Costs are kept at a minimum without compromising the quality of meeting handouts and social events.

SEE BELOW FOR THE COMPLETE MEETING SCHEDULE (printed after the registration form). SEE THE WEB SITE FOR HOTEL AND OTHER CONFERENCE INFORMATION.

********************

REGISTRATION FORM

Third International Conference on Cognitive and Neural Systems Department of Cognitive and Neural Systems Boston University 677 Beacon Street Boston, Massachusetts 02215

Tutorials: May 26, 1999 Meeting: May 27-29, 1999 FAX: (617) 353-7755 http://cns-web.bu.edu/meetings/

(Please Type or Print)

Mr/Ms/Dr/Prof: _____________________________________________________
Name: ______________________________________________________________
Affiliation: _______________________________________________________
Address: ___________________________________________________________
City, State, Postal Code: __________________________________________
Phone and Fax: _____________________________________________________
Email: _____________________________________________________________

The conference registration fee includes the meeting program, reception, two coffee breaks each day, and meeting proceedings. The tutorial registration fee includes tutorial notes and two coffee breaks.
CHECK ONE: ( ) $70 Conference plus Tutorial (Regular) ( ) $45 Conference plus Tutorial (Student) ( ) $45 Conference Only (Regular) ( ) $30 Conference Only (Student) ( ) $25 Tutorial Only (Regular) ( ) $15 Tutorial Only (Student) METHOD OF PAYMENT (please fax or mail): [ ] Enclosed is a check made payable to "Boston University". Checks must be made payable in US dollars and issued by a US correspondent bank. Each registrant is responsible for any and all bank charges. [ ] I wish to pay my fees by credit card (MasterCard, Visa, or Discover Card only). Name as it appears on the card: _____________________________________ Type of card: _______________________________________________________ Account number: _____________________________________________________ Expiration date: ____________________________________________________ Signature: __________________________________________________________ ******************** MEETING SCHEDULE Wednesday, May 26, 1999 (Tutorials) 7:45am---8:30am MEETING REGISTRATION 8:30am--10:00am Stephen Grossberg: "Development, learning, attention, and grouping by the laminar circuits of visual cortex" 10:00am--10:30am COFFEE BREAK 10:30am--12:00pm Daniel Schacter: "True memories, false memories: A cognitive neuroscience perspective" 12:00pm---1:30pm LUNCH 1:30pm---3:00pm Gail Carpenter: "Adaptive resonance theory and practice" 3:00pm---3:30pm COFFEE BREAK 3:30pm---5:00pm Tomaso Poggio: "Supervised learning: Regularization and support vector machines" Thursday, May 27, 1999 (Invited Talks, Contributed Talks, and Posters) Session Chairs: Stephen Grossberg (AM) and Daniel Bullock (PM) 7:15am---8:00am MEETING REGISTRATION 7:55am---8:00am Stephen Grossberg: "Welcome and Introduction" 8:00am---8:45am Joseph LeDoux: "Learning about danger: Systems and synapses" 8:45am---9:30am Joaquin Fuster: "The frontal lobe in temporal aspects of cognition" 9:30am--10:15am John Lisman: "The role of theta-gamma oscillations in memory processes" 10:15am--10:45am COFFEE BREAK AND POSTER SESSION I 10:45am--11:30am Michael Hasselmo: "Neuromodulation and cortical memory function: Physiology and computational modeling" 11:30am--12:15pm Dario Floreano: "Evolutionary cybernetics: Exploring the foundations of adaptive intelligence in biomimetic robots" 12:15pm---1:00pm Paolo Gaudiano: "Visually guided navigation with autonomous mobile robots" 1:00pm---2:15pm LUNCH 2:15pm---3:15pm PLENARY TALK: Rodney Brooks: "Learning through social interaction: Robot implementations" 3:15pm---3:30pm Hans Colonius and Petra Arndt: "Visual-auditory interaction in saccadic eye movements" 3:30pm---3:45pm John A. Bullinaria, Patricia M. Riddell, and Simon K. Rushton: "Modelling development and adaptation of oculomotor control" 3:45pm---4:00pm Antonio Guerrero, Juan Lopez, and Jorge Feliu: "Sensory-motor control architecture based on biological models for a stereohead" 4:00pm---4:15pm Magnus Snorrason and Jeff Norris: "Vision based path planning for Martian terrain" 4:15pm---4:30pm Philipp Althaus and Paul F.M.J. Verschure: "Distributed adaptive control 5: Bayesian theory of decision making, implemented on simulated and real robots" 4:30pm---4:45pm Mark A. Kon and Leszek Plaskota: "Information complexity of neural networks" 4:45pm---5:00pm C.H. 
Chen and Baoming Hong: "A high efficient face recognition technique based on multi-level feature representations and neural nets" 5:00pm---5:30pm COFFEE BREAK 5:00pm---8:00pm POSTER SESSION I (see below for details) Friday, May 28, 1999 (Invited and Contributed Talks) Session Chairs: Gail Carpenter (AM) and Frank Guenther (PM) 7:30am---8:00am MEETING REGISTRATION 8:00am---8:45am Shihab Shamma: "Encoding of timbre in the auditory system" 8:45am---9:30am Nobuo Suga: "Adjustment and improvement of auditory signal processing by the corticofugal feedback system" 9:30am--10:15am Stephen Grossberg: "Neural models of auditory and speech perception" 10:15am--10:45am COFFEE BREAK 10:45am--11:30am Steven Greenberg: "From sound to meaning: A syllable-centric perspective on spoken language" 11:30am--12:15pm Larry Gillick: "The state of large vocabulary continuous speech recognition" 12:15pm---1:00pm Andreas Andreou: "Neuromorphic VLSI microsystems for speech and vision processing" 1:00pm---2:15pm LUNCH 2:15pm---2:30pm James R. Williamson: "A hierarchical network for learning vernier discrimination" 2:30pm---2:45pm Scott Oddo: "ARTMAP: Automated interpretation of Lyme IgG Western Blots" 2:45pm---3:00pm Artur Dubrawski and Dorota Daniecka: "Attribute selection for neural training of a breast cancer diagnosis system" 3:00pm---3:15pm P. Niyogi, M.M. Sondhi, and C. Burges: "A computational framework for distinctive feature based speech recognition" 3:15pm---3:30pm Fatima T. Husain and Michiro Negishi: "Model of English vowel classification by Spanish speakers" 3:30pm---3:45pm Nancy Chang: "Learning form-meaning mappings for language understanding" 3:45pm---4:00pm L.M. Romanski and P.S. Goldman-Rakic: "An acoustically responsive domain in the prefrontal cortex of the awake behaving Macaque monkey" 4:00pm---4:30pm COFFEE BREAK 4:30pm---4:45pm R.M. Borisyuk, M.J. Denham, and F.C. Hoppensteadt: "An oscillatory model of novelty detection in the hippocampus" 4:45pm---5:00pm M.J. Denham and R.M. Borisyuk: "An oscillatory model of the septal-hippocampal inhibitory circuit and the modulation of hippocampal theta activity" 5:00pm---5:15pm Simona Doboli, Ali A. Minai, and Phillip J. Best: "Context-dependent place representations in the hippocampus" 5:15pm---5:30pm Jeffrey Krichmar, Theoden Netoff, and James Olds: "Place cells emerge in a network of simulated CA3 pyramidal cells that receive robotic sensor input" 5:30pm---5:45pm Oury Monchi and Michael Petrides: "Investigating various working memory components with a computational model of basal ganglia-thalamocortical loops" 5:45pm---6:00pm Frank van der Velde and Marc de Kamps: "Locating a familiar object using feedback modulation" 6:00pm---8:00pm MEETING RECEPTION Saturday, May 29, 1999 (Invited Talks, Contributed Talks, and Posters) Session Chairs: Eric Schwartz (AM) and Ennio Mingolla (PM) 7:30am---8:00am MEETING REGISTRATION 8:00am---8:45am Charles Gilbert: "Adult cortical dynamics" 8:45am---9:30am David van Essen: "Mapping and modeling of cortical structure and function" 9:30am--10:15am Randolph Blake: "What can be perceived in the absence of visual awareness?" 
10:15am--10:45am COFFEE BREAK AND POSTER SESSION II 10:45am--11:30am Steven Zucker: "Complexity, confusion, and computational vision" 11:30am--12:15pm Ennio Mingolla: "Cortical computation for attentive visual navigation: Heading, time-to-contact, and pursuit movements" 12:15pm---1:00pm Richard Shiffrin: "A model for implicit and explicit memory" 1:00pm---2:15pm LUNCH 2:15pm---3:15pm PLENARY TALK: Shinsuke Shimojo: "Visual surface filling-in assessed by psychophysics and TMS (Transcranial Magnetic Stimulation)" 3:15pm---3:30pm S.R. Lehky, T.J. Sejnowski, and R. Desimone: "Sparseness of coding in monkey striate complex cells: Data and modeling" 3:30pm---3:45pm R.D.S. Raizada and S. Grossberg: "How do preattentive grouping and attentive modulation select object representations in the layers of visual cortex?" 3:45pm---4:00pm Nikolaus Almassy, Gerald M. Edelman, and Olaf Sporns: "Function of long-range intracortical connections in a model of the visual cortex embedded in a behaving real-world device" 4:00pm---4:15pm Thorsten Hansen, Karl O. Riedel, Luiz Pessoa, and Heiko Neumann: "Regularization and 2D brightness filling-in: Theoretical analysis and numerical simulations" 4:15pm---4:30pm Daniel A. Pollen, Andrzej W. Przybyszewski, Warren Foote, and Mark A. Rubin: "Neurons in Macaque V4 respond strongly to stimulus discontinuities" 4:30pm---4:45pm DeLiang L. Wang: "Object-based selection by a neural oscillator network" 4:45pm---5:00pm Nilendu Gautambhai Jani and Daniel S. Levine: "A neural network theory of proportional analogy-making" 5:00pm---5:30pm COFFEE BREAK 5:00pm---8:00pm POSTER SESSION II (see below for details) POSTER SESSION I: Thursday, May 27, 1999 All posters will be displayed for the full day. Cognition, Learning, Recognition (B): Brigitte Nevers and Remy Versace: "Contributions of studies about the frequency effects in the processes of activation and integration of memory traces" Gary C.-W. Shyi and Chang-Ming Lin: "Computing representations for object recognition in visual search: An eye-movement analysis" Emmet Spier: "Cognition not needed: An associative model for the outcome devaluation effect" William Power, Ray Frank, Neil Davey, and John Done: "A modular attractor model of semantic access" Sylvain Hanneton, Olivier Gapenne, Christelle Genouel, Charles Lenay, and Catherine Marque: "Dynamics of shape recognition through a minimal visuo-tactile sensory substitution interface" Cristiane Salum, Antonio Roque da Silva, and Alan Pickering: "Possible role of dopamine in learning and attention: A computational approach" C.-S.R. Li, Y.-Y. Yang, and H.-C. Chen: "Sensory and spatial components of tactile extinction and allesthesia in cortical and thalamic lesions" Robert Homer and Bogdan Sasaran: "The role of monoamine neurotransmitters in brain development and mental illness: A neural network model" G.J. Dalenoort and P.H. de Vries: "Cognitive control and binding" Stephen Grossberg and Dmitry V. Repin: "How does the brain represent numbers?" 
Julian Paul Keenan, John Ives, Qun Chen, Gottfried Schlaug, Thomas Kauffman, David Bartres-Faz, and Alvaro Pascual-Leone: "Mapping cortical networks via functional magnetic resonance imaging and transcranial magnetic stimulation: Preliminary results" Julian Paul Keenan, John Ives, Qun Chen, Gottfried Schlaug, Thomas Kauffman, David Bartres-Faz, and Alvaro Pascual-Leone: "Modulating cortical excitability using repetitive transcranial magnetic stimulation in a self-face study to examine the role of inhibition in the prefrontal cortex" Adaptive Resonance Theory (B + T): Norbert Kopco, Peter Sincak, and Rudolf Jaksa: "Methods for analysis and enhancement of neural network classification of remotely sensed images" Gail A. Carpenter and Matthew W. Giamporcaro: "A computer game testbed for modeling strategic decision making" Gail A. Carpenter, Sucharita Gopal, Scott Macomber, Byron Shock, and Curtis E. Woodcock: "ARTMAP neural network classification of land use change" Marc-Andre Cantin, Eric Granger, and Yvon Savaria: "Four implementations of the fuzzy Adaptive Resonance Theory (ART) neural network for high data throughput applications" Luis Marti, Luciano Garcia, and Miguel Catasus: "Continuous-valued function approximation by an ART-based neural network" Mark A. Rubin and Aijaz Baloch: "Demonstration of an ARTEX implementation for recognition of visual textures" Quanhong Wang: "Tests of two theoretical explanations for the perceptual interference effect: Adaptive Resonance Theory versus competitive activation models" Neural and Hybrid Systems (B + T): Hiroki Aoki and Toshimichi Saito: "A SOM with virtual connection and its application to guess of membership functions" Hiroyuki Torikai and Toshimichi Saito: "Basic functions from an integrate-and-fire circuit with plural inputs" Brian M. O'Rourke: "Tactics for time series modeling with neural networks and fuzzy clustering" Mark Plutowski: "Emotional processing: A framework for handling multiple motivations in autonomous software agents" David V. Reynolds: "Computer simulation of large-scale neural systems of pain and aggression based on fuzzy logic" Rajat K. De: "Artificial consciousness: Integration of knowledge-based and case-based approach in a neuro-fuzzy paradigm" G.E. Campbell, W.L. Buff, and D.W. Dorsey: "Decision making in a tactical setting: Crisp or fuzzy reasoning?" Raj P. Malhotra and Yan M. Yufik: "Virtual associative networks for complexity reduction in information fusion" Audition, Speech, and Language (B + T): M.G. Srikanthan and R.J. Glover: "Wavelet neural network based echolocation" Barbara Shinn-Cunningham, Norbert Kopco, and Scott Santarelli: "Computation of acoustic source position in near-field listening" Lewis Meier: "Application of computerized auditory scene analysis to underwater acoustic signals" Ivelin Stoianov: "Recurrent autoassociative networks and sequential processing" Susan L. Denham: "Synaptic depression may explain many of the temporal response properties observed in primary auditory cortex: A computational investigation" Katja Wiemer-Hastings, Arthur C. 
Graesser, and Peter Wiemer-Hastings: "Exploring effective linguistic context with feedforward neural networks" VLSI: Sorin Draghici and Thierry de Pauw: "On the computational power of limited precision weights neural networks in classification problems: How to calculate the weight range so that a solution will exist" Catherine Breslin: "Neuromorphic design by physical equivalence: Simple animal and neuron models" Luca Marchese: "Neuromorphic VLSI servers" Todd Hinck, Howard Cohen, Gert Cauwenberghs, Allyn Hubbard, and Andreas Andreou: "Neuromorphic VLSI systems for boundary contour integration: An interactive demonstration" Gu Lin and Bingxue Shi: "A programmable and expandable Hamming network integrated circuit" Neural System Models (B + T): Shinji Karasawa: "Impulse recurrent loops for short-term memory which merges with experience and long-term memory" F.E. Lauria, R. Prevete, M. Milo, and S. Visco: "The Java package it.na.cy.nnet" Dorian Aur and Teodora Ghioca: "Neural network formation for cooperative bifurcation neurons" Lumei Hui: "Comparison between the two-dot method and the transparency method for the autostereogram perception" Lydia N. Derkach: "Cognitive neuropsychology: A synthesis of western and eastern research" J. Marro and J.J. Torres: "Neural networks with coherent fluctuations of synapses" Nils Hulth: "Feature vector representations and individual scaling of prototype vectors" POSTER SESSION II: Saturday, May 29, 1999 All posters will be displayed for the full day. Vision (B): Drazen Domijan: "Boundary computation, presynaptic inhibition, and lightness perception" Harald Ruda and Magnus Snorrason: "Modeling time to detection for observers searching for targets in cluttered backgrounds" Li-Yun Fu: "A neuron filtering model for space- and time-varying signal processing" Sachin Ahuja and Bart Farell: "Points, lines, and surfaces" J.M. Harris and S.K. Rushton: "An eccentric hemisphere explanation of visual search for motion in depth?" Vinoth Jagaroo: "A neuropsychological perspective of spatial reference frames: Implications for the modeling of high-level vision" Jens Mansson: "Contour enhancement by local iso-orientation-cooperation and texture suppression" Thorsten Hansen and Heiko Neumann: "Contrast processing and contour enhancement: A model of recurrent long-range interactions in V1" Wolfgang Sepp and Heiko Neumann: "A hierarchical filling-in model for real-time brightness reconstruction" Lynette Linden: "Understanding image colors in phase space" Raymond K. Chafin and Cihan H. Dagli: "Biologically inspired connectionist models for image feature extraction in machine vision systems" Mark Wexler, Francesco Panerai, and Jacques Droulez: "Looking actively at Ames's window" Lavanya Viswanathan, Stephen Grossberg, and Ennio Mingolla: "Neural dynamics of motion grouping across apertures" Sensory-Motor Control (B): Brad Rhodes and Daniel Bullock: "A neural model for sequence learning and production" Sally Bogacz and Willard Larkin: "Motor control in fast musical passages" Thomas J. Anastasio, Paul E. Patton, and Kamel Belkacem-Boussaid: "Modeling multisensory enhancement in the superior colliculus using Bayes' rule" J.E. Vos and J.J. van Heijst: "A model of sensorimotor development using a neural network" Greg T. Gdowski and Robert A. McCrea: "Sensory signals carried by vestibulo-spinal and other non-eye-movement related vestibular neurons during voluntary head movements" Greg T. Gdowski and Robert A. 
McCrea: "Sensory signals carried by the vestibular nuclei during reflexive head movements evoked by whole body rotation" Jan G. Smits: "Dependence of time constant for stroke recovery of complexity of tasks" M. Chen, C.-S.R. Li, Y.-Y. Yang, C.-Y. Liu, H.-L. Chang, C. Shen, Y.-M. Chuang, and L.-Y. Kao: "Perceptual alternation in obsessive compulsive disorder: Implications for the functions of the frontostriatal circuitry" Sensory-Motor Control (T) and Robotics (T): John R. Alexander Jr.: "Timing problems of neural control circuits" F. Panerai, G. Metta, and G. Sandini: "An artificial vestibular system for reflex-control of robot eye movements" Michail G. Lagoudakis and Anthony S. Maida: "A polar neural map for mobile robot navigation" G. Baratoff, C. Toepfer, and H. Neumann: "Combining space-variant maps for flow-based obstacle detection and body-scaled free-space navigation" Angelo Arleo and Wulfram Gerstner: "Spatial models and autonomous navigation in neuro-mimetic systems" Giorgio Metta, Giulio Sandini, Riccardo Manzotti, and Francesco Panerai: "Learning eye-head-hand coordination: A developmental approach" S. Srinivasan and A. Bradley: "Sequential task execution in a prosthetic limb using an artificial neural network" Neural and Hybrid Systems (B + T): Raymond Pavlovski and Majid Karimi: "Control of basins of attraction in a self-trapping neural network with near-neighbor synapses" Anatoli Gorchetchnikov: "The level of suppression in feedback connections required for learning depends primarily on intracellular parameters" David Vogel: "A partial model of cortical memory" Chun-Kam Horng and Chin-Ming Hong: "Learning efficiency improvement of CMAC neural network by Gaussian basis function" Ana Madevska and Dragan Nikolic: "Automatic classification with support vector machines in molecular biology" Alex Heneveld: "A plausible neural network architecture: Temporal Hebbian inhibit-undesireds" M. Mar Abad Grau and Luis Daniel Hernandez Molinero: "Context-specific neural network feature selector with missing data" Per Jesper Sjostrom and Lars Ulrik Wahlberg: "Automated cell recognition and counting based on a combination of artificial neural networks and standard image analysis methods" Rafal Bogacz and Marcin Chady: "Local connections in a neural network improve pattern completion" Andrea Corradini: "Automatic posture recognition in color images using hybrid neural networks" Neural System Models (B + T): Wei Cao, SongNian Yu, and William Gregory: "New approach for measuring complexity of linear-inseparable multidimensional data patterns" Zhe Chen: "The application of wavelet neural network for time series prediction and system modeling based on multiresolution learning" Maria Alvarez Florendo and Anthony Roland Florendo: "Solutions to the binary addition, parity and symmetry problems using feedforward networks with inhibitory lateral connections" J.R.C. Piqueira, F.M. Formagin, L.H.A. Monteiro, and J.S. Del Nero: "Full connected phase locked loops as a model for synchronizing neuron sets" Steven Lehar: "The Gestalt principle of isomorphism and the perceptual representation of space" Gu Lin and Bingxue Shi: "A programmable and expandable fuzzy recognition integrated circuit" Boris Galitsky: "How the logic of mental attributes models the autism" ******************** From jbower at bbb.caltech.edu Mon Mar 15 12:44:14 1999 From: jbower at bbb.caltech.edu (James M. Bower) Date: Mon, 15 Mar 1999 09:44:14 -0800 Subject: Just like old times Message-ID: A non-text attachment was scrubbed... 
From school at cogs.nbu.acad.bg Tue Mar 16 05:12:12 1999 From: school at cogs.nbu.acad.bg (CogSci Summer School) Date: Tue, 16 Mar 1999 13:12:12 +0300 Subject: No subject Message-ID:

6th International Summer School in Cognitive Science Sofia, New Bulgarian University July 12 - 31, 1999

International Advisory Board

Elizabeth BATES (University of California at San Diego, USA) Amedeo CAPPELLI (CNR, Pisa, Italy) Cristiano CASTELFRANCHI (CNR, Roma, Italy) Daniel DENNETT (Tufts University, Medford, Massachusetts, USA) Ennio De RENZI (University of Modena, Italy) Charles DE WEERT (University of Nijmegen, Holland) Christian FREKSA (Hamburg University, Germany) Dedre GENTNER (Northwestern University, Evanston, Illinois, USA) Christopher HABEL (Hamburg University, Germany) William HIRST (New School for Social Sciences, NY, USA) Joachim HOHNSBEIN (Dortmund University, Germany) Douglas HOFSTADTER (Indiana University, Bloomington, Indiana, USA) Keith HOLYOAK (University of California at Los Angeles, USA) Mark KEANE (Trinity College, Dublin, Ireland) Alan LESGOLD (University of Pittsburgh, Pennsylvania, USA) Willem LEVELT (Max-Planck Institute of Psycholinguistics, Nijmegen, Holland) David RUMELHART (Stanford University, California, USA) Richard SHIFFRIN (Indiana University, Bloomington, Indiana, USA) Paul SMOLENSKY (University of Colorado, Boulder, USA) Chris THORNTON (University of Sussex, Brighton, England) Carlo UMILTA' (University of Padova, Italy) Eran ZAIDEL (University of California at Los Angeles, USA)

Courses

Each participant will enroll in 6 of the 10 courses offered, thus attending 4 hours of classes per day plus 2 hours of tutorials in small groups, plus individual studies and participation in symposia.

Brain and Language: New Approaches to Evolution and Development (Elizabeth Bates, Univ. of California at San Diego, USA)
Child Language Acquisition (Michael Tomasello, MPI for Evolutionary Anthropology, Germany)
Culture and Cognition (Roy D'Andrade, Univ. of California at San Diego, USA)
Understanding Social Dependence and Cooperation (Cristiano Castelfranchi, CNR, Italy)
Models of Human Memory (Richard Shiffrin, Indiana University, USA)
Categorization and Inductive Reasoning: Psychological and Computational Approaches (Evan Heit, Univ. of Warwick, UK)
Understanding Human Thinking (Boicho Kokinov, New Bulgarian University)
Perception-Based Spatial Reasoning (Reinhard Moratz, Hamburg University, Germany)
Perception (Naum Yakimoff, New Bulgarian University)
Applying Cognitive Science to Instruction (John Hayes, Carnegie-Mellon University, USA)

In addition there will be seminars, working groups, project work, discussions.

Participation

Participants will be selected by a Selection Committee on the basis of their submitted documents: * application form, * CV, * statement of purpose, * copy of diploma (if a student, academic transcript), * letter of recommendation, * list of publications (if any) and short summary of up to three of them.

For participants from Central and Eastern Europe as well as from the former Soviet Union there are scholarships available (provided by Soros' Open Society Institute). They cover tuition, travel, and living expenses.

Deadline for application: April 15th. Notification of acceptance: April 30th. Apply as soon as possible since the number of participants is restricted.
For more information contact: Summer School in Cognitive Science Central and East European Center for Cognitive Science New Bulgarian University 21, Montevideo Str. Sofia 1635, Bulgaria Tel. (+3592) 957-1876 Fax: (+3592) 558262 e-mail: school at cogs.nbu.acad.bg Web page: http://www.nbu.acad.bg/staff/cogs/events/ss99.html From qian at brahms.cpmc.columbia.edu Tue Mar 16 19:19:58 1999 From: qian at brahms.cpmc.columbia.edu (Ning Qian) Date: Tue, 16 Mar 1999 19:19:58 -0500 Subject: papers available on stereo and learning Message-ID: <199903170019.TAA20805@brahms.cpmc.columbia.edu> Dear Colleagues, The following papers can be downloaded from the web site: http://brahms.cpmc.columbia.edu/ All papers are in Unix-compressed PostScript format. A limited number of hard copies are available for those who cannot download or decode. Best regards, Ning Qian ---------------------------------------------------------------- Relationship between Phase and Energy Methods for Disparity Computation, Neural Computation (in press) Ning Qian and Sam Mikaelian The phase and energy methods for computing binocular disparity maps from stereograms are motivated differently, have different physiological relevances, and involve different computational steps. Nevertheless, we demonstrate that at the final stages where disparity values are made explicit, the simplest versions of the two methods are exactly equivalent. The equivalence also holds when the quadrature-pair construction in the energy method is replaced with a more physiologically plausible phase-averaging step. The equivalence fails, however, when the phase-difference receptive field model is replaced by the position-shift model. Additionally, intermediate results from the two methods are always quite distinctive. In particular, the energy method generates a distributed disparity representation similar to that found in the visual cortex while the phase method does not. Finally, more elaborate versions of the two methods are in general not equivalent. We also briefly compare these two methods with some other stereo models in the literature. http://brahms.cpmc.columbia.edu/publications/compare.ps.Z ---------------------------------------------------------------- On the Momentum Term in Gradient Descent Learning Algorithms, Neural Networks, 1999, 12:145-151 Ning Qian A momentum term is usually included in the simulations of connectionist learning algorithms. Although it is well known that such a term greatly improves the speed of learning, there have been few rigorous studies of its mechanisms. In this paper, I show that in the limit of continuous time, the momentum parameter is analogous to the mass of Newtonian particles that move through a viscous medium in a conservative force field. The behavior of the system near a local minimum is equivalent to a set of coupled and damped harmonic oscillators. The momentum term improves the speed of convergence by bringing some eigen components of the system closer to critical damping. Similar results can be obtained for the discrete time case used in computer simulations. In particular, I derive the bounds for convergence on learning-rate and momentum parameters, and demonstrate that the momentum term can increase the range of learning rate over which the system converges. The optimal condition for convergence is also analyzed. 
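As a quick numerical illustration of these convergence claims, here is a minimal sketch (not code from the paper; the quadratic loss and all constants are arbitrary choices) of gradient descent with a momentum term on a two-dimensional quadratic:

```python
import numpy as np

# Loss f(w) = 0.5 * w^T A w; the eigenvalues of A (1 and 10) set the
# curvature range that limits the plain gradient-descent step size.
A = np.diag([1.0, 10.0])
grad = lambda w: A @ w

def heavy_ball(lr, p, steps=100):
    """Iterate w_{t+1} = w_t - lr * grad(w_t) + p * (w_t - w_{t-1})."""
    w = w_prev = np.array([1.0, 1.0])
    for _ in range(steps):
        w, w_prev = w - lr * grad(w) + p * (w - w_prev), w
    return np.linalg.norm(w)

# Without momentum, stability on this loss requires lr < 2/10; with
# momentum parameter p, the bound relaxes to lr < 2 * (1 + p) / 10.
for p in (0.0, 0.9):
    print("p =", p, " final |w| =", heavy_ball(lr=0.25, p=p))
```

With p = 0 the steep coordinate is multiplied by 1 - 0.25*10 = -1.5 at every step and diverges; with p = 0.9 the same learning rate converges, illustrating the abstract's point that the momentum term can increase the range of learning rates over which the system converges.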
http://brahms.cpmc.columbia.edu/publications/momentum.ps.Z

----------------------------------------------------------------

Perceptual Learning on Orientation and Direction Discrimination, Vision Research (in press)

Nestor Matthews, Zili Liu, Bard J. Geesaman, and Ning Qian

Two experiments were conducted to determine the extent to which perceptual learning transfers between orientation and direction discrimination. Naive observers were trained to discriminate orientation differences between two single-line stimuli, and direction differences between two single-moving-dot stimuli. In the first experiment, observers practiced the orientation and direction tasks along orthogonal axes in the fronto-parallel plane. In the second experiment, a different group of observers practiced both tasks along a single axis. Perceptual learning was observed on both tasks in both experiments. Under the same-axis condition, the observers' orientation sensitivity was found to be significantly elevated after the direction training, indicating a transfer of learning from direction to orientation. There was no evidence of transfer in any other cases tested. In addition, the rate of learning on the orientation task was much higher than the rate on the direction task. The implications of these findings for the neural mechanisms subserving orientation and direction discrimination are discussed.

http://brahms.cpmc.columbia.edu/publications/pl.ps.Z

From ckiw at dai.ed.ac.uk Wed Mar 17 12:48:13 1999 From: ckiw at dai.ed.ac.uk (Chris Williams) Date: Wed, 17 Mar 1999 17:48:13 +0000 (GMT) Subject: PhD study at the Institute for Adaptive and Neural Computation, U. of Edinburgh, UK Message-ID:

Apologies if you receive multiple copies of this message

----

The Institute for Adaptive and Neural Computation in the Division of Informatics at the University of Edinburgh welcomes applications from suitably qualified candidates regarding PhD study.

The Institute for Adaptive and Neural Computation (ANC) is part of the newly formed Division of Informatics at the University of Edinburgh. The Institute fosters the study of adaptive processes in both artificial and biological systems. It encourages interdisciplinary and collaborative work involving the traditional disciplines of neuroscience, cognitive science, computer science, computational science, mathematics and statistics. Many of the information-processing tasks under study draw on a common set of principles and mathematical techniques for their solution. Combined study of the adaptive nature of artificial and biological systems facilitates the many benefits accruing from treating essentially the same problem from different perspectives.

A principal theme is the study of artificial learning systems. This includes theoretical foundations (e.g. statistical theory, information theory), the development of new models and algorithms, and applications. A second principal theme is the analysis and modelling of brain processes at all levels of organization with a particular focus on theoretical developments which span levels. Within this theme, research areas are broadly defined as the study of the neural foundations of perception, cognition and action and their underlying developmental processes. A secondary theme is the construction and study of computational tools and methods which can support studies in the two principal themes, such as in the analysis of brain data, simulation of networks and parallel data mining.
The Institute for Adaptive and Neural Computation has PhD studentships available (covering the cost of fees and living expenses) from 1 October 1999. These are supported by the Medical Research Council and by Microsoft Research Ltd. In addition, the Division of Informatics receives a number of EPSRC studentships for which students wishing to study within the Institute for Adaptive and Neural Computation may be considered.

Further information about ANC may be found at http://anc.ed.ac.uk/. Informal enquiries may be made to Emma Black, emma at anc.ed.ac.uk.

For application forms and further information, write to: PhD Admissions Secretary, Division of Informatics, University of Edinburgh, James Clerk Maxwell Building, King's Buildings, Mayfield Road, Edinburgh EH9 3JZ, Scotland, UK Email: phd-admissions at inf.ed.ac.uk Fax: +44 131 667 7209 Telephone: +44 131 650 5156

Information on study for Research Degrees in the Division of Informatics can be found at http://www.dai.ed.ac.uk/daidb/people/homes/rbf/IGS/IGSextr.htm

From qian at brahms.cpmc.columbia.edu Wed Mar 17 11:21:12 1999 From: qian at brahms.cpmc.columbia.edu (Ning Qian) Date: Wed, 17 Mar 1999 11:21:12 -0500 Subject: papers available on stereo and learning In-Reply-To: <199903171107.LAA15362@axon.gatsby.ucl.ac.uk> (message from Geoffrey Hinton on Wed, 17 Mar 1999 11:07:38 +0000) References: <199903171107.LAA15362@axon.gatsby.ucl.ac.uk> Message-ID: <199903171621.LAA22026@brahms.cpmc.columbia.edu>

   Date: Wed, 17 Mar 1999 11:07:38 +0000
   From: Geoffrey Hinton

   > I show that in the limit of continuous time, the momentum parameter
   > is analogous to the mass of Newtonian particles that move through a
   > viscous medium in a conservative force field.

   At the risk of sounding like Jim Bower, I would like to point out that
   the mechanical model was the original motivation for the momentum method.

   Geoff Hinton

The original motivation was fully discussed and acknowledged (by citing Rumelhart, Hinton and Williams's chapter in the PDP book) in the Introduction. The paper went far beyond that by first showing that

   p = m / (m + mu)

(where p is the momentum parameter, m is the mass, and mu is the friction coefficient), and then providing a stability and convergence analysis for both the continuous and discrete cases.

Best regards, Ning

From hinton at gatsby.ucl.ac.uk Wed Mar 17 06:07:38 1999 From: hinton at gatsby.ucl.ac.uk (Geoffrey Hinton) Date: Wed, 17 Mar 1999 11:07:38 +0000 Subject: papers available on stereo and learning In-Reply-To: Your message of Tue, 16 Mar 1999 19:19:58 -0500. Message-ID: <199903171107.LAA15362@axon.gatsby.ucl.ac.uk>

> I show that in the limit of continuous time, the momentum parameter
> is analogous to the mass of Newtonian particles that move through a
> viscous medium in a conservative force field.

At the risk of sounding like Jim Bower, I would like to point out that the mechanical model was the original motivation for the momentum method.

Geoff Hinton

From carlos at sonnabend.ifisiol.unam.mx Thu Mar 18 13:26:57 1999 From: carlos at sonnabend.ifisiol.unam.mx (Carlos Brody) Date: Thu, 18 Mar 1999 12:26:57 -0600 Subject: Jim Bower's posting Message-ID: <199903181826.MAA02693@sonnabend.ifisiol.unam.mx>

I will be happy to sound like Ning Qian responding to Geoff Hinton sounding like Jim Bower, and I will say that my recent papers on cross-correlation go far beyond simply pointing out the possibility of an interpretation problem. (That possibility was pointed out by Aertsen, Gerstein, Habib and Palm in their 1989 paper introducing the JPSTH, J.
Neurophysiol. 61:900-917. I was not until now aware that the Wilson and Bower papers also made a similar point -- thanks to Jim Bower for pointing it out.)

Of the 3 papers I recently announced:

The "Correlations without synchrony" paper studies what kinds of xcorrelogram shapes are generated by slow interactions (ACROSS trials) as compared to fast interactions (WITHIN trials). The specific point is to know *what to watch out for*. The most important rule of thumb for being alert to interpretation problems confusing slow and fast interactions is: if the PSTHs have peak widths of the same order of magnitude as the xcorr peak width, be careful with your interpretations! [Related to this, it is also important to know when you DON'T have to worry. Examples of perfectly o.k. interpretations (as far as I can tell) are Alonso and Reid's 1995 work on connections from LGN to simple cells in area 17 of the cat, Nature 378:281--284; or Ts'o, Gilbert, and Wiesel's 1986 work on connections within area 17, J. Neurosci. 6:1160--1170. In both of these, the PSTHs were MUCH broader than the peaks in the xcorrelograms.]

The "Disambiguating different covariation types" paper tries to propose a couple of quantitative methods for disambiguating interpretations when it is not clear how much of the xcorrelogram came from slow or fast interactions.

Finally, the "Slow covariations in neuronal resting potentials can lead to artefactually fast cross-correlations in their spike trains" paper shows how awareness of these issues is still very far from seeping into our collective consciousness: the paper goes through an example suggesting that a well-known paper, published in a well-known journal (Nature), suffered greatly from such interpretation problems. But that paper is not the only paper with signs of trouble: it just happened to be one where I was intimately familiar with the data and thus could go through it in detail and be confident of my conclusions.

In sum, it is important not only to be aware that there CAN be trouble, but also to know WHEN there can be trouble. And, concomitantly, when there won't be. The three papers I announced try to go in this direction. Cross-correlation is a most useful tool, and should not be thrown out with the bathwater.

Carlos. carlos at sonnabend.ifisiol.unam.mx http://www.cns.caltech.edu/~carlos

-------

Date: Mon, 15 Mar 1999 09:44:14 -0800 From: "James M. Bower" Errors-to: owner-connectionists at nntp-server.caltech.edu

With respect to the recent posting by Carlos Brody, I would point to two papers published almost ten years ago by Matt Wilson and myself:

Wilson, M.A. and Bower, J.M. 1990 Computer simulation of oscillatory behavior in cerebral cortical networks. In: Advances in Neural information processing systems. Vol. 2, D. Touretzky, editor. Morgan Kaufmann, San Mateo, CA., pp. 84-91.

Wilson, M.A. and Bower, J.M. 1991 A computer simulation of oscillatory behavior in primary visual cerebral cortex. Neural Computation 3: 498-509.

Quoting from the first: "Interpreting phase coherence from correlation functions produced from the average of many simulation trials pointed out the need to distinguish average phase effects from instantaneous phase effects. Instantaneous phase implies that the statistics of the correlation function taken at any trial are consistent with the statistics of the combined data. Average phase allows for systematic within-trial and between-trial variability and is, therefore, a weaker assertion of actual coherence.
This distinction is particularly important for theories which rely on phase encoding of stimulus information. Analysis of our model results indicates that the observed phase relationships are an average, rather than an instantaneous effect."

Remarkable how slowly things change, or are accepted (eh Steve??).

Jim Bower

From jon at syseng.anu.edu.au Thu Mar 18 18:54:17 1999 From: jon at syseng.anu.edu.au (Jonathan Baxter) Date: Fri, 19 Mar 1999 10:54:17 +1100 Subject: papers available on stereo and learning References: <199903171107.LAA15362@axon.gatsby.ucl.ac.uk> <199903171621.LAA22026@brahms.cpmc.columbia.edu> Message-ID: <36F19229.FD164E8B@syseng.anu.edu.au>

Ning Qian wrote:

> Date: Wed, 17 Mar 1999 11:07:38 +0000
> From: Geoffrey Hinton
>
> > I show that in the limit of continuous time, the momentum parameter
> > is analogous to the mass of Newtonian particles that move through a
> > viscous medium in a conservative force field.
>
> At the risk of sounding like Jim Bower, I would like to point out that
> the mechanical model was the original motivation for the momentum method.
>
> Geoff Hinton
>
> The original motivation was fully discussed and acknowledged (by
> citing Rumelhart, Hinton and Williams's chapter in the PDP book) in
> the Introduction. The paper went far beyond that by first showing that
>
>    p = m / (m + mu)
>
> (where p is the momentum parameter, m is the mass, and mu is the friction
> coefficient), and then providing a stability and convergence analysis
> for both the continuous and discrete cases.
>
> Best regards,
> Ning

The momentum term in steepest descent methods was introduced and analysed by B.T. Poljak in 1964:

B.T. Poljak, 1964. "Some Methods of Speeding up the Convergence of Iteration Methods". Z. VyCisl. Mat. i Mat. Fiz, Vol. 4, pp 1-17.

He called it the "Heavy Ball" method. I don't have the original paper, but a good secondary source is "Neuro-Dynamic Programming" by Bertsekas and Tsitsiklis, Athena Scientific, 1996, pp 104--105. The convergence results are in there.

Cheers, Jonathan Baxter

From d.mareschal at bbk.ac.uk Fri Mar 19 06:52:13 1999 From: d.mareschal at bbk.ac.uk (Denis Mareschal) Date: Fri, 19 Mar 1999 12:52:13 +0100 Subject: PhD scholarships in connectionist/neural network/psychological models Message-ID:

The Birkbeck College (University of London) psychology department has a number of full-time PhD scholarships for students commencing their degrees in October 1999. These are open to students wishing to undertake projects combining computational methods (connectionist/neural network and mathematical models) with experimental psychology. The department has a strong commitment to cognitive neuroscience and computational modelling.

Strong candidates are invited to apply (see FULL ANNOUNCEMENT below) for a variety of cognitive neuroscience projects supervised by Denis Mareschal and/or by Marius Usher, as described:

Denis Mareschal (http://www.psyc.bbk.ac.uk/staff/dm.html): connectionist modelling of perceptual and cognitive development in childhood and infancy

Marius Usher (http://www.ukc.ac.uk/psychology/people/usherm/): behavioral and computational studies of choice reaction time, short-term and working memory and cognitive performance, and information processing in the frontal lobes.
The Birkbeck College psychology department includes the recently founded Centre for Brain and Cognitive Development (headed by Professor Mark Johnson) and is well complemented by a new cognitive science laboratory consisting of a suite of UNIX-based workstations dedicated to computational modelling research. Birkbeck College is situated in the Bloomsbury section of London and students would benefit from its close proximity to the Gatsby Computational Neuroscience Unit and the Institute of Cognitive Neuroscience.

FULL ANNOUNCEMENT

Birkbeck College Department of Psychology PhD Studentships

The Birkbeck Psychology Department, rated 5 in the last Research Assessment Exercise, is offering two departmentally-funded research studentships, available from October 1999. There are opportunities to work in a number of research groups:

Brain and Cognitive Development
Perception, Cognition and Action
Child Family and Social Studies

For further details see http://www.psyc.bbk.ac.uk/

Students with or expecting a first class or upper second class degree in psychology should send their full CV, the names of two referees, and a two-page statement of their research interests to the postgraduate admissions tutor: Dr Paul Barber (PG Application) Department of Psychology Birkbeck College University of London Malet Street London WC1E 7HX phone: +44 171-631-6207 fax: +44 171-631-6312

=================================================
Dr. Denis Mareschal Centre for Brain and Cognitive Development Department of Psychology Birkbeck College University of London Malet St., London WC1E 7HX, UK tel +44 171 631-6582/6207 fax +44 171 631-6312
=================================================

From cladv at pikas.inf.tu-dresden.de Sun Mar 21 05:29:43 1999 From: cladv at pikas.inf.tu-dresden.de (CL Advertisement) Date: Sun, 21 Mar 1999 11:29:43 +0100 (MET) Subject: Postdoctorate Grant at Dresden University of Technology Message-ID: <199903211029.LAA08326@pikas.inf.tu-dresden.de>

The Department of Computer Science at the Dresden University of Technology offers a postdoctoral grant in the framework of its research programme "Specification of discrete processes and systems of processes by operational models and logics" with a duration -- for the time being -- up to the end of the year 1999. In this research programme we consider formalisms in the area of Petri nets and concurrent automata, term- and graph-rewriting systems, knowledge representation and cognitive robotics, model theory for process systems and the equivalences of these formalisms.

Applicants who have finished their Ph.D. with very good marks can send their curriculum vitae, photo, list of publications, and two references by professors to: Dresden University of Technology Department of Computer Science Prof. Dr.-Ing.habil. Heiko Vogler Mommsenstr. 13 D-01062 Dresden Germany

From henkel at physik.uni-bremen.de Tue Mar 23 08:42:41 1999 From: henkel at physik.uni-bremen.de (Rolf Henkel) Date: Tue, 23 Mar 1999 14:42:41 +0100 Subject: Stereovision - new paper, new web-pages Message-ID: <36F79A51.DE43823F@physik.uni-bremen.de>

Dear connectionists,

there is a new paper available at http://axon.physik.uni-bremen.de/research/papers/

"Locking onto 3d-Structure by a Combined Vergence- and Fusionsystem"

Abstract: An interacting fusion- and vergence-system is presented which utilizes two properties of coherence-based stereo: sub-pixel-precision and a stable validation signal for disparity estimates.
This allows the system to sample the three-dimensional scene with several precisely chosen fixation points and automatically accumulate the data into a full disparity map. In addition, the system creates a fused cyclopean view of the scene, co-registered with the final disparity map.

A corresponding web-page discussing this model and showing some samples of vergence-movies can be found at http://axon.physik.uni-bremen.de/research/stereo/vergence/

Newly updated web-pages discussing additional models for the fusion of stereo data into a single cyclopean view, as well as the perception of transparency & binocular rivalry, can be found by following the links at http://axon.physik.uni-bremen.de/research/stereo/

Comments are very welcome. Best regards, Rolf Henkel

-- Institute of Theoretical Neurophysics, University Bremen, Germany - Email: henkel at physik.uni-bremen.de - URL: http://axon.physik.uni-bremen.de/

From philh at cogs.susx.ac.uk Wed Mar 24 12:45:32 1999 From: philh at cogs.susx.ac.uk (Phil Husbands) Date: Wed, 24 Mar 1999 17:45:32 +0000 Subject: Lectureship in Adaptive Systems Message-ID: <36F924BC.1F00D15D@cogs.susx.ac.uk>

University of Sussex School of Cognitive and Computing Sciences

Lectureship in Computer Science and Artificial Intelligence Grade A or B (Ref 076)

Applicants should have an interest in the area of evolutionary and adaptive systems, and be able to show evidence of significant research achievement in any aspect of adaptive robotics, artificial life, evolutionary computing or related fields. Applicants with a commitment to interdisciplinary research at the interface between AI and the biological sciences are particularly encouraged. Further particulars can be found at: http://www.cogs.susx.ac.uk/users/philh/aijob.html

Informal inquiries can be made to Dr P Husbands, tel (+44 (0)1273) 678556, email philh at cogs.susx.ac.uk, or Professor H Buxton, tel (+44 (0)1273) 678569, email hilaryb at cogs.susx.ac.uk. Details of the School are available at http://www.cogs.susx.ac.uk, or from the School Office.

Salary scales: Lecturer Grade A: 16,655 pounds to 21,815 pounds per annum. Lecturer Grade B: 22,726 pounds to 29,048 pounds per annum.

Application packs for the above posts are available from and should be returned to Staffing Services, University of Sussex, Falmer, Brighton, East Sussex, BN1 9RH, tel (+44 (0)1273) 877324, and details are also available via http://www.sussex.ac.uk/Units/staffing/personnl/vacs/ Requests for application packs may also be sent via email to S.Jenks at sussex.ac.uk. Links to school web sites and further information about the University may be seen at http://www.central.sussex.ac.uk

WHEN REQUESTING DETAILS, PLEASE QUOTE THE RELEVANT REFERENCE NUMBER. CLOSING DATE FOR APPLICATIONS: Friday 16 April 1999.

From cesmeli at cis.ohio-state.edu Wed Mar 24 11:32:17 1999 From: cesmeli at cis.ohio-state.edu (erdogan cesmeli) Date: Wed, 24 Mar 1999 11:32:17 -0500 (EST) Subject: Student Travel Grants for Attending IJCNN'99 Message-ID:

In case you are not yet aware, the IEEE Neural Network Council has a Student Travel Grant Program to assist students presenting papers at IEEE NNC-sponsored conferences, including this year's IJCNN to be held in Washington DC during July 10-16, 1999. The deadline for application is April 15, 1999.
For more information, check the conference web page: http://www.cas.american.edu/~medsker/ijcnn99/ijcnn99.html

Thanks for your attention, Erdogan Cesmeli The Ohio State University

From ericr at ee.usyd.edu.au Wed Mar 24 23:15:32 1999 From: ericr at ee.usyd.edu.au (Eric Ronco) Date: Thu, 25 Mar 1999 15:15:32 +1100 Subject: On-line Nonlinear Model based Predictive Control Simulator Message-ID: <36F9B864.ACA3B419@ee.usyd.edu.au>

Dear all,

An on-line nonlinear predictive control simulation package is now available at http://merlot.ee.usyd.edu.au/OLIFO

This simulation package is intended to test the performance of a practical nonlinear Model based Predictive Controller, namely the Open-Loop Intermittent Feedback Optimal (OLIFO) controller, when applied to various complicated non-linear systems. Default settings are provided for each system. However, you are encouraged to explore the behaviour of this controller by changing some of its few parameters. This should demonstrate the power of the approach and provide a benchmark for comparing the OLIFO controller's performance with that of other non-linear controllers.

-- Dr Eric Ronco, room 316 Tel: +61 2 9351 7680 School of Electrical Engineering Fax: +61 2 9351 5132 Bldg J13, Sydney University Email: ericr at ee.usyd.edu.au NSW 2006, Australia http://www.ee.usyd.edu.au/~ericr

From sue at soc.plym.ac.uk Fri Mar 26 05:41:13 1999 From: sue at soc.plym.ac.uk (Sue Denham) Date: Fri, 26 Mar 1999 10:41:13 +0000 Subject: Academic Positions in Computational Neuroscience Message-ID: <1.5.4.32.19990326104113.006e91cc@soc.plym.ac.uk>

The University of Plymouth is considering the possibility of making some senior academic appointments, i.e. Professor, Reader or Senior Lecturer, in the area of Computational Neuroscience, with the aim of further strengthening this research area. The possible appointments are also linked to a proposal to create a multidisciplinary Institute of Cognitive Neuroscience, involving links between experimental neuroscientists at the Plymouth Marine Laboratory, neurologists at the Postgraduate Medical School, and cognitive scientists and neuropsychologists (including two new Professorship appointments) in the Faculty of Human Sciences.

We would like to hear, informally at present, from anyone who might have an interest in such an appointment, without obligation to either party. If you were able to email a curriculum vitae, setting out in particular your research achievements in this area, we would be very grateful. We would also be happy to respond to any enquiries. All expressions of interest and enquiries will be treated as confidential. Please respond by email to: Professor Mike Denham

Michael J Denham Siebe Professor of Neural and Adaptive Systems Centre for Neural and Adaptive Systems School of Computing University of Plymouth Plymouth PL4 8AA England tel: +44 1752 232541 fax: +44 1752 232540 e-mail: mike at soc.plym.ac.uk; mdenham at plym.ac.uk http://www.tech.plym.ac.uk/soc/research/neural/index.html

Dr Sue Denham Centre for Neural and Adaptive Systems School of Computing University of Plymouth Plymouth PL4 8AA England tel: +44 17 52 23 26 10 fax: +44 17 52 23 25 40 e-mail: sue at soc.plym.ac.uk http://www.tech.plym.ac.uk/soc/research/neural/index.html

From ingber at ingber.com Fri Mar 26 12:35:32 1999 From: ingber at ingber.com (Lester Ingber) Date: Fri, 26 Mar 1999 11:35:32 -0600 Subject: Paper: ...
Exponential Modifications to Black-Scholes Message-ID: <19990326113532.A24307@ingber.com>

The PostScript paper http://www.ingber.com/markets99_exp.ps.Z can also be retrieved uncompressed for a while as http://www.alumni.caltech.edu/~ingber/markets99_exp.ps

%A L. Ingber %A J.K. Wilson %R Statistical Mechanics of Financial Markets: Exponential Modifications to Black-Scholes %I DRW Investments LLC %C Chicago, IL %D 1999 %O URL http://www.ingber.com/markets99_exp.ps.Z http://www.alumni.caltech.edu/~ingber/markets99_exp.ps

The Black-Scholes theory of option pricing has been considered for many years as an important but very approximate zeroth-order description of actual market behavior. We generalize the functional form of the diffusion of these systems and also consider multi-factor models including stochastic volatility. We use a previous development of a statistical mechanics of financial markets to model these issues. Daily Eurodollar futures prices and implied volatilities are fit to determine exponents of functional behavior of diffusions using methods of global optimization, Adaptive Simulated Annealing (ASA), to generate tight fits across moving time windows of Eurodollar contracts. These short-time fitted distributions are then developed into long-time distributions using a robust non-Monte Carlo path-integral algorithm, PATHINT, to generate prices and derivatives commonly used by option traders. The results of our study show that there is only a very small change in at-the-money option prices for different probability distributions, both for the one-factor and two-factor models. There are still significant differences in the risk parameters (partial derivatives) when using more sophisticated models, especially for out-of-the-money options.

========================================================================
Instructions for Retrieval of Code and Reprints

Interactively Via WWW The archive can be accessed via WWW path http://www.ingber.com/ http://www.alumni.caltech.edu/~ingber/ where the last address is a mirror homepage for the full archive.

Interactively Via Anonymous FTP Code and reprints can be retrieved via anonymous ftp from ftp.ingber.com. Interactively [brackets signify machine prompts]:
[your_machine%] ftp ftp.ingber.com
[Name (...):] anonymous
[Password:] your_e-mail_address
[ftp>] binary
[ftp>] ls
[ftp>] get file_of_interest
[ftp>] quit
The 00index file contains an index of the other files. Files have the same WWW and FTP paths under the main / directory; e.g., http://www.ingber.com/MISC.DIR/00index_misc and ftp://ftp.ingber.com/MISC.DIR/00index_misc reference the same file.

Electronic Mail If you do not have WWW or FTP access, get the Guide to Offline Internet Access, returned by sending an e-mail to mail-server at rtfm.mit.edu with only the words send usenet/news.answers/internet-services/access-via-email in the body of the message. The guide gives information on using e-mail to access just about all Internet information and documents.

Additional Information Limited help assisting people with queries on my codes and papers is available only by electronic mail correspondence. Sorry, I cannot mail out hardcopies of code or papers.
Lester
========================================================================
-- /* Lester Ingber http://www.ingber.com/ ftp://ftp.ingber.com * * ingber at ingber.com ingber at alumni.caltech.edu ingber at drwtrading.com * * PO Box 06440 Wacker Dr PO Sears Tower Chicago IL 60606-0440 */

From gustl at itl.atr.co.jp Sun Mar 28 23:32:14 1999 From: gustl at itl.atr.co.jp (Michael Schuster) Date: Mon, 29 Mar 1999 13:32:14 +0900 Subject: PhD thesis available Message-ID: <199903290432.NAA04867@atra17.itl.atr.co.jp>

PhD Thesis available ==================== I sent this BCC mail to a number of people who asked me about my thesis, or who I thought might be interested in it. Because I sent it to mailing lists, too, it could happen that you get this message twice -- my apologies. Mike Schuster

------------------------------------------------------------------------------
available from: http://isw3.aist-nara.ac.jp/IS/Shikano-lab/staff/1996/mike-s/mike-s.html in the publication section http://isw3.aist-nara.ac.jp/IS/Shikano-lab/staff/1996/mike-s/publication.html

ENGLISH TITLE: On supervised learning from sequential data with applications for speech recognition

ENGLISH ABSTRACT: Many problems of engineering interest, for example speech recognition, can be formulated in an abstract sense as supervised learning from sequential data, where an input sequence x_1^T = { x_1, x_2, x_3, ..., x_{T-1}, x_T } has to be mapped to an output sequence y_1^T = { y_1, y_2, y_3, ..., y_{T-1}, y_T }. This thesis gives a unified view of the abstract problem and presents some models and algorithms for improved sequence recognition and modeling performance, measured on synthetic data and on real speech data. A powerful neural network structure to deal with sequential data is the recurrent neural network (RNN), which allows one to estimate P(y_t|x_1, x_2, ..., x_t), the output at time t given all previous input. The first part of this thesis presents various extensions to the basic RNN structure, which are
a) a bidirectional recurrent neural network (BRNN), which allows one to estimate expressions of the form P(y_t|x_1^T), the output at t given all sequential input, for uni-modal regression and classification problems,
b) an extended BRNN to directly estimate the posterior probability of a symbol sequence, P(y_1^T|x_1^T), by modeling P(y_t|y_{t-1}, y_{t-2}, ..., y_1, x_1^T) without explicit assumptions about the shape of the distribution P(y_1^T|x_1^T),
c) a BRNN to model multi-modal input data that can be described by Gaussian mixture distributions conditioned on an output vector sequence, P(x_t|y_1^T), assuming that neighboring x_t, x_{t+1} are conditionally independent, and
d) an extension to c) which removes the independence assumption by modeling P(x_t|x_{t-1}, x_{t-2}, ..., x_1, y_1^T) to estimate the likelihood P(x_1^T|y_1^T) of a given output sequence without any explicit approximations about the use of context.
The second part of this thesis describes the details of a fast and memory-efficient one-pass stack decoder for speech recognition to perform the search for the most probable word sequence. The use of this decoder, which can handle arbitrary-order N-gram language models and arbitrary-order context-dependent acoustic models with full cross-word expansion, led to the best reported recognition results on the standard test set of a widely used Japanese newspaper dictation task.
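[ Note: a minimal numpy sketch of the bidirectional idea in extension a) above -- two hidden state chains, one run forward and one run backward over the input, combined at each step so that the output at time t can depend on the whole input sequence x_1^T. The layer sizes, random weights and softmax read-out are illustrative assumptions, not the parameterisation used in the thesis. ]

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def brnn_forward(X, Wf, Wb, Vf, Vb, U):
    """X: (T, d_in) input sequence; returns (T, n_classes), row t ~ P(y_t | x_1^T)."""
    T = X.shape[0]
    H = Wf.shape[0]
    hf = np.zeros((T, H))   # forward states: hf[t] depends on x_1 .. x_t
    hb = np.zeros((T, H))   # backward states: hb[t] depends on x_t .. x_T
    for t in range(T):
        prev = hf[t - 1] if t > 0 else np.zeros(H)
        hf[t] = np.tanh(Wf @ X[t] + Vf @ prev)
    for t in range(T - 1, -1, -1):
        nxt = hb[t + 1] if t < T - 1 else np.zeros(H)
        hb[t] = np.tanh(Wb @ X[t] + Vb @ nxt)
    # each output sees the entire input sequence through the pair (hf[t], hb[t])
    return np.array([softmax(U @ np.concatenate([hf[t], hb[t]])) for t in range(T)])

rng = np.random.default_rng(0)
d_in, H, n_classes, T = 3, 5, 4, 7
Wf, Wb, Vf, Vb, U = [rng.normal(size=s) for s in
                     [(H, d_in), (H, d_in), (H, H), (H, H), (n_classes, 2 * H)]]
probs = brnn_forward(rng.normal(size=(T, d_in)), Wf, Wb, Vf, Vb, U)
print(probs.shape, probs.sum(axis=1))   # (7, 4); each row sums to 1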
----------------------------------------------------------------------------
Table of Contents:

1 Introduction
1.1 MOTIVATION AND BACKGROUND
1.1.1 Learning from examples
1.1.2 Does the order of the samples matter?
1.1.3 Example applications
1.1.4 Related scientific areas
1.2 THESIS STRUCTURE

2 Supervised learning from sequential data
2.1 DEFINITION OF THE PROBLEM
2.2 DECOMPOSITION INTO A GENERATIVE AND A PRIOR MODEL PART
2.2.1 Context-independent model
2.2.2 Context-dependent model
2.3 DIRECT DECOMPOSITION
2.4 HIDDEN MARKOV MODELS
2.4.1 Basic HMM formulation
2.4.2 Calculation of state occupation probabilities
2.4.3 Parameter estimation for output probability distributions
2.4.4 Parameter estimation for transition probabilities
2.5 SUMMARY

3 Neural networks for supervised learning from sequences
3.1 BASICS OF NEURAL NETWORKS
3.1.1 Parameter estimation by maximum likelihood
3.1.2 Problem classification
3.1.3 Neural network training
3.1.4 Neural network architectures
3.2 BIDIRECTIONAL RECURRENT NEURAL NETWORKS
3.2.1 Prediction assuming independent outputs
3.2.2 Experiments and results
3.2.3 Prediction assuming dependent outputs
3.2.4 Experiments and results
3.3 MIXTURE DENSITY RECURRENT NEURAL NETWORKS
3.3.1 Basics of mixture density networks
3.3.2 Mixture density extensions for BRNNs
3.3.3 Experiments and results
3.3.4 Discussion
3.4 SUMMARY

4 Memory-efficient LVCSR search using a one-pass stack decoder
4.1 INTRODUCTION
4.1.1 Organization of this chapter
4.1.2 General
4.1.3 Technical
4.1.4 Decoder types
4.2 A MEMORY-EFFICIENT ONE-PASS STACK DECODER
4.2.1 Basic algorithm
4.2.2 Pruning techniques
4.2.3 Stack module
4.2.4 Hypotheses module
4.2.5 N-gram module
4.2.6 LM lookahead
4.2.7 Cross-word models
4.2.8 Fast-match with delay
4.2.9 Using word-graphs as language model constraints
4.2.10 Lattice rescoring
4.2.11 Generating phone/state alignments
4.3 EXPERIMENTS
4.3.1 Recognition of Japanese
4.3.2 Recognition results for high accuracy
4.3.3 Recognition results for high speed and low memory
4.3.4 Time and memory requirements for modules
4.3.5 Usage of cross-word models
4.3.6 Usage of fast-match models
4.3.7 Effect of on-demand N-gram smearing
4.3.8 Lattice/N-best list generation and lattice rescoring
4.4 CONCLUSIONS
4.5 ACKNOWLEDGMENTS

5 Conclusions
5.1 SUMMARY
5.2 CONTRIBUTIONS FROM THIS THESIS
5.3 SUGGESTIONS FOR FUTURE WORK
----------------------------------------------------------------------------
Mike Schuster, ATR Interpreting Telecommunications Research Laboratories, 2-2 Hikari-dai, Seika-cho, Soraku-gun, Kyoto 619-02, JAPAN, Tel. ++81-7749-5-1394, Fax. ++81-7749-5-1308, email: gustl at itl.atr.co.jp, http://isw3.aist-nara.ac.jp/IS/Shikano-lab/staff/1996/mike-s/mike-s.html
----------------------------------------------------------------------------

From juergen at idsia.ch Tue Mar 30 03:25:40 1999 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Tue, 30 Mar 1999 10:25:40 +0200 Subject: IDSIA PHD JOB OPENING Message-ID: <199903300825.KAA01726@ruebe.idsia.ch>

PHD STUDENT WANTED I am seeking an outstanding PhD candidate for a research project on reinforcement learning (RL) and program evolution (PE). OVERVIEW. Most machine learning research focuses on learning memory-free mappings between input patterns and output patterns. Humans, however, obviously learn entire algorithms mapping input sequences to output sequences in a complex fashion.
In particular, they constantly learn to identify important events in input streams and store them in short-term memory until the memories are needed to compute appropriate output actions. If we want to bridge the gap between the learning abilities of humans and machines, then we will have to study how such sequential processes can be learned. The focus of this project will be on RL and PE methods whose search space consists of fairly arbitrary, possibly probabilistic "programs" (as opposed to more limited stimulus/response associations).

POSSIBLE PROJECT SUBGOALS. The project allows considerable scientific freedom. If you have a great idea, let's go for it and try it. Otherwise we'll start along the following lines.
(1) Explore the limits of recent algorithmic search techniques such as "Adaptive Levin Search" and "Probabilistic Incremental Program Evolution" - both of which can learn memory strategies.
(2) Improve, extend, and apply a recent technique "Incremental self-improvement" based on the success-story algorithm for probabilistic, self-modifying systems that can in principle learn to improve their own learning algorithm (metalearning).
(3) Build unsupervised, "curious" systems selecting their own training exemplars for building models of the environment, and use the models to speed up improvement of goal-directed sequential behavior.
(4) Examine RL economies where agents learn to pay each other for useful services, and test whether they can learn to memorize.
See http://www.idsia.ch/~juergen/topics.html for papers on the above subjects.

A highly qualified candidate is sought with a background in computational sciences, mathematics, engineering, physics or other relevant areas. Applicants should submit: (i) a detailed curriculum vitae, (ii) a list of three references (and their email addresses), (iii) transcripts of undergraduate and graduate (if applicable) studies, and (iv) a concise statement of their research interests (two pages max). Candidates are also encouraged to submit their scores in the Graduate Record Examination (GRE) general test (if available). Please send hardcopies of all documents to: Juergen Schmidhuber, IDSIA, Corso Elvezia 36, 6900-Lugano, Switzerland. Applications (with WWW pointers to studies or papers, if available) can also be submitted electronically (in plain ASCII or postscript format, but only small files please) to juergen at idsia.ch. Please connect your first and last name by a dot "." in the subject header, and add a meaningful extension. For instance, if your name is John Smith, then your messages could have headers such as: subject: John.Smith.cv.ps, subject: John.Smith.statement.txt, subject: John.Smith.correspondence.txt.... This will facilitate appropriate filing of your stuff. Thanks a lot!

ABOUT IDSIA. Our research focuses on artificial neural nets, reinforcement learning, complexity and generalization issues, unsupervised learning and information theory, forecasting, combinatorial optimization, evolutionary computation. IDSIA's algorithms hold the world records for several important operations research benchmarks. In the "X-Lab Survey" by Business Week magazine, IDSIA was ranked in fourth place in the category "COMPUTER SCIENCE - BIOLOGICALLY INSPIRED". Its comparatively tiny size notwithstanding, IDSIA also ranked among the top ten labs worldwide in the broader category "ARTIFICIAL INTELLIGENCE". We are located in the beautiful city of Lugano in Ticino, the scenic southernmost province of Switzerland.
Milano, Italy's center of fashion and finance, is 1 hour away, Venice 3 hours. Our collaborators at the Swiss supercomputing center CSCS are nearby; the new University of Lugano is across the lawn. Switzerland (origin of special relativity and the World Wide Web) boasts the highest citation impact factor, the highest supercomputing capacity pc (per capita), the most Nobel prizes pc (450% of the US value), the highest income pc, and perhaps the best chocolate.

SALARY: commensurate with experience but generally attractive. Low taxes. There is also travel funding in the case of papers accepted at important conferences.
_________________________________________________
Juergen Schmidhuber research director IDSIA, Corso Elvezia 36, 6900-Lugano, Switzerland juergen at idsia.ch http://www.idsia.ch/~juergen

From horn at neuron.tau.ac.il Tue Mar 30 08:40:51 1999 From: horn at neuron.tau.ac.il (David Horn) Date: Tue, 30 Mar 1999 15:40:51 +0200 (IST) Subject: NCHEP-99 Message-ID:

First Announcement : NCHEP-99 Workshop on Neural Computation in High Energy Physics 1999 Place: Maale Hachamisha, Israel Dates: October 13 - 15, 1999. This workshop, sponsored by the Israel Science Foundation, will be devoted to the use of Neural Computation in High Energy Physics. Its purpose is to review the current status of this field and to discuss and evaluate possible future developments.

Call for Papers. Applications of neural computation to HEP can be found in the area of data analysis and in both on-line and off-line trigger designs. We would like to discuss their possible involvement in the design of intelligent detectors and the use of reconfigurable devices. We call for papers on recent developments in all these subjects. Abstracts should be submitted electronically by July 1st, 1999. They should include postal and e-mail addresses of all authors and the author to whom correspondence should be addressed. Submitted papers should be limited to seven pages in length. Two copies of the submitted papers should reach the conference scientific committee by September 1st, 1999. Mail submissions to Prof. Halina Abramowicz, NCHEP-99, School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel. e-mail: halina at post.tau.ac.il

Scientific Organizing Committee Prof. H. Abramowicz and Prof. D. Horn, Tel Aviv University.

General Information Information about this workshop is available at the website http://neuron.tau.ac.il/NCHEP-99. Registration will be handled by Dan-Knassim, http://www.congress.co.il, e-mail: congress at mail.inter.net.il, phone: +972-3-6133340, fax: +972-3-6133341. Workshop secretary is Michal Finkelman, e-mail: michal at neuron.tau.ac.il, fax: +972-3-6407932.

Notice This workshop will follow the NCST-99 conference on Neural Computation in Science and Technology, which will take place at the same location. That conference covers areas of both neurobiological modeling and computational applications. Information about NCST-99 is available at http://neuron.tau.ac.il/NCST-99. Participants of NCHEP-99 are encouraged to make use of this opportunity and take part also in that conference.

From niall.griffith at ul.ie Wed Mar 31 08:22:24 1999 From: niall.griffith at ul.ie (Niall Griffith) Date: Wed, 31 Mar 1999 14:22:24 +0100 Subject: NN's & Music - new book Message-ID: <9903311322.AA16146@zeus.csis.ul.ie>

Book Announcement.... MUSICAL NETWORKS: PARALLEL DISTRIBUTED PERCEPTION AND PERFORMANCE edited by Niall Griffith and Peter M. Todd
MUSICAL NETWORKS, a new book on connectionist models of music cognition, composition, and performance, has been published by MIT Press. This book presents the latest research on neural network applications in the domains of musical and creative behavior by leaders in the field including Gjerdingen, Grossberg, Large, Mozer and many others. For a further description, and links to the complete table of contents and preface, please visit http://mitpress.mit.edu/book-home.tcl?isbn=0262071819 MUSICAL NETWORKS can be found in bookstores that carry MIT Press publications, or can be purchased directly from MIT Press through the above website or by calling their toll-free order number, 1-800-356-0343 (or in the UK, (0171)306-0603), and specifying ISBN 0-262-07181-9. The price is $37.50 (hardcover, 385 pages). In conjunction with our book, we have also created an extensive bibliography of connectionist publications on music, which you can access (in html, Word, bibtex, and plain text) at http://www.csis.ul.ie/staff/NiallGriffith/mnpdpp_bib0.htm We intend to keep this an up-to-date list, so if you have published anything in this area (or know of other work) that we have not yet included, please email full details to niall.griffith at ul.ie and it will be added. Niall Griffith and Peter Todd
****************************************************************************
From mblsspw2 at fs2.mt.umist.ac.uk Fri Mar 5 05:51:27 1999 From: mblsspw2 at fs2.mt.umist.ac.uk (Philip Withers) Date: Fri, 5 Mar 1999 11:51:27 +0100 Subject: postdoc position: Neural Network Modelling of Al Rolling Process/Property Relationships Message-ID:

Hello! I am interested in Gaussian process and recurrent neural network models. I have a Post-doctoral position available here in Manchester, UK to work on applied materials problems. The successful applicant should have a mathematical, physics or engineering background. Details are attached. If you know of anyone interested please feel free to pass this information on. Thank you, Prof Phil Withers

THE UNIVERSITY OF MANCHESTER MANCHESTER MATERIALS SCIENCE CENTRE RESEARCH ASSOCIATE Neural Network Modelling of Al Rolling Process/Property Relationships. Applications are invited for a physicist, mathematician, materials scientist or engineer to take up a two year post doctoral research fellowship in collaboration with ALCAN International, Banbury, Oxfordshire. Experience of mathematical modelling and/or materials science and engineering and a keenness to work on applied problems would be an advantage. The Materials Science Centre in Manchester received the highest research assessment grade and has world class expertise in light alloys. A second post focusing on the finite element modelling of friction welding of aerospace components and the measurement of stresses by neutron diffraction is also expected to become available shortly. Modelling and/or materials science and engineering experience advantageous. Salaries for the above posts will be in the range of 17,570 to 21,815 pounds p.a. according to qualifications and experience. For informal enquiries please contact Professor P J Withers at the Manchester Materials Science Centre, Grosvenor Street, Manchester, M1 7HS UK or philip.withers at man.ac.uk For further details and an application form please contact Office of the Director of Personnel, The University of Manchester, Oxford Road, Manchester, M13 9PL. Tel: 0161 275 2028, Fax: 0161 275 2221/2471, Minicom: 0161 275 7889, email: personnel at man.ac.uk Web Site: http://www.man.ac.uk Please quote ref 030/99.
***********************************************************************
Prof. P.J. Withers Manchester Materials Science Centre, Grosvenor St, Manchester, M1 7HS Tel. (0)161 200 8872 Fax (0)161 200 3636 or 3586 philip.withers at man.ac.uk

From chiru at csa.iisc.ernet.in Sun Mar 7 06:44:07 1999 From: chiru at csa.iisc.ernet.in (Chiranjib Bhattacharya) Date: Sun, 7 Mar 1999 17:14:07 +0530 (IST) Subject: TR announcement: A fast algorithm for SVM classifier design Message-ID:

We have recently developed a fast and simple algorithm for Support Vector Machine classifier design using geometric ideas involving convex polytopes. Details are given in the Technical Report mentioned below. For soft copies please email: ssk at csa.iisc.ernet.in Comments are welcome. A Fortran code implementation will be made available on request.
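[ Note: for readers new to the geometric view mentioned above, here is a toy numpy sketch of the classical Gilbert iteration for the underlying problem -- the nearest point to the origin in the difference of the two classes' convex hulls, whose minimum-norm point gives the hard-margin separating direction. This illustrates only the generic idea, not the algorithm of the technical report below; all names and tolerances are assumptions. ]

import numpy as np

def gilbert_svm_direction(X_pos, X_neg, iters=1000, tol=1e-10):
    """Nearest point to the origin in {u - v : u in conv(X_pos), v in conv(X_neg)}."""
    w = X_pos[0] - X_neg[0]                        # start from some difference vector
    for _ in range(iters):
        # support point of the difference set in direction -w
        s = X_pos[np.argmin(X_pos @ w)] - X_neg[np.argmax(X_neg @ w)]
        if w @ (w - s) <= tol:                     # Gilbert's optimality test
            break
        d = s - w
        t = np.clip(-(w @ d) / (d @ d), 0.0, 1.0)  # exact line search on the segment [w, s]
        w = w + t * d
    return w                                       # normal of a max-margin separator

rng = np.random.default_rng(4)
X_pos = rng.normal(size=(40, 2)) + [3.0, 3.0]      # two linearly separable clouds
X_neg = rng.normal(size=(40, 2)) - [3.0, 3.0]
w = gilbert_svm_direction(X_pos, X_neg)
print(w, np.linalg.norm(w))                        # direction; norm = gap between the hulls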
----------------------------------------------------------------------------
A Fast Iterative Nearest Point Algorithm for Support Vector Machine Classifier Design S.S. Keerthi, S.K. Shevade, C. Bhattacharya and K.R.K. Murthy Intelligent Systems Lab Dept. of Computer Science and Automation Indian Institute of Science Bangalore - 560 012 India Technical Report TR-ISL-99-03

Abstract In this paper we give a new, fast iterative algorithm for support vector machine (SVM) classifier design. The basic problem treated is one that does not allow classification violations. The problem is converted to a problem of computing the nearest point between two convex polytopes. The suitability of two classical nearest point algorithms, due to Gilbert, and Mitchell, Dem'yanov and Malozemov, is studied. Ideas from both these algorithms are combined and modified to derive our fast algorithm. For problems which require classification violations to be allowed, the violations are quadratically penalized and an idea due to Friess is used to convert it to a problem in which there are no classification violations. Comparative computational evaluation of our algorithm against powerful SVM methods such as Platt's Sequential Minimal Optimization shows that our algorithm is very competitive.

From harnad at coglit.soton.ac.uk Mon Mar 8 15:03:14 1999 From: harnad at coglit.soton.ac.uk (Stevan Harnad) Date: Mon, 8 Mar 1999 20:03:14 +0000 (GMT) Subject: The Neurology of Syntax: BBS Call for Commentators Message-ID:

Below is the abstract of a forthcoming BBS target article *** please see also 5 important announcements about new BBS policies and an address change at the bottom of this message ***

THE NEUROLOGY OF SYNTAX: LANGUAGE USE WITHOUT BROCA'S AREA by Yosef Grodzinsky

This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL by April 8th to: bbs at cogsci.soton.ac.uk or write to [PLEASE NOTE SLIGHTLY CHANGED ADDRESS]: Behavioral and Brain Sciences ECS: New Zepler Building University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract.
_____________________________________________________________
THE NEUROLOGY OF SYNTAX: LANGUAGE USE WITHOUT BROCA'S AREA Yosef Grodzinsky Department of Psychology Tel Aviv University Tel Aviv 69978 ISRAEL and Aphasia Research Center Department of Neurology Boston University School of Medicine yosef1 at ccsg.tau.ac.il

ABSTRACT: A new view of the functional role of left anterior cortex in language use is proposed. The experimental record indicates that most human linguistic abilities are not localized in this region. In particular, most of syntax (long thought to be there) is not located in Broca's area and its vicinity (operculum, insula and subjacent white matter). This cerebral region, implicated in Broca's aphasia, does have a role in syntactic processing, but a highly specific one: it is the neural home to receptive mechanisms involved in the computation of the relation between transformationally moved phrasal constituents and their extraction sites (in line with the Trace-Deletion Hypothesis). It is also involved in the construction of higher parts of the syntactic tree in speech production. By contrast, basic combinatorial capacities necessary for language processing - e.g., structure building operations, lexical insertion - are not supported by the neural tissue of this cerebral region, nor is lexical or combinatorial semantics. The dense body of empirical evidence supporting this restrictive view comes mainly from several angles on lesion studies of syntax in agrammatic Broca's aphasia. Five empirical arguments are presented: experiments in sentence comprehension; cross-linguistic considerations (where aphasia findings from several language types are pooled together and scrutinized comparatively); grammaticality and plausibility judgments; real-time processing of complex sentences; and rehabilitation. Also discussed are recent results from functional neuroimaging, and from structured observations on speech production of Broca's aphasics. Syntactic abilities, nonetheless, are distinct from other cognitive skills, and are represented entirely and exclusively in the left cerebral hemisphere. Although more widespread in the left hemisphere than previously thought, they are clearly distinct from other human combinatorial and intellectual abilities. The neurological record (based on functional imaging, split-brain and right-hemisphere damaged patients, as well as patients suffering from a breakdown of mathematical skills) indicates that language is a distinct, modularly organized neurological entity. Combinatorial aspects of the language faculty reside in the human left cerebral hemisphere, but only the transformational component (or algorithms that implement it in use) is located in and around Broca's area.

KEYWORDS: agrammatism, aphasia, Broca's area, cerebral localization, dyscalculia, functional neuroanatomy, grammatical transformation, modularity, neuroimaging, syntax, trace-deletion.
____________________________________________________________
To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.
The URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.grodzinsky.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.grodzinsky ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.grodzinsky

To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1
When you are asked for your login, type: anonymous
Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@")
cd /pub/harnad/BBS
To show the available files, type: ls
Next, retrieve the file you want with (for example): get bbs.grodzinsky
When you have the file(s) you want, type: quit
____________________________________________________________
*** FIVE IMPORTANT ANNOUNCEMENTS ***
------------------------------------------------------------------
(1) There have been some very important developments in the area of Web archiving of scientific papers very recently. Please see: Science: http://www.cogsci.soton.ac.uk/~harnad/science.html Nature: http://www.cogsci.soton.ac.uk/~harnad/nature.html American Scientist: http://www.cogsci.soton.ac.uk/~harnad/amlet.html Chronicle of Higher Education: http://www.chronicle.com/free/v45/i04/04a02901.htm
---------------------------------------------------------------------
(2) All authors in the biobehavioral and cognitive sciences are strongly encouraged to archive all their papers on CogPrints (as well as on their own Home-Servers): http://cogprints.soton.ac.uk/ It is extremely simple to do so and will make all of our papers available to all of us everywhere at no cost to anyone.
---------------------------------------------------------------------
(3) BBS has a new policy of accepting submissions electronically. Authors can specify whether they would like their submissions archived publicly during refereeing in the BBS under-refereeing Archive, or in a referees-only, non-public archive. Upon acceptance, preprints of final drafts are moved to the public BBS Archive: ftp://ftp.princeton.edu/pub/harnad/BBS/.WWW/index.html http://www.cogsci.soton.ac.uk/bbs/Archive/
--------------------------------------------------------------------
(4) BBS has expanded its annual page quota and is now appearing bimonthly, so the service of Open Peer Commentary can now be offered to more target articles. The BBS refereeing procedure is also going to be considerably faster with the new electronic submission and processing procedures. Authors are invited to submit papers to: Email: bbs at cogsci.soton.ac.uk Web: http://cogprints.soton.ac.uk http://bbs.cogsci.soton.ac.uk/ INSTRUCTIONS FOR AUTHORS: http://www.princeton.edu/~harnad/bbs/instructions.for.authors.html http://www.cogsci.soton.ac.uk/bbs/instructions.for.authors.html
---------------------------------------------------------------------
(5) Call for Book Nominations for BBS Multiple Book Review In the past, the Behavioral and Brain Sciences (BBS) journal had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.)
It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!).

From Dunja.Mladenic at ijs.si Mon Mar 8 07:08:38 1999 From: Dunja.Mladenic at ijs.si (Dunja Mladenic) Date: Mon, 08 Mar 1999 13:08:38 +0100 Subject: thesis announcement References: <25598.920595604@skinner.boltz.cs.cmu.edu> Message-ID: <36E3BDC6.E8FA29D4@ijs.si>

I'm glad to announce the availability of my PhD thesis "Machine Learning on non-homogeneous, distributed text data" Advisors: Prof. Ivan Bratko, Prof. Tom M. Mitchell. The thesis is available at http://www.cs.cmu.edu/~TextLearning/pww/PhD.html as well as at http://www-ai.ijs.si/DunjaMladenic/PhD.html Best regards, Dunja Mladenic

================ ABSTRACT This dissertation proposes new machine learning methods where the corresponding learning problem is characterized by a high number of features, unbalanced class distribution and asymmetric misclassification costs. The input is given as a set of text documents or their Web addresses (URLs). The induced target concept is appropriate for the classification of new documents, including shortened documents describing individual hyperlinks. The proposed methods are based on several new solutions. Proposed is a new, enriched document representation that extends the bag-of-words representation by adding word sequences and document topic categories. Features that represent word sequences are generated using a new efficient procedure. Features giving topic categories are obtained from background knowledge constructed using the new machine learning method for learning from class hierarchies. When learning from a class hierarchy, a high number of class values, examples and features is handled by (1) dividing the problem into subproblems based on the hierarchical structure of class values and examples, (2) applying feature subset selection, and (3) pruning unpromising class values during classification. Several new feature scoring measures are proposed as a result of a comparison and analysis of different feature scoring measures used in feature subset selection on text data. The new measures are appropriate for text domains with several tens or hundreds of thousands of features, and can handle unbalanced class distributions and asymmetric misclassification costs. The developed methods are suitable for the classification of documents, including shortened documents. We build descriptions of hyperlinks, and treat these as shortened documents. Since each hyperlink on the Web points to some document, the classification of hyperlinks (the corresponding shortened documents) can potentially be improved by using this information. We give the results of preliminary experiments for learning in domains with mutually dependent class attributes. Training examples are used for learning `a next state function on the Web', where document content (class attributes) is predicted from the hyperlink (feature-vector) that points to the document. The document content we are predicting is represented as a feature-vector, each feature being one of the mutually dependent class attributes. The proposed methods and solutions are implemented and experimentally evaluated on real-world data collected from the Web in three independent projects. It is shown that document classification, categorization and prediction using the proposed methods perform well on large, real-world domains.
The experimental findings further indicate that the developed methods can efficiently be used to support analysis of large amounts of text data, automatic document categorization and abstraction, document content prediction based on the hyperlink content, classification of shortened documents, development of user-customized text-based systems, and user-customized Web browsing. As such, the proposed machine learning methods contribute to machine learning and to the related fields of text-learning, data mining, intelligent data analysis, information retrieval, intelligent user interfaces, and intelligent agents. Within machine learning, this thesis contributes an approach to learning on large, distributed text data, learning on hypertext, and learning from class hierarchies. Within computer science, it contributes to better design of Web browsers and software assistants for people using the Web.

From Kaspar.Althoefer at kcl.ac.uk Tue Mar 9 12:04:04 1999 From: Kaspar.Althoefer at kcl.ac.uk (Althoefer, Kaspar) Date: Tue, 09 Mar 1999 17:04:04 +0000 Subject: Research Studentship at King's College London, UK Message-ID: <36E55484.AA3CF242@kcl.ac.uk>

Connectionists at CS.cmu.edu Dear Colleagues, I have a Research Studentship available here in London, UK, to work on waste pipe inspection involving research on sensors and neural networks. Details are attached. If you know of anyone interested please feel free to pass this information on. Best regards, Kaspar Althoefer.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
KING'S COLLEGE LONDON Department of Mechanical Engineering Founded 1829

Research Studentship A MULTI-SENSOR SYSTEM FOR SEWER INSPECTION A Collaborative Project with North West Water, Water Research Centre and Telespec Ltd. sponsored by the EPSRC

A research studentship is available in the Department of Mechanical Engineering as a result of a recently awarded grant by the EPSRC. The Department was awarded the top rating, 5*, in the 1996 Research Assessment Exercise and excellent facilities for both computational and experimental research are available. The project is co-sponsored by North West Water, the Water Research Centre (WRc) and Telespec Ltd. The three-year project involves the development of a novel sensor device for the inspection of waste pipes and the classification of the acquired sensor data. Applications are welcome from candidates with an interest in pursuing research in the areas of sensors, digital signal processing and neural networks. Applicants will preferably hold, or expect to obtain, a good first or higher degree in mechanical engineering, electronic engineering or a related subject, and will have programming ability. Applicants will be expected to register for research studies leading to the MPhil/PhD degree. The studentship covers tuition fees at the U.K./E.U. student rate and an annual maintenance allowance of 8,482 pounds for three years. Applications in the form of a curriculum vitae with the names of two academic referees must be sent to: Ms Nicola Nayler, Ref: EPSRC/KAA, Division of Engineering, King's College London, Strand, London WC2R 2LS, e-mail: Nicola.Nayler at kcl.ac.uk. All applications must be received by 15 April 1999. Informal enquiries can be made to Dr Kaspar Althoefer at the address below: Dr Kaspar Althoefer, Department of Mechanical Engineering, King's College, Strand, London WC2R 2LS, UK, TEL: +44 (0)171 873 2431, e-mail: Kaspar.Althoefer at kcl.ac.uk, http://www.eee.kcl.ac.uk/~kaspar.
Promoting excellence in teaching, learning & research. Equality of opportunity is College policy.

From d.husmeier at ic.ac.uk Wed Mar 10 09:32:34 1999 From: d.husmeier at ic.ac.uk (Dirk Husmeier) Date: Wed, 10 Mar 1999 14:32:34 GMT Subject: New Book Message-ID: <24743.199903101432@picard.ee.ic.ac.uk>

The following book is now available: Dirk Husmeier NEURAL NETWORKS FOR CONDITIONAL PROBABILITY ESTIMATION Forecasting Beyond Point Predictions Perspectives in Neural Computing Springer Verlag ISBN 1-85233-095-3 275 pages http://www.springer.co.uk

--------------------------------------------------
SYNOPSIS
--------------------------------------------------
Neural networks have been extensively applied to regression, forecasting, and system modelling. However, most of the conventional approaches predict only a single value as a function of the network inputs, which is inappropriate when the underlying conditional probability density is skewed or multi-modal. The objective of this book is to study the application of neural networks to predicting the entire conditional probability distribution of an unknown data-generating process. In the first part, the structure of a universal approximator architecture is discussed, and a backpropagation-like training scheme is derived from a maximum likelihood approach. More advanced chapters address the problems of training speed and generalisation performance. Several recent learning and regularisation methods are reviewed and adapted to the problem of predicting conditional probabilities: a combination of the random vector functional link net approach with the expectation maximisation algorithm, a generalisation of the Bayesian evidence scheme to mixture models, the derivation of an appropriate weighting scheme in network ensembles, and a discussion of why the over-fitting of individual networks may lead to an improved prediction performance of a network committee. All techniques and algorithms are applied to a set of various synthetic and real-world benchmark problems, and numerous graphs and diagrams provide a deeper insight into the nature of the learning and regularisation processes. Presupposing only a basic knowledge of probability and calculus, this book should be of interest to graduate students, researchers and practitioners in statistics, econometrics and artificial intelligence.

--------------------------------------------------
OVERVIEW
--------------------------------------------------
Conventional applications of neural networks usually predict a single value as a function of given inputs. In forecasting, for example, a standard objective is to predict the future value of some entity of interest on the basis of a time series of past measurements or observations. Typical training schemes aim to minimise the sum of squared deviations between predicted and actual values (the `targets'), by which, ideally, the network learns the conditional mean of the target given the input. If the underlying conditional distribution is Gaussian or at least unimodal, this may be a satisfactory approach. However, for a multimodal distribution, the conditional mean does not capture the relevant features of the system, and the prediction performance will, in general, be very poor. This calls for a more powerful and sophisticated model, which can learn the whole conditional probability distribution.
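[ Note: a minimal numpy sketch of the kind of model the book is about -- a network whose outputs parameterise a full conditional density p(y|x), here a Gaussian mixture. The single hidden layer, the sizes, the clipping of the widths and the random weights are illustrative assumptions, not the book's architecture. ]

import numpy as np

rng = np.random.default_rng(1)
K, H = 3, 8                              # mixture components, hidden units
W1 = rng.normal(size=(H, 1)); b1 = np.zeros(H)
W2 = rng.normal(size=(3 * K, H)); b2 = np.zeros(3 * K)

def conditional_density(y, x):
    """p(y|x) as a K-component Gaussian mixture produced by the network."""
    h = np.tanh(W1 @ np.atleast_1d(x) + b1)
    out = W2 @ h + b2
    logits, mu, log_sigma = out[:K], out[K:2 * K], out[2 * K:]
    pi = np.exp(logits - logits.max()); pi /= pi.sum()        # mixing weights
    sigma = np.exp(np.clip(log_sigma, -2.0, 1.0))             # positive widths, kept tame for the toy
    comp = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return float(pi @ comp)

# A maximum likelihood training scheme would minimise the negative
# log-likelihood -sum_n log p(y_n|x_n) with respect to (W1, b1, W2, b2).
ys = np.linspace(-16.0, 16.0, 641)
dens = np.array([conditional_density(y, 0.5) for y in ys])
print(float(np.sum(dens) * (ys[1] - ys[0])))   # ~1: a proper, possibly multimodal density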
Chapter 1 demonstrates that even for a deterministic system and `benign' Gaussian observational noise, the conditional distribution of a future observation, conditional on a set of past observations, can become strongly skewed and multimodal. In Chapter 2, a general neural network structure for modelling conditional probability densities is derived, and it is shown that a universal approximator for this extended task requires at least two hidden layers. A training scheme is developed from a maximum likelihood approach in Chapter 3, and the performance of this method is demonstrated on three stochastic time series in Chapters 4 and 5. Several extensions of this basic paradigm are studied in the following chapters, aiming at both an increased training speed and a better generalisation performance. Chapter 7 shows that a straightforward application of the Expectation Maximisation (EM) algorithm does not lead to any improvement in the training scheme, but that in combination with the random vector functional link (RVFL) net approach, reviewed in Chapter 6, the training process can be accelerated by about two orders of magnitude. An empirical corroboration of this `speed-up' can be found in Chapter 8. Chapter 9 discusses a simple Bayesian approach to network training, where a conjugate prior distribution on the network parameters naturally results in a penalty term for regularisation. However, the hyperparameters still need to be set by intuition or cross-validation, so a consistent extension is presented in Chapters 10 and 11, where the Bayesian evidence scheme, introduced to the neural network community by MacKay for regularisation and model selection in the simple case of Gaussian homoscedastic noise, is generalised to arbitrary conditional probability densities. The Hessian matrix of the error function is calculated with an extended version of the EM algorithm. The resulting update equations for the hyperparameters and the expression for the model evidence are found to reduce to MacKay's results in the above limit of Gaussian noise and thus provide a consistent generalisation of these earlier results. An empirical test of the evidence-based regularisation scheme, presented in Chapter 12, confirms that the problem of overfitting can be considerably reduced, and that the training process is stabilised with respect to changes in the length of training time. A further improvement of the generalisation performance can be achieved by employing network committees, for which two weighting schemes -- based on either the evidence or the cross-validation performance -- are derived in Chapter 13. Chapters 14 and 16 report the results of extensive simulations on a synthetic and a real-world problem, where the intriguing observation is made that in network committees, overfitting of the individual models can be useful and may lead to better prediction results than obtained with an ensemble of properly regularised networks. An explanation for this curiosity can be given in terms of a modified bias-variance dilemma, as expounded in Chapter 13. The subject of Chapter 15 is the problem of feature selection and the identification of irrelevant inputs. To this end, the automatic relevance determination (ARD) scheme of MacKay and Neal is adapted to learning in committees of probability-predicting RVFL networks.
This method is applied in Chapter 16 to a real-world benchmark problem, where the objective is the prediction of housing prices in the Boston metropolitan area on the basis of various socio-economic explanatory variables. The book concludes in Chapter 17 with a brief summary.

From stefan.wermter at sunderland.ac.uk Wed Mar 10 11:46:38 1999 From: stefan.wermter at sunderland.ac.uk (Stefan Wermter) Date: Wed, 10 Mar 1999 16:46:38 +0000 Subject: PhD Studentship in hybrid intelligent systems Message-ID: <36E6A1EE.831F29AD@sunderland.ac.uk>

PhD Studentship in Hybrid Intelligent Systems Applications are invited for a three year PhD studentship in the area of Hybrid Intelligent Systems. Areas of interest include: Artificial Neural Networks Natural Language Processing Hybrid Neural/Symbolic Architectures Cognitive Neuroscience Learning Agents and Softbots More examples of possible titles for research topics of interest can be found at: http://osiris.sunderland.ac.uk/~cs0stw/Projects/suggested_topics_titles.html Applicants should have a good honours degree in a relevant subject. The studentship includes fees and a maintenance allowance (around 6,500 pounds, under review). There may be possibilities for earning additional sums in the Centre for Informatics. If interested please e-mail Stefan.Wermter at sunderland.ac.uk
******************************************
Professor Stefan Wermter Research Chair in Intelligent Systems University of Sunderland Centre of Informatics School of Computing, Engineering and Technology St. Peters Way Sunderland SR6 0DD United Kingdom Phone: +44 191 515 3279 Fax: +44 191 515 2781 Email: stefan.wermter at sunderland.ac.uk http://osiris.sunderland.ac.uk/~cs0stw/ http://www.his.sunderland.ac.uk/
******************************************

From jose at tractatus.rutgers.edu Thu Mar 11 06:38:37 1999 From: jose at tractatus.rutgers.edu (Stephen Jose Hanson) Date: Thu, 11 Mar 1999 07:38:37 -0400 Subject: COMPUTER MANAGER/RESEARCH STAFF Message-ID: <36E7AB3C.AB5D24E4@tractatus.rutgers.edu>

COMPUTER MANAGER/RESEARCH STAFF Reporting to the chair, responsible for administering the computing resources of the department. A major component of this position involves research in Cognitive Science, especially related to Connectionist networks (or Neural Networks and Computational Neuroscience). Will plan, direct, and implement research approaches and concepts with faculty, including writing and organizing research experiments. Must be able to write program specifications designed for specific research control situations. Other responsibilities consist of installing and debugging software, and routine system maintenance administration. Will participate in planning and design for network growth and computing facilities as it relates to RU-NET 2000. Requires a bachelor's degree, or MS in Computer Science, Cognitive Science, Cognitive Neuroscience or AI or other related fields, or equivalent experience. Requires familiarity with C programming, UNIX system internals (BSD, System V, Solaris, Linux) and Windows (95, NT) as well as local area networks running TCP/IP. Image processing or graphics programming experience a plus.
EMAIL Enquiries: jose at psychology.rutgers.edu please include in Subject Heading: SYS ADM Salary Range 27 Retirement System ABP Send resumes to: COMPUTER MANAGER SEARCH Department of PSYCHOLOGY, RUTGERS-NEWARK, 101 Warren Street, SMITH HALL, Newark NJ 07102

From d.husmeier at ic.ac.uk Thu Mar 11 15:38:32 1999 From: d.husmeier at ic.ac.uk (Dirk Husmeier) Date: Thu, 11 Mar 1999 20:38:32 GMT Subject: Correction of URL Message-ID: <25662.199903112038@picard.ee.ic.ac.uk>

On announcing my new book yesterday (Neural Networks for Conditional Probability Estimation, Springer), I erroneously stated the general URL of Springer Verlag (http://www.springer.co.uk) rather than the specific web address http://www.springer.co.uk/comp/books/perspectives.html from where further information about publications in the series "Perspectives in Neural Computing" can be obtained. I am sorry for any confusion or inconvenience caused by this. Best wishes, Dirk Husmeier

From dummy at ultra3.ing.unisi.it Thu Mar 11 15:57:15 1999 From: dummy at ultra3.ing.unisi.it (Paolo Frasconi) Date: Thu, 11 Mar 1999 21:57:15 +0100 (MET) Subject: CFP: Special Issue on Learning in Structured Domains Message-ID:

CALL FOR PAPERS Special issue on Connectionist Models for Learning in Structured Domains IEEE Transactions on Knowledge and Data Engineering Submission deadline: July 30, 1999

BACKGROUND Structured representations are ubiquitous in different fields such as knowledge representation, language modeling, and pattern recognition. Although many of the most successful connectionist models are designed for "flat" (vector-based) or sequential representations, recursive or nested representations should be preferred in several situations. One obvious setting is concept learning when objects in the instance space are graphs or can be conveniently represented as graphs. Terms in first-order logic, blocks in document processing, patterns in structural and syntactic pattern recognition, chemical compounds, proteins in molecular biology, and even world wide web sites, are all entities which are best represented as graphical structures, and they cannot easily be dealt with by vector-based architectures. In other cases (e.g., language processing) the process underlying the data has a (hidden) recursive nature but only a flat representation is left as an observation. Still, the architecture should be able to deal with recursive representations in order to model correctly the mechanism that generated the observations. The interest in developing connectionist architectures capable of dealing with these rich representations can be traced back to the end of the 80's. Early approaches include Touretzky's BoltzCONS, Pollack's RAAM model, and Hinton's recursive distributed representations. More recent techniques include labeled RAAMs, holographic reduced representations, and recursive neural networks. Today, after more than ten years since the explosion of interest in connectionism, research in architectures and algorithms for learning structured representations still has a lot to explore and no definitive answers have emerged. It seems that the major difficulty with connectionist models is not just representing symbols, but rather devising proper ways of learning when examples are data structures, i.e. labeled graphs that can be used for describing relationships among symbols (or, more generally, combinations of symbols and continuously-valued attributes).
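[ Note: a minimal numpy sketch of the recursive-network idea mentioned in the background above -- a state vector computed bottom-up over a labeled binary tree, with the root state feeding a classifier. The tree encoding, sizes and random weights are illustrative assumptions, not a specific model from the literature. ]

import numpy as np

rng = np.random.default_rng(2)
LABEL, STATE = 4, 6
A = rng.normal(size=(STATE, LABEL))       # contribution of the node label
BL = rng.normal(size=(STATE, STATE))      # contribution of the left child state
BR = rng.normal(size=(STATE, STATE))      # contribution of the right child state
w = rng.normal(size=STATE)                # linear read-out at the root

def state(node):
    """node = (label_vector, left_subtree_or_None, right_subtree_or_None)."""
    label, left, right = node
    sl = state(left) if left is not None else np.zeros(STATE)    # frontier: zero state
    sr = state(right) if right is not None else np.zeros(STATE)
    return np.tanh(A @ label + BL @ sl + BR @ sr)                # recursive transition

def classify(tree):
    return 1.0 / (1.0 + np.exp(-w @ state(tree)))   # P(class | whole labeled tree)

leaf = lambda: (rng.normal(size=LABEL), None, None)
tree = (rng.normal(size=LABEL), (rng.normal(size=LABEL), leaf(), None), leaf())
print(classify(tree))   # one probability for the entire structure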
TOPICS The aim of this special issue is to solicit and publish valuable papers that bring a clear picture of the state of the art in this area. We encourage submissions of papers addressing, in addition to other relevant issues, the following topics: * Algorithms and architectures for classification of data structures. * Unsupervised learning in structured domains. * Belief networks for learning structured patterns. * Compositional distributed representations. * Recursive autoassociative memories. * Learning structured rules and structured rule refinement. * Connectionist learning of syntactic parsing from text corpora. * Stochastic grammars and their relationships to neural and belief networks. * Links between connectionism and syntactic and structural pattern recognition. * Analogical reasoning. * Applications, including: - Medical and technical diagnosis: discovery and manipulation of structured dependencies, constraints, explanations. - Molecular biology and chemistry: prediction of molecular structure folding, classification of chemical structures. - Automated reasoning: robust matching, manipulation of logical terms, proof plans, search space reduction. - Software engineering: quality testing, modularization of software. - Geometrical and spatial reasoning: robotics, structured representation of objects in space, figure animation, layout of objects. INSTRUCTIONS We encourage e-mail submissions (Postscript, RTF, and PDF are the only acceptable formats). For hard-copy submission, please send 6 copies of the manuscript to Prof. Marco Gori. Manuscripts should not exceed 30 pages double-spaced (excluding Figures and Tables). The title and the abstract should be sent separately in ASCII format, even before the final submission, so that reviewers can be contacted in a timely manner. IMPORTANT DATES Submission of title and abstract (e-mail): July 15, 1999 Submission deadline: July 30, 1999 Notification of acceptance: December 31, 1999 Expected publication date: Mid-to-late 2000. GUEST EDITORS Prof. Paolo Frasconi DIEE, University of Cagliari Piazza d'Armi 09123 Cagliari (ITALY) Phone: +39 070 675 5849 E-mail: paolo at diee.unica.it Prof. Marco Gori DII, University of Siena Via Roma 56, 53100 Siena (ITALY) Phone: +39 0577 263 610 E-mail: marco at ing.unisi.it Prof. Alessandro Sperduti DI, University of Pisa Corso Italia 40, 56125 Pisa (ITALY) Phone: +39 050 887 213 E-mail: perso at di.unipi.it From ehartman at pav.com Thu Mar 11 16:34:00 1999 From: ehartman at pav.com (Eric Hartman) Date: Thu, 11 Mar 99 15:34:00 CST Subject: job announcement Message-ID: <36E8371F@pav.com> Pavilion is the leader in the development and application of software for modeling, optimization and advanced control in the process industries (chemicals, polymer, refining, and pulp & paper). We offer a dynamic, creative environment where team members contribute to the success of the company as well as develop and grow their state-of-the-art knowledge and skills. Pavilion is located in Austin, TX and has approximately 100 employees. For more information on Pavilion refer to our web site www.pavtech.com. Researcher A position in the research group at Pavilion is currently available. The successful candidate will be expected to take a leadership role in the design and development of new modeling and control software. Therefore, the candidate is required to have a research background in modeling and control. Familiarity with neural network algorithms is also required.
Research experience in continuous optimization or model predictive control is a plus. A Masters or Ph.D. degree in engineering, computer science, physics, mathematics, or operations research is required. Because the candidate will be designing new products, strong software design skills with experience in C++ are needed. Experience with Visual Basic, COM, and Windows programming is also a plus. The successful candidate must be self-motivated, be a team player, and have strong communication skills. Pavilion provides excellent benefits and compensation plans, including incentive stock options. Pavilion is an EOE. Send resume in confidence to Staffing: Pavilion Technologies, Inc. Dale Smith, Director of Human Relations 11100 Metric Blvd., #700 Austin, TX 78758-4018 Or e-mail: spiche at pav.com fax: 512-438-1401 From nigeduff at cse.ucsc.edu Fri Mar 12 13:38:57 1999 From: nigeduff at cse.ucsc.edu (Nigel Duffy) Date: Fri, 12 Mar 1999 10:38:57 -0800 Subject: Paper available Re: Gradient Descent and Boosting Message-ID: <199903121838.KAA11899@alpha.cse.ucsc.edu> David Helmbold and I have the following paper relating boosting to gradient descent. This relationship is used to derive an algorithm and prove performance bounds on this new algorithm. A Geometric Approach to Leveraging Weak Learners Nigel Duffy and David Helmbold University of California Santa Cruz ABSTRACT AdaBoost is a popular and effective leveraging procedure for improving the hypotheses generated by weak learning algorithms. AdaBoost and many other leveraging algorithms can be viewed as performing a constrained gradient descent over a potential function. At each iteration the distribution over the sample given to the weak learner is the direction of steepest descent. We introduce a new leveraging algorithm based on a natural potential function. For this potential function, the direction of steepest descent can have negative components. Therefore we provide two transformations for obtaining suitable distributions from these directions of steepest descent. The resulting algorithms have bounds that are incomparable to AdaBoost's, and their empirical performance is similar to AdaBoost's. To appear in EuroColt 99, to be published by Springer Verlag. Available from: "http://www.cse.ucsc.edu/research/ml/papers/GeometricLeveraging.ps" From carlos at sonnabend.ifisiol.unam.mx Fri Mar 12 21:38:19 1999 From: carlos at sonnabend.ifisiol.unam.mx (Carlos Brody) Date: Fri, 12 Mar 1999 20:38:19 -0600 Subject: Cross-correlations DO NOT imply synchrony Message-ID: <199903130238.UAA08617@sonnabend.ifisiol.unam.mx> Cross-correlations DO NOT imply synchrony: announcing 3 papers -------------------------------------------------------------- Suppose that you record from two stimulus-driven cells simultaneously, over many trials. Interested in whether they are synchronized, you compute the average cross-correlogram of their spike trains. (For the initiated, you compute their shuffle-corrected cross-correlogram, so as to get rid of direct stimulus influences.) You find, in the resulting correlogram, that there is a narrow peak, centered at zero, with a width of, say, 15 ms. "Ah! The cells are synchronized on a 15-ms timescale!" you conclude. In concluding this you will be doing what most people do, and what most papers in the literature do. THIS CONCLUSION DOES NOT NECESSARILY FOLLOW. How and why?
If the PSTHs of the cells have narrow peaks, by which I mean as narrow as the peak in the xcorrelogram itself, then even if the mechanism synchronizing the cells has a very, very slow timescale (e.g. tens of seconds), the xcorrelogram will have a narrow peak. Such a peak would NOT be an artifact. It arises ONLY if there *IS* an interaction -- synchrony, if you will -- between the two cells. What is wrong is the conclusion regarding the timescale of the interaction. A narrow peak (tens of ms) does NOT necessarily mean a fast interaction or a fast timescale of synchronization. Wrong interpretations of this sort can make nonsense of the arguments one is making with respect to the data. A case in point is Sillito et al. "Feature-linked synchronization of thalamic relay cell firing induced by feedback from the visual cortex", Nature 369: 479-482 (1994). A paper recently published in J. Neurophysiol (see pointer below) uses a simple biophysical model to go through that example in detail. It shows how one can get exactly the same xcorrelograms Sillito et al. got, but without any binding-related (i.e. fast) synchrony at all. Instead, in the model the only interaction between the cells is that their resting potential slowly covaries over the trials of the experiment. That slow (tens of seconds) covariation reproduces Sillito et al.'s data in remarkable detail. Two other papers, in press in Neural Computation, go through these kinds of issues in a more abstract manner. The first describes the problem, and tries to provide rules of thumb for being alert to when interpretation problems may arise. The second paper suggests a couple of methods to try to disambiguate interpretations. Comments welcome. Carlos Brody carlos at sonnabend.ifisiol.unam.mx http://www.cns.caltech.edu/~carlos ----------------------------------------------------------- SLOW COVARIATIONS IN NEURONAL RESTING POTENTIALS CAN LEAD TO ARTEFACTUALLY FAST CROSS-CORRELATIONS IN THEIR SPIKE TRAINS. by C.D. Brody J. Neurophysiol., 80: 3345-3351 (Dec 1998) Reprint also at http://www.cns.caltech.edu/~carlos/papers/slowcovs.pdf A model of two lateral geniculate nucleus (LGN) cells that interact only through slow (tens of seconds) covariations in their resting membrane potentials is used here to investigate the effect of such slow covariations on cross-correlograms taken during stimulus-driven conditions. Despite the slow time-scale of the interactions, the model generates cross-correlograms with peak widths in the range of 25 -- 200 milliseconds. These bear a striking resemblance to those reported in studies of LGN cells by \cite{Sillito94}, which were taken at the time as evidence of a fast spike timing synchronization interaction; the model highlights the possibility that those correlogram peaks may have been caused by a mechanism other than spike synchronization. Slow resting potential covariations are suggested instead as the dominant generating mechanism. How can a slow interaction generate covariogram peaks with a width 100 to 1000 times thinner than its timescale? Broad peaks caused by slow interactions are modulated by the cells' PSTHs. When the PSTHs have thin peaks (e.g., tens of milliseconds), the cross-correlogram peaks generated by slow interactions will also be thin; such peaks are easily misinterpretable as being caused by fast interactions. Though this point is explored here in the context of LGN recordings, it is a general point and applies elsewhere.
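The effect described in this abstract is easy to reproduce in simulation. The sketch below (an illustration constructed for this summary, not code from the paper, with arbitrary parameter choices) drives two otherwise independent Poisson-like cells with sharply peaked PSTHs and a shared excitability that varies only ACROSS trials; the shuffle-corrected cross-correlogram nevertheless shows a narrow central peak.

    import numpy as np

    # Two cells, conditionally independent given a shared per-trial gain.
    # The only "interaction" is the slow, across-trial gain covariation.
    rng = np.random.RandomState(1)
    n_trials, n_bins = 200, 100                  # 1-ms bins per trial
    psth = np.exp(-0.5 * ((np.arange(n_bins) - 50) / 5.0) ** 2)  # narrow peak

    gain = np.clip(1.0 + 0.5 * rng.randn(n_trials), 0.1, None)
    rate1 = np.outer(gain, 0.3 * psth)           # same gain for both cells,
    rate2 = np.outer(gain, 0.3 * psth)           # constant within each trial
    s1 = (rng.rand(n_trials, n_bins) < rate1).astype(float)
    s2 = (rng.rand(n_trials, n_bins) < rate2).astype(float)

    def xcorr(a, b):
        """Cross-correlogram averaged over trials."""
        return np.mean([np.correlate(x, y, mode="full")
                        for x, y in zip(a, b)], axis=0)

    raw = xcorr(s1, s2)
    shuffle = xcorr(s1, np.roll(s2, 1, axis=0))  # shift predictor
    corrected = raw - shuffle                    # removes stimulus locking
    print(corrected.argmax() - (n_bins - 1))     # peak near lag 0

Although the only interaction operates on the timescale of whole trials, the corrected correlogram peak comes out about as narrow as the PSTH peak, which is precisely the misinterpretation hazard analyzed here.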
When cross-correlogram peak widths are of the same order of magnitude as PSTH peak widths, experiments designed to reveal short-timescale interactions must be interpreted with the issue of possible contributions from slower interactions in mind. -------------------------------------------------------------- http://www.cns.caltech.edu/~carlos/papers/nosynch.ps.Z nosynch.pdf CORRELATIONS WITHOUT SYNCHRONY by C.D. Brody In press, Neural Computation Peaks in spike train correlograms are usually taken as indicative of spike timing synchronization between neurons. Strictly speaking, however, a peak merely indicates that the two spike trains were not independent. Two biologically-plausible ways of departing from independence which are capable of generating peaks very similar to spike timing peaks are described here: covariations over trials in response {\em latency} and covariations over trials in neuronal {\em excitability}. Since peaks due to these interactions can be similar to spike timing peaks, interpreting a correlogram may be a problem with ambiguous solutions. What peak shapes do latency or excitability interactions generate? When are they similar to spike timing peaks? When can they be ruled out from having caused an observed correlogram peak? These are the questions addressed here. A companion paper \citep{Brody98b} proposes quantitative methods to tell cases apart when latency or excitability covariations cannot be ruled out. -------------------------------------------------------------- http://www.cns.caltech.edu/~carlos/papers/disambiguating.ps.Z disambiguating.pdf DISAMBIGUATING DIFFERENT COVARIATION TYPES by C.D. Brody In press, Neural Computation Covariations in neuronal {\em latency} or {\em excitability} can lead to peaks in spike train covariograms which may be very similar to those caused by spike timing synchronization \citep{Brody98a}. Two quantitative methods are described here: (1) A method to estimate the excitability component of a covariogram, based on trial-by-trial estimates of excitability. Once estimated, this component may be subtracted from the covariogram, leaving only other types of contributions. (2) A method to determine whether the covariogram could potentially have been caused by latency covariations. -------------------------------------------------------------- From jose at tractatus.rutgers.edu Sat Mar 13 09:23:13 1999 From: jose at tractatus.rutgers.edu (Stephen Jose Hanson) Date: Sat, 13 Mar 1999 10:23:13 -0400 Subject: RUMBA- POSTDOC POSITIONS and GRADUATE FELLOWSHIPS in Cog Sci/Cog Neuro Message-ID: <36EA74D1.59155BAD@tractatus.rutgers.edu> RUMBA at RUTGERS UNIVERSITY, Newark Campus: the Rutgers Mind/Brain Analysis (RUMBA) Project anticipates making several POSTDOCTORAL positions available in Fall 99. These positions run for a minimum of 2 years and will be in the specialization areas of cognitive neuroscience and connectionist modeling, with applications to recurrent networks, image processing and functional brain imaging. The Rutgers Psychology Department has made several appointments in this area in the last few years, has access to two 1.5T magnets, and is in the process of acquiring a head-only 3T magnet for cognitive neuroscience research. Review of applications will begin on June 10th, 1999, but applications will continue to be accepted until all positions are filled. Rutgers University is an equal opportunity/affirmative action employer. Qualified women and minority candidates are especially encouraged to apply.
Send CV and three letters of recommendation and 2 reprints to Professor S. J. Hanson, Chair, Department of Psychology Post Doc Search, Rutgers University, Newark, NJ 07102. Email enquiries can be made to rumba at tractatus.rutgers.edu also see http://www.psych.rutgers.edu/RUMBA PSYCHOLOGY GRADUATE PROGRAM- Newark Campus GRADUATE RESEARCH FELLOWSHIPS. Fall 99 The graduate program in COGNITIVE SCIENCE and COGNITIVE NEUROSCIENCE seeks students for FALL 99. Interested applicants from Psychology, Computer Science or Cognitive Science undergrad programs are encouraged to apply. These fellowships are competitive and provide comprehensive training in computation, neuro-imaging and cognitive science/perception research. Please send enquiries and applications to Professor S. J. Hanson, Chair, Department of Psychology Rutgers University, Newark, NJ 07102. Email enquiries can be made to gradpgm at tractatus.rutgers.edu also please see our web page for more information on the graduate faculty and program http://www.psych.rutgers.edu From steve at cns.bu.edu Sat Mar 13 05:56:47 1999 From: steve at cns.bu.edu (Stephen Grossberg) Date: Sat, 13 Mar 1999 05:56:47 -0500 Subject: two views of consciousness Message-ID: CONSCIOUSNESS AND COMPLEXITY OR CONSCIOUSNESS AND RESONANCE? Stephen Grossberg and Rajeev D.S. Raizada Department of Cognitive and Neural Systems Boston University 677 Beacon Street Boston, MA 02215 Phone: 617-353-7858 or -7857 Fax: 617-353-7755 Email: steve at cns.bu.edu, rajeev at cns.bu.edu In their recent article in Science, Tononi and Edelman (1) suggest that "conscious experience is integrated ... and at the same time it is highly differentiated", that "integration [occurs] ... through reentrant interactions", and that "attention may increase ... conscious salience". They also note that "cortical regions ... for controlling action ... may not contribute significantly to conscious experience". An alternative theory unifies these several hypotheses into a single hypothesis: "All conscious states are resonant states" (2), and suggests how resonant states enable brains to learn about a changing world throughout life (3). Resonance arises when bottom-up and top-down, or "reentrant", processes reach an attentive consensus between what is expected and what is in the world. Because resonance dynamically regulates learning of sensory and cognitive representations, this theory is called adaptive resonance theory, or ART. ART implies all the properties noted by Tononi and Edelman, but also clarifies their critical link to learning, and explains why only a certain type of excitatory top-down matching can stabilize learning (4): When top-down attentional signals match bottom-up sensory input, their mutual excitation strengthens and maintains existing neural activity long enough for synaptic changes to occur. Thus, attentionally relevant stimuli are learned, while irrelevant stimuli are suppressed and hence prevented from destabilizing existing memories. Recent experiments support these predictions during vision (5), audition (6), and learning (7). Why dorsal cortical circuits that control action do not support consciousness now follows easily: Such circuits use inhibitory matching. For example, after moving your arm to an expected position, movement stops (viz., is inhibited) because "where you want to move" matches "where you are" (8). Inhibitory matches do not resonate, hence are not conscious.
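For readers who have not seen how such a match might be computed, the sketch below gives a generic ART-1-style match/reset loop. It is a simplified illustration, not the laminar circuit model cited in the references below; the choice rule, learning rule, and vigilance value are all assumptions made for the example.

    import numpy as np

    # Minimal ART-1-style categorization of binary vectors.
    def art1_step(x, prototypes, vigilance=0.75):
        """Present binary vector x; return (category index, prototypes)."""
        # Try categories in order of bottom-up choice strength.
        order = sorted(range(len(prototypes)),
                       key=lambda j: -(prototypes[j] & x).sum()
                                     / (1 + prototypes[j].sum()))
        for j in order:
            match = prototypes[j] & x            # top-down AND bottom-up
            if match.sum() / x.sum() >= vigilance:
                prototypes[j] = match            # resonance: learn now
                return j, prototypes
            # Otherwise: reset, and try the next category.
        prototypes.append(x.copy())              # no resonance: new category
        return len(prototypes) - 1, prototypes

    protos = []
    for x in (np.array([1, 1, 0, 0]), np.array([1, 1, 1, 0]),
              np.array([0, 0, 1, 1])):
        j, protos = art1_step(x, protos)
        print(j, [p.tolist() for p in protos])

The vigilance test plays the role of the excitatory matching step described above: only a sufficient top-down/bottom-up consensus (a resonance) triggers learning, while a mismatch resets the search instead of recoding existing memories.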
A detailed model of how the laminar circuits of neocortex use resonance to control cortical development, learning, attention, and grouping of information has recently been proposed (9), and suggests new experiments to test the predicted linkages between learning, attention, and consciousness. REFERENCES 1. G. Tononi and G.M. Edelman, Science 282, 1846 (1998). 2. S. Grossberg, Psychol. Rev. 87, 1 (1980); S. Grossberg, Studies of Mind and Brain (Kluwer/Reidel, Amsterdam, 1982); S. Grossberg, The Adaptive Brain, Vol. I (Elsevier/North-Holland, Amsterdam, 1987); S. Grossberg, Amer. Scientist 83, 438 (1995); S. Grossberg, Consciousness and Cognition 8, 1 (1999). 3. C. D. Gilbert, Physiol. Rev. 78, 467 (1998); D. V. Buonomano and M. M. Merzenich, Ann. Rev. Neurosci. 21, 149 (1998). 4. G. A. Carpenter and S. Grossberg, Computer Vis., Graphics, and Image Proc. 37, 54 (1987). 5. A. M. Sillito, H. E. Jones, G. L. Gerstein, D. C. West, Nature 369, 479 (1994); J. Bullier, J. M. Hupe, A. C. James, P. Girard, J. Physiol. (Paris) 90, 217 (1996); V. A. F. Lamme, K. Zipser, H. Spekreijse, Soc. Neurosci. Abstr. 23, 603.1 (1997). 6. Y. Zhang, N. Suga, J. Yan, Nature 387, 900 (1997). 7. E. Gao and N. Suga, Proc. Natl. Acad. Sci. USA 95, 12663 (1998); E. R. Ergenzinger, M. M. Glasier, J. O. Hahm, T. P. Pons, Nature Neurosci. 1, 226 (1998); J. P. Rauschecker, Nature Neurosci. 1, 179 (1998); M. Ahissar and S. Hochstein, Proc. Natl. Acad. Sci. USA 90, 5718 (1993). 8. D. Bullock, P. Cisek, S. Grossberg, Cereb. Cortex 8, 48 (1998). 9. S. Grossberg, Spatial Vision 12, 163 (1999); S. Grossberg and J. R. Williamson, Soc. Neurosci. Abstr. 23, 227.9 (1997); R. D. S. Raizada and S. Grossberg, Soc. Neurosci. Abstr. 24, 105.10 (1998). From nicolang at yugiri.brain.riken.go.jp Mon Mar 15 01:26:40 1999 From: nicolang at yugiri.brain.riken.go.jp (Nicolangelo Iannella) Date: Mon, 15 Mar 1999 15:26:40 +0900 Subject: Call for Invited Speakers Message-ID: <36ECA81F.852A8B92@yugiri.brain.riken.go.jp> ******CALL FOR PARTICIPATION/INVITATION****** On behalf of Professor Tom Gedeon, chairman of ICONIP'99, we are looking for INVITED SPEAKERS to participate in a (or a series of) special session(s) discussing "Neural Information Coding" for this year's International Conference on Neural Information Processing, ICONIP'99, to be held in Perth, Australia, November 16-20. ************** NOTE *************** TRAVEL REIMBURSEMENTS CANNOT BE PROVIDED ************** NOTE *************** Neural information coding, or the neural code, is still today a subject full of mystery and controversy. To date, there are several mainstream contender neural codes (such as temporal and population coding), each with its own supporting experimental evidence. This has inevitably led to some confusion and differences of opinion as to how information is encoded and processed within a "biological brain". Even though some progress has been made in solving the neural code, it is still not a widely discussed issue, and it definitely needs further debate and wider recognition, especially in the neural network community. A further benefit, especially for those living in the upper Northern hemisphere (Europe, US, and Canada), is that since the conference is held near the end of spring, it will be a great opportunity to escape the cold winter and do some sightseeing in Perth or around Australia, where one can witness truly unique natural wilderness.
There are TWO CHOICES of presentation format: you, the invitee, can give either A) a normal paper presentation of about 20 minutes' duration, or B) a panel-discussion-style presentation (and can still have a paper in the proceedings). It would be greatly appreciated if you could confirm your attendance at ICONIP'99 ASAP; furthermore, please indicate which style of presentation you prefer. Please send all correspondence to nicolang at yugiri.riken.go.jp or if this fails use angelo at postman.riken.go.jp Yours sincerely, Nicolangelo Iannella -- Nicolangelo Iannella RIKEN, Brain Science Institute Laboratory for Neural Modelling 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan Email: angelo at postman.riken.go.jp Tel: +81 48 462 1111 ex. 7154 Fax: +81 48 467 9684 From cindy at cns.bu.edu Mon Mar 15 16:03:56 1999 From: cindy at cns.bu.edu (Cynthia Bradford) Date: Mon, 15 Mar 1999 16:03:56 -0500 Subject: Call for Registration Message-ID: <199903152103.QAA12910@retina.bu.edu> ***** CALL FOR REGISTRATION ***** and ***** COMPLETE PROGRAM ***** THIRD INTERNATIONAL CONFERENCE ON COGNITIVE AND NEURAL SYSTEMS Tutorials: May 26, 1999 Meeting: May 27-29, 1999 Boston University 677 Beacon Street Boston, Massachusetts 02215 http://cns-web.bu.edu/meetings/ Sponsored by Boston University's Center for Adaptive Systems and Department of Cognitive and Neural Systems with financial support from DARPA and ONR How Does the Brain Control Behavior? How Can Technology Emulate Biological Intelligence? The conference will include invited tutorials and lectures, and contributed lectures and posters by experts on the biology and technology of how the brain and other intelligent systems adapt to a changing world. The conference is aimed at researchers and students of computational neuroscience, connectionist cognitive science, artificial neural networks, neuromorphic engineering, and artificial intelligence. A single oral or poster session enables all presented work to be highly visible. Costs are kept at a minimum without compromising the quality of meeting handouts and social events. SEE BELOW FOR THE COMPLETE MEETING SCHEDULE (printed after the registration form). SEE THE WEB SITE FOR HOTEL AND OTHER CONFERENCE INFORMATION. ******************** REGISTRATION FORM Third International Conference on Cognitive and Neural Systems Department of Cognitive and Neural Systems Boston University 677 Beacon Street Boston, Massachusetts 02215 Tutorials: May 26, 1999 Meeting: May 27-29, 1999 FAX: (617) 353-7755 http://cns-web.bu.edu/meetings/ (Please Type or Print) Mr/Ms/Dr/Prof: _____________________________________________________ Name: ______________________________________________________________ Affiliation: _______________________________________________________ Address: ___________________________________________________________ City, State, Postal Code: __________________________________________ Phone and Fax: _____________________________________________________ Email: _____________________________________________________________ The conference registration fee includes the meeting program, reception, two coffee breaks each day, and meeting proceedings. The tutorial registration fee includes tutorial notes and two coffee breaks.
CHECK ONE: ( ) $70 Conference plus Tutorial (Regular) ( ) $45 Conference plus Tutorial (Student) ( ) $45 Conference Only (Regular) ( ) $30 Conference Only (Student) ( ) $25 Tutorial Only (Regular) ( ) $15 Tutorial Only (Student) METHOD OF PAYMENT (please fax or mail): [ ] Enclosed is a check made payable to "Boston University". Checks must be made payable in US dollars and issued by a US correspondent bank. Each registrant is responsible for any and all bank charges. [ ] I wish to pay my fees by credit card (MasterCard, Visa, or Discover Card only). Name as it appears on the card: _____________________________________ Type of card: _______________________________________________________ Account number: _____________________________________________________ Expiration date: ____________________________________________________ Signature: __________________________________________________________ ******************** MEETING SCHEDULE Wednesday, May 26, 1999 (Tutorials) 7:45am---8:30am MEETING REGISTRATION 8:30am--10:00am Stephen Grossberg: "Development, learning, attention, and grouping by the laminar circuits of visual cortex" 10:00am--10:30am COFFEE BREAK 10:30am--12:00pm Daniel Schacter: "True memories, false memories: A cognitive neuroscience perspective" 12:00pm---1:30pm LUNCH 1:30pm---3:00pm Gail Carpenter: "Adaptive resonance theory and practice" 3:00pm---3:30pm COFFEE BREAK 3:30pm---5:00pm Tomaso Poggio: "Supervised learning: Regularization and support vector machines" Thursday, May 27, 1999 (Invited Talks, Contributed Talks, and Posters) Session Chairs: Stephen Grossberg (AM) and Daniel Bullock (PM) 7:15am---8:00am MEETING REGISTRATION 7:55am---8:00am Stephen Grossberg: "Welcome and Introduction" 8:00am---8:45am Joseph LeDoux: "Learning about danger: Systems and synapses" 8:45am---9:30am Joaquin Fuster: "The frontal lobe in temporal aspects of cognition" 9:30am--10:15am John Lisman: "The role of theta-gamma oscillations in memory processes" 10:15am--10:45am COFFEE BREAK AND POSTER SESSION I 10:45am--11:30am Michael Hasselmo: "Neuromodulation and cortical memory function: Physiology and computational modeling" 11:30am--12:15pm Dario Floreano: "Evolutionary cybernetics: Exploring the foundations of adaptive intelligence in biomimetic robots" 12:15pm---1:00pm Paolo Gaudiano: "Visually guided navigation with autonomous mobile robots" 1:00pm---2:15pm LUNCH 2:15pm---3:15pm PLENARY TALK: Rodney Brooks: "Learning through social interaction: Robot implementations" 3:15pm---3:30pm Hans Colonius and Petra Arndt: "Visual-auditory interaction in saccadic eye movements" 3:30pm---3:45pm John A. Bullinaria, Patricia M. Riddell, and Simon K. Rushton: "Modelling development and adaptation of oculomotor control" 3:45pm---4:00pm Antonio Guerrero, Juan Lopez, and Jorge Feliu: "Sensory-motor control architecture based on biological models for a stereohead" 4:00pm---4:15pm Magnus Snorrason and Jeff Norris: "Vision based path planning for Martian terrain" 4:15pm---4:30pm Philipp Althaus and Paul F.M.J. Verschure: "Distributed adaptive control 5: Bayesian theory of decision making, implemented on simulated and real robots" 4:30pm---4:45pm Mark A. Kon and Leszek Plaskota: "Information complexity of neural networks" 4:45pm---5:00pm C.H. 
Chen and Baoming Hong: "A high efficient face recognition technique based on multi-level feature representations and neural nets" 5:00pm---5:30pm COFFEE BREAK 5:00pm---8:00pm POSTER SESSION I (see below for details) Friday, May 28, 1999 (Invited and Contributed Talks) Session Chairs: Gail Carpenter (AM) and Frank Guenther (PM) 7:30am---8:00am MEETING REGISTRATION 8:00am---8:45am Shihab Shamma: "Encoding of timbre in the auditory system" 8:45am---9:30am Nobuo Suga: "Adjustment and improvement of auditory signal processing by the corticofugal feedback system" 9:30am--10:15am Stephen Grossberg: "Neural models of auditory and speech perception" 10:15am--10:45am COFFEE BREAK 10:45am--11:30am Steven Greenberg: "From sound to meaning: A syllable-centric perspective on spoken language" 11:30am--12:15pm Larry Gillick: "The state of large vocabulary continuous speech recognition" 12:15pm---1:00pm Andreas Andreou: "Neuromorphic VLSI microsystems for speech and vision processing" 1:00pm---2:15pm LUNCH 2:15pm---2:30pm James R. Williamson: "A hierarchical network for learning vernier discrimination" 2:30pm---2:45pm Scott Oddo: "ARTMAP: Automated interpretation of Lyme IgG Western Blots" 2:45pm---3:00pm Artur Dubrawski and Dorota Daniecka: "Attribute selection for neural training of a breast cancer diagnosis system" 3:00pm---3:15pm P. Niyogi, M.M. Sondhi, and C. Burges: "A computational framework for distinctive feature based speech recognition" 3:15pm---3:30pm Fatima T. Husain and Michiro Negishi: "Model of English vowel classification by Spanish speakers" 3:30pm---3:45pm Nancy Chang: "Learning form-meaning mappings for language understanding" 3:45pm---4:00pm L.M. Romanski and P.S. Goldman-Rakic: "An acoustically responsive domain in the prefrontal cortex of the awake behaving Macaque monkey" 4:00pm---4:30pm COFFEE BREAK 4:30pm---4:45pm R.M. Borisyuk, M.J. Denham, and F.C. Hoppensteadt: "An oscillatory model of novelty detection in the hippocampus" 4:45pm---5:00pm M.J. Denham and R.M. Borisyuk: "An oscillatory model of the septal-hippocampal inhibitory circuit and the modulation of hippocampal theta activity" 5:00pm---5:15pm Simona Doboli, Ali A. Minai, and Phillip J. Best: "Context-dependent place representations in the hippocampus" 5:15pm---5:30pm Jeffrey Krichmar, Theoden Netoff, and James Olds: "Place cells emerge in a network of simulated CA3 pyramidal cells that receive robotic sensor input" 5:30pm---5:45pm Oury Monchi and Michael Petrides: "Investigating various working memory components with a computational model of basal ganglia-thalamocortical loops" 5:45pm---6:00pm Frank van der Velde and Marc de Kamps: "Locating a familiar object using feedback modulation" 6:00pm---8:00pm MEETING RECEPTION Saturday, May 29, 1999 (Invited Talks, Contributed Talks, and Posters) Session Chairs: Eric Schwartz (AM) and Ennio Mingolla (PM) 7:30am---8:00am MEETING REGISTRATION 8:00am---8:45am Charles Gilbert: "Adult cortical dynamics" 8:45am---9:30am David van Essen: "Mapping and modeling of cortical structure and function" 9:30am--10:15am Randolph Blake: "What can be perceived in the absence of visual awareness?" 
10:15am--10:45am COFFEE BREAK AND POSTER SESSION II 10:45am--11:30am Steven Zucker: "Complexity, confusion, and computational vision" 11:30am--12:15pm Ennio Mingolla: "Cortical computation for attentive visual navigation: Heading, time-to-contact, and pursuit movements" 12:15pm---1:00pm Richard Shiffrin: "A model for implicit and explicit memory" 1:00pm---2:15pm LUNCH 2:15pm---3:15pm PLENARY TALK: Shinsuke Shimojo: "Visual surface filling-in assessed by psychophysics and TMS (Transcranial Magnetic Stimulation)" 3:15pm---3:30pm S.R. Lehky, T.J. Sejnowski, and R. Desimone: "Sparseness of coding in monkey striate complex cells: Data and modeling" 3:30pm---3:45pm R.D.S. Raizada and S. Grossberg: "How do preattentive grouping and attentive modulation select object representations in the layers of visual cortex?" 3:45pm---4:00pm Nikolaus Almassy, Gerald M. Edelman, and Olaf Sporns: "Function of long-range intracortical connections in a model of the visual cortex embedded in a behaving real-world device" 4:00pm---4:15pm Thorsten Hansen, Karl O. Riedel, Luiz Pessoa, and Heiko Neumann: "Regularization and 2D brightness filling-in: Theoretical analysis and numerical simulations" 4:15pm---4:30pm Daniel A. Pollen, Andrzej W. Przybyszewski, Warren Foote, and Mark A. Rubin: "Neurons in Macaque V4 respond strongly to stimulus discontinuities" 4:30pm---4:45pm DeLiang L. Wang: "Object-based selection by a neural oscillator network" 4:45pm---5:00pm Nilendu Gautambhai Jani and Daniel S. Levine: "A neural network theory of proportional analogy-making" 5:00pm---5:30pm COFFEE BREAK 5:00pm---8:00pm POSTER SESSION II (see below for details) POSTER SESSION I: Thursday, May 27, 1999 All posters will be displayed for the full day. Cognition, Learning, Recognition (B): Brigitte Nevers and Remy Versace: "Contributions of studies about the frequency effects in the processes of activation and integration of memory traces" Gary C.-W. Shyi and Chang-Ming Lin: "Computing representations for object recognition in visual search: An eye-movement analysis" Emmet Spier: "Cognition not needed: An associative model for the outcome devaluation effect" William Power, Ray Frank, Neil Davey, and John Done: "A modular attractor model of semantic access" Sylvain Hanneton, Olivier Gapenne, Christelle Genouel, Charles Lenay, and Catherine Marque: "Dynamics of shape recognition through a minimal visuo-tactile sensory substitution interface" Cristiane Salum, Antonio Roque da Silva, and Alan Pickering: "Possible role of dopamine in learning and attention: A computational approach" C.-S.R. Li, Y.-Y. Yang, and H.-C. Chen: "Sensory and spatial components of tactile extinction and allesthesia in cortical and thalamic lesions" Robert Homer and Bogdan Sasaran: "The role of monoamine neurotransmitters in brain development and mental illness: A neural network model" G.J. Dalenoort and P.H. de Vries: "Cognitive control and binding" Stephen Grossberg and Dmitry V. Repin: "How does the brain represent numbers?" 
Julian Paul Keenan, John Ives, Qun Chen, Gottfried Schlaug, Thomas Kauffman, David Bartres-Faz, and Alvaro Pascual-Leone: "Mapping cortical networks via functional magnetic resonance imaging and transcranial magnetic stimulation: Preliminary results" Julian Paul Keenan, John Ives, Qun Chen, Gottfried Schlaug, Thomas Kauffman, David Bartres-Faz, and Alvaro Pascual-Leone: "Modulating cortical excitability using repetitive transcranial magnetic stimulation in a self-face study to examine the role of inhibition in the prefrontal cortex" Adaptive Resonance Theory (B + T): Norbert Kopco, Peter Sincak, and Rudolf Jaksa: "Methods for analysis and enhancement of neural network classification of remotely sensed images" Gail A. Carpenter and Matthew W. Giamporcaro: "A computer game testbed for modeling strategic decision making" Gail A. Carpenter, Sucharita Gopal, Scott Macomber, Byron Shock, and Curtis E. Woodcock: "ARTMAP neural network classification of land use change" Marc-Andre Cantin, Eric Granger, and Yvon Savaria: "Four implementations of the fuzzy Adaptive Resonance Theory (ART) neural network for high data throughput applications" Luis Marti, Luciano Garcia, and Miguel Catasus: "Continuous-valued function approximation by an ART-based neural network" Mark A. Rubin and Aijaz Baloch: "Demonstration of an ARTEX implementation for recognition of visual textures" Quanhong Wang: "Tests of two theoretical explanations for the perceptual interference effect: Adaptive Resonance Theory versus competitive activation models" Neural and Hybrid Systems (B + T): Hiroki Aoki and Toshimichi Saito: "A SOM with virtual connection and its application to guess of membership functions" Hiroyuki Torikai and Toshimichi Saito: "Basic functions from an integrate-and-fire circuit with plural inputs" Brian M. O'Rourke: "Tactics for time series modeling with neural networks and fuzzy clustering" Mark Plutowski: "Emotional processing: A framework for handling multiple motivations in autonomous software agents" David V. Reynolds: "Computer simulation of large-scale neural systems of pain and aggression based on fuzzy logic" Rajat K. De: "Artificial consciousness: Integration of knowledge-based and case-based approach in a neuro-fuzzy paradigm" G.E. Campbell, W.L. Buff, and D.W. Dorsey: "Decision making in a tactical setting: Crisp or fuzzy reasoning?" Raj P. Malhotra and Yan M. Yufik: "Virtual associative networks for complexity reduction in information fusion" Audition, Speech, and Language (B + T): M.G. Srikanthan and R.J. Glover: "Wavelet neural network based echolocation" Barbara Shinn-Cunningham, Norbert Kopco, and Scott Santarelli: "Computation of acoustic source position in near-field listening" Lewis Meier: "Application of computerized auditory scene analysis to underwater acoustic signals" Ivelin Stoianov: "Recurrent autoassociative networks and sequential processing" Susan L. Denham: "Synaptic depression may explain many of the temporal response properties observed in primary auditory cortex: A computational investigation" Katja Wiemer-Hastings, Arthur C. 
Graesser, and Peter Wiemer-Hastings: "Exploring effective linguistic context with feedforward neural networks" VLSI: Sorin Draghici and Thierry de Pauw: "On the computational power of limited precision weights neural networks in classification problems: How to calculate the weight range so that a solution will exist" Catherine Breslin: "Neuromorphic design by physical equivalence: Simple animal and neuron models" Luca Marchese: "Neuromorphic VLSI servers" Todd Hinck, Howard Cohen, Gert Cauwenberghs, Allyn Hubbard, and Andreas Andreou: "Neuromorphic VLSI systems for boundary contour integration: An interactive demonstration" Gu Lin and Bingxue Shi: "A programmable and expandable Hamming network integrated circuit" Neural System Models (B + T): Shinji Karasawa: "Impulse recurrent loops for short-term memory which merges with experience and long-term memory" F.E. Lauria, R. Prevete, M. Milo, and S. Visco: "The Java package it.na.cy.nnet" Dorian Aur and Teodora Ghioca: "Neural network formation for cooperative bifurcation neurons" Lumei Hui: "Comparison between the two-dot method and the transparency method for the autostereogram perception" Lydia N. Derkach: "Cognitive neuropsychology: A synthesis of western and eastern research" J. Marro and J.J. Torres: "Neural networks with coherent fluctuations of synapses" Nils Hulth: "Feature vector representations and individual scaling of prototype vectors" POSTER SESSION II: Saturday, May 29, 1999 All posters will be displayed for the full day. Vision (B): Drazen Domijan: "Boundary computation, presynaptic inhibition, and lightness perception" Harald Ruda and Magnus Snorrason: "Modeling time to detection for observers searching for targets in cluttered backgrounds" Li-Yun Fu: "A neuron filtering model for space- and time-varying signal processing" Sachin Ahuja and Bart Farell: "Points, lines, and surfaces" J.M. Harris and S.K. Rushton: "An eccentric hemisphere explanation of visual search for motion in depth?" Vinoth Jagaroo: "A neuropsychological perspective of spatial reference frames: Implications for the modeling of high-level vision" Jens Mansson: "Contour enhancement by local iso-orientation-cooperation and texture suppression" Thorsten Hansen and Heiko Neumann: "Contrast processing and contour enhancement: A model of recurrent long-range interactions in V1" Wolfgang Sepp and Heiko Neumann: "A hierarchical filling-in model for real-time brightness reconstruction" Lynette Linden: "Understanding image colors in phase space" Raymond K. Chafin and Cihan H. Dagli: "Biologically inspired connectionist models for image feature extraction in machine vision systems" Mark Wexler, Francesco Panerai, and Jacques Droulez: "Looking actively at Ames's window" Lavanya Viswanathan, Stephen Grossberg, and Ennio Mingolla: "Neural dynamics of motion grouping across apertures" Sensory-Motor Control (B): Brad Rhodes and Daniel Bullock: "A neural model for sequence learning and production" Sally Bogacz and Willard Larkin: "Motor control in fast musical passages" Thomas J. Anastasio, Paul E. Patton, and Kamel Belkacem-Boussaid: "Modeling multisensory enhancement in the superior colliculus using Bayes' rule" J.E. Vos and J.J. van Heijst: "A model of sensorimotor development using a neural network" Greg T. Gdowski and Robert A. McCrea: "Sensory signals carried by vestibulo-spinal and other non-eye-movement related vestibular neurons during voluntary head movements" Greg T. Gdowski and Robert A. 
McCrea: "Sensory signals carried by the vestibular nuclei during reflexive head movements evoked by whole body rotation" Jan G. Smits: "Dependence of time constant for stroke recovery of complexity of tasks" M. Chen, C.-S.R. Li, Y.-Y. Yang, C.-Y. Liu, H.-L. Chang, C. Shen, Y.-M. Chuang, and L.-Y. Kao: "Perceptual alternation in obsessive compulsive disorder: Implications for the functions of the frontostriatal circuitry" Sensory-Motor Control (T) and Robotics (T): John R. Alexander Jr.: "Timing problems of neural control circuits" F. Panerai, G. Metta, and G. Sandini: "An artificial vestibular system for reflex-control of robot eye movements" Michail G. Lagoudakis and Anthony S. Maida: "A polar neural map for mobile robot navigation" G. Baratoff, C. Toepfer, and H. Neumann: "Combining space-variant maps for flow-based obstacle detection and body-scaled free-space navigation" Angelo Arleo and Wulfram Gerstner: "Spatial models and autonomous navigation in neuro-mimetic systems" Giorgio Metta, Giulio Sandini, Riccardo Manzotti, and Francesco Panerai: "Learning eye-head-hand coordination: A developmental approach" S. Srinivasan and A. Bradley: "Sequential task execution in a prosthetic limb using an artificial neural network" Neural and Hybrid Systems (B + T): Raymond Pavlovski and Majid Karimi: "Control of basins of attraction in a self-trapping neural network with near-neighbor synapses" Anatoli Gorchetchnikov: "The level of suppression in feedback connections required for learning depends primarily on intracellular parameters" David Vogel: "A partial model of cortical memory" Chun-Kam Horng and Chin-Ming Hong: "Learning efficiency improvement of CMAC neural network by Gaussian basis function" Ana Madevska and Dragan Nikolic: "Automatic classification with support vector machines in molecular biology" Alex Heneveld: "A plausible neural network architecture: Temporal Hebbian inhibit-undesireds" M. Mar Abad Grau and Luis Daniel Hernandez Molinero: "Context-specific neural network feature selector with missing data" Per Jesper Sjostrom and Lars Ulrik Wahlberg: "Automated cell recognition and counting based on a combination of artificial neural networks and standard image analysis methods" Rafal Bogacz and Marcin Chady: "Local connections in a neural network improve pattern completion" Andrea Corradini: "Automatic posture recognition in color images using hybrid neural networks" Neural System Models (B + T): Wei Cao, SongNian Yu, and William Gregory: "New approach for measuring complexity of linear-inseparable multidimensional data patterns" Zhe Chen: "The application of wavelet neural network for time series prediction and system modeling based on multiresolution learning" Maria Alvarez Florendo and Anthony Roland Florendo: "Solutions to the binary addition, parity and symmetry problems using feedforward networks with inhibitory lateral connections" J.R.C. Piqueira, F.M. Formagin, L.H.A. Monteiro, and J.S. Del Nero: "Full connected phase locked loops as a model for synchronizing neuron sets" Steven Lehar: "The Gestalt principle of isomorphism and the perceptual representation of space" Gu Lin and Bingxue Shi: "A programmable and expandable fuzzy recognition integrated circuit" Boris Galitsky: "How the logic of mental attributes models the autism" ******************** From jbower at bbb.caltech.edu Mon Mar 15 12:44:14 1999 From: jbower at bbb.caltech.edu (James M. Bower) Date: Mon, 15 Mar 1999 09:44:14 -0800 Subject: Just like old times Message-ID: A non-text attachment was scrubbed... 
Name: not available Type: multipart/alternative Size: 1378 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/69d2e12e/attachment-0001.bin From school at cogs.nbu.acad.bg Tue Mar 16 05:12:12 1999 From: school at cogs.nbu.acad.bg (CogSci Summer School) Date: Tue, 16 Mar 1999 13:12:12 +0300 Subject: No subject Message-ID: 6th International Summer School in Cognitive Science Sofia, New Bulgarian University July 12 - 31, 1999 International Advisory Board Elizabeth BATES (University of California at San Diego, USA) Amedeo CAPPELLI (CNR, Pisa, Italy) Cristiano CASTELFRANCHI (CNR, Roma, Italy) Daniel DENNETT (Tufts University, Medford, Massachusetts, USA) Ennio De RENZI (University of Modena, Italy) Charles DE WEERT (University of Nijmegen, Holland) Christian FREKSA (Hamburg University, Germany) Dedre GENTNER (Northwestern University, Evanston, Illinois, USA) Christopher HABEL (Hamburg University, Germany) William HIRST (New School for Social Sciences, NY, USA) Joachim HOHNSBEIN (Dortmund University, Germany) Douglas HOFSTADTER (Indiana University, Bloomington, Indiana, USA) Keith HOLYOAK (University of California at Los Angeles, USA) Mark KEANE (Trinity College, Dublin, Ireland) Alan LESGOLD (University of Pittsburgh, Pennsylvania, USA) Willem LEVELT (Max-Planck Institute of Psycholinguistics, Nijmegen, Holland) David RUMELHART (Stanford University, California, USA) Richard SHIFFRIN (Indiana University, Bloomington, Indiana, USA) Paul SMOLENSKY (University of Colorado, Boulder, USA) Chris THORNTON (University of Sussex, Brighton, England) Carlo UMILTA' (University of Padova, Italy) Eran ZAIDEL (University of California at Los Angeles, USA) Courses Each participant will enroll in 6 of the 10 courses offered, thus attending 4 hours of classes per day plus 2 hours of tutorials in small groups, plus individual studies and participation in symposia. Brain and Language: New Approaches to Evolution and Development (Elizabeth Bates, Univ. of California at San Diego, USA) Child Language Acquisition (Michael Tomasello, MPI for Evolutionary Anthropology, Germany) Culture and Cognition (Roy D'Andrade, Univ. of California at San Diego, USA) Understanding Social Dependence and Cooperation (Cristiano Castelfranchi, CNR, Italy) Models of Human Memory (Richard Shiffrin, Indiana University, USA) Categorization and Inductive Reasoning: Psychological and Computational Approaches (Evan Heit, Univ. of Warwick, UK) Understanding Human Thinking (Boicho Kokinov, New Bulgarian University) Perception-Based Spatial Reasoning (Reinhard Moratz, Hamburg University, Germany) Perception (Naum Yakimoff, New Bulgarian University) Applying Cognitive Science to Instruction (John Hayes, Carnegie-Mellon University, USA) In addition, there will be seminars, working groups, project work, discussions. Participation Participants will be selected by a Selection Committee on the basis of their submitted documents: * application form, * CV, * statement of purpose, * copy of diploma; if student - academic transcript * letter of recommendation, * list of publications (if any) and short summary of up to three of them. For participants from Central and Eastern Europe as well as from the former Soviet Union there are scholarships available (provided by Soros' Open Society Institute). They cover tuition, travel, and living expenses. Deadline for application: April 15th Notification of acceptance: April 30th. Apply as soon as possible since the number of participants is restricted.
For more information contact: Summer School in Cognitive Science Central and East European Center for Cognitive Science New Bulgarian University 21, Montevideo Str. Sofia 1635, Bulgaria Tel. (+3592) 957-1876 Fax: (+3592) 558262 e-mail: school at cogs.nbu.acad.bg Web page: http://www.nbu.acad.bg/staff/cogs/events/ss99.html From qian at brahms.cpmc.columbia.edu Tue Mar 16 19:19:58 1999 From: qian at brahms.cpmc.columbia.edu (Ning Qian) Date: Tue, 16 Mar 1999 19:19:58 -0500 Subject: papers available on stereo and learning Message-ID: <199903170019.TAA20805@brahms.cpmc.columbia.edu> Dear Colleagues, The following papers can be downloaded from the web site: http://brahms.cpmc.columbia.edu/ All papers are in Unix-compressed PostScript format. A limited number of hard copies are available for those who cannot download or decode. Best regards, Ning Qian ---------------------------------------------------------------- Relationship between Phase and Energy Methods for Disparity Computation, Neural Computation (in press) Ning Qian and Sam Mikaelian The phase and energy methods for computing binocular disparity maps from stereograms are motivated differently, have different physiological relevances, and involve different computational steps. Nevertheless, we demonstrate that at the final stages where disparity values are made explicit, the simplest versions of the two methods are exactly equivalent. The equivalence also holds when the quadrature-pair construction in the energy method is replaced with a more physiologically plausible phase-averaging step. The equivalence fails, however, when the phase-difference receptive field model is replaced by the position-shift model. Additionally, intermediate results from the two methods are always quite distinctive. In particular, the energy method generates a distributed disparity representation similar to that found in the visual cortex while the phase method does not. Finally, more elaborate versions of the two methods are in general not equivalent. We also briefly compare these two methods with some other stereo models in the literature. http://brahms.cpmc.columbia.edu/publications/compare.ps.Z ---------------------------------------------------------------- On the Momentum Term in Gradient Descent Learning Algorithms, Neural Networks, 1999, 12:145-151 Ning Qian A momentum term is usually included in the simulations of connectionist learning algorithms. Although it is well known that such a term greatly improves the speed of learning, there have been few rigorous studies of its mechanisms. In this paper, I show that in the limit of continuous time, the momentum parameter is analogous to the mass of Newtonian particles that move through a viscous medium in a conservative force field. The behavior of the system near a local minimum is equivalent to a set of coupled and damped harmonic oscillators. The momentum term improves the speed of convergence by bringing some eigen components of the system closer to critical damping. Similar results can be obtained for the discrete time case used in computer simulations. In particular, I derive the bounds for convergence on learning-rate and momentum parameters, and demonstrate that the momentum term can increase the range of learning rate over which the system converges. The optimal condition for convergence is also analyzed. 
http://brahms.cpmc.columbia.edu/publications/momentum.ps.Z ---------------------------------------------------------------- Perceptual Learning on Orientation and Direction Discrimination, Vision Research (in press) Nestor Matthews, Zili Liu, Bard J. Geesaman, and Ning Qian Two experiments were conducted to determine the extent to which perceptual learning transfers between orientation and direction discrimination. Naive observers were trained to discriminate orientation differences between two single-line stimuli, and direction differences between two single-moving-dot stimuli. In the first experiment, observers practiced the orientation and direction tasks along orthogonal axes in the fronto-parallel plane. In the second experiment, a different group of observers practiced both tasks along a single axis. Perceptual learning was observed on both tasks in both experiments. Under the same-axis condition, the observers' orientation sensitivity was found to be significantly elevated after the direction training, indicating a transfer of learning from direction to orientation. There was no evidence of transfer in any other cases tested. In addition, the rate of learning on the orientation task was much higher than the rate on the direction task. The implications of these findings for the neural mechanisms subserving orientation and direction discrimination are discussed. http://brahms.cpmc.columbia.edu/publications/pl.ps.Z From ckiw at dai.ed.ac.uk Wed Mar 17 12:48:13 1999 From: ckiw at dai.ed.ac.uk (Chris Williams) Date: Wed, 17 Mar 1999 17:48:13 +0000 (GMT) Subject: PhD study at the Institute for Adaptive and Neural Computation, U. of Edinburgh, UK Message-ID: Apologies if you receive multiple copies of this message ---- The Institute for Adaptive and Neural Computation in the Division of Informatics at the University of Edinburgh welcomes applications from suitably qualified candidates regarding PhD study. The Institute for Adaptive and Neural Computation (ANC) is part of the newly formed Division of Informatics at the University of Edinburgh. The Institute fosters the study of adaptive processes in both artificial and biological systems. It encourages interdisciplinary and collaborative work involving the traditional disciplines of neuroscience, cognitive science, computer science, computational science, mathematics and statistics. Many of the information-processing tasks under study draw on a common set of principles and mathematical techniques for their solution. Combined study of the adaptive nature of artificial and biological systems facilitates the many benefits accruing from treating essentially the same problem from different perspectives. A principal theme is the study of artificial learning systems. This includes theoretical foundations (e.g. statistical theory, information theory), the development of new models and algorithms, and applications. A second principal theme is the analysis and modelling of brain processes at all levels of organization with a particular focus on theoretical developments which span levels. Within this theme, research areas are broadly defined as the study of the neural foundations of perception, cognition and action and their underlying developmental processes. A secondary theme is the construction and study of computational tools and methods which can support studies in the two principal themes, such as in the analysis of brain data, simulation of networks and parallel data mining.
The Institute for Adaptive and Neural Computation has available PhD studentships (covering the cost of fees and living expenses) as from 1 October 1999. These are supported by the Medical Research Council and by Microsoft Research Ltd. In addition, the Division of Informatics receives a number of EPSRC studentships for which students wishing to study within the Institute for Adaptive and Neural Computation may be considered. Further information about ANC may be found at http://anc.ed.ac.uk/. Informal enquiries may be made to Emma Black, emma at anc.ed.ac.uk. For application forms and further information, write to: PhD Admissions Secretary Division of Informatics University of Edinburgh James Clerk Maxwell Building King's Buildings Mayfield Road Edinburgh EH9 3JZ Scotland, UK Email: phd-admissions at inf.ed.ac.uk Fax: +44 131 667 7209 Telephone: +44 131 650 5156 Information on study for Research Degrees in the Division of Informatics can be found at http://www.dai.ed.ac.uk/daidb/people/homes/rbf/IGS/IGSextr.htm From qian at brahms.cpmc.columbia.edu Wed Mar 17 11:21:12 1999 From: qian at brahms.cpmc.columbia.edu (Ning Qian) Date: Wed, 17 Mar 1999 11:21:12 -0500 Subject: papers available on stereo and learning In-Reply-To: <199903171107.LAA15362@axon.gatsby.ucl.ac.uk> (message from Geoffrey Hinton on Wed, 17 Mar 1999 11:07:38 +0000) References: <199903171107.LAA15362@axon.gatsby.ucl.ac.uk> Message-ID: <199903171621.LAA22026@brahms.cpmc.columbia.edu> Date: Wed, 17 Mar 1999 11:07:38 +0000 From: Geoffrey Hinton > I show that in the limit of continuous time, the momentum parameter > is analogous to the mass of Newtonian particles that move through a > viscous medium in a conservative force field. At the risk of sounding like Jim Bower, I would like to point out that the mechanical model was the original motivation for the momentum method. Geoff Hinton The original motivation was fully discussed and acknowledged (by citing Rumelhart, Hinton and Williams's chapter in the PDP book) in the Introduction. The paper went far beyond that by first showing p = m / (m + mu) (where p is the momentum parameter, m is the mass, mu is the friction coefficient), and then providing a stability and convergence analysis for both the continuous and discrete cases. Best regards, Ning From hinton at gatsby.ucl.ac.uk Wed Mar 17 06:07:38 1999 From: hinton at gatsby.ucl.ac.uk (Geoffrey Hinton) Date: Wed, 17 Mar 1999 11:07:38 +0000 Subject: papers available on stereo and learning In-Reply-To: Your message of Tue, 16 Mar 1999 19:19:58 -0500. Message-ID: <199903171107.LAA15362@axon.gatsby.ucl.ac.uk> > I show that in the limit of continuous time, the momentum parameter > is analogous to the mass of Newtonian particles that move through a > viscous medium in a conservative force field. At the risk of sounding like Jim Bower, I would like to point out that the mechanical model was the original motivation for the momentum method. Geoff Hinton From carlos at sonnabend.ifisiol.unam.mx Thu Mar 18 13:26:57 1999 From: carlos at sonnabend.ifisiol.unam.mx (Carlos Brody) Date: Thu, 18 Mar 1999 12:26:57 -0600 Subject: Jim Bower's posting Message-ID: <199903181826.MAA02693@sonnabend.ifisiol.unam.mx> I will be happy to sound like Ning Qian responding to Geoff Hinton sounding like Jim Bower, and I will say that my recent papers on cross-correlation go far beyond simply pointing out the possibility of an interpretation problem. (That possibility was pointed out by Aertsen, Gerstein, Habib and Palm in their 1989 paper introducing the JPSTH, J.
Neurophysiol. 61:900-917. I was not until now aware that the Wilson and Bower papers also made a similar point -- thanks to Jim Bower for pointing it out.)

Of the three papers I recently announced:

The "Correlations without synchrony" paper studies what kinds of xcorrelogram shapes are generated by slow interactions (ACROSS trials) as compared to fast interactions (WITHIN trials). The specific point is to know *what to watch out for*. The most important rule of thumb for being alert to interpretation problems confusing slow and fast interactions is: if the PSTHs have peak widths of the same order of magnitude as the xcorr peak width, be careful with your interpretations! [Related to this, it is also important to know when you DON'T have to worry. Examples of perfectly o.k. interpretations (as far as I can tell) are Alonso and Reid's 1995 work on connections from LGN to simple cells in area 17 of the cat, Nature 378:281--284; or Ts'o, Gilbert, and Wiesel's 1986 work on connections within area 17, J. Neurosci. 6:1160--1170. In both of these, the PSTHs were MUCH broader than the peaks in the xcorrelograms.]

The "Disambiguating different covariation types" paper proposes a couple of quantitative methods for disambiguating interpretations when it is not clear how much of the xcorrelogram came from slow or fast interactions.

Finally, the "Slow covariations in neuronal resting potentials can lead to artefactually fast cross-correlations in their spike trains" paper shows how awareness of these issues is still very far from seeping into our collective consciousness: the paper goes through an example suggesting that a well-known paper, published in a well-known journal (Nature), suffered greatly from such interpretation problems. But that paper is not the only paper with signs of trouble: it just happened to be one where I was intimately familiar with the data and thus could go through it in detail and be confident of my conclusions.

In sum, it is important not only to be aware that there CAN be trouble; but also to know WHEN there can be trouble. And, concomitantly, when there won't be. The three papers I announced try to go in this direction. Cross-correlation is a most useful tool, and should not be thrown out with the bathwater. Carlos. carlos at sonnabend.ifisiol.unam.mx http://www.cns.caltech.edu/~carlos

------- Date: Mon, 15 Mar 1999 09:44:14 -0800 From: "James M. Bower" Errors-to: owner-connectionists at nntp-server.caltech.edu

With respect to the recent posting by Carlos Brody, I would point to two papers published almost ten years ago by Matt Wilson and myself:

Wilson, M.A. and Bower, J.M. 1990 Computer simulation of oscillatory behavior in cerebral cortical networks. In: Advances in Neural information processing systems. Vol. 2, D. Touretzky, editor. Morgan Kaufmann, San Mateo, CA., pp. 84-91.

Wilson, M.A. and Bower, J.M. 1991 A computer simulation of oscillatory behavior in primary visual cerebral cortex. Neural Computation 3: 498-509.

Quoting from the first: "Interpreting phase coherence from correlation functions produced from the average of many simulation trials pointed out the need to distinguish average phase effects from instantaneous phase effects. Instantaneous phase implies that the statistics of the correlation function taken at any trial are consistent with the statistics of the combined data. Average phase allows for systematic within-trial and between-trial variability and is, therefore, a weaker assertion of actual coherence. This distinction is particularly important for theories which rely on phase encoding of stimulus information. Analysis of our model results indicates that the observed phase relationships are an average, rather than an instantaneous effect."

Remarkable how slowly things change, or are accepted (eh Steve??). Jim Bower
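[ Moderator's note: for readers new to this debate, the sketch below illustrates the objects under discussion -- a raw trial-averaged cross-correlogram and the standard "shift predictor" control built from mismatched trials. It is a generic textbook construction, not the method of any of the papers cited above; all parameter values are illustrative, and Brody's papers discuss cases where even corrections of this kind require care. ]

    # Two cells share slow, trial-to-trial excitability but have no fast
    # (within-trial) coupling. The raw correlogram nevertheless shows
    # correlation; the shift predictor (cross-correlating spike trains
    # from DIFFERENT trials) shows the same structure, revealing that it
    # comes entirely from across-trial covariation.
    import numpy as np
    rng = np.random.default_rng(0)

    n_trials, T, max_lag = 200, 500, 50
    gain = 1.0 + 0.5 * rng.standard_normal((n_trials, 1))   # slow covariation
    rate = np.clip(0.05 * gain, 0.0, 1.0)                   # per-bin spike prob.
    cell1 = (rng.random((n_trials, T)) < rate).astype(float)
    cell2 = (rng.random((n_trials, T)) < rate).astype(float)

    def correlogram(a, b):
        # trial-averaged cross-correlogram (circular lags, for brevity)
        return np.array([np.mean([np.dot(a[i], np.roll(b[i], k))
                                  for i in range(len(a))])
                         for k in range(-max_lag, max_lag + 1)])

    raw = correlogram(cell1, cell2)                            # same-trial pairs
    predictor = correlogram(cell1, np.roll(cell2, 1, axis=0))  # mismatched trials
    # raw - predictor is near zero at all lags here: no within-trial coupling.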
From jon at syseng.anu.edu.au Thu Mar 18 18:54:17 1999 From: jon at syseng.anu.edu.au (Jonathan Baxter) Date: Fri, 19 Mar 1999 10:54:17 +1100 Subject: papers available on stereo and learning References: <199903171107.LAA15362@axon.gatsby.ucl.ac.uk> <199903171621.LAA22026@brahms.cpmc.columbia.edu> Message-ID: <36F19229.FD164E8B@syseng.anu.edu.au>

Ning Qian wrote: > Date: Wed, 17 Mar 1999 11:07:38 +0000 > From: Geoffrey Hinton > > > I show that in the limit of continuous time, the momentum parameter > > is analogous to the mass of Newtonian particles that move through a > > viscous medium in a conservative force field. > > At the risk of sounding like Jim Bower, I would like to point out that > this mechanical model was the original motivation for the momentum method. > > Geoff Hinton > > The original motivation was fully discussed and acknowledged (by > citing Rumelhart, Hinton and Williams's chapter in the PDP book) in > the Introduction. The paper went far beyond that by first showing > p = m / (m + mu) > (where p is the momentum parameter, m is the mass, and mu is the friction > coefficient), and then providing a stability and convergence analysis > for both the continuous and discrete cases. > > Best regards, > Ning

The momentum term in steepest descent methods was introduced and analysed by B. Poljak in 1964: B.T. Poljak, 1964. "Some Methods of Speeding up the Convergence of Iteration Methods". Z. Vycisl. Mat. i Mat. Fiz, Vol. 4, pp 1-17. He called it the "Heavy Ball" method. I don't have the original paper, but a good secondary source is "Neurodynamic Programming" by Bertsekas and Tsitsiklis, Athena Scientific, 1996, pp 104--105. The convergence results are in there. Cheers, Jonathan Baxter
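[ Moderator's note: the relation p = m / (m + mu) quoted in this thread can be seen by discretizing the heavy-ball equation m x'' = -grad E(x) - mu x' with a unit time step, which yields the familiar momentum update below. The sketch is a generic illustration with made-up constants, not code from any of the papers cited. ]

    # Heavy-ball / momentum gradient descent on E(x) = x**2 / 2.
    # With a unit time step, m*x'' = -E'(x) - mu*x' discretizes to
    #   v <- p*v - eta*E'(x),   x <- x + v,
    # with momentum parameter p = m/(m + mu) and step size eta = 1/(m + mu).
    def grad(x):            # E'(x) for the quadratic bowl E(x) = x**2 / 2
        return x

    m, mu = 9.0, 1.0        # mass and friction coefficient (illustrative)
    p = m / (m + mu)        # momentum parameter, 0.9 here
    eta = 1.0 / (m + mu)    # effective learning rate, 0.1 here

    x, v = 5.0, 0.0         # initial position and velocity
    for _ in range(200):
        v = p * v - eta * grad(x)   # velocity accumulates past gradients
        x = x + v
    print(x)                # x has relaxed toward the minimum at 0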
From d.mareschal at bbk.ac.uk Fri Mar 19 06:52:13 1999 From: d.mareschal at bbk.ac.uk (Denis Mareschal) Date: Fri, 19 Mar 1999 12:52:13 +0100 Subject: PhD scholarships in connectionist/neural network/psychological models Message-ID:

The Birkbeck College (University of London) psychology department has a number of full-time PhD scholarships for students commencing their degrees in October 1999. These are open to students wishing to undertake projects comprising computational methods (connectionist/neural network and mathematical models) combined with experimental psychology. The department has a strong commitment to cognitive neuroscience and computational modelling. Strong candidates are invited to apply (see FULL ANNOUNCEMENT below) for a variety of cognitive neuroscience projects supervised by Denis Mareschal and/or by Marius Usher, as described:

Denis Mareschal (http://www.psyc.bbk.ac.uk/staff/dm.html) connectionist modelling of perceptual and cognitive development in childhood and infancy

Marius Usher (http://www.ukc.ac.uk/psychology/people/usherm/) behavioral and computational studies of choice reaction time, short-term and working memory, cognitive performance, and information processing in the frontal lobes.

The Birkbeck College psychology department includes the recently founded Centre for Brain and Cognitive Development (headed by Professor Mark Johnson) and is well complemented by a new cognitive science laboratory consisting of a suite of UNIX-based workstations dedicated to computational modelling research. Birkbeck College is situated in the Bloomsbury section of London and students would benefit from its close proximity to the Gatsby Computational Neuroscience Unit and the Institute of Cognitive Neuroscience.

FULL ANNOUNCEMENT Birkbeck College Department of Psychology PhD Studentships

The Birkbeck Psychology Department, rated 5 in the last Research Assessment Exercise, is offering two departmentally-funded research studentships, available from October 1999. There are opportunities to work in a number of research groups: Brain and Cognitive Development Perception, Cognition and Action Child Family and Social Studies For further details see http://www.psyc.bbk.ac.uk/

Students with or expecting a first class or upper second class degree in psychology should send their full CV, the names of two referees, and a two-page statement of their research interests to the postgraduate admissions tutor: Dr Paul Barber (PG Application) Department of Psychology Birkbeck College University of London Malet Street London WC1E 7HX phone: +44 171-631-6207 fax: + 44 171-631-6312

================================================= Dr. Denis Mareschal Centre for Brain and Cognitive Development Department of Psychology Birkbeck College University of London Malet St., London WC1E 7HX, UK tel +44 171 631-6582/6207 fax +44 171 631-6312 =================================================

From cladv at pikas.inf.tu-dresden.de Sun Mar 21 05:29:43 1999 From: cladv at pikas.inf.tu-dresden.de (CL Advertisement) Date: Sun, 21 Mar 1999 11:29:43 +0100 (MET) Subject: Postdoctorate Grant at Dresden University of Technology Message-ID: <199903211029.LAA08326@pikas.inf.tu-dresden.de>

The Department of Computer Science at the Dresden University of Technology offers a postdoctorate grant in the framework of its research programme "Specification of discrete processes and systems of processes by operational models and logics" with a duration -- for the time being -- up to the end of the year 1999. In this research programme we consider formalisms in the area of Petri nets and concurrent automata, term- and graph-rewriting systems, knowledge representation and cognitive robotics, model theory for process systems and the equivalences of these formalisms. Applicants who have finished their Ph.D. with very good marks can send their curriculum vitae, photo, list of publications, and two references from professors to: Dresden University of Technology Department of Computer Science Prof. Dr.-Ing.habil. Heiko Vogler Mommsenstr. 13 D-01062 Dresden Germany

From henkel at physik.uni-bremen.de Tue Mar 23 08:42:41 1999 From: henkel at physik.uni-bremen.de (Rolf Henkel) Date: Tue, 23 Mar 1999 14:42:41 +0100 Subject: Stereovision - new paper, new web-pages Message-ID: <36F79A51.DE43823F@physik.uni-bremen.de>

Dear connectionists, there is a new paper available at http://axon.physik.uni-bremen.de/research/papers/ "Locking onto 3d-Structure by a Combined Vergence- and Fusionsystem" Abstract: An interacting fusion- and vergence-system is presented which utilizes two properties of coherence-based stereo: sub-pixel-precision and a stable validation signal for disparity estimates.
This allows the system to sample the three-dimensional scene with several precisely chosen fixation points and automatically accumulate the data into a full disparity map. In addition, the system creates a fused cyclopean view of the scene, co-registered with the final disparity map.

A corresponding web-page discussing this model and showing some sample vergence-movies can be found at http://axon.physik.uni-bremen.de/research/stereo/vergence/ Newly updated web-pages discussing additional models for the fusion of stereo data into a single cyclopean view, as well as the perception of transparency & binocular rivalry, can be found by following the links at http://axon.physik.uni-bremen.de/research/stereo/ Comments are very welcome. Best regards, Rolf Henkel -- Institute of Theoretical Neurophysics, University of Bremen, Germany - Email: henkel at physik.uni-bremen.de - URL: http://axon.physik.uni-bremen.de/

From philh at cogs.susx.ac.uk Wed Mar 24 12:45:32 1999 From: philh at cogs.susx.ac.uk (Phil Husbands) Date: Wed, 24 Mar 1999 17:45:32 +0000 Subject: Lectureship in Adaptive Systems Message-ID: <36F924BC.1F00D15D@cogs.susx.ac.uk>

University of Sussex School of Cognitive and Computing Sciences Lectureship in Computer Science and Artificial Intelligence Grade A or B (Ref 076)

Applicants should have an interest in the area of evolutionary and adaptive systems, and be able to show evidence of significant research achievement in any aspect of adaptive robotics, artificial life, evolutionary computing or related fields. Applicants with a commitment to interdisciplinary research at the interface between AI and the biological sciences are particularly encouraged. Further particulars can be found at: http://www.cogs.susx.ac.uk/users/philh/aijob.html

Informal inquiries can be made to Dr P Husbands, tel (+44 (0)1273) 678556, email philh at cogs.susx.ac.uk, or Professor H Buxton, tel (+44 (0)1273) 678569, email hilaryb at cogs.susx.ac.uk. Details of the School are available at http://www.cogs.susx.ac.uk, or from the School Office. Salary scales: Lecturer Grade A: 16,655 pounds to 21,815 pounds per annum. Lecturer Grade B: 22,726 pounds to 29,048 pounds per annum.

Application packs for the above posts are available from and should be returned to Staffing Services, University of Sussex, Falmer, Brighton, East Sussex, BN1 9RH, tel (+44 (0)1273) 877324, and details are also available via http://www.sussex.ac.uk/Units/staffing/personnl/vacs/ Requests for application packs may also be sent via email to S.Jenks at sussex.ac.uk. Links to school web sites and further information about the University may be seen at http://www.central.sussex.ac.uk WHEN REQUESTING DETAILS, PLEASE QUOTE THE RELEVANT REFERENCE NUMBER. CLOSING DATE FOR APPLICATIONS: Friday 16 April 1999.

From cesmeli at cis.ohio-state.edu Wed Mar 24 11:32:17 1999 From: cesmeli at cis.ohio-state.edu (erdogan cesmeli) Date: Wed, 24 Mar 1999 11:32:17 -0500 (EST) Subject: Student Travel Grants for Attending IJCNN'99 Message-ID:

In case you are not yet aware, the IEEE Neural Network Council has a Student Travel Grant Program to assist students presenting papers at IEEE NNC-sponsored conferences, including this year's IJCNN to be held in Washington, DC, July 10-16, 1999. The deadline for application is April 15, 1999.
For more information, check the conference web page: http://www.cas.american.edu/~medsker/ijcnn99/ijcnn99.html Thanks for your attention, Erdogan Cesmeli The Ohio State University

From ericr at ee.usyd.edu.au Wed Mar 24 23:15:32 1999 From: ericr at ee.usyd.edu.au (Eric Ronco) Date: Thu, 25 Mar 1999 15:15:32 +1100 Subject: On-line Nonlinear Model based Predictive Control Simulator Message-ID: <36F9B864.ACA3B419@ee.usyd.edu.au>

Dear all, an on-line nonlinear predictive control simulation package is now available at http://merlot.ee.usyd.edu.au/OLIFO This simulation package is intended to test the performance of a practical nonlinear Model based Predictive Controller, namely the Open-Loop Intermittent Feedback Optimal (OLIFO) controller, when applied to various complicated non-linear systems. Default settings are provided for each system. However, you are encouraged to explore the behaviour of this controller by changing some of its few parameters. This should demonstrate the power of the approach and provide a benchmark for comparing the OLIFO controller's performance with that of other non-linear controllers. -- Dr Eric Ronco, room 316 Tel: +61 2 9351 7680 School of Electrical Engineering Fax: +61 2 9351 5132 Bldg J13, Sydney University Email: ericr at ee.usyd.edu.au NSW 2006, Australia http://www.ee.usyd.edu.au/~ericr

From sue at soc.plym.ac.uk Fri Mar 26 05:41:13 1999 From: sue at soc.plym.ac.uk (Sue Denham) Date: Fri, 26 Mar 1999 10:41:13 +0000 Subject: Academic Positions in Computational Neuroscience Message-ID: <1.5.4.32.19990326104113.006e91cc@soc.plym.ac.uk>

The University of Plymouth is considering the possibility of making some senior academic appointments, i.e. Professor, Reader or Senior Lecturer, in the area of Computational Neuroscience, with the aim of further strengthening this research area. The possible appointments are also linked to a proposal to create a multidisciplinary Institute of Cognitive Neuroscience, involving links between experimental neuroscientists at the Plymouth Marine Laboratory, neurologists at the Postgraduate Medical School, and cognitive scientists and neuropsychologists (including two new Professorship appointments) in the Faculty of Human Sciences.

We would like to hear, informally at present, from anyone who might have an interest in such an appointment, without obligation to either party. If you were able to email a curriculum vitae, setting out in particular your research achievements in this area, we would be very grateful. We would also be happy to respond to any enquiries. All expressions of interest and enquiries will be treated as confidential. Please respond by email to: Professor Mike Denham

Michael J Denham Siebe Professor of Neural and Adaptive Systems Centre for Neural and Adaptive Systems School of Computing University of Plymouth Plymouth PL4 8AA England tel: +44 1752 232541 fax: +44 1752 232540 e-mail: mike at soc.plym.ac.uk; mdenham at plym.ac.uk http://www.tech.plym.ac.uk/soc/research/neural/index.html

Dr Sue Denham Centre for Neural and Adaptive Systems School of Computing University of Plymouth Plymouth PL4 8AA England tel: +44 1752 232610 fax: +44 1752 232540 e-mail: sue at soc.plym.ac.uk http://www.tech.plym.ac.uk/soc/research/neural/index.html

From ingber at ingber.com Fri Mar 26 12:35:32 1999 From: ingber at ingber.com (Lester Ingber) Date: Fri, 26 Mar 1999 11:35:32 -0600 Subject: Paper: ...
Exponential Modifications to Black-Scholes Message-ID: <19990326113532.A24307@ingber.com>

The PostScript paper http://www.ingber.com/markets99_exp.ps.Z can also be retrieved uncompressed for a while as http://www.alumni.caltech.edu/~ingber/markets99_exp.ps

%A L. Ingber %A J.K. Wilson %R Statistical Mechanics of Financial Markets: Exponential Modifications to Black-Scholes %I DRW Investments LLC %C Chicago, IL %D 1999 %O URL http://www.ingber.com/markets99_exp.ps.Z http://www.alumni.caltech.edu/~ingber/markets99_exp.ps

The Black-Scholes theory of option pricing has been considered for many years as an important but very approximate zeroth-order description of actual market behavior. We generalize the functional form of the diffusion of these systems and also consider multi-factor models including stochastic volatility. We use a previous development of a statistical mechanics of financial markets to model these issues. Daily Eurodollar futures prices and implied volatilities are fit to determine exponents of functional behavior of diffusions using methods of global optimization, Adaptive Simulated Annealing (ASA), to generate tight fits across moving time windows of Eurodollar contracts. These short-time fitted distributions are then developed into long-time distributions using a robust non-Monte Carlo path-integral algorithm, PATHINT, to generate prices and derivatives commonly used by option traders. The results of our study show that there is only a very small change in at-the-money option prices for different probability distributions, both for the one-factor and two-factor models. There are still significant differences in the risk parameters (partial derivatives) when using more sophisticated models, especially for out-of-the-money options.

======================================================================== Instructions for Retrieval of Code and Reprints

Interactively Via WWW The archive can be accessed via WWW path http://www.ingber.com/ http://www.alumni.caltech.edu/~ingber/ where the last address is a mirror homepage for the full archive.

Interactively Via Anonymous FTP Code and reprints can be retrieved via anonymous ftp from ftp.ingber.com. Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.ingber.com [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] binary [ftp>] ls [ftp>] get file_of_interest [ftp>] quit The 00index file contains an index of the other files. Files have the same WWW and FTP paths under the main / directory; e.g., http://www.ingber.com/MISC.DIR/00index_misc and ftp://ftp.ingber.com/MISC.DIR/00index_misc reference the same file.

Electronic Mail If you do not have WWW or FTP access, get the Guide to Offline Internet Access, returned by sending an e-mail to mail-server at rtfm.mit.edu with only the words send usenet/news.answers/internet-services/access-via-email in the body of the message. The guide gives information on using e-mail to access just about all InterNet information and documents.

Additional Information Limited help with queries on my codes and papers is available only by electronic mail correspondence. Sorry, I cannot mail out hardcopies of code or papers.

Lester

======================================================================== -- /* Lester Ingber http://www.ingber.com/ ftp://ftp.ingber.com * * ingber at ingber.com ingber at alumni.caltech.edu ingber at drwtrading.com * * PO Box 06440 Wacker Dr PO Sears Tower Chicago IL 60606-0440 */
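[ Moderator's note: for readers who want the zeroth-order baseline that the above paper generalizes, here is the classic Black-Scholes price of a European call, C = S*N(d1) - K*exp(-r*T)*N(d2). The formula is standard; the numbers in the example are illustrative only and have nothing to do with the paper's Eurodollar fits. ]

    # Classic Black-Scholes European call price (constant volatility) --
    # the model described above as a "very approximate zeroth-order
    # description" of actual market behavior.
    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        # standard normal CDF via the error function
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, T, r, sigma):
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    # An at-the-money example: spot 100, strike 100, half a year to expiry.
    print(bs_call(S=100.0, K=100.0, T=0.5, r=0.05, sigma=0.2))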
From gustl at itl.atr.co.jp Sun Mar 28 23:32:14 1999 From: gustl at itl.atr.co.jp (Michael Schuster) Date: Mon, 29 Mar 1999 13:32:14 +0900 Subject: PhD thesis available Message-ID: <199903290432.NAA04867@atra17.itl.atr.co.jp>

PhD Thesis available ====================

I sent this BCC mail to a number of people who asked me about my thesis, or who I thought might be interested in it. Because I sent it to mailing lists, too, it could happen that you get this message twice -- my apologies. Mike Schuster

------------------------------------------------------------------------------

available from: http://isw3.aist-nara.ac.jp/IS/Shikano-lab/staff/1996/mike-s/mike-s.html in the publication section http://isw3.aist-nara.ac.jp/IS/Shikano-lab/staff/1996/mike-s/publication.html

ENGLISH TITLE: On supervised learning from sequential data with applications for speech recognition

ENGLISH ABSTRACT: Many problems of engineering interest, for example speech recognition, can be formulated in an abstract sense as supervised learning from sequential data, where an input sequence x_1^T = { x_1, x_2, x_3, ..., x_{T-1}, x_T } has to be mapped to an output sequence y_1^T = { y_1, y_2, y_3, ..., y_{T-1}, y_T }. This thesis gives a unified view of the abstract problem and presents some models and algorithms for improved sequence recognition and modeling performance, measured on synthetic data and on real speech data.

A powerful neural network structure to deal with sequential data is the recurrent neural network (RNN), which allows one to estimate P(y_t|x_1, x_2, ..., x_t), the output at time t given all previous input. The first part of this thesis presents various extensions to the basic RNN structure, which are a) a bidirectional recurrent neural network (BRNN), which allows one to estimate expressions of the form P(y_t|x_1^T), the output at t given all sequential input, for uni-modal regression and classification problems, b) an extended BRNN to directly estimate the posterior probability of a symbol sequence, P(y_1^T|x_1^T), by modeling P(y_t|y_{t-1}, y_{t-2}, ..., y_1, x_1^T) without explicit assumptions about the shape of the distribution P(y_1^T|x_1^T), c) a BRNN to model multi-modal input data that can be described by Gaussian mixture distributions conditioned on an output vector sequence, P(x_t|y_1^T), assuming that neighboring x_t, x_{t+1} are conditionally independent, and d) an extension to c) which removes the independence assumption by modeling P(x_t|x_{t-1}, x_{t-2}, ..., x_1, y_1^T) to estimate the likelihood P(x_1^T|y_1^T) of a given output sequence without any explicit approximations about the use of context.

The second part of this thesis describes the details of a fast and memory-efficient one-pass stack decoder for speech recognition to perform the search for the most probable word sequence. The use of this decoder, which can handle arbitrary order N-gram language models and arbitrary order context-dependent acoustic models with full cross-word expansion, led to the best reported recognition results on the standard test set of a widely used Japanese newspaper dictation task.
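[ Moderator's note: the sketch below shows the core BRNN idea from the abstract -- a forward state summarizing x_1 .. x_t and a backward state summarizing x_t .. x_T are combined at each t, so the output can condition on the whole input sequence, as in P(y_t|x_1^T). Shapes and weights are random stand-ins, not Schuster's architecture or code. ]

    # Minimal bidirectional RNN forward pass (random weights, tanh units).
    import numpy as np
    rng = np.random.default_rng(0)

    T, d_in, d_h, d_out = 6, 3, 4, 2            # illustrative dimensions
    Wf = rng.standard_normal((d_h, d_in)); Vf = rng.standard_normal((d_h, d_h))
    Wb = rng.standard_normal((d_h, d_in)); Vb = rng.standard_normal((d_h, d_h))
    Wo = rng.standard_normal((d_out, 2 * d_h))
    x = rng.standard_normal((T, d_in))          # the input sequence x_1 .. x_T

    h_f = np.zeros((T, d_h)); h = np.zeros(d_h)
    for t in range(T):                          # forward states see x_1 .. x_t
        h = np.tanh(Wf @ x[t] + Vf @ h); h_f[t] = h

    h_b = np.zeros((T, d_h)); h = np.zeros(d_h)
    for t in reversed(range(T)):                # backward states see x_t .. x_T
        h = np.tanh(Wb @ x[t] + Vb @ h); h_b[t] = h

    for t in range(T):                          # each output sees all of x_1^T
        logits = Wo @ np.concatenate([h_f[t], h_b[t]])
        probs = np.exp(logits - logits.max()); probs /= probs.sum()
        print(t, probs)                         # class estimates of P(y_t|x_1^T)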
----------------------------------------------------------------------------
Table of Contents:

1 Introduction
  1.1 MOTIVATION AND BACKGROUND
    1.1.1 Learning from examples
    1.1.2 Does the order of the samples matter?
    1.1.3 Example applications
    1.1.4 Related scientific areas
  1.2 THESIS STRUCTURE
2 Supervised learning from sequential data
  2.1 DEFINITION OF THE PROBLEM
  2.2 DECOMPOSITION INTO A GENERATIVE AND A PRIOR MODEL PART
    2.2.1 Context-independent model
    2.2.2 Context-dependent model
  2.3 DIRECT DECOMPOSITION
  2.4 HIDDEN MARKOV MODELS
    2.4.1 Basic HMM formulation
    2.4.2 Calculation of state occupation probabilities
    2.4.3 Parameter estimation for output probability distributions
    2.4.4 Parameter estimation for transition probabilities
  2.5 SUMMARY
3 Neural networks for supervised learning from sequences
  3.1 BASICS OF NEURAL NETWORKS
    3.1.1 Parameter estimation by maximum likelihood
    3.1.2 Problem classification
    3.1.3 Neural network training
    3.1.4 Neural network architectures
  3.2 BIDIRECTIONAL RECURRENT NEURAL NETWORKS
    3.2.1 Prediction assuming independent outputs
    3.2.2 Experiments and results
    3.2.3 Prediction assuming dependent outputs
    3.2.4 Experiments and results
  3.3 MIXTURE DENSITY RECURRENT NEURAL NETWORKS
    3.3.1 Basics of mixture density networks
    3.3.2 Mixture density extensions for BRNNs
    3.3.3 Experiments and results
    3.3.4 Discussion
  3.4 SUMMARY
4 Memory-efficient LVCSR search using a one-pass stack decoder
  4.1 INTRODUCTION
    4.1.1 Organization of this chapter
    4.1.2 General
    4.1.3 Technical
    4.1.4 Decoder types
  4.2 A MEMORY-EFFICIENT ONE-PASS STACK DECODER
    4.2.1 Basic algorithm
    4.2.2 Pruning techniques
    4.2.3 Stack module
    4.2.4 Hypotheses module
    4.2.5 N-gram module
    4.2.6 LM lookahead
    4.2.7 Cross-word models
    4.2.8 Fast-match with delay
    4.2.9 Using word-graphs as language model constraints
    4.2.10 Lattice rescoring
    4.2.11 Generating phone/state alignments
  4.3 EXPERIMENTS
    4.3.1 Recognition of Japanese
    4.3.2 Recognition results for high accuracy
    4.3.3 Recognition results for high speed and low memory
    4.3.4 Time and memory requirements for modules
    4.3.5 Usage of cross-word models
    4.3.6 Usage of fast-match models
    4.3.7 Effect of on-demand N-gram smearing
    4.3.8 Lattice/N-best list generation and lattice rescoring
  4.4 CONCLUSIONS
  4.5 ACKNOWLEDGMENTS
5 Conclusions
  5.1 SUMMARY
  5.2 CONTRIBUTIONS FROM THIS THESIS
  5.3 SUGGESTIONS FOR FUTURE WORK
----------------------------------------------------------------------------

Mike Schuster, ATR Interpreting Telecommunications Research Laboratories, 2-2 Hikari-dai, Seika-cho, Soraku-gun, Kyoto 619-02, JAPAN, Tel. ++81-7749-5-1394, Fax. ++81-7749-5-1308, email: gustl at itl.atr.co.jp, http://isw3.aist-nara.ac.jp/IS/Shikano-lab/staff/1996/mike-s/mike-s.html

----------------------------------------------------------------------------

From juergen at idsia.ch Tue Mar 30 03:25:40 1999 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Tue, 30 Mar 1999 10:25:40 +0200 Subject: IDSIA PHD JOB OPENING Message-ID: <199903300825.KAA01726@ruebe.idsia.ch>

PHD STUDENT WANTED

I am seeking an outstanding PhD candidate for a research project on reinforcement learning (RL) and program evolution (PE).

OVERVIEW. Most machine learning research focuses on learning memory-free mappings between input patterns and output patterns. Humans, however, obviously learn entire algorithms mapping input sequences to output sequences in a complex fashion.
In particular, they constantly learn to identify important events in input streams and store them in short-term memory until the memories are needed to compute appropriate output actions. If we want to bridge the gap between the learning abilities of humans and machines, then we will have to study how such sequential processes can be learned. The focus of this project will be on RL and PE methods whose search space consists of fairly arbitrary, possibly probabilistic "programs" (as opposed to more limited stimulus/response associations).

POSSIBLE PROJECT SUBGOALS. The project allows considerable scientific freedom. If you have a great idea, let's go for it and try it. Otherwise we'll start along the following lines. (1) Explore the limits of recent algorithmic search techniques such as "Adaptive Levin Search" and "Probabilistic Incremental Program Evolution" -- both of which can learn memory strategies. (2) Improve, extend, and apply a recent technique, "Incremental self-improvement", based on the success-story algorithm for probabilistic, self-modifying systems that can in principle learn to improve their own learning algorithm (metalearning). (3) Build unsupervised, "curious" systems selecting their own training exemplars for building models of the environment, and use the models to speed up improvement of goal-directed sequential behavior. (4) Examine RL economies where agents learn to pay each other for useful services, and test whether they can learn to memorize. See http://www.idsia.ch/~juergen/topics.html for papers on the above subjects.

A highly qualified candidate is sought with a background in computational sciences, mathematics, engineering, physics or other relevant areas. Applicants should submit: (i) a detailed curriculum vitae, (ii) a list of three references (and their email addresses), (iii) transcripts of undergraduate and graduate (if applicable) studies, and (iv) a concise statement of their research interests (two pages max). Candidates are also encouraged to submit their scores in the Graduate Record Examination (GRE) general test (if available). Please send hardcopies of all documents to: Juergen Schmidhuber, IDSIA, Corso Elvezia 36, 6900-Lugano, Switzerland

Applications (with WWW pointers to studies or papers, if available) can also be submitted electronically (in plain ASCII or postscript format, but only small files please) to juergen at idsia.ch. Please connect your first and last name by a dot "." in the subject header, and add a meaningful extension. For instance, if your name is John Smith, then your messages could have headers such as: subject: John.Smith.cv.ps, subject: John.Smith.statement.txt, subject: John.Smith.correspondence.txt.... This will facilitate appropriate filing of your stuff. Thanks a lot!

ABOUT IDSIA. Our research focuses on artificial neural nets, reinforcement learning, complexity and generalization issues, unsupervised learning and information theory, forecasting, combinatorial optimization, evolutionary computation. IDSIA's algorithms hold the world records for several important operations research benchmarks. In the "X-Lab Survey" by Business Week magazine, IDSIA was ranked in fourth place in the category "COMPUTER SCIENCE - BIOLOGICALLY INSPIRED". Its comparatively tiny size notwithstanding, IDSIA also ranked among the top ten labs worldwide in the broader category "ARTIFICIAL INTELLIGENCE". We are located in the beautiful city of Lugano in Ticino, the scenic southernmost province of Switzerland.
Milano, Italy's center of fashion and finance, is 1 hour away, Venice 3 hours. Our collaborators at the Swiss supercomputing center CSCS are nearby; the new University of Lugano is across the lawn. Switzerland (origin of special relativity and the World Wide Web) boasts the highest citation impact factor, the highest supercomputing capacity pc (per capita), the most Nobel prizes pc (450% of the US value), the highest income pc, and perhaps the best chocolate.

SALARY: commensurate with experience but generally attractive. Low taxes. There is also travel funding for papers accepted at important conferences.

_________________________________________________ Juergen Schmidhuber research director IDSIA, Corso Elvezia 36, 6900-Lugano, Switzerland juergen at idsia.ch http://www.idsia.ch/~juergen

From horn at neuron.tau.ac.il Tue Mar 30 08:40:51 1999 From: horn at neuron.tau.ac.il (David Horn) Date: Tue, 30 Mar 1999 15:40:51 +0200 (IST) Subject: NCHEP-99 Message-ID:

First Announcement: NCHEP-99 Workshop on Neural Computation in High Energy Physics 1999 Place: Maale Hachamisha, Israel Dates: October 13 - 15, 1999.

This workshop, sponsored by the Israel Science Foundation, will be devoted to the use of Neural Computation in High Energy Physics. Its purpose is to review the current status of this field and to discuss and evaluate possible future developments.

Call for Papers. Applications of neural computation to HEP can be found in the area of data analysis and in both on-line and off-line trigger designs. We would also like to discuss the possible role of neural methods in the design of intelligent detectors and the use of reconfigurable devices. We call for papers on recent developments in all these subjects. Abstracts should be submitted electronically by July 1st, 1999. They should include postal and e-mail addresses of all authors and the author to whom correspondence should be addressed. Submitted papers should be limited to seven pages in length. Two copies of the submitted papers should reach the conference scientific committee by September 1st, 1999. Mail submissions to Prof. Halina Abramowicz, NCHEP-99, School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel. e-mail: halina at post.tau.ac.il

Scientific Organizing Committee Prof. H. Abramowicz and Prof. D. Horn, Tel Aviv University.

General Information Information about this workshop is available at the website http://neuron.tau.ac.il/NCHEP-99. Registration will be handled by Dan-Knassim. http://www.congress.co.il. e-mail: congress at mail.inter.net.il. Phone: +972-3-6133340, fax: +972-3-6133341. The workshop secretary is Michal Finkelman, e-mail: michal at neuron.tau.ac.il, fax: +972-3-6407932.

Notice This workshop will follow the NCST-99 conference on Neural Computation in Science and Technology, which will take place at the same location. That conference covers areas of both neurobiological modeling and computational applications. Information about NCST-99 is available at http://neuron.tau.ac.il/NCST-99. Participants of NCHEP-99 are encouraged to make use of this opportunity and also take part in that conference.

From niall.griffith at ul.ie Wed Mar 31 08:22:24 1999 From: niall.griffith at ul.ie (Niall Griffith) Date: Wed, 31 Mar 1999 14:22:24 +0100 Subject: NN's & Music - new book Message-ID: <9903311322.AA16146@zeus.csis.ul.ie>

Book Announcement.... MUSICAL NETWORKS: PARALLEL DISTRIBUTED PERCEPTION AND PERFORMANCE edited by Niall Griffith and Peter M.
Todd MUSICAL NETWORKS, a new book on connectionist models of music cognition, composition, and performance, has been published by MIT Press. This book presents the latest research on neural network applications in the domains of musical and creative behavior by leaders in the field including Gjerdingen, Grossberg, Large, Mozer and many others. For a further description, and links to the complete table of contents and preface, please visit http://mitpress.mit.edu/book-home.tcl?isbn=0262071819 MUSICAL NETWORKS can be found in bookstores that carry MIT Press publications, or can be purchased directly from MIT Press through the above website or by calling their toll-free order number, 1-800-356-0343 (or in the UK, (0171)306-0603), and specifying ISBN 0-262-07181-9. The price is $37.50 (hardcover, 385 pages). In conjunction with our book, we have also created an extensive bibliography of connectionist publications on music, which you can access (in html, Word, bibtex, and plain text) at http://www.csis.ul.ie/staff/NiallGriffith/mnpdpp_bib0.htm We intend to keep this an up-to-date list, so if you have published anything in this area (or know of other work) that we have not yet included, please email full details to niall.griffith at ul.ie and it will be added. Niall Griffith and Peter Todd ****************************************************************************