From densley at eng.auburn.edu Thu Oct 1 16:17:04 1992 From: densley at eng.auburn.edu (Dillard D. Ensley) Date: Thu, 1 Oct 92 15:17:04 CDT Subject: Power Survey Results Message-ID: <9210012017.AA03134@eng.auburn.edu> Dear Connectionists, Thank you for responding to my request for sources on applying artificial neural networks to problems in the electric power industry. Following is a list of 55 sources. There are another 57 papers in the "Proceedings of the First International Forum on Applications of Neural Networks to Power Systems," Seattle, Washington, July 23-26, 1991, published by the Institute of Electrical and Electronics Engineers (IEEE). Also, the Electric Power Research Institute (EPRI) and the International Neural Network Society (INNS) held a workshop entitled "Neural Network Computing for the Electric Power Industry" at Stanford University in Stanford, California on August 17-19, 1992. Several of you mentioned that the IEEE Power Engineering Society has a task force to compile a similar bibliography. Reports are that there are over 170 sources in that project. Though some companies claim to be working on commercial applications (and I was able to verify one company's claim), they all asked to remain unpublished until such products are marketed. So be watching for these products to hit the market. 1) M. Aggoune, M.A. El-Sharkawi, D.C. Park, R.J. Marks II, "Preliminary Results on Using Artificial Neural Networks for Security Assessment," IEEE Transactions on Power Systems, Vol. 6, No. 2, pp. 890-896, May 1991. 2) Israel E. Alguindigue, Anna Loskiewicz-Buczak, Robert E. Uhrig, "Neural Networks for the Monitoring of Rotating Machinery," Proceedings of the Eighth Power Plant Dynamics, Control and Testing Symposium (in press), May 1992. 3) Israel E. Alguindigue, Anna Loskiewicz-Buczak, Robert E. Uhrig, "Clustering and Classification Techniques for the Analysis of Vibration Signatures," Proceedings of the SPIE Technical Symposium on Intelligent Information Systems Application of Artificial Neural Networks, III, April 1992. 4) Hamid Bacha, Walter Meyer, "A Neural Network Architecture for Load Forecasting," Proceedings of the 1992 International Joint Conference on Neural Networks, Vol. 2, pp. 442-447, June 1992. 5) Eric B. Bartlett, Robert E. Uhrig, "Nuclear Power Plant Status Diagnostics Using an Artificial Neural Network," Nuclear Technology, Vol. 97, pp. 272-281, March 1992. 6) Franoise Beaufays, Youssef Abdel-Magid, Bernard Widrow, "Application of Neural Networks to Load-Frequency Control in Power Systems," submitted to Neural Networks, May 1992. 7) Chao-Rong Chen, Yuan-Yih-Hsu, "Synchronous Machine Steady- State Stability Analysis Using an Artificial Neural Network," IEEE Transactions on Energy Conversion, Vol. 6, No. 1, pp. 12-20, March 1991. 8) Mo-yuen Chow, Sui Oi Yee, "Methodology for On-Line Incipient Fault Detection in Single-Phase Squirrel-Cage Induction Motors Using Artificial Neural Networks," IEEE Transactions on Energy Conversion, Vol. 6, No. 3, pp. 536- 545, September 1991. 9) Badrul H. Chowdhury, Bogdan M. Wilamowski, "Real-Time Power System Analysis Using Neural Computing," Proceedings of the 1992 Workshop on Neural Networks, February 1992. 10) Sonja Ebron, David L. Lubkeman, Mark White, "A Neural Network Approach to the Detection of Incipient Faults on Power Distribution Feeders," IEEE Transactions on Power Delivery, Vol. 5, No. 2, pp. 905-912, April 1990. 11) Tom Elliott, "Neural Networks--Next Step in Applying Artificial Intelligence," Power, pp. 
45-48, March 1990. 12) D.D. Ensley, "Neural Networks Applied to the Protection of Large Synchronous Generators," M.S. Thesis, Department of Electrical Engineering, Auburn University, Alabama, to be published December 1992. 13) Y.J. Feria, J.D. McPherson, D.J. Rolling, "Cellular Neural Networks for Eddy Current Problems," IEEE Transactions on Power Delivery, Vol. 6, No. 1, pp. 187-193, January 1991. 14) Zhichao Guo, Robert E. Uhrig, "Use of Artificial Neural Networks to Analyze Nuclear Power Plant Performance" (in press), Nuclear Technology, July 1992 (expected). 15) Zhichao Guo, Robert E. Uhrig, "Sensitivity Analysis and Applications to Nuclear Power Plant," Proceedings of the 1992 International Joint Conference on Neural Networks, Vol. 2, pp. 453-458, June 1992. 16) Zhichao Guo, Robert E. Uhrig, "Using Modular Neural Networks to Monitor Accident Conditions in Nuclear Power Plants," Proceedings of the SPIE Technical Symposium on Intelligent Information Systems Application of Artificial Neural Networks, III, April 1992. 17) R.K. Hartana, G.G. Richards, "Harmonic Source Monitoring and Identification Using Neural Networks," IEEE Transactions on Power Systems, Vol. 5, No. 4, pp. 1098- 1104, November 1990. 18) Kun-Long Ho, Yuan-Yih Hsu, Chien-Chuen Yang, "Short Term Load Forecasting Using a Multilayer Neural Network with an Adaptive Learning Algorithm," IEEE Transactions on Power Systems, Vol. 7, No. 1, pp. 141-149, February 1992. 19) Yuan-Yih Hsu, Chao-Rong Chen, "Tuning of Power System Stabilizers Using an Artificial Neural Network," IEEE Transactions on Energy Conversion, Vol. 6, No. 4, pp. 612- 618, December 1991. 20) Yuan-Yih Hsu, Chien-Chuen Yang, "Design of Artificial Neural Networks for Short-Term Load Forecasting," IEE Proceedings. Part C, Generation, Transmission and Distribution, Vol. 138, No. 5, pp. 407-418, September 1991. 21) Andreas Ikonomopoulos, Lefteri H. Tsoukalas, Robert E. Uhrig, "Use of Neural Networks to Monitor Power Plant Components," Proceedings of the American Power Conference, April 1992. 22) Andreas Ikonomopoulos, Lefteri H. Tsoukalas, Robert E. Uhrig, "A Hybrid Neural Network-Fuzzy Logic Approach to Nuclear Power Plant Transient Identificaiton," Proceedings of the AI-91: Frontiers in Innovative Computing for the Nuclear Industry, pp. 217-226, September 1991. 23) N. Kandil, V.K. Sood, K. Khorasani, R.V. Patel, "Fault Identification in an AC-DC Transmission System Using Neural Networks," IEEE Transactions on Power Systems, Vol. 7, No. 2, pp. 812-819, May 1992. 24) Shahla Keyvan, Luis Carlos Rabelo, Anil Malkani, "Nuclear Reactor Condition Monitoring by Adaptive Resonance Theory," Proceedings of the 1992 International Joint Conference on Neural Networks, Vol. 3, pp. 321-328, June 1992. 25) K.Y. Lee, Y.T. Cha, J.H. Park, "Short-Term Load Forecasting Using an Artificial Neural Network," IEEE Transactions on Power Systems, Vol. 7, No. 1, pp. 124-130, February 1992. 26) Z.J. Liu, F.E. Villaseca, F. Renovich, Jr., "Neural Networks for Generation Scheduling in Power Systems," Proceedings of the 1992 International Joint Conference on Neural Networks, Vol. 2, pp. 233-238, June 1992. 27) Hiroyuki Mori, Yoshihito Tamaru, Senji Tsuzuki, "An Artificial Neural-Net Based Technique for Power System Dynamic Stability with the Kohonen Model," IEEE Transactions on Power Systems, Vol. 7, No. 2, pp. 856-864, May 1992. 
28) Hiroyuki Mori, Kenji Itou, Hiroshi Uematsu, Senji Tsuzuki, "An Artificial Neural-Net Based Method for Predicting Power System Voltage Harmonics," IEEE Transactions on Power Delivery, Vol. 7, No. 1, pp. 402-409, January 1992. 29) Seibert L. Murphy, Samir I. Sayegh, "Application of Neural Networks to Acoustic Screening of Small Electric Motors," Proceedings of the 1992 International Joint Conference on Neural Networks, Vol. 2, pp. 472-477, June 1992. 30) Dagmar Niebur, Alain J. Germond, "Power System Static Security Assessment Using the Kohonen Neural Network Classifier," IEEE Transactions on Power Systems, Vol. 7, No. 2, pp. 865-872, May 1992. 31) T.T. Nguyen, H.X. Bui, "Neural Network for Power System Control Function," Australasia Universities Power and Control Conference '91, pp. 202-207, October 1991. 32) S. Osowski, "Neural Network for Estimation of Harmonic Components in a Power System," IEE Proceedings. Part C, Generation, Transmission and Distribution, Vol. 139, No. 2, pp. 129-135, March 1992. 33) D.R. Ostojic, G.T. Heydt, "Transient Stability Assessment by Pattern Recognition in the Frequency Domain," IEEE Transactions on Power Systems, Vol. 6, No. 1, pp. 231-237, February 1991. 34) Z. Ouyang, S.M. Shahidehpour, "A Hybrid Artificial Neural Network-Dynamic Programming Approach to Unit Commitment," IEEE Transactions on Power Systems, Vol. 7, No. 1, pp. 236- 242, February 1992. 35) Norman L. Ovick, "A-to-D Voltage Classifier Using Neural Network," Proceedings of the 1991 Workshop on Neural Networks, pp. 615-620, February 1991. 36) Yoh-Han Pao, Dejan J. Sobajic, "Combined Use of Unsupervised and Supervised Learning for Dynamic Security Assessment," IEEE Transactions on Power Systems, Vol. 7, No. 2, pp. 878-884, May 1992. 37) Yoh-Han Pao, Dejan J. Sobajic, "Current Status of Artificial Neural Network Applications to Power Systems in the United States," Transactions of the Institute of Electrical Engineers of Japan, Vol. 111-B, No. 7, pp. 690- 697, July 1991. 38) D.C. Park, M.A. El-Sharkawi, R.J. Marks II, "Electric Load Forecasting Using an Artificial Neural Network," IEEE Transactions on Power Systems, Vol. 6, No. 2, pp. 442-448, May 1991. 39) Alexander G. Parlos, Amir F. Atiya, Kil T. Chong, Wei K. Tsai, "Nonlinear Identification of Process Dynamics Using Neural Networks," Nuclear Technology, Vol. 97, pp. 79-96, January 1992. 40) T.M. Peng, N.F. Hubele, G.G. Karady, "Advancement in the Application of Neural Networks for Short-Term Load Forecasting," IEEE Transactions on Power Systems, Vol. 7, No. 1, pp. 250-257, February 1992. 41) Kenneth F. Reinschmidt, "Neural Networks: Next Step for Simulation and Control," Power Engineering, pp. 41-45, November 1991. 42) C. Rodriguez, S. Rementeria, C. Ruiz, A. Lafuente, J.I. Martin, J. Muguerza, "A Modular Approach to the Design of Neural Networks for Fault Diagnosis in Power Systems," Proceedings of the 1992 International Joint Conference on Neural Networks, Vol. 3, pp. 16-23, June 1992. 43) Myung-Sub Roh, Se-Woo Cheon, Soon-Heung Chang, "Power Prediction in Nuclear Power Plants Using a Back-Propagation Learning Neural Network," Nuclear Technology, Vol. 94, pp. 270-278, May 1991. 44) N. Iwan Santoso, Owen T. Tan, "Neural-Net Based Real-Time Control of Capacitors Installed on Distribution Systems," IEEE Transactions on Power Delivery, Vol. 5, No. 1, pp. 266-272, January 1990. 45) T. Satoh, K. Nara, "Maintenance Scheduling by Using Simulated Annealing Method," IEEE Transactions on Power Systems, Vol. 6, No. 2, pp. 850-857, May 1991. 
46) Dejan J. Sobajic, Yoh-Han Pao, "Artificial Neural-Net Based Dynamic Security Assessment for Electric Power Systems," IEEE Transactions on Power Systems, Vol. 4, No. 1, pp. 220- 226, February 1989. 47) Michael Travis, "Neural Network Methodology for Check Valve Diagnostics," M.S. Thesis, Department of Nuclear Engineering, University of Tennessee, December 1991. 48) Robert E. Uhrig, "Potential Use of Neural Networks in Nuclear Power Plants," Proceedings of the Eighth Power Plant Dynamics, Control and Testing Symposium (in press), May 1992. 49) Robert E. Uhrig, "Use of Neural Networks in the Analysis of Complex Systems," Proceedings of the 1992 Workshop on Neural Networks, February 1992. 50) Robert E. Uhrig, "Potential Application of Neural Networks to the Operation of Nuclear Power Plants," Nuclear Safety, Vol. 32, No. 1, pp. 68-79, January-March 1991. 51) Belle R. Upadhyaya, Evren Eryurek, "Application of Neural Networks for Sensor Validation and Plant Monitoring," Nuclear Technology, Vol. 97, pp. 170-176, February 1992. 52) Siri Weerasooriya, M.A. El-Sharkawi, M. Damborg, R.J. Marks II, "Towards Static-Security Assessment of a Large-Scale Power System Using Neural Networks," IEE Proceedings. Part C, Generation, Transmission and Distribution, Vol. 139, No. 1, pp. 64-70, January 1992. 53) Siri Weerasooriya, M.A. El-Sharkawi, "Identification and Control of a DC Motor Using Back-Propagation Neural Networks," IEEE Transactions on Energy Conversion, Vol. 6, No. 4, pp. 663-669, December 1991. 54) A. Martin Wildberger, "Model-Based Reasoning, and Neural Networks, Combined in an Expert Advisor for Efficient Operation of Electric Power Plants." 55) Q.H. Wu, B.W. Hogg, G.W. Irwin, "A Neural Network Regulator for Turbogenerators," IEEE Transactions on Neural Networks, Vol. 3, No. 1, pp. 95-100, January 1992. From mike at PARK.BU.EDU Fri Oct 2 14:25:45 1992 From: mike at PARK.BU.EDU (Michael Cohen) Date: Fri, 2 Oct 92 14:25:45 -0400 Subject: No subject Message-ID: <9210021825.AA13118@cns.bu.edu> POSTDOCTORAL FELLOW CENTER FOR ADAPTIVE SYSTEMS AND DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS BOSTON UNIVERSITY A postdoctoral fellow is sought to join the Center for Adaptive Systems and the Department of Cognitive and Neural Systems, which are research leaders in the development of biological and artificial neural networks. A person is sought who has a substantial research and publication record om developing neural network models of image processing and adaptive pattern recognition. Salary: $30,000+. Excellent opportunities for broadening knowledge of neural architectures through interactions with a faculty trained in psychology, neurobiology, mathematics, computer science, physics, and engineering. Well-equipped computer, vision, speech, word recognition, and motor control laboratories are in the Department. Boston University is an Equal Opportunity/Affirmative Action Employer. 
Please send a curriculum vitae, 3 letters of recommendation, and illustrative research articles by January 15, 1993 to: Postdoctoral Search Committee Center for Adaptive Systems Boston University 111 Cummington Street Room 244 Boston MA 02215 From jose at tractatus.siemens.com Fri Oct 2 14:39:13 1992 From: jose at tractatus.siemens.com (Steve Hanson) Date: Fri, 2 Oct 1992 14:39:13 -0400 (EDT) Subject: NIPS*92 CONFERENCE PROGRAM Message-ID: NIPS*92 Conference PROGRAM ORAL PROGRAM: Monday, November 30 After Dinner Talk: Stuart Anstis, Psychology Department., UC San Diego "I Thought I saw it move: The Psychology of Motion Perception." Tuesday, December 1 ORAL 1: COMPLEXITY, LEARNING & GENERALIZATION [8:30--9:40] 0.1.1. T. Cover, Department of Elec. Eng., Stanford University "Complexity and Generalization in Neural Networks." (Invited Talk)[8:30am] 0.1.2. N. Intrator, Center for Neural Science, Brown University "Combining Exploratory Projection Pursuit and Projection Pursuit Regression with Application to Neural Networks" [9:00am] 0.1.3. A, Stolcke & S. Omohundro, International Computer Science Institute, Berkeley, CA "Hidden Markov Model Induction by Bayesian Model." [9:20am] 0.1.4 K-Y Siu*, V. Roychowdhury%, T. Kailath+, *Department of Elec. & Computer Eng., UC Irvine, %School of Elec. Eng., Purdue University, +Information Systems Lab, Stanford University "Computing with Almost Optimal Size Neural Networks." [9:40am] ORAL 2: CONTROL, NAVIGATION & PLANNING 0.2.1. D. DeMers* & K. Kreutz-Delgado%, *Dept. of Computer Science, UC San Diego, %Dept. of Elec. & Computer Eng. & Inst. for Neural Comp., UC San Diego "Global Regularization of Inverse Kinematics for Redundant Manipulators." [10:30am] 0.2.2 A. W. Moore & C. G. Atkeson, MIT Al Lab "Memory-based Reinforcement Learning: Efficient Computation with Prioritized Sweeping." [10:50am] 0.2.3 P. Dayan* & G. E. Hinton% *CNL, The Salk Institute %Department of Computer Science, Univeristy of Toronto "Feudal Reinforcement Learning." [11:10am] 0.2.4 D. Pomerleau, School of Computer Science, CMU "Input Reconstruction Reliability Estimation." [11:30am] SPOTLIGHT 1: COMPLEXITY, LEARNING & GENERALIZATION. CONTROL, NAVIGATION & PLANNING. [11:50-11:58am] ORAL 3: VISUAL PROCESSING 0.3.1. S. Geman, Mathematics Department, Brown University "Interpretation-guided Segmentation and Recognition." (Invited Talk) [2:00pm] 0.3.2. S. Becker, Department of Computer Science, Univ. of Toronto "Learning to Categorize Objects Using Temporal Coherence." [2:30pm] 0.3.3. S. J Nowlan & T. J. Sejnowski, CNL, The Salk Institute "Filter Selection Model for Generating Visual Motion Signals for Target Tracking." [2:50pm] 0.3.4. E. Stern*, A. Aertsen%, E. Vaadia+ & S. Hochstein** *Department of Neurobiology, Hebrew University, Jerusalem %Inst. fur Neuroinformatik, Ruhr-Univ., Bochum, Germany +Department of Physiology, Hebrew University, Jerusalem ** "Stimulus Encoding by Multi-Dimensional Receptive Fields in Single Cells and Cell Populations in V1 of Awake Monkey." [3:10pm] ORAL 4: STOCHASTIC LEARNING AND ANALYSIS 0.4.1. T. K. Leen* & J. Moody % *CSE Department, Oregon Graduate Institute %Department of Computer Science, Yale University "Probability Densities and Equilibria in Stochastic i Learning." [4:00pm] 0.4.2. W. Finnoff, Siemens AG Corp. Res. & Dev., Munich, Germany "Diffusion Approximations for the Constant Learning Rate Backpropagation Algorithm and Resistance to Local Minima." [4:20pm] 0.4.3. L. Xu & A. Yuille, Division of Applied Sciences, Harvard Univ. 
"Self-Organization for Robust Principal Component Analysis by the Statistical Physics Approach." [4:40pm] SPOTLIGHT 2: VISUAL PROCESSING [5:00-5:12pm] SPOTLIGHT 3: STOCHASTIC LEARNING & ANALYSIS [5:15-5:35pm] Wednesday, December 2 ORAL 5: COMPUTATIONAL AND THEORETICAL NEUROBIOLOGY 0.11.1. J. Rinzel, Mathematical Research Branch, NIH "Coupling Mechanisms and Rhythmogenesis in Neuron Models." (Invited Talk) [8:30am] 0.11.2. K. Doya, M.E.T. Boyle, and A. I. Selverston Department of Biology, UC San Diego "Mapping between Neural and Physical Activities of the Lobster Gastric Mill System." [9:00am] 0.11.3. M. E. Nelson, Beckman Institute, University of Illinois "Neural Models of Adaptive Filtering Mechanisms in the Electrosensory System."9:20am] 0.11.4. N. Burgess, J. O'Keefe and M. Reece Department of Anatomy, University College, London "Using Hippocampal 'Place Cells' for Navigation, Exploiting Phase Coding." [9:40am] 0.11.5. M. A. Gluck and C. E. Myers Center for Molecular and Behaviorial Neuroscience, Rutgers Univ. "Neural Bases of Adaptive Stimulus Representations: A Computational Theory of Hippocampal-Region Function." [10:00am] ORAL 6: SPEECH AND SIGNAL PROCESSING 0.6.1. M. Cohen*, H. Franco*, N. Morgan%, D. Rumelhart+, and V. Abrash* *SRI Inst., Menlo Park, CA %ICSI, Berkeley, CA +Psychology Department, Stanford University, CA "Context-Dependent Multiple Distribution Phonetic i Modeling with MLPS." [10:50am] 0.6.2. M. Hirayama*, E. V. Bateson%, K. Honda%, Y. Koike* and M. Kawato* *ATR Human Inf. Proc. Res. Labs %ATR Auditory and Visual Perception Res. Labs., Kyoto, Japan "Physiologically Based Speech Synthesis." [11:10am] 0.6.3. W. Liu, M. H. Goldstein, Jr. and A. G. Androu, Dept. of Elec. & Comp. Eng., The Johns Hopskins University "Analog Cochlear Model for Multiresolution Speech Analysis." [11:30am] SPOTLIGHT 4: COMPUTATIONAL AND THEORETICAL NEUROBIOLOGY. [11:50am-12:02pm] SPOTLIGHT 5: SPEECH AND SIGNAL PROCESSING. [12:04-12:08pm] ORAL 7: COMPLEXITY, LEARNING & GENERALIZATION 2 0.5.1. S. Solla, AT&T Bell Labs "The Emergence of Generalization Ability in Learning Machines." (Invited Talk) [2:00pm] 0.5.2. J. Wiles* & M Ollila% *Depts. of Computer Science & Psychology, Univ. of Queensland, Australia %Vision Lab, CITRI, Dept. of Computer Science, Univ. of Melbourne, Australia "Intersecting Regions: The Key to Combinatorial Structure in Hidden Unit Space." [2:30pm] 0.5.3. T. A. Plate, Department of Computer Science, Univ. of Toronto "Holographic Recurrent Networks." [2:50pm] 0.5.4. P. Simard, Y. LeCun & J. Denker, AT& T Bell Labs "Efficient Pattern Recognition Using a New Transformation Distance." [3:10pm] SPOTLIGHT 6: COMPLEXITY, LEARNING & GENERALISATION 2 [3:30-3:42pm] ORAL 8: IMPLEMENTATIONS 0.8.1. J. Platt, J. Anderson, & D. Kirk, Synaptics, Inc., San Jose, CA "An Analog VLSI Chip for Radial Basis Functions." [4:15pm] 0.8.2. H. P. Graf, E. Cosatto, E. Sackinger, and J. Snyder, AT&T Bell Labs "A Modular System with Multiple Neural Net Chips." [4:35pm] 0.8.3. D. J. Baxter, S. Churcher, A. Hamilton, A. F. Murray, and H. M. Rackie Department of Elec. Eng., University of Edinburgh, Scotland "The Edinburgh Pulse Stream Implementation of a i Learning-Oriented Network (Epsilon) Chip." [4:55pm] SPOTLIGHT 7: COGNITIVE SCIENCE, [5:15-5-19pm] SPOTLIGHT 8: IMPLEMENTATIONS, APPLICATIONS [5:20-5:40pm] Thursday, December 3 ORAL 9: PREDICTION 0.9.1. A. Lapedes, Theory Division, Los Alamos National Laboratory "Nonparametric Neural Networks for Prediction." (Invited Talk) [8:30am] 0.9.2. M. 
Plutowski*, G. Cottrell%, and H. White+ *Department of Computer Science & Engineering, %Inst. for Neural Comp. and Department of Computer Science & Eng., +Inst. for Neural Comp. and Department of Economics, UCSD "Learning Mackey-Glass from 25 Examples, Plus or Minus 2." [9:00am] ORAL 10: COGNITIVE SCIENCE 0.10.1. P. Smolensky Dept. of Computer Sci. and Inst. of Cog. Sci., Univ. of Colorado, Boulder "Harmonic Grammars for Formal Languages." [9:20am] 0.10.2. D. Gentner & A. B. Markman Department of Psychology, Northwestern University "Analogy -- Watershed or Waterloo? Structural Alignment and the Development of Connectionist Models of Cognition." [9:40am] ORAL 11: APPLICATIONS 0.7.1. Dr. W. Baxt, UCSD Medical Center "The Application of the Artificial Neural Network to Clinical Decision Making." (Invited Talk) [10:30am] 0.7.2. V. Tresp*, J. Moody%, and W-R. Delong+ *Seimens AG, Central Research, Munich, Germany %Computer Science Department, Yale University +Seimens AG, Medical Eng. Group, Erlangen, Germany "Prediction and Control of the Glucose Metabolism of a Diabetic." [11:00am] 0.7.3. P. Baldi* & Y. Chavin% *JPL, Division of Biology, Cal Tech %Net-ID, Inc., and Psychology Department, Stanford University "Neural Networks for Finger Print Matching and Classification." [11:20am] 0.7.4. M. Schenkel*, H. Weismann, I. Guyon, C. Nohl, D. Henderson, B. Bosser%, and L. Jackel AT&T Bell Labs *also ETH-Zunch, %also EECS Dept., UC Berkeley "TDNN Solutions for Recognizing On-Line Natural Handwriting." [11:40am] POSTER SPOTLIGHT TALKS (4 Minute Talks) SPOTLIGHT 1: COMPLEXITY, LEARNING & GENERALIZATION 1. CONTROL, NAVIGATION & PLANNING. P&S.1.1. K-Y Siu* & V. Roychowdhury% *Department of Elec. & Comp. Eng., UC Irvine %School of Elec. Eng., Purdue University "Optimal Depth Neural Networks for Multiplication and Related Problems." P&S.1.2. T. M. Mitchell and S. B. Thrun School of Computer Science, CMU "Explanation-Based Neural Network Learning for Robot Control." SPOTLIGHT 2: VISUAL PROCESSING P&S.2.1. S. Madarasmi*, D. Kersten%, and T-C Pong* *Department of Computer Science, %Department of Psychology, University of Minnesota "Computation of Stereo Disparity for Transparent and for Opaque Surfaces." P&S.2.2. S. Ahmad and V. Tresp Siemens Research, Munich, Germany "Some Solutions to the Missing Feature Problem in Vision." P&S.2.3. J. Utans and G. Gindi Department of Elec. Eng., Yale University "Improving Convergence in Hierarchical Matching Networks for Object Recognition." SPOTLIGHT 3: STOCHASTIC LEARNING & ANALYSIS P&S.3.1. R. M. Neal, Department of Computer Science, University of Toronto "Bayesian Learning via Stochastic Dynamics." P&S.3.2. Y. Freund*, H. S. Seung%, and N. Tishby+ *Comp. and Inf. Sci., UC Santa Cruz, %Racah Inst. of Physics, and Center for Neural Comp., Hebrew Univ., Jerusalem, +Department of Comp. Sci. and Center for Neural Comp., Hebrew Univ., Jerusalem "Accelerating Learning Using Query by Committee." P&S.3.3. A. F. Murray, J. P. Edwards Department of Elec. Eng., University of Edinburgh, Scotland "Synaptic Weight Noise During MLP Learning Enhances Fault-Tolerance." P&S.3.4. D. De Mers and G. Cottrell Department of Computer Science, UC San Diego "Non-Linear Dimensionality Reduction." P&S.3.5. N. N. Schraudolph* and T. J. Sejnowski% *Computer Science & Engr. Department, UC San Diego %Computer Neurobiology Lab., The Salk Institute "Self-Stabilizing Hebbian Learning: Beyond Principal Components." SPOTLIGHT 4: COMPUTATIONAL AND THEORETICAL NEUROBIOLOGY. P&S.4.1. I. Gutterman and N. 
Tishby Department of Comp. Sci. and Center for Neural Computation, Hebrew University, Jerusalem "Statistical Modeling of Cell-Assemblies Activities in Prefrontal Cortex of Behaving Monkeys." P&S.4.2. R. Linsker, IBM. TJ Watson Center, Yorktown Heights "Towards Unambiguous Derivation of Receptive Fields Using a New Optimal-Encoding Criterion." P&S.4.3. O. Coenen*, T. J. Sejnowski*, and S. G. Lisberger% *Comp. Neurobiol. Lab., Howard Hughes Medical Inst., The Salk Institute, La Jolla, CA %Department of Physiology, Kick Center for Integrating Neuroscience, UCSF, CA "Biologically Plausible Learning Rules for the Vestibular-Ocular Reflex (VOR)." SPOTLIGHT 5: SPEECH AND SIGNAL PROCESSING. P&S.5.1. M. Hild and A. Waibel, School of Computer Science, CMU "Connected Letter Recognition with a Multi-State Time Delay Neural Network." SPOTLIGHT 6: COMPLEXITY, LEARNING & GENERALIZATION 2 P&S.6.1. I. Guyon*, B. Boser%, and V. Vapnik* *AT&T Bell Labs, Holmdel, NJ %EE&CS Department, UC Berkeley "Automatic Capacity Tuning of Very Large VC-Dimension Classifiers" P&S.6.2. P.Y. Simard*, Y. LeCun*, and B. Pearlmutter% *AT&T Bell Labs, Holmdel, NJ %Yale University "Local Computation of the Second Derivative Information in a Multi-Layer Network." P&S. 6.3 H. Drucker, R. Schapire & P. Simard, AT&T Bell Labs "Improving Performance in Neural Networks Using a Boosting Algorithm." SPOTLIGHT 7: COGNITIVE SCIENCE P&S.7.1. M. C. Mozer and S. Das Department of Computer Science & Inst. of Cognitive Science, Univ. of Colorado, Boulder, CO "A Connectionist Chunker that Induces the Structure of Context-Free Languages." SPOTLIGHT 8: IMPLEMENTATIONS, APPLICATIONS, P&S.5.1. J. Lazzaro*, J. Wawrzynck*, M. Mahowald%, M. Sivilotti+, D. Gillespie$ *EE &CS, UC Berkeley %Computation and Neural Sciences, Cal Tech +Computer Science, Cal. Tech. and Tanner Research, Pasadena, CA $Computer Science, Cal. Tech. and Synaptics, San Jox, CA "Silicon Auditory Processors as Computer Peripherals." P&S.5.2. C. Koch*, B. Mathur%, S-C Liu+, J. G. Harris+, J. Luo and M. Sivilotti$ *Computation and Neural Systems, Cal. Tech. %Rockwell Intl. Science Center, Thousand Oaks, CA +Al Lab, MIT $Tanner Research, Pasadena, CA "Object-Based Analog VLSI Vision Circuits." P&S.5.3. J. Alspector, R. Meir, B. Yuhas, A. Jayakumar Bellcore, Morristown, NJ "A Parallel Gradient Descent Method for Learning in Analog VLSI Neural Networks." P&S.5.4. A. C. Tsoi, D. S. C. So, and A. Sergejew Department of Elec. Eng., University of Queensland, Australia "Classification of Electroencephalogram Using Artificial Neural Networks." P&S.5.5. Y. Salu, Physics Department, and CSTEA, Howard University "Classification of Satelite Multi-Spectral Image Data by the Binary Diamond Neural Network." NIPS '92 FINAL POSTER SESSIONS 1 & 2 TUESDAY EVENING: SESSION 1 COMPLEXITY, LEARNING AND GENERALIZATION 1 "Optimal Depth Neural Networks for Multiplication and Related Problems." Kai-Yeung Siu, Department of Elec. & Comp. Eng, UC Irvine Vwani Roychowdhury, School of Elec. Eng., Purdue University "Initial Complexity of Large Networks and Its Effect on Generalization." Chuanyi Ji, Department of Eled, Comp. & System Eng., Rensselaer Polytechnic Inst., Troy, NY "Using Hints to Successfully Learn Context-Free Grammars with a Neural Network Pushdown Automaton." Sreerupa Das, Dept. of Computer Science, Univ. of Colorado, Boulder, CO C. Lee Giles, NEC Richard Institute, Princeton, NJ Guo-Zheng Sun, Inst. for Advanced Computer Studies, Univ. 
of MD "Interposing an Ontogenic Model Between Genetic Algorithms and Neural Networks." Richard K. Belew, Cognitive Comp. Science Research Group, UC San Diego "Combining Neural and Symbolic Learning to Revise Probabilistic Rule Bases." J. Jeffrey Mahoney and Raymond J. Mooney, Dept. of Computer Science, University of Texas, Austin, TX "Learning Sequential Tasks by Incrementally Adding Higher Orders." Mark Ring, Dept. of Computer Sciences, University of Texas, Austin, TX "Kohonen Feature Maps and Growing Cell Structures -- A Performance Comparison." Bernard Fritzke, Universitat Erlangen-Nurnberg, Lehrstuhl fur Programmiersprachen, Erlangen, Germany "Latticed RBF Networks: An Alternative to Constructive Methods." Brian Bonnlander & Michael C. Mozer, Department of Computer Science & Institute of Cognitive Science, University of Colorado, Boulder, CO "A Boundary Hunting Radial Basis Function Classifier which Allocates Centers Constructively." Eric I. Chang & Richard P. Lippmann, MIT Lincoln Laboratory, Lexington, MA "How Hints affect Learning" Yaser Abu-Mostafa, Dept of Electrical Engineering & Computer Science, California Institute of Technology, Pasadena, CA CONTROL, NAVIGATION & PLANNING "Explanation-Based Neural Network Learning for Robot Control." Tom M. Mitchell & Sebastian B. Thrun, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA "Reinforcement Learning Applied to Linear Quadratic Regulation." Steven J. Bradtke, Department of Computer & Information Science, University of Massachusetts, Amherst, MA "Neural Network On-Line Learning Control of Spacecraft Smart Structure." Dr. Christopher Bowman, Ball Aerospace Systems Group, Boulder, CO "Integration of Visual and Somatosensory Information for Preshaping Hand in Grasping Movements." Yoji Uno*, Naohiro Fukumura%, Ryoji Suzuki%, and Mitsuo Kawato* *ATR Human Information Processing Research Laboratories, Kyoto, Japan %Faculty of Engineering, University of Tokyo, Tokyo, Japan "On-Line Estimation of the Optimal Value Function: HJB-Estimators." James K. Peterson, Department of Mathematical Sciences, Clemson University, Clemson, SC "Robust Control Under Extreme Uncertainty." Vijaykumar Gullapalli, CS Department, LGRC, University of Massachusetts, Amherst, MA "Trajectory Relaxation Learning for Approximation of Robot Inverse Dynamics." T. Sanger, MIT, Cambridge, MA "Learning Spatio-Temporal Planning from a Dynamic Programming Teacher: A Feed Forward Net for the Moving Obstacle Avoidance Problem." G. Fahner and R. Eckmiller, Department of Biophysics, Division of Biocybernetics, Heinrich-Heine-University of Dusseldorf, Dusseldorf, Germany "Learning Fuzzy Rule-Based Neural Networks for Control." Rodney M. Goodman and Charles M. Higgins, Department of Electrical Engineering, Cal. Tech., Pasadena, CA VISUAL PROCESSING "Computation of Stereo Disparity for Transparent and for Opaque Surfaces." Suthep Madarasmi*, Daniel Kersten%, Ting-Cheun Pong* *Computer Science Department, %Department of Psychology, *Computer Science Department, University of Minnesota, Minneapolis, MN "Some Solutions to the Missing Feature Problem in Vision." Sabutai Ahmad and Volker Tresp, Seimens Research, Munich, Germany "Improving Convergence in Hierarchial Matching Networks for Object Recognition." Joachim Utans and Gene Gindi, Yale University "An LGN Model Which Mediates Communication Between Different Spatial Frequency Channels Through Feedback From Cortex." Carlos D. Brody, Computation and Neural Systems Program, Cal. 
Tech., Pasadena, CA "Unsmearing Visual Motion: Development of Long-Range Horizontal Intrinsic Connections." Kevin E. Martin and Jonathan A. Marshall, Department of Computer Science, University of North Carolina, Chapel Hill, NC "LandSat Image Analysis via a Texture Classification Neural Network." Hayit K. Greenspan and Rodney M. Goodman, Department of Electrical Engineering, Cal. Tech., Pasadena, CA "Computation of Ego-Motion from Optic Flow in Visual Cortex." Markus Lappe and Josef P. Rauschecker, National Institutes of Health Animal Center, NIMH, Poolesville, MD, and Max Planck Institute for Biological Cybernetics, Tubingen, Germany "Learning to See Where and What: A Backprop Net Trained to Make Saccades and Recognize Characters." Gale L. Martin, Mosfeq Rashid, David Chapman & James Pittman, MCC, Austin, TX STOCHASTIC LEARNING AND ANALYSIS "Bayesian Learning via Stochastic Dynamics.' Radford M. Neal, Department of Computer Science, University of Toronto, Toronto, Canada "Accelerating Learning Using Query by Committee." Yoav Freund*, H. Sebastian Seung%, and Naftali Tishby+ *Computer and Info. Sciences, UC Santa Cruz %Racah Inst. of Physics and Ctr. for Neural Computation, Hebrew University, Jerusalem +Department of Computer Science and Ctr. for Neural Computation, Hebrew University, Jerusalem "Synaptic Weight Noise During MLP Learning Enhances Fault-Tolerance." Alan F. Murray and Peter J. Edwards, Dept. of Electrical Engineering, University of Edinburgh, Scotland "Self-Stabilizing Hebbian Learning: Beyond Principal Components." Nicol N. Schraudolph* and Terrence J. Sejnowski% *Computer Science & Engr. Department, UC San Diego %Computational Neurobiology Laboratory, The Salk Institute, La Jolla, CA "Probability Densities and Basin-Hopping in Stochastic Learning." Todd K. Leen and Genevieve B. Orr, Department of Computer Science and Engineering, Oregon Graduate Institute of Science and Technology, Beaverton, OR "Information Theoretic Analysis of Connection Structure from Spike Trains." S. Shiono, S. Yamada, M. Nakashima, and Kenji Matsumoto, Central Research Laboratory, Mitsubishi Electric Corp., Hyogo, Japan "Statistical Mechanics of Learning in a Large Committee Machine." H. Schwarze and J. Hertz, The Niels Bohr Institute and Nordita, Copenhagen, Denmark "Probability Estimation from a Database Using a Gibbs Energy Model." John W. Miller and Rodney M. Goodman, Department of Electrical Engr., Cal. Tech., Pasadena, CA "On the Use of Evidence in Bayesian Reasoning." David H. Wolpert, The Santa Fe Institute, Santa Fe, NM NETWORK DYNAMICS & CHAOS "Destabilization and Route to Chaos in Neural Networks with Random Connectivity." B. Doyon*, B. Cessac%+, M. Quoy%$, M. Samuelides%$ *Unite INSERM 230, Service de Neurologie, CHU Purpan, ToulouseCedex, France %Centre d'Etudes et de Recherches de Toulouse, Toulouse Cedex, France +Laboratoire de Physique Quantique, Universite Paul Sabatier, Toulouse Cedex, France $Ecole Nationale Superieure de l'Aeronautique et de l'Espace, Toulouse Cedex, France "Predicting Complex Behavior in Space Asymmetric Networks.' Ali A. Minai and William B. Levy, Department of Neurosurgery, University of Virginia, Charlottesville, VA "Single-iteration Threshold Hamming Networks." I. Meilijosn, E. Ruppin, M. Sipper, School of Mathematical Sciences, Tel Aviv University, Tel Aviv, Israel "History-Dependent Dynamics in Attractor Neural Networks: A Bayesian Approach." 
Isaac Meilijson and Eytan Ruppin, School of Mathematical Sciences, Tel Aviv University, Tel Aviv, Israel "Bifurcation Analysis of a Coupled Neural Oscillator System With Application to Visual Cortex Modeling." Galina N. Borisyuk, Roman M. Borisyuk, Alexander I. Khibnik, Institute of Mathematical Problems of Biology, Russian Academy of Sciences, Pushchino, Russia "Non-Linear Dimensionality Reduction." David DeMers and Garrison Cottrell, Department of Computer Science, UC San Diego, La Jolla, CA THEORY AND ANALYSIS "On Learning m-Perceptron Networks with Binary Weights." Mostefa Golea*, Mario Marchand* and Thomas R. Hancock% *Ottawa-Carleton Institute for Physics, University of Ottawa, Ottawa, Canada %Aiken Computation Laboratory, Harvard University, Cambridge, MA "Neural Network Model Selection Using Asymptotic Jackknife Estimator and Cross-Validation Method." Yong Liu, Department of Physics and Center for Neural Science, Brown University, Providence, RI "Learning Curves, Model Selection and Complexity of Neural Networks." Noboru Murata, Shuji Yoshizawa, and Shun-ichi Amari, Department of Mathematical Engineering and Information Physics, University of Tokyo, Japan "The Power of Approximating: A Comparison of Activation Functions." Bhaskar DasGupta and Georg Schnitger, Department of Computer Science, The Pennsylvania State University, University Park, PA "Rational Parameterizations of Neural Networks." Uwe Helmke* and Robert C. Williamson% *Department of Mathematics, University of Regensburg, Regensburg, Germany %Department of Systems Engineering, Australian National University, Canberra, Australia "Learning Cellular Automaton Dynamics with Neural Networks." N. H. Wulff and J. A. Hertz, CONNECT, The Niels Bohr Institute and Nordita, Copenhagen, Denmark "Some Estimations of Necessary Number of Connections and Hidden Units for Feed Forward Networks." Adam Kowalczyk, Telecom Australia, Research Laboratories, Victoria, Australia WEDNESDAY EVENING: SESSION 2 COMPLEXITY, LEARNING AND GENERALIZATION 2 "Automatic Capacity Tuning of Very Large VC-Dimension Classifiers." I. Guyon, B. Boser*, V. Vapnik, AT&T Bell Laboratories, Holmdel, NJ *currently in EECS Department, UC Berkeley, CA "Local Computation of the Second Derivative Information in a Multi-Layer Network." Patrice Y. Simard, Yann Le Cun and Barak Pearlmutter* AT&T Bell Laboratories, Holmdel, NJ *Yale University, New Haven, CT "Improving Performance in Neural Networks Using a Boosting Algorithm." H. Drucker, R. Schapire & P. Simard, AT&T Bell Labs, Holmdel, NJ "Learning Classification With Few Labelled Examples." Joel Ratsaby and Santosh S. Venkatesh, Department of Electrical Engineering, University of Pennsylvania, Philadelphia, PA "Second Order Derivatives for Network Pruning: Optimal Brain Surgeon." Babak Hassibi and David G. Stork, Ricoh California Research Center, Menlo Park, CA, and Department of Electrical Engineering, Stanford University, Stanford, CA "Directional-Unit Boltzmann Machines." Richard S. Zemel, Christopher K. I. Williams and Michael C. Mozer* Computer Science Department, University of Toronto, Toronto, Canada *Computer Science Department, University of Colorado, Boulder, CO "Applying Classical Optimization Techniques to Neural Network Testing." Dr. Scott A. Markel and Dr. Roger L. Crane, David Sarnoff Research Center, Princeton, NJ "Time Warping Invariant Neural Networks." G. Z. Sun, H. H. Chen, Y. C. Lee and Y. D. 
Liu, Institute for Advanced Computer Studies / Laboratory for Plasma Research, University of Maryland, College Park, MD "Generalization Abilities of Cascade Network Architectures." E. Littmann and H. Ritter, Department of Computer Science, Bielefeld University, Bielefeld, Germany "Assessing and Improving Neural Network Predictions by the Bootstrap Algorithm." Gerhard Paa', German National Research Center for Computer Science, Augustin, Germany "Discriminability-Based Transfer between Neural Networks." L. Y. Pratt, Department of Matheamatics and Computer Science, Colorado School of mines, Golden, CO "Summed Weight Neuron Perturbation: An O(N) Improvement over Weight Perturbation." Barry Flower and Marwan Jabri, SEDAL, Department of Electrical Engineering, University of Sydney, Australia "Supervised Clustering." Virginia de Sa and Dana Ballard, Computer Science Department, University of Rochester, Rochester, NY "Extended Regularization Methods for Nonconvergent Model Selection." W. Finnoff, F. Hergert and H. G. Zimmerman, Siemans AG, Corporate Research and Development, Munich, Germany "Synchronization and Gramatical Inference in an Oscillating Elman Net." Bill Baird* and Frank Eeckman% *Department of Mathematics, UC Berkeley, CA %O-Division, Lawrence Livermore National Laboratory, Livermore, CA "Training Hidden Units in Reinforcement Learning Networks." Charles W. Anderson, Department of Computer Science, Colorado State University, Fort Collins, CO "Nets with Unreliable Hidden Nodes Learn Error-Correcting Codes." Stephen Judd and Paul Munro, Seimens Corporate Research, Princeton, NJ, and Department of Information Science, University of Pittsburgh, PA "A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization." Gert Cauwenberghs, Cal. Tech., Pasadena, CA SPEECH AND SIGNAL PROCESSING "Modeling Consistency in a Speaker Independent Continuous Speech Recognition System." Yochai Konig*, Nelson Morgan*, Chuck Wooters*, Victor Abrash%, Michael Cohen%, and Horacio Franco% *International Computer Science Institute, Berkeley, CA %SRI International, Menlo Park, CA "A Hybrid Linear/Nonlinear Approach to Channel Equalization Problems." Wei-Tsih Lee*, John C. Pearson*, and Manoel F. Tenorio% *David Sarnoff Research Center, Princeton, NJ %Purdue University, School of Electrical Engineering, West Lafayette, IN "Transient Detection Using Neural Networks: The Search for the Desired Signal." Abir Zahalka and Jose C. Principe, Computational NeuroEngineering Laboratory, University of Florida, Gainesville, FL "Performance Through Consistency: MS-TDNN's for Large Vocabulary Continuous Speech Recognition." Joe Tebelskis and Alex Waibel, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA "Speech Recognition Using Segmental Neural Nets with the N-Best Paradigm." G. Zavaliagkos, S. Austin, J. Makhous and R. Schwartz, BBN Systems and Technologies, Cambridge, MA "Connected Letter Recognition with a Multi-State Time Delay Neural Network." Hermann Hild and Alex Waibel, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA "Classification of Electroencephalogram Using Artificial Neural Networks." A. C. Tsoi, D. S. C. So, and A. Sergejew, Department of Electrical Engineering, University of Queensland, Queensland, Australia "Classification of Satellite Multi-Spectral Image Data by the Binary Diamond Neural Network." Yehuda Salu, The Physics Department and CSTEA, Howard University, Washington, DC "Silicon Auditory Processors as Computer Peripherals." 
John Lazzaro*, John Wawrzynek*, M. Mahowald%, Massimo Sivilotti+, and Dave Gillespie+ *Computer Science Division, UC Berkeley, CA %Computation and Neural Sciences, Cal. Tech, Pasadena, CA +Computer Science, Cal. Tech., Pasadena, CA "Object-Based Analog VLSI Vision Circuits." Christof Koch*, Bimal Mathur%, Shih-Chii Liu+, John G. Harris$, Jin Luo and Missimo Sivilotti$ *Computation and Neural Systems, Cal. Tech., Pasadena, CA %Rockwell International Science Center, Thousand Oaks, CA +Artificial Intelligence Laboratory, MIT, Cambridge, MA $Tanner Research, Pasadena, CA "A Parallel Gradient Descent Method for Learning in Analog VLSI Neural Networks." Joshua Alspector, Ronny Meir, Ben Yuhas, Anthony Jayakumar, Bellcore, Morristown, NJ APPLICATIONS "Dynamic Planar Warping and Planar Hidden Markov Modeling: From Speech to Optical Character Recognition." Esther Levin and Roberto Pieraccini, AT&T Bell Laboratories, Murray Hill, NJ "Forecasting Demand for Electric Power." Terrence L. Fine and Jen-Lun Yuan, School of Electrical Engineering, Cornell University, Ithaca, NY "Adaptive Algorithms for Multiple Sequence Alignments." Pierre Baldi*, Tim Hunkapiller*, Yves Chauvin%, and Marcella McClure+ *Cal. Tech, Pasadena, CA %Net-ID, Inc. +UC, Irvine "A Neural Network that Learns to Interpret Myocardial Planar Thallium Scintigrams." Charles Rosenberg*, Jacob Erel%, and Henri Atlan% *Department of Computer Science, Hebrew University, Jerusalem, Israel %Department of Biophysics and Nuclear Medicine, Hadassah Medical Center, Jeruslaem, Israel IMPLEMENTATIONS "An Analog VLSI Chip for Local Velocity Estimation Based on Reichardt's Motion Algorithm." Rahul Sarpeshkar, Wyeth Bair and Christof Koch, Department of Computation and Neural Systems, Cal. Tech., Pasadena, CA "Analog VLSI Implementation of Gradient Descent." David Kirk, Douglas Kerns, Kurt Fleischer, Alan Barr Cal. Tech., Pasadena, CA "An Object-oriented Framework and its Implementation for the Simulation of Neural Nets." Alexander Linden and Christoph Tietz, AI Research Division, German National Research Center For Computer Science, Augustin, Germany "Attractor Neural Networks with Local Inhibition." L. D'Alessandro*, E. Pasero*, and R. Zecchina% *Dipart. Elettronica, Politenico di Torino %Dipart. Fisica Teorica, Universita di Torino "Biological Neurons and Model Neurons: Construction and Study of Hybrid Networks." G. Le Masson, S. Renaud-Le Masson, E. Marder, and L. F. Abbot Department of Biology and Physics and Center for Complex Systems, Brandeis University, Waltham, MA COGNITIVE SCIENCE "A Connectionist Chunker that Induces the Structure of Context-Free Languages." Michael C. Mozer and Sreerupa Das, Department of Computer Science and Institute of Cognitive Science, University of Colorado, Boulder, CO "Network Structuring and Training Using Rule-Based Knowledge." Volker Tresp*, Jurgen Hollatz%, and Subutai Ahmad* *Siemens AG, Central Research and Development, Munich, Germany %Institut fur Informatik, Munich Germany "A Dynamic Model of Priming and Repetition Blindness.' Daphne Bavelier and Michael I. Jordan, Department of Brain and CCognitive Sciences, MIT, Cambridge, MA "A Knowledge-Based Model of Geometry Learning." Geoffrey Towell* and Richard Lehrer% *Siemens Corporate Research, Princeton, NJ %Educational Psychology, University of Wisconsin, Madison, WI "Representing Meaning With Activation Gestalts." 
Hinrich Schutze, CSLI, Stanford, CA "Perceiving Complex Visual Scenes: An Oscillator Neural Network Model that Integrates Location-Based Attention, Perceptual Organization, and Object-Based Selection." Rainer Goebel, Department of Psychology, University of Braunschweig, Braunschweig, Germany COMPUTATIONAL AND THEORETICAL NEUROBIOLOGY "Statistical Modeling of Cell-Assembly Activities in Prefrontal Cortex of Behaving Monkeys." Itay Gutterman and Naftali Tishby, Department of Computer Science and Center for Neural Computation, Hebrew University, Jerusalem, Israel "Towards Unambiguous Derivation of Receptive Fields Using a New Optimal-Encoding Criterion." Ralph Linsker, IBM, T. J. Watson Research Center, Yorktown Heights, NY "Biologically Plausible Learning Rules for the Vestibulo-Ocular Reflex (VOR)." Oliver Coenen*, Terrence J. Sejnowski*, and Stephen G. Lisberger% *Computational Neurobiology Laboratory, The Salk Institute, La Jolla, CA %Department of Physiology, W. M. Keck Foundation Center for Integrative Neuroscience; and Neuroscience Graduate Program, UC San Francisco, CA "A Non-Hebbian LTP Learning Rule in Hippocampus Enables High-Capacity Temporal Sequence Encoding." Richard Granger, James W. Whitson, Jr., and Gary Lynch, Center for the Neurobiology of Learning and Memory, UC Irvine, CA "Using Aperiodic Reinforcement for Directed Self Organization." P. Read Montague, Steven J. Nowlan, Peter Dayan and Terrence J. Sejnowski, Computational Neurobiology Laboratory, The Salk Institute, San Diego, CA "Information Processing in Neocortical Pyramidal Cells." Bartlett W. Mel, Computation and Neural Systems Program, Cal. Tech., Pasadena, CA "How Oscillatory Neuronal Responses Reflect Bistability and Switching of the Hidden Assembly Dynamics." K. Pawelzik, H.-U. Bauer, J. Deppisch, and T. Geisel, Institut fur Theoretische Physik and SFP, Frankfurt, Germany "Topography and Ocular Dominance: A New Model that Explores Positive Between-Eye Correlations." Geoffrey Goodhill, University of Edinburgh, Centre for Cognitive Science, Edinburgh, Scotland "Statistical and Dynamical Interpretation of ISIH Data from Periodically Stimulated Sensory Neurons." Frank Moss* and Andre Longtin% *Department of Physics and Department of Biology, University of Missouri, St. Louis, MO %Department of Physics, University of Ottawa, Canada "Modelling Movement Disorders with Cascaded Jordan Networks." Alexander Britain*, Gordon D. A. Brown*, Michael Malloch* and Ian J. Mitchell% *Cognitive Neurocomputation Unit, Dept. of Psychology, University of Wales, Bangor, United Kingdom %Department of Cell and Structural Biology, Manchester, United Kingdom "Spiral Waves in Integrate-And-Fire Neural Networks." John G. Milton*, Po Hsiang Chu% and Jack D. Cowan+ *Department of Neurology, University of Chicago, Chicago, IL %Department of Computer Science, De Paul University, Chicago, IL +Department of Mathematics, University of Chicago, Chicago, IL "Parameterising Feature Sensitive Cell Formation in Linsker Networks." L. C. Walton and D. L. Bisset, Electronic Engineering Laboratories, University of Kent, United Kingdom "A Recurrent Neural Network for Generation of Ocular Saccades." Lina L. E. Massone, Departments of Physiology and Electrical Engineering and Computer Science, Northwestern University, Chicago, IL "A Formal Model of the Insect Olfactory Macroglomerulus." C. Linster*, C. Masson%, M. Kerszberg+, L. Personnaz*, and G. 
Dreyfus* *Ecole Superieure de Physique et de Chimie Industrielles de la Ville de Paris, Laboratoire d'Electronique, Paris, France %Laboratoire de Neurobiologie Comparees des Invertebres, INRA/CNRS, Bures-sur-Yvette, France +Institut Pasteur, Paris, France "An Information-Theoretic Approach to Deciphering the Hippocampal Code." William E. Skaggs, Bruce L. McNaughton, Katalin M. Gothard, Etan J. Markus, ARL Division of Neural Systems, Memory and Aging, University of Arizona, Tucson, AZ From haussler at cse.ucsc.edu Fri Oct 2 14:52:12 1992 From: haussler at cse.ucsc.edu (David Haussler) Date: Fri, 2 Oct 1992 11:52:12 -0700 Subject: Tech report available on hidden Markov models for proteins Message-ID: <199210021852.AA19416@arapaho.ucsc.edu> University of California at Santa Cruz Department of Computer and Information Sciences The following technical report is available electronically or as a paper copy. Instructions for getting either follow the abstract. PROTEIN MODELING USING HIDDEN MARKOV MODELS: ANALYSIS OF GLOBINS David Haussler, Anders Krogh, Saira Mian, Kimmen Sjolander UCSC-CRL-92-23 (available electronically as ucsc-crl-92-23.ps.Z) June 1992, revised September 1992 (Shorter version will appear in Proc. of 26th Hawaii Int. Conf. on System Sciences, Biocomputing technology track, Jan. 5-8, 1993) Abstract: We apply Hidden Markov Models (HMMs) to the problem of statistical modeling and multiple alignment of protein families. In a detailed series of experiments, we have taken 625 unaligned globin sequences from the Swiss Protein database, and produced a statistical model entirely automatically from the primary (unaligned) sequences using no prior knowledge of globin structure. The produced model includes all the known positions in the 7 major alpha-helices, along with the distribution for the 20 amino acids for each of these positions, as well as the probability of and average length of insertions between these positions, and the probability that each position is not present at all. Using this model, we obtained a multiple alignment of all 625 sequences that agrees almost perfectly with the structural alignment given in [1]. In our tests, we have found that 400 of the 625 globins (selected at random) are enough to produce a model of the same quality. This model based on 400 globins can discriminate the remaining (225) globins from nonglobin protein sequences with greater than 99% accuracy, and can thus be used for database searches. The method we use to obtain the statistical model from the unaligned sequences is a variant of the Expectation Maximization (EM) algorithm known as the Viterbi algorithm. This method starts with an initial "neutral" model (same amino acid distribution in each position, fixed probabilities for insertions and deletions), optimally aligns the training sequences to this model (using dynamic programming), and then reestimates the probability parameters of the model. These last two steps are iterated until no further changes are made. A simple heuristic is used to automatically adjust the number of positions that are modeled by deleting positions that are not being used and inserting new positions where needed. After this, we then iterate the whole process above again on the new model. Our method is more general and more flexible than previous applications of HMMs and the EM algorithm to alignment and modeling problems in molecular biology.
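The iterated align/re-estimate loop described in the abstract can be illustrated with a minimal sketch of Viterbi training for a generic discrete HMM. This is only an illustration under simplifying assumptions: it omits the profile architecture (position-specific match, insert and delete states) and the position-adjustment heuristic used in the report, and all function names and toy data below are invented for the example.

import numpy as np

def viterbi_path(obs, log_A, log_B, log_pi):
    # Most likely state sequence for one observation sequence (dynamic programming).
    n_states, T = log_A.shape[0], len(obs)
    delta = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]: best path ending in i, then i -> j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

def viterbi_train(seqs, n_states, n_symbols, n_iter=20, pseudo=1.0, seed=0):
    # Step 1: align every sequence to the current model with the Viterbi algorithm.
    # Step 2: re-estimate transition/emission probabilities from those alignments.
    # The two steps are iterated, as in the abstract above.
    rng = np.random.default_rng(seed)
    A = rng.dirichlet(np.ones(n_states), size=n_states)    # transition matrix
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)   # emission matrix
    pi = np.full(n_states, 1.0 / n_states)                  # initial-state distribution
    for _ in range(n_iter):
        tc = np.full((n_states, n_states), pseudo)          # transition counts
        ec = np.full((n_states, n_symbols), pseudo)         # emission counts
        pc = np.full(n_states, pseudo)                      # initial-state counts
        for obs in seqs:
            path = viterbi_path(obs, np.log(A), np.log(B), np.log(pi))
            pc[path[0]] += 1
            for t, s in enumerate(path):
                ec[s, obs[t]] += 1
                if t + 1 < len(path):
                    tc[s, path[t + 1]] += 1
        A = tc / tc.sum(axis=1, keepdims=True)
        B = ec / ec.sum(axis=1, keepdims=True)
        pi = pc / pc.sum()
    return A, B, pi

if __name__ == "__main__":
    # Toy data: sequences over a 4-letter alphabet drawn from two different biases.
    rng = np.random.default_rng(1)
    seqs = [list(rng.choice(4, size=12, p=[0.7, 0.1, 0.1, 0.1])) for _ in range(30)] + \
           [list(rng.choice(4, size=12, p=[0.1, 0.1, 0.1, 0.7])) for _ in range(30)]
    A, B, pi = viterbi_train(seqs, n_states=2, n_symbols=4)
    print("Re-estimated emission distributions per state:")
    print(B.round(2))

The pseudocount term keeps re-estimated probabilities away from zero, which is the usual safeguard when hard Viterbi counts stand in for the full EM expectations.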
This technical report is available electronically through either of the following methods: 1. through anonymous ftp from ftp.cse.ucsc.edu, in /pub/tr. Log in as "anonymous", use your email address as your password, specify "binary" before getting the file. Uncompress before printing. 2. by mail to automatic mail server rnalib at ftp.cse.ucsc.edu. Put this command on the subject line or in the body of the message: @@ send ucsc-crl-92-23.ps.Z from tr To get the index or abstract list: @@ send INDEX from tr @@ send ABSTRACTS.1992 from tr To get the list of the tr directory: @@ list tr To get the list of commands and their syntax: @@ help commands Order paper copies from: Technical Library, Baskin Center for Computer Engineering & Information Sciences, UCSC, Santa Cruz CA 95064. Questions: jean at cse.ucsc.edu From jagota at cs.Buffalo.EDU Fri Oct 2 15:28:53 1992 From: jagota at cs.Buffalo.EDU (Arun Jagota) Date: Fri, 2 Oct 92 15:28:53 EDT Subject: Report on optimization using NNs and KC Message-ID: <9210021928.AA06796@sybil.cs.Buffalo.EDU> *** DO NOT POST TO OTHER BULLETIN BOARDS *** The following report may be of interest for: * Combinatorial Optimization (Maximum Clique) via neural nets * Kolmogorov Complexity; Universal Prior Distribution; generating "hard" instances for optimization problems * A scheme for generating compressible binary vectors motivated by Kolmogorov Complexity ideas. Source code is offered and may be used to generate compressible test data for any application whose instances directly or indirectly utilize binary vectors; comparison of performance on such test data vs., say, data from the uniform distribution may be useful, as below. -------- Performance of MAX-CLIQUE Approximation Heuristics Under Description-Length Weighted Distributions Arun Jagota Kenneth W. Regan Technical Report Department of Computer Science State University of New York at Buffalo We study the average performance of several neural-net heuristics applied to the problem of finding the size of the largest clique in an undirected graph. This function is NP-hard even to approximate within a constant factor in the worst case, but the heuristics we study are known to do quite well on average for instances drawn from the uniform distribution on graphs of size n. We extend a theorem of M. Li and P. Vitanyi to show that for instances drawn from the "universal distribution" m(x), the average-case performance of any approximation algorithm has the same order as its worst-case performance. The universal distribution is not computable or samplable. However, we give a realistic analogue q(x) which lends itself to efficient empirical testing. Our results so far are: out of nine heuristics we tested, three did markedly worse under q(x) than under uniform distribution, but six others revealed little change.
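As a concrete illustration of what "compressible" means here, one simple way to produce low-description-length binary vectors is to expand a short random seed with a fixed deterministic generator: the vector then costs roughly the seed length (plus the constant size of the expander) to describe. This sketch is only an assumed illustration, not necessarily the scheme implemented in the authors' offered source code, and the function name is invented for the example.

import random

def compressible_vector(n, k):
    # Expand a random k-bit seed into an n-bit vector with a fixed pseudo-random
    # generator. The vector is a deterministic function of the seed, so its
    # description length is about k bits plus the (constant) size of the expander.
    seed = random.getrandbits(k) if k > 0 else 0
    gen = random.Random(seed)
    return [gen.randrange(2) for _ in range(n)]

if __name__ == "__main__":
    n = 64
    for k in (2, 8, 64):
        draws = {tuple(compressible_vector(n, k)) for _ in range(1000)}
        print("k = %2d seed bits -> %4d distinct %d-bit vectors in 1000 draws" % (k, len(draws), n))

Small k concentrates the probability mass on a handful of highly compressible vectors, while k equal to the vector length recovers the uniform distribution; varying k, or drawing it from a distribution favoring small values, gives a crude samplable stand-in for weighting instances by description length, in the spirit of the q(x) analogue mentioned above.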
HOW TO ACCESS: -------------- ftp ftp.cs.buffalo.edu (or 128.205.32.9, subject to change) Name : anonymous > cd users/jagota > get > quit : KCC.ps, KCC.dvi (* Same, but some people have had problems printing our postscript in the past; `KCC.dvi' may require `binary' mode in ftp *) : nlt.README (* Contains documentation and instructions for our compressible string generation code *) If ftp is a problem, the report may also be obtained by sending e-mail to jagota at cs.buffalo.edu Arun Jagota *** DO NOT POST TO OTHER BULLETIN BOARDS *** From jose at tractatus.siemens.com Fri Oct 2 14:41:40 1992 From: jose at tractatus.siemens.com (Steve Hanson) Date: Fri, 2 Oct 1992 14:41:40 -0400 (EDT) Subject: NIPS*92 WORKSHOP PROGRAM Message-ID: NIPS*92 WORKSHOP PROGRAM For further information and queries on a workshop, please respond to the WORKSHOP CHAIRPERSONS listed below ========================================================================= Character Recognition Workshop Organizers: C. L. Wilson and M. D. Garris, NIST Abstract: In order to discuss recent developments and research in OCR technology, six speakers have been invited to share their organization's own perspective on the subject. Those invited represent a diversified group of organizations actively developing OCR systems. Each speaker participated in the first OCR Systems Conference sponsored by the Bureau of the Census and hosted by NIST. Therefore, the impressions and results gained from the conference should provide significant context for discussions. Invited presentations: C. L. Wilson, NIST, "Census OCR Results - Are Neural Networks Better?" T. P. Vogl, ERIM, "Effect of Training Set Size on OCR Accuracy" C. L. Scofield, Nestor, "Multiple Network Architectures for Handprint and Cursive Recognition" A. Rao, Kodak, "Directions in OCR Research and Document Understanding at Eastman Kodak Company" C. J. C. Burges, ATT, "Overview of ATT OCR Technology" K. M. Mohiuddin, IBM, "Handwriting OCR Work at IBM Almaden Research Center" ========================================================================= Neural Chips: State of the Art and Perspectives. Organizer: Eros Pasero pasero at polito.it Abstract: We will encourage lively audience discussion of important issues in neural net hardware, such as: - Taxonomy: neural computer, neural processor, neural coprocessor - Digital vs. Analog: limits and benefits of the two approaches. - Algorithms or neural constraints? - Neural chips implemented in universities - Industrial chips (e.g. Intel, AT&T, Synaptics) - Future perspectives Invited presentations: TBA ========================================================================= Reading the Entrails: Understanding What's Going On Inside a Neural Net Organizer: Scott E. Fahlman, Carnegie Mellon University fahlman at cs.cmu.edu Abstract: Neural networks can be viewed as "black boxes" that learn from examples, but often it is useful to figure out what sort of internal knowledge representation (or set of "features") is being employed, or how the inputs are combined to produce particular outputs. There are many reasons why we might seek such understanding: It can tell us which inputs really are needed and which are the most critical in producing a given output. It can produce explanations that give us more confidence in the network's decisions. It can help us to understand how the network would react to new situations. It can give us insight into problems with the network's performance, stability, or learning behavior. Sometimes, it's just a matter of scientific curiosity: if a network does something impressive, we want to know how it works. 
In this workshop we will survey the available techniques for understanding what is happening inside a neural network, both during and after training. We plan to have a number of presenters who can describe or demonstrate various network-understanding techniques, and who can tell us what useful insights were gained using these techniques. Where appropriate, presenters will be encouraged to use slides or videotape to illustrate their favorite methods. Among the techniques we will explore are the following: Diagrams of weights, unit states, and their trajectories over time. Diagrams of the receptive fields of hidden units. How to create meaningful diagrams in high-dimensional spaces. Techniques for extracting boolean or fuzzy rule-sets from a trained network. Techniques for extracting explanations of individual network outputs or decisions. Techniques for describing the dynamic behavior of recurrent or time-domain networks. Learning pathologies and what they look like. Invited presentations: Still to be determined. The workshop organizer would like to hear from potential speakers who would like to give a short presentation of the kind described above. Techniques that have proven useful in real-world problems are especially sought, as are short videotape segments showing network behavior. ========================================================================= COMPUTATIONAL APPROACHES TO BIOLOGICAL SEQUENCE ANALYSIS-- NEURAL NET VERSUS TRADITIONAL PERSPECTIVES Organizers: Paul Stolorz, Santa Fe Institute and Los Alamos National Lab Jude Shavlik, University of Wisconsin. Abstract: There has been a good deal of recent interest in the use of neural networks to tackle several important biological sequence analysis problems. These problems range from the prediction of protein secondary and tertiary structure, to the prediction of DNA protein coding regions and regulatory sites, and the identification of homologies. Several promising developments have been presented at NIPS meetings in the past few years by researchers in the connectionist field. Furthermore, a number of structural biologists and chemists have been successfully using neural network methods. The sequence analysis applications encompass a rather large amount of neural network territory, ranging from feedforward architectures to recurrent nets, Hidden Markov Models and related approaches. The aim of this workshop is to review the progress made by these disparate strands of endeavor, and to analyze their respective strengths and weaknesses. In addition, the intention is to compare the class of neural network methods with alternative approaches, both new and traditional. These alternatives include knowledge based reasoning, standard non-parametric statistical analysis, Hidden Markov models and statistical physics methods. We hope that by careful consideration and comparison of neural nets with several of the alternatives mentioned above, methods can be found which are superior to any of the individual techniques developed to date. This discussion will be a major focus of the workshop, and we both anticipate and encourage vigorous debate. Invited presentations: Jude Shavlik, U. Wisconsin: Learning Important Relations in Protein Structures Gary Stormo, U. Colorado: TBA Larry Hunter, National Library of Medicine: Bayesian Clustering of Protein Structures Soren Brunak, DTH: Network analysis of protein structure and the genetic code David Haussler, U.C.
Santa Cruz: Modeling Protein Families with Hidden Markov Models Paul Stolorz and Joe Bryngelson, Santa Fe Institute and Los Alamos: Information Theory and Statistical Physics in Protein Structures ========================================================================= Statistical Regression Methods and Feedforward Nets Organizers: Lei Xu, Harvard Univ. and Adam Krzyzak, Concordia Univ. Abstract: Feedforward neural networks are often used for function approximation, density estimation and pattern classification. These tasks are also the purposes of statistical regression methods. Some methods used in the literature of neural networks and the literature of statistical regression are the same, some are different, and some have close relations. Recently, the connections between the methods in the two literatures have been explored from a number of aspects, e.g., (1) connecting feedforward nets to parametric statistical regression for theoretical studies about multilayer feedforward nets; (2) relating the performance of feedforward nets to the trade-off between bias and variance in nonparametric statistics; (3) connecting Radial Basis Function nets to nonparametric kernel regression to get several new theoretical results on the approximation ability, convergence rate and receptive field size of Radial Basis Function networks; (4) using the VC dimension to study the generalization ability of multilayer feedforward nets; (5) using other statistical methods such as projection pursuit, cross-validation, the EM algorithm, CART and MARS for training feedforward nets. Not only are there still many interesting and open issues to be explored in the aspects mentioned above, but the literature of statistical regression also contains many other methods and theoretical results on both nonparametric and parametric regression (e.g., L1 kernel estimation, etc.). Invited presentations: Presentations will include arranged talks and submissions. Submissions can be sent to either of the two organizers by email before Nov. 15, 1992. Each submission can be an abstract of 200--400 words. ========================================================================= Computational Models of Visual Attention Organizer: Pete Sandon, Dartmouth College Abstract: Visual attention refers to the process by which some part of the visual field is selected over other parts for preferential processing. The details of the attentional mechanism in humans have been the subject of much recent psychophysical experimentation. Along with the abundance of new data, a number of theories of attention have been proposed, some in the form of computational models simulated on computers. The goal of this workshop is to bring together computational modelers and experimentalists to evaluate the status of current theories and to identify the most promising avenues for improving understanding of the mechanisms and behavioral roles of visual attention.
Invited presentations: Pete Sandon "The time course of selection" John Tsotsos "Inhibitory beam model of visual attention" Kyle Cave "Mapping the Allocation of Spatial Attention: Knowing Where Not to Look" Mike Mozer "A principle for unsupervised decomposition and hierarchical structuring of visual objects" Eric Lumer "On the interaction between perceptual grouping, object selection, and spatial orientation of attention" Steve Yantis "Mechanisms of human visual attention: Bottom-up and top-down influences" ========================================================================= Comparison and Unification of Algorithms, Loss Functions and Complexity Measures for Learning Organizers: Isabelle Guyon, Michael Kearns and Esther Levin, AT&T Bell Labs Abstract: The purpose of the workshop is to clarify and unify the relationships between many well-studied learning algorithms, loss functions, and combinatorial and statistical measures of learning problem complexity. Many results investigating the principles underlying supervised learning from empirical observations have the following general flavor: first, a "general purpose" learning algorithm is chosen for study (for example, gradient descent or maximum a posteriori). Next, an appropriate loss function is selected, and the details of the learning model are specified (such as the mechanism generating the observations). The analysis results in a bound on the loss of the algorithm in terms of a "complexity measure" such as the Vapnik-Chervonenkis dimension or the statistical capacity. We hope that reviewing the literature with an explicit emphasis on comparisons between algorithms, loss functions and complexity measures will result in a deeper understanding of the similarities and differences of the many possible approaches to and analyses of supervised learning, and aid in extracting the common general principles underlying all of them. Significant gaps in our knowledge concerning these relationships will suggest new directions in research. Half of the available time has been reserved for discussion and informal presentations. We anticipate and encourage active audience participation. Each discussion period will begin by soliciting topics of interest from the participants for investigation. Thus, participants are strongly encouraged to think about issues they would like to see discussed and clarified prior to the workshop. All talks will be tutorial in nature. Invited presentations: Michael Kearns, Isabelle Guyon and Esther Levin: -Overview on loss functions -Overview on general purpose learning algorithms -Overview on complexity measures David Haussler: Overview on "Chinese menu" results ========================================================================= Activity-Dependent Processes in Neural Development Organizer: Adina Roskies, Salk Institute Abstract: This workshop will focus on the role of activity in setting up neural architectures. Biological systems rely upon a variety of cues, both activity-dependent and independent, in establishing their architectures. Network architectures have traditionally been pre-specified, but ongoing construction of architectures may endow networks with more computational power than static architectures do.
Biological issues such as the role of activity in development, the mechanisms by which it operates, and the type of activity necessary will be explored, as well as computational issues such as the computational value of such processes, the relation to Hebbian learning, and constructivist algorithms. Invited presentations: General Overview (Adina Roskies) The role of NMDA in cortical development (Tony Bell) Optimality, local learning rules, and the emergence of function in a sensory processing network (Ralph Linsker) Mechanisms and models of neural development through rapid volume signals (Read Montague) The role of activity in cortical development and plasticity (Brad Schlaggar) Computational advantages of constructivist algorithms (Steve Quartz) Learning, development, and evolution (Rik Belew) ========================================================================= DETERMINISTIC ANNEALING AND COMBINATORIAL OPTIMIZATION Organizer: Anand Rangarajan, Yale Univ. Abstract: Optimization problems defined on ``mixed variables'' (analog and digital) occur in a wide variety of connectionist applications. Recently, several advances have been made in deterministic annealing techniques for optimization. Deterministic annealing is a faster and more efficient alternative to simulated annealing. This workshop will focus on several of these new techniques (emerging in the last two years). Topics include improved elastic nets for the traveling salesman problem, new algorithms for graph matching, the relationship between deterministic annealing algorithms and older, more conventional techniques, applications in early vision problems like surface reconstruction, internal generation of annealing schedules, etc. Invited presentations: Alan Yuille, Statistical Physics algorithms that converge Chien-Ping Lu, Competitive elastic nets for TSP Paul Stolorz, Recasting deterministic annealing as constrained optimization Davi Geiger, Surface reconstruction from uncertain data on images and stereo images. Anand Rangarajan, A new deterministic annealing algorithm for graph matching ========================================================================= The Computational Neuron Organizer: Terry Sejnowski, Salk Institute (tsejnowski at ucsd.edu) Abstract: Neurons are complex dynamical systems. Nonlinear properties arise from voltage-sensitive ionic currents and synaptic conductances; branched dendrites provide a geometric substrate for synaptic integration and learning mechanisms. What can subthreshold nonlinearities in dendrites be used to compute? How do the time courses of ionic currents affect synaptic integration and Hebbian learning mechanisms? How are ionic channels in dendrites regulated? Why are there so many different types of neurons? These are a few of the issues that we will be discussing. In addition to short scheduled presentations designed to stimulate discussion, we invite members of the audience to present one-viewgraph talks to introduce additional topics. Invited presentations: Larry Abbott - Neurons as dynamical systems. Tony Bell - Self-organization of ionic channels in neurons. Tom McKenna - Single neuron computation. Bart Mel - Computing capacity of dendrites. ========================================================================= ROBOT LEARNING Organizers: Sebastian Thrun (CMU), Tom Mitchell (CMU), David Cohn (MIT) Abstract: Robot learning has captured the attention of many researchers over the past few years.
Previous robotics research has demonstrated the difficulty of manually encoding sufficiently accurate models of the robot and its environment to succeed at complex tasks. Recently a wide variety of learning techniques ranging from statistical calibration techniques to neural networks and reinforcement learning have been applied to problems of perception, modeling and control. Robot learning is characterized by sensor noise, control error, dynamically changing environments and the opportunity for learning by experimentation. This workshop will provide a forum for researchers active in the area of robot learning and related fields. It will include informal tutorials and presentations of recent results, given by experts in this field, as well as significant time for open discussion. Problems to be considered include: How can current learning robot techniques scale to more complex domains, characterized by massive sensor input, complex causal interactions, and long time scales? How can previously acquired knowledge accelerate subsequent learning? What representations are appropriate and how can they be learned? Invited speakers: Chris Atkeson Steve Hanson Satinder Singh Andrew W. Moore Richard Yee Andy Barto Tom Mitchell Mike Jordan Dean Pomerleau Steve Suddarth ========================================================================= Connectionist Approaches to Symbol Grounding Organizers: Georg Dorffner, Univ. Vienna; Michael Gasser, Indiana Univ. Stevan Harnad, Princeton Univ. Abstract: In recent years, there has been increasing discomfort with the disembodied nature of symbols that is a hallmark of the symbolic paradigm in cognitive science and artificial intelligence and at the same time increasing interest in the potential offered by connectionist models to ``ground'' symbols. In ignoring the mechanisms by which their symbols get ``hooked up'' to sensory and motor processes, that is, the mechanisms by which intelligent systems develop categories, symbolists have missed out on what is not only one of the more challenging areas in cognitive science but, some would argue, the very heart of what cognition is about. This workshop will focus on issues in neural network based approaches to the grounding of symbols and symbol structures. In particular, connectionist models of categorisation and of label-category association will be discussed in the light of the symbol grounding problem. Invited presentations: "Grounding Symbols in the Analog World of Objects: Can Neural Nets Make the Connection?" Stevan Harnad, Princeton University "Learning Perceptually Grounded Lexical Semantics" Terry Regier, George Lakoff, Jerry Feldman, ICSI Berkeley T.B.A. Gary Cottrell, Univ. of California, San Diego "Learning Perceptual Dimensions" Michael Gasser, Indiana University "Symbols and External Embodiments - why Grounding has to Go Two Ways" Georg Dorffner, University of Vienna "Grounding Symbols on Conceptual Knowledge" Philippe Schyns, MIT ========================================================================= Continuous Speech Recognition: Is there a connectionist advantage? Organizer: Michael Franzini (maf at cs.cmu.edu) Abstract: This workshop will address the following questions: How do neural networks compare to the alternative technologies available for speech recognition? What evidence is available to suggest that connectionism may lead to better speech recognition systems? 
What comparisons have been performed between connectionist and non-connectionist systems, and how ``fair'' are these comparisons? Which approaches to connectionist speech recognition have produced the best results, and which are likely to produce the best results in the future? Traditionally, the selection criteria for NIPS papers reflect a much greater emphasis on theoretical importance of work than on performance figures, despite the fact that recognition rate is one of the most important considerations for speech recognition researchers (and often is {\em the} most important factor in determining their financial support). For this reason, this workshop -- to be oriented more towards performance than methodology -- will be of interest to many NIPS participants. The issue of connectionist vs. HMM performance in speech recognition is controversial in the speech recognition community. The validity of past comparisons is often disputed, as is the fundamental value of neural networks. In this workshop, an attempt will be made to address this issue and the questions stated above by citing specific experimental results and by making arguments with a theoretical basis. Preliminary list of speakers: Ron Cole Uli Bodenhausen Hermann Hild ========================================================================= Symbolic and Subsymbolic Information Processing in Biological Neural Circuits and Systems Organizer: Vasant Honavar (honavar at iastate.edu) Abstract: Traditional information processing models in cognitive psychology, which became popular with the advent of the serial computer, tended to view cognition as discrete, sequential symbol processing. Neural network or connectionist models offer an alternative paradigm for modelling cognitive phenomena that relies on continuous, parallel subsymbolic processing. Biological systems appear to combine both discrete as well as continuous, sequential as well as parallel, symbolic as well as subsymbolic information processing in various forms at different levels of organization. The flow of neurotransmitter molecules and of photons into receptors is quantal; the depolarization and hyperpolarization of neuron membranes is analog; the genetic code and the decoding processes appear to be digital; global interactions mediated by neurotransmitters and slow waves appear to be both analog and digital. The purpose of this workshop is to bring together interested computer scientists, neuroscientists, psychologists, mathematicians, engineers, physicists and systems theorists to examine and discuss specific examples as well as general principles (to the extent they can be gleaned from our current state of knowledge) of information processing at various levels of organization in biological neural systems.
The workshop will consist of several short presentations by participants. There will be ample time for informal presentations and discussion centering around a number of key topics such as: * Computational aspects of symbolic vs. subsymbolic information processing * Coordination and control structures and processes in neural systems * Encoding and decoding structures and processes in neural systems * Generative structures and processes in neural systems * Suitability of particular paradigms for modelling specific phenomena * Software requirements for modelling biological neural systems Invited presentations: TBA Those interested in giving a presentation should write to honavar at iastate.edu ========================================================================= Computational Issues in Neural Network Training Organizers: Scott Markel and Roger Crane, Sarnoff Research Abstract: Many of the best practical neural network training results are reported by researchers who use variants of back-propagation and/or develop their own algorithms. Few results are obtained by using classical numerical optimization methods, although such methods can be used effectively for many practical applications. Many competent researchers have concluded, based on their own experience, that classical methods have little value in solving real problems. However, use of the best commercially available implementations of such algorithms can help in understanding numerical and computational issues that arise in all training methods. Also, classical methods can be used effectively to solve practical problems. Examples of numerical issues that are appropriate to discuss in this workshop include: convergence rates; local minima; selection of starting points; conditioning (for higher order methods); characterization of the error surface; ... . Ample time will be reserved for discussion and informal presentations. We will encourage lively audience participation. ========================================================================= Real Applications of Real Biological Circuits Organizers: Richard Granger, UC Irvine and Jim Schwaber, Du Pont Abstract: The architectures, performance rules and learning rules of most artificial neural networks are at odds with the anatomy and physiology of real biological neural circuitry. For example, mammalian telencephalon (forebrain) is characterized by extremely sparse connectivity (~1-5%), almost entirely lacks dense recurrent connections, and has extensive lateral local circuit connections; inhibition is delayed-onset and relatively long-lasting (100s of milliseconds) compared to rapid-onset brief excitation (10s of milliseconds), and they are not interchangeable. Excitatory connections learn, but there is very little evidence for plasticity in inhibitory connections. Real synaptic plasticity rules are sensitive to temporal information, are not Hebbian, and do not contain "supervision" signals in any form related to those common in ANNs. These discrepancies between natural and artificial NNs raise the question of whether such biological details are largely extraneous to the behavioral and computational utility of neural circuitry, or whether such properties may yield novel rules that confer useful computational abilities to networks that use them.
In this workshop we will explicitly analyze the power and utility of a range of novel algorithms derived from detailed biology, and illustrate specific industrial applications of these algorithms in the fields of process control and signal processing. It is anticipated that these issues will raise controversy, and half of the workshop will be dedicated to open discussion. Preliminary list of speakers: Jim Schwaber, DuPont Babatunde Ogunnaike, DuPont Richard Granger, University of California, Irvine John Hopfield, Cal Tech ========================================================================= Recognizing Unconstrained Handwritten Script Organizers: Krishna Nathan, IBM and James A. Pittman, MCC Abstract: Neural networks have given new life to an old research topic, the segmentation and recognition of on-line handwritten script. Isolated handprinted character recognition systems are moving from research to product development, and researchers have moved forward to integrated segmentation and recognition projects. However, the 'real world' problem is best described as one of unconstrained handwriting recognition (often on-line) since it includes both printed and cursive styles -- often within the same word. The workshop will provide a forum for participants to share ideas on preprocessing, segmentation, and recognition techniques, and the use of context to improve the performance of online handwriting recognition systems. We will also discuss issues related to what constitutes acceptable recognition performance. The collection of training and test data will also be addressed. ========================================================================= Time Series Analysis and Prediction Organizers: John Moody, Oregon Grad. Inst., Mike Mozer, Univ. of Colorado and Andreas Weigend, Xerox PARC Abstract: Several new techniques are now being applied to the problem of predicting the future behavior of a temporal sequence and deducing properties of the system that produced the time series. We will discuss both connectionist and non-connectionist techniques. Issues include algorithms and architectures, model selection, performance measures, iterated vs. long-term prediction, robust prediction and estimation, the number of degrees of freedom of the system, how much noise is in the data, whether it is chaotic or not, how the error grows with prediction time, detection and classification of signals in noise, etc. Half the available time has been reserved for discussion and informal presentations. We will encourage lively audience participation. Invited presentations: Classical and Non-Neural Approaches: Advantages and Problems. (John Moody) Connectionist Approaches: Problems and Interpretations. (Mike Mozer) Beyond Prediction: What can we learn about the system? (Andreas Weigend) Physiological Time Series Modeling (Volker Tresp) Financial Forecasting (William Finnoff / Georg Zimmerman) FIR Networks (Eric Wan) Dimension Estimation (Fernando Pineda) ========================================================================= Applications of VLSI Neural Networks Organizer: Dave Andes, Naval Air Warfare Center Abstract: This workshop will provide a forum for discussion of the problems and opportunities for neural net hardware systems which solve real problems under real time and space constraints. Some of the most difficult requirements for systems of this type come, not surprisingly, from the military. Several examples of these problems and VLSI solutions will be discussed in this working group.
Examples from outside the military will also be discussed. At least half the time will be devoted to open discussion of the issues raised by the experiences of those who have already applied VLSI based ANN techniques to real world problems. Preliminary list of speakers: Bill Camp, IBM Federal Systems Lynn Kern, Naval Air Warfare Center Chuck Glover, Oak Ridge National Lab Dave Andes, Naval Air Warfare Center From kohring at hlrserv.hlrz.kfa-juelich.de Sat Oct 3 07:55:17 1992 From: kohring at hlrserv.hlrz.kfa-juelich.de (G. Kohring) Date: Sat, 3 Oct 92 12:55:17 +0100 Subject: Two Papers Message-ID: <9210031155.AA14653@hlrserv.hlrz.kfa-juelich.de> The paper "On the Q-State Neuron Problem in Attractor Neural Networks," whose abstract appears below, has recently been accepted for publication in the journal "Neural Networks". It discusses some recent results which demonstrate that the use of analog neurons in Attractor Neural Networks is not practical. An abbreviated account of this work recently appeared in the Letters section of "Journal de Physique" (Journal de Physique I, 2 (1992) p. 1549) and the abstract for this paper is also given below. If anyone does not have access to these journals and would like to get a copy of these papers, please send a request to the following address. G.A. Kohring HLRZ an der KFA Juelich Postfach 1913 D-5170 Juelich, Germany e-mail: kohring at hlrsun.hlrz.kfa-juelich.de On the Q-State Neuron Problem in Attractor Neural Networks (Neural Networks, in press) ABSTRACT The problems encountered when using multi-state neurons in attractor neural networks are discussed. In particular, straightforward implementations of neurons with Q states lead to information storage capacities, E, that decrease like E ~ log_2 Q/Q^2. More sophisticated schemes yield capacities that decrease like E ~ log_2 Q/Q, but with retrieval times increasing in proportion to Q. There also exist schemes whereby the information capacity reaches its maximum value of unity, but the retrieval time grows with the number of neurons, N, like O(N^3) instead of O(N^2) as in conventional models. Furthermore, since Q-state models approximate analog neurons when Q is large, the results demonstrate that the use of analog neurons is not feasible. After discussing these problems, a solution is proposed in which the information capacity is independent of Q, and the retrieval time increases in proportion to log_2 Q. The retrieval properties of this model, i.e., basins of attraction, etc., are calculated and shown to be in agreement with simple theoretical arguments. Finally, a critical discussion of this approach is given. On the Problems of Neural Networks with Multi-state Neurons (Journal de Physique I, 2 (1992) p. 1549) ABSTRACT For realistic neural network applications the storage and recognition of gray-tone patterns, i.e., patterns where each neuron in the network can take one of Q different values, is more important than the storage of black and white patterns, although the latter has been more widely studied. Recently, several groups have shown the former task to be problematic with current techniques since the useful storage capacity, ALPHA, generally decreases like: ALPHA ~ Q^{-2}. In this paper one solution to this problem is proposed, which leads to the storage capacity decreasing like: ALPHA ~ (log_2 Q)^{-1}. For realistic situations, where Q=256, this implies an increase of nearly four orders of magnitude in the storage capacity.
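As a rough check of that figure, assuming the two scalings above hold with comparable prefactors, the gain at Q=256 is just the ratio of the two capacities:

\[
\frac{\alpha_{\mathrm{new}}}{\alpha_{\mathrm{old}}}
  \sim \frac{(\log_2 Q)^{-1}}{Q^{-2}}
  = \frac{Q^2}{\log_2 Q}
  = \frac{256^2}{8}
  = 8192 \approx 10^{3.9},
\]

i.e. a factor of roughly eight thousand, consistent with the "nearly four orders of magnitude" quoted above.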
The price paid, is that the time needed to recall a pattern increases like: log_2 Q. This price can be partially offset by an efficient parallel program which runs at 1.4 Gflops on a 32 processor iPSC/860 Hypercube. From jang at diva.berkeley.edu Sat Oct 3 20:01:50 1992 From: jang at diva.berkeley.edu (Jyh-Shing Roger Jang) Date: Sat, 3 Oct 92 17:01:50 -0700 Subject: paper available Message-ID: <9210040001.AA19824@diva.Berkeley.EDU> The following paper has been placed on the neuroprose archive as jang.fuzzy.ps.Z and is available via anonymous ftp (from archive.cis.ohio-state.edu in the pub/neuroprose directory). This paper will appear in IEEE Trans. on NN. ========================================================================= Self-Learning Fuzzy Controllers Based on Temporal Back Propagation ABSTRACT: This paper presents a generalized control strategy that enhances fuzzy controllers with self-learning capability for achieving prescribed control objectives in a near-optimal manner. This methodology, termed temporal back propagation, is model-insensitive in the sense that it can deal with plants that can be represented in a piecewise differentiable format, such as difference equations, neural networks, GMDH, fuzzy models, etc. Regardless of the numbers of inputs and outputs of the plants under consideration, the proposed approach can either refine the fuzzy if-then rules obtained from human experts, or automatically derive the fuzzy if-then rules if human experts are not available. The inverted pendulum system is employed as a testbed to demonstrate the effectiveness of the proposed control scheme and the robustness of the acquired fuzzy controller. =========================================================================== Here is an example of how to retrieve this file: gvax> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron at wherever 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get jang.fuzzy.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for jang.fuzzy.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. gvax> uncompress jang.fuzzy.ps.Z gvax> lpr jang.fuzzy.ps -- J.-S. Roger Jang 571 Evans, EECS Department Univ. of California Berkeley, CA 94720 jang at diva.berkeley.edu (510)-642-5029 fax: (510)642-5775 From jang at diva.berkeley.edu Mon Oct 5 12:04:15 1992 From: jang at diva.berkeley.edu (Jyh-Shing Roger Jang) Date: Mon, 5 Oct 92 09:04:15 -0700 Subject: paper available in Neuroprose Message-ID: <9210051604.AA16522@diva.Berkeley.EDU> The following paper has been placed on the neuroprose archive as jang.rbfn_fuzzy.ps.Z and is available via anonymous ftp (from archive.cis.ohio-state.edu in the pub/neuroprose directory). This paper will appear in IEEE Trans. on NN. ========================================================================= TITLE: Functional Equivalence between Radial Basis Function Networks and Fuzzy Inference Systems ABSTRACT: This short article shows that under some minor restrictions, the functional behavior of radial basis function networks and fuzzy inference systems are actually equivalent. This functional equivalence implies that advances in each literature, such as new learning rules or analysis on representational power, etc., can be applied to both models directly. 
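To make the claimed equivalence concrete, here is a minimal single-input sketch (an illustration under assumed restrictions, not code from the paper): it assumes Gaussian receptive fields and membership functions, one RBF unit per fuzzy rule, constant (zero-order Sugeno) rule consequents, and weighted-average defuzzification; the parameter values are hypothetical. Under these restrictions the two models compute the same function by construction.

import numpy as np

# Shared parameters: one Gaussian receptive field per fuzzy rule
# (hypothetical values, for illustration only).
centers = np.array([-1.0, 0.0, 1.5])
widths = np.array([0.8, 0.5, 1.0])
weights = np.array([2.0, -1.0, 0.5])  # RBF output weights = rule consequents

def rbf_net(x):
    # Normalized Gaussian radial basis function network.
    act = np.exp(-(x - centers) ** 2 / (2.0 * widths ** 2))
    return np.dot(weights, act) / act.sum()

def fuzzy_system(x):
    # Zero-order Sugeno fuzzy inference: Gaussian membership functions,
    # constant consequents, weighted-average defuzzification.
    firing = np.exp(-(x - centers) ** 2 / (2.0 * widths ** 2))
    return np.dot(weights, firing) / firing.sum()

for x in (-2.0, 0.3, 1.1):
    assert abs(rbf_net(x) - fuzzy_system(x)) < 1e-12

The exact restrictions under which the equivalence holds are spelled out in the paper itself; the sketch only shows that, once such restrictions are imposed, the two formulas coincide term by term.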
It is of interest to observe that two models stemming from different origins turn out to be functionally equivalent. =========================================================================== Here is an example of how to retrieve this file: gvax> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron at wherever 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get jang.rbfn_fuzzy.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for jang.rbfn_fuzzy.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. gvax> uncompress jang.rbfn_fuzzy.ps.Z gvax> lpr jang.rbfn_fuzzy.ps -- J.-S. Roger Jang 571 Evans, EECS Department Univ. of California Berkeley, CA 94720 jang at diva.berkeley.edu (510)-642-5029 fax: (510)642-5775 From jose at tractatus.siemens.com Mon Oct 5 08:07:30 1992 From: jose at tractatus.siemens.com (Steve Hanson) Date: Mon, 5 Oct 1992 08:07:30 -0400 (EDT) Subject: for Registration See Below Message-ID: FOR NIPS*92 REGISTRATION SEE BELOW NEURAL INFORMATION PROCESSING SYSTEMS (NIPS) -Natural and Synthetic- Monday, November 30 - Thursday, December 3, 1992 Denver, Colorado This is the sixth meeting of an inter-disciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. A day of tutorial presentations (Nov 30) will precede the regular session and two days of focused workshops will follow at a nearby ski area (Dec 4-5). Major categories and examples of subcategories for paper submissions are the following: Neuroscience: Studies and Analyses of Neurobiological Systems, Inhibition in cortical circuits, Signals and noise in neural computation, Theoretical Neurobiology and Neurophysics. Theory: Computational Learning Theory, Complexity Theory, Dynamical Systems, Statistical Mechanics, Probability and Statistics, Approximation Theory. Implementation and Simulation: VLSI, Optical, Software Simulators, Implementation Languages, Parallel Processor Design and Benchmarks. Algorithms and Architectures: Learning Algorithms, Constructive and Pruning Algorithms, Localized Basis Functions, Tree Structured Networks, Performance Comparisons, Recurrent Networks, Combinatorial Optimization, Genetic Algorithms. Cognitive Science & AI: Natural Language, Human Learning and Memory, Perception and Psychophysics, Symbolic Reasoning. Visual Processing: Stereopsis, Visual Motion, Recognition, Image Coding and Classification. Speech and Signal Processing: Speech Recognition, Coding, and Synthesis, Text-to-Speech, Adaptive Equalization, Nonlinear Noise Removal. Control, Navigation, and Planning: Navigation and Planning, Learning Internal Models of the World, Trajectory Planning, Robotic Motor Control, Process Control. Applications: Medical Diagnosis or Data Analysis, Financial and Economic Analysis, Time Series Prediction, Protein Structure Prediction, Music Processing, Expert Systems. The technical program will contain plenary, contributed oral and poster presentations with no parallel sessions. All presented papers will be due (January 13, 1993) after the conference in camera-ready format and will be published by Morgan Kaufmann.
FOR REGISTRATION PLEASE SEND YOUR NAME AND ADDRESS ASAP TO: NIPS*92 Registration SIEMENS Research Center 755 College Road East Princeton, NJ 08540 NIPS*92 Organizing Committee: General Chair, Stephen J. Hanson, Siemens Research & Princeton University; Program Chair, Jack Cowan, University of Chicago; Publications Chair, Lee Giles, NEC; Publicity Chair, Davi Geiger, Siemens Research; Treasurer, Bob Allen, Bellcore; Local Arrangements, Chuck Anderson, Colorado State University; Program Co-Chairs: Andy Barto, U. Mass.; Jim Burr, Stanford U.; David Haussler, UCSC; Alan Lapedes, Los Alamos; Bruce McNaughton, U. Arizona; Bartlett Mel, JPL; Mike Mozer, U. Colorado; John Pearson, SRI; Terry Sejnowski, Salk Institute; David Touretzky, CMU; Alex Waibel, CMU; Halbert White, UCSD; Alan Yuille, Harvard U.; Tutorial Chair: Stephen Hanson, Workshop Chair: Gerry Tesauro, IBM Domestic Liaisons: IEEE Liaison, Terrence Fine, Cornell; Government & Corporate Liaison, Lee Giles, NEC; Overseas Liaisons: Mitsuo Kawato, ATR; Marwan Jabri, University of Sydney; Benny Lautrup, Niels Bohr Institute; John Bridle, RSRE; Andreas Meier, Simon Bolivar U. From mclennan at cs.utk.edu Mon Oct 5 16:10:20 1992 From: mclennan at cs.utk.edu (mclennan@cs.utk.edu) Date: Mon, 5 Oct 92 16:10:20 -0400 Subject: report available Message-ID: <9210052010.AA05363@thud.cs.utk.edu> **DO NOT FORWARD TO OTHER GROUPS** The following technical report has been placed in the Neuroprose archives at Ohio State (filename: maclennan.flexcomp.ps.Z). Ftp instructions follow the abstract. ----------------------------------------------------- Research Issues in Flexible Computing Two Presentations in Japan Bruce MacLennan Computer Science Department University of Tennessee Knoxville, TN 37996 maclennan at cs.utk.edu Technical Report CS-92-172 ABSTRACT: This report contains the text of two presentations made in Japan in 1991, both of which deal with the Japanese ``Real World Computing Project'' (previously known as the ``New Information Processing Technology,'' and informally as the ``Sixth Generation Project''). (1) ``Flexible Computing: How to Make it Succeed'' (invited presentation, Institute for Supercomputing Research workshop, New Directions in Supercomputing): Many applications require the flexible processing of large amounts of ambiguous, incomplete, or redundant information, including images, speech and natural language. Recent advances have shown that many of these problems can be effectively solved by _emergent computation_, which is the exploitation of the self-organizing, collective and cooperative phenomena arising from the interaction of large numbers of simple computational elements obeying local dynamical laws. Accomplishing flexible computing will require basic research in three areas. THEORY: We need to understand the dynamical and computational properties of systems with very high degrees of parallelism (more than a million elements). SOFTWARE: We need to understand the representation and processing of subsymbolic and symbolic information in the brain. HARDWARE: We need to be able to implement systems having a million to a billion analog processors.
(2) ``The Emergence of Symbolic Processes from the Subsymbolic Substrate'' (panel presentation, MITI, International Symposium on New Information Processing Technologies '91): A central question for the success of neural network technology is the relation of symbolic processes (e.g., language and logic) to the underlying subsymbolic processes (e.g., pattern recognition, analogical reasoning and learning). This is not simply an issue of integrating neural networks with conventional expert system technology. Human symbolic cognition is flexible because it is not purely formal, and because it retains some of the ``softness'' of the subsymbolic processes. If we want our computers to be as flexible as people, then we need to understand the emergence of the discrete and symbolic from the continuous and subsymbolic. ----------------------------------------------------- FTP INSTRUCTIONS Either use the Getps script, or do the following: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get maclennan.flexcomp.ps.Z ftp> quit unix> uncompress maclennan.flexcomp.ps.Z unix> lpr maclennan.flexcomp.ps (or however you print postscript) If you need hardcopy, then send your request to: library at cs.utk.edu Bruce MacLennan Department of Computer Science 107 Ayres Hall The University of Tennessee Knoxville, TN 37996-1301 (615)974-0994/5067 FAX: (615)974-4404 maclennan at cs.utk.edu From sontag at control.rutgers.edu Mon Oct 5 18:28:35 1992 From: sontag at control.rutgers.edu (sontag@control.rutgers.edu) Date: Mon, 5 Oct 92 18:28:35 BST Subject: Finiteness of VC Dimension of Sigmoidal Feedforward "Neural Nets" Message-ID: <9210052228.AA03534@control.rutgers.edu> Finiteness of VC Dimension of Sigmoidal Feedforward "Neural Nets" Angus Macintyre, Oxford University Eduardo Sontag, Rutgers University [This is NOT a TeX file; it is just for reading on the screen.] It was until now, as far as we know, an open question whether "sigmoidal neural networks" lead to learnable (in the sense of sample complexity) classes of concepts. We wish to remark via this announcement that indeed the corresponding VC dimension is finite. The result holds without imposing any bounds on weights. The proof, which is outlined below, consists of simply putting together a couple of recent results in model theory. (A detailed paper, exploring other consequences of these results in the context of neural networks, is being prepared. This problem is also related to issues of learnability for sparse polynomials [="fewnomials"].) More precisely, we define a _sigma-circuit_ as an unbounded fan-in circuit (i.e., a directed acyclic graph) whose edges are labeled by real numbers (called "weights"), and, except for the input nodes (i.e., the nodes of in-degree zero), every node is also labeled by a real number (called its "bias"). There is a single output node (i.e., node of out-degree zero). We think of such a circuit as computing a function R^m -> R, where m is the number of input nodes. This function is inductively defined on nodes V as follows. If V is the i-th input node, it computes just F(u1,...,um)=ui. If V is a noninput node, its function is s( w0 + w1.u1 + ... + wk.uk ) where w0 is the bias of the node V, u1,...,uk are the functions computed by the nodes Vi incident to V, and wi is the weight in the edge from Vi to V. The function computed by the output node is the function computed by the circuit. Here "s" denotes the "standard sigmoid": s(x) = 1 / (1 + e^(-x)). (The results will depend critically on the choice of this particular s(x), which is standard in neural network theory. Minor modifications are possible, but the result is most definitely false if, e.g., s(x) = sin(x). The result is even false for other "sigmoidal-type" functions s; see e.g. [5].) We also define a _sigmoidal feedforward architecture_ as a circuit in which all weights and biases are left as variables. In that case, we write F(u,w) for the function computed by the circuit obtained by specializing all weights and biases to a particular vector w. We view an architecture in the obvious way as a map F : R^m x R^r -> R where 'r' is the total number of weights and biases. For each subset S of R^m, a _dichotomy_ on S is a function c: S -> {-1,+1}. We say that a function f: R^m -> R _implements_ this dichotomy if it holds that c(x) > 0 <==> f(x) > 0 for each x in S. THEOREM. Let F be an architecture. Then there exists a (finite) integer f = VC(F) such that, for each subset S of R^m of cardinality > f, there is some dichotomy c on S which cannot be implemented by any f = F(.,w), w in R^r. Sketch of proof: First note that one can write a formula Q(u,w) in the first order language of the real numbers with addition, multiplication, order, and real exponentiation, Th(R,+,.,0,1,<,exp(.)) (all real constants are included as well), such that: for each (u,w) in R^m x R^r, F(u,w) > 0 if and only if Q(u,w) is true. (Divisions, as needed in computing s(x), can be encoded by adding new variables z, including an atomic formula of the type z . (1 + e^-x) = 1, and then existentially quantifying on z.) The paper [1] deals precisely with the problem of proving the existence of such finite integers f, for formulas in first order theories. On page 383, first paragraph, it is shown that the desired result will be true if there is _order-minimality_ for the corresponding theory. In our context, this latter property means that every set of the form: { x in R | P(x,w) true }, w in R^r, where P is any formula in the above language, with SCALAR x, must be a finite union of intervals (possibly points). Now, the papers [2]-[4] prove that order-minimality indeed holds. (The paper [1] stated that order minimality was an open problem for Th(R,+,.,0,1,<,exp(.)); in fact, the papers [2]-[4] were written while [1] was in press.) Remarks: Note that we do not give explicit bounds here, but only remark that finiteness holds. However, the results being used are essentially constructive, and it is in principle possible to compute the VC dimension of a given architecture. Observe also that one can extend this result to more general architectures than the ones considered here. High-order nets (for which products of inputs are allowed) can be treated with the same proof, as are if-then-else nodes. The latter allow the application of the same techniques to the Blum-Shub-Smale model of computation as well. A number of decidability issues, loading, interpolation, teaching dimension questions, and so forth, for sigmoidal nets can also be treated using model-theoretic techniques, and will be the subject of a forthcoming paper. References: [1] Laskowski, Michael C., "Vapnik-Chervonenkis classes of definable sets," J. London Math. Soc. (2) 45 (1992): 377-384. [2] Wilkie, Alec J., "Some model completeness results for expansions of the ordered field of reals by Pfaffian functions," preprint, Oxford, 1991, submitted.
[3] Wilkie, Alec J., "Smooth o-minimal theories and the model completeness of the real exponential field," preprint, Oxford, 1991, submitted. [4] Macintyre, Angus, Lou van den Dries, and David Marker, "The elementary theory of restricted analytic fields with exponentiation," preprint, Oxford, 1991. [5] Sontag, Eduardo D., "Feedforward nets for interpolation and classification," J. Comp. Syst. Sci. 45 (1992): 20-48. From tgd at chert.CS.ORST.EDU Mon Oct 5 19:16:30 1992 From: tgd at chert.CS.ORST.EDU (Tom Dietterich) Date: Mon, 5 Oct 92 16:16:30 PDT Subject: Machine Learning 9:4 Message-ID: <9210052316.AA17234@research.CS.ORST.EDU> Machine Learning October 1992, Volume 9, Number 4 Explorations of an Incremental, Bayesian Algorithm for Categorization J. R. Anderson and Michael Matessa A Bayesian Method for the Induction of Probabilistic Networks from Data G. F. Cooper and E. Herskovits A Framework for Average Case Analysis of Conjunctive Learning Algorithms. M. J. Pazzani and W. Sarrett Learning Boolean Functions in an Infinite Attribute Space A. Blum Technical Note: First Nearest Neighbor Classification on Frey and Slate's Letter Recognition Problem T. C. Fogarty ----- Subscriptions - Volume 8-9 (8 issues) includes postage and handling. $140 Individual $88 Member AAAI, CSCSI $301 Institutional Kluwer Academic Publishers P.O. Box 358 Accord Station Hingham, MA 02018-0358 USA or Kluwer Academic Publishers Group P.O. Box 322 3300 AH Dordrecht THE NETHERLANDS (AAAI members please include membership number) From usui at tut.ac.jp Tue Oct 6 10:41:11 1992 From: usui at tut.ac.jp (usui@tut.ac.jp) Date: Tue, 6 Oct 92 10:41:11 JST Subject: IJCNN '93 NAGOYA (Call for Papers) Message-ID: <9210060141.AA01902@bpel.tutics.tut.ac.jp> I J C N N '9 3 ( N A G O Y A ) *** C A L L F O R P A P E R S *** --------------------------------------------------------------------------- Internatinal Joint Conference on Neural Networks Nagoya Congress Center, Japan October 25-29, 1993 IJCNN'93-NAGOYA co-sponsored by the Japanese Neural Network Society (JNNS), the IEEE Neural Networks Council (NNC), the International Neural Network Society (INNS), the European Neural Network Society (ENNS), the Society of Instrument and Control Engineers (SICE, Japan), the Institute of Electronics, Information and Communication Engineers (IEICE, Japan), the Nagoya Industrial Science Research Institute, the Aichi Prefectural Government and the Nagoya Municipal Government cordially invite interested authors to submit papers in the field of neural networks for presentation at the Conference. Nagoya is a historical city famous for Nagoya Castle and is located in the central major industrial area of Japan. There is frequent direct air service from most countries. Nagoya is 2 hours away from Tokyo or 1 hour from Osaka by bullet train. Papers may be submitted for consideration as oral or poster presentations in the following areas: Neurobiological Systems Self-organization Cognitive Science Learning & Memory Image Processing & Vision Robotics & Control Speech, Hearing & Language Hybrid Systems Sensorimotor Systems (Fuzzy, Genetic, Expert Systems, AI) Neural Network Architectures Implementation Network Dynamics (Electronic, Optical, Bio-chips) Optimization Other Applications (Medical and Social Systems, Art, Economy, etc. Please specify the area of the application) Four(4) page papers MUST be received by April 30, 1993. Papers received after that date will be returned unopened. 
International authors should submit their work via Air Mail or Express Courier so as to ensure timely arrival. All submissions will be acknowledged by mail. Papers will be reviewed by senior researchers in the field, and all authors will be informed of the decisions at the end of the review process by June 30, 1993. A limited number of papers will be accepted for oral and poster presentations. No poster sessions are scheduled in parallel with oral sessions. All accepted papers will be published as submitted in the conference proceedings, which should be available at the conference for distribution to all regular conference registrants. Please submit six (6) copies (one camera-ready original and five copies) of the paper. Do not fold or staple the original camera-ready copy. The four-page papers, including figures, tables, and references, should be written in English. Papers exceeding four pages will be charged 30,000 YEN per extra page. Papers should be submitted on 210mm x 297mm (A4) or 8-1/2" x 11" (letter size) white paper with one inch margins on all four sides (actual space to be allowed to type is 165mm (W) x 228mm (H) or 6-1/2" x 9"). They should be prepared by typewriter or letter-quality printer in one or two-column format, single-spaced, in Times or similar font of 10 points or larger, and printed on one side of the page only. Please be sure that all text, figures, captions, and references are clean, sharp, readable, and of high contrast. Fax submissions are not acceptable. Centered at the top of the first page should be the complete title, author(s), affiliation(s), and mailing address(es), followed by a blank space and then an abstract, not to exceed 15 lines, followed by the text. In an accompanying letter, the following should be included: Full Title of the Paper; Presentation Preferred (Oral or Poster); Corresponding Author and Presenter* (for each: Name, Mailing address, Telephone and FAX numbers, E-mail address); Technical Session (1st and 2nd choices); Audio Visual Requirements (e.g., 35mm Slide, OHP, VCR). Send papers to: IJCNN'93-NAGOYA Secretariat. * Students who wish to apply for the Student Award, please specify and enclose a verification letter of status from the Department head. Call for Tutorials ------------------ Tutorials for IJCNN'93-NAGOYA will be held on Monday, October 25, 1993. Each tutorial will be three hours long. The tutorials should be designed as such and not as expanded talks. They should lead the student at the college Senior level through a pedagogically understandable development of the subject matter. Experts in neural networks and related fields are encouraged to submit proposed topics for tutorials. The proposal should be one to two pages long and describe in some detail the subject matter to be covered in the three-hour tutorial. Please mail proposals by January 5, 1993, to IJCNN'93-NAGOYA Secretariat. Industry Forum -------------- A major industry forum will be held in the afternoon on Tuesday, October 26, 1993. Speakers will include representatives from industry, government, and academia. The aim of the forum is to permit attendees to understand more fully possible industrial applications of neural networks, discuss problems that have arisen in industrial applications, and to delineate new areas of research and development of neural network applications.
Exhibit Information ------------------- Exhibitors are encouraged to present the latest innovations in neural networks, including electronic and optical neuro computers, fuzzy neural networks, neural network VLSI chips and development systems, neural network design and simulation tools, software systems, and application demonstration systems. A large group of vendors and participants from academia, industry and government are expected. We believe that the IJCNN'93-NAGOYA will be the neural network largest conference and trade-show in Japan, in which to exhibit your products. Potential exhibitors should plan to sign up before April 30, 1993 for exhibit booths since exhibit space is limited. Vendors may contact the IJCNN'93-NAGOYA Secretariat. Committees & Chairs -------------------- Advisory Chair: Fumio Harashima, University of Tokyo Vicecochairs: Russell Eberhart (IEEE NNC), Research Triangle Institute Paul Werbos (INNS), National Science Foundation Teuvo Kohonen (ENNS), Helsinki University of Technology Organizing Chair: Shun-ichi Amari, University of Tokyo Program Chair: Kunihiko Fukushima, Osaka University Cochairs: Robert J. Marks,II (IEEE NNC), University of Washington Harold H. Szu (INNS), Naval Surface Warfare Center Rolf Eckmiller (ENNS), University of Dusseldorf Noboru Sugie, Nagoya University Steering Chair: Toshio Fukuda, Nagoya University General Affair Chair: Fumihito Arai, Nagoya University Finance Chairs: Hide-aki Saito, Tamagawa University Roy S. Nutter,Jr, West Virginia University Publicity Chairs: Shiro Usui, Toyohashi University of Technology Evangelia Micheli-Tzanakou, Rutgers University Publication Chair: Yoichi Okabe, University of Tokyo Exhibits Chairs: Masanori Idesawa, Riken Shigeru Okuma, Nagoya University Local Arrangement Chair: Yoshiki Uchikawa, Nagoya University Industry Forum Chairs: Noboru Ohnishi, Nagoya University Hisato Kobayashi, Hosei University Social Event Chair: Kazuhiro Kosuge, Nagoya University Tutorial Chair: Minoru Tsukada, Tamagawa University Technical Tour Chair Hideki Hashimoto, University of Tokyo IJCNN'93-NAGOYA Secretariat: Travel Plaza International Chubu, Inc. Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan Phone: +81-52-561-9880/8655 Fax: +81-52-561-1241 --------------------------------------------------------------------------- R E G I S T R A T I O N ----------------------- Registration Fee ---------------- Full conference registration fee includes admission to all sessions, exhibit area, welcome reception and proceedings. Tutorials and banquet are NOT included. ---------------------------------------------------------- | Member-ship | Before | After | | | | Aug. 31 '93 | Sept. 1 '93 | On-site | | ---------------------------------------------------------| | Member* | 45,000 yen | 55,000 yen | 60,000 yen | | Non-Member | 55,000 yen | 65,000 yen | 70,000 yen | | Student** | 12,000 yen | 15,000 yen | 20,000 yen | ---------------------------------------------------------- Tutorial Registration Fee ------------------------- Tutorials will be held on Monday, October 25, 1993, 10:00 am-1:00 pm. and 3:00 pm - 6:00 pm. The complete list of tutorials will be available in the June mailing. ------------------------------------------------------------ | | | Before August 31, 93 | After | | | |-------------------------| Sept. 1, | | Member- | Option | |Univ. 
& Non | `93 | | ship | | Industrial |profit Inst.| | | -----------------------------------------------------------| | Member* | Half day | 20,000 yen | 7,000 yen | 40,000 yen | | | Full day | 30,000 yen | 10,000 yen | 60,000 yen | |------------------------------------------------------------| | Non- | Half day | 30,000 yen | 10,000 yen | 50,000 yen | | Member | Full day | 45,000 yen | 15,000 yen | 80,000 yen | |------------------------------------------------------------| | Student**| Half day | ---------- | 5,000 yen | 20,000 yen | | | Full day | ---------- | 7,500 yen | 30,000 yen | ------------------------------------------------------------ * A member of co-sponsoring and co-operating societies. ** Students must submit a verification letter of full-time status from the Department head. Banquet ------- The IJCNN'93-NAGOYA Banquet will be held on Thursday, October 28, 1993. Note that the Banquet ticket (5,000 yen/person) is not included in the registration fee. Pre-registration is recommended, since the number of seats is limited. The registration for the Banquet can be made at the same time with the conference registration. Payment and Remittance Payment for registration and tutorial fees should be in one of the following forms : 1. A bank transfer to the following bank account: Name of Bank: Tokai Bank, Nagoya Ekimae-Branch Name of Account: Travel Plaza International Chubu, Inc. EC-ka Account No.: 1079574 Address: 6F Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan 2. Credit Cards (American Express, Diners, Visa, Master Card) are acceptable except for domestic registrants. Please indicate your card number and expiration date on the Registration Form Note: When making remittance, please send Registration Form to the IJCNN'93-NAGOYA Secretariat together with a copy of your bank's receipt for transfer. Personal checks and other currencies will not be accepted except Japanese yen. Confirmation and Receipt Upon receiving your Registration Form and confirming your payment, the IJCNN'93-NAGOYA Secretariat will send you a confirmation / receipt. This confirmation should be retained and presented at the registration desk of the conference site. Cancellation and Refund of the Fees All financial transactions for the conference are being handled by the IJCNN'93-NAGOYA Secretariat. Please send a written notification of cancellation directly to the office. Cancellations received on or before September 30, 1993, 50% cancel fee will be charged. We regret that no refunds for registration can be made after October 1, 1993. All refunds will be proceeded after the conference. NAGOYA ------ The City of Nagoya, with a population of over two million, is the principal city of central Japan and lies at the heart of one of the three leading areas of the country. The area in and around the city contains a large number of high-tech industries with names known worldwide, such as Toyota, Mitsubishi, Honda, Sony and Brother. The city's central location gives it excellent road and rail links to the rest of the country; there exist direct air services to 18 other cities in Japan and 26 cities abroad. Nagoya enjoys a temperate climate and agriculture flourishes on the fertile plain surrounding the city. The area has a long history; Nagoya is the birth place of two of Japan's greatest heroes: the Lords Oda Nobunaga and Toyotomi Hideyoshi, who did much to bring the 'Warring States' period to an end. Tokugawa Ieyasu who completed the task and established the Edo period was also born in the area. 
Nagoya flourished under the benevolent rule of this lord and his descendants.

Climate and Clothing
The climate in Nagoya in late October is usually agreeable and stable, with an average temperature of 16-23 C (60-74 F). Heavy clothing is not necessary; however, a light sweater is recommended. Business suits as well as casual clothing are appropriate.

TRAVEL INFORMATION
------------------
Official Travel Agent
Travel Plaza International Chubu, Inc. (TPI) has been appointed as the Official Travel Agent for IJCNN'93-NAGOYA, JAPAN to handle all travel arrangements in Japan. All inquiries and application forms for hotel accommodations described herein should be addressed as follows:

  Travel Plaza International Chubu, Inc.
  Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku
  Nagoya 450, Japan
  Tel: +81-52-561-9880/8655   Fax: +81-52-561-1241

Airline Transportation
Participants from Europe and North America who are planning to come to Japan by air are advised to get in touch with the following travel agents, who can provide information on discount fares. Departure cities are Los Angeles, Washington, New York, Paris, and London.

  Japan Travel Bureau U.K. Inc.
  9 Kingsway, London WC2B 6XF, England, U.K.
  Tel: (01)836-9393   Fax: (01)836-6215

  Japan Travel Bureau International Inc.
  Equitable Tower 11th Floor, New York, N.Y. 10019, U.S.A.
  Tel: (212)698-4955   Fax: (212)246-5607

  Japan Travel Bureau Paris
  91 Rue du Faubourg Saint-Honore, 75008 Paris, France
  Tel: (01)4265-1500   Fax: (01)4265-1132

  Japan Travel Bureau International Inc.
  Suite 1410, One Wilshire Bldg., 624 South Grand Ave, Los Angeles, CA 90017, U.S.A.
  Tel: (213)687-9881   Fax: (213)621-2318

Japan Rail Pass
The JAPAN RAIL PASS is a special ticket that is available only to travelers visiting Japan from foreign countries for sight-seeing. To be eligible to purchase a JAPAN RAIL PASS, you must purchase an Exchange Order from an authorized sales office or agent before you come to Japan. Please contact JTB offices or your travel agent for details. Note: The rail pass is a flash pass good on most of the trains and ferries in Japan. It provides very significant savings on transportation costs within Japan if you plan to travel more than just from Tokyo to Nagoya and return. Japan Railway tickets cannot be booked until the Japan Rail Pass has been issued in Japan.

Access to Nagoya
Direct flights to Nagoya are available from the following cities: Seoul, Taipei, Pusan, Hong Kong, Singapore, Bangkok, Cheju, Jakarta, Denpasar, Kuala Lumpur, Honolulu, Portland, Los Angeles, Guam, Saipan, Toronto, Vancouver, Rio de Janeiro, Sao Paulo, Moscow, Frankfurt, Paris, London, Brisbane, Cairns, Sydney and Auckland. Participants flying from the U.S.A. are urged to fly to Los Angeles, CA, or Portland, OR, and transfer to direct flights to Nagoya on Delta Airlines, or to fly to Seoul, Korea, for a connecting flight to Nagoya. For participants from other countries, flights to Narita (the New Tokyo International Airport) or Osaka International Airport are recommended. Domestic flights are available from Narita to Nagoya, but not from Osaka. The bullet train, "Shinkansen", is a fast and convenient way to get to Nagoya from either Osaka or Tokyo.

Transportation from Nagoya International Airport
Bus service to the Nagoya JR train station is available every 15 minutes. The bus stop (signed as No. 1) is to your left as you exit the terminal. The trip takes about 1 hour.
Transportation from Narita International Airport To the Tokyo JR train station (to connect with Shinkansen), 2 ways to get from Narita to the JR train station are recommended: 1. An express train from the airport to the Tokyo JR train station. This is an all reserved seat train. Buy tickets before boarding train. Follow the signs in the airport to JR Narita station. The trip takes 1 hour. 2. A non-stop service is available, leaving Narita airport every 15 minutes. The trip will take between one and one and a half hours or more, depending on traffic conditions. The limousines have reserved seating, so it is necessary to purchase a ticket before boarding. If you plan to stay in Tokyo overnight before proceeding to Nagoya, other limousines to major Tokyo hotels are available. Transportation from Osaka International Airport Non-stop-bus service to the Shin-Osaka JR train station is available every 15 min. Foreign Exchange and Travellaer's Checks Purchase of traveller's checks in Japanese yen or U.S. dollars before departure is recommended. The conference secretariat and most of stores will accept only Japanese yen in cash only. Major credit cards are accepted in a number of shops and hotels. Foreign currency exchange and cashing of traveller's checks are available at the New Tokyo International Airport, the Osaka International Airport and major hotels. Major banks that handle foreign currencies are located in the downtown area. Banks are open from 9:00 to 15:00 on the weekday, closed on Saturday and Sunday. Electricity 100 volts, 60 Hz. --------------------------------------------------------------------------- IJCNN`93 NAGOYA October 25-29, 1993 Nagoya, JAPAN R E G I S T R A T I O N F O R M ---------------------------------- (Type or print in block letters, one sheet for each registrant please) Name: ( )Prof. ( )Dr. ( )Mr. ( )Ms. --------------------------------------------------- (Family Name) (First Name) (Middle Name) Affiliation: --------------------------------------------------------------- Mailing Address: ( )Office ( )Home --------------------------------------------------------------- Zip Code: City: Country: ----------- ----------- --------------- Phone: Fax: E-mail: ------------------- ----------------- -------------------- # REGISTRATION FEE (Please make a circle as you chose.) --------------------------------------------------- | | Before | After | | | Membership | Aug.31,'93 | Sept.1,'93 | On-site | |------------|------------|------------|------------| | Member* | 45,000 yen | 55,000 yen | 60,000 yen | |------------|------------|------------|------------| | Non-member | 55,000 yen | 65,000 yen | 70,000 yen | |------------|------------|------------|------------| | Student** | 12,000 yen | 15,000 yen | 20,000 yen | --------------------------------------------------- * Please specify Membership society : ------------------------------------ and the number : ------------------------------------ SUBTOTAL1: yen ------------------- ** Students must submit a verification letter of full-time status from the department head. # TUTORIAL REGISTRATION FEE (Please make a circle as you chose.) ------------------------------------------------------------ | | | Before August 31, 93 | After | | | |-------------------------| Sept. 1, | | Member- | Option | |Univ. 
& Non | `93 | | ship | | Industrial |profit Inst.| | | -----------------------------------------------------------| | Member* | Half day | 20,000 yen | 7,000 yen | 40,000 yen | | | Full day | 30,000 yen | 10,000 yen | 60,000 yen | |------------------------------------------------------------| | Non- | Half day | 30,000 yen | 10,000 yen | 50,000 yen | | Member | Full day | 45,000 yen | 15,000 yen | 80,000 yen | |------------------------------------------------------------| | Student**| Half day | ---------- | 5,000 yen | 20,000 yen | | | Full day | ---------- | 7,500 yen | 30,000 yen | ------------------------------------------------------------ ----------------------------------------------- | Session | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | |-----------|---|---|---|---|---|---|---|---|---| | Morning | | | | | | | | | | |-----------|---|---|---|---|---|---|---|---|---| | Afternoon | | | | | | | | | | ----------------------------------------------- The complete list of tutorials will be available in the June mailing. SUBTOTAL2: yen ------------------- # BANQUET (5,000 yen/person) Accompany Person's Name: ------------------------- SUBTOTAL3: yen ------------------- ( person(s) x 5,000 yen) TOTAL AMOUNT(1+2+3): yen ------------------- # WAY OF PAYMENT (Please check the appropriate box.) ( )1. Payment through Bank I have sent the total amount on ------------------------ (Date) through ------------------------------------------------ (Name of Bank) to the following account in Japanese yen. Name of Bank: Tokai Bank, Nagoya Ekimae Branch Name of Account: Travel Plaza International Chubu, Inc. EC-ka Account No.: 1079574 Address: Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan ( )2. Payment by Credit Card (not for domestic registrants) ( )American Express ( )Diners ( )Visa Card ( )Master Card Card No.: --------------------------------------------------- Date of Expiration: ----------------------------------------- Signature: -------------------------------------------------- * Note 1. No personal checks are accepted. 2. All payments should be made in Japanese yen. DATE: SIGNATURE: ---------------------- ---------------------------- Please send the completed form to the following address. IJCNN'93-NAGOYA Secretariat Travel Plaza International Chubu, Inc. Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan Phone: +81-52-561-9880/8655 Fax: +81-52-561-1241 --------------------------------------------------------------------------- IJCNN`93 NAGOYA October 25-29, 1993 Nagoya, JAPAN HOTEL RESERVATION FORM ---------------------- (Type or print in block letters, one sheet for each registrant please) Name: ( )Prof. ( )Dr. ( )Mr. ( )Ms. --------------------------------------------------- (Family Name) (First Name) (Middle Name) Affiliation: --------------------------------------------------------------- Mailing Address: ( )Office ( )Home --------------------------------------------------------------------------- Zip Code: City: Country: -------------- -------------- ------------------ Phone: Fax: ---------------------- -------------------- Arrival Schedule: Arriving at on by ------------- --------- -------------- (Airport) (Date) (Flight No.) 
---------------------------------------------------------------- | | Name of Hotel | Number of Room(s) | |------------|---------------|-----------------------------------| | 1st Choice | | | | twin(s) | | | | single(s) | twin(s) | single use | |------------|---------------|---------------------------------- | | 2nd Choice | | | | | | | | single(s) | twin(s) | single use | ---------------------------------------------------------------- (continued) --------------------------------------------------------------- | | Check-in Date | Check-out Date | Total Night(s) | | -----------|--------------------------------|-----------------| | 1st Choice | | | | | | | | | |------------|---------------|----------------|---------------- | | 2nd Choice | | | | | | | | | --------------------------------------------------------------- Sharing with : ------------------------------------------------ (Family Name) (First Name) * Hotel Deposit: yen X room(s) = YEN --------------- ----------- (Room Charge/Night) # WAY OF PAYMENT (Please check the appropriate box.) ( )1. Payment through Bank I have sent the total amount on ------------------------ (Date) through ----------------------------------------------- (Name of Bank) to the following account in Japanese yen. Name of Bank: Tokai Bank, Nagoya Ekimae Branch Name of Account: Travel Plaza International Chubu, Inc. EC-ka Account No.: 1079574 Address: Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan ( )2. Payment by Credit Card (not for domestic registrants) ( )American Express ( )Diners ( )Visa Card ( )Master Card Card No.: --------------------------------------------------- Date of Expiration: ----------------------------------------- Signature: -------------------------------------------------- * Note 1. No personal checks are accepted. 2. All payments should be made in Japanese yen. DATE: SIGNATURE: ---------------------- ---------------------------- Please send the completed form to the following address. (by September 15, 1993) Travel Plaza International Chubu, Inc. Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan Phone: +81-52-561-9880/8655 Fax: +81-52-561-1241 --------------------------------------------------------------------------- Hotel Accommodations Rooms have been reserved at the following hotels in Nagoya. Reservations should be made by completing and returning the enclosed Accomodations Applications Form, indicating the name of the hotel and the number of rooms desired. No reservation will be confirmed without a deposit. Hotel assignment will be made on a first-come first-served basis. The following rates include service charges and consumption taxes. The rate are subject to change for 1993, except for Hotel Nahoya Castle. 
--------------------------------------------------------------- | Rank | Name of Hotel | Single | Twin | Twin Room | | | | Room | Room | Single Use | |------|-----------------|------------|------------|------------| | A 1 | Nagoya Hilton | ----- | 26,700 yen | 18,700 yen | | |-----------------|------------|------------|------------| | 2 | Hotel Nagoya | 15,500 yen | 30,000 yen | 23,000 yen | | | Castle | | | | | |-----------------|------------|------------|------------| | 3 | Nagoya Kanko | 13,500 yen | 23,000 yen | 18,500 yen | | | Hotel | | | | | |-----------------|------------|------------|------------| | 4 | Nagoya Tokyu | 16,000 yen | 25,000 yen | 19,000 yen | | | Hotel | | | | |------------------------|------------|------------|------------| | B 5 | Nagoya Int'l | 11,000 yen | 21,000 yen | 16,500 yen | | | Hotel | | | | | |-----------------|------------|------------|------------| | 6 | Hotel Castle | 10,000 yen | 18,000 yen | 14,500 yen | | | Plaza | | | | | |-----------------|------------|------------|------------| | 7 | Nagoya Daiichi | 9,500 yen | 16,000 yen | 12,500 yen | | | Hotel | | | | | |-----------------|------------|------------|------------| | 8 | Nagoya Fuji | 9,300 yen | 16,500 yen | 13,800 yen | | | Park Hotel | | | | | |-----------------|------------|------------|------------| | 9 | Hotel Lions | 8,000 yen | 15,000 yen | 12,000 yen | | | Plaza | | | | |------------------------|------------|------------|------------| | C 10 | Daiichi Fuji | 6,200 yen | 11,000 yen | ----- | | | Hotel | | | | | |-----------------|------------|------------|------------| | 11 | Nagoya Crown | 6,400 yen | 10,000 yen | ----- | | | Hotel | | | | | |-----------------|------------|------------|------------| | 12 | Nagoya Park | 6,300 yen | 11,400 yen | ----- | | | Side Hotetel | | | | --------------------------------------------------------------- Note: Since the capacity of hotel is limited, hotel reservation cannot be guaranteed after September 15, 1993 Cancellation and Refund If you wish to cancel your hotel reservation, please send a written notification directly to TPI. Deposits will be refunded after deducing the following cancellation charges. All refunds will be proceeded after the conference. When notification is received by TPI: Up to 20 days before the first night of stay ------------------ Free 19-10 days before ---------------------------- 10% of the room charge 9-5 days before ------------------------------ 20% of the room charge 4-2 days before ------------------------------ 50% of the room charge One day before or no notice given ----------- 100% of the room charge ----- Shiro Usui (usui at tut.ac.jp) Biological and Physiological Engineering Lab. Department of Information and Computer Sciences Toyohashi University of Technology Toyohashi 441, Japan From bever at prodigal.psych.rochester.edu Tue Oct 6 13:29:50 1992 From: bever at prodigal.psych.rochester.edu (Thomas Bever) Date: Tue, 6 Oct 92 13:29:50 EDT Subject: Postdoctoral fellowships at the University of Rochester Message-ID: <9210061729.AA17460@prodigal.psych.rochester.edu> POSTDOCTORAL FELLOWSHIPS IN THE LANGUAGE SCIENCES AT ROCHESTER The Center for the Sciences of Language [CSL] at the University of Rochester has a total of three NIH-funded postdoctoral trainee positions: one can start right away, the other two start anytime after July 1, 1993: all can run from one to two years. 
CSL is an interdisciplinary unit which connects programs in American Sign Language, Psycholinguistics, Linguistics, Natural language processing, Neuroscience, Philosophy, and Vision. Fellows will be expected to participate in a variety of existing research and seminar projects in and between these disciplines. Applicants should have a relevant background and an interest in interdisciplinary research training in the language sciences. We encourage applications from minorities and women: applicants must be US citizens or otherwise eligible for a US government fellowship. Applications should be sent to Tom Bever, CSL Director, Meliora Hall, University of Rochester, Rochester, NY, 14627; Bever at prodigal.psych.rochester.edu; 716-275-8724. Please include a vita, a statement of interests, the names and email addresses and/or phone numbers of three recommenders: also indicate preferred starting date.  From jose at tractatus.siemens.com Thu Oct 8 08:33:03 1992 From: jose at tractatus.siemens.com (Steve Hanson) Date: Thu, 8 Oct 1992 08:33:03 -0400 (EDT) Subject: NIPS*92 Tutorials Message-ID: NIPS*92 TUTORIAL PROGRAM November 30, 1992 NEUROSCIENCE 9:30 - 11:30 ``ASPECTS OF COMPUTATION WITH REAL NEURONS'' William BIALEK NEC Research Institute 1:00 - 3:00 ``ADVANCES IN COGNITIVE NEUROSCIENCE'' William HIRST New School for Social Research 3:30 - 5:30 ``CORTICAL OSCILLATIONS: CURRENT EXPERIMENTAL AND THEORETICAL STATUS'' Christof KOCH CalTech 9:30 - 11:30 ``BIFURCATIONS IN NEURAL NETWORKS'' Bard ERMENTROUT Department of Mathematics University of Pittsburgh ARCHITECTURES, ALGORITHMS, AND THEORY 9:30 - 11:30 ``LEARNING THEORY AND NEURAL COMPUTATION'' Les VALIANT Computer Science Department Harvard University 1:00 - 3:00 ``STATISTICAL ACCURACY OF NEURAL NETWORKS'' Andrew BARRON Statistics Department University of Illinois 3:30 - 5:30 ``LEARNING AND APPROXIMATION IN NEURAL NETWORKS'' Tommy POGGIO Brain & Cognitive Science & AI Lab MIT IMPLEMENTATIONS 1:00 - 3:00 ``ELECTRONIC NEURAL NETWORKS'' Josh ALSPECTOR Bellcore All inquiries for registration to Conference, Tutorials or Workshop should go to NIPS*92 Registration SIEMENS Research Center 755 College Rd. East Princeton, NJ 08550 phone 609-734-3383 email kic at learning.siemens.com Stephen J. Hanson Learning Systems Department SIEMENS Research 755 College Rd. East Princeton, NJ 08540 From lvq at cochlea.hut.fi Fri Oct 9 15:18:43 1992 From: lvq at cochlea.hut.fi (LVQ_PAK) Date: Fri, 9 Oct 92 15:18:43 EET Subject: New version of Learning Vector Quantization PD program package Message-ID: <9210091318.AA17182@cochlea.hut.fi.hut.fi> ************************************************************************ * * * LVQ_PAK * * * * The * * * * Learning Vector Quantization * * * * Program Package * * * * Version 2.1 (October 9, 1992) * * * * Prepared by the * * LVQ Programming Team of the * * Helsinki University of Technology * * Laboratory of Computer and Information Science * * Rakentajanaukio 2 C, SF-02150 Espoo * * FINLAND * * * * Copyright (c) 1991,1992 * * * ************************************************************************ Public-domain programs for Learning Vector Quantization (LVQ) algorithms are available via anonymous FTP on the Internet. "What is LVQ?", you may ask --- See the following reference, then: Teuvo Kohonen. The self-organizing map. Proceedings of the IEEE, 78(9):1464-1480, 1990. 
In short, LVQ is a group of methods applicable to statistical pattern recognition, in which the classes are described by a relatively small number of codebook vectors, properly placed within each class zone such that the decision borders are approximated by the nearest-neighbor rule. Unlike in normal k-nearest-neighbor (k-nn) classification, the original samples are not themselves used as codebook vectors; instead, they are used to tune the codebook vectors. LVQ is concerned with the optimal placement of these codebook vectors into the class zones.

This package contains all the programs necessary for the correct application of certain LVQ algorithms in an arbitrary statistical classification or pattern recognition task. Three variants of the algorithm, LVQ1, LVQ2.1, and LVQ3, have been selected for this package.

This code is distributed without charge on an "as is" basis. There is no warranty of any kind by the authors or by Helsinki University of Technology.

In the implementation of the LVQ programs we have tried to keep the code as simple as possible, so the programs should compile on various machines without any machine-specific modifications to the code. All programs have been written in ANSI C. The programs are available in two archive formats, one for the UNIX environment and the other for MS-DOS. Both archives contain exactly the same files.

These files can be accessed via FTP as follows:

1. Create an FTP connection from wherever you are to machine "cochlea.hut.fi". The internet address of this machine is 130.233.168.48, for those who need it.
2. Log in as user "anonymous" with your own e-mail address as password.
3. Change remote directory to "/pub/lvq_pak".
4. At this point FTP should be able to get a listing of files in this directory with DIR and fetch the ones you want with GET. (The exact FTP commands you use depend on your local FTP program.) Remember to use the binary transfer mode for compressed files.

The lvq_pak program package includes the following files:

- Documentation:
    README              short description of the package and installation instructions
    lvq_doc.ps          documentation in (c) PostScript format
    lvq_doc.ps.Z        same as above but compressed
    lvq_doc.txt         documentation in ASCII format

- Source file archives (which contain the documentation, too):
    lvq_p2r1.exe        Self-extracting MS-DOS archive file
    lvq_pak-2.1.tar     UNIX tape archive file
    lvq_pak-2.1.tar.Z   same as above but compressed

An example of FTP access is given below:

    unix> ftp cochlea.hut.fi (or 130.233.168.48)
    Name: anonymous
    Password:
    ftp> cd /pub/lvq_pak
    ftp> binary
    ftp> get lvq_pak-2.1.tar.Z
    ftp> quit
    unix> uncompress lvq_pak-2.1.tar.Z
    unix> tar xvfo lvq_pak-2.1.tar

See file README for further installation instructions. All comments concerning this package should be addressed to lvq at cochlea.hut.fi.
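As a rough illustration of what LVQ1 does, a single training step might look like the sketch below, written in ANSI C since that is the language of the package itself. This is only a minimal sketch, not code from LVQ_PAK: the input dimensionality DIM, the Codebook structure, and the caller-supplied learning rate alpha are assumptions made for the example.

/* Minimal LVQ1 training step (illustrative sketch only; not from LVQ_PAK).
   Assumptions: DIM-dimensional real inputs, codebook vectors initialized
   elsewhere, and a learning rate alpha supplied by the caller. */

#include <stddef.h>

#define DIM 16                     /* assumed input dimensionality */

typedef struct {
    double w[DIM];                 /* codebook vector */
    int    label;                  /* class this vector represents */
} Codebook;

/* squared Euclidean distance between an input and a codebook vector */
static double dist2(const double *x, const double *w)
{
    double d = 0.0;
    size_t i;
    for (i = 0; i < DIM; i++) {
        double diff = x[i] - w[i];
        d += diff * diff;
    }
    return d;
}

/* index of the nearest codebook vector (the "winner") */
static size_t nearest(const double *x, const Codebook *cb, size_t ncb)
{
    size_t best = 0, i;
    double bestd = dist2(x, cb[0].w);
    for (i = 1; i < ncb; i++) {
        double d = dist2(x, cb[i].w);
        if (d < bestd) { bestd = d; best = i; }
    }
    return best;
}

/* One LVQ1 update: move the winner toward the sample if the classes
   match, and away from it otherwise. */
void lvq1_step(Codebook *cb, size_t ncb,
               const double *x, int label, double alpha)
{
    size_t c = nearest(x, cb, ncb);
    double sign = (cb[c].label == label) ? 1.0 : -1.0;
    size_t i;
    for (i = 0; i < DIM; i++)
        cb[c].w[i] += sign * alpha * (x[i] - cb[c].w[i]);
}

LVQ2.1 and LVQ3 refine this rule by updating the two nearest codebook vectors under a window condition; see the package documentation (lvq_doc.ps or lvq_doc.txt) for the algorithms and parameters actually implemented.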
************************************************************************ From lvq at cochlea.hut.fi Fri Oct 9 15:14:44 1992 From: lvq at cochlea.hut.fi (LVQ_PAK) Date: Fri, 9 Oct 92 15:14:44 EET Subject: Release of Self-Organizing Map PD program package Message-ID: <9210091314.AA17162@cochlea.hut.fi.hut.fi> ************************************************************************ * * * SOM_PAK * * * * The * * * * Self-Organizing Map * * * * Program Package * * * * Version 1.0 (October 9, 1992) * * * * Prepared by the * * SOM Programming Team of the * * Helsinki University of Technology * * Laboratory of Computer and Information Science * * Rakentajanaukio 2 C, SF-02150 Espoo * * FINLAND * * * * Copyright (c) 1992 * * * ************************************************************************ Some time ago we released the software package "LVQ_PAK" for the easy application of Learning Vector Quantization algorithms. Corresponding public-domain programs for the Self-Organizing Map (SOM) algorithms are now available via anonymous FTP on the Internet. "What does the Self-Organizing Map mean?", you may ask --- See the following reference, then: Teuvo Kohonen. The self-organizing map. Proceedings of the IEEE, 78(9):1464-1480, 1990. In short, Self-Organizing Map (SOM) defines a 'non-linear projection' of the probability density function of the high-dimensional input data onto the two-dimensional display. SOM places a number of reference vectors into an input data space to approximate to its data set in an ordered fashion. This package contains all the programs necessary for the application of Self-Organizing Map algorithms in an arbitrary complex data visualization task. This code is distributed without charge on an "as is" basis. There is no warranty of any kind by the authors or by Helsinki University of Technology. In the implementation of the SOM programs we have tried to use as simple code as possible. Therefore the programs are supposed to compile in various machines without any specific modifications made on the code. All programs have been written in ANSI C. The programs are available in two archive formats, one for the UNIX-environment, the other for MS-DOS. Both archives contain exactly the same files. These files can be accessed via FTP as follows: 1. Create an FTP connection from wherever you are to machine "cochlea.hut.fi". The internet address of this machine is 130.233.168.48, for those who need it. 2. Log in as user "anonymous" with your own e-mail address as password. 3. Change remote directory to "/pub/som_pak". 4. At this point FTP should be able to get a listing of files in this directory with DIR and fetch the ones you want with GET. (The exact FTP commands you use depend on your local FTP program.) Remember to use the binary transfer mode for compressed files. 
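Before the file listing, and as a rough illustration of the update described above, one SOM training step might look like the following ANSI C sketch. It is not code from SOM_PAK: the map size XDIM x YDIM, the input dimensionality DIM, and the Gaussian neighborhood are assumptions made only for the example.

/* Minimal SOM training step (illustrative sketch only; not from SOM_PAK).
   Assumptions: a rectangular XDIM x YDIM map, DIM-dimensional inputs, and
   alpha/radius schedules handled by the caller. */

#include <math.h>

#define DIM  16                       /* assumed input dimensionality */
#define XDIM 12                       /* assumed map width  */
#define YDIM 8                        /* assumed map height */

static double map[XDIM][YDIM][DIM];   /* reference vectors of the map units */

/* find the best-matching unit (smallest Euclidean distance to x) */
static void find_bmu(const double *x, int *bx, int *by)
{
    double best = 1e300;
    int i, j, k;
    *bx = 0; *by = 0;
    for (i = 0; i < XDIM; i++)
        for (j = 0; j < YDIM; j++) {
            double d = 0.0;
            for (k = 0; k < DIM; k++) {
                double diff = x[k] - map[i][j][k];
                d += diff * diff;
            }
            if (d < best) { best = d; *bx = i; *by = j; }
        }
}

/* One SOM update: pull the best-matching unit and its neighbors on the
   map grid toward the input, weighted by a Gaussian neighborhood. */
void som_step(const double *x, double alpha, double radius)
{
    int bx, by, i, j, k;
    find_bmu(x, &bx, &by);
    for (i = 0; i < XDIM; i++)
        for (j = 0; j < YDIM; j++) {
            double g2 = (double)((i - bx) * (i - bx) + (j - by) * (j - by));
            double h  = exp(-g2 / (2.0 * radius * radius));
            for (k = 0; k < DIM; k++)
                map[i][j][k] += alpha * h * (x[k] - map[i][j][k]);
        }
}

Repeating this step over the input data while alpha and the neighborhood radius are gradually decreased produces the ordered map described above; see the package documentation (som_doc.ps or som_doc.txt, listed below) for the programs and parameters actually provided.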
The som_pak program package includes the following files: - Documentation: README short description of the package and installation instructions som_doc.ps documentation in (c) PostScript format som_doc.ps.Z same as above but compressed som_doc.txt documentation in ASCII format - Source file archives (which contain the documentation, too): som_p1r0.exe Self-extracting MS-DOS archive file som_pak-1.0.tar UNIX tape archive file som_pak-1.0.tar.Z same as above but compressed An example of FTP access is given below unix> ftp cochlea.hut.fi (or 130.233.168.48) Name: anonymous Password: ftp> cd /pub/som_pak ftp> binary ftp> get som_pak-1.0.tar.Z ftp> quit unix> uncompress som_pak-1.0.tar.Z unix> tar xvfo som_pak-1.0.tar See file README for further installation instructions. All comments concerning this package should be addressed to som at cochlea.hut.fi. ************************************************************************ From soller at asylum.cs.utah.edu Fri Oct 9 12:43:20 1992 From: soller at asylum.cs.utah.edu (Jerome Soller) Date: Fri, 9 Oct 92 10:43:20 -0600 Subject: NSF Summer Fellowships to Visit Japan Message-ID: <9210091643.AA08604@asylum.cs.utah.edu> The following is a summary of the official announcement for the NSF Summer Institute in Japan sponsored by the National Science Foundation. It provides a fellowship for graduate and/or medical students to spend the summer in Japan at a Japanese research lab. Last summer, I had the opportunity to spend the summer working with the Exploratory Research Laboratory of the Fundamental Laboratories of NEC Corporation doing models of visual biological neural networks. Another student in the neural network area, Hank Wan of CMU, worked with RIKEN. Sincerely, Jerome B. Soller Ph.D. Candidate, U. of Utah Dept. of Computer Science and VA Geriatric, Research, Education, and Clinical Center soller at asylum.utah.edu ----------------------------------------------------------- The National Science Foundation and the National Institutes of Health announce... ... that applications are now being accepted for the 1993 SUMMER INSTITUTE IN JAPAN for U.S. Graduate Students in Science and Engineering, including Biomedical Science and Engineering. APPLICATION DEADLINE: December 1, 1992 Program's Goal: to provide 60 U.S. graduate students first-hand experience in a Japanese research laboratory Program Elements: ** Internship at a Japanese government, corporate or university laboratory in Tokyo or Tsukuba ** Intensive Japanese language training ** Lectures on Japanese science, history, and culture Program Duration and Dates: ** 8 weeks; June 25 to August 21, 1993 Eligibility requirements: 1. U.S. citizen or permanent resident 2. Enrolled at a U.S. institution in a science or engineering Ph.D. program, Enrolled in an M.D. program and have an interest in biomedical research, or Enrolled in an engineering M.S. program of which one year has been completed by December 1, 1992. For application materials and more information: Request NSF publication number 92-105, "1993 Summer Institute in Japan," from NSF's Publications Office at pubs at nsf.gov (InterNet) or pubs at nsf (BitNet) Phone: (202) 357-7668 Be sure to give your name and complete mailing address. To download application materials: Send e-mail message to stisserv at nsf.gov (InterNet) or stisserv at nsf (BitNet) Ignore the subject line, but body of message should read as follows: Request: stis Topic: nsf92105 Request: end You will receive a copy of publication 92-105 by return e-mail. 
Further inquiries: Contact NSF's Japan Program staff at NSFJinfo at nsf.gov (InterNet) or NSFJinfo at nsf (BitNet) Tel: (202) 653-5862 From ken at cns.caltech.edu Sun Oct 11 10:47:21 1992 From: ken at cns.caltech.edu (Ken Miller) Date: Sun, 11 Oct 92 07:47:21 PDT Subject: tech report announcement Message-ID: <9210111447.AA10541@zenon.cns.caltech.edu> The following tech report has been placed in the neuroprose archive as miller.hebbian.tar.Z. Instructions for retrieving and printing follow the abstract. A slightly abridged version of this paper has been submitted to Neural Computation. The Role of Constraints in Hebbian Learning Kenneth D. Miller and David J.C. MacKay Caltech Computation and Neural Systems (CNS) Program CNS Memo 19 Models of unsupervised correlation-based (Hebbian) synaptic plasticity are typically unstable: either all synapses grow until each reaches the maximum allowed strength, or all synapses decay to zero strength. A common method of avoiding these outcomes is to use a constraint that conserves or limits the total synaptic strength over a cell. We study the dynamical effects of such constraints. Two methods of enforcing a constraint are distinguished, multiplicative and subtractive. For otherwise linear learning rules, multiplicative enforcement of a constraint results in dynamics that converge to the principal eigenvector of the operator determining unconstrained synaptic development. Subtractive enforcement, in contrast, leads to a final state in which almost all synaptic strengths reach either the maximum or minimum allowed value. This final state is often dominated by weight configurations other than the principal eigenvector of the unconstrained operator. Multiplicative enforcement yields a ``graded" receptive field in which most mutually correlated inputs are represented, whereas subtractive enforcement yields a receptive field that is ``sharpened" to a few maximally-correlated inputs. If two equivalent input populations ({\it e.g.} two eyes) innervate a common target, multiplicative enforcement prevents their segregation (ocular dominance segregation) when the two populations are weakly correlated; whereas subtractive enforcement allows segregation under these circumstances. An approach to understanding constraints over input and over output cells is suggested, and some biological implementations are discussed. ------------------------------------------------ How to retrieve and print out this paper: unix> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password: [your e-mail address] 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get miller.hebbian.tar.Z 200 PORT command successful. 150 Opening BINARY mode data connection for miller.hebbian.tar.Z 226 Transfer complete. 480000 bytes sent in many seconds ftp> quit 221 Goodbye. 
unix> uncompress miller.hebbian.tar.Z unix> tar xvf miller.hebbian.tar TO SAVE DISC SPACE, THE ABOVE TWO COMMANDS MAY BE REPLACED WITH THE SINGLE COMMAND unix> zcat miller.hebbian.tar.Z | tar xvf - hebbian_p0-11.ps hebbian_p12-23.ps hebbian_p24-35.ps unix> lpr hebbian_p24-35.ps unix> lpr hebbian_p12-23.ps unix> lpr hebbian_p0-11.ps From P.Refenes at cs.ucl.ac.uk Fri Oct 9 07:56:32 1992 From: P.Refenes at cs.ucl.ac.uk (P.Refenes@cs.ucl.ac.uk) Date: Fri, 09 Oct 92 12:56:32 +0100 Subject: Papers Using neural Nets in Economics Message-ID: [Note: the following is a reply to a request for references on use of neural nets in financial modeling. The original request was submitted to, but not distributed on, the connectionists list. It also appeared on comp.ai.neural-nets and sci.eon. -- DST] In reply, to your request for references in this field a) the full set of references in our paper on financial modelling using neural nets is attached (straight ascii). b) a more detailed database in also attached (in tex). As far as we are aware this is more or less it. In addition we have a forthcoming book "neural network applications in the capital markets", and we plan a workshop to be held in London in Spring 93 - papers welcome. Paul Refenes. =========================================================== [Brock91] Brock W. A., "Causality, Chaos, Explanation and Prediction in Economics and Finance", in Casti J., and Karlqvist A., (eds), "Beyond Belief: Randomness, Prediction, and Explanation in Science", Boca Raton, FL: CRC Press, pp 230-279, (1991). [Brown63] Brown R. G. "Smoothing, Forecasting and Prediction of Discrete Time Series", Prentice-Hall International, (1963). [Burns86] Burns T., "The Interpretation and use of Economic Predictions", Proc. Royal Soc., Series A, pp 103- 125, (1986). [Chauvi89] Chauvin Y., "A back-propagation algorithm with optimal use of hidden units", In Touretzky D., (ed), "Advances in Neural Information Processing systems, Morgan Kaufmann (1989). [Deboec92] Deboeck D., "Pre-processing and evaluation of neural nets for trading stocks" Advanced Technology for Developers, vol. 1, no. 2, (Aug 1992). [Denker87] Denker J., et al "Large Automatic Learning. Rule Extraction and Generalisation", Complex Systems I: 877-922, (1987). [DutSha88] Dutta Sumitra, and Shashi Shekkar, "Bond rating: a non-conservative application", Proc. ICNN-88, San Diego, CA, July 24-27 1988, Vol. II (1988). [Econost92] Econostat, "Tactical Asset Allocation in the Global Bond Markets", TR-92/07, Hennerton House, Wargrave, Berkshire RG10 8PD, England, (1992). [FahLeb90] Fahlman S. E & Lebiere C, "The Cascade- Correlation Learning Architecture", Carnegie Mellon University, Technical Report CMU-CS-90- 100. ( 1990). [Hendry88] Hendry D. F., "Encompassing implications of feedback versus feedforward mechanisms in econometrics", Oxford Economic Papers, vol. 40, pp. 132-149, (1988). [Hinton87] Hinton Geoffrey, "Connectionist Learning Procedures", Computer Science Department, Carnegie-Melon University, December 1987. [Holden90] Holden K., "Current issues in macroeconomic", in Greenaway D., (ed), Croom Helm, (1990). [Hoptro93] Hoptroff A. R., "The principles and practice of time series forecasting and business modelling using neural nets", Neural Computing and Applications vol. 1, no 1., pp 59-66, (1993). [Kimoto90] Kimoto T., et al, "Stock Market Prediction with Modular Neural Networks", Proc., IJCNN-90, San Diego, (1990). 
[Klimas92] Klimasauskas C., "Genetic function optimization for time series prediction", Advanced Technology for Developers vol. 1, no. 1, (July 1992). [leCun89] le Cun. Y., "Generalisation and Network Design Strategies" Technical Report CRG-TR-89-4, University of Toronto, Department of Computer Science, (1989). [Marqu91] Marquez L., et al, "Neural networks models as an alternative to regression", Proc. Twenty-Fourth Hawaii International Conference on System Sciences, 1991, Volume 4 (pp. 129-135). [Menden89] Mendenhall W., et al "Statistics for Management And Economics", PWS-KENT Publishing Company, Boston USA, (1989). [Ormer91] Ormerod P., Taylor J. C., and Walker T., "Neiual networks in Economics", Henley Centre, (1991). [Peters91] Peters E. E., "Chaos and Order in the Capital Markets", Willey, USA, (1991). [Refene92a] Refenes A. N., "Constructive Learning and its Application to Currency Exchange Rate Prediction", in "Neural Network Applications in Investment and Finance Services", eds. Turban E., and Trippi R., Chapter 27, Probus Publishing, USA, 1992. [Refene92b] Refenes A. N., et al "Currency Exchange rate prediction and Neural Network Design Strategies", Neural computing & Applications Journal, Vol 1, no. 1., (1992). [Refene92c] Refenes A. N., et al "Stock Ranking Using Neural Networks", submitted ICNN'93, San Francisco, Department of Computer Science, University College London, (1992). [RefAze92] Refenes A. N., & Azema-Barac M., "Neural Networks for Tactical Asset Allocation in the Global Bonds Markets", Proc. IEE Third International Conference on ANNS, Brighton 1993 (submitted 1992). [Refenes93] Refenes A. N., et al "Financial Modelling Using Neural Networks", in Liddell H. (ed) "Commercial Parallel Processing", Unicom, (to appear). [RefAli91] Refenes A. N., & Alippi C., "Histological Image understanding by Error Backpropagation", Microprocessing and Microprogramming Vol. 32, pp. 437-446, , North-Holland, (1991). [RefCha92] Refenes A. N., & Chan E. B., "Sound Recognition and Optimal Neural Network Design", Proc. EUROMICRO-92, Paris (Sept. 1992). [RefVit91] Refenes A. N. & Vithlani S. "Constructive Learning by Specialisation", Proc. ICANN-91, Helsiniki, (1991). [RefZai92] Refenes A. N., & Zaidi A., "Managing Exchange Rate Prediction Strategies with Neural Networks", Proc. Workshop on Neural Networks: techniques & Applications, Liverpool (Sept. 1992), also in Lisboa P. G., and Taylor M, "Neural Networks: techniques & Applications", Ellis Horwood (1992). [Refenes91] Refenes A.N., "CLS: An Adaptive Learning Procedure and Its Application to Time Series Forecasting", Proc. IJCNN-91, Singapore, (Nov. 1991). [Refenes92d] Refenes A. N., et al "Currency Exchange Rate Forecasting by Error Backpropagation", Proc. Conference on System Sciences, HICCS-25, Kauai, HawaII, Jan. 7-10, 1992. [Rumelh86] Rumelhart D. E., et al, "Learning Internal Representation by error propagation." In Rumelhart.D.E, McClelland.J.L and PDP Research Group editors Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1 Foundation, MIT Press (1986). [Shoene90] Schoenenburg E., "Stock price prediction using neural networks: a project report", Neurocomputing 2, pp. 17-27, 1990. [TsiZei92] Tsibouris G., and Zeidenberg M., "Back propagation as a test of the efficient markets hypothesis", Proc. Hawaii International Conference on System Sciences, January 7-10th 1992, Kauai, Hawaii, Volume 4 (pp. 523-532). 
[White88] White Halbert, "Economic prediction using neural networks: the case of IBM daily stock returns", Department of Economics, University of California, (1988). [Wallis89] Wallis K., F., "Macroeconomic forecasting: a survey", Economic Journal, vol. 99, pp. 28-61, (1989). [Weigen90] Weigend A., et al, "Predicting the future: a connectionist approach", Int. Journal of Neural Systems, vol. 1, pp. 193-209, (1990). ====================================================================== %T Using neural nets to predict several sequential and subsequent future values from time series data %A James E. Brown %J Proceedings of the First International Conference on Artificial Intelligence Applications on Wall Street, October 9-11 1991, New York %Q Division of Management, Polytechnic University %I IEEE Computer Society Press %C Los Alamitos, CA %D 1991 %P 30-34 %T Decision support system for position optimization on currency option dealing %A Shuhei Yamaba %A Hideki Kurashima %J Proceedings of the First International Conference on Artificial Intelligence Applications on Wall Street, October 9-11th 1991, New York %Q Division of Management, Polytechnic University %I IEEE Computer Society Press %C Los Alamitos, CA %D 1991 %P 160-165 %T An intelligent trend prediction and reversal recognition system using dual-modul %A Gia-Shuh Jang %A Feipei Lai %A Bor-Wei Jiang %A Li-Hua Chien %J Proceedings of the First International Conference on Artificial Intelligence Applications on Wall Street, October 9-11th 1991, New York %Q Division of Management, Polytechnic University %I IEEE Computer Society Press %C Los Alamitos, CA %D 1991 %P 42-51 %T Economic models and time series: AI and new techniques for learning from examples %A Tomaso Poggio %I Artificial Intelligence Laboratory, MIT %C Cambridge, MA %R TR %P 15 %T Bond rating: a non-conservative application of neural networks %A Soumitra Dutta %A Shashi Shekhar %J Proceedings of the International Conference on Neural Networks, San Diego, CA, July 24-27 1988, Volume II %I IEEE %C San Diego, CA %P 443-450 %T Stock price prediction using neural networks: a project report %A E. Schoneburg %J Neurocomputing %V 2 %D 1990 %P 17-27 %T Artificial neural systems: a new tool for financial decision-making %A Delvin D. Hawley %A John D. Johnson %A Dijotam Raina %J Financial Analysts Journal %D November-December 1990 %P 63-72 %T Financial simulations on a massively parallel connection machine %A James M. Hutchison %R Report 90-04-01 %I Decision Sciences Department, University of Pennsylvania %C Philadelphia, PA %D September 1990 %P 34 %T Neural networks in economics %A Paul Ormerod %A John C. Taylor %A Ted Walker %J Money and financial markets %E Mark P. Taylor %I Blackwell Ltd %C Oxford %D 1991 %P 341-353 %G 0631179828 %T Function approximation and time series prediction with neural networks %A R.D. Jones %A Y.C. Lee %A C.W. Barnes %A G.W. Flake %A K. Lee %A P.S. Lewis %A S. Qian %I Center for Nonlinear Studies, Los Alamos %D 1989 %T Predicting the future: a connectionist approach %A A. Weigend %A B. Huberman %A D. Rumelhart %J International Journal of Neural Systems %V 1 %N 3 %D 1990 %P 193-209 %T Stock market prediction system with modular neural networks %A T. Kimoto %A K. Asakawa %J Proceedings of the International Joint Conference on Neural Networks, San Diego, June 17-21 1990 Volume I %I IEEE Neural Network Council %C Ann Arbor, MI %P 1-7 %T Forecasting economic turning points with neural nets %A R.G. Hoptroff %A M.J. Bramson %A T.J. 
Hall %J to be published in Neural Computing and Applications, Summer 1992 %P 6 %T Neural network applications in business minitrack %A W. Remus %A T. Hill %B Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, January 7-10th, 1992, Kauai, Hawaii, Vol 4 %E Jay F. Nunamaker %E Ralph H. Sprague %I IEEE Computer Society Press %C Los Alamitos, CA %D 1992 %P 493 %T Neural network models for forecasting: a review %A Leorey Marquez %A Tim Hill %A Marcus O'Connor %A William Remus %B Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, January 7-10th 1992, Kauai, Hawaii, Vol.4 %E Jay F. Nunamaker %E Ralph H. Sprague %I IEEE Computer Society Press %C Los Alamitos, CA %D 1992 %P 494-497 %T Neural nets vs. logistic regression %A T. Bell %A G. Ribar %A J. Verchio %J Proceedings of the University of Southern California Expert Systems Symposium %D November 1989 %T Contrasting neural nets with regression in predicting performance %A K. Duliba %J Proceedings of the Twenty-Fourth Hawaii International Conference on System Sciences, Volume 4 %D 1991 %P 163-170 %T A business application of artificial neural network systems %A A. Koster %A N. Sondak %A W. Bourbia %J The Journal of Computer Information Systems %V 31 %D 1990 %P 3-10 %T Neural networks models as an alternative to regression %A L. Marquez %A T. Hill %A W. Remus %A R. Worthley %J Proceedings of the Twenty-Fourth Hawaii International Conference on System Sciences, 1991, Volume 4 %D 1991 %P 129-135 %T A neural network model for bankruptcy prediction %A M. Odom %A R. Sharda %J Proceedings of the 1990 International Joint Conference on Neural Networks, San Diego, CA, June 17-21 1990, Volume II %I IEEE Neural Networks Council %C Ann Arbor, MI %D 1990 %P 163-168 %T A neural network application for bankruptcy prediction %A W. Raghupathi %A L. Schade %A R. Bapi %J Proceedings of the Twenty-Fourth Hawaii International Conference on System Sciences 1991, Volume 4 %D 1991 %P 147-155 %T Neural network models of managerial judgement %A W. Remus %A T. Hill %J Proceedings Twenty-Third Hawaii International Conference on System Sciences 1990, Volume 4 %D 1990 %P 340-344 %T Neural network models for intelligent support of managerial decision making %A W. Remus %A T. Hill %R University of Hawaii Working Paper %D 1991 %T Forecasting country risk ratings using a neural network %A J. Roy %A J. Cosset %J Proceedings of the Twenty-Third Hawaii International Conference on System Sciences 1990, Volume 4 %D 1990 %P 327-334 %T Neural networks as forecasting experts: an empirical test %A R. Sharda %A R. Patil %B Proceedings of the 1990 International Joint Conference on Neural Networks Conference, Washington DC, January 15-19 1990, Volume 2 %E Maureen Caudill %I Lawrence Erlbaum Associates %C Hillsdale, NJ %D 1990 %G 0805807764 %P 491-494 %T Connectionist approach to time series prediction: an empirical test %A R. Sharda %A R. Patil %I Oklahoma State University %C Oklahoma %R Working Paper 90-26 %D 1990 %T Neural networks for bond rating improved by multiple hidden layers %A A. Surkan %A J. Singleton %J Proceedings of the 1990 International Joint Conference on Neural Networks, San Diego, CA, June 17-21 1990, Volume 2 %I IEEE Neural Networks Council %C Ann Arbor, MI %D 1990 %P 157-162 %T Time series forecasting using neural networks vs. Box-Jenkins methodology %A Z. Tang %A C. de Almeida %A P. 
Fishwick %J Presented at the 1990 International Workshop on Neural Networks %D February 1990 %T Predicting stock price performance %A Y. Yoon %A G. Swales %J Proceedings of the Twenty-Fourth Hawaii International Conference on System Sciences 1991, Volume 4 %D 1991 %P 156-162 %T Neural networks as bond rating tools %A Alvin J. Surkan %A J. Clay Singleton %B Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, January 7-10th 1992, Kauai, Hawaii %E Jay F. Nunamaker %E Ralph H. Sprague %I IEEE Computer Society Press %C Los Alamitos, CA %D 1992 %P 499-503 %T The AT\&T divestiture: effects of rating changes on bond returns %A J.W. Peavy %A J.A. Scott %J Journal of Economics and Business %V 38 %D 1986 %P 255-270 %T Currency exchange rate forecasting by error backpropagation %A A.N. Refenes %A M. Azema-Barac %A S.A. Karoussos %B Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, January 7-10th 1992, Kauai, Hawaii %E Jay F. Nunamaker %E Ralph H. Sprague %I IEEE Computer Society Press %C Los Alamitos, CA %D 1992 %P 504-515 %T Developing neural networks to forecast agricultural commodity prices %A John Snyder %A Jason Sweat %A Michelle Richardson %A Doug Pattie %B Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, January 7-10th 1992, Kauai, Hawaii %E Jay F. Nunamaker %E Ralph H. Spague %I IEEE Computer Society Press %C Los Alamitos, CA %D 1992 %P 516-522 %T Neural Networks for Statistical and Economic Data Workshop Proceedings, Dublin, December 1990 %Q Munotec Systems Ltd and Statistical Office of the European Communities, Luxembourg %E F. Murtagh %I Eurostat: Statistical Office of the European Communities %C Luxembourg %D 1991 %P 210 %T Parallel Problem Solving from Nature: Applications in Statistics and Economics Workshop Proceedings, Zurich, December 1991 %E D. Wurtz %E F. Murtagh %I Eurostat: Statistical Office of the European Communities %C Luxembourg %D 1992 %P 192 %T Forecasting the economic cycle: a neural network approach %A M.J. Branson %A R.G. Hoptroff %B Neural Networks for Statistical and Economic Data Workshop Proceedings, Dublin, December 1990 %E F. Murtagh %I Eurostat: Statistical Office of the European Communities %C Luxembourg %D 1991 %P 121-153 %T Analysis of univariate time series with connectionist nets: a case study of two classical examples %A C. de Groot %A D. Wurtz %B Neural Networks for Statistical and Economic Data Workshop Proceedings, Dublin, December 1990 %E F. Murtagh %I Munotec Systems %D 1991 %P 95-112 %T Stock price pattern recognition - a recurrent neural network approach %A K. Kamijo %A T. Tanigawa %B International Joint Conference on Neural Networks, San Diego, June 17-21 1990, Volume I %I IEEE Neural Networks Council %C Ann Arbor, MI %D 1990 %P 215-222 %T A short survey of neural networks for forecasting and related problems %A F. Murtagh %B Neural Networks for Statistical and Economic Data Workshop Proceedings, Dublin, December 1990 %E F. Murtagh %I Munotec Systems %D 1991 %P 87 %T Back propagation as a test of the efficient markets hypothesis %A G. Tsibouris %A M. Zeidenberg %B Proceedings of the Hawaii International Conference on System Sciences, January 7-10th 1992, Kauai, Hawaii, Volume 4 %E Jay F. Nunamaker %E Ralph H. Sprague %I IEEE Computer Society Press %C Los Alamitos, CA %D 1992 %P 523-532 %T Economic prediction using neural networks: the case of IBM daily stock returns %A H. 
White %I University of California, San Diego %D 1988 %T Predicting stock market fluctuations using neural network models %A G. Tsibouris %A M. Zeidenberg %R Paper presented at the Annual Meeting of the Society fro Economic Dynamics and Control, Capri, Italy 1991 %T Smoothing, forecasting and prediction of discrete time series %A R.G. Brown %I Prentice-Hall %D 1963 %S International Series in Management (Quantitative Methods Series) %P 468 %T Applied time series analysis for business and economic forecasting %A S. Nazem %I Dekker %C New York %D 1988 %S Statistics: Textbooks and Monographs Volume 93 %G 0824779134 %T Forecasting, structural time series models and the Kalman filter %A A.C. Harvey %I Cambridge University Press %C Cambridge %D 1989 %G 0521321964 %P 554 %T Bibliography on time series and stochastic processes: an international team project %E Herman O.A. Wold %I International Statistical Institute %D 1965 %P 516 %T Chaotic evolution and strange attractors: the statistical analysis of time series for deterministic nonlinear systems %A David Ruelle %I Cambridge University Press %C Cambridge %D 1989 %P 96 %G 0521362725 %T Non-linear and non-stationary time series analysis %A M.B. Priestly %I Academic Press %C London %D 1988 %G 012564910X From sayegh at CVAX.IPFW.INDIANA.EDU Mon Oct 12 04:43:12 1992 From: sayegh at CVAX.IPFW.INDIANA.EDU (sayegh@CVAX.IPFW.INDIANA.EDU) Date: Mon, 12 Oct 1992 03:43:12 EST Subject: CNS INDY 92 Message-ID: <00961F68.E78F81C0.12275@CVAX.IPFW.INDIANA.EDU> COMPUTATIONAL NEUROSCIENCE SYMPOSIUM 1992 (CNS '92) October 17, 1992 University Place Conference Center Indiana University-Purdue University at Indianapolis, Indiana In cooperation with the IEEE Systems, Man and Cybernetics Society The Computational Neuroscience Symposium (CNS '92) will highlight the interactions among engineering, science, and neuroscience. Computational neuroscience is the study of the interconnection of neuron-like elements in computing devices which leads to the discovery of the algorithms of the brain. Such algorithms may prove useful in finding optimum solutions to practical engineering problems. The focus of the symposium will be forty-five minute special lectures by eight leading international experts. KEYNOTE LECTURE: "Challenges and Promises of Networks with Neural-type Architectures" NICHOLAS DeCLARIS, Professor of Applied Mathematics, Electrical Engineering, Pathology, Epidemiology & Preventive Medicine; Director, Division of Medical Informatics, University of Maryland. SPECIAL LECTURES: "Teaching the Multiplication Tables to a Neural Network: Flexibility vs. Accuracy" JAMES ANDERSON, Professor of Cognitive & Linguistic Sciences, Brown University. "Supervised Learning for Adaptive Radar Detection" SIMON HAYKIN, Director of Communication Research Laboratory, McMaster University. "Neural Network Applications in Waveform Analysis and Pattern Recognition" EVANGELIA MICHELI-TZANAKOU, Chair and Professor of Biomedical Engineering, Rutgers University. "Signal Processing by Neural Networks in the Control of Eye Movements" DAVID ROBINSON, Professor of Ophthalmology, Biomedical Engineering & Neuroscience, The Johns Hopkins University. "Nonlinear Properties of the Hippocampal Formation" ROBERT SCLABASSI, Professor of Neurosurgery, Electrical Engineering, Behavioral Neuroscience & Psychiatry, University of Pittsburgh. "Acoustic Images in Bar Sonar and the Mechanisms Which Form Them" JAMES SIMMONS, Professor of Biology & Psychology, Brown University. 
"Understanding the Brain as a Neurocontroller: New Hypotheses and Experimental Possibilities" PAUL WERBOS, Program Director, National Science Foundation and President, International Neural Network Society. The conference registration fee, which includes symposium proceedings and lunch, is $50 prior to October 1, 1992 and may be paid by either check or credit card. After October 1, 1992 and for on-site registration, the fee is $75. Please contact the Conference Secretary for registration. Ms. Nancy Brockman CNS '92 Conference Secretary 799 West Michigan Street, Room 1211 Indianapolis, IN 46202 tel: (317)274-2761 fax: (317)274-0832 For overnight stay before or after the symposium, reservations may be made at the University Place Conference Center and Hotel at IUPUI. Special room rates for CNS '92 participants are $76 for one person and $90 for two. Please call (317)231-5150 or fax (317)231-5168. CNS '92 ORGANIZING COMMITTEE H. Oner Yurtseven, General Co-Chair Sidney Ochs, General Co-Chair P.G. Madhavan, Program Chair Michael Penna, Publication Chair SPONSORS OF CNS '92 Department of Physiology & Biophysics, Indiana University School of Medicine National Science Foundation IUPUI Faculty Development Office Purdue University School of Engineering & Technology at Indianapolis Indiana University-Purdue University School of Science at Indianapolis Eli Lilly and Company Department of Ophthalmology, Indiana University School of Medicine From jang at diva.Berkeley.EDU Mon Oct 12 12:58:58 1992 From: jang at diva.Berkeley.EDU (Jyh-Shing Roger Jang) Date: Mon, 12 Oct 92 09:58:58 -0700 Subject: paper available Message-ID: <9210121658.AA24703@diva.Berkeley.EDU> The following paper has been placed on the neuroprose archive as jang.adaptive_fuzzy.ps.Z and is available via anonymous ftp (from archive.cis.ohio-state.edu in the pub/neuroprose directory). This paper will appear in IEEE Trans. on Systems, Man and Cybernetics. ========================================================================= TITLE: ANFIS: Adaptive-Network-based Fuzzy Inference System ABSTRACT: This paper presents the architecture and learning procedure underlying ANFIS (Adaptive-Network-based Fuzzy Inference System), a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In our simulation, we employ the ANFIS architecture to model nonlinear functions, identify nonlinear components on-linely in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificail neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested. =========================================================================== Here is an example of how to retrieve this file: gvax> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron at wherever 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get jang.adaptive_fuzzy.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for jang.adaptive_fuzzy.ps.Z 226 Transfer complete. 
100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. gvax> uncompress jang.adaptive_fuzzy.ps.Z gvax> lpr jang.adaptive_fuzzy.ps -- J.-S. Roger Jang 571 Evans, EECS Department Univ. of California Berkeley, CA 94720 jang at diva.berkeley.edu (510)-642-5029 fax: (510)642-5775 From gary at cs.UCSD.EDU Mon Oct 12 18:29:56 1992 From: gary at cs.UCSD.EDU (Gary Cottrell) Date: Mon, 12 Oct 92 15:29:56 -0700 Subject: ACL-93 Message-ID: <9210122229.AA26674@odin.ucsd.edu> PDPNLP'ers: I am on the program committee for the Association for Computational Linguistics conference this year. I encourage connectionists to submit (excellent!) papers to this conference. Note that since (as far as I can tell) I am the only one on the committee, I can't promise a large representation by connectionists, but at least you'll be reviewed by one of your own. Now, lessee, who's going to review *my* submission? ;-) Gary Cottrell, UCSD ACL-93 CALL FOR PAPERS 31st Annual Meeting of the Association for Computational Linguistics 22-26 June 1993 Ohio State University Columbus, Ohio, USA TOPICS OF INTEREST: Papers are invited on substantial, original, and unpublished research on all aspects of computational linguistics, including, but not limited to, pragmatics, discourse, semantics, syntax, and the lexicon; phonetics, phonology, and morphology; interpreting and generating spoken and written language; linguistic, mathematical, and psychological models of language; language-oriented information retrieval; corpus-based language modelling; machine translation and translation aids; natural language interfaces and dialogue systems; message and narrative understanding systems; and theoretical and applications papers of every kind. REQUIREMENTS: Papers should describe unique work; they should emphasize completed work rather than intended work; and they should indicate clearly the state of completion of the reported results. A paper accepted for presentation at the ACL Meeting cannot be presented at another conference. Self-references which reveal the authors' identity (e.g., ``We previously showed [Smith, 1991] . . .'') should be avoided as far as possible, since reviewing will be ``blind''. FORMAT FOR SUBMISSION: Authors should submit four copies of preliminary versions of their papers, not to exceed 3200 words (exclusive of references). To facilitate blind reviewing, two title pages are required. The first (one copy only, unattached) should include the title, the name(s) of the author(s), complete addresses, a short (5 line) summary, and a specification of the topic area. The second (4 copies, heading the copies of the paper) should omit author names and addresses. Submissions that do not conform to this format will not be reviewed. As well, authors are strongly urged to email the title page (in directly readable ASCII form, with author information). Send to: Lenhart Schubert ACL-93 University of Rochester Department of Computer Science Rochester, NY 14627, USA fax: +1-716-461-2018 acl93 at cs.rochester.edu SCHEDULE: Preliminary papers are due by 6 January 1993. Authors will be notified of acceptance by 15 March 1993. Camera-ready copies of final papers prepared in a double-column format, preferably using a laser printer, must be received by 1 May 1993, along with a signed copyright release statement. STUDENT SESSIONS: Following the ACL-91/92 successes, there will again be a special Student Session organized by a committee of ACL graduate student members. 
ACL student members are invited to submit short papers describing innovative work in progress in any of the topics listed above. The papers will again be reviewed by a committee of students and faculty members for presentation in a workshop-style session. A separate call for papers will be issued; to get one or for other information contact Linda Suri, University of Delaware, Computer & Information Science, 103 Smith Hall, Newark, DE 19716, USA; +1-302-831-1949; suri at cis.udel.edu. OTHER ACTIVITIES: The meeting will include a program of tutorials coordinated by Philip Cohen, SRI International, Artificial Intelligence Center, 333 Ravenswood Avenue, Menlo Park, CA 94025, USA; +1-415-859-4840; pcohen at ai.sri.com. Some of the ACL Special Interest Groups may arrange workshops or other activities. CONFERENCE INFORMATION: Local arrangements are being chaired by Terry Patten, Ohio State University, Computer & Information Science, 2036 Neil Avenue Mall, Columbus, OH 43210, USA; +1-614-292-3989; patten at cis.ohio-state.edu. Anyone wishing to arrange an exhibit or present a demonstration should send a brief description together with a specification of physical requirements (space, power, telephone connections, tables, etc.) to Robert Kasper, Ohio State University, Linguistics, 222 Oxley Hall, 1712 Neil Avenue, Columbus, OH 43210, USA; +1-614-292-2844; kasper at ling.ohio-state.edu. PROGRAM COMMITTEE: The committee is chaired by Lenhart Schubert (U Rochester) and also includes Robert Carpenter (CMU) Mitch Marcus (U Pennsylvania) Garrison Cottrell (UC-San Diego) Kathleen McCoy (U Delaware) Robert Dale (U Edinburgh) Marc Moens (U Edinburgh) Bonnie Dorr (U Maryland) Johanna Moore (U Pittsburgh) Julia Hirschberg (AT&T Bell Labs) John Nerbonne (German AI Center) Paul Jacobs (GE Schenectady) James Pustejovsky (Brandeis U) Robert Kasper (Ohio State U) Uwe Reyle (U Stuttgart) Slava Katz (IBM Watson) Richard Sproat (AT&T Bell Labs) Judith Klavans (Columbia U) Jun-ichi Tsujii (UMIST) Bernard Lang (INRIA) Gregory Ward (Northwestern U) Diane Litman (AT&T Bell Labs) Janyce Wiebe (New Mexico State U) ACL INFORMATION: For other information on the conference and on the ACL more generally, contact Don Walker (ACL), Bellcore, MRE 2A379, 445 South Street, Box 1910, Morristown, NJ 07960-1910, USA; +1-201-829-4312; walker at bellcore.com. 1993 LINGUISTIC INSTITUTE: The 57th Linguistic Institute, sponsored by the LSA and co-sponsored by the ACL, will be held at The Ohio State University, in Columbus, Ohio, from June 28 until August 6, 1993, beginning right after the annual meeting of ACL. It will feature a number of computational linguistics courses, as described in the September 1992 issue of The FINITE STRING. For more information and application forms, see the June 1992 issue of the LSA Bulletin, or contact Linguistic Institute, Department of Linguistics, 222 Oxley Hall, The Ohio State University, Columbus, OH 43210, USA; +1-614-292-4052; +1-614-292-4273 fax; linginst at ling.ohio-state.edu. From Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU Tue Oct 13 01:32:16 1992 From: Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU (Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU) Date: Tue, 13 Oct 92 01:32:16 -0400 Subject: Call for Workshop Presenters Message-ID: As you may have seen in the NIPS*92 workshop program, I will be running a session entitled "Reading the Entrails: Understanding What's Going On Inside a Neural Net". This will take place on December 4 in Vail, Colorado. 
We will have a total of four hours for this workshop, which means we can accommodate 6-8 presentations of about 20 minutes each (plus ample time for discussion). Several presentation slots are still open. I would like to hear from any of you who would like to present a technique that you have found useful for understanding what's going on inside a network, either during or after training. You don't necessarily have to be the person who *invented* the technique, though you should have some real hands-on experience. As a presenter, you should describe how a specific technique works, show (perhaps with diagrams or a videotape) how the technique was applied to some specific problem, and describe what useful insights resulted from this application. Specific issues we would like to hear about include the following: * How do you extract a set of rules (symbolic or fuzzy) from a trained neural network? Under what conditions is this possible? * How do you explain an individual network output or action in terms of the network's inputs and structure? Which inputs are most responsible for this output? * What are the best ways to visualize weights, unit states, and their trajectories over time? How can we visualize the joint behavior of a large number of units? * What can we learn from receptive-field diagrams for the hidden units? * How can we understand the behavior of recurrent and time-domain networks? (Extracting equivalent finite-state machines, etc.) * Learning pathologies and what they look like. If you would like to present something along these lines, please contact me by E-mail (sef at cs.cmu.edu) and let me know what you would like to describe. By the way, none of the NIPS workshops are limited to presenters only. People who want to show up and listen are welcome, as long as there is room. It is suggested that you register pretty soon, however. All inquiries for registration information should go to NIPS*92 Registration SIEMENS Research Center 755 College Rd. East Princeton, NJ 08550 phone 609-734-3383 email kic at learning.siemens.com See you in Vail! -- Scott =========================================================================== Scott E. Fahlman School of Computer Science Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 Internet: sef+ at cs.cmu.edu From jose at tractatus.siemens.com Tue Oct 13 09:29:07 1992 From: jose at tractatus.siemens.com (Steve Hanson) Date: Tue, 13 Oct 1992 09:29:07 -0400 (EDT) Subject: Fwd: NIPS Program errors References: <9210131312.AA13738@learning.siemens.com> Message-ID: We have been made aware of some errors in the NIPS Program and apologize for any inconvenience this may have caused you. Please be aware of the following: The Tutorial Program is held on November 30th, 1992 (disregard incorrect date on top of page 6 in the NIPS Program booklet) The Tutorial by Josh Alspector "Electronic Neural Networks" will be held on November 30th, 1992 from 3:30-5:30 (disregard incorrect time on page 6 in the NIPS Program booklet) The dates and times as they appear on the Conference Registration form are correct. From jaap.murre at mrc-apu.cam.ac.uk Tue Oct 13 15:13:30 1992 From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre) Date: Tue, 13 Oct 92 15:13:30 BST Subject: book announcement Message-ID: <2332.9210131413@sirius.mrc-apu.cam.ac.uk> Book Announcement Learning and Categorization in Modular Neural Networks by Jacob M.J. Murre This book introduces a new neural network model, called CALM, for categorization and learning in neural networks. 
CALM is a building block for modular neural networks. The internal structure of the CALM module is inspired by the neocortical minicolumn. A pivotal psychological concept in the CALM learning algorithm is self-induced arousal, which may affect the local learning rate and noise level. The author demonstrates how this model can learn the word-superiority effect for letter recognition, and he discusses a series of studies that simulate experiments in implicit and explicit memory, involving normal and amnesic patients. Pathological, but psychologically accurate, behavior is produced by 'lesioning' the arousal system of these models. The author also introduces as an illustrative practical application a small model that learns to recognize handwritten digits. The book also contains a concise introduction to genetic algorithms, a new computing method based on the biological metaphor of evolution, and it is demonstrated how these genetic algorithms can be used to design network architectures with superior performance. The role of modularity in parallel hardware and software implementations is discussed in some depth. Several hardware implementations are considered, including transputer networks and a dedicated 400-processor neurocomputer built by the developers of CALM in cooperation with Delft Technical University. The book ends with an evaluation of the psychological and biological plausibility of CALM models and a general discussion of catastrophic interference, generalization, and representational capacity of modular neural networks. Murre, J.M.J. (1992). Learning and categorization in modular neural networks. Hemel Hempstead: Harvester Wheatsheaf, and Hillsdale, NJ: Lawrence Erlbaum (in Canada and the USA), 244pp. Price indication: paperback $25.50 (14.95 Pound Sterling), hardback $76.50 (45.00 Pound Sterling). For additional information, contact: Simon & Schuster at Campus 400, Maylands Avenue, Hemel Hempstead, Herts HP2 7EZ, England, tel. (0442) 881900, fax. (0442) 882099; or Lawrence Erlbaum at 365 Broadway, Hillsdale, NJ 07642-1487, USA. From rsun at athos.cs.ua.edu Tue Oct 13 14:31:49 1992 From: rsun at athos.cs.ua.edu (Ron Sun) Date: Tue, 13 Oct 1992 13:31:49 -0500 Subject: TR available: variable binding Message-ID: <9210131831.AA23434@athos.cs.ua.edu> On Variable Binding in Connectionist Networks by Ron Sun This paper deals with the problem of variable binding in connectionist networks. Specifically, a more thorough solution to the variable binding problem based on the {\it Discrete Neuron} formalism is proposed and a number of issues arising in the solution are examined in relation to logic: consistency checking, binding generation, unification, and functions, etc. We analyze what is needed in order to resolve these issues, and based on this analysis, a procedure is developed for systematically setting up connectionist networks for variable binding. The DN formalism is used as a descriptive tool and the solution based on the DN formalism can be readily mapped to simple types of neural networks. This solution compares favorably to similar solutions in simplicity and completeness. To appear in: Connection Science, Vol.4, No.2. 1992 ------------------------------------------------ It is FTPable from archive.cis.ohio-state.edu in: pub/neuroprose No hardcopy available. 
FTP procedure: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get sun.variable.ps.Z ftp> quit unix> uncompress sun.variable.ps.Z unix> lpr sun.variable.ps (or however you print postscript) From petsche at hawk.siemens.com Tue Oct 13 16:22:22 1992 From: petsche at hawk.siemens.com (Thomas Petsche) Date: Tue, 13 Oct 92 16:22:22 EDT Subject: Position available Message-ID: <9210132022.AA08278@hawk.siemens.com> Position available The Learning Systems Department at Siemens Corporate Research is looking for a software developer and programmer with interest in machine learning and/or neural networks to develop software for prototypes and in-house research projects. Current projects are focused on specific instances of time series classification, knowledge representation, computational linguistics and intelligent control. Current research includes a broad spectrum of learning algorithm design and analysis. The successful candidate will contribute software design and implementation expertise to these activities. The job requires a master's degree or equivalent; a thorough understanding of, and experience with, Unix and X-Windows programming; some familiarity with machine learning and/or neural networks. If you are interested, please send a resume (via email if possible) to Thomas Petsche petsche at learning.siemens.com FAX: 609-734-6565 Siemens Corporate Research 755 College Road East Princeton, NJ 08540 From rsun at athos.cs.ua.edu Tue Oct 13 14:37:40 1992 From: rsun at athos.cs.ua.edu (Ron Sun) Date: Tue, 13 Oct 1992 13:37:40 -0500 Subject: TR available: inheritance Message-ID: <9210131837.AA11452@athos.cs.ua.edu> An Efficient Feature-based Connectionist Inheritance Scheme Ron Sun Department of Computer Science University of Alabama Tuscaloosa, AL 35487 -------------------------------------------------------------- To appear in: IEEE Transaction on System, Man, and Cybernetics. Vol.23. No.1. 1993 ---------------------------------------------------------------- The paper describes how a connectionist architecture deals with the inheritance problem in an efficient and natural way. Based on the connectionist architecture CONSYDERR, we analyze the problem of property inheritance and formulate it in ways facilitating conceptual clarity and implementation. A set of ``benchmarks" is specified for ensuring the correctness of inheritance mechanisms. Parameters of CONSYDERR are formally derived to satisfy these benchmark requirements. We discuss how chaining of is-a links and multiple inheritance can be handled in this architecture. This paper shows that CONSYDERR with a two-level dual (localist and distributed) representation can handle inheritance and cancellation of inheritance correctly and extremely efficiently, in constant time instead of proportional to the length of a chain in an inheritance hierarchy. It also demonstrates the utility of a meaning-oriented, intensional approach (with features) for supplementing and enhancing extensional approaches. ---------------------------------------------------------------- It is FTPable from archive.cis.ohio-state.edu in: pub/neuroprose (Courtesy of Jordan Pollack) No hardcopy available. 
FTP procedure: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get sun.inheritance.ps.Z ftp> quit unix> uncompress sun.inheritance.ps.Z unix> lpr sun.inheritance.ps (or however you print postscript) (a revised version of a previous TR) From ken at cns.caltech.edu Tue Oct 13 11:25:21 1992 From: ken at cns.caltech.edu (Ken Miller) Date: Tue, 13 Oct 92 08:25:21 PDT Subject: printing of tech report Message-ID: <9210131525.AA13002@zenon.cns.caltech.edu> With respect to the recently announced tech report, "The Role of Constraints in Hebbian Learning", by K.D. Miller and D.J.C. MacKay: there were some problems with printing at least one of the postscript files. I have placed a new set of files in neuroprose that print fine on at least one printer that previously had problems. So, if you had printing problems, please try again. If you continue to have printing problems, let me know and I can send a hardcopy (assuming low-to-moderate numbers of requests). Ken ------------------------------------------------ How to retrieve and print out this paper: unix> ftp archive.cis.ohio-state.edu [OR: ftp 128.146.8.52] Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password: [your e-mail address] 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get miller.hebbian.tar.Z 200 PORT command successful. 150 Opening BINARY mode data connection for miller.hebbian.tar.Z 226 Transfer complete. 470000 bytes sent in many seconds ftp> quit 221 Goodbye. unix> uncompress miller.hebbian.tar.Z unix> tar xvf miller.hebbian.tar TO SAVE DISC SPACE, THE ABOVE TWO COMMANDS MAY BE REPLACED WITH THE SINGLE COMMAND unix> zcat miller.hebbian.tar.Z | tar xvf - hebbian_p0-9.ps hebbian_p10-19.ps hebbian_p20-29.ps hebbian_p30-35.ps unix> lpr hebbian_p30-35.ps unix> lpr hebbian_p20-29.ps unix> lpr hebbian_p10-19.ps unix> lpr hebbian_p0-9.ps From dmt at sara.inesc.pt Wed Oct 14 08:27:46 1992 From: dmt at sara.inesc.pt (Duarte Trigueiros) Date: Wed, 14 Oct 92 11:27:46 -0100 Subject: No subject Message-ID: <9210141227.AA21622@sara.inesc.pt> In addition to Paul Refenes' list, I would like to mention mine and Bob's paper on the automatic forming of ratios as internal representations of the MLP. This paper shows that the problem of discovering the appropriate ratios for performing a given task in financial statement analysis can be simplified by using some specific training schemes in an MLP. @inproceedings( xxx , author = "Trigueiros, D. and Berry, R.", title = "The Application of Neural Network Based Methods to the Extraction of knowledge From Accounting Reports", Booktitle = "Organisational Systems and Technology: Proceedings of the $24^{th}$ Hawaii International Conference on System Sciences", Year = 1991, Pages = "136-146", Publisher = "IEEE Computer Society Press, Los Alamitos, (CA) US.", Editor = "Nunamaker, E. and Sprague, R.") I also noticed that Paul didn't mention Utans and Moody's "Selecting Neural Network Architectures via the Prediction Risk: An Application to Corporate Bond Rating Prediction" (1991), which has been published somewhere and has, or had, a version in the neuroprose archive as utans.bondrating.ps.Z. This paper is especially recommended, as the early literature on financial applications of NNs didn't care too much about things like cross-validation. 
The achievements, of course, were appallingly brilliant. Finally, I gathered from Paul's list of articles, that there is a book of readings entitled "Neural Network Applications in Investment and Finance". Paul is the author of an article in chapter 27. The remaining twenty six or so chapters can eventually contain interesting stuff for completing this search. When the original request for references appeared in another list I answered to it. So, I must apologise for mentioning our reference again here. I did it, as Paul list of references could give the impression, despite him, of being an attempt to be extensive. --------------------------------------------------- Duarte Trigueiros, INESC, R. Alves Redol 9, 2. 1000 Lisbon, Portugal e-mail: dmt at sara.inesc.pt FAX +351 (1) 7964710 --------------------------------------------------- From fellous%hyla.usc.edu at usc.edu Wed Oct 14 15:01:06 1992 From: fellous%hyla.usc.edu at usc.edu (Jean-Marc Fellous) Date: Wed, 14 Oct 92 12:01:06 PDT Subject: USC/CNE Workshop Message-ID: <9210141901.AA23215@hyla.usc.edu> Thank you for posting this announcement to the mailing list ... --------------------------------------------------------------------------- ----------------------------- U S C ------------------------------------ --------------------------------------------------------------------------- Neural Mechanisms of Looking, Reaching and Grasping A Workshop sponsored by the Human Frontier Science Research Program and the Center for Neural Engineering - U.S.C. Michael A. Arbib Organizer October 21-22, 1992 HEDCO NEUROSCIENCES AUDITORIUM USC, University Park Campus, Los Angeles, CA ================= Session 1, October 21 ================ Chair: Hideo Sakata 08:30 - 09:00 am Marc Jeannerod (INSERM, Lyon, France) 09:00 - 09:30 am "Functional Parcellation of Human Parietal and Premotor Cortex during Reach and Grasp Tasks" Scott Grafton School of Medicine, USC, Los Angeles, CA, USA 09:30 - 10:00 am "Anatomo-functional Organization of the 'Supplementary Motor Area' and the Adjacent Cingulate Motor Areas" Massimo Matelli Universita Degli Studi di Parma, Italy **** 10:00 - 10:30 BREAK 10:30 - 11:00 am "Inferior Area 6: New findings on Visual Information Coding for Reaching and Grasping" Giacomo Rizzolatti, Universita Degli Studi di Parma, Italy 11:00 - 11:30 am "Neural Strategies for Controlling Fast Movements" Jim-Shih Liaw CNE/Computer Science Department, USC Los Angeles, CA, USA 11:30 - 12:00 am "Cortex and Haptic Memory" Joaquin Fuster, UCLA Medical Center Los Angeles, CA, USA 12:00 - 12:30 pm "Trajectory Learning from Spatial Constraints" Michael Jordan Brain and Cognitive Science Department MIT, Cambridge, MA, USA **** 12:30 - 01:30 pm LUNCH ===================== Session 2 ==================== Chair: Jean-Paul Joseph 01:30 - 02:00 pm "Selectivity of Hand Movement-Related Neurons of the Parietal Cortex in Shape, Size and Orientation of Objects and Hand Grips" Hideo Sakata, Nihon University School of Medicine Tokyo, Japan 02:00 - 02:30 pm "Modeling the Dynamic Interactions between Subregions of the Posterior Parietal and Premotor Cortices" Andrew Fagg CNE/Computer Science Department, USC, Los Angeles, CA, USA 02:30 - 03:00 pm "Optimal Control of Reaching Movements Using Neural Networks" Alberto Borghese Center for Neural Engineering, USC and I.F.C.N.-C.N.R., Milano, Italy **** 03:00 - 03:30 BREAK 03:30 - 04:00 pm " How the Frontal Eye Field can impose a saccade goal on Superior Colliculus Neurons" Madeleine Schlag-Rey Brain Research Institute, UCLA, 
Los Angeles, CA, USA 04:00 - 04:30 pm "Variations on a Theme of Hallett and Lightsone" John Schlag Department of Anatomy, UCLA Los Angeles, CA, USA 04:30 - 05:00 pm "The saccade and its Context" Lucia Simo Center for Neural Engineering, USC, Los Angeles, CA 05:00 - 05:30 "An Integrative View on Modeling" Michael Arbib Center for Neural Engineering/Computer Science Department, USC Los Angeles, CA, USA ================= Session 3, October 22 =================== Chair: Giacomo Rizzolatti 08:30 - 09:00 am "Neural Activity in the Caudate Nucleus of Monkeys during Motor and Oculomotor Sequencing" Jean-Paul Joseph INSERM, Lyon, France 09:00 - 09:30 "Models of Cortico-Striatal Plasticity for Learning Associations in Space and Time" Peter Dominey Computer Science Department, USC, Los Angeles, CA, USA 09:30 - 10:00 "Eye-Head-Hand Coordination in a Pointing Task" Claude Prablanc INSERM, Lyon, France **** 10:00 - 10:30 BREAK 10:30 - 11:00 "Modeling Kinematics and Interaction of Reach and Grasp" Bruce Hoff CNE/Computer Science Department, USC, Los Angeles, CA, USA 11:00 - 11:30 "Towards a Model of the Cerebellum" Nicolas Schweighofer Center for Neural Engineering, USC, Los Angeles, CA, USA 11:30 - 12:00 "Does the Lateral Cerebellum Map Movements onto Spatial Targets?", Thomas Thach Washington University School of Medicine, St. Louis, MO, USA **** 12:00 - LUNCH ---------------------------------------------------------------------------- From becker at ai.toronto.edu Wed Oct 14 14:39:35 1992 From: becker at ai.toronto.edu (becker@ai.toronto.edu) Date: Wed, 14 Oct 1992 14:39:35 -0400 Subject: paper in neuroprose Message-ID: <92Oct14.143937edt.289@neuron.ai.toronto.edu> A postscript version of my PhD thesis has been placed in the neuroprose archive. It prints on 150 pages. The abstract is given below, followed by retrieval instructions. Sue Becker email: becker at ai.toronto.edu ----------------------------------------------------------------------------- An Information-theoretic Unsupervised Learning Algorithm for Neural Networks ABSTRACT In the unsupervised learning paradigm, a network of neuron-like units is presented an ensemble of input patterns from a structured environment, such as the visual world, and learns to represent the regularities in that input. The major goal in developing unsupervised learning algorithms is to find objective functions that characterize the quality of the network's representation without explicitly specifying the desired outputs of any of the units. Previous approaches in unsupervised learning, such as clustering, principal components analysis, and information-transmission-based methods, make minimal assumptions about the kind of structure in the environment, and they are good for preprocessing raw signal input. These methods try to model {\em all} of the structure in the environment in a single processing stage. The approach taken in this thesis is novel, in that our unsupervised learning algorithms do not try to preserve all of the information in the signal. Rather, we start by making strongly constraining assumptions about the kind of structure of interest in the environment. We then proceed to design learning algorithms which will discover precisely that structure. By constraining what kind of structure will be extracted by the network, we can force the network to discover higher level, more abstract features. Additionally, the constraining assumptions we make can provide a way of decomposing difficult learning problems into multiple simpler feature extraction stages. 
We propose a class of information-theoretic learning algorithms which cause a network to become tuned to spatially coherent features of visual images. Under Gaussian assumptions about the spatially coherent features in the environment, we have shown that this method works well for learning depth from random dot stereograms of curved surfaces. Using mixture models of coherence, these algorithms can be extended to deal with discontinuities, and to form multiple models of the regularities in the environment. Our simulations demonstrate the general utility of the Imax algorithms in discovering interesting, non-trivial structure (disparity and depth discontinuities) in artificial stereo images. This is the first attempt we know of to model perceptual learning beyond the earliest stages of low-level feature extraction, and to model multiple stages of unsupervised learning. ----------------------------------------------------------------------------- To retrieve from neuroprose: unix> ftp cheops.cis.ohio-state.edu Name (cheops.cis.ohio-state.edu:becker): anonymous Password: (use your email address) ftp> cd pub/neuroprose ftp> get becker.thesis1.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for becker.thesis1.ps.Z (292385 bytes). 226 Transfer complete. 292385 bytes received in 13 seconds (22 Kbytes/s) ftp> get becker.thesis2.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for becker.thesis2.ps.Z (366573 bytes). 226 Transfer complete. 366573 bytes received in 15 seconds (23 Kbytes/s) ftp> get becker.thesis3.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for becker.thesis3.ps.Z (178239 bytes). 226 Transfer complete. 178239 bytes received in 9.2 seconds (19 Kbytes/s) ftp> quit 221 Goodbye. unix> uncompress becker* unix> lpr becker.thesis1.ps unix> lpr becker.thesis2.ps unix> lpr becker.thesis3.ps From mclennan at cs.utk.edu Wed Oct 14 15:05:39 1992 From: mclennan at cs.utk.edu (mclennan@cs.utk.edu) Date: Wed, 14 Oct 92 15:05:39 -0400 Subject: paper in neuroprose Message-ID: <9210141905.AA21440@maclennan.cs.utk.edu> **DO NOT FORWARD TO OTHER GROUPS** The following technical report has been placed in the Neuroprose archives at Ohio State (filename: maclennan.fieldcompbrain.ps.Z). Ftp instructions follow the abstract. The uncompressed file is large (1.36 MBytes). ----------------------------------------------------- Field Computation in the Brain Bruce MacLennan Computer Science Department University of Tennessee Knoxville, TN 37996 maclennan at cs.utk.edu Technical Report CS-92-174 ABSTRACT: We begin with a brief consideration of the *topology of knowledge*. It has traditionally been assumed that true knowledge must be represented by discrete symbol structures, but recent research in psychology, philosophy and computer science has shown the fundamental importance of *subsymbolic* information processing, in which knowledge is represented in terms of very large numbers -- or even continua -- of *microfeatures*. We believe that this sets the stage for a fundamentally new theory of knowledge, and we sketch a theory of continuous information representation and processing. Next we consider *field computation*, a kind of continuous information processing that emphasizes spatially continuous *fields* of information. This is a reasonable approximation for macroscopic areas of cortex and provides a convenient mathematical framework for studying information processing at this level. 
We apply it also to a linear-systems model of dendritic information processing. We consider examples from the visual cortex, including Gabor and wavelet representations, and outline field-based theories of sensorimotor intentions and of model-based deduction. Presented at the 1st Appalachian Conference on Behavioral Neurodynamics: Processing in Biological Neural Networks, in conjunction with the Inaugural Ceremonies for the Center for Brain Research and Informational Sciences, Radford University, Radford VA, September 17-20, 1992. ----------------------------- FTP INSTRUCTIONS Either use the Getps script, or do the following: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get maclennan.fieldcompbrain.ps.Z ftp> quit unix> uncompress maclennan.fieldcompbrain.ps.Z unix> lpr -s maclennan.fieldcompbrain.ps (or however you print large postscript files) If you need hardcopy, then send your request to: library at cs.utk.edu Bruce MacLennan Department of Computer Science 107 Ayres Hall The University of Tennessee Knoxville, TN 37996-1301 (615)974-0994/5067 FAX: (615)974-4404 maclennan at cs.utk.edu 
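-----------------------------------------------------------------------------
An aside on the retrieval recipes quoted throughout these postings: the same anonymous-ftp steps (binary mode, cd to pub/neuroprose, get, then uncompress) can be scripted rather than typed by hand. The sketch below is not part of any of the original postings; it uses Python's standard ftplib, the host and directory are the ones given in the postings, and the helper name, default e-mail string, and example filename are illustrative only.

import os
import sys
from ftplib import FTP

HOST = "archive.cis.ohio-state.edu"      # neuroprose host named in the postings
DIRECTORY = "pub/neuroprose"

def fetch_neuroprose(filename, email="someone@somewhere.edu"):
    # Anonymous login; the convention above is to send your e-mail address
    # as the password.
    ftp = FTP(HOST)
    ftp.login("anonymous", email)
    ftp.cwd(DIRECTORY)
    with open(filename, "wb") as out:
        # Binary transfer, equivalent to "ftp> binary" followed by "ftp> get".
        ftp.retrbinary("RETR " + filename, out.write)
    ftp.quit()
    # Same post-processing step the postings give by hand.
    os.system("uncompress " + filename)

if __name__ == "__main__":
    # Example use: python fetch_neuroprose.py maclennan.fieldcompbrain.ps.Z
    fetch_neuroprose(sys.argv[1])
-----------------------------------------------------------------------------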
From rba at bellcore.com Fri Oct 16 15:02:57 1992 From: rba at bellcore.com (Bob Allen) Date: Fri, 16 Oct 92 15:02:57 -0400 Subject: No subject Message-ID: <9210161902.AA04158@wind.bellcore.com> NIPS92, December 1-3, 1992, Denver, Colorado STUDENT FINANCIAL SUPPORT Since there has been an overwhelming number of requests for financial support for travel to attend the NIPS92 conference in Denver, it is no longer possible to consider additional requests. Please do not send in a letter of request. If you have already sent your application, we will notify you of your status in the next week; the earliest requests will be filled, and the remainder will be on a waiting list, which will depend upon the financial success of the conference. Dr. Robert B. Allen, NIPS92 Treasurer Bellcore MRE 2A-367 445 South Street Morristown, NJ 07962-1910 From Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU Sat Oct 17 10:36:16 1992 From: Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU (Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU) Date: Sat, 17 Oct 92 10:36:16 -0400 Subject: Post-NIPS Workshop Message-ID: I had excellent success with the call for presenters for my NIPS workshop on "Reading the Entrails: Understanding What's Going on Inside a Neural Net". I now have all the speakers I can use, plus a couple of alternates whom I hope to fit in as well, so please do not volunteer if you haven't done so already. I think this is a very exciting group of speakers. I will be releasing the final names as soon as I get final confirmation from a couple of people. -- Scott =========================================================================== Scott E. Fahlman Internet: sef+ at cs.cmu.edu Senior Research Scientist Phone: 412 268-2575 School of Computer Science Fax: 412 681-5739 Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 From tdenoeux at hds.univ-compiegne.fr Mon Oct 19 04:44:41 1992 From: tdenoeux at hds.univ-compiegne.fr (tdenoeux@hds.univ-compiegne.fr) Date: Mon, 19 Oct 92 09:44:41 +0100 Subject: Paper available Message-ID: <9210190844.AA11604@kaa.hds.univ-compiegne.fr> The following paper has recently been accepted for publication in the Journal "Neural Networks": INITIALIZATION OF BACK-PROPAGATION NEURAL NETWORKS WITH PROTOTYPES (Neural Networks, in Press) by T. Denoeux and R. Lengelle University of Compiegne, France ABSTRACT This paper addresses the problem of initializing the weights in back-propagation networks with one hidden layer. The proposed method relies on the use of reference patterns, or prototypes, and on a transformation which maps each vector in the original feature space onto a unit-length vector in a space with one additional dimension. This scheme applies to pattern recognition tasks, as well as to the approximation of continuous functions. Issues related to the preprocessing of input patterns and to the generation of prototypes are discussed, and an algorithm for building appropriate prototypes in the continuous case is described. Also examined is the relationship between this approach and the theory of radial basis functions. Finally, simulation results are presented, showing that initializing back-propagation networks with prototypes generally results in (1) drastic reductions in training time, (2) improved robustness against local minima, and (3) better generalization. 
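-----------------------------------------------------------------------------
As an illustration of the two ingredients named in the abstract above (a mapping of each feature vector onto a unit-length vector in a space with one extra dimension, and hidden weights set from prototypes), here is one plausible reading of the idea, not necessarily the construction used by Denoeux and Lengelle: inputs are lifted onto a sphere of radius R and each hidden unit's weight vector points at the lifted image of one prototype. The function names, the choice of R, and the gain factor are illustrative assumptions only.

import numpy as np

def lift_to_sphere(X, R):
    # Map each row x of X (n x d) to (x, sqrt(R**2 - ||x||**2)) / R,
    # a unit-length vector in d+1 dimensions whenever R >= max ||x||.
    norms_sq = np.sum(X ** 2, axis=1, keepdims=True)
    extra = np.sqrt(np.maximum(R ** 2 - norms_sq, 0.0))
    return np.hstack([X, extra]) / R

def init_hidden_weights(prototypes, R, gain=5.0):
    # One hidden unit per prototype.  With unit-length lifted inputs z, the
    # net input gain * (w . z) is largest when z lies closest (in angle) to
    # the prototype's image, so each unit starts out as a detector for the
    # region of input space around its prototype.  The gain is an assumption.
    return gain * lift_to_sphere(prototypes, R)      # shape (n_prototypes, d+1)

if __name__ == "__main__":
    rng = np.random.RandomState(0)
    X = rng.randn(200, 2)                            # toy data
    R = float(np.max(np.sqrt(np.sum(X ** 2, axis=1))))
    prototypes = X[:5]                               # e.g. a few class exemplars
    W = init_hidden_weights(prototypes, R)
    Z = lift_to_sphere(X, R)
    print(np.argmax(np.dot(Z, W.T), axis=1)[:10])    # closest-prototype unit per input
-----------------------------------------------------------------------------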
A free copy is available at the following address to those who do not have easy access to the journal: +------------------------------------------------------------------------+ | tdenoeux at hds.univ-compiegne.fr Thierry DENOEUX | | Departement de Genie Informatique | | Centre de Recherches de Royallieu | | tel (+33) 44 23 44 96 Universite de Technologie de Compiegne | | fax (+33) 44 23 44 77 B.P. 649 | | 60206 COMPIEGNE CEDEX | | France | +------------------------------------------------------------------------+ From austin at minster.york.ac.uk Mon Oct 19 11:40:43 1992 From: austin at minster.york.ac.uk (austin@minster.york.ac.uk) Date: Mon, 19 Oct 92 11:40:43 Subject: No subject Message-ID: Weightless Neural Network Workshop '93 University of York York, England (in conjunction with Brunel University and Imperial College, London) 6-7 April 1993 CALL FOR CONTRIBUTIONS This two-day workshop provides a forum for the presentation and exchange of current work in the general field of weightless neural networks. Models include N-tuple systems, CMAC, Kanerva's sparse distributed memory, probabilistic logic nodes, g-RAM, p-RAM, etc, etc. Contributions on theory, realisations and applications are equally welcome. Accepted contributions will either be presented as a paper or as part of a structured poster session. Abstracts should be submitted by 1 November 1992 to the address below, and should be approximately 400 words in length and highlight the important points of the proposed contribution. All proposals will be reviewed by the programme committee and authors notified by 20 December 1992. Full papers are required by 31 January 1993 and will be considered for publication in book form. Details of format requirements will be supplied later. Abstracts and enquiries to:- N M Allinson Department of Electronics University of York York, YO1 5DD Phone:(+44)(0)904-432350 Fax:(+44)(0)904-432335 Email:wnnw at ohm.york.ac.uk. From ken at cns.caltech.edu Fri Oct 16 14:47:13 1992 From: ken at cns.caltech.edu (Ken Miller) Date: Fri, 16 Oct 92 11:47:13 PDT Subject: one more try ... Message-ID: <9210161847.AA14659@zenon.cns.caltech.edu> With respect, again, to the techreport on "The role of constraints in Hebbian learning" by myself and David MacKay: There seem to be various memory problems so that different printers choke at different points trying to print out the ps files. So, for those who are used to TeX and have a way to print out .dvi files: I have placed hebbian.dvi and the ps files of the various figures where they can be ftp'ed. Perhaps you can print these out using dvips or dvi2ps, etc.; but your dvi=>ps program might have problems with the psfig macros, which we use to incorporate the figures. If you want to give it a try: unix> ftp kant.cns.caltech.edu [OR, ftp 131.215.135.31] login: anonymous password: [your e-mail address] ftp> cd pub/ken ftp> binary ftp> get hebbian.dvi ftp> prompt ftp> mget FIG*.ps ftp> quit again, if you can't print the paper out by this or other means, please let me know and I will send a hardcopy. This should be the last message on this topic. 
Ken From R.Beale at computer-science.birmingham.ac.uk Tue Oct 20 17:37:04 1992 From: R.Beale at computer-science.birmingham.ac.uk (Russell Beale) Date: Tue, 20 Oct 92 17:37:04 BST Subject: No subject Message-ID: <1111.9210201637@fat-controller.cs.bham.ac.uk> CALL FOR PAPERS British Neural Network Society Symposium on Recent Advances in Neural Networks Wednesday 3rd February 1993 Lucas Institute University of Birmingham UK Contributions are invited for oral and poster presentations. Topics of interest include, but are not restricted to, the following areas: - Theory & Algorithms Time series, learning theory, fast algorithms. - Applications Finance, image processing, medical, control. - Implementations Software, hardware, optoelectronics. - Biological Networks Perception, motor control, representation. The proceedings will be published as a book after the symposium. Please send a one-page summary, postmarked by 30th November 1992, to: BNNS'93 c/o Russell Beale School of Computer Science University of Birmingham Edgbaston Birmingham B15 2TT United Kingdom Tel: +44 (0)21 414 4773 Fax: +44 (0)21 414 4281 From zl%venezia.ROCKEFELLER.EDU at ROCKVAX.ROCKEFELLER.EDU Thu Oct 22 11:53:40 1992 From: zl%venezia.ROCKEFELLER.EDU at ROCKVAX.ROCKEFELLER.EDU (Zhaoping Li) Date: Thu, 22 Oct 92 11:53:40 -0400 Subject: No subject Message-ID: <9210221553.AA10086@venezia> ROCKEFELLER UNIVERSITY anticipates the opening of one or two positions in Computational Neuroscience Laboratory. The positions are at the postdoctoral level, and are for one year, renewable to two, starting in September 1993. The focus of the research in the lab is on understanding the computational principles of the nervous system, especially the sensory pathways. It involves analytical and computational approaches with strong emphasis on connections with real neurobiology. Members of the lab include J. Atick, Z. Li, K. Obermayer, N. Redlich, and P. Penev. The lab also maintains strong interactions with other labs at Rockefeller University, including the Gilbert, Wiesel, and the biophysics labs. Interested candidates should submit a C.V. and arrange to have three letters of recommendation sent to Prof. Joseph J. Atick Head, computational neuroscience lab The Rockefeller University 1230 York Avenue New York, NY 10021 USA The Rockefeller University is an affirmative action/equal opportunity employer, and welcomes applications from women and minority candidates. From maass at figids01.tu-graz.ac.at Fri Oct 23 12:04:33 1992 From: maass at figids01.tu-graz.ac.at (maass@figids01.tu-graz.ac.at) Date: Fri, 23 Oct 92 17:04:33 +0100 Subject: No subject Message-ID: <9210231604.AA26367@figids03.tu-graz.ac.at> The following paper has been placed in the Neuroprose archive in file maass.bounds.ps.Z . Retrieval instructions follow the abstract. --Wolfgang Maass (maass at igi.tu-graz.ac.at) -------------------------------------------------------------------------------- BOUNDS FOR THE COMPUTATIONAL POWER AND LEARNING COMPLEXITY OF ANALOG NEURAL NETS Wolfgang Maass Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria ABSTRACT -------- It is shown that feedforward neural nets of constant depth with piecewise polynomial activation functions and arbitrary real weights can be simulated for boolean inputs and outputs by neural nets of a somewhat larger size and depth with heaviside gates and weights from {0,1}. 
This provides the first known upper bound for the computational power and VC-dimension of such neural nets. It is also shown that in the case of piecewise linear activation functions one can replace arbitrary real weights by rational numbers with polynomially many bits, without changing the boolean function that is computed by the neural net. In addition we improve the best known lower bound for the VC-dimension of a neural net with w weights and gates that use the heaviside function (or other common activation functions such as sigma) from Omega(w) to Omega(w log w). This implies the somewhat surprising fact that the Baum-Haussler upper bound for the VC-dimension of a neural net with heaviside gates is asymptotically optimal. Finally it is shown that neural nets with piecewise polynomial activation functions and a constant number of analog inputs are probably approximately correct learnable (in Valiant's model for PAC-learning, with hypotheses generated by a slightly larger neural net). --------------------------------------------------------------------------------- To retrieve the paper by anonymous ftp: unix> ftp archive.cis.ohio-state.edu # (128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get maass.bounds.ps.Z ftp> quit unix> uncompress maass.bounds.ps.Z unix> lpr maass.bounds.ps From SABBATINI%ccvax.unicamp.br at BITNET.CC.CMU.EDU Sun Oct 25 15:56:00 1992 From: SABBATINI%ccvax.unicamp.br at BITNET.CC.CMU.EDU (SABBATINI%ccvax.unicamp.br@BITNET.CC.CMU.EDU) Date: Sun, 25 Oct 1992 15:56 GMT-0200 Subject: Symposium on Simulation of Social Processes Message-ID: <01GQDDM4SCM09EDYAJ@ccvax.unicamp.br> From cybsys at bingsuns.cc.binghamton.edu Sat Oct 24 15:37:08 1992 From: cybsys at bingsuns.cc.binghamton.edu (Cybernetics and Systems Moderator) Date: Sat, 24 Oct 1992 15:37:08 EDT Subject: Simulating Societies '93 Message-ID: <436C2AF2A0001B3A@brfapesp.bitnet> From itot at strl.nhk.or.jp Mon Oct 26 10:38:11 1992 From: itot at strl.nhk.or.jp (Takayuki Ito) Date: Mon, 26 Oct 92 10:38:11 JST Subject: position offer from RIKEN Message-ID: <9210260138.AA08929@vsun2.strl.nhk.or.jp> I am Ito at NHK (Japan Broadcasting Corporation). Dr. Tanaka at the RIKEN Institute asked me to post this letter. For more details, please contact him by telephone or fax. ------------------------------------ Offer of a position for public subscription RIKEN Institute, Information Science Laboratory Researcher Field: Physiological, anatomical, and psychological studies of higher brain functions, and development of related methodology. Available from April 1, 1993 Condition: Ph.D. or scheduled by April 1, 1993. Not older than 34 on February 1, 1993. Any nationality. Inquiries to Dr. Keiji Tanaka, Chief of Information Science Laboratory, fax: +81-48-462-4696, tel: +81-48-462-1111 ext.6411 ------------------------------------ ---------------------------------------------- Takayuki Ito (itot at strl.nhk.or.jp) NHK Science and Technical Research Labs. 
1-10-11, Kinuta, Setagaya-ku, Tokyo 157 Japan Tel.+81-3-5494-2369, Fax.+81-3-5494-2371 ---------------------------------------------- From wilson at smith.rowland.org Wed Oct 28 10:38:54 1992 From: wilson at smith.rowland.org (Stewart Wilson) Date: Wed, 28 Oct 92 10:38:54 EST Subject: FROM ANIMALS TO ANIMATS -- registration and list of papers Message-ID: FROM ANIMALS TO ANIMATS Second International Conference on Simulation of Adaptive Behavior (SAB92) FINAL ANNOUNCEMENT with LIST OF PAPERS TO BE PRESENTED ================================================================================ 1. Conference Dates and Site The conference will take place Monday through Friday, December 7-11, 1992 at the Ilikai Hotel, Honolulu, Hawaii. The conference will be inaugurated by a reception on Sunday evening, December 6, and will be followed by a luau (Hawaiian Feast) on Friday, December 11. 2. Conference Organizers Jean-Arcady MEYER Groupe de Bioinformatique URA686.Ecole Normale Superieure 46 rue d'Ulm 75230 Paris Cedex 05 France e-mail: meyer at wotan.ens.fr meyer at frulm63.bitnet Herbert ROITBLAT Department of Psychology University of Hawaii at Manoa 2430 Campus Road Honolulu, HI 96822 USA email: roitblat at uhunix.bitnet, roitblat at uhunix.uhcc.hawaii.edu Stewart WILSON The Rowland Institute for Science 100 Cambridge Parkway Cambridge, MA 02142 USA e-mail: wilson at smith.rowland.org 3. Program Committee A. Berthoz, France, L. Booker, USA, R. Brooks, USA, P. Colgan, Canada, J. Delius, Germany, A. Dickinson, UK, J. Ferber, France, S. Goss, Belgium, P. Nachtigall, USA, L. Steels, Belgium, R. Sutton, USA, F. Toates, UK, P. Todd, USA, S. Tsuji, Japan, W. Uttal, USA, D. Waltz, USA 4. Local Arrangements Committee S. Gagnon, A. Guillot, H. Harley, D. Helweg, M. Hoffhines, G. Losey, C. Manos, P. Moore, E. Reese, P. Tarroux, P. Vincens, & S. Yamamoto 5. Official Language: English 6. Conference Objective The goal of the conference is to bring together researchers in ethology, ecology, cybernetics, artificial intelligence, robotics, and related fields so as to further our understanding of the behaviors and underlying mechanisms that allow animals and, potentially, robots to adapt and survive in uncertain environments. The conference will focus particularly on simulation models in order to help characterize and compare various organizational principles or architectures capable of inducing adaptive behavior in real or artificial animals. The conference is expected to promote: 1. Identification of the organizational principles, functional laws, and minimal properties that make it possible for a real or artificial system to persist in an uncertain environment. 2. Better understanding of how and under what conditions such systems can themselves discover these principles through conditioning, learning, induction, or self-organization. 3. Specification of the applicability of the theoretical knowledge thus acquired to the building of autonomous robots. 4. Improved theoretical and practical knowledge of adaptive systems in general, both natural and artificial. Contributions treating any of the following topics from the perspective of adaptive behavior have been invited. The deadline for submitting papers has passed, but demonstrations will still be accommodated if possible. 
*Individual and collective behavior *Autonomous robots *Neural correlates of behavior *Hierarchical and parallel organizations *Perception and motor control *Emergent structures and behaviors *Motivation and emotion *Problem solving and planning *Action selection and behavioral sequences *Goal-directed behavior *Neural networks and classifier systems *Ontogeny, learning, and evolution *Characterization of environments *Internal world models and cognitive processes *Applied adaptive behavior 7. Important Dates *JUL 15, 1992 Submissions must be received by the organizers *SEP 1, 1992 Deadline for early registration *OCT 1, 1992 Notification of acceptance or rejection NOV 7, 1992 Deadline for regular registration NOV 15, 1992 Camera ready revised versions due DEC 7-11, 1992 Conference dates 8. Conference Activities The conference program looks very exciting. In addition to the many excellent papers listed below, a series of cultural activities are also planned. The conference begins with a reception on Sunday evening, December 6. Papers will be presented each morning and late afternoon with an extended discussion period in between (during which the beach will be accessible). Thursday evening will be a moonlight cruise on the Navatek 1 along the shores of Waikiki to Diamond Head. Friday evening, after the close of the conference will be a luau (Hawaiian feast) at the Bishop Museum, the premier museum of culture and natural history in the Pacific. Museum admission is included in the luau price as is a planetarium show, dinner, refreshments, and local entertainment. 9. Registration All participants must register. Regular registration is $220 and late registration is $250. Students will be allowed to register for $50. Students should submit proof of their status along with their registration fee. The fee for accompanying persons is $75, which includes the reception and the cruise. A registration form is included. Return to: SAB92 Registration, Conference Center, University of Hawaii, 2530 Dole Street, Honolulu, HI 96822. 10. Meeting Site The conference activities will be held at the Ilikai Hotel. The Ilikai is situated at the gateway to Waikiki within walking distance of many fine restaurants, Ala Moana Shopping Center, and Ala Moana Park. The Hotel overlooks the Ala Wai Yacht Marina where Waikiki Beach begins. Room rates for the conference are $110 or $125 per night (single or double). The hotel is adjacent to the beach and also offers two swimming pools, a fitness center, and tennis courts. Reservations must be made directly with the hotel. Conference rates will be available for the weekend before and the weekend following the conference as well. A hotel registration form is included. Return it by November 7, 1992 to Ilikai Hotel, 1777 Ala Moana Blvd, Honolulu, HI 96815. (800) 367-8434. In Britain: 0800 282502. Arrangements have been made for a small number of student rooms in a nearby hotel at about $55 per night (single or double). Students are, of course, welcome to stay in the conference hotel. Reservations for student rooms can be made through the official travel agent. A small number of travel "scholarships" may be available to defray part or all of the expenses of attending the conference. Interested students should submit a letter of application describing their research interests, the year they expect to receive their degree, and a brief letter of recommendation from their major professor. The number and size of awards will depend on the amount of money available. 
Persons with disabilities may contact Herbert Roitblat for information on accessibility. Advance notice is advised if you have special needs and request an accommodation. The University of Hawaii is an Equal Opportunity/Affirmative Action Institution. 11. Travel Information Theo Stahl, Associated Travel, 947 Keeaumoku Street, Honolulu, HI 96814 (808) 949-1033, (800) 745-3444, (808) 949-1037 (fax) is the official travel agent for the conference. Participants are encouraged, but not required, to make their travel arrangements through Ms Stahl. United Airlines is offering a special conference rate for participants from US as well as European, Japanese, and Australian gateway cities served by United. Ms Stahl is very knowledgeable about the local travel market and can make arrangements to visit neighbor islands (including Hawaii with its active volcano) and for other activities. Please make your travel arrangements early because Hawaii is a popular destination in December and the conference is scheduled just before the start of the busiest season. Hertz has extended a conference rate for auto rentals. Reservations can be made through the official travel agent or directly through Hertz. Mention SAB92. CONFERENCE REGISTRATION FORM Ilikai Hotel, Honolulu, HI SAB92, December 7-11, 1992 ____________________________________________________________ Last Name First Name Middle ____________________________________________________________ Professional Affiliation ____________________________________________________________ Street Address and Internal Mail Code ____________________________________________________________ City State/Country Zip/Postal Code ____________________________________________________________ E-mail Telephone Fax SAB92 December 7-11, 1992 Registration Fees (includes reception, cruise, continental breakfasts) ___ Early (Before September 1, 1992) $180 ___ Regular (Before November 7, 1992) $220 ___ Late (After November 7, 1992) $250 ___ Student (with proof of status) $50 ___ Accompanying person (number of persons) $75 ___ Luau (number of tickets) $45 ___ Donation to support student scholarship fund $____ Enclosed is a check or money order (US $ only, payable to University of Hawaii) for $_______ Return to: SAB92 Registration, Conference Center, University of Hawaii, 2530 Dole Street, Honolulu, HI 96822. SAB92 December 7-11, 1992 SAB92 Hotel Registration Ilikai Hotel Name _____________________________________________________ Address _________________________________________________ City ____________________________________________________ State/Country, Zip ______________________________________ Telephone Number ________________________________________ Arrival Date ____________________________________________ Departure Date __________________________________________ No. of Persons __________________________________________ Preferred Room rate: _____ 1 or 2 persons $110+tax _____ 1 or 2 persons $125+tax _____ 1 Bed _____ 2 Beds _____ Handicapped Accessible All reservations must be guaranteed by check or credit card deposit for one night lodging. Amount of enclosed check: $_____ Charge to: ___Visa ___ Mastercard ___American Express ___Diner's club ___Discover Credit card Number: _______________________ Expiration Date ________ Signature ___________________________________ Request and deposit must be received by November 7, 1992. Check-in time is 3:00. Check-out time is 12:00. 
SAB92 December 7-11, 1992 Mail hotel registration directly to the Ilikai Hotel, 1777 Ala Moana Blvd, Honolulu, HI 96815. (800) 367-8434. In Britain: 0800 282502 ========================================================================== SECOND INTERNATIONAL CONFERENCE ON SIMULATION OF ADAPTIVE BEHAVIOR (SAB92) ========================================================================== Papers accepted for presentation at the conference and publication in the proceedings. -------------------------------------------------------------------------- Richard A. Altes "Neuronal Parameter Maps and Signal Processing" Michael A. Arbib and Hyun-Bong Lee "Neural Mechanisms Underlying Detour Behavior in Frog and Toad" Ronald C. Arkin and J. David Hobbs "Dimensions of Communication and Social Organization in Multi-Agent Robotic Systems" Leemon C. Baird, III and A. Harry Klopf "Extensions of the Associative Control Process (ACP) Network: Hierarchies and Provable Optimality" Andrea Beltratti and Sergio Margarita "Evolution of Trading Strategies Among Heterogeneous Artificial Economic Agents" Allen Brookes "The Adaptive Nature of 3D Perception" Federico Cecconi and Domenico Parisi "Neural Networks with Motivational Units" Sunil Cherian and Wade O. Troxell "A Neural Network Based Behavior Hierarchy for Locomotion Control" Dave Cliff, Philip Husbands, and Inman Harvey "Evolving Visually Guided Robots" Marco Colombetti and Marco Dorigo "Learning to Control an Autonomous Robot by Distributed Genetic Algorithms" H. Cruse, U. Mueller-Wilm, and J. Dear "Artificial Neural Nets for Controlling a 6-Legged Walking System" Lawrence Davis, Stewart W. Wilson, and David Orvosh "Temporary Memory for Examples Can Speed Learning in a Simple Adaptive System" Dwight Deugo and Franz Oppacher "An Evolutionary Approach to Cognition" Alexis Drogoul and Jacques Ferber "From Tom Thumb to the Dockers: Some Experiments with Foraging Robots" Dario Floreano "Emergence of Nest-Based Foraging Strategies in Ecosystems of Neural Networks" Liane Gabora "Should I Stay or Should I Go: Coordinating Biological Needs with Continuously-updated Assessments of the Environment" John C. Gallagher and Randall D. Beer "A Qualitative Dynamical Analysis of Evolved Locomotion Controllers" Simon Giszter "Behavior Networks and Force Fields for Simulating Spinal Reflex Behaviors of the Frog" Ralph Hartley "Propulsion and Guidance in a Simulation of the Worm C. Elegans" Inman Harvey, Philip Husbands, and Dave Cliff "Issues in Evolutionary Robotics" Tetsuya Higuchi, Tatsuya Niwa, Toshio Tanaka, Hitoshi Iba, Hugo de Garis, and Tatsumi Furuya "Evolving Hardware with Genetic Learning" Ian Horswill "A Simple, Cheap, and Robust Visual Navigation System" Hitoshi Iba, Hugo de Garis, and Tetsuya Higuchi "Evolutionary Learning of Predatory Behaviors Based on Structured Classifiers" A. Harry Klopf, James S. Morgan, and Scott E. Weaver "Modeling Nervous System Function with a Hierarchical Network of Control Systems That Learn" David Kortenkamp and Eric Chown "A Directional Spreading Activation Network for Mobile Robot Navigation" C. Ronald Kube and Hong Zhang "Collective Robotic Intelligence" Long-Ji Lin and Tom Mitchell "Memory Approaches to Reinforcement Learning in Non-Markovian Domains" Alexander Linden and Frank Weber "Implementing Inner Drive Through Competence Reflection" Michael L. Littman "A Categorization of Reinforcement Learning Environments" Luis R. Lopez and Robert E. 
Smith "Evolving Artificial Insect Brains: Neural Networks for Artificial Compound Eyes" Pattie Maes "Behavior-Based Artificial Intelligence" Maja J. Mataric "Designing Emergent Behaviors: From Local Interactions to Collective Intelligence" Emmanuel Mazer, Juan Manuel Ahuactzin, El-Ghazali Talbi, and Pierre Bessiere "The Ariadne's Clew Algorithm" Geoffrey F. Miller and Peter M. Todd "Evolutionary Interactions among Mate Choice, Speciation, and Runaway Sexual Selection" Ulrich Nehmzow, Tim Smithers, and Brendan McGonigle "Increasing Behavioural Repertoire in a Mobile Robot" Chisato Numaoka and Akikazu Takeuchi "Collectively Migrating Robots" Lynne E. Parker "Adaptive Action Selection for Cooperative Agent Teams" Jing Peng and Ronald J. Williams "Efficient Search Control in Dyna" Rolf Pfeifer and Paul Verschure "Designing Efficiently Navigating Non-Goal-Directed Robots" Tony J. Prescott and John E. W. Mayhew "Building Long-Range Cognitive Maps Using Local Landmarks" Craig W. Reynolds "An Evolved, Vision-Based Behavioral Model of Coordinated Group Motion" Feliz Ribeiro, Jean-Paul Barthes, and Eugenio Oliveira "Dynamic Selection of Action Sequences" Mark Ring "Two Methods for Hierarchy Learning in Reinforcement Environments" Herbert L. Roitblat, P. W. B. Moore, David A. Helweg and Paul E. Nachtigall "Representation and Processing of Acoustic Information in a Biomimetic Neural Network" Bruce E. Rosen and James M. Goodwin "Learning Autonomous Flight Control by Adaptive Coarse Coding" Nestor A. Schmajuk and H. T. Blair "The Dynamics of Spatial Navigation: An Adaptive Neural Network" Juergen Schmidhuber and Reiner Wahnsiedler "Planning Simple Trajectories Using Neural Subgoal Generators" Anton Schwartz "Perceptual Modes: Task-Directed Processing of Sensory Input" J. E. R. Staddon "A Note on Rate-Sensitive Habituation" Josh Tenenberg, Jonas Karlsson, and Steven Whitehead "Learning via Task Decomposition" Peter M. Todd and Stewart W. Wilson "Environment Structure and Adaptive Behavior From the Ground Up" Saburo Tsuji and Shigang Li "Memorizing and Representing Route Scenes" Toby Tyrrell "The Use of Hierarchies for Action Selection" William R. Uttal, Gary Bradshaw, Sriram Dayanand, Robb Lovell, Thomas Shepherd, Ramakrishna Kakarala, Kurt Skifsted, and Greg Tupper "An Integrated Computational Model of a Perceptual-Motor System" Paul F. M. J. Verschure and Rolf Pfeifer "Categorization, Representations, and The Dynamics of System-Environment Interaction: A Case Study in Autonomous Systems" Thomas Ulrich Vogel "Learning Biped Robot Obstacle Crossing" Gerhard Weiss "Action Selection and Learning in Multi-Agent Environments" Gregory M. Werner and Michael G. Dyer "Evolution of Herding Behavior in Artificial Animals" Holly Yanco and Lynn Andrea Stein "An Adaptive Communication Protocol for Cooperating Mobile Robots" R. Zapata, P. Lepinay, C. Novales, and P. 
Deplanques "Reactive Behaviors of Fast Mobile Robots in Unstructured Environments: Sensor-Based Control and Neural Networks" ============================================================================== From RAMPO at SALERNO.INFN.IT Thu Oct 29 08:30:00 1992 From: RAMPO at SALERNO.INFN.IT (RAMPO@SALERNO.INFN.IT) Date: 29 Oct 1992 13:30 +0000 (GMT) Subject: CALL FOR PAPERS: WIRN-93 Message-ID: <5903@SALERNO.INFN.IT> ***************** CALL FOR PAPERS ***************** The 6-th Italian Workshop on Neural Nets WIRN VIETRI-93 May 12-14, 1993 Vietri Sul Mare, Salerno ITALY FIRST ANNOUNCEMENT Organizing - Scientific Committee -------------------------------------------------- B. Apolloni (Univ. Milano) A. Bertoni ( Univ. Milano) E. R. Caianiello ( Univ. Salerno) D. D. Caviglia ( Univ. Genova) P. Campadelli ( CNR Milano) M. Ceccarelli ( Univ. Salerno - IRSIP CNR) P. Ciaccia ( Univ. Bologna) M. Frixione ( I.I.A.S.S.) G. M. Guazzo ( I.I.A.S.S.) M. Gori ( Univ. Firenze) F. Lauria ( Univ. Napoli) M. Marinaro ( Univ. Salerno) A. Negro ( Univ. Salerno) G. Orlandi ( Univ. Roma) E. Pasero ( Politecnico Torino ) A. Petrosino ( Univ. Salerno - IRSIP CNR) M. Protasi ( Univ. Roma II) S. Rampone ( Univ. Salerno - IRSIP CNR) R. Serra ( Gruppo Ferruzzi Ravenna) F. Sorbello ( Univ. Palermo) R. Stefanelli ( Politecnico Milano) L. Stringa ( IRST Trento) R. Tagliaferri ( Univ. Salerno) R. Vaccaro ( CNR Napoli) Topics ---------------------------------------------------- Mathematical Models Architectures and Algorithms Hardware and Software Design Hybrid Systems Pattern Recognition and Signal Processing Industrial and Commercial Applications Fuzzy Techniques for Neural Networks Schedule ----------------------- Papers Due: January 15, 1993 Replies to Authors: March 29, 1993 Revised Papers Due: May 14, 1993 Sponsors ------------------------------------------------------------------------------ International Institute for Advanced Scientific Studies (IIASS) Dept. of Fisica Teorica, University of Salerno Dept. of Informatica e Applicazioni, University of Salerno Dept. of Scienze dell'Informazione, University of Milano Istituto per la Ricerca dei Sistemi Informatici Paralleli (IRSIP - CNR) Societa' Italiana Reti Neuroniche (SIREN) The 6-th Italian Workshop on Neural Nets (WIRN VIETRI-93) will take place in Vietri Sul Mare, Salerno ITALY, May 12-14, 1993. The conference will bring together scientists who are studying several topics related to neural networks. The three-day conference, to be held in the I.I.A.S.S., will feature both introductory tutorials and original, refereed papers, to be published by World Scientific Publishing. Papers should be 6 pages, including title, abstract, figures, tables, and bibliography. The first page should give keywords, postal and electronic mailing addresses, telephone, and FAX numbers. Submit 3 copies to the address shown. For more information, contact the Secretary of I.I.A.S.S. I.I.A.S.S Via G.Pellegrino, 19 84019 Vietri Sul Mare (SA) ITALY Tel. +39 89 761167 Fax +39 89 761189 E-Mail robtag at udsab.dia.unisa.it ***************************************************************** From taylor at world.std.com Fri Oct 30 09:09:54 1992 From: taylor at world.std.com (Russell R Leighton) Date: Fri, 30 Oct 1992 09:09:54 -0500 Subject: Free Neural Network Simulation and Analysis SW (am6.0) Message-ID: <199210301409.AA24858@world.std.com> ************************************************************************* **** delete all prerelease versions!!!!!!!
(they are not up to date) **** ************************************************************************* The following describes a neural network simulation environment made available free from the MITRE Corporation. The software contains a neural network simulation code generator which generates high performance ANSI C code implementations for modular backpropagation neural networks. Also included is an interface to visualization tools. FREE NEURAL NETWORK SIMULATOR AVAILABLE Aspirin/MIGRAINES Version 6.0 The Mitre Corporation is making available free to the public a neural network simulation environment called Aspirin/MIGRAINES. The software consists of a code generator that builds neural network simulations by reading a network description (written in a language called "Aspirin") and generates an ANSI C simulation. An interface (called "MIGRAINES") is provided to export data from the neural network to visualization tools. The previous version (Version 5.0) has over 600 registered installation sites world wide. The system has been ported to a number of platforms: Host platforms: convex_c2 /* Convex C2 */ convex_c3 /* Convex C3 */ cray_xmp /* Cray XMP */ cray_ymp /* Cray YMP */ cray_c90 /* Cray C90 */ dga_88k /* Data General Aviion w/88XXX */ ds_r3k /* Dec Station w/r3000 */ ds_alpha /* Dec Station w/alpha */ hp_parisc /* HP w/parisc */ pc_iX86_sysvr4 /* IBM pc 386/486 Unix SysVR4 */ pc_iX86_sysvr3 /* IBM pc 386/486 Interactive Unix SysVR3 */ ibm_rs6k /* IBM w/rs6000 */ news_68k /* News w/68XXX */ news_r3k /* News w/r3000 */ next_68k /* NeXT w/68XXX */ sgi_r3k /* Silicon Graphics w/r3000 */ sgi_r4k /* Silicon Graphics w/r4000 */ sun_sparc /* Sun w/sparc */ sun_68k /* Sun w/68XXX */ Coprocessors: mc_i860 /* Mercury w/i860 */ meiko_i860 /* Meiko w/i860 Computing Surface */ Included with the software are "config" files for these platforms. Porting to other platforms may be done by choosing the "closest" platform currently supported and adapting the config files. New Features ------------ - ANSI C ( ANSI C compiler required! If you do not have an ANSI C compiler, a free (and very good) compiler called gcc is available by anonymous ftp from prep.ai.mit.edu (18.71.0.38). ) Gcc is what was used to develop am6 on Suns. - Autoregressive backprop has better stability constraints (see examples: ringing and sequence), very good for sequence recognition - File reader supports "caching" so you can use HUGE data files (larger than physical/virtual memory). - The "analyze" utility which aids the analysis of hidden unit behavior (see examples: sonar and characters) - More examples - More portable system configuration for easy installation on systems without a "config" file in distribution Aspirin 6.0 ------------ The software that we are releasing now is for creating, and evaluating, feed-forward networks such as those used with the backpropagation learning algorithm. The software is aimed both at the expert programmer/neural network researcher who may wish to tailor significant portions of the system to his/her precise needs, as well as at casual users who will wish to use the system with an absolute minimum of effort. Aspirin was originally conceived as ``a way of dealing with MIGRAINES.'' Our goal was to create an underlying system that would exist behind the graphics and provide the network modeling facilities. The system had to be flexible enough to allow research, that is, make it easy for a user to make frequent, possibly substantial, changes to network designs and learning algorithms. 
At the same time it had to be efficient enough to allow large ``real-world'' neural network systems to be developed. Aspirin uses a front-end parser and code generators to realize this goal. A high level declarative language has been developed to describe a network. This language was designed to make commonly used network constructs simple to describe, but to allow any network to be described. The Aspirin file defines the type of network, the size and topology of the network, and descriptions of the network's input and output. This file may also include information such as initial values of weights and names of user-defined functions. The Aspirin language is based around the concept of a "black box". A black box is a module that (optionally) receives input and (necessarily) produces output. Black boxes are autonomous units that are used to construct neural network systems. Black boxes may be connected arbitrarily to create large, possibly heterogeneous network systems. As a simple example, pre- or post-processing stages of a neural network can be considered black boxes that do not learn. The output of the Aspirin parser is sent to the appropriate code generator that implements the desired neural network paradigm. The goal of Aspirin is to provide a common extendible front-end language and parser for different network paradigms. The publicly available software will include a backpropagation code generator that supports several variations of the backpropagation learning algorithm. For backpropagation networks and their variations, Aspirin supports a wide variety of capabilities: 1. feed-forward layered networks with arbitrary connections 2. ``skip level'' connections 3. one and two-dimensional weight tessellations 4. a few node transfer functions (as well as user defined) 5. connections to layers/inputs at arbitrary delays, also "Waibel style" time-delay neural networks 6. autoregressive nodes. 7. line search and conjugate gradient optimization The file describing a network is processed by the Aspirin parser and files containing C functions to implement that network are generated. This code can then be linked with an application which uses these routines to control the network. Optionally, a complete simulation may be automatically generated which is integrated with the MIGRAINES interface and can read data in a variety of file formats. Currently supported file formats are: Ascii, Type1, Type2, Type3, Type4, Type5 (simple floating point file formats), and ProMatlab. Examples -------- A set of examples comes with the distribution: xor: from Rumelhart and McClelland, et al, "Parallel Distributed Processing, Vol 1: Foundations", MIT Press, 1986, pp. 330-334. encode: from Rumelhart and McClelland, et al, "Parallel Distributed Processing, Vol 1: Foundations", MIT Press, 1986, pp. 335-339. bayes: Approximating the optimal Bayes decision surface for a gauss-gauss problem. detect: Detecting a sine wave in noise. iris: The classic iris database. characters: Learning to recognize 4 characters independent of rotation. ring: Autoregressive network learns a decaying sinusoid impulse response. sequence: Autoregressive network learns to recognize a short sequence of orthonormal vectors. sonar: from Gorman, R. P., and Sejnowski, T. J. (1988). "Analysis of Hidden Units in a Layered Network Trained to Classify Sonar Targets" in Neural Networks, Vol. 1, pp. 75-89. spiral: from Kevin J.
Lang and Michael J. Witbrock, "Learning to Tell Two Spirals Apart", in Proceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann, 1988. ntalk: from Sejnowski, T.J., and Rosenberg, C.R. (1987). "Parallel networks that learn to pronounce English text" in Complex Systems, 1, 145-168. perf: a large network used only for performance testing. monk: The backprop part of the monk paper. The MONK's problems were the basis of a first international comparison of learning algorithms. The results of this comparison are summarized in "The MONK's Problems - A Performance Comparison of Different Learning Algorithms" by S.B. Thrun, J. Bala, E. Bloedorn, I. Bratko, B. Cestnik, J. Cheng, K. De Jong, S. Dzeroski, S.E. Fahlman, D. Fisher, R. Hamann, K. Kaufman, S. Keller, I. Kononenko, J. Kreuziger, R.S. Michalski, T. Mitchell, P. Pachowicz, Y. Reich, H. Vafaie, W. Van de Welde, W. Wenzel, J. Wnek, and J. Zhang, published as Technical Report CS-CMU-91-197, Carnegie Mellon University, Dec. 1991. wine: From the ``UCI Repository Of Machine Learning Databases and Domain Theories'' (ics.uci.edu: pub/machine-learning-databases). Performance of Aspirin simulations ---------------------------------- The backpropagation code generator produces simulations that run very efficiently. Aspirin simulations do best on vector machines when the networks are large, as exemplified by the Cray's performance. All simulations were done using the Unix "time" function and include all simulation overhead. The connections per second rating was calculated by multiplying the number of iterations by the total number of connections in the network and dividing by the "user" time provided by the Unix time function (a rough illustrative sketch of this calculation appears below, after the Nettalk figures). Two tests were performed. In the first, the network was simply run "forward" 100,000 times and timed. In the second, the network was timed in learning mode and run until convergence. Under both tests the "user" time included the time to read in the data and initialize the network. Sonar: This network is a two layer fully connected network with 60 inputs: 2-34-60. Millions of Connections per Second Forward: SparcStation1: 1 IBM RS/6000 320: 2.8 HP9000/720: 4.0 Meiko i860 (40MHz) : 4.4 Mercury i860 (40MHz) : 5.6 Cray YMP: 21.9 Cray C90: 33.2 Forward/Backward: SparcStation1: 0.3 IBM RS/6000 320: 0.8 Meiko i860 (40MHz) : 0.9 HP9000/720: 1.1 Mercury i860 (40MHz) : 1.3 Cray YMP: 7.6 Cray C90: 13.5 Gorman, R. P., and Sejnowski, T. J. (1988). "Analysis of Hidden Units in a Layered Network Trained to Classify Sonar Targets" in Neural Networks, Vol. 1, pp. 75-89. Nettalk: This network is a two layer fully connected network with [29 x 7] inputs: 26-[15 x 8]-[29 x 7]. Millions of Connections per Second Forward: SparcStation1: 1 IBM RS/6000 320: 3.5 HP9000/720: 4.5 Mercury i860 (40MHz) : 12.4 Meiko i860 (40MHz) : 12.6 Cray YMP: 113.5 Cray C90: 220.3 Forward/Backward: SparcStation1: 0.4 IBM RS/6000 320: 1.3 HP9000/720: 1.7 Meiko i860 (40MHz) : 2.5 Mercury i860 (40MHz) : 3.7 Cray YMP: 40 Cray C90: 65.6 Sejnowski, T.J., and Rosenberg, C.R. (1987). "Parallel networks that learn to pronounce English text" in Complex Systems, 1, 145-168. Perf: This network was only run on a few systems. It is very large with very long vectors. The performance on this network is in some sense a peak performance for a machine.
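To make the connections-per-second arithmetic concrete, here is a rough, self-contained ANSI C sketch written for this summary. It is NOT code from the Aspirin distribution; the file name, weight values, iteration count, and timing method are illustrative assumptions only. It times 100,000 forward passes through a fully connected 60-34-2 network of the same shape as the sonar benchmark and converts the measured CPU time into connections per second. Note that the figures quoted in this announcement were taken with the Unix "time" command and include data loading and initialization, whereas this sketch uses the ANSI clock() routine and counts only the 60*34 + 34*2 = 2108 layer-to-layer connections (bias connections, if counted, would raise the total slightly).

/* cps_sketch.c -- illustrative only, not part of Aspirin/MIGRAINES.
 * Times repeated forward passes through a 60-34-2 network and reports
 * an approximate connections-per-second figure, following the formula
 * described above: iterations * connections / elapsed CPU time.
 */
#include <stdio.h>
#include <math.h>
#include <time.h>

#define N_IN  60
#define N_HID 34
#define N_OUT 2
#define ITERATIONS 100000L

static double w1[N_HID][N_IN];   /* input-to-hidden weights  */
static double w2[N_OUT][N_HID];  /* hidden-to-output weights */
static double x[N_IN], h[N_HID], y[N_OUT];

static double sigmoid(double a) { return 1.0 / (1.0 + exp(-a)); }

/* One forward pass through both weight layers. */
static void forward(void)
{
    int i, j;
    for (j = 0; j < N_HID; j++) {
        double sum = 0.0;
        for (i = 0; i < N_IN; i++)
            sum += w1[j][i] * x[i];
        h[j] = sigmoid(sum);
    }
    for (j = 0; j < N_OUT; j++) {
        double sum = 0.0;
        for (i = 0; i < N_HID; i++)
            sum += w2[j][i] * h[i];
        y[j] = sigmoid(sum);
    }
}

int main(void)
{
    long iter;
    long connections = (long)N_IN * N_HID + (long)N_HID * N_OUT;  /* 2108 */
    clock_t start, stop;
    double seconds, cps;
    int i, j;

    /* Arbitrary small weights and a fixed input pattern. */
    for (j = 0; j < N_HID; j++)
        for (i = 0; i < N_IN; i++)
            w1[j][i] = 0.01 * ((i + j) % 7 - 3);
    for (j = 0; j < N_OUT; j++)
        for (i = 0; i < N_HID; i++)
            w2[j][i] = 0.01 * ((i - j) % 5);
    for (i = 0; i < N_IN; i++)
        x[i] = (double)i / N_IN;

    start = clock();
    for (iter = 0; iter < ITERATIONS; iter++)
        forward();
    stop = clock();

    seconds = (double)(stop - start) / CLOCKS_PER_SEC;
    cps = (double)ITERATIONS * connections / seconds;
    printf("y[0] = %f (printed so the loop is not optimized away)\n", y[0]);
    printf("%ld connections, %ld iterations, %.2f seconds of CPU time\n",
           connections, ITERATIONS, seconds);
    printf("approx. %.2f million connections per second, forward only\n",
           cps / 1.0e6);
    return 0;
}

For example, if the timed loop took 100 seconds of CPU time, the sketch would report 2108 * 100000 / 100 / 1e6 = 2.1 million connections per second, which is the same arithmetic used for the tables above. The official ratings should of course be taken from the tables, not from this sketch.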
The Perf network is a two layer fully connected network with 2000 inputs: 100-500-2000. Millions of Connections per Second Forward: Cray YMP 103.00 Cray C90 220 Forward/Backward: Cray YMP 25.46 Cray C90 59.3 MIGRAINES ------------ The MIGRAINES interface is a terminal based interface that allows you to open Unix pipes to data in the neural network. This replaces the NeWS1.1 graphical interface in version 4.0 of the Aspirin/MIGRAINES software. The new interface is not as simple to use as the version 4.0 interface but is much more portable and flexible. The MIGRAINES interface allows users to output neural network weight and node vectors to disk or to other Unix processes. Users can display the data using either public or commercial graphics/analysis tools. Example filters are included that convert data exported through MIGRAINES to formats readable by: - Gnuplot 3 - Matlab - Mathematica - Xgobi Most of the examples (see above) use the MIGRAINES interface to dump data to disk and display it using a public software package called Gnuplot3. Gnuplot3 can be obtained via anonymous ftp from: >>>> In general, Gnuplot 3 is available as the file gnuplot3.?.tar.Z >>>> Please obtain gnuplot from the site nearest you. Many of the major ftp >>>> archives world-wide have already picked up the latest version, so if >>>> you found the old version elsewhere, you might check there. >>>> >>>> NORTH AMERICA: >>>> >>>> Anonymous ftp to dartmouth.edu (129.170.16.4) >>>> Fetch >>>> pub/gnuplot/gnuplot3.?.tar.Z >>>> in binary mode. >>>>>>>> A special hack for NeXTStep may be found on 'sonata.cc.purdue.edu' >>>>>>>> in the directory /pub/next/submissions. The gnuplot3.0 distribution >>>>>>>> is also there (in that directory). >>>>>>>> >>>>>>>> There is a problem to be aware of--you will need to recompile. >>>>>>>> gnuplot has a minor bug, so you will need to compile the command.c >>>>>>>> file separately with the HELPFILE defined as the entire path name >>>>>>>> (including the help file name.) If you don't, the Makefile will over >>>>>>>> ride the def and help won't work (in fact it will bomb the program.) NetTools ----------- We have included a simple set of analysis tools by Simon Dennis and Steven Phillips. They are used in some of the examples to illustrate the use of the MIGRAINES interface with analysis tools. The package contains three tools for network analysis: gea - Group Error Analysis pca - Principal Components Analysis cda - Canonical Discriminants Analysis Analyze ------- "analyze" is a program inspired by Dennis and Phillips' NetTools. The "analyze" program does PCA, CDA, projections, and histograms. It can read the same data file formats as are supported by "bpmake" simulations and output data in a variety of formats. Associated with this utility are shell scripts that implement data reduction and feature extraction. "analyze" can be used to understand how the hidden layers separate the data in order to optimize the network architecture. How to get Aspirin/MIGRAINES ----------------------- The software is available from two FTP sites, CMU's simulator collection and UCLA's cognitive science machines. The compressed tar file is a little less than 2 megabytes. Most of this space is taken up by the documentation and examples. The software is currently only available via anonymous FTP. > To get the software from CMU's simulator collection: 1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with password your username. 3.
Change remote directory to "/afs/cs/project/connect/code". Any subdirectories of this one should also be accessible. Parent directories should not be. ****You must do this in a single operation****: cd /afs/cs/project/connect/code 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "connectionists-request at cs.cmu.edu". 5. Set binary mode by typing the command "binary" ** THIS IS IMPORTANT ** 6. Get the file "am6.tar.Z" > To get the software from UCLA's cognitive science machines: 1. Create an FTP connection to "ftp.cognet.ucla.edu" (128.97.50.19) (typically with the command "ftp ftp.cognet.ucla.edu") 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "alexis", by typing the command "cd alexis" 4. Set binary mode by typing the command "binary" ** THIS IS IMPORTANT ** 5. Get the file by typing the command "get am6.tar.Z" Other sites ----------- If these sites do not work well for you, then try the archie internet mail server. Send email: To: archie at cs.mcgill.ca Subject: prog am6.tar.Z Archie will reply with a list of internet ftp sites that you can get the software from. How to unpack the software -------------------------- After ftp'ing the file, make the directory in which you wish to install the software. Go to that directory and type: zcat am6.tar.Z | tar xvf - -or- uncompress am6.tar.Z ; tar xvf am6.tar How to print the manual ----------------------- The user documentation is located in ./doc in a few compressed PostScript files. To print each file on a PostScript printer type: uncompress *.Z lpr -s *.ps Why? ---- I have been asked why MITRE is giving away this software. MITRE is a non-profit organization funded by the U.S. federal government. MITRE does research and development into various technical areas. Our research into neural network algorithms and applications has resulted in this software. Since MITRE is a publicly funded organization, it seems appropriate that the product of the neural network research be turned back into the technical community at large. Thanks ------ Thanks to the beta sites for helping me get the bugs out and make this portable. Thanks to the folks at CMU and UCLA for the ftp sites. Copyright and license agreement ------------------------------- Since the Aspirin/MIGRAINES system is licensed free of charge, the MITRE Corporation provides absolutely no warranty. Should the Aspirin/MIGRAINES system prove defective, you must assume the cost of all necessary servicing, repair or correction. In no way will the MITRE Corporation be liable to you for damages, including any lost profits, lost monies, or other special, incidental or consequential damages arising out of the use or inability to use the Aspirin/MIGRAINES system. This software is the copyright of The MITRE Corporation. It may be freely used and modified for research and development purposes. We require a brief acknowledgement in any research paper or other publication where this software has made a significant contribution. If you wish to use it for commercial gain you must contact The MITRE Corporation for conditions of use. The MITRE Corporation provides absolutely NO WARRANTY for this software. October, 1992 Russell Leighton * * MITRE Signal Processing Center *** *** *** *** 7525 Colshire Dr. ****** *** *** ****** McLean, Va.
22102, USA ***************************************** ***** *** *** ****** INTERNET: taylor at world.std.com, ** *** *** *** leighton at mitre.org * * From nfb507 at hp1.uni-rostock.de Fri Oct 30 18:38:21 1992 From: nfb507 at hp1.uni-rostock.de (neural network group) Date: Fri, 30 Oct 92 18:38:21 MEZ Subject: DB-investigation Message-ID: Dear Connectionist! In the appendix you will find the index of a db-investigation we made on the subject "neural hard- and software" published in Japanese literature. On request we will send the whole result (about 60 pages) to you. Please send (if available) a similar list to the adress: nfb507 at hp1.uni-rostock.de Thank you. The Neural Network Group Rostock Appendix: 1 30.11.1988 NEC and NEC MARKET DEVELOPMENT commercialize neuro computer 2 06.12.1988 Electronic Technology Lab develops image processing system 3 10.01.1989 HITACHI lab develops neural network-based computer model 4 18.01.1989 SANYO ELECTRIC develops two bio device samples 5 23.01.1989 FUJITSU organizes universities to set up AI research forum 6 01.02.1989 NTT Basic Lab develops method for developing neuro circuit 7 01.02.1989 NEC INFORMATION TECHNOLOGY to focus on neuro computer, imag 8 16.02.1989 FUJITSU develops neuro computer chip 9 22.02.1989 MATSUSHITA GIKEN develops pseudo neuron prototype that 10 01.03.1989 TOSHIBA develops diabetes diagnosis system that uses neural 11 03.03.1989 NIKKO SECURITIES and FUJITSU to develop neuro computer syst 12 04.04.1989 MITSUBISHI ELECTRIC develops technique for using single neu 13 05.04.1989 TOSHIBA develops neural network development system f 14 24.05.1989 NEC and NEC INFORMATION TECHNOLOGY develop software for 15 01.06.1989 MATSUSHITA ELECTRIC develops controlling technique that int 16 02.06.1989 Kyushu Institute of Technology group succeeds in 17 09.06.1989 FUJITSU develops PC-based neuro computer system 18 08.09.1989 HITACHI develops neural network LSI 19 13.09.1989 MITI, universities, and private companies to start 20 06.10.1989 MITSUBISHI ELECTRIC Central Lab develops device for meas 21 19.10.1989 FUJITSU to market neuro processor LSI and LSI board compute 22 30.10.1989 MATSUSHITA GIKEN and MITSUBISHI CHEMICAL INDUSTRIES e 23 01.11.1989 Toyohashi Science Technology University group develop 24 16.11.1989 FUJITSU introduces neuro application software for monitorin 25 28.11.1989 SONY to develop super chip which will integrate CPU and mem 26 12.12.1989 SUMITOMO METAL INDUSTRIES to enter neuro computer market by 27 04.01.1990 DAI-ICHI KANGYO BANK and FUJITSU launch project for 28 04.01.1990 DAI-ICHI KANGYO BANK and FUJITSU launch project for 29 09.01.1990 NEC develops face reference system that uses neuron devices 30 09.01.1990 FUJITSU LAB develops neuro computer prototype that achie 31 19.01.1990 CSK and TOSHIBA ENGINEERING develop stock investment dec 32 07.02.1990 MATSUSHITA ELECTRONICS develops analog neuro processor 33 24.02.1990 NIPPON STEEL and FUJITSU develop neuro computer- based 34 09.04.1990 USC asking three Japanese computer makers to partici 35 12.04.1990 NIPPON STEEL to widely use neuro computer systems 36 25.04.1990 RICOH develops neuro LSI 37 26.04.1990 FUJITSU starts projects for developing neuro technology 38 01.06.1990 JEIDA predicts that world's electronics industry will expa 39 09.06.1990 MITSUBISHI ELECTRIC LSI Lab develops neuron chip 40 02.07.1990 ADAPTIVE SOLUTIONS to expand cooperation with Japanese 41 23.07.1990 MITSUBISHI ELECTRIC develops optical neuro chip capable of 42 21.08.1990 MITSUBISHI ELECTRIC Central Lab 
develops optical neuro chip 43 09.10.1990 MITSUBISHI ELECTRIC Central Lab develops dynamic optical 44 15.10.1990 ATR TRANSLATION TELEPHONE LAB and Cargenie- Mellon Univer 45 06.11.1990 MATSUSHITA ELECTRIC develops neural network pattern recogni 46 14.11.1990 FUJITSU develops robot control software based on cerebellum 47 08.12.1990 MATSUSHITA ELECTRIC develops neuro fuzzy controlling techni 48 13.12.1990 HITACHI to set standards for fuzzy, neuro, and AI controlli 49 20.12.1990 FUJITSU and FANUC agree to jointly develop next- generation 50 26.12.1990 NTT lab develops experiment system for observing li 51 28.12.1990 WACOM develops neuro computer for connecting neurons us 52 10.01.1991 MITI to organize committee for studying feasibility of six 53 18.01.1991 MATSUSHITA ELECTRIC develops optical neuro device 54 11.02.1991 Research for optical neuro computers expanding 55 16.02.1991 MITSUBISHI ELECTRIC LSI Lab develops high-speed neural netw 56 08.03.1991 Chemical Technology Lab develops optical switching e 57 12.03.1991 MITSUBISHI ELECTRIC develops neural network chip with learn 58 24.04.1991 MITSUBISHI ELECTRIC confirms neural network can learn even 59 20.06.1991 Chiba University group and KYUSHU MATSUSHITA ELECTRIC devel 60 25.06.1991 NTT lab develops method for determining optimum number of 61 06.07.1991 Kyushu Institute of Technology group and Fuzzy System Lab d 62 31.07.1991 NEURON DATA JAPAN to put on sale Japanese version of gr 63 21.08.1991 MITSUBISHI ELECTRIC Central Lab develops optical arithmetic 64 23.08.1991 SOLITON SYSTEMS moving forward with LON business 65 16.09.1991 Kagoshima University group develops neuro-computer that 66 19.09.1991 MITSUBISHI ELECTRIC Central Lab develops optical neuro chip 67 06.12.1991 Tohoku University group develops neuron MOS transistor that 68 14.12.1991 Tohoku University group develops superconductive neuro comp 69 20.12.1991 TOSHIBA develops digital neuro chip 70 03.02.1992 FUJITSU to expand neuro computer business 71 19.02.1992 TOSHIBA Research Lab develops high-speed analog neuro compu 72 21.02.1992 MITSUBISHI ELECTRIC LSI Lab develops analog neuro chip whic 73 21.02.1991 NTT develops neuro chip 74 25.03.1992 MITSUBISHI ELECTRIC Central Lab develops neuro computer mod 75 03.04.1992 TOSHIBA develops new neural network system which can read 76 29.05.1992 MATSUSHITA ELECTRIC Central Research Lab develops neuro 77 19.06.1992 RICOH develops software-free neuro computer system 78 10.07.1992 MATSUSHITA ELECTRIC Lab develops self-multiplying neural ne 79 11.07.1992 TOSHIBA to increase distributed control network processor p 80 22.07.1992 MITSUBISHI ELECTRIC develops prototype optical neuro chip 81 09.09.1992 MATSUSHITA ELECTRIC Central Research Lab develops optical n 82 10.09.1992 HITACHI develops neuro-computing support software 83 06.10.1992 FUJITSU and KOMATSU develop world's first neural network co 84 09.10.1992 NRI develops damage estimate software which incorporates n From tesauro at watson.ibm.com Fri Oct 30 12:38:28 1992 From: tesauro at watson.ibm.com (Gerald Tesauro (8-863-7682)) Date: Fri, 30 Oct 92 12:38:28 EST Subject: Hotel reservation deadline for NIPS workshops Message-ID: The NIPS 92 post-conference workshops will take place Dec. 3-5 in Vail, Colorado, at the Radisson Resort Vail. The Radisson is offering attendees a special discounted room rate of $78.00 per night, and is holding a block of rooms for us until WEDNESDAY, NOVEMBER 4. Attendees are strongly encouraged to make their hotel reservations by this date. Reservations after Nov. 
4 will be on a space-available basis only. To make reservations, call the Radisson at 303-476-4444 and mention our "NIPS" group code. Gerry Tesauro NIPS 92 Workshops Chair From mclennan at cs.utk.edu Fri Oct 30 17:34:36 1992 From: mclennan at cs.utk.edu (mclennan@cs.utk.edu) Date: Fri, 30 Oct 92 17:34:36 -0500 Subject: paper in neuroprose Message-ID: <9210302234.AA01996@maclennan.cs.utk.edu> **DO NOT FORWARD TO OTHER GROUPS** The following technical report has been placed in the Neuroprose archives at Ohio State (filename: maclennan.dendnet.ps.Z). Ftp instructions follow the abstract. N.B. The uncompressed file is quite long (1.2 Mbytes), so you may have to use the -s option on lpr to print it. ----------------------------------------------------- Information Processing in the Dendritic Net Bruce MacLennan Computer Science Department University of Tennessee Knoxville, TN 37996 maclennan at cs.utk.edu Technical Report CS-92-180 ABSTRACT: The goal of this paper is a model of the dendritic net that: (1) is mathematically tractable, (2) is reasonably true to the biology, and (3) illuminates information processing in the neuropil. First I discuss some general principles of mathematical modeling in a biological context that are relevant to the use of linearity and orthogonality in our models. Next I discuss the hypothesis that the dendritic net can be viewed as a linear field computer. Then I discuss the approximations involved in analyzing it as a dynamic, lumped-parameter, linear system. Within this basically linear framework I then present: (1) the self-organization of matched filters and of associative memories; (2) the dendritic computation of Gabor and other nonorthogonal representations; and (3) the possible effects of reverse current flow in neurons. Based on a presentation at the 2nd Annual Behavioral and Computational Neuroscience Workshop, Georgetown University, Washington DC, May 18--20, 1992. ----------------------------------------------------- FTP INSTRUCTIONS Either use the Getps script, or do the following: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get maclennan.dendnet.ps.Z ftp> quit unix> uncompress maclennan.dendnet.ps.Z unix> lpr -s maclennan.dendnet.ps (or however you print LONG postscript) If you need hardcopy, then send your request to: library at cs.utk.edu Bruce MacLennan Department of Computer Science 107 Ayres Hall The University of Tennessee Knoxville, TN 37996-1301 (615)974-0994/5067 FAX: (615)974-4404 maclennan at cs.utk.edu From barto at cs.umass.edu Fri Oct 30 18:53:42 1992 From: barto at cs.umass.edu (Andy Barto) Date: Fri, 30 October 1992 18:53:42 -0500 Subject: faculty positions Message-ID: UNIVERSITY OF MASSACHUSETTS AMHERST Faculty and Research Scientist Positions The Department of Computer Science invites applications for one to three tenure-track faculty positions at the assistant and associate levels and several research-track faculty and postdoctoral positions at all levels, in all areas of computer science. Applicants should have a Ph.D. in computer science or a related area and should show evidence of exceptional research promise. Senior level candidates should have a record of distinguished research. Salary is commensurate with education and experience. Our Department has grown substantially over the past five years and currently has 30 tenure-track faculty and 8 research faculty, approximately 10 postdoctoral research scientists, and 160 graduate students.
Continued growth is expected over the next five years. We have ongoing research projects in robotics, vision, natural language processing, expert systems, distributed problem solving, machine learning, artificial neural networks, person-machine interfaces, distributed processing, database systems, information retrieval, operating systems, object-oriented systems, persistent object management, real-time systems, real-time software development and analysis, programming languages, computer networks, theory of computation, office automation, parallel computation, computer architecture, and medical informatics (with the UMass Medical School). Send vita, along with the names of four references to Chair of Faculty Recruiting, Department of Computer Science, University of Massachusetts, Lederle Graduate Research Center, Amherst, MA 01003 by February 1, 1993 (or Email inquiries can be sent to facrec at cs.umass.edu). An Affirmative Action/Equal Opportunity Employer From densley at eng.auburn.edu Thu Oct 1 16:17:04 1992 From: densley at eng.auburn.edu (Dillard D. Ensley) Date: Thu, 1 Oct 92 15:17:04 CDT Subject: Power Survey Results Message-ID: <9210012017.AA03134@eng.auburn.edu> Dear Connectionists, Thank you for responding to my request for sources on applying artificial neural networks to problems in the electric power industry. Following is a list of 55 sources. There are another 57 papers in the "Proceedings of the First International Forum on Applications of Neural Networks to Power Systems," Seattle, Washington, July 23-26, 1991, published by the Institute of Electrical and Electronics Engineers (IEEE). Also, the Electric Power Research Institute (EPRI) and the International Neural Network Society (INNS) held a workshop entitled "Neural Network Computing for the Electric Power Industry" at Stanford University in Stanford, California on August 17-19, 1992. Several of you mentioned that the IEEE Power Engineering Society has a task force to compile a similar bibliography. Reports are that there are over 170 sources in that project. Though some companies claim to be working on commercial applications (and I was able to verify one company's claim), they all asked to remain unpublished until such products are marketed. So be watching for these products to hit the market. 1) M. Aggoune, M.A. El-Sharkawi, D.C. Park, R.J. Marks II, "Preliminary Results on Using Artificial Neural Networks for Security Assessment," IEEE Transactions on Power Systems, Vol. 6, No. 2, pp. 890-896, May 1991. 2) Israel E. Alguindigue, Anna Loskiewicz-Buczak, Robert E. Uhrig, "Neural Networks for the Monitoring of Rotating Machinery," Proceedings of the Eighth Power Plant Dynamics, Control and Testing Symposium (in press), May 1992. 3) Israel E. Alguindigue, Anna Loskiewicz-Buczak, Robert E. Uhrig, "Clustering and Classification Techniques for the Analysis of Vibration Signatures," Proceedings of the SPIE Technical Symposium on Intelligent Information Systems Application of Artificial Neural Networks, III, April 1992. 4) Hamid Bacha, Walter Meyer, "A Neural Network Architecture for Load Forecasting," Proceedings of the 1992 International Joint Conference on Neural Networks, Vol. 2, pp. 442-447, June 1992. 5) Eric B. Bartlett, Robert E. Uhrig, "Nuclear Power Plant Status Diagnostics Using an Artificial Neural Network," Nuclear Technology, Vol. 97, pp. 272-281, March 1992. 
6) Franoise Beaufays, Youssef Abdel-Magid, Bernard Widrow, "Application of Neural Networks to Load-Frequency Control in Power Systems," submitted to Neural Networks, May 1992. 7) Chao-Rong Chen, Yuan-Yih-Hsu, "Synchronous Machine Steady- State Stability Analysis Using an Artificial Neural Network," IEEE Transactions on Energy Conversion, Vol. 6, No. 1, pp. 12-20, March 1991. 8) Mo-yuen Chow, Sui Oi Yee, "Methodology for On-Line Incipient Fault Detection in Single-Phase Squirrel-Cage Induction Motors Using Artificial Neural Networks," IEEE Transactions on Energy Conversion, Vol. 6, No. 3, pp. 536- 545, September 1991. 9) Badrul H. Chowdhury, Bogdan M. Wilamowski, "Real-Time Power System Analysis Using Neural Computing," Proceedings of the 1992 Workshop on Neural Networks, February 1992. 10) Sonja Ebron, David L. Lubkeman, Mark White, "A Neural Network Approach to the Detection of Incipient Faults on Power Distribution Feeders," IEEE Transactions on Power Delivery, Vol. 5, No. 2, pp. 905-912, April 1990. 11) Tom Elliott, "Neural Networks--Next Step in Applying Artificial Intelligence," Power, pp. 45-48, March 1990. 12) D.D. Ensley, "Neural Networks Applied to the Protection of Large Synchronous Generators," M.S. Thesis, Department of Electrical Engineering, Auburn University, Alabama, to be published December 1992. 13) Y.J. Feria, J.D. McPherson, D.J. Rolling, "Cellular Neural Networks for Eddy Current Problems," IEEE Transactions on Power Delivery, Vol. 6, No. 1, pp. 187-193, January 1991. 14) Zhichao Guo, Robert E. Uhrig, "Use of Artificial Neural Networks to Analyze Nuclear Power Plant Performance" (in press), Nuclear Technology, July 1992 (expected). 15) Zhichao Guo, Robert E. Uhrig, "Sensitivity Analysis and Applications to Nuclear Power Plant," Proceedings of the 1992 International Joint Conference on Neural Networks, Vol. 2, pp. 453-458, June 1992. 16) Zhichao Guo, Robert E. Uhrig, "Using Modular Neural Networks to Monitor Accident Conditions in Nuclear Power Plants," Proceedings of the SPIE Technical Symposium on Intelligent Information Systems Application of Artificial Neural Networks, III, April 1992. 17) R.K. Hartana, G.G. Richards, "Harmonic Source Monitoring and Identification Using Neural Networks," IEEE Transactions on Power Systems, Vol. 5, No. 4, pp. 1098- 1104, November 1990. 18) Kun-Long Ho, Yuan-Yih Hsu, Chien-Chuen Yang, "Short Term Load Forecasting Using a Multilayer Neural Network with an Adaptive Learning Algorithm," IEEE Transactions on Power Systems, Vol. 7, No. 1, pp. 141-149, February 1992. 19) Yuan-Yih Hsu, Chao-Rong Chen, "Tuning of Power System Stabilizers Using an Artificial Neural Network," IEEE Transactions on Energy Conversion, Vol. 6, No. 4, pp. 612- 618, December 1991. 20) Yuan-Yih Hsu, Chien-Chuen Yang, "Design of Artificial Neural Networks for Short-Term Load Forecasting," IEE Proceedings. Part C, Generation, Transmission and Distribution, Vol. 138, No. 5, pp. 407-418, September 1991. 21) Andreas Ikonomopoulos, Lefteri H. Tsoukalas, Robert E. Uhrig, "Use of Neural Networks to Monitor Power Plant Components," Proceedings of the American Power Conference, April 1992. 22) Andreas Ikonomopoulos, Lefteri H. Tsoukalas, Robert E. Uhrig, "A Hybrid Neural Network-Fuzzy Logic Approach to Nuclear Power Plant Transient Identificaiton," Proceedings of the AI-91: Frontiers in Innovative Computing for the Nuclear Industry, pp. 217-226, September 1991. 23) N. Kandil, V.K. Sood, K. Khorasani, R.V. 
Patel, "Fault Identification in an AC-DC Transmission System Using Neural Networks," IEEE Transactions on Power Systems, Vol. 7, No. 2, pp. 812-819, May 1992. 24) Shahla Keyvan, Luis Carlos Rabelo, Anil Malkani, "Nuclear Reactor Condition Monitoring by Adaptive Resonance Theory," Proceedings of the 1992 International Joint Conference on Neural Networks, Vol. 3, pp. 321-328, June 1992. 25) K.Y. Lee, Y.T. Cha, J.H. Park, "Short-Term Load Forecasting Using an Artificial Neural Network," IEEE Transactions on Power Systems, Vol. 7, No. 1, pp. 124-130, February 1992. 26) Z.J. Liu, F.E. Villaseca, F. Renovich, Jr., "Neural Networks for Generation Scheduling in Power Systems," Proceedings of the 1992 International Joint Conference on Neural Networks, Vol. 2, pp. 233-238, June 1992. 27) Hiroyuki Mori, Yoshihito Tamaru, Senji Tsuzuki, "An Artificial Neural-Net Based Technique for Power System Dynamic Stability with the Kohonen Model," IEEE Transactions on Power Systems, Vol. 7, No. 2, pp. 856-864, May 1992. 28) Hiroyuki Mori, Kenji Itou, Hiroshi Uematsu, Senji Tsuzuki, "An Artificial Neural-Net Based Method for Predicting Power System Voltage Harmonics," IEEE Transactions on Power Delivery, Vol. 7, No. 1, pp. 402-409, January 1992. 29) Seibert L. Murphy, Samir I. Sayegh, "Application of Neural Networks to Acoustic Screening of Small Electric Motors," Proceedings of the 1992 International Joint Conference on Neural Networks, Vol. 2, pp. 472-477, June 1992. 30) Dagmar Niebur, Alain J. Germond, "Power System Static Security Assessment Using the Kohonen Neural Network Classifier," IEEE Transactions on Power Systems, Vol. 7, No. 2, pp. 865-872, May 1992. 31) T.T. Nguyen, H.X. Bui, "Neural Network for Power System Control Function," Australasia Universities Power and Control Conference '91, pp. 202-207, October 1991. 32) S. Osowski, "Neural Network for Estimation of Harmonic Components in a Power System," IEE Proceedings. Part C, Generation, Transmission and Distribution, Vol. 139, No. 2, pp. 129-135, March 1992. 33) D.R. Ostojic, G.T. Heydt, "Transient Stability Assessment by Pattern Recognition in the Frequency Domain," IEEE Transactions on Power Systems, Vol. 6, No. 1, pp. 231-237, February 1991. 34) Z. Ouyang, S.M. Shahidehpour, "A Hybrid Artificial Neural Network-Dynamic Programming Approach to Unit Commitment," IEEE Transactions on Power Systems, Vol. 7, No. 1, pp. 236- 242, February 1992. 35) Norman L. Ovick, "A-to-D Voltage Classifier Using Neural Network," Proceedings of the 1991 Workshop on Neural Networks, pp. 615-620, February 1991. 36) Yoh-Han Pao, Dejan J. Sobajic, "Combined Use of Unsupervised and Supervised Learning for Dynamic Security Assessment," IEEE Transactions on Power Systems, Vol. 7, No. 2, pp. 878-884, May 1992. 37) Yoh-Han Pao, Dejan J. Sobajic, "Current Status of Artificial Neural Network Applications to Power Systems in the United States," Transactions of the Institute of Electrical Engineers of Japan, Vol. 111-B, No. 7, pp. 690- 697, July 1991. 38) D.C. Park, M.A. El-Sharkawi, R.J. Marks II, "Electric Load Forecasting Using an Artificial Neural Network," IEEE Transactions on Power Systems, Vol. 6, No. 2, pp. 442-448, May 1991. 39) Alexander G. Parlos, Amir F. Atiya, Kil T. Chong, Wei K. Tsai, "Nonlinear Identification of Process Dynamics Using Neural Networks," Nuclear Technology, Vol. 97, pp. 79-96, January 1992. 40) T.M. Peng, N.F. Hubele, G.G. 
Karady, "Advancement in the Application of Neural Networks for Short-Term Load Forecasting," IEEE Transactions on Power Systems, Vol. 7, No. 1, pp. 250-257, February 1992. 41) Kenneth F. Reinschmidt, "Neural Networks: Next Step for Simulation and Control," Power Engineering, pp. 41-45, November 1991. 42) C. Rodriguez, S. Rementeria, C. Ruiz, A. Lafuente, J.I. Martin, J. Muguerza, "A Modular Approach to the Design of Neural Networks for Fault Diagnosis in Power Systems," Proceedings of the 1992 International Joint Conference on Neural Networks, Vol. 3, pp. 16-23, June 1992. 43) Myung-Sub Roh, Se-Woo Cheon, Soon-Heung Chang, "Power Prediction in Nuclear Power Plants Using a Back-Propagation Learning Neural Network," Nuclear Technology, Vol. 94, pp. 270-278, May 1991. 44) N. Iwan Santoso, Owen T. Tan, "Neural-Net Based Real-Time Control of Capacitors Installed on Distribution Systems," IEEE Transactions on Power Delivery, Vol. 5, No. 1, pp. 266-272, January 1990. 45) T. Satoh, K. Nara, "Maintenance Scheduling by Using Simulated Annealing Method," IEEE Transactions on Power Systems, Vol. 6, No. 2, pp. 850-857, May 1991. 46) Dejan J. Sobajic, Yoh-Han Pao, "Artificial Neural-Net Based Dynamic Security Assessment for Electric Power Systems," IEEE Transactions on Power Systems, Vol. 4, No. 1, pp. 220- 226, February 1989. 47) Michael Travis, "Neural Network Methodology for Check Valve Diagnostics," M.S. Thesis, Department of Nuclear Engineering, University of Tennessee, December 1991. 48) Robert E. Uhrig, "Potential Use of Neural Networks in Nuclear Power Plants," Proceedings of the Eighth Power Plant Dynamics, Control and Testing Symposium (in press), May 1992. 49) Robert E. Uhrig, "Use of Neural Networks in the Analysis of Complex Systems," Proceedings of the 1992 Workshop on Neural Networks, February 1992. 50) Robert E. Uhrig, "Potential Application of Neural Networks to the Operation of Nuclear Power Plants," Nuclear Safety, Vol. 32, No. 1, pp. 68-79, January-March 1991. 51) Belle R. Upadhyaya, Evren Eryurek, "Application of Neural Networks for Sensor Validation and Plant Monitoring," Nuclear Technology, Vol. 97, pp. 170-176, February 1992. 52) Siri Weerasooriya, M.A. El-Sharkawi, M. Damborg, R.J. Marks II, "Towards Static-Security Assessment of a Large-Scale Power System Using Neural Networks," IEE Proceedings. Part C, Generation, Transmission and Distribution, Vol. 139, No. 1, pp. 64-70, January 1992. 53) Siri Weerasooriya, M.A. El-Sharkawi, "Identification and Control of a DC Motor Using Back-Propagation Neural Networks," IEEE Transactions on Energy Conversion, Vol. 6, No. 4, pp. 663-669, December 1991. 54) A. Martin Wildberger, "Model-Based Reasoning, and Neural Networks, Combined in an Expert Advisor for Efficient Operation of Electric Power Plants." 55) Q.H. Wu, B.W. Hogg, G.W. Irwin, "A Neural Network Regulator for Turbogenerators," IEEE Transactions on Neural Networks, Vol. 3, No. 1, pp. 95-100, January 1992. From mike at PARK.BU.EDU Fri Oct 2 14:25:45 1992 From: mike at PARK.BU.EDU (Michael Cohen) Date: Fri, 2 Oct 92 14:25:45 -0400 Subject: No subject Message-ID: <9210021825.AA13118@cns.bu.edu> POSTDOCTORAL FELLOW CENTER FOR ADAPTIVE SYSTEMS AND DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS BOSTON UNIVERSITY A postdoctoral fellow is sought to join the Center for Adaptive Systems and the Department of Cognitive and Neural Systems, which are research leaders in the development of biological and artificial neural networks. 
A person is sought who has a substantial research and publication record in developing neural network models of image processing and adaptive pattern recognition. Salary: $30,000+. Excellent opportunities for broadening knowledge of neural architectures through interactions with a faculty trained in psychology, neurobiology, mathematics, computer science, physics, and engineering. Well-equipped computer, vision, speech, word recognition, and motor control laboratories are in the Department. Boston University is an Equal Opportunity/Affirmative Action Employer. Please send a curriculum vitae, 3 letters of recommendation, and illustrative research articles by January 15, 1993 to: Postdoctoral Search Committee Center for Adaptive Systems Boston University 111 Cummington Street Room 244 Boston MA 02215 From jose at tractatus.siemens.com Fri Oct 2 14:39:13 1992 From: jose at tractatus.siemens.com (Steve Hanson) Date: Fri, 2 Oct 1992 14:39:13 -0400 (EDT) Subject: NIPS*92 CONFERENCE PROGRAM Message-ID: NIPS*92 Conference PROGRAM ORAL PROGRAM: Monday, November 30 After Dinner Talk: Stuart Anstis, Psychology Department, UC San Diego "I Thought I saw it move: The Psychology of Motion Perception." Tuesday, December 1 ORAL 1: COMPLEXITY, LEARNING & GENERALIZATION [8:30--9:40] 0.1.1. T. Cover, Department of Elec. Eng., Stanford University "Complexity and Generalization in Neural Networks." (Invited Talk) [8:30am] 0.1.2. N. Intrator, Center for Neural Science, Brown University "Combining Exploratory Projection Pursuit and Projection Pursuit Regression with Application to Neural Networks" [9:00am] 0.1.3. A. Stolcke & S. Omohundro, International Computer Science Institute, Berkeley, CA "Hidden Markov Model Induction by Bayesian Model." [9:20am] 0.1.4 K-Y Siu*, V. Roychowdhury%, T. Kailath+, *Department of Elec. & Computer Eng., UC Irvine, %School of Elec. Eng., Purdue University, +Information Systems Lab, Stanford University "Computing with Almost Optimal Size Neural Networks." [9:40am] ORAL 2: CONTROL, NAVIGATION & PLANNING 0.2.1. D. DeMers* & K. Kreutz-Delgado%, *Dept. of Computer Science, UC San Diego, %Dept. of Elec. & Computer Eng. & Inst. for Neural Comp., UC San Diego "Global Regularization of Inverse Kinematics for Redundant Manipulators." [10:30am] 0.2.2 A. W. Moore & C. G. Atkeson, MIT AI Lab "Memory-based Reinforcement Learning: Efficient Computation with Prioritized Sweeping." [10:50am] 0.2.3 P. Dayan* & G. E. Hinton% *CNL, The Salk Institute %Department of Computer Science, University of Toronto "Feudal Reinforcement Learning." [11:10am] 0.2.4 D. Pomerleau, School of Computer Science, CMU "Input Reconstruction Reliability Estimation." [11:30am] SPOTLIGHT 1: COMPLEXITY, LEARNING & GENERALIZATION. CONTROL, NAVIGATION & PLANNING. [11:50-11:58am] ORAL 3: VISUAL PROCESSING 0.3.1. S. Geman, Mathematics Department, Brown University "Interpretation-guided Segmentation and Recognition." (Invited Talk) [2:00pm] 0.3.2. S. Becker, Department of Computer Science, Univ. of Toronto "Learning to Categorize Objects Using Temporal Coherence." [2:30pm] 0.3.3. S. J. Nowlan & T. J. Sejnowski, CNL, The Salk Institute "Filter Selection Model for Generating Visual Motion Signals for Target Tracking." [2:50pm] 0.3.4. E. Stern*, A. Aertsen%, E. Vaadia+ & S. Hochstein** *Department of Neurobiology, Hebrew University, Jerusalem %Inst.
fur Neuroinformatik, Ruhr-Univ., Bochum, Germany +Department of Physiology, Hebrew University, Jerusalem ** "Stimulus Encoding by Multi-Dimensional Receptive Fields in Single Cells and Cell Populations in V1 of Awake Monkey." [3:10pm] ORAL 4: STOCHASTIC LEARNING AND ANALYSIS 0.4.1. T. K. Leen* & J. Moody% *CSE Department, Oregon Graduate Institute %Department of Computer Science, Yale University "Probability Densities and Equilibria in Stochastic Learning." [4:00pm] 0.4.2. W. Finnoff, Siemens AG Corp. Res. & Dev., Munich, Germany "Diffusion Approximations for the Constant Learning Rate Backpropagation Algorithm and Resistance to Local Minima." [4:20pm] 0.4.3. L. Xu & A. Yuille, Division of Applied Sciences, Harvard Univ. "Self-Organization for Robust Principal Component Analysis by the Statistical Physics Approach." [4:40pm] SPOTLIGHT 2: VISUAL PROCESSING [5:00-5:12pm] SPOTLIGHT 3: STOCHASTIC LEARNING & ANALYSIS [5:15-5:35pm] Wednesday, December 2 ORAL 5: COMPUTATIONAL AND THEORETICAL NEUROBIOLOGY 0.11.1. J. Rinzel, Mathematical Research Branch, NIH "Coupling Mechanisms and Rhythmogenesis in Neuron Models." (Invited Talk) [8:30am] 0.11.2. K. Doya, M.E.T. Boyle, and A. I. Selverston Department of Biology, UC San Diego "Mapping between Neural and Physical Activities of the Lobster Gastric Mill System." [9:00am] 0.11.3. M. E. Nelson, Beckman Institute, University of Illinois "Neural Models of Adaptive Filtering Mechanisms in the Electrosensory System." [9:20am] 0.11.4. N. Burgess, J. O'Keefe and M. Reece Department of Anatomy, University College, London "Using Hippocampal 'Place Cells' for Navigation, Exploiting Phase Coding." [9:40am] 0.11.5. M. A. Gluck and C. E. Myers Center for Molecular and Behavioral Neuroscience, Rutgers Univ. "Neural Bases of Adaptive Stimulus Representations: A Computational Theory of Hippocampal-Region Function." [10:00am] ORAL 6: SPEECH AND SIGNAL PROCESSING 0.6.1. M. Cohen*, H. Franco*, N. Morgan%, D. Rumelhart+, and V. Abrash* *SRI Inst., Menlo Park, CA %ICSI, Berkeley, CA +Psychology Department, Stanford University, CA "Context-Dependent Multiple Distribution Phonetic Modeling with MLPS." [10:50am] 0.6.2. M. Hirayama*, E. V. Bateson%, K. Honda%, Y. Koike* and M. Kawato* *ATR Human Inf. Proc. Res. Labs %ATR Auditory and Visual Perception Res. Labs., Kyoto, Japan "Physiologically Based Speech Synthesis." [11:10am] 0.6.3. W. Liu, M. H. Goldstein, Jr. and A. G. Andreou, Dept. of Elec. & Comp. Eng., The Johns Hopkins University "Analog Cochlear Model for Multiresolution Speech Analysis." [11:30am] SPOTLIGHT 4: COMPUTATIONAL AND THEORETICAL NEUROBIOLOGY. [11:50am-12:02pm] SPOTLIGHT 5: SPEECH AND SIGNAL PROCESSING. [12:04-12:08pm] ORAL 7: COMPLEXITY, LEARNING & GENERALIZATION 2 0.5.1. S. Solla, AT&T Bell Labs "The Emergence of Generalization Ability in Learning Machines." (Invited Talk) [2:00pm] 0.5.2. J. Wiles* & M. Ollila% *Depts. of Computer Science & Psychology, Univ. of Queensland, Australia %Vision Lab, CITRI, Dept. of Computer Science, Univ. of Melbourne, Australia "Intersecting Regions: The Key to Combinatorial Structure in Hidden Unit Space." [2:30pm] 0.5.3. T. A. Plate, Department of Computer Science, Univ. of Toronto "Holographic Recurrent Networks." [2:50pm] 0.5.4. P. Simard, Y. LeCun & J. Denker, AT&T Bell Labs "Efficient Pattern Recognition Using a New Transformation Distance." [3:10pm] SPOTLIGHT 6: COMPLEXITY, LEARNING & GENERALIZATION 2 [3:30-3:42pm] ORAL 8: IMPLEMENTATIONS 0.8.1. J. Platt, J. Anderson, & D.
Kirk, Synaptics, Inc., San Jose, CA "An Analog VLSI Chip for Radial Basis Functions." [4:15pm] 0.8.2. H. P. Graf, E. Cosatto, E. Sackinger, and J. Snyder, AT&T Bell Labs "A Modular System with Multiple Neural Net Chips." [4:35pm] 0.8.3. D. J. Baxter, S. Churcher, A. Hamilton, A. F. Murray, and H. M. Reekie Department of Elec. Eng., University of Edinburgh, Scotland "The Edinburgh Pulse Stream Implementation of a Learning-Oriented Network (Epsilon) Chip." [4:55pm] SPOTLIGHT 7: COGNITIVE SCIENCE [5:15-5:19pm] SPOTLIGHT 8: IMPLEMENTATIONS, APPLICATIONS [5:20-5:40pm] Thursday, December 3 ORAL 9: PREDICTION 0.9.1. A. Lapedes, Theory Division, Los Alamos National Laboratory "Nonparametric Neural Networks for Prediction." (Invited Talk) [8:30am] 0.9.2. M. Plutowski*, G. Cottrell%, and H. White+ *Department of Computer Science & Engineering, %Inst. for Neural Comp. and Department of Computer Science & Eng., +Inst. for Neural Comp. and Department of Economics, UCSD "Learning Mackey-Glass from 25 Examples, Plus or Minus 2." [9:00am] ORAL 10: COGNITIVE SCIENCE 0.10.1. P. Smolensky Dept. of Computer Sci. and Inst. of Cog. Sci., Univ. of Colorado, Boulder "Harmonic Grammars for Formal Languages." [9:20am] 0.10.2. D. Gentner & A. B. Markman Department of Psychology, Northwestern University "Analogy -- Watershed or Waterloo? Structural Alignment and the Development of Connectionist Models of Cognition." [9:40am] ORAL 11: APPLICATIONS 0.7.1. Dr. W. Baxt, UCSD Medical Center "The Application of the Artificial Neural Network to Clinical Decision Making." (Invited Talk) [10:30am] 0.7.2. V. Tresp*, J. Moody%, and W-R. Delong+ *Siemens AG, Central Research, Munich, Germany %Computer Science Department, Yale University +Siemens AG, Medical Eng. Group, Erlangen, Germany "Prediction and Control of the Glucose Metabolism of a Diabetic." [11:00am] 0.7.3. P. Baldi* & Y. Chauvin% *JPL, Division of Biology, Cal Tech %Net-ID, Inc., and Psychology Department, Stanford University "Neural Networks for Finger Print Matching and Classification." [11:20am] 0.7.4. M. Schenkel*, H. Weismann, I. Guyon, C. Nohl, D. Henderson, B. Boser%, and L. Jackel AT&T Bell Labs *also ETH-Zurich, %also EECS Dept., UC Berkeley "TDNN Solutions for Recognizing On-Line Natural Handwriting." [11:40am] POSTER SPOTLIGHT TALKS (4 Minute Talks) SPOTLIGHT 1: COMPLEXITY, LEARNING & GENERALIZATION 1. CONTROL, NAVIGATION & PLANNING. P&S.1.1. K-Y Siu* & V. Roychowdhury% *Department of Elec. & Comp. Eng., UC Irvine %School of Elec. Eng., Purdue University "Optimal Depth Neural Networks for Multiplication and Related Problems." P&S.1.2. T. M. Mitchell and S. B. Thrun School of Computer Science, CMU "Explanation-Based Neural Network Learning for Robot Control." SPOTLIGHT 2: VISUAL PROCESSING P&S.2.1. S. Madarasmi*, D. Kersten%, and T-C Pong* *Department of Computer Science, %Department of Psychology, University of Minnesota "Computation of Stereo Disparity for Transparent and for Opaque Surfaces." P&S.2.2. S. Ahmad and V. Tresp Siemens Research, Munich, Germany "Some Solutions to the Missing Feature Problem in Vision." P&S.2.3. J. Utans and G. Gindi Department of Elec. Eng., Yale University "Improving Convergence in Hierarchical Matching Networks for Object Recognition." SPOTLIGHT 3: STOCHASTIC LEARNING & ANALYSIS P&S.3.1. R. M. Neal, Department of Computer Science, University of Toronto "Bayesian Learning via Stochastic Dynamics." P&S.3.2. Y. Freund*, H. S. Seung%, and N. Tishby+ *Comp. and Inf. Sci., UC Santa Cruz, %Racah Inst.
of Physics, and Center for Neural Comp., Hebrew Univ., Jerusalem, +Department of Comp. Sci. and Center for Neural Comp., Hebrew Univ., Jerusalem "Accelerating Learning Using Query by Committee." P&S.3.3. A. F. Murray, J. P. Edwards Department of Elec. Eng., University of Edinburgh, Scotland "Synaptic Weight Noise During MLP Learning Enhances Fault-Tolerance." P&S.3.4. D. De Mers and G. Cottrell Department of Computer Science, UC San Diego "Non-Linear Dimensionality Reduction." P&S.3.5. N. N. Schraudolph* and T. J. Sejnowski% *Computer Science & Engr. Department, UC San Diego %Computer Neurobiology Lab., The Salk Institute "Self-Stabilizing Hebbian Learning: Beyond Principal Components." SPOTLIGHT 4: COMPUTATIONAL AND THEORETICAL NEUROBIOLOGY. P&S.4.1. I. Gutterman and N. Tishby Department of Comp. Sci. and Center for Neural Computation, Hebrew University, Jerusalem "Statistical Modeling of Cell-Assemblies Activities in Prefrontal Cortex of Behaving Monkeys." P&S.4.2. R. Linsker, IBM. TJ Watson Center, Yorktown Heights "Towards Unambiguous Derivation of Receptive Fields Using a New Optimal-Encoding Criterion." P&S.4.3. O. Coenen*, T. J. Sejnowski*, and S. G. Lisberger% *Comp. Neurobiol. Lab., Howard Hughes Medical Inst., The Salk Institute, La Jolla, CA %Department of Physiology, Kick Center for Integrating Neuroscience, UCSF, CA "Biologically Plausible Learning Rules for the Vestibular-Ocular Reflex (VOR)." SPOTLIGHT 5: SPEECH AND SIGNAL PROCESSING. P&S.5.1. M. Hild and A. Waibel, School of Computer Science, CMU "Connected Letter Recognition with a Multi-State Time Delay Neural Network." SPOTLIGHT 6: COMPLEXITY, LEARNING & GENERALIZATION 2 P&S.6.1. I. Guyon*, B. Boser%, and V. Vapnik* *AT&T Bell Labs, Holmdel, NJ %EE&CS Department, UC Berkeley "Automatic Capacity Tuning of Very Large VC-Dimension Classifiers" P&S.6.2. P.Y. Simard*, Y. LeCun*, and B. Pearlmutter% *AT&T Bell Labs, Holmdel, NJ %Yale University "Local Computation of the Second Derivative Information in a Multi-Layer Network." P&S. 6.3 H. Drucker, R. Schapire & P. Simard, AT&T Bell Labs "Improving Performance in Neural Networks Using a Boosting Algorithm." SPOTLIGHT 7: COGNITIVE SCIENCE P&S.7.1. M. C. Mozer and S. Das Department of Computer Science & Inst. of Cognitive Science, Univ. of Colorado, Boulder, CO "A Connectionist Chunker that Induces the Structure of Context-Free Languages." SPOTLIGHT 8: IMPLEMENTATIONS, APPLICATIONS, P&S.5.1. J. Lazzaro*, J. Wawrzynck*, M. Mahowald%, M. Sivilotti+, D. Gillespie$ *EE &CS, UC Berkeley %Computation and Neural Sciences, Cal Tech +Computer Science, Cal. Tech. and Tanner Research, Pasadena, CA $Computer Science, Cal. Tech. and Synaptics, San Jox, CA "Silicon Auditory Processors as Computer Peripherals." P&S.5.2. C. Koch*, B. Mathur%, S-C Liu+, J. G. Harris+, J. Luo and M. Sivilotti$ *Computation and Neural Systems, Cal. Tech. %Rockwell Intl. Science Center, Thousand Oaks, CA +Al Lab, MIT $Tanner Research, Pasadena, CA "Object-Based Analog VLSI Vision Circuits." P&S.5.3. J. Alspector, R. Meir, B. Yuhas, A. Jayakumar Bellcore, Morristown, NJ "A Parallel Gradient Descent Method for Learning in Analog VLSI Neural Networks." P&S.5.4. A. C. Tsoi, D. S. C. So, and A. Sergejew Department of Elec. Eng., University of Queensland, Australia "Classification of Electroencephalogram Using Artificial Neural Networks." P&S.5.5. Y. Salu, Physics Department, and CSTEA, Howard University "Classification of Satelite Multi-Spectral Image Data by the Binary Diamond Neural Network." 
NIPS '92 FINAL POSTER SESSIONS 1 & 2 TUESDAY EVENING: SESSION 1 COMPLEXITY, LEARNING AND GENERALIZATION 1 "Optimal Depth Neural Networks for Multiplication and Related Problems." Kai-Yeung Siu, Department of Elec. & Comp. Eng, UC Irvine Vwani Roychowdhury, School of Elec. Eng., Purdue University "Initial Complexity of Large Networks and Its Effect on Generalization." Chuanyi Ji, Department of Eled, Comp. & System Eng., Rensselaer Polytechnic Inst., Troy, NY "Using Hints to Successfully Learn Context-Free Grammars with a Neural Network Pushdown Automaton." Sreerupa Das, Dept. of Computer Science, Univ. of Colorado, Boulder, CO C. Lee Giles, NEC Richard Institute, Princeton, NJ Guo-Zheng Sun, Inst. for Advanced Computer Studies, Univ. of MD "Interposing an Ontogenic Model Between Genetic Algorithms and Neural Networks." Richard K. Belew, Cognitive Comp. Science Research Group, UC San Diego "Combining Neural and Symbolic Learning to Revise Probabilistic Rule Bases." J. Jeffrey Mahoney and Raymond J. Mooney, Dept. of Computer Science, University of Texas, Austin, TX "Learning Sequential Tasks by Incrementally Adding Higher Orders." Mark Ring, Dept. of Computer Sciences, University of Texas, Austin, TX "Kohonen Feature Maps and Growing Cell Structures -- A Performance Comparison." Bernard Fritzke, Universitat Erlangen-Nurnberg, Lehrstuhl fur Programmiersprachen, Erlangen, Germany "Latticed RBF Networks: An Alternative to Constructive Methods." Brian Bonnlander & Michael C. Mozer, Department of Computer Science & Institute of Cognitive Science, University of Colorado, Boulder, CO "A Boundary Hunting Radial Basis Function Classifier which Allocates Centers Constructively." Eric I. Chang & Richard P. Lippmann, MIT Lincoln Laboratory, Lexington, MA "How Hints affect Learning" Yaser Abu-Mostafa, Dept of Electrical Engineering & Computer Science, California Institute of Technology, Pasadena, CA CONTROL, NAVIGATION & PLANNING "Explanation-Based Neural Network Learning for Robot Control." Tom M. Mitchell & Sebastian B. Thrun, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA "Reinforcement Learning Applied to Linear Quadratic Regulation." Steven J. Bradtke, Department of Computer & Information Science, University of Massachusetts, Amherst, MA "Neural Network On-Line Learning Control of Spacecraft Smart Structure." Dr. Christopher Bowman, Ball Aerospace Systems Group, Boulder, CO "Integration of Visual and Somatosensory Information for Preshaping Hand in Grasping Movements." Yoji Uno*, Naohiro Fukumura%, Ryoji Suzuki%, and Mitsuo Kawato* *ATR Human Information Processing Research Laboratories, Kyoto, Japan %Faculty of Engineering, University of Tokyo, Tokyo, Japan "On-Line Estimation of the Optimal Value Function: HJB-Estimators." James K. Peterson, Department of Mathematical Sciences, Clemson University, Clemson, SC "Robust Control Under Extreme Uncertainty." Vijaykumar Gullapalli, CS Department, LGRC, University of Massachusetts, Amherst, MA "Trajectory Relaxation Learning for Approximation of Robot Inverse Dynamics." T. Sanger, MIT, Cambridge, MA "Learning Spatio-Temporal Planning from a Dynamic Programming Teacher: A Feed Forward Net for the Moving Obstacle Avoidance Problem." G. Fahner and R. Eckmiller, Department of Biophysics, Division of Biocybernetics, Heinrich-Heine-University of Dusseldorf, Dusseldorf, Germany "Learning Fuzzy Rule-Based Neural Networks for Control." Rodney M. Goodman and Charles M. Higgins, Department of Electrical Engineering, Cal. 
Tech., Pasadena, CA VISUAL PROCESSING "Computation of Stereo Disparity for Transparent and for Opaque Surfaces." Suthep Madarasmi*, Daniel Kersten%, Ting-Chuen Pong* *Computer Science Department, %Department of Psychology, University of Minnesota, Minneapolis, MN "Some Solutions to the Missing Feature Problem in Vision." Subutai Ahmad and Volker Tresp, Siemens Research, Munich, Germany "Improving Convergence in Hierarchical Matching Networks for Object Recognition." Joachim Utans and Gene Gindi, Yale University "An LGN Model Which Mediates Communication Between Different Spatial Frequency Channels Through Feedback From Cortex." Carlos D. Brody, Computation and Neural Systems Program, Cal. Tech., Pasadena, CA "Unsmearing Visual Motion: Development of Long-Range Horizontal Intrinsic Connections." Kevin E. Martin and Jonathan A. Marshall, Department of Computer Science, University of North Carolina, Chapel Hill, NC "LandSat Image Analysis via a Texture Classification Neural Network." Hayit K. Greenspan and Rodney M. Goodman, Department of Electrical Engineering, Cal. Tech., Pasadena, CA "Computation of Ego-Motion from Optic Flow in Visual Cortex." Markus Lappe and Josef P. Rauschecker, National Institutes of Health Animal Center, NIMH, Poolesville, MD, and Max Planck Institute for Biological Cybernetics, Tubingen, Germany "Learning to See Where and What: A Backprop Net Trained to Make Saccades and Recognize Characters." Gale L. Martin, Mosfeq Rashid, David Chapman & James Pittman, MCC, Austin, TX STOCHASTIC LEARNING AND ANALYSIS "Bayesian Learning via Stochastic Dynamics." Radford M. Neal, Department of Computer Science, University of Toronto, Toronto, Canada "Accelerating Learning Using Query by Committee." Yoav Freund*, H. Sebastian Seung%, and Naftali Tishby+ *Computer and Info. Sciences, UC Santa Cruz %Racah Inst. of Physics and Ctr. for Neural Computation, Hebrew University, Jerusalem +Department of Computer Science and Ctr. for Neural Computation, Hebrew University, Jerusalem "Synaptic Weight Noise During MLP Learning Enhances Fault-Tolerance." Alan F. Murray and Peter J. Edwards, Dept. of Electrical Engineering, University of Edinburgh, Scotland "Self-Stabilizing Hebbian Learning: Beyond Principal Components." Nicol N. Schraudolph* and Terrence J. Sejnowski% *Computer Science & Engr. Department, UC San Diego %Computational Neurobiology Laboratory, The Salk Institute, La Jolla, CA "Probability Densities and Basin-Hopping in Stochastic Learning." Todd K. Leen and Genevieve B. Orr, Department of Computer Science and Engineering, Oregon Graduate Institute of Science and Technology, Beaverton, OR "Information Theoretic Analysis of Connection Structure from Spike Trains." S. Shiono, S. Yamada, M. Nakashima, and Kenji Matsumoto, Central Research Laboratory, Mitsubishi Electric Corp., Hyogo, Japan "Statistical Mechanics of Learning in a Large Committee Machine." H. Schwarze and J. Hertz, The Niels Bohr Institute and Nordita, Copenhagen, Denmark "Probability Estimation from a Database Using a Gibbs Energy Model." John W. Miller and Rodney M. Goodman, Department of Electrical Engr., Cal. Tech., Pasadena, CA "On the Use of Evidence in Bayesian Reasoning." David H. Wolpert, The Santa Fe Institute, Santa Fe, NM NETWORK DYNAMICS & CHAOS "Destabilization and Route to Chaos in Neural Networks with Random Connectivity." B. Doyon*, B. Cessac%+, M. Quoy%$, M.
Samuelides%$ *Unite INSERM 230, Service de Neurologie, CHU Purpan, Toulouse Cedex, France %Centre d'Etudes et de Recherches de Toulouse, Toulouse Cedex, France +Laboratoire de Physique Quantique, Universite Paul Sabatier, Toulouse Cedex, France $Ecole Nationale Superieure de l'Aeronautique et de l'Espace, Toulouse Cedex, France "Predicting Complex Behavior in Sparse Asymmetric Networks." Ali A. Minai and William B. Levy, Department of Neurosurgery, University of Virginia, Charlottesville, VA "Single-iteration Threshold Hamming Networks." I. Meilijson, E. Ruppin, M. Sipper, School of Mathematical Sciences, Tel Aviv University, Tel Aviv, Israel "History-Dependent Dynamics in Attractor Neural Networks: A Bayesian Approach." Isaac Meilijson and Eytan Ruppin, School of Mathematical Sciences, Tel Aviv University, Tel Aviv, Israel "Bifurcation Analysis of a Coupled Neural Oscillator System With Application to Visual Cortex Modeling." Galina N. Borisyuk, Roman M. Borisyuk, Alexander I. Khibnik, Institute of Mathematical Problems of Biology, Russian Academy of Sciences, Pushchino, Russia "Non-Linear Dimensionality Reduction." David DeMers and Garrison Cottrell, Department of Computer Science, UC San Diego, La Jolla, CA THEORY AND ANALYSIS "On Learning m-Perceptron Networks with Binary Weights." Mostefa Golea*, Mario Marchand* and Thomas R. Hancock% *Ottawa-Carleton Institute for Physics, University of Ottawa, Ottawa, Canada %Aiken Computation Laboratory, Harvard University, Cambridge, MA "Neural Network Model Selection Using Asymptotic Jackknife Estimator and Cross-Validation Method." Yong Liu, Department of Physics and Center for Neural Science, Brown University, Providence, RI "Learning Curves, Model Selection and Complexity of Neural Networks." Noboru Murata, Shuji Yoshizawa, and Shun-ichi Amari, Department of Mathematical Engineering and Information Physics, University of Tokyo, Japan "The Power of Approximating: A Comparison of Activation Functions." Bhaskar DasGupta and Georg Schnitger, Department of Computer Science, The Pennsylvania State University, University Park, PA "Rational Parameterizations of Neural Networks." Uwe Helmke* and Robert C. Williamson% *Department of Mathematics, University of Regensburg, Regensburg, Germany %Department of Systems Engineering, Australian National University, Canberra, Australia "Learning Cellular Automaton Dynamics with Neural Networks." N. H. Wulff and J. A. Hertz, CONNECT, The Niels Bohr Institute and Nordita, Copenhagen, Denmark "Some Estimations of Necessary Number of Connections and Hidden Units for Feed Forward Networks." Adam Kowalczyk, Telecom Australia, Research Laboratories, Victoria, Australia WEDNESDAY EVENING: SESSION 2 COMPLEXITY, LEARNING AND GENERALIZATION 2 "Automatic Capacity Tuning of Very Large VC-Dimension Classifiers." I. Guyon, B. Boser*, V. Vapnik, AT&T Bell Laboratories, Holmdel, NJ *currently in EECS Department, UC Berkeley, CA "Local Computation of the Second Derivative Information in a Multi-Layer Network." Patrice Y. Simard, Yann Le Cun and Barak Pearlmutter* AT&T Bell Laboratories, Holmdel, NJ *Yale University, New Haven, CT "Improving Performance in Neural Networks Using a Boosting Algorithm." H. Drucker, R. Schapire & P. Simard, AT&T Bell Labs, Holmdel, NJ "Learning Classification With Few Labelled Examples." Joel Ratsaby and Santosh S. Venkatesh, Department of Electrical Engineering, University of Pennsylvania, Philadelphia, PA "Second Order Derivatives for Network Pruning: Optimal Brain Surgeon." Babak Hassibi and David G.
Stork, Ricoh California Research Center, Menlo Park, CA, and Department of Electrical Engineering, Stanford University, Stanford, CA "Directional-Unit Boltzmann Machines." Richard S. Zemel, Christopher K. I. Williams and Michael C. Mozer* Computer Science Department, University of Toronto, Toronto, Canada *Computer Science Department, University of Colorado, Boulder, CO "Applying Classical Optimization Techniques to Neural Network Testing." Dr. Scott A. Markel and Dr. Roger L. Crane, David Sarnoff Research Center, Princeton, NJ "Time Warping Invariant Neural Networks." G. Z. Sun, H. H. Chen, Y. C. Lee and Y. D. Liu, Institute for Advanced Computer Studies / Laboratory for Plasma Research, University of Maryland, College Park, MD "Generalization Abilities of Cascade Network Architectures." E. Littmann and H. Ritter, Department of Computer Science, Bielefeld University, Bielefeld, Germany "Assessing and Improving Neural Network Predictions by the Bootstrap Algorithm." Gerhard Paass, German National Research Center for Computer Science, Augustin, Germany "Discriminability-Based Transfer between Neural Networks." L. Y. Pratt, Department of Mathematics and Computer Science, Colorado School of Mines, Golden, CO "Summed Weight Neuron Perturbation: An O(N) Improvement over Weight Perturbation." Barry Flower and Marwan Jabri, SEDAL, Department of Electrical Engineering, University of Sydney, Australia "Supervised Clustering." Virginia de Sa and Dana Ballard, Computer Science Department, University of Rochester, Rochester, NY "Extended Regularization Methods for Nonconvergent Model Selection." W. Finnoff, F. Hergert and H. G. Zimmerman, Siemens AG, Corporate Research and Development, Munich, Germany "Synchronization and Grammatical Inference in an Oscillating Elman Net." Bill Baird* and Frank Eeckman% *Department of Mathematics, UC Berkeley, CA %O-Division, Lawrence Livermore National Laboratory, Livermore, CA "Training Hidden Units in Reinforcement Learning Networks." Charles W. Anderson, Department of Computer Science, Colorado State University, Fort Collins, CO "Nets with Unreliable Hidden Nodes Learn Error-Correcting Codes." Stephen Judd and Paul Munro, Siemens Corporate Research, Princeton, NJ, and Department of Information Science, University of Pittsburgh, PA "A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization." Gert Cauwenberghs, Cal. Tech., Pasadena, CA SPEECH AND SIGNAL PROCESSING "Modeling Consistency in a Speaker Independent Continuous Speech Recognition System." Yochai Konig*, Nelson Morgan*, Chuck Wooters*, Victor Abrash%, Michael Cohen%, and Horacio Franco% *International Computer Science Institute, Berkeley, CA %SRI International, Menlo Park, CA "A Hybrid Linear/Nonlinear Approach to Channel Equalization Problems." Wei-Tsih Lee*, John C. Pearson*, and Manoel F. Tenorio% *David Sarnoff Research Center, Princeton, NJ %Purdue University, School of Electrical Engineering, West Lafayette, IN "Transient Detection Using Neural Networks: The Search for the Desired Signal." Abir Zahalka and Jose C. Principe, Computational NeuroEngineering Laboratory, University of Florida, Gainesville, FL "Performance Through Consistency: MS-TDNN's for Large Vocabulary Continuous Speech Recognition." Joe Tebelskis and Alex Waibel, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA "Speech Recognition Using Segmental Neural Nets with the N-Best Paradigm." G. Zavaliagkos, S. Austin, J. Makhoul and R.
Schwartz, BBN Systems and Technologies, Cambridge, MA "Connected Letter Recognition with a Multi-State Time Delay Neural Network." Hermann Hild and Alex Waibel, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA "Classification of Electroencephalogram Using Artificial Neural Networks." A. C. Tsoi, D. S. C. So, and A. Sergejew, Department of Electrical Engineering, University of Queensland, Queensland, Australia "Classification of Satellite Multi-Spectral Image Data by the Binary Diamond Neural Network." Yehuda Salu, The Physics Department and CSTEA, Howard University, Washington, DC "Silicon Auditory Processors as Computer Peripherals." John Lazzaro*, John Wawrzynek*, M. Mahowald%, Massimo Sivilotti+, and Dave Gillespie+ *Computer Science Division, UC Berkeley, CA %Computation and Neural Sciences, Cal. Tech, Pasadena, CA +Computer Science, Cal. Tech., Pasadena, CA "Object-Based Analog VLSI Vision Circuits." Christof Koch*, Bimal Mathur%, Shih-Chii Liu+, John G. Harris$, Jin Luo and Massimo Sivilotti$ *Computation and Neural Systems, Cal. Tech., Pasadena, CA %Rockwell International Science Center, Thousand Oaks, CA +Artificial Intelligence Laboratory, MIT, Cambridge, MA $Tanner Research, Pasadena, CA "A Parallel Gradient Descent Method for Learning in Analog VLSI Neural Networks." Joshua Alspector, Ronny Meir, Ben Yuhas, Anthony Jayakumar, Bellcore, Morristown, NJ APPLICATIONS "Dynamic Planar Warping and Planar Hidden Markov Modeling: From Speech to Optical Character Recognition." Esther Levin and Roberto Pieraccini, AT&T Bell Laboratories, Murray Hill, NJ "Forecasting Demand for Electric Power." Terrence L. Fine and Jen-Lun Yuan, School of Electrical Engineering, Cornell University, Ithaca, NY "Adaptive Algorithms for Multiple Sequence Alignments." Pierre Baldi*, Tim Hunkapiller*, Yves Chauvin%, and Marcella McClure+ *Cal. Tech, Pasadena, CA %Net-ID, Inc. +UC Irvine "A Neural Network that Learns to Interpret Myocardial Planar Thallium Scintigrams." Charles Rosenberg*, Jacob Erel%, and Henri Atlan% *Department of Computer Science, Hebrew University, Jerusalem, Israel %Department of Biophysics and Nuclear Medicine, Hadassah Medical Center, Jerusalem, Israel IMPLEMENTATIONS "An Analog VLSI Chip for Local Velocity Estimation Based on Reichardt's Motion Algorithm." Rahul Sarpeshkar, Wyeth Bair and Christof Koch, Department of Computation and Neural Systems, Cal. Tech., Pasadena, CA "Analog VLSI Implementation of Gradient Descent." David Kirk, Douglas Kerns, Kurt Fleischer, Alan Barr, Cal. Tech., Pasadena, CA "An Object-oriented Framework and its Implementation for the Simulation of Neural Nets." Alexander Linden and Christoph Tietz, AI Research Division, German National Research Center For Computer Science, Augustin, Germany "Attractor Neural Networks with Local Inhibition." L. D'Alessandro*, E. Pasero*, and R. Zecchina% *Dipart. Elettronica, Politecnico di Torino %Dipart. Fisica Teorica, Universita di Torino "Biological Neurons and Model Neurons: Construction and Study of Hybrid Networks." G. Le Masson, S. Renaud-Le Masson, E. Marder, and L. F. Abbott, Department of Biology and Physics and Center for Complex Systems, Brandeis University, Waltham, MA COGNITIVE SCIENCE "A Connectionist Chunker that Induces the Structure of Context-Free Languages." Michael C. Mozer and Sreerupa Das, Department of Computer Science and Institute of Cognitive Science, University of Colorado, Boulder, CO "Network Structuring and Training Using Rule-Based Knowledge."
Volker Tresp*, Jurgen Hollatz%, and Subutai Ahmad* *Siemens AG, Central Research and Development, Munich, Germany %Institut fur Informatik, Munich, Germany "A Dynamic Model of Priming and Repetition Blindness." Daphne Bavelier and Michael I. Jordan, Department of Brain and Cognitive Sciences, MIT, Cambridge, MA "A Knowledge-Based Model of Geometry Learning." Geoffrey Towell* and Richard Lehrer% *Siemens Corporate Research, Princeton, NJ %Educational Psychology, University of Wisconsin, Madison, WI "Representing Meaning With Activation Gestalts." Hinrich Schutze, CSLI, Stanford, CA "Perceiving Complex Visual Scenes: An Oscillator Neural Network Model that Integrates Location-Based Attention, Perceptual Organization, and Object-Based Selection." Rainer Goebel, Department of Psychology, University of Braunschweig, Braunschweig, Germany COMPUTATIONAL AND THEORETICAL NEUROBIOLOGY "Statistical Modeling of Cell-Assembly Activities in Prefrontal Cortex of Behaving Monkeys." Itay Gutterman and Naftali Tishby, Department of Computer Science and Center for Neural Computation, Hebrew University, Jerusalem, Israel "Towards Unambiguous Derivation of Receptive Fields Using a New Optimal-Encoding Criterion." Ralph Linsker, IBM, T. J. Watson Research Center, Yorktown Heights, NY "Biologically Plausible Learning Rules for the Vestibulo-Ocular Reflex (VOR)." Oliver Coenen*, Terrence J. Sejnowski*, and Stephen G. Lisberger% *Computational Neurobiology Laboratory, The Salk Institute, La Jolla, CA %Department of Physiology, W. M. Keck Foundation Center for Integrative Neuroscience; and Neuroscience Graduate Program, UC San Francisco, CA "A Non-Hebbian LTP Learning Rule in Hippocampus Enables High-Capacity Temporal Sequence Encoding." Richard Granger, James W. Whitson, Jr., and Gary Lynch, Center for the Neurobiology of Learning and Memory, UC Irvine, CA "Using Aperiodic Reinforcement for Directed Self Organization." P. Read Montague, Steven J. Nowlan, Peter Dayan and Terrence J. Sejnowski, Computational Neurobiology Laboratory, The Salk Institute, San Diego, CA "Information Processing in Neocortical Pyramidal Cells." Bartlett W. Mel, Computation and Neural Systems Program, Cal. Tech., Pasadena, CA "How Oscillatory Neuronal Responses Reflect Bistability and Switching of the Hidden Assembly Dynamics." K. Pawelzik, H.-U. Bauer, J. Deppisch, and T. Geisel, Institut fur Theoretische Physik and SFP, Frankfurt, Germany "Topography and Ocular Dominance: A New Model that Explores Positive Between-Eye Correlations." Geoffrey Goodhill, University of Edinburgh, Centre for Cognitive Science, Edinburgh, Scotland "Statistical and Dynamical Interpretation of ISIH Data from Periodically Stimulated Sensory Neurons." Frank Moss* and Andre Longtin% *Department of Physics and Department of Biology, University of Missouri, St. Louis, MO %Department of Physics, University of Ottawa, Canada "Modelling Movement Disorders with Cascaded Jordan Networks." Alexander Britain*, Gordon D. A. Brown*, Michael Malloch* and Ian J. Mitchell% *Cognitive Neurocomputation Unit, Dept. of Psychology, University of Wales, Bangor, United Kingdom %Department of Cell and Structural Biology, Manchester, United Kingdom "Spiral Waves in Integrate-And-Fire Neural Networks." John G. Milton*, Po Hsiang Chu% and Jack D.
Cowan+ *Department of Neurology, University of Chicago, Chicago, IL %Department of Computer Science, DePaul University, Chicago, IL +Department of Mathematics, University of Chicago, Chicago, IL "Parameterising Feature Sensitive Cell Formation in Linsker Networks." L. C. Walton and D. L. Bisset, Electronic Engineering Laboratories, University of Kent, United Kingdom "A Recurrent Neural Network for Generation of Ocular Saccades." Lina L. E. Massone, Departments of Physiology and Electrical Engineering and Computer Science, Northwestern University, Chicago, IL "A Formal Model of the Insect Olfactory Macroglomerulus." C. Linster*, C. Masson%, M. Kerszberg+, L. Personnaz*, and G. Dreyfus* *Ecole Superieure de Physique et de Chimie Industrielles de la Ville de Paris, Laboratoire d'Electronique, Paris, France %Laboratoire de Neurobiologie Comparees des Invertebres, INRA/CNRS, Bures sur Yvette, France +Institut Pasteur, Paris, France "An Information-Theoretic Approach to Deciphering the Hippocampal Code." William E. Skaggs, Bruce L. McNaughton, Katalin M. Gothard, Etan J. Markus, ARL Division of Neural Systems, Memory and Aging, University of Arizona, Tucson, AZ From haussler at cse.ucsc.edu Fri Oct 2 14:52:12 1992 From: haussler at cse.ucsc.edu (David Haussler) Date: Fri, 2 Oct 1992 11:52:12 -0700 Subject: Tech report available on hidden Markov models for proteins Message-ID: <199210021852.AA19416@arapaho.ucsc.edu> University of California at Santa Cruz Department of Computer and Information Sciences The following technical report is available electronically or as a paper copy. Instructions for getting either follow the abstract. PROTEIN MODELING USING HIDDEN MARKOV MODELS: ANALYSIS OF GLOBINS David Haussler, Anders Krogh, Saira Mian, Kimmen Sjolander UCSC-CRL-92-23 (available electronically as ucsc-crl-92-23.ps.Z) June 1992, revised September 1992 (Shorter version will appear in Proc. of 26th Hawaii Int. Conf. on System Sciences, Biocomputing technology track, Jan. 5-8, 1993) Abstract: We apply Hidden Markov Models (HMMs) to the problem of statistical modeling and multiple alignment of protein families. In a detailed series of experiments, we have taken 625 unaligned globin sequences from the Swiss Protein database, and produced a statistical model entirely automatically from the primary (unaligned) sequences using no prior knowledge of globin structure. The produced model includes all the known positions in the 7 major alpha-helices, along with the distribution for the 20 amino acids for each of these positions, as well as the probability of and average length of insertions between these positions, and the probability that each position is not present at all. Using this model, we obtained a multiple alignment of all 625 sequences that agrees almost perfectly with the structural alignment given in [1]. In our tests, we have found that 400 of the 625 globins (selected at random) are enough to produce a model of the same quality. This model based on 400 globins can discriminate the remaining (228) globins from nonglobin protein sequences with greater than 99% accuracy, and can thus be used for database searches. The method we use to obtain the statistical model from the unaligned sequences is a variant of the Expectation Maximization (EM) algorithm known as the Viterbi algorithm.
This method starts with an initial "neutral" model (same amino acid distribution in each position, fixed probabilities for insertions and deletions), optimally aligns the training sequences to this model (using dynamic programming), and then reestimates the probability parameters of the model. These last two steps are iterated until no further changes are made. A simple heuristic is used to automatically adjust the number of positions that are modeled by deleting positions that are not being used and inserting new positions where needed. After this, we then iterate the whole process above again on the new model. Our method is more general and more flexible than previous applications of HMMs and the EM algorithm to alignment and modeling problems in molecular biology. This technical report is available electronically through either of the following methods: 1. through anonymous ftp from ftp.cse.ucsc.edu, in /pub/tr. Log in as "anonymous", use your email address as your password, specify "binary" before getting the file. Uncompress before printing. 2. by mail to automatic mail server rnalib at ftp.cse.ucsc.edu. Put this command on the subject line or in the body of the message: @@ send ucsc-crl-92-23.ps.Z from tr To get the index or abstract list: @@ send INDEX from tr @@ send ABSTRACTS.1992 from tr To get the list of the tr directory: @@ list tr To get the list of commands and their syntax: @@ help commands Order paper copies from: Technical Library, Baskin Center for Computer Engineering & Information Sciences, UCSC, Santa Cruz CA 95064. Questions: jean at cse.ucsc.edu From jagota at cs.Buffalo.EDU Fri Oct 2 15:28:53 1992 From: jagota at cs.Buffalo.EDU (Arun Jagota) Date: Fri, 2 Oct 92 15:28:53 EDT Subject: Report on optimization using NNs and KC Message-ID: <9210021928.AA06796@sybil.cs.Buffalo.EDU> *** DO NOT POST TO OTHER BULLETIN BOARDS *** The following report may be of interest for: * Combinatorial Optimization (Maximum Clique) via neural nets * Kolmogorov Complexity; Universal Prior Distribution; generating "hard" instances for optimization problems * A scheme for generating compressible binary vectors motivated by Kolmogorov Complexity ideas. Source code is offered and may be used to generate compressible test data for any application whose instances directly or indirectly utilize binary vectors; comparison of performance on such test data vs, say, data from the uniform distribution may be useful, as below. -------- Performance of MAX-CLIQUE Approximation Heuristics Under Description-Length Weighted Distributions Arun Jagota Kenneth W. Regan Technical Report Department of Computer Science State University at New York at Buffalo We study the average performance of several neural-net heuristics applied to the problem of finding the size of the largest clique in an undirected graph. This function is NP-hard even to approximate within a constant factor in the worst case, but the heuristics we study are known to do quite well on average for instances drawn from the uniform distribution on graphs of size n. We extend a theorem of M. Li and P. Vitanyi to show that for instances drawn from the "universal distribution" m(x), the average-case performance of any approximation algorithm has the same order as its worst-case performance. The universal distribution is not computable or samplable. However, we give a realistic analogue q(x) which lends itself to efficient empirical testing. 
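The report above offers the authors' own source code for generating compressible binary test vectors; that code is not reproduced here. Purely as a rough illustration of the underlying idea -- vectors with short descriptions receive exponentially more weight, in the spirit of the universal prior -- the following minimal Python sketch shows one way such test data could be produced. The seed-repetition "decompressor", the 2^-k weighting over seed lengths, and the function names are assumptions made for illustration only; they are not the q(x) distribution defined in the report.

# Illustrative sketch only -- not the authors' released generator.
# Assumption: a vector counts as "compressible" if it is produced by
# deterministically expanding a short random seed, so vectors with short
# descriptions are sampled far more often than under the uniform distribution.
import random

def expand_seed(seed_bits, n):
    """Expand a short seed into an n-bit vector by repeating the seed
    pattern (a crude stand-in for running a short program)."""
    return [seed_bits[i % len(seed_bits)] for i in range(n)]

def sample_compressible_vector(n, rng=random):
    """Sample an n-bit vector whose description length is biased to be short.

    The seed length k is drawn with probability proportional to 2**(-k),
    echoing the universal-prior idea that a string's weight decays
    exponentially with its description length.
    """
    weights = [2.0 ** -k for k in range(1, n + 1)]
    k = rng.choices(range(1, n + 1), weights=weights, k=1)[0]
    seed = [rng.randint(0, 1) for _ in range(k)]
    return expand_seed(seed, n)

if __name__ == "__main__":
    # Print one 32-bit test vector; adjacency matrices of test graphs could be
    # filled from such vectors in the same spirit.
    print("".join(str(b) for b in sample_compressible_vector(32)))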
Our results so far are: out of nine heuristics we tested, three did markedly worse under q(x) than under uniform distribution, but six others revealed little change. HOW TO ACCESS: -------------- ftp ftp.cs.buffalo.edu (or 128.205.32.9 subject-to-change) Name : anonymous > cd users/jagota > get > quit : KCC.ps, KCC.dvi (*Same but some people have had problems printing our postscript in the past. `KCC.dvi' may require `binary' mode in ftp *) : nlt.README (* Contains documentation and instructions for our compressible string generation code *) If ftp is a problem, the report may also be obtained by sending e-mail to jagota at cs.buffalo.edu Arun Jagota *** DO NOT POST TO OTHER BULLETIN BOARDS *** From jose at tractatus.siemens.com Fri Oct 2 14:41:40 1992 From: jose at tractatus.siemens.com (Steve Hanson) Date: Fri, 2 Oct 1992 14:41:40 -0400 (EDT) Subject: NIPS*92 WORKSHOP PROGRAM Message-ID: NIPS*92 WORKSHOP PROGRAM For Further information and queries on workshop please respond to WORKSHOP CHAIRPERSONS listed below ========================================================================= Character Recognition Workshop Organizers: C. L. Wilson and M. D. Garris, NIST Abstract: In order to discuss recent developments and research in OCR technology, six speakers have been invited to share from their organization's own perspective on the subject. Those invited, represent a diversified group of organizations actively developing OCR systems. Each speaker participated in the first OCR Systems Conference sponsored by the Bureau of the Census and hosted by NIST. Therefore, the impressions and results gained from the conference should provide significant context for discussions. Invited presentations: C. L. Wilson, NIST, "Census OCR Results - Are Neural Networks Better?" T. P. Vogl, ERIM, "Effect of Training Set Size on OCR Accuracy" C. L. Scofield, Nestor, "Multiple Network Architectures for Handprint and Cursive Recognition" A. Rao, Kodak, "Directions in OCR Research and Document Understanding at Eastman Kodak Company" C. J. C. Burges, ATT, "Overview of ATT OCR Technology" K. M. Mohiuddin, IBM, "Handwriting OCR Work at IBM Almaden Research Center" ========================================================================= Neural Chips: State of the Art and Perspectives. Organizer: Eros Pasero pasero at polito.it Abstract: We will encourage lively audience discussion of important issues in neural net hardware, such as: - Taxonomy: neural computer, neural processor, neural coprocessor - Digital vs. Analog: limits and benefits of the two approaches. - Algorithms or neural constraints? - Neural chips implemented in universities - Industrial chips (e.g. Intel, AT&T, Synaptics) - Future perspectives Invited presentations: TBA ========================================================================= Reading the Entrails: Understanding What's Going On Inside a Neural Net Organizer: Scott E. Fahlman, Carnegie Mellon University fahlman at cs.cmu.edu Abstract: Neural networks can be viewed as "black boxes" that learn from examples, but often it is useful to figure out what sort of internal knowledge representation (or set of "features") is being employed, or how the inputs are combined to produce particular outputs. There are many reasons why we might seek such understanding: It can tell us which inputs really are needed and which are the most critical in producing a given output. It can produce explanations that give us more confidence in the network's decisions. 
It can help us to understand how the network would react to new situations. It can give us insight into problems with the network's performance, stability, or learning behavior. Sometimes, it's just a matter of scientific curiosity: if a network does something impressive, we want to know how it works. In this workshop we will survey the available techniques for understanding what is happening inside a neural network, both during and after training. We plan to have a number of presenters who can describe or demonstrate various network-understanding techniques, and who can tell us what useful insights were gained using these techniques. Where appropriate, presenters will be encouraged to use slides or videotape to illustrate their favorite methods. Among the techniques we will explore are the following: Diagrams of weights, unit states, and their trajectories over time. Diagrams of the receptive fields of hidden units. How to create meaningful diagrams in high-dimensional spaces. Techniques for extracting boolean or fuzzy rule-sets from a trained network. Techniques for extracting explanations of individual network outputs or decisions. Techniques for describing the dynamic behavior of recurrent or time-domain networks. Learning pathologies and what they look like. Invited presentations: Still to be determined. The workshop organizer would like to hear from potential speakers who would like to give a short presentation of the kind described above. Techniques that have proven useful in real-world problems are especially sought, as are short videotape segments showing network behavior. ========================================================================= COMPUTATIONAL APPROACHES TO BIOLOGICAL SEQUENCE ANALYSIS-- NEURAL NET VERSUS TRADITIONAL PERSPECTIVES Organizers: Paul Stolorz, Santa Fe Institute and Los Alamos National Lab Jude Shavlik, University of Wisconsin. Abstract: There has been a good deal of recent interest in the use of neural networks to tackle several important biological sequence analysis problems. These problems range from the prediction of protein secondary and tertiary structure, to the prediction of DNA protein coding regions and regulatory sites, and the identification of homologies. Several promising developments have been presented at NIPS meetings in the past few years by researchers in the connectionist field. Furthermore, a number of structural biologists and chemists have been successfully using neural network methods. The sequence analysis applications encompass a rather large amount of neural network territory, ranging from feed forward architectures to recurrent nets, Hidden Markov Models and related approaches. The aim of this workshop is to review the progress made by these disparate strands of endeavor, and to analyze their respective strengths and weaknesses. In addition, the intention is to compare the class of neural network methods with alternative approaches, both new and traditional. These alternatives include knowledge based reasoning, standard non-parametric statistical analysis, Hidden Markov models and statistical physics methods. We hope that by careful consideration and comparison of neural nets with several of the alternatives mentioned above, methods can be found which are superior to any of the individual techniques developed to date. This discussion will be a major focus of the workshop, and we both anticipate and encourage vigorous debate. Invited presentations: Jude Shavlik, U.
Wisconsin: Learning Important Relations in Protein Structures Gary Stormo, U. Colorado: TBA Larry Hunter, National Library of Medicine: Bayesian Clustering of Protein Structures Soren Brunak, DTH: Network analysis of protein structure and the genetic code David Haussler, U.C. Santa Cruz: Modeling Protein Families with Hidden Markov Models Paul Stolorz and Joe Bryngelson, Santa Fe Institute and Los Alamos: Information Theory and Statistical Physics in Protein Structures ========================================================================= Statistical Regression Methods and Feedforward Nets Organizers: Lei Xu, Harvard Univ. and Adam Krzyzak, Concordia Univ. Abstract: Feedforward neural networks are often used for function approximation, density estimation and pattern classification. These tasks are also addressed by statistical regression methods. Some methods used in the neural network literature and the statistical regression literature are the same, some are different, and some are closely related. Recently, the connections between the methods in the two literatures have been explored from a number of angles, e.g., (1) connecting feedforward nets to parametric statistical regression for theoretical studies of multilayer feedforward nets; (2) relating the performance of feedforward nets to the bias-variance trade-off of nonparametric statistics; (3) connecting Radial Basis Function nets to nonparametric kernel regression to obtain several new theoretical results on the approximation ability, convergence rate and receptive field size of Radial Basis Function networks; (4) using the VC dimension to study the generalization ability of multilayer feedforward nets; (5) using other statistical methods such as projection pursuit, cross-validation, the EM algorithm, CART and MARS for training feedforward nets. Not only do these aspects still hold many interesting open issues to be explored further, but the statistical regression literature also contains many other methods and theoretical results on both nonparametric and parametric regression (e.g., L1 kernel estimation, etc.). Invited presentations: Presentations will include arranged talks and submissions. Submissions can be sent to either of the two organizers by email before Nov. 15, 1992. Each submission can be an abstract of 200--400 words. ========================================================================= Computational Models of Visual Attention Organizer: Pete Sandon, Dartmouth College Abstract: Visual attention refers to the process by which some part of the visual field is selected over other parts for preferential processing. The details of the attentional mechanism in humans have been the subject of much recent psychophysical experimentation. Along with the abundance of new data, a number of theories of attention have been proposed, some in the form of computational models simulated on computers. The goal of this workshop is to bring together computational modelers and experimentalists to evaluate the status of current theories and to identify the most promising avenues for improving understanding of the mechanisms and behavioral roles of visual attention.
Invited presentations: Pete Sandon "The time course of selection" John Tsotsos "Inhibitory beam model of visual attention" Kyle Cave "Mapping the Allocation of Spatial Attention: Knowing Where Not to Look" Mike Mozer "A principle for unsupervised decomposition and hierarchical structuring of visual objects" Eric Lumer "On the interaction between perceptual grouping, object selection, and spatial orientation of attention" Steve Yantis "Mechanisms of human visual attention: Bottom-up and top-down influences" ========================================================================= Comparison and Unification of Algorithms, Loss Functions and Complexity Measures for Learning Organizers: Isabelle Guyon, Michael Kearns and Esther Levin, AT&T Bell Labs Abstract: The purpose of the workshop is to clarify and unify the relationships between many well-studied learning algorithms, loss functions, and combinatorial and statistical measures of learning problem complexity. Many results investigating the principles underlying supervised learning from empirical observations have the following general flavor: first, a "general purpose" learning algorithm is chosen for study (for example, gradient descent or maximum a posteriori). Next, an appropriate loss function is selected, and the details of the learning model are specified (such as the mechanism generating the observations). The analysis results in a bound on the loss of the algorithm in terms of a "complexity measure" such as the Vapnik-Chervonenkis dimension or the statistical capacity. We hope that reviewing the literature with an explicit emphasis on comparisons between algorithms, loss functions and complexity measures will result in a deeper understanding of the similarities and differences of the many possible approaches to and analyses of supervised learning, and aid in extracting the common general principles underlying all of them. Significant gaps in our knowledge concerning these relationships will suggest new directions in research. Half of the available time has been reserved for discussion and informal presentations. We anticipate and encourage active audience participation. Each discussion period will begin by soliciting topics of interest from the participants for investigation. Thus, participants are strongly encouraged to think about issues they would like to see discussed and clarified prior to the workshop. All talks will be tutorial in nature. Invited presentations: Michael Kearns, Isabelle Guyon and Esther Levin: -Overview on loss functions -Overview on general purpose learning algorithms -Overview on complexity measures David Haussler: Overview on "Chinese menu" results ========================================================================= Activity-Dependent Processes in Neural Development Organizer: Adina Roskies, Salk Institute Abstract: This workshop will focus on the role of activity in setting up neural architectures. Biological systems rely upon a variety of cues, both activity-dependent and independent, in establishing their architectures. Network architectures have traditionally been pre-specified, but ongoing construction of architectures may endow networks with more computational power than static architectures have.
Biological issues such as the role of activity in development, the mechanisms by which it operates, and the type of activity necessary will be explored, as well as computational issues such as the computational value of such processes, the relation to Hebbian learning, and constructivist algorithms. Invited presentations: General Overview (Adina Roskies) The role of NMDA in cortical development (Tony Bell) Optimality, local learning rules, and the emergence of function in a sensory processing network (Ralph Linsker) Mechanisms and models of neural development through rapid volume signals (Read Montague) The role of activity in cortical development and plasticity (Brad Schlaggar) Computational advantages of constructivist algorithms (Steve Quartz) Learning, development, and evolution (Rik Belew) ========================================================================= DETERMINISTIC ANNEALING AND COMBINATORIAL OPTIMIZATION Organizer: Anand Rangarajan, Yale Univ. Abstract: Optimization problems defined on ``mixed variables'' (analog and digital) occur in a wide variety of connectionist applications. Recently, several advances have been made in deterministic annealing techniques for optimization. Deterministic annealing is a faster and more efficient alternative to simulated annealing. This workshop will focus on several of these new techniques (emerging in the last two years). Topics include improved elastic nets for the traveling salesman problem, new algorithms for graph matching, relationships between deterministic annealing algorithms and older, more conventional techniques, applications in early vision problems like surface reconstruction, internal generation of annealing schedules, etc. Invited presentations: Alan Yuille, Statistical Physics algorithms that converge Chien-Ping Lu, Competitive elastic nets for TSP Paul Stolorz, Recasting deterministic annealing as constrained optimization Davi Geiger, Surface reconstruction from uncertain data on images and stereo images. Anand Rangarajan, A new deterministic annealing algorithm for graph matching ========================================================================= The Computational Neuron Organizer: Terry Sejnowski, Salk Institute (tsejnowski at ucsd.edu) Abstract: Neurons are complex dynamical systems. Nonlinear properties arise from voltage-sensitive ionic currents and synaptic conductances; branched dendrites provide a geometric substrate for synaptic integration and learning mechanisms. What can subthreshold nonlinearities in dendrites be used to compute? How do the time courses of ionic currents affect synaptic integration and Hebbian learning mechanisms? How are ionic channels in dendrites regulated? Why are there so many different types of neurons? These are a few of the issues that we will be discussing. In addition to short scheduled presentations designed to stimulate discussion, we invite members of the audience to present one-viewgraph talks to introduce additional topics. Invited presentations: Larry Abbott - Neurons as dynamical systems. Tony Bell - Self-organization of ionic channels in neurons. Tom McKenna - Single neuron computation. Bart Mel - Computing capacity of dendrites. ========================================================================= ROBOT LEARNING Organizers: Sebastian Thrun (CMU), Tom Mitchell (CMU), David Cohn (MIT) Abstract: Robot learning has captured the attention of many researchers over the past few years.
Previous robotics research has demonstrated the difficulty of manually encoding sufficiently accurate models of the robot and its environment to succeed at complex tasks. Recently a wide variety of learning techniques ranging from statistical calibration techniques to neural networks and reinforcement learning have been applied to problems of perception, modeling and control. Robot learning is characterized by sensor noise, control error, dynamically changing environments and the opportunity for learning by experimentation. This workshop will provide a forum for researchers active in the area of robot learning and related fields. It will include informal tutorials and presentations of recent results, given by experts in this field, as well as significant time for open discussion. Problems to be considered include: How can current learning robot techniques scale to more complex domains, characterized by massive sensor input, complex causal interactions, and long time scales? How can previously acquired knowledge accelerate subsequent learning? What representations are appropriate and how can they be learned? Invited speakers: Chris Atkeson Steve Hanson Satinder Singh Andrew W. Moore Richard Yee Andy Barto Tom Mitchell Mike Jordan Dean Pomerleau Steve Suddarth ========================================================================= Connectionist Approaches to Symbol Grounding Organizers: Georg Dorffner, Univ. Vienna; Michael Gasser, Indiana Univ. Stevan Harnad, Princeton Univ. Abstract: In recent years, there has been increasing discomfort with the disembodied nature of symbols that is a hallmark of the symbolic paradigm in cognitive science and artificial intelligence and at the same time increasing interest in the potential offered by connectionist models to ``ground'' symbols. In ignoring the mechanisms by which their symbols get ``hooked up'' to sensory and motor processes, that is, the mechanisms by which intelligent systems develop categories, symbolists have missed out on what is not only one of the more challenging areas in cognitive science but, some would argue, the very heart of what cognition is about. This workshop will focus on issues in neural network based approaches to the grounding of symbols and symbol structures. In particular, connectionist models of categorisation and of label-category association will be discussed in the light of the symbol grounding problem. Invited presentations: "Grounding Symbols in the Analog World of Objects: Can Neural Nets Make the Connection?" Stevan Harnad, Princeton University "Learning Perceptually Grounded Lexical Semantics" Terry Regier, George Lakoff, Jerry Feldman, ICSI Berkeley T.B.A. Gary Cottrell, Univ. of California, San Diego "Learning Perceptual Dimensions" Michael Gasser, Indiana University "Symbols and External Embodiments - why Grounding has to Go Two Ways" Georg Dorffner, University of Vienna "Grounding Symbols on Conceptual Knowledge" Philippe Schyns, MIT ========================================================================= Continuous Speech Recognition: Is there a connectionist advantage? Organizer: Michael Franzini (maf at cs.cmu.edu) Abstract: This workshop will address the following questions: How do neural networks compare to the alternative technologies available for speech recognition? What evidence is available to suggest that connectionism may lead to better speech recognition systems? 
What comparisons have been performed between connectionist and non-connectionist systems, and how ``fair'' are these comparisons? Which approaches to connectionist speech recognition have produced the best results, and which are likely to produce the best results in the future? Traditionally, the selection criteria for NIPS papers reflect a much greater emphasis on theoretical importance of work than on performance figures, despite the fact that recognition rate is one of the most important considerations for speech recognition researchers (and often is *the* most important factor in determining their financial support). For this reason, this workshop -- to be oriented more towards performance than methodology -- will be of interest to many NIPS participants. The issue of connectionist vs. HMM performance in speech recognition is controversial in the speech recognition community. The validity of past comparisons is often disputed, as is the fundamental value of neural networks. In this workshop, an attempt will be made to address this issue and the questions stated above by citing specific experimental results and by making arguments with a theoretical basis. Preliminary list of speakers: Ron Cole Uli Bodenhausen Hermann Hild ========================================================================= Symbolic and Subsymbolic Information Processing in Biological Neural Circuits and Systems Organizer: Vasant Honavar (honavar at iastate.edu) Abstract: Traditional information processing models in cognitive psychology, which became popular with the advent of the serial computer, tended to view cognition as discrete, sequential symbol processing. Neural network or connectionist models offer an alternative paradigm for modelling cognitive phenomena that relies on continuous, parallel subsymbolic processing. Biological systems appear to combine both discrete as well as continuous, sequential as well as parallel, symbolic as well as subsymbolic information processing in various forms at different levels of organization. The flow of neurotransmitter molecules and of photons into receptors is quantal; the depolarization and hyperpolarization of neuron membranes is analog; the genetic code and the decoding processes appear to be digital; global interactions mediated by neurotransmitters and slow waves appear to be both analog and digital. The purpose of this workshop is to bring together interested computer scientists, neuroscientists, psychologists, mathematicians, engineers, physicists and systems theorists to examine and discuss specific examples as well as general principles (to the extent they can be gleaned from our current state of knowledge) of information processing at various levels of organization in biological neural systems.
The workshop will consist of several short presentations by participants. There will be ample time for informal presentations and discussion centering around a number of key topics such as: * Computational aspects of symbolic vs. subsymbolic information processing * Coordination and control structures and processes in neural systems * Encoding and decoding structures and processes in neural systems * Generative structures and processes in neural systems * Suitability of particular paradigms for modelling specific phenomena * Software requirements for modelling biological neural systems Invited presentations: TBA Those interested in giving a presentation should write to honavar at iastate.edu ========================================================================= Computational Issues in Neural Network Training Organizers: Scott Markel and Roger Crane, Sarnoff Research Abstract: Many of the best practical neural network training results are reported by researchers who use variants of back-propagation and/or develop their own algorithms. Few results are obtained by using classical numerical optimization methods, although such methods can be used effectively for many practical applications. Many competent researchers have concluded, based on their own experience, that classical methods have little value in solving real problems. However, use of the best commercially available implementations of such algorithms can help in understanding numerical and computational issues that arise in all training methods. Also, classical methods can be used effectively to solve practical problems. Examples of numerical issues that are appropriate to discuss in this workshop include: convergence rates; local minima; selection of starting points; conditioning (for higher order methods); characterization of the error surface; ... . Ample time will be reserved for discussion and informal presentations. We will encourage lively audience participation. ========================================================================= Real Applications of Real Biological Circuits Organizers: Richard Granger, UC Irvine and Jim Schwaber, Du Pont Abstract: The architectures, performance rules and learning rules of most artificial neural networks are at odds with the anatomy and physiology of real biological neural circuitry. For example, mammalian telencephalon (forebrain) is characterized by extremely sparse connectivity (~1-5%), almost entirely lacks dense recurrent connections, and has extensive lateral local circuit connections; inhibition is delayed-onset and relatively long-lasting (100s of milliseconds) compared to rapid-onset, brief excitation (10s of milliseconds), and the two are not interchangeable. Excitatory connections learn, but there is very little evidence for plasticity in inhibitory connections. Real synaptic plasticity rules are sensitive to temporal information, are not Hebbian, and do not contain "supervision" signals in any form related to those common in ANNs. These discrepancies between natural and artificial NNs raise the question of whether such biological details are largely extraneous to the behavioral and computational utility of neural circuitry, or whether such properties may yield novel rules that confer useful computational abilities to networks that use them. 
In this workshop we will explicitly analyze the power and utility of a range of novel algorithms derived from detailed biology, and illustrate specific industrial applicatons of these algorithms in the fields of process control and signal processing. It is anticipated that these issues will raise controversy, and half of the workshop will be dedicated to open discussion. Preliminary list of speakers: Jim Schwaber, DuPont Bbatunde Ogunnaike, DuPont Richard Granger, University of California, Irvine John Hopfield, Cal Tech ========================================================================= Recognizing Unconstrained Handwritten Script Organizers: Krishna Nathan, IBM and James A. Pittman, MCC Abstract: Neural networks have given new life to an old research topic, the segmentation and recognition of on-line handwritten script. Isolated handprinted character recognition systems are moving from research to product development, and researchers have moved forward to integrated segmentation and recognition projects. However, the 'real world' problem is best described as one of unconstrained handwriting recognition (often on-line) since it includes both printed and cursive styles -- often within the same word. The workshop will provide a forum for participants to share ideas on preprocessing, segmentation, and recognition techniques, and the use of context to improve the performance of online handwriting recognition systems. We will also discuss issues related to what constitutes acceptable recognition performance. The collection of training and test data will also be addressed. ========================================================================= Time Series Analysis and Predic.... Organizers: John Moody, Oregon Grad. Inst., Mike Mozer, Univ. of Colorado and Andreas Weigend, Xerox PARC Abstract: Several new techniques are now being applied to the problem of predicting the future behavior of a temporal sequence and deducing properties of the system that produced the time series. We will discuss both connectionist and non-connectionist techniques. Issues include algorithms and architectures, model selection, performance measures, iterated vs long term prediction, robust prediction and estimation, the number of degrees of freedom of the system, how much noise is in the data, whether it is chaotic or not, how the error grows with prediction time, detection and classification of signals in noise, etc. Half the available time has been reserved for discussion and informal presentations. We will encourage lively audience participation. Invited presentations: Classical and Non-Neural Approaches: Advantages and Problems. (John Moody) Connectionist Approaches: Problems and Interpretations. (Mike Mozer) Beyond Prediction: What can we learn about the system? (Andreas Weigend) Physiological Time Series Modeling (Volker Tresp) Financial Forecasting (William Finnoff / Georg Zimmerman) FIR Networks (Eric Wan) Dimension Estimation (Fernando Pineda) ========================================================================= Applications of VLSI Neural Networks Organizer: Dave Andes, Naval Air Warfare Center Abstract: This workshop will provide a forum for discussion of the problems and opportunities for neural net hardware systems which solve real problems under real time and space constraints. Some of the most difficult requirements for systems of this type come, not surprisingly, from the military. Several examples of these problems and VLSI solutions will be discussed in this working group. 
Examples from outside the military will also be discussed. At least half the time will be devoted to open discussion of the issues raised by the experiences of those who have already applied VLSI-based ANN techniques to real world problems. Preliminary list of speakers: Bill Camp, IBM Federal Systems Lynn Kern, Naval Air Warfare Center Chuck Glover, Oak Ridge National Lab Dave Andes, Naval Air Warfare Center From kohring at hlrserv.hlrz.kfa-juelich.de Sat Oct 3 07:55:17 1992 From: kohring at hlrserv.hlrz.kfa-juelich.de (G. Kohring) Date: Sat, 3 Oct 92 12:55:17 +0100 Subject: Two Papers Message-ID: <9210031155.AA14653@hlrserv.hlrz.kfa-juelich.de> The paper "On the Q-State Neuron Problem in Attractor Neural Networks," whose abstract appears below, has recently been accepted for publication in the journal "Neural Networks". It discusses some recent results which demonstrate that the use of analog neurons in attractor neural networks is not practical. An abbreviated account of this work recently appeared in the Letters section of "Journal de Physique" (Journal de Physique I, 2 (1992) p. 1549) and the abstract for this paper is also given below. If anyone does not have access to these journals and would like to get a copy of these papers, please send a request to the following address. G.A. Kohring HLRZ an der KFA Juelich Postfach 1913 D-5170 Juelich, Germany e-mail: kohring at hlrsun.hlrz.kfa-juelich.de On the Q-State Neuron Problem in Attractor Neural Networks (Neural Networks, in press) ABSTRACT The problems encountered when using multi-state neurons in attractor neural networks are discussed. In particular, straightforward implementations of neurons with Q states lead to information storage capacities, E, that decrease like E ~ log_2 Q/Q^2. More sophisticated schemes yield capacities that decrease like E ~ log_2 Q/Q, but with retrieval times increasing in proportion to Q. There also exist schemes whereby the information capacity reaches its maximum value of unity, but the retrieval time grows with the number of neurons, N, like O(N^3) instead of O(N^2) as in conventional models. Furthermore, since Q-state models approximate analog neurons when Q is large, the results demonstrate that the use of analog neurons is not feasible. After discussing these problems, a solution is proposed in which the information capacity is independent of Q, and the retrieval time increases in proportion to log_2 Q. The retrieval properties of this model, i.e., basins of attraction, etc., are calculated and shown to be in agreement with simple theoretical arguments. Finally, a critical discussion of this approach is given. On the Problems of Neural Networks with Multi-state Neurons (Journal de Physique I, 2 (1992) p. 1549) ABSTRACT For realistic neural network applications the storage and recognition of gray-tone patterns, i.e., patterns where each neuron in the network can take one of Q different values, is more important than the storage of black and white patterns, although the latter has been more widely studied. Recently, several groups have shown the former task to be problematic with current techniques since the useful storage capacity, ALPHA, generally decreases like: ALPHA ~ Q^{-2}. In this paper one solution to this problem is proposed, which leads to the storage capacity decreasing like: ALPHA ~ (log_2 Q)^{-1}. For realistic situations, where Q=256, this implies an increase of nearly four orders of magnitude in the storage capacity. 
The price paid is that the time needed to recall a pattern increases like: log_2 Q. This price can be partially offset by an efficient parallel program which runs at 1.4 Gflops on a 32-processor iPSC/860 Hypercube. From jang at diva.berkeley.edu Sat Oct 3 20:01:50 1992 From: jang at diva.berkeley.edu (Jyh-Shing Roger Jang) Date: Sat, 3 Oct 92 17:01:50 -0700 Subject: paper available Message-ID: <9210040001.AA19824@diva.Berkeley.EDU> The following paper has been placed on the neuroprose archive as jang.fuzzy.ps.Z and is available via anonymous ftp (from archive.cis.ohio-state.edu in the pub/neuroprose directory). This paper will appear in IEEE Trans. on NN. ========================================================================= Self-Learning Fuzzy Controllers Based on Temporal Back Propagation ABSTRACT: This paper presents a generalized control strategy that enhances fuzzy controllers with a self-learning capability for achieving prescribed control objectives in a near-optimal manner. This methodology, termed temporal back propagation, is model-insensitive in the sense that it can deal with plants that can be represented in a piecewise differentiable format, such as difference equations, neural networks, GMDH, fuzzy models, etc. Regardless of the number of inputs and outputs of the plants under consideration, the proposed approach can either refine the fuzzy if-then rules obtained from human experts, or automatically derive the fuzzy if-then rules if human experts are not available. The inverted pendulum system is employed as a testbed to demonstrate the effectiveness of the proposed control scheme and the robustness of the acquired fuzzy controller. =========================================================================== Here is an example of how to retrieve this file: gvax> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron at wherever 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get jang.fuzzy.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for jang.fuzzy.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. gvax> uncompress jang.fuzzy.ps.Z gvax> lpr jang.fuzzy.ps -- J.-S. Roger Jang 571 Evans, EECS Department Univ. of California Berkeley, CA 94720 jang at diva.berkeley.edu (510)-642-5029 fax: (510)642-5775 From jang at diva.berkeley.edu Mon Oct 5 12:04:15 1992 From: jang at diva.berkeley.edu (Jyh-Shing Roger Jang) Date: Mon, 5 Oct 92 09:04:15 -0700 Subject: paper available in Neuroprose Message-ID: <9210051604.AA16522@diva.Berkeley.EDU> The following paper has been placed on the neuroprose archive as jang.rbfn_fuzzy.ps.Z and is available via anonymous ftp (from archive.cis.ohio-state.edu in the pub/neuroprose directory). This paper will appear in IEEE Trans. on NN. ========================================================================= TITLE: Functional Equivalence between Radial Basis Function Networks and Fuzzy Inference Systems ABSTRACT: This short article shows that under some minor restrictions, the functional behavior of radial basis function networks and fuzzy inference systems is actually equivalent. This functional equivalence implies that advances in each literature, such as new learning rules or analysis on representational power, etc., can be applied to both models directly. 
It is of interest to observe that two models stemming from different origins turn out to be functional equivalent. =========================================================================== Here is an example of how to retrieve this file: gvax> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron at wherever 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get jang.rbfn_fuzzy.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for jang.rbfn_fuzzy.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. gvax> uncompress jang.rbfn_fuzzy.ps.Z gvax> lpr jang.rbfn_fuzzy.ps -- J.-S. Roger Jang 571 Evans, EECS Department Univ. of California Berkeley, CA 94720 jang at diva.berkeley.edu (510)-642-5029 fax: (510)642-5775 From jose at tractatus.siemens.com Mon Oct 5 08:07:30 1992 From: jose at tractatus.siemens.com (Steve Hanson) Date: Mon, 5 Oct 1992 08:07:30 -0400 (EDT) Subject: for Registration See Below Message-ID: FOR NIPS*92 REGISTRATION SEE BELOW NEURAL INFORMATION PROCESSING SYSTEMS (NIPS) -Natural and Synthetic- Monday, November 30 - Thursday, December 3, 1992 Denver, Colorado This is the sixth meeting of an inter-disciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. A day of tutorial presentations (Nov 30) will precede the regular session and two days of focused workshops will follow at a nearby ski area (Dec 4-5). Major categories and examples of subcategories for paper submissions are the following; Neuroscience: Studies and Analyses of Neurobiological Systems, Inhibition in cortical circuits, Signals and noise in neural computation, Theoretical Neurobiology and Neurophysics. Theory: Computational Learning Theory, Complexity Theory, Dynamical Systems, Statistical Mechanics, Probability and Statistics, Approximation Theory. Implementation and Simulation: VLSI, Optical, Software Simulators, Implementation Languages, Parallel Processor Design and Benchmarks. Algorithms and Architectures: Learning Algorithms, Constructive and Pruning Algorithms, Localized Basis Functions, Tree Structured Networks, Performance Comparisons, Recurrent Networks, Combinatorial Optimization, Genetic Algorithms. Cognitive Science & AI: Natural Language, Human Learning and Memory, Perception and Psychophysics, Symbolic Reasoning. Visual Processing: Stereopsis, Visual Motion, Recognition, Image Coding and Classification. Speech and Signal Processing: Speech Recognition, Coding, and Synthesis, Text-to-Speech, Adaptive Equalization, Nonlinear Noise Removal. Control, Navigation, and Planning: Navigation and Planning, Learning Internal Models of the World, Trajectory Planning, Robotic Motor Control, Process Control. Applications: Medical Diagnosis or Data Analysis, Financial and Economic Analysis, Timeseries Prediction, Protein Structure Prediction, Music Processing, Expert Systems. The technical program will contain plenary, contributed oral and poster presentations with no parallel sessions. All presented papers will be due (January 13, 1993) after the conference in camera-ready format and will be published by Morgan Kaufmann. 
FOR REGISTRATION PLEASE SEND YOUR NAME AND ADDRESS ASAP TO: NIPS*92 Registration SIEMENS Research Center 755 College Road East Princeton, NJ, 08540 NIPS*92 Organizing Committee: General Chair, Stephen J. Hanson, Siemens Research & Princeton University; Program Chair, Jack Cowan, University of Chicago; Publications Chair, Lee Giles, NEC; Publicity Chair, Davi Geiger, Siemens Research; Treasurer, Bob Allen, Bellcore; Local Arrangements, Chuck Anderson, Colorado State University; Program Co-Chairs: Andy Barto, U. Mass.; Jim Burr, Stanford U.; David Haussler, UCSC; Alan Lapedes, Los Alamos; Bruce McNaughton, U. Arizona; Bartlett Mel, JPL; Mike Mozer, U. Colorado; John Pearson, SRI; Terry Sejnowski, Salk Institute; David Touretzky, CMU; Alex Waibel, CMU; Halbert White, UCSD; Alan Yuille, Harvard U.; Tutorial Chair: Stephen Hanson; Workshop Chair: Gerry Tesauro, IBM; Domestic Liaisons: IEEE Liaison, Terrence Fine, Cornell; Government & Corporate Liaison, Lee Giles, NEC; Overseas Liaisons: Mitsuo Kawato, ATR; Marwan Jabri, University of Sydney; Benny Lautrup, Niels Bohr Institute; John Bridle, RSRE; Andreas Meier, Simon Bolivar U. From mclennan at cs.utk.edu Mon Oct 5 16:10:20 1992 From: mclennan at cs.utk.edu (mclennan@cs.utk.edu) Date: Mon, 5 Oct 92 16:10:20 -0400 Subject: report available Message-ID: <9210052010.AA05363@thud.cs.utk.edu> **DO NOT FORWARD TO OTHER GROUPS** The following technical report has been placed in the Neuroprose archives at Ohio State (filename: maclennan.flexcomp.ps.Z). Ftp instructions follow the abstract. ----------------------------------------------------- Research Issues in Flexible Computing Two Presentations in Japan Bruce MacLennan Computer Science Department University of Tennessee Knoxville, TN 37996 maclennan at cs.utk.edu Technical Report CS-92-172 ABSTRACT: This report contains the text of two presentations made in Japan in 1991, both of which deal with the Japanese ``Real World Computing Project'' (previously known as the ``New Information Processing Technology,'' and informally as the ``Sixth Generation Project''). (1) ``Flexible Computing: How to Make it Succeed'' (invited presentation, Institute for Supercomputing Research workshop, New Directions in Supercomputing): Many applications require the flexible processing of large amounts of ambiguous, incomplete, or redundant information, including images, speech and natural language. Recent advances have shown that many of these problems can be effectively solved by _emergent computation_, which is the exploitation of the self-organizing, collective and cooperative phenomena arising from the interaction of large numbers of simple computational elements obeying local dynamical laws. Accomplishing flexible computing will require basic research in three areas. THEORY: We need to understand the dynamical and computational properties of systems with very high degrees of parallelism (more than a million elements). SOFTWARE: We need to understand the representation and processing of subsymbolic and symbolic information in the brain. HARDWARE: We need to be able to implement systems having a million to a billion analog processors. 
(2) ``The Emergence of Symbolic Processes from the Subsymbolic Substrate'' (panel presentation, MITI, International Symposium on New Information Processing Technologies '91): A central question for the success of neural network technology is the relation of symbolic processes (e.g., language and logic) to the underlying subsymbolic processes (e.g., pattern recognition, analogical reasoning and learning). This is not simply an issue of integrating neural networks with conventional expert system technology. Human symbolic cognition is flexible because it is not purely formal, and because it retains some of the ``softness'' of the subsymbolic processes. If we want our computers to be as flexible as people, then we need to understand the emergence of the discrete and symbolic from the continuous and subsymbolic. ----------------------------------------------------- FTP INSTRUCTIONS Either use the Getps script, or do the following: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get maclennan.flexcomp.ps.Z ftp> quit unix> uncompress maclennan.flexcomp.ps.Z unix> lpr maclennan.flexcomp.ps (or however you print postscript) If you need hardcopy, then send your request to: library at cs.utk.edu Bruce MacLennan Department of Computer Science 107 Ayres Hall The University of Tennessee Knoxville, TN 37996-1301 (615)974-0994/5067 FAX: (615)974-4404 maclennan at cs.utk.edu From sontag at control.rutgers.edu Mon Oct 5 18:28:35 1992 From: sontag at control.rutgers.edu (sontag@control.rutgers.edu) Date: Mon, 5 Oct 92 18:28:35 BST Subject: Finiteness of VC Dimension of Sigmoidal Feedforward "Neural Nets" Message-ID: <9210052228.AA03534@control.rutgers.edu> Finiteness of VC Dimension of Sigmoidal Feedforward "Neural Nets" Angus Macintyre, Oxford University Eduardo Sontag, Rutgers University [This is NOT a TeX file; it is just for reading on the screen.] It was until now, as far as we know, an open question whether "sigmoidal neural networks" lead to learnable (in the sense of sample complexity) classes of concepts. We wish to remark via this announcement that indeed the corresponding VC dimension is finite. The result holds without imposing any bounds on weights. The proof, which is outlined below, consists of simply putting together a couple of recent results in model theory. (A detailed paper, exploring other consequences of these results in the context of neural networks, is being prepared. This problem is also related to issues of learnability for sparse polynomials [="fewnomials"].) More precisely, we define a _sigma-circuit_ as an unbounded fan-in circuit (i.e., a directed acyclic graph) whose edges are labeled by real numbers (called "weights"), and, except for the input nodes (i.e., the nodes of in-degree zero), every node is also labeled by a real number (called its "bias"). There is a single output node (i.e., node of out-degree zero). We think of such a circuit as computing a function R^m -> R, where m is the number of input nodes. This function is inductively defined on nodes V as follows. If V is the i-th input node, it computes just F(u1,...,um)=ui. If V is a noninput node, its function is s( w0 + w1.u1 + ... + wk.uk ) where w0 is the bias of the node V, u1,...,uk are the functions computed by the nodes Vi incident to V, and wi is the weight in the edge from Vi to V. The function computed by the output node is the function computed by the circuit. Here "s" denotes the "standard sigmoid": s(x) = 1 / (1 + e^-x). 
(The results will depend critically on the choice of this particular s(x), which is standard in neural network theory. Minor modifications are possible, but the result is most definitely false if, e.g., s(x) = sin(x). The result is even false for other "sigmoidal-type" functions s; see e.g. [5].) We also define a _sigmoidal feedforward architecture_ as a circuit in which all weights and biases are left as variables. In that case, we write F(u,w) for the function computed by the circuit obtained by specializing all weights and biases to a particular vector w. We view an architecture in the obvious way as a map F : R^m x R^r -> R where 'r' is the total number of weights and biases. For each subset S of R^m, a _dichotomy_ on S is a function c: S -> {-1,+1}. We say that a function f: R^m -> R _implements_ this dichotomy if it holds that c(x) > 0 <==> f(x) > 0 for every x in S. THEOREM. Let F be an architecture. Then there exists a (finite) integer f = VC(F) such that, for each subset S of R^m of cardinality > f, there is some dichotomy c on S which cannot be implemented by any f = F(.,w), w in R^r. Sketch of proof: First note that one can write a formula Q(u,w) in the first order language of the real numbers with addition, multiplication, order, and real exponentiation, Th(R,+,.,0,1,<,exp(.)) (all real constants are included as well), such that: for each (u,w) in R^m x R^r, F(u,w) > 0 if and only if Q(u,w) is true. (Divisions, as needed in computing s(x), can be encoded by adding new variables z, including an atomic formula of the type z . (1 + e^-x) = 1, and then existentially quantifying on z.) The paper [1] deals precisely with the problem of proving the existence of such finite integers f, for formulas in first order theories. On page 383, first paragraph, it is shown that the desired result will be true if there is _order-minimality_ for the corresponding theory. In our context, this latter property means that every set of the form: { x in R | P(x,w) true }, w in R^r, where P is any formula in the above language, with SCALAR x, must be a finite union of intervals (possibly points). Now, the papers [2]-[4] prove that order-minimality indeed holds. (The paper [1] stated that order minimality was an open problem for Th(R,+,.,0,1,<,exp(.)); in fact, the papers [2]-[4] were written while [1] was in press.) Remarks: Note that we do not give explicit bounds here, but only remark that finiteness holds. However, the results being used are essentially constructive, and it is in principle possible to compute the VC dimension of a given architecture. Observe also that one can extend this result to more general architectures than the ones considered here. High-order nets (for which products of inputs are allowed) can be treated with the same proof, as are if-then-else nodes. The latter allow the application of the same techniques to the Blum-Shub-Smale model of computation as well. A number of decidability issues, loading, interpolation, teaching dimension questions, and so forth, for sigmoidal nets can also be treated using model-theoretic techniques, and will be the subject of a forthcoming paper. References: [1] Laskowski, Michael C., "Vapnik-Chervonenkis classes of definable sets," J. London Math Soc (2) 45 (1992): 377-384. [2] Wilkie, Alec J., "Some model completeness results for expansions of the ordered field of reals by Pfaffian functions," preprint, Oxford, 1991, submitted. 
[3] Wilkie, Alec J., "Smooth o-minimal theories and the model completeness of the real exponential field," preprint, Oxford, 1991, submitted. [4] Macintyre, Angus, Lou van den Dries, and David Marker, "The elementary theory of restricted analytic fields with exponentiation," preprint, Oxford, 1991. [5] Sontag, Eduardo D., "Feedforward nets for interpolation and classification," J. Comp. Syst. Sci. 45 (1992): 20-48. From tgd at chert.CS.ORST.EDU Mon Oct 5 19:16:30 1992 From: tgd at chert.CS.ORST.EDU (Tom Dietterich) Date: Mon, 5 Oct 92 16:16:30 PDT Subject: Machine Learning 9:4 Message-ID: <9210052316.AA17234@research.CS.ORST.EDU> Machine Learning October 1992, Volume 9, Number 4 Explorations of an Incremental, Bayesian Algorithm for Categorization J. R. Anderson and Michael Matessa A Bayesian Method for the Induction of Probabilistic Networks from Data G. F. Cooper and E. Herskovits A Framework for Average Case Analysis of Conjunctive Learning Algorithms. M. J. Pazzani and W. Sarrett Learning Boolean Functions in an Infinite Attribute Space A. Blum Technical Note: First Nearest Neighbor Classification on Frey and Slate's Letter Recognition Problem T. C. Fogarty ----- Subscriptions - Volume 8-9 (8 issues) includes postage and handling. $140 Individual $88 Member AAAI, CSCSI $301 Institutional Kluwer Academic Publishers P.O. Box 358 Accord Station Hingham, MA 02018-0358 USA or Kluwer Academic Publishers Group P.O. Box 322 3300 AH Dordrecht THE NETHERLANDS (AAAI members please include membership number) From usui at tut.ac.jp Tue Oct 6 10:41:11 1992 From: usui at tut.ac.jp (usui@tut.ac.jp) Date: Tue, 6 Oct 92 10:41:11 JST Subject: IJCNN '93 NAGOYA (Call for Papers) Message-ID: <9210060141.AA01902@bpel.tutics.tut.ac.jp> I J C N N '9 3 ( N A G O Y A ) *** C A L L F O R P A P E R S *** --------------------------------------------------------------------------- Internatinal Joint Conference on Neural Networks Nagoya Congress Center, Japan October 25-29, 1993 IJCNN'93-NAGOYA co-sponsored by the Japanese Neural Network Society (JNNS), the IEEE Neural Networks Council (NNC), the International Neural Network Society (INNS), the European Neural Network Society (ENNS), the Society of Instrument and Control Engineers (SICE, Japan), the Institute of Electronics, Information and Communication Engineers (IEICE, Japan), the Nagoya Industrial Science Research Institute, the Aichi Prefectural Government and the Nagoya Municipal Government cordially invite interested authors to submit papers in the field of neural networks for presentation at the Conference. Nagoya is a historical city famous for Nagoya Castle and is located in the central major industrial area of Japan. There is frequent direct air service from most countries. Nagoya is 2 hours away from Tokyo or 1 hour from Osaka by bullet train. Papers may be submitted for consideration as oral or poster presentations in the following areas: Neurobiological Systems Self-organization Cognitive Science Learning & Memory Image Processing & Vision Robotics & Control Speech, Hearing & Language Hybrid Systems Sensorimotor Systems (Fuzzy, Genetic, Expert Systems, AI) Neural Network Architectures Implementation Network Dynamics (Electronic, Optical, Bio-chips) Optimization Other Applications (Medical and Social Systems, Art, Economy, etc. Please specify the area of the application) Four(4) page papers MUST be received by April 30, 1993. Papers received after that date will be returned unopened. 
International authors should submit their work via Air Mail or Express Courier so as to ensure timely arrival. All submissions will be acknowledged by mail. Papers will be reviewed by senior researchers in the field, and all authors will be informed of the decisions at the end of the review process by June 30, 1993. A limited number of papers will be accepted for oral and poster presentations. No poster sessions are scheduled in parallel with oral sessions. All accepted papers will be published as submitted in the conference proceedings, which should be available at the conference for distribution to all regular conference registrants. Please submit six(6) copies (one camera-ready original and five copies) of the paper. Do not fold or staple the original camera-ready copy. The four page papers, including figures, tables, and references, should be written in English. The paper submitted over four pages will be charged 30,000 YEN per extra page. Papers should be submitted on 210mm x 297mm (A4) or 8-1/2" x 11" (letter size) white paper with one inch margins on all four sides (actual space to be allowed to type is 165mm (W) x 228mm (H) or 6-1/2" x 9"). They should be prepared by typewriter or letter-quality printer in one or two-column format, single-spaced, in Times or similar font of 10 points or larger, and printed on one side of the page only. Please be sure that all text, figures, captions, and references are clean, sharp, readable, and of high contrast. Fax submission are not acceptable. Centered at the top of the first page should be the complete title, author(s), affiliation(s), and mailing address(es), followed by a blank space and then an abstract, not to exceed 15 lines, followed by the text. In an accompanying letter, the following should be included. Send papers to: IJCNN'93-NAGOYA Secretariat. Full Title of the Paper Presentation Preferred Oral or Poster Corresponding Author Presenter* Name, Mailing address Name, Mailing address Telephone and FAX numbers Telephone and FAX numbers E-mail address E-mail address Technical Session Audio Visual Requirements 1st and 2nd choices e.g., 35mm Slide, OHP, VCR * Students who wish to apply for the Student Award, please specify and enclose a verification letter of status from the Department head. Call for Tutorials ------------------ Tutorials for IJCNN'93-NAGOYA will be held on Monday, October 25, 1993. Each tutorial will be three hours long. The tutorials should be designed as such and not as expanded talks. They should lead the student at the college Senior level through a pedagogically understandable development of the subject matter. Experts in neural networks and related fields are encouraged to submit proposed topics for tutorials. The proposal should be one to two pages long and describe in some detail the subject matter to be covered in the three-hour tutorial. Please mail proposals by January 5, 1993, to IJCNN'93-NAGOYA Secretariat. Industry Forum -------------- A major industry forum will be held in the afternoon on Tuesday, October 26, 1993. Speakers will include representatives from industry, government, and academia. The aim of the forum is to permit attendees to understand more fully possible industrial applications of neural networks, discuss problems that have arisen in industrial applications, and to delineate new areas of research and development of neural network applications. 
Exhibit Information ------------------- Exhibitors are encouraged to present the latest innovations in neural networks, including electronic and optical neuro computers, fuzzy neural networks, neural network VLSI chips and development systems, neural network design and simulation tools, software systems, and application demonstration systems. A large group of vendors and participants from academia, industry and government are expected. We believe that the IJCNN'93-NAGOYA will be the neural network largest conference and trade-show in Japan, in which to exhibit your products. Potential exhibitors should plan to sign up before April 30, 1993 for exhibit booths since exhibit space is limited. Vendors may contact the IJCNN'93-NAGOYA Secretariat. Committees & Chairs -------------------- Advisory Chair: Fumio Harashima, University of Tokyo Vicecochairs: Russell Eberhart (IEEE NNC), Research Triangle Institute Paul Werbos (INNS), National Science Foundation Teuvo Kohonen (ENNS), Helsinki University of Technology Organizing Chair: Shun-ichi Amari, University of Tokyo Program Chair: Kunihiko Fukushima, Osaka University Cochairs: Robert J. Marks,II (IEEE NNC), University of Washington Harold H. Szu (INNS), Naval Surface Warfare Center Rolf Eckmiller (ENNS), University of Dusseldorf Noboru Sugie, Nagoya University Steering Chair: Toshio Fukuda, Nagoya University General Affair Chair: Fumihito Arai, Nagoya University Finance Chairs: Hide-aki Saito, Tamagawa University Roy S. Nutter,Jr, West Virginia University Publicity Chairs: Shiro Usui, Toyohashi University of Technology Evangelia Micheli-Tzanakou, Rutgers University Publication Chair: Yoichi Okabe, University of Tokyo Exhibits Chairs: Masanori Idesawa, Riken Shigeru Okuma, Nagoya University Local Arrangement Chair: Yoshiki Uchikawa, Nagoya University Industry Forum Chairs: Noboru Ohnishi, Nagoya University Hisato Kobayashi, Hosei University Social Event Chair: Kazuhiro Kosuge, Nagoya University Tutorial Chair: Minoru Tsukada, Tamagawa University Technical Tour Chair Hideki Hashimoto, University of Tokyo IJCNN'93-NAGOYA Secretariat: Travel Plaza International Chubu, Inc. Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan Phone: +81-52-561-9880/8655 Fax: +81-52-561-1241 --------------------------------------------------------------------------- R E G I S T R A T I O N ----------------------- Registration Fee ---------------- Full conference registration fee includes admission to all sessions, exhibit area, welcome reception and proceedings. Tutorials and banquet are NOT included. ---------------------------------------------------------- | Member-ship | Before | After | | | | Aug. 31 '93 | Sept. 1 '93 | On-site | | ---------------------------------------------------------| | Member* | 45,000 yen | 55,000 yen | 60,000 yen | | Non-Member | 55,000 yen | 65,000 yen | 70,000 yen | | Student** | 12,000 yen | 15,000 yen | 20,000 yen | ---------------------------------------------------------- Tutorial Registration Fee ------------------------- Tutorials will be held on Monday, October 25, 1993, 10:00 am-1:00 pm. and 3:00 pm - 6:00 pm. The complete list of tutorials will be available in the June mailing. ------------------------------------------------------------ | | | Before August 31, 93 | After | | | |-------------------------| Sept. 1, | | Member- | Option | |Univ. 
& Non | `93 | | ship | | Industrial |profit Inst.| | | -----------------------------------------------------------| | Member* | Half day | 20,000 yen | 7,000 yen | 40,000 yen | | | Full day | 30,000 yen | 10,000 yen | 60,000 yen | |------------------------------------------------------------| | Non- | Half day | 30,000 yen | 10,000 yen | 50,000 yen | | Member | Full day | 45,000 yen | 15,000 yen | 80,000 yen | |------------------------------------------------------------| | Student**| Half day | ---------- | 5,000 yen | 20,000 yen | | | Full day | ---------- | 7,500 yen | 30,000 yen | ------------------------------------------------------------ * A member of co-sponsoring and co-operating societies. ** Students must submit a verification letter of full-time status from the Department head. Banquet ------- The IJCNN'93-NAGOYA Banquet will be held on Thursday, October 28, 1993. Note that the Banquet ticket (5,000 yen/person) is not included in the registration fee. Pre-registration is recommended, since the number of seats is limited. The registration for the Banquet can be made at the same time with the conference registration. Payment and Remittance Payment for registration and tutorial fees should be in one of the following forms : 1. A bank transfer to the following bank account: Name of Bank: Tokai Bank, Nagoya Ekimae-Branch Name of Account: Travel Plaza International Chubu, Inc. EC-ka Account No.: 1079574 Address: 6F Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan 2. Credit Cards (American Express, Diners, Visa, Master Card) are acceptable except for domestic registrants. Please indicate your card number and expiration date on the Registration Form Note: When making remittance, please send Registration Form to the IJCNN'93-NAGOYA Secretariat together with a copy of your bank's receipt for transfer. Personal checks and other currencies will not be accepted except Japanese yen. Confirmation and Receipt Upon receiving your Registration Form and confirming your payment, the IJCNN'93-NAGOYA Secretariat will send you a confirmation / receipt. This confirmation should be retained and presented at the registration desk of the conference site. Cancellation and Refund of the Fees All financial transactions for the conference are being handled by the IJCNN'93-NAGOYA Secretariat. Please send a written notification of cancellation directly to the office. Cancellations received on or before September 30, 1993, 50% cancel fee will be charged. We regret that no refunds for registration can be made after October 1, 1993. All refunds will be proceeded after the conference. NAGOYA ------ The City of Nagoya, with a population of over two million, is the principal city of central Japan and lies at the heart of one of the three leading areas of the country. The area in and around the city contains a large number of high-tech industries with names known worldwide, such as Toyota, Mitsubishi, Honda, Sony and Brother. The city's central location gives it excellent road and rail links to the rest of the country; there exist direct air services to 18 other cities in Japan and 26 cities abroad. Nagoya enjoys a temperate climate and agriculture flourishes on the fertile plain surrounding the city. The area has a long history; Nagoya is the birth place of two of Japan's greatest heroes: the Lords Oda Nobunaga and Toyotomi Hideyoshi, who did much to bring the 'Warring States' period to an end. Tokugawa Ieyasu who completed the task and established the Edo period was also born in the area. 
Nagoya is flourished under the benevolent rule of this lord and his descendants Climate and Clothing The climate in Nagoya in the late October is usually agreeable and stable, with an average temperature of 16-23 C(60-74 F). Heavy clothing is not necessary, however, a light sweater is recommended. Business suit as well as casual clothing is appropriate. TRAVEL INFORMATION ------------------ Official Travel Agent Travel Plaza International Chubu, Inc. (TPI) has been appointed as the Official Travel Agent for IJCNN'93-NAGOYA, JAPAN to handle all travel arrangements in Japan. All inquiries and application forms for hotel accommodations described herein should be addressed as follows: Travel Plaza International Chubu, Inc. Shirakawa Dai-san Bldg. 4-8-10 Meieki, Nakamura-ku Tel: +81-52-561-9880/8655 Nagoya 450, Japan Fax: +81-52-561-1241 Airline Transportation Participants from Europe and North America who are planning to come to Japan by air are advised to get in touch with the following travel agents who can provide information on discount fares. Departure cities are Los Angeles, Washington, New York, Paris, and London. Japan Travel Bureau U.K. Inc. 9 Kingsway London Tel: (01)836-9393 WC2B 6XF, England, U.K. Fax: (01)836-6215 Japan Travel Bureau International Inc. Equitable Tower 11th Floor New York, N.Y. 10019 Tel: (212)698-4955 U.S.A. Fax: (212)246-5607 Japan Travel Bureau Paris 91 Rue du Faubourg Saint-Honore 750008 Paris Tel: (01)4265-1500 France Fax: (01)4265-1132 Japan Travel Bureau International Inc. Suite 1410, One Wilshire Bldg. 624 South Grand Ave, Los Angeles, CA 90017 Tel: (213)687-9881 U.S.A. Fax: (213)621-2318 Japan Rail Pass The JAPAN RAIL PASS is a special ticket that is available only to travelers visiting Japan from foreign countries for sight-seeing. To be eligible to purchase a JAPAN RAIL PASS, you must purchase an Exchange Order from an authorized sales office or agent before you come to Japan. Please contact JTB offices or your travel agent for details. Note: The rail pass is a flash pass good on most of the trains and ferries in Japan. It provides very significant saving on transportation costs within Japan if you plan to travel more than just from Tokyo to Nagoya and return. Booking of Japan Railway tickets cannot be made before issuing Japan Rail Pass in Japan. Access to Nagoya Direct flights to Nagoya are available from the following cities: Seoul, Taipei, Pusan, Hong Kong, Singapore, Bangkok, Cheju, Jakarta, Denpasar, Kuala Lumpur, Honolulu, Portland, Los Angeles, Guam, Saipan, Toronto, Vancouver, Rio de Janeiro, Sao Paulo, Moscow, Frankfurt, Paris, London, Brisbane, Cairns, Sydney and Auckland. Participants flying from the U.S.A. are urged to fly to Los Angeles, CA, or Portland, OR, and transfer to direct flights to Nagoya on Delta Airlines, or fly to Seoul, Korea, for a connecting flight to Nagoya. For participants from other countries, flights to Narita (the New Tokyo International Airport) or Osaka International Airport are recommended. Domestic flights are available from Narita to Nagoya, but not from Osaka. The bullet train, "Shinkansen", is a fast and convenient way to get to Nagoya from either Osaka or Tokyo. Transportation from Nagoya International Airport Bus service to the Nagoya JR train station is available every 15 minutes. The bus stop (signed as No. 1) is to your left as you exit the terminal. The trip takes about 1 hour. 
Transportation from Narita International Airport To the Tokyo JR train station (to connect with Shinkansen), 2 ways to get from Narita to the JR train station are recommended: 1. An express train from the airport to the Tokyo JR train station. This is an all reserved seat train. Buy tickets before boarding train. Follow the signs in the airport to JR Narita station. The trip takes 1 hour. 2. A non-stop service is available, leaving Narita airport every 15 minutes. The trip will take between one and one and a half hours or more, depending on traffic conditions. The limousines have reserved seating, so it is necessary to purchase a ticket before boarding. If you plan to stay in Tokyo overnight before proceeding to Nagoya, other limousines to major Tokyo hotels are available. Transportation from Osaka International Airport Non-stop-bus service to the Shin-Osaka JR train station is available every 15 min. Foreign Exchange and Travellaer's Checks Purchase of traveller's checks in Japanese yen or U.S. dollars before departure is recommended. The conference secretariat and most of stores will accept only Japanese yen in cash only. Major credit cards are accepted in a number of shops and hotels. Foreign currency exchange and cashing of traveller's checks are available at the New Tokyo International Airport, the Osaka International Airport and major hotels. Major banks that handle foreign currencies are located in the downtown area. Banks are open from 9:00 to 15:00 on the weekday, closed on Saturday and Sunday. Electricity 100 volts, 60 Hz. --------------------------------------------------------------------------- IJCNN`93 NAGOYA October 25-29, 1993 Nagoya, JAPAN R E G I S T R A T I O N F O R M ---------------------------------- (Type or print in block letters, one sheet for each registrant please) Name: ( )Prof. ( )Dr. ( )Mr. ( )Ms. --------------------------------------------------- (Family Name) (First Name) (Middle Name) Affiliation: --------------------------------------------------------------- Mailing Address: ( )Office ( )Home --------------------------------------------------------------- Zip Code: City: Country: ----------- ----------- --------------- Phone: Fax: E-mail: ------------------- ----------------- -------------------- # REGISTRATION FEE (Please make a circle as you chose.) --------------------------------------------------- | | Before | After | | | Membership | Aug.31,'93 | Sept.1,'93 | On-site | |------------|------------|------------|------------| | Member* | 45,000 yen | 55,000 yen | 60,000 yen | |------------|------------|------------|------------| | Non-member | 55,000 yen | 65,000 yen | 70,000 yen | |------------|------------|------------|------------| | Student** | 12,000 yen | 15,000 yen | 20,000 yen | --------------------------------------------------- * Please specify Membership society : ------------------------------------ and the number : ------------------------------------ SUBTOTAL1: yen ------------------- ** Students must submit a verification letter of full-time status from the department head. # TUTORIAL REGISTRATION FEE (Please make a circle as you chose.) ------------------------------------------------------------ | | | Before August 31, 93 | After | | | |-------------------------| Sept. 1, | | Member- | Option | |Univ. 
& Non | `93 | | ship | | Industrial |profit Inst.| | | -----------------------------------------------------------| | Member* | Half day | 20,000 yen | 7,000 yen | 40,000 yen | | | Full day | 30,000 yen | 10,000 yen | 60,000 yen | |------------------------------------------------------------| | Non- | Half day | 30,000 yen | 10,000 yen | 50,000 yen | | Member | Full day | 45,000 yen | 15,000 yen | 80,000 yen | |------------------------------------------------------------| | Student**| Half day | ---------- | 5,000 yen | 20,000 yen | | | Full day | ---------- | 7,500 yen | 30,000 yen | ------------------------------------------------------------ ----------------------------------------------- | Session | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | |-----------|---|---|---|---|---|---|---|---|---| | Morning | | | | | | | | | | |-----------|---|---|---|---|---|---|---|---|---| | Afternoon | | | | | | | | | | ----------------------------------------------- The complete list of tutorials will be available in the June mailing. SUBTOTAL2: yen ------------------- # BANQUET (5,000 yen/person) Accompany Person's Name: ------------------------- SUBTOTAL3: yen ------------------- ( person(s) x 5,000 yen) TOTAL AMOUNT(1+2+3): yen ------------------- # WAY OF PAYMENT (Please check the appropriate box.) ( )1. Payment through Bank I have sent the total amount on ------------------------ (Date) through ------------------------------------------------ (Name of Bank) to the following account in Japanese yen. Name of Bank: Tokai Bank, Nagoya Ekimae Branch Name of Account: Travel Plaza International Chubu, Inc. EC-ka Account No.: 1079574 Address: Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan ( )2. Payment by Credit Card (not for domestic registrants) ( )American Express ( )Diners ( )Visa Card ( )Master Card Card No.: --------------------------------------------------- Date of Expiration: ----------------------------------------- Signature: -------------------------------------------------- * Note 1. No personal checks are accepted. 2. All payments should be made in Japanese yen. DATE: SIGNATURE: ---------------------- ---------------------------- Please send the completed form to the following address. IJCNN'93-NAGOYA Secretariat Travel Plaza International Chubu, Inc. Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan Phone: +81-52-561-9880/8655 Fax: +81-52-561-1241 --------------------------------------------------------------------------- IJCNN`93 NAGOYA October 25-29, 1993 Nagoya, JAPAN HOTEL RESERVATION FORM ---------------------- (Type or print in block letters, one sheet for each registrant please) Name: ( )Prof. ( )Dr. ( )Mr. ( )Ms. --------------------------------------------------- (Family Name) (First Name) (Middle Name) Affiliation: --------------------------------------------------------------- Mailing Address: ( )Office ( )Home --------------------------------------------------------------------------- Zip Code: City: Country: -------------- -------------- ------------------ Phone: Fax: ---------------------- -------------------- Arrival Schedule: Arriving at on by ------------- --------- -------------- (Airport) (Date) (Flight No.) 
---------------------------------------------------------------- | | Name of Hotel | Number of Room(s) | |------------|---------------|-----------------------------------| | 1st Choice | | | | twin(s) | | | | single(s) | twin(s) | single use | |------------|---------------|---------------------------------- | | 2nd Choice | | | | | | | | single(s) | twin(s) | single use | ---------------------------------------------------------------- (continued) --------------------------------------------------------------- | | Check-in Date | Check-out Date | Total Night(s) | | -----------|--------------------------------|-----------------| | 1st Choice | | | | | | | | | |------------|---------------|----------------|---------------- | | 2nd Choice | | | | | | | | | --------------------------------------------------------------- Sharing with : ------------------------------------------------ (Family Name) (First Name) * Hotel Deposit: yen X room(s) = YEN --------------- ----------- (Room Charge/Night) # WAY OF PAYMENT (Please check the appropriate box.) ( )1. Payment through Bank I have sent the total amount on ------------------------ (Date) through ----------------------------------------------- (Name of Bank) to the following account in Japanese yen. Name of Bank: Tokai Bank, Nagoya Ekimae Branch Name of Account: Travel Plaza International Chubu, Inc. EC-ka Account No.: 1079574 Address: Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan ( )2. Payment by Credit Card (not for domestic registrants) ( )American Express ( )Diners ( )Visa Card ( )Master Card Card No.: --------------------------------------------------- Date of Expiration: ----------------------------------------- Signature: -------------------------------------------------- * Note 1. No personal checks are accepted. 2. All payments should be made in Japanese yen. DATE: SIGNATURE: ---------------------- ---------------------------- Please send the completed form to the following address. (by September 15, 1993) Travel Plaza International Chubu, Inc. Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan Phone: +81-52-561-9880/8655 Fax: +81-52-561-1241 --------------------------------------------------------------------------- Hotel Accommodations Rooms have been reserved at the following hotels in Nagoya. Reservations should be made by completing and returning the enclosed Accomodations Applications Form, indicating the name of the hotel and the number of rooms desired. No reservation will be confirmed without a deposit. Hotel assignment will be made on a first-come first-served basis. The following rates include service charges and consumption taxes. The rate are subject to change for 1993, except for Hotel Nahoya Castle. 
--------------------------------------------------------------- | Rank | Name of Hotel | Single | Twin | Twin Room | | | | Room | Room | Single Use | |------|-----------------|------------|------------|------------| | A 1 | Nagoya Hilton | ----- | 26,700 yen | 18,700 yen | | |-----------------|------------|------------|------------| | 2 | Hotel Nagoya | 15,500 yen | 30,000 yen | 23,000 yen | | | Castle | | | | | |-----------------|------------|------------|------------| | 3 | Nagoya Kanko | 13,500 yen | 23,000 yen | 18,500 yen | | | Hotel | | | | | |-----------------|------------|------------|------------| | 4 | Nagoya Tokyu | 16,000 yen | 25,000 yen | 19,000 yen | | | Hotel | | | | |------------------------|------------|------------|------------| | B 5 | Nagoya Int'l | 11,000 yen | 21,000 yen | 16,500 yen | | | Hotel | | | | | |-----------------|------------|------------|------------| | 6 | Hotel Castle | 10,000 yen | 18,000 yen | 14,500 yen | | | Plaza | | | | | |-----------------|------------|------------|------------| | 7 | Nagoya Daiichi | 9,500 yen | 16,000 yen | 12,500 yen | | | Hotel | | | | | |-----------------|------------|------------|------------| | 8 | Nagoya Fuji | 9,300 yen | 16,500 yen | 13,800 yen | | | Park Hotel | | | | | |-----------------|------------|------------|------------| | 9 | Hotel Lions | 8,000 yen | 15,000 yen | 12,000 yen | | | Plaza | | | | |------------------------|------------|------------|------------| | C 10 | Daiichi Fuji | 6,200 yen | 11,000 yen | ----- | | | Hotel | | | | | |-----------------|------------|------------|------------| | 11 | Nagoya Crown | 6,400 yen | 10,000 yen | ----- | | | Hotel | | | | | |-----------------|------------|------------|------------| | 12 | Nagoya Park | 6,300 yen | 11,400 yen | ----- | | | Side Hotetel | | | | --------------------------------------------------------------- Note: Since the capacity of hotel is limited, hotel reservation cannot be guaranteed after September 15, 1993 Cancellation and Refund If you wish to cancel your hotel reservation, please send a written notification directly to TPI. Deposits will be refunded after deducing the following cancellation charges. All refunds will be proceeded after the conference. When notification is received by TPI: Up to 20 days before the first night of stay ------------------ Free 19-10 days before ---------------------------- 10% of the room charge 9-5 days before ------------------------------ 20% of the room charge 4-2 days before ------------------------------ 50% of the room charge One day before or no notice given ----------- 100% of the room charge ----- Shiro Usui (usui at tut.ac.jp) Biological and Physiological Engineering Lab. Department of Information and Computer Sciences Toyohashi University of Technology Toyohashi 441, Japan From bever at prodigal.psych.rochester.edu Tue Oct 6 13:29:50 1992 From: bever at prodigal.psych.rochester.edu (Thomas Bever) Date: Tue, 6 Oct 92 13:29:50 EDT Subject: Postdoctoral fellowships at the University of Rochester Message-ID: <9210061729.AA17460@prodigal.psych.rochester.edu> POSTDOCTORAL FELLOWSHIPS IN THE LANGUAGE SCIENCES AT ROCHESTER The Center for the Sciences of Language [CSL] at the University of Rochester has a total of three NIH-funded postdoctoral trainee positions: one can start right away, the other two start anytime after July 1, 1993: all can run from one to two years. 
CSL is an interdisciplinary unit which connects programs in American Sign Language, Psycholinguistics, Linguistics, Natural language processing, Neuroscience, Philosophy, and Vision. Fellows will be expected to participate in a variety of existing research and seminar projects in and between these disciplines. Applicants should have a relevant background and an interest in interdisciplinary research training in the language sciences. We encourage applications from minorities and women: applicants must be US citizens or otherwise eligible for a US government fellowship. Applications should be sent to Tom Bever, CSL Director, Meliora Hall, University of Rochester, Rochester, NY, 14627; Bever at prodigal.psych.rochester.edu; 716-275-8724. Please include a vita, a statement of interests, the names and email addresses and/or phone numbers of three recommenders: also indicate preferred starting date.  From jose at tractatus.siemens.com Thu Oct 8 08:33:03 1992 From: jose at tractatus.siemens.com (Steve Hanson) Date: Thu, 8 Oct 1992 08:33:03 -0400 (EDT) Subject: NIPS*92 Tutorials Message-ID: NIPS*92 TUTORIAL PROGRAM November 30, 1992 NEUROSCIENCE 9:30 - 11:30 ``ASPECTS OF COMPUTATION WITH REAL NEURONS'' William BIALEK NEC Research Institute 1:00 - 3:00 ``ADVANCES IN COGNITIVE NEUROSCIENCE'' William HIRST New School for Social Research 3:30 - 5:30 ``CORTICAL OSCILLATIONS: CURRENT EXPERIMENTAL AND THEORETICAL STATUS'' Christof KOCH CalTech 9:30 - 11:30 ``BIFURCATIONS IN NEURAL NETWORKS'' Bard ERMENTROUT Department of Mathematics University of Pittsburgh ARCHITECTURES, ALGORITHMS, AND THEORY 9:30 - 11:30 ``LEARNING THEORY AND NEURAL COMPUTATION'' Les VALIANT Computer Science Department Harvard University 1:00 - 3:00 ``STATISTICAL ACCURACY OF NEURAL NETWORKS'' Andrew BARRON Statistics Department University of Illinois 3:30 - 5:30 ``LEARNING AND APPROXIMATION IN NEURAL NETWORKS'' Tommy POGGIO Brain & Cognitive Science & AI Lab MIT IMPLEMENTATIONS 1:00 - 3:00 ``ELECTRONIC NEURAL NETWORKS'' Josh ALSPECTOR Bellcore All inquiries for registration to Conference, Tutorials or Workshop should go to NIPS*92 Registration SIEMENS Research Center 755 College Rd. East Princeton, NJ 08550 phone 609-734-3383 email kic at learning.siemens.com Stephen J. Hanson Learning Systems Department SIEMENS Research 755 College Rd. East Princeton, NJ 08540 From lvq at cochlea.hut.fi Fri Oct 9 15:18:43 1992 From: lvq at cochlea.hut.fi (LVQ_PAK) Date: Fri, 9 Oct 92 15:18:43 EET Subject: New version of Learning Vector Quantization PD program package Message-ID: <9210091318.AA17182@cochlea.hut.fi.hut.fi> ************************************************************************ * * * LVQ_PAK * * * * The * * * * Learning Vector Quantization * * * * Program Package * * * * Version 2.1 (October 9, 1992) * * * * Prepared by the * * LVQ Programming Team of the * * Helsinki University of Technology * * Laboratory of Computer and Information Science * * Rakentajanaukio 2 C, SF-02150 Espoo * * FINLAND * * * * Copyright (c) 1991,1992 * * * ************************************************************************ Public-domain programs for Learning Vector Quantization (LVQ) algorithms are available via anonymous FTP on the Internet. "What is LVQ?", you may ask --- See the following reference, then: Teuvo Kohonen. The self-organizing map. Proceedings of the IEEE, 78(9):1464-1480, 1990. 
In short, LVQ is a group of methods applicable to statistical pattern recognition, in which the classes are described by a relatively small number of codebook vectors, properly placed within each class zone such that the decision borders are approximated by the nearest-neighbor rule. Unlike in normal k-nearest-neighbor (k-nn) classification, the original samples are not used as codebook vectors, but they tune the latter. LVQ is concerned with the optimal placement of these codebook vectors into class zones. This package contains all the programs necessary for the correct application of certain LVQ algorithms in an arbitrary statistical classification or pattern recognition task. To this package three options for the algorithms, the LVQ1, the LVQ2.1 and the LVQ3, have been selected. This code is distributed without charge on an "as is" basis. There is no warranty of any kind by the authors or by Helsinki University of Technology. In the implementation of the LVQ programs we have tried to use as simple code as possible. Therefore the programs are supposed to compile in various machines without any specific modifications made on the code. All programs have been written in ANSI C. The programs are available in two archive formats, one for the UNIX-environment, the other for MS-DOS. Both archives contain exactly the same files. These files can be accessed via FTP as follows: 1. Create an FTP connection from wherever you are to machine "cochlea.hut.fi". The internet address of this machine is 130.233.168.48, for those who need it. 2. Log in as user "anonymous" with your own e-mail address as password. 3. Change remote directory to "/pub/lvq_pak". 4. At this point FTP should be able to get a listing of files in this directory with DIR and fetch the ones you want with GET. (The exact FTP commands you use depend on your local FTP program.) Remember to use the binary transfer mode for compressed files. The lvq_pak program package includes the following files: - Documentation: README short description of the package and installation instructions lvq_doc.ps documentation in (c) PostScript format lvq_doc.ps.Z same as above but compressed lvq_doc.txt documentation in ASCII format - Source file archives (which contain the documentation, too): lvq_p2r1.exe Self-extracting MS-DOS archive file lvq_pak-2.1.tar UNIX tape archive file lvq_pak-2.1.tar.Z same as above but compressed An example of FTP access is given below unix> ftp cochlea.hut.fi (or 130.233.168.48) Name: anonymous Password: ftp> cd /pub/lvq_pak ftp> binary ftp> get lvq_pak-2.1.tar.Z ftp> quit unix> uncompress lvq_pak-2.1.tar.Z unix> tar xvfo lvq_pak-2.1.tar See file README for further installation instructions. All comments concerning this package should be addressed to lvq at cochlea.hut.fi. 
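As a toy illustration of the LVQ1 update described above (this is not the
LVQ_PAK implementation, which is written in ANSI C; the learning-rate
schedule and the LVQ2.1/LVQ3 window rules are omitted), a minimal Python
sketch might look like this:

    import numpy as np

    def lvq1_train(X, y, codebooks, labels, alpha=0.05, epochs=10):
        """Toy LVQ1: move the nearest codebook vector toward a sample of the
        same class and away from a sample of a different class."""
        codebooks = codebooks.copy()
        for _ in range(epochs):
            for x, cls in zip(X, y):
                # nearest codebook vector by Euclidean distance
                i = np.argmin(np.linalg.norm(codebooks - x, axis=1))
                if labels[i] == cls:
                    codebooks[i] += alpha * (x - codebooks[i])   # attract
                else:
                    codebooks[i] -= alpha * (x - codebooks[i])   # repel
        return codebooks

    def lvq_classify(x, codebooks, labels):
        """Nearest-codebook decision rule (1-nn over the codebook vectors)."""
        return labels[np.argmin(np.linalg.norm(codebooks - x, axis=1))]

In practice one would initialize several codebook vectors per class from
training samples and let alpha decay over time, as the package documentation
describes.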
************************************************************************ From lvq at cochlea.hut.fi Fri Oct 9 15:14:44 1992 From: lvq at cochlea.hut.fi (LVQ_PAK) Date: Fri, 9 Oct 92 15:14:44 EET Subject: Release of Self-Organizing Map PD program package Message-ID: <9210091314.AA17162@cochlea.hut.fi.hut.fi> ************************************************************************ * * * SOM_PAK * * * * The * * * * Self-Organizing Map * * * * Program Package * * * * Version 1.0 (October 9, 1992) * * * * Prepared by the * * SOM Programming Team of the * * Helsinki University of Technology * * Laboratory of Computer and Information Science * * Rakentajanaukio 2 C, SF-02150 Espoo * * FINLAND * * * * Copyright (c) 1992 * * * ************************************************************************ Some time ago we released the software package "LVQ_PAK" for the easy application of Learning Vector Quantization algorithms. Corresponding public-domain programs for the Self-Organizing Map (SOM) algorithms are now available via anonymous FTP on the Internet. "What does the Self-Organizing Map mean?", you may ask --- See the following reference, then: Teuvo Kohonen. The self-organizing map. Proceedings of the IEEE, 78(9):1464-1480, 1990. In short, Self-Organizing Map (SOM) defines a 'non-linear projection' of the probability density function of the high-dimensional input data onto the two-dimensional display. SOM places a number of reference vectors into an input data space to approximate to its data set in an ordered fashion. This package contains all the programs necessary for the application of Self-Organizing Map algorithms in an arbitrary complex data visualization task. This code is distributed without charge on an "as is" basis. There is no warranty of any kind by the authors or by Helsinki University of Technology. In the implementation of the SOM programs we have tried to use as simple code as possible. Therefore the programs are supposed to compile in various machines without any specific modifications made on the code. All programs have been written in ANSI C. The programs are available in two archive formats, one for the UNIX-environment, the other for MS-DOS. Both archives contain exactly the same files. These files can be accessed via FTP as follows: 1. Create an FTP connection from wherever you are to machine "cochlea.hut.fi". The internet address of this machine is 130.233.168.48, for those who need it. 2. Log in as user "anonymous" with your own e-mail address as password. 3. Change remote directory to "/pub/som_pak". 4. At this point FTP should be able to get a listing of files in this directory with DIR and fetch the ones you want with GET. (The exact FTP commands you use depend on your local FTP program.) Remember to use the binary transfer mode for compressed files. 
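Before the file listing, here is a toy illustration of the SOM update sketched
above (this is not the SOM_PAK code, which is written in ANSI C; the
learning-rate and neighborhood schedules below are simplified placeholders):

    import numpy as np

    def som_train(X, grid=(10, 10), epochs=20, alpha0=0.5, sigma0=3.0):
        """Toy SOM: reference vectors on a 2-D grid are pulled toward each
        input, with a Gaussian neighborhood that shrinks over time."""
        rng = np.random.default_rng(0)
        h, w = grid
        weights = rng.random((h, w, X.shape[1]))
        # grid coordinates of each map unit, used for neighborhood distances
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                      indexing="ij"), axis=-1)
        for t in range(epochs):
            alpha = alpha0 * (1.0 - t / epochs)          # decaying learning rate
            sigma = sigma0 * (1.0 - t / epochs) + 0.5    # shrinking neighborhood
            for x in X:
                # best-matching unit (nearest reference vector)
                d = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(np.argmin(d), d.shape)
                # Gaussian neighborhood around the BMU on the map grid
                g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                           / (2 * sigma ** 2))
                weights += alpha * g[..., None] * (x - weights)
        return weights

The ordered, topology-preserving character of the resulting map comes from the
neighborhood term g: units near the best-matching unit on the grid are updated
together, so nearby map units end up representing nearby regions of input space.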
The som_pak program package includes the following files: - Documentation: README short description of the package and installation instructions som_doc.ps documentation in (c) PostScript format som_doc.ps.Z same as above but compressed som_doc.txt documentation in ASCII format - Source file archives (which contain the documentation, too): som_p1r0.exe Self-extracting MS-DOS archive file som_pak-1.0.tar UNIX tape archive file som_pak-1.0.tar.Z same as above but compressed An example of FTP access is given below unix> ftp cochlea.hut.fi (or 130.233.168.48) Name: anonymous Password: ftp> cd /pub/som_pak ftp> binary ftp> get som_pak-1.0.tar.Z ftp> quit unix> uncompress som_pak-1.0.tar.Z unix> tar xvfo som_pak-1.0.tar See file README for further installation instructions. All comments concerning this package should be addressed to som at cochlea.hut.fi. ************************************************************************ From soller at asylum.cs.utah.edu Fri Oct 9 12:43:20 1992 From: soller at asylum.cs.utah.edu (Jerome Soller) Date: Fri, 9 Oct 92 10:43:20 -0600 Subject: NSF Summer Fellowships to Visit Japan Message-ID: <9210091643.AA08604@asylum.cs.utah.edu> The following is a summary of the official announcement for the NSF Summer Institute in Japan sponsored by the National Science Foundation. It provides a fellowship for graduate and/or medical students to spend the summer in Japan at a Japanese research lab. Last summer, I had the opportunity to spend the summer working with the Exploratory Research Laboratory of the Fundamental Laboratories of NEC Corporation doing models of visual biological neural networks. Another student in the neural network area, Hank Wan of CMU, worked with RIKEN. Sincerely, Jerome B. Soller Ph.D. Candidate, U. of Utah Dept. of Computer Science and VA Geriatric, Research, Education, and Clinical Center soller at asylum.utah.edu ----------------------------------------------------------- The National Science Foundation and the National Institutes of Health announce... ... that applications are now being accepted for the 1993 SUMMER INSTITUTE IN JAPAN for U.S. Graduate Students in Science and Engineering, including Biomedical Science and Engineering. APPLICATION DEADLINE: December 1, 1992 Program's Goal: to provide 60 U.S. graduate students first-hand experience in a Japanese research laboratory Program Elements: ** Internship at a Japanese government, corporate or university laboratory in Tokyo or Tsukuba ** Intensive Japanese language training ** Lectures on Japanese science, history, and culture Program Duration and Dates: ** 8 weeks; June 25 to August 21, 1993 Eligibility requirements: 1. U.S. citizen or permanent resident 2. Enrolled at a U.S. institution in a science or engineering Ph.D. program, Enrolled in an M.D. program and have an interest in biomedical research, or Enrolled in an engineering M.S. program of which one year has been completed by December 1, 1992. For application materials and more information: Request NSF publication number 92-105, "1993 Summer Institute in Japan," from NSF's Publications Office at pubs at nsf.gov (InterNet) or pubs at nsf (BitNet) Phone: (202) 357-7668 Be sure to give your name and complete mailing address. To download application materials: Send e-mail message to stisserv at nsf.gov (InterNet) or stisserv at nsf (BitNet) Ignore the subject line, but body of message should read as follows: Request: stis Topic: nsf92105 Request: end You will receive a copy of publication 92-105 by return e-mail. 
Further inquiries: Contact NSF's Japan Program staff at NSFJinfo at nsf.gov (InterNet) or NSFJinfo at nsf (BitNet) Tel: (202) 653-5862 From ken at cns.caltech.edu Sun Oct 11 10:47:21 1992 From: ken at cns.caltech.edu (Ken Miller) Date: Sun, 11 Oct 92 07:47:21 PDT Subject: tech report announcement Message-ID: <9210111447.AA10541@zenon.cns.caltech.edu> The following tech report has been placed in the neuroprose archive as miller.hebbian.tar.Z. Instructions for retrieving and printing follow the abstract. A slightly abridged version of this paper has been submitted to Neural Computation. The Role of Constraints in Hebbian Learning Kenneth D. Miller and David J.C. MacKay Caltech Computation and Neural Systems (CNS) Program CNS Memo 19 Models of unsupervised correlation-based (Hebbian) synaptic plasticity are typically unstable: either all synapses grow until each reaches the maximum allowed strength, or all synapses decay to zero strength. A common method of avoiding these outcomes is to use a constraint that conserves or limits the total synaptic strength over a cell. We study the dynamical effects of such constraints. Two methods of enforcing a constraint are distinguished, multiplicative and subtractive. For otherwise linear learning rules, multiplicative enforcement of a constraint results in dynamics that converge to the principal eigenvector of the operator determining unconstrained synaptic development. Subtractive enforcement, in contrast, leads to a final state in which almost all synaptic strengths reach either the maximum or minimum allowed value. This final state is often dominated by weight configurations other than the principal eigenvector of the unconstrained operator. Multiplicative enforcement yields a ``graded" receptive field in which most mutually correlated inputs are represented, whereas subtractive enforcement yields a receptive field that is ``sharpened" to a few maximally-correlated inputs. If two equivalent input populations ({\it e.g.} two eyes) innervate a common target, multiplicative enforcement prevents their segregation (ocular dominance segregation) when the two populations are weakly correlated; whereas subtractive enforcement allows segregation under these circumstances. An approach to understanding constraints over input and over output cells is suggested, and some biological implementations are discussed. ------------------------------------------------ How to retrieve and print out this paper: unix> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password: [your e-mail address] 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get miller.hebbian.tar.Z 200 PORT command successful. 150 Opening BINARY mode data connection for miller.hebbian.tar.Z 226 Transfer complete. 480000 bytes sent in many seconds ftp> quit 221 Goodbye. 
unix> uncompress miller.hebbian.tar.Z unix> tar xvf miller.hebbian.tar TO SAVE DISC SPACE, THE ABOVE TWO COMMANDS MAY BE REPLACED WITH THE SINGLE COMMAND unix> zcat miller.hebbian.tar.Z | tar xvf - hebbian_p0-11.ps hebbian_p12-23.ps hebbian_p24-35.ps unix> lpr hebbian_p24-35.ps unix> lpr hebbian_p12-23.ps unix> lpr hebbian_p0-11.ps From P.Refenes at cs.ucl.ac.uk Fri Oct 9 07:56:32 1992 From: P.Refenes at cs.ucl.ac.uk (P.Refenes@cs.ucl.ac.uk) Date: Fri, 09 Oct 92 12:56:32 +0100 Subject: Papers Using neural Nets in Economics Message-ID: [Note: the following is a reply to a request for references on use of neural nets in financial modeling. The original request was submitted to, but not distributed on, the connectionists list. It also appeared on comp.ai.neural-nets and sci.eon. -- DST] In reply, to your request for references in this field a) the full set of references in our paper on financial modelling using neural nets is attached (straight ascii). b) a more detailed database in also attached (in tex). As far as we are aware this is more or less it. In addition we have a forthcoming book "neural network applications in the capital markets", and we plan a workshop to be held in London in Spring 93 - papers welcome. Paul Refenes. =========================================================== [Brock91] Brock W. A., "Causality, Chaos, Explanation and Prediction in Economics and Finance", in Casti J., and Karlqvist A., (eds), "Beyond Belief: Randomness, Prediction, and Explanation in Science", Boca Raton, FL: CRC Press, pp 230-279, (1991). [Brown63] Brown R. G. "Smoothing, Forecasting and Prediction of Discrete Time Series", Prentice-Hall International, (1963). [Burns86] Burns T., "The Interpretation and use of Economic Predictions", Proc. Royal Soc., Series A, pp 103- 125, (1986). [Chauvi89] Chauvin Y., "A back-propagation algorithm with optimal use of hidden units", In Touretzky D., (ed), "Advances in Neural Information Processing systems, Morgan Kaufmann (1989). [Deboec92] Deboeck D., "Pre-processing and evaluation of neural nets for trading stocks" Advanced Technology for Developers, vol. 1, no. 2, (Aug 1992). [Denker87] Denker J., et al "Large Automatic Learning. Rule Extraction and Generalisation", Complex Systems I: 877-922, (1987). [DutSha88] Dutta Sumitra, and Shashi Shekkar, "Bond rating: a non-conservative application", Proc. ICNN-88, San Diego, CA, July 24-27 1988, Vol. II (1988). [Econost92] Econostat, "Tactical Asset Allocation in the Global Bond Markets", TR-92/07, Hennerton House, Wargrave, Berkshire RG10 8PD, England, (1992). [FahLeb90] Fahlman S. E & Lebiere C, "The Cascade- Correlation Learning Architecture", Carnegie Mellon University, Technical Report CMU-CS-90- 100. ( 1990). [Hendry88] Hendry D. F., "Encompassing implications of feedback versus feedforward mechanisms in econometrics", Oxford Economic Papers, vol. 40, pp. 132-149, (1988). [Hinton87] Hinton Geoffrey, "Connectionist Learning Procedures", Computer Science Department, Carnegie-Melon University, December 1987. [Holden90] Holden K., "Current issues in macroeconomic", in Greenaway D., (ed), Croom Helm, (1990). [Hoptro93] Hoptroff A. R., "The principles and practice of time series forecasting and business modelling using neural nets", Neural Computing and Applications vol. 1, no 1., pp 59-66, (1993). [Kimoto90] Kimoto T., et al, "Stock Market Prediction with Modular Neural Networks", Proc., IJCNN-90, San Diego, (1990). 
[Klimas92] Klimasauskas C., "Genetic function optimization for time series prediction", Advanced Technology for Developers vol. 1, no. 1, (July 1992). [leCun89] le Cun. Y., "Generalisation and Network Design Strategies" Technical Report CRG-TR-89-4, University of Toronto, Department of Computer Science, (1989). [Marqu91] Marquez L., et al, "Neural networks models as an alternative to regression", Proc. Twenty-Fourth Hawaii International Conference on System Sciences, 1991, Volume 4 (pp. 129-135). [Menden89] Mendenhall W., et al "Statistics for Management And Economics", PWS-KENT Publishing Company, Boston USA, (1989). [Ormer91] Ormerod P., Taylor J. C., and Walker T., "Neiual networks in Economics", Henley Centre, (1991). [Peters91] Peters E. E., "Chaos and Order in the Capital Markets", Willey, USA, (1991). [Refene92a] Refenes A. N., "Constructive Learning and its Application to Currency Exchange Rate Prediction", in "Neural Network Applications in Investment and Finance Services", eds. Turban E., and Trippi R., Chapter 27, Probus Publishing, USA, 1992. [Refene92b] Refenes A. N., et al "Currency Exchange rate prediction and Neural Network Design Strategies", Neural computing & Applications Journal, Vol 1, no. 1., (1992). [Refene92c] Refenes A. N., et al "Stock Ranking Using Neural Networks", submitted ICNN'93, San Francisco, Department of Computer Science, University College London, (1992). [RefAze92] Refenes A. N., & Azema-Barac M., "Neural Networks for Tactical Asset Allocation in the Global Bonds Markets", Proc. IEE Third International Conference on ANNS, Brighton 1993 (submitted 1992). [Refenes93] Refenes A. N., et al "Financial Modelling Using Neural Networks", in Liddell H. (ed) "Commercial Parallel Processing", Unicom, (to appear). [RefAli91] Refenes A. N., & Alippi C., "Histological Image understanding by Error Backpropagation", Microprocessing and Microprogramming Vol. 32, pp. 437-446, , North-Holland, (1991). [RefCha92] Refenes A. N., & Chan E. B., "Sound Recognition and Optimal Neural Network Design", Proc. EUROMICRO-92, Paris (Sept. 1992). [RefVit91] Refenes A. N. & Vithlani S. "Constructive Learning by Specialisation", Proc. ICANN-91, Helsiniki, (1991). [RefZai92] Refenes A. N., & Zaidi A., "Managing Exchange Rate Prediction Strategies with Neural Networks", Proc. Workshop on Neural Networks: techniques & Applications, Liverpool (Sept. 1992), also in Lisboa P. G., and Taylor M, "Neural Networks: techniques & Applications", Ellis Horwood (1992). [Refenes91] Refenes A.N., "CLS: An Adaptive Learning Procedure and Its Application to Time Series Forecasting", Proc. IJCNN-91, Singapore, (Nov. 1991). [Refenes92d] Refenes A. N., et al "Currency Exchange Rate Forecasting by Error Backpropagation", Proc. Conference on System Sciences, HICCS-25, Kauai, HawaII, Jan. 7-10, 1992. [Rumelh86] Rumelhart D. E., et al, "Learning Internal Representation by error propagation." In Rumelhart.D.E, McClelland.J.L and PDP Research Group editors Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1 Foundation, MIT Press (1986). [Shoene90] Schoenenburg E., "Stock price prediction using neural networks: a project report", Neurocomputing 2, pp. 17-27, 1990. [TsiZei92] Tsibouris G., and Zeidenberg M., "Back propagation as a test of the efficient markets hypothesis", Proc. Hawaii International Conference on System Sciences, January 7-10th 1992, Kauai, Hawaii, Volume 4 (pp. 523-532). 
[White88] White Halbert, "Economic prediction using neural networks: the case of IBM daily stock returns", Department of Economics, University of California, (1988). [Wallis89] Wallis K., F., "Macroeconomic forecasting: a survey", Economic Journal, vol. 99, pp. 28-61, (1989). [Weigen90] Weigend A., et al, "Predicting the future: a connectionist approach", Int. Journal of Neural Systems, vol. 1, pp. 193-209, (1990). ====================================================================== %T Using neural nets to predict several sequential and subsequent future values from time series data %A James E. Brown %J Proceedings of the First International Conference on Artificial Intelligence Applications on Wall Street, October 9-11 1991, New York %Q Division of Management, Polytechnic University %I IEEE Computer Society Press %C Los Alamitos, CA %D 1991 %P 30-34 %T Decision support system for position optimization on currency option dealing %A Shuhei Yamaba %A Hideki Kurashima %J Proceedings of the First International Conference on Artificial Intelligence Applications on Wall Street, October 9-11th 1991, New York %Q Division of Management, Polytechnic University %I IEEE Computer Society Press %C Los Alamitos, CA %D 1991 %P 160-165 %T An intelligent trend prediction and reversal recognition system using dual-modul %A Gia-Shuh Jang %A Feipei Lai %A Bor-Wei Jiang %A Li-Hua Chien %J Proceedings of the First International Conference on Artificial Intelligence Applications on Wall Street, October 9-11th 1991, New York %Q Division of Management, Polytechnic University %I IEEE Computer Society Press %C Los Alamitos, CA %D 1991 %P 42-51 %T Economic models and time series: AI and new techniques for learning from examples %A Tomaso Poggio %I Artificial Intelligence Laboratory, MIT %C Cambridge, MA %R TR %P 15 %T Bond rating: a non-conservative application of neural networks %A Soumitra Dutta %A Shashi Shekhar %J Proceedings of the International Conference on Neural Networks, San Diego, CA, July 24-27 1988, Volume II %I IEEE %C San Diego, CA %P 443-450 %T Stock price prediction using neural networks: a project report %A E. Schoneburg %J Neurocomputing %V 2 %D 1990 %P 17-27 %T Artificial neural systems: a new tool for financial decision-making %A Delvin D. Hawley %A John D. Johnson %A Dijotam Raina %J Financial Analysts Journal %D November-December 1990 %P 63-72 %T Financial simulations on a massively parallel connection machine %A James M. Hutchison %R Report 90-04-01 %I Decision Sciences Department, University of Pennsylvania %C Philadelphia, PA %D September 1990 %P 34 %T Neural networks in economics %A Paul Ormerod %A John C. Taylor %A Ted Walker %J Money and financial markets %E Mark P. Taylor %I Blackwell Ltd %C Oxford %D 1991 %P 341-353 %G 0631179828 %T Function approximation and time series prediction with neural networks %A R.D. Jones %A Y.C. Lee %A C.W. Barnes %A G.W. Flake %A K. Lee %A P.S. Lewis %A S. Qian %I Center for Nonlinear Studies, Los Alamos %D 1989 %T Predicting the future: a connectionist approach %A A. Weigend %A B. Huberman %A D. Rumelhart %J International Journal of Neural Systems %V 1 %N 3 %D 1990 %P 193-209 %T Stock market prediction system with modular neural networks %A T. Kimoto %A K. Asakawa %J Proceedings of the International Joint Conference on Neural Networks, San Diego, June 17-21 1990 Volume I %I IEEE Neural Network Council %C Ann Arbor, MI %P 1-7 %T Forecasting economic turning points with neural nets %A R.G. Hoptroff %A M.J. Bramson %A T.J. 
Hall %J to be published in Neural Computing and Applications, Summer 1992 %P 6 %T Neural network applications in business minitrack %A W. Remus %A T. Hill %B Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, January 7-10th, 1992, Kauai, Hawaii, Vol 4 %E Jay F. Nunamaker %E Ralph H. Sprague %I IEEE Computer Society Press %C Los Alamitos, CA %D 1992 %P 493 %T Neural network models for forecasting: a review %A Leorey Marquez %A Tim Hill %A Marcus O'Connor %A William Remus %B Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, January 7-10th 1992, Kauai, Hawaii, Vol.4 %E Jay F. Nunamaker %E Ralph H. Sprague %I IEEE Computer Society Press %C Los Alamitos, CA %D 1992 %P 494-497 %T Neural nets vs. logistic regression %A T. Bell %A G. Ribar %A J. Verchio %J Proceedings of the University of Southern California Expert Systems Symposium %D November 1989 %T Contrasting neural nets with regression in predicting performance %A K. Duliba %J Proceedings of the Twenty-Fourth Hawaii International Conference on System Sciences, Volume 4 %D 1991 %P 163-170 %T A business application of artificial neural network systems %A A. Koster %A N. Sondak %A W. Bourbia %J The Journal of Computer Information Systems %V 31 %D 1990 %P 3-10 %T Neural networks models as an alternative to regression %A L. Marquez %A T. Hill %A W. Remus %A R. Worthley %J Proceedings of the Twenty-Fourth Hawaii International Conference on System Sciences, 1991, Volume 4 %D 1991 %P 129-135 %T A neural network model for bankruptcy prediction %A M. Odom %A R. Sharda %J Proceedings of the 1990 International Joint Conference on Neural Networks, San Diego, CA, June 17-21 1990, Volume II %I IEEE Neural Networks Council %C Ann Arbor, MI %D 1990 %P 163-168 %T A neural network application for bankruptcy prediction %A W. Raghupathi %A L. Schade %A R. Bapi %J Proceedings of the Twenty-Fourth Hawaii International Conference on System Sciences 1991, Volume 4 %D 1991 %P 147-155 %T Neural network models of managerial judgement %A W. Remus %A T. Hill %J Proceedings Twenty-Third Hawaii International Conference on System Sciences 1990, Volume 4 %D 1990 %P 340-344 %T Neural network models for intelligent support of managerial decision making %A W. Remus %A T. Hill %R University of Hawaii Working Paper %D 1991 %T Forecasting country risk ratings using a neural network %A J. Roy %A J. Cosset %J Proceedings of the Twenty-Third Hawaii International Conference on System Sciences 1990, Volume 4 %D 1990 %P 327-334 %T Neural networks as forecasting experts: an empirical test %A R. Sharda %A R. Patil %B Proceedings of the 1990 International Joint Conference on Neural Networks Conference, Washington DC, January 15-19 1990, Volume 2 %E Maureen Caudill %I Lawrence Erlbaum Associates %C Hillsdale, NJ %D 1990 %G 0805807764 %P 491-494 %T Connectionist approach to time series prediction: an empirical test %A R. Sharda %A R. Patil %I Oklahoma State University %C Oklahoma %R Working Paper 90-26 %D 1990 %T Neural networks for bond rating improved by multiple hidden layers %A A. Surkan %A J. Singleton %J Proceedings of the 1990 International Joint Conference on Neural Networks, San Diego, CA, June 17-21 1990, Volume 2 %I IEEE Neural Networks Council %C Ann Arbor, MI %D 1990 %P 157-162 %T Time series forecasting using neural networks vs. Box-Jenkins methodology %A Z. Tang %A C. de Almeida %A P. 
Fishwick %J Presented at the 1990 International Workshop on Neural Networks %D February 1990 %T Predicting stock price performance %A Y. Yoon %A G. Swales %J Proceedings of the Twenty-Fourth Hawaii International Conference on System Sciences 1991, Volume 4 %D 1991 %P 156-162 %T Neural networks as bond rating tools %A Alvin J. Surkan %A J. Clay Singleton %B Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, January 7-10th 1992, Kauai, Hawaii %E Jay F. Nunamaker %E Ralph H. Sprague %I IEEE Computer Society Press %C Los Alamitos, CA %D 1992 %P 499-503 %T The AT\&T divestiture: effects of rating changes on bond returns %A J.W. Peavy %A J.A. Scott %J Journal of Economics and Business %V 38 %D 1986 %P 255-270 %T Currency exchange rate forecasting by error backpropagation %A A.N. Refenes %A M. Azema-Barac %A S.A. Karoussos %B Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, January 7-10th 1992, Kauai, Hawaii %E Jay F. Nunamaker %E Ralph H. Sprague %I IEEE Computer Society Press %C Los Alamitos, CA %D 1992 %P 504-515 %T Developing neural networks to forecast agricultural commodity prices %A John Snyder %A Jason Sweat %A Michelle Richardson %A Doug Pattie %B Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, January 7-10th 1992, Kauai, Hawaii %E Jay F. Nunamaker %E Ralph H. Spague %I IEEE Computer Society Press %C Los Alamitos, CA %D 1992 %P 516-522 %T Neural Networks for Statistical and Economic Data Workshop Proceedings, Dublin, December 1990 %Q Munotec Systems Ltd and Statistical Office of the European Communities, Luxembourg %E F. Murtagh %I Eurostat: Statistical Office of the European Communities %C Luxembourg %D 1991 %P 210 %T Parallel Problem Solving from Nature: Applications in Statistics and Economics Workshop Proceedings, Zurich, December 1991 %E D. Wurtz %E F. Murtagh %I Eurostat: Statistical Office of the European Communities %C Luxembourg %D 1992 %P 192 %T Forecasting the economic cycle: a neural network approach %A M.J. Branson %A R.G. Hoptroff %B Neural Networks for Statistical and Economic Data Workshop Proceedings, Dublin, December 1990 %E F. Murtagh %I Eurostat: Statistical Office of the European Communities %C Luxembourg %D 1991 %P 121-153 %T Analysis of univariate time series with connectionist nets: a case study of two classical examples %A C. de Groot %A D. Wurtz %B Neural Networks for Statistical and Economic Data Workshop Proceedings, Dublin, December 1990 %E F. Murtagh %I Munotec Systems %D 1991 %P 95-112 %T Stock price pattern recognition - a recurrent neural network approach %A K. Kamijo %A T. Tanigawa %B International Joint Conference on Neural Networks, San Diego, June 17-21 1990, Volume I %I IEEE Neural Networks Council %C Ann Arbor, MI %D 1990 %P 215-222 %T A short survey of neural networks for forecasting and related problems %A F. Murtagh %B Neural Networks for Statistical and Economic Data Workshop Proceedings, Dublin, December 1990 %E F. Murtagh %I Munotec Systems %D 1991 %P 87 %T Back propagation as a test of the efficient markets hypothesis %A G. Tsibouris %A M. Zeidenberg %B Proceedings of the Hawaii International Conference on System Sciences, January 7-10th 1992, Kauai, Hawaii, Volume 4 %E Jay F. Nunamaker %E Ralph H. Sprague %I IEEE Computer Society Press %C Los Alamitos, CA %D 1992 %P 523-532 %T Economic prediction using neural networks: the case of IBM daily stock returns %A H. 
White %I University of California, San Diego %D 1988 %T Predicting stock market fluctuations using neural network models %A G. Tsibouris %A M. Zeidenberg %R Paper presented at the Annual Meeting of the Society fro Economic Dynamics and Control, Capri, Italy 1991 %T Smoothing, forecasting and prediction of discrete time series %A R.G. Brown %I Prentice-Hall %D 1963 %S International Series in Management (Quantitative Methods Series) %P 468 %T Applied time series analysis for business and economic forecasting %A S. Nazem %I Dekker %C New York %D 1988 %S Statistics: Textbooks and Monographs Volume 93 %G 0824779134 %T Forecasting, structural time series models and the Kalman filter %A A.C. Harvey %I Cambridge University Press %C Cambridge %D 1989 %G 0521321964 %P 554 %T Bibliography on time series and stochastic processes: an international team project %E Herman O.A. Wold %I International Statistical Institute %D 1965 %P 516 %T Chaotic evolution and strange attractors: the statistical analysis of time series for deterministic nonlinear systems %A David Ruelle %I Cambridge University Press %C Cambridge %D 1989 %P 96 %G 0521362725 %T Non-linear and non-stationary time series analysis %A M.B. Priestly %I Academic Press %C London %D 1988 %G 012564910X From sayegh at CVAX.IPFW.INDIANA.EDU Mon Oct 12 04:43:12 1992 From: sayegh at CVAX.IPFW.INDIANA.EDU (sayegh@CVAX.IPFW.INDIANA.EDU) Date: Mon, 12 Oct 1992 03:43:12 EST Subject: CNS INDY 92 Message-ID: <00961F68.E78F81C0.12275@CVAX.IPFW.INDIANA.EDU> COMPUTATIONAL NEUROSCIENCE SYMPOSIUM 1992 (CNS '92) October 17, 1992 University Place Conference Center Indiana University-Purdue University at Indianapolis, Indiana In cooperation with the IEEE Systems, Man and Cybernetics Society The Computational Neuroscience Symposium (CNS '92) will highlight the interactions among engineering, science, and neuroscience. Computational neuroscience is the study of the interconnection of neuron-like elements in computing devices which leads to the discovery of the algorithms of the brain. Such algorithms may prove useful in finding optimum solutions to practical engineering problems. The focus of the symposium will be forty-five minute special lectures by eight leading international experts. KEYNOTE LECTURE: "Challenges and Promises of Networks with Neural-type Architectures" NICHOLAS DeCLARIS, Professor of Applied Mathematics, Electrical Engineering, Pathology, Epidemiology & Preventive Medicine; Director, Division of Medical Informatics, University of Maryland. SPECIAL LECTURES: "Teaching the Multiplication Tables to a Neural Network: Flexibility vs. Accuracy" JAMES ANDERSON, Professor of Cognitive & Linguistic Sciences, Brown University. "Supervised Learning for Adaptive Radar Detection" SIMON HAYKIN, Director of Communication Research Laboratory, McMaster University. "Neural Network Applications in Waveform Analysis and Pattern Recognition" EVANGELIA MICHELI-TZANAKOU, Chair and Professor of Biomedical Engineering, Rutgers University. "Signal Processing by Neural Networks in the Control of Eye Movements" DAVID ROBINSON, Professor of Ophthalmology, Biomedical Engineering & Neuroscience, The Johns Hopkins University. "Nonlinear Properties of the Hippocampal Formation" ROBERT SCLABASSI, Professor of Neurosurgery, Electrical Engineering, Behavioral Neuroscience & Psychiatry, University of Pittsburgh. "Acoustic Images in Bar Sonar and the Mechanisms Which Form Them" JAMES SIMMONS, Professor of Biology & Psychology, Brown University. 
"Understanding the Brain as a Neurocontroller: New Hypotheses and Experimental Possibilities" PAUL WERBOS, Program Director, National Science Foundation and President, International Neural Network Society. The conference registration fee, which includes symposium proceedings and lunch, is $50 prior to October 1, 1992 and may be paid by either check or credit card. After October 1, 1992 and for on-site registration, the fee is $75. Please contact the Conference Secretary for registration. Ms. Nancy Brockman CNS '92 Conference Secretary 799 West Michigan Street, Room 1211 Indianapolis, IN 46202 tel: (317)274-2761 fax: (317)274-0832 For overnight stay before or after the symposium, reservations may be made at the University Place Conference Center and Hotel at IUPUI. Special room rates for CNS '92 participants are $76 for one person and $90 for two. Please call (317)231-5150 or fax (317)231-5168. CNS '92 ORGANIZING COMMITTEE H. Oner Yurtseven, General Co-Chair Sidney Ochs, General Co-Chair P.G. Madhavan, Program Chair Michael Penna, Publication Chair SPONSORS OF CNS '92 Department of Physiology & Biophysics, Indiana University School of Medicine National Science Foundation IUPUI Faculty Development Office Purdue University School of Engineering & Technology at Indianapolis Indiana University-Purdue University School of Science at Indianapolis Eli Lilly and Company Department of Ophthalmology, Indiana University School of Medicine From jang at diva.Berkeley.EDU Mon Oct 12 12:58:58 1992 From: jang at diva.Berkeley.EDU (Jyh-Shing Roger Jang) Date: Mon, 12 Oct 92 09:58:58 -0700 Subject: paper available Message-ID: <9210121658.AA24703@diva.Berkeley.EDU> The following paper has been placed on the neuroprose archive as jang.adaptive_fuzzy.ps.Z and is available via anonymous ftp (from archive.cis.ohio-state.edu in the pub/neuroprose directory). This paper will appear in IEEE Trans. on Systems, Man and Cybernetics. ========================================================================= TITLE: ANFIS: Adaptive-Network-based Fuzzy Inference System ABSTRACT: This paper presents the architecture and learning procedure underlying ANFIS (Adaptive-Network-based Fuzzy Inference System), a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In our simulation, we employ the ANFIS architecture to model nonlinear functions, identify nonlinear components on-linely in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificail neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested. =========================================================================== Here is an example of how to retrieve this file: gvax> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron at wherever 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get jang.adaptive_fuzzy.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for jang.adaptive_fuzzy.ps.Z 226 Transfer complete. 
100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. gvax> uncompress jang.adaptive_fuzzy.ps.Z gvax> lpr jang.adaptive_fuzzy.ps -- J.-S. Roger Jang 571 Evans, EECS Department Univ. of California Berkeley, CA 94720 jang at diva.berkeley.edu (510)-642-5029 fax: (510)642-5775 From gary at cs.UCSD.EDU Mon Oct 12 18:29:56 1992 From: gary at cs.UCSD.EDU (Gary Cottrell) Date: Mon, 12 Oct 92 15:29:56 -0700 Subject: ACL-93 Message-ID: <9210122229.AA26674@odin.ucsd.edu> PDPNLP'ers: I am on the program committee for the Association for Computational Linguistics conference this year. I encourage connectionists to submit (excellent!) papers to this conference. Note that since (as far as I can tell) I am the only one on the committee, I can't promise a large representation by connectionists, but at least you'll be reviewed by one of your own. Now, lessee, who's going to review *my* submission? ;-) Gary Cottrell, UCSD ACL-93 CALL FOR PAPERS 31st Annual Meeting of the Association for Computational Linguistics 22-26 June 1993 Ohio State University Columbus, Ohio, USA TOPICS OF INTEREST: Papers are invited on substantial, original, and unpublished research on all aspects of computational linguistics, including, but not limited to, pragmatics, discourse, semantics, syntax, and the lexicon; phonetics, phonology, and morphology; interpreting and generating spoken and written language; linguistic, mathematical, and psychological models of language; language-oriented information retrieval; corpus-based language modelling; machine translation and translation aids; natural language interfaces and dialogue systems; message and narrative understanding systems; and theoretical and applications papers of every kind. REQUIREMENTS: Papers should describe unique work; they should emphasize completed work rather than intended work; and they should indicate clearly the state of completion of the reported results. A paper accepted for presentation at the ACL Meeting cannot be presented at another conference. Self-references which reveal the authors' identity (e.g., ``We previously showed [Smith, 1991] . . .'') should be avoided as far as possible, since reviewing will be ``blind''. FORMAT FOR SUBMISSION: Authors should submit four copies of preliminary versions of their papers, not to exceed 3200 words (exclusive of references). To facilitate blind reviewing, two title pages are required. The first (one copy only, unattached) should include the title, the name(s) of the author(s), complete addresses, a short (5 line) summary, and a specification of the topic area. The second (4 copies, heading the copies of the paper) should omit author names and addresses. Submissions that do not conform to this format will not be reviewed. As well, authors are strongly urged to email the title page (in directly readable ASCII form, with author information). Send to: Lenhart Schubert ACL-93 University of Rochester Department of Computer Science Rochester, NY 14627, USA fax: +1-716-461-2018 acl93 at cs.rochester.edu SCHEDULE: Preliminary papers are due by 6 January 1993. Authors will be notified of acceptance by 15 March 1993. Camera-ready copies of final papers prepared in a double-column format, preferably using a laser printer, must be received by 1 May 1993, along with a signed copyright release statement. STUDENT SESSIONS: Following the ACL-91/92 successes, there will again be a special Student Session organized by a committee of ACL graduate student members. 
ACL student members are invited to submit short papers describing innovative work in progress in any of the topics listed above. The papers will again be reviewed by a committee of students and faculty members for presentation in a workshop-style session. A separate call for papers will be issued; to get one or for other information contact Linda Suri, University of Delaware, Computer & Information Science, 103 Smith Hall, Newark, DE 19716, USA; +1-302-831-1949; suri at cis.udel.edu. OTHER ACTIVITIES: The meeting will include a program of tutorials coordinated by Philip Cohen, SRI International, Artificial Intelligence Center, 333 Ravenswood Avenue, Menlo Park, CA 94025, USA; +1-415-859-4840; pcohen at ai.sri.com. Some of the ACL Special Interest Groups may arrange workshops or other activities. CONFERENCE INFORMATION: Local arrangements are being chaired by Terry Patten, Ohio State University, Computer & Information Science, 2036 Neil Avenue Mall, Columbus, OH 43210, USA; +1-614-292-3989; patten at cis.ohio-state.edu. Anyone wishing to arrange an exhibit or present a demonstration should send a brief description together with a specification of physical requirements (space, power, telephone connections, tables, etc.) to Robert Kasper,Ohio State University, Linguistics, 222 Oxley Hall, 1712 Neil Avenue, Columbus, OH 43210, USA; +1-614-292-2844; kasper at ling.ohio-state.edu. PROGRAM COMMITTEE: The committee is chaired by Lenhart Schubert (U Rochester) and also includes Robert Carpenter (CMU) Mitch Marcus (U Pennsylvania) Garrison Cottrell (UC-San Diego) Kathleen McCoy (U Delaware) Robert Dale (U Edinburgh) Marc Moens (U Edinburgh) Bonnie Dorr (U Maryland) Johanna Moore (U Pittsburgh) Julia Hirschberg (AT&T Bell Labs) John Nerbonne (German AI Center) Paul Jacobs (GE Schenectady) James Pustejovsky (Brandeis U) Robert Kasper (Ohio State U) Uwe Reyle (U Stuttgart) Slava Katz (IBM Watson) Richard Sproat (AT&T Bell Labs) Judith Klavans (Columbia U) Jun-ichi Tsujii (UMIST) Bernard Lang (INRIA) Gregory Ward (Northwestern U) Diane Litman (AT&T Bell Labs) Janyce Wiebe (New Mexico State U) ACL INFORMATION: For other information on the conference and on the ACL more generally, contact Don Walker (ACL), Bellcore, MRE 2A379, 445 South Street, Box 1910, Morristown, NJ 07960-1910, USA; +1-201-829-4312; walker at bellcore.com. 1993 LINGUISTIC INSTITUTE: The 57th Linguistic Institute, sponsored by the LSA and co-sponsored by the ACL, will be held at The Ohio State University, in Columbus, Ohio, from June 28 until August 6, 1993, beginning right after the annual meeting of ACL. It will feature a number of computational linguistics courses, as described in the September 1992 issue of The FINITE STRING. For more information and application forms, see the June 1992 issue of the LSA Bulletin, or contact Linguistic Institute, Department of Linguistics, 222 Oxley Hall, The Ohio State University, Columbus, OH 43210, USA; +1-614-292-4052; +1-614-292-4273 fax; linginst at ling.ohio-state.edu. "e From Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU Tue Oct 13 01:32:16 1992 From: Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU (Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU) Date: Tue, 13 Oct 92 01:32:16 -0400 Subject: Call for Workshop Presenters Message-ID: As you may have seen in the NIPS*92 workshop program, I will be running a session entitled "Reading the Entrails: Understanding What's Going On Inside a Neural Net". This will take place on December 4 in Vail, Colorado. 
We will have a total of four hours for this workshop, which means we can accommodate 6-8 presentations of about 20 minutes each (plus ample time for discussion). Several presentation slots are still open. I would like to hear from any of you who would like to present a technique that you have found useful for understanding what's going on inside a network, either during or after training. You don't necessarily have to be the person who *invented* the technique, though you should have some real hands-on experience. As a presenter, you should describe how a specific technique works, show (perhaps with diagrams or a videotape) how the technique was applied to some specific problem, and describe what useful insights resulted from this application. Specific issues we would like to hear about include the following: * How do you extract a set of rules (symbolic or fuzzy) from a trained neural network? Under what conditions is this possible? * How do you explain an individual network output or action in terms of the networks inputs and structure? Which inputs are most responsible for this output? * What are the best ways to visualize weights, unit states, and their trajectories over time? How can we visualize the joint behavior of a large number of units? * What can we learn from receptive-field diagrams for the hidden units? * How can we understand the behavior of recurrent and time-domain networks? (Extracting equivalent finite-state machines, etc.) * Learning pathologies and what they look like. If you would like to present something along these lines, please contact me by E-mail (sef at cs.cmu.edu) and let me know what you would like to describe. By the way, none of the NIPS workshops are limited to presenters only. People who want to show up and listen are welcome, as long as there is room. It is suggested that you register pretty soon, however. All inquiries for registration information should go to NIPS*92 Registration SIEMENS Research Center 755 College Rd. East Princeton, NJ 08550 phone 609-734-3383 email kic at learning.siemens.com See you in Vail! -- Scott =========================================================================== Scott E. Fahlman School of Computer Science Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 Internet: sef+ at cs.cmu.edu From jose at tractatus.siemens.com Tue Oct 13 09:29:07 1992 From: jose at tractatus.siemens.com (Steve Hanson) Date: Tue, 13 Oct 1992 09:29:07 -0400 (EDT) Subject: Fwd: NIPS Program errors References: <9210131312.AA13738@learning.siemens.com> Message-ID: We have been made aware of some errors in the NIPS Program and apologize for any inconvenience this may have caused you. Please be aware of the following: The Tutorial Program is held on November 30th, 1992 (disregard incorrect date on top of page 6 in the NIPS Program booklet) The Tutorial by Josh Alspector "Electronic Neural Networks" will be held on November 30th, 1992 from 3:30-5:30 (disregard incorrect time on page 6 in the NIPS Program booklet) The dates and times as they appear on the Conference Registration form are correct. From jaap.murre at mrc-apu.cam.ac.uk Tue Oct 13 15:13:30 1992 From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre) Date: Tue, 13 Oct 92 15:13:30 BST Subject: book announcement Message-ID: <2332.9210131413@sirius.mrc-apu.cam.ac.uk> Book Announcement Learning and Categorization in Modular Neural Networks by Jacob M.J. Murre This book introduces a new neural network model, called CALM, for categorization and learning in neural networks. 
CALM is a building block for modular neural networks. The internal structure of the CALM module is inspired by the neocortical minicolumn. A pivotal psychological concept in the CALM learning algorithm is self-induced arousal, which may affect the local learning rate and noise level. The author demonstrates how this model can learn the wordsuperiority effect for letter recognition, and he discusses a series of studies that simulate experiments in implicit and explicit memory, involving normal and amnesic patients. Pathological, but psychologically accurate, behavior is produced by 'lesioning' the arousal system of these models. The author also introduces as an illustrative practical application a small model that learns to recognize handwritten digits. The book also contains a concise introduction to genetic algorithms, a new computing method based on the biological metaphor of evolution, and it is demonstrated how these genetic algorithms can be used to design network architectures with superior performance. The role of modularity in parallel hardware and software implementations is discussed in some depth. Several hardware implementations are considered, including transputer networks and a dedicated 400-processor neurocomputer built by the developers of CALM in cooperation with Delft Technical University. The book ends with an evaluation of the psychological and biological plausibility of CALM models and a general discussion of catastrophic interference, generalization, and representational capacity of modular neural networks. Murre, J.M.J. (1992). Learning and categorization in modular neural networks. Hemel Hempstead: Harvester Wheatsheaf, and Hillsdale, NJ: Lawrence Erlbaum (in Canada and the USA), 244pp. Price indication: paperback $25.50 (14.95 Pound Sterling), hardback $76.50 (45.00 Pound Sterling). For additional information, contact: Simon & Schuster at Campus 400, Maylands Avenue, Hemel Hempstead, Herts HP2 7EZ, England, tel. (0442) 881900, fax. (0442) 882099; or Lawrence Erlbaum at 365 Broadway, Hillsdale, NJ 07642-1487, USA. From rsun at athos.cs.ua.edu Tue Oct 13 14:31:49 1992 From: rsun at athos.cs.ua.edu (Ron Sun) Date: Tue, 13 Oct 1992 13:31:49 -0500 Subject: TR available: variable binding Message-ID: <9210131831.AA23434@athos.cs.ua.edu> On Variable Binding in Connectionist Networks by Ron Sun This paper deals with the problem of variable binding in connectionist networks. Specifically, a more thorough solution to the variable binding problem based on the {\it Discrete Neuron} formalism is proposed and a number of issues arising in the solution are examined in relation to logic: consistency checking, binding generation, unification, and functions, etc. We analyze what is needed in order to resolve these issues, and based on this analysis, a procedure is developed for systematically setting up connectionist networks for variable binding. The DN formaism is used as a descriptive tool and the solution based on the DN formalism can be readily mapped to simple types of neural networks. This solution compares favorably to similar solutions in simplicity and completeness. To appear in: Connection Science, Vol.4, No.2. 1992 ------------------------------------------------ It is FTPable from archive.cis.ohio-state.edu in: pub/neuroprose No hardcopy available. 
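As a purely generic illustration of what the variable-binding problem requires
(this is a hypothetical toy, not the Discrete Neuron solution described in the
paper; the rule and the names are invented for illustration only), consider
propagating constant bindings from a fact through a rule to its conclusion:

    # Hypothetical toy: applying a rule to a fact generates bindings for the
    # rule's variables and propagates them to the conclusion.
    def apply_rule(rule, fact):
        (pred, vars_), (out_pred, out_args) = rule
        fact_pred, fact_args = fact
        if fact_pred != pred or len(fact_args) != len(vars_):
            return None
        bindings = dict(zip(vars_, fact_args))          # binding generation
        return (out_pred, [bindings[v] for v in out_args])

    # "give(X, Y, Z) -> own(Y, Z)"
    rule = (("give", ["X", "Y", "Z"]), ("own", ["Y", "Z"]))
    fact = ("give", ["john", "mary", "book1"])
    print(apply_rule(rule, fact))   # ('own', ['mary', 'book1'])

Doing this with node activations rather than symbol manipulation — while also
supporting consistency checking, unification, and functions — is exactly the
problem the paper addresses.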
FTP procedure: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get sun.variable.ps.Z ftp> quit unix> uncompress sun.variable.ps.Z unix> lpr sun.variable.ps (or however you print postscript) From petsche at hawk.siemens.com Tue Oct 13 16:22:22 1992 From: petsche at hawk.siemens.com (Thomas Petsche) Date: Tue, 13 Oct 92 16:22:22 EDT Subject: Position available Message-ID: <9210132022.AA08278@hawk.siemens.com> Position available The Learning Systems Department at Siemens Corporate Research is looking for a software developer and programmer with interest in machine learning and/or neural networks to develop software for prototypes and in-house research projects. Current projects are focused on specific instances of time series classification, knowledge representation, computational linguistics and intelligent control. Current research includes a broad spectrum of learning algorithm design and analysis. The successful candidate will contribute software design and implementation expertise to these activities. The job requires a master's degree or equivalent; a thorough understanding of, and experience with, Unix and X-Windows programming; some familiarity with machine learning and/or neural networks. If you are interested, please send a resume (via email if possible) to Thomas Petsche petsche at learning.siemens.com FAX: 609-734-6565 Siemens Corporate Research 755 College Road East Princeton, NJ 08540 From rsun at athos.cs.ua.edu Tue Oct 13 14:37:40 1992 From: rsun at athos.cs.ua.edu (Ron Sun) Date: Tue, 13 Oct 1992 13:37:40 -0500 Subject: TR available: inheritance Message-ID: <9210131837.AA11452@athos.cs.ua.edu> An Efficient Feature-based Connectionist Inheritance Scheme Ron Sun Department of Computer Science University of Alabama Tuscaloosa, AL 35487 -------------------------------------------------------------- To appear in: IEEE Transaction on System, Man, and Cybernetics. Vol.23. No.1. 1993 ---------------------------------------------------------------- The paper describes how a connectionist architecture deals with the inheritance problem in an efficient and natural way. Based on the connectionist architecture CONSYDERR, we analyze the problem of property inheritance and formulate it in ways facilitating conceptual clarity and implementation. A set of ``benchmarks" is specified for ensuring the correctness of inheritance mechanisms. Parameters of CONSYDERR are formally derived to satisfy these benchmark requirements. We discuss how chaining of is-a links and multiple inheritance can be handled in this architecture. This paper shows that CONSYDERR with a two-level dual (localist and distributed) representation can handle inheritance and cancellation of inheritance correctly and extremely efficiently, in constant time instead of proportional to the length of a chain in an inheritance hierarchy. It also demonstrates the utility of a meaning-oriented, intensional approach (with features) for supplementing and enhancing extensional approaches. ---------------------------------------------------------------- It is FTPable from archive.cis.ohio-state.edu in: pub/neuroprose (Courtesy of Jordan Pollack) No hardcopy available. 
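As a generic, hypothetical sketch of what feature-based (intensional)
inheritance with exceptions amounts to computationally (this is not the
CONSYDERR architecture, whose localist/distributed dynamics are described in
the report; the concepts and features below are invented), a similarity-based
default with local overrides:

    import numpy as np

    features = {
        "bird":    np.array([1.0, 1.0, 0.0, 0.0]),
        "penguin": np.array([1.0, 1.0, 1.0, 0.0]),   # shares most bird features
        "robin":   np.array([1.0, 1.0, 0.0, 1.0]),
    }
    stored = {("bird", "can_fly"): 1.0, ("penguin", "can_fly"): 0.0}

    def property_value(concept, prop):
        if (concept, prop) in stored:            # local value cancels inheritance
            return stored[(concept, prop)]
        # otherwise inherit from the most feature-similar concept storing it
        best, best_sim = None, -1.0
        for (c, p), v in stored.items():
            if p != prop or c == concept:
                continue
            a, b = features[concept], features[c]
            sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            if sim > best_sim:
                best, best_sim = v, sim
        return best

    print(property_value("robin", "can_fly"))     # 1.0, inherited from "bird"
    print(property_value("penguin", "can_fly"))   # 0.0, local value overrides

The point of the report is that a two-level connectionist architecture can get
this behavior from its settling dynamics in constant time, rather than by
chaining explicitly up an is-a hierarchy as in the loop above.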
FTP procedure: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get sun.inheritance.ps.Z ftp> quit unix> uncompress sun.inheritance.ps.Z unix> lpr sun.inheritance.ps (or however you print postscript) (a revised version of a previous TR) From ken at cns.caltech.edu Tue Oct 13 11:25:21 1992 From: ken at cns.caltech.edu (Ken Miller) Date: Tue, 13 Oct 92 08:25:21 PDT Subject: printing of tech report Message-ID: <9210131525.AA13002@zenon.cns.caltech.edu> With respect to the recently announced tech report, "The Role of Constraints in Hebbian Learning", by K.D. Miller and D.J.C. MacKay: there were some problems with printing at least one of the postscript files. I have placed a new set of files in neuroprose that print fine on at least one printer that previously had problems. So, if you had printing problems, please try again. If you continue to have printing problems, let me know and I can send a hardcopy (assuming low-to-moderate numbers of requests). Ken ------------------------------------------------ How to retrieve and print out this paper: unix> ftp archive.cis.ohio-state.edu [OR: ftp 128.146.8.52] Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password: [your e-mail address] 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get miller.hebbian.tar.Z 200 PORT command successful. 150 Opening BINARY mode data connection for miller.hebbian.tar.Z 226 Transfer complete. 470000 bytes sent in many seconds ftp> quit 221 Goodbye. unix> uncompress miller.hebbian.tar.Z unix> tar xvf miller.hebbian.tar TO SAVE DISC SPACE, THE ABOVE TWO COMMANDS MAY BE REPLACED WITH THE SINGLE COMMAND unix> zcat miller.hebbian.tar.Z | tar xvf - hebbian_p0-9.ps hebbian_p10-19.ps hebbian_p20-29.ps hebbian_p30-35.ps unix> lpr hebbian_p30-35.ps unix> lpr hebbian_p20-29.ps unix> lpr hebbian_p10-19.ps unix> lpr hebbian_p0-9.ps From dmt at sara.inesc.pt Wed Oct 14 08:27:46 1992 From: dmt at sara.inesc.pt (Duarte Trigueiros) Date: Wed, 14 Oct 92 11:27:46 -0100 Subject: No subject Message-ID: <9210141227.AA21622@sara.inesc.pt> In addition to Paul Refenes' list, I would like to mention the paper by Bob and myself on the automatic forming of ratios as internal representations in the MLP. This paper shows that the problem of discovering the appropriate ratios for performing a given task in financial statement analysis can be simplified by using some specific training schemes in an MLP. @inproceedings( xxx , author = "Trigueiros, D. and Berry, R.", title = "The Application of Neural Network Based Methods to the Extraction of Knowledge From Accounting Reports", Booktitle = "Organisational Systems and Technology: Proceedings of the $24^{th}$ Hawaii International Conference on System Sciences", Year = 1991, Pages = "136-146", Publisher = "IEEE Computer Society Press, Los Alamitos, (CA) US.", Editor = "Nunamaker, E. and Sprague, R.") I also noticed that Paul didn't mention Utans and Moody's "Selecting Neural Network Architectures via the Prediction Risk: An Application to Corporate Bond Rating Prediction" (1991), which has been published somewhere and has, or had, a version in the neuroprose archive as utans.bondrating.ps.Z. This paper is especially recommended, as the early literature on financial applications of NNs didn't care much about things like cross-validation.
The achievements, of course, were appallingly brilliant. Finally, I gathered from Paul's list of articles that there is a book of readings entitled "Neural Network Applications in Investment and Finance". Paul is the author of an article in chapter 27. The remaining twenty-six or so chapters may well contain interesting material for completing this search. When the original request for references appeared on another list, I answered it. So I must apologise for mentioning our reference again here; I did so because Paul's list of references could give the impression, despite his intentions, of being exhaustive. --------------------------------------------------- Duarte Trigueiros, INESC, R. Alves Redol 9, 2. 1000 Lisbon, Portugal e-mail: dmt at sara.inesc.pt FAX +351 (1) 7964710 --------------------------------------------------- From fellous%hyla.usc.edu at usc.edu Wed Oct 14 15:01:06 1992 From: fellous%hyla.usc.edu at usc.edu (Jean-Marc Fellous) Date: Wed, 14 Oct 92 12:01:06 PDT Subject: USC/CNE Workshop Message-ID: <9210141901.AA23215@hyla.usc.edu> Thank you for posting this announcement to the mailing list ... --------------------------------------------------------------------------- ----------------------------- U S C ------------------------------------ --------------------------------------------------------------------------- Neural Mechanisms of Looking, Reaching and Grasping A Workshop sponsored by the Human Frontier Science Research Program and the Center for Neural Engineering - U.S.C. Michael A. Arbib Organizer October 21-22, 1992 HEDCO NEUROSCIENCES AUDITORIUM USC, University Park Campus, Los Angeles, CA ================= Session 1, October 21 ================ Chair: Hideo Sakata 08:30 - 09:00 am Marc Jeannerod (INSERM, Lyon, France) 09:00 - 09:30 am "Functional Parcellation of Human Parietal and Premotor Cortex during Reach and Grasp Tasks" Scott Grafton School of Medicine, USC, Los Angeles, CA, USA 09:30 - 10:00 am "Anatomo-functional Organization of the 'Supplementary Motor Area' and the Adjacent Cingulate Motor Areas" Massimo Matelli Universita Degli Studi di Parma, Italy **** 10:00 - 10:30 BREAK 10:30 - 11:00 am "Inferior Area 6: New findings on Visual Information Coding for Reaching and Grasping" Giacomo Rizzolatti, Universita Degli Studi di Parma, Italy 11:00 - 11:30 am "Neural Strategies for Controlling Fast Movements" Jim-Shih Liaw CNE/Computer Science Department, USC Los Angeles, CA, USA 11:30 - 12:00 am "Cortex and Haptic Memory" Joaquin Fuster, UCLA Medical Center Los Angeles, CA, USA 12:00 - 12:30 pm "Trajectory Learning from Spatial Constraints" Michael Jordan Brain and Cognitive Science Department MIT, Cambridge, MA, USA **** 12:30 - 01:30 pm LUNCH ===================== Session 2 ==================== Chair: Jean-Paul Joseph 01:30 - 02:00 pm "Selectivity of Hand Movement-Related Neurons of the Parietal Cortex in Shape, Size and Orientation of Objects and Hand Grips" Hideo Sakata, Nihon University School of Medicine Tokyo, Japan 02:00 - 02:30 pm "Modeling the Dynamic Interactions between Subregions of the Posterior Parietal and Premotor Cortices" Andrew Fagg CNE/Computer Science Department, USC, Los Angeles, CA, USA 02:30 - 03:00 pm "Optimal Control of Reaching Movements Using Neural Networks" Alberto Borghese Center for Neural Engineering, USC and I.F.C.N.-C.N.R., Milano, Italy **** 03:00 - 03:30 BREAK 03:30 - 04:00 pm "How the Frontal Eye Field can impose a saccade goal on Superior Colliculus Neurons" Madeleine Schlag-Rey Brain Research Institute, UCLA,
Los Angeles, CA, USA 04:00 - 04:30 pm "Variations on a Theme of Hallett and Lightsone" John Schlag Department of Anatomy, UCLA Los Angeles, CA, USA 04:30 - 05:00 pm "The saccade and its Context" Lucia Simo Center for Neural Engineering, USC, Los Angeles, CA 05:00 - 05:30 "An Integrative View on Modeling" Michael Arbib Center for Neural Engineering/Computer Science Department, USC Los Angeles, CA, USA ================= Session 3, October 22 =================== Chair: Giacomo Rizzolatti 08:30 - 09:00 am "Neural Activity in the Caudate Nucleus of Monkeys during Motor and Oculomotor Sequencing" Jean-Paul Joseph INSERM, Lyon, France 09:00 - 09:30 "Models of Cortico-Striatal Plasticity for Learning Associations in Space and Time" Peter Dominey Computer Science Department, USC, Los Angeles, CA, USA 09:30 - 10:00 "Eye-Head-Hand Coordination in a Pointing Task" Claude Prablanc INSERM, Lyon, France **** 10:00 - 10:30 BREAK 10:30 - 11:00 "Modeling Kinematics and Interaction of Reach and Grasp" Bruce Hoff CNE/Computer Science Department, USC, Los Angeles, CA, USA 11:00 - 11:30 "Towards a Model of the Cerebellum" Nicolas Schweighofer Center for Neural Engineering, USC, Los Angeles, CA, USA 11:30 - 12:00 "Does the Lateral Cerebellum Map Movements onto Spatial Targets?", Thomas Thach Washington University School of Medicine, St. Louis, MO, USA **** 12:00 - LUNCH ---------------------------------------------------------------------------- From becker at ai.toronto.edu Wed Oct 14 14:39:35 1992 From: becker at ai.toronto.edu (becker@ai.toronto.edu) Date: Wed, 14 Oct 1992 14:39:35 -0400 Subject: paper in neuroprose Message-ID: <92Oct14.143937edt.289@neuron.ai.toronto.edu> A postscript version of my PhD thesis has been placed in the neuroprose archive. It prints on 150 pages. The abstract is given below, followed by retrieval instructions. Sue Becker email: becker at ai.toronto.edu ----------------------------------------------------------------------------- An Information-theoretic Unsupervised Learning Algorithm for Neural Networks ABSTRACT In the unsupervised learning paradigm, a network of neuron-like units is presented an ensemble of input patterns from a structured environment, such as the visual world, and learns to represent the regularities in that input. The major goal in developing unsupervised learning algorithms is to find objective functions that characterize the quality of the network's representation without explicitly specifying the desired outputs of any of the units. Previous approaches in unsupervised learning, such as clustering, principal components analysis, and information-transmission-based methods, make minimal assumptions about the kind of structure in the environment, and they are good for preprocessing raw signal input. These methods try to model {\em all} of the structure in the environment in a single processing stage. The approach taken in this thesis is novel, in that our unsupervised learning algorithms do not try to preserve all of the information in the signal. Rather, we start by making strongly constraining assumptions about the kind of structure of interest in the environment. We then proceed to design learning algorithms which will discover precisely that structure. By constraining what kind of structure will be extracted by the network, we can force the network to discover higher level, more abstract features. Additionally, the constraining assumptions we make can provide a way of decomposing difficult learning problems into multiple simpler feature extraction stages. 
We propose a class of information-theoretic learning algorithms which cause a network to become tuned to spatially coherent features of visual images. Under Gaussian assumptions about the spatially coherent features in the environment, we have shown that this method works well for learning depth from random dot stereograms of curved surfaces. Using mixture models of coherence, these algorithms can be extended to deal with discontinuities, and to form multiple models of the regularities in the environment. Our simulations demonstrate the general utility of the Imax algorithms in discovering interesting, non-trivial structure (disparity and depth discontinuities) in artificial stereo images. This is the first attempt we know of to model perceptual learning beyond the earliest stages of low-level feature extraction, and to model multiple stages of unsupervised learning. ----------------------------------------------------------------------------- To retrieve from neuroprose: unix> ftp cheops.cis.ohio-state.edu Name (cheops.cis.ohio-state.edu:becker): anonymous Password: (use your email address) ftp> cd pub/neuroprose ftp> get becker.thesis1.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for becker.thesis1.ps.Z (292385 bytes). 226 Transfer complete. 292385 bytes received in 13 seconds (22 Kbytes/s) ftp> get becker.thesis2.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for becker.thesis2.ps.Z (366573 bytes). 226 Transfer complete. 366573 bytes received in 15 seconds (23 Kbytes/s) ftp> get becker.thesis3.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for becker.thesis3.ps.Z (178239 bytes). 226 Transfer complete. 178239 bytes received in 9.2 seconds (19 Kbytes/s) ftp> quit 221 Goodbye. unix> uncompress becker* unix> lpr becker.thesis1.ps unix> lpr becker.thesis2.ps unix> lpr becker.thesis3.ps From mclennan at cs.utk.edu Wed Oct 14 15:05:39 1992 From: mclennan at cs.utk.edu (mclennan@cs.utk.edu) Date: Wed, 14 Oct 92 15:05:39 -0400 Subject: paper in neuroprose Message-ID: <9210141905.AA21440@maclennan.cs.utk.edu> **DO NOT FORWARD TO OTHER GROUPS** The following technical report has been placed in the Neuroprose archives at Ohio State (filename: maclennan.fieldcompbrain.ps.Z). Ftp instructions follow the abstract. The uncompressed file is large (1.36 MBytes). ----------------------------------------------------- Field Computation in the Brain Bruce MacLennan Computer Science Department University of Tennessee Knoxville, TN 37996 maclennan at cs.utk.edu Technical Report CS-92-174 ABSTRACT: We begin with a brief consideration of the *topology of knowledge*. It has traditionally been assumed that true knowledge must be represented by discrete symbol structures, but recent research in psychology, philosophy and computer science has shown the fundamental importance of *subsymbolic* information processing, in which knowledge is represented in terms of very large numbers -- or even continua -- of *microfeatures*. We believe that this sets the stage for a fundamentally new theory of knowledge, and we sketch a theory of continuous information representation and processing. Next we consider *field computation*, a kind of continuous information processing that emphasizes spatially continuous *fields* of information. This is a reasonable approximation for macroscopic areas of cortex and provides a convenient mathematical framework for studying information processing at this level.
We apply it also to a linear-systems model of dendritic information processing. We consider examples from the visual cortex, including Gabor and wavelet representations, and outline field-based theories of sensorimotor intentions and of model-based deduction. Presented at the 1st Appalachian Conference on Behavioral Neurodynamics: Processing in Biological Neural Networks, in conjunction with the Inaugural Ceremonies for the Center for Brain Research and Informational Sciences, Radford University, Radford VA, September 17-20, 1992. ----------------------------- FTP INSTRUCTIONS Either use the Getps script, or do the following: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get maclennan.fieldcompbrain.ps.Z ftp> quit unix> uncompress maclennan.fieldcompbrain.ps.Z unix> lpr -s maclennan.fieldcompbrain.ps (or however you print large postscript files) If you need hardcopy, then send your request to: library at cs.utk.edu Bruce MacLennan Department of Computer Science 107 Ayres Hall The University of Tennessee Knoxville, TN 37996-1301 (615)974-0994/5067 FAX: (615)974-4404 maclennan at cs.utk.edu From jang%diva.berkeley.edu at CMU.EDU Mon Oct 12 12:58:58 1992 From: jang%diva.berkeley.edu at CMU.EDU (Jyh-Shing Roger Jang) Date: 12 Oct 1992 09:58:58 -0700 Subject: paper available Message-ID: <9210121658.AA24703@diva.Berkeley.EDU> The following paper has been placed on the neuroprose archive as jang.adaptive_fuzzy.ps.Z and is available via anonymous ftp (from archive.cis.ohio-state.edu in the pub/neuroprose directory). This paper will appear in IEEE Trans. on Systems, Man and Cybernetics. ========================================================================= TITLE: ANFIS: Adaptive-Network-based Fuzzy Inference System ABSTRACT: This paper presents the architecture and learning procedure underlying ANFIS (Adaptive-Network-based Fuzzy Inference System), a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In our simulation, we employ the ANFIS architecture to model nonlinear functions, identify nonlinear components on-line in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificial neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested. =========================================================================== Here is an example of how to retrieve this file: gvax> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron at wherever 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get jang.adaptive_fuzzy.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for jang.adaptive_fuzzy.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. gvax> uncompress jang.adaptive_fuzzy.ps.Z gvax> lpr jang.adaptive_fuzzy.ps -- J.-S. Roger Jang 571 Evans, EECS Department Univ.
of California Berkeley, CA 94720 jang at diva.berkeley.edu (510)-642-5029 fax: (510)642-5775 From rba at bellcore.com Fri Oct 16 15:02:57 1992 From: rba at bellcore.com (Bob Allen) Date: Fri, 16 Oct 92 15:02:57 -0400 Subject: No subject Message-ID: <9210161902.AA04158@wind.bellcore.com> NIPS92, December 1-3, 1992, Denver, Colorado STUDENT FINANCIAL SUPPORT Since there has been an overwhelming number of requests for financial support for travel to attend the NIPS92 conference in Denver, it is no longer possible to consider additional requests. Please do not send in a letter of request. If you have already sent your application, we will notify you of your status in the next week; the earliest requests will be filled, and the remainder will be on a waiting list, which will depend upon the financial success of the conference. Dr. Robert B. Allen, NIPS92 Treasurer Bellcore MRE 2A-367 445 South Street Morristown, NJ 07962-1910 From Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU Sat Oct 17 10:36:16 1992 From: Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU (Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU) Date: Sat, 17 Oct 92 10:36:16 -0400 Subject: Post-NIPS Workshop Message-ID: I had excellent success with the call for presenters for my NIPS workshop on "Reading the Entrails: Understanding What's Going on Inside a Neural Net". I now have all the speakers I can use, plus a couple of alternates whom I hope to fit in as well, so please do not volunteer if you haven't done so already. I think this is a very exciting group of speakers. I will be releasing the final names as soon as I get final confirmation from a couple of people. -- Scott =========================================================================== Scott E. Fahlman Internet: sef+ at cs.cmu.edu Senior Research Scientist Phone: 412 268-2575 School of Computer Science Fax: 412 681-5739 Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 From tdenoeux at hds.univ-compiegne.fr Mon Oct 19 04:44:41 1992 From: tdenoeux at hds.univ-compiegne.fr (tdenoeux@hds.univ-compiegne.fr) Date: Mon, 19 Oct 92 09:44:41 +0100 Subject: Paper available Message-ID: <9210190844.AA11604@kaa.hds.univ-compiegne.fr> The following paper has recently been accepted for publication in the Journal "Neural Networks": INITIALIZATION OF BACK-PROPAGATION NEURAL NETWORKS WITH PROTOTYPES (Neural Networks, in Press) by T. Denoeux and R. Lengelle University of Compiegne, France ABSTRACT This paper addresses the problem of initializing the weights in back-propagation networks with one hidden layer. The proposed method relies on the use of reference patterns, or prototypes, and on a transformation which maps each vector in the original feature space onto a unit-length vector in a space with one additional dimension. This scheme applies to pattern recognition tasks, as well as to the approximation of continuous functions. Issues related to the preprocessing of input patterns and to the generation of prototypes are discussed, and an algorithm for building appropriate prototypes in the continuous case is described. Also examined is the relationship between this approach and the theory of radial basis functions. Finally, simulation results are presented, showing that initializing back-propagation networks with prototypes generally results in (1) drastic reductions in training time, (2) improved robustness against local minima, and (3) better generalization. 
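To make the transformation in the abstract concrete: one standard construction of this kind, assuming the inputs have been scaled so that ||x|| <= R, appends sqrt(R^2 - ||x||^2) as an extra component and divides by R, yielding a unit-length vector in a space with one additional dimension. The C sketch below shows that construction; it illustrates the general idea and may differ in detail from the mapping actually used in the paper.

/* Map x in R^n (with ||x|| <= R) onto a unit-length vector z in R^(n+1)
   by appending sqrt(R^2 - ||x||^2) and scaling by 1/R.  Illustrative
   only; the paper's exact construction may differ.                    */
#include <math.h>
#include <stdio.h>

static void to_unit_vector(const double *x, int n, double R, double *z)
{
    double sq = 0.0;
    int i;
    for (i = 0; i < n; i++) {
        z[i] = x[i] / R;
        sq += x[i] * x[i];
    }
    z[n] = sqrt(1.0 - sq / (R * R));   /* the additional component     */
}

int main(void)
{
    double x[2] = { 0.3, 0.4 }, z[3];
    to_unit_vector(x, 2, 1.0, z);
    printf("z = (%.3f, %.3f, %.3f), ||z|| = %.3f\n", z[0], z[1], z[2],
           sqrt(z[0] * z[0] + z[1] * z[1] + z[2] * z[2]));
    return 0;
}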
A free copy is available at the following address to those who do not have easy access to the journal: +------------------------------------------------------------------------+ | tdenoeux at hds.univ-compiegne.fr Thierry DENOEUX | | Departement de Genie Informatique | | Centre de Recherches de Royallieu | | tel (+33) 44 23 44 96 Universite de Technologie de Compiegne | | fax (+33) 44 23 44 77 B.P. 649 | | 60206 COMPIEGNE CEDEX | | France | +------------------------------------------------------------------------+ From austin at minster.york.ac.uk Mon Oct 19 11:40:43 1992 From: austin at minster.york.ac.uk (austin@minster.york.ac.uk) Date: Mon, 19 Oct 92 11:40:43 Subject: No subject Message-ID: Weightless Neural Network Workshop '93 University of York York, England (in conjunction with Brunel University and Imperial College, London) 6-7 April 1993 CALL FOR CONTRIBUTIONS This two-day workshop provides a forum for the presentation and exchange of current work in the general field of weightless neural networks. Models include N-tuple systems, CMAC, Kanerva's sparse distributed memory, probabilistic logic nodes, g-RAM, p-RAM, etc. Contributions on theory, realisations and applications are equally welcome. Accepted contributions will either be presented as a paper or as part of a structured poster session. Abstracts should be submitted by 1 November 1992 to the address below, and should be approximately 400 words in length and highlight the important points of the proposed contribution. All proposals will be reviewed by the programme committee and authors notified by 20 December 1992. Full papers are required by 31 January 1993 and will be considered for publication in book form. Details of format requirements will be supplied later. Abstracts and enquiries to:- N M Allinson Department of Electronics University of York York, YO1 5DD Phone:(+44)(0)904-432350 Fax:(+44)(0)904-432335 Email:wnnw at ohm.york.ac.uk. From ken at cns.caltech.edu Fri Oct 16 14:47:13 1992 From: ken at cns.caltech.edu (Ken Miller) Date: Fri, 16 Oct 92 11:47:13 PDT Subject: one more try ... Message-ID: <9210161847.AA14659@zenon.cns.caltech.edu> With respect, again, to the tech report on "The role of constraints in Hebbian learning" by myself and David MacKay: There seem to be various memory problems so that different printers choke at different points trying to print out the ps files. So, for those who are used to TeX and have a way to print out .dvi files: I have placed hebbian.dvi and the ps files of the various figures where they can be ftp'ed. Perhaps you can print these out using dvips or dvi2ps, etc.; but your dvi=>ps program might have problems with the psfig macros, which we use to incorporate the figures. If you want to give it a try: unix> ftp kant.cns.caltech.edu [OR, ftp 131.215.135.31] login: anonymous password: [your e-mail address] ftp> cd pub/ken ftp> binary ftp> get hebbian.dvi ftp> prompt ftp> mget FIG*.ps ftp> quit Again, if you can't print the paper out by this or other means, please let me know and I will send a hardcopy. This should be the last message on this topic.
Ken From R.Beale at computer-science.birmingham.ac.uk Tue Oct 20 17:37:04 1992 From: R.Beale at computer-science.birmingham.ac.uk (Russell Beale) Date: Tue, 20 Oct 92 17:37:04 BST Subject: No subject Message-ID: <1111.9210201637@fat-controller.cs.bham.ac.uk> CALL FOR PAPERS British Neural Network Society Symposium on Recent Advances in Neural Networks Wednesday 3rd February 1993 Lucas Institute University of Birmingham UK Contributions are invited for oral and poster presentations. Topics of interest include, but are not restricted to, the following areas: - Theory & Algorithms Time series, learning theory, fast algorithms. - Applications Finance, image processing, medical, control. - Implementations Software, hardware, optoelectronics. - Biological Networks Perception, motor control, representation. The proceedings will be published as a book after the symposium. Please send a one-page summary, postmarked by 30th November 1992, to: BNNS'93 c/o Russell Beale School of Computer Science University of Birmingham Edgbaston Birmingham B15 2TT United Kingdom Tel: +44 (0)21 414 4773 Fax: +44 (0)21 414 4281 From zl%venezia.ROCKEFELLER.EDU at ROCKVAX.ROCKEFELLER.EDU Thu Oct 22 11:53:40 1992 From: zl%venezia.ROCKEFELLER.EDU at ROCKVAX.ROCKEFELLER.EDU (Zhaoping Li) Date: Thu, 22 Oct 92 11:53:40 -0400 Subject: No subject Message-ID: <9210221553.AA10086@venezia> ROCKEFELLER UNIVERSITY anticipates the opening of one or two positions in Computational Neuroscience Laboratory. The positions are at the postdoctoral level, and are for one year, renewable to two, starting in September 1993. The focus of the research in the lab is on understanding the computational principles of the nervous system, especially the sensory pathways. It involves analytical and computational approaches with strong emphasis on connections with real neurobiology. Members of the lab include J. Atick, Z. Li, K. Obermayer, N. Redlich, and P. Penev. The lab also maintains strong interactions with other labs at Rockefeller University, including the Gilbert, Wiesel, and the biophysics labs. Interested candidates should submit a C.V. and arrange to have three letters of recommendation sent to Prof. Joseph J. Atick Head, computational neuroscience lab The Rockefeller University 1230 York Avenue New York, NY 10021 USA The Rockefeller University is an affirmative action/equal opportunity employer, and welcomes applications from women and minority candidates. From maass at figids01.tu-graz.ac.at Fri Oct 23 12:04:33 1992 From: maass at figids01.tu-graz.ac.at (maass@figids01.tu-graz.ac.at) Date: Fri, 23 Oct 92 17:04:33 +0100 Subject: No subject Message-ID: <9210231604.AA26367@figids03.tu-graz.ac.at> The following paper has been placed in the Neuroprose archive in file maass.bounds.ps.Z . Retrieval instructions follow the abstract. --Wolfgang Maass (maass at igi.tu-graz.ac.at) -------------------------------------------------------------------------------- BOUNDS FOR THE COMPUTATIONAL POWER AND LEARNING COMPLEXITY OF ANALOG NEURAL NETS Wolfgang Maass Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria ABSTRACT -------- It is shown that feedforward neural nets of constant depth with piecewise polynomial activation functions and arbitrary real weights can be simulated for boolean inputs and outputs by neural nets of a somewhat larger size and depth with heaviside gates and weights from {0,1}. 
This provides the first known upper bound for the computational power and VC-dimension of such neural nets. It is also shown that in the case of piecewise linear activation functions one can replace arbitrary real weights by rational numbers with polynomially many bits, without changing the boolean function that is computed by the neural net. In addition we improve the best known lower bound for the VC-dimension of a neural net with w weights and gates that use the heaviside function (or other common activation functions such as sigma) from Omega(w) to Omega(w log w). This implies the somewhat surprising fact that the Baum-Haussler upper bound for the VC-dimension of a neural net with heaviside gates is asymptotically optimal. Finally it is shown that neural nets with piecewise polynomial activation functions and a constant number of analog inputs are probably approximately correct learnable (in Valiant's model for PAC-learning, with hypotheses generated by a slightly larger neural net). --------------------------------------------------------------------------------- To retrieve the paper by anonymous ftp: unix> ftp archive.cis.ohio-state.edu # (128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get maass.bounds.ps.Z ftp> quit unix> uncompress maass.bounds.ps.Z unix> lpr maass.bounds.ps From SABBATINI%ccvax.unicamp.br at BITNET.CC.CMU.EDU Sun Oct 25 15:56:00 1992 From: SABBATINI%ccvax.unicamp.br at BITNET.CC.CMU.EDU (SABBATINI%ccvax.unicamp.br@BITNET.CC.CMU.EDU) Date: Sun, 25 Oct 1992 15:56 GMT-0200 Subject: Symposium on Simulation of Social Processes Message-ID: <01GQDDM4SCM09EDYAJ@ccvax.unicamp.br> From cybsys at bingsuns.cc.binghamton.edu Sat Oct 24 15:37:08 1992 From: cybsys at bingsuns.cc.binghamton.edu (Cybernetics and Systems Moderator) Date: Sat, 24 Oct 1992 15:37:08 EDT Subject: Simulating Societies '93 Message-ID: <436C2AF2A0001B3A@brfapesp.bitnet> From itot at strl.nhk.or.jp Mon Oct 26 10:38:11 1992 From: itot at strl.nhk.or.jp (Takayuki Ito) Date: Mon, 26 Oct 92 10:38:11 JST Subject: position offer from RIKEN Message-ID: <9210260138.AA08929@vsun2.strl.nhk.or.jp> I am Ito of NHK (Japan Broadcasting Corporation). Dr. Tanaka of the RIKEN Institute asked me to post this letter. For more details, please contact him by telephone or fax. ------------------------------------ Announcement of an open position RIKEN Institute, Information Science Laboratory Researcher Field: Physiological, anatomical, and psychological studies of higher brain functions, and development of related methodology. Available from April 1, 1993 Condition: Ph.D., or scheduled to receive one by April 1, 1993. Not older than 34 on February 1, 1993. Any nationality. Inquiries to Dr. Keiji Tanaka, Chief of Information Science Laboratory, fax: +81-48-462-4696, tel: +81-48-462-1111 ext.6411 ------------------------------------ ---------------------------------------------- Takayuki Ito (itot at strl.nhk.or.jp) NHK Science and Technical Research Labs.
1-10-11, Kinuta, Setagaya-ku, Tokyo 157 Japan Tel.+81-3-5494-2369, Fax.+81-3-5494-2371 ---------------------------------------------- From wilson at smith.rowland.org Wed Oct 28 10:38:54 1992 From: wilson at smith.rowland.org (Stewart Wilson) Date: Wed, 28 Oct 92 10:38:54 EST Subject: FROM ANIMALS TO ANIMATS -- registration and list of papers Message-ID: FROM ANIMALS TO ANIMATS Second International Conference on Simulation of Adaptive Behavior (SAB92) FINAL ANNOUNCEMENT with LIST OF PAPERS TO BE PRESENTED ================================================================================ 1. Conference Dates and Site The conference will take place Monday through Friday, December 7-11, 1992 at the Ilikai Hotel, Honolulu, Hawaii. The conference will be inaugurated by a reception on Sunday evening, December 6, and will be followed by a luau (Hawaiian Feast) on Friday, December 11. 2. Conference Organizers Jean-Arcady MEYER Groupe de Bioinformatique URA686.Ecole Normale Superieure 46 rue d'Ulm 75230 Paris Cedex 05 France e-mail: meyer at wotan.ens.fr meyer at frulm63.bitnet Herbert ROITBLAT Department of Psychology University of Hawaii at Manoa 2430 Campus Road Honolulu, HI 96822 USA email: roitblat at uhunix.bitnet, roitblat at uhunix.uhcc.hawaii.edu Stewart WILSON The Rowland Institute for Science 100 Cambridge Parkway Cambridge, MA 02142 USA e-mail: wilson at smith.rowland.org 3. Program Committee A. Berthoz, France, L. Booker, USA, R. Brooks, USA, P. Colgan, Canada, J. Delius, Germany, A. Dickinson, UK, J. Ferber, France, S. Goss, Belgium, P. Nachtigall, USA, L. Steels, Belgium, R. Sutton, USA, F. Toates, UK, P. Todd, USA, S. Tsuji, Japan, W. Uttal, USA, D. Waltz, USA 4. Local Arrangements Committee S. Gagnon, A. Guillot, H. Harley, D. Helweg, M. Hoffhines, G. Losey, C. Manos, P. Moore, E. Reese, P. Tarroux, P. Vincens, & S. Yamamoto 5. Official Language: English 6. Conference Objective The goal of the conference is to bring together researchers in ethology, ecology, cybernetics, artificial intelligence, robotics, and related fields so as to further our understanding of the behaviors and underlying mechanisms that allow animals and, potentially, robots to adapt and survive in uncertain environments. The conference will focus particularly on simulation models in order to help characterize and compare various organizational principles or architectures capable of inducing adaptive behavior in real or artificial animals. The conference is expected to promote: 1. Identification of the organizational principles, functional laws, and minimal properties that make it possible for a real or artificial system to persist in an uncertain environment. 2. Better understanding of how and under what conditions such systems can themselves discover these principles through conditioning, learning, induction, or self-organization. 3. Specification of the applicability of the theoretical knowledge thus acquired to the building of autonomous robots. 4. Improved theoretical and practical knowledge of adaptive systems in general, both natural and artificial. Contributions treating any of the following topics from the perspective of adaptive behavior have been invited. The deadline for submitting papers has passed, but demonstrations will still be accommodated if possible.
*Individual and collective behavior *Autonomous robots *Neural correlates of behavior *Hierarchical and parallel organizations *Perception and motor control *Emergent structures and behaviors *Motivation and emotion *Problem solving and planning *Action selection and behavioral sequences *Goal-directed behavior *Neural networks and classifier systems *Ontogeny, learning, and evolution *Characterization of environments *Internal world models and cognitive processes *Applied adaptive behavior 7. Important Dates *JUL 15, 1992 Submissions must be received by the organizers *SEP 1, 1992 Deadline for early registration *OCT 1, 1992 Notification of acceptance or rejection NOV 7, 1992 Deadline for regular registration NOV 15, 1992 Camera ready revised versions due DEC 7-11, 1992 Conference dates 8. Conference Activities The conference program looks very exciting. In addition to the many excellent papers listed below, a series of cultural activities are also planned. The conference begins with a reception on Sunday evening, December 6. Papers will be presented each morning and late afternoon with an extended discussion period in between (during which the beach will be accessible). Thursday evening will be a moonlight cruise on the Navatek 1 along the shores of Waikiki to Diamond Head. Friday evening, after the close of the conference will be a luau (Hawaiian feast) at the Bishop Museum, the premier museum of culture and natural history in the Pacific. Museum admission is included in the luau price as is a planetarium show, dinner, refreshments, and local entertainment. 9. Registration All participants must register. Regular registration is $220 and late registration is $250. Students will be allowed to register for $50. Students should submit proof of their status along with their registration fee. The fee for accompanying persons is $75, which includes the reception and the cruise. A registration form is included. Return to: SAB92 Registration, Conference Center, University of Hawaii, 2530 Dole Street, Honolulu, HI 96822. 10. Meeting Site The conference activities will be held at the Ilikai Hotel. The Ilikai is situated at the gateway to Waikiki within walking distance of many fine restaurants, Ala Moana Shopping Center, and Ala Moana Park. The Hotel overlooks the Ala Wai Yacht Marina where Waikiki Beach begins. Room rates for the conference are $110 or $125 per night (single or double). The hotel is adjacent to the beach and also offers two swimming pools, a fitness center, and tennis courts. Reservations must be made directly with the hotel. Conference rates will be available for the weekend before and the weekend following the conference as well. A hotel registration form is included. Return it by November 7, 1992 to Ilikai Hotel, 1777 Ala Moana Blvd, Honolulu, HI 96815. (800) 367-8434. In Britain: 0800 282502. Arrangements have been made for a small number of student rooms in a nearby hotel at about $55 per night (single or double). Students are, of course, welcome to stay in the conference hotel. Reservations for student rooms can be made through the official travel agent. A small number of travel "scholarships" may be available to defray part or all of the expenses of attending the conference. Interested students should submit a letter of application describing their research interests, the year they expect to receive their degree, and a brief letter of recommendation from their major professor. The number and size of awards will depend on the amount of money available. 
Persons with disabilities may contact Herbert Roitblat for information on accessibility. Advance notice is advised if you have special needs and request an accommodation. The University of Hawaii is an Equal Opportunity/Affirmative Action Institution. 11. Travel Information Theo Stahl, Associated Travel, 947 Keeaumoku Street, Honolulu, HI 96814 (808) 949-1033, (800) 745-3444, (808) 949-1037 (fax) is the official travel agent for the conference. Participants are encouraged, but not required, to make their travel arrangements through Ms Stahl. United Airlines is offering a special conference rate for participants from US as well as European, Japanese, and Australian gateway cities served by United. Ms Stahl is very knowledgeable about the local travel market and can make arrangements to visit neighbor islands (including Hawaii with its active volcano) and for other activities. Please make your travel arrangements early because Hawaii is a popular destination in December and the conference is scheduled just before the start of the busiest season. Hertz has extended a conference rate for auto rentals. Reservations can be made through the official travel agent or directly through Hertz. Mention SAB92. CONFERENCE REGISTRATION FORM Ilikai Hotel, Honolulu, HI SAB92, December 7-11, 1992 ____________________________________________________________ Last Name First Name Middle ____________________________________________________________ Professional Affiliation ____________________________________________________________ Street Address and Internal Mail Code ____________________________________________________________ City State/Country Zip/Postal Code ____________________________________________________________ E-mail Telephone Fax SAB92 December 7-11, 1992 Registration Fees (includes reception, cruise, continental breakfasts) ___ Early (Before September 1, 1992) $180 ___ Regular (Before November 7, 1992) $220 ___ Late (After November 7, 1992) $250 ___ Student (with proof of status) $50 ___ Accompanying person (number of persons) $75 ___ Luau (number of tickets) $45 ___ Donation to support student scholarship fund $____ Enclosed is a check or money order (US $ only, payable to University of Hawaii) for $_______ Return to: SAB92 Registration, Conference Center, University of Hawaii, 2530 Dole Street, Honolulu, HI 96822. SAB92 December 7-11, 1992 SAB92 Hotel Registration Ilikai Hotel Name _____________________________________________________ Address _________________________________________________ City ____________________________________________________ State/Country, Zip ______________________________________ Telephone Number ________________________________________ Arrival Date ____________________________________________ Departure Date __________________________________________ No. of Persons __________________________________________ Preferred Room rate: _____ 1 or 2 persons $110+tax _____ 1 or 2 persons $125+tax _____ 1 Bed _____ 2 Beds _____ Handicapped Accessible All reservations must be guaranteed by check or credit card deposit for one night lodging. Amount of enclosed check: $_____ Charge to: ___Visa ___ Mastercard ___American Express ___Diner's club ___Discover Credit card Number: _______________________ Expiration Date ________ Signature ___________________________________ Request and deposit must be received by November 7, 1992. Check-in time is 3:00. Check-out time is 12:00. 
SAB92 December 7-11, 1992 Mail hotel registration directly to the Ilikai Hotel, 1777 Ala Moana Blvd, Honolulu, HI 96815. (800) 367-8434. In Britain: 0800 282502 ========================================================================== SECOND INTERNATIONAL CONFERENCE ON SIMULATION OF ADAPTIVE BEHAVIOR (SAB92) ========================================================================== Papers accepted for presentation at the conference and publication in the proceedings. -------------------------------------------------------------------------- Richard A. Altes "Neuronal Parameter Maps and Signal Processing" Michael A. Arbib and Hyun-Bong Lee "Neural Mechanisms Underlying Detour Behavior in Frog and Toad" Ronald C. Arkin and J. David Hobbs "Dimensions of Communication and Social Organization in Multi-Agent Robotic Systems" Leemon C. Baird, III and A. Harry Klopf "Extensions of the Associative Control Process (ACP) Network: Hierarchies and Provable Optimality" Andrea Beltratti and Sergio Margarita "Evolution of Trading Strategies Among Heterogeneous Artificial Economic Agents" Allen Brookes "The Adaptive Nature of 3D Perception" Federico Cecconi and Domenico Parisi "Neural Networks with Motivational Units" Sunil Cherian and Wade O. Troxell "A Neural Network Based Behavior Hierarchy for Locomotion Control" Dave Cliff, Philip Husbands, and Inman Harvey "Evolving Visually Guided Robots" Marco Colombetti and Marco Dorigo "Learning to Control an Autonomous Robot by Distributed Genetic Algorithms" H. Cruse, U. Mueller-Wilm, and J. Dear "Artificial Neural Nets for Controlling a 6-Legged Walking System" Lawrence Davis, Stewart W. Wilson, and David Orvosh "Temporary Memory for Examples Can Speed Learning in a Simple Adaptive System" Dwight Deugo and Franz Oppacher "An Evolutionary Approach to Cognition" Alexis Drogoul and Jacques Ferber "From Tom Thumb to the Dockers: Some Experiments with Foraging Robots" Dario Floreano "Emergence of Nest-Based Foraging Strategies in Ecosystems of Neural Networks" Liane Gabora "Should I Stay or Should I Go: Coordinating Biological Needs with Continuously-updated Assessments of the Environment" John C. Gallagher and Randall D. Beer "A Qualitative Dynamical Analysis of Evolved Locomotion Controllers" Simon Giszter "Behavior Networks and Force Fields for Simulating Spinal Reflex Behaviors of the Frog" Ralph Hartley "Propulsion and Guidance in a Simulation of the Worm C. Elegans" Inman Harvey, Philip Husbands, and Dave Cliff "Issues in Evolutionary Robotics" Tetsuya Higuchi, Tatsuya Niwa, Toshio Tanaka, Hitoshi Iba, Hugo de Garis, and Tatsumi Furuya "Evolving Hardware with Genetic Learning" Ian Horswill "A Simple, Cheap, and Robust Visual Navigation System" Hitoshi Iba, Hugo de Garis, and Tetsuya Higuchi "Evolutionary Learning of Predatory Behaviors Based on Structured Classifiers" A. Harry Klopf, James S. Morgan, and Scott E. Weaver "Modeling Nervous System Function with a Hierarchical Network of Control Systems That Learn" David Kortenkamp and Eric Chown "A Directional Spreading Activation Network for Mobile Robot Navigation" C. Ronald Kube and Hong Zhang "Collective Robotic Intelligence" Long-Ji Lin and Tom Mitchell "Memory Approaches to Reinforcement Learning in Non-Markovian Domains" Alexander Linden and Frank Weber "Implementing Inner Drive Through Competence Reflection" Michael L. Littman "A Categorization of Reinforcement Learning Environments" Luis R. Lopez and Robert E. 
Smith "Evolving Artificial Insect Brains: Neural Networks for Artificial Compound Eyes" Pattie Maes "Behavior-Based Artificial Intelligence" Maja J. Mataric "Designing Emergent Behaviors: From Local Interactions to Collective Intelligence" Emmanuel Mazer, Juan Manuel Ahuactzin, El-Ghazali Talbi, and Pierre Bessiere "The Ariadne's Clew Algorithm" Geoffrey F. Miller and Peter M. Todd "Evolutionary Interactions among Mate Choice, Speciation, and Runaway Sexual Selection" Ulrich Nehmzow, Tim Smithers, and Brendan McGonigle "Increasing Behavioural Repertoire in a Mobile Robot" Chisato Numaoka and Akikazu Takeuchi "Collectively Migrating Robots" Lynne E. Parker "Adaptive Action Selection for Cooperative Agent Teams" Jing Peng and Ronald J. Williams "Efficient Search Control in Dyna" Rolf Pfeifer and Paul Verschure "Designing Efficiently Navigating Non-Goal-Directed Robots" Tony J. Prescott and John E. W. Mayhew "Building Long-Range Cognitive Maps Using Local Landmarks" Craig W. Reynolds "An Evolved, Vision-Based Behavioral Model of Coordinated Group Motion" Feliz Ribeiro, Jean-Paul Barthes, and Eugenio Oliveira "Dynamic Selection of Action Sequences" Mark Ring "Two Methods for Hierarchy Learning in Reinforcement Environments" Herbert L. Roitblat, P. W. B. Moore, David A. Helweg and Paul E. Nachtigall "Representation and Processing of Acoustic Information in a Biomimetic Neural Network" Bruce E. Rosen and James M. Goodwin "Learning Autonomous Flight Control by Adaptive Coarse Coding" Nestor A. Schmajuk and H. T. Blair "The Dynamics of Spatial Navigation: An Adaptive Neural Network" Juergen Schmidhuber and Reiner Wahnsiedler "Planning Simple Trajectories Using Neural Subgoal Generators" Anton Schwartz "Perceptual Modes: Task-Directed Processing of Sensory Input" J. E. R. Staddon "A Note on Rate-Sensitive Habituation" Josh Tenenberg, Jonas Karlsson, and Steven Whitehead "Learning via Task Decomposition" Peter M. Todd and Stewart W. Wilson "Environment Structure and Adaptive Behavior From the Ground Up" Saburo Tsuji and Shigang Li "Memorizing and Representing Route Scenes" Toby Tyrrell "The Use of Hierarchies for Action Selection" William R. Uttal, Gary Bradshaw, Sriram Dayanand, Robb Lovell, Thomas Shepherd, Ramakrishna Kakarala, Kurt Skifsted, and Greg Tupper "An Integrated Computational Model of a Perceptual-Motor System" Paul F. M. J. Verschure and Rolf Pfeifer "Categorization, Representations, and The Dynamics of System-Environment Interaction: A Case Study in Autonomous Systems" Thomas Ulrich Vogel "Learning Biped Robot Obstacle Crossing" Gerhard Weiss "Action Selection and Learning in Multi-Agent Environments" Gregory M. Werner and Michael G. Dyer "Evolution of Herding Behavior in Artificial Animals" Holly Yanco and Lynn Andrea Stein "An Adaptive Communication Protocol for Cooperating Mobile Robots" R. Zapata, P. Lepinay, C. Novales, and P. 
Deplanques "Reactive Behaviors of Fast Mobile Robots in Unstructured Environments: Sensor-Based Control and Neural Networks" ============================================================================== From RAMPO at SALERNO.INFN.IT Thu Oct 29 08:30:00 1992 From: RAMPO at SALERNO.INFN.IT (RAMPO@SALERNO.INFN.IT) Date: 29 Oct 1992 13:30 +0000 (GMT) Subject: CALL FOR PAPERS: WIRN-93 Message-ID: <5903@SALERNO.INFN.IT> ***************** CALL FOR PAPERS ***************** The 6-th Italian Workshop on Neural Nets WIRN VIETRI-93 May 12-14, 1993 Vietri Sul Mare, Salerno ITALY FIRST ANNOUNCEMENT Organizing - Scientific Committee -------------------------------------------------- B. Apolloni (Univ. Milano) A. Bertoni ( Univ. Milano) E. R. Caianiello ( Univ. Salerno) D. D. Caviglia ( Univ. Genova) P. Campadelli ( CNR Milano) M. Ceccarelli ( Univ. Salerno - IRSIP CNR) P. Ciaccia ( Univ. Bologna) M. Frixione ( I.I.A.S.S.) G. M. Guazzo ( I.I.A.S.S.) M. Gori ( Univ. Firenze) F. Lauria ( Univ. Napoli) M. Marinaro ( Univ. Salerno) A. Negro ( Univ. Salerno) G. Orlandi ( Univ. Roma) E. Pasero ( Politecnico Torino ) A. Petrosino ( Univ. Salerno - IRSIP CNR) M. Protasi ( Univ. Roma II) S. Rampone ( Univ. Salerno - IRSIP CNR) R. Serra ( Gruppo Ferruzzi Ravenna) F. Sorbello ( Univ. Palermo) R. Stefanelli ( Politecnico Milano) L. Stringa ( IRST Trento) R. Tagliaferri ( Univ. Salerno) R. Vaccaro ( CNR Napoli) Topics ---------------------------------------------------- Mathematical Models Architectures and Algorithms Hardware and Software Design Hybrid Systems Pattern Recognition and Signal Processing Industrial and Commercial Applications Fuzzy Tecniques for Neural Networks Schedule ----------------------- Papers Due: January 15, 1993 Replies to Authors: March 29, 1993 Revised Papers Due: May 14, 1993 Sponsors ------------------------------------------------------------------------------ International Institute for Advanced Scientific Studies (IIASS) Dept. of Fisica Teorica, University of Salerno Dept. of Informatica e Applicazioni, University of Salerno Dept. of Scienze dell'Informazione, University of Milano Istituto per la Ricrca dei Sistemi Informatici Paralleli (IRSIP - CNR) Societa' Italiana Reti Neuroniche (SIREN) The 6-th Italian Workshop on Neural Nets (WIRN VIETRI-93) will take place in Vietri Sul Mare, Salerno ITALY, May 12-14, 1993. The conference will bring together scientists who are studying several topics related to neural networks. The three-day conference, to be held in the I.I.A.S.S., will feature both introductory tutorials and original, refereed papers, to be published by World Scientific Publishing. Papers should be 6 pages,including title, abstract, figures, tables, and bibliography. The first page should give keywords, postal and electronic mailing addresses, telephone, and FAX numbers. Submit 3 copies to the address shown. For more information, contact the Secretary of I.I.A.S.S. I.I.A.S.S Via G.Pellegrino, 19 84019 Vietri Sul Mare (SA) ITALY Tel. +39 89 761167 Fax +39 89 761189 E-Mail robtag at udsab.dia.unisa.it ***************************************************************** From taylor at world.std.com Fri Oct 30 09:09:54 1992 From: taylor at world.std.com (Russell R Leighton) Date: Fri, 30 Oct 1992 09:09:54 -0500 Subject: Free Neural Network Simualtion and Analysis SW (am6.0) Message-ID: <199210301409.AA24858@world.std.com> ************************************************************************* **** delete all prerelease versions!!!!!!! 
(they are not up to date) **** ************************************************************************* The following describes a neural network simulation environment made available free from the MITRE Corporation. The software contains a neural network simulation code generator which generates high performance ANSI C code implementations for modular backpropagation neural networks. Also included is an interface to visualization tools. FREE NEURAL NETWORK SIMULATOR AVAILABLE Aspirin/MIGRAINES Version 6.0 The Mitre Corporation is making available free to the public a neural network simulation environment called Aspirin/MIGRAINES. The software consists of a code generator that builds neural network simulations by reading a network description (written in a language called "Aspirin") and generates an ANSI C simulation. An interface (called "MIGRAINES") is provided to export data from the neural network to visualization tools. The previous version (Version 5.0) has over 600 registered installation sites world wide. The system has been ported to a number of platforms: Host platforms: convex_c2 /* Convex C2 */ convex_c3 /* Convex C3 */ cray_xmp /* Cray XMP */ cray_ymp /* Cray YMP */ cray_c90 /* Cray C90 */ dga_88k /* Data General Aviion w/88XXX */ ds_r3k /* Dec Station w/r3000 */ ds_alpha /* Dec Station w/alpha */ hp_parisc /* HP w/parisc */ pc_iX86_sysvr4 /* IBM pc 386/486 Unix SysVR4 */ pc_iX86_sysvr3 /* IBM pc 386/486 Interactive Unix SysVR3 */ ibm_rs6k /* IBM w/rs6000 */ news_68k /* News w/68XXX */ news_r3k /* News w/r3000 */ next_68k /* NeXT w/68XXX */ sgi_r3k /* Silicon Graphics w/r3000 */ sgi_r4k /* Silicon Graphics w/r4000 */ sun_sparc /* Sun w/sparc */ sun_68k /* Sun w/68XXX */ Coprocessors: mc_i860 /* Mercury w/i860 */ meiko_i860 /* Meiko w/i860 Computing Surface */ Included with the software are "config" files for these platforms. Porting to other platforms may be done by choosing the "closest" platform currently supported and adapting the config files. New Features ------------ - ANSI C ( ANSI C compiler required! If you do not have an ANSI C compiler, a free (and very good) compiler called gcc is available by anonymous ftp from prep.ai.mit.edu (18.71.0.38). ) Gcc is what was used to develop am6 on Suns. - Autoregressive backprop has better stability constraints (see examples: ringing and sequence), very good for sequence recognition - File reader supports "caching" so you can use HUGE data files (larger than physical/virtual memory). - The "analyze" utility which aids the analysis of hidden unit behavior (see examples: sonar and characters) - More examples - More portable system configuration for easy installation on systems without a "config" file in distribution Aspirin 6.0 ------------ The software that we are releasing now is for creating, and evaluating, feed-forward networks such as those used with the backpropagation learning algorithm. The software is aimed both at the expert programmer/neural network researcher who may wish to tailor significant portions of the system to his/her precise needs, as well as at casual users who will wish to use the system with an absolute minimum of effort. Aspirin was originally conceived as ``a way of dealing with MIGRAINES.'' Our goal was to create an underlying system that would exist behind the graphics and provide the network modeling facilities. The system had to be flexible enough to allow research, that is, make it easy for a user to make frequent, possibly substantial, changes to network designs and learning algorithms. 
At the same time it had to be efficient enough to allow large ``real-world'' neural network systems to be developed. Aspirin uses a front-end parser and code generators to realize this goal. A high level declarative language has been developed to describe a network. This language was designed to make commonly used network constructs simple to describe, but to allow any network to be described. The Aspirin file defines the type of network, the size and topology of the network, and descriptions of the network's input and output. This file may also include information such as initial values of weights and names of user-defined functions. The Aspirin language is based around the concept of a "black box". A black box is a module that (optionally) receives input and (necessarily) produces output. Black boxes are autonomous units that are used to construct neural network systems. Black boxes may be connected arbitrarily to create large possibly heterogeneous network systems. As a simple example, pre- or post-processing stages of a neural network can be considered black boxes that do not learn. The output of the Aspirin parser is sent to the appropriate code generator that implements the desired neural network paradigm. The goal of Aspirin is to provide a common extendible front-end language and parser for different network paradigms. The publicly available software will include a backpropagation code generator that supports several variations of the backpropagation learning algorithm. For backpropagation networks and their variations, Aspirin supports a wide variety of capabilities: 1. feed-forward layered networks with arbitrary connections 2. ``skip level'' connections 3. one and two-dimensional weight tessellations 4. a few node transfer functions (as well as user defined) 5. connections to layers/inputs at arbitrary delays, also "Waibel style" time-delay neural networks 6. autoregressive nodes. 7. line search and conjugate gradient optimization The file describing a network is processed by the Aspirin parser and files containing C functions to implement that network are generated. This code can then be linked with an application which uses these routines to control the network. Optionally, a complete simulation may be automatically generated which is integrated with the MIGRAINES interface and can read data in a variety of file formats. Currently supported file formats are: Ascii Type1, Type2, Type3, Type4, Type5 (simple floating point file formats) ProMatlab Examples -------- A set of examples comes with the distribution: xor: from Rumelhart and McClelland, et al, "Parallel Distributed Processing, Vol 1: Foundations", MIT Press, 1986, pp. 330-334. encode: from Rumelhart and McClelland, et al, "Parallel Distributed Processing, Vol 1: Foundations", MIT Press, 1986, pp. 335-339. bayes: Approximating the optimal Bayes decision surface for a gauss-gauss problem. detect: Detecting a sine wave in noise. iris: The classic iris database. characters: Learning to recognize 4 characters independent of rotation. ring: Autoregressive network learns a decaying sinusoid impulse response. sequence: Autoregressive network learns to recognize a short sequence of orthonormal vectors. sonar: from Gorman, R. P., and Sejnowski, T. J. (1988). "Analysis of Hidden Units in a Layered Network Trained to Classify Sonar Targets" in Neural Networks, Vol. 1, pp. 75-89. spiral: from Kevin J.
Examples
--------
A set of examples comes with the distribution:

xor: from Rumelhart and McClelland, et al., "Parallel Distributed Processing, Vol 1: Foundations", MIT Press, 1986, pp. 330-334.

encode: from Rumelhart and McClelland, et al., "Parallel Distributed Processing, Vol 1: Foundations", MIT Press, 1986, pp. 335-339.

bayes: Approximating the optimal Bayes decision surface for a gauss-gauss problem.

detect: Detecting a sine wave in noise.

iris: The classic iris database.

characters: Learning to recognize 4 characters independent of rotation.

ring: Autoregressive network learns a decaying sinusoid impulse response.

sequence: Autoregressive network learns to recognize a short sequence of orthonormal vectors.

sonar: from Gorman, R. P., and Sejnowski, T. J. (1988), "Analysis of Hidden Units in a Layered Network Trained to Classify Sonar Targets", Neural Networks, Vol. 1, pp. 75-89.

spiral: from Kevin J. Lang and Michael J. Witbrock, "Learning to Tell Two Spirals Apart", in Proceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann, 1988.

ntalk: from Sejnowski, T.J., and Rosenberg, C.R. (1987), "Parallel networks that learn to pronounce English text", Complex Systems, 1, 145-168.

perf: a large network used only for performance testing.

monk: The backprop part of the MONK paper. The MONK's problems were the basis of a first international comparison of learning algorithms. The results of this comparison are summarized in "The MONK's Problems - A Performance Comparison of Different Learning Algorithms" by S.B. Thrun, J. Bala, E. Bloedorn, I. Bratko, B. Cestnik, J. Cheng, K. De Jong, S. Dzeroski, S.E. Fahlman, D. Fisher, R. Hamann, K. Kaufman, S. Keller, I. Kononenko, J. Kreuziger, R.S. Michalski, T. Mitchell, P. Pachowicz, Y. Reich, H. Vafaie, W. Van de Welde, W. Wenzel, J. Wnek, and J. Zhang, published as Technical Report CS-CMU-91-197, Carnegie Mellon University, Dec. 1991.

wine: from the ``UCI Repository Of Machine Learning Databases and Domain Theories'' (ics.uci.edu: pub/machine-learning-databases).

Performance of Aspirin simulations
----------------------------------
The backpropagation code generator produces simulations that run very efficiently. Aspirin simulations do best on vector machines when the networks are large, as exemplified by the Crays' performance. All simulations were done using the Unix "time" function and include all simulation overhead. The connections-per-second rating was calculated by multiplying the number of iterations by the total number of connections in the network and dividing by the "user" time provided by the Unix time function. Two tests were performed. In the first, the network was simply run "forward" 100,000 times and timed. In the second, the network was timed in learning mode and run until convergence. Under both tests the "user" time included the time to read in the data and initialize the network.

Sonar: This network is a two-layer fully connected network with 60 inputs: 2-34-60.

    Millions of Connections per Second

    Forward:
        SparcStation1:          1
        IBM RS/6000 320:        2.8
        HP9000/720:             4.0
        Meiko i860 (40MHz):     4.4
        Mercury i860 (40MHz):   5.6
        Cray YMP:               21.9
        Cray C90:               33.2

    Forward/Backward:
        SparcStation1:          0.3
        IBM RS/6000 320:        0.8
        Meiko i860 (40MHz):     0.9
        HP9000/720:             1.1
        Mercury i860 (40MHz):   1.3
        Cray YMP:               7.6
        Cray C90:               13.5

Gorman, R. P., and Sejnowski, T. J. (1988), "Analysis of Hidden Units in a Layered Network Trained to Classify Sonar Targets", Neural Networks, Vol. 1, pp. 75-89.

Nettalk: This network is a two-layer fully connected network with [29 x 7] inputs: 26-[15 x 8]-[29 x 7].

    Millions of Connections per Second

    Forward:
        SparcStation1:          1
        IBM RS/6000 320:        3.5
        HP9000/720:             4.5
        Mercury i860 (40MHz):   12.4
        Meiko i860 (40MHz):     12.6
        Cray YMP:               113.5
        Cray C90:               220.3

    Forward/Backward:
        SparcStation1:          0.4
        IBM RS/6000 320:        1.3
        HP9000/720:             1.7
        Meiko i860 (40MHz):     2.5
        Mercury i860 (40MHz):   3.7
        Cray YMP:               40
        Cray C90:               65.6

Sejnowski, T.J., and Rosenberg, C.R. (1987), "Parallel networks that learn to pronounce English text", Complex Systems, 1, 145-168.
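As a sanity check on the rating formula described above, the following throwaway C fragment reproduces the Sonar "forward" rating. The 210-second user time is a hypothetical figure chosen only to be consistent with the SparcStation1's reported rate of roughly 1 million connections per second, and bias weights are ignored in the connection count; the simulator's own accounting may differ slightly.

    /* Hypothetical worked example of the connections-per-second rating:
       iterations * connections / "user" time reported by time(1). */
    #include <stdio.h>

    int main(void)
    {
        long connections = 60L * 34L + 34L * 2L;  /* 2-34-60 sonar net: 2108 */
        long iterations  = 100000L;               /* forward-only test       */
        double user_sec  = 210.0;                 /* hypothetical user time  */
        double cps = (double)iterations * (double)connections / user_sec;

        printf("connections        = %ld\n", connections);
        printf("connections/second = %.0f (about %.1f million)\n",
               cps, cps / 1e6);
        return 0;
    }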
Perf: This network was only run on a few systems. It is very large with very long vectors. The performance on this network is in some sense a peak performance for a machine. This network is a two-layer fully connected network with 2000 inputs: 100-500-2000.

    Millions of Connections per Second

    Forward:
        Cray YMP:   103.00
        Cray C90:   220

    Forward/Backward:
        Cray YMP:   25.46
        Cray C90:   59.3

MIGRAINES
------------
The MIGRAINES interface is a terminal-based interface that allows you to open Unix pipes to data in the neural network. This replaces the NeWS1.1 graphical interface in version 4.0 of the Aspirin/MIGRAINES software. The new interface is not as simple to use as the version 4.0 interface but is much more portable and flexible. The MIGRAINES interface allows users to output neural network weight and node vectors to disk or to other Unix processes. Users can display the data using either public or commercial graphics/analysis tools. Example filters are included that convert data exported through MIGRAINES to formats readable by:

- Gnuplot 3
- Matlab
- Mathematica
- Xgobi

Most of the examples (see above) use the MIGRAINES interface to dump data to disk and display it using a public software package called Gnuplot3. Gnuplot3 can be obtained via anonymous ftp from:

>>>> In general, Gnuplot 3 is available as the file gnuplot3.?.tar.Z
>>>> Please obtain gnuplot from the site nearest you. Many of the major ftp
>>>> archives world-wide have already picked up the latest version, so if
>>>> you found the old version elsewhere, you might check there.
>>>>
>>>> NORTH AMERICA:
>>>>
>>>> Anonymous ftp to dartmouth.edu (129.170.16.4)
>>>> Fetch
>>>> pub/gnuplot/gnuplot3.?.tar.Z
>>>> in binary mode.

>>>>>>>> A special hack for NeXTStep may be found on 'sonata.cc.purdue.edu'
>>>>>>>> in the directory /pub/next/submissions. The gnuplot3.0 distribution
>>>>>>>> is also there (in that directory).
>>>>>>>>
>>>>>>>> There is a problem to be aware of--you will need to recompile.
>>>>>>>> gnuplot has a minor bug, so you will need to compile the command.c
>>>>>>>> file separately with the HELPFILE defined as the entire path name
>>>>>>>> (including the help file name). If you don't, the Makefile will
>>>>>>>> override the def and help won't work (in fact it will bomb the program).

NetTools
-----------
We have included a simple set of analysis tools by Simon Dennis and Steven Phillips. They are used in some of the examples to illustrate the use of the MIGRAINES interface with analysis tools. The package contains three tools for network analysis:

    gea - Group Error Analysis
    pca - Principal Components Analysis
    cda - Canonical Discriminants Analysis

Analyze
-------
"analyze" is a program inspired by Dennis and Phillips' NetTools. The "analyze" program does PCA, CDA, projections, and histograms. It can read the same data file formats as are supported by "bpmake" simulations and output data in a variety of formats. Associated with this utility are shell scripts that implement data reduction and feature extraction. "analyze" can be used to understand how the hidden layers separate the data in order to optimize the network architecture.
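For readers who want a feel for what a principal-components pass over hidden-unit activations involves, here is a minimal sketch in plain C. It is not the "analyze" utility and does not read its file formats; the activation values are invented, and only the leading component is extracted, by power iteration on the covariance matrix.

    /* Minimal sketch of the kind of analysis "analyze" performs: the first
       principal component of a set of hidden-unit activation vectors,
       found by power iteration on their covariance matrix.
       The data values below are made up for illustration. */
    #include <math.h>
    #include <stdio.h>

    #define NPAT 4   /* number of patterns     */
    #define NHID 3   /* number of hidden units */

    int main(void)
    {
        double x[NPAT][NHID] = {          /* hypothetical activations */
            {0.9, 0.1, 0.2},
            {0.8, 0.2, 0.1},
            {0.1, 0.9, 0.8},
            {0.2, 0.8, 0.9}
        };
        double mean[NHID] = {0}, cov[NHID][NHID] = {{0}};
        double v[NHID] = {1.0, 0.0, 0.0}; /* initial guess for PC1 */
        int p, i, j, it;

        /* mean and covariance of the activations */
        for (p = 0; p < NPAT; p++)
            for (i = 0; i < NHID; i++)
                mean[i] += x[p][i] / NPAT;
        for (p = 0; p < NPAT; p++)
            for (i = 0; i < NHID; i++)
                for (j = 0; j < NHID; j++)
                    cov[i][j] += (x[p][i] - mean[i]) * (x[p][j] - mean[j]) / (NPAT - 1);

        /* power iteration: v converges to the leading eigenvector */
        for (it = 0; it < 100; it++) {
            double w[NHID] = {0}, norm = 0.0;
            for (i = 0; i < NHID; i++)
                for (j = 0; j < NHID; j++)
                    w[i] += cov[i][j] * v[j];
            for (i = 0; i < NHID; i++) norm += w[i] * w[i];
            norm = sqrt(norm);
            for (i = 0; i < NHID; i++) v[i] = w[i] / norm;
        }

        printf("first principal component: %f %f %f\n", v[0], v[1], v[2]);
        return 0;
    }

Projecting each pattern's activation vector onto this component (and the next few) is the sort of view that shows how a hidden layer separates the data.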
How to get Aspirin/MIGRAINES
----------------------------
The software is available from two FTP sites, CMU's simulator collection and UCLA's cognitive science machines. The compressed tar file is a little less than 2 megabytes. Most of this space is taken up by the documentation and examples. The software is currently only available via anonymous FTP.

> To get the software from CMU's simulator collection:

1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155).
2. Log in as user "anonymous" with password your username.
3. Change remote directory to "/afs/cs/project/connect/code". Any subdirectories of this one should also be accessible. Parent directories should not be. ****You must do this in a single operation****:
       cd /afs/cs/project/connect/code
4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? Contact us at "connectionists-request at cs.cmu.edu".
5. Set binary mode by typing the command "binary". ** THIS IS IMPORTANT **
6. Get the file "am6.tar.Z".

> To get the software from UCLA's cognitive science machines:

1. Create an FTP connection to "ftp.cognet.ucla.edu" (128.97.50.19) (typically with the command "ftp ftp.cognet.ucla.edu").
2. Log in as user "anonymous" with password your username.
3. Change remote directory to "alexis", by typing the command "cd alexis".
4. Set binary mode by typing the command "binary". ** THIS IS IMPORTANT **
5. Get the file by typing the command "get am6.tar.Z".

Other sites
-----------
If these sites do not work well for you, then try the archie internet mail server. Send email:

    To: archie at cs.mcgill.ca
    Subject: prog am6.tar.Z

Archie will reply with a list of internet ftp sites from which you can get the software.

How to unpack the software
--------------------------
After ftp'ing the file, make the directory in which you wish to install the software. Go to that directory and type:

    zcat am6.tar.Z | tar xvf -

-or-

    uncompress am6.tar.Z ; tar xvf am6.tar

How to print the manual
-----------------------
The user documentation is located in ./doc in a few compressed PostScript files. To print each file on a PostScript printer type:

    uncompress *.Z
    lpr -s *.ps

Why?
----
I have been asked why MITRE is giving away this software. MITRE is a non-profit organization funded by the U.S. federal government. MITRE does research and development into various technical areas. Our research into neural network algorithms and applications has resulted in this software. Since MITRE is a publicly funded organization, it seems appropriate that the product of the neural network research be turned back into the technical community at large.

Thanks
------
Thanks to the beta sites for helping me get the bugs out and make this portable. Thanks to the folks at CMU and UCLA for the ftp sites.

Copyright and license agreement
-------------------------------
Since the Aspirin/MIGRAINES system is licensed free of charge, the MITRE Corporation provides absolutely no warranty. Should the Aspirin/MIGRAINES system prove defective, you must assume the cost of all necessary servicing, repair or correction. In no way will the MITRE Corporation be liable to you for damages, including any lost profits, lost monies, or other special, incidental or consequential damages arising out of the use or inability to use the Aspirin/MIGRAINES system.

This software is the copyright of The MITRE Corporation. It may be freely used and modified for research and development purposes. We require a brief acknowledgement in any research paper or other publication where this software has made a significant contribution. If you wish to use it for commercial gain you must contact The MITRE Corporation for conditions of use. The MITRE Corporation provides absolutely NO WARRANTY for this software.

October, 1992

Russell Leighton
MITRE Signal Processing Center
7525 Colshire Dr.
McLean, Va. 22102, USA
INTERNET: taylor at world.std.com, leighton at mitre.org

From nfb507 at hp1.uni-rostock.de Fri Oct 30 18:38:21 1992
From: nfb507 at hp1.uni-rostock.de (neural network group)
Date: Fri, 30 Oct 92 18:38:21 MEZ
Subject: DB-investigation
Message-ID:

Dear Connectionists!

In the appendix you will find the index of a database investigation we made on the subject "neural hard- and software" as published in the Japanese literature. On request we will send the whole result (about 60 pages) to you. Please send (if available) a similar list to the address: nfb507 at hp1.uni-rostock.de

Thank you.

The Neural Network Group Rostock

Appendix:

 1  30.11.1988  NEC and NEC MARKET DEVELOPMENT commercialize neuro computer
 2  06.12.1988  Electronic Technology Lab develops image processing system
 3  10.01.1989  HITACHI lab develops neural network-based computer model
 4  18.01.1989  SANYO ELECTRIC develops two bio device samples
 5  23.01.1989  FUJITSU organizes universities to set up AI research forum
 6  01.02.1989  NTT Basic Lab develops method for developing neuro circuit
 7  01.02.1989  NEC INFORMATION TECHNOLOGY to focus on neuro computer, imag
 8  16.02.1989  FUJITSU develops neuro computer chip
 9  22.02.1989  MATSUSHITA GIKEN develops pseudo neuron prototype that
10  01.03.1989  TOSHIBA develops diabetes diagnosis system that uses neural
11  03.03.1989  NIKKO SECURITIES and FUJITSU to develop neuro computer syst
12  04.04.1989  MITSUBISHI ELECTRIC develops technique for using single neu
13  05.04.1989  TOSHIBA develops neural network development system f
14  24.05.1989  NEC and NEC INFORMATION TECHNOLOGY develop software for
15  01.06.1989  MATSUSHITA ELECTRIC develops controlling technique that int
16  02.06.1989  Kyushu Institute of Technology group succeeds in
17  09.06.1989  FUJITSU develops PC-based neuro computer system
18  08.09.1989  HITACHI develops neural network LSI
19  13.09.1989  MITI, universities, and private companies to start
20  06.10.1989  MITSUBISHI ELECTRIC Central Lab develops device for meas
21  19.10.1989  FUJITSU to market neuro processor LSI and LSI board compute
22  30.10.1989  MATSUSHITA GIKEN and MITSUBISHI CHEMICAL INDUSTRIES e
23  01.11.1989  Toyohashi Science Technology University group develop
24  16.11.1989  FUJITSU introduces neuro application software for monitorin
25  28.11.1989  SONY to develop super chip which will integrate CPU and mem
26  12.12.1989  SUMITOMO METAL INDUSTRIES to enter neuro computer market by
27  04.01.1990  DAI-ICHI KANGYO BANK and FUJITSU launch project for
28  04.01.1990  DAI-ICHI KANGYO BANK and FUJITSU launch project for
29  09.01.1990  NEC develops face reference system that uses neuron devices
30  09.01.1990  FUJITSU LAB develops neuro computer prototype that achie
31  19.01.1990  CSK and TOSHIBA ENGINEERING develop stock investment dec
32  07.02.1990  MATSUSHITA ELECTRONICS develops analog neuro processor
33  24.02.1990  NIPPON STEEL and FUJITSU develop neuro computer-based
34  09.04.1990  USC asking three Japanese computer makers to partici
35  12.04.1990  NIPPON STEEL to widely use neuro computer systems
36  25.04.1990  RICOH develops neuro LSI
37  26.04.1990  FUJITSU starts projects for developing neuro technology
38  01.06.1990  JEIDA predicts that world's electronics industry will expa
39  09.06.1990  MITSUBISHI ELECTRIC LSI Lab develops neuron chip
40  02.07.1990  ADAPTIVE SOLUTIONS to expand cooperation with Japanese
41  23.07.1990  MITSUBISHI ELECTRIC develops optical neuro chip capable of
42  21.08.1990  MITSUBISHI ELECTRIC Central Lab develops optical neuro chip
43  09.10.1990  MITSUBISHI ELECTRIC Central Lab develops dynamic optical
44  15.10.1990  ATR TRANSLATION TELEPHONE LAB and Carnegie-Mellon Univer
45  06.11.1990  MATSUSHITA ELECTRIC develops neural network pattern recogni
46  14.11.1990  FUJITSU develops robot control software based on cerebellum
47  08.12.1990  MATSUSHITA ELECTRIC develops neuro fuzzy controlling techni
48  13.12.1990  HITACHI to set standards for fuzzy, neuro, and AI controlli
49  20.12.1990  FUJITSU and FANUC agree to jointly develop next-generation
50  26.12.1990  NTT lab develops experiment system for observing li
51  28.12.1990  WACOM develops neuro computer for connecting neurons us
52  10.01.1991  MITI to organize committee for studying feasibility of six
53  18.01.1991  MATSUSHITA ELECTRIC develops optical neuro device
54  11.02.1991  Research for optical neuro computers expanding
55  16.02.1991  MITSUBISHI ELECTRIC LSI Lab develops high-speed neural netw
56  08.03.1991  Chemical Technology Lab develops optical switching e
57  12.03.1991  MITSUBISHI ELECTRIC develops neural network chip with learn
58  24.04.1991  MITSUBISHI ELECTRIC confirms neural network can learn even
59  20.06.1991  Chiba University group and KYUSHU MATSUSHITA ELECTRIC devel
60  25.06.1991  NTT lab develops method for determining optimum number of
61  06.07.1991  Kyushu Institute of Technology group and Fuzzy System Lab d
62  31.07.1991  NEURON DATA JAPAN to put on sale Japanese version of gr
63  21.08.1991  MITSUBISHI ELECTRIC Central Lab develops optical arithmetic
64  23.08.1991  SOLITON SYSTEMS moving forward with LON business
65  16.09.1991  Kagoshima University group develops neuro-computer that
66  19.09.1991  MITSUBISHI ELECTRIC Central Lab develops optical neuro chip
67  06.12.1991  Tohoku University group develops neuron MOS transistor that
68  14.12.1991  Tohoku University group develops superconductive neuro comp
69  20.12.1991  TOSHIBA develops digital neuro chip
70  03.02.1992  FUJITSU to expand neuro computer business
71  19.02.1992  TOSHIBA Research Lab develops high-speed analog neuro compu
72  21.02.1992  MITSUBISHI ELECTRIC LSI Lab develops analog neuro chip whic
73  21.02.1991  NTT develops neuro chip
74  25.03.1992  MITSUBISHI ELECTRIC Central Lab develops neuro computer mod
75  03.04.1992  TOSHIBA develops new neural network system which can read
76  29.05.1992  MATSUSHITA ELECTRIC Central Research Lab develops neuro
77  19.06.1992  RICOH develops software-free neuro computer system
78  10.07.1992  MATSUSHITA ELECTRIC Lab develops self-multiplying neural ne
79  11.07.1992  TOSHIBA to increase distributed control network processor p
80  22.07.1992  MITSUBISHI ELECTRIC develops prototype optical neuro chip
81  09.09.1992  MATSUSHITA ELECTRIC Central Research Lab develops optical n
82  10.09.1992  HITACHI develops neuro-computing support software
83  06.10.1992  FUJITSU and KOMATSU develop world's first neural network co
84  09.10.1992  NRI develops damage estimate software which incorporates n

From tesauro at watson.ibm.com Fri Oct 30 12:38:28 1992
From: tesauro at watson.ibm.com (Gerald Tesauro (8-863-7682))
Date: Fri, 30 Oct 92 12:38:28 EST
Subject: Hotel reservation deadline for NIPS workshops
Message-ID:

The NIPS 92 post-conference workshops will take place Dec. 3-5 in Vail, Colorado, at the Radisson Resort Vail. The Radisson is offering attendees a special discounted room rate of $78.00 per night, and is holding a block of rooms for us until WEDNESDAY, NOVEMBER 4. Attendees are strongly encouraged to make their hotel reservations by this date.
Reservations after Nov. 4 will be on a space-available basis only. To make reservations, call the Radisson at 303-476-4444 and mention our "NIPS" group code.

Gerry Tesauro
NIPS 92 Workshops Chair

From mclennan at cs.utk.edu Fri Oct 30 17:34:36 1992
From: mclennan at cs.utk.edu (mclennan@cs.utk.edu)
Date: Fri, 30 Oct 92 17:34:36 -0500
Subject: paper in neuroprose
Message-ID: <9210302234.AA01996@maclennan.cs.utk.edu>

**DO NOT FORWARD TO OTHER GROUPS**

The following technical report has been placed in the Neuroprose archives at Ohio State (filename: maclennan.dendnet.ps.Z). Ftp instructions follow the abstract. N.B. The uncompressed file is quite long (1.2 Mbytes), so you may have to use the -s option on lpr to print it.

-----------------------------------------------------

Information Processing in the Dendritic Net

Bruce MacLennan
Computer Science Department
University of Tennessee
Knoxville, TN 37996
maclennan at cs.utk.edu

Technical Report CS-92-180

ABSTRACT: The goal of this paper is a model of the dendritic net that: (1) is mathematically tractable, (2) is reasonably true to the biology, and (3) illuminates information processing in the neuropil. First I discuss some general principles of mathematical modeling in a biological context that are relevant to the use of linearity and orthogonality in our models. Next I discuss the hypothesis that the dendritic net can be viewed as a linear field computer. Then I discuss the approximations involved in analyzing it as a dynamic, lumped-parameter, linear system. Within this basically linear framework I then present: (1) the self-organization of matched filters and of associative memories; (2) the dendritic computation of Gabor and other nonorthogonal representations; and (3) the possible effects of reverse current flow in neurons.

Based on a presentation at the 2nd Annual Behavioral and Computational Neuroscience Workshop, Georgetown University, Washington DC, May 18-20, 1992.

-----------------------------------------------------

FTP INSTRUCTIONS

Either use the Getps script, or do the following:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get maclennan.dendnet.ps.Z
ftp> quit
unix> uncompress maclennan.dendnet.ps.Z
unix> lpr -s maclennan.dendnet.ps (or however you print LONG postscript)

If you need hardcopy, then send your request to: library at cs.utk.edu

Bruce MacLennan
Department of Computer Science
107 Ayres Hall
The University of Tennessee
Knoxville, TN 37996-1301
(615)974-0994/5067
FAX: (615)974-4404
maclennan at cs.utk.edu

From barto at cs.umass.edu Fri Oct 30 18:53:42 1992
From: barto at cs.umass.edu (Andy Barto)
Date: Fri, 30 October 1992 18:53:42 -0500
Subject: faculty positions
Message-ID:

UNIVERSITY OF MASSACHUSETTS AMHERST
Faculty and Research Scientist Positions

The Department of Computer Science invites applications for one to three tenure-track faculty positions at the assistant and associate levels and several research-track faculty and postdoctoral positions at all levels, in all areas of computer science. Applicants should have a Ph.D. in computer science or a related area and should show evidence of exceptional research promise. Senior-level candidates should have a record of distinguished research. Salary is commensurate with education and experience. Our Department has grown substantially over the past five years and currently has 30 tenure-track faculty and 8 research faculty, approximately 10 postdoctoral research scientists, and 160 graduate students.
Continued growth is expected over the next five years. We have ongoing research projects in robotics, vision, natural language processing, expert systems, distributed problem solving, machine learning, artificial neural networks, person-machine interfaces, distributed processing, database systems, information retrieval, operating systems, object-oriented systems, persistent object management, real-time systems, real-time software development and analysis, programming languages, computer networks, theory of computation, office automation, parallel computation, computer architecture, and medical informatics (with the UMass Medical School).

Send a vita, along with the names of four references, to the Chair of Faculty Recruiting, Department of Computer Science, University of Massachusetts, Lederle Graduate Research Center, Amherst, MA 01003 by February 1, 1993 (email inquiries can be sent to facrec at cs.umass.edu).

An Affirmative Action/Equal Opportunity Employer