From MARCO at INGFI1.CINECA.IT Mon Jun 1 15:01:00 1992
From: MARCO at INGFI1.CINECA.IT (MARCO@INGFI1.CINECA.IT)
Date: Mon, 1 JUN 92 15:01 N
Subject: Workshop announcement
Message-ID: <2965@INGFI1.CINECA.IT>

-------------------------------------------------------
                     CALL FOR PAPERS
   SECOND WORKSHOP ON NEURAL NETWORKS FOR SPEECH PROCESSING
              Florence, 10-11 December 1992
--------------------------------------------------------

Objective: This workshop will focus on the application of neural
networks to speech processing. The following areas of interest will
be covered in particular:
- Neural models for speech processing (backpropagation,
  backpropagation through time, LVQ, ...);
- Integration of ``classical'' A.S.R. techniques (e.g. DTW, HMM)
  with neural networks;
- Integration of a priori knowledge with neural networks.
Papers based on neural models which report results in speech
compression, phoneme and word recognition, etc. are solicited.
Critical reviews are also welcome.

Contributions: Submissions should be in the form of an abstract of
2-3 pages, to be received by:
   Prof. Marco GORI, Dipartimento di Sistemi e Informatica,
   v. S. Marta, 3 - 50139 FIRENZE (ITALY)
   E-mail: marco at ingfi1.cineca.it
Please refer to Prof. Gori also for any information on the workshop.

Deadlines:
- Receipt of abstracts: 30 June 1992;
- Notification of acceptance: 1 September 1992;
- Receipt of camera-ready papers: 10 December 1992.

Working languages: English, Italian
Hotels: Details will be sent with the Provisional Programme.
Location: Hotel Cavour, via Proconsolo, 3 - Firenze
Registration Fee (VAT incl.): 200.000 Lire

From efiesler at idiap.ch Mon Jun 1 02:34:02 1992
From: efiesler at idiap.ch (Emile Fiesler)
Date: Mon, 1 Jun 92 08:34:02 +0200
Subject: IEEE NNC Standards Committee
Message-ID: <9206010634.AA06663@idiap.ch>

On behalf of the IEEE Neural Network Council Standards Committee, I am
composing a database of people who are interested in, and would like
to contribute to, the establishment of neural network standards.

The Committee consists of the following Working Groups:
A. the glossary working group, concerning neural network nomenclature
B. the paradigms working group, concerning neural network
   (construction) tools
C. the performance evaluation working group, concerning neural network
   benchmarks
D. the interfaces working group, concerning neural network hardware
   and software interfaces (This working group is still tentative.)

People who are interested, and would like to be on our mailing list,
are invited to send me the following information:
1. Name
2. Title / Position
3. Address
4. Telephone number
5. Fax number
6. Electronic mail address
7. Interest: A short statement expressing your interests in neural
   network standards, including which working group(s) you are
   interested in (A, B, C, D).

E. Fiesler
IDIAP
Case postale 609
CH-1920 Martigny
Switzerland / Suisse
Tel.: +41-26-22-76-64
Fax.: +41-26-22-78-18
E-mail: EFiesler at IDIAP.CH (INTERNET)

From worth at PARK.BU.EDU Wed Jun 3 14:23:07 1992
From: worth at PARK.BU.EDU (worth@PARK.BU.EDU)
Date: Wed, 3 Jun 92 14:23:07 -0400
Subject: ISSNNet Nominations
Message-ID: <9206031823.AA07860@alewife.bu.edu>

______________________________________________________________________
                ISSNNet Official Call for Nominations
______________________________________________________________________

The International Student Society for Neural Networks (ISSNNet) is in
the process of re-organizing. The founders are no longer students and
it is time to create a new administration.
The first step in this process is to hold elections as per the
existing bylaws. The current officers are preparing reports on what
has been accomplished so far and are laying the groundwork for the new
organization. The newly elected officers should be willing to take on
the task of finalizing this re-organization.

Official nomination period: June 1st through 30th. Nominations will be
accepted by email, surface mail, or in person at IJCNN92 (look for
ISSNNet signs in the exhibition hall). As per the bylaws, nominations
will be approved by the Governing Board and the current Officers. At
least two and no more than four nominees will be placed on the ballot
for each Officer position. Selection will be based on the number of
member nominations. Election ballots will be mailed (either by surface
mail or electronic mail) on August 1st, 1992, and voting shall be
closed on August 31st, 1992. Election to Officer positions will be
based on a plurality of votes among the selected nominees.

All four officer positions are up for election:

Position:        Duties:
==============   =================================================
President        Chief executive officer and spokesperson. The
                 President is responsible for making sure that the
                 society continues to function as described in the
                 Bylaws.
Vice President   Assists the President.
Director         Oversees practical organizational matters.
                 Responsible for elections.
Treasurer        Responsible for all monies.

Qualifications for potential nominees: The nominee must be enrolled at
a recognized academic institution (proof of student status will be
required) AND HAVE RELIABLE ACCESS TO ELECTRONIC MAIL. Each nomination
must be supported by at least 10 student members. No more than two
Officers may belong to the same Area of Jurisdiction (Country, State,
Province, Region, etc. with at least five student members). Moreover,
the President and Vice President may not belong to the same Area of
Jurisdiction.

Because ISSNNet membership processing has been suspended for some
time, anyone who has been a member of ISSNNet in the past can submit
or support a nomination. This includes students outside the USA who
were unable to submit dues because of exchange problems, but who have
been on our mailing list in the past, directly through ISSNNet or
through one of the Governors.

Fill out the information below, and return the following form to the
address shown (e-mail or surface).

---------------------------- cut here ------------------------------

                      ISSNNet NOMINATION FORM

Please include as much information about the nominee as possible.
Add lines where necessary. If using surface mail, please type.

NOMINEE INFORMATION:

Position: [President, Vice President, Director, Treasurer]

Name (Last,First): ______________________________________

University:        ______________________________________

Surface Address:   ______________________________________

                   ______________________________________

                   ______________________________________

                   ______________________________________

Email:             ______________________________________
                   (please type!)
Phone:             ______________________________________

SUPPORTING MEMBERS:

Name and University: _______________________________________________
Name and University: _______________________________________________
Name and University: _______________________________________________
Name and University: _______________________________________________
Name and University: _______________________________________________
Name and University: _______________________________________________
Name and University: _______________________________________________
Name and University: _______________________________________________
Name and University: _______________________________________________
Name and University: _______________________________________________

----------------------------- cut here -------------------------------

Return your nomination with the above information to:

        issnnet at cns.bu.edu

or to   ISSNNet Elections
        P.O. Box 15661
        Boston, MA 02215 USA

Thank you for your support!

Andy.
----------------------------------------------------------------------
Andrew J. Worth            (617) 353-6741      ISSNNet, Inc.
ISSNNet Acting Director                        P.O. Box 15661
worth at cns.bu.edu                               Boston, MA 02215 USA
----------------------------------------------------------------------

From bdbryan at eng.clemson.edu Wed Jun 3 20:21:33 1992
From: bdbryan at eng.clemson.edu (Ben Bryant)
Date: Wed, 3 Jun 92 20:21:33 EDT
Subject: TDNN network configuration file(s) for PlaNet
Message-ID: <9206040021.AA16545@eng.clemson.edu>

I recently sent a message concerning the above that was somehow
garbled in the transmission. I apologize for this. The file that was
sent in the last mailing was an ASCII text file containing our current
"best estimate" of how the training of a TDNN takes place, implemented
as a PlaNet network configuration file. If there is anyone out there
who has experience with PlaNet and has written a correct TDNN network
config file for this package, I wonder if you might be kind enough to
send us a copy. If you cannot do this for non-disclosure reasons,
could you please simply look over the following implementation and
tell me whether we have implemented the training procedure correctly.
I would be much obliged. The following is our "best guess" TDNN:

#### file for 3-layer TDNN network with input 40x15, N=2; hidden
#### 20x13, N=4; output 3x9

# DEFINITIONS OF DELAYS
define NDin 3
define NDhid 5
define NDin_1 2
define NDhid_1 4

# DEFINITIONS OF UNITS
define NUin 40
define NUhid 20
define NUout 3
define NUin_1 39
define NUhid_1 19
define NUout_1 2

# DEFINITION OF INPUT FRAME
define NFin 15
define NFhid (NFin-NDin+1)
define NFout (NFin-NDin+2-NDhid)
define BiasHid 0
define BiasOut 0

## DEFINITIONS OF LAYERS
layer Input NFin*NUin
layer Hidden NUhid*NFhid
layer Output NFout*NUout
layer Result NUout
define biasd user1

## DEFINITIONS OF INPUT/TARGET BUFFERS
target NFout*NUout
input NFin*NUin

## DEFINITIONS OF CONNECTIONS
define Win (NUin*NDin_1+NUin_1)
define Whids 0
define Whid (NUhid_1)
connect InputHidden1 Input[0-Win] to Hidden[0-Whid]
define WHid (NUhid*NDhid_1+NUhid_1)
define Wout (NUout_1)
connect HiddenOutput1 Hidden[0-WHid] to Output[0-Wout]

## n.3layer.expr: implementation of a 3layer-feedforward-net with expressions.
## define Nin, Nhid, Nout, BiasHid and BiasOut as desired.
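## Note on the two procedures below, as we currently understand the
## scheme (this is part of our "best guess" and may well be where the
## error lies): "activate" applies each shared weight block
## (InputHidden1, HiddenOutput1) at every frame position of the layer
## above it, and "learn" accumulates the deltas of all tied copies of
## a block and averages them before updating, so that the copies
## remain identical.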
define ErrMsg \n\tread\swith\s'network\sNin=\sNhid=\sNout=\sBiasHid=\sBiasOut=\sn.3layer.expr'\n
#IFNDEF NDin; printf ErrMsg; exit; ENDIF
#IFNDEF Nhid; printf ErrMsg; exit; ENDIF
#IFNDEF Nout; printf ErrMsg; exit; ENDIF
IFNDEF BiasHid; printf ErrMsg; exit; ENDIF
IFNDEF BiasOut; printf ErrMsg; exit; ENDIF

# macro definitions of the derivatives of the sigmoid for Hidden and Output
IF $min==0&&$max==1
define HiddenDer Hidden*(1-Hidden)
define OutputDer Output*(1-Output)
ELSE
define HiddenDer (Hidden-$min)*($max-Hidden)/($max-$min)
define OutputDer (Output-$min)*($max-Output)/($max-$min)
ENDIF

## PROCEDURE FOR ACTIVATING NETWORK FORWARD
procedure activate
scalar i
i=0
Input=$input
# each hidden frame sees a window of NDin input frames through the
# shared weights InputHidden1
while i<NFhid
Hidden:net[i*NUhid->i*NUhid+NUhid_1]=InputHidden1 \
  **T(Input[i*NUin->i*NUin+NUin*NDin_1+NUin_1])
i+=1
endwhile
Hidden = logistic(Hidden:net+(BiasHid*Hidden:bias))
# each output frame sees a window of NDhid hidden frames through the
# shared weights HiddenOutput1
i=0
while i<NFout
Output:net[i*NUout->i*NUout+NUout_1] = HiddenOutput1 \
  **T(Hidden[i*NUhid->i*NUhid+NUhid*NDhid_1+NUhid_1])
i+=1
endwhile
Output=logistic(Output:net+(BiasOut*Output:bias))
$Error=mean((Output:delta=$target-Output)^2)/2
Output:delta*=OutputDer
end

## PROCEDURE FOR TRAINING NETWORK
matrix Hidden_delta NFout NDhid*NUhid
procedure learn
call activate
scalar i;scalar j
# back-propagate the error of each output frame to its window of
# hidden frames
i=0
while i<NFout
Hidden_delta[i]=Output:delta[i*NUout->i*NUout+NUout_1] \
  **HiddenOutput1*HiddenDer[i*NUhid->i*NUhid+NUhid*NDhid_1+NUhid_1]
i+=1
endwhile
# accumulate the contributions of the overlapping output windows ...
Hidden:delta=0
i=0
while i<NFout
j=0
while j<NDhid
Hidden:delta[(i+j)*NUhid->(i+j)*NUhid+NUhid_1] \
  += Hidden_delta[i][j*NUhid->j*NUhid+NUhid_1]
j+=1
endwhile
i+=1
endwhile
# ... and average each hidden frame's delta over the number of output
# windows that cover it
i=0
while i<NFhid
if (i<NDhid) then Hidden:delta[i*NUhid->i*NUhid+NUhid_1]/=(i+1) endif
if (NFhid-i<NDhid) then Hidden:delta[i*NUhid->i*NUhid+NUhid_1]/=(NFhid-i) endif
if ((NFhid-i>=NDhid) && (i>=NDhid)) then Hidden:delta[i*NUhid->i*NUhid+NUhid_1]/=(NDhid) endif
i+=1
endwhile
# update the shared input-to-hidden weights, averaging the gradient
# over all tied copies
i = 0
InputHidden1:delta*=$alpha*(NDhid*(NFout))
while i<NDhid
j=0
while j<NFout
InputHidden1:delta+=$eta*T(Hidden:delta[j*NUhid->j*NUhid+NUhid_1]) \
  **Input[(i+j)*NUin->(i+j)*NUin+NDin_1*NUin+NUin_1]
j+=1
endwhile
i+=1
endwhile
InputHidden1 += InputHidden1:delta/=(NDhid*(NFout))
# update the shared hidden-to-output weights, averaging over the NFout
# tied copies
i=0
HiddenOutput1:delta*=$alpha*(NFout)
while i<NFout
HiddenOutput1:delta+=$eta*T(Output:delta[i*NUout->i*NUout+NUout_1]) \
  **Hidden[i*NUhid->i*NUhid+NUhid*NDhid_1+NUhid_1]
i+=1
endwhile
HiddenOutput1:delta/=(NFout)
HiddenOutput1+=HiddenOutput1:delta
Hidden:bias+=Hidden:biasd=Hidden:delta*$eta+Hidden:biasd*$alpha
Output:bias+=Output:biasd=Output:delta*$eta+Output:biasd*$alpha
end

Thanks in advance for your help.

-Ben Bryant

From tgd at arris.com Wed Jun 3 17:03:26 1992
From: tgd at arris.com (Tom Dietterich)
Date: Wed, 3 Jun 92 14:03:26 PDT
Subject: Position Announcement: Arris Pharmaceutical
Message-ID: <9206032103.AA09645@oyster.arris.com>

                        RESEARCH SCIENTIST
        in Machine Learning, Neural Networks, and Statistics
                       Arris Pharmaceutical

Arris Pharmaceutical is a start-up pharmaceutical company founded in
1989 and dedicated to the efficient discovery and development of
novel, orally-active human therapeutics through the application of
artificial intelligence, machine learning, and pattern recognition
methods. We are seeking a person with a PhD in Computer Science,
Mathematics, Statistics, or related fields to join our team developing
new machine learning algorithms for drug discovery. The team currently
includes contributions from Tomas Lozano-Perez, Rick Lathrop, Roger
Critchlow, and Tom Dietterich.

The ideal candidate will have a strong background in mathematics
(including spatial reasoning methods) and five years' experience in
machine learning, neural networks, or statistical model-building
methods. The candidate should be eager to learn the relevant parts of
computational chemistry and to interact with medicinal chemists and
molecular biologists.
To a first approximation, the Arris drug design strategy begins by
identifying a pharmaceutical target (e.g., an enzyme or a cell-surface
receptor), developing assays to measure chemical binding with this
target, and screening large libraries of peptides (short amino acid
sequences) with these assays. The resulting data, which indicate how
well each compound binds to the target, will then be analyzed by
machine learning algorithms to develop hypotheses that explain why
some compounds bind well to the target while others do not.
Information from X-ray crystallography or NMR spectroscopy may also be
available to the learning algorithms. Hypotheses will then be refined
by synthesizing and testing additional peptides. Finally, medicinal
chemists will synthesize small organic molecules that satisfy the
hypothesis, and these will become candidate drugs to be tested for
medical safety and effectiveness.

For more information, send your resume with the names and addresses of
three references to Tom Dietterich (email: tgd at arris.com; voice:
415-737-8600; FAX: 415-737-8590).

Arris Pharmaceutical Corporation
385 Oyster Point Boulevard, Suite 12
South San Francisco, CA 94080

From LC4A%ICINECA.BITNET at BITNET.CC.CMU.EDU Fri Jun 5 11:16:30 1992
From: LC4A%ICINECA.BITNET at BITNET.CC.CMU.EDU (F. Ventriglia)
Date: Fri, 05 Jun 92 11:16:30 SET
Subject: 1992 Capri School
Message-ID: <01GKUE3KHJV49N3W6N@BITNET.CC.CMU.EDU>

                       Last Announcement

                     INTERNATIONAL SCHOOL
                             on
              NEURAL MODELING and NEURAL NETWORKS

       Capri (Italy) - September 28th-October 9th, 1992

                   Director F. Ventriglia

The International School on Neural Modelling and Neural Networks has
been organized under the sponsorship of the Italian Group of
Cybernetics and Biophysics of the CNR, the Institute of Cybernetics of
the CNR and the National Committee for Physics of the CNR, with the
American Society for Mathematical Biology as co-sponsor.

First week (Sept 28 - Oct 2)

TOPICS                                             LECTURERS
1. Neural Structures                             * Szentagothai, Budapest
2. Functions of Neural Structures for
   Visuomotor Coordination                       * Arbib, Los Angeles
3. Correlations in Neural Activity               * Abeles, Jerusalem
4. Single Neuron Dynamics: deterministic models  * Rinzel, Bethesda
5. Single Neuron Dynamics: stochastic models     * Ricciardi, Naples
6. Oscillations in Neural Systems                * Ermentrout, Pittsburgh
7. Noise and Chaos in Neural Systems             * Erdi, Budapest

Second week (Oct 5 - Oct 9)

TOPICS                                             LECTURERS
8. Mass action in Neural Systems                 * Freeman, Berkeley
9. Statistical Neurodynamics: kinetic approach   * Ventriglia, Naples
10. Statistical Neurodynamics: sigmoidal
    approach                                     * Cowan, Chicago
11. Attractor Neural Networks in Cortical
    Conditions                                   * Amit, Roma
12. "Real" Neural Network Models                 * Traub, Yorktown Heights
13. Pattern Recognition in Neural Networks       * Fukushima, Osaka
14. Learning in Neural Networks                  * Tesauro, Yorktown Heights

About six one-hour lectures will be given each day (for 5+5 days).
Each lecturer will give four lectures.

On Saturday, October 3, there will be the following events:

18.00-19.00  Seminar
             Neural Networks: looking backward and forward
             E.R. Caianiello - Physics Dept. - University of Salerno, Italy
19.00-20.00  Round Table
             The explicative value of Neural Modeling
             Chairman: E.R. Caianiello
21.00 -->    Informal Dinner

LOCATION
The International School will be held in Capri, Italy. Lectures will
be scheduled in "La Certosa" of Capri.
WHO SHOULD ATTEND
Applicants for the International School should be actively engaged in
the fields of biological cybernetics, biomathematics or computer
science, and have a good background in mathematics. As the number of
participants must be limited to 70, preference may be given to
students who are specializing in neural modelling and neural networks
and to professionals who are seeking new materials for biomathematics
or computer science courses.

PROCEDURE FOR APPLICATION
Applicants should provide a letter of introduction from their
department, institute or company and complete the application form
below. The documents should be mailed together, as soon as possible
(also by E-Mail), to

   Dr. F. Ventriglia
   Registration Capri International School
   Istituto di Cibernetica
   Via Toiano 6
   80072 - Arco Felice (NA)
   Italy
   Tel. (39-) 81-8534 138      E-Mail LC4A at ICINECA (bitnet)
   Fax (39-) 81-5267 654       Tx 710483

The deadline for application is JUNE 15, 1992.

SCHOOL FEES
The school fee is Italian Lire 500.000 (about 500 $) and includes
notes, lunch and coffee-breaks for the duration of the School. A
limited number of grants (covering the registration fee of Lit.
500.000) is available. The organizer has applied to the Society for
Mathematical Biology for travel funds for participants who are members
of the SMB. If you want to apply for such a grant, you should submit a
request to this effect together with your application form, and your
letter of introduction should confirm that these funds cannot be
provided by your department or another source in your country.
Preference will be given to students, postdoctoral fellows and young
faculty (1-2 years after the PhD). Participants will receive
information about how the fee can be paid in their letter of
acceptance.

LODGING
As the Institute of Cybernetics has no lodging facilities of its own,
participants will have to stay in hotels in Capri. No grants are
available to cover living expenses. Whereas you are free to arrange
for a hotel through your own travel agency, it is recommended that
participants use the lodging facility reserved by the International
School. A stock of rooms in hotels in Capri has been reserved for
participants in the school, some single and others, the greatest part,
double (for two room-mates).

Hotels in Capri    Single Room  Double Room  Double Room  Half Pension
                                             as Single
Hotel Syrene          95.000      190.000      175.000    130.000 in d.
                                                          145.000 in s.
Hotel Floridiana      90.000      185.000      165.000    125.000 in d.
                                                          140.000 in s.
Villa Krupp           85.000      150.000      100.000
La Minerva            60.000      130.000       90.000
La Florida            50.000      100.000       80.000

The prices (in Italian Lire) are per person per day and include bed
and breakfast; in the double rooms each of the room-mates pays half
the double-room price.

<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>

                        REGISTRATION FORM
   International School on Neural Modeling and Neural Networks
             Capri, September 28-October 9, 1992

Name               ----------------------------------------------
First Name         -----------------------------------------
Position or Title  ----------------------------------
University/
Company            --------------------------------------
Address            --------------------------------------------
                   --------------------------------------------
                   --------------------------------------------
                   --------------------------------------------
Tel.    ----------------------    Fax  ---------------------
E-mail  ---------------------------------------------

From eldrache at informatik.tu-muenchen.de Fri Jun 5 10:48:32 1992
From: eldrache at informatik.tu-muenchen.de (Martin Eldracher)
Date: Fri, 5 Jun 92 16:48:32 +0200
Subject: Article ready in neuroprose.
Message-ID: <92Jun5.164846met_dst.23552@sunbrauer12.informatik.tu-muenchen.de>

Dear Mr. Wan,
Mr. Pollack just informed me that he has placed my article in the
neuroprose archive. Here follows the announcement of the paper; thanks
a lot for your efforts.
Yours sincerely, Martin Eldracher

-------------------- announcement below this line ---------------------------

I have placed an article in the Neuroprose archive:

The Article:
------------
"Classification of Non-Linear-Separable Real-World-Problems Using
Delta-Rule, Perceptrons, and Topologically Distributed Encoding"
can be obtained from neuroprose under the file name
eldracher.tde.ps.Z. There are no extra hardcopies available. The
article was originally published in the ``Proceedings of the 1992
ACM/SIGAPP Symposium on Applied Computing'', Volume II, pp. 1098-1104,
ACM Press, ISBN: 0-89791-502-X.

Abstract:
---------
We describe how to solve linearly non-separable problems using simple
feed-forward perceptrons without hidden layers, and a biologically
motivated, topologically distributed encoding for input data. We point
out why neural networks have advantages compared to classic
mathematical algorithms without losing performance. The Iris dataset
from Fisher is analyzed as a practical example.

In order to get the file do:

prompt> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive FTP server (Version 6.14 Thu Apr 23 14:41:38 EDT 1992) ready.
Name (archive.cis.ohio-state.edu:yourlogin): anonymous
331 Guest login ok, send e-mail address as password.
Password: your-email-address
230 Guest login ok, access restrictions apply.
ftp> cd pub/neuroprose
ftp> binary
ftp> get eldracher.tde.ps.Z
150 Opening BINARY mode data connection for eldracher.tde.ps.Z (151908 bytes).
226 Transfer complete.
ftp> bye
prompt> uncompress eldracher.tde.ps.Z
prompt> lpr eldracher.tde.ps      (or what you use for printing a file)

Martin Eldracher
-------------------------------------------------------------------------------
Martin Eldracher                      Tel: ++49-89-2105-2406
Technische Universitaet Muenchen      FAX: ++49-89-2105-8207
Institut fuer Informatik, H2
Lehrstuhl Prof. Dr. W. Brauer
Arcisstr. 21, 8000 Muenchen 2, Germany
e-mail: eldrache at informatik.tu-muenchen.de
-------------------------------------------------------------------------------

From tgd at ICSI.Berkeley.EDU Fri Jun 5 13:42:05 1992
From: tgd at ICSI.Berkeley.EDU (Tom Dietterich)
Date: Fri, 5 Jun 92 10:42:05 PDT
Subject: Belew on Paradigmatic Over-Fitting
Message-ID: <9206051742.AA03888@icsib22.ICSI.Berkeley.EDU>

Rik Belew recently posted the following message to the GA Digest. With
his permission, I am reposting it to connectionists. If you substitute
XOR, encoder-decoder tasks, and 2-spirals for De Jong's F1-F5, I think
the same message applies to a fair amount of connectionist research
too.

--Tom

======================================================================

From atul at nynexst.com Fri Jun 5 18:30:49 1992
From: atul at nynexst.com (Atul Chhabra)
Date: Fri, 5 Jun 92 18:30:49 EDT
Subject: references on telecommunications applications of neural nets
         and/or machine vision
Message-ID: <9206052230.AA09090@texas.nynexst.com>

I am looking for recent papers/technical reports etc.
on telecommunications applications of neural networks. I have the
following papers. I would appreciate receiving any additional
references. Please respond by email. I will post a summary of
responses to the net. I am also looking for references on applications
of machine vision in telecommunications.

1. A. Hiramatsu, "ATM communications network control by neural
   network," IJCNN 89, Washington D.C., I/259-266, 1989.
2. J.E. Jensen, M.A. Eshara, and S.C. Barash, "Neural network
   controller for adaptive routing in survivable communications
   networks," IJCNN 90, San Diego, CA, II/29-36, 1990.
3. T. Matsumoto, M. Koga, K. Noguchi, and S. Aizawa, "Proposal for
   neural network applications to fiber optic transmission," IJCNN 90,
   San Diego, CA, I/75-80, July 1990.
4. T.X. Brown, "Neural network for switching," IEEE Communications,
   vol 27, no 11, 72-80, 1989.
5. T.P. Troudet and S.M. Walters, "Neural network architecture for
   crossbar switch control," IEEE Transactions on Circuits and
   Systems, vol 38, 42-56, 1991.
6. S. Chen, G.J. Gibson and C.F.N. Cowan, "Adaptive channel
   equalization using a polynomial-perceptron structure," IEE
   Proceedings, vol 137, 257-264, 1990.
7. R.M. Goodman, J. Miller and H. Latin, "NETREX: A real time network
   management expert system," IEEE Globecom Workshop on the
   Application of Emerging Technologies in Network Operation and
   Management, FL, December 1988.
8. K.N. Sivarajan, "Spectrum Efficient Frequency Assignment for
   Cellular Radio," Caltech EE Doctoral Dissertation, June 1990.
9. M.D. Alston and P.M. Chau, "A decoder for block-coded forward
   error correcting systems," IJCNN 90, Washington D.C., II/302-305,
   January 1990.

Thanks.

=====================================================================
Atul K. Chhabra                    Phone: (914)644-2786
Member of Technical Staff          Fax:   (914)644-2211
NYNEX Science & Technology         Internet: atul at nynexst.com
500 Westchester Avenue
White Plains, NY 10604
=====================================================================

From harnad at Princeton.EDU Tue Jun 9 21:28:24 1992
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Tue, 9 Jun 92 21:28:24 EDT
Subject: Connectionism & Reasoning: BBS Call for Commentators
Message-ID: <9206100128.AA19528@clarity.Princeton.EDU>

Below is the abstract of a forthcoming target article on connectionism
and reasoning by Shastri & Ajjanagadde. It has been accepted for
publication in Behavioral and Brain Sciences (BBS), an international,
interdisciplinary journal that provides Open Peer Commentary on
important and controversial current research in the biobehavioral and
cognitive sciences. Commentators must be current BBS Associates or
nominated by a current BBS Associate. To be considered as a
commentator on this article, to suggest other appropriate
commentators, or for information about how to become a BBS Associate,
please send email to:

    harnad at clarity.princeton.edu   or   harnad at pucc.bitnet

or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542
[tel: 609-921-7771]

To help us put together a balanced list of commentators, please give
some indication of the aspects of the topic on which you would bring
your areas of expertise to bear if you were selected as a commentator.
An electronic draft of the full text is available for inspection by
anonymous ftp according to the instructions that follow after the
abstract.
____________________________________________________________________

       FROM SIMPLE ASSOCIATIONS TO SYSTEMATIC REASONING:
     A Connectionist representation of rules, variables, and
           dynamic bindings using temporal synchrony

Lokendra Shastri
Computer and Information Science Department
University of Pennsylvania
Philadelphia, PA 19104
shastri at central.cis.upenn.edu

Venkat Ajjanagadde
Wilhelm-Schickard-Institut
University of Tuebingen
Sand 14, W-7400 Tuebingen, Germany
nnsaj01 at mailserv.zdv.uni-tuebingen.de

KEYWORDS: knowledge representation; reasoning; connectionism; dynamic
bindings; temporal synchrony; neural oscillations; short-term memory;
long-term memory; working memory; systematicity.

ABSTRACT: Human agents draw a variety of inferences effortlessly,
spontaneously, and with remarkable efficiency --- as though these
inferences were a reflex response of their cognitive apparatus.
Furthermore, these inferences are drawn with reference to a large body
of background knowledge. This remarkable human ability is hard to
explain given findings on the complexity of reasoning reported by
researchers in artificial intelligence. It also poses a challenge for
cognitive science and computational neuroscience: How can a system of
simple and slow neuron-like elements represent a large body of
systematic knowledge and perform a range of inferences with such
speed? We describe a computational model that takes a step toward
addressing the cognitive science challenge and resolving the
artificial intelligence puzzle. We show how a connectionist network
can encode millions of facts and rules involving n-ary predicates and
variables and can perform a class of inferences in a few hundred
milliseconds. Efficient reasoning requires the rapid representation
and propagation of dynamic bindings. Our model achieves this by
representing (1) dynamic bindings as the synchronous firing of
appropriate nodes, (2) rules as interconnection patterns that direct
the propagation of rhythmic activity, and (3) long-term facts as
temporal pattern-matching sub-networks. The model is consistent with
recent neurophysiological findings which suggest that synchronous
activity occurs in the brain and may play a representational role in
neural information processing. The model also makes specific,
psychologically significant predictions about the nature of reflexive
reasoning. It identifies constraints on the form of rules that may
participate in such reasoning and relates the capacity of the working
memory underlying reflexive reasoning to biological parameters such as
the frequency at which nodes can sustain oscillations and the
coarseness of synchronization.

--------------------------------------------------------------

To help you decide whether you would be an appropriate commentator for
this article, an electronic draft is retrievable by anonymous ftp from
princeton.edu according to the instructions below (the filename is
bbs.shastri). Please do not prepare a commentary on this draft. Just
let us know, after having inspected it, what relevant expertise you
feel you would bring to bear on what aspect of the article.
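[Editorial illustration -- not from the target article. The toy Python
sketch below uses invented names throughout; it only illustrates the
core mechanism the abstract describes: a dynamic binding is
represented by two nodes firing in the same phase of an oscillatory
cycle, and a rule is an interconnection pattern that copies phase from
antecedent role nodes to consequent role nodes.]

# Toy sketch of temporal-synchrony binding (illustrative only).
# Phases are small integers standing for phase slots within one
# oscillation cycle; all roles, entities and the single rule are
# hypothetical examples, not the Shastri & Ajjanagadde model itself.

# Dynamic bindings: a role node fires in the phase of the entity
# bound to it.  Here we encode the fact give(John, Mary, Book1).
entity_phase = {"John": 0, "Mary": 1, "Book1": 2}
role_phase = {
    "give.giver":     entity_phase["John"],
    "give.recipient": entity_phase["Mary"],
    "give.object":    entity_phase["Book1"],
}

# A rule such as give(x, y, z) => own(y, z) is wired as connections
# that propagate rhythmic activity from antecedent to consequent roles.
rule_wiring = {
    "own.owner":  "give.recipient",
    "own.object": "give.object",
}

# One propagation step: each consequent role node starts firing in
# synchrony with the antecedent role node it is wired to.
for target, source in rule_wiring.items():
    role_phase[target] = role_phase[source]

# Read the inferred bindings back off by phase synchrony.
phase_to_entity = {p: e for e, p in entity_phase.items()}
inferred = {r: phase_to_entity[p] for r, p in role_phase.items()
            if r.startswith("own.")}
print("Inferred:", inferred)  # {'own.owner': 'Mary', 'own.object': 'Book1'}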
-------------------------------------------------------------

To retrieve a file by ftp from a Unix/Internet site, type either:
ftp princeton.edu
or
ftp 128.112.128.1
When you are asked for your login, type:
anonymous
Enter password as per instructions (make sure to include the specified
@), and then change directories with:
cd pub/harnad
To show the available files, type:
ls
Next, retrieve the file you want with (for example):
get bbs.shastri
When you have the file(s) you want, type:
quit

Certain non-Unix/Internet sites have a facility you can use that is
equivalent to the above. Sometimes the procedure for connecting to
princeton.edu will be a two-step process such as:
ftp
followed at the prompt by:
open princeton.edu
or
open 128.112.128.1
In case of doubt or difficulty, consult your system manager.

----------

JANET users who do not have the facility for interactive file transfer
mentioned above have two options for getting BBS files. The first,
which is simpler but may be subject to traffic delays, uses the file
transfer utility at JANET node UK.AC.FT-RELAY. Use standard file
transfer, setting the site to be UK.AC.FT-RELAY, the userid as
anonymous at edu.princeton, for the password your-own-userid at
your-site [the "@" is crucial], and for the remote filename the
filename according to Unix conventions (i.e. something like
pub/harnad/bbs.authorname). Lower case should be used where indicated,
with quotes if necessary to avoid automatic translation into upper
case. Setting the remote filename to be (D)pub/harnad instead of the
one indicated above will provide you with a directory listing.

The alternative, faster but more complicated procedure is to log on to
JANET site UK.AC.NSF.SUN (with userid and password both given as
guestftp), and then transfer the file interactively to a directory on
that site (named by you when you log on). The method for transfer is
as described above under 'Certain non-Unix/Internet sites', or you can
make use of the on-line help that is available. Transfer of the file
received to your own site is best done from your own site; the remote
file (on the UK.AC.NSF.SUN machine) should be named as
directory-name/filename (the directory name to use being that provided
by you when you logged on to UK.AC.NSF.SUN). To be sociable (since
NSF.SUN is short of disc space), once you have received the file on
your own machine you should go back to UK.AC.NSF.SUN and delete it
from your directory there.

[Thanks to Brian Josephson for the above detailed UK/JANET
instructions; similar special instructions for file retrieval from
other networks or countries would be appreciated and will be included
in updates of these instructions.]

---

Where the above procedures are not available (e.g. from Bitnet or
other networks), there are two fileservers -- ftpmail at decwrl.dec.com
and bitftp at pucc.bitnet -- that will do the transfer for you. Send
either one the one-line message:
help
for instructions (which will be similar to the above, but will be in
the form of a series of lines in an email message that ftpmail or
bitftp will then execute for you).

-------------------------------------------------------------

From wilson at smith.rowland.org Mon Jun 8 14:02:59 1992
From: wilson at smith.rowland.org (Stewart Wilson)
Date: Mon, 08 Jun 92 14:02:59 EDT
Subject: SAB92 Reminder Notice
Message-ID: <9206081802.AA03028@smith.rowland.org>

Dear Connectionists-List Moderator:

Would you kindly broadcast the enclosed reminder of the SAB92
conference? Thank you.
Stewart Wilson

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
*
*                         R E M I N D E R
*
*                    From Animals to Animats:
*   2nd International Conference on Simulation of Adaptive Behavior
*                    ----------------------
*            Honolulu, Hawaii, December 7-11, 1992
*
*  OBJECT: To bring together researchers in ethology,
*  psychology, ecology, cybernetics, artificial intelligence,
*  robotics, and related fields to further understanding of
*  the behaviors and underlying mechanisms that allow animals
*  and, potentially, robots to adapt and survive in uncertain
*  environments.
*
*  DEADLINE -- Submissions must be received by the organizers
*  by JULY 15, 1992
*
*  To receive the full Conference Announcement and Call for
*  Papers, please contact one of the Organizers:
*
*  Jean-Arcady Meyer     meyer at wotan.ens.fr
*                        meyer at frulm63.bitnet
*
*  Herbert Roitblat      roitblat at uhunix.uhcc.hawaii.edu
*                        roitblat at uhunix.bitnet
*
*  Stewart Wilson        wilson at smith.rowland.org
*
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

From atul at nynexst.com Wed Jun 10 14:32:24 1992
From: atul at nynexst.com (Atul Chhabra)
Date: Wed, 10 Jun 92 14:32:24 EDT
Subject: references on telecommunications applications of neural nets
         and/or machine vision
Message-ID: <9206101832.AA12787@texas.nynexst.com>

I received several responses to my original request for references.
Thanks to those who responded. All of the responses dealt with the
application of neural networks to telecommunication technology. I am
also interested in neural net applications related to
telecommunication operations. Examples of such applications are
network monitoring, analysis of data for alarm reporting, planning and
forecasting, and handwritten character recognition for automatic
remittance processing. Please email me references to such applications
in the context of the telecommunications industry. If there is enough
interest, I will summarize to the net.

Thanks.

--Atul

=====================================================================
Atul K. Chhabra                    Phone: (914)644-2786
Member of Technical Staff          Fax:   (914)644-2211
NYNEX Science & Technology         Internet: atul at nynexst.com
500 Westchester Avenue
White Plains, NY 10604
=====================================================================

From STAY8026 at bureau.ucc.ie Tue Jun 9 05:14:00 1992
From: STAY8026 at bureau.ucc.ie (STAY8026@bureau.ucc.ie)
Date: Tue, 9 Jun 1992 09:14 GMT
Subject: Research Fellowship
Message-ID: <01GL07CQO11S0008EU@IRUCCVAX.UCC.IE>

THE FOLLOWING IS LIKELY TO BE OF INTEREST TO EUROPEAN RESEARCHERS

Dermot Barnes and I intend to make an application to the EC Human
Capital and Mobility scheme for funding toward a postdoctoral research
fellowship to work on a project on connectionist simulations of recent
developments in human learning which have implications for our
understanding of inference. We would be interested in hearing from any
postdoctoral connectionist who might like to put in an application for
such a fellowship in tandem with ours. Our reading of the `Euro-blurb'
is that the closing date for joint applications is 29 June, which is
very close, but that individuals wishing to apply for a fellowship can
do so continuously throughout 1992-94, with periodic selection every
four months. For more information contact us by e-mail or at the
address below. Dermot will be at the Quantitative Aspects of Behavior
Meeting in Harvard later this week and I will be at the Belfast Neural
Nets Meeting at the end of the month.

P.J.
Hampson
Department of Applied Psychology
University College Cork
Ireland
tel 353-21-276871 (ext 2101)
fax 353-21-270439
e-mail stay8026 at iruccvax.ucc.ie

From sontag at control.rutgers.edu Thu Jun 11 11:35:34 1992
From: sontag at control.rutgers.edu (sontag@control.rutgers.edu)
Date: Thu, 11 Jun 92 11:35:34 EDT
Subject: "For neural networks, function determines form" in neuroprose
Message-ID: <9206111535.AA01827@control.rutgers.edu>

Title: "For neural networks, function determines form"
Authors: Francesca Albertini and Eduardo D. Sontag
Filename: albertini.ident.ps.Z

Abstract: This paper shows that the weights of continuous-time
feedback neural networks are uniquely identifiable from input/output
measurements. Under very weak genericity assumptions, the following is
true: Assume given two nets, whose neurons all have the same nonlinear
activation function $\sigma$; if the two nets have equal behaviors as
``black boxes'' then necessarily they must have the same number of
neurons and ---except at most for sign reversals at each node--- the
same weights.

(NOTE: this result is **not** a "learning" theorem. It does not
provide by itself an algorithm for loading recurrent nets. It only
shows "uniqueness of solution". However, work is in progress to apply
the techniques developed in the proof to the learning problem.)

To obtain copies of this article:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: (your email address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get albertini.ident.ps.Z
ftp> quit
unix> uncompress albertini.ident.ps.Z
unix> lpr -Pps albertini.ident.ps    (or however you print PostScript)

(With many thanks to Jordan Pollack for providing this valuable
service!)

Please note: the file requires a fair amount of memory to print. If
you have problems with FTP, I can e-mail you the PostScript file; I
cannot provide hardcopy, however.

From bhaskar at theory.cs.psu.edu Thu Jun 11 12:41:56 1992
From: bhaskar at theory.cs.psu.edu (Bhaskar DasGupta)
Date: Thu, 11 Jun 1992 12:41:56 -0400
Subject: Result available.
Message-ID: <9206111641.AA02754@omega.theory.cs.psu.edu>

************** PLEASE DO NOT FORWARD TO OTHER NEWSGROUPS ****************

The following technical report has been placed in the neuroprose
archives at Ohio State University. Questions and comments about the
result will be highly appreciated.

         EFFICIENT APPROXIMATION WITH NEURAL NETWORKS:
              A COMPARISON OF GATE FUNCTIONS*

          Bhaskar DasGupta        Georg Schnitger

                  Tech. Rep. CS-92-14
             Department of Computer Science
           The Pennsylvania State University
               University Park PA 16802
           email: {bhaskar,georg}@cs.psu.edu

ABSTRACT
--------

We compare different gate functions in terms of the approximation
power of their circuits. Evaluation criteria are circuit size s,
circuit depth d and the approximation error e(s,d). We consider two
different error models, namely ``extremely tight'' approximations
(i.e. e(s,d) = 2^{-s}) and ``more relaxed'' approximations (i.e.
e(s,d) = s^{-d}). Our goal is to determine those gate functions that
are equivalent to the standard sigmoid sigma(x) = 1/(1+exp(-x)) under
these two error models.

For error e(s,d) = 2^{-s}, the class of equivalent gate functions
contains, among others, (non-polynomial) rational functions,
(non-polynomial) roots and most radial basis functions. Newman's
approximation of |x| by rational functions is obtained as a corollary
of this equivalence result. Provably not equivalent are polynomials,
the sine-function and linear splines.
For error e(s,d) = s^{-d}, the class of equivalent activation
functions grows considerably, containing for instance linear splines,
polynomials and the sine-function. This result only holds if the
number of input units is counted when determining circuit size. The
standard sigmoid is distinguished in that relatively small weights and
thresholds suffice.

Finally, we consider the computation of boolean functions. Now the
binary threshold gains considerable power. Nevertheless, we show that
the standard sigmoid is capable of computing a certain family of n-bit
functions in constant size, whereas circuits composed of binary
thresholds require size at least proportional to log n (and O(log n
loglog n logloglog n) gates are sufficient). This last result improves
the previous best known separation result of Maass, Schnitger and
Sontag (FOCS, 1991).

(We wish to thank J. Lambert, W. Maass, R. Paturi, V. P. Roychodhury,
K. Y. Siu and E. Sontag.)

-------------------------------------------------------------------
(* This research was partially supported by NSF Grant CCR-9114545)
-------------------------------------------------------------------

************************ How to obtain a copy ************************

a) Via FTP:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: (type your email address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get dasgupta.approx.ps.Z
ftp> quit
unix> uncompress dasgupta.approx.ps.Z
unix> lpr dasgupta.approx.ps    (or however you print PostScript)

b) Via postal mail:

Request a hardcopy from one of the authors.

***********************************************************************
Bhaskar DasGupta                 | email: bhaskar at omega.cs.psu.edu
Department of Computer Science   | phone: (814)-863-7326
333 Whitmore Laboratory          | fax:   (814)-865-3176
The Pennsylvania State University
University Park PA 16802.
***********************************************************************

From lautrup at connect.nbi.dk Fri Jun 12 08:57:30 1992
From: lautrup at connect.nbi.dk (Benny Lautrup)
Date: Fri, 12 Jun 92 13:57:30 +0100
Subject: No subject
Message-ID:

Begin Message:
-----------------------------------------------------------------------

             INTERNATIONAL JOURNAL OF NEURAL SYSTEMS

The International Journal of Neural Systems is a quarterly journal
which covers information processing in natural and artificial neural
systems. It publishes original contributions on all aspects of this
broad subject which involves physics, biology, psychology, computer
science and engineering. Contributions include research papers,
reviews and short communications. The journal presents a fresh,
undogmatic attitude towards this multidisciplinary field, with the aim
of being a forum for novel ideas and improved understanding of
collective and cooperative phenomena with computational capabilities.

ISSN: 0129-0657 (IJNS)

----------------------------------

Contents of Volume 3, issue number 1 (1992):

1. E.D. Lumer: Selective Attention to Perceptual Groups: The Phase
   Tracking Mechanism.
2. A. Namatame & Y. Tsukamoto: Structural Connectionist Learning with
   Complementary Coding.
3. K.N. Gurney: Training Recurrent Nets of Hardware Realisable
   Sigma-pi Units.
4. Y. Peng, J.A. Reggia & T. Li: A Connectionist Approach to Vertex
   Covering Problems.
5. E.P. Fulcher: WIS-ART: Unsupervised Clustering with RAM
   Discriminators.
6. P. Fedor: Principles of the Design of D-Neuronal Networks I: Net
   Representations for Computer Simulation of a Melody Compositional
   Process.
7. P.
Fedor: Principles of the Design of D-Neuronal Networks II: Composing Simple Melodies. 8. D. Saad: Training Recurrent Neural Networks - The Minimal Trajectory Algorithm. ---------------------------------- Editorial board: B. Lautrup (Niels Bohr Institute, Denmark) (Editor-in-charge) S. Brunak (Technical Univ. of Denmark) (Assistant Editor-in-Charge) D. Stork (Stanford) (Book review editor) Associate editors: B. Baird (Berkeley) D. Ballard (University of Rochester) E. Baum (NEC Research Institute) S. Bjornsson (University of Iceland) J. M. Bower (CalTech) S. S. Chen (University of North Carolina) R. Eckmiller (University of Dusseldorf) J. L. Elman (University of California, San Diego) M. V. Feigelman (Landau Institute for Theoretical Physics) F. Fogelman-Soulie (Paris) K. Fukushima (Osaka University) A. Gjedde (Montreal Neurological Institute) S. Grillner (Nobel Institute for Neurophysiology, Stockholm) T. Gulliksen (University of Oslo) D. Hammerstrom (Oregon Graduate Institute) D. Horn (Tel Aviv University) J. Hounsgaard (University of Copenhagen) B. A. Huberman (XEROX PARC) L. B. Ioffe (Landau Institute for Theoretical Physics) P. I. M. Johannesma (Katholieke Univ. Nijmegen) M. Jordan (MIT) G. Josin (Neural Systems Inc.) I. Kanter (Princeton University) J. H. Kaas (Vanderbilt University) A. Lansner (Royal Institute of Technology, Stockholm) A. Lapedes (Los Alamos) B. McWhinney (Carnegie-Mellon University) M. Mezard (Ecole Normale Superieure, Paris) J. Moody (Yale, USA) A. F. Murray (University of Edinburgh) J. P. Nadal (Ecole Normale Superieure, Paris) E. Oja (Lappeenranta University of Technology, Finland) N. Parga (Centro Atomico Bariloche, Argentina) S. Patarnello (IBM ECSEC, Italy) P. Peretto (Centre d'Etudes Nucleaires de Grenoble) C. Peterson (University of Lund) K. Plunkett (University of Aarhus) S. A. Solla (AT&T Bell Labs) M. A. Virasoro (University of Rome) D. J. Wallace (University of Edinburgh) D. Zipser (University of California, San Diego) ---------------------------------- CALL FOR PAPERS Original contributions consistent with the scope of the journal are welcome. Complete instructions as well as sample copies and subscription information are available from The Editorial Secretariat, IJNS World Scientific Publishing Co. Pte. Ltd. 73, Lynton Mead, Totteridge London N20 8DH ENGLAND Telephone: (44)81-446-2461 or World Scientific Publishing Co. Inc. Suite 1B 1060 Main Street River Edge New Jersey 07661 USA Telephone: (1)201-487-9655 or World Scientific Publishing Co. Pte. Ltd. Farrer Road, P. O. Box 128 SINGAPORE 9128 Telephone (65)382-5663 ----------------------------------------------------------------------- End Message  From lautrup at connect.nbi.dk Fri Jun 12 09:00:31 1992 From: lautrup at connect.nbi.dk (Benny Lautrup) Date: Fri, 12 Jun 92 14:00:31 +0100 Subject: No subject Message-ID: ADVANCE PROGRAM IEEE Workshop on Neural Networks for Signal Processing August 31 - September 2, 1992 Copenhagen The Danish Computational Neural Network Center CONNECT and The Electronics Institute, The Technical University of Denmark In cooperation with the IEEE Signal Processing Society Invitation to Participate in the 1992 IEEE Workshop on Neural Networks for Signal Processing. The members of the Workshop Organizing Committee welcome you to the 1992 IEEE Workshop on Neural Networks for Signal Processing. The 1992 Workshop is the second workshop held in this area. The first took place in 1991 in Princeton, NJ, USA. 
The Workshop is organized by the IEEE Technical Committee for Neural
Networks and Signal Processing. The purpose of the Workshop is to
foster informal technical interaction on topics related to the
application of neural networks to signal processing problems.

Workshop Location

The 1992 Workshop will be held at Hotel Marienlyst, Ndr. Strandvej 2,
DK-3000 Helsingoer, Denmark, tel: +45 49202020, fax: +45 49262626.
Helsingoer is a small town a little north of Copenhagen. The Workshop
banquet will be held on Tuesday evening, September 1, at Kronborg
Castle, which is situated close to the workshop hotel.

Workshop Proceedings

The proceedings of the Workshop, entitled "Neural Networks for Signal
Processing - Proceedings of the 1992 IEEE Workshop", will be
distributed at the Workshop. The registration fee covers one copy of
the proceedings.

Registration Information

The Workshop registration information is given at the end of this
program. It is possible to apply for a limited number of partial
travel and registration grants via the Program Chair. The address is
given at the end of this program.

Program Overview

Time     | Monday 31/8-92    | Tuesday 1/9-92       | Wednesday 2/9-92
=========================================================================
8:15 AM  | Opening Remarks   |                      |
-------------------------------------------------------------------------
8:30 AM  | Opening Keynote   | Keynote Address      | Keynote Address
         | Address           |                      |
-------------------------------------------------------------------------
9:30 AM  | Learning & Models | Speech 2             | Nonlinear Filtering
         | (Lecture)         | (Lecture)            | by Neural Networks
         |                   |                      | (Lecture)
-------------------------------------------------------------------------
11:00 AM | Break             | Break                | Break
-------------------------------------------------------------------------
11:30 AM | Speech 1          | Learning, System-    | Image Processing and
         | (Poster preview)  | identification and   | Pattern Recognition
         |                   | Spectral Estimation  | (Poster preview)
         |                   | (Poster preview)     |
-------------------------------------------------------------------------
12:30 PM | Lunch             | Lunch                | Lunch
-------------------------------------------------------------------------
1:30 PM  | Speech 1          | Learning, System-    | Image Processing and
         | (Poster)          | identification and   | Pattern Recognition
         |                   | Spectral Estimation  | (Poster)
         |                   | (Poster)             |
-------------------------------------------------------------------------
2:45 PM  | Break             | Break                | Break
-------------------------------------------------------------------------
3:15 PM  | System            | Image Processing and | Application Driven
         | Implementations   | Analysis             | Neural Models
         | (Lecture)         | (Lecture)            | (Lecture)
-------------------------------------------------------------------------
Evening  | Panel Discussion  | Visit and Banquet at |
         | (8 PM)            | Kronborg Castle      |
=========================================================================

Evening Events

A pre-Workshop reception will be held at Hotel Marienlyst at 7:00 PM
on Sunday, August 30, 1992. On Tuesday, September 1, 1992, there will
be a 5:00 PM visit to Kronborg Castle and a 7:00 PM banquet at the
Castle.

TECHNICAL PROGRAM

Monday, August 31, 1992

8:15 AM; Opening Remarks: S.Y. Kung, F. Fallside, Workshop Chairs,
Benny Lautrup, connect, Denmark, John Aa. Sorensen, Workshop Program
Chair.

8:30 AM; Opening Keynote: System Identification Perspective of Neural
Networks. Professor Lennart Ljung, Department of Electrical
Engineering, Linkoping University, Sweden.
9:30 AM; Learning & Models (Lecture Session)
Chair: Jenq-Neng Hwang, Department of Electrical Engineering,
University of Washington, Seattle, WA, USA.

1. "Towards Faster Stochastic Gradient Search", Christian Darken,
   John Moody, Yale University, New Haven, CT, USA.
2. "Inserting Rules into Recurrent Neural Networks", C.L. Giles, NEC
   Research Inst., Princeton, C.W. Omlin, Rensselaer Polytechnic
   Institute, Troy, NY, USA.
3. "On the Complexity of Neural Networks with Sigmoidal Units",
   Kai-Yeung Siu, University of California, Irvine, CA, Vwani
   Roychowdhury, Purdue University, West Lafayette, IN, Thomas
   Kailath, Stanford University, Stanford, CA, USA.
4. "A Generalization Error Estimate for Nonlinear Systems", Jan
   Larsen, Technical University of Denmark, Lyngby, Denmark.

11:00 AM; Coffee break

11:30 AM; Speech 1 (Oral previews of the afternoon poster session)
Chair: Paul Dalsgaard, Institute of Electronic Systems, Aalborg
University, Denmark.

1. "Interactive Query Learning for Isolated Speech Recognition",
   Jenq-Neng Hwang, Hang Li, University of Washington, Seattle, WA,
   USA.
2. "Adaptive Template Method for Speech Recognition", Yadong Liu,
   Yee-Chun Lee, Hsing-Hen Chen, Guo-Zheng Sun, University of
   Maryland, College Park, MD, USA.
3. "Fuzzy Partition Models and Their Effect in Continuous Speech
   Recognition", Yoshinaga Kato, Masahide Sugiyama, ATR Interpreting
   Telephony Research Laboratories, Kyoto, Japan.
4. "Empirical Risk Optimization: Neural Networks and Dynamic
   Programming", Xavier Driancourt, Patrick Gallinari, Universite' de
   Paris Sud, Orsay, France.
5. "Text-Independent Talker Identification System Combining
   Connectionist and Conventional Models", Thierry Artieres, Younes
   Bennani, Patrick Gallinari, Universite' de Paris Sud, Orsay,
   France.
6. "A Two Layer Kohonen Neural Network using a Cochlear Model as a
   Front-End Processor for a Speech Recognition System", S. Lennon,
   E. Ambikairajah, Regional Technical College, Athlone, Ireland.
7. "Self-Structuring Hidden Control Neural Models", Helge B.D.
   Sorensen, Uwe Hartmann, Institute of Electronic Systems, Aalborg
   University, Aalborg, Denmark.
8. "Connectionist-Based Acoustic Word Models", Chuck Wooters, Nelson
   Morgan, International Computer Science Institute, Berkeley, CA,
   USA.
9. "Maximum Mutual Information Training of a Neural Predictive-Based
   HMM Speech Recognition System", K. Hassanein, L. Deng, M. Elmasry,
   University of Waterloo, Waterloo, Ontario, Canada.
10. "Training Continuous Density Hidden Markov Models in Association
    with Self-Organizing Maps and LVQ", Mikko Kurimo, Kari Torkkola,
    Helsinki University of Technology, Finland.
11. "Unsupervised Sequence Classification", Jorg Kindermann,
    GMD-FIT.KI, Schloss Birlinghoven, Germany, Christoph Windheuser,
    Carnegie Mellon University, Pittsburgh, PA, USA.
12. "A Mathematical Model for Speech Processing", Anna Esposito,
    Universita di Salerno, Salvatore Rampone, I.R.S.I.P-C.N.R,
    Napoli, Cesare Stanzione, International Institute for Advanced
    Scientific Studies, Salerno, Roberto Tagliaferri, Universita di
    Salerno, Italy.
13. "A New Voice and Pitch Estimator based on the Neocognitron",
    J.R.E. Moxham, P.A. Jones, H. McDermott, G.M. Clark, Australian
    Bionic Ear and Hearing Research Institute, East Melbourne 3002,
    Victoria, Australia.

12:30 PM; Lunch

1:30 PM; Speech 1 (Poster Session)

2:45 PM; Break

3:15 PM; System Implementations (Lecture Session)
Chair: Yu Hen Hu, Department of Electrical and Computer Engineering,
University of Wisconsin-Madison, Madison, WI, USA.
"CCD's for Pattern Recognition", Alice M. Chiang, Lincoln Laboratory, Massachusetts Institute of Technology, MA, USA. 2. "An Electronic Parallel Neural CAM for Decoding", Joshua Alspector, Bellcore, Morristown, NJ, Anthony Jayakumar Bon Ngo, Cornell, Ithaca, NY, USA. 3. "Netmap-Software Tool for Mapping Neural Networks onto Parallel Computers", K. Wojtek Przytula, Huges Research Labs, Malibu, CA, Viktor Prasanna, University of California, Wei-Ming Lin, Mississippi State University, USA. 4. "A Fast Simulator for Neural Networks on DSPs of FPGAs", M. Ade, R. Lauwereins, J. Peperstraete, ESAT-Elektrotechniek, Heverlee, Belgium. Tuesday, September 1, 1992 8:30 AM; Keynote Address: "Capacity Control in Classifiers for Pattern Recognition" Dr. A. Sara Solla, AT\&T Bell Laboratories, Holmdel, NJ, USA. 9:30 AM; Speech 2, (Lecture Session) Chair: S. Katagiri, ATR Auditory and Visual Perception Research Laboratories, Kyoto, Japan. 1. "Speech Representations for Recognition by Neural Networks", B.H. Juang, AT&T Bell Laboratories, Murray Hill, NJ, USA. 2. "Classification with a Codebook-Excited Neural Network" Lizhong Wu, Frank Fallside, Cambridge University, UK. 3. "Minimal Classification Error Optimization for a Speaker Mapping Neural Network", Masahide Sugiyama, Kentaro Kurinami, ATR Interpreting Telephony Research Laboratories, Kyoto, Japan. 4. "On the Identification of Phonemes Using Acoustic-Phonetic Features Derived by a Self-Organising Neural Network", Paul Dalsgaard, Ove Andersen, Rene Jorgensen, Institute of Electronic Systems, Aalborg University, Denmark. 11:00 AM; Coffe break 11:30 AM; Learning, System Identification and Spectral Estimation. (Oral previews of afternoon poster session) Chair: Lars Kai Hansen, connect, Electronics Institute, Technical University of Denmark, Lyngby, Denmark. 1. "Prediction of Chaotic Time Series Using Recurrent Neural Networks", Jyh-Ming Kuo, Jose C. Principe, Bert deVries, University of Florida, FL, USA. 2. "Nonlinear System Identification using Multilayer Perceptrons with Locally Recurrent Synaptic Structure", Andrew Back, Ah Chung Tsoi, University of Queensland, Australia. 3. "Chaotic Signal Emulation using a Recurrent Time Delay Neural Network", Michael R. Davenport, Department of Physics, U.B.C., Shawn P. Day, Department of Electrical Engineering, U.B.C., Vancouver, Canada. 4. "Prediction with Recurrent Networks", Niels Holger Wulff, Niels Bohr Institute, John A. Hertz, Nordita, Copenhagen, Denmark. 5. "Learning of Sinusoidal Frequencies by Nonlinear Constrained Hebbian Algorithms", Juha Karhunen, Jyrki Joutsensalo, Helsinki University of Technology, Finland. 6. "A Neural Feed-Forward Network with a Polynomial Nonlinearity", Nils Hoffmann, Technical University of Denmark, Lyngby, Denmark. 7. "Application of Frequency-Domain Neural Networks to the Active Control of Harmonic Vibrations in Nonlinear Structural Systems", T.J. Sutton, S.J. Elliott University of Southampton, England. 8. "Generalization in Cascade-Correlation Networks", Steen Sjoegaard, Aarhus University, Denmark. 9. "Noise Density Estimation Using Neural Networks", M.T. Musavi, D.M. Hummels, A.J. Laffely, S.P. Kennedy, University of Maine, Maine, USA. 10. "An Efficient Model for Systems with Complex Responses", Volker Tresp, Ira Leuthausser, Ralph Neuneier, Martin Schlang, Siemens AG, Munchen, Klaus Abraham-Fuchs, Wolfgang Harer, Siemens, Erlangen, Germany. 11. "Generalized Feedforward Filters with Complex Poles", T. Oliveira e Silva, P. 
Guedes de Oliveira, Universidade de Aveiro, Aveiro, Portugal, J.C. Principe, University of Florida, Gainsville, FL, B. De Vries, David Sarnoff Research Center, Princeton, NJ, USA. 12. "A Simple Genetic Algorithm Applied to Discontinous Regularization", John Bach Jensen, Mads Nielsen, DIKU, University of Copenhagen, Denmark. 12:30 PM; Lunch 1:30 PM; Learning, System Identification and Spectral Estimation. (Poster Session) 2:45 PM; Break 3:15 PM; Image Processing and Analysis. (Lecture session) Chair: K. Wojtek Przytula, Hughes Research Labs, Malibu, CA, USA. 1. "Decision-Based Hierarchical Perceptron (HiPer) Networks with Signal/Image Classification Applications", S.Y. Kung, J.S. Taur, Princeton University, NJ, USA. 2. "Lateral Inhibition Neural Networks for Classification of Simulated Radar Imagery", Charles M. Bachmann, Scott A. Musman, Abraham Schultz, Naval Research Laboratory, Washington, USA. 3. "Globally Trained Neural Network Architecture for Image Compression", L. Schweizer, G. Parladori, Alcatel Italia, Milano, G.L. Sicuranza, Universita'di Trieste, Italy. 4. "Robust Identification of Human-Faces Using Mosaic Pattern and BPN", Makoto Kosugi, NTT Human Interface Laboratories, Take Yokosukashi Kanagawaken, Japan. 5:00 PM; Departure to Kronborg Castle 7:00 PM; Banquet at Kronborg Castle Wednesday, September 2, 1992 8:30 AM; Keynote Address: "Application Perspectives of the DARPA Artificial Neural Network Technology Program" Dr. Barbara Yoon, DARPA/MTO, Arlington, VA, USA. 9:30 AM; "Nonlinear Filtering by Neural Networks (Lecture Session)" Chair: Gary M. Kuhn, CCRP-IDA, Princeton, NJ, USA. 1. "Neural Networks and Nonparametric Regression", Vladimir Cherkassky, University of Minnesota, Minneapolis, USA. 2. "A Partial Analysis of Stochastic Convergence for a Generalized Two-Layer Perceptron using Backpropagation", Jeffrey L. Vaughn, Neil J. Bershad, University of California, Irvine, John J. Shynk, University of California, Santa Barbara, CA, USA. 3. "A Recurrent Neural Network for Nonlinear Time Series Prediction - A Comparative Study", S.S. Rao, S. Sethuraman, V. Ramamurti, Villanova University, Villanova, PA, USA. 4. "Dispersive Networks for Nonlinear Adaptive Filtering", Shawn P. Day, Michael R. Davenport, University of British Columbia, Vancouver, Canada. 11:00 AM; Coffe break 11:30 AM; Image Processing and Pattern Recognition (Oral previews of afternoon poster sessions) Chair: John C. Pearson, David Sarnoff Research Center, Princeton, NJ, USA. 1. "Unsupervised Multi-Level Segmentation of Multispectral Images", R.A. Fernandes, Institute for Space and Terrestrial Science, Richmond Hill, M.E. Jernigan, University of Waterloo, Waterloo, Ontario, Canada. 2. "Autoassociative Neural Networks for Image Compression: A Massively Parallel Implementation", Andrea Basso, Ecole Polytechnique Federale de Lausanne, Switzerland. 3. "Compression of Subband-Filtered Images via Neural Networks", S. Carrato, S. Marsi, University of Trieste, Trieste, Italy. 4. "An Adaptive Neural Network Model for Distinguishing Line- and Edge Detection from Texture Segregation", M.M. Van Hulle, T.Tollenaere, Katholieke Universiteit Leuven, Leuven, Belgium. 5. "Adaptive Segmentation of Textured Images using Linear Prediction and Neural Networks", Stefanos Kollias, Levon Sukissian, National Technical University of Athens, Athen, Greece. 6. "Neural Networks for Segmentation and Clustering of Biomedical Signals", Martin F. 
Schlang, Volker Tresp, Siemens, Munich, Klaus Abraham-Fuchs, Wolfgang Harer, Siemens, Erlangen, Germany. 7. "Some New Results in Nonlinear Predictive Image Coding Using Neural Networks", Haibo Li, Linkoping University, Linkoping, Sweden. 8. "A Neural Network Approach to Multi-Sensor Point-of-Interest Detection", Ajay N. Jain, Alliant Techsystems Inc., Hopkins, MN, USA. 9. "Supervised Learning on Large Redundant Trainingsets", Martin F. Moller, Aarhus University, Aarhus, Denmark. 10. "Neural Network Detection of Small Moving Radar Targets in an Ocean Environment", Jane Cunningham, Simon Haykin, McMaster University, Hamilton, Ontario, Canada. 11. "Discrete Neural Networks and Fingerprint Identification", Steen Sjoegaard, Aarhus University, Aarhus, Denmark. 12. "Image Recognition using a Neural Network", Keng-Chung Ho, Bin-Chang Chieu, National Taiwan Institute of Technology, Taipei, Taiwan, Republic of China. 13. "Adaptive Training of Feedback Neural Networks for Non-Linear Filtering and Control: I - A General Approach." O. Nerrand, P. Roussel-Ragot, L. Personnaz, G. Dreyfus, Ecole Superieure de Physique et de Chimie Industrielles, Paris, S. Marcos, Ecole Superieure d'Electricite, Gif Sur Yvette, France. 12:30 PM; Lunch 1:30 PM; Image Processing and Pattern Recognition (Poster Session) 2:45 PM; Break 3:15 PM; Application Driven Neural Models (Lecture session) Chair: Sathyanarayan S. Rao, Department of Electrical Engineering Villanova University, Villanova, PA, USA. 1. "Artificial Neural Network for ECG Arrhythmia Monitoring", Y.H. Hu, W.J. Tompkins, Q. Xue, University of Wisconsin- Madison, WI, USA. 2. "Constructing Neural Networks for Contact Tracking", Christopher M. DeAngelis, Naval Underwater Warfare Center, Rhode Island, Robert W. Green, University of Massachusetts Dartmouth, North Dartmouth, MA, USA. 3. "Adaptive Decision-Feedback Equalizer Using Forward-Only Counterpropagation Networks for Rayleigh Fading Channels", Ryuji Kaneda, Takeshi Manabe, Satoshi Fujii, ATR Optical and Radio Communications Research Laboratories, Kyoto, Japan. 13. "Ensemble Methods for Handwritten Digit Recognition", Lars Kai Hansen, Technical University of Denmark, Christian Liisberg, Risoe National Laboratory, Denmark, Peter Salamon, San Diego State University, San Diego, CA, USA. Workshop Committee General Chairs: S.Y. Kung F. Fallside Dept. of Electrical Engineering Engineering Department Princeton University Cambridge University Princeton, NJ 08544, USA Cambridge CB2 1PZ, UK email: kung at princeton.edu email: fallside at eng.cam.ac.uk Program Chair: Proceedings Chair: John Aa. Sorensen Candace Kamm Electronics Institute, Build 349 Box 1910 Technical University of Denmark Bellcore, 445 South Street DK-2800 Lyngby, Denmark Morristown, NJ 07960, USA email: jaas at dthei.ei.dth.dk email: cak at thumper.bellcore.com Publicity Chair: Gary M. Kuhn CCRP - IDA Thanet Road Princeton, NJ 08540, USA email: gmk%idacrd.uucp at princeton.edu Program Committee: Ronald de Beer Jenq-Neng Hwang John E. Moody John Bridle Yu Hen Hu Carsten Peterson Erik Bruun B.H. Juang Sathyanarayan S. Rao Paul Dalsgaard S. Katagiri Peter Salamon Lee Giles T. Kohonen Christian J. Wellekens Lars Kai Hansen Gary M. Kuhn Barbara Yoon Steffen Duus Hansen Benny Lautrup John Hertz Peter Koefoed Moeller ---------------------------------------------------------------------------- REGISTRATION FORM: 1992 IEEE Workshop on Neural Networks for Signal Processing. August 31 - September 2, 1992. 
Registration fee including single room and meals at Hotel Marienlyst:
Before July 15, 1992: Danish Kr. 5200. After July 15, 1992: Danish Kr. 5350.
Companion fee at Hotel Marienlyst: Danish Kr. 1160.
Registration fee without hotel room:
Before July 15, 1992: Danish Kr. 2800. After July 15, 1992: Danish Kr. 2950.

The registration fee of Danish Kr. 5200 (5350) covers:
. Attendance at all workshop sessions.
. Workshop Proceedings.
. Pre-Workshop reception on Sunday evening, August 30, 1992.
. Hotel single room from August 30 to September 2, 1992 (3 nights).
. 3 breakfasts, 3 lunches, 1 dinner, 1 banquet.
. Coffee breaks and refreshments.
. A Companion fee of an additional Danish Kr. 1160 covers a double room at Hotel Marienlyst and the Pre-Workshop reception, breakfasts and the banquet for 2 persons.

The registration fee without hotel room, Danish Kr. 2800 (2950), covers:
. Attendance at all workshop sessions.
. Workshop Proceedings.
. 3 lunches, 1 dinner, 1 banquet.
. Coffee breaks and refreshments.

Further information on registration: Ms. Anette Moeller-Uhl, The Niels Bohr Institute, tel: +45 3142 1616 ext. 388, fax: +45 3142 1016, email: uhl at connect.nbi.dk

Please complete this form (type or print clearly) and mail with payment (by check, do not include cash) to: NNSP-92, CONNECT, The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen, Denmark.

Name ------------------------------------------------------------------
     Last                     First                     Middle
Firm or University ----------------------------------------------------
Mailing Address -------------------------------------------------------
-----------------------------------------------------------------------
Country                     Phone                     Fax

From skrzypek at CS.UCLA.EDU Fri Jun 12 11:57:17 1992 From: skrzypek at CS.UCLA.EDU (Dr. Josef Skrzypek) Date: Fri, 12 Jun 92 08:57:17 PDT Subject: call for contributions Message-ID: CALL FOR CONTRIBUTIONS We are organizing a special edited book, to be published by Kluwer, that is dedicated to the subject of NEURAL NETWORK SIMULATION ENVIRONMENTS. Submissions will be refereed. The plan calls for the book to be published in the winter/spring of 1993. I would like to invite your participation. DEADLINE FOR SUBMISSION: 25th of September, 1992. VOLUME TITLE: Neural Networks Simulation Environments. EDITOR: Prof. Josef Skrzypek, Department of Computer Science, 3532 BH, UCLA, Los Angeles CA 90024-1596. Email: skrzypek at cs.ucla.edu Tel: (310) 825 2381 Fax: (310) UCLA CSD DESCRIPTION This edited volume is devoted to ``Simulation environments for studying neuronal functions''. The purpose of this special book is to encourage further work and discussion in the area of Neural Network Simulation tools, which have matured over the last decade into advanced neural modeling environments. Computer simulation is currently the best way to study dynamic properties of complex neuronal assemblies that might be mathematically intractable until we learn how to grow our own neurons on the breadboards. Simulation is also a way to test and verify a design before prototypes are built. Finally, computer simulation of very large complex systems, capable of intelligence or vision, is the only reasonable way to organize the ever-increasing flood of knowledge about these phenomena.
In the past decade the development of neural network simulation environments has focused on two areas: 1) realistic (compartmental) models of a single neuron (or a small cluster of neurons) based on information currently available from the Neurosciences, and 2) computational models of abstract neurons in support of "artificial" or connectionist models. All these NEURAL NETWORK SIMULATION ENVIRONMENTS HAVE NOT BEEN COMPREHENSIVELY COLLECTED, ORGANIZED, OR COMPARED ANYWHERE. Hence this volume should be a valuable addition to the desktop library of every computational neuroscientist as well as every engineer designing artificial neural systems. The volume will include both invited and submitted peer-reviewed contributions. We are seeking submissions from researchers in relevant fields, including computational neuroscience, natural and artificial vision, scientific computing, artificial intelligence, psychology, image and signal processing, and pattern recognition. We are seeking submissions describing MATURE (useful) WORKING simulation environments dedicated to modeling a complete spectrum of neural phenomena, from membrane biophysics to computational abstractions such as, for example, a three-layer backpropagation network. The volume will consist of three parts devoted to the major classes of neural simulation methodologies: NEUROPHYSIOLOGICAL REALISM: simulators supporting neural models that incorporate detailed models of membrane biophysics. PSYCHOPHYSICAL REALISM: simulators supporting computational models of neurons and networks that can account for reported psychological phenomena. CONNECTIONIST (SYMBOLIC): simulators for neural network models used in industry. We would like to encourage submissions both from researchers engaged in the analysis of biological systems, such as modeling psychophysical/neurophysiological data using neural networks, and from members of the engineering community who are synthesizing neural network models. The number of papers that can be included in this edited volume will be limited. Therefore, some qualified papers may be encouraged for submission to professional journals. SUBMISSION PROCEDURE Submissions should be sent to Josef Skrzypek by Sept 25, 1992. The suggested length is 20-22 double-spaced pages including figures, references, abstract and so on. Format details, etc. will be supplied on request. Authors are strongly encouraged to discuss ideas for possible submissions with the editor, Tel (310)825-2381 or skrzypek at cs.ucla.edu. Thank you for your consideration.  From terry at helmholtz.sdsc.edu Thu Jun 18 19:40:05 1992 From: terry at helmholtz.sdsc.edu (Terry Sejnowski) Date: Thu, 18 Jun 92 16:40:05 PDT Subject: Grant deadline August 1 Message-ID: <9206182340.AA25116@helmholtz.sdsc.edu> COGNITIVE NEUROSCIENCE - Individual Grants in Aid The McDonnell-Pew Program in Cognitive Neuroscience is accepting proposals for support of research and training in cognitive neuroscience. Preference is given to projects that are not currently funded and are interdisciplinary, involving at least two areas among clinical and basic neurosciences, computer science, psychology, linguistics and philosophy. Research support is limited to $30,000 a year for two years. Postdoctoral grants are limited to three years. Graduate student support is not available. Applications should be postmarked by August 1, 1992, to: Dr.
George Miller, McDonnell-Pew Program in Cognitive Neuroscience, Green Hall, 1-N-6, Princeton University, Princeton, NJ 08544-1010. For more information call (609) 258-5014, FAX (609) 258-3031, or e-mail cns at clarity.princeton.edu -----  From rsjuds at snll-arpagw.llnl.gov Fri Jun 19 13:03:21 1992 From: rsjuds at snll-arpagw.llnl.gov (judson richard s) Date: Fri, 19 Jun 92 10:03:21 -0700 Subject: No subject Message-ID: <9206191703.AA22191@snll-arpagw.llnl.gov>

************************************
*    NOTE DEADLINE CHANGE !!!!     *
*                                  *
*  NEW POSTER DEADLINE: JUNE 12    *
*                                  *
*        SCHEDULE ATTACHED         *
************************************

BIOCOMPUTATION WORKSHOP
Evolution as a computational process
June 22-24, 1992
Doubletree Hotel, 2 Portola Plaza, Monterey, CA 93940
Sponsored by the Institute for Scientific Computing Research at LLNL and the Center for Computational Engineering at SNL

This workshop brings together biologists, physicists and computer scientists with interests in the study of evolution. The premise of the workshop is that natural evolution is a computational process of adaptation to an ever-changing environment. Mathematical theory and computer modeling are therefore ideally suited to the study of evolution; conversely, evolution may be used as a model system for studying the computational processes of optimization and emergent pattern formation. Fifteen invited speakers will provide general reviews and summaries of their recent research. Although oral presentations will be limited to the invited speakers, original research contributions are solicited for poster sessions in the following areas: natural evolution, artificial life, genetic algorithms and optimization.

List of speakers:
----------------
Stuart Kauffman --- University of Pennsylvania, Santa Fe Institute
Alan Templeton --- Washington University, St. Louis
Daniel Hillis --- Thinking Machines Inc.
Richard Hudson --- University of California, Irvine
Steven Frank --- University of California, Irvine
Alan Hastings --- University of California, Davis
Warren Ewens --- Melbourne University and University of Pennsylvania
Marcus Feldman --- Stanford University
Lee Altenberg --- Duke University
Aviv Bergman --- SRI and Stanford University
Mark Bedau --- Reed College
Heinz Muehlenbein --- University of Bonn
Eric Mjolsness --- Yale
Wolfgang Banzhaf --- Mitsubishi Electric Corp.
Schedule
--------

Sunday - June 21
6:00 pm - 8:00 pm    RECEPTION

Monday - June 22
10:00 am - 10:45 am  Kauffman: Evolution and co-evolution at the edge of chaos
10:45 am - 11:30 am  Hastings: Use of optimization techniques to study multilocus population genetics
11:45 am - 12:30 pm  Altenberg: Theory on the evolution and complexity of the genotype-phenotype map
12:30 pm - 2:00 pm   LUNCH
2:00 pm - 2:45 pm    Templeton: Gene tree overlay algorithms: a powerful methodology for studying evolution
2:45 pm - 3:30 pm    Bedau: Measuring evolutionary activity in noisy artificial systems
4:00 pm - 4:45 pm    Muehlenbein: Evolutionary algorithms as a research tool for a new evolutionary theory
7:00 pm              BANQUET - MONTEREY AQUARIUM; After Dinner Speaker - Warren Ewens

Tuesday - June 23
10:00 am - 10:45 am  Banzhaf: An introduction to RNA-like algorithms with applications to sequences of binary numbers
10:45 am - 11:30 am  Hillis: Simulated evolution and the Red Queen Hypothesis
11:45 am - 12:30 pm  Frank: Coevolutionary genetics of hosts and parasites
12:30 pm - 2:00 pm   LUNCH
2:00 pm - 2:45 pm    Bergman: Learning your environment
2:45 pm - 3:30 pm    Feldman: Recombination and evolution
4:00 pm - 7:30 pm    POSTER SESSION

Wednesday - June 24
10:00 am - 10:45 am  Mjolsness: Connectionist grammars in evolution and development
10:45 am - 11:30 am  Hudson: Title to be announced
12:00 pm - 2:00 pm   GROUP LUNCH DISCUSSIONS AT LOCAL RESTAURANTS

Instructions for Submissions and Registration:
---------------------------------------------
Authors should submit a single-page abstract clearly stating their results by June 12, 1992, to the Meeting Coordinator at the address listed below. Please indicate which of the above categories best applies to your paper. There will be no parallel sessions, and the workshop will be structured to stimulate and facilitate the active involvement of all attendees. There will be sessions on the first 2 days from 9:00 AM till 5:00 PM, with 1-2 hr lunch breaks. On the third day there will be a morning session and a short afternoon session only (perhaps one talk, until 3:00 PM). Registration fees are $100 for full-time Ph.D. students and $250 for all others. Fees include admission to a banquet at the Monterey Aquarium, to be held on Monday night. (There is a $50 discount for students presenting posters at the meeting.) To obtain registration materials and housing information, please contact the Meeting Coordinator. For information only, please contact eeckman at mozart.llnl.gov. Electronic abstract submissions only at jb at s1.gov.

Meeting coordinator:
-------------------
Chris Ghinazzi, P.O. Box 808, L-426, Lawrence Livermore Laboratory, Livermore, CA 94550. phone: (510) 422-7132, email: ghinazzi at verdi.llnl.gov

------------------------------------------------------------------------
Please complete this form and return it to: Evolution as a Computational Process, c/o Chris Ghinazzi, Lawrence Livermore National Laboratory, P.O. Box 808, L-426, Livermore, CA 94550-9900. Phone (510)422-7132 or FAX: (510)422-7819

REGISTRATION FORM
Name:
Title:
Organization:
Address:
City:          State:          Zip:          Country:
Citizenship:
Telephone:
email address:
Registration Fees: Regular $250   Student $100   Student w/poster $50
Are you submitting a poster?  yes  no
Total Payment Enclosed $________
Check or Money Order (payable in US dollars to UC Regents)
Requests for refunds must be received in writing no later than June 1, 1992. Attendance is on a first-pay, first-served basis. 
From judd at learning.siemens.com Fri Jun 19 17:41:08 1992 From: judd at learning.siemens.com (Stephen Judd) Date: Fri, 19 Jun 92 17:41:08 EDT Subject: CLNL workshop Message-ID: <9206192141.AA10749@learning.siemens.com>

LAST CALL FOR PAPERS
Third Annual Workshop on Computational Learning Theory and `Natural' Learning Systems
August 27-29, Madison, Wisconsin

Siemens Corporate Research, MIT, and the University of Wisconsin-Madison are sponsoring the third annual CLNL workshop to explore the intersection of theoretical learning research and natural learning systems. (Natural systems include those that have been successful in a difficult engineering domain or those that represent natural constraints arising from biological or psychological processes or mechanisms.) The workshop will bring together a diverse set of researchers from three relatively independent learning research areas: Computational Learning Theory, AI/Machine Learning, and Connectionist Learning. Invited speakers and participants will be encouraged to examine general issues in learning systems which could provide constraints for theory, while at the same time theoretical results will be interpreted in the context of experiments with actual learning systems. Examples of experimental approaches include: models or comparisons of learning systems in classification problems (vision, speech, etc.); controls and robotics; natural language processing; studies of generalization; representation effects on learning rate, noise tolerance and concept or function complexity; biological or biologically inspired models of adaptation; competitive processing or synaptic growth and modification. Relevant theoretical subjects include: the computational and sample complexity of learning; learning in the presence of noise; the effect on learnability of prior knowledge, representational bias, or feature construction; learning protocols; learning sample distributions; efficient algorithms for learning particular classes of concepts or functions; comparison of analytical bounds with real-world experiments.

Submission Procedure: Please submit 3 copies of an abstract (100 words or less) and a summary of original research (2000 words or less), indicating your preference for either the experimental or the theoretical category. The DEADLINE for submission is JUNE 30, 1992. Send abstracts and summaries to: CLNL Workshop, Siemens Corporate Research, 755 College Road East, Princeton, NJ 08540. (Or via email to clnl at learning.siemens.com)

ORGANIZING COMMITTEE: Andrew Barto, U Massachusetts; Andrew Barron, U Illinois; Stephen J. Hanson, Siemens Corp. Research; Michael Jordan, MIT; Stephen Judd, Siemens Corp. Research; Kumpati S. Narendra, Yale University; Tomaso Poggio, MIT; Larry Rendell, Beckman Institute; Ronald L. Rivest, MIT; Jude Shavlik, U Wisconsin; Paul Utgoff, U Massachusetts.

WORKSHOP CO-CHAIRS: Thomas Petsche, Siemens Corp. Research; Jude Shavlik, U Wisconsin; Stephen Judd, Siemens Research.

WORKSHOP SPONSORS: Siemens Corporate Research, Inc.; MIT Laboratory for Computer Science; University of Wisconsin, Dept.
of Computer Science.  From gjg at cns.edinburgh.ac.uk Mon Jun 22 16:59:53 1992 From: gjg at cns.edinburgh.ac.uk (Geoffrey Goodhill) Date: Mon, 22 Jun 92 16:59:53 BST Subject: Tech Report Available Message-ID: <29664.9206221559@cns.ed.ac.uk> The following technical report version of my thesis is now available in neuroprose:

Correlations, Competition, and Optimality: Modelling the Development of Topography and Ocular Dominance
CSRP 226
Geoffrey Goodhill
School of Cognitive and Computing Sciences
University of Sussex

ABSTRACT

There is strong biological evidence that the same mechanisms underlie the formation of both topography and ocular dominance in the visual system. However, previous computational models of visual development do not satisfactorily address both of these phenomena simultaneously. In this thesis we discuss in detail several models of visual development, focussing particularly on the form of correlations within and between eyes. Firstly, we analyse the "correlational" model for ocular dominance development recently proposed in [Miller, Keller & Stryker 1989]. This model was originally presented for the case of identical correlations within each eye and zero correlations between the eyes. We relax these assumptions by introducing perturbative correlations within and between eyes, and show that (a) the system is unstable to non-identical perturbations in each eye, and (b) the addition of small positive correlations between the eyes, or small negative correlations within an eye, can cause binocular solutions to be favoured over monocular solutions. Secondly, we extend the elastic net model of [Goodhill 1988, Goodhill and Willshaw 1990] for the development of topography and ocular dominance, in particular considering its behaviour in the two-dimensional case. We give both qualitative and quantitative comparisons with the performance of an algorithm based on the self-organizing feature map of Kohonen, and show that in general the elastic net performs better. In addition we show that (a) both algorithms can reproduce the effects of monocular deprivation, and (b) a global orientation for ocular dominance stripes in the elastic net case can be produced by anisotropic boundary conditions in the cortex. Thirdly, we introduce a new model that accounts for the development of topography and ocular dominance when distributed patterns of activity are presented simultaneously in both eyes, with significant correlations both within and between eyes. We show that stripe width in this model can be influenced by two factors: the extent of lateral interactions in the postsynaptic sheet, and the degree to which the two eyes are correlated. An important aspect of this model is the form of the normalization rule to limit synaptic strengths: we analyse this for a simple case.

The principal conclusions of this work are as follows:
1. It is possible to formulate computational models that account for (a) both topography and stripe formation, and (b) ocular dominance segregation in the presence of *positive* correlations between the two eyes.
2. Correlations can be used as a ``currency'' with which to compare locality within an eye with correspondence between eyes. This leads to the novel prediction that stripe width can be influenced by the degree of correlation between the two eyes.
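For readers who have not met the elastic net before, the following is a minimal sketch of the basic Durbin-Willshaw-style update on which the models in the thesis build: cortical points are attracted to feature points (here, a position coordinate plus a second coordinate that could encode eye of origin) while an elasticity term keeps neighbouring cortical points close. The toy setting, variable names and parameter values are illustrative assumptions only, not code from the thesis.

import numpy as np

rng = np.random.default_rng(0)
x = rng.random((20, 2))            # feature points: (position, e.g. "eye" coordinate)
y = np.linspace(0, 1, 30)[:, None] * np.ones((1, 2))   # cortical "rubber band"

alpha, beta, kappa = 0.2, 2.0, 0.2   # attraction, elasticity, annealed scale

for step in range(200):
    # Soft assignment of each feature point to the cortical points
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)       # (20, 30)
    w = np.exp(-d2 / (2 * kappa ** 2))
    w /= w.sum(axis=1, keepdims=True)
    # Weighted attraction toward the data, plus a second-difference
    # elasticity term that keeps the cortical sheet locally smooth
    attract = (w[:, :, None] * (x[:, None, :] - y[None, :, :])).sum(axis=0)
    elastic = np.zeros_like(y)
    elastic[1:-1] = y[2:] - 2 * y[1:-1] + y[:-2]
    y += alpha * attract + beta * kappa * elastic
    kappa *= 0.99                                             # gradual annealing

The trade-off the abstract describes is visible in this balance: the attraction term pulls each cortical point toward correlated inputs (including the eye coordinate), while the elasticity term enforces locality, and their competition is what produces alternating (striped) ocularity along the sheet.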
Instructions for obtaining by anonymous ftp:

% ftp cheops.cis.ohio-state.edu
Name: anonymous
Password: neuron
ftp> binary
ftp> cd pub/neuroprose
ftp> get goodhill.thesis.tar
ftp> quit
% tar -xvf goodhill.thesis.tar
(This creates a directory called thesis)
% cd thesis
% more README

WARNING: goodhill.thesis.tar is 2.4 Megabytes, and the thesis takes up 13 Megabytes if all files are uncompressed (there are only 120 pages - the size is due to the large number of pictures). Each file within the tar file is individually compressed, so it is not necessary to have 13 Meg of spare space in order to print out the thesis.

The hardcopy version is also available by requesting CSRP 226 from: Berry Harper, School of Cognitive and Computing Sciences, University of Sussex, Falmer, Brighton BN1 9QN, GREAT BRITAIN. Please enclose a cheque for either 5 pounds sterling or 10 US dollars, made out to "University of Sussex".

Geoffrey Goodhill, University of Edinburgh, Centre for Cognitive Science, 2 Buccleuch Place, Edinburgh EH8 9LW. email: gjg at cns.ed.ac.uk

From thildebr at aragorn.csee.lehigh.edu Tue Jun 23 18:25:16 1992 From: thildebr at aragorn.csee.lehigh.edu (Thomas H. Hildebrandt ) Date: Tue, 23 Jun 92 18:25:16 -0400 Subject: Saratchandran paper Message-ID: <9206232225.AA03162@aragorn.csee.lehigh.edu> A correspondence paper by P. Saratchandran, published last July in the IEEE T. on Neural Networks, claims to provide an algorithm for training a multilayer feedforward neural network using a dynamic programming method -- one in which the weights on each layer are adjusted only once, starting with the output layer and proceeding to the layer nearest the inputs.[1] This claim is not only counterintuitive, it is false. The author hides this fact from himself and from the reader by defining the error to be minimized in any layer of the network as being independent of the weights in preceding stages of the network. For example, if $I_k$ is the error at the output of the $k$th layer, it is given as a function of the input $y(k-1)$ to that layer, the weights $w(k-1)$ of that layer, and the sets of ideal weights $w^*(k)$, $w^*(k+1)$, ..., $w^*(n-1)$ on succeeding layers in the network. This makes the error to be minimized independent of the weights $w(2)$, $w(3)$, ..., $w(k-2)$ on the first $k-1$ layers!
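To make the dependence explicit, in schematic notation of my own (writing $d$ for the target, $g$ for the map computed by layers $k$ through $n$, and $E$ for the expectation over the training set; none of these symbols are from the paper itself), the criterion minimized at stage $k$ has the form

$$ I_k \;=\; E\left[ \left\| d - g\bigl(y(k-1),\; w(k-1);\; w^*(k), w^*(k+1), \ldots, w^*(n-1)\bigr) \right\|^2 \right]. $$

Because the succeeding-layer arguments are frozen at the ideal values $w^*$, and $y(k-1)$ is treated as a free input rather than as the output of the preceding layers, $I_k$ is formally independent of every upstream weight; the layer-by-layer decoupling that lets the dynamic-programming recursion go through is therefore built into the definition of the error, not derived from the network.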
I trust that the fallacy is by now apparent. The absence of experimental data from this paper serves to strengthen my conviction that this algorithm will not work in practice --- save where the function of all but the last layer is trivial, i.e. where the output of the network is {\bf truly independent} of the weights on the $n-2$ hidden layers. (The input layer has no weights.) Thomas H. Hildebrandt, Visiting Researcher, EE & CS Department, Lehigh University. [1] Saratchandran, P.: "Dynamic Programming Approach for Optimal Weight Selection in Multilayer Neural Networks", IEEE T. on Neural Networks, V.2, N.4, pp.465-467 (July 1991).  From russ at oceanus.mitre.org Tue Jun 23 14:48:57 1992 From: russ at oceanus.mitre.org (Russell Leighton) Date: Tue, 23 Jun 92 14:48:57 EDT Subject: Request for Aspirin/MIGRAINES Applications Message-ID: <9206231848.AA12166@oceanus.mitre.org> Dear Aspirin/MIGRAINES user, We are preparing a chapter describing Aspirin/MIGRAINES for the upcoming book "Neural Networks Simulation Environments" to be published winter/spring 1993. There are now more than 450 registered A/M sites around the world, and we would like to briefly mention some of the ways that A/M has been used by others in that chapter. We would appreciate a short note describing any results you have obtained using A/M. "Short" means no more than a couple of sentences per application. Of particular interest are successful results that you have published (please include a full citation of the publication), but any work using A/M is of interest. Finally, if you have one or two attractive, relatively self-explanatory postscript figures that were produced from your use of A/M that you would be willing to let us use in the chapter, we would appreciate seeing them as well. Of course, any of your work or figures that we use will be properly cited and/or credited. Please forward this note to other users of A/M. Sincerely, The Developers - Russell Leighton - Alexis Wieland  From jbower at cns.caltech.edu Thu Jun 25 12:28:57 1992 From: jbower at cns.caltech.edu (Jim Bower) Date: Thu, 25 Jun 92 09:28:57 PDT Subject: Caltech Faculty Position Message-ID: <9206251628.AA00316@cns.caltech.edu> ------------------------------------------------------------ DIVISION OF BIOLOGY CALIFORNIA INSTITUTE OF TECHNOLOGY The Division of Biology at the California Institute of Technology seeks applicants for a faculty position in integrative neurophysiology/computational neuroscience. Preference is given to individuals who combine behavioral, physiological, and computational approaches. Women and minority candidates are encouraged to apply. Applicants should send curriculum vitae, a statement of research interests, selected reprints, and also have at least three letters of recommendation sent directly to: Ms. Marilyn Tomich, Division of Biology 156-29, California Institute of Technology, Pasadena, CA 91125. The deadline for application is August 15, 1992. The California Institute of Technology is an Equal Opportunity/Affirmative Action Employer.  From thildebr at aragorn.csee.lehigh.edu Thu Jun 25 13:20:12 1992 From: thildebr at aragorn.csee.lehigh.edu (Thomas H. Hildebrandt ) Date: Thu, 25 Jun 92 13:20:12 -0400 Subject: Saratchandran paper Message-ID: <9206251720.AA06495@aragorn.csee.lehigh.edu> Barak Pearlmutter has also uncovered the fallacy in the Saratchandran paper. His (more detailed) rebuttal, entitled "A comment on `Dynamic programming approach to optimal weight selection in multilayer neural networks'", is soon to appear in the IEEE T. on Neural Networks. THH  From robtag at udsab.dia.unisa.it Tue Jun 23 09:06:20 1992 From: robtag at udsab.dia.unisa.it (Tagliaferri Roberto) Date: Tue, 23 Jun 92 15:06:20 +0200 Subject: European Human Mobility Plan Message-ID: <9206231306.AA10863@udsab.dia.unisa.it> International Institute for Advanced Scientific Studies, via G. Pellegrino, 19, I-84019 Vietri sul mare (Salerno), Italia. fax no. +39 (89) 761189. The International Institute for Advanced Scientific Studies (IIASS), directed by Prof. E.R. Caianiello and working in cooperation with the nearby University of Salerno, is interested in participating in the European human mobility plan in the areas of neural networks and their applications to speech processing and pattern recognition and vision.
The researchers interested in realizing a network of groups in one of the above areas should contact: Dr. Roberto Tagliaferri, E-mail: robtag at udsab.dia.unisa.it  From atul at nynexst.com Mon Jun 22 12:05:45 1992 From: atul at nynexst.com (Atul Chhabra) Date: Mon, 22 Jun 92 12:05:45 EDT Subject: Summary: references on telecommunications applications of neural nets Message-ID: <9206221605.AA22005@texas.nynexst.com> I posted a request for references on telecommunications applications of neural nets to the connectionists list on June 5. Here is a summary of the responses. Thanks to all who responded. --Atul
=====================================================================
Atul K. Chhabra                  Phone: (914)644-2786
Member of Technical Staff        Fax: (914)644-2211
NYNEX Science & Technology       Internet: atul at nynexst.com
500 Westchester Avenue
White Plains, NY 10604
=====================================================================
--------------------------------------------------------------------------
> From: Atul Chhabra
> To: Connectionists at CS.CMU.EDU
> Subject: references on telecommunications applications of neural nets
> and/or machine vision
>
> I am looking for recent papers/technical reports etc. on telecommunications
> applications of neural networks. I have the following papers. I would
> appreciate receiving any additional references. Please respond by email.
> I will post a summary of responses to the net.
>
> I am also looking for references on applications of machine vision in
> telecommunications.
>
> 1. A. Hiramatsu, "ATM communications network control by neural network,"
> IJCNN 89, Washington D.C., I/259-266, 1989.
>
> 2. J.E. Jensen, M.A. Eshara, and S.C. Barash, "Neural network controller
> for adaptive routing in survivable communications networks," IJCNN 90,
> San Diego, CA, II/29-36, 1990.
>
> 3. T. Matsumoto, M. Koga, K. Noguchi, and S. Aizawa, "Proposal for
> neural network applications to fiber optic transmission," IJCNN 90,
> San Diego, CA, I/75-80, July 1990.
>
> 4. T.X. Brown, "Neural network for switching," IEEE Communications, vol
> 27, no 11, 72-80, 1989.
>
> 5. T.P. Troudet and S.M. Walters, "Neural network architecture for
> crossbar switch control," IEEE Transactions on Circuits and Systems,
> vol 38, 42-56, 1991.
>
> 6. S. Chen, G.J. Gibson and C.F.N. Cowan, "Adaptive channel equalization
> using a polynomial-perceptron structure," IEE Proceedings, vol 137,
> 257-264, 1990.
>
> 7. R.M. Goodman, J. Miller and H. Latin, "NETREX: A real time network
> management expert system," IEEE Globecom Workshop on the Application
> of Emerging Technologies in Network Operation and Management, FL,
> December 1988.
>
> 8. K.N. Sivarajan, "Spectrum Efficient Frequency Assignment for Cellular
> Radio," Caltech EE Doctoral Dissertation, June 1990.
>
> 9. M.D. Alston and P.M. Chau, "A decoder for block-coded forward error
> correcting systems," IJCNN 90, Washington D.C., II/302-305, January
> 1990.
>
> Thanks.
>
> Atul K. Chhabra
> --------------------------------------------------------------------------
> From pthc at mullian.ee.mu.OZ.AU
>
> Dear Atul,
>
> With reference to your article on the newsgroup ai.comp.neural-nets,
> I would like to add the following references to your list:
>
> 1. K. Nakano, M. Sengoku, S. Shinoda, Y. Yamaguchi & T. Abe,
> "Channel Assignment in Cellular Mobile Communication Systems Using
> Neural Networks", Singapore Int. Conf. on Communication Systems,
> 531-534, Nov. 1990.
>
> 2. D.
Kunz, "Channel Assignment for Cellular Radio Using Neural Networks",
> IEEE Trans. on Vehicular Technology, Vol. 40, No. 1, 188-193, Feb 1991.
>
> 3. P.T.H. Chan, M. Palaniswami & D. Everitt, "Dynamic Channel Assignment for
> Cellular Mobile Radio System Using Feedforward Neural Networks",
> IJCNN 91, pp 1242-1247, Nov. 1991.
>
> 4. P.T.H. Chan, M. Palaniswami & D. Everitt, "Dynamic Channel Assignment for
> Cellular Mobile Radio System Using Self-Organising Neural Networks",
> The 6th Australian Teletraffic Seminar, 89-96, Nov. 1991.
>
> 5. J.A. Franklin, M.D. Smith & J.C. Yun, "Learning Channel Allocation
> Strategies in Real Time", IEEE Vehicular Technology Conf. 92, May 1992.
>
> 6. D. Munoz-Rodriguez, J.A. Moreno-Cadenas et al., "Neural Supported
> Hand Off Methodology in Micro Cellular Systems", IEEE Vehicular
> Technology Conf. 92, May 1992.
>
> Regards,
> Peter Chan
>
> =====================================================================
> |                          \\  Telephone : +61 3 344 7436           |
> | Peter T. H. Chan          \\ Fax       : +61 3 344 6678           |
> |                            \\ E-Mail   : pthc at mullian.ee.mu.oz.AU |
> |__________________________\\_______________________________________|
> | Department of Electrical and Electronic Engineering               |
> | School of Information Technology and Electrical Engineering       |
> | The University of Melbourne, Parkville                            |
> | Victoria 3052, AUSTRALIA                                          |
> =====================================================================
>
> --------------------------------------------------------------------------
> From: nicwi at isy.liu.se (Niclas Wiberg)
>
> Hello,
> We have compiled a list of articles written on decoding of
> error-correcting codes using neural networks. The list is written
> in BibTeX, which is a bibliography program for the text processor LaTeX.
>
> Hope this helps.
>
> Niclas
>
> ================== Here we go ===================================
>
> @ARTICLE{ybw90,
> AUTHOR = "Yuan, Jing and Bhargava, Vijay K and Wang, Qiang",
> TITLE = "Maximum Likelihood Decoding Using Neural Nets",
> JOURNAL = "Journal of the Institution of Electronics and
> Telecommunication Engineers",
> YEAR = "1990",
> VOLUME = "36",
> NUMBER = "5-6",
> MONTH = "Sept.-Dec.",
> PAGES = "367-376" }
>
> @INPROCEEDINGS{hry90,
> AUTHOR = "Hrycej, Tomas",
> TITLE = "Self-Organization by Delta Rule",
> BOOKTITLE = "IJCNN International Joint Conference on
> Neural Networks",
> YEAR = "1990",
> PAGES = "307-312",
> ADDRESS = "New York, NY, USA" }
>
> @INPROCEEDINGS{jp90,
> AUTHOR = "Jeffries, Clark and Protzel, Peter",
> TITLE = "High Order Neural Models for Error Correcting Code",
> BOOKTITLE = "Proceedings from the SPIE - The International
> Society for Optical Engineering",
> YEAR = "1990",
> PAGES = "510-517",
> ORGANIZATION = "SPIE" }
>
> @INPROCEEDINGS{zha89,
> AUTHOR = "Zeng, Gengsheng and Hush, Don and Ahmed, Nasir",
> TITLE = "An Application of Neural Net in Decoding Error-Correcting Codes",
> BOOKTITLE = "Proceedings from 1989 IEEE International Symposium on
> Circuits and Systems",
> YEAR = "1989",
> PAGES = "782-785",
> ADDRESS = "New York, NY, USA" }
>
> @INPROCEEDINGS{slc89,
> AUTHOR = "Santamaria, M.E. and Lagunas, M.A. and Cabrera, M.",
> TITLE = "Neural Nets Filters: Integrated Coding and Signaling in
> Communication Systems",
> BOOKTITLE = "MELECON`89: Mediterranean Electrotechnical Conference
> Proceedings. Integrating Research, Industry and Education in
> Energy and Communication Engineering",
> YEAR = "1989",
> PAGES = "532-535",
> ADDRESS = "IEEE, New York, NY, USA",
> EDITOR = "Barbosa, A.M."
}
>
> @ARTICLE{ph86,
> AUTHOR = "Platt, J.C. and Hopfield, J.J.",
> TITLE = "Analog Decoding Using Neural Networks",
> JOURNAL = "AIP Conference Proceedings",
> YEAR = "1986",
> NUMBER = "151",
> PAGES = "364-369" }
>
> @INPROCEEDINGS{fdk89,
> AUTHOR = "Farotimi, O. and Dembo, A. and Kailath, T.",
> TITLE = "Absolute Stability and Optimal Training for Dynamic Neural Networks",
> BOOKTITLE = "Conference Record. Twenty-Third Asilomar Conference on Signals,
> Systems and Computers",
> YEAR = "1989",
> PAGES = "133-137",
> EDITOR = "Chen, R.R.",
> PUBLISHER = "Maple Press, San Jose, CA, USA" }
>
> @INPROCEEDINGS{cm90,
> AUTHOR = "Caid, William R. and Means, Robert W.",
> TITLE = "Neural Network Error Correcting Decoders for Block and
> Convolutional Codes",
> BOOKTITLE = "GLOBECOM`90: IEEE Global Telecommunications Conference and
> Exhibition. 'Communications: Connecting the Future'",
> YEAR = "1990",
> PAGES = "1028-1031",
> PUBLISHER = "IEEE, New York, NY, USA" }
>
> @ARTICLE{hb91,
> AUTHOR = "Hussain, M. and Bedi, Jatinder S.",
> TITLE = "Decoding Scheme for Constant Weight Codes for Optical and Spread
> Spectrum Applications",
> JOURNAL = "Electronics Letters",
> VOLUME = "27",
> PAGES = "839-842",
> NUMBER = "10",
> YEAR = "1991" }
>
> @INPROCEEDINGS{ac90,
> AUTHOR = "Alston, Michael D. and Chau, Paul M.",
> TITLE = "A Neural Network Architecture for the Decoding of Long Constraint Length
> Convolutional Codes",
> BOOKTITLE = "International Joint Conference on Neural Networks",
> YEAR = "1990",
> PAGES = "121-126",
> PUBLISHER = "IEEE, New York, NY, USA" }
>
> @INPROCEEDINGS{yc90,
> AUTHOR = "Yuan, Jing and Chen, C.S.",
> TITLE = "Neural Net Decoders for some Block Codes",
> BOOKTITLE = "IEE Proceedings I (Communications, Speech and Vision)",
> YEAR = "1990",
> PAGES = "309-314",
> ORGANIZATION = "IEE" }
>
> @INPROCEEDINGS{whi89,
> AUTHOR = "Whittle, P.",
> TITLE = "The Achievement of Memory by an Antiphon Structure",
> BOOKTITLE = "Developments in Neural Computing. Proceedings of a Meeting
> on Neural Computing",
> YEAR = "1989",
> PAGES = "119-124",
> ORGANIZATION = "IOP and London Math Society",
> PUBLISHER = "Adam Hilger, Bristol, UK" }
>
> @INPROCEEDINGS{lp89,
> AUTHOR = "Lee, Tsu-chang and Peterson, Allen M.",
> TITLE = "Adaptive Vector Quantization with a Structural Level Adaptable Neural
> Network",
> BOOKTITLE = "IEEE Pacific Rim Conference on Communications, Computers and Signal
> Processing. Conference Proceedings",
> YEAR = "1989",
> PAGES = "517-520",
> PUBLISHER = "IEEE, New York, NY, USA" }
>
> @ARTICLE{bb89,
> AUTHOR = "Bruck, Jehoshua and Blaum, Mario",
> TITLE = "Neural Networks, Error-Correcting Codes, and Polynomials over the Binary n-Cube",
> JOURNAL = "IEEE Transactions on Information Theory",
> VOLUME = "35",
> PAGES = "976-987",
> NUMBER = "5",
> YEAR = "1989" }
>
> @INPROCEEDINGS{ybw89,
> AUTHOR = "Yuan, Jing and Bhargava, Vijay K and Wang, Qiang",
> TITLE = "An Error Correcting Neural Network",
> BOOKTITLE = "IEEE Pacific Rim Conference on Communications, Computers and Signal
> Processing.
Conference Proceedings", > YEAR = "1989", > PAGES = "530-533", > PUBLISHER = "IEEE, New York, NY, USA" } > > @INPROCEEDINGS{cc90, > AUTHOR = "Chen, Chang-jia and Chen, Tai-yi", > TITLE = "Preliminary Study of the Local Maximum Problem of the Energy > Function for the Neural Network in Decoding of Binary Block Codes", > BOOKTITLE = "International Symposium on Information Theory and Its > Applications, ISITA`90", > YEAR = "1990", > PAGES = "727-729", > PUBLISHER = "IEEE, New York, NY, USA" } > > @INPROCEEDINGS{pd88, > AUTHOR = "Petsche, Thomas and Dickinson, Bradley W.", > TITLE = "A Trellis-Structured Neural Network", > BOOKTITLE = "Neural Information Processing Systems", > YEAR = "1988", > PAGES = "592-601", > PUBLISHER = "American Institute of Physics, New York, USA" } > > @INPROCEEDINGS{bar90, > AUTHOR = "Baram, Yoram", > TITLE = "Nested Neural Networks and their Codes", > BOOKTITLE = "Proceedings from 1990 IEEE International Symposium on > Information Theory", > YEAR = "1990", > PAGES = "9" } > > @TECHREPORT{str90, > AUTHOR = "Stranneby, Dag R.", > TITLE = "Error Correction of Corrupted Binary Coded Data Using Neural Networks", > INSTITUTION = "TRITA-TTT", > YEAR = "1990", > ADDRESS = "KTH, Stockholm, Sweden" } > > @TECHREPORT{sch91, > AUTHOR = "Schnell, M.", > TITLE = "Multilayer Perceptrons and Their Application to Decoding Block Codes", > INSTITUTION = "German Aerospace Research Establishment, Institute for Communications > Technology", > YEAR = "1991", > ADDRESS = "D-8031 Oberpfaffenhofen, Germany" } > > @INPROCEEDINGS{hb91-1, > AUTHOR = "Hussain, M. and Bedi, Jatinder S.", > TITLE = "Performance Evaluation of Different Neural Network Training > Algorithms in Error Control Coding", > BOOKTITLE = "SPIE`91, Applications of Artificial Intelligence and Neural > Networks", > YEAR = "1991", > PAGES = "697-707" } > > @INPROCEEDINGS{hb91-2, > AUTHOR = "Hussain, M. and Bedi, Jatinder S.", > TITLE = "Reed-{S}olomon Encoder/Decoder Application Using a Neural Network", > BOOKTITLE = "SPIE`91, Applications of Artificial Intelligence and Neural > Networks", > YEAR = "1991", > PAGES = "463-471" } > > @INPROCEEDINGS{hsb90, > AUTHOR = "Hussain, M. and Song, Jing and Bedi, Jatinder S.", > TITLE = "Neural Network Application to Error Control Coding", > BOOKTITLE = "SPIE`90", > YEAR = "1990", > PAGES = "502-510" } > > @INPROCEEDINGS{hs90, > AUTHOR = "Hussain, M. and Bedi, Jatinder S.", > TITLE = "Decoding a Class of Non Binary Codes Using Neural Networks", > BOOKTITLE = "33:rd Midwest Symposium on Circuits and Systems", > YEAR = "1990" } > > @TECHREPORT{zz91, > AUTHOR = "Zetterberg, Lars H. and Zhang, Qingshi", > TITLE = "Signal Detection Using Neural Networks and Error Correcting Codes", > INSTITUTION = "TRITA-TTT", > YEAR = "1991", > NUMBER = "9114", > ADDRESS = "KTH, Stockholm, Sweden" } > > @ARTICLE{jef90, > AUTHOR = "Jeffries, Clark", > TITLE = "Code Recognition with Neural Network Dynamical Systems", > JOURNAL = "Society for Industrial and Applied Mathematics Review", > VOLUME = "32", > PAGES = "636-651", > YEAR = "1990", > NUMBER = "4" } > > @ARTICLE{sou89, > AUTHOR = "Sourlas, Nicolas", > TITLE = "Spin-glass Models as Error-correcting Codes", > JOURNAL = "Nature", > VOLUME = "339", > PAGES = "693-695", > YEAR = "1989" } > > @TECHREPORT{gb92, > AUTHOR = "Gish, Sheri L. 
and Blaum, Mario",
> TITLE = "Adaptive Development of Connectionist Decoders for Complex
> Error-correcting Codes",
> INSTITUTION = "IBM Research Division",
> YEAR = "1992",
> ADDRESS = "Almaden Research Center, San Jose, CA, USA" }
>
> @INPROCEEDINGS{thfc87,
> AUTHOR = "Takefuji, Yoshiyasu and Hollis, Paul and Foo, Yoon Pin and
> Cho, Yong B.",
> TITLE = "Error Correcting System Based on Neural Circuits",
> BOOKTITLE = "IEEE First International Conference on Neural Networks",
> YEAR = "1987",
> PAGES = "293-300",
> VOLUME = "3",
> PUBLISHER = "SOS Printing, San Diego, CA, USA" }
>
> @ARTICLE{yc91,
> AUTHOR = "Yuan, Jing and Chen, C.S.",
> TITLE = "Correlation Decoding of the (24,12) {Golay} Code Using Neural
> Networks",
> JOURNAL = "IEE Proceedings I (Communications, Speech and Vision)",
> VOLUME = "138",
> PAGES = "517-524",
> YEAR = "1991" }
>
> @MASTERSTHESIS{aa91,
> AUTHOR = "Andersson, Gunnar and Andersson, H{\aa}kan",
> TITLE = "Generation of Soft Information in a Frequency-hopping
> {HF} Radio System Using Neural Networks",
> SCHOOL = "Link{\"{o}}ping Institute of Technology",
> YEAR = "1991",
> ADDRESS = "Link{\"{o}}ping, Sweden",
> NOTE = "(In Swedish)" }
>
> @INPROCEEDINGS{lki90,
> AUTHOR = "Li, Haibo and Kronander, Torbj{\"{o}}rn and Ingemarsson, Ingemar",
> TITLE = "A Pattern Classifier Integrating Multilayer Perceptron and
> Error-correcting Code",
> BOOKTITLE = "IAPR Workshop on Machine Vision Applications",
> YEAR = "1990",
> PAGES = "113-116" }
>
> @INPROCEEDINGS{ycl90,
> AUTHOR = "Yang, Jar-Ferr and Chen, Chi-Ming and Lee, Jau-Yien",
> TITLE = "Neural Networks for Maximum Likelihood Error Correcting Systems",
> BOOKTITLE = "IJCNN International Joint Conference on Neural Networks",
> YEAR = "1990",
> PAGES = "493-498",
> VOLUME = "1" }
>
> ----------------------------------------------------------------------
> Niclas Wiberg   nicwi at isy.liu.se
> Dept. of EE   Linkoping University   Sweden
> --------------------------------------------------------------------------
> From: Ernst Nordström
>
> Hello,
>
> here are some references on neural networks in telecommunications:
>
> 1. T. Takahashi, A. Hiramatsu, "Integrated ATM traffic control by distributed
> neural networks", ISS 90, Stockholm, Sweden, May 1990.
>
> 2. A. Hiramatsu, "Integration of ATM call admission control and link capacity
> control by distributed neural networks", IEEE Journal on Selected Areas in
> Communications, vol 9, no 7, Sep 1991.
>
> 3. X. Chen, I. Leslie, "A neural network approach towards adaptive congestion
> control in broadband ATM networks", GLOBECOM 91, Phoenix, AZ, Dec 1991.
>
> 4. N. Ansari, D. Liu, "The performance evaluation of a new neural network based
> traffic management scheme for a satellite communication network",
> GLOBECOM 91, Phoenix, AZ, Dec 1991.
>
> 5. F. Kamoun, M. Ali, "A neural network shortest path algorithm for optimum
> routing in packet-switched communications networks", GLOBECOM 91, Phoenix,
> AZ, Dec 1991.
>
> 6. M. Ali, F. Kamoun, "A neural network approach to the maximum flow problem",
> GLOBECOM 91, Phoenix, AZ, Dec 1991.
>
> 7. R. Lancini, F. Perego, S. Tubaro, "Some experiments on vector quantization
> using neural nets", GLOBECOM 91, Phoenix, AZ, Dec 1991.
>
> 8. B. Khasnabish, M. Ahmadi, "Congestion avoidance in large supra-high-speed
> packet switching networks using neural nets", GLOBECOM 91, Phoenix, AZ,
> Dec 1991.
>
> Best regards,
>
> Ernst Nordstrom
> Department of Computer Systems
> Uppsala University, Sweden
> --------------------------------------------------------------------------
> From: dbs0 at gte.com (Daniel Schwartz)
>
> Rodolfo Milito of Bell Labs published a paper demonstrating a neural
> network controller for admission control in a queuing system. The paper
> was published in the Neural Information Processing Systems proceedings
> of 1990 (Morgan Kaufmann).
>
> daniel b schwartz
> gte laboratories
> 40 sylvan rd
> waltham ma 02254
> --------------------------------------------------------------------------
> From: INDE47D at Jetson.UH.EDU
>
> Here are some more
>
> 1. R. Ogier and D. Beyer: Neural Network solution to the Link Scheduling
> Problem using Convex Relaxation, In IEEE Global Telecommunications
> Conference (Globecom 90), San Diego, California, December 1990.
>
> The above paper should give you some more.
>
> I am presently writing a tech-report on a NN solution to the maximum independent
> set (MIS) problem. This MIS problem is the same as the link scheduling
> problem for multi-hop radio networks. I should be finishing it sometime next
> week and can send you a copy if you are interested.
>
> - Shiv
> --------------------------------------------------------------------------
> From: giles at research.nj.nec.com (Lee Giles)
>
> See Goudreau and Giles in the NIPS91 Proceedings - NNs for interconnection
> networks.
>
> Let me know if you have trouble finding it.
>
> C. Lee Giles
> NEC Research Institute
> 4 Independence Way
> Princeton, NJ 08540
> USA
>
> Internet: giles at research.nj.nec.com
> UUCP: princeton!nec!giles
> PHONE: (609) 951-2642
> FAX: (609) 951-2482
> --------------------------------------------------------------------------
> From: yxt3 at po.CWRU.Edu (Yoshiyasu Takefuji)
>
> HI.
> Don't forget the following articles.
> Y. Takefuji, and K. C. Lee, "An artificial hysteresis binary neuron:....,"
> Biological Cybernetics, 64, 353-356, 1991.
> N. Funabiki, and Y. Takefuji, "A parallel algorithm for time slot assignment
> problems in TDM hierarchical switching systems," to appear in IEEE Trans. on
> Communications.
> N. Funabiki, and Y. Takefuji, "A parallel algorithm for traffic control
> problems in three-stage connecting networks," to appear in Journal of Parallel and
> Distributed Computing.
> N. Funabiki and Y. Takefuji, "A parallel algorithm for broadcast scheduling
> problems in packet radio networks," to appear in IEEE Trans. on Communications.
> N. Funabiki, Y. Takefuji, and K. C. Lee, "Comparison of six neural network
> models on a traffic control problem in a multistage interconnection network," to
> appear in IEEE Trans. on Computers.
> N. Funabiki, Y. Takefuji, "A neural network parallel algorithm for channel
> assignment problems in cellular radio networks," to appear in IEEE Trans. on
> Vehicular Technology.
>
> An introduction to the control problems using NN is
> found in my recent book entitled
> "Neural Network Parallel Computing,"
> from Kluwer, Jan 1992.
>
> thank you.
>
> yoshiyasu takefuji
> --------------------------------------------------------------------------
> From: ang at hertz.njit.edu (Nirwan Ansari, 201-596-3670)
>
> The following are a few of my NN papers on telecommunications:
>
> N. Ansari, "Managing the traffic of a satellite communication network
> by neural networks," to appear in B. Soucek and the IRIS Group (ed.),
> Dynamic, Genetic and Chaotic Programming of the 6th Generation
> series, pp.339-352, Wiley 1992.
>
> N. Ansari and Y. Chen, "Configuring Maps for a Satellite Communication
> Network by Self-organization," Journal of Neural Network Computing,
> vol. 2, no. 4, pp.11-17, Spring 1991.
>
> N. Ansari and Y. Chen, "A Neural Network Model to Configure Maps for
> a Satellite Communication Network," 1990 IEEE Global Telecommunications
> Conference, December 2-5, 1990, San Diego, CA, pp. 1042-1046.
>
> N. Ansari and D. Liu, "The Performance Evaluation of A New Neural
> Network Based Traffic Management Scheme For A Satellite Communication
> Network," Proc. 1991 IEEE Global Telecommunications Conference,
> December 2-5, 1991, Phoenix, AZ, pp. 110-114.
>
> --Nirwan Ansari
> --------------------------------------------------------------------------
> To: Atul Chhabra
>
> Mcdonald, K., T. R. Martinez, and D. M. Campbell,
> A Connectionist Method for Adaptive Real-Time Network Routing,
> Proceedings of the 4th International Symposium on Artificial Intelligence,
> pp. 371-377, 1991.
> _______________________________________________________________
> Tony Martinez
> Asst. Professor, Computer Science Dept, BYU, Provo, UT, 84602
> martinez at cs.byu.edu   Phone: 801-378-6464   Fax: 801-378-2800
> --------------------------------------------------------------------------
> From: rodolfo at buckaroo.att.com
> Original-From: buckaroo!rodolfo (Rodolfo A Milito +1 908 949 7614)
>
> You may want to consider
>
> Rodolfo A. Milito, Isabelle Guyon, and Sara Solla, "Neural Network
> Implementation of Admission Control," Advances in Neural Information
> Processing Systems 3, Morgan Kaufmann, 1991
>
> I would also appreciate your comments,
>
> Rodolfo Milito
> --------------------------------------------------------------------------
> From: an at verbum.com
>
> Hello,
>
> In response to your posting above, I have the following reference for you:
>
> Atsushi Hiramatsu, "ATM Communications Network Control by Neural
> Networks," _IEEE_Transactions_On_Neural_Networks_, Vol. 1 No. 1,
> March 1990.
>
> An Nguyen
> E-mail: an at verbum.com
> --------------------------------------------------------------------------
> From: jradue at SANDCASTLE.COSC.BROCKU.CA
>
> I was interested to see your request. I do not know if you are aware of the
> work being done at the Communications Research Laboratory at McMaster
> University in Hamilton, Ontario. They have just sponsored a Symposium on
> Communications in Neurobiological Systems. A contact email address is
> Myersa at SSCvax.cis.mcmaster.ca -- this is the address of the coordinating
> secretary there. Another address you could try, but I do not know how often
> he reads his mail, is haykin at sscvax.cis.mcmaster.ca -- this is Dr Simon
> Haykin, the Director of the Laboratory.
>
> I myself am working on the verification of handwritten signatures using
> NNs, and would be interested to hear if you receive any feedback from your
> request in this area.
>
> Regards
>
> Jon Radue
>
> Computer Science Department
> Brock University          V: (416)688-5550 x 3867
> St. Catharines, Ontario   F: (416)688-2789
> L2S 3A1 CANADA            E: jradue at sandcastle.cosc.brocku.ca
> --------------------------------------------------------------------------
> From: Frans Martin Coetzee
>
> Dear Atul
>
> On the connectionist bboard you recently expressed interest in references on
> telecommunication applications of Neural nets. I would be interested in a
> general post to the mailing list of the references you had received.
>
> As for me, I cannot offer you many references: I do know that there is a group
> working on coding/decoding of binary signals using neural networks at the
> University of Victoria. Below is one of their references
>
> Author   Yuan, J.; Bhargava, V.K.; Wang, Q.;
>          Dept. of Electr. & Comput. Eng., Victoria Univ., BC, CANADA
> Title    Maximum likelihood decoding using NEURAL nets
> Source   Journal of the Institution of Electronics and Telecommunication
>          Engineers;
>          J. Inst. Electron. Telecommun. Eng. (India); vol.36, no.5-6; Sept.
>          -Dec. 1990; pp. 367-76
> --------------------------------------------------------------------------
> From: karit at idiap.ch (Kari Torkkola)
>
> You were interested in references to telecommunications applications
> of neural networks. Here are a couple of such references:
>
> @InProceedings{Kohonen90c,
>   author = "Teuvo Kohonen and Kimmo Raivio and Olli Simula and Olli
>             Vent{\"a} and Jukka Henriksson",
>   title = "Combining Linear Equalization and Self-Organizing
>            Adaptation in Dynamic Discrete-Signal Detection",
>   booktitle = "Proceedings of the International Joint Conference on
>                Neural Networks",
>   year = "1990",
>   pages = "I 223-228",
>   address = "San Diego",
>   month = "June",
> }
>
> @InProceedings{Kohonen91a,
>   author = "Teuvo Kohonen and Kimmo Raivio and Olli Simula and
>             Jukka Henriksson",
>   title = "Performance Evaluation of Self-Organizing Map Based
>            Neural Equalizers in Dynamic
>            Discrete-Signal Detection",
>   booktitle = "Proceedings of the International Conference on
>                Artificial Neural Networks",
>   year = "1991",
>   pages = "II 1677-1680",
>   address = "Helsinki",
>   month = "June",
> }
>
> ===================================================================
> Kari Torkkola
> IDIAP (Institut Dalle Molle d'Intelligence Artificielle Perceptive)
> 4, rue du Simplon
> CH 1920 Martigny
> Switzerland   email: karit at idiap.ch
> ===================================================================
> --------------------------------------------------------------------------
> From: goudreau at research.nj.nec.com (Mark Goudreau)
>
> Hi Atul,
> Here are some more references for you. Please send me a copy of the
> compiled list once you are done.
> Thanks -Mark
>
> @ARTICLE{brow2,
> AUTHOR = "T.X. Brown and K.-H. Liu",
> TITLE = "Neural Network Design of a {B}anyan Network Controller",
> JOURNAL = "IEEE Journal on Selected Areas of Communication",
> YEAR = "1990",
> VOLUME = "8",
> NUMBER = "8",
> PAGES = "1428-1438",
> MONTH = "October"}
>
> @INPROCEEDINGS{funa1,
> AUTHOR = "N. Funabiki and Y. Takefuji and K.C. Lee",
> TITLE = "A Neural Network Model for Traffic Controls in Multistage
> Interconnection Networks",
> BOOKTITLE = "Proceedings of the International Joint Conference on
> Neural Networks 1991",
> YEAR = "1991",
> PAGES = "A898",
> MONTH = "July"}
>
> @INPROCEEDINGS{goud1,
> AUTHOR = "M.W. Goudreau and C.L. Giles",
> TITLE = "Neural Network Routing for Multiple Stage Interconnection
> Networks",
> BOOKTITLE = "Proceedings of the International Joint Conference on
> Neural Networks 1991",
> YEAR = "1991",
> PAGES = "A885",
> MONTH = "July"}
>
> @INPROCEEDINGS{goud2,
> AUTHOR = "M.W. Goudreau and C.L. Giles",
> TITLE = "Neural Network Routing for Random Multiple Stage
> Interconnection Networks",
> BOOKTITLE = "Advances in Neural Information Processing Systems~4",
> YEAR = "1992",
> EDITOR = "J.E. Moody and S.J.
>                Hanson and R.P. Lippmann",
>   PUBLISHER = "Morgan Kaufmann Publishers",
>   ADDRESS   = "San Mateo, CA",
>   PAGES     = "722--729"}
>
> @INPROCEEDINGS{haki1,
>   AUTHOR    = "N.Z. Hakim and H.E. Meadows",
>   TITLE     = "A Neural Network Approach to the Setup of the {B}enes
>                Switch",
>   BOOKTITLE = "Infocom 90",
>   YEAR      = "1990",
>   PAGES     = "397-402"}
>
> @ARTICLE{marr1,
>   AUTHOR  = "A.M. Marrakchi and T. Troudet",
>   TITLE   = "A Neural Net Arbitrator for Large Crossbar
>              Packet-Switches",
>   JOURNAL = "IEEE Transactions on Circuits and Systems",
>   YEAR    = "1989",
>   VOLUME  = "36",
>   NUMBER  = "7",
>   PAGES   = "1039-1041",
>   MONTH   = "July"}
>
> @INPROCEEDINGS{mels1,
>   AUTHOR    = "P.J.W. Melsa and J.B. Kenney and C.E. Rohrs",
>   TITLE     = "A Neural Network Solution for Routing in Three Stage
>                Interconnection Networks",
>   BOOKTITLE = "Proceedings of the 1990 International Symposium on
>                Circuits and Systems",
>   YEAR      = "1990",
>   PAGES     = "483-486",
>   MONTH     = "May"}
>
> @INPROCEEDINGS{mels2,
>   AUTHOR    = "P.J.W. Melsa and J.B. Kenney and C.E. Rohrs",
>   TITLE     = "A Neural Network Solution for Call Routing with
>                Preferential Call Placement",
>   BOOKTITLE = "Proceedings of the 1990 Global Telecommunications
>                Conference",
>   YEAR      = "1990",
>   PAGES     = "1377-1382",
>   MONTH     = "December"}
>
> @ARTICLE{rauc1,
>   AUTHOR  = "H.E. Rauch and T. Winarske",
>   TITLE   = "Neural Networks for Routing Communication Traffic",
>   JOURNAL = "IEEE Control Systems Magazine",
>   YEAR    = "1988",
>   VOLUME  = "8",
>   NUMBER  = "2",
>   PAGES   = "26-31",
>   MONTH   = "April"}
>
> @ARTICLE{take1,
>   AUTHOR  = "Y. Takefuji and K.C. Lee",
>   TITLE   = "An Artificial Hysteresis Binary Neuron: A Model Suppressing
>              the Oscillatory Behavior of Neural Dynamics",
>   JOURNAL = "Biological Cybernetics",
>   YEAR    = "1991",
>   VOLUME  = "64",
>   PAGES   = "353-356"}
>
> @INPROCEEDINGS{zhan1,
>   AUTHOR    = "L. Zhang and S.C.A. Thomopoulos",
>   TITLE     = "Neural Network Implementation of the Shortest Path
>                Algorithm for Traffic Routing in Communication Networks",
>   BOOKTITLE = "Proceedings of the International Joint Conference on
>                Neural Networks 1989",
>   YEAR      = "1989",
>   PAGES     = "591",
>   MONTH     = "June"}
>
> -------------------------------------------------------------------------
> Mark W. Goudreau - NEC Research Institute, Inc. - 4 Independence Way
> Princeton, NJ 08540 - USA - goudreau at research.nj.nec.com - (609) 951-2689
> -------------------------------------------------------------------------
> --------------------------------------------------------------------------
> From: lynda at stoney.fit.qut.edu.au (Ms Lynda Thater)
>
> Hello.
>
> I am writing in reference to your news article calling for references in the
> area of neural network approaches to Telecommunications applications. I have
> a few references you might want to look at.
>
> 1. Rauch, H.E. and Winarske, T., "Neural Networks for Routing Communication
>    Traffic", IEEE Control Systems Magazine, 1988.
>
> 2. IEEE Contr. Syst. Magazine, Special Section on Neural Networks for Systems
>    and Control, April 1988.
>
> 3. Chang, F. and Wu, L., "An Optimal Adaptive Routing Algorithm", IEEE
>    Trans. Auto. Contr., August 1986.
>
> 4. Vakil, F. and Lazar, A.A., "Flow Control Protocols for Integrated Networks
>    with Partially Observed Voice Traffic," IEEE Trans. Auto. Contr., January
>    1987.
>
> 5. Tank, D.W. and Hopfield, J.J., "Simple 'Neural' Optimization Networks: An
>    A/D Converter, Signal Decision Circuit, and a Linear Programming Circuit",
>    IEEE Transactions on Circuits and Systems, Vol. CAS-33, May 1986.
>
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Lynda J. Thater                  AARnet: lynda at stoney.fit.qut.edu.au
> Lecturer                         ARPA:   THATER at qut.edu.au
>                                  Direct Phone: (07) 864-1923
>
>            School of Information Systems
>   _--_|\   Faculty of Information Technology
>  /      \  QUT Queensland University of Technology
>  \_.--._/  Box 2434 Brisbane 4001 AUSTRALIA
>        v   Phone: +61 7 864-2111  Fax: +61 7 864-1969
> --------------------------------------------------------------------------

From lazzaro at boom.CS.Berkeley.EDU Fri Jun 26 16:24:31 1992
From: lazzaro at boom.CS.Berkeley.EDU (John Lazzaro)
Date: Fri, 26 Jun 92 13:24:31 PDT
Subject: No subject
Message-ID: <9206262024.AA10610@boom.CS.Berkeley.EDU>

---------------------------------------------------------------------------------
                          CALL FOR PAPERS
                Journal of VLSI Signal Processing
            Special Issue on Analog VLSI Computation
---------------------------------------------------------------------------------

A special issue of the Journal of VLSI Signal Processing is planned on the
topic: Analog VLSI Computation. The theme of the issue will be the
application of new analog VLSI techniques to complex computational tasks,
particularly those relating to signal processing systems. Example topics
include:

1. Processing of signals for electronic systems used in areas such as voice
   or image communication,
2. Data compression techniques,
3. Modeling of auditory or visual processes,
4. Analog neuron circuits for learning and adaptation,
5. Noise in analog circuits, noise reduction, and limits to signal precision,
6. Techniques for automatic error control,
7. Use of pulse sequences, and mixed analog-digital systems.

Papers on other topics, particularly new and interesting applications, will
be welcome. The deadline for papers is: 1 October 1992. Papers should be
submitted to:

Michael D. Godfrey
Editor, Special Issue on Analog Computation
Journal of VLSI Signal Processing
Information Systems Laboratory
Durand Building
Department of Electrical Engineering
Stanford University
Stanford CA 94305, USA
e-mail: godfrey at isl.stanford.edu

Papers may be submitted by e-mail in TeX or PostScript (PostScript is
preferred) or mailed in hard-copy form. All e-mail should be identified
with Subject: J. of VLSI Sig. Proc., Special Issue. Please refer to the
Section: Information for Authors in a recent issue of the Journal for
details about submission requirements and format.

From shrager at xerox.com Mon Jun 29 04:18:20 1992
From: shrager at xerox.com (Jeff Shrager)
Date: Mon, 29 Jun 1992 01:18:20 PDT
Subject: Transfer in Recurrent Networks: A preliminary report and request for advice
Message-ID: <92Jun29.011830pdt.38019@huh.parc.xerox.com>

Following is an abbreviated version of a preliminary report on our attempts
to produce instructed training and transfer in recurrent networks. I am
posting it in the hope of soliciting advice on how to proceed (or not) and
pointers to related work. The longer version of the report is actually not
yet available, as results are being computed even as I write. (This version
was produced by basically just removing everything that said: ".")

We would appreciate any thoughts that you have on how to proceed or where
to look for related work.

Jeff Shrager
David Blumenthal

(Incidentally, I'd be happy to give the Common Lisp (Sun Franz 4.1) bp
driver code (mentioned below) to anyone who needs to run similar supervised
experiments with the McClelland and Rumelhart programs. It's slightly
special purpose, but very easy to modify.)
--- Please Do Not Quote or Redistribute ---

We have been exploring training regimes, labeling, and transfer in
recurrent backpropagation networks. Our goal in this research is to model
three aspects of human development: First, people learn to associate words
with actions. Second, given such associations, people can, on command, do
one or a sequence of actions. Third, by practicing sequential actions put
together as a result of verbal direction (or self-direction), they can
learn new skills, and give new labels to these skills. Finally, each of
these processes may require (or make use of) physical guidance or tutorial
remediation by an "expert". For people, this is especially the case for the
first of these phenomena.

The metaphorical model that we use in considering these phenomena is that
of interaction between a parent and child during joint activity, such as
baking muffins. Shrager & Callanan (1991: Proceedings of the Cognitive
Science Conference) studied the various means by which parents and their
3-, 4-, and 5-year-old children scoop baking soda out of a box for a muffin
recipe. It was observed, first, that there is a large amount of
non-directive information in the environment, especially in the verbal
context, that a learner such as the child might pick up on in order to
learn this skill. Furthermore, it was observed that remediation takes place
differently at different ages: through physical guidance in the earlier
years, and through verbal instruction later on.

We set out to model such a collaborative skill acquisition setting using an
algorithmic teacher (`parent') and a Jordan-style recurrent connectionist
sequence learner (`child'). The problem was simplified by reducing it to
that of training a net to produce a sequence of real-valued (x,y)
coordinates corresponding to a simple sequence of positions for the spoon.
We chose interior real-valued coordinates for our points instead of 0's and
1's to avoid possible edge effects. Figure 1 exemplifies the learning task.

                       ___________
               (.2,.4)|           |
         *<<<<[out]<<<<<<*(.8,.4) |
         *<=============+^        |
          [scoop]      $^         |
                       $^ [up]    |
         *>=============+^        |
         *(.2,.2)>[in]>>>*(.8,.2) |
                      |___________|

Figure 1: The outline box is meant to represent a box of baking soda. Stars
represent the starting and ending points that were trained. Arrows indicate
the path heads and tails. Verbal labels (names) are enclosed in [brackets].
Each step of the outer path in+up+out, i.e., (.2,.2)->(.8,.2),
(.8,.2)->(.8,.4), (.8,.4)->(.2,.4), is intended to be individually trained
into a recurrent network, using a different label. Then the unified path,
called "scoop" (>==...>$$>==...>), is to be either verbally composed using
the previously learned labels, or else guided through along with the
labels. In either case, scoop also has its own label. We think of a parent
telling a child something like: to scoop the baking soda you put the spoon
in, then bring it up against the box top and pull it out (to level the
amount), while, in the case of a younger child, physically guiding him or
her through these actions.

Goals

We wished to show that training of [scoop] is facilitated by pretraining
with combinations of [in], [up], and [out], or, conversely, that the
learning of these sequences is facilitated by pretraining with [scoop].
Secondarily, we wished to explore the function of `label' interactions,
where by `label' we mean the presented inputs at the non-recurrent input
units of the network.
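For readers who want to see the moving parts, the following is a minimal
numpy sketch of a Jordan-style sequence learner of the kind described
above: plan units carry the label and stay clamped over a sequence, while
context units carry back the previous output point. The unit counts match
the 3232 net introduced in the next section, but the function names and the
omission of bias terms are ours; the actual experiments used the McClelland
& Rumelhart bp program, not this code.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run_sequence(plan, n_steps, W_hid, W_out):
    """One forward pass of a Jordan-style net: 3 plan units (the label)
    stay clamped for the whole sequence; 2 context units carry back the
    previous output point."""
    context = np.zeros(2)                    # context starts at rest
    points = []
    for _ in range(n_steps):
        x = np.concatenate([plan, context])  # 3 plan + 2 context inputs
        h = sigmoid(W_hid @ x)               # 3 hidden units
        y = sigmoid(W_out @ h)               # 2 outputs: an (x, y) point
        points.append(y)
        context = y                          # Jordan feedback: output -> context
    return np.array(points)

# Example: drive an untrained net with the combined [scoop] label 110.
rng = np.random.default_rng(0)
W_hid = rng.normal(scale=0.5, size=(3, 5))   # hidden weights (3 units x 5 inputs)
W_out = rng.normal(scale=0.5, size=(2, 3))   # output weights (2 units x 3 hidden)
print(run_sequence(np.array([1.0, 1.0, 0.0]), 4, W_hid, W_out))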
General Method

For most of the experiments reported here we used a Jordan-style recurrent
network with 3 plan units, 2 context units (linked to the output units), 3
hidden units, and 2 output units. All of our nets will be identified by the
number of units in each part of the net, labeled in the aforementioned
order. Thus, the just-described net will be called: 3232 (Figure 2).

        -----------------------
               8  9         output
              5  6  7        hidden
        (0 1 2)    (3 4)
         (plan)  (context)

Figure 2: The recurrent network, represented in accord with the numbering
scheme used by the bp program.

The "bp" program (McClelland & Rumelhart) was used to handle the network
learning through backpropagation. A lisp front-end was written for bp,
within which simple algorithmic experiments could be run to train the
network on a sequence of different inputs and to various criteria.

Data: Figure 3 shows each training set. The individual subsequences
([in]/[out]) were given different input codings (010/100), and the entire
sequence ([scoop]) was given still another code (generally the combined
code: 110). We considered the input codes as labels. Thus each action had a
different label.

Figure 3: The training patterns we used in this experiment. (Negative
context numbers refer to the number of the unit the context unit is linked
to. See M&R, pgs 157-158 for details of this evil hack.)

in3232.pat
    in0     0 1 0    0  0    .2 .2
    in1     0 1 0   -8 -9    .8 .2
            \   /   \    /   \   /
            label   context  target

out3232.pat
    scr0    1 0 0    0  0    .8 .4
    scr1    1 0 0   -8 -9    .2 .4

InOut3232.pat
    InOut0  0 1 0    0  0    .2 .2
    InOut2  0 1 0   -8 -9    .8 .2
    InOut3  1 0 0    0  0    .8 .4
    InOut4  1 0 0   -8 -9    .2 .4

scoop3232.pat
    scoop0  1 1 0    0  0    .2 .2
    scoop1  1 1 0   -8 -9    .8 .2
    scoop2  1 1 0   -8 -9    .8 .4
    scoop3  1 1 0   -8 -9    .2 .4

Parameters: We shall use the phrases "fully trained" and "to criterion" to
mean that the total sum of squares ("ecrit", in the language of bp) was
less than or equal to 0.01. Unless otherwise specified, weights were
updated after each epoch of training (epoch mode). The training driver
proceeded in steps of no smaller than 10 epochs; therefore, all results are
recorded at some increment of 10 epochs, even if bp had reached criterion
before that point.

Experiments

The value of interest to us in our initial experiments is the number of
training epochs required to learn [scoop] to criterion, given various prior
experience. That is, we tried to train different parts of the sequence
individually before training the whole. There ought to be some savings
from training in a simpler task (in, out, or combinations) that can
transfer to and improve (speed up) the training of the whole. Two general
groups of studies were carried out: Group 1 studies pretrained the network
with various combinations of [in], [out], [up], to varying degrees, and
then recorded the time to train [scoop] to criterion. Group 2 studies did
the opposite, pretraining with [scoop] and recording the training time for
the subsequences. In most cases, different labels, composed from simple
binary values (e.g., 010, 101), were assigned to each subsequence, and then
[scoop] was given the unified label (111) or average label (.5 .5 .5). Each
reported mean and deviation results from 50 repetitions of the experiment,
carried out on a newly started copy of bp (thus guaranteeing random initial
weights). Deviations will be reported in parentheses following means. If no
deviation is reported, the value is not a mean.
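The bookkeeping of these experiments can be summarized in a short sketch.
This is our paraphrase of the driver logic just described, not the actual
lisp front-end; `train_epochs` and `tss` are caller-supplied hooks standing
in for calls into bp, and all names are ours.

import statistics

def epochs_to_criterion(train_epochs, tss, ecrit=0.01, step=10, cap=1_000_000):
    """Train in increments of `step` epochs until the total sum of squares
    (tss) reaches `ecrit`; counts are therefore quantized to 10s, as in
    the report. `train_epochs(n)` and `tss()` are hypothetical hooks."""
    epochs = 0
    while tss() > ecrit and epochs < cap:  # cap guards the runaway runs noted later
        train_epochs(step)
        epochs += step
    return epochs

def transfer_experiment(make_run, pretrain_sets, test_set, n_reps=50):
    """Mean (sd) epochs to criterion on `test_set` after pretraining, each
    repetition on a freshly initialized net (new random weights).
    `make_run(net, patterns)` returns the (train_epochs, tss) hooks."""
    times = []
    for _ in range(n_reps):
        net = fresh_random_net()             # hypothetical: new copy of bp
        for pats in pretrain_sets:           # e.g. [in_pats, out_pats] or []
            epochs_to_criterion(*make_run(net, pats))
        times.append(epochs_to_criterion(*make_run(net, test_set)))
    return statistics.mean(times), statistics.stdev(times)

The same loop with a variable `ecrit` in the pretraining phase gives the
"nudging" manipulation reported below.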
When a scoop training value is reported, it is a mean (sd) difference
between the end of pretraining (to whatever criterion is indicated, or
0.01) and the point at which the [scoop] pattern reached criterion, on a
per-trial basis. Thus, unless otherwise specified, the phrase "pretrained"
means "pretrained to criterion (of ecrit 0.01, to the next increment of 10
epochs)".

Group 1 Studies (on the 3232 network)

The training of [scoop] alone required 316 (87) epochs. Pretraining with
[in] resulted in a [scoop] training time of 326 (236). This difference was
(pretty obviously!) not significant by a t-test. However, pretraining with
"in" followed by "out" and then "scoop" resulted in a much longer training
time, 600 (351), which differs from scoop alone (t(8)=2.56, p<.025) and
from in+scoop (t(8)=2.05, p<.05). Similar results were obtained by
pretraining with InOut.

We next attempted to parameterize the amount of ill effect that pretraining
with InOut was having, by "nudging" the network: varying the pretraining
total-sum-of-squares (ecrit) criterion. Figure 4, the graph of exp9, plots
the amount of time that [scoop] takes to train, given pretraining to
different critical tss values, ranging from 0.25 (very little pretraining)
through 0.02 (greatest amount of pretraining). One can see that although
the data is very noisy (r^2=.16) there is a trend towards [scoop] requiring
more training as the network "overlearns" InOut.

[InOut] nudge ecrit    [scoop] training rate to 0.01 ecrit
-------------------    -----------------------------------
       0.20            MEAN 554.0      ERR 48.286182  DEV 152.69432
       0.15            MEAN 415.0      ERR 57.334305  DEV 181.30699
       0.10            MEAN 574.44446  ERR 86.63639   DEV 259.90918
       0.08            MEAN 366.0      ERR 62.203964  DEV 196.7062
       0.06            MEAN 534.0      ERR 84.33003   DEV 266.675
       0.04            MEAN 673.0      ERR 68.02042   DEV 215.09946
       0.02            MEAN 644.0      ERR 88.206825  DEV 278.93448

Figure 4. `Nudge' criterion and [scoop] training rates for various levels
of nudging. [This is supposed to be a plot, but it appears in this textual
version as a table.]

[Report of a number of unsuccessful attempts with different network
architectures deleted.]

Group 2 Studies

Since we failed to find consistent pretraining effects from subsequences to
the whole sequence, we investigated transfer in the other direction:
pretraining [scoop] and looking for effects on the training times of
different subsequences. This is non-trivial because the parts of the
sequence were again given different labels and did not always start at the
first point in the [scoop] sequence. The Jordan-style recurrent network
tries to replicate a particular sequence in the order in which it was
learned. This is the effect that we are both depending upon and fighting
against. We found considerable pretraining effect in most cases (Table 1).

                 alone      with [scoop]    (label)    remarks
                            pretraining
  ------------   --------   -------------   --------   ------------
  in             130,19      12,4 *         (0 1 0)
  out             31,7        ?             (1 0 0)
  up              83,23     965,99 *        (0 0 0)    zero effect?
  123point       458,132    250,154 *       (0 1 0)
  1234point      288,28     375,158         (0 1 0)
  23point         87,25     263,133 *       (0 1 0)
  234point       350,95     146,110 *       (0 1 0)
  1point           8,1        1,1 *         (0 1 0)
  2point           8,1      107,226 *       (0 1 0)
  4point           7,2        ?             (0 1 0)
  scpout          77,12       ?             (1 1 0)
  scp234point    301,85     294,161         (1 1 0)

Table 1: Transfer from [scoop] to its subsequences. Numbers refer to
subsequence points from the [scoop] sequence (the first point is 1, the
last is 4):

        4 <<<<< 3
                ^
        1 >>>>> 2

Patterns that begin with "scp" have the same labels (1 1 0) as [scoop].
Results are mean,deviation from 50 trials. * indicates a significant
difference.
These results suggest that the network learns the sequence, and its
knowledge of the sequence is not completely linked to the inputs. Thus,
after pretraining with [scoop], the network can learn a similar sequence
with a different label and/or a different starting point relatively easily.
However, the large deviations in this case suggest that the network may be
learning several different versions of [scoop], and that some of these lend
themselves to transfer while others do not. (In a very few cases the system
appeared to go into an infinite loop, apparently never reaching criterion,
using precisely the same inputs that resulted in reasonably small training
times in most cases. These were stopped by force at times on the order of
1E5 to 1E6 epochs, depending upon when we noticed the problem, and were
excluded from the results. This may have been an infinite, or at least very
deep, hole in the space. Changes in learning parameters may have fixed
these problems.)

From cateau at star.phys.metro-u.ac.jp Mon Jun 29 18:06:04 1992
From: cateau at star.phys.metro-u.ac.jp (Hideyuki Kato)
Date: Mon, 29 Jun 92 18:06:04 JST
Subject: Tech Report Available
Message-ID: <9206290906.AA12984@star.phys.metro-u.ac.jp>

The following technical report is now available in neuroprose:

Power law in the performance of the human memory and a simulation with a
neural network model

Tatsuhiro Nakajima, Nobuko Fuchikami
Department of Physics, Tokyo Metropolitan University
1-1 Minami-Osawa, Hachioji, Tokyo 192-03, Japan

Hideyuki Cateau
Department of Physics, University of Tokyo
Bunkyo-ku, Tokyo 113, Japan

Hiroshi Nunokawa
National Laboratory for High Energy Physics (KEK)
Tsukuba-shi, Ibaraki 305, Japan

This paper will appear in the proceedings of ISKIT'92. The report number of
this paper is TMUP-HEL-9203, TU-611, KEK-TH-334 or KEK preprint 92-40.

Abstract

We show that the learning pace of the back-propagation model is described
by a power law with high precision. Interestingly, the same power law was
found in human memory by a psychologist in the past. Our result therefore
provides quantitative evidence that the back-propagation model, simple
though it is, shares some essential structure with the human brain. In the
course of the discussion we construct a novel memory model. This model
naturally avoids the notable difficulty of back-propagation networks that
learning is very sensitive to initial conditions.

* Hard copies are not available, sorry *

Instructions for obtaining by anonymous ftp:

%ftp archive.cis.ohio-state.edu
Name: anonymous
Password: neuron
ftp> bin
ftp> cd pub/neuroprose
ftp> get nakajima.power.tar.Z
ftp> quit
%uncompress nakajima.power.tar.Z
%tar xvfo nakajima.power.tar

Hideyuki Cateau
Department of Physics, University of Tokyo, Hongo, Bunkyo-ku, Tokyo, 113
Japan
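As a side note on the abstract above: the standard way to check a power-law
learning curve, E(t) ~ a * t**(-b), is a straight-line fit in log-log
coordinates. The sketch below is ours, for illustration only; it is not the
authors' code, and the synthetic data are invented.

import numpy as np

def power_law_fit(epochs, errors):
    """Least-squares fit of errors ~ a * epochs**(-b) in log-log space;
    returns (a, b, r) where r is the log-log correlation coefficient."""
    lx, ly = np.log(epochs), np.log(errors)
    slope, intercept = np.polyfit(lx, ly, 1)   # ly ~ slope*lx + intercept
    r = np.corrcoef(lx, ly)[0, 1]
    return np.exp(intercept), -slope, r

# Synthetic check: an exact power law comes back with r close to -1.
t = np.arange(1, 200, dtype=float)
a, b, r = power_law_fit(t, 5.0 * t ** -0.7)
print(a, b, r)   # approximately 5.0, 0.7, -1.0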
From bdbryan at eng.clemson.edu Wed Jun 3 20:21:33 1992
From: bdbryan at eng.clemson.edu (Ben Bryant)
Date: Wed, 3 Jun 92 20:21:33 EDT
Subject: TDNN network configuration file(s) for PlaNet
Message-ID: <9206040021.AA16545@eng.clemson.edu>

I recently sent a message concerning the above that was somehow garbled in
the transmission. I apologize for this. The file that was sent in the last
mailing was an ascii text file containing our current "best estimate" of
how the training of a TDNN takes place, implemented as a PlaNet network
configuration file. If anyone out there has experience with PlaNet and has
written a correct TDNN network config file for this package, I wonder if
you might be kind enough to send us a copy. If you cannot do this for
non-disclosure reasons, could you please simply look over the following
implementation and tell me whether we have implemented the training
procedure correctly. I would be much obliged. The following is our "best
guess" TDNN:

#### file for 3-layer TDNN network with input 40x15 N=2; hidden 20x13
#### N=4 ; Output 3x9

# DEFINITIONS OF DELAY
define NDin 3
define NDhid 5
define NDin_1 2
define NDhid_1 4

# DEFINITIONS OF UNITS
define NUin 40
define NUhid 20
define NUout 3
define NUin_1 39
define NUhid_1 19
define NUout_1 2

# DEFINITION OF INPUT FRAME
define NFin 15
define NFhid (NFin-NDin+1)
define NFout (NFin-NDin+2-NDhid)
define BiasHid 0
define BiasOut 0

## DEFINITIONS OF LAYERS
layer Input NFin*NUin
layer Hidden NUhid*NFhid
layer Output NFout*NUout
layer Result NUout
define biasd user1

## DEFINITIONS OF INPUT/TARGET BUFFERS
target NFout*NUout
input NFin*NUin

## DEFINITIONS OF CONNECTIONS
define Win (NUin*NDin_1+NUin_1)
define Whids 0
define Whid (NUhid_1)
connect InputHidden1 Input[0-Win] to Hidden[0-Whid]
define WHid (NUhid*NDhid_1+NUhid_1)
define Wout (NUout_1)
connect HiddenOutput1 Hidden[0-WHid] to Output[0-Wout]

## n.3layer.expr: implementation of a 3layer-feedforward-net with expressions.
## define Nin, Nhid, Nout, BiasHid and BiasOut as desired.
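## Note (reader's summary, inferred from the defines above, not part of
## the original file): the intended structure appears to be Input =
## NFin x NUin = 15 frames of 40 units; each of the NFhid = 13 hidden
## frames of NUhid = 20 units sees a window of NDin = 3 input frames
## through the single shared weight block InputHidden1; each of the
## NFout = 9 output frames of NUout = 3 units sees a window of NDhid = 5
## hidden frames through HiddenOutput1. The procedures below slide these
## two blocks across the frames and average the accumulated weight
## deltas, which is the TDNN weight-sharing scheme.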
define ErrMsg \n\tread\swith\s'network\sNin=\sNhid=\sNout=\sBiasHid=\sBiasOut=\sn.3layer.expr'\n
#IFNDEF NDin; printf ErrMsg; exit; ENDIF
#IFNDEF Nhid; printf ErrMsg; exit; ENDIF
#IFNDEF Nout; printf ErrMsg; exit; ENDIF
IFNDEF BiasHid; printf ErrMsg; exit; ENDIF
IFNDEF BiasOut; printf ErrMsg; exit; ENDIF

# macro definitions of the derivatives of the sigmoid for Hidden and Output
IF $min==0&&$max==1
define HiddenDer Hidden*(1-Hidden)
define OutputDer Output*(1-Output)
ELSE
define HiddenDer (Hidden-$min)*($max-Hidden)/($max-$min)
define OutputDer (Output-$min)*($max-Output)/($max-$min)
ENDIF

## PROCEDURE FOR ACTIVATING NETWORK FORWARD
procedure activate
scalar i
i=0
Input=$input
while i<NFhid
Hidden:net[i*NUhid->i*NUhid+NUhid_1]=InputHidden1 \
**T(Input[i*NUin->i*NUin+NUin*NDin_1+NUin_1])
i+=1
endwhile
Hidden = logistic(Hidden:net+(BiasHid*Hidden:bias))
i=0
while i<NFout
Output:net[i*NUout->i*NUout+NUout_1] = HiddenOutput1 \
**T(Hidden[i*NUhid->i*NUhid+NUhid*NDhid_1+NUhid_1])
i+=1
endwhile
Output=logistic(Output:net+(BiasOut*Output:bias))
$Error=mean((Output:delta=$target-Output)^2)/2
Output:delta*=OutputDer
end

## PROCEDURE FOR TRAINING NETWORK
matrix Hidden_delta NFout NDhid*NUhid
procedure learn
call activate
scalar i;scalar j
i=0
while i<NFout
Hidden_delta[i]=Output:delta[i*NUout->i*NUout+NUout_1] \
**HiddenOutput1*HiddenDer[i*NUhid->i*NUhid+NUhid*NDhid_1+NUhid_1]
i+=1
endwhile
Hidden:delta=0
i=0
while i<NFout
j=0
while j<NDhid
Hidden:delta[(i+j)*NUhid->(i+j)*NUhid+NUhid_1] \
+= Hidden_delta[i][j*NUhid->j*NUhid+NUhid_1]
j+=1
endwhile
i+=1
endwhile
i=0
while i<NFhid
if (i<NDhid) then Hidden:delta[i*NUhid->i*NUhid+NUhid_1]/=(i+1) endif
if (NFhid-i<NDhid) then Hidden:delta[i*NUhid->i*NUhid+NUhid_1]/=(NFhid-i) endif
if ((NFhid-i>=NDhid) && (i>=NDhid)) then Hidden:delta[i*NUhid->i*NUhid+NUhid_1]/=(NDhid) endif
i+=1
endwhile
i = 0
InputHidden1:delta*=$alpha*(NDhid*(NFout))
while i<NFout
j=0
while j<NDhid
InputHidden1:delta+=$eta*T(Hidden_delta[i][j*NUhid->j*NUhid+NUhid_1]) \
**Input[(i+j)*NUin->(i+j)*NUin+NDin_1*NUin+NUin_1]
j+=1
endwhile
i+=1
endwhile
InputHidden1 += InputHidden1:delta/=(NDhid*(NFout))
i=0
HiddenOutput1:delta*=$alpha*(NFout)
while i<NFout
HiddenOutput1:delta+=$eta*T(Output:delta[i*NUout->i*NUout+NUout_1]) \
**Hidden[i*NUhid->i*NUhid+NUhid*NDhid_1+NUhid_1]
i+=1
endwhile
HiddenOutput1:delta/=(NFout)
HiddenOutput1+=HiddenOutput1:delta
Hidden:bias+=Hidden:biasd=Hidden:delta*$eta+Hidden:biasd*$alpha
Output:bias+=Output:biasd=Output:delta*$eta+Output:biasd*$alpha
end

Thanks in advance for your help.

-Ben Bryant

From tgd at arris.com Wed Jun 3 17:03:26 1992
From: tgd at arris.com (Tom Dietterich)
Date: Wed, 3 Jun 92 14:03:26 PDT
Subject: Position Announcement: Arris Pharmaceutical
Message-ID: <9206032103.AA09645@oyster.arris.com>

RESEARCH SCIENTIST in Machine Learning, Neural Networks, and Statistics
Arris Pharmaceutical

Arris Pharmaceutical is a start-up pharmaceutical company founded in 1989
and dedicated to the efficient discovery and development of novel,
orally-active human therapeutics through the application of artificial
intelligence, machine learning, and pattern recognition methods.

We are seeking a person with a PhD in Computer Science, Mathematics,
Statistics, or related fields to join our team developing new machine
learning algorithms for drug discovery. The team currently includes
contributions from Tomas Lozano-Perez, Rick Lathrop, Roger Critchlow, and
Tom Dietterich. The ideal candidate will have a strong background in
mathematics (including spatial reasoning methods) and five years'
experience in machine learning, neural networks, or statistical
model-building methods. The candidate should be eager to learn the relevant
parts of computational chemistry and to interact with medicinal chemists
and molecular biologists.
To a first approximation, the Arris drug design strategy begins by
identifying a pharmaceutical target (e.g., an enzyme or a cell-surface
receptor), developing assays to measure chemical binding with this target,
and screening large libraries of peptides (short amino acid sequences) with
these assays. The resulting data, which indicate how well each compound
binds to the target, will then be analyzed by machine learning algorithms
to develop hypotheses that explain why some compounds bind well to the
target while others do not. Information from X-ray crystallography or NMR
spectroscopy may also be available to the learning algorithms. Hypotheses
will then be refined by synthesizing and testing additional peptides.
Finally, medicinal chemists will synthesize small organic molecules that
satisfy the hypothesis, and these will become candidate drugs to be tested
for medical safety and effectiveness.

For more information, send your resume with the names and addresses of
three references to Tom Dietterich (email: tgd at arris.com; voice:
415-737-8600; FAX: 415-737-8590).

Arris Pharmaceutical Corporation
385 Oyster Point Boulevard, Suite 12
South San Francisco, CA 94080

From LC4A%ICINECA.BITNET at BITNET.CC.CMU.EDU Fri Jun 5 11:16:30 1992
From: LC4A%ICINECA.BITNET at BITNET.CC.CMU.EDU (F. Ventriglia)
Date: Fri, 05 Jun 92 11:16:30 SET
Subject: 1992 Capri School
Message-ID: <01GKUE3KHJV49N3W6N@BITNET.CC.CMU.EDU>

Last Announcement

INTERNATIONAL SCHOOL on NEURAL MODELING and NEURAL NETWORKS
Capri (Italy) - September 28th-October 9th, 1992
Director: F. Ventriglia

An International School on Neural Modelling and Neural Networks has been
organized under the sponsorship of the Italian Group of Cybernetics and
Biophysics of the CNR, the Institute of Cybernetics of the CNR and the
National Committee for Physics of the CNR; it is co-sponsored by the
American Society for Mathematical Biology.

First week (Sept 28 - Oct 2)

TOPICS                                                  LECTURERS
1. Neural Structures                                    * Szentagothai, Budapest
2. Functions of Neural Structures for Visuomotor
   Coordination                                         * Arbib, Los Angeles
3. Correlations in Neural Activity                      * Abeles, Jerusalem
4. Single Neuron Dynamics: deterministic models         * Rinzel, Bethesda
5. Single Neuron Dynamics: stochastic models            * Ricciardi, Naples
6. Oscillations in Neural Systems                       * Ermentrout, Pittsburgh
7. Noise and Chaos in Neural Systems                    * Erdi, Budapest

Second week (Oct 5 - Oct 9)

TOPICS                                                  LECTURERS
8.  Mass action in Neural Systems                       * Freeman, Berkeley
9.  Statistical Neurodynamics: kinetic approach         * Ventriglia, Naples
10. Statistical Neurodynamics: sigmoidal approach       * Cowan, Chicago
11. Attractor Neural Networks in Cortical Conditions    * Amit, Roma
12. "Real" Neural Network Models                        * Traub, Yorktown Heights
13. Pattern Recognition in Neural Networks              * Fukushima, Osaka
14. Learning in Neural Networks                         * Tesauro, Yorktown Heights

About six lectures (each one hour long) will be given on each day (for 5+5
days). Each lecturer will give four lectures.

On Saturday, October 3, there will be the following events:

18.00-19.00  Seminar
             Neural Networks: looking backward and forward
             E.R. Caianiello - Physics Dept. - University of Salerno, Italy
19.00-20.00  Round Table
             The explicative value of Neural Modeling
             Chairman: E.R. Caianiello
21.00 -->    Informal Dinner

LOCATION
The International School will be held in Capri, Italy. Lectures will be
scheduled in "La Certosa" of Capri.
WHO SHOULD ATTEND
Applicants for the International School should be actively engaged in the
fields of biological cybernetics, biomathematics or computer science, and
have a good background in mathematics. As the number of participants must
be limited to 70, preference may be given to students who are specializing
in neural modelling and neural networks and to professionals who are
seeking new materials for biomathematics or computer science courses.

PROCEDURE FOR APPLICATION
Applicants should provide a letter of introduction from their department,
institute or company and complete the application form below. The
documents should be mailed together, as soon as possible (also by E-Mail),
to

Dr. F. Ventriglia
Registration Capri International School
Istituto di Cibernetica
Via Toiano 6
80072 - Arco Felice (NA)
Italy

Tel. (39-) 81-8534 138
E-Mail LC4A at ICINECA (bitnet)
Fax (39-) 81-5267 654
Tx 710483

The deadline for application is JUNE 15, 1992.

SCHOOL FEES
The school fee is Italian Lire 500.000 (about 500 $) and includes notes,
lunch and coffee-breaks for the duration of the School. A limited number
of grants (covering the registration fee of Lit. 500.000) is available.
The organizer has applied to the Society for Mathematical Biology for
travel funds for participants who are members of the SMB. If you want to
apply for such a grant, you should submit a request to this effect
together with your application form, and your letter of introduction
should confirm that these funds cannot be provided by your department or
another source in your country. Preference will be given to students,
postdoctoral fellows and young faculty (1-2 years after PhD). Participants
will receive information about how the fee can be paid in their letter of
acceptance.

LODGING
As the Institute of Cybernetics has no lodging facilities of its own,
participants will have to stay in hotels in Capri. No grants are available
to cover living expenses. While you are free to arrange for a hotel
through your own travel agency, it is recommended that participants use
the lodging facility reserved by the International School. A block of
rooms in Capri hotels has been reserved for participants in the school,
some single and others (the greater part) double (for two room-mates).

Hotels in Capri     Single Room   Double Room   Double Room   Half Pension
                                                as Single
Hotel Syrene           95.000       190.000       175.000     130.000 in d.
                                                              145.000 in s.
Hotel Floridiana       90.000       185.000       165.000     125.000 in d.
                                                              140.000 in s.
Villa Krupp            85.000       150.000       100.000
La Minerva             60.000       130.000        90.000
La Florida             50.000       100.000        80.000

The prices (in Italian Lire) are per person/day and include bed and
breakfast; in the double rooms each of the room-mates pays half the room
price.

<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>

REGISTRATION FORM
International School on Neural Modeling and Neural Networks
Capri, September 28-October 9, 1992

Name              ----------------------------------------------
First Name        -----------------------------------------
Position or Title ----------------------------------
University/       --------------------------------------
Company
Address           --------------------------------------------
                  --------------------------------------------
                  --------------------------------------------
                  --------------------------------------------
Tel.              ----------------------  Fax  ---------------------
E-mail            ---------------------------------------------

From eldrache at informatik.tu-muenchen.de Fri Jun 5 10:48:32 1992
From: eldrache at informatik.tu-muenchen.de (Martin Eldracher)
Date: Fri, 5 Jun 92 16:48:32 +0200
Subject: Article ready in neuroprose.
Message-ID: <92Jun5.164846met_dst.23552@sunbrauer12.informatik.tu-muenchen.de>

Dear Mr. Wan, Mr. Pollack just informed me that he has placed my article in
the neuroprose archive. Here follows the announcement of the paper; thanks
a lot for your efforts.
Yours sincerely, Martin Eldracher

-------------------- announcement below this line ---------------------------

I placed an article in the Neuroprose archive:

The Article:
------------
"Classification of Non-Linear-Separable Real-World-Problems Using
Delta-Rule, Perceptrons, and Topologically Distributed Encoding"
can be obtained from neuroprose with file name eldracher.tde.ps.Z .
There are no extra hardcopies available. The article was originally
published in the ``Proceedings of the 1992 ACM/SIGAPP Symposium on Applied
Computing'' Volume II, pp. 1098-1104, ACM Press, ISBN: 0-89791-502-X

Abstract:
---------
We describe how to solve linearly non-separable problems using simple
feed-forward perceptrons without hidden layers, and a biologically
motivated, topologically distributed encoding for input data. We point out
why neural networks have advantages compared to classic mathematical
algorithms without losing performance. The Iris dataset from Fisher is
analyzed as a practical example.

In order to get the file do:

prompt> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive FTP server (Version 6.14 Thu Apr 23 14:41:38 EDT 1992) ready.
Name (archive.cis.ohio-state.edu:yourlogin): anonymous
331 Guest login ok, send e-mail address as password.
Password: yourmailaddress
230 Guest login ok, access restrictions apply.
ftp> cd pub/neuroprose
ftp> binary
ftp> get eldracher.tde.ps.Z
150 Opening BINARY mode data connection for eldracher.tde.ps.Z (151908 bytes).
226 Transfer complete.
ftp> bye
prompt> uncompress eldracher.tde.ps.Z
prompt> lpr eldracher.tde.ps      (or whatever you use for printing a file)

Martin Eldracher
-------------------------------------------------------------------------------
Martin Eldracher                          Tel: ++49-89-2105-2406
Technische Universitaet Muenchen          FAX: ++49-89-2105-8207
Institut fuer Informatik, H2
Lehrstuhl Prof. Dr. W. Brauer
Arcisstr. 21, 8000 Muenchen 2, Germany
e-mail: eldrache at informatik.tu-muenchen.de
-------------------------------------------------------------------------------

From tgd at ICSI.Berkeley.EDU Fri Jun 5 13:42:05 1992
From: tgd at ICSI.Berkeley.EDU (Tom Dietterich)
Date: Fri, 5 Jun 92 10:42:05 PDT
Subject: Belew on Paradigmatic Over-Fitting
Message-ID: <9206051742.AA03888@icsib22.ICSI.Berkeley.EDU>

Rik Belew recently posted the following message to the GA Digest. With his
permission, I am reposting it to connectionists. If you substitute XOR,
encoder-decoder tasks, and 2-spirals for De Jong's F1-F5, I think the same
message applies to a fair amount of connectionist research too.

--Tom

======================================================================

From atul at nynexst.com Fri Jun 5 18:30:49 1992
From: atul at nynexst.com (Atul Chhabra)
Date: Fri, 5 Jun 92 18:30:49 EDT
Subject: references on telecommunications applications of neural nets and/or machine vision
Message-ID: <9206052230.AA09090@texas.nynexst.com>

I am looking for recent papers/technical reports etc.
on telecommunications applications of neural networks. I have the
following papers. I would appreciate receiving any additional references.
Please respond by email. I will post a summary of responses to the net. I
am also looking for references on applications of machine vision in
telecommunications.

1. A. Hiramatsu, "ATM communications network control by neural network,"
   IJCNN 89, Washington D.C., I/259-266, 1989.

2. J.E. Jensen, M.A. Eshara, and S.C. Barash, "Neural network controller
   for adaptive routing in survivable communications networks," IJCNN 90,
   San Diego, CA, II/29-36, 1990.

3. T. Matsumoto, M. Koga, K. Noguchi, and S. Aizawa, "Proposal for neural
   network applications to fiber optic transmission," IJCNN 90, San Diego,
   CA, I/75-80, July 1990.

4. T.X. Brown, "Neural network for switching," IEEE Communications,
   vol. 27, no. 11, 72-80, 1989.

5. T.P. Troudet and S.M. Walters, "Neural network architecture for
   crossbar switch control," IEEE Transactions on Circuits and Systems,
   vol. 38, 42-56, 1991.

6. S. Chen, G.J. Gibson and C.F.N. Cowan, "Adaptive channel equalization
   using a polynomial-perceptron structure," IEE Proceedings, vol. 137,
   257-264, 1990.

7. R.M. Goodman, J. Miller and H. Latin, "NETREX: A real time network
   management expert system," IEEE Globecom Workshop on the Application of
   Emerging Technologies in Network Operation and Management, FL, December
   1988.

8. K.N. Sivarajan, "Spectrum Efficient Frequency Assignment for Cellular
   Radio," Caltech EE Doctoral Dissertation, June 1990.

9. M.D. Alston and P.M. Chau, "A decoder for block-coded forward error
   correcting systems," IJCNN 90, Washington D.C., II/302-305, January
   1990.

Thanks.

=====================================================================
Atul K. Chhabra                     Phone: (914)644-2786
Member of Technical Staff           Fax:   (914)644-2211
NYNEX Science & Technology          Internet: atul at nynexst.com
500 Westchester Avenue
White Plains, NY 10604
=====================================================================

From harnad at Princeton.EDU Tue Jun 9 21:28:24 1992
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Tue, 9 Jun 92 21:28:24 EDT
Subject: Connectionism & Reasoning: BBS Call for Commentators
Message-ID: <9206100128.AA19528@clarity.Princeton.EDU>

Below is the abstract of a forthcoming target article on connectionism and
reasoning by Shastri & Ajjanagadde. It has been accepted for publication in
Behavioral and Brain Sciences (BBS), an international, interdisciplinary
journal that provides Open Peer Commentary on important and controversial
current research in the biobehavioral and cognitive sciences.

Commentators must be current BBS Associates or nominated by a current BBS
Associate. To be considered as a commentator on this article, to suggest
other appropriate commentators, or for information about how to become a
BBS Associate, please send email to:

harnad at clarity.princeton.edu or harnad at pucc.bitnet

or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542
[tel: 609-921-7771]

To help us put together a balanced list of commentators, please give some
indication of the aspects of the topic on which you would bring your areas
of expertise to bear if you were selected as a commentator. An electronic
draft of the full text is available for inspection by anonymous ftp
according to the instructions that follow after the abstract.
____________________________________________________________________

FROM SIMPLE ASSOCIATIONS TO SYSTEMATIC REASONING: A Connectionist
representation of rules, variables, and dynamic bindings using temporal
synchrony

Lokendra Shastri
Computer and Information Science Department
University of Pennsylvania
Philadelphia, PA 19104
shastri at central.cis.upenn.edu

Venkat Ajjanagadde
Wilhelm-Schickard-Institut
University of Tuebingen
Sand 14
W-7400 Tuebingen, Germany
nnsaj01 at mailserv.zdv.uni-tuebingen.de

KEYWORDS: knowledge representation; reasoning; connectionism; dynamic
bindings; temporal synchrony; neural oscillations; short-term memory;
long-term memory; working memory; systematicity.

ABSTRACT: Human agents draw a variety of inferences effortlessly,
spontaneously, and with remarkable efficiency --- as though these
inferences were a reflex response of their cognitive apparatus.
Furthermore, these inferences are drawn with reference to a large body of
background knowledge. This remarkable human ability is hard to explain
given findings on the complexity of reasoning reported by researchers in
artificial intelligence. It also poses a challenge for cognitive science
and computational neuroscience: How can a system of simple and slow
neuron-like elements represent a large body of systematic knowledge and
perform a range of inferences with such speed? We describe a computational
model that takes a step toward addressing the cognitive science challenge
and resolving the artificial intelligence puzzle. We show how a
connectionist network can encode millions of facts and rules involving
n-ary predicates and variables and can perform a class of inferences in a
few hundred milliseconds. Efficient reasoning requires the rapid
representation and propagation of dynamic bindings. Our model achieves
this by representing (1) dynamic bindings as the synchronous firing of
appropriate nodes, (2) rules as interconnection patterns that direct the
propagation of rhythmic activity, and (3) long-term facts as temporal
pattern-matching sub-networks. The model is consistent with recent
neurophysiological findings which suggest that synchronous activity occurs
in the brain and may play a representational role in neural information
processing. The model also makes specific, psychologically significant
predictions about the nature of reflexive reasoning. It identifies
constraints on the form of rules that may participate in such reasoning
and relates the capacity of the working memory underlying reflexive
reasoning to biological parameters such as the frequency at which nodes
can sustain oscillations and the coarseness of synchronization.

--------------------------------------------------------------

To help you decide whether you would be an appropriate commentator for this
article, an electronic draft is retrievable by anonymous ftp from
princeton.edu according to the instructions below (the filename is
bbs.shastri). Please do not prepare a commentary on this draft. Just let us
know, after having inspected it, what relevant expertise you feel you would
bring to bear on what aspect of the article.
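For readers trying to picture the synchrony mechanism in the abstract, here
is a deliberately tiny Python toy (ours, not the authors' model, and the
entities and rule are invented): each entity owns a phase slot within an
oscillation cycle, a role is bound to an entity by firing the role's node
in that entity's slot, and a rule propagates activity without disturbing
phase, so the bindings travel with the phase tags.

# Toy illustration of binding-by-synchrony: each entity owns a phase slot;
# a role node bound to an entity fires in that entity's slot, so roles
# bound to the same entity fire in synchrony.
phases = {"John": 0, "Mary": 1, "book": 2}      # hypothetical entities

# Rule: give(giver, recipient, object) -> own(owner, object). The rule is
# an interconnection pattern that copies activity across role nodes
# without touching phase, so variable bindings travel with the phase tags.
give = {"giver": "John", "recipient": "Mary", "object": "book"}
own = {"owner": give["recipient"], "object": give["object"]}

def firing_pattern(bindings, n_slots=3):
    """Which role nodes fire in which phase slot of one oscillation cycle."""
    return {slot: sorted(role for role, ent in bindings.items()
                         if phases[ent] == slot)
            for slot in range(n_slots)}

print(firing_pattern(give))  # {0: ['giver'], 1: ['recipient'], 2: ['object']}
print(firing_pattern(own))   # {0: [], 1: ['owner'], 2: ['object']}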
-------------------------------------------------------------

To retrieve a file by ftp from a Unix/Internet site, type either:
ftp princeton.edu
or
ftp 128.112.128.1

When you are asked for your login, type:
anonymous

Enter password as per instructions (make sure to include the specified @),
and then change directories with:
cd pub/harnad

To show the available files, type:
ls

Next, retrieve the file you want with (for example):
get bbs.shastri

When you have the file(s) you want, type:
quit

Certain non-Unix/Internet sites have a facility you can use that is
equivalent to the above. Sometimes the procedure for connecting to
princeton.edu will be a two step process such as:
ftp
followed at the prompt by:
open princeton.edu
or
open 128.112.128.1

In case of doubt or difficulty, consult your system manager.

----------

JANET users who do not have the facility for interactive file transfer
mentioned above have two options for getting BBS files. The first, which
is simpler but may be subject to traffic delays, uses the file transfer
utility at JANET node UK.AC.FT-RELAY. Use standard file transfer, setting
the site to be UK.AC.FT-RELAY, the userid as anonymous at edu.princeton,
for the password your-own-userid at your-site [the "@" is crucial], and
for the remote filename the filename according to Unix conventions (i.e.
something like pub/harnad/bbs.authorname). Lower case should be used where
indicated, with quotes if necessary to avoid automatic translation into
upper case. Setting the remote filename to be (D)pub/harnad instead of the
one indicated above will provide you with a directory listing.

The alternative, faster but more complicated procedure is to log on to
JANET site UK.AC.NSF.SUN (with userid and password both given as
guestftp), and then transfer the file interactively to a directory on that
site (named by you when you log on). The method for transfer is as
described above under 'Certain non-Unix/Internet sites', or you can make
use of the on-line help that is available. Transfer of the file received
to your own site is best done from your own site; the remote file (on the
UK.AC.NSF.SUN machine) should be named as directory-name/filename (the
directory name to use being that provided by you when you logged on to
UK.AC.NSF.SUN). To be sociable (since NSF.SUN is short of disc space),
once you have received the file on your own machine you should go back to
UK.AC.NSF.SUN and delete it from your directory there.

[Thanks to Brian Josephson for the above detailed UK/JANET instructions;
similar special instructions for file retrieval from other networks or
countries would be appreciated and will be included in updates of these
instructions.]

---

Where the above procedures are not available (e.g. from Bitnet or other
networks), there are two fileservers -- ftpmail at decwrl.dec.com and
bitftp at pucc.bitnet -- that will do the transfer for you. Send either one
the one-line message:

help

for instructions (which will be similar to the above, but will be in the
form of a series of lines in an email message that ftpmail or bitftp will
then execute for you).

-------------------------------------------------------------

From wilson at smith.rowland.org Mon Jun 8 14:02:59 1992
From: wilson at smith.rowland.org (Stewart Wilson)
Date: Mon, 08 Jun 92 14:02:59 EDT
Subject: SAB92 Reminder Notice
Message-ID: <9206081802.AA03028@smith.rowland.org>

Dear Connectionists-List Moderator:

Would you kindly broadcast the enclosed reminder of the SAB92 conference?
Thank you.
Stewart Wilson

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
*                           R E M I N D E R                              *
*                                                                        *
*                      From Animals to Animats:                          *
*    2nd International Conference on Simulation of Adaptive Behavior     *
*                      ----------------------                            *
*               Honolulu, Hawaii, December 7-11, 1992                    *
*                                                                        *
*  OBJECT: To bring together researchers in ethology, psychology,        *
*  ecology, cybernetics, artificial intelligence, robotics, and related  *
*  fields to further understanding of the behaviors and underlying       *
*  mechanisms that allow animals and, potentially, robots to adapt and   *
*  survive in uncertain environments.                                    *
*                                                                        *
*  DEADLINE -- Submissions must be received by the organizers            *
*  by JULY 15, 1992                                                      *
*                                                                        *
*  To receive the full Conference Announcement and Call for Papers,      *
*  please contact one of the Organizers:                                 *
*                                                                        *
*  Jean-Arcady Meyer     meyer at wotan.ens.fr                            *
*                        meyer at frulm63.bitnet                          *
*  Herbert Roitblat      roitblat at uhunix.uhcc.hawaii.edu               *
*                        roitblat at uhunix.bitnet                        *
*  Stewart Wilson        wilson at smith.rowland.org                      *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

From atul at nynexst.com Wed Jun 10 14:32:24 1992
From: atul at nynexst.com (Atul Chhabra)
Date: Wed, 10 Jun 92 14:32:24 EDT
Subject: references on telecommunications applications of neural nets and/or machine vision
Message-ID: <9206101832.AA12787@texas.nynexst.com>

I received several responses to my original request for references. Thanks
to those who responded. All of the responses dealt with applications of
neural networks to telecommunication technology. I am also interested in
neural net applications related to telecommunication operations. Examples
of such applications are network monitoring, analysis of data for alarm
reporting, planning and forecasting, and handwritten character recognition
for automatic remittance processing. Please email me references to such
applications in the context of the telecommunications industry. If there
is enough interest, I will summarize to the net. Thanks.

--Atul

=====================================================================
Atul K. Chhabra                     Phone: (914)644-2786
Member of Technical Staff           Fax:   (914)644-2211
NYNEX Science & Technology          Internet: atul at nynexst.com
500 Westchester Avenue
White Plains, NY 10604
=====================================================================

From STAY8026 at bureau.ucc.ie Tue Jun 9 05:14:00 1992
From: STAY8026 at bureau.ucc.ie (STAY8026@bureau.ucc.ie)
Date: Tue, 9 Jun 1992 09:14 GMT
Subject: Research Fellowship
Message-ID: <01GL07CQO11S0008EU@IRUCCVAX.UCC.IE>

THE FOLLOWING IS LIKELY TO BE OF INTEREST TO EUROPEAN RESEARCHERS

Dermot Barnes and I intend to make an application to the EC Human Capital
and Mobility scheme for funding toward a postdoctoral research fellowship
to work on a project on connectionist simulations of recent developments
in human learning which have implications for our understanding of
inference. We would be interested in hearing from any post-doc
connectionist who might like to put in an application for such a
fellowship in tandem with ours. Our reading of the `Euro-blurb' is that
the closing date for joint applications is 29 June, which is very close,
but that individuals wishing to apply for a fellowship can do so
continuously throughout 1992-94, with periodic selection every four
months. For more information contact us by e-mail or at the address below.
Dermot will be at the Quantitative Aspects of Behavior Meeting in Harvard
later this week and I will be at the Belfast Neural Nets Meeting at the
end of the month.

P.J.
Hampson Department of Applied Psychology University College Cork Ireland tel 353-21-276871 (ext 2101) fax 353-21-270439 e-mail stay8026 at iruccvax.ucc.ie  From sontag at control.rutgers.edu Thu Jun 11 11:35:34 1992 From: sontag at control.rutgers.edu (sontag@control.rutgers.edu) Date: Thu, 11 Jun 92 11:35:34 EDT Subject: "For neural networks, function determines form" in neuroprose Message-ID: <9206111535.AA01827@control.rutgers.edu> Title: "For neural networks, function determines form" Authors: Francesca Albertini and Eduardo D. Sontag Filename: albertini.ident.ps.Z Abstract: This paper shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: Assume given two nets, whose neurons all have the same nonlinear activation function $\sigma$; if the two nets have equal behaviors as ``black boxes'' then necessarily they must have the same number of neurons and ---except at most for sign reversals at each node--- the same weights. (NOTE: this result is **not** a "learning" theorem. It does not provide by itself an algorithm for loading recurrent nets. It only shows "uniqueness of solution". However, work is in progress to apply the techniques developed in the proof to the learning problem.) To obtain copies of this article: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name : anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get albertini.ident.ps.Z ftp> quit unix> uncompress albertini.ident.ps.Z unix> lpr -Pps albertini.ident.ps (or however you print PostScript) (With many thanks to Jordan Pollack for providing this valuable service!) Please note: the file requires a fair amount of memory to print. If you have problems with FTP, I can e-mail you the postscript file; I cannot provide hardcopy, however.  From bhaskar at theory.cs.psu.edu Thu Jun 11 12:41:56 1992 From: bhaskar at theory.cs.psu.edu (Bhaskar DasGupta) Date: Thu, 11 Jun 1992 12:41:56 -0400 Subject: Result available. Message-ID: <9206111641.AA02754@omega.theory.cs.psu.edu> ************** PLEASE DO NOT FORWARD TO OTHER NEWSGROUPS **************** The following technical report has been placed in the neuroprose archives at Ohio State University. Questions and comments about the result will be very highly appreciated. EFFICIENT APPROXIMATION WITH NEURAL NETWORKS: A COMPARISON OF GATE FUNCTIONS* Bhaskar DasGupta Georg Schnitger Tech. Rep. CS-92-14 Department of Computer Science The Pennsylvania State University University Park PA 16802 email: {bhaskar,georg}@cs.psu.edu ABSTRACT -------- We compare different gate functions in terms of the approximation power of their circuits. Evaluation criteria are circuit size s, circuit depth d and the approximation error e(s,d). We consider two different error models, namely ``extremely tight'' approximations (i.e. e(s,d) = 2^{-s}) and the ``more relaxed'' approximations (i.e. e(s,d) = s^{-d}). Our goal is to determine those gate functions that are equivalent to the standard sigmoid sigma (x) = 1/(1+exp(-x)) under these two error models. For error e(s,d) = 2^{-s}, the class of equivalent gate functions contains, among others, (non-polynomial) rational functions, (non-polynomial) roots and most radial basis functions. Newman's approximation of |x| by rational functions is obtained as a corollary of this equivalence result. Provably not equivalent are polynomials, the sine-function and linear splines. 
For error e(s,d) = s^{-d}, the class of equivalent activation functions grows considerably, containing for instance linear splines, polynomials and the sine-function. This result only holds if the number of input units is counted when determining circuit size. The standard sigmoid is distinguished in that relatively small weights and thresholds suffice.

Finally, we consider the computation of Boolean functions. Now the binary threshold gains considerable power. Nevertheless, we show that the standard sigmoid is capable of computing a certain family of n-bit functions in constant size, whereas circuits composed of binary thresholds require size at least proportional to log n (and O(log n loglog n logloglog n) gates are sufficient). This last result improves the previous best known separation result of Maass, Schnitger and Sontag (FOCS, 1991).

(We wish to thank J. Lambert, W. Maass, R. Paturi, V. P. Roychowdhury, K. Y. Siu and E. Sontag).

-------------------------------------------------------------------
(* This research was partially supported by NSF Grant CCR-9114545)
-------------------------------------------------------------------

************************ How to obtain a copy ************************

a) Via FTP:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: (type your email address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get dasgupta.approx.ps.Z
ftp> quit
unix> uncompress dasgupta.approx.ps.Z
unix> lpr dasgupta.approx.ps (or however you print PostScript)

b) Via postal mail: Request a hardcopy from one of the authors.

***********************************************************************
Bhaskar DasGupta                  | email: bhaskar at omega.cs.psu.edu
Department of Computer Science    | phone: (814)-863-7326
333 Whitmore Laboratory           | fax: (814)-865-3176
The Pennsylvania State University
University Park PA 16802.
***********************************************************************

From lautrup at connect.nbi.dk Fri Jun 12 08:57:30 1992
From: lautrup at connect.nbi.dk (Benny Lautrup)
Date: Fri, 12 Jun 92 13:57:30 +0100
Subject: No subject
Message-ID: 

Begin Message:
-----------------------------------------------------------------------

INTERNATIONAL JOURNAL OF NEURAL SYSTEMS

The International Journal of Neural Systems is a quarterly journal which covers information processing in natural and artificial neural systems. It publishes original contributions on all aspects of this broad subject, which involves physics, biology, psychology, computer science and engineering. Contributions include research papers, reviews and short communications. The journal presents a fresh, undogmatic attitude towards this multidisciplinary field, with the aim of being a forum for novel ideas and improved understanding of collective and cooperative phenomena with computational capabilities.

ISSN: 0129-0657 (IJNS)

----------------------------------

Contents of Volume 3, issue number 1 (1992):

1. E.D. Lumer: Selective Attention to Perceptual Groups: The Phase Tracking Mechanism.
2. A. Namatame & Y. Tsukamoto: Structural Connectionist Learning with Complementary Coding.
3. K.N. Gurney: Training Recurrent Nets of Hardware Realisable Sigma-pi Units.
4. Y. Peng, J.A. Reggia & T. Li: A Connectionist Approach to Vertex Covering Problems.
5. E.P. Fulcher: WIS-ART: Unsupervised Clustering with RAM Discriminators.
6. P. Fedor: Principles of the Design of D-Neuronal Networks I: Net Representations for Computer Simulation of a Melody Compositional Process.
7. P.
Fedor: Principles of the Design of D-Neuronal Networks II: Composing Simple Melodies. 8. D. Saad: Training Recurrent Neural Networks - The Minimal Trajectory Algorithm. ---------------------------------- Editorial board: B. Lautrup (Niels Bohr Institute, Denmark) (Editor-in-charge) S. Brunak (Technical Univ. of Denmark) (Assistant Editor-in-Charge) D. Stork (Stanford) (Book review editor) Associate editors: B. Baird (Berkeley) D. Ballard (University of Rochester) E. Baum (NEC Research Institute) S. Bjornsson (University of Iceland) J. M. Bower (CalTech) S. S. Chen (University of North Carolina) R. Eckmiller (University of Dusseldorf) J. L. Elman (University of California, San Diego) M. V. Feigelman (Landau Institute for Theoretical Physics) F. Fogelman-Soulie (Paris) K. Fukushima (Osaka University) A. Gjedde (Montreal Neurological Institute) S. Grillner (Nobel Institute for Neurophysiology, Stockholm) T. Gulliksen (University of Oslo) D. Hammerstrom (Oregon Graduate Institute) D. Horn (Tel Aviv University) J. Hounsgaard (University of Copenhagen) B. A. Huberman (XEROX PARC) L. B. Ioffe (Landau Institute for Theoretical Physics) P. I. M. Johannesma (Katholieke Univ. Nijmegen) M. Jordan (MIT) G. Josin (Neural Systems Inc.) I. Kanter (Princeton University) J. H. Kaas (Vanderbilt University) A. Lansner (Royal Institute of Technology, Stockholm) A. Lapedes (Los Alamos) B. McWhinney (Carnegie-Mellon University) M. Mezard (Ecole Normale Superieure, Paris) J. Moody (Yale, USA) A. F. Murray (University of Edinburgh) J. P. Nadal (Ecole Normale Superieure, Paris) E. Oja (Lappeenranta University of Technology, Finland) N. Parga (Centro Atomico Bariloche, Argentina) S. Patarnello (IBM ECSEC, Italy) P. Peretto (Centre d'Etudes Nucleaires de Grenoble) C. Peterson (University of Lund) K. Plunkett (University of Aarhus) S. A. Solla (AT&T Bell Labs) M. A. Virasoro (University of Rome) D. J. Wallace (University of Edinburgh) D. Zipser (University of California, San Diego) ---------------------------------- CALL FOR PAPERS Original contributions consistent with the scope of the journal are welcome. Complete instructions as well as sample copies and subscription information are available from The Editorial Secretariat, IJNS World Scientific Publishing Co. Pte. Ltd. 73, Lynton Mead, Totteridge London N20 8DH ENGLAND Telephone: (44)81-446-2461 or World Scientific Publishing Co. Inc. Suite 1B 1060 Main Street River Edge New Jersey 07661 USA Telephone: (1)201-487-9655 or World Scientific Publishing Co. Pte. Ltd. Farrer Road, P. O. Box 128 SINGAPORE 9128 Telephone (65)382-5663 ----------------------------------------------------------------------- End Message  From lautrup at connect.nbi.dk Fri Jun 12 09:00:31 1992 From: lautrup at connect.nbi.dk (Benny Lautrup) Date: Fri, 12 Jun 92 14:00:31 +0100 Subject: No subject Message-ID: ADVANCE PROGRAM IEEE Workshop on Neural Networks for Signal Processing August 31 - September 2, 1992 Copenhagen The Danish Computational Neural Network Center CONNECT and The Electronics Institute, The Technical University of Denmark In cooperation with the IEEE Signal Processing Society Invitation to Participate in the 1992 IEEE Workshop on Neural Networks for Signal Processing. The members of the Workshop Organizing Committee welcome you to the 1992 IEEE Workshop on Neural Networks for Signal Processing. The 1992 Workshop is the second workshop held in this area. The first took place in 1991 in Princeton, NJ, USA. 
The Workshop is organized by the IEEE Technical Committee for Neural Networks and Signal Processing. The purpose of the Workshop is to foster informal technical interaction on topics related to the application of neural networks to signal processing problems.

Workshop Location

The 1992 Workshop will be held at Hotel Marienlyst, Ndr. Strandvej 2, DK-3000 Helsingoer, Denmark, tel: +45 49202020, fax: +45 49262626. Helsingoer is a small town a little north of Copenhagen. The Workshop banquet will be held on Tuesday evening, September 1, at Kronborg Castle, which is situated close to the workshop hotel.

Workshop Proceedings

The proceedings of the Workshop, entitled "Neural Networks for Signal Processing - Proceedings of the 1992 IEEE Workshop", will be distributed at the Workshop. The registration fee covers one copy of the proceedings.

Registration Information

The Workshop registration information is given at the end of this program. It is possible to apply for a limited number of partial travel and registration grants via the Program Chair. The address is given at the end of this program.

Program Overview

Time     | Monday 31/8-92      | Tuesday 1/9-92        | Wednesday 2/9-92
==========================================================================
8:15 AM  | Opening Remarks     |                       |
--------------------------------------------------------------------------
8:30 AM  | Opening Keynote     | Keynote Address       | Keynote Address
         | Address             |                       |
--------------------------------------------------------------------------
9:30 AM  | Learning & Models   | Speech 2              | Nonlinear Filtering
         | (Lecture)           | (Lecture)             | by Neural Networks
         |                     |                       | (Lecture)
--------------------------------------------------------------------------
11:00 AM | Break               | Break                 | Break
--------------------------------------------------------------------------
11:30 AM | Speech 1            | Learning, System-     | Image Processing and
         | (Poster preview)    | identification and    | Pattern Recognition
         |                     | Spectral Estimation   | (Poster preview)
         |                     | (Poster preview)      |
--------------------------------------------------------------------------
12:30 PM | Lunch               | Lunch                 | Lunch
--------------------------------------------------------------------------
1:30 PM  | Speech 1            | Learning, System-     | Image Processing and
         | (Poster)            | identification and    | Pattern Recognition
         |                     | Spectral Estimation   | (Poster)
         |                     | (Poster)              |
--------------------------------------------------------------------------
2:45 PM  | Break               | Break                 | Break
--------------------------------------------------------------------------
3:15 PM  | System              | Image Processing and  | Application Driven
         | Implementations     | Analysis              | Neural Models
         | (Lecture)           | (Lecture)             | (Lecture)
--------------------------------------------------------------------------
Evening  | Panel Discussion    | Visit and Banquet at  |
         | (8 PM)              | Kronborg Castle       |
==========================================================================

Evening Events

A Pre-Workshop reception will be held at Hotel Marienlyst at 7:00 PM on Sunday, August 30, 1992. Tuesday, September 1, 1992: 5:00 PM visit to Kronborg Castle and 7:00 PM banquet at the Castle.

TECHNICAL PROGRAM

Monday, August 31, 1992

8:15 AM; Opening Remarks: S.Y. Kung, F. Fallside, Workshop Chairs, Benny Lautrup, connect, Denmark, John Aa. Sorensen, Workshop Program Chair.

8:30 AM; Opening Keynote: System Identification Perspective of Neural Networks
Professor Lennart Ljung, Department of Electrical Engineering, Linkoping University, Sweden.
9:30 AM; Learning & Models (Lecture Session)
Chair: Jenq-Neng Hwang, Department of Electrical Engineering, University of Washington, Seattle, WA, USA.

1. "Towards Faster Stochastic Gradient Search", Christian Darken, John Moody, Yale University, New Haven, CT, USA.
2. "Inserting Rules into Recurrent Neural Networks", C.L. Giles, NEC Research Inst., Princeton, C.W. Omlin, Rensselaer Polytechnic Institute, Troy, NY, USA.
3. "On the Complexity of Neural Networks with Sigmoidal Units", Kai-Yeung Siu, University of California, Irvine, CA, Vwani Roychowdhury, Purdue University, West Lafayette, IN, Thomas Kailath, Stanford University, Stanford, CA, USA.
4. "A Generalization Error Estimate for Nonlinear Systems", Jan Larsen, Technical University of Denmark, Lyngby, Denmark.

11:00 AM; Coffee break

11:30 AM; Speech 1 (Oral previews of the afternoon poster session)
Chair: Paul Dalsgaard, Institute of Electronic Systems, Aalborg University, Denmark.

1. "Interactive Query Learning for Isolated Speech Recognition", Jenq-Neng Hwang, Hang Li, University of Washington, Seattle, WA, USA.
2. "Adaptive Template Method for Speech Recognition", Yadong Liu, Yee-Chun Lee, Hsing-Hen Chen, Guo-Zheng Sun, University of Maryland, College Park, MD, USA.
3. "Fuzzy Partition Models and Their Effect in Continuous Speech Recognition", Yoshinaga Kato, Masahide Sugiyama, ATR Interpreting Telephony Research Laboratories, Kyoto, Japan.
4. "Empirical Risk Optimization: Neural Networks and Dynamic Programming", Xavier Driancourt, Patrick Gallinari, Universite' de Paris Sud, Orsay, France.
5. "Text-Independent Talker Identification System Combining Connectionist and Conventional Models", Thierry Artieres, Younes Bennani, Patrick Gallinari, Universite' de Paris Sud, Orsay, France.
6. "A Two Layer Kohonen Neural Network using a Cochlear Model as a Front-End Processor for a Speech Recognition System", S. Lennon, E. Ambikairajah, Regional Technical College, Athlone, Ireland.
7. "Self-Structuring Hidden Control Neural Models", Helge B.D. Sorensen, Uwe Hartmann, Institute of Electronic Systems, Aalborg University, Aalborg, Denmark.
8. "Connectionist-Based Acoustic Word Models", Chuck Wooters, Nelson Morgan, International Computer Science Institute, Berkeley, CA, USA.
9. "Maximum Mutual Information Training of a Neural Predictive-Based HMM Speech Recognition System", K. Hassanein, L. Deng, M. Elmasry, University of Waterloo, Waterloo, Ontario, Canada.
10. "Training Continuous Density Hidden Markov Models in Association with Self-Organizing Maps and LVQ", Mikko Kurimo, Kari Torkkola, Helsinki University of Technology, Finland.
11. "Unsupervised Sequence Classification", Jorg Kindermann, GMD-FIT.KI, Schloss Birlinghoven, Germany, Christoph Windheuser, Carnegie Mellon University, Pittsburgh, PA, USA.
12. "A Mathematical Model for Speech Processing", Anna Esposito, Universita di Salerno, Salvatore Rampone, I.R.S.I.P-C.N.R, Napoli, Cesare Stanzione, International Institute for Advanced Scientific Studies, Salerno, Roberto Tagliaferri, Universita di Salerno, Italy.
13. "A New Voice and Pitch Estimator based on the Neocognitron", J.R.E. Moxham, P.A. Jones, H. McDermott, G.M. Clark, Australian Bionic Ear and Hearing Research Institute, East Melbourne 3002, Victoria, Australia.

12:30 PM; Lunch

1:30 PM; Speech 1 (Poster Session)

2:45 PM; Break

3:15 PM; System Implementations (Lecture session)
Chair: Yu Hen Hu, Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, WI, USA.

1.
"CCD's for Pattern Recognition", Alice M. Chiang, Lincoln Laboratory, Massachusetts Institute of Technology, MA, USA. 2. "An Electronic Parallel Neural CAM for Decoding", Joshua Alspector, Bellcore, Morristown, NJ, Anthony Jayakumar Bon Ngo, Cornell, Ithaca, NY, USA. 3. "Netmap-Software Tool for Mapping Neural Networks onto Parallel Computers", K. Wojtek Przytula, Huges Research Labs, Malibu, CA, Viktor Prasanna, University of California, Wei-Ming Lin, Mississippi State University, USA. 4. "A Fast Simulator for Neural Networks on DSPs of FPGAs", M. Ade, R. Lauwereins, J. Peperstraete, ESAT-Elektrotechniek, Heverlee, Belgium. Tuesday, September 1, 1992 8:30 AM; Keynote Address: "Capacity Control in Classifiers for Pattern Recognition" Dr. A. Sara Solla, AT\&T Bell Laboratories, Holmdel, NJ, USA. 9:30 AM; Speech 2, (Lecture Session) Chair: S. Katagiri, ATR Auditory and Visual Perception Research Laboratories, Kyoto, Japan. 1. "Speech Representations for Recognition by Neural Networks", B.H. Juang, AT&T Bell Laboratories, Murray Hill, NJ, USA. 2. "Classification with a Codebook-Excited Neural Network" Lizhong Wu, Frank Fallside, Cambridge University, UK. 3. "Minimal Classification Error Optimization for a Speaker Mapping Neural Network", Masahide Sugiyama, Kentaro Kurinami, ATR Interpreting Telephony Research Laboratories, Kyoto, Japan. 4. "On the Identification of Phonemes Using Acoustic-Phonetic Features Derived by a Self-Organising Neural Network", Paul Dalsgaard, Ove Andersen, Rene Jorgensen, Institute of Electronic Systems, Aalborg University, Denmark. 11:00 AM; Coffe break 11:30 AM; Learning, System Identification and Spectral Estimation. (Oral previews of afternoon poster session) Chair: Lars Kai Hansen, connect, Electronics Institute, Technical University of Denmark, Lyngby, Denmark. 1. "Prediction of Chaotic Time Series Using Recurrent Neural Networks", Jyh-Ming Kuo, Jose C. Principe, Bert deVries, University of Florida, FL, USA. 2. "Nonlinear System Identification using Multilayer Perceptrons with Locally Recurrent Synaptic Structure", Andrew Back, Ah Chung Tsoi, University of Queensland, Australia. 3. "Chaotic Signal Emulation using a Recurrent Time Delay Neural Network", Michael R. Davenport, Department of Physics, U.B.C., Shawn P. Day, Department of Electrical Engineering, U.B.C., Vancouver, Canada. 4. "Prediction with Recurrent Networks", Niels Holger Wulff, Niels Bohr Institute, John A. Hertz, Nordita, Copenhagen, Denmark. 5. "Learning of Sinusoidal Frequencies by Nonlinear Constrained Hebbian Algorithms", Juha Karhunen, Jyrki Joutsensalo, Helsinki University of Technology, Finland. 6. "A Neural Feed-Forward Network with a Polynomial Nonlinearity", Nils Hoffmann, Technical University of Denmark, Lyngby, Denmark. 7. "Application of Frequency-Domain Neural Networks to the Active Control of Harmonic Vibrations in Nonlinear Structural Systems", T.J. Sutton, S.J. Elliott University of Southampton, England. 8. "Generalization in Cascade-Correlation Networks", Steen Sjoegaard, Aarhus University, Denmark. 9. "Noise Density Estimation Using Neural Networks", M.T. Musavi, D.M. Hummels, A.J. Laffely, S.P. Kennedy, University of Maine, Maine, USA. 10. "An Efficient Model for Systems with Complex Responses", Volker Tresp, Ira Leuthausser, Ralph Neuneier, Martin Schlang, Siemens AG, Munchen, Klaus Abraham-Fuchs, Wolfgang Harer, Siemens, Erlangen, Germany. 11. "Generalized Feedforward Filters with Complex Poles", T. Oliveira e Silva, P. 
Guedes de Oliveira, Universidade de Aveiro, Aveiro, Portugal, J.C. Principe, University of Florida, Gainesville, FL, B. De Vries, David Sarnoff Research Center, Princeton, NJ, USA.
12. "A Simple Genetic Algorithm Applied to Discontinuous Regularization", John Bach Jensen, Mads Nielsen, DIKU, University of Copenhagen, Denmark.

12:30 PM; Lunch

1:30 PM; Learning, System Identification and Spectral Estimation (Poster Session)

2:45 PM; Break

3:15 PM; Image Processing and Analysis (Lecture session)
Chair: K. Wojtek Przytula, Hughes Research Labs, Malibu, CA, USA.

1. "Decision-Based Hierarchical Perceptron (HiPer) Networks with Signal/Image Classification Applications", S.Y. Kung, J.S. Taur, Princeton University, NJ, USA.
2. "Lateral Inhibition Neural Networks for Classification of Simulated Radar Imagery", Charles M. Bachmann, Scott A. Musman, Abraham Schultz, Naval Research Laboratory, Washington, USA.
3. "Globally Trained Neural Network Architecture for Image Compression", L. Schweizer, G. Parladori, Alcatel Italia, Milano, G.L. Sicuranza, Universita' di Trieste, Italy.
4. "Robust Identification of Human-Faces Using Mosaic Pattern and BPN", Makoto Kosugi, NTT Human Interface Laboratories, Take Yokosukashi Kanagawaken, Japan.

5:00 PM; Departure to Kronborg Castle
7:00 PM; Banquet at Kronborg Castle

Wednesday, September 2, 1992

8:30 AM; Keynote Address: "Application Perspectives of the DARPA Artificial Neural Network Technology Program"
Dr. Barbara Yoon, DARPA/MTO, Arlington, VA, USA.

9:30 AM; Nonlinear Filtering by Neural Networks (Lecture Session)
Chair: Gary M. Kuhn, CCRP-IDA, Princeton, NJ, USA.

1. "Neural Networks and Nonparametric Regression", Vladimir Cherkassky, University of Minnesota, Minneapolis, USA.
2. "A Partial Analysis of Stochastic Convergence for a Generalized Two-Layer Perceptron using Backpropagation", Jeffrey L. Vaughn, Neil J. Bershad, University of California, Irvine, John J. Shynk, University of California, Santa Barbara, CA, USA.
3. "A Recurrent Neural Network for Nonlinear Time Series Prediction - A Comparative Study", S.S. Rao, S. Sethuraman, V. Ramamurti, Villanova University, Villanova, PA, USA.
4. "Dispersive Networks for Nonlinear Adaptive Filtering", Shawn P. Day, Michael R. Davenport, University of British Columbia, Vancouver, Canada.

11:00 AM; Coffee break

11:30 AM; Image Processing and Pattern Recognition (Oral previews of afternoon poster sessions)
Chair: John C. Pearson, David Sarnoff Research Center, Princeton, NJ, USA.

1. "Unsupervised Multi-Level Segmentation of Multispectral Images", R.A. Fernandes, Institute for Space and Terrestrial Science, Richmond Hill, M.E. Jernigan, University of Waterloo, Waterloo, Ontario, Canada.
2. "Autoassociative Neural Networks for Image Compression: A Massively Parallel Implementation", Andrea Basso, Ecole Polytechnique Federale de Lausanne, Switzerland.
3. "Compression of Subband-Filtered Images via Neural Networks", S. Carrato, S. Marsi, University of Trieste, Trieste, Italy.
4. "An Adaptive Neural Network Model for Distinguishing Line- and Edge Detection from Texture Segregation", M.M. Van Hulle, T. Tollenaere, Katholieke Universiteit Leuven, Leuven, Belgium.
5. "Adaptive Segmentation of Textured Images using Linear Prediction and Neural Networks", Stefanos Kollias, Levon Sukissian, National Technical University of Athens, Athens, Greece.
6. "Neural Networks for Segmentation and Clustering of Biomedical Signals", Martin F.
Schlang, Volker Tresp, Siemens, Munich, Klaus Abraham-Fuchs, Wolfgang Harer, Siemens, Erlangen, Germany.
7. "Some New Results in Nonlinear Predictive Image Coding Using Neural Networks", Haibo Li, Linkoping University, Linkoping, Sweden.
8. "A Neural Network Approach to Multi-Sensor Point-of-Interest Detection", Ajay N. Jain, Alliant Techsystems Inc., Hopkins, MN, USA.
9. "Supervised Learning on Large Redundant Trainingsets", Martin F. Moller, Aarhus University, Aarhus, Denmark.
10. "Neural Network Detection of Small Moving Radar Targets in an Ocean Environment", Jane Cunningham, Simon Haykin, McMaster University, Hamilton, Ontario, Canada.
11. "Discrete Neural Networks and Fingerprint Identification", Steen Sjoegaard, Aarhus University, Aarhus, Denmark.
12. "Image Recognition using a Neural Network", Keng-Chung Ho, Bin-Chang Chieu, National Taiwan Institute of Technology, Taipei, Taiwan, Republic of China.
13. "Adaptive Training of Feedback Neural Networks for Non-Linear Filtering and Control: I - A General Approach.", O. Nerrand, P. Roussel-Ragot, L. Personnaz, G. Dreyfus, Ecole Superieure de Physique et de Chimie Industrielles, Paris, S. Marcos, Ecole Superieure d'Electricite, Gif Sur Yvette, France.

12:30 PM; Lunch

1:30 PM; Image Processing and Pattern Recognition (Poster Session)

2:45 PM; Break

3:15 PM; Application Driven Neural Models (Lecture session)
Chair: Sathyanarayan S. Rao, Department of Electrical Engineering, Villanova University, Villanova, PA, USA.

1. "Artificial Neural Network for ECG Arrhythmia Monitoring", Y.H. Hu, W.J. Tompkins, Q. Xue, University of Wisconsin-Madison, WI, USA.
2. "Constructing Neural Networks for Contact Tracking", Christopher M. DeAngelis, Naval Underwater Warfare Center, Rhode Island, Robert W. Green, University of Massachusetts Dartmouth, North Dartmouth, MA, USA.
3. "Adaptive Decision-Feedback Equalizer Using Forward-Only Counterpropagation Networks for Rayleigh Fading Channels", Ryuji Kaneda, Takeshi Manabe, Satoshi Fujii, ATR Optical and Radio Communications Research Laboratories, Kyoto, Japan.
4. "Ensemble Methods for Handwritten Digit Recognition", Lars Kai Hansen, Technical University of Denmark, Christian Liisberg, Risoe National Laboratory, Denmark, Peter Salamon, San Diego State University, San Diego, CA, USA.

Workshop Committee

General Chairs:
S.Y. Kung, Dept. of Electrical Engineering, Princeton University, Princeton, NJ 08544, USA, email: kung at princeton.edu
F. Fallside, Engineering Department, Cambridge University, Cambridge CB2 1PZ, UK, email: fallside at eng.cam.ac.uk

Program Chair:
John Aa. Sorensen, Electronics Institute, Build 349, Technical University of Denmark, DK-2800 Lyngby, Denmark, email: jaas at dthei.ei.dth.dk

Proceedings Chair:
Candace Kamm, Box 1910, Bellcore, 445 South Street, Morristown, NJ 07960, USA, email: cak at thumper.bellcore.com

Publicity Chair:
Gary M. Kuhn, CCRP - IDA, Thanet Road, Princeton, NJ 08540, USA, email: gmk%idacrd.uucp at princeton.edu

Program Committee:
Ronald de Beer, John Bridle, Erik Bruun, Paul Dalsgaard, Lee Giles, Lars Kai Hansen, Steffen Duus Hansen, John Hertz, Jenq-Neng Hwang, Yu Hen Hu, B.H. Juang, S. Katagiri, T. Kohonen, Gary M. Kuhn, Benny Lautrup, Peter Koefoed Moeller, John E. Moody, Carsten Peterson, Sathyanarayan S. Rao, Peter Salamon, Christian J. Wellekens, Barbara Yoon

----------------------------------------------------------------------------

REGISTRATION FORM: 1992 IEEE Workshop on Neural Networks for Signal Processing. August 31 - September 2, 1992.
Registration fee including single room and meals at Hotel Marienlyst:
Before July 15, 1992, Danish Kr. 5200. After July 15, 1992, Danish Kr. 5350.
Companion fee at Hotel Marienlyst: Danish Kr. 1160.
Registration fee without hotel room:
Before July 15, 1992, Danish Kr. 2800. After July 15, 1992, Danish Kr. 2950.

The registration fee of Danish Kr. 5200 (5350) covers:
. Attendance at all workshop sessions.
. Workshop Proceedings.
. Pre-Workshop reception on Sunday evening, August 30, 1992.
. Hotel single room from August 30 to September 2, 1992 (3 nights).
. 3 breakfasts, 3 lunches, 1 dinner, 1 banquet.
. Coffee breaks and refreshments.
. A Companion fee of an additional Danish Kr. 1160 covers a double room at Hotel Marienlyst and the Pre-Workshop reception, breakfasts and the banquet for 2 persons.

The registration fee without hotel room, Danish Kr. 2800 (2950), covers:
. Attendance at all workshop sessions.
. Workshop Proceedings.
. 3 lunches, 1 dinner, 1 banquet.
. Coffee breaks and refreshments.

Further information on registration: Ms. Anette Moeller-Uhl, The Niels Bohr Institute, tel: +45 3142 1616 ext. 388, fax: +45 3142 1016, email: uhl at connect.nbi.dk

Please complete this form (type or print clearly) and mail with payment (by check, do not include cash) to: NNSP-92, CONNECT, The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen, Denmark.

Name ------------------------------------------------------------------
     Last             First             Middle
Firm or University ----------------------------------------------------
Mailing Address -------------------------------------------------------
-----------------------------------------------------------------------
Country          Phone          Fax

From skrzypek at CS.UCLA.EDU Fri Jun 12 11:57:17 1992
From: skrzypek at CS.UCLA.EDU (Dr. Josef Skrzypek)
Date: Fri, 12 Jun 92 08:57:17 PDT
Subject: call for contributions
Message-ID: 

CALL FOR CONTRIBUTIONS

We are organizing a special edited book, to be published by Kluwer, that is dedicated to the subject of NEURAL NETWORK SIMULATION ENVIRONMENTS. Submissions will be refereed. The plan calls for the book to be published in the winter/spring of 1993. I would like to invite your participation.

DEADLINE FOR SUBMISSION: 25th of September, 1992

VOLUME TITLE: Neural Networks Simulation Environments

EDITOR: Prof. Josef Skrzypek
Department of Computer Science, 3532 BH
UCLA
Los Angeles CA 90024-1596
Email: skrzypek at cs.ucla.edu
Tel: (310) 825 2381
Fax: (310) UCLA CSD

DESCRIPTION

This edited volume is devoted to ``Simulation environments for studying neuronal functions''. The purpose of this special book is to encourage further work and discussion in the area of neural network simulation tools, which have matured over the last decade into advanced neural modeling environments. Computer simulation is currently the best way to study dynamic properties of complex neuronal assemblies that might be mathematically intractable until we learn how to grow our own neurons on the breadboards. Simulation is also a way to avoid building prototypes before the design has been tested and verified. Finally, computer simulation of very large complex systems capable of intelligence or vision is the only reasonable way to organize the ever-increasing flood of knowledge about these phenomena.
In the past decade the development of neural network simulation environments has focused on two areas: 1) realistic (compartmental) models of a single neuron (or a small cluster of neurons) based on information currently available from the neurosciences, and 2) computational models of abstract neurons in support of "artificial" or connectionist models. All these NEURAL NETWORK SIMULATION ENVIRONMENTS HAVE NOT BEEN COMPREHENSIVELY COLLECTED, ORGANIZED, OR COMPARED ANYWHERE. Hence this volume should be a valuable addition to the desktop library of every computational neuroscientist as well as every engineer designing artificial neural systems.

The volume will include both invited and submitted peer-reviewed contributions. We are seeking submissions from researchers in relevant fields, including computational neuroscience, natural and artificial vision, scientific computing, artificial intelligence, psychology, image and signal processing and pattern recognition. We are seeking submissions describing MATURE (useful) WORKING simulation environments dedicated to modeling a complete spectrum of neural phenomena, from membrane biophysics to computational abstractions such as, for example, a three-layer backpropagation network.

The volume will consist of three parts devoted to the major classes of neural simulation methodologies:

NEUROPHYSIOLOGICAL REALISM; simulators supporting neural models that incorporate detailed models of membrane biophysics.

PSYCHOPHYSICAL REALISM; simulators supporting computational models of neurons and networks that can account for reported psychological phenomena.

Connectionist (Symbolic)-based simulators for the neural network models used in industry.

We would like to encourage submissions both from researchers engaged in the analysis of biological systems, such as modeling psychophysical/neurophysiological data using neural networks, and from members of the engineering community who are synthesizing neural network models. The number of papers that can be included in this edited volume will be limited. Therefore, some qualified papers may be encouraged for submission to professional journals.

SUBMISSION PROCEDURE

Submissions should be sent to Josef Skrzypek by September 25, 1992. The suggested length is 20-22 double-spaced pages including figures, references, abstract and so on. Format details, etc. will be supplied on request. Authors are strongly encouraged to discuss ideas for possible submissions with the editor, Tel (310)825-2381 or skrzypek at cs.ucla.edu.

Thank you for your consideration.

From terry at helmholtz.sdsc.edu Thu Jun 18 19:40:05 1992
From: terry at helmholtz.sdsc.edu (Terry Sejnowski)
Date: Thu, 18 Jun 92 16:40:05 PDT
Subject: Grant deadline August 1
Message-ID: <9206182340.AA25116@helmholtz.sdsc.edu>

COGNITIVE NEUROSCIENCE - Individual Grants in Aid

The McDonnell-Pew Program in Cognitive Neuroscience is accepting proposals for support of research and training in cognitive neuroscience. Preference is given to projects that are not currently funded and are interdisciplinary, involving at least two areas among the clinical and basic neurosciences, computer science, psychology, linguistics and philosophy. Research support is limited to $30,000 a year for two years. Postdoctoral grants are limited to three years. Graduate student support is not available. Applications should be postmarked by August 1, 1992 to: Dr.
George Miller
McDonnell-Pew Program in Cognitive Neuroscience
Green Hall, 1-N-6
Princeton University
Princeton, NJ 08544-1010

For more information call (609) 258-5014, FAX (609) 258-3031, or e-mail cns at clarity.princeton.edu

-----

From rsjuds at snll-arpagw.llnl.gov Fri Jun 19 13:03:21 1992
From: rsjuds at snll-arpagw.llnl.gov (judson richard s)
Date: Fri, 19 Jun 92 10:03:21 -0700
Subject: No subject
Message-ID: <9206191703.AA22191@snll-arpagw.llnl.gov>

************************************
*    NOTE DEADLINE CHANGE !!!!     *
*                                  *
*  NEW POSTER DEADLINE: JUNE 12    *
*                                  *
*        SCHEDULE ATTACHED         *
************************************

BIOCOMPUTATION WORKSHOP
Evolution as a computational process
June 22-24, 1992

Doubletree Hotel
2 Portola Plaza
Monterey, Ca 93940

Sponsored by the Institute for Scientific Computing Research at LLNL and the Center for Computational Engineering at SNL

This workshop brings together biologists, physicists and computer scientists with interests in the study of evolution. The premise of the workshop is that natural evolution is a computational process of adaptation to an ever-changing environment. Mathematical theory and computer modeling are therefore ideally suited to the study of evolution and, conversely, evolution may be used as a model system to study the computational processes of optimization and emergent pattern formation. Fifteen invited speakers will provide general reviews and summaries of their recent research. Although oral presentations will be limited to the invited speakers, original research contributions are solicited for poster sessions in the following areas: natural evolution; artificial life; genetic algorithms and optimization.

List of speakers:
----------------
Stuart Kauffman --- University of Pennsylvania, Santa Fe Institute
Alan Templeton --- Washington University, St. Louis
Daniel Hillis --- Thinking Machines Inc.
Richard Hudson --- University of California, Irvine
Steven Frank --- University of California, Irvine
Alan Hastings --- University of California, Davis
Warren Ewens --- Melbourne University and the University of Pennsylvania
Marcus Feldman --- Stanford University
Lee Altenberg --- Duke University
Aviv Bergman --- SRI and Stanford University
Mark Bedau --- Reed College
Heinz Muehlenbein --- University of Bonn
Eric Mjolsness --- Yale
Wolfgang Banzhaf --- Mitsubishi Electric Corp.
Schedule
---------

Sunday - June 21
6:00 pm - 8:00 pm RECEPTION

Monday - June 22
10:00 am - 10:45 am Kauffman: Evolution and co-evolution at the edge of chaos
10:45 am - 11:30 am Hastings: Use of optimization techniques to study multilocus population genetics
11:45 am - 12:30 pm Altenberg: Theory on the evolution and complexity of the genotype-phenotype map
12:30 pm - 2:00 pm LUNCH
2:00 pm - 2:45 pm Templeton: Gene tree overlay algorithms: a powerful methodology for studying evolution
2:45 pm - 3:30 pm Bedau: Measuring evolutionary activity in noisy artificial systems
4:00 pm - 4:45 pm Muehlenbein: Evolutionary algorithms as a research tool for a new evolutionary theory
7:00 pm BANQUET - MONTEREY AQUARIUM
After Dinner Speaker - Warren Ewens

Tuesday - June 23
10:00 am - 10:45 am Banzhaf: An introduction to RNA-like algorithms with applications to sequences of binary numbers
10:45 am - 11:30 am Hillis: Simulated evolution and the Red Queen Hypothesis
11:45 am - 12:30 pm Frank: Coevolutionary genetics of hosts and parasites
12:30 pm - 2:00 pm LUNCH
2:00 pm - 2:45 pm Bergman: Learning your environment
2:45 pm - 3:30 pm Feldman: Recombination and evolution
4:00 pm - 7:30 pm POSTER SESSION

Wednesday - June 24
10:00 am - 10:45 am Mjolsness: Connectionist grammars in evolution and development
10:45 am - 11:30 am Hudson: Title to be announced
12:00 pm - 2:00 pm GROUP LUNCH DISCUSSIONS AT LOCAL RESTAURANTS

Instructions for Submissions and Registration:
---------------------------------------------
Authors should submit a single-page abstract clearly stating their results by June 12, 1992, to the Meeting Coordinator at the address listed below. Please indicate which of the above categories best applies to your paper. There will be no parallel sessions, and the workshop will be structured to stimulate and facilitate the active involvement of all attendees. There will be sessions on the first 2 days from 9:00 AM till 5:00 PM with 1-2 hr lunch breaks. On the third day there will be a morning session and a short afternoon session only (maybe one talk until 3:00 PM).

Registration fees are $100 for full-time Ph.D. students and $250 for all others. Fees include admission to a banquet, at the Monterey Aquarium, to be held on Monday night. (There is a $50 discount for students presenting posters at the meeting.) To obtain registration materials and housing information, please contact the Meeting Coordinator. For information only please contact eeckman at mozart.llnl.gov. Electronic abstract submissions only at jb at s1.gov.

Meeting coordinator:
-------------------
Chris Ghinazzi
P.O. Box 808, L-426
Lawrence Livermore Laboratory
Livermore, CA 94550
phone: (510) 422-7132
email: ghinazzi at verdi.llnl.gov

------------------------------------------------------------------------

Please complete this form and return it to: Evolution as a Computational Process, c/o Chris Ghinazzi, Lawrence Livermore National Laboratory, P.O. Box 808, L-426, Livermore, CA 94550-9900. Phone (510)422-7132 or FAX: (510)422-7819

REGISTRATION FORM

Name:
Title:
Organization:
Address:
City: State: Zip:
Country: Citizenship:
Telephone:
email address:

Registration Fees: Regular $250 Student $100 Student w/poster $50
Are you submitting a poster? yes no
Total Payment Enclosed $________
Check or Money Order (payable in US dollars to UC Regents)

Requests for refunds must be received in writing no later than June 1, 1992. Attendance is on a first-paid, first-served basis.
From judd at learning.siemens.com Fri Jun 19 17:41:08 1992
From: judd at learning.siemens.com (Stephen Judd)
Date: Fri, 19 Jun 92 17:41:08 EDT
Subject: CLNL workshop
Message-ID: <9206192141.AA10749@learning.siemens.com>

LAST CALL FOR PAPERS

Third Annual Workshop on Computational Learning Theory and `Natural' Learning Systems
August 27-29, Madison, Wisconsin

Siemens Corporate Research, MIT, and the University of Wisconsin-Madison are sponsoring the third annual CLNL workshop to explore the intersection of theoretical learning research and natural learning systems. (Natural systems include those that have been successful in a difficult engineering domain or those that represent natural constraints arising from biological or psychological processes or mechanisms.) The workshop will bring together a diverse set of researchers from three relatively independent learning research areas: Computational Learning Theory, AI/Machine Learning, and Connectionist Learning.

Invited speakers and participants will be encouraged to examine general issues in learning systems which could provide constraints for theory, while at the same time theoretical results will be interpreted in the context of experiments with actual learning systems. Examples of experimental approaches include: Models or comparisons of learning systems in classification problems (vision, speech, etc.); Controls and Robotics; Natural language processing; Studies of generalization; Representation effects on learning rate, noise tolerance and concept or function complexity; Biological or biologically inspired models of adaptation; Competitive processing or synaptic growth and modification.

Relevant theoretical subjects include: The computational and sample complexity of learning; Learning in the presence of noise; The effect on learnability of prior knowledge, representational bias, or feature construction; Learning protocol: learning sample distributions; Efficient algorithms for learning particular classes of concepts or functions; Comparison of analytical bounds with real-world experiments.

Submission Procedure: Please submit 3 copies of an abstract of 100 words or less and a summary of original research of 2000 words or less, indicating your preference for either the experimental or theoretical category. The DEADLINE for submission is JUNE 30, 1992. Send abstracts and summaries to:

CLNL Workshop
Siemens Corporate Research
755 College Road East
Princeton, NJ 08540
(Or via email to clnl at learning.siemens.com)

--

ORGANIZING COMMITTEE
Andrew Barto, U Massachusetts
Andrew Barron, U Illinois
Stephen J. Hanson, Siemens Corp. Research
Michael Jordan, MIT
Stephen Judd, Siemens Corp. Research
Kumpati S. Narendra, Yale University
Tomaso Poggio, MIT
Larry Rendell, Beckman Institute
Ronald L. Rivest, MIT
Jude Shavlik, U Wisconsin
Paul Utgoff, U Massachusetts

WORKSHOP CO-CHAIRS
Thomas Petsche, Siemens Corp. Research
Jude Shavlik, U Wisconsin
Stephen Judd, Siemens Research

WORKSHOP SPONSORS
Siemens Corporate Research, Inc.
MIT Laboratory for Computer Science
University of Wisconsin, Dept.
of Computer Science

From gjg at cns.edinburgh.ac.uk Mon Jun 22 16:59:53 1992
From: gjg at cns.edinburgh.ac.uk (Geoffrey Goodhill)
Date: Mon, 22 Jun 92 16:59:53 BST
Subject: Tech Report Available
Message-ID: <29664.9206221559@cns.ed.ac.uk>

The following technical report version of my thesis is now available in neuroprose:

Correlations, Competition, and Optimality: Modelling the Development of Topography and Ocular Dominance

CSRP 226

Geoffrey Goodhill
School of Cognitive and Computing Science
University Of Sussex

ABSTRACT

There is strong biological evidence that the same mechanisms underlie the formation of both topography and ocular dominance in the visual system. However, previous computational models of visual development do not satisfactorily address both of these phenomena simultaneously. In this thesis we discuss in detail several models of visual development, focussing particularly on the form of correlations within and between eyes.

Firstly, we analyse the "correlational" model for ocular dominance development recently proposed in [Miller, Keller & Stryker 1989]. This model was originally presented for the case of identical correlations within each eye and zero correlations between the eyes. We relax these assumptions by introducing perturbative correlations within and between eyes, and show that (a) the system is unstable to non-identical perturbations in each eye, and (b) the addition of small positive correlations between the eyes, or small negative correlations within an eye, can cause binocular solutions to be favoured over monocular solutions.

Secondly, we extend the elastic net model of [Goodhill 1988, Goodhill and Willshaw 1990] for the development of topography and ocular dominance, in particular considering its behaviour in the two-dimensional case. We give both qualitative and quantitative comparisons with the performance of an algorithm based on the self-organizing feature map of Kohonen, and show that in general the elastic net performs better. In addition we show that (a) both algorithms can reproduce the effects of monocular deprivation, and (b) a global orientation for ocular dominance stripes in the elastic net case can be produced by anisotropic boundary conditions in the cortex.

Thirdly, we introduce a new model that accounts for the development of topography and ocular dominance when distributed patterns of activity are presented simultaneously in both eyes, with significant correlations both within and between eyes. We show that stripe width in this model can be influenced by two factors: the extent of lateral interactions in the postsynaptic sheet, and the degree to which the two eyes are correlated. An important aspect of this model is the form of the normalization rule used to limit synaptic strengths: we analyse this for a simple case.

The principal conclusions of this work are as follows:

1. It is possible to formulate computational models that account for (a) both topography and stripe formation, and (b) ocular dominance segregation in the presence of *positive* correlations between the two eyes.

2. Correlations can be used as a ``currency'' with which to compare locality within an eye with correspondence between eyes. This leads to the novel prediction that stripe width can be influenced by the degree of correlation between the two eyes.
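To make the role of between-eye correlations concrete, here is a toy sketch (an illustration only, not one of the models analysed in the thesis: the single two-input unit, the learning rate, and the divisive normalization below are all invented for the example). Negative between-eye correlation drives the unit monocular, while positive correlation keeps it binocular:

import numpy as np

def develop(c_between, steps=2000, lr=0.01):
    # Input correlation matrix: within-eye correlations fixed at 1,
    # between-eye correlation given by c_between. All values illustrative.
    C = np.array([[1.0, c_between],
                  [c_between, 1.0]])
    w = np.array([0.51, 0.49])      # almost-symmetric initial weights
    for _ in range(steps):
        w = w + lr * (C @ w)        # correlation-driven Hebbian growth
        w = np.clip(w, 0.0, None)   # weights stay non-negative
        w = w / w.sum()             # divisive normalization (competition)
    return w

for c in (-0.2, 0.4):
    print(f"between-eye correlation {c:+.1f}: final weights {np.round(develop(c), 3)}")
# Prints an essentially monocular weight vector (one weight near 1) for the
# negative correlation, and a binocular one (both near 0.5) for the positive.

Divisive normalization is used here only because it makes the competition explicit in a single line; as the abstract notes, the form of the normalization rule is itself one of the things the thesis analyses.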
Instructions for obtaining by anonymous ftp:

% ftp cheops.cis.ohio-state.edu
Name: anonymous
Password: neuron
ftp> binary
ftp> cd pub/neuroprose
ftp> get goodhill.thesis.tar
ftp> quit
% tar -xvf goodhill.thesis.tar (This creates a directory called thesis)
% cd thesis
% more README

WARNING: goodhill.thesis.tar is 2.4 Megabytes, and the thesis takes up 13 Megabytes if all files are uncompressed (there are only 120 pages - the size is due to the large number of pictures). Each file within the tar file is individually compressed, so it is not necessary to have 13 Meg of spare space in order to print out the thesis.

The hardcopy version is also available by requesting CSRP 226 from:

Berry Harper
School of Cognitive and Computing Sciences
University of Sussex
Falmer
Brighton BN1 9QN
GREAT BRITAIN

Please enclose a cheque for either 5 pounds sterling or 10 US dollars, made out to "University of Sussex".

Geoffrey Goodhill
University of Edinburgh
Centre for Cognitive Science
2 Buccleuch Place
Edinburgh EH8 9LW
email: gjg at cns.ed.ac.uk

From thildebr at aragorn.csee.lehigh.edu Tue Jun 23 18:25:16 1992
From: thildebr at aragorn.csee.lehigh.edu (Thomas H. Hildebrandt )
Date: Tue, 23 Jun 92 18:25:16 -0400
Subject: Saratchandran paper
Message-ID: <9206232225.AA03162@aragorn.csee.lehigh.edu>

A correspondence paper by P. Saratchandran, published last July in the IEEE T. on Neural Networks, claims to provide an algorithm for training a multilayer feedforward neural network using a dynamic programming method -- one in which the weights on each layer are adjusted only once, starting with the output layer and proceeding to the layer nearest the inputs.[1] This claim is not only counterintuitive, it is false. The author hides this fact from himself and from the reader by defining the error to be minimized in any layer of the network as being independent of the weights in preceding stages of the network. For example, if $I_k$ is the error at the output of the $k$th layer, it is given as a function of the input $y(k-1)$ to that layer, the weights $w(k-1)$ of that layer, and the sets of ideal weights $w^*(k)$, $w^*(k+1)$, ..., $w^*(n-1)$ on succeeding layers in the network. This makes the error to be minimized independent of the weights $w(2)$, $w(3)$, ..., $w(k-2)$ on the earlier layers! I trust that the fallacy is by now apparent.

The absence of experimental data from this paper serves to strengthen my conviction that this algorithm will not work in practice --- save where the function of all but the last layer is trivial, i.e. where the output of the network is {\bf truly independent} of the weights on the $n-2$ hidden layers. (The input layer has no weights.)

Thomas H. Hildebrandt
Visiting Researcher
EE & CS Department
Lehigh University

[1] Saratchandran, P.: "Dynamic Programming Approach for Optimal Weight Selection in Multilayer Neural Networks", IEEE T. on Neural Networks, V.2, N.4, pp.465-467 (July 1991).
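The dependence that the paper defines away is easy to exhibit numerically. The following sketch illustrates Hildebrandt's point under assumptions of our own (a two-weight chain of sigmoidal units, a single training pair, and a grid search; none of this is taken from the paper): the value of the final weight that minimizes the true output error shifts as the earlier weight changes, so no single output-to-input pass can fix each layer's weights once and for all.

import numpy as np

def sigma(z):
    # Standard logistic sigmoid.
    return 1.0 / (1.0 + np.exp(-z))

x, t = 1.0, 0.7                       # one (input, target) training pair

def best_w3(w2, grid=np.linspace(-10.0, 10.0, 20001)):
    # Toy network: output = sigma(w3 * sigma(w2 * x)). With w2 held fixed,
    # find the w3 minimizing the true output error by brute-force grid search.
    y1 = sigma(w2 * x)                # hidden activation, fixed by w2
    errs = (sigma(grid * y1) - t)**2  # output error for each candidate w3
    return grid[np.argmin(errs)]

for w2 in (-2.0, 0.5, 3.0):
    print(f"w2 = {w2:+.1f}  ->  error-minimizing w3 = {best_w3(w2):+.3f}")
# Each w2 yields a different optimal w3 (about +7.11, +1.36 and +0.89 here),
# so an 'error' for a layer that ignores the earlier weights cannot be the
# quantity one actually needs to minimize.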
From russ at oceanus.mitre.org Tue Jun 23 14:48:57 1992
From: russ at oceanus.mitre.org (Russell Leighton)
Date: Tue, 23 Jun 92 14:48:57 EDT
Subject: Request for Aspirin/MIGRAINES Applications
Message-ID: <9206231848.AA12166@oceanus.mitre.org>

Dear Aspirin/MIGRAINES user,

We are preparing a chapter describing Aspirin/MIGRAINES for the upcoming book "Neural Networks Simulation Environments" to be published winter/spring 1993. There are now more than 450 registered A/M sites around the world, and we would like to briefly mention some of the ways that A/M has been used by others in that chapter. We would appreciate a short note describing any results you have using A/M. "Short" means no more than a couple of sentences per application. Of particular interest are successful results that you have published (please include a full citation of the publication), but any work using A/M is of interest. Finally, if you have one or two attractive, relatively self-explanatory PostScript figures produced from your use of A/M that you would be willing to let us use in the chapter, we would appreciate seeing them as well. Of course, any of your work or figures that we use will be properly cited and/or credited.

Please forward this note to other users of A/M.

Sincerely,
The Developers
- Russell Leighton
- Alexis Wieland

From jbower at cns.caltech.edu Thu Jun 25 12:28:57 1992
From: jbower at cns.caltech.edu (Jim Bower)
Date: Thu, 25 Jun 92 09:28:57 PDT
Subject: Caltech Faculty Position
Message-ID: <9206251628.AA00316@cns.caltech.edu>

------------------------------------------------------------

DIVISION OF BIOLOGY
CALIFORNIA INSTITUTE OF TECHNOLOGY

The Division of Biology at the California Institute of Technology seeks applicants for a faculty position in integrative neurophysiology/computational neuroscience. Preference is given to individuals who combine behavioral, physiological, and computational approaches. Women and minority candidates are encouraged to apply. Applicants should send a curriculum vitae, a statement of research interests, and selected reprints, and also have at least three letters of recommendation sent directly to:

Ms. Marilyn Tomich
Division of Biology 156-29
California Institute of Technology
Pasadena, CA 91125.

The deadline for application is August 15, 1992. The California Institute of Technology is an Equal Opportunity/Affirmative Action Employer.

From thildebr at aragorn.csee.lehigh.edu Thu Jun 25 13:20:12 1992
From: thildebr at aragorn.csee.lehigh.edu (Thomas H. Hildebrandt )
Date: Thu, 25 Jun 92 13:20:12 -0400
Subject: Saratchandran paper
Message-ID: <9206251720.AA06495@aragorn.csee.lehigh.edu>

Barak Pearlmutter has also uncovered the fallacy in the Saratchandran paper. His (more detailed) rebuttal, entitled "A comment on `Dynamic programming approach to optimal weight selection in multilayer neural networks'", is soon to appear in the IEEE T. on Neural Networks.

THH

From robtag at udsab.dia.unisa.it Tue Jun 23 09:06:20 1992
From: robtag at udsab.dia.unisa.it (Tagliaferri Roberto)
Date: Tue, 23 Jun 92 15:06:20 +0200
Subject: European Human Mobility Plan
Message-ID: <9206231306.AA10863@udsab.dia.unisa.it>

International Institute for Advanced Scientific Studies
via G. Pellegrino, 19
I-84019 Vietri sul mare (Salerno)
Italia
fax no. +39 (89) 761189

The International Institute for Advanced Scientific Studies (IIASS), directed by Prof. E.R. Caianiello and working in cooperation with the nearby University of Salerno, is interested in participating in the European human mobility plan in the areas of neural networks and their applications to speech processing, pattern recognition and vision.
The researchers interested in realizing a network of groups in one of the above areas should contact:

Dr. Roberto Tagliaferri
E-mail: robtag at udsab.dia.unisa.it

From atul at nynexst.com Mon Jun 22 12:05:45 1992
From: atul at nynexst.com (Atul Chhabra)
Date: Mon, 22 Jun 92 12:05:45 EDT
Subject: Summary: references on telecommunications applications of neural nets
Message-ID: <9206221605.AA22005@texas.nynexst.com>

I posted a request for references on telecommunications applications of neural nets to the connectionists list on June 5. Here is a summary of the responses. Thanks to all who responded.

--Atul

=====================================================================
Atul K. Chhabra               Phone: (914)644-2786
Member of Technical Staff     Fax: (914)644-2211
NYNEX Science & Technology    Internet: atul at nynexst.com
500 Westchester Avenue
White Plains, NY 10604
=====================================================================

--------------------------------------------------------------------------
> From: Atul Chhabra
> To: Connectionists at CS.CMU.EDU
> Subject: references on telecommunications applications of neural nets
> and/or machine vision
>
> I am looking for recent papers/technical reports etc. on telecommunications
> applications of neural networks. I have the following papers. I would
> appreciate receiving any additional references. Please respond by email.
> I will post a summary of responses to the net.
>
> I am also looking for references on applications of machine vision in
> telecommunications.
>
> 1. A. Hiramatsu, "ATM communications network control by neural network,"
> IJCNN 89, Washington D.C., I/259-266, 1989.
>
> 2. J.E. Jensen, M.A. Eshara, and S.C. Barash, "Neural network controller
> for adaptive routing in survivable communications networks," IJCNN 90,
> San Diego, CA, II/29-36, 1990.
>
> 3. T. Matsumoto, M. Koga, K. Noguchi, and S. Aizawa, "Proposal for
> neural network applications to fiber optic transmission," IJCNN 90,
> San Diego, CA, I/75-80, July 1990.
>
> 4. T.X. Brown, "Neural network for switching," IEEE Communications, vol
> 27, no 11, 72-80, 1989.
>
> 5. T.P. Troudet and S.M. Walters, "Neural network architecture for
> crossbar switch control," IEEE Transactions on Circuits and Systems,
> vol 38, 42-56, 1991.
>
> 6. S. Chen, G.J. Gibson and C.F.N. Cowan, "Adaptive channel equalization
> using a polynomial-perceptron structure," IEE Proceedings, vol 137,
> 257-264, 1990.
>
> 7. R.M. Goodman, J. Miller and H. Latin, "NETREX: A real time network
> management expert system," IEEE Globecom Workshop on the Application
> of Emerging Technologies in Network Operation and Management, FL,
> December 1988.
>
> 8. K.N. Sivarajan, "Spectrum Efficient Frequency Assignment for Cellular
> Radio," Caltech EE Doctoral Dissertation, June 1990.
>
> 9. M.D. Alston and P.M. Chau, "A decoder for block-coded forward error
> correcting systems," IJCNN 90, Washington D.C., II/302-305, January
> 1990.
>
> Thanks.
>
> Atul K. Chhabra
> --------------------------------------------------------------------------
> From pthc at mullian.ee.mu.OZ.AU
>
> Dear Atul,
>
> With reference to your article on the newsgroup ai.comp.neural-nets,
> I would like to add the following references to your list:
>
> 1. K. Nakano, M. Sengoku, S. Shinoda, Y. Yamaguchi & T. Abe,
> "Channel Assignment in Cellular Mobile Communication Systems Using
> Neural Networks", Singapore Int. Conf. on Communication Systems,
> 531-534, Nov. 1990.
>
> 2. D.
Kunz, "Channel Assignment for Cellular Radio Using Neural Networks", > IEEE Trans. on Vehicular Technology, Vol. 40, No. 1, 188-193, Feb 1991. > > 3. P.T.H. Chan, M. Palaniswami & D. Everitt, "Dynamic Channel Assignment for > Cellular Mobile Radio System Using Feedforward Neural Networks", > IJCNN 91, pp 1242-1247, Nov. 1991. > > 4. P.T.H. Chan, M. Palaniswami & D. Everitt, "Dynamic Channel Assignment for > Cellular Mobile Radio System Using Self-Organising Neural Networks", > The 6th Australian Teletraffic Seminar, 89-96, Nov. 1991. > > 5. J.A. Franklin, M.D. Smith & J.C. Yun, "Learning Channel Allocation > Strategies in Real Time", IEEE Vehicular Technology Conf. 92, May 1992. > > 6. D. Munoz-Rodriguez, J.A. Moreno-Cadenas and et-al, "Neural Supported > Hand Off Methodology in Micro Cellular Systems.", IEEE Vehicular > Technology Conf. 92, May 1992. > > Regards, > Peter Chan > > ===================================================================== > | \\ Telephone : +61 3 344 7436 | > | Peter T. H. Chan \\ Fax: : +61 3 344 6678 | > | \\ E-Mail : pthc at mullian.ee.mu.oz.AU | > |__________________________\\_______________________________________| > | Department of Electrical and Electronic Engineering | > | School of Information Technology and Electrical Engineering | > | The University of Melbourne, Parkville | > | Victoria 3052, AUSTRALIA | > ===================================================================== > > -------------------------------------------------------------------------- > From: nicwi at isy.liu.se (Niclas Wiberg) > > Hello, > We have compiled a list of articles written on decoding of > error-correcting codes using neural networks. The list is written > in bibtex, which is a bibliography program for the text processor latex. > > Hope this helps. > > Niclas > > > ================== Here we go =================================== > > @ARTICLE{ybw90, > AUTHOR = "Yuan, Jing and Bhargava, Vijay K and Wang, Qiang", > TITLE = "Maximum Likelihood Decoding Using Neural Nets", > JOURNAL = "Journal of the Institution of Electronics and > Telecommunication Engineers", > YEAR = "1990", > VOLUME = "36", > NUMBER = "5-6", > MONTH = "Sept.-Dec.", > PAGES = "367-376" } > > @INPROCEEDINGS{hry90, > AUTHOR = "Hrycej, Tomas", > TITLE = "Self-Organization by Delta Rule", > BOOKTITLE = "IJCNN International Joint Conference on > Neural Networks", > YEAR = "1990", > PAGES = "307-312", > ADDRESS = "New York, NY, USA" } > > @INPROCEEDINGS{jp90, > AUTHOR = "Jeffries, Clark and Protzel, Peter", > TITLE = "High Order Neural Models for Error Correcting Code", > BOOKTITLE = "Proceedings from the SPIE - The International > Society for Optical Engineering", > YEAR = "1990", > PAGES = "510-517", > ORGANIZATION = "SPIE" } > > @INPROCEEDINGS{zha89, > AUTHOR = "Zeng, Gengsheng and Hush, Don and Ahmed, Nasir", > TITLE = "An Application of Neural Net in Decoding Error-Correcting Codes", > BOOKTITLE = "Proceedings from 1989 IEEE International Symposium on > Circuits and Systems", > YEAR = "1989", > PAGES = "782-785", > ADDRESS = "New York, NY, USA" } > > @INPROCEEDINGS{slc89, > AUTHOR = "Santamaria, M.E. and Lagunas, M.A. and Cabrera, M.", > TITLE = "Neural Nets Filters: Integrated Coding and Signaling in > Communication Systems", > BOOKTITLE = "MELECON`89: Mediterranean Electrotechnical Conference > Proceedings. Integrating Research, Industry and Education in > Energy and Communication Engineering", > YEAR = "1989", > PAGES = "532-535", > ADDRESS = "IEEE, New York, NY, USA", > EDITOR = "Barbosa, A.M." 
>
> @ARTICLE{ph86,
>   AUTHOR = "Platt, J.C. and Hopfield, J.J.",
>   TITLE = "Analog Decoding Using Neural Networks",
>   JOURNAL = "AIP Conference Proceedings",
>   YEAR = "1986",
>   NUMBER = "151",
>   PAGES = "364-369" }
>
> @INPROCEEDINGS{fdk89,
>   AUTHOR = "Farotimi, O. and Dembo, A. and Kailath, T.",
>   TITLE = "Absolute Stability and Optimal Training for Dynamic Neural Networks",
>   BOOKTITLE = "Conference Record. Twenty-Third Asilomar Conference on
>                Signals, Systems and Computers",
>   YEAR = "1989",
>   PAGES = "133-137",
>   EDITOR = "Chen, R.R.",
>   PUBLISHER = "Maple Press, San Jose, CA, USA" }
>
> @INPROCEEDINGS{cm90,
>   AUTHOR = "Caid, William R. and Means, Robert W.",
>   TITLE = "Neural Network Error Correcting Decoders for Block and
>            Convolutional Codes",
>   BOOKTITLE = "GLOBECOM`90: IEEE Global Telecommunications Conference and
>                Exhibition. 'Communications: Connecting the Future'",
>   YEAR = "1990",
>   PAGES = "1028-1031",
>   PUBLISHER = "IEEE, New York, NY, USA" }
>
> @ARTICLE{hb91,
>   AUTHOR = "Hussain, M. and Bedi, Jatinder S.",
>   TITLE = "Decoding Scheme for Constant Weight Codes for Optical and
>            Spread Spectrum Applications",
>   JOURNAL = "Electronics Letters",
>   VOLUME = "27",
>   PAGES = "839-842",
>   NUMBER = "10",
>   YEAR = "1991" }
>
> @INPROCEEDINGS{ac90,
>   AUTHOR = "Alston, Michael D. and Chau, Paul M.",
>   TITLE = "A Neural Network Architecture for the Decoding of Long
>            Constraint Length Convolutional Codes",
>   BOOKTITLE = "International Joint Conference on Neural Networks",
>   YEAR = "1990",
>   PAGES = "121-126",
>   PUBLISHER = "IEEE, New York, NY, USA" }
>
> @INPROCEEDINGS{yc90,
>   AUTHOR = "Yuan, Jing and Chen, C.S.",
>   TITLE = "Neural Net Decoders for some Block Codes",
>   BOOKTITLE = "IEE Proceedings I (Communications, Speech and Vision)",
>   YEAR = "1990",
>   PAGES = "309-314",
>   ORGANIZATION = "IEE" }
>
> @INPROCEEDINGS{whi89,
>   AUTHOR = "Whittle, P.",
>   TITLE = "The Achievement of Memory by an Antiphon Structure",
>   BOOKTITLE = "Developments in Neural Computing. Proceedings of a Meeting
>                on Neural Computing",
>   YEAR = "1989",
>   PAGES = "119-124",
>   ORGANIZATION = "IOP and London Math Society",
>   PUBLISHER = "Adam Hilger, Bristol, UK" }
>
> @INPROCEEDINGS{lp89,
>   AUTHOR = "Lee, Tsu-chang and Peterson, Allen M.",
>   TITLE = "Adaptive Vector Quantization with a Structural Level Adaptable
>            Neural Network",
>   BOOKTITLE = "IEEE Pacific Rim Conference on Communications, Computers
>                and Signal Processing. Conference Proceedings",
>   YEAR = "1989",
>   PAGES = "517-520",
>   PUBLISHER = "IEEE, New York, NY, USA" }
>
> @ARTICLE{bb89,
>   AUTHOR = "Bruck, Jehoshua and Blaum, Mario",
>   TITLE = "Neural Networks, Error-Correcting Codes, and Polynomials over
>            the Binary n-Cube",
>   JOURNAL = "IEEE Transactions on Information Theory",
>   VOLUME = "35",
>   PAGES = "976-987",
>   NUMBER = "5",
>   YEAR = "1989" }
>
> @INPROCEEDINGS{ybw89,
>   AUTHOR = "Yuan, Jing and Bhargava, Vijay K and Wang, Qiang",
>   TITLE = "An Error Correcting Neural Network",
>   BOOKTITLE = "IEEE Pacific Rim Conference on Communications, Computers
>                and Signal Processing. Conference Proceedings",
>   YEAR = "1989",
>   PAGES = "530-533",
>   PUBLISHER = "IEEE, New York, NY, USA" }
Conference Proceedings", > YEAR = "1989", > PAGES = "530-533", > PUBLISHER = "IEEE, New York, NY, USA" } > > @INPROCEEDINGS{cc90, > AUTHOR = "Chen, Chang-jia and Chen, Tai-yi", > TITLE = "Preliminary Study of the Local Maximum Problem of the Energy > Function for the Neural Network in Decoding of Binary Block Codes", > BOOKTITLE = "International Symposium on Information Theory and Its > Applications, ISITA`90", > YEAR = "1990", > PAGES = "727-729", > PUBLISHER = "IEEE, New York, NY, USA" } > > @INPROCEEDINGS{pd88, > AUTHOR = "Petsche, Thomas and Dickinson, Bradley W.", > TITLE = "A Trellis-Structured Neural Network", > BOOKTITLE = "Neural Information Processing Systems", > YEAR = "1988", > PAGES = "592-601", > PUBLISHER = "American Institute of Physics, New York, USA" } > > @INPROCEEDINGS{bar90, > AUTHOR = "Baram, Yoram", > TITLE = "Nested Neural Networks and their Codes", > BOOKTITLE = "Proceedings from 1990 IEEE International Symposium on > Information Theory", > YEAR = "1990", > PAGES = "9" } > > @TECHREPORT{str90, > AUTHOR = "Stranneby, Dag R.", > TITLE = "Error Correction of Corrupted Binary Coded Data Using Neural Networks", > INSTITUTION = "TRITA-TTT", > YEAR = "1990", > ADDRESS = "KTH, Stockholm, Sweden" } > > @TECHREPORT{sch91, > AUTHOR = "Schnell, M.", > TITLE = "Multilayer Perceptrons and Their Application to Decoding Block Codes", > INSTITUTION = "German Aerospace Research Establishment, Institute for Communications > Technology", > YEAR = "1991", > ADDRESS = "D-8031 Oberpfaffenhofen, Germany" } > > @INPROCEEDINGS{hb91-1, > AUTHOR = "Hussain, M. and Bedi, Jatinder S.", > TITLE = "Performance Evaluation of Different Neural Network Training > Algorithms in Error Control Coding", > BOOKTITLE = "SPIE`91, Applications of Artificial Intelligence and Neural > Networks", > YEAR = "1991", > PAGES = "697-707" } > > @INPROCEEDINGS{hb91-2, > AUTHOR = "Hussain, M. and Bedi, Jatinder S.", > TITLE = "Reed-{S}olomon Encoder/Decoder Application Using a Neural Network", > BOOKTITLE = "SPIE`91, Applications of Artificial Intelligence and Neural > Networks", > YEAR = "1991", > PAGES = "463-471" } > > @INPROCEEDINGS{hsb90, > AUTHOR = "Hussain, M. and Song, Jing and Bedi, Jatinder S.", > TITLE = "Neural Network Application to Error Control Coding", > BOOKTITLE = "SPIE`90", > YEAR = "1990", > PAGES = "502-510" } > > @INPROCEEDINGS{hs90, > AUTHOR = "Hussain, M. and Bedi, Jatinder S.", > TITLE = "Decoding a Class of Non Binary Codes Using Neural Networks", > BOOKTITLE = "33:rd Midwest Symposium on Circuits and Systems", > YEAR = "1990" } > > @TECHREPORT{zz91, > AUTHOR = "Zetterberg, Lars H. and Zhang, Qingshi", > TITLE = "Signal Detection Using Neural Networks and Error Correcting Codes", > INSTITUTION = "TRITA-TTT", > YEAR = "1991", > NUMBER = "9114", > ADDRESS = "KTH, Stockholm, Sweden" } > > @ARTICLE{jef90, > AUTHOR = "Jeffries, Clark", > TITLE = "Code Recognition with Neural Network Dynamical Systems", > JOURNAL = "Society for Industrial and Applied Mathematics Review", > VOLUME = "32", > PAGES = "636-651", > YEAR = "1990", > NUMBER = "4" } > > @ARTICLE{sou89, > AUTHOR = "Sourlas, Nicolas", > TITLE = "Spin-glass Models as Error-correcting Codes", > JOURNAL = "Nature", > VOLUME = "339", > PAGES = "693-695", > YEAR = "1989" } > > @TECHREPORT{gb92, > AUTHOR = "Gish, Sheri L. 
and Blaum, Mario", > TITLE = "Adaptive Development of Connectionist Decoders for Complex > Error-correcting Codes", > INSTITUTION = "IBM Research Division", > YEAR = "1992", > ADDRESS = "Almaden Research Center, San Jose, CA, USA" } > > @INPROCEEDINGS{thfc87, > AUTHOR = "Takefuji, Yoshiyasu and Hollis, Paul and Foo, Yoon Pin and > Cho, Yong B.", > TITLE = "Error Correcting System Based on Neural Circuits", > BOOKTITLE = "IEEE First International Conference on Neural Networks", > YEAR = "1987", > PAGES = "293-300", > VOLUME = "3", > PUBLISHER = "SOS Printing, San Diego, CA, USA" } > > @ARTICLE{yc91, > AUTHOR = "Yuan, Jing and Chen, C.S.", > TITLE = "Correlation Decoding of the (24,12) {Golay} Code Using Neural > Networks", > JOURNAL = "IEE Proceedings I (Communications, Speech and Vision)", > VOLUME = "138", > PAGES = "517-524", > YEAR = "1991" } > > @MASTERSTHESIS{aa91, > AUTHOR = "Andersson, Gunnar and Andersson, H{\aa}kan", > TITLE = "Generation of Soft Information in a Frequency-hopping > {HF} Radio System Using Neural Networks", > SCHOOL = "Link{\"{o}}ping Institute of Technology", > YEAR = "1991", > ADDRESS = "Link{\"{o}}ping, Sweden", > NOTE = "(In Swedish)" } > > @INPROCEEDINGS{lki90, > AUTHOR = "Li, Haibo and Kronander, Torbj{\"{o}}rn and Ingemarsson, Ingemar", > TITLE = "A Pattern Classifier Integrating Multilayer Perceptron and > Error-correcting Code", > BOOKTITLE = "IAPR Workshop on Machine Vision Applications", > YEAR = "1990", > PAGES = "113-116" } > > @INPROCEEDINGS{ycl90, > AUTHOR = "Yang, Jar-Ferr and Chen, Chi-Ming and Lee, Jau-Yien", > TITLE = "Neural Networks for Maximum Likelihood Error Correcting Systems", > BOOKTITLE = "IJCNN International Joint Conference on Neural Networks", > YEAR = "1990", > PAGES = "493-498", > VOLUME = "1" } > > ---------------------------------------------------------------------- > Niclas Wiberg nicwi at isy.liu.se > Dept. of EE Linkoping University Sweden > -------------------------------------------------------------------------- > From: Ernst Nordstr|m > > Hello, > > here are some references on neural networks in telecommunications: > > 1. T. Takahashi, A. Hiramatsu, "Integrated ATM traffic control by distributed > neural networks", ISS 90, Stockholm, Sweden, May 1990. > > 2. A. Hiramatsu, "Integration of ATM call admission control and link capacity > control by distributed neural networks", IEEE Journal on Selected Areas in > Communications, vol 9, no 7, Sep 1991. > > 3. X. Chen, I. Leslie, "A neural network approach towards adaptive congestion > control in broadband atm networks", GLOBECOM 91, Phoenix, AZ, Dec 1991. > > 4. N. Ansari, D. Liu, "The performance evaluation of a new neural network based > traffic management scheme for a satellite communication network", > GLOBECOM 91, Phoenix, AZ, Dec 1991. > > 5. F. Kamoun, M. Ali, "A neural network shortest path algorithm for optimum > routing in packed-switched communications networks", GLOBECOM 91, Phoenix, > AZ, Dec 1991. > > 6. M. Ali, F. Kamoun, "A neural network approach to the maximum flow problem", > GLOBECOM 91, Phoenix, AZ, Dec 1991. > > 7. R.Lancini, F.Perego, S.Tubaro, "Some experiments on vector quantization > using neural nets", GLOBECOM 91, Phoenix, AZ, Dec 1991. > > 8. B. Khasnabish, M. Ahmadi, "Congestion avoidance in large supra-high-speed > packet switching networks using neural nets", GLOBECOM 91, Phoenix, AZ, > Dec 1991. 
>
> Best regards,
>
> Ernst Nordstrom
> Department of Computer Systems
> Uppsala University, Sweden
--------------------------------------------------------------------------
> From: dbs0 at gte.com (Daniel Schwartz)
>
> Rodolfo Milito of Bell Labs demonstrated a neural network controller
> for admission control in a queuing system. The paper was published in
> the Neural Information Processing Systems proceedings of 1990
> (Morgan Kaufmann).
>
> daniel b schwartz
> gte laboratories
> 40 sylvan rd
> waltham ma 02254
--------------------------------------------------------------------------
> From: INDE47D at Jetson.UH.EDU
>
> Here are some more
>
> 1. R. Ogier and D. Beyer: Neural Network Solution to the Link Scheduling
>    Problem Using Convex Relaxation, in IEEE Global Telecommunications
>    Conference (Globecom 90), San Diego, California, December 1990.
>
> The above paper should give you some more.
>
> I am presently writing a tech-report on a NN solution to the maximum
> independent set (MIS) problem. This MIS problem is the same as the link
> scheduling problem for multi-hop radio networks. I should be finishing it
> sometime next week and can send you a copy if you are interested.
>
> - Shiv
--------------------------------------------------------------------------
> From: giles at research.nj.nec.com (Lee Giles)
>
> See Goudreau and Giles in the NIPS91 Proceedings - NNs for interconnection
> networks.
>
> Let me know if you have trouble finding it.
>
> C. Lee Giles
> NEC Research Institute
> 4 Independence Way
> Princeton, NJ 08540
> USA
>
> Internet: giles at research.nj.nec.com
> UUCP: princeton!nec!giles
> PHONE: (609) 951-2642
> FAX: (609) 951-2482
--------------------------------------------------------------------------
> From: yxt3 at po.CWRU.Edu (Yoshiyasu Takefuji)
>
> HI.
> Don't forget the following articles.
> Y. Takefuji and K. C. Lee, "An artificial hysteresis binary neuron:....,"
> Biological Cybernetics, 64, 353-356, 1991.
> N. Funabiki and Y. Takefuji, "A parallel algorithm for time slot assignment
> problems in TDM hierarchical switching systems," to appear in IEEE Trans.
> on Communications.
> N. Funabiki and Y. Takefuji, "A parallel algorithm for traffic control
> problems in three-stage connecting networks," to appear in Journal of
> Parallel and Distributed Computing.
> N. Funabiki and Y. Takefuji, "A parallel algorithm for broadcast scheduling
> problems in packet radio networks," to appear in IEEE Trans. on Communications.
> N. Funabiki, Y. Takefuji, and K. C. Lee, "Comparison of six neural network
> models on a traffic control problem in a multistage interconnection network,"
> to appear in IEEE Trans. on Computers.
> N. Funabiki and Y. Takefuji, "A neural network parallel algorithm for channel
> assignment problems in cellular radio networks," to appear in IEEE Trans. on
> Vehicular Technology.
>
> An introduction to the control problems using NN is found in my recent book,
> "Neural Network Parallel Computing," from Kluwer, Jan 1992.
>
> thank you.
>
> yoshiyasu takefuji
--------------------------------------------------------------------------
> From: ang at hertz.njit.edu (Nirwan Ansari, 201-596-3670)
>
> The following are a few of my NN papers on telecommunications:
>
> N. Ansari, "Managing the traffic of a satellite communication network
> by neural networks," to appear in B. Soucek and the IRIS Group (ed.),
> Dynamic, Genetic and Chaotic Programming of the 6th Generation series,
> pp.339-352, Wiley 1992.
>
> N. Ansari and Y. Chen, "Configuring Maps for a Satellite Communication
> Network by Self-organization," Journal of Neural Network Computing,
> vol. 2, no. 4, pp.11-17, Spring 1991.
>
> N. Ansari and Y. Chen, "A Neural Network Model to Configure Maps for
> a Satellite Communication Network," 1990 IEEE Global Telecommunications
> Conference, December 2-5, 1990, San Diego, CA, pp. 1042-1046.
>
> N. Ansari and D. Liu, "The Performance Evaluation of A New Neural
> Network Based Traffic Management Scheme For A Satellite Communication
> Network," Proc. 1991 IEEE Global Telecommunications Conference,
> December 2-5, 1991, Phoenix, AZ, pp. 110-114.
>
> --Nirwan Ansari
--------------------------------------------------------------------------
> To: Atul Chhabra
>
> Mcdonald, K., T. R. Martinez, and D. M. Campbell,
> A Connectionist Method for Adaptive Real-Time Network Routing,
> Proceedings of the 4th International Symposium on Artificial Intelligence,
> pp. 371-377, 1991.
> _______________________________________________________________
> Tony Martinez
> Asst. Professor, Computer Science Dept, BYU, Provo, UT, 84602
> martinez at cs.byu.edu   Phone: 801-378-6464   Fax: 801-378-2800
--------------------------------------------------------------------------
> From: rodolfo at buckaroo.att.com
> Original-From: buckaroo!rodolfo (Rodolfo A Milito +1 908 949 7614)
>
> You may want to consider
>
> Rodolfo A. Milito, Isabelle Guyon, and Sara Solla, "Neural Network
> Implementation of Admission Control," Advances in Neural Information
> Processing Systems 3, Morgan Kaufmann, 1991
>
> I would also appreciate your comments,
>
> Rodolfo Milito
--------------------------------------------------------------------------
> From: an at verbum.com
>
> Hello,
>
> In response to your posting above, I have the following reference for you:
>
> Atsushi Hiramatsu, "ATM Communications Network Control by Neural
> Networks," _IEEE_Transactions_On_Neural_Networks_, Vol. 1 No. 1,
> March 1990.
>
> An Nguyen
> E-mail: an at verbum.com
--------------------------------------------------------------------------
> From: jradue at SANDCASTLE.COSC.BROCKU.CA
>
> I was interested to see your request. I do not know if you are aware of the
> work being done at the Communications Research Laboratory at McMaster
> University in Hamilton, Ontario. They have just sponsored a Symposium on
> Communications in Neurobiological Systems. A contact email address is
> Myersa at SSCvax.cis.mcmaster.ca -- this is the address of the coordinating
> secretary there. Another address you could try, though I do not know how
> often he reads his mail, is haykin at sscvax.cis.mcmaster.ca -- this is
> Dr Simon Haykin, the Director of the Laboratory.
>
> I myself am working on the verification of handwritten signatures using
> NNs, and would be interested to hear if you receive any feedback from your
> request in this area.
>
> Regards
>
> Jon Radue
>
> Computer Science Department
> Brock University            V: (416)688-5550 x 3867
> St. Catharines, Ontario     F: (416)688-2789
> L2S 3A1 CANADA              E: jradue at sandcastle.cosc.brocku.ca
--------------------------------------------------------------------------
> From: Frans Martin Coetzee
>
> Dear Atul
>
> On the connectionist bboard you recently expressed interest in references on
> telecommunication applications of neural nets. I would be interested in a
> general post to the mailing list of the references you had received.
>
> As for me, I cannot offer you many references: I do know that there is a
> group working on coding/decoding of binary signals using neural networks
> at the University of Victoria. Below is one of their references:
>
> Author   Yuan, J.; Bhargava, V.K.; Wang, Q.;
>          Dept. of Electr. & Comput. Eng., Victoria Univ., BC, CANADA
> Title    Maximum likelihood decoding using NEURAL nets
> Source   Journal of the Institution of Electronics and Telecommunication
>          Engineers; J. Inst. Electron. Telecommun. Eng. (India);
>          vol.36, no.5-6; Sept.-Dec. 1990; pp. 367-76
--------------------------------------------------------------------------
> From: karit at idiap.ch (Kari Torkkola)
>
> You were interested in references to telecommunications applications
> of neural networks. Here are a couple of such references:
>
> @InProceedings{Kohonen90c,
>   author = "Teuvo Kohonen and Kimmo Raivio and Olli Simula and Olli
>             Vent{\"a} and Jukka Henriksson",
>   title = "Combining Linear Equalization and Self-Organizing Adaptation
>            in Dynamic Discrete-Signal Detection",
>   booktitle = "Proceedings of the International Joint Conference on
>                Neural Networks",
>   year = "1990",
>   pages = "I 223-228",
>   address = "San Diego",
>   month = "June",
> }
>
> @InProceedings{Kohonen91a,
>   author = "Teuvo Kohonen and Kimmo Raivio and Olli Simula and
>             Jukka Henriksson",
>   title = "Performance Evaluation of Self-Organizing Map Based Neural
>            Equalizers in Dynamic Discrete-Signal Detection",
>   booktitle = "Proceedings of the International Conference on
>                Artificial Neural Networks",
>   year = "1991",
>   pages = "II 1677-1680",
>   address = "Helsinki",
>   month = "June",
> }
>
> ===================================================================
> Kari Torkkola
> IDIAP (Institut Dalle Molle d'Intelligence Artificielle Perceptive)
> 4, rue du Simplon
> CH 1920 Martigny
> Switzerland                    email: karit at idiap.ch
> ===================================================================
--------------------------------------------------------------------------
> From: goudreau at research.nj.nec.com (Mark Goudreau)
>
> Hi Atul,
> Here are some more references for you. Please send me a copy of the
> compiled list once you are done.
> Thanks -Mark
>
> @ARTICLE{brow2,
>   AUTHOR = "T.X. Brown and K.-H. Liu",
>   TITLE = "Neural Network Design of a {B}anyan Network Controller",
>   JOURNAL = "IEEE Journal on Selected Areas of Communication",
>   YEAR = "1990",
>   VOLUME = "8",
>   NUMBER = "8",
>   PAGES = "1428-1438",
>   MONTH = "October"}
>
> @INPROCEEDINGS{funa1,
>   AUTHOR = "N. Funabiki and Y. Takefuji and K.C. Lee",
>   TITLE = "A Neural Network Model for Traffic Controls in Multistage
>            Interconnection Networks",
>   BOOKTITLE = "Proceedings of the International Joint Conference on
>                Neural Networks 1991",
>   YEAR = "1991",
>   PAGES = "A898",
>   MONTH = "July"}
>
> @INPROCEEDINGS{goud1,
>   AUTHOR = "M.W. Goudreau and C.L. Giles",
>   TITLE = "Neural Network Routing for Multiple Stage Interconnection
>            Networks",
>   BOOKTITLE = "Proceedings of the International Joint Conference on
>                Neural Networks 1991",
>   YEAR = "1991",
>   PAGES = "A885",
>   MONTH = "July"}
>
> @INPROCEEDINGS{goud2,
>   AUTHOR = "M.W. Goudreau and C.L. Giles",
>   TITLE = "Neural Network Routing for Random Multiple Stage
>            Interconnection Networks",
>   BOOKTITLE = "Advances in Neural Information Processing Systems~4",
>   YEAR = "1992",
>   EDITOR = "J.E. Moody and S.J. Hanson and R.P. Lippmann",
>   PUBLISHER = "Morgan Kaufmann Publishers",
>   ADDRESS = "San Mateo, CA",
>   PAGES = "722--729"}
Hanson and R.P Lippmann", > PUBLISHER = "Morgan Kaufmann Publishers", > ADDRESS = "San Mateo, CA", > PAGES = "722--729"} > > @INPROCEEDINGS{haki1, > AUTHOR = "N.Z. Hakim and H.E. Meadows", > TITLE = "A Neural Network Approach to the Setup of the {B}enes > Switch", > BOOKTITLE = "Infocom 90", > YEAR = "1990", > PAGES = "397-402"} > > @ARTICLE{marr1, > AUTHOR = "A.M. Marrakchi and T. Troudet", > TITLE = "A Neural Net Arbitrator for Large Crossbar > Packet-Switches", > JOURNAL = "IEEE Transactions on Circuits and Systems", > YEAR = "1989", > VOLUME = "36", > NUMBER = "7", > PAGES = "1039-1041", > MONTH = "July"} > > @INPROCEEDINGS{mels1, > AUTHOR = "P.J.W. Melsa and J.B. Kenney and C.E. Rohrs", > TITLE = "A Neural Network Solution for Routing in Three Stage > Interconnection Networks", > BOOKTITLE = "Proceedings of the 1990 International Symposium on > Circuits and Systems", > YEAR = "1990", > PAGES = "483-486", > MONTH = "May"} > > @INPROCEEDINGS{mels2, > AUTHOR = "P.J.W. Melsa and J.B. Kenney and C.E. Rohrs", > TITLE = "A Neural Network Solution for Call Routing with > Preferential Call Placement", > BOOKTITLE = "Proceedings of the 1990 Global Telecommunications > Conference", > YEAR = "1990", > PAGES = "1377-1382", > MONTH = "December"} > > @ARTICLE{rauc1, > AUTHOR = "H.E. Rauch and T. Winarske", > TITLE = "Neural Networks for Routing Communication Traffic", > JOURNAL = "IEEE Control Systems Magazine", > YEAR = "1988", > VOLUME = "8", > NUMBER = "2", > PAGES = "26-31", > MONTH = "April"} > > @ARTICLE{take1, > AUTHOR = "Y. Takefuji and K.C. Lee", > TITLE = "An Artificial Hysteresis Binary Neuron: A Model Suppressing > the Oscillatory Behavior of Neural Dynamics", > JOURNAL = "Biological Cybernetics", > YEAR = "1991", > VOLUME = "64", > PAGES = "353-356"} > > @INPROCEEDINGS{zhan1, > AUTHOR = "L. Zhang and S.C.A. Thomopoulos", > TITLE = "Neural Network Implementation of the Shortest Path > Algorithm for Traffic Routing in Communication Networks", > BOOKTITLE = "Proceedings of the International Joint Conference on > Neural Networks 1989", > YEAR = "1989", > PAGES = "591", > MONTH = "June"} > > ------------------------------------------------------------------------- > Mark W. Goudreau - NEC Research Institute, Inc. - 4 Independence Way > Princeton, NJ 08540 - USA - goudreau at research.nj.nec.com - (609) 951-2689 > ------------------------------------------------------------------------- > -------------------------------------------------------------------------- > From: lynda at stoney.fit.qut.edu.au (Ms Lynda Thater) > > Hello. > > I am writing in reference to your news article calling for references in the > area of neural network approaches to Telecommunications applications. I have > a few references you might want to look at. > > 1. Rauch, H.E. and Winarske, T., "Neural Networks for Routing Communication > Traffic", IEEE Control Systems Magazine, 1988. > > 2. IEEE Contr. Syst. Magazine, Special Section on Neural Networks for Systems > and Control, April 1988. > > 3. Chang F. and Wu L., "An Optimal Adaptive Routing Algorithm", IEEE > Transactions Auto, Contr., August 1986. > > 4. Vakil F. and Lazar A.A. "Flow Control Protocols for Integrated Networks > with Partially Observed Voice Traffic, " IEE Trans. Auto. Contr., January > 1987. > > 5. Tank D.W. and Hopfield J.J., "Simple 'Neural' Optimization Networks: An A/D > Converter, Signal Decisiion circuit, and a Linear Programming Circuit", > IEEE Transactions on Circuits and Systems, Vol CAS-33, May 1986. 
>
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Lynda J. Thater          AARnet: lynda at stoney.fit.qut.edu.au
> Lecturer                 ARPA:   THATER at qut.edu.au
>                          Direct Phone: (07) 864-1923
>
>              School of Information Systems
>    _--_|\    Faculty of Information Technology
>   /      QUT Queensland University of Technology
>   \_.--._/   Box 2434 Brisbane 4001 AUSTRALIA
>         v    Phone: +61 7 864-2111   Fax: +61 7 864-1969
--------------------------------------------------------------------------

From lazzaro at boom.CS.Berkeley.EDU Fri Jun 26 16:24:31 1992
From: lazzaro at boom.CS.Berkeley.EDU (John Lazzaro)
Date: Fri, 26 Jun 92 13:24:31 PDT
Subject: No subject
Message-ID: <9206262024.AA10610@boom.CS.Berkeley.EDU>

---------------------------------------------------------------------------------
                            CALL FOR PAPERS
                  Journal of VLSI Signal Processing
             Special Issue on Analog VLSI Computation
---------------------------------------------------------------------------------

A special issue of the Journal of VLSI Signal Processing is planned on the topic: Analog VLSI Computation. The theme of the issue will be the application of new analog VLSI techniques to complex computational tasks, particularly those relating to signal processing systems. Example topics include:

1. Processing of signals for electronic systems used in areas such as voice or image communication,
2. Data compression techniques,
3. Modeling of auditory or visual processes,
4. Analog neuron circuits for learning and adaptation,
5. Noise in analog circuits, noise reduction, and limits to signal precision,
6. Techniques for automatic error control,
7. Use of pulse sequences, and mixed analog-digital systems.

Papers on other topics, particularly new and interesting applications, will be welcome.

The deadline for papers is: 1 October 1992

Papers should be submitted to:

    Michael D. Godfrey
    Editor, Special Issue on Analog Computation
    Journal of VLSI Signal Processing
    Information Systems Laboratory
    Durand Building
    Department of Electrical Engineering
    Stanford University
    Stanford CA 94305, USA
    e-mail: godfrey at isl.stanford.edu

Papers may be submitted by e-mail in TeX or PostScript (PostScript is preferred) or mailed in hard-copy form. All e-mail should be identified with Subject: J. of VLSI Sig. Proc., Special Issue. Please refer to the Section: Information for Authors in a recent issue of the Journal for details about submission requirements and format.

From shrager at xerox.com Mon Jun 29 04:18:20 1992
From: shrager at xerox.com (Jeff Shrager)
Date: Mon, 29 Jun 1992 01:18:20 PDT
Subject: Transfer in Recurrent Networks: A preliminary report and request for advice
Message-ID: <92Jun29.011830pdt.38019@huh.parc.xerox.com>

Following is an abbreviated version of a preliminary report on our attempts to produce instructed training and transfer in recurrent networks. I am posting it in the hope of soliciting advice on how to proceed (or not) and pointers to related work. The longer version of the report is actually not yet available, as results are being computed even as I write. (This version was produced by basically just removing everything that said: ".") We would appreciate any thoughts that you have on how to proceed or where to look for related work.

Jeff Shrager
David Blumenthal

(Incidentally, I'd be happy to give the commonlisp (Sun Franz 4.1) bp driver code (mentioned below) to anyone who needs to run similar supervised experiments with the McClelland and Rumelhart programs. It's slightly special purpose, but very easy to modify.)
--- Please Do Not Quote or Redistribute ---

We have been exploring training regimes, labeling, and transfer in recurrent backpropagation networks. Our goal in this research is to model three aspects of human development: First, people learn to associate words with actions. Second, given such associations, people can, on command, do one or a sequence of actions. Third, by practicing sequential actions put together as a result of verbal direction (or self-direction), they can learn new skills, and give new labels to these skills. Finally, each of these processes may require (or make use of) physical guidance or tutorial remediation by an "expert". For people, this is especially the case for the first of these phenomena.

The metaphorical model that we use in considering these phenomena is that of interaction between a parent and child during joint activity, such as baking muffins. Shrager & Callanan (1991: Proceedings of the Cognitive Science Conference) studied the various means by which parents and their children of about 3-, 4-, and 5-years-old scoop baking soda out of a box for a muffin recipe. It was observed, first, that there is a large amount of non-directive information in the environment, especially in the verbal context, that a learner such as the child might pick up on in order to learn this skill. Furthermore, it was observed that remediation takes place differently at different ages: through physical guidance in the earlier years, and through verbal instruction later on.

We set out to model such a collaborative skill acquisition setting using an algorithmic teacher (`parent') and a Jordan-style recurrent connectionist sequence learner (`child'). The problem was simplified by reducing it to that of training a net to produce a sequence of real-valued (x,y) coordinates corresponding to a simple sequence of positions for the spoon. We chose interior real-valued coordinates for our points instead of 0's and 1's to avoid possible edge effects. Figure 1 exemplifies the learning task.

             ___________
     (.2,.4) |          |
 *<<<<[out]<<<<<<*(.8,.4)|
 *<=============+^       |
    [scoop]     $^       |
                $^ [up]  |
 *>=============+^       |
 *(.2,.2)>[in]>>>*(.8,.2)|
             |___________|

Figure 1: The outline box is meant to represent a box of baking soda. Stars represent the starting and ending points that were trained. Arrows indicate the path heads and tails. Verbal labels (names) are enclosed in [brackets]. Each step of the outer path, in+up+out: (.2,.2)->(.8,.2), (.8,.2)->(.8,.4), (.8,.4)->(.2,.4), is intended to be individually trained into a recurrent network, using a different label. Then the unified path, called "scoop" (>==...>$$>==...>), is to be either verbally composed using the previously learned labels, or else guided through along with the labels. In either case, scoop also has its own label. We think of a parent telling a child something like: to scoop the baking soda you put the spoon in, then bring it up against the box top and pull it out (to level the amount), while, in the case of a younger child, physically guiding him or her through these actions.

Goals

We wished to show that training of [scoop] is facilitated by pretraining with combinations of [in], [up], and [out], or, conversely, that the learning of these sequences is facilitated by pretraining with [scoop]. Secondarily, we wished to explore the function of `label' interactions, where by `label' we mean the presented inputs at the non-recurrent input units of the network.
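[A minimal sketch of the forward dynamics of such a Jordan-style sequence learner, in Python/numpy rather than the lisp/bp setup used for the actual experiments. All names here are hypothetical, bias units are omitted, and the context is assumed to start at zero; only the layer sizes follow the 3232 net described in the General Method below.]

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    W1 = rng.uniform(-0.5, 0.5, (3, 5))   # hidden <- [3 plan | 2 context]
    W2 = rng.uniform(-0.5, 0.5, (2, 3))   # output <- hidden

    def run(label, n_steps=4):
        """Clamp a constant 3-bit label on the plan units; the context
        units carry the previous output (the Jordan recurrence)."""
        ctx, path = np.zeros(2), []
        for _ in range(n_steps):
            h = sigmoid(W1 @ np.concatenate([np.asarray(label, float), ctx]))
            y = sigmoid(W2 @ h)
            path.append(tuple(np.round(y, 2)))
            ctx = y
        return path

    # A net trained on [scoop] should map the label (1,1,0) to the path
    # (.2,.2) -> (.8,.2) -> (.8,.4) -> (.2,.4); an untrained net wanders.
    print(run((1, 1, 0)))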
General Method

For most of the experiments reported here we used a Jordan-style recurrent network with 3 plan units, 2 context units, 3 hidden units, and 2 output units, the context units taking their values from the output units. All of our nets will be identified by the number of units in each part of the net, labeled in the aforementioned order. Thus, the just described net will be called: 3232 (Figure 2).

    -----------------------
           8 9          output
          5 6 7         hidden
     (0 1 2)  (3 4)
      (plan) (context)

Figure 2: The recurrent network, represented in accord with the numbering scheme used by the bp program.

The "bp" program (McClelland & Rumelhart) was used to handle the network learning through backpropagation. A lisp front-end was written for bp within which simple algorithmic experiments could be run to train the network on a sequence of different inputs and to various criteria.

Data: Figure 3 shows each training set. The individual subsequences ([in]/[out]) were given different input codings (010/100), and the entire sequence ([scoop]) was given still another code (generally the combined code: 110). We considered the input codes as labels. Thus each action had a different label.

Figure 3: The training patterns we used in this experiment. (Negative context numbers refer to the number of the unit the context unit is linked to. See M&R, pgs 157-158 for details of this evil hack.)

    in3232.pat
      in0     0 1 0    0  0    .2 .2
      in1     0 1 0   -8 -9    .8 .2
              \   /   \    /   \   /
              label   context  target

    out3232.pat
      scr0    1 0 0    0  0    .8 .4
      scr1    1 0 0   -8 -9    .2 .4

    InOut3232.pat
      InOut0  0 1 0    0  0    .2 .2
      InOut2  0 1 0   -8 -9    .8 .2
      InOut3  1 0 0    0  0    .8 .4
      InOut4  1 0 0   -8 -9    .2 .4

    scoop3232.pat
      scoop0  1 1 0    0  0    .2 .2
      scoop1  1 1 0   -8 -9    .8 .2
      scoop2  1 1 0   -8 -9    .8 .4
      scoop3  1 1 0   -8 -9    .2 .4

Parameters: We shall use the phrases "fully trained" and "to criterion" to mean that the total sum of squares ("ecrit", in the language of bp) was less than or equal to 0.01. Unless otherwise specified, weights were updated after each epoch of training (epoch mode). The training driver proceeded in steps of no smaller than 10 epochs; therefore all results are recorded at some increment of 10 epochs, even if bp had reached criterion before that point.

Experiments

The value of interest to us in our initial experiments is the number of training epochs required to learn [scoop] to criterion, given various kinds of prior experience. That is, we tried to train different parts of the sequence individually before training the whole. There ought to be some savings from training in a simpler task (in, out, or combinations) that can transfer to and improve (speed up) the training of the whole. Two general groups of studies were carried out: Group 1 studies pretrained the network with various combinations of [in], [out], [up], to varying degree, and then recorded the time to train [scoop] to criterion. Group 2 studies did the opposite, pretraining with [scoop] and recording the training time for the subsequences. In most cases, different labels, composed from simple binary values (e.g., 010, 101), were assigned to each subsequence, and then [scoop] was given the unified label (111) or average label (.5 .5 .5). Each reported mean and deviation results from 50 repetitions of the experiment, carried out on a newly started copy of bp (thus guaranteeing random initial weights). Deviations will be reported in parentheses following means. If no deviation is reported, the value is not a mean. When a scoop training value is reported, it is a mean (sd) difference between the end of pretraining (to whatever criterion is indicated, or 0.01) and the point at which the [scoop] pattern reached criterion, on a per-trial basis. Thus, unless otherwise specified, the phrase "pretrained" means "pretrained to criterion (of ecrit 0.01, to the next increment of 10 epochs)".
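[To make the protocol concrete, here is a hedged Python/numpy stand-in for one such run: epoch-mode backpropagation on a 3232 Jordan net, trained until tss falls below ecrit, using the Figure 3 patterns. Contexts are regenerated from the previous outputs where Figure 3 shows -8/-9 and reset to zero where it shows 0 0. The learning rate, weight ranges, and all names are assumptions, not the bp defaults.]

    import numpy as np

    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    # Figure 3 patterns as (label, target) pairs; each list is one sequence,
    # and the context resets to zero at the start of each sequence.
    IN    = [((0, 1, 0), (.2, .2)), ((0, 1, 0), (.8, .2))]
    OUT   = [((1, 0, 0), (.8, .4)), ((1, 0, 0), (.2, .4))]
    SCOOP = [((1, 1, 0), p) for p in [(.2, .2), (.8, .2), (.8, .4), (.2, .4)]]

    def new_net(rng):
        return [rng.uniform(-.5, .5, (3, 5)), rng.uniform(-.5, .5, (2, 3))]

    def epoch(net, seqs, lr=0.5):
        """One pass over all sequences; gradients are accumulated and the
        weights updated once at the end (epoch mode). Returns the tss."""
        W1, W2 = net
        dW1, dW2, tss = np.zeros_like(W1), np.zeros_like(W2), 0.0
        for seq in seqs:
            ctx = np.zeros(2)                  # the "0 0" rows of Figure 3
            for label, target in seq:
                x = np.concatenate([np.asarray(label, float), ctx])
                h = sigmoid(W1 @ x)
                y = sigmoid(W2 @ h)
                e = y - np.asarray(target)
                tss += float(e @ e)
                d2 = e * y * (1 - y)           # output deltas
                d1 = (W2.T @ d2) * h * (1 - h) # backpropagated hidden deltas
                dW2 += np.outer(d2, h)
                dW1 += np.outer(d1, x)
                ctx = y                        # the "-8 -9" rows: Jordan recurrence
        W1 -= lr * dW1
        W2 -= lr * dW2
        return tss

    def train(net, seqs, ecrit=0.01, step=10, max_epochs=50000):
        """Train until tss <= ecrit, counting epochs in increments of 10."""
        for n in range(step, max_epochs + 1, step):
            for _ in range(step):
                tss = epoch(net, seqs)
            if tss <= ecrit:
                return n
        return max_epochs

    rng = np.random.default_rng(1)
    net = new_net(rng)
    pre = train(net, [IN, OUT])    # pretrain [InOut] to criterion, then
    post = train(net, [SCOOP])     # train [scoop]; compare `post` against
    print(pre, post)               # training [scoop] alone on a fresh net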
Group 1 Studies (on the 3232 network)

The training of [scoop] alone required 316 (87) epochs. Pretraining with [in] resulted in a [scoop] training time of 326 (236). This difference was (pretty obviously!) not significant by a t-test. However, pretraining with "in" followed by "out" and then "scoop" resulted in a much longer training time, 600 (351), which differs from scoop alone (t(8)=2.56, p<.025), and from in+scoop (t(8)=2.05, p<.05). Similar results were obtained by pretraining with InOut.

We next attempted to parameterize the amount of ill effect that pretraining with InOut was having, by "nudging" the network, that is, by changing the pretraining total sum of squares (ecrit) values. Figure 4, the graph of exp9, plots the amount of time that [scoop] takes to train, given pretraining to different critical tss values, ranging from 0.25 (very little pretraining) through 0.02 (greatest amount of pretraining). One can see that although the data is very noisy (r^2=.16) there is a trend towards [scoop] requiring more training as the network "overlearns" InOut.

    [InOut] nudge ecrit    [scoop] training rate to 0.01 ecrit
    -------------------    -----------------------------------
          0.20             (MEAN 554.0     ERR 48.286182  DEV 152.69432)
          0.15             (MEAN 415.0     ERR 57.334305  DEV 181.30699)
          0.10             (MEAN 574.44446 ERR 86.63639   DEV 259.90918)
          0.08             (MEAN 366.0     ERR 62.203964  DEV 196.7062)
          0.06             (MEAN 534.0     ERR 84.33003   DEV 266.675)
          0.04             (MEAN 673.0     ERR 68.02042   DEV 215.09946)
          0.02             (MEAN 644.0     ERR 88.206825  DEV 278.93448)

Figure 4. `Nudge' criterion and [scoop] training rates for various levels of nudging. [This is supposed to be a plot, but it appears in this textual version as a table.]

[Report of a number of unsuccessful attempts with different network architectures deleted.]

Group 2 Studies

Since we failed to find consistent pretraining effects from subsequences to the whole sequence, we investigated transfer in the other direction: pretraining [scoop] and looking for effects on the training times of different subsequences. This is non-trivial because the parts of the sequence were again given different labels and did not always start at the first point in the [scoop] sequence. The Jordan-style recurrent network tries to replicate a particular sequence in the order in which it was learned. This is the effect that we are both depending upon and fighting against. We found a considerable pretraining effect in most cases (Table 1).

                    alone      with [scoop]              remarks
                               pretraining    (label)
                  --------    ------------    -------
    in             130,19        12,4 *       (0 1 0)
    out             31,7           ?          (1 0 0)
    up              83,23       965,99 *      (0 0 0)    zero effect?
    123point       458,132      250,154 *     (0 1 0)
    1234point      288,28       375,158       (0 1 0)
    23point         87,25       263,133 *     (0 1 0)
    234point       350,95       146,110 *     (0 1 0)
    1point           8,1          1,1 *       (0 1 0)
    2point           8,1        107,226 *     (0 1 0)
    4point           7,2           ?          (0 1 0)
    scpout          77,12          ?          (1 1 0)
    scp234point    301,85       294,161       (1 1 0)

Table 1: Transfer from [scoop] to its subsequences. Numbers refer to subsequence points from the [scoop] sequence (the first point is 1, the last is 4):

    4 <<<<< 3
            ^
    1 >>>>> 2

Patterns that begin with "scp" have the same labels (1 1 0) as [scoop]. Results are mean,deviation from 50 trials. * indicates a significant difference.
These results suggest that the network learns the sequence, and that its knowledge of the sequence is not completely linked to the inputs. Thus, after pretraining with [scoop], the network can learn a similar sequence with a different label and/or a different starting point relatively easily. However, the large deviations in this case suggest that the network may be learning several different versions of [scoop], and that some of these lend themselves to transfer while others do not. (In a very few cases the system appeared to go into an infinite loop, apparently never reaching criterion, using precisely the same inputs that in most cases resulted in small, reasonable training times. These runs were stopped by force at times on the order of 1E5 to 1E6 epochs, depending upon when we noticed the problem, and were excluded from the results. This may have been an infinite, or at least very deep, hole in the space. Changes in learning parameters may have fixed these problems.)

From cateau at star.phys.metro-u.ac.jp Mon Jun 29 18:06:04 1992
From: cateau at star.phys.metro-u.ac.jp (Hideyuki Kato)
Date: Mon, 29 Jun 92 18:06:04 JST
Subject: Tech Report Available
Message-ID: <9206290906.AA12984@star.phys.metro-u.ac.jp>

The following technical report is now available in neuroprose:

    Power law in the performance of the human memory
    and a simulation with a neural network model

    Tatsuhiro Nakajima, Nobuko Fuchikami
    Department of Physics, Tokyo Metropolitan University
    1-1 Minami-Osawa, Hachioji, Tokyo 192-03, Japan

    Hideyuki Cateau
    Department of Physics, University of Tokyo
    Bunkyo-ku, Tokyo 113, Japan

    Hiroshi Nunokawa
    National Laboratory for High Energy Physics (KEK)
    Tsukuba-shi, Ibaraki 305, Japan

This paper will appear in the proceedings of ISKIT'92. The report number of this paper is TMUP-HEL-9203, TU-611, KEK-TH-334 or KEK preprint 92-40.

Abstract

We show that the learning pace of the back propagation model is described by a power law with high precision. Interestingly, the same power law was found for human memory by a psychologist some time ago. Our result therefore provides quantitative evidence that the back propagation model, simple though it is, shares some essential structure with the human brain. In the course of the discussion we construct a novel memory model. This model naturally avoids the notable difficulty of the back propagation network, namely that its learning is very sensitive to the initial conditions.

* Hard copies are not available, sorry *

Instructions for obtaining by anonymous ftp:

    %ftp archive.cis.ohio-state.edu
    Name: anonymous
    Password: neuron
    ftp> bin
    ftp> cd pub/neuroprose
    ftp> get nakajima.power.tar.Z
    ftp> quit
    %uncompress nakajima.power.tar.Z
    %tar xvfo nakajima.power.tar

Hideyuki Cateau
Department of Physics, University of Tokyo, Hongo, Bunkyo-ku, Tokyo, 113 Japan
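[The power-law claim in the abstract above corresponds to a straight line on log-log axes. As a rough illustration only, with synthetic data rather than the report's, the exponent can be estimated by linear regression on the logged learning curve:]

    import numpy as np

    t = np.arange(1, 2001)                   # training epochs
    rng = np.random.default_rng(0)
    # synthetic learning curve E(t) ~ c * t**(-b) with multiplicative noise
    E = 3.0 * t**-0.7 * np.exp(0.05 * rng.normal(size=t.size))

    slope, intercept = np.polyfit(np.log(t), np.log(E), 1)
    print(f"exponent ~ {-slope:.2f}, prefactor ~ {np.exp(intercept):.2f}")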