report offered

FRANKLINS%MEMSTVX1.BITNET@VMA.CC.CMU.EDU
Fri Jun 15 13:44:00 EDT 1990


What follows is the abstract of a technical report,
really more of a position paper, by Max Garzon and
myself. It deals more with neural networks as
computational tools than as models of cognition.
It was motivated by more technical work of ours on the
outer reaches of neural computation under ideal
conditions. An extended abstract of this work appeared
as "Neural computability II", in Proc. 3rd Int. Joint
Conf. on Neural Networks, Washington, D.C., 1989,
vol. I, 631-637.


*******************************************************

      When does a neural network solve a problem?

              Stan Franklin and Max Garzon

           Institute for Intelligent Systems
          Department of Mathematical Sciences
               Memphis State University
                Memphis, TN 38152  USA



                       Abstract

     Reproducibility, scalability, controllability, and
physical realizability are characteristic features of
conventional solutions to algorithmic problems. Their
desirability for neural network approaches to
computational problems is discussed. It is argued that
reproducibility requires the eventual stability of the
network at a correct answer, scalability requires
consideration of successively larger finite (or just
infinite) networks, and the other two features require
discrete activations. A precise definition of solution
with these properties is offered. The importance of the
stability problem in neurocomputing is discussed, as
well as the need for study of infinite networks.
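To make the abstract's notion of "eventual stability at a correct answer" concrete, here is a minimal sketch (our illustration, not taken from the report) of a small Hopfield-style network with discrete +1/-1 activations: starting from a corrupted input, repeated updates settle into a fixed point equal to the stored pattern, and that fixed point is the network's "answer".

```python
import numpy as np

rng = np.random.default_rng(0)

# Stored pattern plays the role of the "correct answer".
pattern = np.array([1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)  # Hebbian weights
np.fill_diagonal(W, 0)                        # no self-connections

state = pattern.copy()
state[0] = -state[0]                          # corrupt one unit

# Asynchronous discrete updates; the network is "stable" once a
# full sweep changes no unit's activation (a fixed point).
for _ in range(10):                           # bound on sweeps
    changed = False
    for i in rng.permutation(len(state)):
        new = 1 if W[i] @ state >= 0 else -1
        if new != state[i]:
            state[i] = new
            changed = True
    if not changed:
        break

print(state.tolist())  # → [1, -1, 1, -1, 1, -1], the stored pattern
```

Because the activations are discrete and the dynamics reach a fixed point, the same input reproducibly yields the same answer, which is the property the abstract ties to reproducibility.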

*******************************************************

A hard copy of the position paper (report 90-11) and/or
a full version of "Neural computability II"  may be
requested from franklins@memstvx1.bitnet. We would
greatly appreciate your comments.

Please do not REPLY to this message.

-- Stan Franklin
