rbfs

Stephen J Hanson jose at tractatus.bellcore.com
Tue Sep 27 09:37:55 EDT 1988



>Re: Hideki KAWAHARA's recent postings on Radial basis functions

>Also, if you make a network with one hidden layer of 'spherical graded
>units' (Hanson and Burr, "Knowledge representation in connectionist
>networks"), and a simple perceptron as output unit (plus some simplifying
>assumptions), you can derive the RBF method!!
>>niranjan
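
(For concreteness, here is a minimal sketch of that construction--a hidden layer of
Gaussian "spherical graded" units feeding one linear output unit.  It is written in
Python, all names are illustrative, and the simplifying assumptions are mine, not
necessarily those niranjan has in mind:)

    import numpy as np

    def rbf_net(x, centers, widths, weights, bias):
        # Each hidden unit responds to distance from its own center --
        # a "spherical graded" unit rather than a linear threshold unit.
        dists = np.linalg.norm(x - centers, axis=1)          # one distance per hidden unit
        hidden = np.exp(-(dists ** 2) / (2.0 * widths ** 2)) # Gaussian radial activation
        # The output unit is a simple perceptron, here left linear:
        # a weighted sum of the radial hidden activities.
        return hidden @ weights + bias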

It's also worth noting that any sort of generalized dichotomy (discriminant)
can be naturally embedded in back-prop nets--in terms of polynomial boundaries (also
suggested in Hanson & Burr) or any sort of generalized volume or edge one would like
(sigma-pi units, for example, give simple rectangular volumes).  I believe this sort of
variation is related to synaptic-dendritic interactions, which one might
imagine could be considerably more complex than linear.  However, I suspect
there is a tradeoff between neuron complexity and learning generality as one increases the
complexity of the discriminant or predicate being used--consequently, as the componential
power of the network increases, the advantage of network computation may decrease.
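
(To make the range of discriminants concrete, here is a rough sketch--again Python,
all names and functional forms illustrative rather than the formulations in the papers
cited--of three hidden-unit types: a linear unit, a spherical graded unit, and a
second-order sigma-pi style unit whose product terms give polynomial rather than
planar boundaries:)

    import numpy as np

    def linear_unit(x, w, b):
        # First-order unit: sigmoid of a weighted sum, a planar decision boundary.
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    def spherical_unit(x, center, width):
        # "Spherical graded" unit: response falls off with distance from its center.
        return np.exp(-np.sum((x - center) ** 2) / (2.0 * width ** 2))

    def sigma_pi_unit(x, pair_weights, b):
        # Second-order sigma-pi style unit: weighted sum of pairwise input products,
        # yielding polynomial (quadric) decision surfaces rather than a hyperplane.
        return 1.0 / (1.0 + np.exp(-(b + np.sum(pair_weights * np.outer(x, x)))))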


(As usual, "generalized discriminants" were suggested previously in the statistical
and pattern recognition literature--see Duda and Hart, pp. 134-138, and
Tou & Gonzalez, Pattern Recognition Principles, Addison-Wesley,
1974, pp. 48-52.  Btw, I don't think the fact that many sorts of statistical
methods seem to "pop out" of neural network approaches means that the neural
network framework is somehow derivative--remember that many of the statistical
models and methods are ad hoc and rely explicitly on "normative" sorts of assumptions,
which may provide the only connection to some other sort of statistical method.
In fact, I think it is rather remarkable that such simple "neural-like" assumptions
can lead to families of such powerful general methods.)

		Stephen Hanson


