Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
Stephen José Hanson
jose at rubic.rutgers.edu
Thu Jan 27 09:37:40 EST 2022
Juergen, I have read through the GMDH paper and a 1971 review paper by
Ivakhnenko. These are papers about function approximation. The method
builds a series of polynomial functions stacked in filtered sets. The
filtered sets are chosen based on best fit and, from what I can tell,
were grown manually, so this must have been a tedious and slow process
(I assume it could be automated). So the GMDH networks are "deep" in
the sense that they are stacked four deep in Figure 1 (eight deep in
another). Interestingly, they use (with obvious function-approximation
justification) polynomials of various degrees. Has this much to do with
neural networks? Yes, there were examples initiated by Rumelhart (and
me:
https://www.routledge.com/Backpropagation-Theory-Architectures-and-Applications/Chauvin-Rumelhart/p/book/9780805812596),
based on poly-synaptic dendrite complexity, but not in the GMDH paper,
which was specifically about function approximation. Ivakhnenko lists
four reasons for the approach they took, mainly reducing data size and
being more efficient with the data one had. There is no mention of
"internal representations."
So when Terry talks about "internal representations," does he mean
function approximation? Not so much. That is of course part of it, but
the actual focus is on cognitive, perceptual, or motor functions:
representation in the brain. Hidden units (which could be polynomials)
cluster, project, and model the input features with respect to the
function constraints conditioned by the training data. This is more
similar to model specification through a search in function space. And
the original Rumelhart meaning of internal representation in PDP Vol. 1
was, in the specific case, about representing certain binary functions
(XOR), but more generally about the need for "neurons" (interneurons)
explicitly between input (sensory) and output (motor). Consider
NETtalk, in which I did the first hierarchical clustering of the hidden
units over the input features (letters). What appeared probably wasn't
surprising, but without any model specification, the network (with
hidden units) learned the VOWEL/CONSONANT distinction just from
training (Hanson & Burr, 1990). This is a clear example of "internal
representations" in the sense of Rumelhart, and it was not in the
intellectual space of Ivakhnenko's Group Method of Data Handling. (Some
of this is discussed in more detail in a recent conversation with Terry
Sejnowski, and in another to appear shortly with Geoff Hinton; see
AIHUB.org under Opinions.)
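For those who haven't seen this kind of analysis, the hidden-unit
clustering amounts to the following sketch; the random activations and
the 80-unit layer are placeholders standing in for a trained network's
codes, not the original NETtalk data:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram

    # Placeholder: mean hidden-layer activation vector per input
    # letter, as collected from a trained network (NETtalk used
    # sliding letter windows).
    letters = list("abcdefghijklmnopqrstuvwxyz")
    codes = np.random.rand(len(letters), 80)  # stand-in activations

    # Agglomerative (average-link) clustering over the hidden codes;
    # with the real NETtalk activations the top split separated
    # vowels from consonants (Hanson & Burr, 1990).
    Z = linkage(codes, method="average", metric="euclidean")
    dendrogram(Z, labels=letters)
    plt.show()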
Now I suppose one could be cynical and opportunistic and conclude that,
if you wanted more clicks, rather than title your article GROUP METHOD
OF DATA HANDLING you should at least consider NEURAL NETWORKS FOR DATA
HANDLING, even if you didn't think neural networks had anything to do
with your algorithm; after all, everyone else is doing it! It might
even get published, or read, in this time frame. But this is not
scholarship. These publication threads are related but not dependent.
And although they diverge, they could be informative if one were to try
to develop polynomial inductive growth networks (see Fahlman, 1989:
cascade correlation; and Hanson, 1990: meiosis networks) for motor
control in the brain. But that's not what happened. I think, as with
Gauss, you need to drop this specific claim as well.
With best regards,
Steve
On 1/25/22 12:03 PM, Schmidhuber Juergen wrote:
> For a recent example, your 2020 deep learning survey in PNAS [S20] claims that your 1985 Boltzmann machine [BM] was the first NN to learn internal representations. This paper [BM] neither cited the internal representations learnt by Ivakhnenko & Lapa's deep nets in 1965 [DEEP1-2]