<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#ecca99">
<p><br>
</p>
<p>Juergen, I have read through the GMDH paper and a 1971 review paper
      by Ivakhnenko. These are papers about function approximation.
      The method uses a series of polynomial functions stacked in
      filtered sets. The filtered sets are chosen by best fit and, from
      what I can tell, were grown manually, so this must have been a
      tedious and slow process (I assume it could have been automated).
      So are the GMDH networks "deep"? They are stacked four deep in
      Figure 1 (eight deep in another figure). Interestingly, they use
      polynomials of various degrees (with an obvious
      function-approximation justification). Has this much to do with
      neural networks? Yes, there were examples initiated by Rumelhart
      (and by me:
      <a class="moz-txt-link-freetext" href="https://www.routledge.com/Backpropagation-Theory-Architectures-and-Applications/Chauvin-Rumelhart/p/book/9780805812596">https://www.routledge.com/Backpropagation-Theory-Architectures-and-Applications/Chauvin-Rumelhart/p/book/9780805812596</a>),
      based on poly-synaptic dendrite complexity, but not in the GMDH
      paper, which was specifically about function approximation.
      Ivakhnenko lists four reasons for the approach they took: mainly
      reducing data size and being more efficient with the data one
      had. There is no mention of "internal representations".<br>
</p>
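<p>To make the mechanics concrete, here is a minimal sketch of that
      layer-growing scheme as I read it. This is my own illustration,
      not Ivakhnenko's exact procedure; the function names and
      parameters are mine. Each layer fits quadratic polynomials of
      pairs of the previous layer's outputs by least squares, keeps the
      candidates that fit a held-out selection set best, and stops when
      the selection error no longer improves.</p>
<pre>
# Sketch of GMDH-style layer growing (illustrative, not Ivakhnenko's exact procedure).
import itertools
import numpy as np

def quad_features(a, b):
    # design matrix for y ~ c0 + c1*a + c2*b + c3*a*b + c4*a^2 + c5*b^2
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

def grow_gmdh(X_train, y_train, X_sel, y_sel, keep=8, max_layers=4):
    train, sel = X_train, X_sel
    best_err = np.inf
    for layer in range(max_layers):
        # fit one quadratic candidate unit for every pair of current units
        candidates = []
        for i, j in itertools.combinations(range(train.shape[1]), 2):
            coeffs, *_ = np.linalg.lstsq(quad_features(train[:, i], train[:, j]),
                                         y_train, rcond=None)
            pred = quad_features(sel[:, i], sel[:, j]) @ coeffs
            candidates.append((np.mean((pred - y_sel) ** 2), coeffs, i, j))
        # keep only the candidates that fit the held-out selection set best
        candidates.sort(key=lambda c: c[0])
        kept = candidates[:keep]
        if kept[0][0] >= best_err:
            break                      # selection error stopped improving
        best_err = kept[0][0]
        # the surviving units' outputs become the next layer's inputs
        train = np.column_stack([quad_features(train[:, i], train[:, j]) @ c
                                 for _, c, i, j in kept])
        sel = np.column_stack([quad_features(sel[:, i], sel[:, j]) @ c
                               for _, c, i, j in kept])
        print(f"layer {layer + 1}: selection MSE {best_err:.4f}")
    return best_err
</pre>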
<p>So when Terry talks about "internal representations", does he
      mean function approximation? Not so much. That is of course part
      of it, but the actual focus is on cognitive or perceptual or
      motor functions: representation in the brain. Hidden units
      (which could be polynomials) cluster and project and model the
      input features with respect to the function constraints
      conditioned by the training data. This is more similar to model
      specification through function-space search. And the original
      Rumelhart meaning of internal representation in PDP Vol. 1 was in
      the case of representing certain binary functions (XOR), but more
      generally about the need for "neurons" (inter-neurons) explicitly
      between input (sensory) and output (motor). Consider NETtalk, in
      which I did the first hierarchical clustering of the hidden units
      over the input features (letters). What appeared probably wasn't
      surprising, but without any model specification the network (with
      hidden units) learned the VOWEL and CONSONANT distinction just
      from training (Hanson & Burr, 1990). This is a clear example of
      "internal representations" in the sense of Rumelhart. It was not
      in the intellectual space of Ivakhnenko's Group Method of Data
      Handling. (Some of this is discussed in more detail in a recent
      conversation with Terry Sejnowski, and in another to appear
      shortly with Geoff Hinton; see AIHUB.org under Opinions.)</p>
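<p>For readers unfamiliar with that analysis, here is a minimal
      sketch of the kind of hidden-unit clustering involved, assuming
      one already has a trained network's hidden-layer activations; the
      array shapes and labels are illustrative, not the actual NETtalk
      data.</p>
<pre>
# Sketch of hierarchical clustering of hidden-unit activations over input letters.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

def cluster_hidden_units(activations, letters):
    # activations: (n_patterns, n_hidden) hidden-layer activity per input pattern
    # letters:     length-n_patterns sequence giving each pattern's input letter
    letters = np.asarray(letters)
    labels = sorted(set(letters))
    # one mean hidden-activation profile per letter
    profiles = np.vstack([activations[letters == ch].mean(axis=0) for ch in labels])
    # agglomerative clustering of the letter profiles; in a trained network the
    # top split tends to separate vowels from consonants
    Z = linkage(profiles, method="average", metric="euclidean")
    dendrogram(Z, labels=labels)
    plt.show()
    return Z
</pre>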
<p>Now I suppose one could be cynical and opportunistic, and even
      conclude that if you wanted to get more clicks, rather than title
      your article GROUP METHOD OF DATA HANDLING, you should at least
      consider NEURAL NETWORKS FOR HANDLING DATA, even if you didn't
      think neural networks had anything to do with your algorithm;
      after all, everyone else is doing it! It might get published in
      this time frame, or even read. But this is not scholarship.
      These publication threads are related but not dependent. And
      although they diverge, they could be mutually informative if one
      were to try to develop polynomial inductive-growth networks (see
      Fahlman, 1989: cascade correlation; and Hanson, 1990: meiosis
      networks) for motor control in the brain. But that's not what
      happened. I think, as with Gauss, you need to drop this specific
      claim as well.<br>
</p>
<p>With best regards,</p>
<p>Steve<br>
</p>
<p><br>
</p>
<div class="moz-cite-prefix">On 1/25/22 12:03 PM, Schmidhuber
Juergen wrote:<br>
</div>
<blockquote type="cite"
cite="mid:58AC5011-BF6A-453F-9A5E-FAE0F63E2B02@supsi.ch">
<pre class="moz-quote-pre" wrap="">For a recent example, your 2020 deep learning survey in PNAS [S20] claims that your 1985 Boltzmann machine [BM] was the first NN to learn internal representations. This paper [BM] neither cited the internal representations learnt by Ivakhnenko & Lapa's deep nets in 1965 [DEEP1-2]</pre>
</blockquote>
<div class="moz-signature">-- <br>
<img src="cid:part1.FEA7F75C.354191ED@rubic.rutgers.edu"
border="0"></div>
</body>
</html>