<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#ecca99">
<p><font size="+1">+1</font></p>
<p><font size="+1">So I was the 4th program chair of NIPS back in
1991, the 5th General Chair.<br>
</font></p>
<p><font size="+1">I have been to every advisory/exec board meeting
since that time, and to almost every NIPS until Covid hit--the last one
in 2019.</font></p>
<p><font size="+1">I have never seen or experienced a "cabal" of
Terry "cohorts", just the opposite. Terry has maintained a
high integrity, honest environment and listened to all input and
concerns. NIPS is truly a democratic, serious and fair
enterprise and organization, which is due to Terry's careful and
light touch on the rudder.<br>
</font></p>
<p><font size="+1">Sue's points are obvious to anyone who has
curated this conference. It's really impossible to "subtly" or
otherwise change the direction of the conference or prevent good
research from being accepted. Is there a huge miss rate? No
doubt. But the conference has been a success because of its
transparency and its scientific diversity.</font></p>
<p><font size="+1">I really don't understand where this is coming
from, but it is certainly not from the well-documented concerns that
Juergen has raised. He and I disagree on historical
interpretation... but I don't think this should be taken as
evidence of some larger paranoid view of the field and the
invisible hand that is controlling it.</font></p>
<p><font size="+1">Steve<br>
</font></p>
<p><font size="+1"><br>
</font></p>
<div class="moz-cite-prefix">On 1/4/22 10:47 AM, Sue Becker wrote:<br>
</div>
<blockquote type="cite"
cite="mid:14ffb6b0bc0902bb19bc1fc27b42eef8@mcmaster.ca">Pierre,
I'm responding to your comment here:
<br>
<br>
<blockquote type="cite">Terry: ... you have made sure, year after
year, that you and your BHL/CIFAR
<br>
friends were able to control and subtly manipulate NIPS/NeurIPS
<br>
(misleading the field in wrong directions, preventing new ideas
and
<br>
outsiders from flourishing, and distorting credit attribution).
<br>
<br>
Can you please explain to this mailing list how this serves as
being "a
<br>
good role model" (to use your own words) for the next
generation?
<br>
</blockquote>
<br>
As loath as I am to wade into what has become a cesspool of a
debate, you have gone way outside the bounds of accuracy, not to
mention civility and decency, in directing your mudslinging at
Terry Sejnowski. If anything, Terry deserves recognition and
thanks for his many years of service to this community.
<br>
<br>
If you think that NeurIPS is run by a bunch of insiders, try
stepping up and volunteering your service to this conference, be a
longtime committed reviewer, then become an Area Chair, do an
outstanding job and be selected as the next program chair and then
general chair. That is one path to influencing the future of the
conference. Much more importantly, the hundreds of dedicated
reviewers are the ones who actually determine the content of the
meeting, by identifying the very best papers out of the thousands
of submissions received each year. There is no top-down control
or manipulation over that process.
<br>
<br>
Cheers,
<br>
Sue
<br>
<br>
---
<br>
Sue Becker, Professor
<br>
Neurotechnology and Neuroplasticity Lab, PI
<br>
Dept. of Psychology, Neuroscience & Behaviour, McMaster
University
<br>
<a class="moz-txt-link-abbreviated" href="http://www.science.mcmaster.ca/pnb/department/becker">www.science.mcmaster.ca/pnb/department/becker</a>
<br>
<br>
<br>
On 2022-01-03 09:55, Baldi,Pierre wrote:
<br>
<blockquote type="cite">Terry:
<br>
<br>
We can all agree on the importance of mentoring the next
generation.
<br>
However, given that:
<br>
<br>
1) you have been in full and sole control of the NIPS/NeurIPS
foundation
<br>
since the 1980s;
<br>
<br>
2) you have been in full and sole control of Neural Computation
since
<br>
the 1980s;
<br>
<br>
3) you have extensively published in Neural Computation (and now
also PNAS);
<br>
<br>
4) you have made sure, year after year, that you and your
BHL/CIFAR
<br>
friends were able to control and subtly manipulate NIPS/NeurIPS
<br>
(misleading the field in wrong directions, preventing news ideas
and
<br>
(misleading the field in wrong directions, preventing new ideas
<br>
<br>
Can you please explain to this mailing list how this serves as
being "a
<br>
good role model" (to use your own words) for the next
generation?
<br>
<br>
Or did you mean it in a more cynical way--indeed this is one of
the
<br>
possible ways for a scientist to be "successful"?
<br>
<br>
--Pierre
<br>
<br>
<br>
<br>
On 1/2/2022 12:29 PM, Terry Sejnowski wrote:
<br>
<blockquote type="cite">We would be remiss not to acknowledge
that backprop would not be
<br>
possible without the calculus,
<br>
so Isaac Newton should also have been given credit, at least
as much
<br>
credit as Gauss.
<br>
<br>
All these threads will be sorted out by historians one hundred
years
<br>
from now.
<br>
Our precious time is better spent moving the field forward.
There is
<br>
much more to discover.
<br>
<br>
A new generation with better computational and mathematical
tools than
<br>
we had back
<br>
in the last century has joined us, so let us be good role
models and
<br>
mentors to them.
<br>
<br>
Terry
<br>
<br>
-----
<br>
<br>
On 1/2/2022 5:43 AM, Schmidhuber Juergen wrote:
<br>
<blockquote type="cite">Asim wrote: "In fairness to Jeffrey
Hinton, he did acknowledge the
<br>
work of Amari in a debate about connectionism at the ICNN’97
.... He
<br>
literally said 'Amari invented back propagation'..." when he
sat next
<br>
to Amari and Werbos. Later, however, he failed to cite
Amari’s
<br>
stochastic gradient descent (SGD) for multilayer NNs
(1967-68)
<br>
[GD1-2a] in his 2015 survey [DL3], his 2021 ACM lecture
[DL3a], and
<br>
other surveys. Furthermore, SGD [STO51-52] (Robbins, Monro,
Kiefer,
<br>
Wolfowitz, 1951-52) is not even backprop. Backprop is just a
<br>
particularly efficient way of computing gradients in
differentiable
<br>
networks, known as the reverse mode of automatic
differentiation, due
<br>
to Linnainmaa (1970) [BP1] (see also Kelley's precursor of
1960
<br>
[BPa]). Hinton did not cite these papers either, and in 2019
<br>
embarrassingly did not hesitate to accept an award for
having
<br>
"created ... the backpropagation algorithm” [HIN]. All
references and
<br>
more on this can be found in the report, especially in Sec. XII.
<br>
<br>
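Since backprop in this sense is just the reverse mode of automatic differentiation, here is a minimal illustrative sketch in Python of that idea (a toy scalar "tape"; the class, names, and example are mine, for illustration only, not taken from the report or from Linnainmaa's paper):
<br>
<pre>
import math

class Var:
    """Scalar node: stores a value plus a record of how it was
    computed, so gradients can flow backward along the graph."""
    def __init__(self, value, parents=()):
        self.value = value        # result of the forward computation
        self.parents = parents    # tuples of (parent Var, local partial)
        self.grad = 0.0           # d(output)/d(this), set by backward()

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def tanh(self):
        t = math.tanh(self.value)
        return Var(t, ((self, 1.0 - t * t),))

def backward(output):
    """One reverse sweep in topological order; its cost is
    proportional to that of the forward pass."""
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(output)
    output.grad = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.grad += node.grad * local   # chain rule, accumulated

# y = tanh(w*x + b): both dy/dw and dy/db from a single backward pass.
w, x, b = Var(0.5), Var(2.0), Var(-1.0)
y = (w * x + b).tanh()
backward(y)
print(y.value, w.grad, b.grad)
</pre>
<br>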
The deontology of science requires: If one "re-invents"
something
<br>
that was already known, and only becomes aware of it later,
one must
<br>
at least clarify it later [DLC], and correctly give credit
in all
<br>
follow-up papers and presentations. Also, ACM's Code of
Ethics and
<br>
Professional Conduct [ACM18] states: "Computing
professionals should
<br>
therefore credit the creators of ideas, inventions, work,
and
<br>
artifacts, and respect copyrights, patents, trade secrets,
license
<br>
agreements, and other methods of protecting authors' works."
LBH didn't.
<br>
<br>
Steve still doesn't believe that linear regression of 200
years ago
<br>
is equivalent to linear NNs. In a mature field such as math
we would
<br>
not have such a discussion. The math is clear. And even
today, many
<br>
students are taught NNs like this: let's start with a linear
<br>
single-layer NN (activation = sum of weighted inputs). Now
minimize
<br>
mean squared error on the training set. That's good old
linear
<br>
regression (method of least squares). Now let's introduce
multiple
<br>
layers and nonlinear but differentiable activation
functions, and
<br>
derive backprop for deeper nets in 1960-70 style (still used
today,
<br>
half a century later).
<br>
<br>
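To make that textbook progression concrete, here is a hedged sketch in Python (synthetic data and names are illustrative, not from any cited paper): a linear single-layer net trained by gradient descent on mean squared error recovers exactly the closed-form least-squares solution.
<br>
<pre>
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # inputs, one row per example
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=100)

# Gauss/Legendre: closed-form method of least squares.
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# The same model viewed as a linear single-layer NN
# (activation = sum of weighted inputs), trained on MSE.
w = np.zeros(3)                                # weights w_i (aka beta_i)
for _ in range(2000):
    grad = 2.0 * X.T @ (X @ w - y) / len(y)    # gradient of the MSE
    w -= 0.1 * grad                            # plain gradient descent

print(np.allclose(w, w_ls, atol=1e-4))         # True: same solution
</pre>
<br>
Introducing additional layers with nonlinear but differentiable activations is precisely where the 1960-70 style derivation of backprop would enter.
<br>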
Sure, an important new variation of the 1950s (emphasized by
Steve)
<br>
was to transform linear NNs into binary classifiers with
threshold
<br>
functions. Nevertheless, the first adaptive NNs (still
widely used
<br>
today) are 1.5 centuries older except for the name.
<br>
<br>
Happy New Year!
<br>
<br>
Jürgen
<br>
<br>
<br>
<blockquote type="cite">On 2 Jan 2022, at 03:43, Asim Roy
<a class="moz-txt-link-rfc2396E" href="mailto:ASIM.ROY@asu.edu"><ASIM.ROY@asu.edu></a> wrote:
<br>
<br>
And, by the way, Paul Werbos was also there at the same
debate. And
<br>
so was Teuvo Kohonen.
<br>
<br>
Asim
<br>
<br>
-----Original Message-----
<br>
From: Asim Roy
<br>
Sent: Saturday, January 1, 2022 3:19 PM
<br>
To: Schmidhuber Juergen <a class="moz-txt-link-rfc2396E" href="mailto:juergen@idsia.ch"><juergen@idsia.ch></a>;
<a class="moz-txt-link-abbreviated" href="mailto:connectionists@cs.cmu.edu">connectionists@cs.cmu.edu</a>
<br>
Subject: RE: Connectionists: Scientific Integrity, the
2021 Turing
<br>
Lecture, etc.
<br>
<br>
In fairness to Geoffrey Hinton, he did acknowledge the work
of Amari
<br>
in a debate about connectionism at the ICNN’97
(International
<br>
Conference on Neural Networks) in Houston. He literally
said "Amari
<br>
invented back propagation" and Amari was sitting next to
him. I
<br>
still have a recording of that debate.
<br>
<br>
Asim Roy
<br>
Professor, Information Systems
<br>
Arizona State University
<br>
<a class="moz-txt-link-freetext" href="https://isearch.asu.edu/profile/9973">https://isearch.asu.edu/profile/9973</a>
<br>
<a class="moz-txt-link-freetext" href="https://lifeboat.com/ex/bios.asim.roy">https://lifeboat.com/ex/bios.asim.roy</a>
<br>
</blockquote>
<br>
On 2 Jan 2022, at 02:31, Stephen José Hanson
<a class="moz-txt-link-rfc2396E" href="mailto:jose@rubic.rutgers.edu"><jose@rubic.rutgers.edu></a>
<br>
wrote:
<br>
<br>
Juergen: Happy New Year!
<br>
<br>
"are not quite the same"..
<br>
<br>
I understand that it's expedient sometimes to use linear
regression to
<br>
approximate the Perceptron (I've had other connectionist
friends tell
<br>
me the same thing), which has its own incremental update
rule... that is,
<br>
doing <0,1> classification. So I guess if you don't
like the
<br>
analogy to logistic regression... maybe Fisher's LDA? This
whole
<br>
thing still doesn't scan for me.
<br>
<br>
So, again, the point here is context. Do you really believe
that
<br>
Frank Rosenblatt didn't reference Gauss/Legendre/Laplace
because it
<br>
slipped his mind? He certainly understood modern
statistics (of
<br>
the 1940s and 1950s).
<br>
<br>
Certainly you'd agree that FR could have referenced linear
regression
<br>
as a precursor, or "pretty similar" to what he was working
on, it
<br>
seems disingenuous to imply he was plagiarizing Gauss et
al.--right?
<br>
Why would he?
<br>
<br>
Finally then, in any historical reconstruction I can think
of, it
<br>
just doesn't make sense. Sorry.
<br>
<br>
Steve
<br>
<br>
<br>
<blockquote type="cite">-----Original Message-----
<br>
From: Connectionists
<a class="moz-txt-link-rfc2396E" href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu"><connectionists-bounces@mailman.srv.cs.cmu.edu></a>
<br>
On Behalf Of Schmidhuber Juergen
<br>
Sent: Friday, December 31, 2021 11:00 AM
<br>
To: <a class="moz-txt-link-abbreviated" href="mailto:connectionists@cs.cmu.edu">connectionists@cs.cmu.edu</a>
<br>
Subject: Re: Connectionists: Scientific Integrity, the
2021 Turing
<br>
Lecture, etc.
<br>
<br>
Sure, Steve, perceptron/Adaline/other similar methods of
the
<br>
1950s/60s are not quite the same, but the obvious origin
and
<br>
ancestor of all those single-layer “shallow learning”
<br>
architectures/methods is indeed linear regression; today’s
simplest
<br>
NNs minimizing mean squared error are exactly what they
had 2
<br>
centuries ago. And the first working deep learning methods
of the
<br>
1960s did NOT really require “modern” backprop (published
in 1970 by
<br>
Linnainmaa [BP1-5]). For example, Ivakhnenko & Lapa
(1965) [DEEP1-2]
<br>
incrementally trained and pruned their deep networks layer
by layer
<br>
to learn internal representations, using regression and a
separate
<br>
validation set. Amari (1967-68)[GD1] used stochastic
gradient
<br>
descent [STO51-52] to learn internal representations
WITHOUT
<br>
“modern" backprop in his multilayer perceptrons. Jürgen
<br>
<br>
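A loose Python sketch, under my own simplifying assumptions, of such layer-by-layer training in the spirit of Ivakhnenko & Lapa's approach (a paraphrase for illustration, not their published algorithm): each layer's candidate units are fit by least squares on training data and pruned by their error on a separate validation set.
<br>
<pre>
import itertools
import numpy as np

def grow_layer(F_tr, F_va, y_tr, y_va, keep=4):
    """Fit quadratic candidate units on feature pairs by least squares;
    keep only those that generalize best to the validation set."""
    def design(F, i, j):
        a, b = F[:, i], F[:, j]
        return np.stack([np.ones_like(a), a, b, a * b, a**2, b**2], axis=1)
    scored = []
    for i, j in itertools.combinations(range(F_tr.shape[1]), 2):
        coef, *_ = np.linalg.lstsq(design(F_tr, i, j), y_tr, rcond=None)
        va_out = design(F_va, i, j) @ coef
        err = np.mean((va_out - y_va) ** 2)          # validation MSE
        scored.append((err, design(F_tr, i, j) @ coef, va_out))
    scored.sort(key=lambda s: s[0])                  # prune weak units
    best = scored[:keep]
    return (np.stack([s[1] for s in best], axis=1),
            np.stack([s[2] for s in best], axis=1), best[0][0])

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 5))
y = np.sin(X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=400)
F_tr, F_va = X[:300], X[300:]
for depth in range(3):                               # grow the net deeper
    F_tr, F_va, err = grow_layer(F_tr, F_va, y[:300], y[300:])
    print(f"layer {depth + 1}: validation MSE {err:.4f}")
</pre>
<br>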
<br>
<blockquote type="cite">On 31 Dec 2021, at 18:24, Stephen
José Hanson
<br>
<a class="moz-txt-link-rfc2396E" href="mailto:jose@rubic.rutgers.edu"><jose@rubic.rutgers.edu></a> wrote:
<br>
<br>
Well the perceptron is closer to logistic regression...
but the
<br>
Heaviside function of course is <0,1> so
technically not related
<br>
to linear regression which is using covariance to
estimate betas...
<br>
<br>
does that matter? Yes, if you want to be
hyper-correct--as this
<br>
appears to be--Berkson (1944) coined the logit... as log
odds... for
<br>
probabilistic classification... this was formally
developed by Cox
<br>
in the early '60s, so unlikely even in this case to be a
precursor
<br>
to perceptron.
<br>
<br>
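A hedged sketch in Python of that distinction (toy data and names are mine, purely illustrative): Rosenblatt's rule thresholds through a Heaviside step and updates only on misclassification, while least squares fits real-valued outputs in one covariance-based solve.
<br>
<pre>
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # labels in {0, 1}

# Perceptron: Heaviside output, error-driven incremental updates.
w, b = np.zeros(2), 0.0
for _ in range(20):                          # a few passes over the data
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0    # Heaviside step, not linear
        w += (yi - pred) * xi                # no update when pred == yi
        b += yi - pred

# Linear regression on the same labels: closed form, no threshold.
Xb = np.hstack([X, np.ones((200, 1))])
beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)

print(w, b)       # separating hyperplane from incremental updates
print(beta)       # regression coefficients; similar direction, other route
</pre>
<br>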
My point was that DL requires both a learning algorithm
(BP) and an
<br>
architecture... which seems to me much more responsible
for the
<br>
success of DL.
<br>
<br>
S
<br>
<br>
<br>
<br>
On 12/31/21 4:03 AM, Schmidhuber Juergen wrote:
<br>
<blockquote type="cite">Steve, this is not about machine
learning in general, just about deep
<br>
learning vs shallow learning. However, I added the
Pandemonium -
<br>
thanks for that! You ask: how is a linear regressor of
1800
<br>
(Gauss/Legendre) related to a linear neural network?
It's formally
<br>
equivalent, of course! (The only difference is that
the weights are
<br>
often called beta_i rather than w_i.) Shallow
learning: one adaptive
<br>
layer. Deep learning: many adaptive layers. Cheers,
Jürgen
<br>
<br>
<br>
<br>
<br>
<blockquote type="cite">On 31 Dec 2021, at 00:28,
Stephen José Hanson
<br>
<a class="moz-txt-link-rfc2396E" href="mailto:jose@rubic.rutgers.edu"><jose@rubic.rutgers.edu></a>
<br>
wrote:
<br>
<br>
Despite the comprehensive feel of this, it still
appears to me to
<br>
be too focused on Back-propagation per se... (except
for that
<br>
pesky Gauss/Legendre ref--which still baffles me, at
least as to how
<br>
this is related to a "neural network"), and at the
same time it
<br>
appears to be missing other more general
epoch-conceptually
<br>
relevant cases, say:
<br>
<br>
Oliver Selfridge and his Pandemonium model... which
was a
<br>
hierarchical feature analysis system... which
certainly was in the
<br>
air during the neural network learning heyday... in
fact, Minsky
<br>
cites Selfridge as one of his mentors.
<br>
<br>
Arthur Samuel: his checkers-playing system... which
learned an
<br>
evaluation function from a hierarchical search.
<br>
<br>
Rosenblatt's advisor was Egon Brunswik... who was a
Gestalt
<br>
perceptual psychologist who introduced the concept
that the world
<br>
was stochastic and the organism had to adapt to
this variance
<br>
somehow... he called it "probabilistic
functionalism," which
<br>
brought attention to learning, perception and
decision theory,
<br>
certainly all piece parts of what we call neural
networks.
<br>
<br>
There are many other such examples that influenced
or provided
<br>
context for the yeasty mix that was the 1940s and 1950s,
where neural
<br>
networks first appeared, partly due to Pitts and
McCulloch, who
<br>
entangled the human brain with computation and early
computers
<br>
themselves.
<br>
<br>
I just don't see this as didactic, in the sense of a
conceptual
<br>
view of the multidimensional history of the
field, as
<br>
opposed to a 1-dimensional exegesis of mathematical
threads
<br>
through various statistical algorithms.
<br>
<br>
Steve
<br>
<br>
On 12/30/21 1:03 PM, Schmidhuber Juergen wrote:
<br>
<br>
<blockquote type="cite">Dear connectionists,
<br>
<br>
In the wake of massive open online peer review,
public comments
<br>
on the connectionists mailing list [CONN21] and
many additional
<br>
private comments (some by well-known deep learning
pioneers)
<br>
helped to update and improve upon version 1 of the
report. The
<br>
essential statements of the text remain unchanged
as their
<br>
accuracy remains unchallenged. I'd like to thank
everyone from
<br>
the bottom of my heart for their feedback up until
this point
<br>
and hope everyone will be satisfied with the
changes. Here is
<br>
the revised version 2 with over 300 references:
<br>
<br>
<br>
<br>
<a class="moz-txt-link-freetext" href="https://urldefense.com/v3/__https://people.idsia.ch/*juergen/scient">https://urldefense.com/v3/__https://people.idsia.ch/*juergen/scient</a>
<br>
ific-integrity-turing-award-deep-learning.html__;fg!!IKRxdwAv5BmarQ
<br>
!NsJ4lf4yO2BDIBzlUVfGKvTtf_QXY8dpZaHzCSzHCvEhXGJUTyRTzZybDQg-DZY$
<br>
<br>
<br>
<br>
In particular, Sec. II has become a brief history
of deep
<br>
learning up to the 1970s:
<br>
<br>
Some of the most powerful NN architectures (i.e.,
recurrent NNs)
<br>
were discussed in 1943 by McCulloch and Pitts
[MC43] and
<br>
formally analyzed in 1956 by Kleene [K56] - the
closely related
<br>
prior work in physics by Lenz, Ising, Kramers, and
Wannier dates
<br>
back to the 1920s [L20][I25][K41][W45]. In 1948,
Turing wrote up
<br>
ideas related to artificial evolution [TUR1] and
learning NNs.
<br>
He failed to formally publish his ideas though,
which explains
<br>
the obscurity of his thoughts here. Minsky's
simple neural SNARC
<br>
computer dates back to 1951. Rosenblatt's
perceptron with a
<br>
single adaptive layer learned in 1958 [R58]
(Joseph [R61]
<br>
mentions an earlier perceptron-like device by
Farley & Clark);
<br>
Widrow & Hoff's similar Adaline learned in
1962 [WID62]. Such
<br>
single-layer "shallow learning" actually started
around 1800
<br>
when Gauss & Legendre introduced linear
regression and the
<br>
method of least squares [DL1-2] - a famous early
example of
<br>
pattern recognition and generalization from
training data
<br>
</blockquote>
</blockquote>
</blockquote>
through a parameterized
predictor is Gauss' rediscovery of the
<br>
asteroid Ceres based on previous astronomical
observations. Deeper
<br>
multilayer perceptrons (MLPs) were discussed by
Steinbuch
<br>
[ST61-95] (1961), Joseph [R61] (1961), and Rosenblatt
[R62]
<br>
(1962), who wrote about "back-propagating errors" in
an MLP with a
<br>
hidden layer [R62], but did not yet have a general
deep learning
<br>
algorithm for deep MLPs (what's now called
backpropagation is
<br>
quite different and was first published by Linnainmaa
in 1970
<br>
[BP1-BP5][BPA-C]). Successful learning in deep
architectures
<br>
started in 1965 when Ivakhnenko & Lapa published
the first
<br>
general, working learning algorithms for deep MLPs
with
<br>
arbitrarily many hidden layers (already containing the
now popular
<br>
multiplicative gates) [DEEP1-2][DL1-2]. A paper of
1971 [DEEP2]
<br>
already described a deep learning net with 8 layers,
trained by
<br>
their highly cited method which was still popular in
the new
<br>
millennium [DL2], especially in Eastern Europe, where much of
<br>
Machine Learning was born [MIR](Sec. 1)[R8]. LBH failed to
<br>
cite this, just like they failed to cite Amari [GD1],
who in 1967
<br>
proposed stochastic gradient descent [STO51-52] (SGD)
for MLPs and
<br>
whose implementation [GD2,GD2a] (with Saito) learned
internal
<br>
representations at a time when compute was billions of
times more
<br>
expensive than today (see also Tsypkin's work
[GDa-b]). (In 1972,
<br>
Amari also published what was later sometimes called
the Hopfield
<br>
network or Amari-Hopfield Network [AMH1-3].)
Fukushima's now
<br>
widely used deep convolutional NN architecture was
first
<br>
introduced in the 1970s [CNN1].
<br>
<br>
<blockquote type="cite">
<blockquote type="cite">Jürgen
<br>
<br>
<br>
<br>
<br>
******************************
<br>
<br>
On 27 Oct 2021, at 10:52, Schmidhuber Juergen
<br>
<br>
<a class="moz-txt-link-rfc2396E" href="mailto:juergen@idsia.ch"><juergen@idsia.ch></a>
<br>
<br>
wrote:
<br>
<br>
Hi, fellow artificial neural network enthusiasts!
<br>
<br>
The connectionists mailing list is perhaps the
oldest mailing
<br>
list on ANNs, and many neural net pioneers are
still subscribed
<br>
to it. I am hoping that some of them - as well as
their
<br>
contemporaries - might be able to provide
additional valuable
<br>
insights into the history of the field.
<br>
<br>
Following the great success of massive open online
peer review
<br>
(MOOR) for my 2015 survey of deep learning (now
the most cited
<br>
article ever published in the journal Neural
Networks), I've
<br>
decided to put forward another piece for MOOR. I
want to thank the
<br>
many experts who have already provided me with
comments on it.
<br>
Please send additional relevant references and
suggestions for
<br>
improvements for the following draft directly to
me at
<br>
<br>
<a class="moz-txt-link-abbreviated" href="mailto:juergen@idsia.ch">juergen@idsia.ch</a>
<br>
<br>
:
<br>
<br>
<br>
<br>
<a class="moz-txt-link-freetext" href="https://urldefense.com/v3/__https://people.idsia.ch/*juergen/scient">https://urldefense.com/v3/__https://people.idsia.ch/*juergen/scient</a>
<br>
ific-integrity-turing-award-deep-learning.html__;fg!!IKRxdwAv5BmarQ
<br>
!NsJ4lf4yO2BDIBzlUVfGKvTtf_QXY8dpZaHzCSzHCvEhXGJUTyRTzZybDQg-DZY$
<br>
<br>
<br>
<br>
The above is a point-for-point critique of factual
errors in
<br>
ACM's justification of the ACM A. M. Turing Award
for deep
<br>
learning and a critique of the Turing Lecture
published by ACM
<br>
in July 2021. This work can also be seen as a
short history of
<br>
deep learning, at least as far as ACM's errors and
the Turing
<br>
Lecture are concerned.
<br>
<br>
I know that some view this as a controversial
topic. However, it
<br>
is the very nature of science to resolve
controversies through
<br>
facts. Credit assignment is as core to scientific
history as it
<br>
is to machine learning. My aim is to ensure that
the true
<br>
history of our field is preserved for posterity.
<br>
<br>
Thank you all in advance for your help!
<br>
<br>
Jürgen Schmidhuber
<br>
<br>
<br>
<br>
<br>
<br>
<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<br>
</blockquote>
<br>
</blockquote>
<br>
<br>
</blockquote>
</blockquote>
</blockquote>
<div class="moz-signature">-- <br>
<img src="cid:part1.72A5F2FE.079C852E@rubic.rutgers.edu"
border="0"></div>
</body>
</html>