Connectionists: Connectionists Digest, Vol 764, Issue 1

Juyang Weng juyang.weng at gmail.com
Tue Nov 16 20:30:25 EST 2021


Dear Juergen,

I respectfully waited until people had had enough time to respond to your
plagiarism allegations.

Many people are probably not aware of a much more severe problem than the
plagiarism issue you correctly raised:

I would like to point out that error backpropagation is a major technical
flaw in many types of neural networks (CNNs, LSTMs, etc.), one that is buried
in a protocol violation called Post-Selection Using Test Sets (PSUTS).
See this IJCNN 2021 paper:
J. Weng, "On Post Selections Using Test Sets (PSUTS) in AI", in Proc.
International Joint Conference on Neural Networks, pp. 1-8, Shenzhen,
China, July 18-22, 2021. PDF file
<http://www.cse.msu.edu/~weng/research/PSUTS-IJCNN2021rvsd-cite.pdf>.
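
The kind of protocol violation PSUTS refers to can be illustrated with a toy
sketch in Python (synthetic data, purely illustrative; not code or results
from the paper): among many trained candidates, post-selecting the one that
scores best on the test set and reporting only that score overstates
performance.

    # Toy illustration: on pure-noise data every classifier has true accuracy
    # 0.5, yet the best-of-N test-set score looks substantially better than
    # chance.
    import random

    random.seed(0)
    n_test, n_candidates = 200, 50
    test_labels = [random.randint(0, 1) for _ in range(n_test)]

    def test_accuracy_of_random_candidate():
        # Stand-in for a trained network whose predictions carry no signal.
        preds = [random.randint(0, 1) for _ in range(n_test)]
        return sum(p == y for p, y in zip(preds, test_labels)) / n_test

    scores = [test_accuracy_of_random_candidate() for _ in range(n_candidates)]
    print("average candidate accuracy on the test set: %.3f"
          % (sum(scores) / len(scores)))
    print("post-selected (best-of-%d) accuracy: %.3f"
          % (n_candidates, max(scores)))
    # The first number hovers near 0.5; the second is reliably higher even
    # though nothing was learned, which is why reporting only the
    # post-selected number overstates performance.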

Those who do not agree with me, please respond.

Best regards,
-John
----------------------------------------------------------------------

Message: 1
Date: Sun, 14 Nov 2021 16:47:36 +0000
From: Schmidhuber Juergen <juergen at idsia.ch>
To: "connectionists at cs.cmu.edu" <connectionists at cs.cmu.edu>
Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing
        Lecture, etc.
Message-ID: <532DC982-9F4B-41F8-9AB4-AD21314C6472 at supsi.ch>
Content-Type: text/plain; charset="utf-8"

Dear all, thanks for your public comments, and many additional private ones!

So far nobody has challenged the accuracy of any of the statements in the
draft report currently under massive open peer review:

https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html

Nevertheless, some of the recent comments will trigger a few minor
revisions in the near future.

Here are a few answers to some of the public comments:

Randall O'Reilly wrote: "I vaguely remember someone making an interesting
case a while back that it is the *last* person to invent something that
gets all the credit." Indeed, as I wrote in Science (2011, reference
[NASC3] in the report): "As they say: Columbus did not become famous
because he was the first to discover America, but because he was the last."
Sure, people sometimes assign the "inventor" title to the person who should
really be called the "popularizer," frequently precisely because the
popularizer packaged the work of others in a way that made it easily
digestible. But that does not mean the title was rightly conferred, or that
we shouldn't do our utmost to correct it; crediting popularizers over those
who actually deserve the credit is one of the most enduring problems in the
history of science.

As Stephen José Hanson wrote: "Well, to popularize is not to invent. Many
of Juergen's concerns could be solved with some scholarship, such that
authors look sometime before 2006 for other relevant references."

Randy also wrote: "Sometimes, it is not the basic equations etc that
matter: it is the big picture vision." However, the same vision has almost
always been there in the earlier work on neural nets. It's just that the
work was ahead of its time. It's only in recent years that we have the
datasets and the computational power to realize those big-picture visions.
I think you would agree that simply scaling something up isn't the same as
inventing it. If it were, then the name "Newton" would have little meaning
to people nowadays.

Jonathan D. Cohen wrote: " ...it is also worth noting that science is an
*intrinsically social* endeavor, and therefore communication is a
fundamental factor." Sure, but let?s make sure that this cannot be used as
a justification of plagiarism! See Sec. 5 of the report.

Generally speaking, if B plagiarizes A but inspires C, whom should C cite?
The answer is clear.

Ponnuthurai Nagaratnam Suganthan wrote: "The name `deep learning' came
about recently." Not so. See references in Sec. X of the report: the
ancient term "deep learning" (explicitly mentioned by ACM) was actually
first introduced to Machine Learning by Dechter (1986), and to NNs by
Aizenberg et al. (2000).

Tsvi Achler wrote: "Models which have true feedback (e.g. back to their own
inputs) cannot learn by backpropagation but there is plenty of evidence
these types of connections exist in the brain and are used during
recognition. Thus they get ignored: no talks in universities, no featuring
in `premier' journals and no funding. [...] Lastly Feedforward methods are
predominant in a large part because they have financial backing from large
companies with advertising and clout like Google and the self-driving craze
that never fully materialized." This is very misleading - see Sec. A, B,
and C of the report which are about recurrent nets with feedback,
especially LSTM, heavily used by Google and others, on your smartphone
since 2015. Recurrent NNs are general computers that can compute anything
your laptop can compute, including any computable model with feedback "back
to the inputs." My favorite proof from over 30 years ago: a little
subnetwork can be used to build a NAND gate, and a big recurrent network of
NAND gates can emulate the CPU of your laptop. (See also answers by Dan
Levine, Gary Cottrell, and Juyang Weng.)
However, as Asim Roy pointed out, this discussion deviates from the
original topic of improper credit assignment. Please use another thread for
this.
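
To make the NAND-gate remark above concrete, here is a minimal Python sketch
(hand-picked illustrative weights, not taken from any of the cited papers):
a single threshold unit computes NAND, and NAND units compose into any
Boolean circuit, for example XOR.

    # A single threshold unit with hand-picked (illustrative) weights
    # computes NAND(x1, x2).
    def nand_unit(x1, x2):
        s = 3 - 2 * x1 - 2 * x2       # weighted sum plus bias
        return 1 if s > 0 else 0      # step activation

    # Any Boolean circuit can be built from such units, e.g. XOR from four
    # NANDs; this is the sense in which a network of them can emulate a CPU.
    def xor(a, b):
        n1 = nand_unit(a, b)
        return nand_unit(nand_unit(a, n1), nand_unit(b, n1))

    assert [nand_unit(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 1, 1, 0]
    assert [xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]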

Randy also wrote: "Should Newton be cited instead of Rumelhart et al, for
backprop, as Steve suggested? Seriously, most of the math powering today's
models is just calculus and the chain rule." This is so misleading in
several ways - see Sec. XII of the report: "Some claim that
`backpropagation is just the chain rule of Leibniz (1676) & L'Hopital
(1696).' No, it is the efficient way of applying the chain rule to big
networks with differentiable nodes (there are also many inefficient ways of
doing this). It was not published until 1970" by Seppo Linnainmaa. Of
course, the person to cite is Linnainmaa.
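
To illustrate what "the efficient way of applying the chain rule" means in
practice, here is a minimal Python sketch of reverse-mode accumulation on a
small computation graph (purely illustrative; not code from the report or
from Linnainmaa's publications): a forward pass records local derivatives,
and a single backward sweep combines them, so all gradients are obtained at
a cost proportional to the size of the graph.

    import math

    class Var:
        """A node in a computation graph supporting reverse-mode differentiation."""
        def __init__(self, value, parents=()):
            self.value = value        # forward value
            self.parents = parents    # list of (parent node, local derivative)
            self.grad = 0.0           # accumulated d(output)/d(this node)

        def __add__(self, other):
            return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

        def __mul__(self, other):
            return Var(self.value * other.value,
                       [(self, other.value), (other, self.value)])

        def tanh(self):
            t = math.tanh(self.value)
            return Var(t, [(self, 1.0 - t * t)])

        def backward(self):
            # One reverse sweep over a topological ordering: each node passes
            # its gradient to its parents via the chain rule, visiting every
            # edge of the graph exactly once.
            order, seen = [], set()
            def topo(v):
                if id(v) not in seen:
                    seen.add(id(v))
                    for p, _ in v.parents:
                        topo(p)
                    order.append(v)
            topo(self)
            self.grad = 1.0
            for v in reversed(order):
                for p, local in v.parents:
                    p.grad += v.grad * local

    # Example: y = tanh(w1*x1 + w2*x2); one backward pass yields dy/dw1 and dy/dw2.
    x1, x2, w1, w2 = Var(0.5), Var(-1.0), Var(2.0), Var(-3.0)
    y = (w1 * x1 + w2 * x2).tanh()
    y.backward()
    print(y.value, w1.grad, w2.grad)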

Randy also wrote: "how little Einstein added to what was already
established by Lorentz and others". Juyang already respectfully objected to
this misleading statement.

I agree with what Anand Ramamoorthy wrote: "Setting aside broader aspects
of the social quality of the scientific enterprise, let's take a look at a
simpler thing; individual duty. Each scientist has a duty to science (as an
intellectual discipline) and the scientific community, to uphold
fundamental principles informing the conduct of science. Credit should be
given wherever it is due - it is a matter of duty, not preference or
`strategic value' or boosting someone because they're a great populariser.
... Crediting those who disseminate is fine and dandy, but should be for
those precise contributions, AND the originators of an idea/method/body of
work ought to be recognised - this is perhaps a bit difficult when the work
is obscured by history, but not impossible. At any rate, if one has novel
information of pertinence w.r.t original work, then the right action is
crystal clear."

See also Sec. 5 of the report: "As emphasized earlier:[DLC][HIN] `The
inventor of an important method should get credit for inventing it. They
may not always be the one who popularizes it. Then the popularizer should
get credit for popularizing it - but not for inventing it.' If one
"re-invents" something that was already known, and only becomes aware of it
later, one must at least clarify it later, and correctly give credit in
follow-up papers and presentations."

I also agree with what Zhaoping Li wrote: "I would find it hard to enter a
scientific community if it is not scholarly. Each of us can do our bit to
be scholarly, to set an example, if not a warning, to the next generation."

Randy also wrote: "Outside of a paper specifically on the history of a
field, does it really make sense to "require" everyone to cite obscure old
papers that you can't even get a PDF of on google scholar?" This sounds
almost like a defense of plagiarism. That's what time stamps of patents and
papers are for. A recurring point of the report is: the awardees did not
cite the prior art - not even in later surveys written when the true
origins of this work were well-known.

Here I fully agree with what Marina Meila wrote: "Since credit is a form of
currency in academia, let's look at the `hard currency' rewards of
invention. Who gets them? The first company to create a new product usually
fails. However, the interesting thing is that society (by this I mean the
society most of us work in) has found it necessary to counteract this,
and we have patent laws to protect the rights of the inventors. The point
is not whether patent laws are effective or not, it's the social norm they
implement. That to protect invention one should pay attention to rewarding
the original inventors, whether we get the `product' directly from them or
not."

Jürgen


On Mon, Nov 15, 2021 at 12:35 PM <
connectionists-request at mailman.srv.cs.cmu.edu> wrote:

> Send Connectionists mailing list submissions to
>         connectionists at mailman.srv.cs.cmu.edu
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         https://mailman.srv.cs.cmu.edu/mailman/listinfo/connectionists
> or, via email, send a message with subject or body 'help' to
>         connectionists-request at mailman.srv.cs.cmu.edu
>
> You can reach the person managing the list at
>         connectionists-owner at mailman.srv.cs.cmu.edu
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Connectionists digest..."
>
>
> Today's Topics:
>
>    1. Re:  Scientific Integrity, the 2021 Turing Lecture, etc.
>       (Schmidhuber Juergen)
>    2.  [journals]  Special Issue on the topic "Cognitive Robotics
>       in Social Applications" (Francesco Rea)
>    3. Re:  Scientific Integrity, the 2021 Turing Lecture, etc.
>       (Randall O'Reilly)
>    4. Re:  Scientific Integrity, the 2021 Turing Lecture,       etc.
>       (Maria Kesa)
>    5. Re:  Scientific Integrity, the 2021 Turing Lecture,       etc.
>       (Barak A. Pearlmutter)
>    6.  CFP - Special issue on "Human-like Behavior and Cognition in
>       Robots" (marwen Belkaid)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 14 Nov 2021 16:47:36 +0000
> From: Schmidhuber Juergen <juergen at idsia.ch>
> To: "connectionists at cs.cmu.edu" <connectionists at cs.cmu.edu>
> Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing
>         Lecture, etc.
> Message-ID: <532DC982-9F4B-41F8-9AB4-AD21314C6472 at supsi.ch>
> Content-Type: text/plain; charset="utf-8"
>
> [...]
>
>
>
>
> *************************
>
> On 27 Oct 2021, at 10:52, Schmidhuber Juergen <juergen at idsia.ch> wrote:
>
> Hi, fellow artificial neural network enthusiasts!
>
> The connectionists mailing list is perhaps the oldest mailing list on
> ANNs, and many neural net pioneers are still subscribed to it. I am hoping
> that some of them - as well as their contemporaries - might be able to
> provide additional valuable insights into the history of the field.
>
> Following the great success of massive open online peer review (MOOR) for
> my 2015 survey of deep learning (now the most cited article ever published
> in the journal Neural Networks), I've decided to put forward another piece
> for MOOR. I want to thank the many experts who have already provided me
> with comments on it. Please send additional relevant references and
> suggestions for improvements for the following draft directly to me at
> juergen at idsia.ch:
>
>
> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html
>
> The above is a point-for-point critique of factual errors in ACM's
> justification of the ACM A. M. Turing Award for deep learning and a
> critique of the Turing Lecture published by ACM in July 2021. This work can
> also be seen as a short history of deep learning, at least as far as ACM's
> errors and the Turing Lecture are concerned.
>
> I know that some view this as a controversial topic. However, it is the
> very nature of science to resolve controversies through facts. Credit
> assignment is as core to scientific history as it is to machine learning.
> My aim is to ensure that the true history of our field is preserved for
> posterity.
>
> Thank you all in advance for your help!
>
> Jürgen Schmidhuber
>
>
>
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Sun, 14 Nov 2021 15:40:56 +0000
> From: Francesco Rea <Francesco.Rea at iit.it>
> To: "connectionists at mailman.srv.cs.cmu.edu"
>         <connectionists at mailman.srv.cs.cmu.edu>
> Subject: Connectionists: [journals]  Special Issue on the topic
>         "Cognitive Robotics in Social Applications"
> Message-ID: <5af44b4a238247ccb0968b75fe639e0c at iit.it>
> Content-Type: text/plain; charset="windows-1252"
>
> Dear colleague,
>
> We hope this email finds you well!
>
> We would like to kindly inform you about a Special Issue on the topic
> "Cognitive Robotics in Social Applications" of the open access journal
> "Electronics" (ISSN 2079-9292, IF 2.397), for which we are serving as Guest
> Editors.
>
> We are writing to inquire whether you would be interested in submitting a
> contribution to this Special Issue. The deadline for submitting the
> manuscript is 31 December 2021.
>
> Please, find more details for this call and all the submission information
> at the following link:
>
> https://www.mdpi.com/journal/electronics/special_issues/cognitive_robots
>
> We hope you will contribute to this well-focused Special Issue, and we
> would be grateful if you could forward this information to friends and
> colleagues that might be interested in the topic.
>
> Best Regards,
>
> Prof. Dr. Dimitri Ognibene, Dr. Giovanni Pilato, Dr. Francesco Rea
> Guest Editors
>
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20211114/4416c1d3/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 3
> Date: Mon, 15 Nov 2021 00:36:09 -0800
> From: "Randall O'Reilly" <oreilly at ucdavis.edu>
> To: Schmidhuber Juergen <juergen at idsia.ch>
> Cc: Connectionists Connectionists <connectionists at cs.cmu.edu>
> Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing
>         Lecture, etc.
> Message-ID: <6AC9BA06-DBF7-4FCA-87CE-0776DE9CC498 at ucdavis.edu>
> Content-Type: text/plain;       charset=us-ascii
>
> Juergen,
>
> > Generally speaking, if B plagiarizes A but inspires C, whom should C
> cite? The answer is clear.
>
> Using the term plagiarize here implies a willful stealing of other
> people's ideas, and is a very serious allegation as I'm sure you are
> aware.  At least some of the issues you raised are clearly not of this
> form, involving obscure publications that almost certainly the so-called
> plagiarizers had no knowledge of.  This is then a case of reinvention,
> which happens all the time and is still hard to avoid even with tools like
> google scholar available now (but not back when most of the relevant work
> was being done).  You should be very careful to not confuse these two
> things, and only allege plagiarism when there is a very strong case to be
> made.
>
> In any case, consider this version:
>
> If B reinvents A but publishes a much more [comprehensive | clear |
> applied | accessible | modern] (whatever) version that becomes the main way
> in which many people C learn about the relevant idea, whom should C cite?
>
> For example, I cite Rumelhart et al (1986) for backprop, because that is
> how I and most other people in the modern field learned about this idea,
> and we know for a fact that they genuinely reinvented it and conveyed its
> implications in a very compelling way.  If I were writing a paper on
> the history of backprop, or some comprehensive review, then yes it would be
> appropriate to cite older versions that had limited impact, being careful
> to characterize the relationship as one of reinvention.
>
> Referring to Rumelhart et al (1986) as "popularizers" is a gross
> mischaracterization of the intellectual origins and true significance of
> such a work.  Many people in this discussion have used that term
> inappropriately as it applies to the relevant situations at hand here.
>
> > Randy also wrote: "how little Einstein added to what was already
> established by Lorentz and others". Juyang already respectfully objected to
> this misleading statement.
>
> I beg to differ -- this is a topic of extensive ongoing debate:
> https://en.wikipedia.org/wiki/Relativity_priority_dispute -- specifically
> with respect to special relativity, which is the case I was referring to,
> not general relativity, although it appears there are issues there too.
>
> - Randy
>
>
> ------------------------------
>
> Message: 4
> Date: Mon, 15 Nov 2021 11:11:44 +0100
> From: Maria Kesa <maria.kesa at gmail.com>
> To: "Randall O'Reilly" <oreilly at ucdavis.edu>
> Cc: Connectionists Connectionists <connectionists at cs.cmu.edu>
> Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing
>         Lecture,        etc.
> Message-ID:
>         <CA+84MbJdov5CtB=
> a8xwrC07FOSLC2R6wAG47Cw9V3aQx-az4cA at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> My personal take and you can all kiss my ass message
>
> https://fuckmyasspsychiatry.blogspot.com/2021/11/jurgen-schmidhuber-is-ethically-bankrupt.html
>
> All the very best,
> Maria Kesa
>
> On Mon, Nov 15, 2021 at 11:06 AM Randall O'Reilly <oreilly at ucdavis.edu>
> wrote:
>
> > Juergen,
> >
> > > Generally speaking, if B plagiarizes A but inspires C, whom should C
> > cite? The answer is clear.
> >
> > Using the term plagiarize here implies a willful stealing of other
> > people's ideas, and is a very serious allegation as I'm sure you are
> > aware.  At least some of the issues you raised are clearly not of this
> > form, involving obscure publications that almost certainly the so-called
> > plagiarizers had no knowledge of.  This is then a case of reinvention,
> > which happens all the time and is still hard to avoid even with tools like
> > google scholar available now (but not back when most of the relevant work
> > was being done).  You should be very careful to not confuse these two
> > things, and only allege plagiarism when there is a very strong case to be
> > made.
> >
> > In any case, consider this version:
> >
> > If B reinvents A but publishes a much more [comprehensive | clear |
> > applied | accessible | modern] (whatever) version that becomes the main
> way
> > in which many people C learn about the relevant idea, whom should C cite?
> >
> > For example, I cite Rumelhart et al (1986) for backprop, because that is
> > how I and most other people in the modern field learned about this idea,
> > and we know for a fact that they genuinely reinvented it and conveyed its
> > implications in a very compelling way.  If I were writing a paper on
> > the history of backprop, or some comprehensive review, then yes it would
> be
> > appropriate to cite older versions that had limited impact, being careful
> > to characterize the relationship as one of reinvention.
> >
> > Referring to Rumelhart et al (1986) as "popularizers" is a gross
> > mischaracterization of the intellectual origins and true significance of
> > such a work.  Many people in this discussion have used that term
> > inappropriately as it applies to the relevant situations at hand here.
> >
> > > Randy also wrote: "how little Einstein added to what was already
> > established by Lorentz and others". Juyang already respectfully objected
> to
> > this misleading statement.
> >
> > I beg to differ -- this is a topic of extensive ongoing debate:
> > https://en.wikipedia.org/wiki/Relativity_priority_dispute --
> specifically
> > with respect to special relativity, which is the case I was referring to,
> > not general relativity, although it appears there are issues there too.
> >
> > - Randy
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20211115/3bbd31b2/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 5
> Date: Mon, 15 Nov 2021 14:21:33 +0000
> From: "Barak A. Pearlmutter" <barak at pearlmutter.net>
> To: "connectionists at cs.cmu.edu" <connectionists at cs.cmu.edu>
> Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing
>         Lecture,        etc.
> Message-ID:
>         <CANa01BJ33QcUok1mE_ZJxKsDDkOXLi9xmK624kekU=
> W6TmfLZw at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> One point of scientific propriety and writing that may be getting lost
> in the scrum here, and which, I think, has contributed substantially to
> the somewhat woeful state of credit assignment in the field, is the
> traditional idea of what a citation *means*.
>
> If a paper says "we use the Foo Transform (Smith, 1995)" that,
> traditionally, implies that the author has actually read Smith (1995)
> and it describes the Foo Transform as used in the work being
> presented. If the author was told that the Foo Transform was actually
> discovered by Barker (1980) but the author hasn't actually verified
> that by reading Barker (1980), then the author should NOT just cite
> Barker. If the author heard that Barker (1980) is the "right" citation
> for the Foo Transform, but they got the details of it that they're
> actually using from Smith (1995) then they're supposed to say so: "We
> use the Foo Transform as described in Smith (1995), attributed to
> Barker (1980) by someone I met in line for the toilet at NeurIPS
> 2019".
>
> This seemingly-antediluvian practice is to guard against people citing
> "Barker (1980)" as saying something that it actually doesn't say,
> proving a theorem that it doesn't, defining terms ("rate code", cough
> cough) in a fashion that is not consistent with Barker's actual
> definitions, etc. Iterated violations of this often manifest as
> repeated and successive simplification of an idea, a so-called game of
> telephone, until something not even true is sagely attributed to some
> old publication that doesn't actually say it.
>
> So if you want to cite, say, Seppo Linnainmaa for Reverse Mode
> Automatic Differentiation, you need to have actually read it yourself.
> Otherwise you need to do a bounce citation: "Linnainmaa (1982)
> described by Schmidhuber (2021) as exhibiting a Fortran implementation
> of Reverse Mode Automatic Differentiation" or something like that.
>
> This is also why it's considered fine to simply cite a textbook or
> survey paper: nobody could possibly mistake those as the original
> source, but they may well be where the author actually got it from.
>
> To bring this back to the present thread: I must confess that I have
> not actually read many of the old references Jürgen brings up.
> Certainly "X (1960) invented deep learning" is not enough to allow
> someone to cite them. It's not even enough for a bounce citation. What
> did they *actually* do? What is Jürgen saying they actually did?
>
>
>
> ------------------------------
>
> Message: 6
> Date: Mon, 15 Nov 2021 16:28:04 +0100
> From: marwen Belkaid <marwen.belkaid at iit.it>
> To: <connectionists at mailman.srv.cs.cmu.edu>
> Subject: Connectionists: CFP - Special issue on "Human-like Behavior
>         and Cognition in Robots"
> Message-ID: <611a17bd-93fa-6f28-a07b-f82ab780be1a at iit.it>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
>
>     Call for papers
>
> Special issue on "Human-like Behavior and Cognition in Robots" in the
> International Journal of Social Robotics
>
> _Submission deadline_: January 5, 2022; Research articles and
> Theoretical papers
>
> _More info_: https://www.springer.com/journal/12369/updates/19850712
> <https://www.springer.com/journal/12369/updates/19850712>
>
>
>       *Description*
>
> This Special Issue is a continuation of the HBCR workshop organized at
> the 2021 IEEE/RSJ International Conference on Intelligent Robots and
> Systems (IROS 2021) on "Human-like Behavior and Cognition in Robots
> <https://sites.google.com/view/hbcr-workshop-2021/home>". Submissions
> are welcomed from contributors who attended the workshop as well as from
> those who did not.
>
> Building robots capable of behaving in a human-like manner is a
> long-term goal in robotics. It is becoming even more crucial with the
> growing number of applications in which robots are brought closer to
> humans, not only trained experts, but also inexperienced users,
> children, the elderly, or clinical populations.
>
> Current research from different disciplines contributes to this general
> endeavor in various ways:
>
>   * by creating robots that mimic specific aspects of human behavior,
>   * by designing brain-inspired cognitive architectures for robots,
>   * by implementing embodied neural models driving robots' behavior,
>   * by reproducing human motion dynamics on robots,
>   * by investigating how humans perceive and interact with robots,
>     dependent on the degree of the robots' human-likeness.
>
> This special issue thus welcomes research articles as well as
> theoretical articles from different areas of research (e.g., robotics,
> artificial intelligence, human-robot interaction, computational modeling
> of human cognition and behavior, psychology, cognitive neuroscience)
> addressing questions such as the following:
>
>   * How to design robots with human-like behavior and cognition?
>   * What are the best methods for examining human-like behavior and
>     cognition?
>   * What are the best approaches for implementing human-like behavior
>     and cognition in robots?
>   * How to manipulate, control and measure robots' degree of
>     human-likeness?
>   * Is autonomy a prerequisite for human-likeness?
>   * How to best measure human reception of human-likeness of robots?
>   * What is the link between perceived human-likeness and social
>     attunement in human-robot interaction?
>   * How can such human-like robots inform and enable human-centered
>     research?
>   * How can modeling human-like behavior in robots inform us about human
>     cognition?
>   * In what contexts and applications do we need human-like behavior or
>     cognition?
>   * And in what contexts is it not necessary?
>
>
>       *Guest editors*
>
>   * Marwen Belkaid, Istituto Italiano di Tecnologia (Italy)
>   * Giorgio Metta, Istituto Italiano di Tecnologia (Italy)
>   * Tony Prescott, University of Sheffield (United Kingdom)
>   * Agnieszka Wykowska, Istituto Italiano di Tecnologia (Italy)
>
>
> --
> Dr Marwen BELKAID
> Istituto Italiano di Tecnologia
> Center for Human Technologies
> Via Enrico Melen, 83
> 16152 Genoa, Italy
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20211115/fbe82839/attachment-0001.html
> >
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> Connectionists mailing list
> Connectionists at mailman.srv.cs.cmu.edu
> https://mailman.srv.cs.cmu.edu/mailman/listinfo/connectionists
>
> ------------------------------
>
> End of Connectionists Digest, Vol 764, Issue 1
> **********************************************
>


-- 
Juyang (John) Weng
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20211116/546897c7/attachment.html>

