Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.

Prasenjit Mitra pmitra3 at gmail.com
Tue Feb 1 09:38:46 EST 2022


All of this is a fascinating discussion. I fear that it will be lost in the bowels of some mailing list archive. If we do not document this well, we will keep misrepresenting, or simply not knowing, what was done in the past, and not knowing people’s opinions and perspectives on what led to what. Is there somewhere we should put all this information wiki-style? Maybe a Wikipedia page with a whole set of connected, hierarchically organized pages? Or a dedicated wiki site for the history of ML and related topics? Does something like this already exist and am I woefully naive? If so, please provide some pointers.

The reason I say this is that this history can be, and perhaps has been, captured in books, but books are not deliberative in nature. That is, a book reflects the opinions (and biases) of its author but not the deliberations of a community. Despite edit wars, the history of edits and reversions provides valuable information if it is done thoughtfully, with the reasoning provided, which I expect academics to do.

I am not wedded to any particular solution, but I would love for this to be captured, preserved, added to, shaped, and refined, perhaps to arrive at some consensus or consensuses …

Best,
Prasenjit (Mitra)

Professor, Penn State 

> On Feb 1, 2022, at 6:17 AM, Barak A. Pearlmutter <barak at pearlmutter.net> wrote:
> 
> Jürgen,
> 
> It's fantastic that you're helping expose people to some important bits of scientific literature.
> 
> But...
> 
> > Minsky & Papert [M69] made some people think that Rosenblatt [R58-62] had only linear NNs plus threshold functions
> 
> If you actually read Minsky and Papert's "Perceptrons" book, this is not a misconception it encourages. It defines a "k-th order perceptron" as a linear threshold unit preceded by an arbitrary set of fixed nonlinearities with fan-in k. (A linear threshold unit with binary inputs would, in this terminology, be a 1st-order perceptron.) All their theorems are for k>1. For instance, they prove that a k-th order perceptron cannot do (k+1)-bit parity, which in the special case of k=1 simplifies to the trivial observation that a simple linear threshold unit cannot do XOR.
> [Attached images: the Perceptrons book cover and a perceptron diagram]
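> 
> To make the k=1 special case concrete, here is a quick illustrative sketch (my own, not from the book, assuming NumPy/SciPy are available): a single linear threshold unit computes a Boolean function if and only if some weights w and bias b satisfy y_i (w . x_i + b) >= 1 for every input/label pair (rescaling any strict separator gives margin 1), and that feasibility question is a linear program.
> 
> # Sketch: decide linear separability (i.e., whether one linear threshold
> # unit suffices) via LP feasibility: find (w, b) with y_i*(w . x_i + b) >= 1.
> import numpy as np
> from scipy.optimize import linprog
> 
> def linearly_separable(X, y):
>     """True iff labels y (+1/-1) on points X admit a separating hyperplane."""
>     # Rewrite each constraint as -y_i * (w . x_i + b) <= -1; variables are (w, b).
>     A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])
>     b_ub = -np.ones(len(X))
>     res = linprog(c=np.zeros(X.shape[1] + 1), A_ub=A_ub, b_ub=b_ub,
>                   bounds=[(None, None)] * (X.shape[1] + 1))
>     return res.success
> 
> X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
> print("XOR separable:", linearly_separable(X, np.array([-1, 1, 1, -1])))   # False
> print("AND separable:", linearly_separable(X, np.array([-1, -1, -1, 1])))  # True
> 
> Running this reports AND as separable and XOR as not, which is exactly the k=1 content of the parity result above.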
> This is why you're not supposed to directly cite things you have not actually read: it's too easy to misconstrue them based on inaccurate summaries transmitted over a series of biased noisy compressive channels.
> 
> Cheers,
> 
> --Barak.



