Connectionists: Stephen Hanson in conversation with Geoff Hinton

gary@ucsd.edu gary at eng.ucsd.edu
Sun Feb 6 17:34:56 EST 2022


"practopoietic theory challenges is the generally accepted idea that the
dynamics of a neural network (with its excitatory and inhibitory
mechanisms) is sufficient to implement a mind. Practopoiesis tells us that
this is not enough."

So you are making additional assumptions. Hence Ockham's razor applies...

On Sun, Feb 6, 2022 at 2:27 AM Danko Nikolic <danko.nikolic at gmail.com>
wrote:

> Hi Gary,
>
> you said: "Please avert your gaze while I apply Ockham’s Razor…"
>
> I dare you to apply Ockham's razor. Practopoiesis is designed with
> Ockham's razor in mind: to account for as many mental phenomena as possible
> by making as few assumptions as possible.
>
> Danko
>
>
>
> Dr. Danko Nikolić
> www.danko-nikolic.com
> https://www.linkedin.com/in/danko-nikolic/
> --- A progress usually starts with an insight ---
>
>
> On Sat, Feb 5, 2022 at 8:05 PM gary at ucsd.edu <gary at eng.ucsd.edu> wrote:
>
>> Please avert your gaze while I apply Ockham’s Razor…
>>
>> On Sat, Feb 5, 2022 at 2:12 AM Danko Nikolic <danko.nikolic at gmail.com>
>> wrote:
>>
>>> Gary, you wrote: "What are the alternatives?"
>>>
>>> There is at least one alternative: the theory of practopoiesis, which
>>> suggests that it is not the neural networks that "compute" the mental
>>> operations. It is instead the quick adaptations of neurons that are
>>> responsible for thinking and perceiving. The network only serves the
>>> function of bringing the information in and sending it out.
>>>
>>> The adaptations are suggested to do the central part of cognition.
>>>
>>> So far, this is all hypothetical. If we develop these ideas into a
>>> working system, it would be an entirely new paradigm, a third one:
>>> 1) manipulation of symbols
>>> 2) neural net
>>> 3) fast adaptations
>>>
>>>
>>> Danko
>>>
>>> Dr. Danko Nikolić
>>> www.danko-nikolic.com
>>> https://www.linkedin.com/in/danko-nikolic/
>>> --- A progress usually starts with an insight ---
>>>
>>>
>>> On Fri, Feb 4, 2022 at 7:19 PM gary at ucsd.edu <gary at eng.ucsd.edu> wrote:
>>>
>>>> This is an argument from lack of imagination, as Pat Churchland used to
>>>> say. All you have to notice is that your brain is a neural network. What
>>>> are the alternatives?
>>>>
>>>> On Fri, Feb 4, 2022 at 4:08 AM Danko Nikolic <danko.nikolic at gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>> I suppose everyone agrees that "the brain is a physical system",
>>>>> and that "There is no “magic” inside the brain",
>>>>> and that '“understanding” is just part of “learning.”'
>>>>>
>>>>> Also, we can agree that some sort of simulation takes place behind
>>>>> understanding.
>>>>>
>>>>> However, there is still a problem: neural networks can't implement
>>>>> the needed simulations; they cannot achieve the same cognitive effect that
>>>>> human minds (or animal minds) can.
>>>>>
>>>>> We don't know a way of wiring a neural network such that it could
>>>>> perform the simulations (understandings) necessary to find the answers to
>>>>> real-life questions, such as the hamster-with-a-hat problem.
>>>>>
>>>>> In other words, neural networks, as we know them today, cannot:
>>>>>
>>>>> 1) learn from a small number of examples (simulation or not)
>>>>> 2) apply the knowledge to a wide range of situations
>>>>>
>>>>>
>>>>> We, as scientists, do not understand understanding. Our technology's
>>>>> simulations (their depth of understanding) are no match for the simulations
>>>>> (depth of understanding) that the biological brain performs.
>>>>>
>>>>> I think that scientific integrity also covers acknowledging when we
>>>>> have not (yet) succeeded in solving a certain problem. There is still
>>>>> significant work to be done.
>>>>>
>>>>>
>>>>> Danko
>>>>>
>>>>> Dr. Danko Nikolić
>>>>> www.danko-nikolic.com
>>>>> https://www.linkedin.com/in/danko-nikolic/
>>>>> --- A progress usually starts with an insight ---
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Thu, Feb 3, 2022 at 9:35 PM Asim Roy <ASIM.ROY at asu.edu> wrote:
>>>>>
>>>>>> First of all, the brain is a physical system. There is no “magic”
>>>>>> inside the brain that does the “understanding” part. Take, for example,
>>>>>> learning to play tennis. You hit a few balls, some the right way and some
>>>>>> wrong, but you fairly quickly learn to hit them right most of the time. So
>>>>>> there is obviously some simulation going on in the brain about hitting the
>>>>>> ball in different ways and “learning” its consequences. What you are
>>>>>> calling “understanding” is really these simulations of different
>>>>>> scenarios. It’s also very similar to the augmentation used to train image
>>>>>> recognition systems, where you rotate images, obscure parts and so on, so
>>>>>> that you can still say it’s a cat even though you see only the cat’s face
>>>>>> or whiskers, or a cat flipped on its back. So, if the following questions
>>>>>> relate to “understanding,” you can easily resolve this by simulating such
>>>>>> scenarios when “teaching” the system. There’s nothing “magical” about
>>>>>> “understanding.” As I said, bear in mind that the brain, after all, is a
>>>>>> physical system, and “teaching” and “understanding” are embodied in that
>>>>>> physical system, not outside it. So “understanding” is just part of
>>>>>> “learning,” nothing more.
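>>>>>>
>>>>>> Purely as an illustration of that augmentation idea, one might write it
>>>>>> with torchvision-style transforms; the specific transforms and parameters
>>>>>> below are arbitrary example choices, not a prescription:
>>>>>>
>>>>>>   from torchvision import transforms
>>>>>>
>>>>>>   # Rotate, crop, flip and occlude training images so the model still sees
>>>>>>   # "cat" from partial or flipped views; the values are illustrative only.
>>>>>>   augment = transforms.Compose([
>>>>>>       transforms.RandomResizedCrop(224),       # show only part of the cat
>>>>>>       transforms.RandomHorizontalFlip(),       # mirrored views
>>>>>>       transforms.RandomRotation(degrees=30),   # rotated views
>>>>>>       transforms.ToTensor(),
>>>>>>       transforms.RandomErasing(p=0.5),         # obscure patches (face, whiskers)
>>>>>>   ])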
>>>>>>
>>>>>>
>>>>>>
>>>>>> DANKO:
>>>>>>
>>>>>> What would happen to the hat if the hamster rolls on its back? (Would
>>>>>> the hat fall off?)
>>>>>>
>>>>>> What would happen to the red hat when the hamster enters its lair?
>>>>>> (Would the hat fall off?)
>>>>>>
>>>>>> What would happen to that hamster when it goes foraging? (Would the
>>>>>> red hat have an influence on finding food?)
>>>>>>
>>>>>> What would happen in a situation of being chased by a predator?
>>>>>> (Would it be easier for predators to spot the hamster?)
>>>>>>
>>>>>>
>>>>>>
>>>>>> Asim Roy
>>>>>>
>>>>>> Professor, Information Systems
>>>>>>
>>>>>> Arizona State University
>>>>>>
>>>>>> Lifeboat Foundation Bios: Professor Asim Roy
>>>>>>
>>>>>> Asim Roy | iSearch (asu.edu)
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> *From:* Gary Marcus <gary.marcus at nyu.edu>
>>>>>> *Sent:* Thursday, February 3, 2022 9:26 AM
>>>>>> *To:* Danko Nikolic <danko.nikolic at gmail.com>
>>>>>> *Cc:* Asim Roy <ASIM.ROY at asu.edu>; Geoffrey Hinton <
>>>>>> geoffrey.hinton at gmail.com>; AIhub <aihuborg at gmail.com>;
>>>>>> connectionists at mailman.srv.cs.cmu.edu
>>>>>> *Subject:* Re: Connectionists: Stephen Hanson in conversation with
>>>>>> Geoff Hinton
>>>>>>
>>>>>>
>>>>>>
>>>>>> Dear Danko,
>>>>>>
>>>>>>
>>>>>>
>>>>>> Well said. I had a somewhat similar response to Jeff Dean’s 2021 TED
>>>>>> talk, in which he said (paraphrasing from memory, because I don’t remember
>>>>>> the precise words) that the famous 2012 Quoc Le unsupervised model [
>>>>>> https://static.googleusercontent.com/media/research.google.com/en//archive/unsupervised_icml2012.pdf
>>>>>> ] had learned the concept of a cat. In reality the model had clustered
>>>>>> together some catlike images based on the image statistics that it had
>>>>>> extracted, but it was a long way from a full, counterfactual-supporting
>>>>>> concept of a cat, much as you describe below.
>>>>>>
>>>>>>
>>>>>>
>>>>>> I fully agree with you that the reason for even having a semantics is,
>>>>>> as you put it, “to 1) learn with a few examples and 2) apply the knowledge
>>>>>> to a broad set of situations.” GPT-3 sometimes gives the appearance of
>>>>>> having done so, but it falls apart under close inspection, so the problem
>>>>>> remains unsolved.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Gary
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Feb 3, 2022, at 3:19 AM, Danko Nikolic <danko.nikolic at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> G. Hinton wrote: "I believe that any reasonable person would admit
>>>>>> that if you ask a neural net to draw a picture of a hamster wearing a red
>>>>>> hat and it draws such a picture, it understood the request."
>>>>>>
>>>>>>
>>>>>>
>>>>>> I would like to suggest why drawing a hamster with a red hat does not
>>>>>> necessarily imply understanding of the statement "hamster wearing a red
>>>>>> hat".
>>>>>>
>>>>>> To understand "hamster wearing a red hat" would mean inferring,
>>>>>> in newly emerging situations of this hamster, all the real-life
>>>>>> implications that the red hat brings to the little animal.
>>>>>>
>>>>>>
>>>>>>
>>>>>> What would happen to the hat if the hamster rolls on its back? (Would
>>>>>> the hat fall off?)
>>>>>>
>>>>>> What would happen to the red hat when the hamster enters its lair?
>>>>>> (Would the hat fall off?)
>>>>>>
>>>>>> What would happen to that hamster when it goes foraging? (Would the
>>>>>> red hat have an influence on finding food?)
>>>>>>
>>>>>> What would happen in a situation of being chased by a predator?
>>>>>> (Would it be easier for predators to spot the hamster?)
>>>>>>
>>>>>>
>>>>>>
>>>>>> ...and so on.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Countless questions can be asked. One has understood "hamster
>>>>>> wearing a red hat" only if one can reasonably answer many such
>>>>>> real-life questions. Similarly, a student has understood the material
>>>>>> in a class only if they can apply it in real-life situations
>>>>>> (e.g., applying Pythagoras' theorem). If a student gives a correct answer
>>>>>> to a multiple-choice question, we don't know whether the student understood
>>>>>> the material or whether this was just rote learning (often, it is rote
>>>>>> learning).
>>>>>>
>>>>>>
>>>>>>
>>>>>> I also suggest that understanding comes together with effective
>>>>>> learning: we store new information in such a way that we can recall it
>>>>>> later and use it effectively, i.e., make good inferences in newly emerging
>>>>>> situations based on this knowledge.
>>>>>>
>>>>>>
>>>>>>
>>>>>> In short: Understanding makes us humans able to 1) learn with a few
>>>>>> examples and 2) apply the knowledge to a broad set of situations.
>>>>>>
>>>>>>
>>>>>>
>>>>>> No neural network today has such capabilities, and we don't know how
>>>>>> to give them such capabilities. Neural networks need large numbers of
>>>>>> training examples that cover a large variety of situations, and even then
>>>>>> the networks can only deal with what the training examples have already
>>>>>> covered. Neural networks cannot extrapolate in that 'understanding' sense.
>>>>>>
>>>>>>
>>>>>>
>>>>>> I suggest that understanding truly extrapolates from a piece of
>>>>>> knowledge. It is not about satisfying a task such as translation between
>>>>>> languages or drawing hamsters with hats. It is about how you got the
>>>>>> capability to complete the task: did you have only a few examples that
>>>>>> covered something different but related, and then extrapolate from that
>>>>>> knowledge? If yes, this is going in the direction of understanding. Have
>>>>>> you seen countless examples and then interpolated among them? Then perhaps
>>>>>> it is not understanding.
>>>>>>
>>>>>>
>>>>>>
>>>>>> So, for the case of drawing a hamster wearing a red hat,
>>>>>> understanding perhaps would have taken place if the following happened
>>>>>> before that:
>>>>>>
>>>>>>
>>>>>>
>>>>>> 1) first, the network learned about hamsters (not many examples)
>>>>>>
>>>>>> 2) after that the network learned about red hats (outside the context
>>>>>> of hamsters and without many examples)
>>>>>>
>>>>>> 3) finally the network learned about drawing (outside of the context
>>>>>> of hats and hamsters, not many examples)
>>>>>>
>>>>>>
>>>>>>
>>>>>> After that, the network is asked to draw a hamster with a red hat. If
>>>>>> it does it successfully, maybe we have started cracking the problem of
>>>>>> understanding.
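>>>>>>
>>>>>> Purely as an illustrative sketch of this test (the class and the example
>>>>>> lists below are hypothetical placeholders, not an existing system):
>>>>>>
>>>>>>   # Learn each concept separately from a few examples, then ask for the
>>>>>>   # never-seen combination.
>>>>>>   class FewShotDrawer:
>>>>>>       def __init__(self):
>>>>>>           self.seen = []
>>>>>>       def learn(self, examples):
>>>>>>           self.seen.extend(examples)   # stand-in for few-shot learning
>>>>>>       def draw(self, prompt):
>>>>>>           raise NotImplementedError("this is the open research problem")
>>>>>>
>>>>>>   net = FewShotDrawer()
>>>>>>   net.learn(["hamster example 1", "hamster example 2"])   # 1) a few hamsters
>>>>>>   net.learn(["red hat example 1", "red hat example 2"])   # 2) red hats alone
>>>>>>   net.learn(["drawing example 1", "drawing example 2"])   # 3) drawing alone
>>>>>>   net.draw("a hamster wearing a red hat")                 # unseen combination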
>>>>>>
>>>>>>
>>>>>>
>>>>>> Note also that this requires the network to learn sequentially
>>>>>> without exhibiting catastrophic forgetting of the previous knowledge, which
>>>>>> is possibly also a consequence of human learning by understanding.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Danko
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Dr. Danko Nikolić
>>>>>> www.danko-nikolic.com
>>>>>> https://www.linkedin.com/in/danko-nikolic/
>>>>>>
>>>>>> --- A progress usually starts with an insight ---
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Feb 3, 2022 at 9:55 AM Asim Roy <ASIM.ROY at asu.edu> wrote:
>>>>>>
>>>>>> Without getting into the specific dispute between Gary and Geoff, I
>>>>>> think that with approaches similar to GLOM, we are finally headed in the
>>>>>> right direction. There’s plenty of neurophysiological evidence for
>>>>>> single-cell abstractions and multisensory neurons in the brain, which one
>>>>>> might claim correspond to symbols. And I think we can finally reconcile the
>>>>>> decades-old dispute between Symbolic AI and Connectionism.
>>>>>>
>>>>>>
>>>>>>
>>>>>> GARY: (Your GLOM, which as you know I praised publicly, is in many
>>>>>> ways an effort to wind up with encodings that effectively serve as symbols
>>>>>> in exactly that way, guaranteed to serve as consistent representations of
>>>>>> specific concepts.)
>>>>>>
>>>>>> GARY: I have *never* called for dismissal of neural networks, but
>>>>>> rather for some hybrid between the two (as you yourself contemplated in
>>>>>> 1991); the point of the 2001 book was to characterize exactly where
>>>>>> multilayer perceptrons succeeded and broke down, and where symbols could
>>>>>> complement them.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Asim Roy
>>>>>>
>>>>>> Professor, Information Systems
>>>>>>
>>>>>> Arizona State University
>>>>>>
>>>>>> Lifeboat Foundation Bios: Professor Asim Roy
>>>>>>
>>>>>> Asim Roy | iSearch (asu.edu)
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu>
>>>>>> *On Behalf Of *Gary Marcus
>>>>>> *Sent:* Wednesday, February 2, 2022 1:26 PM
>>>>>> *To:* Geoffrey Hinton <geoffrey.hinton at gmail.com>
>>>>>> *Cc:* AIhub <aihuborg at gmail.com>;
>>>>>> connectionists at mailman.srv.cs.cmu.edu
>>>>>> *Subject:* Re: Connectionists: Stephen Hanson in conversation with
>>>>>> Geoff Hinton
>>>>>>
>>>>>>
>>>>>>
>>>>>> Dear Geoff, and interested others,
>>>>>>
>>>>>>
>>>>>>
>>>>>> What, for example, would you make of a system that often drew the
>>>>>> red-hatted hamster you requested, and perhaps a fifth of the time gave you
>>>>>> utter nonsense?  Or say one that you trained to create birds but sometimes
>>>>>> output stuff like this:
>>>>>>
>>>>>>
>>>>>>
>>>>>> <image001.png>
>>>>>>
>>>>>>
>>>>>>
>>>>>> One could
>>>>>>
>>>>>>
>>>>>>
>>>>>> a. avert one’s eyes and deem the anomalous outputs irrelevant
>>>>>>
>>>>>> or
>>>>>>
>>>>>> b. wonder if it might be possible that sometimes the system gets the
>>>>>> right answer for the wrong reasons (e.g., partial historical contingency),
>>>>>> and wonder whether another approach might be indicated.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Benchmarks are harder than they look; most of the field has come to
>>>>>> recognize that. The Turing Test has turned out to be a lousy measure of
>>>>>> intelligence, easily gamed. It has turned out empirically that the Winograd
>>>>>> Schema Challenge did not measure common sense as well as Hector might have
>>>>>> thought. (As it happens, I am a minor coauthor of a very recent review on
>>>>>> this very topic: https://arxiv.org/abs/2201.02387.)
>>>>>> But its conquest in no way means machines now have common sense; many
>>>>>> people from many different perspectives recognize that (including, e.g.,
>>>>>> Yann LeCun, who generally tends to be more aligned with you than with me).
>>>>>>
>>>>>>
>>>>>>
>>>>>> So: on the goalpost of the Winograd schema, I was wrong, and you can
>>>>>> quote me; but what you said about me and machine translation remains your
>>>>>> invention, and it is inexcusable that you simply ignored my 2019
>>>>>> clarification. On the essential goal of trying to reach meaning and
>>>>>> understanding, I remain unmoved; the problem remains unsolved.
>>>>>>
>>>>>>
>>>>>>
>>>>>> All of the problems LLMs have with coherence, reliability,
>>>>>> truthfulness, misinformation, etc. stand witness to that fact. (Their
>>>>>> persistent inability to filter out toxic and insulting remarks stems from
>>>>>> the same.) I am hardly the only person in the field to see that progress on
>>>>>> any given benchmark does not inherently mean that the deep underlying
>>>>>> problems have been solved. You, yourself, in fact, have occasionally made
>>>>>> that point.
>>>>>>
>>>>>>
>>>>>>
>>>>>> With respect to embeddings: Embeddings are very good for natural
>>>>>> language *processing*; but NLP is not the same as NL*U* – when it
>>>>>> comes to *understanding*, their worth is still an open question.
>>>>>> Perhaps they will turn out to be necessary; they clearly aren’t sufficient.
>>>>>> In their extreme, they might even collapse into being symbols, in the sense
>>>>>> of uniquely identifiable encodings, akin to the ASCII code, in which a
>>>>>> specific set of numbers stands for a specific word or concept. (Wouldn’t
>>>>>> that be ironic?)
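>>>>>>
>>>>>> As a toy illustration of that "collapse" (the sizes and random vectors
>>>>>> below are made up; this is a sketch of nearest-codebook assignment, not a
>>>>>> claim about any particular model):
>>>>>>
>>>>>>   import numpy as np
>>>>>>
>>>>>>   rng = np.random.default_rng(0)
>>>>>>   codebook = rng.normal(size=(1000, 64))   # 1000 candidate "symbols", 64-d codes
>>>>>>   embedding = rng.normal(size=64)          # some learned concept embedding
>>>>>>
>>>>>>   # Snapping the continuous embedding to its nearest codebook entry yields a
>>>>>>   # discrete, uniquely identifiable code, much like an ASCII-style symbol.
>>>>>>   symbol_id = int(np.argmin(np.linalg.norm(codebook - embedding, axis=1)))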
>>>>>>
>>>>>>
>>>>>>
>>>>>> (Your GLOM, which as you know I praised publicly, is in many ways an
>>>>>> effort to wind up with encodings that effectively serve as symbols in
>>>>>> exactly that way, guaranteed to serve as consistent representations of
>>>>>> specific concepts.)
>>>>>>
>>>>>>
>>>>>>
>>>>>> Notably absent from your email is any kind of apology for
>>>>>> misrepresenting my position. It’s fine to say that “many people thirty
>>>>>> years ago once thought X”; it is another thing to say “Gary Marcus said X
>>>>>> in 2015,” when I didn’t. I have consistently felt throughout our interactions that
>>>>>> you have mistaken me for Zenon Pylyshyn; indeed, you once (at NeurIPS 2014)
>>>>>> apologized to me for having made that error. I am still not he.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Which maybe connects to the last point; if you read my work, you
>>>>>> would see thirty years of arguments *for* neural networks, just not
>>>>>> in the way that you want them to exist. I have ALWAYS argued that there is
>>>>>> a role for them;  characterizing me as a person “strongly opposed to neural
>>>>>> networks” misses the whole point of my 2001 book, which was subtitled
>>>>>> “Integrating Connectionism and Cognitive Science.”
>>>>>>
>>>>>>
>>>>>>
>>>>>> In the last two decades or so you have insisted (for reasons you have
>>>>>> never fully clarified, so far as I know) on abandoning symbol-manipulation,
>>>>>> but the reverse is not the case: I have *never* called for dismissal
>>>>>> of neural networks, but rather for some hybrid between the two (as you
>>>>>> yourself contemplated in 1991); the point of the 2001 book was to
>>>>>> characterize exactly where multilayer perceptrons succeeded and broke down,
>>>>>> and where symbols could complement them. It’s a rhetorical trick (which is
>>>>>> what the previous thread was about) to pretend otherwise.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Gary
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Feb 2, 2022, at 11:22, Geoffrey Hinton <geoffrey.hinton at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> Embeddings are just vectors of soft feature detectors and they are
>>>>>> very good for NLP.  The quote on my webpage from Gary's 2015 chapter
>>>>>> implies the opposite.
>>>>>>
>>>>>>
>>>>>>
>>>>>> A few decades ago, everyone I knew then would have agreed that the
>>>>>> ability to translate a sentence into many different languages was strong
>>>>>> evidence that you understood it.
>>>>>>
>>>>>>
>>>>>>
>>>>>> But once neural networks could do that, their critics moved the
>>>>>> goalposts. An exception is Hector Levesque who defined the goalposts more
>>>>>> sharply by saying that the ability to get pronoun references correct in
>>>>>> Winograd sentences is a crucial test. Neural nets are improving at that but
>>>>>> still have some way to go. Will Gary agree that when they can get pronoun
>>>>>> references correct in Winograd sentences they really do understand? Or does
>>>>>> he want to reserve the right to weasel out of that too?
>>>>>>
>>>>>>
>>>>>>
>>>>>> Some people, like Gary, appear to be strongly opposed to neural
>>>>>> networks because they do not fit their preconceived notions of how the mind
>>>>>> should work.
>>>>>>
>>>>>> I believe that any reasonable person would admit that if you ask a
>>>>>> neural net to draw a picture of a hamster wearing a red hat and it draws
>>>>>> such a picture, it understood the request.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Geoff
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Feb 2, 2022 at 1:38 PM Gary Marcus <gary.marcus at nyu.edu>
>>>>>> wrote:
>>>>>>
>>>>>> Dear AI Hub, cc: Stephen Hanson and Geoffrey Hinton, and the larger
>>>>>> neural network community,
>>>>>>
>>>>>>
>>>>>>
>>>>>> There has been a lot of recent discussion on this list about framing
>>>>>> and scientific integrity. Often the first step in restructuring narratives
>>>>>> is to bully and dehumanize critics. The second is to misrepresent their
>>>>>> position. People in positions of power are sometimes tempted to do this.
>>>>>>
>>>>>>
>>>>>>
>>>>>> The Hinton-Hanson interview that you just published is a real-time
>>>>>> example of just that. It opens with a needless and largely content-free
>>>>>> personal attack on a single scholar (me), with the explicit intention of
>>>>>> discrediting that person. Worse, the only substantive thing it says is
>>>>>> false.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Hinton says “In 2015 he [Marcus] made a prediction that computers
>>>>>> wouldn’t be able to do machine translation.”
>>>>>>
>>>>>>
>>>>>>
>>>>>> I never said any such thing.
>>>>>>
>>>>>>
>>>>>>
>>>>>> What I predicted, rather, was that multilayer perceptrons, as they
>>>>>> existed then, would not (on their own, absent other mechanisms)
>>>>>> *understand* language. Seven years later, they still haven’t, except
>>>>>> in the most superficial way.
>>>>>>
>>>>>>
>>>>>>
>>>>>> I made no comment whatsoever about machine translation, which I view
>>>>>> as a separate problem, solvable to a certain degree by correspondence
>>>>>> without semantics.
>>>>>>
>>>>>>
>>>>>>
>>>>>> I specifically tried to clarify Hinton’s confusion in 2019, but,
>>>>>> disappointingly, he has continued to purvey misinformation despite that
>>>>>> clarification. Here is what I wrote privately to him then, which should
>>>>>> have put the matter to rest:
>>>>>>
>>>>>>
>>>>>>
>>>>>> You have taken a single out-of-context quote [from 2015] and
>>>>>> misrepresented it. The quote, which you have prominently displayed at the
>>>>>> bottom of your own web page, says:
>>>>>>
>>>>>>
>>>>>>
>>>>>> Hierarchies of features are less suited to challenges such as
>>>>>> language, inference, and high-level planning. For example, as Noam Chomsky
>>>>>> famously pointed out, language is filled with sentences you haven't seen
>>>>>> before. Pure classifier systems don't know what to do with such sentences.
>>>>>> The talent of feature detectors -- in  identifying which member of some
>>>>>> category something belongs to -- doesn't translate into understanding
>>>>>> novel  sentences, in which each sentence has its own unique meaning.
>>>>>>
>>>>>>
>>>>>>
>>>>>> It does *not* say "neural nets would not be able to deal with novel
>>>>>> sentences"; it says that hierarchies of feature detectors (on their own, if
>>>>>> you read the context of the essay) would have trouble
>>>>>> *understanding* novel sentences.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Google Translate does not yet *understand* the content of the
>>>>>> sentences it translates. It cannot reliably answer questions about who did
>>>>>> what to whom, or why; it cannot infer the order of the events in
>>>>>> paragraphs; it can't determine the internal consistency of those events;
>>>>>> and so forth.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Since then, a number of scholars, such as the computational
>>>>>> linguist Emily Bender, have made similar points, and indeed current LLM
>>>>>> difficulties with misinformation, incoherence and fabrication all follow
>>>>>> from these concerns. Quoting from Bender’s prizewinning 2020 ACL article on
>>>>>> the matter with Alexander Koller,
>>>>>> https://aclanthology.org/2020.acl-main.463.pdf,
>>>>>> also emphasizing issues of understanding and meaning:
>>>>>>
>>>>>>
>>>>>>
>>>>>> *The success of the large neural language models on many NLP tasks is
>>>>>> exciting. However, we find that these successes sometimes lead to hype in
>>>>>> which these models are being described as “understanding” language or
>>>>>> capturing “meaning”. In this position paper, we argue that a system trained
>>>>>> only on form has a priori no way to learn meaning. .. a clear understanding
>>>>>> of the distinction between form and meaning will help guide the field
>>>>>> towards better science around natural language understanding. *
>>>>>>
>>>>>>
>>>>>>
>>>>>> Her later article with Gebru on language models as “stochastic parrots”
>>>>>> is in some ways an extension of this point; machine translation requires
>>>>>> mimicry, while true understanding (which is what I was discussing in 2015)
>>>>>> requires something deeper than that.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Hinton’s intellectual error here is in equating machine translation
>>>>>> with the deeper comprehension that robust natural language understanding
>>>>>> will require; as Bender and Koller observed, the two appear not to be the
>>>>>> same. (There is a longer discussion of the relation between language
>>>>>> understanding and machine translation, and why the latter has turned out to
>>>>>> be more approachable than the former, in my 2019 book with Ernest Davis).
>>>>>>
>>>>>>
>>>>>>
>>>>>> More broadly, Hinton’s ongoing dismissiveness of research from
>>>>>> perspectives other than his own (e.g., linguistics) has done the field a
>>>>>> disservice.
>>>>>>
>>>>>>
>>>>>>
>>>>>> As Herb Simon once observed, science does not have to be zero-sum.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Sincerely,
>>>>>>
>>>>>> Gary Marcus
>>>>>>
>>>>>> Professor Emeritus
>>>>>>
>>>>>> New York University
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Feb 2, 2022, at 06:12, AIhub <aihuborg at gmail.com> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> Stephen Hanson in conversation with Geoff Hinton
>>>>>>
>>>>>>
>>>>>>
>>>>>> In the latest episode of this video series for AIhub.org,
>>>>>> Stephen Hanson talks to Geoff Hinton about neural networks,
>>>>>> backpropagation, overparameterization, digit recognition, voxel cells,
>>>>>> syntax and semantics, Winograd sentences, and more.
>>>>>>
>>>>>>
>>>>>>
>>>>>> You can watch the discussion, and read the transcript, here:
>>>>>>
>>>>>>
>>>>>> https://aihub.org/2022/02/02/what-is-ai-stephen-hanson-in-conversation-with-geoff-hinton/
>>>>>>
>>>>>>
>>>>>>
>>>>>> About AIhub:
>>>>>>
>>>>>> AIhub is a non-profit dedicated to connecting the AI community to the
>>>>>> public by providing free, high-quality information through AIhub.org
>>>>>> (https://aihub.org/). We help researchers publish the latest AI news,
>>>>>> summaries of their work, opinion pieces, tutorials and more. We are
>>>>>> supported by many leading scientific organizations in AI, namely AAAI,
>>>>>> NeurIPS, ICML, AIJ/IJCAI, ACM SIGAI, EurAI/AICOMM, CLAIRE and RoboCup.
>>>>>>
>>>>>> Twitter: @aihuborg
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>

-- 
Gary Cottrell 858-534-6640 FAX: 858-534-7029
Computer Science and Engineering 0404
IF USING FEDEX INCLUDE THE FOLLOWING LINE:
CSE Building, Room 4130
University of California San Diego
9500 Gilman Drive # 0404
La Jolla, Ca. 92093-0404

Email: gary at ucsd.edu
Home page: http://www-cse.ucsd.edu/~gary/
Schedule: http://tinyurl.com/b7gxpwo

Blind certainty - a close-mindedness that amounts to an imprisonment so
total, that the prisoner doesn’t  even know that he’s locked up. -David
Foster Wallace


Power to the people! —Patti Smith

Except when they’re delusional —Gary Cottrell


This song makes me nostalgic for a memory I don't have -- Tess Cottrell










*Listen carefully,
Neither the Vedas
Nor the Qur'an
Will teach you this:
Put the bit in its mouth,
The saddle on its back,
Your foot in the stirrup,
And ride your wild runaway mind
All the way to heaven.*

-- Kabir

