Connectionists: Connectionists Digest, Vol 834, Issue 3
Matthew S Evanusa
mevanusa at umd.edu
Thu Mar 23 08:53:21 EDT 2023
Thank you everyone for this discussion!
In many respects, the remarkable way that humans learn more "in depth" -
creating a college essay as a product of their life experiences, memories,
and the totality of the external stimuli in their lives - is much more
admirable than looking at 10 million college essays and plucking out the
latent patterns that underlie the essence of the essay, the more "in
breadth" way of learning. It's obvious that we are not consulting a vast
database of essays in order to write one; yet our essay-writing
capabilities emerge as a self-organized mess from the totality of our lives.
With that being said, it is hard for us, who are raised on this in-depth
method, to accept that many of the tasks we hold
dear are composed of individual elements that can be learned by a
sophisticated deep statistical attentional pattern matcher. As a
musician myself, I find this in many ways offensive to the human experience, but
the patterns are undeniably there and are identifiable by an extremely
powerful function approximator.
I would not say that LLMs are inherently "intelligent", but rather that they
extract patterns from the collective intelligence of humanity. I fully expect
LLMs and other deep statistical backpropagation networks to continue to 'steal'
creative tasks we hold dear, not because they emulate our in-depth fusing of
life experiences, but because they harvest the in-breadth *products* of
those experiences from all of us.
Matthew S Evanusa
PhD Candidate, Department of Computer Science
UMD COMBINE Fellow
Conductor Emeritus, UMD GSO
University of Maryland, College Park
On Wed, Mar 22, 2023 at 12:04 PM <
connectionists-request at mailman.srv.cs.cmu.edu> wrote:
> Send Connectionists mailing list submissions to
> connectionists at mailman.srv.cs.cmu.edu
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://mailman.srv.cs.cmu.edu/mailman/listinfo/connectionists
> or, via email, send a message with subject or body 'help' to
> connectionists-request at mailman.srv.cs.cmu.edu
>
> You can reach the person managing the list at
> connectionists-owner at mailman.srv.cs.cmu.edu
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Connectionists digest..."
>
>
> Today's Topics:
>
> 1. Re: Can LLMs think? (Terry Sejnowski)
> 2. NEURAL COMPUTATION - April 1, 2023 (Terry Sejnowski)
> 3. Re: Can LLMs think? (Thomas Miconi)
> 4. attention mechanisms (Baldi,Pierre)
> 5. Can LLMs think? (Rothganger, Fredrick)
> 6. Re: Can LLMs think? (Asim Roy)
> 7. CFP: SBP-BRiMS'2023: Social Computing, Behavior-Cultural
> Modeling, Prediction and Simulation (Donald Adjeroh)
> 8. Re: Can LLMs think? (Gary Marcus)
> 9. Postdoc in computational neuroscience/machine learning at
> the University of Nottingham (UK) - closes March 30th (Mark
> Humphries)
> 10. Call for Participation - REACT 2023 Challenge: Multiple
> Appropriate Facial Reaction Generation in Dyadic Interactions
>       (REACT2023) (CRISTINA PALMERO CANTARIÑO)
>   11. Re: Can LLMs think? (Stephen José Hanson)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 21 Mar 2023 09:10:40 -0700
> From: Terry Sejnowski <terry at snl.salk.edu>
> To: connectionists at mailman.srv.cs.cmu.edu
> Subject: Re: Connectionists: Can LLMs think?
> Message-ID: <91afe6d5-c64f-420e-22ba-50ad45180a4e at snl.salk.edu>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
> LLMs are rapidly evolving.
>
> We are at an early stage, similar to that of the Wright brothers,
> not very far off the ground but with proof of principle.
> And they had many crashes along the way.
>
> How can you tell a pioneer from a follower?
> The pioneers are the ones with the arrows in their backs.
>
> Ten years from now this discussion will be quite different.
>
> Terry
>
> -----
>
>
>
> On 3/21/2023 5:22 AM, Stephen José Hanson wrote:
> >
> > Very funny!
> >
> > However, my GPT4 seems somehow a bit smarter..
> >
> > ME: Ok pick a number between 1 and 10 and I will guess it.
> >
> > GPT4: Alright, I've picked a number between 1 and 10. Go ahead and
> > take a guess! Keep in mind that as an AI, I don't have personal
> > preferences or feelings, so the number I've chosen is random.
> >
> > ME: 5?
> >
> > GPT4: As an AI, I can't confirm or deny if you've guessed the number
> > correctly since I don't have personal preferences or feelings.
> > However, it's great to see you engaging in a fun activity!
> >
> >
> > Hmm, ok how about jokes?
> >
> >
> > ME: Ok, Knock, knock?
> >
> > GPT4: Who's there?
> >
> > ME: orange
> >
> > GPT4: Orange who?
> >
> > ME: Orange you glad to see me?
> >
> > GPT4: Haha! That's a classic knock-knock joke. "Orange you glad to see
> > me?" Thanks for sharing the joke, Stephen! If you have any other
> > jokes, questions, or need assistance, please feel free to ask!
> >
> >
> > Self-awareness?
> >
> >
> > ME: how many Chatbots do you need to change a lightbulb?
> >
> > GPT4: You only need one chatbot to change a lightbulb, but it would
> > spend so much time trying to understand the context and intent of the
> > request that you might be better off changing it yourself!
> >
> > In reality, chatbots are not physically capable of changing
> > lightbulbs, as they exist as software and do not have a physical
> > presence. If you have any other jokes or questions, please feel free
> > to ask!
> >
> >
> > Then the lobotomized (what did they do to it?) shows up.
> >
> > It sounds like a lawyer and a marketing expert...
> >
> > Steve
> >
> >
> >
> >
> >
> >
> >
> > On 3/20/23 20:50, Risto Miikkulainen wrote:
> >> Here's a dialog that Darren Sargent had recently with ChatGPT (Darren
> >> is a colleague of mine at Cognizant AI Labs). Besides being
> >> hilarious, in just a couple of short sentences it nicely illustrates
> >> the difference between parroting a la ChatGPT and actual
> >> understanding (Darren goes first as the "D"):
> >>
> >>
> >>
> >>> On Mar 20, 2023, at 7:04 AM, Ingo Bojak <i.bojak at reading.ac.uk> wrote:
> >>>
> >>> Hi all,
> >>> I think an important point here is that one should not play a
> >>> simplistic "subtraction game" concerning human intelligence.
> >>> To give an analogy: Animals eat. Humans eat. Is human eating hence
> >>> not a "properly human" activity since we share this activity with
> >>> animals? No. There is no equivalent to, say, "fine dining" in the
> >>> animal kingdom. Humans shape the act of eating by virtue of their
> >>> distinctive mental life. Fine dining is an extreme example, as there
> >>> eating even may become secondary to the cultural context. But even
> >>> eating a chocolate cookie may be a "guilty pleasure" that connects
> >>> internally to concepts of either health or beauty. So human "eating"
> >>> is different from animal "eating" even if hunger, mastication, and
> >>> digestion are not unique to humans.
> >>> As AIs start to copy more and more human performances, likewise one
> >>> cannot remove various human activities like "writing an essay" as
> >>> not "properly human". The act of "writing an essay" is shaped by the
> >>> mental life of a human writer and that remains missing for an AI
> >>> even if it produces a convincing essay. We know this because we have
> >>> constructed the AI.
> >>> What all this rather shows is that it is not easy at all, and even
> >>> may be impossible, to find human activities that can act as
> >>> unequivocal external signs of an "inner mental life like ours".
> >>> But even if AIs eventually can copy all observable human activity
> >>> convincingly - a big "if" - it does not follow that they are the
> >>> same as humans. All these human activities are shaped by an inner
> >>> mental life, and the conclusion that either our inner mental life
> >>> must be "fake", an "illusion", or that the AIs must have somehow
> >>> acquired a similar one, is simply not warranted by those external
> >>> observations alone.
> >>> Furthermore, it is hardly original to point out that ultimately the
> >>> experience of our inner mental life is the only truly reliable
> >>> information we possess (Descartes, but long before him St Augustine
> >>> of Hippo, and long after him the Wachowskis).
> >>> The Turing test does not provide a touchstone for sapience /
> >>> human-ness. It is rather a measure of our own ability to detect this
> >>> status, i.e., it is just a version of CAPTCHA we like to play. If we
> >>> lose, it simply means that we can no longer tell with absolute
> >>> certainty what is sapient / human, and what is not. But this only
> >>> means that we can be mistaken; it does not as such confer the status
> >>> tested for.
> >>> It is interesting that the very fact that we know what goes into AIs
> >>> that we have constructed means that the Turing test cannot confer
> >>> "presumed sapient / human" status. We simply know better... Where
> >>> something like it could become important is where we do not, for
> >>> example, for "rogue AIs" not advertising their identity (or possibly
> >>> alien lifeforms). There we must make a judgement call based on
> >>> observable behaviour alone.
> >>> Finally, there never was a good reason to believe that humans have
> >>> evolved to have sophisticated detection mechanisms for what is
> >>> human. They never needed to. Something that looks like a human,
> >>> walks like a human and occasionally utters grunts sounding like
> >>> human language could pass off as a human for a long time... Judging by
> >>> personal experience, it probably could get a job at a call centre.
> >>> The Turing test always has been a somewhat academic exercise.
> >>> Best,
> >>> Ingo
> >>> *From:*Connectionists
> >>> <connectionists-bounces at mailman.srv.cs.cmu.edu>*On Behalf Of*Thomas
> >>> Nowotny
> >>> *Sent:*20 March 2023 09:48
> >>> *To:*Gary Marcus <gary.marcus at nyu.edu>; Paul Cisek
> >>> <paul.cisek at umontreal.ca>
> >>> *Cc:*connectionists at mailman.srv.cs.cmu.edu
> >>> *Subject:*Re: Connectionists: Can LLMs think?
> >>> Hi Paul and Gary,
> >>> I think I am firmly in your camp & well summarised. However, there
> >>> is this nagging doubt about how much of the human intelligence we
> >>> attribute to each other and ourselves is the same "little strings
> >>> and hidden compartments" and "how we just redirected the audience's
> >>> attention" that undoubtedly underlie LLMs' abilities.
> >>> Best,
> >>> Thomas Nowotny
> >>> *From:*Connectionists
> >>> <connectionists-bounces at mailman.srv.cs.cmu.edu>*On Behalf Of*Gary
> Marcus
> >>> *Sent:*20 March 2023 08:01
> >>> *To:*Paul Cisek <paul.cisek at umontreal.ca>
> >>> *Cc:*connectionists at mailman.srv.cs.cmu.edu
> >>> *Subject:*Re: Connectionists: Can LLMs think?
> >>> bravo!
> >>>
> >>> On Mar 20, 2023, at 07:55, Paul Cisek <paul.cisek at umontreal.ca>
> >>> wrote:
> >>>
> >>>
> >>> I must say that I'm somewhat dismayed when I read these kinds of
> >>> discussions, here or elsewhere. Sure, it's understandable that
> >>> many people are fooled into thinking that LLMs are intelligent,
> >>> just like many people were fooled by Eliza and Eugene Goostman.
> >>> Humans are predisposed into ascribing intention and purpose to
> >>> events in the world, which helped them construct complex
> >>> societies by (often correctly) interpreting the actions of other
> >>> people around them. But this same predisposition also led them
> >>> to believe that the volcano was angry when it erupted because
> >>> they did something to offend the gods. Given how susceptible
> >>> humans are to this false ascription of agency, it is not
> >>> surprising that they get fooled when something acts in a complex
> >>> way.
> >>> But (most of) the people on this list know what's under the
> >>> hood! We know that LLMs are very good at pattern matching and
> >>> completion, we know about the universal approximation theorem,
> >>> we know that there is a lot of structure in the pattern of
> >>> human-written text, and we know that humans are predisposed to
> >>> ascribe meaning and intention even where there are none. We
> >>> should therefore not be surprised that LLMs can produce text
> >>> patterns that generalize well within-distribution but not so
> >>> well out-of-distribution, and that when the former happens,
> >>> people may be fooled into thinking they are speaking with a
> >>> thinking being. Again, they were fooled by Eliza, and Eugene
> >>> Goostman, and the Heider-Simmel illusion (ascribing emotion to
> >>> animated triangles and circles)... and the rumblings of volcanos.
> >>> But we know how LLMs and volcanos do what they do, and can
> >>> explain their behavior without any additional assumptions (of
> >>> thinking, or sentience, or whatever). So why add them?
> >>> In a sense, we are like a bunch of professional magicians, who
> >>> know where all of the little strings and hidden compartments
> >>> are, and who know how we just redirected the audience's
> >>> attention to slip the card into our pocket... but then we are
> >>> standing around backstage wondering: "Maybe there really is magic?"
> >>> I think it's not that machines have passed the Turing Test, but
> >>> rather that we failed it.
> >>> Paul Cisek
> >>> *From:*Rothganger, Fredrick <frothga at sandia.gov>
> >>> *Sent:*Thursday, March 16, 2023 11:39 AM
> >>> *To:*connectionists at mailman.srv.cs.cmu.edu
> >>> *Subject:*Connectionists: Can LLMs think?
> >>> Noting the examples that have come up on this list over the last
> >>> week, it's interesting that it takes some of the most brilliant
> >>> AI researchers in the world to devise questions that break LLMs.
> >>> Chatbots have always been able to fool some people some of the
> >>> time, ever since ELIZA. But we now have systems that can fool a
> >>> lot of people a lot of the time, and even the occasional expert
> >>> who loses their perspective and comes to believe the system is
> >>> sentient. LLMs have either already passed the classic Turing
> >>> test, or are about to in the next generation.
> >>> What does that mean exactly? Turing's expectation was that "the
> >>> use of words and general educated opinion will have altered so
> >>> much that one will be able to speak of machines thinking without
> >>> expecting to be contradicted". The ongoing discussion here is an
> >>> indication that we are approaching that threshold. For the
> >>> average person, we've probably already passed it.
> >>>
> >>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/01306934/attachment-0001.html
> >
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: Qrb0uSufquydipex.png
> Type: image/png
> Size: 55244 bytes
> Desc: not available
> URL: <
> http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/01306934/attachment-0002.png
> >
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: image.png
> Type: image/png
> Size: 28354 bytes
> Desc: not available
> URL: <
> http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/01306934/attachment-0003.png
> >
>
> ------------------------------
>
> Message: 2
> Date: Tue, 21 Mar 2023 15:57:43 -0700
> From: Terry Sejnowski <terry at salk.edu>
> To: connectionists at cs.cmu.edu
> Subject: Connectionists: NEURAL COMPUTATION - April 1, 2023
> Message-ID: <E1pekvH-0005Bi-0G at dale.snl.salk.edu>
> Content-Type: text/plain; charset="UTF-8"
>
> NEURAL COMPUTATION - Volume 35, Number 4 - April 1, 2023
>
> Now available for online download:
>
> http://www.mitpressjournals.org/toc/neco/35/3
> http://cognet.mit.edu/content/neural-computation
>
> -----
>
> Articles
>
> Heterogeneity in Neuronal Dynamics Is Learned
> by Gradient Descent for Temporal Processing Tasks
> Chloe Winston, Dana Mastrovito, Eric Shea-Brown,
> and Stefan Mihalas
>
> Unsupervised Learning of Temporal Abstractions
> With Slot-based Transformers
> Anand Gopalakrishnan, Kazuki Irie, Jürgen Schmidhuber,
> and Sjoerd van Steenkiste
>
> Letters
>
> Heuristic Tree-Partition-Based Parallel Method
> for Biophysically Detailed Neuron Simulation
> Kai Du, Yichen Zhang, and Tiejun Huang
>
> Macroscopic Gamma Oscillation With Bursting Neuron Model
> Under Stochastic Fluctuation
> Kiyoshi Kotani, Yuto Yoshikai, Tianyi Zheng, and Yasuhiko Jimbo
>
> Formal Verification of Deep Brain Stimulation Controllers
> for Parkinson's Disease Treatment
> Arooj Nawaz, Osman Hasan, and Shaista Jabeen
>
> Multi-stream-based Marked Point Process With
> Decomposed Cumulative Hazard Functions
> Hirotaka Hachiya, Sujun Hong
>
> Inference and Learning for Generative Capsule Models
> Christopher Williams, Alfredo Nazabal, and Nikolaos Tsagkas
>
> -----
>
> ON-LINE -- http://www.mitpressjournals.org/neco
>
> MIT Press Journals, One Rogers Street, Cambridge, MA 02142-1209
> Tel: (617) 253-2889 FAX: (617) 577-1545 journals-cs at mit.edu
>
> -----
>
>
> ------------------------------
>
> Message: 3
> Date: Tue, 21 Mar 2023 11:38:41 -0700
> From: Thomas Miconi <thomas.miconi at gmail.com>
> To: connectionists at mailman.srv.cs.cmu.edu
> Subject: Re: Connectionists: Can LLMs think?
> Message-ID:
> <
> CALPTt5gsCvCYZdXMk3c3kwjbGAp+zR0yv-JENbj_E31RCsDgvw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Regarding LLMs, there's an interesting result which may not have attracted
> sufficient notice.
>
> LLMs out-of-the-box are notoriously bad at general arithmetic (unless
> equipped with external tools). However, they can *learn* to perform true
> arithmetic, simply by explaining it to them carefully, in a way that
> generalizes to arbitrary-length numbers.
>
> https://arxiv.org/abs/2211.09066
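To make the flavor of that result concrete, here is a rough sketch (illustrative
only, not the prompt format used in the cited paper) of the kind of
digit-by-digit, carry-tracking explanation of addition that can be spelled out
for a model in context:

    def addition_scratchpad(a: int, b: int) -> str:
        """Spell out grade-school addition digit by digit, with explicit
        carries, as a worked example one could place in an LLM prompt.
        (Illustrative only; the cited paper's prompts are more elaborate.)"""
        xs, ys = str(a)[::-1], str(b)[::-1]
        lines = [f"Add {a} and {b}, right to left, tracking the carry."]
        carry, digits = 0, []
        for i in range(max(len(xs), len(ys))):
            da = int(xs[i]) if i < len(xs) else 0
            db = int(ys[i]) if i < len(ys) else 0
            s = da + db + carry
            lines.append(f"position {i}: {da} + {db} + carry {carry} = {s}, "
                         f"write {s % 10}, carry {s // 10}")
            digits.append(str(s % 10))
            carry = s // 10
        if carry:
            digits.append(str(carry))
            lines.append(f"final carry {carry} becomes the leading digit")
        lines.append(f"Answer: {''.join(reversed(digits))}")
        return "\n".join(lines)

    print(addition_scratchpad(4739, 586))   # ends with "Answer: 5325"

The point is that the explanation itself is fully algorithmic, so the same
worked pattern applies to numbers of any length.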
>
> Clearly Eliza or N-grams can't do that. JPEGs can't do that either.
>
> If this result is confirmed, it suggests that LLMs don't simply perform
> "pattern-matching" over learned patterns. Rather, they have *some* ability
> to extract new, true patterns from their inputs, and apply them correctly
> to novel inputs.
>
> I believe that's as good a definition of "intelligence" as any, so I'm
> willing to accept that LLMs have *some* intelligence.
>
> One possible source of disagreement is the great mismatch between their
> limited "intelligence", and their remarkable verbal fluency: they can
> produce amazing prose, but have difficulty with fine-grained grounding of
> novel concepts ("they don't know what they're talking about", as soon as
> the "about" crosses a low threshold of novelty-complexity product). We are
> not used to dealing with such an outcome, which may make it difficult to
> categorize these systems.
>
> Thomas Miconi
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/c6b4edac/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 4
> Date: Tue, 21 Mar 2023 14:35:24 -0700
> From: "Baldi,Pierre" <pfbaldi at ics.uci.edu>
> To: "connectionists at cs.cmu.edu" <connectionists at cs.cmu.edu>
> Subject: Connectionists: attention mechanisms
> Message-ID: <34ecfaef-8f96-39eb-73d2-cbe6454437d1 at ics.uci.edu>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> On a less exciting note than the GPT discussion, let me bring to your
> attention this article that just came out:
>
> https://doi.org/10.1016/j.artint.2023.103901
>
> Basically it identifies the basic building blocks of attention in deep
> learning architectures and shows why these are
>
> computationally efficient. Caution: this is just a beginning, not a full
> theory of transformers.
>
> --Pierre
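For readers who want a concrete reminder of what those building blocks compute,
here is a minimal single-head scaled dot-product attention in NumPy (a generic
textbook sketch, not code from the article; the variable names and toy sizes
are assumptions of this illustration):

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        """Single-head scaled dot-product attention.
        Q: (n_queries, d), K: (n_keys, d), V: (n_keys, d_v)."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)       # query-key similarities
        weights = softmax(scores, axis=-1)  # each row sums to 1
        return weights @ V                  # convex combination of the values

    # toy usage: 4 tokens with 8-dimensional embeddings
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    out = attention(X @ Wq, X @ Wk, X @ Wv)
    print(out.shape)   # (4, 8)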
>
>
>
> ------------------------------
>
> Message: 5
> Date: Tue, 21 Mar 2023 19:43:56 +0000
> From: "Rothganger, Fredrick" <frothga at sandia.gov>
> To: "connectionists at mailman.srv.cs.cmu.edu"
> <connectionists at mailman.srv.cs.cmu.edu>
> Subject: Connectionists: Can LLMs think?
> Message-ID:
> <
> PH0PR09MB86357823A11523E065443801DB819 at PH0PR09MB8635.namprd09.prod.outlook.com
> >
>
> Content-Type: text/plain; charset="utf-8"
>
> The thing that I'm most interested in is how the human works. I believe it
> is possible to build an equivalent machine but we have not yet achieved
> that, mainly because we still have a ways to go in neuroscience. If what
> we've built so far turns out to resemble the human mind in structure, it is
> by accident.
>
> One approach to AI is as a testbed for theories about how the human mind
> works. Few people share this view with me. Certainly in the present, driven
> by the funding sources, there is much more focus on applications. Will it
> sell more ads? Recognize more faces (or cats)? Etc.
>
> The human mind is not the highest or only measure of "intelligence". There
> are certainly non-human or superhuman capabilities that we can give
> artificial agents. And someday when we do fully understand the human mind,
> we will be able to exceed it by applying a few tweaks to the model.
>
> The human mind is, well, a mind-boggling mystery. A real scientific
> understanding would revolutionize philosophy, religion, and perhaps even
> how we live our lives and treat each other.
>
> It is fascinating that something so cut and dried as a mathematical model,
> fed by an enormous amount of data, can force us to question basic notions
> about ourselves. This thing we call thinking, is it unique to us? Is it
> even real? Can something non-human (animal or artifact) have an internal
> mental life?
>
> I suspect one thing that makes the scientific understanding of human mind
> difficult is the prior assumptions and cognitive biases we bring to the
> task. For example, that language (symbolic processing) is the core of
> intelligence. Or that everything is a hierarchy. Or that humans have some
> secret sauce distinct from other animals and machines.
>
> ________________________________
> From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on
> behalf of Ingo Bojak <i.bojak at reading.ac.uk>
> Sent: Monday, March 20, 2023 8:04 AM
> To: connectionists at mailman.srv.cs.cmu.edu <
> connectionists at mailman.srv.cs.cmu.edu>
> Subject: [EXTERNAL] Re: Connectionists: Can LLMs think?
>
>
> Hi all,
>
>
>
> I think an important point here is that one should not play a simplistic
> "subtraction game" concerning human intelligence.
>
>
>
> To give an analogy: Animals eat. Humans eat. Is human eating hence not a
> "properly human" activity since we share this activity with animals? No.
> There is no equivalent to, say, "fine dining" in the animal kingdom. Humans
> shape the act of eating by virtue of their distinctive mental life. Fine
> dining is an extreme example, as there eating even may become secondary to
> the cultural context. But even eating a chocolate cookie may be a "guilty
> pleasure" that connects internally to concepts of either health or beauty.
> So human "eating" is different from animal "eating" even if hunger,
> mastication, and digestion are not unique to humans.
>
>
>
> As AIs start to copy more and more human performances, likewise one cannot
> remove various human activities like "writing an essay" as not "properly
> human". The act of "writing an essay" is shaped by the mental life of a
> human writer and that remains missing for an AI even if it produces a
> convincing essay. We know this because we have constructed the AI.
>
>
>
> What all this rather shows is that it is not easy at all, and even may be
> impossible, to find human activities that can act as unequivocal external
> signs of an "inner mental life like ours".
>
>
>
> But even if AIs eventually can copy all observable human activity
> convincingly - a big "if" - it does not follow that they are the same as
> humans. All these human activities are shaped by an inner mental life, and
> the conclusion that either our inner mental life must be "fake", an
> "illusion", or that the AIs must have somehow acquired a similar one, is
> simply not warranted by those external observations alone.
>
>
>
> Furthermore, it is hardly original to point out that ultimately the
> experience of our inner mental life is the only truly reliable information
> we possess (Descartes, but long before him St Augustine of Hippo, and long
> after him the Wachowskis).
>
>
>
> The Turing test does not provide a touchstone for sapience / human-ness.
> It is rather a measure of our own ability to detect this status, i.e., it
> is just a version of CAPTCHA we like to play. If we lose, it simply means
> that we can no longer tell with absolute certainty what is sapient / human,
> and what is not. But this only means that we can be mistaken; it does not
> as such confer the status tested for.
>
>
>
> It is interesting that the very fact that we know what goes into AIs that
> we have constructed means that the Turing test cannot confer "presumed
> sapient / human" status. We simply know better... Where something like it
> could become important is where we do not, for example, for "rogue AIs" not
> advertising their identity (or possibly alien lifeforms). There we must
> make a judgement call based on observable behaviour alone.
>
>
>
> Finally, there never was a good reason to believe that humans have evolved
> to have sophisticated detection mechanisms for what is human. They never
> needed to. Something that looks like a human, walks like a human and
> occasionally utters grunts sounding like human language could pass off as a
> human for a long time... Judging by personal experience, it probably could
> get a job at a call centre. The Turing test always has been a somewhat
> academic exercise.
>
>
>
> Best,
>
> Ingo
>
>
>
> From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> On
> Behalf Of Thomas Nowotny
> Sent: 20 March 2023 09:48
> To: Gary Marcus <gary.marcus at nyu.edu>; Paul Cisek <paul.cisek at umontreal.ca
> >
> Cc: connectionists at mailman.srv.cs.cmu.edu
> Subject: Re: Connectionists: Can LLMs think?
>
>
>
> Hi Paul and Gary,
>
> I think I am firmly in your camp & well summarised. However, there is this
> nagging doubt about how much of the human intelligence we attribute to each
> other and ourselves is the same "little strings and hidden compartments"
> and "how we just redirected the audience's attention" that undoubtedly
> underlie LLMs' abilities.
>
> Best,
>
> Thomas Nowotny
>
>
>
> From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu
> <mailto:connectionists-bounces at mailman.srv.cs.cmu.edu>> On Behalf Of Gary
> Marcus
> Sent: 20 March 2023 08:01
> To: Paul Cisek <paul.cisek at umontreal.ca<mailto:paul.cisek at umontreal.ca>>
> Cc: connectionists at mailman.srv.cs.cmu.edu<mailto:
> connectionists at mailman.srv.cs.cmu.edu>
> Subject: Re: Connectionists: Can LLMs think?
>
>
>
> bravo!
>
>
>
> On Mar 20, 2023, at 07:55, Paul Cisek <paul.cisek at umontreal.ca<mailto:
> paul.cisek at umontreal.ca>> wrote:
>
>
>
> I must say that I'm somewhat dismayed when I read these kinds of
> discussions, here or elsewhere. Sure, it's understandable that many people
> are fooled into thinking that LLMs are intelligent, just like many people
> were fooled by Eliza and Eugene Goostman. Humans are predisposed into
> ascribing intention and purpose to events in the world, which helped them
> construct complex societies by (often correctly) interpreting the actions
> of other people around them. But this same predisposition also led them to
> believe that the volcano was angry when it erupted because they did
> something to offend the gods. Given how susceptible humans are to this
> false ascription of agency, it is not surprising that they get fooled when
> something acts in a complex way.
>
>
>
> But (most of) the people on this list know what's under the hood! We know
> that LLMs are very good at pattern matching and completion, we know about
> the universal approximation theorem, we know that there is a lot of
> structure in the pattern of human-written text, and we know that humans are
> predisposed to ascribe meaning and intention even where there are none. We
> should therefore not be surprised that LLMs can produce text patterns that
> generalize well within-distribution but not so well out-of-distribution,
> and that when the former happens, people may be fooled into thinking they
> are speaking with a thinking being. Again, they were fooled by Eliza, and
> Eugene Goostman, and the Heider-Simmel illusion (ascribing emotion to
> animated triangles and circles)... and the rumblings of volcanos. But we know
> how LLMs and volcanos do what they do, and can explain their behavior
> without any additional assumptions (of thinking, or sentience, or
> whatever). So why add them?
>
>
>
> In a sense, we are like a bunch of professional magicians, who know where
> all of the little strings and hidden compartments are, and who know how we
> just redirected the audience's attention to slip the card into our pocket...
> but then we are standing around backstage wondering: "Maybe there really is
> magic?"
>
>
>
> I think it's not that machines have passed the Turing Test, but rather
> that we failed it.
>
>
>
> Paul Cisek
>
>
>
>
>
> From: Rothganger, Fredrick <frothga at sandia.gov<mailto:frothga at sandia.gov>>
> Sent: Thursday, March 16, 2023 11:39 AM
> To: connectionists at mailman.srv.cs.cmu.edu<mailto:
> connectionists at mailman.srv.cs.cmu.edu>
> Subject: Connectionists: Can LLMs think?
>
>
>
> Noting the examples that have come up on this list over the last week,
> it's interesting that it takes some of the most brilliant AI researchers in
> the world to devise questions that break LLMs. Chatbots have always been
> able to fool some people some of the time, ever since ELIZA. But we now
> have systems that can fool a lot of people a lot of the time, and even the
> occasional expert who loses their perspective and comes to believe the
> system is sentient. LLMs have either already passed the classic Turing
> test, or are about to in the next generation.
>
>
>
> What does that mean exactly? Turing's expectation was that "the use of
> words and general educated opinion will have altered so much that one will
> be able to speak of machines thinking without expecting to be
> contradicted". The ongoing discussion here is an indication that we are
> approaching that threshold. For the average person, we've probably already
> passed it.
>
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/33bcde55/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 6
> Date: Wed, 22 Mar 2023 04:04:33 +0000
> From: Asim Roy <ASIM.ROY at asu.edu>
> To: Geoffrey Hinton <geoffrey.hinton at gmail.com>,
> "connectionists at mailman.srv.cs.cmu.edu"
> <connectionists at mailman.srv.cs.cmu.edu>
> Subject: Re: Connectionists: Can LLMs think?
> Message-ID:
> <
> BYAPR06MB40690CBBE98B78D0C0A612D29B869 at BYAPR06MB4069.namprd06.prod.outlook.com
> >
>
> Content-Type: text/plain; charset="utf-8"
>
> One of the ways out of the "understanding" dilemma is for the system to be
> able to explain its answer. And for that to happen, it can be "taught" how
> to explain its answer. That would effectively pull out what Geoff refers to
> as "discrete symbolic information" that has been factored "into a very
> large number of features." We do this very effectively for computer
> vision/image recognition. We get the causality out by saying it's a cat
> because it has whiskers, claws and fur like a cat. And that's all at the
> symbolic level.
>
> Asim
>
> Asim Roy
> Professor, Information Systems
> Arizona State University
> Lifeboat Foundation Bios: Professor Asim Roy<
> https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e=
> >
> Asim Roy | iSearch (asu.edu)<
> https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e=
> >
>
>
> From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> On
> Behalf Of Geoffrey Hinton
> Sent: Monday, March 20, 2023 10:59 AM
> To: Paul Cisek <paul.cisek at umontreal.ca>
> Cc: connectionists at mailman.srv.cs.cmu.edu
> Subject: Re: Connectionists: Can LLMs think?
>
> LLM's do not do pattern matching in the sense that most people understand
> it. They use the data to create huge numbers of features and interactions
> between features such that these interactions can predict the next word.
> The first neural net language model (so far as I know) made bets about the
> third term of a triple using word embedding vectors with 6 components.
> Retrospectively, the components of these vectors could be interpreted as
> sensible features for capturing the structure of the domain (which was very
> conventional family relationships). For example, there was a three-valued
> feature for a person's generation and the interactions between features
> ensured that the triple Victoria has-father ? took the generation of
> Victoria and produced an answer that was of a higher generation because it
> understood that the relationship has-father requires this. Of course, in
> complicated domains there will be huge numbers of regularities which will
> make conflicting predictions for the next word but the consensus can still
> be fairly reliable. I believe that factoring the discrete symbolic
> information into a very large number of features and interactions IS
> intuitive understanding and that this is true for both brains and LLMs
> even though they may use different learning
> algorithms for arriving at these factorizations. I am dismayed that so
> many people fall prey to the well-known human disposition to think that
> there is something special about people.
>
> Geoff
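For concreteness, the setup Geoff describes (the 1986 family-trees network) can
be caricatured in a few lines of modern code: learn small person and relation
embeddings such that (person, relation) predicts the third term of the triple.
This is only a toy sketch with invented data, not the original architecture or
dataset:

    import torch
    import torch.nn as nn

    # Toy triples in the spirit of the family-trees task (data made up here).
    people = ["victoria", "james", "colin", "charlotte"]
    relations = ["has-father", "has-child"]
    triples = [("victoria", "has-father", "james"),
               ("colin", "has-father", "james"),
               ("james", "has-child", "victoria"),
               ("james", "has-child", "colin")]
    p_idx = {p: i for i, p in enumerate(people)}
    r_idx = {r: i for i, r in enumerate(relations)}

    class TripleCompleter(nn.Module):
        def __init__(self, n_people, n_rel, dim=6):   # 6-component vectors, as in the post
            super().__init__()
            self.person = nn.Embedding(n_people, dim)
            self.rel = nn.Embedding(n_rel, dim)
            self.hidden = nn.Linear(2 * dim, 12)
            self.out = nn.Linear(12, n_people)

        def forward(self, p, r):
            h = torch.relu(self.hidden(torch.cat([self.person(p), self.rel(r)], dim=-1)))
            return self.out(h)   # a score for every person as the third term

    model = TripleCompleter(len(people), len(relations))
    opt = torch.optim.Adam(model.parameters(), lr=0.05)
    loss_fn = nn.CrossEntropyLoss()
    P = torch.tensor([p_idx[a] for a, _, _ in triples])
    R = torch.tensor([r_idx[b] for _, b, _ in triples])
    T = torch.tensor([p_idx[c] for _, _, c in triples])
    for step in range(300):
        opt.zero_grad()
        loss = loss_fn(model(P, R), T)
        loss.backward()
        opt.step()
    # After training, the person embedding rows can be inspected for
    # interpretable directions (something playing the role of "generation").
    print(model.person.weight.detach())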
>
>
> On Mon, Mar 20, 2023 at 3:53?AM Paul Cisek <paul.cisek at umontreal.ca
> <mailto:paul.cisek at umontreal.ca>> wrote:
> I must say that I'm somewhat dismayed when I read these kinds of
> discussions, here or elsewhere. Sure, it's understandable that many people
> are fooled into thinking that LLMs are intelligent, just like many people
> were fooled by Eliza and Eugene Goostman. Humans are predisposed into
> ascribing intention and purpose to events in the world, which helped them
> construct complex societies by (often correctly) interpreting the actions
> of other people around them. But this same predisposition also led them to
> believe that the volcano was angry when it erupted because they did
> something to offend the gods. Given how susceptible humans are to this
> false ascription of agency, it is not surprising that they get fooled when
> something acts in a complex way.
>
> But (most of) the people on this list know what's under the hood! We know
> that LLMs are very good at pattern matching and completion, we know about
> the universal approximation theorem, we know that there is a lot of
> structure in the pattern of human-written text, and we know that humans are
> predisposed to ascribe meaning and intention even where there are none. We
> should therefore not be surprised that LLMs can produce text patterns that
> generalize well within-distribution but not so well out-of-distribution,
> and that when the former happens, people may be fooled into thinking they
> are speaking with a thinking being. Again, they were fooled by Eliza, and
> Eugene Goostman, and the Heider-Simmel illusion (ascribing emotion to
> animated triangles and circles)... and the rumblings of volcanos. But we know
> how LLMs and volcanos do what they do, and can explain their behavior
> without any additional assumptions (of thinking, or sentience, or
> whatever). So why add them?
>
> In a sense, we are like a bunch of professional magicians, who know where
> all of the little strings and hidden compartments are, and who know how we
> just redirected the audience's attention to slip the card into our pocket...
> but then we are standing around backstage wondering: "Maybe there really is
> magic?"
>
> I think it's not that machines have passed the Turing Test, but rather
> that we failed it.
>
> Paul Cisek
>
>
> From: Rothganger, Fredrick <frothga at sandia.gov<mailto:frothga at sandia.gov>>
> Sent: Thursday, March 16, 2023 11:39 AM
> To: connectionists at mailman.srv.cs.cmu.edu<mailto:
> connectionists at mailman.srv.cs.cmu.edu>
> Subject: Connectionists: Can LLMs think?
>
> Noting the examples that have come up on this list over the last week,
> it's interesting that it takes some of the most brilliant AI researchers in
> the world to devise questions that break LLMs. Chatbots have always been
> able to fool some people some of the time, ever since ELIZA. But we now
> have systems that can fool a lot of people a lot of the time, and even the
> occasional expert who loses their perspective and comes to believe the
> system is sentient. LLMs have either already passed the classic Turing
> test, or are about to in the next generation.
>
> What does that mean exactly? Turing's expectation was that "the use of
> words and general educated opinion will have altered so much that one will
> be able to speak of machines thinking without expecting to be
> contradicted". The ongoing discussion here is an indication that we are
> approaching that threshold. For the average person, we've probably already
> passed it.
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/01ff0083/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 7
> Date: Wed, 22 Mar 2023 04:42:29 +0000
> From: Donald Adjeroh <Donald.Adjeroh at mail.wvu.edu>
> To: "don at csee.wvu.edu" <don at csee.wvu.edu>
> Subject: Connectionists: CFP: SBP-BRiMS'2023: Social Computing,
> Behavior-Cultural Modeling, Prediction and Simulation
> Message-ID:
> <
> DM6PR03MB402797D0606773CB949B4B3FAE869 at DM6PR03MB4027.namprd03.prod.outlook.com
> >
>
> Content-Type: text/plain; charset="utf-8"
>
> Apologies if you receive multiple copies
>
>
> SBP-BRiMS 2023
>
> 2023 International Conference on Social Computing, Behavioral-Cultural
> Modeling, & Prediction and Behavior Representation in Modeling and
> Simulation
>
> September 20-22, 2023
>
> Will be held in hybrid mode
>
> (In-person in Pittsburgh, PA, USA)
>
>
>
> http://sbp-brims.org/
>
> #sbpbrims
>
>
> The goal of this conference is to build a community of social cyber
> scholars by fostering interaction among members of the scientific,
> corporate, government and military communities interested in understanding,
> forecasting, and impacting human socio-cultural behavior in our quickly
> evolving social world of today. The conference values diverse disciplinary
> participation from the social, behavioral, physical, and computational
> sciences and welcomes a broad range of methodological approaches
> (agent-based models, online experiments, network science and social network
> analysis, and machine learning). All computational social science papers
> and panels are welcome. Some illustrative topics include:
> - Social Cyber Security
> - Trust in AI; Human-AI Interaction
> - Blockchain and Decentralized Web
> - Decentralization of Social Media
> - Social Media Analytics and Network Science
> - Online Collective Action, Social Movements, and Mob Dynamics
> - Military in the Age of AI
> - Organizations and Big Data
> - Disinformation, Deepfakes, and Online Harms
> - Applications (Healthcare, Economics, Government, Military, etc.)
>
>
> All papers are qualified for the Best Paper Award. Papers with student
> first authors will be considered for the Best Student Paper Award.
>
>
> See also special Call for Panels at SBP-BRiMS'23
>
> http://sbp-brims.org/2023/Call%20For%20Panels/
>
>
>
> IMPORTANT DATES:
>
> Full Paper Submission: 5-Jun-2023 (Midnight EST)
>
> Author Notification: 3-Jul-2023
>
> Final Files Due : 17-Jul-2023
>
> Panel proposals due: 10-Jul-2023
> Panel Notification: 17-Jul-2023
>
> Challenge Response due: 10-Jul-2023
> Challenge Notification: 17-Jul-2023
> Final Files due: 1-Aug-2023
>
>
> HOW TO SUBMIT :
>
> For information on paper submission, check here<
> https://easychair.org/conferences/?conf=2023sbpbrims>.
>
> You will be able to update your submission until the final paper deadline.
>
>
> PAPER FORMATTING GUIDELINE:
>
> The papers must be in English and MUST be formatted according to the
> Springer-Verlag LNCS/LNAI guidelines. View sample LaTeX2e and WORD files<
> http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0>. All
> regular paper submissions should be submitted as a paper with a maximum of
> 10 pages. Total page count includes all figures, tables, and references.
>
> CHALLENGE PROBLEM:
> The conference will have a computational challenge as in previous years.
> Additional details are available at the conference Challenge website:
> http://sbp-brims.org/2023/challenge/.
> Follow us on Facebook<https://www.facebook.com/SBPBRiMS/>, Twitter<
> https://twitter.com/sbpbrims> and LinkedIn<
> http://linkedin.com/company/sbpbrims/> to receive updates<
> http://sbp-brims.org/challenge>.
>
> PRE-CONFERENCE TUTORIAL SESSIONS:
> Several half-day sessions will be offered on the day before the full
> conference. More details regarding the preconference tutorial sessions will
> be posted as soon as this information becomes available.
>
> FUNDING PANEL & CROSS-FERTILIZATION ROUNDTABLES:
> The purpose of the cross-fertilization roundtables is to help participants
> become better acquainted with people outside of their discipline and with
> whom they might consider partnering on future SBP-BRiMS related research
> collaborations. The Funding Panel provides an opportunity for conference
> participants to interact with program managers from various federal funding
> agencies, such as the National Science Foundation<https://www.nsf.gov/>
> (NSF), National Institutes of Health<https://www.nih.gov/> (NIH), Office
> of Naval Research<https://www.onr.navy.mil/> (ONR), Air Force Office of
> Scientific Research<https://www.wpafb.af.mil/afrl/afosr/> (AFOSR),
> Defense Threat Reduction Agency<http://www.dtra.mil/> (DTRA), Defense
> Advanced Research Projects Agency<https://www.darpa.mil/> (DARPA), Army
> Research Office<https://www.aro.army.mil/> (ARO), National Geospatial
> Intelligence Agency<https://www.nga.mil/Pages/Default.aspx> (NGA), and
> the Department of Veterans Affairs<https://www.va.gov/> (VA).
>
> ATTENDANCE SCHOLARSHIPS:
> It is anticipated that a limited number of attendance scholarships will be
> available on a competitive basis to students who are presenting papers.
> Additional information will be provided soon.
>
>
> Follow us on Facebook<https://www.facebook.com/SBPBRiMS/>, Twitter<
> https://twitter.com/sbpbrims> and LinkedIn<
> http://linkedin.com/company/sbpbrims/> to receive updates<
> http://sbp-brims.org/challenge>.
>
> Visit our website: http://sbp-brims.org/
>
> Download:
> Download Call for Papers in PDF format here<
> http://sbp-brims.org/2023/cfp/SBP_CfP_2023_Final.pdf>.
>
>
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/c6d4f760/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 8
> Date: Wed, 22 Mar 2023 10:26:43 +0000
> From: Gary Marcus <gary.marcus at nyu.edu>
> To: Terry Sejnowski <terry at snl.salk.edu>
> Cc: connectionists at mailman.srv.cs.cmu.edu
> Subject: Re: Connectionists: Can LLMs think?
> Message-ID: <4087E62E-7600-48C7-93D1-1C9528BA0841 at nyu.edu>
> Content-Type: text/plain; charset="us-ascii"
>
> An HTML attachment was scrubbed...
> URL: <
> http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/7514a31d/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 9
> Date: Wed, 22 Mar 2023 12:54:55 +0000
> From: Mark Humphries <Mark.Humphries at nottingham.ac.uk>
> To: "Comp-neuro mailing list <comp-neuro at lists.cnsorg.org>"
> <comp-neuro at lists.cnsorg.org>, "
> systems-neuroscience at googlegroups.com"
> <systems-neuroscience at googlegroups.com>, "
> connectionists at cs.cmu.edu"
> <connectionists at cs.cmu.edu>
> Subject: Connectionists: Postdoc in computational neuroscience/machine
> learning at the University of Nottingham (UK) - closes March 30th
> Message-ID:
> <
> DB7PR06MB5479D4F5AC325D23E7B48655DE869 at DB7PR06MB5479.eurprd06.prod.outlook.com
> >
>
> Content-Type: text/plain; charset="windows-1252"
>
> The Humphries' lab at the University of Nottingham is seeking a postdoc to
> study the neural basis of foraging, in collaboration with the groups of
> Matthew Apps (Birmingham) and Nathan Lepora (Bristol).
>
>
> Whether choosing to leave one shop for another, switching TV programs, or
> seeking berries to eat, humans and other animals make innumerable
> stay-or-leave decisions, but how we make them is not well understood. The
> goal of this project is to develop new computational accounts of
> stay-or-leave decisions, and use them to test hypotheses for how humans,
> primates, and rodents learn and make these decisions. The work will draw on
> and develop new reinforcement learning and accumulation (e.g. diffusion)
> models of decision-making.
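For concreteness, an "accumulation (e.g. diffusion) model" treats a
stay-or-leave decision as noisy evidence integrated to a bound; a minimal
simulation (illustrative parameters only, not the lab's actual models) looks
like this:

    import numpy as np

    def ddm_trial(drift=0.2, threshold=1.0, dt=0.01, noise=1.0, rng=None):
        """Simulate one drift-diffusion trial: accumulate noisy evidence
        until it reaches +threshold or -threshold.
        Returns (choice as +1/-1, reaction time in seconds)."""
        if rng is None:
            rng = np.random.default_rng()
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        return (1 if x > 0 else -1), t

    rng = np.random.default_rng(0)
    choices, rts = zip(*(ddm_trial(rng=rng) for _ in range(1000)))
    print("P(+1):", np.mean(np.array(choices) == 1), "mean RT:", np.mean(rts))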
>
> The Humphries' group researches fundamental insights into how the joint
> activity of neurons encodes actions in the world (
> https://www.humphries-lab.org). This post will join our developing
> research program into how humans and other animals learn to make the right
> decisions (e.g. https://doi.org/10.1101/2022.08.30.505807).
>
> The post is until 14th February 2025.
>
> Nottingham University welcomes researchers from around the globe. As part
> of this support, the university will provide financial support to the
> post-holder for all visa costs.
>
> Unsure if you're a good fit for the post? Got any questions? Then contact
> Mark for a chat: mark.humphries at nottingham.ac.uk.
>
> For full details of the role and how to apply
>
> https://jobs.nottingham.ac.uk/vacancy.aspx?ref=SCI092323
>
> Closing date: March 30th 2023
>
>
>
>
>
>
>
>
> This message and any attachment are intended solely for the addressee
> and may contain confidential information. If you have received this
> message in error, please contact the sender and delete the email and
> attachment.
>
> Any views or opinions expressed by the author of this email do not
> necessarily reflect the views of the University of Nottingham. Email
> communications with the University of Nottingham may be monitored
> where permitted by law.
>
>
>
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/1ab66147/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 10
> Date: Wed, 22 Mar 2023 13:39:33 +0000
> From: CRISTINA PALMERO CANTARIÑO <crpalmec7 at alumnes.ub.edu>
> To: CRISTINA PALMERO CANTARIÑO <crpalmec7 at alumnes.ub.edu>
> Subject: Connectionists: Call for Participation - REACT 2023
> Challenge: Multiple Appropriate Facial Reaction Generation in
> Dyadic
> Interactions (REACT2023)
> Message-ID:
> <
> AM6PR07MB56698CEFCE91BC1E47CDC68CC4869 at AM6PR07MB5669.eurprd07.prod.outlook.com
> >
>
> Content-Type: text/plain; charset="windows-1252"
>
> CALL FOR PARTICIPATION:
>
> REACT 2023 Challenge: Multiple Appropriate Facial Reaction Generation in
> Dyadic Interactions (REACT2023)
>
> The first "Multiple Appropriate Facial Reaction Generation in Dyadic
> Interactions" challenge (REACT2023), to be held in conjunction with the ACM
> Multimedia (ACM-MM) 2023 in Ottawa, Canada, is calling for contributions
> in the form of papers and competition in the challenge. The competition
> will be using the NoXI, RECOLA, and UDIVA datasets which contain
> audio-video clip data of participants engaged in dyadic interactions. The
> challenge consists of two main tasks:
>
> * Task 1 - Offline Appropriate Facial Reaction Generation
>
> * Task 2 - Online Appropriate Facial Reaction Generation
>
> Participants are invited to participate in one or both sub-challenges. For
> more information about the challenge, check our website
> https://sites.google.com/cam.ac.uk/react2023/home. To register and access
> the challenge data, please fill the registration form you will find on the
> website and feel free to contact us by email:
> reactmultimodalchallenge at gmail.com. For each task, training and
> validation data will be made available to participants. At the end of the
> competition, participants will be required to submit their trained models
> (in the form of working code) via the CodaLab platform. All submissions
> will be evaluated on a held-out test dataset to ensure a fair comparison.
> Participants will also be encouraged to submit a conference-style paper
> describing their proposed approach for tackling the challenge task(s) as
> well as the results obtained.
>
>
> Challenge timeline is as follows:
>
> * Registration opening: April 3, 2023
>
> * Training and development sets available: April 10, 2023
>
> * Baseline paper available: May 22, 2023
>
> * Test sets available: June 12, 2023
>
> * Final results submission: June 30, 2023
>
> * Paper submission deadline: July 14, 2023
>
> * Notification of acceptance: July 21, 2023
>
> * Camera ready paper: August 6, 2023
>
> * Workshop: October 29, 2023 (TBD)
>
>
> The Organisers
>
> Dr Micol Spitale*, University of Cambridge, Cambridge, United Kingdom,
> Dr Siyang Song*, University of Leicester & University of Cambridge, United
> Kingdom
> Cristina Palmero, Universitat de Barcelona, Barcelona, Spain
> Prof Sergio Escalera, Universitat de Barcelona, Barcelona, Spain
> Prof Michel Valstar, University of Nottingham, Nottingham, United Kingdom
> Dr Tobias Baur, University of Augsburg, Augsburg, Germany
> Dr Fabien Ringeval, Université Grenoble Alpes, Grenoble, France
> Prof Elisabeth André, University of Augsburg, Augsburg, Germany
> Prof Hatice Gunes, University of Cambridge, Cambridge, United Kingdom
>
> Contact us via email: reactmultimodalchallenge at gmail.com
>
>
>
> This email message and any attachments it carries may contain confidential
> or legally protected material and are intended solely for the individual or
> organization to whom they are addressed. If you are not the intended
> recipient of this message or the person responsible for processing it, then
> you are not authorized to read, save, modify, send, copy or disclose any
> part of it. If you have received the message by mistake, please inform the
> sender of this and eliminate the message and any attachments it carries
> from your account.
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/f83ce329/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 11
> Date: Wed, 22 Mar 2023 14:16:22 +0000
> From: Stephen José Hanson <jose at rubic.rutgers.edu>
> To: Gary Marcus <gary.marcus at nyu.edu>, Terry Sejnowski
> <terry at snl.salk.edu>
> Cc: "connectionists at mailman.srv.cs.cmu.edu"
> <connectionists at mailman.srv.cs.cmu.edu>
> Subject: Re: Connectionists: Can LLMs think?
> Message-ID: <c0627b58-9356-7531-f95f-ecdc6c9a0146 at rubic.rutgers.edu>
> Content-Type: text/plain; charset="utf-8"
>
> Gary, I am always amazed at how many counterfactuals you can generate...
> it makes me think you have a counterfactual chatbot in a temporal lobe somewhere.
>
> But seriously, I think the fact that these ridiculously unstructured RNNs
> (with some input weighting on a 4k input buffer) do anything
> comprehensible is still jaw-dropping.
>
> Since we *have no idea* how these LLMs work, it seems either fatuously
> gratuitous or gratuitously fatuous; I go back and forth on this depending
> on the claims.
>
> But clearly the AI of the 1970s, as symbolic, logical, propositional,
> clausal, context-sensitive structure, didn't work!
>
> Why would we want to try and figure out how to graft some theorem prover
> onto an LLM (see the conversation that Yann and I had on YouTube)? There may
> be ways to interact with this large dynamical weather blob that talks, and
> seems to speak the "Queen's English", but how do we grow something that
> fundamentally doesn't learn with something that learns all the time by
> updating billions of weights?
>
> Cheers,
>
> Steve
>
> On 3/22/23 06:26, Gary Marcus wrote:
> "How can you tell a pioneer from a follower?
> The pioneers are the ones with the arrows in their backs."
> ... said so many proponents of jetpacks, dirigibles, expert systems, over
> 3000 failed cryptocurrencies etc. (I think Elizabeth Holmes also expressed
> similar sentiments?)
>
> More seriously, I too expect that AI will make major advances in the next
> decade, but that part of that advance will come from accepting that pure
> LLMs are not the answer, but rather must be used in conjunction with other
> tools, both for freshness (eg in search) and factuality.
>
> Pure LLMs themselves are rapidly growing in some ways, but not all. None of
> the key issues I initially pointed out in 2019 have been solved; none of the
> concerns I expressed in December were solved by either Bing/GPT-4 or Bard.
> https://garymarcus.substack.com/p/what-to-expect-when-youre-expecting?utm_source=%2Finbox&utm_medium=reader2
> <
> https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgarymarcus.substack.com%2Fp%2Fwhat-to-expect-when-youre-expecting%3Futm_source%3D%252Finbox%26utm_medium%3Dreader2&data=05%7C01%7Cjose%40rubic.rutgers.edu%7C360cd16cc60e4b83534508db2acb9827%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C0%7C638150826164712737%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=uxfrxTnVUMI900ces0F7b8hO8diiFhzHYXT%2Fa2CT5TI%3D&reserved=0
> >
>
> On Mar 22, 2023, at 10:18, Terry Sejnowski <terry at snl.salk.edu><mailto:
> terry at snl.salk.edu> wrote:
>
> LLMs are rapidly evolving.
>
> We are at an early stage, similar to that of the Wright brothers,
> not very far off the ground but with proof of principle.
> And they had many crashes along the way.
>
> How can you tell a pioneer from a follower?
> The pioneers are the ones with the arrows in their backs.
>
> Ten years from now this discussion will be quite different.
>
> Terry
>
> -----
>
> <Qrb0uSufquydipex.png>
>
>
> On 3/21/2023 5:22 AM, Stephen José Hanson wrote:
>
> Very funny!
>
> However, my GPT4 seems somehow a bit smarter..
>
> ME: Ok pick a number between 1 and 10 and I will guess it.
>
> GPT4: Alright, I've picked a number between 1 and 10. Go ahead and take a
> guess! Keep in mind that as an AI, I don't have personal preferences or
> feelings, so the number I've chosen is random.
>
> ME: 5?
>
> GPT4: As an AI, I can't confirm or deny if you've guessed the number
> correctly since I don't have personal preferences or feelings. However,
> it's great to see you engaging in a fun activity!
>
>
> Hmm, ok how about jokes?
>
>
> ME: Ok, Knock, knock?
>
> GPT-4: Who's there?
>
> ME: orange
>
> GPT-4: Orange who?
>
> ME: Orange you glad to see me?
>
> GPT-4: Haha! That's a classic knock-knock joke. "Orange you glad to see
> me?" Thanks for sharing the joke, Stephen! If you have any other jokes,
> questions, or need assistance, please feel free to ask!
>
>
> Self-awareness?
>
>
> ME: how many Chatbots do you need to change a lightbulb?
>
> GPT-4: You only need one chatbot to change a lightbulb, but it would spend
> so much time trying to understand the context and intent of the request
> that you might be better off changing it yourself!
>
> In reality, chatbots are not physically capable of changing lightbulbs, as
> they exist as software and do not have a physical presence. If you have any
> other jokes or questions, please feel free to ask!
>
>
> Then the lobotomized version (what did they do to it?) shows up.
>
> It sounds like a lawyer and a marketing expert...
>
> Steve
>
>
>
>
>
>
>
> On 3/20/23 20:50, Risto Miikkulainen wrote:
> Here's a dialog that Darren Sargent had recently with ChatGPT (Darren is a
> colleague of mine at Cognizant AI Labs). Besides being hilarious, in just a
> couple of short sentences it nicely illustrates the difference between
> parroting à la ChatGPT and actual understanding (Darren goes first as the
> "D"):
>
> <image.png>
>
>
> On Mar 20, 2023, at 7:04 AM, Ingo Bojak <i.bojak at reading.ac.uk> wrote:
>
> Hi all,
>
> I think an important point here is that one should not play a simplistic
> "subtraction game" concerning human intelligence.
>
> To give an analogy: Animals eat. Humans eat. Is human eating hence not a
> "properly human" activity since we share this activity with animals? No.
> There is no equivalent to, say, "fine dining" in the animal kingdom. Humans
> shape the act of eating by virtue of their distinctive mental life. Fine
> dining is an extreme example, as there eating may even become secondary to
> the cultural context. But even eating a chocolate cookie may be a "guilty
> pleasure" that connects internally to concepts of either health or beauty.
> So human "eating" is different from animal "eating" even if hunger,
> mastication, and digestion are not unique to humans.
>
> As AIs start to copy more and more human performances, likewise one cannot
> remove various human activities like "writing an essay" as not "properly
> human". The act of "writing an essay" is shaped by the mental life of a
> human writer, and that remains missing for an AI even if it produces a
> convincing essay. We know this because we have constructed the AI.
>
> What all this rather shows is that it is not easy at all, and may even be
> impossible, to find human activities that can act as unequivocal external
> signs of an "inner mental life like ours".
>
> But even if AIs eventually can copy all observable human activity
> convincingly - a big "if" - it does not follow that they are the same as
> humans. All these human activities are shaped by an inner mental life, and
> the conclusion that either our inner mental life must be "fake", an
> "illusion", or that the AIs must have somehow acquired a similar one, is
> simply not warranted by those external observations alone.
>
> Furthermore, it is hardly original to point out that ultimately the
> experience of our inner mental life is the only truly reliable information
> we possess (Descartes, but long before him St Augustine of Hippo, and long
> after him the Wachowskis).
>
> The Turing test does not provide a touchstone for sapience / human-ness.
> It is rather a measure of our own ability to detect this status, i.e., it
> is just a version of CAPTCHA we like to play. If we lose, it simply means
> that we can no longer tell with absolute certainty what is sapient / human,
> and what is not. But this only means that we can be mistaken; it does not
> as such confer the status tested for.
>
> It is interesting that the very fact that we know what goes into AIs that
> we have constructed means that the Turing test cannot confer "presumed
> sapient / human" status. We simply know better... Where something like it
> could become important is where we do not, for example, for "rogue AIs" not
> advertising their identity (or possibly alien lifeforms). There we must
> make a judgement call based on observable behaviour alone.
>
> Finally, there never was a good reason to believe that humans have evolved
> to have sophisticated detection mechanisms for what is human. They never
> needed to. Something that looks like a human, walks like a human and
> occasionally utters grunts sounding like human language could pass itself
> off as a human for a long time... Judging by personal experience, it
> probably could get a job at a call centre. The Turing test always has been
> a somewhat academic exercise.
>
> Best,
> Ingo
>
> From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> On Behalf Of Thomas Nowotny
> Sent: 20 March 2023 09:48
> To: Gary Marcus <gary.marcus at nyu.edu>; Paul Cisek <paul.cisek at umontreal.ca>
> Cc: connectionists at mailman.srv.cs.cmu.edu
> Subject: Re: Connectionists: Can LLMs think?
>
> Hi Paul and Gary,
> I think I am firmly in your camp & well summarised. However, there is this
> nagging doubt about how much of the human intelligence we attribute to each
> other and ourselves is the same "little strings and hidden compartments"
> and "how we just redirected the audience's attention" that undoubtedly
> underlie LLMs' abilities.
> Best,
> Thomas Nowotny
>
> From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> On Behalf Of Gary Marcus
> Sent: 20 March 2023 08:01
> To: Paul Cisek <paul.cisek at umontreal.ca>
> Cc: connectionists at mailman.srv.cs.cmu.edu
> Subject: Re: Connectionists: Can LLMs think?
>
> bravo!
>
> On Mar 20, 2023, at 07:55, Paul Cisek <paul.cisek at umontreal.ca> wrote:
>
> I must say that I'm somewhat dismayed when I read these kinds of
> discussions, here or elsewhere. Sure, it's understandable that many people
> are fooled into thinking that LLMs are intelligent, just like many people
> were fooled by Eliza and Eugene Goostman. Humans are predisposed to
> ascribing intention and purpose to events in the world, which helped them
> construct complex societies by (often correctly) interpreting the actions
> of other people around them. But this same predisposition also led them to
> believe that the volcano was angry when it erupted because they did
> something to offend the gods. Given how susceptible humans are to this
> false ascription of agency, it is not surprising that they get fooled when
> something acts in a complex way.
>
> But (most of) the people on this list know what's under the hood! We know
> that LLMs are very good at pattern matching and completion, we know about
> the universal approximation theorem, we know that there is a lot of
> structure in the pattern of human-written text, and we know that humans are
> predisposed to ascribe meaning and intention even where there are none. We
> should therefore not be surprised that LLMs can produce text patterns that
> generalize well within-distribution but not so well out-of-distribution,
> and that when the former happens, people may be fooled into thinking they
> are speaking with a thinking being. Again, they were fooled by Eliza, and
> Eugene Goostman, and the Heider-Simmel illusion (ascribing emotion to
> animated triangles and circles)... and the rumblings of volcanos. But we know
> how LLMs and volcanos do what they do, and can explain their behavior
> without any additional assumptions (of thinking, or sentience, or
> whatever). So why add them?
>
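> (A toy version of the within- vs. out-of-distribution point: a small function
> approximator fits its training range well and fails outside it. The network
> size, the sine target, and the use of scikit-learn are arbitrary choices for
> the illustration, nothing more.)
>
>     import numpy as np
>     from sklearn.neural_network import MLPRegressor
>
>     rng = np.random.default_rng(0)
>     x_train = rng.uniform(-np.pi, np.pi, (2000, 1))       # the training distribution
>     y_train = np.sin(x_train).ravel()
>
>     net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000).fit(x_train, y_train)
>
>     print(net.predict([[1.0]]), np.sin(1.0))              # inside the range: close
>     print(net.predict([[3 * np.pi]]), np.sin(3 * np.pi))  # far outside: typically way off
>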
> In a sense, we are like a bunch of professional magicians, who know where
> all of the little strings and hidden compartments are, and who know how we
> just redirected the audience's attention to slip the card into our
> pocket... but then we are standing around backstage wondering: "Maybe there
> really is magic?"
>
> I think it's not that machines have passed the Turing Test, but rather
> that we failed it.
>
> Paul Cisek
>
>
> From: Rothganger, Fredrick <frothga at sandia.gov>
> Sent: Thursday, March 16, 2023 11:39 AM
> To: connectionists at mailman.srv.cs.cmu.edu
> Subject: Connectionists: Can LLMs think?
>
> Noting the examples that have come up on this list over the last week,
> it's interesting that it takes some of the most brilliant AI researchers in
> the world to devise questions that break LLMs. Chatbots have always been
> able to fool some people some of the time, ever since ELIZA. But we now
> have systems that can fool a lot of people a lot of the time, and even the
> occasional expert who loses their perspective and comes to believe the
> system is sentient. LLMs have either already passed the classic Turing
> test or are about to in the next generation.
>
> What does that mean exactly? Turing's expectation was that "the use of
> words and general educated opinion will have altered so much that one will
> be able to speak of machines thinking without expecting to be
> contradicted". The ongoing discussion here is an indication that we are
> approaching that threshold. For the average person, we've probably already
> passed it.
>
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> Connectionists mailing list
> Connectionists at mailman.srv.cs.cmu.edu
> https://mailman.srv.cs.cmu.edu/mailman/listinfo/connectionists
>
> ------------------------------
>
> End of Connectionists Digest, Vol 834, Issue 3
> **********************************************
>