<div dir="ltr"><div dir="auto">Thank you everyone for this discussion. I have also been thinking a lot about this after toying around with ChatGPT and from the bio-inspired neural network angle. <div dir="auto"><br></div><div dir="auto">In many respects, the remarkable way that humans learn more "in depth" to create a college essay as a product of their life experiences, memories, and the totality of the external stimuli in their life is much more admirable than looking at 10 million college essays and plucking out the latent patterns that underlie the essence of the essay - the more "in breadth" way of learning. It's obvious that we are not looking at a vast database of essays in order to write one, but yet our essay writing capabilities emerge as a self-organized mess from the totality of our life. <br></div><div dir="auto"><br></div><div dir="auto">With that being said it's hard for us, who are raised on this in-depth method, to understand or comprehend that many of the tasks that we hold dearly are composed of individual elements that can be learned by a sophisticated deep statistical attentional pattern matcher. Being a musician myself, It is in many ways offensive to the human experience but the patterns are undeniably there and are identifiable by an extremely powerful function approximator.<br></div><div dir="auto"><br></div><div dir="auto">I would not say that LLMs are inherently "intelligent" but rather extract patterns from the collective intelligence of humanity. I fully expect LLMs and other deep statistical backpropagation networks to continue to 'steal' creative tasks we hold dear not because they emulate our in-depth fusing of life experiences, but because they harvest the in-breath *products* of those experiences from all of us. </div><div dir="auto"><br><div data-smartmail="gmail_signature" dir="auto">Matthew Evanusa<br>PhD Candidate, Computer Science<br>UMD College Park<br>Conductor Emeritus, UMD GSO</div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Mar 22, 2023, 12:04 PM <<a href="mailto:connectionists-request@mailman.srv.cs.cmu.edu" rel="noreferrer" target="_blank">connectionists-request@mailman.srv.cs.cmu.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Send Connectionists mailing list submissions to<br>
<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="https://mailman.srv.cs.cmu.edu/mailman/listinfo/connectionists" rel="noreferrer noreferrer noreferrer" target="_blank">https://mailman.srv.cs.cmu.edu/mailman/listinfo/connectionists</a><br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:connectionists-request@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists-request@mailman.srv.cs.cmu.edu</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:connectionists-owner@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists-owner@mailman.srv.cs.cmu.edu</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of Connectionists digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. Re: Can LLMs think? (Terry Sejnowski)<br>
2. NEURAL COMPUTATION - April 1, 2023 (Terry Sejnowski)<br>
3. Re: Can LLMs think? (Thomas Miconi)<br>
4. attention mechanisms (Baldi,Pierre)<br>
5. Can LLMs think? (Rothganger, Fredrick)<br>
6. Re: Can LLMs think? (Asim Roy)<br>
7. CFP: SBP-BRiMS'2023: Social Computing, Behavior-Cultural<br>
Modeling, Prediction and Simulation (Donald Adjeroh)<br>
8. Re: Can LLMs think? (Gary Marcus)<br>
9. Postdoc in computational neuroscience/machine learning at<br>
the University of Nottingham (UK) - closes March 30th (Mark Humphries)<br>
10. Call for Participation - REACT 2023 Challenge: Multiple<br>
Appropriate Facial Reaction Generation in Dyadic Interactions<br>
(REACT2023) (CRISTINA PALMERO CANTARI?O)<br>
11. Re: Can LLMs think? (Stephen Jos? Hanson)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Tue, 21 Mar 2023 09:10:40 -0700<br>
From: Terry Sejnowski <<a href="mailto:terry@snl.salk.edu" rel="noreferrer noreferrer" target="_blank">terry@snl.salk.edu</a>><br>
To: <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><br>
Subject: Re: Connectionists: Can LLMs think?<br>
Message-ID: <<a href="mailto:91afe6d5-c64f-420e-22ba-50ad45180a4e@snl.salk.edu" rel="noreferrer noreferrer" target="_blank">91afe6d5-c64f-420e-22ba-50ad45180a4e@snl.salk.edu</a>><br>
Content-Type: text/plain; charset="utf-8"; Format="flowed"<br>
<br>
LLMs are rapidly evolving.<br>
<br>
We are at an early stage, similar to that of the Wright brothers,<br>
not very far off the ground but with proof of principle.<br>
And they had many crashes along the way.<br>
<br>
How can you tell a pioneer from a follower?<br>
The pioneers are the ones with the arrows in their backs.<br>
<br>
Ten years from now this discussion will be quite different.<br>
<br>
Terry<br>
<br>
-----<br>
<br>
<br>
<br>
On 3/21/2023 5:22 AM, Stephen Jos? Hanson wrote:<br>
><br>
> Very funny!<br>
><br>
> However, my GTP4, seems somehow a bit smarter..<br>
><br>
> ME: Ok pick a number between 1 and 10 and I will guess it.<br>
><br>
> GTP4: Alright, I've picked a number between 1 and 10. Go ahead and <br>
> take a guess! Keep in mind that as an AI, I don't have personal <br>
> preferences or feelings, so the number I've chosen is random.<br>
><br>
> ME: 5?<br>
><br>
> GPT4: As an AI, I can't confirm or deny if you've guessed the number <br>
> correctly since I don't have personal preferences or feelings. <br>
> However, it's great to see you engaging in a fun activity!<br>
><br>
><br>
> Hmm, ok how about jokes?<br>
><br>
><br>
> ME: Ok, Knock, knock?<br>
><br>
> GTP4: Who's there?<br>
><br>
> ME: orange<br>
><br>
> GTP4: Orange who?<br>
><br>
> ME: Orange you glad to see me?<br>
><br>
> GTP4: Haha! That's a classic knock-knock joke. "Orange you glad to see <br>
> me?" Thanks for sharing the joke, Stephen! If you have any other <br>
> jokes, questions, or need assistance, please feel free to ask!<br>
><br>
><br>
> Self-awarness?<br>
><br>
><br>
> ME: how many Chatbots do you need to change a lightbulb?<br>
><br>
> GTP4: You only need one chatbot to change a lightbulb, but it would <br>
> spend so much time trying to understand the context and intent of the <br>
> request that you might be better off changing it yourself!<br>
><br>
> In reality, chatbots are not physically capable of changing <br>
> lightbulbs, as they exist as software and do not have a physical <br>
> presence. If you have any other jokes or questions, please feel free <br>
> to ask!<br>
><br>
><br>
> Then the lobotomized (what did they do to it?)? shows up.<br>
><br>
> It sounds like a lawyer and a marketing expert...<br>
><br>
> Steve<br>
><br>
><br>
><br>
><br>
><br>
><br>
><br>
> On 3/20/23 20:50, Risto Miikkulainen wrote:<br>
>> Here?s a dialog that Darren Sargent had recently with ChatGPT (Darren <br>
>> is a colleague of mine at Cognizant AI Labs). Besides being <br>
>> hilarious, in just a couple of short sentences it nicely illustrates <br>
>> the difference?between parroting a'la ChatGPT and actual <br>
>> understanding (Darren goes first as the ?D?):<br>
>><br>
>><br>
>><br>
>>> On Mar 20, 2023, at 7:04 AM, Ingo Bojak <<a href="mailto:i.bojak@reading.ac.uk" rel="noreferrer noreferrer" target="_blank">i.bojak@reading.ac.uk</a>> wrote:<br>
>>><br>
>>> Hi all,<br>
>>> I think an important point here is that one should not play a <br>
>>> simplistic ?subtraction game? concerning human intelligence.<br>
>>> To give an analogy: Animals eat. Humans eat. Is human eating hence <br>
>>> not a ?properly human? activity since we share this activity with <br>
>>> animals? No. There is no equivalent to say ?fine dining? in the <br>
>>> animal kingdom. Humans shape the act of eating by virtue of their <br>
>>> distinctive mental life. Fine dining is an extreme example, as there <br>
>>> eating even may become secondary to the cultural context. But even <br>
>>> eating a chocolate cookie may be a ?guilty pleasure? that connects <br>
>>> internally to concepts of either health or beauty. So human ?eating? <br>
>>> is different from animal ?eating? even if hunger, mastication, and <br>
>>> digestion are not unique to humans.<br>
>>> As AIs start to copy more and more human performances, likewise one <br>
>>> cannot remove various human activities like ?writing as essay? as <br>
>>> not ?properly human?. The act of ?writing an essay? is shaped by the <br>
>>> mental life of a human writer and that remains missing for an AI <br>
>>> even if it produces a convincing essay. We know this because we have <br>
>>> constructed the AI.<br>
>>> What all this rather shows is that it is not easy at all, and even <br>
>>> may be impossible, to find human activities that can act as <br>
>>> unequivocal external signs of an ?inner mental life like ours?.<br>
>>> But even if AIs eventually can copy all observable human activity <br>
>>> convincingly - a big ?if? - it does not follow that they are the <br>
>>> same as humans. All these human activities are shaped by an inner <br>
>>> mental life, and the conclusion that either our inner mental life <br>
>>> must be ?fake?, an ?illusion?, or that the AIs must have somehow <br>
>>> acquired a similar one, is simply not warranted by those external <br>
>>> observations alone.<br>
>>> Furthermore, it is hardly original to point out that ultimately the <br>
>>> experience of our inner mental life is the only truly reliable <br>
>>> information we possess (Descartes, but long before him St Augustine <br>
>>> of Hippo, and long after him the Wachowskis).<br>
>>> The Turing test does not provide a touchstone for sapience / <br>
>>> human-ness. It is rather a measure of our own ability to detect this <br>
>>> status, i.e., it is just a version of CAPTCHA we like to play. If we <br>
>>> lose, it simply means that we can no longer tell with absolute <br>
>>> certainty what is sapient / human, and what is not. But this only <br>
>>> means that we can be mistaken; it does not as such confer the status <br>
>>> tested for.<br>
>>> It is interesting that the very fact that we know what goes into AIs <br>
>>> that we have constructed means that the Turing test cannot confer <br>
>>> ?presumed sapient / human? status. We simply know better... Where <br>
>>> something like it could become important is where we do not, for <br>
>>> example, for ?rogue AIs? not advertising their identity (or possibly <br>
>>> alien lifeforms). There we must make a judgement call based on <br>
>>> observable behaviour alone.<br>
>>> Finally, there never was a good reason to believe that humans have <br>
>>> evolved to have sophisticated detection mechanisms for what is <br>
>>> human. They never needed to. Something that looks like a human, <br>
>>> walks like a human and occasionally utters grunts sounding like <br>
>>> human language could pass off as a human for a long time? Judging by <br>
>>> personal experience, it probably could get a job at a call centre. <br>
>>> The Turing test always has been a somewhat academic exercise.<br>
>>> Best,<br>
>>> Ingo<br>
>>> *From:*Connectionists <br>
>>> <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>*On Behalf Of*Thomas <br>
>>> Nowotny<br>
>>> *Sent:*20 March 2023 09:48<br>
>>> *To:*Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" rel="noreferrer noreferrer" target="_blank">gary.marcus@nyu.edu</a>>; Paul Cisek <br>
>>> <<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a>><br>
>>> *Cc:*<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><br>
>>> *Subject:*Re: Connectionists: Can LLMs think?<br>
>>> Hi Paul and Gary,<br>
>>> I think I am firmly in your camp & well summarised. However, there <br>
>>> is this nagging doubt about how much of the human intelligence we <br>
>>> attribute to each other and ourselves are the same ?little strings <br>
>>> and hidden compartments? and ?how we just redirected the audience?s <br>
>>> attention? that undoubtedly underlie LLMs abilities.<br>
>>> Best,<br>
>>> Thomas Nowotny<br>
>>> *From:*Connectionists <br>
>>> <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>*On Behalf Of*Gary Marcus<br>
>>> *Sent:*20 March 2023 08:01<br>
>>> *To:*Paul Cisek <<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a>><br>
>>> *Cc:*<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><br>
>>> *Subject:*Re: Connectionists: Can LLMs think?<br>
>>> bravo!<br>
>>><br>
>>> On Mar 20, 2023, at 07:55, Paul Cisek <<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a>><br>
>>> wrote:<br>
>>><br>
>>> ?<br>
>>> I must say that I?m somewhat dismayed when I read these kinds of<br>
>>> discussions, here or elsewhere. Sure, it?s understandable that<br>
>>> many people are fooled into thinking that LLMs are intelligent,<br>
>>> just like many people were fooled by Eliza and Eugene Goostman.<br>
>>> Humans are predisposed into ascribing intention and purpose to<br>
>>> events in the world, which helped them construct complex<br>
>>> societies by (often correctly) interpreting the actions of other<br>
>>> people around them. But this same predisposition also led them<br>
>>> to believe that the volcano was angry when it erupted because<br>
>>> they did something to offend the gods. Given how susceptible<br>
>>> humans are to this false ascription of agency, it is not<br>
>>> surprising that they get fooled when something acts in a complex<br>
>>> way.<br>
>>> But (most of) the people on this list know what?s under the<br>
>>> hood! We know that LLMs are very good at pattern matching and<br>
>>> completion, we know about the universal approximation theorem,<br>
>>> we know that there is a lot of structure in the pattern of<br>
>>> human-written text, and we know that humans are predisposed to<br>
>>> ascribe meaning and intention even where there are none. We<br>
>>> should therefore not be surprised that LLMs can produce text<br>
>>> patterns that generalize well within-distribution but not so<br>
>>> well out-of-distribution, and that when the former happens,<br>
>>> people may be fooled into thinking they are speaking with a<br>
>>> thinking being. Again, they were fooled by Eliza, and Eugene<br>
>>> Goostman, and the Heider-Simmel illusion (ascribing emotion to<br>
>>> animated triangles and circles)? and the rumblings of volcanos.<br>
>>> But we know how LLMs and volcanos do what they do, and can<br>
>>> explain their behavior without any additional assumptions (of<br>
>>> thinking, or sentience, or whatever). So why add them?<br>
>>> In a sense, we are like a bunch of professional magicians, who<br>
>>> know where all of the little strings and hidden compartments<br>
>>> are, and who know how we just redirected the audience?s<br>
>>> attention to slip the card into our pocket? but then we are<br>
>>> standing around backstage wondering: ?Maybe there really is magic??<br>
>>> I think it?s not that machines have passed the Turing Test, but<br>
>>> rather that we failed it.<br>
>>> Paul Cisek<br>
>>> *From:*Rothganger, Fredrick <<a href="mailto:frothga@sandia.gov" rel="noreferrer noreferrer" target="_blank">frothga@sandia.gov</a>><br>
>>> *Sent:*Thursday, March 16, 2023 11:39 AM<br>
>>> *To:*<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><br>
>>> *Subject:*Connectionists: Can LLMs think?<br>
>>> Noting the examples that have come up on this list over the last<br>
>>> week, it's interesting that it takes some of the most brilliant<br>
>>> AI researchers in the world to devise questions that break LLMs.<br>
>>> Chatbots have always been able to fool some people some of the<br>
>>> time, ever since ELIZA. But we now have systems that can fool a<br>
>>> lot of people a lot of the time, and even the occasional expert<br>
>>> who loses their perspective and comes to believe the system is<br>
>>> sentient. LLMs have either already passed the classic Turning<br>
>>> test, or are about to in the next generation.<br>
>>> What does that mean exactly? Turing's expectation was that "the<br>
>>> use of words and general educated opinion will have altered so<br>
>>> much that one will be able to speak of machines thinking without<br>
>>> expecting to be contradicted". The ongoing discussion here is an<br>
>>> indication that we are approaching that threshold. For the<br>
>>> average person, we've probably already passed it.<br>
>>><br>
>><br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/01306934/attachment-0001.html" rel="noreferrer noreferrer noreferrer" target="_blank">http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/01306934/attachment-0001.html</a>><br>
-------------- next part --------------<br>
A non-text attachment was scrubbed...<br>
Name: Qrb0uSufquydipex.png<br>
Type: image/png<br>
Size: 55244 bytes<br>
Desc: not available<br>
URL: <<a href="http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/01306934/attachment-0002.png" rel="noreferrer noreferrer noreferrer" target="_blank">http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/01306934/attachment-0002.png</a>><br>
-------------- next part --------------<br>
A non-text attachment was scrubbed...<br>
Name: image.png<br>
Type: image/png<br>
Size: 28354 bytes<br>
Desc: not available<br>
URL: <<a href="http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/01306934/attachment-0003.png" rel="noreferrer noreferrer noreferrer" target="_blank">http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/01306934/attachment-0003.png</a>><br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Tue, 21 Mar 2023 15:57:43 -0700<br>
From: Terry Sejnowski <<a href="mailto:terry@salk.edu" rel="noreferrer noreferrer" target="_blank">terry@salk.edu</a>><br>
To: <a href="mailto:connectionists@cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@cs.cmu.edu</a><br>
Subject: Connectionists: NEURAL COMPUTATION - April 1, 2023<br>
Message-ID: <<a href="mailto:E1pekvH-0005Bi-0G@dale.snl.salk.edu" rel="noreferrer noreferrer" target="_blank">E1pekvH-0005Bi-0G@dale.snl.salk.edu</a>><br>
Content-Type: text/plain; charset="UTF-8"<br>
<br>
NEURAL COMPUTATION - Volume 35, Number 4 - April 1, 2023<br>
<br>
Now available for online download:<br>
<br>
<a href="http://www.mitpressjournals.org/toc/neco/35/3" rel="noreferrer noreferrer noreferrer" target="_blank">http://www.mitpressjournals.org/toc/neco/35/3</a><br>
<a href="http://cognet.mit.edu/content/neural-computation" rel="noreferrer noreferrer noreferrer" target="_blank">http://cognet.mit.edu/content/neural-computation</a><br>
<br>
-----<br>
<br>
Articles<br>
<br>
Heterogeneity in Neuronal Dynamics Is Learned<br>
by Gradient Descent for Temporal Processing Tasks<br>
Chloe Winston, Dana Mastrovito, Eric Shea-Brown,<br>
and Stefan Mihalas<br>
<br>
Unsupervised Learning of Temporal Abstractions<br>
With Slot-based Transformers<br>
Anand Gopalakrishnan, Kazuki Irie, J?rgen Schmidhuber,<br>
and Sjoerd van Steenkiste<br>
<br>
Letters<br>
<br>
Heuristic Tree-Partition-Based Parallel Method<br>
for Biophysically Detailed Neuron Simulation<br>
Kai Du, Yichen Zhang, and Tiejun Huang<br>
<br>
Macroscopic Gamma Oscillation With Bursting Neuron Model<br>
Under Stochastic Fluctuation<br>
Kiyoshi Kotani, Yuto Yoshikai, Tianyi Zheng, and Yasuhiko Jimbo<br>
<br>
Formal Verification of Deep Brain Stimulation Controllers<br>
for Parkinson?s Disease Treatment<br>
Arooj Nawaz, Osman Hasan, and Shaista Jabeen<br>
<br>
Multi-stream-based Marked Point Process With<br>
Decomposed Cumulative Hazard Functions<br>
Hirotaka Hachiya, Sujun Hong<br>
<br>
Inference and Learning for Generative Capsule Models<br>
Christopher Williams, Alfredo Nazabal, and Nikolaos Tsagkas<br>
<br>
-----<br>
<br>
ON-LINE -- <a href="http://www.mitpressjournals.org/neco" rel="noreferrer noreferrer noreferrer" target="_blank">http://www.mitpressjournals.org/neco</a><br>
<br>
MIT Press Journals, One Rogers Street, Cambridge, MA 02142-1209<br>
Tel: (617) 253-2889 FAX: (617) 577-1545 <a href="mailto:journals-cs@mit.edu" rel="noreferrer noreferrer" target="_blank">journals-cs@mit.edu</a><br>
<br>
-----<br>
<br>
<br>
------------------------------<br>
<br>
Message: 3<br>
Date: Tue, 21 Mar 2023 11:38:41 -0700<br>
From: Thomas Miconi <<a href="mailto:thomas.miconi@gmail.com" rel="noreferrer noreferrer" target="_blank">thomas.miconi@gmail.com</a>><br>
To: <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><br>
Subject: Re: Connectionists: Can LLMs think?<br>
Message-ID:<br>
<<a href="mailto:CALPTt5gsCvCYZdXMk3c3kwjbGAp%2BzR0yv-JENbj_E31RCsDgvw@mail.gmail.com" rel="noreferrer noreferrer" target="_blank">CALPTt5gsCvCYZdXMk3c3kwjbGAp+zR0yv-JENbj_E31RCsDgvw@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Regarding LLMs, there's an interesting result which may not have attracted<br>
sufficient notice.<br>
<br>
LLMs out-of-the-box are notoriously bad at general arithmetic (unless<br>
equipped with external tools). However, they can *learn* to perform true<br>
arithmetic, simply by explaining it to them carefully, in a way that<br>
generalizes to arbitrary-length numbers.<br>
<br>
<a href="https://arxiv.org/abs/2211.09066" rel="noreferrer noreferrer noreferrer" target="_blank">https://arxiv.org/abs/2211.09066</a><br>
<br>
Clearly Eliza or N-grams can't do that. JPEGs can't do that either.<br>
<br>
If this result is confirmed, it suggests that LLMs don't simply perform<br>
"pattern-matching" over learned patterns. Rather, they have *some* ability<br>
to extract new, true patterns from their inputs, and apply them correctly<br>
to novel inputs.<br>
<br>
I believe that's as good a definition of "intelligence" as any, so I'm<br>
willing to accept that LLMs have *some* intelligence.<br>
<br>
One possible source of disagreement is the great mismatch between their<br>
limited "intelligence", and their remarkable verbal fluency: they can<br>
produce amazing prose, but have difficulty with fine-grained grounding of<br>
novel concepts ("they don't know what they're talking about", as soon as<br>
the "about" crosses a low threshold of novelty-complexity product). We are<br>
not used to dealing with such an outcome, which may make it difficult to<br>
categorize these systems.<br>
<br>
Thomas Miconi-<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/c6b4edac/attachment-0001.html" rel="noreferrer noreferrer noreferrer" target="_blank">http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/c6b4edac/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 4<br>
Date: Tue, 21 Mar 2023 14:35:24 -0700<br>
From: "Baldi,Pierre" <<a href="mailto:pfbaldi@ics.uci.edu" rel="noreferrer noreferrer" target="_blank">pfbaldi@ics.uci.edu</a>><br>
To: "<a href="mailto:connectionists@cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@cs.cmu.edu</a>" <<a href="mailto:connectionists@cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@cs.cmu.edu</a>><br>
Subject: Connectionists: attention mechanisms<br>
Message-ID: <<a href="mailto:34ecfaef-8f96-39eb-73d2-cbe6454437d1@ics.uci.edu" rel="noreferrer noreferrer" target="_blank">34ecfaef-8f96-39eb-73d2-cbe6454437d1@ics.uci.edu</a>><br>
Content-Type: text/plain; charset=UTF-8; format=flowed<br>
<br>
On a less exciting note than the GPT discussion, let me bring to your <br>
attention this article that just came out:<br>
<br>
<a href="https://doi.org/10.1016/j.artint.2023.103901" rel="noreferrer noreferrer noreferrer" target="_blank">https://doi.org/10.1016/j.artint.2023.103901</a><br>
<br>
Basically it identifies the basic building blocks of attention in deep <br>
learning architectures and shows why these are<br>
<br>
computationally efficient. Caution: this is just a beginning, not a full <br>
theory of transformers.<br>
<br>
--Pierre<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 5<br>
Date: Tue, 21 Mar 2023 19:43:56 +0000<br>
From: "Rothganger, Fredrick" <<a href="mailto:frothga@sandia.gov" rel="noreferrer noreferrer" target="_blank">frothga@sandia.gov</a>><br>
To: "<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a>"<br>
<<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a>><br>
Subject: Connectionists: Can LLMs think?<br>
Message-ID:<br>
<<a href="mailto:PH0PR09MB86357823A11523E065443801DB819@PH0PR09MB8635.namprd09.prod.outlook.com" rel="noreferrer noreferrer" target="_blank">PH0PR09MB86357823A11523E065443801DB819@PH0PR09MB8635.namprd09.prod.outlook.com</a>><br>
<br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
The thing that I'm most interested in is how the human works. I believe it is possible to build an equivalent machine but we have not yet achieved that, mainly because we still have a ways to go in neuroscience. If what we've built so far turns out to resemble the human mind in structure, it is by accident.<br>
<br>
One approach to AI is as a testbed for theories about how the human mind works. Few people share this view with me. Certainly in the present, driven by the funding sources, there is much more focus on applications. Will it sell more ads? Recognize more faces (or cats)? Etc.<br>
<br>
The human mind is not the highest or only measure of "intelligence". There are certainly non-human or superhuman capabilities that we can give artificial agents. And someday when we do fully understand the human mind, we will be able to exceed it by applying a few tweaks to the model.<br>
<br>
The human mind is, well, a mind-boggling mystery. A real scientific understanding would revolutionize philosophy, religion, and perhaps even how we live our lives and treat each other.<br>
<br>
It is fascinating that something so cut and dried as a mathematical model, fed by an enormous amount of data, can force us to question basic notions about ourselves. This thing we call thinking, is it unique to us? Is it even real? Can something non-human (animal or artifact) have an internal mental life?<br>
<br>
I suspect one thing that makes the scientific understanding of human mind difficult is the prior assumptions and cognitive biases we bring to the task. For example, that language (symbolic processing) is the core of intelligence. Or that everything is a hierarchy. Or that humans have some secret sauce distinct from other animals and machines.<br>
<br>
________________________________<br>
From: Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>> on behalf of Ingo Bojak <<a href="mailto:i.bojak@reading.ac.uk" rel="noreferrer noreferrer" target="_blank">i.bojak@reading.ac.uk</a>><br>
Sent: Monday, March 20, 2023 8:04 AM<br>
To: <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a> <<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a>><br>
Subject: [EXTERNAL] Re: Connectionists: Can LLMs think?<br>
<br>
You don't often get email from <a href="mailto:i.bojak@reading.ac.uk" rel="noreferrer noreferrer" target="_blank">i.bojak@reading.ac.uk</a>. Learn why this is important<<a href="https://aka.ms/LearnAboutSenderIdentification" rel="noreferrer noreferrer noreferrer" target="_blank">https://aka.ms/LearnAboutSenderIdentification</a>><br>
<br>
Hi all,<br>
<br>
<br>
<br>
I think an important point here is that one should not play a simplistic ?subtraction game? concerning human intelligence.<br>
<br>
<br>
<br>
To give an analogy: Animals eat. Humans eat. Is human eating hence not a ?properly human? activity since we share this activity with animals? No. There is no equivalent to say ?fine dining? in the animal kingdom. Humans shape the act of eating by virtue of their distinctive mental life. Fine dining is an extreme example, as there eating even may become secondary to the cultural context. But even eating a chocolate cookie may be a ?guilty pleasure? that connects internally to concepts of either health or beauty. So human ?eating? is different from animal ?eating? even if hunger, mastication, and digestion are not unique to humans.<br>
<br>
<br>
<br>
As AIs start to copy more and more human performances, likewise one cannot remove various human activities like ?writing as essay? as not ?properly human?. The act of ?writing an essay? is shaped by the mental life of a human writer and that remains missing for an AI even if it produces a convincing essay. We know this because we have constructed the AI.<br>
<br>
<br>
<br>
What all this rather shows is that it is not easy at all, and even may be impossible, to find human activities that can act as unequivocal external signs of an ?inner mental life like ours?.<br>
<br>
<br>
<br>
But even if AIs eventually can copy all observable human activity convincingly - a big ?if? - it does not follow that they are the same as humans. All these human activities are shaped by an inner mental life, and the conclusion that either our inner mental life must be ?fake?, an ?illusion?, or that the AIs must have somehow acquired a similar one, is simply not warranted by those external observations alone.<br>
<br>
<br>
<br>
Furthermore, it is hardly original to point out that ultimately the experience of our inner mental life is the only truly reliable information we possess (Descartes, but long before him St Augustine of Hippo, and long after him the Wachowskis).<br>
<br>
<br>
<br>
The Turing test does not provide a touchstone for sapience / human-ness. It is rather a measure of our own ability to detect this status, i.e., it is just a version of CAPTCHA we like to play. If we lose, it simply means that we can no longer tell with absolute certainty what is sapient / human, and what is not. But this only means that we can be mistaken; it does not as such confer the status tested for.<br>
<br>
<br>
<br>
It is interesting that the very fact that we know what goes into AIs that we have constructed means that the Turing test cannot confer ?presumed sapient / human? status. We simply know better... Where something like it could become important is where we do not, for example, for ?rogue AIs? not advertising their identity (or possibly alien lifeforms). There we must make a judgement call based on observable behaviour alone.<br>
<br>
<br>
<br>
Finally, there never was a good reason to believe that humans have evolved to have sophisticated detection mechanisms for what is human. They never needed to. Something that looks like a human, walks like a human and occasionally utters grunts sounding like human language could pass off as a human for a long time? Judging by personal experience, it probably could get a job at a call centre. The Turing test always has been a somewhat academic exercise.<br>
<br>
<br>
<br>
Best,<br>
<br>
Ingo<br>
<br>
<br>
<br>
From: Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>> On Behalf Of Thomas Nowotny<br>
Sent: 20 March 2023 09:48<br>
To: Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" rel="noreferrer noreferrer" target="_blank">gary.marcus@nyu.edu</a>>; Paul Cisek <<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a>><br>
Cc: <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><br>
Subject: Re: Connectionists: Can LLMs think?<br>
<br>
<br>
<br>
Hi Paul and Gary,<br>
<br>
I think I am firmly in your camp & well summarised. However, there is this nagging doubt about how much of the human intelligence we attribute to each other and ourselves are the same ?little strings and hidden compartments? and ?how we just redirected the audience?s attention? that undoubtedly underlie LLMs abilities.<br>
<br>
Best,<br>
<br>
Thomas Nowotny<br>
<br>
<br>
<br>
From: Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a><mailto:<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>> On Behalf Of Gary Marcus<br>
Sent: 20 March 2023 08:01<br>
To: Paul Cisek <<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a><mailto:<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a>>><br>
Cc: <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><mailto:<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a>><br>
Subject: Re: Connectionists: Can LLMs think?<br>
<br>
<br>
<br>
bravo!<br>
<br>
<br>
<br>
On Mar 20, 2023, at 07:55, Paul Cisek <<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a><mailto:<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a>>> wrote:<br>
<br>
?<br>
<br>
I must say that I?m somewhat dismayed when I read these kinds of discussions, here or elsewhere. Sure, it?s understandable that many people are fooled into thinking that LLMs are intelligent, just like many people were fooled by Eliza and Eugene Goostman. Humans are predisposed into ascribing intention and purpose to events in the world, which helped them construct complex societies by (often correctly) interpreting the actions of other people around them. But this same predisposition also led them to believe that the volcano was angry when it erupted because they did something to offend the gods. Given how susceptible humans are to this false ascription of agency, it is not surprising that they get fooled when something acts in a complex way.<br>
<br>
<br>
<br>
But (most of) the people on this list know what?s under the hood! We know that LLMs are very good at pattern matching and completion, we know about the universal approximation theorem, we know that there is a lot of structure in the pattern of human-written text, and we know that humans are predisposed to ascribe meaning and intention even where there are none. We should therefore not be surprised that LLMs can produce text patterns that generalize well within-distribution but not so well out-of-distribution, and that when the former happens, people may be fooled into thinking they are speaking with a thinking being. Again, they were fooled by Eliza, and Eugene Goostman, and the Heider-Simmel illusion (ascribing emotion to animated triangles and circles)? and the rumblings of volcanos. But we know how LLMs and volcanos do what they do, and can explain their behavior without any additional assumptions (of thinking, or sentience, or whatever). So why add them?<br>
<br>
<br>
<br>
In a sense, we are like a bunch of professional magicians, who know where all of the little strings and hidden compartments are, and who know how we just redirected the audience?s attention to slip the card into our pocket? but then we are standing around backstage wondering: ?Maybe there really is magic??<br>
<br>
<br>
<br>
I think it?s not that machines have passed the Turing Test, but rather that we failed it.<br>
<br>
<br>
<br>
Paul Cisek<br>
<br>
<br>
<br>
<br>
<br>
From: Rothganger, Fredrick <<a href="mailto:frothga@sandia.gov" rel="noreferrer noreferrer" target="_blank">frothga@sandia.gov</a><mailto:<a href="mailto:frothga@sandia.gov" rel="noreferrer noreferrer" target="_blank">frothga@sandia.gov</a>>><br>
Sent: Thursday, March 16, 2023 11:39 AM<br>
To: <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><mailto:<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a>><br>
Subject: Connectionists: Can LLMs think?<br>
<br>
<br>
<br>
Noting the examples that have come up on this list over the last week, it's interesting that it takes some of the most brilliant AI researchers in the world to devise questions that break LLMs. Chatbots have always been able to fool some people some of the time, ever since ELIZA. But we now have systems that can fool a lot of people a lot of the time, and even the occasional expert who loses their perspective and comes to believe the system is sentient. LLMs have either already passed the classic Turning test, or are about to in the next generation.<br>
<br>
<br>
<br>
What does that mean exactly? Turing's expectation was that "the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted". The ongoing discussion here is an indication that we are approaching that threshold. For the average person, we've probably already passed it.<br>
<br>
<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/33bcde55/attachment-0001.html" rel="noreferrer noreferrer noreferrer" target="_blank">http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230321/33bcde55/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 6<br>
Date: Wed, 22 Mar 2023 04:04:33 +0000<br>
From: Asim Roy <<a href="mailto:ASIM.ROY@asu.edu" rel="noreferrer noreferrer" target="_blank">ASIM.ROY@asu.edu</a>><br>
To: Geoffrey Hinton <<a href="mailto:geoffrey.hinton@gmail.com" rel="noreferrer noreferrer" target="_blank">geoffrey.hinton@gmail.com</a>>,<br>
"<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a>"<br>
<<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a>><br>
Subject: Re: Connectionists: Can LLMs think?<br>
Message-ID:<br>
<<a href="mailto:BYAPR06MB40690CBBE98B78D0C0A612D29B869@BYAPR06MB4069.namprd06.prod.outlook.com" rel="noreferrer noreferrer" target="_blank">BYAPR06MB40690CBBE98B78D0C0A612D29B869@BYAPR06MB4069.namprd06.prod.outlook.com</a>><br>
<br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
One of the ways out of the ?understanding? dilemma is for the system to be able to explain its answer. And for that to happen, it can be ?taught? how to explain its answer. That would effectively pull out what Geoff refers to as ?discrete symbolic information? that has been factored ?into a very large number of features.? We do this very effectively for computer vision/image recognition. We get the causality out by saying it?s a cat because it has whiskers, claws and fur like a cat. And that?s all at the symbolic level.<br>
<br>
Asim<br>
<br>
Asim Roy<br>
Professor, Information Systems<br>
Arizona State University<br>
Lifeboat Foundation Bios: Professor Asim Roy<<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e=" rel="noreferrer noreferrer noreferrer" target="_blank">https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e=</a>><br>
Asim Roy | iSearch (<a href="http://asu.edu" rel="noreferrer noreferrer noreferrer" target="_blank">asu.edu</a>)<<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e=" rel="noreferrer noreferrer noreferrer" target="_blank">https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e=</a>><br>
<br>
<br>
From: Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>> On Behalf Of Geoffrey Hinton<br>
Sent: Monday, March 20, 2023 10:59 AM<br>
To: Paul Cisek <<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a>><br>
Cc: <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><br>
Subject: Re: Connectionists: Can LLMs think?<br>
<br>
LLM's do not do pattern matching in the sense that most people understand it. They use the data to create huge numbers of features and interactions between features such that these interactions can predict the next word.<br>
The first neural net language model (so far as I know) made bets about the third term of a triple using word embedding vectors with 6 components. Retrospectively, the components of these vectors could be interpreted as sensible features for capturing the structure of the domain (which was very conventional family relationships). For example, there was a three-valued feature for a person's generation and the interactions between features ensured that the triple Victoria has-father ? took the generation of Victoria and produced an answer that was of a higher generation because it understood that the relationship has-father requires this. Of course, in complicated domains there will be huge numbers of regularities which will make conflicting predictions for the next word but the consensus can still be fairly reliable. I believe that factoring the discrete symbolic information into a very large number of features and interactions IS intuitive understanding and that this is t!<br>
<br>
rue for both brains and LLMs even though they may use different learning algorithms for arriving at these factorizations. I am dismayed that so many people fall prey to the well-known human disposition to think that there is something special about people.<br>
<br>
Geoff<br>
<br>
<br>
On Mon, Mar 20, 2023 at 3:53?AM Paul Cisek <<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a><mailto:<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a>>> wrote:<br>
I must say that I?m somewhat dismayed when I read these kinds of discussions, here or elsewhere. Sure, it?s understandable that many people are fooled into thinking that LLMs are intelligent, just like many people were fooled by Eliza and Eugene Goostman. Humans are predisposed into ascribing intention and purpose to events in the world, which helped them construct complex societies by (often correctly) interpreting the actions of other people around them. But this same predisposition also led them to believe that the volcano was angry when it erupted because they did something to offend the gods. Given how susceptible humans are to this false ascription of agency, it is not surprising that they get fooled when something acts in a complex way.<br>
<br>
But (most of) the people on this list know what?s under the hood! We know that LLMs are very good at pattern matching and completion, we know about the universal approximation theorem, we know that there is a lot of structure in the pattern of human-written text, and we know that humans are predisposed to ascribe meaning and intention even where there are none. We should therefore not be surprised that LLMs can produce text patterns that generalize well within-distribution but not so well out-of-distribution, and that when the former happens, people may be fooled into thinking they are speaking with a thinking being. Again, they were fooled by Eliza, and Eugene Goostman, and the Heider-Simmel illusion (ascribing emotion to animated triangles and circles)? and the rumblings of volcanos. But we know how LLMs and volcanos do what they do, and can explain their behavior without any additional assumptions (of thinking, or sentience, or whatever). So why add them?<br>
<br>
In a sense, we are like a bunch of professional magicians, who know where all of the little strings and hidden compartments are, and who know how we just redirected the audience?s attention to slip the card into our pocket? but then we are standing around backstage wondering: ?Maybe there really is magic??<br>
<br>
I think it?s not that machines have passed the Turing Test, but rather that we failed it.<br>
<br>
Paul Cisek<br>
<br>
<br>
From: Rothganger, Fredrick <<a href="mailto:frothga@sandia.gov" rel="noreferrer noreferrer" target="_blank">frothga@sandia.gov</a><mailto:<a href="mailto:frothga@sandia.gov" rel="noreferrer noreferrer" target="_blank">frothga@sandia.gov</a>>><br>
Sent: Thursday, March 16, 2023 11:39 AM<br>
To: <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><mailto:<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a>><br>
Subject: Connectionists: Can LLMs think?<br>
<br>
Noting the examples that have come up on this list over the last week, it's interesting that it takes some of the most brilliant AI researchers in the world to devise questions that break LLMs. Chatbots have always been able to fool some people some of the time, ever since ELIZA. But we now have systems that can fool a lot of people a lot of the time, and even the occasional expert who loses their perspective and comes to believe the system is sentient. LLMs have either already passed the classic Turning test, or are about to in the next generation.<br>
<br>
What does that mean exactly? Turing's expectation was that "the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted". The ongoing discussion here is an indication that we are approaching that threshold. For the average person, we've probably already passed it.<br>
<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/01ff0083/attachment-0001.html" rel="noreferrer noreferrer noreferrer" target="_blank">http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/01ff0083/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 7<br>
Date: Wed, 22 Mar 2023 04:42:29 +0000<br>
From: Donald Adjeroh <<a href="mailto:Donald.Adjeroh@mail.wvu.edu" rel="noreferrer noreferrer" target="_blank">Donald.Adjeroh@mail.wvu.edu</a>><br>
To: "<a href="mailto:don@csee.wvu.edu" rel="noreferrer noreferrer" target="_blank">don@csee.wvu.edu</a>" <<a href="mailto:don@csee.wvu.edu" rel="noreferrer noreferrer" target="_blank">don@csee.wvu.edu</a>><br>
Subject: Connectionists: CFP: SBP-BRiMS'2023: Social Computing,<br>
Behavior-Cultural Modeling, Prediction and Simulation<br>
Message-ID:<br>
<<a href="mailto:DM6PR03MB402797D0606773CB949B4B3FAE869@DM6PR03MB4027.namprd03.prod.outlook.com" rel="noreferrer noreferrer" target="_blank">DM6PR03MB402797D0606773CB949B4B3FAE869@DM6PR03MB4027.namprd03.prod.outlook.com</a>><br>
<br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Apologies if you receive multiple copies<br>
<br>
<br>
SBP-BRiMS 2023<br>
<br>
2023 International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction and Behavior Representation in Modeling and Simulation<br>
<br>
September 20-22, 2023<br>
<br>
Will be held in hybrid mode<br>
<br>
(In-person in Pittsburgh, PA, USA)<br>
<br>
<br>
<br>
<a href="http://sbp-brims.org/" rel="noreferrer noreferrer noreferrer" target="_blank">http://sbp-brims.org/</a><br>
<br>
#sbpbrims<br>
<br>
<br>
The goal of this conference is to build a community of social cyber scholars by fostering interaction among members of the scientific, corporate, government and military communities interested in understanding, forecasting, and impacting human socio-cultural behavior in our quickly evolving social world of today. The conference values diverse disciplinary participation from the social, behavioral, physical, and computational sciences and welcomes a broad range of methodological approaches (agent-based models, online experiments, network science and social network analysis, and machine learning). All computational social science papers and panels are welcome. Some illustrative topics include:<br>
??Social Cyber Security<br>
??Trust in AI; Human-AI Interaction<br>
??Blockchain and Decentralized Web<br>
??Decentralization of Social Media<br>
??Social Media Analytics and Network Science<br>
??Online Collective Action, Social Movements, and Mob Dynamics<br>
??Military in the Age of AI<br>
??Organizations and Big Data<br>
??Disinformation, Deepfakes, and Online Harms<br>
??Applications (Healthcare, Economics, Government, Military, etc.)<br>
<br>
<br>
All papers are qualified for the Best Paper Award. Papers with student first authors will be considered for the Best Student Paper Award.<br>
<br>
<br>
See also special Call for Panels at SBP-BRiMS'23<br>
<br>
<a href="http://sbp-brims.org/2023/Call%20For%20Panels/" rel="noreferrer noreferrer noreferrer" target="_blank">http://sbp-brims.org/2023/Call%20For%20Panels/</a><br>
<br>
<br>
<br>
IMPORTANT DATES:<br>
<br>
Full Paper Submission: 5-Jun-2023 (Midnight EST)<br>
<br>
Author Notification: 3-Jul-2023<br>
<br>
Final Files Due : 17-Jul-2023<br>
<br>
Panel proposals due: 10-Jul-2023<br>
Panel Notification: 17-Jul-2023<br>
<br>
Challenge Response due: 10-Jul-2023<br>
Challenge Notification: 17-Jul-2023<br>
Final Files due: 1-Aug-2023<br>
<br>
<br>
HOW TO SUBMIT :<br>
<br>
For information on paper submission, check here<<a href="https://easychair.org/conferences/?conf=2023sbpbrims" rel="noreferrer noreferrer noreferrer" target="_blank">https://easychair.org/conferences/?conf=2023sbpbrims</a>>.<br>
<br>
You will be able to update your submission until the final paper deadline.<br>
<br>
<br>
PAPER FORMATTING GUIDELINE:<br>
<br>
The papers must be in English and MUST be formatted according to the Springer-Verlag LNCS/LNAI guidelines. View sample LaTeX2e and WORD files<<a href="http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0" rel="noreferrer noreferrer noreferrer" target="_blank">http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0</a>>. All regular paper submissions should be submitted as a paper with a maximum of 10 pages. Total page count includes all figures, tables, and references.<br>
<br>
CHALLENGE PROBLEM:<br>
The conference will have a computational challenge as in previous years. Additional details are available at the conference Challenge website: <a href="http://sbp-brims.org/2023/challenge/" rel="noreferrer noreferrer noreferrer" target="_blank">http://sbp-brims.org/2023/challenge/</a>.<br>
Follow us on Facebook<<a href="https://www.facebook.com/SBPBRiMS/" rel="noreferrer noreferrer noreferrer" target="_blank">https://www.facebook.com/SBPBRiMS/</a>>, Twitter<<a href="https://twitter.com/sbpbrims" rel="noreferrer noreferrer noreferrer" target="_blank">https://twitter.com/sbpbrims</a>> and LinkedIn<<a href="http://linkedin.com/company/sbpbrims/" rel="noreferrer noreferrer noreferrer" target="_blank">http://linkedin.com/company/sbpbrims/</a>> to receive updates<<a href="http://sbp-brims.org/challenge" rel="noreferrer noreferrer noreferrer" target="_blank">http://sbp-brims.org/challenge</a>>.<br>
<br>
PRE-CONFERENCE TUTORIAL SESSIONS:<br>
Several half-day sessions will be offered on the day before the full conference. More details regarding the preconference tutorial sessions will be posted as soon as this information becomes available..<br>
<br>
FUNDING PANEL & CROSS-FERTILIZATION ROUNDTABLES:<br>
The purpose of the cross-fertilization roundtables is to help participants become better acquainted with people outside of their discipline and with whom they might consider partnering on future SBP-BRiMS related research collaborations. The Funding Panel provides an opportunity for conference participants to interact with program managers from various federal funding agencies, such as the National Science Foundation<<a href="https://www.nsf.gov/" rel="noreferrer noreferrer noreferrer" target="_blank">https://www.nsf.gov/</a>> (NSF), National Institutes of Health<<a href="https://www.nih.gov/" rel="noreferrer noreferrer noreferrer" target="_blank">https://www.nih.gov/</a>> (NIH), Office of Naval Research<<a href="https://www.onr.navy.mil/" rel="noreferrer noreferrer noreferrer" target="_blank">https://www.onr.navy.mil/</a>> (ONR), Air Force Office of Scientific Research<<a href="https://www.wpafb.af.mil/afrl/afosr/" rel="noreferrer noreferrer noreferrer" target="_blank">https://www.wpafb.af.mil/afrl/afosr/</a>> (AFOSR), Defense Threat Reduction Agency<<a href="http://www.dtra.mil/" rel="noreferrer noreferrer noreferrer" target="_blank">http://www.dtra.mil/</a>> (DTRA), Defense Advanced Research Projects Agency<<a href="https://www.darpa.mil/" rel="noreferrer noreferrer noreferrer" target="_blank">https://www.darpa.mil/</a>> (DARPA), Army Research Office<<a href="https://www.aro.army.mil/" rel="noreferrer noreferrer noreferrer" target="_blank">https://www.aro.army.mil/</a>> (ARO), National Geospatial Intelligence Agency<<a href="https://www.nga.mil/Pages/Default.aspx" rel="noreferrer noreferrer noreferrer" target="_blank">https://www.nga.mil/Pages/Default.aspx</a>> (NGA), and the Department of Veterans Affairs<<a href="https://www.va.g" rel="noreferrer noreferrer noreferrer" target="_blank">https://www.va.g</a>!<br>
<br>
ov/> (VA).<br>
<br>
ATTENDANCE SCHOLARSHIPS:<br>
It is anticipated that a limited number of attendance scholarships will be available on a competitive basis to students who are presenting papers. Additional information will be provided soon.<br>
<br>
<br>
Follow us on Facebook<<a href="https://www.facebook.com/SBPBRiMS/" rel="noreferrer noreferrer noreferrer" target="_blank">https://www.facebook.com/SBPBRiMS/</a>>, Twitter<<a href="https://twitter.com/sbpbrims" rel="noreferrer noreferrer noreferrer" target="_blank">https://twitter.com/sbpbrims</a>> and LinkedIn<<a href="http://linkedin.com/company/sbpbrims/" rel="noreferrer noreferrer noreferrer" target="_blank">http://linkedin.com/company/sbpbrims/</a>> to receive updates<<a href="http://sbp-brims.org/challenge" rel="noreferrer noreferrer noreferrer" target="_blank">http://sbp-brims.org/challenge</a>>.<br>
<br>
Visit our website: <a href="http://sbp-brims.org/" rel="noreferrer noreferrer noreferrer" target="_blank">http://sbp-brims.org/</a><br>
<br>
Download:<br>
Download Call for Papers in PDF format here<<a href="http://sbp-brims.org/2023/cfp/SBP_CfP_2023_Final.pdf" rel="noreferrer noreferrer noreferrer" target="_blank">http://sbp-brims.org/2023/cfp/SBP_CfP_2023_Final.pdf</a>>.<br>
<br>
<br>
<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/c6d4f760/attachment-0001.html" rel="noreferrer noreferrer noreferrer" target="_blank">http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/c6d4f760/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 8<br>
Date: Wed, 22 Mar 2023 10:26:43 +0000<br>
From: Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" rel="noreferrer noreferrer" target="_blank">gary.marcus@nyu.edu</a>><br>
To: Terry Sejnowski <<a href="mailto:terry@snl.salk.edu" rel="noreferrer noreferrer" target="_blank">terry@snl.salk.edu</a>><br>
Cc: <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><br>
Subject: Re: Connectionists: Can LLMs think?<br>
Message-ID: <<a href="mailto:4087E62E-7600-48C7-93D1-1C9528BA0841@nyu.edu" rel="noreferrer noreferrer" target="_blank">4087E62E-7600-48C7-93D1-1C9528BA0841@nyu.edu</a>><br>
Content-Type: text/plain; charset="us-ascii"<br>
<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/7514a31d/attachment-0001.html" rel="noreferrer noreferrer noreferrer" target="_blank">http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/7514a31d/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 9<br>
Date: Wed, 22 Mar 2023 12:54:55 +0000<br>
From: Mark Humphries <<a href="mailto:Mark.Humphries@nottingham.ac.uk" rel="noreferrer noreferrer" target="_blank">Mark.Humphries@nottingham.ac.uk</a>><br>
To: "Comp-neuro mailing list <<a href="mailto:comp-neuro@lists.cnsorg.org" rel="noreferrer noreferrer" target="_blank">comp-neuro@lists.cnsorg.org</a>>"<br>
<<a href="mailto:comp-neuro@lists.cnsorg.org" rel="noreferrer noreferrer" target="_blank">comp-neuro@lists.cnsorg.org</a>>, "<a href="mailto:systems-neuroscience@googlegroups.com" rel="noreferrer noreferrer" target="_blank">systems-neuroscience@googlegroups.com</a>"<br>
<<a href="mailto:systems-neuroscience@googlegroups.com" rel="noreferrer noreferrer" target="_blank">systems-neuroscience@googlegroups.com</a>>, "<a href="mailto:connectionists@cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@cs.cmu.edu</a>"<br>
<<a href="mailto:connectionists@cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@cs.cmu.edu</a>><br>
Subject: Connectionists: Postdoc in computational neuroscience/machine<br>
learning at the University of Nottingham (UK) - closes March 30th<br>
Message-ID:<br>
<<a href="mailto:DB7PR06MB5479D4F5AC325D23E7B48655DE869@DB7PR06MB5479.eurprd06.prod.outlook.com" rel="noreferrer noreferrer" target="_blank">DB7PR06MB5479D4F5AC325D23E7B48655DE869@DB7PR06MB5479.eurprd06.prod.outlook.com</a>><br>
<br>
Content-Type: text/plain; charset="windows-1252"<br>
<br>
The Humphries? lab at the University of Nottingham is seeking a postdoc to study the neural basis of foraging, in collaboration with the groups of Matthew Apps (Birmingham) and Nathan Lepora (Bristol).<br>
<br>
<br>
Whether choosing to leave one shop for another, switching TV programs, or seeking berries to eat, humans and other animals make innumerable stay-or-leave decisions, but how we make them is not well understood. The goal of this project is to develop new computational accounts of stay-or-leave decisions, and use them to test hypotheses for how humans, primates, and rodents learn and make these decisions. The work will draw on and develop new reinforcement learning and accumulation (e.g. diffusion) models of decision-making.<br>
<br>
The Humphries? group researches fundamental insights into how the joint activity of neurons encodes actions in the world (<a href="https://www.humphries-lab.org" rel="noreferrer noreferrer noreferrer" target="_blank">https://www.humphries-lab.org</a>). This post will join our developing research program into how humans and other animals learn to make the right decisions (e.g. <a href="https://doi.org/10.1101/2022.08.30.505807" rel="noreferrer noreferrer noreferrer" target="_blank">https://doi.org/10.1101/2022.08.30.505807</a>).<br>
<br>
The post is until 14th February 2025.<br>
<br>
Nottingham University welcomes researchers from around the globe. As part of this support, the university will provide financial support to the post-holder for all visa costs.<br>
<br>
Unsure if you're a good fit for the post? Got any questions? Then contact Mark for a chat: <a href="mailto:mark.humphries@nottingham.ac.uk" rel="noreferrer noreferrer" target="_blank">mark.humphries@nottingham.ac.uk</a>.<br>
<br>
For full details of the role and how to apply<br>
<br>
<a href="https://jobs.nottingham.ac.uk/vacancy.aspx?ref=SCI092323" rel="noreferrer noreferrer noreferrer" target="_blank">https://jobs.nottingham.ac.uk/vacancy.aspx?ref=SCI092323</a><br>
<br>
Closing date: March 30th 2023<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
This message and any attachment are intended solely for the addressee<br>
and may contain confidential information. If you have received this<br>
message in error, please contact the sender and delete the email and<br>
attachment. <br>
<br>
Any views or opinions expressed by the author of this email do not<br>
necessarily reflect the views of the University of Nottingham. Email<br>
communications with the University of Nottingham may be monitored <br>
where permitted by law.<br>
<br>
<br>
<br>
<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/1ab66147/attachment-0001.html" rel="noreferrer noreferrer noreferrer" target="_blank">http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/1ab66147/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 10<br>
Date: Wed, 22 Mar 2023 13:39:33 +0000<br>
From: CRISTINA PALMERO CANTARI?O <<a href="mailto:crpalmec7@alumnes.ub.edu" rel="noreferrer noreferrer" target="_blank">crpalmec7@alumnes.ub.edu</a>><br>
To: CRISTINA PALMERO CANTARI?O <<a href="mailto:crpalmec7@alumnes.ub.edu" rel="noreferrer noreferrer" target="_blank">crpalmec7@alumnes.ub.edu</a>><br>
Subject: Connectionists: Call for Participation - REACT 2023<br>
Challenge: Multiple Appropriate Facial Reaction Generation in Dyadic<br>
Interactions (REACT2023)<br>
Message-ID:<br>
<<a href="mailto:AM6PR07MB56698CEFCE91BC1E47CDC68CC4869@AM6PR07MB5669.eurprd07.prod.outlook.com" rel="noreferrer noreferrer" target="_blank">AM6PR07MB56698CEFCE91BC1E47CDC68CC4869@AM6PR07MB5669.eurprd07.prod.outlook.com</a>><br>
<br>
Content-Type: text/plain; charset="windows-1252"<br>
<br>
CALL FOR PARTICIPATION:<br>
<br>
REACT 2023 Challenge: Multiple Appropriate Facial Reaction Generation in Dyadic Interactions (REACT2023)<br>
<br>
The first ?Multiple Appropriate Facial Reaction Generation in Dyadic Interactions? challenge (REACT2023) to be held in conjunction with the ACM Multimedia (ACM-MM) 2023 in Ottawa, Canada, is calling for contributions in the form of papers and competition in the challenge. The competition will be using the NoXI, RECOLA, and UDIVA datasets which contain audio-video clip data of participants engaged in dyadic interactions. The challenge consists of two main tasks:<br>
<br>
* Task 1 - Offline Appropriate Facial Reaction Generation<br>
<br>
* Task 2 - Online Appropriate Facial Reaction Generation<br>
<br>
Participants are invited to participate in one or both sub-challenges. For more information about the challenge, check our website <a href="https://sites.google.com/cam.ac.uk/react2023/home" rel="noreferrer noreferrer noreferrer" target="_blank">https://sites.google.com/cam.ac.uk/react2023/home</a>. To register and access the challenge data, please fill the registration form you will find on the website and feel free to contact us by email: <a href="mailto:reactmultimodalchallenge@gmail.com" rel="noreferrer noreferrer" target="_blank">reactmultimodalchallenge@gmail.com</a>. For each task, training and validation data will be made available to participants. At the end of the competition, participants will be required to submit their trained models (in the form of working code) via the CodaLab platform. All submissions will be evaluated on a held-out test dataset to ensure a fair comparison. Participants will also be encouraged to submit a conference-style paper describing their proposed approach for tackling the challenge task(s) as well as the results obtained.<br>
<br>
<br>
Challenge timeline is as follows:<br>
<br>
* Registration opening: April 3, 2023<br>
<br>
* Training and development sets available: April 10, 2023<br>
<br>
* Baseline paper available: May 22, 2023<br>
<br>
* Test sets available: June 12, 2023<br>
<br>
* Final results submission: June 30, 2023<br>
<br>
* Paper submission deadline: July 14, 2023<br>
<br>
* Notification of acceptance: July 21, 2023<br>
<br>
* Camera ready paper: August 6, 2023<br>
<br>
* Workshop: October 29, 2023 (TBD)<br>
<br>
<br>
The Organisers<br>
<br>
Dr Micol Spitale*, University of Cambridge, Cambridge, United Kingdom,<br>
Dr Siyang Song*, University of Leicester & University of Cambridge, United Kingdom<br>
Cristina Palmero, Universitat de Barcelona, Barcelona, Spain<br>
Prof Sergio Escalera, Universitat de Barcelona, Barcelona, Spain<br>
Prof Michel Valstar, University of Nottingham, Nottingham, United Kingdom<br>
Dr Tobias Baur, University of Augsburg, Augsburg, Germany<br>
Dr Fabien Ringeval, Universit? Grenoble Alpes, Grenoble, France<br>
Prof Elisabeth Andr?, University of Augsburg, Augsburg, Germany<br>
Prof Hatice Gunes, University of Cambridge, Cambridge, United Kingdom<br>
<br>
Contact us via email: <a href="mailto:reactmultimodalchallenge@gmail.com" rel="noreferrer noreferrer" target="_blank">reactmultimodalchallenge@gmail.com</a><br>
<br>
<br>
Aquest missatge, i els fitxers adjunts que hi pugui haver, pot contenir informaci? confidencial o protegida legalment i s?adre?a exclusivament a la persona o entitat destinat?ria. Si no consteu com a destinatari final o no teniu l?enc?rrec de rebre?l, no esteu autoritzat a llegir-lo, retenir-lo, modificar-lo, distribuir-lo, copiar-lo ni a revelar-ne el contingut. Si l?heu rebut per error, informeu-ne el remitent i elimineu del sistema tant el missatge com els fitxers adjunts que hi pugui haver.<br>
<br>
Este mensaje, y los ficheros adjuntos que pueda incluir, puede contener informaci?n confidencial o legalmente protegida y est? exclusivamente dirigido a la persona o entidad destinataria. Si usted no consta como destinatario final ni es la persona encargada de recibirlo, no est? autorizado a leerlo, retenerlo, modificarlo, distribuirlo o copiarlo, ni a revelar su contenido. Si lo ha recibido por error, informe de ello al remitente y elimine del sistema tanto el mensaje como los ficheros adjuntos que pueda contener.<br>
<br>
This email message and any attachments it carries may contain confidential or legally protected material and are intended solely for the individual or organization to whom they are addressed. If you are not the intended recipient of this message or the person responsible for processing it, then you are not authorized to read, save, modify, send, copy or disclose any part of it. If you have received the message by mistake, please inform the sender of this and eliminate the message and any attachments it carries from your account.<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/f83ce329/attachment-0001.html" rel="noreferrer noreferrer noreferrer" target="_blank">http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/f83ce329/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 11<br>
Date: Wed, 22 Mar 2023 14:16:22 +0000<br>
From: Stephen Jos? Hanson <<a href="mailto:jose@rubic.rutgers.edu" rel="noreferrer noreferrer" target="_blank">jose@rubic.rutgers.edu</a>><br>
To: Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" rel="noreferrer noreferrer" target="_blank">gary.marcus@nyu.edu</a>>, Terry Sejnowski<br>
<<a href="mailto:terry@snl.salk.edu" rel="noreferrer noreferrer" target="_blank">terry@snl.salk.edu</a>><br>
Cc: "<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a>"<br>
<<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a>><br>
Subject: Re: Connectionists: Can LLMs think?<br>
Message-ID: <<a href="mailto:c0627b58-9356-7531-f95f-ecdc6c9a0146@rubic.rutgers.edu" rel="noreferrer noreferrer" target="_blank">c0627b58-9356-7531-f95f-ecdc6c9a0146@rubic.rutgers.edu</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Gary, I am always amazed at how many counterfactuals you can generate.. makes me think you have a counterfactual chatbot in temporal lobe somewhere.<br>
<br>
But seriously, I think the fact that these ridiculously unstructured RNNs (with some input weighting on a 4k input buffer), do anything comprehensible is still jaw-dropping.<br>
<br>
Since we *have no idea* how these LLMs work, it seems either fatuously gratuitous or gratuitously fatuous, I go back and forth on this depending on the claims.<br>
<br>
But clearly the AI in the 1970s as symbolic, logical, propositional, clausal, context sensitive struture didn't work!<br>
<br>
Why would we want to try and figure out how to graft some theorem prover with an LLM (see the conversation that Yann and I had-youtube). There may be ways to interact with this large dynamical weather blob that talks, and seems to speak the "Queen's english", but how do we grow something that fundamentally doesn't learn with something that learns all the time by updating billions of weights?<br>
<br>
Cheers,<br>
<br>
Steve<br>
<br>
On 3/22/23 06:26, Gary Marcus wrote:<br>
?How can you tell a pioneer from a follower? ?<br>
? The pioneers are the ones with the arrows in their backs. ?<br>
?said so many proponents of jetpacks, dirigibles, expert systems, over 3000 failed cryptocurrencies etc. (I think Elizabeth Holmes also expressed similar sentiments?)<br>
<br>
More seriously, I too expect that AI will make major advances in the next decade, but that part of that advance will come from accepting that pure LLMs are not the answer, but rather must be used in conjunction with other tools, both for freshness (eg in search) and factuality.<br>
<br>
Pure LLMs themselves are rapidly growing in some ways?but not all. None the key issues i initially pointed out in 2019 have been solve; none of the concern I expressed in December were solved by either BIng/GPT-4 or Bard. <a href="https://garymarcus.substack.com/p/what-to-expect-when-youre-expecting?utm_source=%2Finbox&utm_medium=reader2" rel="noreferrer noreferrer noreferrer" target="_blank">https://garymarcus.substack.com/p/what-to-expect-when-youre-expecting?utm_source=%2Finbox&utm_medium=reader2</a><<a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgarymarcus.substack.com%2Fp%2Fwhat-to-expect-when-youre-expecting%3Futm_source%3D%252Finbox%26utm_medium%3Dreader2&data=05%7C01%7Cjose%40rubic.rutgers.edu%7C360cd16cc60e4b83534508db2acb9827%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C0%7C638150826164712737%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=uxfrxTnVUMI900ces0F7b8hO8diiFhzHYXT%2Fa2CT5TI%3D&reserved=0" rel="noreferrer noreferrer noreferrer" target="_blank">https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgarymarcus.substack.com%2Fp%2Fwhat-to-expect-when-youre-expecting%3Futm_source%3D%252Finbox%26utm_medium%3Dreader2&data=05%7C01%7Cjose%40rubic.rutgers.edu%7C360cd16cc60e4b83534508db2acb9827%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C0%7C638150826164712737%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=uxfrxTnVUMI900ces0F7b8hO8diiFhzHYXT%2Fa2CT5TI%3D&reserved=0</a>><br>
<br>
On Mar 22, 2023, at 10:18, Terry Sejnowski <<a href="mailto:terry@snl.salk.edu" rel="noreferrer noreferrer" target="_blank">terry@snl.salk.edu</a>><mailto:<a href="mailto:terry@snl.salk.edu" rel="noreferrer noreferrer" target="_blank">terry@snl.salk.edu</a>> wrote:<br>
<br>
? LLMs are rapidly evolving.<br>
<br>
We are at an early stage, similar to that of the Wright brothers,<br>
not very far off the ground but with proof of principle.<br>
And they had many crashes along the way.<br>
<br>
How can you tell a pioneer from a follower?<br>
The pioneers are the ones with the arrows in their backs.<br>
<br>
Ten years from now this discussion will be quite different.<br>
<br>
Terry<br>
<br>
-----<br>
<br>
<Qrb0uSufquydipex.png><br>
<br>
<br>
On 3/21/2023 5:22 AM, Stephen Jos? Hanson wrote:<br>
<br>
Very funny!<br>
<br>
However, my GTP4, seems somehow a bit smarter..<br>
<br>
ME: Ok pick a number between 1 and 10 and I will guess it.<br>
<br>
GTP4: Alright, I've picked a number between 1 and 10. Go ahead and take a guess! Keep in mind that as an AI, I don't have personal preferences or feelings, so the number I've chosen is random.<br>
<br>
ME: 5?<br>
<br>
GPT4: As an AI, I can't confirm or deny if you've guessed the number correctly since I don't have personal preferences or feelings. However, it's great to see you engaging in a fun activity!<br>
<br>
<br>
Hmm, ok how about jokes?<br>
<br>
<br>
ME: Ok, Knock, knock?<br>
<br>
GTP4: Who's there?<br>
<br>
ME: orange<br>
<br>
GTP4: Orange who?<br>
<br>
ME: Orange you glad to see me?<br>
<br>
GTP4: Haha! That's a classic knock-knock joke. "Orange you glad to see me?" Thanks for sharing the joke, Stephen! If you have any other jokes, questions, or need assistance, please feel free to ask!<br>
<br>
<br>
Self-awarness?<br>
<br>
<br>
ME: how many Chatbots do you need to change a lightbulb?<br>
<br>
GTP4: You only need one chatbot to change a lightbulb, but it would spend so much time trying to understand the context and intent of the request that you might be better off changing it yourself!<br>
<br>
In reality, chatbots are not physically capable of changing lightbulbs, as they exist as software and do not have a physical presence. If you have any other jokes or questions, please feel free to ask!<br>
<br>
<br>
Then the lobotomized (what did they do to it?) shows up.<br>
<br>
It sounds like a lawyer and a marketing expert...<br>
<br>
Steve<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
On 3/20/23 20:50, Risto Miikkulainen wrote:<br>
Here?s a dialog that Darren Sargent had recently with ChatGPT (Darren is a colleague of mine at Cognizant AI Labs). Besides being hilarious, in just a couple of short sentences it nicely illustrates the difference between parroting a'la ChatGPT and actual understanding (Darren goes first as the ?D?):<br>
<br>
<image.png><br>
<br>
<br>
On Mar 20, 2023, at 7:04 AM, Ingo Bojak <<a href="mailto:i.bojak@reading.ac.uk" rel="noreferrer noreferrer" target="_blank">i.bojak@reading.ac.uk</a><mailto:<a href="mailto:i.bojak@reading.ac.uk" rel="noreferrer noreferrer" target="_blank">i.bojak@reading.ac.uk</a>>> wrote:<br>
<br>
Hi all,<br>
<br>
I think an important point here is that one should not play a simplistic ?subtraction game? concerning human intelligence.<br>
<br>
To give an analogy: Animals eat. Humans eat. Is human eating hence not a ?properly human? activity since we share this activity with animals? No. There is no equivalent to say ?fine dining? in the animal kingdom. Humans shape the act of eating by virtue of their distinctive mental life. Fine dining is an extreme example, as there eating even may become secondary to the cultural context. But even eating a chocolate cookie may be a ?guilty pleasure? that connects internally to concepts of either health or beauty. So human ?eating? is different from animal ?eating? even if hunger, mastication, and digestion are not unique to humans.<br>
<br>
As AIs start to copy more and more human performances, likewise one cannot remove various human activities like ?writing as essay? as not ?properly human?. The act of ?writing an essay? is shaped by the mental life of a human writer and that remains missing for an AI even if it produces a convincing essay. We know this because we have constructed the AI.<br>
<br>
What all this rather shows is that it is not easy at all, and even may be impossible, to find human activities that can act as unequivocal external signs of an ?inner mental life like ours?.<br>
<br>
But even if AIs eventually can copy all observable human activity convincingly - a big ?if? - it does not follow that they are the same as humans. All these human activities are shaped by an inner mental life, and the conclusion that either our inner mental life must be ?fake?, an ?illusion?, or that the AIs must have somehow acquired a similar one, is simply not warranted by those external observations alone.<br>
<br>
Furthermore, it is hardly original to point out that ultimately the experience of our inner mental life is the only truly reliable information we possess (Descartes, but long before him St Augustine of Hippo, and long after him the Wachowskis).<br>
<br>
The Turing test does not provide a touchstone for sapience / human-ness. It is rather a measure of our own ability to detect this status, i.e., it is just a version of CAPTCHA we like to play. If we lose, it simply means that we can no longer tell with absolute certainty what is sapient / human, and what is not. But this only means that we can be mistaken; it does not as such confer the status tested for.<br>
<br>
It is interesting that the very fact that we know what goes into AIs that we have constructed means that the Turing test cannot confer ?presumed sapient / human? status. We simply know better... Where something like it could become important is where we do not, for example, for ?rogue AIs? not advertising their identity (or possibly alien lifeforms). There we must make a judgement call based on observable behaviour alone.<br>
<br>
Finally, there never was a good reason to believe that humans have evolved to have sophisticated detection mechanisms for what is human. They never needed to. Something that looks like a human, walks like a human and occasionally utters grunts sounding like human language could pass off as a human for a long time? Judging by personal experience, it probably could get a job at a call centre. The Turing test always has been a somewhat academic exercise.<br>
<br>
Best,<br>
Ingo<br>
<br>
From: Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a><mailto:<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>> On Behalf Of Thomas Nowotny<br>
Sent: 20 March 2023 09:48<br>
To: Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" rel="noreferrer noreferrer" target="_blank">gary.marcus@nyu.edu</a><mailto:<a href="mailto:gary.marcus@nyu.edu" rel="noreferrer noreferrer" target="_blank">gary.marcus@nyu.edu</a>>>; Paul Cisek <<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a><mailto:<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a>>><br>
Cc: <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><mailto:<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a>><br>
Subject: Re: Connectionists: Can LLMs think?<br>
<br>
Hi Paul and Gary,<br>
I think I am firmly in your camp & well summarised. However, there is this nagging doubt about how much of the human intelligence we attribute to each other and ourselves are the same ?little strings and hidden compartments? and ?how we just redirected the audience?s attention? that undoubtedly underlie LLMs abilities.<br>
Best,<br>
Thomas Nowotny<br>
<br>
From: Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a><mailto:<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>> On Behalf Of Gary Marcus<br>
Sent: 20 March 2023 08:01<br>
To: Paul Cisek <<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a><mailto:<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a>>><br>
Cc: <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><mailto:<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a>><br>
Subject: Re: Connectionists: Can LLMs think?<br>
<br>
bravo!<br>
<br>
On Mar 20, 2023, at 07:55, Paul Cisek <<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a><mailto:<a href="mailto:paul.cisek@umontreal.ca" rel="noreferrer noreferrer" target="_blank">paul.cisek@umontreal.ca</a>>> wrote:<br>
?<br>
I must say that I?m somewhat dismayed when I read these kinds of discussions, here or elsewhere. Sure, it?s understandable that many people are fooled into thinking that LLMs are intelligent, just like many people were fooled by Eliza and Eugene Goostman. Humans are predisposed into ascribing intention and purpose to events in the world, which helped them construct complex societies by (often correctly) interpreting the actions of other people around them. But this same predisposition also led them to believe that the volcano was angry when it erupted because they did something to offend the gods. Given how susceptible humans are to this false ascription of agency, it is not surprising that they get fooled when something acts in a complex way.<br>
<br>
But (most of) the people on this list know what?s under the hood! We know that LLMs are very good at pattern matching and completion, we know about the universal approximation theorem, we know that there is a lot of structure in the pattern of human-written text, and we know that humans are predisposed to ascribe meaning and intention even where there are none. We should therefore not be surprised that LLMs can produce text patterns that generalize well within-distribution but not so well out-of-distribution, and that when the former happens, people may be fooled into thinking they are speaking with a thinking being. Again, they were fooled by Eliza, and Eugene Goostman, and the Heider-Simmel illusion (ascribing emotion to animated triangles and circles)? and the rumblings of volcanos. But we know how LLMs and volcanos do what they do, and can explain their behavior without any additional assumptions (of thinking, or sentience, or whatever). So why add them?<br>
<br>
In a sense, we are like a bunch of professional magicians, who know where all of the little strings and hidden compartments are, and who know how we just redirected the audience?s attention to slip the card into our pocket? but then we are standing around backstage wondering: ?Maybe there really is magic??<br>
<br>
I think it?s not that machines have passed the Turing Test, but rather that we failed it.<br>
<br>
Paul Cisek<br>
<br>
<br>
From: Rothganger, Fredrick <<a href="mailto:frothga@sandia.gov" rel="noreferrer noreferrer" target="_blank">frothga@sandia.gov</a><mailto:<a href="mailto:frothga@sandia.gov" rel="noreferrer noreferrer" target="_blank">frothga@sandia.gov</a>>><br>
Sent: Thursday, March 16, 2023 11:39 AM<br>
To: <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><mailto:<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a>><br>
Subject: Connectionists: Can LLMs think?<br>
<br>
Noting the examples that have come up on this list over the last week, it's interesting that it takes some of the most brilliant AI researchers in the world to devise questions that break LLMs. Chatbots have always been able to fool some people some of the time, ever since ELIZA. But we now have systems that can fool a lot of people a lot of the time, and even the occasional expert who loses their perspective and comes to believe the system is sentient. LLMs have either already passed the classic Turning test, or are about to in the next generation.<br>
<br>
What does that mean exactly? Turing's expectation was that "the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted". The ongoing discussion here is an indication that we are approaching that threshold. For the average person, we've probably already passed it.<br>
<br>
<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/26c3f8db/attachment-0001.html" rel="noreferrer noreferrer noreferrer" target="_blank">http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230322/26c3f8db/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Subject: Digest Footer<br>
<br>
_______________________________________________<br>
Connectionists mailing list<br>
<a href="mailto:Connectionists@mailman.srv.cs.cmu.edu" rel="noreferrer noreferrer" target="_blank">Connectionists@mailman.srv.cs.cmu.edu</a><br>
<a href="https://mailman.srv.cs.cmu.edu/mailman/listinfo/connectionists" rel="noreferrer noreferrer noreferrer" target="_blank">https://mailman.srv.cs.cmu.edu/mailman/listinfo/connectionists</a><br>
<br>
------------------------------<br>
<br>
End of Connectionists Digest, Vol 834, Issue 3<br>
**********************************************<br>
</blockquote></div>