Connectionists: Can LLMs think?
Gary Marcus
gary.marcus at nyu.edu
Fri Mar 17 03:48:28 EDT 2023
Average people were fooled by a chatbot called Eugene Goostman that ultimately had exactly zero long-term impact on AI. I wrote about it and the trouble with the Turing Test here in 2014: https://www.newyorker.com/tech/annals-of-technology/what-comes-after-the-turing-test
> On Mar 17, 2023, at 8:42 AM, Rothganger, Fredrick <frothga at sandia.gov> wrote:
>
>
> Noting the examples that have come up on this list over the last week, it's interesting that it takes some of the most brilliant AI researchers in the world to devise questions that break LLMs. Chatbots have always been able to fool some people some of the time, ever since ELIZA. But we now have systems that can fool a lot of people a lot of the time, and even the occasional expert who loses perspective and comes to believe the system is sentient. LLMs have either already passed the classic Turing test, or will do so in the next generation.
>
> What does that mean exactly? Turing's expectation was that "the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted". The ongoing discussion here is an indication that we are approaching that threshold. For the average person, we've probably already passed it.
>