Connectionists: Can LLMs think?

Rothganger, Fredrick frothga at sandia.gov
Thu Mar 16 11:38:42 EDT 2023


Noting the examples that have come up on this list over the last week, it's interesting that it takes some of the most brilliant AI researchers in the world to devise questions that break LLMs. Chatbots have always been able to fool some people some of the time, ever since ELIZA. But we now have systems that can fool a lot of people a lot of the time, and even the occasional expert who loses their perspective and comes to believe the system is sentient. LLMs have either already passed the classic Turing test, or are about to in the next generation.

What does that mean exactly? Turing's expectation was that "the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted". The ongoing discussion here is an indication that we are approaching that threshold. For the average person, we've probably already passed it.
