Connectionists: Large Language Models and the Reverse Turing Test
Terry Sejnowski
terry at snl.salk.edu
Wed Sep 14 23:24:28 EDT 2022
Large Language Models and the Reverse Turing Test
https://arxiv.org/abs/2207.14382
Terrence Sejnowski
Large Language Models (LLMs) have been transformative. They are
pre-trained foundational models that are self-supervised and can be
adapted with fine-tuning to a wide range of natural language tasks,
each of which previously would have required a separate network
model. This is one step closer to the extraordinary versatility of
human language. GPT-3, and more recently LaMDA, can carry on dialogs
with humans on many topics after minimal priming with a few
examples. However, there has been a wide range of reactions to
whether these LLMs understand what they are saying or exhibit signs
of intelligence. This high variance is exhibited in three interviews
with LLMs that reached wildly different conclusions. A new possibility
was uncovered that could explain this divergence. What appears to be
intelligence in LLMs may in fact be a mirror that reflects the
intelligence of the interviewer, a remarkable twist that could be
considered a Reverse Turing Test. If so, then by studying interviews
we may be learning more about the intelligence and beliefs of the
interviewer than about the intelligence of the LLMs. As LLMs become
more capable, they may transform the way we interact with machines
and how they interact with each other.
LLMs can talk the talk, but can they walk the walk?
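
The "minimal priming with a few examples" mentioned in the abstract
refers to few-shot prompting: a handful of worked examples are placed
in the model's context window and the model continues the pattern,
with no fine-tuning or gradient updates. Below is a minimal sketch of
the mechanics using Hugging Face's transformers text-generation
pipeline. The small gpt2 checkpoint stands in for a full-scale LLM
such as GPT-3 (its completions will be far weaker), and the prompt
contents are illustrative assumptions, not taken from the paper.

# Minimal sketch of few-shot priming: task demonstrations go in the
# prompt, and the model is asked to continue the pattern. No weights
# are updated. gpt2 is a small stand-in for an LLM like GPT-3, and
# this English-to-French prompt is illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivree\n"
    "cheese =>"
)

# Greedy decoding of a short continuation of the primed pattern.
out = generator(prompt, max_new_tokens=10, do_sample=False)
print(out[0]["generated_text"])

The same priming interface is what makes interview results so
sensitive to the interviewer: the examples and questions supplied in
the prompt shape the persona the model reflects back.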