<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
<br>
<font size="4"><font color="#ff0000"><b><font color="#381ae6">Large
Language Models and the Reverse Turing Test</font><br>
</b></font></font><font color="#ff0000"><b><font color="#000000"><br>
<a class="moz-txt-link-freetext" href="https://arxiv.org/abs/2207.14382">https://arxiv.org/abs/2207.14382</a></font></b></font><font
size="4"><font color="#ff0000"><b><font size="4"><br>
<br>
</font></b></font></font>
<div class="authors"><a
href="https://arxiv.org/search/cs?searchtype=author&query=Sejnowski%2C+T">Terrence
Sejnowski</a></div>
<blockquote class="abstract mathjax"> Large Language Models (LLMs)
have been transformative. They are pre-trained
foundational models that are self-supervised and can be adapted
with fine
tuning to a wide ranger of natural language tasks, each of which
previously
would have required a separate network model. This is one step
closer to the
extraordinary versatility of human language. GPT-3 and more
recently LaMDA can
carry on dialogs with humans on many topics after minimal priming
with a few
examples. However, there has been a wide range of reactions on
whether these
LLMs understand what they are saying or exhibit signs of
intelligence. This
high variance is exhibited in three interviews with LLMs reaching
wildly
different conclusions. A new possibility was uncovered that could
explain this
divergence. What appears to be intelligence in LLMs may in fact be
a mirror
that reflects the intelligence of the interviewer, a remarkable
twist that
could be considered a Reverse Turing Test. If so, then by studying
interviews
we may be learning more about the intelligence and beliefs of the
interviewer
than the intelligence of the LLMs. As LLMs become more capable
they may
transform the way we interact with machines and how they interact
with each
other. <br>
<br>
LLMs can talk the talk, but can they walk the walk?<br>
<br>
<br>
</blockquote>
</body>
</html>