Connectionists: Large Language Models and the Reverse Turing Test

Gary Marcus gary.marcus at nyu.edu
Thu Sep 15 11:54:14 EDT 2022


I don’t happen to agree with the thesis (“The smarter you are, the smarter the LLM appears to be”), but the opening is genius. Thanks, Terry!

> On Sep 14, 2022, at 23:05, Terry Sejnowski <terry at snl.salk.edu> wrote:
> 
> 
> Large Language Models and the Reverse Turing Test
> 
> https://arxiv.org/abs/2207.14382
> 
> Terrence Sejnowski
> Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions to the question of whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs that reached wildly different conclusions. A new possibility was uncovered that could explain this divergence. What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a Reverse Turing Test. If so, then by studying interviews we may be learning more about the intelligence and beliefs of the interviewer than about the intelligence of the LLMs. As LLMs become more capable, they may transform the way we interact with machines and how they interact with each other.
> 
> LLMs can talk the talk, but can they walk the walk?
> 
> 
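The "minimal priming with a few examples" mentioned in the abstract is what is now commonly called few-shot prompting: the task is specified entirely in the prompt text, with no updates to the model's weights. Below is a minimal sketch of how such a prompt might be assembled; the sentiment task, the example reviews, and the complete() stub are illustrative assumptions of mine, not material from the paper.

    # Sketch of few-shot prompting: "priming" an LLM with a handful of
    # labeled examples placed in the prompt itself, with no fine-tuning.
    # The task, the examples, and the complete() stub are hypothetical.

    FEW_SHOT_EXAMPLES = [
        ("The movie was a waste of time.", "negative"),
        ("An absolute delight from start to finish.", "positive"),
    ]

    def build_prompt(query: str) -> str:
        """Concatenate labeled examples, then the unlabeled query."""
        parts = ["Classify the sentiment of each review."]
        for text, label in FEW_SHOT_EXAMPLES:
            parts.append(f"Review: {text}\nSentiment: {label}")
        parts.append(f"Review: {query}\nSentiment:")
        return "\n\n".join(parts)

    def complete(prompt: str) -> str:
        # Hypothetical stand-in: a real system would send `prompt` to an
        # LLM API (e.g., GPT-3) and return the model's continuation.
        raise NotImplementedError

    print(build_prompt("I could not put the book down."))

The model's continuation after the final "Sentiment:" is taken as its answer, which is why a single prompt, rather than a separately trained network, can stand in for each new task.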