Connectionists: Galileo and the priest

Risto Miikkulainen risto at cs.utexas.edu
Tue Mar 14 00:49:06 EDT 2023


Back in the 1980s and 1990s we were trying to get neural networks to perform variable binding, and also what Dave Touretzky called “dynamic inferencing”, i.e. bringing together two pieces of information that the network knew how to process separately but had never seen together before (like different kinds of grammatical structures). It was very difficult and did not work well. But it now seems to work in GPT: it can, for instance, write a scientific explanation in the style of Shakespeare. The attention mechanism allows it to learn relationships, the scale-up allows it to form abstractions, and then relationships between abstractions. This effect emerges only at very large scales, scales that are starting to approach that of the brain. Perhaps the scale allows it to capture a fundamental processing principle of the brain that we have not been able to identify or model before? It would be interesting to try to characterize it in these terms.
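The attention mechanism being discussed here can be sketched in a few lines: each output is a weighted superposition of the network's other states, with the weights computed from pairwise relevance scores. A minimal NumPy sketch of scaled dot-product self-attention (variable names and the toy input are illustrative, not from any particular model):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a
    softmax-weighted superposition of the value vectors."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise relevance
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # rows sum to 1
    return weights @ V                               # weighted superposition

# three 4-dimensional "states of the network" (toy data)
rng = np.random.default_rng(0)
states = rng.normal(size=(3, 4))
out = attention(states, states, states)              # self-attention
print(out.shape)  # (3, 4)
```

Because the softmax weights are a convex combination, each output state is literally a blend of the other states, which is one way to read the "soft links" idea below.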

— Risto

> On Mar 13, 2023, at 3:38 AM, Claudius Gros <gros at itp.uni-frankfurt.de> wrote:
> 
> -- attention as thought processes? --
> 
> The discussion here on the list shows that
> ChatGPT produces intriguing results. I guess
> everybody agrees. What it means remains open.
> 
> Let me throw in a hypothesis. 
> 
> With the introduction of the attention framework, 
> deep-learning architectures acquired a kind of 
> 'soft link' by computing weighted superpositions 
> of other states of the network. Possibly this is
> similar to what happens in the brain when we 'think',
> namely combining states of distinct brain regions
> into a single processing stream.
> 
> If that were true (which remains to be seen), it would 
> imply that the processes performed by transformer 
> architectures bear a certain resemblance to actual
> thinking.
> 
> Any thoughts (by human brains) on this hypothesis?
> 
> Claudius
> 
> ==============================================================
> 
> 
> On Friday, March 10, 2023 20:29 CET, Geoffrey Hinton <geoffrey.hinton at gmail.com> wrote: 
> 
>> In Bertolt Brecht's play about Galileo there is a scene where Galileo asks
>> a priest to look through a telescope to see the moons of Jupiter. The
>> priest says there is no point looking because it would be impossible for
>> things to go round Jupiter (this is from my memory of seeing the play about
>> 50 years ago).
>> 
>> I suspect that Chomsky thinks of himself as more like Galileo than the
>> priest. But in his recent NYT opinion piece, it appears that the authors
>> did not actually check what chatGPT would say in answer to their questions
>> about falling apples or people too stubborn to talk to. Maybe they have
>> such confidence that chatGPT could not possibly be understanding that there
>> is no point looking at the data.
> 
> 
> -- 
> ### 
> ### Prof. Dr. Claudius Gros
> ### http://itp.uni-frankfurt.de/~gros
> ### 
> ### Complex and Adaptive Dynamical Systems, A Primer   
> ### A graduate-level textbook, Springer (2008/10/13/15)
> ### 
> ### Life for barren exoplanets: The Genesis project
> ### https://link.springer.com/article/10.1007/s10509-016-2911-0
> ###
> 



