Connectionists: Chomsky's apple

Rothganger, Fredrick frothga at sandia.gov
Mon Mar 13 11:25:16 EDT 2023


These are interesting ideas. As long as a question can be answered by referencing written text, some LLM is likely to succeed. Basically, all the required information is embedded in its natural domain. If we want machines to "understand" the world in a non-textual way, we need to come up with questions that can only be answered by referencing knowledge outside the textual domain. This is of course a bit unfair to an AI that has no opportunity to be embodied and thus learn that way. It also seems to be very difficult for us to imagine a question, presented symbolically, that can't be answered entirely with symbolic knowledge.

The creation of quasi-novel artwork by stable-diffusion networks might be an example of operating outside the symbolic domain. This is a bit vague, since image representation in machines is ultimately a finely-grained symbolic representation (RGB values over discrete positions).
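To make that last point concrete, here is a minimal Python sketch (my own illustration, assuming NumPy is available; it is not drawn from any of the models under discussion) of an image as nothing more than discrete positions holding quantized RGB values:

    # A machine "image" is a finely-grained symbolic object:
    # a grid of discrete positions, each holding three integer symbols (R, G, B).
    import numpy as np

    # A 2x2 image with 3 channels; every value is an integer in 0..255.
    image = np.array([[[255, 0, 0], [0, 255, 0]],
                      [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

    print(image.shape)   # (2, 2, 3): rows x columns x channels
    print(image[0, 1])   # [  0 255   0] -- the "symbol" at one discrete position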

________________________________
From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of Prof Leslie Smith <l.s.smith at cs.stir.ac.uk>
Sent: Friday, March 10, 2023 3:45 PM
To: Adam Krawitz <akrawitz at uvic.ca>
Cc: Connectionists List <connectionists at cs.cmu.edu>
Subject: [EXTERNAL] Re: Connectionists: Chomsky's apple

Dear all:

I'm beginning to think that we are looking at this from the wrong end: the
issue isn't about what/whether ChatGPT understands, but about what we mean
by "understanding" in a human or an animal, and what we might mean by
understanding in a machine.

If I say: I'll drive to London, it's clear that (a) I have access to a car
(b) I can drive, etc. But I may or may not understand how the car works. I
may or may not understand the nature of the frictional forces that allow
the wheels to move the car. I may or may not understand the chemistry that
allows the internal combustion engine/battery to operate. I (and
presumably the person I am talking to) have a model of understanding of
driving cars that suffices for our needs.

In other words, our "understanding" relates to the activities we want to
do, activities that are about our operation/interaction in our
environment. So we often use simple models that suffice for our
activities and interactions. Our understanding is systematic, but may well
be wrong, or (more likely) just sufficient for our purposes (I know I need
to put petrol/gas in the car, or I need to charge the battery) rather than
complete (*).

Our understanding clearly works entirely differently from ChatGPT's (and I
agree with Richard Loosemore that ascribing a human sort of understanding
to ChatGPT is not appropriate).
But if we want to use the same terms to describe machines and humans, we
should really start by deciding what these terms mean when applied to
humans.

(*) In fact our models are never complete: they rely on concepts like
solidity, fluidity, electrical current, gravity, light, etc., concepts
that we understand sufficiently for everyday usage. Completeness would
imply a full physics that went down to subatomic/quantum levels!

Adam Krawitz wrote:
>
>> ChatGPT's errors reveal that its "understanding" of the world is not
>> systematic but rather consists of patches of competence separated
>> by regions of incompetence and incoherence.
>
