Connectionists: Physics and Psychology (and the C-word)

Richard Loosemore rloosemore at susaro.com
Tue Jan 28 14:05:31 EST 2014



Brian,

Everything hinges on the definition of the concept ("consciousness") 
under consideration.

In the chapter I wrote in Wang & Goertzel's "Theoretical Foundations of 
Artificial General Intelligence" I pointed out (echoing Chalmers) that 
too much is said about C without a clear enough understanding of what is 
meant by it ... and then I went on to clarify what exactly could be 
meant by it, and thereby came to a resolution of the problem (with 
testable predictions).  So I think the answer to the question you pose 
below is:

(a) Yes, in general, having an outcome measure that correlates with C 
is good, but only given a clear and unambiguous meaning for C itself 
(which I don't think anyone has, so it is, after all, of no value to 
look for outcome measures that correlate); and

(b) All three of the approaches you mention are sidelined and finessed 
by the approach I used in the above-mentioned paper, where I clarify the 
definition by first clarifying why we have so much difficulty defining 
it.  In other words, there is a fourth way, and that is to explain it as 
... well, I have to leave that dangling, because there is too much 
subtlety to pack into an elevator pitch.  (The title is the best I can 
do: "Human and Machine Consciousness as a Boundary Effect in the 
Concept Analysis Mechanism".)

Certainly, though, the weakness of all quantum-mechanical 'answers' is 
that they are stranded on the wrong side of the explanatory gap.


Richard Loosemore


Reference
Loosemore, R. P. W. (2012). Human and Machine Consciousness as a 
Boundary Effect in the Concept Analysis Mechanism. In P. Wang & B. 
Goertzel (Eds.), Theoretical Foundations of Artificial General 
Intelligence. Atlantis Press.
http://richardloosemore.com/docs/2012a_Consciousness_rpwl.pdf


On 1/28/14, 10:34 AM, Brian J Mingus wrote:
> Hi Richard,
>
> > I can tell you that the quantum story isn't nearly clear enough in 
> the minds of physicists yet, so how it can be applied to the C 
> question is beyond me.  Frankly, it does NOT apply:  saying anything 
> about observers and entanglement does not at any point touch the kind 
> of statements that involve talk about qualia etc.
>
> I'm not sure I see the argument you're trying to make here. If you 
> have an outcome measure that you agree correlates with consciousness, 
> then we have a framework for scientifically studying it.
>
> Here's my setup: If you create a society of models and do not expose 
> them to a corpus containing consciousness philosophy, and they then, in 
> a reasonably short amount of time, independently recreate it, they are 
> almost certainly conscious. This design explicitly rules out a 
> generative model that accidentally spits out consciousness philosophy.
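> 
> A minimal sketch of that protocol, in Python (filter_corpus, 
> train_society, generate_discourse, and the term list are all 
> hypothetical placeholders, not an existing API):
> 
>     # Hypothetical sketch: hold consciousness philosophy out of the
>     # training corpus, then watch whether the society's own discourse
>     # independently recreates it.
>     CONSCIOUSNESS_TERMS = {"qualia", "hard problem", "phenomenal",
>                            "what it is like"}
> 
>     def filter_corpus(corpus):
>         """Drop every document that mentions consciousness philosophy."""
>         return [doc for doc in corpus
>                 if not any(t in doc.lower() for t in CONSCIOUSNESS_TERMS)]
> 
>     def litmus_test(corpus, train_society, generate_discourse,
>                     max_rounds=1000):
>         """Train a society of models on the filtered corpus, then test
>         whether their conversation recreates consciousness philosophy."""
>         society = train_society(filter_corpus(corpus))
>         for _ in range(max_rounds):
>             text = generate_discourse(society)
>             if any(t in text.lower() for t in CONSCIOUSNESS_TERMS):
>                 return True  # the philosophy was independently recreated
>         return False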
>
> Another approach is to accept that our brains are so similar that you 
> and I are almost certainly both conscious, and to then perform 
> experiments on each other and study our subjective reports.
>
> Another approach is to perform experiments on your own brain and to 
> write first person reports about your experience.
>
> These three approaches each have tradeoffs, and each provides unique 
> information. The first approach, in particular, might ultimately allow 
> us to draw some of the strongest possible conclusions. For example, it 
> allows for the scientific study of the extent to which quantum effects 
> may or may not be relevant.
>
> I'm very interested in hearing any counterarguments as to why this 
> general approach won't work. If it /can't/ work, then I would argue 
> that perhaps we should not create full models of ourselves, but should 
> instead focus on upgrading ourselves. From that perspective, getting 
> this to work is extremely important, however futuristic it may seem.
>
> > So let's let that sleeping dog lie.... (?).
>
> Not gonna happen. :)
>
> Brian Mingus
> http://grey.colorado.edu
>
> On Tue, Jan 28, 2014 at 7:32 AM, Richard Loosemore 
> <rloosemore at susaro.com> wrote:
>
>     On 1/27/14, 11:30 PM, Brian J Mingus wrote:
>
>         Consciousness is also such a bag of worms that we can't rule
>         out that qualia owe their totally non-obvious and a priori
>         unpredicted existence to concepts derived from quantum
>         mechanics, such as nested observers, or entanglement.
>
>         As far as I know, my litmus test for a model is the only way
>         to tell whether low-level quantum effects are required: if a
>         model that has not been exposed to a corpus containing
>         consciousness philosophy goes on to independently recreate
>         consciousness philosophy, despite being composed of (for
>         example) point neurons, then we can be sure that low-level
>         quantum mechanical details are not important.
>
>         Note, however, that such a model might still rely on nested
>         observers or entanglement. I'll let a quantum physicist chime
>         in on that - although I will note that, according to news
>         articles I've read, we keep managing to entangle larger and
>         larger objects - up to the size of molecules at this time,
>         IIRC.
>
>
>         Brian Mingus
>         http://grey.colorado.edu/mingus
>
>     Speaking as someone who is both a physicist and a cognitive
>     scientist, AND someone who has written papers resolving that whole
>     C-word issue, I can tell you that the quantum story isn't nearly
>     clear enough in the minds of physicists yet, so how it can be
>     applied to the C question is beyond me.  Frankly, it does NOT
>     apply:  saying anything about observers and entanglement does not
>     at any point touch the kind of statements that involve talk about
>     qualia etc.  So let's let that sleeping dog lie.... (?).
>
>     As for using the methods/standards of physics over here in cog sci
>     ..... I think it best to listen to George Bernard Shaw on this
>     one:  "Never do unto others as you would they do unto you:  their
>     tastes may not be the same."
>
>     Our tastes (requirements/constraints/issues) are quite different,
>     so what happens elsewhere cannot be directly, slavishly imported.
>
>
>     Richard Loosemore
>
>     Wells College
>     Aurora NY
>     USA
>
>
