<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<font face="Times New Roman, Times, serif">Brian,<br>
<br>
Quantum mechanics can be completely simulated on a classical
computer, so if quantum mechanics does matter for C then it must
be a matter of computational efficiency and nothing more. We also
believe that BQP (the set of problems solved efficiently on a
quantum computer) is bigger than BPP (the set of problems solved
efficiently on a classical computer), but not by much. I'm not
fully up to date on this, but I think factoring and boson sampling
are about the only two examples believed to be in BQP but not in
BPP. BPP is also believed to be much smaller than NP, so if C does
require QM then for some reason it sits in a small sliver of
complexity space.<br>
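<br>
To make the first point concrete, here is a minimal state-vector
simulator (a plain Python/NumPy sketch; the helper names are just my
own, not any standard library). An n-qubit state is a vector of 2^n
complex amplitudes, so a classical machine can run any quantum
circuit exactly - the only barrier is the exponential cost:<br>
<pre>
import numpy as np

def apply_gate(state, gate, target, n_qubits):
    """Apply a single-qubit gate to qubit `target` of an n-qubit state."""
    # Build I (x) ... (x) gate (x) ... (x) I with Kronecker products,
    # placing `gate` at position `target`.
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                # start in |000>
for q in range(n):
    state = apply_gate(state, H, q, n)        # uniform superposition
print(np.abs(state)**2)                       # eight probabilities of 1/8

# The state has 2**n entries and each gate here is a 2**n x 2**n matrix,
# which is exactly why the question is efficiency (BQP vs. BPP), not
# whether simulation is possible at all.
</pre>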
<br>
best,<br>
Carson<br>
<br>
PS I do like your self-consistent test for confirming
consciousness. I once proposed, as a test of C, that we could just
run Turing machines and see which ones ask why they exist. Kind of
similar to your idea.<br>
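<br>
For what it's worth, that old proposal amounted to a dovetailed
search, so non-halting machines can't block it (a toy Python sketch;
nth_machine and step are placeholders for a real enumeration of
Turing machines):<br>
<pre>
def dovetail(nth_machine, step, probe, n_machines=1000, n_rounds=1000):
    """Interleave execution of machines 0..n_machines-1, one step at a
    time, and yield the index of any machine whose output satisfies
    `probe` -- no machine has to halt for the search to make progress."""
    machines = {}
    for r in range(1, n_rounds):
        for i in range(min(r, n_machines)):
            if i not in machines:
                machines[i] = nth_machine(i)   # lazily construct machine i
            machines[i] = step(machines[i])    # advance it by one step
            if probe(machines[i].output):
                yield i

def asks_why_it_exists(output):
    """The crudest possible probe; a real test of C would need far more."""
    return "why do i exist" in output.lower()
</pre>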
<br>
<br>
</font>
<div class="moz-cite-prefix">On 1/28/14 3:09 PM, Brian J Mingus
wrote:<br>
</div>
<blockquote
cite="mid:CAJ=QoBSJ_=GQr=8cPg8Jo9wmtCLR7qs6Gy94cdghspRaHG0S1Q@mail.gmail.com"
type="cite">
<div dir="ltr">Hi Richard, thanks for the feedback.
<div><br>
</div>
<div><span
style="font-family:arial,sans-serif;font-size:12.727272033691406px">>
Yes, in general, having an outcome measure that correlates
with C ... that is good, but only with a clear and
unambiguous meaning for C itself (which I don't think anyone
has, so therefore it is, after all, of no value to look for
outcome measures that correlate)</span><br>
</div>
<div><span
style="font-family:arial,sans-serif;font-size:12.727272033691406px"><br>
</span></div>
<div><font face="arial, sans-serif">Actually, the outcome
measure I described is independent of a clear and
unambiguous meaning for C itself, and in an interesting way:
the models, like us, essentially reinvent the entire
literature, and have a conversation as we do, inventing
almost all the same positions that we've invented (including
the one in your paper). </font></div>
<div><font face="arial, sans-serif"><br>
</font></div>
<div>I will read your paper and see if it changes my position.
At the present time, however, I can't imagine any information
that would solve the so-called zombie problem. I'm not a big
fan of integrated information theory - I don't think hydrogen
atoms are conscious, and I don't think naive bayes trained on
a large corpus and run in generative mode is conscious. Thus,
if the model doesn't go through the same philosophical
reasoning that we've collectively gone through with regards to
subjective experience, then I'm going to wonder if its
experience is anything like mine at all.<br>
</div>
<div><br>
</div>
<div>Touching back on QM, if we create a point neuron-based
model that doesn't wax philosophical on consciousness, I'm
going to wonder if we should add lower levels of analysis.</div>
<div><br>
</div>
<div>Cheers,</div>
<div><font face="arial, sans-serif"><br>
</font></div>
<div><font face="arial, sans-serif">Brian Mingus</font></div>
<div><font face="arial, sans-serif"><br>
</font></div>
<div><font face="arial, sans-serif"><a moz-do-not-send="true"
href="http://grey.colorado.edu/mingus" target="_blank">http://grey.colorado.edu/mingus</a></font></div>
<div><span
style="font-family:arial,sans-serif;font-size:12.727272033691406px"><br>
</span></div>
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">On Tue, Jan 28, 2014 at 12:05 PM,
Richard Loosemore <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:rloosemore@susaro.com" target="_blank">rloosemore@susaro.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"> <br>
<br>
Brian,<br>
<br>
Everything hinges on the definition of the concept
("consciousness") under consideration.<br>
<br>
In the chapter I wrote in Wang & Goertzel's
"Theoretical Foundations of Artificial General
Intelligence" I pointed out (echoing Chalmers) that too
much is said about C without a clear enough
understanding of what is meant by it .... and then I
went on to clarify what exactly could be meant by it,
and thereby came to a resolution of the problem (with
testable predictions). So I think the answer to the
question you pose below is that:<br>
<br>
(a) Yes, in general, having an outcome measure that
correlates with C ... that is good, but only with a
clear and unambiguous meaning for C itself (which I don't
think anyone has, so therefore it is, after all, of no
value to look for outcome measures that correlate), and
<br>
<br>
(b) All three of the approaches you mention are
sidelined and finessed by the approach I used in the
abovementioned paper, where I clarify the definition by
clarifying first why we have so much difficulty defining
it. In other words, there is a fourth way, and that is
to explain it as ... well, I have to leave that dangling
because there is too much subtlety to pack into an
elevator pitch. (The title is the best I can do: "Human
and Machine Consciousness as a Boundary Effect in the
Concept Analysis Mechanism").<br>
<br>
Certainly though, the weakness of all quantum mechanics
'answers' is that they are stranded on the wrong side of
the explanatory gap.<br>
<br>
<br>
Richard Loosemore<br>
<br>
<br>
Reference<br>
Loosemore, R.P.W. (2012). Human and Machine
Consciousness as a Boundary Effect in the Concept
Analysis Mechanism. In: P. Wang & B. Goertzel
(Eds), Theoretical Foundations of Artificial General
Intelligence. Atlantis Press.<br>
<a moz-do-not-send="true"
href="http://richardloosemore.com/docs/2012a_Consciousness_rpwl.pdf"
target="_blank">http://richardloosemore.com/docs/2012a_Consciousness_rpwl.pdf</a>
<div>
<div><br>
<br>
<br>
On 1/28/14, 10:34 AM, Brian J Mingus wrote:
<blockquote type="cite">
<div dir="ltr">
<div class="gmail_extra">Hi Richard,</div>
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">> I can tell you
that the quantum story isn't nearly enough
clear in the minds of physicists, yet, so how
it can be applied to the C question is beyond
me. Frankly, it does NOT apply: saying
anything about observers and entanglement does
not at any point touch the kind of statements
that involve talk about qualia etc.</div>
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">I'm not sure I see the
argument you're trying to make here. If you
have an outcome measure that you agree
correlates with consciousness, then we have a
framework for scientifically studying it. </div>
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">Here's my setup: If you
create a society of models and do not expose
them to a corpus containing consciousness
philosophy and they then, in a reasonably
short amount of time, independently rewrite
it, they are almost certainly conscious. This
design explicitly rules out a generative model
that accidentally spits out consciousness
philosophy.</div>
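<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">Schematically, the design looks like this (a
Python sketch only - the keyword filter is a crude stand-in for real
corpus curation, and train_model / generate_reply are placeholders
for whatever architecture is under test, e.g. point neurons):</div>
<pre>
CONSCIOUSNESS_TERMS = {"qualia", "consciousness", "zombie", "hard problem"}

def filtered(corpus):
    """Drop every document that mentions consciousness philosophy."""
    return [doc for doc in corpus
            if not any(t in doc.lower() for t in CONSCIOUSNESS_TERMS)]

def run_society(corpus, n_agents, n_rounds, train_model, generate_reply):
    """Train a society of models on the filtered corpus, let them
    converse, and return the transcript for later scoring."""
    clean = filtered(corpus)
    agents = [train_model(clean) for _ in range(n_agents)]
    transcript, message = [], "Hello."
    for _ in range(n_rounds):
        for agent in agents:
            message = generate_reply(agent, transcript, message)
            transcript.append(message)
    return transcript

def reinvented_philosophy(transcript):
    """Crude score: did the society re-derive the ideas the filter
    removed?  (Real judgment is much harder than keyword matching.)"""
    return any(t in msg.lower() for msg in transcript
               for t in CONSCIOUSNESS_TERMS)
</pre>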
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">Another approach is to
accept that our brains are so similar that you
and I are almost certainly both conscious, and
to then perform experiments on each other and
study our subjective reports.</div>
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">Another approach is to
perform experiments on your own brain and to
write first person reports about your
experience.</div>
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra"> These three approaches
each have tradeoffs, and each provide unique
information. The first approach, in
particular, might ultimately allow us to draw
some of the strongest possible conclusions.
For example, it allows for the scientific
study of the extent to which quantum effects
may or may not be relevant.</div>
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">I'm very interested in
hearing any counterarguments as to why this
general approach won't work. If it <i>can't</i> work,
then I would argue that perhaps we should not
create full models of ourselves, but should
instead focus on upgrading ourselves. From
that perspective, getting this to work is
extremely important, however futuristic it
may seem.</div>
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">> <span
style="font-family:arial,sans-serif;font-size:12.727272033691406px">So
let's let that sleeping dog lie.... (?).</span></div>
<div class="gmail_extra"> <span
style="font-family:arial,sans-serif;font-size:12.727272033691406px"><br>
</span></div>
<div class="gmail_extra"><span
style="font-family:arial,sans-serif;font-size:12.727272033691406px">Not
gonna' happen. :)</span></div>
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">Brian Mingus</div>
<div class="gmail_extra"><a
moz-do-not-send="true"
href="http://grey.colorado.edu"
target="_blank">http://grey.colorado.edu</a></div>
<div class="gmail_extra"><br>
<div class="gmail_quote"> On Tue, Jan 28, 2014
at 7:32 AM, Richard Loosemore <span
dir="ltr"><<a moz-do-not-send="true"
href="mailto:rloosemore@susaro.com"
target="_blank">rloosemore@susaro.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">On
1/27/14, 11:30 PM, Brian J Mingus wrote:<br>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
Consciousness is also such a bag of
worms that we can't rule out that qualia
owe their totally non-obvious and a
priori unpredicted existence to concepts
derived from quantum mechanics, such as
nested observers or entanglement.<br>
<br>
As far as I know, my litmus test for a
model is the only way to tell whether
low-level quantum effects are required:
if a model that has not been exposed
to a corpus containing consciousness
philosophy goes on to independently
recreate consciousness philosophy,
despite being composed of (for example)
point neurons, then we can be sure that
low-level quantum mechanical details are
not important.<br>
<br>
Note, however, that such a model might
still rely on nested observers or
entanglement. I'll let a quantum
physicist chime in on that - although I
will note that, according to news
articles I've read, we keep managing
to entangle larger and larger objects -
up to the size of molecules at this
time, IIRC.<br>
<br>
<br>
Brian Mingus<br>
<a moz-do-not-send="true"
href="http://grey.colorado.edu/mingus"
target="_blank">http://grey.colorado.edu/mingus</a><br>
<br>
</blockquote>
Speaking as someone who is both a physicist
and a cognitive scientist, AND someone who
has written papers resolving that whole
C-word issue, I can tell you that the
quantum story isn't nearly clear enough in
the minds of physicists yet, so how it
can be applied to the C question is beyond
me. Frankly, it does NOT apply: saying
anything about observers and entanglement
does not at any point touch the kind of
statements that involve talk about qualia
etc. So let's let that sleeping dog
lie.... (?).<br>
<br>
As for using the methods/standards of
physics over here in cog sci ..... I think
it best to listen to George Bernard Shaw
on this one: "Never do unto others as you
would they do unto you: their tastes may
not be the same."<br>
<br>
Our tastes
(requirements/constraints/issues) are
quite different, so what happens elsewhere
cannot be directly, slavishly imported.<br>
<br>
<br>
Richard Loosemore<br>
<br>
Wells College<br>
Aurora NY<br>
USA<br>
<br>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
<br>
</body>
</html>