Are thoughts REALLY Random?

Manoel Fernando Tenorio tenorio at ee.ecn.purdue.edu
Wed Jan 31 08:53:42 EST 1990


--------

The ideas of minimum description length (MDL), Kolmogorov-Chaitin complexity,
and other measures are certainly related in some way. But the fact that some
nonlinear systems can display chaotic behavior, and through that generate
infinite (or very large) output from a finite and small description, is at
least puzzling. One interesting and simple chaotic equation, when plotted in
a polar system, displayed behavior that looked like a wheel marked at a
point, rotating at a rate which, when divided by the perimeter of the wheel,
gave an irrational number. So the marked point, when the wheel was sampled
periodically, was NEVER (at infinite precision) at the same place; never
periodic. We are accustomed to ideas of semi-periodicity, such as a sine
wave modulated by a music signal. But for truly non-repetitive behavior, the
length of the music signal has to be the same as that of the modulated
signal. So this chaotic business seems even more amazing.
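The wheel picture can be sketched numerically. As a stand-in for the
unnamed equation, here is a rotation by an irrational fraction of a turn
per sample (the golden ratio, my choice for illustration): the sampled
point never lands in the same place twice.

```python
# Sketch of the marked-wheel example: a point on a circle advanced by an
# irrational fraction of a turn per sample. Because the step is irrational,
# no two sampled angles ever coincide exactly (at infinite precision; here
# we simply check that many successive samples are all distinct).

GOLDEN = (5 ** 0.5 - 1) / 2  # irrational rotation step, in turns

def sample_wheel(n_samples):
    """Return angular positions (in turns, mod 1) of the marked point."""
    return [(i * GOLDEN) % 1.0 for i in range(n_samples)]

positions = sample_wheel(1000)
# All 1000 sampled positions are distinct -- the orbit is never periodic.
print(len(set(positions)))
```

A tiny generator (one number) thus produces an endless non-repeating
string of samples, which is the puzzle about small descriptions above.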

I read a while ago in IEEE Computer of a group at a university around the DC
area (reference please), funded by DARPA, that was doing image compression
using chaotic series, at amazing ratios. Rissanen showed the relationship
among image compression, system identification, prediction, etc., using
measures of complexity. It would be great if the same ideas from the chaotic
image compression could, in some form, be applied to our problems of memory,
description, and computation. Unfortunately, in my experience, a property of
these systems seems to hinder the best of efforts. In trying to reduce a
string to its generator, a small change in the precision of the parameters
or initial conditions can send you in the wrong direction. It is a
many-to-one mapping in the worst sense, making the inverse process almost
hopeless.
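That sensitivity is easy to demonstrate with the logistic map (my choice
of generator for illustration, not one from the DARPA work): two initial
conditions agreeing to ten decimal places soon produce completely
different strings, so recovering a generator from its output is
ill-conditioned.

```python
# Sensitivity to initial conditions in a simple chaotic generator
# (logistic map at r = 4, chosen purely for illustration). A 1e-10
# perturbation in the initial condition is amplified until the two
# trajectories are completely decorrelated -- which is why inverting
# "string -> generator" is so hopeless in practice.

def logistic_orbit(x0, n, r=4.0):
    """Iterate x <- r*x*(1-x) and return the orbit as a list of n values."""
    orbit = [x0]
    for _ in range(n - 1):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

a = logistic_orbit(0.4, 100)
b = logistic_orbit(0.4 + 1e-10, 100)
divergence = max(abs(x - y) for x, y in zip(a, b))
print(divergence)  # order 1, despite a 1e-10 initial difference
```

Run forward, nearly identical parameters give wildly different strings;
run backward, wildly different parameters can look like candidates for
the same string, the many-to-one trap described above.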

It is interesting to see that we painted ourselves into a linear corner, and
now, forced to think about a nonlinear universe, we are confronted with
phenomena that our theories don't support. Good luck to whoever is trying
to break this code.

--ft.



< Manoel Fernando Tenorio (tenorio at ee.ecn.purdue.edu) >
< MSEE233D                                            >
< School of Electrical Engineering                    >
< Purdue University                                   >
< W. Lafayette, IN, 47907                             >



--- Your message of: Tuesday,01/30/90 ---

   From:  Jordan B Pollack <pollack at cis.ohio-state.edu>
   Subject:  Are thoughts REALLY Random? 

   (Background: Scott is commenting less on the question than on the
   unspoken subtext, which is my idea of building a very large
   reconstructive memory based on quasi-inverting something like the
   Mandelbrot set.  Given a figure, find a pointer; then only store the
   pointer and simple reconstruction function; This has been mentioned
   twice in print, in a survey article in AI Review, and in NIPS 1988.
   Yesterday's note was certainly related; but I wanted to
   ignore the search question right now!)

   >> Do you see any fundamental difference between the
   >> Mandelbrot set and the proverbial infinite number of monkeys with
   >> typewriters? 

   I think that the difference is that the initial-conditions/reduced
   descriptions/pointers to the Mandelbrot set can be precisely stored by
   physical computers.  This leads to a replicability of "access" not
   available to the monkeys.
    
   >> Maybe the right measure of Platonic density is something like the
   >> expected length of the address (M bits) that you would need to point
   >> to a specific N-bit pattern that you want to locate somewhere in this
   >> infinite heap of not-exactly-random bits.

   Thanks! Not bad for a starting point! The Platonic Complexity (ratio
   of N/M) would decrease to 0 at the GIGO limit, and increase to
   infinity if it took effectively 0 bits to access arbitrary
   information. This is very satisfying.

   >> Why shouldn't M be much greater than N?

   Normally, we computer types live with a density of 1, as we convert
   symbolic information into bit-packed data-structures. Thus we already
   have lots of systems with PC=1! Also I can point to systems with PC <1
   (Bank teller machines) and with PC>1 (Postscript).

   Jordan Pollack                            Assistant Professor
   CIS Dept/OSU                              Laboratory for AI Research
   2036 Neil Ave                             Email: pollack at cis.ohio-state.edu
   Columbus, OH 43210                        Fax/Phone: (614) 292-4890
--- end of message ---
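Pollack's PC ratio in the quoted message can be made concrete with an
ordinary compressor standing in for the "address" (zlib is my substitute
here, nothing from the original note): PC = N/M, pattern bits over
pointer bits.

```python
# Rough sketch of the Platonic Complexity ratio PC = N/M, using a
# zlib-compressed description as a crude stand-in for the "address"
# of a pattern. This is an analogy only, not Pollack's construction.
import os
import zlib

def platonic_complexity(pattern: bytes) -> float:
    """Ratio N/M: bits of the pattern over bits of its 'address'."""
    n = 8 * len(pattern)
    m = 8 * len(zlib.compress(pattern))
    return n / m

# A highly regular pattern has a short address: PC > 1 (Postscript-like).
print(platonic_complexity(b"abc" * 10000) > 1.0)
# Incompressible data needs an address about as long as itself, plus
# overhead: PC at or below 1, heading toward the GIGO limit.
print(platonic_complexity(os.urandom(10000)) <= 1.0)
```

The two cases bracket the "density of 1" that bit-packed data structures
sit at in the message above.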


