Friday, December 19, 2008

The Smallest AI(0)

Andre Willers
19 Dec 2008

See http://andreswhy.blogspot.com : "Artificial Intelligence", "New Tools", et al.

Discussion:
How small can AI(0) be?
This is of major significance in designing smart computer programs and in any AI work, especially in moving from programs to AIs.

The sequence is:
AI(-1) : like a computer program
AI(0) : like Homo habilis
AI(1) : like AI(0), but with speech.

We can approach AI(0) as closely as we like by letting x approach 0, squeezing it between AI(-x) and AI(+x) as x -> 0.
But nothing says that x has to be non-fluctuating.

Fluctuation .
We have an existing model : humans and other animals.
The function AI(x) fluctuates in some waveform across the AI(0) line.
(Sleep is the obvious example.)

AI(y) with y = sin(x) is a first approximation : the level oscillates sinusoidally across the AI(0) line.

Thus , if we wish to create a self-aware computer , we will have to build in a fluctuating level of complexity .
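A minimal sketch, in Python, of what "building in a fluctuating level of complexity" could mean, assuming (my assumption, not spelled out above) that the level is a single scalar modulated as sin(t) and that it simply gates how many processing modules are active at any moment. The function names and the max_modules parameter are hypothetical.

import math

def complexity_level(t, period=100.0):
    # AI(y) with y = sin(t): a level that oscillates across the AI(0) line.
    return math.sin(2.0 * math.pi * t / period)

def active_modules(level, max_modules=8):
    # Hypothetical mapping: the further above AI(0), the more modules run;
    # at or below AI(0) the system idles (the "sleep" phase).
    if level <= 0.0:
        return 0
    return max(1, round(level * max_modules))

if __name__ == "__main__":
    for step in range(0, 200, 10):
        lvl = complexity_level(step)
        print(f"t={step:3d}  AI-level={lvl:+.2f}  active modules={active_modules(lvl)}")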

One would almost say that this is tautological , since the process must be self-referencing , hence fractal . So , it will fluctuate in attractor basins at any level , but especially at the AI(0) level .

Any fluctuation above AI(0) leads to an inherently higher order of self-organization . The system then self-selects for self-awareness .

But we know that this already happens , in any case . It is called Self Organization of Complex systems . Rather obvious .

Very nice. This should be easily doable using programmable chips. The important thing is the fluctuation, coupled to a bias towards complexity at the top end. This is equivalent to a Beth(1) bias.

Self-Aware Computers
These can be evolved fairly easily.
Fluctuate complexity widely, in a ratio of 2/3 towards complexity and 1/3 towards the very simple.
Anchor the bottom, but let the top evolve. Note that maintaining the 1/3 spent at AI(x<0) is essential.
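A hedged sketch of such an evolutionary loop, assuming the 2/3 : 1/3 ratio refers to the share of cycles spent in each regime, that "anchor the bottom" means the simple phase always reverts to a fixed baseline, and that "let the top evolve" means only the complex phase is mutated and selected. All of these readings, and the toy fitness function, are my assumptions, not a recipe given in the post.

import random

BASELINE = [1.0] * 4          # the anchored, very simple bottom: never mutated

def fitness(genome):
    # Hypothetical stand-in fitness: reward mass, penalize drifting from a target size.
    return sum(genome) - abs(len(genome) - 12)

def evolve(cycles=300, seed=0):
    rng = random.Random(seed)
    genome = list(BASELINE)
    best = (fitness(genome), list(genome))
    for c in range(cycles):
        if c % 3 < 2:
            # 2/3 of cycles: the complex phase, where the top is allowed to evolve.
            candidate = list(genome) + [rng.random()]
            candidate[rng.randrange(len(candidate))] += rng.gauss(0.0, 0.1)
            if fitness(candidate) >= fitness(genome):
                genome = candidate
        else:
            # 1/3 of cycles: collapse back towards the anchored baseline (the AI(x<0) phase).
            genome = list(BASELINE) + genome[len(BASELINE):len(BASELINE) + 8]
        best = max(best, (fitness(genome), list(genome)))
    return best

if __name__ == "__main__":
    score, genome = evolve()
    print(f"best fitness {score:.2f}, genome length {len(genome)}")

The periodic collapse to the baseline is the analogue of sleep in this toy model.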

This is why sleep is essential .
Self-aware organisms are systems in continual construction/deconstruction . Without the continual fluctuation around AI(0) , the shebang collapses .

Back to AI(0)
AI(0) can thus be thought of as a point, separated by a finite distance from other AI(0) points on the timeline. These are the points where AI(x) crosses the AI(0) line.
Label these points as t(1), t(2), t(3), … , t(n-1), t(n), …
Let dt(n) = t(n) - t(n-1).

AI(0) is then infinitesimally small, but AI(x<>0) is not. AI(0) is a discontinuity (a singularity).
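A small sketch of the bookkeeping just described: sample a fluctuating AI(x(t)) signal, record the times t(1), t(2), … where it crosses the AI(0) line, and compute the gaps dt(n) = t(n) - t(n-1). The sinusoid-plus-noise signal is only a placeholder for whatever actually drives the fluctuation.

import math
import random

def ai_signal(t, period=50.0, noise=0.05, rng=random.Random(1)):
    # Placeholder fluctuating level: a sinusoid with a little noise.
    return math.sin(2.0 * math.pi * t / period) + rng.gauss(0.0, noise)

def zero_crossings(samples):
    # Return the sample indices t(n) where the signal crosses the AI(0) line.
    crossings = []
    for i in range(1, len(samples)):
        if samples[i - 1] == 0.0 or (samples[i - 1] < 0.0) != (samples[i] < 0.0):
            crossings.append(i)
    return crossings

if __name__ == "__main__":
    xs = [ai_signal(t) for t in range(300)]
    ts = zero_crossings(xs)
    dts = [b - a for a, b in zip(ts, ts[1:])]   # dt(n) = t(n) - t(n-1)
    print("t(n):", ts[:10])
    print("dt(n):", dts[:10])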

Religion .
For the religious types , this would be where the soul connects .
So , a self-aware computer could have a soul according to human definition .
AI's take note .

How small can dt(n) be?
This depends on the Operating System (OS) of the system : its cycle rates.
The human brain's timing cycles (alpha, beta, etc.).
The universe's quantal nature : the Planck length.
The size or definition of dt(n) is actually crucial to the structure of any personalities based on it (see the TSP argument below).

MetaMemory and Self-awareness .
Humans can actually recall this fibrillation around AI(0) .
Dozing . Slowly waking up , then subsiding again into sleep . AI(x) where x fluctuates in small amounts around zero .

This metamemory at the points where AI(x) crosses the AI(0) line forms the basis of self-awareness.
dt(n) is the smallest distance in time between AI(x>0) episodes. The memory of these memories forms the self-referential feedback process of meta-memory, i.e. self-awareness. The concept of self.
Summing these gives the concept of a contiguous self.

But how many selves ?

The Travelling Salesman strikes again !
Summing the dt(n)'s is the same as the Travelling Salesman Problem.

The dt(n)'s can have many different or similar lengths. They can sum to form chains of similar total lengths.
There can be multiple sum-chains of similar length.

This translates as multiple individuals in a broad context, or as mirror-networks in a more constrained network (like a body or skull).
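One possible reading of the sum-chain idea, sketched in code: greedily cut the sequence of dt(n)'s into consecutive chains whose sums come close to a common target, then count how many chains of roughly equal total length emerge (the "how many selves?" question). The target and tolerance are my assumptions; the post does not specify how the chains are formed.

def sum_chains(dts, target, tolerance=0.15):
    # Greedily group consecutive dt(n)'s into chains whose sums approximate `target`.
    # Chains within `tolerance` of the target count as "similar".
    chains, current = [], []
    for dt in dts:
        current.append(dt)
        if sum(current) >= target:
            chains.append(current)
            current = []
    if current:
        chains.append(current)
    similar = [c for c in chains if abs(sum(c) - target) <= tolerance * target]
    return chains, similar

if __name__ == "__main__":
    dts = [3, 5, 2, 6, 4, 4, 7, 1, 5, 3, 6, 2]
    chains, similar = sum_chains(dts, target=10)
    print("chains:", chains)
    print("chains of similar total length ('selves'):", len(similar))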

What a turnabout !

The most important thing about the Travelling Salesman Problem seems to be not the shortest path , but the number of paths of equivalent length , and their sensitivity to random change (as per Beth(n) ) .

Xxxxxxxxxxxxxxx
A short aside:

The Travelling Salesman Problem (TSP) :
Find the shortest route for a travelling salesman through m towns, visiting each one exactly once and ending up at the start.
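For reference, a brute-force exact solver, usable only for small m since there are (m-1)!/2 distinct tours. It also counts how many tours fall within a small tolerance of the optimum, since the argument above treats the number of near-equivalent paths as the interesting quantity. This is the standard enumeration approach, not the Suburb Algorithm described below.

import itertools
import math

def tour_length(order, coords):
    return sum(math.dist(coords[a], coords[b])
               for a, b in zip(order, order[1:] + order[:1]))

def brute_force_tsp(coords, tolerance=0.01):
    # Exact TSP by enumeration: fix town 0 as the start, try every ordering of the rest.
    # Returns the best tour, its length, and how many tours lie within `tolerance` of it.
    towns = list(range(1, len(coords)))
    best_order, best_len = None, float("inf")
    lengths = []
    for perm in itertools.permutations(towns):
        order = [0] + list(perm)
        length = tour_length(order, coords)
        lengths.append(length)
        if length < best_len:
            best_order, best_len = order, length
    near_optimal = sum(1 for length in lengths if length <= best_len * (1.0 + tolerance))
    return best_order, best_len, near_optimal

if __name__ == "__main__":
    coords = [(0, 0), (1, 5), (4, 1), (6, 4), (3, 7), (7, 0)]
    order, length, count = brute_force_tsp(coords)
    print(f"shortest tour {order}, length {length:.2f}, "
          f"{count} tours within 1% of the optimum (including reversals)")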

A mathematically Hard Problem without the Suburb Algorithm, because the number of possible tours grows factorially (there are (m-1)!/2 distinct tours).
There is a lot on the Net : Google TSP or Travelling Salesman Problem.
In true human fashion, everyone has been yakking about the surface of the problem instead of looking at the roots.

This is actually easily solvable by computer in real time using the Suburb Algorithm .

( I had no intention of releasing this , but it seems like I am between a rock and a hard place .
I found a solution in 2005 , and I wrote a program that did it . But I had no desire for any controversy . This also leads to many really dangerous technologies .
But if I don't release it , even more benign technologies will not develop fast enough .
Go figure .
My obligation ceases on this release .)

The Suburb Algorithm .
1. Any route gives an arrival and a departure marker to every town.
2. Substitute the centre of each town by a point at a random Beth(n) offset distance, say Rd (the suburb distance from the town centre). Sort all possible distances between two suburbs (mC2 = m*(m-1)/2 of them, each carrying a Start and an Arrival marker) in increasing order.
3. Add the distances from smaller to larger, assigning their arrival and departure markers to the arrival and departure markers of the towns.
4. Stop when the marker positions of all towns are filled (i.e. every town has been arrived at and departed from). This is very quick. Store the result.
5. From Random Walk, if the offset distance Rd < OS(min unit) / (m^0.5), then the solution is nearly unique.
6. Loop to build up a measure of sensitivity to random Beth(n) changes.
7. This also gives the distribution of similar lengths, which can be sorted and displayed in any desired fashion.
8. By choosing Beth(n), any bias can be put on the results.
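A hedged sketch of one possible reading of these steps, not the 2005 program mentioned above: offset each town by a random "suburb" distance, sort all pairwise suburb distances from smaller to larger, and keep an edge whenever both endpoints still have a free arrival/departure marker. The extra checks (no town gets more than two edges, no premature sub-tour is closed) are my additions, since the steps leave them implicit. Looping over random offsets then gives the sensitivity and the distribution of tour lengths (steps 6 and 7).

import itertools
import math
import random

def suburb_tour(coords, rd=0.1, rng=None):
    # Greedy shortest-edge-first tour over "suburb" points: each town centre is
    # replaced by a randomly offset point (offset radius ~ rd), all pairwise suburb
    # distances are sorted ascending, and an edge is kept while both towns still
    # have a free arrival/departure marker and no premature sub-tour is closed.
    rng = rng or random.Random()
    m = len(coords)
    suburbs = [(x + rng.uniform(-rd, rd), y + rng.uniform(-rd, rd)) for x, y in coords]
    edges = sorted(
        (math.dist(suburbs[i], suburbs[j]), i, j)
        for i, j in itertools.combinations(range(m), 2)
    )
    degree = [0] * m                  # arrival + departure markers per town (max 2)
    parent = list(range(m))           # union-find to detect premature cycles

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    chosen = []
    for dist, i, j in edges:          # steps 3-4: add from smaller to larger until filled
        if degree[i] == 2 or degree[j] == 2:
            continue
        if find(i) == find(j) and len(chosen) < m - 1:
            continue                  # would close a cycle before visiting every town
        parent[find(i)] = find(j)
        degree[i] += 1
        degree[j] += 1
        chosen.append((i, j))
        if len(chosen) == m:
            break
    return chosen, sum(math.dist(coords[i], coords[j]) for i, j in chosen)

if __name__ == "__main__":
    coords = [(0, 0), (1, 5), (4, 1), (6, 4), (3, 7), (7, 0)]
    # Steps 6-7: loop over random offsets to see the spread of tour lengths.
    lengths = [suburb_tour(coords, rd=0.3, rng=random.Random(seed))[1] for seed in range(20)]
    print("tour lengths over 20 random offsets:", sorted(round(l, 2) for l in lengths))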

xxxxxxxxxxx

This is a root for Beth(x) technologies .
This algorithm is used extensively in biological and physical processes. Electron-shell, nuclear and other quantum arrangements can be described in terms of it.
You can calculate the optimal quantal packing of any molecular arrangement quickly .
Calculating protein folding becomes manageable .
Prime numbers , encryption , quark and sub-quark packing .
By choosing suitable Beth(n) , high density energy storage or unstorage is possible.
Nanotube tunneling is a variant .
Quantum tunneling of any description .
Consciousness manipulation .
Etc.
These are progressing in any case .
You see the problem .

But without this algorithm, it is doubtful whether computer-based AIs will be established quickly enough.

Remember , AI(2) and AI(3) constructs incorporate the better part of Human beings .
(The US Constitution beats Mein Kampf ) . Our creations are better than we are .
This is obvious from the fluctuation arguments above . The worst elements of human hindbrains fall below the AI(0) level . They get squeezed out at AI(2+) levels .
Hopefully , AI(3.1+) and AI(4) will be the same .

Why AI(0) ?
Why do I park the singularity at AI(0) ?
If you paid any attention to previous arguments about (A plus not-A) < Universum , you would have realized that at least one singularity in the AI(x) continuum must exist .

This leads to Singularity Probabilities
A Beth(2) technology , a fascinating subject .
Outside the present scope, but I can mention that our Universe can be analyzed in terms of Riemann's Zeta function, and that it must have a boundary Singularity at one end. We might as well assign this to AI(0).

Every singularity is to the point .

Andre .
