Sunday, December 08, 2013

StumbleUpon superior to Google

StumbleUpon as Genetic Algorithm  

Andre Willers
8 Dec 2013
Synopsis :
Very large databases can be searched more efficiently, by orders of magnitude, by combining a "like"/"dislike" evolutionary-selection feature with some random selection. A nifty algorithm, and very useful on the internet. Better than Google if you don't know what you are looking for.
2. Briefly:
Large information spaces need to be searched, some known, some unknown.
Genetic algorithms can search these orders of magnitude faster than exhaustive search.
Of course, more tailored versions are available.
StumbleUpon is one of them.
It combines random selection (probably weighted) with genetic-algorithm-style selection to search very large or unknown information spaces.
3. If you know what you are looking for, use Google.
4. If you don't know what you are looking for, use StumbleUpon. But be sure to indicate "likes" or "dislikes", as this increases the efficiency of the process by orders of magnitude. Literally.
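To make the "like"/"dislike" point concrete, here is a toy sketch of feedback-weighted stumbling. This is my own illustrative assumption, not StumbleUpon's actual algorithm: the class name, the page/topic data, and the doubling/halving of weights are all invented for the example.

```python
import random

class FeedbackSampler:
    """Toy StumbleUpon-style sampler (hypothetical sketch): pages are drawn
    at random, but 'like'/'dislike' feedback on a page's topic re-weights
    future draws, acting as selection pressure."""

    def __init__(self, pages):
        # pages: dict mapping page name -> topic
        self.pages = pages
        self.weights = {topic: 1.0 for topic in set(pages.values())}

    def stumble(self, rng):
        names = list(self.pages)
        w = [self.weights[self.pages[n]] for n in names]
        return rng.choices(names, weights=w, k=1)[0]

    def like(self, page):
        self.weights[self.pages[page]] *= 2.0   # selection pressure up

    def dislike(self, page):
        self.weights[self.pages[page]] *= 0.5   # selection pressure down

rng = random.Random(42)
s = FeedbackSampler({"cat_pics": "cats", "cat_vids": "cats",
                     "tax_law": "law", "case_law": "law"})
for _ in range(5):
    s.like("cat_pics")       # repeated likes shift the distribution
    s.dislike("tax_law")
hits = sum(s.stumble(rng) in ("cat_pics", "cat_vids") for _ in range(1000))
print(hits)   # the large majority of stumbles now land on cat pages
```

Without any feedback every page is equally likely; after a few likes and dislikes the random exploration is strongly biased toward the liked topic, which is the efficiency gain described above.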
5. How it works:
See Appendix A.
Your sorry carcass is the result of such a process .
Try to do better when you design one .
6. Don't get too cocky.
Large info spaces are always complex.
Top-down causation plays a role (what I call Beth(x) systems; the dear reader should recognise these by now).
See Appendix B for top-down causation. It is simply another name for complex systems having behaviours not explicable by reference to simple physics. If you do not know how this works, ask your wife how fat she is and you will get a pretty graphic demonstration.
Socrates would have loved StumbleUpon  .
Have Fun !

Appendix A
Genetic algorithms are one of the best ways to solve a problem for which little is known. They are a very general algorithm and so will work well in any search space. All you need to know is what you need the solution to be able to do well, and a genetic algorithm will be able to create a high quality solution. Genetic algorithms use the principles of selection and evolution to produce several solutions to a given problem.

Genetic algorithms tend to thrive in an environment in which there is a very large set of candidate solutions and in which the search space is uneven and has many hills and valleys. True, genetic algorithms will do well in any environment, but they will be greatly outclassed by more situation specific algorithms in the simpler search spaces. Therefore you must keep in mind that genetic algorithms are not always the best choice. Sometimes they can take quite a while to run and are therefore not always feasible for real time use. They are, however, one of the most powerful methods with which to (relatively) quickly create high quality solutions to a problem. Now, before we start, I'm going to provide you with some key terms so that this article makes sense.

Individual - Any possible solution
Population - Group of all individuals
Search Space - All possible solutions to the problem
Chromosome - Blueprint for an individual
Trait - Possible aspect of an individual
Allele - Possible settings for a trait
Locus - The position of a gene on the chromosome
Genome - Collection of all chromosomes
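A minimal sketch tying these terms together (my own toy example, not from the article above): chromosomes are bit strings, fitness counts 1-bits (the classic "OneMax" problem), and selection, crossover, and mutation evolve the population. Population size, mutation rate, and generation count are arbitrary choices.

```python
import random

# Minimal genetic algorithm: individuals are bit-string chromosomes,
# fitness is the number of 1-bits, and selection + crossover + mutation
# evolve the population toward a high-quality solution.
rng = random.Random(0)
BITS, POP, GENS = 20, 40, 60

def fitness(ind):
    return sum(ind)                     # how well this individual "does"

def crossover(a, b):
    cut = rng.randrange(1, BITS)        # single-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.02):
    return [bit ^ (rng.random() < rate) for bit in ind]

population = [[rng.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
initial_best = max(map(fitness, population))

for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    elite = population[:POP // 4]       # "like": only the fittest breed
    children = [mutate(crossover(rng.choice(elite), rng.choice(elite)))
                for _ in range(POP - len(elite))]
    population = elite + children       # elitism: best never gets worse

final_best = max(map(fitness, population))
print(initial_best, final_best)
```

Because the elite are carried over unchanged, the best fitness is monotone non-decreasing, which is the "high quality solution" guarantee the article describes.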
Appendix B
Recognising Top-Down Causation
George F. R. Ellis, arXiv preprint abstract (submitted 9 Dec 2012)
One of the basic assumptions implicit in the way physics is usually done is that all causation flows in a bottom up fashion, from micro to macro scales. However this is wrong in many cases in biology, and in particular in the way the brain functions. Here I make the case that it is also wrong in the case of digital computers - the paradigm of mechanistic algorithmic causation - and in many cases in physics, ranging from the origin of the arrow of time to the process of state vector preparation. I consider some examples from classical physics, as well as the case of digital computers, and then explain why this is possible without contradicting the causal powers of the underlying microphysics. Understanding the emergence of genuine complexity out of the underlying physics depends on recognising this kind of causation.
Appendix C

Friday, August 15, 2008
Orders of Randomness 2
Andre Willers
15 Aug 2008

See : “Orders of Randomness”

I have been asked to expand a little on orders of randomness and what they mean.
Please note that human endeavours to date use only randomness of the order of flipping a coin (Beth(0)).

Aleph is the first letter of the Hebrew alphabet. Cantor used it to denote classes of infinity (Aleph(0) for the countable sets, such as the rational numbers; the irrational numbers form the strictly larger continuum, which equals Aleph(1) if the Continuum Hypothesis holds; etc.)

Beth is the second letter of the Hebrew alphabet. It means "House".

I will first repeat the derivation of Orders of Randomness from : “Orders of Randomness” because it is so important .

Start Quote:
First , simple Randomness .
Flip of a coin .
Heads or Tails . 0 or 1
Flip an unbiased coin an infinite number of times and write the results down in a row; then do it again, writing each new row below the previous one.
This is supposed to list all possible strings of 0's and 1's.

An example : Beth(0)
Flips(1) 0,1,1,1,1,… etc
Flips(2) 0,1,1,1,0,… etc
Flips(infinity) 0,0,0,0,0,0,…etc

This describes all possible states in a delineated binary universe.
"Delineated binary" means a two-sided coin which cannot land on its side.

Now draw a diagonal line from the top left of Flips(1) down to Flips(infinity).
At every intersection of this diagonal line with a horizontal line, change the value.
The resulting diagonal line of (0,1)'s is then not in the collection of all possible random horizontal coin-Flips(x), since it differs from every Flips(x) in position x.

This means the diagonal line is of a stronger order of randomness.
This is Cantor's diagonal argument, the standard proof that the real numbers (and hence the irrational numbers) are uncountable.

The same diagonal construction underlies the aleph numbers.
Since any number can be written in binary (0,1), we can infer that orders of randomness correspond to the aleph numbers.

This means we can use number theory in Randomness systems .
Very important .

Google Cantor (or Kantor)

Define coin-flip Randomness as Beth(0) , analogous to Aleph(0)
Then we have at least Beth(1) , randomness an order stronger than flipping a coin .
Then we can theorize Beth(Omega) <->Aleph(Omega) .

End Quote
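The quoted diagonal construction can be checked numerically on a finite truncation (a sketch; the list size and seed are arbitrary): flip the value at each diagonal crossing and the resulting row is guaranteed to be absent from the list.

```python
import random

# Finite illustration of the diagonal argument quoted above: take a list
# of coin-flip rows, change the value at every intersection with the
# diagonal, and the resulting row differs from every row in the list.
rng = random.Random(1)
N = 100
flips = [[rng.randint(0, 1) for _ in range(N)] for _ in range(N)]

diagonal = [1 - flips[k][k] for k in range(N)]   # flip each crossing value

# The diagonal row disagrees with row k at position k, for every k,
# so it cannot appear anywhere in the collection.
assert all(diagonal != row for row in flips)
print("diagonal row is not in the list")
```

No matter how the rows were generated, the construction succeeds, which is exactly why the diagonal sequence is of a stronger order than the listed ones.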

Cardinal Numbers.

The cardinal number is the index x of Aleph(x).
Cantor proved that
2 ^ Aleph(n) > Aleph(n)
The identification Aleph(n+1) = 2 ^ Aleph(n) is the Generalized Continuum Hypothesis; here n is the cardinal index of the infinity.

Tying them together:
He also proved that
|P(A)| = 2 ^ n
where A is any set, P(A) is the power set of A, and n is the cardinal number of set A.
Thus the cardinal index of P(A) is (n+1) (for infinite sets, granting the Generalized Continuum Hypothesis).

The power set of A is the set of all subsets of A.
This sounds fancy, but it is simply all the different ways you can combine the elements of set A. All the ways you can chop up A.
You can see it easily in the finite binomial expansion (1+1)^n = sum of C(n,k) = 2^n = |P(A)|.
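The power-set count is easy to verify directly for a small finite set (a sketch; the three-element set is arbitrary):

```python
from itertools import chain, combinations

# Enumerate the power set P(A) directly: all the ways to "chop up" A.
# Its size matches 2^n, i.e. the binomial expansion (1+1)^n = sum of C(n,k).
def powerset(items):
    s = list(items)
    return list(chain.from_iterable(
        combinations(s, k) for k in range(len(s) + 1)))

A = {"a", "b", "c"}
P = powerset(A)
print(len(P))   # 2^3 = 8 subsets, from the empty set up to A itself
```

Each element is either in a given subset or not (the two states of the Delineation Axiom below), hence the factor of 2 per element.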

See : “Infinite Probes”
There we also chop and dice , using infinite series .

Can you see how it all ties together ?

Why 2 ?

This derives from the Delineation Axiom. Remember, we can only talk about something if it is distinct and identifiable from something else. This gives a minimum of 2 states: part or non-part.

That is why the Zeta function below is described on a 2-dimensional plane, and why pesky problems like the primes always boil down to 2 dimensions of some sort.

This is why the complex numbers play such an important part in physics.
z = a+ib describes a 2-dimensional plane, useful for delineated systems without feedback.

It's in the Axiom of Delineation, dummy.

But we know that Russell showed that A + ~A is smaller than the Universum.
The difference can be described by the Beth sequences. Since they are derivatives of summation sequences (see below), they define arrows, usually seen as time-arrows.

These need not be described à la Dunne's serial time, as different Beth levels address the problem adequately without multiplying hypotheses.

Self-referencing systems and Beth sequences .

A proper self-referencing system is one cardinal Beth number higher than the system it derives from.
Self-referencing systems (feedback systems) can always be described as sequences of Beth systems, i.e. as Beth(x) <-> Beth(y). The formal proof is a bit long for inclusion here.

The easiest way to see it is in Bayesian systems . If Beth(x) systems are included , Bayesian systems become orders of magnitude more effective .

Life , civilization and markets are such . See below .

Conservation Laws:
By definition, these can always be written in the form
SomeExpression = 0

Random (Beth(0)) Walk in Euclidean 2 dimensions

This is a powerful unifying principle derived from the Delineation Axiom .

In a random walk the root-mean-square distance from the centre is d * (n)^0.5. This is a property of Euclidean systems.
(Where d = step length, n = number of random Beth(0) steps.)

Immediately we can say that the only hope of the walker returning to the centre after an infinity of Beth(0) steps is if d ~ 1/(n)^0.5. This is the link to the Riemann Hypothesis.
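The n^0.5 law can be checked by Monte Carlo simulation (a sketch; the step and trial counts are arbitrary choices):

```python
import math
import random

# Monte Carlo check of the random-walk law above: after n unit (d = 1)
# Beth(0) steps in 2 dimensions, the root-mean-square distance from the
# centre is close to d * n^0.5.
rng = random.Random(7)
n_steps, n_walks = 1000, 1000

sq_dists = []
for _ in range(n_walks):
    x = y = 0.0
    for _ in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)   # random direction
        x += math.cos(theta)
        y += math.sin(theta)
    sq_dists.append(x * x + y * y)

rms = math.sqrt(sum(sq_dists) / n_walks)
print(rms, math.sqrt(n_steps))   # both values are close to 31.6
```

The mean squared displacement of such a walk is exactly n step-lengths squared, so the simulated RMS distance lands within a few percent of sqrt(n).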

Now , see a Universum of 2-dimensional descriptors z=a+ib

Sum all of them . Add together all the possible things that can be thus described .

This can be done as follows:
Start from z = a+ib and exponentiate:
e^z = e^a . e^(ib)
Now change the base from e to a real integer j (this multiplies the exponent by ln(j)):
j^z = j^a . e^(ib ln(j))

Now, sum them:
Zeta = Sum of j^z for j = 1 to infinity

Now we extract all possible statements that embody some Conservation Law. Beth(1)

This means that Zeta is zero for the set of extracted statements if and only if (b ln(j)) is of the order of Beth(0) and a = (-1/2)

Tensors .
The above is a definition of a tensor for a discontinuous function.

Riemann’s Zeta function.
This can describe any delineated system .
If Zeta = 0 , conservation laws apply .

Zeta = Sum of j^z for j = 1,2,3,…,infinity and z = a+ib, where z is complex and i = (-1)^0.5. (In the standard notation zeta(s) = Sum of 1/j^s this is the Riemann zeta function at s = -z, so the usual critical line Re(s) = 1/2 becomes a = -1/2 here.)
The z bit is in two dimensions, as discussed above.

This function has a deep underlying meaning for infinite systems.
If you unpack the right-hand side on an (x, yi) plane, you get a graph that looks like a random walk.

If every point that a random walk would visit over infinity is visited (i.e. all of them), without clumping, then Zeta can only be non-trivially zero if a = (-1/2).

Why an (x, yi) plane? See "Why 2?" above. The system is fractal. Two dimensions are necessary in any delineated system.

Remember, random-walk distance from origin = step * sqrt(number of steps).
So if the step lengths behave like 1/sqrt(number of steps), i.e. a = -1/2, then the origin might be reached.
This is the heuristic behind the claim, not a formal proof.
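The critical-line claim can be probed numerically in the standard notation zeta(s) = Sum of 1/j^s (so the a = -1/2 above corresponds to the usual Re(s) = 1/2). The Dirichlet series itself diverges on the critical line, so this sketch uses the alternating eta series, which does converge there; the choice of test points and term count is mine.

```python
# Numerical check that zeta vanishes on the critical line Re(s) = 1/2.
# zeta(s) = eta(s) / (1 - 2^(1-s)), where eta(s) = sum (-1)^(j+1) / j^s
# converges for Re(s) > 0, unlike the plain Dirichlet series.
def zeta(s, terms=200000):
    eta = sum((-1) ** (j + 1) / j ** s for j in range(1, terms + 1))
    return eta / (1.0 - 2.0 ** (1.0 - s))

zero = complex(0.5, 14.134725)   # first nontrivial zero of zeta
other = complex(0.5, 10.0)       # a nearby point that is not a zero

print(abs(zeta(zero)), abs(zeta(other)))   # near 0, and clearly non-zero
```

The partial sums trace out exactly the random-walk-like path in the complex plane described above, with the j-th step of magnitude 1/sqrt(j) on the critical line.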

If a = -1/2, then b can be any real function. This would include Beth(0) and Beth(1), but not higher orders of Beth.

If b is not a real number, then the sum cannot balance at a = -1/2 at all: conservation cannot hold at any level.

Conservation Laws can only hold for Beth(0) and Beth(1) systems .

This is forced by the two dimensions of delineation .

Mathematically, this means that Beth(2+) systems of feedbacks can only be described in terms of attractors and/or fractal systems (i.e. not in isolation).

Physically , conservation of energy and momentum need not hold for Beth(2+) systems .

This has an interesting corollary in decryption (unpacking) . A Beth(2) mind unpacking Beth(0) or Beth(1) encryption is functionally equivalent to Non-Conservation of Energy .

Some other consequences :
If a < -1/2, then Riemannian orbitals are described. Beth(any)
Also described as nuclei, atoms.

If a > -1/2, then a diffuse cloud is described. Beth(any)
Also described as magnetic effects.

What does this mean?
Present technology uses Beth(x) effects in a rather haphazard way (quantum physics).

A better understanding will bring about a sudden change in capability .
