Finding Aliens
Andre Willers
10 Dec 2013
Synopsis :
Google and Amazon have sufficient density of information for statistical machine learning to find aliens, time travellers, dimensional hoppers, simulationistas and terrorists trying to masquerade as humans.
Discussion :
1. See Appendix A for a summary of typical algorithms.
2. What we are looking for is fat tails (http://en.wikipedia.org/wiki/Fat-tailed_distribution) or Dragon-Kings (http://www.wired.com/wiredscience/2013/10/chaos-theory-dragon-kings/).
In other words, the low-probability but highly likely correlation pieces.
The bits that don't fit.
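The post never says how one would actually test for a fat tail in practice. Here is a minimal, hypothetical sketch (Python with numpy, my own choice of tooling): a Hill estimator of the tail index, which comes out small for fat-tailed data and large for thin-tailed data. The sample data and the cutoff k are illustrative assumptions, not anything from the post.

import numpy as np

def hill_tail_index(samples, k=200):
    """Hill estimator of the tail index from the k largest |samples|.
    Small values (roughly < 2) indicate a fat tail; Gaussian data gives
    a much larger value."""
    x = np.sort(np.abs(samples))[::-1]       # order statistics, largest first
    top, threshold = x[:k], x[k]
    return k / np.log(top / threshold).sum()

rng = np.random.default_rng(0)
thin = rng.normal(size=100_000)              # coin-flip-like, thin-tailed
fat = rng.standard_cauchy(size=100_000)      # fat-tailed, tail index ~ 1

print("thin-tailed data :", hill_tail_index(thin))
print("fat-tailed data  :", hill_tail_index(fat))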
3. Probable and Likely.
I use these in the following way:
"Probable" indicates a distribution as per Appendix B, "Orders of Randomness".
"Likely" is used in a meta-sense to indicate relationships between systems of different Beth(x) orders.
4. Example: an event may be low-probability (even with a fat tail) for a Beth(0) system, but very likely if mixed up with a Beth(1) system.
5. See Appendix C (http://andreswhy.blogspot.com/2013/11/dragon-kings.html) for examples.
6. We know what to do about terrorists. But what about the aliens, time travellers, dimensional hoppers, simulationistas?
Real Boojums, that can make us silently vanish away.
7. See Zevatron Guns . http://andreswhy.blogspot.com/2013/12/zevatrons-and-packing-curve.html
8. Funnily enough, there is something we can do even inside a simulation.
9. A Zevatron Gun (http://andreswhy.blogspot.com/2013/11/zevatron-gun.html) will kill or "discorporate" (ie remove) any or all of the above.
See Appendix D.
10. Why allow this knowledge if it is a simulation?
As previously discussed, this locale is primarily a kindergarten locale, then holiday, then warfare.
11. A simulationist close to a Zevatron being fired will experience a severe shock. As the "sub-spacetime" grid jerks finer, all sorts of feedback effects are initiated. Extremely unpleasant. You won't kill them, but they will feel like one sick puppy.
Think a three-day boozer hangover without even having the pleasure of being drunk.
12. Think of a simulationist being in an even higher simulation (Beth level). So everyone is learning a lesson.
School used to be more fun when it was not so interactive.
Now you have to repeat class until you meet passing standards.
Even if it takes infinities.
“It is hard being a simulation
But with my Zevatron gun
With ideas above my station
What dreary fun “
Andre
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Appendix A
Typical Algorithms
Machine learning algorithms can be organized into a taxonomy based on the desired outcome of the algorithm or the type of input available during training.
· Supervised learning algorithms are trained on labelled examples, i.e., input where the desired output is known. The supervised learning algorithm attempts to generalise a function or mapping from inputs to outputs which can then be used to speculatively generate an output for previously unseen inputs.
· Unsupervised learning algorithms operate on unlabelled examples, i.e., input where the desired output is unknown. Here the objective is to discover structure in the data (e.g. through a cluster analysis), not to generalise a mapping from inputs to outputs.
· Semi-supervised learning combines both labelled and unlabelled examples to generate an appropriate function or classifier.
· Transduction, or transductive inference, tries to predict new outputs on specific and fixed (test) cases from observed, specific (training) cases.
· Reinforcement learning is concerned with how intelligent agents ought to act in an environment to maximise some notion of reward. The agent executes actions which cause the observable state of the environment to change. Through a sequence of actions, the agent attempts to gather knowledge about how the environment responds to its actions, and attempts to synthesise a sequence of actions that maximises a cumulative reward.
· Developmental learning, elaborated for robot learning, generates its own sequences (also called curriculum) of learning situations to cumulatively acquire repertoires of novel skills through autonomous self-exploration and social interaction with human teachers, and using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.
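None of the above names concrete tooling. The following is a minimal sketch contrasting the first two families on toy data, assuming Python with scikit-learn purely as an illustration; none of these choices come from the original text.

import numpy as np
from sklearn.linear_model import LogisticRegression   # supervised: labels known
from sklearn.cluster import KMeans                     # unsupervised: no labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)   # labels exist only for the supervised case

# Supervised: learn a mapping from inputs to known outputs, then generalise.
clf = LogisticRegression().fit(X, y)
print("predicted label for a new point:", clf.predict([[3.5, 4.2]]))

# Unsupervised: discover structure (here, two clusters) with no labels at all.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print("cluster sizes:", np.bincount(km.labels_))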
Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Appendix B
Orders of Randomness 2
Andre Willers
15 Aug 2008
See http://andreswhy.blogspot.com : “Orders of Randomness”
I have been requested to expand a little on orders of Randomness and what it means .
Please note that human endeavours at this date use only randomness of the order of flipping a coin ( Beth(0) )
Aleph is the first letter of the Hebrew Alphabet . It was used by Cantor to denote Classes of Infinity (ie Aleph(0) for Rational numbers , Aleph(1) for Irrational Numbers , etc) .
Beth is the second letter of the Hebrew Alphabet . It means “House” .
I will first repeat the derivation of Orders of Randomness from http://andreswhy.blogspot.com : “Orders of Randomness” because it is so important .
----------------xxxxxx
Start Quote:
First , simple Randomness .
Flip of a coin .
Heads or Tails . 0 or 1
Flip an unbiased coin an infinite number of times ,write it down below each other and do it again .
All possible 0 and 1’s
An example : Beth(0)
xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Flips(1) 0,1,1,1,1,… etc
Flips(2) 0,1,1,1,0,… etc
.
Flips(infinity) 0,0,0,0,0,0,…etc
Xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
This describes all possible states in a delineated binary universe .
“Delineated binary” means a two-sided coin which cannot land on its side .
Now draw a diagonal line from the top left of Flips(1) to Flips(infinity) .
At every intersection of this diagonal line with a horizontal line , change the value .
The Diagonal Line of (0,1)’s is then not in the collection of all possible random
Horizontal coin-Flips(x) .
This means the Diagonal Line is of a stronger order of randomness .
This is also the standard proof of an Irrational Number .
This is the standard proof of aleph numbers .
Irrational numbers ,etc
Since any number can be written in binary (0,1) , we can infer that the order of randomness is the same as aleph numbers .
This means we can use number theory in Randomness systems .
Very important .
Google Cantor (or Kantor)
Define coin-flip Randomness as Beth(0) , analogous to Aleph(0)
Then we have at least Beth(1) , randomness an order stronger than flipping a coin .
Then we can theorize Beth(Omega) <-> Aleph(Omega) .
End Quote
----------------xxxxxx
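A finite truncation of the quoted diagonal construction can be checked directly. This small sketch (Python, illustrative only; the rows are arbitrary examples) flips the n-th bit of the n-th row and confirms the result differs from every listed row.

rows = [
    [0, 1, 1, 1, 1],
    [0, 1, 1, 1, 0],
    [1, 0, 0, 1, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
]
# Flip the value at every intersection with the diagonal, as in the quote.
diagonal = [1 - rows[i][i] for i in range(len(rows))]
assert all(diagonal != row for row in rows)   # differs from every listed row
print("diagonal sequence not in the list:", diagonal)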
Cardinal Numbers .
The cardinal number is the index x of Aleph(x) .
Cantor proved that
Aleph(n+1) = 2 ^ Aleph( n )
Where n is the cardinal number of the infinity .
Tying them together :
He also proved that
P(A) = 2^ n
Where A is any set , P(A) is the PowerSet of A and n is the cardinal number of set A
Thus , Cardinal Number of P(A) =(n+1)
The PowerSet of A = the Set of all subsets of A .
This sounds fancy , but it is simply all the different ways you can combine the elements of set A . All the ways you can chop up A .
You can see it easily in a finite binomial expansion (1+1)^n = P(A) = 2^n
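The identity |P(A)| = 2^n and its binomial reading (1+1)^n can be verified mechanically for a small set. A quick sketch (Python, illustrative only):

from itertools import combinations
from math import comb

A = {"a", "b", "c", "d"}
n = len(A)
# All the ways you can "chop up" A: subsets of every size k.
subsets = [s for k in range(n + 1) for s in combinations(A, k)]
assert len(subsets) == 2**n == sum(comb(n, k) for k in range(n + 1))
print(len(subsets), "subsets of a", n, "element set")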
See http://andreswhy.blogspot.com : “Infinite Probes”
There we also chop and dice , using infinite series .
Can you see how it all ties together ?
Why 2 ?
This derives from the Delineation Axiom . Remember , we can only talk about something if it is distinct and identifiable from something else . This gives a minimum of 2 states : part or non-part .
That is why the Zeta-function below is described on a 2-dimensional plane , or pesky problems like Primes always boil down to 2 dimensions of some sort .
This is why the irrational numbers play such an important part in physics .
Z=a+ib describes a 2-dimensional plane useful for delineated systems without feedback systems
It’s in the axiom of Delineation , dummy .
But we know that Russell proved that A + ~A is smaller than the Universum .
The difference can be described as the Beth sequences . Since they are derivatives of summation-sequences (see below) , they define arrows usually seen as the time-arrows .
These need not be described à la Dunne’s serial time , as different Beth levels address the problem adequately without multiplying hypotheses .
Self-referencing systems and Beth sequences .
A Proper Self-referencing system is of one cardinal Beth number higher than the system it derives from .
Self-referencing systems (feedback systems) can always be described as sequences of Beth systems . Ie as Beth(x) <-> Beth(y) . The formal proof is a bit long for inclusion here .
The easiest way to see it is in Bayesian systems . If Beth(x) systems are included , Bayesian systems become orders of magnitude more effective .
Life , civilization and markets are such . See below .
Conservation Laws :
By definition , these can always be written in a form of
SomeExpression = 0
Random (Beth(0)) Walk in Euclidean 2-dimensions
This is a powerful unifying principle derived from the Delineation Axiom .
In Random Walk the Distance from the Center is = d * (n)^0.5 . This is a property of Euclidean systems .
(Where d = step , n=number of random beth(0) steps)
Immediately we can say that the only hope of the Walker returning to the center after an infinity of Beth(0) steps is if d ~ 1/(n)^0.5 . This is the Riemann Hypothesis .
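The d * sqrt(n) scaling quoted above is easy to check by simulation. A small Monte Carlo sketch (Python with numpy; step count, walk count and seed are arbitrary choices):

import numpy as np

rng = np.random.default_rng(2)
n_steps, n_walks, d = 2_500, 1_000, 1.0
# Each step: unit length, uniformly random direction in the plane.
angles = rng.uniform(0.0, 2.0 * np.pi, size=(n_walks, n_steps))
steps = d * np.stack([np.cos(angles), np.sin(angles)], axis=-1)
endpoints = steps.sum(axis=1)
rms = np.sqrt((endpoints**2).sum(axis=1).mean())
print("measured RMS distance :", rms)
print("predicted d*sqrt(n)   :", d * np.sqrt(n_steps))   # = 50.0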
Now , see a Universum of 2-dimensional descriptors z=a+ib
Sum all of them . Add together all the possible things that can be thus described .
This can be done as follows :
From z = a+ib , raise e to the power of both sides :
e^(z) = e^(a) . e^(ib)
Raise both sides to the ln(j) power where j is real integers.
j^(z) = j^(a) . e^(b/ln(j))
Now , sum them :
Zeta=Sum of j^(z) for j=1 to infinity
Now we extract all possible statements that embody some Conservation Law . Beth(1)
This means that Zeta is zero for the set of extracted statements if and only if (b/ln(j)) is of the order of Beth(0) and a=(-1/2)
Tensors .
The above is a definition of a tensor for a discontinuous function .
Riemann’s Zeta function.
This can describe any delineated system .
If Zeta = 0 , conservation laws apply .
Zeta = Sigma(1/j )^z for j=1,2,3,…,infinity and z=a+ib , where z is complex and i =(-1)^0.5
The z bit is in two dimensions as discussed above .
This function has a deep underlying meaning for infinite systems .
If you unpack the Right-Hand side on a x-yi plane you get a graph that looks like a random walk .
If every point is visited that a random walk would visit over infinity (ie all) , without clumping , then Zeta can only be non-trivially zero if a=(-1/2) .
Why (x – yi) plane ? See “Why 2 “ above . The system is fractal . Two dimensions are necessary in any delineated system .
Remember , randomwalk distance from origin = step*sqrt(number of steps) .
So if the steps = 1/ ( sqrt(number of steps) ) , then the Origin might be reached if and only if a= -1/2
This is easily proven .
If a= - 1/2 , then b can be any real function . This would include Beth(0) and Beth(1) , but not higher orders of beth .
If a= -1/2 and b is an unreal number , then a cannot be equal to -1/2 anymore . Conservation cannot hold at any level .
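For concreteness: in the author's sign convention, a = -1/2 corresponds to the usual critical line Re(s) = 1/2 of Riemann's zeta. A two-line numerical check (Python with mpmath, an assumed library choice; it evaluates zeta by analytic continuation rather than the divergent raw sum):

from mpmath import mpc, zeta, zetazero

print("zeta(1/2 + 14i)       =", zeta(mpc(0.5, 14)))
print("first nontrivial zero =", zetazero(1))   # about 0.5 + 14.1347i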
Consequences:
Conservation Laws can only hold for Beth(0) and Beth(1) systems .
This is forced by the two dimensions of delineation .
Mathematically , this means that Beth(2+) systems of feedbacks can only be described in terms of attractors or/and fractal systems (ie not in isolation)
Physically , conservation of energy and momentum need not hold for Beth(2+) systems .
This has an interesting corollary in decryption (unpacking) . A Beth(2) mind unpacking Beth(0) or Beth(1) encryption is functionally equivalent to Non-Conservation of Energy .
Some other consequences :
If a< -½ , then Riemannian Orbitals are described . Beth(any)
Also described as nuclei , atoms .
If a> -½ , then a diffuse cloud is described . Beth(any)
Also described as magnetic effects .
What does this mean?
Present technology uses Beth(x) technology in a rather haphazard way (quantum physics) .
A better understanding will bring about a sudden change in capability .
Andre
Xxxxxxxxxxxxxxxxxxxxxxxxxxxx
Appendix C
Dragon-Kings
Andre Willers
28 Nov 2013
Synopsis:
Extreme events lying outside power-laws, called Dragon-Kings, might be predictable and preventable.
Discussion :
1. This is not intuitively obvious. Black-Swan events fall within power laws, but Dragon-Kings do not.
We cannot easily predict Black Swans, but it seems that Dragon-Kings paradoxically are easier to predict and modify.
2. Power Laws (http://en.wikipedia.org/wiki/Power_law) are found as a result of feedback processes of a certain complexity.
See Appendix II. These orders of Complexity or Randomness can be indicated as Beth(0) = random as a coin flip, then Beth(1), Beth(2), etc.
3. Systems in the same Beth(x) level follow Power Laws. But if a system includes Beth(y) levels, where x<>y, then Dragon-King events can occur.
4. Hence the description as in Appendix I.
Note the term "master-slave" in the coupled circuits. This indicates different Beth levels.
5. What does this mean?
Self-aware intelligences (like humans) have varying Beth levels. Thus, any system involving them is prone to dragon-king events. In other words, it is unstable, with sudden, huge saltations. See History.
Dragon-kings might be easier to predict than Black Swans, but I doubt whether modifying them in really complex systems will be easy.
From Appendix I:
"The pair went on to show that they could reliably forecast when a big event was about to happen: whenever the differences between the circuits' oscillations decreased to a certain value, a leap of dragon-king proportions was almost always imminent."
Not surprisingly, social media like Facebook, Twitter, etc., plus economic globalization are decreasing oscillations between various countries. The "Arab Spring" can thus be thought of as a Dragon-king event.
Expect more at increasing frequencies. It is a positive feedback process.
6. The Singularity: The Dragon-Emperor.
In the run-up to the Singularity, Dragon-king events become more frequent until they go asymptotic: the Singularity.
A better estimate of the human Singularity should be possible using the techniques described in Appendix I.
Still at 2028, but looking at the variation of the Gravitational constant (see http://andreswhy.blogspot.com/2013/11/zevatron-gun.html repeated in Appendix III), Dragon-Emperor events might be occurring as we speak.
Happy Singularity !
Andre
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Appendix I
Slaying dragon-kings could prevent financial crashes
HUGO CAVALCANTE saw the disaster coming. From
his lab at the Federal University of Paraíba in Brazil, he detected the warning
signs of an epic crash. At the last minute, he managed to nudge his system back
to safety. Crisis averted.
OK, so Cavalcante's impending crisis was only a
pair of credit-card-sized circuits that were about to start oscillating out of
sync – hardly the stuff of the evening news. But the experiment is the first to
show that a class of extreme events, colourfully called dragon-kings, can be
predicted and suppressed in a real, physical system. The feat suggests that
some day we may also be able to predict, or in some cases prevent, some of the
catastrophes in the real world that seem unstoppable, including financial
crashes, brain seizures and storms.
"People were hoping
if you could forecast extreme events, maybe we could find a way to control
them," says Cavalcante's colleague Daniel Gauthier at
Duke University in Durham, North Carolina. "We were able to completely
suppress the dragon-king events."
Dragon-kings aren't the
first animal used to describe a class of catastrophic events. In 2007, Nassim Taleb published
a book called The Black Swan,
his name for catastrophes that always catch us off-guard. But though difficult
to predict, black swans actually fall within an accepted mathematical
distribution known as a power law, which says
there will be exponentially more small events than large
ones (see diagram).
Most events or objects
found in a complex system – including earthquakes, hurricanes,
moon craters, even power imbalances in war –
also obey a power law, a ubiquity that some say hints at a deeper organising principle
at work in the universe. Others, like Taleb, focus on the fact that
a power law can't predict when black swans will occur.
Now there's another
beast to reckon with. In 2009, Didier Sornette at the Swiss Federal Institute
of Technology in Zurich reported that some
events lift their heads above the power law's parapet, the way a king's power
and wealth vastly outstrip that of the more plentiful peasant. So big that they
should be rare, these events have a greater probability of occurring than a
power law would mandate.
"There seem to be certain extremes that
happen much more often than they should if you just believe the power-law
distribution predicted by their smaller siblings," Sornette says.
He christened them dragon-kings. The dragon part
of the name stems from the fact that these events seem to obey different
mathematical laws, just as a dragon's behaviour differs from that of the other
animals.
Sornette got his first
whiff of dragon-kings when studying cracks that develop in spacecraft. Since
then, he has spotted them everywhere,
from a rainstorm that hit Venezuela in 1999 and the financial crashes in 2000
and 2007, to some epileptic seizures.
But he wasn't satisfied with merely recognising
dragon-kings. The fact that they don't follow a power law suggests they are
being produced by a different mechanism, which raises the possibility that,
unlike events that follow the power law, dragon-kings may be predictable.
He and his colleagues
have had some success, predicting a slip in the
Shanghai Stock Exchange before it happened in August 2009 and
using a few electrical pulses to suppress seizures that
might have become dragon-kings in rats and rabbits. But the
difficulty of running controlled experiments in real financial systems or
brains prevented them from going any further.
Enter Cavalcante and Gauthier's oscillating circuits.
Gauthier spent the early 1990s studying pairs of identical circuits that
behaved chaotically on their own, but would synchronise for long periods of
time when coupled in a certain way. "It's a little bit politically
incorrect, but it's sometimes called the 'master-slave' configuration,"
Gauthier says. He coupled the two circuits by measuring the difference between
the voltages running through them, and injecting a current into the
"slave" circuit to make it more like the "master". Most of
the time this worked and the two would oscillate together like a pair of
swinging pendulums, with only slight deviations away from synchronisation.
But every so often, the slave would stop
following the master and march to its own beat for a short time, before getting
back in step. Gauthier realised at the time that there were recognisable signs
that this disconnect was about to happen. It wasn't until he saw Sornette's
work that he checked for dragon-kings.
He and his colleagues have now shown that the
differences in the circuits' voltages during these desynchronisations are
indeed dragon-kings. "They were as big as the system would physically
allow, like a major disaster," Gauthier says.
The pair went on to show
that they could reliably forecast when a big event was about to happen:
whenever the differences between the circuits' oscillations decreased to a
certain value, a leap of dragon-king proportions was almost always imminent.
And once they saw it coming, they found they could apply a small electrical
nudge to the slave circuit to make sure it didn't tear away from its master (Physical Review Letters, doi.org/p44).
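The article describes the master-slave coupling only qualitatively. The toy sketch below (Python with numpy, not the actual circuits from the study) couples two chaotic logistic maps the same way: the slave is nudged toward the master at each step. With the coupling set near the synchronisation threshold and a whisper of noise, the gap stays tiny for long stretches and then occasionally erupts into a burst about as large as the system allows, which is the qualitative dragon-king signature discussed above. All parameter values are illustrative guesses.

import numpy as np

r, coupling, noise, n = 3.9, 0.43, 1e-5, 200_000
rng = np.random.default_rng(3)
master, slave = 0.3, 0.31
gaps = np.empty(n)
for t in range(n):
    master = r * master * (1.0 - master) + noise * rng.normal()
    master = min(max(master, 0.0), 1.0)
    new_slave = r * slave * (1.0 - slave)
    # "Injecting a current": pull the slave toward the master's new state.
    slave = (1.0 - coupling) * new_slave + coupling * master
    gaps[t] = abs(master - slave)

print("median gap          :", np.median(gaps))
print("largest gap         :", gaps.max())
print("bursts > 100x median:", int((gaps > 100 * np.median(gaps)).sum()))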
"We basically kill the dragon-king in the
egg," Sornette says. "The counter-mechanism kills it when it is
burgeoning."
It's a long way to go from a pair of coupled
circuits to the massive complexity of the real world. But by using this simple
system to find out at what stage in the process a dragon-king can be prevented,
Sornette hopes to see whether financial regulation could prevent a crash once a
stock market bubble has already begun to grow, a controversial topic among
regulators.
"The fear of central banks is that their
intervention might actually worsen the situation and trigger the crashes,
destabilising the system even further," he says. "That's the type of
insight we could test and check and probe with our system."
Some physicists think the gap between so-called
low dimensional systems like the pair of oscillators, which can be described by
just three variables each, and real-world complex systems like the stock
market, is too wide to bridge. "The conclusions of the paper appear
correct and interesting for people studying low dimensional chaos," says
Alfred Hubler of the University of Illinois at Urbana-Champaign. "But in
the real world, low dimensional chaos is very rare. Most real-world complex
systems have many interacting parts."
Others agree with Sornette that having a simple
physical system to manipulate will be useful. "Having a mechanical system
where you can explore it in the lab is crucially important," says Neil
Johnson at the University of Miami in Coral Gables. He studies dragon-kings in
simulations of stock markets and traffic jams and can't wait to start using a
pair of oscillators to see how they relate.
Sornette thinks the circuits are just the
beginning of a future in which we can monitor, diagnose, forecast and
ultimately control our world. "I think we are on the verge of a revolution
where we are going to be able to steer our planet better, informed by this kind
of science." It's quite a promise – not all storms, seizures and crashes
are dragon-kings, after all. But we now have a tool to explore how to deal with
those that are.
This article appeared in print under the headline "Crashing
market, hidden dragon?"
Xxxxxxxxxxxxxxxxx
Appendix II
Orders of Randomness 2
Andre Willers
15 Aug 2008
(This appendix repeats the text of Appendix B above verbatim, so it is not reproduced again here.)
Xxxxxxxxxxxxxxxxxxxxxxxx
Appendix III
Wednesday, November 27, 2013
Zevatron Gun
Andre Willers
27 Nov 2013
Synopsis :
An ordinary firearm can fire zevatron beams by using a Zevatron bullet.
Discussion :
1. First, see "Desktop Zevatron" in Appendix I.
2. The Zevatron Cartridge:
2.1 Propellant at base: standard.
2.2 The next layer is fine conductive coils: wound or 3D printed.
2.3 The last layer is buckyballs arranged in magnetron configurations. 3D or 4D printed.
3. How it works:
3.1 The gun barrel must be magnetized. Stroking with a permanent magnet will do in a pinch.
3.2 On firing, the bullet accelerates both linearly and angularly down the barrel.
3.3 This generates an EMP pulse from the coils interacting with the barrel's magnetic field.
3.4 This EMP pulse propagates faster than the bullet and induces Chirping effects on the magnetron-like buckyballs.
3.5 When they implode, Zevatron-level energies are released.
3.6 By aligning (3D printing) the buckyballs correctly, the zevatron beam can be concentrated or even collimated.
3.7 The initial setup will take a lot of calculation and experimentation, but it only needs to be done once. After that, manufacture as usual. Even at home, using 3D and 4D printers. (See http://andreswhy.blogspot.com/2013/11/nd-printing.html )
4. The energy calculations are simple: just use a muzzle energy (eg 500 joules for a .45 handgun) and work backwards to required values for various cartridge layers. A rough worked example follows.
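Taking point 4 literally, the only calculation one can do from the quoted figure is a unit conversion. A trivial sketch (Python), ignoring every loss mechanism:

EV_PER_JOULE = 1.0 / 1.602176634e-19      # electron volts in one joule
muzzle_energy_j = 500.0                   # the .45 handgun figure above
particle_energy_ev = 1e20                 # "zevatron" scale

budget_ev = muzzle_energy_j * EV_PER_JOULE
print(f"total energy budget      : {budget_ev:.2e} eV")   # ~ 3.1e21 eV
print(f"particles at 1e20 eV each: {budget_ev / particle_energy_ev:.1f}")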
5. What do you get?
A thin beam of cosmic energy-level particles. At these energies, they can be collimated.
A true long-range blaster, cutter, welder, anti-ballistic system, ultra-radar, etc.
6. Safety aspects:
6.1 There will probably be backscatter of radiation of various types. Depends on the bullet.
6.2 Simulation effects:
If we are in a simulation, then use of Zevatrons will put stress on the simulation. This may already be happening.
See Appendix II. The value of the Gravitational Constant is changing. What one would expect if the "grid" is made finer to take Zevatrons into account. (See Appendix I)
Not only will this also interfere with quantum effects routinely used in computers, but the System Administrator (God, for all practical purposes) might decide to switch off this Locale as being too "expensive" or too much trouble.
7. Quantum pollution.
Zevatron observation is equivalent to pollution at the quantum level.
8. But the benefits are amazing.
8.1 A finer "subspace-grid" means stable trans-uranic elements, stable exotic matter, workable quantum computers.
8.2 Singularity: the verdict is still out on whether it will snuff out humans or transcend them.
The ultimate Western.
A Universe in every holster.
Andre
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Appendix I
Desktop Zevatron
Andre Willers
22 Nov 2013
Synopsis:
We use the Schwinger limit to produce particles with energies greater than 10^20 eV.
Discussion :
1. If the thought experiment cannot be reproduced in "reality", we are in a simulation. See Appendix B.
2. Thought experiment:
Consider buckyballs in an arrangement like a magnetron. Then chirp the frequency (ie increase it). The buckyball pockets will decrease and emit ever more energetic particles until they implode at Zevatron energies.
This can easily be done in a small university lab. Or inside your body.
3. Makes a helluva weapon.
4. If energies go over 10^20 eV, then either
4.1 We are not in a simulation
Or
4.2 The laws of physics get rewritten on the fly.
Or both
4.3 There is a quantum superposition (most likely)
We are in 1/3 simulation, but 2/3 superposition.
5. Resonance energy spectra:
The Zevatron will then have distributions typical of 1/3, 2/3.
6. Beth levels.
Pauli exclusion principle:
Taken as a general definition of delineation (identity). The problem is that it is usually used in a binary sense, whereas trinary would be more correct.
Inverse Pauli principle:
Higher Beth levels distort the Pauli exclusion principle.
The observer has very marked effects on the observed process.
7. In a Zevatron, some observers would have a Talent for it, whereas others would squelch it.
Pauli was notorious for squelching experimental processes.
We want the opposite.
8. What does all this sthako mean?
It means that we are living in a simulation 2/3 of the time, and deterministically 1/3 of the time, in so far as time has any meaning.
9. The linkage is poetry, language, mathematics, music, physics, biology.
10. The nitty gritty:
Very high energy particle physics incorporates the observer. If you want a Zevatron, or cold fusion, or even hot fusion, you need an Inverse Pauli Person in the loop.
11. Pollyanna Fusion.
Don't knock it. At 10^20 eV it works.
12. Of course, it does not solve the Simulation problem. That is because you keep on thinking Y/N, whereas it is a little bit of this and a little bit of that.
13. Think of the universe as a congeries of information packets, each with a source and destination address, and some (just for the hell of it) with either or neither. Peregrinating through the Beth levels of meaning.
14. The Meaning of Life.
Beth(1) or Beth(2) levels: 1/3 basic physical ground states, 2/3 what you make of it.
Beth(3) and better: what you make of it.
15. Can you see why the Zevatron is such an interesting experiment?
God is always inside your decision loop.
An entity (whether an individual or an organization) that can process this cycle quickly, observing and reacting to unfolding events more rapidly than an opponent, can thereby "get inside" the opponent's decision cycle and gain the advantage.
Well, God cheats, since He is outside time (higher Beth levels in our terminology).
16. With Zevatrons in play, God will have to jack up things a bit. And we are off to the races.
17. You can't win, but it was a fun race.
18. Zero-point energy and Zevatrons.
Anything over the Schwinger limit generates zero-point energy. (See Appendix A)
This can be done intra-cellularly with 4D printers (see http://andreswhy.blogspot.com/2013/11/nd-printing.html )
Never mind food. Energy can be obtained indefinitely by a simple injection of 4D printed molecules.
19. 4D Printed wine.
The ultimate connoisseur's delight. The wine adapts to the taster's palate, taste receptors and immune system to tickle pleasure receptors.
20. 4D Printed Food.
Food (and here I include medicines) reconfigures itself inside the gut and even inside the cells to give maximum benefit on instructions from the Cloud.
Humans being humans, even now we can print 4D foods that will taste fantastic, but reassemble into non-fattening molecules when exposed to the digestive processes.
21. Ho-ho-ho! The Petrol pill!
For long a BS story, this is now actually a theoretical possibility.
A 4D printed molecule packing some serious energy can be designed to re-assemble into a combustible hydrocarbon on exposure to water. The physics is very straightforward. This can actually be done. It will cost, but the military will love it.
22. Put a Tiger in your tank! Circe Bullets.
Bullets with a payload of 4D printed DNA/RNA/Epigenetics can convert an enemy into a tiger, sloth or any animal.
23. I prefer variable biltong. 4D Print biltong just as you like it. Hard, salty crust with meltingly soft interior.
Whatever you do, don't lose the nipple.
It is sad to see grown humans in perennial search of a 4D nipple.
One of Strauss's lesser-known works:
"The Tit-Tat Waltz"
Andre
Appendix A
However, two waves or two photons not traveling
in the same direction always have a minimum combined energy in their center of
momentum frame, and it is this energy and the electric field strengths
associated with it, which determine particle-antiparticle creation, and
associated scattering phenomena.
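The quoted excerpt mentions the field strengths involved without giving the number. The Schwinger critical field follows from standard constants; a quick calculation (Python, not part of the excerpt):

# E_S = m_e^2 * c^3 / (e * hbar), roughly 1.3e18 V/m.
M_E = 9.1093837015e-31       # electron mass, kg
C = 2.99792458e8             # speed of light, m/s
E_CHARGE = 1.602176634e-19   # elementary charge, C
HBAR = 1.054571817e-34       # reduced Planck constant, J*s

schwinger_field = M_E**2 * C**3 / (E_CHARGE * HBAR)
print(f"Schwinger limit: {schwinger_field:.2e} V/m")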
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Appendix B
In the 1999 sci-fi film
classic The Matrix, the protagonist, Neo, is stunned to see people defying the
laws of physics, running up walls and vanishing suddenly. These superhuman
violations of the rules of the universe are possible because, unbeknownst to
him, Neo’s consciousness is embedded in the Matrix, a virtual-reality
simulation created by sentient machines.
The action really begins
when Neo is given a fateful choice: Take the blue pill and return to his
oblivious, virtual existence, or take the red pill to learn the truth about the
Matrix and find out “how deep the rabbit hole goes.”
Physicists can now offer
us the same choice, the ability to test whether we live in our own virtual
Matrix, by studying radiation from space. As fanciful as it sounds, some
philosophers have long argued that we’re actually more likely to be artificial
intelligences trapped in a fake universe than we are organic minds in the
“real” one.
But if that were true,
the very laws of physics that allow us to devise such reality-checking
technology may have little to do with the fundamental rules that govern the
meta-universe inhabited by our simulators. To us, these programmers would be
gods, able to twist reality on a whim.
So should we say yes to
the offer to take the red pill and learn the truth — or are the implications
too disturbing?
Worlds in Our Grasp
The first serious
attempt to find the truth about our universe came in 2001, when an effort to
calculate the resources needed for a universe-size simulation made the prospect
seem impossible.
Seth Lloyd, a
quantum-mechanical engineer at MIT, estimated the number of “computer
operations” our universe has performed since the Big Bang — basically, every
event that has ever happened. To repeat them, and generate a perfect facsimile
of reality down to the last atom, would take more energy than the universe has.
“The computer would have to be bigger than the
universe, and time would tick more slowly in the program than in reality,” says
Lloyd. “So why even bother building it?”
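Lloyd's published estimate (an assumed figure, not stated in the article) is roughly 10^120 elementary operations since the Big Bang. A one-liner shows why replaying them is hopeless even for an exascale machine:

OPS_UNIVERSE = 1e120            # Lloyd's estimate of operations since the Big Bang
EXASCALE_OPS_PER_SECOND = 1e18
SECONDS_PER_YEAR = 3.156e7

years = OPS_UNIVERSE / EXASCALE_OPS_PER_SECOND / SECONDS_PER_YEAR
print(f"naive replay time: {years:.1e} years")   # around 3e94 years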
But others soon realized
that making an imperfect copy of the universe that’s just good enough to fool
its inhabitants would take far less computational power. In such a makeshift
cosmos, the fine details of the microscopic world and the farthest stars might
only be filled in by the programmers on the rare occasions that people study
them with scientific equipment. As soon as no one was looking, they’d simply
vanish.
In theory, we’d never
detect these disappearing features, however, because each time the simulators
noticed we were observing them again, they’d sketch them back in.
That realization makes
creating virtual universes eerily possible, even for us. Today’s supercomputers
already crudely model the early universe, simulating how infant galaxies grew
and changed. Given the rapid technological advances we’ve witnessed over past
decades — your cell phone has more processing power than NASA’s computers had
during the moon landings — it’s not a huge leap to imagine that such
simulations will eventually encompass intelligent life.
“We may be able to fit humans into our simulation
boxes within a century,” says Silas Beane, a nuclear physicist at the
University of Washington in Seattle. Beane develops simulations that re-create
how elementary protons and neutrons joined together to form ever larger atoms
in our young universe.
Legislation and social
mores could soon be all that keeps us from creating a universe of artificial,
but still feeling, humans — but our tech-savvy descendants may find the power
to play God too tempting to resist.
[Image: If cosmic rays don't have random origins, it could be a sign that the universe is a simulation. Credit: National Science Foundation/J. Yang]
They could create a
plethora of pet universes, vastly outnumbering the real cosmos. This thought
led philosopher Nick Bostrom at the University of Oxford to conclude in 2003
that it makes more sense to bet that we’re delusional silicon-based artificial
intelligences in one of these many forgeries, rather than carbon-based
organisms in the genuine universe. Since there seemed no way to tell the
difference between the two possibilities, however, bookmakers did not have to
lose sleep working out the precise odds.
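The bookkeeping behind that bet is easy to sketch. A toy calculation (the simulation counts below are pure assumptions, not Bostrom's figures):

```python
# Toy version of the bookkeeping behind Bostrom's bet (counts are assumptions):
# if each "real" civilisation eventually runs N convincing ancestor simulations,
# then of the N + 1 worlds like ours, N are simulated.
for n_sims in (1, 10, 1000):
    frac_simulated = n_sims / (n_sims + 1)
    print(f"{n_sims:5d} simulations per real world -> "
          f"{frac_simulated:.1%} of such worlds are simulated")
```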
Learning the Truth
That changed in 2007
when John D. Barrow, professor of mathematical sciences at Cambridge
University, suggested that an imperfect simulation of reality would contain
detectable glitches. Just like your computer, the universe’s operating system
would need updates to keep working.
As the simulation
degrades, Barrow suggested, we might see aspects of nature that are supposed to
be static — such as the speed of light or the fine-structure constant that
describes the strength of the electromagnetic force — inexplicably drift from
their “constant” values.
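In practice that glitch-hunt is a statistics exercise: take repeated measurements of a supposedly fixed constant and ask whether a drift term is significant. A minimal sketch, with made-up measurement values:

```python
import numpy as np

# Hypothetical yearly measurements of a dimensionless "constant" near 0.0072974
# (roughly the fine-structure constant); the values and uncertainty are made up.
years = np.arange(2000, 2014)
values = np.array([0.00729740, 0.00729736, 0.00729742, 0.00729738, 0.00729735,
                   0.00729741, 0.00729739, 0.00729737, 0.00729743, 0.00729736,
                   0.00729740, 0.00729738, 0.00729735, 0.00729741])
sigma = 3e-8  # assumed per-measurement uncertainty

# Least-squares fit of a linear drift: value = intercept + slope * (years since 2000).
t = years - years[0]
slope, intercept = np.polyfit(t, values, 1)

# Standard error of the slope for equal, independent uncertainties.
se_slope = sigma / np.sqrt(np.sum((t - t.mean()) ** 2))
print(f"fitted drift = {slope:.2e} per year "
      f"({abs(slope) / se_slope:.1f} sigma from zero)")
```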
Last year, Beane and
colleagues suggested a more concrete test of the simulation hypothesis. Most
physicists assume that space is smooth and extends out infinitely. But
physicists modeling the early universe cannot easily re-create a perfectly
smooth background to house their atoms, stars and galaxies. Instead, they build
up their simulated space from a lattice, or grid, just as television images are
made up from multiple pixels.
The team calculated that
the motion of particles within their simulation, and thus their energy, is
related to the distance between the points of the lattice: the smaller the grid
size, the higher the energy particles can have. That means that if our universe
is a simulation, we’ll observe a maximum energy amount for the fastest
particles. And as it happens, astronomers have noticed that cosmic rays,
high-speed particles that originate in far-flung galaxies, always arrive at
Earth with a specific maximum energy of about 10^20 electron volts.
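That grid-size/energy relation can be turned into a back-of-the-envelope estimate: on a lattice of spacing a, the largest momentum that fits is roughly pi/a (the edge of the Brillouin zone), so a cutoff near 10^20 eV implies a particular spacing. A rough sketch of the arithmetic, taking the quoted cutoff at face value:

```python
import math

HBARC_EV_M = 197.3269804e6 * 1e-15  # hbar * c expressed in eV * m (197.33 MeV fm)

def lattice_spacing_for_cutoff(e_max_ev: float) -> float:
    """Spacing a for which the Brillouin-zone edge energy pi*hbar*c/a equals e_max."""
    return math.pi * HBARC_EV_M / e_max_ev

e_max = 1e20  # eV, the cosmic-ray cutoff quoted above
print(f"implied lattice spacing ~ {lattice_spacing_for_cutoff(e_max):.1e} m")
# ~6e-27 m, about a trillion times smaller than a proton
```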
The simulation’s lattice
has another observable effect that astronomers could pick up. If space is
continuous, then there is no underlying grid that guides the direction of
cosmic rays — they should come in from every direction equally. If we live in a
simulation based on a lattice, however, the team has calculated that we
wouldn’t see this even distribution. If physicists do see an uneven
distribution, it would be a tough result to explain if the cosmos were real.
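A toy Monte Carlo conveys the flavour of such a test (an illustration only, not the team's published analysis): compare an isotropic sky with one whose arrival directions have been nudged toward the axes of a hypothetical grid, using a simple alignment statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

def isotropic_directions(n):
    """Unit vectors drawn uniformly over the sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def axis_alignment(dirs):
    """Mean of each direction's largest |component|; it rises if arrivals
    cluster around the coordinate axes of a hypothetical grid."""
    return np.abs(dirs).max(axis=1).mean()

iso = isotropic_directions(100_000)

# Toy "grid-biased" sky: pull every arrival direction partway toward its
# nearest coordinate axis (illustrative only, not the published prediction).
rows = np.arange(len(iso))
idx = np.abs(iso).argmax(axis=1)
axes = np.zeros_like(iso)
axes[rows, idx] = np.sign(iso[rows, idx])
biased = iso + 0.3 * (axes - iso)
biased /= np.linalg.norm(biased, axis=1, keepdims=True)

print(f"isotropic sky statistic  : {axis_alignment(iso):.3f}")
print(f"grid-biased sky statistic: {axis_alignment(biased):.3f}")
```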
Astronomers need much
more cosmic ray data to answer this one way or another. For Beane, either
outcome would be fine. “Learning we live in a simulation would make no more
difference to my life than believing that the universe was seeded at the Big
Bang,” he says. But that’s because Beane imagines the simulators as driven
purely to understand the cosmos, with no desire to interfere with their
simulations.
Unfortunately, our
almighty simulators may instead have programmed us into a universe-size reality
show — and are capable of manipulating the rules of the game, purely for their
entertainment. In that case, maybe our best strategy is to lead lives that
amuse our audience, in the hope that our simulator-gods will resurrect us in
the afterlife of next-generation simulations.
The weird consequences
would not end there. Our simulators may be simulations themselves — just one
rabbit hole within a linked series, each with different fundamental physical
laws. “If we’re indeed a simulation, then that would be a logical possibility,
that what we’re measuring aren’t really the laws of nature, they’re some sort
of attempt at some sort of artificial law that the simulators have come up
with. That’s a depressing thought!” says Beane.
This cosmic ray test may
help reveal whether we are just lines of code in an artificial Matrix, where
the established rules of physics may be bent, or even broken. But if learning
that truth means accepting that you may never know for sure what’s real —
including yourself — would you want to know?
There is no turning
back, Neo: Do you take the blue pill, or the red pill?
The postulated (hypothetical) sources of EECR (extreme-energy cosmic rays) are known as Zevatrons, named in analogy to Lawrence Berkeley National Laboratory's Bevatron and Fermilab's Tevatron, and capable of accelerating particles to 1 ZeV (10^21 eV).
Xxxxxxxxxxxxxxxxxxxxxxxxxx
Appendix II
Puzzling Measurement of "Big G" Gravitational Constant Ignites Debate
Despite dozens of measurements over more than 200 years, we still don’t know how strong gravity is
By Clara Moskowitz
[Image: BIG "G". Researchers at the International Bureau of Weights and Measures (BIPM) in Sèvres, France used a torsion balance apparatus to calculate the gravitational constant, "big G," a fundamental constant that has proven difficult to measure. The latest calculation, the result of a 10-year experiment, just adds to the confusion.]
Gravity, one of the
constants of life, not to mention physics, is less than constant when it comes
to being measured. Various experiments over the years have come up with
perplexingly different values for the strength of the force of gravity, and the
latest calculation just adds to the confusion.
The results of a
painstaking 10-year experiment to calculate the value of “big G,” the universal
gravitational constant, were published this month—and they’re incompatible with
the official value of G, which itself comes from a weighted average of various
other measurements that are mostly mutually incompatible and diverge by more
than 10 times their estimated uncertainties.
The gravitational
constant “is one of these things we should know,” says Terry Quinn at the
International Bureau of Weights and Measures (BIPM) in Sèvres, France, who led
the team behind the latest calculation. “It’s embarrassing to have a
fundamental constant that we cannot measure how strong it is.”
In fact, the
discrepancy is such a problem that Quinn is organizing a meeting in February at
the Royal Society in London to come up with a game plan for resolving the
impasse. The meeting’s title—“The Newtonian constant of gravitation, a constant
too difficult to measure?”—reveals the general consternation.
Although gravity seems
like one of the most salient of nature’s forces in our daily lives, it’s
actually by far the weakest, making attempts to calculate its strength an
uphill battle. “Two one-kilogram masses that are one meter apart attract each
other with a force equivalent to the weight of a few human cells,” says
University of Washington physicist Jens Gundlach, who worked on a separate 2000
measurement of big G. “Measuring such small forces on kg-objects to 10^-4 or 10^-5 precision is just not easy. There are many effects that could overwhelm gravitational effects, and all of these have to be properly understood and taken into account.”
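Gundlach's comparison is easy to check; the only loose assumption in the sketch below is a typical cell mass of about one nanogram.

```python
G = 6.674e-11      # m^3 kg^-1 s^-2, approximate gravitational constant
g = 9.81           # m s^-2, standard gravity
CELL_MASS = 1e-12  # kg, assuming a typical cell mass of about one nanogram

force = G * 1.0 * 1.0 / 1.0**2  # attraction between two 1 kg masses 1 m apart
cell_weight = CELL_MASS * g
print(f"attraction {force:.2e} N  ~  {force / cell_weight:.1f} cell-weights")
```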
This inherent
difficulty has caused big G to become the only fundamental constant of physics
for which the uncertainty of the standard value has risen over time as more and
more measurements are made. “Though the measurements are very tough, because G
is so much weaker than other laboratory forces, we still, as a community, ought
to do better,” says University of Colorado at Boulder physicist James Faller,
who conducted a 2010 experiment to calculate big G using pendulums.
The first big G
measurement was made in 1798 by British physicist Henry Cavendish using an
apparatus called a torsion balance. In this setup, a bar with lead balls at
either end was suspended from its middle by a wire. When other lead balls were
placed alongside this bar, it rotated according to the strength of the
gravitational attraction between the balls, allowing Cavendish to measure the
gravitational constant.
Quinn and his
colleagues’ experiment was essentially a rehash of Cavendish’s setup using more
advanced methods, such as replacing the wire with a wide, thin strip of copper
beryllium, which allowed their torsion balance to hold more weight. The team
also took the further step of adding a second, independent way of measuring the
gravitational attraction: In addition to observing how much the bar twisted,
the researchers also conducted experiments with electrodes placed inside the
torsion balance that prevented it from twisting. The strength of the voltage
needed to prevent the rotation was directly related to the pull of gravity. “A
strong point of Quinn’s experiment is the fact that they use two different
methods to measure G,” says Stephan Schlamminger of the U.S. National Institute
of Standards and Technology in Gaithersburg, Md., who led a separate attempt in
2006 to calculate big G using a beam balance setup. “It is difficult to see how
the two methods can produce two numbers that are wrong, but yet agree with each
other.”
Through these dual experiments, Quinn’s team arrived at a value of 6.67545 × 10^-11 m^3 kg^-1 s^-2. That’s 241 parts per million above the standard value of 6.67384(80) × 10^-11 m^3 kg^-1 s^-2, which was arrived at by a special task force of the International Council for Science’s Committee on Data for Science and Technology (CODATA) in 2010 by calculating a weighted average of all the various experimental
values. These values differ from one another by as much as 450 ppm of the
constant, even though most of them have estimated uncertainties of only about
40 ppm. “Clearly, many of them or most of them are subject either to serious
significant errors or grossly underestimated uncertainties,” Quinn says. Making
matters even more complex is the fact that the new measurement is strikingly
close to a calculation of big G made by Quinn and his colleagues more than 10
years ago, published in 2001, that used similar methods but a completely
separate laboratory setup.
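The quoted figures can be reproduced directly from the numbers in the text:

```python
quinn_2013 = 6.67545e-11   # m^3 kg^-1 s^-2, the BIPM team's value
codata_2010 = 6.67384e-11  # m^3 kg^-1 s^-2, the CODATA standard value

offset_ppm = (quinn_2013 - codata_2010) / codata_2010 * 1e6
print(f"BIPM value sits {offset_ppm:.0f} ppm above CODATA")   # ~241 ppm

# Spread among experiments (~450 ppm) versus typical quoted uncertainty (~40 ppm):
print(f"spread is ~{450 / 40:.0f} times the typical uncertainty")
```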
Most scientists think
all these discrepancies reflect human sources of error, rather than a true
inconstancy of big G. We know the strength of gravity hasn’t been fluctuating
over the past 200 years, for example, because if so, the orbits of the planets
around the sun would have changed, Quinn says. Still, it’s possible that the
incompatible measurements are pointing to unknown subtleties of gravity—perhaps
its strength varies depending on how it’s measured or where on Earth the
measurements are being made?
“Either something is
wrong with the experiments, or there is a flaw in our understanding of
gravity,” says Mark Kasevich, a Stanford University physicist who conducted an
unrelated measurement of big G in 2007 using atom interferometry. “Further work
is required to clarify the situation.”
If the true value of
big G turns out to be closer to the Quinn team’s measurement than the CODATA
value, then calculations that depend on G will have to be revised. For example,
the estimated masses of the solar system’s planets, including Earth, would
change slightly. Such a revision, however, wouldn’t alter any fundamental laws
of physics, and would have very little practical effect on anyone’s life, Quinn
says. But getting to the bottom of the issue is more a matter of principle to
the scientists. “It’s not a thing one likes to leave unresolved,” he adds. “We
should be able to measure gravity.”
Quinn and his team
from the BIPM and the University of Birmingham in England published their
results Sept. 5 in Physical Review Letters.
Xxxxxxxxxxxxxxxxxxxxxxxxx
Appendix D
Zevatron Gun
Andre Willers
27 Nov 2013
Synopsis :
An ordinary firearm
can fire zevatron beams by using a Zevatron bullet .
Discussion :
1.First , see “Desktop
Zevatron” in Appendix I
2. The Zevatron
Cartridge :
2.1 Propellant at base
: standard
2.2 The next layer is
fine conductive coils : wound or 3D printed
2.3 The last layer is buckyballs arranged in magnetron configurations . 3D or 4D printed .
3.How it works :
3.1 The gun barrel
must be magnetized . Stroking with a permanent magnet will do in a pinch .
3.2 On firing , the
bullet accelerates both linearly and angularly down the barrel .
3.3 This generates an EMP pulse from the coils interacting with the barrel’s magnetic field .
3.4 This EMP pulse
propagates faster than the bullet and induces Chirping effects on the
magnetron-like buckyballs .
3.5 When they implode ,
Zevatron level energies are released .
3.6 Aligning (3D
printing) the buckyballs correctly , the zevatron beam can be concentrated or
even collimated .
3.7 The initial setup
will take a lot of calculation and experimentation , but it only needs to be
done once . After that , manufacture as usual . Even at home , using 3D and 4D
printers . (See http://andreswhy.blogspot.com/2013/11/nd-printing.html )
4. The energy calculations are simple : just use a muzzle energy (e.g. 500 joules for a .45 handgun) and work backwards to required values for various cartridge layers , as in the rough sketch below .
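A minimal sketch of that back-of-envelope step (the bullet mass and velocity are assumed, typical .45 figures):

```python
EV_PER_JOULE = 1.0 / 1.602176634e-19  # electron volts per joule

# Assumed .45-calibre figures: ~15 g bullet at ~260 m/s gives roughly 500 J.
bullet_mass = 0.015      # kg
muzzle_velocity = 260.0  # m/s
muzzle_energy_j = 0.5 * bullet_mass * muzzle_velocity**2

muzzle_energy_ev = muzzle_energy_j * EV_PER_JOULE
particle_energy_ev = 1e20  # Zevatron-scale energy per particle used in the text

print(f"muzzle energy ~ {muzzle_energy_j:.0f} J = {muzzle_energy_ev:.2e} eV")
print(f"at most ~{muzzle_energy_ev / particle_energy_ev:.0f} particles of 10^20 eV, "
      f"before any losses")
```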
5.What do you get ?
A thin beam of cosmic
energy-level particles . At these energies , they can be collimated .
A true long range
blaster , cutter , welder , anti-ballistic system , ultra-radar , etc
6.Safety aspects :
6.1 There will
probably be backscatter of radiation of various types . Depends on the bullet .
6.2 Simulation effects
:
If we are in a
simulation , then use of Zevatrons will put stress on the simulation . This may
already be happening .
See Appendix II . The
value of the Gravitational Constant is changing . What one would expect if the
“grid” is made finer to take Zevatrons into account . (See Appendix I)
Not only will this
also interfere with quantum effects routinely used in computers , but the
System Administrator (God , for all practical purposes) might decide to switch
off this Locale as being too “expensive” or too much trouble .
7. Quantum pollution .
Zevatron observation
is equivalent to pollution at the quantum level .
8. But the benefits
are amazing .
8.1 A finer
“subspace-grid” means stable trans-uranic elements , stable exotic matter ,
workable quantum computers .
8.2 Singularity : The jury is still out on whether it will snuff out humans or transcend them .
The ultimate Western .
A Universe in every
holster .
Andre
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Appendix I
Desktop Zevatron
Andre Willers
22 Nov 2013
Synopsis:
We use the Schwinger
limit to produce particles with energies greater than 10^20 eV .
Discussion :
1. If the thought experiment cannot be reproduced in “reality” , we are in a simulation . See the simulation-hypothesis article reproduced above , just before Appendix II .
2. Thought experiment :
Consider buckyballs in an arrangement like a magnetron . Then chirp the frequency (i.e. increase it) . The buckyball pockets will shrink and emit ever more energetic particles until they implode at Zevatron energies .
This can easily be done in a small university lab . Or inside your body .
3. Makes a helluva weapon .
4.If energies go over
10^20 eV , then either
4.1 We are not in a
simulation
Or
4.2 The laws of physics get rewritten on the fly .
Or both
4.3 There is a quantum
superposition (most likely)
We are in 1/3
simulation , but 2/3 superposition .
5.Resonance energy
spectra :
The Zevatron will then
have distributions typical of 1/3 , 2/3
6. Beth levels .
Pauli exclusion
principle :
Taken as a general definition of delineation (identity) . The problem is that it is usually used in a binary sense , whereas trinary would be more correct .
Inverse Pauli
principle .
Higher Beth levels
distort the Pauli exclusion principle .
The observer has very
marked effects on the observed process .
7. In a Zevatron ,
some observers would have a Talent for it , whereas others would squelch it .
Pauli was notorious for squelching experimental processes .
We want the opposite .
8. What does all this
sthako mean ?
It means that we are
living in a simulation 2/3 of the time , and deterministically 1/3 of the time
, in so far as time has any meaning .
9. The linkage is
poetry , language , mathematics , music , physics , biology .
10 The nitty gritty :
Very high energy particle
physics incorporates the observer . If you want a Zevatron , or cold fusion ,
or even hot fusion , you need
an Inverse Pauli
Person in the loop .
11. Pollyanna Fusion .
Don’t knock it . At
10^20 eV it works .
12. Of course , it
does not solve the Simulation problem . That is because you keep on thinking
Y/N , whereas it is a little bit of this and a little bit of that .
13. Think of the
universe as a congeries of information packets , each with a source and
destination address , and some (just for the hell of it) with either or neither
. Peregrinating through the Beth levels of meaning .
14. The Meaning of
Life .
Beth (1) or Beth (2)
levels : 1/3 basic physical ground states , 2/3 what you make of it .
Beth (3) and better :
What you make of it .
15. Can you see why
the Zevatron is such an interesting experiment ?
God is always inside
your decision loop .
An entity (whether an individual or an
organization) that can process this cycle quickly, observing and reacting to
unfolding events more rapidly than an opponent, can thereby "get
inside" the opponent's decision cycle and gain the advantage.
Well , God cheats ,
since He is outside time (higher Beth levels in our terminology)
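A toy sketch of that decision-loop advantage (all numbers are assumptions): the agent with the shorter observe-orient-decide-act cycle always acts on fresher information.

```python
# Toy sketch of "getting inside the decision loop": each agent only refreshes
# its picture of the world once per decision cycle, so a shorter cycle means
# acting on fresher information. All numbers are assumptions.

def average_staleness(cycle_length: int, steps: int = 1000) -> float:
    """Average age (in time steps) of the information an agent acts on."""
    total, last_observation = 0, 0
    for t in range(steps):
        if t % cycle_length == 0:      # completes an observe-orient-decide-act cycle
            last_observation = t
        total += t - last_observation  # how stale its picture is right now
    return total / steps

for cycle in (2, 5, 20):
    print(f"decision cycle {cycle:2d} steps -> "
          f"average staleness {average_staleness(cycle):.1f} steps")
```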
16 .With Zevatrons in
play , God will have to jack up things a bit . And we are off to the races .
17 . You can’t win , but
it was a fun race .
18 Zero point energy and
Zevatrons .
Anything over the Schwinger limit generates zero-point energy . (See Appendix A)
This can be done
intra-cellular with 4D printers (see http://andreswhy.blogspot.com/2013/11/nd-printing.html )
Never mind food .
Energy can be obtained indefinitely by a simple injection of 4D printed
molecules .
19 . 4D Printed wine .
The ultimate connoisseur’s delight . The wine adapts to the taster’s palate , taste receptors and immune system to tickle pleasure receptors .
20. 4D Printed Food .
Food ( and here I
include medicines) reconfigure themselves inside the gut and even inside the
cells to give maximum benefit on instructions from the Cloud .
Humans being humans ,
even now we can print 4D foods that will taste fantastic , but reassemble into
non-fattening molecules when exposed to the digestive processes .
21 . Ho–ho–Ho ! The
Petrol pill !
Long a BS story , this is now actually a theoretical possibility .
A 4D printed molecule packing some serious energy can be designed to re-assemble into a combustible hydrocarbon on exposure to water . The physics is very straightforward . This
can actually be done . It will cost , but the military will love it .
22. Put a Tiger in
your tank ! Circe Bullets .
Bullets with a payload
of 4D printed DNA/RNA/epigenetics can convert an enemy into a tiger , sloth or
any animal .
23. I prefer variable
biltong . 4D Print biltong just as you like it . Hard , salty crust with
meltingly soft interior .
Whatever you do ,
don’t lose the nipple .
It is sad to see grown
humans in perennial search of a 4D nipple .
One of Strauss’s
lesser known works .
“The Tit-Tat Waltz”
Andre
xxxxxxxxxxxxxxxxxxxxxxxxxx
Appendix A
Two waves or two photons not traveling in the same direction always have a minimum combined energy in their center-of-momentum frame, and it is this energy, and the electric field strengths associated with it, that determine particle-antiparticle creation and the associated scattering phenomena.
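A minimal numerical check of that statement, together with the Schwinger critical field referred to in Appendix I (standard physical constants, no other assumptions):

```python
import math

M_E_C2_MEV = 0.51099895  # electron rest energy, MeV

def cm_energy_mev(e1_mev, e2_mev, angle_deg):
    """Invariant (center-of-momentum) energy of two photons meeting at angle_deg."""
    return math.sqrt(2.0 * e1_mev * e2_mev * (1.0 - math.cos(math.radians(angle_deg))))

for angle in (180.0, 90.0, 10.0):  # head-on, right angle, nearly parallel
    e_cm = cm_energy_mev(0.511, 0.511, angle)
    verdict = "pair creation possible" if e_cm >= 2 * M_E_C2_MEV else "below threshold"
    print(f"two 0.511 MeV photons at {angle:5.1f} deg: E_cm = {e_cm:.3f} MeV ({verdict})")

# Schwinger critical field E_S = m_e^2 c^3 / (e * hbar), in volts per metre.
m_e, c, q_e, hbar = 9.1093837e-31, 2.99792458e8, 1.602176634e-19, 1.054571817e-34
print(f"Schwinger limit ~ {m_e**2 * c**3 / (q_e * hbar):.2e} V/m")
```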
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx