Friday, November 29, 2013

Depression and Altitude Variance


Andre Willers
30 Nov 2013
Synopsis :
Varying lung pressure prevents depression .
Discussion :
1.The populations with the least altitude variation have the most suicides . See Appendix II
Note that Nepal is at the bottom . That is because it is just up-and-down , with little in between .
But Greenland is at the top .
Why ?
2.See Flightrage , Appendix I .
3.There seems to be a tie between adrenalin and lung-pressure variation . This affects the boredom systems in the hippocampus , a form of learned helplessness (see Sapolsky) . This is translated as depression , even suicidal depression . See Appendix I .
4.Simple cures :
4.1 Take a holiday at a different elevation . Plateau to coast or vice-versa
4.2 Use PEEP (positive expiratory pressure) .
See Appendix III for Bronchial Resonance .
4.3 Fly frequently on commercial airlines . See Appendix I
4.4 Drive in heavy traffic .
The high concentration of CO2 and especially CO is equivalent to jet flight as far as your lungs are concerned .
Hence “Road-rage”  , the equivalent of “Flight Rage” in Appendix I
4.5 Aerobic exercise
4.6 Smoking .
4.7 Coughing or sneezing . Snuff . Or even colds .
A cough a day keeps depression at bay .
4.8 Scuba diving .
5. A note of caution :
The adrenalin stimulation system is addictive . The Gurkhas of Nepal may well have a zero suicide rate , but they have a very high homicide rate . Their trade .
6. Are you addicted to flying ? You can easily have become dependent on frequent altitude changes to feel “good” .
The same with the others .
Honk if you are not coming up or going down .

Appendix I

Friday, November 05, 2010
Flightrage and betablockers
Andre Willers
5 Nov 2010

Synopsis :
Beta-blockers exacerbate fight-or-flight reflexes from quick altitude changes , due to activation of the adrenal glands .

Discussion :

Example : Cape Town to Johannesburg flight .

Aircraft cabins pressurize to an 8 200 feet equivalent very soon after takeoff , as the plane reaches a cruising altitude of around 30 000 to 40 000 feet . (Wikipedia)

This is calculated to be right at the edge of the 95% Gaussian distribution , since this costs less . If you fall into the remaining 2.5% upper tail (the sickly , elderly or pregnant) , too bad .

The passenger subjectively goes from sea level to 8 200 feet in a matter of minutes .

This causes the heart to try to speed up and increase its stroke volume in a linear fashion to get more oxygen to the organs . (Altitude sickness)
This is noticed as a shortness of breath ; as CO2 is exhaled , the blood becomes more alkaline . Feelings of claustrophobia and constraint are experienced .
Oxygen deprivation will be experienced by individuals on the tail end of the statistical distribution , with all the attendant symptoms (Google "Altitude sickness")
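As a rough cross-check on these numbers , the barometric formula gives the oxygen partial pressure at the 8 200 feet cabin altitude . A minimal Python sketch (the 8 400 m scale height and 20.95% oxygen fraction are standard textbook values , not from the text) :

```python
import math

def o2_partial_pressure_kpa(altitude_m, p0_kpa=101.325, scale_height_m=8400.0):
    """Approximate O2 partial pressure via the exponential barometric formula."""
    pressure = p0_kpa * math.exp(-altitude_m / scale_height_m)
    return 0.2095 * pressure  # O2 is roughly 20.95% of dry air

cabin_ft = 8200
cabin_m = cabin_ft * 0.3048  # about 2 499 m
sea_level = o2_partial_pressure_kpa(0)
cabin = o2_partial_pressure_kpa(cabin_m)
print(f"pO2 at sea level : {sea_level:.1f} kPa")
print(f"pO2 at {cabin_ft} ft : {cabin:.1f} kPa ({100 * cabin / sea_level:.0f}% of sea level)")
```

At cabin altitude the available oxygen works out to roughly three-quarters of the sea-level figure , which is the deficit the heart is trying to make up .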

Betablockers :
However , if the passenger is taking beta-blockers , the heart rate cannot compensate . The control mechanism via the brain is interrupted .

The system then initiates a more general hormone system , namely large-scale adrenal activation . Fight-or-flight on a systemic scale .
The passenger goes ape . Mostly cowers , but some will go ballistic .

This is Flightrage .

Anxiety attacks :
The signaling molecules for amygdala and physical stressors are the same (unfortunately) . They have a feedback pattern that has to be disrupted . (Use alcohol)

Reduced oxygen supply :
The partial pressure of oxygen reduces roughly linearly over this range . Most organisms compensate by inducing endogenous melatonin release and going to sleep .
But this does not happen to all of them .

Notice that this means that you can get jet-lag even on a Cape Town-Joburg route . The body interprets the varying oxygen supply and demand fluctuations as temporal fluctuations . And the inverse , of course .

Reduced oxygen demand : Melatonin supplements .
Melatonin supplements , combined with disruptors like betablockers and alcohol , will reduce oxygen consumption to compensate for subjective altitude .

What to do :

1.Two to three stiff drinks : 50 ml of 40% alcohol (a double) times 3 will be sufficient to interrupt 95% of anxiety attacks . No more , as this will erode self-control .
This should work for a short flight of 2 hours . Take them within the 30 minutes before takeoff .

2. Melatonin :
Take a fast release Melatonin pill (Solal , 3 mg) about 30 minutes before flight .
For longer flights , combine with slow-release melatonin .

3.Smoking :
Smoking increases red-blood-cell concentrations (carbon monoxide effect) . The turnaround is quite rapid (24 hrs , a heritage of millions of years of fire) . Stopping smoking about 8 hours before the flight should optimize systems nicely .
Take iron supplements .

4.BetaBlockers :
Take an extra one about 30 mins before the flight . The anti-anxiety effect outweighs the fight-or-flight effect for 67% of humans 95% of the time .
Other anti-anxiety drugs like Ativan can be tried .

5.Do not eat beans .
Or any gas causing foods for at least 48 hrs before flight . The sudden decrease in cabin pressure on going to cruising altitude will cause major abdominal discomfort as the gas pockets expand . Farts and belches .
(You will not see this caution on any airline website)
Mexican flights must be a culinary adventure .

6.High blood pressure medication .
Do not take 12 hours before flight .
This includes diuretics .

The sudden decrease in cabin pressure from sea-level to 8 200 feet reduces blood pressure drastically . This should be ok for a healthy young individual , but older ones with obstructed blood vessels can easily get the bends . Also euphemistically known as Deep Vein Thrombosis . The vein expands faster than the blood can rush in to fill the gap . A thrombosis results as the clotting agents lose contact with quorum anti-coagulants . Gases reverse osmosis from tissue into veins , forming microscopic thrombosis spicules . It also screws up the O2/CO2 sensor mechanisms .

7 Aspirin:
Anti-coagulants taken about 45 mins before take-off . About 3 x 300 mg Disprin . The pulse of concentration of the anti-coagulant is important .

8 Hydration :
At 8 200 feet , there is significant CO2 loss through breathing (Google "Altitude sickness") .The blood becomes more alkaline , causing major metabolic shifts , especially in the CNS .

So , drink Coke or Soda water .
Soda water with a dash of salt and vinegar will go down well .(Old Roman recipe)
This should be done continuously from about 12 hrs before takeoff and during flight .

9 On Landing :
Biological systems will be in a state of reverberating shock (ie oscillating) . Do not take any medication except alcohol for about 12 hours after landing .(The alcohol dampens shock effects . Do not use caffeine in any form )

10. Epigenetic effects .
Obesity and diabetes
A flight on an aircraft pressurized at 8 200 feet from sea level triggers starvation mechanisms . The body thinks it cannot get enough energy , but thinks this is because of a lack of food (not oxygen) . Epigenetic markers get switched on , as if the person had lived through a famine . Fat-deposit mechanisms are switched on .

Anybody who has been on a commercial flight in the last three generations will have fat children or be at risk of diabetes .

Hence the present epidemic of obesity/diabetes . The system is simply trying to compensate for a perceived repeated famine (and getting very confused in the process) .

11. Chicken fat .
See previous posts . A source of merriment to many .
However , eating chicken fat about 6-12 hours before takeoff will significantly reduce epigenetic markers .

There is even the prospect of reversing the process :
Use pulses of sound to compress tissue , ie really heavy-beat music .
During the compression phase , the rate of chemical activity is significantly enhanced.
This convinces some quorum systems to unmark some epigenetic systems .

Why ?
Reinforcing success is an old strategic principle . About 500 million years of evolution must have developed the same at genetic level .
A failure at first is overwritten by a success later .
Therefore , there must be a method on cellular level of doing exactly this .

12.The Chicken Stomp .
Sip chicken fat soup in rhythm to the heavy beat .
Systems will compensate to switch off fat-deposit markers .
An easy way of losing weight .

After all , birds are the champion weight-watchers .

Appendix II
High Altitude Linked to Higher Suicide Risk — Again
January 19, 2011

By Alan Mozes
HealthDay Reporter
WEDNESDAY, Jan. 19 (HealthDay News) — Across the United States, suicide risk appears to be significantly higher among people who live in higher altitudes, new research suggests.
The latest observation seems to confirm the findings of previous research that unearthed a complex and as-yet not fully explained relationship between higher than average suicide rates and residency in higher elevations.
“Once you get to somewhere between 2,000 and 3,000 feet, you start seeing the suicide rates increase,” explained study author Dr. Barry E. Brenner, a professor of emergency medicine and internal medicine, as well as program director, in the department of emergency medicine at University Hospital Case Medical Center in Cleveland. “The correlation is very, very, very high, and it happens in every single region of the U.S.”
“And yet as you go up in altitude the overall death rate, or all-cause mortality, actually decreases,” Brenner noted. “So, the fact that suicide rates are increasing at the same time is a really significant and really striking finding.”
Brenner and his colleagues discuss their results in the Jan. 18 online issue of High Altitude Medicine & Biology.
The authors noted that data collected earlier this decade indicates that, globally, suicide is the 14th most common cause of death, amounting to 1.5 million fatalities every year.
Brenner’s new evidence of a linkage between suicides and high altitudes stems from an analysis of two decades’ worth of mortality data (1979-1998) obtained from the U.S. Centers for Disease Control and Prevention.
The CDC figures covered deaths that occurred in all 2,584 counties across the United States in that timeframe. At the same time, the authors obtained countywide elevation statistics from the U.S. Geologic Survey.
The research team determined that over the course of the 20-year period, suicides accounted for 1.4 percent of all American deaths, with an average county-wide suicide rate of 14 out of every 100,000 residents.
Even after adjusting for traditional risk factors such as age, race, household income, population density, and gender, the authors found that suicide rates (whether involving a firearm or not) were significantly higher than average in those counties with higher altitudes.
Even after adjusting for greater isolation, lower income and greater access to firearms, the findings remained statistically significant, the authors said.
In contrast, those same locales defined by relatively high topography were not home to the highest rates of death due to any and all causes. In fact, higher altitude counties actually registered lower than average death rates due to all causes.
This latter finding actually highlighted the strength of the apparent connection between suicide risk and high altitudes, the research team said.
For the time being, Brenner and his colleagues cautioned that attempts to explain the association are “speculative.”
“It may be related to obesity levels and sleep apnea that may be more common in higher altitudes,” Brenner suggested. He and his colleagues also noted that hypoxia — inadequate oxygen supply to the body’s cells and tissues — is more common at high altitudes, and is thought to increase mood disturbances, especially among emotionally unstable patients.
“It could be that hypoxic environments may lead to higher levels of depression or higher tendencies among the depressed to take suicidal action,” he said. “It’s an area that is rife for further investigation.”
Meanwhile, the research team suggested that their findings might help draw attention to residents of higher elevations who could benefit from relocation to lower altitudes and/or suicide monitoring and prevention services.
Last fall, Dr. Perry F. Renshaw, a professor of psychiatry at the Utah School of Medicine and an investigator with the Utah Science Technology and Research (USTAR) initiative, led a similar study that reported a correlation between high altitudes and higher suicide rates.
His work — published in the American Journal of Psychiatry — also crunched 20 years’ worth of data provided by the CDC. That effort revealed that nine states in the so-called “Intermountain West” region of the country (Montana, Idaho, Wyoming, Utah, Colorado, Nevada, New Mexico, Arizona and Oregon) all ranked among the top 10 states in the nation in terms of suicide rates.
Noting that these states have some of the highest altitudes in the country, Renshaw’s analysis concluded that high altitude seems to be an independent risk factor for suicide, particularly among people already prone to depression and mood disorders.
“So my take on this new study is that it’s wonderful that independently of each other we got to the same point,” Renshaw said. “Because within the suicidology world, we are always concerned that we are missing something, or that this isn’t relevant. But here, this group is probably even more methodologically sophisticated than we are, so the fact that we did much the same thing and they have replicated our finding is a very good thing.”
“I’m also not surprised that they found that suicide rates differ from the overall mortality experience in high altitude places,” he added. “Because many people do seem to adapt quite well to living in a higher altitude, and there’s something about committing suicide that’s clearly very different from mortality risk.”
“But for those people with pre-disposing factors to suicide, like depression and emotional distress, there really appears to be something quite pernicious about living at a higher altitude,” he concluded. “And this confirming finding puts us all in a better position to further explore the subject and get a better understanding of what’s going on.”
More information
The U.S. Centers for Disease Control and Prevention has more on suicide.
SOURCES: Barry E. Brenner, M.D., Ph.D., professor, emergency medicine and internal medicine, and program director, department of emergency medicine, University Hospital Case Medical Center, Cleveland; Perry F. Renshaw, M.D., Ph.D., professor, psychiatry, Utah School of Medicine, and investigator, Utah Science Technology and Research (USTAR) initiative; January 2011 High Altitude Medicine & Biology, online
Last Updated: Jan. 19, 2011

Appendix III
Thursday, June 14, 2012
Bronchial Resonance
Andre Willers
14 Jun 2012
Synopsis :
Oscillating Positive Expiratory Pressure (OPEP) at about 15 Hz not only clears mucus , but also enhances oxygen uptake and prevents alveoli collapse .
Discussion :
A simple and elegant device (see Appendix II) induces resonances in the bronchial system at around 15 Hz (6-26 Hz) . This is a resonance of the mucus bacterial film , breaking it up . For evolutionary reasons (see Appendix IV) , the cilia of the airways operate optimally at this frequency . In other words , “hard” plaques liquefy and can be expelled .
This has an obvious effect on the bronchial system . Pumping out mucus (ie obstructions) from the upper regions of the bronchial tree causes a capillary pumping-out of muck out of the alveoli . Which frees them up for better gas transfer .
There is an ancillary effect . The walls of the narrower bronchial tubes also have a greater gas-transfer capability , once cleared of insulating mucus . They then operate more like the much more efficient bird-lungs . Will this trigger some epigenetic switching-on of bird-lung genes in the human genome ? I do not know , but it is likely .
Any athlete will better his performance by using this technique .
There is an intriguing catch-up effect . The brain normally limits performance to 2/3 of maximal (Noakes) . But this takes time . Using OPEP just before an exertion means a multiplier of 3 of the increment in performance . Enough to win .
Instant Yoga :
The range 8 – 15 – 26 Hz is intriguingly close to brainwave frequencies . Using it will entrain brainwaves through pressure-wave fluctuation , especially at the synapse level (ie neurotransmitter densities will fluctuate in resonance with pressure oscillations from the bronchial system) .
Ohm mane padme sum !

Which brings us to the !Click language .
Or its descendant , the glottal stop . These interrupt the expiration in patterned ways . I tried it with the standard double-click (two teeth-clicks , followed by two tongue-clicks during expiration , using a Flutter device (see Appendix II)) . The effect was an enhanced clarity of mind , which I ascribe to enhanced oxygen intake . Be cautious . !Click combined with Flutter devices can have major biosystem effects .
Interesting Asides :
1.Are other plaques (eg arteries , alzheimers , infections , etc) also susceptible to resonances at around these frequencies ?
2.Music , of course . Heavy rhythm , where the body resonates with the sound .
3.Smoking . At first glance , all smokers should use this technique . But if it leads to switching on some bird-lung genes , this would make them more susceptible to bird-flu . I simply do not know enough .
4.Coughing : this is an OPEP system . Time it . The frequency is close to 15 Hz, but only in short bursts. The artificial Flutter system is better .
5.Snoring : This is mostly on expiration (see Appendix III) . An attempt by the body system to induce a Flutter system on expiration . Thus , using a Flutter artificial system should cure most snoring problems .
6.Hiccups : it stops them cold . Also its little brother , that pesky reflux (heartburn) .
7.Will it have an effect on the pylorus valve and the duodenal peristalsis , and hence Diabetes ?
(See “Cure for diabetes” May 2012)
I have no idea at present . Watch the next thrilling episode .

Where to get it in South Africa: See Appendix V

Life doesn’t suck . It is a blow job .
Andre .

Appendix I
A good summation
Oscillatory PEP Therapy
OPEP therapy was first developed and described in Switzerland, as an adjunct or supplement to traditional airway-clearance methods.
Appendix II
When the oscillation frequency approximates the resonance
frequency of the pulmonary system, endobronchial pressure oscillations are amplified and result
in vibrations of the airways. The vibrations produced by these oscillations cause the "fluttering"
sensation from which the FLUTTER derived its name. These vibrations loosen mucus from the
airway walls. The intermittent increases in endobronchial pressure decrease the collapsibility of
the airways during exhalation, increasing the likelihood of clearing mucus from the
tracheobronchial tract. The airflow accelerations increase the velocity of the air being exhaled,
facilitating the movement of mucus up the airways

Appendix III
More info
Appendix IV
Mobilization of mucus by airway oscillations.
Freitag L, Kim CS, Long WM, Venegas J, Wanner A.
Pulmonary Division, University of Miami, Mt. Sinai Medical Center.
The effects of high frequency asymmetric airway oscillations on mucus clearance were evaluated in excised tracheas of sheep, in an animal model of excessive mucus production, and in patients with bronchiectasis. Asymmetric high frequency ventilation (15 Hz) with expiratory biased flow profiles (expiratory peak-flow greater than inspiratory peak-flow) could move mucus droplets towards the pharynx in rigid and flexible tracheas by gas-liquid interaction. In rigid tracheas the mucus was transported towards the periphery of the model lung if the oscillations were inspiratory biased. In very collapsible tracheas, however, even inspiratory biased oscillations moved the mucus cephalad. Parameters influencing direction and speed of mucus are airflow profile, peak-flow, airway compliance and lung resistance. Gamma-camera studies showed that in anesthetized dogs radiolabeled artificial mucus followed the direction of the bias during high frequency ventilation. In five human volunteers with bronchiectasis and excessive secretions the asymmetric airway oscillations were superimposed during spontaneous breathing using a mouthpiece. Airway wall vibrations following the pressure swings of the oscillator could be observed. During forced expiration inward bulging of the posterior membranes of trachea and bronchi occurred at the negative pressure phase of the oscillations. This event was associated with increased appearance of sputum in the central airways. We conclude that high frequency ventilation with asymmetric flow profiles applied via tube or mouthpiece might be an effective future treatment of mucostasis.


Appendix V
Where to get it in South Africa
Price R90
Pmb (0331) 903271
Jhb 082 900 7103
Cpt 082 871 6855
Pmb 082 900 3187

How to trace Zevatron guns .


Andre Willers
29 Nov 2013
Synopsis :
Meddling with physical constants can be traced .
Discussion :
1.Briefly , meddling like inducing energies above 10^20 eV causes changes in C (lightspeed) and G (gravitational constant) .
These can easily be tracked by GPS or accelerometers . It surfaces as an anomalous acceleration .
Difficult to pick up at a single measurement point , but easy if a swarm of measurements is used .
2.It will show as an anomalous net vector in movement .
3.Cellphones , drones , etc .

4.Historical data :
These are all on record . (Echelon) .
You can see who has been screwing you around in the past .
5.It is then possible to see where those using Zevatrons or other Reality-changing mechanisms hung out .
6.Real Time :
The System Administrator of the simulation must have a higher clock-speed (say Cg) than C . (Else He cannot be inside the decision loop) .
A swarm can then detect the difference between accelerations caused by Cg or C alterations , as a vector at least .
Say for Zevatron-cloaked warheads .
Though why anyone would bother with warheads escapes me .
So passé .
Isn’t the Singularity run-up fun ?
If you survive it .
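The swarm idea above can be illustrated numerically : averaging N independent noisy readings shrinks the sensor noise by roughly sqrt(N) , so a small common acceleration bias that is invisible to any one device stands out in the swarm mean . A toy sketch , with the bias and noise figures invented purely for the demo :

```python
import random

random.seed(42)
true_bias = (0.002, -0.001, 0.0)  # hypothetical anomalous acceleration, m/s^2
noise_sigma = 0.05                # per-device sensor noise, m/s^2
n_devices = 100_000

# Sum noisy 3-axis readings across the whole swarm.
sums = [0.0, 0.0, 0.0]
for _ in range(n_devices):
    for axis in range(3):
        sums[axis] += true_bias[axis] + random.gauss(0.0, noise_sigma)

mean = [s / n_devices for s in sums]
# Noise on the mean is sigma / sqrt(N), about 0.00016 m/s^2 here,
# which is well below the 0.002 m/s^2 bias we are hunting.
print("recovered net vector :", [round(m, 4) for m in mean])
```

A single device (N = 1) would see the bias buried 25 standard deviations deep in its own noise ; the swarm recovers it cleanly .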


How large can hailstones grow ?

Giant Hailstones

Andre Willers
29 Nov 2013
“Everybody gets ice . The poor in winter and the rich in summer.”  Bat Masterson

Synopsis :
How large can hailstones grow in a super-cell thunderstorm ?
Discussion :
1.Ball-shaped hailstones grow by accretion as they are bounced up and down by updrafts and downdrafts .
See Appendix I , Appendix II .
2.The updraft velocity must be greater than the terminal velocity of the hailstone . (Else it simply falls to the ground)
3.The terminal velocity of hailstones is approximated in Appendix III .
V = 8.45 * D^0.553 , where V is terminal velocity in m/s and D is diameter in cm .
An upper-range updraft of 175 miles per hour (78.23 m/s) is used , as per Appendix II .
This gives
D = (V/8.45) ^ (1/0.553)
D = 55.95 cm diameter hailstone .
This is about the size of a fitness ball

About 80 kg of ice .
A comparative coconut (5 inches = 12.7 cm diameter) gives an ice mass of about 1 kg .
This must have happened before , but nobody believed the survivors .
4. So , things can be a lot worse .
And will be , as global warming drives up updraft velocities .
Icily yours
Andre .
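The arithmetic in points 2 and 3 above can be checked directly . A minimal Python sketch , assuming an ice density of about 917 kg/m3 (not stated in the text) :

```python
import math

def terminal_velocity_ms(diameter_cm):
    """Appendix III fit: V = 8.45 * D^0.553 (V in m/s, D in cm)."""
    return 8.45 * diameter_cm ** 0.553

def max_diameter_cm(updraft_ms):
    """Invert the fit: the largest hailstone the updraft can still support."""
    return (updraft_ms / 8.45) ** (1 / 0.553)

def ice_mass_kg(diameter_cm, ice_density=917.0):
    r = diameter_cm / 200.0  # cm diameter -> m radius
    return ice_density * (4.0 / 3.0) * math.pi * r ** 3

updraft = 175 * 0.44704  # 175 mph in m/s
d = max_diameter_cm(updraft)
print(f"updraft : {updraft:.2f} m/s")
print(f"max diameter : {d:.1f} cm , mass : {ice_mass_kg(d):.0f} kg")
print(f"coconut-sized (12.7 cm) : {ice_mass_kg(12.7):.1f} kg")
```

This reproduces the fitness-ball diameter and roughly 80 kg mass ; note that a coconut-sized 12.7 cm stone of solid ice comes out at about 1 kg .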

Appendix I
A discussion of how hail forms.

Appendix II
Ever wonder how fast air is rising into the sky during a developing thunderstorm? What about during a tornado? Here I will try to explain what I know about vertical wind speeds in the centers of nature's most violent storms.

Here I will start with the basics. Convection is simply the rising of warm air and the sinking of cooler air. To have clouds, there must be rising, warm and moist air (compared to air surrounding the air "parcel") where the moisture condenses at a certain altitude, forming the cloud. This rising air may be a thermal, from the uneven heating of the earth's surface, or forcing / convergence caused by a front, mountain range, or inflow of air into a low pressure system.

Another important thing is that more heat energy is transferred to this air parcel as the water vapor condenses (or freezes). This makes the air parcel warmer and causes it to rise faster. About 540 calories of energy are released as a single gram of water condenses! When each gram freezes, an additional 80 calories is released. The small fair weather cumulus often have updraft speeds of about 5 MPH.

The really impressive updraft speeds occur in developing thunderstorms and especially in supercells. In a general (non-severe) thunderstorm, the development and early-mature cycle is when the updraft is strongest before downdrafts begin to disrupt the storm. Typical speeds range from about 15 to 30 MPH, or roughly 1,200 to 2,500 feet per minute. At this rate, the relatively "weak" storm reaches a height of about 30,000 feet in 15 minutes, and may last only a half hour.

Severe thunderstorms, require much stronger updraft speeds and depend on the type of storm. Multicell lines generally have weaker updrafts than multicell clusters but are arranged in a "curtain". The updraft speeds in a multicell line storm are a bit stronger than the single cell general storm described earlier. Multicell cluster storms often have updraft speeds around 60 MPH in developing components, or about 5,500 feet per minute. This is quite fast, keeping in mind that most general aviation aircraft can only climb up to 3,000 feet per minute (200 Super King Air).

This is also why pilots should NEVER try to "out climb" the top of a developing thunderstorm. The strongest updraft speeds lie with the most intense kind of thunderstorm, the supercell. A supercell is a "continuous cycle" storm, meaning that it has an updraft side and downdraft side at the same time which are separated from each other allowing the storm to last much longer than 30-45 minutes.

The updraft of a supercell also has a broad low and / or mid-level rotation (mesocyclone) which may further boost its speed. Supercell updrafts generally are stronger than 50 MPH, but 70 or 80 MPH is more typical. In the Great Plains of the United States, supercells often produce baseball and grapefruit sized hail (not to mention tornadoes) because of the extreme speeds of the updrafts within. Such updrafts have been known to reach 150 to 175 MPH, or about 12,000 to 15,000 feet per minute!

No aircraft except for military fighter jets with afterburner power could climb at these rates (for example, the F4 Phantom and Lear 35 Jet both have maximum climb rates less than 8,000 feet per minute). This is why a supercell can literally go from "blue sky to tornado" in a "New York minute". At 15,000 feet per minute, an air parcel will go from ground level to 45,000 feet in only 3 minutes!

An experiment was done via special weather balloon to find out how quickly a supercell updraft will carry it. The device was released into the inflow side of an HP supercell in the Great Plains and ingested into the storm. Only 2 and a half minutes later, the balloon was in the anvil of the storm. It rode the high-velocity core of the storm and gave vital information on the structure of the storm and internal dynamics.

Supercell storms are the most dangerous to aviation. Visibility and wind-shear are the most obvious threats at low levels, however, the updraft and mesocyclone is usually strongest at 20,000 feet. A commercial airliner flying though such a storm will most likely have its wings torn off, and this has happened to planes trying to fly through severe thunderstorms.

Another pilot's horror story was an L1011 trying to fly through a "hole" in a multicell cluster of severe thunderstorms. Invisible to the pilot was that baseball sized hail was falling through that "hole" in the storm, and serious damage to the aircraft was sustained (cracked windows, cratered leading edges of wings, and crushed engine nacelles).

The most amazing stories come from several incidents of people who were unfortunate enough to parachute into a thunderstorm. Imagine a 100 MPH updraft, your parachute is descending at 10 MPH ... Do the math, this means you will go back UP at 90 MPH!

In the book "The Man Who Rode The Wind", a true story of a pilot who ejected into a thunderstorm at 45,000 feet is described. He ejected from an F8 Crusader and descended into the developing storm until his parachute deployed at 10,000 feet. He became caught in the storm updraft and actually re-ascended under his chute to 26,000 feet. Thin air caused him to pass out and the cold caused intense frost bite during his ride up and down the inside of the storm. The water inside the cloud nearly made him drown in mid air!

He was constantly slammed around by the extreme turbulence and at one point his body appeared to be ABOVE his parachute. Finally, the storm weakened and he descended back to earth 30 minutes later. A person found him in a field, severely injured but alive. This storm was not even a severe storm, just a strong summer 30-45 minute long storm. Imagine if this storm was a supercell.

Another incident happened in Germany where 5 parachutists fell victim to a thunderstorm updraft. All landed covered in ice after their wild ride ... yes, they became the "cores of hail stones". Only one of the 5 survived.

Other strange phenomena occur when a tornado picks up debris and it becomes involved with the main updraft of the supercell. This accounts for "rains" of frogs and fish if the tornado passes over water and dumps them far from their point of pickup. Some fish encased in ice occurred with one such incident. Other objects such as appliances, roof shingles, insulation, plants, even a computer floppy disk and a desk have landed miles away from a violent storm.
Appendix III
Another discussion about terminal velocities of hailstones .


Thursday, November 28, 2013



Andre Willers
28 Nov 2013
Extreme events lying outside power-laws , called Dragon-Kings, might be predictable and preventable .
Discussion :
1.This is not intuitively obvious . Black-Swan events fall within power laws , but Dragon-Kings do not .
We cannot easily predict Black Swans , but it seems that Dragon-Kings paradoxically are easier to predict and modify .
2.Power Laws are found as a result of feedback processes of a certain complexity .
See Appendix II . These orders of Complexity or Randomness can be indicated as Beth(0) =Random as a coin flip , then  Beth(1) , Beth(2) etc.
3.Systems in the same Beth(x) level follow Power Laws . But if a system includes Beth(y) levels , where x<>y , then Dragon-King events can occur .
4.Hence the description as in Appendix I .
Note the term “master-slave” in the coupled circuits . This indicates different Beth levels .
5.What does this mean ?
Self-aware intelligences (like humans) have varying Beth levels . Thus , any system involving them is prone to dragon-king events . In other words , it is unstable , with sudden , huge saltations . See History .
Dragon-kings might be easier to predict than Black swans , but I doubt whether modifying them in really complex systems will be easy .
From Appendix I
“The pair went on to show that they could reliably forecast when a big event was about to happen: whenever the differences between the circuits' oscillations decreased to a certain value, a leap of dragon-king proportions was almost always imminent.”
Not surprisingly , social media like Facebook , Twitter , etc , plus economic globalization are decreasing oscillations between various countries . The “Arab Spring” can thus be thought of as a Dragon-king event .
Expect more at increasing frequencies . It is a positive feedback process .
6.The Singularity : The Dragon-Emperor .
In the run-up to the Singularity , Dragon-king events become more frequent until they go asymptotic : the Singularity .
A better estimate of the human Singularity should be possible using the techniques described in Appendix I .
Still at 2028 , but looking at the variation of the Gravitational constant (see  repeated in Appendix III) , Dragon-Emperor events might be occurring as we speak .
Happy Singularity !

Appendix I
Slaying dragon-kings could prevent financial crashes
·         20 November 2013 by Lisa Grossman
·         Magazine issue 2944
HUGO CAVALCANTE saw the disaster coming. From his lab at the Federal University of Paraíba in Brazil, he detected the warning signs of an epic crash. At the last minute, he managed to nudge his system back to safety. Crisis averted.
OK, so Cavalcante's impending crisis was only a pair of credit-card-sized circuits that were about to start oscillating out of sync – hardly the stuff of the evening news. But the experiment is the first to show that a class of extreme events, colourfully called dragon-kings, can be predicted and suppressed in a real, physical system. The feat suggests that some day we may also be able to predict, or in some cases prevent, some of the catastrophes in the real world that seem unstoppable, including financial crashes, brain seizures and storms.
"People were hoping if you could forecast extreme events, maybe we could find a way to control them," says Cavalcante's colleague Daniel Gauthier at Duke University in Durham, North Carolina. "We were able to completely suppress the dragon-king events."
Dragon-kings aren't the first animal used to describe a class of catastrophic events. In 2007, Nassim Taleb published a book called The Black Swan, his name for catastrophes that always catch us off-guard. But though difficult to predict, black swans actually fall within an accepted mathematical distribution known as a power law, which says there will be exponentially more small events than large ones (see diagram).
Most events or objects found in a complex system – including earthquakes, hurricanes, moon craters, even power imbalances in war – also obey a power law, a ubiquity that some say hints at a deeper organising principle at work in the universe. Others, like Taleb, focus on the fact that a power law can't predict when black swans will occur.
Now there's another beast to reckon with. In 2009, Didier Sornette at the Swiss Federal Institute of Technology in Zurich reported that some events lift their heads above the power law's parapet, the way a king's power and wealth vastly outstrip that of the more plentiful peasant. So big that they should be rare, these events have a greater probability of occurring than a power law would mandate.
"There seem to be certain extremes that happen much more often than they should if you just believe the power-law distribution predicted by their smaller siblings," Sornette says.
He christened them dragon-kings. The dragon part of the name stems from the fact that these events seem to obey different mathematical laws, just as a dragon's behaviour differs from that of the other animals.
Sornette got his first whiff of dragon-kings when studying cracks that develop in spacecraft. Since then, he has spotted them everywhere, from a rainstorm that hit Venezuela in 1999 and the financial crashes in 2000 and 2007, to some epileptic seizures.
But he wasn't satisfied with merely recognising dragon-kings. The fact that they don't follow a power law suggests they are being produced by a different mechanism, which raises the possibility that, unlike events that follow the power law, dragon-kings may be predictable.
He and his colleagues have had some success, predicting a slip in the Shanghai Stock Exchange before it happened in August 2009 and using a few electrical pulses to suppress seizures that might have become dragon-kings in rats and rabbits. But the difficulty of running controlled experiments in real financial systems or brains prevented them from going any further.
Enter Cavalcante and Gauthier's oscillating circuits. Gauthier spent the early 1990s studying pairs of identical circuits that behaved chaotically on their own, but would synchronise for long periods of time when coupled in a certain way. "It's a little bit politically incorrect, but it's sometimes called the 'master-slave' configuration," Gauthier says. He coupled the two circuits by measuring the difference between the voltages running through them, and injecting a current into the "slave" circuit to make it more like the "master". Most of the time this worked and the two would oscillate together like a pair of swinging pendulums, with only slight deviations away from synchronisation.
But every so often, the slave would stop following the master and march to its own beat for a short time, before getting back in step. Gauthier realised at the time that there were recognisable signs that this disconnect was about to happen. It wasn't until he saw Sornette's work that he checked for dragon-kings.
He and his colleagues have now shown that the differences in the circuits' voltages during these desynchronisations are indeed dragon-kings. "They were as big as the system would physically allow, like a major disaster," Gauthier says.
The pair went on to show that they could reliably forecast when a big event was about to happen: whenever the differences between the circuits' oscillations decreased to a certain value, a leap of dragon-king proportions was almost always imminent. And once they saw it coming, they found they could apply a small electrical nudge to the slave circuit to make sure it didn't tear away from its master (Physical Review
"We basically kill the dragon-king in the egg," Sornette says. "The counter-mechanism kills it when it is burgeoning."
It's a long way to go from a pair of coupled circuits to the massive complexity of the real world. But by using this simple system to find out at what stage in the process a dragon-king can be prevented, Sornette hopes to see whether financial regulation could prevent a crash once a stock market bubble has already begun to grow, a controversial topic among regulators.
"The fear of central banks is that their intervention might actually worsen the situation and trigger the crashes, destabilising the system even further," he says. "That's the type of insight we could test and check and probe with our system."
Some physicists think the gap between so-called low dimensional systems like the pair of oscillators, which can be described by just three variables each, and real-world complex systems like the stock market, is too wide to bridge. "The conclusions of the paper appear correct and interesting for people studying low dimensional chaos," says Alfred Hubler of the University of Illinois at Urbana-Champaign. "But in the real world, low dimensional chaos is very rare. Most real-world complex systems have many interacting parts."
Others agree with Sornette that having a simple physical system to manipulate will be useful. "Having a mechanical system where you can explore it in the lab is crucially important," says Neil Johnson at the University of Miami in Coral Gables. He studies dragon-kings in simulations of stock markets and traffic jams and can't wait to start using a pair of oscillators to see how they relate.
Sornette thinks the circuits are just the beginning of a future in which we can monitor, diagnose, forecast and ultimately control our world. "I think we are on the verge of a revolution where we are going to be able to steer our planet better, informed by this kind of science." It's quite a promise – not all storms, seizures and crashes are dragon-kings, after all. But we now have a tool to explore how to deal with those that are.
This article appeared in print under the headline "Crashing market, hidden dragon?"
Appendix II
Friday, August 15, 2008
Orders of Randomness 2
Orders of Randomness 2
Andre Willers
15 Aug 2008

See : “Orders of Randomness”

I have been requested to expand a little on orders of Randomness and what it means .
Please note that human endeavours at this date use only randomness of the order of flipping a coin ( Beth(0) )

Aleph is the first letter of the Hebrew Alphabet . It was used by Cantor to denote
Classes of Infinity (ie Aleph(0) for Rational numbers , Aleph(1) for Irrational Numbers , etc )

Beth is the second letter of the Hebrew Alphabet . It means “House”

I will first repeat the derivation of Orders of Randomness from : “Orders of Randomness” because it is so important .

Start Quote:
First , simple Randomness .
Flip of a coin .
Heads or Tails . 0 or 1
Flip an unbiased coin an infinite number of times , write the results down below each other and do it again .
All possible 0 and 1’s

An example : Beth(0)
Flips(1) 0,1,1,1,1,… etc
Flips(2) 0,1,1,1,0,… etc
Flips(infinity) 0,0,0,0,0,0,…etc

This describes all possible states in a delineated binary universe .
“delineated binary” means a two sided coin which cannot land on its side .

Now draw a diagonal line from the top left of Flips(1) to Flips(infinity) .
At every intersection of this diagonal line with a horizontal line , change the value .
The Diagonal Line of (0,1)’s is then not in the collection of all possible random
Horizontal coin-Flips(x) .

This means the Diagonal Line is of a stronger order of randomness .
This is also the standard diagonal proof that the Irrational Numbers are uncountable .

This is the standard proof of aleph numbers .
Irrational numbers ,etc
Since any number can be written in binary (0,1) , we can infer that the order of randomness is the same as aleph numbers .

This means we can use number theory in Randomness systems .
Very important .

Google Cantor (or Kantor)

Define coin-flip Randomness as Beth(0) , analogous to Aleph(0)
Then we have at least Beth(1) , randomness an order stronger than flipping a coin .
Then we can theorize Beth(Omega) <->Aleph(Omega) .

End Quote
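The diagonal construction in the quote can be checked mechanically on any finite table of coin-flips. A minimal sketch (finite prefixes only, whereas the real argument needs infinitely long rows; the example table is an arbitrary choice):

```python
def diagonal_flip(table):
    """Flip the diagonal of a square 0/1 table: take bit i of row i and invert it.
    The resulting row differs from row i in position i, so it cannot
    equal any row already in the table."""
    return [1 - table[i][i] for i in range(len(table))]

flips = [
    [0, 1, 1, 1, 1],
    [0, 1, 1, 1, 0],
    [1, 0, 1, 0, 1],
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1],
]
new_row = diagonal_flip(flips)
print(new_row)                            # differs from every row in the table
assert all(new_row != row for row in flips)
```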

Cardinal Numbers .

The cardinal number is the index x of Aleph(x) .
Cantor proved that
Aleph(n+1) = 2 ^ Aleph( n )

Where n is the cardinal number of the infinity .

Tying them together :
He also proved that
P(A) = 2^ n
Where A is any set , P(A) is the PowerSet of A and n is the cardinal number of set A
Thus , Cardinal Number of P(A) =(n+1)

The PowerSet of A = the Set of all subsets of A .
This sounds fancy , but it is simply all the different ways you can combine the elements of set A . All the ways you can chop up A .
You can see it easily in a finite binomial expansion (1+1)^n = P(A) = 2^n
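The identity can be checked directly: the PowerSet of a finite set A with n elements always has 2^n members, matching the binomial expansion (1+1)^n. A small sketch (the example set is arbitrary):

```python
from itertools import combinations

def power_set(elements):
    """All subsets of a set -- every way of 'chopping up' A."""
    s = list(elements)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

A = {'a', 'b', 'c'}
subsets = power_set(A)
print(len(subsets))        # 2 ** 3 = 8
assert len(subsets) == 2 ** len(A)
```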

See : “Infinite Probes”
There we also chop and dice , using infinite series .

Can you see how it all ties together ?

Why 2 ?

This derives from the Delineation Axiom . Remember , we can only talk about something if it is distinct and identifiable from something else . This gives a minimum of 2 states : part or non-part .

That is why the Zeta-function below is described on a 2-dimensional plane , or pesky problems like Primes always boil down to 2 dimensions of some sort .

This is why the irrational numbers play such an important part in physics .
Z=a+ib describes a 2-dimensional plane useful for delineated systems without feedback systems

It’s in the axiom of Delineation , dummy .

But we know that Russell proved that A+~A smaller than Universum .
The difference can be described as the Beth sequences . Since they are derivatives of summation-sequences(see below) , they define arrows usually seen as the time-arrows .

These need not be described à la Dunne’s serial time , as different Beth levels address the problem adequately without multiplying hypotheses .

Self-referencing systems and Beth sequences .

A Proper Self-referencing system is of one cardinal Beth number higher than the system it derives from .
Self-referencing systems (feedback systems) can always be described as sequences of Beth systems . Ie as Beth(x) <-> Beth(y) . The formal proof is a bit long for inclusion here .

The easiest way to see it is in Bayesian systems . If Beth(x) systems are included , Bayesian systems become orders of magnitude more effective .

Life , civilization and markets are such . See below .

Conservation Laws :
By definition , these can always be written in a form of
SomeExpression = 0

Random (Beth(0)) Walk in Euclidean 2-dimensions

This is a powerful unifying principle derived from the Delineation Axiom .

In Random Walk the Distance from the Center is = d * (n)^0.5 . This is a property of Euclidean systems .
(Where d = step , n=number of random beth(0) steps)

Immediately we can say that the only hope of the Walker returning to the center after an infinity of Beth(0) steps is if d ~ 1/(n)^0.5 . This is the Riemann Hypothesis .
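The d * (n)^0.5 law can be checked by simulation. A rough Monte Carlo sketch (the walk count, seed, and step model -- unit steps in a uniformly random direction -- are assumptions of the sketch, not part of the text):

```python
import math
import random

def rms_distance(n_steps, n_walks=1000, seed=1):
    """Root-mean-square distance from the origin after n_steps unit steps
    of a 2-D random (Beth(0)) walk, each step in a uniformly random direction."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x = y = 0.0
        for _ in range(n_steps):
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total += x * x + y * y
    return math.sqrt(total / n_walks)

# With step size d = 1 , distance ~ d * sqrt(n) :
# quadrupling the number of steps should double the RMS distance.
ratio = rms_distance(400) / rms_distance(100)
print(ratio)   # close to 2
```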

Now , see a Universum of 2-dimensional descriptors z=a+ib

Sum all of them . Add together all the possible things that can be thus described .

This can be done as follows :
From z=a+ib , exponentiate both sides :
e^(z) = e^(a) . e^(ib)
Raise both sides to the ln(j) power , where j is a real integer :
j^(z) = j^(a) . e^(i.b.ln(j))

Now , sum them :
Zeta=Sum of j^(z) for j=1 to infinity

Now we extract all possible statements that embody some Conservation Law . Beth(1)

This means that Zeta is zero for the set of extracted statements if and only if (b.ln(j)) is of the order of Beth(0) and a=(-1/2)

Tensors .
The above is a definition of a tensor for a discontinuous function .

Riemann’s Zeta function.
This can describe any delineated system .
If Zeta = 0 , conservation laws apply .

Zeta = Sigma(1/j )^z for j=1,2,3,…,infinity and z=a+ib , where z is complex and i =(-1)^0.5
The z bit is in two dimensions as discussed above .

This function has a deep underlying meaning for infinite systems .
If you unpack the Right-Hand side on a x-yi plane you get a graph that looks like a random walk .

If every point is visited that a random walk would visit over infinity (ie all) , without clumping , then Zeta can only be non-trivially zero if a=(-1/2) .

Why (x – yi) plane ? See “Why 2 “ above . The system is fractal . Two dimensions are necessary in any delineated system .

Remember , randomwalk distance from origin = step*sqrt(number of steps) .
So if the steps = 1/ ( sqrt(number of steps) ) , then the Origin might be reached if and only if a= -1/2
This is easily proven .
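The partial sums of Zeta can be examined numerically. A hedged sketch: it uses the alternating (Dirichlet eta) form, which converges for real part greater than 0 and therefore reaches the critical line; in conventional notation s = 1/2 + ib corresponds to the text's a = -1/2 in (1/j)^z. The number of terms is an arbitrary accuracy choice.

```python
import math

def zeta(s, terms=100000):
    """Riemann zeta via the alternating Dirichlet eta series,
    valid for Re(s) > 0 , s != 1 :  zeta(s) = eta(s) / (1 - 2**(1 - s))."""
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

# Sanity check at a real argument : zeta(2) = pi^2 / 6
assert abs(zeta(2) - math.pi ** 2 / 6) < 1e-6

# On the critical line , near the first nontrivial zero (b ~ 14.1347) :
print(abs(zeta(complex(0.5, 14.134725))))   # small, consistent with a nearby zero
```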

If a= - 1/2 , then b can be any real function . This would include Beth(0) and Beth(1) , but not higher orders of beth .

If a= -1/2 and b is an unreal number , then a cannot be equal to -1/2 anymore . Conservation cannot hold at any level .

Conservation Laws can only hold for Beth(0) and Beth(1) systems .

This is forced by the two dimensions of delineation .

Mathematically , this means that Beth(2+) systems of feedbacks can only be described in terms of attractors or/and fractal systems (ie not in isolation)

Physically , conservation of energy and momentum need not hold for Beth(2+) systems .

This has an interesting corollary in decryption (unpacking) . A Beth(2) mind unpacking Beth(0) or Beth(1) encryption is functionally equivalent to Non-Conservation of Energy .

Some other consequences :
If a< -½ , then Riemannian Orbitals are described . Beth(any)
Also described as nuclei , atoms .

If a> -½ , then a diffuse cloud is described . Beth(any)
Also described as magnetic effects .

What does this mean?
Present technology uses Beth(x) technology in a rather haphazard way (Quantum physics) .

A better understanding will bring about a sudden change in capability .

Appendix III
Wednesday, November 27, 2013
Zevatron Gun
Zevatron Gun

Andre Willers
27 Nov 2013
Synopsis :
An ordinary firearm can fire zevatron beams by using a Zevatron bullet .

Discussion :
1.First , see “Desktop Zevatron” in Appendix I

2. The Zevatron Cartridge :
2.1 Propellant at base : standard
2.2 The next layer is fine conductive coils : wound or 3D printed
2.3 The last layer is buckyballs arranged in magnetron configurations . 3D or 4D printed .

3.How it works :
3.1 The gun barrel must be magnetized . Stroking with a permanent magnet will do in a pinch .
3.2 On firing , the bullet accelerates both linearly and angularly down the barrel .
3.3 This creates an EMP pulse from the coils interacting with the barrel’s magnetic field .
3.4 This EMP pulse propagates faster than the bullet and induces Chirping effects on the magnetron-like buckyballs .
3.5 When they implode , Zevatron level energies are released .
3.6 By aligning (3D printing) the buckyballs correctly , the zevatron beam can be concentrated or even collimated .
3.7 The initial setup will take a lot of calculation and experimentation , but it only needs to be done once . After that , manufacture as usual . Even at home , using 3D and 4D printers . (See )

4. The energy calculations are simple : just use a muzzle energy (eg 500 joules for a .45 handgun) and work backwards to required values for the various cartridge layers .
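As a worked example of that backward calculation, muzzle energy is just E = 1/2.m.v^2 . A sketch with typical .45 ACP figures (a 230-grain, roughly 14.9 g bullet at about 260 m/s -- assumed illustrative values, not measurements):

```python
def muzzle_energy(mass_kg, velocity_ms):
    """Kinetic energy of a bullet at the muzzle : E = 0.5 * m * v**2 (joules)."""
    return 0.5 * mass_kg * velocity_ms ** 2

energy = muzzle_energy(0.0149, 260.0)   # assumed .45 ACP mass and velocity
print(round(energy))                    # roughly 500 J, the figure quoted above
```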

5.What do you get ?
A thin beam of cosmic energy-level particles . At these energies , they can be collimated .
A true long range blaster , cutter , welder , anti-ballistic system , ultra-radar , etc

6.Safety aspects :
6.1 There will probably be backscatter of radiation of various types . Depends on the bullet .
6.2 Simulation effects :
If we are in a simulation , then use of Zevatrons will put stress on the simulation . This may already be happening .
See Appendix II . The value of the Gravitational Constant is changing . What one would expect if the “grid” is made finer to take Zevatrons into account . (See Appendix I)
Not only will this also interfere with quantum effects routinely used in computers , but the System Administrator (God , for all practical purposes) might decide to switch off this Locale as being too “expensive” or too much trouble .

7. Quantum pollution .
Zevatron observation is equivalent to pollution at the quantum level .

8. But the benefits are amazing .
8.1 A finer “subspace-grid” means stable trans-uranic elements , stable exotic matter , workable quantum computers .
8.2 Singularity : The jury is still out on whether it will snuff out humans or transcend them .

The ultimate Western .
A Universe in every holster .


Appendix I
Desktop Zevatron

Andre Willers
22 Nov 2013
We use the Schwinger limit to produce particles with energies greater than 10^20 eV .
Discussion :
1.If the thought experiment cannot be reproduced in “reality” , we are in a simulation . See Appendix B
2.Thought experiment :
Consider buckyballs in an arrangement like a magnetron . Then chirp the frequency (ie increase it) . The buckyball pockets will decrease and emit increasingly energetic particles until they implode at Zevatron energies .
This can easily be done in a small university lab . Or inside your body .
3.Makes a helluva weapon .
4.If energies go over 10^20 eV , then either
4.1 We are not in a simulation
4.2 The laws of physics get rewritten on the fly .

Or both
4.3 There is a quantum superposition (most likely)
We are in 1/3 simulation , but 2/3 superposition .

5.Resonance energy spectra :
The Zevatron will then have distributions typical of 1/3 , 2/3
6. Beth levels .
Pauli exclusion principle :
Taken as a general definition of delineation (identity) . The problem is that it is usually used in a binary sense , whereas trinary would be more correct .
Inverse Pauli principle .
Higher Beth levels distort the Pauli exclusion principle .
The observer has very marked effects on the observed process .
7. In a Zevatron , some observers would have a Talent for it , whereas others would squelch it .
Pauli was notorious for squelching experimental processes .
We want the opposite .
8. What does all this sthako mean ?
It means that we are living in a simulation 2/3 of the time , and deterministically 1/3 of the time , in so far time has any meaning .
9. The linkage is poetry , language , mathematics , music , physics , biology .
10 The nitty gritty :
Very high energy particle physics incorporates the observer . If you want a Zevatron , or cold fusion , or even hot fusion , you need
an Inverse Pauli Person in the loop .
11. Pollyanna Fusion .
Don’t knock it . At 10^20 eV it works .
12. Of course , it does not solve the Simulation problem . That is because you keep on thinking Y/N , whereas it is a little bit of this and a little bit of that .
13. Think of the universe as a congeries of information packets , each with a source and destination address , and some (just for the hell of it) with either or neither . Peregrinating through the Beth levels of meaning .
14. The Meaning of Life .
Beth (1) or Beth (2) levels : 1/3 basic physical ground states , 2/3 what you make of it .
Beth (3) and better : What you make of it .
15. Can you see why the Zevatron is such an interesting experiment ?
God is always inside your decision loop .
 An entity (whether an individual or an organization) that can process this cycle quickly, observing and reacting to unfolding events more rapidly than an opponent, can thereby "get inside" the opponent's decision cycle and gain the advantage.
Well , God cheats , since He is outside time (higher Beth levels in our terminology)
16 .With Zevatrons in play , God will have to jack up things a bit . And we are off to the races .
17 . You can’t win , but it was a fun race .
18 Zero point energy and Zevatrons .
Anything over the Schwinger limit generates zero-point energy . . (See Appendix A)
This can be done intra-cellular with 4D printers (see )
Never mind food . Energy can be obtained indefinitely by a simple injection of 4D printed molecules .
19 . 4D Printed wine .
The ultimate connoisseur’s delight . The wine adapts to the taster’s palate , taste receptors and immune system to tickle pleasure receptors .
20. 4D Printed Food .
Food ( and here I include medicines) reconfigure themselves inside the gut and even inside the cells to give maximum benefit on instructions from the Cloud .
Humans being humans , even now we can print 4D foods that will taste fantastic , but reassemble into non-fattening molecules when exposed to the digestive processes .
21 . Ho–ho–Ho ! The Petrol pill !
For long a BS story , this is now actually a theoretical possibility .
A 4D printed molecule packing some serious energy can be designed to re-assemble into a combustible hydrocarbon on exposure to water . The physics is very straightforward . This can actually be done . It will cost , but the military will love it .
22. Put a Tiger in your tank ! Circe Bullets .
Bullets with a payload of 4D printed DNA/RNA/Epigenetics can convert an enemy into a tiger , sloth or any animal .
23. I prefer variable biltong . 4D Print biltong just as you like it . Hard , salty crust with meltingly soft interior .
Whatever you do , don’t lose the nipple .
It is sad to see grown humans in perennial search of a 4D nipple .
One of Strauss’s lesser known works .
“The Tit-Tat Walz”
Appendix A
 However, two waves or two photons not traveling in the same direction always have a minimum combined energy in their center of momentum frame, and it is this energy and the electric field strengths associated with it, which determine particle-antiparticle creation, and associated scattering phenomena.

Appendix B

In the 1999 sci-fi film classic The Matrix, the protagonist, Neo, is stunned to see people defying the laws of physics, running up walls and vanishing suddenly. These superhuman violations of the rules of the universe are possible because, unbeknownst to him, Neo’s consciousness is embedded in the Matrix, a virtual-reality simulation created by sentient machines.

The action really begins when Neo is given a fateful choice: Take the blue pill and return to his oblivious, virtual existence, or take the red pill to learn the truth about the Matrix and find out “how deep the rabbit hole goes.”

Physicists can now offer us the same choice, the ability to test whether we live in our own virtual Matrix, by studying radiation from space. As fanciful as it sounds, some philosophers have long argued that we’re actually more likely to be artificial intelligences trapped in a fake universe than we are organic minds in the “real” one.

But if that were true, the very laws of physics that allow us to devise such reality-checking technology may have little to do with the fundamental rules that govern the meta-universe inhabited by our simulators. To us, these programmers would be gods, able to twist reality on a whim.

So should we say yes to the offer to take the red pill and learn the truth — or are the implications too disturbing?

Worlds in Our Grasp

The first serious attempt to find the truth about our universe came in 2001, when an effort to calculate the resources needed for a universe-size simulation made the prospect seem impossible.

Seth Lloyd, a quantum-mechanical engineer at MIT, estimated the number of “computer operations” our universe has performed since the Big Bang — basically, every event that has ever happened. To repeat them, and generate a perfect facsimile of reality down to the last atom, would take more energy than the universe has.

“The computer would have to be bigger than the universe, and time would tick more slowly in the program than in reality,” says Lloyd. “So why even bother building it?”

But others soon realized that making an imperfect copy of the universe that’s just good enough to fool its inhabitants would take far less computational power. In such a makeshift cosmos, the fine details of the microscopic world and the farthest stars might only be filled in by the programmers on the rare occasions that people study them with scientific equipment. As soon as no one was looking, they’d simply vanish.

In theory, we’d never detect these disappearing features, however, because each time the simulators noticed we were observing them again, they’d sketch them back in.

That realization makes creating virtual universes eerily possible, even for us. Today’s supercomputers already crudely model the early universe, simulating how infant galaxies grew and changed. Given the rapid technological advances we’ve witnessed over past decades — your cell phone has more processing power than NASA’s computers had during the moon landings — it’s not a huge leap to imagine that such simulations will eventually encompass intelligent life.

“We may be able to fit humans into our simulation boxes within a century,” says Silas Beane, a nuclear physicist at the University of Washington in Seattle. Beane develops simulations that re-create how elementary protons and neutrons joined together to form ever larger atoms in our young universe.

Legislation and social mores could soon be all that keeps us from creating a universe of artificial, but still feeling, humans — but our tech-savvy descendants may find the power to play God too tempting to resist.

If cosmic rays don't have random origins, it could be a sign that the universe is a simulation.

They could create a plethora of pet universes, vastly outnumbering the real cosmos. This thought led philosopher Nick Bostrom at the University of Oxford to conclude in 2003 that it makes more sense to bet that we’re delusional silicon-based artificial intelligences in one of these many forgeries, rather than carbon-based organisms in the genuine universe. Since there seemed no way to tell the difference between the two possibilities, however, bookmakers did not have to lose sleep working out the precise odds.

Learning the Truth

That changed in 2007 when John D. Barrow, professor of mathematical sciences at Cambridge University, suggested that an imperfect simulation of reality would contain detectable glitches. Just like your computer, the universe’s operating system would need updates to keep working.

As the simulation degrades, Barrow suggested, we might see aspects of nature that are supposed to be static — such as the speed of light or the fine-structure constant that describes the strength of the electromagnetic force — inexplicably drift from their “constant” values.

Last year, Beane and colleagues suggested a more concrete test of the simulation hypothesis. Most physicists assume that space is smooth and extends out infinitely. But physicists modeling the early universe cannot easily re-create a perfectly smooth background to house their atoms, stars and galaxies. Instead, they build up their simulated space from a lattice, or grid, just as television images are made up from multiple pixels.

The team calculated that the motion of particles within their simulation, and thus their energy, is related to the distance between the points of the lattice: the smaller the grid size, the higher the energy particles can have. That means that if our universe is a simulation, we’ll observe a maximum energy amount for the fastest particles. And as it happens, astronomers have noticed that cosmic rays, high-speed particles that originate in far-flung galaxies, always arrive at Earth with a specific maximum energy of about 10^20 electron volts.

The simulation’s lattice has another observable effect that astronomers could pick up. If space is continuous, then there is no underlying grid that guides the direction of cosmic rays — they should come in from every direction equally. If we live in a simulation based on a lattice, however, the team has calculated that we wouldn’t see this even distribution. If physicists do see an uneven distribution, it would be a tough result to explain if the cosmos were real.

Astronomers need much more cosmic ray data to answer this one way or another. For Beane, either outcome would be fine. “Learning we live in a simulation would make no more difference to my life than believing that the universe was seeded at the Big Bang,” he says. But that’s because Beane imagines the simulators as driven purely to understand the cosmos, with no desire to interfere with their simulations.

Unfortunately, our almighty simulators may instead have programmed us into a universe-size reality show — and are capable of manipulating the rules of the game, purely for their entertainment. In that case, maybe our best strategy is to lead lives that amuse our audience, in the hope that our simulator-gods will resurrect us in the afterlife of next-generation simulations.

The weird consequences would not end there. Our simulators may be simulations themselves — just one rabbit hole within a linked series, each with different fundamental physical laws. “If we’re indeed a simulation, then that would be a logical possibility, that what we’re measuring aren’t really the laws of nature, they’re some sort of attempt at some sort of artificial law that the simulators have come up with. That’s a depressing thought!” says Beane.

This cosmic ray test may help reveal whether we are just lines of code in an artificial Matrix, where the established rules of physics may be bent, or even broken. But if learning that truth means accepting that you may never know for sure what’s real — including yourself — would you want to know?

There is no turning back, Neo: Do you take the blue pill, or the red pill?

The postulated (hypothetical) sources of EECR are known as Zevatrons, named in analogy to Lawrence Berkeley National Laboratory's Bevatron and Fermilab's Tevatron, capable of accelerating particles to 1 ZeV (10^21 eV).

Appendix II
Puzzling Measurement of "Big G" Gravitational Constant Ignites Debate [Slide Show]
Despite dozens of measurements over more than 200 years, we still don’t know how strong gravity is

By Clara Moskowitz

BIG "G": Researchers at the International Bureau of Weights and Measures (BIPM) in Sèvres, France used a torsion balance apparatus (pictured) to calculate the gravitational constant, "big G,"—a fundamental constant that has proven difficult to measure. The latest calculation, the result of a 10-year experiment, just adds to the confusion.
Gravity, one of the constants of life, not to mention physics, is less than constant when it comes to being measured. Various experiments over the years have come up with perplexingly different values for the strength of the force of gravity, and the latest calculation just adds to the confusion.

The results of a painstaking 10-year experiment to calculate the value of “big G,” the universal gravitational constant, were published this month—and they’re incompatible with the official value of G, which itself comes from a weighted average of various other measurements that are mostly mutually incompatible and diverge by more than 10 times their estimated uncertainties.

The gravitational constant “is one of these things we should know,” says Terry Quinn at the International Bureau of Weights and Measures (BIPM) in Sèvres, France, who led the team behind the latest calculation. “It’s embarrassing to have a fundamental constant that we cannot measure how strong it is.”

In fact, the discrepancy is such a problem that Quinn is organizing a meeting in February at the Royal Society in London to come up with a game plan for resolving the impasse. The meeting’s title—“The Newtonian constant of gravitation, a constant too difficult to measure?”—reveals the general consternation.

Although gravity seems like one of the most salient of nature’s forces in our daily lives, it’s actually by far the weakest, making attempts to calculate its strength an uphill battle. “Two one-kilogram masses that are one meter apart attract each other with a force equivalent to the weight of a few human cells,” says University of Washington physicist Jens Gundlach, who worked on a separate 2000 measurement of big G. “Measuring such small forces on kg-objects to 10^-4 or 10^-5 precision is just not easy. There are many effects that could overwhelm gravitational effects, and all of these have to be properly understood and taken into account.”

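Gundlach's comparison is easy to sanity-check with Newton's law. The cell mass used below (roughly one nanogram) is an assumed order-of-magnitude figure, not a number from the article:

```python
# Newton's law: F = G * m1 * m2 / r^2 for two 1 kg masses 1 m apart.
G = 6.674e-11           # m^3 kg^-1 s^-2 (CODATA value, rounded)
m1 = m2 = 1.0           # kg
r = 1.0                 # m
F = G * m1 * m2 / r**2  # newtons
print(F)                # ~6.7e-11 N

# Compare with the weight of a typical human cell (~1 nanogram,
# an assumed order-of-magnitude figure) in Earth's gravity:
cell_mass = 1e-12             # kg (~1 ng)
g = 9.81                      # m/s^2
cell_weight = cell_mass * g   # ~1e-11 N
print(F / cell_weight)        # the attraction equals the weight of a few cells
```

The ratio comes out at roughly seven, matching the "few human cells" in the quote.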
This inherent difficulty has caused big G to become the only fundamental constant of physics for which the uncertainty of the standard value has risen over time as more and more measurements are made. “Though the measurements are very tough, because G is so much weaker than other laboratory forces, we still, as a community, ought to do better,” says University of Colorado at Boulder physicist James Faller, who conducted a 2010 experiment to calculate big G using pendulums.

The first big G measurement was made in 1798 by British physicist Henry Cavendish using an apparatus called a torsion balance. In this setup, a bar with lead balls at either end was suspended from its middle by a wire. When other lead balls were placed alongside this bar, it rotated according to the strength of the gravitational attraction between the balls, allowing Cavendish to measure the gravitational constant.
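In the idealized small-angle, point-mass case, the deflection method reduces to a closed-form expression for G in terms of the balance geometry, the twist angle, and the oscillation period. The numbers below are purely illustrative assumptions (roughly classroom-apparatus scale), not Cavendish's or Quinn's actual parameters:

```python
import math

# Idealized torsion-balance relation (small-angle, point masses):
#   G = 2 * pi^2 * L * r^2 * theta / (M * T^2)
# where L = length of the bar, r = center-to-center distance between a
# small ball and its large attractor, theta = equilibrium twist angle,
# M = mass of each large ball, T = free oscillation period of the balance.
# (The small-ball mass cancels out of the torque balance.)
def big_g(L, r, theta, M, T):
    return 2 * math.pi**2 * L * r**2 * theta / (M * T**2)

# Illustrative (hypothetical) parameters:
G = big_g(L=0.1, r=0.05, theta=5e-4, M=1.5, T=240.0)
print(G)  # on the order of 10^-11 m^3 kg^-1 s^-2
```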

Quinn and his colleagues’ experiment was essentially a rehash of Cavendish’s setup using more advanced methods, such as replacing the wire with a wide, thin strip of copper beryllium, which allowed their torsion balance to hold more weight. The team also took the further step of adding a second, independent way of measuring the gravitational attraction: In addition to observing how much the bar twisted, the researchers also conducted experiments with electrodes placed inside the torsion balance that prevented it from twisting. The strength of the voltage needed to prevent the rotation was directly related to the pull of gravity. “A strong point of Quinn’s experiment is the fact that they use two different methods to measure G,” says Stephan Schlamminger of the U.S. National Institute of Standards and Technology in Gaithersburg, Md., who led a separate attempt in 2006 to calculate big G using a beam balance setup. “It is difficult to see how the two methods can produce two numbers that are wrong, but yet agree with each other.”

Through these dual experiments, Quinn’s team arrived at a value of 6.67545 × 10^-11 m^3 kg^-1 s^-2. That’s 241 parts per million above the standard value of 6.67384(80) × 10^-11 m^3 kg^-1 s^-2, which was arrived at by a special task force of the International Council for Science’s Committee on Data for Science and Technology (CODATA) in 2010 by calculating a weighted average of all the various experimental values. These values differ from one another by as much as 450 ppm of the constant, even though most of them have estimated uncertainties of only about 40 ppm. “Clearly, many of them or most of them are subject either to serious significant errors or grossly underestimated uncertainties,” Quinn says. Making matters even more complex is the fact that the new measurement is strikingly close to a calculation of big G made by Quinn and his colleagues more than 10 years ago, published in 2001, that used similar methods but a completely separate laboratory setup.
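The quoted 241 ppm discrepancy follows directly from the two values:

```python
# Reproducing the quoted discrepancy between the new result and CODATA 2010:
g_quinn  = 6.67545e-11  # Quinn et al., 2013 (m^3 kg^-1 s^-2)
g_codata = 6.67384e-11  # CODATA 2010 recommended value

ppm = (g_quinn - g_codata) / g_codata * 1e6
print(round(ppm))  # 241 ppm, as stated in the article
```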

Most scientists think all these discrepancies reflect human sources of error, rather than a true inconstancy of big G. We know the strength of gravity hasn’t been fluctuating over the past 200 years, for example, because if so, the orbits of the planets around the sun would have changed, Quinn says. Still, it’s possible that the incompatible measurements are pointing to unknown subtleties of gravity—perhaps its strength varies depending on how it’s measured or where on Earth the measurements are being made?

“Either something is wrong with the experiments, or there is a flaw in our understanding of gravity,” says Mark Kasevich, a Stanford University physicist who conducted an unrelated measurement of big G in 2007 using atom interferometry. “Further work is required to clarify the situation.”

If the true value of big G turns out to be closer to the Quinn team’s measurement than the CODATA value, then calculations that depend on G will have to be revised. For example, the estimated masses of the solar system’s planets, including Earth, would change slightly. Such a revision, however, wouldn’t alter any fundamental laws of physics, and would have very little practical effect on anyone’s life, Quinn says. But getting to the bottom of the issue is more a matter of principle to the scientists. “It’s not a thing one likes to leave unresolved,” he adds. “We should be able to measure gravity.”
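The reason planetary masses would shift is that orbital tracking pins down the product GM very precisely, so the inferred mass M = GM/G moves fractionally in the opposite direction to any revision of G. A sketch using Earth's well-determined GM (the geocentric gravitational parameter, a standard value not quoted in the article):

```python
# The geocentric gravitational parameter GM is measured very precisely
# from satellite tracking; Earth's mass is then inferred as M = GM / G.
GM_earth = 3.986004418e14   # m^3 s^-2, well determined from orbits

g_codata = 6.67384e-11      # current standard value of G
g_quinn  = 6.67545e-11      # Quinn team's 2013 value

m_old = GM_earth / g_codata
m_new = GM_earth / g_quinn
print(m_old)                           # ~5.97e24 kg
print((m_new - m_old) / m_old * 1e6)   # ~ -241 ppm shift in Earth's mass
```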

Quinn and his team from the BIPM and the University of Birmingham in England published their results Sept. 5 in Physical Review Letters.