Zevatron Gun
Andre Willers
27 Nov 2013
Synopsis:
An ordinary firearm can fire zevatron beams by using a Zevatron bullet.
Discussion:
1. First, see “Desktop Zevatron” in Appendix I.
2. The Zevatron Cartridge:
2.1 Propellant at base: standard.
2.2 The next layer is fine conductive coils: wound or 3D printed.
2.3 The last layer is buckyballs arranged in magnetron configurations, 3D or 4D printed.
3. How it works:
3.1 The gun barrel must be magnetized. Stroking it with a permanent magnet will do in a pinch.
3.2 On firing, the bullet accelerates both linearly and angularly down the barrel.
3.3 This generates an EMP pulse from the coils interacting with the barrel’s magnetic field.
3.4 The EMP pulse propagates faster than the bullet and induces chirping effects on the magnetron-like buckyballs.
3.5 When they implode, Zevatron-level energies are released.
3.6 By aligning (3D printing) the buckyballs correctly, the zevatron beam can be concentrated or even collimated.
3.7 The initial setup will take a lot of calculation and experimentation, but it only needs to be done once. After that, manufacture as usual, even at home, using 3D and 4D printers. (See http://andreswhy.blogspot.com/2013/11/nd-printing.html )
4. The energy calculations are simple: take a muzzle energy (e.g. about 500 joules for a .45 handgun) and work backwards to the required values for the various cartridge layers, as in the sketch below.
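A minimal back-of-envelope sketch of that working-backwards step, using standard mechanics only. The bullet mass and barrel length are assumed typical .45 figures, not values from this post:

```python
# Start from muzzle energy and recover velocity and barrel transit
# time, which bounds the window available for the coil EMP pulse.
import math

muzzle_energy_J = 500.0   # the post's example figure for a .45 handgun
bullet_mass_kg = 0.015    # assumed ~15 g (230-grain) bullet
barrel_length_m = 0.127   # assumed 5-inch barrel

# Muzzle velocity from E = (1/2) m v^2
v = math.sqrt(2 * muzzle_energy_J / bullet_mass_kg)

# Constant-acceleration estimate of barrel transit time: t = 2L / v
t = 2 * barrel_length_m / v

print(f"muzzle velocity ~ {v:.0f} m/s")           # ~258 m/s
print(f"barrel transit time ~ {t * 1e3:.1f} ms")  # ~1.0 ms pulse window
```

Any energy budget assigned to the coil and buckyball layers would have to fit inside that roughly one-millisecond window.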
5. What do you get?
A thin beam of particles at cosmic-ray energies. At these energies, they can be collimated.
A true long-range blaster, cutter, welder, anti-ballistic system, ultra-radar, etc.
6. Safety aspects:
6.1 There will probably be backscatter of radiation of various types, depending on the bullet.
6.2 Simulation effects:
If we are in a simulation, then use of Zevatrons will put stress on the simulation. This may already be happening.
See Appendix II: the value of the gravitational constant is changing, which is what one would expect if the “grid” were being made finer to take Zevatrons into account. (See Appendix I)
Not only will this interfere with quantum effects routinely used in computers, but the System Administrator (God, for all practical purposes) might decide to switch off this Locale as being too “expensive” or too much trouble.
7. Quantum pollution.
Zevatron observation is equivalent to pollution at the quantum level.
8. But the benefits are amazing.
8.1 A finer “subspace-grid” means stable trans-uranic elements, stable exotic matter, and workable quantum computers.
8.2 Singularity: the verdict is still out on whether it will snuff out humans or transcend them.
The ultimate Western.
A Universe in every holster.
Andre
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Appendix I
Desktop Zevatron
Andre Willers
22 Nov 2013
Synopsis:
We use the Schwinger limit to produce particles with energies greater than 10^20 eV.
Discussion:
1. If the thought experiment cannot be reproduced in “reality”, we are in a simulation. See Appendix B.
2. Thought experiment:
Consider buckyballs in an arrangement like a magnetron. Then chirp the frequency (i.e. increase it). The buckyball pockets will shrink and emit ever more energetic particles until they implode at Zevatron energies. (A scale check follows below.)
This can easily be done in a small university lab. Or inside your body.
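As a scale check on that claim (pure E = hf arithmetic; no claim here that a chirped buckyball arrangement actually reaches it):

```python
# What drive frequency would a single quantum of 10^20 eV correspond to?
PLANCK_EV_S = 4.135667696e-15   # Planck constant in eV*s
TARGET_EV = 1e20                # Zevatron-scale energy from the post
MAGNETRON_HZ = 2.45e9           # an ordinary microwave-oven magnetron

f_target = TARGET_EV / PLANCK_EV_S
print(f"frequency for a 10^20 eV quantum: {f_target:.2e} Hz")           # ~2.4e34 Hz
print(f"ratio to a 2.45 GHz magnetron: {f_target / MAGNETRON_HZ:.1e}")  # ~1e25
```

The chirp would have to sweep some 25 orders of magnitude in frequency, which gives a feel for the scale of the thought experiment.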
3. Makes a helluva weapon.
4. If energies go over 10^20 eV, then either
4.1 We are not in a simulation,
or
4.2 The laws of physics get rewritten on the fly,
or both.
4.3 There is a quantum superposition (most likely):
We are in 1/3 simulation, but 2/3 superposition.
5. Resonance energy spectra:
The Zevatron will then have distributions typical of 1/3, 2/3.
6. Beth levels.
Pauli exclusion principle: taken as a general definition of delineation (identity). The problem is that it is usually used in a binary sense, whereas trinary would be more correct.
Inverse Pauli principle: higher Beth levels distort the Pauli exclusion principle.
The observer has very marked effects on the observed process.
7. In a Zevatron, some observers would have a Talent for it, whereas others would squelch it.
Pauli was notorious for squelching experimental processes.
We want the opposite.
8. What does all this sthako mean?
It means that we are living in a simulation 2/3 of the time, and deterministically 1/3 of the time, insofar as time has any meaning.
9. The linkage is poetry, language, mathematics, music, physics, biology.
10. The nitty-gritty:
Very high energy particle physics incorporates the observer. If you want a Zevatron, or cold fusion, or even hot fusion, you need an Inverse Pauli Person in the loop.
11. Pollyanna Fusion.
Don’t knock it. At 10^20 eV it works.
12. Of course, it does not solve the Simulation problem. That is because you keep on thinking Y/N, whereas it is a little bit of this and a little bit of that.
13. Think of the universe as a congeries of information packets, each with a source and destination address, and some (just for the hell of it) with either or neither. Peregrinating through the Beth levels of meaning.
14. The Meaning of Life.
Beth(1) or Beth(2) levels: 1/3 basic physical ground states, 2/3 what you make of it.
Beth(3) and better: what you make of it.
15. Can you see why the Zevatron is such an interesting experiment?
God is always inside your decision loop.
An entity (whether an individual or an organization) that can process this cycle quickly, observing and reacting to unfolding events more rapidly than an opponent, can thereby “get inside” the opponent’s decision cycle and gain the advantage.
Well, God cheats, since He is outside time (higher Beth levels in our terminology).
16. With Zevatrons in play, God will have to jack things up a bit. And we are off to the races.
17. You can’t win, but it was a fun race.
18. Zero-point energy and Zevatrons.
Anything over the Schwinger limit generates zero-point energy. (See Appendix A)
This can be done intra-cellularly with 4D printers (see http://andreswhy.blogspot.com/2013/11/nd-printing.html ).
Never mind food. Energy can be obtained indefinitely by a simple injection of 4D-printed molecules.
19. 4D-printed wine.
The ultimate connoisseur’s delight. The wine adapts to the taster’s palate, taste receptors and immune system to tickle pleasure receptors.
20. 4D-printed food.
Food (and here I include medicines) reconfigures itself inside the gut and even inside the cells to give maximum benefit, on instructions from the Cloud.
Humans being humans, even now we can print 4D foods that will taste fantastic, but reassemble into non-fattening molecules when exposed to the digestive processes.
21. Ho-ho-ho! The petrol pill!
Long a BS story, this is now actually a theoretical possibility.
A 4D-printed molecule packing some serious energy can be designed to reassemble into a combustible hydrocarbon on exposure to water. The physics is very straightforward. This can actually be done. It will cost, but the military will love it.
22. Put a tiger in your tank! Circe bullets.
Bullets with a payload of 4D-printed DNA/RNA/epigenetics can convert an enemy into a tiger, a sloth or any other animal.
23. I prefer variable biltong. 4D-print biltong just as you like it: hard, salty crust with a meltingly soft interior.
Whatever you do, don’t lose the nipple.
It is sad to see grown humans in perennial search of a 4D nipple.
One of Strauss’s lesser-known works:
“The Tit-Tat Waltz”
Andre
Appendix A
However, two waves or two photons not traveling
in the same direction always have a minimum combined energy in their center of
momentum frame, and it is this energy and the electric field strengths
associated with it, which determine particle-antiparticle creation, and
associated scattering phenomena.
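For reference, the Schwinger limit itself is the critical field strength at which the electromagnetic field becomes nonlinear and vacuum pair production sets in. This is a standard QED figure, added here for context rather than quoted from the post:

```latex
E_S = \frac{m_e^2 c^3}{e \hbar} \approx 1.32 \times 10^{18}\ \mathrm{V/m}
```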
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Appendix B
In the 1999 sci-fi film
classic The Matrix, the protagonist, Neo, is stunned to see people defying the
laws of physics, running up walls and vanishing suddenly. These superhuman
violations of the rules of the universe are possible because, unbeknownst to him,
Neo’s consciousness is embedded in the Matrix, a virtual-reality simulation
created by sentient machines.
The action really begins
when Neo is given a fateful choice: Take the blue pill and return to his
oblivious, virtual existence, or take the red pill to learn the truth about the
Matrix and find out “how deep the rabbit hole goes.”
Physicists can now offer
us the same choice, the ability to test whether we live in our own virtual
Matrix, by studying radiation from space. As fanciful as it sounds, some
philosophers have long argued that we’re actually more likely to be artificial
intelligences trapped in a fake universe than we are organic minds in the
“real” one.
But if that were true,
the very laws of physics that allow us to devise such reality-checking
technology may have little to do with the fundamental rules that govern the
meta-universe inhabited by our simulators. To us, these programmers would be
gods, able to twist reality on a whim.
So should we say yes to
the offer to take the red pill and learn the truth — or are the implications
too disturbing?
Worlds in Our Grasp
The first serious
attempt to find the truth about our universe came in 2001, when an effort to
calculate the resources needed for a universe-size simulation made the prospect
seem impossible.
Seth Lloyd, a
quantum-mechanical engineer at MIT, estimated the number of “computer
operations” our universe has performed since the Big Bang — basically, every
event that has ever happened. To repeat them, and generate a perfect facsimile
of reality down to the last atom, would take more energy than the universe has.
“The computer would have to be bigger than the
universe, and time would tick more slowly in the program than in reality,” says
Lloyd. “So why even bother building it?”
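A rough reconstruction of the kind of estimate Lloyd made, using the Margolus-Levitin bound on computation speed; the mass and age figures below are common order-of-magnitude values, my assumptions rather than Lloyd's exact inputs:

```python
# Margolus-Levitin: a system of energy E performs at most 2E/(pi*hbar)
# elementary operations per second.  Apply it to the observable universe.
import math

HBAR = 1.0546e-34   # J*s
C = 2.998e8         # m/s
MASS_KG = 1e53      # assumed mass of the observable universe (order of magnitude)
AGE_S = 4.35e17     # ~13.8 billion years in seconds

E = MASS_KG * C**2
ops_per_s = 2 * E / (math.pi * HBAR)
total_ops = ops_per_s * AGE_S
print(f"~10^{math.log10(total_ops):.0f} operations since the Big Bang")
# Lloyd's published estimate is ~10^120 operations on ~10^90 bits,
# which is the scale that makes a perfect simulation look impossible.
```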
But others soon realized
that making an imperfect copy of the universe that’s just good enough to fool
its inhabitants would take far less computational power. In such a makeshift
cosmos, the fine details of the microscopic world and the farthest stars might
only be filled in by the programmers on the rare occasions that people study
them with scientific equipment. As soon as no one was looking, they’d simply
vanish.
In theory, we’d never
detect these disappearing features, however, because each time the simulators
noticed we were observing them again, they’d sketch them back in.
That realization makes
creating virtual universes eerily possible, even for us. Today’s supercomputers
already crudely model the early universe, simulating how infant galaxies grew
and changed. Given the rapid technological advances we’ve witnessed over past
decades — your cell phone has more processing power than NASA’s computers had
during the moon landings — it’s not a huge leap to imagine that such simulations
will eventually encompass intelligent life.
“We may be able to fit humans into our simulation
boxes within a century,” says Silas Beane, a nuclear physicist at the
University of Washington in Seattle. Beane develops simulations that re-create how
elementary protons and neutrons joined together to form ever larger atoms in
our young universe.
Legislation and social
mores could soon be all that keeps us from creating a universe of artificial,
but still feeling, humans — but our tech-savvy descendants may find the power
to play God too tempting to resist.
[Image caption: If cosmic rays don't have random origins, it could be a sign that the universe is a simulation. Credit: National Science Foundation/J. Yang]
They could create a
plethora of pet universes, vastly outnumbering the real cosmos. This thought
led philosopher Nick Bostrom at the University of Oxford to conclude in 2003
that it makes more sense to bet that we’re delusional silicon-based artificial
intelligences in one of these many forgeries, rather than carbon-based
organisms in the genuine universe. Since there seemed no way to tell the
difference between the two possibilities, however, bookmakers did not have to
lose sleep working out the precise odds.
Learning the Truth
That changed in 2007
when John D. Barrow, professor of mathematical sciences at Cambridge
University, suggested that an imperfect simulation of reality would contain
detectable glitches. Just like your computer, the universe’s operating system
would need updates to keep working.
As the simulation
degrades, Barrow suggested, we might see aspects of nature that are supposed to
be static — such as the speed of light or the fine-structure constant that
describes the strength of the electromagnetic force — inexplicably drift from
their “constant” values.
Last year, Beane and
colleagues suggested a more concrete test of the simulation hypothesis. Most
physicists assume that space is smooth and extends out infinitely. But
physicists modeling the early universe cannot easily re-create a perfectly
smooth background to house their atoms, stars and galaxies. Instead, they build
up their simulated space from a lattice, or grid, just as television images are
made up from multiple pixels.
The team calculated that
the motion of particles within their simulation, and thus their energy, is
related to the distance between the points of the lattice: the smaller the grid
size, the higher the energy particles can have. That means that if our universe
is a simulation, we’ll observe a maximum energy amount for the fastest
particles. And as it happens, astronomers have noticed that cosmic rays,
high-speed particles that originate in far-flung galaxies, always arrive at
Earth with a specific maximum energy of about 10^20 electron volts.
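As a rough numerical check on that argument (an illustrative order-of-magnitude inversion, not Beane's actual lattice calculation):

```python
# If a lattice cutoff capped cosmic-ray energies at ~10^20 eV, roughly
# how fine would the grid have to be?  Use E_max ~ hbar*c / a.
HBAR_C_EV_M = 1.97327e-7   # hbar*c in eV*m (= 197.327 MeV*fm)
E_MAX_EV = 1e20            # observed cosmic-ray cutoff from the article

a = HBAR_C_EV_M / E_MAX_EV
print(f"implied lattice spacing ~ {a:.0e} m")   # ~2e-27 m
# For scale: a proton is ~1e-15 m across; the Planck length is ~1.6e-35 m.
```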
The simulation’s lattice
has another observable effect that astronomers could pick up. If space is
continuous, then there is no underlying grid that guides the direction of
cosmic rays — they should come in from every direction equally. If we live in a
simulation based on a lattice, however, the team has calculated that we
wouldn’t see this even distribution. If physicists do see an uneven
distribution, it would be a tough result to explain if the cosmos were real.
Astronomers need much
more cosmic ray data to answer this one way or another. For Beane, either
outcome would be fine. “Learning we live in a simulation would make no more
difference to my life than believing that the universe was seeded at the Big
Bang,” he says. But that’s because Beane imagines the simulators as driven
purely to understand the cosmos, with no desire to interfere with their
simulations.
Unfortunately, our
almighty simulators may instead have programmed us into a universe-size reality
show — and are capable of manipulating the rules of the game, purely for their
entertainment. In that case, maybe our best strategy is to lead lives that
amuse our audience, in the hope that our simulator-gods will resurrect us in
the afterlife of next-generation simulations.
The weird consequences
would not end there. Our simulators may be simulations themselves — just one
rabbit hole within a linked series, each with different fundamental physical
laws. “If we’re indeed a simulation, then that would be a logical possibility,
that what we’re measuring aren’t really the laws of nature, they’re some sort
of attempt at some sort of artificial law that the simulators have come up
with. That’s a depressing thought!” says Beane.
This cosmic ray test may
help reveal whether we are just lines of code in an artificial Matrix, where
the established rules of physics may be bent, or even broken. But if learning
that truth means accepting that you may never know for sure what’s real —
including yourself — would you want to know?
There is no turning
back, Neo: Do you take the blue pill, or the red pill?
The postulated (hypothetical) sources of EECR are known as Zevatrons, named in analogy to Lawrence Berkeley National Laboratory’s Bevatron and Fermilab’s Tevatron, capable of accelerating particles to 1 ZeV (10^21 eV).
Xxxxxxxxxxxxxxxxxxxxxxxxxx
Appendix II
Puzzling Measurement of “Big G” Gravitational Constant Ignites Debate
Despite dozens of measurements over
more than 200 years, we still don’t know how strong gravity is
By Clara Moskowitz
BIG "G": Researchers at
the International Bureau of Weights and Measures (BIPM) in Sévres, France used
a torsion balance apparatus (pictured) to calculate the gravitational constant,
"big G,"—a fundamental constant that has proven difficult to measure.
The latest calculation, the result of a 10-year experiment, just adds to the
confusion.
Gravity, one of the constants of
life, not to mention physics, is less than constant when it comes to being
measured. Various experiments over the years have come up with perplexingly
different values for the strength of the force of gravity, and the latest
calculation just adds to the confusion.
The results of a painstaking 10-year
experiment to calculate the value of “big G,” the universal gravitational
constant, were published this month—and they’re incompatible with the official
value of G, which itself comes from a weighted average of various other
measurements that are mostly mutually incompatible and diverge by more than 10
times their estimated uncertainties.
The gravitational constant “is one of these things we should know,” says Terry Quinn at the International Bureau of Weights and Measures (BIPM) in Sèvres, France, who led the team behind the latest calculation. “It’s embarrassing to have a fundamental constant that we cannot measure how strong it is.”
In fact, the discrepancy is such a
problem that Quinn is organizing a meeting in February at the Royal Society in
London to come up with a game plan for resolving the impasse. The meeting’s
title—“The Newtonian constant of gravitation, a constant too difficult to
measure?”—reveals the general consternation.
Although gravity seems like one of the most salient of nature’s forces in our daily lives, it’s actually by far the weakest, making attempts to calculate its strength an uphill battle. “Two one-kilogram masses that are one meter apart attract each other with a force equivalent to the weight of a few human cells,” says University of Washington physicist Jens Gundlach, who worked on a separate 2000 measurement of big G. “Measuring such small forces on kg-objects to 10^-4 or 10^-5 precision is just not easy. There are many effects that could overwhelm gravitational effects, and all of these have to be properly understood and taken into account.”
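Gundlach's illustration is easy to verify; the cell mass used below (~1 nanogram) is a typical textbook figure, my assumption rather than his:

```python
# Gravitational pull between two 1 kg masses 1 m apart, versus the
# weight of a human cell.
G = 6.674e-11        # m^3 kg^-1 s^-2
g = 9.81             # m/s^2
m1 = m2 = 1.0        # kg
r = 1.0              # m
CELL_MASS_KG = 1e-12 # ~1 ng per cell (assumed)

F = G * m1 * m2 / r**2
cell_weight = CELL_MASS_KG * g
print(f"gravitational force: {F:.3e} N")               # ~6.7e-11 N
print(f"equivalent to the weight of ~{F / cell_weight:.0f} cells")
```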
This inherent difficulty has caused
big G to become the only fundamental constant of physics for which the
uncertainty of the standard value has risen over time as more and more
measurements are made. “Though the measurements are very tough, because G is so
much weaker than other laboratory forces, we still, as a community, ought to do
better,” says University of Colorado at Boulder physicist James Faller, who
conducted a 2010 experiment to calculate big G using pendulums.
The first big G measurement was made
in 1798 by British physicist Henry Cavendish using an apparatus called a
torsion balance. In this setup, a bar with lead balls at either end was
suspended from its middle by a wire. When other lead balls were placed alongside
this bar, it rotated according to the strength of the gravitational attraction
between the balls, allowing Cavendish to measure the gravitational constant.
Quinn and his colleagues’ experiment
was essentially a rehash of Cavendish’s setup using more advanced methods, such
as replacing the wire with a wide, thin strip of copper beryllium, which
allowed their torsion balance to hold more weight. The team also took the
further step of adding a second, independent way of measuring the gravitational
attraction: In addition to observing how much the bar twisted, the researchers
also conducted experiments with electrodes placed inside the torsion balance
that prevented it from twisting. The strength of the voltage needed to prevent
the rotation was directly related to the pull of gravity. “A strong point of
Quinn’s experiment is the fact that they use two different methods to measure
G,” says Stephan Schlamminger of the U.S. National Institute of Standards and
Technology in Gaithersburg, Md., who led a separate attempt in 2006 to
calculate big G using a beam balance setup. “It is difficult to see how the two
methods can produce two numbers that are wrong, but yet agree with each other.”
Through these dual experiments, Quinn’s team arrived at a value of 6.67545 × 10^-11 m^3 kg^-1 s^-2. That’s 241 parts per million above the standard value of 6.67384(80) × 10^-11 m^3 kg^-1 s^-2, which was arrived at by a special task force of the International Council for Science’s Committee on Data for Science and Technology (CODATA) in 2010 by calculating a weighted average of all the various experimental values. These values differ from one another by as much as 450 ppm of the constant, even though most of them have estimated uncertainties of only about 40 ppm. “Clearly, many of them or most of them are subject either to serious significant errors or grossly underestimated uncertainties,” Quinn says. Making matters even more complex is the fact that the new measurement is strikingly close to a calculation of big G made by Quinn and his colleagues more than 10 years ago, published in 2001, that used similar methods but a completely separate laboratory setup.
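The article's "241 parts per million" figure follows directly from the two quoted values:

```python
# Verify the stated discrepancy between the Quinn et al. result and
# the CODATA 2010 standard value.
quinn = 6.67545e-11    # m^3 kg^-1 s^-2
codata = 6.67384e-11   # m^3 kg^-1 s^-2

ppm = (quinn - codata) / codata * 1e6
print(f"difference: {ppm:.0f} ppm")   # ~241 ppm
```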
Most scientists think all these
discrepancies reflect human sources of error, rather than a true inconstancy of
big G. We know the strength of gravity hasn’t been fluctuating over the past
200 years, for example, because if so, the orbits of the planets around the sun
would have changed, Quinn says. Still, it’s possible that the incompatible
measurements are pointing to unknown subtleties of gravity—perhaps its strength
varies depending on how it’s measured or where on Earth the measurements are
being made?
“Either something is wrong with the
experiments, or there is a flaw in our understanding of gravity,” says Mark
Kasevich, a Stanford University physicist who conducted an unrelated
measurement of big G in 2007 using atom interferometry. “Further work is
required to clarify the situation.”
If the true value of big G turns out
to be closer to the Quinn team’s measurement than the CODATA value, then
calculations that depend on G will have to be revised. For example, the
estimated masses of the solar system’s planets, including Earth, would change
slightly. Such a revision, however, wouldn’t alter any fundamental laws of
physics, and would have very little practical effect on anyone’s life, Quinn
says. But getting to the bottom of the issue is more a matter of principle to
the scientists. “It’s not a thing one likes to leave unresolved,” he adds. “We
should be able to measure gravity.”
Quinn and his team from the BIPM and
the University of Birmingham in England published their results Sept. 5 in
Physical Review Letters.
xxxxxxxxxxxxxxxxxxxxxxxxx